That’s a great take! As you rightly pointed out, it wasn’t my intention to imply that the “average Joe” is inferior.
8 GB of RAM might be sufficient to run certain algorithms, but I wouldn't expect it to be enough for newer models, especially given the rapid pace at which AI advances.
I might add that Apple Silicon's NPU has limitations of its own. A few days ago, I came across a project called Anemll that claims to run an entire small LLM on the NPU.
Running on the NPU significantly reduced the SoC's power draw, but performance was constrained by two factors:
1. The model requires over 8 GB of RAM to run well, and a more powerful NPU to handle larger models.
2. Apple Silicon is held back by its comparatively weak NPU.
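To see why 8 GB of unified memory is tight, here's a rough back-of-envelope sketch of weight memory alone (parameter count × bytes per parameter). This ignores the KV cache, activations, and OS overhead, so real requirements are higher; the 7B figure is just an illustrative model size, not a claim about any specific model.

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight footprint in GB (1 GB = 2**30 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

# A 7B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"7B @ {bits}-bit: {weight_memory_gb(7, bits):.1f} GB")
# 16-bit (~13 GB) won't fit in 8 GB; only an aggressively
# quantized 4-bit variant (~3.3 GB) leaves headroom for the
# KV cache and the rest of the system.
```

The gap between 16-bit and 4-bit footprints is exactly why on-device projects lean so heavily on quantization.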
My guess is that Apple will need to upgrade the NPU substantially before the neural cores can host a semi-large LLM on their own, letting apps run AI without falling back to the GPU.
When will that happen? I’m not sure. I suppose the M5 Pro might provide some clues about Apple’s future plans.