Thank you for this information. I did not know this. But my view (and I may be wrong) is that AGI is too resource-intensive to be within the reach of normal computing of the ordinary user for at least 2 decades.
> is that AGI is too resource-intensive to be within the reach of normal computing of the ordinary user for at least 2 decades.
Hardware density is still improving exponentially, albeit a bit more slowly than before. What you're not considering is that algorithmic improvements in machine learning are outpacing hardware improvements.
For instance, Nvidia recently showed how to switch from 32-bit floats to 16-bit floats with no perceptible loss in effectiveness, and they're working on 8-bit floats next. Halving the precision doubles the number of parameters you can fit in the same memory, in a single step. Other improvements are refinements to the language models themselves that reduce overfitting and boost effectiveness with fewer parameters.
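To make the memory arithmetic concrete, here's a minimal sketch in Python/NumPy (a toy illustration with a made-up parameter count, not Nvidia's actual mixed-precision implementation) of how down-casting weights from 32-bit to 16-bit floats halves their footprint:

```python
import numpy as np

# Toy illustration: halving the precision of each weight halves its memory
# footprint, so the same memory budget holds roughly twice as many parameters.
n_params = 10_000_000  # made-up model size

fp32_weights = np.zeros(n_params, dtype=np.float32)
fp16_weights = fp32_weights.astype(np.float16)  # naive down-cast

print(f"fp32: {fp32_weights.nbytes / 2**20:.1f} MiB")  # ~38.1 MiB
print(f"fp16: {fp16_weights.nbytes / 2**20:.1f} MiB")  # ~19.1 MiB
```

In practice, mixed-precision training typically keeps some values (e.g., master copies of the weights) in fp32 for numerical stability, so the real savings depend on the setup.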
Arguably a machine learning model will achieve parity with the human brain's neuron count, in terms of number of parameters, within the next decade. What that actually means is unclear.
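For a rough sense of scale (coarse textbook estimates for the brain, a hypothetical model size, and no claim that a parameter is equivalent to a neuron or a synapse):

```python
# Back-of-the-envelope comparison; all figures are rough.
human_neurons  = 86e9    # ~86 billion neurons in the human brain
human_synapses = 1e14    # on the order of 100 trillion synapses
model_params   = 100e9   # hypothetical 100-billion-parameter model

print(f"params per neuron:  {model_params / human_neurons:.2f}")   # ~1.16
print(f"params per synapse: {model_params / human_synapses:.4f}")  # ~0.0010
```

So "parity" depends heavily on whether you count neurons or synapses, which is part of why it's unclear what the comparison actually tells you.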
You’re entitled to your own opinion, of course, but why do you hold this view?
And why is “the reach of normal computing of the ordinary user” a relevant bar? Google search, as one example, requires computation far beyond the reach of the ordinary user, yet it has still had a tremendous impact.