I'm reminded of AI hypesters complaining that people are constantly moving the goalposts of AI. It's a similar effect and I think both have a similar reason.
When people think of AI, they think of robots that can think like us: solve arbitrary problems, plan for the future, reason logically, and do it all autonomously.
That's always been true. So the goalposts haven't really moved; instead it's a continuous cycle of hype, understanding, disappointment, and acceptance. Every time a computer exhibits a new human-like capability, like recognizing faces, we wonder if this is what the start of AGI looks like, and unfortunately that hasn't been the case so far.
I think you're spot on with this. It's the enthusiasts who are constantly trying to move the goalposts toward themselves, and then the general public puts them back where they belong once it catches on.
AGI is what people think of when they hear AI. AI is a bastardized term that people use to justify, hype, or sell their research, business, or products.
The reason "AI" stops being AI once it becomes mainstream is that people realize it isn't AI once they see the limitations of whatever the latest iteration is.
I'm not concerned with who used what and when. I'm talking about what people expect of AI. When you tell people that you're trying to create digital intelligence, they'll inevitably compare it to people. That's the expectation.