Industrial machines don't fail like humans, yet they replaced human workers. Cars don't fail like horses, yet they replaced them. ATMs don't fail like bank tellers... Why is this such a big requirement?
The thread we're in was arguing that the requirement for AGI is to fail in exactly the same way humans do. By pointing to these examples, I was showing that failing in exactly the same way is not a requirement for a new technology to replace people or an older technology. You're reading too much into what I said and putting words in my mouth.
What makes it tick is probably a more interesting question to me than to the AI skeptics. But they can't stop declaring some special quality (consciousness, awareness, qualia, reasoning, intelligence) that AI, by their definition, cannot ever have, and that this quality is immeasurable, unquantifiable, undefinable... That is literally a thought-stopping semantic dead end that I feel the need to argue against.
Finally, it doesn't make money the same way Amazon and Uber didn't make money for a long time: by bringing in lots of revenue, reinvesting it, and not caring about profit margins while in a growth stage. Will we seriously go through this for every startup? The industry is already making at least $10-20B a year, and that will keep growing.
AGI does not currently exist. We're trying to work out what we want from it. Think of a perfect microwave oven: if a company says they're going to make one, I want crusty dough and deliciously gratinated cheese on my focaccia-inspired meals.
What exists is LLMs, transformers, etc. Those are the actual microwave oven, the one that produces rubbery cheese and cardboard dough.
It seems that you are willing to cut the terrible microwave pizza some slack. I am not.
You complained about immeasurable qualities, like qualia. However, I gave you a very simple, measurable quality: failing like a decent human would instead of producing gibberish hallucinations. I also explained in other comments on this thread why that measurable quality is important: it plays into existing expectations, just like existing expectations about a good pizza.
While I do care about those more intangible characteristics (consciousness, reasoning, etc.), I decided to concede and exclude them from this conversation from the get-go. It was you who brought them back in, from who-knows-where.
Anyway. It seems that I've addressed your points fairly. You had to reach for other skeptic-related narratives to keep the conversation going, and by that point you had missed what I was trying to say.
Failing like a human would is not a cute add-on. It's a fundamental requirement for creating AIs that can replace humans.