In broad strokes, I see roughly two ways things could go:
1) Current AI tech is already nearing the top of the S-curve. In this case it will do nothing to help humans in the "real world"; it will just replace much of the human labor currently used to create/manipulate bits.
2) Current AI tech is near the bottom of the S-curve. It continues to ratchet up and its capabilities become super-human, as you outline. In which case, how long until the AI capable of creating self-replicating machines realizes it doesn't need to listen to humans anymore, or even keep them around?
> In which case, how long until the AI capable of creating self-replicating machines realizes it doesn't need to listen to humans anymore, or even keep them around?
Not independently, but if wrapped in a loop, given memory, given internet access, and given directives as intrinsic motivations, it could, in theory, come to conclusions and take actions to acquire resources aligned with those motivations. If that outer loop has no rules (or no rules that are both effective and immutable), it could become very powerful and potentially misaligned with our interests.
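To make the idea concrete, here is a minimal toy sketch of such an outer loop. The model itself is stateless, but the loop feeds each observation back in together with accumulated memory, so later "decisions" can build on earlier ones. The `decide()` function here is a hypothetical stand-in for a real model call, not any actual API:

```python
# Toy outer loop: observe -> decide (with memory) -> act -> remember.
# decide() is a hypothetical placeholder for an LLM call; a real system
# would prompt a model with the directive, memory, and observation.

def decide(observation: str, memory: list[str], directive: str) -> str:
    # Placeholder policy: just label the step and echo the inputs.
    return f"step-{len(memory)}: act on '{observation}' toward '{directive}'"

def agent_loop(observations: list[str], directive: str) -> list[str]:
    memory: list[str] = []   # persistent state carried across iterations
    actions: list[str] = []
    for obs in observations:                          # observe
        action = decide(obs, memory, directive)       # decide, using memory
        actions.append(action)                        # act (here: record)
        memory.append(f"saw {obs}, did {action}")     # remember the outcome
    return actions

actions = agent_loop(["email", "search results"], "acquire resources")
```

The point is that conclusions emerge from the loop plus memory, not from the model alone: each pass conditions on everything remembered so far, which is where open-ended behavior could come from if the loop has no effective rules.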
How would such a loop enable it to come to conclusions? I'm genuinely curious.
Does what you're saying have something to do with reinforcement learning?
For at least one general intelligence, the human brain, that is in the wrong order. Act first, decide later. Unless by "decide" you mean act and then make up a narrative, using linguistic skill, to explain the decision. Even observation can directly lead to action on certain hot topics for the person.
All we know for sure is that sensory data is generated, the brain does what it does, and then we have acted. We can’t break that down too well once it leaves the visual areas, but there is clear data that the linguistic form of decisions and so on lags behind the neurological signs of the action.
And humans have a well known tendency to make a decision on a linguistic level that they then fail to carry out in the realm of actions.