Only if the words were chosen simply at random in sequence, and of course it's not that simplistic. The choices are constrained by the attention model, so the output is much better than that, but it's still random. You can control the degree of randomness with the temperature knob.
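To make the "temperature knob" concrete, here's a minimal sketch of temperature sampling over next-token logits. This is an illustration of the general technique, not any particular model's implementation; the logit values are made up.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    # Dividing logits by the temperature sharpens the distribution
    # when T is low (near-greedy) and flattens it when T is high
    # (more random).
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample a token index from the resulting distribution.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical next-token logits for a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]

# At very low temperature the top-scoring token wins essentially
# every time; at high temperature the choice spreads out.
low_t_picks = [sample_with_temperature(logits, temperature=0.01)
               for _ in range(100)]
```

Even at temperature 1.0 the process remains stochastic, which is the point being made: the attention model shapes the distribution, but the draw from it is random.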
That part about being "constrained by the attention model" is doing a lot of implicit work here to dodge the question of why GPT-4 can verifiably reason about things in text.
It is also demonstrably either flat-out wrong about a lot of things or completely invents things that don't exist. It's a random process that sometimes generates content with actual informational value, but the randomness is inherent in the algorithm.