This part about being "constrained by the attention model" is doing a lot of implicit work here to dodge the question of why GPT-4 can verifiably reason about things in text.
It is also demonstrably either flat-out wrong about a lot of things or invents things that don't exist at all. It's a random process that sometimes generates content with actual informational value, but the randomness is inherent in the algorithm.
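To make the "randomness is inherent" point concrete: at each step, autoregressive decoding typically samples the next token from a probability distribution over the vocabulary (softmax over the model's logits, usually scaled by a temperature). A minimal sketch of that sampling step, using made-up logits rather than a real model:

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Sample one token index from temperature-scaled logits.

    Softmax turns logits into probabilities; the weighted random
    draw is the stochastic step at the heart of decoding.
    """
    rng = random.Random(seed)
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for a 3-token vocabulary: the same input
# can yield different tokens on different runs (different seeds).
logits = [2.0, 1.5, 0.3]
draws = {sample_token(logits, temperature=1.0, seed=s) for s in range(50)}
```

With greedy decoding (temperature near zero) the draw collapses to the argmax, but at typical sampling temperatures the same prompt can produce different continuations on each run, which is the behavior being described above.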