Hacker News

I don't understand how ChatGPT can execute code. I understand how it can generate (good or bad) code, but how can it execute code correctly? Only an interpreter can do that. Is ChatGPT "inferring" the output somehow? Does that mean that ChatGPT could substitute for the Python interpreter entirely?


No, it’s guessing what the code is supposed to do, and guessing at what a plausible output might be.

Like the prime number finder - it’s not calculating primes, it is recognizing the ‘shape’ of a prime-number-finding algorithm, and it knows what that looks like as an output.
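To make the point concrete, here is a hypothetical example of the kind of snippet being discussed - a naive trial-division prime finder. The claim above is that the model recognizes this pattern and produces a plausible-looking list of primes, rather than stepping through the loop:

```python
def primes_up_to(n):
    """Return all primes <= n by trial division against earlier primes."""
    primes = []
    for candidate in range(2, n + 1):
        # candidate is prime if no previously found prime divides it
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes

print(primes_up_to(30))
```

For small, well-known outputs like this, a memorized answer and a computed answer are indistinguishable - which is exactly why the distinction is hard to test with textbook examples.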


It does it the same way you do it in your head when looking at the code.


Does it really execute it, or does it just pick a plausible answer from the internet? Ask it to multiply two 3-digit numbers.
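The multiplication test works because the ground truth is a one-liner to verify. The operands below are arbitrary picks for illustration; any digit-level divergence in the model's answer reveals pattern-matching rather than computation:

```python
# Arbitrary 3-digit operands; the true product is the reference answer
# against which the model's claimed "execution" can be checked.
a, b = 487, 693
print(a * b)
```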


> Does it really execute it, or just picks a plausible answer from the internet?

Well, if I ask it: “Write a Python program that reads a space-separated list of integers from stdin and returns the sum of their squares.” It does so (with explanations, sample input, and sample output). Now, that could be gotten from the internet, maybe.

If I then ask it to tell me the output of the program with a particular input string, it does so, with explanations of the calculation steps in the program to get to the result. It seems…improbable…that it is fetching those responses from somewhere on the internet or its training set.
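For reference, the program described above is only a few lines. This is a minimal sketch of one plausible implementation, not necessarily what ChatGPT produced:

```python
import sys

def sum_of_squares(line):
    """Sum the squares of the space-separated integers in line."""
    return sum(int(x) ** 2 for x in line.split())

if __name__ == "__main__":
    print(sum_of_squares(sys.stdin.read()))
```

The interesting part is that the arithmetic for an arbitrary input string (e.g. "3 4 5" giving 9 + 16 + 25 = 50) is unlikely to appear verbatim anywhere in the training data.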



Someone ran the code previously in another context, and the results were incorporated into the dataset used to train GPT. There is no code actually running anywhere in GPT. It only displays the text. There is no true virtual machine being created anywhere.



