It's doing the pure, "try to guess the most likely next token" task on which they were both trained (https://heartbeat.comet.ml/causal-language-modeling-with-gpt...).
ChatGPT is further trained with reinforcement learning from human feedback to make it more tool-like (https://arxiv.org/abs/2204.05862 & https://openai.com/blog/chatgpt & https://arxiv.org/abs/2203.02155),
with a bit of randomness added for variety's sake (https://huggingface.co/blog/how-to-generate).
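That "bit of randomness" is usually temperature sampling: instead of always picking the single most likely next token, the model's scores are softened into a probability distribution and a token is drawn from it. A minimal sketch (toy logits and the function name are my own, not anything from OpenAI's actual code):

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Draw a token index from logits after temperature scaling.

    Lower temperature sharpens the distribution toward the single most
    likely token; higher temperature flattens it, adding variety.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # draw one index according to the softmax probabilities
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# toy "vocabulary" of 4 tokens with made-up scores
logits = [2.0, 1.0, 0.5, -1.0]
token = sample_next_token(logits, temperature=0.7)
```

At temperature near zero this collapses to greedy "most likely next token" decoding; at higher temperatures the less likely tokens get a real chance, which is what gives ChatGPT different answers to the same prompt.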