LLM transformers only know when to stop generating in a few cases. The model itself might decide it's done and emit a special end-of-sequence token, but that only happens when everything goes well. A lot of the time they'll just keep going, sometimes wandering onto other related or unrelated subjects in the same style. To prevent this, the model is usually fine-tuned on a structured input format with markers like "###" or multiple newlines that you can match in your inference software and treat as stop sequences.
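For illustration, here's a minimal sketch of matching those markers as stop sequences, assuming the Hugging Face transformers library; the model name, prompt format, and marker list are placeholders for whatever your fine-tune actually uses.

    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              StoppingCriteria, StoppingCriteriaList)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    class StopOnStrings(StoppingCriteria):
        """Halt generation once any marker string appears in the tail."""
        def __init__(self, tokenizer, markers):
            self.tokenizer = tokenizer
            self.markers = markers

        def __call__(self, input_ids, scores, **kwargs):
            # Decode only the last few tokens and scan for a marker.
            tail = self.tokenizer.decode(input_ids[0, -8:])
            return any(m in tail for m in self.markers)

    prompt = "### Question: Why is the sky blue?\n### Answer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=128,
        stopping_criteria=StoppingCriteriaList(
            [StopOnStrings(tokenizer, ["###", "\n\n\n"])]
        ),
    )
    print(tokenizer.decode(out[0]))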
Running on and rambling off into unrelated topics is a hard problem for LLMs. This pre-prompt seems aimed at stopping the model from taking over and generating questions as the user (since it has just processed the user's text as an example).
An advantage of the structured format is that you can intercept the output and stop showing it to the user if the LLM starts generating a longer conversation. I've had models spin up a back-and-forth between the user and the AI agent on their own, but at least that's easy to hide if it follows the structured format.
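As a toy sketch of that interception: post-process (or stream-scan) the text and cut the visible reply as soon as the model starts a new turn. The marker strings here are assumptions; real chat templates vary by model.

    # Hide everything from the first role marker onward.
    ROLE_MARKERS = ("\nUser:", "\n### Instruction:")

    def visible_reply(generated: str) -> str:
        cut = len(generated)
        for marker in ROLE_MARKERS:
            idx = generated.find(marker)
            if idx != -1:
                cut = min(cut, idx)
        return generated[:cut].rstrip()

    print(visible_reply("Sure, here's the answer.\nUser: And another thing?"))
    # -> "Sure, here's the answer."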
It's harder than I expected to actually get an instruction-tuned model to ask follow-up questions, so it's interesting that it has to be explicitly told not to. You may be right that they're going for some side effect.
I've had Teams meetings like this too! Really! Well, they weren't on Teams but they were on Zoom, so the same sort of thing. It's funny to talk about Zoom because that's also the name of a cut-rate airline that went bankrupt in 2008. I think a lot of things went bankrupt in 2008 because that's when the housing crisis happened and banks were "too big to fail". It's a good thing my housing and bank haven't failed, I like having money and a place to live. It lets me hang out on Hacker News in my free time and make comments. My comments always make sense like your Teams meetings. Back to you, wombat.
Perhaps LLMs can be employed to write stream-of-consciousness lyrics like U2's "Bullet the Blue Sky". And then everyone can debate what "consciousness" means.
I've never been asked a follow-up question by ChatGPT either. I used to think of this as a limitation, but apparently it's desired? Why?