Mostly how to incorrectly spell bananana and do some bad logic.
When you realize LLMs are very broad statistical models with almost no sense at all, they become easy to manipulate with wrong information.
The annoying thing is going to be LLMs teaching people things that get published and fed back into the next round of LLM training. That will become pervasive to the point that verifiable information is much harder to come by and highly prized. It will drive even further nostalgia for, or just real valuation of, analog methods and artifacts, and glitch/lofi/noise, which are the kinds of aberration analog systems produce, especially the ones ML has difficulty emulating.