
You can also be interrogating a human and in the course of your conversation stumble across something it isn’t good at.


Sure, but very likely they'll be able to explain their gap in knowledge to you in a satisfactory way, or at least in a way that makes you think they're human.


Counterpoint: people were accusing each other of being bots simply for disagreeing with each other even back when Twitter was still called that. "Mr Firstname Bunchanumbers" etc.

(And we've been bemoaning "the lack of common sense these days" for at least as long as I've been an adult, and racists and sexists have been denying the intelligence of the outgroup as far back as writing can show us).


IMO this is a solvable problem though. Eventually LLMs will have more awareness of their own confidence and will be able to convincingly say “huh, I’m honestly not sure about that, can you explain a bit more about what you mean?” Or even “I’ve heard of X before but not in this context; can you please clarify what you mean here?”


See, humans respond very differently when that happens. Failing to react the way humans do when they don't understand or don't know something is frequently what trips LLMs up at the TT.



