
It's not uniquely an AI problem (though the persistence of the errors probably is). But it is surprising that a computer program is not better at this, because we expect computer programs to be good at following direct, explicit directions. I assume it fails here because it is non-deterministic, and there is no deterministic override available?
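For what it's worth, the closest thing to a "deterministic override" the API exposes is turning the sampling randomness down, not forcing obedience. A minimal sketch, assuming the openai Python package and an API key in the environment (the model name is a placeholder): temperature=0 plus a fixed seed makes outputs more repeatable, but the model can still ignore the instruction.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # temperature=0 and a fixed seed reduce sampling randomness as far as
    # the API allows; they do not guarantee the instruction is followed.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user", "content": "Reply with exactly the word: yes"},
        ],
        temperature=0,
        seed=1234,
    )
    print(resp.choices[0].message.content)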


One of the issues here is that you, as the user, are not privy to all the instructions ChatGPT has been given. Before the chat begins, the bot is almost certainly given hidden instructions to answer politely. It's not that the bot is refusing to follow instructions; it's that, given two contradictory commands, it chooses to follow one over the other.
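To illustrate how those hidden instructions are layered, here's a minimal sketch assuming the openai Python package (the system prompt text and model name are invented for the example): the operator prepends a "system" message before the user's message, so the model sees both and has to arbitrate when they conflict.

    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # Hidden operator instruction the end user never sees.
            {"role": "system",
             "content": "Always answer politely and never be curt."},
            # The user's conflicting instruction.
            {"role": "user",
             "content": "Answer with a single word and nothing else."},
        ],
    )
    print(resp.choices[0].message.content)

Which instruction wins isn't specified anywhere the user can see; in practice models tend to weight the system message heavily, which is why the polite framing often survives even an explicit request to drop it.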



