
> Also disappointing that they didn't follow up the incorrect responses with corrections. Like if you told GPT, "sorry, your answer is wrong because the stock market is closed on Saturday", it would come up with a new answer that takes that into account.

If you have to keep correcting the tool yourself, you won’t arrive at the truth but at the limits of your own knowledge. You’ll have no basis to know which answer is the one you can finally trust.

That mode of operation reminds me of the Gell-Mann amnesia effect.

https://www.johndcook.com/blog/2021/01/18/gell-mann-amnesia/



When you make a request of a person, you go back and forth with corrections and clarifications. I think it is going to take time for people to realize you need to do the same with LLM chatbots.
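As a rough sketch of what that back-and-forth looks like in code (not from either comment; the model name, prompts, and choice of the OpenAI Python SDK are my own assumptions), the correction just goes back in as another turn, with the full history kept so the model can revise its earlier answer:

    # Sketch of a correct-and-retry loop with an LLM chat API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [
        {"role": "user", "content": "Can I trade this stock tomorrow (Saturday)?"},
    ]

    # First answer from the model.
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    print("First answer:", answer)

    # Feed the correction back as another user turn, keeping the history.
    messages.append({"role": "assistant", "content": answer})
    messages.append({
        "role": "user",
        "content": "Sorry, your answer is wrong because the stock market is closed on Saturday.",
    })

    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print("Revised answer:", reply.choices[0].message.content)

Of course, this only works if you already know the answer is wrong, which is the parent comment's point.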



