Hacker News

Right - because Large _Language_ Models are text generators, not calculators.


I think it's a valid complaint. A human could say "I don't know, use a calculator".
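The "use a calculator" response can be sketched in code: rather than letting a text generator guess at arithmetic, route anything that parses as a plain expression to a deterministic evaluator, and fall back to an honest "I don't know" otherwise. This is a hypothetical illustration of the idea, not how any particular LLM is wired.

```python
import ast
import operator

# Hypothetical sketch: defer arithmetic to a real evaluator (the
# "calculator") instead of generating a plausible-sounding number.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calculate(expr: str):
    """Safely evaluate a basic arithmetic expression via the AST."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    """Answer arithmetic exactly; admit ignorance for everything else."""
    try:
        return str(calculate(prompt))
    except (ValueError, SyntaxError):
        return "I don't know, use a calculator."
```

The point is the control flow, not the parser: the model-like `answer` function never emits a made-up number for input it cannot actually compute.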


My children can answer it in a number of ways. The older one, depending on their mood, can stop and calculate, say "I don't care", ignore the question outright, or answer with a joke ("a bazillion").

The younger one cannot calculate yet, and will cheerfully answer with a random number, or a string of numbers: 15! 20! 45! 18!

These LLMs fit into "human-like" behavior just fine. They simply don't always behave like the smartest, most self-aware person on the planet.



