
The discussion so far:

Me: The VM prompts give a defeasible reason to believe it has a world model

You: No it doesn't

Me: Why do you think so?

You: It fails badly in these other areas

Me: Failure in unrelated areas doesn't demonstrate a lack of a world model in the examples shown

You: The burden of proof is on you!

It seems that people are just universally terrible at constructing rational arguments. What can you do.



You just need better training data. Your internal language model can't follow a non-linear argument or understand basic logic.



