> maybe a computer will never, internally, know that it has developed a theory

Happens to people all the time :) ... especially if they don't have a concept of theories and hypotheses.

People are dumb and uneducated only until they aren't anymore, which, even in the worst cases, takes no more than a decade of sustained effort. In fact, we don't even know how quickly neurogenesis and/or cognitive abilities might accelerate once a previously dense person reaches, or "breaks through", a certain plateau. I'm sure there is research, but it's not something we can give a satisfyingly precise answer to yet.

If I formulate a new hypothesis, the LLM can tell me, "nope, you are the only idiot who believes this path is worth pursuing". And if I go ahead anyway, the LLM can tell me: "that's not how this usually works, you know", "professionals do it this way", "this is not a proof", "this is not a logical link", "this is nonsense, but I commend your creativity!", all the way up to the actual aha-moment when everything fits together and we have an actual working theory ... in theory.

We can then analyze the "knowledge graph" in 4D, and the LLM could learn a theory of what it's like to have a potential theory, even though absolutely nothing supports the hypothesis or its constituent links at the moment of "conception".

Stay tuned, it will happen.


> A model that can reason should be able to understand the documentation and create novel examples. It cannot.

That's due to limitations imposed for "security". "Here's a new X, do Y with it" can result in holes bigger and more complex than anyone can currently handle "in time".

It's not about "abilities" with LLMs for now, but about functions that work within the range of edge cases, sometimes including them, sometimes not.

You could still guide it to fulfill the task, though. It just cannot be allowed to do it on its own. But since merely "forbidding" an LLM to do something is about as effective as doing that to a child with mischievous older brothers, the only ways to actually enforce it result in "bullshitted" code and "hallucinations".

If I understood the problem correctly, that is.

