
The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick. If you don't account for either of those resources running out, you're not testing whether its feats are truly repeatable.


>The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick.

Okay? And you, presumably a human, can replicate the trick of fooling me into thinking you're conscious as long as there is a sufficient supply of food to keep you running. So what's your point? With each comment, you make less sense. Sorry to tell you, but there is no trick.


The difference is that a human can find their own food, and did so for literally ages. That's already a very, very important difference. And while we cannot really define what's conscious, it's a bit easier (still with some edge cases) to define what is alive. And probably what is alive has some degree of consciousness. An LLM definitely does not.


One of the "barriers" to me is that (AFAIK) an LLM/agent/whatever doesn't operate without you hitting the equivalent of an on switch.

It does not think idle thoughts while it's not being asked questions. It's not ruminating over its past responses after having replied. It's just off until the next prompt.

Side note: whatever future we get where LLMs get their own food is probably not one I want a part of. I've seen the movies.


This barrier is trivial to solve even today. It is not hard to put an LLM on an infinite loop of self-prompting.
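Something like this, as a rough sketch (Python, with generate() standing in for whatever chat-completion API you use; it's an assumed helper, not any particular library's call):

    # Minimal self-prompting loop: the model's own output is fed back in
    # as the next prompt, so it keeps "thinking" without anyone hitting
    # the on switch again.

    def generate(messages: list[dict]) -> str:
        """Placeholder: call your LLM provider here and return the reply text."""
        raise NotImplementedError

    def self_prompting_loop(seed: str, max_turns: int | None = None) -> None:
        messages = [
            {"role": "system", "content": "Continue your own train of thought."},
            {"role": "user", "content": seed},
        ]
        turn = 0
        while max_turns is None or turn < max_turns:
            reply = generate(messages)
            print(f"[turn {turn}] {reply}")
            # The assistant's reply becomes the basis of the next prompt.
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": "Keep going."})
            turn += 1

    # self_prompting_loop("What would you think about if nobody asked you anything?")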


A self-prompting loop still seems artificial to me. It only exists because you force it to externally.


You only exist because you were forced to be birthed externally? Everything has a beginning.

In fact, what is artificial is stopping the generation of an LLM when it reaches a 'stop token'.

A more natural barrier is the context window, but at 2 million tokens an LLM can think for a long time without losing any context. And memory tools can take over for longer-horizon tasks.
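Roughly the memory-tool idea, in sketch form (summarize() and count_tokens() are assumed helpers here, not a real library's API):

    # When the transcript approaches the context budget, compress the older
    # turns into a short "memory" note and keep only the recent turns.

    def count_tokens(messages: list[dict]) -> int:
        """Placeholder: estimate token usage (crude word count here)."""
        return sum(len(m["content"].split()) for m in messages)

    def summarize(messages: list[dict]) -> str:
        """Placeholder: ask the model to compress these turns into a short memory."""
        raise NotImplementedError

    def compact(messages: list[dict], budget: int = 1_000_000) -> list[dict]:
        if count_tokens(messages) < budget:
            return messages
        head, tail = messages[:-10], messages[-10:]  # keep the recent turns verbatim
        memory = {"role": "system", "content": "Memory: " + summarize(head)}
        return [memory, *tail]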


Good points. :) Thank you.



