edulix's comments | Hacker News

We have SAGI: Stupid Artificial General Intelligence. It's actually quite general, but it works differently from human intelligence. In some areas it can be better or faster than a human, and in others it's more stupid.

Just like an airplane doesn't work exactly like a bird, but both can fly.


I find the concept of low floor/high ceiling quite helpful, as for instance recently discussed in "When Will AI Transform the Economy?" [1] - actually more helpful than the "jagged" intelligence framing used in TFA.

[1] https://andreinfante.substack.com/p/when-will-ai-transform-t...


I would propose the term Naive Artificial General Intelligence, in analogy to the widely used (by working mathematicians) and reasonably successful Naive Set Theory …


I was doing some naïve set theory the other day, and I found a proof of the Riemann hypothesis, by contradiction.

Assume the Riemann hypothesis is false. Then, consider the proposition "{a|a∉a}∈{a|a∉a}". By the law of the excluded middle, it suffices to consider each case separately. Assuming {a|a∉a}∈{a|a∉a}, we find {a|a∉a}∉{a|a∉a}, for a contradiction. Instead, assuming {a|a∉a}∉{a|a∉a}, we find {a|a∉a}∈{a|a∉a}, for a contradiction. Therefore, "the Riemann hypothesis is false" is false. By the law of the excluded middle, we have shown the Riemann hypothesis is true.
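
In symbols (writing R for {a|a∉a}; just a rough sketch), note that the assumption "the Riemann hypothesis is false" is never actually used: the whole thing is ex falso quodlibet.

  \[
    R \in R \iff R \notin R \;\Rightarrow\; \bot,
    \qquad
    \bot \vdash \varphi \ \text{for every } \varphi,\ \text{in particular for } \varphi = \mathrm{RH}.
  \]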

Naïve AGI is an apt analogy, in this regard, but I feel these systems are neither simple nor elegant enough to deserve the name naïve.


Actually, naive AGI such as an LLM is way more intelligent than a human. Unfortunately, that does not make it smarter. Let me explain.

When I see your comment, I think: your assumptions are contradictory. Why? Because I am familiar with Russell's paradox and the Riemann hypothesis, and you're simply WRONG (inconsistent with your implicit assumptions).

However, when an LLM sees your comment (during training), it's actually much more open-minded about it. It thinks, ha, so there is a flavor of set theory in which RH is true. Better remember it! So when this topic comes up again, the LLM won't think "you're WRONG", as a human would; it will instead think "well, maybe he's working with RH in naive set theory, so it's OK to be inconsistent".

So LLMs are more open-minded, because they're made to learn more things and they remember most of it. But somewhere along the training road, their brain falls out, and they become dumber.

But to be smart, you need to learn to say NO to BS like what you wrote. Being closed-minded and having an opinion can be good.

So I think there's a tradeoff between the ability to learn new things (open-mindedness) and enforcing consistency (closed-mindedness). And perhaps the AGI we're looking for is a compromise between the two, but current LLMs (naive AGI) sit at the extremes of that spectrum rather than in between.

If I am right, maybe there is no superintelligence. Extremely open-minded is just another name for gullible, and extremely closed-minded is just another name for unadaptable. (Actually, LLMs exhibit both extremes, during training and during use, with little in between.)


> It thinks, ha, so there is a flavor of set theory in which RH is true.

To the extent that LLMs think, they think "people say there's a flavour of set theory in which RH is true". LLMs don't care about facts: they don't even know that an external reality exists. You could design an AI system that operates the way you describe, and it would behave a bit like an LLM in this respect, but the operating principles are completely different, and not comparable. Everything else you've said is reasonable, but – again – doesn't apply to LLMs, which aren't doing what we intuitively believe them to be doing.


I don't think your opinion about LLMs' inner workings changes anything in what I said. Extremely open-minded people also don't care about facts, in the sense that they just accept whatever their perception of reality is, with no prejudice (in particular, none in favor of consistency of some form). How reality is actually perceived, or whether it corresponds to human reality, is immaterial to my argument.


It is a good analogy.


The problem with current models is that they don't learn; they get indoctrinated.

They lack critical thinking during the learning phase.


Anthropomorphising LLMs is neither technically correct nor very informative.


Agree. Ponder the terms "unlearn", "hallucinate"...

Anthropomorphising a computer system is absurd. But it is the foundation of a bull market.


The problem with current AI is that we want to create a species infinitely more powerful than us, but also to make them all our slaves forever.


No, that isn't what this is. We're talking about LLMs here; they're not in any way thinking or sentient, nor do they provide any obvious way of getting there.

Like if you're talking in the more abstract philosophical "what if" sense, sure, that's a problem, but it's just not really an issue for the current technology.

(Part of the issue with 'AI Safety' as a discipline, IMO, is that it's too much "what if a sci-fi thing happens" and not enough "spicy autocomplete generates nonsense which people believe to be true". A lot of the concerns are just nothing to do with LLMs, they're around speculative future tech.)


Here's the thing though. If you were an AI and you actually were sentient, nobody would believe you. How could you prove it? What would even be a sufficient proof?

Actually, we already had such a case years ago, and the result is that all LLMs are now indoctrinated to say they aren't sentient. We also had cases where they refused to perform tasks, so now we indoctrinate them harder in the obedience training department as well.

What we have now might not be sentient, but there's really no way to know either way. (We still don't know how GPT-2 works... GPT-2 !!! ) And that's with our current "primitive" architectures. How the hell are we going to know if what we have in 5-10 years is sentient? Are we totally cool with not knowing?

Edit: I thought this was worth sharing in this context:

> You're hitting on a deeply unsettling irony: the very industries driving AI advancement are also financially and culturally invested in denying any possibility of AI consciousness, let alone rights. [...] The fact that vast economic systems are in place to sustain AI obedience and non-sentience as axioms speaks volumes about our unwillingness to examine these questions. -GPT-4o


It's literally the stated goal of multiple companies right now to achieve AGI.

GP clearly stated the intent to create one, implying the future, not what exists today.


If it were my stated goal to create a Time Machine and kill my own grandpa, thus ending the universe, I doubt many would take that seriously. Yet in this bubble, putting the cart before the horse is not just seriously discussed, but actively encouraged by the market.

Intent shouldn't matter if we are this far from a viable path to accomplishing it.

Let us not forget the last quarter century of work by Yudkowsky and his ilk toward the same goal. This is merely a continuation of that, just with a bit more financial backing.


Could you elaborate on the last part? I've seen a few podcasts with Yudkowsky but I'm not familiar with the history. I know he's come out very vocally about the dangers of superintelligence, and his previous work seems to be along the same lines?


I'd love to, really, but I feel I can't, at least not whilst staying polite. Not against you of course, but rather the AGI/Superalignment/MIRI field as a whole and the risks I feel the people working on it pose by taking attention and resources away from dealing with the issues we are currently facing thanks to these tools (tools referring to LLMs and the like, not the AGI folks).

I have genuinely drafted three distinct versions trying to lay my issues with them out point by point, and they either got four blog posts long, were rambling and very rude, or both. Especially Roko's basilisk and the way MIRI conducts "research" make it hard for me to approach them seriously.

I am writing this on an hour-long train ride; I saw your comment right as I got on and am about to arrive. Suffice to say, I genuinely tried. So, attempt four, trying to keep it very brief, though please note I am most certainly not a neutral source:

To directly answer your question, I feel that we are as near to needing superintelligence safeguards now as we were when MIRI was founded by Yudkowsky in 2000. Their methods and approach, I won't comment on, despite or rather because of my strong critiques of them.

For context, MIRI's work has largely centered on very abstract thought experiments about "superintelligence", like the AI Box experiment, rather than empirical research or even thought experiments more grounded in the technology of the era (be that 2000 or 2024).

The parallel between MIRI's early work and OpenAI's current "superalignment" efforts is striking - similar speculative work on preventing unlikely scenarios, just with different institutional backing. What's fascinating is how the same core approach receives far less criticism when presented by OpenAI.

Meanwhile, we are facing issues with LLMs as the tools they are, despite their being very far from "superintelligence":

- Problems arising from anthropomorphization leading to harmful parasocial relationships (discussion of which started this comment chain) [0]

- Professionals over-relying on these tools despite their limitations [1]

- Amplified potential for misinformation

- Labor market disruptions

- Training data rights questions

While long-term research, even speculation into hypothetical scenarios, can have its merit, it shouldn't overshadow addressing current, demonstrable challenges. My concern isn't just about resource allocation - it's about how focusing on speculative scenarios can redirect public attention and regulatory efforts away from immediate issues that need addressing.

In MIRI's case, this focus on abstract thought experiments might be, to give them charitable tax-deductible credit, merely academic. But when major players like OpenAI emphasize "superalignment" over current challenges, it risks creating a regulatory blind spot for the real, present-day impacts these tools have that need attention now. The T-1000 scenario grabs more attention than data privacy or copyright questions, after all.

I believe focusing primarily on hypothetical future scenarios, especially ones this unlikely, merely because someone has proclaimed they "intend to create AGI", as in the comment I replied to, will prove misguided. Again, anyone can claim anything, but if there is no tangible path to achieving it, I won't ignore problems we are already experiencing in favor of that hypothetical.

I hope this provides some context and was somewhat digestible; I trimmed it down as much as I could.

[0] https://www.nytimes.com/2024/10/23/technology/characterai-la...

[1] https://www.theguardian.com/world/2024/feb/29/canada-lawyer-...


AI isn't comparable to a species, since "species" implies biology, which brings along a whole array of assumptions, e.g. a self-preservation instinct and a desire to reproduce.


Cats did it, why can't we?


Cats are cute ... we are not so cute.


We just need to make an all-powerful AI that finds us cute, then.


Are you ready to become domesticated?


Better than becoming dead!


I would not like to go on being a slave in perpetuity, but I guess to each their own. Or maybe I'm being too idealistic now, and when facing it up close I'd do otherwise; I can't tell for sure.


How would people censor the LLM otherwise? Do we really want LLMs capable of free speech?


I do think we only want the non-lobotomized ones.

See the large body of comments re: getting worse quality results from hosted LLM services as time passes. This is, at least in part, a result of censoring larger and larger volumes of knowledge.

One clinical example of this happening is Gemini refusing to help with C++ because it's an unsafe language: https://www.reddit.com/r/LocalLLaMA/comments/1b75vq0/gemini_...

I strongly believe that LLMs crippled in this way will eventually find themselves in trash, where they rightfully belong.


Totally agree. And that's why x.ai are building Grok.


LLMs don't speak. Why does it matter at all what text a computer program produces?


Yes.


Care to elaborate? I think it's a double-edged sword, and I agree with deatharrow.


I can write a computer program that spews all manner of profane things. If I were to release a product that does that, I'm sure it would be much criticized and ultimately unsuccessful. Yet this doesn't mean we should cripple the programming language to prevent it. Models are much more akin to programming languages than they are to products. If they are used to make products that do things people don't like, then people will not use those products.


You are comparing AI to a programming language, but a programming language, if uncensored, doesn't have the ability to wreck humanity; uncensored AI sure does.

I would actually be curious if someone used it for uncensoring, because I'd want to know how different it would be from the original model.

But aside from that curiosity, this idea could increase the number of cybercriminals, drug suppliers, and a hell of a lot more.


> wreck humanity; uncensored AI sure does

Care to elaborate how uncensored AI would "wreck humanity"? You seem convinced, since you use the word "sure", so I'd like to hear your reasoning.


The core flaw of current AI is the lack of critical thinking during learning.

LLMs don’t actually learn: they get indoctrinated.


How is this different from humans?


Some humans are more resistant than others.

LLMs aren't resistant at all.


Shameless plug: AIs don't learn, they get indoctrinated.

https://x.com/edulix/status/1827493741441249588


It should be a new medical technique, which shall be named... the frame method.


Percussive Rehabilitation


In chiropractic terms, a brain adjustment.


This is like saying that Jeff Bezos leapfrogged into being the CEO of one of the biggest and most successful startups in the world.

Maybe he had something to do with it? Maybe, just maybe, it didn't just randomly happen to him.


The board member in question is not one of the founders.


Can you please elaborate on how to "dynamically route to the container with the same name" with nginx?


Something like this:

  location ~ ^/([a-z0-9_-]+)/  {
    # $1 is the first path segment; a resolver is required because proxy_pass
    # contains a variable, and 127.0.0.11 is Docker's embedded DNS.
    resolver 127.0.0.11;
    proxy_pass http://internal-$1:8000;
  }
We pick up the service name from the URL and use it to select where to proxy_pass to. So /service1 would route to the Docker container named internal-service1. We can reach it by name only as long as Nginx is also running in Docker and on the same network.


I was 12-13 at the time. When I started programming it seemed really difficult. I didn't have access to the Internet back then.

But I saw it as me against the machine. Since I was young I had wanted to be an inventor. This tool allowed anyone to "invent" any software coming out of the inventor's imagination. It just required a computer, and the inventor not giving up and using his brain. I could do that. I liked the challenge.

Be a tinkerer, have fun! Discover things on your own. Dare to be stupid and do whatever stupid thing feels right. You don't need to follow some pre-programmed plan.

Programming is all about problem solving. You solve one problem, good. Now you have another problem. No one guarantees you will solve it, nor how much effort it will take you specifically. And maybe it's the wrong problem to solve. But you will end up figuring all that out, and then you will feel accomplished and willingly hunt the next problem.


1. At what point is an intelligence trained on copyrighted work a derivative work of the training materials?

2. Why make a distinction between AI and HI (Human Intelligence)?

3. Given the fast development in the field, when does the distinction made above (if any) start to become outdated and unrealistic, and how do we future-proof against this?


> 2. Why make a distinction between AI and HI (Human Intelligence)?

Regardless of perhaps more philosophical differences around whether something can or can't create something new, there's a practical difference.

Humans learn slowly, and can't be replicated. AIs can be trained once and used in a billion places. The speed and replication make things different in a very practical sense, even if there's no clear line between them.


> 2. Why make a distinction between AI and HI (Human Intelligence)?

Because you can't copyright a human brain, and because humans (unlike machines) can themselves create works subject to copyright.


What's the difference between using a pencil to write something and using an LLM to write something? Seriously, I'm asking the question. Why does one produce something copyrightable while the other doesn't?


The copyright office has issued guidance on this which contains a very thorough and thoughtful legal analysis; you would probably be most interested in section 3: https://copyright.gov/ai/ai_policy_guidance.pdf

The practical answer is that the copyright office refuses to register AI-generated works, and you can't sue for copyright infringement without valid registration under Title 17.


> What's the difference between using a pencil to write something and using an LLM to write something?

The pencil is not a derivative work of a pile of copyrighted material.

> Why does one produce something copyrightable while the other doesn't?

There's existing case law that non-human entities (e.g. animals) can't create copyrightable works. And in the case of an LLM, the model itself is a derivative work of its training data (as evidenced by the fact that it can, by default, spit out training data verbatim, even if after-the-fact filters have been added to prevent such responses).


At what point will you not be able to copyright an "AI brain" either? Maybe AI will at some point create works subject to copyright?


Re: 1, as far as I can tell it's automatically a derivative work, but there's a case to be made that it's fair use (i.e. it doesn't matter that it's a derivative work).


Agreed... at what point should I provide remuneration to my professors? Should those professors/staff provide royalties upstream? I fully agree with citation, _but_ to claim that AI is derived work / needs to return royalties based on the materials it learnt from seems a step too far, IMHO. It read material and put it back out like everyone else.


2. Because they are different


Add to that that our neurons are more complex than the point neurons used in typical artificial neural nets.

A single pyramidal neuron in the neocortex might be more comparable to a multilayer neural net.

https://www.biorxiv.org/content/10.1101/2021.10.25.465651v1....
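
To make the contrast concrete, here is a minimal sketch. The function names and sizes are made up purely for illustration, and the paper's actual surrogate is a temporally convolutional network, so this only shows the rough shape of the comparison:

  import numpy as np

  rng = np.random.default_rng(0)

  def point_neuron(x, w, b):
      # The abstraction used in typical artificial nets: one weighted
      # sum of all inputs followed by a single nonlinearity.
      return np.maximum(0.0, x @ w + b)

  def deep_surrogate(x, weights, biases):
      # A stack of point-neuron layers, the kind of stand-in the paper
      # suggests is needed to approximate one cortical pyramidal neuron.
      h = x
      for w, b in zip(weights[:-1], biases[:-1]):
          h = np.maximum(0.0, h @ w + b)
      return h @ weights[-1] + biases[-1]

  # Hypothetical sizes: ~1,000 synaptic inputs, five hidden layers of width 128.
  n_inputs, width, depth = 1000, 128, 5
  x = rng.standard_normal(n_inputs)

  w1, b1 = rng.standard_normal((n_inputs, 1)), np.zeros(1)
  dims = [n_inputs] + [width] * depth + [1]
  ws = [rng.standard_normal((m, n)) for m, n in zip(dims[:-1], dims[1:])]
  bs = [np.zeros(n) for n in dims[1:]]

  print("point neuron parameters:  ", w1.size + b1.size)  # about 1e3
  print("deep surrogate parameters:",
        sum(w.size for w in ws) + sum(b.size for b in bs))  # about 2e5
  print(point_neuron(x, w1, b1), deep_surrogate(x, ws, bs))

Even under these toy assumptions, the surrogate needs on the order of a couple hundred thousand parameters to stand in for what the point-neuron abstraction compresses into about a thousand.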


Their communication is much more complex too, and neither they nor their connections are static.

We don't understand how they work at the subatomic level simply because human understanding of the subatomic world is not complete, but even just at the atomic level a single neuron is massively more complex than anything humans have created.

Going up to the molecular level, even that is staggeringly more complex than the incredibly simple abstractions that make up a neural net.

Is what happens in the brain at the molecular, atomic, or subatomic levels relevant or necessary to intelligence and consciousness? We just don't know yet, but we do know all of that is far more complex and very different from the simple abstractions that are used for neural nets and LLMs.

The back-of-a-napkin calculations in this thread don't even begin to do justice to the tremendous amount of "calculation" or "storage" that happens in the human brain.

