

I don't think laughing at people for their concerns is helpful. Especially when you don't seem to be bothered by the possibility that this is a civilization-ending development.

For the record, I don't think it is, personally. But people are terrified, and not taking their fear seriously and dealing with it compassionately is a recipe for serious, and unnecessary disaster.


A civilization-ending development, no.

A job-ending development for many people, yes.

The chattering classes see themselves becoming obsolete. Fast. They're freaking out. From professors of English literature to op-ed writers, word slingers are in trouble. Not novelists, though; none of these systems can hold a story together for any great length.

If everything you get paid for goes in and out over a wire, you're in trouble.


It's not just the chattering classes that are freaking out. It's the majority of ordinary people that I've talked with.


The majority of 'ordinary people' don't care, and don't want to care. They just want to think about their next tourist trip, or getting coffee, or where to get lunch and dinner next. You try talking about the world-historical impact, the job-ending impact, the sci-fi level progression, THEY DON'T CARE. And that even includes tech workers.

The time to adapt to AI is now (getting a trade is the safest bet), but obviously 99% of the population doesn't want to do any adaptation. So when GPT-5 or GPT-6 comes out, only then will they confront AI, without any mental preparation.

That's why there's no point slowing down. People won't be alerted until the AIs get advanced enough, so better push it forward, to shock people into action. Institutions can move surprisingly fast when pushed to; every school and university has had to respond to ChatGPT already, and it kind of works. Otherwise people will just try and pretend this doesn't exist, forever.


In a sense, I hope that you're right. It's just not how it looks to me from here. I'm hearing a great deal of fear from all quarters.


You're suggesting we accelerate the danger to shock people awake?

So like putting poor performing FSD on public streets (daring regulators to ... do their job)?

Or connecting the nukes to some SkyNet-esque "defense" system, knowing its propensity for illogical and undefined behavior?


>If everything you get paid for goes in and out over a wire, you're in trouble.

So… almost everyone who posts here?

Not sure why everyone here is so excited to be out of a job. I guess they all think that UBI will fall out of the sky. I think they’ll be in for a rude awakening.


If you were to eliminate 10% of jobs with no replacements available, in the span of say a year, that would likely be a civilization ending development. People unable to feed their families tend to take drastic actions.


I’d probably adjust that to at least 50%. Many European countries had extended high youth unemployment, but minimal political unrest.


That drastic action is usually working a menial service job.


OpenAI's monster is too dangerous to be in people's hands unless people pay $20/month to use it


Yeah, OpenAI isn't helping. The only thing that really concerns me about LLMs is how people are reacting to them, and OpenAI seems very intent on scaring the daylights out of as many people as possible.

Fear makes us stupid. Frightened people are dangerous people.


Laughing at Elon is always helpful to me personally.

The civilization ending development is going to happen or not happen. It’s hubris to think any one person or one country has a say at this point.


"The most common way people give up their power is by thinking they don't have any."


I think global climate change is also a civilization ending development but we don't see the world activate to battle that in unison.


> Especially when you don't seem to be bothered by the possibility that this is a civilization-ending development.

Lots of people freak out based on concerns divorced from reality. I laughed at the people who thought the Mayan calendar meant 2012 was the end of the world. I laughed at the people who thought the LHC would create a black hole that would gobble up the earth. I laughed at the people who thought COVID vaccines were a eugenics mass sterilization campaign.

Why shouldn't I? The magnitude of the risk they propose does nothing to change the sheer ridiculousness of it.


> Please stop outcompeting us. Sincerely, The Losers

I disagree with the letter and I think these fears are overblown, but the attitude on display here is pretty unpleasant.


I suspect it’s about to get a whole lot more unpleasant.


So do you agree that nations should be able to freely develop nuclear weapons?

Edit: We actually don't want people to develop something "too good". There are some things in the past century that have been "too good". Plastics are too good. So are nuclear weapons. Completely powerful AIs and machines that can displace jobs of millions of people are too good. If there's an AI tomorrow that can replace every single physical job out there, what do you think will happen? Riot and chaos in the street after about 1-2 years.

Edit2: People will always go back to "we will just create different jobs". But do you really think that everyone is capable of a skilled job? Remember, no physical jobs anymore.


> If there's an AI tomorrow that can replace every single physical job out there, what do you think will happen? Riot and chaos in the street after about 1-2 years.

I don't think you even need to replace every physical job out there for that to be the result. I think all the ingredients needed exist right now, and I'm worried that unless the discourse about LLMs changes significantly, the perceived threat of them is enough to bring those riots and chaos.


They already have nukes. If a dirt poor country like North Korea can develop them then so can pretty much everyone else.


Is this an argument that international moratoriums can be highly effective even for high-value technology development?


No, they aren't effective, because NK, India, Israel and Pakistan developed their nukes despite treaties like the NPT.


Sure, the powerful countries all have them, which cancels out their threat. Should the US not develop this tech and let the Chinese be the sole AI superpower?


I mean technically they are. Other nations with more advanced abilities also seem able to freely attempt to prevent them from doing so though.


> Other nations with more advanced abilities also seem able to freely attempt to prevent them from doing so though.

So how is this any different? Another group of people with more connections to the government can enforce a law to prevent others from displacing their jobs?


This is a pretty disappointing opinion to share. Something huge is clearly already happening; we don't need yet another model to throw everything into disarray once more. I still work with GPT-3, and I think we haven't even begun to understand how to use even that. GPT-3 is enough to disrupt entire industries, among which programming and tech will be, IMO, the most "vulnerable."

At least you are putting your disillusioned cynicism on full display, clearly showing that you think you already lost.


I’m not sure if it’s ultimately good or bad, but I do know that if only one company or country controls it then it will be bad. Either way, I think it’s an unstoppable force and preposterous that we puny humans could stop it when we can’t seem to build rail between LA and SF.


Just hope that there are multiple winners in multiple cultures with different takes on intelligence, meaning, purpose.


Thanks for raising a point that gets very little mention. A tragedy of technology is that it seems to lead to further homogenisation of humanity. (Good for things such as human rights, bad for culture.)


> Please stop outcompeting us. Sincerely, The Losers

Are there going to be any non-“losers”? AGI has the potential to put everyone, literally everyone, out of work. Permanently.

Who is going to be left to do the laughing and call others “losers”?


I mean, yeah, same thought after seeing the signatories. What are some of the cliches being used around here? Toothpaste is out of the tub? Arrow has left the bow. The dye is cast. The ship has sailed. (Thanks ChatGPT).


The confetti has left the cannon.[0]

[0] https://news.ycombinator.com/item?id=35346683


If ChatGPT told you "the dye is cast", there's hope after all, because it's die, not dye.


The pee is in the pool. The black swan has left the barn.

And yeah, I had a laugh at the signatories. Of course my heart goes out to the non-billionaires that might be out of a job. Or maybe us lucky duckies are going to travel the world on our new basic income trust funds?


> Toothpaste is out of the tub.

Please don't correct that.


The genie is out of the bottle. [1]

[1] No AI was involved in the creation of this reply. ;-)


We’re dealing with all the dynamics of not only the superorganism of humanity, but of the biological reality of the earth as whatever it is in the soil of the stars. It is indeed about to get very interesting. All the ingredients of emergence are quite rich these days.


I'm historically a party pooper when it comes to new tech, but LLMs give me that anxious feeling in my gut that the world is about to change drastically. I feel privileged to be alive, and hope I live to see where things get ten, twenty, or thirty years from now.


Can't put the booger back in the nose


Everything is out in the open now. The methods, the algorithms, heck even powerful base model weights from Meta. The pot of gold at the end of the rainbow is clearly visible for all. The capabilities are emerging. The race is on. It’s not going to stop till there’s a winner.

You laugh but I think your view is flawed because your belief is, “we have to create an AGI because everyone else will if we don’t”

The definition of a world-ending arms race?

This topic, amongst others, should be a good time for people to actually come together, reflect, and talk about the future we want to create, rather than just LOL about it, start wars with each other, etc.

I guess you're just being cynical, but really? LOL?

Even ChatGPT would probably tell you this isn’t a smart way forwards.


You can't even convince Russia to stop a war that they are clearly losing. Good luck convincing them to stop working on AI research.


I don't for a minute support what Russia is doing but I think if we lived in a less hostile, conflict focused, conquer all share little world, that conflict wouldn't happen.

As an American, I know you think Russia is wrong, but it doesn't take an LLM to see that the invasion of Iraq was not all that different; the excuse for that war was that they had access to weapons of mass destruction, so they needed to be wiped out.

Did you ever see any weapons of mass destruction?

The world is too small for these mindsets now. We need to grow up.

Don't hate the player, hate the game.


Russia has been doing this since long before the USA even existed.

It's not comparable to the Iraq war. A better comparison would be the USA invading Canada.


Whatever you believe.


Please ask ChatGPT about how to gain consensus from the entire world


I would but I'd have no way to verify if whatever it tells me is accurate.


Honestly at this point "if I don't end the world, someone else will" doesn't sound half bad.


What exactly does the “winner” “win” in this race?


The Industrial Revolution, but intelligence instead of energy.


Elon Musk can't manage to come down on the right side of any issue these days.


Ya seems he is upset he missed out


He is one of the founders of OpenAI.. Soo I don't think he has missed out?


He has no stake in the organization whatsoever. They made a total break. Presumably because Elon is a moron.


Yes, lots of evidence out there he was pushed out and is now sour.

