I don't think laughing at people for their concerns is helpful. Especially when you don't seem to be bothered by the possibility that this is a civilization-ending development.
For the record, I don't think it is, personally. But people are terrified, and not taking their fear seriously and dealing with it compassionately is a recipe for serious and unnecessary disaster.
The chattering classes see themselves becoming obsolete. Fast. They're freaking out.
From professors of English literature to op-ed writers, word slingers are in trouble.
Not novelists, though; none of these systems can hold a story together for any great length.
If everything you get paid for goes in and out over a wire, you're in trouble.
The majority of 'ordinary people' don't care, and don't want to care. They just want to think about their next tourist trip, or getting coffee, or where to get lunch and dinner next. Try talking to them about the world-historical impact, the job-ending impact, the sci-fi-level progression: THEY DON'T CARE. And that even includes tech workers.
The time to adapt to AI is now (getting a trade is the safest bet), but obviously 99% of the population doesn't want to do any adapting. So when GPT-5 or GPT-6 comes out, only then will they confront AI, without any mental preparation.
That's why there's no point slowing down. People won't be alerted until the AIs get advanced enough, so better to push forward and shock people into action. Institutions can move surprisingly fast when pushed: every school and university has had to respond to ChatGPT already, and it kind of works. Otherwise people will just try to pretend this doesn't exist, forever.
>If everything you get paid for goes in and out over a wire, you're in trouble.
So… almost everyone who posts here?
Not sure why everyone here is so excited to be out of a job. I guess they all think that UBI will fall out of the sky. I think they’ll be in for a rude awakening.
If you were to eliminate 10% of jobs with no replacements available, in the span of, say, a year, that would likely be a civilization-ending development. People unable to feed their families tend to take drastic actions.
Yeah, OpenAI isn't helping. The only thing that really concerns me about LLMs is how people are reacting to them, and OpenAI seems very intent on scaring the daylights out of as many people as possible.
Fear makes us stupid. Frightened people are dangerous people.
> Especially when you don't seem to be bothered by the possibility that this is a civilization-ending development.
Lots of people freak out over concerns divorced from reality. I laughed at the people who thought the Mayan calendar meant 2012 was the end of the world. I laughed at the people who thought the LHC would create a black hole that would gobble up the earth. I laughed at the people who thought COVID vaccines were a eugenics mass-sterilization campaign.
Why shouldn't I? The magnitude of the risk they propose does nothing to change the sheer ridiculousness of it.
So do you agree that nations should be able to freely develop nuclear weapons production?
Edit: We actually don't want people to develop something "too good". Some things in the past century have been "too good". Plastics are too good. So are nuclear weapons. Enormously powerful AIs and machines that can displace the jobs of millions of people are too good. If there's an AI tomorrow that can replace every single physical job out there, what do you think will happen? Riots and chaos in the streets after about 1-2 years.
Edit2: People will always go back to "we will just create different jobs". But do you really think that everyone is capable of a skilled job? Remember, no physical jobs anymore.
> If there's an AI tomorrow that can replace every single physical job out there, what do you think will happen? Riots and chaos in the streets after about 1-2 years.
I don't think you even need to replace every physical job out there for that to be the result. I think all the ingredients needed exist right now, and I'm worried that unless the discourse about LLMs changes significantly, the perceived threat of them is enough to bring those riots and chaos.
Sure, the powerful countries all have them, which cancels out their threat. Should the US not develop this tech and let the Chinese be the sole AI superpower?
> Other nations with more advanced abilities also seem able to freely attempt to prevent them from doing so though.
So how is this any different? A group of people with more connections to the government can enforce a law to prevent others from displacing their jobs?
This is a pretty disappointing opinion to share. Something huge is clearly happening already; we don't need yet another model to throw everything into disarray once more. I still work with GPT-3, and I think we haven't even begun to understand how to use even that. GPT-3 is enough to disrupt entire industries, amongst which programming and tech will IMO be the most "vulnerable."
At least you are putting your disillusioned cynicism on full display, clearly showing that you think you already lost.
I’m not sure if it’s ultimately good or bad, but I do know that if only one company or country controls it then it will be bad. Either way, I think it’s an unstoppable force and preposterous that we puny humans could stop it when we can’t seem to build rail between LA and SF.
Thanks for raising a point that gets very little mention. A tragedy of technology is that it seems to lead to further homogenisation of humanity. (Good for things such as human rights, bad for culture.)
I mean, yeah, same thought after seeing the signatories. What are some of the clichés being used around here? Toothpaste is out of the tub? Arrow has left the bow. The dye is cast. The ship has sailed. (Thanks, ChatGPT.)
The pee is in the pool. The black swan has left the barn.
And yeah, I had a laugh at the signatories. Of course my heart goes out to the non-billionaires that might be out of a job. Or maybe us lucky duckies are going to travel the world on our new basic income trust funds?
We’re dealing with the dynamics of not only the superorganism of humanity, but of the biological reality of the earth as whatever it is in the soil of the stars. It is indeed about to get very interesting. All the ingredients of emergence are quite rich these days.
I'm historically a party pooper when it comes to new tech, but LLMs give me that anxious feeling in my gut that the world is about to change drastically. I feel privileged to be alive, and hope I live to see where things get ten, twenty, or thirty years from now.
Everything is out in the open now. The methods, the algorithms, heck even powerful base model weights from Meta. The pot of gold at the end of the rainbow is clearly visible for all. The capabilities are emerging. The race is on. It’s not going to stop till there’s a winner.
You laugh, but I think your view is flawed, because your belief is, “we have to create an AGI because everyone else will if we don’t.”
The definition of a world-ending arms race?
This topic, amongst others, should be a good time for people to actually come together, reflect, and talk about the future we want to create, rather than just LOL about it, start wars with each other, etc.?
I guess you’re just being cynical, but really? LOL?
Even ChatGPT would probably tell you this isn’t a smart way forwards.
I don't for a minute support what Russia is doing, but I think if we lived in a less hostile, less conflict-focused, conquer-all-share-little world, that conflict wouldn't happen.
As an American, I know you think Russia is wrong, but it doesn't take an LLM to see that the invasion of Iraq was not all that different; the excuse for that war was that they had access to weapons of mass destruction, so they needed to be wiped out.
Did you ever see any weapons of mass destruction?
The world is too small for these mindsets now. We need to grow up.