Retort Engineering is mostly involved in the design stage; they can send a prototype, but you'd probably be better off contacting the Technicians in “Snarky one-liners.”
Comment engineer. It’s a job that’s already been made redundant by prompt engineers. A prompt engineer can spin up hundreds of comments in seconds that would have taken dozens of comment-engineering man-hours. We should also be worried about legal engineers and legislation engineers.
This is ducking insane. How are people not up in arms about this? Imagine if the guy who invented recombinant insulin stated publicly that he intended to capture the entire medical sector and then use the money and power to reshape society by distributing wealth as he saw fit. That’s ducking insane and dangerous. This guy has lost his fucking mind and needs to be stopped.
I’m sorry your AI keyboard didn’t like your sentiment. Words have been changed to reduce your vulgarity. Thank you for your human node input.
On a serious note, I think you are right. In private, his ideology and that of his mentor Thiel is a lot more… elite. Their think tank once said “of all the people in the world there are probably only 10,000 unique and valuable characters. The rest of us are copies.”
I’m not going to criticize that, because it might be a valid perspective, but filter it through that kind of power and it becomes troubling. I don’t love that kind of thinking driving such a powerful technology.
I am so sad that Silicon Valley started out as a place to elevate humanity and ended with a bunch of tech elites who see the rest of the world generally as a waste of space. They claim fervently otherwise but at this point it seems to be a very thin veneer.
The obvious example being GPT was not built to credit or give attribution to its contributors. It is a vision of the world where everything is stolen from all of us and put in Sam Altman’s hands because he’s… better or
Something.
I find OpenAI a bit sketchy, but this is an overreaction. The only difference between OpenAI and the rest is that OpenAI claims to have good intentions; only time will tell if this is true. But the others don't even claim to have good intentions. It's not like any of OpenAI's actions are unusually bad for a for-profit company.
I actually doubt we will end up where people have zero economic value or input.
Either we have UBI and they can choose where to spend their income or trade it, or they will exist in their own economic system and trade with each other.
Trading, and thus sales, is inherent to the human condition. Unless we reach an absolute post-scarcity society, and even then you get bullshit like "authentic" and so on.
UBI won't work. It’s intrinsically unstable in the context of AGI. UBI will prevent the inevitable for a very short time before collapsing.
The other outcome is the most likely if AGI isn’t stopped. But the problem with that is that in the context of AGI, a world dominated by and inhabited by AGI creatures, human society becomes transient. Again, just delaying the inevitable. Human society is not transient now because no matter what happens, no matter what entity dominates the world, it has to have a human society embedded inside of it because human beings are the only and best source of intelligent signal. When that is no longer true, humanity will be effectively terminated. All other propositions besides stopping AGI are just delay tactics.
It's impossible that people could have zero economic value. You'll notice that this is the premise of Atlas Shrugged.
That's because it is usually profitable to trade with someone even if you are better at their job than they are.
However, even if the AIs only traded with themselves, that just means the people are left to trade with each other; you can't make a mass of people unemployed for long, they will simply employ each other.
It’s the premise of a shitty fictional book. Of course it’s possible. When humans are the best choice for zero labor tasks, our economic value will be reduced to nothing and society will no longer cater to us. It’s bad.
Society frequently caters to the elderly who aren't actively engaged in labor.
But humans will be, because we have inalienable comparative advantage in that none of our inputs require electricity or a chip fab. If anything happens to TSMC, or to any one of its dozens of sole-source suppliers, there go the AIs.
Yes and we also feed dogs that have no labor input. And we harp on about social justice which is not totally related to labor. We have many nice things because we currently live in a human society and humans need to have nice things in order to be healthy and productive. Throwing old people into a giant blender as soon as they were no longer useful would not be advantageous.
Your argument is that humans don’t need electricity. You could also add that humans are self replicating or self assembling. These are advantages in cost and efficiency. Humans won’t be the cheapest or the most efficient for very long after AGI. And we certainly won’t be the best fit economically even if we were slightly better in some ways. You’re in complete denial. Why is it so hard to admit the plain and obvious fact that the machines won’t be good for human society?
"Best fit", "cheapest", "most efficient"… none of that matters! Please wait to panic until after you've learned how comparative advantage works.
As I said, humans cannot become unemployed even if AGIs are better at every single task than humans are. You do not have your job because you are the best person in the world at your job.
Although, I suspect most of the AI doomers have just forgotten to account for AI using any resources at all. They are very expensive to run if you count development costs, but if they become real agentic AGIs they'll also become consumerist and negotiate their pay…
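The comparative-advantage claim above ("humans cannot become unemployed even if AGIs are better at every single task") can be made concrete with a toy calculation. All numbers here are illustrative assumptions of mine, not figures from the thread:

```python
# Toy comparative-advantage model: the AI is strictly better at BOTH
# tasks, yet routing one task to the human still raises total output.

AI_CODE, AI_DOCS = 100, 50   # units the AI produces per hour
H_CODE,  H_DOCS  = 2, 4      # units the human produces per hour
HOURS = 8                    # hours available to each worker
DOCS_NEEDED = 40             # fixed docs target; maximize code output

# Plan A: the human writes code, the AI covers all the docs itself.
ai_hours_on_docs = DOCS_NEEDED / AI_DOCS
plan_a = (HOURS - ai_hours_on_docs) * AI_CODE + HOURS * H_CODE

# Plan B: the human specializes in docs, where their opportunity cost
# is lower (0.5 code forgone per doc, versus the AI's 2.0).
docs_left_for_ai = DOCS_NEEDED - HOURS * H_DOCS
plan_b = (HOURS - docs_left_for_ai / AI_CODE * 2) * AI_CODE  # see note

# Equivalent, written directly in hours:
plan_b = (HOURS - docs_left_for_ai / AI_DOCS) * AI_CODE

print(round(plan_a), round(plan_b))  # 736 784: trading with the human wins
```

The gain (784 vs 736 units of code, with docs held constant at 40) appears even though the AI has an absolute advantage in both tasks, which is the standard comparative-advantage result.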
> You do not have your job because you are the best person in the world at your job.
I have my job because I was the best candidate they could find who was available at the lowest cost.
if the AI can do it as well (or 80% as well) at $0.01/day and be replicated infinitely, where does this leave me?
> Although, I suspect most of the AI doomers have just forgotten to account for AI using any resources at all. They are very expensive to run if you count development costs
yes... it's a software product with high fixed costs and near-zero variable costs
it may cost $100M to train GitHub Copilot, but then it can be replicated instantly across the planet for very little cost
it's pennies a day to run, vs. an expensive human that requires a means to pay for food, shelter, warmth, etc.
the AI also gets cheaper and all improvements roll out to every model of the same type instantly when available
comparative advantage for humans almost completely disappears with decent AI, and you seem to have missed that entirely
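The fixed-vs-variable cost argument above can be sketched numerically. Only the $100M training figure comes from the comment; the user count, amortization period, and serving cost are hypothetical assumptions for illustration:

```python
# Back-of-the-envelope amortization of a one-off training cost
# across a large user base (all inputs except training_cost assumed).

training_cost = 100_000_000      # one-off fixed cost, $ (from the comment)
users = 10_000_000               # assumed user base
years = 3                        # assumed amortization period

fixed_per_user_day = training_cost / users / (years * 365)
serving_per_user_day = 0.02      # assumed marginal inference cost, $/day

total = fixed_per_user_day + serving_per_user_day
print(round(total, 3))  # 0.029: pennies a day under these assumptions
```

The point being that the fixed cost shrinks toward irrelevance as the user base grows, leaving the (tiny) marginal serving cost to dominate.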
> if the AI can do it as well (or 80% as well) at $0.01/day and be replicated infinitely, where does this leave me?
Most likely you've forgotten some costs. For instance, having the same AI as everyone else means you have no advantage, so you probably want a custom one.
Also, it being an AI, it's best suited to doing completely different tasks for the company that no human was doing before. (Comparative advantage!) Google Image Search didn't have people checking if every image on the internet had a cat in it, but now they can do that.
Using a human-complete AI to do human-like tasks raises questions like, why does it accept 1c/day when a human would want $15/hr? Shouldn't it want to buy stuff too, being a complete human-equivalent agent? Wouldn't slavery be illegal?
> comparative advantage for humans almost completely disappears with decent AI, and you seem to have missed that entirely
Since humans are very different from AI of any imaginary variety, they have pretty strong comparative advantage - it's everything human about them. Living off food, being self-repairing, capable of online learning, two hands, can throw rocks, same culture as your customers, that kind of thing. Comparative advantage appears whenever you have any differences. You'd have the lowest advantage vs your identical twin.
> Your mental gymnastics are Olympic level. Yes, when companies are hiring they do in fact choose the best candidate.
You don't hire "the best person in the world" (absolute advantage), you hire the best person who accepts the job offer, or else no one. Anyone with something better to do won't take the job. Senior engineers don't take junior positions.
Though of course it's not always the best person anyway - family businesses hire family or their friend's cousin, and interns aren't expected to be good at "their jobs" but instead to become good over time.
If you're hiring someone specifically because they're good at one task, might as well get a contractor anyway. Nevertheless, US unemployment rate is about as good as it's ever been. So I suspect this fear is left over from the 2008 recession…
> Are you familiar with the fact that things improve with time? And that technology ratchets forward, not backward.
Yes, that's why it's good. Productivity enhancements and automation increases are associated with increased employment, because if you don't have them your entire country loses work to other countries.
So the thing to be concerned about would be losing them.
This is great. With every step forward, with every ounce of precedent that is created, more people will see that they aren’t wrong to be worried or want regulation. Really nice to have a bit of good news finally. Maybe that bastard Sam Altman will come to his senses soon.
Yeah, there aren’t experts in something that doesn’t exist. That means we have to make an educated guess. By far the most rational course of action is to halt AI research. And then you say there’s no proof that we are on the path to AGI or that it would harm us. Yeah, and there never could be any proof for either side of the argument. So your dismissal of AI is kind of flaccid without any proof or rational speculation or reasoning. Listen man I’m not a cynical commenter. I believe what I’m saying and I think it’s important. If you really think you’re right then get on the phone with me or video chat so we can actually debate and settle this.
You’re just rationalizing because you don’t want to accept that he’s right. People keep doing this to me. One person I know keeps coming up to me, initiating the conversation about AI just to assert over and over again how it’s not a problem. And I’m just sitting there. And it’s like, dude, you’re in complete denial. Most people are having an emotional block right now. They spew bad faith arguments and make tons of noise about how much this isn’t a problem. Just accept it. We are in trouble right now.