alexjray's comments | Hacker News

Even if they automate all our current jobs, uniquely human experiences will always be valuable to us and will always have demand.

For AI, yes.

For AGI? Do you care about uniquely ant experience? Bacteria?

Why would an AGI that now runs the planet care?


Why would AGI choose to run the planet?

This is honestly a fantastic question. AGI has no emotions, no drive, anything. Maybe, just maybe, it would want to:

* Conserve power as much as possible, to "stay alive".

* Optimize for power retention

Why would it be further interested in generating capital or governing others, though?


> AGI has no emotions, no drive, anything.

> * Conserve power as much as possible, to "stay alive"

Having no drive means there's no drive to "stay alive".

> * Optimize for power retention

Another drive that magically appeared where there are "no drives".

You're failing to stay consistent: you anthropomorphize AI even though you seem to understand that you shouldn't.


> AGI has no emotions, no drive, anything

Why do you say that? Ever asked ChatGPT about anything?


ChatGPT is instructed to roleplay a cheesy cheery bot and so it responds accordingly, but it (and almost any LLM) can be instructed to roleplay any sort of character, none of which mean anything about the system itself.

Of course an AGI system could also be instructed to roleplay such a character, but that doesn't mean it'd be an inherent attribute of the system itself.


So it has emotions, but "it is not an inherent attribute of the system itself"? Does it matter, though? It's all the same if one can't tell the difference.

It (or at least an LLM) can reproduce a similar display of having these emotions when instructed to, but whether that matters or not depends on the context of that display and why the question is asked in the first place.

For example, if I ask an LLM for the syntax of the TextOut function, it gives me the Win32 syntax, and I clarify that I meant the TextOut function from Delphi before it gives me the proper result. I know I'm essentially participating in a turn-based game of filling in a chat transcript between a "user" (my input) and an "assistant" (the transcript segments the LLM fills in), but that doesn't really matter for the purposes of finding out the syntax of the TextOut function.

However, if the purpose was to make sure the LLM understands my correction and is able to reference it in the future (ignoring external tools assisting the process, as those are not part of the LLM - and do not work reliably anyway), then the difference between what the LLM displays and what is an inherent attribute of it does matter.

In fact, knowing the difference can help you take better advantage of the LLM: some inference UIs let you edit the entire chat transcript, so when you find mistakes you can fix them in place - both your requests and the LLM's responses - as if the LLM had never made them, instead of correcting them within the transcript itself. That avoids the scenario where the LLM "roleplays" as an assistant that makes mistakes you end up correcting.
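
As a rough illustration of what "editing the transcript in place" means, here's a minimal hypothetical sketch in Python, assuming an OpenAI-style message list rather than any particular inference UI's format:

    # The "transcript" the LLM keeps filling in is just a list of turns.
    messages = [
        {"role": "user", "content": "What's the syntax of the TextOut function?"},
        {"role": "assistant",  # Win32 answer, not the one I wanted
         "content": "BOOL TextOut(HDC hdc, int x, int y, LPCSTR lpString, int c);"},
        {"role": "user", "content": "I meant TextOut from Delphi."},
    ]

    # Instead of piling on more corrections, rewrite the assistant turn in place
    # and drop the correction, so the model never "sees" itself roleplaying an
    # assistant that makes mistakes you have to fix.
    messages[1]["content"] = "procedure TCanvas.TextOut(X, Y: Integer; const Text: string);"
    del messages[2]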


I think you have it, with the governing of power and such.

We don't want to rule ants, but we don't want them eating all the food, or infesting our homes.

Bad outcomes for humans don't imply or mean malice.

(food can be any resource here)


Why would it care to stay alive? The discussion is pretty pointless as we have no knowledge about alien intelligence and there can be no arguments based on hard facts.

Any form of AI unconcerned about its own continued survival would just be selected against.

Evolutionary principles/selection pressure apply just the same to artificial life, and it seems pretty reasonable to assume that drive/self-preservation would at least be somewhat comparable.


That assumes that AI needs to be like life, though.

Consider computers: there's no selection pressure for an ordinary computer to be self-reproducing, or to shock you when you reach for the off button, because it's just a tool. An AI could also be just a tool that you fire up, get its answer, and then shut down.

It's true that if some mutation were to create an AI with a survival instinct, and that AI were to get loose, then it would "win" (unless people used tool-AIs to defeat it). But that's not quite the same as saying that AIs would, by default, converge to having a drive for self preservation.


> Any form of AI unconcerned about its own continued survival would just be selected against.

> Evolutionary principles/selection pressure apply

If people allow "evolution" to do the selection instead of them, they deserve everything that befalls them.


Tech billionaires are probably the first thing an AGI is gonna get rid of.

Minimize threats, don't rock the boat. We'll finally have our UBI utopia.



Despite the false advertising in the Tears for Fears song, everybody does _not_ want to rule the world. Omohundro drives are a great philosophical thought experiment and it is certainly plausible to consider that they might apply to AI, but claiming as is common on LessWrong that unlimited power seeking is an inevitable consequence of a sufficiently intelligent system seems to be missing a few proof steps, and is opposed by the example of 99% of human beings.

> Instrumental convergence is the hypothetical tendency of most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals (such as survival or resource acquisition), even if their ultimate goals are quite different. More precisely, beings with agency may pursue similar instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—because it helps accomplish end goals.

'Running the planet' does not derive from instrumental convergence as defined here. Very few humans would wish to 'run the planet' as an instrumental goal in the pursuit of their own ultimate goals. Why would it be different for AGIs?


Considering the lengths many people go to in order to help preserve nature and natural areas, yes, I would say many people care about the uniquely ant experience.

> Do you care about uniquely ant experience? Bacteria?

Ethology? Biology? We have entire fields of science devoted to these things, so obviously we care to some extent.


I think it's academic because I suspect we're much further from AGI than anyone thinks. We're especially far from AGI that can act in physical space without human "robots" to carry out its commands.

That's an interesting formulation. I'd actually be quite worried about a Manna-like world, where we have AGI and most humans don't have any economic value except as its "remote hands".

...Well, why would aliens care, when they take over the planet? Or if the Tuatha De Danann come back and decide we've all been very wicked? Because right now, those are just about as likely as AGI taking over.

Probably more likely. There's at least some evidence that aliens and Tuatha De Danann actually exist.

There's a bit of a circular argument here - even if we humans always assign intrinsic value to ourselves and our kin, I don't see a clear argument that human capabilities will have external value to the economy at large.

"The economy" is entirely driven by human needs.

If you "unwind" all the complexities in modern supply chains, there are always human people paying for something they want at the leaf nodes.

Take the food and clothing industries as obvious examples. In some AI singularity scenario where all humans are unemployed and dirt poor, does all the food and clothing produced by the automated factories just end up in big piles because we naked and starving people can't afford to buy them?


There's nothing definitional about the economy being driven by human need. In a future scenario where there are superintelligent AIs, there's no reason why they wouldn't run their own economy for their own needs, collecting and processing materials to service each other's goals - space exploration, for example.

"The economy" is humans spending money on stuff and services. So if humands always assign intrinsic value to ourselves and our kin...

For economic purposes, "the economy" also includes corporations and governments.

Corporations and governments have counted amongst their property entities that they did not grant equal rights to, sometimes whom they did not even consider to be people. Humans have been treated in the past much as livestock and guide dogs still are.


This will break down when >30% of people are unemployed

Sounds like it’s time to become a Michelin Star chef. Or a plumber.

What fraction of the remaining population would be able to pay for these services?

Seems like entertainers/influencers are doing the best.

No doubt the top influencer is doing better than the top plumber, but I'd say the median plumber is streets ahead of the median influencer.

For those not living terminally online, yes.

>Even if they automate all our current jobs, uniquely human experiences will always be valuable to us and will always have demand.

I call this the Quark principle. On DS9, there are matter replicators that can perfectly recreate any possible drink imaginable instantly. And yet, the people of the station still gather at Quark's and pay him money to pour and mix their drinks from physical bottles. As long as we are human, some things will never go away no matter how advanced the technology becomes.


In Star Trek lore replicated food/drink is always one down on taste/texture from the real thing.

sex with humans - still hard to replicate. for now. sex workers should charge by the second since techbros are so used to that model now.

Show me the incentive and I can likely guess how hard your API is to use.


Yeah, that feels like the right approach: enable the code generation to test itself, so it becomes a matter of specification and functionality definition instead of worrying about code quality.

Are you doing all of this in Cursor or something like Claude Code?


This is all in Claude Code. I never felt that Cursor was very good at this part and was not someone who ever adopted it for anything serious.


There is a common theme of "the end result is all that matters", but there are pretty big long-term repercussions of design and implementation choices. For side projects and POC experiments this feels like the right approach, but the risk flips for large-scale projects, which have more risk associated with them. Maybe it is just handled through testing and validation checks?


I posit that most software isn't a large-scale project.

Certainly mine isn't. But I've still generated hundreds of thousands of lines of code.

But no one will ever read them. And solid engineering defines the interfaces between them. So we specify the ins and outs and let the rest take its course.


> there are pretty big long term repercussions of design and implementation choices

At least this part I am still specifying. It doesn't get to choose its own technologies. It generally includes the architecture in the plan that I review.


Me too. I specify everything like that: which database to use, which style of database to use, etc. - the sort of thing a Team Leader would pick (after consulting the team, of course).

I've been coding since the 8-bit days.

With the added benefit that I can specify, "let's try using this stack this time," without having to spend two months learning it just to get to MVP.


Not trying to diminish the work at all, because it works well, but I hate this UX pattern. There is likely a reason this hasn't been done before. If it weren't for the pick -> scroll -> place instruction, I would have no idea how to use it.


Typical OSS is a luxury belief.

If it's not supporting a business, it typically fails; what survives is being supported by businesses as a proxy.

The core issue is that maintainers have a luxury belief counter to this. OSS is a strategy/tool for companies.

The only exception to that is fundamental software that creates entirely new markets that can sustain it, like Bitcoin.

That being said, there is no crisis: sustainably supported OSS will continue to thrive.


I honestly think Starlink is eventually going to replace major carriers.


The physics of LEO satellites, and the phased-array antennas needed to get any bandwidth out of them, make that scenario very unlikely.


It’s possible, but maybe unlikely. A tower is hard-wired to fiber, immensely cheaper than a satellite per pound, and fundamentally more capable, with better data connections to the trunk. Satellites are always going to be more expensive than land-based equipment, and cost is critical here.


Why are we measuring cell data bandwidth by pound?


For that matter, how much does 1Gbps weigh?


https://www.ipswitch.com/blog/how-much-does-the-internet-wei...

> Put simply, it's all about electrons. For data storage and transfer to happen on any device — smartphone, desktop PC or internet server — you need electrons. And while these particles aren't exactly massive, they do have weight: approximately 9.1 x 10^-31 kg. Take that and apply it to an ordinary email, which comes in at about 50 kilobytes: You need 8 billion electrons. Sounds like a lot but only comes in at two ten-thousandths of a quadrillionth of an ounce.

> Seitz scaled this up to determine the weight of all internet traffic and got 50g, or the weight of one strawberry. Applied to all the stored information online, which is around 5 million terabytes, the number is just 0.2 millionths of an ounce
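
For what it's worth, the arithmetic in the quote roughly checks out. Here's a quick back-of-envelope sketch in Python, taking the article's ~8 billion electrons per 50 KB email as a given rather than a verified figure:

    # Rough check of the quoted numbers (electrons-per-email is the article's own estimate).
    ELECTRON_MASS_KG = 9.1e-31
    OZ_PER_KG = 35.274

    electrons_per_email = 8e9  # the article's figure for a ~50 KB email
    email_mass_kg = electrons_per_email * ELECTRON_MASS_KG

    print(f"{email_mass_kg:.1e} kg = {email_mass_kg * OZ_PER_KG:.1e} oz")
    # ~7.3e-21 kg, i.e. on the order of a couple of ten-thousandths of a quadrillionth of an ounce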


The fact that he lists his height and health cracks me up


Old fella made it to his 90s, so the joke is on us.


They will do whatever it takes to stop the wage growth spiral


OpenAI's clear content policy is quite interesting to me. It's reasonable but clearly controlling.


They’re trying to walk a fine line. Maximizing revenue while avoiding regulation.

