Hacker News

Come on man, you don't actually believe this. If you did, you'd be a psychopath, and you certainly seem to care about people's lives when it comes to things like climate change. Just because you don't think AI doom is as likely doesn't mean you should pretend that in that one case you suddenly have a nihilistic view of human life -- rhetoric matters.


I am not saying it would be clearly good if AI wiped out humanity, I'm just also not saying it would be clearly bad from a universal perspective.

There's no way to know until it all plays out, and either way I won't be here when it does.

But IMO to assume our continued existence is universally a positive (or of any universal consequence at all) is a hefty dose of narcissism.


There is no such thing as "universally a positive" unless you assume one. Not just in the sense of "there is no one true universal moral value function", but in the sense that "universal moral value function" is essentially gibberish -- as is "bad from a universal perspective". Humanity being wiped out would not be bad from a universal perspective, because nothing is bad from a universal perspective. When we talk about good and bad, we always implicitly couch it in "from one or more human perspectives, ...".



