
I have issues with that estimate, but even accepting it, it was still a big improvement over WW2, WW1, colonization, the Mongol invasions, etc. And certainly preferable to a global thermonuclear war, which was a real possibility.

Have you heard of https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alar...



Humans very deliberately created nuclear weapons, with the intention of using them to kill enormous numbers of people, which they did. The fact that humans created something horrible and then used it fewer times than they could have is hardly a triumph of humanity, and it certainly isn't strong evidence for the argument that humans are limited in their capacity for doing bad things because they supposedly share values with other humans.


Automated systems told them to start a nuclear war and humans didn't.


I don’t think that system was anything close to what would be described as an AI. Wasn’t it just a radar system intended to identify ICBMs?


Think of the system as (human + the warning computers + radars).

Thanks to the human element, the system shared human values and decided not to start a thermonuclear war, despite that being the recommended course of action.

If it had been a fully AI-driven system, it would probably have just started the war.


But that system was designed (by humans) to give information and warnings to a human. Of course it would be a mistake to design a system like that and then remove the human.



