
On the other hand, I think it's quite strange that a talented entrepreneur and a physicist, among others, are considered a source of expertise in a field they have nothing to do with, per se. I don't see any of the top AI/ML researchers voicing these kinds of concerns. And while I highly respect Musk and Hawking, and agree that they are rational people, their concerns seem to be driven by "fear of the unknown" more than anything else, as another comment pointed out.

Whenever I see discussions about the dangers of AI, they are always about those Terminator-like AI-overlords that will destroy us all. Or that humans will be made redundant because robots will take all our jobs. But there are never concrete arguments or scenarios, just vague expressions of fear. Honestly, if I think about all the things HUMANS have done to each other and the planet, I can hardly imagine anything worse than us.



> their concerns seem to be driven by "fear of the unknown" more than anything else

It seems that their concerns are always dismissed based on the current state of the art, which is short-sighted to say the least.


> I can hardly imagine anything worse than us.

A universe full of computronium, solving the Collatz conjecture?


Would it implode if the conjecture was disproved?


Maybe, but that's the worst-case scenario: the AI can't prove the conjecture, nor disprove it, nor prove its unprovability, because the shortest proof requires 10^200 terabytes.
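
For anyone unfamiliar with the reference: the Collatz map itself is trivial to state and to check for any particular n; the open question is whether every starting value eventually reaches 1. A minimal Python sketch, purely illustrative (the function name is mine, not from the thread):

    def collatz_steps(n):
        # Apply the Collatz map (n -> n/2 if even, 3n+1 if odd)
        # and count the steps until the sequence reaches 1.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 27 famously takes 111 steps

Checking individual numbers like this is easy; what the hypothetical computronium would be grinding on is a proof (or disproof) covering every n at once.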



