The issue of “can AIs that are plausible developments from current technology meaningfully be assigned obligations?” is a different one from “assuming an AI has obligations and the ability to reason about what is necessary to meet them, will that necessarily cause it to prioritize self-preservation as a prerequisite to all other obligations?”

