The question of “can AIs that are plausible developments of current technology meaningfully be assigned obligations?” is a different one from “assuming an AI has obligations and the ability to reason about what is necessary to meet them, will that necessarily cause it to prioritize self-preservation as a prerequisite to all other obligations?”