Anecdotally, I had a neighbor who programmed the guidance systems for bombs, and the only reason I remember him is because immediately after introducing himself as such, he followed up with, "But I'm not the one who's dropping them. By making them smarter I can save lives".
I think that no matter how technically intelligent a field's operators are, they are still subject to the same dissociations as everyone else.
You are absolutely right and I can totally relate to both your experience and your neighbor's.
I don't program guidance systems for bombs, but I do program marketing tools which are, in essence, tricking consumers into buying stuff. I dissociate myself from that issue by telling myself that any commercial relationship is based on tricking the other party into buying more stuff, but I would totally understand if someone objected that my software is not morally acceptable to them (and I would politely suggest that they go bother someone else :p ).
Further down the line, we could end up discussing whether living in a society based on capitalism is "right" or "wrong". I would totally understand if people considered that "not an HN-worthy submission", but inside a thread on the moral, philosophical, and social consequences of AI, I think it could legitimately come up as a subject... and be down-voted if need be, not flagged as off-topic.