
A broad reading of the language could include even classic target-seeking missiles. A heat-seeking missile is a rudimentary heuristic "AI agent" that makes "decisions" about its direction of travel by seeking hotspots. Sometimes it hits the target the shooter wanted dead, but other times it hits a different target, because the "AI" made the wrong "decision".

It's easy to use scare quotes here, because the automated behavior is so direct and understandable that we don't really see it as AI. Even when it does something other than what the human intended, it doesn't seem like a rebellious robot, just a heat-seeking missile that happened to be near an unexpected heat source, which it of course followed, and therefore hit the wrong thing. The general idea that the human is giving high-level orders and a robot is making local decisions in an attempt to carry them out is not that different though. The main difference is that the local decision logic is nowadays getting more complex than "find hot thing nearby". But that too is a gradual trend: even old heat-seeking missiles started getting more complex logic, to try to avoid being misled by flares.
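The "find hot thing nearby, but don't be fooled by flares" logic described above can be sketched as a toy heuristic. Everything here (the function name, the intensity units, the flare threshold) is a hypothetical illustration of that kind of local decision rule, not actual guidance logic:

```python
# Toy "seeker" heuristic: steer toward the strongest nearby heat source.
# All names and thresholds are hypothetical illustrations.

def pick_target(hotspots, flare_threshold=1000.0):
    """Pick the hottest source below the flare threshold.

    hotspots: list of (intensity, bearing_degrees) tuples.
    Returns the chosen bearing, or None if nothing plausible remains.
    """
    # Crude counter-countermeasure: flares burn far hotter than engines,
    # so discard implausibly intense sources.
    plausible = [(i, b) for i, b in hotspots if i < flare_threshold]
    if not plausible:
        return None
    # "Find hot thing nearby": steer toward the strongest remaining source.
    return max(plausible)[1]

# Locks onto the jet exhaust (900) and ignores the flare (5000).
print(pick_target([(300.0, 10.0), (900.0, -45.0), (5000.0, 90.0)]))  # -45.0
```

The failure mode the parent comment describes falls out naturally: if an unexpected heat source happens to score highest, the same rule dutifully steers toward it, and the "wrong decision" is just the heuristic working as designed on bad input.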



Well put. It's going to be hard to have a bright-line test, especially when the behavior is implemented in secret software.

The definition of "chemical weapons" is also problematic. Example: https://en.wikipedia.org/wiki/White_phosphorus_use_in_Iraq



