
> "Fire!"

> "I'm unable to help, as I am only a language model and don't have the ability to process and understand that."



Having seen ChatGPT's results when you ask it to solve an ethical puzzle and substitute in various demographics to see how its answer varies (namely, with extreme bias), I'm far more concerned about how its fundamental biases will be woven into systems that develop weapons of war and provide intelligence.


Fundamental bias against the demographics it's currently biased against seems to be exactly what the American military wants. The foundational models are trained on mountains of propaganda and propaganda-inspired content directed against America's enemies, after all, far more than in the other direction.


It will "hallucinate" and claim it hit the target successfully when it actually didn't.


And in the event that a human being accidentally attacks a hospital[1], the AI will successfully argue that it never happened, since attacking a hospital would violate the acceptable use policy.

1: https://www.msf.org/kunduz-hospital-attack-depth



