Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Extra LLMs make it harder, but not impossible, to use prompt injection.

In case anyone hasn't played it yet, you can test this theory against Lakera's Gandalf: https://gandalf.lakera.ai/intro



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: