
Indeed, a Stanford study from a few years back showed that the image datasets used by essentially everybody contain CSAM.

Everybody else has teams building guardrails to mitigate this fundamental existential horror of these models. Musk fired all the safety people and decided to go all in on “adult” content.



What Stanford study?

If you mean the one titled "Generative ML and CSAM", I can't see any mention of your claim in that study. Care to explain?

https://fsi.stanford.edu/publication/generative-ml-and-csam-...



