
How will services like TikTok ever move past this without strict identity verification and a supporting global legal system? This seems like a common issue for every photo/video content provider; we've seen essentially the same article about Facebook's content moderation.

They have talented AI developers and a massive corpus of known harmful content from past moderation, yet they still need to surface suspect content to humans for review.
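The structure that comment is gesturing at can be sketched as a confidence-banded triage pipeline: a classifier trained on previously labeled harmful content auto-handles the items it is sure about, and only the uncertain middle band reaches human reviewers. This is a minimal illustration with hypothetical names and thresholds, not a description of TikTok's actual system:

```python
# Minimal sketch of model-assisted moderation triage (hypothetical names
# and thresholds; not any platform's real pipeline). A classifier scores
# each item, and only the uncertain middle band is queued for humans.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # model is near-certain the item is harmful
AUTO_ALLOW_THRESHOLD = 0.05    # model is near-certain the item is benign

@dataclass
class Item:
    item_id: str
    harm_score: float  # classifier output in [0, 1], from prior labels

def triage(item: Item) -> str:
    """Route an item based on the classifier's confidence."""
    if item.harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # no human ever sees it
    if item.harm_score <= AUTO_ALLOW_THRESHOLD:
        return "auto_allow"
    return "human_review"       # ambiguous or novel content lands here

# Everything between the thresholds -- the cases the model hasn't seen
# or can't decide -- still ends up on a human moderator's queue.
print(triage(Item("a", harm_score=0.99)))  # auto_remove
print(triage(Item("b", harm_score=0.50)))  # human_review
```

However wide you make the automated bands, the residue in the middle is exactly the traumatizing material the article describes, which is why more AI alone doesn't eliminate the human queue.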



They could try training and compensating moderators more like software developers, for one.

Ultimately maybe it just doesn't make sense for social networks to be centralized monoliths.


Unlike software, human moderation work simply doesn't scale: each moderator generates far too little value per person to be compensated anywhere near software-developer levels.

Besides, it's a matter of supply and demand.


There's probably something between "$200k salary and great benefits" and "$10 a day to be traumatized" that works better than the current system.


> global legal system

Every time you hear "social media should be unmoderated and it should be impossible to deplatform people", you should think of all the traumatizing-but-not-actually-illegal material described here that people want shown to everybody.


I haven't found anything traumatizing. It's the real world; living in a bubble serves no one other than small children.


Do they currently have an incentive to move past this?



