How will services like TikTok ever move past this without strict identity verification and a supporting global legal system? This seems like a common issue for every photo/video content provider. We've seen this exact same article written about Facebook's content moderation.
They have talented AI developers. They have a massive amount of known harmful content from past moderation. They still need to surface harmful content to humans to check.
Unlike software, human moderation work is literally non-scalable. There just isn't enough value generated per moderator to compensate them anywhere near the level of software developers.
There's probably something workable between "$200k salary and great benefits" and "$10 a day to be traumatized" that would serve better than the current system.
Every time you hear "social media should be unmoderated and it should be impossible to deplatform people", you should think of all the traumatizing-but-not-actually-illegal material described here that those people are effectively asking to have shown to everybody.