Content moderators and their pay should be a footnote - the real story is TikTok's treasure trove of criminal evidence, presumably with a lot of PII. It is telling that there isn't any effort to automatically refer evidence to local authorities; the crimefighting would be like shooting fish in a barrel. It would even help the content moderators by reducing caseload after a few highly publicized cases of TikTok criminals getting automatically caught and prosecuted. That this is not done suggests TikTok is a giant kompromat honeypot.
I think this is false - the will to prosecute by authorities is often low. Almost every online platform has great evidence of various frauds and crimes. I've heard Facebook reports 20 million images or more PER YEAR. I've had a few experiences. In one, someone was cashing bogus checks and we got them to provide the account they'd been asked to deposit them into. There was zero interest in taking a report or looking at whose account that was (a person in the org who'd been cutting the checks stopped showing up, and we were pretty sure they were involved). Endless card fraud on tape at merchants - again, crickets.
> the will to prosecute by authorities is often low
I hate that this type of statement always pops up as soon as I see any mention of law enforcement. While it might be true for some parts of the US (and probably other countries), it isn't true for the majority of people involved in this field. It's just incredibly hard (thankfully) to convict someone of a crime. It requires a lot of evidence and paperwork. Meanwhile, there's a pretty limited amount of resources to work on this kind of thing.
It would be impossible for 'authorities' to investigate 20 million images they receive from Facebook. But implying that authorities don't investigate anything, and don't even want to, is just flat out wrong. For example, here's a story about an awful child predator that was caught in The Netherlands, because American authorities started an investigation into reported images: https://en.wikipedia.org/wiki/Amsterdam_sex_crimes_case
This case is also an example of what authorities tend to focus on, which is producers. A lot of effort is spent trying to prevent producers/abusers from causing more harm, and that same effort can't also be invested in investigating brokers/spreaders of 'known' CSAM. Time is limited for everyone, and that goes for authorities too.
And yes, a probable majority of fraud cases don't get picked up, even if they seem obvious and easy to solve. This is because the man-hours needed to actually get someone convicted are huge, and authorities usually focus on cases involving violence. I personally had to cut fraud investigations short because preventing a wife from getting murdered by her husband or stopping an illegal arms shipment took precedence. And I can't say I feel guilty about that.
And you shouldn't. But it's still a problem that lots of fraud, scams, etc. just get ignored.
There's so much fraud out there that looks easy to stop, because it's clear who does it and it's clear that they do it (online, with plenty of evidence). Yet when you bring it to the attention of the authorities, they'll look at the individual case and say "okay, but it's only 50€, there's not enough public interest", which I could understand, if it weren't a commercial-grade criminal operation that does the same thing thousands or tens of thousands of times.
I get that paperwork sucks and investigations take a lot of time and often get nowhere because something just doesn't materialize, but I get the feeling that we're increasingly policing non-violent crime the way Google & Co. do tech support: if there isn't a very public issue that the media is actively reporting on, it gets ignored.
And if it's a resource issue: get more resources, it'll pay for itself many times over. In a recent example in Germany, a clan from Turkey had been defrauding the elderly for years and ran up dozens to hundreds of millions of euros in damages; it could've been stopped very quickly if there had been more interest from law enforcement. I'm sure we could've paid a person or two with that kind of money.
For fraud, also, there are civil remedies. If you really want it prosecuted, sue in civil court, win, and refer it for criminal prosecution. By and large prosecutors prioritize prosecuting acts for which there are no civil remedies.
Yeah, the failure is unlikely to be on the platform side.
I visited a police station in person to report a $3000 fraud with clear evidence, including document forgery. The police told me to get a lawyer, handed me the fraud report form, but told me I'd be wasting my time if I filled it out.
I've also had a laptop stolen, tracked with Find My Mac, and again, the police handed me the report form and told me to contact my insurance.
My dad was an auto mechanic, and the shop had one of the garage vehicles stolen. The thieves broke a window, grabbed a set of keys, and took off with the car. The police came out, took a report, and were leaving when the garage manager asked what they would do next. Would somebody come and dust for prints?
The cop said something like "What's your deductible? Something like $250? Well, how much work do you think we should do for $250?" (this was in the '80s).
Cops are too busy making 6 figures directing traffic and hanging out inside their idling SUVs all day to investigate any kind of property crime. I'm sure they would say they just need even more money to buy more guns and then they would help.
I hear you guys on the (relatively small time) financial fraud / theft, but the article mentions far worse. If you went to the police with a video and PII of a violent crime, would they still not open an investigation?
> If you went to the police with a video and PII of a violent crime, would they still not open an investigation?
The short answer... It depends.
People always assume a lot of things when they imply that video evidence makes something an easy case to solve. Identifying people in videos isn't as easy as it seems, for example. It's also not safe to assume that the person posting a video is also the person who filmed it, nor that the location data is correct. So who's the suspect?
Also, where does TikTok send the video? Do they send it to Police in a city where a video is geo tagged, or where it was uploaded, or where the user account was created?
This type of stuff seems trivial when reading an article like this, but it isn't in real life. It's all stuff that may take weeks to figure out and all that time can't be spent on other cases. So yes, sometimes (maybe even often) such reports don't lead to cases. And even if they did, there's no guarantee it would lead to anything.
Wouldn't Apple be a better company to handle crime, though? iPhones are an even bigger treasure trove of criminal evidence - they're filled with pictures of crimes, documents proving crimes, and chat messages admitting to crimes.
Apple could report so much wrongdoing to the authorities, but instead they quietly support all those criminals.
Apple and Google already do this. In fact, Google is so dedicated to the cause that they will ban you from their services even if the police think you did nothing wrong!
Wait, what kind of crime?
I am not using TikTok; I always thought it's for 15-year-olds.
So what kind of crime is posted there, and who would be so daft as to post incriminating video footage of themselves?
And even if true: if this is owned by China, and I smoke a joint in front of a camera... what are they gonna do? They don't know where I am, who I am, or who has jurisdiction.
If they report me to the Canadian authorities, they're SOL; it's broadly legal to smoke that in Canada.
If they report me to the UK government, nothing will happen.
If I never go to china, nothing will happen, so what is the basis of your thoughts on this?
What's most disturbing is the realization that a measurable percentage of humans are just f*cked up.
Considering how many reports there are of Facebook, YouTube, TikTok, and other sites where content moderators are suffering from having to perform their jobs, it suggests that there's a great deal of really terrible stuff going on - and worse, that the people involved are filming and attempting to share it.
This is a serious thing, and it paints humanity as being far darker than it would seem from the surface. It also suggests that the apocalyptic movies may not be so far off when they suggest that humanity will revert to open barbarism if we're faced with a catastrophe large enough.
A relatively small percentage of people can post a lot of horrible stuff; TikTok has about 1 billion – BILLION – users who are active at least once a month. If only 1% of those people are fucked up, and they all post one fucked-up thing a month, then we'd be talking about 10 million videos/month.
I don't know how accurate that 1 billion number of users is exactly, but it's probably in the right ballpark. In reality there are probably a few million "serial posters" of horrible stuff. Either way, with numbers like that it's very easy to get millions of undisputedly horrible videos.
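To make the back-of-the-envelope arithmetic explicit (the ~1 billion MAU figure and the 1% rate are both assumptions from the comment above, not verified numbers), a quick sketch in Python:

    # All inputs are assumptions from the comment above, not verified figures.
    monthly_active_users = 1_000_000_000  # commonly cited ~1B TikTok MAU
    bad_actor_rate = 0.01                 # assume 1% of users post something horrible
    posts_per_bad_actor = 1               # assume one bad post per month each

    bad_videos = monthly_active_users * bad_actor_rate * posts_per_bad_actor
    print(f"{bad_videos:,.0f} bad videos/month")  # 10,000,000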
It wouldn't be 10 million videos/month, it would be 10 million active users. This is a notable difference, because each of these users can upload numerous videos, and TikTok users who have posted content are much more likely to keep posting, far more frequently than just one video per month.
In theory it now doesn't seem too difficult for data science to identify who these people are, based on the amount of metadata we all generate. Are there attempts to be preemptive in this regard?
The same kind of "data science" that associated many people and businesses named after the Egyptian goddess with an Islamic terror group and took automatic action against them?
I think we need to move to some way to identify actual people on the internet. I don't mean that in a "use your real name and upload your passport" kind of way, but in a "we can 100% reliably tell you've been a twat in the past, whoever you are, so no new signups for you" kind of way that also accounts for all the various interests such as the right to privacy and anonymity.
Many platforms use phone numbers for that now, which is obviously far from perfect as I can easily get 100 of those today if I wanted to.
That's not a ludicrous idea. India requires activity to be logged to a specific individual, for instance. Seems a bit dystopian for my sensibilities though.
It doesn't have to be that apocalyptic. Take literal traffic enforcement, for example: driving faster than the speed limit gets you traffic fines, and cities have cameras for red-light or toll-booth violations and keep databases of offenders.
"Databases of offenders" is different from "database of people who haven't yet done anything but statistically we think might". I could be reading it wrong, but it sounded to me like the parent comment was talking about predicting people who might post such videos _before_ they did. I don't think you'd need to theorize about using data science to ban people after they post some number of violating videos.
Social media companies can be passive, reactive, proactive, or predictive in their efforts to ban bad actors. How much effort are they putting in at each level? It seems to lean towards the least effort.
Just because we can imagine it going wrong doesn't mean it will go wrong or it must go wrong. The role of science fiction is to explore possible futures, not to make the future taboo.
Do we need to be way more careful with how we design our society? Yes. Does it make sense to abandon possible solutions because of a movie? No.
>I don't know how accurate that 1 billion number of users is exactly
Ask Elon Musk to offer to buy it, then we'll see how accurate those numbers are.
In reality, I never believe people's published numbers. OF COURSE they are inflated. Their entire valuations (or at least the majority of them) depend on those numbers being as high as possible.
I didn't even check the source of those numbers; it was just the first thing that came up in a very quick search. It doesn't really matter: even with 500 million or 100 million you get the same "a small percentage of a very large group can cause a lot of havoc" effect. You see the same when you drive to work: you can easily encounter over 100 other drivers, and it takes only a very small percentage of them to give the perception that "people are such bad drivers", when in fact it's just 1% of the 100 people you encountered on that trip.
I find it both depressing and comforting; it's comforting to know most people aren't that bad, and it's depressing that such a small group of assholes can fuck it all up.
It doesn't even matter. Going back to the car example: say you pass 1-2 thousand cars on an hour's drive. If you have 2-3 scary experiences, that makes you think "drivers here suck!" So a sample of roughly 0.1% makes you think a problem applies to 50% of drivers.
If I watch TikTok for an hour a day for a week, and I watch 4 videos/minute on average, I'll watch over 1,500 videos a week. If only one or two of those are VERY traumatizing (e.g. showing death), I'd think TikTok has a big moderation problem. That's an ultra-low number of users making it seem like there is a huge issue.
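Spelling out that viewer-side arithmetic (the watch rate and the "one or two bad videos" are the assumptions stated above):

    # A tiny number of bad videos dominates the viewer's impression.
    videos_per_minute = 4
    minutes_per_day = 60
    days = 7

    videos_watched = videos_per_minute * minutes_per_day * days
    traumatizing = 2  # assume just two bad ones slipped through

    print(videos_watched)                          # 1680 videos/week
    print(f"{traumatizing / videos_watched:.2%}")  # ~0.12%, yet it colors everything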
ByteDance is pre-IPO. However, you are accusing Google, Facebook, Snapchat, and Twitter of securities fraud. What is the latest from Musk on Twitter's alleged securities fraud?
I'm not accusing anyone of anything other than padding their numbers. How you interpret that is up to you.
Do you really believe TikTok has exactly 1 billion users, or 999,990,000+ users? Anyone who says it's 1 billion is padding the numbers. Nobody is going to call that 999 million.
If the number is anything close to being rounded, the number is padded. Unless they are showing you an exact number from the time the count was read, it is being padded.
TikTok having an estimated 1bn users is quite low when you compare it to the likes of Google's YouTube at two and a half times that (2.5bn). If anything, I expected TikTok to have passed the 2bn mark already considering its reach; most notably it's surpassed by Meta's subsidiaries and YouTube.
I actually think modern society is horrible in all sorts of ways leading to, among other things, rising rates of mental health issues. But I fail to see the connection between rising rates of depression and mental health issues and what's being discussed here.
I am not sure how this relates to the OP's comment about "f*cked up" videos being posted. There is no connection between depression and becoming interested in posting gruesome videos.
Diagnoses of depression going up (as shown in the first link) don't necessarily mean the incidence of depression is going up. It could be that more people are trying to treat it than before.
That view goes back much farther than apocalyptic movies. In 1651, Thomas Hobbes wrote:
>the same is consequent to the time, wherein men live without other security, than what their own strength, and their own invention shall furnish them withal. In such condition, there is no place for industry; because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving, and removing, such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short.
Hobbes, like many philosophers, did not have much faith in human nature.
"The heart is more deceitful than all else
And is desperately sick;
Who can understand it?" - Jeremiah
"Then the Lord saw that the wickedness of man was great on the earth, and that every intent of the thoughts of his heart was only evil continually." - Moses, about man before the flood.
"This is the judgment, that the Light has come into the world, and men loved the darkness rather than the Light, for their deeds were evil." - the Gospel of John
Humanity has suffered for millions of years. Only recently have we been able to live peacefully, and most of the population does not live in Japan or the Netherlands. And even then, just 70 years ago those places went through a world war.
We have a long way to go until everyone on the planet can grow up in conditions where you turn out 'default good/well-meaning towards others'.
I don't believe that humanity has suffered for millions of years. From what I've read, life before agriculture was decent. The trouble really seemed to begin with agriculture, land, and people ownership.
The population explosion also began with agriculture, since more offspring meant more people to work your lands. Hunter-gatherers breastfed longer (so more years between pregnancies). Plus they were relatively mobile, so having more children was not a practical goal.
Many of the diseases and other problems we have now are related to population increases in small areas (bad sanitation, etc.).
Then you've read some terribly misleading, romanticized false version of history. Hunter-gatherer lives should not be romanticized as being desirable. They were (and are, in the few places they still exist) incredibly hard, dangerous and miserable existences.
Think of it as the most extreme form of poverty. No clean water, no shoes, malnutrition, limited protection against the weather. Every day going about basic tasks carries the risk for death, disease and disablement.
There is a reason that human population didn't rapidly increase until relatively recently in our collective hundred-thousand-year history. It's because most humans died before reaching reproductive age. Even in the few pockets of the globe with conditions favorable to survival, periodic events (weather, disease, rival groups, over-hunting, etc) wiped out entire tribes in terrible ways.
>From what I've read, life before agriculture was decent.
I've read arguments in both directions and I'm extremely suspicious that they are almost all basically politically-driven. It goes like this:
If hunter-gatherers are predisposed to more or less gender or economic equality, or certain social structures, then that should perhaps inform how we construct our own modern societies.
In order to escape the clear appeal to nature fallacy, it then becomes necessary to argue that not only were prehistoric societies constructed in a certain way, but they were also extremely well-off. Therefore we clearly must "reject modernity, retvrn to monke", and embrace True Human Nature embodied by some cultural tradition or ideology.
The exact inverse would obviously imply that we must embrace a certain idea of technological or social progress in order to escape the "natural state of humanity" as fast as we can.
However, was the human hunter-gatherer experience ever all that stable or predictable, such that we can draw out either of those major conclusions? I think a decent null hypothesis might be that all the extremes of human experience and social structure occurred to some degree, and that the overall average and distribution fluctuated quite a bit over time according to weather, migration patterns, and accidents of cultural evolution that humanity at any level had basically no control over.
I find it really hard to believe that we can possibly have enough evidence in any relevant discipline to rule this out.
> The Younger Dryas (c. 12,900 to 11,700 years BP[2]) was a return to glacial conditions which temporarily reversed the gradual climatic warming after the Last Glacial Maximum (LGM, c. 27,000 to 20,000 years BP). The Younger Dryas was the last stage of the Pleistocene epoch (c. 2,580,000 to 11,700 years BP) and it preceded the current, warmer Holocene epoch. The Younger Dryas was the most severe and long lasting of several interruptions to the warming of the Earth's climate...
Anatomically modern humans have only been around for ~300,000 years. Behaviorally modern humans, more like 150,000 years (or as recently as 60,000 years, depending on who you ask). We are really a very young species.
What bothers me is that many basic ethical principles fall by the wayside when the topic shifts to our own children. The reason for that is obvious: we are the descendants of organisms that valued reproduction. It's nonetheless an interesting discussion to have.
Is it ethical to create a sentient AI that might suffer? -> we already make similar decisions millions of times per day
Don't treat humans as means to an end, instead treat them as ends -> we create other humans to satisfy our current needs in a world characterized by struggle
And of course as you have pointed out, some groups consider that it's better by default for a child to exist in adverse starting conditions (whilst also negatively impacting other people) than to not exist
The people who decide bringing new life into this world is unethical will quickly be replaced by those who think it _is_ ethical. I understand your reasoning, but "don't have kids" can never be the answer.
It might never become popular but every individual aware of the argument can then choose to reproduce or not (assuming they aren't barred from it for other reasons). In other words, we can't really hide behind the survival of the species as a whole when we consider our own actions and their ramifications.
We know the great mass of humans will carry on as before and there's little we can do to change that, but we are also aware that we have this responsibility that we can act on in our own life. Every person in the position to reproduce has a choice laid out before them. For the majority of us, it's the single most impactful ethical decision we will ever make.
From what perspective are you using the word "quickly" here? On human scales it isn't quick: China limited families to one kid for how long? On cosmological scales, humans have barely even existed.
> What's most disturbing is the realization that a measurable percentage of humans are just f*cked up.
I already knew that. I even know the percentage roughly. According to the Pareto principle, 80% of people are not totally fucked up, but the 20% that are cause 80% of the problems.
No, you don't. This "principle" is not a law of physics, it's just a rule of thumb that may or may not be wildly off the mark for an arbitrary distribution. Its common usage is like a statistical horoscope.
Having worked as a data scientist at multiple companies (from FANG to startups), the first thing I look at when I get my hands on data is whether the Pareto principle holds.
I still haven’t found one company where this principle didn’t show up.
What does this prove? If you have lots of data and dimensions, I bet you could just as likely find distributions that are roughly 50/50, 60/40, 90/10, 100/0 if you looked for them.
There's a nested Pareto, in that 20% of that 20% causes 80% of the 80%. Considering that psychopathy (or whatever the updated term is) runs at a bit more than a percent of the population...
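Spelled out, taking the 80/20 split as given (which, per the objection above, is a heuristic rather than a law):

    # Apply the 80/20 rule twice: if 20% of people cause 80% of the
    # problems, and the same split holds within that group, then 4%
    # of people cause 64% of the problems.
    people, problems = 1.0, 1.0
    for _ in range(2):
        people *= 0.20
        problems *= 0.80

    print(f"{people:.0%} of people -> {problems:.0%} of problems")  # 4% -> 64%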
It used to be more obvious. The internet used to have popular gore sites and all sorts of horrible content that would find its way online and even go viral in a number of cases. I think law enforcement, automated detection, anti spam measures, linking accounts to humans, and the centralization of content around just a few sites that had strict rules made it seem like this content barely existed and that people behaved sociably on the internet. I think people lost sight of the rough edges.
TikTok is much harder to censor in automated ways, clearly. People also find ways around the censors. So all that awful stuff may have found a way back onto the internet.
You should have seen what terms most people searched for when they used a search engine back in the day. I doubt things have changed much since then. It's all of the worst things you can imagine, and probably even worse than you think.
What's truly tragic is that these "fucked up people" will always be the loudest voice in any conversation (be it fucked up by genetics, upbringing, hormones, drugs or ideology).
Some people are depraved and simply lack a moral compass. What's scary to think is these are, statistically, people you pass on your way to work. They exist in poor and rich countries, among the pious and among the irreverent and in between.
It's the same kind of people who adopt[1] in order to get government subsidies but treat their charges worse than cattle; at least cattle get to graze and do cattle things.
[1] I hope it's not necessary to state that this is a small minority of adoptive/foster parents, but they exist, unfortunately.
> What's most disturbing is the realization that a measurable percentage of humans are just f*cked up.
yeah I think that's the theme of David Lynch movies
> It also suggests that the apocalyptic movies may not be so far off when they suggest that humanity will revert to open barbarism if we're faced with a catastrophy large enough.
I think that's disproven - I saw a study that looked at people's behaviour during the WWII London bombings, and it was mostly characterized by Mutual Aid.
> and it paints humanity as being far darker than it would seem from the surface.
Unless a person intentionally hides from the world, it is hard to not be aware of this. Human trafficking, cartels, pollution, there is a myriad of ways in which humans are just not "great people". Why act disillusioned by it?
>This is a serious thing, and it paints humanity as being far darker than it would seem from the surface.
"Build it yourself" internet bubbles are giving people hilariously wrong impressions of society, and humanity.
These same bubbles led to many millions of people shocked and flabbergasted when Donald Trump won in 2016. There will probably be just as many flabbergasted people when Democrats get thrown out of the House in three weeks. Internet "safe spaces" and censorship of differing opinions give people that impression that society is changing, or has changed. It hasn't.
Anything is possible, but the polls and prediction markets are telling a pretty grim story. Democrats may still hold the Senate, but it’s little better than a coin flip.
The fact that people are downvoting this obvious proclamation is further proof of my original claim. People are in bubbles, unaware of reality. No one should be shocked that Democrats are about to lose the Congress, and yet here we are.
TikTok is pretty good at reflecting yourself back to you - it gives you more of what you give your attention to.
Did you go into TikTok "rubbernecking at the car crash"? If you watch bad content, it will serve more bad content to you.
I've had it reveal things about myself that I wouldn't have articulated, and that aren't my idealized self-perception: e.g. videos of text-to-speech Reddit posts overlaid on someone hopping around a Minecraft obstacle course are compelling to my brain, even if my conscious mind finds them kind of dumb and silly.
The content and the traction it gets are insane, especially considering the target audience. I don't get why it's not banned or heavily restricted. It's a cesspool of fakes, propaganda, semi-hidden advertisement, and other bottom-of-the-barrel grade content.
>I don't get why it's not banned or heavily restricted
What's up with this authoritarian discourse on HN? Since when are we casually calling for censorship of things that are "disliked"?
The megacorps are having videos taken down off so-called private clouds for calling out the ruling class too much. I worry about the Orwellian direction we are heading in.
> Since when are we casually calling for censorship of things that are "disliked"?
Some would argue: since technological progress outpaced society's capacity to cope with it. One of the possible answers for the Fermi Paradox is "everyone invents Facebook shortly after computers", and it seems a lot more plausible than it did ten years ago.
It's hard not to see surveillance capitalism as a variant of what Orwell was warning about - Facebook etc. are the telescreens and two-minutes-hate from 1984. As megacorps start to rival governments in reach and power, who exactly controls them is somewhat less important than their overall impact on the civilization.
It's religion on steroids. Only ten or so years of it and we can already see the awful consequences; give it a few decades and we'll see where it goes. I'm personally not very optimistic.
On the surface, places like HN or the software world in general sound libertarian in a limited sense. Capable people with the resources and skills to make things happen and tinker with projects. There is constant talk of reform or how to engineer the world into a more sensible and freer place.
However, this freedom is only aimed at the users themselves. In reality, venture capitalists, highly-paid software workers, digital nomads and other gentrifying forces are by and large on the side of authoritarianism in the role they play in the world. They produce systems that reward them greatly while increasing surveillance and quantification of the rest of the population.
It's not uncommon to see employees of adtech FAANG companies here wonder, without any trace of irony or self-awareness, why more people don't use ad blockers on their router or browser. There's a reason why the "make the world a better place" phrase is so heavily parodied.
The Zeitgeist has gotten especially stupid ever since Trump got elected. A very unpopular opinion. Rather than accept that people are horrible, they look to technology for scapegoats. That, and pretending the media conglomerates were the good ole days of accuracy instead of tremendously fucked up. You wouldn't see an entire coast deliberately kept ignorant of a war-crimes confessional (the Winter Soldier Hearings).
This resonates with me a lot. I find it quite odd that people are happy to blame facebook, twitter, instagram and etc. for the things people do on those platforms. It's like blaming the public square or the coffee shop or whatever place "problematic" people gather and discuss "problematic" ideas.
No. The platform is just a place, like a sidewalk is a place. The problem is society itself. But I guess that's a hard pill to swallow for people, and making up imaginary enemies, then laying down regulations on those imaginary enemies, is preferred to addressing issues with society itself.
The sidewalk and the coffee shop weren't designed to create an unlimited feed of engaging content and compete for your attention by triggering your brain's most basic responses. A quick Google search will show you dozens of studies on the topic.
Don't forget these "public squares" are companies whose sole purpose is to show you ads to generate revenue; they're not here to provide a fair and calm public forum, and they're not public utilities.
Using your metaphor, the sidewalk is your ISP. Facebook, Twitter, and Instagram are not the sidewalk, but instead a megaphone given to these people. The problem is society, but these platforms are not just places, they are giant amplifiers of this problematic society, and they are absolutely choosing (algorithmically) who to amplify.
Your error is declaring these platforms are "just a place."
There is a ton of evidence YouTube, Facebook, Twitter, etc. deliberately push inflammatory / extremist material to people to increase engagement.
My 8 year old daughter exclusively watches Let's Play and "how to draw" / DIY crafting videos on YouTube, and still gets recommended content on why vaccinations are dangerous and junk health pseudoscience, flat earth and New World Order conspiracy theories, and a ton of comparatively vanilla highly biased political "commentary."
Their algorithm tells them if they can get her interest on these videos (instead of others) she will spend more time on their site. They do not care why she is spending more time on their site, or if there is any societal harm from the content she is consuming - it's a "free country, after all."
If every time you went to your coffee shop, they introduced you to another customer peddling snake oil or handed you a newspaper full of extreme left or right wing talking points, you wouldn't say "it's just a coffee shop."
> Rather than accept that people are horrible they look to technology for scapegoat
It obviously is a two-way street; technology shapes society as much as we shape technology. Do you think humanity evolves independently of its tools?
People are horrible, and what we've made with tech equally is; media were biased before, and we just have more sources and channels now. If anything, you're making my point.
I don't know where I would have given that impression. I am saying that HN is censored, so it would seem that the people who participate here are OK with it. I am.
Ah yes, krokodil, meaning clandestinely made desomorphine with skin-lesion-causing impurities in it, the very existence of which is a direct result of authoritarian prohibition.
The content is highly customised and targeted to one's liking. I've never seen any of the types you mention, besides the occasional pro-China fake news propaganda, haha.
This is not to say TT doesn't contain such content, but it should be considered the feed you're seeing might be *significantly* different to someone else's feed…
The sad reality is that these $10-a-day content moderators make more money than medium-sized creators who don't know how to monetize and are hoping the TikTok creator fund will do them justice.
It's a very manipulative platform on all sides. Creator, Consumer, and Moderator.
To be fair, this is true for solo artists in general - musicians, writers, indie game devs, YouTubers, even Substack writers. More often than not it takes a long time to build up an audience and all.
The scenes from this article exceed any dystopian novel I've ever read. Humans have spent centuries building up social structures to eliminate our worst excesses. Social media is tearing those structures down again.
Some of what appears would have happened anyway. Social media also normalizes behavior (and weird beliefs) to the extent that they become destructive to the rest of society. QAnon is an example.
Even QAnon and similar stuff looks like something which you previously just wouldn't have noticed. Lots of people believe in extremely crazy things, sects worked just fine before social media, but unless you had a family member drawn into it, or lived next to members, you probably didn't see much of it unless some journalists decided that it was a topic worthy of an article.
You didn't actually respond to the point being made:
>Social media also normalizes behavior (and weird beliefs) to the extent that they become destructive to the rest of society.
Yes, there have always been loonies. Yes. But the issue is that social media has brought them together and given them a voice they have never had before.
When in the history of the US has there been a massively mainstream movement predicated on the fact that Democrats are literally agents of the devil, that wealthy elites harvest chemicals from tortured children, and that a shady New York businessman might be God's representative on earth?
I believe that point isn't accurate, because again, Scientology & friends existed before social media. For all I care they're loonies, yet they found each other and they have a voice. The Catholic Church is a bunch of loonies too, yet they found each other, and they had a voice and so much power that there's just nothing like it today in the age of social media.
> When in the history of the US has there been a massively mainstream movement predicated on the fact that Democrats are literally agents of the devil, that wealthy elites harvest chemicals from tortured children, and that a shady New York businessman might be God's representative on earth?
Never. Mostly because Trump wasn't around a hundred years prior, and harvesting chemicals from tortured children wasn't a thing anybody cared about. Or did you mean crazy antisemitic stuff and literally burning people because they figured they were witches? I don't know, sounds crazy, I'm sure that never happened.
Not flawlessly, but they have worked. The rule of law has enabled major advances in the economic well-being of millions of people by ensuring the protection of property and supporting the operation of markets.
What a horror. We have an underclass citizenry subject to abuse or at best terribly life-changing work conditions, without the resources (personal, cultural, societal) to protect themselves or change position. It’s horrific.
It is a shared problem. Colombia is not so far away, neither geographically nor as a gig-heavy economic model.
We should be able to solve this – not from a “content moderation” perspective but from a human work conditions perspective. Counterintuitively, solving content moderation probably requires solving the human work conditions. If the workers were paid considerably better and well-supported, the costs might rise to the point of successfully incentivizing the ideal automated solutions.
Wow. I was expecting nasty stuff to be mentioned there, but what is mentioned I would not have imagined in my wildest dreams. Humans are capable of truly nasty things, and even more so, they put them online for everyone to see?
I would love to know from people working in this space how they handle illegal videos. Are they responsible to report it to the authorities in the respective countries? The company will not have a presence in many of the countries the app is used in, so even if they wanted to report to the authorities it's impossible to keep up. Do they just delete and move on? The legal, ethical implications here are staggering.
This reply was therapeutic for me. I wish the post had a trigger warning. I was very caught off guard by the first vulgar content that was mentioned, and I’m a little upset by the terribleness of it. I immediately stopped reading, but I wish I hadn’t started. I was reading through these comments looking for validation for my feelings and this comment helped. Thanks. My naivety got the best of me I think.
At YouTube the content moderation team (site quality) had mandatory group therapy. The therapist recommended not bottling up seeing traumatic things so the moderators would sometimes share between each other. One moderator mentioned that he would never open a link sent by a particular other moderator. I asked why and he said “he’s brown listed”. People suck and the internet brings out the worst in us.
They were also on one month contracts. If you made it through your initial period you could choose to convert to full time. Most didn’t.
I shudder to think what the next generation of moderators is facing.
How will services like tiktok ever move past this without strict identity verification and a supporting global legal system? This seems like a common issue for every photo / video content provider. We've seen this exact same article about Facebook content moderation.
They have talented AI developers. They have a massive amount of known harmful content from moderation. They still need to surface harmful content to humans to check.
Unlike software, human moderation work is literally non-scalable. There just isn't enough value generated per moderator to compensate them anywhere near the level of software developers.
There's probably something workable between "$200k salary and great benefits" and "$10 a day to be traumatized" that works better than the current system.
Every time you hear "social media should be unmoderated and it should be impossible to deplatform people", you should think of all the traumatizing-but-not-actually-illegal material here which people want shown to everybody.
I find the generic “workers are important” quotes remarkable, in the sense that they do nothing but contribute to the sense that these companies really don’t care.
The people in the companies care. But they have become small, replaceable cogs/machine parts given the scale the machine has reached. The machine has become mindless and uncontrollable.
The larger things scale, the less control the chimp troupe with their 6-inch chimp brains has over any of it. It's the lesson of the last 20 years: we quickly lose control and enter a Jurassic Park style reality.
One perspective is that large institutions have a sort of life of their own, caused by the interaction of internal factions and priorities, and almost beyond the ability of any individual to change.
Nobody in America is in favour of school shootings, and yet they get loads of them. Nobody at Google wants people avoiding their products because they expect them to be cancelled, and yet they have that reputation.
Another perspective is that of course the institution's actions are the result of individuals' actions, what else could it possibly be?
That multinational that appears to have committed manslaughter would like us to believe nobody was responsible and it was just a tragic series of misunderstandings and communication breakdowns, so nobody should face any punishment. Are we fools to believe that?
> Another perspective is that of course the institution's actions are the result of individuals' actions, what else could it possibly be?
Of course, but the naive part is believing all those individuals are spherical people in a vacuum, making independent decisions that are all well-thought-out and optimize for globally best outcome.
> That multinational that appears to have committed manslaughter would like us to believe nobody was responsible and it was just a tragic series of misunderstandings and communication breakdowns, so nobody should face any punishment. Are we fools to believe that?
Yes and no. It's foolish to believe individuals have much agency in this setting. Everyone, from the bottom tier to C-suite, is entangled in a web of interlocking incentives, that are ultimately anchored outside of any one company or institution[0]. Some people are handling large levers and could almost unilaterally make the company change course, but at a great cost to themselves[1], and thus it's foolish to expect them to become heroes, especially before an issue hits the news cycle. It's the same kind of error in thinking that rests behind ideas like "voting with your wallet".
That said, the conclusion that "nobody should face any punishment" is also wrong. Punishment is a powerful incentive that can cut through the tangled web. Even if you believe nobody is by themselves responsible for a bad thing, targeted punishment (or threat of it) at e.g. people holding the levers can encourage them to be more eager to pull those levers and steer the entire system away from causing the bad outcome.
----
[0] - Like, desire to keep your current standard of living, whatever it is, which may be driven not by your own need for comfort, but a desire to not disappoint your spouse and/or children.
[1] - And highly likely it would be reversed the moment they got fired for operating the lever wrong. Checks and balances :).
This. It only gets the way it is because people have allowed it.
I know it's not ideal to have to go through the pain and trouble of being the good and righteous person who stands up against bad employers; but you HAVE to do it, or this is what happens.
Too often the people at the top don't define the culture explicitly. Instead, they do it accidentally through behaviors they incentivize, often without even realizing it. Sometimes they do this in direct conflict with the culture they are trying to explicitly champion (e.g., saying "we believe in transparency" while subtly punishing people for being open, maybe for well-intentioned reasons). And once a given company culture takes hold, it can be very hard to change because people who thrive in that culture tend to stay longer and get promoted more. Those people end up being a kind of "momentum" for the proliferation of said culture.
In my experience, it takes people in positions of power, and with real skills for cultivating culture, to deeply change an existing work culture for the better. I've seen a few leaders with such skills make wonderful changes at their scale of influence. But I've seen far more people in leadership positions who act as if they are largely unaware of the nuance and importance of good company culture.
Such changes are also hard because meaningful cultural improvements often conflict with short term revenue/profit. It takes a lot of discipline in senior leadership to maintain the needed resolve given ever-present pressures to produce in the short term.
Sure; companies have long gotten away with dumping their externalities onto the societies that host them. Criticism of this sort of thing isn't arguing that it's illegal; the argument is that it's bad.
This is all unnecessary. Social media is a soulless advertising platform that destroys people's sense of self and being. I don't use them, and I live a nice life devoid of vapid self-promotion and other nonsense.
Erm, I don't think many of Facebook's "moderators" in India are paid any better.
I think most BPO/call-centre jobs pay about as much (it's roughly the GDP per capita of "rich" states like Karnataka).
Forcing people to do shitty jobs that make them unwell, for shit-all cash, isn't a great look. Arguing that it's OK because they make slightly above the median income for the place they live is even less so.
It's possible to get exploited without being a literal slave, though.
There's a vast spectrum between "I'm Bill Gates and nothing is ever out of my reach" and "slavery". Folks in poor nations with few prospects being exploited for their cheap labor often gets uncomfortably close to the slavery end of things.
If you end up watching traumatizing content for a low salary, it does mean that the cards you were dealt at birth weren't so great. Perhaps they could have been in a better place if they'd pulled themselves up by their bootstraps, but I'd challenge any SWE here to do well with the same starting circumstances.
Then I'm at a loss as to what the parent poster wanted; acts of child porn and murder and whatnot are already illegal and fairly regularly punished. Those folks get plenty of (fair) blame.
I watched some TikToks recently by scrolling on the site, and it was easily some of the lowest-quality stuff I've ever seen in my life. They were literally just someone's face, a caption, and some random music for about 15 seconds. I've tried a few times now to search for tags of things I like, and it is just the lowest-denominator stuff I've ever seen on the internet. It's one step away from AI-generated (and even that has started doing pretty great things now). I'm sure its algorithm is amazing, but is using TikTok any different to reading Reddit or watching YouTube, really? Does anyone actually transfer the stuff they read/watch from short-term to long-term memory?
The "defaults" on TikTok are fairly terrible. However, after spending maybe one hour at most scrolling and liking/disliking things to tune the algorithm, and search tags, it works fairly well. I have
- Learned several cooking recipes and some tricks
- Cleaning advice I've actually put to use
- How to use some woodworking tools. Also, general woodworking ideas
- Cat behavioral advice
In my experience, it's far better than Reddit (mostly because there's far more niche content, and it isn't limited to "recently posted") and the discovery is far easier than on YouTube.
> In my experience, it's far better than Reddit (mostly because there's far more niche content, and it isn't limited to "recently posted")
I find that very hard to believe (despite how god-awful Reddit has gotten by now). Reddit is pretty much the last place on the internet where real people post long-form content that is not a single-issue niche forum. I can't imagine that the TikTok model, with its short videos, algorithm-driven discovery, and lack of community (commenting), encourages quality.
What they mean is a continuous churn of surface level stuff.
E.g. for woodworking, there's only so many times you can show how the basics of "10 Things you're doing wrong", or why the tape measure moves at the end.
You're not going to have that on the top of the subreddit that many times - maybe once a year.
On TikTok, that stuff will occasionally pop in, mixed with a bunch of other crap, so the repetitiveness doesn't stick out in the short term. You also don't engage with it: there's almost no point in leaving a comment; it's all about mindlessly scrolling from video to video. This leads to lots of easy-to-film but superficial stuff in large quantities.
> E.g. for woodworking, there's only so many times you can show how the basics of "10 Things you're doing wrong", or why the tape measure moves at the end.
It's playing out exactly like elsewhere on the Internet: beginner content is best to produce because they are the biggest audience, and the algorithm snowballs that advantage further because it only sees engagement, not quality.
In my experience, subreddits end up being, at best, a "look at what I did" without any actual way I can learn, and usually it's just a photo or something. At worst they are a mess of memes, repeated beginner questions and unmoderated content. Content on Reddit is also very short-lived, and information can get hidden in comments and become invisible.
On the other hand, TikTok works fairly well. Videos can be up to 3 minutes long now, most of the time it's enough to at least explain the basics and tell you where to get more detail if you want. Also, the "algorithm-driven discovery" means that creators usually need to focus on the content of the videos. The "video reply to a comment" mechanic is very helpful too, as it encourages making additional first-level content from people's questions instead of burying things in comments.
yeah reddit has huge issues. it’s difficult to find a popular image-based subreddit that hasn’t descended into repetitive memes. and as you say the fast decay of content means that “progress” is only in the minds of the current users. as soon as they decay, the meta resets.
there’s also a shocking amount of astroturfing going on. it used to be useful to search “[product] reddit” - and in some cases still can be - but many companies have cottoned onto this and they can extremely cheaply manipulate a reddit post for very high fidelity advertising. this is especially true for online-centric companies like VPNs and crypto exchanges. the VPN subreddit might as well be an advertising platform
I once saw a reddit post asking about VR headsets where the poster specifically stated that they do not want anything Meta/Facebook because they don’t trust them. the top 3 replies? “I recommend the metaquest 2 because it’s the best value for money” as if they’d said nothing at all
Reddit probably has the best moderation system out there, based on the big subreddits I have moderated. But even Reddit is slowly starting to get gamed a bit, and it takes a lot of investigative work from the moderators themselves to find out whether certain individuals or entities are bad actors or harmful to the subreddit or the wider community.
There are some hilarious stories, though I haven't encountered such an individual yet. I have heard one story where an individual managed to evade bans in a certain subreddit over 62 times - enough for the moderators of that subreddit to send out a public complaint that other moderators like me picked up on.
But they are working on improving their ban-evasion tooling, so who knows if that will do anything against the one dude that managed to evade over 62 times. Where there's a will, there's a way... and that's what frustrates me about TikTok. I can't imagine moderating a primarily video-based platform; it just takes too much of your time. Imagine having to deal with ban evaders and the like on top of that.
Interesting. I think moderation of big subreddits is one of the worst things about Reddit: it's unpaid labor, so the only people willing to invest the time tend to be petty tyrants.
E.g. there is a moderator of a major sub who likes to leave derisive comments to "explain" moderation decisions, and who made a subreddit so other mods can join in on mocking users.
There's a phenomenon where someone posts something like "I want to join two pieces of board to a pole with a nail, but I don't want to use a hammer, what should I do?" and gets back two recommendations to use a screw and screwdriver, three to use a hammer, one to use a nail gun, and one longer answer that explains the question is bad but begrudgingly accepts a nail gun.
I think in that situation, the people who suggest a hammer or a screw silently perform some mental arithmetic, assume the person asking the question doesn't understand the problem because they're asking for a very costly tradeoff, and give the best answer that solves the basic problem.
Don't forget the "person comes along with situation for which the typical dogma and rules of thumb fall short and anyone with sufficient expertise in the subject to identify that winds up downvoted to oblivion by the hordes of dolts who don't" problem.
This existed/exists on Stack Overflow as well. Simple solutions to code questions are often met with explanations involving completely changing the stack / redoing everything in the way the person answering is familiar with (maybe just to signal they know how to do something), while completely ignoring any meaningful answer in the context of the question.
That happened to me right here on HN in a Linux story, when I mentioned and linked to an issue about not being able to use the Nvidia card alongside the integrated Intel card to connect two monitors. The answer was: why are you trying such fringe configurations? (Plugging in two monitors is fringe on Linux?)
I've watched a few 5+ minute TikToks about various things (cars, video games, plants, politics) in the past month.
I don't think these are hugely popular, but they reach some audience. A big video will have 500k to millions of likes.
These longer videos usually cap out at a couple thousand.
They're out there, but they're not really searchable.
TikTok has an incredible breadth of content but takes a few days to weeks to get the algorithm to show you what you want.
When I use it, the algorithm is something I'm actively trying to manipulate. "Up vote" stuff you want to see by liking and saving it. "Down vote" by quickly scrolling away from a video.
Any engagement is a signal to show you more of that -- if you're tired of seeing one type of video, you should immediately scroll as soon as you know you don't want to watch it.
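TikTok's actual recommender is proprietary, so purely as a toy illustration of why "scroll away fast" acts as a down-vote, here's a sketch where watch time, likes, and saves feed a single per-topic score; every signal and weight below is invented:

    # Toy engagement-weighted topic scoring; NOT TikTok's real system.
    # Watching most of a video is a positive signal even without a like;
    # bailing out immediately contributes almost nothing.
    from collections import defaultdict

    topic_scores = defaultdict(float)

    def record_view(topic, watch_fraction, liked=False, saved=False):
        score = watch_fraction + (2.0 if liked else 0.0) + (3.0 if saved else 0.0)
        # Exponential moving average: recent behavior dominates.
        topic_scores[topic] = 0.9 * topic_scores[topic] + 0.1 * score

    record_view("woodworking", watch_fraction=0.95, liked=True, saved=True)
    record_view("drama", watch_fraction=0.05)
    print(max(topic_scores, key=topic_scores.get))  # woodworking gets surfaced more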
You realize you are training their datasets (it's not really an "AI" in any meaningful sense) to get a very precise picture of your personality and emotions, which will later be used for ad targeting, and for character profiling if you become somebody interesting in later years.
"Training the algorithm" is just a carrot on the proverbial curated-content stick (they could also let you select 10 categories of content you like and refine it yourself further down the road; this is just a less obvious way of doing the same).
People were bashing Facebook/Instagram for this for a decade, and many have stopped or rolled back and just give a few likes here and there for friends' personal events. I guess history likes to repeat itself, even in the digital world...
I was one of the people that didn’t care Facebook was doing that. I am one of the people who still doesn’t care that TikTok is doing it. I stopped using Facebook because my feed became garbage not because of any kind of privacy concerns. TikTok’s feed has been amazing and consistently feeds me content that I like. History probably repeats itself because a massive body of people don’t care about the things that you care about and are being delivered value that they do care about.
"History probably repeats itself because a massive body of people don’t care about the things that you care about and are being delivered value that they do care about"
Yes, it is known that most people do not care about deep stuff, but about food in their belly and being well entertained. Panem et circenses.
Btw, about the Romans: in Italy the prime minister is now a person who said she thinks Mussolini was a good politician. So yes, history repeats itself if too many people do not care.
Democracy is actually a quite fragile thing. Many things we take for granted only exist, because people care. And it will erode if people stop caring.
So you are still free to submit to FB and TikTok as much as you like and hope this does not change.
Because, you know, companies knowing all about you, while you know nothing about them and their motives, is just a recipe for the long-term empowerment of them, not you.
And in the case of TikTok, there is an actual government behind it, de facto in control, which is famous for neither democracy nor human rights.
People signed up for Facebook/Instagram/Twitter with the expectation that they'd get a reverse-chronological feed of posts from the people they follow. People were bothered by the algorithmic feed because it wasn't what they signed up for (and those sites' feeds are often quite bad at showing quality content). TikTok never promised that kind of feed, and by most accounts does a pretty good job at surfacing videos people will find interesting. It's an important difference.
From my perspective FB/Instagram's algorithm (I'll throw in YouTube too) never really worked. They overemphasized showing me the exact same type of thing until the feed became boring. You liked this post about a chair -- here's nothing but chairs.
I also joined FB to keep in touch with people I met in real life. It changed to push me to engage with groups and people I had nothing in common with.
I didn't join TikTok to keep up with friends that way. The TikTok algorithm does a great job of showing me new stuff that I might be interested in, but is still related. It's remarkably good.
FWIW -- it also seems to forget everything about me if I log off for a month.
---
Sure, I'm training data sets. Everything I do online is monitored and analyzed by someone at some point.
I spent a year of my life trying to "get off Google." IMO it's pretty close to impossible if you want to function in society.
I'm skeptical that the TikTok algorithm knows anything about me that my search history doesn't show.
I don't think TikTok is a "good thing" but I don't understand why it is sometimes painted as this unique evil.
The worst part of TikTok in my opinion is how much time I've wasted using it, but that's nothing new for social networks.
I don't work for TikTok so I have no clue about their actual implementation, but if they have what everybody else more or less has, it's a sophisticated statistical data model.
Some people put the moniker 'AI' on this, just like some people in marketing are putting 'AI' into our TVs, toothbrushes, cars and god knows what else. It's got nothing to do with Artificial Intelligence per se; it's just a shortcut for 'something complicated I don't grok, so it's 23rd-century magic'.
One can define "AI" in many ways. It certainly is not a conscious being "thinking" about what you would love to see next (much like AlphaGo isn't carefully "focusing" on the board and the possible moves), but a recommender system certainly displays some "intelligent" behavior in a restricted sense.
What TikTok is definitely not doing is "training their datasets" -- you train a model on a dataset, not the other way around.
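If it helps demystify things: the core of such a recommender can be sketched in a few lines. This is a generic dot-product scorer over learned embeddings, with random stand-in data -- not anything TikTok-specific:

    import numpy as np

    # Generic recommender core: score = dot(user_vector, item_vector).
    # In practice the vectors are fit to logged engagement, i.e. plain
    # statistics. Random data here stands in for learned factors.
    rng = np.random.default_rng(0)
    n_users, n_items, dim = 100, 1_000, 16
    user_vecs = rng.normal(size=(n_users, dim))
    item_vecs = rng.normal(size=(n_items, dim))

    def recommend(user_id: int, k: int = 5) -> np.ndarray:
        """Return the indices of the k highest-scoring items for this user."""
        scores = item_vecs @ user_vecs[user_id]
        return np.argsort(scores)[-k:][::-1]

    print(recommend(user_id=42))

Whether you call fitting those vectors "intelligent" is the definitional question above; the mechanism itself is unglamorous.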
> the last place on the internet where real people post long-form content that is not a single-issue niche forum
YouTube is the biggest counterexample to this. Yes, there's a lot of short-form content, but there's also a huge amount of long-form, not that the algorithm will tell you that.
Yes, but long form videos can only be consumed in long linear form.
Long form text can be skimmed, organized, and referenced. The signal to noise ratio can be a little lower without being an issue, because it's trivial to sift through a certain amount of noise when you are reading text. And you can have conversational content that doesn't spiral out of time constraints.
YouTube Shorts is reversing this. What used to be a long-form platform now has many creators who post tons of 30-second clips that aren't interesting or informative. The Shorts are all skippable, because if you can hear it in 30 seconds, you can read a sentence about it in 5.
So I generally don't use any social media, but I sometimes try it for a week or two to at least understand what is going on in the world. Recently I specifically consumed a bunch of YouTube Shorts. Google should have a pretty good idea of who I am after 15 years of using their services, but even with me pressing "do not recommend" on every single Andrew Tate/Jordan Peterson video, it incessantly recommended that content to me. The platform is literally unusable for me in that sense; I felt like I was in a social experiment trying to turn me into an incel or something... In comparison, TikTok's algorithm works far better at recommending things I actually like to watch.
*shrug* YouTube is really good at adding content similar to what you've already watched to your recommendations. If you're getting Jordan Peterson and Andrew Tate videos, it's probably related to what you've been watching.
I suspect as well that simply watching Shorts over long-form content makes the recommendation algorithm assume things about your ability to concentrate on long-form content.
Even then, YouTube has a function where you click the dots near a video and say "not interested" to remove channels and topics from your recommendations, and it works fairly well. So if it's recommending something you dislike, you have the ability to remove those recommendations and tune future ones.
As stated above, I do not usually consume that content, and I did try to tell the algorithm that I wasn't enjoying it; it didn't care. YouTube Shorts is just substantially worse than TikTok at recommending stuff that I am interested in.
I'm not sure how to phrase this politely, but your comment comes off as deeply arrogant and unpleasant. That acronym especially just sounds like it is coming straight from an unflattering caricature of a nerd from the 90s/early 00s.
> If you're getting jordan peterson and andrew tate videos its probably related to what you've been watching.
Well, or their model isn't very good.
To take an example, let's say I watch "Race Highlights | 2022 United States Grand Prix" on the official F1 channel.
Then the algorithm recommends "The Internet's Best Reactions To The 2022 United States Grand Prix: THAT WAS EPIC" with a thumbnail of a gurning man.
If the model had accurately captured my viewing habits, it would say: Formula 1, yes; reaction videos, no; things that call themselves epic, no; gurning thumbnails, no. And it would not have recommended that video.
One possibility is that Youtube knows me better than I know myself, and though I think I don't like reaction videos and gurning they've determined I secretly love them. Another possibility is that their algorithm merely knows that people who watch F1 videos watch F1 videos, and because I watched an F1 video they're showing me an F1 video.
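That second possibility is basically item-item collaborative filtering. A toy sketch (all watch histories invented) of why pure co-occurrence would surface the reaction video no matter what I think of gurning thumbnails:

    from collections import Counter

    # Toy item-item collaborative filtering: recommend whatever
    # co-occurs most often in other users' watch histories.
    histories = [
        ["f1_highlights", "f1_reactions_EPIC"],
        ["f1_highlights", "f1_reactions_EPIC"],
        ["f1_highlights", "f1_onboard_laps"],
    ]

    def co_watched(video: str) -> Counter:
        counts = Counter()
        for history in histories:
            if video in history:
                counts.update(v for v in history if v != video)
        return counts

    # Co-occurrence alone ranks the reaction video first, with no notion
    # of "this user dislikes reaction videos and self-declared epicness".
    print(co_watched("f1_highlights").most_common(2))
    # [('f1_reactions_EPIC', 2), ('f1_onboard_laps', 1)]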
Potentially there's some subset of videos that you watch that are also often seen by viewers of Andrew Tate/Jordan Peterson nonsense. I wouldn't be surprised if it is in fact some form of sports.
Not that this makes it a good algorithm, but it's a potential explanation.
I was thinking more about text, but youtube indeed still has a surprising amount of good content if you know where to look, though even the hobbyists cater heavily to the algorithm.
Do you mean for content in German? There might just not be enough of it to be anything but low-effort junk - I never got anything useful in German and gave up after about 20 minutes.
I'm based in Germany but view English content and I can at least echo your sentiments about the lack of good content. I've read multiple times about how amazing the TikTok algorithm is meant to be and how quickly it learns about what to show you, and yet that has not been my experience at all.
Twice now I've spent an hour attempting to train the algorithm but it's just endless streams of trash. The worst part for me was coming across an interesting video, for example some industrial machine process, only to be subjected to the exact same video hosted by another user. Most times they have just overlaid some music, but sometimes they crop the original or add some useless text overlays.
The part that also annoyed me was that the algorithm seemed to give up every 20-25 videos and try to show me something sexual, even though I was not interested. I always found the subject transitions extremely jarring and annoying -- I already have my porn channels sorted, thank you very much; I'm looking for a different type of wood-working.
Indeed, the best metric is some sort of CPA media-buying experience.
I have the following channels:
- Twitch: good ROI, short-lived
- affiliate forums: long-lived, break-even ROI
- YouTube, FB, IG: can't ever recover the investment; only good for content creation
- ad networks and TikTok: totally zero conversion rate, even at what appears to be a tremendous amount of traffic
So yeah, if you are in my niche and a million views does not convert, then that algo and the platform are useless.
The fun part is its deduction of locale from your number/SIM card. I usually get an eSIM while travelling, and the provider seems to be based in the Netherlands. I was in Mexico last time and was getting a constant stream of content from Amsterdam. Took me a while to understand why.
Pretty sure it uses regular geo information as well, because we travelled to France and immediately saw a bunch of French TikToks even though the SIM card was Canadian.
It's actually a fun quirk, you visit a country and you get to see their tiktok videos.
My comment doesn’t apply if you’re asking about videos for which the language is German. I assume you meant geographically based content regions for English videos.
Just checking that you are using a mobile (Android or iOS) version of the app. I may be totally wrong (I'm basically making this up based on intuition), but I wouldn't be surprised if the web version of TikTok doesn't support the same level of input to the algorithm. There is fine-tuning to your interaction data that the mobile app supports: I'm sure it bases decisions on how long you watch videos (regardless of whether you like or follow), and I think it might even base decisions on which sections of videos make you stop watching.
I hit it off right away with the mobile version. Off the bat: there were lots of half-nude women, gym rat videos, and other generic social content for my region. Within the hour, I was getting mostly finely curated comedy clips from standup comedians I liked (and not from others I didn’t like). It’s really amazing how well the algorithm works.
I'm an English speaker who is learning Japanese as a second language. I've found it difficult to get TikTok to recommend Japanese-language videos to me; it will usually only do so if I follow an account.
I think there is a language switch somewhere high up in their algorithm that is hard to compete against with watch metrics.
Yes, it's working well for me too, in Germany. First boring content, then frighteningly well-adjusted content. As I dislike most German content, it updated itself to mostly showing English/expat stuff, but also some German content that I like.
Why would you do that lol? Do you really have FOMO for the next garbage social media dopamine machine that simply wants to hijack your attention to sell ads and get you addicted to zombie scrolling like every other instance before it? Why does it have to work for you? I think you have your answer at this point.
It's just another garbage lowest-common-denominator mass farm of content and memes. It doesn't have to be, nor is it, anything more profound than that; in fact, with its short content length and more sophisticated AI, it's just like everything before it, but worse.
You can certainly just dismiss it and occupy your limited time doing something else more productive or fulfilling.
FWIW, I've tried doing the same thing with Twitter. People hold that site in the highest regard, and whenever I read/post I'm always having the most miserable time. The longer I spent, the less sense it made to me, and eventually I had to leave it alone once and for all. I get where they're coming from when they want to enjoy something but don't know where to start.
It seems we (well, at least some of us) have come full circle re: passivity when consuming content.
TV is like this and was much worse. Back where I grew up, under forced socialism/communism, there were 2 official TV channels and that was it. Then came computers and the internet, and many of us greatly enjoyed feeling in complete control of what to do, see, read, etc.
What you mention (maybe I don't get it, since I never used TikTok and probably never will, due to its apparent addictiveness) sounds again like passively, randomly consuming whatever the other side decides is appropriate. You can tune it, and maybe that works a bit, hey, it's 2022. Surreal to me (and not in a positive way), but to each their own.
This matches what happened with advertising. We went from cable TV and broadcast radio with 12+ minutes of ads per hour to being able to choose what we watch/listen to "on demand" with few - if any - ads. And now we're heading back to increasing, unskippable ad load in popular audio and video formats.
You need to search and then scroll and like stuff and follow people to get any signal in TikTok. There is a ton of great content. For example, maybe you are into weightlifting or gardening. Search for a particular aspect of weightlifting like "rear delts" or for gardening "pruning" and like some stuff or follow some creators you like. Rinse and repeat a few times. It doesn't take long to fine tune a feed that is pretty good.
Yep - it's cool to hate TikTok, but it's chock full of excellent content. Yeah, there's a lot of trash (just like the internet in general), but just swipe it away. Follow creators you like, and the algorithm honestly does a pretty good job of showing me videos I don't hate.
You'll have people say that there are good-quality videos on TikTok and that you just have to spend a small amount of time training their AI to show you what you want to see. I'm unconvinced by arguments that TikTok's AI is good. I think in the vast majority of cases it overfits to multiple local minima/maxima and bounces around between them. By far most TikTok users are only ever going to see the lowest-common-denominator meme clips.
My take-away was there's good stuff on there and awful stuff on there. Similar to the internet itself. The HN comments on my post were very split between loving and despising TikTok:
https://news.ycombinator.com/item?id=31360955
It takes a while, but I have found some truly fascinating content on TikTok:
- Behind the scenes theatre stuff (I used to work in AV so this is relevant to my past interests)
- Aviation interests - got tons of good videos around the Oshkosh fly-in
- Farming videos - I grew up on a farm and enjoy many of these, I especially like comparing how this is done in different parts of the country
- Ski-lift technician TikTok - videos explaining safety systems, backup power, and behind the scenes of ski-lift operation
- Broadcast transmitter engineering - folks working on terrestrial broadcasting systems, which is loosely related to the next one
- Amateur radio - lots of folks showing off their shacks and mobile rigs
- and more that I can't remember
An interesting thing I found about TikTok is that you do not have to like or comment for the "algos" to figure out what you like. They appear to closely monitor how long you watch a clip and whether you just read its comments.
I think this is a recent development, I've been seeing the same thing on Facebook's Reels. I believe it's taking advantage of the algorithm, by providing you the bare minimum to think that there might be a followup or punchline. But there isn't: it's just the one picture. By the time you realize you've been had, you've already watched long enough for the algorithm to silently register your interest.
I hate black-box recommendation algorithms because of this. You have no idea what it's inferred from your behavior, so simply waiting too long is enough to bork your feed. How long is too long? No idea.
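For illustration, my guess at what that silent registration could look like: a watch-ratio rule with invented thresholds, where lingering on a baity one-image reel still counts as interest:

    # Hypothetical watch-time signal: no explicit action required.
    # Thresholds are invented; the point is that merely lingering
    # (e.g. reading the comments) can register as interest.
    POSITIVE_RATIO = 0.8   # watched >= 80% of the clip
    NEGATIVE_RATIO = 0.15  # bailed within the first 15%

    def implicit_signal(watch_seconds: float, clip_seconds: float,
                        opened_comments: bool) -> int:
        ratio = watch_seconds / clip_seconds
        if opened_comments or ratio >= POSITIVE_RATIO:
            return +1  # counted as interest, even with no like
        if ratio <= NEGATIVE_RATIO:
            return -1  # counted as disinterest
        return 0

    # Waiting too long on a one-picture "is there a punchline?" reel:
    print(implicit_signal(watch_seconds=12, clip_seconds=14, opened_comments=False))  # 1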
In my experience, the best way to discover the good content from TikTok is to simply wait until somebody reposts it to reddit and it makes it to the front page.
Quality aside - for me, having only tried TikTok a couple of times, there was always some severely disturbing content on the homepage, including gross pimple popping, viral infections, skin diseases, and other content that's not considered normal. Although some of it seemed fake, it definitely throws sensitive people off guard, and the fact that the platform does nothing about it is beyond my understanding.
People seem to actually understand its large scale, its ever-growing backlog, and the fact that "everybody can upload". I suspect that is motivated reasoning shining through: they don't want to give it up for a "whitelisted YouTube". At most they kvetch about search algorithms.
I think the solution is that these jobs need to be compensated in proportion to their value, with applicants understanding what they've signed up for. Without moderation, there are no services. Governments that prioritize the protection of children among other things should enforce this.
Is it too trite to say that if you only need to check whether a video is deletion-worthy, you can do it without watching the video? I click around a paused video if in doubt, and/or make it really small. This has saved me several ruined days. Couldn't they do the job effectively with a preview strip, playing the video only if it's unclear?
Question – at what point does content moderation become necessary?
Small publishing tools successfully use passive mechanisms to prevent illegal/horrific content (for example: community enforcement and platform culture, identity verification, cost/pricing). At what point do bad actors and unwanted content arrive? At what point do the passive mechanisms fail?
> “If you’re looking at this from a monetary perspective, then content moderation AI can’t compete with $1.80 an hour,” Carthy said, referring to a typical wage for content moderators based in the global south. “If that’s the only dimension you’re looking at, then no content moderation AI company can compete with that.”
I didn't realize human labor was that cheap. I assume Teleperformance is taking a decent chunk of that $1.80/hr. At 15 seconds per video, that's 240 videos/hr.
On AWS, a p4d.24xlarge is $32/hr with 8 A100 GPUs, i.e. $4 per A100 GPU-hour.
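A quick back-of-envelope using the figures above; the GPU throughput is my own assumption, and it's the whole ballgame:

    # Cost per video, human vs. GPU inference. Wage and instance price
    # are quoted above; videos-per-GPU-hour is a pure assumption.
    human_wage_per_hr = 1.80
    videos_per_human_hr = 3600 / 15   # 15 s per video -> 240/hr
    gpu_cost_per_hr = 32 / 8          # p4d.24xlarge, per A100

    print(f"human: ${human_wage_per_hr / videos_per_human_hr:.5f}/video")
    for videos_per_gpu_hr in (1_000, 10_000, 100_000):
        cost = gpu_cost_per_hr / videos_per_gpu_hr
        print(f"gpu @ {videos_per_gpu_hr:>7}/hr: ${cost:.5f}/video")

At the quoted wage a human costs about $0.0075 per video, so on raw price a model only needs to sustain roughly 500+ videos per A100-hour to break even; presumably the quote is really about accuracy and liability, not compute.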
> “The human brain is the most effective tool to identify toxic material,” said Roi Carthy, the chief marketing officer of L1ght, a content moderation AI company.
My takeaway is that better, cheaper computer vision would be able to alleviate human traumatization. The flip side is that we'd also be able to generate thousands of traumatizing videos.
AKA the Chinese/PRC model: you need a shitload of actual Mk.1 eyeballs to filter the internet into a hugbox for political/domestic serenity. It's why western platforms had to pull out of the PRC (again, not banned) after the minority riots: they couldn't stomach the onerous moderation costs required to operate in the PRC that other PRC platforms had to endure. A few years later, radicalization on western platforms compelled the same measures - the CCP has consistently been prescient in the domain of mass communication. Fringe voices are loudest in an unrestricted environment; it takes a lot of resources to put up walls and prune weeds for popular yet harmless mainstream stuff to stand a chance.
It doesn't surprise me there's really disturbing content. This is the case for any platform.
But how much time do moderators spend dealing with that, versus BS reports? If you spend any time on TikTok you'll quickly notice that content reporting is heavily brigaded and weaponized. The general process is:
1. Mass reports, probably from automated fake accounts, target one or more videos and/or the creator themselves;
2. As with almost all such systems, a certain number of reports triggers an automated response, such as taking down the video;
3. The creator then has to manually appeal the takedown. Sometimes they win, sometimes they lose. It seems to be really inconsistent;
4. Regardless of the outcome of the appeal, a certain number of reports will trigger a community-guidelines violation, possibly locking or even completely nuking the account. The fact that every appeal was won seems to be irrelevant.
This problem is so bad that you can duet a video, say nothing and get reported for hate speech or harassment and lose your account.
What TikTok doesn't do (like pretty much everyone else) is identify false reports and the people who make them. It tends to be pretty easy too. The defining characteristics tend to be:
1. A default username ("user123239842");
2. Only follows 1 person;
3. No PFP.
Other sites (e.g. Twitch) make efforts to at least identify ban evaders. If you don't like someone's content, TikTok should make it hard for you to find it again, and when you do, they should just shadowban you: your comments don't appear to anyone else (which reduces harassment), and your reports are essentially ignored (while appearing to have gone through).
I'm actually amazed at just how easily and how often these systems, which supposedly exist to take down offensive content and protect users, are weaponized against people simply saying something someone doesn't like. Fixing this would surely reduce the time wasted by moderators (who often rule inconsistently on these issues), so the truly abhorrent content gets immediately nuked.
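To make it concrete, here's a sketch of the kind of reporter-trust heuristic I mean, built from exactly those signals plus appeal history (field names and weights are invented):

    from dataclasses import dataclass

    # Hypothetical reporter-trust score; weight reports by it so a
    # brigade of throwaway accounts never reaches a takedown threshold.
    @dataclass
    class Reporter:
        username: str
        following_count: int
        has_pfp: bool
        reports_overturned: int  # takedowns later lost on appeal
        reports_upheld: int

    def trust_score(r: Reporter) -> float:
        score = 1.0
        if r.username.startswith("user") and r.username[4:].isdigit():
            score -= 0.4  # default username
        if r.following_count <= 1:
            score -= 0.3  # follows (almost) nobody
        if not r.has_pfp:
            score -= 0.2  # no profile picture
        total = r.reports_overturned + r.reports_upheld
        if total:
            score -= 0.5 * (r.reports_overturned / total)
        return max(score, 0.0)

    brigade_account = Reporter("user123239842", 1, False, 9, 1)
    print(trust_score(brigade_account))  # 0.0 -> report effectively ignored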
$10/day is probably a lot for this. I wonder how much they could cut their moderation costs by just launching a public platform where anyone, anywhere in the world, can get paid to do moderation.
Why would that be any different than the current situation?
In fact, I think conducting this more publicly would have huge benefits. Perhaps you would attract more motivated staff, like pedophiles to go through CSAM?
> Carolina, a former TikTok moderator who worked remotely for Teleperformance between June and September 2020, said supervisors asked her to be on camera continuously during her night shift. She was also warned that nobody else should be in view of the camera, and that her desk should be empty, apart from a drink in a transparent cup.
I’ll say first that I think it’s horrible that we have this problem at all.
I wonder if a way to avoid traumatizing workers with entire videos would be to capture single frames from videos and divide them across all the workers, with each still rating their frames for content violations.
Obviously, some frames would be worse to see than others, but I would have to imagine the impact on each employee would be drastically lower than watching full video.
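As a rough sketch of that division of labor (OpenCV; the path is a placeholder and the one-frame-per-second rate is arbitrary):

    import cv2  # opencv-python

    # Sample ~1 frame/second from a video and deal the frames out
    # round-robin, so no single reviewer sees the whole clip.
    def assign_frames(video_path: str, n_workers: int) -> dict:
        cap = cv2.VideoCapture(video_path)
        fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30
        assignments = {w: [] for w in range(n_workers)}
        frame_idx = sampled = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % fps == 0:
                assignments[sampled % n_workers].append(frame)
                sampled += 1
            frame_idx += 1
        cap.release()
        return assignments

    work = assign_frames("upload_000.mp4", n_workers=8)  # placeholder path
    print({w: len(frames) for w, frames in work.items()})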
Don't forget PornHub; there have been articles about their content moderation teams facing the same problems, obviously exacerbated by the platform. Plus, the tragic thing there is that if you see something fucked up... it might not even mean you should remove it.
Won't happen. It would cut out a lot of creative, legitimate content from people who would otherwise be unwilling to divulge their personal information to a CCP-governed application.
I wish I could upvote this more
...what value does TikTok offer? I honestly wonder. I would happily bet money that it's a net negative across the world...
It's almost as if building a hypercentralized faux-communication system that tries to encourage mindless behavior and doesn't have any notion of locality is an inherently bad idea that will always lead to misery and suffering. Hm. Nah, that can't be true. That would mean a lot of things I've read on the internet recently are lies, which is completely impossible. Yeah, I think this is just a sign that people suck and we need more centralization to control them. If we tied all TikTok accounts to some kind of social credit score, that would probably fix everything.
TikTok is the best thing that has happened to the internet. Why do I say so? Remember Facebook allowing you to reconnect with your lost friends? TikTok is doing this by connecting the world. I have seen a lot of content I would not have seen on any other platform: interesting food, interesting places, interesting personalities. Unfortunately, the same side effects that happened to Facebook are also happening on TikTok. But I have to say, TikTok has made me see more things around the world. It is great in that respect.