The algorithm has been given a job to do. The first priority on any platform is engagement, and a well-functioning, complete human being is not going to be engaged by rage bait and hate. They are rare, precious jewels. The shit gets dumped on people who are lonely, have a grudge, or feel left out. It is relentless and escalates until their brains cook. Algorithmic social media is a massive social harm. The people who are in deep likely need years of deprogramming and therapy to recover, which they will never get.
These platforms need to be shut down and people with a conscience need to stop using them, regardless of their own positive experiences, to deny them the power of network effects and their impact on the vulnerable.
Off topic, but I bet a book on tobacco cultivation/history would be fascinating. Tobacco cultivation relied on the slave labor of millions and the global tobacco market influenced Jefferson and other American revolutionaries (who were seeing their wealth threatened). I've also read that Spain treated sharing seeds as punishable by death? The rare contrast that makes Monsanto look enlightened!
Mm, definitely. I think it's probably the cash crop that has historically been the most intertwined with politics, even more so than sugar.
Central America, the Balkans, the Levant. The Iroquois and Algonquians. Cuba. The Medicis and the Stuarts. And, as you say, revolutionary Virginia and Maryland. Lots of potential there for a grand narrative covering 600 years or more!
(And, to gp: yes, it absolutely did threaten governments, empires, and entire political systems!)
Yeah, isn't it only a relatively recent split - mid 20th century, I think?
Before that, the term "economy" was only used as a synonym for thrift or a system of management or control (and "economist" tended to mean someone who wanted to reduce spending or increase restrictions on something).
Arguably Marx is the most important historical scientist when it comes to political economy. The methodology pioneered by him has been extremely influential.
Reactionary liberalism, e.g. neoliberalism, the Austrian school, that kind of thing, discards the 'mess' of interdisciplinary approaches and seeks a return to a Protestant worldview, riffing on the New Testament verses about "render unto Caesar". This puts them in harsh ideological conflict with the political economists and elevates their 'theology' above the work of previous scientists.
Historically some trace political economy to ibn Khaldun, but in the Occident it's Ricardo, Mill, Marx and so on that create a (to us) recognisable science out of it.
Science is not the only legitimate form of gaining knowledge. What you write applies to every philosopher. And economics is not generally known for being the most scientific of all sciences. This is all the more true of neoclassical economists, who are probably closer to your worldview if Marx triggers such a knee-jerk reaction in you. Whether you like it or not, Marx was a gifted systematic and analytical thinker. Even his ideological opponents admit this. At least if they can hold a candle to him intellectually...
Marx wasn't a scientist. He didn't follow the scientific method. He was a lazy pseudo-intellectual who cherry-picked particular pieces of history to support his preferred narrative.
Actually I've read it and am quite familiar. It's true that he was influential but all of his work was shoddy and poorly reasoned. Only morons are impressed by it.
The problem with this is that section 230 was specifically created to promote editorializing. Before section 230, online platforms were loath to engage in any moderation, because they feared that a hint of moderation would jump them over into the realm of "publisher", where they could be held liable for the veracity of the content they published. Given the choice between no moderation at all and full editorial responsibility, many of the early internet platforms would have chosen no moderation (as full editorial responsibility would have been cost prohibitive).
In other words, that filter that keeps Nazis, child predators, doxing, etc. off your favorite platform only exists because of section 230.
Now, one could argue that the biggest platforms (Meta, Youtube, etc.) can, at this point, afford the cost of full editorial responsibility, but repealing section 230 under this logic only serves to put up a barrier to entry to any smaller competitor that might dislodge these platforms from their high, and lucrative, perch. I used to believe that the better fix would be to amend section 230 to shield filtering/removal, but not selective promotion, but TikTok has shown (rather cleverly) that selective filtering/removal can be just as effective as selective promotion of content.
When you have a feed with a million posts in it, they are. There is no practical difference between removing something and putting it on page 5000 where no one will ever see it, or from the other side, moderating away everything you wouldn't recommend.
Likewise, if you have a feed at all, it has to be in some order. Should it show everyone's posts or only people you follow? Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?
There is no intrinsic default. Everything is a choice.
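The point that every ordering is an editorial choice can be made concrete. A minimal sketch, with hypothetical types and fields (not any platform's actual code):

```typescript
// Hypothetical post record; the fields are illustrative, not any real API.
interface Post {
  id: string;
  authorFollowed: boolean;  // does the viewer follow the author?
  postedAt: number;         // unix timestamp
  likes: number;
}

// A feed is just posts plus a chosen ordering. Each function below is a
// different editorial choice; neither one is a neutral "default".
function chronological(posts: Post[]): Post[] {
  return posts
    .filter(p => p.authorFollowed)               // choice: followed accounts only
    .sort((a, b) => b.postedAt - a.postedAt);    // choice: newest first
}

function byPopularity(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => b.likes - a.likes); // choice: most-liked first, followed or not
}
```

Note that even the "simple" chronological feed embeds two decisions: which accounts are eligible at all, and what counts as first.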
While I agree "There is no intrinsic default. Everything is a choice." and "There is no practical difference between removing something and putting it on page 5000" and similar (see my own recent comments on censorship vs. propaganda):
> Should it show everyone's posts or only people you follow?
Only people (well, accounts) you follow, obviously.
That's what I always thought "following" is *for*, until it became clear that the people running the algorithms had other ideas: they collectively decided both that I must surely want to see content I didn't ask for and that I must not see the content I did ask for.
> Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?
If they want to supply a feed of "Trending in your area", IMO that would be fine, if you ask for it. Choice (user choice) is key.
"We have a million pieces of content to show you, but are not allowed to editorialize" sounds like a constraint that might just spark some interesting UI innovations.
Not being allowed to use the "feed" pattern to shovel content into users' willing gullets based on maximum predicted engagement is the kind of friction that might result in healthier patterns of engagement.
I remember back in the day when Google+ had just launched. It had promoted content: content not from my 'circles' but random other content. I walked out and never looked back.
Of course, Facebook started doing the same.
The thing is, anything from people not explicitly subscribed to should be considered advertorial and the platform should be responsible for all of that content.
Early days facebook was simple:
1) You saw posts from all people you were connected to on the platform.
2) In the reverse order they were posted.
I can tell you it was a real p**r when they decided to do an algorithmic recommendation engine, as the experience became way worse. Before, I could follow what my buddies were doing; as soon as they made this change, the feed became garbage.
The point is that they don't have to be. You can moderate (scan for inappropriate content, copyrighted content, etc) without needing to have an algorithmic recommendation feed.
This is the first time I've ever heard somebody claim that section 230 exists to deter child predators.
That argument is of course nonsense. If the platform is aware of apparent violations including enticement, grooming etc. they are obligated to report this under federal statute, specifically 18 USC 2258A. Now if you think that statute doesn't go far enough then the right thing to do is amend it, or more broadly, establish stronger obligations on platforms to report evidence of criminal behavior to the authorities. Either way Section 230 is not needed for this purpose and deterring crime is not a justification for how it currently exists.
The final proof of how nonsensical this argument is, is that even if the intent you claim was true, it failed. Facebook and Instagram are the largest platforms for groomers online. Nazi and white supremacy content are everywhere on these websites as well. So clearly Section 230 didn't work for this purpose. Zuck was happy to open the Nazi floodgates on his platforms the moment a conservative President got elected. That was all it took.
The actual problem is that Meta is a lawless criminal entity. The mergers which created the modern Meta should have been blocked in the first place. When they weren't, Zuck figured he could go ahead and open the floodgates and become the largest enabler of CSAM, smut and fraud on earth. He was right. The United States government has become weak. It doesn't protect its people. It allows criminal perverts like the board of Meta and the rest of the Epstein class to prey on its people.
Reporting blatant criminal violations is not the same thing as moderating otherwise-protected speech that could be construed as misleading, offensive, or objectionable in some other way.
Section 230 being repealed doesn't mean that any moderation will be treated as publication. The ambient assumptions have changed a lot in the past 30 years. Now nobody would think that removing spam makes you liable as a publisher.
Algorithmic feeds are, prima facie, not moderation, not user-created content and do not fall under the purview of section 230.
> As interpreted by some courts, this language preserves immunity for some editorial changes to third-party content but does not allow a service provider to "materially contribute" to the unlawful information underlying a legal claim. Under the material contribution test, a provider loses immunity if it is responsible for what makes the displayed content illegal.[1]
I'm not a lawyer, but idk that seems pretty clear cut. If you, the provider, run some program which does illegal shit then 230 don't cover your ass.
If there is an algorithm, the social media platform is exactly as responsible for the content as any publisher.
If it is only a straight chronological feed of posts by actually followed accts, the social media platform gets Section 230 protections.
The social media platforms have gamed the law, gotten legitimate protections for/from what their users post, but then they manipulate it to their advantage more than any publisher.
>>the solution is real easy, section 230 should not apply if there's an recommendation algorithm involved
>>treat the company as a traditional publisher
>>because they are, they're editorialising by selecting the content
>>vs, say, the old style facebook wall (a raw feed from user's friends), which should qualify for section 230
You can draw a fairly clear line from the corporate response to cigarettes being regulated through to the strategy for climate change and social media/crypto etc.
The Republicans are basically a coalition of corporate interests that want to get you addicted to stuff that will make you poor and unhealthy, and to undermine any collective attempt to help.
The previous vice-president claimed cigarettes don't give you cancer, and the current president thinks wind turbines and the health problems caused by asbestos are both hoaxes. This is not a coincidence.
The two big times the Supreme Court flexed their powers were to shut down cigarette regulation by the FDA and Obama's Clean Power plan. Again, not a coincidence.
That's because we / our (USA) country is owned. As Carlin said, "It's a big club. And you ain't in it."[0]
But what isn't properly addressed when people link to this is that the real issue he's discussing is our failing educational system. It's not a coincidence that the Right attacks public schools and the orange man appointed a wrestling lady to dismantle the dept of education.[1]
Aside: I was in the audience for this show (his last TV special). Didn't know it'd be shot for TV. Kind of sucked, actually, cause they had lights on the audience for the cameras and one was right in my eyes. Anyway, a toast to George Carlin who was ahead of his time and would hate how right he's been.
"Democracy" itself was not at stake in the American Civil War because both sides practiced it. The Confederacy was/would have been a democracy analogous to ancient Athens--one where slaves (and women) were excluded from political participation. The vast majority of Confederate politicians, including Jefferson Davis, came from the "Democratic Party"--which, true to its name, championed enfranchisement for the "common (white) man" as opposed to control by elites.
Perhaps a better example is the "Tobacco War" of 1780 in the American Revolution, where Cornwallis and Benedict Arnold destroyed massive quantities of cured tobacco to try to cripple the war financing of the colonies.
Control of tobacco in Latin/South America since the 1700s (Spain's second-largest source of imperial revenue after precious metals) also had a directly stifling effect on democratic self-governance.
I think the point is a significant number of human beings were not participating in democracy at the time because their forced labor was critical to propping up the tobacco (and other) industries.
It’s hard to claim it’s actually democracy when it only exists after stripping the rights from a large section of people who would disagree with you, if they had the power to do so.
Social media cannot "threaten democracy". Democracy means that we transfer power to those who get the most votes.
There's nothing more anti-democratic than deciding that some votes don't count because the people casting them heard words you didn't like.
The kind of person to whom the concept of feed ranking threatening democracy is even a logical thought believes the role of the public is to rubber stamp policies a small group decides are best. If the public hears unapproved words, it might have unapproved thoughts, vote for unapproved parties, and set unapproved policy. Can't have that.
That trivial definition sees limited use in the real world. Few countries that are popularly considered democratic have direct democracy. Most weigh votes geographically or use some sort of representative model.
Most established definitions of democracy go something like this, heavily simplified:
1. Free media
2. Independent judicial system
3. Peaceful system for the transfer of power
The most popular model for implementing (3) is free and open elections, which has yielded pretty good results in the past century where it has been practiced.
Considering that social media pretty much is the media for most people, it is a heavily concentrated power, and if there can be any suspicion of it being in cahoots with established political power, and thus non-free, surely that is a threat to democracy almost by definition.
Let's be real here: It has been conclusively shown again and again that social media does influence elections. That much should be obvious without too much in the way of academic rigor.
Of course social media influences elections. Direct or indirect, the principle of democracy is the same: the electorate hears a diversity of perspectives and votes according to the ones found most convincing.
How can you say you believe in democracy when you want to control what people hear so they don't vote the wrong way? In a democracy there is no such thing as voting the wrong way.
Who are you to decide which perspectives get heard? You can object to algorithmic feed ranking only because it might make people vote wrong --- but as we established, the concept of "voting wrong" in a legitimate democracy doesn't even type check. In a legitimate democracy, it's the voting that decides what's right and wrong!
You write as though the selection of information by algorithmic feeds is a politically neutral act, which comes about by free actions of the people. But this is demonstrably not the case. Selecting hard for misinformation which enrages (because it increases engagement) means that social media are pushing populations further and further to the right. And this serves the interest of the literal handful of billionaires who control those sites. This is the unhealthy concentration of power the OP writes about, and it is a threat to democracy as we've known it.
By that logic, the New York Times also threatens democracy. Of course, it doesn't, and that's because no amount of opinion, injected in whatever manner and however biased, can override the role of free individuals in evaluating everything they've heard and voting their conscience.
You don't get to decide a priori certain electoral outcomes are bad and work backwards to banning information flows to preclude those outcomes.
No. The difference is that the New York Times has not been specifically engineered to be an addictive black hole for attention. Algorithmic social media is something new. Concentration of press power has always been a concern in democracy, and many countries have sought to regulate the ability of individuals to wield that power. We get to choose, as a society, the rules by which we engage with one another. Algorithmic social media is an abuse of basic human cognitive processing, and we could, if we wanted, agree that it's not allowed in public. It's not a question of censoring particular information or viewpoints. The point here is that the mechanism of distribution itself is unhealthy.
Why change section 230? You can just make personalized algorithmic feeds optimized for engagement illegal instead, couldn't you? What advantage does it have to mess with 230, wouldn't the result be the same in practice?
230 is an obvious place to say “if you decide something is relevant to the user (based on criteria they have not explicitly expressed to you), then you are a publisher of that material and are therefore not a protected carriage service.”
The solution must be a social one: we must culturally shun algorithmic social media, scold its proponents, and help the addicted.
We aren't going to be able to turn off the AI content spigot or write laws that control media format and content and withstand (in the US) 1st amendment review. But we can change the cultural perception.
We aren't going to stop algorithmic social media through sheer force of public will without government involvement.
Social communities aren't nimble. There's a ton of inertia in a social media platform. People have their whole network, all their friends, on the platform; and all their friends have their friends on the platform; etc. So in order to switch from one platform to another, you need everyone to switch at the same time, which is extremely hard.
Facebook started out pretty nice. You saw what your friends and the pages you followed posted, in chronological order. It had privacy issues, but it worked more or less how we'd want it to, with no algorithmic timeline. But they moved towards being more and more algorithmic over time. Luckily, Facebook was bad enough that it has gotten way less popular, but that has taken a long time.
Twitter is the same. It started out being the social media platform we want: you saw what the people you followed posted or boosted, chronologically. No algorithmic feed. But look where it is now. Thankfully, Musk's involvement has made plenty of people leave, but there were a lot of years where everyone, regardless of political leaning, was on Twitter with an algorithmic timeline. Even though a lot of people complained about the algorithmic timeline when it was introduced, they stayed on Twitter because that's where everyone they knew was.
YouTube too. For a long time, the only thing you saw on YouTube was what the people you'd subscribed to posted. It built up a huge community and became the de facto video sharing platform as a nice non-algorithmic site, and then they turned the key and went all in on replacing the subscription feed with the algorithmic feed. Now they've even adopted short-form video, where you aren't even supposed to pick which video you want to watch; you're just supposed to scroll. And replacing YouTube is hard due to its momentum.
So even if everyone agrees that algorithmic feeds are terrible and move to a non-algorithmic platform over the next few decades, what do you propose we do when that new platform inevitably shifts towards being an algorithmic platform? Do we start a new multi-decade long transition to yet another platform?
It's really simple in the US: stop granting exemptions for the harm the content causes. Social media _is_ publishing. Expecting people to 'eat their vegetables' when only fast food is on offer is unrealistic, and flies in the face of all we know about the environmental drivers of public health.
If your tree is so weak that a single breeze can knock it over, why blame the wind? Disclaimer: I hate social media of all kinds; it's just that you're missing the forest.
The force of social media these past 20 years has been massive. We're talking radical change to the structure of information flow in society. That's not just a small breeze.
> we will look back at the algorithmic content feed as being on par with leaded gasoline or cigarettes in terms of societal harm
I agree 100%.
However, I think the core issue is not the use of an algorithm to recommend or even to show stuff.
I think the issue is that the algorithm is optimized for the interests of a platform (max engagement => max ad revenue) and not for the interests of a user (happiness, delight, however you want to frame it).
But the people in control of mechanisms of power like social influence care only about money, so the voices of people who have other values become irrelevant.
If anything the algorithmic dopamine drip is just getting started. We haven't even entered the era of intensely personalized ai-driven individual influence campaigns. The billboard is just a billboard right now, but it won't be long before the billboard knows the most effective way to emotionally influence you and executes it perfectly. The algorithm is mostly still in your phone.
It’s crazy (but true) to think that by slowly manipulating someone’s feed, Zuck and Musk could convert people’s religions, political leanings, personal values, etc with little work. In fact, I would be surprised if there was NOT some part of Facebook and Twitter’s admin or support page where a user’s “preferences” could be modified i.e “over the next 8 months, convert the user to a staunch evangelical Christian” etc
My wife was complaining about far right knuckle draggers turning up in her feed. I assume the algorithm was shovelling more of them at her because she was rubbernecking. I told her to try a "block every time" approach. It took about two weeks until her feed was (mostly) free of them but it still throws one at her now and again.
I offer this as a data point about how hard it is to turn a polluted feed around. But I'm now wondering if "feed cleaning" is a service that could be automated, via LLM.
What next? The intellectual dark web?
I think we can have a free market of ideas or whatever you’re fetishising without it meaning that I can’t sit on the couch and open an app to see some family photos without it being intermingled with some loser saying that trans people should be hanged on the street.
And you know for a fact that I am not exaggerating. This is where the current political discourse is at.
Can I please have the freedom to do that without the lecture?
That sort of rage bait is literally targeted to rile up people sitting on the opposite side of the kind of people watching that other media site that rhymes with socks. It’s all fake bullshit algorithmically optimized to divide.
Everybody thinks their tribe is immune to this sort of stuff but it isn’t. It’s all the same nonsense packaged for different echo chambers.
At the end of the day, everybody is human. It isn’t us vs them, it’s just us.
The worst to me is the way people dehumanize other people who don't agree with them.
The other side politically doesn't just have different views, they are barely human knuckle draggers. Basically neanderthals, so who cares if they go extinct.
Trolls do as well. Very often if a comment is "bad", it comes from a relatively new account. Then it gets banned and a new account is created. Technically it's ban evasion, but dang doesn't really want to change anything at this point.
My wife uses the app, hence the "consistently block the assholes" approach. But if you're willing to stick to the website I can actually offer you this. Write a browser plugin that redirects you to "/?filter=all&sk=h_chr" every time you land on "/". That's what I use for myself.
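For anyone who wants to wire that up, here is a minimal sketch of the redirect logic. The `?filter=all&sk=h_chr` query string is the one from the comment above; the function name and the userscript wiring are illustrative, not any particular extension's API:

```typescript
// Rewrite a bare landing path to the chronological-feed URL.
// Written as a pure function so the logic is testable outside the browser.
function rewriteFeedUrl(pathname: string, search: string): string | null {
  // Only rewrite plain "/" landings that aren't already on the filtered feed,
  // otherwise the redirect would loop forever.
  if (pathname === "/" && !search.includes("sk=h_chr")) {
    return "/?filter=all&sk=h_chr";
  }
  return null; // leave every other URL alone
}

// In a userscript or extension content script, you would wire it up like:
//   const target = rewriteFeedUrl(location.pathname, location.search);
//   if (target !== null) location.replace(target);
```

The loop guard matters: without checking for `sk=h_chr` in the query string, landing on the rewritten URL would itself trigger another redirect.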
I did this on Reddit to try and get a useful /r/all, and it ended up being mostly cats. I never look at or vote on cat pictures, but just by removing political serial posters, that's what I got.
Yeah, there's always someone saying "Just delete your Facebook account" as if that solves the underlying "Facebook is actively encouraging divisiveness" problem.
My mother-in-law's Facebook feed is full of fake news - from the left, politically. My own mom doesn't have a Facebook, but she still manages to balance out the universe with fake news from the right on her YouTube feed.
The internet is a mistake for a lot of people and I don’t think we can fix that.
I think the feed depends on the posts you read, even accidentally.
My feed is free from extreme left content but I didn't have to block anything. Simply by not reading that kind of content, the algorithm knows I am not interested.
Yes, hence my comment about "rubbernecking". If you tend to slow down for car crashes, the algorithm shows you more car crashes. It amplifies our worst instincts.
That effect also applies when you try to block car crashes. That happened to me years ago with the same genre of videos, like car crashes and people falling and hurting themselves a little bit.
> My wife was complaining about far right knuckle draggers turning up in her feed.
This is what is so difficult in facebook vs. HN. Here if people post angry insulting rants, it gets collectively downvoted to oblivion. That is effective.
On Facebook there is no equivalent. All I can do is block an individual, but I personally have to do it for every offensive person, which is for practical purposes impossible. Facebook needs a downvote button and an option to hide any comment which has N downvotes.
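The hide-at-N-downvotes idea is simple to state in code. A hypothetical sketch (Facebook exposes no such API, so the record type and function are invented for illustration):

```typescript
// Hypothetical comment record with a community downvote count.
interface ThreadComment {
  text: string;
  downvotes: number;
}

// Hide any comment that has accumulated at least `threshold` downvotes,
// mirroring how HN collectively buries angry, insulting replies.
function visibleComments(comments: ThreadComment[], threshold: number): ThreadComment[] {
  return comments.filter(c => c.downvotes < threshold);
}
```

The point is that the filtering work is done once by the crowd's votes, instead of once per user per offensive account.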
"I'm not interested" and "Don't show posts from this person" is the dowvote button for the algorithm. If you use those functions liberally your feed gets pretty clean and aligned.
I used to belong to a FB nostalgia group that was being relentlessly farmed by Indonesian accounts. The group members (and even the admins) weren't sophisticated enough to spot what was happening. They were absolutely engaging with the spam. They love AI colorizations too.
I don't trust "facebook users" as a group to provide a signal I consider useful.
HN model works, people do downvote for you, if you are just like everybody else here. You indicate that by visiting HN.
In a more universal platform such as Facebook, you need to indicate who you are by subscribing to specific groups or downvoting some of the content yourself. Just visiting Facebook is not enough. Once you signal who you are, you also benefit from other people just like you downvoting content you wouldn't like, for you.
Facebook sucks but Reddit's algorithm is even worse. The only positive thing I will say in favor of Reddit is you can turn their algorithm off as Facebook has consistently denied its users a chronological feed of their friends.
How do other people use reddit? I'm subscribed to a bunch of subreddits and that's the content I see. Reddit is honestly one of the more positive parts of the web for me.
Which subreddits do you frequent? My experience of any coding subreddit is lots of posturing and few actually useful answers or discussions.
My Reddit feed is predominantly my local community subreddit and various hobbies/activities - mountain biking and cycling stuff, outdoors stuff, geology, swimming, some ttrpg stuff - and then interspersed with a few more random things that I try to keep with more of a positive tilt - todayilearned, bestof, EarthPorn - that sort of thing.
I do have a few programming subreddits, rust, sveltekit, and adventofcode, which mostly seem more newsy or avenues to help or learn about developments in that area. /r/rust does have an annoying tendency to get posts of some person new to rust telling people who are presumably already familiar with rust about what an amazing and transformative language it is, but those are pretty easy to identify and skip by.
I think it is a mistake to think about people as being helpless consumers of the algorithm. The OP's mom no doubt makes some intentional choices in her life that make a difference. It just doesn't help that the algorithm will lean into whatever will get the most engagement.
> The shit gets dumped on people who are lonely, have a grudge, feel left out.
No, it gets dumped on pretty much everybody.
My Insta consists of travel and food pictures, and the people I follow are friends IRL and a very few travel/food influencers. So my feed consists of friends, travel/food content, dirty jokes thanks to my buddy who keeps sending them, and an ever increasing proportion of ads.
But both my "suggested reels" and the search view are exactly what the OP was complaining about: a non-stop parade of thirst traps by "content creators" pitching their OnlyFans accounts.
I mostly use Facebook by clicking on email notifications which are always real posts or comments by my real life friends. Some of them are a bit political but I just ignore those.
I just tried scrolling down the homepage and mine doesn't have any extreme political crap. However, it does have local political crap about the popular local issues (mostly bike lanes). Most of it is just harmless stuff like dashcam videos of bad local drivers, historic photos of my city, local issues like city infrastructure problems, curiosities like rare animals or space photos, and ads - tons and tons of ads.
I think it probably depends what you've engaged with indeed.
I find Facebook and Instagram are both completely polluted by that type of content. Facebook used to try to feed me right-wing rage bait; I think actively blocking finally cleared my feed of most of it, and now it's all thirst-trap stuff. At least it's figured out I'm gay, unlike Instagram.
Assuming you mean crap like “school book bans”, climate change denialism, or some dude coal rolling… You realize that is actually bait targeted at you specifically right? It wouldn’t work as bait if it was shit you agreed with! It’s actually left-wing rage bait!
If you were immersed in the “right wing echo chamber” your flavor of rage bait would be about a school introducing a neutral bathroom policy, or some college student struggling to define what a woman is. Every Christmas you’d see articles about cities banning Christmas lights in town hall and Starbucks no longer using Christmas themed cups. It’s all fucking made up nonsense. No real human acts the way these algorithms portray us.
Honestly even ‘right-wing’ and ‘left-wing’ are part of the trick. Real people don’t exist on a binary axis. We’re all a weird mess of values and experiences that don’t fit neatly into two boxes. But the algorithm needs two teams, because you can’t sell outrage without an enemy.
The first step to detox is seeing everyone as human not as a contrived label.
I actually mean the second kind of stuff - I don't know why it fed it to me except that the family connections I have on social media are all on FB and they tend to lean more conservative/evangelical.
People will engage with and promote that stuff even without a recommendation algorithm. Lots of subreddits are full of ragebait if you look at the most-upvoted posts.
>These platforms need to be shut down and people with a conscience need to stop using them, regardless of their own positive experiences, to deny them the power of network effects and their impact on the vulnerable.
In places where the media is heavily biased toward one political idea, online platforms like Facebook can be a breath of fresh air: people can share their ideas, voice their thoughts and concerns, and express their opinions.
This is invaluable for democracy, and it has a real-world effect, since it shapes elections.
People no longer depend solely on the media to form an informed opinion, and propaganda is much less effective.
And yet the algorithm has spent the last 3 or more weeks pumping MAGA content, county and state Republican party pages, and conservative Christian pages. There's a hand on the dials of "the algorithm".
> The shit gets dumped on people who are lonely, have a grudge, feel left out.
Like teenagers.
> The people who are in deep likely need years of deprogramming and therapy to recover which they will never get.
Like a cult. Current social media is like a cult that preys on teenagers. No wonder they want to ban it for young people. American government trying to forcefully spread its cult via the freedom.gov proxy is the vile cherry on top.
This is a quantitative change for Trump. He went from preying on a few kids to preying on all the kids in the world. He must feel ecstatic.
Using a fake identity and hiding behind a language model to avoid responsibility doesn't cut it. We are responsible for our actions including those committed by our tools.
If people want to hide behind a language model or a fantasy animated avatar online for trivial purposes, that is their free expression - though arguably using words and images created by others isn't really self-expression at all. It is very reasonable for projects to require human authorship (perhaps tool-assisted), human accountability, and human civility.
It's been a long time and our memory only goes so far back. I'm not even that old, but the time between WWII and my birth is waaaaay less than my current age. Jimmy Doolittle was still hosting Christmas specials on TV when I was a kid. Nobody knows who TF that even is now. I doubt half of America has even heard of the Third Reich. Sure, they know that "Nazi" is some kind of insult, but the rest is history forgotten. The last educational film on the matter was Indiana Jones III.
Those of us who remember history will continue to fight, and our numbers aren't small. Maybe one day we can begin to repair the enormous national and global damage that has occurred.
It's pretty much the UN concentration-camp conspiracy theory that rightwing nutters have been pushing for decades, except that now it's their guys doing it so it's all OK.
> As with most of the Nazis’ murderous actions, the deportation of German Jews was improvised and haphazard. The increased numbers of Jews arriving in the ghettos of eastern Europe led to severe overcrowding, unsustainable food shortages and poor sanitation. This, in combination with the slow progress in the German invasion of the Soviet Union, convinced the Nazis that a ‘solution’ to the ‘Jewish problem’ needed to be organised sooner than had been originally envisaged.
Fully agree. I have no issues with the social media laws as they don't impact my family at all except for YouTube. Accounts under Family Link control should have been allowed as they are overseen by an 18+ parent.
YouTube should have voluntarily removed Shorts and the front page, or made them available as a parental control, to appease the regulator. When I wrote to the minister, they used YouTube's addictive algorithms as justification for including it as social media, which I do agree with.
We had curated kids' logins with age restrictions, subscriptions, and ad-free viewing under Premium, and also YouTube Music with individual playlists they used for instrument practice etc. We had to shift music platforms. I know we can replicate a lot of this with special apps and browser extensions, but this was a single cross-platform solution that was working for responsible parents. To be fair, it is partly YouTube's fault for prioritizing Shorts and watch time over quality.
Fully agree, responsible parents should not allow their kids (including teenagers) to use Shorts or TikTok. It is a shame that YouTube does not support blocking that crap. It is obvious "Don't be evil" is not Google's motto anymore.
You can create two Google accounts and do the parental control yourself. You can also use uBlock or other browser add-ons, and of course, NewPipe. YouTube should have more settings for this - it's clearly going down the drain - but it's not like you can do nothing.
Honestly, it's one of the reasons I don't want to pay for YouTube Red: why would I pay for "no ads" when I still feel like I'm the product, given my complete lack of control over the algorithm and the user experience?
> I have no issues with the social media laws as they don't impact my family
This is probably the most common reason why our society is in the shitter wrt. laws. I find it problematic that people only care after they've been shown they are affected. Look at any anti-privacy law: no one cares until they get thrown into jail for posting memes online.
> Youtube should have voluntarily removed shorts and the front page or made them available as a parental control to appease the regulator.
I honestly don't "get" the hate for YouTube Shorts:
While I clearly prefer long-form videos on YouTube, the videos in my Shorts feed are, well, simply more short-form. Admittedly, because of the short length they are often more "shallow", but never below a level I would find annoying or unacceptable (and I think I'm quick to make such strong judgements). So, at least judging from my own Shorts feed, I can barely find any video I would consider objectionable if I were a parent. It's quite possible the YouTube algorithm detected very quickly that I belong to a demographic not interested in the particular kinds of videos that are perhaps common on YouTube Shorts, and thus simply doesn't show them to me, so I'm simply not aware of them.
So, seriously: why the huge hate for YouTube Shorts in particular concerning parenting?
> We had to shift music platform. I know we can replicate a lot of this ...
As far as practical solutions go, a cheap VPS and a WireGuard connection should let you continue with business as usual. From YouTube's perspective, maybe you moved to NZ or something.
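For anyone curious what that looks like in practice, here's a rough sketch of a client-side WireGuard config, not a definitive setup: the keys, the VPS address (`203.0.113.10` is a documentation-reserved example IP), and the tunnel addresses are all placeholders you'd substitute for your own.

```ini
# Client-side /etc/wireguard/wg0.conf (placeholders throughout --
# substitute your own keys and your VPS's real address)
[Interface]
PrivateKey = <client-private-key>   # generate with: wg genkey
Address = 10.0.0.2/24
DNS = 1.1.1.1

[Peer]
PublicKey = <vps-public-key>
Endpoint = 203.0.113.10:51820       # example VPS IP and WireGuard port
AllowedIPs = 0.0.0.0/0              # route all traffic through the tunnel
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0`. The VPS side needs a mirrored `[Peer]` entry for the client plus IP forwarding and NAT enabled so traffic exits from the VPS's address, at which point YouTube (and everything else) sees the VPS's location instead of yours.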
> they used YouTube's addictive algorithms as justification for including them as social media
Did they give YouTube the option of swapping out those algorithms to be exempted from the new law? It seems like this law was perhaps not a bad idea, but the execution was poorly thought out.
I won't be chasing an increasingly shitty online experience. I imported Chromecasts before they were ever released here and had them connected via VPN to a US VPS before services like Netflix went global. The pricing and content were really good value back then. Increasingly, the relationship with big companies feels abusive. We are moving more towards self-hosting, using physical media, and changing lifestyle. Disconnecting isn't so bad.
Australia does not have a bill of rights. Our freedoms are guaranteed by our participation in the electoral process which is very high. This government governs with a large majority and the social media legislation is broadly popular with parents and older people.
The law of unintended consequences will apply. The legislation has been written in such a way that there is some flexibility in its application, and there are some safeguards, but it's not directly addressing some of the biggest social harms. Its primary purpose (despite the conspiracies) seems to be populism and being seen to do something for the kiddies.
The much bigger social problem is gambling, which is out of control here. The second, related problem is the gambling industry's techniques and studies being used in games and social media to increase engagement, which is what is messing with people's heads. The government does not dare touch the gambling industry or stop algorithmic placement of content. That would cause immense damage to company profits and create lobbying pressure.
So far from my experience this has been kind of low impact for adult users with existing accounts. Social media companies obviously have extremely good demographic data on their existing users as targeted marketing and influence is their core business.
Unfortunately this legislation hasn't addressed any of my real concerns with social media (it's the algorithms and engagement farming) and it is creating new problems.
Like Pascal's wager, that absurdity is an appeal to stupidity. I expect the people running these companies are more interested in a different type of wager: one where they risk the future of the company to pump shares and make a quick profit.
This is a question of priorities. Identify a problem, decide to fix it, then execute. It isn't about the particular solutions. Australia's gun control would not translate to a country like the USA and perhaps neither would its health care. First decide to put a person on the moon. Then execute. Only one country did that. It isn't that they can't solve problems like school shootings or affordable healthcare. There is no real will to do so. Not sure why exactly. It is a very strange place that defies expectations of how a developed country would behave.