>Facebook by its nature takes stances on deeply political topics.
Why is that "by its nature"? I don't think that's "by its nature" at all. Phone companies and ISPs facilitate communication between people, but that doesn't necessitate they take political stances on what communication to allow. Why should Facebook? If certain language should be restricted, then laws should be written restricting said language and Facebook should comply with those laws. Nothing about Facebook's nature forces them to go beyond that and act as de facto language legislators.
Because Facebook, by its nature, is designed to filter content. It turned everyone and everything into a content generator, even down to the level of "Jimmy liked a post by Burger King!". It gives everyone access to this unfathomable ocean of content, and then filters it down to something a person can consume.
Some of the filtering is based on what the user wants to see, some of it is based on some notion of how "good" a piece of content is (scored by likes and engagement numbers), some of it is from advertisers paying to have their content make it through the filter, and some of it is Facebook deciding what should be seen and what shouldn't (mostly driven by their desire to keep you on the platform). Every single thing you see on Facebook has made it through a huge filter that ultimately decides if it's something you should see or not. And the inevitable outcome of building a gigantic what-information-do-you-get-to-see machine is that there are many, many parties trying to influence the machine.
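To put it concretely, every feed like this boils down to one scoring function. A toy sketch, where every field name, weight, and signal is invented purely for illustration (nothing like Facebook's real system):

    # Toy feed filter. All names, weights, and signals here are made up;
    # the point is only that one scoring function decides what you see.
    def feed_score(post, viewer_affinity):
        score = 0.0
        score += 1.0 * post["likes"]               # "goodness" measured by engagement
        score += 50.0 * viewer_affinity            # what this user wants to see
        score += post.get("ad_spend", 0.0)         # advertisers paying through the filter
        score *= post.get("platform_boost", 1.0)   # the platform's own thumb on the scale
        return score

    posts = [
        {"id": "jimmy_liked_burger_king", "likes": 3},
        {"id": "sponsored_post", "likes": 10, "ad_spend": 200.0},
        {"id": "outrage_bait", "likes": 900, "platform_boost": 1.2},
    ]

    # Filter the unfathomable ocean down to what one person is shown.
    ranked = sorted(posts, key=lambda p: feed_score(p, viewer_affinity=0.5), reverse=True)
    print([p["id"] for p in ranked])  # ['outrage_bait', 'sponsored_post', ...]

However the real weights are chosen, everything you see came out the top of some ranking like this.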
This is a fair point, but I think the political aspect is an "emergent property" if you will, rather than an inherent one.
If Facebook limits the filtering to engagement, then it isn't the fault of Facebook that political content is engaging. That's just human nature. Disasters, outrage, politics, polarizing topics - these are all popular topics both online and off-line, and spread quickly as town gossip well before Facebook.
It is only when Facebook steps in and says that particular topics need to be exceptions to the filtering rules that apply to everything else that they make themselves into a political actor.
For instance, let's say that the news feed showed you content based purely on number of likes. If political posts get lots of likes, that isn't Facebook's problem. If the same ranking rules apply to all posts (# of likes) then they would remain neutral. As soon as Facebook says "content from x person will have its ranking artificially changed to reduce/increase engagement with it", thereby making an exception to the rule that applies to everything else, they have now become a political actor.
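In code, the gap between the neutral rule and the exception is a single lookup table. A sketch (the names and numbers are hypothetical):

    # Neutral rule: every post is ranked by the same signal.
    def rank_neutral(posts):
        return sorted(posts, key=lambda p: p["likes"], reverse=True)

    # The exception: a per-author multiplier that artificially changes rankings.
    # The table is hypothetical; the point is that its mere existence is an
    # editorial (and therefore political) decision.
    ADJUSTMENT = {"x_person": 0.1}  # suppress; use > 1.0 to boost

    def rank_with_exceptions(posts):
        return sorted(
            posts,
            key=lambda p: p["likes"] * ADJUSTMENT.get(p["author"], 1.0),
            reverse=True,
        )

    posts = [
        {"author": "x_person", "likes": 500},
        {"author": "someone_else", "likes": 200},
    ]
    print([p["author"] for p in rank_neutral(posts)])          # x_person first
    print([p["author"] for p in rank_with_exceptions(posts)])  # someone_else first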
How could it not be the fault of Facebook when Facebook designed the algorithms that are creating all of the divisiveness on Facebook?
If I build a bridge intending it to stay up and it happens to fall down 6 months later, I'm responsible for it. Facebook created an algorithm that divides people politically and that surfaces content that is provably fictional. So they should be held responsible for it regardless of their intent. They don't get to invoke "common carrier" status when they're writing software that makes decisions about what you do or don't see. What makes a telephone a "common carrier" is the fact that the telephone doesn't decide who you call.
It doesn't matter whether it's software or a human. What matters is that decisions are being made by Facebook about what you do or don't see.
Whether or not it is intentional is immaterial to the effect. The law doesn't care about your intent. I wouldn't intentionally dump toxic waste into a river but I'm liable for dumping whether I intended to or not. Mark Zuckerberg can't just throw up his hands and go "oops it's software I can't help it" when it's his company that made all of the decisions about how the software works.
This isn't correct. The law in most modern democracies, as far as I'm aware, is very concerned with intent.
This is why we generally define murder and manslaughter as distinct.
Murder is the unlawful killing of another human without justification or valid excuse, especially the unlawful killing of another human with malice aforethought.
Truth is not the point of social media. Facebook isn't an encyclopaedia. Humans already gravitate towards groups that validate their opinions. It makes sense for Facebook to show people content they want to see. It is incredibly Orwellian to say "Facebook should only show people content which corrects their wrong views". Facebook isn't a social conditioning tool. I find the alternative to "misinformation" much scarier. Misinformation and being mistaken are human flaws we will always have, and therefore any social groups will have them by default. Using technology as a tool to condition people out of their views against their will is scary.
The information is out there. There are reliable news sources. There are reliable databases and encyclopaedias and journalism. If people choose not to read them then that's on them.
>It makes sense for Facebook to show people content they want to see.
The problem is, Facebook doesn’t show people content that they want to see. They show people content that they will engage with. That’s a very important distinction.
HN algorithm/moderators actually explicitly do the opposite: if a thread gets too many comments too quickly, it’s ranked downward. The assumption is that too many comments too quickly indicates a flamewar, and the HN moderators want to keep a civil discussion. The approach Facebook takes is to “foster active discussion”, which on the Internet typically means a flamewar. Nothing generates engagement like controversial political views. So that’s what Facebook’s algorithm/moderators show to their users.
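Neither of these is anyone’s published formula, but the contrast is roughly this (both functions are caricatures):

    # Caricature of the two ranking philosophies; neither is a real formula.
    def hn_style_rank(story):
        base = (story["points"] - 1) / (story["age_hours"] + 2) ** 1.8
        heat = story["comments"] / max(story["points"], 1)
        return base / (1.0 + heat)   # too many comments too quickly => ranked DOWN

    def engagement_style_rank(story):
        return story["points"] + 3.0 * story["comments"]   # the same heat => ranked UP

    hot = {"points": 40, "comments": 300, "age_hours": 1}
    calm = {"points": 40, "comments": 15, "age_hours": 1}
    print(hn_style_rank(hot) < hn_style_rank(calm))                  # True: flamewar penalized
    print(engagement_style_rank(hot) > engagement_style_rank(calm))  # True: flamewar promoted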
Facebook absolutely is a social conditioning tool; it’s designed from the ground up to show people content that stirs their emotions enough to click “like” or the mad face icon, or even leave a comment and wait around until someone replies back.
My point is that this is what happens in real life. People will continue to engage in stuff that they want to engage in. Facebook doesn't force people to engage in anything.
I think it is far worse to attempt to condition people by showing them things *they wouldn't otherwise engage with*. What is scarier, showing somebody something they want to engage in based on their past behaviour, or showing somebody something they wouldn't have otherwise seen if you hadn't gone out of your way to shove it down their throat?
People seem to want Facebook to make people more placid. Oh, you have extreme views? Here, let's condition that out of you by only showing you more moderate stuff. Oh, you think x is bad? Let's not show you anything to do with x so that you'll hopefully forget about it and not engage with that part of your brain any more.
Like I've already said, this alternative is far more Orwellian and far more of a tool for social control than simply optimising for engagement.
> I think it is far worse to attempt to condition people by showing them things *they wouldn't otherwise engage with*. What is scarier, showing somebody something they want to engage in based on their past behaviour, or showing somebody something they wouldn't have otherwise seen if you hadn't gone out of your way to shove it down their throat?
I don't think that makes sense, and I don't think that's what anyone's advocating for.
If you friend someone, or follow a page, or whatever, you are explicitly saying "I want to hear what this person/group has to say". They aren't saying "I want FB to carefully curate what this person/group says in order to increase my engagement on FB". FB shouldn't promote, hide, or reorder anything coming from someone who I've explicitly chosen to follow. It should just show me all of it, and let me decide what I do and don't want to see.
That's no distinction at all; what people engage with is just one effective way of measuring what people want to see. HN simply optimizes for something else, which is no less a form of social conditioning than optimizing for engagement, just in a different direction. You could say that it's designed from the ground up to show people content that stirs their curiosity enough to comment cautiously, or to hide content that stirs their emotions enough to engage strongly.
Facebook has a fact-checking program. That program has third-party fact-checkers. Facebook has been documented as pressuring those third-party fact-checkers to change their rulings.
They can't claim that they checked facts, remove postings they believe are incorrect, and then quietly put pressure on the fact-checkers to have a different "opinion" as to what is "factual".
I agree with what you're saying; pointing a finger at one particular thing and saying "this gets suppressed" is a bad solution. In fact it probably opens the door to a whole new category of problems.
But... I think what we're seeing with political content is just a symptom of the real problem.
> Disasters, outrage, politics, polarizing topics - these are all popular topics both online and off-line, and spread quickly as town gossip well before Facebook.
This is true. But when information spreads through people's conversations with each other there are limits to how fast it spreads. There's also a lot of room for dialogue and different perspectives. If I have some silly conspiracy theory that I want to spread around, it's going to be pretty hard to convince the people around me that 5G is going to activate microchips that were injected into my bloodstream. They will likely point out that basic laws of physics don't really allow for that. But if I know how to game a social media algorithm[0] to connect me with millions of people that are susceptible to that kind of thinking, I could convince a shockingly huge number of them to believe it[1]. Especially if the social media platform isolates those people from opposing opinions and connects them with people that think similarly.
I think social media is like removing the control rods from a reactor. Those basic human flaws are now being amplified and capitalized on at a scale we can barely even grasp. And it really doesn't matter if Facebook, Twitter, etc. are "at fault" or not. It's a fundamental problem with these services and the problems will continue to get worse.
Whether or not a property of a system is emergent does not seem particularly relevant to discussing its effects. It would be nice if we could just look at the way a platform like Facebook works, or see that it does not intend to be a "political actor", but the reality is that its effects are destructive. It has a strong effect on politics. The effects of Facebook are not outside of its responsibility simply because it isn't programmed to be political.
> let's say that the news feed showed you content based purely on number of likes
Does any site actually do this successfully? It seems to me that even sites that lean heavily towards algorithmic curation (including HN) still have an element of human veto.
it's not emergent, it's inherent. facebook is political because it was designed from the start to control information. that's coercive, designed literally to change people's behavior, whether covertly or overtly, which is the essence of power and the political. facebook employees cannot absolve themselves of responsibility by saying it's the machine that did it, or that other people were involved and complicit.
being political is not an incidental facet of facebook, it's a core intention.
1. This sounds like a somewhat naive take on filtering algorithms.
2. Does it matter if it’s Facebook’s “fault” or not? The issue is their power.
Imho, ideally they would acknowledge and accept responsibility for their power, and in the US at least there would also be some laws regulating them in this regard.
No it isn't. The creator chooses which data to feed in, and which not to. Garbage in, garbage out. If Facebook positions itself as the arbiter of data and actively stops people from posting, then it must take responsibility for what is on the platform. If it simply allows people to sort the data that anybody can post, then any properties of that sorting (whether by likes, or date posted) are emergent.
Phone lines do have that problem, just with a weaker effect on a smaller scale. Phone call spam/fraud is notoriously rampant, to the point where many carriers & developers create centralized systems to detect and filter calls down to something a person can tolerate. These systems are scored by spam reports and other metrics that I'm not privy to, and some of it is T-Mobile or Google deciding what calls should be auto-blocked and what shouldn't (mostly driven by their desire to keep people using telephony, otherwise they wouldn't bother).
And the inevitable outcome of building a what-calls-go-through machine is that there are many parties trying to influence the machine. E.g. faking caller ID, evading blocks with throwaway numbers, spamming no-response calls to figure out which numbers are valid to target, faking a robot voice to pretend to be a real person.
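A crude sketch of what such a what-calls-go-through machine amounts to (every signal and threshold here is invented; carriers' real systems are opaque to me):

    # Made-up call filter: the signals and thresholds are invented, but some
    # pipeline like this sits between every incoming call and your ringtone.
    def should_block(call, spam_reports):
        if spam_reports.get(call["caller_id"], 0) > 50:  # crowd-scored spam
            return True
        if call.get("spoofed_caller_id"):                # faked caller ID flagged upstream
            return True
        if call.get("burst_dialing"):                    # probing for valid numbers
            return True
        return False

    print(should_block({"caller_id": "+15551234567", "burst_dialing": True}, {}))  # True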
Practically every modern platform uses centralized systems to filter the noisy world down to something fit for purpose, and sometimes this intersects with political issues. That's no reason to expect a platform like Facebook to become even more political in their stance than the existing level of politicization that is almost impossible to avoid.
> Phone companies and ISPs facilitate communication between people, but that doesn't necessitate they take political stances on what communication to allow
Taking a stance to not control what communication is allowed is a very political stance. It just so happens that, I believe in those cases, it's also a legally mandated stance; but, if it weren't a legally mandated stance, it would absolutely be a political stance, whatever they ended up saying.
Where a private company decides to limit free speech (or not limit free speech) is, 100%, a political stance when the laws have not been written that make that decision for them.
Even if we maintain a law around protecting companies that just host other people's content vs curating and publishing content, it could be seen as a political decision whether a given website and company choose to be on the publisher vs public content stance.
I'm forgetting the word for publisher vs ... whatever it is where they take no responsibility for what people post on the site; but I hope my point is clear.
A non-political stance would be one which has zero side effects on anybody other than yourself. As soon as the actions you make and the decisions you take have an effect on somebody who isn't you, it becomes political.
Actually _taking_ a non-political stance is an exercise left to the reader.
Indeed! We all take political stances every day and it is certainly not possible to run an apolitical social network. The word is probably not very useful. But being able to reflect on “sensitive” issues is extremely useful and, I would argue, necessary.
And yet any random person on the street can easily flag something as being political or not political. I don’t accept that everything is automatically political. That is just an unrealistic argument that doesn’t match the reality of how the word is used.
You're correct. Almost everyone can efficiently categorize things they disagree with as "politics", while categorizing things they agree with as "common sense".
Again this is hyperbole and is not how people think. The politicization of everything and denying that some ideas are apolitical are tied. The latter is a false justification for the former.
Taking that logic at face value, it would mean "if you choose not to take a political stance, then you still have made a choice". Okay, so you've chosen not to make a decision. Yet I don't see how having made the choice implies you've still made a decision ("you have still taken a political stance"). If anything it seems like you just argued against the point?
Choosing not to restrict what political things people can say (or to whom) is definitely a political choice. In many contexts today in 2020, choosing to allow others to speak about their opinions without restriction or opposition is seen as, not only a political choice, but an active attack.
What is the current political climate? And is it static, will it not change?
Is providing food in supermarkets without political background checks "a political choice"? If so, then everything, including picking your nose with your left or right hand, is a political choice and the term "political choice" becomes utterly meaningless.
The current political climate is the policies enacted by politicians and the populace's reactions to them. It is not static, and it changes with the politicians in power, the laws in effect, and the political mood of the populace.
In these debates, people seem to pretend that if Facebook and others don't preemptively stay as politically neutral as possible, pissed-off politicians won't do it for them, badly, in hundreds of countries. It's why social media companies handle politically powerful people with kid gloves.
A segment of social media company staff also don't like that reality and want their social network to censor the political parties / discussions they don't like, and thus the companies toe the line and give unsatisfying non-answers at all-hands meetings and to the media.
I see this misconception a lot online, so I have to point out something important: the "publisher" vs "platform" distinction is only about liability.
Publishers are liable for everything posted on their websites. Platforms are not - as long as they make good faith efforts to take down or prevent posting of illegal content.
Both are allowed to engage in moderation, curation or "censorship". Engaging in such does not make a website a publisher.
> Taking a stance to not control what communication is allowed is a very political stance. It just so happens that, I believe in those cases, it's also a legally mandated stance; but, if it weren't a legally mandated stance, it would absolutely be a political stance, whatever they ended up saying.
Not really, no. That is not usually what people mean, when they say "take a political stance".
We can extend this to other examples. Do you think that a grocery store should ban people from their stores if the individual is wearing a pro-Trump or pro-Biden T-shirt?
I think it would be pretty silly to condemn a grocery store for refusing to ban people from their stores if they were wearing a "Vote for Biden/Trump" shirt.
Most people would find it absolutely and completely ridiculous to ban people from stores for doing that.
Yes, but a lot of companies would absolutely do so for employees--rather than customers--certainly including grocery stores even if they didn't otherwise have dress codes (which they likely do).
So then you agree with the vast majority of people that it is not a "political" decision to refuse to ban someone for wearing a "vote for Biden/Trump" shirt?
Cool. That is my point. Basically everyone would not call it political to refuse to ban someone for that.
[EDIT: actually what’s probably quite relevant is all the stories about high schools sending young women home because they don’t approve of their outfits even if said outfit doesn’t violate a dress code.]
Imagine someone walks into a grocery store naked.
Or wearing a t-shirt with an explicit image of a man and a woman having sex. Or two men having sex.
Or a t-shirt which says / shows something extremely inflammatory yet not illegal.
I could imagine various stores making various decisions in all of these cases, all of which would be the folks working in that store expressing their beliefs!
Humans are inherently social and thus inherently political (politics in the sense of the negotiation and management of a community).
Or imagine someone walks into a grocery store and doesn’t wear a mask! Lol :):/:(.
But, in general, businesses should have fairly wide latitude as to what t-shirt slogans they allow customers to wear. (Though I think we can imagine various slogans a business might deem objectionable.)
At the same time, businesses can reasonably have a fairly narrow latitude as to what employees should wear, even barring an official dress code, with respect to even advocating for a specific candidate--and that may even get into matters of company campaigning.
Actually I do call it a political decision, but one that is not so controversial in many places in the US - the political decision to value freedom of speech!
In China, for example, while I don’t know for sure, I would bet you could not wear a t shirt with the face of, say, a former communist party member who had opposed the current clique in power (i.e. the closest thing to an “opposition” politician that China has).
Or a democracy activist t shirt.
Point is, in the US, though we are lucky in that we often don’t have to think about it, our political principles of free speech allow for a lot of behavior.
Stores are demonstrating their political belief in freedom of speech if a store manager doesn’t kick up a fuss when someone walks in wearing a t shirt for a politician the manager dislikes.
Of course there are probably also laws or the manager is savvy enough to know they could get the store sued, but, you get what I’m saying I think / hope :).
You don’t notice it until it’s not there.
And thus, what often appears to be not making a choice, really is making a choice, albeit the default choice :).
> Do you think that a grocery store should ban people from their stores, if the individual is wearing a pro Trump, or pro Biden Tshirt?
A lot of bars and clubs have dress codes. Many ban wearing clothes that could be perceived as "gang" colors. It would be a terrible business decision for a grocery store to do this, but I don't think it's wrong.
> Taking a stance to not control what communication is allowed is a very political stance.
That's simply not true. One can verify the truth or falsehood of your statement by applying the knife of logic. Draw your statement to its logical conclusion in order to determine if it results in absurdity.
Let's do that.
A person who has had their brain surgically removed will (quite probably) never mention politics or attempt to control other's communications about politics. According to your statement, that person's actions are political. Sorry, but that's absurd.
1. People who live in the Congo may consciously choose not to become involved in Canadian politics. It is absurd to think that is political.
2. You wrote: "Your conflation of passive inaction with an active choice to refrain from certain action is absurd."
That is only meaningful as a circular definition. Choosing not to act differs from an inability to act. Why? How would you define the difference between the action of not acting and the action of not acting, because of not making a choice to act?
> How would you define the difference between the action of not acting and the action of not acting, because of not making a choice to act?
I wouldn't, because not making a choice to act in a particular way is very different from making a choice to avoid as policy acting in that way, or, as was at issue upthread, “Taking a stance to not” act in a particular way. Taking a stance is (for this subject matter, at least) political. Inaction on its own is not.
Neither an ISP nor a phone company has a feed algorithm or spam prevention as a central part of its operations.
These are key to the usability of any social network. And they are inherently biased. Any such organization also has to take money, so ads are also key to their operations, and they have taken political stands on ads too.
The attempted comparison to utility companies is not compelling.
You pick up the phone and call someone. You don't pick up the phone and have the phone company present you a list of people you might be interested in calling, ranked by how much they bid to be on that list.
I cannot say something into one end of the telephone, and depending on the content of my message, not have it delivered on the other end. That's the difference.
Perhaps this demonstrates why Facebook is fundamentally different from an ISP.
Facebook doesn't "facilitate" communication in the same way a computer "facilitates" communication. It facilitates communication in the same way that a forum or book club or group of people facilitates communication. Groups require some moderation to remain popular. Facebook is driven to moderate by the market.
> Nothing about Facebook's nature forces them to go beyond that and act as de facto language legislators
The ethics of being a/the major institution of mass communication in large parts of the world may not force FB to act as language legislators, but these ethics certainly should compel them to do so.
Relevant points:
- If FB’s status as a mass comms source is threatened, then the company itself is threatened. This threat can be due to a lack of trust in the platform and/or legislation that effectively legislates them out of existence (see below re free speech). This existential issue should compel them to factor language legislation into their corporate policies.
- Stockholders certainly care about FB’s status as a mass comms source even if no one else does.
- Stakeholders obviously care about this, too.
- Relying on governments to regulate mass communications is a Pandora’s box for FB since FB is an international platform.
- In the US, in order to facilitate and encourage free speech, mass comms laws are not particularly restrictive, but they are built on underlying assumptions about social-based regulation that generally hold up but seem to be completely broken with platforms like FB. If FB doesn’t address this issue, then the laws that end up addressing this issue may end up legislating FB out of existence.
To close, whether playing the language legislator is part of FB’s nature, an emergent property, or something else, there are very real reasons that FB has policies on regulating language. Whether they do this well or not is a completely different issue, but putting the onus on government legislators to address the problem with formal laws seems, at best, overly dismissive.
Relying on governments to regulate mass communications is strictly better for Facebook than the alternative, where governments disagree with Facebook's chosen policies and punish them instead. This is why most companies prefer to comply with local law, only enforcing universal policy where it doesn't potentially conflict with local law.
Companies don't just have no reason to regulate language, they also have no serious authority to do so. The onus has always been and can only be on government legislators to address these issues in the most fundamental sense.
I'd like to see Facebook try to take on the Democrat/Republican eternal conflict in the US or the CCP in China on adopting a universal policy that they don't agree with, armed with powerful arguments like "the power of ethics compels me!" or "it's a Pandora's box because we're international!" or "stockholders care about our status!". Going above and beyond their most basic policy obligations has been a great way to attract the ire of political authorities who are now agitated over whether Facebook's policy is intentionally empowering or weakening their political enemies.
At various points in time, Facebook has made it possible to target political ads at users based on their known interest in pseudoscience or antisemitism [1].
Giving access to this degree of targeting (even if the categories are algorithmically generated) is inherently political.
By choosing to have a system by which algorithms would recommend social content, they’ve committed to a course that is going to eventually collide with politics. The only way to avoid it is to get out of the recommendation game, which would be tantamount to corporate suicide at this point.
Because Facebook explicitly chooses how to construct the timeline it presents to its users, and because some of that timeline contains political content.
If Facebook were a dumb first-in-first-out aggregator, it wouldn't be political.
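The whole distinction fits in a few lines (a toy example, not anyone's real code):

    # A dumb first-in-first-out aggregator: posts appear in arrival order,
    # with no editorial judgment anywhere.
    def fifo_timeline(posts):
        return list(posts)

    # A constructed timeline: a scoring function of the platform's choosing
    # reorders (and can drop) what you see. Picking that function is an
    # editorial act, whatever it optimizes for.
    def curated_timeline(posts, score):
        return sorted((p for p in posts if score(p) > 0), key=score, reverse=True)

    posts = [{"id": 1, "likes": 2}, {"id": 2, "likes": 90}]
    print(fifo_timeline(posts))                                 # arrival order
    print(curated_timeline(posts, score=lambda p: p["likes"]))  # platform's order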
>Phone companies and ISPs facilitate communication between people, but that doesn't necessitate they take political stances on what communication to allow.
They were regulated as public utilities and common carriers. Facebook is not.
Why is that "by its nature"? I don't think that's "by its nature" at all. Phone companies and ISPs facilitate communication between people, but that doesn't necessitate they take political stances on what communication to allow. Why should Facebook? If certain language should be restricted, then laws should be written restricting said language and Facebook should comply with those laws. Nothing about Facebook's nature forces them to go beyond that and act as de facto language legislators.