
Algorithm Neutrality and Bias: How Much Control?

Last updated on July 5, 2017

The Facebook algorithm is a global superpower.

So, I read this article: Facebook is prioritizing my family and friends – but am I?

The point of the article, that you should focus on your friends and family in real life instead of on Facebook, is poignant and topical. So much of our lives is spent on social media without the “social” part, and even when that part is there, something is missing compared to physical presence (without smartphones!).

Anyway, this post is not about that. Reading it got me thinking about the topic from the algorithm neutrality perspective. So what does that mean?

Algorithm neutrality takes place when a social network allows content to spread freely based on its merits (e.g., CTR, engagement rate), so that the most popular content gets the most dissemination. In other words, the network imposes no media bias. Although the content being spread might itself carry a media bias, the social network stays objective and accounts only for its quantifiable merits.
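As a rough illustration (a sketch of my own, not a description of any real system), a fully neutral feed would simply rank content by its quantifiable engagement merits:

```python
# A minimal sketch of a "neutral" feed: content is ranked purely by its
# quantifiable engagement merits, with no editorial weighting or human
# judgment. Field names and the 50/50 weighting are illustrative assumptions.

def engagement_score(post):
    impressions = max(post["impressions"], 1)
    ctr = post["clicks"] / impressions
    engagement_rate = (post["likes"] + post["comments"] + post["shares"]) / impressions
    return 0.5 * ctr + 0.5 * engagement_rate

def neutral_feed(posts):
    # Popularity alone decides the order: no topic, source, or opinion bias.
    return sorted(posts, key=engagement_score, reverse=True)
```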

Why does this matter? Well, a neutral algorithm guarantees manipulation-free dissemination of information. As soon as human judgment intervenes, there is a bias. That bias may lead to censorship or to favoring a certain political party, for example. The effect can be clearly seen in so-called media bias. Anyone following the political coverage of the US elections or of Brexit has noticed the immense media bias that is omnipresent even in esteemed publications like the Economist and the Washington Post. Indeed, they take a stance and report based on that stance instead of covering events objectively. A politically biased media like the one in the US is not much better than the politically biased media in Russia.

It is clear that free channels of expression enable the proliferation of alternative views, whereupon an individual is (theoretically) better off, since there are more data points to base his/her opinion on. Thus, social networks (again, theoretically) mitigate media bias.

There are many issues, though. The first is what I call the neutrality dilemma.

The neutrality dilemma arises from what I already mentioned: information bias can be embedded in the content people share. If the network restricts information dissemination, it moves from neutrality to control. If it does not, there is a risk of propagating harmful misinformation or propaganda. In this continuum between control and freedom there is therefore a trade-off that social networks constantly need to address in their algorithms and community policies. For example, Facebook bans some content, such as violent extremism. It also collaborates with local governments, which can ask for the removal of certain content; this can be seen in its transparency report.

The dilemma has multiple dimensions.

First of all, there are ethical issues. From the perspective of “what is right”, shouldn’t the network prohibit the diffusion of information when it is counter-factual? Otherwise, people can be misled by false stories. But also from the perspective of what is right, shouldn’t there be free expression, even if a piece of information is not validated?

Second, there are some technical challenges:

A. How to identify the “truthfulness” of content? In many cases this is seemingly impossible, because the issues are complex and not factual to begin with. Consider Brexit: it is not a fact that the leave vote would lead to a worse situation than the stay vote, or vice versa. In a similar vein, it is not a fact that the EU should be kept together. These are questions of assumptions, which makes them hard: people freely choose the assumptions they want to believe, and there can be no objective validation of this sort of complex social problem.

B. How to classify political/argumentative views and relate them to one another? There are different points of view, such as “pro-Brexit” and “anti-Brexit”. The social network algorithm should detect, based on an individual’s behavior, their membership in a given group: the behavior consists of messages posted and content liked, shared and commented on. It should be fairly easy to form a view of a person’s stance on a given topic with the help of these signals. Then it is crucial to map the stances in relation to one another, so that the extremes can be identified (a rough sketch follows after the next paragraph).

As it currently stands, people are shown the content they prefer, which confirms their already established opinions. This does not support learning or getting an objective view of the matter: instead, it reinforces a biased worldview and indeed exacerbates the problem. It is crucial to remember that opinions do not remain mere opinions but are reflected in behavior: what is socially established becomes physically established through people’s actions in the real world. The power of social networks therefore needs to be treated with caution.
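To make point B more concrete, here is a minimal sketch of my own; the signal weights, labels and the “extreme” threshold are assumptions, not anything a real network has disclosed:

```python
# Sketch: estimating a user's stance on a topic (say, Brexit) from behavioral
# signals, then flagging the extremes. Weights and thresholds are illustrative
# assumptions only.

SIGNAL_WEIGHTS = {"posted": 3.0, "shared": 2.0, "commented": 1.5, "liked": 1.0}

def stance_score(interactions):
    """interactions: list of (signal, stance_label) pairs, where stance_label
    is +1 for pro-Brexit content and -1 for anti-Brexit content."""
    score, total_weight = 0.0, 0.0
    for signal, label in interactions:
        weight = SIGNAL_WEIGHTS.get(signal, 0.0)
        score += weight * label
        total_weight += weight
    return score / total_weight if total_weight else 0.0  # roughly in [-1, +1]

def classify_stance(score, extreme_threshold=0.8):
    group = "pro-Brexit" if score >= 0 else "anti-Brexit"
    return group, abs(score) >= extreme_threshold  # second value flags extremes
```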

C. How to identify the quality of argumentation? Quality of argumentation matters when applying the rotation of alternative views intended to mitigate the reinforcement of bias, because the counter-arguments need to be solid: when making a decision, both the pro and contra sides need to be well argued for an objective decision to emerge. Machine learning could be the solution: assuming we have training data on the “proper” structure of solid argumentation, we can compare this archetype to any text and assign it a score based on how good its argumentation is. Such a method does not consider the content of the argument, only its logical value. It would include a way to detect known argumentation errors based on the syntax used. In fact, such a system is not unimaginably hard to achieve, since common argumentation errors, or logical fallacies, are well documented (a sketch follows after the next paragraph).

Another way to detect the quality of argumentation is user-based reporting: individuals report the posts they do not like, and these get discounted by the algorithm. However, even when allowing users to report “low-quality” content, there is a risk that they report content they disagree with, not content that is poorly argued. In reporting, there is an element of relativism or subjectivism that cannot be avoided.
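For point C, a crude rule-based sketch could look like the following; the patterns and penalties are purely illustrative assumptions, and a serious system would use a model trained on labeled examples of good and bad argumentation instead:

```python
# Sketch: a crude argumentation-quality score that penalises surface patterns
# often associated with well-documented logical fallacies. Patterns and
# penalty weights are illustrative assumptions only.
import re

FALLACY_PATTERNS = {
    r"\beveryone knows\b": 0.3,                # appeal to popularity
    r"\byou are (just )?(an? )?idiot\b": 0.5,  # ad hominem
    r"\b(always|never)\b": 0.1,                # overgeneralisation
}

def argument_quality(text):
    text = text.lower()
    penalty = sum(weight for pattern, weight in FALLACY_PATTERNS.items()
                  if re.search(pattern, text))
    return max(1.0 - penalty, 0.0)  # 1.0 = no red flags, 0.0 = very weak
```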

Perhaps the most problematic of all are the socio-psychological challenges associated with human nature. A neutral algorithm enforces group polarization by connecting people who agree on a topic. This is a natural outcome of a neutral algorithm, since people, by their behavior, confirm their liking of content they agree with. This leads to reinforcement, whereupon they are shown more of that type of content. The social effect is known as group polarization: an individual’s original opinion is reinforced through observing other individuals sharing that opinion. That is why so much discussion on social media is polarized: there is a well-known tendency of human nature not to remain objective but to take a stance with one group against another.

How can we curb this effect? A couple of solutions readily come to mind.

1. Rotating opposing views. If, in a neutral system, 90% of the content you are shown confirms your beliefs, rotation should force you to see more than the remaining 10% of alternative views (say, 25%). Technically, this would require that “opinion archetypes” can be classified and contrasted with one another. Machine learning to the rescue? (A sketch of such rotation follows after this list.)

The power of rotation comes from the idea that it simulates social behavior: the more a person is exposed to subjects that initially seem strange and unlikeable (think of xenophobia), the more likely those subjects are to become understood. A greater degree of awareness and understanding leads to higher acceptance. In the real world, people who frequently meet people from other cultures are more likely to accept other cultures in general.

Therefore, the same logic could be applied by Facebook to force us to see well-argued counter-evidence to our beliefs. It is crucial that the counter-evidence is well argued, or else there is a strong risk of reactance: people rejecting the opposing view even more. Unfortunately, not being able to change one’s opinions but remaining fixated on one’s beliefs is a feature of the uneducated mind, so the method is not foolproof, but it is better than what we have now.

2. Automatic fact-checking. Imagine a social network telling you, “This content might contain false information.” Caution signals may curb the willingness to accept such information at face value. In fact, it may be more effective to show misinformation tagged as unreliable than to hide it: in the former case, individuals have a chance to correct their false beliefs. (The sketch after this list also illustrates such tagging.)

3. Research in sociology. I am not educated enough in this area to know the general solutions to group polarization, groupthink and other associated social problems. But I know sociologists have worked on them, and this research should be put to use in collaboration with the engineers who design the algorithms.
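To tie points 1 and 2 together, here is a minimal sketch, based entirely on my own assumptions about the data structures, the 25% quota and the source of the flags, of a feed that rotates in opposing views and tags dubious items instead of hiding them:

```python
# Sketch of points 1 and 2 combined: mix a share of opposing-view content into
# the feed and attach a caution label to items flagged as possibly false,
# instead of removing them. All names and numbers are illustrative assumptions.
import random

OPPOSING_SHARE = 0.25  # the "say, 25%" rotation quota from point 1

def build_feed(confirming, opposing, flagged_ids, size=20):
    n_opposing = int(size * OPPOSING_SHARE)
    feed = confirming[:size - n_opposing] + opposing[:n_opposing]
    random.shuffle(feed)  # avoid a visibly segregated feed
    for post in feed:
        if post["id"] in flagged_ids:
            # Point 2: show it, but warn, so readers can correct false beliefs.
            post["warning"] = "This content might contain false information"
    return feed
```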

However, the root causes of the dissemination of misinformation, whether purposefully harmful or due to ignorance, do not lie in technology. These are human problems and must have human solutions.

What are these root causes? Lack of education. Poor quality of the educational system. Lack of willingness to study a topic before forming an opinion (i.e., a lazy mind). Lack of source/media criticism. Confirmation bias. Groupthink. Group polarization.

Ultimately, these are the root causes of why some content that should not spread, spreads. They are social and psychological traits of human beings, which cannot be altered via algorithmic solutions. However, algorithms can direct behavior toward more positive outcomes, or at least away from the most harmful extremes – if the aforementioned classification problems can be solved.

The other part of the equation is education: kids need to be taught from early on about media and source criticism, logical argumentation, argumentation skills and respect for the other party in a debate. Indeed, respect and sympathy go a long way; in the current atmosphere of online debating, it seems many have forgotten basic manners.

In the online environment, provocations are easy and escalate more readily than in face-to-face encounters. It is neither right to make “fun” of ignorant people (a habit of the so-called intellectuals) nor to ignore science and facts (a habit of the so-called ignorant).

It is also unfortunate that many of the topics people debate can be traced back to values and worldviews rather than more objective matters. When values and worldviews are fundamentally different among participants, it is truly hard to find a middle ground. It takes a lot of effort and character to put yourself in the opposing party’s shoes, much more so than rejecting their view point-blank. It takes even more strength to change your opinion once you discover it was wrong.

Conclusion and discussion. Avoiding media bias is an essential advantage of social networks in information dissemination. I repeat: it is a tremendous advantage. People are able to disseminate information and opinions without being controlled by mass-media outlets. At the same time, neutrality imposes new challenges. The most prominent question is to what extent the network should govern its content.

On one hand, user behavior is driving Facebook towards becoming an information-sharing network (people are seemingly sharing more and more news content and less about their own lives), but Facebook wants to remain a social network, and therefore it reduces neutrality in favor of personal content. What are the strategic implications? Will users be happier? Is it right to deviate from algorithm neutrality when you have dominant power over the information flow?

Facebook is approaching a sort of information monopoly when it comes to discovery (Google holds the monopoly in information search), and I would say it is the most powerful global information dissemination medium today. That power comes with responsibility and ethical questions, and hence the algorithm neutrality discussion. The strategic question for Facebook is whether it makes sense to manipulate the natural information flow that arises from user behavior in a neutral system. The question for society is whether Facebook news feeds should be regulated.

I am not advocating more regulation, since regulation is never a creative solution to any problem, nor does it tend to be informed by science. I advocate collaboration between sociologists and social networks to identify the best means of filtering harmful misinformation and curbing the generally known negative social tendencies that we humans possess. For sure, this can be done without endangering the free flow of information, the best part of social networks.
