

The balanced view algorithm

I recently participated in a meeting of computer scientists where the topic was “fake news”. The implicit assumption was: “we will build tool x that shows people which information is false, and they will become informed.”

However, after the meeting I realized this might not be enough and may in fact be naïve thinking. It may not matter that algorithms and social media platforms show people ‘this is false information’. People might choose to believe the conspiracy theory anyway, for various reasons. In those cases, the problem is not a lack of information; it is something else.

And the real question is: Can technology fix that something else? Or at least be part of the solution?

The balanced view algorithm

Because, technically, the algorithm is simple:

  1. Take a topic
  2. Define the polarities of the topic
  3. Show each user an equal amount of content from each polarity

=> results in a balanced and informed citizen!
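In code, a minimal sketch could look like the following (Python; all names are illustrative, and step 2’s polarity labels are assumed to come from some upstream classifier):

```python
import random
from collections import defaultdict

def balanced_feed(items, n_per_polarity, seed=None):
    """Return a feed with an equal number of items from each polarity.

    `items` is a list of (content, polarity) pairs; the polarity labels
    are assumed to be assigned upstream (step 2 of the algorithm).
    """
    rng = random.Random(seed)
    by_polarity = defaultdict(list)
    for content, polarity in items:
        by_polarity[polarity].append(content)

    feed = []
    for pool in by_polarity.values():
        # Take an equal sample from each side; a real system would rank by quality.
        feed.extend(rng.sample(pool, min(n_per_polarity, len(pool))))
    rng.shuffle(feed)  # avoid presenting one polarity as a single block
    return feed

items = [("pro article A", "pro"), ("pro article B", "pro"),
         ("contra article C", "contra"), ("contra article D", "contra")]
print(balanced_feed(items, n_per_polarity=2, seed=42))
```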

But, as said, if the opposing content goes against what you want to believe, then the problem is not one of “seeing” enough of that content.

Conclusion

These are tough questions and reside at the interface of sociology and algorithms. On one hand, some of the solutions may approach manipulation, but, as any propagandist could tell you, manipulation has to be subtle to be effective.

The major risk is that people might rebel against a balanced worldview. It is good to remember that ‘what you need to see’ is not the same as ‘what you want to see’. There is little that algorithms can do if people want to live in a bubble.

Originally published at https://algoritmitutkimus.fi/2017/04/16/the-balanced-view-algorithm/

 

The strategy algorithm

Introduction

The purpose of the strategy algorithm is to present a simple, parsimonious, and proven method for the successful creation of a corporate strategy.

In corporations, the problems usually do not relate to a lack of resources or options, but to the complexity of having, in fact, too many choices. This can lead to an illusion of superiority, which is not a short-term problem, since the corporation is protected by its existing buffers, but it becomes a long-term issue when external conditions have tilted enough to cause a disruption driven by changing customer needs or competitors’ superior solutions. Therefore, any managing director or CEO needs simple guiding principles to reduce complexity into something manageable. The strategy algorithm (SA) is one such tool.

The strategy algorithm

The goal of the SA is to find a unique competitive advantage that the customers appreciate, that can be executed, and that is not the focus of any existing competitors. This goal is known as the strategic goal. The steps are as follows:

Phase 1

1. Define customer segments – what benefits are important for each segment?
2. Conduct competitor analysis – what segments are not focused on by any competitor?
3. Conduct internal analysis – what resources do we have and need to capture that segment?

Phase 2

4. Then, make sure steps 1-3 are co-aligned (= write out the strategy).
5. Then, define strategic projects to remove bottlenecks and create assets (= resources that serve the strategic goal).
6. Then, execute with strong focus (= discard anything that deviates from the strategic goal).
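To make Phase 1 concrete, the selection logic can be thought of as simple set operations. The sketch below uses invented data and is only an illustration of the idea, not part of the original method:

```python
# Toy sketch of Phase 1 as set logic (all data is illustrative):
# pick a segment no competitor focuses on, then check the resource gap.

segments = {"budget", "premium", "winter-performance"}          # step 1
competitor_focus = {"budget", "premium"}                        # step 2
our_resources = {"cold-weather R&D", "brand"}                   # step 3
required = {"winter-performance": {"cold-weather R&D", "brand",
                                   "nordic distribution"}}

open_segments = segments - competitor_focus  # not focused on by any competitor
for segment in open_segments:
    gap = required.get(segment, set()) - our_resources
    print(f"{segment}: resource gap = {gap or 'none'}")
# Strategic projects (step 5) would then target the reported gaps.
```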

Applying the strategy algorithm

As you can see, Phase 1 is geared toward research and planning, and Phase 2 toward implementation.

In step 1, you can use techniques such as:

  • conjoint analysis
  • personas (ethnography, interviews, surveys, social media analysis)

Conjoint analysis aims to find the product attributes that customers value most. Another option is to summarize customer segments into personas, which are fictional but descriptive characterizations of customer groups.
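As a toy illustration of the idea behind conjoint analysis (the profiles, ratings, and attribute names below are invented), respondents rate product profiles and least squares recovers per-attribute “part-worths”:

```python
import numpy as np

# Minimal conjoint-style sketch with invented data: respondents rate
# product profiles, and least squares recovers attribute part-worths.
# Columns: [long battery, premium design, low price] (1 = present).
profiles = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
])
ratings = np.array([6.0, 4.5, 7.0, 7.5, 9.0, 8.0])  # stated preferences

X = np.hstack([np.ones((len(profiles), 1)), profiles])  # add intercept
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
for name, w in zip(["baseline", "battery", "design", "price"], coef):
    print(f"{name}: {w:+.2f}")  # higher part-worth = more valued attribute
```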

In step 2, “focus” is the keyword. Competitors can operate in the same market and offer similar products, but the main point is that they are not focusing on it (= their turnover does not depend on it, and they are not investing heavily in its product development, marketing, and distribution). In other words, by taking the focus yourself, you keep competitors at bay, because they have more important priorities. An example is Nokian Tyres: at one point it was a generic tyre company, but as an outcome of strategic work it re-focused on winter tyres under the guideline “Trusted by the natives”.

In step 3, you need to conduct a gap analysis of ‘what we have vs. what we need’. An example is Stephen Elop at Nokia: he recognized that the mobile world was moving to software ecosystems and that Nokia had redundant know-how about legacy mobile software. In hindsight, we can say he should have fired and hired much more aggressively to transform the company into a focused, competitive unit.

Acknowledgments

The thinking borrows heavily from the Master’s thesis of Lasse Kurkilahti (Turku School of Economics), as well as related works from Michael Porter, W. Chan Kim, Renée Mauborgne, and other strategic thinkers.

 

What is a “neutral algorithm”?

1. Introduction

Earlier today, I had a brief exchange of tweets with @jonathanstray about algorithms.

It started from his tweet:

Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.

To which I replied:

Yes, and that’s why learning is not the way to go. “Fair” should not be goal, is inherently subjective. “Objective” is better

Then he wrote:

lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.

And I wrote:

True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.

And he answered:

what does “neutral” mean though?

After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.

2. Definition

So, what is a neutral algorithm? I would define it like this:

“A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.” [1]

An example of a neutral algorithm is a standard ad optimization algorithm: it gets to decide whether to show Ad1, Ad2, or Ad3. Instead of asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).

The treatment that all ads (read: content, users) get is fair: they are diffused based on their merits (measured objectively by an unambiguous metric), not based on favoritism of any sort.
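A minimal sketch of that kind of merit-based selection follows (an epsilon-greedy rule over observed CTRs; this is a toy for illustration, not any real ad server’s implementation):

```python
import random

def pick_ad(stats, epsilon=0.1, rng=random):
    """Pick an ad by observed CTR (epsilon-greedy exploration sketch).

    `stats` maps ad id -> (clicks, impressions). Illustrative only.
    """
    if rng.random() < epsilon:            # explore: give every ad a chance
        return rng.choice(list(stats))

    def ctr(ad):
        clicks, impressions = stats[ad]
        return clicks / impressions if impressions else 0.0

    return max(stats, key=ctr)            # exploit: best measured merit

stats = {"Ad1": (30, 1000), "Ad2": (55, 1000), "Ad3": (12, 400)}
print(pick_ad(stats))  # usually "Ad2" (CTR 5.5%), sometimes an explore pick
```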

3. Foundations

The roots of algorithm neutrality stem from freedom of speech and net neutrality [2]. No outsiders can impose their values and opinions (e.g., censoring politically sensitive content) and interfere with the operating principles of the algorithm. Instead of being influenced by external manipulation, the decision making of the algorithm is as value-free (neutral) as possible. For example, in the case of social media, it chooses to display information which accurately reflects the sentiment and opinions of the people at a particular point in time.

4. Limitations

Now, I grant there are issues with “freedom”, some of which are considerable. For example: 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda).

Another limitation is legislation: illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context, but provided that the laws themselves are “fair”, this should pose no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech.

I wrote more about these issues here [3].

5. Conclusion

Despite the aforementioned issues, with a neutral algorithm each media outlet/candidate/user has a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.

The rest is up to humans: educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display the state of ignorance/sophistication in a society. A good example is Microsoft’s infamous bot Tay [4], a machine learning experiment turned bad. The alarming thing about the bot is not that “machines are evil”, but that *humans are evil*; the machine merely reflects that. Hence my original point of curbing human evilness by keeping algorithms as free of human values as possible.

Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is separation of church and state, there should be separation of humans and algorithms to the greatest possible extent.

Notes

[1] Initially, I considered a definition that would say “not influenced”, but it is not safe to assume that the subjectivity of the creators would not in some way be reflected in the algorithm. “Minimally” instead leads to the normative argument that such subjectivity should be mitigated.

[2] Wikipedia (2016): “Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”

[3] Algorithm Neutrality and Bias: How Much Control? <https://www.linkedin.com/pulse/algorithm-neutrality-bias-how-much-control-joni-salminen>

[4] A part of the story is that Tay was trolled heavily and therefore assumed a derogatory way of speech.

Algorithm Neutrality and Bias: How Much Control?

The Facebook algorithm is a global superpower.

So, I read this article: Facebook is prioritizing my family and friends – but am I?

The point of the article (that you should focus on your friends & family in real life instead of on Facebook) is poignant and topical. So much of our lives is spent on social media, often without the “social” part, and even when it is there, something is missing in comparison to physical presence (without smartphones!).

Anyway, this post is not about that. It got me thinking about the article from the algorithm neutrality perspective. So what does that mean?

Algorithm neutrality takes place when social networks allow content to spread freely based on its merits (e.g., CTR, engagement rate), so that the most popular content gets the most dissemination. In other words, the network imposes no media bias. Although the spreading content may itself carry a media bias, the social network remains objective, accounting only for the content’s quantifiable merits.

Why does this matter? Well, a neutral algorithm guarantees manipulation-free dissemination of information. As soon as human judgment intervenes, there is a bias. That bias may lead to censorship or to favoring a certain political party, for example. The effect can be clearly seen in so-called media bias. Anyone following either the political coverage of the US elections or the Brexit coverage has noticed the immense media bias that is omnipresent even in esteemed publications like the Economist and the Washington Post. Indeed, they take a stance and report based on that stance, instead of covering events objectively. A politically biased media like the one in the US is not much better than the politically biased media in Russia.

It is clear that free channels of expression enable the proliferation of alternative views, whereupon an individual is (theoretically) better off, since there are more data points to base his/her opinion on. Thus, social networks (again, theoretically) mitigate media bias.

There are many issues, though. The first is one that I call the neutrality dilemma.

The neutrality dilemma arises from what I already mentioned: information bias can be embedded in the content people share. If the network restricts information dissemination, it moves from neutrality to control. If it doesn’t restrict information dissemination, there is a risk of propagation of harmful misinformation, or propaganda. Therefore, in this continuum of control and freedom there is a trade-off that the social networks constantly need to address in their algorithms and community policies. For example, Facebook is banning some content, such as violent extremism. They are also collaborating with local governments, which can ask for the removal of certain content. This can be viewed in their transparency report.

The dilemma has multiple dimensions.

First of all, there are ethical issues. From the perspective of “what is right”, shouldn’t the network prohibit the diffusion of information when it is counter-factual? Otherwise, people can be misled by false stories. But also, from the perspective of what is right, shouldn’t there be free expression, even if a piece of information is not validated?

Second, there are some technical challenges:

A. How to identify the “truthfulness” of content? In many cases, it is seemingly impossible, because the issues are complex and not factual to begin with. Consider, e.g., Brexit: it is not a fact that the leave vote would lead to a worse situation than the stay vote, or vice versa. In a similar vein, it is not a fact that the EU should be kept together. These are questions of assumptions, which makes them hard: people freely choose the assumptions they want to believe, but there can be no objective validation of this sort of complex social problem.

B. How to classify political/argumentative views and relate them to one another? There are different points of view, like “pro-Brexit” and “anti-Brexit”. The social network algorithm should detect, based on an individual’s behavior, their membership in a given group: the behavior consists of messages posted and content liked, shared, and commented on. It should be fairly easy to form a view of a person’s stance on a given topic with the help of these parameters. Then, it is crucial to map the stances in relation to one another, so that the extremes can be identified.
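A deliberately crude sketch of this classification step (keyword counts stand in for a real trained classifier, and the keyword lists are invented):

```python
# Toy sketch of stance detection from user activity (keyword counting;
# a production system would use a trained text classifier instead).
PRO_BREXIT = {"leave", "sovereignty", "take back control"}
ANTI_BREXIT = {"remain", "single market", "stronger in"}

def stance_score(activity):
    """activity: texts the user posted, liked, shared, or commented on.
    Returns a score in [-1, 1]: negative = anti, positive = pro."""
    pro = anti = 0
    for text in activity:
        t = text.lower()
        pro += sum(kw in t for kw in PRO_BREXIT)
        anti += sum(kw in t for kw in ANTI_BREXIT)
    total = pro + anti
    return 0.0 if total == 0 else (pro - anti) / total

user_activity = ["We must take back control!", "Liked: Leave means leave"]
print(stance_score(user_activity))  # 1.0 -> at the pro-Brexit extreme
```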

As it currently stands, people are shown the content they prefer, which confirms their already established opinions. This does not support learning or getting an objective view of the matter: instead, it reinforces a biased worldview and indeed exacerbates the problem. It is crucial to remember that opinions do not remain mere opinions but are reflected in behavior: what is socially established becomes physically established through people’s actions in the real world. Therefore, the power of social networks needs to be treated with precaution.

C. How to identify the quality of argumentation? Quality of argumentation is important if applying the rotation of alternative views intended to mitigate the reinforcement of bias. This is because the counter-arguments need to be solid: in fact, when making a decision, both the pro and contra sides need to be well-argued for an objective decision to emerge. Machine learning could be the solution: assuming we have training data on the “proper” structure of solid argumentation, we can compare this archetype to any kind of text material and assign it a score based on how good the argumentation is. Such a method does not consider the content of the argument, only its logical value. It would include a way to detect known argumentation errors based on the syntax used. In fact, such a system is not unimaginably hard to achieve, since common argumentation errors and logical fallacies are well documented.
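As a very rough sketch of syntax-based fallacy flagging (the cue phrases and penalty weights are invented; a real scorer would be trained on annotated argumentation data):

```python
import re

# Illustrative cue-phrase patterns for well-documented fallacies.
FALLACY_CUES = {
    "ad hominem":      r"\b(idiot|moron|what would .* know)\b",
    "appeal to crowd": r"\b(everyone knows|everybody agrees)\b",
    "false dilemma":   r"\b(either .* or nothing|the only option)\b",
}

def argument_quality(text):
    """Return a naive quality score: 1.0 minus a penalty per detected cue."""
    hits = [name for name, pattern in FALLACY_CUES.items()
            if re.search(pattern, text.lower())]
    return max(0.0, 1.0 - 0.3 * len(hits)), hits

score, hits = argument_quality("Everyone knows this is the only option.")
print(score, hits)  # penalized for crowd-appeal and false-dilemma cues
```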

Another form of detecting the quality of argumentation is user-based reporting: individuals report the posts they don’t like, and these get discounted by the algorithm. However, even when allowing users to report “low-quality” content, there is a risk that they report content they disagree with, not content that is poorly argued. In reporting, there is a relativism or subjectivism that cannot be avoided.

Perhaps the most problematic of all are the socio-psychological challenges associated with human nature. A neutral algorithm enforces group polarization by connecting people who agree on a topic. This is a natural outcome of a neutral algorithm, since people confirm by their behavior their liking of content they agree with. This leads to reinforcement, whereupon they are shown more of that type of content. The social effect is known as group polarization: an individual’s original opinion is reinforced through observing other individuals sharing that opinion. That is why so much discussion in social media is polarized: there is a well-known tendency of human nature not to remain objective but to take a stance in one group against another.

How can we curb this effect? A couple of solutions readily come to mind.

1. Rotating opposing views. If in a neutral system you are shown 90% content that confirms your beliefs, rotation should force you to see more than the remaining 10% of alternative content (say, 25%). Technically, this would require that “opinion archetypes” can be classified and contrasted with one another. Machine learning to the rescue?
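The mixing step itself is trivial once the hard classification problem is solved; a minimal sketch (assuming content has already been sorted into “confirming” and “opposing” pools):

```python
import random

def rotated_feed(confirming, opposing, n, opposing_share=0.25, seed=None):
    """Mix a feed so that a fixed share comes from opposing views.

    Illustrative sketch: `confirming` and `opposing` are pre-classified
    content pools (the hard part, as noted above, is that classification).
    """
    rng = random.Random(seed)
    n_opposing = round(n * opposing_share)
    feed = (rng.sample(confirming, n - n_opposing) +
            rng.sample(opposing, n_opposing))
    rng.shuffle(feed)
    return feed

feed = rotated_feed(confirming=[f"agree-{i}" for i in range(20)],
                    opposing=[f"counter-{i}" for i in range(20)],
                    n=8, seed=1)
print(feed)  # ~25% of items now oppose the user's established view
```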

The power of rotation comes from the idea that it simulates social behavior: the more a person is exposed to subjects that initially seem strange and unlikeable (cf. xenophobia), the more likely those subjects are to be understood. A greater degree of awareness and understanding leads to higher acceptance of those things. In the real world, people who frequently meet people from other cultures are more likely to accept other cultures in general.

Therefore, the same logic could be applied by Facebook in forcing us to see well-argued counter-evidence to our beliefs. It is crucial that the counter-evidence is well-argued, or else there is a strong risk of reactance, i.e., people rejecting the opposing view even more. Unfortunately, this is a feature of the uneducated mind: not being able to change one’s opinions but remaining fixated on one’s beliefs. So the method is not foolproof, but it is better than what we have now.

2. Automatic fact-checking. Imagine a social network telling you “This content might contain false information”. Caution signals may curb the willingness to accept any information. In fact, it may be more efficient to show misinformation tagged as unreliable rather than to hide it: when it remains visible, individuals have the possibility to correct their false beliefs.
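A sketch of that tag-not-hide policy (the credibility scores and threshold below are invented; in practice they would come from a fact-checking model or human review):

```python
# Flagged items stay visible with a caution label, never silently hidden.
def annotate_feed(items, credibility, threshold=0.4):
    feed = []
    for item in items:
        if credibility.get(item, 1.0) < threshold:
            item = f"[This content might contain false information] {item}"
        feed.append(item)  # shown either way
    return feed

items = ["Vaccine story X", "Local news Y"]
print(annotate_feed(items, credibility={"Vaccine story X": 0.2}))
```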

3. Research in sociology. I am not educated enough in this field to know the general solutions to group polarization, groupthink, and other associated social problems. But I know sociologists have worked on them; this research should be put to use in collaboration with the engineers who design the algorithms.

However, the root causes of the dissemination of misinformation, whether purposefully harmful or due to ignorance, lie not in technology. They are human-based problems and must have human-based solutions.

What are these root causes? Lack of education. Poor quality of educational system. Lack of willingness to study a topic before forming an opinion (i.e., lazy mind). Lack of source/media criticism. Confirmation bias. Groupthink. Group polarization.

Ultimately, these are the root causes of why some content that should not spread, spreads. They are social and psychological traits of human beings, which cannot be altered via algorithmic solutions. However, algorithms can direct behavior into more positive outcomes, or at least avoid the most harmful extremes – if the aforementioned classification problems can be solved.

The other part of the equation is education: kids need to be taught from early on about media and source criticism, logical argumentation, argumentation skills, and respect for the other party in a debate. Indeed, respect and sympathy go a long way; in the current atmosphere of online debating, it seems like many have forgotten basic manners.

In the online environment, provocations are easy and escalate more readily than in face-to-face encounters. It may be “fun” to make fun of ignorant people (a habit of the so-called intellectuals), but it is no more correct than ignoring science and facts (a habit of the so-called ignorant).

It is also unfortunate that many of the topics people debate can be traced back to values and worldviews rather than more objective matters. When values and worldviews are fundamentally different among participants, it is truly hard to find a middle way. It takes a lot of effort and character to be able to put yourself in the opposing party’s shoes, much more so than just point-blank rejecting their view. It takes even more strength to change your opinion once you discover it was the wrong one.

Conclusion and discussion. Avoiding media bias is an essential advantage of social networks in information dissemination. I repeat: it’s a tremendous advantage. People are able to disseminate information and opinions without being controlled by mass-media outlets. At the same time, neutrality imposes new challenges. The most prominent question is to what extent the network should govern its content.

On one hand, user behavior is driving Facebook towards being an information-sharing network (people are seemingly sharing more and more news content and less about their own lives), but Facebook wants to remain a social network and therefore reduces neutrality in favor of personal content. What are the strategic implications? Will users be happier? Is it right to deviate from algorithm neutrality when you have dominant power over information flow?

Facebook is approaching a sort of information monopoly when it comes to discovery (Google holds the monopoly in information search), and I’d say it is the most powerful global information dissemination medium today. That power comes with responsibility and ethical questions, and hence the algorithm neutrality discussion. The strategic question for Facebook is whether it makes sense to manipulate the natural information flow that arises from user behavior in a neutral system. The question for society is whether Facebook news feeds should be regulated.

I am not advocating more regulation, since regulation is never a creative solution to any problem, nor does it tend to be informed by science. I advocate collaboration between sociologists and social networks in order to identify the best means to filter harmful misinformation and curb the generally known negative social tendencies that we humans possess. For sure, this can be done without endangering the free flow of information, which is the best part of social networks.