April 16, 2017
However, after the meeting I realized this might not be enough and might in fact be naïve thinking. It may not matter that algorithms and social media platforms show people ‘this is false information’. People might choose to believe the conspiracy theory anyway, for various reasons. In those cases, the problem is not a lack of information; it is something else.
And the real question is: Can technology fix that something else? Or at least be part of the solution?
Because, technically, the algorithm is simple:

show a person content that opposes their current views => results in a balanced and informed citizen!
But, as said, if the opposing content is against what you want to believe in, well, then the problem is not “seeing” enough that content.
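The naive balancing logic can be sketched in a few lines of Python. This is purely my own illustration – the function names, the −1..+1 stance scale, and the data model are hypothetical, not any platform’s actual recommender:

```python
# A minimal sketch of the naive "balanced view" idea: estimate which way the
# user's media diet leans, then recommend the item that best offsets it.

def stance_balance(consumed_stances):
    """Average stance of consumed content on a -1..+1 scale."""
    return sum(consumed_stances) / len(consumed_stances)

def recommend_balancing(consumed_stances, candidates):
    """Pick the candidate item whose stance best offsets the user's bubble."""
    bias = stance_balance(consumed_stances)
    # The most "balancing" item lies farthest on the opposite side of the bias.
    return min(candidates, key=lambda item: item["stance"] * bias)

feed = recommend_balancing(
    consumed_stances=[0.8, 0.9, 0.7],          # user reads mostly one side
    candidates=[{"id": "a", "stance": 0.6},
                {"id": "b", "stance": -0.7}],
)  # picks "b", the opposing item
```

The sketch makes the limitation concrete: the algorithm can *surface* item “b”, but nothing in it can make the user accept it.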
These are tough questions that reside at the intersection of sociology and algorithms. On one hand, some of the solutions may approach manipulation; but, as any propagandist could tell you, manipulation has to be subtle to be effective.
The major risk is that people might rebel against a balanced worldview. It is good to remember that ‘what you need to see’ is not the same as ‘what you want to see’. There is little that algorithms can do if people want to live in a bubble.
Originally published at https://algoritmitutkimus.fi/2017/04/16/the-balanced-view-algorithm/
April 11, 2017
The purpose of the strategy algorithm is to present a simple, parsimonious, and proven method for the successful creation of a corporate strategy.
In corporations, the problems usually do not relate to a lack of resources or options, but to the complexity of having, in fact, too many choices. This can lead to an illusion of superiority, which is not a short-term problem since the corporation is protected by its existing buffers, but which becomes a long-term issue once external conditions have tilted enough to cause a disruption driven by changing customer needs or competitors’ superior solutions. Therefore, any managing director or CEO needs simple guiding principles to reduce complexity into something manageable. The strategy algorithm (SA) is one such tool.
The goal of the SA is to find a unique competitive advantage that the customers appreciate, that can be executed, and that is not the focus of any existing competitors. This goal is known as the strategic goal. The steps are as follows:
1. Define customer segments – what benefits are important for each segment?
2. Conduct competitor analysis – what segments are not focused on by any competitor?
3. Conduct internal analysis – what resources do we have and need to capture that segment?
4. Then, make sure 1-3 are co-aligned (=write out the strategy).
5. Then, define strategic projects to remove bottlenecks and create assets (=resources that serve the strategic goal).
6. Then, execute with strong focus (=discard anything that deviates from the strategic goal).
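The steps above could be sketched, very roughly, as a function. This is a hypothetical illustration of the SA’s logic – real strategy work is of course not this mechanical, and the data model (segments as resource requirements) is my own simplification:

```python
def strategy_algorithm(segments, competitor_focus, our_resources):
    """Return candidate strategic goals: open segments plus the resource gaps
    (strategic projects) needed to capture them.

    segments: {segment_name: set of resources needed to capture it}
    competitor_focus: set of segments some competitor already focuses on
    our_resources: set of resources we currently have
    """
    plan = []
    for segment, needed in segments.items():
        # Step 2: skip segments that are already a competitor's focus.
        if segment in competitor_focus:
            continue
        # Step 3: gap analysis -- resources we need but do not yet have.
        gap = needed - our_resources
        # Step 5: the gap defines the strategic projects.
        plan.append({"segment": segment, "strategic_projects": sorted(gap)})
    return plan

plan = strategy_algorithm(
    segments={"winter tyres": {"cold-weather testing", "distribution"},
              "generic tyres": {"distribution"}},
    competitor_focus={"generic tyres"},
    our_resources={"distribution"},
)
```

With these made-up inputs, the function keeps only the uncontested segment (“winter tyres”) and lists the missing resource as a strategic project.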
As you can see, the first steps (1–4) are geared toward research and planning, and the last ones (5–6) toward implementation.
In step 1, you can use techniques such as conjoint analysis, which aims to find the product attributes that customers value most. Another option is to summarize customer segments into personas: fictive but descriptive characterizations of customer groups.
In step 2, “focus” is the keyword. Competitors can operate in the same market and offer similar products; the main point is that they are not focusing on the segment (=their turnover does not depend on it, and they are not investing heavily in its product development, marketing, and distribution). In other words, by taking the focus yourself, you keep competitors at bay, because they have more important priorities. An example is Nokian Tyres – at one point, it was a generic tyre company, but as an outcome of strategic work it re-focused around the “Trusted by the natives” guideline, i.e., winter tyres.
In step 3, you need to conduct a gap analysis of ‘what we have vs. what we need’. An example is Stephen Elop at Nokia – he recognized that the mobile world was moving to software ecosystems, and that Nokia had redundant know-how about legacy mobile software. In hindsight, we can say he should have fired and hired much more aggressively to transform the company into a focused, competitive unit.
The thinking borrows heavily from the Master’s thesis of Lasse Kurkilahti (Turku School of Economics), as well as related works from Michael Porter, W. Chan Kim, Renée Mauborgne, and other strategic thinkers.
March 30, 2017
Earlier today, I had a brief exchange of tweets with @jonathanstray about algorithms.
It started from his tweet:
Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.
To which I replied:
Yes, and that’s why learning is not the way to go. “Fair” should not be goal, is inherently subjective. “Objective” is better
Then he wrote:
lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.
And I wrote:
True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.
And he answered:
what does “neutral” mean though?
After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.
So, what is a neutral algorithm? I would define it like this:
“A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.”
An example of a neutral algorithm is a standard ad optimization algorithm: it gets to decide whether to show Ad1, Ad2, or Ad3. Instead of asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).
All ads (read: content, users) thus receive fair treatment – they are diffused based on their merits (measured objectively by an unambiguous metric), not based on favoritism of any sort.
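A minimal sketch of such a CTR-driven selector, here as an epsilon-greedy rule (the ad names and click numbers are made up for illustration; real ad systems are far more elaborate):

```python
import random

def select_ad(ctr_stats, epsilon=0.1):
    """Show the ad with the best observed CTR; with probability epsilon,
    show a random ad so every ad keeps getting a fair trial."""
    if random.random() < epsilon:
        return random.choice(list(ctr_stats))

    def ctr(ad):
        clicks, impressions = ctr_stats[ad]
        return clicks / impressions if impressions else 0.0

    return max(ctr_stats, key=ctr)

stats = {"Ad1": (5, 100), "Ad2": (12, 100), "Ad3": (7, 100)}
choice = select_ad(stats, epsilon=0.0)  # pure exploitation: best CTR wins
```

Note that no human opinion enters the decision – only the measured metric does – which is exactly what makes the treatment neutral in the sense defined above.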
The roots of algorithm neutrality stem from freedom of speech and net neutrality. No outsiders can impose their values and opinions (e.g., by censoring politically sensitive content) or interfere with the operating principles of the algorithm. Instead of being influenced by external manipulation, the decision-making of the algorithm is as value-free (neutral) as possible. For example, in the case of social media, it chooses to display information that accurately reflects the sentiment and opinions of people at a particular point in time.
Now, I grant there are issues with “freedom”, some of which are considerable. For example: 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and by short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda).
Another limitation is legislation – illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context, but given that the laws themselves are “fair”, this should pose no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech.
I wrote more about these issues here.
In spite of the aforementioned issues, a neutral algorithm gives each media outlet, candidate, and user a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.
The rest is up to humans – educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display what the state of ignorance/sophistication is in a society. A good example is Microsoft’s infamous bot Tay, a machine learning experiment turned bad. The alarming thing about the bot is not that “machines are evil”, but that *humans are evil*; the machine merely reflects that. Hence my original point of curbing human evilness by keeping algorithms as free of human values as possible.
Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is separation of church and state, there should be separation of humans and algorithms to the greatest possible extent.
 Initially, I considered a definition that would say “not influenced”, but it is not safe to assume that the subjectivity of its creators would not in some way be reflected in the algorithm. “Minimally”, in turn, leads to the normative argument that such subjectivity should be mitigated.
 Wikipedia (2016): “Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”
 Algorithm Neutrality and Bias: How Much Control? <https://www.linkedin.com/pulse/algorithm-neutrality-bias-how-much-control-joni-salminen>
 A part of the story is that Tay was trolled heavily and therefore assumed a derogatory way of speech.