Last updated on July 5, 2017
I recently participated in a meeting of computer scientists where the topic was “fake news”. The implicit assumption was: “we will build a tool X that shows people what is false information, and they will become informed.”
However, after the meeting I realized this might not be enough and might in fact be naïve thinking. It may not matter that algorithms and social media platforms tell people ‘this is false information’. People might choose to believe the conspiracy theory anyway, for various reasons. In those cases, the problem is not a lack of information; it is something else.
And the real question is: can technology fix that something else, or at least be part of the solution?
The balanced view algorithm
Because, technically, the algorithm itself is simple (a sketch in code follows the list):
- Take a topic
- Define the polarities of the topic
- Show each user an equal amount of content from each polarity
=> results in a balanced and informed citizen!
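As a minimal sketch of those three steps in Python, assuming content items arrive already labeled by polarity (the names `balanced_feed`, `polarity_of`, and `per_polarity` are mine for illustration, not from any real platform):

```python
import random
from collections import defaultdict

def balanced_feed(items, polarity_of, per_polarity=3):
    """Build a feed with an equal number of items from each polarity.

    items        -- candidate content items on a single topic
    polarity_of  -- function mapping an item to its polarity label
                    (e.g. 'pro' / 'contra')
    per_polarity -- how many items to show from each side
    """
    # Step 2: group the topic's content by polarity.
    buckets = defaultdict(list)
    for item in items:
        buckets[polarity_of(item)].append(item)

    # Step 3: sample equally from every polarity; if one side has
    # too little content, take what exists rather than padding it.
    feed = []
    for bucket in buckets.values():
        feed.extend(random.sample(bucket, min(per_polarity, len(bucket))))

    random.shuffle(feed)  # avoid presenting either side as a block
    return feed

# Illustrative usage with made-up articles:
articles = [
    {"title": "Study finds X is safe", "stance": "pro"},
    {"title": "Doubts about X persist", "stance": "contra"},
    {"title": "Experts endorse X", "stance": "pro"},
    {"title": "X critics speak out", "stance": "contra"},
]
print(balanced_feed(articles, polarity_of=lambda a: a["stance"], per_polarity=2))
```

Note that the hard part is deliberately hidden inside `polarity_of`: defining the polarities of a topic, and classifying content into them, is exactly where the sociology begins.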
But, as said, if the opposing content goes against what you want to believe, then the problem is not that you do not “see” enough of that content.
Conclusion
These are tough questions, and they reside at the interface of sociology and algorithms. Some of the solutions may approach manipulation, but, as any propagandist could tell you, manipulation has to be subtle to be effective.
The major risk is that people might rebel against a balanced worldview. It is good to remember that ‘what you need to see’ is not the same as ‘what you want to see’. There is little that algorithms can do if people want to live in a bubble.
Originally published at https://algoritmitutkimus.fi/2017/04/16/the-balanced-view-algorithm/