

Belief systems and human action

What people believe sometimes becomes real because of that.

1. Introduction. People are driven by beliefs and assumptions. We all make assumptions and use simplified thinking to cope with the complexities of daily life. These include stereotypes, heuristic decision-making, and the many forms of cognitive bias we’re all subject to. Because the information individuals have is inherently limited, as are their cognitive capabilities, our rational thinking is naturally bounded (Simon, 1956).

2. Belief systems. I want to talk about what I call “belief systems”. They can be defined as a form of shared thinking by a community or a niche of people. Some general characterizations follow. First, belief systems are characterized by a common language (vocabulary) and a shared way of thinking. Sociologists might call them communities or sub-cultures, but I’m not using those terms because they are usually associated with shared norms and values, which do not matter in the context of this post.

3. Advantages and disadvantages. Second, the main advantage of belief systems is efficient communication: because all members share the belief system, they are privy to the meaning of specific terms and concepts. The main disadvantage of belief systems is the so-called tunnel vision, which restricts members who have adopted a belief system from seeking or accepting alternative ways of thinking. Both the main advantage and the main disadvantage result from the same principle: the necessity of simplicity. What I mean is that if a belief system is not parsimonious enough, it is not effective in communication but might escape tunnel vision (and vice versa).

4. Adoption of belief systems. The spread of a belief system is subject to the laws of network diffusion (Katz & Shapiro, 1985). The more people have adopted a belief system, the more valuable it becomes for an individual user. This encourages further adoption in a virtuous cycle. Simplicity enhances diffusion – a complex system is unlikely to be adopted by a critical mass of people. “Critical mass” refers here to the number of people sharing the belief system that is needed for additional members to adopt it. Although this may not be any single number, since the utility functions governing adoption are not uniformly distributed among individuals, there is an underlying assumption that belief systems are social by nature. If not enough people adopt a belief system, it is not remarkable enough to drive human action at a meaningful scale.
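As a side note (my illustration, not part of the original argument), the critical-mass idea can be made concrete with a Granovetter-style threshold model: each individual adopts the belief system once the share of adopters exceeds a personal threshold, so adoption either stalls or cascades depending on how demanding the thresholds are. A minimal Python sketch with arbitrary parameter values:

import random

def simulate_adoption(n=1000, seed_share=0.05, complexity=1.0, rounds=100, seed=42):
    # Toy threshold model of belief-system diffusion. Each individual has a
    # personal adoption threshold; higher 'complexity' raises the thresholds
    # (the belief system is harder to adopt). People adopt once the current
    # share of adopters exceeds their threshold.
    rng = random.Random(seed)
    thresholds = [rng.random() * complexity for _ in range(n)]
    adopted = [t < seed_share for t in thresholds]      # initial seed of adopters
    for _ in range(rounds):
        share = sum(adopted) / n
        new = [a or (t <= share) for a, t in zip(adopted, thresholds)]
        if new == adopted:                               # equilibrium reached
            break
        adopted = new
    return sum(adopted) / n

print(simulate_adoption(complexity=1.0))   # complex system: adoption stalls near the seed share
print(simulate_adoption(complexity=0.3))   # simpler system: adoption cascades through the population

In this toy model, simplicity (lower thresholds) is what lets adoption pass the critical mass, mirroring the point about parsimony above.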

5. Understanding. Belief systems are intangible and unobservable by any direct means, but they are “real” in the social sense of the word. They are social objects or constructs that can be scrutinized through proxies that reflect their existence. The best proxy for this purpose is language. Thus, belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of members adhering to a belief system. An objective examiner would be able to observe and record the members’ use of language and construct a map of the key concepts and vocabulary, along with their interrelations and underlying assumptions. Through this procedure, any belief system could be dissected into its fundamental constituents, after which its merits and potential discords (e.g., biases) could be objectively discussed.

For example, startup enthusiasts talk about “customer development” and “getting out of the building” as a new, revolutionary way of replacing market research, whereas marketing researchers might see little novelty in these concepts and could actually list those and many more market research techniques that would potentially yield a better outcome.

6. Performance. By objective standards, a given belief system need not be superior to another in order to be adopted or to perform better. In practice, a belief system can yield high performance rewards due to 1) additional efficiency in communication, 2) the chance of it happening to work better than competing solutions, or 3) its heuristic properties, which can e.g. enhance decision-making speed and/or accuracy. Therefore, belief systems do not need to be theoretically optimal solutions to yield a practically useful outcome.

7. Changing belief systems. Moreover, belief systems are often unconscious. Consider the capitalist belief system, or the socialist belief system. Both drive the thinking of individuals to an enormous extent. Once a belief system is adopted, it is difficult to unlearn. Getting rid of a belief system requires considerable cognitive effort, a sort of re-programming. An individual needs to be aware of the properties and assumptions of his belief system, and then want to change them, e.g. by looking for counter-evidence. It is a psychological process equivalent to learning or “unlearning”.

8. Conclusion. People operate based on belief systems. Belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of a belief system. Belief systems produce efficiency gains for communication but simultaneously hinder consideration of possibly better alternatives. A belief system needs to be simple enough to be useful; people readily absorb it and do not question its assumptions thereafter. Changing belief systems is possible but requires active effort over a period of time.

References

Katz, M. L., & Shapiro, C. (1985). Network Externalities, Competition, and Compatibility. The American Economic Review, 75(3), 424–440.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138.

Quick note: Measurement of brand advertising

Two brands colliding.

Hm, I’m thinking (therefore, I am a digital marketer). The classical advertising question has been:

How to measure the impact of advertising on a brand?

And then the answer has been “oh, you can’t”, or “it’s difficult”, or something along those lines. But, they say, it is there! The marketers’ argument for poor direct performance has traditionally been that there is a lift in brand awareness or attitude, which are sometimes measured through cumbersome surveys.

But actually, aren’t the aforementioned attributes just predictors of purchase? I mean, they should result in higher probability of purchase, right? Given that people know the brand and like the brand, they are more likely to purchase it.

If so, the impact metric *is* indeed always sales — it’s only a question of choosing the period of examination. If all advertising impacts lead to sales, then sales is the metric even when we talk of brand advertising.

According to the previous logic, it would seem that measuring advertising impact by sales is always correct, but because of carryover effects (latent effects) the problem can be reformulated into:

What time period should we use to measure advertising impact?

And forget about measurement of brand impact. It’s not a question of impact on “soft” issues but impact on revenue. The influence mechanism itself might be soft, but it always needs to materialize as hard, cold cash. The trickier questions are determining the correct examination period for campaigns, which requires fitting it to the length of the purchase process, and keeping the analytics trail alive for at least that period.
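As a rough illustration of the carryover idea (my addition; the decay rate and numbers are arbitrary assumptions), a simple geometric adstock model spreads each period’s ad pressure into later periods, which is why the examination window has to be long enough to capture most of the decayed effect. A minimal Python sketch:

def adstock(spend, decay=0.5):
    # Geometric adstock: this period's effective ad pressure equals current
    # spend plus a decayed share of last period's pressure (the carryover).
    effect, carried = [], 0.0
    for s in spend:
        carried = s + decay * carried
        effect.append(carried)
    return effect

# One burst of spend in week 1 keeps exerting pressure in later weeks,
# so measuring sales impact only in week 1 understates the campaign.
weekly_spend = [100, 0, 0, 0, 0]
print(adstock(weekly_spend, decay=0.5))  # [100.0, 50.0, 25.0, 12.5, 6.25]

Fitting the decay (and hence the measurement window) to the length of the purchase process is exactly the reformulated question above.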

Conclusions and discussion

If carryover effects occur, how can we determine the correct time frame for drawing conclusions on advertising impact?

…I have to say, though, that measuring brand sentiment can’t be wrong. It can help understand why people like/dislike the brand, and therefore provide improvement ideas and a description of perceived brand attributes, information which is helpful for both product development and marketing.

But the ultimate metric for assessing advertising impact should always be sales.

Negative tipping and Facebook: Warning signs

This Inc article describes a very big danger for Facebook: http://www.inc.com/jeff-bercovici/facebook-sharing-crisis.html

It is widely established in platform theory that reaching a negative tipping point can destroy a platform. Negative tipping is essentially the reverse of positive tipping — instead of gaining momentum, the platform quickly starts losing it.

There are two dimensions I want to look at in this post.

First, what I call “the curse of likes”. Essentially, Facebook has made it too easy to like pages and befriend people; as a result, it is unable to manage people’s newsfeeds in the best way in terms of engagement. There is too much clutter, leaving important social information out, and the “friend” network is too wide for the intimacy required to share personal things. The former reduces the engagement rate, the latter results in unwillingness to share personal information.

Second, if people are sharing less about themselves, the platform has a harder time showing them relevant ads. The success of Facebook as a business relies on its revenue model, which is advertising. Both of the aforementioned risks are negative for advertising outcomes. If relevance decreases, a) user experience (through the negative effects of ads) and b) ad performance decrease as well, resulting in advertisers reducing their ad spend or, in the worst-case scenario, moving on to other platforms.

To counter these effects, Facebook can resort to a few strategies:

  1. Discourage people from “over-liking” things – this is for their own benefit, so as not to clutter the newsfeed
  2. Easy options to unsubscribe from people and pages — e.g., asking “Do you want to see this?” in relation to posts
  3. Favoring social content over news and company posts in the newsfeed algorithms – seeing personal social content is likely to incite more social content
  4. Sentiment control of newsfeed algorithm – to many, Facebook seems like a “negative place” with arguing on politics and such. This is in stark contrast to more intimate platforms such as Instagram. Thus, Facebook could incorporate sentiment adjustment in its newsfeed algorithm to emphasize positive content.
  5. Continued efforts to improve ad relevance – especially by giving incentives for high-CTR advertisers to participate by lowering their click prices, thereby encouraging engagement and match-seeking behavior.

Overall, Facebook as a platform will not be eternal. But I think the company is well aware of this, since their strategy is to constantly buy out rivals. The platform idea persists although individual platforms may perish.

Qualitative Analysis With NVivo – Essential Features

This post explains the use of the NVivo software package for the analysis of qualitative data. It focuses on four aspects:

  1. coding
  2. categorization
  3. relationships
  4. comparison of background variables

First, coding. This is simply giving names to phenomena observed in the material. It’s a process of abstraction and conceptualization, i.e. making the rich qualitative material more easily approachable by reducing its complexity into simple and descriptive codes which can be compared to and associated with one another at later stages of the analysis.

(In the picture, the highlighted areas are coded by right-clicking them and giving them a descriptive label which relates to a phenomenon of interest.)

You can think of the codes taking shape in two ways: a) from previous literature or b) emerging as important points in the material based on the researcher’s judgment. (You can think of this from the perspective of deductive/inductive emphasis.) Either way, they’re associated with one’s research questions — the material always needs to be analyzed in the light of one’s research questions so that the results of the analysis remain relevant for the study’s purpose. Oftentimes, the first step of reading and coding all the material is referred to as open coding (e.g., Corbin & Strauss, 2008).

Second, categorization. This is simply bundling the codes with one another and placing them in a hierarchical “structure”.

(In the picture, you can see codes being formulated as “main themes” and “sub-themes”, i.e. categories that contain other categories.)

The structure should follow the operationalization of the study — this usually comes naturally because the material is closely linked to the interview questions, which in turn are linked to the research questions (which are linked to the purpose of the study, forming a full circle from analysis to study purpose). For example, in the above picture the categories, or themes, relate to the challenges and opportunities of multichannel commerce – the topic under scrutiny.

Third, relationships. This is important – while reading the material, the researcher should form tentative relationships in his or her head. For example, “I see, x seems to be associated with y”. These are the “eureka” moments one gets while immersed in the analysis.

(In the picture, you can see several tentative relationships emerging from the analysis. A portion of them will be chosen for further validation/falsification, and potentially reported as outcomes of the study.)

The relationships can be coded instantly as you come across them — the beauty of NVivo is that you can code evidence (direct citations) into the relationships, and later, when you click a relationship open, you can find all the associated evidence (the same applies to all codes and categories). So, as you analyze the material further, you can add confirming and contrary evidence to the previously identified relationships while keeping a “trail of thought” useful for reporting the results. It is important to understand that at this point of the analysis the discovered relationships are so-called interim findings rather than final conclusions.

Now, qualitative research can result in several outcomes in terms of reporting, one being propositions. Propositions are conclusions of qualitative analysis; they are in a way tentative suggestions of general relationships, and can be formulated into hypotheses for quantitative testing. However, the propositions can be “validated”, or made more robust, by qualitative comparison as well. This is done through an iterative process of a) reading the material repeatedly and trying to find both confirmatory and falsifying evidence for the interim propositions, and b) collecting more research data especially focused on learning more about the tentative propositions (in Grounded Theory, this is referred to as theoretical sampling, see Glaser & Strauss, 1967). Once you have gone through this process of comparative analysis and theoretical sampling, you can have more confidence (not in a statistical but an analytical sense) in your propositions.

Fourth, comparison of background variables. It took me a long time to learn the power of comparing background information – but its potential and importance are really high, especially when the qualitative analyst wants to move beyond description to deeper understanding. I believe it was Milton Friedman who said the goal of research should be to find “hidden constructs” of reality. In quantitative studies this is done by a) identifying latent variables and b) finding statistical relationships between variables. In qualitative studies, we do not speak of relationships in the statistical sense but rather of “associations”, which can be of many types (some examples below).

(The picture depicts different types of associations named by the researcher, including i.a. “challenge”, “solution”, “opportunity”)

Anyway, there is absolutely no reason why we should not pursue the discovery of hidden realities in qualitative studies as well. In NVivo, this can be done via classifications and attributes. First, decide which pieces of background information you want to compare. Then, include them in your interview questions. Once you have the material, code all interviews (each representing an informant) as nodes – to these nodes, you will assign a classification schema with the chosen background information (attributes).

(The picture includes a comparison matrix of small and large firm representatives’ views on customer service challenges — it can easily be expanded to include other dimensions as per the analysis framework.)

For example, suppose you would like to compare the views of small and large firms on a specific multi-channel challenge, say customer service. You create a classification schema and give it the attribute “size” with two potential values, small and large. Then, you’d run a matrix coding query with small and large as rows and customer service as a column. Here, you can see the number of occurrences and, more importantly, you can click them “open” to see all the associated evidence. You’re still tied to the researcher’s judgment, or “interpretivism”, when comparing the answers, but at least this way you can conduct comparisons more systematically and in accordance with your analytical framework. It also helps you to discover patterns – for example, we could find that large firms tend to emphasize different challenges than smaller firms.
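NVivo handles this through its point-and-click matrix coding query, but the underlying idea is simply a cross-tabulation of codes against attributes. As a hypothetical illustration (assuming you had exported the coded segments to a flat table; the column names and data below are made up), the same comparison could be sketched in Python with pandas:

import pandas as pd

# Hypothetical export: one row per coded segment, with the informant's
# firm size as a background attribute and the code assigned to the segment.
segments = pd.DataFrame({
    "firm_size": ["small", "small", "large", "large", "large", "small"],
    "code":      ["customer service", "logistics", "customer service",
                  "customer service", "logistics", "customer service"],
})

# Matrix comparison: number of coded segments per attribute value and code.
print(pd.crosstab(segments["firm_size"], segments["code"]))

The counts only tell you where to look; the actual comparison still means clicking the cells open (or, here, filtering the rows) and reading the underlying evidence.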

Finally, I’d say the fifth important aspect of qualitative analysis is visualization of the results, usually in the form of a model or framework. Unfortunately this is where NVivo fails hard.

(In the picture: ideally, one would draw a moderating line from the third variable to the relationship between the two other variables, but this is not possible in NVivo.)

For example, you can’t draw “moderating” relationships, and variable names are cut short in languages such as Finnish, which have long words (I’ve reported these shortcomings to QSR, the maker of NVivo). Granted, moderation is usually understood as a property of quantitative studies, but there’s no reason why a qualitative framework or model shouldn’t incorporate it in a conceptual model (which could later be tested with structural equation modeling, for example). So, until these problems are fixed, I’d recommend sticking to other tools for visualization, such as Microsoft PowerPoint.

References:

Corbin, J., & Strauss, A. (2008). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. SAGE.

Glaser, B. G., & Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Transaction Publishers.

The author holds a PhD in marketing, and teaches and conducts research at the Turku School of Economics. His topics of interest include digital marketing, startup companies, and platforms.

Google and the Prospect of Programmatic

Introduction

This is a short post taking a stance on programmatic ad platforms. It’s based on one single premise:

Digital convergence will lead to a situation where all ad spend, not only digital, will be managed through self-service, open ad platforms that operate based on auction principles.

There are several reasons why this is not yet a reality; some of them relate to a lack of technological competence at traditional media houses, some to their willingness to “protect” premium pricing (this protection has led to a shrinking business and will keep doing so until they open up to free-market pricing), and a host of other factors (I’m actually currently engaged in a research project studying this phenomenon).

Digital convergence – you what?

Anyway, digital convergence means we’ll end up running campaigns through one or possibly a few ad platforms that all operate according to the same basic principles. They will closely resemble AdWords, because AdWords has been and still is the best advertising platform ever created. Self-service is critical because of the necessity of eliminating transaction costs in the selling process – in most cases we don’t need media salespeople to operate these platforms. Because we don’t need them, we won’t need to pay their wages, and this efficiency gain can be passed on to prices.

The platforms will be open, meaning that there are no minimum media buys – just like on Google and Facebook, you can start with $5 if you want (try doing that now with your local TV media salesperson). As for pricing, it’s determined via an ad auction, just like on Google and Facebook nowadays. Price levels will drop, but the lowered barrier to access will increase liquidity and therefore fill inventory more efficiently than human-based bargaining. At least initially I expect some flux in these determinants — media houses will want to impose minimum pricing, but I predict it will go away in time as they realize the value of the free market.
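To make the auction principle concrete (a simplified sketch of my own; real ad platforms layer quality scores and multiple slots on top of this), a sealed-bid second-price rule means the highest bidder wins but pays only just enough to beat the runner-up, which is what keeps self-service bidding honest even without minimum prices:

def second_price_auction(bids):
    # Single-slot sealed-bid auction: highest bidder wins, pays the
    # second-highest bid (Vickrey pricing), as in a simplified ad auction.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = {"advertiser_a": 2.50, "advertiser_b": 1.80, "advertiser_c": 0.05}
print(second_price_auction(bids))  # ('advertiser_a', 1.8)

Note how the smallest bidder can still participate with a tiny budget; liquidity comes from many small bidders rather than a few negotiated minimum buys.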

But now, to Google…

If Google were smart, it would develop a programmatic ad platform for TV networks, or even integrate it with AdWords. The same actually applies to all media verticals: radio, print… Their potential demise will be this Alphabet business. All the new ideas they’ve had have failed commercially, and focusing on producing more failed ideas unsurprisingly leads to more failure. Their luck, or skill, however you want to take it, has been in understanding the platform business.

Just like Microsoft, Google must have people who understand the platform business.

They’ve done a really good job with vertical integration, mainly with Android and Chrome. These support the core business model. Page’s fantasy-land ideas really don’t. Well, from this point of view, separating Alphabet from the core actually makes sense, as long as the focus is kept on search and advertising.

So, programmatic ad platforms have the potential to disrupt Google, since search spend is still dwarfed by TV and other offline media spend. And in light of Google’s supposed understanding of platform dynamics, it’s surprising they’re not taking a stronger stance in bringing programmatic to the masses – and by masses, I mean offline media, where the real money is. Google might be satisficing, and that’s a road to doom.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

The Vishnu Effect of Startups (creators/destroyers of jobs)

Background

In the Hindu scriptures there is a famous passage in which the god Vishnu describes himself as death; to Westerners this is mostly known through Oppenheimer’s quotation:

“Now, I am become Death, the destroyer of worlds.”

But, there is another god in Hinduism, Brahma, that is the creator of the universe.

How does this relate to startups?

Just like these two gods, startups are of a dualistic nature. In particular, they are both job creators and job destroyers. On one hand they create new jobs and job types. On the other hand, they destroy existing jobs.

So what?

This dualistic nature is often ignored when evaluating the impact of startups on society, although it’s definitely at the core of the Schumpeterian theory of innovation. What really matters for society is the balance — how fast new companies are creating jobs vs. how fast they are destroying them.

I haven’t seen a single quantification of this effect, so it would definitely merit research. Theoretically, it could be called something like SIR, or the startup impact ratio, which would be jobs produced divided by jobs destroyed:

SIR = jobs produced / jobs destroyed

As long as the ratio is more than 1, the startups’ impact on the job market (and therefore indirectly on society) is positive. In turn, if it’s below 1, “robots are taking our jobs”. Or, rather, above one Brahma is winning, while below one Vishnu is dominating.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

The Basics of Dilemmas

Introduction

By definition, a dilemma is a trade-off situation in which there are two choices, each leading to a negative outcome.

General solution

A general solution, then, is to weigh the outcomes and compare them against one another.

For example:

choice A: -1
choice B: -2

In this example, choice A has smaller negative effect, so we’d pick that one.

Complications

However, there are complications.

Consider that the above are in fact the short-term outcomes, but that there are also long-term outcomes. For example:

choice A: -1, -3
choice B: -2, -1

This leads us to payoff functions, in which the outcomes (payoffs) consist of several variables. In the example, choice A totals -4 and choice B totals -3; the long-term negative effects outweigh the short-term effects, and we would change our choice to B.

However, the choice can also be arbitrary, meaning that neither choice dominates. In game theory terms, there is no dominant strategy.

This would be the case when

choice A: -1, -2
choice B: -2, -1

As you can see, it doesn’t matter which choice we take since each gives a negative outcome of equal size. There is an exception to this rule, namely when the player has a preference between short- and long-term outcomes. For example, if he wants to minimize long-term damage, he would pick B, and vice versa.

How to apply this in real life?

In decision-making situations, it’s common to make lists of + and -, i.e. listing positive and negative sides. By assigning a numerical value to each, you can calculate the sum and rank the choices. In other words, it becomes easier to make tough decisions.
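A minimal Python sketch of that plus/minus exercise (my illustration; the payoff values and weights are arbitrary): score each choice’s short- and long-term outcomes, sum them, and pick the least negative total. A preference for the short or the long term can be expressed as weights.

def evaluate(choices, weights=(1.0, 1.0)):
    # Sum each choice's (short-term, long-term) payoffs, optionally weighting
    # one horizon more heavily, and return the best (least negative) choice.
    totals = {name: sum(w * p for w, p in zip(weights, payoffs))
              for name, payoffs in choices.items()}
    return max(totals, key=totals.get), totals

print(evaluate({"A": (-1, -3), "B": (-2, -1)}))           # B wins: -3 vs. A's -4
print(evaluate({"A": (-1, -2), "B": (-2, -1)},
               weights=(1.0, 2.0)))                        # weighting the long term favors B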

I’m into digital marketing, startups, and platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f

Digital Marketing Laws (work in progress…)

Hi,

this is a work in progress – I’ll keep updating this list as new moments of “eureka” hit me.

Digital marketing laws

  1. The higher the position in a SERP, the higher the CTR
  2. The more a mixed platform gains demand-side popularity, the more it restricts the organic reach of supply-side
  3. Search-engine traffic consistently outperforms social media traffic in direct ROI
  4. People are not stupid (yes, this is why retargeting is not a stairway to heaven)
  5. “it is almost always much cheaper to retain satisfied customers and turn them into repeat business than it is to attract a new, one-time customer.”

Want to add something? Please post it in the comment section!

A.I. – the next industrial revolution?

Introduction

Many workers are concerned about “robotization” and “automation” taking away their jobs. The media has also been writing actively about this topic lately, as can be seen in publications such as the New York Times and Forbes.

Although there is undoubtedly some dramatization in the scenarios created by the media, it is true that the trend of automation took away manual jobs throughout the 20th century and has continued – perhaps even accelerated – in the 21st century.

Currently the jobs taken away by machines are in manual labor, but what happens if machines take away knowledge labor as well? I think it’s important to consider this scenario, as most of the focus has been on manual jobs, whereas the future disruption is more likely to take place in knowledge jobs.

This article discusses what’s next – in particular from the perspective of artificial intelligence (A.I.). I’ve been developing a theory about this topic for a while now. (It’s still unfinished, so I apologize for the fuzziness of thought…)

Theory on the development of job markets

My theory on the development of job markets relies on two key assumptions:

  1. with each development cycle, fewer people are needed
  2. and it becomes more difficult for average people to add value

The idea here is that while it is relatively easy to replace a job taken away by simple machines (sewing machines still need people to operate them), it is much harder to replace jobs taken away by complex machines (such as an A.I.) providing higher productivity. Consequently, fewer people are needed to perform the same tasks.

By “development cycles”, I refer to drastic shifts in job market productivity, i.e.

craftsmanship –> industrial revolution –> information revolution –> A.I. revolution

Another assumption is that labor skills follow a Gaussian curve. This means most people are best suited for manual jobs, while the information economy requires skills that are at the upper end of that curve (the smartest and brightest).

In other words, the average worker will find it more and more difficult to add value in the job market, due to the sophistication of the systems (a lot more learning is needed to add value than in manual jobs, where training takes only a couple of days). Even currently, the majority of global workers are better suited to manual labor than information economy jobs, and so some economies are at a major disadvantage (consider Greece vs. Germany).

Consistent with the previous definition, we can see the job market as including two types of workers:

  • workers who create
  • workers who operate

The former create systems as their job, whereas the latter operate them as their job. For example, in the sphere of online advertising, Google’s engineers create the AdWords search-engine advertising platform, which is then used by online marketers running campaigns for their clients. In the current information economy, the best situation is for workers who are able to create systems – i.e., their value-added is the greatest. With an A.I., however, both jobs can be overtaken by machine intelligence. This is the major threat to knowledge workers.

The replacement takes place due to what I call the errare humanum est effect (the disadvantage of humans vis-à-vis machines), according to which a machine is always superior in job tasks compared to a human, who is an erratic being controlled by biological constraints (e.g., the need for food and sleep). Consequently, even the brightest humans will still lose to an A.I.

Examples

Consider these examples:

  • Facebook has one programmer per 1.2 million users [1] and one employee per 249,000 users [2]
  • Rovio has one employee per 507,000 gamers [3]
  • Pinterest has one employee per 400,000 users [2]
  • Supercell has one employee per 193,000 gamers [4]
  • Twitter has one employee per 79,000 users [5]
  • Linkedin has one employee per 47,000 users [6]

(Some of these figures are a bit outdated, but in general they serve to support my argument.)

Therefore, the ratio of workers to customers is much lower than in previous transitions. To build a car for one customer, you need tens of manufacturing workers. To serve customers in a supermarket, the ratio needs to be something like 1:20 (otherwise queues become too long). But when the ratio is 1:1,000,000, not many people are needed to provide a service for the whole market.

As can be seen, the mobile application industry, which has been touted as a source of new employment, does indeed create new jobs [7], but it doesn’t create them for the masses. This is because not many people are needed to succeed in this business environment.

Further disintermediation takes place when platforms talk to each other, forming super-ecosystems. Currently, this takes place through an API logic (application programming interface), which is a “dumb” logic doing only prescribed tasks, but an A.I. would dramatically change the landscape by introducing creative logic into API-based applications.

Which jobs will an A.I. disrupt?

Many professional services are on the line. Here are some I can think of.

1. Marketing managers 

An A.I. can allocate budgets and optimize campaigns far more efficiently than error-prone humans. The step from Google AdWords and Facebook Ads to automated marketing solutions is not that big – at the moment, the major advantage of humans is creativity, but the definition of an A.I. in this post assumes creative functions.

2. Lawyers 

An A.I. can recall all laws, find precedent cases instantly, and give correct judgments. I recently had a discussion with one of my developer friends – he was particularly interested in applying A.I. to the legal system – currently it’s too big for a human to comprehend, as there are thousands of laws, some of which contradict one another. An A.I. can quickly find contradicting laws and give all alternative interpretations. The current human advantage is a sense of morality (right and wrong), which can be hard to replicate with an A.I.

3. Doctors 

An A.I. makes faster and more accurate diagnoses; a robot performs surgical operations without flaw. I would say many standard diagnoses by human doctors could be replaced by an A.I. measuring the symptoms. There have been several cases of incorrect diagnoses due to hurry and the human error factor – as noted previously, an A.I. is immune to these limitations. The major human advantage is sympathy, although some doctors lack even this.

4. Software developers

Even developers face extinction; upon learning the syntax, an A.I. will improve itself better than humans do. This would lead to an exponentially accelerating increase in intelligence, something commonly depicted in A.I. development scenarios.

Basically, all knowledge professions accessible to an A.I. will be disrupted.

Which jobs will remain?

Actually, the only jobs left would be manual jobs – unless robots take them as well (there are some economic considerations against this scenario). I’m talking about low-level manual jobs – transportation, cleaning, maintenance, construction, etc. These require more physical material – due to the aforementioned supply and demand dynamics, it may be that people are cheaper to “build” than robots, and can therefore still take on simple jobs.

At the other extreme, there are experience services offered by people to other people – massage, entertainment. These can remain based on the previous logic.

How can workers prepare?

I can think of a couple of ways.

First, learn coding – i.e., talking to machines. People who understand machines’ logic are in a position to add value — they have access to the society of the future, whereas those who are unable to use these systems are at a disadvantage.

The best strategy for a worker in this environment is continuous learning and re-education. From the schooling system’s perspective, this requires a complete shift in thinking – currently most universities are far behind in teaching practical skills. I notice this every day in my job as a university teacher – higher education must catch up, or it will completely lose its value.

Currently higher education is shielded by governments through official diplomas appreciated by recruiters, but true skills trump such an advantage in the long run. Already at this moment I’m advising my students to learn from MOOCs (massive open online courses) rather than relying on the education we give in my institution.

What are the implications for the society?

At a global scale, societies are currently facing two contrasting mega-trends:

  • the increase of productivity through automation (= lower demand for labor)
  • the increase of population (= higher supply of labor) (everyone has seen the graph showing population growth starting from the 19th century [8])

It is not hard to see these are in contrast: fewer people are needed for the same output, whereas more people are born and thus need jobs. The increase in population is exponential, while the increase in productivity comes, according to my theory, in large shifts. A large shift is bad because before it takes place, everything seems normal. (It’s like a tsunami approaching – there is no way to know before it hits you.)

What are the scenarios to solve the mega-trend contradiction?

I can think of a couple of ways:

  1. Marxist approach – redistribution of wealth and re-discovery of “job”
  2. WYSIWYG approach – making the systems as easy as possible

By adopting a Marxist approach, we can see there are two groups who are best off in this new world order:

  • The owners of the best A.I. (system capital)
  • The people with capacity to use and develop A.I. further (knowledge capital)

Others, as argued previously, are at a disadvantage. The phenomenon is much like the concept of the “digital divide”, which can refer to 1) the difference in access to technologies between citizens of developed and developing countries, or 2) the difference in the ability of the elderly vs. the young to use modern technology (the elderly having, for example, worse opportunities in high-tech job markets).

There are some relaxations to the arguments I’ve made. First, we need to consider that the increase in free time people have, as well as the general population increase, creates demand for services relating to experiences and entertainment per se; yet, there needs to be consideration of re-distribution of wealth, as people who are unable to work need to consume in order to provide work for others (in other words, the service economy needs special support and encouragement from the government vis-à-vis machine labor).

While it is a precious goal that everyone contribute to society through work, the future may require a re-check of this Protestant work ethic if the supply of work indeed drastically decreases. The major reason, in my opinion, behind the failure of policies reducing work hours, such as the 35-hour work week in France, is that countries other than these pioneers are not adopting them and so gain a comparative advantage in the global market. We are not yet at the stage where the supply of labor is dramatically reduced at a global scale, but according to my theory we are getting there.

Secondly, a major relaxation, indeed, is that the systems can be made usable by people who lack an understanding of their technical finesse. This method is already widely applied – very few understand the operating principles of the Internet, and yet can use it without difficulty. Even more complex professional systems, like Google AdWords, can be used without a detailed understanding of Google’s algorithm or Vickrey second-price sealed-bid auctions.

So, dumbing things down is one way to go. The problem with this approach in the A.I. context is that when the system is smart enough to use itself, there is no need to dumb it down – i.e., having humans use it would be a non-optimal use of resources. We can already see this in some bidding algorithms in online advertising – the system optimizes better than people do. At the moment we online marketers can add value through copywriting and other creative work, but the coming A.I. would take away this advantage from us.

Recommendations

It is the natural state of job markets that most workers are skilled only for manual labor or very simple machine work; if these jobs are lost, a new way of organizing society is needed. Rather than fighting the change, societies should approach it objectively (which is probably one of the hardest things for human psychology).

My recommendations for the policy makers are as follows:

  • decrease the cost of human labor (e.g., in Finland sometime in the 70s services were exempted from taxes – this kind of scenario should help)
  • reduce employment costs – the situation is in fact perverse, as companies are penalized through side costs if they recruit workers. In a society where demand for labor is scarce, the reverse needs to take place: companies that recruit need to be rewarded.
  • retain/introduce monetary transfers à la welfare societies – because labor is not enough for everyone, the state needs to pass money from capital holders to the underprivileged. The Nordic states are closer to a working model than more capitalistic states such as the United States.
  • push education system changes – because skills required in the job market are more advanced and more in flux than previously, the curriculum substance needs to change faster than it currently does. Unnecessary learning should be eliminated, while focusing on key skills needed in the job market at the moment, and creating further education paths to lifelong learning.

Because the problem of shrinking demand for jobs is not yet acute, these changes are unlikely to take place until there is no other choice (which is, by the way, the case for most political decision-making).

Open questions

Up to which point can human labor be replaced? I call it the point of zero human: the point at which no humans are needed to produce an equal or larger output than what is being produced at an earlier point in time. The fortune of humans is that we are producing more all the time – if the production level were still at the stage of the 18th century, we would already be at the point of zero human. Therefore, job markets are not developing in a predictable way towards the point of zero human, but it may nevertheless be a stochastic outcome of the current rate of technological development. Ultimately, time will tell. We are living in exciting times.

References:

[1]: https://www.facebook.com/notes/facebook-engineering/facebook-engineering-bootcamp/177577963919

[2]: http://royal.pingdom.com/2013/02/26/pinterest-users-per-employee/

[3]: http://www.rovio.com/en/news/press-releases/284/rovio-entertainment-reports-2012-financial-results

[4]: http://www.gamesindustry.biz/articles/2014-02-11-clash-of-clans-daily-revenue-at-5.15-million-hacker

[5]: http://www.statista.com/statistics/272140/employees-of-twitter/

[6]: https://press.linkedin.com/about-linkedin

[7]: http://www.visionmobile.com/product/uk-app-economy-2014/

[8]: http://www.susps.org/images/worldpopgr.gif

The ROI of Academic Publishing

Problem of ROI in publishing

The return on investment (ROI) of academic publishing is absolutely terrible.

Think of it – thousands of hours spent correcting formatting, spelling, rephrasing, and so on. All this after the actual insight of the research has been accomplished. In all seriousness, spending 10% of one’s time doing research and 90% writing and rewriting cannot be thought of as anything but waste.

Why should we care?

The inefficiency of the current way of doing it – combining doing research and writing about it under the same name of “doing research” – is a horrible waste of intelligence and human resources. It inflates the cost of doing research and also makes scientific progress slower than if 90% of the time were spent on research and 10% on writing.

Root cause

Some might say it’s a perverse outcome of letting staff go – nowadays even professors have to do everything by themselves because there are so few assistants and administrators. Why is this perverse? Because at the same time more people need work. It’s also perverse, or paradoxical, because letting the help go is done to increase efficiency, but in the end it actually decreases efficiency as the research staff shift their time from doing research to fixing spelling errors. There is a widespread misunderstanding that letting people go leads to better efficiency – it may save costs, but exactly at the cost of efficiency.

My experiences

The thought for this article came to mind when my colleague and I once again received some minor edit requests for an article to be published in a book – the book material was ready last year already, but all these people are working to fix small details that add zero substantive value. What a waste!

And I’m not alone in this situation; most if not all academics face the same problem.

Solution

Two solutions readily come to mind:

  • report the data and that’s it
  • use editors to fix all minor errors instead of forcing the high-thinkers to waste their time on it

The latter one is much better, as the first option misses the importance of interpreting the results and theorizing from them (the whole point of doing research).

What is ROI of research?

Efficiency, such as the ROI of research, should be defined as learning more about the world. This will never be accomplished by writing reports, but by going out into the world. At the same time, I don’t mean to undermine basic research – the ROI of research is not the same as its immediate usefulness, let alone its immediate economic potential. ROI in my argument simply refers to the ratio of doing research vs. writing about it, not the actual quality of the outcome.

The author works as a university teacher at the Turku School of Economics.