
Digital marketing, startups and platforms

Is “premium” ad space a hoax?

Answer: It kinda is.

“Premium publishers” and “premium ad space” — these are often-heard terms in programmatic advertising. But they are also dangerously fallacious ideas.

I’ll give three reasons why:

  1. A priori problem
  2. Uniformity problem
  3. Equilibrium problem

First, publishers define what is “premium” a priori (before results), which is the wrong sequence (the a priori problem). The value of ad space — and with it the status, premium or not — should be determined a posteriori, after the fact. Anything else risks bias from guesswork.

Second, what is “premium” (i.e., works well) for advertiser A might be different for advertiser B, yet the same ad space is labeled uniformly “premium” or not (the uniformity problem). The value of ad space should be determined by its value to each advertiser, and that value is not uniform across advertisers.

Third, fixing a higher price for “premium” inventory skews the market: rational advertisers won’t pay irrational premiums, so the publisher ends up losing revenue instead of earning a “premium” price (the equilibrium problem). This is the exact opposite of the outcome the publisher hoped for, and it arises from an imbalance of supply and demand.
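To make the a posteriori idea concrete, here is a toy sketch in Python — all numbers and names are made up for illustration — where the same ad slot is valued after the fact, separately for each advertiser, instead of carrying one uniform “premium” label:

```python
# Toy sketch (hypothetical numbers): value the same ad slot a posteriori,
# per advertiser, instead of labeling it "premium" a priori for everyone.

# Observed post-campaign results per advertiser on the same placement.
results = {
    "advertiser_A": {"spend": 1000.0, "conversions": 50, "value_per_conversion": 30.0},
    "advertiser_B": {"spend": 1000.0, "conversions": 10, "value_per_conversion": 30.0},
}

def placement_roi(r):
    """Return on ad spend: revenue attributable to the placement / spend."""
    return r["conversions"] * r["value_per_conversion"] / r["spend"]

for name, r in results.items():
    roi = placement_roi(r)
    label = "premium for this advertiser" if roi > 1.0 else "not premium for this advertiser"
    print(f"{name}: ROI = {roi:.2f} -> {label}")
```

With these invented numbers, the identical slot is “premium” for advertiser A and a loss-maker for advertiser B — the uniformity problem in two lines of arithmetic.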


I defined premium as ad space that works well with regard to the advertiser’s objectives. Other definitions also exist; e.g., Münstermann and Würtenberg (2015) argue that the distinctive trait between premium and non-premium media is the degree of editorial professionalism, so that amateur websites would be less valuable. In many cases, this is an incorrect classifier from the advertiser’s perspective — e.g., placing an ad on a blogger’s website (influencer marketing) can fairly easily produce higher returns than placing it alongside “professional” content. The degree of professionalism of the content is not a major cue for consumers, and therefore one should define “premium” from the advertiser’s point of view — as a placement that works.


The only reason premium inventory is still alive, I suspect, is the practice of private deals, where advertisers are more interested in volume than performance — these advertisers are informed more by assumptions than by data. Most likely, as buyers’ sophistication increases, they will become more inclined toward market-based pricing, which is much more closely associated with performance than private deals are.

Quick note: Measurement of brand advertising

Two brands colliding.

Hm, I’m thinking (therefore, I am a digital marketer). The classical advertising question has been:

How to measure the impact of advertising on a brand?

And then the answer has been “oh, you can’t”, or “it’s difficult”, or something along those lines. But, they say, it is there! The marketers’ argument for poor direct performance has traditionally been that there is a lift in brand awareness or attitude, which are sometimes measured with cumbersome surveys.

But actually, aren’t the aforementioned attributes just predictors of purchase? I mean, they should result in higher probability of purchase, right? Given that people know the brand and like the brand, they are more likely to purchase it.

If so, the impact metric *is* indeed always sales — it’s only a question of choosing the period of examination. If all advertising impacts lead to sales, then sales is the metric even when we talk of brand advertising.

According to this logic, it would seem that measuring advertising impact by sales is always correct; because of carryover effects (latent effects), the problem can be reformulated as:

What time period should we use to measure advertising impact?

And forget about measuring brand impact as such. It’s not a question of impact on “soft” issues but of impact on revenue. The influence mechanism itself might be soft, but it always needs to materialize as hard, cold cash. The trickier questions are determining the correct examination period for a campaign, which requires fitting it to the length of the purchase process, and keeping the analytics trail alive for at least that period.
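Carryover is often formalized as a geometric “adstock”: each period’s effect equals that period’s spend plus a decaying share of the previous period’s effect. A minimal sketch — the 0.5 decay rate is an assumption for illustration, not a claim from this post — also shows how the decay rate dictates the minimum examination window:

```python
import math

def adstock(spend, decay=0.5):
    """Geometric adstock: effect_t = spend_t + decay * effect_{t-1}."""
    effect, out = 0.0, []
    for s in spend:
        effect = s + decay * effect
        out.append(effect)
    return out

def min_window(decay, share=0.95):
    """Periods needed to capture `share` of a one-off burst's total effect.

    After n periods, the captured share of a unit burst is 1 - decay**n.
    """
    return math.ceil(math.log(1 - share) / math.log(decay))
```

With a 0.5 decay you need about 5 periods to capture 95% of a campaign burst; with a stickier 0.8 decay, about 14 — which is exactly why the examination period must be fitted to the purchase process rather than fixed.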

Conclusions and discussion

If carryover effects occur, how can we determine the correct time frame for drawing conclusions on advertising impact?

…I have to say, though, that measuring brand sentiment can’t be wrong. It can help understand why people like/dislike the brand, and therefore provide improvement ideas and a description of perceived brand attributes, information which is helpful for both product development and marketing.

But the ultimate metric for assessing advertising impact should always be sales.

Algorithm Neutrality and Bias: How Much Control?

The Facebook algorithm is a global superpower.

So, I read this article: Facebook is prioritizing my family and friends – but am I?

The point of the article — that you should focus on your friends & family in real life instead of Facebook — is poignant and topical. So much of our lives is spent on social media without the “social” part, and even when it is there, something is missing compared to physical presence (without smartphones!).

Anyway, this post is not about that. I got to thinking about the article from the algorithm-neutrality perspective. So what does that mean?

Algorithm neutrality takes place when a social network allows content to spread freely based on its merits (e.g., CTR, engagement rate), so that the most popular content gets the most dissemination. In other words, the network imposes no media bias. Although the content being spread might itself carry a media bias, the social network stays objective and accounts only for the content’s quantifiable merits.

Why does this matter? Well, a neutral algorithm guarantees manipulation-free dissemination of information. As soon as human judgment intervenes, there is a bias. That bias may lead to censorship or the favoring of a certain political party, for example. The effect can be clearly seen in so-called media bias. Anyone following the political coverage of the US elections or of Brexit has noticed the immense media bias that is omnipresent even in esteemed publications like the Economist and the Washington Post. Indeed, they take a stance and report based on that stance instead of covering events objectively. A politically biased media like the one in the US is not much better than the politically biased media in Russia.

It is clear that free channels of expression enable the proliferation of alternative views, whereupon an individual is (theoretically) better off, since there are more data points to base his/her opinion on. Thus, social networks (again, theoretically) mitigate media bias.

There are many issues though. First is the one that I call neutrality dilemma.

The neutrality dilemma arises from what I already mentioned: information bias can be embedded in the content people share. If the network restricts information dissemination, it moves from neutrality to control. If it doesn’t, there is a risk of propagating harmful misinformation or propaganda. Therefore, on this continuum between control and freedom there is a trade-off that social networks constantly need to address in their algorithms and community policies. For example, Facebook bans some content, such as violent extremism. It also collaborates with local governments, which can ask for the removal of certain content; this can be viewed in Facebook’s transparency report.

The dilemma has multiple dimensions.

First of all, there are ethical issues. From the perspective of “what is right”, shouldn’t the network prohibit the diffusion of information that is counter-factual? Otherwise, people can be misled by false stories. But also from the perspective of what is right: shouldn’t there be free expression, even if a piece of information is not validated?

Second, there are some technical challenges:

A. How to identify the “truthfulness” of content? In many cases this is seemingly impossible, because the issues are complex and not factual to begin with. Consider Brexit: it is not a fact that the leave vote leads to a worse situation than the remain vote, or vice versa. In a similar vein, it is not a fact that the EU should be kept together. These are questions of assumptions, which makes them hard: people freely choose the assumptions they want to believe, and there can be no objective validation of this sort of complex social problem.

B. How to classify political/argumentative views and relate them to one another? There are different points of view, like “pro-Brexit” and “anti-Brexit”. The social network algorithm should detect, based on an individual’s behavior, their membership in a given group; the behavior consists of messages posted and content liked, shared, and commented on. It should be fairly easy to form a view of a person’s stance on a given topic with the help of these parameters. Then it is crucial to map the stances in relation to one another, so that the extremes can be identified.
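A minimal sketch of that idea: assume each content item carries a stance score in [-1, 1] (say, -1 = “anti-Brexit”, +1 = “pro-Brexit”) and that actions are weighted by how strong a signal they are. Both the stance scores and the action weights below are hypothetical, not anything Facebook is known to use:

```python
# Hypothetical sketch: estimate a user's stance on a topic from behavior
# signals. Sharing is assumed to be a stronger signal than commenting,
# which is stronger than liking (invented weights).
ACTION_WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}

def stance_score(actions):
    """Weighted average of the stances of content a user interacted with.

    `actions` is a list of (action_type, content_stance) pairs, where
    content_stance is in [-1, 1]. Returns a score in [-1, 1];
    0 means mixed or no behavior.
    """
    num = sum(ACTION_WEIGHTS[a] * s for a, s in actions)
    den = sum(ACTION_WEIGHTS[a] for a, _ in actions)
    return num / den if den else 0.0
```

A user who likes and shares pro content but once likes a mildly anti post would land around +0.7 — clearly in one camp, which is what the mapping of extremes needs.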

As it currently stands, one is shown the content one prefers, which confirms the already established opinion. This does not support learning or forming an objective view of the matter; instead, it reinforces a biased worldview and indeed exacerbates the problem. It is crucial to remember that opinions do not remain mere opinions but translate into behavior: what is socially established becomes physically established through people’s actions in the real world. Therefore, the power of social networks needs to be treated with caution.

C. How to identify the quality of argumentation? Quality of argumentation matters if we apply the rotation of alternative views intended to mitigate the reinforcement of bias. This is because the counter-arguments need to be solid: when making a decision, the pro and contra sides must both be well argued for an objective decision to emerge. Machine learning could be the solution — assuming we have training data on the structure of solid argumentation, we can compare this archetype to any text and assign it a score based on how good the argumentation is. Such a method does not consider the content of the argument, only its logical form. It would include a way to detect known argumentation errors based on the syntax used. In fact, such a system is not unimaginably hard to achieve — common argumentation errors, or logical fallacies, are well documented.
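As a crude illustration of the detection part, here is a hypothetical sketch that scores text down for syntactic cues of documented fallacies. A real system would use a trained model; the patterns, penalty, and scoring scheme below are invented for illustration only:

```python
import re

# Crude, hypothetical sketch: flag syntactic cues of well-documented
# fallacies. Only shows the idea of scoring text down for known
# argumentation-error patterns; real systems would use trained models.
FALLACY_PATTERNS = {
    "ad_hominem": re.compile(r"\byou're (an idiot|stupid|a liar)\b", re.I),
    "bandwagon": re.compile(r"\beveryone knows\b", re.I),
    "appeal_to_nature": re.compile(r"\bit's natural, so\b", re.I),
}

def argument_score(text, base=1.0, penalty=0.25):
    """Start from a perfect score and subtract a penalty per fallacy cue found."""
    hits = [name for name, pat in FALLACY_PATTERNS.items() if pat.search(text)]
    return max(0.0, base - penalty * len(hits)), hits
```

For example, “Everyone knows Brexit is bad, and you're an idiot if you disagree.” trips both the bandwagon and ad hominem cues and loses half its score — content untouched, only the logical form judged.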

Another way of detecting the quality of argumentation is user-based reporting: individuals report the posts they don’t like, and these get discounted by the algorithm. However, even when allowing users to report “low-quality” content, there is a risk that they report content they disagree with rather than content that is poorly argued. In reporting, there is a relativism, or subjectivism, that cannot be avoided.

Perhaps the most problematic of all are the socio-psychological challenges associated with human nature. A neutral algorithm enforces group polarization by connecting people who agree on a topic. This is a natural outcome of a neutral algorithm, since people by their behavior confirm their liking of content they agree with. This leads to reinforcement, whereupon they are shown more of that type of content. The social effect is known as group polarization: an individual’s original opinion is strengthened by observing other individuals sharing that opinion. That is why so much discussion in social media is polarized: there is a well-known tendency of human nature not to remain objective but to take a stance in one group against another.

How can we curb this effect? A couple of solutions readily come to mind.

1. Rotating opposing views. If in a neutral system you are shown 90% content that confirms your beliefs, rotation should force you to see more than the remaining 10% of alternative content (say, 25%). Technically, this would require that “opinion archetypes” can be classified and contrasted with one another. Machine learning to the rescue?

The power of rotation comes from the idea that it simulates social behavior: the more a person is exposed to subjects that initially seem strange and unlikeable, the more likely those subjects are to become understood (the same mechanism, working in reverse, underlies xenophobia). A greater degree of awareness and understanding leads to higher acceptance. In the real world, people who frequently meet people from other cultures are more likely to accept other cultures in general.

Therefore, the same logic could be applied by Facebook in forcing us to see well-argued counter-evidence to our beliefs. It is crucial that the counter-evidence is well argued, or else there is a strong risk of reactance — people rejecting the opposing view even more. Unfortunately, this is a feature of the uneducated mind: not being able to change one’s opinions but remaining fixated on one’s beliefs. So the method is not foolproof, but it is better than what we have now.
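The mechanics of rotation can be sketched in a few lines: cap confirming content and fill the rest of the feed with opposing content. The 25% quota is the illustrative figure from above, not a tested value, and a real feed ranker would obviously be far more involved:

```python
# Sketch of the rotation idea: guarantee a minimum share of opposing
# content in the feed. The 25% quota is illustrative, not a tested value.

def rotate_feed(confirming, opposing, size=20, opposing_share=0.25):
    """Build a feed in which at least `opposing_share` of items oppose
    the user's estimated stance (assuming enough such items exist)."""
    n_opposing = min(len(opposing), round(size * opposing_share))
    n_confirming = min(len(confirming), size - n_opposing)
    return confirming[:n_confirming] + opposing[:n_opposing]

feed = rotate_feed([f"c{i}" for i in range(30)], [f"o{i}" for i in range(30)])
```

With a feed of 20 items, exactly 5 slots go to well-argued opposing content — the classification problems above (stance mapping, argument quality) are what make filling those 5 slots hard, not the quota itself.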

2. Automatic fact-checking. Imagine a social network telling you, “This content might contain false information.” Caution signals may curb the willingness to accept just any information. In fact, it may be more efficient to show misinformation tagged as unreliable than to hide it — in the former case, individuals have the possibility to correct their false beliefs.

3. Research in sociology. I am not educated enough to know the general solutions to group polarization, groupthink, and other associated social problems. But I know sociologists have worked on them, and this research should be put to use in collaboration with the engineers who design the algorithms.

However, the root causes of the dissemination of misinformation, whether purposefully harmful or due to ignorance, do not lie in technology. They are human-based problems and must have human-based solutions.

What are these root causes? Lack of education. Poor quality of educational system. Lack of willingness to study a topic before forming an opinion (i.e., lazy mind). Lack of source/media criticism. Confirmation bias. Groupthink. Group polarization.

Ultimately, these are the root causes of why some content that should not spread, spreads. They are social and psychological traits of human beings, which cannot be altered via algorithmic solutions. However, algorithms can direct behavior into more positive outcomes, or at least avoid the most harmful extremes – if the aforementioned classification problems can be solved.

The other part of the equation is education — kids need to be taught from early on about media and source criticism, logical argumentation, argumentation skills, and respect for the other party in a debate. Indeed, respect and sympathy go a long way — in the current atmosphere of online debating, it seems many have forgotten basic manners.

In the online environment, provocations are easy and escalate more readily than in face-to-face encounters. It is neither right to make fun of ignorant people – a habit of the so-called intellectuals – nor to ignore science and facts – a habit of the so-called ignorant.

It is also unfortunate that many of the topics people debate can be traced back to values and worldviews rather than more objective matters. When values and worldviews are fundamentally different among participants, it is truly hard to find a middle way. It takes a lot of effort and character to put yourself in the opposing party’s shoes, much more so than point-blank rejecting their view. It takes even more strength to change your opinion once you discover it was wrong.

Conclusion and discussion. Avoiding media bias is an essential advantage of social networks in information dissemination. I repeat: it’s a tremendous advantage. People are able to disseminate information and opinions without being controlled by mass-media outlets. At the same time, neutrality imposes new challenges. The most prominent question is to what extent the network should govern its content.

On one hand, user behavior is driving Facebook towards being an information-sharing network – people are seemingly sharing more and more news content and less about their own lives – but Facebook wants to remain a social network, and therefore reduces neutrality in favor of personal content. What are the strategic implications? Will users be happier? Is it right to deviate from algorithm neutrality when you have dominant power over information flow?

Facebook is approaching a sort of information monopoly when it comes to discovery (Google is the monopoly in information search), and I’d say it’s the most powerful global information-dissemination medium today. That power comes with responsibility and ethical questions, hence the algorithm-neutrality discussion. The strategic question for Facebook is whether it makes sense to manipulate the natural information flow that user behavior produces in a neutral system. The question for society is whether Facebook news feeds should be regulated.

I am not advocating more regulation, since regulation is never a creative solution to any problem, nor does it tend to be informed by science. I advocate collaboration between sociologists and social networks to identify the best means of filtering harmful misinformation and curbing the well-known negative social tendencies we humans possess. For sure, this can be done without endangering the free flow of information – the best part of social networks.

A New Paradigm for Advertising

From its high point, the sheep can see far.


In Finland, and maybe elsewhere in the world as well, media agencies used to reside inside advertising agencies back in the 1970s–80s. Then they were separated from one another in the 1990s, so that advertising agencies do creative planning and media agencies buy ad space in the media. Along with this process, heavy international integration took place, and currently both the media and advertising agency markets are dominated by a handful of global players, such as Ogilvy, Dentsu, Havas, WPP, etc.

This article discusses that change and argues for re-convergence of media and advertising agencies. I call this the new paradigm (paradigm = a dominant mindset and way of doing things).

The old paradigm

The current advertising paradigm consists of two features:

1) Advertising = creative + media
2) Creative planning –> media buying –> campaigning

In this paradigm, advertising is seen as a rigid, inflexible, one-off game where you create one advertising concept and run it regardless of customer response. You are making one sizable bet, and that’s it. To reduce the risk of failure, creative agencies spend tons of time “making sure they get it right”. Sometimes they use advertising pre-testing, but the process is predominantly driven by intuition, or black-box creativity.

Overall, that is an old-fashioned paradigm, which is why I believe we need a new one.

Towards the new paradigm

The new advertising paradigm looks like this:

1) Advertising = creative + media + optimization
2) Creative planning –> media trials –> creative planning –> …

In it, advertising is seen as a fluid, flexible, consecutive game where you have many trials in which to succeed. The creative process feeds on consumer response, and in turn media buying is adjusted based on the results of each unique creative concept.

So what is the difference?

In the old paradigm, we would spend three months planning and create one “killer concept” which according to our intuition/experience is what people want to see. In the new paradigm, we spend five minutes to create a dozen concepts and let customers (data) tell us what people want to see. Essentially, we relinquish the idea that it is possible to produce a “perfect ad”, in particular without customer feedback, and instead rely on a method that gets us closer to perfection, albeit never reaching it.

The new paradigm is manifested in a continuous, iterative cycle. Campaigns never end but are infinite — as we learn more about customers, budget spend may increase as a function of time, but essentially optimization is never done. The campaign has no end, unlike in the old paradigm, where people would stop marketing a product even if demand for it had not disappeared.

You might notice that the new paradigm is not compatible with old-fashioned “shark fin” marketing; it views marketing as continuous optimization. In fact, the concept of a campaign is replaced by the concept of optimization.

Let me elaborate on this thought. Look at the picture (source: Jesper Åström) – it illustrates the problem of campaign-based (shark-fin) marketing. You put in money, but as soon as you stop investing, your popularity drops.

Now consider an alternative, where you constantly invest in marketing and not in heavy spikes (campaigns) but gradually by altering your message and targeting (optimization). You get results more like this:

Although seasonality, a natural consequence of the business cycle, does not fade away, the baseline results increase over time.

Instead of being fixed, budget allocations live according to the seasonal business cycles — perhaps anticipating the demand fluctuations. The timing should also consider the carryover effect.
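The shark-fin versus continuous contrast can be simulated with a toy model — all numbers are assumed for illustration. Results decay each period (a crude carryover), and the same total budget is spent either in quarterly bursts or spread evenly:

```python
# Toy simulation (assumed numbers): "shark fin" campaign spikes vs
# continuous spend. Results decay each period; continuous spend keeps
# feeding the baseline instead of letting it collapse between campaigns.

def simulate(spend_plan, decay=0.7, response=1.0):
    """Each period: results = response * spend + decay * last period's results."""
    results, r = [], 0.0
    for s in spend_plan:
        r = response * s + decay * r
        results.append(r)
    return results

periods = 12
shark_fin = [120 if t % 4 == 0 else 0 for t in range(periods)]   # quarterly bursts
continuous = [30] * periods                                      # same total spend
```

With equal total spend (360 in both plans), the continuous plan’s worst period after warm-up stays well above the campaign plan’s troughs — the “baseline” the picture shows rising instead of repeatedly collapsing.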


I suspect media agencies and advertising agencies will converge once again, or at least the media-buying and creative-planning functions will reside in the same organization. This is already how many young digital marketing agencies have operated since their birth. Designers and optimizers (ad buyers) work side by side, the former receiving guidance from the latter on what types of creative concepts work – not based on intuition, as old-paradigm Art Directors (ADs) would give it, but based on real-time customer response.

Most importantly, tearing down silos will benefit the clients. Doing creative work and optimization in tandem is a natural way of working — the creative concept should no longer be detached from reality, and we should not think of advertising work as a factory line where ads move from one production line to another, but instead as some sort of co-creation through which we are able to mitigate advertising waste and produce better results for advertising clients.

Why social advertising beats display advertising


I’ve long been skeptical of display advertising. At least my students know this, since every year I start the digital marketing course by giving a lecture on why display sucks (and why inbound/search-engine marketing performs much better).

But this post is not about the many pitfalls of display. Rather, it’s outlining three arguments as to why I nowadays prefer social advertising, epitomized by Facebook Ads, over display advertising. Without further ado, here are the reasons why social rocks at the moment.

1. Quality of contacts

It’s commonly known that Facebook advertising is cheap compared to many advertising channels, measured by CPM or cost per individual reached. Display can be even cheaper, so isn’t that better? No, absolutely not. Reach and impressions are completely fallacious metrics — their business value approaches zero. Orders of magnitude more important is the quality of contacts.

The quality of Facebook traffic, judged by post-click behavior, tends to be better than the quality of display traffic. Even when media companies speak of “premium inventory”, the results are weak. People just don’t like banner ads. Those who click them, if they are people and not bots to begin with, often exit the site instantly without clicking further.

2. Social interaction

People actually interact with social ads. They pose questions, like them, and even share them with their friends. Share advertisements? OMG, but they really do. That represents a tremendous opportunity for a brand to interact with its customer base and systematically gather feedback and customer insight. This is simply not possible with any other form of advertising, display included.

Display ads, even with rich-media executions, are completely static and dead when it comes to social interaction. Whereas social advertising creates an opportunity to gather social proof and actual word of mouth, even viral diffusion, in one and the same advertising platform, display advertising completely lacks the social dimension.

3. Better ad formats

Social advertising, specifically on Facebook, gives great flexibility in combining text, images, and video. Typically, a banner ad can only fit a brief slogan (“Just do it.”), whereas a social advertisement can include many sentences of text, a compelling picture, and even a link description that together give advertisers the ability to communicate the whole story of the company or its offering in one advertisement.

But isn’t that boring? No, you can craft it in a compelling way – the huge advantage is that people don’t even need to click to learn the essentials. If the goal of advertising is to inform about offerings, social advertising is among the most efficient ways to actually do it.


That’s it. I don’t see a way for display advertising to overcome these advantages of social advertising. Notice that I didn’t mention the superior targeting criteria — this is because display is quickly catching up to Facebook in that respect. It just won’t be enough.

Programmatic advertising: Red herring effect


Currently, there is very strong hype around programmatic buying. Corporations are increasing their investments in programmatic advertising, and publishers are developing their own technologies to provide better targeting information to demand-side platforms.

But all is not well in the kingdom. Display advertising still faces fundamental problems which are, in my opinion, more critical to advertising performance than more granular targeting.

Problems of display advertising

In particular, there are four major problems:

  • banner blindness
  • ad blocking
  • ad clutter
  • post-click behavior

Banner blindness is a classical problem: banner ads are not cognitively processed but are left either consciously or unconsciously unprocessed by the people exposed to them (Benway & Lane, 1998). This is a form of automatic behavior: ignoring ads and focusing on the primary task, i.e., processing website content. Various solutions have been proposed in the industry, including native advertising, which “blends in” with the content, and counting only viewable impressions, which would guarantee that people actually see the banner ads they are exposed to. The problem with the former is that it confounds sponsored and organic content, while the problem with the latter is that seeing is not equivalent to processing (hence banner blindness).

Ad blocking has been on a tremendous rise lately (Priori Data, 2016). Consumers across the world are rejecting ads, on both desktop and mobile. Partly, this is related to ad clutter, i.e., a high ads-to-content ratio on websites. The proliferation of ad blocking should be interpreted as an alarming signal by media houses. Instead, many of them seem to take no notice, keeping their website layouts unchanged and their ads-to-content ratios high. Without major improvements in user satisfaction (reducing the ads-to-content ratio, demanding higher-quality ads from advertisers, and changing website layouts), ad blocking is likely to continue despite the pleas of publishers. Less advertising, of better quality, is needed to trigger positive sentiment towards online advertising.

Finally, the post-click behavior of traffic originating from display ads tends to be unsatisfactory. Bounce rates are exceptionally high (80–90% in some cases), direct ROI is orders of magnitude lower than search, and, alarmingly, display often seems weak even when examining the entire conversion path. Consequently, using direct ROI as a measure of success in display advertising yields sub-par results. Unfortunately, direct ROI is used more and more by performance-oriented advertisers.

Brand advertisers who seek no direct returns on their online ad spend (think Coca-Cola) may continue using reach metrics. Thus, focusing on these advertisers, who still make up a large share of the advertising market, would seem like a good strategy for publishers. Moreover, combating click fraud and other forms of invalid clicks is essential. By shortsightedly optimizing for revenue at all costs – including allowing bots to participate in RTB auctions – media houses and DSPs are shooting themselves in the foot.

Root causes

But let’s talk about why these problems have not been addressed, at least not fundamentally by the majority of media companies. There are a few reasons for that.

First, the organizational incentives are geared towards sales. These companies follow a media business model, which principally means: the more ads you sell, the better. This equation does not consider user satisfaction or the quality of the ads you’re showing, only their number and the revenue originating from them.

At a more abstract level, the media houses face an optimization conundrum:

  • MAX number of ads
  • MAX price of ads
  • MAX ad revenue
  • (MAX ad performance)
  • (MAX user satisfaction)

Maximizing the number of ads (shown on the website) and the price of ads also maximizes ad revenue. However, it does not maximize user satisfaction. User satisfaction and performance are in parentheses because they are not considered in the media company’s optimization function, although they should be: there is a feedback mechanism from user satisfaction to ad performance, and from ad performance to the price of ads.

Seemingly, many media companies are maximizing short-term revenue through a power-selling strategy. However, they should be maximizing long-term revenue, and that cannot happen without considering user satisfaction from the consumer’s perspective and ad performance from the advertiser’s perspective. Power selling actually hurts their interests in the long term through the feedback mechanism.
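That feedback mechanism can be put into a toy model — all coefficients here are assumptions for illustration, not measured values. More ads per page lowers satisfaction, performance follows satisfaction, and the achievable price follows performance:

```python
# Toy model (all coefficients assumed) of the feedback loop:
# more ads per page -> lower user satisfaction -> lower ad performance
# -> lower achievable price per ad. Power selling ignores this loop.

def revenue(n_ads, base_price=10.0):
    satisfaction = max(0.0, 1.0 - 0.1 * n_ads)   # satisfaction drops with ad load
    performance = satisfaction                    # performance follows satisfaction
    price = base_price * performance              # advertisers pay for performance
    return n_ads * price

best = max(range(1, 11), key=revenue)
```

In this toy setup revenue peaks at five ads per page, while cramming in ten earns nothing — a caricature, but it shows why maximizing the number of ads is not the same as maximizing revenue once the feedback is priced in.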

Finding solutions

How to dismantle this conundrum? First, media companies should obviously consider both user satisfaction and ad performance. The former is done by actively studying users’ satisfaction with their ad exposure. The latter is done by actively asking for, or otherwise acquiring, campaign-performance data from advertisers. I, as a marketing manager, rarely found media salespeople interested in my campaign performance – they just wanted a quick sale. Even better than asking would be to find a way to tap directly into campaign performance, e.g., by gaining access to the advertiser’s analytics.

Second, media companies should consider the dynamics between the variables they are working with. For example,

  • ad performance (as a dependent variable) and number of ads (as an independent variable)
  • ad performance and user satisfaction
  • user satisfaction and number of ads
  • price of ads and ad performance

It can be hypothesized, for example, that higher ad performance leads to a higher price of ads, as ads become more valuable to advertisers. If, in addition, ad performance increases as the number of ads decreases, there is a clear signal to decrease the number of ads on the website. Some of these hypotheses can be tested through controlled experiments.
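One such controlled experiment could be analyzed with a standard two-proportion z-test on click-through rates. A sketch with invented data (control: six ads per page, treatment: three ads per page; the click and impression counts are hypothetical):

```python
import math

# Sketch of analyzing one hypothesis above: does lowering the number of
# ads per page raise ad performance (CTR)? All numbers are illustrative.

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)              # pooled CTR
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))    # pooled standard error
    return (p_a - p_b) / se

# Treatment (3 ads/page): 150 clicks on 50,000 impressions.
# Control (6 ads/page): 100 clicks on 50,000 impressions.
z = two_proportion_z(clicks_a=150, n_a=50000, clicks_b=100, n_b=50000)
```

With these made-up numbers, z is above 1.96, i.e. the CTR difference would be significant at the 5% level — a signal, in this hypothetical, to run fewer ads.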

Third, media companies should re-align incentives from power selling to value-based selling. They should not want to “fill the slots” at any cost, but only fill them with good advertising that performs well for the advertiser. Achieving such a goal may require stronger collaboration with advertisers, including sharing data with them and intervening in their production processes to deliver advertising that does not annoy end users and, based on prior data, is likely to perform well.


In conclusion, there is a bottleneck at the advertising–customer interface. A red-herring effect takes place when we focus on a secondary issue: in the context of digital marketing, we have to acknowledge that there is no intrinsic value in impressions or programmatic advertising technology if the baseline results remain low. Ultimately, advertisers face an abundance of choice among channels, both online and offline. And although they are momentarily pushing for large programmatic investments, if the results don’t follow, they are likely to shift budget allocations into a different sort of equilibrium in the long run, once again under-weighting display advertising.

Personally, I believe the media industry is too slow to react, and that display advertising will lose budget share in the coming years, especially against social media and search advertising, but also against some traditional channels such as television.

Why do I love startups?

I’ve dedicated plenty of time to studying and coaching startups. But why do I care? Not only care, but feel passionate about them – enough to say I love startups.

I got to thinking about that, and here are the results of that quick reflection.

1. Startups are about technology

Novelty, innovation, progress… call it what you want, but there is something endlessly exciting about startups: watching them take something that exists and turn it into something completely new. This sounds melodramatic, but at its core it is close to creation, close to being a god. I don’t mean to blaspheme, but to honor the impact startups have on people’s lives. Of course there is a lot of hype and failure associated with this progress, because the process of creation is not linear, but the renewal of daily life is nevertheless driven to a great extent by startup innovations. We can see them all around us, never ceasing to be amazed at what we humans can accomplish.

2. Startups have rebel spirit

They are the antithesis of corporations. As much as I love startups, I hate (bureaucratic) corporations. Startups are about freedom, creativity and independence – and about the power to execute. Some other small organizations share these traits, which is why working with the small tends to be easier than working with the big. Even big companies create innovative things from time to time, but I’ve seen plenty of cases where new ideas are strangled to death by internal politics. Many large organizations don’t truly want to change – they just pay lip service to change and to new management fads. They also don’t need to change, because in the short term the world actually remains quite stable. Change in any given industry does not come overnight, which gives corporations plenty of time to adapt (i.e., they can hire and fire many CEOs until they have gradually shifted their focus to something that works).

The rebel spirit of startups can be seen in their desire to take on the world and solve big challenges (not only create vanity apps), and in their relentless execution and elimination of waste. Indeed, there’s a small optimization maniac inside me who loves startups because they aim for the optimal use of resources – the economic ideal. And they have to operate under strict scarcity, which fosters innovation – it is a far more exciting challenge to solve a major problem while facing resource constraints. You wouldn’t believe they could pull it off, but history shows otherwise.

3. The people and culture are amazing

Anyone who has been bitten by the startup bug knows what I’m talking about. It’s energetic, young people who want to change the world for the better. Who wouldn’t get excited about that? As a side note, it actually has little to do with age; I’ve seen many mature people get excited about startups as well – it’s more about mentality than age, gender or any other demographic factor. The love for startups is universal – you can see that, for example, in the rapid diffusion of student-run entrepreneurship societies around the world. Startup people care about their surroundings, want to make a change, and are extremely helpful to one another. Again, this is the antithesis of “normal business”, where the dominant paradigms are rivalry and secrecy.

Startups openly share their ideas, invite new people to join them, and are geared more towards collaboration than strategic thinking and self-interest. Sometimes, coming from a business background, I even think they are too nice (!) and neglect profit-seeking to their own demise (this shows, e.g., in the monetization dilemma I examined in my dissertation). However, it’s part of the startup magic, at least in the early stage: purely commercial motives would undoubtedly destroy some of the appeal. Ultimately, it’s the people of diverse backgrounds – IT, engineering, art, business, marketing, corporations – that make the startup scene such an interesting place to be.

4. Startups are never done

This relates to the first point about innovation. Joseph Schumpeter, the famous economist, coined the idea of creative destruction, which startups almost perfectly embody. When interacting with startups, you can see the world is never finished. The turnover of new companies coming and going, making small, medium and large impacts on their surroundings, is baffling. It’s analogous to the research community, where scholars stand on the shoulders of those who came before them and strive to make contributions, even small ones, to the body of knowledge in their disciplines. Startups aim to make a contribution to society, and are never finished at it.

In conclusion, startups are a fascinating topic to study and a fascinating group to interact with. They are endlessly inspiring and embody the spirit of progress in people’s daily lives. Startup people are a special breed who willingly share their ideas and experiences to elevate one another.

Customers as a source of information: 4 risks


This post is based on Dr. Elina Jaakkola’s presentation “What is co-creation?” on 19th August, 2014. I will elaborate on some of the points she made in that presentation.

Customer research, a sub-form of market research, serves the purpose of acquiring customer insight. Often, when pursuing information from consumers, companies use surveys. Surveys, and the use of customers as a source of information in general, have some particular problems, discussed in the following.

1. Hidden needs

Customers have latent or hidden needs that they do not express, perhaps for social reasons (awkwardness) or because they do not know what is technically possible (unawareness). If one does not specifically ask about a need, it is easily left unmentioned, even if it is of great importance to the customer. This problem is not easily solved, since even the company may not be aware of all the possibilities in the technological space. However, if the purpose is to learn about customers, a capability for immersion and empathy is needed.

2. Reporting bias

What customers report they would do is not equivalent to their true behavior. They might say one thing and do something entirely different. In research, this is commonly known as reporting bias, and it is a major problem when carrying out surveys. The general solution is to ask about past, not future, behavior – although even this approach is subject to recall bias.

3. Interpretation problem

Consumers answering surveys can misinterpret the questions, and analysts can in turn misinterpret their answers. It is difficult to vividly present hypothetical products and scenarios to consumers, and therefore the answers one receives may not be accurate. A general solution is to avoid ambiguity in the framing of questions, so that everything is clear and commonly understood by both the respondent and the analyst (shared meanings).

4. Loud minority

This is a case where a minority, by being more vocal, creates a false impression of the needs of the whole population. This effect easily takes place in social media, for example. A general rule of thumb is that only 1% of the members of a community actively participate in a given discussion, while the other 99% merely observe. The loudest consumers easily get their opinions heard, but these may not represent the needs of the silent majority. One solution is stratification, where one distinguishes different groups from one another so as to form a more balanced view of the population; this works when there is adequate participation within each stratum. Another alternative would be to actively seek out non-vocal customers.


Generally, the problems mentioned relate to stated preferences. When we use customers as a source of information, all kinds of biases emerge. That is why behavioral data, which does not depend on what customers say, is a more reliable source of information. Thankfully, in digital environments behavioral data can be obtained with much more ease than in analogue environments. Its problems emerge from representativeness on the one hand and, on the other, from fitting it to other forms of data so as to gain a more granular understanding of the customer base.

Online ads: Forget tech, invest in creativity

Technology is not a long-lasting competitive advantage in SEM or other digital marketing – creativity is.

This brief post is inspired by an article I read about different bid management platforms:

“We combine data science to SEM, so you can target based on device, hour of day and NASDAQ development.”

Yeah… but why would you do that? Spend your time thinking of creative concepts that work in general, not only when NASDAQ is down by 10%. Just because something is technically possible doesn’t make it useful. Many technocratic and inexperienced marketing executives still get lured by the “silver bullet” effect of ad technology. And even when you do consider outside events such as NASDAQ movements, newsjacking is a far superior marketing solution to automation.

Commoditization of ad technology

In the end, platforms give all contestants a level playing field. For example, Google’s system considers CTR in determining cost and reach. Many advertisers obsess about their settings, bids and other technical parameters, and ignore the most important part: the message. Perhaps that is because the message is the hardest part: increasing or decreasing one’s bid is a simple decision given the data, but how do you create a stellar creative? That is a more complex, yet more important, problem.
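To see why the message matters more than the bid, consider a simplified, quality-adjusted auction sketch. This is not Google's actual algorithm – just the commonly described principle of ranking ads by bid × estimated CTR, with all names and numbers below invented for illustration.

```python
# Simplified sketch of a quality-adjusted ad auction: ads are ranked by
# bid x estimated CTR, so a better message (higher CTR) can beat a higher bid.
ads = [
    {"name": "high_bid_dull_ad",   "bid": 2.00, "ctr": 0.005},
    {"name": "low_bid_great_ad",   "bid": 1.00, "ctr": 0.020},
    {"name": "mid_bid_average_ad", "bid": 1.50, "ctr": 0.008},
]

# Sort by expected value per impression, highest first.
ranked = sorted(ads, key=lambda a: a["bid"] * a["ctr"], reverse=True)
for ad in ranked:
    print(ad["name"], round(ad["bid"] * ad["ctr"], 4))
```

In this toy example the ad with half the bid but four times the CTR wins the top slot – doubling your bid is a weaker lever than doubling your creative's appeal.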

Seeing people as numbers, not as people

The root cause might be that the worldview of some digital marketers is twisted. Consumers are seen as a kind of cattle – aggregate numbers that only need to be fed ad impressions, after which positive results magically emerge. This worldview is false. People are not stupid – they will not click (or even look at) just any ads, especially in this day and age of ad clutter. The notion that you could be successful just by adopting a “bid management platform” is foolish. Nowadays, every impression that counts needs to be earned. And while a bid management platform may help you get a 1% boost to your ROI, focusing on the message is likely to bring a much higher increase. Because ad performance is about people, not technology.


As the industry matures and advertisers master the basic technological know-how, technology plays less and less of a role. At that point of saturation, marketing technology investments begin to decline and companies shift back to basics: competing on creativity.

Basic formulas for digital media planning

Planning makes people happy.


Media planning, or campaign planning in general, requires you to set goal metrics so that you are able to communicate the expected results to a client. In digital marketing, these are metrics like clicks, impressions and costs. The actual planning process usually involves using estimates – that is, sophisticated guesses of a sort. These estimates may be based on your previous experience, on planned goal targets (when, for example, you are given a specific business goal such as a sales increase), or on industry averages (if those are known).

Calculating online media plan metrics

By knowing or estimating some goal metrics, you are able to calculate others. But sometimes it’s hard to remember the formulas. This is a handy list to remind you of the key formulas.

  • ctr = clicks / imp
  • clicks = imp * ctr
  • imp = clicks / ctr
  • cpc = cost / clicks
  • cost = cpc * clicks
  • cpm = cost / (imp / 1000)
  • cost = cpm * (imp / 1000)
  • cpa = cpc / cvr
  • cpa = cost / conversions
  • cost = cpa * conversions
  • conversions = cost / cpa

In general, metrics relating to impressions are used as proxies for awareness and brand-related goals. Metrics relating to clicks reflect engagement, while conversions indicate behavior. Oftentimes, I estimate CTR, CVR and CPC because 1) it’s good to set a starting goal for these metrics, and 2) they exhibit some regularity (e.g., ecommerce conversion rates tend to fall between 1% and 2%).
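To illustrate how the formulas chain together, here is a minimal sketch: starting from a handful of estimates (budget, CPC, CTR, CVR – all numbers below are hypothetical), the remaining plan metrics follow mechanically.

```python
# Hypothetical plan inputs: total budget and three estimated rates.
cost = 10_000.0   # total budget in euros
cpc = 0.50        # estimated cost per click
ctr = 0.01        # estimated click-through rate (1%)
cvr = 0.015       # estimated conversion rate (1.5%)

clicks = cost / cpc                  # 20,000 clicks
impressions = clicks / ctr           # 2,000,000 impressions
cpm = cost / (impressions / 1000)    # 5.00 euros per thousand impressions
conversions = clicks * cvr           # 300 conversions
cpa = cpc / cvr                      # ~33.33 euros per acquisition

# Sanity check: both CPA formulas from the list above agree.
assert abs(cpa - cost / conversions) < 1e-6

print(f"{clicks:.0f} clicks, {impressions:.0f} imps, "
      f"CPM {cpm:.2f}, {conversions:.0f} conv, CPA {cpa:.2f}")
```

Swapping in your own estimates for the four inputs gives a full draft plan in seconds, which also makes it easy to show a client how sensitive the outcome is to, say, a CVR of 1% versus 2%.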


You don’t have to know everything to devise a sound digital media plan. A few goal metrics are enough to calculate all the others. The more realistic your estimates, the better – and worry not, accuracy improves with time. In the beginning, it is best to start with moderate estimates you feel comfortable achieving, or even outperforming. It’s always better to under-promise than to under-deliver. Finally, the achieved metric values differ by channel – sometimes a lot – so take that into consideration when crafting your media plan.