
Tag: digital marketing

Meaningless marketing

I’d say 70% of marketing campaigns have little to no real effect. Most certainly they don’t have a positive return in hard currency.

Yet most marketers spend their time running around, planning all sorts of campaigns and competitions people couldn’t care less about. They are professional producers of spam, when in fact they should be focusing on the core of the business: understanding why customers buy, how they could buy more, what sort of products we should make, how the business model can be improved, and so on. The wider concept of marketing deals with navigating the current and future market; it is not about making people buy stuff they don’t need.

To a great extent, I blame marketing education. In academia, we don’t really get the real concept of marketing into our students’ minds. Even students majoring in marketing don’t truly “get” that marketing is not the same as advertising; too often, they have a narrow understanding of it and are then easily molded into the perverse industry standards, ending up in the purgatory of meaningless campaigns while convincing themselves they’re doing something of real value.

But marketing is not about campaigns, and it sure as hell is not about “creating Facebook competitions”. Rather, marketing is a process of continuous improvement of the business. Yes, this includes campaigns, because business cycles in many industries follow seasonal patterns and we need to communicate outwards. But marketing has so much more to give to strategy, if only marketers would stop wasting their time and instead focus on the essential.

Now, what I wrote here is only based on anecdotal evidence arising from personal observations. It would be interesting, and indeed of great importance, to find out if it’s correct that most marketers are wasting their time on petty campaigns instead of the big picture. This could be done for example by conducting a study that answers the questions:

  1. What do marketers do with their time?
  2. How does that contribute to the bottom line?
  3. Why? (That is, what real value is created for a) the customer and b) the organization?)
  4. How is the value being measured and defended inside the organization?

If nothing else, every marketer should ask themselves those questions.

Facebook Ads: remember data breakdowns

Here’s a small case study.

We observed seemingly irrational behavior from Facebook ads. We have two ad versions running, but the one with the lower CTR gets a better relevance score and a lower CPC.

This seems like an irrational outcome because, to my understanding, CTR as a measure of relevance should be the largest factor influencing CPC and relevance score.

Figure 1  Aggregate data

So we dug a little deeper and did a breakdown of the data. It turns out the ad version with the lower aggregate CTR performs better on mobile. Apparently, mobile performance carries extra weight in the algorithm’s calculation.

Figure 2  Breakdown data

Lesson learned: always dig deeper to understand aggregate numbers. (If you’re interested in learning more about problems with aggregate data, look up “Simpson’s paradox”.)
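
To make the aggregation trap concrete, here is a minimal sketch in Python with made-up numbers (not our campaign data): one ad wins on every device breakdown yet loses on aggregate CTR, which is exactly Simpson’s paradox.

    # Made-up numbers: Ad B beats Ad A on both desktop and mobile, yet Ad A has
    # the higher aggregate CTR because its traffic skews to desktop, where CTRs
    # are higher overall (Simpson's paradox).

    data = {
        "Ad A": {"desktop": (90_000, 2_700), "mobile": (10_000, 150)},   # (impressions, clicks)
        "Ad B": {"desktop": (20_000, 640), "mobile": (80_000, 1_360)},
    }

    def ctr(impressions, clicks):
        return clicks / impressions

    for ad, devices in data.items():
        total_impressions = sum(imp for imp, _ in devices.values())
        total_clicks = sum(clk for _, clk in devices.values())
        print(ad, "aggregate CTR: {:.2%}".format(ctr(total_impressions, total_clicks)))
        for device, (imp, clk) in devices.items():
            print("  ", device, "CTR: {:.2%}".format(ctr(imp, clk)))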

What is a “neutral algorithm”?

1. Introduction

Earlier today, I had a brief exchange of tweets with @jonathanstray about algorithms.

It started from his tweet:

Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.

To which I replied:

Yes, and that’s why learning is not the way to go. “Fair” should not be goal, is inherently subjective. “Objective” is better

Then he wrote:

lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.

And I wrote:

True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.

And he answered:

what does “neutral” mean though?

After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.

2. Definition

So, what is a neutral algorithm? I would define it like this:

“A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.” [1]

An example of a neutral algorithm is a standard ad optimization algorithm: it gets to decide whether to show Ad1, Ad2, or Ad3. Instead of asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).

The treatment that all ads (read: content, users) get is fair – they are diffused based on their merits (measured objectively by an unambiguous metric), not based on favoritism of any sort.
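
As a rough illustration of such merit-based selection, here is a minimal epsilon-greedy sketch in Python; the bandit approach and the click probabilities are my own illustrative assumptions, not a description of any particular platform’s algorithm.

    import random

    # A minimal epsilon-greedy ad selector: ads are chosen on observed CTR alone,
    # with a small exploration rate so every ad keeps getting a chance.
    # Illustrative only; real ad platforms use far more elaborate models.

    class NeutralAdSelector:
        def __init__(self, ad_ids, epsilon=0.1):
            self.epsilon = epsilon
            self.stats = {ad: {"impressions": 0, "clicks": 0} for ad in ad_ids}

        def ctr(self, ad):
            s = self.stats[ad]
            return s["clicks"] / s["impressions"] if s["impressions"] else 0.0

        def choose(self):
            if random.random() < self.epsilon:
                return random.choice(list(self.stats))   # explore
            return max(self.stats, key=self.ctr)         # exploit the best observed CTR

        def record(self, ad, clicked):
            self.stats[ad]["impressions"] += 1
            self.stats[ad]["clicks"] += int(clicked)

    # Simulated usage with assumed click probabilities per ad:
    selector = NeutralAdSelector(["Ad1", "Ad2", "Ad3"])
    true_ctr = {"Ad1": 0.02, "Ad2": 0.05, "Ad3": 0.03}
    for _ in range(10_000):
        ad = selector.choose()
        selector.record(ad, clicked=random.random() < true_ctr[ad])
    print({ad: round(selector.ctr(ad), 3) for ad in selector.stats})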

3. Foundations

The roots of algorithm neutrality stem from freedom of speech and net neutrality [2]. No outsiders can impose their values and opinions (e.g., censoring politically sensitive content) and interfere with the operating principles of the algorithm. Instead of being influenced by external manipulation, the decision making of the algorithm is as value-free (neutral) as possible. For example, in the case of social media, it chooses to display information which accurately reflects the sentiment and opinions of the people at a particular point in time.

4. Limitations

Now, I grant there are issues with “freedom”, some of which are considerable. For example, 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda).

Another limitation is legislation – illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context, but given that the laws themselves are “fair”, this should pose no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech.

I wrote more about these issues here [3].

5. Conclusion

In spite of the aforementioned issues, with a neutral algorithm each media outlet/candidate/user has a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.

The rest is up to humans – educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display the state of ignorance/sophistication in a society. A good example is Microsoft’s infamous bot Tay [4], a machine-learning experiment turned bad. The alarming thing about the bot is not that “machines are evil”, but that *humans are evil*; the machine merely reflects that. Hence my original point of curbing human evilness by keeping algorithms free of human values as much as possible.

Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is separation of church and state, there should be separation of humans and algorithms to the greatest possible extent.

Notes

[1] Initially, I considered a definition that would say “not influenced”, but it is not safe to assume that the subjectivity of its creators would not in some way be reflected in the algorithm. “Minimally” instead leads to the normative argument that such subjectivity should be mitigated.

[2] Wikipedia (2016): “Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”

[3] Algorithm Neutrality and Bias: How Much Control? <https://www.linkedin.com/pulse/algorithm-neutrality-bias-how-much-control-joni-salminen>

[4] A part of the story is that Tay was trolled heavily and therefore assumed a derogatory way of speech.

Advertisers actively following “Opportunities” in Google AdWords risk bid wars

PPC bidding requires strategic thinking.

Introduction. Wow. I was doing some SEM optimization in Google AdWords when a thought struck me: advertisers actively following “Opportunities” in AdWords risk bid wars. Why is that? I’ll explain.

Opportunities or not? The “Opportunities” feature proposes bid increases for given keywords. For example, in Week 1, Advertiser A has the current bid b_a and is proposed a marginal increase m_a, so the new bid is e_a = b_a + m_a. During the same week, Advertiser B, in response to Advertiser A’s accepted bid increase, is recommended to maintain his current impression share by increasing his bid b_b to e_b = b_b + m_b. To restore the impression-share balance, Advertiser A is then proposed yet another marginal increase in the following optimization period (say the optimization cycle is a week, so next week), et cetera.

If we turn m into a multiplier, the bid after c optimization cycles becomes b_a(c) = b_a * m_a^c. Let’s say AdWords recommends a 15% bid increase at each cycle (m_a = 1.15; e.g., $0.20 -> $0.23 in the 1st cycle); then after five cycles the keyword bid has roughly doubled compared to the baseline (illustrated in the picture).

Figure 1   Compounding bid increases
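
A quick sketch of the compounding, using the 15% recommendation rate from the example above:

    # Compounding effect of repeatedly accepted bid recommendations: a 15%
    # increase per optimization cycle roughly doubles the bid in five cycles.

    baseline_bid = 0.20   # starting CPC bid in dollars
    multiplier = 1.15     # accepted +15% recommendation per cycle

    for cycle in range(6):
        bid = baseline_bid * multiplier ** cycle
        print(f"Cycle {cycle}: bid = ${bid:.2f}")
    # Cycle 1 prints $0.23 and cycle 5 prints about $0.40, i.e. roughly 2x the baseline.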

Alluring simplicity. Bid wars were always a possible scenario in PPC advertising – however, the real issue here is simplicity. The improved “Opportunities” feature gives much better recommendations to advertisers than the earlier version, which increases its usage and more easily leads to “lightly made” acceptance of bid increases that Google can present as likely to maintain a bidder’s current competitive position. From auction psychology we know that bidders have a tendency to overbid when put under competitive pressure, and that’s exactly where Google is putting them.

It’s rational, too. I think more aggressive bidding can easily take place under the increasing usage of “Opportunities”. Basically, the baselines shift at the end of each optimization cycle. The mutual increase of bids (i.e., a bid war) is not only a potential outcome of careless bidding; in fact, increasing bids is rational as long as the keywords remain profitable. In either case, economic rents (i.e., excess profits) will be competed away.

Conclusion. Most likely, Google advertising will continue converging toward a perfect market, where it is harder and harder for individual advertisers to extract rents, especially in long-term competition. “Opportunities” is one way of making auctions more transparent and encouraging more aggressive bidding behavior. It would be interesting to examine whether careless bidding is associated with the use of “Opportunities” (i.e., the psychological aspect), and also whether Google shows more recommendations to increase bids than to decrease them (i.e., opportunistic recommendations).

Digital marketing in China: search-engine marketing (SEM) on Baidu

Introduction

China is an enormous market, amounting to 1.3 billion people and growing. Out of all the BRIC markets, China is the furthest in the adoption of technology and digital platforms, especially smartphones and applications.

Perhaps the best-known example of Chinese digital platforms in the West is Alibaba, the ecommerce giant with a market cap of over $200 billion. Through AliExpress, Western consumers can order Chinese products – but Western companies can also use the marketplace to sell their products to Chinese consumers. However, this blog post is about Baidu, the Chinese equivalent of Google.

About Baidu

Baidu was founded in 2000, almost at the same time as Google (which was founded in 1998). Google left China in 2010 amidst censorship issues, after which Baidu has solidified its position as the most popular search engine in China.

Most likely due to their similar origins, Baidu is much like Google. Its user interface and functionalities borrow heavily from Google, but Baidu also displays some information differently. An example of Baidu’s search-engine results page (SERP) can be seen below.

Figure 1   Example of Baidu’s SERP

Many Chinese use Baidu to search for entertainment rather than information, and Baidu’s search results page supports this behavior. In terms of search results, there is active censorship of sensitive topics, but that does not directly affect most Western companies interested in the Chinese market. Overall, to influence Chinese consumers, it is crucial to have a presence on Baidu – companies not visible on Baidu might not be considered esteemed brands by Chinese Internet users at all.

Facts about Baidu

I have collected here some interesting facts about Baidu:

  1. Baidu is the fourth most visited website in the world (Global Rank: 4), and number one in China [1]
  2. Over 6 billion daily searches [2]
  3. 657 million monthly mobile users (December 2015) [3]
  4. 95.9% of the Baidu visits were from mainland China. [4]
  5. Baidu’s share of the global search-engine market is 7.52% [5]
  6. Baidu offers over 100 services, including discussion forums, wiki (Baidu Baike), map service and social network [6]
  7. Most searched themes are film & TV, commodity supply & demand, education, games, and travel [7]

The proliferation of Internet users has tremendously influenced Baidu’s usage, as can be seen from the statistics.

How to do digital marketing in Baidu?

Baidu enables three types of digital marketing: 1) search-engine optimization (SEO), 2) search-engine advertising (PPC), and 3) display advertising. Let’s look at these choices.

First, Baidu has a habit of favoring its numerous own properties (such as Baidu News, Zhidao, etc.) over other organic results. Up to 80% of first-page results can be filled by Baidu’s own domains, so search-engine optimization on Baidu is challenging. Second, Baidu has a display network similar to GDN (Google Display Network), including some 600,000+ websites. As always, display networks need to be filtered for ad fraud by using whitelisting and blacklisting techniques (see the sketch below). After doing that, display advertising is recommended as an additional tactic to boost search advertising performance.
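
Here is a minimal sketch of the kind of placement filtering meant above; the site names and the three-way decision rule are hypothetical.

    # Minimal placement filtering for a display network: keep whitelisted sites,
    # drop blacklisted ones, and flag everything else for manual review.
    # The site names are hypothetical.

    whitelist = {"example-news.cn", "example-portal.cn"}
    blacklist = {"known-fraud-site.cn"}

    def classify_placements(placements):
        decisions = {}
        for site in placements:
            if site in blacklist:
                decisions[site] = "exclude"
            elif site in whitelist:
                decisions[site] = "include"
            else:
                decisions[site] = "review"
        return decisions

    print(classify_placements(["example-news.cn", "known-fraud-site.cn", "unknown-blog.cn"]))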

Indeed, the best way to reach Baidu users is search advertising. The performance of PPC usually exceeds other forms of digital marketing because ads are shown to the right people at the right time. Advertising on Baidu is common practice, and Baidu has more than 600,000 registered advertisers. Currently, advertisers are focusing especially on mobile users, where Baidu’s market share is up to 90% and where usage is growing the fastest [8].

How does Baidu advertising work?

For an advertiser, Baidu offers functionality similar to Google’s. Search-engine advertising, often called PPC (pay-per-click), is possible on Baidu. In this form of advertising, advertisers bid on keywords that represent users’ search queries. When a user makes a particular search, they are shown text ads from the companies with the winning bids. Companies are charged when their ad is clicked.

The following picture shows how ads are displayed on Baidu’s search results page.

Figure 2   Ads on Baidu

As you can see, ads are shown on top of the search results. Organic search results are placed after ads on the main column. On the right column, there is extra “rich” information, much like on Google. The text ads on Baidu’s SERP look like this:

Figure 3   Text ads on Baidu

Ad headlines can have up to 20 Chinese characters or 40 English characters, and the description text up to 100 Chinese characters or 200 English characters. There is also the possibility to use video and images in a prominent way. Below is an example of Mercedes-Benz’s presence in Baidu search results.


Figure 4   Example of a brand’s presence on Baidu

Using such formats is highly recommended for brand advertisers.

How to access Baidu advertising?

Baidu’s search advertising platform is called Phoenix Nest (百度推广). The tools to access accounts include a web interface and the Baidu PPC Editor (百度推广助手).

To start advertising on Baidu, you will need to create an account. For that, you need a Chinese-language website, and you must send Baidu a digital copy of a business registration certificate issued in your local country. You also need to make a deposit of 6,500 yuan, of which 1,500 is held by Baidu as a setup fee and the rest is credited to your advertising account. The opening process for a Baidu PPC account may take up to two weeks. Depending on your business, you might also need to apply for a Chinese ICP license and host the website in mainland China.

Alternatives for Baidu

There are other search providers in China, such as 360 Search and Sogou, but with its ~60% market share in search and ~50% of overall online advertising revenue in China, Baidu is the leading player. Additionally, Baidu is likely to remain on top in the near future due to its considerable investments in machine learning and artificial intelligence, particularly image and voice recognition. Currently, some 90% of Chinese Internet users use Baidu [9]. For a marketer interested in doing digital marketing in China, Baidu should definitely be included in the channel mix.

Other prominent digital marketing channels include Weibo, WeChat, Qihoo 360, and Sogou. For selling consumer products, the best platforms are Taobao and Tmall – many Chinese may skip search engines and go directly to these platforms for their shopping needs. As usual, companies are advised to leverage the power of superplatforms in their marketing and business operations.

Sources

[1] Alexa Siteinfo: Baidu <http://www.alexa.com/siteinfo/baidu.com>
[2] Nine reasons to use Baidu <http://richwaytech.ca/9-reasons-use-baidu-for-sem-china/>
[3] Baidu Fiscal Year 2015 <http://www.prnewswire.com/news-releases/baidu-announces-fourth-quarter-and-fiscal-year-2015-results-300226534.html>
[4] Is Baidu Advertising a Good Way to Reach Chinese Speakers Living in Western Countries? <https://www.nanjingmarketinggroup.com/blog/how-much-baidu-traffic-there-outside-china>
[5] 50+ Amazing Baidu statistics and facts <http://expandedramblings.com/index.php/baidu-stats/>
[6] 10 facts to understand Baidu <http://seoagencychina.com/10-facts-to-understand-the-top-search-engine-baidu/>
[7] What content did Chinese search most in 2013 <https://www.chinainternetwatch.com/6802/what-content-did-chinese-search-most-2013/#ixzz4G59YyMRG>
[8] Baidu controls 91% mobile search market in China <http://www.scmp.com/tech/apps-gaming/article/1854981/baidu-controls-91pc-mobile-search-market-china-smaller-firms>
[9] Baidu Paid Search <http://is.baidu.com/paidsearch.html>

Media agency vs. Creative agency: Which will survive?

In space, nobody can hear your advertising.

Earlier today I wrote about the convergence of media agencies and creative agencies. But let’s look at it from a different perspective: which one would survive, if we had to pick?

To answer the question, let us first determine the value each provides, and then see which one is more expendable.

Media agencies. First, media agencies’ value derives from their ability to aggregate both market sides: on one hand, they bundle the demand side (advertisers) and use this critical mass to negotiate media prices down. On the other hand, they bundle the supply side (media outlets) and thereby provide efficiency for advertisers – the advertisers don’t need to search for and negotiate with dozens of providers. In other words, media agencies provide the typical intermediary functions, which are useful in a fragmented market. Their markup is the arbitrage cost to the advertiser: they buy media at price p_b and sell at p_s, so a = p_s - p_b.

Creative agencies. Second, creative agencies’ value derives from their creative abilities. They know customers and have the creative ability to produce advertising that appeals to a given target audience. They usually charge an hourly rate, c; if the campaign requires x working hours, the creative cost is e = c*x. Consequently, the total cost for the advertiser is T = e + a. We also observe double marginalization, so that e + a > C, where C is the cost a single agency would charge were it to handle both creative and media operations.
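
A toy calculation of this cost comparison, with all figures invented for illustration:

    # Toy numbers for the cost comparison above; all figures are invented.

    p_b, p_s = 80_000, 100_000   # media bought at p_b, sold to the advertiser at p_s
    a = p_s - p_b                # media agency's arbitrage cost to the advertiser
    c, x = 150, 200              # creative hourly rate and hours worked
    e = c * x                    # creative cost
    T = e + a                    # total cost with two separate agencies

    C = 45_000                   # assumed price of one integrated agency doing both
    print(T, C, T > C)           # double marginalization: T exceeds C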

Transition. Now, let’s consider the current transition that makes this whole question relevant. Namely, the advertising industry is moving to programmatic. Programmatic is a huge threat to intermediation, since it aggregates fragmented market players. In practice, this means that advertisers are grouped under demand-side platforms (DSPs) and media under supply-side platforms (SSPs). How does this impact the scenario? The transition hits media agencies, but not creative agencies – “manual” bundling is no longer needed, but the need for creative work remains.

Conclusion. In conclusion, it seems creative agencies are less replaceable and therefore better positioned for vertical integration.

Limitations. Now, this assumes that advertisers have direct access to programmatic platforms (so that media agencies can in fact be replaced); currently, this is not the standard case. It also assumes they have in-house competence in programmatic advertising, which also is not the standard case. But in time, both of these conditions are likely to change: either advertisers acquire in-house access and competence, or they outsource the work to creative agencies which, in turn, will develop programmatic capabilities.

Another limitation is that the outcome will depend a lot on the position towards the client base. Whoever is closer to the client is better equipped to develop the missing capabilities. As is commonly acknowledged, customer relationships are the most valuable assets in the advertising business, potentially giving an opportunity to build missing capabilities even after other market players have acquired them. But based on this “fictional” comparison, we can argue that creative agencies are better off when approaching convergence.

A few thoughts on ad blockers

Anti-ad-blockers are becoming common nowadays.

Introduction. So, I read an article claiming that ad blockers are not useful for users. The argument, and the logic, is conventional: 1) the internet is not really free; 2) publishers need advertisers to subsidize content creation, which in turn is also in the users’ interest, because 3) they don’t have to pay for the content. Without ads, publishers will 4) either start charging for the content or go out of business. Either way, 5) “free” content will cease to exist. (As a real example, the founder of Xmarks wrote a captivating article about the consequences of free-riding in a startup context. I encourage you to check it out. [1])

Problem of rationality. The aforementioned logic is quite good. But where I disagree with the article is the following argument:

“as soon as users understand the implications of ad blockers [they will] delete them […].”

Based on general knowledge of human behavior, that sounds too much like wishful thinking. In this particular case, I think the dynamics of the tragedy of the commons (Hardin, 1968) [2] are more applicable. We might, in fact, consider “free” content a type of common (shared) resource. If so, the problem becomes evident: as user_i starts exploiting the free content [3], there is no immediate effect on either the user in question or other users.

There is, however, a minimal impact on the environment (the advertising industry). But because this effect is so small (a few impressions out of millions), it goes undetected. Therefore, it is as if the exploitation never took place. This not only gives user_i an incentive to continue the exploitation, but also signals to other users that ad blocking is quite alright. In consequence, the activity becomes widespread behavior, as we have now witnessed.

Mathematically, this could be explained through a step function.

Figure 1 An example of step functions (Stack Overflow, 2012) [4]

The problem is that the negative effects are not linear; they only become an issue when a certain threshold is met. In other words, it is only when user_n exploits (uses an ad blocker) that the cumulative negative effects amount to a crisis. At that point, we have a sudden change in the environment, which could have been prevented if the feedback loop were working and accurately reflecting user behavior.
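
As a sketch of the threshold idea (the threshold value is arbitrary):

    # The cumulative effect of ad blocking modeled as a step function: below the
    # adoption threshold nothing observable happens, past it ad-funded free
    # content is no longer viable. The threshold value is arbitrary.

    def content_crisis(adoption_rate, threshold=0.6):
        return 0 if adoption_rate < threshold else 1

    for rate in [0.1, 0.3, 0.5, 0.59, 0.60, 0.8]:
        print(f"ad-blocker adoption {rate:.0%} -> crisis: {bool(content_crisis(rate))}")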

Complexities. However, the issue is slightly more complex. As many anecdotal and empirical examples show (the boiling frog, the slippery slope, the last straw, etc.), the feedback loop could only work if it had a predictive property, because each transition from state S_t to S_t+1 does not cause an observable effect large enough to justify a change of behavior. Thus, predicting the outcomes of a particular behavior is required – something humans are poor at, especially at a collective level. Second, the availability of information is not guaranteed: user_i may not be aware of the actions of other users. To solve this problem, a system-level agent with information on the actions of all agents (e.g., ad-block users) is required.

Why does ad blocking take place? Indeed, if it’s so harmful, why do people do it? First of all, people may not be aware of the harm. Advertisers should not over-estimate users’ rationality or their ability to predict systemic changes; it is not uncommon for systemic problems to be ignored by most people. They simply don’t think about the long-term consequences. But even if they did, and realized that ad blockers ultimately decimate free content, they might still block the ads. Why? Well, for three reasons:

First, 1) the gains from using an ad blocker are immediate (getting rid of ad nuisance) and short-term, whereas the gains from not using an ad blocker are long-term (keeping the “free” content) and give a higher pay-off to others.

Generally speaking, people have a tendency to prefer short-term rewards (instant gratification) over long-term rewards, even if the latter would be much higher. That’s why many people buy a lottery ticket every week instead of working hard to realize their dreams. Also, although the long-term benefit of ads does give a pay-off to user_i, that pay-off is lower than the “service” he is doing for others, so that reward(u_i) < reward(u_I), where the group I includes i. Under some circumstances, the psychological effect might be to over-emphasize one’s own immediate benefit over a larger long-term benefit when there are others to share it with. In fact, such behavior is rational in the way rationality is usually defined: making decisions based on self-interest.

Second, 2) users might expect someone else to fix the problem; free content is taken for granted and the threat to its existence is not taken seriously. This is commonly known as “somebody else’s problem”. Yes, we know that keeping the lights on in the university toilet wastes energy, but let someone else turn them off (this is a real example based on the author’s own observations…). User_i perceives, perhaps correctly, that his contribution to the outcome is marginally low, and therefore sees no reason to change behavior. If you think about it, it’s the same reason some people don’t see voting as worthwhile; what good does one vote do? Paradoxically, it makes all the difference when that logic prevails in a large part of the electorate.

Third, 3) they just might not care. The value of free content might not outweigh the nuisance of ads; user_i might rather go without the content than see ads. Even if this scenario seems a tad unrealistic when viewed across a user’s entire media consumption, it might apply to a particular publisher. For example, when publisher_j introduces anti-ad-blocking measures, the user simply frequents the website of publisher_k instead.

Two drivers are in favor of this development:

  1. Low switching cost – the trouble of going to another site is close to zero,
    so no individual publisher can impose a lock-in (and, deriving from this proposition, they could do so only by forming a coalition, where publisher_j and publisher_k both introduce anti-ad-blocking measures).
  2. Race to the bottom – there is an incentive for a publisher to allow ad blockers
    and think of alternative ways to monetize their content. This is commonly known as a “race to the bottom”, meaning that due to heightening competition, supply-side actors willingly decrease their pay-offs even when there is no definitive signal from the demand side (again, a coalition of strict adherence could solve this).

Conclusion. Many of these problems are modeled in game theory and have no definitive solutions. However, there is some hope. We can distinguish between short-term rationality and long-term rationality. If the latter did not exist, anything that requires momentary sacrifice would be left undone. For example, individuals would not get schooling because it is more satisfying to play Pokémon GO than to go to school (for most people). But people do go to school, and they do (sometimes) make sacrifices for the greater good. Such behavior is also driven by socio-psychological phenomena: say it were a strict norm in society not to use ad blockers, i.e. their use would not be socially approved. The norms and values of a community are strong preventers of undesirable behavior; that is why so many indigenous cultures have been able to thrive under harsh circumstances. But in this particular case (and maybe in the West altogether, where a common value base perhaps no longer exists), it is hard to see the use of ad blockers becoming a “no-no”. If anything, the young perceive it as positive behavior and take their cue accordingly.

Suggestions. According to the logic of the commons problem, everyone suffers if no mechanism for preventing exploitation is developed. But how to go about it? I have a couple of ideas:

1) It is paramount that publishers acknowledge the problem – many of them still run their advertising operations without really thinking about it. They say, “Sure, it’s a problem,” and then do nothing. In a similar vein, blaming the users is an incorrect response, although it can be understood empathetically when examining advertising as a social contract. Publishers see that users are violating the implicit contract (exposure to ads –> free content) by using ad blockers, whereas users see that publishers are violating the contract by placing too many ads on the website (content > ads). In other words, there is no common understanding or definition of the contract – perhaps this is one of the root causes of the problem. People know they are shown ads in exchange for consuming content they don’t have to pay for, but what are the rules of that exchange? How many ads can be placed? What type of ads? Can users circumvent the ads? Etc.

Second, 2) the motives for ad blocker usage need to be clarified in depth – what are they? From my own experience, I can tell I use ad blockers because they make surfing the Web faster. Many websites are full of ads, which makes them load slowly – the root cause here would be ad clutter, or a (seeming) willingness to sacrifice user experience for ad money. I’m just one example, though. There may be other motivations as well, such as ads seeming untrustworthy or uninteresting, or something else.

Whatever these reasons are, 3) they need to be taken seriously and fixed by going to the root of the problems. Solving the ad blocker problem requires systemic thinking – superficial solutions are not enough. It’s not a question of introducing paywalls or blocking the blockers by technical means; rather, it’s about defining the relationship between publishers, users, and advertisers in a way that each party can accept. In the end, ad blockers belong to a complex set of problems that can be described as “no technology solution problems” [5], or at least technology is only a part of the solution here.

References

[1] End of the road for Xmarks. Available at: https://web.archive.org/web/20101001150539/http://blog.xmarks.com/?p=1886

[2] Hardin, G. (1968). The Tragedy of the Commons. Science, New Series, Vol. 162, No. 3859 (Dec. 13, 1968), pp. 1243-1248.

[3] Essentially this is equivalent to resource exploitation, although nominally it seems reverse.

[4] Stack Overflow (2012) Plotting step functions. Available at: http://stackoverflow.com/questions/8988871/plotting-a-step-function-in-mathematica

[5] Garrity, E. (2012). Tragedy of the Commons, Business Growth and the Fundamental Sustainability Problem. Sustainability, 4(10), 2443-2471.

Problems of standard attribution modelling

Attribution modelling is like digital magic.

Introduction

Wow, so I’m reading a great piece by Funk and Nabout (2015) [1]. They outline the main problems of attribution modelling. By “standard”, I refer to the commonly used method of attribution modelling, best known from Google Analytics.

Previously, I’ve addressed this issue in my digital marketing class by saying that the choice of an attribution model is arbitrary, i.e. marketers can freely decide whether it’s better to use e.g. a last-click model or a first-click model. But now I realize this is obviously the wrong approach – given that the impact of each touch-point can be estimated. There is much more depth to attribution modelling than the standard model leads you to believe.

Five problems of standard attribution modelling

So, here are the five problems by Funk and Nabout (2015).

1. Giving touch-points accurate credit

This is the main problem to me. The impact of touch-points on conversion value needs to be weighted, but the weighting is seemingly an arbitrary rather than a statistically valid choice (that is, until we consider advanced methods!). Therefore, there is no objective ranking or “betterness” among different attribution models.
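
To make the arbitrariness concrete, here is a minimal sketch of how the same (made-up) conversion paths are credited very differently under three common heuristic models:

    from collections import defaultdict

    # Three heuristic attribution models applied to the same (made-up) conversion
    # paths. Each path is an ordered list of channel touch-points; each conversion
    # is worth one unit of credit.

    paths = [
        ["display", "organic", "paid_search"],
        ["social", "paid_search"],
        ["organic", "organic", "email"],
    ]

    def attribute(paths, model):
        credit = defaultdict(float)
        for path in paths:
            if model == "last_click":
                credit[path[-1]] += 1.0
            elif model == "first_click":
                credit[path[0]] += 1.0
            elif model == "linear":
                for touch in path:
                    credit[touch] += 1.0 / len(path)
        return dict(credit)

    for model in ["last_click", "first_click", "linear"]:
        print(model, attribute(paths, model))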

2. Disregard for time

The standard attribution model does not consider the time interval between touch-points – it can range anywhere from 30 minutes to 90 days, restricted only by cookie duration. Why does this matter? Because time generally matters in consumer behavior. For example, if there is a long interval between contacts A_t and A_t+1, it may be that the effect of the first contact was not powerful enough to incite a return visit. Of course, one could also argue there is a reason not to consider time, because any differences arise from natural variation in consumers’ decision-making processes, which results in unknown intervals; ignoring time would then standardize the intervals. However, if we assume patterns in consumers’ decision-making, as is usually done by stating that “in our product category, the purchase process is short, usually under 30 days”, then addressing time differences could yield a better forecast: we should expect a second contact to take place at a certain point in time, given our model of consumer behavior.
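
One way to let the interval matter is time-decay weighting, where touch-points closer to the conversion receive more credit; the half-life below is an arbitrary choice for illustration:

    # Time-decay attribution for one (made-up) conversion path: a touch-point's
    # weight halves for every `half_life_days` it occurred before the conversion,
    # and the weights are normalized so the credit sums to one.

    half_life_days = 7.0
    touchpoints = [("display", 20), ("paid_search", 6), ("email", 1)]  # (channel, days before conversion)

    weights = {ch: 0.5 ** (days / half_life_days) for ch, days in touchpoints}
    total = sum(weights.values())
    credit = {ch: round(w / total, 3) for ch, w in weights.items()}
    print(credit)  # later touch-points (email, paid_search) get most of the credit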

3. Ignoring interaction types

The nature of the touch or interaction should be considered when modeling the customer journey. The standard attribution model assigns conversion value to different channels based on clicks, but the types of interaction across channels might be mixed. For example, one conversion might involve a view on Facebook and a click in AdWords, whereas another conversion might have the reverse. But are views and clicks equally valuable? Most marketers would say no. However, they would still assign some credit to views – at least according to classic advertising theory, visibility has an impact on advertising performance. Therefore, the attribution model should also consider several interaction types and the impact each type has on conversion propensity.

4. Survivorship bias

As Funk and Nabout (2015) note, “the analysis does not compare successful and unsuccessful customer journeys, [but] only looks at the former.” This is essentially a case of survivorship bias – we are unable to compare the touch-points that led to a conversion with those that did not. With such a comparison, we could observe that a certain channel has a higher likelihood of being included in a conversion path [2] than another channel, i.e. its weight should be higher and proportional to its ability to produce a lift in the conversion rate. By excluding information on unsuccessful interactions, we risk both Type I and Type II errors – that is, false positives and false negatives.
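
A minimal sketch of the comparison that survivorship bias prevents: with (made-up) converting and non-converting paths, we can estimate how much more often a channel appears in converting paths.

    # Comparing channel presence in converting vs. non-converting paths (made-up
    # data). A channel that appears far more often in converting paths gets a
    # higher "lift"; a success-only analysis cannot reveal this.

    converting = [["display", "paid_search"], ["email", "paid_search"], ["paid_search"]]
    non_converting = [["display"], ["display", "social"], ["social"], ["display", "email"]]

    def presence_rate(paths, channel):
        return sum(channel in path for path in paths) / len(paths)

    for channel in ["display", "paid_search", "email", "social"]:
        conv = presence_rate(converting, channel)
        non_conv = presence_rate(non_converting, channel)
        lift = conv / non_conv if non_conv else float("inf")
        print(f"{channel}: {conv:.0%} of converting vs {non_conv:.0%} of non-converting paths (lift {lift:.2f})")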

5. Exclusion of offline data

The standard attribution model does not consider offline interactions, even though research shows multi-channel consumer behavior is highly prevalent. The lack of data on these interactions is the major reason for the exclusion, but at the same time it restricts the usefulness of attribution modelling to the ecommerce context. Most companies, therefore, are not getting accurate information from attribution modelling beyond the online environment. And, as I’ve argued in my class, word-of-mouth is not included in the standard model either, which is a major issue for accuracy, especially considering social media. Even if we only want to measure the performance of an advertising channel, social media ads have a distinct social component – they are shared and commented on, which results in additional interactions that should be considered when modeling the customer journey.

Solutions

I’m still finishing the original article, but had to write these few lines because the points I encountered were poignant. I’m sure they will propose solutions, and I may update this article afterwards. At this point, I can only state two solutions that readily come to mind: 1) the use of conversion rate (CVR) as an attribution parameter – it’s a global metric and thus escapes survivorship bias; and 2) Universal Analytics, i.e. using methods such as Google’s Measurement Protocol to capture offline interactions. As someone smart said, the solution to a problem leads to a new problem, and that’s the case here as well – there needs to be a universal identifier (“User ID” in Google’s terms) to associate online and offline interactions. In practice, this requires registration.

Conclusion

The criticism applies to standard attribution modeling, e.g. how it is done in Google Analytics. There might be additional issues not included in the paper, such as data aggregation – to perform any type of statistical analysis, click-stream data is a must-have. Other relevant questions are: How do touch-points influence one another? And how should that influence be modeled? Beyond the technicalities, it is important for managers to understand the general limitations of current methods of attribution modelling and seek solutions in their own organizations to overcome them.

References

[1] Funk, B., & Abou Nabout, N. (2016). Cross-Channel Real-Time Response Analysis. In O. Busch (Ed.), Programmatic Advertising: The Successful Transformation to Automated, Data-Driven Marketing in Real-Time (pp. 141-151). Springer-Verlag.

[2] Conversion path and customer journey are essentially referring to the same thing; perhaps with the distinction that conversion path is typically considered to be digital while customer journey has a multichannel meaning.

Programmatic ads: Fallacy of quality supply

A major fallacy publishers still have is the notion of “quality supply” or “premium inventory”. I’ll explain the idea behind the argument.

Introduction. The fallacy of quality supply lies in publishers assuming the quality of a certain placement (say, a certain website) is constant, whereas in reality it varies according to the response which, in turn, is a function of the customer and the ad. Both the customer and the ad are running indices, meaning that they constantly change. The job of a programmatic platform is to match the right ads with the right customers in the right placements. This is a dynamic problem, where the “quality” of a given placement can only be defined at the time of the match, not prior to it.

Re-defining quality. The term “quality” should in fact be re-defined as relevance – a high-quality ad is relevant to customers at a given time (of match), and vice versa. In this equation, the ad placement does not hold any inherent value; its value is always determined in a unique match between the customer, the ad, and the placement. It follows that the ad itself needs to be relevant to the customer, irrespective of the placement. It is not known which interaction effect is stronger, ad + customer or placement + customer, but it is commonly assumed that the placement has a moderating effect on the quality of the ad as perceived by the customer.

The value of ad space is dynamic. The idea of publishers defining a quality distribution a priori is old-fashioned. It stems from the idea that publishers should rank and define the value of their advertising space. That is not compatible with platform logic, in which any particular placement can be of high or low quality (or anywhere between the extremes). In fact, the same placement can simultaneously be both high and low quality, because its value depends on the advertiser and the customer which, as stated, fluctuate.
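
A minimal sketch of the matching logic implied here: the expected value of an impression is computed per (ad, customer, placement) combination at auction time, so the same placement can be “premium” for one match and worthless for another. All numbers and the pCTR lookup are invented.

    # Expected value of one impression, computed at match time for a specific
    # (ad, customer segment, placement) triple. The pCTR lookup and all numbers
    # are invented for illustration.

    pctr = {
        ("sneaker_ad", "sports_fan", "news_site"):  0.030,
        ("sneaker_ad", "sports_fan", "hobby_blog"): 0.045,
        ("sneaker_ad", "finance_pro", "news_site"): 0.004,
    }

    def expected_value(ad, segment, placement, value_per_click):
        return pctr.get((ad, segment, placement), 0.0) * value_per_click

    # The "non-premium" hobby blog outperforms the "premium" news site for one
    # match, while the news site is nearly worthless for another:
    print(expected_value("sneaker_ad", "sports_fan", "news_site", 2.0))   # 0.06
    print(expected_value("sneaker_ad", "sports_fan", "hobby_blog", 2.0))  # 0.09
    print(expected_value("sneaker_ad", "finance_pro", "news_site", 2.0))  # 0.008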

Customers care about ad content. To understand this point, quality should be understood from the point of view of the customer. It can plausibly be argued that customers are interested in ads (if at all) because of their content, not their context. If an ad offers a promotion on item x which I like, I’m interested. This interest takes place whether the ad was placed on website A or website B. Thus, it is not logical to assume that the placement itself has a substantial impact on ad performance.

Conclusion. To sum up, there is no value in an ad placement per se; the value is realized if (and only if) relevance is met. Under this argument, the notion of “premium ad space” is inaccurate and, in its implications, detrimental to the development of the programmatic ad industry. If ad space is priced according to inaccurate notions, it is not likely to match its market value and, given that advertisers have a choice, they will not continue buying such ad inventory. Higher relevance leads to higher performance, which leads to advertiser satisfaction and a higher probability of repurchase of that media. Any predetermined notion of “quality supply” is not relevant in this chain.

Recommendations. Instead of maintaining the false dichotomy of “premium” and “remnant” inventory, publishers should strive to maximize relevance in match-making auctions by any means necessary. For this purpose, they should demand higher quality and variety of ads from advertisers. Successful match-making depends on quality and variety on both sides of the two-sided market. Generally, when prices are set according to supply and demand, more economic activity takes place – there is no reason to expect otherwise in the advertising market. Publishers should therefore stop labeling their inventory as “quality” or “premium” and instead let markets decide whether it is so. Indeed, in programmatic advertising the so-called remnant inventory can outperform what publishers would initially perceive as superior placements.

Is “premium” ad space a hoax?

Answer: It kinda is.

“Premium publishers” and “premium ad space” — these are often heard terms in programmatic advertising. But they are also dangerously fallacious ideas.

I’ll give three reasons why:

  1. A priori problem
  2. Uniformity problem
  3. Equilibrium problem

First, publishers define what is “premium” a priori (before results), which is the wrong sequence (the a priori problem). The value of ad space – and its status, premium or not – should be determined a posteriori, after the fact. Anything else risks biases due to guesswork.

Second, what is “premium” (i.e., works well) for advertiser A might be different for advertiser B, yet the same ad space is labeled “premium” or not for everyone (the uniformity problem). The value of ad space should be determined based on its value to the advertiser, which is not uniformly distributed.

Third, fixing a higher price for “premium” inventory skews the market – rational advertisers won’t pay irrational premiums, and the publisher ends up losing revenue instead of gaining the “premium” price (the equilibrium problem). This is the exact opposite of the outcome the publisher hoped for, and it arises from an imbalance of supply and demand.

Limitations

I defined premium as ad space that works well with regard to the advertiser’s objectives. Other definitions also exist, e.g. Münstermann and Würtenberg (2015), who argue that the distinctive trait between premium and non-premium media is the degree of editorial professionalism, so that amateur websites would be less valuable. In many cases, this is an incorrect classifier from the advertiser’s perspective – e.g., placing an ad on a blogger’s website (influencer marketing) can fairly easily produce higher rents than placing it alongside “professional” content. The degree of professionalism of the content is not a major cue for consumers, and therefore one should define “premium” from the advertiser’s point of view – as a placement that works.

Conclusion

The only reason, I suspect, that premium inventory is still alive is the practice of private deals, where advertisers are more interested in volume than performance – these advertisers are informed more by assumptions than by data. Most likely, as buyers’ level of sophistication increases, they will become more inclined toward market-based pricing, which has a much closer association with performance than private deals do.