

On online debates: fundamental differences

Back in the day, they knew how to debate.

Introduction. Here’s a thought, or argument: most online disputes can be traced back to differences in premises. I observe this time and time again: two people disagree but fail to see why. Each party believes they are right, and so they keep on debating; it’s a never-ending cycle. I propose here that identifying the fundamental difference in their premises could end any debate sooner rather than later, and thereby save valuable time and energy.

Why does it matter? Because this phenomenon is so common, its solution is actually a societal priority — we need to teach people how to debate meaningfully so that they can efficiently reach a mutual understanding, either by one party adopting the other’s argument (the “Gandhi principle”) or by quickly identifying the fundamental disagreement in premises, so that the debate does not drag on unnecessarily. In practice, the former seems rare — it is more common that people stick to their original point of view rather than “caving in”, as it is falsely perceived. While there may be several reasons for that, including stubbornness, one authentic source of disagreement is the fundamental difference in premises, and recognizing it is immune to loss of face, stubbornness, and the other socio-psychological conditions that prevent reconciliation (because it does not require admitting defeat).

What does that mean? Simply put, people have different premises, emerging from different worldviews and experiences. Given this assumption, every skilled debater should recognize the existence of a fundamental difference when in disagreement – they should ask, “okay, where is the other person coming from?”, i.e., what are their premises? Through that process, they can surface the fundamental difference and thus close the debate.

My point is simple: when we trace an argument back to its premises, we can reveal for each conflict a fundamental disagreement at the premise level.

The good news is that this gives us a path to reconciliation (and food for thought for each party, possibly leading to the Gandhi outcome of adopting the opposing view when it is judged more credible). When we know there is a fundamental disagreement, we can work together to find it and treat finding it as the end point of the debate. Debating therefore becomes a task not of proving yourself right, but of discovering the root cause of the disagreement. I believe this is a more effective method for ending debates than current practice, which wastes a great deal of time and effort.

The bad news is that oftentimes the premises are either 1) very difficult to change, because they are so fundamentally part of one’s beliefs that the individual refuses to alter them, or 2) we don’t know how we should change them, because there might not be “better” premises at all, just different ones. Now, of course this argument is itself based on a premise, that of relativity. Alternatively, we could say that some premises are better than others, e.g., given a desirable outcome – but that would be a debate of value subjectivity vs. universality, and as such leads only into a circular debate (which is precisely what we do not want), because both fundamental premises co-exist.

In many practical political issues the same applies – nobody, not even the so-called experts, can argue with certainty for the best scenario or predict the outcomes with a high degree of confidence. This leads to the problem of “many truths”, which can be crippling for decision-making and for the perception of togetherness in a society. But in such a situation, it is all the more critical to identify the fundamental differences in premises; that kind of transparency enables dispassionate evaluation of the merits and weaknesses of one’s own thinking and, at the same time, those of the other party’s. In a word, it is important for understanding your own thinking (following the old Socratic counsel to ‘know thyself’) and for understanding the thinking of others.

The hazard of identifying fundamental premise differences is, of course, that it leads to a “null result” (nobody wins). Simply put, we admit that there is a difference and perhaps logically conclude that neither is right, or that each retains the belief of being right (while understanding the logic of the other party). In an otherwise irreconcilable scenario, this would seem like a decent compromise, but it is also prohibitive if and when participants perceive the debate as a competition. Instead, it should be perceived as co-creation: working together in a systematic way to exhaust each other’s arguments and thus derive the fundamental difference in premises.

Conclusion. In this post-modern era, where 1) values and worldviews are more fragmented than ever, and 2) online discussions are commonplace thanks to social media, the number of argumentation conflicts is inherently very high. In fact, conflict is more likely than agreement due to all this diversity. People naturally have different premises, emerging from idiosyncratic worldviews and experiences, and therefore conflicting arguments can be seen as the new norm in high-frequency communication environments such as social networks. People alleviate this effect by grouping with like-minded individuals, which may lead them to assume more extreme positions than they otherwise would.

Education in argumentation theory, logic (philosophy and practice), and empathy is crucial to start addressing this condition of disagreement, which I think is permanent in nature. Earlier I used the term “skilled debater”. Indeed, debating is a skill, and a crucial one for every citizen. Societies do wrong by giving people a voice but not teaching them how to use it. Debating skills are not natural traits people are born with – they are learned. While some people are self-taught, it cannot be rationally assumed that the majority would learn these skills by themselves. Rather, they need to be taught, in schools at all levels. For example, most university programs do not teach debating skills in the sense I describe here – yet they claim to instill critical thinking in their students. The level and the effort are inadequate – the schooling system needs to step up and make this a priority. Otherwise we face another decade or more of ignorance taking over online discussions.

How to create added value with dynamic retargeting?

Today I spoke with a Facebook representative whose job is to help Elämyslahjat do better Facebook advertising.

The conversation stayed at beginner level until I said that we have been doing Facebook advertising for many years and know the basics.

At that point, the person suggested Facebook’s dynamic retargeting to us. This form of advertising works so that a person who has visited a certain product page is shown Facebook ads for that same product. You look at shoes in an online store, and you see the same shoes in a Facebook ad.

He recommended it to us because, reportedly, it works. I asked: why does it work? Well, he said that, first, many large e-commerce clients use it and, second, it generates conversions well.

Let’s examine these arguments:

A) “Others use it” –> validity: poor, because the fact that others do something does not mean it is sensible; online marketing is full of misguided marketing beliefs that drive inefficiency, even in large companies.

B) “Generates conversions” –> validity: probably poor, because attribution and the purchase process coincide, so the Facebook ad gets credit for sales that do not “belong” to it. I wrote about this phenomenon here.

So I explained these points and said that with my question I meant: what added value does this feature create for the customer? The person was clearly taken aback and could not answer. He then repeated the arguments he had made before.

I started thinking about this.

Really – what good does it do that person A sees product X yet again, possibly for weeks, in retargeting? While browsing Facebook or news sites. The same product I already looked at and did not buy.

By my reasoning, no good at all. On the contrary, it is a wasted opportunity. Why “wasted”?

Because the very information about which product a person was interested in is extremely valuable, if used correctly.

It is wrong to show the same product over and over and imagine that the person will suddenly change their mind. This is the equivalent of badgering in traditional sales. Does that tactic work? It does not. Instead of badgering, you should offer alternatives, and since we know what the customer was interested in, that information can be used as the basis for recommendations.

In other words, Facebook advises using retargeting like this:

1) Tell more about the same product –> credibility: low, because this is badgering (i.e., offering the same thing a thousand times and expecting a different result = Einstein’s definition of stupidity)

2) Remind the person to buy –> credibility: low, because human memory is longer than a goldfish’s (i.e., I have already seen the product and decided no)

Behind both lies a false view of people: “people are stupid and manipulable, so they must be constantly reminded, and then, as if by magic, they decide to buy the product.” People are assumed to be brainless robots. In reality, retargeting works largely because of the attribution bias I mentioned, not because it creates genuine added value.

So how to do it right? Opportunities to create genuine added value with dynamic retargeting include at least:

1) Upselling – recommend a more expensive (or cheaper) alternative to the customer, and possibly add-ons or additional services. Read more.

2) Cross-selling – recommend complementary products to the customer; e.g., if you bought shoes, buy socks, and so on. Read more about cross-selling.

In both tactics, the idea is to recommend products the customer has not yet seen but might, based on our data, be interested in. With statistical models we can identify not only how products relate to each other, but also make recommendations based on the contents of previous customers’ shopping baskets. The figure below illustrates this.
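As a rough sketch of the basket-based idea (the product names and data here are invented for illustration, not taken from any real recommendation engine), pairwise co-occurrence counting over past baskets could look like this:

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(baskets):
    """Count how often each pair of products appears in the same basket."""
    co = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def recommend(viewed_product, baskets, top_n=3):
    """Recommend products most often bought together with the viewed one;
    the viewed product itself is never recommended."""
    co = build_cooccurrence(baskets)
    scores = Counter({b: n for (a, b), n in co.items() if a == viewed_product})
    return [p for p, _ in scores.most_common(top_n)]

baskets = [
    ["shoes", "socks"],
    ["shoes", "socks", "shoelaces"],
    ["shoes", "insoles"],
]
print(recommend("shoes", baskets))  # socks ranks first: it co-occurs most often
```

A production system would of course use far richer statistical models, but even this toy version captures the key point: advertise the complement, not the product the visitor already rejected.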

Source: Liukkonen, 2016

Companies should therefore build the same kind of recommendation engines into their advertising as are used on websites (see e.g. Amazon; Finnish providers include at least Nosto and Custobar). These improve the relevance of advertising and, above all, create added value. Tactics proven effective in traditional sales should be adapted appropriately to the web, because they are backed by long research and practice, and their effectiveness can thus be justified.

Conclusion: in dynamic retargeting, 1) show what the customer has not yet seen, and 2) create added value (not spam).

I was taken aback by how superficial and simplistic the recommendations I got from the Facebook representative were (view of people? robots; added value? not considered). But I have noticed this before when teaching: marketing students are fed a false image of simple customers whose manipulation (conversion) only requires a certain number of repetitions. Shove the bun down their throats, by fair means or foul.

This outdated view of people – the so-called spammer’s mentality – is harmful to the marketer. Spammy marketing leads to reactance and a general resistance to ads. Moreover, programmatic buying has other quality problems as well, so marketers should strive by all means to implement the principle of added value.

Hopefully this article prompted you to think differently, at least in the context of dynamic retargeting.

What is a “neutral algorithm”?

1. Introduction

Earlier today, I had a brief exchange of tweets with @jonathanstray about algorithms.

It started from his tweet:

Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.

To which I replied:

Yes, and that’s why learning is not the way to go. “Fair” should not be goal, is inherently subjective. “Objective” is better

Then he wrote:

lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.

And I wrote:

True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.

And he answered:

what does “neutral” mean though?

After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.

2. Definition

So, what is a neutral algorithm? I would define it like this:

“A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.” [1]

An example of a neutral algorithm is a standard ad optimization algorithm: it gets to decide whether to show Ad1, Ad2, or Ad3. Instead of asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).

The treatment that all ads (read: content, users) get is fair – they are diffused based on their merits (measured objectively by an unambiguous metric), not on favoritism of any sort.
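To make the CTR-based decision concrete, here is a minimal sketch (the ad names and statistics are invented for illustration); a small exploration rate is included so that new ads can accumulate impressions and prove their merit:

```python
import random

def pick_ad(stats, epsilon=0.1):
    """Choose an ad by observed CTR (clicks / impressions).
    With probability epsilon, explore a random ad instead."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    def ctr(ad):
        clicks, impressions = stats[ad]
        return clicks / impressions if impressions else 0.0
    return max(stats, key=ctr)

# Invented click/impression counts for three ads
stats = {"Ad1": (30, 1000), "Ad2": (55, 1000), "Ad3": (12, 800)}
print(pick_ad(stats, epsilon=0.0))  # Ad2: highest CTR wins on merit alone
```

The point of the sketch is that no human opinion enters the choice: the winner is determined entirely by the metric.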

3. Foundations

The roots of algorithm neutrality stem from freedom of speech and net neutrality [2]. No outsiders can impose their values and opinions (e.g., by censoring politically sensitive content) or interfere with the operating principles of the algorithm. Instead of being shaped by external manipulation, the decision-making of the algorithm is as value-free (neutral) as possible. For example, in the case of social media, it chooses to display information that accurately reflects the sentiment and opinions of the people at a particular point in time.

4. Limitations

Now, I grant there are issues with “freedom”, some of which are considerable. For example: 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda).

Another limitation is legislation – illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context, but given that the laws themselves are “fair”, this should pose no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech.

I wrote more about these issues here [3].

5. Conclusion

In spite of the aforementioned issues, a neutral algorithm gives each media outlet, candidate, and user a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.

The rest is up to humans – educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display the state of ignorance or sophistication in a society. A good example is Microsoft’s infamous bot Tay [4], a machine learning experiment gone bad. The alarming thing about the bot is not that “machines are evil”, but that *humans are evil*; the machine merely reflects that. Hence my original point of curbing human evilness by keeping algorithms free of human values as much as possible.

Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is separation of church and state, there should be separation of humans and algorithms to the greatest possible extent.

Notes

[1] Initially, I considered a definition saying “not influenced”, but it is not safe to assume that the subjectivity of the creators would not in some way be reflected in the algorithm. “Minimally”, in turn, leads to the normative argument that that subjectivity should be mitigated.

[2] Wikipedia (2016): “Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”

[3] Algorithm Neutrality and Bias: How Much Control? <https://www.linkedin.com/pulse/algorithm-neutrality-bias-how-much-control-joni-salminen>

[4] A part of the story is that Tay was trolled heavily and therefore assumed a derogatory way of speech.

Advertisers actively following “Opportunities” in Google AdWords risk bid wars

PPC bidding requires strategic thinking.

Introduction. Wow. I was doing some SEM optimization in Google AdWords when a thought struck me. It is this: advertisers actively following “Opportunities” in AdWords risk bid wars. Why is that? I’ll explain.

Opportunities or not? The “Opportunities” feature proposes bid increases for given keywords. For example, in Week 1, Advertiser A has current bid b_a and is proposed a marginal increase m_a, so the new bid is e_a = b_a + m_a. During the same Week 1, Advertiser B, in response to Advertiser A’s acceptance of the bid increase, is recommended to maintain his current impression share by increasing his bid b_b to e_b = b_b + m_b. To maintain the impression-share balance, Advertiser A is then, in the following optimization period (say the optimization cycle is a week, so next week), proposed yet another marginal increase, et cetera.

If we instead treat m as a multiplier, the bid after c optimization cycles becomes e_a = b_a · m_a^c, where c is the number of cycles. Let’s say AdWords recommends a 15% bid increase at each cycle (e.g., $0.20 -> $0.23 in the 1st cycle); then after five cycles, the keyword bid has doubled compared to the baseline (illustrated in the picture).
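The compounding is easy to verify with a few lines (a sketch assuming a constant 15% increase accepted at every cycle):

```python
def compounded_bid(base_bid, multiplier, cycles):
    """Bid after a number of optimization cycles, each scaling the bid."""
    return base_bid * multiplier ** cycles

# A $0.20 baseline bid with a 15% increase per weekly cycle
for c in range(6):
    print(c, round(compounded_bid(0.20, 1.15, c), 3))
# After five cycles the bid is ~$0.402, i.e., double the baseline.
```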

Figure 1   Compounding bid increases

Alluring simplicity. Bidding wars have always been a possible scenario in PPC advertising – however, the real issue here is simplicity. The improved “Opportunities” feature gives much better recommendations to advertisers than the earlier version, which increases its usage and more easily leads to “lightly made” acceptance of bid increases that Google can present as likely to maintain a bidder’s current competitive position. From auction psychology we know that bidders tend to overbid when put under competitive pressure, and that is exactly where Google is putting them.

It’s rational, too. I think more aggressive bidding can easily take place with increasing usage of “Opportunities”. Basically, the baselines shift at the end of each optimization cycle. The mutual increase of bids (i.e., a bid war) is not only a potential outcome of careless bidding; in fact, increasing bids is rational as long as the keywords remain profitable. In either case, economic rents (= excess profits) will be competed away.

Conclusion. Most likely Google advertising will continue converging toward a perfect market, where it is harder and harder for individual advertisers to extract rents, especially under long-term competition. “Opportunities” is one way of making auctions more transparent and encouraging more aggressive bidding behavior. It would be interesting to examine whether careless bidding is associated with the use of “Opportunities” (the psychological aspect), and also whether Google shows more recommendations to increase than to decrease bids (opportunistic recommendations).

Belief systems and human action

What people believe sometimes becomes real because of that.

1. Introduction. People are driven by beliefs and assumptions. We all make assumptions and use simplified thinking to cope with the complexities of daily life. These include stereotypes, heuristic decision-making, and the many forms of cognitive bias we’re all subject to. Because the information individuals have is inherently limited, as are their cognitive capabilities, our rational thinking is naturally bounded (Simon, 1956).

2. Belief systems. I want to talk about what I call “belief systems”. They can be defined as a form of shared thinking by a community or a niche of people. Some general characterizations follow. First, belief systems are characterized by a common language (vocabulary) and a shared way of thinking. Sociologists might call them communities or sub-cultures, but I’m not using those terms because they are usually associated with shared norms and values, which do not matter in the context I refer to in this post.

3. Advantages and disadvantages. Second, the main advantage of belief systems is efficient communication: because all members share the belief system, they are privy to the meaning of its specific terms and concepts. The main disadvantage is so-called tunnel vision, which restricts members who have adopted a belief system from seeking or accepting alternative ways of thinking. Both the main advantage and the main disadvantage result from the same principle: the necessity of simplicity. What I mean is that if a belief system is not parsimonious enough, it is not effective for communication, but it might escape tunnel vision (and vice versa).

4. Adoption of belief systems. For a belief system to spread, it is subject to the laws of network diffusion (Katz & Shapiro, 1985). The more people have adopted a belief system, the more valuable it becomes for an individual user. This encourages further adoption in a virtuous cycle. Simplicity enhances diffusion – a complex system is unlikely to be adopted by a critical mass of people. “Critical mass” refers here to the number of people sharing the belief system needed for additional members to adopt it. Although this may not be a single number, since the utility functions governing adoption are not uniformly distributed among individuals, there is an underlying assumption that belief systems are social by nature. If not enough people adopt a belief system, it is not remarkable enough to drive human action at a meaningful scale.
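The critical-mass intuition can be illustrated with a minimal threshold simulation (a Granovetter-style sketch with invented names and thresholds, not a model from the cited papers): each person adopts once enough others already have.

```python
def simulate_adoption(thresholds, seed_adopters):
    """Each person adopts the belief system once the number of current
    adopters reaches their personal threshold; repeat until stable."""
    adopted = set(seed_adopters)
    changed = True
    while changed:
        changed = False
        for person, threshold in thresholds.items():
            if person not in adopted and len(adopted) >= threshold:
                adopted.add(person)
                changed = True
    return adopted

# One seed adopter triggers a full cascade when thresholds form a ladder...
print(sorted(simulate_adoption({"b": 1, "c": 2, "d": 3}, {"a"})))
# ...but a high-threshold holdout never adopts: critical mass is not reached.
print(sorted(simulate_adoption({"b": 1, "c": 2, "e": 10}, {"a"})))
```

This also shows why there is no single critical-mass number: the outcome depends on the whole distribution of individual thresholds.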

5. Understanding. Belief systems are intangible and unobservable by any direct means, but they are “real” in the social sense of the word. They are social objects or constructs that can be scrutinized using proxies that reflect their existence. The best proxy for this purpose is language. Thus, belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of members adhering to a belief system. An objective examiner would be able to observe and record the members’ use of language and construct a map of the key concepts and vocabulary, along with their interrelations and underlying assumptions. Through this procedure, any belief system could be dissected into its fundamental constituents, after which its merits and potential discords (e.g., biases) could be objectively discussed.

For example, startup enthusiasts talk about “customer development” and “getting out of the building” as a new, revolutionary replacement for market research, whereas marketing researchers might see little novelty in these concepts and could actually list these and many more market research techniques that would potentially yield a better outcome.

6. Performance. By objective measures, a certain belief system might not be superior to another, either in being adopted or in performing better. In practice, a belief system can yield high performance rewards due to 1) additional efficiency in communication, 2) the randomness of it working better than competing solutions, or 3) its heuristic properties that, e.g., enhance decision-making speed and/or accuracy. Therefore, belief systems need not be theoretically optimal solutions to yield a practically useful outcome.

7. Changing belief systems. Moreover, belief systems are often unconscious. Consider the capitalist belief system, or the socialist belief system. Both drive the thinking of individuals to an enormous extent. Once a belief system is adopted, it is difficult to unlearn. Getting rid of a belief system requires considerable cognitive effort, a sort of re-programming. An individual needs to be aware of the properties and assumptions of his belief system, and then want to change them, e.g., by looking for counter-evidence. It is a psychological process equivalent to learning, or “unlearning”.

8. Conclusion. People operate based on belief systems. Belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of a belief system. Belief systems produce efficiency gains for communication but simultaneously hinder consideration of possibly better alternatives. A belief system needs to be simple to be useful; once people absorb it, they do not question its assumptions thereafter. Changing belief systems is possible but requires active effort over a period of time.

References

Katz, M. L., & Shapiro, C. (1985). Network Externalities, Competition, and Compatibility. The American Economic Review, 75(3), 424–440.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–38.

Digital marketing in China: search-engine marketing (SEM) on Baidu

Introduction

China is an enormous market of 1.3 billion people and growing. Of all the BRIC markets, China is the furthest along in the adoption of technology and digital platforms, especially smartphones and applications.

Perhaps the best-known example of Chinese digital platforms in the West is Alibaba, the e-commerce giant with a market cap of over $200bn. Through AliExpress, Western consumers can order Chinese products – but Western companies can also use the marketplace to sell their products to Chinese consumers. However, this blog post is about Baidu, the Chinese equivalent of Google.

About Baidu

Baidu was founded in 2000, almost at the same time as Google (which was founded in 1998). Google left China in 2010 amidst censorship issues, after which Baidu has solidified its position as the most popular search engine in China.

Most likely due to their similar origins, Baidu is much like Google. Its user interface and functionalities borrow heavily from Google, but Baidu also displays some information differently. An example of Baidu’s search-engine results page (SERP) can be seen below.

Figure 1   Example of Baidu’s SERP

Many Chinese users turn to Baidu to search for entertainment rather than information, and Baidu’s search results page supports this behavior. In terms of search results, there is active censorship of sensitive topics, but that does not directly affect most Western companies interested in the Chinese market. Overall, to influence Chinese consumers it is crucial to have a presence on Baidu — companies not visible on Baidu may not be regarded by Chinese Internet users as esteemed brands at all.

Facts about Baidu

I have collected here some interesting facts about Baidu:

  1. Baidu is the fourth most visited website in the world (Global Rank: 4), and number one in China [1]
  2. Over 6 billion daily searches [2]
  3. 657 million monthly mobile users (December 2015) [3]
  4. 95.9% of the Baidu visits were from mainland China. [4]
  5. Baidu’s share of the global search-engine market is 7.52% [5]
  6. Baidu offers over 100 services, including discussion forums, wiki (Baidu Baike), map service and social network [6]
  7. The most searched themes are film & TV, commodity supply & demand, education, games, and travel [7]

The proliferation of Internet users has tremendously influenced Baidu’s usage, as can be seen from the statistics.

How to do digital marketing in Baidu?

Baidu enables three types of digital marketing: 1) search-engine optimization (SEO), 2) search-engine advertising (PPC), and 3) display advertising. Let’s look at these choices.

First, Baidu has a habit of favoring its own numerous properties (such as Baidu News, Zhidao, etc.) over other organic results. Up to 80% of first-page results can be filled by Baidu’s own domains, so search-engine optimization on Baidu is challenging. Second, Baidu has a display network similar to GDN (Google Display Network), including some 600k+ websites. As always, display networks need to be filtered for ad fraud using whitelisting and blacklisting techniques. After doing that, display advertising is recommended as an additional tactic to boost search advertising performance.
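The whitelisting/blacklisting pass can be sketched roughly as follows (the site names are placeholders; real tooling would pull placement reports from the ad platform rather than hard-coded lists):

```python
def filter_placements(sites, whitelist=None, blacklist=frozenset()):
    """Drop blacklisted placements; if a whitelist is given,
    additionally keep only placements that appear on it."""
    allowed = [s for s in sites if s not in blacklist]
    if whitelist is not None:
        allowed = [s for s in allowed if s in whitelist]
    return allowed

placements = ["news-site.example", "fraud-farm.example", "blog.example"]
print(filter_placements(placements, blacklist={"fraud-farm.example"}))
# ['news-site.example', 'blog.example']
```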

Indeed, the best way to reach Baidu users is search advertising. The performance of PPC usually exceeds other forms of digital marketing, because ads are shown to the right people at the right time. Advertising on Baidu is common practice, and Baidu has more than 600,000 registered advertisers. Currently, advertisers are especially focusing on mobile users, where Baidu’s market share is up to 90% and where usage is growing the fastest [8].

How does Baidu advertising work?

For an advertiser, Baidu offers similar functionalities to Google’s. Search-engine advertising, often called PPC (pay-per-click), is possible on Baidu. In this form of advertising, advertisers bid on keywords that represent users’ search queries. When a user makes a particular search, they are shown text ads from the companies with the winning bids. Companies are charged when their ad is clicked.

The following picture shows how ads are displayed on Baidu’s search results page.

Figure 2   Ads on Baidu

As you can see, ads are shown on top of the search results. Organic search results are placed after ads on the main column. On the right column, there is extra “rich” information, much like on Google. The text ads on Baidu’s SERP look like this:

Figure 3   Text ads on Baidu

Ad headlines can have up to 20 Chinese characters or 40 English characters, and the description text up to 100 Chinese characters or 200 English characters. There is also the possibility to use video and images in a prominent way. Below is an example of Mercedes-Benz’s presence in Baidu search results.
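A rough way to sanity-check ad copy against these limits is to count each Chinese character as two half-width units (an assumed convention for illustration only; Baidu’s actual counting rules should be verified in its advertiser documentation):

```python
def length_units(text):
    """Length in half-width units: CJK characters count as two,
    everything else as one (illustrative assumption)."""
    return sum(2 if "\u4e00" <= ch <= "\u9fff" else 1 for ch in text)

def headline_fits(text, max_units=40):
    # 40 units = 40 English characters or 20 Chinese characters
    return length_units(text) <= max_units

print(length_units("百度推广"))           # 4 Chinese characters -> 8 units
print(headline_fits("Buy shoes online"))  # 16 units -> True
```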


Figure 4   Example of a brand’s presence on Baidu

Clearly, using such formats is highly advisable for brand advertisers.

How to access Baidu advertising?

Baidu’s search advertising platform is called Phoenix Nest (百度推广). The tools to access accounts include a web interface and Baidu PPC Editor (百度推广助手).

To start advertising on Baidu, you will need to create an account. For that, you need to have a Chinese-language website and to send Baidu a digital copy of a business registration certificate issued in your local country. You also need to make a deposit of 6,500 yuan, of which 1,500 is held by Baidu as a setup fee and the rest is credited to your advertising account. The opening process for a Baidu PPC account may take up to two weeks. Depending on your business, you might also need to apply for a Chinese ICP license and host the website in mainland China.

Alternatives to Baidu

There are other search providers in China, such as 360 Search and Sogou, but with its ~60% market share in search and ~50% of overall online advertising revenue in China, Baidu is the leading player. Additionally, Baidu is likely to remain on top in the near future due to its considerable investments in machine learning and artificial intelligence, in the fields of image and voice recognition. Currently, some 90% of Chinese Internet users use Baidu [9]. For a marketer interested in doing digital marketing in China, Baidu should definitely be included in the channel mix.

Other prominent digital marketing channels include Weibo, WeChat, Qihoo 360, and Sogou. For selling consumer products, the best platforms are Taobao and Tmall – many Chinese may skip search engines and go directly to these platforms for their shopping needs. As usual, companies are advised to leverage the power of superplatforms in their marketing and business operations.

Sources

[1] Alexa Siteinfo: Baidu <http://www.alexa.com/siteinfo/baidu.com>
[2] Nine reasons to use Baidu <http://richwaytech.ca/9-reasons-use-baidu-for-sem-china/>
[3] Baidu Fiscal Year 2015 <http://www.prnewswire.com/news-releases/baidu-announces-fourth-quarter-and-fiscal-year-2015-results-300226534.html>
[4] Is Baidu Advertising a Good Way to Reach Chinese Speakers Living in Western Countries? <https://www.nanjingmarketinggroup.com/blog/how-much-baidu-traffic-there-outside-china>
[5] 50+ Amazing Baidu statistics and facts <http://expandedramblings.com/index.php/baidu-stats/>
[6] 10 facts to understand Baidu <http://seoagencychina.com/10-facts-to-understand-the-top-search-engine-baidu/>
[7] What content did Chinese search most in 2013 <https://www.chinainternetwatch.com/6802/what-content-did-chinese-search-most-2013/#ixzz4G59YyMRG>
[8] Baidu controls 91% mobile search market in China <http://www.scmp.com/tech/apps-gaming/article/1854981/baidu-controls-91pc-mobile-search-market-china-smaller-firms>
[9] Baidu Paid Search <http://is.baidu.com/paidsearch.html>

Media agency vs. Creative agency: Which will survive?

In space, nobody can hear your advertising.

Earlier today I wrote about the convergence of media agencies and creative agencies. But let’s look at it from a different perspective: if we had to pick, which one would survive?

To answer the question, let us first determine the value each provides, and then see which one is more expendable.

Media agencies. First, the value provided by media agencies derives from their ability to aggregate both market sides: on the one hand, they bundle the demand side (advertisers) and use this critical mass to negotiate media prices down. On the other hand, they bundle the supply side (media outlets) and thereby provide efficiency for advertisers – the advertisers don’t need to search for and negotiate with dozens of providers. In other words, media agencies provide the typical intermediary functions, which are useful in a fragmented market. Their markup is the arbitrage cost: they buy media at price p_b and sell at p_s, the arbitrage cost being a = p_s – p_b.

Creative agencies. Second, the value provided by creative agencies derives from their creative abilities. They know customers and have the skills to create advertising that appeals to a given target audience. They usually charge an hourly rate, c; if the campaign requires x working hours, the creative cost is e = c*x. Consequently, the total cost for the advertiser is T = e + a. We also observe double marginalization, so that e + a > C, where C is the cost that either agency would charge were it to handle both creative and media operations.
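The cost structure above can be sketched in a few lines of Python; all figures below are hypothetical and serve only to illustrate the double-marginalization point.

```python
# Illustrative sketch of the cost terms defined above; all numbers are made up.

def advertiser_cost(p_b, p_s, c, x):
    """Total cost T = e + a for an advertiser using two separate agencies."""
    a = p_s - p_b      # media agency's arbitrage cost
    e = c * x          # creative agency's fee
    return e + a

# Separate agencies: media bought at 80,000, sold at 100,000; 50 hours at 120/h.
T = advertiser_cost(p_b=80_000, p_s=100_000, c=120, x=50)

# A single integrated agency taking one margin could charge less (C < e + a),
# which is the double-marginalization observation made above.
C = 22_000  # hypothetical integrated price
print(T, T > C)  # 26000 True
```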

Transition. Now, let’s consider the ongoing transition that makes this whole question relevant. Namely, the advertising industry is moving into programmatic. Programmatic is a huge threat to intermediation since it aggregates fragmented market players. In practice, this means that advertisers are grouped under demand-side platforms (DSPs) and the media under supply-side platforms (SSPs). How does this impact the scenario? The transition seemingly has an impact on media agencies, but not on creative agencies — “manual” bundling is no longer needed, but the need for creative work remains.

Conclusion. In conclusion, it seems creative agencies are less replaceable, and therefore have a better position in vertical integration.

Limitations. Now, this assumes that advertisers have direct access to programmatic platforms (so that media agencies can in fact be replaced); currently, this is not the standard case. It also assumes that they have in-house competence in programmatic advertising, which is also not the standard case. But in time, both of these conditions are likely to change: either advertisers acquire in-house access and competence, or they outsource the work to creative agencies which, in turn, will develop programmatic capabilities.

Another limitation is that the outcome will depend greatly on each player’s position relative to the client base. Whoever is closer to the client is better equipped to develop the missing capabilities. As commonly acknowledged, customer relationships are the most valuable assets in the advertising business, potentially giving an opportunity to build missing capabilities even when other market players have already acquired them. But based on this “fictional” comparison, we can argue that creative agencies are better off when approaching convergence.

A few thoughts on ad blockers

Anti-ad-blockers are becoming common nowadays.

Introduction. So, I read an article saying that ad blockers are not useful for the users. The argument, and the logic, is conventional: 1) the internet is not really free; 2) publishers need advertisers to subsidize content creation, which in turn is also in the users’ interest, because 3) they don’t have to pay for the content. Without ads, the publishers will 4) either start charging for the content or go out of business. Either way, 5) “free” content will cease to exist. (As a real example, the founder of Xmarks wrote a captivating article about the consequences of free-riding in a startup context. I encourage you to check it out. [1])

Problem of rationality. The aforementioned logic is quite good. But where I disagree with the article is the following argument:

“as soon as users understand the implications of ad blockers [they will] delete them […].”

Based on general knowledge of human behavior, that sounds too much like wishful thinking. In this particular case, I think the dynamics of the tragedy of the commons (Hardin, 1968) [2] are more applicable. We might, in fact, consider “free” content as a type of common (shared) resource. If so, the problem becomes evident: as user_i starts exploiting the free content [3], there is no immediate effect either on the user in question or other users.

There is, however, a minimal impact on the environment (the advertising industry). But because this effect is so small (a few impressions out of millions), it is left undetected. Therefore, it is as if exploitation never took place. This not only gives an incentive for the user_i to continue exploitation, but also signals to other users that ad blocking is quite alright. In consequence, the activity becomes widespread behavior, as we now have witnessed.

Mathematically, this could be explained through a step function.

Figure 1 An example of step functions (Stack Overflow, 2012) [4]

The problem is that the negative effects are not linear; they only become an issue when a certain threshold is met. In other words, it is only when user_n exploits (uses an ad blocker) that the cumulative negative effects amount to a crisis. At that point, we have a sudden change in the environment which could have been prevented if the feedback loop were working and accurately reflecting user behavior.
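This threshold dynamic can be sketched in a few lines; the population size, the 50% threshold, and the revenue function are all assumptions for illustration only.

```python
# A minimal sketch of the step-function dynamic described above: each
# individual ad-block adoption adds a tiny, practically undetectable harm,
# but publisher revenue collapses once cumulative adoption crosses a threshold.

def publisher_revenue(adopters, population, threshold=0.5, base_revenue=1_000_000):
    """Step function: revenue holds until the adoption share hits the threshold."""
    share = adopters / population
    return base_revenue if share < threshold else 0.0

total_users = 1_000_000
print(publisher_revenue(1, total_users))        # one adopter: no visible effect
print(publisher_revenue(499_999, total_users))  # still no visible effect
print(publisher_revenue(500_000, total_users))  # threshold crossed: sudden collapse
```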

Complexities. However, the issue is slightly more complex. As many anecdotal and empirical examples show (boiling frog, slippery slope, last straw, etc.), the feedback loop could only work if it had a predictive property, because each transition from state S_t to S_t+1 does not cause an observable effect large enough to justify a change of behavior. Thus, predicting the outcomes of a particular behavior is required — something humans are poor at, especially at a collective level. Second, the availability of information is not guaranteed: user_i may not be aware of the actions of other users. To solve this problem, a system-level agent with information on the actions of all agents (e.g., ad block users) is required.

Why does ad blocking take place? Indeed, if it’s so harmful, why do people do it? First of all, people may not be aware of the harm. Advertisers should not overestimate users’ rationality or their ability to predict systemic changes; it is not uncommon for systemic problems to be ignored by most people. They simply don’t think about the long-term consequences. But even if they did, and realized that ad blockers ultimately decimate free content, they might still block the ads. Why? Well, for several reasons:

First, 1) the gains from using an ad blocker are immediate and short-term (getting rid of ad nuisance), whereas the gains from not using one are long-term (keeping the “free” content) and accrue largely to others.

Generally speaking, people tend to prefer short-term rewards (instant gratification) over long-term rewards, even if the latter are much higher. That’s why many people buy a lottery ticket every week instead of working hard to realize their dreams. Also, although the long-term benefit of ads does introduce a pay-off for user_i, that pay-off is lower than the “service” he is doing for others, so that reward(u_i) < reward(u_I), where I includes i. Under some circumstances, the psychological effect might be to over-emphasize one’s own immediate benefit over a larger long-term benefit when there are others to share it. In fact, such behavior is rational in the way rationality is usually defined: making decisions based on self-interest.
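This reasoning can be illustrated with a toy discounting calculation; every payoff number, the discount factor, and the delay below are assumptions chosen purely for illustration.

```python
# Hypothetical payoffs: blocking gives an immediate private gain, while not
# blocking sustains a large collective benefit that is delayed and shared.
immediate_gain = 10            # ad nuisance avoided right now by user_i
collective_benefit = 1_000     # long-term value of free content, shared by all
n_users = 1_000                # beneficiaries sharing the collective benefit
delta = 0.9                    # per-period discount factor
delay = 5                      # periods until the collective benefit accrues

# user_i's discounted share of the collective benefit: reward(u_i)
private_share = (delta ** delay) * collective_benefit / n_users
print(private_share)                   # well below the immediate gain of 10
print(immediate_gain > private_share)  # True: blocking is individually rational
```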

Second, 2) users might expect someone else to fix it; free content is taken for granted and the threat to its existence is not taken seriously. This is commonly known as “somebody else’s problem”. Yes, we know that keeping the lights on in the university toilet wastes energy, but let someone else turn them off (this is a real example based on the author’s own observations…). The user_i perceives, perhaps correctly, that his contribution to the outcome is marginally low, and therefore does not see any reason to change behavior. If you think about it, it’s the same reason why some people don’t see voting as worthwhile; what good does one vote do? Paradoxically, it makes all the difference when that logic prevails in a large part of the electorate.

Third, 3) they just might not care. The value of free content might not outweigh the nuisance of ads; user_i might rather go without the content than see ads. Even if this scenario seems a tad unrealistic when viewing a user’s entire media consumption, it might apply to a particular publisher. For example, when publisher_j introduces anti-ad-blockers, the user simply frequents the website of publisher_k instead.

Two drivers are in favor of this development:

  1. Low switching cost – the trouble of going to another site is close to zero,
    so no individual publisher can impose a lock-in (and, following from this proposition, they could do so only by forming a coalition, where publisher_j and publisher_k both introduce anti-ad-blockers).
  2. Race to the bottom – there is an incentive for a publisher to allow ad blockers
    and think of alternative ways to monetize their content. This is commonly known as a “race to the bottom”, meaning that due to heightening competition, supply-side actors willingly decrease their pay-offs even when there is no definitive signal from the demand side (again, a coalition of strict adherence could solve this).

Conclusion. Many of these problems are modeled in game theory and have no definitive solutions. However, there is some hope. We can distinguish between short-term rationality and long-term rationality. If the latter did not exist, anything that requires momentary sacrifice would be left undone. For example, individuals would not get schooling because it is more satisfying to play Pokémon GO than to go to school (for most people). But people do go to school, and they do (sometimes) make sacrifices for the greater good. Such behavior is also driven by socio-psychological phenomena: suppose it were a strict societal norm not to use ad blockers, i.e. their use were not socially approved. The norms and values of a community are strong preventers of undesirable behavior: that is why so many indigenous cultures have been able to thrive under harsh circumstances. But in this particular case (and maybe in the West altogether, where a common value base perhaps no longer exists), it is hard to see the use of ad blockers becoming a “no-no”. If anything, the youth perceive it as positive behavior and take their cue accordingly.

Suggestions. According to the logic of the commons problem, everyone suffers if no mechanism for preventing exploitation is developed. But how to go about it? I have a couple of ideas:

1) It is paramount that publishers acknowledge the problem – many of them still run their advertising operations without really thinking about it. They say, “Sure, it’s a problem,” and then do nothing. In a similar vein, blaming the users is an incorrect response, although it might be sympathetically understood when examining advertising as a social contract. For example, publishers see that users are violating the implicit contract (exposure to ads –> free content) by using ad blockers, whereas users see that publishers are violating the contract by placing too many ads on the website (content > ads). As this example shows, there is no common understanding or definition of the contract — perhaps this is one of the root causes of the problem. People know they are shown ads in exchange for consuming content they don’t have to pay for, but what are the rules of that exchange? How many ads can be placed? What type of ads? Can they be circumvented? Etc.

Second, 2) the motives for ad blocker usage need to be clarified in depth – what are they? From my own experience, I can tell I use ad blockers because they make surfing the Web faster. Many websites are so full of ads that they load slowly – the root cause here would be ad clutter, or a (seeming) willingness to sacrifice user experience for ad money. I’m just one example, though. There may be other motivations as well, such as ads seeming untrustworthy or uninteresting.

Whatever these reasons are, 3) they need to be taken seriously and fixed by going to the root of the problems. Solving the ad blocker problem requires systemic thinking – superficial solutions are not enough. It’s not a question of introducing paywalls or blocking the blockers by technical means; rather, it’s about defining the relationship of publishers, users, and advertisers in a way that each party can accept. In the end, ad blockers belong to a complex set of problems that can be described as “no technology solution problems” [5], or at least technology is only part of the solution here.

References

[1] End of the road for Xmarks. Available at: https://web.archive.org/web/20101001150539/http://blog.xmarks.com/?p=1886

[2] Hardin, G. (1968). The Tragedy of the Commons. Science, New Series, Vol. 162, No. 3859 (Dec. 13, 1968), pp. 1243-1248.

[3] Essentially this is equivalent to resource exploitation, although nominally it seems reverse.

[4] Stack Overflow (2012) Plotting step functions. Available at: http://stackoverflow.com/questions/8988871/plotting-a-step-function-in-mathematica

[5] Garrity, E. (2012). Tragedy of the Commons, Business Growth and the Fundamental Sustainability Problem. Sustainability, 4(10), 2443-2471.

Problems of standard attribution modelling

Attribution modelling is like digital magic.

Introduction

Wow, so I’m reading a great piece by Funk and Nabout (2015) [1]. They outline the main problems of attribution modelling. By “standard”, I refer to the commonly used method of attribution modelling, most commonly known from Google Analytics.

Previously, I’ve addressed this issue in my digital marketing class by saying that the choice of an attribution model is arbitrary, i.e. marketers can freely decide whether it’s better to use e.g. a last-click model or a first-click model. But now I realize this is the wrong approach, given that the impact of each touch-point can in fact be estimated. There is much more depth to attribution modelling than the standard model leads you to believe.

Five problems of standard attribution modelling

So, here are the five problems by Funk and Nabout (2015).

1. Giving touch-points accurate credit

This is the main problem to me. The impact of touch-points on conversion value needs to be weighted, but the weighting is an arbitrary rather than a statistically valid choice (that is, until we consider advanced methods!). Therefore, there is no objective ranking or “betterness” among different attribution models.
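To see how arbitrary the choice is, here is a minimal sketch of how three common models assign the same conversion value to entirely different channels; the path and value are hypothetical.

```python
# Three standard attribution rules applied to one hypothetical conversion path.

def last_click(path, value):
    """All credit to the final touch-point before conversion."""
    return {path[-1]: value}

def first_click(path, value):
    """All credit to the first touch-point."""
    return {path[0]: value}

def linear(path, value):
    """Credit split equally across all touch-points."""
    credit = {}
    for ch in path:
        credit[ch] = credit.get(ch, 0) + value / len(path)
    return credit

path = ["display", "organic", "paid_search"]
print(last_click(path, 100))   # {'paid_search': 100}
print(first_click(path, 100))  # {'display': 100}
print(linear(path, 100))       # one third of the value to each touch-point
```

Each rule is internally consistent, yet they disagree completely about which channel "caused" the conversion, which is exactly the arbitrariness described above.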

2. Disregard for time

The standard attribution model does not consider the time interval between touch-points – it can range anywhere from 30 minutes to 90 days, restricted only by cookie duration. Why does this matter? Because time generally matters in consumer behavior. For example, if there is a long interval between contacts A_t and A_t+1, it may be that the effect of the first contact was not powerful enough to incite a return visit. Of course, one could also argue there is a reason not to consider time, because differences arise from discrepancies in consumers’ natural decision-making processes, which result in unknown intervals; ignoring time would then standardize the intervals. However, if we assume patterns in consumers’ decision-making, as is usually done by stating that “in our product category, the purchase process is short, usually under 30 days”, then addressing time differences could yield a better forecast – say, we should expect a second contact to take place at a certain point in time given our model of consumer behavior.
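One way to address time differences is exponentially decaying credit, where touch-points closer to the conversion weigh more; the sketch below assumes a seven-day half-life, which is an illustrative choice, not a recommendation.

```python
# Time-decay attribution sketch: weight each touch-point by how recently
# it occurred relative to the conversion, using an exponential half-life.

def time_decay_credit(touchpoints, conversion_value, half_life_days=7.0):
    """touchpoints: list of (channel, days_before_conversion) pairs."""
    weights = [(ch, 2 ** (-days / half_life_days)) for ch, days in touchpoints]
    total = sum(w for _, w in weights)
    return {ch: conversion_value * w / total for ch, w in weights}

credits = time_decay_credit(
    [("display", 30), ("email", 7), ("paid_search", 0)], conversion_value=100
)
# paid_search (same-day) gets the largest share, display (30 days out) the smallest
print(credits)
```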

3. Ignoring interaction types

The nature of the touch or interaction should be considered when modeling the customer journey. The standard attribution model assigns conversion value to different channels based on clicks, but the types of interaction across channels might be mixed. For example, one conversion might involve a view on Facebook and a click in AdWords, whereas another might have the reverse. But are views and clicks equally valuable? Most marketers would say not. However, they would also assign some credit to views – at least according to classic advertising theory, visibility has an impact on advertising performance. Therefore, the attribution model should also consider several interaction types and the impact each type has on conversion propensity.
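Interaction types can be folded into the weighting, as sketched below; the view-to-click ratio of 0.1 is purely an assumption and in practice should be estimated from data, e.g. as a view's lift on conversion propensity.

```python
# Sketch of mixing interaction types in attribution: a view is assumed to be
# worth a fraction of a click (the weights below are illustrative only).
VIEW_WEIGHT = 0.1
CLICK_WEIGHT = 1.0

def credit_with_interaction_types(path, value):
    """path: list of (channel, interaction) where interaction is 'view' or 'click'."""
    weights = [(ch, CLICK_WEIGHT if kind == "click" else VIEW_WEIGHT)
               for ch, kind in path]
    total = sum(w for _, w in weights)
    credit = {}
    for ch, w in weights:
        credit[ch] = credit.get(ch, 0) + value * w / total
    return credit

print(credit_with_interaction_types(
    [("facebook", "view"), ("adwords", "click")], value=100
))  # facebook gets ~9.1, adwords ~90.9 of the conversion value
```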

4. Survivorship bias

As Funk and Nabout (2015) note, “the analysis does not compare successful and unsuccessful customer journeys, [but] only looks at the former.” This is essentially a case of survivorship bias – we are unable to compare the touch-points that led to a conversion with those that did not. If we could, we might observe that a certain channel has a higher likelihood of being included in a conversion path [2] than another channel, i.e. its weight should be higher and proportional to its ability to produce lift in the conversion rate. By excluding information on unsuccessful interactions, we risk both Type I and Type II errors – that is, false positives and false negatives.
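The missing comparison can be sketched as a simple lift estimate over converting and non-converting paths; the paths below are made up for illustration.

```python
# Sketch of the comparison the standard model cannot make: estimate each
# channel's lift as the share of converting paths it appears in, divided by
# the share of non-converting paths it appears in. Data is hypothetical.

def channel_lift(converting, non_converting, channel):
    p_conv = sum(channel in p for p in converting) / len(converting)
    p_non = sum(channel in p for p in non_converting) / len(non_converting)
    return p_conv / p_non if p_non else float("inf")

converting = [["email", "search"], ["search"], ["display", "search"]]
non_converting = [["display"], ["display", "email"], ["email", "search"], ["display"]]

print(channel_lift(converting, non_converting, "search"))   # > 1: over-represented in successes
print(channel_lift(converting, non_converting, "display"))  # < 1: over-represented in failures
```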

5. Exclusion of offline data

The standard attribution model does not consider offline interactions. But research shows multi-channel consumer behavior is highly prevalent. The lack of data on these interactions is the major reason for their exclusion, but at the same time it restricts the usefulness of attribution modelling to the e-commerce context. Most companies, therefore, are not getting accurate information from attribution modelling beyond the online environment. And, as I’ve argued in my class, word-of-mouth is not included in the standard model either, which is a major issue for accuracy, especially considering social media. Even if we only want to measure the performance of an advertising channel, social media ads have a distinct social component – they are shared and commented on, which results in additional interactions that should be considered when modeling the customer journey.

Solutions

I’m still finishing the original article, but had to write these few lines because the points I encountered were poignant. I’m sure they will propose solutions next, and I may update this article afterwards. At this point, two solutions readily come to mind: 1) the use of conversion rate (CVR) as an attribution parameter — it’s a global metric and thus escapes survivorship bias; and 2) Universal Analytics, i.e. using methods such as Google’s Measurement Protocol to capture offline interactions. As someone smart said, the solution to a problem leads to a new problem, and that’s the case here as well — there needs to be a universal identifier (“User ID” in Google’s terms) to associate online and offline interactions. In practice, this requires registration.

Conclusion

The criticism applies to standard attribution modelling, e.g. as done in Google Analytics. There might be additional issues not covered in the paper, such as the use of aggregate data — to perform any kind of statistical analysis, click-stream data is a must-have. Also, a relevant question is: how do touch-points influence one another, and how can that influence be modeled? Beyond technicalities, it is important for managers to understand the general limitations of current attribution modelling methods and seek solutions in their own organizations to overcome them.

References

[1] Funk, B., & Abou Nabout, N. (2016). Cross-Channel Real-Time Response Analysis. in O. Busch (Hrsg.), Programmatic Advertising: The Successful Transformation to Automated, Data-Driven Marketing in Real-Time. (S. 141-151). Springer-Verlag.

[2] Conversion path and customer journey are essentially referring to the same thing; perhaps with the distinction that conversion path is typically considered to be digital while customer journey has a multichannel meaning.

Programmatic ads: Fallacy of quality supply

A major fallacy publishers still have is the notion of “quality supply” or “premium inventory”. I’ll explain the idea behind the argument.

Introduction. The fallacy of quality supply lies in publishers assuming the quality of certain placement (say, a certain website) is constant, whereas in reality it varies according to the response which, in turn, is a function of the customer and the ad. Both the customer and the ad are running indices, meaning that they constantly change. The job of a programmatic platform is to match the right ads with right customers in the right placements. This is a dynamic problem, where “quality” of a given placement can be defined at the time of match, not prior to it.

Re-defining quality. The term “quality” should in fact be re-defined as relevance — a high-quality ad is relevant to customers at a given time (of match), and vice versa. In this equation, the ad placement does not hold any inherent value; its value is always determined in a unique match between the customer, the ad, and the placement. It follows that the ad itself needs to be relevant to the customer, irrespective of the placement. It is not known which interaction effect is stronger, ad + customer or placement + customer, but it is commonly assumed that the placement has a moderating effect on the quality of the ad as perceived by the customer.
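A minimal sketch of this idea: the score of a placement is not fixed, but computed per (customer, ad, placement) match. The feature representation and the weights below are assumptions for illustration.

```python
# Relevance-as-quality sketch: score each match dynamically, with the ad's
# content dominating and the placement only moderating (weights are assumed).

def relevance(customer_interests, ad_topics, placement_topics,
              ad_weight=0.7, placement_weight=0.3):
    """Score a (customer, ad, placement) match via topic overlap."""
    def overlap(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return (ad_weight * overlap(customer_interests, ad_topics)
            + placement_weight * overlap(customer_interests, placement_topics))

customer = {"running", "nutrition"}
shoe_ad = {"running", "shoes"}

# The same placement scores differently depending on the (customer, ad) pair:
print(relevance(customer, shoe_ad, {"sports"}))   # relevant ad, neutral placement
print(relevance(customer, {"cars"}, {"sports"}))  # same placement, irrelevant ad
```

Under this framing there is no a priori "premium" placement: the second call shows the very placement that scored well with a relevant ad contributing nothing when the ad is irrelevant to the customer.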

The value of ad space is dynamic. The idea of publishers defining the quality distribution a priori is old-fashioned. It stems from the idea that publishers should rank and define the value of their advertising space. That is not compatible with platform logic, in which any particular placement can be of high or low quality (or anywhere between the extremes). In fact, the same placement can simultaneously be both high- and low-quality, because its value depends on the advertiser and the customer, which, as stated, fluctuate.

Customers care about ad content. To understand this point, quality should be understood from the point of view of the customer. It can plausibly be argued that customers are interested in ads (if at all) because of their content, not their context. If an ad offers a promotion on item x which I like, I’m interested. This interest occurs whether the ad was placed on website A or website B. Thus, it is not logical to assume that the placement itself has a substantial impact on ad performance.

Conclusion. To sum up, there is no value in an ad placement per se; value is realized if (and only if) relevance is met. Under this argument, the notion of “premium ad space” is inaccurate and in fact detrimental in its implications for the development of the programmatic ad industry. If ad space is priced according to inaccurate notions, it is not likely to match its market value and, given that advertisers have a choice, they will not continue buying such inventory. Higher relevance leads to higher performance, which leads to advertiser satisfaction and a higher probability of repurchase of that media. Any predetermined notion of “quality supply” has no place in this chain.

Recommendations. Instead of maintaining the false dichotomy of “premium” and “remnant” inventory, publishers should strive to maximize relevance in match-making auctions by any means necessary. For this purpose, they should demand higher quality and variety of ads from advertisers. Successful match-making depends on quality and variety on both sides of the two-sided market. Generally, when prices are set according to supply and demand, more economic activity takes place – there is no reason to expect otherwise in the advertising market. Publishers should therefore stop labeling their inventory as “quality” or “premium” and instead let the market decide whether it is so. Indeed, in programmatic advertising the so-called remnant inventory can outperform what publishers would initially perceive as superior placements.