
How to Win the Google Online Marketing Challenge

GOMCHA European Winners 2016

1. Introduction

In the past couple of weeks, a few people have approached me asking for tips on how to do well in the Google Online Marketing Challenge. So, I thought I might as well gather some of my experiences in a blog post, and share them with everybody.

A little bit of background: I’ve been the professor of two winning teams (GOMC Europe 2013 & GOMC Europe 2016). Although most of the credit obviously goes to the students who do all the hard work (the students at Turku School of Economics simply rock!), guidance plays an important role, since most commonly the students have no prior experience in SEM/PPC and need to be taught quickly where to focus.

2. Advice to teachers

The target audience for this post is anyone participating in the challenge. For the teachers, I have one important piece of advice:

Learn the system if you’re teaching it. There’s no substitute for real experience. The students are likely to have a million questions, and you need to give better answers than “google it.” Personally, I was fortunate enough to have done SEM for many years before starting to teach it. Without that experience, it would have been impossible to guide the teams to do well. However, if you don’t have the same advantage but you want your students to do well, turn to the industry. Many SEM companies out there are interested in mentoring/sparring the students, because that way they can also spot talented individuals for future hiring (win-win, right?).

3. How to win GOMCHA?

3.1 Overview

That said, here are my TOP3 “critical success factors” for winning the challenge:

  1. Choose your case wisely
  2. Focus on Quality Score
  3. Show impact

That’s it! Follow these principles and you will do well. Now, that being said, behind each of them is a whole layer of complexity 🙂 Let’s explore each point.

3.2 Choosing the AdWords case

First, one of the earliest questions students are going to ask is how to choose the company/organization they’re doing the campaign for. And that’s also one of the most important ones. How I do it: I let each team choose and find their own case; however, I tell them what is a good case and what is not. I wrote a separate post about choosing a good AdWords case. Read the post, and internalize the information.

Update: one more point to the linked post – preferably choose a case that already has some brand searches. This helps you achieve a higher overall CTR and a lower overall CPC.

The choice of a good case is crucial: you can be the best optimizer in the world, but with a bad case you will still fail. One example was a team that chose a coffee company – not a good case, because it had a narrow product range and relatively few searches. For some reason the team, which consisted of several students with *real experience* in AdWords, wanted to choose it. Not surprisingly, they struggled for the above reasons and were easily overshadowed by teams with no experience but a good case. Hence the formula: success = case × skills.

By the way, that is one of the most important lessons for any marketing student in general: Always choose your case wisely, and never market something whose potential you don’t believe in.

3.3 Choosing the metrics

Another common question relates to the metrics: What should we optimize for? While there are many important metrics, including CTR and CPC, I would say one is above the others. That is clearly the Quality Score, which seems to be very influential in Google’s ranking algorithm for the competition.

Note that I don’t have any insider information on this, but I say *seems* for this reason: In 2015, I instructed the teams to focus on a wide range of metrics, including CTR, CPC, and QS. What came out were several great teams that, in my opinion, had better overall metrics than many of that year’s finalists (none of my teams were finalists). Last year, however, I switched strategy and instructed the teams to focus heavily on Quality Score, even at the cost of other metrics. For example, I told the team that ended up winning in 2016: “your goal is 10 x 10”, meaning they should get 10 keywords with QS 10. They ended up getting 12, and the rest is history 🙂

3.4 Why is Quality Score that important?

In my view, it’s because all optimization efforts basically culminate in that metric. To maximize your QS, you essentially need to do all the right things in terms of optimization, including account structure, ad creation, and landing pages. To get these things nailed, refer to this post. And google for more tips: blogs such as PPC Hero, Wordstream, and Certified Knowledge have plenty of subject matter to learn from. I have also compiled an extensive list of digital marketing blogs that you can utilize.

However, do note that all third-party information is to some degree unreliable. Use it with caution, combined with your first-hand experiments (i.e., do what the numbers show to work best). The most reliable source of information is of course Google, because they know the system from the inside, which none of the experts (including myself) do. So, use Google’s AdWords help as your main reference.

3.5 Show real impact

The last step, since many teams can score high on metrics, is to show real-life impact. This is pretty much the only way to differentiate when all the finalist teams are good. First of all, meticulously follow Google’s guidelines for the reports to highlight your greatness. As a member of the academic panel, I know some entries have failed simply because they did not follow the technical guidelines, so make sure your output is in line with them. However, that is not the main point; the main point is to show how you brought real results to your case organization. Although not part of the official ranking, if you look at the past winners, most of them gained a lot of conversions. Knowing that, you can do the math. The winning reports from earlier years can be found on the challenge website.

4. List of practical tips

Finally, some practical tips (the list is in no particular order, and not comprehensive at all):

  1. Optimize every day, as if you were obsessed with AdWords
  2. Don’t be afraid to ask the experts for advice; take all the help you can get to learn faster
  3. Prefer using ‘exact match’ keywords
  4. Never mix display campaigns with search campaigns (i.e., avoid ‘display select’)
  5. Avoid GDN altogether; you can experiment with it using a little budget, but focus 99% on search campaigns
  6. When possible, direct the keywords to a specific landing page (not homepage)
  7. Create ad groups based on semantic similarity of keywords (if you don’t know what this means, find out)
  8. Don’t stress about the initial bid price; set it at some level based on the Keyword Planner estimates and change according to results
  9. Or, alternatively, set it as high as possible to get a good Avg. Pos. and therefore improved CTR, and improved QS
  10. Set the bid price manually per keyword
  11. Use GA to report after-click performance (good for campaign report)
  12. Use as many AdWords features as possible (good for campaign report)

Finally, read Google’s materials, including the challenge website. Follow their advice meticulously, and read read read about search-engine advertising from digital marketing blogs and Google’s website.

Good luck!! 🙂

CAVEAT: I’m a member at the Google Online Marketing Challenge’s academic panel. These are my personal opinions and don’t necessarily represent the official panel views. The current judging criteria for the competition can be found at:

UPDATE (May 2017): Elina Ojala (next to me in the picture above) and I had a Skype call with students of Lappeenranta University of Technology (LUT). Elina pointed out some critical things: it’s important 1) to be motivated, 2) to have a really good team without free riding, 3) to share tasks efficiently (e.g., analytics, copywriting; based on individual interests), and 4) to go through extra effort (e.g., changing the landing pages, using GA). I added that for teachers it’s important to motivate the students: aim HIGH!! And to stress that there is zero chance of winning if the team doesn’t work every day (= linear relationship between hours worked and performance).


The black sheep problem in machine learning

Just a picture of a black sheep.

Introduction. Hal Daumé III wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:

The “black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, “black sheep” outnumbers “white sheep” about 25:1 (many “black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example “white cloud” versus “red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named “Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.

Thereafter, Hal accurately points out:

“co-occurrence frequencies of words definitely do not reflect co-occurrence frequencies of things in the real world”

But the mistake made by Hal is to assume language describes objective reality (“the real world”). Instead, I would argue that it describes social reality (“the social world”).
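Hal’s ratios come from simple corpus counts, which are easy to reproduce. Here is a minimal sketch; the toy corpus is made up, and only the counting logic is the point:

```python
import re
from collections import Counter

def phrase_counts(text, phrase_a, phrase_b):
    """Count two two-word phrases in a corpus -- the raw material behind
    ratios like 'black sheep' outnumbering 'white sheep' 25:1."""
    tokens = re.findall(r"[a-z]+", text.lower())
    bigrams = Counter(zip(tokens, tokens[1:]))
    return bigrams[tuple(phrase_a.split())], bigrams[tuple(phrase_b.split())]

corpus = ("She is the black sheep of the family. "
          "He was called a black sheep too. "
          "A white sheep grazed in the field.")
print(phrase_counts(corpus, "black sheep", "white sheep"))  # (2, 1)
```

Run over a real corpus, such counts measure how often a concept is *talked about*, which is exactly the distinction between social and objective reality made below.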

Black sheep in social reality. The higher occurrence of ‘black sheep’ tells us that in social reality there is a concept called the ‘black sheep’, which is more common than the concept of a white (or any other color) sheep. People use that concept not to describe sheep, but as an abstract concept that in fact describes other people (“she is the black sheep of the family”). Then we can ask: Why is that? In what contexts is the concept used? And we can try to teach the machine its proper use through associations of that concept with other contexts (much like we teach kids when saying something is appropriate and when it is not). As a result, the machine may create a semantic web of abstract concepts which, if not leading it to understand them, at least helps guide its usage of them.

We, the humans. That’s assuming we want the machine to get closer to the meaning of the word in social reality. But we don’t necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more useful to understand that language is a reflection of social reality. This means we, the humans, can understand human societies better through its analysis. Rather than trying to teach machines to impute data to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases take place. And then we should focus on fixing them. Most likely, technology plays only a minor role in that.

Conclusion. The “correction of biases” is equivalent to burying your head in the sand: even if they magically disappeared from our models, they would still remain in the social reality, and through the connection of social reality and objective reality, echo in the everyday lives of people.

How to teach machines common sense? Solutions for ambiguity problem in artificial intelligence


The ambiguity problem illustrated:

User: “Siri, call me an ambulance!”

Siri: “Okay, I will call you ‘an ambulance’.”

You’ll never reach the hospital, and end up bleeding to death.


Two potential solutions:

A. machine builds general knowledge (“common sense”)

B. machine identifies ambiguity & asks for clarification from humans

The whole “common sense” problem can be solved by introducing human feedback into the system. We need to tell the machine what is what, just as we do with a child. It is iterative learning, in which trial and error take place.

But, in fact, A. and B. converge by doing so. Which is fine, and ultimately needed.

Contextual awareness

To determine the proper solution to an ambiguous situation, the machine needs contextual awareness; this can be achieved by storing contextual information from each ambiguous situation and being told “why” a particular piece of information resolves the ambiguity. It’s not enough to say “you’re wrong”; there needs to be an explicit association to a reason (a concept, a variable). Equally, it’s not enough to say “you’re right”; again, the same association is needed.

The process:

1) try something

2) get told it’s not right, and why (linking to contextual information)

3) try something else, corresponding to why

4) get rewarded, if it’s right.

The problem is, currently machines are being trained by data, not by human feedback.
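To make the four-step process concrete, here is a minimal sketch of such a feedback loop. The teacher oracle, the action names, and the reason string are all invented for illustration; the key point is that the feedback carries a *why*, which the learner stores as an association:

```python
# Toy version of the Siri ambulance example: feedback includes a reason.
def teacher(action, context):
    if context == "emergency" and action == "summon_ambulance":
        return True, None                                   # 4) reward
    return False, "emergency calls for action, not naming"  # 2) wrong + why

def learn_with_feedback(candidates, context):
    associations = {}                  # reason -> action that was rejected
    for action in candidates:          # 1) try something
        ok, reason = teacher(action, context)
        if ok:
            return action, associations
        associations[reason] = action  # 3) the next try is informed by the why
    return None, associations

action, learned = learn_with_feedback(
    ["address_user_as_ambulance", "summon_ambulance"], "emergency")
print(action)  # summon_ambulance
```

The stored associations are the seed of the “semantic web of concepts” the machine would need for contextual awareness.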

New thinking on teaching the machine

So we would need to build machine-training systems that enable training by direct human feedback, i.e. a new way to teach and communicate with the machine. It’s not a trivial thing, since the whole machine-learning paradigm is based on data. From data and probabilities, we would need to move into associations and concepts. A new methodology is needed. Potentially, individuals could train their own AIs like pets (think Tamagotchi), or we could use large numbers of crowd workers who would explain to the machine why things are the way they are (i.e., create associations). A specific type of markup (= communication) would probably also be needed.

Through mimicking human learning we can teach the machine common sense. This is probably the only way: since common sense does not exist beyond human cognition, it can only be learnt from humans. An argument can be made that this is like going back in time, to the era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue rule-based learning is much closer to human learning than the current probability-based one, and if we want to teach common sense, we therefore need to adopt the human way.

Conclusion: machines need education

Machine learning may be up to the task, but machine training certainly is not. The current machine-learning paradigm is data-driven, whereas we should look into concept-driven training approaches.

Rule-based AdWords bidding: Hazardous loops

1. Introduction

In rule-based bidding, you sometimes want step-back rules: you first adjust your bid based on a given condition, and then adjust it back after the condition has passed.

An example: a typical use case would be to decrease bids for the weekend, and raise them back to the normal level for weekdays.

However, the step-back rate is not defined the way most people would think. I’ll tell you how.

2. Step-back bidding

For step-back bidding you need two rules: one to change the bid (increase/decrease) and another one to do the opposite (decrease/increase). The values applied by these rules must cancel one another.

So, if your first rule raises the bid from $1 to $2, you want the second rule to drop it back to $1.

Call these

x = raise by percentage

y = lower by percentage

Where most people get confused is in assuming x = y, i.e. using the same value for both rules.

Example 1:

x = raise by 15%

y = lower by 15%

That should get us back to our original bid, right? Wrong.

If you do the math (1 × 1.15 × 0.85), you get 0.9775, whereas you want 1 (to get back to the baseline).

The more you iterate with the wrong step-back value, the farther from the baseline you end up. To illustrate, see the following simulation, where the loop is applied weekly for three months (12 weeks × 2 = 24 data points).

Figure 1 Bidding loop

As you can see, the wrong method takes you farther and farther from the correct pattern as time goes by. For a weekly rule the difference might be manageable, especially if the rule’s incremental change is small, but imagine running the rule daily or each time you bid (intra-day).
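The drift is easy to verify numerically. A small sketch of the simulation behind Figure 1, using the +15% raise from Example 1 (the correct step-back factor used here is derived in the next section):

```python
# Simulate the weekly loop: a naive -15% step-back versus the correct
# step-back of 1/1.15, applied for 12 weeks (24 bid changes in total).
x = 0.15                       # first rule: raise the bid by 15%
bid_wrong = bid_right = 1.0    # baseline bid of $1

for week in range(12):
    bid_wrong *= (1 + x)       # rule 1: raise
    bid_wrong *= (1 - x)       # rule 2 (naive): lower by the same 15%
    bid_right *= (1 + x)
    bid_right *= 1 / (1 + x)   # rule 2 (correct): divide the raise out

print(round(bid_wrong, 3))     # 0.761 -- drifted ~24% below baseline
print(round(bid_right, 3))     # 1.0   -- back on baseline every week
```

Each naive cycle multiplies the bid by 1.15 × 0.85 = 0.9775, so after 12 cycles the bid has decayed to 0.9775¹² ≈ 0.76 of the baseline.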

3. Solution

So, how to get to 1?

It’s very simple, really. Consider

  • B = baseline value (your original bid)
  • x = the value of the first rule (e.g., raise bid by 15% –> 0.15)
  • y = the value of the second rule (dependent on the first rule)

You want to solve y from

B(1+x) · y = B

The baseline B cancels out, so

y = 1 / (1+x)

For the value in Example 1,

y = 1 / (1+0.15) ≈ 0.8696

that is, the second rule should lower the bid by about 13.04%, not by 15%. Multiplying the increased value by y takes you back to the baseline:

1.15 × (1 / 1.15) = 1
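As a reusable helper, the correct step-back percentage can be computed for any raise; a minimal sketch:

```python
def step_back(x):
    """Fraction to lower by so that a raise of x returns the bid
    exactly to the baseline: 1 - 1/(1+x)."""
    return 1 - 1 / (1 + x)

print(round(step_back(0.15), 4))  # 0.1304 -> lower by ~13.04%, not 15%
```

Plug the result into your second rule, and raise × step-back cancels out exactly.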


Remember your elementary mathematics when applying AdWords bidding rules!

Search engine optimization from a journalist’s perspective


The media depends on advertising revenue. It is a constant subject of debate how much journalists should write stories that earn clicks and impressions versus stories of high societal importance. The two do not always go hand in hand.

The role of social media and search engines in a journalist’s work

In practice, journalists have to consider how appealing their stories are on social media. This matters even if one only wants to write about societally important topics, because capturing attention amid competing content is the only way to get one’s message through. For social media, this means paying attention to 1) crafting a compelling headline, 2) choosing a compelling preview image, and 3) editing the Open Graph metadata (which determines how the link appears on social media).

In addition to social media, journalists must take search engine optimization into account, since alongside social media, search engines are typically a major source of traffic. The better an article is optimized, the more likely it is to rank high in Google’s results for the important keywords.

What does a journalist need to know about search engine optimization?

When writing stories, a journalist should consider the following from a search engine’s point of view:

  1. Keywords – everything starts from identifying the right keywords the article should be found with. Keyword research tools, such as Google’s Keyword Planner, are useful here.
  2. Headline and subheadings – the chosen keywords should appear in the story’s headline and subheadings. Subheadings (h2) are important because they give the search engine an understandable structure and support users’ natural, scanning-based way of reading online.
  3. Links – the story should link to other sources with proper anchor texts. Not “for more information on supplements, click here”, but “Helsingin Sanomat, for example, has written several stories about supplements”.
  4. Body text – paragraphs should be short, use clearly readable language, and contain a suitable number of the targeted keywords. “Suitable” means the keywords appear a natural-feeling number of times – there must not be too much repetition, because Google may interpret it as an attempt at manipulation.

Above all, the article should be both pleasant for the reader and easy for the search engine to understand. Combine these two, and the basics of search engine optimization are in order.

Affinity analysis in political social media marketing – the missing link

Introduction. Hm… I’ve figured out how to execute a successful political marketing campaign on social media [1], but one link is still missing: applying affinity analysis (cf. market basket analysis).

Discounting conversions. Now, you are supposed to measure “conversions” by some proxy – e.g., time spent on site, number of pages visited, email subscription. Determining which measurable action is the best proxy for the likelihood of voting is a crucial sub-problem, which you can approach with several tactics. For example, you can use the action closest to the final conversion (the vote), i.e. a micro-conversion. This requires an understanding of the sequence of actions leading to the final conversion. You could also use a relative cut-off point, e.g. the nth percentile with the highest degree of engagement is considered converted.
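The relative cut-off tactic can be sketched in a few lines; the engagement scores below are made up, and the top decile is used as the conversion threshold:

```python
# Label the top 10% of visitors by engagement as "converted".
scores = [12, 5, 48, 33, 7, 21, 90, 3, 15, 60]   # e.g., seconds on site
cutoff = sorted(scores)[int(0.9 * len(scores))]  # 90th-percentile threshold
converted = [s >= cutoff for s in scores]
print(cutoff, sum(converted))  # 90 1
```

The same pattern works for any percentile or any engagement metric; only the `scores` list and the `0.9` change.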

Anyhow, this is very important because once you have secured a vote, you don’t want to waste your marketing budget by showing ads to people who already have decided to vote for your candidate. Otherwise, you risk “preaching to the choir”. Instead, you want to convert as many uncertain voters to voters as possible, by using different persuasion tactics.

Affinity analysis. Affinity analysis can be used to accomplish this. In ecommerce, you would use it as the basis of a recommendation engine for cross-selling or up-selling (“customers who bought this item also bought…” à la Amazon). First you determine which sets of products are most popular, and then show those combinations to buyers interested in any item belonging to the set.

In political marketing, affinity analysis means inferring that because a voter is interested in topic A, he is likely also interested in topic B. Therefore, we show him information on topic B, given our extant knowledge of his interests, in order to increase the likelihood of conversion. This is a form of associative learning.

Operationalization. But operationalizing this is where I’m still in doubt. One solution could be building an association matrix based on website behavior, and then form corresponding retargeting audiences (e.g., website custom audiences on Facebook). The following picture illustrates the idea.

Figure 1 Example of affinity analysis (1=Visited page, 0=Did not visit page)

For example, we can see that themes C&D and A&F commonly occur together, i.e. people visit those sub-pages in the campaign site. You can validate this by calculating correlations between all pairs. When you set your data in binary format (0/1), you can use Pearson correlation for the calculations.
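A minimal sketch of that calculation, using a hand-rolled Pearson correlation over made-up 0/1 visit vectors in the spirit of Figure 1:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

visits = {                     # theme sub-page -> 0/1 visit vector, six users
    "A": [1, 0, 1, 0, 1, 0],
    "C": [1, 1, 0, 0, 1, 1],
    "D": [1, 1, 0, 0, 1, 1],   # identical pattern to C
    "F": [1, 0, 1, 0, 1, 0],   # identical pattern to A
}

print(pearson(visits["C"], visits["D"]))  # 1.0 -> strong affinity
print(pearson(visits["A"], visits["C"]))  # 0.0 -> no affinity
```

Computing this for every theme pair gives the association matrix from which the retargeting audiences are formed.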

Facebook targeting. Knowing this information, we can build target audiences on Facebook, e.g. “Visited /Theme_A; NOT /Theme_F; NOT /confirmation”, where confirmation indicates conversion. Then we would show ads on Theme F to that particular audience. In practice, we could facilitate the process by first identifying the most popular themes and then finding the associated themes. Once a user has been exposed to a given theme and did not convert, he is exposed to another theme (the one with the highest association score). The process continues until the themes run out or the user converts, whichever comes first. Applying the earlier logic of determining a proxy for conversion, visiting all theme sub-pages can also be used as a measure of conversion.
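The exposure process described above can be sketched as a simple loop; the affinity scores and the converted() check below are hypothetical placeholders:

```python
# Show the unseen theme with the highest association to any seen theme,
# until the user converts or the themes run out.
def exposure_sequence(start, affinity, converted):
    seen, sequence = {start}, []
    while not converted(seen):
        candidates = {t: s for (a, t), s in affinity.items()
                      if a in seen and t not in seen}
        if not candidates:
            break                    # themes ran out before conversion
        nxt = max(candidates, key=candidates.get)
        sequence.append(nxt)         # show ads on this theme next
        seen.add(nxt)
    return sequence

affinity = {("A", "F"): 0.9, ("A", "C"): 0.4, ("C", "D"): 0.8}
print(exposure_sequence("A", affinity, lambda seen: False))  # ['F', 'C', 'D']
```

In a live setup the `converted` check would query the /confirmation audience, and each step would translate into a Facebook audience definition like the one quoted above.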

Finally, it is possible to use more advanced methods of associative learning. That is, we could determine that {Theme A, Theme F} => {Theme C}, so that interest in themes A and F predicts interest in theme C. However, it is more appropriate to predict conversion rather than interest in other themes, because ultimately we’re interested in persuading more voters.


[1] Posts in Finnish:

Total remarketing – the concept

Here’s a definition:

Total remarketing is remarketing in all possible channels with all possible list combinations.


  • Programmatic display networks (e.g., Adroll)
  • Google (GDN, RLSA)
  • Facebook (Website Custom Audience)
  • Facebook (Video viewers / Engaged with ads)
  • etc.

How to apply:

  1. Test 2–3 different value propositions per group
  2. Prefer up-selling and cross-selling over discounts (the goal is to increase AOV, not reduce it; e.g. you can include a $20 gift voucher when the basket size exceeds $100)
  3. Configure well: exclude those who already bought; use the information you have to improve remarketing focus (e.g. time on site, products or categories visited – the same remarketing for all groups is like the same marketing for all groups)
  4. Consider automation options (dynamic retargeting; behavior-based campaign suggestions for the target)

The stuck-model problem in machine learning

Even a machine can sometimes freeze.

A machine learns like a human: from empirical observations (= data).

For this reason, just as it is hard for a human to unlearn bad habits and attitudes (prejudices, stereotypes), it is hard for a machine to quickly unlearn a faulty interpretation.

The question is not one of unlearning, which in many cases is probably impossible, but of learning anew, so that the old memory structures (= the model’s features) are efficiently replaced with new ones. Efficiently, because the longer the old, invalid models remain in use, the more damage machine decision-making has time to do. The problem is accentuated in a large-scale decision-making system, where the machine may be responsible for thousands or even millions of decisions within a short period of time.

An example: a machine has learned to diagnose disease X based on symptoms {x}. New research then emerges according to which disease X is associated with symptoms {y}, which are close to symptoms {x} but not identical. It takes the machine a long time to learn the new association if it has to discover the connections between symptoms and diseases by itself while forgetting the old models.

How can this process be sped up? That is, how can we keep the advantages of machine learning (= finding the right features, e.g. symptom combinations, faster than a human) while still letting a human correct the machine’s learned model in a supervised way, based on better knowledge?

Technically, the problem could be framed in terms of a bandit algorithm: if the algorithm carries out both exploration and exploitation, one could attack the problem by constraining the search space. One could also feed the machine enough evidence for it to learn the new relationship quickly – that is, if the machine has not found the same result as a given scientific study, the study’s data could be used to train the machine so heavily (even overweighting it, if it would otherwise drown in the rest of the data) that the classification model corrects itself.
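The overweighting idea can be illustrated with a toy sketch. The “model” below is just a class centroid over symptom vectors, and all the data is invented; the point is how replicating the new study’s examples shifts the learned association:

```python
# Symptom vectors: [symptom_1, symptom_2, symptom_3] per diagnosed case.
def centroid(rows):
    """Per-symptom mean over the cases -- a stand-in for a learned model."""
    return [sum(col) / len(rows) for col in zip(*rows)]

old_cases = [[1, 1, 0]] * 50    # historical data: disease X with symptoms {x}
study_cases = [[0, 1, 1]] * 5   # new study: disease X with symptoms {y}

plain = centroid(old_cases + study_cases)
boosted = centroid(old_cases + study_cases * 20)  # overweight the study 20x

print([round(v, 2) for v in plain])    # [0.91, 1.0, 0.09] -- stuck on {x}
print([round(v, 2) for v in boosted])  # [0.33, 1.0, 0.67] -- shifted to {y}
```

In a real pipeline the same effect is usually achieved with per-sample weights rather than literal replication, but the mechanism is identical.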

In 2016, Facebook overtook Google in ads. Here’s why.


2016 was the first year I thought Facebook would end up beating Google in the ad race, despite the fact that Google still dominates in revenue ($67Bn vs. $17Bn in 2015). I’ll explain why.

First, consider that Google’s growth is restricted by three things:

  1. natural demand,
  2. keyword volumes, and
  3. the approach of a perfect market.

More demand than supply

First, at any given time there is a limited number of people interested in a product/service. The interest can be purchase intent or just general interest, but either way it translates into searches. Each search is an impression that Google can sell to advertisers through its AdWords bidding. The major problem is this: even when I’d like to spend more money on AdWords, I cannot. There is simply not enough search volume to satisfy my budget (in many cases there is, but in highly targeted and profitable campaigns there often isn’t). So I will spend the excess budget elsewhere, where the profitable ad inventory is not limited (that is, on Facebook at the moment).

Limited growth

According to estimates, search volume is growing by 10–15% annually [1]. Yet Google’s revenue is expected to grow by as much as 26% [2]. Over the years, Google’s growth rate in terms of search volume has substantially decreased, although this is perceived as a natural phenomenon (after a trillion searches it’s hard to keep growing double digits). In any case, these dynamics play out in the search auction – when the volumes don’t grow much and new advertisers keep entering the ad auction, there is more competition over the same searches. In other words, supply stays stable but demand increases, resulting in more intense bid wars.

Approaching perfect market

For a long time now, I’ve budgeted a +15% annual increase for AdWords, and last year that was hard to maintain. Google is still a profitable channel, but the advertisers’ surplus is decreasing year by year, incentivizing them to look for alternative channels. While Google is restrained by its natural search volumes, Facebook’s ad inventory (= impressions) is practically limitless. The closer AdWords gets to a perfect market (= no economic rents), the less attractive it is for savvy marketers. Facebook is less exploited, and still allows rents.

What will Google do?

Finally, I don’t like the Alphabet business. From the beginning it signaled to investors that Google is in the “whatever comes to mind” business instead of maintaining a strategic focus on search. Most likely Alphabet ends up draining resources from the mother company, producing losses and diverting human capital from the online ads business (which is where their money comes from). In contrast, Facebook is very focused on social: it buys off competitors and improves fast. That said, I have to recognize that Google’s advertising system is still much better than Facebook’s, and in fact still the best in the world. But the momentum seems to be shifting to Facebook’s side.


Facebook’s maximum number of impressions (= ad inventory) is much higher than Google’s, because Google is limited by natural demand and Facebook is not. In Google’s marketplace, advertiser demand always exceeds the supply of searches, which is why advertisers want to spend more than Google enables. These factors, combined with Facebook’s continuously increasing ability to match interested people with the right type of ads, make Facebook’s revenue potential much bigger than Google’s.

From the advertiser’s perspective, Facebook and Google both are and are not competitors. They compete for ad revenue, but they are not competitors in the online channel mix. Because Google is for demand capture and Facebook for demand creation, most marketers want to include both in their channel mix. This means Google’s share of online ad revenue might decrease, but a rational online advertiser will not drop it, so it will remain a (less important) channel into the foreseeable future.




Buying and selling complement bundles: When individual selling maximizes profit


When we were young, my brother and I used to buy and sell game consoles on the local eBay equivalent and on various gamer discussion forums (Konsolifin BBS, for example). We didn’t have much money, so this was a great way to earn some cash – plus it taught us some useful business lessons over the years.

What we would often do was buy a bundle (console + games), break it apart, and sell the pieces individually. At the time we didn’t know anything about economics, but intuitively it felt like the right thing to do. Indeed, we would always make money with that strategy, as we knew the market prices (or their range) of each individual item.

Looking back, I can now try and explain with economic terms why this was a successful strategy. In other words, why individual selling of items in a complement bundle is a winning strategy.

Why does individual selling provide a better profit than the selling of a bundle?

Let’s first define the concepts.

  • individual selling = buy complement bundle, break it apart and sell individual pieces
  • a complement bundle = a central unit and its complements (e.g., a game console and games)

Briefly, it is because the tastes of the market are randomly distributed and do not align with the exact contents of the bundle. It follows that the exact set of complements does not maximize any individual’s utility, so buyers will bid accordingly (e.g., “I like two of those five games, but not the other three, so I don’t put much value on them”) and the market price of the bundle will settle below the full value of its individual parts.

In contrast, by breaking the bundle apart and selling the pieces individually, each complement can be appraised at full value (“I like that game, so I’ll pay its real value”). In other words, the seller needs to find, for each piece, a buyer who appreciates that piece at its full value (= has a preference for it).

The intuition

Tastes and preferences differ, which is reflected in individuals’ utility functions and therefore in their willingness to pay. Selling a bundle is a compromise from the perspective of the seller – he compromises on his full price, because the buyer is willing to pay only according to his preferences (utility function), which do not completely match the contents of the bundle.
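The compromise can be made concrete with a toy calculation; the item values and buyer tastes below are invented:

```python
# Full market value of each item in the bundle.
items = {"console": 50, "game1": 15, "game2": 15, "game3": 10}

# Each buyer values only the items matching their tastes.
buyers = {
    "anna":  {"console", "game1"},
    "ben":   {"console", "game2"},
    "carol": {"game3"},
}

def value_to(buyer, basket):
    return sum(v for item, v in items.items()
               if item in basket and item in buyers[buyer])

# Selling the whole bundle: it fetches only the best single bid.
bundle_price = max(value_to(b, items) for b in buyers)

# Selling individually: each item finds the buyer who values it fully.
individual_total = sum(max(value_to(b, {item}) for b in buyers)
                       for item in items)

print(bundle_price, individual_total)  # 65 90
```

No single buyer values the whole bundle at 90, so the bundle clears at 65, while selling piece by piece matches every item to a buyer who pays its full value.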


There are two exceptions I can think of:

1) Highly valued complements (or homogeneous tastes)

Say all the complements are of high value in the market (e.g., popular hit games). Then a large portion of the market assigns full value to them, and the bundle’s price settles close or equal to the sum of the individual full prices. Similarly, if all the buyers value the complements in a similar way, i.e. their tastes are homogeneous, the randomness required for individual selling to perform does not exist.

2) Information asymmetry

Sometimes you can get a higher price by selling a bundle than by selling the individual pieces. We would use this strategy when the complements had very little value to an “expert”. A less experienced buyer might see a game console + 5 games as an attractive package; the 5 games, however, had very little value in the market, so it made sense to include them in the bundle to attract less informed buyers. In other words, benefiting from information asymmetries.

Finally, the buyer of a complement bundle needs to be aware of the market price (or its range) of each item. Otherwise, he might end up paying more than the sum of the individual items’ values.


Finding bundles and selling the pieces individually is a great way for young people to practice business. Luckily, there are always sellers in the market who are not looking to optimize their asking price, but appreciate the speed and comfort associated with selling bundles (i.e., dealing with one buyer). The actors with more time and less sensitivity to comfort can then take advantage of that condition to make some degree of profit.

EDIT: My friend Zeeshan pointed out that a business may actually prefer bundling even when the price is lower than in individual selling, if it assigns a transaction cost (search, bargaining) to individual selling and the sum of the transaction costs of selling the individual items exceeds the sum of the differences between the full prices and the bundle price of the complements. (Sounds complicated, but it means you’d spend too much time selling each item relative to the profit.) For us as kids this didn’t matter since we had plenty of time, but for businesses the cost of selling does matter.