

A Quick Note on Bidding Theory of Online Ad Auctions

Introduction

This is a simple post about some commonly known features of online ad auctions.

The generalized second-price auction (GSP) is a mechanism in which the winning advertiser pays marginally more than the bid of the advertiser ranked below him. It encourages bidders to place a truthful bid, i.e. one where the price level is such that marginal returns equal marginal cost.

Why is this important?

Simply because:

truthful bid = no incentive to shade your bid downward

In other words, if you knew the bidder behind you was bidding, say, 0.20 € while you were bidding 0.35 €, under a standard (first-price) auction you’d be tempted to lower your bid to 0.21 € and still beat the next advertiser.

In practice you wouldn’t directly know this because the bids are sealed; however, advertisers could programmatically try to find out other bids. Under GSP, manually lowering bids to marginally beat your competition is unnecessary. It’s therefore a “fair” and automatic pricing system.
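To make the mechanics concrete, below is a minimal Python sketch of GSP pricing. The bids, the single-slot-per-bidder ranking, and the 0.01 € increment are made-up assumptions; the real AdWords auction also weighs rank by Quality Score.

    # A minimal sketch of generalized second-price (GSP) pricing.
    # Bids and the 0.01 € increment are made-up assumptions.
    MIN_INCREMENT = 0.01

    def gsp_prices(bids, slots):
        """Each winner pays just above the next-highest bid, not their own bid."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        prices = {}
        for i, (bidder, _) in enumerate(ranked[:slots]):
            next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
            prices[bidder] = round(next_bid + MIN_INCREMENT, 2)
        return prices

    print(gsp_prices({"A": 0.35, "B": 0.20, "C": 0.10}, slots=2))
    # {'A': 0.21, 'B': 0.11} – A bids 0.35 € but pays only 0.21 €

Note how bidder A can bid their true value (0.35 €) without overpaying: the mechanism itself does the “lower the bid to 0.21 €” step.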

Of course, for the ad platform this system is also lucrative. When advertisers all place truthful bids, there is no gaming, i.e. no one attempts to extract rents (excessive profits), and the overall price level settles higher than it would under gaming. (Theoretically, you could also model the price level as equal in both cases, since it’s a “free market” where prices would settle at a marginal-cost equilibrium either way.)


Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Google and the Prospect of Programmatic

Introduction

This is a short post taking a stance on programmatic ad platforms. It’s based on one single premise:

Digital convergence will lead to a situation where all ad spend, not only digital, will be managed through self-service, open ad platforms that operate on auction principles.

There are several reasons why this is not yet a reality: some relate to traditional media houses’ lack of technological competence, some to their willingness to “protect” premium pricing (a protection that has led to shrinking business and will keep doing so until they open up to free-market pricing), and a host of other factors. (I’m actually currently engaged in a research project studying this phenomenon.)

Digital convergence – you what?

Anyway, digital convergence means we’ll end up running campaigns through one or possibly a few ad platforms that all operate according to the same basic principles. They will resemble AdWords a great deal, because AdWords has been and still is the best advertising platform ever created. Self-service is critical because it eliminates transaction costs from the selling process – in most cases we don’t need media salespeople to operate these platforms. Because we don’t need them, we won’t need to pay their wages, and this efficiency gain can be passed on to prices.

The platforms will be open, meaning there are no minimum media buys – just as on Google and Facebook, you can start with 5 $ if you want (try doing that now with your local TV media salesperson). Pricing is determined via ad auction, just as on Google and Facebook nowadays. Price levels will drop, but the lowered barrier to entry will increase liquidity and therefore fill inventory more efficiently than human-based bargaining. At least initially I expect some flux in these determinants – media houses will want to impose minimum pricing, but I predict it will go away in time as they realize the value of the free market.

But now, to Google…

If Google were smart, it would develop a programmatic ad platform for TV networks, or even integrate it with AdWords. The same actually applies to all media verticals: radio, print… Google’s potential demise will be this Alphabet business. All the new ideas they’ve had have failed commercially, and focusing on producing more failed ideas unsurprisingly leads to more failure. Their luck – or skill, however you want to take it – has been in understanding the platform business.

Just like Microsoft, Google must have people who understand the platform business.

They’ve done a really good job with vertical integration, mainly with Android and Chrome; these support the core business model. Page’s fantasy-land ideas really don’t. Well, from this point of view, separating Alphabet from the core actually makes sense, as long as the focus is kept on search and advertising.

So, programmatic ad platforms have the potential to disrupt Google, since search spend is still dwarfed by TV and other offline media spend. And in light of Google’s supposed understanding of platform dynamics, it’s surprising they’re not taking a stronger stance in bringing programmatic to the masses – and by masses, I mean offline media, where the real money is. Google might be satisficing, and that’s a road to doom.


The Vishnu Effect of Startups (creators/destroyers of jobs)

Background

In Hindu scripture (the Bhagavad Gita) there is a famous passage in which the god Vishnu describes himself as death; to Westerners this is mostly known through Oppenheimer’s quotation:

“Now, I am become Death, the destroyer of worlds.”

But there is another god in Hinduism, Brahma, who is the creator of the universe.

How does this relate to startups?

Just like these two gods, startups are dualistic in nature. In particular, they are both job creators and job destroyers: on one hand they create new jobs and job types, while on the other hand they destroy existing jobs.

So what?

This dualistic nature is often ignored when evaluating the impact of startups on society, although it’s definitely at the core of the Schumpeterian theory of innovation. What really matters for society is the balance – how fast new companies are creating jobs vs. how fast they are destroying them.

I haven’t seen a single quantification of this effect, so it would definitely merit research. Theoretically, the metric could be called something like SIR, or startup impact ratio:

SIR = jobs produced / jobs destroyed

As long as the ratio is above 1, startups’ impact on the job market (and therefore indirectly on society) is positive. In turn, if it’s below 1, “robots are taking our jobs”. Or rather: above one, Brahma is winning; below one, Vishnu is dominating.
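As a minimal sketch of the metric (the job counts are hypothetical):

    # Startup impact ratio (SIR) as proposed above; job counts are hypothetical.
    def startup_impact_ratio(jobs_created, jobs_destroyed):
        """SIR > 1: creation (Brahma) wins; SIR < 1: destruction (Vishnu) dominates."""
        return jobs_created / jobs_destroyed

    print(startup_impact_ratio(120_000, 90_000))  # ≈ 1.33 -> net positive for the job market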


A major change in AdWords – How to react?

Introduction

Google has made a major change in AdWords: ads are now shown only in the main column, no longer in the right column. Previously, there were generally speaking eight ads per SERP (search-engine results page). For some queries, Google didn’t show ads at all, and additionally they’ve been constantly testing the upper limit, e.g. running up to 16 product listing ads per results page.

But what does that mean to an advertiser?

Analysis

The change means the number of ads shown per SERP is effectively reduced. Since the number of advertisers is not reduced (unless rotation is applied, see below), competition intensifies. And since the visibility of search ads is based on a cost-per-click auction, ceteris paribus click prices will go up.

Therefore, the logical conclusion is that when ad placements are cut, either CPCs increase (due to higher competition) or impression share decreases (due to rotation). In the former case you pay more for the same number of visitors; in the latter you pay the same click price but get fewer visitors.

The reason Google might in fact prefer ad rotation, i.e. curbing an individual advertiser’s impression share (the share of times your ad is shown out of all the times it could have been shown), is that rotation wouldn’t impact advertisers’ return on ad spend (ROAS), which is a relative metric. However, it would affect the absolute volume of clicks and, consequently, sales.
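To make the tradeoff concrete, here is a minimal sketch with made-up numbers; it shows why rotation leaves ROAS intact while a CPC increase does not:

    # Made-up numbers illustrating the two scenarios an advertiser faces.
    CONVERSION_RATE = 0.02    # share of visitors who convert
    REVENUE_PER_SALE = 60.0   # average revenue per conversion, €

    def scenario(cpc, clicks):
        spend = cpc * clicks
        revenue = clicks * CONVERSION_RATE * REVENUE_PER_SALE
        print(f"spend {spend:6.0f} €, revenue {revenue:6.0f} €, ROAS {revenue / spend:.2f}")

    scenario(cpc=0.50, clicks=2000)  # baseline
    scenario(cpc=0.65, clicks=2000)  # CPC up 30 %: same traffic, ROAS drops (2.40 -> 1.85)
    scenario(cpc=0.50, clicks=1400)  # impression share cut 30 %: ROAS intact, volume drops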

Some of my campaigns use a long-tail positioning strategy that this change will affect, since those campaigns target positions 4+ which, as said, are mostly no longer available. Most likely, the change will completely eradicate the possibility of running those campaigns with my low CPC goal.

Why did Google do this?

For Google, this is a beneficial and logical change, since right-column ads command lower CTRs (click-through rates). This has two implications: first, they bring in less money for Google, since its revenue is directly tied to the number of clicks; second, as is commonly known, Google uses CTR as a proxy for user experience (for example, it’s a major component in the Quality Score calculations that determine the true click price).

Therefore, removing poorly performing ad placements while pushing advertisers into increased competition is a beneficial situation for Google. In the wider picture, even with higher click prices, the ROI of Google ads is not easily challenged by any other medium or channel, at least as far as I can see in the near future.

However, for advertisers it may well mean higher click prices and therefore decreasing returns from search advertising. This conflict of interest is an unfortunate one for advertisers, especially given the skewed distribution of power in their relationship with Google.

(On a side-note, the relationship between advertisers and Google is extremely interesting. I studied that to some extent in my Master’s thesis back in 2009. You can find it here: https://www.dropbox.com/s/syaetj8m1k66oxr/10223.pdf?dl=0)

Conclusion

I recommend you review the impact of this change on your accounts – either internally or, if you’re using an agency, together with them.


How to prevent disruption from happening to you? AKA avoiding the “Vanjoki fallacy”

Introduction

A major issue for corporations is how to avoid being disrupted. This is a well-established problem; e.g., Christensen discusses it in his book The Innovator’s Dilemma. But I’m going to present a simple solution for it here.

Here it is.

Rule Number 1: Don’t look at absolute market shares, look at growth rates

I call this the “Vanjoki fallacy”, based on the fatal error Anssi Vanjoki made while at Nokia, namely thinking “Apple only has 3% market share, we have 40%, therefore we are safe”, when he should have looked at growth rates, which were of course by far in Apple’s favor. Looking at growth rates forces you to try to understand why they differ, and then you might still have a chance of turning the disruption around (although that’s not guaranteed).

“How can I do it?”

So, how to do it? Well, you should model your competitors’ growth – as soon as any of the relevant measures (e.g., revenue, product-category sales, product sales) shows exponential growth, that’s a danger signal for you. Here’s the four-step process in detail.

First, 1) define the relevant measures to track. These derive from your industry and business model, and they are common goal metrics that you and your competitor share, e.g. sales.

Second, 2) get the data – easy enough if your competitors are public companies, since their financial statements should have it. Notice, however, that there is a reporting lag when retrieving data from financial statements, which plays against you, since you want to learn about potential disruptors as early as possible. You might therefore also want to look at other sources of data, e.g. Google Trends or some other proxy of their growth.

Third, 3) model the data. This is done by simply fitting the data to different statistical models representing various growth patterns – remember derivatives at school? It’s like that: you want to know how fast something is growing. Most importantly, you want to find out whether the growth is linear, exponential, or logarithmic.

How to interpret these? Well, if the growth is linear, good for you (assuming your own growth is also at least linear). If it’s exponential, that’s usually bad for you. If it’s logarithmic, it depends on where the competitor is in its growth phase (if this seems complicated, google ‘logarithmic growth’ and you’ll see what it looks like). Now, compare the competitor’s growth model to yours – do you have reason to be concerned?
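As a minimal sketch of step 3 – assuming numpy and scipy are available, and with a made-up competitor revenue series – you can fit the candidate growth curves and compare how well each explains the data:

    # Fit linear, exponential and logarithmic models to a competitor's revenue.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])            # years tracked
    revenue = np.array([3.0, 6.1, 12.2, 23.8, 49.5])   # made-up revenue, M€

    models = {
        "linear":      lambda t, a, b: a * t + b,
        "exponential": lambda t, a, b: a * np.exp(b * t),
        "logarithmic": lambda t, a, b: a * np.log(t) + b,
    }

    for name, f in models.items():
        params, _ = curve_fit(f, t, revenue, p0=(1.0, 0.5), maxfev=10_000)
        ss_res = np.sum((revenue - f(t, *params)) ** 2)
        ss_tot = np.sum((revenue - revenue.mean()) ** 2)
        print(f"{name:12s} R^2 = {1 - ss_res / ss_tot:.3f}")
    # The best-fitting curve (highest R^2) tells you which growth pattern you face.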

Finally, 4) draw actionable conclusions and come up with a strategy to counter your opponent. Fine, they have exponential growth – but why? What are they doing better? Don’t be like that other ignorant Nokia manager, Olli-Pekka Kallasvuo, who publicly said he didn’t have an iPhone and never would. Instead, find out about your competitors’ products. Here is a list of questions:

  • What makes their product better?
  • What makes their processes better?
  • What makes their brand better?
  • What makes their business model better?
  • What makes their employees better?

Find out the answers, and then make a plan for the best course of action. You may want to identify the most likely root causes of their growth, and then either imitate them, neutralize them (if possible), or counter-disrupt them with your next-generation solution.

Conclusion

In conclusion, don’t be fooled by absolute values. The world is changing, and your role as a manager or executive is to be on top of that change. So, do the math and do your job. The corollary to this approach, by the way, is to create some kind of “anti-disruption” alert system – that would make for a nice startup idea, but it’s a topic for another post.


European financial crisis – the next steps?

Introduction

With this post, I’m anticipating the next phase of the debate on the European financial crisis, as the problem of asynchronous economies isn’t going away. The continent is currently preoccupied with the refugee crisis, but sooner or later attention will return to this topic, which hasn’t been properly dealt with.

The problem

In brief, there are two countries:

  • Country A – “good country” with flourishing exports and a dynamic domestic market
  • Country B – “bad country” with sluggish exports and a slow domestic market

Both countries, however, share the same monetary policy. They cannot control the money supply or the key interest rate themselves according to their specific needs; instead, these come as a kind of average for both – an “average” that is either optimal for neither, or optimal for one but not the other.

As Milton Friedman asserted long ago, differences of this kind result in a suboptimal currency area. We’ve seen his predictions take form in the ongoing European financial crisis, which in this case results from the suboptimal nature of the European Monetary Union (EMU).

How to solve the problem?

Some potential solutions are:

1. Fiscal transfers from surplus to deficit countries – this seems politically impossible, and it also leaves the moral hazard problem wide open (this solution removes the incentive to make structural reforms, and is dangerous in the sense that it can breed hatred between EMU countries).

2. Budget control by the European Central Bank (ECB) – in this case, the central bank would exercise supreme power over national budgets and would approve only balanced ones. From a simplistic point of view, this seems appealing because it would forcefully prevent overspending, and there would be no need for the dreaded fiscal transfers.

However, the problems with this approach are the following:

a. It takes away national sovereignty – not a small thing at all, and non-federalists like myself would reject it for this reason alone.

b. The economic issue with it is the ‘shrinking economy’ problem: according to Keynesian logic, the state needs to invest when the private sector is in a slump in order to stimulate the economy. Failing to do so risks a vicious cycle of increasing unemployment and decreasing consumption, resulting in a shrinking rather than a growing GDP.

So, I’m not exactly supporting balanced budgets in a time of distress. The only way they can work is as a form of “shock therapy” that forces the private sector to compensate for decreasing public-sector spend. That, in turn, requires liquidity, i.e. capital. Unfortunately, a lack of trust in a country also tends to be reflected in that country’s companies in the form of higher interest rates.

Which leads me to another potential solution that again looks elegant but is a trap.

3. Credit pooling (euro-bonds)

This is just sub-prime all over again. In other words, we take the loans of a reliable country (credit rating A), mix them with those of an unreliable country (credit rating C), and give the whole “package” an overall rating of B, which looks quite enticing to the investors buying these bonds. By hiding the differences in the ability to service debt, the pool is able to attract much more money. In brief, everyone knows this leads to the dark side of moral hazard and will eventually explode.

For this reason, I’m categorically against euro-bonds. In fact, the European debt crisis was in large part due to investors treating sovereign bonds as if they were joint bonds, granting Greece lower interest rates than it would have received had it not been an EMU member state. Ironically enough, some people actually praised this as a positive effect of the monetary union.

Conclusion and discussion

So, what’s the final solution then? I think it’s the road of enforcing the subsidiarity principle – in other words, returning economic power to national governments. The often-evoked manifestation of this, the dissolution of the euro, could potentially be avoided by using the national banks (e.g., the Bank of Greece) as interest-rate setters, while the ECB would keep the money supply under its control.

I even considered handing the money supply to the national banks as well, but the risk of moral hazard is too big, and it would raise inflation concerns. Controlling the key interest rate, however, would be important, especially in the sense that it could be set *higher* in “good countries” than it currently is. Consider a high interest rate (i.e., low credit expansion) in Germany and a low interest rate (i.e., high credit expansion) in Greece: the two effects could cancel each other out and dispel the fear of inflation.

However, the question is: are the “good countries” willing to pay a higher interest rate for the “bad countries’” sake? And would this solution escape moral hazard? For it to work, the ECB would either have to credibly commit to the role of lender of last resort, or become the first lender. In either case, we seem to circle back to the risk of reckless crediting (unless the national banks would do a better job of monitoring the agents, which they actually might).

In the end, something has to give. I’ve often used the euro zone as an example of a zero-sum game: one party has to give so that the other can receive. In such a setting, it is not possible to create a solution that results in equal wins for all players. Sadly, politicians cannot escape economic principles – these are simply not a question of political decision-making. The longer they pretend otherwise, the larger the systemic risks associated with the monetary union grow.

Joni Salminen
DSc. in Econ. and Business Adm.
Turku School of Economics

The author has been following the euro-crisis since its beginning.

The correct way to calculate ROI for online marketing

Introduction

This is a short post explaining the correct way to calculate ROI for online marketing. I got the idea earlier today while renewing my Google AdWords certificate and running into an exam question about calculating the ROI of a campaign.

Now, here’s the trap – I’m arguing most advertisers would pick the intuitive but wrong answer. Let me elaborate on this.

The problem?

As everybody knows, ROI is calculated with this formula:

ROI = (returns-cost)/cost*100%

The problem is that the cost side is oftentimes seen too narrowly when reporting the performance of online advertising.

ROI is “return on investment”, but the investment should be seen to include not only the advertising cost but the cost of the products sold as well.

Let me give you an example. Here’s the basic information we have on our campaign’s performance:

  • cost of campaign A: 100€
  • sales from campaign A: 500€

So, applying the formula, the ROI is (500-100)/100*100% = 400%.

However, in reality we should consider the margin, since that’s highly relevant for the overall profitability of our online marketing. In other words, the cost side should also include the products sold. Assuming our margin is 15% in this example, we get:

  • cost of products sold: 500€*(1-0.15) = 425€

Reapplying the ROI calculation:

(500-(100+425)) / (100+425) * 100% ≈ -4.8%

So, as we can see, the profitability went from +400% to roughly -4.8%.
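Here is a minimal sketch of both calculations, using the example figures above:

    # Naive vs. margin-adjusted ROI, using the figures from the example.
    def naive_roi(revenue, ad_cost):
        return (revenue - ad_cost) / ad_cost * 100

    def margin_adjusted_roi(revenue, ad_cost, margin):
        cogs = revenue * (1 - margin)        # cost of products sold
        total_cost = ad_cost + cogs
        return (revenue - total_cost) / total_cost * 100

    print(naive_roi(500, 100))                   # 400.0 %
    print(margin_adjusted_roi(500, 100, 0.15))   # ≈ -4.8 %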

The implications

The main implication: always consider the margin in your ROI calculation, otherwise you’re not measuring true profitability.

The more accurate formula, therefore, is:

ROI = (returns - (cost of advertising + cost of products sold)) / (cost of advertising + cost of products sold) * 100%

Another implication is that since ROI depends on margins, products with the same price but different margins have different CPA goals. This kind of adjustment is typically ignored in bid-setting, even by more advanced systems such as AdWords Conversion Optimizer, which assumes a uniform CPA goal.
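As a minimal sketch with made-up prices and margins: the break-even CPA of a product is its price times its margin, so two products with identical prices can justify very different bids:

    # Break-even CPA = price * margin; prices and margins are made up.
    products = {"product A": (100.0, 0.15), "product B": (100.0, 0.40)}

    for name, (price, margin) in products.items():
        print(f"{name}: break-even CPA {price * margin:.2f} €")
    # product A: 15.00 €, product B: 40.00 € – same price, different CPA goals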

Limitations

Obviously, just as the abuse of the ‘basic ROI’ calculation ignores the product on the cost side, it also ignores customer lifetime value on the return side of the equation.


Carryover effects and their measurement in Google Analytics

Introduction

Carryover effects in marketing are a tricky beast. On one hand, you don’t want to prematurely judge a campaign because the effect of advertising may be delayed. On the other hand, you don’t want bad campaigns to be defended with this same argument.

Solutions

What’s the solution then? Carryover effects need to be quantified – or else treated as if they didn’t exist. Some ways to quantify them are available in Google Analytics:

  • First, there is the time lag report of conversions – this shows how long it has taken for customers to convert.
  • Second, you can increase the inspection window – by looking at a longer period, you can capture more carryover effects (e.g., you ran a major display campaign in July; looking back in December you might still see its effects). [Notice that cookie duration limits the tracking; also remember to use UTM parameters for tracking.]
  • Third, you can look at assisted conversions to see the carryover effect in conversion paths – many campaigns may not convert directly, but are part of the conversion path.

All these methods, however, are retrospective in nature. Predicting carryover effects is notoriously hard, and I’m not sure it would even be possible with such accuracy that it should be pursued.

Conclusion

In conclusion, I’d advise against being too hasty in drawing conclusions about campaign performance; this way you avoid the problem of premature judgment. The problem of shielding inferior campaigns can be tackled by using other proxy metrics of performance, such as the bounce rate, which effectively tells you whether a campaign has even a theoretical chance of producing positive carryover effects. Indeed, regarding the prediction problem, proving an association between high bounce rates and low carryover effects would reinforce this “rule of thumb” even further.


Chasing the “true” CPA in digital marketing (for Pros only!)

This is a follow-up to my earlier post about “fake” conversions – that post is in Finnish but, briefly, it’s about the irreversibility of conversions in ad platforms’ reporting. In reality, some conversions are cancelled (e.g., product returns), but the current platforms don’t track that.

So, my point was to include a ‘churn coefficient’ that corrects the CPA calculation. In other words, it adjusts the CPA reported by the ad platform (e.g., AdWords) for the churn from “conversion” to conversion (as per the explanation above).

The churn coefficient can be calculated like this:

1/(1-churn),

in which churn is the share of reported conversions that do not turn into lasting, real conversions.

However, I got to thinking about this and concluded the following: since we count the churn caused by real-world circumstances as a lift to the reported CPA, we should also consider the mitigating factor of customer-to-customer referrals (i.e., word of mouth).

Consider it like this: on average, converted customers recommend your company to their friends, some of whom convert. That effect would not be correctly attributed to the referring customers under normal circumstances, but by attributing it uniformly to the average CPA we can at least account for it in aggregate.

Hence the ‘WOM coefficient’:

1-(Cn / Cm), in which

Cn: conversions from new customers not affiliated with any marketing channel
Cm: conversions from all marketing channels

The idea is that new visitors who convert without any marketing channel can be attributed to WOM, while conversions from marketing channels create the base of customers who produce the recommendations. Both pieces of information can be retrieved in GA (for Cn, use an advanced segment).

So, the more accurate formula for “true” CPA calculation would be:

(1 - (Cn / Cm)) * 1/(1 - churn) * CPA

In reality, you could of course track at least part of the referrals through referral codes (cf. Dropbox). In that case you could compute a more accurate WOM coefficient.
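Putting the pieces together, here is a minimal sketch of the adjustment; all input figures are made up for illustration:

    # "True" CPA adjusted for conversion churn and word of mouth (WOM).
    def true_cpa(reported_cpa, churn, c_new, c_mkt):
        """churn: share of reported conversions later cancelled (0..1)
        c_new: conversions from new customers with no marketing channel (Cn)
        c_mkt: conversions from all marketing channels (Cm)"""
        churn_coefficient = 1 / (1 - churn)    # lifts CPA: some conversions don't last
        wom_coefficient = 1 - (c_new / c_mkt)  # lowers CPA: referrals convert for free
        return wom_coefficient * churn_coefficient * reported_cpa

    # 20 € reported CPA, 10 % churn, 50 WOM conversions vs. 400 channel conversions
    print(true_cpa(20.0, 0.10, 50, 400))  # ≈ 19.44 €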

Limitations:

Consider that in period t, not all Cn are created by Cm. Hence, it would be more realistic to assume a delay, e.g. comparing to period t-1 (the referral effect does not show up instantly).

The formula also does not consider cases where the referred customers come through existing marketing channels. (This effect could be eased by excluding branded search campaigns from Cm – a good idea anyway if you want to find out a channel’s true performance in new customer acquisition.)

Finally, not all customers from non-marketing channels necessarily originate from WOM (especially if the company is using a lot of non-traceable offline marketing). Thus, the WOM coefficient could include a parameter accounting for this effect.


Online ad platforms’ leeching logic

Mr. Pitkänen and I had a discussion about unfair advantages in business – e.g., a gift card company’s business model relying on people not redeeming gift cards, investment bankers relying on a monopoly to take 7% of each new IPO, doctors controlling how many new doctors are educated, taxi drivers keeping supply low through licenses, governments inventing new taxes…

It seems that everywhere you look, you’ll find examples of someone messing with the so-called “free market”.

So, what’s the unfair advantage of online ad platforms? It’s something I call ‘leeching logic’. It’s about miscrediting conversions: channel x receives credit for a conversion even though channel y has been its primary driver.

Let me give you two examples.

EXAMPLE 1:

You advertise brand X on the radio. A person likes the ad and searches for your brand on Google. He clicks your search ad and buys.

Who gets credited for the sale?

Radio ad – 0 conversions
Google – 1 conversion

The conclusion: Google is leeching. In this way, all offline branding essentially creates a lift for search-engine advertising, which sits at a later stage of the purchase funnel, often closing the conversion.

EXAMPLE 2:

You search for product Y on Google. You see a cool search ad by company A and click it. You also like the product. However, you need time to think and don’t buy it yet. Like half the planet, you go to Facebook later that day. There, you’re shown a remarketing ad from company A but don’t really notice it, let alone click it. After thinking about the product for a week, you return to company A’s website and make the purchase.

Who gets credited for the sale?

Google – 1 conversion (30-day click tracking)
Facebook – 1 conversion (28-day view tracking)

In reality, Facebook just rides on the fact that someone visited a website, learned about the product somewhere else, and happened to visit Facebook in between before making the purchase. The person didn’t click the retargeting ad or necessarily even cognitively process it, yet the platform reports a conversion because of that ad.

For a long time, Facebook had trouble finding its leeching logic, but now it has finally discovered it. And now, as for other businesses with a leeching logic, the future looks bright. (A good time to invest, if the stock’s P/E weren’t somewhere around 95.)

So, how should marketers deal with the leeches to get a more truthful picture of our actions? Here are a few ideas (a small sketch follows the list):

  • exclude brand terms in search when evaluating overall channel performance
  • narrow down the lookback window for views in Facebook – you can’t remove it entirely, though (because of leeching logic)
  • use attribution modeling (not possible for online–offline, but works for digital cross-channel comparisons)
  • dedupe conversions between channels (essentially, the only way to do this is attribution modeling in third-party analytics software such as GA – the platforms’ own reporting doesn’t address this issue)
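As a minimal sketch of that last deduplication point – the touchpoints and attribution rules are made up for illustration – compare platform self-reporting with a deduplicated last-click view:

    # One purchase, three touchpoints; platforms self-report vs. deduped last click.
    path = ["google_search_click", "facebook_view", "direct_visit"]

    # Platform self-reporting: every platform with a qualifying touch claims the sale.
    platform_reported = {
        "google": int(any("google" in t and "click" in t for t in path)),
        "facebook": int(any("facebook" in t for t in path)),  # view-throughs count too
    }

    # Deduplicated last-click attribution: one conversion, credited to the last click.
    clicks = [t for t in path if "click" in t]
    last_click = {clicks[-1]: 1} if clicks else {"direct": 1}

    print(platform_reported)  # {'google': 1, 'facebook': 1} -> 2 reported "conversions"
    print(last_click)         # {'google_search_click': 1} -> 1 actual conversion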