

A major change in AdWords – How to react?

Introduction

Google has made a major change in AdWords. Ads are now shown only in the main column, no longer in the right column. Previously, there were, generally speaking, eight ads per SERP. For some queries, Google didn't show ads at all, and it has also been constantly testing the upper limit, e.g. running up to 16 product listing ads per results page.

But what does that mean to an advertiser?

Analysis

The change means the number of ads shown per SERP (search-engine results page) is effectively reduced. Since the number of advertisers is not reduced (unless rotation is applied, see below), the competition intensifies. And since the visibility of search ads is based on a cost-per-click auction, ceteris paribus, click prices will go up.

Therefore, the logical conclusion is that when ad placements are cut, either CPC increases (due to higher competition) or impression share decreases (due to rotation). In the former case, you pay more for the same number of visitors; in the latter, you pay the same click price but get fewer visitors.

The reason Google might in fact prefer ad rotation, i.e. curbing an individual advertiser's impression share (the number of times your ad is shown out of all the times it could have been shown), is that rotation would not impact their return on ad spend (ROAS), which is a relative metric. However, it would affect the absolute volume of clicks and, consequently, sales.
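
To make the trade-off concrete, here is a back-of-the-envelope sketch in Python; all figures (impressions, CTR, baseline CPC, and the 25% changes) are made up purely for illustration.

```python
# Back-of-the-envelope comparison of the two scenarios; all figures are made up.
impressions = 100_000   # monthly impressions at full impression share
ctr = 0.02              # click-through rate
baseline_cpc = 0.50     # euros per click

baseline_clicks = impressions * ctr
baseline_cost = baseline_clicks * baseline_cpc

# Scenario 1: competition pushes CPC up by 25%, impression share unchanged.
higher_cpc_cost = baseline_clicks * baseline_cpc * 1.25

# Scenario 2: CPC unchanged, but rotation cuts impression share by 25%.
rotated_clicks = impressions * 0.75 * ctr
rotated_cost = rotated_clicks * baseline_cpc

print(f"Baseline:   {baseline_clicks:.0f} clicks for {baseline_cost:.0f}€")
print(f"Higher CPC: {baseline_clicks:.0f} clicks for {higher_cpc_cost:.0f}€")  # same traffic, more spend
print(f"Rotation:   {rotated_clicks:.0f} clicks for {rotated_cost:.0f}€")      # less traffic, same CPC
```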

Some of my campaigns use a long-tail positioning strategy that this change will affect, since those campaigns target positions 4+ which, as said, are mostly no longer available. Most likely, the change will completely eradicate the possibility of running those campaigns with my low CPC goal.

Why did Google do this?

For Google, this is a beneficial and logical change, since right-column ads command lower CTRs (click-through rates). This has two implications: first, they bring in less money for Google, since its revenue is directly associated with the number of clicks; second, as is commonly known, Google uses CTR as a proxy for user experience (for example, it's a major component in the Quality Score calculations which determine the true click price).

Therefore, removing poorly performing ad placements while pushing advertisers into increased competition is a beneficial situation for Google. In the wider picture, even with higher click prices, the ROI of Google ads is not easily challenged by any other medium or channel, at least as far as I can see in the near future.

However, for advertisers it may easily mean higher click prices and therefore decreasing returns from search advertising. This conflict of interest is an unfortunate one for advertisers, especially given the skewed distribution of power in their relationship with Google.

(On a side-note, the relationship between advertisers and Google is extremely interesting. I studied that to some extent in my Master’s thesis back in 2009. You can find it here: https://www.dropbox.com/s/syaetj8m1k66oxr/10223.pdf?dl=0)

Conclusion

I recommend you review the impact of this change on your accounts, either internally or, if you're using an agency, together with them.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

How to prevent disruption from happening to you? AKA avoiding the “Vanjoki fallacy”

Introduction

A major issue for corporations is how they can avoid being disrupted. This is a well-established problem; e.g., Christensen discusses it in his book "The Innovator's Dilemma". But I'm going to present a simple solution to it here.

Here it is.

Rule Number 1: Don’t look at absolute market shares, look at growth rates

I call this the "Vanjoki fallacy", based on the fatal error Vanjoki made while at Nokia, namely thinking that "Apple only has 3% of market share, we have 40%. Therefore we are safe", when he should have looked at growth rates, which were of course by far in Apple's favor. Looking at growth rates forces you to try and understand why, and you might still have a chance of turning the disruption around (although that's not guaranteed).

“How can I do it?”

So, how to do it? Well, you should model your competitors’ growth – as soon as any of the relevant measures (e.g., revenue, product category, product sales) shows exponential growth, that’s an indicator of danger for you. Here’s the four-step process in detail.

First, 1) start out by defining the relevant measures to track. These derive from your industry and business model, and they are common goal metrics that you and your competitor share, e.g. sales.

Second, 2) get the data – easy enough if they are public companies, since their financial statements should have it. Notice, however, that there is a reporting lag when retrieving data from financial statements, which works against you since you want knowledge of potential disruptors as early as possible. You might want to look at other sources of data, e.g. Google Trends development or some other proxy of their growth.

Third, 3) model the data; this is done simply by fitting the data to different statistical models representing various growth patterns — remember derivatives at school? It's like that: you want to know how fast something is growing. Most importantly, you want to find out whether the growth resembles linear, exponential, or logarithmic growth.

How to interpret these? Well, if it's linear, good for you (assuming your growth is also at least linear). If it's exponential, that's usually bad for you. If it's logarithmic, it depends where they are in the growth phase (if this seems complicated, google 'logarithmic growth' and you'll see how it looks). Now, compare the competitor's growth model to yours – do you have reason to be concerned?
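
As an illustration of the model-fitting step, here is a minimal sketch in Python using scipy.optimize.curve_fit; the quarterly revenue figures are made up, and in practice you would plug in your competitor's actual series.

```python
# Fit competing growth models to a competitor's quarterly revenue and compare fit.
import numpy as np
from scipy.optimize import curve_fit

quarters = np.arange(1, 13)                                                      # 12 quarters
revenue = np.array([10, 12, 15, 19, 24, 31, 40, 52, 67, 87, 113, 147], float)    # made-up, in millions

def linear(t, a, b):
    return a * t + b

def exponential(t, a, b):
    return a * np.exp(b * t)

def logarithmic(t, a, b):
    return a * np.log(t) + b

models = {"linear": linear, "exponential": exponential, "logarithmic": logarithmic}
initial_guesses = {"linear": (1.0, 1.0), "exponential": (1.0, 0.1), "logarithmic": (1.0, 1.0)}

for name, model in models.items():
    params, _ = curve_fit(model, quarters, revenue, p0=initial_guesses[name], maxfev=10_000)
    fitted = model(quarters, *params)
    rss = float(np.sum((revenue - fitted) ** 2))   # residual sum of squares: lower = better fit
    print(f"{name:12s} RSS = {rss:,.1f}")

# If the exponential model clearly wins (lowest RSS), treat it as a warning sign.
```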

Finally, 4) draw actionable conclusions and come up with a strategy to counter your opponent. Fine, they have exponential growth. But why is that? What are they doing better? Don't be like that other ignorant Nokia manager, Olli-Pekka Kallasvuo, who publicly said he didn't have an iPhone and would never get one. Instead, find out about your competitors' products. Here is a list of questions:

  • What makes their products better?
  • What makes their processes better?
  • What makes their brand better?
  • What makes their business model better?
  • What makes their employees better?

Find out the answers, and then make a plan for the best course of action. You may want to identify the most likely root causes of their growth, and then either imitate them, neutralize them (if possible), or counter-disrupt them with your next-generation solution.

Conclusion

In conclusion, don't be fooled by absolute values. The world is changing, and your role as a manager or executive is to be on top of that change. So, do the math and do your job. The corollary to this approach, by the way, is to create some kind of "anti-disruption" alert system — that would make for a nice startup idea, but it's a topic for another post.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

European financial crisis – the next steps?

Introduction

With this post, I'm anticipating the next phase of the debate on the European financial crisis, as the problem of asynchronous economies isn't going away. The continent is currently preoccupied with the refugee crisis, but sooner or later attention will return to this topic, which hasn't been properly dealt with.

The problem

In brief, there are two countries:

  • Country A – “good country” with flourishing exports and dynamic domestic market
  • Country B – "bad country" with sluggish exports and a slow domestic market

Both countries, however, share the same monetary policy. They cannot control the money supply or the key interest rate by themselves according to their specific needs; instead, these come as some kind of average for both – and this "average" is either not optimal for either country, or optimal for one but not the other.

As Milton Friedman asserted long ago, differences of this kind result in a suboptimal currency area. We've seen his predictions take form in the ongoing European financial crisis, which in this case results from the suboptimal nature of the European Monetary Union (EMU).

How to solve the problem?

Some potential solutions are:

1. Fiscal transfers from surplus to deficit countries — politically this seems impossible, and it also leaves the moral hazard problem wide open (this solution removes the incentive to make structural reforms, and is dangerous in the sense that it can breed resentment between EMU countries)

2. Budget control to the European Central Bank (ECB) — in this case, the central bank would exercise supreme power over national budgets, and would approve only balanced budgets. From a simplistic point of view, this seems appealing because it would forcefully prevent overspending, and there would be no need for the dreaded fiscal transfers.

However, the problems with this approach are the following:

a. It takes away the sovereignty of nations — not a small thing at all, and non-federalists like myself would reject it for this reason alone.

b. The economic issue with it is the ‘shrinking economy’ problem – according to Keynesian logic, the state needs to invest when the private sector is in a slump to stimulate the economy. Failing to do so risks a vicious cycle of increased unemployment and decreased consumption, resulting in a shrinking, not growing GDP.

So, I'm not exactly supporting balanced budgets in a time of distress. The only way this can work is as a form of "shock therapy" which would force the private sector to compensate for decreasing public-sector spending. That, in turn, requires liquidity, i.e. capital. Unfortunately, a lack of trust in a country also tends to be reflected in that country's companies in the form of higher interest rates.

Which leads me to another potential solution that again looks elegant but is a trap.

3. Credit pooling (euro-bonds)

This is just sub-prime all over again. In other words, we take the loans of a reliable country (credit rating A) and mix them with those of an unreliable country (credit rating C), and give the whole "package" an overall rating of B, which seems quite enticing for the investors buying these bonds. By hiding the differences in the ability to handle debt, the pool is able to attract much more money. In brief, everyone knows this leads to the dark side of moral hazard and will eventually explode.

For this reason, I'm categorically against euro-bonds. In fact, the European debt crisis was in large part due to investors treating sovereign bonds as if they were joint bonds, granting Greece lower interest rates than it would have received had it not been an EMU member state. Ironically enough, some people actually praised this as a positive effect of the monetary union.

Conclusion and discussion

So, what's the final solution then? I think it's the road of enforcing the subsidiarity principle, in other words restoring economic power to national governments. The often-evoked manifestation of this, the dissolution of the euro, could potentially be avoided by using the national central banks (e.g., the Bank of Greece) as interest-rate setters, while the ECB would keep the money supply under its control.

I even considered whether control of the money supply should also be given to the national banks, but the risk of moral hazard is too big, and it would raise inflation concerns. Controlling the key interest rate, however, would be important, especially in the sense that it could be set *higher* in "good countries" than what they currently have. Consider a high interest rate (i.e., low credit expansion) in Germany and a low interest rate (i.e., high credit expansion) in Greece; the two effects could cancel each other out and dispel the fear of inflation.

However, the question is: are the "good countries" willing to pay a higher interest rate for the "bad countries'" sake? And would this solution escape moral hazard? For it to work, the ECB would either have to credibly commit to the role of lender of last resort, or else become the first lender. In either case, we seem to fall recursively back into the risk of reckless crediting (unless the national banks would do a better job of monitoring the agents, which they actually might).

In the end, something has to give. I've often used the euro-zone as an example of a zero-sum game: one has to give so that the other can receive. In such a setting, it is not possible to create a solution that results in equal wins for all players. Sadly, politicians cannot escape economic principles – these are simply not a question of political decision-making. The longer they pretend otherwise, the larger the systemic risks associated with the monetary union grow.

Joni Salminen
DSc. in Econ. and Business Adm.
Turku School of Economics

The author has been following the euro-crisis since its beginning.

The correct way to calculate ROI for online marketing

Introduction

This is a short post explaining the correct way to calculate ROI for online marketing. I got the idea earlier today while renewing my Google AdWords certificate and seeing this question in the exam:

Now, here's the trap – I'm arguing that most advertisers would choose option C, although the correct one is option A. Let me elaborate on this.

The problem?

As everybody knows, ROI is calculated with this formula:

ROI = (returns-cost)/cost*100%

The problem is that the cost side is oftentimes seen too narrowly when reporting the performance of online advertising.

ROI is the 'return on investment', but the investment should be seen to include not only the advertising cost but the cost of the products sold as well.

Let me give you an example. Here’s the basic information we have of our campaign performance:

  • cost of campaign A: 100€
  • sales from campaign A: 500€

So, applying the formula, the ROI is (500-100)/100*100% = 400%

However, in reality we should consider the margin, since that's highly relevant for the overall profitability of our online marketing. In other words, the cost side should also include the products sold. Assuming our margin is 15% in this example, we get

  • cost of products sold: 500€*(1-0.15) = 425€

Reapplying the ROI calculation:

(500-(100+425)) / (100+425) * 100% ≈ -4.8%

So, as we can see, the profitability went from +400% to roughly -4.8%.

The implications

The main implication: always consider the margin in your ROI calculation, otherwise you’re not measuring true profitability.

The more accurate formula, therefore, is:

ROI = (returns - (cost of advertising + cost of products sold)) / (cost of advertising + cost of products sold) * 100%

Another implication is that since ROI depends on margins, products with the same price but different margins have different CPA goals. This kind of adjustment is typically ignored in bid-setting, even by more advanced systems such as the AdWords Conversion Optimizer, which assumes a uniform CPA goal.
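
To tie the formula and the margin point together, here is a minimal sketch in Python using the numbers from the worked example above; the function name and the 15% margin are just for illustration.

```python
def roi(returns: float, ad_cost: float, margin: float) -> float:
    """Return on investment (%) with the cost of products sold included."""
    cogs = returns * (1 - margin)      # cost of products sold
    total_cost = ad_cost + cogs
    return (returns - total_cost) / total_cost * 100

# Numbers from the worked example above.
print(f"Naive ROI:           {(500 - 100) / 100 * 100:.1f}%")                     # 400.0%
print(f"Margin-adjusted ROI: {roi(returns=500, ad_cost=100, margin=0.15):.1f}%")  # about -4.8%

# Corollary: the break-even CPA for a single sale is roughly price * margin, so two
# products with the same price but different margins warrant different CPA goals.
```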

Limitations

Obviously, while the misuse of the 'basic ROI' calculation ignores the product on the cost side, it also ignores customer lifetime value on the return side of the equation.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

Carryover effects and their measurement in Google Analytics

Introduction

Carryover effects in marketing are a tricky beast. On one hand, you don’t want to prematurely judge a campaign because the effect of advertising may be delayed. On the other hand, you don’t want bad campaigns to be defended with this same argument.

Solutions

What's the solution then? Carryover effects need to be quantified; otherwise it is as if they didn't exist. Some ways to quantify them are available in Google Analytics:

  • first, you have the time lag report for conversions – this shows how long it has taken customers to convert
  • second, you have the possibility to increase the inspection window – by looking at a longer period, you can capture more carryover effects (e.g., you ran a major display campaign in July; looking back in December you might still see its effects) [Notice that cookie duration limits the tracking, and also remember to use UTM parameters for tracking.]
  • third, you can look at assisted conversions to see the carryover effect in conversion paths – many campaigns may not convert directly, but are part of the conversion path.

All these methods, however, are retrospective in nature. Predicting carryover effects is notoriously hard, and I’m not sure it would even be possible with such accuracy that it should be pursued.

Conclusion

In conclusion, I'd advise against being too hasty in drawing conclusions about campaign performance. This way you avoid the problem of premature judgment. The problem of shielding inferior campaigns can be tackled by using other proxy metrics of performance, such as the bounce rate. This would effectively tell you whether a campaign has even a theoretical chance of providing positive carryover effects. Indeed, regarding the prediction problem, proving an association between a high bounce rate and low carryover effects would reinforce this "rule of thumb" even further.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

Chasing the "true" CPA in digital marketing (for pros only!)

This is a follow-up post on my earlier post about “fake” conversions — the post is in Finnish but, briefly, it’s about the problem of irreversibility of conversions in the ad platforms’ reporting. In reality, some conversions are cancelled (e.g., product returns), but the current platforms don’t track that.

So, my point was to include a 'churn coefficient' which corrects the CPA calculation. In other words, it adjusts the CPA reported by the ad platform (e.g., AdWords) for the churn from reported "conversions" to real conversions (as per the previous explanation).

The churn coefficient can be calculated like this:

1/(1-churn),

in which churn is the churn from the reported conversion to the lasting, real conversion.

However, I got to thinking about this and concluded the following — since we treat churn taking place due to real-world circumstances as a lift to the reported CPA, we should also consider the mitigating factor of customer-to-customer references (i.e., word of mouth).

Consider it like this – on average, converted customers recommend your company to their friends, some of whom convert. That effect would not normally be attributed to the referring customers, but by attributing it uniformly to the average CPA we can at least consider it in aggregate.

So, hence the ‘wom coefficient’:

1-(Cn / Cm), in which

Cn: conversions from new customers non-affiliated with any marketing channel
Cm: conversions from all marketing channels

The idea is that new visitors who convert can be attributed to wom, while conversions from marketing channels create the base of customers who produce the recommendations. Both pieces of information can be retrieved from GA (for Cn, use an advanced segment).

So, the more accurate formula for “true” CPA calculation would be:

(1 - (Cn / Cm)) * 1/(1 - churn) * CPA

In reality, you could of course track at least a part of the recommendations through referral codes (cf. Dropbox). In this case you could have a more accurate wom coefficient.
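
Putting the two coefficients together, here is a minimal sketch in Python; the example figures (reported CPA, churn rate, Cn, Cm) are hypothetical, and in practice Cn and Cm would come from GA and churn from your order data.

```python
def true_cpa(reported_cpa: float, churn: float, cn: float, cm: float) -> float:
    """Adjust the platform-reported CPA for conversion churn and word of mouth."""
    churn_coefficient = 1 / (1 - churn)   # cancelled conversions raise the effective CPA
    wom_coefficient = 1 - (cn / cm)       # word-of-mouth conversions lower it
    return wom_coefficient * churn_coefficient * reported_cpa

# Hypothetical example: AdWords reports a 20€ CPA, 10% of conversions are later
# cancelled, and GA shows 50 non-affiliated conversions (Cn) against 400 channel
# conversions (Cm).
print(f"{true_cpa(reported_cpa=20, churn=0.10, cn=50, cm=400):.2f}€")  # about 19.44€
```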

Limitations:

Consider that in period t, not all Cn are created by Cm. Hence, it would be more realistic to assume a delay, e.g. compare against period t-1 (the reference effect does not show up instantly).

The formula does not consider cases where the referred customers come through existing marketing channels (this effect could be eased by not including branded search campaigns in Cm which is a good idea anyway if you want to find out the true performance of the channel in new customer acquisition).

Finally, not all customers from non-marketing channels necessarily originate from wom (especially if the company is using a lot of non-traceable offline marketing). Thus, the wom coefficient could include a parameter that accounts for this effect.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

Online ad platforms’ leeching logic

Mr. Pitkänen and I had a discussion about unfair advantages in business – e.g., a gift card company's business model relying on people not redeeming gift cards, investment bankers relying on a monopoly to take 7% of each new IPO, doctors controlling how many new doctors are educated, taxi drivers keeping supply low through licenses, governments inventing new taxes…

It seems that everywhere you look, you'll find examples of someone messing with the so-called "free market".

So, what's the unfair advantage of online ad platforms? It's something I call 'leeching logic'. It's about miscrediting conversions – channel x receives credit for a conversion while channel y has been the primary driver of it.

Let me give you two examples.

EXAMPLE 1:

You advertise on the radio for brand X. A person likes the ad and searches for your brand on Google. He clicks your search ad and buys.

Who gets credited for the sale?

radio ad – 0 conversions
Google – 1 conversion

The conclusion: Google is leeching. In this way, all offline branding essentially creates a lift for search-engine advertising, which sits at a later stage of the purchase funnel, often closing the conversion.

EXAMPLE 2:

You search for product Y on Google. You see a cool search ad by company A and click it. You also like the product. However, you need time to think and don't buy it yet. Like half the planet, you go to Facebook later that day. There, you're shown a remarketing ad from company A but don't really notice it, let alone click it. After thinking about the product for a week, you return to company A's website and make the purchase.

Who gets credited for the sale?

Google – 1 conversion (30-day click tracking)
Facebook – 1 conversion (28-day view tracking)

In reality, Facebook just rides on the fact that someone visited a website and, in between visiting and making the purchase, also visited Facebook, while they learned about the product somewhere else. They didn't click the retargeting ad or necessarily even cognitively process it, yet the platform reports a conversion because of that ad.

For a long time, Facebook had trouble finding its leeching logic, but now it has finally discovered it. And now, as for other businesses that have a leeching logic, the future looks bright. (A good time to invest, if the stock's P/E weren't somewhere around 95.)

So, how should marketers deal with the leeches to get a more truthful picture of their actions? Here are a few ideas:

  • exclude brand terms in search when evaluating overall channel performance
  • narrow down the lookback window for view-through conversions in Facebook — you can't remove it entirely, though (because of the leeching logic)
  • use attribution modeling (not possible for online-offline, but it works for digital cross-channel comparisons)
  • dedupe conversions between channels (essentially, the only way to do this is attribution modeling in third-party analytics software, such as GA — the platforms' own reporting doesn't address this issue)


I hate to see investors coming into a growing startup… here’s why

I hate to see, from a customer’s perspective, investors coming into a growing Web startup.

Because it only means rising prices.

The logic is this: 1) the investors need a positive return, and 2) the startup is growing because it has created something valuable, in most cases significantly more valuable than what it charges the customer.

Therefore, the investor logic is to raise the price and narrow the existing value gap, i.e. charge according to the value provided (or closer to it). However, most customers will still stay, because they keep getting more value than what they pay even with the increased prices, and therefore the startup can maximize its revenue. In addition, there would be a switching cost associated with finding a new provider, such as learning the new tool, configuring it, exporting/importing data, etc. So basically, this strategy is a form of value transfer from the customer to the startup — or, more correctly, to the investor.

Next, we’ll explore what this means for investors and founders.

1. Implications for investors

The major implication for an investor, of course, is that it makes sense to identify startups which are growing fast but have not yet optimized the value capture part of their business model.

However, a major difference lies between having some revenue and not having revenue at all; in the latter case, the growth might be just an indicator of popularity, not business potential. (See my dissertation on startup dilemmas for a thorough elaboration of this topic.)

2. Implications for founders

The major implication for a startup is that if you seek funding, price your product well below the value provided, thereby sacrificing unit-level profitability for growth. But if you want to stay away from investors, experiment with raising prices – that way, it's only you keeping the surplus. Obviously, this is "ceteris paribus", so it excludes the potential revenue uplift from scaling with investor money. As we know, the only reason to bring in an investor is to grow the size of the business and thereby also increase the founder's personal profit, regardless of stock dilution.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

The Basics of Dilemmas

Introduction

By definition, a dilemma is a trade-off situation in which there are two choices, each leading to a negative outcome.

General solution

A general solution, then, is to weigh the outcomes and compare them against one another.

For example:

choice A: -1
choice B: -2

In this example, choice A has the smaller negative effect, so we'd pick that one.

Complications

However, there are complications.

Consider that the above are in fact the short-term outcomes, but that there are also long-term outcomes. For example:

choice A: -1, -3
choice B: -2, -1

This leads us to payoff functions, where the outcomes (payoffs) consist of many variables. In the example, the long-term negative effects outweigh the short-term effects, and we would change our choice to B.

However, the choice can also be arbitrary, meaning that neither choice dominates. In game theory terms, there is no dominant strategy.

This would be the case when

choice A: -1, -2
choice B: -2, -1

As you can see, it doesn't matter which choice we take, since each gives a total negative outcome of equal size. There is an exception to this rule, namely when the player has a preference between short- and long-term outcomes. For example, if he wants to minimize long-term damage, he would pick B, and vice versa.

How to apply this in real life?

In decision-making situations, it's common to make lists of pluses and minuses, i.e. listing the positive and negative sides. By assigning a numerical value to each, you can calculate the sum and establish a preference among the choices. In other words, it becomes easier to make tough decisions.
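
As a minimal sketch of that scoring approach, the snippet below (in Python, with the payoffs from the example above and purely illustrative weights) sums the weighted outcomes and picks the least negative choice.

```python
# Payoffs from the example above: short- and long-term outcomes per choice.
choices = {
    "A": {"short_term": -1, "long_term": -3},
    "B": {"short_term": -2, "long_term": -1},
}

# Optional preference: raise the long-term weight if you care more about lasting damage.
weights = {"short_term": 1.0, "long_term": 1.0}

def score(outcomes: dict) -> float:
    """Weighted sum of a choice's outcomes (less negative = better)."""
    return sum(weights[k] * v for k, v in outcomes.items())

for name, outcomes in choices.items():
    print(name, score(outcomes))

best = max(choices, key=lambda name: score(choices[name]))
print("Pick:", best)  # B: a total of -3 beats A's -4
```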

I’m into digital marketing, startups, and platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f

Organic reach and the choice of social media platform

(This is a work in progress.)

Introduction

It is a well-established fact that organic reach on a dominant platform decreases over time, as competition for users' attention increases. There is thus an inverse relation:

The more competition (by users and firms) in a user’s news feed, the less organic visibility for a firm.

The problem

How should a firm willing to engage in social media activity approach this matter?

In particular,

  • how should it divide its time and marketing efforts between alternative platforms?
  • when does it make sense for it to diversify?

The analysis

The formula behind the decision is u * o, in which

u = fan base
o = organic reach

  • all else equal, the larger the organic reach, the better
  • all else equal, the larger the fan base, the better

But even on a drastically smaller platform, a large o can offset the relative fan-base advantage.

For example, consider a firm that has a presence on two platforms.

platform A
500M users, 5,000 fans

platform B
10,000 users, 100 fans

At first glance, it would make sense to invest time and effort in platform A, given that both the overall user base and the fan base are significantly larger. However, now consider the inclusion of factor o.

platform A
500M users, 5,000 fans
organic reach 1% = 50 users

platform B
10,000 users, 100 fans
organic reach 90% = 90 users

It now makes sense for the firm to shift its social media activities to platform B, as it gives a better return on investment in terms of reach gained.

(It is assumed here that post-click actions are directly proportional to the amount of website traffic, and thus do not interfere with the return calculation.)
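
Here is a minimal sketch of the u * o comparison in Python, using the example figures above.

```python
# Example figures from above: platform A has a huge fan base but 1% organic reach,
# platform B a tiny fan base but 90% organic reach.
platforms = {
    "A": {"fans": 5_000, "organic_reach": 0.01},
    "B": {"fans": 100, "organic_reach": 0.90},
}

def expected_reach(p: dict) -> float:
    """Expected number of users reached per post: u * o."""
    return p["fans"] * p["organic_reach"]

for name, data in platforms.items():
    print(f"Platform {name}: {expected_reach(data):.0f} users reached per post")
# Platform A: 50, platform B: 90 -> B wins despite the much smaller fan base.
```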

Conclusion

More generally,

as organic reach decreases on platform A, platform B, with relatively better organic visibility, becomes more attractive.

Implications

Firms are advised to consider their social media investments in light of organic reach, and not to be fooled by vanity metrics such as the total user base of a platform. Relative metrics, such as the share of organic visibility, matter more.

Entrant platforms can encourage switching behavior by promising firms a larger degree of organic reach. At early stages this does not compromise the utility of users, as their news feeds are not yet cluttered. However, as the entrant platform matures and gains popularity, it will have an incentive to decrease organic reach.

This effect may partially explain why a dominant platform's position is never secure; entrants can promise better reach for both friends' and firms' posts, thereby giving more feedback on initial posts and a better user experience, which may increase multi-homing behavior and even lead to users deserting the dominant platform, since multi-homing has its own cost in time and effort.

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f