

The correct way to calculate ROI for online marketing



This is a short post explaining the correct way to calculate ROI for online marketing. I got the idea earlier today while renewing my Google AdWords certificate and seeing this question in the exam:

Now, here’s the trap – I’d argue most advertisers would choose option C, although the correct one is option A. Let me elaborate.

The problem?

As everybody knows, ROI is calculated with this formula:

ROI = (returns-cost)/cost*100%

The problem is that the cost side is oftentimes seen too narrowly when reporting the performance of online advertising.

ROI is ‘return on investment’, but the investment should be seen to include not only the advertising cost but the cost of the product as well.

Let me give you an example. Here’s the basic information we have of our campaign performance:

  • cost of campaign A: 100€
  • sales from campaign A: 500€

So, applying the formula the ROI is (500-100)/100*100% = 400%

However, in reality we should consider the margin, since that is highly relevant for the overall profitability of our online marketing. In other words, the cost includes the products sold. Assuming our margin is 15% in this example, we get

  • cost of products sold: 500€*(1-0.15) = 425€

Reapplying the ROI calculation:

(500-(100+425)) / (100+425) * 100% = -4.7%

So, as we can see, the profitability went from +400% to -4.7%.

The implications

The main implication: always consider the margin in your ROI calculation, otherwise you’re not measuring true profitability.

The more accurate formula, therefore, is:

ROI = (returns-(cost of advertising + cost of products sold)) / (cost of advertising + cost of products sold) * 100%
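To make the difference concrete, here is a minimal Python sketch using the numbers from the example above (the function and variable names are mine, for illustration only):

```python
def roi(returns, cost):
    """Basic ROI as a percentage: (returns - cost) / cost * 100."""
    return (returns - cost) / cost * 100

ad_cost = 100.0   # cost of campaign A (EUR)
sales = 500.0     # sales from campaign A (EUR)
margin = 0.15     # gross margin assumed in the example

naive_roi = roi(sales, ad_cost)        # ignores the cost of products sold
cogs = sales * (1 - margin)            # cost of products sold: 425 EUR
true_roi = roi(sales, ad_cost + cogs)  # margin-adjusted ROI

print(f"naive ROI: {naive_roi:.1f}%")  # 400.0%
print(f"true ROI: {true_roi:.1f}%")    # about -4.8%, i.e. unprofitable
```

The same campaign flips from +400% to slightly negative the moment the cost of products sold enters the denominator.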

Another implication is that, since ROI depends on margins, products with the same price can have different CPA goals. This kind of adjustment is typically ignored in bid-setting, even by more advanced systems such as the AdWords Conversion Optimizer, which assumes a uniform CPA goal.


Obviously, while the naive ‘basic ROI’ calculation ignores the product on the cost side, it also ignores customer lifetime value on the return side of the equation.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]


Carryover effects and their measurement in Google Analytics



Carryover effects in marketing are a tricky beast. On one hand, you don’t want to prematurely judge a campaign because the effect of advertising may be delayed. On the other hand, you don’t want bad campaigns to be defended with this same argument.


What’s the solution then? Carryover effects need to be quantified – or treated as if they didn’t exist. Google Analytics offers some ways to quantify them:

  • first, there is the time lag report for conversions – it shows how long customers have taken to convert
  • second, you can lengthen the inspection window – by looking at a longer period, you capture more carryover effects (e.g., you ran a major display campaign in July; looking back in December, you might still see its effects) [Note that cookie duration limits the tracking, and remember to use UTM parameters for tracking.]
  • third, you can look at assisted conversions to see carryover effects in conversion paths – many campaigns do not convert directly but are part of the conversion path.
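For those who prefer raw data over the GA interface, the time-lag idea can be sketched in a few lines of Python (the journey data and bucket boundaries below are made up for illustration; GA uses its own bucketing):

```python
from datetime import date
from collections import Counter

# Hypothetical raw data: (first ad click, conversion) dates per customer.
journeys = [
    (date(2015, 7, 1), date(2015, 7, 1)),    # same-day conversion
    (date(2015, 7, 1), date(2015, 7, 10)),   # nine-day lag
    (date(2015, 7, 3), date(2015, 9, 20)),   # long carryover effect
]

def lag_bucket(first_touch, conversion):
    """Bucket the click-to-conversion lag, GA time-lag-report style."""
    days = (conversion - first_touch).days
    if days == 0:
        return "0 days"
    if days <= 13:
        return "1-13 days"
    return "14+ days"

distribution = Counter(lag_bucket(a, b) for a, b in journeys)
print(distribution)  # one journey lands in each bucket
```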

All these methods, however, are retrospective in nature. Predicting carryover effects is notoriously hard, and I’m not sure it would even be possible with such accuracy that it should be pursued.


In conclusion, I’d advise against being too hasty in drawing conclusions about campaign performance; this way you avoid the problem of premature judgment. The problem of shielding inferior campaigns can be tackled with other proxy metrics of performance, such as bounce rate, which effectively tells you whether a campaign has even a theoretical chance of producing positive carryover effects. Indeed, regarding the prediction problem, demonstrating an association between high bounce rates and low carryover effects would reinforce this “rule of thumb” even further.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]


Chasing the “true” CPA in digital marketing (for Pros only!)


This is a follow-up post on my earlier post about “fake” conversions — the post is in Finnish but, briefly, it’s about the problem of irreversibility of conversions in the ad platforms’ reporting. In reality, some conversions are cancelled (e.g., product returns), but the current platforms don’t track that.

So, my point was to include a ‘churn coefficient’ that corrects the CPA calculation. In other words, it adjusts the CPA reported by the ad platform (e.g., AdWords) for churn from “conversion” to conversion (as per the previous explanation).

The churn coefficient can be calculated like this:

1 / (1 - churn)

in which churn is the share of reported conversions that do not turn into lasting, real conversions.

However, I got to thinking about this and concluded the following – since we count the churn taking place due to real-world circumstances as a lift to the reported CPA, we should also consider the mitigating factor of customer-to-customer references (i.e., word of mouth).

Consider it like this – on average, converted customers recommend your company to their friends, some of whom convert. That effect would not normally be attributed to the referring customers, but by attributing it uniformly to the average CPAs we can at least account for it in aggregate.

Hence the ‘wom coefficient’:

1-(Cn / Cm), in which

Cn: conversions from new customers non-affiliated with any marketing channel
Cm: conversions from all marketing channels

The idea is that the new visitors who convert can be attributed to wom while conversions from marketing channels create the base of customers who are producing the recommendations. Both pieces of information can be retrieved in GA (for Cn, use an advanced segment).

So, the more accurate formula for “true” CPA calculation would be:

(1 - (Cn / Cm)) * 1/(1 - churn) * CPA
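As a sketch, the whole adjustment fits into one Python function (the figures below are invented for illustration, and true_cpa is my naming, not a platform metric):

```python
def true_cpa(reported_cpa, churn, cn, cm):
    """Adjust a platform-reported CPA for conversion churn and word of mouth.

    churn: share of reported conversions that are later cancelled
    cn:    conversions from new customers not affiliated with any channel
    cm:    conversions from all marketing channels
    """
    churn_coefficient = 1 / (1 - churn)  # fewer real conversions -> higher CPA
    wom_coefficient = 1 - (cn / cm)      # referred conversions -> lower CPA
    return wom_coefficient * churn_coefficient * reported_cpa

# Hypothetical figures: AdWords reports a 20 EUR CPA, 10% of conversions
# churn, and 15 unattributed conversions against 100 channel conversions.
print(round(true_cpa(20.0, 0.10, 15, 100), 2))  # 18.89
```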

In reality, you could of course track at least a part of the recommendations through referral codes (cf. Dropbox). In this case you could have a more accurate wom coefficient.


Consider that in period t, not all Cn are created by Cm. Hence, it would be more realistic to assume a delay, e.g. compare against period t-1 (the reference effect does not show instantly).

The formula does not consider cases where the referred customers come through existing marketing channels (this effect could be eased by not including branded search campaigns in Cm which is a good idea anyway if you want to find out the true performance of the channel in new customer acquisition).

Finally, not all customers from non-marketing channels may originate from wom (especially if the company is using a lot of non-traceable offline marketing). Thus, the wom coefficient could include a parameter that accounts for this effect.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]


Online ad platforms’ leeching logic


Mr. Pitkänen and I had a discussion about unfair advantages in business – e.g., a gift card company’s business model relying on people not redeeming gift cards, investment bankers relying on a monopoly to take 7% of each new IPO, doctors controlling how many new doctors are educated, taxi drivers keeping supply low through licenses, governments inventing new taxes…

It seems that everywhere you look, you’ll find examples of someone messing with the so-called “free market”.

So, what’s the unfair advantage of online ad platforms? It’s something I call ‘leeching logic’. It’s about miscrediting conversions – channel X receives credit for a conversion while channel Y was the primary driver of it.

Let me give you two examples.


You advertise brand X on the radio. A person likes the ad and searches for your brand on Google. He clicks your search ad and buys.

Who gets credited for the sale?

radio ad – 0 conversions
Google – 1 conversion

The conclusion: Google is leeching. In this way, all offline branding essentially creates a lift for search-engine advertising, which sits at a later stage of the purchase funnel and often closes the conversion.


You search for product Y on Google. You see a cool search ad by company A and click it. You also like the product. However, you need time to think and don’t buy yet. Like half the planet, you go to Facebook later that day. There, you’re shown a remarketing ad from company A but don’t really notice it, let alone click it. After thinking about the product for a week, you return to company A’s website and make the purchase.

Who gets credited for the sale?

Google – 1 conversion (30-day click tracking)
Facebook – 1 conversion (28-day view tracking)

In reality, Facebook merely rides on the fact that someone visited a website and, between that visit and the purchase, also visited Facebook – while they learned about the product somewhere else. They didn’t click the retargeting ad or necessarily even cognitively process it, yet the platform reports a conversion because of that ad.

For a long time, Facebook had trouble finding its leeching logic, but now it has finally discovered it. And now, as for other businesses with a leeching logic, the future looks bright. (A good time to invest, if the stock’s P/E weren’t somewhere around 95.)

So, how should marketers deal with the leeches to get a more truthful picture of their actions? Here are a few ideas:

  • exclude brand terms in search when evaluating overall channel performance
  • narrow down lookback window for views in Facebook — can’t remove it, though (because of leeching logic)
  • use attribution modeling (not possible for online-offline but works for digital cross-channel comparisons)
  • dedupe conversions between channels (essentially, the only way to do this is by attribution modeling in 3rd party analytics software, such as GA — platforms’ own reporting doesn’t address this issue)
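The deduplication idea in the last bullet can be illustrated with a toy sketch, assuming each platform export can be keyed by an order id (an assumption – the platforms’ standard reports don’t expose this):

```python
# Hypothetical per-platform conversion exports, keyed by order id.
google_conversions = {"order-1001", "order-1002", "order-1003"}
facebook_conversions = {"order-1002", "order-1003", "order-1004"}

# Each platform happily reports every conversion it touched...
reported_total = len(google_conversions) + len(facebook_conversions)  # 6

# ...but deduplicating by order id reveals the real number of sales.
actual_total = len(google_conversions | facebook_conversions)  # 4

print(f"reported: {reported_total}, deduped: {actual_total}")
```

Two of the six “conversions” the platforms claim are the same sales counted twice.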



I hate to see investors coming into a growing startup… here’s why


I hate to see, from a customer’s perspective, investors coming into a growing Web startup.

Because it only means rising prices.

The logic is this: 1) the investors need a positive return, and 2) the startup is growing because it has created something valuable – in most cases significantly more valuable than what it is charging the customer.

Therefore, the investor logic is to raise the price and narrow down the extant value gap, i.e. charge according to the value provided (or, closer to it). However, most customers will still stay, because they keep getting more value than what they pay, even with increased prices, and therefore the startup can maximize its revenue. In addition, there would be a switching cost associated with finding a new provider, such as learning the new tool, configuring it, exporting/importing data, etc. So basically, this strategy is a form of value transfer from the customer to the startup — or, more correctly, to the investor.

Next, we’ll explore what this means for investors and founders.

1. Implications for investors

The major implication for an investor of course is that it makes sense to identify startups which are growing fast but have not optimized the value capture part of their business model.

However, a major difference lies between having some revenue and having none at all; in the latter case, growth might just be an indicator of popularity, not business potential. (See my dissertation on startup dilemmas for a thorough elaboration of this topic.)

2. Implications for founders

The major implication for a startup is that if you seek funding, price your product well below the value provided, thereby sacrificing unit-level profitability for growth. But if you want to stay away from investors, experiment with raising prices – that way, you alone keep the surplus. Obviously, this is “ceteris paribus”, so it excludes the potential revenue uplift from scaling with investor money. As we know, the only reason to bring in an investor is to grow the size of the business and thereby also increase the founder’s personal profit, regardless of stock dilution.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]


The Basics of Dilemmas



By definition, a dilemma is a trade-off situation in which there are two choices, each leading to a negative outcome.

General solution

A general solution, then, is to weigh the outcomes and compare them against one another.

For example:

choice A: -1
choice B: -2

In this example, choice A has the smaller negative effect, so we’d pick it.


However, there are complications.

Consider that the above are in fact the short-term outcomes, but that there are also long-term outcomes. For example:

choice A: -1, -3
choice B: -2, -1

This leads us to payoff functions, in which the outcomes (payoffs) consist of many variables. In the example, the long-term negative effects outweigh the short-term effects, and we would change our choice to B.

However, the choice can also be arbitrary, meaning that neither choice dominates. In game theory terms, there is no dominant strategy.

This would be the case when

choice A: -1, -2
choice B: -2, -1

As you can see, it doesn’t matter which choice we take since each gives a negative outcome of equal size. There is an exception to this rule, namely when the player has a preference between short- and long-term outcomes. For example, if he wants to minimize long-term damage, he would pick B, and vice versa.

How to apply this in real life?

In decision-making situations, it’s common to make lists of + and -, i.e. listing positive and negative sides. By assigning a numerical value to each, you can calculate the sum and rank the choices. In other words, it becomes easier to make tough decisions.
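The whole procedure fits into a few lines of Python (the weighting parameter is my addition, capturing the short- vs long-term preference discussed above):

```python
def total_payoff(outcomes, long_term_weight=1.0):
    """Sum a (short-term, long-term) payoff pair; a weight above 1
    expresses a preference for minimizing long-term damage."""
    short_term, long_term = outcomes
    return short_term + long_term_weight * long_term

choices = {"A": (-1, -2), "B": (-2, -1)}  # the tied example from the text

# Unweighted, neither choice dominates: both sum to -3...
assert total_payoff(choices["A"]) == total_payoff(choices["B"]) == -3

# ...but a player who fears long-term damage (weight 2) prefers B.
weighted = {c: total_payoff(o, long_term_weight=2) for c, o in choices.items()}
print(max(weighted, key=weighted.get))  # B (A scores -5, B scores -4)
```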

I’m into digital marketing, startups, and platforms. Download my dissertation on startup dilemmas:


Organic reach and the choice of social media platform


(This is work in progress.)


It is a well-established fact that organic reach on a dominant platform decreases over time, as the competition for users’ attention increases. There is thus an inverse relation:

The more competition (by users and firms) in a user’s news feed, the less organic visibility for a firm.

The problem

How should a firm willing to engage in social media approach this matter?

In particular,

  • how should it divide its time and marketing efforts between alternative platforms?
  • when does it make sense for it to diversify?

The analysis

The formula behind the decision is u * o, in which

u = fan base
o = organic reach

  • all else equal, the larger the organic reach, the better
  • all else equal, the larger the fan base, the better

But even on a drastically smaller platform, a large o can offset the relative fan-base advantage.

For example, consider a firm that has a presence on two platforms.

platform A
500M users, 5,000 fans

platform B
10,000 users, 100 fans

At first glance, it would make sense to invest time and effort in platform A, given that both the overall user base and the fan base are significantly larger. However, now consider the inclusion of factor o.

platform A
500M users, 5,000 fans
organic reach 1% = 50 users

platform B
10,000 users, 100 fans
organic reach 90% = 90 users

It now makes sense to shift social media activities to platform B, as it gives a better return on investment in terms of reach gained.

(It is assumed here that post-click actions are directly proportional to the amount of website traffic, and thus do not interfere with the return calculation.)
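A quick Python sketch of the u * o comparison, using the numbers from the example above (effective_reach is my naming):

```python
def effective_reach(fans, organic_reach):
    """Expected users reached per post: fan base times organic reach share."""
    return fans * organic_reach

platforms = {
    "A": {"fans": 5_000, "organic_reach": 0.01},  # huge but cluttered feeds
    "B": {"fans": 100, "organic_reach": 0.90},    # small but high visibility
}

for name, p in platforms.items():
    print(name, effective_reach(p["fans"], p["organic_reach"]))
# Platform B reaches 90 users per post versus A's 50,
# despite a fan base that is 50 times smaller.
```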


More generally,

as organic reach decreases on platform A, platform B, with relatively better organic visibility, becomes more attractive


Firms are advised to consider their social media investments in light of organic reach, and not be fooled by vanity metrics such as the total user base of a platform. Relative metrics, such as the share of organic visibility, matter more.

Entrant platforms can encourage switching behavior by promising firms a larger degree of organic reach. At early stages this does not compromise the utility of users, as their news feeds are not yet cluttered. However, as the entrant platform matures and gains popularity, it will have an incentive to decrease organic reach.

This effect may partially explain why a dominant platform position is never secure: entrants can promise better reach for both friends’ and firms’ posts, thereby giving more feedback on initial posts and a better user experience. This may increase multi-homing behavior and even lead users to desert the dominant platform, since multi-homing has its cost in time and effort.

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas:


About moral hazard and banking crises



The struggle against moral hazard in banking is constant and real. There’s no turn-key solution for eliminating it, but policy makers must keep it in mind at all times.

Consider the following citation from Wikipedia:

“The role of the lender of last resort, and the existence of deposit insurance, both create moral hazard, since they reduce banks’ incentive to avoid making risky loans. They are nonetheless standard practice, as the benefits of collective prevention are commonly believed to outweigh the costs of excessive risk-taking.”

This structural problem, similar to the tragedy of the commons, drives individual bankers into competing with risks. It’s an escalating situation in which one banker takes a slightly larger risk; after seeing that he fares okay, another banker takes a marginally increased risk position, and so on. As a result, the overall risk position of the market escalates little by little, until one trigger event causes a collapse.

Because there is a lender of last resort, the risks for bankers are mitigated (as long as enough bankers participate in “bidding up” the risks). Because there is deposit insurance, the risk for private individuals is eliminated as well, so they continue putting their money on the “roulette table” of the (rationally) greedy banker. The lender of last resort will impose some more regulation, and the bankers promise to behave nicely.

However, there are no fixed threshold rules in how the financial markets work and so “boiling the frog” begins all over again.

What to do?

In my opinion, the best way to counter this effect of excessive risk-taking is to move the collective risk to the individual level, so that bankers would be privately responsible for their bank’s rescue – this could take the form of losing one’s banking license and/or private assets.

This might lead, in times of crisis, to the replacement of an entire generation of bankers, but that would only be fair; in good times they are handsomely compensated, so in bad times of their own making, they must bear the consequences. Moral hazard, by definition, arises when there is a potential that the interests of the agent and the principal differ – by aligning the interests, the problem disappears. The same effect works in reverse; in this case, by aligning the cost of reckless behavior.

The author is a university teacher at the Turku School of Economics. His “hobby” is to keep track of the euro-crisis.


Big data is not enough data


There is a big data fallacy

My argument here is simple – even though it is commonly claimed that “everything is tracked”, marketers face a big data fallacy when assessing their ability to predict consumer behavior.

The reason is explicated here [1]:

“On any given occasion, everything from personal factors such as how well a person has slept the night before, current mood, hunger, and previous choices, to environmental variables such as the weather, the presence of other people, background music, and even ceiling height can influence how a customer responds. Algorithms can use only a handful of variables, which means a lot of weight is inevitably placed on those variables, and often the contextual information that really matters, such as the person’s current physical and emotional condition or the physical environment in which the individual is tweeting, Facebooking, or buying online, isn’t considered.”

Therefore, what is known is simply not enough to accurately predict an individual consumer’s behavior. On average, however, even given the limitation of computable variables, marketing algorithms can enhance marketing performance. But data will never make marketing “perfect” – simply because there is never enough of it.


[1]: Dholakia (2015)

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas:


How to measure offline marketing with online metrics?



The issue with offline marketing is tracking. For many offline marketing efforts, such as exhibitions and networking events, it’s hard to track results.

Participation in these events is often expensive, and the results are evaluated on a qualitative basis. Although qualitative evaluation is better than nothing, quantitative data is obviously better. And in many cases we can get it – all we need is a measuring mindset and a little bit of creativity.

The bottom line is: if you’re spending a lot of money on offline marketing, you have to justify its performance. Otherwise you don’t know how well the money turns into desired outcomes, let alone how well event A compared with event B in terms of performance.

The simple solution

The issue can be solved by using metrics. For example, if we are selling at a trade fair, we can use performance metrics like these:

  • sales (€, qty)
  • number of catalogs and/or flyers distributed
  • number of emails gathered via a lead-generation contest (“give us your email – win prize x”)

Of course, knowing the cost of participation, we can now calculate composite metrics such as:

  • Direct ROI = (sales – cost) / cost
  • Cost per lead (email) = cost / number of emails
  • Cost per catalogue distributed = cost / number of catalogues distributed

These can now be measured against digital channels, and we can evaluate whether or not we’d like to participate in the event in question again, say, next year.
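These composite metrics are simple enough to compute in a few lines; here is a Python sketch with invented trade-fair figures:

```python
def event_metrics(cost, sales, emails, catalogs):
    """Composite performance metrics for a single offline event."""
    return {
        "direct_roi": (sales - cost) / cost,
        "cost_per_lead": cost / emails,
        "cost_per_catalog": cost / catalogs,
    }

# Hypothetical trade fair: 2,000 EUR participation cost, 2,600 EUR in sales,
# 80 emails collected, 500 catalogs distributed.
metrics = event_metrics(cost=2_000, sales=2_600, emails=80, catalogs=500)
for name, value in metrics.items():
    print(name, round(value, 2))
# direct_roi 0.3, cost_per_lead 25.0, cost_per_catalog 4.0
```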

Comparing offline and online performance

During my time as a marketing manager, I’ve come up with different ways to standardize offline metrics, that is, to calculate offline marketing activities so that they are comparable with digital channels.

Here are three ways we’ve been using.

1. Cost per card

  • CPCa = cost of participation / number of business cards collected
  • Compare with: CPL

Networking is an important part of the sales cycle, especially in B2B markets. By quantifying the results, you are able to compare one event against another, as well as compare the results with lead generation (CPL) through digital channels (for this, only include the business cards of potential customers).

2. Cost per catalog

  • CPCat = cost of distribution / number of catalogues distributed
  • Compare with: CPC

In Finland, I’ve found that catalog distribution inside magazines is a cost-effective form of marketing. I compare this metric with Google CPC, i.e. the average cost of a paid visitor via Google. The rationale is that since the catalog is inside the customer’s favorite magazine, she will surely take a look at it (during a reading session you tend to have more time).

3. Cost per festival contact

  • CPF = cost of participation / number of visitors
  • Compare with: CPM

Summer festivals are hot in Finland. Every year, there are more than a dozen big festivals across the country. We participate in some of them together with our suppliers. Festivals most often provide you with the previous year’s visitor count. I find it best to compare this metric with CPM, since the visitors are just hypothetical contacts.

Of course, we can use several metrics; for festivals, I use CPF to evaluate which ones are the most cost-effective (that’s one, but not the only, criterion, since the match between us and the target audience is more important). Then, to evaluate how well we did, I use the other metrics, mainly cost per lead (email) and cost per catalog distributed.

Hopefully this article gave you some useful ideas. If you have something to share, please write it in the comments. Thanks for reading.

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: