Archive for the digital marketing tag


Google and the Prospect of Programmatic



This is a short post taking a stance on programmatic ad platforms. It’s based on one single premise:

Digital convergence will lead to a situation where all ad spend, not only digital, will be managed through self-service, open ad platforms that operate on auction principles

There are several reasons why this is not yet a reality: some relate to traditional media houses’ lack of technological competence, some to their willingness to “protect” premium pricing (a protection that has led to shrinking business and will keep doing so until they open up to free-market pricing), and a host of other factors (I’m currently engaged in a research project studying this phenomenon).

Digital convergence – you what?

Anyway, digital convergence means we’ll end up running campaigns through one or possibly a few ad platforms that all operate according to the same basic principles. They will closely resemble AdWords, because AdWords has been and still is the best advertising platform ever created. Self-service is critical because it eliminates transaction costs in the selling process – in most cases we don’t need media salespeople to operate these platforms. Because we don’t need them, we don’t need to pay their wages, and this efficiency gain can be passed on to prices.

The platforms will be open, meaning there are no minimum media buys – just as with Google and Facebook, you can start with $5 if you want (try doing that now with your local TV media salesperson). Pricing is determined via ad auction, just as in Google and Facebook today. Price levels will drop, but the lowered barrier to entry will increase liquidity and therefore fill inventory more efficiently than human-based bargaining. At least initially I expect some flux in these determinants – media houses will want to impose minimum pricing, but I predict it will go away in time as they realize the value of the free market.

But now, to Google…

If Google were smart, it would develop a programmatic ad platform for TV networks, or even integrate it with AdWords. The same actually applies to all media verticals: radio, print… Google’s potential demise will be this Alphabet business. All the new ideas they’ve had have failed commercially, and focusing on producing more failed ideas unsurprisingly leads to more failure. Their luck – or skill, however you want to take it – has been in understanding the platform business.

Just like Microsoft, Google must have people who understand the platform business.

They’ve done a really good job with vertical integration, mainly with Android and Chrome. These support the core business model. Page’s fantasy-land ideas really don’t. From this point of view, separating Alphabet from the core actually makes sense, as long as the focus is kept on search and advertising.

So, programmatic ad platforms have the potential to disrupt Google, since search spend is still dwarfed by TV and other offline media spend. In light of Google’s supposed understanding of platform dynamics, it’s surprising they’re not taking a stronger stance in bringing programmatic to the masses – and by masses, I mean offline media, where the real money is. Google might be satisficing, and that’s a road to doom.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]


A major change in AdWords – How to react?



Google has made a major change in AdWords. Ads are now shown only in the main column, no longer in the right column. Previously, there were generally eight ads per SERP. For some queries, Google showed no ads at all, and they have also constantly tested the upper limit, e.g. running up to 16 product listing ads per results page.

But what does that mean to an advertiser?


The change effectively reduces the number of ads shown per SERP (search-engine results page). Since the number of advertisers is not reduced (unless rotation is applied, see below), competition intensifies. And since the visibility of search ads is based on a cost-per-click auction, ceteris paribus click prices will go up.

The logical conclusion, therefore, is that when ad placements are cut, either CPC increases (due to higher competition) or impression share decreases (due to rotation). In the former case you pay more for the same number of visitors; in the latter you pay the same click price but get fewer visitors.

Google might in fact prefer ad rotation, i.e. curbing an individual advertiser’s impression share (the number of times your ad is shown out of all the times it could have been shown), because rotation would not impact advertisers’ return on ad spend (ROAS), which is a relative metric. However, it would affect the absolute volume of clicks and, consequently, sales.
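A quick sketch of the two scenarios, using purely hypothetical numbers (the budget, CPC, conversion rate, and order value below are all made up for illustration):

```python
# Compare two ways the reduced ad inventory can play out for one advertiser.
# All figures are hypothetical.

BUDGET = 1000.0        # monthly ad spend cap (EUR)
CONV_RATE = 0.02       # conversion rate per click
VALUE = 50.0           # revenue per conversion (EUR)

def outcome(cpc, impression_share):
    """Clicks, revenue, and ROAS for a given CPC and impression share."""
    clicks = (BUDGET / cpc) * impression_share
    spend = clicks * cpc
    revenue = clicks * CONV_RATE * VALUE
    return clicks, revenue, revenue / spend

base = outcome(cpc=0.50, impression_share=1.0)
higher_cpc = outcome(cpc=0.625, impression_share=1.0)  # competition lifts CPC 25%
rotation = outcome(cpc=0.50, impression_share=0.8)     # rotation cuts exposure 20%

# Higher CPC lowers ROAS; rotation keeps ROAS intact
# but cuts absolute clicks and revenue.
```

The sketch shows why rotation is the Google-friendly option: the relative metric (ROAS) stays put, while the advertiser quietly loses absolute volume.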

Some of my campaigns use a long-tail positioning strategy that this change will affect, since those campaigns target positions 4+, which, as said, are mostly no longer available. Most likely, the change will completely eradicate the possibility of running those campaigns with my low CPC goal.

Why did Google do this?

For Google, this is a beneficial and logical change, since right-column ads command lower CTRs (click-through rates). This has two implications: first, they bring in less money for Google, since its revenue is directly tied to the number of clicks; second, as is commonly known, Google uses CTR as a proxy for user experience (for example, it’s a major component of the Quality Score calculations that determine the true click price).

Therefore, removing poorly performing ad slots while pushing advertisers into increased competition is a beneficial situation for Google. In the wider picture, even with higher click prices, the ROI of Google ads is not easily challenged by any other medium or channel, at least as far as I can see in the near future.

For advertisers, however, it may easily mean higher click prices and therefore diminishing returns from search advertising. This conflict of interest is an unfortunate one for advertisers, especially given the skewed distribution of power in their relationship with Google.

(On a side note, the relationship between advertisers and Google is extremely interesting. I studied it to some extent in my Master’s thesis back in 2009. You can find it here:


I recommend you review the impact of this change on your accounts, either internally or, if you’re using an agency, with them.



The correct way to calculate ROI for online marketing



This is a short post explaining the correct way to calculate ROI for online marketing. I got the idea earlier today while renewing my Google AdWords certificate and seeing this question in the exam:

Now, here’s the trap – I’m arguing most advertisers would choose option C, although the correct one is option A. Let me elaborate.

The problem?

As everybody knows, ROI is calculated with this formula:

ROI = (returns-cost)/cost*100%

The problem is that the cost side is oftentimes seen too narrowly when reporting the performance of online advertising.

ROI is the ‘return on investment’, but the investment should not only be seen to include advertising cost but the cost of the product as well.

Let me give you an example. Here’s the basic information we have of our campaign performance:

  • cost of campaign A: 100€
  • sales from campaign A: 500€

So, applying the formula the ROI is (500-100)/100*100% = 400%

However, in reality we should consider the margin, since that’s highly relevant for the overall profitability of our online marketing. In other words, the cost includes the products sold. Assuming our margin is 15% in this example, we get

  • cost of products sold: 500€*(1-0.15) = 425€

Reapplying the ROI calculation:

(500-(100+425)) / (100+425) * 100% ≈ -4.8%

So, as we can see, the profitability went from +400% to roughly -4.8%.

The implications

The main implication: always consider the margin in your ROI calculation, otherwise you’re not measuring true profitability.

The more accurate formula, therefore, is:

ROI = (returns-(cost of advertising + cost of products sold)) / (cost of advertising + cost of products sold)
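As a minimal sketch, using the figures from the example above, the corrected calculation looks like this:

```python
def roi(returns, ad_cost, product_cost):
    """ROI (%) with the full cost side: advertising plus cost of products sold."""
    total_cost = ad_cost + product_cost
    return (returns - total_cost) / total_cost * 100

# Naive version: product cost ignored
print(roi(500, 100, 0))    # 400.0

# Corrected: with a 15% margin, cost of products sold is 500 * (1 - 0.15) = 425
print(roi(500, 100, 425))  # about -4.8
```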

Another implication is that since ROI depends on margins, products with the same price have different CPA goals. This kind of adjustment is typically ignored in bid setting, even by more advanced systems such as the AdWords Conversion Optimizer, which assumes a uniform CPA goal.
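To illustrate the point about margins and CPA goals (with hypothetical prices and margins): two products selling at the same price can tolerate very different acquisition costs.

```python
def break_even_cpa(price, margin):
    """The highest CPA that still leaves a single sale profitable."""
    return price * margin

# Same 100 EUR price, different margins -> different CPA goals
print(break_even_cpa(100, 0.15))  # low-margin product: ~15 EUR goal
print(break_even_cpa(100, 0.40))  # high-margin product: ~40 EUR goal
```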


Obviously, while the naive ‘basic ROI’ calculation ignores the product on the cost side, it also ignores customer lifetime value on the return side of the equation.



Carryover effects and their measurement in Google Analytics



Carryover effects in marketing are a tricky beast. On one hand, you don’t want to prematurely judge a campaign because the effect of advertising may be delayed. On the other hand, you don’t want bad campaigns to be defended with this same argument.


What’s the solution then? Carryover effects need to be quantified, or treated as if they didn’t exist. Some ways to quantify them are available in Google Analytics:

  • first, there is the time lag report for conversions – this shows how long it has taken customers to convert
  • second, you can increase the inspection window – by looking at a longer period, you can capture more carryover effects (e.g., you ran a major display campaign in July; looking back in December, you might still see effects) [Note that cookie duration limits the tracking, and remember to use UTM parameters for tracking.]
  • third, you can look at assisted conversions to see the carryover effect in conversion paths – many campaigns may not convert directly, but are part of the conversion path.

All these methods, however, are retrospective in nature. Predicting carryover effects is notoriously hard, and I’m not sure it would even be possible with such accuracy that it should be pursued.


In conclusion, I’d advise against being too hasty in drawing conclusions about campaign performance. This way you avoid the problem of premature judgment. The problem of shielding inferior campaigns can be tackled by using other proxy metrics of performance, such as the bounce rate. This would effectively tell you whether a campaign has even a theoretical chance of producing positive carryover effects. Indeed, regarding the prediction problem, proving an association between high bounce rates and low carryover effects would reinforce this “rule of thumb” even further.



Chasing the “true” CPA in digital marketing (for pros only!)


This is a follow-up post on my earlier post about “fake” conversions — the post is in Finnish but, briefly, it’s about the problem of irreversibility of conversions in the ad platforms’ reporting. In reality, some conversions are cancelled (e.g., product returns), but the current platforms don’t track that.

So, my point was to include a ‘churn coefficient’ that corrects the CPA calculation. In other words, it adjusts the CPA reported by the ad platform (e.g., AdWords) for the churn from “conversion” to conversion (as per the previous explanation).

The churn coefficient can be calculated like this:

1/(1-churn)

in which churn is the share of reported conversions that never become lasting, real conversions.

However, I thought about this further and concluded the following: since we treat churn due to real-world circumstances as a lift to the reported CPA, we should also consider the mitigating factor of customer-to-customer references (i.e., word of mouth).

Consider it like this: on average, converted customers recommend your company to their friends, some of whom convert. That effect would not normally be attributed to the referring customers, but by attributing it uniformly to the average CPA we can at least account for it in aggregate.

Hence the ‘wom coefficient’:

1-(Cn / Cm), in which

Cn: conversions from new customers non-affiliated with any marketing channel
Cm: conversions from all marketing channels

The idea is that new visitors who convert can be attributed to wom, while conversions from marketing channels create the base of customers who produce the recommendations. Both pieces of information can be retrieved in GA (for Cn, use an advanced segment).

So, the more accurate formula for “true” CPA calculation would be:

(1-(Cn / Cm)) * 1/(1-churn) * CPA
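As a sketch, the full adjustment can be expressed in code, with made-up inputs (Cn, Cm, and churn as defined above):

```python
def true_cpa(reported_cpa, churn, cn, cm):
    """Adjust a platform-reported CPA for conversion churn and word of mouth.

    churn: share of reported conversions that never become lasting conversions
    cn:    conversions from new customers not affiliated with any marketing channel
    cm:    conversions from all marketing channels
    """
    wom_coefficient = 1 - (cn / cm)      # wom lowers the effective CPA
    churn_coefficient = 1 / (1 - churn)  # churn lifts the effective CPA
    return wom_coefficient * churn_coefficient * reported_cpa

# Example: reported CPA 20 EUR, 10% churn, 30 wom conversions vs 150 from channels
print(true_cpa(20.0, 0.10, 30, 150))  # about 17.8
```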

In reality, you could of course track at least part of the recommendations through referral codes (cf. Dropbox). In that case you could have a more accurate wom coefficient.


Consider that in period t, not all Cn are created by Cm. Hence, it would be more realistic to assume a delay, e.g. comparing to period t-1 (the reference effect does not show up instantly).

The formula does not consider cases where referred customers come through existing marketing channels (this effect could be eased by excluding branded search campaigns from Cm, which is a good idea anyway if you want to find out a channel’s true performance in new-customer acquisition).

Finally, not all customers from non-marketing channels necessarily originate from wom (especially if the company does a lot of non-traceable offline marketing). Thus, the wom coefficient could include a parameter that accounts for this effect.



Online ad platforms’ leeching logic


Mr. Pitkänen and I had a discussion about unfair advantage in business – e.g., a gift-card company’s business model relying on people not redeeming gift cards, investment bankers relying on a monopoly to take 7% of each new IPO, doctors controlling how many new doctors are educated, taxi drivers keeping supply low through licenses, governments inventing new taxes…

It seems that everywhere you look, you’ll find examples of someone messing with the so-called “free market”.

So, what’s the unfair advantage of online ad platforms? It’s something I call ‘leeching logic’. It’s about miscrediting conversions: channel x receives credit for a conversion for which channel y has been the primary driver.

Let me give you two examples.

Example 1: radio and search
You advertise on the radio for brand X. A person likes the ad and searches for your brand on Google. He clicks your search ad and buys.

Who gets credited for the sale?

radio ad – 0 conversions
google – 1 conversion

The conclusion: Google is leeching. In this way, all offline branding essentially creates a lift for search-engine advertising which is located at a later stage of the purchase funnel, often closing the conversion.

Example 2: search and Facebook retargeting
You search for product Y in Google. You see a cool search ad by company A and click it. You also like the product. However, you need time to think and don’t buy it yet. Like half the planet, you go to Facebook later during that day. There, you’re shown a remarketing ad from company A but don’t really notice it, let alone click it. After thinking about the product for a week, you return to company A‘s website and make the purchase.

Who gets credited for the sale?

Google – 1 conversion (30-day click tracking)
Facebook – 1 conversion (28-day view tracking)

In reality, Facebook just rides on the fact that someone visited a website and, in between visiting and making the purchase, also visited Facebook, while they learned about the product somewhere else. They didn’t click the retargeting ad or necessarily even cognitively process it, yet the platform reports a conversion because of that ad.

For a long time, Facebook had trouble finding its leeching logic, but it has finally discovered it. And now, as for other businesses with a leeching logic, the future looks bright. (A good time to invest, if the stock’s P/E weren’t somewhere around 95.)

So, how should marketers deal with the leeches to get a more truthful picture of their actions? Here are a few ideas:

  • exclude brand terms in search when evaluating overall channel performance
  • narrow down the lookback window for views in Facebook – you can’t remove it entirely, though (because of leeching logic)
  • use attribution modeling (not possible for online–offline, but it works for digital cross-channel comparisons)
  • dedupe conversions between channels (essentially, the only way to do this is attribution modeling in third-party analytics software, such as GA – the platforms’ own reporting doesn’t address this issue)



How to measure offline marketing with online metrics?



The issue with offline marketing is tracking. For many offline marketing efforts, such as exhibitions and networking events, it’s hard to track results.

Participation in these events is often expensive, and the results are evaluated on a qualitative basis. Although qualitative evaluation is better than nothing, quantitative data is obviously better. And in many cases we can get it – all we need is a measuring mindset and a little bit of creativity.

The bottom line is: if you’re spending a lot of money on offline marketing, you have to justify its performance. Otherwise you don’t know how well the money turns into desired outcomes, let alone how event A compared with event B in terms of performance.

The simple solution

The issue can be solved by using metrics. For example, if we are selling at a trade fair, I can use performance metrics like these:

  • sales (€, qty)
  • number of catalogs and/or flyers distributed
  • number of emails gathered via a lead-generation contest (“give us your email – win prize x”)

Of course, knowing the cost of participation, we can now calculate composite metrics such as:

  • Direct ROI = (sales – cost) / cost
  • Cost per lead (email) = cost / number of emails
  • Cost per catalogue distributed = cost / number of catalogues distributed

These can now be measured against digital channels, and we can evaluate whether or not we’d like to participate in the event in question again, say, next year.
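With hypothetical trade-fair numbers (all figures invented for illustration), the composite metrics above could be computed like this:

```python
# Hypothetical trade-fair results
cost = 5000.0      # cost of participation (EUR)
sales = 8000.0     # sales made at the stand (EUR)
emails = 250       # emails from the lead-generation contest
catalogs = 2000    # catalogs distributed

direct_roi = (sales - cost) / cost   # 0.6, i.e. 60%
cost_per_lead = cost / emails        # 20.0 EUR per email
cost_per_catalog = cost / catalogs   # 2.5 EUR per catalog
```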

Comparing offline and online performance

During my time as a marketing manager, I’ve come up with different ways to standardize offline metrics, that is, to express offline marketing activities so that they are comparable with digital channels.

Here are three ways we’ve been using.

1. Cost per card

  • CPCa = cost of participation / number of business cards collected
  • Compare with: CPL

Networking is an important part of the sales cycle, especially in B2B markets. By quantifying the results, you are able to compare one event against another, as well as compare the results with lead generation (CPL) through digital channels (for this, only include the business cards of potential customers).

2. Cost per catalog

  • CPCat = cost of distribution / number of catalogues distributed
  • Compare with: CPC

In Finland, I’ve found that catalog distribution inside magazines is a cost-effective form of marketing. I compare this metric with Google CPC, i.e. the average cost of a paid visitor via Google. The rationale is that since the catalog is inside the customer’s favorite magazine, she will surely take a look at it (during a reading session you tend to have more time).

3. Cost per festival contact

  • CPF = cost of participation / number of visitors
  • Compare with: CPM

Summer festivals are hot in Finland. Every year there are more than a dozen big festivals across the country. We participate in some of them together with our suppliers. Festivals most often provide you with the previous year’s visitor numbers. I find it best to compare this metric with CPM, since the visitors are just hypothetical contacts.

Of course, we can use several metrics, so for festivals I use CPF to evaluate which ones are the most cost-effective (that’s one criterion, but not the only one, since the match between us and the target audience is more important). Then, to evaluate how well we did, I use the other metrics, mainly cost per lead (email) and cost per catalog distributed.
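For example, two festivals could be ranked by CPF like this (both the festivals and the figures are invented):

```python
festivals = {
    "Festival A": {"cost": 3000.0, "visitors": 60000},
    "Festival B": {"cost": 4500.0, "visitors": 120000},
}

def cpf_per_thousand(event):
    """Cost per thousand festival contacts, comparable to a CPM-style figure."""
    return event["cost"] / event["visitors"] * 1000

ranked = sorted(festivals, key=lambda name: cpf_per_thousand(festivals[name]))
print(ranked)  # cheapest contacts first
```

Here the bigger event wins on cost efficiency, even though, as noted above, audience match matters more than CPF alone.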

Hopefully this article gave you some useful ideas. If you have something to share, please write it in the comments. Thanks for reading.

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas:


A Few Interesting Digital Analytics Problems… (And Their Solutions)



Here’s a list of analytics problems I devised for a digital analytics course I was teaching (Web & Mobile Analytics, Information Technology Program) at Aalto University in Helsinki. Some solutions to them are also considered.

The problems

  • Last click fallacy = taking only the last interaction into account when analyzing channel or campaign performance (a common problem for standard Google Analytics reports)
  • Analysis paralysis = the inability to know which data to analyze or where to start the analysis process from (a common problem when first facing a new analytics tool 🙂 )
  • Vanity metrics = reporting “show-off” metrics as opposed to ones that are relevant and important for business objectives (a related phenomenon is what I call “metrics fallback”, in which marketers use less relevant metrics basically because they look better than the primary metrics)
  • Aggregation problem = seeing the general trend, but not understanding why it took place (this is a problem of “averages”)
  • Multichannel problem = losing track of users when they move between online and offline (in cross-channel environment, i.e. between digital channels one can track users more easily, but the multichannel problem is a major hurdle for companies interested in knowing the total impact of their campaigns in a given channel)
  • Churn problem = a special case of the aggregation problem; the aggregate numbers show growth whereas in reality we are losing customers
  • Data discrepancy problem = getting different numbers from different platforms (e.g., standard Facebook conversion configuration shows almost always different numbers than GA conversion tracking)
  • Optimization goal dilemma = optimizing for platform-specific metrics leads to suboptimal business results, and vice versa. It’s because platform metrics, such as Quality Score, are meant to optimize competitiveness within the platform, not outside it.

The solutions

  • Last click fallacy → attribution modeling, i.e. accounting for all or select interactions and dividing conversion value between them
  • Analysis paralysis → choosing actionable metrics, grounded in business goals and objectives; this makes it easier to focus instead of just looking at all of the overwhelming data
  • Vanity metrics → choosing the right KPIs (see previous) and sticking to them
  • Aggregation problem → segmenting data (e.g. channel, campaign, geography, time)
  • Multichannel problem → universal analytics (and the associated use of either client ID or customer ID, i.e. a universal connector)
  • Churn problem → cohort analysis (i.e. segment users based on the timepoint of their enrollment)
  • Data discrepancy problem → understanding definitions & limitations of measurement in different ad platforms (e.g., difference between lookback windows in FB and Google), using UTM parameters to track individual campaigns
  • Optimization goal dilemma → making a judgment call, right? Sometimes you need to compromise; not all goals can be reached simultaneously. Ultimately you want business results, but as far as platform-specific optimization helps you get to them, there’s no problem.
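As a concrete illustration of the cohort-analysis solution (segmenting users by the time of their enrollment), here is a minimal sketch with invented user data:

```python
# Each user: signup month and the months they were active (invented data).
users = {
    "u1": ("2016-01", ["2016-01", "2016-02", "2016-03"]),
    "u2": ("2016-01", ["2016-01"]),
    "u3": ("2016-02", ["2016-02", "2016-03"]),
    "u4": ("2016-02", ["2016-02"]),
}

def retention(users, cohort, month):
    """Share of a signup cohort still active in a given month."""
    members = [u for u, (signup, _) in users.items() if signup == cohort]
    active = [u for u in members if month in users[u][1]]
    return len(active) / len(members)

# Aggregate totals could look fine while an individual cohort is churning:
print(retention(users, "2016-01", "2016-03"))  # 0.5
```

Comparing cohorts this way surfaces the churn problem that aggregate growth numbers hide.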

Want to add something to this list? Please write in the comments!

[edit: I’m compiling a larger list of analytics problems. Will update this post once it’s ready.]



Digital Marketing Laws (work in progress…)



This is a work in progress – I’ll keep updating this list as new moments of “heureka” hit me.

Digital marketing laws

  1. The higher the position in a SERP, the higher the CTR
  2. The more a mixed platform gains demand-side popularity, the more it restricts the organic reach of supply-side
  3. Search-engine traffic consistently outperforms social media traffic in direct ROI
  4. People are not stupid (yes, this is why retargeting is not a stairway to heaven)
  5. “it is almost always much cheaper to retain satisfied customers and turn them into repeat business than it is to attract a new, one-time customer.”

Want to add something? Please post it in the comment section!


Using the VRIN model to evaluate web platforms



In this article, I discuss how the classic VRIN model can be used to evaluate modern web platforms.

What is the VRIN model?

It’s one of the most cited models of the resource-based view of the firm. Essentially, it describes how a firm can achieve sustainable competitive advantage through resources that fulfill certain criteria.

These criteria for resources that provide a sustainable competitive advantage are:

  • valuable
  • rare
  • imperfectly imitable
  • non-substitutable

By gaining access to resources of this type, a firm can create a lasting competitive advantage. Note that this framework takes one perspective on strategy, i.e. the resource-based view. Alternatives include e.g. Porter’s five forces and power-based frameworks, among many others.

The “resource” in resource-based view can be defined as some form of input which can be transformed into tangible or intangible output that provides utility or value in the market. In a competitive setting, a firm competes with its resources against other players; what resources it has and how it uses them are key variables in determining the competitive outcome, i.e. success or failure in the market.

How does it apply to web platforms?

In each business environment, there are certain resources that are particularly important. An orange juice factory, for example, requires different resources to be successful than a consulting business (the former needs a good supply of oranges, and the latter bright consultants; both rely on good customer relationships, though).

So, what kind of resources are relevant for online platforms?

I’ll first give a general overview of the VRIN dimensions in the online context, by comparing the online environment with the offline environment.

Valuable
The term ‘value’ is tricky because of its definition: if we define it as something useful, we easily end up in a tautology (a circular argument): a resource is valuable because it is useful to some party.

  • critical for offline: yes (but which resources?)
  • critical for online: yes (but which resources?)

The specific resources for online platforms are discussed later on.

Rare
One of the key preoccupations in economic theory is scarcity: raw materials are scarce and firms need to compete over their exploitation.

  • critical for offline: yes
  • critical for online: no

Offline industries are characterized by rivalry – once oil is consumed, it cannot be reused. Knowledge products on the web, on the other hand, are non-rival: if one consumer downloads an MP3 song, that does not remove another consumer’s ability to download it as well (whereas if a consumer buys a Snickers bar, there is one less for others to buy). Scarcity is usually associated with startups in the sense that they are forced to innovate due to the liability of smallness.

Imperfectly imitable
This deals with how well the business idea can be copied.

  • critical for offline: yes
  • critical for online: no

In “traditional” industries, such as manufacturing, patents and copyrights (IPR) are important. They protect firms against infringement and plagiarism. Without them, every innovation could easily be copied, which would quickly erode any competitive advantage. Intellectual property rights therefore enable the protection of “innovations” against imitation.

Imitation is less of a concern online. In most cases, web technologies are public knowledge (e.g., open source). Even large players contribute to the public domain. Therefore, rather than guarding assets that competitors could not imitate, competition between web platforms tends to emphasize acquiring users rather than patents. (There are also other sources of resource advantage, which we’ll discuss later on.)

Non-substitutable
The difference between imitation and substitution is that in the former you are being copied whereas in the latter your product is being replaced by another solution. For example, Evernote can be replaced by paper and pen.

  • critical for offline: yes (though it depends on the case)
  • critical for online: not so much (see the example of Evernote)

However, I would argue the source of resource advantage comes from something other than immunity to substitution: after all, there are tens of search engines and hundreds of social networks, but the giants still overcome them.

‘Why’ is the question we’re going to examine next.

Important resources for online platforms

Here’s what I think is important:

  1. knowledge
  2. storage/server capacity
  3. users
  4. content
  5. complementors
  6. algorithms
  7. company culture
  8. financing
  9. HQ location

Knowledge means employing the “smartest workers” – this is obviously a highly important resource. As Steve Jobs said, Apple doesn’t hire smart people to tell them what to do; it hires them so that they can tell Apple what to do.

  • valuable: yes
  • rare: no (comes in abundance)
  • imperfectly imitable: no
  • non-substitutable: yes

Storage/server capacity is crucial for web firms. The more users they have, the more important this resource is in order to provide a reliable user experience.

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: yes

Users are crucial given that the platform condition of critical mass is achieved. Critical mass is closely associated with network effects, meaning that the more there are users, the more valuable the platform is.

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: yes

Content is important as well — content is a complement to content platforms, whereas users are complements of social platforms (for more on this typology, see my dissertation).

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: yes

Complementors are antecedents to getting users or content – they are third parties that provide extensions to the core platform, and therefore add to its usefulness for the users.

  • valuable: yes
  • rare: no (depends)
  • imperfectly imitable: yes
  • non-substitutable: no (can be replaced by in-house activities)

Algorithms are proprietary solutions platforms use to solve matching problems.

  • valuable: yes
  • rare: no (depends)
  • imperfectly imitable: no
  • non-substitutable: yes

Company culture is a resource which can be turned into an efficient deployment machine.

  • valuable: yes
  • rare: yes
  • imperfectly imitable: yes
  • non-substitutable: yes

A great company culture may be hard to imitate because its creation requires tacit knowledge.

Financing is an antecedent to acquiring other resources, such as the best team and storage capacity (although it’s not self-evident that money leads to a functional team, as examples in the web industry demonstrate).

  • valuable: yes
  • rare: no (for good businesses)
  • imperfectly imitable: no
  • non-substitutable: no (bootstrapping)

Finally, location is important because it can provide access to a network of partner companies, high-quality employees, and investors (think Silicon Valley) that, again, are linked to the successful use of other resources.

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: no

A location is not a rare asset because it’s always possible to find office space in a given city; similarly, you can follow wherever your competitors go.


What can be learned from this analysis?

First, the “value” in the VRIN framework is self-evident and not very useful for finding differences between resources, unless the list of resources is really wide and not industry-specific.

My list highlights intangible resources as a source of competitive advantage for web platforms. Based on this analysis, company culture is the resource most compatible with the VRIN criteria.

Although I argued that substitutability is less of a concern online than offline, the risk of disruption applies equally to the dominant web platforms. Their large user base protects them against incremental innovations, but not against disruptive ones. However, just as the concept of “value” is tautological, so is disruption – a disruptive innovation is disruptive because it has disrupted an industry, and this can only be stated in hindsight.

Of course, the best executives in the world have seen disruption coming beforehand – e.g. Schibsted and the digital transformation of publishing – but most companies, even big ones like Nokia, have failed to do so.

How to go deeper

Let’s take a look at the big three: Google, Facebook, and eBay. Each is a platform: Google matches searchers with websites (or, alternatively, advertisers with publisher websites (AdSense), or even advertisers with searchers (AdWords)); Facebook matches users with one another (one-sided platform) and advertisers with users (two-sided platform); eBay, as an exchange platform, matches buyers and sellers.

It would be useful to assess how well each of them score in the above resources and how the resources are understood in these companies.
