Archive for the facebook tag

Joni

Here’s Facebook cheating you (and how to avoid it)

english

Here’s how Facebook is cheating advertisers with reporting of video views:

How is that cheating? Well, the advertiser implicitly assumes that ‘video views’ means people who have actually watched the video, which is not the case here. Say you have a 10-second video; this metric does not show people who have watched that video till the end, but only those who have watched the first three seconds — possibly just scrolling their newsfeeds and letting the video autoplay accidentally while quickly browsing forward. Essentially, that kind of exposure is worth closer to zero than the 1 cent Facebook usually reports.

Indeed, other view-based metrics such as CPV (cost per view) should be calculated based on somebody watching the video till the end, but on Facebook a view is counted after just 3 SECONDS. In effect, the real CPV can be many times the figure Facebook reports; in some cases I’ve seen it’s 10x higher.

But aren’t they being honest about this? Sure, they show the correct definition, but a large share of advertisers don’t bother looking at it, or don’t suspect misleading definitions. After all, you should be able to trust that a big, reputable player like Facebook would not screw over advertisers. However, those of us who have played the game for many years know it’s not the first time (remember their definition of “click” a couple of years ago?).

What’s more, there’s no metric for the real CPV in the reports, so advertisers need to calculate it manually (at which point, in my experience, it’s revealed that Facebook video views are typically 5x more expensive than on YouTube).

How to avoid these shenanigans? Simply look at the metric ‘video views to 100%’. This is the real video-view metric you should use: divide your spend by that number, and you will get your true CPV. In other words:

ad cost / views to 100%
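As a sketch with hypothetical figures (substitute your own report numbers), the gap between the reported and the true CPV looks like this:

```python
# Hypothetical campaign figures -- substitute your own report numbers.
ad_cost = 500.00          # total spend
reported_views = 50_000   # Facebook "video views" (3-second views)
views_to_100 = 5_000      # the 'video views to 100%' metric

reported_cpv = ad_cost / reported_views  # the flattering number
true_cpv = ad_cost / views_to_100        # the number that matters

print(f"Reported CPV: {reported_cpv:.2f}")
print(f"True CPV:     {true_cpv:.2f}")  # 10x the reported figure here
```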

Keep your eyes open, my fellow advertisers!

UPDATE: Another good tactic, pointed out by my colleague Tommi Salenius, is to bid for 10-second views in your video campaigns. This is a relatively new feature in Facebook, and although it doesn’t fix the problem, it’s a decent workaround. He also recommended optimizing the “average % viewed” metric, which you can do e.g. by comparing different demographic segments. Finally, Facebook video ads can be seen to have a “social advantage”, referring to people’s ability to comment on and like videos – sometimes this does take place 🙂 The advertiser can also include more text than in YouTube video ads, which has a positive effect on ad prominence. It is then up to the advertiser to consider whether these advantages are worth the cost premium Facebook tends to have in comparison to YouTube.

Joni

Negative tipping and Facebook: Warning signs

english

This Inc article points out a very big danger for Facebook: http://www.inc.com/jeff-bercovici/facebook-sharing-crisis.html

It is widely established in platform theory that reaching a negative tipping point can destroy a platform. Negative tipping is essentially the reverse of positive tipping — instead of gaining momentum, the platform starts quickly losing it.

There are two dimensions I want to look at in this post.

First, what I call “the curse of likes”. Essentially, Facebook has made it too easy to like pages and befriend people; as a result, it is unable to manage people’s newsfeeds optimally in terms of engagement. There is too much clutter, which crowds out important social information, and the “friend” network is too wide for the intimacy required to share personal things. The former reduces engagement; the latter results in unwillingness to share personal information.

Second, if people share less about themselves, it becomes harder for the platform to show them relevant ads. The success of Facebook as a business relies on its revenue model, which is advertising. Both of the aforementioned risks are negative for advertising outcomes. If relevance decreases, both a) user experience (through the negative effects of ads) and b) ad performance decrease as well, resulting in advertisers reducing their ad spend or, in the worst case, moving to other platforms.

To counter these effects, Facebook can resort to a few strategies:

  1. Discourage people from “over-liking” things – this is for their own benefit, so the newsfeed doesn’t get cluttered
  2. Offer easy options to unsubscribe from people and pages — e.g., asking “Do you want to see this?” in relation to posts
  3. Favor social content over news and company posts in the newsfeed algorithms – seeing personal social content is likely to incite more social content
  4. Add sentiment control to the newsfeed algorithm – to many, Facebook seems like a “negative place” with arguments over politics and such. This is in stark contrast to more intimate platforms such as Instagram. Thus, Facebook could incorporate sentiment adjustment in its newsfeed algorithm to emphasize positive content.
  5. Continue efforts to improve ad relevance – especially by giving high-CTR advertisers an incentive to participate by lowering their click prices, thereby encouraging engagement and match-seeking behavior.

Overall, Facebook as a platform will not last forever. But I think the company is well aware of this, since its strategy is to constantly buy out rivals. The platform idea persists even though individual platforms may perish.

Joni

A programmatic buying platform: ideal characteristics

suomeksi

Full-metal digitalist.

The world is changing, my fellow marketers

Digital media is currently shifting to a programmatic buying model, i.e. ads are bought and sold through an ad platform (e.g., Google AdWords, Facebook). Traditional offline media (TV, print, radio) will also move to programmatic buying systems over time, although I estimate this will still take 5-10 years.

Why does programmatic buying win?

The reason is clear:

Programmatic buying is, by its very nature, more efficient than trading through human intermediaries.

From the perspective of economics, ad trading, like all trading, involves transaction costs: price negotiation, packaging, communication, questions, delivering the creatives, reporting, and so on. This is human labor that costs time and effort, and it does not lead to an optimal outcome in terms of price or advertising performance.

A human always loses to an algorithm in efficiency, and advertising is a game of efficiency.

The transaction costs mentioned above can be minimized through programmatic buying. Media salespeople are simply no longer needed in this process; at the same time, advertising becomes cheaper and more democratic. Of course, there will be growing pains in the transition, especially relating to changes in organizational structure and the updating of competences. Business logic also has to move from “premium” thinking to free-market thinking: ad space is worth only as much as the results it delivers to the advertiser, and these will be lower than media houses’ current pricing, which is indeed a negative incentive for accepting the transition.

What are the characteristics of a successful programmatic buying platform?

In my view, they include at least the following:

  • low entry cost: a budget of 5 euros is enough to get started (this brings liquidity to the platform, because experimenting becomes worthwhile for small advertisers too)
  • budget freedom: the advertiser can set the budget freely, with no minimum spend (see above)
  • market-based pricing: typically an algorithmic auction model that encourages truthful bidding (cf. Google’s GSP and Facebook’s VCG model)
  • performance basis: a pricing component that “rewards” better advertisers and thereby compensates for the harm ads cause to end users
  • free targeting: the advertiser can define the targeting themselves (this should NOT be the media house’s “secret information”)

These characteristics matter because international competitors already offer them, and because they have been shown to work in both theoretical and practical analysis.
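To illustrate the market-based pricing point, here is a minimal sketch of generalized second-price (GSP) ranking, deliberately ignoring quality scores, bid increments and reserve prices:

```python
def gsp_prices(bids):
    """Rank bids descending; each winner pays the next-highest bid.
    Simplified: no quality scores, minimum increments, or reserve price."""
    ranked = sorted(bids, reverse=True)
    return [(ranked[i], ranked[i + 1] if i + 1 < len(ranked) else 0.0)
            for i in range(len(ranked))]

# (winning bid, price paid) per ad position:
print(gsp_prices([2.00, 0.50, 1.20]))  # [(2.0, 1.2), (1.2, 0.5), (0.5, 0.0)]
```

Real platforms layer ad quality into both ranking and pricing, but the second-price core is what removes the need for human price negotiation.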

From the advertiser’s point of view, the important aspects are:

  • democracy: anyone can access the platform and use it as a self-service
  • performance basis: you pay for actual clicks/sales, not just impressions
  • targetability: the advertiser can adjust the targeting themselves, which increases the likelihood of relevance and thus reduces advertising’s negative network effect (i.e., customer annoyance)

Targeting options can include, for example:

  • contextual targeting (matching the content with the advertiser’s chosen keywords)
  • demographic targeting (age, gender, language)
  • geographic targeting
  • visitor interests

Some of these may be difficult for media houses to determine, certainly more difficult than for Facebook; however, targeting is critical to advertising success, so the work to obtain this data has to be done.

Conclusion

Programmatic buying platforms are a media house’s core competence, not a purchased service. That is why I believe the industry’s players will aggressively develop their competence in building such platforms. Otherwise, they will keep losing their share of the advertising pie to players like Google and Facebook, which already offer the benefits listed above.

Incidentally, I wrote my master’s thesis on ad exchanges back in 2009, titled “Power of Google: A study on online advertising exchange”; it already touched on these topics.

Joni Salminen
D.Sc. (Economics and Business Administration), marketing
[email protected]

The author teaches digital marketing at the Turku School of Economics.

Joni

Facebook Ads: too high performance might turn against you (theoretically)

english

Introduction

Now, earlier I wrote a post arguing that Facebook has an incentive to lower the CPC of well-targeting advertisers because better targeting improves user experience (in two-sided market terms, relevance through more precise targeting reduces the negative indirect network effects perceived by ad targets). You can read that post here.

However, consider the point from another perspective: the well-targeting advertiser is earning rents (excess profits) from their advertising, which Facebook, as the platform owner, wants and is able to capture.

In this scenario, Facebook has an incentive to actually increase the CPC of a well-targeting advertiser until the advertiser’s marginal profit is aligned with marginal cost. In such a case, it would still make sense for the advertiser to continue investing (so the user experience remains satisfactory), but Facebook’s profit would be increased by the magnitude of the advertiser’s rent.
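A toy calculation shows the size of the rent at stake; every number below is an assumption for illustration, not a figure from Facebook:

```python
# All numbers are assumptions for illustration only.
conversion_rate = 0.05    # purchases per click (assumed)
margin_per_sale = 20.00   # advertiser's profit per purchase (assumed)
value_per_click = conversion_rate * margin_per_sale  # 1.00

current_cpc = 0.30
rent_per_click = value_per_click - current_cpc  # 0.70 kept by the advertiser

# The platform's theoretical ceiling: raise CPC toward the advertiser's
# value per click, so that advertising remains (barely) worthwhile
# while the platform captures the rent.
ceiling_cpc = value_per_click
```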

Problem of private information

This would require Facebook to know the profit function of its advertisers, which for now is likely the advertisers’ private information. But if Facebook had this information, it could factor it into the click-price calculation. Now, obviously that would violate the “objective” nature of Facebook’s VCG ad auction — it’s currently set to consider maximum CPC and ad performance (negative feedback, CTR, but not profit, as far as I know). However, advertisers would not be able to monitor the use of their profit function, because the actual ad auctions are carried out in a black box (i.e., asymmetric information). Thus, the scenario represents a type of moral hazard for Facebook – a potential risk the advertisers may not be aware of.

Origin of the idea

I actually got this idea from one of my students, who said: “oh, I don’t think micro-targeting is useful”. I asked why, and he said: “because Facebook is probably charging too much for it”. I told him that’s not the case, but also that it could be, and that the idea is interesting. Here I have just elaborated it a bit further.

Also read this article about micro-targeting.

Micro-targeting is super interesting for B2B and personal branding (e.g., job seeking).

Another related point, which might interest you, Jim (in case you’re reading this :), is the platform owner’s ability to distribute profitable keywords among advertisers in search advertising. For example, Google could control impression share so that each advertiser receives a satisfactory (given their profit function) portion of traffic WHILE optimizing its own return.

Conclusion

This idea is not well developed, though; it rests on the notion that there is heterogeneity in advertisers’ willingness to pay (arising e.g. from differences in margins, average order values, operational efficiency and such) that the platform owner could exploit. I suspect the second-price auction may already account for this as long as advertisers bid truthfully, in which case there’s no need for such “manipulation” by Google, as prices are always pushed to the maximum anyway. So, just a random idea at this point.

Joni

Facebook ad testing: are more ads better?

english

Yellow ad, red ad… Does it matter in the end?

Introduction

I used to think differently about creating ad variations, but having tested both methods I’ve changed my mind. Read the explanation below.

There are two alternative approaches to ad testing:

  1. “Qwaya” method* — you create some base elements (headlines, copy texts, pictures), out of which a tool will create up to hundreds of ad variations
  2. “Careful advertiser” method — you create hand-crafted creatives, maybe three (version A, B, C) which you test against one another.

In both cases, you can calculate performance differences between the ad versions and choose the winning design. The rationale of the first method is that it “covers more ground”, i.e. it comes up with variations that we wouldn’t have tried otherwise (due to lack of time or other reasons).

Failure of large search space

I used to advocate the first method, but it has three major downsides:

  1. it requires a lot more data to reach statistical significance,
  2. false positives may emerge in the process (the more variations you test, the more likely some “winner” wins by pure chance), and
  3. lack of internal coherence is likely to arise, due to inconsistency among creative elements (e.g., a mismatch between copy text and image may result in awkward messages).

Clearly though, a human must generate enough variation in their ad versions when seeking a globally optimal solution. This can be done by a) making drastically different (e.g., humor vs. informativeness) as opposed to incrementally different ad versions, and b) covering the extremes of different creative dimensions (e.g., humor: subtle/radical; informativeness: all benefits/main benefit).
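The data-hunger of the first method is simple arithmetic; the per-version click count below is a rough illustration, not a statistical standard:

```python
# Combinatorial explosion of tool-generated ad variations.
headlines, copy_texts, images = 5, 4, 6
variations = headlines * copy_texts * images  # 120 ad versions

# Rough illustration: suppose each version needs ~300 clicks to compare
# reliably (the real number depends on effect size and significance level).
clicks_per_version = 300
tool_method_clicks = variations * clicks_per_version  # 36,000 clicks
careful_method_clicks = 3 * clicks_per_version        # 900 for A/B/C
```

Even with modest element counts, the tool-generated search space needs an order of magnitude more traffic than a hand-crafted three-way test.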

Conclusion

Overall, this argument is an example of how marketing automation may not always be the best way to go! And as a corollary, the creative work done by humans is hard for machines to replace when seeking optimal creative solutions.

*Named after the Swedish Facebook advertising tool Qwaya which uses this feature as one of their selling points.

Joni

Facebook’s Incentive to Reward Precise Targeting

english

Facebook has an incentive to lower the advertising cost for more precise targeting by advertisers.

What, why?

Because, by definition, the more precise the targeting is, the more relevant it is for end users. Given the standard nature of ads (as in: a negative indirect network effect vis-à-vis users), the more relevant they are, the less dissatisfied the users. What’s more, user satisfaction is also tied to the performance of the ads (a positive indirect network effect: the more satisfied the users, the better the ad performance), which should thus be better with more precise targeting.

Now, the relevance of ads can be improved by automatic means such as epsilon-greedy algorithms, and this is traditionally seen as Facebook’s advantage (right, Kalle?), but the real question is: is that more efficient than “marketer’s intuition”?

I’d in fact argue that — contrary to my usual skepticism about marketer’s intuition and its fallibility — it is helpful here, and its use at least enables narrowing down the optimal audience faster.

…okay, why is that then?

Because it’s never only about the audience, but about the match between the message and the audience — if the message stays the same while the audience varies, narrowing is still useful because the search space for Facebook’s algorithm is smaller — pre-qualified by humans, in a sense.

But there’s an even more important property – by narrowing down the audience, the marketer is able to re-adjust their message to that particular audience, thereby increasing relevance (the “match” between preferences of the audience members and the message shown to them). This is hugely important because of the inherent combinatory nature of advertising — you cannot separate the targeting and message when measuring performance, it’s always performance = targeting * message.
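The multiplicative claim can be sketched with toy scores; the 0-to-1 scale and the numbers are illustrative only:

```python
# performance = targeting * message, scores on an illustrative 0..1 scale.
def performance(targeting: float, message: float) -> float:
    return targeting * message

# A brilliant message shown to a poorly matched audience loses to a
# decent match on both dimensions -- the product punishes imbalance:
mismatched = performance(0.9, 0.2)   # ~0.18
balanced = performance(0.6, 0.6)     # ~0.36
```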

Therefore, Facebook does have an incentive to encourage advertisers toward more precise targeting and to reward it by providing a lower CPC. I’m not sure they are doing this, though, because it requires assigning a weighted bid to advertisers with more precise targeting — consider advertiser A, who is mass-advertising to everyone in some large set X, vs. advertiser B, who is competing for part of the same audience, i.e. a sub-set x – they are both in the same auction, but the latter should be compensated for his more precise targeting.

Concluding remarks

Perhaps this is factored in through Relevance Score and/or performance adjustment in the actual rank and CPC. That would yield the same outcome, given that the above mentioned dynamics hold, i.e. there’s a correlation between a more precise targeting and ad performance.

Joni

Online ad platforms’ leeching logic

english

Mr. Pitkänen and I had a discussion about unfair advantage in business – e.g., a gift card company’s business model relying on people not redeeming gift cards, investment bankers relying on a monopoly to take 7% of each new IPO, doctors controlling how many new doctors are educated, taxi drivers keeping supply low through licenses, governments inventing new taxes…

It seems that everywhere you look, you’ll find examples of someone messing with the so-called “free market”.

So, what’s the unfair advantage of online ad platforms? It’s something I call ‘leeching logic’. It’s about miscrediting conversions – channel x receives credit for a conversion while channel y has been its primary driver.

Let me give you two examples.

EXAMPLE 1:

You advertise on the radio for brand X. A person likes the ad and searches for your brand on Google. He clicks your search ad and buys.

Who gets credited for the sale?

Radio ad – 0 conversions
Google – 1 conversion

The conclusion: Google is leeching. In this way, all offline branding essentially creates a lift for search-engine advertising which is located at a later stage of the purchase funnel, often closing the conversion.

EXAMPLE 2:

You search for product Y in Google. You see a cool search ad by company A and click it. You also like the product. However, you need time to think and don’t buy it yet. Like half the planet, you go to Facebook later during that day. There, you’re shown a remarketing ad from company A but don’t really notice it, let alone click it. After thinking about the product for a week, you return to company A‘s website and make the purchase.

Who gets credited for the sale?

Google – 1 conversion (30-day click tracking)
Facebook – 1 conversion (28-day view tracking)

In reality, Facebook just rides on the fact that someone visited a website and, in between, also visited Facebook before making the purchase, while they learned about the product somewhere else. They didn’t click the retargeting ad or necessarily even cognitively process it, yet the platform reports a conversion because of that ad.

For a long time, Facebook had trouble finding its leeching logic, but now it has finally discovered it. And now, as with other businesses that have a leeching logic, the future looks bright. (A good time to invest, if the stock’s P/E weren’t somewhere around 95.)

So, how should marketers deal with the leeches to get a more truthful picture of our actions? Here are a few ideas:

  • exclude brand terms in search when evaluating overall channel performance
  • narrow down the lookback window for views in Facebook — you can’t remove it entirely, though (because of leeching logic)
  • use attribution modeling (not possible for online-offline, but it works for digital cross-channel comparisons)
  • dedupe conversions between channels (essentially, the only way to do this is attribution modeling in 3rd-party analytics software, such as GA — the platforms’ own reporting doesn’t address this issue)
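The dedupe problem can be sketched with the journey from Example 2: both platforms self-report a conversion for the same single sale, while one cross-channel rule (here, last click) credits only one channel:

```python
# The journey from Example 2, as (channel, interaction) pairs.
journey = [
    ("google_search", "click"),
    ("facebook_remarketing", "view"),  # seen in the feed, never clicked
    ("direct", "visit"),               # returns a week later and buys
]

# Platform self-reporting: each platform credits itself,
# so one sale shows up as two "conversions".
platform_reported = {"google": 1, "facebook": 1}

# A single cross-channel rule (last click, ignoring views) dedupes this.
clicks = [channel for channel, kind in journey if kind == "click"]
credited = clicks[-1] if clicks else "direct"
print(credited)  # google_search
```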

Joni

A Few Interesting Digital Analytics Problems… (And Their Solutions)

english

Introduction

Here’s a list of analytics problems I devised for a digital analytics course I was teaching (Web & Mobile Analytics, Information Technology Program) at Aalto University in Helsinki. Some solutions to them are also considered.

The problems

  • Last click fallacy = taking only the last interaction into account when analyzing channel or campaign performance (a common problem for standard Google Analytics reports)
  • Analysis paralysis = the inability to know which data to analyze or where to start the analysis process (a common problem when first facing a new analytics tool 🙂 )
  • Vanity metrics = reporting “show-off” metrics as opposed to ones that are relevant and important for business objectives (a related phenomenon is what I call “metrics fallback”, in which marketers use less relevant metrics basically because they look better than the primary metrics)
  • Aggregation problem = seeing the general trend but not understanding why it took place (this is a problem of “averages”)
  • Multichannel problem = losing track of users when they move between online and offline (in a cross-channel environment, i.e. between digital channels, users can be tracked more easily, but the multichannel problem is a major hurdle for companies interested in knowing the total impact of their campaigns in a given channel)
  • Churn problem = a special case of the aggregation problem; the aggregate numbers show growth whereas in reality we are losing customers
  • Data discrepancy problem = getting different numbers from different platforms (e.g., a standard Facebook conversion configuration almost always shows different numbers than GA conversion tracking)
  • Optimization goal dilemma = optimizing for platform-specific metrics leads to suboptimal business results, and vice versa. This is because platform metrics, such as Quality Score, are meant to optimize competitiveness within the platform, not outside it.

The solutions

  • Last click fallacy → attribution modeling, i.e. accounting for all or select interactions and dividing conversion value between them
  • Analysis paralysis → choosing actionable metrics, grounded in business goals and objectives; this makes it easier to focus instead of just looking at all of the overwhelming data
  • Vanity metrics → choosing the right KPIs (see previous) and sticking to them
  • Aggregation problem → segmenting data (e.g. channel, campaign, geography, time)
  • Multichannel problem → universal analytics (and the associated use of either client ID or customer ID, i.e. a universal connector)
  • Churn problem → cohort analysis (i.e. segment users based on the timepoint of their enrollment)
  • Data discrepancy problem → understanding definitions & limitations of measurement in different ad platforms (e.g., difference between lookback windows in FB and Google), using UTM parameters to track individual campaigns
  • Optimization goal dilemma → making a judgment call, right? Sometimes you need to compromise; not all goals can be reached simultaneously. Ultimately you want business results, but as long as platform-specific optimization helps you get there, there’s no problem.
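The cohort-analysis remedy for the churn problem can be sketched like this (the user data is made up for illustration):

```python
from collections import defaultdict

# Made-up users: (id, signup month, months in which they were active).
users = [
    (1, "2016-01", {"2016-01", "2016-02", "2016-03"}),
    (2, "2016-01", {"2016-01"}),  # churned after the first month
    (3, "2016-02", {"2016-02", "2016-03"}),
    (4, "2016-03", {"2016-03"}),
]

cohorts = defaultdict(list)
for _, signup, active_months in users:
    cohorts[signup].append(active_months)

# Aggregate actives per month can keep growing thanks to new signups,
# yet the January cohort has already lost half its users by March:
jan_cohort = cohorts["2016-01"]
jan_retention = sum("2016-03" in m for m in jan_cohort) / len(jan_cohort)
print(jan_retention)  # 0.5
```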

Want to add something to this list? Please write in the comments!

[edit: I’m compiling a larger list of analytics problems. Will update this post once it’s ready.]

Learn more

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f

Joni

Using the VRIN model to evaluate web platforms

english

Introduction

In this article, I discuss how the classic VRIN model can be used to evaluate modern web platforms.

What is the VRIN model?

It’s one of the most cited models of the resource-based view of the firm. Essentially, it describes how a firm can achieve sustainable competitive advantage through resources that fulfill certain criteria.

These criteria for resources that provide a sustainable competitive advantage are:

  • valuable
  • rare
  • imperfectly imitable
  • non-substitutable

By gaining access to resources of this type, a firm can create a lasting competitive advantage. Note that this framework takes one particular perspective on strategy, namely the resource-based view. Alternatives include e.g. Porter’s five forces and power-based frameworks, among many others.

The “resource” in resource-based view can be defined as some form of input which can be transformed into tangible or intangible output that provides utility or value in the market. In a competitive setting, a firm competes with its resources against other players; what resources it has and how it uses them are key variables in determining the competitive outcome, i.e. success or failure in the market.

How does it apply to web platforms?

In each business environment, there are certain resources that are particularly important. An orange juice factory, for example, requires different resources to be successful than a consulting business (the former needs a good supply of oranges, and the latter bright consultants; both rely on good customer relationships, though).

So, what kind of resources are relevant for online platforms?

I will first give a general overview of the VRIN dimensions in the online context, comparing the online environment with the offline environment.

Value:

The term ‘value’ is tricky because of its definition: if we define it as something useful, we easily end up in a tautology (circular argument): a resource is valuable because it is useful for some party.

  • critical for offline: yes (but which resources?)
  • critical for online: yes (but which resources?)

The specific resources for online platforms are discussed later on.

Rarity:

One of the key preoccupations in economic theory is scarcity: raw materials are scarce and firms need to compete over their exploitation.

  • critical for offline: yes
  • critical for online: no

Offline industries are characterized by rivalry: once oil is consumed, it cannot be reused. Knowledge products on the web, on the other hand, are non-rival: if one consumer downloads an MP3 song, that does not remove another consumer’s ability to download it as well (whereas if a consumer buys a Snickers bar, there is one less for others to buy). Scarcity is usually associated with startups, which are forced to innovate due to the liability of smallness.

Imitability:

This deals with how well the business idea can be copied.

  • critical for offline: yes
  • critical for online: no

In “traditional” industries, such as manufacturing, patents and copyrights (IPR) are important. They protect firms against infringement and plagiarism. Without them, every innovation could be easily copied, which would quickly erode any competitive advantage. Intellectual property rights therefore enable the protection of innovations against imitation.

Imitation is less of a concern online. In most cases, web technologies are public knowledge (e.g., open source); even large players contribute to the public domain. Therefore, rather than guarding something competitors cannot imitate, competition between web platforms tends to emphasize acquiring users rather than patents. (There are also other sources of resource advantage, discussed later on.)

Substitutability:

The difference between imitation and substitution is that in the former you are being copied whereas in the latter your product is being replaced by another solution. For example, Evernote can be replaced by paper and pen.

  • critical for offline: yes (depends on the case, though)
  • critical for online: not really (see the Evernote example)

However, I would argue the source of resource advantage comes from something other than immunity to substitution: after all, there are dozens of search engines and hundreds of social networks, yet the giants still prevail over them.

‘Why’ is the question we’re going to examine next.

Important resources for online platforms

Here’s what I think is important:

  1. knowledge
  2. storage/server capacity
  3. users
  4. content
  5. complementors
  6. algorithms
  7. company culture
  8. financing
  9. HQ location

Knowledge means having the “smartest workers” – obviously a highly important resource. As Steve Jobs put it, Apple doesn’t hire smart people to tell them what to do; it hires smart people so they can tell Apple what to do.

  • valuable: yes
  • rare: no (comes in abundance)
  • imperfectly imitable: no
  • non-substitutable: yes

Storage/server capacity is crucial for web firms. The more users they have, the more important this resource is in order to provide a reliable user experience.

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: yes

Users are crucial given that the platform condition of critical mass is achieved. Critical mass is closely associated with network effects, meaning that the more there are users, the more valuable the platform is.

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: yes

Content is important as well — content is a complement to content platforms, whereas users are complements of social platforms (for more on this typology, see my dissertation).

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: yes

Complementors are antecedents to getting users or content – they are third parties that provide extensions to the core platform and therefore add to its usefulness for the users.

  • valuable: yes
  • rare: no (depends)
  • imperfectly imitable: yes
  • non-substitutable: no (can be replaced by in-house activities)

Algorithms are proprietary solutions platforms use to solve matching problems.

  • valuable: yes
  • rare: no (depends)
  • imperfectly imitable: no
  • non-substitutable: yes

Company culture is a resource which can be turned into an efficient deployment machine.

  • valuable: yes
  • rare: yes
  • imperfectly imitable: yes
  • non-substitutable: yes

A great company culture may be hard to imitate because its creation requires tacit knowledge.

Financing is an antecedent to acquiring other resources, such as the best team and storage capacity (although it’s not self-evident that money leads to a functional team, as examples in the web industry demonstrate).

  • valuable: yes
  • rare: no (for good businesses)
  • imperfectly imitable: no
  • non-substitutable: no (bootstrapping)

Finally, location is important because it can provide access to a network of partner companies, high-quality employees and investors (think Silicon Valley) that, again, are linked to the successful use of other resources.

  • valuable: yes
  • rare: no
  • imperfectly imitable: no
  • non-substitutable: no

A location is not a rare asset because it’s always possible to find an office space in a given city; similarly, you can follow where your competitors go.
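The judgments above can be tabulated to check which resource meets all four criteria; the booleans simply encode the assessments given in this post:

```python
# Each tuple: (valuable, rare, imperfectly imitable, non-substitutable),
# encoding the assessments from the text above.
resources = {
    "knowledge":       (True, False, False, True),
    "server capacity": (True, False, False, True),
    "users":           (True, False, False, True),
    "content":         (True, False, False, True),
    "complementors":   (True, False, True,  False),
    "algorithms":      (True, False, False, True),
    "company culture": (True, True,  True,  True),
    "financing":       (True, False, False, False),
    "HQ location":     (True, False, False, False),
}

fully_vrin = [name for name, criteria in resources.items() if all(criteria)]
print(fully_vrin)  # ['company culture']
```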

Conclusions

What can be learned from this analysis?

First, the “value” in the VRIN framework is self-evident and not very useful for telling resources apart, unless the list of resources is really wide and not industry-specific; here, the list was industry-specific.

My list highlights intangible resources as a source of competitive advantage for web platforms. Based on this analysis, company culture is the resource most compatible with the VRIN criteria.

Although it was argued that substitutability is less of a concern online than offline, the risk of disruption applies just as much to the dominant web platforms. Their large user base protects them against incremental innovations, but not against disruptive ones. However, just as the concept of “value” is tautological, so is disruption: a disruptive innovation is disruptive because it has disrupted an industry, and this can only be stated in hindsight.

Of course, the best executives in the world have seen disruption coming, e.g. Schibsted and the digital transformation of publishing, but most companies, even big ones like Nokia, have failed to do so.

How to go deeper

Let’s take a look at the big three: Google, Facebook and eBay. Each one is a platform: Google matches searchers with websites, advertisers with publisher websites (AdSense), and advertisers with searchers (AdWords); Facebook matches users with one another (one-sided platform) and advertisers with users (two-sided platform); eBay, as an exchange platform, matches buyers and sellers.

It would be useful to assess how well each of them scores on the above resources, and how the resources are understood in these companies.

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f

Joni

How to use Facebook in marketing segmentation?

english

Introduction

This article discusses the potential of segmentation in Facebook advertising.

Why is segmentation needed?

Segmentation is one of the most fundamental concepts in marketing. Its goal is to identify the best match between the firm’s offering and the market, i.e. find a sub-set of customers who are most likely to buy the product and who therefore can be targeted cost-effectively by means of niche marketing rather than mass marketing.

There are some premises as to why segmentation works:

  • Not all buyers are alike.
  • Sub-groups of people with similar behavior, backgrounds, values and needs can be identified.
  • The sub-groups will be smaller and more homogeneous than the market as a whole.
  • It is easier to satisfy a small group of similar customers than to try to satisfy large groups of dissimilar customers.

(The list is a direct citation from Essentials of Marketing by Jim Blythe, p. 76.)

While segmentation is about dividing the overall market into smaller pieces (segments), targeting is about selecting the appropriate marketing channels to reach those customer segments. Finally, positioning deals with message formulation in an attempt to position the firm and its offerings relative to competitors (e.g., cheaper, better quality). This is the basic marketing model called STP (segmentation, targeting, positioning).

How to apply segmentation in Facebook?

I will next discuss three stages of Facebook campaign creation.

1. Before the campaign

There are a few options for creating basic segments.

  • generate marketing personas (advantage: makes you assume customer perspective; weakness: vulnerable to marketer’s intuition, i.e. tendency to assume you know your customer whereas in reality you don’t)
  • conduct market research (advantage: suited to your particular case; weakness: costly and takes time)
  • buy consumer research reports (advantage: large sample sizes, comprehensive; weakness: the reports tend to be very general)
  • use Facebook Audience Insights (advantage: specific to Facebook; weakness: gives little behavioral data)

The existence of weaknesses is okay – the whole point of segmentation is to gather REAL data, which is stronger than a priori assumptions.

Based on the insights you’ve gathered, create Saved target groups in Facebook. These incorporate the segments you want to target. If you are using an ad management tool such as Smartly, you can split audiences into smaller micro-segments, e.g. by age, gender and location. Say you have a general segment of women aged 25-50; you could split it into the following micro-segments by using an interval of five years:

  • women 25-30
  • women 31-36
  • women 37-42
  • women 43-48

The advantage of micro-segments is more granular segmentation; however, the risk is going too granular while ignoring the real-world reason for differences (sometimes the performance difference between two micro-segments is just statistical noise).
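The five-year split above can also be generated programmatically. The helper below is my own illustration, not anything Facebook provides; note that splitting 25-50 into five-year steps leaves a leftover 49-50 bracket at the end.

```python
# Sketch: split a broad segment (e.g. women 25-50) into five-year
# micro-segments. Illustrative helper, not a Facebook API.
def micro_segments(gender, age_min, age_max, step=5):
    segments = []
    lo = age_min
    while lo <= age_max:
        hi = min(lo + step, age_max)      # clamp the last bracket to age_max
        segments.append(f"{gender} {lo}-{hi}")
        lo = hi + 1
    return segments

print(micro_segments("women", 25, 50))
# ['women 25-30', 'women 31-36', 'women 37-42', 'women 43-48', 'women 49-50']
```

The same helper works for any bracket width, so you can experiment with coarser or finer splits without retyping the lists.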

After creating the segments in Facebook (reflected in Saved target groups), you want to test how they perform — so as to see how well your assumptions on the effectiveness of these segments are working. For this, create campaigns and let them run. In Power Editor, go to the Custom audiences (select from the sliding menu), select the segments you want to test and choose to create new ad groups. (See, now we have moved from segmentation into targeting, which is the natural step in the STP model.)

NB! If you particularly want to test customer segments, keep everything else (campaign settings, creatives) the same. In Power Editor, this is fairly simple to execute by copy-pasting the creatives between ad groups. This reduces the risk that the performance differences between various segments are a result of some other factor than targeting. Finally, name the ad sets to reflect the segment you are testing (e.g. Women 25-31).
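The “keep everything else the same” rule can be sketched in a tiny script. The dict structures, field names and segment labels below are illustrative stand-ins, not actual Power Editor or Marketing API objects.

```python
# Sketch: build ad sets that vary ONLY by targeting, holding the creative
# constant, and name each ad set after the segment it tests.
segments = ["Women 25-31", "Women 32-38", "Women 39-45"]  # hypothetical segments
creative = {"headline": "Summer sale -50%", "image": "sale.jpg"}  # identical everywhere

ad_sets = [
    {"name": seg, "targeting": seg, "creative": dict(creative)}  # copy per ad set
    for seg in segments
]

# Every ad set now differs only in targeting, so performance differences
# can be attributed to the segment rather than the creative.
assert all(ad_set["creative"] == creative for ad_set in ad_sets)
```

Naming the ad set after its segment, as in the `name` field above, is what later lets you read the results at a glance.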

2. During the campaign

After a week or so, go back to check the results. Since you’ve named the segments appropriately, you can quickly see the performance differences between the segments. To make sure the differences are statistically valid (if you are not using a tool such as Smartly), use a calculator to determine the statistical significance. I created one which can be downloaded here.
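If you prefer a quick script to a spreadsheet, a standard two-proportion z-test does the same job. This generic sketch is not the downloadable calculator mentioned above, and the conversion counts are made up for the example.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two segments' conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: segment A converted 120/2000, segment B 90/2000.
z = two_proportion_z(120, 2000, 90, 2000)
print(abs(z) > 1.96)  # True -> significant at the 95% confidence level
```

With |z| above 1.96 you can treat the difference between the two segments as statistically significant at the 95% level; below that, assume noise and keep the test running.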

When interpreting results, remember that the outcome is a combination of segment and message (and that the message is a combination of substance and tone, i.e. what is said and how it is said). In other words,

Result = segment x message, in which message = substance x tone, so that
Result = segment x (substance x tone)

Therefore, as you change the message, performance across the various segments changes with it. This means you are not actually testing the suitability of your product to the segment (which is what segmentation and targeting are all about), but the match between the message and the target audience. Although this may seem like semantics, it’s actually pretty important. You want to make sure you’re not getting a misleading response from your segment due to issues in message formulation (i.e. talking to them in the “wrong way”), and so you want the message to reflect the product as well as possible. Ideally, you’d tailor your message based on your ideas of the segment, BUT avoid this in the early stage, because the message formulation must not interfere with testing the segments’ performance.

How to solve this problem, then? Three ways: first, make sure the segments you are testing are not too far apart – e.g. showing the same message to women aged 17 and men aged 45 can create issues. Second, formulate a general message to begin with, so it doesn’t exclude any segments. Third, you could of course make slight modifications to the message while testing the segments — here I would still keep the substance (e.g. a cheap price) stable across segments while perhaps changing the tone (e.g. the type of words used) depending on the audience – for example, older people are usually addressed in a different tone than a younger audience (yo!).

Finally, one extra tip! If you want more granular data on how different groups within your segment have performed, go to Ad reports and check out the data breakdowns. There is a wealth of information there which can be used in creating further micro-segments.

3. After the campaign

What to do when you know which segments are the most profitable? Take the results you’ve got and generalize them to your other marketing activities. For example, when you’re buying print ads, ask the publisher for the demographic data they have on readers — it has to be accurate and based on research, not guesses — and choose the media that matches your best-performing segments according to your Facebook data. In my opinion, there is no major reason to assume that people in the same segment would act differently on Facebook and elsewhere (strictly speaking, the only potential issue I can think of is that Facebook users are more “advanced” in their technology use than offline audiences, but this is generally a small problem, since such a large share of the population in most markets uses Facebook).

There you go – hopefully this article has given you some useful ideas on the relationship between segmentation and Facebook advertising!

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f