
Tag: digital marketing

Facebook Ads: too high performance might turn on you (theoretically)

Introduction

Earlier I wrote a post arguing that Facebook has an incentive to lower the CPC of advertisers who target well, because better targeting improves user experience (in two-sided market terms, more precise targeting increases relevance and thereby reduces the negative indirect network effects perceived by ad targets). You can read that post here.

However, consider the point from another perspective: the well-targeting advertiser is making rents (excessive profits) from their advertising, which Facebook, as the platform owner, wants and is able to capture.

In this scenario, Facebook has an incentive to actually increase the CPC of a well-targeting advertiser until the advertiser’s marginal profit is aligned with marginal cost. In such a case, it would still make sense for the advertiser to continue investing (so the user experience remains satisfactory), but Facebook’s profit would be increased by the magnitude of the advertiser’s rent.

Problem of private information

This would require Facebook to know the profit functions of its advertisers, which for now remain private information held by the advertisers. But if Facebook had this information, it could factor it into the click-price calculation. Obviously, that would violate the “objective” nature of Facebook’s VCG ad auction, which is currently set to consider the maximum CPC and ad performance (negative feedback and CTR, but not profit, as far as I know). However, advertisers would not be able to monitor the use of their profit functions, because the precise ad auctions are carried out in a black box (i.e., asymmetric information). The scenario thus represents a type of moral hazard for Facebook: a potential risk the advertisers may not be aware of.

Origin of the idea

I actually got this idea from one of my students, who said, “Oh, I don’t think micro-targeting is useful.” I asked why, and he said, “Because Facebook is probably charging too much for it.” I told him that’s not the case, but also that it could be, and that the idea is interesting. Here I have just elaborated it a bit further.

Also read this article about micro-targeting.

Micro-targeting is super interesting for B2B and personal branding (e.g., job seeking).

Another related point, that might interest you Jim (in case you’re reading this :), is the action of distributing profitable keywords by the platform owner between advertisers in search advertising. For example, Google could control impression share so that each advertiser would receive a satisfactory (given their profit function) portion of traffic WHILE optimizing its own return.

Conclusion

This idea is not well developed, though; it rests on the notion that there is heterogeneity in advertisers’ willingness to pay (arising, e.g., from differences in margins, average order values, or operational efficiency) that would benefit the platform owner. I suspect the second-price auction may already capture this, as long as advertisers bid truthfully, in which case there is no need for such “manipulation” by Google, since prices are always set to the maximum anyway. So, just a random idea at this point.

Why human services are needed for world peace

The bot can be boss, as long as we have jobs.

Why are human services the future of our economy? (And, therefore, an absolute requirement for world peace [1].)

For three reasons:

  1. They do not pollute or waste material resources (or tend to do so with significantly less degree than material consumption)
  2. Exponential growth of population absolutely requires more human labor (supply and demand of labor)
  3. There’s no limit to service creation; services are by type and nature infinite (because people’s needs are infinite and ever-changing)

Consequently, absolutely critical measures are needed in the Western economies to enable a true service economy.

Here are some ideas:

  • Taxation of human labor (VAT of services) must be drastically cut.
  • Side-costs of employing people (instead of machines) must be drastically cut.
  • Any technological solutions (e.g., platforms) increasing the match between supply and demand of human labor must be endorsed, and respectively all barriers such as cartels, removed.

Human services are the key to sustainable and socially balanced consumption – I look at Finland back in the 1950s; we were a real service economy. Today, every job possible has been replaced either by automation or by self-service (which companies call “customer participation”). We’re a digital self-service economy, not a service economy anymore.

I long for the days when we had bellboys, cleaning ladies, office clerks, research assistants and other support staff; these are important jobs that nowadays are no more. Self-service and efficiency are in fact the enemies of employment. We must consider whether we want a society optimized for efficiency or one optimized for well-being. (I’m starting to sound like Bernie Sanders, which might not be a bad thing as such, but the argument has a deeper rationale to it.)

Maximum efficiency is not maximum employment, far from it.

Regarding Silicon Valley and startups, there should be a counter-movement against efficiency. So far, software has been eating the world, and the world, at least in terms of the job market, has been shrinking as a result. Granted, many new job types have been created to compensate for the loss, but much more is needed to fill the gap software is leaving. I think there needs to be a call for a new type of startup: ones that empower human work. If you think about it, some good examples already exist, such as Uber, Taskrabbit, Fiverr, and Upwork. But all too often the core value proposition of a startup is based on its ability to reduce “waste”, that is, human labor.

I do not think there is any limit to creation of human services. People are never completely satisfied, and their new needs spawn new services, which in turn require new services, and so on and on. In fact, the only limit to consumption of services is one’s time and cognitive abilities! This is good and well, even hopeful if we think of the big picture. But I do think an environment needs to be created where incentives for providing human services match those of machine services, or at least approach that much more than what it currently does.

This is an issue that definitely needs to be addressed with real structural reforms in the society; as of yet, I haven’t seen ANY of that — not even discussion — in Finland. It’s as if the world was moving but the politicians were asleep, stuck in some old glory days. But in the end we all want the same thing – we want those old days BACK, when everyone had a job. It’s just that we cannot do it without adjusting the policies — radically — to the radical change of productivity which has taken place in the past decades.

It’s like another candidate — not Sanders — says: We gotta start winning again.

End notes

[1] The premise here is that the well-being of the middle class is required for a balanced and peaceful society. In contrast, a crumbling middle class will cause social unrest and wide dissatisfaction, which will channel out into political radicalism, scapegoat seeking, and even wars between nations. Jobs are not just jobs; they are vehicles for peace.

The author has taught services marketing at the Turku School of Economics.

Facebook ad testing: are more ads better?

Yellow ad, red ad… Does it matter in the end?

Introduction

I used to think differently about creating ad variations, but having tested both methods I’ve changed my mind. Read the explanation below.

There are two alternative approaches to ad testing:

  1. “Qwaya” method* — you create some base elements (headlines, copy texts, pictures), out of which a tool will create up to hundreds of ad variations
  2. “Careful advertiser” method — you create hand-crafted creatives, maybe three (version A, B, C) which you test against one another.

In both cases, you are able to calculate performance differences between ad versions and choose the winning design. The rationale of the first method is that it “covers more ground”, i.e., it comes up with variations we wouldn’t have tried otherwise (due to lack of time or other reasons).
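
To see why the first method “covers more ground”, consider a quick sketch of how such a tool might enumerate variations from base elements (the element lists below are invented for illustration):

```python
from itertools import product

# Hypothetical base elements an ad-variation tool might combine
headlines = ["Save time", "Cut costs", "Work smarter"]
copy_texts = ["Try it free for 30 days.", "Join 10,000 happy users."]
images = ["office.jpg", "team.jpg", "product.jpg", "chart.jpg"]

# Every combination of headline, copy, and image becomes one ad variation
variations = [
    {"headline": h, "copy": c, "image": i}
    for h, c, i in product(headlines, copy_texts, images)
]

print(len(variations))  # 3 * 2 * 4 = 24 variations
```

With ten elements per slot, the same logic already yields a thousand variations, which is exactly why the search space gets large so quickly.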

Failure of large search space

I used to advocate the first method, but it has three major downsides:

  1. it requires a lot more data to reach statistical significance
  2. false positives may emerge in the process, and
  3. lack of internal coherence is likely to arise, due to inconsistency among creative elements (e.g., mismatch between copy text and image which may result in awkward messages).

Clearly though, the human must generate enough variation in their ad versions if they seek a globally optimal solution. This can be done by a) making drastically different (e.g., humor vs. informativeness) as opposed to incrementally different ad versions, and b) covering extremes on different creative dimensions (e.g., humor: subtle/radical; informativeness: all benefits/main benefit).

Conclusion

Overall, this argument is an example of how marketing automation may not always be the best way to go! And as a corollary, the creative work done by humans is hard to replace by machines when seeking optimal creative solutions.

*Named after the Swedish Facebook advertising tool Qwaya which uses this feature as one of their selling points.

Facebook’s Incentive to Reward Precise Targeting

Facebook has an incentive to lower the advertising cost for more precise targeting by advertisers.

What, why?

Because by definition, the more precise the targeting, the more relevant it is for end users. Given the standard nature of ads (as in: a negative indirect network effect vis-à-vis users), the more relevant they are, the less dissatisfied the users. What’s more, user satisfaction is also tied to the performance of the ads (a positive indirect network effect: the more satisfied the users, the better the ad performance), which should thus be better with more precise targeting.

Now, the relevance of ads can be improved by automatic means such as epsilon-greedy algorithms, and this is traditionally seen as Facebook’s advantage (right, Kalle?) but the real question is: Is that more efficient than “marketer’s intuition”?
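
For reference, here is a minimal epsilon-greedy sketch of how a platform might automatically balance exploring ad variants against exploiting the best performer (all click and impression numbers are invented):

```python
import random

def epsilon_greedy_choice(clicks, impressions, epsilon=0.1):
    """Pick an ad index: explore at random with probability epsilon,
    otherwise exploit the highest observed CTR (smoothed to avoid 0/0)."""
    if random.random() < epsilon:
        return random.randrange(len(clicks))
    ctrs = [(c + 1) / (n + 2) for c, n in zip(clicks, impressions)]
    return max(range(len(ctrs)), key=ctrs.__getitem__)

# Hypothetical running tallies for three ads
clicks = [12, 30, 5]
impressions = [400, 500, 300]

# With epsilon=0 the choice is purely greedy: ad 1 has the best CTR
print(epsilon_greedy_choice(clicks, impressions, epsilon=0.0))  # 1
```

The “marketer’s intuition” discussed here corresponds to shrinking the candidate list before the algorithm even starts, which is one reason narrowing helps.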

I’d in fact argue that — contrary to my usual approach to marketer’s intuition and its fallibility — it is helpful here, and its use at least enables the narrowing down of optimal audience faster.

…okay, why is that then?

Because it’s never only about the audience, but about the match between the message and the audience. Even if the message were the same and only the audience varied, narrowing is still useful because the search space for Facebook’s algorithm is smaller, pre-qualified by humans in a sense.

But there’s an even more important property – by narrowing down the audience, the marketer is able to re-adjust their message to that particular audience, thereby increasing relevance (the “match” between preferences of the audience members and the message shown to them). This is hugely important because of the inherent combinatory nature of advertising — you cannot separate the targeting and message when measuring performance, it’s always performance = targeting * message.

Therefore, Facebook does have an incentive to encourage advertisers toward more precise targeting and to reward it by providing a lower CPC. I’m not sure if they are doing this, though, because it requires assigning a weighted bid to advertisers with more precise targeting. Consider advertiser A, who is mass-advertising to everyone in some large set X, vs. advertiser B, who is competing for part of the same audience, i.e., a subset x: they are both in the same auction, but the latter should be compensated for his more precise targeting.

Concluding remarks

Perhaps this is factored in through Relevance Score and/or performance adjustment in the actual rank and CPC. That would yield the same outcome, given that the above mentioned dynamics hold, i.e. there’s a correlation between a more precise targeting and ad performance.

A Little Guide to AdWords Optimization

Hello, my young padawan!

This time I will write a fairly concise post about optimizing Google AdWords campaigns.

As usual, my students gave the inspiration to this post. They’re currently participating in Google Online Marketing Challenge, and — from the mouths of children you hear the truth 🙂 — asked a very simple question: “What do we do when the campaigns are running?”

At first, I’m tempted to say that you’ll do optimization under my supervision, e.g., change the ad texts, pause ads, and change keyword bids. But then I decide to write them a brief introduction.

So, here it goes:

1. Structure – have the campaigns been named logically? (i.e., to mirror the website and its goals)? Are the ad groups tight enough? (i.e., include only semantically similar terms that can be targeted by writing very specific ads)

2. Settings – all features enabled, only search network, no search partners (that applies to Google search campaigns; the display network has different rules, but never ever mix the two under one campaign); language targeting: Finnish, English, and Swedish (the languages Finns use in Google)

3. Modifiers – are you using location or mobile bid modifiers? Should you? (If unsure, find out quick!)

4. Do you need display campaigns? If so, use the display ad builder to create nice-looking ads. Your targeting options are contextual targeting (keywords), managed placements (use Display Planner to find suitable sites), audience lists (remarketing), and affinity and topic categories (the former targets people with a given interest, the latter websites categorized under a given interest, e.g., traveling). You can use many of these in one campaign.

5. Do you have enough keywords to reach the target daily spend? (Good to have more than 100, even thousands of keywords in the beginning.)

6. What match types are you using? You can start from broad, but gradually move towards exact match because it gives you the greatest control over which auctions you participate in.

7. What are your options to expand the keyword base? Look for opportunities by taking a search term report from all keywords after you’ve run the campaign for a week or so; this way you can also identify more negative keywords.

8. What negative keywords are you using? It is very important to exclude yourself from auctions that are irrelevant to your business.

9. Pausing keywords — don’t delete anything ever, because then you’ll lose the analytical trace; but frequently stop keywords that are a) the most expensive and/or b) have the lowest CTR/Quality Score

10. Have you set bids at the keyword level? You should – it’s okay to start by setting the bid at ad group level, and then move gradually to keyword level as you begin to accumulate real data from the keyword market.

11. Ad positions – see if you’re competitive by looking at the Auction insights report; if your average position is weak (worse than 3), consider either pausing the keyword or increasing your bid (and improving ad relevance, which is very important)

12. Are you running good ads? Remember, it’s all about text. You need to write good copy that is relevant to searchers. No marketing bullshit, please. Consider your copy an answer to the searcher’s request; it’s a service, not a sales pitch. This topic deserves its own post (and you’ll find plenty by googling), but for now, know that the best way (in my opinion) is to have two ads per ad group constantly competing against one another. Then pause the losing ad and write a new contender. Remember also that an ad can never be perfect: if your CTR is 10%, that’s really good, but with a better ad you could reach 11%.

13. Landing page relevance – you can see landing page experience by hovering over keywords. If the landing page experience is poor, think about whether you can instruct your client to make changes, or whether you can switch to a better landing page. Landing page relevance comes from the searcher’s perspective: when writing the search query, he needs to be shown ads that are relevant to that query and then be directed to the webpage that most closely matches it. Simple in theory; in practice it’s your job to make sure there’s no mismatch here.

14. Quality Score – this is the godlike metric of AdWords. Anything below 4 is bad, so pause the keyword or, if it’s relevant for your business, do your best to improve the score. The closer you get to 10, the better (with no data, the default is 6).

15. Ad extensions – every possible ad extension should be in use, because they tend to gather a good CTR and also positively influence your Quality Score. So, this includes sitelinks, call extensions, reviews, etc.

And, finally, important metrics. You should always customize your column views at the campaign, ad group, and keyword levels. The list below gives an example of what I think are generally useful metrics to show; these may vary somewhat based on your case. (They can be the same for all levels, except the keyword level should also include Quality Score.)

  • CTR (as high as possible, at least 5%)
  • CPC (as low as possible, in Finland 0.20€ sounds decent in most industries)
  • impression share (as high as possible for business-relevant keywords; in long-tail campaigns it can be low with the good reason of getting cheap traffic; generally speaking, this indicates scaling potential; I’ve written a separate post about this, which you can find by looking at my posts)
  • Quality Score (as high as possible, scale 1-10)
  • Cost (useful to sort by cost to focus on the most expensive keywords and campaigns)
  • Avg. position (TOP3 is a good goal!)
  • Bounce rate (as low as possible, it tends to be around 40% on an average website) (this only shows if GA is connected –> connect if possible)
  • Conversion rate (as high as possible, tends to be 1-2% in ecommerce sites, more when conversion is not purchase)
  • Number of conversions (shows absolute performance difference between campaigns)

That’s it! Hope you enjoyed this post, and please leave comments if you have anything to add.

Using Napoleon’s 19th Century Principles for Email Writing

“In this age, in past ages, in any age… Napoleon.”
(The Duke of Wellington)

This is a short post reflecting upon Napoleon’s writing on war and efficient management. I think many of his principles are universal and apply to communication. My special consideration here is the writing of emails, which is a vital skill because 1) you want your message to be read and replied to, and 2) to that end, you need to learn how to write concisely.

Napoleon will help you to get there…

Quote 1:

“Reconnaissance memoranda should always be written in the simplest style and be purely descriptive. They should never stray from their objective by introducing extraneous ideas.”

First of all, write simple text. Avoid complicated words and ambiguity (expressions that can be interpreted in many ways). Oftentimes I see ambiguous sentences (or, in fact, catch myself writing them; when that happens, I instantly make the sentence clearer so that there is absolutely no room for misinterpretation).

Quote 2:

“The art of war does not require complicated maneuvers; the simplest are the best, and common sense is fundamental. From which one might wonder how it is generals make blunders; it is because they try to be clever.”

The goal should never be to appear smart or anything of the sort; only to communicate your message efficiently. As I’ve said in other contexts, clear writing reflects clear thinking, and especially when it comes to writing emails, this is the only image you want to convey of yourself.

Quote 3:

“Think over carefully the great enterprise you are about to carry out; and let me know, before I sign your final orders, your own views as to the best way of carrying it out.”

In other words, make it easy for people to reply by asking for their opinion (when it’s such a matter their opinion would be useful). Write so that it’s easy to reply — e.g., don’t give too many choices or add any unnecessary layers of complexity.

Oftentimes I see messages which require considerable thinking to reply, and then it of course gets delayed or canceled altogether. Writing an email is like servicing a client; everything from the recipient’s part needs to be made as easy as possible.

Quote 4:

“This letter is the principal instruction of your plan of campaign, and if unforeseen events should occur, you will be guided in your conduct by the spirit of this instruction.”

This is actually the only quote where I disagree with Napoleon. Let me explain why. His rationale was based on the information asymmetry between him and his local officers. The officers have more immediate information; because of this, it is impossible to write a detailed instruction that would optimally consider the local circumstances, especially since they might change in the course of delivering the message (remember, in Napoleon’s day communication had a delay of up to many days, depending on the troops’ location).

Second, if the local officers were to verify each action, the delay in communication would result in losing crucial opportunities. In a word, decentralization of decision-making was essential for Napoleon. Napoleon himself explains it like this:

“The Emperor cannot give you positive orders, but only general instructions (objectives) because the distance is already considerable and will become greater still.”

However, in email communications the situation is different. First of all, there’s no communication lag, at least in the practical sense. Second of all, leaving things “open” for the recipient requires more cognitive effort from them, which in my experience leads to lower response rates and delays.

So, I’d say: Tell exactly what you want the other party to do. Don’t hint or imply – if you expect something to happen, make it clear. Oftentimes I see messages that are thought half-way through: the sender clearly implies that the recipient should finish his or her thinking. Not a good idea. Think the course of events through beforehand so that the recipient doesn’t have to.

More about Napoleon can be read from his memoirs, available at http://www.gutenberg.org/ebooks/3567

The author teaches and studies digital marketing at the Turku School of Economics.

Example of Google’s Moral Hazard: Pooling in Ad Auctions

Google has an incentive to group advertisers in ad auction even when this conflicts with the goals of an individual advertiser.

For example, you might like to bid on ‘term x‘ and not want to be included in auctions for ‘term x+n‘ due to, e.g., lower relevance, yet your ad might still participate in the auction.

This relates to two features:

  1. use of synonyms — by increasing the use of synonyms, Google is able to pool more advertisers in the same ad auction
  2. broad match — by increasing the use of broad match, Google is able to pool more advertisers in the same ad auction

Simply put, the more bidders competing in the same ad auction, the higher the click price and therefore Google’s profit. It should be remarked that pooling not only increases the CPC of existing ad auctions by increasing competition, but also creates new auctions altogether (because a minimum number of bidders is needed for ads to be shown on the SERP).
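
The price effect of pooling can be illustrated with a toy second-price auction (the bids are invented):

```python
def second_price(bids):
    """In a second-price auction the winner pays the runner-up's bid."""
    ordered = sorted(bids, reverse=True)
    return ordered[1]

narrow_auction = [0.35, 0.20]        # only tightly matched advertisers
pooled_auction = [0.35, 0.20, 0.30]  # a broad-match bidder pooled in

print(second_price(narrow_auction))  # 0.2
print(second_price(pooled_auction))  # 0.3 -- the clearing price rises
```

Each extra pooled bidder can only raise, never lower, the second-highest bid, which is precisely the incentive described above.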

A practical example of this moral hazard is Google’s removal of ‘do not include synonyms or close variants‘ in the AdWords campaign settings, which took place a couple of years ago.

There are two ways advertisers can counter this effect:

  1. First, by efficient use of negative keywords.
  2. Second, by resorting to multi-word exact matches as much as possible.

In conclusion, I always tell my students that Google is a strategic agent that wants to optimize its own gain. As far as its goals and the advertiser’s goals are aligned, everything is fine, but there are special cases in which the goals deviate, and advertisers need to recognize them and take action.

Modern Market Research Methods: A Startup Perspective

EDIT: Updated by adding competitive analysis, very important to benchmark competitors.

EDIT2: Updated by adding experimentation (14th April, 2016)

Introduction

Somebody on Quora was asking about ‘tools’ for validating viability and demand for a startup’s products.

I replied it’s not a question of tools, but plain old market research (which seems to be all too often ignored by startup founders).

Modern market research methods

In brief, I’d include the following options to a startup market research plan:

  1. market statistics from various consultancy and research institution reports (macro-level)
  2. general market (country, city) statistics generated just for your case (macro-level à la PESTLE)
  3. competitive analysis, i.e. benchmarking existing solutions — will help you find differentiation points and see if your “unique idea” already exists in the market
  4. (n)ethnography, i.e. going in-depth into user communities to understand their motivations (micro-level, can be done offline and online)
  5. surveys, i.e. devising a questionnaire for relevant parties (e.g., customers, suppliers) to understand their motivations (just like the previous, but with larger N, i.e. micro-level study)
  6. customer development, which is most often used in B2B interviews as a presales activity to better understand the clients’ needs. Here’s an introduction to customer development (Slideshare).
  7. crowdfunding, i.e. testing the actual demand for the product by launching it as a concept in a crowdfunding platform – this is often referred to as presales, because you don’t have to have the product created yet.
  8. experimentation, i.e. running different variations against one another and determining their performance difference by statistical testing; the tests can relate to e.g. ad versions (value propositions, messages) or landing pages (product variations, landing page structure and elements). Here’s a tool for calculating statistical significance of ad tests.
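
As an illustration of point 8, one common way to test two ad versions against each other is a two-proportion z-test (the click figures here are invented):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for the difference between two CTRs."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Ad A: 50 clicks / 1000 impressions; Ad B: 80 clicks / 1000 impressions
z, p = two_proportion_z(50, 1000, 80, 1000)
print(p < 0.05)  # True -- the CTR difference is unlikely to be chance
```

With smaller samples or smaller differences, p rises above 0.05 and the “winner” may be a false positive, which is the data-hunger problem mentioned in the ad-testing post above.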

So, there. Some of the methods are “old school”, but some, such as crowdfunding, are newer ways to collect useful market feedback. Experimentation, although it may appear novel, is actually super old school. For example, one of the great pioneers of advertising, Claude Hopkins, talked about ad testing and conversion optimization already in the 1920s. (You can actually download his excellent book, “Scientific Advertising“, for free.)

How to combine different methods?

The optimal plan would include both macro- and micro-level studies to get both the “helicopter view” and the micro-level understanding needed for product adoption. Which methods to include in your market research plan depends on the type of business. For example, crowdfunding can be seen as a market validation method most suitable for B2C companies, and customer development for B2B companies.

The punchline

The most important point is that you, as a startup founder, don’t get lured into the ‘tool fallacy’ — there’s no tool to compensate for the lack of genuine customer understanding.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

Dynamic Pricing and Incomplete People Information

One of the main problems in analytics is the lack of people information (e.g., demographics, interests). This information is controlled by superplatforms like Google and Facebook, but as soon as you transition from the channel to the website, you lose it.

So, I was thinking about this in the context of dynamic pricing. There’s no problem in determining an average solution, i.e., a price point set so that conversion is maximized on average. But that’s pretty useless because, as you know, averages are bad for optimization; too much efficiency is wasted. Consider dynamic pricing: willingness to pay (WTP) is what matters for setting the price, but it’s impossible to know the WTP function of individual visitors. That’s why aggregate measures *are* needed, but we can go beyond a general aggregate (the average) to segmentation, and then use segment information as a predictor of conversion at different price points. (By the way, determining the testing interval for price points is also an interesting issue, i.e., how big or small the increments should be, but that’s not the topic here.)

Going back to the people problem: you could tackle this with URL tagging. 1) Include the targeting info in your landing URL, and you’re able to do personalization, such as dynamic pricing or tailored content, by retrieving the targeting information from the URL and rendering the page accordingly. A smart system would not only do this, but also 2) record the interactions of different targeting groups (e.g., men and women) and use this information to optimize for a goal (e.g., determining the optimal price point per user group).
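
A minimal sketch of steps 1) and 2) (the parameter name, segments, and adjustment factors are all made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical landing URL tagged with the targeting segment
landing_url = "https://example.com/offer?seg=women_25_34&src=fb"

def price_for_visitor(url, base_price=49.0):
    """Parse the targeting segment from the URL and pick a price point."""
    params = parse_qs(urlparse(url).query)
    segment = params.get("seg", ["default"])[0]
    # Hypothetical per-segment adjustments learned from recorded conversions
    adjustments = {"women_25_34": 1.10, "men_18_24": 0.90}
    return round(base_price * adjustments.get(segment, 1.0), 2)

print(price_for_visitor(landing_url))  # 53.9
```

In a full system, the hard-coded adjustments table would be replaced by values continuously updated from observed conversions per segment, which is step 2).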

These are some necessary features for a dynamic pricing system. Of course, then there’s the aforementioned interval problem: segmentation means you’re playing with less data per group, so you have fewer “trials” for effective tests. So, intuitively, you can apply this rule: the less traffic the website has, the larger the increments (+/-) should be for finding the optimal price point. However, if the increments become too large, you’re likely to miss the optimum (it gets lost somewhere between the intervals). I think there are some elegant algorithmic solutions to this in the multi-armed bandit literature.

A Quick Note on Bidding Theory of Online Ad Auctions

Introduction

This is a simple post about some commonly known features of online ad auctions.

A generalized second-price auction (GSP) is a mechanism in which the winning advertiser pays marginally more than the bid of the advertiser ranked just below him. It encourages bidders to place truthful bids, i.e., ones where the price level is such that marginal returns equal marginal cost.

Why is this important?

Simply because:

truthful bidding = no incentive to strategically shade your bid

In other words, if you knew a bidder behind you was bidding, say, 0,20 € and you were bidding 0,35 €, under a standard first-price auction you’d be tempted to lower your bid to 0,21 € and still beat the next advertiser.
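
The numbers above can be put in code form; a toy comparison of first-price shading vs. GSP (the 0.01 € increment is an assumption for illustration):

```python
def first_price_payment(bids):
    """In a first-price auction the winner pays their own bid,
    which invites shading it down toward the runner-up."""
    return max(bids)

def gsp_payment(bids, increment=0.01):
    """Under GSP the winner pays the runner-up's bid plus an increment."""
    ordered = sorted(bids, reverse=True)
    return round(ordered[1] + increment, 2)

bids = [0.35, 0.20]
print(first_price_payment(bids))  # 0.35 -- tempting to shade down to 0.21
print(gsp_payment(bids))          # 0.21 -- the same outcome, automatically
```

GSP thus produces the shaded price mechanically, so the advertiser can simply bid their true value.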

In practice, you wouldn’t directly know this because the bids are sealed; however, advertisers could programmatically try to find out the other bids. When using GSP, manually lowering bids to marginally beat the competition is not necessary. It is therefore a “fair” and automatic system for pricing.

Of course, this system is also lucrative for the ad platform. When advertisers all place truthful bids, there is no gaming, i.e., no one is attempting to extract rents (excessive profits), and the overall price level settles higher than it would under gaming (theoretically, you could also model the price level as equal in both cases, since it’s a “free market” where prices would settle to a marginal-cost equilibrium either way).


Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.