
Tag: digital marketing

Controlling ad quality in programmatic buying

Highway to ad quality.

Ad quality is an issue in programmatic buying, where ads are traded automatically between computer systems. In traditional ad sales there is a human supervising the quality of advertising, but in a programmatic system it is possible to receive spammy, illegal, or otherwise undesirable advertising without publishers (ad sellers) being aware of it. Likewise, the quality of performance such as clicks, likes or even impressions might be compromised by fraudulent bot behavior.

In the absence of humans, how can quality be controlled? Well, some ways include:

  • bot detection — this is what Google uses to filter invalid clicks likely caused by bots. It includes, among other things, detecting anomalies in click behavior. Facebook, too, has mechanisms for detecting bots. How well these systems function should be audited from time to time by neutral third parties, due to the inherent moral hazard faced by ad platforms. (A small sketch of this kind of rule-based logic follows the list.)
  • performance-adjusted pricing and visibility — again, used by Google and Facebook in Quality Score and Relevance Score, respectively. What works cannot be wrong, essentially. The ads with the best response get the most views for the least money. However, this does not directly solve the problem of removing undesirable ads from the system.
  • reporting — again, both Facebook and Google enable reporting of ads by end users. This shows up for advertisers as negative feedback; once negative feedback reaches a certain threshold, the ad stops showing. In a way, this crowdsources quality control to end users.
  • algorithmic analysis of ad content — for example, Facebook is able to detect nudity in pictures and consequently disqualify them. This is among the best methods, albeit technically demanding, because a machine can process many millions of ad content units in batches. With constantly developing machine learning solutions, the accuracy of automatic detection of undesirable content approaches that of human classifiers.
  • finally, we can have a human fail-safe as a “plan B”. Again, both Facebook and Google use manual review both for detecting click fraud and for handling advertisers’ complaints about refused ads. However, this solution is expensive and does not scale over millions of ad units, so it can be seen as a backup at best.
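
To make the list concrete, here is a minimal sketch (in Python, with made-up field names and thresholds of my own) of how a rule-based quality filter could combine the reporting threshold with a crude click-anomaly check. It illustrates the logic only; it is not how Google or Facebook actually implement their systems.

```python
from dataclasses import dataclass

@dataclass
class AdStats:
    ad_id: str
    impressions: int
    clicks: int
    negative_feedback: int  # user reports such as "hide ad" or "report as spam"

# Hypothetical thresholds -- real platforms tune these on very large datasets.
NEG_FEEDBACK_THRESHOLD = 0.001  # reports per impression
CTR_ANOMALY_THRESHOLD = 0.25    # a CTR this high usually signals bot traffic

def should_pause(ad: AdStats) -> bool:
    """Pause an ad if end users report it too often (crowdsourced quality control)
    or if its click pattern looks anomalous (a crude stand-in for bot detection)."""
    if ad.impressions == 0:
        return False
    neg_rate = ad.negative_feedback / ad.impressions
    ctr = ad.clicks / ad.impressions
    return neg_rate > NEG_FEEDBACK_THRESHOLD or ctr > CTR_ANOMALY_THRESHOLD

# Example: an ad with a suspiciously high CTR gets flagged.
print(should_pause(AdStats("ad-123", impressions=10_000, clicks=4_000, negative_feedback=2)))  # True
```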

There – I believe these are the most common ways to control ad quality in modern programmatic advertising platforms. If you have anything to add, please share it in the comments!

EDIT: I came across another quality control mechanism: private exchanges. They effectively limit the number of participating advertisers, making it manageable for a small number of humans to verify the ads. The catch, of course, is that this works for a handful of ads, but when there are millions of ad units, humans cannot be used as the primary solution.

On digital marketing ROI

Introduction

There are many sub-types of ROI calculations in digital marketing. This post argues that digital marketers should measure digital marketing returns as a sum of sub-returns from different channels and actions. Through that, they are able to capture the ROI impact on a wider scale than just looking at overall sales. Some metrics, which inevitably have (some, albeit often hard-to-quantify) effects on dollar returns, can only be accessed via a sub-type examination.

Profit, not revenue

Before going into the ROI types, I have to mention one important caveat in ROI calculation — whenever possible, use profit as the upside, not revenue. This is simply because you want to measure the real profitability of your marketing efforts, which you cannot determine without including production costs in the equation. Don’t only measure marketing cost; measure the cost of being in business (because that’s what your bottom line consists of).
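
As a worked illustration of the difference (my own made-up numbers), here is the same campaign evaluated with revenue versus profit as the upside:

```python
# Hypothetical campaign figures -- illustration only.
revenue = 10_000.0           # sales attributed to the campaign
production_costs = 6_500.0   # cost of producing/delivering what was sold
marketing_cost = 2_000.0     # media + labor

profit = revenue - production_costs

roi_on_revenue = (revenue - marketing_cost) / marketing_cost  # flattering: 400%
roi_on_profit = (profit - marketing_cost) / marketing_cost    # realistic: 75%

print(f"Revenue-based ROI: {roi_on_revenue:.0%}")
print(f"Profit-based ROI:  {roi_on_profit:.0%}")
```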

Digital marketing ROI

Figure 1  Digital marketing ROIs

So, here are different ROI sub-types in digital marketing:

  • dmROI = digital marketing ROI
  • oROI = organic digital marketing ROI
  • pROI = paid digital marketing ROI
  • osmROI = organic social media ROI
  • seoROI = search-engine optimization ROI
  • cmROI = content marketing ROI
  • psmROI = paid social media ROI
  • seaROI = search-engine advertising ROI
  • dROI = display ROI

And they can be divided like this (a small roll-up sketch follows the breakdown):

  • dmROI = digital marketing ROI, which consists of
      • oROI = organic digital marketing ROI
          • osmROI = organic social media ROI
          • seoROI = search-engine optimization ROI
          • cmROI = content marketing ROI
      • pROI = paid digital marketing ROI
          • psmROI = paid social media ROI
          • seaROI = search-engine advertising ROI
          • dROI = display ROI
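
The decomposition above can be expressed as a simple roll-up: compute each sub-ROI from its own profit and cost, and aggregate upward by summing profits and costs rather than the ROI percentages themselves. A sketch with hypothetical channel figures (euros):

```python
def roi(profit: float, cost: float) -> float:
    """ROI with profit (not revenue) as the upside."""
    return (profit - cost) / cost

# Hypothetical (profit, cost) per channel -- illustration only.
organic = {
    "osmROI": (4_000, 1_500),   # organic social media
    "seoROI": (9_000, 3_000),   # search-engine optimization
    "cmROI":  (5_000, 2_500),   # content marketing
}
paid = {
    "psmROI": (6_000, 2_000),   # paid social media
    "seaROI": (12_000, 5_000),  # search-engine advertising
    "dROI":   (2_000, 1_500),   # display
}

def rollup(channels: dict) -> tuple:
    """Sum profits and costs so that ROI can be recomputed at the higher level."""
    return (sum(p for p, _ in channels.values()),
            sum(c for _, c in channels.values()))

for name, (p, c) in {**organic, **paid}.items():
    print(f"{name}: {roi(p, c):.0%}")

o_profit, o_cost = rollup(organic)
p_profit, p_cost = rollup(paid)
print(f"oROI:  {roi(o_profit, o_cost):.0%}")
print(f"pROI:  {roi(p_profit, p_cost):.0%}")
print(f"dmROI: {roi(o_profit + p_profit, o_cost + p_cost):.0%}")
```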

Different returns

Now, the ROI equation has two sides: the cost and the return. As said, the return side measures profit. But what happens when profit is not directly computable? This can be the case with deferred conversions, multi-channel effects and word-of-mouth, for example.

In this case we need to substitute profit with some other quantifiable measure. If one is not available, we have to calculate it.

The returns can be something like this (a small calculation sketch follows the list):

  • Value of sales — this is simply euros
  • Value of customer lifetime — this is average order value times average frequency of repurchases during average customer lifetime (a lot of averages here…)
  • Value of impressions — for example, the increase of brand searches and their association with sales
  • Value of social shares — for example, the increase of organic reach leading to likes and associated returns
  • Value of likes — for example, the amount of sales from a social media channel divided by the number of followers in the channel in a given period
  • Value of email subscribers — the amount of sales from the email channel divided by the number of subscribers in a given period
  • Value of leads — the closing rate times average deal size gives the value of a lead
  • Value of organic traffic increase — the sales uplift from SEO activities vis-à-vis normal development of organic search traffic
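
When profit is not directly observable, several of the proxy values above reduce to simple arithmetic. A small sketch with made-up inputs, following the definitions in the list:

```python
# Hypothetical inputs -- illustration only.
avg_order_value = 80.0
purchases_per_year = 2.5
avg_customer_lifetime_years = 4.0

closing_rate = 0.15
avg_deal_size = 3_000.0

channel_sales = 12_000.0
channel_followers = 8_000

# Value of customer lifetime: average order value x purchase frequency x average lifetime.
customer_lifetime_value = avg_order_value * purchases_per_year * avg_customer_lifetime_years  # 800.0

# Value of a lead: closing rate x average deal size.
lead_value = closing_rate * avg_deal_size  # 450.0

# Value of a like/follower: sales from the channel divided by followers in the period.
value_per_follower = channel_sales / channel_followers  # 1.5

print(customer_lifetime_value, lead_value, value_per_follower)
```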

We should aim at isolating the marketing effects to the best of our ability, i.e. determine what the baseline metric would have been without the marketing intervention and what it actually was; the difference between the two is our return. In a similar vein, we should seek to attribute not only direct but also indirect (assisting) effects to the return side of a given marketing channel or effort. Not everything that should be observed can be observed (cf. Einstein), so we have to use somewhat arbitrary mechanisms such as attribution modeling.

Different costs

In turn, how should we define the costs?

  • In paid channels, they include media + labor costs
  • In organic channels, they include labor costs

There is a good rule of thumb: to achieve a certain reach, you need either a high labor cost (and a low media cost) or a high media cost (and a low labor cost). Of course, the practical implementation decides the outcome, but this is the ceteris paribus scenario. The labor cost can be determined by internal accounting, e.g. activity-based costing (ABC). You can also use this cost calculation for the “make or buy” decision, i.e. whether outsourcing digital marketing is feasible or not.

Conclusion

ROI is a fascinating question to which there is no certain or absolute answer. Bringing in the sub-type examinations widens the scope of ROI and makes its composition more accurate, yet it leads to a degree of relativism, manifested e.g. in the choice of attribution models.

Quality in programmatic advertising

Introduction

This is a very short post explaining, from a media house’s perspective, how to manage the two-sided online ad market.

Why does quality matter more than you think?

The success or failure of online advertising takes place through QUALITY. My argument rests on the notion of two-sided markets, along with their distinctive element of network effects. In more detail, bad ads impose a negative indirect network effect on the end users of media. As a result, ad block usage is increasing rapidly around the world. See the graph in the recent Kleiner Perkins report on Internet trends.

The answer to this, and many other problems of online display advertising, is not more ads (on the contrary, there need to be fewer, to avoid clutter) or better targeting, but rather a focus on quality.

Why do I say better targeting is not the answer?

Well, many media houses seem to harbor the wishful thinking that technology provides the answer to what is essentially a human problem. People don’t want to see crappy, intrusive advertising. “Crappy” here means uninteresting, poorly conceived creative implementations – something that is not hard to see if one browses any given media site. “Intrusive” means the ads jump in your face. While the latter ensures “guaranteed delivery” for one market side – namely the advertiser – it destroys the satisfaction of the other. And a two-sided market cannot function without both parties on board. New media formats, another convenient solution sought by media houses, are not the answer either. While they may fix the intrusiveness, they cannot make up for low-quality ads, which are a much bigger problem.

How to solve the quality problem, then?

First of all, ad platforms and media companies need to be stricter with their clients – not every advertiser should in fact have the right to show online ads. It makes more sense in the long term for publishing houses to refuse bad advertisers (and possibly educate them) than to take the easy money in the short term. But in the current climate, where Facebook and Google are eating their lunch, media companies are tempted to cling to every dollar and sell to everyone who wants to buy. In general, it’s not a wise business decision to cater to all customer types, and in this particular case, where ad quality is not uniformly distributed, it amounts to shooting yourself in the foot.

In practice, publishing houses need to create strict rule-based systems to control advertising impressions, instead of guaranteeing delivery, which is currently the case for many of them. Although advertisers may want guaranteed delivery, this is not the best choice for the overall (two-sided) market because of the aforementioned quality problem. If guaranteed delivery is to be given under any circumstances, there needs to be a credible commitment from the advertiser’s part to deliver high-quality ads. And how can this be confirmed? By running limited pilot campaigns and verifying that the end-user response is satisfactory. By no means is the verification a question of a marketing executive saying “Oh, these banners look nice, so this must be high-quality advertising.” That approach is old-fashioned and detrimental to both the industry and the individual company.

Moreover, media companies also need to practice vertical integration by offering creative services and data on best practices in online advertising. They need to show commitment to improving ad quality, both to end users and to advertisers. In strategic terms, they need to become “channel captains” who drive the positive change. Eventually, this will lead to a triple-win scenario where end users are shown high-quality advertising, advertisers get satisfactory results, and media houses in consequence receive a higher share of their clients’ media spend. In the current situation, none of these outcomes is realized.

Conclusion

Targeting and new media formats can be a part of the solution, but they will never be the core solution to a problem which is essentially a human problem. Only humans, not technology, can fix that. Thus, better creative implementations are needed — and the industry needs to collectively move from quantity to quality, or else the triumph of ad blocking will continue. Media companies need to take charge and accept their responsibility for the future of online advertising – like Google and Facebook, they need to start demanding quality from their clients and holding them accountable for it. The old “anything goes” mentality needs to change, and it needs to change fast.

The author wrote his Master’s thesis on online advertising exchange (available here) and his doctoral dissertation on two-sided markets (available here). He is currently working as a post-doctoral researcher at the Turku School of Economics.

Here’s Facebook cheating you (and how to avoid it)

Here’s how Facebook is cheating advertisers with its reporting of video views: the “video views” metric counts anyone who watched only the first three seconds of a video.

How is that cheating? Well, the advertiser implicitly assumes that “video views” means people who have actually watched the video, which is not the case here. Say you have a 10-second video; this metric does not show the people who watched that video to the end, but only those who watched the first three seconds — possibly just scrolling their newsfeeds and letting the video autoplay accidentally while quickly browsing forward. Essentially, that kind of exposure is worth closer to zero than the 1 cent Facebook usually reports.

Indeed, view-based metrics such as CPV should be calculated based on somebody watching the video to the end, but in Facebook it’s 3 SECONDS after which a view is counted. In effect, this makes the real CPV several times higher than reported; in some cases I’ve seen it be 10x the figure reported by Facebook.

But aren’t they being honest about this? Sure, they show the correct definition, but a large share of advertisers do not bother looking at it, or do not suspect misleading definitions. After all, you should be able to trust that a big and reputable player like Facebook would not screw over advertisers. However, those of us who have played the game for many years know it’s not the first time (remember their definition of “click” a couple of years ago?).

What’s more, there’s no metric for the real CPV in the reports, so advertisers need to calculate it manually (at which point, based on my experience, it’s revealed that Facebook video views are typically 5x more expensive than on YouTube).

How to avoid these shenanigans? Simply look at the metric “video views to 100%”. This is the real video-views metric you should use – calculate your spend against that number, and you will get your true CPV. In other words:

ad cost / views to 100%
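
In code form, with toy numbers of my own (“views to 100%” is the completed-views figure from the reporting interface):

```python
# Hypothetical campaign figures -- illustration only.
ad_cost = 500.0               # total spend in euros
three_second_views = 50_000   # what Facebook reports as "video views"
views_to_100 = 4_000          # people who actually watched to the end

reported_cpv = ad_cost / three_second_views  # 0.01 EUR -- looks cheap
real_cpv = ad_cost / views_to_100            # 0.125 EUR -- the figure you should use

print(f"Reported CPV: {reported_cpv:.3f}, real CPV: {real_cpv:.3f}")
```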

Keep your eyes open, my fellow advertisers!

UPDATE: Another good tactic, pointed out by my colleague Tommi Salenius, is to bid for 10-second views in your video campaigns. This is a relatively novel feature in Facebook, and although it doesn’t fix the problem, it’s a decent workaround. He also recommended optimizing the “average % viewed” metric – you can do that e.g. by comparing different demographic segments. Finally, Facebook video ads can be seen to have a “social advantage”, which refers to people’s ability to comment on and like videos – sometimes this does take place 🙂 The advertiser can also include more text than in YouTube video ads, which has a positive effect on ad prominence. It is then up to the advertiser to consider whether these advantages are worth the cost premium Facebook tends to have in comparison to YouTube.

Facebook Ads: too high performance might turn on you (theoretically)

Introduction

Now, earlier I wrote a post arguing that Facebook has an incentive to lower the CPC of well-targeting advertisers because better targeting improves user experience (in two-sided market terms, relevance through more precise targeting reduces the negative indirect network effects perceived by ad targets). You can read that post here.

However, consider the point from another perspective: the well-targeting advertiser is making rents (excess profits) from their advertising, which Facebook, as the platform owner, wants and is able to capture.

In this scenario, Facebook has an incentive to actually increase the CPC of a well-targeting advertiser until the advertiser’s marginal profit is aligned with marginal cost. In such a case, it would still make sense for the advertiser to continue investing (so the user experience remains satisfactory), but Facebook’s profit would be increased by the magnitude of the advertiser’s rent.
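
A toy illustration of that logic, with entirely made-up numbers: if the platform knew the advertiser’s profit per click, it could in principle raise the CPC toward that value and still keep the advertiser in the auction.

```python
# Hypothetical advertiser economics -- illustration only.
conversion_rate = 0.05        # conversions per click
margin_per_conversion = 40.0  # euros of profit per conversion, before ad costs

profit_per_click = conversion_rate * margin_per_conversion  # 2.00 EUR: the highest viable CPC
current_cpc = 0.80                                          # what the advertiser pays now

advertiser_rent = profit_per_click - current_cpc  # 1.20 EUR of "excess" profit per click

# A rent-extracting platform could price just below the advertiser's profit per click,
# leaving a sliver of margin so that advertising still pays off for the advertiser.
extractive_cpc = 0.95 * profit_per_click  # 1.90 EUR

print(f"Rent per click: {advertiser_rent:.2f}, rent-extracting CPC: {extractive_cpc:.2f}")
```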

Problem of private information

This would require that Facebook be aware of the profit functions of its advertisers, which for now are likely private information held by the advertisers. But if Facebook had this information, it could factor it into the click-price calculation. Now, obviously that would violate the “objective” nature of Facebook’s VCG ad auction — it’s currently set to consider the maximum CPC and ad performance (negative feedback, CTR, but not profit as far as I know). However, advertisers would not be able to monitor the use of their profit function, because the precise ad auctions are carried out in a black box (i.e., asymmetric information). Thus, the scenario represents a type of moral hazard for Facebook – a potential risk the advertisers may not be aware of.

Origin of the idea

I actually got this idea from one of my students, who said, “Oh, I don’t think micro-targeting is useful.” I asked why, and he said, “Because Facebook is probably charging too much for it.” I told him that’s not the case, but also that it could be, and that the idea is interesting. Here I have just elaborated it a bit further.

Also read this article about micro-targeting.

Micro-targeting is super interesting for B2B and personal branding (e.g., job seeking).

Another related point, which might interest you, Jim (in case you’re reading this :), is the platform owner distributing profitable keywords among advertisers in search advertising. For example, Google could control impression share so that each advertiser receives a satisfactory (given their profit function) portion of traffic WHILE optimizing its own return.

Conclusion

This idea is not well developed, though; it rests on the notion that there is heterogeneity in advertisers’ willingness to pay (arising e.g. from differences in margins, average order values, operational efficiency and the like) that the platform owner could benefit from. I suspect it could be the case that the second-price auction already accounts for this as long as advertisers are bidding truthfully, in which case there’s no need for such “manipulation” by Google, as bids are already set at the advertisers’ maximum willingness to pay. So, just a random idea at this point.

Why human services are needed for world peace

The bot can be boss, as long as we have jobs.

Why are human services the future of our economy? (And, therefore, an absolute requirement for world peace [1].)

For three reasons:

  1. They do not pollute or waste material resources (or tend to do so to a significantly lesser degree than material consumption)
  2. Exponential growth of population absolutely requires more human labor (supply and demand of labor)
  3. There’s no limit to service creation; by type and nature, services are infinite (because people’s needs are infinite and ever-changing)

Consequently, critical, absolutely critical measures are needed in the Western economies to enable a true service economy.

Here are some ideas:

  • Taxation of human labor (VAT on services) must be drastically cut.
  • Side-costs of employing people (instead of machines) must be drastically cut.
  • Any technological solutions (e.g., platforms) increasing the match between supply and demand of human labor must be endorsed, and respectively all barriers such as cartels, removed.

Human services are the key to sustainable and socially balanced consumption. Look at Finland back in the 1950s: we were a real service economy. Today, every possible job has been replaced either by automation or by self-service (which companies call “customer participation”). We’re a digital self-service economy, not a service economy anymore.

I long for the days when we had bellboys, cleaning ladies, office clerks, research assistants and other support staff — these are important jobs which nowadays no longer exist. Self-service and efficiency are in fact the enemies of employment. We must consider whether we want a society optimized for efficiency or one optimized for well-being (I’m starting to sound like Bernie Sanders, which might not be a bad thing as such, but the argument has a deeper rationale to it).

Maximum efficiency is not maximum employment, far from it.

Regarding Silicon Valley and startups, there should be a counter-movement against efficiency. So far, software has been eating the world, and the world — at least in terms of the job market — is getting smaller and smaller. Granted, many new job types have been created to compensate for the loss, but much more is needed to fill the gap software is leaving. I think there needs to be a call for a new type of startup, one that empowers human work. If you think about it, some good examples already exist – Uber, Taskrabbit, Fiverr and Upwork are some of them. But all too often the core value proposition of a startup is based on its ability to reduce “waste” – that is, human labor.

I do not think there is any limit to the creation of human services. People are never completely satisfied, and their new needs spawn new services, which in turn require new services, and so on. In fact, the only limits to the consumption of services are one’s time and cognitive abilities! This is all well and good, even hopeful, if we think of the big picture. But I do think an environment needs to be created where the incentives for providing human services match those for machine services, or at least come much closer to them than they currently do.

This is an issue that definitely needs to be addressed with real structural reforms in society; as of yet, I haven’t seen ANY of that — not even discussion — in Finland. It’s as if the world were moving on but the politicians were asleep, stuck in some old glory days. But in the end we all want the same thing – we want those old days BACK, when everyone had a job. It’s just that we cannot get there without adjusting our policies — radically — to the radical change in productivity that has taken place in the past decades.

It’s like another candidate — not Sanders — says: We gotta start winning again.

End notes

[1] The premise here is that the well-being of the middle class is required for a balanced and peaceful society. In contrast, a crumbling middle class will cause social unrest and widespread dissatisfaction, which will channel out into political radicalism, scapegoat-seeking, and even wars between nations. Jobs are not just jobs; they are a vehicle for peace.

The author has taught services marketing at the Turku School of Economics.

Facebook ad testing: is more ads better?

Yellow ad, red ad… Does it matter in the end?

Introduction

I used to think differently about creating ad variations, but having tested both methods I’ve changed my mind. Read the explanation below.

There are two alternative approaches to ad testing:

  1. “Qwaya” method* — you create some base elements (headlines, copy texts, pictures), out of which a tool will create up to hundreds of ad variations
  2. “Careful advertiser” method — you create hand-crafted creatives, maybe three (version A, B, C) which you test against one another.

In both cases, you are able to calculate performance differences between ad versions and choose the winning design. The rationale behind the first method is that it “covers more ground”, i.e. comes up with variations we wouldn’t have tried otherwise (due to lack of time or other reasons).

Failure of large search space

I used to advocate the first method, but it has three major downsides:

  1. it requires a lot more data to reach statistical significance
  2. false positives may emerge in the process, and
  3. a lack of internal coherence is likely to arise, due to inconsistency among creative elements (e.g., a mismatch between copy text and image, which may result in awkward messages).

Clearly though, a human must generate enough variation in his ad versions if he seeks a globally optimal solution. This can be done by a) making drastically different (e.g., humor vs. informativeness) as opposed to incrementally different ad versions, and b) covering the extremes of different creative dimensions (e.g., humor: subtle/radical; informativeness: all benefits/main benefit).
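
The first two downsides can be illustrated with the family-wise error rate: the more ad variations you test against the same significance threshold, the more likely at least one “winner” is a false positive. A quick sketch, assuming independent tests at α = 0.05:

```python
# Probability of at least one false positive when k variations are each tested
# independently at significance level alpha (family-wise error rate).
alpha = 0.05

for k in (3, 10, 50, 200):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>3} variations -> P(at least one false positive) = {fwer:.0%}")

# Roughly: 3 -> 14%, 10 -> 40%, 50 -> 92%, 200 -> ~100%
```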

Conclusion

Overall, this argument is an example of how marketing automation may not always be the best way to go! And as a corollary, the creative work done by humans is hard for machines to replace when seeking optimal creative solutions.

*Named after the Swedish Facebook advertising tool Qwaya which uses this feature as one of their selling points.

Facebook’s Incentive to Reward Precise Targeting

Facebook has an incentive to lower the advertising cost for more precise targeting by advertisers.

What, why?

Because, by definition, the more precise the targeting is, the more relevant it is for end users. Given the standard nature of ads (as in: a negative indirect network effect vis-à-vis users), the more relevant they are, the less dissatisfied the users. What’s more, user satisfaction is also tied to the performance of the ads (a positive indirect network effect: the more satisfied the users, the better the ad performance), which should thus be better with more precise targeting.

Now, the relevance of ads can be improved by automatic means such as epsilon-greedy algorithms, and this is traditionally seen as Facebook’s advantage (right, Kalle?), but the real question is: is that more efficient than “marketer’s intuition”?
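
For readers unfamiliar with the term, here is a minimal epsilon-greedy sketch applied to choosing between audience/ad combinations. The CTRs are hypothetical and the loop is a simulation; it illustrates the general idea, not Facebook’s actual delivery system.

```python
import random

# Hypothetical true CTRs of three audience/ad combinations -- unknown to the algorithm.
true_ctr = {"broad": 0.010, "interest-based": 0.018, "narrow+tailored": 0.030}

epsilon = 0.1  # fraction of impressions spent exploring
shows = {k: 0 for k in true_ctr}
clicks = {k: 0 for k in true_ctr}

def observed_ctr(option: str) -> float:
    return clicks[option] / shows[option] if shows[option] else 0.0

random.seed(42)
for _ in range(20_000):
    if random.random() < epsilon:
        choice = random.choice(list(true_ctr))    # explore: try a random option
    else:
        choice = max(true_ctr, key=observed_ctr)  # exploit: best option so far
    shows[choice] += 1
    clicks[choice] += int(random.random() < true_ctr[choice])

print(shows)  # most impressions should end up on the best-performing combination
```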

I’d in fact argue that — contrary to my usual stance on marketer’s intuition and its fallibility — it is helpful here, and its use at least enables narrowing down the optimal audience faster.

…okay, why is that then?

Because it’s never only about the audience, but about the match between the message and the audience — even if the message stayed the same and only the audience varied, narrowing would still be useful, because the search space for Facebook’s algorithm is smaller: pre-qualified by humans, in a sense.

But there’s an even more important property – by narrowing down the audience, the marketer is able to re-adjust their message to that particular audience, thereby increasing relevance (the “match” between the preferences of the audience members and the message shown to them). This is hugely important because of the inherently combinatory nature of advertising — you cannot separate the targeting and the message when measuring performance; it’s always performance = targeting × message.

Therefore, Facebook does have an incentive to encourage advertisers toward more precise targeting and to reward it with a lower CPC. I’m not sure they are doing this, though, because it requires assigning a weighted bid to advertisers with more precise targeting — consider advertiser A, who is mass-advertising to everyone in some large set X, vs. advertiser B, who is competing for a part of the same audience, i.e. a subset x. They are both in the same auction, but the latter should be compensated for his more precise targeting.

Concluding remarks

Perhaps this is factored in through the Relevance Score and/or the performance adjustment in the actual rank and CPC. That would yield the same outcome, given that the above-mentioned dynamics hold, i.e. there is a correlation between more precise targeting and ad performance.
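
One way to see how such a performance adjustment could reward precise targeting: in a simplified quality-weighted second-price auction (a textbook model, not Facebook’s or Google’s exact pricing formula), a higher expected response rate lowers the CPC the winner actually pays.

```python
# Simplified quality-weighted second-price auction -- a textbook model, not the
# exact formula used by any platform.

def cpc_paid(my_bid: float, my_quality: float,
             runner_up_bid: float, runner_up_quality: float) -> float:
    """The winner pays just enough to beat the runner-up's rank (rank = bid * quality)."""
    runner_up_rank = runner_up_bid * runner_up_quality
    return min(my_bid, runner_up_rank / my_quality)

# Advertiser B targets narrowly and earns a higher expected response rate (quality)
# than mass-advertiser A, so B pays less per click for the same bid.
print(cpc_paid(my_bid=1.0, my_quality=0.03, runner_up_bid=1.0, runner_up_quality=0.01))  # ~0.33
print(cpc_paid(my_bid=1.0, my_quality=0.01, runner_up_bid=1.0, runner_up_quality=0.01))  # 1.00
```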

A Little Guide to AdWords Optimization

Hello, my young padawan!

This time I will write a fairly concise post about optimizing Google AdWords campaigns.

As usual, my students gave me the inspiration for this post. They’re currently participating in the Google Online Marketing Challenge and — from the mouths of children you hear the truth 🙂 — asked a very simple question: “What do we do while the campaigns are running?”

At first I was tempted to say that you’ll do optimization under my supervision, e.g. change the ad texts, pause ads, adjust keyword bids, etc. But then I decided to write them a brief introduction.

So, here it goes:

1. Structure – have the campaigns been named logically (i.e., to mirror the website and its goals)? Are the ad groups tight enough (i.e., do they include only semantically similar terms that can be targeted by writing very specific ads)?

2. Settings – all features enabled, only the search network, no search partners (that applies to search campaigns; the display network has different rules, but never ever mix the two under one campaign); language targeting Finnish, English and Swedish (the languages Finns use in Google).

3. Modifiers – are you using location or mobile bid modifiers? Should you? (If unsure, find out quick!)

4. Do you have a need for display campaigns? If so, use the display ad builder to build nice-looking ads. Your targeting options are contextual targeting (keywords), managed placements (use the Display Planner to find suitable sites), audience lists (remarketing), and affinity and topic categories (the former targets people with a given interest, the latter targets websites categorized under a given interest, e.g. traveling). You can use many of these in one campaign.

5. Do you have enough keywords to reach the target daily spend? (Good to have more than 100, even thousands of keywords in the beginning.)

6. What match types are you using? You can start from broad, but gradually move towards exact match because it gives you the greatest control over which auctions you participate in.

7. What are your options to expand the keyword base? Look for opportunities by pulling a search term report from all keywords after you’ve run the campaign for a week or so; this way you can also identify more negative keywords.

8. What negative keywords are you using? Very important to exclude yourself from auctions which are irrelevant for your business.

9. Pausing keywords — don’t ever delete anything, because then you’ll lose the analytical trace; but frequently pause keywords that a) are the most expensive and/or b) have the lowest CTR/Quality Score.

10. Have you set bids at the keyword level? You should – it’s okay to start by setting the bid at ad group level, and then move gradually to keyword level as you begin to accumulate real data from the keyword market.

11. Ad positions – see if you’re competitive by looking at the auction insights report; if you have low average positions (below 3), consider either pausing the keyword or increasing your bid (and improving the keyword-to-ad relevance — very important).

12. Are you running good ads? Remember, it’s all about the text. You need to write good copy that is relevant to searchers. No marketing bullshit, please. Consider your copy an answer to the searcher’s request; it’s a service, not a sales pitch. This topic deserves its own post (and you’ll find plenty by googling), but for now, know that the best way (in my opinion) is to have 2 ads per ad group constantly competing against one another. Then pause the losing ad and write a new contender (a quick significance-test sketch for comparing two ads follows this list) — and remember that an ad can never be perfect: if your CTR is 10%, that’s really good, but with a better ad you could have 11%.

13. Landing page relevance – you can see the landing page experience by hovering over keywords. If the landing page experience is poor, consider whether you can instruct your client to make changes, or whether you can switch to a better landing page. Landing page relevance comes from the searcher’s perspective: when typing the search query, the searcher needs to be shown ads that are relevant to that query and then directed to the webpage that is the closest match to it. Simple in theory; in practice it’s your job to make sure there’s no mismatch here.

14. Quality Score – this is the godlike metric of AdWords. Anything below 4 is bad, so pause the keyword or, if it’s relevant for your business, do your best to improve it. The closer you get to 10, the better (with no data, the default is 6).

15. Ad extensions – every possible ad extension should be in use, because they tend to gather a good CTR and also positively influence your Quality Score. So, this includes sitelinks, call extensions, reviews, etc.
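
As promised in point 12, here is a quick way to check whether the difference between two competing ads is more than noise: a two-proportion z-test on their CTRs (a sketch using only the standard library; |z| > 1.96 corresponds to roughly 95% confidence).

```python
import math

def ctr_z_test(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int) -> float:
    """Two-proportion z-test for the CTR difference between ads A and B."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Hypothetical data: ad A got 120 clicks from 2,000 impressions, ad B got 90 from 2,000.
z = ctr_z_test(120, 2_000, 90, 2_000)
print(f"z = {z:.2f}, significant at ~95% level: {abs(z) > 1.96}")  # z ≈ 2.13 -> True
```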

And, finally, important metrics. You should always customize your column views at the campaign, ad group and keyword levels. The list below gives an example of what I think are generally useful metrics to show — these may vary somewhat based on your case. (They can be the same for all levels, except that the keyword level should also include Quality Score.)

  • CTR (as high as possible, at least 5%)
  • CPC (as low as possible, in Finland 0.20€ sounds decent in most industries)
  • impression share (as high as possible for business-relevant keywords; in long-tail campaigns it can be low for the good reason of getting cheap traffic; generally speaking, this indicates scaling potential; I’ve written a separate post about this, which you can find among my posts)
  • Quality Score (as high as possible, scale 1-10)
  • Cost (useful to sort by cost to focus on the most expensive keywords and campaigns)
  • Avg. position (TOP3 is a good goal!)
  • Bounce rate (as low as possible, it tends to be around 40% on an average website) (this only shows if GA is connected –> connect if possible)
  • Conversion rate (as high as possible, tends to be 1-2% in ecommerce sites, more when conversion is not purchase)
  • Number of conversions (shows absolute performance difference between campaigns)

That’s it! Hope you enjoyed this post, and please leave comments if you have anything to add.

Using Napoleon’s 19th Century Principles for Email Writing

“In this age, in past ages, in any age… Napoleon.”
(The Duke of Wellington)

This is a short post reflecting on Napoleon’s writings on war and efficient management. I think many of his principles are universal and apply to communication — my special focus here is the writing of emails, which is a vital skill because 1) you want your message to be read and replied to, and 2) to that end, you need to learn how to write concisely.

Napoleon will help you to get there…

Quote 1:

“Reconnaissance memoranda should always be written in the simplest style and be purely descriptive. They should never stray from their objective by introducing extraneous ideas.”

First of all, write simple text. Avoid complicated words and ambiguity (expressions that can be interpreted in many ways). Oftentimes I see ambiguous sentences (or in fact catch myself writing them — when that happens, I instantly rephrase so that there is absolutely no room for misinterpretation).

Quote 2:

“The art of war does not require complicated maneuvers; the simplest are the best, and common sense is fundamental. From which one might wonder how it is generals make blunders; it is because they try to be clever.”

The goal should never be to appear smart or anything of the sort; it is only to communicate your message efficiently. As I’ve said in other contexts, clear writing reflects clear thinking — and especially when it comes to writing emails, this is the only image you want to convey of yourself.

Quote 3:

“Think over carefully the great enterprise you are about to carry out; and let me know, before I sign your final orders, your own views as to the best way of carrying it out.”

In other words, make it easy for people to reply by asking for their opinion (when it’s a matter on which their opinion would be useful). Write so that it’s easy to reply — e.g., don’t give too many choices or add unnecessary layers of complexity.

Oftentimes I see messages which require considerable thinking to reply, and then it of course gets delayed or canceled altogether. Writing an email is like servicing a client; everything from the recipient’s part needs to be made as easy as possible.

Quote 4:

“This letter is the principle instruction of your plan of campaign, and if unforeseen events should occur, you will be guided in your conduct by the spirit of this instruction.”

This is actually the only quote where I disagree with Napoleon. Let me explain why. His rationale was based on the information asymmetry between him and his officers in the field. The officers had more immediate information; because of this, it was impossible to write detailed instructions that would optimally account for local circumstances, especially since those circumstances might change while the message was being delivered (remember, in Napoleon’s day communication could be delayed by up to several days, depending on the troops’ location).

Second, if the local officers were to verify each action, the delay in communication would result in losing crucial opportunities. In a word, decentralization of decision-making was essential for Napoleon. Napoleon himself explains it like this:

“The Emperor cannot give you positive orders, but only general instructions (objectives) because the distance is already considerable and will become greater still.”

However, in email communications the situation is different. First of all, there’s no communication lag, at least in the practical sense. Second of all, leaving things “open” for the recipient requires more cognitive effort from them, which in my experience leads to lower response rates and delays.

So, I’d say: Tell exactly what you want the other party to do. Don’t hint or imply – if you expect something to happen, make it clear. Oftentimes I see messages that are thought half-way through: the sender clearly implies that the recipient should finish his or her thinking. Not a good idea. Think the course of events through beforehand so that the recipient doesn’t have to.

More about Napoleon can be read from his memoirs, available at http://www.gutenberg.org/ebooks/3567

The author teaches and studies digital marketing at the Turku School of Economics.