

Programmatic ads: Fallacy of quality supply

A major fallacy publishers still have is the notion of “quality supply” or “premium inventory”. I’ll explain the idea behind the argument.

Introduction. The fallacy of quality supply lies in publishers assuming that the quality of a certain placement (say, a certain website) is constant, whereas in reality it varies according to the response which, in turn, is a function of the customer and the ad. Both the customer and the ad are running indices, meaning that they constantly change. The job of a programmatic platform is to match the right ads with the right customers in the right placements. This is a dynamic problem, in which the “quality” of a given placement can be defined at the time of the match, not prior to it.
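The point can be made concrete with a toy sketch. Everything here (the classes, weights and scoring formula) is my own illustration, not any platform's actual logic; it only shows that a placement's "quality" emerges at match time from the ad-customer-placement combination.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    topic: str
    creative_quality: float  # 0..1, quality of the creative itself

@dataclass
class Customer:
    interests: set

def match_score(ad, customer, placement_fit):
    """Toy relevance score: value exists only for a specific
    (ad, customer, placement) combination, never for a placement alone."""
    interest = 1.0 if ad.topic in customer.interests else 0.1
    return ad.creative_quality * interest * placement_fit

# The same placement (placement_fit=0.7) is "high quality" for one
# ad-customer pair and "low quality" for another.
ads = [Ad("running shoes", 0.9), Ad("insurance", 0.8)]
runner = Customer({"running shoes", "fitness"})
best = max(ads, key=lambda a: match_score(a, runner, 0.7))
```

The same `placement_fit` value yields a high score for the relevant ad and a low score for the irrelevant one, which is the whole argument against pre-labeled "premium" inventory.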

Re-defining quality. The term “quality” should in fact be re-defined as relevance — a high-quality ad is relevant to customers at a given time (of match), and vice versa. In this equation, the ad placement does not hold any inherent value; its value is always determined in a unique match between the customer, the ad and the placement. It follows that the ad itself needs to be relevant to the customer, irrespective of the placement. It is not known which interaction effect is stronger, ad + customer or placement + customer, but it is commonly assumed that the placement has a moderating effect on the quality of the ad as perceived by the customer.

The value of ad space is dynamic. The idea of publishers defining the quality distribution a priori is old-fashioned. It stems from the idea that publishers should rank and define the value of their advertising space. That is not compatible with platform logic, in which any particular placement can be of high or low quality (or anywhere between the extremes). In fact, the same placement can simultaneously be both high- and low-quality, because its value depends on the advertiser and the customer which, as stated, fluctuate.

Customers care about ad content. To understand this point, quality should be understood from the point of view of the customer. It can be plausibly argued that customers are interested in ads (if at all) due to their content, not their context. If an ad offers a promotion on an item I like, I’m interested. This interest takes place whether the ad was placed on website A or website B. Thus, it is not logical to assume that the placement itself would have a substantial impact on ad performance.

Conclusion. To sum up, there is no value in an ad placement per se; the value is realized if (and only if) relevance is met. Under this argument, the notion of “premium ad space” is inaccurate and, by its implications, in fact detrimental to the development of the programmatic ad industry. If ad space is priced according to inaccurate notions, it is not likely to match its market value and, given that advertisers have a choice, they will not continue buying such ad inventory. Higher relevance leads to higher performance, which leads to advertiser satisfaction and a higher probability of repurchase of that media. Any predetermined notion of “quality supply” is not relevant in this chain.

Recommendations. Instead of maintaining the false dichotomy of “premium” and “remnant” inventory, publishers should strive to maximize relevance in match-making auctions by any means necessary. For this purpose, they should demand higher quality and variety of ads from the advertiser. Successful match-making depends on quality and variety on both sides of the two-sided market. Generally, when prices are set according to supply and demand, more economic activity takes place – there is no reason to expect otherwise in the advertising market. Publishers should therefore stop labeling their inventory as “quality” or “premium” and instead let markets decide whether it is so. Indeed, in programmatic advertising the so-called remnant inventory can outperform what publishers would initially perceive as superior placements.

Is “premium” ad space a hoax?

Answer: It kinda is.

“Premium publishers” and “premium ad space” — these are often heard terms in programmatic advertising. But they are also dangerously fallacious ideas.

I’ll give three reasons why:

  1. A priori problem
  2. Uniformity problem
  3. Equilibrium problem

First, publishers define what is “premium” a priori (before results), which is not the right sequence (a priori problem). The value of ad space — or its status, premium or not — should be determined a posteriori, or after the fact. Anything else risks bias due to guesswork.

Second, what is “premium” (i.e., works well) for advertiser A might be different for advertiser B, yet the same ad space is labeled “premium” (or not) for everyone (uniformity problem). The value of ad space should be determined based on its value to the advertiser, which is not uniformly distributed.

Third, fixing a higher price for “premium” inventory skews the market – rational advertisers won’t pay irrational premiums, and the publisher ends up losing revenue instead of gaining a “premium” price (equilibrium problem). This is the exact opposite of the outcome the publisher hoped for, and it arises from an imbalance of supply and demand.

Limitations

I defined premium as ad space that works well with regard to the advertiser’s objectives. Other definitions also exist, e.g., Münstermann and Würtenberg (2015), who argue that the distinguishing trait between premium and non-premium media is the degree of its editorial professionalism, so that amateur websites would be less valuable. In many cases, this is an incorrect classifier from the advertiser’s perspective — e.g., placing an ad on a blogger’s website (influencer marketing) can fairly easily produce higher returns than placing it alongside “professional” content. The degree of professionalism of the content is not a major cue for consumers, and therefore one should define “premium” from the advertiser’s point of view — as a placement that works.

Conclusion

The only reason, I suspect, premium inventory is still alive is the practice of private deals, where advertisers are more interested in volume than performance – these advertisers are informed more by assumptions than by data. Most likely, as buyers’ level of sophistication increases, they will become more inclined towards market-based pricing, which has a much closer association with performance than private deals.

Algorithm Neutrality and Bias: How Much Control?

The Facebook algorithm is a global superpower.

So, I read this article: Facebook is prioritizing my family and friends – but am I?

The point of the article — that you should focus on your friends & family in real life instead of Facebook — is poignant and topical. So much of our lives is spent on social media, without the “social” part, and even when it is there, something is missing in comparison to physical presence (without smartphones!).

Anyway, this post is not about that. It got me thinking about the issue from the algorithm neutrality perspective. So what does that mean?

Algorithm neutrality takes place when social networks allow content to spread freely based on its merits (e.g., CTR, engagement rate), so that the most popular content gets the most dissemination. In other words, the network imposes no media bias. Although the content being spread might carry a media bias, the social network itself is objective and accounts only for its quantifiable merits.
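As a minimal sketch of what "neutral" means here, the feed below orders posts purely by a quantifiable merit (engagement rate). The post structure is an assumption of mine, not any network's actual data model:

```python
def neutral_rank(posts):
    """A 'neutral' feed: order content purely by a quantifiable merit
    (here, engagement rate) with no editorial weighting whatsoever."""
    return sorted(posts, key=lambda p: p["engagements"] / p["impressions"],
                  reverse=True)

feed = neutral_rank([
    {"id": "a", "engagements": 40, "impressions": 1000},  # 4% engagement
    {"id": "b", "engagements": 90, "impressions": 1000},  # 9% engagement
    {"id": "c", "engagements": 10, "impressions": 500},   # 2% engagement
])
# The most engaging content gets the most dissemination,
# regardless of what that content actually says.
```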

Why does this matter? Well, a neutral algorithm guarantees manipulation-free dissemination of information. As soon as human judgment intervenes, there is a bias. That bias may lead to censorship and the favoring of a certain political party, for example. The effect can be clearly seen in the so-called media bias. Anyone following either the political coverage of the US elections or the Brexit coverage has noticed the immense media bias, which is omnipresent even in esteemed publications like The Economist and The Washington Post. Indeed, they take a stance and report based on that stance, instead of covering objectively. A politically biased media like the one in the US is not much better than the politically biased media in Russia.

It is clear that free channels of expression enable the proliferation of alternative views, whereupon an individual is (theoretically) better off, since there are more data points to base his/her opinion on. Thus, social networks (again, theoretically) mitigate media bias.

There are many issues though. First is the one that I call neutrality dilemma.

The neutrality dilemma arises from what I already mentioned: the information bias can be embedded in the content people share. If the network restricts the information dissemination, it moves from neutrality to control. If it doesn’t restrict information dissemination, there is a risk of propagation of harmful misinformation, or propaganda. Therefore, in this continuum of control and freedom there is a trade-off that the social networks constantly need to address in their algorithms and community policies. For example, Facebook is banning some content, such as violent extremism. They are also collaborating with local governments which can ask for removal of certain content. This can be viewed in their transparency report.

The dilemma has multiple dimensions.

First of all, there are ethical issues. From the perspective of “what is right”, shouldn’t the network prohibit the diffusion of information when it is counter-factual? Otherwise, people can be misled by false stories. But also, from the perspective of what is right, shouldn’t there be free expression, even if a piece of information is not validated?

Second, there are some technical challenges:

A. How to identify the “truthfulness” of content? In many cases, it is seemingly impossible because the issues are complex and not factual to begin with. Consider, e.g., Brexit: it is not a fact that the leave vote would lead to a worse situation than the stay vote, or vice versa. In a similar vein, it is not a fact that the EU should be kept together. These are questions of assumptions, which makes them hard: people freely choose the assumptions they want to believe, and there can be no objective validation of this sort of complex social problem.

B. How to classify political/argumentative views and relate them to one another? There are different points of view, like “pro-Brexit” and “anti-Brexit”. The social network algorithm should detect an individual’s membership in a given group based on their behavior: the messages posted and the content liked, shared and commented on. It should be fairly easy to form a view of a person’s stance on a given topic with the help of these parameters. Then, it is crucial to map the stances in relation to one another, so that the extremes can be identified.
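A crude sketch of this behavioral stance detection. The action weights, content labels and scoring formula are entirely my own illustrative assumptions:

```python
from collections import Counter

def stance_score(actions, stance_of):
    """Infer a user's position on a topic from their behavior
    (content liked, shared and commented on), as described above.
    Returns a score from -1.0 (firmly 'anti') to +1.0 (firmly 'pro')."""
    weights = {"share": 3, "comment": 2, "like": 1}  # sharing = strongest endorsement
    totals = Counter()
    for action, content_id in actions:
        totals[stance_of[content_id]] += weights[action]
    n = totals["pro"] + totals["anti"]
    return 0.0 if n == 0 else (totals["pro"] - totals["anti"]) / n

stances = {"p1": "pro", "p2": "pro", "p3": "anti"}  # stance labels per post
user_actions = [("like", "p1"), ("share", "p2"), ("comment", "p1")]
score = stance_score(user_actions, stances)  # near +1: an identifiable extreme
```

Mapping many such scores onto the same axis is what would let the extremes of each debate be identified.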

As it currently stands, one is shown the content one prefers, which confirms the already established opinion. This does not support learning or getting an objective view of the matter: instead, it reinforces a biased worldview and indeed exacerbates the problem. It is crucial to remember that opinions do not remain mere opinions but are reflected in behavior: what is socially established becomes physically established through people’s actions in the real world. Therefore, the power of social networks needs to be treated with precaution.

C. How to identify the quality of argumentation? Quality of argumentation is important when applying the rotation of alternative views intended to mitigate the reinforcement of bias. This is because the counter-arguments need to be solid: in fact, when making a decision, the pro and contra sides both need to be well argued for an objective decision to emerge. Machine learning could be the solution — assuming we have training data on the “proper” structure of solid argumentation, we can compare this archetype to any kind of text material and assign it a score based on how good the argumentation is. Such a method does not consider the content of the argument, only its logical value. It would include a way to detect known argumentation errors based on the syntax used. In fact, such a system is not unimaginably hard to achieve — common argumentation errors, or logical fallacies, are well documented.

Another way of detecting the quality of argumentation is user-based reporting: individuals report the posts they don’t like, and these get discounted by the algorithm. However, even when allowing users to report “low-quality” content, there is a risk they report content they disagree with, not content that is poorly argued. In reporting, there is a relativism or subjectivism that cannot be avoided.

Perhaps the most problematic of all are the socio-psychological challenges associated with human nature. The neutral algorithm enforces group polarization by connecting people who agree on a topic. This is a natural outcome of a neutral algorithm, since people, by their behavior, confirm their liking of content they agree with. This leads to reinforcement, whereupon they are shown more of that type of content. The social effect is known as group polarization – an individual’s original opinion is reinforced through observing other individuals sharing that opinion. That is why so much discussion in social media is polarized: there is a well-known tendency of human nature not to remain objective but to take a stance in one group against another.

How can we curb this effect? A couple of solutions readily come to mind.

1. Rotating opposing views. If in a neutral system you are shown 90% content that confirms your beliefs, rotation should force you to see more than the remaining 10% of alternative content (say, 25%). Technically, this would require that “opinion archetypes” can be classified and contrasted with one another. Machine learning to the rescue?

The power of rotation comes from the idea that it simulates social behavior: the more a person is exposed to subjects that initially seem strange and unlikeable (the root of xenophobia), the more likely they are to be understood. A greater degree of awareness and understanding leads to a higher acceptance of those things. In the real world, people who frequently meet people from other cultures are more likely to accept other cultures in general.

Therefore, the same logic could be applied by Facebook in forcing us to see well-argued counter-evidence to our beliefs. It is crucial that the counter-evidence is well argued, or else there is a strong risk of reactance — people rejecting the opposing view even more. Unfortunately, this is a feature of the uneducated mind – not being able to change one’s opinions but remaining fixated on one’s beliefs. So the method is not foolproof, but it is better than what we now have.
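A sketch of the rotation mechanism under these assumptions (the 25% share and the list-based feed are illustrative; a real feed ranker would be far more involved):

```python
import random

def rotate_feed(confirming, opposing, n, opposing_share=0.25, seed=0):
    """Mix a forced minimum share of (well-argued) opposing content
    into a feed of belief-confirming content, instead of the ~10%
    a purely neutral algorithm might surface on its own."""
    rng = random.Random(seed)
    k = max(1, round(n * opposing_share))
    items = rng.sample(opposing, min(k, len(opposing)))
    items += rng.sample(confirming, n - len(items))
    rng.shuffle(items)  # spread opposing items through the feed
    return items

# Lowercase = confirming posts, uppercase = opposing posts.
feed = rotate_feed(list("abcdefgh"), list("WXYZ"), n=8)
```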

2. Automatic fact-checking. Imagine a social network telling you “This content might contain false information”. Caution signals may curb the willingness to accept any information. In fact, it may be more efficient to show misinformation tagged as unreliable rather than hide it — in the former case, individuals have the possibility to correct their false beliefs.

3. Research in sociology. I am not educated enough in the field to know the general solutions to group polarization, groupthink and other associated social problems. But I know sociologists have worked on them – this research should be put to use in collaboration with the engineers who design the algorithms.

However, the root causes of the dissemination of misinformation, whether purposefully harmful or due to ignorance, lie not in technology. They are human problems and must have human-based solutions.

What are these root causes? Lack of education. Poor quality of educational system. Lack of willingness to study a topic before forming an opinion (i.e., lazy mind). Lack of source/media criticism. Confirmation bias. Groupthink. Group polarization.

Ultimately, these are the root causes of why some content that should not spread, spreads. They are social and psychological traits of human beings, which cannot be altered via algorithmic solutions. However, algorithms can direct behavior into more positive outcomes, or at least avoid the most harmful extremes – if the aforementioned classification problems can be solved.

The other part of the equation is education — kids need to be taught from early on about media and source criticism, logical argumentation, debating skills and respect for the other party in a debate. Indeed, respect and sympathy go a long way — in the current atmosphere of online debating it seems like many have forgotten basic manners.

In the online environment, provocations are easy and escalate more readily than in face-to-face encounters. It may be “fun” to mock ignorant people – a habit of the so-called intellectuals – but it is not right, nor is it right to ignore science and facts – a habit of the so-called ignorant.

It is also unfortunate that many of the topics people debate can be traced back to values and worldviews rather than more objective matters. When values and worldviews are fundamentally different among participants, it is truly hard to find a middle way. It takes a lot of effort and character to put yourself in the opposing party’s shoes, much more so than just point-blank rejecting their view. It takes even more strength to change your opinion once you discover it was wrong.

Conclusion and discussion. Avoiding media bias is an essential advantage of social networks in information dissemination. I repeat: it’s a tremendous advantage. People are able to disseminate information and opinions without being controlled by mass-media outlets. At the same time, neutrality imposes new challenges. The most prominent question is to what extent the network should govern its content.

On the one hand, user behavior is driving Facebook towards being an information-sharing network – people are seemingly sharing more and more news content and less about their own lives – but Facebook wants to remain a social network, and therefore reduces neutrality in favor of personal content. What are the strategic implications? Will users be happier? Is it right to deviate from algorithm neutrality when you have dominant power over information flow?

Facebook is approaching a sort of information monopoly when it comes to discovery (Google is the monopoly in information search), and I’d say it’s the most powerful global information dissemination medium today. That power comes with responsibility and ethical questions, and hence the algorithm neutrality discussion. The strategic question for Facebook is whether it makes sense to manipulate the natural information flow based on user behavior in a neutral system. The question for society is whether Facebook news feeds should be regulated.

I am not advocating more regulation, since regulation is never a creative solution to any problem, nor does it tend to be informed by science. I advocate collaboration between sociologists and social networks in order to identify the best means to filter harmful misinformation and curb the generally known negative social tendencies that we humans possess. For sure, this can be done without endangering the free flow of information – the best part of social networks.

A New Paradigm for Advertising

From its high point, the sheep can see far.

Introduction

In Finland, and maybe elsewhere in the world as well, media agencies used to reside inside advertising agencies, back in the 1970s-80s. Then they were separated from one another in the 1990s, so that advertising agencies do creative planning and media agencies buy ad space in the media. Along with this process, heavy international consolidation took place, and currently both the media and advertising agency markets are dominated by a handful of global players, such as Ogilvy, Dentsu, Havas, WPP, etc.

This article discusses that change and argues for re-convergence of media and advertising agencies. I call this the new paradigm (paradigm = a dominant mindset and way of doing things).

The old paradigm

The current advertising paradigm consists of two features:

1) Advertising = creative + media
2) Creative planning –> media buying –> campaigning

In this paradigm, advertising is seen as a rigid, inflexible, one-off game where you create one advertising concept and run it, regardless of customer response. You are making one sizable bet, and that’s it. To reduce the risk of failure, creative agencies spend tons of time “making sure they get it right”. Sometimes they use advertising pre-testing, but the process is predominantly driven by intuition, or black-box creativity.

Overall, that is an old-fashioned paradigm, for which reason I believe we need a new paradigm.

Towards the new paradigm

The new advertising paradigm looks like this:

1) Advertising = creative + media + optimization
2) Creative planning –> media trials –> creative planning –> …

In it, advertising is seen as a fluid, flexible, consecutive game where you have many trials to succeed. The creative process feeds on consumer response, and in turn media buying is adjusted based on the results of each unique creative concept.

So what is the difference?

In the old paradigm, we would spend three months planning and create one “killer concept” which according to our intuition/experience is what people want to see. In the new paradigm, we spend five minutes to create a dozen concepts and let customers (data) tell us what people want to see. Essentially, we relinquish the idea that it is possible to produce a “perfect ad”, in particular without customer feedback, and instead rely on a method that gets us closer to perfection, albeit never reaching it.
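One way to operationalize "let the data tell us" is a simple epsilon-greedy bandit over ad concepts. This is a hypothetical simulation: the concept names and click-through rates are invented, and in a real campaign the responses would come from live customers rather than a random generator.

```python
import random

def pick_winning_concept(concepts, trials=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy trial over ad concepts: mostly show the
    best-performing concept so far, but keep exploring the rest.
    `concepts` maps concept name -> (simulated) true click-through rate."""
    rng = random.Random(seed)
    shows = {c: 0 for c in concepts}
    clicks = {c: 0 for c in concepts}

    def rate(c):
        return clicks[c] / shows[c] if shows[c] else 0.0

    for _ in range(trials):
        if rng.random() < epsilon:
            c = rng.choice(list(concepts))       # explore
        else:
            c = max(concepts, key=rate)          # exploit the current best
        shows[c] += 1
        clicks[c] += rng.random() < concepts[c]  # simulated customer response
    return max(concepts, key=rate)

# A dozen concepts would work the same way; three keep the sketch short.
winner = pick_winning_concept({"concept_a": 0.02,
                               "concept_b": 0.05,
                               "concept_c": 0.01})
```

The winner emerges from observed response, not from anyone's intuition about the "perfect ad".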

The new paradigm is manifested in a continuous, iterative cycle. Campaigns never end, but are infinite — as we learn more about customers, budget spend may increase as a function of time, but essentially optimization is never done. The campaign has no end, unlike in the old paradigm where people would stop marketing a product even though demand for that product had not disappeared.

You might notice that this paradigm is not compatible with old-fashioned “shark fin” marketing, but views marketing as continuous optimization. In fact, the concept of a campaign is replaced by the concept of optimization.

Let me elaborate on this thought. Look at the picture (source: Jesper Åström) – it illustrates the problem of campaign-based (shark-fin) marketing. You put in money, but as soon as you stop investing, your popularity drops.

Now consider an alternative, where you constantly invest in marketing, not in heavy spikes (campaigns) but gradually, by altering your message and targeting (optimization). You get results more like this:

Although seasonality, which is a natural consequence of the business cycle, does not fade away, the baseline results increase over time.

Instead of being fixed, budget allocations live according to the seasonal business cycles — perhaps anticipating the demand fluctuations. The timing should also consider the carryover effect.

Conclusion

I suspect media agencies and advertising agencies will converge once again, or at least the media-buying and creative planning functions will reside in the same organization. This is already the way many young digital marketing agencies have operated since their birth. Designers and optimizers (ad buyers) work side by side, the latter informing the former on what types of design concepts work, not based on intuition as old-paradigm Art Directors (ADs) would, but on real-time customer response.

Most importantly, tearing down silos will benefit the clients. Doing creative work and optimization in tandem is a natural way of working — the creative concept should no longer be detached from reality, and we should not think of advertising as a factory line where ads move from one station to the next, but rather as a form of co-creation through which we can mitigate advertising waste and produce better results for advertising clients.

Why social advertising beats display advertising

Introduction

I’ve long been skeptical of display advertising. At least my students know this, since every year I start the digital marketing course with a lecture on why display sucks (and why inbound/search-engine marketing performs much better).

But this post is not about the many pitfalls of display. Rather, it’s outlining three arguments as to why I nowadays prefer social advertising, epitomized by Facebook Ads, over display advertising. Without further ado, here are the reasons why social rocks at the moment.

1. Quality of contacts

It’s commonly known that Facebook advertising is cheap in comparison to many advertising channels, when measured by CPM or cost per individual reached. Display can be even cheaper, so isn’t that better? No, absolutely not. Reach and impressions are completely fallacious metrics — their business value approaches zero. Orders of magnitude more important is the quality of contacts.

The quality of Facebook traffic, when looking at post-click behavior, tends to be better than the quality of display traffic. Even when media companies speak of “premium inventory”, the results are weak. People just don’t like banner ads. The people who click them, if they are people and not bots to begin with, often exit the site instantly without clicking any further.

2. Social interaction

People actually interact with social ads. They pose questions, like them and even share them with their friends. Share advertisements? OMG, but they really do. That represents a tremendous opportunity for a brand to interact with its customer base, and to systematically gather feedback and customer insight. This is simply not possible with any other form of advertising, display included.

Display ads, even with rich-media executions, are completely static and dead when it comes to social interaction. Whereas social advertising creates an opportunity to gather social proof and actual word-of-mouth, even viral diffusion, in one and the same advertising platform, display advertising completely lacks the social dimension.

3. Better ad formats

Social advertising, specifically on Facebook, gives great flexibility in combining text, images and video. Typically, a banner ad can only fit a brief slogan (“Just do it.”), whereas a social advertisement can include many sentences of text, a compelling picture and even a link description, which together give advertisers the ability to communicate the whole story of the company or its offering in one advertisement.

But isn’t that boring? No, you can craft it in a compelling way – the huge advantage is that people don’t even need to click to learn the essentials. If the goal of advertising is to inform about offerings, social advertising is among the most efficient ways to actually do it.

Conclusion

That’s it. I don’t see a way for display advertising to overcome these advantages of social advertising. Notice that I didn’t mention the superior targeting criteria — this is because display is quickly catching up to Facebook in that respect. It just won’t be enough.

Programmatic advertising: Red herring effect

Introduction

Currently, there is very strong hype around programmatic buying. Corporations are increasing their investments in programmatic advertising, and publishers are developing their own technologies to provide better targeting information for demand-side platforms.

But all is not well in the kingdom. Display advertising still faces fundamental problems which are, in my opinion, more critical to advertising performance than more granular targeting.

Problems of display advertising

In particular, there are four major problems:

  • banner blindness
  • ad blocking
  • ad clutter
  • post-click behavior

Banner blindness is a classical problem: banner ads are not cognitively processed but are left either consciously or unconsciously unprocessed by the people exposed to them (Benway & Lane, 1998). This is a form of automatic behavior: ignoring ads and focusing on the primary task, i.e., processing website content. Various solutions have been proposed in the industry, including native advertising, which “blends in” with the content, and counting only viewable impressions, which would guarantee that people actually see the banner ads they are exposed to. The problem with the former is that it confounds sponsored and organic content, while the problem with the latter is that seeing is not equivalent to processing (hence banner blindness).

Ad blocking has been on a tremendous rise lately (Priori Data, 2016). Consumers across the world are rejecting ads, both on desktop and on mobile. Partly, this is related to ad clutter, referring to the high ads-to-content ratio of many websites. The proliferation of ad blocking should be interpreted as an alarming signal by media houses. Instead, many of them seem to take no notice, keeping their website layouts unchanged and their ads-to-content ratios high. Without major improvements in user satisfaction (reducing the ads-to-content ratio, demanding higher-quality ads from advertisers, and changing website layouts), ad blocking is likely to continue despite the pleas of publishers. Less advertising, of better quality, is needed to trigger a positive sentiment towards online advertising.

Finally, the post-click behavior of traffic originating from display ads tends to be unsatisfactory. Bounce rates are exceptionally high (80-90% in some cases), direct ROI is orders of magnitude lower than in search, and, alarmingly enough, display often looks weak even when examining the entire conversion path. Consequently, using direct ROI as a measure of success in display advertising yields sub-par results. Unfortunately, direct ROI is used more and more by performance-oriented advertisers.

Brand advertisers, who seek no direct returns on their online ad spend (think Coca-Cola), may continue using reach metrics. Thus, focusing on these advertisers, who still make up a large share of the advertising market, would seem like a good strategy for publishers. Moreover, combating click fraud and other forms of invalid clicks is essential. By shortsightedly optimizing for revenue at all costs – including allowing bots to participate in RTB auctions – media houses and DSPs are shooting themselves in the foot.

Root causes

But let’s talk about why these problems have not been addressed, at least not fundamentally by the majority of media companies. There are a few reasons for that.

First, the organizational incentives are geared towards sales. The companies follow a media business model which principally means: the more ads you sell, the better. This equation does not consider user satisfaction or the quality of the ads you’re showing, only their number and the revenue originating from them.

At a more abstract level, the media houses face an optimization conundrum:

  • MAX number of ads
  • MAX price of ads
  • MAX ad revenue
  • (MAX ad performance)
  • (MAX user satisfaction)

Maximizing the number of ads (shown on the website) and the price of ads also maximizes ad revenue. However, it does not maximize user satisfaction. User satisfaction and ad performance are in parentheses because they are not considered in the media company’s optimization function, although they should be: there is a feedback mechanism from user satisfaction to ad performance, and from ad performance to the price of ads.
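The feedback mechanism can be illustrated with a toy model. All functional forms and coefficients below are my own illustrative assumptions, not empirical estimates:

```python
def long_run_revenue(n_ads, base_price=10.0):
    """Toy model of the feedback mechanism: more ads lower user
    satisfaction, satisfaction drives ad performance, and performance
    drives the price advertisers will sustainably pay."""
    satisfaction = max(0.0, 1.0 - 0.08 * n_ads)  # satisfaction falls with ad load
    performance = satisfaction ** 2              # performance follows satisfaction
    price = base_price * performance             # price follows performance
    return n_ads * price

# Short-term logic says "maximize the number of ads"; with the
# feedback loop included, an intermediate ad load maximizes revenue.
best_load = max(range(1, 13), key=long_run_revenue)
```

Under these made-up parameters, cramming in the maximum number of ads yields far less long-run revenue than a moderate ad load, which is exactly the power-selling trap described below.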

Seemingly, many media companies are maximizing revenue in the short term through a power-selling strategy. However, they should be maximizing revenue in the long term, and that cannot take place without considering user satisfaction from the consumer’s perspective and ad performance from the advertiser’s perspective. Power selling actually hurts their interests in the long term through the feedback mechanism.

Finding solutions

How to dismantle this conundrum? First, media companies should obviously consider both user satisfaction and ad performance. The former is done by actively studying the satisfaction of their users with respect to ad exposure. The latter is done by actively asking for, or otherwise acquiring, data from advertisers on campaign performance. I, as a marketing manager, rarely found media salespeople interested in my campaign performance – they just wanted a quick sell. Even better than asking would be to find a way to tap directly into campaign performance, e.g., by gaining access to the advertiser’s analytics.

Second, media companies should consider the dynamics between the variables they are working with. For example,

  • ad performance (as a dependent variable) and number of ads (as an independent variable)
  • ad performance and user satisfaction
  • user satisfaction and number of ads
  • price of ads and ad performance

It can be hypothesized, for example, that a higher ad performance leads to a higher price of ads as ads become more valuable to advertisers. If in addition ad performance increases as the number of ads decreases, there is a clear signal to decrease the number of ads on the website. Some of these hypotheses can be tested through controlled experiments.
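As a sketch of such a controlled experiment (the helper function and all variant numbers are invented for illustration), a standard two-proportion z-test can compare CTR between a reduced-ads variant and the current layout:

```python
import math

# Illustrative sketch, not from the post: test the hypothesis that ad
# performance (here, CTR) improves when the number of ads is reduced,
# using a two-proportion z-test on a controlled experiment.

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """z-statistic for H0: CTR of variant A equals CTR of variant B."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Variant A: fewer ad slots; variant B: current layout (numbers invented).
z = two_proportion_z(clicks_a=260, imps_a=20000, clicks_b=190, imps_b=20000)
print(round(z, 2))  # a z above ~1.96 is significant at the 5% level
```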

Third, media companies should re-align incentives from power-selling to value-based selling. They should not want to “fill the slots” by any means necessary, but only fill them with good advertising that performs well for the advertiser. Achieving such a goal may require closer collaboration with advertisers, including sharing data with them and intervening in their production processes so that the resulting advertising does not annoy end users and, based on prior data, is likely to perform well.

Conclusion

In conclusion, there is a bottleneck at the advertising–customer interface. A red herring effect takes place when we focus on a secondary issue – in the context of digital marketing, we have to acknowledge that there is no intrinsic value in impressions or programmatic advertising technology if the baseline results remain low. Ultimately, advertisers face an abundance of choice among channels, both online and offline. And although they are momentarily pushing for large programmatic investments, if the results don’t follow they are likely to shift budget allocations into a different sort of equilibrium in the long run, once again under-weighting display advertising.

Personally, I believe the media industry is too slow to react and display advertising will lose budget share in the coming years especially against social media advertising and search advertising, but also against some traditional channels such as television.

Online ads: Forget tech, invest in creativity

Technology is not a long-lasting competitive advantage in SEM or other digital marketing – creativity is.

This brief post is inspired by an article I read about different bid management platforms:

“We combine data science to SEM, so you can target based on device, hour of day and NASDAQ development.”

Yeah… but why would you do that? Spend your time thinking of creative concepts that work in general, not only when NASDAQ is down by 10%. Just because something is technically possible doesn’t make it useful. Many technocratic and inexperienced marketing executives still get lured by the “silver bullet” effect of ad technology. And even when you do consider outside events such as NASDAQ movements, newsjacking is a far superior marketing solution to automation.

Commoditization of ad technology

In the end, platforms give all contestants a level playing field. For example, Google’s system considers CTR in determining cost and reach. Many advertisers obsess over their settings, bids and other technical parameters, and ignore the most important part: the message. Perhaps it is because the message is the hardest part: increasing or decreasing a bid is a simple decision given the data, but how do you create a stellar creative? That is a more complex, yet more important, problem.

Seeing people as numbers, not as people

The root cause might be that the world view of some digital marketers is twisted. Consumers are seen as some kind of cattle — aggregate numbers that only need to be fed ad impressions for positive results to magically emerge. This world view is false. People are not stupid – they will not click (or even look at) just any ad, especially in this day and age of ad clutter. The notion that you could be successful just by adopting a “bidding management platform” is foolish. Nowadays, every impression that counts needs to be earned. And while a bid management platform may give you a 1% boost to your ROI, focusing on the message is likely to bring a much larger increase. Because ad performance is about people, not about technology.

Conclusion

The more solid the industry becomes and the more basic technological know-how becomes mastered by advertisers, the less of a role technology plays. At that point of saturation, marketing technology investments begin to decline and companies shift back to basics: competing with creativity.

Basic formulas for digital media planning

Planning makes happy people.

Introduction

Media planning, or campaign planning in general, requires you to set goal metrics so that you are able to communicate the expected results to a client. In digital marketing, these are metrics like clicks, impressions, costs, etc. The actual planning process usually involves using estimates — that is, sophisticated guesses of a sort. These estimates may be based on your previous experience, planned goal targets (when, for example, given a specific business goal like a sales increase), or industry averages (if those are known).

Calculating online media plan metrics

By knowing or estimating some goal metrics, you are able to calculate others. But sometimes it’s hard to remember the formulas. This is a handy list to remind you of the key formulas.

  • ctr = clicks / imp
  • clicks = imp * ctr
  • imp = clicks / ctr
  • cpc = cost / clicks
  • cvr = conversions / clicks
  • cpm = cost / (imp / 1000)
  • cost = cpm * (imp / 1000)
  • cpa = cpc / cvr
  • cpa = cost / conversions
  • cost = cpa * conversions
  • conversions = cost / cpa

In general, metrics relating to impressions are used as proxies for awareness and brand-related goals. Metrics relating to clicks reflect engagement, while conversions indicate behavior. Oftentimes, I estimate CTR, CVR and CPC because 1) it’s good to set a starting goal for these metrics, and 2) they exhibit some regularity (e.g., ecommerce conversion rate tends to fall between 1% and 2%).
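To illustrate chaining these formulas (all numbers below are invented), estimating CPC and CVR plus a planned budget is enough to derive clicks, conversions and CPA:

```python
# Illustrative sketch, not from the post: derive the remaining plan metrics
# from an estimated budget (cost), CPC and CVR, using the formulas above.

def plan_metrics(cost, cpc, cvr):
    """Return (clicks, conversions, cpa) from cost, CPC and CVR estimates."""
    clicks = cost / cpc          # clicks = cost / cpc
    conversions = clicks * cvr   # conversions = clicks * cvr
    cpa = cpc / cvr              # cpa = cpc / cvr
    return clicks, conversions, cpa

clicks, conversions, cpa = plan_metrics(cost=1000.0, cpc=0.50, cvr=0.015)
# Roughly 2000 clicks, 30 conversions, and a CPA of about 33.33.
print(clicks, conversions, round(cpa, 2))
```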

Conclusion

You don’t have to know everything to devise a sound digital media plan. A few goal metrics are enough to calculate all the necessary ones. The more realistic your estimates are, the better. Worry not – accuracy improves with time. In the beginning, it is best to start with moderate estimates you feel comfortable achieving, or even outperforming: it’s always better to under-promise and over-deliver than the other way around. Finally, the achieved metric values differ by channel — sometimes a lot — so take that into consideration when crafting your media plan.

Keyword optimization routine for search-engine advertising (AdWords)

In this post, I’m sharing a simple optimization process for search-engine advertising. I’ll also try to explain its rationale, i.e. why it should work. The process is particularly applicable to Google AdWords due to the availability of metrics, but for the most part it applies to Bing Ads as well.

First, take a list of your keywords along with the metrics defined below.

Then, sort by cost (high to low). Why? Because you may have thousands of keywords, out of which a handful matter for generating results — the Pareto principle is strong in search advertising. It makes sense to focus your time and effort on optimizing the keywords that make up most of your spend.

Among the metrics, look at:

  • relevance (subjective evaluation)
  • match type –> if broad, switch to exact
  • impression share –> if low (below 70%), increase bid (all else equal)
  • cost per converted click –> if high (above CPA target), reduce bid
  • avg. position –> if low (worse than 3), increase bid (all else equal)
  • Quality Score –> if low (below 6), improve ad group structure, ad copy and/or landing pages

Relevance comes first and foremost. Ask yourself: is this a keyword that people interested in my offering would use? Sometimes you may include terms you’re not sure of, or terms added only to reach a certain volume of clicks. If you are able to achieve that volume with relative ease, you don’t need expansion but reduction of keywords. Start the reduction from the keywords with the lowest relevance – judged firstly by the keyword’s results (data trumps opinions) and secondarily by a qualitative evaluation against the rationale above.

A common strategy is to start with broad match, and gradually move towards exact match. Take a look at the search terms report: are you getting a lot of irrelevant searches? If so, it definitely makes sense not only to include negative keywords but also to change the match type. Generally speaking, as the number of optimization cycles increases the number of broad match keywords decreases. In the end, you only have exact terms. However, this assumes you’re able to achieve click volume goals.

Are you getting enough impressions? Impression share indicates your keywords’ competitiveness in ad auctions. If relevance is high and impression share low, you especially want to take action in improving your competitiveness. The simplest step is to increase keyword bid. Depending on the baseline, performance, and SEA strategy, you may want to increase it by 30% or even 100% to get a real impact.

Regarding the goals, you should know your CPA target. A very basic way to calculate it is to multiply the average order value by the average profit margin, i.e. calculate your profit per order. That amount is the maximum you can spend per conversion while remaining profitable, or at break-even. (Of course, the real pros consider customer lifetime value at this point, but for simplicity I’m leaving it out here.)
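For example (the numbers are invented): with an average order value of 80 and a 25% profit margin, the break-even CPA target is 20:

```python
# Invented example numbers: average order value of 80 with a 25% profit
# margin gives 20 of profit per order, which is the break-even CPA target.
average_order_value = 80.0
profit_margin = 0.25
cpa_target = average_order_value * profit_margin
print(cpa_target)  # 20.0
```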

Average position matters because an ad with a high rank gains a natural lift. That is, you can run the same ad in position 3 and position 1 and get better results in position 1 just because of the position (not because the ad is better). This in turn improves your click-through rate and indirectly boosts your Quality Score which, in turn, reduces your CPC, all else being equal. Other ways to improve QS are to re-structure ad groups, usually by reducing the number of keywords and focusing on semantic similarity between the terms; to write better ad copy that encourages people to click (remember, no ad is perfect!); and to improve the landing page experience if that is identified as a weak component in your Quality Score evaluation.
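The checklist above can be sketched in code. This is a hedged sketch: the thresholds (70% impression share, position 3, Quality Score 6) come from the post, but the data layout, sample keywords and suggestion strings are my own illustration.

```python
# Sketch of the keyword optimization checklist described above.

def keyword_actions(kw):
    """Return suggested actions for a single keyword's metrics."""
    actions = []
    if kw["match_type"] == "broad":
        actions.append("switch match type to exact")
    if kw["impression_share"] < 0.70:
        actions.append("increase bid (low impression share)")
    if kw["cost_per_conversion"] > kw["cpa_target"]:
        actions.append("reduce bid (above CPA target)")
    if kw["avg_position"] > 3:  # positions are ranks: a higher number is worse
        actions.append("increase bid (low average position)")
    if kw["quality_score"] < 6:
        actions.append("improve ad group structure, copy or landing page")
    return actions

# Invented sample data standing in for a keyword report export.
keywords = [
    {"keyword": "running shoes", "cost": 900.0, "match_type": "broad",
     "impression_share": 0.55, "cost_per_conversion": 28.0, "cpa_target": 20.0,
     "avg_position": 4.1, "quality_score": 5},
    {"keyword": "buy trail shoes", "cost": 120.0, "match_type": "exact",
     "impression_share": 0.82, "cost_per_conversion": 14.0, "cpa_target": 20.0,
     "avg_position": 2.3, "quality_score": 8},
]

# Sort by cost, high to low: per the Pareto principle, a handful of
# keywords make up most of the spend, so review those first.
for kw in sorted(keywords, key=lambda k: k["cost"], reverse=True):
    print(kw["keyword"], keyword_actions(kw))
```

Note that the rules can give conflicting signals for one keyword (e.g. both a bid increase and a bid reduction); relevance and your own judgment decide which wins.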

This is what I pay attention to when optimizing keywords in search advertising. Feel free to share your comments!

5 questions to ask your Facebook marketing agency

Facebook marketing is not magic, although it might seem like it if you have no clue how to do it. Therefore, before anything else, the first piece of advice is: get to know the basics. Jonloomer.com is a good resource for that, as well as Facebook’s free training modules.

Now, to the actual point. A company may run Facebook marketing in-house or via an agency. For small companies, it often makes sense to do it yourself, but larger budgets require deeper know-how and more time to get the best results. For these reasons, many medium and large companies choose to outsource. When outsourcing, an agency can take care of organic Facebook marketing, paid advertising, or both.

But how to test the quality of your agency?

Well, remember the first advice – learn the basics of Facebook marketing. If you don’t know something, you cannot manage it. Second, you can ask these questions, before engaging an agency or during your relationship with them.

  1. What goals would you set for our Facebook marketing?
  2. How would you measure the achievement of those goals?
  3. Describe your strategy in achieving the goals.
  4. Describe your optimization process for Facebook marketing.
  5. Based on our Facebook posts, tell me something that I don’t know about my business.

The first question reveals how well the agency grasps your business, and how they would fit your business goals to the Facebook environment. The goals don’t have to be exactly what you had thought of — it’s more important that they show innovativeness and general understanding of your business.

The second question reveals the metrics they would choose to measure performance – the more they are aligned with your general business goals, the better. In addition, if they are able to argue efficiently for both ROI- and non-ROI-oriented metrics, it’s a good sign as it shows an understanding of the general complexity of multichannel consumer behavior.

The third question tells how they would go about creating a Facebook marketing strategy — here you can pay attention to their proposed split between organic and paid, frequency of posting/optimization, target group definition, ad creation process, etc. You can ask clarifying questions, e.g. about the suggested budget size. That shows how they approach campaign planning on the fly – the better they know the environment, the better their answers.

Fourth, it is important to know how they would run the accounts in practice. For example, how much time are they willing to invest? Facebook marketing is a time-consuming activity, which is a major reason why the optimization workflow has to be efficient to achieve the best results. For an agency, it’s easy to spend money indiscriminately because Facebook absorbs all the money you can throw at it — but optimization is a different ballgame.

The fifth question tells how well they have analyzed your accounts and prior Facebook marketing activities. Not all agencies bother to analyze the status quo of your Facebook marketing before meeting you — or even while they are doing marketing for you — but doing so obviously communicates a genuine interest in closing/keeping you as a client, as well as attention to detail. If they are able to tell you something about your customers, for instance, that you didn’t know, it’s a very good sign.

There. Asking these questions and going through the associated discussion is, in my opinion, an excellent way to vet a Facebook marketing agency.

In addition, by far the most neglected aspect of managing digital marketing agencies is auditing. You should regularly have a third party, such as another agency, audit your campaigns. Never be “forever happy” with an agency; always push for more. You want to show commitment so they see value in investing in the relationship, but you also want to keep them a little on their toes so they actually bother doing their best for you, as opposed to only chasing new clients.