
Tag: online advertising

Problems of standard attribution modelling

Attribution modelling is like digital magic.

Introduction

Wow, so I'm reading a great piece by Funk and Abou Nabout (2016) [1]. They outline the main problems of attribution modelling. By "standard", I refer to the commonly used method of attribution modelling, best known from Google Analytics.

Previously, I've addressed this issue in my digital marketing class by saying that the choice of an attribution model is arbitrary, i.e. marketers can freely decide whether it's better to use e.g. a last-click or a first-click model. But now I realize this is the wrong approach, given that the impact of each touch-point can in fact be estimated. There is much more depth to attribution modelling than the standard model leads you to believe.

Five problems of standard attribution modelling

So, here are the five problems according to Funk and Abou Nabout (2016).

1. Giving touch-points accurate credit

This is the main problem to me. The impact of touch-points on conversion value needs to be weighted, but in the standard model the weighting is an arbitrary rather than a statistically grounded choice (that is, until we consider advanced methods!). Therefore, there is no objective ranking or "betterness" among the standard attribution models.

2. Disregard for time

The standard attribution model does not consider the time interval between touch-points – it can range anywhere from 30 minutes to 90 days, restricted only by cookie duration. Why does this matter? Because time generally matters in consumer behavior. For example, if there is a long interval between contacts A_t and A_t+1, it may be that the effect of the first contact was not powerful enough to incite a return visit. Of course, one could also argue there is a reason not to consider time, because any differences arise from discrepancies in consumers' natural decision-making processes, which result in unknown intervals. Ignoring time would then standardize the intervals. However, if we assume patterns in consumers' decision-making processes, as is usually done by stating that "in our product category, the purchase process is short, usually under 30 days", then addressing time differences could yield a better forecast; for example, we should expect a second contact to take place at a certain point in time, given our model of consumer behavior.
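To make the time argument concrete, here is a minimal sketch (my own illustration, not Funk and Abou Nabout's method) of time-decay weighting, where a touch-point's credit decays exponentially with the time between the touch and the conversion. The seven-day half-life and the example journey are assumptions.

```python
from datetime import datetime
from math import exp, log

HALF_LIFE_DAYS = 7.0  # assumed decay parameter; tune to your purchase cycle

def time_decay_credits(touchpoints, conversion_time):
    """Split conversion credit across touch-points, favoring recent ones.

    touchpoints: list of (channel, timestamp) tuples
    conversion_time: timestamp of the conversion
    """
    decay_rate = log(2) / HALF_LIFE_DAYS
    raw = []
    for channel, ts in touchpoints:
        days_before = (conversion_time - ts).total_seconds() / 86400.0
        raw.append((channel, exp(-decay_rate * days_before)))
    total = sum(weight for _, weight in raw)
    credits = {}
    for channel, weight in raw:
        credits[channel] = credits.get(channel, 0.0) + weight / total
    return credits

journey = [
    ("display",  datetime(2016, 5, 1)),
    ("facebook", datetime(2016, 5, 20)),
    ("search",   datetime(2016, 5, 28)),
]
print(time_decay_credits(journey, datetime(2016, 5, 29)))
# search, touched one day before the conversion, receives the largest share
```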

3. Ignoring interaction types

The nature of the touch or interaction should be considered when modeling the customer journey. The standard attribution model assigns conversion value to different channels based on clicks, but the types of interaction across channels might be mixed. For example, one conversion might involve a view in Facebook and a click in AdWords, whereas another conversion might have the reverse. But are views and clicks equally valuable? Most marketers would not say so. However, they would also assign some credit to views – at least according to classic advertising theory, visibility has an impact on advertising performance. Therefore, the attribution model should also consider several interaction types and the impact each type has on conversion propensity.
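As a rough sketch of the same idea, one could give interaction types different weights when distributing credit; the 0.1 weight for a view below is a made-up placeholder, not an estimated value.

```python
# Assumed relative weights for interaction types; in practice these should be
# estimated from data (e.g. via incrementality tests), not set by hand.
INTERACTION_WEIGHTS = {"click": 1.0, "view": 0.1}

def weighted_touchpoints(path):
    """path: list of (channel, interaction_type) tuples for one conversion."""
    weights = {}
    for channel, itype in path:
        weights[channel] = weights.get(channel, 0.0) + INTERACTION_WEIGHTS[itype]
    total = sum(weights.values())
    return {channel: w / total for channel, w in weights.items()}

print(weighted_touchpoints([("facebook", "view"), ("adwords", "click")]))
# the AdWords click gets ~0.91 of the credit, the Facebook view ~0.09
```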

4. Survivorship bias

As Funk and Abou Nabout (2016) note, "the analysis does not compare successful and unsuccessful customer journeys, [but] only looks at the former." This is essentially a case of survivorship bias – we do not compare the touch-points that led to a conversion with those that did not. If we made that comparison, we could observe that a certain channel has a higher likelihood of being included in a conversion path [2] than another channel, i.e. its weight should be higher, proportional to its ability to produce lift in the conversion rate. By excluding information on unsuccessful interactions, we risk both Type I and Type II errors – that is, false positives and false negatives.
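A minimal sketch of the kind of comparison this would enable, assuming we log both converting and non-converting paths (the journeys below are invented):

```python
def channel_lift(converting_paths, non_converting_paths, channel):
    """Compare how often a channel appears in converting vs. non-converting
    journeys. A ratio above 1 suggests the channel is associated with lift;
    note that this is still correlational, not causal.
    """
    p_conv = sum(channel in path for path in converting_paths) / len(converting_paths)
    p_non = sum(channel in path for path in non_converting_paths) / len(non_converting_paths)
    return p_conv / p_non if p_non > 0 else float("inf")

converted = [{"search", "facebook"}, {"display", "search"}, {"search"}]
lost      = [{"display"}, {"display", "facebook"}, {"facebook"}, {"display"}]
print(channel_lift(converted, lost, "search"))   # appears only in winning paths
print(channel_lift(converted, lost, "display"))  # ~0.44, over-represented in lost paths
```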

5. Exclusion of offline data

The standard attribution model does not consider offline interactions. But research shows multi-channel consumer behavior is highly prevalent. The lack of data on these interactions is the major reason for their exclusion, but at the same time it restricts the usefulness of attribution modelling to the e-commerce context. Most companies, therefore, are not getting accurate information from attribution modelling beyond the online environment. And, as I've argued in my class, word-of-mouth is not included in the standard model either, which is a major issue for accuracy, especially considering social media. Even if we only want to measure the performance of an advertising channel, social media ads have a distinct social component – they are shared and commented on, which results in additional interactions that should be considered when modeling the customer journey.

Solutions

I'm still finishing the original article, but had to write these few lines because the points I encountered were poignant. I'm sure they will propose solutions next, and I may update this article afterwards. At this point, I can only state two solutions that readily come to mind: 1) the use of conversion rate (CVR) as an attribution parameter – it is a global metric and thus escapes survivorship bias; and 2) Universal Analytics, i.e. using methods such as Google's Measurement Protocol to capture offline interactions. As someone smart said, the solution to a problem leads to a new problem, and that's the case here as well – there needs to be a universal identifier ("User ID" in Google's terms) to associate online and offline interactions. In practice, this requires registration.
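To illustrate the second solution, here is a rough sketch that uses the classic Universal Analytics Measurement Protocol to push an offline interaction into Google Analytics. The tracking ID, User ID and event names are placeholders; the key point is that the same User ID must be collected both online and offline (hence the registration requirement).

```python
import requests

GA_ENDPOINT = "https://www.google-analytics.com/collect"  # Universal Analytics endpoint
TRACKING_ID = "UA-XXXXXXX-Y"                              # placeholder property ID

def send_offline_interaction(user_id, action, value=None):
    """Send an offline touch-point (e.g. an in-store purchase) to GA as an event.

    user_id must be the same User ID used online, so online and offline
    interactions can be stitched into one journey.
    """
    payload = {
        "v": 1,              # protocol version
        "tid": TRACKING_ID,  # GA property the hit belongs to
        "uid": user_id,      # the universal identifier across channels
        "t": "event",        # hit type
        "ec": "offline",     # event category
        "ea": action,        # event action, e.g. "store_purchase"
    }
    if value is not None:
        payload["ev"] = int(value)  # event value must be a non-negative integer
    return requests.post(GA_ENDPOINT, data=payload, timeout=5)

# Example: a registered customer bought something in a physical store
send_offline_interaction(user_id="cust-1029", action="store_purchase", value=120)
```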

Conclusion

The criticism applies to standard attribution modeling, e.g. to how it is done in Google Analytics. There might be additional issues not included in the paper, such as the problem of aggregate data: to perform any type of statistical analysis, click-stream data is a must-have. Also, a relevant question is: how do touch-points influence one another, and how should that influence be modeled? Beyond technicalities, it is important for managers to understand the general limitations of current methods of attribution modelling and to seek solutions in their own organizations to overcome them.

References

[1] Funk, B., & Abou Nabout, N. (2016). Cross-Channel Real-Time Response Analysis. In O. Busch (Ed.), Programmatic Advertising: The Successful Transformation to Automated, Data-Driven Marketing in Real-Time (pp. 141-151). Springer-Verlag.

[2] Conversion path and customer journey are essentially referring to the same thing; perhaps with the distinction that conversion path is typically considered to be digital while customer journey has a multichannel meaning.

Programmatic ads: Fallacy of quality supply

A major fallacy publishers still have is the notion of “quality supply” or “premium inventory”. I’ll explain the idea behind the argument.

Introduction. The fallacy of quality supply lies in publishers assuming the quality of a certain placement (say, a certain website) is constant, whereas in reality it varies according to the response which, in turn, is a function of the customer and the ad. Both the customer and the ad are running indices, meaning that they constantly change. The job of a programmatic platform is to match the right ads with the right customers in the right placements. This is a dynamic problem, where the "quality" of a given placement can be defined at the time of the match, not prior to it.
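In auction terms, the argument can be expressed as a simple scoring rule: the value of showing a given ad to a given customer in a given placement is estimated at the moment of the match, for example as predicted response times the advertiser's bid. The sketch below is a toy illustration of that logic, not any platform's actual algorithm; predict_ctr stands in for whatever response model the platform uses.

```python
def expected_value(bid_cpc, predicted_ctr):
    """Expected revenue of one impression: bid (cost per click) x p(click).

    'Quality' of a placement is not fixed -- it emerges from this match.
    """
    return bid_cpc * predicted_ctr

def pick_ad(candidate_ads, user, placement, predict_ctr):
    """Choose the ad with the highest expected value for this user and placement."""
    return max(
        candidate_ads,
        key=lambda ad: expected_value(ad["bid_cpc"], predict_ctr(user, ad, placement)),
    )

# Toy response model: the same placement is valuable for ad A but not for ad B.
def toy_ctr(user, ad, placement):
    relevance = 1.0 if ad["topic"] in user["interests"] else 0.2
    return 0.01 * relevance

ads = [{"id": "A", "topic": "running", "bid_cpc": 0.50},
       {"id": "B", "topic": "banking", "bid_cpc": 0.80}]
print(pick_ad(ads, {"interests": {"running"}}, "news-frontpage", toy_ctr)["id"])  # A
```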

Re-defining quality. The term "quality" should in fact be re-defined as relevance – a high-quality ad is relevant to customers at a given time (of match), and vice versa. In this equation, the ad placement does not hold any inherent value; its value is always determined in a unique match between the customer, the ad and the placement. It follows that the ad itself needs to be relevant to the customer, irrespective of the placement. It is not known which interaction effect is stronger, ad + customer or placement + customer, but it is commonly assumed that the placement has a moderating effect on the quality of the ad as perceived by the customer.

The value of ad space is dynamic. The idea of publishers defining a quality distribution a priori is old-fashioned. It stems from the idea that publishers should rank and define the value of their advertising space. That is not compatible with platform logic, in which any particular placement can be of high or low quality (or anywhere between the extremes). In fact, the same placement can simultaneously be both high- and low-quality, because its value depends on the advertiser and the customer which, as stated, fluctuate.

Customers care about ad content. To understand this point, quality should be understood from the point of view of the customer. It can plausibly be argued that customers are interested in ads (if at all) because of their content, not their context. If an ad offers a promotion on item X which I like, I'm interested. This interest takes place whether the ad was placed on website A or website B. Thus, it is not logical to assume that the placement itself would have a substantial impact on ad performance.

Conclusion. To sum up, there is no value in an ad placement per se; the value is realized if (and only if) relevance is achieved. Under this argument, the notion of "premium ad space" is inaccurate and, through its implications, detrimental to the development of the programmatic ad industry. If ad space is priced according to inaccurate notions, it is not likely to match its market value and, given that advertisers have a choice, they will not continue buying such ad inventory. Higher relevance leads to higher performance, which leads to advertiser satisfaction and a higher probability of repurchase of that media. Any predetermined notion of "quality supply" is not relevant in this chain.

Recommendations. Instead of maintaining the false dichotomy of "premium" and "remnant" inventory, publishers should strive to maximize relevance in match-making auctions by any means necessary. For this purpose, they should demand higher quality and variety of ads from advertisers. Successful match-making depends on quality and variety on both sides of the two-sided market. Generally, when prices are set according to supply and demand, more economic activity takes place – there is no reason to expect otherwise in the advertising market. Publishers should therefore stop labeling their inventory as "quality" or "premium" and instead let markets decide whether it is so. Indeed, in programmatic advertising the so-called remnant inventory can outperform what publishers would initially perceive as superior placements.

Is “premium” ad space a hoax?

Answer: It kinda is.

“Premium publishers” and “premium ad space” — these are often heard terms in programmatic advertising. But they are also dangerously fallacious ideas.

I’ll give three reasons why:

  1. A priori problem
  2. Uniformity problem
  3. Equilibrium problem

First, publishers define what is "premium" a priori (before results), which is the wrong sequence (the a priori problem). The value of ad space — or its status, premium or not — should be determined a posteriori, after the fact. Anything else risks biases due to guesswork.

Second, what is "premium" (i.e., works well) for advertiser A might be different for advertiser B, yet the same ad space is labeled "premium" (or not) for everyone (the uniformity problem). The value of ad space should be determined by its value to the advertiser, which is not uniformly distributed.

Third, fixing a higher price for "premium" inventory skews the market – rational advertisers won't pay irrational premiums, and the publisher ends up losing revenue instead of gaining a "premium" price (the equilibrium problem). This is the exact opposite of the outcome the publisher hoped for, and it arises from an imbalance of supply and demand.

Limitations

I defined premium as ad space that works well with regard to the advertiser's objectives. Other definitions also exist, e.g. Münstermann and Würtenberg (2015), who argue that the distinctive trait between premium and non-premium media is the degree of editorial professionalism, so that amateur websites would be less valuable. In many cases, this is an incorrect classifier from the advertiser's perspective — e.g., placing an ad on a blogger's website (influencer marketing) can fairly easily produce higher returns than placing it alongside "professional" content. The degree of professionalism of the content is not a major cue for consumers, and therefore one should define "premium" from the advertiser's point of view — as a placement that works.

Conclusion

The only reason premium inventory is still alive, I suspect, is the practice of private deals, where advertisers are more interested in volume than performance – these advertisers are informed more by assumptions than by data. Most likely, as buyers' level of sophistication increases, they will become more inclined towards market-based pricing, which has a much closer association with performance than private deals do.

A New Paradigm for Advertising

From its high point, the sheep can see far.

Introduction

In Finland, and maybe elsewhere in the world as well, media agencies used to reside inside advertising agencies, back in the 1970s-80s. Then they were separated from one another in the 1990s, so that advertising agencies do creative planning and media agencies buy ad space in the media. Along with this process, heavy international integration took place, and currently both the media and advertising agency markets are dominated by a handful of global players, such as Ogilvy, Dentsu, Havas, WPP, etc.

This article discusses that change and argues for re-convergence of media and advertising agencies. I call this the new paradigm (paradigm = a dominant mindset and way of doing things).

The old paradigm

The current advertising paradigm consists of two features:

1) Advertising = creative + media
2) Creative planning –> media buying –> campaigning

In this paradigm, advertising is seen as a rigid, inflexible, one-off game where you create one advertising concept and run it, regardless of customer response. You are making one sizable bet, and that's it. To reduce the risk of failure, creative agencies spend tons of time "making sure they get it right". Sometimes they use advertising pre-testing, but the process is predominantly driven by intuition, or black-box creativity.

Overall, that is an old-fashioned paradigm, which is why I believe we need a new one.

Towards the new paradigm

The new advertising paradigm looks like this:

1) Advertising = creative + media + optimization
2) Creative planning –> media trials –> creative planning –> …

Here, advertising is seen as a fluid, flexible, consecutive game where you have many trials to succeed. The creative process feeds from consumer response, and in turn media buying is adjusted based on the results of each unique creative concept.

So what is the difference?

In the old paradigm, we would spend three months planning and create one “killer concept” which according to our intuition/experience is what people want to see. In the new paradigm, we spend five minutes to create a dozen concepts and let customers (data) tell us what people want to see. Essentially, we relinquish the idea that it is possible to produce a “perfect ad”, in particular without customer feedback, and instead rely on a method that gets us closer to perfection, albeit never reaching it.
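A minimal sketch of what "letting the data tell us" can look like in practice: an epsilon-greedy test that spreads impressions over a dozen creative concepts and gradually shifts delivery towards the ones people actually respond to. The 10% exploration rate and the simulated click-through rates are illustrative assumptions.

```python
import random

class CreativeTester:
    """Epsilon-greedy allocation of impressions across ad creatives."""

    def __init__(self, creatives, explore_rate=0.1):
        self.explore_rate = explore_rate
        self.stats = {c: {"impressions": 0, "clicks": 0} for c in creatives}

    def choose(self):
        # Mostly exploit the best-performing creative, sometimes explore.
        if random.random() < self.explore_rate:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._ctr)

    def record(self, creative, clicked):
        self.stats[creative]["impressions"] += 1
        self.stats[creative]["clicks"] += int(clicked)

    def _ctr(self, creative):
        s = self.stats[creative]
        # Optimistic prior so untested creatives still get shown early on.
        return (s["clicks"] + 1) / (s["impressions"] + 1)

tester = CreativeTester([f"concept_{i}" for i in range(12)])
for _ in range(10_000):
    concept = tester.choose()
    clicked = random.random() < (0.02 if concept == "concept_3" else 0.01)  # simulated response
    tester.record(concept, clicked)
print(max(tester.stats, key=lambda c: tester.stats[c]["impressions"]))  # most likely concept_3
```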

The new paradigm is manifested in a continuous, iterative cycle. Campaigns never end but are infinite – as we learn more about customers, budget spend may increase as a function of time, but essentially optimization is never done. The campaign has no end, unlike in the old paradigm, where people would stop marketing a product even if demand for that product had not disappeared.

You might notice that this paradigm is not compatible with old-fashioned "shark fin" marketing, but views marketing as continuous optimization. In fact, the concept of a campaign is replaced by the concept of optimization.

Let me elaborate on this thought. Consider the classic illustration of campaign-based (shark-fin) marketing (source: Jesper Åström): you put in money, but as soon as you stop investing, your popularity drops.

Now consider an alternative where you invest in marketing constantly, not in heavy spikes (campaigns) but gradually, by altering your message and targeting (optimization). The results look different:

Although seasonality, which is a natural consequence of the business cycle, does not fade away, the baseline results increase over time.

Instead of being fixed, budget allocations follow the seasonal business cycle – perhaps anticipating demand fluctuations. The timing should also consider the carryover effect of advertising.

Conclusion

I suspect media agencies and advertising agencies will converge once again, or at least the media-buying and creative planning functions will reside in the same organization. This is already how many young digital marketing agencies have operated since their founding. Designers and optimizers (ad buyers) work side by side, the former receiving instructions from the latter on what type of design concepts work – not based on intuition, as old-paradigm Art Directors (ADs) would do, but based on real-time customer response.

Most importantly, tearing down silos will benefit the clients. Doing creative work and optimization in tandem is a natural way of working – the creative concept should no longer be detached from reality. We should not think of advertising work as a factory line where ads move from one production stage to another, but rather as a form of co-creation through which we can mitigate advertising waste and produce better results for advertising clients.

Why social advertising beats display advertising

Introduction

I've long been skeptical of display advertising. At least my students know this, since every year I start the digital marketing course by giving a lecture on why display sucks (and why inbound / search-engine marketing performs much better).

But this post is not about the many pitfalls of display. Rather, it’s outlining three arguments as to why I nowadays prefer social advertising, epitomized by Facebook Ads, over display advertising. Without further ado, here are the reasons why social rocks at the moment.

1. Quality of contacts

It's commonly known that Facebook advertising is cheap in comparison to many advertising channels, when measured by CPM or cost per individual reached. Display can be even cheaper, so isn't that better? No, absolutely not. Reach and impressions are completely fallacious metrics – their business value approaches zero. Orders of magnitude more important is the quality of contacts.

The quality of Facebook traffic, when looking at post-click behavior, tends to be better than the quality of display traffic. Even when media companies speak of “premium inventory”, the results are weak. People just don’t like banner ads. The people who click them, if they are people and not bots to begin with, often exit the site instantly without clicking further.

2. Social interaction

People actually interact with social ads. They pose questions, like them and even share them with their friends. Share advertisements? OMG, but they really do. That represents a tremendous opportunity for a brand to interact with its customer base, and to systematically gather feedback and customer insight. This is simply not possible with any other form of advertising, display included.

Display ads, even with rich media executions, are completely static and dead when it comes to social interaction. Whereas social advertising creates an opportunity to gather social proof and actual word-of-mouth, even viral diffusion, within one and the same advertising platform, display advertising completely lacks the social dimension.

3. Better ad formats

Social advertising, specifically Facebook, gives great flexibility in combining text, images and video. Typically, a banner ad can only fit a brief slogan ("Just do it."), whereas a social advertisement can include several sentences of text, a compelling picture and even a link description that together give advertisers the ability to communicate the whole story of the company or its offering in one advertisement.

But isn't that boring? No, you can craft it in a compelling way – the huge advantage is that people don't even need to click to learn the essentials. If the goal of advertising is to inform about offerings, social advertising is among the most efficient ways to actually do it.

Conclusion

That’s it. I don’t see a way for display advertising to overcome these advantages of social advertising. Notice that I didn’t mention the superior targeting criteria — this is because display is quickly catching up to Facebook in that respect. It just won’t be enough.

Programmatic advertising: Red herring effect

Introduction

Currently, there is very strong hype around programmatic buying. Corporations are increasing their investments in programmatic advertising, and publishers are developing their own technologies to provide better targeting information for demand-side platforms.

But all is not well in the kingdom. Display advertising still faces fundamental problems which are, in my opinion, more critical to advertising performance than more granular targeting.

Problems of display advertising

In particular, there are four major problems:

  • banner blindness
  • ad blocking
  • ad clutter
  • post-click behavior

Banner blindness is a classical problem: banner ads are not cognitively processed but are left either consciously or unconsciously unprocessed by the people exposed to them (Benway & Lane, 1998). This is a form of automatic behavior: ignoring ads and focusing on the primary task, i.e. processing website content. Various solutions have been proposed in the industry, including native advertising, which "blends in" with the content, and counting only viewable impressions, which would guarantee that people actually see the banner ads they are exposed to. The problem with the former is that it confounds sponsored and organic content, while the problem with the latter is that seeing is not equivalent to processing (hence banner blindness).

Ad blocking has been on a tremendous rise lately (Priori Data, 2016). Consumers across the world are rejecting ads, on both desktop and mobile. Partly, this is related to ad clutter, i.e. a high ads-to-content ratio on websites. The proliferation of ad blocking should be interpreted as an alarming signal by media houses. Instead, many of them seem to take no notice, keeping their website layouts unchanged and their ads-to-content ratios high. If there are no major improvements in user satisfaction – visible in reduced ads-to-content ratios, demands for higher-quality ads from advertisers and changes to website layouts – ad blocking is likely to continue despite the pleas of publishers. Less advertising, of better quality, is needed to trigger a positive sentiment towards online advertising.

Finally, the post-click behavior of traffic originating from display ads tends to be unsatisfactory. Bounce rates are exceptionally high (80-90% in some cases), direct ROI is orders of magnitude lower than in search, and alarmingly, display often seems weak even when examining the entire conversion path. Consequently, using direct ROI as a measure of success in display advertising yields sub-par results. Unfortunately, direct ROI is used more and more by performance-oriented advertisers.

Brand advertisers, who seek no direct returns on their online ad spend (think Coca-Cola), may continue using reach metrics. Thus, focusing on these advertisers, who still make up a large share of the advertising market, would seem like a good strategy for publishers. Moreover, combating click fraud and other forms of invalid clicks is essential. By shortsightedly optimizing for revenue by all means – including allowing bots to participate in RTB auctions – media houses and DSPs are shooting themselves in the foot.

Root causes

But let’s talk about why these problems have not been addressed, at least not fundamentally by the majority of media companies. There are a few reasons for that.

First, the organizational incentives are geared towards sales. The companies follow a media business model which principally means: the more ads you sell, the better. This equation does not consider user satisfaction or quality of ads you’re showing, only their number and the revenue originating from them.

At a more abstract level, the media houses face an optimization conundrum:

  • MAX number of ads
  • MAX price of ads
  • MAX ad revenue
  • (MAX ad performance)
  • (MAX user satisfaction)

Maximizing the number of ads (shown on the website) and the price of ads also maximizes ad revenue. However, it does not maximize user satisfaction. User satisfaction and performance are in parentheses because they are not considered in the media company's optimization function, although they should be, because there is a feedback mechanism from user satisfaction to ad performance and from ad performance to the price of ads.

Seemingly, many media companies are maximizing revenue in the short term through a power-selling strategy. However, they should be maximizing revenue in the long term, and that cannot happen without considering user satisfaction from the consumer's perspective and ad performance from the advertiser's perspective. Power selling actually hurts their interests in the long term through the feedback mechanism.
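The feedback mechanism can be made explicit with a toy model. This is purely my illustration with made-up parameters, not an empirical result: user satisfaction drifts towards an equilibrium that falls with ad load, and both the price advertisers pay and the number of visits follow satisfaction.

```python
def cumulative_revenue(ads_per_page, periods, base_price=2.0, base_visits=1000,
                       adjust=0.2, tolerance=20.0):
    """Toy model of the power-selling feedback loop (all numbers are assumptions).

    Each period: revenue = visits * ads_per_page * price, where both price and
    visits scale with current user satisfaction.
    """
    sat_eq = max(0.0, 1.0 - ads_per_page / tolerance)   # heavy ad load -> low equilibrium
    satisfaction, total = 1.0, 0.0
    for _ in range(periods):
        price = base_price * satisfaction      # performance feedback to ad prices
        visits = base_visits * satisfaction    # ad blocking / churn feedback to traffic
        total += visits * ads_per_page * price / 1000.0
        satisfaction += adjust * (sat_eq - satisfaction)
    return round(total, 1)

for load in (4, 8, 16):
    print(load, cumulative_revenue(load, periods=4), cumulative_revenue(load, periods=24))
# With these made-up parameters, the heaviest ad load wins over 4 periods,
# while a moderate load wins over 24 periods once the feedback kicks in.
```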

Finding solutions

How to dismantle this conundrum? First, media companies should obviously consider both user satisfaction and ad performance. The former is done by actively studying the satisfaction of their users with regard to ad exposure. The latter is done by actively asking for, or otherwise acquiring, data from advertisers on campaign performance. As a marketing manager, I rarely found media sales people interested in my campaign performance – they just wanted a quick sale. Even better than asking would be to find a way to tap directly into campaign performance, e.g. by gaining access to the advertiser's analytics.

Second, media companies should consider the dynamics between the variables they are working with. For example,

  • ad performance (as a dependent variable) and number of ads (as an independent variable)
  • ad performance and user satisfaction
  • user satisfaction and number of ads
  • price of ads and ad performance

It can be hypothesized, for example, that higher ad performance leads to a higher price of ads, as ads become more valuable to advertisers. If, in addition, ad performance increases as the number of ads decreases, there is a clear signal to decrease the number of ads on the website. Some of these hypotheses can be tested through controlled experiments.
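As a sketch of how one such experiment could be analyzed: serve a reduced ad load to a random subset of visitors and compare click-through rates with a two-proportion z-test. The figures are invented for illustration.

```python
from math import sqrt

def two_proportion_ztest(clicks_a, impressions_a, clicks_b, impressions_b):
    """z-statistic for H0: click-through rate is equal in both groups."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    return (p_a - p_b) / se

# Invented example: treatment = fewer ads per page, control = current ad load
z = two_proportion_ztest(clicks_a=420, impressions_a=100_000,   # treatment CTR 0.42%
                         clicks_b=330, impressions_b=100_000)   # control  CTR 0.33%
print(round(z, 2))  # |z| > 1.96 would reject equal CTRs at the 5% level
```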

Third, media companies should re-align incentives from power-selling to value-based selling. They should not want to "fill the slots" by any means, but only fill the slots with good advertising that performs well for the advertiser. Achieving such a goal may require stronger collaboration with advertisers, including sharing data with them and intervening in their production processes to deliver advertising which does not annoy end users and which, based on prior data, is likely to perform well.

Conclusion

In conclusion, there is a bottleneck at the advertising-customer interface. A red herring effect takes place when we focus on a secondary issue – in the context of digital marketing, we have to acknowledge that there is no intrinsic value in impressions or in programmatic advertising technology if the baseline results remain low. Ultimately, advertisers face a choice of abundance, with channels both online and offline. And although they are momentarily pushing for large programmatic investments, if the results don't follow they are likely to shift budget allocations into a different sort of equilibrium in the long run, once again under-weighting display advertising.

Personally, I believe the media industry is too slow to react and display advertising will lose budget share in the coming years especially against social media advertising and search advertising, but also against some traditional channels such as television.

Online ads: Forget tech, invest in creativity

Technology is not a long-lasting competitive advantage in SEM or other digital marketing – creativity is.

This brief post is inspired by an article I read about different bid management platforms:

“We combine data science to SEM, so you can target based on device, hour of day and NASDAQ development.”

Yeah… but why would you do that? Spend your time thinking of creative concepts that generally work, not ones that only work when NASDAQ is down by 10%. Just because something is technically possible doesn't make it useful. Many technocratic and inexperienced marketing executives still get lured by the "silver bullet" effect of ad technology. Even when you do want to react to outside events such as NASDAQ movements or whatnot, newsjacking is a far superior marketing solution to automation.

Commoditization of ad technology

In the end, platforms give all contestants a level playing field. For example, Google's system considers CTR in determining cost and reach. Many advertisers obsess over their settings, bids and other technical parameters, and ignore the most important part: the message. Perhaps it is because the message is the hardest part: increasing or decreasing one's bid is a simple decision given the data, but how do you create a stellar creative? That is a more complex, yet more important, problem.

Seeing people as numbers, not as people

The root cause might be that the world view of some digital marketers is twisted. Consumers are seen as some kind of cattle – aggregate numbers that only need to be fed ad impressions for positive results to magically emerge. This world view is false. People are not stupid – they will not click whatever ads they are shown (or even look at them), especially in this day and age of ad clutter. The notion that you could be successful just by adopting a "bid management platform" is foolish. Nowadays, every impression that counts needs to be earned. And while a bid management platform may help you get a 1% boost to your ROI, focusing on the message is likely to bring a much higher increase. Because ad performance is about people, not about technology.

Conclusion

The more the industry matures and the more advertisers master the basic technological know-how, the smaller the role technology plays. At that point of saturation, marketing technology investments begin to decline and companies shift back to basics: competing on creativity.

Keyword optimization routine for search-engine advertising (AdWords)

In this post, I'm sharing a simple optimization process for search-engine advertising. I'll also try to explain its rationale, i.e. why it should work. The process is particularly applicable to Google AdWords due to the availability of metrics, but for the most part it applies to Bing Ads as well.

First, take a list of your keywords along with the metrics defined below.

Then, sort by cost (high to low). Why? Because you may have thousands of keywords, out of which a handful matter for generating results — the Pareto principle is strong in search advertising. It makes sense to focus your time and effort on optimizing the keywords that make up most of your spend.

In metrics, look at

  • relevance (subjective evaluation)
  • match type –> if broad, switch to exact
  • impression share –> if low (below 70%), increase bid (all else equal)
  • cost per converted click –> if high (above CPA target), reduce bid
  • avg. position –> if low (worse than position 3), increase bid (all else equal)
  • Quality Score –> if low (below 6), improve ad group structure, ad copy and/or landing pages

Relevance is the first and foremost. Ask yourself: is this a keyword that people interested in my offering would use? Sometimes you may include terms you're unsure of, or terms added only to reach a certain volume of clicks. If you are able to achieve that volume with relative ease, you don't need expansion but reduction of keywords. Reduction starts from the keywords with the lowest relevance – judged firstly by the keyword's results (data trumps opinions) and secondarily by a qualitative evaluation of the keywords according to the aforementioned rationale.

A common strategy is to start with broad match and gradually move towards exact match. Take a look at the search terms report: are you getting a lot of irrelevant searches? If so, it definitely makes sense not only to add negative keywords but also to change the match type. Generally speaking, as the number of optimization cycles increases, the number of broad match keywords decreases. In the end, you only have exact terms. However, this assumes you're able to achieve your click volume goals.

Are you getting enough impressions? Impression share indicates your keywords' competitiveness in ad auctions. If relevance is high and impression share low, you especially want to take action to improve your competitiveness. The simplest step is to increase the keyword bid. Depending on the baseline, performance, and SEA strategy, you may want to increase it by 30% or even 100% to get a real impact.

Regarding goals, you should know your CPA target. A very basic way to calculate it is to multiply the average order value by the average profit margin, i.e. to calculate your profit per order. That profit is the maximum you can spend per conversion and still remain profitable or break even. (Of course, the real pros consider customer lifetime value at this point, but for simplicity I'm leaving it out here.)
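For instance, with an assumed average order value of 80 € and a 30% average margin, the break-even CPA is 24 €:

```python
average_order_value = 80.0   # assumed figure for illustration
profit_margin = 0.30         # assumed 30% average margin

cpa_target = average_order_value * profit_margin
print(cpa_target)  # 24.0 -> spending more than this per conversion loses money
```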

Average position matters because an ad with a high rank gains a natural lift. That is, you can run the same ad in position 3 and in position 1 and get better results in position 1 just because of the position (not because the ad is better). This in turn influences your click-through rate and indirectly boosts your Quality Score which, in turn, reduces your CPC, all else being equal. Other ways to improve QS are to re-structure ad groups, usually by reducing the number of keywords and focusing on semantic similarity between the terms, to write better ad copy that encourages people to click (remember, no ad is perfect!), and to improve the landing page experience if that is identified as a weak component in your Quality Score evaluation.
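Putting the checklist together, here is a minimal sketch of the routine as code. The field names mimic a typical AdWords keyword report export, the sample rows are invented, and the thresholds are the rules of thumb from the list above rather than universal constants.

```python
def review_keyword(kw, cpa_target):
    """Return suggested actions for one keyword row of a search-ads report.

    Thresholds follow the checklist above; adjust them to your own account.
    """
    actions = []
    if kw["match_type"] == "broad":
        actions.append("switch to exact match (check the search terms report first)")
    if kw["impression_share"] < 0.70:
        actions.append("increase bid, e.g. by 30% (low impression share)")
    if kw["cost_per_converted_click"] > cpa_target:
        actions.append("reduce bid (cost per converted click above CPA target)")
    if kw["avg_position"] > 3:  # a 'low' position is numerically greater than 3
        actions.append("increase bid (average position worse than 3)")
    if kw["quality_score"] < 6:
        actions.append("improve ad group structure, ad copy and/or landing page")
    return actions

# Invented sample rows mimicking an AdWords keyword report
keyword_report = [
    {"keyword": "running shoes", "cost": 1200.0, "match_type": "broad",
     "impression_share": 0.55, "cost_per_converted_click": 32.0,
     "avg_position": 2.1, "quality_score": 5},
    {"keyword": "buy trail shoes", "cost": 300.0, "match_type": "exact",
     "impression_share": 0.82, "cost_per_converted_click": 18.0,
     "avg_position": 1.4, "quality_score": 8},
]

# Sort by cost, highest first, so effort follows the Pareto principle
for kw in sorted(keyword_report, key=lambda k: k["cost"], reverse=True):
    print(kw["keyword"], review_keyword(kw, cpa_target=25.0))
```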

This is what I pay attention to when optimizing keywords in search advertising. Feel free to share your comments!

5 questions to ask your Facebook marketing agency

Facebook marketing is not magic, although it might seem like it if you have no clue how to do it. Therefore, before anything else, the first piece of advice is: get to know the basics. Jonloomer.com is a good resource for that, as well as Facebook’s free training modules.

Now, to the actual point. A company may run Facebook marketing in-house or via an agency. For small companies, it often makes sense to do it yourself, but larger budgets require deeper know-how and more time to get the best results. For these reasons, outsourcing is often chosen by medium-sized and large companies. When outsourcing, an agency can take care of organic Facebook marketing, paid advertising, or both.

But how to test the quality of your agency?

Well, remember the first piece of advice – learn the basics of Facebook marketing. What you don't know, you cannot manage. Second, you can ask these questions, either before engaging an agency or during your relationship with them.

  1. What goals would you set for our Facebook marketing?
  2. How would you measure the achievement of those goals?
  3. Describe your strategy in achieving the goals.
  4. Describe your optimization process for Facebook marketing.
  5. Based on our Facebook posts, tell me something that I don't know about my business.

The first question reveals how well the agency grasps your business, and how they would fit your business goals to the Facebook environment. The goals don’t have to be exactly what you had thought of — it’s more important that they show innovativeness and general understanding of your business.

The second question reveals the metrics they would choose to measure performance – the more they are aligned with your general business goals, the better. In addition, if they are able to argue efficiently for both ROI- and non-ROI-oriented metrics, it’s a good sign as it shows an understanding of the general complexity of multichannel consumer behavior.

The third question tells you how they would go about creating a Facebook marketing strategy – here you can pay attention to their proposed split between organic and paid, frequency of posting/optimization, target group definition, ad creation process, etc. You can ask follow-up questions, e.g. about the suggested size of the budget. That shows how they approach campaign planning on the fly – the better they know the environment, the better answers they can give.

Fourth, it is important to know how they would run the accounts in practice. For example, how much time are they willing to invest? Facebook marketing is a time-consuming activity, which is a major reason the optimization workflow has to be efficient to achieve the best results. It's easy for an agency to spend money carelessly, because Facebook absorbs all the money you can throw at it – but optimization is a different ballgame.

The fifth question tells you how well they have analyzed your accounts and prior Facebook marketing activities. Not all agencies bother to analyze the status quo of your Facebook marketing before meeting you – or even while they are doing marketing for you – but doing so obviously communicates a genuine interest in closing (or keeping) you as a client, as well as attention to detail. If they are able to tell you something about your customers, for instance, that you didn't know, it's a very good sign.

There. Asking these questions and going through the associated discussion is, in my opinion, an excellent way to vet a Facebook marketing agency.

In addition, by far one of the most neglected aspects of managing digital marketing agencies is auditing. You should regularly have a third party, such as another agency, audit your campaigns. Never be "forever happy" with an agency; always push for more. You want to show commitment so they see value in investing in the relationship, but you also want to keep them a little bit on their toes so they actually bother doing their best for you, as opposed to only chasing new clients.

Controlling ad quality in programmatic buying

Highway to ad quality.

Ad quality is an issue in programmatic buying, where the ad exchange takes place via computer systems. In a traditional ad exchange, there's a human supervising the quality of advertising, but in a programmatic system it's possible to receive spammy, illegal, or otherwise undesirable advertising without publishers (ad sellers) being aware of it. Likewise, the quality of performance, such as clicks, likes or even impressions, might be compromised by fraudulent bot behavior.

In the absence of humans, how can quality be controlled? Well, some ways include:

  • bot detection — this is what Google uses to filter invalid clicks likely caused by bots. It includes, among other things, detecting anomalies in click behavior. Facebook, too, has mechanisms for detecting bots. How well these systems function should be audited from time to time by neutral third parties, due to the inherent moral hazard of ad platforms grading their own homework.
  • performance-adjusted pricing and visibility — again, used by Google and Facebook in Quality Score and Relevance Score, respectively. What works cannot be wrong, essentially: the ads with the best response get the most views for the least money. However, this does not directly solve the problem of removing undesirable ads from the system.
  • reporting — again, both Facebook and Google enable the reporting of ads by end users. This shows up for advertisers as negative feedback – once negative feedback reaches a certain threshold, the ad stops showing (a minimal sketch of this follows the list). It is, in a way, crowdsourcing quality control to the end users.
  • algorithmic analysis of ad content — for example, Facebook is able to detect nudity in pictures and consequently disqualify them. This is among the best methods, albeit technically demanding, because a machine can treat millions of ad content units in batches. With constantly developing machine learning solutions, the accuracy of automatic detection of undesirable content approaches that of human classifiers.
  • finally, we can have a human fail-safe as a "plan B". Again, both Facebook and Google use manual review, both in detecting click fraud and in handling advertisers' complaints over refused ads. However, this solution is expensive and does not scale to millions of ad units, so it can be seen as a backup at best.
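As a tiny illustration of the reporting mechanism mentioned above: once the share of users reporting or hiding an ad passes a threshold, the ad is paused. The 0.5% threshold is an assumption; platforms do not publish their actual values.

```python
NEGATIVE_FEEDBACK_THRESHOLD = 0.005  # assumed: 0.5% of impressions reported/hidden

def should_pause(ad_stats):
    """Stop showing an ad once user reports pass the threshold (crowdsourced QA)."""
    report_rate = ad_stats["reports"] / max(ad_stats["impressions"], 1)
    return report_rate >= NEGATIVE_FEEDBACK_THRESHOLD

print(should_pause({"impressions": 40_000, "reports": 60}))   # False (0.15% report rate)
print(should_pause({"impressions": 40_000, "reports": 250}))  # True  (0.63% report rate)
```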

There – I believe these are the most common ways to control ad quality in modern programmatic advertising platforms. If you have anything to add, please share it in the comments!

EDIT: I came across another quality-control mechanism: private exchanges. They effectively limit the number of participating advertisers, making it manageable for a small number of humans to verify the ads. The catch is that this works for a handful of advertisers, but when there are millions of ad units, humans cannot be the primary solution.