March 30, 2017
Attribution modelling is like digital magic.
Wow, so I’m reading a great piece by Funk and Nabout (2015). They outline the main problems of standard attribution modelling. By “standard”, I refer to the commonly used method of attribution modelling, most commonly known from Google Analytics.
Previously, I’ve addressed this issue in my digital marketing class by saying that the choice of an attribution model is arbitrary, i.e. marketers can freely decide whether it’s better to use e.g. a last-click or a first-click model. But now I realize this is the wrong approach, given that the impact of each touch-point can be estimated. There is much more depth to attribution modelling than the standard model leads you to believe.
So, here are the five problems by Funk and Nabout (2015).
This is the main problem to me. The impact of touch-points on conversion value needs to be weighted, but the weighting is seemingly an arbitrary rather than a statistically valid choice (that is, until we consider advanced methods!). Therefore, there is no objective rank or “betterness” between different attribution models.
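To make the arbitrariness concrete, here is a minimal Python sketch (channel names and the conversion value are made up) showing how the very same conversion path receives entirely different channel credit under last-click, first-click, and linear rules:

```python
# A minimal sketch: the same conversion path credited under three common
# attribution models. Channel names and the conversion value are made up.

def attribute(path, value, model="last_click"):
    """Split a conversion's value across the touch-points in its path."""
    if model == "last_click":
        weights = [0.0] * (len(path) - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (len(path) - 1)
    elif model == "linear":
        weights = [1.0 / len(path)] * len(path)
    else:
        raise ValueError(f"unknown model: {model}")
    credit = {}
    for channel, w in zip(path, weights):
        credit[channel] = credit.get(channel, 0.0) + w * value
    return credit

path = ["display", "social", "search"]  # one conversion worth 90 (made up)
for model in ("last_click", "first_click", "linear"):
    print(model, attribute(path, 90, model))
```

Each model is internally consistent, yet they disagree completely about which channel “caused” the sale, and nothing in the standard approach tells you which answer is right.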
The standard attribution model does not consider the time interval between touch-points – it can range anywhere from 30 minutes to 90 days, restricted only by cookie duration. Why does this matter? Because time generally matters in consumer behavior. For example, if there is a long interval between contacts A_t and A_t+1, it may be that the effect of the first contact was not powerful enough to incite a return visit. Of course, one could also argue there is a reason not to consider time, because any differences arise from discrepancies in the consumers’ natural decision-making process, which results in unknown intervals. Ignoring time would then standardize the intervals. However, if we assume patterns in consumers’ decision-making process, as is usually done by stating that “in our product category, the purchase process is short, usually under 30 days”, then addressing time differences could yield a better forecast: for example, we should expect a second contact to take place at a certain point in time, given our model of consumer behavior.
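As a sketch of what addressing time differences could look like, here is a simple exponential time-decay weighting. The 7-day half-life is my assumption for illustration, not something from the paper:

```python
def time_decay_weights(hours_before_conversion, half_life_hours=168.0):
    """Exponential time-decay credit: a touch-point loses half its weight
    every half_life_hours. The 7-day (168 h) half-life is an assumption."""
    raw = [0.5 ** (h / half_life_hours) for h in hours_before_conversion]
    total = sum(raw)
    return [r / total for r in raw]

# Touch-points 30 days, 2 days, and 1 hour before the conversion:
weights = time_decay_weights([720, 48, 1])
# The most recent contact gets the largest share of the credit.
```

Under this rule, a contact 30 days before the conversion gets only a small fraction of the credit that a same-day contact gets, which matches the intuition that a long-dormant touch-point probably wasn’t very influential.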
The nature of the touch or interaction should be considered when modeling customer journey. The standard attribution model assigns conversion value for different channels based on clicks, but the type of interaction in channels might be mixed. For example, for one conversion you might get a view in Facebook and click in AdWords whereas another conversion might have the reverse. But are views and clicks equally valuable? Most marketers would not say so. However, they would also assign some credit to views – at least according to classic advertising theory, visibility has an impact on advertising performance. Therefore, the attribution model should also consider several interaction types and the impact each type has on conversion propensity.
As Funk and Nabout (2015) note, “the analysis does not compare successful and unsuccessful customer journeys, [but] only looks at the former.” This is essentially a case of survivorship bias – we are unable to compare the touch-points that led to a conversion with those that did not. If we could, we might observe that a certain channel has a higher likelihood of being included in a conversion path than another channel, i.e. its weight should be higher and proportional to its ability to produce lift in the conversion rate. By excluding information on unsuccessful interactions, we risk both Type I and Type II errors – that is, false positives and false negatives.
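A rough sketch of how one could use unsuccessful journeys as well (the data structure is hypothetical, and a real analysis would need statistical controls for confounding):

```python
def channel_lift(paths):
    """paths: list of (channels, converted) pairs covering BOTH successful
    and unsuccessful journeys. For each channel, compare the conversion
    rate of journeys that include it against journeys that do not."""
    channels = {c for chans, _ in paths for c in chans}
    lift = {}
    for c in channels:
        with_c = [conv for chans, conv in paths if c in chans]
        without_c = [conv for chans, conv in paths if c not in chans]
        cvr_with = sum(with_c) / len(with_c) if with_c else 0.0
        cvr_without = sum(without_c) / len(without_c) if without_c else 0.0
        lift[c] = cvr_with - cvr_without
    return lift
```

A positive lift suggests the channel’s presence coincides with higher conversion rates; a lift near zero suggests the channel merely appears in paths without adding anything. Correlation is not causation, but even this naive comparison uses information the standard model throws away.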
The standard attribution model does not consider offline interactions. But research shows multi-channel consumer behavior is highly prevalent. The lack of data on these interactions is the major reason for their exclusion, but at the same time it restricts the usefulness of attribution modelling to the ecommerce context. Most companies, therefore, are not getting accurate information from attribution modelling beyond the online environment. And, as I’ve argued in my class, word-of-mouth is not included in the standard model either, which is a major issue for accuracy, especially considering social media. Even if we only want to measure the performance of an advertising channel, social media ads have a distinct social component – they are shared and commented on, which results in additional interactions that should be considered when modeling the customer journey.
I’m still finishing the original article, but I had to write these few lines because the points I encountered were so pertinent. I’m sure they will propose solutions next, and I may update this article afterwards. At this point, I can only state two solutions that readily come to mind: 1) the use of conversion rate (CVR) as an attribution parameter — it’s a global metric and thus escapes survivorship bias; and 2) Universal Analytics, i.e. using methods such as Google’s Measurement Protocol to capture offline interactions. As someone smart said, the solution to a problem leads to a new problem, and that’s the case here as well — there needs to be a universal identifier (“User ID” in Google’s terms) to associate online and offline interactions. In practice, this requires registration.
The criticism applies to standard attribution modeling, e.g. to how it is done in Google Analytics. There might be additional issues not included in the paper, such as data aggregation — to perform any type of statistical analysis, click-stream data is a must-have. Also relevant: how do touch-points influence one another, and how should that influence be modeled? Beyond technicalities, it is important for managers to understand the general limitations of current methods of attribution modelling and to seek solutions in their own organizations to overcome them.
Funk, B., & Abou Nabout, N. (2016). Cross-Channel Real-Time Response Analysis. In O. Busch (Ed.), Programmatic Advertising: The Successful Transformation to Automated, Data-Driven Marketing in Real-Time (pp. 141–151). Springer-Verlag.
 Conversion path and customer journey are essentially referring to the same thing; perhaps with the distinction that conversion path is typically considered to be digital while customer journey has a multichannel meaning.
March 30, 2017
Answer: It kinda is.
“Premium publishers” and “premium ad space” — these are often heard terms in programmatic advertising. But they are also dangerously fallacious ideas.
I’ll give three reasons why:
First, publishers define what is “premium” a priori (before results), which is not the right sequence (the a priori problem). The value of ad space — or its status, premium or not — should be determined a posteriori, after the fact. Anything else risks biases due to guesswork.
Second, what is “premium” (i.e., works well) for advertiser A might be different for advertiser B, yet the same ad space is always labeled “premium” or not (the uniformity problem). The value of ad space should be determined by its value to the advertiser, which is not uniform across advertisers.
Third, fixing a higher price for “premium” inventory skews the market – rational advertisers won’t pay irrational premiums, and the publisher ends up losing revenue instead of gaining the “premium” price (the equilibrium problem). This is the exact opposite of the outcome the publisher hoped for, and it arises from an imbalance of supply and demand.
I defined premium as ad space that works well with regard to the advertiser’s objectives. Other definitions also exist, e.g. Münstermann and Würtenberg (2015), who argue the distinctive trait between premium and non-premium media is the degree of editorial professionalism, so that amateur websites would be less valuable. In many cases, this is an incorrect classifier from the advertiser’s perspective — e.g., placing an ad on a blogger’s website (influencer marketing) can fairly easily produce higher returns than placing it alongside “professional” content. The degree of professionalism of the content is not a major cue for consumers, and therefore one should define “premium” from the advertiser’s point of view — as a placement that works.
The only reason premium inventory is still alive, I suspect, is the practice of private deals, where advertisers are more interested in volume than performance – these advertisers are informed more by assumptions than by data. Most likely, as buyers’ sophistication increases, they will become more inclined toward market-based pricing, which has a much closer association with performance than private deals do.
March 30, 2017
From its high point, the sheep can see far.
In Finland, and maybe elsewhere in the world as well, media agencies used to reside inside advertising agencies, back in the 1970s–80s. Then the two were separated in the 1990s, so that advertising agencies do creative planning and media agencies buy ad space in the media. Along with this process, heavy international integration took place, and currently both the media and advertising agency markets are dominated by a handful of global players, such as Ogilvy, Dentsu, Havas, WPP, etc.
This article discusses that change and argues for re-convergence of media and advertising agencies. I call this the new paradigm (paradigm = a dominant mindset and way of doing things).
The old paradigm
The current advertising paradigm consists of two features:
1) Advertising = creative + media
2) Creative planning –> media buying –> campaigning
In this paradigm, advertising is seen as a rigid, inflexible, one-off game where you create one advertising concept and run it, regardless of customer response. You are making one sizable bet, and that’s it. To reduce the risk of failure, creative agencies spend enormous amounts of time to “make sure they get it right”. Sometimes they use advertising pre-testing, but the process is predominantly driven by intuition, or black-box creativity.
Overall, that is an old-fashioned paradigm, for which reason I believe we need a new paradigm.
Towards the new paradigm
The new advertising paradigm looks like this:
1) Advertising = creative + media + optimization
2) Creative planning –> media trials –> creative planning –> …
In it, advertising is seen as a fluid, flexible, consecutive game where you have many trials to succeed. The creative process feeds on consumer response, and in turn media buying is adjusted based on the results of each unique creative concept.
So what is the difference?
In the old paradigm, we would spend three months planning and create one “killer concept” which according to our intuition/experience is what people want to see. In the new paradigm, we spend five minutes to create a dozen concepts and let customers (data) tell us what people want to see. Essentially, we relinquish the idea that it is possible to produce a “perfect ad”, in particular without customer feedback, and instead rely on a method that gets us closer to perfection, albeit never reaching it.
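The “dozen concepts, let the data decide” loop can be sketched as a simple epsilon-greedy rule. This is illustrative only; the concept names and the use of CTR as the success metric are my assumptions:

```python
import random

def pick_concept(stats, epsilon=0.1):
    """Epsilon-greedy selection: usually show the best-performing concept
    so far, but with probability epsilon explore a random one.
    stats maps concept name -> (clicks, impressions)."""
    if random.random() < epsilon:
        return random.choice(list(stats))

    def ctr(concept):
        clicks, impressions = stats[concept]
        return clicks / impressions if impressions else 0.0

    return max(stats, key=ctr)
```

Run every impression through `pick_concept`, update the click and impression counts, and the budget drifts toward whatever the audience actually responds to, without anyone having to guess the “perfect ad” up front.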
The new paradigm is manifested in a continuous, iterative cycle. Campaigns never end, but are infinite — as we learn more about customers, budget spend may increase in function of time, but essentially optimization is never done. The campaign has no end, unlike in the old paradigm where people would stop marketing a product even if the demand for that product would not disappear.
You might notice that the new paradigm may not be compatible with old-fashioned “shark fin” marketing, as it views marketing as continuous optimization. In fact, the concept of a campaign is replaced by the concept of optimization.
Let me elaborate on this thought. Look at the picture (source: Jesper Åström) – it illustrates the problem of campaign-based (shark-fin) marketing. You put in money, but as soon as you stop investing, your popularity drops.
Now consider an alternative, where you constantly invest in marketing and not in heavy spikes (campaigns) but gradually by altering your message and targeting (optimization). You get results more like this:
Although seasonality, a natural consequence of the business cycle, does not fade away, the baseline results increase over time.
Instead of being fixed, budget allocations live according to the seasonal business cycles — perhaps anticipating the demand fluctuations. The timing should also consider the carryover effect.
I suspect media agencies and advertising agencies will converge once again, or at least the media-buying and creative-planning functions will reside in the same organization. This is already how many young digital marketing agencies have operated since their founding. Designers and optimizers (ad buyers) work side by side, the latter informing the former about what types of design concepts work – not based on intuition, as old-paradigm Art Directors (ADs) would do, but based on real-time customer response.
Most importantly, tearing down silos will benefit the clients. Doing creative work and optimization in tandem is a natural way of working — the creative concept should no longer be detached from reality, and we should not think of advertising work as a factory line where ads move from one production line to another, but instead as some sort of co-creation through which we are able to mitigate advertising waste and produce better results for advertising clients.
March 30, 2017
I’ve long been skeptical of display advertising. At least my students know this, since every year I start the digital marketing course by giving a lecture on why display sucks (and why inbound / search-engine marketing performs much better).
But this post is not about the many pitfalls of display. Rather, it’s outlining three arguments as to why I nowadays prefer social advertising, epitomized by Facebook Ads, over display advertising. Without further ado, here are the reasons why social rocks at the moment.
It’s commonly known that Facebook advertising is cheap in comparison to many advertising channels, when measured by CPM or cost per individual reached. Display can be even cheaper, so isn’t that better? No, absolutely not. Reach and impressions are completely fallacious metrics — their business value approaches zero. Orders of magnitude more important is the quality of contacts.
The quality of Facebook traffic, when looking at post-click behavior, tends to be better than the quality of display traffic. Even when media companies speak of “premium inventory”, the results are weak. People just don’t like banner ads. The people who click them, if they are people and not bots to begin with, often exit the site instantly without clicking further.
People actually interact with social ads. They pose questions, like them, and even share them with their friends. Share advertisements? OMG, but they really do. That represents a tremendous opportunity for a brand to interact with its customer base and systematically gather feedback and customer insight. This is simply not possible with any other form of advertising, display included.
Display ads, even with rich-media executions, are completely static and dead when it comes to social interaction. Whereas social advertising creates an opportunity to gather social proof and actual word-of-mouth, even viral diffusion, on one and the same advertising platform, display advertising completely lacks the social dimension.
Social advertising, specifically Facebook, gives great flexibility in combining text, images, and video. Typically, a banner ad can only fit a brief slogan (“Just do it.”), whereas a social advertisement can include many sentences of text, a compelling picture, and even a link description that together give the advertiser the ability to communicate the whole story of the company or its offering in one advertisement.
But isn’t that boring? No, you can craft it in a compelling way – the huge advantage is that people don’t even need to click to learn the essentials. If the goal of advertising is to inform about offerings, social advertising is among the most efficient ways to actually do it.
That’s it. I don’t see a way for display advertising to overcome these advantages of social advertising. Notice that I didn’t mention the superior targeting criteria — this is because display is quickly catching up to Facebook in that respect. It just won’t be enough.
March 30, 2017
Technology is not a long-lasting competitive advantage in SEM or other digital marketing – creativity is.
This brief post is inspired by an article I read about different bid management platforms:
“We combine data science to SEM, so you can target based on device, hour of day and NASDAQ development.”
Yeah… but why would you do that? Spend your time thinking of creative concepts that generally work, not only when NASDAQ is down by 10%. Just because something is technically possible doesn’t make it useful. Many technocratic and inexperienced marketing executives still get lured by the “silver bullet” effect of ad technology. And even when you do consider outside events such as NASDAQ movements, newsjacking is a far superior marketing solution to automation.
Commoditization of ad technology
In the end, platforms give all contestants a level playing field. For example, Google’s system considers CTR in determining cost and reach. Many advertisers obsess about their settings, bids, and other technical parameters, and ignore the most important part: the message. Perhaps it is because the message is the hardest part: increasing or decreasing one’s bid is a simple decision given the data, but how do you create a stellar creative? That is a more complex, yet more important, problem.
Seeing people as numbers, not as people
The root cause might be that the world view of some digital marketers is twisted. Consumers are seen as a kind of cattle — aggregate numbers that only need to be fed ad impressions for positive results to magically emerge. This world view is false. People are not stupid – they will not click (or even look at) just any ads, especially in this day and age of ad clutter. The notion that you could be successful just by adopting a “bid management platform” is foolish. Nowadays, every impression that counts needs to be earned. And while a bid management platform may help you get a 1% boost to your ROI, focusing on the message is likely to bring a much higher increase. Because ad performance is about people, not about technology.
The more solid the industry becomes and the more basic technological know-how becomes mastered by advertisers, the less of a role technology plays. At that point of saturation, marketing technology investments begin to decline and companies shift back to basics: competing with creativity.
March 30, 2017
In this post, I’m sharing a simple optimization process for search-engine advertising. I’ll also try to explain its rationale, i.e. why it should work. The process is particularly applicable to Google AdWords due to the availability of metrics, but for the most part it applies to Bing Ads as well.
First, take a list of your keywords along with the metrics discussed below.
Then, sort by cost (high to low). Why? Because you may have thousands of keywords, out of which a handful matter for generating results — the Pareto principle is strong in search advertising. It makes sense to focus your time and effort on optimizing the keywords that make up most of your spend.
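The Pareto sort can be sketched in a few lines; the 80% cutoff is an assumption you can tune to your account:

```python
def top_spend_keywords(keywords, share=0.8):
    """Return the keywords that together account for `share` of total
    spend, highest cost first -- the Pareto head worth optimizing first.
    keywords: list of (keyword, cost) pairs."""
    ordered = sorted(keywords, key=lambda kw: kw[1], reverse=True)
    total = sum(cost for _, cost in ordered)
    head, running = [], 0.0
    for keyword, cost in ordered:
        head.append(keyword)
        running += cost
        if running >= share * total:
            break
    return head
```

In a typical account this returns a short list out of thousands of keywords, which is exactly where your optimization hours should go.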
In terms of metrics, look at relevance, impression share, cost per acquisition (CPA), and average position.
Relevance is the first and foremost. Ask yourself: is this a keyword that people who are interested in my offering would use? Sometimes you may include terms you’re unsure of, or terms added just to achieve a certain volume of clicks. If you are able to achieve that volume with relative ease, you don’t need expansion but reduction of keywords. Start reduction from the keywords with the lowest relevance – judged firstly by the keyword’s results (data trumps opinions) and secondarily by qualitative evaluation of the keywords according to the aforementioned rationale.
A common strategy is to start with broad match, and gradually move towards exact match. Take a look at the search terms report: are you getting a lot of irrelevant searches? If so, it definitely makes sense not only to include negative keywords but also to change the match type. Generally speaking, as the number of optimization cycles increases the number of broad match keywords decreases. In the end, you only have exact terms. However, this assumes you’re able to achieve click volume goals.
Are you getting enough impressions? Impression share indicates your keywords’ competitiveness in ad auctions. If relevance is high and impression share is low, you especially want to take action to improve your competitiveness. The simplest step is to increase the keyword bid. Depending on the baseline, performance, and SEA strategy, you may want to increase it by 30% or even 100% to get a real impact.
Regarding goals, you should know your CPA target. A very basic way to calculate it is to multiply average order value by your average profit margin, i.e. calculate your profit per order. That amount is the maximum you can spend per acquisition and remain profitable or at break-even. (Of course, the real pros consider customer lifetime value at this point, but for simplicity I’m leaving it out here.)
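As a minimal sketch of that break-even calculation (the figures are made up for illustration):

```python
def max_cpa(avg_order_value, margin_rate):
    """Break-even CPA target: average profit per order is the most you
    can pay for one acquisition and still break even. Figures used
    below are illustrative, not from the post."""
    return avg_order_value * margin_rate

# e.g. an 80-euro average order at a 25% margin:
target = max_cpa(80.0, 0.25)  # 20 euros per acquisition
```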
Average position matters because an ad with a high rank gains a natural lift. That is, you can run the same ad in position 3 and position 1 and get better results in position 1 simply because of the position (not because the ad is better). This in turn influences your click-through rate and indirectly boosts your Quality Score, which, in turn, reduces your CPC, all else being equal. Other ways to improve QS are to restructure ad groups, usually by reducing the number of keywords and focusing on semantic similarity between the terms; to write better ad copy that encourages people to click (remember, no ad is perfect!); and to improve the landing-page experience if that is identified as a weak component in your Quality Score evaluation.
This is what I pay attention to when optimizing keywords in search advertising. Feel free to share your comments!
March 30, 2017
Facebook marketing is not magic, although it might seem like it if you have no clue how to do it. Therefore, before anything else, the first piece of advice is: get to know the basics. Jonloomer.com is a good resource for that, as well as Facebook’s free training modules.
Now, to the actual point. A company may run Facebook marketing in-house or via an agency. For small companies, it often makes sense to do it yourself, but larger budgets require deeper know-how and more time to get the best results. For these reasons, many medium-sized and large companies choose to outsource. When outsourcing, an agency can take care of organic Facebook marketing, paid advertising, or both.
Well, remember the first piece of advice – learn the basics of Facebook marketing. If you don’t know something, you cannot manage it. Second, you can ask these questions, before engaging an agency or during your relationship with them.
The first question reveals how well the agency grasps your business, and how they would fit your business goals to the Facebook environment. The goals don’t have to be exactly what you had thought of — it’s more important that they show innovativeness and general understanding of your business.
The second question reveals the metrics they would choose to measure performance – the more aligned these are with your general business goals, the better. In addition, if they can argue convincingly for both ROI- and non-ROI-oriented metrics, it’s a good sign, as it shows an understanding of the general complexity of multichannel consumer behavior.
The third question tells you how they would go about creating a Facebook marketing strategy — here you can pay attention to their proposed split between organic and paid, frequency of posting/optimization, target-group definition, ad creation process, etc. You can ask follow-up questions, e.g. about the suggested budget size. That shows how they approach campaign planning on the fly – the better they know the environment, the better answers they can give.
Fourth, it is important to know how they would run the accounts in practice. For example, how much time are they willing to invest? Facebook marketing is a time-consuming activity, which is a major reason the optimization workflow has to be efficient to achieve the best results. It’s easy for an agency to spend money carelessly, because Facebook will absorb all the money you can throw at it — but optimization is a different ballgame.
The fifth question tells you how well they have analyzed your accounts and prior Facebook marketing activities. Not all agencies bother to analyze the status quo of your Facebook marketing before meeting you — or even while they are doing marketing for you — but doing so obviously communicates a genuine interest in closing/keeping you as a client, as well as attention to detail. If they are able to tell you something about your customers that you didn’t know, for instance, it’s a very good sign.
There. Asking these questions and going through the associated discussion is, in my opinion, an excellent way to vet a Facebook marketing agency.
In addition, by far one of the most neglected aspects of managing digital marketing agencies is auditing. You should regularly have a third party, such as another agency, audit your campaigns. Never be “forever happy” with an agency; instead, always push for more. You want to show commitment so they see value in investing in the relationship, but you also want to keep them a little bit on their toes so they actually bother doing their best for you, as opposed to only chasing new clients.
March 30, 2017
Highway to ad quality.
Ad quality is an issue in programmatic buying where ad exchange takes place via computer systems. In traditional ad exchange, there’s a human supervising the quality of advertising, but in a programmatic system it’s possible to receive spammy, illegal, or otherwise undesirable advertising without publishers (ad sellers) being aware of it. Likewise, the quality of performance such as clicks, likes or even impressions might be compromised by fraudulent bot behavior.
In the absence of humans, how do we control for quality? Well, some ways include:
There – I believe these are the most common ways to control ad quality in modern programmatic advertising platforms. If you have anything to add, please share it in the comments!
EDIT: I came across another quality-control mechanism: private exchanges. They effectively limit the number of participating advertisers, making it manageable for a small number of humans to verify the ads. The catch is that this works for a handful of ads, but when there are millions of ad units, humans cannot be the primary solution.