Archive for the search engine optimisation tag

Joni

Advertisers actively following “Opportunities” in Google AdWords risk bid wars

english

PPC bidding requires strategic thinking.

Introduction. Wow. I was doing some SEM optimization in Google AdWords when a thought struck me. It is this: advertisers actively following “Opportunities” in AdWords risk bid wars. Why is that? I’ll explain.

Opportunities or not? The “Opportunities” feature proposes bid increases for given keywords. For example, in Week 1, Advertiser A has current bid b_a and is proposed a marginal cost m_a, so the new bid is e_a = b_a + m_a. During the same Week 1, Advertiser B, in response to Advertiser A’s acceptance of the bid increase, is recommended to maintain his current impression share by increasing his bid b_b to e_b = b_b + m_b. To restore the impression share balance, Advertiser A is then, in the following optimization period (say the optimization cycle is a week, so next week), proposed yet another marginal increase, et cetera.

If we turn m into a multiplier, then the bid after c optimization cycles is e_a = b_a * m_a^c. Let’s say AdWords recommends a 15% bid increase at each cycle (i.e., m_a = 1.15, so $0.20 -> $0.23 in the 1st cycle); then after five cycles the keyword bid has roughly doubled compared to the baseline, since 1.15^5 ≈ 2.01 (illustrated in the picture).
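The compounding can be sketched in a few lines of Python; the 15% per-cycle increase is the assumed recommendation from the example above, not a figure from AdWords itself:

```python
# Bid after c optimization cycles: e = b * m^c, where m is the
# per-cycle multiplier (e.g., 1.15 for an assumed 15% recommended increase).

def bid_after_cycles(baseline: float, multiplier: float, cycles: int) -> float:
    return baseline * multiplier ** cycles

baseline = 0.20   # starting bid, $
m = 1.15          # assumed 15% increase per cycle

for c in range(6):
    print(c, round(bid_after_cycles(baseline, m, c), 3))
# cycle 5 -> 0.20 * 1.15^5 ≈ 0.402, i.e., roughly double the baseline
```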

Figure 1. Compounding bid increases

Alluring simplicity. Bidding wars were always a possible scenario in PPC advertising – however, the real issue here is simplicity. The improved “Opportunities” feature gives much better recommendations to advertisers than the earlier version, which increases its usage and more easily leads to “lightly made” acceptance of bid increases that Google frames as likely to maintain a bidder’s current competitive positioning. From auction psychology we know that bidders have a tendency to overbid when put under competitive pressure, and that’s exactly where Google is putting them.

It’s rational, too. I think more aggressive bidding can easily take place with increasing usage of “Opportunities”. Basically, the baseline shifts at the end of each optimization cycle. The mutual increase of bids (i.e., a bid war) is not only a potential outcome of light-headed bidding; in fact, increasing bids is rational as long as the keywords remain profitable. But in either case, economic rents (= excessive profits) will be competed away.

Conclusion. Most likely Google advertising will continue converging toward a perfect market, where it is harder and harder for individual advertisers to extract rents, especially in long-term competition. “Opportunities” is one way of making auctions more transparent and encouraging more aggressive bidding behavior. It would be interesting to examine whether careless bidding is associated with the use of “Opportunities” (i.e., the psychological aspect), and also whether Google shows more recommendations to increase than to decrease bids (i.e., opportunistic recommendations).

Joni

Facebook Ads: too high performance might turn on you (theoretically)

english

Introduction

Now, earlier I wrote a post arguing that Facebook has an incentive to lower the CPC of well-targeting advertisers because better targeting improves user experience (in two-sided market terms, relevance through more precise targeting reduces the negative indirect network effects perceived by ad targets). You can read that post here.

However, consider the point from another perspective: the well-targeting advertiser is making rents (excessive profits) from their advertising – rents which Facebook wants and, as the platform owner, is able to capture.

In this scenario, Facebook has an incentive to actually increase the CPC of a well-targeting advertiser until the advertiser’s marginal profit is aligned with marginal cost. In such a case, it would still make sense for the advertiser to continue investing (so the user experience remains satisfactory), but Facebook’s profit would be increased by the magnitude of the advertiser’s rent.

Problem of private information

This would require that Facebook be aware of the profit functions of its advertisers, which for now are likely private information held by the advertisers. But if Facebook had this information, it could factor it into the click-price calculation. Now, obviously that would violate the “objective” nature of Facebook’s VCG ad auction — it’s currently set to consider maximum CPC and ad performance (negative feedback and CTR, but not profit, as far as I know). However, advertisers would not be able to monitor the use of their profit function because the precise ad auctions are carried out in a black box (i.e., asymmetric information). Thus, the scenario represents a type of moral hazard for Facebook – a potential risk the advertisers may not be aware of.

Origin of the idea

I actually got this idea from one of my students, who said: “oh, I don’t think micro-targeting is useful”. I asked why, and he said: “because Facebook is probably charging too much for it”. I told him that’s not the case, but also that it could be, and that the idea is interesting. Here I have just elaborated it a bit further.

Also read this article about micro-targeting.

Micro-targeting is super interesting for B2B and personal branding (e.g., job seeking).

Another related point that might interest you, Jim (in case you’re reading this :), is the platform owner distributing profitable keywords between advertisers in search advertising. For example, Google could control impression share so that each advertiser receives a satisfactory (given their profit function) portion of traffic WHILE optimizing its own return.

Conclusion

This idea is not well-developed, though; it rests on the notion that there is heterogeneity in advertisers’ willingness to pay (arising, e.g., from differences in margins, average order values, operational efficiency, and such) that would benefit the platform owner. I suspect the second-price auction may already capture this, as long as advertisers bid truthfully, in which case there’s no need for such “manipulation” by the platform owner, as the prices are always set to the maximum anyway. So, just a random idea at this point.

Joni

Facebook ad testing: are more ads better?

english

Yellow ad, red ad… Does it matter in the end?

Introduction

I used to think differently about creating ad variations, but having tested both methods I’ve changed my mind. Read the explanation below.

There are two alternative approaches to ad testing:

  1. “Qwaya” method* — you create some base elements (headlines, copy texts, pictures), out of which a tool will create up to hundreds of ad variations
  2. “Careful advertiser” method — you create hand-crafted creatives, maybe three (version A, B, C) which you test against one another.

In both cases, you are able to calculate performance differences between ad versions and choose the winning design. The rationale in the first method is that it “covers more ground”, i.e. comes up with such variations that we wouldn’t have tried otherwise (due to lack of time or other reasons).

Failure of large search space

I used to advocate the first method, but it has three major downsides:

  1. it requires a lot more data to reach statistical significance
  2. false positives may emerge in the process, and
  3. lack of internal coherence is likely to arise, due to inconsistency among creative elements (e.g., mismatch between copy text and image which may result in awkward messages).

Clearly though, the human must generate enough variation in his ad versions if he seeks a globally optimal solution. This can be done by a) making drastically different (e.g., humor vs. informativeness) as opposed to incrementally different ad versions, and b) covering extremes on different creative dimensions (e.g., humor: subtle vs. radical; informativeness: all benefits vs. main benefit).
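To make the data requirement concrete, here is a back-of-the-envelope sketch in Python; the element counts and the per-ad impression minimum are made-up numbers for illustration, not tool defaults:

```python
from itertools import product

# Hypothetical base elements fed into a "Qwaya"-style variation generator.
headlines = ["H1", "H2", "H3", "H4", "H5"]
copies    = ["C1", "C2", "C3", "C4"]
images    = ["I1", "I2", "I3", "I4", "I5"]

# The tool multiplies the base elements into ad variations.
variations = list(product(headlines, copies, images))
print(len(variations))                        # 5 * 4 * 5 = 100 ads

# Assume (hypothetically) ~2,000 impressions per ad for a stable CTR estimate.
impressions_per_ad = 2000
print(len(variations) * impressions_per_ad)   # 200000 impressions for the tool
print(3 * impressions_per_ad)                 # 6000 for three hand-crafted ads
```

The search space grows multiplicatively while the data budget grows only linearly, which is why the first method demands so much more traffic.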

Conclusion

Overall, this argument is an example of how marketing automation may not always be the best way to go! And as a corollary, the creative work done by humans is hard for machines to replace when seeking optimal creative solutions.

*Named after the Swedish Facebook advertising tool Qwaya which uses this feature as one of their selling points.

Joni

Facebook’s Incentive to Reward Precise Targeting

english

Facebook has an incentive to lower the advertising cost for more precise targeting by advertisers.

What, why?

Because by definition, the more precise the targeting is, the more relevant it is for end users. Knowing the standard nature of ads (as in: a negative indirect network effect vis-à-vis users), the more relevant they are, the less dissatisfied the users. What’s more, user satisfaction is also tied to the performance of the ads (a positive indirect network effect: the more satisfied the users, the better the ad performance), which should thus be better with more precise targeting.

Now, the relevance of ads can be improved by automatic means such as epsilon-greedy algorithms, and this is traditionally seen as Facebook’s advantage (right, Kalle?), but the real question is: is that more efficient than “marketer’s intuition”?

I’d in fact argue that — contrary to my usual skepticism about marketer’s intuition and its fallibility — it is helpful here, and its use at least enables narrowing down the optimal audience faster.

…okay, why is that then?

Because it’s not only about the audience, but about the match between the message and the audience — even if the message stayed the same and only the audience varied, narrowing is still useful because the search space for Facebook’s algorithm is smaller, pre-qualified by humans in a sense.

But there’s an even more important property – by narrowing down the audience, the marketer is able to re-adjust their message to that particular audience, thereby increasing relevance (the “match” between the preferences of the audience members and the message shown to them). This is hugely important because of the inherent combinatorial nature of advertising — you cannot separate the targeting and the message when measuring performance; it’s always performance = targeting * message.

Therefore, Facebook does have an incentive to encourage advertisers toward more precise targeting and to reward it by providing a lower CPC. I’m not sure they are doing this, though, because it requires assigning a weighted bid to advertisers with more precise targeting — consider advertiser A, who is mass-advertising to everyone in some large set X, vs. advertiser B, who is competing for a part of the same audience, i.e., a subset x – they are both in the same auction, but the latter should be compensated for his more precise targeting.

Concluding remarks

Perhaps this is factored in through the Relevance Score and/or a performance adjustment in the actual rank and CPC. That would yield the same outcome, given that the above-mentioned dynamics hold, i.e., there’s a correlation between more precise targeting and ad performance.
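One way such "factoring in" could work is a relevance-weighted rank: rank = bid × relevance, with the winner paying just enough to out-rank the runner-up. The sketch below is a simplified toy model with made-up relevance scores, not Facebook's actual auction:

```python
def run_auction(bidders):
    """bidders: list of (name, max_cpc, relevance). Returns (winner, cpc)."""
    # Rank by bid * relevance; the winner pays the minimum CPC needed
    # to keep out-ranking the runner-up (a GSP-style simplification).
    ranked = sorted(bidders, key=lambda b: b[1] * b[2], reverse=True)
    (name, bid, rel), (_, bid2, rel2) = ranked[0], ranked[1]
    cpc = min(bid, bid2 * rel2 / rel)
    return name, round(cpc, 2)

# B targets a narrow sub-audience and (by assumption) earns a higher relevance score.
print(run_auction([("A_mass", 1.00, 5), ("B_precise", 1.00, 8)]))
```

With equal max bids, the higher-relevance bidder wins and pays below his maximum, which is exactly the "lower CPC as a reward for precision" outcome described above.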

Joni

A Little Guide to AdWords Optimization

english

Hello, my young padawan!

This time I will write a fairly concise post about optimizing Google AdWords campaigns.

As usual, my students gave me the inspiration for this post. They’re currently participating in the Google Online Marketing Challenge, and — from the mouths of children you hear the truth 🙂 — they asked a very simple question: “What do we do when the campaigns are running?”

At first, I was tempted to say that you’ll do optimization under my supervision, e.g., change the ad texts, pause ads, and change keyword bids, etc. But then I decided to write them a brief introduction.

So, here it goes:

1. Structure – have the campaigns been named logically (i.e., to mirror the website and its goals)? Are the ad groups tight enough (i.e., do they include only semantically similar terms that can be targeted by writing very specific ads)?

2. Settings – all features enabled, only search network, no search partners (that applies to Google Search campaigns; in the display network different rules apply, but never ever mix the two under one campaign), language targeting Finnish, English, and Swedish (the languages that Finns use in Google)

3. Modifiers – are you using location or mobile bid modifiers? Should you be? (If unsure, find out quickly!)

4. Do you have need for display campaigns? If so, use display builder to build nice-looking ads; your targeting options are contextual targeting (keywords), managed placements (use Display Planner to find suitable sites), audience lists (remarketing), and affinity and topic categories (the former targets people with a given interest, the latter websites categorized under a given interest, e.g. traveling) (you can use many of these in one campaign)

5. Do you have enough keywords to reach the target daily spend? (Good to have more than 100, even thousands of keywords in the beginning.)

6. What match types are you using? You can start from broad, but gradually move towards exact match because it gives you the greatest control over which auctions you participate in.

7. What are your options to expand the keyword base? Look for opportunities by pulling a search term report from all keywords after you’ve run the campaign for a week or so; this way you can also identify more negative keywords.

8. What negative keywords are you using? Very important to exclude yourself from auctions which are irrelevant for your business.

9. Pausing keywords — don’t ever delete anything, because then you’ll lose the analytical trace; but frequently pause keywords that are a) the most expensive and/or b) have the lowest CTR/Quality Score

10. Have you set bids at the keyword level? You should – it’s okay to start by setting the bid at ad group level, and then move gradually to keyword level as you begin to accumulate real data from the keyword market.

11. Ad positions – see if you’re competitive by looking at the auction insights report; if you have low average positions (below 3), consider either pausing the keyword or increasing your bid (and the ad’s relevance — very important)

12. Are you running good ads? Remember, it’s all about text. You need to write good copy which is relevant to searchers. No marketing bullshit, please. Consider your copy as an answer to the searcher’s request; it’s a service, not a sales pitch. This topic deserves its own post (and you’ll find plenty by googling), but for now, know that the best way (in my opinion) is to have 2 ads per ad group constantly competing against one another. Then pause the losing ad and write a new contender — remember also that an ad can never be perfect: if your CTR is 10%, that’s really good, but with a better ad you could have 11%.

13. Landing page relevance – you can see landing page experience by hovering over keywords; if the landing page experience is poor, think about whether you can instruct your client to make changes, or whether you can change the landing page to a better one. Landing page relevance comes from the searcher’s perspective: when writing the search query, he needs to be shown ads that are relevant to that query and then be directed to the webpage that is the closest match to that query. Simple in theory; in practice it’s your job to make sure there’s no mismatch here.

14. Quality Score – this is the godlike metric of AdWords. Anything below 4 is bad, so pause the keyword, or if it’s relevant for your business, do your best to improve its score. The closer you get to 10, the better (with no data, the default is 6).

15. Ad extensions – every possible ad extension should be in use, because they tend to gather a good CTR and also positively influence your Quality Score. So, this includes sitelinks, call extensions, reviews, etc.
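On point 12, deciding when the losing ad has actually lost calls for a quick significance check. Here is a minimal stdlib-only sketch of a two-proportion z-test on the two ads' CTRs; the click and impression counts are made up for illustration:

```python
from math import sqrt, erf

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Return (z, two-sided p-value) for the difference in two ads' CTRs."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    # Pooled proportion under the null hypothesis of equal CTRs.
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: ad A 110 clicks / 1000 impressions, ad B 80 / 1000.
z, p = ctr_z_test(clicks_a=110, imps_a=1000, clicks_b=80, imps_b=1000)
print(round(z, 2), round(p, 4))
```

If p is below your chosen threshold (commonly 0.05), pause the loser and write a new contender; otherwise, keep collecting data.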

And, finally, important metrics. You should always customize your column views at the campaign, ad group, and keyword levels. The list below gives an example of what I think are generally useful metrics to show — these may vary somewhat based on your case. (They can be the same for all levels, except the keyword level should also include Quality Score.)

  • CTR (as high as possible, at least 5%)
  • CPC (as low as possible, in Finland 0.20€ sounds decent in most industries)
  • impression share (as high as possible WHEN business-relevant keywords, in long-tail campaigns it can be low with a good reason of getting cheap traffic; generally speaking, this indicates scaling potential; I’ve written a separate post about this, you can find it by looking at my posts)
  • Quality Score (as high as possible, scale 1-10)
  • Cost (useful to sort by cost to focus on the most expensive keywords and campaigns)
  • Avg. position (TOP3 is a good goal!)
  • Bounce rate (as low as possible, it tends to be around 40% on an average website) (this only shows if GA is connected –> connect if possible)
  • Conversion rate (as high as possible, tends to be 1-2% in ecommerce sites, more when conversion is not purchase)
  • Number of conversions (shows absolute performance difference between campaigns)

That’s it! Hope you enjoyed this post, and please leave comments if you have anything to add.

Joni

Example of Google’s Moral Hazard: Pooling in Ad Auctions

english

Google has an incentive to group advertisers in ad auctions even when this conflicts with the goals of an individual advertiser.

For example, even if you’d like to bid only on ‘term x‘ and would not like to be included in auctions for ‘term x+n‘ due to, e.g., lower relevance, your ad might still participate in those auctions.

This relates to two features:

  1. use of synonyms — by increasing the use of synonyms, Google is able to pool more advertisers in the same ad auction
  2. broad match — by increasing the use of broad match, Google is able to pool more advertisers in the same ad auction

Simply put, the more bidders competing in the same ad auction, the higher the click price, and therefore Google’s profit. It should be remarked that pooling not only increases the CPC of existing ad auctions by increasing competition, but also creates new auctions altogether (because there needs to be a minimum number of bidders for ads to be shown on the SERP).
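The pooling argument can be illustrated with a toy second-price auction — a simplification of the real GSP auction, which also weighs Quality Score; all bids here are hypothetical:

```python
def second_price(bids):
    """Winner pays the runner-up's bid (simplified; real auctions also weigh quality)."""
    ordered = sorted(bids, reverse=True)
    return ordered[1] if len(ordered) > 1 else ordered[0]

narrow_auction = [1.20, 0.50]          # bidders on 'term x' only
pooled_auction = [1.20, 0.50, 0.90]    # broad match / synonyms pull in a third bidder

print(second_price(narrow_auction))    # 0.5
print(second_price(pooled_auction))    # 0.9
```

Pooling a third bidder into the auction raises the clearing price from 0.50 to 0.90 even though the winner's own bid never changed — the extra competition alone lifts Google's revenue.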

A practical example of this moral hazard is Google’s removal of ‘do not include synonyms or close variants‘ in the AdWords campaign settings, which took place a couple of years ago.

There are two ways advertisers can counter this effect:

  1. First, by efficient use of negative keywords.
  2. Second, by resorting to multi-word exact matches as much as possible.

In conclusion, I always tell my students that Google is a strategic agent that wants to optimize its own gain — as long as its goals and the advertiser’s goals are aligned, everything is fine, but there are special cases in which the goals deviate, and advertisers need to recognize them and take action.

Joni

Dynamic Pricing and Incomplete People Information

english

One of the main problems in analytics is the lack of people information (e.g., demographics, interests). It is controlled by superplatforms like Google and Facebook, but as soon as the user transitions from the channel to the website, you lose this information.

So, I was thinking about this in the context of dynamic pricing. There’s no problem in determining an average solution, i.e., a price point that maximizes conversion on average. But that’s pretty useless because, as you know, averages are bad for optimization – too much efficiency is wasted. Consider dynamic pricing: willingness to pay (WTP) is what matters for setting the price, but it’s impossible to know the WTP function of individual visitors. That’s why aggregate measures *are* needed, but we can go beyond a general aggregate (the average) to segmentation, and then use segment information as a predictor of conversion at different price points. (By the way, determining the testing interval for price points is also an interesting issue, i.e., how big or small the increments should be — but that’s not the topic here.)

Going back to the people problem — you could tackle this with URL tagging: 1) include the targeting info in your landing URL, and you’re able to do personalization such as dynamic pricing or tailored content by retrieving the targeting information from the URL and rendering the page accordingly. A smart system would not only do this, but also 2) record the interactions of different targeting groups (e.g., men and women) and use this information to optimize for a goal (e.g., determining the optimal price point per user group).

These are some necessary features for a dynamic pricing system. Of course then there’s the aforementioned interval problem; segmentation means you’re playing with less data per group, so you have less “trials” for effective tests. So, intuitively you can have this rule: the less the website has traffic, the larger the increments (+/-) should be for finding the optimal price point. However, if the increments become too large you’re likely to miss the optimal (it gets lost somewhere in between the intervals). I think here are some eloquent algorithmic solutions to that in the multi-armed bandits.