# Tag: optimising

### 1. Introduction

In rule-based bidding, you sometimes want step-backs: first you adjust your bid based on a given condition, and then adjust it back after the condition has passed.

An example use case would be to decrease bids for the weekend and then increase them back to the normal level for weekdays.

However, defining the step-back rate is not done the way most people would think. I'll tell you how.

### 2. Step-back bidding

For step-back bidding you need two rules: one to change the bid (increase/decrease) and another one to do the opposite (decrease/increase). The values applied by these rules must cancel one another.

So, if your first rule raises the bid from \$1 to \$2, you want the second rule to drop it back to \$1.

Call these

x = raise by percentage

y = lower by percentage

Where most people get confused is in assuming x = y, i.e. using the same value for both rules.

Example 1:

x = raise by 15%

y = lower by 15%

That should get us back to our original bid, right? Wrong.

If you do the math (1 × 1.15 × 0.85), you get 0.9775, whereas you want 1 (to get back to the baseline).

The more you iterate with the wrong step-back value, the farther from the baseline you end up. To illustrate, see the following simulation, where the loop is applied weekly for three months (12 weeks × 2 rules = 24 data points).

Figure 1 Bidding loop

As you can see, the wrong method takes you farther and farther from the correct pattern as time goes by. For a weekly rule the difference might be manageable, especially if the rule's incremental change is small, but imagine if you are running the rule daily or each time you bid (intra-day).

### 3. Solution

So, how to get to 1?

It’s very simple, really. Consider

• B = baseline value (your original bid)
• x = the value of the first rule (e.g., raise bid by 15% –> 0.15)
• y = the value of the second rule (derived from the first rule)

You want to solve y from

B(1 + x)(1 − y) = B

Dividing both sides by B(1 + x) and rearranging,

y = x / (1 + x)

For the value in Example 1,

y = 0.15 / (1 + 0.15) ≈ 0.1304

so the second rule should lower the bid by about 13.04%, not 15%. Checking against the increased value:

1.15 × (1 − 0.1304) ≈ 1
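To sanity-check the arithmetic, here is a minimal simulation sketch (the 12-week loop and the \$1 baseline are illustrative):

```python
def step_back_percentage(x):
    """Return the 'lower by' percentage that exactly cancels a raise of x."""
    return x / (1 + x)

baseline = 1.0   # original bid in dollars
x = 0.15         # first rule: raise by 15%

# Wrong: lowering by the same 15% drifts away from the baseline.
wrong = baseline
for _ in range(12):                      # 12 weekly cycles
    wrong *= (1 + x) * (1 - x)

# Right: lowering by x/(1+x), about 13.04%, returns exactly to the baseline.
right = baseline
for _ in range(12):
    right *= (1 + x) * (1 - step_back_percentage(x))

print(round(wrong, 4))   # drifts well below 1.0
print(round(right, 4))   # stays at 1.0
```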

### 4. Conclusion

Remember to consider elementary mathematics when applying AdWords bidding rules!

Introduction. Hm… I've figured out how to execute a successful political marketing campaign on social media [1], but one link is still missing. Namely, applying affinity analysis (cf. market basket analysis).

Discounting conversions. Now, you are supposed to measure “conversions” by some proxy – e.g., time spent on site, number of pages visited, email subscription. Determining which measurable action is the best proxy for the likelihood of voting is a crucial sub-problem, which you can approach with several tactics. For example, you can use the action closest to the final conversion (the vote), i.e. a micro-conversion. This requires that you have an understanding of the sequence of actions leading to the final conversion. You could also use a relative cut-off point, e.g. the nth percentile with the highest degree of engagement is considered converted.

Anyhow, this is very important because once you have secured a vote, you don’t want to waste your marketing budget by showing ads to people who already have decided to vote for your candidate. Otherwise, you risk “preaching to the choir”. Instead, you want to convert as many uncertain voters to voters as possible, by using different persuasion tactics.

Affinity analysis. Affinity analysis can be used to accomplish this. In ecommerce, you would use it as the basis of a recommendation engine for cross-selling or up-selling (“customers who bought this item also bought…” à la Amazon). First you determine which sets of products are most popular, and then show those combinations to buyers interested in any item belonging to that set.

In political marketing, affinity analysis means that because a voter is interested in topic A, he's also interested in topic B. Therefore, we will show him information on topic B, given our extant knowledge of his interests, in order to increase the likelihood of conversion. This is a form of associative learning.

Operationalization. But operationalizing this is where I’m still in doubt. One solution could be building an association matrix based on website behavior, and then form corresponding retargeting audiences (e.g., website custom audiences on Facebook). The following picture illustrates the idea.

Figure 1 Example of affinity analysis (1=Visited page, 0=Did not visit page)

For example, we can see that themes C&D and A&F commonly occur together, i.e. people visit those sub-pages of the campaign site. You can validate this by calculating correlations between all pairs. When your data is in binary format (0/1), you can use the Pearson correlation for the calculations (for binary data it is equivalent to the phi coefficient).
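As a sketch, the pairwise correlations can be computed like this; the visit matrix below is invented for illustration:

```python
import numpy as np

# Hypothetical visit matrix: rows = visitors, columns = theme sub-pages A..F.
# 1 = visited the page, 0 = did not visit.
visits = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 0, 1],
])
themes = ["A", "B", "C", "D", "E", "F"]

# Pearson correlation between all pairs of columns
# (for 0/1 data this equals the phi coefficient).
corr = np.corrcoef(visits, rowvar=False)

# Rank theme pairs by association strength.
n = len(themes)
pairs = sorted(
    ((themes[i], themes[j], corr[i, j])
     for i in range(n) for j in range(i + 1, n)),
    key=lambda p: p[2], reverse=True)
print(pairs[0])   # the strongest pair, e.g. A & F in this toy data
```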

Facebook targeting. Knowing this information, we can build target audiences on Facebook, e.g. “Visited /Theme_A; NOT /Theme_F; NOT /confirmation”, where confirmation indicates conversion. Then, we would show ads on Theme F to that particular audience. In practice, we could facilitate the process by first identifying the most popular themes, and then finding the associated themes. Once the user has been exposed to a given theme, and did not convert, he needs to be exposed to another theme (with the highest association score). The process is continued until themes run out, or the user converts, which ever comes first. Applying the earlier logic of determining proxy for conversion, visiting all theme sub-pages can also be used as a measure for conversion.

Finally, it is possible to use more advanced methods of associative learning. That is, we could determine that {Theme A, Theme F} => {Theme C}, so that themes A and F predict interest in theme C. However, it is more appropriate to predict conversion rather than interest in other themes, because ultimately we're interested in persuading more voters.

Footnotes

[1] Posts in Finnish:

PPC bidding requires strategic thinking.

Introduction. Wow. I was doing some SEM optimization in Google AdWords when a thought struck me. It is this: advertisers actively following “Opportunities” in AdWords risk bid wars. Why is that? I'll explain.

Opportunities or not? The “Opportunities” feature proposes bid increases for given keywords. For example, in Week 1, Advertiser A has current bid b_a and is proposed a marginal increase m_a, so the new bid is e_a = b_a + m_a. During the same week, Advertiser B, in response to Advertiser A's acceptance of the bid increase, is recommended to maintain his current impression share by increasing his bid b_b to e_b = b_b + m_b. To maintain the impression-share balance, Advertiser A is then proposed yet another marginal increase in the following optimization period (say the optimization cycle is a week, so next week), et cetera.

If we turn m into a multiplier, the bid after c optimization cycles becomes e_a = b_a × m^c. Let's say AdWords recommends a 15% bid increase at each cycle (e.g., \$0.20 -> \$0.23 in the 1st cycle); then after five cycles the keyword bid has roughly doubled compared to the baseline (1.15^5 ≈ 2.01; illustrated in the picture).

Figure 1   Compounding bid increases
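The compounding above can be sketched in a few lines (the \$0.20 baseline and 15% increase follow the example):

```python
baseline_bid = 0.20   # dollars
m = 1.15              # multiplier: a 15% increase accepted at each cycle

# Bid after c optimization cycles: b_a * m^c
bids = [baseline_bid * m ** c for c in range(6)]
print([round(b, 3) for b in bids])
# After five cycles the bid has roughly doubled (1.15^5 is about 2.01).
```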

Alluring simplicity. Bidding wars have always been a possible scenario in PPC advertising – however, the real issue here is simplicity. The improved “Opportunities” feature gives much better recommendations than the earlier version, which increases its usage and more easily leads to “lightly made” acceptance of the bid increases Google shows to maintain a bidder's current competitive position. From auction psychology we know that bidders tend to overbid when put under competitive pressure, and that's exactly where Google is putting them.

It's rational, too. I think more aggressive bidding can easily take place as the usage of “Opportunities” increases. Basically, the baselines shift at the end of each optimization cycle. The mutual increase of bids (i.e., a bid war) is not only a potential outcome of light-headed bidding; in fact, increasing bids is rational as long as the keywords remain profitable. In either case, economic rents (= excessive profits) will be competed away.

Conclusion. Most likely Google advertising will continue converging toward a perfect market, where it is harder and harder for individual advertisers to extract rents, especially in long-term competition. “Opportunities” is one way of making auctions more transparent and encouraging more aggressive bidding behavior. It would be interesting to examine whether careless bidding is associated with the use of “Opportunities” (i.e., the psychological aspect), and also whether Google shows more recommendations to increase than to decrease bids (i.e., opportunistic recommendations).

## Introduction

I used to think differently about creating ad variations, but having tested both methods I’ve changed my mind. Read the explanation below.

There are two alternative approaches to ad testing:

1. “Qwaya” method* — you create some base elements (headlines, copy texts, pictures), out of which a tool will create up to hundreds of ad variations
2. “Careful advertiser” method — you create hand-crafted creatives, maybe three (version A, B, C) which you test against one another.

In both cases, you are able to calculate performance differences between ad versions and choose the winning design. The rationale in the first method is that it “covers more ground”, i.e. comes up with such variations that we wouldn’t have tried otherwise (due to lack of time or other reasons).

## Failure of large search space

I used to advocate the first method, but it has three major downsides:

1. it requires a lot more data to reach statistical significance
2. false positives may emerge in the process, and
3. lack of internal coherence is likely to arise, due to inconsistency among creative elements (e.g., mismatch between copy text and image which may result in awkward messages).

Clearly though, the human must generate enough variation in his ad versions if he seeks a globally optimal solution. This can be done by a) making drastically different (e.g., humor vs. informativeness) as opposed to incrementally different ad versions, and b) covering extremes on different creative dimensions (e.g., humor: subtle/radical; informativeness: all benefits/main benefit).

## Conclusion

Overall, this argument is an example of how marketing automation may not always be the best way to go! And as a corollary, the creative work done by humans is hard to replace by machines when seeking optimal creative solutions.

*Named after the Swedish Facebook advertising tool Qwaya which uses this feature as one of their selling points.

## What, why?

Because by definition, the more precise the targeting is, the more relevant it is for end users. Given the standard nature of ads (as in: a negative indirect network effect vis-à-vis users), the more relevant they are, the less dissatisfied the users. What's more, user satisfaction is also tied to the performance of the ads (a positive indirect network effect: the more satisfied the users, the better the ad performance), which should thus be better with more precise targeting.

Now, the relevance of ads can be improved by automatic means such as epsilon-greedy algorithms, and this is traditionally seen as Facebook's advantage (right, Kalle?), but the real question is: is that more efficient than “marketer's intuition”?
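For reference, a minimal epsilon-greedy sketch; the audiences and CTR values are invented, and this is of course far simpler than whatever Facebook actually runs:

```python
import random

# Minimal epsilon-greedy sketch: allocate impressions among candidate
# audiences ("arms") based on observed CTR. Names and CTRs are invented.
def epsilon_greedy(true_ctr, epsilon=0.1, rounds=10_000, warmup=50, seed=42):
    rng = random.Random(seed)
    arms = list(true_ctr)
    shows = {a: 0 for a in arms}
    clicks = {a: 0 for a in arms}

    def show(arm):
        shows[arm] += 1
        if rng.random() < true_ctr[arm]:
            clicks[arm] += 1

    for arm in arms:              # warm-up: try every audience a few times
        for _ in range(warmup):
            show(arm)

    for _ in range(rounds):
        if rng.random() < epsilon:    # explore: pick a random audience
            show(rng.choice(arms))
        else:                         # exploit: best observed CTR so far
            show(max(arms, key=lambda a: clicks[a] / shows[a]))
    return shows

shows = epsilon_greedy({"broad": 0.01, "narrow": 0.05})
print(shows)   # the better-matched "narrow" audience receives most impressions
```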

I'd in fact argue that — contrary to my usual take on marketer's intuition and its fallibility — it is helpful here, and its use at least enables narrowing down the optimal audience faster.

## …okay, why is that then?

Because it's never only about the audience, but about the match between the message and the audience — even if the message were the same and only the audience varied, narrowing is still useful because the search space for Facebook's algorithm is smaller, pre-qualified by humans in a sense.

But there’s an even more important property – by narrowing down the audience, the marketer is able to re-adjust their message to that particular audience, thereby increasing relevance (the “match” between preferences of the audience members and the message shown to them). This is hugely important because of the inherent combinatory nature of advertising — you cannot separate the targeting and message when measuring performance, it’s always performance = targeting * message.

Therefore, Facebook does have an incentive to encourage advertisers toward more precise targeting and to reward it by providing a lower CPC. I'm not sure they are doing this, though, because it requires assigning a weighted bid to advertisers with more precise targeting — consider advertiser A, who is mass-advertising to everyone in some large set X, vs. advertiser B, who is competing for a part of the same audience, i.e. a subset x. They are both in the same auction, but the latter should be compensated for his more precise targeting.

## Concluding remarks

Perhaps this is factored in through Relevance Score and/or performance adjustment in the actual rank and CPC. That would yield the same outcome, given that the above-mentioned dynamics hold, i.e. there's a correlation between more precise targeting and ad performance.

As usual, my students gave the inspiration to this post. They’re currently participating in Google Online Marketing Challenge, and — from the mouths of children you hear the truth 🙂 — asked a very simple question: “What do we do when the campaigns are running?”

At first, I was tempted to say that you'll do optimization under my supervision, e.g. change the ad texts, pause ads, and change keyword bids. But then I decided to write them a brief introduction.

So, here it goes:

1. Structure – have the campaigns been named logically? (i.e., to mirror the website and its goals)? Are the ad groups tight enough? (i.e., include only semantically similar terms that can be targeted by writing very specific ads)

2. Settings – all features enabled, search network only, no search partners (that applies to search campaigns; the display network has different rules, but never ever mix the two under one campaign), language targeting Finnish, English, and Swedish (the languages that Finns use in Google)

3. Modifiers – are you using location or mobile bid modifiers? Should you? (If unsure, find out quick!)

4. Do you have need for display campaigns? If so, use display builder to build nice-looking ads; your targeting options are contextual targeting (keywords), managed placements (use Display Planner to find suitable sites), audience lists (remarketing), and affinity and topic categories (the former targets people with a given interest, the latter websites categorized under a given interest, e.g. traveling) (you can use many of these in one campaign)

5. Do you have enough keywords to reach the target daily spend? (Good to have more than 100, even thousands of keywords in the beginning.)

6. What match types are you using? You can start from broad, but gradually move towards exact match because it gives you the greatest control over which auctions you participate in.

7. What are your options to expand the keyword base? Look for opportunities by pulling a search term report from all keywords after you've run the campaign for a week or so; this way you can also identify more negative keywords.

8. What negative keywords are you using? Very important to exclude yourself from auctions which are irrelevant for your business.

9. Pausing keywords — don't ever delete anything, because then you'll lose the analytical trace; but frequently pause keywords that a) are the most expensive and/or b) have the lowest CTR/Quality Score

10. Have you set bids at the keyword level? You should – it’s okay to start by setting the bid at ad group level, and then move gradually to keyword level as you begin to accumulate real data from the keyword market.

11. Ad positions – see if you're competitive by looking at the auction insights report; if you have low average positions (below 3), consider either pausing the keyword or increasing your bid (and improving ad relevance — very important)

12. Landing page relevance – you can see landing page experience by hovering over keywords. If the landing page experience is poor, think about whether you can instruct your client to make changes, or whether you can change the landing page to a better one. Landing page relevance comes from the searcher's perspective: when writing the search query, he needs to be shown ads that are relevant to that query and then directed to the webpage that is the closest match to that query. Simple in theory; in practice it's your job to make sure there's no mismatch here.

13. Quality Score – this is the godlike metric of AdWords. Anything below 4 is bad, so pause the keyword, or if it's relevant for your business, do your best to improve the score. The closer you get to 10, the better (with no data, the default is 6).

14. Ad extensions – every possible ad extension should be in use, because they tend to gather a good CTR and also positively influence your Quality Score. This includes sitelinks, call extensions, reviews, etc.

And, finally, important metrics. You should always customize your column views at campaign, ad group and keyword level. The picture below gives an example of what I think are generally useful metrics to show — these may vary somewhat based on your case. (They can be the same for all levels, except keyword level should also include Quality Score.)

• CTR (as high as possible, at least 5%)
• CPC (as low as possible, in Finland 0.20€ sounds decent in most industries)
• impression share (as high as possible for business-relevant keywords; in long-tail campaigns it can be low for the good reason of getting cheap traffic; generally speaking, this indicates scaling potential; I've written a separate post about this)
• Quality Score (as high as possible, scale 1-10)
• Cost (useful to sort by cost to focus on the most expensive keywords and campaigns)
• Avg. position (TOP3 is a good goal!)
• Bounce rate (as low as possible, it tends to be around 40% on an average website) (this only shows if GA is connected –> connect if possible)
• Conversion rate (as high as possible, tends to be 1-2% in ecommerce sites, more when conversion is not purchase)
• Number of conversions (shows absolute performance difference between campaigns)

That’s it! Hope you enjoyed this post, and please leave comments if you have anything to add.

One of the main problems in analytics is the lack of people information (e.g., demographics, interests). It is controlled by superplatforms like Google and Facebook, and as soon as the visitor transitions from the channel to the website, you lose this information.

So, I was thinking about this in the context of dynamic pricing. There's no problem in determining an average solution, i.e. a price point that maximizes conversion on average. But that's pretty useless because, as you know, averages are bad for optimization – too much efficiency is wasted. Consider dynamic pricing: willingness to pay (WTP) is what matters for setting the price, but it's impossible to know the WTP function of individual visitors. That's why aggregate measures *are* needed, but we can go beyond a general aggregate (the average) to segmentation, and then use segment information as a predictor of conversion at different price points. (By the way, determining the testing interval for price points is also an interesting issue, i.e. how big or small the increments should be — but that's not the topic here.)

Going back to the people problem — you could tackle this with URL tagging: 1) include the targeting info in your landing URL, and you're able to do personalization like dynamic pricing or tailored content by retrieving the targeting information from the URL and rendering the page accordingly. A smart system would not only do this, but also 2) record the interactions of different targeting groups (e.g., men and women) and use this information to optimize for a goal (e.g., determining the optimal price point per user group).

These are some necessary features for a dynamic pricing system. Of course, there's also the aforementioned interval problem: segmentation means you're playing with less data per group, so you have fewer “trials” for effective tests. Intuitively, you can apply this rule: the less traffic the website has, the larger the increments (+/-) should be for finding the optimal price point. However, if the increments become too large, you're likely to miss the optimum (it gets lost somewhere between the intervals). I think there are some elegant algorithmic solutions to this in the multi-armed bandit literature.
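To illustrate the increment trade-off, here is a toy sketch; the demand curve and traffic numbers are entirely made up:

```python
import random

def conversion_prob(price, wtp_scale=40.0):
    # Hypothetical aggregate demand: fewer visitors convert as the price rises.
    return max(0.0, 1.0 - price / wtp_scale)

def best_price(prices, visitors_per_price, seed=7):
    """Test each candidate price on a batch of visitors; return the revenue maximizer."""
    rng = random.Random(seed)
    revenue = {}
    for p in prices:
        conversions = sum(rng.random() < conversion_prob(p)
                          for _ in range(visitors_per_price))
        revenue[p] = p * conversions
    return max(revenue, key=revenue.get)

# Low traffic -> larger increments (coarse grid); high traffic -> finer grid.
coarse = best_price([10, 20, 30], visitors_per_price=1000)
fine = best_price(list(range(10, 31, 2)), visitors_per_price=5000)
print(coarse, fine)   # both should land near the true optimum of 20
```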

## Introduction

This is a short post explaining the correct way to calculate ROI for online marketing. I got the idea earlier today while renewing my Google AdWords certificate, when I saw this question in the exam:

Now, here's the trap – I'm arguing most advertisers would choose option C, although the correct one is option A. Let me elaborate.

## The problem?

As everybody knows, ROI is calculated with this formula:

ROI = (returns-cost)/cost*100%

The problem is that the cost side is oftentimes seen too narrowly when reporting the performance of online advertising.

ROI is the ‘return on investment’, but the investment should not only be seen to include advertising cost but the cost of the product as well.

Let me give you an example. Here’s the basic information we have of our campaign performance:

• cost of campaign A: 100€
• sales from campaign A: 500€

So, applying the formula the ROI is (500-100)/100*100% = 400%

However, in reality we should consider the margin, since that's highly relevant for the overall profitability of our online marketing. In other words, the cost includes the products sold. Considering that our margin is 15% in this example, we get

• cost of products sold: 500€*(1-0.15) = 425€

Reapplying the ROI calculation:

(500-(100+425)) / (100+425) * 100% ≈ -4.76%

So, as we can see, the profitability went from +400% to -4.76%.

## The implications

The main implication: always consider the margin in your ROI calculation, otherwise you’re not measuring true profitability.

The more accurate formula, therefore, is:

ROI = (returns-(cost of advertising + cost of products sold)) / (cost of advertising + cost of products sold)
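The two calculations from the example can be sketched as:

```python
def roi_adjusted(returns, ad_cost, margin):
    """Margin-adjusted ROI (%): cost side includes advertising and products sold."""
    cost_of_products = returns * (1 - margin)
    total_cost = ad_cost + cost_of_products
    return (returns - total_cost) / total_cost * 100

# Naive ROI from the example: only advertising cost on the cost side.
naive = (500 - 100) / 100 * 100          # 400%

# Margin-adjusted ROI with a 15% margin: about -4.76%.
adjusted = roi_adjusted(returns=500, ad_cost=100, margin=0.15)
print(round(naive), round(adjusted, 2))
```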

Another implication is that since ROI depends on margins, products with the same price can have different CPA goals. This kind of adjustment is typically ignored in bid-setting, also by more advanced systems such as AdWords Conversion Optimizer, which assumes a uniform CPA goal.

## Limitations

Obviously, while the naive ‘basic ROI’ calculation ignores product costs on the cost side, it also ignores customer lifetime value on the return side of the equation.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]utu.fi

## Introduction

Carryover effects in marketing are a tricky beast. On one hand, you don’t want to prematurely judge a campaign because the effect of advertising may be delayed. On the other hand, you don’t want bad campaigns to be defended with this same argument.

## Solutions

What's the solution then? Carryover effects need to be quantified, or treated as if they didn't exist. Some ways to quantify them are available in Google Analytics:

• first, you have the time lag report of conversions – this shows how long it has taken for customers to convert
• second, you have the possibility to increase the inspection window – by looking at a longer period, you can capture more carryover effects (e.g., you ran a major display campaign in July; looking back in December you might still see effects) [Notice that cookie duration limits the tracking, and also remember to use UTM parameters for tracking.]
• third, you can look at assisted conversions to see the carryover effect in conversion paths – many campaigns may not directly convert, but are a part of the conversion path.

All these methods, however, are retrospective in nature. Predicting carryover effects is notoriously hard, and I’m not sure it would even be possible with such accuracy that it should be pursued.

## Conclusion

In conclusion, I'd advise against being too hasty in drawing conclusions about campaign performance. This way you avoid the problem of premature judgment. The problem of shielding inferior campaigns can be tackled by using other proxy metrics of performance, such as the bounce rate. This would effectively tell you whether a campaign has even a theoretical chance of providing positive carryover effects. Indeed, regarding the prediction problem, proving an association between high bounce rate and low carryover effects would reinforce this “rule of thumb” even further.


Recently I had an email correspondence with one of my brightest digital marketing students. He asked for advice on creating an AdWords campaign plan.

I told him the plan should include certain elements, and only them (it's easy to make a long and useless plan, and difficult to make a short and useful one).

Anyway, in the process I also told him how to make sure he gets the necessary information from the client. These four things I’d like to share with everyone looking for a crystal-clear marketing brief.

They are:

1. campaign goal
2. target group
3. budget
4. duration

First, you want to know the client's goal. In general, it can be direct response (sales) or indirect response (awareness). This affects two things:

• metrics you include as your KPIs — in other words, will you optimize for impressions, clicks, or conversions.
• channels you include — if the client wants direct response, search-engine advertising is usually more effective than social media (and vice versa).

The channel selection is the first thing to include into your campaign plan.

Second, you want the client’s understanding of the target group. This affects targeting – in search-engine advertising it’s the keywords you choose; in social media advertising it’s the demographic targeting; in display it’s the managed placements.

Based on this information, you want to make a list (of keywords / placements / demographic types). These targeting elements are the second thing to include into your campaign plan.

Third, the budget matters a great deal. It affects two things:

• how many channels to choose
• how to set daily budgets

The bigger the budget is, the more channels can be included in the campaign plan. It’s not always linear, however; e.g. when search volumes are high and the goal is direct response, it makes most sense to spend all on search. But generally, it’s possible to target several stages in customers’ purchase funnel (i.e., stages they go through prior to conversion).

Hence, the budget spend is the third thing to include into your campaign plan.

You calculate the daily budget by dividing the total budget by the number of channels and the duration (in days) of the campaign. At this point, you can allocate the budget in different ways, e.g. search = 2 × social. It's important to note that in social and display you can usually spend as much money as you want, because the available ad inventory is in effect unlimited; in search, however, spend is curbed by natural search volumes.
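As a sketch of that split, with invented figures and a search = 2 × social weighting:

```python
total_budget = 3000.0                          # EUR, invented figure
duration_days = 30
channel_weights = {"search": 2, "social": 1}   # search gets twice the social spend

daily_total = total_budget / duration_days
weight_sum = sum(channel_weights.values())
daily_budgets = {ch: daily_total * w / weight_sum
                 for ch, w in channel_weights.items()}
print(daily_budgets)   # 100 EUR/day split 2:1 between the channels
```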

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f