
Customers as a source of information: 4 risks

Introduction

This post is based on Dr. Elina Jaakkola’s presentation “What is co-creation?” on 19th August, 2014. I will elaborate on some of the points she made in that presentation.

Customer research, a sub-form of market research, serves the purpose of acquiring customer insight. Often, when pursuing information from consumers, companies use surveys. Surveys, and the use of customers as a source of information more generally, have some particular problems, discussed in the following.

1. Hidden needs

Customers have latent or hidden needs that they do not express, perhaps for social reasons (awkwardness) or because they do not know what is technically possible (unawareness). If one does not specifically ask about a need, it is easily left unmentioned, even if it is of great importance to the customer. This problem is not easily solved, since even the company may not be aware of all the possibilities in the technological space. However, if the purpose is to learn about the customers, a capability for immersion and empathy is needed.

2. Reporting bias

What customers report they would do is not equivalent to their true behavior. They might say one thing and do something entirely different. In research, this is commonly known as reporting bias, and it is a major problem when carrying out surveys. The general solution is to ask about past, not future, behavior, although even this approach is subject to recall bias.

3. Interpretation problem

Consumers answering surveys can misinterpret the questions, and analysts can in turn misinterpret their answers. It is difficult to vividly present hypothetical products and scenarios to consumers, and therefore the answers one receives may not be accurate. A general solution is to avoid ambiguity in the framing of questions, so that everything is clear to both the respondent and the analyst (shared meanings).

4. Loud minority

This is a case where a minority, by being more vocal, creates a false impression of the needs of the whole population. In social media, for example, this effect easily takes place. A general rule of thumb is that only 1% of the members of a community actively participate in a given discussion, while the other 99% merely observe. The loudest consumers get their opinions out, but these may not represent the needs of the silent majority. One solution is stratification, where one distinguishes different groups from one another so as to form a more balanced view of the population. This works when there is adequate participation in each stratum. Another alternative is to actively seek out non-vocal customers.

Conclusion

Generally, the problems mentioned above relate to stated preferences. When we use customers as a source of information, all kinds of biases emerge. That is why behavioral data, which does not depend on what customers say, is a more reliable source of information. Thankfully, in digital environments behavioral data can be obtained with much more ease than in analogue environments. Its problems relate to representativeness and, on the other hand, to fitting it to other forms of data so as to gain a more granular understanding of the customer base.

Basic formulas for digital media planning

Planning makes people happy.

Introduction

Media planning, or campaign planning in general, requires you to set goal metrics so that you are able to communicate the expected results to a client. In digital marketing, these are metrics like clicks, impressions, and costs. The actual planning process usually involves using estimates, that is, sophisticated guesses of a sort. These estimates may be based on your previous experience, planned goal targets (for example, when given a specific business goal, like a sales increase), or industry averages (if those are known).

Calculating online media plan metrics

By knowing or estimating some goal metrics, you are able to calculate the others. But sometimes it’s hard to remember the formulas, so here is a handy list of the key ones; a small code sketch follows the list.

  • ctr = clicks / imp
  • clicks = imp * ctr
  • imp = clicks / ctr
  • cpm = cost / (imp / 1000)
  • cost = cpm * (imp / 1000)
  • cpa = cpc / cvr
  • cpa = cost / conversions
  • cost = cpa * conversions
  • conversions = cost / cpa
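
For convenience, here is the same set of identities as a minimal TypeScript sketch. The function and variable names are mine, chosen for this example, not any standard library:

```typescript
// Basic media plan identities. imp = impressions, ctr = click-through rate,
// cvr = conversion rate, cpm = cost per mille, cpa = cost per acquisition.
const ctr = (clicks: number, imp: number): number => clicks / imp;
const clicksFromImp = (imp: number, ctrEst: number): number => imp * ctrEst;
const impFromClicks = (clicks: number, ctrEst: number): number => clicks / ctrEst;
const cpm = (cost: number, imp: number): number => cost / (imp / 1000);
const costFromCpm = (cpmEst: number, imp: number): number => cpmEst * (imp / 1000);
const cpaFromCpc = (cpc: number, cvrEst: number): number => cpc / cvrEst;
const cpa = (cost: number, conversions: number): number => cost / conversions;
const conversionsFromCpa = (cost: number, cpaEst: number): number => cost / cpaEst;
```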

In general, metrics relating to impressions are used as proxies for awareness and brand-related goals. Metrics relating to clicks reflect engagement, while conversions indicate behavior. Oftentimes, I estimate CTR, CVR, and CPC because 1) it’s good to set a starting goal for these metrics, and 2) they exhibit some regularity (e.g., e-commerce conversion rates tend to fall between 1% and 2%).
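
As a worked example with illustrative numbers (these are assumptions for demonstration, not benchmarks): a €1,000 budget, an estimated CPC of €0.50, and an estimated CVR of 1.5% already determine the rest of the plan:

```typescript
const budget = 1000;                  // € planned spend (assumed)
const cpcEst = 0.5;                   // € per click (estimate)
const cvrEst = 0.015;                 // conversion rate (estimate)

const clicks = budget / cpcEst;       // 2000 clicks
const cpaEst = cpcEst / cvrEst;       // ≈ €33.33 per conversion
const conversions = budget / cpaEst;  // 30 conversions (= clicks * cvr)

console.log({ clicks, cpaEst, conversions });
```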

Conclusion

You don’t have to know everything to devise a sound digital media plan; a few goal metrics are enough to calculate the rest. The more realistic your estimates, the better, and worry not: accuracy improves with time. In the beginning, it is best to start with moderate estimates you feel comfortable achieving, or even outperforming. It’s always better to under-promise and over-deliver than the reverse. Finally, the achieved metric values differ by channel, sometimes a lot, so take that into consideration when crafting your media plan.

Carryover effects and their measurement in Google Analytics

Introduction

Carryover effects in marketing are a tricky beast. On one hand, you don’t want to prematurely judge a campaign because the effect of advertising may be delayed. On the other hand, you don’t want bad campaigns to be defended with this same argument.

Solutions

What’s the solution, then? Carryover effects need to be quantified; otherwise, they might as well not exist. Some ways to quantify them are available in Google Analytics:

  • first, you have the time lag report of conversions – this shows how long it has taken for customers to convert
  • second, you have the possibility to increase the inspection window – by looking at a longer period, you can capture more carryover effects (e.g., you ran a major display campaign in July; looking back in December you might still see its effects) [Notice that cookie duration limits the tracking; also remember to use UTM parameters for tracking, as in the example after this list.]
  • third, you can look at assisted conversions to see the carryover effect in conversion paths – many campaigns may not convert directly but are part of the conversion path.
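
For illustration, here is a minimal sketch of building a UTM-tagged landing page URL (the domain and campaign values are made up for this example):

```typescript
// Build a UTM-tagged landing page URL so that the campaign can still be
// identified in Google Analytics months after the click.
const url = new URL("https://www.example.com/landing-page"); // hypothetical URL

url.searchParams.set("utm_source", "display-network"); // hypothetical values
url.searchParams.set("utm_medium", "display");
url.searchParams.set("utm_campaign", "july-brand-push");

console.log(url.toString());
// https://www.example.com/landing-page?utm_source=display-network&utm_medium=display&utm_campaign=july-brand-push
```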

All these methods, however, are retrospective in nature. Predicting carryover effects is notoriously hard, and I’m not sure it is even possible with enough accuracy to be worth pursuing.

Conclusion

In conclusion, I’d advise against being too hasty in drawing conclusions about campaign performance; this way you avoid the problem of premature judgment. The problem of shielding inferior campaigns can be tackled by using other proxy metrics of performance, such as the bounce rate, which effectively tells you whether a campaign has even a theoretical chance of providing positive carryover effects. Indeed, regarding the prediction problem, proving an association between a high bounce rate and low carryover effects would reinforce this rule of thumb even further.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

A Few Interesting Digital Analytics Problems… (And Their Solutions)

Introduction

Here’s a list of analytics problems I devised for a digital analytics course I was teaching (Web & Mobile Analytics, Information Technology Program) at Aalto University in Helsinki. Some solutions to them are also considered.

The problems

  • Last click fallacy = taking only the last interaction into account when analyzing channel or campaign performance (a common problem for standard Google Analytics reports)
  • Analysis paralysis = the inability to know which data to analyze or where to start the analysis process (a common problem when first facing a new analytics tool 🙂 )
  • Vanity metrics = reporting “show-off” metrics as opposed to ones that are relevant and important for business objectives (a related phenomenon is what I call “metrics fallback”, in which marketers use less relevant metrics basically because they look better than the primary metrics)
  • Aggregation problem = seeing the general trend but not understanding why it took place (this is a problem of “averages”)
  • Multichannel problem = losing track of users when they move between online and offline (in a cross-channel environment, i.e. between digital channels, one can track users more easily, but the multichannel problem is a major hurdle for companies interested in knowing the total impact of their campaigns in a given channel)
  • Churn problem = a special case of the aggregation problem; the aggregate numbers show growth whereas in reality we are losing customers
  • Data discrepancy problem = getting different numbers from different platforms (e.g., a standard Facebook conversion configuration almost always shows different numbers than GA conversion tracking)
  • Optimization goal dilemma = optimizing for platform-specific metrics leads to suboptimal business results, and vice versa. This is because platform metrics, such as Quality Score, are meant to optimize competitiveness within the platform, not outside it.

The solutions

  • Last click fallacy → attribution modeling, i.e. accounting for all or selected interactions and dividing conversion value between them
  • Analysis paralysis → choosing actionable metrics grounded in business goals and objectives; this makes it easier to focus instead of just looking at all of the overwhelming data
  • Vanity metrics → choosing the right KPIs (see previous) and sticking to them
  • Aggregation problem → segmenting data (e.g. by channel, campaign, geography, time)
  • Multichannel problem → universal analytics (and the associated use of either a client ID or a customer ID, i.e. a universal connector)
  • Churn problem → cohort analysis, i.e. segmenting users based on the time of their enrollment (a minimal sketch follows this list)
  • Data discrepancy problem → understanding the definitions & limitations of measurement in different ad platforms (e.g., the difference between lookback windows in FB and Google), and using UTM parameters to track individual campaigns
  • Optimization goal dilemma → making a judgment call, right? Sometimes you need to compromise; not all goals can be reached simultaneously. Ultimately you want business results, but insofar as platform-specific optimization helps you get to them, there’s no problem.
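
To illustrate the cohort analysis idea, here is a minimal sketch (the data shape is an assumption for the example, not from any analytics API): users are grouped by enrollment month, and each cohort’s activity is then tracked separately, which reveals churn that aggregate growth numbers hide.

```typescript
// Minimal cohort analysis: group users by enrollment month and compute
// the share of each cohort that was still active in a given month.
interface User {
  id: string;
  enrolledMonth: string;        // e.g. "2014-07"
  activeMonths: Set<string>;    // months in which the user was active
}

function cohortRetention(users: User[], month: string): Map<string, number> {
  // Group users into cohorts by their enrollment month.
  const cohorts = new Map<string, User[]>();
  for (const u of users) {
    const members = cohorts.get(u.enrolledMonth) ?? [];
    members.push(u);
    cohorts.set(u.enrolledMonth, members);
  }
  // For each cohort, the share of its members active in the target month.
  const retention = new Map<string, number>();
  for (const [cohort, members] of cohorts) {
    const active = members.filter((u) => u.activeMonths.has(month)).length;
    retention.set(cohort, active / members.length);
  }
  return retention;
}
```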

Want to add something to this list? Please write in the comments!

[edit: I’m compiling a larger list of analytics problems. Will update this post once it’s ready.]

Learn more

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f

The Bounce Problem: How to Track Bounce in Simple Landing Pages

Introduction

This post applies to cases satisfying two conditions.

First, you have a simple landing page designed for immediate action (= no further clicks). This is the case for many marketing campaigns for which we design a landing page with no navigation and a very simple goal, such as learning about a product or watching a video.

Second, you have a high bounce rate, indicating a bad user experience. Bounce rate is calculated as follows:

bounce rate = visitors who leave without clicking further / all visitors

Why does high bounce indicate bad user experience?

It’s a proxy for it. A high bounce rate simply means a lot of people leave the website without clicking further. This usually indicates bad relevance: the user was expecting something else, didn’t find it, and so left the site immediately.

For search engines, a high bounce rate indicates bad landing page relevance vis-à-vis a given search query (keyword), as the user immediately returns to the SERP (search-engine results page). Search engines such as Google want to offer the right solution for a given search query as fast as possible to please their users, and therefore a poor landing-page experience may lead to a lower ranking for a given website in Google.

The bounce problem

I’ll give a simple example. Say you have a landing page with only one call to action, such as viewing a video. You then have a marketing campaign resulting in ten visitors. After viewing the video, all ten users leave the site.

Now, Google Analytics would record this as a 100% bounce rate; everyone left without clicking further. Moreover, the duration of the visits would be recorded as 0:00, since duration is only stored after a user clicks further (which didn’t happen in this case).

So, what should we conclude as site owners when looking at our statistics? 100% bounce: that means either that a) our site sucks or b) the channel we acquired the visitors from sucks. But in this case that conclusion is wrong; all of the users watched the video, and so the landing page (and the marketing campaign associated with it) was in fact a great success!

How to solve the bounce problem

I will show four solutions to improve your measurement of user experience through bounce rate.

First, simply create an event that pings your analytics software (most typically Google Analytics) when a user performs a desired on-page action (e.g. viewing a video). This removes from the bounce rate calculation those users who completed a desired action but still left without clicking further.

Here are Google’s instructions for event tracking.
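
As a minimal sketch using the classic analytics.js event syntax (assuming the GA tracking snippet is already installed on the page; the element ID, event category, and label are made up for this example):

```typescript
// analytics.js exposes a global ga() function; declare it for TypeScript.
declare function ga(...args: unknown[]): void;

const video = document.querySelector<HTMLVideoElement>("#landing-video");

video?.addEventListener("play", () => {
  // Sends an interaction event, which by default also removes the
  // session from the bounce count.
  ga("send", "event", "Videos", "play", "Landing page video");
});
```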

Second, ping GA based on visit duration, e.g. create an event for spending one minute on the page. This will in effect lower your reported bounce rate by the share of users who stay at least a minute on the landing page.
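
A sketch of the same idea, again assuming the analytics.js ga() global:

```typescript
declare function ga(...args: unknown[]): void;

// After 60 seconds on the page, fire an event so the visit no longer
// counts as a bounce in Google Analytics.
setTimeout(() => {
  ga("send", "event", "Engagement", "time-on-page", "60 seconds");
}, 60 * 1000);
```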

Third, create a form. Filling the form directs the user to another page, which then triggers an event for analytics. In most cases this is also compatible with our condition of a simple landing page with one CTA (well, if you have a video and a form, that’s two actions for a user, but in most cases I’d say it’s not too much).

Finally, there is a really cool analytics plugin by Rob Flaherty called Scrolldepth (thanks Tatu Patronen for the tip!). It pings Google Analytics as users scroll down the page, e.g. at 25%, 75%, and 100% depths. In addition to solving the bounce problem, it also gives you more data on user behavior.
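
For reference, a minimal initialization sketch, assuming jQuery and the plugin are both loaded on the page (consult the plugin’s own documentation for the exact options and thresholds):

```typescript
// Scroll Depth is a jQuery plugin, so declare the jQuery global for TypeScript.
declare const jQuery: any;

jQuery(() => {
  // Default initialization; the plugin then sends scroll-depth events
  // to Google Analytics on its own.
  jQuery.scrollDepth();
});
```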

Limitations

Note that adding event tracking to reduce bounce rate only reduces it in your analytics. Search engines still see a bounce as a direct exit and may include that in their evaluation of landing-page experience. Moreover, the individual solutions have limitations: creating a form is not always natural for the business, or it may require an additional incentive for the user; and Scrolldepth is most useful on lengthy landing pages, which is not always the case.
