Archive for the Google Analytics tag


How to Win the Google Online Marketing Challenge


[Image: GOMCHA European Winners 2016]

1. Introduction

In the past couple of weeks, a few people have approached me asking for tips on how to do well in the Google Online Marketing Challenge. So, I thought I might as well gather some of my experiences in a blog post, and share them with everybody.

A little bit of background: I’ve been the professor of two winning teams (GOMC Europe 2013 & GOMC Europe 2016). Although most of the credit obviously goes to the students who do all the hard work (the students at Turku School of Economics simply rock!), guidance does play an important role, since the students most commonly have no prior experience in SEM/PPC and need to be taught quickly what to focus on.

2. Advice to teachers

The target audience for this post is anyone participating in the challenge. For the teachers, I have one important piece of advice:

Learn the system if you’re teaching it. There’s no substitute for real experience. The students are likely to have a million questions, and you need to give better answers than “google it.” Personally, I was fortunate enough to have done SEM for many years before starting to teach it. Without that experience, it would have been impossible to guide the teams to do well. However, if you don’t have the same advantage but want your students to do well, turn to the industry. Many SEM companies out there are interested in mentoring and sparring with the students, because that way they can also spot talented individuals for future hiring (win-win, right?).

3. How to win GOMCHA?

3.1 Overview

That said, here are my top three “critical success factors” for winning the challenge:

  1. Choose your case wisely
  2. Focus on Quality Score
  3. Show impact

That’s it! Follow these principles and you will do well. Now, that being said, behind each of them is a whole layer of complexity 🙂 Let’s explore each point.

3.2 Choosing the AdWords case

First, one of the earliest questions students will ask is how to choose the company or organization they’re doing the campaign for — and it’s also one of the most important ones. Here’s how I do it: I let each team find and choose their own case, but I tell them what makes a good case and what doesn’t. I wrote a separate post about choosing a good AdWords case. Read the post and internalize the information.

Update: one more point to add to the linked post – preferably choose a case that already has some brand searches. This helps you achieve a higher overall CTR and a lower overall CPC.

The choice of a good case is crucial: you can be the best optimizer in the world, but if you have a bad case, you will fail. One example was a team that chose a coffee company — not a good case, because it had a narrow product range and relatively few searches. For some reason the team, which consisted of several students with *real experience* in AdWords, wanted to choose it anyway. Not surprisingly, they struggled for exactly those reasons and were easily overshadowed by teams with no experience but a good case. The formula here is: success = case × skills.

By the way, that is one of the most important lessons for any marketing student in general: Always choose your case wisely, and never market something whose potential you don’t believe in.

3.3 Choosing the metrics

Another common question relates to the metrics: what should we optimize for? While there are many important metrics, including CTR and CPC, I would say one stands above the others: Quality Score, which seems to be very influential in Google’s ranking algorithm for the competition.

Note that I don’t have any insider information on this; I’m saying *seems* for the following reason: In 2015, I instructed the teams to focus on a wide range of metrics, including CTR, CPC, and QS. What came out were several great teams that, in my opinion, had better overall metrics than many of the finalists that year (none of my teams were finalists). Last year, however, I switched the strategy and instructed the teams to focus heavily on Quality Score, even at the cost of other metrics. For example, to the team that ended up winning in 2016, I said “your goal is 10 x 10,” meaning they should get 10 keywords with a QS of 10. They ended up getting 12, and the rest is history 🙂

3.4 Why is Quality Score that important?

In my view, it’s because all optimization efforts basically culminate in that metric. To maximize your QS, you essentially need to do all the right things in terms of optimization, including account structure, ad creation, and landing pages. To get these things nailed, refer to this post. And google for more tips: blogs such as PPC Hero, WordStream, and Certified Knowledge have plenty of subject matter to learn from. I have also compiled an extensive list of digital marketing blogs that you can use.

However, do note that all third-party information is to some degree unreliable. Use it with caution and combine it with your first-hand experiments (i.e., do what the numbers show is working best). The most reliable source of information is of course Google, because they know the system from the inside in a way no outside expert (including myself) does. So, use Google’s AdWords help as your main reference.

3.5 Show real impact

The last step, since many teams can score high on metrics, is to show real-life impact. This is pretty much the only way to differentiate yourself when all finalist teams are good. First of all, meticulously follow Google’s guidelines for the reports to highlight your greatness. As a member of the academic panel, I know some entries have failed because they did not follow the technical guidelines, so make sure your output is in line with them. However, that is not the main point; the main point is to show how you brought real results to your case organization. Although conversions are not part of the official ranking criteria, if you look at the past winners, most of them gained a lot of conversions — you can do the math. The reports of the winners from earlier years can be found on the challenge website.

4. List of practical tips

Finally, some practical tips (the list is in no particular order, and not comprehensive at all):

  1. Optimize every day as if you were obsessed with AdWords
  2. Don’t be afraid to ask the experts for advice; take all the help you can get to learn faster
  3. Prefer using ‘exact match’ keywords
  4. Never mix display campaigns with search campaigns (i.e., avoid ‘display select’)
  5. Avoid GDN altogether; you can experiment with it using a little budget, but focus 99% on search campaigns
  6. When possible, direct the keywords to a specific landing page (not homepage)
  7. Create ad groups based on semantic similarity of keywords (if you don’t know what this means, find out)
  8. Don’t stress about the initial bid price; set it at some level based on the Keyword Planner estimates and change according to results
  9. Or, alternatively, set it as high as possible to get a good Avg. Pos., and therefore an improved CTR and, in turn, an improved QS
  10. Set the bid price manually per keyword
  11. Use GA to report after-click performance (good for the campaign report; see the UTM-tagging sketch after this list)
  12. Use as many AdWords features as possible (good for campaign report)
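
To make the GA reporting tip (#11) concrete: one way — my own habit, not something the challenge requires — is to tag the ads’ destination URLs with UTM parameters so that after-click behavior shows up under the right campaign in GA (linking AdWords and GA with auto-tagging achieves much the same thing). A minimal sketch in TypeScript, with hypothetical campaign and keyword names:

```ts
// Hypothetical helper: append UTM parameters to an ad's destination URL
// so after-click performance is attributed to the right campaign in GA.
function buildUtmUrl(baseUrl: string, campaign: string, adGroup: string, keyword: string): string {
  const url = new URL(baseUrl);
  url.searchParams.set('utm_source', 'google');
  url.searchParams.set('utm_medium', 'cpc');
  url.searchParams.set('utm_campaign', campaign);
  url.searchParams.set('utm_content', adGroup);
  url.searchParams.set('utm_term', keyword);
  return url.toString();
}

// Example with made-up names:
console.log(buildUtmUrl('https://example.com/landing', 'gomc-2017', 'brand', 'coffee'));
// -> https://example.com/landing?utm_source=google&utm_medium=cpc&utm_campaign=gomc-2017&utm_content=brand&utm_term=coffee
```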

Finally, read Google’s materials, including the challenge website. Follow their advice meticulously, and read, read, read about search-engine advertising on digital marketing blogs and Google’s website.

Good luck!! 🙂

CAVEAT: I’m a member of the Google Online Marketing Challenge’s academic panel. These are my personal opinions and don’t necessarily represent the official views of the panel. The current judging criteria for the competition can be found at: https://www.google.com/onlinechallenge/discover/judging.html

UPDATE (May 2017): Elina Ojala (next to me in the picture above) and I had a Skype call with students of Lappeenranta University of Technology (LUT). Elina pointed out some critical things: it’s important 1) to be motivated, 2) to have a really good team without free riding, 3) to share tasks efficiently (e.g., analytics, copywriting; based on individual interests), and 4) to put in the extra effort (e.g., changing the landing pages, using GA). I added that for teachers it’s important to motivate the students: aim HIGH!! And to stress that there is zero chance of winning if the team doesn’t work every day (= a linear relationship between hours worked and performance).

Resources (some in Finnish)


A Few Interesting Digital Analytics Problems… (And Their Solutions)


Introduction

Here’s a list of analytics problems I devised for a digital analytics course I was teaching (Web & Mobile Analytics, Information Technology Program) at Aalto University in Helsinki. I also consider some solutions to them.

The problems

  • Last click fallacy = taking only the last interaction into account when analyzing channel or campaign performance (a common problem for standard Google Analytics reports)
  • Analysis paralysis = the inability to know which data to analyze or where to start the analysis process from (a common problem when first facing a new analytics tool 🙂 )
  • Vanity metrics = reporting “show off” metrics as opposed to ones that are relevant and important for business objectives (a related phenomenon is what I call “metrics fallback,” in which marketers use less relevant metrics basically because they look better than the primary metrics)
  • Aggregation problem = seeing the general trend, but not understanding why it took place (this is a problem of “averages”)
  • Multichannel problem = losing track of users when they move between online and offline (in a cross-channel environment, i.e. between digital channels, users can be tracked more easily, but the multichannel problem is a major hurdle for companies interested in knowing the total impact of their campaigns in a given channel)
  • Churn problem = a special case of the aggregation problem; the aggregate numbers show growth whereas in reality we are losing customers (see the small numeric sketch after this list)
  • Data discrepancy problem = getting different numbers from different platforms (e.g., standard Facebook conversion configuration shows almost always different numbers than GA conversion tracking)
  • Optimization goal dilemma = optimizing for platform-specific metrics leads to suboptimal business results, and vice versa. It’s because platform metrics, such as Quality Score, are meant to optimize competitiveness within the platform, not outside it.
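
To make the churn problem concrete, here is a tiny sketch with made-up numbers: the monthly totals grow, yet the original January cohort keeps shrinking, so the aggregate hides the churn.

```ts
// Made-up numbers for illustration only.
const months = [
  { month: 'Jan', totalCustomers: 1000, janCohortRemaining: 1000 },
  { month: 'Feb', totalCustomers: 1100, janCohortRemaining: 850 },
  { month: 'Mar', totalCustomers: 1200, janCohortRemaining: 720 },
];

for (const m of months) {
  // Share of the January cohort still active in this month.
  const retention = (m.janCohortRemaining / 1000) * 100;
  console.log(`${m.month}: total ${m.totalCustomers}, Jan cohort retention ${retention.toFixed(0)}%`);
}
// Totals grow every month, while the January cohort shrinks to 85% and then 72% of its original size.
```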

The solutions

  • Last click fallacy → attribution modeling, i.e. accounting for all or selected interactions and dividing the conversion value between them (see the sketch after this list)
  • Analysis paralysis → choosing actionable metrics grounded in business goals and objectives; this makes it easier to focus instead of drowning in all the overwhelming data
  • Vanity metrics → choosing the right KPIs (see previous) and sticking to them
  • Aggregation problem → segmenting data (e.g. channel, campaign, geography, time)
  • Multichannel problem → Universal Analytics (and the associated use of either a client ID or a customer ID, i.e. a universal connector)
  • Churn problem → cohort analysis (i.e. segmenting users based on the time of their enrollment)
  • Data discrepancy problem → understanding the definitions & limitations of measurement in different ad platforms (e.g., the difference between lookback windows in FB and Google), and using UTM parameters to track individual campaigns
  • Optimization goal dilemma → making a judgment call, right? Sometimes you need to compromise; not all goals can be reached simultaneously. Ultimately you want business results, but as long as platform-specific optimization helps you get to them, there’s no problem.
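
As a sketch of the first solution (attribution modeling), here is a minimal linear model that divides the conversion value evenly across all recorded touchpoints instead of crediting only the last click. The data shapes and channel names are illustrative assumptions, not any particular tool’s API.

```ts
interface Touchpoint {
  channel: string;    // e.g., 'cpc', 'email', 'organic'
  timestamp: number;  // when the interaction happened (kept for ordering)
}

// Linear attribution: every touchpoint on the conversion path gets an equal share.
function linearAttribution(path: Touchpoint[], conversionValue: number): Map<string, number> {
  const credit = new Map<string, number>();
  if (path.length === 0) return credit;
  const share = conversionValue / path.length;
  for (const tp of path) {
    credit.set(tp.channel, (credit.get(tp.channel) ?? 0) + share);
  }
  return credit;
}

// Example: a user touches cpc, email, and organic before converting for 90.
// Last-click would give organic all 90; the linear model gives each channel 30.
const path: Touchpoint[] = [
  { channel: 'cpc', timestamp: 1 },
  { channel: 'email', timestamp: 2 },
  { channel: 'organic', timestamp: 3 },
];
console.log(linearAttribution(path, 90)); // cpc, email and organic each get 30
```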

Want to add something to this list? Please write in the comments!

[edit: I’m compiling a larger list of analytics problems. Will update this post once it’s ready.]

Learn more

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f


The Bounce Problem: How to Track Bounce in Simple Landing Pages


Introduction

This post applies to cases satisfying two conditions.

First, you have a simple landing page designed for immediate action (=no further clicks). This can be the case for many marketing campaigns for which we design a landing page without navigation and a very simple goal, such as learning about a product or watching a video.

Second, you have a high bounce rate, indicating a bad user experience. Bounce rate is calculated as follows:

bounce rate = (visitors who leave without clicking further) / (all visitors)

Why does high bounce indicate bad user experience?

It’s a proxy for it. A high bounce rate simply means a lot of people leave the website without clicking further. This usually indicates poor relevance: the user was expecting something else, didn’t find it, and so left the site immediately.

For search engines, a high bounce rate indicates poor landing-page relevance vis-à-vis a given search query (keyword), as the user immediately returns to the SERP (search-engine results page). Search engines such as Google want to offer the right solution for a given search query as fast as possible to please their users, and therefore a poor landing-page experience may lead to a lower ranking for a given website in Google.

The bounce problem

I’ll give a simple example. Say you have a landing page with only one call to action, such as viewing a video. You then run a marketing campaign resulting in ten visitors. After viewing the video, all ten users leave the site.

Now, Google Analytics would record this as a 100% bounce rate: everyone left without clicking further. Moreover, the duration of the visits would be recorded as 0:00, since duration is only stored after a user clicks further (which didn’t happen in this case).

So, what should we conclude as site owners when looking at our statistics? A 100% bounce rate means either that a) our site sucks or b) the channel we acquired the visitors from sucks. But in the case above that conclusion is wrong: all of the users watched the video, so the landing page (and the marketing campaign associated with it) was in fact a great success!

How to solve the bounce problem

I will show four solutions to improve your measurement of user experience through bounce rate.

First, simply create an event that pings your analytics software (most typically Google Analytics) when a user completes a desired on-page action (e.g., viewing a video). This removes from the bounce rate calculation those users who completed the desired action but still left without clicking further.

Here are Google’s instructions for event tracking.
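
As an illustration, here is a minimal sketch of such an event with analytics.js; the element id, category, action, and label names are my own assumptions, not something Google prescribes.

```ts
// Assumes the standard analytics.js snippet is already loaded on the page.
declare const ga: (...args: unknown[]) => void;

const video = document.querySelector<HTMLVideoElement>('#landing-video'); // hypothetical element id

video?.addEventListener('play', () => {
  // An event hit is an interaction hit by default, so a session containing it
  // is no longer counted as a bounce in Google Analytics.
  ga('send', 'event', 'Video', 'play', 'landing-page-video');
}, { once: true });
```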

Second, ping GA based on visit duration, e.g. send an event once the user has spent one minute on the page. This will in effect lower your reported bounce rate by the share of users who stay on the landing page for at least a minute.
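
A minimal sketch of the timer approach, again with analytics.js; the one-minute threshold and event names are just examples.

```ts
// Assumes the standard analytics.js snippet is already loaded on the page.
declare const ga: (...args: unknown[]) => void;

// After 60 seconds on the page, send an interaction event so the
// session no longer counts as a bounce.
window.setTimeout(() => {
  ga('send', 'event', 'Engagement', 'time-on-page', '60 seconds');
}, 60 * 1000);
```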

Third, create a form. Submitting the form directs the user to another page, which then triggers an event for analytics. In most cases this is also compatible with our condition of a simple landing page with one CTA (well, if you have both a video and a form, that’s two actions for the user, but in most cases I’d say it’s not too much).

Finally, there is a really cool Analytics plugin by Rob Flaherty called Scroll Depth (thanks Tatu Patronen for the tip!). It pings Google Analytics as users scroll down the page, e.g. at 25%, 75%, and 100% scroll depth. In addition to solving the bounce problem, it also gives you more data on user behavior.
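
If you don’t want to pull in the plugin, the same idea can be sketched in a few lines of plain browser code; the thresholds and event names below are my own choices, not the plugin’s defaults.

```ts
// Assumes the standard analytics.js snippet is already loaded on the page.
declare const ga: (...args: unknown[]) => void;

const thresholds = [25, 50, 75, 100];   // scroll percentages to report
const fired = new Set<number>();

window.addEventListener('scroll', () => {
  const page = document.documentElement;
  const scrolledPct = ((window.scrollY + window.innerHeight) / page.scrollHeight) * 100;
  for (const t of thresholds) {
    if (scrolledPct >= t && !fired.has(t)) {
      fired.add(t);
      // Sent as a regular (interaction) event, so it also keeps the
      // visit from being counted as a bounce.
      ga('send', 'event', 'Scroll Depth', 'percentage', String(t));
    }
  }
});
```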

Limitations

Note that adding event tracking to reduce the bounce rate only reduces it in your analytics. Search engines still see the bounce as a direct exit and may include that in their evaluation of landing-page experience. Moreover, the individual solutions have limitations: creating a form is not always natural given the business, or it may require an additional incentive for the user; and Scroll Depth is most useful on lengthy landing pages, which you don’t always have.
