
The Bounce Problem: How to Track Bounce in Simple Landing Pages

Introduction

This post applies to cases satisfying two conditions.

First, you have a simple landing page designed for immediate action (i.e., no further clicks). This is common in marketing campaigns where we design a landing page without navigation and with a very simple goal, such as learning about a product or watching a video.

Second, you have a high bounce rate, indicating a bad user experience. Bounce rate is calculated as follows:

bounce rate = visitors who leave without clicking further / all visitors

Why does high bounce indicate bad user experience?

It’s a proxy for it. A high bounce rate simply means a lot of people leave the website without clicking further. This usually indicates poor relevance: the user was expecting something else, didn’t find it, and so left the site immediately.

For search engines a high bounce rate indicates bad landing page relevance vis-à-vis a given search query (keyword), as the user immediately returns to the SERP (search-engine result page). Search engines, such as Google, would like to offer the right solution for a given search query as fast as possible to please their users, and therefore a poor landing page experience may lead to lower ranking for a given website in Google.

The bounce problem

I’ll give a simple example. Say you have a landing page with only one call-to-action, such as viewing a video. You then run a marketing campaign resulting in ten visitors. After viewing the video, all ten users leave the site.

Now, Google Analytics would record this as 100% bounce rate; everyone left without clicking further. Moreover, the duration of the visits would be recorded as 0:00, since the duration is only stored after a user clicks further (which didn’t happen in this case).

So, what should we conclude as site owners when looking at our statistics? 100% bounce: that means either that a) our site sucks or b) the channel we acquired the visitors from sucks. But in this case that conclusion is incorrect: all of the users watched the video, so the landing page (and the marketing campaign associated with it) was in fact a great success!

How to solve the bounce problem

I will show four solutions to improve your measurement of user experience through bounce rate.

First, simply create an event that pings your analytics software (most typically Google Analytics) when a user completes a desired on-page action (e.g. viewing a video). This removes from the bounce rate calculation users who completed the desired action but still left without clicking further.

Here are Google’s instructions for event tracking.

Second, ping GA based on visit duration, e.g. create an event that fires after one minute on the page. This will in effect lower your reported bounce rate in proportion to the share of users who stay at least a minute on the landing page.

Third, create a form. Submitting the form directs the user to another page, which then triggers an event for analytics. In most cases, this is also compatible with our condition of a simple landing page with one CTA (well, if you have a video and a form, that’s two actions for a user, but in most cases I’d say it’s not too much).

Finally, there is a really cool Analytics plugin by Rob Flaherty called Scroll Depth (thanks Tatu Patronen for the tip!). It pings Google Analytics as users scroll down the page, e.g. at the 25%, 75% and 100% marks. In addition to solving the bounce problem, it also gives you more data on user behavior.
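
To make the effect concrete, here is a minimal Python sketch of the bounce calculation before and after event tracking (the session data and field names are invented for illustration; this is not code from any analytics package):

```python
# Each session records whether the visitor clicked further on the site
# and whether an engagement event (video view, one-minute timer, form
# submit, scroll milestone) fired. Hypothetical data for illustration.
sessions = [
    {"clicked_further": False, "engaged": True},   # watched the video, then left
    {"clicked_further": False, "engaged": True},   # same
    {"clicked_further": False, "engaged": False},  # a "true" bounce
    {"clicked_further": True,  "engaged": False},  # clicked further on the site
]

def bounce_rate(sessions):
    """Naive rate: visitors who leave without clicking further / all visitors."""
    bounces = sum(1 for s in sessions if not s["clicked_further"])
    return bounces / len(sessions)

def adjusted_bounce_rate(sessions):
    """Event-adjusted rate: a session with any tracked interaction is not a bounce."""
    bounces = sum(1 for s in sessions
                  if not s["clicked_further"] and not s["engaged"])
    return bounces / len(sessions)

print(bounce_rate(sessions))           # 0.75
print(adjusted_bounce_rate(sessions))  # 0.25
```

Three of the four sessions count as bounces in the naive calculation, but only one once the engagement events are taken into account.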

Limitations

Note that adding event tracking only reduces bounce rate in your own analytics. Search engines still see a bounce as a direct exit, and may include that in their evaluation of landing page experience. Moreover, the individual solutions have limitations – creating a form is not always natural given the business, or it may require an additional incentive for the user; and Scroll Depth is most useful on lengthy landing pages, which is not always the case.

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f

Assessing the scalability of AdWords campaigns

Introduction

Startups – and bigger companies, too – often test marketing channels by allocating a small budget to each channel and then analyzing the results (e.g. CPA, cost per action) per channel.

This is done to determine a) business potential and b) channel potential. The former refers to how lucrative it is to acquire customers given their lifetime value, and the latter to how well each channel performs.

Problem

However, there is one major issue: scaling. The question is: when we pour x dollars into a marketing channel in the test phase and get a CPA of y dollars, will the CPA remain the same when we increase the budget to x+z dollars (say, a hundred times more)?

This issue can be tackled by acquiring enough data for statistical significance. This gives us confidence that the results will be similar once the budget is increased.

In AdWords, however, the scaling problem takes another form: the natural limitation of search volumes. By this I mean that at any given time, only a select number of customers are looking for a specific topic. Unlike Facebook, which has a de facto unlimited ad inventory (billions of ad impressions), Google has only a limited (although very large) ad inventory.

Solution

Here’s how to assess the scalability of AdWords campaigns:

1. Go to campaign view
2. Enable column called “Search impression share” (Modify columns –> Competitive metrics)

This will tell you how many searchers saw your ad out of all who could have seen it (this is influenced by your daily budget and bid).

In general, you want impression share to be as high as possible, given that the campaign ROI is positive. So, in general >80% is good, <10% is bad. (The exception is when running a long-tail strategy aiming for low-cost clicks, in which case <10% is okay.)

3. Calculate the scalability as follows:

scalability = clicks / impression share

For example, if you have an impression share of 40% with which you’ve accumulated 500 clicks, then by increasing your budget and bids so that you capture 100% impression share, you would accumulate 1250 clicks (= 500 / 0.40), which is the full potential of this campaign.
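
The same arithmetic in a couple of lines of Python (the function name is mine, not AdWords terminology):

```python
def full_click_potential(clicks, impression_share):
    """Estimate the clicks available at 100% impression share,
    assuming CTR and auction dynamics stay roughly constant."""
    return clicks / impression_share

# 500 clicks captured at a 40% impression share
print(round(full_click_potential(500, 0.40)))  # 1250
```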

Limitations

Note that the formula assumes the CTR remains constant. Additionally, increasing bids may increase your CPA, so improving quality score through better ads and relevance is important to offset this effect.

The ROI of Academic Publishing

Problem of ROI in publishing

The return on investment (ROI) of academic publishing is absolutely terrible.

Think of it – thousands of hours spent correcting formatting, spelling, rephrasing, and so on. All this after the actual insight of the research has been accomplished. In all seriousness, spending 10% of the time doing research and 90% writing and rewriting cannot be thought of as anything but waste.

Why should we care?

The inefficiency of the current way of doing it – combining doing research and writing about it under the same name of “doing research” – is a horrible waste of intelligence and human resources. It inflates the cost of doing research, and also makes scientific progress slower than if 90% of the time were spent on research and 10% on writing.

Root cause

Some might say it’s a perverse outcome of letting staff go – nowadays even professors have to do everything by themselves because there are so few assistants and administrators. Why is this perverse? Because at the same time more people need work. It’s also perverse, or paradoxical, because letting the help go is done to increase efficiency, but in the end it actually decreases efficiency as the research staff shifts their time from doing research to fixing spelling errors. There is a widespread misunderstanding that letting people go leads to better efficiency – it may save costs, but precisely at the cost of efficiency.

My experiences

The thought for this article came to mind when my colleague and I yet again received some minor edit requests for an article to be published in a book – the material was ready last year, but all these people are working to fix small details that add zero substantive value. What a waste!

And I’m not alone in this situation; most if not all academics face the same problem.

Solution

Two solutions readily come to mind:

  • report the data and that’s it
  • use editors to fix all minor errors instead of forcing the high-thinkers to waste their time on it

The latter is much better, as the first option misses the importance of interpreting the results and theorizing from them (the whole point of doing research).

What is ROI of research?

Efficiency, such as the ROI of research, should be defined as learning more about the world. This will never be accomplished by writing reports, but by going out into the world. At the same time, I don’t mean to undermine basic research – the ROI of research is not the same as its immediate usefulness, let alone its immediate economic potential. ROI in my argument simply refers to the ratio of doing research vs. writing about it, not the actual quality of the outcome.

The author works as a university teacher at the Turku School of Economics.

Startup syndromes: “The Iznogoud Syndrome”

1. Definition

The Iznogoud Syndrome can be defined as follows:

A startup strives to disrupt existing market structures instead of adapting to them.

In most industries, existing relationships are strong, cemented and will not change due to one startup. Therefore, a better strategy is to find ways of providing utility in the existing ecosystem.

2. Origins

The name of this startup syndrome comes from the French comic character who wants to “become Caliph instead of the Caliph”, and continuously fails in that (over-ambitious) attempt. Similarly, many startups are over-ambitious in their attempts to succeed. In my experience, they have an idealistic worldview while lacking a realistic perspective on the business landscape. While this works for some outliers – for example, Steve Jobs – better results are achieved, on average, with a realistic worldview. The world is driven by probabilities, and hence it’s better to target averages than outliers.

3. Examples

I see them all the time. Most startups I advise in startup courses and events aim at disintermediation: they want to remove vendors from the market and replace them. For example, one startup wanted to replace recruiting agencies by building its own recruiting platform. Since recruiting agencies already have the customer relationships, that’s an unrealistic scenario. What upset me was that the team didn’t even consider providing value to the recruiting agencies, but instinctively saw them as junk to be replaced.

Another example: there is a locally dominant service providing information on dance events, which holds something like 90% of the market (everyone uses it). Yet it has major usability issues. Instead of partnering with the current market leader to fix its problems, the startup wants to build its own competing platform from scratch and then “steal” all the users. That’s an unrealistic scenario. All around, there is too much emphasis on disintermediation and on seeing current market operators either as waste or as competitors, as opposed to potential partners in user acquisition, distribution, or whatever.

Startups should realize they are not alone in the market; the market has been there for a hundred years. They cannot just show up and say, “Hey, I’m going to change how you’ve done business for 100 years.” Or they can, but they will most likely fail. This is all well for the startup industry as a whole, in which it doesn’t matter if 9 out of 10 fail, since the one winner brings the profits; but for an individual startup it makes more sense to improve its odds of (even average) success. So you see, what is good for the startup industry in general is not the same as what is good for your startup in particular.

4. Similarity to other startup syndromes

The Iznogoud syndrome is similar to the “Market education syndrome”, in which an innovation created by a startup falls short in consumer adoption regardless of its technical quality – many VCs avoid products requiring considerable market education costs. Whereas the Market education syndrome is a particular issue in B2C markets, the Iznogoud syndrome is more acute in B2B markets.

5. Recommendations

Simply put, startups should learn more about their customers or clients. They need to understand their business logic (B2B) or daily routines (B2C) and how value can be provided there. In B2B markets, there are generally two ways to provide value for clients:

  • help them sell more
  • help them cut costs

If you do so, potential clients are more likely to listen. As stated previously, this is a more realistic way of doing business than thinking of ways to replace them.


A simple formula for assessing the feasibility of AdWords cases

Update [24 March 2017]: In addition to the formula explained in the post, I would add the following general criteria for a good AdWords case: 1) low-to-medium competition (high CPCs force you to look for alternative channels); 2) good website/landing pages (i.e., they load fast, are easy to navigate, and have text content relevant to the keywords).

Introduction

Google AdWords is a form of on-demand marketing which matches demand (keywords) with supply (ads). Because it provides good relevance between demand and supply, it efficiently fulfills the core purpose of marketing which is, again, to match supply and demand. However, while this property of AdWords makes it generally much more effective than other forms of online marketing, it also leads to a major limitation: the campaigns cannot scale beyond natural search volumes.

I often tell this to my students participating in the Google Online Marketing Challenge (GOMC), but a few of them always fall into the “trap of low search volume”. I will explain this in the following.

Selection criteria

First, the relevant dimensions for assessing the potential in AdWords are:

  • geographic range: based on the reach of the company’s offering
  • product range

These can vary from low to high, so that:

Low geographic range x Low product range = Trap of low search volume

Low geographic range x High product range = Potential risk of low search volume

High geographic range x Low product range = Potential risk of low search volume

High geographic range x High product range = High search volume (Best case for AdWords)

In other words, this formula favors companies with nationwide distribution and a large product range. These campaigns tend to scale the best and offer the best ratio between the cost and value of optimization. In contrast, local businesses with one or two products or services are the least feasible candidates.

What does the trap of limited search volume mean?

Well, first of all it means the spend will be low. In GOMC, this means some teams struggle to spend the required $250 during the three-week campaign window.

Second, and more importantly, it means these cases are less interesting for marketers. They offer little room for optimization (because spend is low and there is very little data to work with).

For this reason, too, the management cost of running these campaigns (= the amount a marketer can charge for his/her services) can become unbalanced: for example, if the yearly spend of a low-volume campaign is, say, $400 and the marketer charges $100 per hour, there is no point for the client to pay for many working hours, as their cost quickly exceeds the media budget.

Conclusion

As a marketer, you always want to select the best case to amplify with your skills. You can think of it through two dimensions:

  • marketing
  • product

By multiplying them, we get the following.

Bad marketing x Bad product = Bad results

Bad marketing x Good product = Okay results

Good marketing x Bad product = Bad results

Good marketing x Good product = Good results

The same in simplified, binary numbers (rounding “okay” down to zero):

0 x 0 = 0

0 x 1 = 0

1 x 0 = 0

1 x 1 = 1

In other words, it makes sense to choose a case which is good for you as a marketer. A good case will work decently with bad marketing, but not vice versa. And only coupled with good marketing will the maximum potential of a good product be achieved.

Author:

Joni Salminen
Ph.D., marketing

How to calculate metrics for an AdWords campaign plan

I teach this very simple formula to my students when they are required to write a pre-campaign report for the Google Online Marketing Challenge (GOMC).

You want to report metrics in a table like this:

budget   ctr    cpc   clicks   impressions
250      0.05   0.2   1250     25000
(The numbers are examples.)

To calculate estimates for a campaign plan, you only need to know three figures:

  • budget
  • goal CTR
  • goal CPC

In the case of GOMC, the budget is set to $250. In other marketing cases, it is based on your marketing plan.

Goal CTR is what you want to accomplish with your ads. I usually say a CTR of 5% is a good target. Depending on bidding strategy and competition, however, it can range between 3% and 10%. Less than 3% is not desirable, as it indicates poor relevance between keywords and ads.

Goal CPC is what you want to pay for clicks. Ideally, you want the CTR to be as high as possible and the CPC as low as possible to maximize traffic (website visitors). The actual figure will depend on competition as well as your quality score (to which CTR contributes, among other relevance factors).

Quality score can be shown by customizing columns in the keyword view; bid estimates for your keywords can be retrieved via the Keyword Planner, as well as by looking at the first-page and top-of-page bid estimates in the keyword view. In Finland, I usually say €0.2 is a good target for average CPC. In other markets, CPCs tend to be higher.

Out of the previous figures, you can calculate other metrics:

  • clicks = budget / cpc
  • impressions = clicks / ctr

The calculation assumes full usage of budget, which is not always possible when organic search volumes limit the growth (this is just a general limitation of search advertising).
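
The whole estimation fits into a few lines of Python (a sketch of the formulas above, using the GOMC example numbers):

```python
def campaign_estimates(budget, goal_ctr, goal_cpc):
    """Derive click and impression estimates from budget, goal CTR,
    and goal CPC. Assumes the full budget is spent."""
    clicks = budget / goal_cpc        # clicks = budget / cpc
    impressions = clicks / goal_ctr   # impressions = clicks / ctr
    return {"clicks": round(clicks), "impressions": round(impressions)}

# GOMC example: $250 budget, 5% goal CTR, $0.2 goal CPC
print(campaign_estimates(250, 0.05, 0.2))
# {'clicks': 1250, 'impressions': 25000}
```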

Bugs and problems in Facebook Ads [UPDATED 10/08/2016]

Introduction

I’ve been doing a lot of Facebook advertising. Compared to Google AdWords, Facebook Ads is missing a lot of features and has annoying bugs. I’m listing these problems here, in case anyone working at Facebook would like an advertiser’s opinion, and so that people working with programmatic ad platforms can see how difficult it is to create – if not a perfect, then at least a satisfactory system.

A caveat: although I update the list from time to time, it may be that some bugs have already been corrected and some missing features added. The ones fixed are marked with strike-through.

Acknowledgments: A big thanks goes to Mr. Tommi Salenius, who is my right hand in digital marketing.

[UPDATED 10/08/2016]

  • add ‘like disavow tool’ (cf. Google’s link disavow)
  • ‘Facebook marketing partner’ –> expanding to smaller agencies (cf. Google Partners)
  • save target groups when making targeting in ad creation tool
  • add possibility to exclude saved audiences
  • ads receive an unequal number of impressions; if many ads in one ad set, most of them receive zero impressions
  • de-duping target group frequency across campaigns (overlapping audiences: avoid inflation of total frequency by de-duping)
  • distribute budget automatically between campaigns and ad sets
  • Split option in Power Editor does not split an existing audience, but actually creates a new (complementary) one
  • add possibility to exclude age groups (could be done with exclusion of saved audiences)
  • sorting columns does not work in Power Editor reports section
  • sorting based on conversions does not work properly in Ads Manager columns (it calculates some sort of average)
  • re-position image in Power Editor –> not possible to see preview
  • in web interface impossible to make advance connection with parameter OR – now it uses AND – for example, fans of my page AND friends of fans makes target group impossibly small
  • does not show total budget (or any totals) in campaign view (UPDATE: partly fixed for some metrics, but total budget still not visible)
  • impossible to target competitors’ fans (what are the barriers for making this happen?)
  • breakdowns not possible based on e.g. education level (more breakdown possibilities)
  • possibility to set budget at campaign level
  • no possibility to filter campaign (cf. adwords) –> trying to find a campaign quickly is a pain
  • utm tagging missing –> impossible to track from 3rd party analytics
  • shared budget feature is missing –> you should copy this feature from AdWords
  • when copying campaigns, impossible to change goal (really stupid, cannot test performance with different goals)
  • campaign reporting –> no trends, no graphs –> impossible to assess long-term development of campaigns (compared to AdWords)
  • campaign page –> no possibility to change metric for graph (much better in AdWords where two metrics can be freely chosen)
  • no frequency cap (again, possible in AdWords)
  • no ‘compare to previous time period’ option in reports (unlike AdWords)
  • no possibility to delete images in image gallery –> wtf, makes it very difficult to manage
  • too small image size in image gallery –> again, hard to manage images
  • not possible to copy numbers in power editor (!!!) –> sometimes, you’d want to copy numbers between campaigns or into excel
  • power editor loses text field content when changing ad (field)
  • power editor does not enable image variation
  • web version does not show all image variation ads in first pageload
  • unable to copy ad sets in web interface –> impossible to make quick new versions targeting e.g. newsfeed vs. right column
  • doesn’t show pause status in ads while in review
  • power editor does not copy ad statuses while duplicating ad sets
  • rotate evenly option missing –> compare to AdWords
  • cta not possible to be removed in powereditor once put into ads
  • unable to revert to suggested image in web interface after choosing image from gallery
  • facebook ads no sound in video preview
  • missing bid modifiers: e.g. for ad placement, e.g. -50 %, right column

Problems in Page Insights:

  • inability to answer standard questions such as: what are the all-time most liked posts? how many posts did we do last month?

Want to contribute? Send me bugs and/or missing features and I’ll list them here.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

Notes on Customer Development

I keep forgetting this stuff, so noting it down for myself (and others).

1. Don’t ask “would you” questions, ask “did you” questions. People are unable to predict their behavior.

2. Don’t ask about your product, ask about their problem. Wrong question: “We have this product A – would you use it?”. Right question: “Do you ever have this problem B?” [that you think the product A will solve]

3. Only at the very end should you introduce your solution. Then ask openly what he or she thinks about it: “What do you see as problematic about it?” Also ask if they know someone who would like this solution.

4. Listen, don’t pitch. Pitching is for other times – you DON’T need to sell your product to this person, you only need to hear about his or her life.

5. Repeat what he or she says – many times people think they understand what the other person is saying, but they don’t. Only by repeating in your own words and getting them to nod “That’s right” can you make sure you got it.

6. Make notes – obviously. You don’t want to forget, but without notes you will.

7. Conduct “many” interviews. Many = until you notice there are no more new insights. In research, this is called saturation. You want to reach saturation and make sure you’ve identified the major patterns.

8. Avoid loaded questions. False: “Is this design good?” Correct: “What do you think of this design?”

9. Avoid yes/no questions. What would you learn from them? Nothing.

10. Focus more on disproving your idea than on validating it. In philosophy of science, this is called falsificationism. It means no claim can be proved absolutely true, but every claim can be proved wrong. Rather than wanting to prove yourself right (at the risk of a false positive), you want to prove yourself wrong and avoid wasting time on a bad idea. Remember: most startup ideas suck (it’s true – I’ve seen hundreds, and most will never amount to a business – so be very, very critical of your idea).

As hinted above, customer development is like doing real research. You want to avoid false positives – i.e., getting the impression your idea is good although it sucks – and false negatives, i.e., concluding the idea is bad although in reality it’s not.

In general, you want to avoid respondent bias, recall bias, and confirmation bias. These are fancy names meaning that you want people to tell you honestly what they think, and you want to interpret it in an objective way, not being too fixed on your initial assumption (i.e., hypothesis). Be ready to change your opinion, like Gandhi advised.

A few notes on non-interview methods, i.e. testing via landing pages:

a. Force customers to pay from the beginning – this way you see if the thing has value to anyone.

b. Needless to say: MVP. First create the non-scalable, bare-minimum solution. This is not even a product; it’s a service. Use manual labor over technology and collect user information through free tools like Google Forms.

c. If you get a high dropout rate, you need to make sure people understand the USP. For this, you CAN ask your friends’ opinions: “Do you get it?” But prefer friends without prior knowledge of the project, because they have fresh eyes.

Before conducting any interviews or tests, do some market research based on facts. Yes, I know Steve Blank says to “go out of the building” straight away and forget about traditional market research, but he’s not a marketing expert. Think a bit before you fly out the door: Who are your customers? Why them? Do they have money? Do they want to buy from you? etc.

You can use this spreadsheet for segmentation (not my doing, just copied it from Sixteen Ventures):

https://docs.google.com/spreadsheet/ccc?key=0ArHFxUyqbcmHdHp5VEY2eXNLby0zaHFKSDhpc0xEdkE&usp=sharing

Example questions from Cindy Alvarez:

  • How is your customer currently dealing with this task/problem? (What solution/process are they using?)
  • What do they like about their current solution/process?
  • Is there some other solution/process you’ve tried in the past that was better or worse?
  • What do they wish they could do that currently isn’t possible or practical?
  • If they could do [answer to the above question], how would that make their lives better?
  • Who is involved with this solution/process? How long does it take?
  • What is their state of mind when doing this task? How busy/hurried/stressed/bored/frustrated? [note: learn this by watching their facial expressions and listening to their voice]
  • What are they doing immediately before and after their current solution/process?
  • How much time or money would they be willing to invest in a solution that made their lives easier?

More points from Cindy (she’s a real specialist):

  • Abstract your problem by a level. For example, if you want to know whether someone will use a healthy lunch delivery service, ask about “lunch”
  • Start with an open-ended “Tell me about how you…” question. i.e. “Tell me about how you deal with lunch during the workweek”
  • Shut up for 60 seconds. This is a LONG, LONG time and it feels awkward. It also forces the person to go beyond the short (and probably useless) answer and go into detail.
  • Whenever you hear emotion in the person’s voice, prolong that line of conversation.
  • (You can prolong conversations by asking why/how often/who/where questions. It may take 2 or 3 or more of these follow-up questions to get at the interesting detail.)
  • Avoid yes/no questions. Whichever one the person chooses, it’s probably not useful for you.
  • Whenever the person starts complaining, listen (and encourage it!). People are more specific with complaints than with praise, and specificity is where you learn.
  • Challenge your pre-existing hypotheses by referencing the mythical “other person”. For example, “I’ve heard from other people that ______. Do you agree?” It’s easier for people to disagree with an anonymous third party than to disagree with YOU.
  • Avoid talking about your product or your ideas until the end – but then DO give the person the opportunity to ask you some questions. This is NOT a chance for you to sell your idea, it’s just an equalizer. You’ve been asking questions the whole time, now it’s their turn.
  • Thank them profusely and reinforce one concrete point that you learned.
  • Alwaaaaaayyyyys ask for referrals to 2-3 other friends who are roughly in the target market so you can interview them.

Here are some useful links:

http://www.quora.com/Customer-Development/What-are-your-favorite-methods-for-doing-problem-interviews-during-Customer-Discovery

https://blog.kissmetrics.com/26-customer-development-resources/

http://sixteenventures.com/startup-customer-development-hacks

http://practicetrumpstheory.com/how-to-interview-your-users-and-get-useful-feedback/

http://giffconstable.com/2011/07/12-tips-for-customer-development-interviews-revised/
If you have to read one book about this topic, read this one: http://www.amazon.com/Interviewing-Users-Uncover-Compelling-Insights-ebook

If you want to read another book, then it’s this one: http://www.amazon.com/Lean-Customer-Development-Building-Customers-ebook

If you need to read a third book, then you should stop doing a startup and become a researcher 🙂

Crowdfunding pitch to media – an example

Here’s an example of how to do PR for a crowdfunding campaign. The pitch should be sent at least a couple of weeks prior to launch.

Hi [name],

this is [yourname] from [yourcompany].

We are preparing to release a new product on [yourplatform], and I wanted to give you a heads-up since you wrote about [a competitor] six months ago. Our product is similar, but better 😉

Here’s why it is better:

  • [reason 1]
  • [reason 2]
  • [reason 3]

Here’s a link to press material including pictures and more information: [link]

The campaign will be launched on [date], so I hope you’d publish an article about us around that time.

In the meantime, I’m of course available for any questions / comments!

Have a nice day,

[yourname] from [www.yourwebsite.com]

Tel. [telephone]

Skype: [Skype]

Email: [email]