How to use Facebook in marketing segmentation?

Introduction

This article discusses the potential of segmentation in Facebook advertising.

Why is segmentation needed?

Segmentation is one of the most fundamental concepts in marketing. Its goal is to identify the best match between the firm’s offering and the market, i.e. find a sub-set of customers who are most likely to buy the product and who therefore can be targeted cost-effectively by means of niche marketing rather than mass marketing.

There are some premises as to why segmentation works:

  • Not all buyers are alike.
  • Sub-groups of people with similar behavior, backgrounds, values and needs can be identified.
  • The sub-groups will be smaller and more homogeneous than the market as a whole.
  • It is easier to satisfy a small group of similar customers than to try to satisfy large groups of dissimilar customers.

(The list is a direct citation from Essentials of Marketing by Jim Blythe, p. 76.)

While segmentation is about dividing the overall market into smaller pieces (segments), targeting is about selecting the appropriate marketing channels to reach those customer segments. Finally, positioning deals with message formulation in an attempt to position the firm and its offerings relative to competitors (e.g., cheaper, better quality). This is the basic marketing model called STP (segmentation, targeting, positioning).

How to apply segmentation in Facebook?

I will next discuss three stages of Facebook campaign creation.

1. Before the campaign

There are a few options for creation of basic segments.

  • generate marketing personas (advantage: makes you assume customer perspective; weakness: vulnerable to marketer’s intuition, i.e. tendency to assume you know your customer whereas in reality you don’t)
  • conduct market research (advantage: suited to your particular case; weakness: costly and takes time)
  • buy consumer research reports (advantage: large sample sizes, comprehensive; weakness: the reports tend to be very general)
  • use Facebook Audience Insights (advantage: specific to Facebook; weakness: gives little behavioral data)

The existence of weaknesses is okay – the whole point of segmentation is to gather REAL data, which is stronger than a priori assumptions.

Based on the insights you’ve gathered, create Saved target groups in Facebook. These incorporate the segments you want to target. If you are using an ad management tool such as Smartly, you can split audiences into smaller micro-segments, e.g. by age, gender and location. Say you have a general segment of Women aged 25-50; you could split it into the following micro-segments using intervals of about five years:

  • women 25-30
  • women 31-36
  • women 37-42
  • women 43-48

The advantage of micro-segments is more granular targeting; the risk, however, is going too granular while ignoring the real-world reason for differences (sometimes the performance difference between two micro-segments is just statistical noise).
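To illustrate, here is a minimal sketch of how such a split could be generated programmatically (the function name and the six-year bucket width are my own illustration, chosen to match the list above):

```typescript
// Split a broad segment into fixed-width age buckets (micro-segments).
function microSegments(label: string, minAge: number, maxAge: number, width: number): string[] {
  const segments: string[] = [];
  for (let start = minAge; start <= maxAge; start += width) {
    const end = Math.min(start + width - 1, maxAge);
    segments.push(`${label} ${start}-${end}`);
  }
  return segments;
}

console.log(microSegments("women", 25, 50, 6));
// -> ["women 25-30", "women 31-36", "women 37-42", "women 43-48", "women 49-50"]
// (note: unlike the list above, this also covers the 49-50 remainder)
```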

After creating the segments in Facebook (reflected in Saved target groups), you want to test how they perform — that is, to see how well your assumptions about these segments hold up. For this, create campaigns and let them run. In Power Editor, go to the Custom audiences (select from the sliding menu), select the segments you want to test and choose to create new ad groups. (See, now we have moved from segmentation into targeting, which is the natural next step in the STP model.)

NB! If you particularly want to test customer segments, keep everything else (campaign settings, creatives) the same. In Power Editor, this is fairly simple to execute by copy-pasting the creatives between ad groups. This reduces the risk that the performance differences between segments result from some factor other than targeting. Finally, name the ad sets to reflect the segment you are testing (e.g. Women 25-30).

2. During the campaign

After a week or so, go back to check the results. Since you’ve named the segments appropriately, you can quickly see the performance differences between the segments. To make sure the differences are statistically valid (if you are not using a tool such as Smartly), use a calculator to determine the statistical significance. I created one which can be downloaded here.
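If you want to do the math yourself, here is a minimal sketch of a two-proportion z-test, a standard way to compare e.g. CTRs or conversion rates between two segments (the function names and example figures are my own):

```typescript
// Two-proportion z-test: is the difference between two segments' rates
// (e.g. CTR or conversion rate) statistically significant?
function zTestTwoProportions(
  successes1: number, trials1: number,
  successes2: number, trials2: number
): { z: number; pValue: number } {
  const p1 = successes1 / trials1;
  const p2 = successes2 / trials2;
  const pooled = (successes1 + successes2) / (trials1 + trials2);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / trials1 + 1 / trials2));
  const z = (p1 - p2) / se;
  const pValue = 2 * (1 - standardNormalCdf(Math.abs(z))); // two-tailed
  return { z, pValue };
}

// Abramowitz–Stegun approximation of the standard normal CDF.
function standardNormalCdf(x: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const tail =
    0.3989423 * Math.exp((-x * x) / 2) *
    t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return x >= 0 ? 1 - tail : tail;
}

// Example: segment A got 120 clicks from 10,000 impressions, segment B 170.
console.log(zTestTwoProportions(120, 10000, 170, 10000));
// z ≈ -2.96, p ≈ 0.003 – the difference is unlikely to be just noise
```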

When interpreting results, remember that the outcome is a combination of segment and message (and that the message is a combination of substance and tone, i.e. what is said and how it is said). In other words,

Result = segment x message, in which message = substance x tone, so that
Result = segment x (substance x tone)

Therefore, as you change the message, the change is reflected in performance across segments. This means you are not actually testing the suitability of your product to the segment (which is what segmentation and targeting are all about), but the match between the message and the target audience. Although this may seem like semantics, it’s actually pretty important. You want to make sure you’re not getting a misleading response from your segment due to issues in message formulation (i.e. talking to them in a “wrong way”), and so you want the message to reflect the product as well as possible. Ideally, you’d want to tailor your message based on your ideas of the segment, BUT this should be avoided in the early stage because we want to make sure the message formulation does not interfere with the testing of segment performance.

How to solve this problem, then? Three ways: first, make sure the segments you are testing are not too far apart – e.g. women aged 17 and men aged 45 subjected to the same message can create issues. Second, try to formulate a general message to begin with, so it doesn’t exclude any segments. Third, you could of course make slight modifications to the message while testing the segments — here I would still keep the substance (e.g. cheap price) stable across segments while perhaps changing the tone (e.g. the type of words used) depending on the audience – for example, older people are usually addressed in a different tone than a younger audience (yo!).

Finally, one extra tip! If you want more granular data on how different groups within your segment have performed, go to Ad reports and check out the data breakdowns. There is a wealth of information there which can be used in creating further micro-segments.

3. After the campaign

What to do when you know which segments are the most profitable? Well, take the results you’ve got and generalize them into your other marketing activities. For example, when you’re buying print ads ask for demographic data they have on readers — it has to be accurate and based on research, not guesses — and choose the media that matches the best performing segments according to your Facebook data. In my opinion, there is no major reason to assume that people in the same segment would act differently in Facebook and elsewhere (strictly speaking, the only potential issue I can think of is that Facebook-people are more “advanced” in their technology use than offline-people, but this is generally a small problem since such a large share of population in most markets are users of Facebook).

There you go – hopefully this article has given you some useful ideas on the relationship between segmentation and Facebook advertising!

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f

A.I. – the next industrial revolution?

Introduction

Many workers are concerned about “robotization” and “automatization” taking away their jobs. The media has also been writing actively about this topic lately, as can be seen in publications such as The New York Times and Forbes.

Although there is undoubtedly some dramatization in the scenarios created by the media, it is true that the trend of automatization took away manual jobs throughout the 20th century and has continued – perhaps even accelerated – in the 21st century.

Currently the jobs taken away by machines are manual labor, but what happens if machines take away knowledge labor as well? I think it’s important to consider this scenario, as most focus has been on the manual jobs, whereas the future disruption is more likely to take place in knowledge jobs.

This article discusses what’s next – in particular from the perspective of artificial intelligence (A.I.). I’ve been developing a theory about this topic for a while now. (It’s still unfinished, so I apologize for the fuzziness of thought…)

Theory on the development of job markets

My theory on the development of job markets relies on two key assumptions:

  1. with each development cycle, fewer people are needed
  2. and the more difficult it is for average people to add value

The idea here is that while it is relatively easy to replace a job taken away by simple machines (sewing machines still need people to operate them), it is much harder to replace jobs taken away by complex machines (such as an A.I.) providing higher productivity. Consequently, fewer people are needed to perform the same tasks.

By “development cycles”, I refer to the drastic shift in job market productivity, i.e.

craftsmanship –> industrial revolution –> information revolution –> A.I. revolution

Another assumption is that labor skills follow the Gaussian curve. This means most people are best suited for manual jobs, while the information economy requires skills that are at the upper end of that curve (the smartest and brightest).

In other words, the average worker will find it more and more difficult to add value in the job market, due to the sophistication of the systems (a lot more learning is needed to add value than in manual jobs, where training takes a couple of days). Even currently, the majority of global workers are best suited to manual labor rather than information-economy jobs, and so some economies are at a major disadvantage (consider Greece vs. Germany).

Consistent with the previous definition, we can see the job market as including two types of workers:

  • workers who create
  • workers who operate

The former create the systems as their job, whereas the latter operate them as their job. For example, in the sphere of online advertising, Google’s engineers create the AdWords search-engine advertising platform, which is then used by online marketers doing campaigns for their clients. In the current information economy, the best situation is for workers who are able to create systems – i.e. their value-added is the greatest. With an A.I., however, both jobs can be overtaken by machine intelligence. This is the major threat to knowledge workers.

The replacement takes place due to what I call the errare humanum est effect (the disadvantage of humans vis-à-vis machines), according to which a machine is always superior at job tasks compared to a human, who is an erratic being controlled by biological constraints (e.g., need for food and sleep). Consequently, even the brightest humans will still lose to an A.I.

Examples

Consider these examples:

  • Facebook has one programmer per 1.2 million users [1] and one employee per 249,000 users [2]
  • Rovio has one employee per 507,000 gamers [3]
  • Pinterest has one employee per 400,000 users [2]
  • Supercell has one employee per 193,000 gamers [4]
  • Twitter has one employee per 79,000 users [5]
  • Linkedin has one employee per 47,000 users [6]

(Some of these figures are a bit outdated, but in general they serve to support my argument.)

Therefore, the ratio of workers to customers is much lower than in previous transitions. To build a car for one customer, you need tens of manufacturing workers. To serve customers in a supermarket, the ratio needs to be something like 1:20 (otherwise queues become too long). But when the ratio is 1:1,000,000, not many people are needed to provide a service for the whole market.

As can be seen, the mobile application industry, which has been touted as a source of new employment, does indeed create new jobs [7], but it doesn’t create them for the masses. This is because not many people are needed to succeed in this business environment.

Further disintermediation takes place when platforms talk to each other, forming super-ecosystems. Currently, this takes place through an API logic (application programming interface), which is a “dumb” logic doing only prescribed tasks, but an A.I. would dramatically change the landscape by introducing creative logic in API-based applications.

Which jobs will an A.I. disrupt?

Many professional services are on the line. Here are some I can think of.

1. Marketing managers 

An A.I. can allocate budget and optimize campaigns far more efficiently than error-prone humans. The step from Google AdWords and Facebook Ads to automated marketing solutions is not that big – at the moment, the major advantage of humans is creativity, but the definition of an A.I. in this article assumes creative functions.

2. Lawyers 

An A.I. can recall all laws, find precedent cases instantly and give correct judgments. I recently had a discussion with one of my developer friends – he was particularly interested in applying A.I. to the legal system – currently it’s too big for a human to comprehend, as there are thousands of laws, some of which contradict one another. An A.I. can quickly find contradicting laws and give all alternative interpretations. The current human advantage is a sense of morality (right and wrong), which can be hard to replicate with an A.I.

3. Doctors 

An A.I. makes faster and more accurate diagnoses; a robot performs surgical operations without flaw. I would say many standard diagnoses by human doctors could be replaced by an A.I. measuring the symptoms. There have been several cases of incorrect diagnoses due to haste and human error – as noted previously, an A.I. is immune to these limitations. The major human advantage is sympathy, although some doctors are missing even this.

4. Software developers

Even developers face extinction; upon learning the syntax, an A.I. will improve itself better than humans do. This would lead to an exponentially accelerating increase of intellect, something commonly depicted in A.I. development scenarios.

Basically, all knowledge professions accessible to an A.I. will be disrupted.

Which jobs will remain?

Actually, the only jobs left would be manual jobs – unless robots take them as well (there are some economic considerations against this scenario). I’m talking about low-level manual jobs – transportation, cleaning, maintenance, construction, etc. These require more physical material – due to the aforementioned supply-and-demand dynamics, it may be that people are cheaper to “build” than robots, and can therefore still hold simple jobs.

At the other extreme, there are experience services offered by people to other people – massage, entertainment. These can remain based on the previous logic.

How can workers prepare?

I can think of a couple of ways.

First, learn coding – i.e. talking to machines. People who understand their logic are in a position to add value — they have access to the society of the future, whereas those who are unable to use systems are at a disadvantage.

The best strategy for a worker in this environment is continuous learning and re-education. From the schooling system, this requires a complete shift in thinking – currently most universities are far behind in teaching practical skills. I notice this every day in my job as a university teacher – higher education must catch up, or it will completely lose its value.

Currently higher education is shielded by governments through official diplomas appreciated by recruiters, but true skills trump such an advantage in the long run. Already at this moment I’m advising my students to learn from MOOCs (massive open online courses) rather than relying on the education we give in my institution.

What are the implications for the society?

At a global scale, societies are currently facing two contrasting mega-trends:

  • the increase of productivity through automatization (= lower demand for labor)
  • the increase of population (= higher supply of labor) (everyone has seen the graph showing population growth starting from the 19th century [8])

It is not hard to see these are contrasting: fewer people are needed for the same output, whereas more people are born and thus need jobs. The increase of people is exponential, while the increase in productivity comes, according to my theory, in large shifts. A large shift is bad because before it takes place, everything seems normal. (It’s like a tsunami approaching – there is no way to know before it hits you.)

What are the scenarios to solve the mega-trend contradiction?

I can think of a couple of ways:

  1. Marxist approach – redistribution of wealth and re-discovery of “job”
  2. WYSIWYG approach – making the systems as easy as possible

By adopting a Marxist approach, we can see there are two groups who are best off in this new world order:

  • The owners of the best A.I. (system capital)
  • The people with capacity to use and develop A.I. further (knowledge capital)

Others, as argued previously, are at a disadvantage. The phenomenon is much like the concept of “digital divide”, which can refer to 1) the difference in access to technologies between citizens of developed and developing countries, or 2) the ability of the elderly vs. the young to use modern technology (the elderly having, for example, worse opportunities in high-tech job markets).

There are some relaxations to the arguments I’ve made. First, we need to consider that the increase in free time, as well as general population growth, creates demand for services relating to experiences and entertainment per se; yet, redistribution of wealth needs to be considered, as people who are unable to work need to consume in order to provide work for others (in other words, the service economy needs special support and encouragement from the government vis-à-vis machine labor).

While it is a precious goal that everyone contribute to society through work, the future may require a re-check of this Protestant work ethic if the supply of work indeed decreases drastically. The major reason, in my opinion, behind the failure of policies reducing work hours, such as the 35-hour work week in France, is that other countries besides these pioneers are not adopting them, and so the non-adopters gain a comparative advantage in the global market. We are not yet at the stage where the supply of labor is dramatically reduced on a global scale, but according to my theory we are getting there.

Second, a major relaxation, indeed, is that systems can be made usable by people who lack an understanding of their technical workings. This method is already widely applied – very few understand the operating principles of the Internet, and yet they can use it without difficulties. Even more complex professional systems, like Google AdWords, can be used without a detailed understanding of Google’s algorithm or Vickrey second-price sealed-bid auctions.

So, dumbing things down is one way to go. The problem with this approach in the A.I. context is that when the system is smart enough to use itself, there is no need to dumb down – i.e., having humans use it would be a non-optimal use of resources. Already we can see this in some bidding algorithms in online advertising – the system optimizes better than people. At the moment we online marketers can add value through copywriting and other creative ways, but the upcoming A.I. would take away this advantage from us.

Recommendations

It is the natural state of job markets that most workers are skilled only for manual labor or very simple machine work; if these jobs are lost, a new way of organizing society is needed. Rather than fighting the change, societies should approach it objectively (which is probably one of the hardest things for human psychology).

My recommendations for the policy makers are as follows:

  • decrease the cost of human labor (e.g., in Finland sometime in the 70s services were exempted from taxes – this kind of scenario would help)
  • reduce employment costs – the situation is in fact perverse, as companies are penalized through side costs if they recruit workers. In a society where demand for labor is scarce, the reverse needs to take place: companies that recruit need to be rewarded.
  • retain/introduce monetary transfers à la welfare societies – because there is not enough labor for everyone, the state needs to pass money from capital holders to the underprivileged. The Nordic states are closer to a working model than more capitalistic states such as the United States.
  • push education system changes – because the skills required in the job market are more advanced and more in flux than before, the curriculum substance needs to change faster than it currently does. Unnecessary learning should be eliminated in favor of the key skills currently needed in the job market, along with further education paths for lifelong learning.

Because the problem of shrinking job demand is not yet acute, these changes are unlikely to take place until there is no other choice (which is, by the way, the case for most political decision-making).

Open questions

Up to which point can human labor be replaced? I call it the point of zero human: the point when no humans are needed to produce an equal or larger output than what was produced at an earlier point in time. The fortune of humans is that we are producing more all the time – if the production level were at the stage of the 18th century, we would already be at the point of zero human. Therefore, job markets are not developing in a predictable way towards the point of zero human, but it may nevertheless be a stochastic outcome of the current rate of technological development. Ultimately, time will tell. We are living in exciting times.

References:

[1]: https://www.facebook.com/notes/facebook-engineering/facebook-engineering-bootcamp/177577963919

[2]: http://royal.pingdom.com/2013/02/26/pinterest-users-per-employee/

[3]: http://www.rovio.com/en/news/press-releases/284/rovio-entertainment-reports-2012-financial-results

[4]: http://www.gamesindustry.biz/articles/2014-02-11-clash-of-clans-daily-revenue-at-5.15-million-hacker

[5]: http://www.statista.com/statistics/272140/employees-of-twitter/

[6]: https://press.linkedin.com/about-linkedin

[7]: http://www.visionmobile.com/product/uk-app-economy-2014/

[8]: http://www.susps.org/images/worldpopgr.gif

The Digital Marketing Brief – four things to ask your client

Recently I had an email correspondence with one of my brightest digital marketing students. He asked for advice on creating an AdWords campaign plan.

I told him the plan should include certain elements, and only them (it’s easy to make a long and useless plan, and difficult to make a short and useful one).

Anyway, in the process I also told him how to make sure he gets the necessary information from the client. I’d like to share these four things with everyone looking for a crystal-clear marketing brief.

They are:

1. campaign goal
2. target group
3. budget
4. duration

First, you want to know the client’s goal. In general, it can be direct response (sales) or indirect response (awareness). This affects two things:

  • metrics you include as your KPIs — in other words, will you optimize for impressions, clicks, or conversions.
  • channels you include — if the client wants direct response, search-engine advertising is usually more effective than social media (and vice versa).

The channel selection is the first thing to include into your campaign plan.

Second, you want the client’s understanding of the target group. This affects targeting – in search-engine advertising it’s the keywords you choose; in social media advertising it’s the demographic targeting; in display it’s the managed placements.

Based on this information, you want to make a list (of keywords / placements / demographic types). These targeting elements are the second thing to include into your campaign plan.

Third, the budget matters a great deal. It affects two things:

  • how many channels to choose
  • how to set daily budgets

The bigger the budget is, the more channels can be included in the campaign plan. It’s not always linear, however; e.g. when search volumes are high and the goal is direct response, it makes most sense to spend all on search. But generally, it’s possible to target several stages in customers’ purchase funnel (i.e., stages they go through prior to conversion).

Hence, the budget spend is the third thing to include into your campaign plan.

You calculate the daily budget by dividing the total budget by the number of channels and the duration (in days) of the campaign. At this point, you can allocate the budget in different ways, e.g. search = 2 × social. It’s important to notice that in social and display you can usually spend as much money as you want, because the available ad inventory is in effect unlimited. But in search, the spend is curbed by natural search volumes.
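As a minimal sketch of this calculation (the function name and the channel weights are illustrative):

```typescript
// Daily budget per channel: total budget spread over the campaign duration,
// split between channels by weight (e.g. search = 2 × social).
function dailyBudgets(
  totalBudget: number,
  durationDays: number,
  channelWeights: Record<string, number>
): Record<string, number> {
  const weightSum = Object.values(channelWeights).reduce((a, b) => a + b, 0);
  const perDay = totalBudget / durationDays;
  const budgets: Record<string, number> = {};
  for (const [channel, weight] of Object.entries(channelWeights)) {
    budgets[channel] = (perDay * weight) / weightSum;
  }
  return budgets;
}

console.log(dailyBudgets(3000, 30, { search: 2, social: 1 }));
// -> { search: 66.67, social: 33.33 } dollars per day (rounded)
```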


The Bounce Problem: How to Track Bounce in Simple Landing Pages

Introduction

This post applies to cases satisfying two conditions.

First, you have a simple landing page designed for immediate action (=no further clicks). This can be the case for many marketing campaigns for which we design a landing page without navigation and a very simple goal, such as learning about a product or watching a video.

Second, you have a high bounce rate, indicating a bad user experience. Bounce rate is calculated as follows:

bounce rate = visitors who leave without clicking further / all visitors

Why does high bounce indicate bad user experience?

It’s a proxy for it. A high bounce rate simply means a lot of people leave the website without clicking further. This usually indicates bad relevance: the user was expecting something else, didn’t find it, and so left the site immediately.

For search engines a high bounce rate indicates bad landing page relevance vis-à-vis a given search query (keyword), as the user immediately returns to the SERP (search-engine result page). Search engines, such as Google, would like to offer the right solution for a given search query as fast as possible to please their users, and therefore a poor landing page experience may lead to lower ranking for a given website in Google.

The bounce problem

I’ll give a simple example. Say you have a landing page with only one call-to-action, such as viewing a video. You then have a marketing campaign resulting in ten visitors. After viewing the video, all ten users leave the site.

Now, Google Analytics would record this as a 100% bounce rate; everyone left without clicking further. Moreover, the duration of the visits would be recorded as 0:00, since duration is only stored after a user clicks further (which didn’t happen in this case).

So, what should we conclude as site owners when looking at our statistics? A 100% bounce rate means either that a) our site sucks or b) the channel we acquired the visitors from sucks. But in the previous case, that’s an incorrect conclusion; all of the users watched the video, and so the landing page (and the marketing campaign associated with it) was in fact a great success!

How to solve the bounce problem

I will show four solutions to improve your measurement of user experience through bounce rate.

First, simply create an event that pings your analytics software (most typically Google Analytics) when a user performs a desired on-page action (e.g. viewing a video). This removes users who completed a desired action but still left without clicking further from the bounce rate calculation.

Here are Google’s instructions for event tracking.
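As a sketch of the idea – assuming the classic analytics.js ga() global and an illustrative #product-video element – the event could be fired like this:

```typescript
// The ga() global is provided by the standard Google Analytics (analytics.js) snippet.
declare function ga(...args: unknown[]): void;

// Fire an event when the visitor plays the video. Because this is an
// interaction hit (nonInteraction defaults to false), the session no
// longer counts as a bounce even if the user never clicks further.
document.querySelector("#product-video")?.addEventListener("play", () => {
  ga("send", "event", "Videos", "play", "Product video");
});
```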

Second, ping GA based on visit duration, e.g. create an event for spending one minute on the page. This will in effect lower your reported bounce rate by the share of users who stay at least a minute on the landing page.
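A sketch of the same idea with a timer (again assuming the analytics.js ga() global):

```typescript
declare function ga(...args: unknown[]): void;

// Treat anyone who stays at least 60 seconds as engaged rather than bounced.
setTimeout(() => {
  ga("send", "event", "Engagement", "time-on-page", "60 seconds");
}, 60000);
```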

Third, create a form. Filling in the form directs the user to another page, which then triggers an event for analytics. In most cases, this is also compatible with our condition of a simple landing page with one CTA (well, if you have a video and a form, that’s two actions for a user, but in most cases I’d say it’s not too much).

Finally, there is a really cool Analytics plugin by Rob Flaherty called Scrolldepth (thanks Tatu Patronen for the tip!). It pings Google Analytics as users scroll down the page, e.g. at 25%, 75% and 100% of page depth. In addition to solving the bounce problem, it also gives you more data on user behavior.

Limitations

Note that adding event tracking to reduce bounce rate only reduces it in your analytics. Search engines still see the bounce as a direct exit, and may include that in their evaluation of landing page experience. Moreover, individual solutions have limitations – creating a form is not always natural given the business, or it may require an additional incentive for the user; and Scrolldepth is most useful on lengthy landing pages, which is not always the case.


Assessing the scalability of AdWords campaigns

Introduction

Startups – and bigger companies, too – often test marketing channels by allocating a small budget to each channel, and then analyzing the results (e.g. CPA, cost per action) per channel.

This is done to determine a) business potential and b) channel potential. The former refers to how lucrative it is to acquire customers given their lifetime value, and the latter to how well each channel performs.

Problem

However, there is one major issue: scaling. That is, when we pour x dollars into a marketing channel in the test phase and get a CPA of y dollars, will the CPA remain the same when we increase the budget to x+z dollars (say, a hundred times more)?

This issue can be tackled by acquiring enough data for statistical significance. This gives us confidence that the results will be similar once the budget is increased.

In AdWords, however, the scaling problem takes another form: the natural limitation of search volumes. By this I mean that at any given time, only a select number of customers are looking for a specific topic. Contrary to Facebook, which has a de facto unlimited ad inventory (billions of ad impressions), Google has only a limited (although very large) ad inventory.

Solution

Here’s how to assess the scalability of AdWords campaigns:

1. Go to campaign view
2. Enable column called “Search impression share” (Modify columns –> Competitive metrics)

This will tell you how many searchers saw your ad out of all who could have seen it (this is influenced by your daily budget and bid).

In general, you want impression share to be as high as possible, given that the campaign ROI is positive. So, in general >80% is good, <10% is bad. (The exception is when running a long-tail strategy aiming for low-cost clicks, in which case <10% is okay.)

3. Calculate the scalability as follows:

scalability = clicks / impression share

For example, if you have an impression share of 40% with which you’ve accumulated 500 clicks, then by increasing your budget and bids so that you capture 100% impression share, you will accumulate 1250 clicks (= 500 / 0.40), which is the full potential of this campaign.
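The formula as a minimal sketch (the function name is my own):

```typescript
// Full click potential: observed clicks scaled up to 100% impression share.
// Assumes CTR stays constant as impression share grows (see Limitations below).
function clickPotential(clicks: number, impressionShare: number): number {
  return clicks / impressionShare;
}

console.log(clickPotential(500, 0.40)); // -> 1250 clicks
```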

Limitations

Note that the formula assumes the CTR remains constant. Additionally, increasing bids may increase your CPA, so improving quality score through better ads and relevance is important to offset this effect.

The ROI of Academic Publishing

Problem of ROI in publishing

The return on investment (ROI) of academic publishing is absolutely terrible.

Think of it – thousands of hours spent correcting formatting, spelling, rephrasing, and so on. All this after the actual insight of the research has been accomplished. In all seriousness, 10% of time spent doing research and 90% writing and rewriting cannot be thought of as anything but waste.

Why should we care?

The inefficiency of the current way of doing it – as in combining doing research and writing about it under the same name of “doing research” – is a horrible waste of intelligence and human resources. It inflates the cost of doing research, and also makes scientific progress slower than if 90% were spent on research and 10% on writing.

Root cause

Some might say it’s a perverse outcome of letting staff go – nowadays even professors have to do everything by themselves because there are so few assistants and administrators. Why is this perverse? Because at the same time more people need work. It’s also perverse, or paradoxical, because letting the help go is done to increase efficiency, but in the end it actually decreases efficiency, as the research staff shift their use of time from doing research to fixing spelling errors. There is a large misunderstanding that letting people go leads to better efficiency – it may save costs, but exactly at the cost of efficiency.

My experiences

The thought for this article came to mind when my colleague and I received, yet again, some minor edit requests for an article to be published in a book – the book material was already finished last year, but all these people are working to fix small details that add zero substantive value. What a waste!

And I’m not alone in this situation; most if not all academics face the same problem.

Solution

Two solutions readily come to mind:

  • report the data and that’s it
  • use editors to fix all minor errors instead of forcing the high-thinkers to waste their time on it

The latter is much better, as the first option misses the importance of interpreting the results and theorizing from them (the whole point of doing research).

What is ROI of research?

Efficiency, such as the ROI of research, should be defined as learning more about the world. This will never be accomplished by writing reports, but by going out into the world. At the same time, I don’t mean to undermine basic research – the ROI of research is not the same as its immediate usefulness, let alone its immediate economic potential. ROI in my argument simply refers to the ratio of doing research vs. writing about it, not the actual quality of the outcome.

The author works as a university teacher at the Turku School of Economics.

Startup syndromes: “The Iznogoud Syndrome”

1. Definition

The Iznogoud Syndrome can be defined as follows:

A startup strives to disrupt existing market structures instead of adapting to them.

In most industries, existing relationships are strong, cemented and will not change due to one startup. Therefore, a better strategy is to find ways of providing utility in the existing ecosystem.

2. Origins

The name of this startup syndrome is based on the French comic character who wants to “become Caliph instead of the Caliph” and continuously fails in that (over-ambitious) attempt. In much the same way, many startups are over-ambitious in their attempts to succeed. In my experience, they have an idealistic worldview while lacking a realistic perspective on the business landscape. While this works for some outliers – for example Steve Jobs – better results can, on average, be achieved with a realistic worldview. The world is driven by probabilities, and hence it’s better to target averages than outliers.

3. Examples

I see them all the time. Most startups I advise in startup courses and events aim at disintermediation: they want to remove vendors from the market and replace them. For example, a startup wanted to remove recruiting agencies by making their own recruiting platform. Since recruiting agencies already have the customer relationships, it’s an unrealistic scenario. What upset me was that the team didn’t even consider providing value to the recruiting agencies, but intuitively saw them as junk to be replaced.

Another example: there is a local dominant service providing information on dance events, which holds something like 90% of the market (everyone uses it). Yet, it has major usability issues. Instead of partnering with the current market leader to fix its problems, the startup wants to create a competing platform from scratch and then “steal” all the users. That’s an unrealistic scenario. All around, there is too much emphasis put on disintermediation and on seeing current market operators as either waste or competitors, as opposed to potential partners in user acquisition, distribution or whatever.

Startups should realize they are not alone in the market; the market has been there for a hundred years. They cannot just show up and say “hey, I’m going to change how you’ve done business for 100 years.” Or they can, but they will most likely fail. This is all well for the industry, in which it doesn’t matter if 9 out of 10 fail as long as the one winner brings the profits, but an individual startup should try to make its odds of success (even an average one’s) greater. So you see, what is good for the startup industry in general is not the same as what is good for your startup in particular.

4. Similarity to other startup syndromes

The Iznogoud syndrome is similar to the “Market education syndrome”, according to which an innovation created by a startup falls short in consumer adoption regardless of its technical quality – many VCs avoid products requiring considerable market education costs. Whereas the Market education syndrome can be seen as a particular issue in B2C markets, the Iznogoud syndrome is more acute in B2B markets.

5. Recommendations

Simply put, startups should learn more about their customers or clients. They need to understand their business logic (B2B) or daily routines (B2C) and how value can be provided there. In B2B markets, there are generally two ways to provide value for clients:

  • help them sell more
  • help them cut costs

If you do so, potential clients are more likely to listen. As stated previously, this is a more realistic scenario for doing business than thinking of ways to replace them.


A simple formula for assessing the feasibility of AdWords cases

Update [24th March, 2017]: In addition to the formula explained in the post, I would add the following general criteria for a good AdWords case: 1) low-to-medium competition (high CPCs force you to look for alternative channels), and 2) good website/landing pages (i.e., they load fast, are easy to navigate, and have text information relevant to the keywords).

Introduction

Google AdWords is a form of on-demand marketing which matches demand (keywords) with supply (ads). Because it provides good relevance between demand and supply, it efficiently fulfills the core purpose of marketing which is, again, to match supply and demand. However, while this property of AdWords makes it generally much more effective than other forms of online marketing, it also leads to a major limitation: the campaigns cannot scale beyond natural search volumes.

I often tell this to my students participating in the Google Online Marketing Challenge (GOMC), but a few of them always fall into the “trap of low search volume”. I will explain this in the following.

Selection criteria

First, the relevant dimensions for assessing the potential in AdWords are:

  • geographic range: based on the company’s offerings
  • product range

These can vary from low to high so that

Low geographic range x Low product range = Trap of low search volume

Low geographic range x High product range = Potential risk of low search volume

High geographic range x Low product range = Potential risk of low search volume

High geographic range x High product range = High search volume (Best case for AdWords)
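This decision rule can be written down compactly; a minimal sketch (names my own):

```typescript
// The four combinations above as a compact decision rule.
type Level = "low" | "high";

function searchVolumeOutlook(geoRange: Level, productRange: Level): string {
  if (geoRange === "high" && productRange === "high") {
    return "High search volume (best case for AdWords)";
  }
  if (geoRange === "low" && productRange === "low") {
    return "Trap of low search volume";
  }
  return "Potential risk of low search volume";
}

console.log(searchVolumeOutlook("low", "low")); // -> "Trap of low search volume"
```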

In other words, this formula favors companies with nationwide distribution and a large product range. These campaigns tend to scale the best and offer the best ratio between the cost and value of optimization. In contrast, local businesses with one or two products or services are the least feasible candidates.

What does the trap of limited search volume mean?

Well, first of all it means the spend will be low. In GOMC, this means some teams struggle to spend the required $250 during the three-week campaign window.

Second, and more importantly, it means these cases are less interesting for marketers. They offer little room for optimization (because spend is low and there is very little data to work with).

Also for this reason, the management cost of running these campaigns (= the amount a marketer can charge for his/her services) can become unbalanced: for example, if the yearly spend of a low-volume campaign is, say, $400 and the marketer charges $100 per hour for his/her work, there is no point for the client to pay for many working hours, as their cost quickly exceeds the media budget.

Conclusion

As a marketer, you always want to select the best case to amplify with your skills. You can think of it through two dimensions:

  • marketing
  • product

By multiplying them, we get the following.

Bad marketing x Bad product = Bad results

Bad marketing x Good product = Okay results

Good marketing x Bad product = Bad results

Good marketing x Good product = Good results

The same in numbers:

0 x 0 = 0

0 x 1 = 0

1 x 0 = 0

1 x 1 = 1

In other words, it makes sense to choose a case which is good for you as a marketer. A good case will work decently with bad marketing, but not vice versa. And only coupled with good marketing will the maximum potential of a good product be achieved.

Author:

Joni Salminen
Ph.D., marketing

How to calculate metrics for an AdWords campaign plan

I teach this very simple formula to my students when they are required to write a pre-campaign report for the Google Online Marketing Challenge (GOMC).

You want to report metrics in a table like this:

budget    ctr     cpc    clicks    impressions
250       0.05    0.2    1250      25000

(The numbers are examples.)

To calculate estimates for a campaign plan, you only need to know three figures:

  • budget
  • goal CTR
  • goal CPC

In the case of GOMC, the budget is set to $250. In other marketing cases, it is based on your marketing plan.

Goal CTR is what you want to accomplish with your ads. I usually say a CTR of 5% is a good target. Based on bidding strategy and competition, however, it can range between 3 and 10%. Less than 3% is not desirable, as it indicates poor relevance between keywords and ads.

Goal CPC is what you want to pay for clicks. Ideally, you want the CTR to be as high as possible and the CPC as low as possible to maximize traffic (website visitors). The actual figure will be based on competition as well as your quality score (to which CTR contributes, among other factors of relevance).

Quality score can be enabled by customizing columns in the keyword view; bid estimates for your keywords can be retrieved via the Keyword Planner, as well as by looking at bid estimates (first-page and top-of-page) in the keyword view. In Finland, I usually say €0.20 is a good target for average CPC. In other markets, the CPC tends to be higher.

Out of the previous figures, you can calculate other metrics:

  • clicks = budget / cpc
  • impressions = clicks / ctr

The calculation assumes full usage of budget, which is not always possible when organic search volumes limit the growth (this is just a general limitation of search advertising).
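Putting the three inputs and the two formulas together, a minimal sketch (names illustrative):

```typescript
// Estimate campaign plan metrics from budget, goal CTR and goal CPC.
function planMetrics(budget: number, goalCtr: number, goalCpc: number) {
  const clicks = budget / goalCpc;       // clicks the budget can buy
  const impressions = clicks / goalCtr;  // impressions needed to get those clicks
  return { budget, ctr: goalCtr, cpc: goalCpc, clicks, impressions };
}

console.log(planMetrics(250, 0.05, 0.2));
// -> { budget: 250, ctr: 0.05, cpc: 0.2, clicks: 1250, impressions: 25000 }
```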

Bugs and problems in Facebook Ads [UPDATED 10/08/2016]

Introduction

I’ve been doing a lot of Facebook advertising. Compared to Google AdWords, Facebook Ads is missing a lot of features and has annoying bugs. I’m listing these problems here, in case anyone working at Facebook would like to have an advertiser’s opinion, and so that people working with programmatic ad platforms can see how difficult it is to create, if not a perfect, then at least a satisfactory system.

A caveat: although I’m updating the list from time to time, it might be that some bugs have already been corrected and the missing features added. The ones fixed have been marked with strike-through.

Acknowledgments: A big thanks goes to Mr. Tommi Salenius, who is my right hand in digital marketing.

[UPDATED 10/08/2016]

  • add ‘like disavow tool’ (cf. Google’s link disavow)
  • ‘Facebook marketing partner’ –> expanding to smaller agencies (cf. Google Partners)
  • save target groups when making targeting in ad creation tool
  • add possibility to exclude saved audiences
  • ads receive an unequal number of impressions; if many ads in one ad set, most of them receive zero impressions
  • de-duping target group frequency across campaigns (overlapping audiences: avoid inflation of total frequency by de-duping)
  • distribute budget automatically between campaigns and ad sets
  • Split option in Power Editor does not split an existing audience, but actually creates a new (complementing) one
  • add possibility to exclude age groups (could be done with exclusion of saved audiences)
  • sorting columns does not work in Power Editor reports section
  • sorting based on conversions does not work properly in Ads Manager columns (it calculates some sort of average)
  • re-position image in Power Editor –> not possible to see preview
  • in web interface impossible to make advanced connections with parameter OR – now it uses AND – for example, fans of my page AND friends of fans makes the target group impossibly small
  • does not show total budget (or any totals) in campaign view (UPDATE: partly fixed for some metrics, but total budget still not visible)
  • impossible to target competitors’ fans (what are the barriers for making this happen?)
  • breakdowns not possible based on e.g. education level (more breakdown possibilities)
  • possibility to set budget at campaign level
  • no possibility to filter campaign (cf. adwords) –> trying to find a campaign quickly is a pain
  • utm tagging missing –> impossible to track from 3rd party analytics
  • shared budget feature is missing –> you should copy this feature from AdWords
  • when copying campaigns, impossible to change goal (really stupid, cannot test performance with different goals)
  • campaign reporting –> no trends, no graphs –> impossible to assess long-term development of campaigns (compared to AdWords)
  • campaign page –> no possibility to change metric for graph (much better in AdWords where two metrics can be freely chosen)
  • no frequency cap (again, possible in AdWords)
  • no ‘compare to previous time period’ option in reports (unlike AdWords)
  • no possibility to delete images in image gallery –> wtf, makes it very difficult to manage
  • too small image size in image gallery –> again, hard to manage images
  • not possible to copy numbers in power editor (!!!) –> sometimes, you’d want to copy numbers between campaigns or into excel
  • power editor loses text field content when changing ad (field)
  • power editor does not enable image variation
  • web version does not show all image variation ads in first pageload
  • unable to copy ad sets in web interface –> impossible to make quick new versions targeting e.g. newsfeed vs. right column
  • doesn’t show pause status in ads while in review
  • power editor does not copy ad statuses while duplicating ad sets
  • rotate evenly option missing –> compare to AdWords
  • cta not possible to be removed in powereditor once put into ads
  • unable to revert to suggested image in web interface after choosing image from gallery
  • facebook ads no sound in video preview
  • missing bid modifiers: e.g. for ad placement, e.g. -50 %, right column

Problems in Page Insights:

  • inability to answer standard questions such as: what are the all-time most liked posts? how many posts did we do last month?

Want to contribute? Send me bugs and/or missing features and I’ll list them here.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]