March 29, 2017
Here’s a list of analytics problems I devised for a digital analytics course I was teaching (Web & Mobile Analytics, Information Technology Program) at Aalto University in Helsinki. Some solutions to them are also considered.
Want to add something to this list? Please write in the comments!
[edit: I’m compiling a larger list of analytics problems. Will update this post once it’s ready.]
I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f
March 29, 2017
This is a work in progress – I’ll keep updating this list as new moments of “eureka” hit me.
Want to add something? Please post it in the comment section!
March 29, 2017
In this article, I discuss how the classic VRIN model can be used to evaluate modern web platforms.
It’s one of the most cited models of the resource-based view of the firm. Essentially, it describes how a firm can achieve sustainable competitive advantage through resources that fulfill certain criteria.
These criteria for resources that provide a sustainable competitive advantage are that the resource must be valuable, rare, inimitable, and non-substitutable (hence the acronym VRIN).
By gaining access to these types of resources, a firm can create a lasting competitive advantage. Note that this framework takes one perspective on strategy, i.e. the resource-based view. Alternatives include e.g. Porter’s five forces and power-based frameworks, among many others.
The “resource” in resource-based view can be defined as some form of input which can be transformed into tangible or intangible output that provides utility or value in the market. In a competitive setting, a firm competes with its resources against other players; what resources it has and how it uses them are key variables in determining the competitive outcome, i.e. success or failure in the market.
In each business environment, there are certain resources that are particularly important. An orange juice factory, for example, requires different resources to be successful than a consulting business (the former needs a good supply of oranges, and the latter bright consultants; both rely on good customer relationships, though).
I first give a general overview of the VRIN dimensions in the online context. This is done by comparing the online environment with the offline environment.
The term ‘value’ is tricky because of its definition: if we define it as something useful, we easily end up in a tautology (circular argument): a resource is valuable because it is useful for some party.
The specific resources for online platforms are discussed later on.
One of the key preoccupations in economic theory is scarcity: raw materials are scarce and firms need to compete over their exploitation.
Offline industries are characterized by rivalry – once oil is consumed, it cannot be reused. Knowledge products on the web, on the other hand, are described as non-rival products: if one consumer downloads an MP3 song, that does not remove the ability of another consumer to download it as well (but if a consumer buys a Snickers bar, there is one less for others to buy). Scarcity is usually associated with startups in the sense that they are forced to innovate due to the liability of smallness.
This deals with how easily the business idea can be copied.
In “traditional” industries, such as manufacturing, patents and copyrights (IPR) are important. They protect firms against infringement and plagiarism. Without them, every innovation could be easily copied, which would quickly erode any competitive advantage. Intellectual property rights therefore enable the protection of “innovations” against imitation.
Imitation is less of a concern online. In most cases, web technologies are public knowledge (e.g., open source). Even large players contribute to the public domain. Therefore, rather than relying on assets competitors cannot imitate, competition between web platforms tends to emphasize acquiring users rather than patents. (There are also other sources of resource advantage we’ll discuss later on.)
The difference between imitation and substitution is that in the former you are being copied whereas in the latter your product is being replaced by another solution. For example, Evernote can be replaced by paper and pen.
However, I would argue the source of resource advantage comes from something other than immunity to substitution: after all, there are tens of search engines and hundreds of social networks, yet the giants still prevail over them.
‘Why’ is the question we’re going to examine next.
Here’s what I think is important:
Knowledge means holding the “smartest workers” – this is obviously a highly important resource. As Steve Jobs said, Apple doesn’t hire smart people to tell them what to do; it hires smart people so they can tell Apple what to do.
Storage/server capacity is crucial for web firms. The more users they have, the more important this resource is in order to provide a reliable user experience.
Users are crucial, provided that the platform condition of critical mass is achieved. Critical mass is closely associated with network effects, meaning that the more users there are, the more valuable the platform is.
Content is important as well — content is a complement to content platforms, whereas users are complements of social platforms (for more on this typology, see my dissertation).
Complementors are antecedents to getting users or content – they are third parties that provide extensions to the core platform, and therefore add to its usefulness for users.
Algorithms are proprietary solutions platforms use to solve matching problems.
Company culture is a resource which can be turned into an efficient deployment machine.
A great company culture may be hard to imitate because its creation requires tacit knowledge.
Financing is an antecedent to acquiring other resources, such as the best team and storage capacity (although it’s not self-evident that money leads to a functional team, as examples in the web industry demonstrate).
Finally, location is important because it can provide access to a network of partner companies, high-quality employees and investors (think Silicon Valley) that, again, are linked to the successful use of other resources.
A location is not a rare asset because it’s always possible to find an office space in a given city; similarly, you can follow where your competitors go.
What can be learned from this analysis?
First, the “value” in the VRIN framework is self-evident and not very useful for finding differences between resources, UNLESS the list of resources is really wide and not industry-specific. That would be the case when exploring resources across industries; here, the list was created specifically for web platforms.
My list highlights intangible resources as a source of competitive advantage for web platforms. Based on this analysis, company culture is the resource most compatible with the VRIN criteria.
Although it was argued that substitutability is less of a concern online than offline, the risk of disruption applies just as much to dominant web platforms. Their large user base protects them against incremental innovations, but not against disruptive innovations. However, just as the concept of “value” is tautological, so is disruption – a disruptive innovation is disruptive because it has disrupted an industry – and this can only be stated in hindsight.
Of course, the best executives in the world have seen disruption coming beforehand, e.g. Schibsted and the digital transformation of publishing, but most companies, even big ones like Nokia, have failed to do so.
Let’s take a look at the three big ones: Google, Facebook and eBay. Each one is a platform: Google combines searchers with websites (or, alternatively, advertisers with publisher websites (AdSense); or even more alternatively, advertisers with searchers (AdWords)), Facebook matches users to one another (one-sided platform) and advertisers with users (two-sided platform). eBay as an exchange platform matches buyers and sellers.
It would be useful to assess how well each of them score in the above resources and how the resources are understood in these companies.
March 29, 2017
I started thinking about this question today when reading my students’ exam answers. The question was “Define business logic and give an example of it”, and many answers actually defined strategy. At that point, I realized it’s not so easy to see the difference between these two concepts.
So, what would I see as the main difference between strategy and business logic?
First, strategy in my opinion involves competition – it’s firm-related decision-making in which we try to gain a competitive advantage, i.e. apply a strategy that helps us win; or, more particularly, to achieve a goal, such as grabbing market share, becoming profitable, or growing. Hence, strategy is closely associated with reaching a pre-defined goal – in company terms, we usually set a vision of where we want to take the company within a certain time-frame (say five years from now), and then create an overall strategy that should take us towards that ideal state. When the firm’s vision is based on some shared principles or values, this is called a mission.
As a concept, strategy is much older than business logic and has its roots in military thinking (hence the competitive dimension). For example, Caesar, Napoleon and Clausewitz are seen as classics of strategy.
Business logic, on the other hand, would be a description of “why” — why are customers paying us money? It’s much more focused on value / benefit / utility than strategy. I would say business logic is an explanation of why an organization can remain viable – e.g., it can transform some form of resources (raw material) into output (products). Or, it can be based on exploiting people’s vices (such as the Finnish liquor monopoly Alko) or market inefficiencies, or it can create markets for other players (e.g. Google AdWords).
It seems the two concepts overlap somewhat – the description of business logic approaches strategy when we think about how the firm combines resources to produce something customers perceive as attractive enough to buy. I’d also say both are applicable to many organizations, not just firms – consider a university, for example. The strategy of a university revolves around ways of attracting the best students and teachers (it’s like a two-sided market), but its business logic is to transform educational resources into courses and monetize that either through tuition fees (e.g. the US) or state money (e.g. Finland).
As I said to my students, it’s an eye-opening experience when you start seeing either of these concepts “bare” — at that point you truly understand the core of particular choices firms make, and why things are the way they are.
In sum, I’d say strategy is a barebone description of how to compete in a market, whereas business logic is a barebone description of how to make money. If both were games, strategy would be Risk and business logic Monopoly.
What do you think? Please share your thoughts on the topic!
March 29, 2017
This article discusses the potential of segmentation in Facebook advertising.
Segmentation is one of the most fundamental concepts in marketing. Its goal is to identify the best match between the firm’s offering and the market, i.e. find a sub-set of customers who are most likely to buy the product and who therefore can be targeted cost-effectively by means of niche marketing rather than mass marketing.
There are some premises as to why segmentation works:
(The list is a direct citation from Essentials of Marketing by Jim Blythe, p. 76.)
While segmentation is about dividing the overall market into smaller pieces (segments), targeting is about selecting the appropriate marketing channels to reach those customer segments. Finally, positioning deals with message formulation in the attempt of positioning the firm and its offerings relative to competitors (e.g., cheaper, better quality). This is the basic marketing model called STP (segmentation, targeting, positioning).
I will next discuss three stages of Facebook campaign creation.
There are a few options for creating basic segments.
The existence of weaknesses is okay – the whole point of segmentation is to gather REAL data, which is stronger than a priori assumptions.
Based on the insights you’ve gathered, create Saved target groups in Facebook. These incorporate the segments you want to target. If you are using an ad management tool such as Smartly, you can split audiences into smaller micro-segments, e.g. by age, gender and location. Say you have a general segment of Women aged 25-50; you could split it into micro-segments by using an interval of five years, for example as in the sketch below.
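Here is a minimal sketch of that split; the “Women 25-50” label and the five-year interval come from the example above, and the helper function is purely illustrative (not part of any Facebook or Smartly API):

```typescript
// Purely illustrative helper: split a broad segment into five-year micro-segments,
// e.g. for naming Saved target groups or ad sets.
function microSegments(label: string, minAge: number, maxAge: number, step = 5): string[] {
  const segments: string[] = [];
  for (let start = minAge; start <= maxAge; start += step) {
    const end = Math.min(start + step - 1, maxAge);
    segments.push(`${label} ${start}-${end}`);
  }
  return segments;
}

console.log(microSegments("Women", 25, 50));
// ["Women 25-29", "Women 30-34", "Women 35-39", "Women 40-44", "Women 45-49", "Women 50-50"]
```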
The advantage of micro-segments is more granular segmentation; however, the risk is going too granular while ignoring the real-world reason for differences (sometimes the performance difference between two micro-segments is just statistical noise).
After creating the segments in Facebook (reflected in Saved target groups), you want to test how they perform — so as to see how well your assumptions on the effectiveness of these segments are working. For this, create campaigns and let them run. In Power Editor, go to the Custom audiences (select from the sliding menu), select the segments you want to test and choose to create new ad groups. (See, now we have moved from segmentation into targeting, which is the natural step in the STP model.)
NB! If you particularly want to test customer segments, keep everything else (campaign settings, creatives) the same. In Power Editor, this is fairly simple to execute by copy-pasting the creatives between ad groups. This reduces the risk that the performance differences between various segments are a result of some other factor than targeting. Finally, name the ad sets to reflect the segment you are testing (e.g. Women 25-31).
After a week or so, go back to check the results. Since you’ve named the segments appropriately, you can quickly see the performance differences between the segments. To make sure the differences are statistically valid (if you are not using a tool such as Smartly), use a calculator to determine the statistical significance. I created one which can be downloaded here.
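If you prefer to run the check yourself, here is a minimal sketch of a two-proportion z-test for comparing the conversion rates of two ad sets. This is not the downloadable calculator mentioned above, and the segment names and figures are made up for illustration:

```typescript
// Result of one ad set: how many clicked the ad and how many converted.
interface SegmentResult {
  name: string;
  clicks: number;
  conversions: number;
}

// Two-proportion z-test: how many standard errors apart are the conversion rates?
function zTest(a: SegmentResult, b: SegmentResult): number {
  const p1 = a.conversions / a.clicks;
  const p2 = b.conversions / b.clicks;
  // Pooled conversion rate under the null hypothesis of no real difference.
  const pooled = (a.conversions + b.conversions) / (a.clicks + b.clicks);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.clicks + 1 / b.clicks));
  return (p1 - p2) / se;
}

const women2529: SegmentResult = { name: "Women 25-29", clicks: 1200, conversions: 60 };
const women3034: SegmentResult = { name: "Women 30-34", clicks: 1100, conversions: 41 };

const z = zTest(women2529, women3034);
// |z| > 1.96 corresponds to roughly 95% confidence that the difference is real.
console.log(`z = ${z.toFixed(2)}, significant at 95%: ${Math.abs(z) > 1.96}`);
```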
When interpreting results, remember that the outcome is a combination of segment and message (and that the message is a combination of substance and tone, i.e. what is said and how it is said). In other words,
Result = segment x message, in which message = substance x tone, so that
Result = segment x (substance x tone)
Therefore, as you change the message, it affects performance across various segments. This means that you are not actually testing the suitability of your product to the segment (which is what segmentation and targeting are all about), but the match between the message and the target audience. Although this may seem like semantics, it’s actually pretty important. You want to make sure you’re not getting a misleading response from your segment due to issues in message formulation (i.e. talking to them in a “wrong way”), and so you want to make sure the message reflects the product as well as possible. Ideally, you’d want to tailor your message based on your ideas of the segment, BUT you should avoid this in the early stage because we want to make sure the message formulation does not interfere with the testing of segment performance.
How to solve this problem, then? Three ways: first, make sure the segments you are testing are not too far apart – i.e. women aged 17 and men aged 45 subjected to the same message can create issues. Second, try to formulate a general message to begin with, so it doesn’t exclude any segments. Third, you could of course make slight modifications to the message while testing the segments — here I would still keep the substance (e.g. cheap price) stable across segments while maybe changing the tone (e.g. type of words used) depending on the audience – for example, older people are usually addressed in a different tone than the younger audience (yo!).
Finally, one extra tip! If you want more granular data on how different groups within your segment have performed, go to Ad reports and check out the data breakdowns. There is a wealth of information there which can be used in creating further micro-segments.
What to do when you know which segments are the most profitable? Well, take the results you’ve got and generalize them into your other marketing activities. For example, when you’re buying print ads ask for demographic data they have on readers — it has to be accurate and based on research, not guesses — and choose the media that matches the best performing segments according to your Facebook data. In my opinion, there is no major reason to assume that people in the same segment would act differently in Facebook and elsewhere (strictly speaking, the only potential issue I can think of is that Facebook-people are more “advanced” in their technology use than offline-people, but this is generally a small problem since such a large share of population in most markets are users of Facebook).
There you go – hopefully this article has given you some useful ideas on the relationship between segmentation and Facebook advertising!
March 29, 2017
Many workers are concerned about “robotization” and “automatization” taking away their jobs. The media has also been writing actively about this topic lately, as can be seen in publications such as the New York Times and Forbes.
Although there is undoubtedly some dramatization in the scenarios created by the media, it is true that the trend of automatization took away manual jobs throughout the 20th century and has continued – perhaps even accelerated – in the 21st century.
Currently, the jobs taken away by machines are manual jobs, but what happens if machines take away knowledge work as well? I think it’s important to consider this scenario, as most of the focus has been on manual jobs, whereas the future disruption is more likely to take place in knowledge jobs.
This article discusses what’s next – in particular from the perspective of artificial intelligence (A.I.). I’ve been developing a theory about this topic for a while now. (It’s still unfinished, so I apologize for the fuzziness of thought…)
My theory on the development of job markets relies on two key assumptions:
The idea here is that while it is relatively easy to replace a job taken away by simple machines (sewing machines still need people to operate them), it is much harder to replace jobs taken away by complex machines (such as an A.I.) providing higher productivity. Consequently, fewer people are needed to perform the same tasks.
By “development cycles”, I refer to drastic shifts in job market productivity, i.e.
craftsmanship –> industrial revolution –> information revolution –> A.I. revolution
Another assumption is that labor skills follow the Gaussian curve. This means most people are best suited for manual jobs, while the information economy requires skills that are at the upper end of that curve (the smartest and brightest).
In other words, the average worker will find it more and more difficult to add value in the job market, due to the sophistication of the systems (a lot more learning is needed to add value than in manual jobs, where training takes a couple of days). Even now, the majority of global workers are better suited to manual labor than to information economy jobs, and so some economies are at a major disadvantage (consider Greece vs. Germany).
Consistent with the previous definition, we can see the job market as including two types of workers: those who create systems and those who operate them.
The former create the systems as their job, whereas the latter operate them as their job. For example, in the sphere of online advertising, Google’s engineers create the AdWords search-engine advertising platform, which is then used by online marketers running campaigns for their clients. In the current information economy, the best situation is for workers who are able to create systems – i.e. their value-added is the greatest. With an A.I., however, both jobs can be overtaken by machine intelligence. This is the major threat to knowledge workers.
The replacement takes place due to what I call the errare humanum est effect (the disadvantage of humans vis-à-vis machines), according to which a machine is always superior at job tasks compared to a human, who is an erratic being controlled by biological constraints (e.g., the need for food and sleep). Consequently, even the brightest humans will still lose to an A.I.
Consider these examples:
(Some of these figures are a bit outdated, but in general they serve to support my argument.)
Therefore, the ratio of workers to customers is much lower than in previous transitions. To build a car for one customer, you need tens of manufacturing workers. To serve customers in a supermarket, the ratio needs to be something like 1:20 (otherwise queues become too long). But when the ratio is 1:1,000,000, not many people are needed to provide a service for the whole market.
As can be seen, the mobile application industry, which has been touted as a source of new employment, does indeed create new jobs, but it doesn’t create them for the masses. This is because not many people are needed to succeed in this business environment.
Further disintermediation takes place when platforms talk to each other, forming super-ecosystems. Currently, this takes place through an API logic (application programming interface), which is a “dumb” logic doing only prescribed tasks, but an A.I. would dramatically change the landscape by introducing creative logic into API-based applications.
Many professional services are on the line. Here are some I can think of.
1. Marketing managers
An A.I. can allocate budget and optimize campaigns far more efficiently than error-prone humans. The step from Google AdWords and Facebook Ads to automated marketing solutions is not that big – at the moment, the major advantage of humans is creativity, but the definition of an A.I. in this article assumes creative functions.
2. Lawyers
An A.I. can recall all laws, find precedent cases instantly and give correct judgments. I recently had a discussion with one of my developer friends – he was particularly interested in applying A.I. to the law system – currently it’s too big for a human to comprehend, as there are thousands of laws, some of which contradict one another. An A.I. can quickly find contradicting laws and give all alternative interpretations. What is currently the human advantage is a sense of morality (right and wrong), which can be hard to replicate with an A.I.
3. Doctors
An A.I. makes faster and more accurate diagnoses; a robot performs surgical operations without flaw. I would say many standard diagnoses by human doctors could be replaced by an A.I. measuring the symptoms. There have been several cases of incorrect diagnoses due to hurry and the human error factor – as noted previously, an A.I. is immune to these limitations. The major human advantage is sympathy, although some doctors lack even this.
4. Software developers
Even developers face extinction; upon learning the syntax, an A.I. will improve itself better than humans do. This would lead to an exponentially accelerating increase in intellect, something commonly depicted in A.I. development scenarios.
Basically, all knowledge professions that are accessible to an A.I. will be disrupted.
Actually, the only jobs left would be manual jobs – unless robots take them as well (there are some economic considerations against this scenario). I’m talking about low-level manual jobs – transportation, cleaning, maintenance, construction, etc. These require more physical material – due to aforementioned supply and demand dynamics, it may be that people are cheaper to “build” than robots, and therefore can still assume simple jobs.
At the other extreme, there are experience services offered by people to other people – massage, entertainment. These can remain based on the previous logic.
I can think of a couple of ways.
First, learn coding – i.e. talking to machines. People who understand their logic are in a position to add value — they have access to the society of the future, whereas those who are unable to use systems are at a disadvantage.
The best strategy for a worker in this environment is continuous learning and re-education. From the schooling system, this requires a complete re-shift in thinking – currently most universities are far behind in teaching practical skills. I notice this every day in my job as a university teacher – higher education must catch up, or it will completely lose its value.
Currently higher education is shielded by governments through official diplomas appreciated by recruiters, but true skills trump such an advantage in the long run. Already at this moment I’m advising my students to learn from MOOCs (massive open online courses) rather than relying on the education we give in my institution.
At a global scale, societies are currently facing two contrasting mega-trends: increasing productivity through automation, and continuing population growth.
It is not hard to see that these are contrasting: fewer people are needed for the same output, whereas more people are born and thus need jobs. The increase of people is exponential, while the increase in productivity comes, according to my theory, in large shifts. A large shift is bad because before it takes place, everything seems normal. (It’s like a tsunami approaching – there is no way to know before it hits you.)
I can think of a couple of ways:
By adopting a Marxist approach, we can see there are two groups who are best off in this new world order:
Others, as argued previously, are at a disadvantage. The phenomenon is much similar to the concept of “digital divide” which can refer to 1) the difference of citizens from developed and developing countries’ access to technologies, or 2) the ability of the elderly vs. the younger to use modern technology (the latter have, for example, worse opportunities in high-tech job markets).
There are some relaxations to the arguments I’ve made. First, we need to consider that the increase in free time people have, as well as the general population increase, creates demand for services relating to experiences and entertainment per se; yet, there needs to be consideration of the re-distribution of wealth, as people who are unable to work need to consume to provide work for others (in other words, the service economy needs special support and encouragement from the government vis-à-vis machine labor).
While it is a precious goal that everyone contribute to society through work, the future may require a re-check of this Protestant work ethic if indeed the supply of work drastically decreases. The major reason, in my opinion, behind the failure of policies reducing work hours, such as the 35-hour work-week in France, is that other countries besides these pioneers are not adopting them, and so they gain a comparative advantage in the global market. We are not yet at the stage where the supply of labor is dramatically reduced on a global scale, but according to my theory we are getting there.
Secondly, a major relaxation, indeed, is that the systems can be usable by people who lack an understanding of their technical finesse. This method is already widely applied – very few understand the operating principles of the Internet, yet they can use it without difficulties. Even more complex professional systems, like Google AdWords, can be used without a detailed understanding of Google’s algorithm or Vickrey second-price sealed auctions.
So, dumbing things down is one way to go. The problem with this approach in the A.I. context is that when the system is smart enough to use itself, there is no need to dumb down – i.e., having humans use it would be a non-optimal use of resources. Already we can see this in some bidding algorithms in online advertising – the system optimizes better than people. At the moment we online marketers can add value through copywriting and other creative ways, but the upcoming A.I. would take away this advantage from us.
It is the natural state of job markets that most workers are skilled only for manual labor or very simple machine work; if these jobs are lost, a new way of organizing society is needed. Rather than fighting the change, societies should approach it objectively (which is probably one of the hardest things for human psychology).
My recommendations for the policy makers are as follows:
Because the problem of reduced demand for labor is not yet acute, these changes are unlikely to take place until there is no other choice (which is, by the way, the case for most political decision-making).
Up to which point can human labor be replaced? I call it the point of zero human: the point at which no humans are needed to produce an equal or larger output than what was produced at an earlier point in time. The fortune of humans is that we are producing more all the time – if the production level were at the stage of the 18th century, we would already be at the point of zero human. Therefore, job markets are not developing in a predictable way towards the point of zero human, but it may nevertheless be a stochastic outcome of the current rate of technological development. Ultimately, time will tell. We are living in exciting times.
March 29, 2017
Recently I had an email correspondence with one of my brightest digital marketing students. He asked for advice on creating an AdWords campaign plan.
I told him the plan should include certain elements, and only those (it’s easy to make a long and useless plan, and difficult to make a short and useful one).
Anyway, in the process I also told him how to make sure he gets the necessary information from the client. These four things I’d like to share with everyone looking for a crystal-clear marketing brief.
1. Campaign goal
2. Target group
First, you want to know the client’s goal. In general, it can be direct response (sales) or indirect response (awareness). This affects two things:
The channel selection is the first thing to include into your campaign plan.
Second, you want the client’s understanding of the target group. This affects targeting – in search-engine advertising it’s the keywords you choose; in social media advertising it’s the demographic targeting; in display it’s the managed placements.
Based on this information, you want to make a list (of keywords / placements / demographic types). These targeting elements are the second thing to include into your campaign plan.
Third, the budget matters a great deal. It affects two things:
The bigger the budget is, the more channels can be included in the campaign plan. It’s not always linear, however; e.g. when search volumes are high and the goal is direct response, it makes most sense to spend all on search. But generally, it’s possible to target several stages in customers’ purchase funnel (i.e., stages they go through prior to conversion).
Hence, the budget spend is the third thing to include into your campaign plan.
You calculate the daily budget by dividing the total budget by the number of channels and the duration (in days) of the campaign. At this point, you can allocate the budget in different ways, e.g. search = 2 x social. It’s important to notice that in social and display you can usually spend as much money as you want, because the available ad inventory is in effect unlimited. But in search the spend is curbed by natural search volumes.
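As a minimal sketch of this arithmetic – the total budget, duration and the 2:1 search/social weighting below are made-up figures:

```typescript
// Split a total campaign budget into per-channel daily budgets.
const totalBudget = 9000;      // total campaign budget (made-up figure)
const durationDays = 30;       // campaign duration in days
// Relative weights per channel: search gets twice the weight of social.
const weights: Record<string, number> = { search: 2, social: 1 };

const weightSum = Object.values(weights).reduce((sum, w) => sum + w, 0);
for (const [channel, weight] of Object.entries(weights)) {
  const daily = (totalBudget * weight) / weightSum / durationDays;
  console.log(`${channel}: ${daily.toFixed(2)} per day`);
}
// search: 200.00 per day, social: 100.00 per day
```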
March 29, 2017
This post applies to cases satisfying two conditions.
First, you have a simple landing page designed for immediate action (=no further clicks). This can be the case for many marketing campaigns for which we design a landing page without navigation and a very simple goal, such as learning about a product or watching a video.
Second, you have a high bounce rate, indicating a bad user experience. Bounce rate is calculated as follows:
bounce rate = visitors who leave without clicking further / all visitors
It’s a proxy for user experience. A high bounce rate simply means a lot of people leave the website without clicking further. This usually indicates bad relevance: the user was expecting something else, didn’t find it, and so leaves the site immediately.
For search engines a high bounce rate indicates bad landing page relevance vis-à-vis a given search query (keyword), as the user immediately returns to the SERP (search-engine result page). Search engines, such as Google, would like to offer the right solution for a given search query as fast as possible to please their users, and therefore a poor landing page experience may lead to lower ranking for a given website in Google.
I’ll give a simple example. Say you have a landing page with only one call-to-action, such as viewing a video. You then have a marketing campaign resulting in ten visitors. After viewing the video, all ten users leave the site.
Now, Google Analytics would record this as 100% bounce rate; everyone left without clicking further. Moreover, the duration of the visits would be recorded as 0:00, since the duration is only stored after a user clicks further (which didn’t happen in this case).
So, what should we conclude as site owners when looking at our statistics? 100% bounce: that means either that a) our site sucks or b) the channel we acquired the visitors from sucks. But in the previous case, that’s an incorrect conclusion; all of the users watched the video, and so the landing page (and the marketing campaign associated with it) was in fact a great success!
I will show four solutions to improve your measurement of user experience through bounce rate.
First, simply create an event that pings your analytics software (most typically Google Analytics) when a user takes a desired on-page action (e.g. viewing a video). This removes from the bounce rate calculation those users who completed a desired action but still left without clicking further.
Here are Google’s instructions for event tracking.
Second, ping GA based on visit duration, e.g. create an event after the user has spent one minute on the page. This will in effect lower your reported bounce rate by the share of users who stay at least a minute on the landing page.
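Here is a minimal sketch of both event pings, assuming the Universal Analytics (analytics.js) snippet is already on the page; the video element id is hypothetical:

```typescript
// The analytics.js snippet defines a global ga() command queue.
declare function ga(...args: unknown[]): void;

// 1) Fire an event when the user completes the desired on-page action (video play).
//    An interaction event like this counts as a second hit, so the visit is no
//    longer reported as a bounce in Google Analytics.
const video = document.getElementById("campaign-video") as HTMLVideoElement | null; // hypothetical id
video?.addEventListener("play", () => {
  ga("send", "event", "Video", "play", "Landing page video");
});

// 2) Fire an event after one minute on the page, treating that as engagement.
setTimeout(() => {
  ga("send", "event", "Engagement", "time-on-page", "60 seconds");
}, 60 * 1000);
```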
Third, create a form. Filling a form directs the user to another site which then triggers an event for analytics. In most cases, this is also compatible with our condition of a simple landing page with one CTA (well, if you have a video and a form that’s two actions for a user, but in most cases I’d say it’s not too much).
Finally, there is a really cool Analytics plugin by Rob Flaherty called Scrolldepth (thanks Tatu Patronen for the tip!). It pings Google Analytics as users scroll down the page, e.g. by 25%, 75% and 100% intervals. In addition to solving the bounce problem, it also gives you more data on user behavior.
Note that adding event tracking to reduce bounce rate only reduces it in your analytics. Search-engines still see bounce as direct exits, and may include that in their evaluation of landing page experience. Moreover, individual solutions have limitations – creation of a form is not always natural given the business, or it may create additional incentive for the user; and Scrolldepth is most useful in lengthy landing pages, which is not always the case.
March 29, 2017
Startups – and bigger companies, too – often test marketing channels by allocating a small budget to each channel, and then analyzing the results (e.g. CPA, cost per action) per channel.
This is done to determine a) business potential and b) channel potential. The former refers to how lucrative it is to acquire customers given their lifetime value, and the latter to how well each channel performs.
However, there is one major issue: scaling. The question is: when we pour x dollars into a marketing channel in the test phase and get a CPA of y dollars, will the CPA remain the same when we increase the budget to x+z dollars (say, a hundred times more)?
This issue can be tackled by acquiring enough data for statistical significance. This gives us confidence that the results will be similar once the budget is increased.
In AdWords, however, the scaling problem takes another form: the natural limitation of search volumes. By this I mean that at any given time, only a certain number of customers are searching for a specific topic. In contrast to Facebook, which has a de facto unlimited ad inventory (billions of ad impressions), Google has only a limited (although very large) ad inventory.
Here’s how to assess the scalability of AdWords campaigns:
1. Go to campaign view
2. Enable the column called “Search impression share” (Modify columns –> Competitive metrics)
This will tell you how many searchers saw your ad out of all who could have seen it (this is influenced by your daily budget and bid).
In general, you want impression share to be as high as possible, given that the campaign ROI is positive. So, in general >80% is good, <10% is bad. (The exception is when running a long-tail strategy aiming for low-cost clicks, in which case <10% is okay.)
3. Calculate the scalability as follows:
scalability = clicks / impression share
For example, if you have an impression share of 40% with which you’ve accumulated 500 clicks, then by increasing your budget and bids so that you are able to capture a 100% impression share, you will accumulate 1250 clicks (= 500/0.40), which is the full potential of this campaign.
Note that the formula assumes the CTR remains constant. Additionally, increasing bids may increase your CPA, so improving quality score through better ads and relevance is important to offset this effect.
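As a minimal sketch, using the figures from the example above:

```typescript
// Estimate the full click potential of a campaign from its current clicks and
// impression share; assumes CTR stays constant as impression share grows to 100%.
function fullPotentialClicks(clicks: number, impressionShare: number): number {
  return clicks / impressionShare;
}

console.log(fullPotentialClicks(500, 0.4)); // 1250 clicks at full impression share
```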
March 29, 2017
The return on investment (ROI) of academic publishing is absolutely terrible.
Think of it – thousands of hours spent correcting formatting, spelling, rephrasing, and so on. All this after the actual insight of the research has been accomplished. In all seriousness, spending 10% of the time doing research and 90% writing and rewriting cannot be thought of as anything but waste.
The inefficiency of the current way of doing it – combining doing research and writing about it under the same name of “doing research” – is a horrible waste of intelligence and human resources. It inflates the cost of doing research, and also makes scientific progress slower than if 90% were spent on research and 10% on writing.
Some might say it’s a perverse outcome of letting staff go – nowadays even professors have to do everything by themselves because there are so few assistants and administrators. Why is this perverse? Because at the same time more people need work. It’s also perverse, or paradoxical, because letting the help go is done to increase efficiency, but in the end it actually decreases efficiency as the research staff shift their use of time from doing research to fixing spelling errors. There is a large misunderstanding that letting people go leads to better efficiency – it may save costs, but exactly at the cost of efficiency.
The thought for this article came to mind when my colleague and I received yet again some minor edit requests for an article to be published in a book – the book material was already finished last year, but all these people are working to fix small details that add zero substantive value. What a waste!
And I’m not alone in this situation; most if not all academics face the same problem.
Two solutions readily come to mind:
The latter one is much better, as the first option misses the importance of interpreting the results and theorizing from them (the whole point of doing research).
Efficiency, such as the ROI of research, should be defined as learning more about the world. This will never be accomplished by writing reports but by going out into the world. At the same time, I don’t mean to undermine basic research – the ROI of research is not the same as its immediate usefulness, let alone its immediate economic potential. ROI in my argument simply refers to the ratio of doing research vs. writing about it, not the actual quality of the outcome.
The author works as a university teacher at the Turku School of Economics.