
Digital marketing, startups, and platforms

Broken Window Fallacy — Still Relevant?

Do we need more broken windows?

There is a fallacy within the broken window fallacy.

I will now explain this argument.

Essentially, the fallacy (formulated by the classical French economist Frédéric Bastiat) argues that "breaking windows", although it brings work for window repairers and thus adds to economic activity, is sub-optimal because of the opportunity cost: the same labor could be used to build (or buy) something new.

But this assumes there is indeed an alternative job for the window repairer, or that the labor COULD be used more efficiently elsewhere (e.g., producing innovations). In the early days, when productivity was the bottleneck for economic growth, this may have been the case.

Nowadays, however, I think modern job-market dynamics do not categorically support the broken window fallacy. It is getting increasingly difficult to create products that add genuine value rather than merely exploiting market inefficiencies (which, arguably, could be a form of value) or persuading consumers to buy for the sake of owning the new shiny thing (which is often misunderstood as marketing's core value added).

Right now, we would need some more broken windows to stimulate the economy and get money circulating.

Productivity is no longer the bottleneck for growth: there is over-supply of both material goods and human labor, and generating demand is now the primary economic problem. Marketers would recognize this as the shift from production orientation to sales orientation and finally to customer orientation. We need a similar shift at the scale of the whole economy.

But how to accomplish that?

A good start, from a policy perspective, is infrastructure projects that not only produce jobs but also enable the platform effect, i.e., the possibility to "innovate on top of", as per the classic definition of platforms. Any investment that enables the creation of economic activity, whether by moving or connecting people, by removing barriers such as taxi cartels that prohibit services like Uber, by making licenses, software tools, and equipment accessible and affordable for all (also in terms of skills, viz. education), or by supporting incubators and access to early-stage funding, is a positive way to enable the creation of innovations.

Consider Finland. I live in Turku, which is about a two-hour train ride from Helsinki. Turku is a big city by Finnish standards, but in terms of job opportunities it pales in comparison to Helsinki, the capital. Lately there has been talk of a "one-hour train" between Turku and Helsinki. As stated previously, building it would not only increase employment but also provide a platform externality, since more people would be able to live in Turku and work in Helsinki. Instead of channeling the money Finland is borrowing at an increasing rate into unemployment benefits, it should go to employing people in infrastructure projects like this.

Now consider Europe: by the same logic, we should focus on large-scale infrastructure projects such as improving railroad connections between Asia and Europe, roads, hospitals, telecommunications, and so on. Instead, much time is spent arguing over minor details or managing whatever "threats" (refugees, the euro crisis) the continent is facing. Dealing with threats is important, sure, but policy cannot be only reactive: without foresight and building concrete things, there will be no better tomorrow.

Anyway, transportation, communications technology, better housing and healthcare: I count those as infrastructure services that benefit the economy both short- and long-term. And repairing any of them does not fall under the broken window fallacy; quite the opposite.

The author works as a researcher at the Turku School of Economics.

Negative tipping and Facebook: Warning signs

This Inc article points out a very big danger for Facebook:

It is widely established in platform theory that reaching a negative tipping point can destroy a platform. Negative tipping is essentially the reverse of positive tipping: instead of gaining momentum, the platform quickly starts losing it.

There are two dimensions I want to look at in this post.

First, what I call "the curse of likes". Essentially, Facebook has made it too easy to like pages and befriend people; as a result, it is unable to manage people's newsfeeds in the best way in terms of engagement. There is too much clutter, crowding out important social information, and the "friend" network is too wide for the intimacy required to share personal things. The former reduces the engagement rate; the latter results in unwillingness to share personal information.

Second, if people share less about themselves, the platform has a harder time showing them relevant ads. The success of Facebook as a business relies on its revenue model, which is advertising. Both of the aforementioned risks are negative for advertising outcomes. If relevance decreases, a) user experience (due to the negative effects of ads) and b) ad performance decrease as well, resulting in advertisers reducing their ad spend or, in the worst case, moving on to other platforms.

To counter these effects, Facebook can resort to a few strategies:

  1. Discourage people from "over-liking" things: this is for their own benefit, so as not to clutter the newsfeed
  2. Offer easy options to unsubscribe from people and pages, e.g., asking "Do you want to see this?" in relation to posts
  3. Favoring social content over news and company posts in the newsfeed algorithms – seeing personal social content is likely to incite more social content
  4. Sentiment control of the newsfeed algorithm: to many, Facebook seems like a "negative place", with arguing about politics and such. This is in stark contrast to more intimate platforms such as Instagram. Thus, Facebook could incorporate sentiment adjustment in its newsfeed algorithm to emphasize positive content.
  5. Continued efforts to improve ad relevance, especially by lowering click prices for high-CTR advertisers, thereby encouraging engagement and match-seeking behavior.

Overall, Facebook as a platform will not be eternal. But I think the company is well aware of this, since their strategy is to constantly buy out rivals. The platform idea persists although individual platforms may perish.

People vs. business models: Warren Buffett’s dilemma

On Quora, somebody asked why Warren Buffett prefers not to invest in startups [1]. One of the answers that resonated with me was this one:

“In an interview several years back, Warren Buffett said that he does not like to invest in companies whose success is based on the smartness of its people.

His reasoning was that all companies hire from the same pool of talent, so smart people by themselves do not provide long-term competitive advantage or “moat” (because a competitor can hire the same or similar talent). He thought of a company’s processes (not just operating processes, but also processes for new product creation, developing new business models, etc.) as the place where its value resides.”

So, I got to think of this question:

Which is more important for business success, people or business model?

Taken to the extremes, the answer splits like this:

People are critical, so talented people can make any business model work.


Business model is critical, so even non-talented people can make a good business model work.

The question is quintessential for startup entrepreneurship: should we be chasing the best people or the best combination of business model parameters?

In other words, can we find business model combinations that even an idiot could use to succeed? Or is it, as most lean startup advocates argue, that any business model parameters are just guesses and success rests only on the team's ability to execute them?

There are examples of smart people turning around businesses that would otherwise have failed. There are equally examples of poorly managed companies that still thrive because they have a killer business model in place.

However, facing market dynamics often involves shaping the business model parameters, which therefore cannot be seen as static but dynamic in nature. And who shapes them? The people: ultimately, everything in companies can be abstracted to human actions. But without the right "recipe" of business model components in place, the actions of even the smartest people can become futile. As such, we may not be able to examine the team and the business model separately; business model and people are not isolated but interacting factors.

The truth, therefore, lies somewhere in between, in the mix of both. Dichotomous questions like this often end up with a structurally similar conclusion to the one made here: almost every time, an extreme argument can be shot down. Believing in extremes can therefore save you time, but it can also lead you astray.

As for Warren Buffett, the explanation given in the Quora post sounds plausible: for an investor, it may be an efficient strategy to focus on business model parameters and macro-competitive factors (finding opportunities on a logical basis) instead of betting on startups with risky ideas and people.

[1] Here’s the Quora discussion:

A programmatic buying platform: ideal characteristics

Full-metal digitalist.

The world is changing, my marketer

At the moment, digital media is shifting to a programmatic buying model, i.e., ads are bought and sold through an ad platform (e.g., Google AdWords, Facebook). Traditional offline media (TV, print, radio) will also move to programmatic buying systems over time, though in my estimate this will still take 5-10 years.

Why does programmatic buying win?

The reason is clear:

Programmatic buying is, by its nature, always more efficient than exchange mediated by people.

From the perspective of economics, ad exchange, like all exchange, involves transaction costs: price negotiation, packaging, communication, questions, delivering the ads, reporting, and so on. This is human work that costs time and effort, and it does not lead to an optimal solution in terms of price or advertising effectiveness.

A human always loses to an algorithm in efficiency, and advertising is a game of efficiency.

The aforementioned transaction costs can be minimized through programmatic buying. Media salespeople are simply no longer needed in this process; at the same time, advertising becomes cheaper and more democratic. Of course, the transition will involve growing pains, especially relating to changes in organizational structure and updating competences. Business logic must also shift from "premium" thinking to free-market thinking: ad space is worth only as much as the results it delivers to the advertiser, and these will be smaller than media houses' current pricing, which is in itself a negative incentive for accepting the transition.

What are the characteristics of a successful programmatic buying platform?

In my view, they include at least the following:

  • low entry cost: a budget of 5 euros is enough to get started (this brings liquidity to the platform, because it becomes worthwhile for small advertisers to experiment, too)
  • budget freedom: the advertiser can freely set the budget; no minimum spend (see above)
  • market-based pricing: typically an algorithmic auction model that encourages truthful bidding (cf. Google's GSP and Facebook's VCG model)
  • performance basis: a pricing component that "rewards" better advertisers and thus compensates end users for the harms of advertising
  • free targeting: the advertiser can define the targeting (this should NOT be the media house's "secret knowledge")
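The truthful-bidding incentive of the auction models mentioned above can be illustrated with a minimal single-slot second-price auction. This is only a sketch with invented bids; real GSP and VCG mechanisms for multiple ad slots are more involved.

```python
# Minimal sketch of a second-price auction for a single ad slot.
# Real platforms also weight bids by predicted ad quality.

def second_price_auction(bids):
    """bids: dict of advertiser -> max CPC bid.
    Returns (winner, price), where price is the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = second_price_auction({"A": 1.20, "B": 0.90, "C": 0.50})
# A wins but pays B's bid (0.90), so overbidding does not change A's
# price: this is the incentive to bid one's true value.
```

Because the price is set by the runner-up's bid, no advertiser gains by shading their bid below their true value per click.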

These characteristics are important because international competitors already offer them, and because they have been shown to work in both theoretical and practical examination.

Important aspects from the advertiser's perspective include:

  • democracy: anyone can access the platform and use it as self-service
  • results-based pricing: you pay for realized clicks/sales, not only for impressions
  • targetability: the advertiser can adjust the targeting, which raises the likelihood of relevance and thus reduces advertising's negative network effect (i.e., customer irritation)

Targeting options can include, for example,

  • contextual targeting (a match between the content and keywords chosen by the advertiser)
  • demographic targeting (age, gender, language)
  • geographic targeting
  • the visitor's interests

Some of these may be hard for media houses to determine, at least harder than for Facebook; targeting, however, is critical to advertising success, so the work to obtain these data must be done.


Programmatic buying platforms are a media house's core competence, not a purchased service. That is why I believe industry players will aggressively start developing their competence in building platforms. Otherwise they will keep losing the advertising pie to players like Google and Facebook, which offer the benefits listed above.

By the way, I wrote my master's thesis on ad exchange in 2009, titled "Power of Google: A study on online advertising exchange"; it already touched on these themes.

Joni Salminen
D.Sc. (Econ.), Marketing
[email protected]

The author teaches digital marketing at the Turku School of Economics.

Facebook Ads: too high performance might turn on you (theoretically)


Now, earlier I wrote a post arguing that Facebook has an incentive to lower the CPC of well-targeting advertisers because better targeting improves user experience (in two-sided market terms, relevance through more precise targeting reduces the negative indirect network effects perceived by ad targets). You can read that post here.

However, consider the point from another perspective: the well-targeting advertiser is making rents (excess profits) from their advertising, which Facebook, as the platform owner, wants and is able to capture.

In this scenario, Facebook has an incentive to actually increase the CPC of a well-targeting advertiser until the advertiser's marginal profit equals marginal cost. In that case, it would still make sense for the advertiser to continue investing (so the user experience remains satisfactory), but Facebook's profit would increase by the magnitude of the advertiser's rent.
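The rent-capture logic above can be made concrete with a small numeric sketch. All figures here are invented for illustration; none come from Facebook.

```python
# Hypothetical numbers illustrating rent capture by the platform owner.

conversion_rate = 0.05   # share of clicks that convert (assumed)
margin_per_sale = 40.0   # advertiser's profit per conversion (assumed)
value_per_click = conversion_rate * margin_per_sale   # = 2.00

current_cpc = 0.80
advertiser_rent = value_per_click - current_cpc       # 1.20 per click

# If the platform raised the CPC to just below the value per click, the
# advertiser would still (barely) profit and keep advertising, while
# the platform captured nearly the whole rent.
rent_capturing_cpc = value_per_click - 0.01
```

The catch, as discussed below, is that this requires the platform to know the advertiser's value per click, which is normally private information.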

Problem of private information

This would require that Facebook be aware of its advertisers' profit functions, which for now may be private information held by the advertisers. But if Facebook had this information, it could consider it in the click-price calculation. Obviously, that would violate the "objective" nature of Facebook's VCG ad auction, which is currently set to consider maximum CPC and ad performance (negative feedback and CTR, but not profit, as far as I know). However, advertisers would not be able to monitor the use of their profit function, because the actual ad auctions are carried out in a black box (i.e., asymmetric information). Thus, the scenario represents a type of moral hazard for Facebook: a potential risk the advertisers may not be aware of.

Origin of the idea

I actually got this idea from one of my students, who said, "Oh, I don't think micro-targeting is useful." I asked why, and he said, "Because Facebook is probably charging too much for it." I told him that's not the case; but it could be, and the idea is interesting. Here I have just elaborated it a bit further.

Also read this article about micro-targeting.

Micro-targeting is super interesting for B2B and personal branding (e.g., job seeking).

Another related point that might interest you, Jim (in case you're reading this :), is the platform owner distributing profitable keywords among advertisers in search advertising. For example, Google could control impression share so that each advertiser receives a satisfactory (given their profit function) portion of traffic WHILE optimizing its own return.


This idea is not well developed, though; it rests on the notion that there is heterogeneity in advertisers' willingness to pay (arising, e.g., from differences in margins, average order values, operational efficiency and such) that the platform owner could benefit from. I suspect the second-price auction may already account for this as long as advertisers bid truthfully, in which case there is no need for such "manipulation" by Google, as prices are always set to the maximum anyway. So, just a random idea at this point.

Why human services are needed for world peace

The bot can be boss, as long as we have jobs.

Why are human services the future of our economy? (And, therefore, an absolute requirement for world peace [1].)

For three reasons:

  1. They do not pollute or waste material resources (or do so to a significantly lesser degree than material consumption)
  2. The exponential growth of population absolutely requires more human labor (supply and demand of labor)
  3. There is no limit to service creation; by type and nature, services are infinite (because people's needs are infinite and ever-changing)

Consequently, critical, absolutely critical measures are needed in Western economies to enable a true service economy.

Here are some ideas:

  • The taxation of human labor (the VAT on services) must be drastically cut.
  • The side costs of employing people (instead of machines) must be drastically cut.
  • Any technological solutions (e.g., platforms) that increase the match between supply and demand of human labor must be endorsed, and, respectively, all barriers, such as cartels, removed.

Human services are the key to sustainable and socially balanced consumption. Look at Finland back in the 1950s: we were a real service economy. Today, every possible job has been replaced either by automation or by self-service (which companies call "customer participation"). We are a digital self-service economy, not a service economy anymore.

I long for the days when we had bellboys, cleaning ladies, office clerks, research assistants and other support staff. These were important jobs that no longer exist. Self-service and efficiency are, in fact, the enemies of employment. We must consider whether we want a society optimized for efficiency or one optimized for well-being. (I'm starting to sound like Bernie Sanders, which might not be a bad thing as such, but the argument has a deeper rationale to it.)

Maximum efficiency is not maximum employment, far from it.

Regarding Silicon Valley and startups, there should be a counter-movement against efficiency. So far, software has been eating the world, and the world, at least in terms of the job market, is shrinking. Granted, many new job types have been created to compensate for the loss, but much more is needed to fill the gap software is leaving. There needs to be a call for a new type of startup, one that empowers human work. If you think about it, some good examples already exist: Uber, TaskRabbit, Fiverr, and Upwork are some of them. But all too often, the core value proposition of a startup is based on its ability to reduce "waste", that is, human labor.

I do not think there is any limit to the creation of human services. People are never completely satisfied, and their new needs spawn new services, which in turn require new services, and so on. In fact, the only limits to the consumption of services are one's time and cognitive abilities! This is good, even hopeful, if we think of the big picture. But I do think an environment needs to be created in which the incentives for providing human services match those for machine services, or at least approach them far more closely than they currently do.

This issue definitely needs to be addressed with real structural reforms in society; as of yet, I haven't seen ANY of that, not even discussion, in Finland. It's as if the world were moving but the politicians were asleep, stuck in some old glory days. But in the end, we all want the same thing: those old days BACK, when everyone had a job. We just cannot get there without adjusting policies, radically, to the radical change in productivity that has taken place in the past decades.

It’s like another candidate — not Sanders — says: We gotta start winning again.

End notes

[1] The premise here is that the well-being of the middle class is required for a balanced and peaceful society. In contrast, a crumbling middle class will cause social unrest and widespread dissatisfaction, which will channel out into political radicalism, scapegoat-seeking, and even wars between nations. Jobs are not just jobs; they are a vehicle for peace.

The author has taught services marketing at the Turku School of Economics.

Facebook ad testing: are more ads better?

Yellow ad, red ad… Does it matter in the end?


I used to think differently about creating ad variations, but having tested both methods I’ve changed my mind. Read the explanation below.

There are two alternative approaches to ad testing:

  1. “Qwaya” method* — you create some base elements (headlines, copy texts, pictures), out of which a tool will create up to hundreds of ad variations
  2. “Careful advertiser” method — you create hand-crafted creatives, maybe three (version A, B, C) which you test against one another.

In both cases, you can calculate performance differences between ad versions and choose the winning design. The rationale for the first method is that it "covers more ground", i.e., it comes up with variations we would not have tried otherwise (due to lack of time or other reasons).
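The "covers more ground" effect is simply combinatorics: a tool crosses the base elements into every possible variation. The element texts below are made up for illustration.

```python
# Sketch of the combinational approach: cross base elements into ads.
from itertools import product

headlines = ["Save time", "Work smarter", "Join thousands of users"]
copy_texts = ["Try it free.", "No credit card needed.", "Cancel anytime."]
images = ["office.jpg", "team.jpg", "chart.jpg", "product.jpg"]

# Every (headline, copy, image) combination becomes one ad variation.
variations = [
    {"headline": h, "copy": c, "image": i}
    for h, c, i in product(headlines, copy_texts, images)
]
print(len(variations))  # 3 * 3 * 4 = 36 variations from 10 base elements
```

Even a handful of base elements quickly yields dozens or hundreds of variations, which is exactly what creates the data problems discussed next.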

Failure of large search space

I used to advocate the first method, but it has three major downsides:

  1. it requires a lot more data to reach statistical significance,
  2. false positives may emerge in the process, and
  3. a lack of internal coherence is likely, due to inconsistency among creative elements (e.g., a mismatch between copy text and image, which may result in awkward messages).
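Downside 2 can be quantified roughly: the more variations you test, the more likely at least one apparent "winner" is a false positive. The sketch below assumes independent tests at a 5% significance level, which is a simplification.

```python
# Probability of at least one false positive among n independent tests.

def family_false_positive_rate(n_variations, alpha=0.05):
    """1 - P(no test falsely significant), assuming independence."""
    return 1 - (1 - alpha) ** n_variations

print(round(family_false_positive_rate(3), 3))    # 3 hand-crafted ads: 0.143
print(round(family_false_positive_rate(100), 3))  # 100 variations: 0.994
```

With a hundred generated variations, finding a spurious "winner" is close to certain unless the significance threshold is corrected for multiple comparisons.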

Clearly, though, a human must generate enough variation in the ad versions when seeking a globally optimal solution. This can be done by a) making drastically different (e.g., humor vs. informativeness) as opposed to incrementally different ad versions, and b) covering extremes on different creative dimensions (e.g., humor: subtle/radical; informativeness: all benefits/main benefit).


Overall, this argument is an example of how marketing automation may not always be the best way to go! As a corollary, creative work done by humans is hard to replace with machines when seeking optimal creative solutions.

*Named after the Swedish Facebook advertising tool Qwaya, which uses this feature as one of its selling points.

Facebook’s Incentive to Reward Precise Targeting

Facebook has an incentive to lower the advertising cost for more precise targeting by advertisers.

What, why?

Because, by definition, the more precise the targeting, the more relevant it is for end users. Given the standard nature of ads (a negative indirect network effect vis-à-vis users), the more relevant they are, the less dissatisfied the users. What's more, user satisfaction is also tied to the performance of the ads (a positive indirect network effect: the more satisfied the users, the better the ad performance), which should thus be better with more precise targeting.

Now, ad relevance can be improved by automatic means such as epsilon-greedy algorithms, and this is traditionally seen as Facebook's advantage (right, Kalle?), but the real question is: is that more efficient than "marketer's intuition"?
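For readers unfamiliar with the term, an epsilon-greedy approach mostly shows the ad with the best observed CTR but explores a random ad with probability epsilon. This is a minimal sketch with invented CTRs, not Facebook's actual system.

```python
# Minimal epsilon-greedy ad selection with running CTR estimates.
import random

def epsilon_greedy(ctr_estimates, epsilon=0.1):
    """Return an ad index: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(ctr_estimates))
    return max(range(len(ctr_estimates)), key=lambda i: ctr_estimates[i])

# Simulate impressions against unknown true CTRs, updating the estimates.
true_ctrs = [0.02, 0.05, 0.03]   # invented; unknown to the algorithm
shows, clicks = [0, 0, 0], [0, 0, 0]
estimates = [0.0, 0.0, 0.0]

random.seed(1)
for _ in range(10_000):
    i = epsilon_greedy(estimates)
    shows[i] += 1
    if random.random() < true_ctrs[i]:
        clicks[i] += 1
    estimates[i] = clicks[i] / shows[i]
```

Over time the best-performing ad receives most of the impressions, at the cost of spending some traffic on exploration; the marketer's intuition discussed below narrows the space this exploration has to cover.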

I would in fact argue that, contrary to my usual stance on marketer's intuition and its fallibility, it is helpful here; its use at least enables narrowing down the optimal audience faster.

…okay, why is that then?

Because it is never only about the audience, but about the match between the message and the audience. Even if the message were the same and only the audience varied, narrowing is still useful because the search space for Facebook's algorithm is smaller, pre-qualified by humans in a sense.

But there is an even more important property: by narrowing down the audience, the marketer is able to re-adjust the message to that particular audience, thereby increasing relevance (the "match" between the preferences of the audience members and the message shown to them). This is hugely important because of the inherently combinatory nature of advertising: you cannot separate targeting and message when measuring performance; it is always performance = targeting × message.

Therefore, Facebook does have an incentive to encourage advertisers toward more precise targeting and to reward it with a lower CPC. I am not sure they are doing this, though, because it requires assigning a weighted bid to advertisers with more precise targeting. Consider advertiser A, who is mass-advertising to everyone in some large set X, versus advertiser B, who is competing for part of the same audience, i.e., a subset x: they are both in the same auction, but the latter should be compensated for more precise targeting.

Concluding remarks

Perhaps this is factored in through the Relevance Score and/or a performance adjustment in the actual rank and CPC. That would yield the same outcome, given that the above-mentioned dynamics hold, i.e., there is a correlation between more precise targeting and ad performance.
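One way such a performance adjustment could work is the familiar quality-weighted ad rank. This is a sketch with invented numbers; treating targeting precision as a quality multiplier is my assumption, and Facebook's actual mechanism is not public.

```python
# Illustrative quality-weighted auction: rank = bid * quality; the winner
# pays the runner-up's rank divided by its own quality.

def quality_weighted_auction(entries):
    """entries: list of (name, max_cpc, quality). Returns (winner, cpc)."""
    ranked = sorted(entries, key=lambda e: e[1] * e[2], reverse=True)
    winner, _, q_win = ranked[0]
    _, bid2, q2 = ranked[1]
    return winner, (bid2 * q2) / q_win

# B targets more precisely (higher quality), outranks A despite a lower
# bid, and pays a lower CPC: 1.00 * 1.0 / 2.0 = 0.50.
winner, cpc = quality_weighted_auction([("A", 1.00, 1.0), ("B", 0.80, 2.0)])
```

Under this scheme, the precise targeter in the same auction automatically receives the compensation the text calls for, without any explicit bid weighting by the advertiser.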

Qualitative Analysis With NVivo – Essential Features

This post explains the use of NVivo software package for analysis of qualitative data. It focuses on four aspects:

  1. coding
  2. categorization
  3. relationships
  4. comparison of background variables

First, coding. This is simply giving names to phenomena observed in the material. It is a process of abstraction and conceptualization, i.e., making the rich qualitative material more easily approachable by reducing its complexity into simple, descriptive codes which can be compared to and associated with one another at later stages of the analysis.

(In the picture, the highlighted areas are coded by right-clicking them and giving them a descriptive label which relates to a phenomenon of interest.)

You can think of the codes shaping up in two ways: a) from previous literature or b) emerging as important points in the material based on the researcher's judgment. (You can think of this from the perspective of deductive/inductive emphasis.) Either way, they are associated with one's research questions; the material always needs to be analyzed in the light of one's research questions so that the results of the analysis remain relevant for the study's purpose. Oftentimes, the first step of reading and coding all material is referred to as open coding (e.g., Strauss & Corbin, 2008).

Second, categorization. This is simply bundling the codes with one another and placing them in a hierarchical “structure”.

(In the picture, you can see codes being formulated as “main themes” and “sub-themes”, i.e. categories that contain other categories.)

The structure should follow the operationalization of the study. This usually comes naturally because the material is closely linked to interview questions, which in turn are linked to research questions (which are linked to the purpose of the study, forming a full circle from analysis to study purpose). For example, in the above picture the categories, or themes, relate to challenges and opportunities of multichannel commerce, the topic under scrutiny.

Third, relationships. This is important: while reading the material, the researcher should form tentative relationships in his or her head. For example, "I see, x seems to be associated with y". These are the "eureka" moments one gets while immersed in the analysis.

(In the picture, you can see several tentative relationships emerging from the analysis. A portion of them will be chosen for further validation/falsification, and potentially reported as outcomes of the study.)

The relationships can be coded instantly as you come across them. The beauty of NVivo is that you can code evidence (direct citations) into the relationships, and later, when you click a relationship open, you can find all the associated evidence (the same applies to all codes and categories). So, as you analyze the material further, you can add confirming and contrary evidence to the previously formed relationships while keeping a "trail of thought" useful for reporting the results. It is important to understand that at this point of the analysis the discovered relationships are interim findings rather than final conclusions.

Now, qualitative research can result in several outcomes in terms of reporting, one being propositions. Propositions are conclusions of qualitative analysis; they are tentative suggestions of general relationships and can be formulated into hypotheses for quantitative testing. However, the propositions can be "validated", or made more robust, by qualitative comparison as well. This is done through an iterative process of a) reading the material repeatedly and trying to find both confirming and falsifying evidence for the interim propositions, and b) collecting more research data especially focused on learning more about the tentative propositions (in Grounded Theory, this is referred to as theoretical sampling; see Glaser and Strauss, 1967). Once you have gone through this process of comparative analysis and theoretical sampling, you can have more confidence (not in a statistical but in an analytical sense) in your propositions.

Fourth, comparison of background variables. It took me a long time to learn the power of comparing background information, but its potential and importance are really high, especially when the qualitative analyst wants to move beyond description to deeper understanding. I believe it was Milton Friedman who said the goal of research should be to find the "hidden constructs" of reality. In quantitative studies this is done by a) identifying latent variables and b) finding statistical relationships between variables. In qualitative studies, we do not speak of relationships in the statistical sense but rather of "associations", which can be of many types (some examples below).

(The picture depicts different types of associations named by the researcher, including i.a. “challenge”, “solution”, “opportunity”)

Anyway, there is absolutely no reason why we should not pursue the discovery of hidden realities in qualitative studies as well. In NVivo, this can be done via classifications and attributes. First, decide which background variables you want to compare. Then include them in your interview questions. Once you have the material, code all interviews (each representing an informant) as nodes; to these nodes, you assign a classification schema with the chosen background attributes.

(The picture includes a comparison matrix of small and large firm representatives' views on customer service challenges; it can easily be expanded to include other dimensions as per the analysis framework.)

For example, suppose you would like to compare the views of small and large firms on a specific multichannel challenge, say customer service. You create a classification schema and give it the attribute "size" with two potential values, small and large. Then you run a matrix coding query with small and large as rows and customer service as a column. Here you can see the number of occurrences and, more importantly, you can click them open to see all the associated evidence. You are still tied to researcher judgment, or "interpretivism", when comparing the answers, but at least this way you can conduct comparisons more systematically and in accordance with your analytical framework. It also helps you discover patterns; for example, we might find that large firms tend to emphasize different challenges than smaller firms.
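The counting step of such a matrix query can also be reproduced outside NVivo once codings are exported as (informant, attribute, code) rows. The sketch below uses invented data and plain Python, just to show the structure of the query; NVivo itself is still needed for the coded evidence behind each cell.

```python
# Emulating a matrix coding query: count (firm_size, code) occurrences.
from collections import Counter

coded = [
    ("i1", "small", "customer service"),
    ("i2", "small", "logistics"),
    ("i3", "large", "customer service"),
    ("i4", "large", "customer service"),
    ("i5", "large", "pricing"),
]

# Each (firm_size, code) pair is one cell of the comparison matrix.
matrix = Counter((size, code) for _, size, code in coded)
print(matrix[("large", "customer service")])  # 2
```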

Finally, I’d say the fifth important aspect of qualitative analysis is visualization of the results, usually in the form of a model or framework. Unfortunately this is where NVivo fails hard.

(The picture shows a conceptual model: unfortunately, in NVivo it is not possible to draw a moderating line from a third variable to the relationship between two other variables.)

For example, you can’t draw “moderating” relationships, and variable names are cut short in languages such as Finnish, which has long words (I’ve reported these shortcomings to QSR, the maker of NVivo). Granted, moderation is usually understood as a property of quantitative studies, but there’s no reason why a qualitative framework shouldn’t incorporate moderating relationships in a conceptual model (which could later be tested with structural equation modeling, for example). So, until these problems are fixed, I’d recommend sticking to other tools for visualization, such as Microsoft PowerPoint.

The author holds a PhD in marketing, and is teaching and conducting research at the Turku School of Economics. His topics of interest include digital marketing, startup companies and platforms.

A Little Guide to AdWords Optimization

Hello, my young padawan!

This time I will write a fairly concise post about optimizing Google AdWords campaigns.

As usual, my students gave me the inspiration for this post. They’re currently participating in the Google Online Marketing Challenge and – from the mouths of children you hear the truth 🙂 – asked a very simple question: “What do we do when the campaigns are running?”

At first, I was tempted to say that you’ll do the optimization under my supervision, e.g. change ad texts, pause ads, and change keyword bids. But then I decided to write them a brief introduction.

So, here it goes:

1. Structure – have the campaigns been named logically (i.e., to mirror the website and its goals)? Are the ad groups tight enough (i.e., do they include only semantically similar terms that can be targeted by writing very specific ads)?

2. Settings – all features enabled, search network only, no search partners (this applies to Google search campaigns; the display network has different rules, but never ever mix the two under one campaign), language targeting set to Finnish, English, and Swedish (the languages Finns use in Google).

3. Modifiers – are you using location or mobile bid modifiers? Should you? (If unsure, find out quickly!)

4. Do you have a need for display campaigns? If so, use the display ad builder to create good-looking ads. Your targeting options are contextual targeting (keywords), managed placements (use the Display Planner to find suitable sites), audience lists (remarketing), and affinity and topic categories (the former targets people with a given interest, the latter targets websites categorized under a given interest, e.g. traveling). You can combine several of these in one campaign.

5. Do you have enough keywords to reach your target daily spend? (It’s good to have more than 100, even thousands of keywords in the beginning.)

6. What match types are you using? You can start from broad, but gradually move towards exact match because it gives you the greatest control over which auctions you participate in.

7. What are your options for expanding the keyword base? Look for opportunities by pulling a search term report from all keywords after the campaign has run for a week or so; this way you can also identify more negative keywords.

8. What negative keywords are you using? It’s very important to exclude yourself from auctions that are irrelevant to your business.

9. Pausing keywords – never delete anything, because you’ll lose the analytical trail; but regularly pause keywords that a) are the most expensive and/or b) have the lowest CTR or Quality Score.

10. Have you set bids at the keyword level? You should – it’s okay to start by setting the bid at ad group level, and then move gradually to keyword level as you begin to accumulate real data from the keyword market.

11. Ad positions – see if you’re competitive by looking at the Auction Insights report; if you have a low average position (below 3), consider either pausing the keyword or increasing your bid (and improving ad relevance – very important).

12. Are you running good ads? Remember, it’s all about text. You need to write good copy that is relevant to searchers. No marketing bullshit, please. Consider your copy an answer to the searcher’s request; it’s a service, not a sales pitch. This topic deserves its own post (and you’ll find plenty by googling), but for now, know that the best way (in my opinion) is to have two ads per ad group constantly competing against one another. Then pause the losing ad and write a new contender. Remember also that an ad can never be perfect: if your CTR is 10%, that’s really good, but with a better ad you could reach 11%.

13. Landing page relevance – you can see landing page experience by hovering over keywords. If it is poor, consider whether you can ask your client to make changes, or whether you can switch to a better landing page. Landing page relevance comes from the searcher’s perspective: when they write a search query, they need to be shown ads relevant to that query and then directed to the webpage that most closely matches it. Simple in theory; in practice, it’s your job to make sure there’s no mismatch here.

14. Quality Score – this is the godlike metric of AdWords. Anything below 4 is bad, so pause the keyword, or, if it’s relevant to your business, do your best to improve it. The closer you get to 10, the better (with no data, the default is 6).

15. Ad extensions – every possible ad extension should be in use, because they tend to earn a good CTR and also positively influence your Quality Score. This includes sitelinks, call extensions, reviews, etc.
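Points 9 and 14 boil down to a simple rule of thumb: flag (never delete) keywords that are expensive with a weak CTR, or that have a Quality Score below 4. Here is a minimal Python sketch of that heuristic; all keyword names, numbers, and thresholds are hypothetical, and in a real account this is a starting point for human judgment, not a replacement for it.

```python
# Hypothetical keyword statistics: cost in euros, CTR as a fraction,
# and Quality Score on the 1-10 scale.
keywords = [
    {"name": "cheap flights", "cost": 120.0, "ctr": 0.012, "qs": 3},
    {"name": "flight deals",  "cost": 15.0,  "ctr": 0.070, "qs": 8},
    {"name": "fly now",       "cost": 80.0,  "ctr": 0.045, "qs": 6},
]

def pause_candidates(keywords, cost_limit=50.0, min_ctr=0.05, min_qs=4):
    """Flag keywords as pause candidates (point 9: pause, don't delete)."""
    flagged = []
    for kw in keywords:
        # Expensive AND weak CTR, or Quality Score below 4 (point 14).
        expensive_and_weak = kw["cost"] > cost_limit and kw["ctr"] < min_ctr
        if expensive_and_weak or kw["qs"] < min_qs:
            flagged.append(kw["name"])
    return flagged

print(pause_candidates(keywords))  # ['cheap flights', 'fly now']
```

In practice you would sort by cost first, so the most expensive offenders get reviewed before the long tail.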

And, finally, the important metrics. You should always customize your column views at the campaign, ad group, and keyword levels. The list below gives an example of what I think are generally useful metrics to show – these may vary somewhat based on your case. (They can be the same for all levels, except that the keyword level should also include Quality Score.)

  • CTR (as high as possible, at least 5%)
  • CPC (as low as possible, in Finland 0.20€ sounds decent in most industries)
  • impression share (as high as possible for business-relevant keywords; in long-tail campaigns it can be low with the good reason of getting cheap traffic; generally speaking, this indicates scaling potential; I’ve written a separate post about this, which you can find by browsing my posts)
  • Quality Score (as high as possible, scale 1-10)
  • Cost (useful to sort by cost to focus on the most expensive keywords and campaigns)
  • Avg. position (TOP3 is a good goal!)
  • Bounce rate (as low as possible; it tends to be around 40% on an average website) (only shown if GA is connected, so connect if possible)
  • Conversion rate (as high as possible; tends to be 1-2% on ecommerce sites, more when the conversion is not a purchase)
  • Number of conversions (shows absolute performance difference between campaigns)
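Most of these metrics are simple ratios of raw account figures, which is worth internalizing before trusting any dashboard. A short Python sketch with hypothetical numbers:

```python
# Hypothetical raw campaign figures.
impressions = 20000
clicks = 1200
cost = 260.0       # euros
conversions = 18

ctr = clicks / impressions        # aim for at least ~5%
avg_cpc = cost / clicks           # the lower the better
conv_rate = conversions / clicks  # ~1-2% is typical on ecommerce sites
cost_per_conversion = cost / conversions

print(f"CTR: {ctr:.1%}")               # CTR: 6.0%
print(f"Avg. CPC: {avg_cpc:.2f} EUR")  # Avg. CPC: 0.22 EUR
print(f"Conv. rate: {conv_rate:.1%}")  # Conv. rate: 1.5%
```

Cost per conversion is not in the list above, but it is often the number a client actually cares about, so it is cheap to compute alongside the others.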

That’s it! Hope you enjoyed this post, and please leave comments if you have anything to add.