

Problem/Solution Space: A Startup Perspective

I was inspired to write this post by the following pictures that I’d included in my lecture material a few years ago. I’m writing it in a bit of a hurry since the class starts soon! (but it’ll be good enough to make the point)

(You can find the original source for the pictures by googling.)

Okay, a couple of things.

First, it’s highly important for a startup to define both the problem space and the solution space relating to their product. The problem space includes the particular pain points experienced by the customer whose problems we’re solving – at minimum, solving one pain point, if substantial enough, suffices to make a successful business. The solution space includes the competition — here, it is super important to consider not only the direct competition but also the indirect competition (ignoring the latter is a common mistake).

I call it the “pen and paper” test — can the problem you’re solving, most often with a high degree of technological sophistication, be solved in a simpler, non-technological way?

And more importantly, how are the customers solving it now? It takes a lot for them to change their habits, much more than founders typically think. The customer will not download an app to solve the problem — free or not — unless the app provides a solution several orders of magnitude better than what they currently have. So, bear this in mind.

Second, once the gravity of the problem we’ve set out to solve has been “validated” by more trustworthy means than guessing (such as customer development), the problem dimensions need to be tied formally to the product features the team is building (the second picture depicts this).

This way, we avoid waste in the startup development process (remember, waste is your biggest enemy because you’re always on borrowed time).

Third, after this the usage of these features needs to be backed up by real usage data — in other words, the product needs to be exposed to real users whose behavior is analyzed through engagement metrics (e.g., the time they spend with the product, which features they use, how frequently, etc.). For this, there needs to be a good analytics system built into the product. Follow the Facebook guideline here: you don’t know what data you might later need, so store everything. This enables maximum flexibility for subsequent analyses.
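To make the “store everything” guideline concrete, here is a minimal sketch of raw event logging (my own illustration, not tied to any particular analytics product; the table schema, event names, and SQLite backend are assumptions for the example):

```python
# Minimal "store everything" event tracking sketch (hypothetical schema): every
# interaction is logged as a raw event, so any future analysis of engagement,
# feature usage, or frequency remains possible.
import json
import sqlite3
import time

conn = sqlite3.connect("analytics.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        user_id    TEXT,
        event_name TEXT,   -- e.g. 'feature_opened', 'session_end'
        properties TEXT,   -- arbitrary JSON payload, kept raw on purpose
        ts         REAL    -- Unix timestamp
    )
""")

def track(user_id, event_name, **properties):
    """Log a raw event; aggregation happens later, at analysis time."""
    conn.execute(
        "INSERT INTO events VALUES (?, ?, ?, ?)",
        (user_id, event_name, json.dumps(properties), time.time()),
    )
    conn.commit()

# Instrument every feature, not only the ones you currently care about.
track("user_42", "feature_opened", feature="export_pdf")
track("user_42", "session_end", duration_sec=312)
```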

And finally, of course, when we get feedback on the usage of the product, we tie it back to the problem we set out to solve and conclude whether or not we’re actually solving it. If the data suggest low engagement, we need to start over and make radical changes to the core of the product. If the data paint a positive picture, we still continue with further adjustments to improve the user experience (which, of course, is by definition never good enough).

That’s it. Thank you for reading (and I’m off to class!)

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

Modern Market Research Methods: A Startup Perspective

EDIT: Updated by adding competitive analysis; it’s very important to benchmark competitors.

EDIT2: Updated by adding experimentation (14th April, 2016)

Introduction

Somebody on Quora was asking about ‘tools’ for validating viability and demand for a startup’s products.

I replied it’s not a question of tools, but plain old market research (which seems to be all too often ignored by startup founders).

Modern market research methods

In brief, I’d include the following options in a startup market research plan:

  1. market statistics from various consultancy and research institution reports (macro-level)
  2. general market (country, city) statistics generated just for your case (macro-level à la PESTLE)
  3. competitive analysis, i.e. benchmarking existing solutions — will help you find differentiation points and see if your “unique idea” already exists in the market
  4. (n)ethnography, i.e. going in depth into user communities to understand their motivations (micro-level; can be done offline and online)
  5. surveys, i.e. devising a questionnaire for relevant parties (e.g., customers, suppliers) to understand their motivations (just like the previous, but with larger N, i.e. micro-level study)
  6. customer development, which is most often used in B2B interviews as a presales activity to better understand the clients’ needs. Here’s an introduction to customer development (Slideshare).
  7. crowdfunding, i.e. testing the actual demand for the product by launching it as a concept on a crowdfunding platform – this is often referred to as presales, because you don’t have to have the product created yet.
  8. experimentation, i.e. running different variations against one another and determining their performance difference through statistical testing; the tests can relate to e.g. ad versions (value propositions, messages) or landing pages (product variations, landing page structure and elements). Here’s a tool for calculating the statistical significance of ad tests (a minimal code sketch follows after this list).
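To illustrate point 8, below is a minimal sketch of the statistical testing step: a standard two-proportion z-test on click-through rates. The function name and the click and impression figures are made up for the example and are not taken from the tool linked above.

```python
# Two-proportion z-test comparing the CTRs of two ad versions (illustrative numbers).
from math import sqrt, erfc

def ab_test(clicks_a, impressions_a, clicks_b, impressions_b):
    """Return the z statistic and two-sided p-value for CTR(A) vs CTR(B)."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    p_pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value via the normal distribution
    return z, p_value

# Example: version A got 120 clicks out of 4,000 impressions,
# version B got 150 clicks out of 4,200 impressions.
z, p = ab_test(120, 4000, 150, 4200)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would suggest a real CTR difference
```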

So, there. Some of the methods are “old school”, but some — such as crowdfunding — are newer ways to collect useful market feedback. Experimentation, although it may appear novel, is actually super old school. For example, one of the great pioneers of advertising, Claude Hopkins, wrote about ad testing and conversion optimization already in the 1920s. (You can actually download his excellent book, “Scientific Advertising”, for free.)

How to combine different methods?

The optimal plan would include both macro- and micro-level studies to get both the “helicopter view” and the micro-level understanding needed for product adoption. Which methods to include in your market research plan depends on the type of business. For example, crowdfunding can be seen as a market validation method most suitable for B2C companies, and customer development for B2B companies.

The punchline

The most important point is that you, as a startup founder, don’t get lured into the ‘tool fallacy’ — there’s no tool to compensate for the lack of genuine customer understanding.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

Dynamic Pricing and Incomplete People Information

One of the main problems in analytics is the lack of people information (e.g., demographics, interests). It is controlled by superplatforms like Google and Facebook, and as soon as you transition from the channel to the website, you lose this information.

So, I was thinking about this in the context of dynamic pricing. There’s no problem in determining an average solution, i.e. a price point set so that conversion is maximized on average. But that’s pretty useless, because as you know, averages are bad for optimization – too much efficiency is wasted. Consider dynamic pricing: willingness to pay (WTP) is what matters for setting the price, but it’s impossible to know the WTP function of individual visitors. That’s why aggregate measures *are* needed, but we can go beyond a general aggregate (average) to segmentation, and then use segment information as a predictor of conversion at different price points (by the way, determining the testing interval for price points is also an interesting issue, i.e. how big or small the increments should be — but that’s not the topic here).

Going back to the people problem — you could tackle this with URL tagging: 1) include the targeting info in your landing URL, and you’re able to do personalization such as dynamic pricing or tailored content by retrieving the targeting information from the URL and rendering the page accordingly. A smart system would not only do this, but 2) also record the interactions of different targeting groups (e.g., men & women) and use this information to optimize for a goal (e.g., determining the optimal price point per user group).
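As a rough sketch of this idea (the query-parameter names and price points are hypothetical; this is an illustration, not a reference implementation):

```python
# URL-tagging sketch: targeting info travels in query parameters such as
# ?segment=men&campaign=spring, is read back on the landing page, and is used
# to pick the price point to render.
from urllib.parse import urlparse, parse_qs

# Price points per targeting segment (made-up numbers for illustration).
PRICE_BY_SEGMENT = {"men": 29.0, "women": 27.0}
DEFAULT_PRICE = 28.0

def price_for_visitor(landing_url):
    """Extract the segment from the landing URL and return the price to render."""
    params = parse_qs(urlparse(landing_url).query)
    segment = params.get("segment", [None])[0]
    return segment, PRICE_BY_SEGMENT.get(segment, DEFAULT_PRICE)

segment, price = price_for_visitor("https://example.com/landing?segment=men&campaign=spring")
print(segment, price)  # -> men 29.0
# Step 2 of the idea: log (segment, price, converted?) for each visit, so the
# optimal price point per group can be estimated from the recorded interactions.
```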

These are some necessary features for a dynamic pricing system. Of course, then there’s the aforementioned interval problem; segmentation means you’re playing with less data per group, so you have fewer “trials” for effective tests. So, intuitively you can apply this rule: the less traffic the website has, the larger the increments (+/-) should be for finding the optimal price point. However, if the increments become too large, you’re likely to miss the optimum (it gets lost somewhere in between the intervals). I think there are some elegant algorithmic solutions to this in the multi-armed bandit literature.
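As a hedged illustration of that direction, here is a minimal epsilon-greedy bandit over candidate price points; the price grid, the exploration rate, and the revenue-per-visitor objective are all assumptions for the example.

```python
# Epsilon-greedy bandit sketch: each candidate price point is an "arm"; most
# traffic is shown the best-performing price, a small share explores the others.
import random

PRICE_POINTS = [19.0, 24.0, 29.0, 34.0]   # the testing interval / increments
EPSILON = 0.1                              # share of traffic used for exploration

trials = {p: 0 for p in PRICE_POINTS}      # visitors shown each price
revenue = {p: 0.0 for p in PRICE_POINTS}   # revenue collected at each price

def choose_price():
    """Mostly exploit the best observed price, sometimes explore another one."""
    untried = [p for p in PRICE_POINTS if trials[p] == 0]
    if untried:
        return random.choice(untried)
    if random.random() < EPSILON:
        return random.choice(PRICE_POINTS)
    return max(PRICE_POINTS, key=lambda p: revenue[p] / trials[p])

def record_outcome(price, converted):
    """Update the statistics after observing whether the visitor bought."""
    trials[price] += 1
    revenue[price] += price if converted else 0.0
```

In a segmented setup you would keep one such set of counters per segment, which is exactly where the “less data per group” problem bites.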

The Psychological Cost of Answering an Email

You’re not getting as many replies to your messages as you’d like. Why is that?

Well, there may be many reasons, but I’m discussing one of them here. It’s the psychological cost of processing an email and acting upon it. My hypothesis is simple:

The higher the psychological cost of answering an email, the lower the response rate.

This means: don’t make people think (the same principle applies in UX design!).

So, if you propose a meeting time, don’t give many choices — only give one; if that’s not okay, let them process it further (by that time the processing has already begun; it works like a bait).

If you give many choices, the person has to choose between them; also, they know they still have to wait for your reply, which carries a far higher psychological cost than just replying “ok”.

Remember, even if it doesn’t seem like much, people get so much email that any marginal increase in complexity is likely to sway them away from answering immediately, and toward postponing or even ignoring the message.

Any addition of cognitive effort will reduce the reply rates of your emails. As you’ll be sending many of them throughout your career, non-replies and delays add up and hinder your ability to achieve your goals in a timely manner. Therefore, learning how to write great emails is a hugely important skill. And one way to go about it is reducing the psychological cost for the recipient.

Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

A Quick Note on Bidding Theory of Online Ad Auctions

Introduction

This is a simple post about some commonly known features of online ad auctions.

The generalized second-price auction (GSP) is a mechanism in which the winning advertiser pays marginally more than the bid of the advertiser ranked just below them. It encourages the bidder to place a truthful bid, i.e. one where the price level is such that marginal returns equal marginal cost.

Why is this important?

Simply because:

truthful bid = incentive to bid higher

In other words, if you knew that the bidder behind you was bidding, say, 0,20 € and you were bidding 0,35 €, under a standard (first-price) auction you’d be tempted to lower your bid to 0,21 € and still beat the next advertiser.

In practice you wouldn’t directly know this because the bids are sealed; however, advertisers could programmatically try to find out other bids. With GSP, manually lowering bids to marginally beat your competition is not necessary. It’s therefore a “fair” and automatic system for pricing.
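To make the mechanism concrete, here is a simplified GSP sketch. It ignores quality scores and the other ranking factors real ad platforms use; it only shows the second-price logic.

```python
# Simplified generalized second-price (GSP) pricing: bidders are ranked by bid,
# and each slot winner pays just above the bid of the advertiser ranked below.
def gsp_prices(bids, slots, increment=0.01):
    """Return (bidder, bid, price paid) for each slot winner."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        bidder, bid = ranked[i]
        # Pay marginally more than the next-highest bid (or a floor if there is none).
        next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((bidder, bid, round(next_bid + increment, 2)))
    return results

# Example from the text: you bid 0.35 EUR, the next advertiser bids 0.20 EUR.
print(gsp_prices({"you": 0.35, "competitor": 0.20}, slots=1))
# -> [('you', 0.35, 0.21)]  i.e. you pay 0.21 EUR without having to lower your own bid.
```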

Of course, for the ad platform this system is also lucrative. When advertisers are all placing truthful bids, there is no gaming, i.e. no one is attempting to extract rents (excessive profits), and the overall price level settles higher than it would under gaming (theoretically, you could also model this so that the price level is equal in both cases, since it’s a “free market” where prices would settle at a marginal-cost equilibrium either way).


Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Google and the Prospect of Programmatic

Introduction

This is a short post taking a stance on programmatic ad platforms. It’s based on one single premise:

Digital convergence will lead to a situation where all ad spend, not only digital, will be managed through self-service, open ad platforms that operate based on auction principles.

There are several reasons why this is not yet a reality; some of them relate to the lack of technological competence of traditional media houses, some to their willingness to “protect” premium pricing (this protection has led to shrinking business and will keep doing so until they open up to free-market pricing), and a host of other factors (I’m actually currently engaged in a research project studying this phenomenon).

Digital convergence – you what?

Anyway, digital convergence means we’ll end up running campaigns through one or possibly a few ad platforms that all operate according to the same basic principles. They will look a lot like AdWords, because AdWords has been and still is the best advertising platform ever created. Self-service is critical because of the need to eliminate transaction costs in the selling process – in most cases we don’t need media salespeople to operate these platforms. Because we don’t need them, we won’t need to pay their wages, and this efficiency gain can be passed on to prices.

The platforms will be open, meaning that there are no minimum media buys – just like on Google and Facebook, you can start with 5 $ if you want (try doing that now with your local TV media salesperson). Pricing is determined via ad auction, just like on Google and Facebook nowadays. Price levels will drop, but the lowered barrier to access will increase liquidity and therefore fill inventory more efficiently than human-based bargaining. At least initially I expect some flux in these determinants — media houses will want to incorporate minimum pricing, but I predict it will go away in time as they realize the value of the free market.

But now, to Google…

If Google were smart, it would develop a programmatic ad platform for TV networks, or even integrate it with AdWords. The same actually applies to all media verticals: radio, print… Their potential demise will be this Alphabet business. All the new ideas they’ve had have failed commercially, and focusing on producing more failed ideas unsurprisingly leads to more failure. Their luck (or skill, however you want to take it) has been in understanding the platform business.

Just like Microsoft, Google must have people who understand the platform business.

They’ve done a really good job with vertical integration, mainly with Android and Chrome. These support the core business model. Page’s fantasy-land ideas really don’t. Well, from this point of view, separating Alphabet from the core actually makes sense, as long as the focus is kept on search and advertising.

So, programmatic ad platforms have the potential to disrupt Google, since search spend is still dwarfed by TV and other offline media spend. And in light of Google’s supposed understanding of platform dynamics, it’s surprising they’re not taking a stronger stance in bringing programmatic to the masses – and by masses, I mean offline media, where the real money is. Google might be satisficing, and that’s a road to doom.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

The Vishnu Effect of Startups (creators/destroyers of jobs)

Background

In Hindu scripture there is a famous passage in which the god Vishnu describes himself as death; to Westerners this is mostly known through Oppenheimer’s citation:

“Now, I am become Death, the destroyer of worlds.”

But there is another god in Hinduism, Brahma, who is the creator of the universe.

How does this relate to startups?

Just like these two gods, startups are of a dualistic nature. In particular, they are both job creators and job destroyers. On the one hand, they create new jobs and job types. On the other hand, they destroy existing jobs.

So what?

This dualistic nature is often ignored when evaluating the impact of startups on society, although it’s definitely at the core of the Schumpeterian theory of innovation. What really matters for society is the balance — how fast new companies are creating jobs vs. how fast they are destroying them.

I haven’t seen a single quantification of this effect, so it would definitely merit research. Theoretically, it could be called something like SIR, or startup impact ratio, which would be jobs produced divided by jobs destroyed:

SIR = jobs produced / jobs destroyed

As long as the ratio is above 1, startups’ impact on the job market (and therefore indirectly on society) is positive. In turn, if it’s below 1, “robots are taking our jobs”. Or, rather, above one means Brahma is winning, while below one means Vishnu is dominating.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

A major change in AdWords – How to react?

Introduction

Google has made a major change in AdWords. Ads are now shown only in the main column, no longer in the right column. Previously, there were generally speaking eight ads per SERP. For some queries, Google didn’t show ads at all, and additionally they’ve been constantly testing the limit, e.g. running up to 16 product listing ads per results page.

But what does that mean to an advertiser?

Analysis

The change means the number of ads shown per SERP (search-engine results page) is effectively reduced. Since the number of advertisers is not reduced (unless rotation is applied, see below), the competition intensifies. And since the visibility of search ads is based on cost-per-click auction, ceteris paribus the click prices will go up.

Therefore, the logical conclusion is that when ad placements are cut, either CPC increases (due to higher competition) or impression share decreases (due to rotation). In the former case you pay more for the same number of visitors; in the latter, you pay the same click price but get fewer visitors.
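A back-of-the-envelope illustration of the two scenarios, with entirely hypothetical numbers:

```python
# Illustrative comparison of the two scenarios: either CPC rises, or impression
# share (and hence click volume) falls. All figures are made up.
baseline_clicks, baseline_cpc = 1000, 0.50           # before the change
baseline_cost = baseline_clicks * baseline_cpc       # 500 EUR

# Scenario 1: same click volume, CPC pushed up by intensified competition.
cpc_up = 0.65
print("Scenario 1:", baseline_clicks, "clicks cost", baseline_clicks * cpc_up, "EUR")

# Scenario 2: same CPC, but impression share (and thus clicks) reduced by rotation.
impression_share = 0.70
clicks_rotated = int(baseline_clicks * impression_share)
print("Scenario 2:", clicks_rotated, "clicks cost", clicks_rotated * baseline_cpc, "EUR")
# Either you pay more for the same traffic, or the same rate for less traffic.
```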

The reason Google might in fact prefer ad rotation, i.e. curbing an individual advertiser’s impression share (the number of times your ads are shown out of all the possible times they could have been shown), is that it wouldn’t impact the advertiser’s return on ad spend (ROAS), which is a relative metric. However, it would affect the absolute volume of clicks and, consequently, sales.

Some of my campaigns use a longtail positioning strategy that this change will affect, since those campaigns target positions 4+ which, as said, are mostly no longer available. Most likely, the change will completely eradicate the possibility of running those campaigns with my low CPC goal.

Why did Google do this?

For Google, this is a beneficial and logical change, since right-column ads command lower CTRs (click-through rates). This has two implications – first, they bring in less money for Google, since its revenue is directly tied to the number of clicks; second, as is commonly known, Google uses CTR as a proxy for user experience (for example, it’s a major component in the Quality Score calculations which determine the true click price).

Therefore, removing poorly performing ad placements while pushing advertisers into increased competition is a beneficial situation for Google. In the wider picture, even with higher click prices, the ROI of Google ads is not easily challenged by any other medium or channel, at least as far as I can see in the near future.

However, for advertisers it may easily mean higher click prices and therefore decreasing returns from search advertising. This conflict of interest is an unfortunate one for advertisers, especially given the skewed distribution of power in their relationship with Google.

(On a side-note, the relationship between advertisers and Google is extremely interesting. I studied that to some extent in my Master’s thesis back in 2009. You can find it here: https://www.dropbox.com/s/syaetj8m1k66oxr/10223.pdf?dl=0)

Conclusion

I recommend you review the impact of this change on your accounts, either internally or, if you’re using an agency, together with them.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

How to prevent disruption from happening to you? AKA avoiding the “Vanjoki fallacy”

Introduction

A major issue for corporations is how they can avoid being disrupted. This is a well-known issue; Christensen, for example, discusses it in his book “The Innovator’s Dilemma”. But I’m going to present a simple solution for it here.

Here it is.

Rule Number 1: Don’t look at absolute market shares, look at growth rates

I call this the “Vanjoki fallacy”, based on the fatal error Vanjoki made while at Nokia, namely thinking that “Apple only has 3% market share, we have 40%. Therefore we are safe”, when he should have looked at growth rates, which were of course by far in Apple’s favor. Looking at growth rates forces you to try to understand why, and you might still have a chance of turning the disruption around (although that’s not guaranteed).

“How can I do it?”

So, how to do it? Well, you should model your competitors’ growth – as soon as any of the relevant measures (e.g., revenue, product category, product sales) shows exponential growth, that’s an indicator of danger for you. Here’s the four-step process in detail.

First, 1) start out by defining the relevant measures to track. These derive from your industry and business model, and they are common goal metrics that you and your competitor share, e.g. sales.

Second, 2) get the data – easy enough if they are public companies, since their financial statements should have it. Notice, however, that there is a reporting lag when retrieving data from financial statements, which plays against you since you want knowledge of potential disruptors as early as possible. You might want to look at other sources of data, e.g. Google Trends or some other proxy of their growth.

Third, 3) model the data; this is done by simply fitting the data to different statistical models representing various growth patterns — remember derivatives at school? It’s like that: you want to know how fast something is growing. Most importantly, you want to find out whether the growth is linear, exponential, or logarithmic.
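As a rough sketch of this modeling step (the yearly revenue figures are made up for the example):

```python
# Fit linear, exponential, and logarithmic shapes to a competitor's yearly numbers
# via simple least squares on (transformed) data, and compare the fit errors.
import numpy as np

years = np.array([1, 2, 3, 4, 5], dtype=float)
revenue = np.array([10, 14, 21, 33, 52], dtype=float)   # made-up competitor data

# Linear: revenue ~ a * year + b
a, b = np.polyfit(years, revenue, 1)
linear_pred = a * years + b

# Exponential: log(revenue) ~ c * year + d  =>  revenue ~ exp(d) * exp(c * year)
c, d = np.polyfit(years, np.log(revenue), 1)
exp_pred = np.exp(d) * np.exp(c * years)

# Logarithmic: revenue ~ e * log(year) + f
e, f = np.polyfit(np.log(years), revenue, 1)
log_pred = e * np.log(years) + f

for name, pred in [("linear", linear_pred), ("exponential", exp_pred), ("logarithmic", log_pred)]:
    sse = np.sum((revenue - pred) ** 2)
    print(f"{name:12s} sum of squared errors: {sse:.1f}")
# The shape with the smallest error best describes the competitor's growth;
# an exponential fit winning is the warning sign discussed below.
```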

How to interpret these? Well, if it’s linear, good for you (provided your own growth is also at least linear). If it’s exponential, that’s usually bad for you. If it’s logarithmic, it depends where they are in the growth phase (if this seems complicated, google ‘logarithmic growth’ and you’ll see what it looks like). Now, compare the competitor’s growth model to yours – do you have reason to be concerned?

Finally, 4) draw actionable conclusions and come up with a strategy to counter your opponent. Fine, they have exponential growth. But why is that? What are they doing better? Don’t be like that other ignorant Nokia manager, Olli-Pekka Kallasvuo, who publicly said he doesn’t have an iPhone and that he will never get one. Instead, find out about your competitors’ products. Here is a list of questions:

  • What makes their products better?
  • What makes their processes better?
  • What makes their brand better?
  • What makes their business model better?
  • What makes their employees better?

Find out the answers, and then make a plan for the best course of action. You may want to identify the most likely root causes of their growth, and then either imitate them, neutralize them (if possible), or counter-disrupt them with your next-generation solution.

Conclusion

In conclusion, don’t be fooled by absolute values. The world is changing, and your role as a manager or executive is to be on top of that change. So, do the math and do your job. The corollary to this approach, by the way, is to create some kind of “anti-disruption” alert system — that would make for a nice startup idea, but it’s a topic for another post.

Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.

Contact email: [email protected]

European financial crisis – the next steps?

Introduction

With this post, I’m anticipating the next phase of the debate on the European financial crisis, as the problem of asynchronous economies isn’t going away. The continent is currently consumed by the refugee crisis, but sooner or later attention will return to this topic, which hasn’t been properly dealt with.

The problem

In brief, there are two countries:

  • Country A – “good country” with flourishing exports and dynamic domestic market
  • Country B – “bad country” with sluggish exports and a slow domestic market

Both countries, however, have the same monetary policy. They cannot control the money supply or the key interest rate by themselves according to their specific needs; instead, these come as some kind of average for both – and this “average” is either not optimal for either country, or optimal for one but not the other.

As Milton Friedman asserted long ago, differences of this kind result in a non-optimal currency area. We’ve seen his predictions take form in the ongoing European financial crisis, which in this case results from the non-optimal nature of the European Monetary Union (EMU).

How to solve the problem?

Some potential solutions are:

1. Fiscal transfers from surplus to deficit countries — this seems politically impossible, and also leaves the moral hazard problem wide open (this solution removes the incentive to make structural reforms, and is dangerous in the sense that it can breed resentment between EMU countries)

2. Transferring budget control to the European Central Bank (ECB) — in this case, the central bank would exercise supreme power over national budgets, and would approve only balanced budgets. From a simplistic point of view, this seems appealing because it would forcefully prevent overspending, and there would be no need for the dreaded fiscal transfers.

However, the problems with this approach are the following:

a. It takes away the sovereignty of nations — not a small thing at all, and non-federalists like myself would reject it for this reason alone.

b. The economic issue with it is the ‘shrinking economy’ problem – according to Keynesian logic, the state needs to invest when the private sector is in a slump in order to stimulate the economy. Failing to do so risks a vicious cycle of increasing unemployment and decreasing consumption, resulting in a shrinking, not growing, GDP.

So, I’m not exactly supporting the creation of balanced budgets at a time of distress. The only way it can work is as a form of “shock therapy” which would force the private sector to compensate for decreasing public-sector spend. Which, in turn, requires liquidity, i.e. capital. Unfortunately, a lack of trust in a country also tends to be reflected in that country’s companies in the form of higher interest rates.

Which leads me to another potential solution that again looks elegant but is a trap.

3. Credit pooling (euro-bonds)

This is just sub-prime all over again. In other words, we take the loans of a reliable country (credit rating A) and mix them with those of an unreliable country (credit rating C), and give the whole “package” an overall rating of B, which seems quite enticing for the investors buying these bonds. By hiding the differences in ability to handle debt, the pool is able to attract much more money. In brief, everyone knows this leads to the dark side of moral hazard and will eventually explode.

For this reason, I’m categorically against euro-bonds. In fact, the European debt crisis was in large part due to investors treating sovereign bonds as if they were joint bonds, granting Greece lower interest rates than it would have received had it not been an EMU member state. Ironically enough, some people actually praised this as a positive effect of the monetary union.

Conclusion and discussion

So, what’s the final solution then? I think it’s the road of enforcing the subsidiarity principle, in other words returning economic power to national governments. The often-evoked manifestation of this, the dissolution of the euro, could potentially be avoided by using the national banks (e.g., the Bank of Greece) as interest-rate setters, while the ECB would keep the money supply under its control.

I even considered whether the money supply could also be given to the national banks, but the risk of moral hazard is too big, and it would raise inflation concerns. Controlling the key interest rate, however, would be important, especially in the sense that it could be set *higher* in “good countries” than what they currently have. Consider a high interest rate (i.e., low credit expansion) in Germany and a low interest rate (i.e., high credit expansion) in Greece; the two effects could cancel each other out and dispel the fear of inflation.

However, the question is – are the “good countries” willing to pay a higher interest rate for the “bad countries’” sake? And would this solution escape moral hazard? For it to work, the ECB would either have to credibly commit to the role of lender of last resort, or else become the first lender. In either case, we seem to recursively come back to the risk of reckless crediting (unless the national banks would do a better job of monitoring the agents, which they actually might).

In the end, something has to give. I’ve often used the eurozone as an example of a zero-sum game: one has to give so that the other can receive. In such a setting, it is not possible to create a solution that results in equal wins for all players. Sadly, politicians cannot escape economic principles – they are simply not a question of political decision-making. The longer they pretend otherwise, the larger the systemic risks associated with the monetary union grow.

Joni Salminen
DSc. in Econ. and Business Adm.
Turku School of Economics

The author has been following the euro-crisis since its beginning.