Archive for the english category

Joni

Social media marketing for researchers: How to promote your publications and reach the right people


Today the Social Computing group at Qatar Computing Research Institute had the pleasure of listening to a presentation by Luis Fernandez Luque on social media marketing for researchers. Luis talked about how to promote your publications and personal brand, and how to reach the right people on social media with your research.

Luis is one of the most talented researchers I know, and a very good friend. He has two amazing girls and a great wife. You can follow Luis’ research on health informatics on Slideshare, Twitter, and of course connect with him on LinkedIn.

In this post, I’ll summarize some points of his presentation (if you want the full thing, you need to ask him :) and reflect on them in light of my own experiences as a digital marketer.

Without further ado, here are 7 social media tips for researchers.

1. Upload your articles to the 3 big social media platforms for researchers

According to Luis, there are three major social media sites for researchers. These are:

You should post your papers on each of these platforms to get extra visibility. According to Luis, the point is to disseminate content on existing platforms because they already have a critical mass of audience. This is preferable to starting your own website from scratch and trying to attract visitors.

However, I recommend doing both. In addition to sharing your research on social media, you can have a separate website for yourself and dedicated websites for your research projects. Having dedicated websites with a relevant domain provides search-engine optimization (SEO) benefits. In particular, websites are indexed better than social media sites, which means you have a better chance of being found. Your papers will get indexed by search engines and therefore attract occasional hits, depending on your chosen keywords and the competition for them (see point number 4).

For the same reason, you want to cross-link and cross-post your content effectively. For example, 1) publish the post on your own website, 2) re-publish it on LinkedIn, and 3) share it on Twitter, LinkedIn, and Google+ (as well as researcher social networks if it’s academic content, but here I’m referring to idea posts or popularized articles). Don’t forget Google+, because occasionally those posts show up in search results. Sharing can be repeated and scheduled using BufferApp. For example, I have all my LinkedIn articles mirrored at jonisalminen.com.

Finally, besides your research papers, consider sharing your dissertation as well as your Bachelor’s and Master’s theses. Those are often easier to read and reach a wider audience.

2. Recycle content and ideas

Luis mentioned he was able to increase the popularity of one of his papers by creating a Slideshare presentation about it. This principle is more commonly known as a content tree in inbound marketing. I completely agree with Luis’ advice – it is often straightforward and fast to create a presentation based on your existing paper, because you already know what you want to say.

If you have conference presentations or teaching material readily available, even better. For example, I’ve shared all my digital marketing lectures and teaching material on Slideshare, and they steadily attract views (tens of thousands in total so far). Here is an example of a presentation I made based on the post you’re reading. As you can see, it has an interesting title that aims to be “search-engine optimized”. By scrolling down, you also notice that Slideshare converts the presentation into plain text as well. This is good for search-engine visibility, and one reason why Slideshare presentations rank well in Google. The picture from my Slideshare Analytics shows that many people find the presentations through Google.

Figure 1. Slideshare Analytics showing a large share of search traffic.

Luis also mentioned including the name of your publication on the title slide, which is a good idea if you want to attract more citations from interested readers.

3. Create an online course

MOOCs and other forms of online education are a great way to disseminate your ideas and make your research better known. Luis mentioned two platforms for this:

The point is to share knowledge and at the same time mention your own research. I think Luis mentioned he had at one point 4,000 participants in his course, which is a very large audience and shows the power of online courses compared to traditional classrooms (I had at most 100 students in my course, so you can see how big the difference in reach is).

4. Choose the right title

This is like copywriting for researchers. The title plays an important role for two reasons: 1) it determines whether people become interested and click through to read your paper, and 2) it can increase or decrease your chances of being found in Google. A direct analogy is journalism: you want some degree of click-bait in your title, because you are competing against all other papers for attention. However, in my experience many scholars pay little attention to the attractiveness of their paper’s title from the clicker’s perspective, and even fewer perform keyword research (post in Finnish) to find out how popular related keywords are.

So, how to choose the title of a research paper?

  1. Research & include relevant keywords
  2. Mention the problem your research deals with

The title should be catchy (=attractive) and include keywords people use when searching for information on the topic, be it research papers or just general knowledge. Luis’ tip was to include the problem (e.g., diabetes) in the title to get more downloads. Moreover, when sharing your papers, use relevant hashtags. In academia, the natural way is to identify conference hashtags relating to your topic — as long as it’s relevant, using conference hashtags to promote your research is okay.

You can use tools such as Google Keyword Planner and Google Trends for keyword research. To research hashtags, Twitter’s recommendation feature is an easy approach (e.g., in TweetDeck you get recommendations when you start typing a hashtag). You can also use tools such as Hashtagify and Keyhole to research relevant hashtags. Finally, also include the proper keywords in your abstract. While full papers are often hidden behind paywalls, abstracts are indexed by search engines.
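If you prefer to script this step, here is a minimal sketch using the unofficial pytrends library to compare search interest for candidate title keywords. The library choice and the candidate keywords are my illustrative assumptions, not something Luis mentioned, and the API may change over time:

```python
# pip install pytrends
from pytrends.request import TrendReq

# Candidate keywords you are considering for a paper title (illustrative examples)
candidates = ["social media marketing", "digital marketing", "online marketing"]

pytrends = TrendReq(hl="en-US")                              # connect to Google Trends
pytrends.build_payload(candidates, timeframe="today 12-m")   # interest over the last 12 months

interest = pytrends.interest_over_time()                     # DataFrame: one column per keyword
print(interest[candidates].mean().sort_values(ascending=False))  # rank by average interest
```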

5. Write guest blogs

Instead of trying to make a go of your own website (which is admittedly tough!), Luis recommended writing guest posts on popular blogs. The rationale is the same as with social media platforms: these venues already have an audience. As long as the blog deals with your vertical, the audience is likely to be interested in what you say. For content marketers, getting quality content is a consistent source of concern, so it is easy to see a win-win here.

For example, you can write for a research foundation’s blog. If they gave you money, this also shows you are actively trying to popularize your research, and they get something in return for their money! Consider industry associations as well (e.g., I haven’t gotten around to it yet, but I would like to write for IAB Finland’s blog since they have a large audience interested in digital marketing).

6. Define your audience

Luis advised defining your audience carefully – it is all about determining your area of focus and where you want to make an impact. On social media, you cannot control who sees your posts, but you can increase the chances of reaching the right people with this simple recipe:

  1. Find out who the important people in your field are
  2. Follow them on Twitter and LinkedIn
  3. Tag them in posts on both platforms.

The last point doesn’t always yield results, but I’ve also had some good experiences by including the Twitter handle of a person I know is working on the topic I’m writing about. Remember, you are not spamming but asking for their opinion. That is perfectly fine.

7. Track and optimize

This is perhaps the most important thing. Just like in all digital marketing, you need to work on your profile and social media activity constantly to get results. The competition is quite high, but in academia, not many are fluent in social media marketing. So, as long as you put in some effort, you should get results more easily than in the commercial world! (Although, truth be told, you are competing with commercial content as well.)

How to measure social media impact?

  • choose metrics
  • set goals
  • track & optimize

For example, you could have reads/downloads as the main KPI. Then, you could set the goal of increasing that metric by 30% in the next six months. Finally, you would track the results and act accordingly. The good thing about numbers and small successes is that you become addicted. Well, this is mostly a good thing, because in the end you also want to get some research done! But as you see that your posts get some coverage, it encourages you to carry on. And gradually you are able to increase your social media impact.
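As a toy illustration of the choose-metrics, set-goals, track loop, here is a small sketch that checks progress toward a +30% downloads goal. All the numbers are made up for the example:

```python
# Toy tracker for a single KPI goal: +30% downloads over six months (illustrative numbers)
baseline_downloads = 1000               # downloads at the start of the period
target_growth = 0.30                    # the +30% goal
monthly_downloads = [1020, 1080, 1150]  # observed so far, month by month

target = baseline_downloads * (1 + target_growth)
latest = monthly_downloads[-1]
progress = (latest - baseline_downloads) / (target - baseline_downloads)

print(f"Target: {target:.0f} downloads, latest: {latest}, progress: {progress:.0%}")
```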

A research group could do this as a whole by giving somebody the task of summarizing the social media reach of individuals and of the group as a whole. It would be fairly easy to incentivize good performance and encourage knowledge sharing on what works. By sharing best practices, the whole group could benefit. Besides disseminating your research, social media activity can increase your citations, as well as improve your chances of receiving funding (as you can show “real impact” through numbers).

The tool recommended by Luis is called Altmetric, which is specifically tailored for research analytics. I haven’t used it before, but will give it a go.

Conclusion

The common theme is sharing your knowledge. In addition to just posting, you can also ask and answer questions on social media sites (e.g., on ResearchGate) and practitioner forums (e.g., Quora). I was able to beat my nemesis Mr. Valtteri Kaartemo in our Great Dissertation Downloads Competition by being active on Quora for a few weeks. Answering Quora questions and including a link in the signature got my dissertation over 1,000 downloads quickly, and since some questions remain relevant over time, it still helps. But this is not only about competitions and your own “brand”; it is about using your knowledge to help others. Think of yourself as an asset – society has invested tremendous amounts of time, effort and money into your education, and you owe it to society to pay some of it back. One way to do that is sharing your knowledge on social media.

I still remember one professor saying a few years ago that she doesn’t put her presentations on Slideshare because “somebody might steal the ideas”. But as far as I’m concerned, a much bigger problem is that nobody cares about her ideas. We live in a world where researchers compete against all sources of information – and we must adapt to this game. In my experience, the ratio of effort put into conducting research versus communicating it is completely skewed, as most researchers lack the basic skills for social media marketing and hardly do any content marketing at all.

This is not only harmful to their careers, but also to the various stakeholder groups that miss the important insights of their research. And I’m not only talking about popularization: other researchers also increasingly rely on social media and search engines to find relevant papers in their field. Producing high-quality content is not enough; you also need to market your papers on social media. By doing so, you are doing a service to the community.

Readings

Joni

The balanced view algorithm


I recently participated in a meeting of computer scientists where the topic was “fake news”. The implicit assumption was: “we will build this tool X that shows people what is false information, and they will become informed.”

However, after the meeting I realized this might not be enough, and may in fact be naïve thinking. It may not matter that algorithms and social media platforms show people ‘this is false information’. People might choose to believe the conspiracy theory anyway, for various reasons. In those cases, the problem is not a lack of information; it is something else.

And the real question is: Can technology fix that something else? Or at least be part of the solution?

The balanced view algorithm

Because, technically, the algorithm is simple:

  1. Take a topic
  2. Define the polarities of the topic
  3. Show each user an equal amount of content from each polarity

=> results in a balanced and informed citizen!
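In code, a minimal sketch of step 3 could look like the following. The content pool, polarity labels, and feed size are made up for illustration; this is not a real recommender:

```python
import random

# Toy content pool: each item is tagged with its polarity on a given topic (illustrative data)
content = {
    "pro":     ["pro_article_1", "pro_article_2", "pro_article_3"],
    "contra":  ["contra_article_1", "contra_article_2", "contra_article_3"],
    "neutral": ["neutral_article_1", "neutral_article_2", "neutral_article_3"],
}

def balanced_feed(content_by_polarity, items_per_polarity=2):
    """Return a feed with an equal number of items from each polarity, shuffled."""
    feed = []
    for polarity, items in content_by_polarity.items():
        feed.extend(random.sample(items, k=min(items_per_polarity, len(items))))
    random.shuffle(feed)
    return feed

print(balanced_feed(content))
```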

But, as said, if the opposing content goes against what you want to believe, then the problem is not “seeing” enough of that content.

Conclusion

These are tough questions and reside at the interface of sociology and algorithms. On one hand, some of the solutions may approach manipulation but, as propagandists could tell you, manipulation has to be subtle to be effective.

The major risk is that people might rebel against a balanced worldview. It is good to remember that ‘what you need to see’ is not the same as ‘what you want to see’. There is little that algorithms can do if people want to live in a bubble.

Originally published at https://algoritmitutkimus.fi/2017/04/16/the-balanced-view-algorithm/

 

Joni

The strategy algorithm


Introduction

The purpose of the strategy algorithm is to present a simple, parsimonious, and proven method for successfully creating a corporate strategy.

In corporations, the problems usually do not relate to a lack of resources or options, but to the complexity of having, in fact, too many choices. This can lead to an illusion of superiority, which is not a short-term problem since the corporation is protected by its existing buffers, but which will become a long-term issue once external conditions have tilted enough to cause a disruption driven by changing customer needs or competitors’ superior solutions. Therefore, any managing director or CEO needs simple guiding principles to reduce complexity into something manageable. The strategy algorithm (SA) is one such tool.

The strategy algorithm

The goal of the SA is to find a unique competitive advantage that the customers appreciate, that can be executed, and that is not the focus of any existing competitors. This goal is known as the strategic goal. The steps are as follows:

Phase 1

1. Define customer segments – what benefits are important for each segment?
2. Conduct competitor analysis – what segments are not focused on by any competitor?
3. Conduct internal analysis – what resources do we have and need to capture that segment?

Phase 2

4. Then, make sure 1-3 are co-aligned (=write out the strategy).
5. Then, define strategic projects to remove bottlenecks and create assets (=resources that serve the strategic goal).
6. Then, execute with strong focus (=discard anything that deviates from the strategic goal).
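To make the co-alignment in step 4 concrete, here is a toy sketch that selects segments valued by customers, not focused on by any competitor, and reachable with current resources. The data structures and criteria are my illustrative assumptions, not part of the original method:

```python
# Toy co-alignment check for steps 1-4 of the SA (all data illustrative)
segments = {
    "budget_buyers":  {"valued_benefit": "low price",      "competitor_focus": True,  "required": {"cost_leadership"}},
    "eco_conscious":  {"valued_benefit": "sustainability", "competitor_focus": False, "required": {"green_supply_chain"}},
    "premium_buyers": {"valued_benefit": "status",         "competitor_focus": False, "required": {"luxury_brand", "design_team"}},
}
our_resources = {"green_supply_chain", "design_team"}

def strategic_candidates(segments, resources):
    """Segments that no competitor focuses on and that we can serve with current resources."""
    return [
        name for name, s in segments.items()
        if not s["competitor_focus"] and s["required"] <= resources
    ]

print(strategic_candidates(segments, our_resources))  # -> ['eco_conscious']
```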

Applying the strategy algorithm

As you can see, Phase 1 is geared toward research and planning, and Phase 2 toward implementation.

In step 1, you can use techniques such as:

  • conjoint analysis
  • personas (ethnography, interviews, surveys, social media analysis)

Conjoint analysis aims to find the product attributes that customers value most. Another option is to summarize customer segments into personas, which are fictional but descriptive characterizations of customer groups.

In step 2, “focus” is the keyword. Competitors can operate in the same market and offer similar products, but the main point is that they are not focusing on it (=their turnover does not depend on it, and they are not investing heavily in product development, marketing and distribution). In other words, by taking that focus yourself, competitors will remain at bay, because they have more important priorities. An example is Nokian Tyres – at one point it was a generic tyre company, but as an outcome of strategic work it re-focused around the “Trusted by the natives” guideline, i.e. winter tyres.

In step 3, you need to conduct a gap analysis of ‘what we have and what we need’. An example is Stephen Elop at Nokia – he recognized that the mobile world was moving to software ecosystems and that Nokia had redundant know-how about legacy mobile software. In hindsight, we can say he should have fired and hired much more aggressively to transform the company into a focused, competitive unit.

Acknowledgments

The thinking borrows heavily from the Master’s thesis of Lasse Kurkilahti (Turku School of Economics), as well as related works from Michael Porter, W. Chan Kim, Renée Mauborgne, and other strategic thinkers.

 

Joni

Using flow principles to introduce addiction to a mobile or Web app


Introduction

Flow is a well-known concept in psychology, introduced by Mihaly Csikszentmihalyi in 1975. It describes the state of losing yourself in a task: basically losing the sense of time and being very immersed in and focused on what you are doing. While many professionals (including myself!) would kill to be in flow 100% of the time, because it would greatly enhance their productivity (and true professionals always want to get things done!), flow is also important for UX designers and software developers, who can use it to enhance the success of their applications, especially given that 80% of users are likely to drop off after registering.

How to improve the flow of users?

To improve the likelihood of flow for your users, follow these principles:

  1. The user has a clearly defined, simple goal
  2. He or she intuitively understands how to reach that goal
  3. The goal must be achieved with minimal effort
  4. He or she must immediately know whether the attempt succeeded
  5. There must be an immediate transition to a new task

(I compiled them from Csikszentmihalyi’s ideas and this article on gambling.)

Let’s use Tinder as an example.

First, I use Tinder to meet a girl (or the girl). That equals a clearly defined goal. Second, straight after logging in I see a picture with ‘❤’ or ‘X’ as choices – there is no need to explain what to do; in other words, the app is intuitive. Third, all I need to do is swipe left or right, i.e. use minimal effort to get a small reward each time. In addition, it’s much easier to use Tinder than to go out into the real world to meet people, which would be an alternative way to accomplish my goal (=minimal effort). I will instantly know if I was lucky because the system alerts me of matches as they happen (=instant gratification). Whether or not there is a match, I’m instantly shown another choice that follows the same simple pattern (=ludic loop). Although I rarely get matches, that’s okay because I instantly get a new chance (=there is no way to exit the loop).

As a result, I am addicted.

A few notes on the applicability of flow principles

1. The size of the reward is not important at all; much more important is that you get it straight away (=the principle of instant gratification). So, if you’re providing one reward of size X, it can make more sense to split it into n parts, so that the reward size becomes X/n.

2. It’s irrelevant whether or not the method applied is the best method to reach the goal. In fact, I’m a firm believer that Tinder (or online dating in general) doesn’t work too well, because you need to meet people in person to see whether there is chemistry (the app won’t tell you that, and it’s the most important factor to me). So why do I use Tinder? Because all that reasoning takes place at a high level of thinking (=high effort), and the app overrides it by giving me instant rewards with low effort. So, although I know it’s not efficient, that doesn’t matter because I get some enjoyment out of it. Makes sense? That’s how we people are!

3. As a corollary to the previous point, you can see that splitting the effort X into smaller increments of X/n can result in a situation where the individual spends more time doing those n increments than he or she would spend just doing the task X. The cleverest people use this feature to motivate themselves to work – they say “I’ll only do this one little task”, and end up doing a lot more. But this also has implications for mobile and Web developers, and for crowdsourcing, because it effectively enables a micro-task design to beat a full-task design in the quantity of output.

Conclusion

The principles laid out here apply to social media feeds. Just think about it: every post gives you a small satisfaction. Or, rather, their marginal utility is distributed randomly, which makes it exciting to you, the player – you know that post quality varies, so you might hit a “jackpot” of finding something interesting like a job opportunity, or simply miss — either way, you get the feedback instantly. And the effort is marginal: just like the pigeons in Skinner’s famous box, you only need to “pull a lever” (i.e., scroll down). Because you don’t succeed every time, you pull the lever more. Very easy, very addictive. And there’s pretty much no way to avoid it, because it utilizes universal and inherent features of the human psyche.
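A minimal simulation of that lever-pulling dynamic, under the purely illustrative assumption that each scroll has a small fixed chance of a “jackpot”, looks like this:

```python
import random

def scroll_session(jackpot_probability=0.05, max_scrolls=1000, seed=0):
    """Simulate scrolling a feed until the first 'jackpot' post appears.

    Each scroll gives instant feedback (hit or miss), mimicking the variable-ratio
    schedule described above. All numbers are illustrative.
    """
    random.seed(seed)
    for scroll in range(1, max_scrolls + 1):
        if random.random() < jackpot_probability:
            return scroll  # instant gratification: a jackpot after this many pulls
    return max_scrolls

sessions = [scroll_session(seed=s) for s in range(10)]
print("Scrolls until jackpot in 10 sessions:", sessions)
```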

Joni

User feedback: A startup perspective


Introduction – the first-order problem

The first-order problem for startups is often that they are not making something people want enough to pay for. As you can see from the CB Insights data, founders identify this as the most common reason for failure.

Figure 1. Reasons for startup failure

Notice the connection between 1 and 2: we can paraphrase that as “there was no market need, therefore the startup ran out of cash.” Investor hype aside (think of Twitter), most startups don’t have the luxury of living for years with negative profitability. This is why I emphasise the part ‘enough to pay for’ – contrary to Andrew Chen and others who advise first getting users and then figuring out how to make money [1], I’m of the small but growing ‘direct monetization’ school of thought [2].

The solution for this problem is evident: find out from users what they want or need, and then build it.

Second-order problem

But, this is followed by a second-order problem: How should you learn from the users? It’s not evident at all; let me elaborate.

  • First, if you ask users what they want, you get inaccurate feedback because the users don’t know all the possibilities. In other words, people don’t know what they want (feel free to insert a Henry Ford quote here).
  • Second, if you show them a demo, you get inaccurate feedback because your product is not ready, and the users cannot magically “imagine” how it would solve their problems if it were ready.

Lean to the rescue?

Eric Ries (video), and a large number of his followers (video), advocate the ‘Minimum Viable Product’ (MVP) as the solution. The theory goes that “it’s enough that your MVP demonstrates the solution”: potential customers are shown how the product essentially solves their problem, and they fill in the gaps for the rest. The key difference to a demo is the tight connection to the problem – we only need to show the logical connection between the problem and the solution (this is referred to as problem-solution fit [3]), and for that we do not necessarily even need a laptop.

However, the MVP approach has two major problems. First of all, for many problems you cannot create an effective MVP. Consider the Apple Pencil, or many other products of that company. “See, here’s a pen – would you use it?” It’s not very effective – you miss all the subtleties that the final product has and that people pay for. Oftentimes, they pay for fine details, not for the crude core solution. For this reason, the final product often ends up being very different from an MVP, which is closer to a prototype. Second, there are complex problems which have, say, one main problem and two sub-problems: for example, to speed up the set-up of a manufacturing plant, you need to solve logistical bottlenecks. But how do you capture that complexity in your MVP? For these kinds of problems, it’s all or nothing: a partial solution won’t do. Moreover, they require a deep understanding of the customer’s circumstances, which is not usually part of the MVP gospel, centered as it is on simple consumer software products as opposed to, say, B2B industry solutions.

I grant that the MVP approach has advantages: technically, you could solve a complex problem on a flowchart, or communicate your solution as a video (as Dropbox did). I’m just highlighting that it has shortcomings, too. Most importantly, the final product that people end up buying is often something very different from the MVP. So, maybe the MVP could be used as a starting point, but not as the end solution.

How, then?

The best solution, as far as I can see, is this:

Learn as much as you can about the nature of the problems, and then bridge that knowledge with the technical possibilities.

As you can see, this approach closely follows Steve Blank’s customer development.

The main difference is that while Blank argues strongly that it’s “not a focus group” (read: market research), in my opinion it’s exactly that. In fact, you can apply both traditional and novel methods of market research to get to the bottom of users’ needs and wants. These include ethnography, surveys, qualitative interviews, etc. I wrote a separate blog post about market research for startups.

At the core of Blank’s idea is the notion that the founders are testing their hypotheses through customer development. However, those hypotheses originate from innate assumptions about the customer’s reality, and are likely to be biased and flawed. Challenging the hypotheses is therefore a must, and not a bad solution at all. However, we can also start by learning about the problem, not from hypothesis formulation. Ultimately, I believe you can reach the same outcome either by starting from the founders’ hypotheses, or by inducing them from market research [4]. Which is faster and more efficient probably depends on the details of execution. Given equal execution, the accuracy of the original hypotheses is the determining factor — if they are far off, more adjustment has to be done. In comparison, inductive market research, in theory, arrives straight at the core of the users’ problems.

Conclusion

In the proposed approach, we take any means necessary to find out what is needed or wanted, and then combine that with information about what is possible. If you look closely enough, this is what marketing is all about – matching supply and demand. Consequently, the role, or competence, of a market researcher is crucial for a startup organization. They need someone to bridge the technical knowledge, existing in developers’ heads, and the customer knowledge, existing in customers’ heads. Often, these two groups don’t speak the same language, so the individual who mediates acts as a kind of interpreter. (S)he has to have the ability to understand both languages — that of technology, and that of ordinary people.

Endnotes

[1] It’s the well-known Y Combinator motto: “make something that people want”. This can be interpreted as getting users being the priority, which is why I like to re-phrase it as “make something that people want to pay for.”

[2] The major exception to foregoing direct monetization is subvention: e.g., for platforms it seems to be a de facto necessity just to enter the market, while for all startups it may be justified while users are being recruited in order to learn about them. From an economic point of view, this equals subsidizing one group of users (early adopters) to improve access to another group (the main market).

[3] Problem-solution fit precedes product-market fit, which essentially means having a product with a lucrative market.

[4] The same separation exists in academia: there are hypothetico-deductive studies and inductive studies.

My other writings on startup problems:

Joni

Startups! Are you using a ‘mean’ or an ‘outlier’ as a reference point?


Introduction

This post is about startup thinking. In my dissertation about startup dilemmas [1], I argued that startups can exhibit what I call ‘reference point bias’. My evidence emerged from the failure narratives of startup founders, who reported having experienced this condition.

The reference point bias is a false analogy where the founder compares their startup with a great success case (e.g., Facebook, Google, Groupon).

According to this false analogy: “If Facebook did X and succeeded, we will do X and succeed, too.”

A_x | B_x –> S

that is, A doing ‘x’, given that B did ‘x’, results in success (S).

According to a contrary logic, they ought to consider the “mean” (failures) rather than the “outlier” (success), because that enables better preparation for the thousand-and-one problems they will face. (This is equivalent to thinking P(s) = 1 − P(f), or that by eliminating failure points (f) one can achieve success (s), which was a major underlying motivation for my dissertation.)

Why is this a problem?

Firstly, because in the process of making decisions under the reference point bias, you are likely to miss all the hardship left out of the best practices outlined by the example of your outliers. In other words, your reference point suffers from survivorship bias and post-hoc rationalization.

But a bigger and, in my opinion, more substantial problem is the fundamental discrepancy between the conditions of the referred success case and the startup at hand.

Let me elaborate. Consider

A{a} ≠ B{b},

where the conditions (a) prevailing in your startup’s (A) market differ from the conditions (b) of your reference point (B). As a corollary, the closer the market conditions of A get to those of B, the better suited those reference points (and their stories and best practices) become to your particular scenario. But startups rarely perform a systematic analysis to discover how closely the conditions under which certain advice or best practices were conceived match those at hand.
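One crude way to make that comparison systematic is to list the conditions of your market (A) and of the reference case (B) and compute their overlap. The condition labels below are made up, and Jaccard similarity is just one possible measure, not something proposed in the dissertation:

```python
# Toy comparison of market conditions between your startup (A) and a reference case (B)
conditions_a = {"mobile_first_users", "weak_payment_infrastructure", "fragmented_competition", "year_2017"}
conditions_b = {"mobile_first_users", "strong_payment_infrastructure", "winner_take_all_market", "year_2007"}

def jaccard(a: set, b: set) -> float:
    """Share of conditions the two cases have in common (1.0 = identical contexts)."""
    return len(a & b) / len(a | b)

similarity = jaccard(conditions_a, conditions_b)
print(f"Context similarity: {similarity:.2f}")  # low values warn against copying B's tactics literally
```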

As a result, discrepancies originating from local differences, e.g. culture, competition, etc., emerge. Some of these dimensions can be modeled or captured by using the BMC (Business Model Canvas) framework. For example, customer segments, distribution channels, value propositions — all these can differ from one geographical location or point in time to another, and can be systematically analyzed with BMC.

In addition to the BMC, it is important to note the impact of competitive conditions (a major deficit in the BMC framework), and especially that of indirect competition [2]. At a higher level of abstraction, we can define discrepancies originating from spatial, temporal, or cultural distance. Time is an important aspect since, in business, different tactics expire (e.g., in advertising we speak of fatigue or burn, indicating the loss of effectiveness), and there are generally “windows of opportunity”, which makes choosing the correct time-to-market important (you can easily be too early or too late).

So, overall, reference point bias is dangerous, because you end up taking best practices from Twitter literally, and never end up making actual money. In particular, platform and freemium businesses are tricky, and based on my experience something like 90% of the reference-point outliers can be located in those fields. It should be kept in mind that platforms naturally suffer from high mortality due to winner-take-all dynamics [3].

In fact, one of the managerial implications of my dissertation was that a platform business may not be a recommended business model at all; at least it is an order of magnitude harder than your conventional product business. The same goes for freemium: giving something away for free in the hope of charging for it at some point turns out, more often than not, to be wishful thinking. Yet startups are, time after time, drawn towards these challenging business models instead of more linear ones.

That is why the general rule “This is not Google, and you’re not Sergey Brin” is a great leveler for founders overlooking cruel business realities.

But when is an outlier a good thing?

All that being said, later on, I have realized there is another logic behind using reference points. It is simply the classic: “Aim for the stars, land on the moon.”

Namely, having these idols, even flawed ones, encourages thousands and thousands of young minds to enter the startup scene. And that’s a good thing, resulting in a net positive effect. Sometimes it’s better not to know how hard a problem is, because if you knew, you would never take on the challenge.

Conclusion

In conclusion, my advice to founders would be two-fold:

1) Use reference points as a source of inspiration, i.e. something you strive to become (it’s okay to want to be as successful as Facebook)

2) But, don’t apply their strategies and tactics literally in your context.

Each context is unique, and the exact same business model rarely applies in a different market, defined by spatial, temporal and cultural distance. So the next time you hear a big shot from Google or Facebook telling you how they made it, listen carefully, but with a critical mind. Try to systematically analyze the conditions under which their tactics worked, not only “why” they worked.

End notes

[1] Salminen, J. (2014, November 7). Startup dilemmas – Strategic problems of early-stage platforms on the internet. Turku School of Economics, Turku.

[2] That is, how local people do things differently: a good example is WhatsApp, which was not popular in the US because operators gave free SMS; the rest of the world was, and is, very different.

[3] Katz, M. L., & Shapiro, C. (1985). Network Externalities, Competition, and Compatibility. The American Economic Review, 75(3), 424–440.

Joni

How could startups really solve problems? The significance of the invisible underclass


Introduction

I read an interesting article: http://miter.mit.edu/the-unexotic-underclass/

Its thesis is that startups focus on the “wrong” problems from society’s point of view. They focus either on the problems of the elite (highly educated cosmopolitans) or on exotic third-world problems, for which they often create sham solutions instead of sustainable ones. Meanwhile, the problems of the lower middle class are ignored: e.g., unemployment, retraining, war veterans (in the US). This target group is described as an invisible “underclass”, because to startups they do not exist.

Why is this so?

I wrote about this phenomenon in a conference paper [1] a couple of years ago. The reasons are clear: First, problem identification starts from the solver’s own field of experience. Because most founders are highly educated cosmopolitans, they solve the problems of their own kind. (This shows clearly in students’ startup ideas: the same bar-app ideas year after year.)

The figure illustrates this phenomenon.

Figure 1. The effect of limited experience [2]

Second, social problems typically cannot be solved with technology alone; they require either institutional solutions or at least hybrid solutions that combine societal change and technology.

An example:

Problem 1: Skills and competences do not match in the labor market => unemployment

Solution: Better retraining (=fast, accessible, “easy”) that teaches the skills needed in the labor market.

Problem 2: Only an official degree counts in the labor market [3], i.e., the startup’s application does not fit the institutional framework. The institutional framework changes more slowly than the social world, which is the root cause of the problem.

It is typical for startup problems to form chains like this [4]. Often the nth-level problems come from the institutional level, which is why speeding up institutional renewal is a central part of the business solution.

Conclusion

The path to genuinely solving social problems is twofold:

1) For startups to truly solve social problems, they need to expand their field of experience beyond the cosmopolitan worldview.

2) For startups to truly solve social problems, a hybrid solution is needed, meaning that the functioning of the market mechanism is not blocked at the institutional level.

As things stand, startups can be understood as part of the “elite” that does not care about the problems of the lower middle class, and this setting partly contributes to the emergence of societal problems and to dissatisfaction being discharged, for example, through the election of radical leaders. As we know from history, the election of radical leaders is not a favorable development for society. Startups should strive to be part of the solution and open-mindedly map out new markets and new target groups (the unemployed, single mothers, the elderly, etc.). For startups to truly solve social problems, they need to expand their field of experience beyond the cosmopolitan worldview. This would advance societal development and would also be good for business, because “where there is a problem, there is a market”.

References

[1] Conference paper: https://www.researchgate.net/publication/314134815_Why_avoid_difficult_problems_Exploring_the_avoidance_behavior_within_startup_motive

[2] Presentation at Åbo Akademi: https://www.slideshare.net/jonis12/pitching-bo-akademi

[3] Of course there are examples to the contrary, but this is mainly how things are.

[4] Doctoral dissertation: http://www.doria.fi/handle/10024/99349

Joni

Experimenting with IBM Watson Personality Insights: How accurate is it?


Introduction

I ran an analysis with IBM Watson Personality Insights. It retrieved my tweets and analyzed their text content to describe me as a person.

Doing so is easy – try it here: https://personality-insights-livedemo.mybluemix.net/
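If you prefer to script the same analysis instead of using the demo, a rough sketch with the ibm-watson Python SDK looks roughly like the following. This is my assumed client code, not something covered in the demo; the exact class names, parameters, and service availability may differ:

```python
# pip install ibm-watson
# Rough sketch of calling Personality Insights programmatically; parameter names
# and service availability may differ from what is shown here.
from ibm_watson import PersonalityInsightsV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")          # placeholder credentials
service = PersonalityInsightsV3(version="2017-10-13", authenticator=authenticator)
service.set_service_url("YOUR_SERVICE_URL")               # placeholder endpoint

with open("my_tweets.txt", encoding="utf-8") as f:        # e.g., an export of your tweets
    text = f.read()

profile = service.profile(
    text,
    accept="application/json",
    content_type="text/plain",
    raw_scores=True,
    consumption_preferences=True,
).get_result()

for trait in profile["personality"]:                      # print the Big Five percentiles
    print(trait["name"], round(trait["percentile"], 2))
```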

I’ll briefly discuss the accuracy of the findings in this post.

TL;DR: The accuracy of IBM Watson is a split decision – some classifications seem to be accurate, while others are not. The inaccuracies are probably due to a lack of source material exposing a person’s full range of preferences.

Findings

The tool analyzed 25,082 words and labelled the results as “Very Strong Analysis”. In the following, I will use introspection to comment on the accuracy of the findings.

“You are a bit critical, excitable and expressive.”

Introspection: TRUE

“You are philosophical: you are open to and intrigued by new ideas and love to explore them. You are proud: you hold yourself in high regard, satisfied with who you are. And you are authority-challenging: you prefer to challenge authority and traditional values to help bring about positive changes.”

Introspection: TRUE

“Your choices are driven by a desire for efficiency.”

Introspection: TRUE

“You are relatively unconcerned with both tradition and taking pleasure in life. You care more about making your own path than following what others have done. And you prefer activities with a purpose greater than just personal enjoyment.”

Introspection: TRUE

At this point, I was very impressed with the tool. So far, I would completely agree with its assessment of my personality, although it’s only using my tweets, which are short and mostly shared links.

While the description given by Watson Personality Insights was spot on (introspection agreement: 100%), I found the categorical evaluation to be lacking. In particular, “You are likely to______”

“be concerned about the environment”

Introspection: FALSE (I am not particularly concerned about the environment, as in nature, although I am worried about societal issues like the influence of automation on jobs, for example)

“read often”

Introspection: TRUE

“be sensitive to ownership cost when buying automobiles”

Introspection: TRUE

Actually, the latter one is quite amazing because it describes my consumption patterns really well. I’m a very frugal consumer, always taking into consideration the lifetime cost of an acquisition (e.g., of a car).

In addition, the tool also states that “You are unlikely to______”

“volunteer to learn about social causes”

Introspection: TRUE

“prefer safety when buying automobiles”

Introspection: FALSE (In fact, I’m thinking of buying a car soon, and safety is a major criterion since the city I live in has rough traffic.)

“like romance movies”

Introspection: FALSE (I do like them! I actually just had this discussion with a friend offline, which is another funny coincidence.)

So, the overall accuracy rate here is only 3/6 = 50%.

I did not read the specification in more detail, but I suspect the system chooses the evaluated categories based on the amount of available data; i.e. it simply leaves out topics with inadequate data. Since there is a very broad range of potential topics (ranging from ‘things’ to ‘behaviors’), the probability of accumulating enough data points on some topics increases as the amount of text increases. In other words, you are more likely to hit some categories and, after accumulating enough data on them, the system can present quite a few descriptors of the person (while simply leaving out those it does not have enough information on).

However, the choice of topics was problematic: I have never tweeted anything relating to romantic movies (at least as far as I recall), which is why it’s surprising that the tool chose it as a feature. The logic must be: “in the absence of Topic A, there is no interest in Topic A”, which is somewhat fallible given my Twitter behavior (= leaning towards professional content). Perhaps this is the root of the issue – if my tweets had a higher emphasis on movies/entertainment, it could better predict my preferences. But as of now, it seems the system has some gaps in describing the full spectrum of a user’s preferences.

Finally, Watson Personality Insights gives out numerical scores for three dimensions: Personality, Consumer needs, and Values. My scores are presented in the following figure.

Figure 1 IBM Watson Personality Scores

I won’t go through all of them here, but the verdict is also split. Some are correct, e.g. practicality and curiosity, while others I would say are false (e.g., liberty, tradition). In sum, I would say it’s more accurate than not (i.e., it beats chance).

Conclusion

The system is surprisingly accurate given that it is analyzing unstructured text. It would be interesting to know how its accuracy compares to personality scales (survey data). This is because those scales rely on a structured form of inquiry, which has less noise, at least in theory. In addition, survey scales may result in a more comprehensive view of traits, as all the traits can be explicitly asked about. Social media data may systematically miss certain expressions of personality: for example, my tweets focus on professional content and are therefore more likely to lead to misclassifying my liking of romantic movies – a survey could explicitly ask about both my professional and personal likings, and therefore form a more balanced picture.

Overall, it’s exciting and even a bit scary how well a machine can describe you. The next step, then, is how the results could be used. Regardless of all the hype surrounding Cambridge Analytica’s ability to “influence the election”, in reality combining marketing messages with personality descriptions is not as straightforward as it may seem. This is because preferences are much more complex than just saying you are of personality type A and therefore you approve of message B. Most likely, the inferred personality traits are best used as additional signals or features in decision-making situations. They are not likely to be the only ones, or even the most important ones, but they have the potential to improve models and optimization outcomes, for example in marketing.

Joni

How to Win the Google Online Marketing Challenge


GOMCHA European Winners 2016

1. Introduction

In the past couple of weeks, a few people have approached me asking for tips on how to do well in the Google Online Marketing Challenge. So, I thought I might as well gather some of my experiences in a blog post, and share them with everybody.

A little bit of background: I’ve been the professor of two winning teams (GOMC Europe 2013 & GOMC Europe 2016). Although most of the credit is obviously due to the students who do all the hard work (the students at Turku School of Economics simply rock!), guidance does play an important role, since most commonly the students have no prior experience in SEM/PPC and need to be taught quickly where to focus.

2. Advice to teachers

The target audience for this post is anyone participating in the challenge. For teachers, I have one important piece of advice:

Learn the system if you’re teaching it. There’s no substitute for real experience. The students are likely to have a million questions, and you need to give better answers than “google it.” Personally, I was fortunate enough to have done SEM for many years before starting to teach it. Without that experience, it would have been impossible to guide the teams to do well. However, if you don’t have the same advantage but you want your students to do well, turn to the industry. Many SEM companies out there are interested in mentoring/sparring the students, because that way they can also spot talented individuals for future hiring (win-win, right?).

3. How to win GOMCHA?

3.1 Overview

That said, here are my TOP3 “critical success factors” for winning the challenge:

  1. Choose your case wisely
  2. Focus on Quality Score
  3. Show impact

That’s it! Follow these principles and you will do well. Now, that being said, behind each of them is a whole layer of complexity 🙂 Let’s explore each point.

3.2 Choosing the AdWords case

First, one of the earliest questions students are going to ask is how to choose the company/organization they’re doing the campaign for. And that’s also one of the most important ones. How I do it: I let each team choose and find their own case; however, I tell them what is a good case and what is not. I wrote a separate post about choosing a good AdWords case. Read the post, and internalize the information.

Update: one more point to add to the linked post – preferably choose a case that already has some brand searches. This helps you get a higher overall CTR and a lower overall CPC.

The choice of a good case is crucial, because you can be the best optimizer in the world, but if you have a bad case, you will fail. An example was a team that chose a coffee company — it was not a good case to choose because it had a narrow product range and relatively few searches. For some reason, the team, which consisted of several students with *real experience* in AdWords, wanted to choose it. Not surprisingly, they struggled due to the above reasons and were easily overshadowed by other teams with no experience but a good case. Therefore, the formula here is: success = case × skills.

By the way, that is one of the most important lessons for any marketing student in general: Always choose your case wisely, and never market something whose potential you don’t believe in.

3.3 Choosing the metrics

Another common question relates to the metrics: What should we optimize for? While there are many important metrics, including CTR and CPC, I would say one is above the others. That is clearly the Quality Score, which seems to be very influential in Google’s ranking algorithm for the competition.

Note that I don’t have any insider information on this, but I say *seems* for this reason: In 2015, I instructed the teams to focus on a wide range of metrics, including CTR, CPC, and QS. What came out were several great teams that, in my opinion, had better overall metrics than many of the finalists that year (none of my teams were finalists). Last year, however, I switched the strategy and instructed the teams to focus heavily on Quality Score, even at the cost of other metrics. For example, to the team that ended up winning in 2016, I said “your goal is 10 x 10”, meaning they should get 10 keywords with QS 10. They ended up getting 12, and the rest is history 🙂

3.4 Why is Quality Score that important?

In my view, it’s because all optimization efforts basically culminate in that metric. To maximize your QS, you essentially need to do all the right things in terms of optimization, including account structure, ad creation, and landing pages. To get these things nailed, refer to this post. And google for more tips: blogs such as PPC Hero, Wordstream, and Certified Knowledge have plenty of subject matter to learn from. I have also compiled an extensive list of digital marketing blogs that you can utilize.

However, do note that all third-party information is to some degree unreliable. Use it with caution, combined with your first-hand experiments (i.e., do what you see working best in light of the numbers). The most reliable source of information is of course Google, because they know the system from the inside, which none of the experts (including myself) do. So, use Google’s AdWords help as your main reference.

3.5 Show real impact

The last step, since many teams can score high on metrics, is to show real-life impact. This is pretty much the only way to differentiate when all the finalist teams are good. The first thing you can do here is to meticulously follow Google’s guidelines for the reports to highlight your greatness. As a member of the academic panel, I know some cases have been failed for not following the technical guidelines, so make sure your output is in line with them. However, that is not the main point; the main point is to show how you brought real results to your case organization. Although conversions are not part of the official ranking, if you look at past winners, most of them gained a lot of conversions. Knowing that, you can do the math. The reports of the winners from earlier years can be found on the challenge website.

4. List of practical tips

Finally, some practical tips (the list is in no particular order, and not comprehensive at all):

  1. Optimize every day as if you were obsessed with AdWords
  2. Don’t be afraid to ask the experts for advice; take all the help you can get to learn faster
  3. Prefer using ‘exact match’ keywords
  4. Never mix display campaigns with search campaigns (i.e., avoid ‘display select’)
  5. Avoid GDN altogether; you can experiment with it using a little budget, but focus 99% on search campaigns
  6. When possible, direct the keywords to a specific landing page (not homepage)
  7. Create ad groups based on semantic similarity of keywords (if you don’t know what this means, find out)
  8. Don’t stress about the initial bid price; set it at some level based on the Keyword Planner estimates and change according to results
  9. Or, alternatively, set it as high as possible to get a good average position and therefore an improved CTR, which in turn improves QS
  10. Set the bid price manually per keyword
  11. Use GA to report after-click performance (good for campaign report)
  12. Use as many AdWords features as possible (good for campaign report)

Finally, read Google’s materials, including the challenge website. Follow their advice meticulously, and read read read about search-engine advertising from digital marketing blogs and Google’s website.

Good luck!! 🙂

CAVEAT: I’m a member of the Google Online Marketing Challenge’s academic panel. These are my personal opinions and don’t necessarily represent the official views of the panel. The current judging criteria for the competition can be found at: https://www.google.com/onlinechallenge/discover/judging.html

UPDATE (May 2017): Together with Elina Ojala (next to me in the picture above), I had a Skype call with students of Lappeenranta University of Technology (LUT). Elina pointed out some critical things: it’s important 1) to be motivated, 2) to have a really good team without free riding, 3) to share tasks efficiently (e.g., analytics, copywriting; based on individual interests), and 4) to put in extra effort (e.g., changing the landing pages, using GA). I added that for teachers it’s important to motivate the students: aim HIGH!! And to stress that there is zero chance of winning if the team doesn’t work every day (=a linear relationship between hours worked and performance).

Resources (some in Finnish)

Joni

The black sheep problem in machine learning


Just a picture of a black sheep.

Introduction. Hal Daumé III wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:

The “black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, “black sheep” outnumbers “white sheep” about 25:1 (many “black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example “white cloud” versus “red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named “Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.
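As a toy illustration of how such counts are produced, here is a minimal sketch that counts color–noun bigrams in a tiny made-up corpus. The corpus and the resulting ratio are purely illustrative, not the data Hal used:

```python
from collections import Counter

# A tiny made-up corpus; real estimates would use a large text collection
corpus = """
she is the black sheep of the family
the black sheep of the team was ignored
a white sheep grazed on the hill
another black sheep reference in a movie review
"""

tokens = corpus.lower().split()
bigrams = Counter(zip(tokens, tokens[1:]))  # count adjacent word pairs

black = bigrams[("black", "sheep")]
white = bigrams[("white", "sheep")]
print(f"black sheep: {black}, white sheep: {white}, ratio {black}:{white}")
```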

Thereafter, Hal accurately points out:

“co-occurance frequencies of words definitely do not reflect co-occurance frequencies of things in the real world”

But the mistake made by Hal is to assume language describes objective reality (“the real world”). Instead, I would argue that it describes social reality (“the social world”).

Black sheep in social reality. The higher occurrence of ‘black sheep’ tells us that in social reality, there is a concept called ‘black sheep’ which is more common than the concept of a white (or any other color) sheep. People use that concept not to describe sheep, but as an abstract concept that in fact describes other people (“she is the black sheep of the family”). Then we can ask: Why is that? In what contexts is the concept used? And we can try to teach the machine its proper use through associations of that concept with other contexts (much like we teach kids when saying something is appropriate and when it is not). As a result, the machine may create a semantic web of abstract concepts which, even if it does not lead to the machine understanding them, at least helps guide its usage of them.

We, the humans. That’s assuming we want the machine to get closer to the meaning of the word in social reality. But we don’t necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more purposeful to understand that language is a reflection of social reality. This means that we, the humans, can understand human societies better through its analysis. Rather than trying to teach machines to impute data to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases take place. And then we should focus on fixing them. Most likely, technology plays only a minor role in that.

Conclusion. The “correction of biases” is equivalent to burying your head in the sand: even if biases magically disappeared from our models, they would still remain in social reality, and, through the connection between social reality and objective reality, echo in the everyday lives of people.