
Tag: research

Social media marketing for researchers: How to promote your publications and reach the right people

Today the Social Computing group at Qatar Computing Research Institute had the pleasure of listening to the presentation of Luis Fernandez Luque about social media marketing for researchers. Luis talked about how to promote your publications and personal brand, as well as how to reach the right people on social media with your research.

Luis is one of the most talented researchers I know, and a very good friend. He has two amazing girls and a great wife. You can follow Luis’ research on health informatics on Slideshare, Twitter, and of course connect with him on LinkedIn.

In this post, I’ll summarize some points of his presentation (if you want the full thing, you need to ask him :)) and relate them to my own experiences as a digital marketer.

Without further ado, here are 7 social media tips for researchers.

1. Upload your articles to the 3 big social media platforms for researchers

According to Luis, there are three major social media sites for researchers. These are:

You should post your papers on each of these platforms to get extra visibility. According to Luis, the point is to disseminate content in existing platforms because they have the critical mass of audience readily available. This is preferable to starting your own website from scratch and trying to attract visitors.

However, I recommend doing both. In addition to sharing your research on social media, you can have a separate website for yourself and dedicated websites for your research projects. Having dedicated websites with a relevant domain provides search-engine optimization (SEO) benefits. In particular, websites are indexed better than social media sites, which means you have a better chance of being found. Your papers will get indexed by search engines and will therefore attract occasional hits, depending on your chosen keywords and the competition for them (see point number 4).

For the same reason, you want to cross-link and cross-post your content effectively. For example, 1) publish the post on your own website, 2) re-publish it on LinkedIn, and 3) share it on Twitter, LinkedIn, and Google+ (as well as researcher social networks, if it’s academic content, but here I’m referring to idea posts or popularized articles). Don’t forget Google+, because occasionally those posts show up in search results. Sharing can be repeated and scheduled by using BufferApp. For example, I have all my LinkedIn articles mirrored at jonisalminen.com.

Finally, besides your research papers, consider sharing your dissertation as well as Bachelor/Master theses. Those are often easier to read and reach a wider audience.

2. Recycle content and ideas

Luis mentioned he was able to increase the popularity of one of his papers by creating a Slideshare presentation about it. This principle is more commonly known as the content tree in inbound marketing. I completely agree with Luis’ advice – it is often straightforward and fast to create a presentation based on your existing paper, because you already know what you want to say.

If you have conference presentations or teaching material readily available, even better. For example, I’ve shared all my digital marketing lectures and teaching material on Slideshare, and they steadily attract views (tens of thousands in total so far). Here is an example of a presentation I made based on the post you’re reading. As you can see, it has an interesting title that aims to be “search-engine optimized”. By scrolling down, you also notice that Slideshare converts the presentation into plain text. This is good for search-engine visibility, and one reason why Slideshare presentations rank well in Google. The picture from my Slideshare Analytics shows that many people find the presentations through Google.

Figure 1 Slideshare Analytics showing large share of search traffic.

Luis also mentioned including the name of your publication in the title slide, which is a good idea if you want to catch more citations from interested readers.

3. Create an online course

MOOCs and other forms of online education are a great way to disseminate your ideas and make your research better known. Luis mentioned two platforms for this:

The point is to share knowledge and at the same time mention your own research. I think Luis mentioned he at some point had 4,000 participants in his course, which is a very large audience and shows the power of online courses compared to traditional classrooms (I had at most 100 students in my courses, so you can see how big the difference in reach is).

4. Choose the right title

This is like copywriting for researchers. The title plays an important role for two reasons: 1) it determines whether people become interested and click through to read your paper, and 2) it can increase or decrease your chances of being found in Google. A direct analogy is journalism: you want some degree of click-bait in your title, because you are competing against all other papers for attention. However, in my experience many scholars pay little attention to how attractive the title of their paper is from the clicker’s perspective, and even fewer perform keyword research (the post in Finnish) to find out how popular related keywords are.

So, how to choose the title of a research paper?

  1. Research & include relevant keywords
  2. Mention the problem your research deals with

The title should be catchy (=attractive) and include the keywords people use when searching for information on the topic, be it research papers or just general knowledge. Luis’ tip was to include the problem (e.g., diabetes) in the title to get more downloads. Moreover, when sharing your papers, use relevant hashtags. In academia, the natural way is to identify conference hashtags relating to your topic — as long as it’s relevant, using conference hashtags to promote your research is okay.

You can use tools such as Google Keyword Planner and Google Trends for keyword research. To research hashtags, Twitter’s recommendation feature is an easy approach (e.g., in TweetDeck you get recommendations when you start writing a hashtag). You can also use tools such as Hashtagify and Keyhole to research relevant hashtags. Finally, include the proper keywords in your abstract as well. While full papers are often hidden behind paywalls, abstracts are indexed by search engines.

5. Write guest blogs

Instead of trying to make a go of your own website (which is admittedly tough!), Luis recommended writing guest posts on popular blogs. The rationale is the same as in the case of social media platforms: these venues already have an audience. As long as the blog deals with your vertical, the audience is likely to be interested in what you say. For content marketers, getting quality content is also a consistent source of concern, so it is easy to see a win-win here.

For example, you can write for a research foundation’s blog. If they have given you money, this also serves to show you are actively trying to popularize your research, and they get something in return for their money! Consider industry associations as well (e.g., I haven’t gotten around to it yet, but I would like to write for IAB Finland’s blog, since they have a large audience interested in digital marketing).

6. Define your audience

Luis advised defining your audience carefully – it is all about determining your area of focus and where you want to make an impact. On social media, you cannot control who sees your posts, but you can increase the chances of reaching the right people with this simple recipe:

  1. Find out who the important people in your field are
  2. Follow them on Twitter and LinkedIn
  3. Tag them in posts on both platforms.

The last point doesn’t always yield results, but I’ve also had some good experiences by including the Twitter handle of a person I know is working on the topic I’m writing about. Remember, you are not spamming but asking for their opinion. That is perfectly fine.

7. Track and optimize

This is perhaps the most important thing. Just like in all digital marketing, you need to work on your profile and social media activity constantly to get results. The competition is quite fierce, but in academia, not many are fluent in social media marketing. So, as long as you put in some effort, you should get results more easily than in the commercial world! (Although, truth be told, you are competing with commercial content as well.)

How to measure social media impact?

  • choose metrics
  • set goals
  • track & optimize

For example, you could have reads/downloads as the main KPI. Then, you could have the goal of increasing that metric by 30% in the next six months. Then, you would track the results and act accordingly. The good thing about numbers and small successes is that you become addicted. Well, this is mostly a good thing, because in the end you also want to get some research done! But as you see your posts get some coverage, it encourages you to carry on. And gradually you are able to increase your social media impact.

A research group could do this as a whole by giving somebody the task of summarizing the social media reach of individuals and the group as a whole. It would be fairly easy to incentivize good performance and encourage knowledge sharing on what works. By sharing best practices, the whole group could benefit. Besides disseminating your research, social media activity can increase your citations, as well as improve your chances of receiving funding (as you can show “real impact” through numbers).

The tool recommended by Luis is called Altmetric, which is specifically tailored for research analytics. I haven’t used it before, but I will give it a go.

Conclusion

The common theme is sharing your knowledge. In addition to just posting, you can also ask and answer questions on social media sites (e.g., on ResearchGate) and practitioner forums (e.g., Quora). I was able to beat my nemesis Mr. Valtteri Kaartemo in our Great Dissertation Downloads Competition by being active on Quora for a few weeks. Answering Quora questions and including a link in the signature got my dissertation over 1,000 downloads quickly, and since some questions remain relevant over time, it still helps. But this is not only about competitions and your own “brand”; it is about using your knowledge to help others. Think of yourself as an asset – society has invested tremendous amounts of time, effort, and money into your education, and you owe it to society to pay some of it back. One way to do that is sharing your knowledge on social media.

I still remember one professor saying a few years ago that she doesn’t put her presentations on Slideshare because “somebody might steal the ideas”. But as far as I’m concerned, a much bigger problem is that nobody cares about her ideas. We live in a world where researchers compete against all sources of information – and we must adapt to this game. In my experience, the ratio of effort put into conducting research versus communicating it is completely skewed, as most researchers lack the basic skills for social media marketing and hardly do any content marketing at all.

This is harmful not only to their careers, but also to the various stakeholder groups that miss the important insights of their research. And I’m not only talking about popularization: other researchers also increasingly rely on social media and search engines to find relevant papers in their field. Producing high-quality content is not enough; you also need to market your papers on social media. By doing so, you are doing the community a service.


Startups! Are you using a ‘mean’ or an ‘outlier’ as a reference point?

Introduction

This post is about startup thinking. In my dissertation about startup dilemmas [1], I argued that startups can exhibit what I call ‘reference point bias’. My evidence emerged from the failure narratives of startup founders, who reported having experienced this condition.

The reference point bias is a false analogy where the founder compares their startup with a great success case (e.g., Facebook, Google, Groupon).

According to this false analogy: “If Facebook did X and succeeded, we will do X and succeed, too.”

A_x | B_x → S

that is, A doing ‘x’, given that B did ‘x’, results in success (S).

According to a contrary logic, they ought to consider the “mean” (failures) rather than the “outlier” (success), because that enables better preparation for the thousand-and-one problems they will face. (This is equivalent to thinking P(s) = 1 − P(f), or that by eliminating failure points (f) one can achieve success (s); this was a major underlying motivation for my dissertation.)
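To make the two logics explicit in notation (my own formalization for illustration, not taken from the dissertation):

```latex
% The false analogy: assuming your success probability equals the reference's
P(S_A \mid x) \approx P(S_B \mid x)
% The failure-elimination logic: success as the complement of failure
P(s) = 1 - P(f), \qquad f = f_1 \cup f_2 \cup \dots \cup f_n
```

Under the second view, each failure point f_i you eliminate shrinks P(f) and thus grows P(s), which is why studying failures prepares a founder better than imitating outliers.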

Why is this a problem?

Firstly, because when making decisions under the reference point bias, you are likely to miss all the hardship left out of the best practices outlined by the example of your outliers. In other words, your reference point suffers from survivorship bias and post-hoc rationalization.

But a bigger and, in my opinion, more substantial problem is the fundamental discrepancy between the conditions of the referenced success case and the startup at hand.

Let me elaborate. Consider

A{a} ≠ B{b},

where the conditions (a) taking place in your startup’s (A) market differ from the conditions (b) of your reference point (B). As a corollary, the closer the market conditions of A come to those of B, the better suited those reference points (and their stories and best practices) become to your particular scenario. But startups rarely perform a systematic analysis to discover how closely the conditions under which certain advice or best practices were conceived match those at hand.

As a result, discrepancies originating from local differences, e.g. culture, competition, etc., emerge. Some of these dimensions can be modeled or captured by using the BMC (Business Model Canvas) framework. For example, customer segments, distribution channels, value propositions — all these can differ from one geographical location or point in time to another, and can be systematically analyzed with BMC.

In addition to BMC, it is important to note the impact of competitive conditions (a major deficit in the BMC framework), and especially that of the indirect competition [2]. At a higher level of abstraction, we can define discrepancies originating from spatial, temporal, or cultural distance. Time is an important aspect since, in business, different tactics expire (e.g., in advertising we speak of fatigue or burn indicating the loss of effectiveness), and there are generally “windows of opportunity” which result in the importance of choosing the correct time-to-market (you can easily be too early or too late).

So, overall, reference point bias is dangerous, because you end up taking best practices from Twitter literally, and never end up making actual money. In particular, platform and freemium businesses are tricky, and based on my experience something like 90% of the reference point outliers can be located in those fields. It should be kept in mind that platforms naturally suffer from high mortality due to winner-take-all dynamics [3].

In fact, one of the managerial implications of my dissertation was that the platform business may not be a recommended business model at all; at least it is one order of magnitude harder than your conventional product business. The same goes for freemium: giving something away for free in the hope of at some point charging for it turns out, more often than not, to be wishful thinking. Yet startups are, time after time, drawn towards these challenging business models instead of more linear ones.

That is why the general rule “This is not Google, and you’re not Sergey Brin.” is a great leveler for founders overlooking cruel business realities.

But when is an outlier a good thing?

All that being said, later on, I have realized there is another logic behind using reference points. It is simply the classic: “Aim for the stars, land on the moon.”

Namely, having these idols, even flawed ones, encourages thousands and thousands of young minds to enter the startup scene. And that’s a good thing, resulting in a net positive effect. Sometimes it’s better not to know how hard a problem is, because if you knew, you would never take on the challenge.

Conclusion

In conclusion, my advice to founders would be two-fold:

1) Use reference points as a source of inspiration, i.e. something you strive to become (it’s okay to want to be as successful as Facebook)

2) But, don’t apply their strategies and tactics literally in your context.

Each context is unique, and the exact same business model rarely applies in a different market, defined by spatial, temporal, and cultural distance. So the next time you hear a big-shot from Google or Facebook telling how they made it, listen carefully, but with a critical mind. Try to systematically analyze the conditions in which their tactics worked, not only “why” they worked.

End notes

[1] Salminen, J. (2014, November 7). Startup dilemmas – Strategic problems of early-stage platforms on the internet. Turku School of Economics, Turku.

[2] That is, how local people do things differently: A good example is WhatsApp, which was not popular in the US because operators gave free SMS; the rest of the world was, and is, very different.

[3] Katz, M. L., & Shapiro, C. (1985). Network Externalities, Competition, and Compatibility. The American Economic Review, 75(3), 424–440.

Experimenting with IBM Watson Personality Insights: How accurate is it?

Introduction

I ran an analysis with IBM Watson Personality Insights. It retrieved my tweets and analyzed their text content to describe me as a person.

Doing so is easy – try it here: https://personality-insights-livedemo.mybluemix.net/

I’ll briefly discuss the accuracy of the findings in this post.

TL;DR: The accuracy of IBM Watson is a split decision – some classifications seem to be accurate, while others are not. The inaccuracies are probably due to a lack of source material exposing a person’s full range of preferences.

Findings

The tool analyzed 25,082 words and labelled the results as “Very Strong Analysis”. In the following, I will use introspection to comment on the accuracy of the findings.

“You are a bit critical, excitable and expressive.”

Introspection: TRUE

“You are philosophical: you are open to and intrigued by new ideas and love to explore them. You are proud: you hold yourself in high regard, satisfied with who you are. And you are authority-challenging: you prefer to challenge authority and traditional values to help bring about positive changes.”

Introspection: TRUE

“Your choices are driven by a desire for efficiency.”

Introspection: TRUE

“You are relatively unconcerned with both tradition and taking pleasure in life. You care more about making your own path than following what others have done. And you prefer activities with a purpose greater than just personal enjoyment.”

Introspection: TRUE

At this point, I was very impressed with the tool. So far, I would completely agree with its assessment of my personality, although it’s only using my tweets, which are short and mostly shared links.

While the description given by Watson Personality Insights was spot on (introspection agreement: 100%), I found the categorical evaluation to be lacking. In particular, “You are likely to______”

“be concerned about the environment”

Introspection: FALSE (I am not particularly concerned about the environment, as in nature, although I am worried about societal issues, like the influence of automation on jobs, for example)

“read often”

Introspection: TRUE

“be sensitive to ownership cost when buying automobiles”

Introspection: TRUE

Actually, the latter one is quite amazing because it describes my consumption patterns really well. I’m a very frugal consumer, always taking into consideration the lifetime cost of an acquisition (e.g., of a car).

In addition, the tool also tells that “You are unlikely to______”

“volunteer to learn about social causes”

Introspection: TRUE

“prefer safety when buying automobiles”

Introspection: FALSE (In fact, I’m thinking of buying a car soon, and safety is a major criterion since the city I live in has rough traffic.)

“like romance movies”

Introspection: FALSE (I do like them! Actually, I just had this discussion with a friend offline, which is another funny coincidence.)

So, the overall accuracy rate here is only 3/6 = 50%.

I did not read the specification in more detail, but I suspect the system chooses the evaluated categories based on the available amount of data; i.e., it simply leaves out topics with inadequate data. Since there is a very broad number of potential topics (ranging from ‘things’ to ‘behaviors’), the probability of accumulating enough data points on some topics increases as the amount of text increases. In other words, you are more likely to hit some categories and, after accumulating enough data on them, you can present quite a few descriptors of the person (while simply leaving out those you don’t have enough information on).
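If that guess is right, the selection logic might look something like this sketch (pure speculation about the mechanism on my part, not IBM’s actual code; the threshold and evidence counts are invented):

```python
# Speculative sketch: report a preference category only when enough
# supporting data points have accumulated for it. Not IBM's actual logic;
# MIN_DATA_POINTS and the evidence counts are invented for illustration.
MIN_DATA_POINTS = 20

def select_reported_categories(topic_evidence):
    """Return the topics with enough accumulated data to be scored."""
    return [topic for topic, n in topic_evidence.items() if n >= MIN_DATA_POINTS]

# Toy example: professional topics dominate my tweets, so a category like
# 'romance_movies' is scored on thin evidence or skipped entirely.
evidence = {"reading": 130, "automobiles": 45, "environment": 25, "romance_movies": 2}
print(select_reported_categories(evidence))  # ['reading', 'automobiles', 'environment']
```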

However, the choice of topics was problematic: I have never tweeted anything relating to romance movies (at least as far as I recall), which is why it’s surprising that the tool chose it as a feature. The logic must be: “in the absence of Topic A, there is no interest in Topic A”, which is somewhat fallible given my Twitter behavior (= leaning towards professional content). Perhaps this is the root of the issue – if my tweets had a higher emphasis on movies/entertainment, it could better predict my preferences. But as of now, it seems the system has some gaps in describing the full spectrum of a user’s preferences.

Finally, Watson Personality Insights gives out numerical scores for three dimensions: Personality, Consumer needs, and Values. My scores are presented in the following figure.

Figure 1 IBM Watson Personality Scores

I won’t go through all of them here, but the verdict here is also split. Some are correct, e.g. practicality and curiosity, while others I would say are false (e.g., liberty, tradition). In sum, I would say it’s more accurate than not (i.e., it beats chance).

Conclusion

The system is surprisingly accurate given the fact that it is analyzing unstructured text. It would be interesting to know how the accuracy fares in comparison to personality scales (survey data). This is because those scales rely on a structured form of inquiry, which has less noise, at least in theory. In addition, survey scales may result in a more comprehensive view of traits, as all the traits can be explicitly asked about. Social media data may systematically miss certain expressions of personality: for example, my tweets focus on professional content, so the system is more likely to misclassify my liking of romance movies – a survey could explicitly ask about both my professional and personal likings, and therefore form a more balanced picture.

Overall, it’s exciting and even a bit scary how well a machine can describe you. The next step, then, is how could the results be used? Regardless of all the hype surrounding Cambridge Analytica’s ability to “influence the election”, in reality combining marketing messages with personality descriptions is not as straightforward as it may seem. This is because preferences are much more complex than just saying you are of personality type A, therefore you approve of message B. Most likely, the inferred personality traits are best used as additional signals or features in decision-making situations. They are not likely to be the only ones, or even the most important ones, but they have the potential to improve models and optimization outcomes, for example in marketing.

The black sheep problem in machine learning

Just a picture of a black sheep.

Introduction. Hal Daumé III wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:

The “black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, “black sheep” outnumbers “white sheep” about 25:1 (many “black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example “white cloud” versus “red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named “Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.

Thereafter, Hal accurately points out:

“co-occurance frequencies of words definitely do not reflect co-occurance frequencies of things in the real world”

But the mistake made by Hal is to assume language describes objective reality (“the real world”). Instead, I would argue that it describes social reality (“the social world”).

Black sheep in social reality. The higher occurrence of ‘black sheep’ tells us that in social reality, there is a concept called ‘black sheep’ which is more common than the concept of white (or any color) sheep. People use that concept not to describe sheep, but as an abstract concept that in fact describes other people (“she is the black sheep of the family”). Then, we can ask: Why is that? In what contexts is the concept used? And try to teach the machine its proper use through associations of that concept to other contexts (much like we teach kids when saying something is appropriate and when not). As a result, the machine may create a semantic web of abstract concepts which, if not leading to it understanding them, at least helps in guiding its usage of them.
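For concreteness, the ratios Hal cites come from exactly this kind of counting; here is a minimal sketch of measuring the skew (the three-sentence corpus is a toy stand-in for web-scale text):

```python
# Minimal sketch: measure the 'black sheep' vs 'white sheep' skew by
# counting bigram frequencies. The corpus is a toy example; real
# measurements would run over web-scale text.
import re
from collections import Counter

corpus = (
    "She is the black sheep of the family. "
    "Every team has its black sheep. "
    "The farmer sheared one white sheep in the field."
)

tokens = re.findall(r"[a-z]+", corpus.lower())
bigrams = Counter(zip(tokens, tokens[1:]))

black = bigrams[("black", "sheep")]
white = bigrams[("white", "sheep")]
print(f"'black sheep': {black}, 'white sheep': {white} (ratio {black}:{white})")
```

Note how even the toy corpus skews 2:1 towards ‘black sheep’ precisely because the idiom describes people, not animals – the counts reflect social reality, not barnyards.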

We, the human. That’s assuming we want it to get closer to the meaning of the word in social reality. But we don’t necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more purposeful to understand that language is a reflection of social reality. This means we, the humans, can understand human societies better through its analysis. Rather than trying to teach machines to impute data to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases take place. And then we should focus on fixing them. Most likely, technology plays only a minor role in that.

Conclusion. The “correction of biases” is equivalent to burying your head in the sand: even if they magically disappeared from our models, they would still remain in the social reality, and through the connection of social reality and objective reality, echo in the everyday lives of people.

How to teach machines common sense? Solutions for ambiguity problem in artificial intelligence

Introduction

The ambiguity problem illustrated:

User: “Siri, call me an ambulance!”

Siri: “Okay, I will call you ‘an ambulance’.”

You’ll never reach the hospital, and end up bleeding to death.

Solutions

Two potential solutions:

A. machine builds general knowledge (“common sense”)

B. machine identifies ambiguity & asks for clarification from humans

The whole “common sense” problem can be solved by introducing human feedback into the system. We really need to tell the machine what is what, just as we do with a child. It is iterative learning, in which trial and error take place.

But, in fact, A and B converge by doing so – which is fine, and ultimately needed.

Contextual awareness

To determine which solution to an ambiguous situation is proper, the machine needs contextual awareness; this can be achieved by storing contextual information from each ambiguous situation, and being told “why” a particular piece of information resolves the ambiguity. It’s not enough to say “you’re wrong”; there needs to be an explicit association to a reason (concept, variable). Equally, it’s not enough to say “you’re right”; again, the same association is needed.

The process:

1) try something

2) get told it’s not right, and why (linking to contextual information)

3) try something else, corresponding to why

4) get rewarded, if it’s right.
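As a sketch, such a reason-linked training loop might look like the following (the whole mini-framework is my own illustration, not an existing library):

```python
# Sketch of the four-step loop: feedback carries an explicit "why",
# so corrections accumulate as concept associations instead of bare
# right/wrong signals. Illustrative pseudo-framework, not a real API.
from collections import defaultdict

class Learner:
    def __init__(self):
        # context -> utterance -> learned interpretation plus its reason
        self.associations = defaultdict(dict)

    def attempt(self, utterance, context):
        # step 1: try something (default to a literal reading)
        return self.associations[context].get(utterance, f"literal: {utterance!r}")

    def explain(self, utterance, context, correct_action, reason):
        # steps 2-4: be told why it was wrong; storing the reason-linked
        # fix doubles as the reward that reinforces the right behavior
        self.associations[context][utterance] = f"{correct_action} (because {reason})"

agent = Learner()
print(agent.attempt("call me an ambulance", "emergency"))  # literal, wrong
agent.explain("call me an ambulance", "emergency",
              "dial an ambulance for the user",
              "'call me X' in an emergency means summoning help")
print(agent.attempt("call me an ambulance", "emergency"))  # corrected
```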

The problem is, currently machines are being trained by data, not by human feedback.

New thinking on teaching the machine

So we would need to build machine-training systems which enable training by direct human feedback, i.e. a new way to teach and communicate with the machine. It’s not a trivial thing, since the whole machine-learning paradigm is based on data. From data and probabilities, we would need to move to associations and concepts. A new methodology is needed. Potentially, individuals could train their own AIs like pets (think Tamagotchi), or we could use large numbers of crowd workers who would explain to the machine why things are the way they are (i.e., create associations). A specific type of markup (=communication) would probably also be needed.

By mimicking human learning we can teach the machine common sense. This is probably the only way; since common sense does not exist beyond human cognition, it can only be learnt from humans. An argument can be made that this is like going back in time, to the era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue rule-based learning is much closer to human learning than the current probability-based one, and if we want to teach common sense, we therefore need to adopt the human way.

Conclusion: machines need education

Machine learning may be up to par, but machine training certainly is not. The current machine-learning paradigm is data-driven, whereas we should look into concept-driven training approaches.

Analyzing sentiment of topical dimensions in social media

Introduction

I had an interesting chat with Sami Kuusela from Underhood.co. Based on that, I got some inspiration for an analysis framework, which I’ll briefly describe here.

The model

Figure 1 Identifying and analyzing topical text material

The description

  1. User is interested in a given topic (e.g., Saara Aalto, or #saaraaalto). He enters the relevant keywords.
  2. The system runs a search and retrieves text data based on that (e.g., tweets).
  3. A cluster analysis (e.g., unsupervised topic model) identifies central themes from the data.
  4. Vectorization of representative keywords from the cluster analysis (e.g., the 10 most popular) is run to extract words with a similar meaning from a reference lexicon. This increases the generality of each topic cluster by associating it with other words that are close in the vector space.
  5. Text mining is run to refine the themes, i.e. placing the right text pieces under the correct themes. These are now called “dimensions”, since they describe the key dimensions of the text corpus (e.g., Saara’s voice, performance, song choices…).
  6. Sentiment analysis can be run to score the general (pos/neg/neu) or specific (e.g., emotions: joy, excitement, anger, disappointment, etc.) sentiment of each dimension. This could be done by using a machine-learning model with annotated training data (if the data-set is vast), or some sentiment lexicon (if the data-set is small).
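A minimal sketch of steps 2, 3, and 6 (the library choices, toy tweets, and tiny lexicon are my own illustration; steps 4 and 5 are omitted):

```python
# Sketch: cluster short texts into themes (step 3) and score each theme
# with a small sentiment lexicon (step 6). Illustrative only: real data
# would come from a search API (step 2), and steps 4-5 are skipped.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "her voice was amazing tonight", "what a powerful voice",
    "terrible song choice again", "the song choice was boring",
    "great stage performance", "loved the performance and energy",
]
LEXICON = {"amazing": 1, "powerful": 1, "great": 1, "loved": 1,
           "terrible": -1, "boring": -1}

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Assign each tweet to its dominant topic, then sum lexicon scores per topic.
topics = lda.transform(X).argmax(axis=1)
for t in range(3):
    docs = [tweets[i] for i in range(len(tweets)) if topics[i] == t]
    words = [w for d in docs for w in d.split()]
    score = sum(LEXICON.get(w, 0) for w in words)
    print(f"topic {t}: {len(docs)} tweets, sentiment {score:+d}")
```

With so few documents the clustering is unstable, but the structure – search, cluster, score per dimension – is the point.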

I’m not sure whether steps 4 and 5 would improve the system’s ability to identify topics. It might be that a more general model is not required because the system can already detect the key themes. It would be interesting to test this with a developer.

Anyway, what’s the whole point?

The whole point is to acknowledge that each large topic naturally divides into small sub-topics, which are dimensions that people perceive relevant for that particular topic. For example, in politics it could be things like “economy”, “domestic policy”, “immigration”, “foreign policy”, etc. While the dimensions can have some consistency based on the field, e.g. all political candidates share some dimensions, the exact mix is likely to be unique, e.g. dimensions of social media texts relating to Trump are likely to be considerably different from those of Clinton. That’s why the analysis ultimately needs to be done case-by-case.

In any case, it is important to note that instead of giving a general sentiment or engagement score of, say a political candidate, we can use an approach like this to give a more in-depth or segmented view of them. This leads to better understanding of “what works or not”, which is information that can be used in strategic decision-making. In addition, the topic-segmented sentiment data could be associated with predictors in a predictive model, e.g. by multiplying each topic sentiment with the weight of the respective topic (assuming the topic corresponds with the predictor).
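Formally (my own notation for illustration), the aggregate score of, say, a candidate could be written as:

```latex
\hat{y} = \sum_{i=1}^{k} w_i \, s_i
```

where s_i is the sentiment score of topic i and w_i is the weight of that topic as a predictor in the model.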

Limitations

This is just a conceptual model. As said, it would be interesting to test it. There are many potential issues, such as handling cluster overlap (some text pieces can naturally be placed into several clusters, which can cause classification problems) and hierarchical issues (e.g., “employment” is under “economy” and should hence influence the latter’s sentiment score).

Defining SMQs: Strategic Marketing Questions

Introduction

Too often, marketing is thought of as advertising and nothing more. However, Levitt (1960) and Kotler (1970) established long ago that marketing is a strategic priority. Many organizations, perhaps due to a lack of marketers on their executive boards, have since forgotten this imperative.

Another reason for the decreased importance of marketing is that marketing scholars have pushed the idea that “everything is marketing”, which leads to the decay of the marketing concept – if it is everything, it is nothing.

Nevertheless, if we reject the omni-marketing concept and return to the useful way of perceiving marketing, we observe the linkage between marketing and strategy.

Basic questions

Tania Fowler wrote a great piece on marketing, citing some ideas from Professor Roger Martin’s HBR article (2014). Drawing from that article, the basic strategic marketing questions are:

  • Who are our customers? (segmentation)
  • Why do they care about our product? (USPs/value propositions/benefits)
  • How are their needs and desires evolving? (predictive insight)
  • What potential customers exist and why aren’t we reaching them? (market potential)

This is a good start, but we need to expand the list of questions. Borrowing from Osterwalder (2009) and McCarthy (1960), let’s apply BMC (9 dimensions of a business model) and 4P marketing mix thinking (Product, Place, Promotion, Price).

Business Model Canvas approach

This leads to the following set of questions:

  • What is the problem we are solving?
  • What are our current revenue models? (monetization)
  • How good are they from the customer’s perspective? (consumer behavior)
  • What is our current pricing strategy? (Kotler’s pricing strategies)
  • How suitable is our pricing to customers? (compared to perceived value)
  • How profitable is our current pricing?
  • How competitive is our current pricing?
  • How could our pricing be improved?
  • Where are we distributing the product/solution?
  • Is this where customers buy similar products/solutions?
  • What are our potential revenue models?
  • Who are our potential partners? Why? (nature of win-win)

Basically, each question can be presented as a question of “now” and “future”, whereupon we can identify strategic gaps. Strategy is a lot about seeing one step ahead — the thing is, foresight should be based on some kind of realism, or else fallacies take the place of rationality. Another point from marketing and startup literature is that people are not buying products, but solutions (solution-based selling, product-market fit, etc.) Someone said the same thing about brands, but I think solution is more accurate in the strategic context.

Adding competitors and positioning

The major downside of BMC and 4P thinking from strategic perspective is their oversight of competition. Therefore, borrowing from Ries and Trout (1972) and Porter (1980), we add these questions:

  • Who are our direct competitors? (substitutes)
  • Who are our indirect competitors? (cross-verticality, e.g. Google challenging media companies)
  • How are we different from competitors? (value proposition matrix)
  • Do our differentiating factors truly matter to the customers? (reality check)
  • How do we communicate our main benefits to customers? (message)
  • How is our brand positioned in the minds of the customers? (positioning)
  • Are there other products customers need to solve their problem? What are they? (complements)

Defining the competitive advantage, or critical success factors (CSFs), leads to a natural linkage to resources, as we need to ask what resources we need to execute, and how to acquire and commit those resources (often human capital).

Resource-based view

Therefore, I’m turning to resource-based thinking in asking:

  • What are our current resources?
  • What are the resources we need to be competitive? (VRIN framework)
  • How do we acquire those resources? (recruiting, M&As)
  • How do we commit those resources? (leadership, company culture)

Indeed, company culture is a strategic imperative which is often ignored in strategic decision making. Nowadays, perhaps more than ever, great companies are built on talent and competence. Related strategic management literature deals with dynamic capabilities (e.g., Teece, 2007) and resource-based view (RBV) (e.g., Wernerfelt, 1984). In practice, companies like Facebook and Google do everything possible to attract and retain the brightest minds.

Do not forget profitability

Finally, even the dreaded advertising questions have a strategic nature, relating to customer acquisition and loyalty, as well as the ROI of both and of our offering. Considering this, we add:

  • How much does it cost to acquire a new customer?
  • What are the best channels to acquire new customers?
  • Given the customer acquisition cost (CAC) and customer lifetime value (CLV), are we profitable? (see the worked example after this list)
  • How profitable are each products/product categories? (BCG matrix)
  • How can we get customers to make repeat purchases? (cross-selling, upselling)
  • What are the best channels to encourage repeat purchase?
  • How do we encourage customer loyalty?
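A worked example of the CAC/CLV question (the numbers are invented, and discounting and gross-margin subtleties are ignored):

```latex
\text{unit margin} = \mathrm{CLV} - \mathrm{CAC}, \qquad \text{profitable} \iff \frac{\mathrm{CLV}}{\mathrm{CAC}} > 1
```

For instance, a CLV of 300 € against a CAC of 100 € gives a ratio of 3, so each acquired customer adds value; if the CAC were 400 €, every new customer would destroy value.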

As you can see, these questions are of a strategic nature, too, because they are directly linked to revenue and the customer. After all, business is about creating customers, as Peter Drucker stated. However, Drucker also maintained that a business with no repeat customers is no business at all. Thus, marketing often focuses on customer acquisition and loyalty.

The full list of strategic marketing questions

Here are the questions in one list:

  1. Who are our customers? (segmentation)
  2. Why do they care about our product? (USPs/value propositions/benefits)
  3. How are their needs and desires evolving? (predictive insight)
  4. What potential customers exist and why aren’t we reaching them? (market potential)
  5. What is the problem we are solving?
  6. What are our current revenue models? (monetization)
  7. How good are they from the customer’s perspective? (consumer behavior)
  8. What is our current pricing strategy? (Kotler’s pricing strategies)
  9. How suitable is our pricing to customers? (compared to perceived value)
  10. How profitable is our current pricing?
  11. How competitive is our current pricing?
  12. How could our pricing be improved?
  13. Where are we distributing the product/solution?
  14. Is this where customers buy similar products/solutions?
  15. What are our potential revenue models?
  16. Who are our potential partners? Why? (nature of win-win)
  17. Who are our direct competitors? (substitutes)
  18. Who are our indirect competitors? (cross-verticality, e.g. Google challenging media companies)
  19. How are we different from competitors? (value proposition matrix)
  20. Do our differentiating factors truly matter to the customers? (reality check)
  21. How do we communicate our main benefits to customers? (message)
  22. How is our brand positioned in the minds of the customers? (positioning)
  23. Are there other products customers need to solve their problem? What are they? (complements)
  24. What are our current resources?
  25. What are the resources we need to be competitive? (VRIN framework)
  26. How do we acquire those resources? (recruiting, M&As)
  27. How do we commit those resources? (leadership, company culture)
  28. How much does it cost to acquire a new customer?
  29. What are the best channels to acquire new customers?
  30. Given the customer acquisition cost (CAC) and customer lifetime value (CLV), are we profitable?
  31. How profitable are each products/product categories? (BCG matrix)
  32. How can we get customers to make repeat purchases? (cross-selling, upselling)
  33. What are the best channels to encourage repeat purchase?
  34. How do we encourage customer loyalty?

The list should be universally applicable to all companies. But filling in the list is not an “oh, let me guess” type of exercise. As you can see, answering many of the questions requires customer and competitor insight that, as the startup guru Steve Blank says, needs to be retrieved by getting out of the building. Those activities are time-consuming and costly. But strategic planning serves a purpose only if the base information is accurate. So don’t fall prey to the guesswork fallacy.

Implementing the list

One of the most important things in strategic planning is iteration — it’s not “set and forget”, but “rinse and repeat”. So, asking these questions should be repeated from time to time. However, people tend to forget repetition. That’s why corporations often use consultants — they need fresh eyes to spot opportunities they’re missing due to organizational myopia.

Moreover, communicating the answers across the organization is crucial. Having a shared vision ensures each atomic decision maker is able to act in the best possible way, enabling adaptive or emergent strategy as opposed to planned strategy (Mintzberg, 1978). For this to truly work, customer insight needs to be internalized by everyone in the organization. In other words, strategic information needs to be made transparent (which it is not, in most organizations).

And for the information to translate into action, the organization should be built to be nimble; empowering people, distributing power and reducing unnecessary hierarchy. People are not stupid: give them a vision and your trust, and they will work for a common cause. Keep them in silos and treat them as sub-ordinates, and they become passive employees instead of psychological owners.

Concluding remarks

We can say that marketing is a strategic priority, or that strategic planning depends on the marketing function. Either way, marketing questions are strategic questions. In fact, strategic management and strategic marketing are highly overlapping concepts. Considering both research and practice, their division can be seen as artificial and even counter-productive. For example, strategic management scholars and marketing scholars may speak of the same things under different names. The same applies to the relationship between CEOs and marketing executives. Joining forces reduces redundancy and leads to a better future for strategic decision-making.

Meaningless marketing

I’d say 70% of marketing campaigns have little to no real effect. Most certainly they don’t have a positive return in hard currency.

Yet most marketers spend their time running around, planning all sorts of campaigns and competitions people couldn’t care less about. They are professional producers of spam, when in fact they should be focusing on the core of the business: understanding why customers buy, how they could buy more, what sort of products we should make, how the business model can be improved, etc. The wider concept of marketing deals with navigating the current and future market; it is not about making people buy stuff they don’t need.

To a great extent, I blame marketing education. In academia, we don’t really get the real concept of marketing into our students’ minds. Even the students majoring in marketing don’t truly “get” that marketing is not the same as advertising; too often, they have a narrow understanding of it and are then easily molded into the perverse industry standards, ending up in the purgatory of meaningless campaigns while convincing themselves they’re doing something of real value.

But marketing is not about campaigns, and it sure as hell is not about “creating Facebook competitions”. Rather, marketing is a process of continuous improvement of the business. Yes, this includes campaigns because the business cycles in many industries follow seasonal patterns, and we need to communicate outwards. But marketing has so much more to give for strategy, if only marketers would stop wasting their time and instead focus on the essential.

Now, what I wrote here is only based on anecdotal evidence arising from personal observations. It would be interesting, and indeed of great importance, to find out if it’s correct that most marketers are wasting their time on petty campaigns instead of the big picture. This could be done for example by conducting a study that answers the questions:

  1. What do marketers do with their time?
  2. How does that contribute to the bottom line?
  3. Why? (That is, what is the real value created for a) the customer and b) the organization)
  4. How is the value being measured and defended inside the organization?

If nothing else, every marketer should ask themselves those questions.

On online debates: fundamental differences

Back in the day, they knew how to debate.

Introduction. Here’s a thought, or argument: Most online disputes can be traced back to differences in premises. I’m observing this time and time again: two people disagree, but fail to see why. Each party believes they are right, and so they keep on debating; it’s like a never-ending cycle. I propose here that identifying the fundamental difference in their premises could end any debate sooner rather than later, and therefore save valuable time and energy.

Why does it matter? Due to the commonness of this phenomenon, its solution is actually a societal priority — we need to teach people how to debate meaningfully so that they can efficiently reach a mutual agreement, either by one of the parties adopting the other one’s argument (the “Gandhi principle”) or by quickly identifying the fundamental disagreement in premises, so that the debate does not go on for an unnecessarily long period. In practice, the former seems to be rare — it is more common that people stick to their original point of view rather than “caving in”, as it is falsely perceived. While there may be several reasons for that, including stubbornness, one authentic source of disagreement is the fundamental difference in premises, and its recognition is immune to loss of face, stubbornness, or other socio-psychological conditions that prevent reconciliation (because it does not require an admission of defeat).

What does that mean? Simply put, people have different premises, emerging from different worldviews and experiences. Given this assumption, every skilled debater should recognize the existence of a fundamental difference when in disagreement – they should consider, “okay, where is the other person coming from?”, i.e. what are their premises? And through that process, present the fundamental difference and thus close the debate.

My point is simple: When tracing the argument back to the premises, for each conflict we can reveal a fundamental disagreement at the premise level.

The good news is that it gives us a reconciliation (and food for thought for each party, possibly leading to the Gandhi outcome of adopting the opposing view when it is judged more credible). When we know there is a fundamental disagreement, we can work together to find it, and consider finding it the end point of the debate. Debating therefore becomes a task not of proving yourself right, but of discovering the root cause of the disagreement. I believe this is a more effective method for ending debates than the current methods, which result in a lot of unnecessary wasted time and effort.

The bad news is that oftentimes, the premises are either 1) very difficult to change because they are so fundamentally part of one’s beliefs that the individual refuses to alter them, or 2) we don’t know how we should change them because there might not be “better” premises at all, just different ones. Now, of course this argument is itself based on a premise, that of relativity. But alternatively we could say that some premises are better than others, e.g. given a desirable outcome – but that would be a debate of value subjectivity vs. universality, and as such leads straight into a circular debate (which is precisely what we do not want) because both fundamental premises co-exist.

The same applies to many practical political issues – nobody, not even the so-called experts, can argue with certainty for the best scenario or predict the outcomes with a high degree of confidence. This leads to the problem of “many truths”, which can be crippling for decision-making and for the perception of togetherness in a society. But in a situation like that, it is ever more critical to identify the fundamental differences in premises; that kind of transparency enables dispassionate evaluation of their merits and weaknesses, and at the same time those of the other party’s thinking process. In a word, it is important for understanding your own thinking (following the old Socratic thought of ‘knowing thyself’) and for understanding the thinking of others.

The hazard of identifying fundamental premise differences is, of course, that it leads to a “null result” (nobody wins). Simply put, we admit that there is a difference and perhaps logically draw the conclusion that neither is right, or that each retains the belief of being right (but understands the logic of the other party). In an otherwise non-reconcilable scenario, this would seem like a decent compromise, but it is also prohibitive if and when participants perceive the debate as a competition. Instead, it should be perceived as co-creation: working together in a systematic way to exhaust each other’s arguments and thus derive the fundamental difference in premises.

Conclusion. In this post-modern era where 1) values and worldviews are more fragmented than ever, and 2) online discussions are commonplace thanks to social media, the number of argumentation conflicts is inherently very high. In fact, conflict is more likely than agreement due to all this diversity. People naturally have different premises, emerging from idiosyncratic worldviews and experiences, and therefore the emergence of conflicting arguments can be seen as the new norm in high-frequency communication environments such as social networks. People alleviate this effect by grouping with like-minded individuals, which may lead to their assuming more extreme positions than they otherwise would.

Education in argumentation theory, logic (philosophy and practice), and empathy is crucial to start solving this condition of disagreement, which I think is of a permanent nature. Earlier I used the term “skilled debater”. Indeed, debating is a skill. It’s a crucial skill for every citizen. Societies do wrong by giving people a voice but not teaching them how to use it. Debating skills are not natural traits people are born with – they are learned skills. While some people are self-taught, it cannot rationally be assumed that the majority of people would learn these skills by themselves. Rather, they need to be educated, in schools at all levels. For example, most university programs do not teach debating skills in the sense I’m describing here – yet they proclaim to instill critical thinking in their students. The level and the effort are inadequate – the schooling system needs to step up and make the issue a priority. Otherwise we face another decade or more of ignorance taking over online discussions.

What is a “neutral algorithm”?

1. Introduction

Earlier today, I had a brief exchange of tweets with @jonathanstray about algorithms.

It started from his tweet:

Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.

To which I replied:

Yes, and that’s why learning is not the way to go. “Fair” should not be goal, is inherently subjective. “Objective” is better

Then he wrote:

lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.

And I wrote:

True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.

And he answered:

what does “neutral” mean though?

After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.

2. Definition

So, what is a neutral algorithm? I would define it like this:

“A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.” [1]

An example of a neutral algorithm is a standard ad optimization algorithm: it gets to decide whether to show Ad1, Ad2, or Ad3. As opposed to asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).

The treatment all ads (read: content, users) get is fair – they are diffused based on their merits (measured objectively by an unambiguous metric), not based on favoritism of any sort.
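For instance, the core of such an optimizer can be as simple as this sketch (an epsilon-greedy selection rule; the structure is my illustration, not any ad platform’s actual algorithm):

```python
# Sketch of a 'neutral' ad selector: the choice depends only on observed
# click-through rates plus random exploration, never on anyone's opinion
# of the ads. Epsilon-greedy is an illustrative choice of rule.
import random

class CtrOptimizer:
    def __init__(self, ad_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {ad: {"shows": 0, "clicks": 0} for ad in ad_ids}

    def ctr(self, ad):
        s = self.stats[ad]
        return s["clicks"] / s["shows"] if s["shows"] else 0.0

    def choose(self):
        # Mostly exploit the best-performing ad, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self.ctr)

    def record(self, ad, clicked):
        self.stats[ad]["shows"] += 1
        self.stats[ad]["clicks"] += int(clicked)

optimizer = CtrOptimizer(["Ad1", "Ad2", "Ad3"])
```

Nothing in choose() consults the creators’ preferences: the only inputs are observed impressions and clicks.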

3. Foundations

The roots of algorithm neutrality stem from freedom of speech and net neutrality [2]. No outsiders can impose their values and opinions (e.g., by censoring politically sensitive content) or interfere with the operating principles of the algorithm. Instead of being influenced by external manipulation, the decision making of the algorithm is as value-free (neutral) as possible. For example, in the case of social media, it chooses to display information which accurately reflects the sentiment and opinions of the people at a particular point in time.

4. Limitations

Now, I grant there are issues with “freedom”, some of which are considerable. For example, 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda).

Another limitation is legislation – illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context, but given that the laws themselves are “fair”, this should pose no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech.

I wrote more about these issues here [3].

5. Conclusion

In spite of the aforementioned issues, with a neutral algorithm each media outlet/candidate/user has a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.

The rest is up to humans – educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display what the state of ignorance/sophistication is in a society. A good example is Microsoft’s infamous bot Tay [4], a machine-learning experiment turned bad. The alarming thing about the bot is not that “machines are evil”, but that *humans are evil*; the machine merely reflects that. Hence my original point of curbing human evilness by keeping algorithms free of human values as much as possible.

Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is a separation of church and state, there should be a separation of humans and algorithms to the greatest possible extent.

Notes

[1] Initially, I thought about a definition that would say “not influenced”, but it is not safe to assume that the subjectivity of the creators would not in some way be reflected in the algorithm. “Minimally” leads to the normative argument that that subjectivity should be mitigated.

[2] Wikipedia (2016): “Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”

[3] Algorithm Neutrality and Bias: How Much Control? <https://www.linkedin.com/pulse/algorithm-neutrality-bias-how-much-control-joni-salminen>

[4] A part of the story is that Tay was trolled heavily and therefore assumed a derogatory way of speech.