Archive for the social media tag

Joni

How to do political marketing on social media? A systematic process leveraging Facebook Ads


This post very briefly explains a process for using and scaling Facebook advertising in political marketing. It might not be clear to all readers, but professional online marketers should be able to follow.

The recipe for political marketing by using Facebook Ads:

  1. Create starting parameters (Age, Gender, Location, Message)
  2. Create total combinations based on the starting parameters
  3. Use prior information to narrow down search space: e.g., identify the 100 most important target groups (e.g., battle-ground states)
  4. Create Facebook Ads campaigns based on narrowed down search space
  5. Run the campaigns (the shortest time is one day, but I would recommend at least 2-3 days to accommodate Facebook’s algorithm)
  6. Analyze the results; combine data to higher level clusters (i.e., aggregate performance stats with matching groups from different campaigns)
  7. Scale up; allocate budget based on performance, iteratively optimize for non-engaged but important groups, and remove the already-converted voters.
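
To make steps 1–3 concrete, here is a minimal Python sketch (the parameter values are illustrative, not part of the original recipe):

```python
from itertools import product

# Step 1: starting parameters (values are illustrative).
ages = ["18-24", "25-34", "35-44", "45-54", "55+"]
genders = ["male", "female"]
locations = ["Florida", "Ohio", "Pennsylvania"]  # e.g., battleground states
messages = ["economy", "healthcare", "security"]

# Step 2: all target-group/message combinations.
combinations = list(product(ages, genders, locations, messages))
print(len(combinations))  # 5 * 2 * 3 * 3 = 90 candidate campaigns

# Step 3: narrow the search space with prior information,
# e.g., keep only combinations in the highest-priority states.
priority_states = {"Florida", "Ohio"}
narrowed = [c for c in combinations if c[2] in priority_states]
print(len(narrowed))  # 60
```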

The intuition:

You are using Facebook Ads to test how different target groups respond to your message. You will cluster this data to identify the most engaged target groups. You will then try to maximize voter turnout within those groups (i.e., maximize conversion). In addition, you will create new messages for those groups which are not currently responding well but which you need to capture in order to win the election. You will keep testing these groups with new messages, one by one finding the most responsive groups for a given message.

Once a target group shows a high level of engagement, you will scale up your advertising efforts (think 10x or 100x increase). You will keep the test cycle short (a week is more than enough) and the scaling period long. Based on campaign events, you may want to revisit already secured groups to ensure their engagement remains high. Because you are not able to measure the ultimate conversion (=voting) directly, you will use proxy metrics that reflect the engagement of different target groups (particularly clicks, CTR, and post-click behavior such as time-on-site and newsletter subscriptions). This enables you to predict the likelihood to vote based on social media engagement. Once a person has “converted”, he or she is removed from targeting – this is done to avoid wasting your budget preaching to the choir.

Here are some additional metrics you can consider, some of which are harder to infer than the basic ones: frequency of activity, sentiment level, interest in a single issue that drives votes, and historical voting records (district level). Depending on the metric used, we can set a target level (e.g., time-on-site > 3 min) or a binary event (e.g., subscription to the campaign newsletter) to represent conversion.
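
As a minimal sketch of how such a conversion rule might look in code (the field names and thresholds are illustrative assumptions):

```python
# Hypothetical user records assembled from proxy metrics.
users = [
    {"id": 1, "time_on_site": 4.2, "subscribed": False},
    {"id": 2, "time_on_site": 1.1, "subscribed": True},
    {"id": 3, "time_on_site": 0.5, "subscribed": False},
]

def has_converted(time_on_site_min: float, subscribed: bool) -> bool:
    # Conversion = a target level (time-on-site > 3 min) or a binary
    # event (newsletter subscription), as described above.
    return subscribed or time_on_site_min > 3.0

# Converted users go on an exclusion list so budget is not wasted
# on preaching to the choir.
exclusion_list = {u["id"] for u in users
                  if has_converted(u["time_on_site"], u["subscribed"])}
print(exclusion_list)  # {1, 2}
```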

Overall, we try to mimic the best practices of online marketing optimization here by 1) testing with an explore-exploit mentality (scaling appropriately), and 2) excluding those who converted from future targeting (in effect, they are moved into a different budget: direct targeting by email, a form which is more personal and cheaper than ads). In addition, we delimit the search space by using our prior information on the electorate, again to avoid wasteful impressions and maximize ROI efficiency.
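
One simple way to implement the explore-exploit idea is an epsilon-greedy budget split; a sketch with made-up engagement rates:

```python
# Observed engagement rates (e.g., CTR) per target group; numbers are made up.
groups = {"FL-women-35-44": 0.042, "OH-men-18-24": 0.013, "PA-women-55+": 0.027}
BUDGET = 1000.0   # daily ad budget
EPSILON = 0.2     # share of budget reserved for exploration

# Exploration: spread a fixed share evenly so weak-looking groups keep
# producing data; exploitation: give the remainder to the best performer.
explore_share = BUDGET * EPSILON / len(groups)
best_group = max(groups, key=groups.get)
allocation = {g: explore_share for g in groups}
allocation[best_group] += BUDGET * (1 - EPSILON)
print(allocation)  # best group gets ~866.67, the others ~66.67 each
```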

Then, we fill the selected groups with data and observe the performance metrics. Finally, we cluster the results to get a higher-level understanding of each group, as well as find points of agreement between the groups that can be used to refine the communication strategy of the larger political campaign. Therefore, the data we obtain is not solely limited to Facebook Ads but can be used to further enhance messaging in other channels as well.

There. The methodology represents a systematic and effective way to leverage Facebook Ads for political social media marketing.

Also read:

Agile methods for predicting contest outcomes by social media analysis

Analyzing sentiment of topical dimensions in social media

Affinity analysis in political social media marketing – the missing link

Joni

Social media marketing for researchers: How to promote your publications and reach the right people


Today the Social Computing group at Qatar Computing Research Institute had the pleasure of listening to a presentation by Luis Fernandez Luque about social media marketing for researchers. Luis talked about how to promote your publications and personal brand, as well as how to reach the right people on social media with your research.

Luis is one of the most talented researchers I know, and a very good friend. He has two amazing girls and a great wife. You can follow Luis’ health informatics research on Slideshare and Twitter, and of course connect with him on LinkedIn.

In this post, I’ll summarize some points of his presentation (if you want the full thing, you need to ask him :) and reflect on them through my own experiences as a digital marketer.

Without further ado, here are 7 social media tips for researchers.

1. Upload your articles to the 3 big social media platforms for researchers

According to Luis, there are three major social media sites for researchers.

You should post your papers on each of these platforms to get extra visibility. According to Luis, the point is to disseminate content on existing platforms because they have the critical mass of audience readily available. This is preferable to starting your own website from scratch and trying to attract visitors.

However, I recommend doing both. In addition to sharing your research on social media, you can have a separate website for yourself and dedicated websites for your research projects. Having dedicated websites with a relevant domain provides search-engine optimization (SEO) benefits. In particular, websites are indexed better than social media sites, which means you have a better chance of being found. Your papers will get indexed by search engines and therefore attract occasional hits, depending on your chosen keywords and the competition for them (see point number 4).

For the same reason, you want to cross-link and cross-post your content effectively. For example, 1) publish the post on your own website, 2) re-publish it on LinkedIn, and 3) share it on Twitter, LinkedIn, and Google+ (as well as researcher social networks if it’s academic content; here I’m referring to idea posts or popularized articles). Don’t forget Google+, because occasionally those posts show up in search results. Sharing can be repeated and scheduled by using BufferApp. For example, I have all my LinkedIn articles mirrored at jonisalminen.com.

Finally, besides your research papers, consider sharing your dissertation as well as your Bachelor’s and Master’s theses. Those are often easier to read and reach a wider audience.

2. Recycle content and ideas

Luis mentioned he was able to increase the popularity of one of his papers by creating a Slideshare presentation about it. This principle is more commonly known as a content tree in inbound marketing. I completely agree with Luis’ advice: it is often straightforward and fast to create a presentation based on your existing paper, because you already know what you want to say.

If you have conference presentations or teaching material readily available, even better. For example, I’ve shared all my digital marketing lectures and teaching material on Slideshare, and they steadily attract views (tens of thousands in total so far). Here is an example of a presentation I made based on the post you’re reading. As you can see, it has an interesting title that aims to be “search-engine optimized”. By scrolling down, you will also notice that Slideshare converts the presentation into plain text. This is good for search-engine visibility, and one reason why Slideshare presentations rank well in Google. The picture from my Slideshare Analytics shows that many people find the presentations through Google.

Figure 1 Slideshare Analytics showing large share of search traffic.

Luis also mentioned including the name of your publication in the title slide, which is a good idea if you want to catch more citations from interested readers.

3. Create an online course

MOOCs and other forms of online education are a great way to disseminate your ideas and make your research better known. Luis mentioned two platforms for this.

The point is to share knowledge and at the same time mention your own research. I think Luis mentioned he at some point had 4,000 participants in his course, which is a very large audience and shows the power of online courses compared to traditional classrooms (I think I had at most 100 students in my course, so you can see how big the difference in reach is).

4. Choose the right title

This is like copywriting for researchers. The title plays an important role for two reasons: 1) it determines whether people become interested and click through to read your paper, and 2) it can increase or decrease your chances of being found in Google. A straight analogy is journalism: you want some degree of click-bait in your title, because you are competing against all other papers for attention. However, in my experience many scholars pay little attention to the attractiveness of their paper’s title from the clicker’s perspective, and even fewer perform keyword research (the post is in Finnish) to find out the popularity of related keywords.

So, how to choose the title of a research paper?

  1. Research & include relevant keywords
  2. Mention the problem your research deals with

The title should be catchy (=attractive) and include keywords people use when searching for information on the topic, be it research papers or just general knowledge. Luis’ tip was to include the problem (e.g., diabetes) in the title to get more downloads. Moreover, when sharing your papers, use relevant hashtags. In academia, the natural way is to identify conference hashtags relating to your topic; as long as it’s relevant, using conference hashtags to promote your research is okay.

You can use tools such as Google Keyword Planner and Google Trends for keyword research. To research hashtags, Twitter’s recommendation feature is an easy approach (e.g., in TweetDeck you get recommendations when you start writing a hashtag). You can also use tools such as Hashtagify and Keyhole to research relevant hashtags. Finally, include the proper keywords in your abstract as well. While full papers are often hidden behind paywalls, abstracts are indexed by search engines.
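
For programmatic keyword research, the unofficial pytrends wrapper for Google Trends can also be used; a minimal sketch, assuming pip install pytrends (the example keyword is my own illustration):

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(kw_list=["diabetes self-management"], timeframe="today 12-m")

print(pytrends.interest_over_time().tail())  # search interest over the past year
print(pytrends.related_queries())            # related queries people search for
```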

5. Write guest blogs

Instead of trying to make a go of your own website (which is admittedly tough!), Luis recommended writing guest posts on popular blogs. The rationale is the same as with social media platforms: these venues already have an audience. As long as the blog deals with your vertical, the audience is likely to be interested in what you have to say. For content marketers, finding quality content is also a consistent source of concern, so it is easy to see a win-win here.

For example, you can write for a research foundation’s blog. If they have given you money, this also shows you are actively trying to popularize your research, and they get something in return for their funding! Consider also industry associations (e.g., I haven’t gotten around to it yet, but I would like to write for IAB Finland’s blog since they have a large audience interested in digital marketing).

6. Define your audience

Luis advised defining your audience carefully – it is all about determining your area of focus and where you want to make an impact. On social media, you cannot control who sees your posts, but you can increase the chances of reaching the right people with this simple recipe:

  1. Find out who are the important people in your field
  2. Follow them on Twitter and LinkedIn
  3. Tag them in posts on both platforms.

The last point doesn’t always yield results, but I’ve also had some good experiences by including the Twitter handle of a person I know is working on the topic I’m writing about. Remember, you are not spamming but asking for their opinion. That is perfectly fine.

7. Track and optimize

This is perhaps the most important thing. Just like in all digital marketing, you need to work on your profile and social media activity constantly to get results. The competition is quite high but, in academia, not many are fluent in social media marketing. So, as long as you put in some effort, you should get results more easily than in the commercial world! (Although, truth be told, you are competing with commercial content as well.)

How to measure social media impact?

  • choose metrics
  • set goals
  • track & optimize

For example, you could have reads/downloads as the main KPI. Then, you could set the goal of increasing that metric by 30% in the next six months. Finally, you would track the results and act accordingly. The good thing about numbers and small successes is that you become addicted. Well, this is mostly a good thing, because in the end you also want to get some research done! But as you see your posts get coverage, it encourages you to carry on. And gradually you are able to increase your social media impact.

A research group could do this as a whole by giving somebody the task of summarizing the social media reach of individuals and of the group as a whole. It would be fairly easy to incentivize good performance and encourage knowledge sharing on what works. By sharing best practices, the whole group could benefit. Besides disseminating your research, social media activity can increase your citations, as well as improve your chances of receiving funding (as you can show “real impact” through numbers).

The tool Luis recommended is called Altmetric, which is specifically tailored for research analytics. I haven’t used it before, but will give it a go.

Conclusion

The common theme is sharing your knowledge. In addition to just posting, you can also ask and answer questions on social media sites (e.g., ResearchGate) and practitioner forums (e.g., Quora). I was able to beat my nemesis Mr. Valtteri Kaartemo in our Great Dissertation Downloads Competition by being active on Quora for a few weeks. Answering Quora questions and including a link in my signature got my dissertation over 1,000 downloads quickly, and since some questions remain relevant over time, it still helps. But this is not only about competitions and your own “brand”; it is about using your knowledge to help others. Think of yourself as an asset: society has invested tremendous amounts of time, effort, and money into your education, and you owe it to society to pay some of it back. One way to do that is sharing your knowledge on social media.

I still remember one professor saying a few years ago that she doesn’t put her presentations on Slideshare because “somebody might steal the ideas”. But as far as I’m concerned, a much bigger problem is that nobody cares about her ideas. We live in a world where researchers compete against all sources of information, and we must adapt to this game. In my experience, the ratio of effort put into conducting research versus communicating it is totally skewed, as most researchers lack basic social media marketing skills and hardly do any content marketing at all.

This is harmful not only for their careers but also for the various stakeholder groups that miss the important insights of their research. And I’m not only talking about popularization: other researchers also increasingly rely on social media and search engines to find relevant papers in their field. Producing high-quality content is not enough; you also need to market your papers on social media. By doing so, you are doing a service to the community.


Joni

Polling social media users to predict election outcomes



Introduction

The problem with predicting election outcomes from social media is that the data, such as likes, are aggregate, whereas the election system is not – apart from simple majority voting, where you only have the classic representativeness problem that Gallup solved in 1936. To solve the aggregation problem, one needs to segment the polling data so that it 1) corresponds to the prevailing election system and 2) accurately reflects the voters under that system. For example, in the US presidential election each state has a certain number of electoral votes. To win, a candidate needs to reach 270 electoral votes.

Disaggregating the data

One obvious solution would be to trace like sources to profiles and determine the state based on publicly given user information. This way we could also filter out foreign likers. However, there are some issues with using likes as indicators of votes. Most importantly, “liking” something on social media does not in itself predict an individual’s future behavior to a sufficient degree.

Therefore, I suggest here a simple polling method via social media advertising (Facebook Ads) and online surveys (SurveyMonkey). Polling partly faces the same aforementioned problem of future behavior as using likes as the overarching indicator, which is why in the latter part of this article I discuss how these approaches could be combined.

At this point, it is important to acknowledge that online polling has significant advantages relating to 1) anonymity, 2) cost, and 3) speed. First, people may feel more at ease expressing their true sentiment to a machine than to another human being. Second, the method has the potential to collect a sizeable sample in a more cost-effective fashion than calling. Finally, a major advantage is that due to the scalable nature of online data collection, the predictions can be updated faster than via call-based polling. This is particularly important because election cycles can involve quick and hectic turns. If the polling delay is from a few days to a week, it is too late to react to final-week campaign events, which may still carry great weight in the outcome. In other words: the fresher the data, the better. (An added bonus is that by taking several samples, we could factor momentum, i.e., the growth speed of a candidate’s popularity, into our model – albeit this can be achieved with traditional polling as well.)

Social media polling (SMP)

The method, social media polling or SMP, is described in the following picture.

Figure 1 Social media polling

The process:

1. Define segmentation criteria

First, we need to understand the election system. For example, in the US system every state has a certain weight, expressed by its share of total electoral votes. There are 50 states, so these become our segmentation criteria. If we deem it appropriate to segment further (e.g., by gender or age), we can do so by creating additional segments which are reflected in the target groups and surveys. (These sub-segments can also be analyzed in the actual data later on.)

2. Create unique surveys

Then, we create a unique survey for each segment so that the answers are bucketed. The survey questions are identical – they are just behind different links to enable easy segmentation. We use a survey rather than a visible poll (app) or a picture-type poll (“like if you vote Trump, heart if you vote Hillary”) because we want to avoid social desirability bias. A click on Facebook leads the user to their segment’s unique survey, and their answers won’t be visible to the public.

3. Determine sample size

Calculating sample size is one of those things that will make your head spin, because there’s no easy answer as to what constitutes a good sample size. Instead, “it depends.” However, we can use some heuristic rules to come up with decent alternatives in the context of elections. Consider two potential sample sizes:

  • Option 1: sample size 500, confidence level 95%, margin of error +/- 4.4%
  • Option 2: sample size 1,000, confidence level 95%, margin of error +/- 3%

These are seen as decent options among election pollsters. However, the margin of error is still quite sizeable in both. For example, if there are two candidates whose “true” support values are A = 49% and B = 51%, the large margin of error can easily lead us astray. We could solve this by increasing the sample size, but if we want to reduce the margin of error from +/- 3% to, say, +/- 1%, the required sample size grows dramatically (more precisely, with 95% confidence and a population of 1M, it’s about 9,512 – impractically high for a 50-state model). In other words, we have to accept the risk of wrong predictions in this type of situation.
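
For reference, these figures follow the standard sample-size formula for a proportion, with a finite population correction; a minimal sketch:

```python
import math

def required_sample_size(margin_of_error: float, population: int,
                         z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for a proportion at 95% confidence (z = 1.96),
    with a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(required_sample_size(0.03, 1_000_000))  # ~1,066 for +/- 3%
print(required_sample_size(0.01, 1_000_000))  # ~9,513 for +/- 1%, cf. above
```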

Almost all states have over 1,000,000 people, so each is considered a “large” population (this is a mathematical point: the required sample size stabilizes once the population reaches a certain size). Although the US is often characterized as one population, in the context of election prediction it is actually several different populations (because independent states vote). The procedure we apply is stratified random sampling, in which the large general population is split into sub-groups. In practice, each sub-group requires its own sample, and therefore our approach requires a considerably larger total sample than a prediction that would only consider the country’s whole population. But exactly because of this, it should be more accurate.

So, after this lengthy explanation, let us say we satisfice with a sample size of 500 per state. That would be 500 × 50 = 25,000 respondents. If it costs $0.60 to get a respondent via Facebook ads, data collection would cost $15,000. For repeated measurement, there are a few strategies. First, the sample size can be reduced for states that show a large difference between the candidates; we don’t need a large number of respondents if we “know” the popularity difference is high. The important thing is that the situation is measured periodically and sample sizes are flexibly adjusted according to known results. In a similar vein, we can increase the sample size for states where the competition is tight, to reduce the margin of error and thereby increase the accuracy of our prediction. To my understanding, not all pollsters use this opportunity for flexible sampling efficiently.
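
A sketch of such a flexible allocation rule (the margins and thresholds are illustrative assumptions):

```python
BASE_N = 500  # default respondents per state

# Candidate-support margins from the previous polling round (made up).
previous_margin = {"California": 0.25, "Florida": 0.01, "Texas": 0.12}

def next_sample_size(margin: float) -> int:
    if margin > 0.10:   # "safe" state: a smaller confirmation sample suffices
        return BASE_N // 2
    if margin < 0.03:   # battleground: buy a smaller margin of error
        return BASE_N * 2
    return BASE_N

plan = {state: next_sample_size(m) for state, m in previous_margin.items()}
print(plan)  # {'California': 250, 'Florida': 1000, 'Texas': 250}
```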

4. Create Facebook campaigns

For each segment, a target group is created in Facebook Ads. The target group is used to advertise to that particular group; for example, the Michigan survey link is only shown to people from Michigan. That way, we minimize the risk of people outside the segment responding (moreover, they can be excluded later on by IP). At this stage, creating attractive ads helps keep the cost per response low.

5. Run until sample size is reached

The administrator observes the results and stops the data collection once a segment has reached the desired sample size. When all segments are ready, the data collection is stopped.

6. Verify data

Based on IP addresses, we can filter out respondents who do not belong to our particular geographical segment (=state).

Ad clicks can be used to assess sample representativeness along other factors; in other words, we can use Facebook’s campaign reports to segment by age and gender. If a particular group is under-represented, we can correct this by shifting the targeting towards them and resuming data collection. However, we can also accept the under-representation if we have no valid reference model of the sub-segments’ voting behavior. For example, millennials might be under-represented in our data, but this might correspond with their general voting behavior as well; if we assume the survey response rate corresponds with the segments’ voting rate, there is no problem.

7. Analyze results

The analysis process is straightforward:

segment-level results x weights = prediction outcome

For example, in the US presidential election, the segment-level results would be per state (whoever polls highest in a state is the winner there), multiplied by each state’s share of electoral votes. The candidate who gets at least 270 votes is the predicted winner.
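
A minimal sketch of this aggregation for three states (the poll shares are made up; the electoral vote counts are the real 2016-era values):

```python
# state -> polled share per candidate (made-up numbers)
polls = {
    "Florida":  {"A": 0.49, "B": 0.51},
    "Ohio":     {"A": 0.52, "B": 0.48},
    "Michigan": {"A": 0.47, "B": 0.53},
}
electoral_votes = {"Florida": 29, "Ohio": 18, "Michigan": 16}

totals = {}
for state, shares in polls.items():
    winner = max(shares, key=shares.get)  # highest-polling candidate takes the state
    totals[winner] = totals.get(winner, 0) + electoral_votes[state]

print(totals)  # {'B': 45, 'A': 18}; across all 50 states, 270+ wins
```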

Other methods

Now, as for other methods, we can use behavioral data. I have previously argued that behavioral data is a stronger indicator of future actions since it’s free from reporting bias. In other words, people say they will do something but don’t end up doing it. This is a very common problem, both in research and daily life.

To correct for that, we consider two approaches here:

1) The volume of likes method, which equates a like with a vote (the more likes a candidate has in relation to another candidate, the more likely they are to win)

For this method to work, the “intensity of a like”, i.e., its correlation to behavior, should be determined, as not all likes are indicators of voting behavior. Likes don’t readily translate into votes, and there does not appear to be other information we can use to further examine their correlation (a like is a like). We could, however, add contextual information about the person, or use rules such as “the more likes a person gives, the more likely (s)he is to vote for a candidate.”

Or, we could use another solution which I think is better:

2) Text analysis/mining

By analyzing a person’s social media comments, we can better infer the intensity of their attitude towards a given topic (in this case, a candidate). If a person uses strongly positive vocabulary when referring to a candidate, (s)he is more likely to vote for him/her than if the comments are negative or neutral. Notice that the mere positive-negative range is not enough, because positivity has degrees of intensity we have to consider. It is different to say “he is okay” than “omg he is god emperor”. The more excitement and associated feelings a person exhibits – which need to be carefully mapped and defined in the lexicon – the more likely voting behavior is.
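
As a minimal sketch, a lexicon-based intensity scorer such as VADER (pip install vaderSentiment) already distinguishes lukewarm approval from enthusiasm:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for comment in ["he is okay", "omg he is god emperor"]:
    # 'compound' is a normalized intensity score in [-1, 1], so it captures
    # degrees of positivity rather than a flat pos/neg label.
    print(comment, "->", analyzer.polarity_scores(comment)["compound"])
```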

Limitations

As I mentioned, even this approach risks shortcomings of representativeness. First, the population on Facebook may not correspond with the population at large; the user base may be skewed by age or some other factor. The choice of platform greatly influences the sample; for example, Snapchat users are on average younger than Facebook users, whereas Twitter users are more liberal. It is not clear whether Facebook’s user base represents a skewed sample or not. Second, the people voicing their opinions may be part of a “vocal minority” as opposed to a “silent majority”. In that case, we apply the logic of the Gaussian distribution and assume that the general population leans more toward the middle ground than the extremes; if, in addition, we assume the central tendency to be symmetrical (meaning people in the middle are equally likely to tip toward either candidate in a dual race), the analysis of the extremes can still yield a valid prediction.

Another limitation is that advertising targeting is not equivalent to random sampling, but carries some kind of bias. That bias could emerge, e.g., from 1) the ad algorithm favoring a particular sub-set of the target group, i.e., showing more ads to them, whereas we would like to get all types of respondents; or 2) self-selection, in which the respondents are of a similar kind and again not representative of the population. Off the top of my head, I’d say number two is less of a problem, because the people who show enough interest are also the ones who vote – remember, we essentially don’t need to care about the opinions of people who don’t vote (that’s how elections work!). But number one could be a serious issue, because the ad algorithm directs impressions based on responses and might pick up some hidden pattern we have no control over. Basically, the only thing we can do is examine the superficial segment information in the ad reports and evaluate whether the ad rotation was sufficient.

Combining different approaches

As both approaches – traditional polling and social media analysis – have their shortcomings and advantages, it might be feasible to combine the data under a mixed model factoring in 1) the count of likes, 2) the count of comments with high affinity (=positive sentiment), and 3) polled preference data. A deduplication process would be needed so that those who both liked and commented are not counted twice; this requires associating likes and comments with individuals. Note that the hybrid approach requires geographic information as well, because otherwise the segmentation is diluted. Anyhow, taking the user as the central entity could be a step towards determining voting propensity:

user (location, count of likes, count of comments, comment sentiment) –> voting propensity
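
As a purely hypothetical sketch of this mapping (the weights and scaling are my own assumptions and would need to be fitted against ground-truth data):

```python
def voting_propensity(likes: int, comments: int, avg_sentiment: float) -> float:
    """Combine per-user social signals into a rough 0..1 propensity score.
    avg_sentiment is the mean comment sentiment in [-1, 1]."""
    engagement = min(1.0, (likes + 2 * comments) / 20)  # comments weigh more
    affinity = max(0.0, avg_sentiment)  # only positive affinity counts as support
    return 0.5 * engagement + 0.5 * affinity

# Location would be used for segmentation (state), not in the score itself.
print(voting_propensity(likes=8, comments=3, avg_sentiment=0.7))  # 0.7
```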

Another way to see this is that enriching likes with relevant information (with regard to the election system) can help model social media data in a more granular and meaningful way.

Joni

Analyzing sentiment of topical dimensions in social media


Introduction

I had an interesting chat with Sami Kuusela from Underhood.co. Based on that, I got some inspiration for an analysis framework, which I’ll briefly describe here.

The model

Figure 1 Identifying and analyzing topical text material

The description

  1. The user is interested in a given topic (e.g., Saara Aalto, or #saaraaalto) and enters the relevant keywords.
  2. The system runs a search and retrieves text data based on that (e.g., tweets).
  3. A cluster analysis (e.g., unsupervised topic model) identifies central themes from the data.
  4. Vectorization of the representative keywords from the cluster analysis (e.g., the 10 most popular per cluster) is run to extract similar-meaning words from a reference lexicon. This increases the generality of each topic cluster by associating it with other words that are close in the vector space.
  5. Text mining is run to refine the themes, i.e. placing the right text pieces under the correct themes. These are now called “dimensions”, since they describe the key dimensions of the text corpus (e.g., Saara’s voice, performance, song choices…).
  6. Sentiment analysis can be run to score the general (pos/neg/neu) or specific (e.g., emotions: joy, excitement, anger, disappointment, etc.) sentiment of each dimension. This could be done by using a machine-learning model with annotated training data (if the data-set is vast), or some sentiment lexicon (if the data-set is small).
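
As a rough sketch of steps 2–4 using gensim (pip install gensim); the toy tweets are placeholders, and the tiny corpus is only meant to show the shape of the pipeline:

```python
from gensim import corpora, models

# Step 2: retrieved and tokenized text data (placeholder tweets).
tweets = [["saara", "voice", "amazing"],
          ["song", "choice", "weak", "tonight"],
          ["voice", "incredible", "performance"]]

# Step 3: unsupervised topic model to identify central themes.
dictionary = corpora.Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
print(lda.print_topics())

# Step 4: expand each topic's top keywords with nearby words in a vector
# space; in practice a large pre-trained model would replace this toy one.
w2v = models.Word2Vec(tweets, vector_size=50, min_count=1, seed=1)
print(w2v.wv.most_similar("voice", topn=3))
```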

I’m not sure whether steps 4 and 5 would improve the system’s ability to identify topics. It might be that a more general model is not required because the system already can detect the key themes. Would be interesting to test this with a developer.

Anyway, what’s the whole point?

The whole point is to acknowledge that each large topic naturally divides into smaller sub-topics: dimensions that people perceive as relevant for that particular topic. For example, in politics these could be things like “economy”, “domestic policy”, “immigration”, “foreign policy”, etc. While the dimensions can have some consistency within a field (e.g., all political candidates share some dimensions), the exact mix is likely to be unique; for instance, the dimensions of social media texts relating to Trump are likely to differ considerably from those of Clinton. That’s why the analysis ultimately needs to be done case by case.

In any case, it is important to note that instead of giving a general sentiment or engagement score for, say, a political candidate, we can use an approach like this to give a more in-depth, segmented view of them. This leads to a better understanding of “what works and what doesn’t”, information that can be used in strategic decision-making. In addition, the topic-segmented sentiment data could be associated with predictors in a predictive model, e.g., by multiplying each topic’s sentiment by the weight of the respective topic (assuming the topic corresponds with the predictor).

Limitations

This is just a conceptual model. As said, it would be interesting to test it. There are many potential issues, such as handling cluster overlap (some text pieces can naturally be placed into several clusters, which can cause classification problems) and hierarchical issues (e.g., “employment” sits under “economy” and should hence influence the latter’s sentiment score).

Joni

Organic reach and the choice of social media platform


(This is work in progress.)

Introduction

It is a well-established fact that organic reach on a dominant platform decreases over time, as the competition over users’ attention increases. There is thus an inverse relation:

The more competition (by users and firms) in a user’s news feed, the less organic visibility for a firm.

The problem

How should a firm willing to engage in social media activity approach this matter?

In particular,

  • how should it divide its time and marketing efforts between alternative platforms?
  • when does it make sense for it to diversify?

The analysis

The formula behind the decision is u * o, in which

u = fan base
o = organic reach

  • all else equal, the larger the organic reach, the better
  • all else equal, the larger the fan base, the better

But even on a drastically smaller platform, a large o can offset the relative fan-base advantage.

For example, consider a firm that has a presence on two platforms.

platform A
500M users, 5,000 fans

platform B
10,000 users, 100 fans

At first glance, it would make sense to invest time and effort in platform A, given that both the overall user base and the fan base are significantly larger. However, now consider the inclusion of factor o.

platform A
500M users, 5,000 fans
organic reach 1% = 50 users

platform B
10,000 users, 100 fans
organic reach 90% = 90 users

It now makes sense for the firm to shift its social media activities to platform B, as it gives a better return on investment in terms of gained reach.

(It is assumed here that post-click actions are directly proportional to the amount of website traffic, and thus do not affect the return calculation.)
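
The example reduces to a one-line comparison of u * o per platform; a minimal sketch:

```python
def expected_reach(fans: int, organic_reach: float) -> float:
    """u * o: fans reached per post."""
    return fans * organic_reach

platforms = {"A": expected_reach(5_000, 0.01),  # 50 users reached
             "B": expected_reach(100, 0.90)}    # 90 users reached
print(max(platforms, key=platforms.get))  # 'B', despite the far smaller fan base
```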

Conclusion

More generally,

as organic reach decreases in platform A, platform B with relatively better organic visibility becomes more feasible

Implications

Firms are advised to consider their social media investments in light of organic reach, and not be fooled by vanity metrics such as a platform’s total user base. Relative metrics, such as the share of organic visibility, matter more.

Entrant platforms can encourage switching behavior by promising firms a larger degree of organic reach. At early stages this does not compromise the utility of users, as their news feeds are not yet cluttered. However, as the entrant platform matures and gains popularity, it will have an incentive to decrease organic reach.

This effect may partially explain why a dominant platform’s position is never secure: entrants can promise better reach for both friends’ and firms’ posts, thereby giving more feedback on initial posts and a better user experience. This may increase multi-homing behavior and, since multi-homing has its cost in time and effort, even lead users to desert the dominant platform.

I’m into digital marketing, startups, platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f