Digital marketing, startups and platforms

Analyzing sentiment of topical dimensions in social media

Introduction

I had an interesting chat with Sami Kuusela from Underhood.co. Based on that, I got some inspiration for an analysis framework, which I'll briefly describe here.

The model

Figure 1. Identifying and analyzing topical text material

The description

  1. The user is interested in a given topic (e.g., Saara Aalto, or #saaraaalto) and enters the relevant keywords.
  2. The system runs a search and retrieves text data based on that (e.g., tweets).
  3. A cluster analysis (e.g., unsupervised topic model) identifies central themes from the data.
  4. Vectorization of representative keywords based on cluster analysis (e.g., 10 most popular) is run to extract words from a reference lexicon of words that have a similar meaning. This increases the generality of each topic cluster by associating them with other words that are close in the vector space.
  5. Text mining is run to refine the themes, i.e. placing the right text pieces under the correct themes. These are now called “dimensions”, since they describe the key dimensions of the text corpus (e.g., Saara’s voice, performance, song choices…).
  6. Sentiment analysis can be run to score the general (pos/neg/neu) or specific (e.g., emotions: joy, excitement, anger, disappointment, etc.) sentiment of each dimension. This could be done by using a machine-learning model with annotated training data (if the data-set is vast), or some sentiment lexicon (if the data-set is small).
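
To make steps 2, 3 and 6 concrete, here is a minimal sketch using scikit-learn; the example tweets, the number of clusters and the tiny sentiment lexicon are illustrative assumptions, and the vector-space expansion of steps 4-5 is omitted.

```python
# A minimal sketch of the pipeline (steps 2, 3 and 6), with illustrative data and lexicon.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Step 2: text data retrieved for the topic of interest (here: made-up example tweets).
tweets = [
    "her voice was amazing tonight",
    "what a powerful voice, incredible singer",
    "terrible song choice this week",
    "loved the song choice, great performance",
    "the performance felt flat and boring",
    "stunning stage performance, so much energy",
]

# Step 3: unsupervised topic model to identify central themes ("dimensions").
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(doc_term)

words = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_words = [words[i] for i in weights.argsort()[-3:][::-1]]
    print(f"dimension {topic_idx}: {top_words}")

# Step 6: score sentiment of each dimension with a tiny illustrative lexicon.
lexicon = {"amazing": 1, "incredible": 1, "loved": 1, "great": 1, "stunning": 1,
           "terrible": -1, "flat": -1, "boring": -1}
for topic_idx in range(3):
    # Assign each tweet to its most probable dimension, then sum lexicon scores.
    members = [t for t, probs in zip(tweets, doc_topics) if probs.argmax() == topic_idx]
    score = sum(lexicon.get(w, 0) for t in members for w in t.split())
    print(f"dimension {topic_idx}: sentiment score {score}")
```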

I’m not sure whether steps 4 and 5 would improve the system’s ability to identify topics. It might be that a more general model is not required because the system already can detect the key themes. Would be interesting to test this with a developer.

Anyway, what’s the whole point?

The whole point is to acknowledge that each large topic naturally divides into small sub-topics, which are dimensions that people perceive as relevant for that particular topic. For example, in politics these could be things like "economy", "domestic policy", "immigration", "foreign policy", etc. While the dimensions can have some consistency within a field (e.g., all political candidates share some dimensions), the exact mix is likely to be unique; for example, the dimensions of social media texts relating to Trump are likely to be considerably different from those relating to Clinton. That's why the analysis ultimately needs to be done case by case.

In any case, it is important to note that instead of giving a general sentiment or engagement score of, say, a political candidate, we can use an approach like this to give a more in-depth or segmented view of them. This leads to a better understanding of what works and what doesn't, which is information that can be used in strategic decision-making. In addition, the topic-segmented sentiment data could be associated with predictors in a predictive model, e.g. by multiplying each topic's sentiment with the weight of the respective topic (assuming the topic corresponds with the predictor).
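
As a small illustration of the weighting idea above, a minimal sketch with invented topic sentiments and weights:

```python
# Illustrative only: combine per-topic sentiment scores into one figure by
# weighting each topic with its assumed importance (weights sum to 1).
topic_sentiment = {"economy": 0.4, "immigration": -0.2, "foreign_policy": 0.1}
topic_weight = {"economy": 0.5, "immigration": 0.3, "foreign_policy": 0.2}

overall = sum(topic_sentiment[t] * topic_weight[t] for t in topic_sentiment)
print(round(overall, 3))  # 0.4*0.5 + (-0.2)*0.3 + 0.1*0.2 = 0.16
```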

Limitations

This is just a conceptual model. As said, it would be interesting to test it. There are many potential issues, such as handling cluster overlap (some text pieces can naturally be placed into several clusters, which can cause classification problems) and hierarchical issues (e.g., "employment" falls under "economy" and should hence influence the latter's sentiment score).

CLV-based marketing budget allocation across platforms

I'm waiting for an easy way to do sensible marketing to appear. Right now, marketing budgets are allocated either by gut feeling (print, TV, radio) or by conversion costs (digital). In other words, in the latter case we calculate CPA when we should be calculating CLV.

Why isn't CLV calculated?

CLV isn't calculated because it is so difficult. You would have to separate out the purchase history and tie every conversion to a customer (classified as new/existing) in order to arrive at a lifetime value. The data is hidden in internal systems (CRM/SQL), and connecting it to web analytics requires custom work. The cost of a one-off conversion, typically a sale, is easy to track with a single snippet of script, which is why it is used as the basis for budget allocation. We settle for the data that is available, because that is the easiest.

Why should CLV be calculated?

CLV, or customer lifetime value, determines many things. Customers can be divided into segments by profitability and offered different service levels, different product recommendations, or extra benefits as a thank-you for loyalty. Lifetime value can vary by channel, age group, location, and so on. It would be important to identify the differentiating factors, so that the traits shared by the most profitable customers guide marketing targeting.
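
For reference, here is a minimal sketch of one common simplified CLV formula; the figures and the constant-retention assumption are illustrative, not from the original post.

```python
# A minimal sketch: CLV = average margin per period * expected number of active periods,
# where the expected number of periods is approximated as 1 / (1 - retention_rate).

def estimate_clv(avg_margin_per_period: float, retention_rate: float) -> float:
    """Rough customer lifetime value under a constant retention assumption."""
    expected_periods = 1.0 / (1.0 - retention_rate)
    return avg_margin_per_period * expected_periods

# Example: 20 EUR margin per month, 80% monthly retention -> CLV of about 100 EUR.
print(estimate_clv(20.0, 0.8))
```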

Putting CLV calculations into practice

At its best, targeting can be automated. At least three approaches come to mind:

1. Budget/bid adjustment based on a CLV estimate: more budget is allocated to the most profitable audiences, or a specific multiplier is bid for their attention in the ad auction, on the grounds that they are more profitable. This can be done either for the existing customer base only, or for a wider audience that shares the traits of the most profitable customer segment (cf. "lookalike" logic). Various data solutions (DMPs) in theory make it possible to reach a given audience in a media-agnostic way, although in practice each platform has to be configured separately. If a middleware layer is built (à la Smartly), budgeting/bidding decisions can be made flexibly so that the software collects data from the platforms, combines it with the company's own data, updates the CLV calculation and makes the aforementioned decisions accordingly (see the sketch after this list).

2. A differentiated service experience: the most profitable customers are offered different product recommendations or additional services, online or offline. In other words, typical dynamic website personalization, but in this case driven by CLV.

3. Separate campaigns/offers: email and other direct marketing tailored to the most profitable customer segments. In other words, dynamic lists that update based on the CLV calculation, with content and journeys tailored to each list.
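
Referring back to point 1, here is a minimal sketch of how bid multipliers could be derived from per-segment CLV estimates; the segment names and the proportional-scaling rule are illustrative assumptions, not the post's prescribed method.

```python
# Illustrative sketch: scale each segment's bid by its CLV relative to the
# average CLV across segments.

def bid_multipliers(segment_clv: dict) -> dict:
    """Return a bid multiplier per segment, proportional to its relative CLV."""
    avg_clv = sum(segment_clv.values()) / len(segment_clv)
    return {segment: clv / avg_clv for segment, clv in segment_clv.items()}

# Example: the 'loyal' segment gets a higher multiplier than one-off buyers.
print(bid_multipliers({"loyal": 300.0, "occasional": 120.0, "one_off": 60.0}))
```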

The underlying philosophy is to reduce waste by reaching only engaged customers. Time can then be spent thinking about how to serve them even better, instead of constantly chasing new customers. The result is an increase in the return on marketing investment (ROI). The risk is becoming a nuisance: the communication must feel well-timed and its content relevant. Too often, however, the executions are dull and do not dig deeply enough into the audience's real motives. That is why the response to campaigns, and especially the negative signals, should be monitored actively.

Conclusion

Marketing budget allocation still runs on gut feeling, and the industry offers no easy solutions to the problem. Hopefully, technology that enables CLV calculations will become more common in the future and available to everyone at a reasonable price. There is still room in the field for several startups, even though in the end players like Facebook and Google will roll out CLV calculations to all advertisers.

Agile methods for predicting contest outcomes by social media analysis

People think, or seem to assume, that there is some magical machine that spits out accurate predictions of future events from social media data. There is not, which is why every credible analysis takes human time and effort. But therein also lies the challenge: when fast decisions are needed, time-consuming analyses reduce agility. Real-time events would require real-time analysis, whereas data analysis is often a cumbersome and time-consuming effort, including data collection, cleaning, machine training, etc.

It’s a project for weeks or days, not for hours. All the practical issues of the analysis workflow make it difficult to provide accurate predictions at a fast pace (although there are other challenges as well).

An example is Underhood.co: they predicted Saara Aalto would win X-Factor UK based on social media sentiment, but ended up being wrong. While there are many potential reasons for this, my conclusion is that their indicators lack sufficient predictive power. They are too reliant on aggregates (in this case country-level data), and the approach was problematic to begin with: just as with any prediction, the odds change on the go as new information becomes available, so you should never call the winner weeks ahead. Of course, theirs was just a publicity stunt where they hoped being right would prove the value of their service. Another example is the US election, where prediction markets were completely wrong about the outcome. That was, according to my theory, because of the wrong predictors: polls ask what your preference is or what you would do, whereas social media engagement shows what people actually do (in social media), and as such it is closer to real behavior and hence a better predictor.

Even though I think human analysts will still be needed in the near future, more solutions for quick collection and analysis of social media data are needed, especially to combine human and machine work in the best possible way. Some of these approaches can be based on automation, but others can be methodological, such as quickly defining the relevant social media outlets for sampling.

Here are some ideas I have been thinking of:

I. Data collection

  1. Quick definition of choice space (e.g., candidates in a political election, X-Factor contestants)
  2. Identification of related social media outlets (i.e., communities, topic hashtags)
  3. Collecting sample (API, scraping, or copy-paste (crowdsourcing))

Each part is case-dependent and idiosyncratic: for whatever event (I'm thinking of competitions here), you have to do this work from scratch. Ultimately, you cannot get the whole Internet as your data, but you want the sample to be as representative as possible. For example, it was obvious that Twitter users showed much more negative sentiment towards Trump than Facebook users, and on both platforms there were supporter groups and topic concentrations that should first be identified before any data collection. Then, the actual data collection is tricky. People again seem to assume all data is easily accessible. It's not: while some platforms, such as Twitter and Facebook, offer APIs, comment data on other platforms can be much harder to retrieve programmatically. This means the comments that you use for predicting the outcome (by analyzing their relative share of the total as well as the strength of the sentiment beyond pos/neg) may need to be fetched either by web scraping or by manually copying them to a spreadsheet. Due to large volumes of data, crowdsourcing could be useful, e.g., setting up a Google Sheet where crowdworkers each paste the text material in clean format. The raw text content (tweets, Facebook comments, Reddit comments) is put in separate sheets for each candidate.
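
As a small illustration of the last point, a minimal sketch of structuring the collected sample per candidate and platform; the candidates, platforms and comments are invented.

```python
# Illustrative sketch: pool raw comments per candidate and per source platform,
# dropping exact duplicates, so that later sentiment/likelihood coding has a clean base.
from collections import defaultdict

raw_rows = [
    ("candidate_a", "twitter", "amazing performance, definitely voting"),
    ("candidate_a", "reddit", "not convinced by the song choice"),
    ("candidate_b", "twitter", "best voice in the competition"),
    ("candidate_a", "twitter", "amazing performance, definitely voting"),  # duplicate
]

sample = defaultdict(set)
for candidate, platform, text in raw_rows:
    sample[(candidate, platform)].add(text)

for (candidate, platform), texts in sample.items():
    print(candidate, platform, len(texts), "unique comments")
```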

II. Data analysis

  1. Cluster visualization (defining clusters, visualizing their respective sizes (plotting the number of likely voters), breakdown by source platform and other potential factors)
  2. Manual training (classifying the sentiment, or “likelihood to vote”)
  3. Machine classification (calculating the number of likely voters)

In every statistical analysis, the starting point should be visualizing the data. This gives an aggregate "helicopter view" of the situation. Such a snapshot is also useful for demonstrating the results to the end user, letting the data speak for itself. Candidates are bubbles in the chart, their sizes proportional to the number of calculated likely voters. The data could be broken down by source platform, or by other factors, using the candidate as the point of gravity for the cluster.

Likelihood to vote could be classified on a scale rather than as a binary. That is, instead of saying "sentiment is positive: YES/NO", we could ask "How likely is the person to vote?", which is the same as asking how enthusiastic or engaged he or she is. A scale is therefore better, e.g. ranging from -5 (definitely not voting for this candidate) to +5 (definitely voting for this candidate). The manual training, which could also be done with the help of a crowd, helps the machine classifier improve its accuracy on the go. Based on the training data, it would generalize the classification to all material. The material is then bucketed so that each candidate is evaluated separately and the number of likely voters can be calculated. It is possible that the machine classifier could benefit from training input from both candidates, inasmuch as the language showing positive and negative engagement is not significantly different.
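
A minimal sketch of the manual-training-plus-machine-classification idea described above: a few hand-scored comments on the -5 to +5 scale fit a simple text regressor, which then scores unlabeled comments so likely voters can be counted. The texts, scores and threshold are invented for illustration.

```python
# Illustrative sketch: learn the -5..+5 "likelihood to vote" scale from a few
# manually scored comments, score the rest, and count likely voters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

labeled = [
    ("I am definitely voting for her, amazing", 5),
    ("great singer, will probably vote", 3),
    ("not sure she deserves my vote", -1),
    ("never voting for this candidate, awful", -5),
]
unlabeled = ["amazing voice, voting for sure", "awful song, not voting"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform([text for text, _ in labeled])
model = Ridge().fit(X_train, [score for _, score in labeled])

scores = model.predict(vectorizer.transform(unlabeled))
likely_voters = sum(1 for s in scores if s > 2)  # threshold is an arbitrary choice
print(list(zip(unlabeled, scores.round(1))), likely_voters)
```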

It is important to note that negative sentiment does not really matter. What we are interested in is the number of likely voters. This is because of election dynamics: it does not matter how poor a candidate's aggregate sentiment is, i.e. the ratio between haters and sympathizers, as long as his or her number of likely voters is higher than that of the competition. This effect was evident in the recent US presidential election.

The crucial thing is to keep the process alive during the whole election or competition period. There is no point at which it becomes certain that one candidate will lose and the other will win, although the divide can become substantial and thereby increase the accuracy of the prediction.

III. Presentation of results

  • constantly updating feed (à la Facebook video stream)
  • cluster visualization
  • search trend widget (source: Google Trends)
  • live updating predictions (manual training –> machine model)

The results could be shown to the end user in the form of a dashboard. A search-trend graph and the above-mentioned cluster visualization could be viable components. In addition, it would be interesting to see the count of likely voters evolving over time, in such a way that it, along with the visualization, could be "played back" to examine the development over time; in other words, an interactive visualization. As noted, the prediction, or the count of likely votes, should update in real time as a result of combined human-machine work.

Conclusion and discussion

The idea behind developing more agile methods for using social media data to predict contest outcomes is that the accuracy of the prediction depends on the choice of indicators rather than the finesse of the method. For example, complex Bayesian models falsely predicted Hillary Clinton would win the election. It's not that the models were poorly built; they just used the wrong indicators, namely polling data. This is the usual case of "garbage in, garbage out", and it shows that the choice of indicators is more important than the technical features of the predictive model.

The choice of indicators should be based on their predictive power, and although I don't have strict evidence for it, it intuitively makes sense that social media engagement is a stronger indicator in many instances than survey data, because it is based on actual preferences instead of stated preferences. Social scientists know from the long tradition of survey research that there is a myriad of social effects reducing the reliability of the data (e.g., social desirability bias). Those, I would argue, are a much smaller issue in social media engagement data.

However, to be fair, there can be issues of bias in social media engagement data. The major concern is the low participation rate: a common heuristic is that 1/10 of participants actually contribute by writing, while the other 9/10 are readers whose real thoughts remain unknown. It is then a question of how well the vocal minority reflects the opinion of the silent majority. In some cases this matters less for competitions, namely when the overall voting share remains low. For example, if turnout is 60%, it is relatively much more important to mobilize the active base than if voting were close to 100%, where one would need near-universal acceptance.

Another issue is non-representative sampling. This is a concern when the voting takes place offline and the online data does not accurately reflect the voting of those who do not express themselves online. However, as social media participation is constantly increasing, this becomes less of a problem. In addition, compared to other methods of data collection (apart from stratified polling, perhaps), social media is likely to give a good result for competitive predictions because of the political nature of such contests. People who strongly support a candidate are more likely to be vocal about it, and the channel for voicing their opinion is social media.

It is evident that the value of social media engagement as a predictor is currently underestimated, as shown by the large emphasis put on political polls and the virtually non-existent discussion of social media data. As a direct consequence, those who are able to leverage social media data in the proper way will gain a competitive advantage, be it in betting markets or in any other context where prediction accuracy plays a key role. The prediction work will remain a hybrid effort by human and machine.

On complexity of explaining business failure

Introduction

During the research period for my dissertation on startup failures, I realized there are multiple layers of failure factors (or, in reverse, success factors) associated with any given company.

These are:

  1. generic business problems (e.g., cash-flow)
  2. individual-level problems (e.g., personal chemistry)
  3. company type problems (e.g., lack of funding for startups)
  4. business model problems (e.g., chicken-and-egg for platforms)

Only if you combine these multiple layers, or perspectives, can you understand why one business venture fails and another one succeeds. However, it is also a relative and interpretative task; I would argue there can be no objective dominant explanation, because failure as an outcome is always a combination of reasons and therefore cannot be reduced to simple explanations.

A part of the reason for the complexity is the existence of parallel root causes.

For example,

  • A company can be said to have failed because it ran out of money.
  • However, why did it run out of money? Because customers would not buy.
  • Why didn’t they buy? Because the product was bad.
  • Why was the product bad? Because the team failed to recognize true need in the market.
  • Why did they fail to recognize it? They lacked such competence.
  • Why did they lack the competence? Because they had not enough funding to acquire it.

Alas! We ended up with a circular argument. That can happen with any failure explanation, as can arriving at a different root cause. In a team of many, while also considering several stakeholders, it is common that people's explanations of cause and effect vary a great deal. It is simply a feature of social reality that we have a hard time finding unambiguity.

Conclusion

In general, it is hard to dissect cause and effect. Human beings are inclined to form narratives where they choose a dominant explanation and discard others. By acknowledging a multi-layered view on failure, one can examine a business case by applying different lenses one after another. This includes interviewing different stakeholder groups and understanding multiple perspectives ranging from individual to structural issues.

There are no easy answers as to why a particular company succeeds or fails, even though the human mind and various success stories would lead you to believe so!

3 theses on selling expert services in digital marketing (and why not other fields too)

I am definitely not a sales professional. I'm not, because I'm an introverted and reserved type and, to be honest, I don't enjoy the company of very many people.

Despite these traits, the CEO of Konvertigo (www.konvertigo.io) claimed that I can "convince customers with expertise". Be that as it may, I at least follow a few basic principles that I have found to work in sales situations. They are:

  1. Self-confidence – this includes, for example, being able to communicate numbers unambiguously (say, that a 40% bounce rate is fairly good, or that a typical ecommerce conversion rate is 1-2%) and concepts (in one meeting we went through "instant gratification" and "onboarding", which the customer found interesting).

Unambiguity is important: you should avoid saying "it depends" or "on the other hand", or otherwise complicating things. The goal of a sales meeting should be to simplify the customer's complex situation so that it can be dealt with productively. This involves asking the right questions, which should be easy for an expert in terms of substance, yet people are often afraid to ask or interrupt. Don't be afraid to ask or interrupt! Self-confidence comes automatically with experience. I at least don't consciously try to be confident: in social situations I'm insecure, and at school I was always nervous. But I have noticed that it is easy to speak confidently about things I know and am passionate about.

  2. Real value – you should never force a sale or propose something you don't believe in. I'm perfectly happy if a meeting ends with "ah, we actually have nothing to offer you". Or well, maybe not entirely happy, because already before the meeting you should have a sense that there is a chance to create real value, and being proven wrong exposes weak judgment. In any case, you always have to find a genuine opportunity to create benefit for the customer. This matters because the company's goal is to create customer relationships, and a customer relationship (or any relationship) can only last when it continuously benefits both parties. Sooner or later, relationships that do not create real value unravel; on the surface the reason can be anything, but the root cause is the lack of real value.

Even though in online marketing, for example, it is relatively easy to hide the real value and report numbers that paint a sugar-coated picture of the results, every expert must above all be honest with themselves. Otherwise you can't look in the mirror in the morning, or even leave home for work; you just end up doing something that creates no genuine benefit for anyone. If you are honest with yourself, it is easy to be honest with the customer too. What I mean is that real value, once found, is easy to communicate when you believe in it yourself.

  3. Co-creation – how is that real value found? In expert services it is found in interaction with the customer: by asking, answering, and analyzing. The seller can have a prior idea of what it might be, but the real picture only forms through interaction. This is why an expert should always be present in a sales meeting.

There can be a so-called hybrid team, where alongside the expert there is an "actual salesperson" who is skilled at closing and negotiation, but sometimes the expert can handle these tasks as well. At best, the actual salesperson can be replaced, but the expert cannot. Co-creation is also intertwined with productization; a company may imagine that good productization is enough to replace the expert's presence, but the customer's needs may still require significant adaptation. It is impossible to productize everything in advance.

In summary, the theses are:

  • Know what you're talking about (leads to self-confidence and, through that, to closing deals)
  • Always sell and deliver real value (leads to a lasting customer relationship)
  • Don't imagine you know the customer's needs before you actually know them.

There are surely many other theses that could be attached to selling expert services. These were the first ones that came to my mind. Feel free to comment if this post sparks any thoughts!

Buying? How to determine the offer price for a website

Introduction

A few years back I was considering buying a website. In the end, I didn't make the offer, largely because I couldn't figure out how to calculate the offer price in a plausible way. Since then, I've gained a bit more experience in estimating figures in other contexts, and I have participated in some M&A discussions in the ecommerce field. But today, while cleaning my inbox, I happened to read that old email from many years ago and thought of sharing some thoughts on the topic, hopefully as a slightly wiser person!

Basic figures

If you are planning to buy a website and are thinking about the offer price, you should know some basic figures for the website:

ARPU, or average revenue per user

If there is none, you have to estimate the earning potential. If the monetization model is advertising, find some stats on average CPMs in the industry. If it's freemium, consider the average revenue per premium user as well as the conversion rate from free to paid (again, you can find some industry averages).

Number of users/visitors

This is easy to get from analytics software.

Revenue

Revenue, or revenue potential (if there is none at the moment), can be calculated by multiplying the two previous figures. This way you move from unit metrics to aggregate numbers.

Profit

You also need to consider the cost of maintenance, marketing, and other actions needed to keep the site running and growing. Deduct those from the revenue to get profit. If you want faster growth, you need to factor in an investment for that; although it's not exactly part of the offer calculation, it still needs to be considered in the overall plan for making money with the website.

Calculating the offer price

Then, to determine the offer price, you multiply the profit by a number of time units, e.g. months or years. This figure is like a line in the sand; you can also think about it from the seller's perspective: how many years or months of profit would he want to recoup in order to be willing to sell?
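
Putting the figures above together, a minimal worked example (all numbers are invented):

```python
# Illustrative only: ARPU x users -> revenue, minus running costs -> profit,
# then profit x a payback period agreed in the negotiation -> offer price.
arpu_per_month = 0.05          # e.g. advertising revenue per visitor
visitors_per_month = 40_000
monthly_costs = 800            # hosting, maintenance, content

monthly_revenue = arpu_per_month * visitors_per_month      # 2000
monthly_profit = monthly_revenue - monthly_costs           # 1200
payback_months = 24                                        # the "line in the sand"
offer_price = monthly_profit * payback_months              # 28800

print(monthly_revenue, monthly_profit, offer_price)
```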

As an investor, your best break can be found when the profit is low but the revenue potential, the number of visitors and visitor loyalty are high. High revenue potential means there is likely to be a realistic monetization model; because it has not been applied yet, one can negotiate a good price if the seller is willing to let go of the website. Loyalty, manifested in a high rate of returning visitors, indicates that the website provides real value for its visitors instead of relying, e.g., on spammy tactics to lure in casual browsers. In the end, the quality of traffic matters a lot in whatever business model you apply.

You should also consider the stability of the figures, in particular the historical growth rate. With the historical growth rate, you can project the development of traffic and revenue into the future. At this point, be realistic about what it takes to uphold the growth rate, and be thorough in asking the current owner in great detail what he has done so far and why. This information is highly valuable.

Because there is a lot of imprecision in coming up with the aforementioned figures, you would be wise to factor in risk at every stage of the calculation. Convey the risk to the seller in a credible way as well, so that he sees it won't be easy for you to get your money back. This is a negotiation tactic, but also the real state of affairs in many cases.

Closing remarks

I don’t include any “goodwill” on things like brand or design in the calculation, because I think those are irrelevant for the price determination. All sunk costs that don’t serve the revenue potential are pretty much redundant — sticking to real numbers and, when they are absent, realistic estimates — is a much better way of determining the price of a website.

How to measure media bias?

The mass media are old, and so is their bias.

Introduction

Media bias is under heavy discussion at the moment, especially in relation to the ongoing presidential election in the US. However, the quality of the discussion is not what it should be; there should be objective analysis of the role of the media, but instead most comments are politically motivated accusations or denials. This article aims to be objective, discussing the measurement of media bias; that is, how could we identify whether a particular media outlet is biased or not? The author feels there are no generally acknowledged measures for this, so it is easy to claim or deny bias without factual validation. Essentially, this erodes the quality of the discussion, leading only to a war of opinions. Second, without such measures, both the media and the general public are unable to monitor the fairness of coverage.

Why is media fairness important?

Fairness of the media is important for one main reason: the media have a strong influence on public opinion. In other words, journalists have great power, and with great power comes great responsibility. The existence of bias leads to different standards of coverage depending on the topic being reported; in other words, information is used to portray a selective view of the world. This is analogous to confirmation bias: a person wants to prove a certain point, so he or she only acknowledges evidence supporting that point. Such behavior comes very easily to human beings, which is why journalists should be extra cautious about letting their own opinions influence the content of their reportage.

In addition to being an individual problem, media bias can also be understood as a systemic problem. This arises through 1) official guidelines and 2) informal groupthink. First, official guidelines mean that the opinions, beliefs or worldviews of a particular media outlet are diffused down the organization: the editorial board communicates its official stance ("we, as a media outlet, support political candidate X"), which individual reporters then adopt as their ethos. When the media outlet itself, or the surrounding media industry as a whole, absorbs a view, there is a tendency to silence the dissidents. This, again, can be reduced to elementary human psychology, known as conformity bias or groupthink: because others in your reference group accept a certain viewpoint, you are more likely to accept it as well due to social pressure. The informal dynamics are even more dangerous to objective reporting than the official guidelines, because they are subtle and implicit by nature. In other words, journalists may not be aware of their bias and simply consider their worldview "normal", while arguments opposing it are classified as wrong and harmful.

Finally, media fairness is important because of its larger implications for information sources and the actions citizens take based on the information they are exposed to. It is in society's best interest that people turn to legitimate and trustworthy sources of information, as opposed to unofficial, rogue sources that can spread misinformation or disinformation. However, when the media become biased, they lose their legitimacy and become discredited; as a form of reactance to the biased stories, citizens turn to alternative sources of information. The problem is that these sources may not be trustworthy at all. Therefore, by waiving their journalistic ethics, the mass media end up on par with all other information sources; in a word, they lose their credibility. The lack of credible sources of information leads to a myriad of problems for society, such as distrust in the government, civil unrest or other forms of action people take based on the information they receive. Under such circumstances, the "echo chamber" problem is reinforced: individuals feel free to select their sources according to their own beliefs instead of facts. After all, if all information is biased, what does it matter which one you choose to believe in?

How to measure media bias?

Overview

While it may not be difficult to define media bias at a general level, it may be difficult to observe an instance of bias in a unanimously acceptable way. That is where commonly accepted measures could be of some help. To come up with such measures, we can start by defining the information elements that can be retrieved for objectivity analysis. Then, we should consider how they can best be analyzed to determine whether a particular media outlet is biased.

In other words, what information do we have? We can observe two sources: 1) the media itself, and 2) all other empirical observations (e.g., events taking place). Notice that observing the world only through the media would give an incomplete picture; we also draw a lot from our own experiences and surroundings. By observing the stories created by the media, we know what is being reported and what is not. By observing things around us (apart from the media), we know what is happening and what is not. By combining these dimensions, we can derive

  1. what is being reported (and happens)
  2. what is being reported (but does not happen)
  3. what is not being reported (but happens), and
  4. what is not being reported (but does not happen).

Numbers 2 and 4 are not deemed relevant for this inquiry, but 1 and 3 are: namely, the choice of information, i.e., what is being reported and what is left out of reporting. Hence, this is the first dimension of our measurement framework.

1. Choice of information

  • topic inclusion — what topics are reported (themes –> identify, classify, count)
  • topic exclusion — what topics are not reported (reference –> define, classify, count)
  • story inclusion — what is included in the reportage (themes –> identify, classify, count)
  • story exclusion — what is left out of the reportage (reference –> define, classify, count)
  • story frequency — how many times a story is repeated (count)

This dimension measures what is being talked about in the media. It measures inclusion, exclusion and frequency to determine what information the media disseminates. The two levels are topics and stories; both have themes that can be identified, after which material is classified into them and counted to get an understanding of the coverage. Measuring exclusion works in the same way, except that the analyst needs a frame of reference against which to compare the themes found. For example, if the frame of reference contains "Education" and the topics found in the material do not include education, it can be concluded that the media did not cover education during the sampling period. Besides themes, the reference can include polarity, so one can examine whether opposing views are given equal coverage. Finally, the frequency of stories measures the media's emphasis, reflecting the choice of information.
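
As an illustration of the inclusion/exclusion logic, a minimal sketch comparing the topics found in a coded sample of stories against an analyst-defined reference frame; the topics and counts are invented.

```python
# Illustrative sketch: count how often each reference topic appears in a coded
# sample of stories; topics with zero counts are candidates for "exclusion".
from collections import Counter

reference_frame = {"economy", "education", "immigration", "foreign policy"}
coded_stories = ["economy", "immigration", "economy", "foreign policy", "economy"]

coverage = Counter(coded_stories)
for topic in sorted(reference_frame):
    print(topic, coverage.get(topic, 0))
# Here "education" gets 0 mentions, i.e. it was not covered in the sampled period.
```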

Because all information is selected from a close-to-infinite pool of potential stories, one could argue that all reportage is inherently biased. Indeed, there may be no universal criteria that would justify reporting Topic A over Topic B. However, measurement helps form a clearer picture of a) what the media as a whole are reporting, and b) what each individual media outlet reports in comparison to others. A member of the audience is then better informed about what themes the media have chosen to report. This type of helicopter view can enhance the ability to detect a biased choice of information, either by a particular media outlet or by the media as a whole.

The question of information choice is pertinent to media bias, especially in relation to the exclusion of information. A biased reporter can defend himself by arguing "If I'm biased, show me where!" But bias is not the same as inaccuracy: a biased story can still be accurate; it may simply leave some critical information out. The emphasis of a certain piece of information at the expense of another is a clear form of bias. Because not every piece of information can be included in a story, something is necessarily left out, and therefore there is a temptation to favor a certain storyline. However, this concern can be neutralized by introducing balance: for a given topic, let there be an equal effort to exhibit positive and negative evidence, and in terms of exclusion, let an equal amount of information be discarded from both extremes, if need be.

In addition to measuring what is being reported, we also need to consider how it is being reported. This is the second dimension of the measurement framework, dealing with the formulation of information.

2. Formulation of information

  • IN INTERVIEWS: question formulation — are the questions reporters are asking neutral or biased in terms of substance (identify, classify, count)
  • IN REPORTS: message formulation — are the paragraphs/sentences in reportage neutral or biased in terms of substance (classify, count)
  • IN INTERVIEWS: tone — is the tone in which reporters ask the questions neutral or biased (classify, count)
  • IN REPORTS: tone — are the paragraphs/sentences in reportage neutral or biased in terms of tone (classify, count)
  • loaded headlines (identify, count)
  • loaded vocabulary (identify, count)
  • general sentiment towards key objects (identify, classify: pos/neg/neutral)

This dimension measures how the media report on the topics they have chosen. It is a form of content analysis, involving both qualitative and quantitative features. The measures cover interview-type settings as well as various forms of reportage, such as newspaper articles and television coverage. The content can be broken down into pieces (questions, paragraphs, sentences) and their objectivity evaluated based on both substance and tone. An example of bias in substance would be presenting an opinion as a fact, or taking a piece of information out of context. An example of biased tone would be using negative or positive adjectives in relation to selected objects (e.g., presidential candidates).

Presenting loaded headlines and text as a percentage of total observations gives an indication of how biased the content is. In addition, the analyst can evaluate the general sentiment the reportage portrays of key objects; this involves first identifying the key objects of the story and then classifying their treatment on a three-point scale (positive, negative, neutral).

I mentioned earlier that agreeing on an observation of bias is an issue. This is due to the interpretative nature of these measures; they involve a degree of subjectivity, which is generally not considered a good characteristic of a measure. Counting frequencies (e.g., how often a word was mentioned) is not susceptible to interpretation, but judging the tone of the reporter is. Yet those are the kinds of cues that reveal a bias, so they should be incorporated into the measurement framework. Perhaps we can draw an analogy to any form of research here: it is always up to the integrity of the analyst to draw conclusions. Even studies that are said to have high reliability by design can be reported in a biased way, e.g. by re-framing the original hypotheses. Ultimately, the application of measurement in the social sciences rests on the shoulders of the researcher. Any well-trained, committed researcher is more likely to follow the guideline of objectivity than not, but of course this cannot be guaranteed. The explication of how the method was applied should reveal to an outsider the degree of trustworthiness of the study, although the evaluation requires a degree of sophistication. Finally, using several analysts reduces individual bias in interpreting the content; inter-rater agreement can then be calculated with Cohen's kappa or similar metrics.
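
For the last point, a minimal sketch of computing inter-rater agreement with Cohen's kappa via scikit-learn; the two analysts' codings are invented.

```python
# Illustrative sketch: agreement between two analysts coding the same ten
# sentences as biased (1) or neutral (0), summarized with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
rater_b = [1, 0, 1, 1, 1, 0, 0, 0, 0, 1]

print(cohen_kappa_score(rater_a, rater_b))  # 1.0 = perfect agreement, 0 = chance level
```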

After assessing the objectivity of the content, we turn to the source. Measurement of source credibility is important both for validating prior findings and for understanding why the (potential) bias takes place.

3. Source credibility

  • individual political views (identify)
  • organizational political affiliation (identify)
  • reputation (sample)

This dimension measures why the media outlet reports the way it does. If individual and organizational affiliations are not made clear in the reportage, the analyst needs to do some work to discover them. In addition, the audience has formed a perception of bias based on historical exposure to the media outlet; running a properly sampled survey can provide supporting information for the conclusions of the objectivity study.

How to prevent media bias?

The work of journalists is sometimes compared to that of scientists: both professions require curiosity, criticality, the ability to observe, and objectivity. However, whereas scientists mostly report dull findings, reporters are under much more pressure to write sexy, entertaining stories. This leads to the problem of sense-making, i.e., reporters create a coherent story with a clear message instead of showing the messy reality. The sense-making bias in itself favors media bias, because creating a narrative forces one to be selective about what to include and what to exclude. As long as there is this desire for simple narratives, coverage of complex topics cannot be entirely objective. We may, however, mitigate this effect by upholding certain principles.

I suggest four principles for the media to uphold in their coverage of topics.

  • criticality
  • balance
  • objectivity
  • independence

First, the media should take a critical stance toward the object of their reportage. Instead of accepting the pieces of information they receive as truth, they should push to ask hard questions. But that should be done in a balanced way: for example, in a presidential race, both candidates should get an equal amount of "tough" questions. Furthermore, journalists should not absorb any "truths", beliefs or presumptions that affect their treatment of a topic. Since every journalist is a human being, this requirement is quite idealistic, but the effect of personal preferences, or those imposed by the social environment, should in any case be mitigated. The goal of objectivity should be cherished, even if the outcome conflicts with one's personal beliefs. Finally, the media should be independent: not dictated to by any interest group, public or private, on what to report, and not expressing or committing to a political affiliation. Much as church and state are kept separate in line with Locke's ideas on toleration and Jefferson's constitutional thinking, the press and the state should be separated. This rule should apply to both publicly and privately funded media outlets.

Conclusion

The status of the media is precious. They have enormous power over the opinions of citizens. However, this is conditional power: should they lose objectivity, they would also lose their influence, as people turn to alternative sources of information. I have argued that a major root cause of the problem is the media's inability to detect their own bias. Through better detection and measurement of bias, corrective action can be taken. But since those corrective actions are conditioned on a willingness to be objective, a willingness many media outlets are not signalling, the measurement in itself is not adequate for solving the larger problem. At a larger scale, I have proposed a separation of media and politics, which would prevent by law any media outlet from taking a political side. Such legislation is likely to increase objectivity and decrease the harmful polarization that the current partisan media environment constantly feeds.

Overall, there should be serious discussion about what the role of the media in society should be. In addition, attention should be paid to journalistic education and the upholding of journalistic ethics. If the industry is not able to monitor itself, it falls upon society to introduce regulation ensuring that the media do not abuse their power but remain objective. I have suggested that the media and related stakeholders provide information on potential bias. I have also suggested new measures for bias that consider both the inclusion and exclusion of information. The measurement of inclusion can be done by analyzing news stories for common keywords and themes. If the analyst has an a priori framework of topics/themes/stories he or she considers as the reference, it can then be concluded how well the media cover those themes by classifying the material accordingly. Such analysis would also reveal what is not being reported, an important distinction that is often not taken into account.

Defining SMQs: Strategic Marketing Questions

Introduction

Too often, marketing is thought of as advertising and nothing more. However, Levitt (1960) and Kotler (1970) already established that marketing is a strategic priority. Many organizations, perhaps due to the lack of marketers on their executive boards, have since forgotten this imperative.

Another reason for the decreased importance of marketing is that marketing scholars have pushed the idea that "everything is marketing", which leads to the decay of the marketing concept: if it is everything, it is nothing.

Nevertheless, if we reject the omni-marketing concept and return to a useful way of perceiving marketing, we can observe the linkage between marketing and strategy.

Basic questions

Tania Fowler wrote a great piece on marketing, citing some ideas from Professor Roger Martin's HBR article (2014). Drawing from that article, the basic strategic marketing questions are:

  • Who are our customers? (segmentation)
  • Why do they care about our product? (USPs/value propositions/benefits)
  • How are their needs and desires evolving? (predictive insight)
  • What potential customers exist and why aren’t we reaching them? (market potential)

This is a good start, but we need to expand the list of questions. Borrowing from Osterwalder (2009) and McCarthy (1960), let's apply BMC (the nine dimensions of a business model) and 4P marketing-mix thinking (Product, Place, Promotion, Price).

Business Model Canvas approach

This leads to the following set of questions:

  • What is the problem we are solving?
  • What are our current revenue models? (monetization)
  • How good are they from customer perspective? (consumer behavior)
  • What is our current pricing strategy? (Kotler’s pricing strategies)
  • How suitable is our pricing to customers? (compared to perceived value)
  • How profitable is our current pricing?
  • How competitive is our current pricing?
  • How could our pricing be improved?
  • Where are we distributing the product/solution?
  • Is this where customers buy similar products/solutions?
  • What are our potential revenue models?
  • Who are our potential partners? Why? (nature of win-win)

Basically, each question can be presented as a question of "now" and "future", whereupon we can identify strategic gaps. Strategy is largely about seeing one step ahead; the thing is, foresight should be based on some kind of realism, or else fallacies take the place of rationality. Another point from the marketing and startup literature is that people are not buying products, but solutions (solution-based selling, product-market fit, etc.). Someone said the same thing about brands, but I think solution is more accurate in the strategic context.

Adding competitors and positioning

The major downside of BMC and 4P thinking from a strategic perspective is their oversight of competition. Therefore, borrowing from Ries and Trout (1972) and Porter (1980), we add these questions:

  • Who are our direct competitors? (substitutes)
  • Who are our indirect competitors? (cross-verticality, e.g. Google challenging media companies)
  • How are we different from competitors? (value proposition matrix)
  • Do our differentiating factors truly matter to the customers? (reality check)
  • How do we communicate our main benefits to customers? (message)
  • How is our brand positioned in the minds of the customers? (positioning)
  • Are there other products customers need to solve their problem? What are they? (complements)

Defining the competitive advantage, or critical success factors (CSFs), leads to a natural linkage with resources, as we need to ask what resources we need to execute, and how to acquire and commit those resources (often human capital).

Resource-based view

Therefore, I’m turning to resource-based thinking in asking:

  • What are our current resources?
  • What are the resources we need to be competitive? (VRIN framework)
  • How do we acquire those resources? (recruiting, M&As)
  • How do we commit those resources? (leadership, company culture)

Indeed, company culture is a strategic imperative which is often ignored in strategic decision making. Nowadays, perhaps more than ever, great companies are built on talent and competence. Related strategic management literature deals with dynamic capabilities (e.g., Teece, 2007) and resource-based view (RBV) (e.g., Wernerfelt, 1984). In practice, companies like Facebook and Google do everything possible to attract and retain the brightest minds.

Do not forget profitability

Finally, even the dreaded advertising questions have a strategic nature, relating to customer acquisition and loyalty, as well as ROI with regard to both of these and to our offering. Considering this, we add:

  • How much does it cost to acquire a new customer?
  • What are the best channels to acquire new customers?
  • Given the customer acquisition cost (CAC) and customer lifetime value (CLV), are we profitable?
  • How profitable is each product/product category? (BCG matrix)
  • How can we get customers to make repeat purchases? (cross-selling, upselling)
  • What are the best channels to encourage repeat purchase?
  • How do we encourage customer loyalty?

As you can see, these questions are also of a strategic nature, because they are directly linked to revenue and the customer. After all, business is about creating customers, as Peter Drucker stated. However, Drucker also maintained that a business with no repeat customers is no business at all. Thus, marketing often focuses on customer acquisition and loyalty.
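
As a quick illustration of the CAC/CLV question above (all figures invented):

```python
# Illustrative only: a customer relationship is profitable when CLV exceeds CAC;
# the CLV-to-CAC ratio is a common shorthand for acquisition efficiency.
cac = 120.0   # cost to acquire a new customer
clv = 300.0   # expected lifetime value of that customer

print(clv > cac, round(clv / cac, 2))  # True, ratio 2.5
```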

The full list of strategic marketing questions

Here are the questions in one list:

  1. Who are our customers? (segmentation)
  2. Why do they care about our product? (USPs/value propositions/benefits)
  3. How are their needs and desires evolving? (predictive insight)
  4. What potential customers exist and why aren’t we reaching them? (market potential)
  5. What is the problem we are solving?
  6. What are our current revenue models? (monetization)
  7. How good are they from customer perspective? (consumer behavior)
  8. What is our current pricing strategy? (Kotler’s pricing strategies)
  9. How suitable is our pricing to customers? (compared to perceived value)
  10. How profitable is our current pricing?
  11. How competitive is our current pricing?
  12. How could our pricing be improved?
  13. Where are we distributing the product/solution?
  14. Is this where customers buy similar products/solutions?
  15. What are our potential revenue models?
  16. Who are our potential partners? Why? (nature of win-win)
  17. Who are our direct competitors? (substitutes)
  18. Who are our indirect competitors? (cross-verticality, e.g. Google challenging media companies)
  19. How are we different from competitors? (value proposition matrix)
  20. Do our differentiating factors truly matter to the customers? (reality check)
  21. How do we communicate our main benefits to customers? (message)
  22. How is our brand positioned in the minds of the customers? (positioning)
  23. Are there other products customers need to solve their problem? What are they? (complements)
  24. What are our current resources?
  25. What are the resources we need to be competitive? (VRIN framework)
  26. How do we acquire those resources? (recruiting, M&As)
  27. How do we commit those resources? (leadership, company culture)
  28. How much does it cost to acquire a new customer?
  29. What are the best channels to acquire new customers?
  30. Given the customer acquisition cost (CAC) and customer lifetime value (CLV), are we profitable?
  31. How profitable is each product/product category? (BCG matrix)
  32. How can we get customers to make repeat purchases? (cross-selling, upselling)
  33. What are the best channels to encourage repeat purchase?
  34. How do we encourage customer loyalty?

The list should be universally applicable to all companies. But filling in the list is not an "oh, let me guess" type of exercise. As you can see, answering many of the questions requires customer and competitor insight that, as the startup guru Steve Blank says, needs to be retrieved by getting out of the building. Those activities are time-consuming and costly. But strategic planning serves a purpose only if the base information is accurate. So don't fall prey to the guesswork fallacy.

Implementing the list

One of the most important things in strategic planning is iteration: it's not "set and forget", but "rinse and repeat". So asking these questions should be repeated from time to time. However, people tend to forget repetition. That's why corporations often use consultants: they need fresh eyes to spot opportunities they're missing due to organizational myopia.

Moreover, communicating the answers across the organization is crucial. Having a shared vision ensures each atomic decision maker is able to act in the best possible way, enabling adaptive or emergent strategy as opposed to planned strategy (Mintzberg, 1978). For this to truly work, customer insight needs to be internalized by everyone in the organization. In other words, strategic information needs to be made transparent (which it is not, in most organizations).

And for the information to translate into action, the organization should be built to be nimble: empowering people, distributing power and reducing unnecessary hierarchy. People are not stupid: give them a vision and your trust, and they will work for a common cause. Keep them in silos and treat them as subordinates, and they become passive employees instead of psychological owners.

Concluding remarks

We can say that marketing is a strategic priority, or that strategic planning depends on the marketing function. Either way, marketing questions are strategic questions. In fact, strategic management and strategic marketing are highly overlapping concepts. Considering both research and practice, their division can be seen as artificial and even counter-productive; for example, strategic management scholars and marketing scholars may speak of the same things under different names. The same applies to the relationship between CEOs and marketing executives. Joining forces reduces redundancy and leads to a better future for strategic decision-making.

Meaningless marketing

I’d say 70% of marketing campaigns have little to no real effect. Most certainly they don’t have a positive return in hard currency.

Yet most marketers spend their time running around, planning all sorts of campaigns and competitions people couldn't care less about. They are professional producers of spam, when in fact they should be focusing on the core of the business: understanding why customers buy, how they could buy more, what sort of products we should make, how the business model could be improved, and so on. The wider concept of marketing deals with navigating the current and future market; it is not about making people buy stuff they don't need.

To a great extent, I blame marketing education. In academia, we don't really get the real concept of marketing into our students' minds. Even students majoring in marketing don't truly "get" that marketing is not the same as advertising; too often, they have a narrow understanding of it and are then easily molded into the perverse industry standards, ending up in the purgatory of meaningless campaigns while convincing themselves they're doing something of real value.

But marketing is not about campaigns, and it sure as hell is not about "creating Facebook competitions". Rather, marketing is a process of continuous improvement of the business. Yes, this includes campaigns, because business cycles in many industries follow seasonal patterns and we need to communicate outwards. But marketing has so much more to give to strategy, if only marketers would stop wasting their time and instead focus on the essentials.

Now, what I wrote here is based only on anecdotal evidence arising from personal observations. It would be interesting, and indeed of great importance, to find out whether it's correct that most marketers are wasting their time on petty campaigns instead of the big picture. This could be done, for example, by conducting a study that answers the questions:

  1. What do marketers do with their time?
  2. How does that contribute to the bottom line?
  3. Why? (That is, what is the real value created for a) the customer and b) the organization)
  4. How is the value being measured and defended inside the organization?

If nothing else, every marketer should ask themselves those questions.

The LTO model of conversion optimization

Conversion optimization goes after the customer's wallet.

Introduction

Conversion optimization has been part of digital marketing from the very beginning. I have studied and taught conversion optimization since 2012. Yet many online stores and websites still neglect its basic principles.

What is conversion optimization? I define it as follows:

Conversion optimization is systematic development work aimed at increasing the likelihood of purchase.

The aim is thus to develop the website and the other factors that influence purchasing. Being systematic means that a) a plan has been made for the development work, b) it is followed step by step, and c) decisions are made based on data.

Note: conversion optimization is therefore not just about improving the website; it covers all the factors that affect the likelihood of conversion, including products, payment methods, and so on.

In plain language, conversion optimization means making sales more effective, although a conversion can of course be some goal other than a sale.

What are the basic principles of conversion optimization?

Let's approach this through the LTO model. Its parts are:

  1. Finding (Löytäminen)
  2. Knowing (Tietäminen)
  3. Buying (Ostaminen)

These three are the basic elements of the customer's path from arriving at the website to purchase. First they need to find a suitable product, then learn about it, and finally buy it. Each step must be easy and contain all the necessary information.

1. Finding

For finding, the important things are 1) a working search, 2) working product browsing, and 3) comprehensive navigation. The best search features include autocomplete and return results for natural-language queries, meaning people can type whatever they want into the search box and get results based on the product catalogue.

An example of autocomplete search:

[Image: online store search with autocomplete]

Working product browsing is important for the experience: people enjoy browsing, which is the "window shopping" part of buying online. Products should therefore be attractively present and browsing preferably uninterrupted, which can be implemented with infinite scrolling. In one online store we noticed that visitors leave the page after browsing to the end of the product selection, which speaks precisely to how natural and important the browsing experience is.

An example of infinite scrolling:

[Image: online store with infinite scrolling]

Navigation is present at many levels: on the front page (main menu), on category pages (main menu, side menu) and on product pages (main menu, breadcrumbs). Comprehensive navigation matters because you don't know which page the customer lands on first. Since the landing page depends, for example, on which campaign brought the customer to the site, navigation must always be present and available.

An example of comprehensive navigation:

[Image: online store navigation]

Products must therefore be as findable as possible, which on category pages means two things: 1) filters and 2) automated recommendations.

With filters, the customer can narrow a selection of thousands of items down to their own preferences. The more products there are on offer, the more important good filtering functionality becomes. Automated recommendation takes findability a step further by suggesting suitable products based on the available data. These can be, for example, the most popular products (= most sales, most views) or complementary products related to the product category (= complements). Automated recommendations can also be personalized, in which case first-party data is used to tailor the selection for returning visitors.

Examples of filter systems:

[Image: online store filters]

Here is an example of automated product recommendations:

[Image: automated product recommendations in an online store]
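
As a small illustration of the simplest kind of automated recommendation mentioned above, here is a minimal sketch ranking products by sales popularity; the product ids and orders are invented.

```python
# Illustrative sketch: recommend the products that appear most often in order lines.
from collections import Counter

def most_popular(order_lines: list, top_n: int = 3) -> list:
    """Return the top_n product ids by how many times they appear in order lines."""
    counts = Counter(order_lines)
    return [product for product, _ in counts.most_common(top_n)]

# Example: product "sku-12" has sold the most, so it is recommended first.
orders = ["sku-12", "sku-7", "sku-12", "sku-3", "sku-12", "sku-7"]
print(most_popular(orders))
```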

2. Knowing

Besides the technical features related to finding, there are also informative guide texts that encourage the visitor onward along the conversion path. The role of information is even more important on product pages; its absence is one of the most common mistakes in online selling. There should be so much information available about every product that all possible customer questions are answered in advance. Very few visitors bother to ask for more information by email unless the purchase involves high involvement; most simply don't. In addition to product information, purchase-related information must be present on every page: the key payment and delivery details.

The information must be available in many formats: ideally, a product has a detailed text description, any relevant technical specifications, several high-quality images and, depending on the product, a video. Whether the item being sold is a service or a physical good, all of the above content types can be produced. Many online stores simply fail to do so for one reason or another. It is common for all the money to go into acquiring technology while the content is handled as an afterthought, as if selling were a technical exercise rather than influencing people. In addition to good usability and a layout that supports it, you need interesting and informative content; without content, goods don't move, or they move more slowly than they could.

3. Buying

Buying is the third part of the model. As mentioned, purchase-related information must be "omnipresent", i.e. available at every stage of the conversion path; it should not be hidden in an FAQ section, for example. But beyond information, the substance of buying has to be in order. By substance I mean the actual options: the payment methods must suit the consumer (e.g., installments, mobile payment, online banking, invoice), as must the delivery methods (electronic delivery vs. post) and the delivery time (instant delivery by email, by post in 2 days, or by courier in 1 day). Of course, there is more to buying as well: for example, discounts for buying multiple products, the availability of customer service (e.g., a free phone line, real-time chat support), and loyalty programs.

Conclusion

Conversion optimization is a complex topic, since it covers all the factors that affect the likelihood of purchase. It can be examined from many angles, including the LTO model presented here, which aims to summarize conversion optimization activities under three basic pillars. The phenomenon could also be examined through other models, such as a) a convention model, b) a persuasion model (why, how, what? + psychology) and c) the so-called Korko model. I may write about these models another time, but I hope this article gave you some new ideas!