Problem of averages applies to social media community guidelines

(This post is based on a discussion with a couple of other online hate researchers.)

  1. Given a general policy (community guidelines), it is possible to create specific explanations that cover most cases of violating the policy (example of an explanation: “your message was flagged as containing Islamophobic content”). This rests on the idea that the policy itself is finite: even though cases of Islamophobia may be many, the policy either covers this form of hate speech or it does not. If the general policy itself is lacking, it needs to be fixed first.
  2. The problem of explaining hate speech moderation can be seen as a classification problem, where each explanation is a class. Here, we observed the ground truth problem, which we referred to as the inherent subjectivity of hate speech. In other words, it is not possible to create uncontestable or “immutable” ground truth labels for hate speech.
  3. The solution to this inherent subjectivity can take place at two levels: (a) at the user level, by finetuning/adjusting the hate speech detection algorithm based on user preferences rather than community guidelines, i.e., learning to flag what the user finds offensive rather than defining it a priori. This would make community guidelines obsolete, or at least far less influential (possibly only covering hateful words that cannot be used in a non-offensive way, if such words exist).
    …or, (b) at the community level, where each community (e.g., page, person) within the platform defines its own rules as to what speech is allowed. By joining that community, a user acknowledges those rules. This essentially shifts the community guideline creation from the platform to subcommunities within the platform.
  4. Both (a) and (b) above rely on the notion that a platform’s community guidelines essentially suffer from the problem of averages: the average is good in general, but perfect for nobody. The only way I can see around that is by incorporating user or community features (=preferences) to essentially allow/disallow certain speech in certain communities. Then, users who do not wish to see certain speech simply unfollow the community. This affords the flexibility of creating all kinds of spaces. Simultaneously, I would give more moderation tools to the communities themselves, which, again, I think is a better approach than a global “one size fits all” policy. (A minimal sketch of the user-level approach in 3a follows this list.)
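To make the user-level idea (3a) concrete, here is a minimal sketch assuming a generic off-the-shelf text classifier (scikit-learn here). The class name, feature setup, and threshold are hypothetical choices for illustration, not a proposal for a production moderation system.

```python
# Sketch of user-level moderation (idea 3a): instead of one platform-wide
# classifier, each user trains a small personal model from their own
# "flag" / "allow" feedback. All names here are hypothetical.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

vectorizer = HashingVectorizer(n_features=2**16)

class UserPreferenceModerator:
    def __init__(self):
        self.model = LogisticRegression()
        self.texts, self.labels = [], []  # this user's feedback history

    def give_feedback(self, message: str, offensive: bool) -> None:
        """User marks a message as offensive (flag) or acceptable (allow)."""
        self.texts.append(message)
        self.labels.append(int(offensive))
        if len(set(self.labels)) > 1:  # need both classes before fitting
            self.model.fit(vectorizer.transform(self.texts), self.labels)

    def should_flag(self, message: str, threshold: float = 0.5) -> bool:
        """Flag a message if this user's own model finds it offensive enough."""
        if len(set(self.labels)) < 2:
            return False  # no personal model yet; fall back to a default
        prob = self.model.predict_proba(vectorizer.transform([message]))[0, 1]
        return prob >= threshold

# Each user ends up with their own decision boundary:
alice = UserPreferenceModerator()
alice.give_feedback("you are an idiot", offensive=True)
alice.give_feedback("nice weather today", offensive=False)
print(alice.should_flag("what an idiot"))
```

The design point is that each user’s feedback history produces their own decision boundary, so the same message can be flagged for one user and shown to another, with no global guideline involved.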

Sales attribution in cross-channel retargeting

Digital marketing thought of the day (with Tommi Salenius):

There are two channel types for online ads: “first-channel”, meaning the channel that gets a customer’s attention first. And “second-channel”, where we run retargeting ads to activate the non-converted audience from the first-channel.

There are also two types of consumers: “buy now”, meaning those that buy immediately without the need for activation. And “buy later”, meaning those that require longer time and/or activation to buy.

So, depending on how well the first-channel is able to locate the different consumer types, you might see very different performance results. If the first-channel is good at locating the “buy now” consumers, it will show good performance and the second-channel will show bad performance. The opposite happens if the first-channel is good at locating “buy later” consumers; in this case, the first-channel will appear weak and the second-channel will appear good.

This typology highlights the complexity of attributing campaign performance in cross-channel environments: one channel might do well at locating potential buyers while another channel claims the credit for the sale.
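As a toy illustration, here is a small simulation of the typology above under a last-touch attribution rule. The numbers and the last-touch assumption are mine, for illustration only:

```python
# Toy simulation of the first-channel / second-channel typology. The point
# is only that last-touch attribution rewards different channels depending
# on which consumer type the first-channel happens to reach.
def attribute_sales(n_buy_now: int, n_buy_later: int) -> dict:
    """Last-touch attribution: 'buy now' consumers convert on the
    first-channel; 'buy later' consumers convert only after retargeting,
    so the second-channel gets the credit."""
    return {"first-channel": n_buy_now, "second-channel": n_buy_later}

# Scenario 1: first-channel targeting finds mostly "buy now" consumers.
print(attribute_sales(n_buy_now=80, n_buy_later=20))
# -> first-channel looks strong: {'first-channel': 80, 'second-channel': 20}

# Scenario 2: same audience size, but mostly "buy later" consumers.
print(attribute_sales(n_buy_now=20, n_buy_later=80))
# -> first-channel looks weak even though it sourced every eventual buyer:
#    {'first-channel': 20, 'second-channel': 80}
```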

“It eliminates all the fun.” Automation taking over marketing?

Jon Loomer, a well-respected digital marketer, was interviewed by Andy Gray on Andy’s podcast.

They discussed automation and the impacts it has on the future of digital marketing.

Andy asked what happens when everything becomes standardized by the platform: when the platform sets the click price, chooses the targeting, and even writes the ads.

The logic of the question was that a marketer would no longer have any competitive tools against others within the platform, and it would appear we could not do anything to get ahead of the competition anymore.

Jon’s comment to all this, “it eliminates all the fun”, got my attention.

Eliminating all the fun is an important concern from a marketer’s perspective. One can easily lose the meaning of one’s work in an environment where all creativity, experimentation, and decision making are taken away, and one is left with the role of supporting the algorithm with occasional one-liners that the machine chooses from.

However, from a macro perspective, here are a couple of thoughts about the future.

First, we might enter some form of “perfect market” where supply and demand are matched in perfect alignment with the platform’s vision. Then, if the rules and procedures are the same for all, this can be considered fair (as in: procedural fairness), and the “biggest checkbook” doesn’t always win.

One example is the quality score: it can equalize advertisers by setting the click price based on quality, not willingness to bid the highest.
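As a rough illustration, search ad auctions have classically ranked ads by something like bid × quality score rather than by bid alone. The sketch below uses that textbook simplification; the numbers are invented, and real platforms use more elaborate formulas:

```python
# Simplified version of the quality-score mechanism described above: the
# auction ranks ads by bid * quality rather than bid alone, so a
# high-quality ad can beat a bigger checkbook. This is a textbook
# simplification, not any platform's exact formula.
def rank_ads(ads: list[dict]) -> list[dict]:
    return sorted(ads, key=lambda ad: ad["bid"] * ad["quality"], reverse=True)

ads = [
    {"name": "big brand", "bid": 5.00, "quality": 0.3},
    {"name": "local shop", "bid": 2.00, "quality": 0.9},
]
for ad in rank_ads(ads):
    print(ad["name"], ad["bid"] * ad["quality"])
# local shop wins (1.8 > 1.5) despite bidding less than half as much.
```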

…in the quality score’s case, though, the score becomes obsolete as the platform takes over the quality part of ad creation. But the point remains: there may arise natural differentiation factors. For example, a major brand couldn’t buy “barber shop in [small town x]” keywords because the system would (supposedly) know that the major brand doesn’t have an outlet there, so the small barber shops that do would have a structural advantage in this local bidding example.

The question still remains: how would the winner be determined among the rival small barber shops?

In my opinion, there would need to be some secondary information that serves as a signal for matching user intent with the best possible alternative for that intent: product reviews, website usability, pricing…

That is, the kind of information that affects how likely a user is to buy from Barbershop A vs. Barbershop B: think Google reviews, Core Web Vitals, XML product feed information (or simplified versions of it), opening hours, and so on.

Take the example of a user searching for a barber shop at midnight: is there one open? If so, it wins the competition due to natural factors, not due to “optimization”.

The point is: there will always be natural signals (as in: characteristics appearing outside the ad platform and independent of it) that the ad platform can incorporate in deciding which ad takes precedence. These signals can take away the “gaming” of the platform that we call optimization (at the same time taking away the fun of doing digital marketing), but it is not certain this would leave either the companies using advertising or the users worse off.
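A hypothetical sketch of what such a decision could look like; the signal names, weights, and the scoring function below are all invented for illustration, and a real platform would presumably learn them from data:

```python
# Sketch of ranking rival barber shops by "natural signals" that exist
# outside the ad platform: reviews, site speed, opening hours.
from datetime import datetime

def natural_score(shop: dict, query_time: datetime) -> float:
    open_now = shop["opens"] <= query_time.hour < shop["closes"]
    if not open_now:
        return 0.0  # the midnight example: a closed shop cannot win
    # Invented weights: reviews matter more than site speed here.
    return 0.7 * shop["review_rating"] / 5 + 0.3 * shop["site_speed"]

shops = [
    {"name": "A", "opens": 9, "closes": 18, "review_rating": 4.8, "site_speed": 0.9},
    {"name": "B", "opens": 0, "closes": 24, "review_rating": 4.1, "site_speed": 0.6},
]
midnight = datetime(2021, 6, 1, 0, 0)
winner = max(shops, key=lambda s: natural_score(s, midnight))
print(winner["name"])  # "B": the only shop open at midnight wins
```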

Non-linear growth of value in platform business

The value of aggregation in the ad business (and probably in most other verticals, too) is that 1 impression has zero value, i.e., no advertiser wants to pay for it. 10 impressions also have zero value, and so do 100 impressions. But when we get to hundreds of thousands or millions, the value suddenly spikes from zero to a non-trivial amount. …so, aggregators can buy a lot of small players for nickels and put them together to reach a scale where the value goes from zero to non-trivial. (Another way to frame this is that the value does not grow linearly with the number of impressions.) #platforms #business #aggregation #economics
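A minimal sketch of this threshold effect; the threshold and CPM figures are invented:

```python
# Sketch of the non-linear value curve described above: below some reach
# threshold an audience is worth nothing to advertisers, then value jumps.
def audience_value(impressions: int, threshold: int = 500_000,
                   cpm: float = 2.0) -> float:
    """Value is zero until reach crosses the threshold, then scales."""
    if impressions < threshold:
        return 0.0
    return impressions / 1000 * cpm

small_players = [10, 100, 40_000, 90_000, 400_000]
print(sum(audience_value(n) for n in small_players))  # 0.0: each is worthless alone
print(audience_value(sum(small_players)))             # ~1060: aggregation crosses the threshold
```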

About academic competition and network effects

It’s extremely hard for a small research team to compete against huge labs. It’s a question of network effects: the huge labs have the critical mass of funding, collaborators, momentum, and publication pull to attract the best talent, whether the brightest PhD students or the best postdocs. The rest are left with the “scraps”. It seems cold to say so, but academia is just like any other competitive environment: many have PhDs, but only a select few are able to publish in the top venues year after year. And those people are increasingly concentrated in the top institutions. For a research institution, the choice is pretty clear: either focus on building a critical mass of talent (i.e., big enough teams with big enough funding) or face institutional decline. #research #strategy #competition #talent

Research on Persona Analytics

This year (2021), we have managed to publish two research papers that I consider important milestones in persona analytics research. They are listed below.

Jung, S.-G., Salminen, J., & Jansen, B. J. (2021). Persona Analytics: Implementing Mouse-Tracking for an Interactive Persona System. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI EA ’21). https://doi.org/10.1145/3411763.3451773

Jung, S.-G., Salminen, J., & Jansen, B. J. (2021). Implementing Eye-Tracking for Persona Analytics. ACM Symposium on Eye Tracking Research and Applications (ETRA ’21 Adjunct), 1–4. https://doi.org/10.1145/3450341.3458765

Algorithms that describe a researcher’s mind

(a) Work on the paper “closest to publication”. => downside: can reduce your willingness to solve difficult problems because they are farther from publication. (A tongue-in-cheek sketch of this algorithm follows the list.)

(b) Always switch to a more interesting topic whenever you see one. => downside: you’ll never get anything published (but the upside can be that you learn a lot about different topics, at least superficially).

(c) Define a “larger than life” problem and dedicate your whole life to it. => downside: somebody else might solve it before you, or it may not be solved during your lifetime at all.

(d) Scope the field you are interested in and formulate a “research roadmap” or agenda that consists of several studies. Then conduct the studies sequentially. => downside: very hard to implement if funding is project-based and you cannot secure funding for each study.

(e) Find a niche that “nobody dominates” and focus all your research on that niche. => downside: you will likely end up with few citations, because there aren’t many people working in it.

(f*) Chase the trendy new topic perpetually, always switching your focus according to what seems to interest other people. => downside: you will likely not gain deep knowledge in any field or make a fundamental contribution, since making one tends to require years of work.
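For fun, here is a tongue-in-cheek rendering of algorithm (a) as actual code: a priority queue that always pops the paper closest to publication. The paper names and progress values are invented.

```python
# Algorithm (a) as a max-progress priority queue: always work on the
# paper "closest to publication".
import heapq

papers = [
    ("incremental survey paper", 0.9),   # nearly done
    ("hard theoretical problem", 0.2),   # far from publication
    ("solid empirical study", 0.6),
]

# heapq is a min-heap, so push negative progress to pop the most finished
# paper first. Note how the hard problem is always worked on last: this is
# exactly the downside mentioned above.
queue = [(-progress, title) for title, progress in papers]
heapq.heapify(queue)
while queue:
    progress, title = heapq.heappop(queue)
    print(f"work on: {title} ({-progress:.0%} done)")
```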

==================

I wonder how many researchers would recognize themselves in each of these algorithms?

*NOTE: the difference between (b) and (f) is that in (b), your own interests drive you, whereas in (f), other people’s interests (as you perceive them) drive you.