
Machine learning and Facebook Ads

Last updated on July 5, 2017

Introduction

An important part of machine learning is feature engineering (selection and extraction). This means choosing the variables that improve the model’s performance and discarding those that reduce it. The more impact your variables have on the performance metric, the better. Because the real world is complex, you may start with dozens or even hundreds of variables (= features), but in the end you only want to keep the ones that improve the model’s performance.

While there are algorithmic criteria, such as information gain, to help with selection, expert judgment can help as well, because experts may have prior knowledge of the important inputs. One could therefore interview industry insiders before building a machine-learning model; in effect, the expert opinion narrows down the feature space. This approach has risks, primarily overlooking hidden or non-obvious features and importing expert biases, but it also has clear advantages in separating signal from noise.

Narrowing down the search space is the premise of this article. I did some thinking, and some rapid research, on which features matter for the performance of Facebook advertising. The features listed below could serve as the basis for a machine-learning model, e.g. one that predicts the performance of a given ad.

A. Text features

  • topic
  • sentiment [1]
  • includesPrice
  • includesBrandName
  • wordCount [1]
  • wordLength
  • charCount [1]
  • includesEmojis
  • meaningEmojis
  • includesQuestion
  • includesExclamation
  • includesImperative
  • includesBenefits
  • includesNumbers
  • isSimpleLanguage
  • includesShortURL

B. Images

  • includesText [2]
  • includesPrice
  • includesProduct [2]
  • includesLogo
  • imageObjects
  • includesPeople
  • includesFace
  • includesAnimals
  • imageLocation
  • isStockphoto
  • includesCTA
  • isDarkColorTheme [2]

C. Metrics

  • clicksAll
  • clicksWebsite
  • websitePurchases
  • countLikes

D. Demographics

  • gender
  • age
  • location

E. Misc features

  • adPlacement
  • campaignGoal
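
To make the lists above concrete, the feature groups could be collected into one record per ad, with the metrics in C kept separate as the outcomes a model would predict. The sketch below is a minimal, hypothetical Python schema; the field names simply mirror a subset of the lists above and are assumptions for illustration, not an official data format.

    from dataclasses import dataclass

    @dataclass
    class AdFeatures:
        """One row of inputs for a single ad (groups A, B, D, E)."""
        # A. Text features (subset)
        topic: str
        sentiment: float            # e.g. in [-1.0, 1.0]
        includes_price: bool
        word_count: int
        includes_emojis: bool
        # B. Image features (subset)
        includes_text: bool
        includes_people: bool
        is_stockphoto: bool
        # D. Demographics of the targeted audience
        gender: str
        age: str                    # e.g. a bracket such as "25-34"
        location: str
        # E. Misc features
        ad_placement: str
        campaign_goal: str

    @dataclass
    class AdOutcome:
        """One row of outcomes for the same ad (group C)."""
        clicks_all: int
        clicks_website: int
        website_purchases: int
        count_likes: int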

Application

A simple model could account for only C (which contains both independent and dependent variables) and D (independent variables), while more complex models would also analyze the text and image features with linear or non-linear methods, such as neural networks (shallow or deep learning); a minimal sketch of the simple case follows the list below. Some of these features could also be retrieved with commercial or public APIs, for example:

  • Google Cloud Vision API – for image analysis [3]
  • MonkeyLearn – for text analysis [4]
  • EmojiNet API – for emoji analysis [5]
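
As a minimal illustration of the simple case, the sketch below uses the demographics in D plus the other metrics in C as inputs to predict websitePurchases with scikit-learn. The ads.csv file and its column layout are assumptions made for this example, not an actual Facebook export.

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical export: one row per ad with demographics (D) and metrics (C).
    ads = pd.read_csv("ads.csv")

    categorical = ["gender", "age", "location"]              # D
    numeric = ["clicksAll", "clicksWebsite", "countLikes"]   # C, used as inputs
    X = ads[categorical + numeric]
    y = ads["websitePurchases"]                              # C, used as the target

    model = Pipeline([
        ("encode", ColumnTransformer(
            [("onehot", OneHotEncoder(handle_unknown="ignore"), categorical)],
            remainder="passthrough")),                       # keep numeric metrics as-is
        ("regress", LinearRegression()),
    ])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out ads:", model.score(X_test, y_test))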

Limitations

Ideally, each advertiser has their own model, because models may not generalize well (e.g., different advertisers have different target groups). However, feature selection can still benefit from earlier experience. And given enough data, a model may learn which features apply across different advertisers, achieving a greater degree of generalizability; a rough way to probe this is sketched below.
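
One rough way to probe generalizability is to pool ads from several advertisers and treat the advertiser identity as just another categorical feature: if adding it barely improves the cross-validated score, the shared features already carry signal that transfers across advertisers. The sketch below reuses the hypothetical ads.csv from the previous example, now assumed to also contain an advertiserId column.

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    ads = pd.read_csv("ads.csv")  # hypothetical pooled export covering several advertisers

    def cv_score(columns):
        """Mean cross-validated R^2 of a one-hot + linear model on the given columns."""
        model = Pipeline([
            ("encode", ColumnTransformer(
                [("onehot", OneHotEncoder(handle_unknown="ignore"), columns)])),
            ("regress", LinearRegression()),
        ])
        return cross_val_score(model, ads[columns], ads["websitePurchases"], cv=5).mean()

    shared = ["gender", "age", "location", "adPlacement", "campaignGoal"]
    print("shared features only:    ", cv_score(shared))
    print("plus advertiser identity:", cv_score(["advertiserId"] + shared))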

References

[1] https://adespresso.com/academy/blog/we-analyzed-37259-facebook-ads-and-heres-what-we-learned/

[2] https://venngage.com/blog/facebook-images/

[3] https://cloud.google.com/vision/

[4] http://monkeylearn.com/

[5] http://emojinet.knoesis.org/home.php
