
Recently, Google published a white paper (pdf) on its model for predicting ad click-through rates at massive scale for its multi-billion dollar advertising platform. While the white paper may be overkill for most marketing professionals, it does answer a series of complex questions and uncovers a few advanced marketing methods.

Google’s explanation is heavy on mathematics and machine learning, so here is our translation of the important parts in English:

If you bid on a particular search term and supply a particular ad copy, what is the probability that your ad will be clicked?

The white paper begins with the goal of predicting P(click | q, a), where q is the search query and a is the ad copy. This is a conditional probability: the likelihood that an event will occur given a specified condition or set of conditions. So what we’re actually solving for is:

What is the probability that a click will occur given a particular search query and a particular set of ad copy?
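For intuition, here is a made-up example (the numbers are ours, not from the paper): if a particular ad were shown 1,000 times for a particular query and received 30 clicks, the observed rate would be:

P(click | q, a) \approx \frac{30}{1000} = 0.03

The model’s job is to estimate this probability before the ad is ever shown, using far more signals than raw historical counts.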

The problem is then modeled using logistic regression. Logistic regression attempts to quantify the relationship between a dependent variable and several independent variables by using probability scores as the predicted values of the dependent variable. In this case, the dependent variable is the probability that an ad will receive a click. The independent variables could be numerous and would include things like position on the page, existence of the search terms in the ad, geolocation of the searcher, etc.

The output of logistic regression is a function that approximates the likelihood of a click given all of these independent factors.

For each instance t, Google describes the dependent variable (the probability of a click) in terms of a feature vector x (the independent variables) and the model parameters w.

The model predicts P = \sigma (w \cdot x), where w is the vector of model parameters (one weight per feature) and x is a feature vector. In machine learning a “feature” is an individual measurable property describing a phenomenon being observed, and a “feature vector” is an n-dimensional vector of numerical features that represent an object.
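To make that concrete, here is what one ad impression might look like as a sparse feature vector in Python. The feature names are invented for illustration; the paper does not enumerate the actual features Google uses.

# Illustrative only: one ad impression encoded as a sparse feature vector,
# keeping just the features that are "on" (non-zero) for this impression.
x = {
    "ad_position_1": 1.0,           # ad shown in the top slot
    "query_term_in_headline": 1.0,  # the search term appears in the ad headline
    "advertiser_id_123": 1.0,       # one-hot indicator for this advertiser
    "searcher_country_US": 1.0,     # geolocation of the searcher
}

In a system like this the full feature space can run into billions of dimensions, but any single impression activates only a handful of them, which is why the vector is stored sparsely.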

Let's start with w \cdot x. This is read as the dot product of w and x. A dot product (also called a scalar product) is an operation that takes two equal-length sequences of numbers and returns a single number. Algebraically, the dot product is computed as the sum of the products of the corresponding entries of the two sequences of numbers:
w \cdot x = \sum_{i=1}^{n} w_{i} x_{i} = w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{n}x_{n}
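As a quick sanity check, here is the same computation in a few lines of Python, with made-up numbers:

# Toy example (made-up numbers): the dot product is just a sum of element-wise products.
w = [0.5, -1.2, 0.75]   # model parameters (weights)
x = [1.0, 0.0, 1.0]     # feature vector for one impression

dot = sum(w_i * x_i for w_i, x_i in zip(w, x))
print(dot)  # 0.5*1.0 + (-1.2)*0.0 + 0.75*1.0 = 1.25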

Now that we know how to compute w \cdot x, we can calculate the probability of a click as:
P(click) = \sigma (w \cdot x)

Sigma (\sigma) refers to the sigmoid function, an S-shaped logistic curve. Sigmoid curves show approximately exponential growth that slows as it approaches saturation; they turn up, for example, when modeling population growth in ecology. Crucially, the sigmoid squashes any real number into a value between 0 and 1, which is what lets us read the output as a probability. Here is the sigmoid function:
\sigma(a) = \frac{1}{1 + e^{-a}}
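Putting the two pieces together, here is a minimal Python sketch, continuing the made-up numbers from above:

import math

def sigmoid(a):
    # Squash any real number into (0, 1) so it can be read as a probability.
    return 1.0 / (1.0 + math.exp(-a))

# From the toy example above, w . x = 1.25
print(sigmoid(1.25))   # ~0.78, i.e. roughly a 78% predicted chance of a click
print(sigmoid(-3.0))   # ~0.05, a click is predicted to be very unlikely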

Next, the authors use Online Gradient Descent (OGD) to learn the model parameters with high prediction accuracy. The gradient of a function is a vector that points in the direction of its greatest rate of increase.

An easy way to think about this is to imagine opening a window in a warm room when it is cold outside. As we know, the heat from the warm room will begin to escape, creating a temperature gradient between the inside and the outside. At each point in space, the temperature is changing at some rate and in some direction. We can represent this change in terms of slope (the rate of change in temperature at that point) and direction. So, the gradient assigns to every point a vector representing both the slope and the direction of steepest increase there.

Online Gradient Descent is an optimization method that repeatedly takes a small step against the gradient (downhill) so as to reduce prediction error. The “online” part means the model parameters are updated one example at a time, as data streams in, rather than by re-fitting on the entire dataset.
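Here is a rough sketch of what a single OGD step looks like for logistic regression. The learning-rate value is an illustrative placeholder; the paper actually uses per-coordinate learning rates rather than a single global one.

import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def ogd_step(w, x, y, learning_rate=0.1):
    # One online update: x is the feature vector, y is 1 for a click and 0 otherwise.
    p = sigmoid(sum(w_i * x_i for w_i, x_i in zip(w, x)))   # predicted click probability
    for i, x_i in enumerate(x):
        gradient_i = (p - y) * x_i            # gradient of the logistic loss for this example
        w[i] -= learning_rate * gradient_i    # step downhill, against the gradient
    return w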

One issue with OGD is that it is not good at producing “sparse” models. In machine learning, a model is “taught” by feeding it training examples and measuring its output; a sparse model is one in which, after training, most of the weights are exactly zero. That amounts to automatic feature selection: any feature with a zero weight can simply be dropped. At the scale Google operates at, sparsity is what keeps the trained model small enough to store and serve efficiently.

For example, when there are far more features than training examples, regularization is used to keep the model in check; with L1 regularization in particular, many of the weights are driven to exactly zero. Google chose FTRL, which stands for Follow The Regularized Leader. FTRL is used to manage the relationship between sparsity and “loss,” the mathematical representation of the “cost” associated with a wrong prediction. The process of optimization seeks to minimize a loss function.

Follow the Regularized Leader (FTRL) is a method used to balance sparsity and accuracy.

Here is the algorithm:
[Image: the per-coordinate FTRL-Proximal algorithm from the paper]

What is important here:
P_{t} = \sigma (x_{t} \cdot w)

For round “t” we are asking the algorithm to predict the probability of a click as the sigmoid of the dot product of a feature vector (x) and the model parameters (w). Then we iterate from t = 1 onward, updating the parameters each time an outcome (click or no click) is observed.
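To make the loop concrete, here is a heavily simplified Python sketch of the per-coordinate FTRL-Proximal update described in the paper. The hyperparameter values are illustrative placeholders, and the production system adds many refinements (probabilistic feature inclusion, reduced-precision storage of coefficients, and so on) that are omitted here.

import math
from collections import defaultdict

alpha, beta = 0.1, 1.0        # per-coordinate learning-rate parameters (illustrative values)
lambda1, lambda2 = 1.0, 0.1   # L1 and L2 regularization strengths (illustrative values)

z = defaultdict(float)  # accumulated adjusted gradients, one entry per feature
n = defaultdict(float)  # accumulated squared gradients, one entry per feature

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def predict_and_update(x, y):
    # x maps feature name -> value for one impression; y is 1 for a click, 0 otherwise.
    # Compute the current weight for each active feature. The L1 term forces the
    # weight to exactly zero whenever |z| is small -- this is where sparsity comes from.
    w = {}
    for i in x:
        if abs(z[i]) <= lambda1:
            w[i] = 0.0
        else:
            sign = 1.0 if z[i] > 0 else -1.0
            w[i] = -(z[i] - sign * lambda1) / ((beta + math.sqrt(n[i])) / alpha + lambda2)

    # Round t prediction: p_t = sigma(x_t . w)
    p = sigmoid(sum(w[i] * x[i] for i in x))

    # Observe the outcome y and update the per-coordinate statistics.
    for i in x:
        g = (p - y) * x[i]  # gradient of the logistic loss for this coordinate
        sigma_i = (math.sqrt(n[i] + g * g) - math.sqrt(n[i])) / alpha
        z[i] += g - sigma_i * w[i]
        n[i] += g * g
    return p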

As the algorithm observes more and more examples, the model parameters improve and the predictions become more accurate.

And now, back to practical material

If you liked this post, you might want to take a look at Levers.

Levers offers forecasting and simulation software for marketing analytics - ecommerce, lead gen, social and more. Learn how to shape your future business strategy with Levers.

Comments

  • Posted by Martin Roettgerding

    Very interesting, thanks for blogging this.

    There’s one point I’d like to add. This model is actually about estimating click-through probabilities. The term ‘predicted click-through rate’ is already a simplification, or rather a translation into a language marketing professionals can understand a little better. However, an even better term that highlights the importance of this issue would be Quality Score (auction QS, to be precise) – because this is what this is all about.

  • Posted by Jeff

    Great breakdown — I appreciate that you converted the math-heavy paper to a more readable format.

    One thing to nitpick, which is not a big deal, but since you use the term several times in your article: I wouldn’t say the paper published by Google is a “white paper”. A white paper is a practical guide or report published by a company for its customers/clients to learn something. This is an academic-style paper published at a top data mining conference in its industry track (which means it’s usually not peer-reviewed as rigorously as the research track). It’s meant for other academics and researchers to share ideas about state-of-the-art research in the field.
