Recently, Google published a white paper *(pdf)* on its model for predicting ad click-through rates at massive scale for its multi-billion dollar advertising platform. While the white paper may be overkill for most marketing professionals, it does answer a series of complex questions and uncovers a few advanced marketing methods.

Google’s explanation is heavy on mathematics and machine learning, so here is our translation of the important parts in English:

**If you bid on a particular search term and supply a particular ad copy, what is the probability that your ad will be clicked?**

The white paper begins with the goal of predicting P(click | q, a), where q = query and a = ad copy. This is a conditional probability: the likelihood that an event will occur given a specified condition or set of conditions. So what we’re actually solving for is:

**What is the probability that a click will occur given a particular search query and a particular set of ad copy?**

The problem is then modeled using logistic regression. Logistic regression attempts to quantify the relationship between a dependent variable and several independent variables by using probability scores as the predicted values of the dependent variable. In this case, the dependent variable is the probability that an ad will receive a click. The independent variables could be numerous and would include things like position on the page, existence of the search terms in the ad, geolocation of the searcher, etc.

**The output of logistic regression is a function that approximates the likelihood of a click given all of these independent factors.**

For each instance (t) Google describes the dependent variable (the probability of a click) in terms of a feature vector (x, the independent variables) and given model parameters (w).

The model predicts p = σ(w · x), where w is a vector of model parameters and x is a feature vector. In machine learning a “feature” is an individual measurable property of the phenomenon being observed, and a “feature vector” is an n-dimensional vector of numerical features that represents an object.

Let’s first start with w · x. This is read as the dot product of w and x. A dot product (also called a scalar product) is an operation that takes two equal-length sequences of numbers and returns a single number: the sum of the products of the corresponding entries of the two sequences. For example:
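Here is a tiny worked example in Python; the weights and feature values are made up for illustration, not taken from the paper:

```python
# Dot product: multiply corresponding entries, then sum the results.
w = [0.2, -0.5, 1.0]   # hypothetical model weights
x = [1.0, 0.0, 3.0]    # hypothetical feature vector

dot = sum(wi * xi for wi, xi in zip(w, x))
print(dot)  # 0.2*1.0 + (-0.5)*0.0 + 1.0*3.0 = 3.2
```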

Now that we know how to compute w · x, we then calculate the probability of a click as σ(w · x) = 1 / (1 + e^(−w · x)).

**Sigma refers to the sigmoid function, which is a logistic curve that is S-shaped.**

Sigmoid functions exhibit approximately exponential growth at first, which then slows as the curve saturates. They are common when modeling things like population growth in ecology. Here is the sigmoid function: σ(z) = 1 / (1 + e^(−z)).
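In Python, the sigmoid is a one-liner; a few sample values show how it squashes any score into a probability between 0 and 1:

```python
import math

def sigmoid(z):
    """Logistic (sigmoid) function: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))    # 0.5 -- a score of zero means a 50% click probability
print(sigmoid(4))    # ~0.982 -- large positive scores saturate toward 1
print(sigmoid(-4))   # ~0.018 -- large negative scores saturate toward 0
```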

Next, the authors use Online Gradient Descent (OGD) to fit the model parameters with high prediction accuracy. A gradient is a vector field that points in the direction of the greatest rate of increase.

An easy way to think about this is to imagine opening a window in a warm room when it is cold outside. As we know, the heat from the warm room will begin to escape, creating a temperature gradient between the inside and outside. At some specific point in space, the temperature will be changing faster than at all other points. We can represent this temperature change in terms of slope (rate of change in temperature at each point) and direction. So, each vector in the gradient represents both the slope and direction at each point.

**Online Gradient Descent is just an optimization method that repeatedly steps in the direction opposite the gradient (the steepest downhill direction) to minimize prediction error, updating the model after each example.**
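As a rough sketch (not the paper’s exact implementation), one OGD pass over a stream of logistic-regression examples looks like this; the feature values, labels, and learning-rate schedule are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ogd_step(w, x, y, eta):
    """One online gradient descent update for logistic loss.
    y is the observed outcome (1 = click, 0 = no click). The gradient
    of the log loss with respect to w is (p - y) * x, so we step the
    opposite way, scaled by the learning rate eta."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi - eta * (p - y) * xi for wi, xi in zip(w, x)]

# Hypothetical stream of (feature vector, click) examples.
stream = [([1, 0, 1], 1), ([1, 1, 0], 0), ([1, 0, 1], 1)]
w = [0.0, 0.0, 0.0]
for t, (x, y) in enumerate(stream, start=1):
    w = ogd_step(w, x, y, eta=1.0 / math.sqrt(t))  # decaying learning rate
print(w)  # weights now lean toward features seen alongside clicks
```

After just three examples, the model already assigns a higher click probability to the feature pattern it saw clicked than to the one it saw ignored.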

One issue with OGD is that it is not good at producing “sparse” models. This is a machine learning concern: the algorithm is “taught” by feeding it samples and measuring its output, and in this case the authors are talking about a sparse weight vector, which amounts to a form of feature selection.

For example, when there are more features than training examples (common in machine learning), some feature weights are set to zero when using what is known as a regularization model. Google chose to use an FTRL regularization model, which stands for Follow the Regularized Leader. FTRL is used to manage the trade-off between sparsity and “loss,” the mathematical representation of the “cost” associated with a prediction error. The process of optimization seeks to minimize a loss function.

**Follow the Regularized Leader (FTRL) is a method used to balance sparsity and accuracy.**

Here is the algorithm:

What is important here:

For round t, we ask the algorithm to predict the probability of a click as the dot product of a feature vector (x) and the model parameters (w). Then we observe the outcome, update the model, and iterate from t = 1 to t = T.
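For readers who want to see the mechanics, here is a simplified per-coordinate FTRL-Proximal sketch in the spirit of the paper’s pseudocode. The hyperparameter values and the toy data stream are illustrative assumptions, not Google’s settings:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class FTRLProximal:
    """Per-coordinate FTRL-Proximal sketch. alpha and beta set the
    per-coordinate learning rates; l1 drives sparsity; l2 adds smoothing."""

    def __init__(self, dim, alpha=0.1, beta=1.0, l1=1.0, l2=1.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = [0.0] * dim  # accumulated (adjusted) gradients
        self.n = [0.0] * dim  # accumulated squared gradients

    def weights(self):
        w = []
        for zi, ni in zip(self.z, self.n):
            if abs(zi) <= self.l1:
                w.append(0.0)  # L1 penalty zeroes out weak coordinates: sparsity
            else:
                w.append(-(zi - math.copysign(self.l1, zi)) /
                         ((self.beta + math.sqrt(ni)) / self.alpha + self.l2))
        return w

    def update(self, x, y):
        """Predict on x, observe outcome y (1 = click, 0 = no click), update."""
        w = self.weights()
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i, xi in enumerate(x):
            g = (p - y) * xi  # gradient of the log loss at coordinate i
            s = (math.sqrt(self.n[i] + g * g) - math.sqrt(self.n[i])) / self.alpha
            self.z[i] += g - s * w[i]
            self.n[i] += g * g
        return p

# Toy stream: feature 0 fires only on clicked impressions; feature 2 never fires.
model = FTRLProximal(dim=3)
for _ in range(10):
    model.update([1, 1, 0], 1)  # click
    model.update([0, 1, 0], 0)  # no click
print(model.weights())  # feature 2's weight stays exactly 0.0 (a sparse model)
```

Notice that the coordinate that never fires keeps a weight of exactly zero, which is the sparsity property the paper cares about: at Google’s scale, zero weights never need to be stored.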

**As the algorithm observes more examples, the accuracy of its predictions tends to increase.**

And now, back to practical material…



Very interesting, thanks for blogging this.

There’s one point I’d like to add. This model is actually about estimating click-through probabilities. The term ‘predicted click-through rate’ is already a simplification, or rather a translation into a language marketing professionals can understand a little better. However, an even better term that highlights the importance of this issue would be Quality Score (auction QS, to be precise) – because this is what this is all about.

Great breakdown — I appreciate that you converted the math-heavy paper to a more readable format.

One thing to nitpick, which is not a big deal, but since you use the term several times in your article: I wouldn’t say the paper published by Google is a “white paper”. A white paper is a practical guide or report published by a company for its customers or clients to learn something. This is an academic-style paper published at a top data mining conference in its industry track (which means it’s usually not as heavily peer-reviewed as the research track). It’s meant for other academics and researchers to share ideas about state-of-the-art research in the field.