Here we will go through an example of thresholding and calibration for an imbalanced dataset. The data we use in this example is click-through rate (CTR) data. CTR is a ratio showing how often people who see your ad end up clicking it, and it can be used to gauge how well your ads are performing. CTR is the number of clicks that your ad receives divided by the number of times your ad is shown:
$\text{CTR} = \frac{\text{clicks}}{\text{impressions}} \times 100\%$
For example, if you had $5$ clicks and $100$ impressions, then your CTR would be $5\%$.
import warnings
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.cm as cm
from sklearn.preprocessing import StandardScaler
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from sklearn.metrics import (roc_curve, auc, roc_auc_score, accuracy_score, f1_score,
                             precision_recall_curve, average_precision_score, brier_score_loss,
                             recall_score, confusion_matrix)
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from sklearn.svm import SVC
from plot_utils import (plot_thresholds, plot_calibration_curve, comp_conf_matrix, plot_decision_surface)
pd.set_option('display.max_columns', None)
sns.set(style="darkgrid", color_codes=True)
warnings.filterwarnings('ignore')
def reduce_mem_usage(df, verbose=True):
    # Downcast each numeric column to the smallest dtype that can hold its value range
    numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
    start_mem = df.memory_usage().sum() / 1024**2
    for col in df.columns:
        col_type = df[col].dtypes
        if col_type in numerics:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
    end_mem = df.memory_usage().sum() / 1024**2
    if verbose:
        print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
    return df
df = pd.read_csv('prepro_ctr_data.csv').drop('Unnamed: 0', axis=1)
df = reduce_mem_usage(df)
print(len(df.index))
df = df.sample(n=550_000)
df.info()
CTR prediction is an example where the classes are severely imbalanced and the distribution of clicks can often be modeled as a Poisson random variable. In this dataset, $1$ indicates a click and $0$ indicates no click.
sns.countplot(x='click',data=df, palette='hls')
plt.show()
df.click.value_counts() / len(df.index)
Like many other learning algorithms in scikit-learn, some classifiers such as LogisticRegression() or RandomForestClassifier() come with a built-in method of handling imbalanced classes called class_weight. If we have highly imbalanced classes and have not addressed this during preprocessing, we can use the class_weight parameter to weight the classes so that each class contributes a more balanced share to the fit. Specifically, the balanced argument will automatically weight classes inversely proportional to their frequency, giving the most attention to the minority class:
$w_j = \frac{n}{k \, n_j}$
where $w_j$ is the weight for class $j$, $n$ is the number of observations, $n_j$ is the number of observations in class $j$, and $k$ is the total number of classes.
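For reference, scikit-learn's compute_class_weight utility reproduces this formula directly; a minimal sketch using the click column loaded above (the variable names mirror those used further below):
from sklearn.utils.class_weight import compute_class_weight
# 'balanced' weights: w_j = n / (k * n_j), inversely proportional to class frequency
classes = np.unique(df['click'])
weights = compute_class_weight(class_weight='balanced', classes=classes, y=df['click'])
print(dict(zip(classes, np.round(weights, 3))))   # the minority (click) class gets the larger weight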
When using XGBoost we can control the balance of positive and negative weights using the scale_pos_weight parameter. This parameter is typically set to the ratio of the number of negative instances to the number of positive instances in your target data:
X = df.drop('click', axis=1)
y = df.click
l_counts = y.value_counts().values
ratio = round(l_counts[0]/l_counts[1])
ratio
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
classifiers = [XGBClassifier(n_estimators=1000, scale_pos_weight=ratio, n_jobs=-1, random_state=2),
MLPClassifier(activation='logistic', random_state=1, max_iter=200),
LogisticRegression(class_weight='balanced', random_state=1, n_jobs=-1)]
names = ['XGBoost', 'MLP', 'Logistic']
datasets = [df]
X0 = PCA(2).fit_transform(X.values)
X0 = StandardScaler().fit_transform(X0)
y0 = y.values
plot_decision_surface(X0, y0, classifiers, proba=True, size=[40,8], s=100)
Here we look at $3$ different classifiers: XGBoost, an MLP (neural network), and logistic regression. Note that these classifiers are fitted and scored using only the first $2$ principal components of the dataset so the decision surfaces can be visualized in $2D$. From the resulting plots you can see that XGBoost is capable of capturing a non-linear relationship. The MLP is also adept at modeling non-linearity, but in this case there is no parameter to weight predictions toward the minority class or otherwise make the network "cost-sensitive", so the resulting region is mostly the color of the majority class, which is also why its accuracy score (bottom right of each plot) is much higher. The decision regions of the logistic classifier show that it prefers a linear boundary in the data.
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4, random_state=42)
scores = []
for name, clf in zip(names, classifiers):
    clf.fit(X_train, y_train)
    y_scores = clf.predict_proba(X_test)
    score = roc_auc_score(y_test, y_scores[:, 1])
    score = round(score, 4)
    print(f'{name}: roc_auc = {score}')
    scores.append(score)
Many machine learning algorithms are capable of predicting a probability or score of class membership, and this must be interpreted before it can be mapped to a class label. This is achieved by using a threshold, such as $0.5$ (often the default threshold), where all values equal to or greater than the threshold are mapped to one class and all other values are mapped to the other class.
The problem is that the default threshold may not represent an optimal interpretation of the predicted probabilities. This can happen for several reasons, for example when the class distribution is severely skewed, when the predicted probabilities are not calibrated, or when the cost of one type of misclassification outweighs the cost of the other.
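To make the mapping itself concrete, here is a minimal sketch, where probs is a hypothetical array standing in for the predict_proba outputs used later:
probs = np.array([0.12, 0.50, 0.73])       # hypothetical predicted probabilities for the click class
threshold = 0.5                            # the common default threshold
print((probs >= threshold).astype(int))    # -> [0 1 1]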
A ROC curve is a diagnostic plot that evaluates a set of probability predictions made by a model on a test dataset.
The false-positive rate is plotted on the x-axis and the true-positive rate on the y-axis, and the plot is referred to as the Receiver Operating Characteristic curve, or ROC curve. A diagonal line on the plot from the bottom-left to the top-right indicates the "curve" for a no-skill classifier (one whose predictions are no better than random guessing), and a point in the top left of the plot indicates a model with perfect skill.
The curve is useful to understand the trade-off in the true-positive rate and false-positive rate for different thresholds. The area under the ROC Curve, so-called ROC AUC, provides a single number to summarize the performance of a model in terms of its ROC Curve with a value between $0.5$ (no-skill) and $1.0$ (perfect skill).
We could locate the threshold with the optimal balance between false positive and true positive rates with the G-Mean. The Geometric Mean or G-Mean is a metric for imbalanced classification that, if optimized, will seek a balance between the sensitivity and the specificity.
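One way to do this (a minimal sketch, separate from the plot_thresholds utility used below) is to compute the G-mean, $\sqrt{TPR \times (1 - FPR)}$, over all candidate thresholds returned by roc_curve and take the maximizer, assuming y_test and the predicted probabilities y_score produced by the classifier fitted below:
fpr, tpr, thresholds = roc_curve(y_test, y_score[:, 1])
gmeans = np.sqrt(tpr * (1 - fpr))                # specificity = 1 - FPR
best_idx = np.argmax(gmeans)
print(f'Best threshold={thresholds[best_idx]:.3f}, G-mean={gmeans[best_idx]:.3f}')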
Unlike the ROC curve, a precision-recall curve focuses on the performance of a classifier on the positive (minority) class only.
A precision-recall curve is calculated by converting the probability predictions into class labels across a set of thresholds and computing the precision and recall at each threshold. A line plot is created for the thresholds in ascending order with recall on the x-axis and precision on the y-axis.
A no-skill model is represented by a horizontal line at a precision equal to the proportion of positive examples in the dataset (the click rate we printed above). A perfect-skill classifier is represented by a point in the top-right corner, with full precision and recall.
If we are interested in a threshold that results in the best balance of precision and recall, then this is the same as optimizing the F-measure, the harmonic mean of the two.
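As with the G-mean, such a threshold can be found by sweeping the precision-recall curve (again a minimal sketch, assuming y_test and the predicted probabilities y_score from the classifier fitted below):
precision, recall, thresholds = precision_recall_curve(y_test, y_score[:, 1])
f1 = (2 * precision * recall) / (precision + recall + 1e-12)   # small epsilon avoids division by zero
best_idx = np.argmax(f1[:-1])                                  # the last point has no associated threshold
print(f'Best threshold={thresholds[best_idx]:.3f}, F1={f1[best_idx]:.3f}')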
clf = XGBClassifier(n_estimators=1000, scale_pos_weight=ratio, n_jobs=-1, random_state=123)
clf.fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
y_pred = clf.predict(X_test)
plot_thresholds(X_test, y_test, y_score, name='XGBoost', fig_x=16, fig_y=5, sizes=[0.5, -160, -40])
g_ts_pred = np.where(y_score[:, 1] > 0.51, 1, 0)
f1_ts_pred = np.where(y_score[:, 1] > 0.55, 1, 0)
prl_ts_pred = np.where(y_score[:, 1] > 0.65, 1, 0)
f1_ts_cm = np.around(confusion_matrix(y_test, f1_ts_pred, normalize='true'), decimals=3)
g_ts_cm = np.around(confusion_matrix(y_test, g_ts_pred, normalize='true'), decimals=3)
prl_ts_cm = np.around(confusion_matrix(y_test, prl_ts_pred, normalize='true'), decimals=3)
no_ts_cm = np.around(confusion_matrix(y_test, y_pred, normalize='true'), decimals=3)
comp_conf_matrix(mtx1=no_ts_cm, mtx2=f1_ts_cm, labels=['click', 'no click'],
titles=['No Threshold', 'F1 Thresholded'], sizes=[10, 4])
comp_conf_matrix(mtx1=prl_ts_cm, mtx2=g_ts_cm, labels=['click', 'no click'],
titles=['PRL Thresholded', 'G Thresholded'], sizes=[10, 4])
The resulting confusion matrices from thresholding show a much better balance between true positives and true negatives. This is beneficial because we are mostly concerned with true positives, i.e. impressions paid for that were actually clicked on (money gained).
Predicted probabilities that match the expected distribution of probabilities for each class are referred to as calibrated. The problem is that not all machine learning models are capable of predicting calibrated probabilities. There are methods to diagnose how well calibrated predicted probabilities are, and methods to better calibrate the predicted probabilities against the observed distribution of each class. Often this can lead to better quality predictions, depending on how the skill of the model is evaluated.
Although a model may be able to predict probabilities, the distribution and behavior of the probabilities may not match the expected distribution of observed probabilities in the training data. This is especially common with complex nonlinear machine learning algorithms that do not directly make probabilistic predictions and instead use approximations.
The distribution of the probabilities can be adjusted to better match the expected distribution observed in the data. This adjustment is referred to as calibration, as in the calibration of the model or the calibration of the distribution of class probabilities.
A reliability diagram is a line plot of the relative frequency of what was observed (y-axis) versus the predicted probability frequency (x-axis).
Specifically, the predicted probabilities are divided up into a fixed number of buckets along the x-axis. The number of events (class=1) is then counted for each bin (i.e. the relative observed frequency), and the counts are normalized. The results are then plotted as a line plot.
These plots are commonly referred to as ‘calibration‘ plots or curves as they summarize how well the forecast probabilities are calibrated. The better calibrated or more reliable a forecast, the closer the points will appear along the main diagonal from the bottom left to the top right of the plot.
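scikit-learn's calibration_curve (imported above) computes these binned frequencies directly; a minimal sketch of such a diagram for the XGBoost probabilities y_score computed earlier:
prob_true, prob_pred = calibration_curve(y_test, y_score[:, 1], n_bins=10)
plt.plot([0, 1], [0, 1], 'k--', label='Perfectly calibrated')
plt.plot(prob_pred, prob_true, marker='o', label='XGBoost')
plt.xlabel('Mean predicted probability')
plt.ylabel('Observed click frequency')
plt.legend()
plt.show()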
The position of the points or the curve relative to the diagonal helps to interpret the probabilities: points below the diagonal indicate the model has over-forecast (the probabilities are too large), while points above the diagonal indicate it has under-forecast (the probabilities are too small).
Probabilities are continuous, so we expect some deviation from the line, often shown as an S-shaped curve showing tendencies of over-forecasting low probabilities and under-forecasting high probabilities.
Some algorithms are fit in such a way that their predicted probabilities are already calibrated like a logistic regression model where the overall average predicted probability is equal to the average of the response.
Other algorithms do not directly produce predictions of probabilities, and instead a prediction of probabilities must be approximated. Some examples include neural networks, support vector machines, and decision trees.
Calibration of prediction probabilities is a rescaling operation that is applied after the predictions have been made by a predictive model. There are two popular approaches to calibrating probabilities: Platt scaling and isotonic regression.
Platt scaling is simpler and is suitable for reliability diagrams with the S-shape. Isotonic regression is more complex and requires a lot more data (otherwise it may overfit), but it can support reliability diagrams with different shapes because it is nonparametric.
Keep in mind that better calibrated probabilities may or may not lead to better class-based or probability-based predictions; it depends on the specific metric used to evaluate predictions. In fact, some empirical results suggest that the algorithms that benefit the most from calibrating predicted probabilities include SVMs, bagged decision trees, and random forests.
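Both approaches are available through CalibratedClassifierCV; a minimal sketch (not part of the original analysis) comparing them with the Brier score, using the XGBoost classifier and data split from above:
for method in ['sigmoid', 'isotonic']:
    calibrated = CalibratedClassifierCV(clf, cv=5, method=method)
    calibrated.fit(X_train, y_train)
    probs = calibrated.predict_proba(X_test)[:, 1]
    print(f'{method}: Brier score = {brier_score_loss(y_test, probs):.4f}')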
plot_calibration_curve(y, X_train, X_test, y_train, y_test, clf, "XGBoost", fig_idx=1, n_cv=5,
fig_x=10, fig_y=10, imb=True, C=1)
From the resulting reliability diagram it looks like the sigmoid (Platt scaling) method increases our precision the most.
sigmoid = CalibratedClassifierCV(clf, cv=5, method='sigmoid')
sigmoid.fit(X_train, y_train)
y_score = sigmoid.predict_proba(X_test)
y_pred = sigmoid.predict(X_test)
plot_thresholds(X_test, y_test, y_score, name='XGBoost', fig_x=16, fig_y=5, sizes=[0.5, 25, -40])
g_cal_ts_pred = np.where(y_score[:, 1] > 0.17, 1, 0)
f1_cal_ts_pred = np.where(y_score[:, 1] > 0.20, 1, 0)
prl_cal_ts_pred = np.where(y_score[:, 1] > 0.29, 1, 0)
g_cal_ts_cm = np.around(confusion_matrix(y_test, g_cal_ts_pred, normalize='true'), decimals=3)
f1_cal_ts_cm = np.around(confusion_matrix(y_test, f1_cal_ts_pred, normalize='true'), decimals=3)
prl_cal_ts_cm = np.around(confusion_matrix(y_test, prl_cal_ts_pred, normalize='true'), decimals=3)
cal_no_ts_cm = np.around(confusion_matrix(y_test, y_pred, normalize='true'), decimals=3)
comp_conf_matrix(mtx1=cal_no_ts_cm, mtx2=f1_cal_ts_cm, labels=['click', 'no click'],
titles=['Cal No Threshold', 'Cal F1 Thresholded'], sizes=[10, 4])
comp_conf_matrix(mtx1=prl_cal_ts_cm, mtx2=g_cal_ts_cm, labels=['click', 'no click'],
titles=['Cal PRL Thresholded', 'Cal G Thresholded'], sizes=[10, 4])
The resulting confusion matrices from thresholding the calibrated model also look beneficial. Once again, we are mostly concerned with true positives, impressions paid for that were clicked on (ad ROI), and there was a valuable increase in true positives.
The return on investment (ROI) for ad spend can be computed from the four outcomes of a confusion matrix. This quantity is defined as the ratio between the total return and the total cost: if it is greater than $1$, the total return was greater than the total cost, and vice versa. In this exercise, we will compute a sample ROI assuming a fixed r, the return per clicked impression, and cost, the cost per impression.
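In terms of the confusion-matrix counts and the hypothetical per-impression values r and cost set in the function below, the quantity computed for each model is
$ROI = \frac{TP \times r}{(TP + FP) \times cost}$,
since only true positives generate a return, while every impression the model chooses to serve (every predicted positive) incurs a cost.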
def compare_roi(matrs=None, titles=None):
    # Fixed assumptions: return per clicked impression and cost per impression
    r = 0.2
    cost = 0.05
    for i in range(len(matrs)):
        # Unpack the four confusion-matrix outcomes
        tn, fp, fn, tp = matrs[i].ravel()
        # Calculate total return, total cost, and ROI
        total_return = tp * r
        total_cost = (tp + fp) * cost
        roi = total_return / total_cost
        print(f"{titles[i]} Total return: {round(total_return, 2)}, "
              f"Total cost: {round(total_cost, 2)}, ROI: {round(roi, 2)}")
compare_roi(matrs=[no_ts_cm, f1_ts_cm, prl_ts_cm, g_ts_cm, f1_cal_ts_cm, prl_cal_ts_cm, g_cal_ts_cm],
titles=['No Threshold', 'F1 Thresholded', 'PRL Thresholded', 'G Thresholded',
'Cal F1 Thresholded', 'Cal PRL Thresholded', 'Cal G Thresholded'])
The calibrated, precision-recall-thresholded (Cal PRL) model has the greatest ROI.