Running an Experiment

  1. Run an experiment by selecting the [Click for Actions] button beside the dataset that you want to use, then click Predict to begin an experiment.
Datasets action menu
  2. The Experiment Settings form displays and auto-fills with the selected dataset. Optionally specify a validation dataset and/or a test dataset.
  • The validation set is used to tune parameters (models, features, etc.). If a validation dataset is not provided, the training data is used (with holdout splits). If a validation dataset is provided, training data is not used for parameter tuning - only for training. A validation dataset can help to improve the generalization performance on shifting data distributions.
  • The test dataset is used for final-stage scoring and is the dataset against which model metrics are computed. Test set predictions will be available at the end of the experiment. This dataset is not used during training of the modeling pipeline.

Keep in mind that these datasets must have the same number of columns as the training dataset. Also note that if provided, the validation set is not sampled down, so it can lead to large memory usage, even if accuracy=1 (which reduces the train size).

  3. Specify the target (response) column. Note that not all explanatory functionality will be available for multiclass classification scenarios (scenarios with more than two outcomes). When the target column is selected, Driverless AI automatically provides the target column type and the number of rows. If this is a classification problem, then the UI shows unique and frequency statistics (Target Freq/Most Freq) for numerical columns. If this is a regression problem, then the UI shows the dataset mean and standard deviation values.

Notes Regarding Frequency:

  • For data imported in versions <= 1.0.19, TARGET FREQ and MOST FREQ both represent the count of the least frequent class for numeric target columns and the count of the most frequent class for categorical target columns.
  • For data imported in versions 1.0.20-1.0.22, TARGET FREQ and MOST FREQ both represent the frequency of the target class (second class in lexicographic order) for binomial target columns; the count of the most frequent class for categorical multinomial target columns; and the count of the least frequent class for numeric multinomial target columns.
  • For data imported in version 1.0.23 (and later), TARGET FREQ is the frequency of the target class for binomial target columns, and MOST FREQ is the most frequent class for multinomial target columns.
  4. The next step is to set the parameters and settings for the experiment. (Refer to the Experiment Settings section that follows for more information about these settings.) You can set the parameters individually, or you can let Driverless AI infer the parameters and then override any that you disagree with. Available parameters and settings include the following:
  • Dropped Columns: The columns we do not want to use as predictors, such as ID columns, columns with data leakage, etc.

  • Weight Column: The column that indicates the per row observation weights. If “None” is specified, each row will have an observation weight of 1.

  • Fold Column: The column that indicates the fold. If “None” is specified, the folds will be determined by Driverless AI. This is set to “Disabled” if a validation set is used.

  • Time Column: The column that provides a time order, if applicable. If “AUTO” is specified, Driverless AI will auto-detect a potential time order. If “OFF” is specified, auto-detection is disabled. This is set to “Disabled” if a validation set is used.

  • Specify the scorer to use for this experiment. The scorers vary based on whether this is a classification or regression experiment. Available scorers include:

    • Regression: GINI, R2, MSE, RMSE (default), RMSLE, RMSPE, MAE, MER, MAPE, SMAPE
    • Classification: GINI, MCC, F05, F1, F2, ACCURACY, LOGLOSS, AUC (default), AUCPR
  • Desired relative Accuracy from 1 to 10

  • Desired relative Time from 1 to 10

  • Desired relative Interpretability from 1 to 10

    Driverless AI will automatically infer the best settings for Accuracy, Time, and Interpretability and provide you with an experiment preview based on those suggestions. If you adjust these knobs, the experiment preview will automatically update based on the new settings.

Additional settings:

  • If this is a classification problem, then click the Classification button. Note that Driverless AI determines the problem type based on the response column. Though not recommended, you can override this setting and specify whether this is a classification or regression problem.
  • Click the Reproducible button to build this with a random seed.
  • Specify whether to enable GPUs. (Note that this option is ignored on CPU-only systems.)
Experiment settings
  5. Click Launch Experiment to start the experiment.

The experiment launches with a randomly generated experiment name. You can change this name at any time during or after the experiment. Mouse over the name of the experiment to view an edit icon, then type in the desired name.

As the experiment runs, a running status displays in the upper middle portion of the UI. First Driverless AI figures out the backend and determines whether GPUs are running. Then it starts parameter tuning, followed by feature engineering. Finally, Driverless AI builds the scoring pipeline.

In addition to the status, the UI also displays details about the dataset, the iteration data (internal validation) for each cross validation fold along with any specified scorer value, the feature importance values, and CPU/Memory information (including Notifications, Logs, and Trace info). For classification problems, the lower right section includes a toggle between an ROC curve, Precision-Recall graph, Lift chart, Gains chart, and GPU Usage information (if GPUs are available). For regression problems, the lower right section includes a toggle between an Actual vs. Predicted chart and GPU Usage information (if GPUs are available). (Refer to the Experiment Graphs section for more information.) Upon completion, an Experiment Summary section will populate in the lower right section.

The bottom portion of the experiment screen will show any warnings that Driverless AI encounters. You can hide this pane by clicking the x icon.

You can stop experiments that are currently running. Click the Finish button to stop the experiment. This jumps the experiment to the end and completes the ensembling and the deployment package. You can also click Abort to terminate the experiment. (You will be prompted to confirm the abort.) Note that aborted experiments will not display on the Experiments page.

Experiment

Completed Experiment

After an experiment status changes from RUNNING to COMPLETE, the UI provides you with several options:

  • Interpret this Model: Refer to Interpreting a Model.
  • Score on Another Dataset: Refer to Score on Another Dataset.
  • Transform Another Dataset: Refer to Transform Another Dataset.
  • Download (Holdout) Training Predictions: In CSV format, available if a validation set was NOT provided.
  • Download Validation Predictions: In CSV format, available if a validation set was provided.
  • Download Test Predictions: In CSV format, available if a test dataset is used.
  • Download Python Scoring Pipeline: A standalone Python scoring pipeline for H2O Driverless AI. Refer to Driverless AI Standalone Python Scoring Pipeline.
  • Build MOJO Scoring Pipeline: A standalone Model Object, Optimized scoring pipeline. Refer to Driverless AI MOJO Scoring Pipeline.
  • Download Experiment Summary: A zip file containing the following files. Refer to Experiment Summary for more information.
    • A summary of the experiment
    • The experiment features along with their relative importance
    • Ensemble information
    • An experiment preview
    • PDF and markdown versions of an auto-generated report for the experiment (available when accuracy > 1)
    • A target transformations tuning leaderboard
    • A tuning leaderboard
  • Download Logs
  • View Notifications/Warnings (if any existed)
Experiment Complete

Experiment Settings

This section describes the settings that are available when running an experiment.

Dropped Columns

Dropped columns are columns that you do not want to be used as predictors in the experiment.

Validation Dataset

The validation dataset is used for tuning the modeling pipeline. If provided, the entire training data will be used for training, and validation of the modeling pipeline is performed with only this validation dataset. This is not generally recommended, but can make sense if the data are non-stationary. In such a case, the validation dataset can help to improve the generalization performance on shifting data distributions.

This dataset must have the same number of columns (and column types) as the training dataset. Also note that if provided, the validation set is not sampled down, so it can lead to large memory usage, even if accuracy=1 (which reduces the train size).

Test Dataset

The test dataset is used for testing the modeling pipeline and creating test predictions. The test set is never used during training of the modeling pipeline. (Results are the same whether a test set is provided or not.) If a test dataset is provided, then test set predictions will be available at the end of the experiment.

Weight Column

Optional: Column that indicates the observation weight (a.k.a. sample or row weight), if applicable. This column must be numeric with values >= 0. Rows with higher weights have higher importance. The weight affects model training through a weighted loss function, and affects model scoring through weighted metrics. The weight column is not used when making test set predictions (but scoring of the test set predictions can use the weight).
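
Driverless AI applies the weight column internally through its weighted loss and weighted metrics. As a rough sketch of the effect only, scikit-learn metrics expose the same idea through a sample_weight argument (the values below are made up):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical per-row observation weights (must be numeric and >= 0).
actual    = np.array([10.0, 12.0, 15.0, 30.0])
predicted = np.array([11.0, 12.0, 14.0, 20.0])
weights   = np.array([ 1.0,  1.0,  1.0,  0.1])  # down-weight the last row

unweighted = mean_squared_error(actual, predicted)
weighted   = mean_squared_error(actual, predicted, sample_weight=weights)

print(unweighted)  # the large error on the last row dominates
print(weighted)    # the same error contributes far less once down-weighted
```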

Fold Column

Optional: Column to use to create stratification folds during (cross-)validation, if applicable. Must be of integer or categorical type. Rows with the same value in the fold column represent cohorts, and each cohort is assigned to exactly one fold. This can help to build better models when the data is grouped naturally. If left empty, the data is assumed to be i.i.d. (independently and identically distributed). For example, when viewing data for a pneumonia dataset, person_id would be a good Fold Column. This is because the data may include multiple diagnostic snapshots per person, and we want to ensure that the same person’s characteristics show up only in either the training or validation frames, but not in both to avoid data leakage. Note that a fold column cannot be specified if a validation set is used.
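
Driverless AI builds these cohort-based folds for you. Purely to illustrate the underlying idea, the sketch below uses scikit-learn's GroupKFold with a hypothetical person_id column (GroupKFold keeps cohorts together but does not stratify):

```python
import pandas as pd
from sklearn.model_selection import GroupKFold

# Hypothetical pneumonia-style data: several diagnostic snapshots per person.
df = pd.DataFrame({
    "person_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "feature":   [0.2, 0.3, 0.1, 0.5, 0.9, 0.8, 0.4, 0.6, 0.7, 0.2],
    "target":    [0, 0, 1, 1, 0, 1, 0, 0, 1, 1],
})

# All rows for a given person_id land in the same fold, so the same person
# never appears in both the training and validation split of a fold.
gkf = GroupKFold(n_splits=2)
for train_idx, valid_idx in gkf.split(df[["feature"]], df["target"], groups=df["person_id"]):
    overlap = set(df.loc[train_idx, "person_id"]) & set(df.loc[valid_idx, "person_id"])
    assert not overlap  # no person is shared between train and validation
```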

Time Column

Optional: Column that provides a time order (time stamps for observations), if applicable. Can improve model performance and model validation accuracy for problems where the target values are auto-correlated with respect to the ordering (per time-series group).

The values in this column must be a datetime format understood by pandas.to_datetime(), like “2017-11-29 00:30:35” or “2017/11/29”, or integer values. If [AUTO] is selected, all string columns are tested for potential date/datetime content and considered as potential time columns. If a time column is found, feature engineering and model validation will respect the causality of time. If [OFF] is selected, no time order is used for modeling and data may be shuffled randomly (any potential temporal causality will be ignored).
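
A quick way to check whether a candidate time column will parse is to run it through pandas.to_datetime() yourself; the column values below are hypothetical:

```python
import pandas as pd

# Hypothetical time column values; the last one is deliberately invalid.
ts = pd.Series(["2017-11-29 00:30:35", "2017-11-30 08:12:00", "not a date"])

# errors="coerce" turns unparseable values into NaT so they are easy to spot.
parsed = pd.to_datetime(ts, errors="coerce")
print(parsed)
print("unparseable rows:", int(parsed.isna().sum()))
```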

When your data has a date column, then in most cases, specifying [AUTO] for the Time Column will be sufficient. However, if you select a specific date column, then Driverless AI will provide you with an additional side menu. At a minimum, this side menu will allow you to specify the number of weeks you want to predict and after how many weeks you want to start predicting. These options default to [AUTO]. Or you can select Expert Settings to specify per-group periodicities, such as a time-store group or a time-customer_id group. You can also adjust the period size and type.

A blog describing a time-series example is available here.

Time column settings

Accuracy

The following table describes how the Accuracy value affects a Driverless AI experiment.

Accuracy | Max Rows x Cols | Ensemble Level | Target Transformation | Parameter Tuning Level | Num Individuals | Num Folds | Only First Fold Model | Distribution Check
1 | 100K | 0 | False | 0 | Auto | 3 | True | No
2 | 1M | 0 | False | 0 | Auto | 3 | True | No
3 | 50M | 0 | True | 1 | Auto | 3 | True | No
4 | 100M | 0 | True | 1 | Auto | 3-4 | True | No
5 | 200M | 1 | True | 1 | Auto | 3-4 | True | Yes
6 | 500M | 2 | True | 1 | Auto | 3-5 | True | Yes
7 | 750M | <=3 | True | 2 | Auto | 3-10 | Auto | Yes
8 | 1B | <=3 | True | 2 | Auto | 4-10 | Auto | Yes
9 | 2B | <=3 | True | 3 | Auto | 4-10 | Auto | Yes
10 | 10B | <=4 | True | 3 | Auto | 4-10 | Auto | Yes

Note: A check for a shift in the distribution between train and test is done for accuracy >= 5.

The list below includes more information about the parameters that are used when calculating accuracy.

  • Max Rows x Cols: The maximum number of rows x columns to use in model training

    • For classification, stratified random row sampling is performed (by target)
    • For regression, random row sampling is performed
  • Ensemble Level: The level of ensembling done for the final model (if no time column is selected)

    • 0: single model
    • 1: 1x 4-fold models ensembled together
    • 2: 2x 5-fold models ensembled together
    • 3: 5x 5-fold models ensembled together
    • 4: 8x 5-fold models ensembled together
    • If ensemble level > 0, then the final model score shows an error estimate that includes the data generalization error (standard deviation of scores over folds) and the error in the estimate of the score (bootstrap score’s standard deviation with sample size same as data size).
    • For accuracy >= 8, the estimate of the error in the validation score reduces, and the error in the score is dominated by the data generalization error.
    • The estimate of the error in the test score is estimated by the maximum of the bootstrap with sample size equal to the test set size and the validation score’s error.
  • Target Transformation: Try target transformations and choose the transformation(s) that have the best score(s).

    Possible transformations: identity, unit_box, log, square, square root, double square root, inverse, Anscombe, logit, sigmoid

  • Parameter Tuning Level: The level of parameter tuning done

    • 0: no parameter tuning
    • 1: 8 different parameter settings
    • 2: 16 different parameter settings
    • 3: 32 different parameter settings
    • 4: 64 different parameter settings
    • Optimal model parameters are chosen based on a combination of the model’s accuracy, training speed, and complexity.
  • Num Individuals: The number of individuals in the population for the genetic algorithms

    • Each individual is a gene. The more genes, the more combinations of features are tried.
    • The number of individuals is automatically determined and can depend on the number of GPUs. Typical values are between 4 and 16.
  • Num Folds: The number of internal validation splits done for each pipeline

    • If the problem is a classification problem, then stratified folds are created.
  • Only First Fold Model: Whether to only use the first fold split for internal validation to save time

    • Example: Setting Num Folds to 3 and Only First Fold Model = True means you are splitting the data into 67% training and 33% validation.
    • If “Only First Fold Model” is False, then errors on the score shown during feature engineering include the data generalization error (standard deviation of scores over folds) and the error in the estimate of the score (bootstrap score’s standard deviation with a sample size the same as the data size).
    • If “Only First Fold Model” is True, then errors on the score shown during feature engineering include only the error in the estimate of the score (bootstrap score’s standard deviation with a sample size same as the data size).
    • For accuracy >= 8, the estimate of the error in the score reduces, and the error in the score is dominated by the data generalization error. This provides the most accurate generalization error.
  • Early Stopping Rounds: Time-based, as determined by the Time table below.

  • Distribution Check: Checks whether validation or test data are drawn from the same distribution as the training data. Note that this is purely informative to the user. Driverless AI does not take information from the test set into consideration during training.

  • Strategy: Feature selection strategy (to prune away features that do not clearly improve the model score). Feature selection is triggered by interpretability. Strategy = “FS” if interpretability >= 6; otherwise the strategy is None.

Time

This specifies the relative time for completing the experiment (i.e., higher settings take longer). Early stopping will take place if the experiment doesn’t improve the score for the specified number of iterations.

Time | Iterations | Early Stopping Rounds
1 | 1-5 | None
2 | 10 | 5
3 | 30 | 5
4 | 40 | 5
5 | 50 | 10
6 | 100 | 10
7 | 150 | 15
8 | 200 | 20
9 | 300 | 30
10 | 500 | 50

Note: See the Accuracy table for cases when not based upon time.

Interpretability

The higher the interpretability setting, the lower the complexity of the engineered features and of the final model(s).

Interpretability | Ensemble Level | Monotonicity Constraints
<= 5 | <= 3 | Disabled
>= 6 | <= 2 | Disabled
>= 7 | <= 2 | Enabled
>= 8 | <= 1 | Enabled
10 | 0 | Enabled
  • Monotonicity Constraints: If enabled, the model will satisfy knowledge about monotonicity in the data and monotone relationships between the predictors and the target variable. For example, in house price prediction, the house price should increase with lot size and number of rooms, and should decrease with crime rate in the area. If enabled, Driverless AI will automatically determine if monotonicity is present and enforce it in its modeling pipelines.
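
Driverless AI decides automatically where to apply these constraints. Purely as an illustration of the general idea (not of Driverless AI's internals), a gradient boosting library such as XGBoost accepts explicit per-feature monotone constraints:

```python
import numpy as np
import xgboost as xgb

# Toy house-price data: price should rise with lot size and fall with crime rate.
rng = np.random.default_rng(0)
lot_size = rng.uniform(1000, 10000, 500)
crime = rng.uniform(0, 10, 500)
price = 50 * lot_size - 2000 * crime + rng.normal(0, 5000, 500)

X = np.column_stack([lot_size, crime])

# "(1,-1)" means: non-decreasing in lot_size, non-increasing in crime rate.
model = xgb.XGBRegressor(monotone_constraints="(1,-1)", n_estimators=50)
model.fit(X, price)
# Predictions are now guaranteed to respect these monotone relationships.
```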

Scorers

Classification or Regression

  • GINI (Gini Coefficient): The Gini index is a well-established method to quantify the inequality among values of a frequency distribution, and can be used to measure the quality of a binary classifier. A Gini index of zero expresses perfect equality (or a totally useless classifier), while a Gini index of one expresses maximal inequality (or a perfect classifier).

The Gini index is based on the Lorenz curve. The Lorenz curve plots the true positive rate (y-axis) as a function of percentiles of the population (x-axis).

The Lorenz curve represents a collective of models represented by the classifier. The location on the curve is given by the probability threshold of a particular model. (i.e., Lower probability thresholds for classification typically lead to more true positives, but also to more false positives.)

The Gini index itself is independent of the model and only depends on the Lorenz curve determined by the distribution of the scores (or probabilities) obtained from the classifier.

Lorenz curve
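
As a minimal sketch (not Driverless AI's exact computation), one common convention relates the Gini coefficient of a binary classifier to the ROC AUC, which makes it easy to compute from hypothetical scores and labels:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical classifier scores and true labels.
y_true  = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.35, 0.4, 0.6, 0.7, 0.2, 0.9])

# Under this convention, Gini = 2 * AUC - 1: 0 for a useless classifier,
# 1 for a perfect one.
gini = 2 * roc_auc_score(y_true, y_score) - 1
print(round(gini, 3))
```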

Regression

  • R2 (R Squared): The R2 value represents the degree to which the predicted value and the actual value move in unison. The R2 value varies between 0 and 1 where 0 represents no correlation between the predicted and actual value and 1 represents complete correlation.

Calculating the R2 value for linear models is mathematically equivalent to \(1 - SSE/SST\) (or \(1 - \text{residual sum of squares}/\text{total sum of squares}\)). For all other models, this equivalence does not hold, so the \(1 - SSE/SST\) formula cannot be used. In some cases, this formula can produce negative R2 values, which is not meaningful for a squared quantity. Because Driverless AI does not necessarily use linear models, the R2 value is calculated using the squared Pearson correlation coefficient. (A sketch of this calculation, and of the MSE and MAPE/SMAPE examples below, appears after this list of scorers.)

R2 equation:

\[R2 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2\sum_{i=1}^{n}(y_i-\bar{y})^2}}\]

Where:

  • x is the predicted target value
  • y is the actual target value
  • MSE (Mean Squared Error): The MSE metric measures the average of the squares of the errors or deviations. MSE takes the distances from the points to the regression line (these distances are the “errors”) and squares them to remove any negative signs. MSE incorporates both the variance and the bias of the predictor.

MSE also gives more weight to larger differences. The bigger the error, the more it is penalized. For example, if your correct answers are 2,3,4 and the algorithm guesses 1,4,3, then the absolute error on each one is exactly 1, so squared error is also 1, and the MSE is 1. But if the algorithm guesses 2,3,6, then the errors are 0,0,2, the squared errors are 0,0,4, and the MSE is a higher 1.333. The smaller the MSE, the better the model’s performance. (Tip: MSE is sensitive to outliers. If you want a more robust metric, try mean absolute error (MAE).)

MSE equation:

\[MSE = \frac{1}{N} \sum_{i=1}^{N}(y_i -\hat{y}_i)^2\]
  • RMSE (Root Mean Squared Error): The RMSE metric evaluates how well a model can predict a continuous value. The RMSE units are the same as the predicted target, which is useful for understanding if the size of the error is of concern or not. The smaller the RMSE, the better the model’s performance. (Tip: RMSE is sensitive to outliers. If you want a more robust metric, try mean absolute error (MAE).)

    RMSE equation:

    \[RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N}(y_i -\hat{y}_i)^2 }\]

Where:

  • N is the total number of rows (observations) of your corresponding dataframe.
  • y is the actual target value.
  • \(\hat{y}\) is the predicted target value.
  • RMSLE (Root Mean Squared Logarithmic Error): This metric measures the ratio between actual values and predicted values and takes the log of the predictions and actual values. Use this instead of RMSE if an under-prediction is worse than an over-prediction. You can also use this when you don’t want to penalize large differences when both of the values are large numbers.

    RMSLE equation:

    \[RMSLE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \big(ln \big(\frac{y_i +1} {\hat{y}_i +1}\big)\big)^2 }\]

Where:

  • N is the total number of rows (observations) of your corresponding dataframe.
  • y is the actual target value.
  • \(\hat{y}\) is the predicted target value.
  • RMSPE (Root Mean Square Percentage Error): This metric is the RMSE expressed as a percentage. The smaller the RMSPE, the better the model performance.

    RMSPE equation:

    \[RMSPE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \frac{(y_i -\hat{y}_i)^2 }{(y_i)^2}}\]
  • MAE (Mean Absolute Error): The mean absolute error is an average of the absolute errors. The MAE units are the same as the predicted target, which is useful for understanding whether the size of the error is of concern or not. The smaller the MAE the better the model’s performance. (Tip: MAE is robust to outliers. If you want a metric that is sensitive to outliers, try root mean squared error (RMSE).)

    MAE equation:

    \[MAE = \frac{1}{N} \sum_{i=1}^{N} | x_i - x |\]

    Where:

    • N is the total number of errors
    • \(| x_i - x |\) equals the absolute errors.
  • MAPE (Mean Absolute Percentage Error): MAPE measures the size of the error in percentage terms. It is calculated as the average of the unsigned percentage error.

    MAPE equation:

    \[MAPE = \big(\frac{1}{N} \sum \frac {|Actual - Forecast |}{|Actual|} \big) * 100\]

Because the MAPE measure is in percentage terms, it gives an indication of how large the error is across different scales. Consider the following example:

Actual | Predicted | Absolute Error | Absolute Percentage Error
5 | 1 | 4 | 80%
15,000 | 15,004 | 4 | 0.03%

Both records have an absolute error of 4, but this error could be considered “small” or “big” when you compare it to the actual value.

  • SMAPE (Symmetric Mean Absolute Percentage Error): Unlike the MAPE, which divides the absolute errors by the absolute actual values, the SMAPE divides by the mean of the absolute actual and the absolute predicted values. This is important when the actual values can be 0 or near 0. Actual values near 0 cause the MAPE value to become infinitely high. Because SMAPE includes both the actual and the predicted values, the SMAPE value can never be greater than 200%.

Consider the following example:

Actual | Predicted
0.01 | 0.05
0.03 | 0.04

The MAPE for this data is 216.67% but the SMAPE is only 80.95%.

  • MER (Median Error Rate or Median Absolute Percentage Error): MER measures the median size of the error in percentage terms. It is calculated as the median of the unsigned percentage error.

    MER equation:

    \[MER = \big(median \frac {|Actual - Forecast |}{|Actual|} \big) * 100\]
Because the MER is the median, half the scored population has a lower absolute percentage error than the MER, and half the population has a larger absolute percentage error than the MER.
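
The worked numbers above can be reproduced with a few lines of Python. This is only a sketch of the formulas as described here; the MAPE and SMAPE helpers are written out by hand because library conventions vary:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# MSE example from above: correct answers are 2, 3, 4.
actual = np.array([2, 3, 4])
print(mean_squared_error(actual, np.array([1, 4, 3])))  # 1.0
print(mean_squared_error(actual, np.array([2, 3, 6])))  # ~1.333

# R2 as the squared Pearson correlation coefficient, as described above
# (hypothetical actual/predicted values).
y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8, 5.3])
print(np.corrcoef(y_true, y_pred)[0, 1] ** 2)

# MAPE and SMAPE example from above, where the actual values are near zero.
a = np.array([0.01, 0.03])
p = np.array([0.05, 0.04])
mape = np.mean(np.abs(a - p) / np.abs(a)) * 100
smape = np.mean(np.abs(a - p) / ((np.abs(a) + np.abs(p)) / 2)) * 100
print(round(mape, 2), round(smape, 2))  # 216.67 80.95
```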

Classification

  • MCC (Matthews Correlation Coefficient): The goal of the MCC metric is to represent the confusion matrix of a model as a single number. The MCC metric combines the true positives, false positives, true negatives, and false negatives using the equation described below.

A Driverless AI model will return probabilities, not predicted classes. To convert probabilities to predicted classes, a threshold needs to be defined. Driverless AI iterates over possible thresholds to calculate a confusion matrix for each threshold. It does this to find the maximum MCC value. Driverless AI’s goal is to continue increasing this maximum MCC.

Unlike metrics like Accuracy, MCC is a good scorer to use when the target variable is imbalanced. In the case of imbalanced data, high Accuracy can be found by simply predicting the majority class. Metrics like Accuracy and F1 can be misleading, especially in the case of imbalanced data, because they do not consider the relative size of the four confusion matrix categories. MCC, on the other hand, takes the proportion of each class into account. The MCC value ranges from -1 to 1 where -1 indicates a classifier that predicts the opposite class from the actual value, 0 means the classifier does no better than random guessing, and 1 indicates a perfect classifier.

MCC equation:

\[MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}\]
  • F05, F1, and F2: A Driverless AI model will return probabilities, not predicted classes. To convert probabilities to predicted classes, a threshold needs to be defined. Driverless AI iterates over possible thresholds to calculate a confusion matrix for each threshold. It does this to find the maximum F metric value. Driverless AI’s goal is to continue increasing this maximum F metric. (See the sketch after this list of scorers.)

The F1 score provides a measure for how well a binary classifier can classify positive cases (given a threshold value). The F1 score is calculated from the harmonic mean of the precision and recall. An F1 score of 1 means both precision and recall are perfect and the model correctly identified all the positive cases and didn’t mark a negative case as a positive case. If either precision or recall is very low, this is reflected in an F1 score closer to 0.

F1 equation:

\[F1 = 2 \;\Big(\; \frac{(precision) \; (recall)}{precision + recall}\; \Big)\]

Where:

  • precision is the positive observations (true positives) the model correctly identified from all the observations it labeled as positive (the true positives + the false positives).
  • recall is the positive observations (true positives) the model correctly identified from all the actual positive cases (the true positives + the false negatives).

The F0.5 score is the weighted harmonic mean of the precision and recall (given a threshold value). Unlike the F1 score, which gives equal weight to precision and recall, the F0.5 score gives more weight to precision than to recall. More weight should be given to precision for cases where False Positives are considered worse than False Negatives. For example, if your use case is to predict which products you will run out of, you may consider False Positives worse than False Negatives. In this case, you want your predictions to be very precise and only capture the products that will definitely run out. If you predict a product will need to be restocked when it actually doesn’t, you incur cost by having purchased more inventory than you actually need.

F05 equation:

\[F0.5 = 1.25 \;\Big(\; \frac{(precision) \; (recall)}{0.25 \; precision + recall}\; \Big)\]

Where:

  • precision is the positive observations (true positives) the model correctly identified from all the observations it labeled as positive (the true positives + the false positives).
  • recall is the positive observations (true positives) the model correctly identified from all the actual positive cases (the true positives + the false negatives).

The F2 score is the weighted harmonic mean of the precision and recall (given a threshold value). Unlike the F1 score, which gives equal weight to precision and recall, the F2 score gives more weight to recall than to precision. More weight should be given to recall for cases where False Negatives are considered worse than False Positives. For example, if your use case is to predict which customers will churn, you may consider False Negatives worse than False Positives. In this case, you want your predictions to capture all of the customers that will churn. Some of these customers may not be at risk for churning, but the extra attention they receive is not harmful. More importantly, no customers actually at risk of churning have been missed.

F2 equation:

\[F2 = 5 \;\Big(\; \frac{(precision) \; (recall)}{4\;precision + recall}\; \Big)\]

Where:

  • precision is the positive observations (true positives) the model correctly identified from all the observations it labeled as positive (the true positives + the false positives).
  • recall is the positive observations (true positives) the model correctly identified from all the actual positive cases (the true positives + the false negatives).
  • Accuracy: In binary classification, Accuracy is the number of correct predictions made as a ratio of all predictions made. In multiclass classification, the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.

A Driverless AI model will return probabilities, not predicted classes. To convert probabilities to predicted classes, a threshold needs to be defined. Driverless AI iterates over possible thresholds to calculate a confusion matrix for each threshold. It does this to find the maximum Accuracy value. Driverless AI’s goal is to continue increasing this maximum Accuracy.

Accuracy equation:

\[Accuracy = \Big(\; \frac{\text{number correctly predicted}}{\text{number of observations}}\; \Big)\]
  • Logloss: The logarithmic loss metric can be used to evaluate the performance of a binomial or multinomial classifier. Unlike AUC, which looks at how well a model can classify a binary target, logloss evaluates how close a model’s predicted values (uncalibrated probability estimates) are to the actual target value. For example, does a model tend to assign a high predicted value like .80 for the positive class, or does it show a poor ability to recognize the positive class and assign a lower predicted value like .50? Logloss can be any value greater than or equal to 0, with 0 meaning that the model correctly assigns a probability of 0% or 100%.

Binary classification equation:

\[Logloss = - \;\frac{1}{N} \sum_{i=1}^{N}w_i(\;y_i \ln(p_i)+(1-y_i)\ln(1-p_i)\;)\]

Multiclass classification equation:

\[Logloss = - \;\frac{1}{N} \sum_{i=1}^{N}\sum_{j=1}^{C}w_i\,(\;y_{i,j} \; \ln(p_{i,j})\;)\]

Where:

  • N is the total number of rows (observations) of your corresponding dataframe.
  • w is the per row user-defined weight (default is 1).
  • C is the total number of classes (C=2 for binary classification).
  • p is the predicted value (uncalibrated probability) assigned to a given row (observation).
  • y is the actual target value.
  • AUC (Area Under the Receiver Operating Characteristic Curve): This model metric is used to evaluate how well a binary classification model is able to distinguish between true positives and false positives. An AUC of 1 indicates a perfect classifier, while an AUC of .5 indicates a poor classifier whose performance is no better than random guessing.
H2O uses the trapezoidal rule to approximate the area under the ROC curve. (Tip: AUC is usually not the best metric for an imbalanced binary target because a high number of True Negatives can cause the AUC to look inflated. For an imbalanced binary target, we recommend AUCPR or MCC.)
  • AUCPR (Area under the Precision-Recall Curve): This model metric is used to evaluate how well a binary classification model is able to distinguish between precision recall pairs or points. These values are obtained using different thresholds on a probabilistic or other continuous-output classifier. AUCPR is an average of the precision-recall weighted by the probability of a given threshold.
The main difference between AUC and AUCPR is that AUC calculates the area under the ROC curve and AUCPR calculates the area under the Precision Recall curve. The Precision Recall curve does not care about True Negatives. For imbalanced data, a large quantity of True Negatives usually overshadow the effects of changes in other metrics like False Positives. The AUCPR will be much more sensitive to True Positives, False Positives, and False Negatives than AUC. As such, AUCPR is recommended over AUC for highly imbalanced data.
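
A rough sketch of how these classification scorers relate to standard implementations, using hypothetical labels and probabilities. The threshold sweep mirrors the behavior described above for MCC (and applies equally to the F scores and Accuracy), while Logloss, AUC, and AUCPR are computed directly from the probabilities; average_precision_score is used here as one common way to summarize the Precision-Recall curve:

```python
import numpy as np
from sklearn.metrics import (matthews_corrcoef, fbeta_score, log_loss,
                             roc_auc_score, average_precision_score)

# Hypothetical true labels and predicted probabilities for the positive class.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])
p      = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.9, 0.45, 0.6, 0.7, 0.05])

# Threshold sweep: convert probabilities to classes at each candidate
# threshold and keep the best MCC, mirroring the behavior described above.
thresholds = np.unique(p)
best_mcc = max(matthews_corrcoef(y_true, (p >= t).astype(int)) for t in thresholds)

# F0.5, F1, and F2 at a single illustrative threshold of 0.5.
y_class = (p >= 0.5).astype(int)
f05 = fbeta_score(y_true, y_class, beta=0.5)
f1  = fbeta_score(y_true, y_class, beta=1.0)
f2  = fbeta_score(y_true, y_class, beta=2.0)

# Probability-based scorers: Logloss, AUC, and AUCPR.
ll    = log_loss(y_true, p)
auc   = roc_auc_score(y_true, p)
aucpr = average_precision_score(y_true, p)

print(best_mcc, f05, f1, f2, ll, auc, aucpr)
```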

Scorer Best Practices - Regression

When deciding which scorer to use in a regression problem, some main questions to ask are:

  • Do you want your scorer sensitive to outliers?
  • What unit should the scorer be in?
Sensitive to Outliers

Certain scorers are more sensitive to outliers. When a scorer is sensitive to outliers, it means that it is important that the model predictions are never “very” wrong. For example, let’s say we have an experiment predicting number of days until an event. The graph below shows the absolute error in our predictions.

Absolute error in predictions

Usually our model is very good. We have an absolute error less than 1 day about 70% of the time. There is one instance, however, where our model did very poorly. We have one prediction that was 30 days off.

Instances like this will more heavily penalize scorers that are sensitive to outliers. If we do not care about these outliers in poor performance as long as we typically have a very accurate prediction, then we would want to select a scorer that is robust to outliers. We can see this reflected in the behavior of the scorers: MSE and RMSE.

 | MSE | RMSE
Outlier | 0.99 | 2.64
No Outlier | 0.80 | 1.0

Calculating the RMSE and MSE on our error data, the RMSE is more than twice as large as the MSE because RMSE is sensitive to outliers. If we remove the one outlier record from our calculation, RMSE drops down significantly.
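
The same sensitivity can be sketched on made-up error data (not the exact values in the table above) by comparing RMSE, which squares the errors, with MAE, which does not:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical absolute errors in days: mostly under a day, with one 30-day miss.
errors = np.array([0.5, 0.8, 0.2, 1.0, 0.6, 30.0])
target = np.zeros_like(errors)  # score the errors against a perfect prediction of 0

rmse = mean_squared_error(target, errors) ** 0.5
mae = mean_absolute_error(target, errors)
print(round(rmse, 2), round(mae, 2))  # RMSE is inflated far more by the outlier

# Remove the outlier and both metrics shrink, but RMSE changes the most.
rmse_no = mean_squared_error(target[:-1], errors[:-1]) ** 0.5
mae_no = mean_absolute_error(target[:-1], errors[:-1])
print(round(rmse_no, 2), round(mae_no, 2))
```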

Performance Units

Different scorers will show the performance of the Driverless AI experiment in different units. Let’s continue with our example where our target is to predict the number of days until an event. Some possible performance units are:

  • Same as target: The unit of the scorer is in days
    • ex: MAE = 5 means the model predictions are off by 5 days on average
  • Percent of target: The unit of the scorer is the percent of days
    • ex: MAPE = 10% means the model predictions are off by 10 percent on average
  • Square of target: The unit of the scorer is in days squared
    • ex: MSE = 25 means the model predictions are off by 5 days on average (square root of 25 = 5)
Comparison
Metric | Units | Sensitive to Outliers | Tip
R2 | scaled between 0 and 1 | No | use when you want performance scaled between 0 and 1
MSE | square of target | Yes |
RMSE | same as target | Yes |
RMSLE | log of target | Yes |
RMSPE | percent of target | Yes | use when target values are across different scales
MAE | same as target | No |
MAPE | percent of target | No | use when target values are across different scales
SMAPE | percent of target divided by 2 | No | use when target values are close to 0

Scorer Best Practices - Classification

When deciding which scorer to use in a classification problem, some main questions to ask are:

  • Do you want the scorer to evaluate the predicted probabilities or the classes that those probabilities can be converted to?
  • Is your data imbalanced?
Scorer Evaluates Probabilities or Classes

The final output of a Driverless AI model is a predicted probability that a record is in a particular class. The scorer you choose will either evaluate how accurate the probability is or how accurate the assigned class is from that probability.

Choosing this depends on the use of the Driverless AI model. Do we want to use the probabilities or do we want to convert those probabilities into classes? For example, if we are predicting whether a customer will churn, we may take the predicted probabilities and turn them into classes - customers who will churn vs customers who won’t churn. If we are predicting the expected loss of revenue, we will instead use the predicted probabilities (predicted probability of churn * value of customer).

If your use case requires a class assigned to each record, you will want to select a scorer that evaluates the model’s performance based on how well it classifies the records. If your use case will use the probabilities, you will want to select a scorer that evaluates the model’s performance based on the predicted probability.

Robust to Imbalanced Data

For certain use cases, positive classes may be very rare. In these instances, some scorers can be misleading. For example, if I have a use case where 99% of the records have Class = No, then a model which always predicts No will have 99% accuracy.

For these use cases, it is best to select a metric that does not include True Negatives, or that considers their relative size, like AUCPR or MCC.
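
The 99%-accuracy pitfall is easy to reproduce; a quick sketch with a synthetic 99%/1% target, using scorers mentioned above:

```python
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef, average_precision_score

# Synthetic target: roughly 99% of records are class 0.
rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)

# A "model" that always predicts the majority class.
always_no_class = np.zeros_like(y_true)
always_no_prob  = np.zeros(len(y_true))

print(accuracy_score(y_true, always_no_class))          # ~0.99, looks great
print(matthews_corrcoef(y_true, always_no_class))       # 0.0, no skill
print(average_precision_score(y_true, always_no_prob))  # ~0.01, no skill
```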

Comparison
Metric | Evaluation Based On | Tip
MCC | Class | good for imbalanced data
F1 | Class |
F0.5 | Class | good when you want to give more weight to precision
F2 | Class | good when you want to give more weight to recall
Accuracy | Class | highly interpretable
Logloss | Probability |
AUC | Class |
AUCPR | Class | good for imbalanced data

Experiment Graphs

This section describes the dashboard graphs that are displayed for running and completed experiments. These graphs are interactive. Hover over a point on the graph for more details about the point.

Binary Classification Experiments

For Binary Classification experiments, Driverless AI shows ROC Curves, a Precision-Recall graph, a Lift chart, and a Gains chart.

Experiment graphs
  • ROC: This shows the Receiver Operating Characteristic curve on validation data. The area under this curve is called AUC. The True Positive Rate (TPR) is the relative fraction of correct positive predictions, and the False Positive Rate (FPR) is the relative fraction of incorrect positive predictions. Each point corresponds to a classification threshold (e.g., YES if probability >= 0.3, else NO). For each threshold, there is a unique confusion matrix that represents the balance between TPR and FPR. In general, the most useful operating points are in the top left corner.
Hover over a point in the ROC curve to see the True Positive, True Negative, False Positive, False Negative, Threshold, FPR, TPR, Accuracy, F1, and MCC value for that point.
  • Precision-Recall: This shows the Precision-Recall curve on validation data. The area under this curve is called AUCPR.
  • Precision: correct positive predictions (TP) / all positives (TP + FP).
  • Recall: correct positive predictions (TP) / positive predictions (TP + FN).

Each point corresponds to a classification threshold (e.g., YES if probability >= 0.3, else NO). For each threshold, there is a unique confusion matrix that represents the balance between Recall and Precision. This Precision-Recall curve can be more insightful than the ROC curve for highly imbalanced datasets.

Hover over a point in this graph to see the True Positive, True Negative, False Positive, False Negative, Threshold, Recall, Precision, Accuracy, F1, and MCC value for that point.

  • Lift: This chart shows lift stats on validation data. For example, “How many times more observations of the positive target class are in the top predicted 1%, 2%, 10%, etc. (cumulative) compared to selecting observations randomly?” By definition, the Lift at 100% is 1.0.
Hover over a point in the Lift chart to view the quantile percentage and cumulative lift value for that point.
  • Gains: This shows Gains stats on validation data. For example, “What fraction of all observations of the positive target class are in the top predicted 1%, 2%, 10%, etc. (cumulative)?” By definition, the Gains at 100% are 1.0.
Hover over a point in the Gains chart to view the quantile percentage and cumulative gain value for that point.
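
A rough sketch of how the cumulative Gains and Lift described above can be computed from a set of validation predictions (hypothetical data; Driverless AI produces these charts itself):

```python
import numpy as np

def cumulative_gains_and_lift(y_true, y_score, quantile):
    """Gains: fraction of all positives captured in the top `quantile` of
    predictions. Lift: that fraction divided by `quantile` itself."""
    order = np.argsort(y_score)[::-1]            # sort by predicted score, descending
    top_n = int(np.ceil(quantile * len(y_true)))
    captured = y_true[order][:top_n].sum()
    gains = captured / y_true.sum()
    lift = gains / quantile
    return gains, lift

# Hypothetical labels and scores: higher scores are more likely to be positive.
rng = np.random.default_rng(1)
y_score = rng.random(1000)
y_true = (rng.random(1000) < y_score * 0.3).astype(int)

for q in (0.01, 0.10, 1.0):
    g, l = cumulative_gains_and_lift(y_true, y_score, q)
    print(f"top {q:.0%}: gains={g:.2f}, lift={l:.2f}")
# By definition, gains and lift at 100% are 1.0.
```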

Multiclass Classification Experiments

The ROC curve, Precision-Recall, Lift chart, and Gains chart are also shown for multiclass problems. Driverless AI does this by considering the multi-class problem as multiple one-vs-all problems. This method is known as micro-averaging (reference: http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#multiclass-settings).

For example, you may want to predict the species in the iris data. The predictions would look something like this:

class.Iris-setosa | class.Iris-versicolor | class.Iris-virginica
0.9628 | 0.021 | 0.0158
0.0182 | 0.3172 | 0.6646
0.0191 | 0.9534 | 0.0276

To create the ROC, Lift, and Gains chart, Driverless AI converts the results to 3 one-vs-all problems:

prob-setosa | actual-setosa | prob-versicolor | actual-versicolor | prob-virginica | actual-virginica
0.9628 | 1 | 0.021 | 0 | 0.0158 | 0
0.0182 | 0 | 0.3172 | 1 | 0.6646 | 0
0.0191 | 0 | 0.9534 | 1 | 0.0276 | 0

The result is 3 vectors of predicted and actual values for binomial problems. Driverless AI concatenates these 3 vectors together to compute the ROC curve, lift, and gains chart.

predicted = [0.9628, 0.0182, 0.0191, 0.021, 0.3172, 0.9534, 0.0158, 0.6646, 0.0276]
actual = [1, 0, 0, 0, 1, 1, 0, 0, 0]
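
The conversion can be written out directly. A sketch that rebuilds the vectors above from the three example rows (the actual classes are taken from the one-vs-all table):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Predicted class probabilities for the three rows shown above
# (columns: Iris-setosa, Iris-versicolor, Iris-virginica).
probs = np.array([
    [0.9628, 0.0210, 0.0158],
    [0.0182, 0.3172, 0.6646],
    [0.0191, 0.9534, 0.0276],
])
# Actual classes for the three rows: setosa, versicolor, versicolor.
actual = np.array([0, 1, 1])

# One-vs-all conversion: one binary (predicted, actual) column per class,
# then concatenate the columns, matching the vectors shown above.
one_hot = np.eye(3)[actual]
predicted = probs.T.ravel()
actual_binary = one_hot.T.ravel()

print(predicted.tolist())
print(actual_binary.astype(int).tolist())
print(roc_auc_score(actual_binary, predicted))  # micro-averaged ROC AUC
```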

Regression Experiments

An Actual vs. Predicted graph displays for Regression experiments. This shows the actual vs. predicted values on validation data. A small sample of values is displayed. A perfect model would form a diagonal line.

Hover over a point on the graph to view the Actual and Predicted values for that point.

Experiment graphs