# Interpreting a Model

Model interpretations can be run on a Driverless AI experiment or on the predictions created by an external model (that is, a model not created by Driverless AI).

Use the Interpret This Model button on a completed experiment page to interpret a Driverless AI model on original and transformed features. You can also click the MLI link from the top navigation menu to interpret either a Driverless AI model or an external model.

## Interpreting a Driverless AI Model

A completed Driverless AI model can be interpreted from either the Interpreted Models page or the Completed Experiment Page.

Note

• This release deprecates experiments run in version 1.8.9 and earlier, and MLI migration is not supported for those experiments. This means that you can't run new interpretations on a Driverless AI model built with version 1.8.9 or earlier, but you can still view interpretations that were built with those versions.

• MLI is not supported for unsupervised learning models.

• MLI is not supported for Image or multiclass Time Series experiments.

• MLI does not require an Internet connection to run on current models.

• To specify a port of a specific H2O instance for use by MLI, use the h2o_port config.toml setting. You can also specify an IP address for use by MLI with the h2o_ip setting.
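The two settings mentioned above can be set in config.toml; the values below are illustrative only:

```toml
# config.toml -- point MLI at a specific H2O instance (example values)
h2o_ip = "127.0.0.1"
h2o_port = 12348
```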

### Run Interpretations From Interpreted Models Page

The following steps describe how to run an interpretation from the Interpreted Models page.

1. Click the MLI link in the upper-right corner of the UI to view a list of interpreted models.

2. Click the New Interpretation button. The Interpretation Settings page is displayed.

3. Select a dataset to use for the interpretation. The selected dataset must contain the same columns as the training dataset used for the experiment.

4. Specify the Driverless AI model that you want to use for the interpretation. After you select a model, the Target Column used for the model is automatically selected.

5. Optionally specify which MLI recipes (or Explainers) to run. You can also change Explainer (recipe) specific settings when selecting which recipes to use for the interpretation.

6. Optionally specify any additional Interpretation Expert Settings to use when running this interpretation.

7. Optionally specify a weight column.

8. Optionally specify one or more dropped columns. Columns that were dropped when the model was created are automatically dropped for the interpretation.

9. Click the Launch MLI button.
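The requirement in step 3 — that the selected dataset contain the same columns as the training dataset — can be verified before launching the interpretation. A minimal sketch with pandas (the frames are inline stand-ins for your actual training and interpretation datasets):

```python
import pandas as pd

# Stand-ins for the experiment's training data and the dataset
# you intend to select for the interpretation.
train = pd.DataFrame({"age": [25, 40], "income": [50000, 82000], "default": [0, 1]})
mli_data = pd.DataFrame({"age": [31, 22], "income": [61000, 43000], "default": [1, 0]})

# Every training column must be present in the interpretation dataset.
missing = set(train.columns) - set(mli_data.columns)
assert not missing, f"MLI dataset is missing columns: {sorted(missing)}"
```

Running this check ahead of time avoids a failed interpretation launch due to a column mismatch.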

### Run Interpretation From Completed Experiment Page

The following steps describe how to run an interpretation from the Completed Experiment Page.

1. On the Completed Experiment page, click the Interpret This Model button.

2. Select a dataset to use for the interpretation. The selected dataset must contain the same columns as the training dataset used for the experiment.

3. Select one of the following options:

• With Default Settings: Run an interpretation using the default settings.

• With Custom Settings: Run an interpretation using custom settings. Selecting this option opens the Interpretation Settings page, where you can specify which MLI recipes (explainers) to use for the interpretation and change explainer-specific settings and interpretation expert settings. To run an interpretation with your specified custom settings, click the Launch MLI button.

The interpretation includes a summary of the interpretation, interpretations using the built Driverless AI model, and interpretations using surrogate models that are built on the predictions from the Driverless AI model. For information on the available plots, see Understanding the Model Interpretation Page.

The plots are interactive, and the logs and artifacts can be downloaded by clicking the Actions button.

## Interpreting Predictions From an External Model

Model Interpretation does not need to be run on a Driverless AI experiment. You can train an external model and run Model Interpretability on the predictions from the model. This can be done from the MLI page.

1. Click the MLI link in the upper-right corner of the UI to view a list of interpreted models.

2. Click the New Interpretation button.

3. Leave the Select Model option set to none.

4. Select the dataset that you want to use for the model interpretation. This must include a prediction column that was generated by the external model. If the dataset does not have predictions, then you can join the external predictions. An example showing how to do this in Python is available in the Run Model Interpretation on External Model Predictions section of the Credit Card Demo.

5. Specify a Target Column (actuals) and the Prediction Column (scores from the external model).

6. Optionally specify any additional MLI Expert Settings to use when running this interpretation.

7. Optionally specify a weight column.

8. Optionally specify one or more dropped columns. Columns that were dropped when the model was created are automatically dropped for the interpretation.

9. Click the Launch MLI button.

Note: When running interpretations on an external model, leave the Select Model option empty. That option is for selecting a Driverless AI model.
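If your dataset does not already contain the external model's scores, the join described in step 4 can be done with pandas. A minimal sketch, with illustrative column names (`id`, `default`, `model_pred`) standing in for your own:

```python
import pandas as pd

# Dataset used to train/score the external model (illustrative columns).
data = pd.DataFrame({"id": [1, 2, 3],
                     "age": [25, 40, 31],
                     "default": [0, 1, 0]})  # Target Column (actuals)

# Scores produced by the external model, keyed by the same row id.
preds = pd.DataFrame({"id": [1, 2, 3],
                      "model_pred": [0.12, 0.87, 0.33]})  # Prediction Column

# Join the predictions onto the dataset, then upload the result for MLI.
mli_input = data.merge(preds, on="id", how="left")
mli_input.to_csv("mli_input.csv", index=False)
```

When launching the interpretation, you would then specify `default` as the Target Column and `model_pred` as the Prediction Column.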

The generated interpretation includes the plots and explanations created using the surrogate models and a summary. For more information, see Understanding the Model Interpretation Page.

## Explainer Recipes

Driverless AI Machine Learning Interpretability comes with a number of out-of-the-box explainer recipes for model interpretation that can be enabled when running a new interpretation from the MLI page. For details about the interpretations these recipes generate, see Understanding the Model Interpretation Page; for explainer-specific expert settings, see Explainer (Recipes) Expert Settings. The following recipes are available:

• Absolute Permutation Feature Importance

• AutoDoc

• Disparate Impact Analysis

• Interpretability Data ZIP (Surrogate and Shapley Techniques)

• NLP Leave-one-covariate-out (LOCO)

• NLP Partial Dependence Plot

• NLP Tokenizer

• NLP Vectorizer + Linear Model (VLM) Text Feature Importance

• Original Feature Importance

• Partial Dependence Plot

• Relative Permutation Feature Importance

• Sensitivity Analysis

• Shapley Summary Plot for Original Features (Naive Shapley Method)

• Shapley Values for Original Features (Kernel SHAP Method)

• Shapley Values for Original Features (Naive Method)

• Shapley Values for Transformed Features

• Surrogate Decision Tree

• Surrogate Random Forest Importance

• Surrogate Random Forest Leave-one-covariate-out (LOCO)

• Surrogate Random Forest Partial Dependence Plot

• Transformed Feature Importance

• k-LIME / LIME-SUP

This recipe list is extensible, and users can create their own custom recipes. For more information, see MLI Custom Recipes.

## Interpretation Expert Settings

When interpreting from the MLI page, a variety of configuration options are available in the Interpretation Expert Settings panel that let you customize interpretations. Recipe-specific settings are also available for some recipes. Use the search bar to refine the list of settings or locate a specific setting.

For more information on each of these settings, see Interpretation Expert Settings. For explainer (recipe) specific expert settings, see Explainer (Recipes) Expert Settings.

Notes:

• The selection of available expert settings is determined by the type of model you want to interpret and the specified LIME method.

• Expert settings are not available for time-series models.

### Expert Settings from Recipes (Explainers)

For some recipes, such as the Driverless AI Partial Dependence, Disparate Impact Analysis (DIA), and Decision Tree (DT) Surrogate explainers, some settings can be toggled directly from the recipe page. In addition, enabling certain recipes, such as the Original Kernel SHAP explainer, adds new options to the expert settings.

For more information on explainer specific expert settings, see Explainer (Recipes) Expert Settings.