MLI Overview

Driverless AI provides robust interpretability of machine learning models to explain modeling results in a human-readable format. In the Machine Learning Interpretability (MLI) view, Driverless AI employs a number of techniques and methodologies to interpret and explain the results of its models. Several charts are generated automatically (depending on experiment type), including K-LIME, Shapley, Variable Importance, Decision Tree Surrogate, Partial Dependence, Individual Conditional Expectation, Sensitivity Analysis, NLP Tokens, NLP LOCO, and more. Additionally, from this view you can download a CSV of LIME and Shapley reason codes (including Kernel SHAP-based Shapley values for the original features), as well as text and Python files of Decision Tree Surrogate model rules.

The techniques and methodologies used by Driverless AI for model interpretation can be extended with recipes (Python code snippets). For more information on custom recipes for MLI, see https://github.com/h2oai/driverlessai-recipes/tree/rel-1.9.1/explainers.
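
To give a sense of the shape of an MLI recipe, the skeleton below is modeled on the explainer templates in that repository. The module path, base class, class attributes, and method signatures are assumptions taken from the rel-1.9.1 templates and may change between releases, so treat the linked repository as authoritative.

    # Minimal custom MLI explainer skeleton (illustrative only; the import
    # path, base class, and signatures below are assumptions based on the
    # rel-1.9.1 explainer templates).
    from h2oaicore.mli.oss.byor.core.explainers import CustomExplainer
    from h2oaicore.mli.oss.byor.core.explanations import WorkDirArchiveExplanation

    class ExampleExplainer(CustomExplainer):
        _display_name = "Example Explainer"
        _description = "A do-nothing explainer skeleton."
        _regression = True  # experiment types this explainer supports
        _binary = True
        _explanation_types = [WorkDirArchiveExplanation]

        def setup(self, model, persistence, **kwargs):
            # Called before explain(); keeps handles to the model and the
            # working directory provided by the MLI framework.
            CustomExplainer.setup(self, model, persistence, **kwargs)

        def explain(self, X, y=None, explanations_types=None, **kwargs) -> list:
            # Compute and persist explanations for dataset X here, then
            # return the explanation objects that MLI should display.
            return []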

This chapter describes Machine Learning Interpretability (MLI) in Driverless AI for both regular and time-series experiments.

Note

Migration Information

  • Interpretations made in version 1.9.0 are supported in 1.9.x and later.

  • Interpretations made in version 1.8.x aren’t fully supported in 1.9.x and later; however, they can still be viewed and rerun.

Note

  • MLI is not supported for unsupervised learning models.

  • MLI is not supported for Image or multiclass Time Series experiments.

  • MLI does not require an Internet connection to run on current models.

  • To specify the port of a specific H2O instance for use by MLI, use the h2o_port setting in config.toml. You can also specify an IP address for use by MLI with the h2o_ip setting. A sample config.toml snippet follows this list.

  • Residuals (that is, the differences between observed and predicted values) can be used as targets in MLI surrogate models for the purpose of debugging models; a short sketch of this workflow follows this list. For more information, see Running Surrogate Models on Residuals.
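
For example, the following config.toml snippet sets both of the settings mentioned above (the IP address and port shown are placeholders; substitute your own):

    # Pin the H2O instance used by MLI to a specific address and port.
    # The values below are placeholders.
    h2o_ip = "127.0.0.1"
    h2o_port = 54321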
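
To illustrate the residuals workflow, the sketch below computes residuals outside of Driverless AI and writes them to a new dataset whose residual column can serve as the surrogate-model target. The column names and file paths are hypothetical:

    # Hypothetical sketch: build a residual target column for model debugging.
    # "actual" and "predicted" are assumed column names; adjust to your data.
    import pandas as pd

    scored = pd.read_csv("scored_dataset.csv")  # hypothetical input path
    scored["residual"] = scored["actual"] - scored["predicted"]

    # Upload this file to Driverless AI and interpret it with "residual" as
    # the target to run the MLI surrogate models on the residuals.
    scored.to_csv("scored_with_residuals.csv", index=False)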

Additional Resources

  • Click here to download our MLI cheat sheet.

  • The “An Introduction to Machine Learning Interpretability” book.

  • Click here to access the H2O.ai MLI Resources repository. This repository includes materials that illustrate applications or adaptations of various MLI techniques for practicing data scientists.

  • Click here to access the H2O.ai Machine Learning Interpretability custom recipes repository.

  • Click here to view our H2O Driverless AI Machine Learning Interpretability walkthrough video.