MLI Overview

Driverless AI provides robust interpretability of machine learning models to explain modeling results in a human-readable format. In the Machine Learning Interpretability (MLI) view, Driverless AI employs a variety of techniques and methodologies for interpreting and explaining the results of its models. A number of charts are generated automatically (depending on the experiment type), including K-LIME, Shapley, Variable Importance, Decision Tree Surrogate, Partial Dependence, Individual Conditional Expectation, Sensitivity Analysis, NLP Tokens, NLP LOCO, and more. Additionally, you can download CSVs of LIME, Shapley, and Original (Kernel SHAP) Shapley reason codes, as well as text and Python files of Decision Tree Surrogate model rules, from this view.
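To build intuition for how one of these techniques produces reason codes, the following is a minimal, self-contained sketch of the K-LIME idea using scikit-learn. It is illustrative only and is not Driverless AI's implementation: the synthetic dataset, the gradient boosting stand-in for a Driverless AI model, and the cluster count are all assumptions.

    # Illustrative K-LIME-style sketch (not Driverless AI's implementation).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import Ridge

    X, y = make_regression(n_samples=500, n_features=5, random_state=0)

    # Stand-in for a fitted Driverless AI model: any complex regressor.
    complex_model = GradientBoostingRegressor(random_state=0).fit(X, y)
    preds = complex_model.predict(X)

    # Step 1: partition the rows into local regions with k-means.
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Step 2: fit an interpretable linear surrogate to the complex model's
    # predictions within each cluster.
    for c in np.unique(clusters):
        mask = clusters == c
        surrogate = Ridge().fit(X[mask], preds[mask])

        # Per-row reason codes: each feature's local contribution is its
        # surrogate coefficient times the feature value for that row.
        row = X[mask][0]
        contributions = surrogate.coef_ * row
        top = int(np.argmax(np.abs(contributions)))
        print(f"cluster {c}: surrogate R^2 = "
              f"{surrogate.score(X[mask], preds[mask]):.2f}, "
              f"strongest reason code = feature {top}")

The downloadable LIME reason-code CSV reports per-row contributions in this spirit for each prediction.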

The techniques and methodologies used by Driverless AI for model interpretation can be extended with recipes (Python code snippets). For more information on custom recipes for MLI, see https://github.com/h2oai/driverlessai-recipes/tree/rel-1.9.1/explainers.
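For orientation, a custom explainer recipe is a Python class that plugs into MLI. The skeleton below is an untested sketch of that pattern; the module path, base-class name, class attributes, and method signatures are assumptions based on the public recipes repository linked above, which should be treated as the authoritative source for templates.

    # Untested skeleton of a custom MLI explainer recipe. The module path,
    # base-class name, attributes, and signatures are assumptions; see the
    # driverlessai-recipes explainers repository for maintained templates.
    from h2oaicore.mli.oss.byor.core.explainers import CustomExplainer


    class ExampleExplainer(CustomExplainer):
        """Minimal placeholder explainer."""

        _display_name = "Example Explainer"  # name shown in the MLI UI
        _description = "Skeleton of a custom MLI explainer recipe."
        _regression = True                   # supported problem types
        _binary = True

        def setup(self, model, persistence, **kwargs):
            # Keep handles to the fitted model and to the persistence layer
            # used to write explanation artifacts.
            super().setup(model, persistence, **kwargs)

        def explain(self, X, y=None, explanations_types=None, **kwargs):
            # Compute explanations here (for example, score X with the model
            # and write results via the persistence layer), then return the
            # explanation objects to display in MLI.
            return []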

This chapter describes Machine Learning Interpretability (MLI) in Driverless AI for both regular and time-series experiments. Refer to the sections that follow for more information.

Additional Resources

  • Click here to download our MLI cheat sheet.

  • The “An Introduction to Machine Learning Interpretability” book.

  • Click here to access the H2O.ai MLI Resources repository. This repo includes materials that illustrate applications or adaptations of various MLI techniques for practicing data scientists.

  • Click here to access the H2O.ai Machine Learning Interpretability custom recipes repository.

  • Click here to view our H2O Driverless AI Machine Learning Interpretability walkthrough video.

Notes

  • This release deprecates experiments run in Driverless AI 1.8.9 and earlier; MLI migration is not supported for experiments from those versions.

  • MLI is not supported for Image or multiclass Time Series experiments.

  • MLI does not require an Internet connection to run on current models.