MLI Overview
Driverless AI provides robust interpretability of machine learning models, explaining modeling results in a human-readable format. In the Machine Learning Interpretability (MLI) view, Driverless AI employs a host of techniques and methodologies for interpreting and explaining the results of its models. A number of charts are generated automatically (depending on the experiment type), including K-LIME, Shapley, Variable Importance, Decision Tree Surrogate, Partial Dependence, Individual Conditional Expectation, Sensitivity Analysis, NLP Tokens, NLP LOCO, and more. Additionally, you can download a CSV of LIME and Shapley reason codes from this view.
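To give a sense of how one of these techniques works, the following is a minimal sketch of the decision tree surrogate idea: a shallow, interpretable tree is fit to the *predictions* of a complex model so that its splits approximate the model's behavior. This is an illustrative example using scikit-learn with a synthetic dataset, not Driverless AI's actual implementation.

```python
# Minimal sketch of a decision tree surrogate (assumes scikit-learn).
# The gradient boosting model below stands in for a complex model
# such as one produced by a Driverless AI experiment.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# "Black box" model whose behavior we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not the true labels,
# so the tree approximates the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The printed rules give a human-readable approximation of the model.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

Because the surrogate is kept shallow, its printed rules trade some fidelity for readability; Driverless AI applies the same trade-off when it generates its Decision Tree Surrogate chart.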
This chapter describes Machine Learning Interpretability (MLI) in Driverless AI for both regular and time-series experiments. Refer to the following sections for more information:
Additional Resources
Click here to download our MLI cheat sheet.
Click here to access the H2O.ai MLI Resources repository. This repo includes materials that illustrate applications or adaptations of various MLI techniques for practicing data scientists.
Click here to view our H2O Driverless AI Machine Learning Interpretability walkthrough video.
Limitations
This release deprecates experiments run in version 1.7.0 and earlier; MLI is not available for experiments from those versions.
MLI is not supported for multiclass Time Series experiments.
MLI does not require an Internet connection to run on current models.