Task 9: Interpret the model
In this task, you will generate interpretability insights to analyze key aspects of the trained model, such as feature importance and other interpretability metrics. These insights help you understand which features have the most influence on the model's predictions and how the model makes its decisions.
To generate the interpretability insights for the experiment using the test dataset, run the following command:
experiment_interpretation = dai.mli.create(experiment, test)
The result will be an interpretation object that includes visual and analytical insights.
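For context, a minimal end-to-end sketch is shown below. It assumes the driverlessai Python client is installed and connected as dai; the server address, credentials, and object keys are placeholders, and the experiment and test dataset objects correspond to the ones created in the earlier tasks of this tutorial. Exact behavior (for example, whether the call blocks until the interpretation finishes) may vary by client version.

import driverlessai

# Connect to the Driverless AI server (placeholder address and credentials).
dai = driverlessai.Client(
    address="http://localhost:12345",
    username="username",
    password="password",
)

# Retrieve the experiment and test dataset created in earlier tasks
# (the keys below are placeholders for your own object keys).
experiment = dai.experiments.get("EXPERIMENT_KEY")
test = dai.datasets.get("TEST_DATASET_KEY")

# Generate the interpretability insights for the experiment
# using the test dataset.
experiment_interpretation = dai.mli.create(experiment, test)

# Basic details of the resulting interpretation object.
print(experiment_interpretation.key)
print(experiment_interpretation.name)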