What is H2O Model Analyzer?

H2O Model Analyzer is a unified, interactive framework for exploring vulnerabilities in predictive models and their applications. It lets users simulate what-if scenarios and analyze and evaluate a model's behavior and limitations (its robustness) under real-world changes. With minimal effort, users can probe machine learning models to discover failure edge cases and validate prior beliefs using local explanations, counterfactuals, and adversarial examples. It aims to improve a model's behavior by improving the data.

A data-centric approach

Background

Machine learning models can encounter unintentional adversity (for example, corrupted data) and intentional adversity (for example, maliciously manipulated data) that cause them to deliver incorrect predictions. Adversarial robustness refers to a model's resistance to being misled, whether incidentally or intentionally. When working with predictive models, there is therefore often a need to explore and analyze alternative possibilities for a model's decision. During this exploration and analysis, data scientists ask questions similar to the following:

  • How will the model behave if the features of the selected data point are perturbed in a particular way based on domain intuition (what-if, illustrated in the sketch after this list)?
  • Are predictions stable under imperceptible adversarial perturbations?
  • Is the model performing poorly for a subset of data?
  • Did the model perform poorly because of a labeling error?
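
To make the first two questions concrete, the following minimal sketch perturbs one feature of a selected data point and checks how stable the prediction is under small random perturbations. It is illustrative only, not Model Analyzer's API: the model, dataset, and feature index are stand-ins built with scikit-learn and NumPy.

  # Illustrative what-if and stability sketch; this is not Model Analyzer's API.
  # A stand-in classifier is probed by perturbing one feature of a selected
  # data point and by checking prediction stability under small random noise.
  import numpy as np
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier

  X, y = load_breast_cancer(return_X_y=True)
  model = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in for a prebuilt model

  point = X[0:1].copy()                      # the selected data point
  baseline = model.predict_proba(point)[0]

  # What-if: increase feature 3 by 10% based on domain intuition.
  perturbed = point.copy()
  perturbed[0, 3] *= 1.10
  what_if = model.predict_proba(perturbed)[0]

  # Stability: sample small perturbations around the point and count prediction flips.
  rng = np.random.default_rng(0)
  noise = rng.normal(scale=0.01 * X.std(axis=0), size=(100, X.shape[1]))
  flips = (model.predict(point + noise) != model.predict(point)[0]).mean()

  print("baseline probabilities:", baseline)
  print("what-if probabilities: ", what_if)
  print("fraction of perturbed copies with a flipped prediction:", flips)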

These questions help assess a predictive model's performance consistency throughout its operational domain. With that in mind, H2O Model Analyzer enables data scientists to discover edge cases and automatically correct incidental and intentional adversity.
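
The question about poor performance on a subset of the data can be explored by evaluating the model per data slice. The sketch below is again illustrative rather than Model Analyzer's actual workflow: it assumes a pandas DataFrame and slices a held-out set into quartiles of a single feature to surface subsets with poor accuracy.

  # Illustrative slice-evaluation sketch; this is not Model Analyzer's API.
  # Accuracy is computed per subset of held-out data to surface slices where
  # a stand-in model performs poorly.
  import pandas as pd
  from sklearn.datasets import load_breast_cancer
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler

  data = load_breast_cancer(as_frame=True)
  X_train, X_test, y_train, y_test = train_test_split(
      data.data, data.target, test_size=0.3, random_state=0
  )
  model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)  # stand-in model

  report = X_test.copy()
  report["correct"] = model.predict(X_test) == y_test

  # Slice on quartiles of one feature; any meaningful grouping column would do.
  report["slice"] = pd.qcut(X_test["mean radius"], q=4, labels=["q1", "q2", "q3", "q4"])

  per_slice_accuracy = report.groupby("slice", observed=True)["correct"].mean().sort_values()
  print(per_slice_accuracy)  # the lowest-accuracy slices warrant closer inspection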

Note
  • Model Analyzer requires a prebuilt model and its associated dataset to start an analysis.
  • New models can be registered, but only models deployed with H2O MLOps are currently supported.