Model security flow

Overview

The flow of evaluating and analyzing the security of a Driverless AI (DAI) model (experiment) with a particular model attack type can be summarized in five sequential steps. Each step, in turn, is summarized in the sections below.

Step 1: Specify model's endpoint URL

As the first step in the model security flow, specify your model's endpoint URL (created after deploying your model in H2O MLOps).

Note

H2O Model Security only supports Driverless AI (DAI) models deployed in H2O MLOps.
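Before pointing H2O Model Security at an endpoint, it can help to confirm the URL accepts scoring requests. The sketch below assembles a JSON scoring request for a deployed model; the payload shape (`fields`/`rows`), the URL, and the column names are assumptions for illustration, not the documented H2O MLOps contract, so adjust them to match your deployment.

```python
import json

def build_scoring_request(endpoint_url, fields, rows):
    """Assemble a JSON scoring request for a deployed model endpoint.

    The {"fields": [...], "rows": [...]} shape is an assumed, common REST
    scoring convention -- verify it against your MLOps deployment before use.
    """
    payload = {"fields": fields, "rows": rows}
    return endpoint_url, json.dumps(payload)

# Hypothetical endpoint URL and feature columns:
url, body = build_scoring_request(
    "https://model.example.com/deployment/model/score",
    ["age", "income"],
    [[42, 55000]],
)
```

The returned `body` could then be POSTed to `url` (for example, with `requests.post(url, data=body, headers={"Content-Type": "application/json"})`) to verify the endpoint responds before starting the security flow.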

Step 2: Select a model attack type

As the second step in the model security flow, select a model attack type for your DAI model (for example, an Adversarial attack). H2O Model Security evaluates and analyzes the model's security against the selected attack type.

Step 3: Import a model's validation dataset

As the third step in the model security flow, import the model's validation dataset; H2O Model Security uses this dataset to test the model's security against the selected model attack type.

Note

The validation dataset must follow the same format as the training dataset used to train the model deployed in H2O MLOps.
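A quick way to catch format mismatches before importing is to compare the validation dataset's columns against the training dataset's. This is a minimal stand-in for the format requirement above, assuming "same format" means identical column names in identical order; the column names are illustrative.

```python
def matches_training_format(training_columns, validation_columns):
    """Return True when the validation dataset has the same columns,
    in the same order, as the training dataset.

    A minimal proxy for the format check described in the docs; it does
    not inspect column types or value ranges.
    """
    return list(training_columns) == list(validation_columns)

# Hypothetical schema from the training dataset:
training_cols = ["age", "income", "default"]

matches_training_format(training_cols, ["age", "income", "default"])  # same format
matches_training_format(training_cols, ["income", "age", "default"])  # reordered -> mismatch
```

If this check fails, reconcile the validation file with the training schema before importing it into H2O Model Security.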

Step 4: Specify the model attack settings

As the fourth step in the model security flow, define the settings of the selected model attack type.

Step 5: Analyze the impacts on attacked model

As the fifth and final step of the model security flow, evaluate and analyze the attack metrics generated after attacking the model with the selected model attack type. H2O Model Security presents these metrics in charts, stats cards, and confusion matrices to help you understand the attack's impact on the model.
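To make the metrics concrete, the sketch below computes two of the views mentioned above, an accuracy drop and a confusion matrix, from illustrative pre- and post-attack predictions. The labels and predictions are made up for the example; H2O Model Security computes these for you from the real attack run.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows = actual labels, columns = predicted labels."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(actual, pred)] for pred in labels] for actual in labels]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical binary-classification labels and predictions:
y_true     = [1, 0, 1, 1, 0, 1]
y_original = [1, 0, 1, 0, 0, 1]   # predictions before the attack
y_attacked = [0, 0, 1, 0, 1, 1]   # predictions after an adversarial attack

# A larger drop means the attack degraded the model more.
accuracy_drop = accuracy(y_true, y_original) - accuracy(y_true, y_attacked)
post_attack_cm = confusion_matrix(y_true, y_attacked, labels=[0, 1])
```

Comparing the pre- and post-attack confusion matrices side by side shows not just how much accuracy was lost, but which classes the attack pushed predictions toward.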
