Create an adversarial attack
Overview
An adversarial attack lets you evaluate the security of a Driverless AI model (experiment) against adversarially modified input data.
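At a high level, an adversarial attack perturbs input values and checks whether the deployed model's predictions change. The sketch below is a minimal illustration of that idea only; the endpoint URL, payload shape, and column names are placeholder assumptions, not the H2O MLOps scoring API or H2O Model Security's implementation.

```python
# Minimal sketch of the adversarial-attack idea: perturb a feature value
# and check whether the deployed model's prediction changes.
# The endpoint URL and JSON payload shape below are placeholders, NOT the
# actual H2O MLOps scoring contract.
import copy
import requests

ENDPOINT_URL = "https://example.h2o.ai/model/score"  # hypothetical endpoint

def score(row: dict) -> dict:
    """Send one row to the (hypothetical) scoring endpoint and return its response."""
    response = requests.post(ENDPOINT_URL, json={"rows": [row]}, timeout=30)
    response.raise_for_status()
    return response.json()

original = {"LIMIT_BAL": 20000, "AGE": 24, "PAY_0": 2}  # example row
attacked = copy.deepcopy(original)
attacked["PAY_0"] = -1  # adversarially modified value

print("original prediction:", score(original))
print("attacked prediction:", score(attacked))
```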
Instructions
To create an adversarial attack, consider the following instructions:
- Click Menu.
- In the Endpoint URL box, enter the model's endpoint URL.
Note: H2O Model Security only supports Driverless AI (DAI) models deployed in H2O MLOps.
- In the Model attack type list, select Adversarial attack.
- Click Browse..., or drag and drop the validation dataset file.
Note: The validation dataset must follow the same format as the training dataset used to train the model deployed in H2O MLOps. Its values should reflect your adversarially modified values (see the dataset preparation sketch after these steps).
- Click Upload data.
- In the Number of features to attack box, enter the number of features to attack per row.
- In the Columns to exclude (, separate) box, enter the columns to exclude from the validation dataset, separated by commas.
- In the Target column box, enter the name of the target column (the column to predict) in the validation dataset.
- Click Begin attack.
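For the upload step above, the file must keep the training schema while carrying your adversarially modified values. The pandas sketch below shows one way such a file might be prepared; the file names, column name, and perturbation rule are illustrative assumptions, not part of H2O Model Security.

```python
# Sketch: build an adversarially modified validation CSV that keeps the
# same columns as the training data. Column names and the perturbation
# rule are illustrative only.
import pandas as pd

validation = pd.read_csv("validation.csv")  # same schema as the training dataset
attacked = validation.copy()

# Example perturbation: nudge one numeric feature by 10% for every row.
attacked["LIMIT_BAL"] = attacked["LIMIT_BAL"] * 1.10

attacked.to_csv("validation_adversarial.csv", index=False)
```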
Note
- To learn about each setting of an adversarial attack, see Settings: Adversarial attack.
- H2O Model Security offers an array of metrics in the form of charts, stats cards, and confusion matrices to help you understand a completed adversarial attack (a minimal confusion-matrix example follows these notes). To learn more, see Metrics: Adversarial attack.
- To learn about the flow of an attack in H2O Model Security, see Model security flow.
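As a rough illustration of what a confusion matrix captures in this context, the snippet below compares true target values against predictions made on attacked rows. It is a generic scikit-learn example with made-up values, not H2O Model Security's own metric code.

```python
# Illustration only: how often do predictions on the attacked rows still
# match the true target values? Values here are made up.
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 1, 0, 1]            # target column from the validation dataset
y_pred_attacked = [0, 0, 1, 0, 0]   # model predictions on the adversarial rows

print(confusion_matrix(y_true, y_pred_attacked))
```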
Feedback
- Send feedback about H2O Model Security to cloud-feedback@h2o.ai