Create an adversarial attack

Overview

An adversarial attack enables you to evaluate the security of a Driverless AI model (experiment) against adversarially modified inputs.
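To make the idea concrete, the minimal sketch below (plain NumPy, with a hypothetical stand-in model rather than H2O's actual attack logic) shows what an adversarial attack probes: whether small, targeted changes to a few feature values can flip a model's prediction.

```python
# Illustrative sketch only: H2O Model Security runs the attack for you.
# This toy example shows the idea the attack tests: small, targeted
# changes to a few feature values can flip a model's prediction.
import numpy as np

def toy_model(x):
    # Hypothetical stand-in for a deployed model: predicts 1 when the
    # weighted sum of features exceeds a threshold.
    weights = np.array([0.8, -0.5, 0.3])
    return int(x @ weights > 0.5)

row = np.array([1.0, 0.2, 0.4])
print("original prediction:", toy_model(row))      # -> 1

# Perturb one feature (cf. the "Number of features to attack" setting).
adversarial_row = row.copy()
adversarial_row[1] += 1.0  # adversarially modified value
print("adversarial prediction:", toy_model(adversarial_row))  # -> 0
```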

Instructions

To create an adversarial attack, follow the instructions below:

  1. Click Menu.
  2. In the Endpoint URL box, enter the model's endpoint URL (see the connectivity check after these instructions).
    Note

    H2O Model Security only supports Driverless AI (DAI) models deployed in H2O MLOps.

  3. In the Model attack type list, select Adversarial attack.
  4. Click Browse..., or drag and drop the validation dataset file.
    Note

    The validation dataset must follow the same format as the training dataset used to train the model deployed in H2O MLOps; see the pre-upload schema check after these instructions. Values in this dataset should reflect your adversarially modified values.

  5. Click Upload data.
  6. In the Number of features to attack box, enter the number of features to attack per row.
  7. In the Columns to exclude (comma-separated) box, enter the columns to exclude from the validation dataset, separated by commas.
  8. In the Target column box, enter the name of the target column (the column the model predicts) in the validation dataset.
  9. Click Begin attack.
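Before using the endpoint URL from step 2 in an attack, you may want to confirm that the endpoint responds. The sketch below is a hypothetical smoke test: the URL, column names, and the fields/rows payload shape are assumptions based on typical H2O MLOps scoring requests, so verify them against your own deployment.

```python
# Optional smoke test (hypothetical URL and column names): confirm the
# endpoint responds before starting an attack. The fields/rows payload
# shape is an assumption based on typical H2O MLOps scoring requests;
# verify it against your own deployment.
import requests

ENDPOINT_URL = "https://model.example.com/model/score"  # your MLOps endpoint

payload = {
    "fields": ["feature_1", "feature_2"],
    "rows": [["1.0", "0.5"]],
}
response = requests.post(ENDPOINT_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```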
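For the validation dataset note in step 4, a quick local check like the following (hypothetical file names) can confirm that the dataset's columns match the training schema before you upload it.

```python
# Pre-upload check (hypothetical file names): confirm the validation
# dataset has the same columns as the training dataset used to train
# the DAI model deployed in H2O MLOps.
import pandas as pd

train = pd.read_csv("train.csv")            # dataset used to train the model
validation = pd.read_csv("validation.csv")  # adversarially modified values

missing = set(train.columns) - set(validation.columns)
extra = set(validation.columns) - set(train.columns)
if missing or extra:
    raise ValueError(f"Schema mismatch. Missing: {missing}, extra: {extra}")
print("Validation dataset matches the training schema.")
```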