Driverless AI is supported on the following NVIDIA DGX products, and the installation steps for each platform are the same.

  1. Log in to your NVIDIA DGX account at
  2. In the Registry menu, select one of the h2oai-driverless-ai options. Note that one registry is for CUDA 8 and the other is for CUDA 9.
  3. At the bottom of the screen, select one of the H2O Driverless AI tags to retrieve the pull command.
  4. On your NVIDIA DGX machine, open a command prompt and use the specified pull command to retrieve the Driverless AI image. For example:
docker pull
  5. Set up the data, log, license, and tmp directories on the host machine:
# Set up the data, log, license, and tmp directories on the host machine
mkdir data
mkdir log
mkdir license
mkdir tmp
  6. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.
  7. Start the Driverless AI Docker image:
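The staging step above can be sketched as follows. This is a minimal example, assuming you run it from the directory that holds the four folders; `sample_train.csv` is a hypothetical stand-in for your real dataset file:

```shell
# Sketch: stage a dataset so it appears inside the container at /data.
# sample_train.csv is a hypothetical placeholder for a real dataset.
mkdir -p data log license tmp
echo "id,target" > sample_train.csv   # stand-in for a real dataset file
cp sample_train.csv data/
ls data                               # visible at /data inside the container
```

Because `pwd`/data is bind-mounted to /data in the run command below, anything copied here is immediately visible to Driverless AI.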
nvidia-docker run \
   --rm \
   -u `id -u`:`id -g` \
   -p 12345:12345 \
   -p 54321:54321 \
   -p 9090:9090 \
   -v `pwd`/data:/data \
   -v `pwd`/log:/log \
   -v `pwd`/license:/license \
   -v `pwd`/tmp:/tmp \

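A note on the ``-u `id -u`:`id -g` `` option above: it runs the container process with your host user and group IDs, so files written into the mounted data, log, license, and tmp directories are owned by you rather than by root. A quick sketch of what the option expands to:

```shell
# The -u `id -u`:`id -g` flag passes your host UID and GID to the container,
# so files written to the bind-mounted volumes keep your ownership.
echo "Container will run as $(id -u):$(id -g)"
```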
Driverless AI will begin running:

Welcome to H2O.ai's Driverless AI
   version: X.Y.Z

- Put data in the volume mounted at /data
- Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
- Connect to Driverless AI on port 12345 inside the container
- Connect to Jupyter notebook on port 8888 inside the container
  8. Connect to Driverless AI with your browser:
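As a sketch, the URL to open follows from the port published with `-p 12345:12345` above. Using `localhost` assumes your browser runs on the DGX machine itself; otherwise substitute the machine's hostname or IP:

```shell
# Sketch: construct the web UI address from the published port.
# localhost is an assumption; replace it with the DGX host's name or IP
# when connecting from another machine.
HOST=localhost
PORT=12345   # published above with -p 12345:12345
echo "Open http://${HOST}:${PORT} in your browser"
```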