Installing Driverless AI

For the best (and intended-as-designed) experience, install Driverless AI on modern data center hardware with GPUs and CUDA support. Use Pascal or Volta GPUs with maximum GPU memory for best results. (Note that the older K80 and M60 GPUs available in EC2 are supported and very convenient, but not as fast.)

Driverless AI requires at least 10 GB of free disk space to run and will stop working if less than 10 GB is available. You should also have ample system memory (64 GB or more) and free disk space (at least 30 GB, or 10x your dataset size, whichever is larger).
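These minimums can be checked up front on a Linux host. The snippet below is a small sketch (the 10 GB and 64 GB figures come from the requirements above):

```shell
# Sketch: report free disk space and total memory on a Linux host,
# so you can compare against the minimums before installing.
free_disk_gb=$(df -Pk . | awk 'NR==2 {print int($4 / 1024 / 1024)}')
total_mem_gb=$(awk '/MemTotal/ {print int($2 / 1024 / 1024)}' /proc/meminfo)
echo "Free disk: ${free_disk_gb} GB (10 GB required, 30 GB+ recommended)"
echo "Memory:    ${total_mem_gb} GB (64 GB+ recommended)"
```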

To simplify cloud installation, Driverless AI is provided as an AMI.

To simplify local installation, Driverless AI is provided as a Docker image. For the best performance, including GPU support, use nvidia-docker. For a lower-performance experience without GPUs, use regular docker (with the same docker image).
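A quick way to tell which runtime a machine can use is to check whether nvidia-docker is on the PATH. This is just a sketch, assuming a POSIX shell with Docker already installed:

```shell
# Sketch: prefer nvidia-docker when present, otherwise fall back to
# plain docker (same image, lower performance, no GPU support).
if command -v nvidia-docker >/dev/null 2>&1; then
    DOCKER_RUN="nvidia-docker run"
else
    DOCKER_RUN="docker run"
fi
echo "Launching containers with: ${DOCKER_RUN}"
```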

These installation steps assume that you have a license key for Driverless AI. For information on how to obtain a license key for Driverless AI, contact sales@h2o.ai.

Quick-Start Tables by Environment

Use the following tables for Cloud, Server, and Desktop to find the right setup instructions for your environment.

Cloud

Refer to the following for more information about instance types:

Provider          Instance Type    Num GPUs  Suitable for     Refer to Section
NVIDIA GPU Cloud                             Serious use      Install on NVIDIA GPU/DGX
AWS               p2.xlarge        1         Experimentation  Install on AWS
                  p2.8xlarge       8         Serious use
                  p2.16xlarge      16        Serious use
                  p3.2xlarge       1         Experimentation
                  p3.8xlarge       4         Serious use
                  p3.16xlarge      8         Serious use
                  g3.4xlarge       1         Experimentation
                  g3.8xlarge       2         Experimentation
                  g3.16xlarge      4         Serious use
Azure             Standard_NV6     1         Experimentation  Install on Azure
                  Standard_NV12    2         Experimentation
                  Standard_NV24    4         Serious use
                  Standard_NC6     1         Experimentation
                  Standard_NC12    2         Experimentation
                  Standard_NC24    4         Serious use
Google Compute                                                Install on Google Compute

Server

Operating System    GPUs?  Min Mem  Refer to Section
NVIDIA DGX-1        Yes    128 GB   Install on NVIDIA GPU/DGX
Ubuntu 16.04        Yes    64 GB    Install on Ubuntu with GPUs
Ubuntu              No     64 GB    Install on Ubuntu
RHEL                Yes    64 GB    Install on RHEL with GPUs
RHEL                No     64 GB    Install on RHEL
IBM Power (Minsky)  Yes    64 GB    Contact sales@h2o.ai

Desktop

Operating System    GPU Support?  Min Mem  Suitable for     Refer to Section
NVIDIA DGX Station  Yes           64 GB    Serious use      Install on NVIDIA GPU/DGX
Mac OS X            No            16 GB    Experimentation  Install on Mac OS X
Windows 10 Pro      No            16 GB    Experimentation  Install on Windows 10 Pro
Linux               (see the Server table above)

Install on NVIDIA GPU/DGX

Driverless AI is supported on NVIDIA DGX products, and the installation steps for each platform are the same.

  1. Log in to your NVIDIA DGX account at https://compute.nvidia.com/registry.
  2. Click the nvidia_partners link in the Registry menu and select h2o-driverless-ai.
_images/dgx_select_h2odai.png
  3. At the bottom of the screen, select the H2O Driverless AI tag and retrieve the pull command.
_images/dgx_select_tag.png
  4. On your NVIDIA DGX machine, open a command prompt and use the specified pull command to retrieve the Driverless AI image. For example:
docker pull nvcr.io/nvidia_partners/h2o-driverless-ai:1.0.5
  5. Load the Driverless AI Docker image:
sudo docker load < driverless-ai-docker-runtime-latest-release.gz
  6. Set up the data, log, license, and tmp directories on the host machine:
# Set up the data, log, license, and tmp directories on the host machine
mkdir data
mkdir log
mkdir license
mkdir tmp
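The same four directories can be created in one command; mkdir -p also makes the step safe to re-run:

```shell
# Equivalent one-liner; -p avoids errors if a directory already exists
mkdir -p data log license tmp
```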
  7. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.
  8. Start the Driverless AI Docker image:
nvidia-docker run \
   --rm \
   -u `id -u`:`id -g` \
   -p 12345:12345 \
   -p 9090:9090 \
   -v `pwd`/data:/data \
   -v `pwd`/log:/log \
   -v `pwd`/license:/license \
   -v `pwd`/tmp:/tmp \
   opsh2oai/h2oai-runtime

Driverless AI will begin running:

---------------------------------
Welcome to H2O.ai's Driverless AI
---------------------------------
   version: X.Y.Z

- Put data in the volume mounted at /data
- Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
- Connect to Driverless AI on port 12345 inside the container
- Connect to Jupyter notebook on port 8888 inside the container
  9. Connect to Driverless AI with your browser:
http://Your-Driverless-AI-Host-Machine:12345

Install on AWS

  1. Log in to your AWS account at https://aws.amazon.com.
  2. In the upper right corner of the Amazon Web Services page, make sure that the location drop-down is US East (N. Virginia).
_images/ami_location_dropdown.png
  3. Select the EC2 option under the Compute section to open the EC2 Dashboard.
_images/ami_select_ec2.png
  4. Click the Launch Instance button under the Create Instance section.
_images/ami_launch_instance_button.png
  5. Under Community AMIs, search for h2oai, and then select the version that you want to launch.
_images/ami_select_h2oai_ami.png
  6. On the Choose an Instance Type page, select GPU compute in the Filter by dropdown. This will ensure that your Driverless AI instance will run on GPUs. Select a GPU compute instance from the available options. (We recommend at least 32 vCPUs.) Click the Next: Configure Instance Details button.
_images/ami_choose_instance_type.png
  7. Specify the Instance Details that you want to configure. In most cases, the default values are sufficient. Click Next: Add Storage.
  8. Specify the Storage Device settings. Note again that Driverless AI requires 10 GB to run and will stop working if less than 10 GB is available. The machine should have a minimum of 30 GB of disk space. Click Next: Add Tags.
_images/ami_add_storage.png
  9. If desired, add a unique Tag name to identify your instance. Click Next: Configure Security Group.
  10. Add the following security rules to enable SSH access to Driverless AI and to (optionally) enable access to H2O Flow, then click Review and Launch.
Type             Protocol  Port Range  Source              Description
SSH              TCP       22          Anywhere 0.0.0.0/0
Custom TCP Rule  TCP       12345       Anywhere 0.0.0.0/0  Launch DAI
Custom TCP Rule  TCP       54321       Anywhere 0.0.0.0/0  Optional access to H2O Flow
_images/ami_add_security_rules.png
  11. Review the configuration, and then click Launch.
  12. A popup will appear prompting you to select a key pair. This is required in order to SSH into the instance. You can select your existing key pair or create a new one. Be sure to accept the acknowledgement, then click Launch Instances to start the new instance.
_images/ami_select_key_pair.png
  13. Upon successful completion, a message will display informing you that your instance is launching. Click the View Instances button to see information about the instance, including the IP address. The Connect button on this page provides information on how to SSH into your instance.
  14. Open a Terminal window and SSH into the IP address of the AWS instance. Replace the DNS name below with your instance DNS.
ssh -i "mykeypair.pem" ubuntu@ec2-34-230-6-230.compute-1.amazonaws.com
  15. At this point, you can copy data into the data directory on the host machine using scp. (Note that the data folder already exists.) For example:
scp <data_file>.csv ubuntu@ec2-34-230-6-230.compute-1.amazonaws.com:/home/ubuntu/data

The data will be visible inside the Docker container.

  16. Connect to Driverless AI with your browser:
http://Your-Driverless-AI-Host-Machine:12345

Stopping the EC2 Instance

The EC2 instance will continue to run even when you close the aws.amazon.com portal. To stop the instance:

  1. On the EC2 Dashboard, click the Running Instances link under the Resources section.
  2. Select the instance that you want to stop.
  3. In the Actions drop down menu, select Instance State > Stop.
  4. A confirmation page will display. Click Yes, Stop to stop the instance.

Install on Azure

  1. Log in to your Azure portal at https://portal.azure.com, and click the New button.
  2. Search for Deep Learning in the Marketplace, and select Deep Learning Virtual Machine:
_images/azure_search_for_deep_learning.png
  3. At the next screen, click Create. This launches the Deep Learning Virtual Machine creation process.
  4. On the Basics tab:
     1. Enter a name for the VM.
     2. Select Linux for the OS type.
     3. Enter the name that you will use when connecting to the machine through SSH.
     4. Enter and confirm a password that will be used when connecting to the machine through SSH.
     5. Specify the payment method.
     6. Enter a unique name for the resource group.
     7. Specify the VM region.

Click OK when you are done.

_images/azure_basics_tab.png
  5. On the Settings tab, select your virtual machine size. Specify the HDD disk type, and select a configuration. We recommend a minimum of 24 vCPUs. Also note that Driverless AI requires 10 GB of free space in order to run and will stop working if less than 10 GB is available. We recommend a minimum of 30 GB of disk space. Click OK when you are done.
_images/azure_vm_size.png
  6. The Summary tab performs a validation on the specified settings and will report back any errors. When the validation passes successfully, click OK.
_images/azure_validation_passed.png
  7. Click Create to create the VM.
_images/azure_create_vm.png
  8. After the VM is created, the next step is to configure Inbound Security Rules. On the left navigation, select Dashboard, then select the newly created VM (with Network Security Group appended to the name). Select the Inbound Security Rules tab.
_images/azure_inbound_security_rules_tab.png
  9. Click Add at the top to add a new inbound security rule. An inbound security rule is required for port 12345 to access Driverless AI. An inbound rule is optional for port 54321 to access H2O Flow. Accept the defaults for all values except for the Destination port ranges field. Change this to 12345. Specify a unique name for the rule. Optionally enter a description, and then click OK. (Perform the same steps to add a rule for port 54321 if desired.)
_images/azure_new_inbound_security_rule.png
  10. After the new Inbound Security Rule is added, select Resource Groups from the left navigation. Select your Driverless AI VM to view the IP address of your newly created machine.
  11. Open a terminal window and ssh into the machine running the VM. Optionally run pwd to retrieve your current location in the VM, and optionally run nvidia-smi to verify that the NVIDIA driver is running.
  12. Once you are logged in to the VM, use wget to retrieve the latest Driverless AI version.
wget https://s3-us-west-2.amazonaws.com/h2o-internal-release/docker/driverless-ai-docker-runtime-latest-release.gz
  13. Set up the data, log, license, and tmp directories on the host machine:
# Set up the data, log, license, and tmp directories on the host machine
mkdir data
mkdir log
mkdir license
mkdir tmp
  14. At this point, you can copy data into the data directory on the host machine using scp. For example:
scp <data_file>.csv <username>@<vm_address>:/home/<username>/data

The data will be visible inside the Docker container.

  15. Load the Docker image:
sudo docker load < driverless-ai-docker-runtime-latest-release.gz
  16. Start the Driverless AI Docker image:
sudo nvidia-docker run \
   --rm \
   -u `id -u`:`id -g` \
   -p 12345:12345 \
   -p 9090:9090 \
   -v `pwd`/data:/data \
   -v `pwd`/log:/log \
   -v `pwd`/license:/license \
   -v `pwd`/tmp:/tmp \
   opsh2oai/h2oai-runtime

Driverless AI will begin running:

---------------------------------
Welcome to H2O.ai's Driverless AI
---------------------------------
   version: X.Y.Z

- Put data in the volume mounted at /data
- Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
- Connect to Driverless AI on port 12345 inside the container
- Connect to Jupyter notebook on port 8888 inside the container
  17. Connect to Driverless AI with your browser:
http://Your-Driverless-AI-Host-Machine:12345

Stopping the Azure Instance

The Azure instance will continue to run even when you close the Azure portal. To stop the instance:

  1. Click the Virtual Machines left menu item.
  2. Select the checkbox beside your DriverlessAI virtual machine.
  3. On the right side of the row, click the … button, then select Stop. (Note that you can then restart this by selecting Start.)
_images/azure_stop_vm.png

Install on Google Compute

  1. In your browser, log in to the Google Compute Engine Console at https://console.cloud.google.com/.
  2. In the left navigation panel, select Compute Engine > VM Instances.
_images/gce_newvm_instance.png
  3. Click Create Instance.
_images/gce_create_instance.png
  4. Specify the following at a minimum:
  • A unique name for this instance.
  • The desired zone. Note that not all zones and user accounts can select zones with GPU instances. Refer to the following for information on how to add GPUs: https://cloud.google.com/compute/docs/gpus/.
  • A supported OS, for example Ubuntu 16.04. Be sure to also increase the disk size of the OS image to be 64 GB.

Click Create at the bottom of the form when you are done. This creates the new VM instance.

  5. Create a Firewall rule for Driverless AI. On the Google Cloud Platform left navigation panel, select VPC network > Firewall rules. Specify the following settings:
  • (Optional) Specify a unique name for this instance.
  • Change the Targets dropdown to All instances in the network.
  • Specify the Source IP range to be 0.0.0.0/0.
  • Under Protocols and Ports, specify the following: tcp:12345.

Click Create at the bottom of the form when you are done.

  6. On the VM Instances page, SSH to the new VM Instance by selecting Open in Browser Window from the SSH dropdown.
  7. H2O provides a script for you to run in your VM instance. Open an editor in the VM instance (for example, vi). Enter the following text and save the script as install.sh.
apt-get -y update && \
apt-get -y --no-install-recommends install \
curl \
apt-utils \
python-software-properties \
software-properties-common

add-apt-repository -y ppa:graphics-drivers/ppa
add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

apt-get update
apt-get install -y nvidia-384
apt-get install -y docker-ce

wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

mkdir ~/tmp
mkdir ~/log
mkdir ~/data
mkdir -p ~/jupyter/notebooks
mkdir ~/scripts
mkdir ~/license
mkdir ~/demo
  8. Type the following commands to run the install script.
chmod +x install.sh
sudo ./install.sh
  9. (Optional) If you are using CPUs, the script does not install Docker CE. You can install and run it using the following:
sudo apt-get install docker-ce
sudo apt-get install nvidia-modprobe
  10. Change directories to /tmp. Install the nvidia-docker deb file, then reboot.
cd /tmp
sudo dpkg -i nvidia-docker_1.0.1-1_amd64.deb

# reboot
sudo reboot
  11. Verify that nvidia-smi is working by running the nvidia-smi command. If it fails, then run the following to install the correct driver.
sudo apt-get install -y nvidia-384
  12. Add your Google Compute user name to the docker group.
sudo usermod -aG docker <username>
  13. Download the Driverless AI image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0).
wget https://s3-us-west-2.amazonaws.com/h2o-internal-release/docker/driverless-ai-docker-runtime-rel-X.Y.Z.gz
  14. Load the Driverless AI Docker image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0).
sudo docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz
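Since X.Y.Z appears in both the download and load commands, a small shell variable keeps the two in sync. This is just a sketch; 1.0.0 below is a placeholder for your actual release version:

```shell
# Set the version once and reuse it in both commands; 1.0.0 is a
# placeholder for your actual release version.
VERSION=1.0.0
IMAGE="driverless-ai-docker-runtime-rel-${VERSION}.gz"
# then: wget https://s3-us-west-2.amazonaws.com/h2o-internal-release/docker/$IMAGE
# and:  sudo docker load < $IMAGE
echo "$IMAGE"
```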
  15. Start the Driverless AI Docker image with nvidia-docker. Note that if you are using Docker CE on CPUs, then replace nvidia-docker run below with docker run:
# Start the Driverless AI Docker image
nvidia-docker run \
    --rm \
    -u `id -u`:`id -g` \
    -p 12345:12345 \
    -p 9090:9090 \
    -v `pwd`/data:/data \
    -v `pwd`/log:/log \
    -v `pwd`/license:/license \
    -v `pwd`/tmp:/tmp \
    opsh2oai/h2oai-runtime

Driverless AI will begin running:

---------------------------------
Welcome to H2O.ai's Driverless AI
---------------------------------
   version: X.Y.Z

- Put data in the volume mounted at /data
- Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
- Connect to Driverless AI on port 12345 inside the container
- Connect to Jupyter notebook on port 8888 inside the container
  16. Connect to Driverless AI with your browser:
http://Your-Driverless-AI-Host-Machine:12345

Stopping the GCE Instance

The Google Compute Engine instance will continue to run even when you close the portal. To stop the instance:

  1. On the VM Instances page, click on the VM instance that you want to stop.
  2. Click Stop at the top of the page.
  3. A confirmation page will display. Click Stop to stop the instance.

Install on Ubuntu with GPUs

Open a Terminal and ssh to the machine that will run Driverless AI. Once you are logged in, perform the following steps.

  1. Retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/.
  2. Install Docker on Ubuntu (if not already installed):
# Install Docker on Ubuntu
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce
  3. Install nvidia-docker on Ubuntu (if not already installed):
# Install nvidia-docker on Ubuntu
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
dpkg -i /tmp/nvidia-docker*.deb
rm /tmp/nvidia-docker*.deb
  4. Verify that the NVIDIA driver is up and running. If the driver is not up and running, log on to http://www.nvidia.com/Download/index.aspx?lang=en-us to get the latest NVIDIA Tesla V/P/K series driver.
nvidia-smi
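If the command fails, a small guard (a sketch, not part of the official steps) makes the state of the driver explicit instead of just erroring out:

```shell
# Sketch: report NVIDIA driver status explicitly
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi || echo "nvidia-smi is installed but the driver is not responding"
else
    echo "nvidia-smi not found; install the NVIDIA driver first"
fi
```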
  5. Load the Driverless AI Docker image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0).
# Load the Driverless AI docker image
docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz
  6. Set up the data, log, license, and tmp directories on the host machine:
# Set up the data, log, license, and tmp directories on the host machine
mkdir data
mkdir log
mkdir license
mkdir tmp
  7. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.
  8. Start the Driverless AI Docker image with nvidia-docker:
# Start the Driverless AI Docker image
nvidia-docker run \
    --rm \
    -u `id -u`:`id -g` \
    -p 12345:12345 \
    -p 9090:9090 \
    -v `pwd`/data:/data \
    -v `pwd`/log:/log \
    -v `pwd`/license:/license \
    -v `pwd`/tmp:/tmp \
    opsh2oai/h2oai-runtime

Driverless AI will begin running:

---------------------------------
Welcome to H2O.ai's Driverless AI
---------------------------------
   version: X.Y.Z

- Put data in the volume mounted at /data
- Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
- Connect to Driverless AI on port 12345 inside the container
- Connect to Jupyter notebook on port 8888 inside the container
  9. Connect to Driverless AI with your browser:
http://Your-Driverless-AI-Host-Machine:12345

Install on Ubuntu

This section describes how to install and start the Driverless AI Docker image on Ubuntu. Note that this uses regular Docker (CE) and not NVIDIA Docker. GPU support will not be available.

  1. Open a Terminal and ssh to the machine that will run Driverless AI. Once you are logged in, perform the following steps.
  2. Retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/.
  3. Install Docker on Ubuntu (if not already installed):
# Install Docker on Ubuntu
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce
  4. Load the Driverless AI Docker image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0):
# Load the Driverless AI Docker image
docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz
  5. Set up the data, log, license, and tmp directories on the host machine:
# Set up the data, log, license, and tmp directories
mkdir data
mkdir log
mkdir license
mkdir tmp
  6. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.
  7. Start the Driverless AI Docker image:
# Start the Driverless AI Docker image
docker run \
    --rm \
    -u `id -u`:`id -g` \
    -p 12345:12345 \
    -p 9090:9090 \
    -v `pwd`/data:/data \
    -v `pwd`/log:/log \
    -v `pwd`/license:/license \
    -v `pwd`/tmp:/tmp \
    opsh2oai/h2oai-runtime

Driverless AI will begin running:

---------------------------------
Welcome to H2O.ai's Driverless AI
---------------------------------
   version: X.Y.Z

- Put data in the volume mounted at /data
- Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
- Connect to Driverless AI on port 12345 inside the container
- Connect to Jupyter notebook on port 8888 inside the container
  8. Connect to Driverless AI with your browser:
http://Your-Driverless-AI-Host-Machine:12345

Install on RHEL with GPUs

This section describes how to install and start the Driverless AI Docker image on RHEL systems with GPUs. Note that the provided nvidia-docker rpm is for x86_64 machines. nvidia-docker has limited support for ppc64le machines; more information is available in the nvidia-docker documentation.

Note: As of this writing, Driverless AI has only been tested on RHEL version 7.4.

Open a Terminal and ssh to the machine that will run Driverless AI. Once you are logged in, perform the following steps.

  1. Retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/.
  2. Install and start Docker EE on RHEL (if not already installed). Follow the instructions on https://docs.docker.com/engine/installation/linux/docker-ee/rhel/.
  3. Install nvidia-docker and the nvidia-docker plugin on RHEL (if not already installed):
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
sudo rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm
sudo systemctl start nvidia-docker
  4. Verify that the NVIDIA driver is up and running. If the driver is not up and running, log on to http://www.nvidia.com/Download/index.aspx?lang=en-us to get the latest NVIDIA Tesla V/P/K series driver.
nvidia-docker run --rm nvidia/cuda nvidia-smi
  5. Load the Driverless AI Docker image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0).
# Load the Driverless AI docker image
docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz
  6. Set up the data, log, license, and tmp directories on the host machine:
# Set up the data, log, license, and tmp directories on the host machine
mkdir data
mkdir log
mkdir license
mkdir tmp
  7. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.
  8. Start the Driverless AI Docker image with nvidia-docker:
# Start the Driverless AI Docker image
nvidia-docker run \
    --rm \
    -u `id -u`:`id -g` \
    -p 12345:12345 \
    -p 9090:9090 \
    -v `pwd`/data:/data \
    -v `pwd`/log:/log \
    -v `pwd`/license:/license \
    -v `pwd`/tmp:/tmp \
    opsh2oai/h2oai-runtime

Driverless AI will begin running:

---------------------------------
Welcome to H2O.ai's Driverless AI
---------------------------------
   version: X.Y.Z

- Put data in the volume mounted at /data
- Logs are written to the volume mounted at /log/YYYYMMDD-HHMMSS
- Connect to Driverless AI on port 12345 inside the container
- Connect to Jupyter notebook on port 8888 inside the container
  9. Connect to Driverless AI with your browser at http://Your-Driverless-AI-Host-Machine:12345.

Install on RHEL

This section describes how to install and start the Driverless AI Docker image on RHEL. Note that this uses Docker EE and not NVIDIA Docker. GPU support will not be available.

Note: As of this writing, Driverless AI has only been tested on RHEL version 7.4.

  1. Install and start Docker EE on RHEL (if not already installed). Follow the instructions on https://docs.docker.com/engine/installation/linux/docker-ee/rhel/.
  2. On the machine that is running Docker EE, retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/
  3. Load the Driverless AI Docker image, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0):
# Load the Driverless AI Docker image
docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz
  4. Set up the data, log, license, and tmp directories.
$ mkdir data
$ mkdir log
$ mkdir license
$ mkdir tmp
  5. Copy data into the data directory on the host. The data will be visible inside the Docker container at /<user-home>/data.
  6. Start the Driverless AI Docker image with docker. GPU support will not be available.
$ docker run \
   --rm \
   -u `id -u`:`id -g` \
   -p 12345:12345 \
   -p 9090:9090 \
   -v `pwd`/data:/data \
   -v `pwd`/log:/log \
   -v `pwd`/license:/license \
   -v `pwd`/tmp:/tmp \
   opsh2oai/h2oai-runtime
  7. Connect to Driverless AI with your browser at http://Your-Driverless-AI-Host-Machine:12345.

Install on Mac OS X

This section describes how to install and start the Driverless AI Docker image on Mac OS X. Note that this uses regular Docker and not NVIDIA Docker. GPU support will not be available.

  1. Retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/
  2. Download and run Docker for Mac from https://docs.docker.com/docker-for-mac/install
  3. Adjust the amount of memory given to Docker to be at least 10 GB. Driverless AI won’t run at all with less than 10 GB of memory. You can optionally adjust the number of CPUs given to Docker. You will find the controls by clicking on (Docker Whale)->Preferences->Advanced as shown in the following screenshots. (Don’t forget to Apply the changes after setting the desired memory value.)
_images/macosx_docker_menu_bar.png _images/macosx_docker_advanced_preferences.png
  4. With Docker running, open a Terminal. Navigate to the location of your downloaded Driverless AI and enter the following command, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0).
$ docker load < driverless-ai-docker-runtime-rel-X.Y.Z.gz
  5. Set up the data, log, license, and tmp directories.
$ mkdir data
$ mkdir log
$ mkdir license
$ mkdir tmp
  6. Copy data into the data directory on the host. The data will be visible inside the Docker container at /data.
  7. Start the Driverless AI Docker image with docker. GPU support will not be available.
$ docker run \
   --rm \
   -u `id -u`:`id -g` \
   -p 12345:12345 \
   -p 9090:9090 \
   -v `pwd`/data:/data \
   -v `pwd`/log:/log \
   -v `pwd`/license:/license \
   -v `pwd`/tmp:/tmp \
   opsh2oai/h2oai-runtime
  8. Connect to Driverless AI with your browser at http://localhost:12345.

Install on Windows 10 Pro

Note: Currently, Driverless AI is only supported on Windows 10 Pro.

This section describes how to install and start the Driverless AI Docker image on a Windows 10 Pro machine. Note that this uses regular Docker and not NVIDIA Docker. GPU support will not be available.

  1. Retrieve the Driverless AI package from https://www.h2o.ai/driverless-ai-download/.
  2. Download, install, and run Docker for Windows from https://docs.docker.com/docker-for-windows/install/. You can verify that Docker is running by typing docker version in a terminal (such as Windows PowerShell). Note that you may have to reboot after installation.
  3. Before running Driverless AI, you must:
  • Enable shared access to the C drive. Driverless AI will not be able to see your local data if this is not set.
  • Adjust the amount of memory given to Docker to be at least 10 GB. Driverless AI won’t run at all with less than 10 GB of memory.
  • Optionally adjust the number of CPUs given to Docker.

You can adjust these settings by clicking on the Docker whale in your taskbar (look for hidden tasks, if necessary), then selecting Settings > Shared Drives and Settings > Advanced as shown in the following screenshots. Don’t forget to Apply the changes after setting the desired memory value. (Docker will restart.) Note that if you cannot make changes, stop Docker and then start Docker again by right clicking on the Docker icon on your desktop and selecting Run as Administrator.

_images/windows_docker_menu_bar.png _images/windows_shared_drive_access.png _images/windows_docker_advanced_preferences.png
  4. With Docker running, open a PowerShell terminal. Navigate to the location of your downloaded Driverless AI and enter the following command, replacing X.Y.Z below with your Driverless AI Docker image version (for example, 1.0.0).
C:\User>docker load -i .\driverless-ai-docker-runtime-rel-X.Y.Z.gz
  5. Set up the data, log, license, and tmp directories.
C:\User>md data
C:\User>md log
C:\User>md license
C:\User>md tmp
  6. Copy data into the data directory. The data will be visible inside the Docker container at /data.
  7. Start the Driverless AI Docker image. Be sure to replace path_to_ below with the entire path to the location of the folders that you created (for example, c:/Users/user-name/driverlessai_folder/data). Note that this is regular Docker, not NVIDIA Docker. GPU support will not be available.
C:\User>docker run --rm -p 12345:12345 -p 9090:9090 -v c:/path_to_data:/data -v c:/path_to_log:/log -v c:/path_to_license:/license -v c:/path_to_tmp:/tmp opsh2oai/h2oai-runtime
  8. Connect to Driverless AI with your browser at http://localhost:12345.