Install on Ubuntu
-----------------

This section describes how to install the Driverless AI Docker image on Ubuntu. The installation steps vary depending on whether your system has GPUs or is CPU only.

Environment
~~~~~~~~~~~

+-------------------------+-------+------------+
| Operating System        | GPUs? | Min Memory |
+=========================+=======+============+
| Ubuntu with GPUs        | Yes   | 64 GB      |
+-------------------------+-------+------------+
| Ubuntu with CPUs        | No    | 64 GB      |
+-------------------------+-------+------------+

.. _install-on-ubuntu-with-gpus:

Install on Ubuntu with GPUs
~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Note**: Driverless AI is supported on Ubuntu 16.04 or later.

Open a Terminal and ssh to the machine that will run Driverless AI. Once you are logged in, perform the following steps.

1. Retrieve the Driverless AI Docker image from https://www.h2o.ai/download/. (Note that the contents of this Docker image include a CentOS kernel and CentOS packages.)

2. Install and run Docker on Ubuntu (if not already installed):

   .. code-block:: bash

      # Install and run Docker on Ubuntu
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      sudo apt-key fingerprint 0EBFCD88
      sudo add-apt-repository \
         "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      sudo apt-get update
      sudo apt-get install docker-ce
      sudo systemctl start docker

3. Install nvidia-docker2 (if not already installed). More information is available at https://github.com/NVIDIA/nvidia-docker/blob/master/README.md.

   .. code-block:: bash

      curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
        sudo apt-key add -
      distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
      curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
        sudo tee /etc/apt/sources.list.d/nvidia-docker.list
      sudo apt-get update

      # Install nvidia-docker2 and reload the Docker daemon configuration
      sudo apt-get install -y nvidia-docker2

4. Verify that the NVIDIA driver is up and running. If the driver is not up and running, log on to http://www.nvidia.com/Download/index.aspx?lang=en-us to get the latest NVIDIA Tesla V/P/K series driver:

   .. code-block:: bash

      nvidia-smi

5. Set up a directory for the version of Driverless AI on the host machine:

   .. code-block:: bash
      :substitutions:

      # Set up directory with the version name
      mkdir |VERSION-dir|

6. Change directories to the new folder, then load the Driverless AI Docker image inside the new directory:

   .. code-block:: bash
      :substitutions:

      # cd into the new directory
      cd |VERSION-dir|

      # Load the Driverless AI docker image
      docker load < dai-docker-ubi8-x86_64-|VERSION-long|.tar.gz

7. Enable persistence of the GPU. Note that this needs to be run once every reboot. Refer to the following for more information: http://docs.nvidia.com/deploy/driver-persistence/index.html.

   .. include:: enable-persistence.rst

8. Set up the data, log, license, and tmp directories on the host machine (within the new directory):

   .. code-block:: bash

      # Set up the data, log, license, and tmp directories on the host machine (within the new directory)
      mkdir data
      mkdir log
      mkdir license
      mkdir tmp

9. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.

10. Run ``docker images`` to find the image tag.

11. Start the Driverless AI Docker image and replace TAG below with the image tag. Depending on your install version, use the ``docker run --runtime=nvidia`` (>= Docker 19.03) or ``nvidia-docker`` (< Docker 19.03) command. An optional check for verifying the running container is sketched after these steps.

    Note that from version 1.10, the DAI Docker image runs with an internal ``tini`` that is equivalent to using ``--init`` from Docker. If both are enabled in the launch command, ``tini`` prints a (harmless) warning message. We recommend adding ``--shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384`` to the Docker launch command. However, if you plan to build :ref:`image auto model ` extensively, then ``--shm-size=4g`` is recommended.

    **Note**: Use ``docker version`` to check which version of Docker you are using.

    .. tabs::

       .. tab:: >= Docker 19.03

          .. code-block:: bash
             :substitutions:

             # Start the Driverless AI Docker image
             docker run --runtime=nvidia \
                --pid=host \
                --rm \
                --shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384 \
                -u `id -u`:`id -g` \
                -p 12345:12345 \
                -v `pwd`/data:/data \
                -v `pwd`/log:/log \
                -v `pwd`/license:/license \
                -v `pwd`/tmp:/tmp \
                h2oai/dai-ubi8-x86_64:|tag|

       .. tab:: < Docker 19.03

          .. code-block:: bash
             :substitutions:

             # Start the Driverless AI Docker image
             nvidia-docker run \
                --pid=host \
                --rm \
                --shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384 \
                -u `id -u`:`id -g` \
                -p 12345:12345 \
                -v `pwd`/data:/data \
                -v `pwd`/log:/log \
                -v `pwd`/license:/license \
                -v `pwd`/tmp:/tmp \
                h2oai/dai-ubi8-x86_64:|tag|

    Driverless AI will begin running::

       --------------------------------
       Welcome to H2O.ai's Driverless AI
       ---------------------------------

       - Put data in the volume mounted at /data
       - Logs are written to the volume mounted at /log/20180606-044258
       - Connect to Driverless AI on port 12345 inside the container
       - Connect to Jupyter notebook on port 8888 inside the container

12. Connect to Driverless AI with your browser:

    .. code-block:: bash

       http://Your-Driverless-AI-Host-Machine:12345
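If you want to confirm that the container started and that the GPUs are visible from inside it, the following optional check is a minimal sketch using standard Docker commands; it is not part of the official installation steps. Replace ``<container-id>`` with the ID reported by ``docker ps``.

.. code-block:: bash

   # List running containers and note the ID of the Driverless AI container
   docker ps

   # Tail the container logs to confirm Driverless AI finished starting up
   docker logs --tail 50 <container-id>

   # Check that the NVIDIA driver and GPUs are visible from inside the container
   docker exec <container-id> nvidia-smi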
.. _install-on-ubuntu-cpus-only:

Install on Ubuntu with CPUs
~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Note**: Driverless AI is supported on Ubuntu 16.04 or later.

This section describes how to install and start the Driverless AI Docker image on Ubuntu. Note that this uses ``docker`` and not ``nvidia-docker``. GPU support will not be available.

**Watch the installation video** `here `__. Note that some of the images in this video may change between releases, but the installation steps remain the same.

Open a Terminal and ssh to the machine that will run Driverless AI. Once you are logged in, perform the following steps.

1. Retrieve the Driverless AI Docker image from https://www.h2o.ai/download/.

2. Install and run Docker on Ubuntu (if not already installed):

   .. code-block:: bash

      # Install and run Docker on Ubuntu
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      sudo apt-key fingerprint 0EBFCD88
      sudo add-apt-repository \
         "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      sudo apt-get update
      sudo apt-get install docker-ce
      sudo systemctl start docker

3. Set up a directory for the version of Driverless AI on the host machine:

   .. code-block:: bash
      :substitutions:

      # Set up directory with the version name
      mkdir |VERSION-dir|

4. Change directories to the new folder, then load the Driverless AI Docker image inside the new directory:

   .. code-block:: bash
      :substitutions:

      # cd into the new directory
      cd |VERSION-dir|

      # Load the Driverless AI docker image
      docker load < dai-docker-ubi8-x86_64-|VERSION-long|.tar.gz

5. Set up the data, log, license, and tmp directories on the host machine (within the new directory):

   .. code-block:: bash

      # Set up the data, log, license, and tmp directories
      mkdir data
      mkdir log
      mkdir license
      mkdir tmp

6. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container (see the sketch after these steps).

7. Run ``docker images`` to find the new image tag.

8. Start the Driverless AI Docker image. Note that GPU support will not be available.

   Note that from version 1.10, the DAI Docker image runs with an internal ``tini`` that is equivalent to using ``--init`` from Docker. If both are enabled in the launch command, ``tini`` prints a (harmless) warning message. We recommend adding ``--shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384`` to the Docker launch command. However, if you plan to build :ref:`image auto model ` extensively, then ``--shm-size=4g`` is recommended.

   .. code-block:: bash
      :substitutions:

      # Start the Driverless AI Docker image
      docker run \
         --pid=host \
         --rm \
         --shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384 \
         -u `id -u`:`id -g` \
         -p 12345:12345 \
         -v `pwd`/data:/data \
         -v `pwd`/log:/log \
         -v `pwd`/license:/license \
         -v `pwd`/tmp:/tmp \
         -v /etc/passwd:/etc/passwd:ro \
         -v /etc/group:/etc/group:ro \
         h2oai/dai-ubi8-x86_64:|tag|

   Driverless AI will begin running::

      --------------------------------
      Welcome to H2O.ai's Driverless AI
      ---------------------------------

      - Put data in the volume mounted at /data
      - Logs are written to the volume mounted at /log/20180606-044258
      - Connect to Driverless AI on port 12345 inside the container
      - Connect to Jupyter notebook on port 8888 inside the container

9. Connect to Driverless AI with your browser:

   .. code-block:: bash

      http://Your-Driverless-AI-Host-Machine:12345
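The data directory mapping works the same way in both the GPU and CPU setups: anything placed in ``data`` on the host appears under ``/data`` inside the container because of the ``-v `pwd`/data:/data`` volume mount. The following is a minimal sketch of that workflow; the file path ``/path/to/my_dataset.csv`` and the ``<container-id>`` placeholder are examples, not files or values shipped with Driverless AI.

.. code-block:: bash

   # Copy a dataset into the mounted data directory on the host
   # (run from inside the version directory created earlier)
   cp /path/to/my_dataset.csv ./data/

   # Confirm the file is visible inside the running container at /data
   docker exec <container-id> ls -l /data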
Stopping the Docker Image
~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: stop-docker.rst

Upgrading the Docker Image
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: upgrade-docker.rst
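Because the ``data``, ``log``, ``license``, and ``tmp`` directories live on the host rather than inside the container, it is straightforward to snapshot them before upgrading. The following is a generic sketch of that precaution, not a substitute for the steps above; the backup location is an example.

.. code-block:: bash

   # From inside the current version directory, after stopping the running container:
   # copy the host-mounted directories to a dated backup location (example path)
   mkdir -p ~/dai-backup-$(date +%Y%m%d)
   cp -a data log license tmp ~/dai-backup-$(date +%Y%m%d)/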