.. _install-on-nvidia-dgx:

Install on NVIDIA GPU Cloud/NGC Registry
----------------------------------------

Driverless AI is supported on the following NVIDIA DGX products, and the installation steps for each platform are the same.

- `NVIDIA GPU Cloud `__
- `NVIDIA DGX-1 `__
- `NVIDIA DGX-2 `__
- `NVIDIA DGX Station `__

Environment
~~~~~~~~~~~

+----------------------------+------+------------+--------------+
| Provider                   | GPUs | Min Memory | Suitable for |
+============================+======+============+==============+
| NVIDIA GPU Cloud           | Yes  |            | Serious use  |
+----------------------------+------+------------+--------------+
| NVIDIA DGX-1/DGX-2         | Yes  | 128 GB     | Serious use  |
+----------------------------+------+------------+--------------+
| NVIDIA DGX Station         | Yes  | 64 GB      | Serious use  |
+----------------------------+------+------------+--------------+

Installing the NVIDIA NGC Registry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Note**: These installation instructions assume that you are running on an NVIDIA DGX machine. Driverless AI is only available in the NGC registry for DGX machines.

1. Log in to your NVIDIA GPU Cloud account at https://ngc.nvidia.com/registry. (Note that NVIDIA Compute is no longer supported by NVIDIA.)

2. In the **Registry > Partners** menu, select **h2oai-driverless**.

   .. image:: ../images/ngc_select_dai.png
      :align: center

3. At the bottom of the screen, select one of the H2O Driverless AI tags to retrieve the pull command.

   .. image:: ../images/ngc_select_tag.png
      :align: center

4. On your NVIDIA DGX machine, open a command prompt and use the specified pull command to retrieve the Driverless AI image. For example:

   .. code-block:: bash

      docker pull nvcr.io/nvidia_partners/h2o-driverless-ai:latest

5. Set up a directory for the version of Driverless AI on the host machine:

   .. code-block:: bash
      :substitutions:

      # Set up directory with the version name
      mkdir |VERSION-dir|

6. Set up the data, log, license, and tmp directories on the host machine:

   .. code-block:: bash
      :substitutions:

      # cd into the directory associated with the selected version of Driverless AI
      cd |VERSION-dir|

      # Set up the data, log, license, and tmp directories on the host machine
      mkdir data
      mkdir log
      mkdir license
      mkdir tmp

7. At this point, you can copy data into the data directory on the host machine. The data will be visible inside the Docker container.

8. Enable persistence of the GPU. Note that this only needs to be run once. Refer to the following for more information: http://docs.nvidia.com/deploy/driver-persistence/index.html.

   .. include:: enable-persistence.rst

9. Run ``docker images`` to find the new image tag.

10. Start the Driverless AI Docker image, replacing TAG below with the image tag. Depending on your Docker version, use the ``docker run --runtime=nvidia`` (Docker 19.03 or later) or ``nvidia-docker`` (earlier than Docker 19.03) command. Note that from version 1.10, the DAI Docker image runs with an internal ``tini``, which is equivalent to using ``--init`` from Docker; if both are enabled in the launch command, ``tini`` prints a (harmless) warning message. We recommend adding ``--shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384`` to the Docker launch command. However, if you plan to build :ref:`image auto model ` extensively, then ``--shm-size=4g`` is recommended instead.

    **Note**: Use ``docker version`` to check which version of Docker you are using.

    .. tabs::

       .. tab:: >= Docker 19.03

          .. code-block:: bash
             :substitutions:

             # Start the Driverless AI Docker image
             docker run --runtime=nvidia \
               --pid=host \
               --rm \
               --shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384 \
               -u `id -u`:`id -g` \
               -p 12345:12345 \
               -v `pwd`/data:/data \
               -v `pwd`/log:/log \
               -v `pwd`/license:/license \
               -v `pwd`/tmp:/tmp \
               h2oai/dai-ubi8-x86_64:|tag|

       .. tab:: < Docker 19.03

          .. code-block:: bash
             :substitutions:

             # Start the Driverless AI Docker image
             nvidia-docker run \
               --pid=host \
               --rm \
               --shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384 \
               -u `id -u`:`id -g` \
               -p 12345:12345 \
               -v `pwd`/data:/data \
               -v `pwd`/log:/log \
               -v `pwd`/license:/license \
               -v `pwd`/tmp:/tmp \
               h2oai/dai-ubi8-x86_64:|tag|

    Driverless AI will begin running::

        --------------------------------
        Welcome to H2O.ai's Driverless AI
        ---------------------------------

        - Put data in the volume mounted at /data
        - Logs are written to the volume mounted at /log/20180606-044258
        - Connect to Driverless AI on port 12345 inside the container
        - Connect to Jupyter notebook on port 8888 inside the container

11. Connect to Driverless AI with your browser:

    .. code-block:: bash

       http://Your-Driverless-AI-Host-Machine:12345

Stopping Driverless AI
~~~~~~~~~~~~~~~~~~~~~~

Use Ctrl+C to stop Driverless AI.

Upgrading Driverless AI
~~~~~~~~~~~~~~~~~~~~~~~

The steps for upgrading Driverless AI on an NVIDIA DGX system are similar to the installation steps.

.. include:: upgrade-warning.frag

**Note**: Use Ctrl+C to stop Driverless AI if it is still running.

Requirements
''''''''''''

As of 1.7.0, CUDA 9 is no longer supported. Your host environment must have CUDA 10.0 or later with NVIDIA drivers >= 440.82 installed (GPU only). Driverless AI ships with its own CUDA libraries, but the driver must exist in the host environment. Go to https://www.nvidia.com/Download/index.aspx to get the latest NVIDIA Tesla V/P/K series driver.

Upgrade Steps
'''''''''''''

1. On your NVIDIA DGX machine, create a directory for the new Driverless AI version.

2. Copy the data, log, license, and tmp directories from the previous Driverless AI directory into the new Driverless AI directory.

3. Run ``docker pull nvcr.io/h2oai/h2oai-driverless-ai:latest`` to retrieve the latest Driverless AI version.

4. Start the Driverless AI Docker image.

5. Connect to Driverless AI with your browser at http://Your-Driverless-AI-Host-Machine:12345.
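Steps 1 and 2 of the upgrade can be sketched as a short shell script. This is a minimal sketch only: the version directory names (``dai-1.10.4``, ``dai-1.10.5``) are hypothetical placeholders, and the line that simulates the previous install's layout exists only so the sketch is self-contained — on a real DGX machine, those directories were created during the original installation.

```shell
#!/bin/sh
# Hypothetical version directory names; substitute your actual
# Driverless AI versions.
OLD_DIR="dai-1.10.4"
NEW_DIR="dai-1.10.5"

# (Demonstration only) simulate the layout left behind by a
# previous install; skip this line on a real machine.
mkdir -p "$OLD_DIR/data" "$OLD_DIR/log" "$OLD_DIR/license" "$OLD_DIR/tmp"

# Step 1: create a directory for the new Driverless AI version.
mkdir -p "$NEW_DIR"

# Step 2: copy the data, log, license, and tmp directories from
# the previous version directory into the new one.
for d in data log license tmp; do
  cp -r "$OLD_DIR/$d" "$NEW_DIR/"
done

# Step 3 would then pull the latest image (requires NGC access):
# docker pull nvcr.io/h2oai/h2oai-driverless-ai:latest
```

Because the experiment data, logs, and license live on the host rather than inside the container, copying these four directories is all that is needed to carry state forward to the new version.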