Linux TAR SH¶
The Driverless AI software is available for use in pure user-mode environments as a self-extracting TAR SH archive. This form of installation does not require a privileged user to install or to run.
This artifact has the same compatibility matrix as the RPM and DEB packages combined; it is simply packaged differently. See those sections for a full list of supported environments.
The installation steps assume that you have a valid license key for Driverless AI. For information on how to obtain one, visit https://www.h2o.ai/products/h2o-driverless-ai/. When you first log in to the Driverless AI UI, you are prompted to paste in the license key.
Note
To ensure that AutoDoc pipeline visualizations are generated correctly on native installations, installing fontconfig is recommended.
Requirements¶
Red Hat 7/Red Hat 8 or Ubuntu 16.04/Ubuntu 18.04/Ubuntu 20.04/Ubuntu 22.04
NVIDIA drivers >= 471.68 recommended (GPU only). Note that if you are using K80 GPUs, the minimum required NVIDIA driver version is 450.80.02.
OpenCL (Required for full LightGBM support on GPU-powered systems)
Driverless AI TAR SH, available from https://www.h2o.ai/download/
Note: CUDA 11.8.0 (for GPUs) and cuDNN (required for TensorFlow support on GPUs) are included in the Driverless AI package.
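On a GPU machine, you can check the host driver version against the recommendation above before installing. This is a minimal sketch: `nvidia-smi` ships with the NVIDIA driver, so a missing binary simply means no driver is installed.

```shell
# Check the installed NVIDIA driver version (GPU systems only).
# nvidia-smi ships with the driver, so its absence means no driver is present.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=driver_version --format=csv,noheader
else
  echo "nvidia-smi not found: no NVIDIA driver installed (fine for CPU-only)"
fi
```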
Installing OpenCL¶
OpenCL is required for full LightGBM support on GPU-powered systems. To install OpenCL, run the following as root:
mkdir -p /etc/OpenCL/vendors && \
  echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd && \
  chmod a+r /etc/OpenCL/vendors/nvidia.icd && \
  chmod a+x /etc/OpenCL/vendors/ && \
  chmod a+x /etc/OpenCL
Note
If OpenCL is not installed, then CUDA LightGBM is automatically used. CUDA LightGBM is only supported on Pascal-powered (and later) systems, and can be enabled manually with the enable_lightgbm_cuda_support config.toml setting.
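A minimal sketch of enabling that setting with GNU sed. The directory name below is an assumption based on the installer file name used later in this section; verify the actual unpacked directory name on your system.

```shell
# Sketch: force CUDA LightGBM on via config.toml.
# The directory name is an assumption from the installer name; verify it.
cd dai-1.11.1.1-linux-x86_64
sed -i 's/^#\?enable_lightgbm_cuda_support *=.*/enable_lightgbm_cuda_support = true/' config.toml
grep enable_lightgbm_cuda_support config.toml
```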
Installing Driverless AI¶
Run the following commands to install the Driverless AI TAR SH.
# Install Driverless AI.
chmod 755 dai-1.11.1.1-linux-x86_64.sh
./dai-1.11.1.1-linux-x86_64.sh
You may now cd to the unpacked directory and optionally make changes to config.toml.
Starting Driverless AI¶
# Start Driverless AI.
./run-dai.sh
Starting NVIDIA Persistence Mode¶
If you have NVIDIA GPUs, you must run the following NVIDIA command. This command needs to be run after every reboot. For more information, see http://docs.nvidia.com/deploy/driver-persistence/index.html.
sudo nvidia-smi -pm 1
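Because persistence mode does not survive a reboot, one option is a root cron @reboot entry. This is a sketch, not official NVIDIA guidance; NVIDIA's recommended long-term approach is the nvidia-persistenced daemon described at the link above. The `/usr/bin/nvidia-smi` path is an assumption; confirm it with `which nvidia-smi`.

```shell
# Sketch: re-enable persistence mode at every boot via root's crontab.
# The nvidia-smi path is an assumption; check it with `which nvidia-smi`.
CRON_LINE='@reboot /usr/bin/nvidia-smi -pm 1'
echo "$CRON_LINE"
# To install it (run as root):
#   (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
```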
Install OpenCL¶
OpenCL is required in order to run LightGBM on GPUs. Run the following on CentOS 7/RHEL 7-based x86_64 systems that use yum.
yum -y clean all
yum -y makecache
yum -y update
wget http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/c/clinfo-2.1.17.02.09-1.el7.x86_64.rpm
wget http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/o/ocl-icd-2.2.12-1.el7.x86_64.rpm
rpm -if clinfo-2.1.17.02.09-1.el7.x86_64.rpm
rpm -if ocl-icd-2.2.12-1.el7.x86_64.rpm
clinfo
mkdir -p /etc/OpenCL/vendors && \
echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
Looking at Driverless AI log files¶
less log/dai.log
less log/h2o.log
less log/procsy.log
less log/vis-server.log
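To surface problems quickly across all of the logs above, a simple grep sketch (run from the unpacked directory; the pattern is only a starting point, not an exhaustive error filter):

```shell
# Sketch: pull recent error lines from all Driverless AI logs at once.
# 2>/dev/null skips log files that have not been created yet.
grep -iE 'error|exception' log/*.log 2>/dev/null | tail -n 20
```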
Stopping Driverless AI¶
# Stop Driverless AI.
./kill-dai.sh
Uninstalling Driverless AI¶
To uninstall Driverless AI, simply delete the directory created by the unpacking process. By default, all Driverless AI files are contained within this directory.
Upgrading Driverless AI¶
WARNINGS:
This release deprecates experiments and MLI models from 1.7.0 and earlier.
Experiments, MLIs, and MOJOs reside in the Driverless AI tmp directory and are not automatically upgraded when Driverless AI is upgraded. We recommend that you take the following steps before upgrading:
Build MLI models before upgrading.
Build MOJO pipelines before upgrading.
Stop Driverless AI and make a backup of your Driverless AI tmp directory before upgrading.
If you did not build MLI on a model before upgrading Driverless AI, then you will not be able to view MLI on that model after upgrading. Before upgrading, be sure to run MLI jobs on models that you want to continue to interpret in future releases. If that MLI job appears in the list of Interpreted Models in your current version, then it will be retained after upgrading.
If you did not build a MOJO pipeline on a model before upgrading Driverless AI, then you will not be able to build a MOJO pipeline on that model after upgrading. Before upgrading, be sure to build MOJO pipelines on all desired models and then back up your Driverless AI tmp directory.
The upgrade process inherits the service user and group from /etc/dai/User.conf and /etc/dai/Group.conf. You do not need to manually specify the DAI_USER or DAI_GROUP environment variables during an upgrade.
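The backup recommended above can be as simple as a tarball of the tmp directory. In this sketch, DAI_DIR is a placeholder; substitute the name of your current unpacked Driverless AI directory.

```shell
# Sketch: back up the Driverless AI working state before upgrading.
# DAI_DIR is a placeholder; substitute your actual unpacked directory.
DAI_DIR="dai-1.10.x"
tar -czf "dai_tmp_backup_$(date +%Y%m%d).tar.gz" "$DAI_DIR"/tmp
```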
Requirements¶
We recommend that NVIDIA driver >= 471.68 be installed (GPU only) in your host environment for a seamless experience on all architectures, including Ampere. Driverless AI ships with CUDA 11.8.0 for GPUs, but the driver must already exist in the host environment.
Go to the NVIDIA driver download page to get the latest NVIDIA Tesla A/T/V/P/K series drivers. For a reference of CUDA Toolkit and minimum required driver versions, see the CUDA Toolkit release notes.
Note
If you are using K80 GPUs, the minimum required NVIDIA driver version is 450.80.02.
Upgrade Steps¶
1. Stop your previous version of Driverless AI.
2. Run the self-extracting archive for the new version of Driverless AI.
3. Port any previous changes you made to your config.toml file to the newly unpacked directory.
4. Copy the tmp directory (which contains all of the Driverless AI working state) from your previous Driverless AI installation into the newly unpacked directory.
5. Start your newly extracted version of Driverless AI.
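The config.toml and tmp hand-off described above can be sketched as follows. OLD and NEW are placeholders for your previous and newly unpacked directories; copying config.toml wholesale is the simplest approach, but merging changes by hand is safer if defaults changed between releases.

```shell
# Sketch of the config and state hand-off between versions.
OLD="dai-1.10.x"                  # placeholder: previous unpacked directory
NEW="dai-1.11.1.1-linux-x86_64"   # placeholder: newly unpacked directory
cp "$OLD"/config.toml "$NEW"/     # port your config.toml changes (or merge by hand)
cp -a "$OLD"/tmp "$NEW"/          # carry over all Driverless AI working state
```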