Installing and Upgrading Driverless AI

For the best (and intended-as-designed) experience, install Driverless AI on modern data center hardware with GPUs and CUDA support. Use Pascal or Volta GPUs with as much GPU memory as possible for best results. (Note that the older K80 and M60 GPUs available in EC2 are supported and very convenient, but not as fast.)

Driverless AI supports local, LDAP, and PAM authentication. Authentication can be configured by setting environment variables or via a config.toml file. Refer to the Setting Environment Variables section for more information.
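
The exact mechanism depends on your Driverless AI version, but as a rough sketch (the key and variable names here are assumptions; verify them against the shipped config.toml and the Setting Environment Variables section), PAM authentication could be enabled like this:

    # config.toml (key name assumed)
    authentication_method = "pam"

    # or as an environment variable when launching the Docker container
    # (config.toml keys are assumed to map to DRIVERLESS_AI_* variables)
    -e DRIVERLESS_AI_AUTHENTICATION_METHOD="pam"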

Driverless AI also supports HDFS, S3, Google Cloud Storage, and Google BigQuery access. Support for these data sources can be configured by setting environment variables for the data connectors or via a config.toml file. Refer to the Enabling Data Connectors section for more information.
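
As a rough sketch of what enabling connectors looks like (the key name and accepted values are assumptions; the Enabling Data Connectors section has the authoritative list), HDFS and S3 could be enabled alongside the local file system like this:

    # config.toml (key name assumed)
    enabled_file_systems = "file, hdfs, s3"

    # or as an environment variable when launching the Docker container
    -e DRIVERLESS_AI_ENABLED_FILE_SYSTEMS="file,hdfs,s3"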

Sizing Requirements for Native Installs

Driverless AI requires a minimum of 5 GB of system memory to start experiments and a minimum of 5 GB of free disk space to run them. Note that these limits can be changed in the config.toml file. We recommend having plenty of system memory (64 GB or more) and free disk space (at least 30 GB, and ideally 10x the size of your datasets) available.
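
Before a native install, you can confirm that the host meets these minimums with standard Linux utilities (the path below is only an example; substitute the directory you plan to install to and store data on):

    # total and available system memory, in GB
    free -g

    # free disk space on the filesystem that will hold Driverless AI and its data
    df -h /opt/h2oai/dai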

Sizing Requirements for Docker Installs

For Docker installs, we recommend a minimum of 100 GB of free disk space. Driverless AI itself uses approximately 38 GB. In addition, unpacking the image and writing temporary files require space on the Linux mount that hosts /var during installation. Once Driverless AI is running, the volumes mounted into the Docker container can point to other file system mount points.
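
Before loading the image, it is worth confirming that the mount backing Docker's storage directory (typically /var/lib/docker) has enough room:

    # free space on the mount that backs Docker's storage directory
    df -h /var/lib/docker

    # space already consumed by existing Docker images, containers, and volumes
    docker system df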

GPU Sizing Requirements

If you are running Driverless AI with GPUs, be sure that your GPU has compute capability >= 3.5 and at least 4 GB of RAM. If these requirements are not met, then Driverless AI will switch to CPU-only mode.
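
You can check the installed GPUs and their memory with nvidia-smi; compute capability can be looked up on NVIDIA's site for the reported model, or queried directly on newer drivers (the compute_cap query field is not available in older driver releases):

    # GPU model and total memory
    nvidia-smi --query-gpu=name,memory.total --format=csv

    # compute capability (newer drivers only)
    nvidia-smi --query-gpu=name,compute_cap --format=csv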

Note about nvidia-docker 1.0

If you have nvidia-docker 1.0 installed, you need to remove it and all existing GPU containers. Refer to https://github.com/NVIDIA/nvidia-docker/blob/master/README.md for more information.
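
On Ubuntu, the removal steps from that README look roughly like the following (package manager commands differ on other distributions):

    # remove all containers that use nvidia-docker 1.0 volumes
    docker volume ls -q -f driver=nvidia-docker | \
      xargs -r -I{} -n1 docker ps -q -a -f volume={} | \
      xargs -r docker rm -f

    # remove the nvidia-docker 1.0 package
    sudo apt-get purge -y nvidia-docker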