Environment Variables and Configuration Options

Driverless AI provides a number of configuration options that can be passed as environment variables when starting Driverless AI or specified in a config.toml file. The complete list of options is in the Using the config.toml File section. The steps for specifying options vary depending on whether you installed Driverless AI from an RPM, DEB, or TAR SH package or whether you are running a Docker image.

Setting Environment Variables in Docker Images

Each property must be written in upper case and prepended with DRIVERLESS_AI_. The example below starts Driverless AI with environment variables that enable S3 and HDFS access (without authentication).

nvidia-docker run \
  --pid=host \
  --init \
  --rm \
  -u `id -u`:`id -g` \
  -e DRIVERLESS_AI_ENABLED_FILE_SYSTEMS="file,s3,hdfs" \
  -e DRIVERLESS_AI_AUTHENTICATION_METHOD="local" \
  -e DRIVERLESS_AI_LOCAL_HTPASSWD_FILE="<htpasswd_file_location>" \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/group:/etc/group:ro \
  -v `pwd`/data:/data \
  -v `pwd`/log:/log \
  -v `pwd`/license:/license \
  -v `pwd`/tmp:/tmp \
  h2oai/dai-centos7-x86_64:TAG
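
Any other config.toml option can be passed to the Docker image in the same way: write the option name in upper case and prepend DRIVERLESS_AI_. The snippet below is only an illustrative sketch; the max_cores option and its value are assumptions, so check the Using the config.toml File section for the exact option names.

# config.toml entry (hypothetical value):
#   max_cores = 10
# equivalent Docker environment variable:
#   -e DRIVERLESS_AI_MAX_CORES="10"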

Setting Configuration Options in Native Installs

After an RPM or DEB install, the config.toml file is available in the /etc/dai folder; after a TAR SH install, it is located in the unpacked dai directory. Edit the desired options in this file, and then restart Driverless AI.

The example below shows the configuration options to set in the config.toml file to enable S3 and HDFS access (without authentication).

  1. Export the location of the Driverless AI config.toml file, or add the export to ~/.bashrc (see the sketch after the example below). For example:

# DEB and RPM
export DRIVERLESS_AI_CONFIG_FILE="/etc/dai/config.toml"

# TAR SH
export DRIVERLESS_AI_CONFIG_FILE="/path/to/your/unpacked/dai/directory/config.toml"
  2. Specify the desired configuration options to enable S3 and HDFS access (without authentication).

# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google BigQuery, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
# jdbc : JDBC Connector, remember to configure JDBC below (jdbc_app_configs)
enabled_file_systems = "file,s3,hdfs"

# authentication_method
# unvalidated : Accepts user id and password, does not validate password
# none : Does not ask for user id or password, authenticated as admin
# pam : Accepts user id and password, validates user with operating system
# ldap : Accepts user id and password, validates against an LDAP server; look
#        for additional settings under LDAP settings
# local : Accepts a user id and password, validated against an htpasswd file provided in local_htpasswd_file
authentication_method = "local"

# Local password file
# Generating an htpasswd file: see syntax below
# htpasswd -B "<location_to_place_htpasswd_file>" "<username>"
# note: -B forces use of bcrypt, a secure password hashing method
local_htpasswd_file = "<htpasswd_file_location>"
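
The htpasswd file referenced above must exist before Driverless AI is started with local authentication. The commands below are a minimal sketch only; the package names are the usual sources of the htpasswd utility, and the file path and username are placeholders you should replace with your own values.

# Install the htpasswd utility (package name depends on your distribution)
sudo apt-get install apache2-utils    # Debian/Ubuntu
# sudo yum install httpd-tools        # RHEL/CentOS

# Create a new htpasswd file (-c) with a bcrypt-hashed password (-B) for a user
htpasswd -B -c "<htpasswd_file_location>" "<username>"
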
  3. Start Driverless AI. Note that the command used to start Driverless AI varies depending on your install type.

# Linux RPM or DEB with systemd
sudo systemctl start dai

# Linux RPM or DEB without systemd
sudo -H -u dai /opt/h2oai/dai/run-dai.sh

# Linux TAR SH
./run-dai.sh
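
After startup, you can confirm that Driverless AI picked up the new settings by signing in to the web UI with a user from the htpasswd file. A quick reachability check from the command line, assuming the default web server port of 12345 (adjust if you changed it in config.toml):

# Check that the Driverless AI web server is responding (default port assumed)
curl -I http://localhost:12345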