HDFS Setup

This section provides instructions for configuring Driverless AI to work with HDFS.

Supported Hadoop Platforms

  • CDH 5.4
  • CDH 5.5
  • CDH 5.6
  • CDH 5.7
  • CDH 5.8
  • CDH 5.9
  • CDH 5.10
  • CDH 5.13
  • CDH 5.14
  • CDH 5.15
  • CDH 5.16
  • CDH 6.0
  • CDH 6.1
  • CDH 6.2
  • CDH 6.3
  • HDP 2.2
  • HDP 2.3
  • HDP 2.4
  • HDP 2.5
  • HDP 2.6
  • HDP 3.0
  • HDP 3.1

Description of Configuration Attributes

  • hdfs_config_path: The location of the HDFS config folder path. This folder can contain multiple config files.

  • hdfs_auth_type: Selects HDFS authentication. Available values are:

    • principal: Authenticate with HDFS with a principal user.
    • keytab: Authenticate with a keytab (recommended). If running DAI as a service, then the Kerberos keytab needs to be owned by the DAI user.
    • keytabimpersonation: Login with impersonation using a keytab.
    • noauth: No authentication needed.
  • key_tab_path: The path of the principal key tab file. For use when hdfs_auth_type=principal.

  • hdfs_app_principal_user: The Kerberos application principal user.

  • hdfs_app_login_user: The user ID of the current user (for example, user@realm).

  • hdfs_app_jvm_args: JVM args for HDFS distributions. Separate each argument with spaces.

    • -Djava.security.krb5.conf
    • -Dsun.security.krb5.debug
    • -Dlog4j.configuration
  • hdfs_app_classpath: The HDFS classpath.
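
Taken together, a minimal HDFS block in config.toml might look like the following sketch. All values here are placeholders chosen for illustration, not shipped defaults:

```toml
# Folder containing the HDFS config files (e.g. core-site.xml, hdfs-site.xml).
hdfs_config_path = "/opt/hadoop/conf"

# One of: noauth, principal, keytab, keytabimpersonation.
hdfs_auth_type = "keytab"

# Path of the principal key tab file.
key_tab_path = "/tmp/dai.keytab"

# Kerberos application principal user.
hdfs_app_principal_user = "dai@EXAMPLE.COM"

# JVM args, separated by spaces.
hdfs_app_jvm_args = "-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"

# HDFS classpath.
hdfs_app_classpath = "/opt/hadoop/conf"
```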

HDFS with No Authentication

This example enables the HDFS data connector and disables HDFS authentication in the config.toml file. This allows users to reference data stored in HDFS directly using the name node address, for example: hdfs://name.node/datasets/iris.csv.

  1. Export the Driverless AI config.toml file or add it to ~/.bashrc. For example:
# DEB and RPM
export DRIVERLESS_AI_CONFIG_FILE="/etc/dai/config.toml"

# TAR SH
export DRIVERLESS_AI_CONFIG_FILE="/path/to/your/unpacked/dai/directory/config.toml"
  2. Specify the following configuration options in the config.toml file. Note that the procsy port, which defaults to 12347, also has to be changed.
# IP address and port of procsy process.
procsy_ip = "127.0.0.1"
procsy_port = 8080

# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google Big Query, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
# jdbc: JDBC Connector, remember to configure JDBC below. (jdbc_app_configs)
enabled_file_systems = "file, hdfs"
  3. Save the changes when you are done, then stop/restart Driverless AI.

HDFS with Keytab-Based Authentication

This example:

  • Places keytabs in the /tmp/dtmp folder on your machine and provides the file path as described below.
  • Configures the option hdfs_app_principal_user to reference a user for whom the keytab was created (usually in the form of user@realm).
  1. Export the Driverless AI config.toml file or add it to ~/.bashrc. For example:
# DEB and RPM
export DRIVERLESS_AI_CONFIG_FILE="/etc/dai/config.toml"

# TAR SH
export DRIVERLESS_AI_CONFIG_FILE="/path/to/your/unpacked/dai/directory/config.toml"
  2. Specify the following configuration options in the config.toml file.
# IP address and port of procsy process.
procsy_ip = "127.0.0.1"
procsy_port = 8080

# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google Big Query, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
# jdbc: JDBC Connector, remember to configure JDBC below. (jdbc_app_configs)
enabled_file_systems = "file, hdfs"

# HDFS connector
# Specify HDFS Auth Type, allowed options are:
#   noauth : No authentication needed
#   principal : Authenticate with HDFS with a principal user
#   keytab : Authenticate with a Key tab (recommended)
#   keytabimpersonation : Login with impersonation using a keytab
hdfs_auth_type = "keytab"

# Path of the principal key tab file
key_tab_path = "/tmp/<keytabname>"

# Kerberos app principal user (recommended)
hdfs_app_principal_user = "<user@kerberosrealm>"
  3. Save the changes when you are done, then stop/restart Driverless AI.
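
Because the keytab must be owned by the Driverless AI user when DAI runs as a service, it can help to verify ownership and permissions before restarting. A minimal sketch using only the standard library (the keytab path shown in the comment is a placeholder; the pwd module is Unix-only):

```python
import os
import pwd
import stat

def keytab_owner(path: str) -> str:
    """Return the user name that owns the keytab file."""
    return pwd.getpwuid(os.stat(path).st_uid).pw_name

def world_readable(path: str) -> bool:
    """True if any user on the machine can read the keytab (usually undesirable)."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

# Example usage against a hypothetical keytab path:
# path = "/tmp/dai.keytab"
# print(keytab_owner(path), world_readable(path))
```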

HDFS with Keytab-Based Impersonation

Notes:

  • If using Kerberos, be sure that the Driverless AI system time is synced with the Kerberos server.
  • If running Driverless AI as a service, then the Kerberos keytab needs to be owned by the Driverless AI user.
  • Logins are case sensitive when keytab-based impersonation is configured.

This example:

  • Sets the authentication type to keytabimpersonation.
  • Places keytabs in the /tmp/dtmp folder on your machine and provides the file path as described below.
  • Configures the hdfs_app_principal_user variable, which references a user for whom the keytab was created (usually in the form of user@realm).
  1. Export the Driverless AI config.toml file or add it to ~/.bashrc. For example:
# DEB and RPM
export DRIVERLESS_AI_CONFIG_FILE="/etc/dai/config.toml"

# TAR SH
export DRIVERLESS_AI_CONFIG_FILE="/path/to/your/unpacked/dai/directory/config.toml"
  2. Specify the following configuration options in the config.toml file.
# IP address and port of procsy process.
procsy_ip = "127.0.0.1"
procsy_port = 8080

# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google Big Query, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
# jdbc: JDBC Connector, remember to configure JDBC below. (jdbc_app_configs)
enabled_file_systems = "file, hdfs"

# HDFS connector
# Specify HDFS Auth Type, allowed options are:
#   noauth : No authentication needed
#   principal : Authenticate with HDFS with a principal user
#   keytab : Authenticate with a Key tab (recommended)
#   keytabimpersonation : Login with impersonation using a keytab
hdfs_auth_type = "keytabimpersonation"

# Path of the principal key tab file
key_tab_path = "/tmp/<keytabname>"

# Kerberos app principal user (recommended)
hdfs_app_principal_user = "<user@kerberosrealm>"
  3. Save the changes when you are done, then stop/restart Driverless AI.