Using Data Connectors with the Docker Image
Available file systems can be configured via the enabled_file_systems
property. Note that when a property is set as an environment variable, its name must be prepended with DRIVERLESS_AI_. Replace TAG below with the image tag.
nvidia-docker run \
--pid=host \
--init \
--rm \
--shm-size=256m \
-u `id -u`:`id -g` \
-p 12345:12345 \
-e DRIVERLESS_AI_ENABLED_FILE_SYSTEMS="file,s3,hdfs,gcs,gbq,kdb,minio,snow,dtap,azrbs,hive" \
-v `pwd`/data:/data \
-v `pwd`/log:/log \
-v `pwd`/license:/license \
-v `pwd`/tmp:/tmp \
h2oai/dai-centos7-ppc64le:TAG
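The mapping between a configuration property and its environment variable follows a simple rule: upper-case the property name and prepend DRIVERLESS_AI_. As a minimal sketch (the `prop` variable and the `tr` pipeline here are illustrative, not part of the product):

```shell
# Derive the environment-variable name for a given config property:
# upper-case the property name and prepend the DRIVERLESS_AI_ prefix.
prop="enabled_file_systems"
env_var="DRIVERLESS_AI_$(echo "$prop" | tr '[:lower:]' '[:upper:]')"
echo "$env_var"   # DRIVERLESS_AI_ENABLED_FILE_SYSTEMS
```

The same rule applies to any other property you want to pass to the container with `-e`.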
The sections that follow show examples of how to use environment variables to enable the HDFS, S3, Google Cloud Storage, Google BigQuery, Minio, Snowflake, kdb+, Azure Blob Store, BlueData DataTap, Hive, and JDBC data sources.
- S3 Setup
- HDFS Setup
- Azure Blob Store Setup
  - Supported Data Sources Using the Azure Blob Store Connector
  - Description of Configuration Attributes
  - Example 1: Enabling the Azure Blob Store Data Connector
  - Example 2: Mount Azure File Shares to the Local File System
  - Example 3: Enable HDFS Connector to Connect to Azure Data Lake Gen 1
  - Example 4: Enable HDFS Connector to Connect to Azure Data Lake Gen 2
  - FAQ
- BlueData DataTap Setup
- Google BigQuery Setup
- Google Cloud Storage Setup
- Hive Setup
- kdb+ Setup
- Minio Setup
- Snowflake Setup
- JDBC Setup
- Data Recipe URL Setup
- Data Recipe File Setup