Snowflake
Driverless AI allows you to explore Snowflake data sources from within the Driverless AI application. This section provides instructions for configuring Driverless AI to work with Snowflake. This setup requires you to enable authentication: if the Snowflake connector is enabled, it appears as an available file system in the UI, but you cannot use it until you provide authentication credentials.
Snowflake with Authentication
This example enables the Snowflake data connector with authentication by passing the account, user, and password variables.
- Export the Driverless AI config.toml file or add it to ~/.bashrc. For example:
```bash
# DEB and RPM
export DRIVERLESS_AI_CONFIG_FILE="/etc/dai/config.toml"

# TAR SH
export DRIVERLESS_AI_CONFIG_FILE="/path/to/your/unpacked/dai/directory/config.toml"
```
- Edit the following configuration options in the config.toml file. (A quick way to verify the Snowflake credentials outside of Driverless AI is sketched after these steps.)

```toml
# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google Big Query, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
enabled_file_systems = "file, snow"

# Snowflake Connector credentials
snowflake_account = "<account_id>"
snowflake_user = "<username>"
snowflake_password = "<password>"
```
- Save the changes when you are done, then stop/restart Driverless AI.
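Before restarting, you may want to confirm that the account, user, and password values actually authenticate against Snowflake. The sketch below does this with the snowflake-connector-python package, which is not part of Driverless AI and would need to be installed separately (for example, with pip install snowflake-connector-python); the connection values are placeholders and should match the entries in config.toml.

```python
# Minimal sketch: verify Snowflake credentials before placing them in config.toml.
# Assumes the snowflake-connector-python package is installed; all values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_id>",   # same value as snowflake_account in config.toml
    user="<username>",        # same value as snowflake_user
    password="<password>",    # same value as snowflake_password
)
try:
    cur = conn.cursor()
    # CURRENT_VERSION() is a lightweight query that confirms the login worked.
    cur.execute("SELECT CURRENT_VERSION()")
    print(cur.fetchone())
finally:
    conn.close()
```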
After the Snowflake connector is enabled, you can add datasets by selecting Snowflake from the Add Dataset (or Drag and Drop) drop-down menu.
Specify the following information to add your dataset. (A sketch for testing these values outside of Driverless AI follows this list.)
- Enter Output Filename: Specify the name of the file on your local system that you want to add to Driverless AI. Note that this can only be a CSV file (for example, myfile.csv).
- Enter Database: Specify the name of the Snowflake database that you are querying.
- Enter Warehouse: Specify the name of the Snowflake warehouse that you are querying.
- Enter Schema: Specify the schema of the dataset that you are querying.
- Enter Region: (Optional) Specify the region of the warehouse that you are querying. This can be found in the Snowflake-provided URL to access your database (as in <optional-deployment-name>.<region>.<cloud-provider>.snowflakecomputing.com).
- Enter Role: (Optional) Specify your role as designated within Snowflake. See https://docs.snowflake.net/manuals/user-guide/security-access-control-overview.html for more information.
- Enter File Formatting Params: (Optional) Specify any additional parameters for formatting your datasets. Available parameters are listed in https://docs.snowflake.net/manuals/sql-reference/sql/create-file-format.html#optional-parameters. (Note: Use only parameters for TYPE = CSV.) For example, if your dataset includes a text column that contains commas, you can specify a different delimiter using FIELD_DELIMITER='character'. Separate multiple parameters with spaces only. For example: FIELD_DELIMITER='|' FIELD_OPTIONALLY_ENCLOSED_BY=""
- Enter Snowflake Query: Specify the Snowflake query that you want to execute.
- When you are finished, select the Click to Make Query button to add the dataset.
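The fields above correspond to the parameters of an ordinary Snowflake connection plus a query and an output file. If a query fails in Driverless AI, one way to isolate the problem is to run the same values through snowflake-connector-python and write the result to a CSV yourself. The sketch below is only an illustration of that idea, not the connector's internal implementation, and every connection value, the query, and the output filename are placeholders.

```python
# Sketch: run the same query outside Driverless AI and save the result as a CSV,
# roughly what the Snowflake connector adds as a dataset. All values are placeholders.
import csv
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_id>",    # the Python connector typically takes the region as part of
                               # the account identifier, e.g. "<account_id>.<region>"
    user="<username>",
    password="<password>",
    database="<database>",     # "Enter Database"
    warehouse="<warehouse>",   # "Enter Warehouse"
    schema="<schema>",         # "Enter Schema"
    role="<role>",             # optional, "Enter Role"
)
try:
    cur = conn.cursor()
    cur.execute("SELECT * FROM <table> LIMIT 100")      # "Enter Snowflake Query"
    with open("myfile.csv", "w", newline="") as f:      # "Enter Output Filename"
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # column names as header
        writer.writerows(cur)                                  # the cursor iterates over result rows
finally:
    conn.close()
```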