Snowflake Setup

Driverless AI allows you to explore Snowflake data sources from within the Driverless AI application. This section provides instructions for configuring Driverless AI to work with Snowflake. This setup requires you to enable authentication: if you enable the Snowflake connector without it, the connector will appear in the UI, but you will not be able to use it.

Snowflake with Authentication

This example enables the Snowflake data connector with authentication by passing the account, user, and password variables.

  1. Export the DRIVERLESS_AI_CONFIG_FILE environment variable, which points to the Driverless AI config.toml file, or add it to ~/.bashrc. For example:

# DEB and RPM
export DRIVERLESS_AI_CONFIG_FILE="/etc/dai/config.toml"

# TAR SH
export DRIVERLESS_AI_CONFIG_FILE="/path/to/your/unpacked/dai/directory/config.toml"
  2. Specify the following configuration options in the config.toml file.

# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google Big Query, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
# jdbc: JDBC Connector, remember to configure JDBC below. (jdbc_app_configs)
# recipe_url: load custom recipe from URL
# recipe_file: load custom recipe from local file system
enabled_file_systems = "file, snow"

# Snowflake Connector credentials
snowflake_account = "<account_id>"
snowflake_user = "<username>"
snowflake_password = "<password>"
  3. Save the changes when you are done, then stop/restart Driverless AI.
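Before restarting, it can help to confirm that the Snowflake settings actually landed in the config file. The following is a minimal sketch; the helper function is illustrative and not part of Driverless AI, and the paths and service name in the usage comment assume a DEB/RPM install:

```shell
# check_snowflake_config: verify that the Snowflake connector settings are
# present in a Driverless AI config.toml before restarting the service.
# (This helper is illustrative; it is not part of Driverless AI.)
check_snowflake_config() {
    config="$1"
    for key in enabled_file_systems snowflake_account snowflake_user snowflake_password; do
        # Each required option should appear at the start of a line.
        grep -q "^${key}" "$config" || { echo "missing: $key" >&2; return 1; }
    done
    echo "Snowflake settings found in $config"
}

# Typical use on a DEB/RPM install (path and service name per your install):
# check_snowflake_config /etc/dai/config.toml && sudo systemctl restart dai
```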

After the Snowflake connector is enabled, you can add datasets by selecting Snowflake from the Add Dataset (or Drag and Drop) drop-down menu.

Add Dataset

Specify the following information to add your dataset.

  1. Enter Output Filename: Specify the name of the file on your local system that you want to add to Driverless AI. Note that this can only be a CSV file (for example, myfile.csv).

  2. Enter Database: Specify the name of the Snowflake database that you are querying.

  3. Enter Warehouse: Specify the name of the Snowflake warehouse that you are querying.

  4. Enter Schema: Specify the schema of the dataset that you are querying.

  5. Enter Region: (Optional) Specify the region of the warehouse that you are querying. This can be found in the Snowflake-provided URL used to access your database (as in <optional-deployment-name>.<region>.<cloud-provider>).

  6. Enter Role: (Optional) Specify your role as designated within Snowflake. Refer to the Snowflake documentation on roles for more information.

  7. Enter File Formatting Params: (Optional) Specify any additional parameters for formatting your datasets. Available parameters are listed in the Snowflake documentation for CSV file formats. (Note: Use only parameters for TYPE = CSV.) For example, if your dataset includes a text column that contains commas, you can specify a different delimiter using FIELD_DELIMITER='character'. Separate multiple parameters with spaces only, for example: FIELD_DELIMITER='|' SKIP_HEADER=1

  8. Enter Snowflake Query: Specify the Snowflake query that you want to execute.

  9. When you are finished, select the Click to Make Query button to add the dataset.
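Before submitting a query through the UI, you may want to dry-run it against Snowflake directly. The sketch below assumes Snowflake's snowsql CLI is installed; the helper function is hypothetical (not part of Driverless AI), and it uses snowsql's -d (database), -w (warehouse), -s (schema), and -q (query) options, which correspond to the form fields above:

```shell
# build_snowsql_cmd: assemble a snowsql invocation from the same fields that
# the Add Dataset form asks for. (Illustrative helper, not part of DAI.)
build_snowsql_cmd() {
    db="$1"; warehouse="$2"; schema="$3"; query="$4"
    # Print the command rather than running it, so it can be inspected first.
    printf 'snowsql -d %s -w %s -s %s -q "%s"' "$db" "$warehouse" "$schema" "$query"
}

# Example with placeholder values; run the printed command to test the query:
# build_snowsql_cmd MYDB MYWAREHOUSE PUBLIC 'SELECT * FROM mytable LIMIT 10'
```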
