Using the config.toml File
The config.toml file is a configuration file that uses the TOML v0.5.0 file format. Administrators can customize various aspects of a Driverless AI (DAI) environment by editing the config.toml file before starting DAI.
Note: For information on configuration security, see Configuration Security.
Configuration Override Chain
The configuration engine reads and overrides variables in the following order:
1. Driverless AI defaults: These are stored in a Python config module.
2. config.toml: Place this file in a folder or mount it in a Docker container, and specify the path in the DRIVERLESS_AI_CONFIG_FILE environment variable.
3. Keystore file: Set the keystore_file parameter in the config.toml file or the environment variable DRIVERLESS_AI_KEYSTORE_FILE to point to a valid DAI keystore file generated using the h2oai.keystore tool. If the environment variable is set, it overrides the keystore_file value in the config.toml.
4. Environment variable: Configuration variables can also be provided as environment variables. They must have the prefix DRIVERLESS_AI_ followed by the variable name in all caps. For example, authentication_method can be provided as DRIVERLESS_AI_AUTHENTICATION_METHOD. Setting environment variables overrides values from the keystore file.
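The precedence above can be sketched as a small resolver. This is a minimal illustration, not DAI code; the helper name resolve_setting and the dict-based keystore are assumptions for the sketch:

```python
import os

def resolve_setting(name, defaults, config_toml, keystore, environ=os.environ):
    """Return the effective value for one setting, honoring the override
    chain: defaults < config.toml < keystore < environment variable."""
    value = defaults.get(name)
    if name in config_toml:          # 2. config.toml overrides defaults
        value = config_toml[name]
    if name in keystore:             # 3. keystore overrides config.toml
        value = keystore[name]
    env_key = "DRIVERLESS_AI_" + name.upper()
    if env_key in environ:           # 4. environment variable wins
        value = environ[env_key]
    return value

# Example: config.toml sets a value, then an env var overrides it.
print(resolve_setting(
    "authentication_method",
    {"authentication_method": "unvalidated"},   # defaults
    {"authentication_method": "ldap"},          # config.toml
    {},                                         # keystore
    environ={"DRIVERLESS_AI_AUTHENTICATION_METHOD": "oidc"},
))  # prints: oidc
```

Note that each later layer only overrides settings it actually defines; anything left unset falls through to the previous layer.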
Copy the config.toml file from inside the Docker image to your local filesystem.
# Make a config directory
mkdir config

# Copy the config.toml file to the new config directory.
docker run --runtime=nvidia \
  --pid=host \
  --rm \
  --init \
  -u `id -u`:`id -g` \
  -v `pwd`/config:/config \
  --entrypoint bash \
  h2oai/dai-ubi8-x86_64:2.2.0-cuda11.8.0.xx -c "cp /etc/dai/config.toml /config"
Edit the desired variables in the config.toml file. Save your changes when you are done.
Start DAI with the DRIVERLESS_AI_CONFIG_FILE environment variable. Ensure that this environment variable points to the location of the edited config.toml file so that the software can locate the configuration file.
docker run --runtime=nvidia \
  --pid=host \
  --init \
  --rm \
  --shm-size=2g \
  --cap-add=SYS_NICE \
  --ulimit nofile=131071:131071 \
  --ulimit nproc=16384:16384 \
  -u `id -u`:`id -g` \
  -p 12345:12345 \
  -e DRIVERLESS_AI_CONFIG_FILE="/config/config.toml" \
  -v `pwd`/config:/config \
  -v `pwd`/data:/data \
  -v `pwd`/log:/log \
  -v `pwd`/license:/license \
  -v `pwd`/tmp:/tmp \
  h2oai/dai-ubi8-x86_64:2.2.0-cuda11.8.0.xx
Native installs include DEBs, RPMs, and TAR SH installs.
Export the DRIVERLESS_AI_CONFIG_FILE environment variable, or add it to ~/.bashrc. For example:
export DRIVERLESS_AI_CONFIG_FILE="/config/config.toml"
Edit the desired variables in the config.toml file. Save your changes when you are done.
Start DAI. Note that the command used to start DAI varies depending on your install type.
Sample config.toml File
The following is a copy of the standard config.toml file included with this version of DAI. The sections that follow describe some examples showing how to set different environment variables, data connectors, authentication methods, and notifications.
##############################################################################
# DRIVERLESS AI CONFIGURATION FILE
#
# Comments:
# This file is authored in TOML (see https://github.com/toml-lang/toml)
#
# Config Override Chain
# Configuration variables for Driverless AI can be provided in several ways,
# the config engine reads and overrides variables in the following order
#
# 1. h2oai/config/config.toml
#    [internal not visible to users]
#
# 2. config.toml
#    [place file in a folder/mount file in docker container and provide path
#    in "DRIVERLESS_AI_CONFIG_FILE" environment variable]
#
# 3. Keystore file
#    [set keystore_file parameter in config.toml, or environment variable
#    "DRIVERLESS_AI_KEYSTORE_FILE" to point to a valid DAI keystore file
#    generated using h2oai.keystore tool]
#
# 4. Environment variable
#    [configuration variables can also be provided as environment variables
#    they must have the prefix "DRIVERLESS_AI_" followed by
#    variable name in caps e.g "authentication_method" can be provided as
#    "DRIVERLESS_AI_AUTHENTICATION_METHOD"]
##############################################################################

# If the experiment is not done after this many minutes, stop feature engineering and model tuning as soon as possible and proceed with building the final modeling pipeline and deployment artifacts, independent of model score convergence or pre-determined number of iterations. Only active if not in reproducible mode. Depending on the data and experiment settings, overall experiment runtime can differ significantly from this setting.
#max_runtime_minutes = 1440

# if non-zero, then set max_runtime_minutes automatically to min(max_runtime_minutes, max(min_auto_runtime_minutes, runtime estimate)) when enable_preview_time_estimate is true, so that the preview performs a best estimate of the runtime. Set to zero to disable runtime estimate being used to constrain runtime of experiment.
#min_auto_runtime_minutes = 60

# Whether to tune max_runtime_minutes based upon final number of base models, so try to trigger start of final model in order to better ensure the entire experiment stops before max_runtime_minutes. Note: If the time given is short enough that tuning models are reduced below final model expectations, the final model may be shorter than expected, leading to an overall shorter experiment time.
#max_runtime_minutes_smart = true

# If the experiment is not done after this many minutes, push the abort button. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made.
#max_runtime_minutes_until_abort = 10080

# If reproducible is set, then experiment and all artifacts are reproducible, however then experiments may take arbitrarily long for a given choice of dials, features, and models.
# Setting this to False allows the experiment to complete after a fixed time, with all aspects of the model and feature building reproducible and seeded, but the overall experiment behavior will not necessarily be reproducible if later iterations would have been used in final model building.
# This should be set to True if every seeded experiment of exact same setup needs to generate the exact same final model, regardless of duration.
#strict_reproducible_for_max_runtime = true

# Uses model built on large number of experiments to estimate runtime. It can be inaccurate in cases that were not trained on.
#enable_preview_time_estimate = true

# Uses model built on large number of experiments to estimate mojo size. It can be inaccurate in cases that were not trained on.
#enable_preview_mojo_size_estimate = true

# Uses model built on large number of experiments to estimate max cpu memory. It can be inaccurate in cases that were not trained on.
#enable_preview_cpu_memory_estimate = true

#enable_preview_time_estimate_rough = false

# If the experiment is not done by this time, push the abort button. Accepts time in format given by time_abort_format (defaults to %Y-%m-%d %H:%M:%S) assuming a time zone set by time_abort_timezone (defaults to UTC). One can also give integer seconds since 1970-01-01 00:00:00 UTC. Applies to time on a DAI worker that runs experiments. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made. NOTE: If you start a new experiment with same parameters, restart, or refit, this absolute time will apply to such experiments or set of leaderboard experiments.
#time_abort = ""

# Any format is allowed as accepted by datetime.strptime.
#time_abort_format = "%Y-%m-%d %H:%M:%S"

# Any time zone in format accepted by datetime.strptime.
#time_abort_timezone = "UTC"

# Whether to delete all directories and files matching experiment pattern when calling do_delete_model (True),
# or whether to just delete directories (False). False can be used to preserve experiment logs that do
# not take up much space.
#
#delete_model_dirs_and_files = true

# Whether to delete all directories and files matching dataset pattern when calling do_delete_dataset (True),
# or whether to just delete directories (False). False can be used to preserve dataset logs that do
# not take up much space.
#
#delete_data_dirs_and_files = true

# # Recipe type
# ## Recipes override any GUI settings
# - **'auto'**: all models and features automatically determined by experiment settings, toml settings, and feature_engineering_effort
# - **'compliant'** : like 'auto' except:
#   - *interpretability=10* (to avoid complexity, overrides GUI or python client choice for interpretability)
#   - *enable_glm='on'* (rest 'off', to avoid complexity and be compatible with algorithms supported by MLI)
#   - *fixed_ensemble_level=0*: Don't use any ensemble
#   - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
#   - *max_feature_interaction_depth=1*: interaction depth is set to 1 (no multi-feature interactions to avoid complexity)
#   - *target_transformer='identity'*: for regression (to avoid complexity)
#   - *check_distribution_shift_drop='off'*: Don't use distribution shift between train, valid, and test to drop features (bit risky without fine-tuning)
# - **'monotonic_gbm'** : like 'auto' except:
#   - *monotonicity_constraints_interpretability_switch=1*: enable monotonicity constraints
#   - *self.config.monotonicity_constraints_correlation_threshold = 0.01*: see below
#   - *monotonicity_constraints_drop_low_correlation_features=true*: drop features that aren't correlated with target by at least 0.01 (specified by parameter above)
#   - *fixed_ensemble_level=0*: Don't use any ensemble (to avoid complexity)
#   - *included_models=['LightGBMModel']*
#   - *included_transformers=['OriginalTransformer']*: only original (numeric) features will be used
#   - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
#   - *monotonicity_constraints_log_level='high'*
#   - *autodoc_pd_max_runtime=-1*: no timeout for PDP creation in AutoDoc
# - **'kaggle'** : like 'auto' except:
#   - external validation set is concatenated with train set, with target marked as missing
#   - test set is concatenated with train set, with target marked as missing
#   - transformers that do not use the target are allowed to fit_transform across entire train + validation + test
#   - several config toml expert options open up limits (e.g. more numerics are treated as categoricals)
#   - Note: If plentiful memory, can:
#     - choose kaggle mode and then change fixed_feature_interaction_depth to large negative number,
#       otherwise default number of features given to transformer is limited to 50 by default
#     - choose mutation_mode = "full", so even more types of transformations are done at once per transformer
# - **'nlp_model'**: Only enables NLP models that process pure text
# - **'nlp_transformer'**: Only enables NLP transformers that process pure text, while any model type is allowed
# - **'image_model'**: Only enables Image models that process pure images
# - **'image_transformer'**: Only enables Image transformers that process pure images, while any model type is allowed
# - **'unsupervised'**: Only enables unsupervised transformers, models and scorers
# - **'gpus_max'**: Maximize use of GPUs (e.g. use XGBoost, rapids, Optuna hyperparameter search, etc.)
# - **'more_overfit_protection'**: Potentially improve overfit, esp. for small data, by disabling target encoding and making GA behave like final model for tree counts and learning rate
# - **'feature_store_mojo'**: Creates a MOJO to be used as transformer in the H2O Feature Store, to augment data on a row-by-row level based on Driverless AI's feature engineering. Only includes transformers that don't depend on the target, since features like target encoding need to be created at model fitting time to avoid data leakage. And features like lags need to be created from the raw data; they can't be computed with a row-by-row MOJO transformer.
# Each pipeline building recipe mode can be chosen, and then fine-tuned using each expert setting. Changing the
# pipeline building recipe will reset all pipeline building recipe options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of pipeline building
# recipe rules.
# If you choose to do a new/continued/refitted/retrained experiment from a parent experiment, the recipe rules are not re-applied
# and any fine-tuning is preserved. To reset recipe behavior, one can switch between 'auto' and the desired mode. This
# way the new child experiment will use the default settings for the chosen recipe.
#recipe = "auto"

# Whether to treat model like UnsupervisedModel, so that one specifies each scorer, pretransformer, and transformer in expert panel like one would do for supervised experiments.
# Otherwise (False), custom unsupervised models will assume the model itself specified these.
# If the unsupervised model chosen has _included_transformers, _included_pretransformers, and _included_scorers selected, this should be set to False (default), else it should be set to True.
# Then if one wants the unsupervised model to only produce 1 gene-transformer, then the custom unsupervised model can have:
# _ngenes_max = 1
# _ngenes_max_by_layer = [1000, 1]
# The 1000 for the pretransformer layer just means that layer can have any number of genes. Choose 1 if you expect a single instance of the pretransformer to be all one needs, e.g. consumes input features fully and produces complete useful output features.
#
#custom_unsupervised_expert_mode = false

# Whether to enable genetic algorithm for selection and hyper-parameter tuning of features and models.
# - If disabled ('off'), will go directly to final pipeline training (using default feature engineering and feature selection).
# - 'auto' is same as 'on' unless pure NLP or Image experiment.
# - "Optuna": Uses DAI genetic algorithm for feature engineering, but model hyperparameters are tuned with Optuna.
#   - In the Optuna case, the scores shown in the iteration panel are the best score and trial scores.
#   - Optuna mode currently only uses Optuna for XGBoost, LightGBM, and CatBoost (custom recipe).
#   - If Pruner is enabled, as is default, Optuna mode disables mutations of eval_metric so pruning uses same metric across trials to compare properly.
# Currently not supported when pre_transformers or a multi-layer pipeline is used, which must go through at least one round of tuning or evolution.
#
#enable_genetic_algorithm = "auto"

# How much effort to spend on feature engineering (-1...10)
# Heuristic combination of various developer-level toml parameters
# -1   : auto (5, except 1 for wide data in order to limit engineering)
# 0    : keep only numeric features, only model tuning during evolution
# 1    : keep only numeric features and frequency-encoded categoricals, only model tuning during evolution
# 2    : Like #1 but instead just no Text features. Some feature tuning before evolution.
# 3    : Like #5 but only tuning during evolution. Mixed tuning of features and model parameters.
# 4    : Like #5, but slightly more focused on model tuning
# 5    : Default. Balanced feature-model tuning
# 6-7  : Like #5, but slightly more focused on feature engineering
# 8    : Like #6-7, but even more focused on feature engineering with high feature generation rate, no feature dropping even if high interpretability
# 9-10 : Like #8, but no model tuning during feature evolution
#
#feature_engineering_effort = -1
# Whether to enable train/valid and train/test distribution shift detection ('auto'/'on'/'off').
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in model
# expert panel, and then only the models selected in recipe list will be used.
#
#check_distribution_shift = "auto"

# Whether to enable train/test distribution shift detection ('auto'/'on'/'off') for final model transformed features.
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in model
# expert panel, and then only the models selected in recipe list will be used.
#
#check_distribution_shift_transformed = "auto"

# Whether to drop high-shift features ('auto'/'on'/'off'). Auto disables for time series.
#check_distribution_shift_drop = "auto"

# If distribution shift detection is enabled, drop features (except ID, text, date/datetime, time, weight) for
# which shift AUC, GINI, or Spearman correlation is above this value
# (e.g. AUC of a binary classifier that predicts whether given feature value
# belongs to train or test data)
#
#drop_features_distribution_shift_threshold_auc = 0.999

# Specify whether to check leakage for each feature (``on`` or ``off``).
# If a fold column is used, this option checks leakage without using the fold column.
# By default, LightGBM Model is used for leakage detection when possible, unless it is
# turned off in the Model Expert Settings tab, in which case only the models selected with
# the ``included_models`` option are used. Note that this option is always disabled for time
# series experiments.
#
#check_leakage = "auto"

# If leakage detection is enabled,
# drop features for which AUC (R2 for regression), GINI,
# or Spearman correlation is above this value.
# If fold column present, features are not dropped,
# because leakage test applies without fold column used.
#
#drop_features_leakage_threshold_auc = 0.999

# Max number of rows x number of columns to trigger (stratified) sampling for leakage checks
#
#leakage_max_data_size = 10000000

# Specify the maximum number of features to use and show in importance tables.
# When Interpretability is set higher than 1,
# transformed or original features with lower importance than the top max_features_importance features are always removed.
# Feature importances of transformed or original features correspondingly will be pruned.
# Higher values can lead to lower performance and larger disk space used for datasets with more than 100k columns.
#
#max_features_importance = 100000

# Whether to create the Python scoring pipeline at the end of each experiment.
#make_python_scoring_pipeline = "auto"

# Whether to create the MOJO scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes.
#
#make_mojo_scoring_pipeline = "auto"

# Whether to create a C++ MOJO based Triton scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes. Requires make_mojo_scoring_pipeline != "off".
#
#make_triton_scoring_pipeline = "off"

# Whether to automatically deploy the model to the Triton inference server at the end of each experiment.
# "remote" will deploy to the remote Triton inference server to location provided by triton_host_remote (and optionally, triton_model_repository_dir_remote).
# "off" requires manual action (Deploy wizard or Python client or manual transfer of exported Triton directory from Deploy wizard) to deploy the model to Triton.
#
#auto_deploy_triton_scoring_pipeline = "off"

# Test remote Triton deployments during creation of MOJO pipeline. Requires triton_host_remote to be configured and make_triton_scoring_pipeline to be enabled.
#triton_mini_acceptance_test_remote = true

#triton_client_timeout_testing = 300

#test_triton_when_making_mojo_pipeline_only = false

# Perform timing and accuracy benchmarks for Injected MOJO scoring vs Python scoring. This is for full scoring data, and can be slow. This also requires hard asserts. Doesn't force MOJO scoring by itself, so depends on mojo_for_predictions='on' if you want full coverage.
#mojo_for_predictions_benchmark = true

# Fail hard if MOJO scoring is this many times slower than Python scoring.
#mojo_for_predictions_benchmark_slower_than_python_threshold = 10

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if have at least this many rows. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_rows = 100

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if takes at least this many seconds. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_seconds = 2.0

# Inject MOJO into fitted Python state if mini acceptance test passes, so can use C++ MOJO runtime when calling predict(enable_mojo=True, IS_SCORER=True, ...). Prerequisite for mojo_for_predictions='on' or 'auto'.
#inject_mojo_for_predictions = true

# Use MOJO for making fast low-latency predictions after experiment has finished (when applicable, for AutoDoc/Diagnostics/Predictions/MLI and standalone Python scoring via scorer.zip). For 'auto', only use MOJO if number of rows is equal or below mojo_for_predictions_max_rows. For larger frames, it can be faster to use the Python backend since used libraries are more likely already vectorized.
#mojo_for_predictions = "auto"

# For smaller datasets, the single-threaded but low latency C++ MOJO runtime can lead to significantly faster scoring times than the regular in-Driverless AI Python scoring environment. If enable_mojo=True is passed to the predict API, and the MOJO exists and is applicable, then use the MOJO runtime for datasets that have fewer or equal number of rows than this threshold. MLI/AutoDoc set enable_mojo=True by default, so this setting applies. This setting is only used if mojo_for_predictions is 'auto'.
#mojo_for_predictions_max_rows = 10000

# Batch size (in rows) for C++ MOJO predictions. Only when enable_mojo=True is passed to the predict API, and when the MOJO is applicable (e.g., fewer rows than mojo_for_predictions_max_rows). Larger values can lead to faster scoring, but use more memory.
#mojo_for_predictions_batch_size = 100

# Relative tolerance for mini MOJO acceptance test. If Python/C++ MOJO differs more than this from Python, won't use MOJO inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_rtol = 0.0

# Absolute tolerance for mini MOJO acceptance test (for regression/Shapley, will be scaled by max(abs(preds))). If Python/C++ MOJO differs more than this from Python, won't use MOJO inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_atol = 0.0

# Whether to attempt to reduce the size of the MOJO scoring pipeline. A smaller MOJO will also lead to
# less memory footprint during scoring. It is achieved by reducing some other settings like interaction depth, and
# hence can affect the predictive accuracy of the model.
#
#reduce_mojo_size = false

# Whether to create the pipeline visualization at the end of each experiment.
# Uses MOJO to show pipeline, input features, transformers, model, and outputs of model. MOJO-capable tree models show first tree.
#make_pipeline_visualization = "auto"

# Whether to create the python pipeline visualization at the end of each experiment.
# Each feature and transformer includes a variable importance at end in brackets.
# Only done when forced on, and artifacts as png files will appear in summary zip.
# Each experiment has files per individual in final population:
# 1) preprune_False_0.0 : Before final pruning, without any additional variable importance threshold pruning
# 2) preprune_True_0.0 : Before final pruning, with additional variable importance <=0.0 pruning
# 3) postprune_False_0.0 : After final pruning, without any additional variable importance threshold pruning
# 4) postprune_True_0.0 : After final pruning, with additional variable importance <=0.0 pruning
# 5) posttournament_False_0.0 : After final pruning and tournament, without any additional variable importance threshold pruning
# 6) posttournament_True_0.0 : After final pruning and tournament, with additional variable importance <=0.0 pruning
# 1-5 are done with 'on', while 'auto' only does 6, corresponding to the final post-pruned individuals.
# Even after pruning, some features have zero importance, because only those genes that have value+variance in
# variable importance of value=0.0 get pruned. GA can have many folds with positive variance
# for a gene, and those are not removed in case they are useful features for final model.
# If small mojo option is chosen (reduce_mojo_size True), then the variance of feature gain is ignored
# for which genes and features are pruned as well as for what appears in the graph.
#
#make_python_pipeline_visualization = "auto"

# Whether to create the experiment AutoDoc after end of experiment.
#
#make_autoreport = true

#max_cols_make_autoreport_automatically = 1000

#max_cols_make_pipeline_visualization_automatically = 5000

# Pass environment variables from running Driverless AI instance to Python scoring pipeline for
# deprecated models, when they are used to make predictions. Use with caution.
# If config.toml overrides are set by env vars, and they differ from what the experiment's env
# looked like when it was trained, then unexpected consequences can occur. Enable this only to
# override certain well-controlled settings like the port for H2O-3 custom recipe server.
#
#pass_env_to_deprecated_python_scoring = false

#transformer_description_line_length = -1

# Whether to measure the MOJO scoring latency at the time of MOJO creation.
#benchmark_mojo_latency = "auto"

# Max size of pipeline.mojo file (in MB) for automatic mode of MOJO scoring latency measurement
#benchmark_mojo_latency_auto_size_limit = 2048

# If MOJO creation times out at end of experiment, can still make MOJO from the GUI or from the R/Py clients (timeout doesn't apply there).
#mojo_building_timeout = 1800.0

# If MOJO visualization creation times out at end of experiment, MOJO is still created if possible within the time limit specified by mojo_building_timeout.
#mojo_vis_building_timeout = 600.0

# If MOJO creation is too slow, increase this value. Higher values can finish faster, but use more memory.
# If MOJO creation fails due to an out-of-memory error, reduce this value to 1.
# Set to -1 for all physical cores.
#
#mojo_building_parallelism = -1

# Size in bytes that all pickled and compressed base models have to satisfy to use parallel MOJO building.
# For large base models, parallel MOJO building can use too much memory.
# Only used if final_fitted_model_per_model_fold_files is true.
#
#mojo_building_parallelism_base_model_size_limit = 100000000

# Whether to show model and pipeline sizes in logs.
# If 'auto', then not done if more than 10 base models+folds, because expect not concerned with size.
#show_pipeline_sizes = "auto"

# safe: assume might be running another experiment on same node
# moderate: assume not running any other experiments or tasks on same node, but still only use physical core count
# max: assume not running anything else on node at all except the experiment
# If multinode is enabled, this option has no effect, unless worker_remote_processors=1 when it will still be applied.
# Each exclusive mode can be chosen, and then fine-tuned using each expert setting. Changing the
# exclusive mode will reset all exclusive mode related options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of exclusive mode rules.
# If you choose to do a new/continued/refitted/retrained experiment from a parent experiment, the mode rules are not re-applied
# and any fine-tuning is preserved. To reset mode behavior, one can switch between 'safe' and the desired mode. This
# way the new child experiment will use the default system resources for the chosen mode.
#
#exclusive_mode = "safe"

# Maximum number of workers for Driverless AI server pool (only 1 needed currently)
#max_workers = 1

# Max number of CPU cores to use for the whole system. Set to <= 0 to use all (physical) cores.
# If the number of ``worker_remote_processors`` is set to a value >= 3, the number of cores will be reduced
# by the ratio (``worker_remote_processors_max_threads_reduction_factor`` * ``worker_remote_processors``)
# to avoid overloading the system when too many remote tasks are processed at once.
# One can also set environment variable 'OMP_NUM_THREADS' to number of cores to use for OpenMP
# (e.g., in bash: 'export OMP_NUM_THREADS=32' and 'export OPENBLAS_NUM_THREADS=32').
#
#max_cores = 0

# Max number of CPU cores to use across all of DAI experiments and tasks.
# -1 is all available; with stall_subprocess_submission_dai_fork_threshold_count=0, this means restricted to core count.
#
#max_cores_dai = -1

# Number of virtual cores per physical core (0: auto mode, >=1 use that integer value). If >=1, the reported physical cores in logs will match the virtual cores divided by this value.
#virtual_cores_per_physical_core = 0

# Minimum number of virtual cores per physical core. Only applies if virtual cores != physical cores. Can help situations like Intel i9 13900 with 24 physical cores and only 32 virtual cores. So better to limit physical cores to 16.
#min_virtual_cores_per_physical_core_if_unequal = 2

# Number of physical cores to assume are present (0: auto, >=1 use that integer value).
# If for some reason DAI does not automatically figure out physical cores correctly,
# one can override with this value. Some systems, especially virtualized, do not always provide
# correct information about the virtual cores, physical cores, sockets, etc.
#override_physical_cores = 0

# Number of virtual cores to assume are present (0: auto, >=1 use that integer value).
# If for some reason DAI does not automatically figure out virtual cores correctly,
# or only a portion of the system is to be used, one can override with this value.
# Some systems, especially virtualized, do not always provide
# correct information about the virtual cores, physical cores, sockets, etc.
#override_virtual_cores = 0

# Whether to treat data as small recipe in terms of work, by spreading many small tasks across many cores instead of forcing GPUs, for models that support it via static var _use_single_core_if_many. 'auto' looks at _use_single_core_if_many for models and data size, 'on' forces, 'off' disables.
#small_data_recipe_work = "auto"

399# Stall submission of tasks if total DAI fork count exceeds count (-1 to disable, 0 for automatic of max_cores_dai)
400#stall_subprocess_submission_dai_fork_threshold_count = 0
401
402# Stall submission of tasks if system memory available is less than this threshold in percent (set to 0 to disable).
403# Above this threshold, the number of workers in any pool of workers is linearly reduced down to 1 once hitting this threshold.
404#
405#stall_subprocess_submission_mem_threshold_pct = 2
406
407# Whether to set automatic number of cores by physical (True) or logical (False) count.
408# Using all logical cores can lead to poor performance due to cache thrashing.
409#
410#max_cores_by_physical = true
411
412# Absolute limit to core count
413#max_cores_limit = 200
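# Example (illustrative values, not defaults): to keep DAI from saturating a
# shared 64-core server, one might cap core usage:
#   max_cores_limit = 32
#   max_cores_by_physical = true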

# Control maximum number of cores to use for a model's fit call (0 = all physical cores, >=1 uses that count). See also tensorflow_model_max_cores to further limit TensorFlow main models.
#max_fit_cores = 10

# Control maximum number of cores to use for scoring across all chosen scorers (0 = auto)
#parallel_score_max_workers = 0

# Whether to use the full multinode distributed cluster (True) or single-node Dask (False).
# In some cases, using the entire cluster can be inefficient. E.g. several DGX nodes can be more efficient
# if used one DGX at a time for medium-sized data.
#
#use_dask_cluster = true

# Control maximum number of cores to use for a model's predict call (0 = all physical cores, >=1 uses that count)
#max_predict_cores = 0

# Factor by which to reduce physical cores, to use for post-model experiment tasks like autoreport, MLI, etc.
#max_predict_cores_in_dai_reduce_factor = 4

# Maximum number of cores to use for post-model experiment tasks like autoreport, MLI, etc.
#max_max_predict_cores_in_dai = 10

# Control maximum number of cores to use for a model's transform and predict call when doing operations inside the DAI-MLI GUI and R/Py clients.
# The main experiment and other tasks like MLI and autoreport have separate queues. The main experiments run at most worker_remote_processors tasks (limited by cores if auto mode),
# while other tasks run at most worker_local_processors tasks (limited by cores if auto mode) at the same time,
# so many small tasks can add up. To prevent overloading the system, the defaults are conservative. However, if most of the activity involves autoreport or MLI, and no model experiments
# are running, it may be safe to increase this value to something larger than 4.
# -1 : Auto mode. Up to physical cores divided by 4, up to a maximum of 10.
# 0 : all physical cores
# >= 1 : that count.
#
#max_predict_cores_in_dai = -1
# Control number of workers used in CPU mode for tuning (0 = socket count, -1 = all physical cores, >=1 uses that count). More workers will be more parallel, but models learn less from each other.
#batch_cpu_tuning_max_workers = 0

# Control number of workers used in CPU mode for training (0 = socket count, -1 = all physical cores, >=1 uses that count)
#cpu_max_workers = 0

# Expected maximum number of forks, used to ensure datatable doesn't overload the system. For actual use beyond this value, the system will start to have slow-down issues
#assumed_simultaneous_dt_forks_munging = 3

# Expected maximum number of forks when computing statistics during ingestion, used to ensure datatable doesn't overload the system
#assumed_simultaneous_dt_forks_stats_openblas = 1

# Maximum number of threads for datatable for munging
#max_max_dt_threads_munging = 4

# Expected maximum number of threads for datatable, no matter how many more cores exist
#max_max_dt_threads_stats_openblas = 8

# Maximum number of threads for datatable for reading/writing files
#max_max_dt_threads_readwrite = 4

# Maximum parallel workers for final model building.
# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
# Can be required if some transformer or model uses more than the expected amount of memory.
# Ways to reduce final model building memory usage, e.g. set one or more of these and retrain final model:
# 1) Increase munging_memory_overhead_factor to 10
# 2) Increase final_munging_memory_reduction_factor to 10
# 3) Lower max_workers_final_munging to 1
# 4) Lower max_workers_final_base_models to 1
# 5) Lower max_cores to, e.g., 1/2 or 1/4 of physical cores.
#max_workers_final_base_models = 0

# Maximum parallel workers for final per-model munging.
# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
# Can be required if some transformer uses more than the expected amount of memory.
#max_workers_final_munging = 0

# Minimum number of threads for datatable (and OpenMP) during data munging (per process).
# datatable is the main data munging tool used within Driverless AI (source:
# https://github.com/h2oai/datatable)
#
#min_dt_threads_munging = 1

# Like min_dt_threads_munging, but for final pipeline munging
#min_dt_threads_final_munging = 1

# Maximum number of threads for datatable during data munging (per process) (0 = all, -1 = auto).
# If multiple forks, threads are distributed across forks.
#max_dt_threads_munging = -1

# Maximum number of threads for datatable during data reading and writing (per process) (0 = all, -1 = auto).
# If multiple forks, threads are distributed across forks.
#max_dt_threads_readwrite = -1

# Maximum number of threads for datatable stats and OpenBLAS (per process) (0 = all, -1 = auto).
# If multiple forks, threads are distributed across forks.
#max_dt_threads_stats_openblas = -1

# Maximum number of threads for datatable during time-series properties preview panel computations.
#max_dt_threads_do_timeseries_split_suggestion = 1

# Number of GPUs to use per experiment for training task. Set to -1 for all GPUs.
# An experiment will generate many different models.
# Currently num_gpus_per_experiment != -1 disables GPU locking, so it is only recommended for
# single experiments and single users.
# Ignored if GPUs are disabled or there are no GPUs on the system.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# In a multinode context when using Dask, this refers to the per-node value.
# For ImageAutoModel, this refers to the total number of GPUs used for that entire model type,
# since there is only one model type for the entire experiment.
# E.g. with 4 GPUs, to run 2 ImageAuto experiments on 2 GPUs each, set
# num_gpus_per_experiment to 2 for each experiment; each of the 4 GPUs will be used one at a time
# by the 2 experiments, each using 2 GPUs only.
#
#num_gpus_per_experiment = -1

# Number of CPU cores per GPU. Limits the number of GPUs in order to have sufficient cores per GPU.
# Set to -1 to disable, -2 for auto mode.
# In auto mode, if lightgbm_use_gpu is 'auto' or 'off', then min_num_cores_per_gpu=1, else min_num_cores_per_gpu=2, due to LightGBM requiring more cores even when using GPUs.
#min_num_cores_per_gpu = -2

# Number of GPUs to use per model training task. Set to -1 for all GPUs.
# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model.
# Only applicable currently to the image auto pipeline building recipe or Dask models with more than one GPU or more than one node.
# Ignored if GPUs are disabled or there are no GPUs on the system.
# For ImageAutoModel, the maximum of num_gpus_per_model and num_gpus_per_experiment (all GPUs if -1) is taken.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# In a multinode context when using Dask, this refers to the per-node value.
#
#num_gpus_per_model = 1

# Number of GPUs to use for predict for models and transform for transformers when running outside of fit/fit_transform.
# -1 means all, 0 means no GPUs, >1 means that many GPUs, up to the visible limit.
# If predict/transform are called in the same process as fit/fit_transform, the number of GPUs will match,
# while new processes will use this count for the number of GPUs for applicable models/transformers.
# Exception: TensorFlow and PyTorch models/transformers, and RAPIDS, always predict on GPU if GPUs exist.
# RAPIDS requires the Python scoring package also be used on GPUs.
# In a multinode context when using Dask, this refers to the per-node value.
#
#num_gpus_for_prediction = 0
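# Example (illustrative, not a default): to score on a single GPU in fresh
# processes after training, one might set:
#   num_gpus_for_prediction = 1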

# Which gpu_id to start with
# -1 : auto mode. E.g. 2 experiments can each set num_gpus_per_experiment to 2 and use 4 GPUs.
# If using CUDA_VISIBLE_DEVICES=... to control GPUs (preferred method), gpu_id=0 is the
# first in that restricted list of devices.
# E.g. if CUDA_VISIBLE_DEVICES='4,5' then gpu_id_start=0 will refer to
# device #4.
# E.g. from expert mode, to run 2 experiments, each on a distinct GPU out of 2 GPUs:
# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=0
# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=1
# E.g. from expert mode, to run 2 experiments, each on 4 distinct GPUs out of 8 GPUs:
# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=0
# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=4
# E.g. like just above, but now running on all 4 GPUs/model:
# Experiment#1: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=0
# Experiment#2: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=4
# If num_gpus_per_model != 1, global GPU locking is disabled
# (because underlying algorithms don't support arbitrary gpu ids, only sequential ids),
# so the above must be set up correctly to avoid overlap across all experiments by all users.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# Note that GPU selection does not wrap, so gpu_id_start + num_gpus_per_model must be less than the number of visible GPUs
#
#gpu_id_start = -1

# Whether to reduce features until the model does not fail.
# Currently for non-Dask XGBoost models (i.e. GLMModel, XGBoostGBMModel, XGBoostDartModel, XGBoostRFModel),
# during normal fit or when using Optuna.
# Primarily useful for GPU OOM.
# If XGBoost runs out of GPU memory, this is detected, and
# (regardless of the setting of skip_model_failures)
# feature selection is performed using XGBoost on subsets of features.
# The dataset is progressively reduced by a factor of 2, with more models to cover all features.
# This splitting continues until no failure occurs.
# Then all sub-models are used to estimate variable importance by absolute information gain,
# in order to decide which features to include.
# Finally, a single model with the most important features
# is built using the feature count that did not lead to OOM.
# For 'auto', this option is set to 'off' when reproducible experiment mode is enabled,
# because the condition of running OOM can change for the same experiment seed.
# Reduction is only done on features and not on rows for the feature selection step.
#
#allow_reduce_features_when_failure = "auto"
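# Example (illustrative, not a default): for strictly deterministic behavior one
# might disable feature reduction on failure explicitly rather than rely on 'auto':
#   allow_reduce_features_when_failure = "off"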

# With allow_reduce_features_when_failure, this controls how many repeats of sub-models
# are used for feature selection. A single repeat only has each sub-model
# consider a single subset of features, while repeats shuffle which
# features are considered, allowing more chance to find important interactions.
# More repeats can lead to higher accuracy.
# The cost of this option is proportional to the repeat count.
#
#reduce_repeats_when_failure = 1

# With allow_reduce_features_when_failure, this controls the fraction of features
# treated as an anchor that are fixed for all sub-models.
# Each repeat gets new anchors.
# For tuning and evolution, the probability depends
# upon any prior importance (if present) from other individuals,
# while the final model uses uniform probability for anchor features.
#
#fraction_anchor_reduce_features_when_failure = 0.1

# Error strings from XGBoost that are used to trigger re-fit on reduced sub-models.
# See allow_reduce_features_when_failure.
#
#xgboost_reduce_on_errors_list = "['Memory allocation error on worker', 'out of memory', 'XGBDefaultDeviceAllocatorImpl', 'invalid configuration argument', 'Requested memory']"

# Error strings from LightGBM that are used to trigger re-fit on reduced sub-models.
# See allow_reduce_features_when_failure.
#
#lightgbm_reduce_on_errors_list = "['Out of Host Memory']"

# LightGBM does not significantly benefit from GPUs, unlike other tools like XGBoost or BERT/image models.
# Each experiment will try to use all GPUs, and on systems with many cores and GPUs,
# this leads to many experiments running at once, all trying to lock the GPU for use,
# leaving the cores heavily under-utilized. So by default, DAI always uses CPU for LightGBM, unless 'on' is specified.
#lightgbm_use_gpu = "auto"

# Kaggle username for automatic submission and scoring of test set predictions.
# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
#
#kaggle_username = ""

# Kaggle key for automatic submission and scoring of test set predictions.
# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
#
#kaggle_key = ""

# Max. number of seconds to wait for a Kaggle API call to return scores for given predictions
#kaggle_timeout = 120

#kaggle_keep_submission = false

# If provided, extends the list to arbitrary and potentially future Kaggle competitions to make
# submissions for. Only used if kaggle_key and kaggle_username are provided.
# Provide a quoted comma-separated list of tuples (target column name, number of test rows, competition, metric) like this:
# kaggle_competitions='("target", 200000, "santander-customer-transaction-prediction", "AUC"), ("TARGET", 75818, "santander-customer-satisfaction", "AUC")'
#
#kaggle_competitions = ""
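# Example (placeholder credentials, illustrative only): to enable automatic Kaggle
# submission and scoring with a longer timeout, one might set:
#   kaggle_username = "your_kaggle_username"
#   kaggle_key = "your_kaggle_api_key"
#   kaggle_timeout = 300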

# Period (in seconds) of pings by the Driverless AI server to each experiment
# (in order to get logger info like disk space and memory usage).
# 0 means don't print anything.
#ping_period = 60

# Whether to enable pings of system status during DAI experiments.
#ping_autodl = true

# Minimum amount of disk space in GB needed to run experiments.
# Experiments will fail if this limit is crossed.
# This limit exists because Driverless AI needs to generate data for model training,
# feature engineering, documentation, and other such processes.
#disk_limit_gb = 5

# Minimum amount of disk space in GB needed before stalling forking of new processes during an experiment.
#stall_disk_limit_gb = 1

# Minimum amount of system memory in GB needed to start experiments.
# As with disk space, a certain amount of system memory is needed to run some basic
# operations.
#memory_limit_gb = 5

# Minimum number of rows needed to run experiments (values lower than 100 might not work).
# A minimum threshold is set to ensure there is enough data to create a statistically
# reliable model and avoid other small-data-related failures.
#
#min_num_rows = 100

# Minimum required number of rows (in the training data) for each class label for classification problems.
#min_rows_per_class = 5

# Minimum required number of rows for each split when generating validation samples.
#min_rows_per_split = 5

# Level of reproducibility desired (for same data and same inputs).
# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
# Supported levels are:
# reproducibility_level = 1 for same experiment results as long as same O/S, same CPU(s), and same GPU(s)
# reproducibility_level = 2 for same experiment results as long as same O/S, same CPU architecture, and same GPU architecture
# reproducibility_level = 3 for same experiment results as long as same O/S and same CPU architecture, not using GPUs
# reproducibility_level = 4 for same experiment results as long as same O/S (best effort)
#
#reproducibility_level = 1

# Seed for the random number generator to make experiments reproducible, to a certain reproducibility level (see above).
# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
#
#seed = 1234
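# Example (illustrative values, not defaults): for a reproducible CPU-only run,
# one might set:
#   reproducibility_level = 3
#   seed = 42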

# The list of values that should be interpreted as missing values during data import.
# This applies to both numeric and string columns. Note that the dataset must be reloaded after applying changes to this config via the expert settings.
# Also note that 'nan' is always interpreted as a missing value for numeric columns.
#missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', 'unknown', 'inf', '-inf', '1.7976931348623157e+308', '-1.7976931348623157e+308']"
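# Example (illustrative, abbreviated list): to also treat a sentinel code such as
# '-999' as missing during import, one might add it to the list, e.g.:
#   missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', 'unknown', '-999']"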

# Whether to impute (to the mean) for GLM on training data.
#glm_nan_impute_training_data = false

# Whether to impute (to the mean) for GLM on validation data.
#glm_nan_impute_validation_data = false

# Whether to impute (to the mean) for GLM on prediction data (required for consistency with MOJO).
#glm_nan_impute_prediction_data = true

# For TensorFlow, what numerical value to give to missing values, where numeric values are standardized.
# So 0 is the center of the distribution, and for a Normal distribution +-5 is 5 standard deviations away from the center.
# In many cases, an out-of-bounds value is a good way to represent missings, but in some cases the mean (0) may be better.
#tf_nan_impute_value = -5

# Internal threshold for number of rows x number of columns to trigger certain statistical
# techniques (small data recipe, like including one-hot encoding for all model types, and smaller learning rate)
# to increase model accuracy
#statistical_threshold_data_size_small = 100000

# Internal threshold for number of rows x number of columns to trigger certain statistical
# techniques (fewer genes created, removal of high max_depth for tree models, etc.) that can speed up modeling.
# Also controls maximum rows used in training the final model,
# by sampling statistical_threshold_data_size_large / number of columns rows
#statistical_threshold_data_size_large = 500000000

# Internal threshold for number of rows x number of columns to trigger sampling for auxiliary data uses,
# like imbalanced dataset detection and bootstrap scoring sample size and iterations
#aux_threshold_data_size_large = 10000000

# Internal threshold for the set-based method for sampling without replacement.
# Can be 10x faster than the np_random_choice internal optimized method, and
# up to 30x faster than np.random.choice, to sample e.g. 250k rows from 1B rows.
#set_method_sampling_row_limit = 5000000

# Internal threshold for number of rows x number of columns to trigger certain changes in performance
# (fewer threads if beyond the large value) to help avoid OOM or unnecessary slowdowns
# (fewer threads if lower than the small value) to avoid excess forking of tasks
#performance_threshold_data_size_small = 100000

# Internal threshold for number of rows x number of columns to trigger certain changes in performance
# (fewer threads if beyond the large value) to help avoid OOM or unnecessary slowdowns
# (fewer threads if lower than the small value) to avoid excess forking of tasks
#performance_threshold_data_size_large = 100000000

# Threshold for number of rows x number of columns to trigger GPU as the default for models like XGBoost GBM.
#gpu_default_threshold_data_size_large = 1000000

# Maximum fraction of mismatched columns to allow between train and either valid or test. Beyond this value the experiment will fail with an invalid data error.
#max_relative_cols_mismatch_allowed = 0.5

# Enable various rules to handle wide (Num. columns > Num. rows) datasets ('auto'/'on'/'off'). Setting 'on' forces rules to be enabled regardless of columns.
#enable_wide_rules = "auto"

# If columns > wide_factor * rows, then enable wide rules if 'auto'. For columns > rows, random forest is always enabled.
#wide_factor = 5.0
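# Example (illustrative, not a default): for a very wide dataset (e.g. genomics
# data with far more columns than rows), one might force wide rules on:
#   enable_wide_rules = "on"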

# Maximum number of columns to start an experiment. This threshold exists to constrain the complexity and the length of Driverless AI's processes.
#max_cols = 10000000

# Largest number of rows to use for column stats; otherwise sample randomly
#max_rows_col_stats = 1000000

# Largest number of rows to use for CV-in-CV for target encoding when doing the Gini scoring test
#max_rows_cv_in_cv_gini = 100000

# Largest number of rows to use for constant model fit; otherwise sample randomly
#max_rows_constant_model = 1000000

# Largest number of rows to use for final ensemble base model fold scores; otherwise sample randomly
#max_rows_final_ensemble_base_model_fold_scores = 1000000

# Largest number of rows to use for the final ensemble blender for regression and binary (scaled down linearly by number of classes for multiclass for >= 10 classes); otherwise sample randomly.
#max_rows_final_blender = 1000000

# Smallest number of rows (or the number of rows, if fewer than this) to use for the final ensemble blender.
#min_rows_final_blender = 10000

# Largest number of rows to use for the final training score (no holdout); otherwise sample randomly
#max_rows_final_train_score = 5000000

# Largest number of rows to use for final ROC, lift-gains, confusion matrix, residual, and actual vs. predicted plots; otherwise sample randomly
#max_rows_final_roccmconf = 1000000

# Largest number of rows to use for final holdout scores; otherwise sample randomly
#max_rows_final_holdout_score = 5000000

# Largest number of rows to use for final holdout bootstrap scores; otherwise sample randomly
#max_rows_final_holdout_bootstrap_score = 1000000

# Whether to obtain permutation feature importance on original features for reporting in logs and the summary zip file
# (as files with pattern fs_*.json or fs_*.tab.txt).
# This computes feature importance on a single un-tuned model
# (typically LightGBM with pre-defined un-tuned hyperparameters)
# and a simple set of features (encoding typically is frequency encoding or target encoding).
# Features with low importance are automatically dropped if there are many original features,
# or a model with feature selection by permutation importance is created, if interpretability is high enough, in order to see if it gives a better score.
# One can manually drop low-importance features, but this can be risky, as transformers or hyperparameters might recover
# their usefulness.
# Permutation importance is obtained by:
# 1) Transforming categoricals to frequency or target encoding features.
# 2) Fitting that model on many folds, different data sizes, and slightly varying hyperparameters.
# 3) Predicting with that model for each feature, where each feature has its data shuffled.
# 4) Computing the score on each shuffled prediction.
# 5) Computing the difference between the unshuffled score and the shuffled score to arrive at a delta score.
# 6) The delta score becomes the variable importance once normalized by the maximum.
# Positive delta scores indicate the feature helped the model score,
# while negative delta scores indicate the feature hurt the model score.
# The normalized scores are stored in the fs_normalized_* files in the summary zip.
# The unnormalized scores (actual delta scores) are stored in the fs_unnormalized_* files in the summary zip.
# AutoDoc has similar functionality for providing permutation importance on original features:
# it takes the specific final model of an experiment and runs the training data set through permutation importance to get original importance,
# so shuffling of original features is performed and the full pipeline is computed on each shuffled set of original features.
#
#orig_features_fs_report = false

# Maximum number of rows when doing permutation feature importance, reduced by (stratified) random sampling.
#
#max_rows_fs = 500000

#max_rows_leak = 100000

# How many workers to use for feature selection by permutation for the predict phase.
# (0 = auto, > 0: min of DAI value and this value, < 0: exactly the negative of this value)
#
#max_workers_fs = 0

# How many workers to use for shift and leakage checks if using LightGBM on CPU.
# (0 = auto, > 0: min of DAI value and this value, < 0: exactly the negative of this value)
#
#max_workers_shift_leak = 0

# Maximum number of columns selected out of the original set of columns, using feature selection.
# The selection is based upon how well target encoding (or frequency encoding if not available) performs
# on categoricals and numerics treated as categoricals.
# This is useful to reduce the final model complexity. First the best
# [max_orig_cols_selected] features are found through feature selection methods, and then
# these features are used in feature evolution (to derive other features) and in modelling.
#
#max_orig_cols_selected = 10000000
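# Example (illustrative, not a default): to force a smaller, simpler pipeline on a
# very wide dataset, one might restrict how many original columns are considered:
#   max_orig_cols_selected = 500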

# Maximum number of numeric columns selected, above which feature selection is performed;
# same as max_orig_cols_selected but for numeric columns.
#max_orig_numeric_cols_selected = 10000000

#max_orig_nonnumeric_cols_selected_default = 300

# Maximum number of non-numeric columns selected, above which feature selection is performed on all features. Same as max_orig_numeric_cols_selected but for categorical columns.
# If set to -1, then auto mode, which uses max_orig_nonnumeric_cols_selected_default, but for small data this can be increased up to 10x larger.
#
#max_orig_nonnumeric_cols_selected = -1

# The factor times max_orig_cols_selected by which column selection is based upon no target encoding and no treating numerical as categorical,
# in order to limit the performance cost of feature engineering
#max_orig_cols_selected_simple_factor = 2

# Like max_orig_cols_selected, but the column count above which the special individual with original columns reduced is added.
#
#fs_orig_cols_selected = 10000000

# Like max_orig_numeric_cols_selected, but applicable to the special individual with original columns reduced.
# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
#
#fs_orig_numeric_cols_selected = 10000000

# Like max_orig_nonnumeric_cols_selected, but applicable to the special individual with original columns reduced.
# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
#
#fs_orig_nonnumeric_cols_selected = 200

# Like max_orig_cols_selected_simple_factor, but applicable to the special individual with original columns reduced.
#fs_orig_cols_selected_simple_factor = 2

#predict_shuffle_inside_model = true

#use_native_cats_for_lgbm_fs = true

#orig_stddev_max_cols = 1000

# Maximum allowed fraction of unique values for integer and categorical columns (otherwise will treat the column as an ID and drop it)
#max_relative_cardinality = 0.95

# Maximum allowed number of unique values for integer and categorical columns (otherwise will treat the column as an ID and drop it)
#max_absolute_cardinality = 1000000

# Whether to treat some numerical features as categorical.
# For instance, sometimes an integer column may not represent a numerical feature but
# represent different numerical codes instead.
# Disabling this is very restrictive, since then even columns with few categorical levels that happen to be numerical
# in value will not be encoded like a categorical.
#
#num_as_cat = true

# Max number of unique values for integer/real columns to be treated as categoricals (test applies to the first statistical_threshold_data_size_small rows only)
#max_int_as_cat_uniques = 50

# Max number of unique values for integer/real columns to be treated as categoricals (test applies to the first statistical_threshold_data_size_small rows only). Applies to an integer or real numerical feature that violates Benford's law, and so is ID-like but not entirely an ID.
#max_int_as_cat_uniques_if_not_benford = 10000

# When the fraction of non-numeric (and non-missing) values is less than or equal to this value, consider the
# column numeric. Can help with minor data quality issues for experimentation; > 0 is not recommended for production,
# since type inconsistencies can occur. Note: replaces non-numeric values with missing values
# at the start of the experiment, so some information is lost, but the column is now treated as numeric, which can help.
# If < 0, then disabled.
# If == 0, then if the number of rows <= max_rows_col_stats, convert any column of strings of numbers to numeric type.
#
#max_fraction_invalid_numeric = 0.0

# Number of folds for models used during the feature engineering process.
# Increasing this will put a lower fraction of data into validation and more into training
# (e.g., num_folds=3 means 67%/33% training/validation splits).
# Actual value will vary for small or big data cases.
#
#num_folds = 3

#fold_balancing_repeats_times_rows = 100000000.0

#max_fold_balancing_repeats = 10

#fixed_split_seed = 0

#show_fold_stats = true

# For multiclass problems only. Whether to allow different sets of target classes across (cross-)validation
# fold splits. Especially important when passing a fold column that isn't balanced w.r.t. class distribution.
#
#allow_different_classes_across_fold_splits = true

# Accuracy setting at and above which full cross-validation (multiple folds) is enabled during feature evolution,
# as opposed to only a single holdout split (e.g. 2/3 train and 1/3 validation holdout)
#
#full_cv_accuracy_switch = 9

# Accuracy setting at and above which a stacked ensemble is enabled as the final model.
# Stacking commences at the end of the feature evolution process.
# It quite often leads to better model performance, but it does increase the complexity
# and execution time of the final model.
#
#ensemble_accuracy_switch = 5

# Number of fold splits to use for ensemble_level >= 2.
# The ensemble modelling may require predictions to be made on out-of-fold samples,
# hence the data needs to be split on different folds to generate these predictions.
# Fewer folds (like 2 or 3) normally create more stable models, but may be less accurate.
# More folds can reach higher accuracy at the expense of more time, but the performance
# may be less stable when the training data is not enough (i.e. higher chance of overfitting).
# Actual value will vary for small or big data cases.
#
#num_ensemble_folds = 4
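# Example (illustrative values, not defaults): to trade runtime for more stable
# validation on a mid-sized dataset, one might use more fold splits with a repeat:
#   num_ensemble_folds = 5
#   fold_reps = 2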

# Includes pickles of (train_idx, valid_idx) tuples (numpy row indices for the original training data)
# for all internal validation folds in the experiment summary zip. For debugging.
#
#save_validation_splits = false

# Number of repeats for each fold for all validation
# (modified slightly for small or big data cases)
#
#fold_reps = 1

#max_num_classes_hard_limit = 10000

# Maximum number of classes to allow for a classification problem.
# A high number of classes may make certain processes of Driverless AI time-consuming.
# Memory requirements also increase with a higher number of classes
#
#max_num_classes = 1000

# Maximum number of classes to compute ROC and the confusion matrix for,
# beyond which the roc_reduce_type choice for reduction is applied.
# Too many classes can take much longer than model building time.
#
#max_num_classes_compute_roc = 200

# Maximum number of classes to show in the GUI for the confusion matrix, showing the first max_num_classes_client_and_gui labels.
# Beyond 6 classes the diagnostics launched from the GUI are visually truncated.
# This will only modify client-GUI launched diagnostics if changed in config.toml and the server is restarted,
# while this value can be changed in expert settings to control experiment plots.
#
#max_num_classes_client_and_gui = 10

# If there are too many classes when computing the ROC,
# reduce by "rows" by randomly sampling rows,
# or reduce by truncating classes to no more than max_num_classes_compute_roc.
# If there are sufficient rows for the class count, reducing by rows is possible.
#
#roc_reduce_type = "rows"
985
986#min_roc_sample_size = 1
987
988# Maximum number of rows to obtain confusion matrix related plots during feature evolution.
989# Does not limit final model calculation.
990#
991#max_rows_cm_ga = 500000
992
993# Number of actuals vs. predicted data points to use in order to generate in the relevant
994# plot/graph which is shown at the right part of the screen within an experiment.
995#num_actuals_vs_predicted = 100
996
# Whether to use feature_brain results even if running new experiments.
# Feature brain can be risky with some types of changes to experiment setup.
# Even rescoring may be insufficient, so by default this is false.
# For example, one experiment may have training=external validation by accident and get a high score,
# and while feature_brain_reset_score='on' means we will rescore, the model will already have seen
# the external validation data during training and leaked it as part of what it learned.
# If this is false, feature_brain_level just sets possible models to use and logs/notifies,
# but does not use these feature brain cached models.
#
#use_feature_brain_new_experiments = false

# Whether to reuse the dataset schema, such as data types set in the UI for each column, from the parent experiment ('on') or to ignore the original dataset schema and only use the new schema ('off').
# resume_data_schema=True is a basic form of data lineage, but it may not be desirable if data columns changed to incompatible data types like int to string.
# 'auto': for restart, retrain final pipeline, or refit best models, the default is to resume the data schema, but new experiments would not by default reuse the old schema.
# 'on': force reuse of data schema from parent experiment if possible
# 'off': don't reuse data schema in any case.
# The reuse of the column schema can also be disabled by:
# in UI: selecting Parent Experiment as None
# in client: setting resume_experiment_id to None
#resume_data_schema = "auto"

#resume_data_schema_old_logic = false

# Whether to show (or use) results from H2O.ai brain: the local caching and smart re-use of prior experiments,
# in order to generate more useful features and models for new experiments.
# See use_feature_brain_new_experiments for how new experiments by default do not use the brain cache.
# It can also be used to control checkpointing for experiments that have been paused or interrupted.
# DAI will use the H2O.ai brain cache if the cache file has
# a) any matching column names and types for a similar experiment type
# b) exactly matching classes
# c) exactly matching class labels
# d) matching basic time series choices
# e) interpretability of cache equal or lower
# f) main model (booster) allowed by new experiment.
# Level of brain to use (for chosen level, where higher levels will also do all lower level operations automatically)
# -1 = Don't use any brain cache and don't write any cache
# 0 = Don't use any brain cache but still write cache
# Use case: Want to save model for later use, but want current model to be built without any brain models
# 1 = smart checkpoint from latest best individual model
# Use case: Want to use latest matching model, but match can be loose, so needs caution
# 2 = smart checkpoint from H2O.ai brain cache of individual best models
# Use case: DAI scans through H2O.ai brain cache for best models to restart from
# 3 = smart checkpoint like level #1, but for entire population. Tune only if brain population insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 4 = smart checkpoint like level #2, but for entire population. Tune only if brain population insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 5 = like #4, but will scan over entire brain cache of populations to get best scored individuals
# (can be slower due to brain cache scanning if big cache)
# 1000 + feature_brain_level (above positive values) = use resumed_experiment_id and actual feature_brain_level,
# to use other specific experiment as base for individuals or population,
# instead of sampling from any old experiments
# GUI has 4 options and corresponding settings:
# 1) New Experiment: Uses feature brain level default of 2
# 2) New Experiment With Same Settings: Re-uses the same feature brain level as parent experiment
# 3) Restart From Last Checkpoint: Resets feature brain level to 1003 and sets experiment ID to resume from
# (continued genetic algorithm iterations)
# 4) Retrain Final Pipeline: Like Restart but also time=0 so skips any tuning and heads straight to final model
# (assumes had at least one tuning iteration in parent experiment)
# Other use cases:
# a) Restart on different data: Use same column names and fewer or more rows (applicable to 1 - 5)
# b) Re-fit only final pipeline: Like (a), but choose time=1 and feature_brain_level=3 - 5
# c) Restart with more columns: Add columns, so model builds upon old model built from old column names (1 - 5)
# d) Restart with focus on model tuning: Restart, then select feature_engineering_effort = 3 in expert settings
# e) can retrain final model but ignore any original features except those in final pipeline (normal retrain but set brain_add_features_for_new_columns=false)
# Notes:
# 1) In all cases, we first check the resumed experiment id if given, and then the brain cache
# 2) For Restart cases, may want to set min_dai_iterations to non-zero to force delayed early stopping, else may not be enough iterations to find better model.
# 3) A "New Experiment With Same Settings" of a Restart will use feature_brain_level=1003 for default Restart mode (revert to 2, or even 0 if want to start a fresh experiment otherwise)
#feature_brain_level = 2
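# Example (hypothetical values, mirroring the GUI's "Restart From Last Checkpoint" behavior described above):
# feature_brain_level = 1003
# resumed_experiment_id = "..."  # placeholder for the parent experiment's ID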

# Whether to smartly keep score to avoid re-munging/re-training/re-scoring steps for brain models ('auto'), always
# force all steps for all brain imports ('on'), or never rescore ('off').
# 'auto' only re-scores if a difference between the current and prior experiment warrants re-scoring, like column changes, metric changes, etc.
# 'on' is useful when smart similarity checking is not reliable enough.
# 'off' is useful when one knows one wants to keep exactly the same features and model for the final model refit, despite changes in seed or other behaviors
# in features that might change the outcome if re-scored before reaching the final model.
# If set off, then no limits are applied to features during brain ingestion,
# while brain_add_features_for_new_columns can be set to false to ignore any new columns in the data.
# In addition, any unscored individuals loaded from the parent experiment are not rescored when doing refit or retrain.
# Can also set refit_same_best_individual to true if the exact same best individual (highest scored model+features) should be used
# regardless of any scoring changes.
#
#feature_brain_reset_score = "auto"

#enable_strict_confict_key_check_for_brain = true

#allow_change_layer_count_brain = false

# Relative number of columns that must match between current reference individual and brain individual.
# 0.0: perfect match
# 1.0: all columns are different, worst match
# e.g. 0.1 implies no more than 10% of columns mismatch between reference set of columns and brain individual.
#
#brain_maximum_diff_score = 0.1

# Maximum number of brain individuals pulled from H2O.ai brain cache for feature_brain_level=1, 2
#max_num_brain_indivs = 3

# Save feature brain iterations every iter_num % feature_brain_iterations_save_every_iteration == 0, to be able to restart/refit with which_iteration_brain >= 0
# 0 means disable
#
#feature_brain_save_every_iteration = 0

# When doing restart or re-fit type feature_brain_level with resumed_experiment_id, choose which iteration to start from, instead of only last best
# -1 means just use last best
# Usage:
# 1) Run one experiment with feature_brain_iterations_save_every_iteration=1 or some other number
# 2) Identify which iteration brain dump one wants to restart/refit from
# 3) Restart/Refit from original experiment, setting which_iteration_brain to that number in expert settings
# Note: If restart from a tuning iteration, this will pull in entire scored tuning population and use that for feature evolution
#
#which_iteration_brain = -1

# When doing re-fit from feature brain, if columns or features change, the population of individuals used to refit from may change the order of which was best,
# leading to a better result being chosen (the false case). But sometimes one wants to see exactly the same model/features with only one feature added,
# in which case this should be set to true.
# E.g. if refitting with just 1 extra column and interpretability=1, then the final model will have the same features,
# with one more engineered feature applied to that new original feature.
#
#refit_same_best_individual = false

# When doing restart or re-fit of an experiment from feature brain,
# sometimes the user might change the data significantly, which warrants
# redoing the reduction of original features by feature selection, shift detection, and leakage detection.
# However, in other cases, if the data and all options are nearly (or exactly) identical, then these
# steps might change the features slightly (e.g. due to random seed if not setting reproducible mode),
# leading to changes in the features and model that are refitted. By default, restart and refit avoid
# these steps, assuming data and experiment setup have not changed significantly.
# If check_distribution_shift is forced to on (instead of auto), then this option is ignored.
# In order to ensure the exact same final pipeline is fitted, one should also set:
# 1) brain_add_features_for_new_columns false
# 2) refit_same_best_individual true
# 3) feature_brain_reset_score 'off'
# 4) force_model_restart_to_defaults false
# The score will still be reset if the experiment metric chosen changes,
# but changes to the scored model and features will be more frozen in place.
#
#restart_refit_redo_origfs_shift_leak = "[]"
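# Example: the four settings listed above combined, to keep the final pipeline frozen during a refit
# (a sketch using only values named in the comment above):
# brain_add_features_for_new_columns = false
# refit_same_best_individual = true
# feature_brain_reset_score = "off"
# force_model_restart_to_defaults = false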

# Directory, relative to data_directory, to store H2O.ai brain meta model files
#brain_rel_dir = "H2O.ai_brain"

# Maximum size in GB the brain will store.
# We reserve this storage to save data in order to ensure we can retrieve an experiment if
# for any reason it gets interrupted.
# -1: unlimited
# >=0: number of GB to limit brain to
#brain_max_size_GB = 20

# Whether to take any new columns and add additional features to the pipeline, even if doing retrain final model.
# In some cases, one might have a new dataset but only want to keep the same pipeline regardless of new columns,
# in which case one sets this to false. For example, new data might lead to new dropped features,
# due to shift or leak detection. To avoid a change of feature set, one can disable all dropping of columns,
# but set this to false to avoid adding any columns as new features,
# so the pipeline is perfectly preserved when changing data.
#
#brain_add_features_for_new_columns = true

# If restart/refit and the original model class is no longer available, be conservative
# and go back to defaults for that model class. If false, then try to keep the original hyperparameters,
# which can fail to work in general.
#
#force_model_restart_to_defaults = true

# Whether to enable early stopping.
# Early stopping refers to stopping the feature evolution/engineering process
# when there is no performance uplift after a certain number of iterations.
# After early stopping has been triggered, Driverless AI will initiate the ensemble
# process if selected.
#early_stopping = true

# Whether to enable early stopping per individual.
# Each individual in the genetic algorithm will stop early if there is no improvement,
# and it will no longer be mutated.
# Instead, the best individual will be additionally mutated.
#early_stopping_per_individual = true

# Minimum number of Driverless AI iterations before the feature evolution/engineering
# process may stop, even if the score is not improving. Driverless AI needs to run for at least that many
# iterations before deciding to stop. It can be seen as a safeguard against suboptimal (early)
# convergence.
#
#min_dai_iterations = 0

# Maximum features per model (and each model within the final model if ensemble) kept.
# Keeps top variable importance features, prunes the rest away, after each scoring.
# Final ensemble will exclude any pruned-away features and only train on kept features,
# but may contain a few new features due to fitting on different data view (e.g. new clusters)
# Final scoring pipeline will exclude any pruned-away features,
# but may contain a few new features due to fitting on different data view (e.g. new clusters)
# -1 means no restrictions except internally-determined memory and interpretability restrictions.
# Notes:
# * If interpretability > remove_scored_0gain_genes_in_postprocessing_above_interpretability, then
# every GA iteration post-processes features down to this value just after scoring them. Otherwise,
# only mutations of scored individuals will be pruned (until the final model where limits are strictly applied).
# * If ngenes_max is not also limited, then some individuals will have more genes and features until
# pruned by mutation or by preparation for final model.
# * E.g. to generally limit every iteration to exactly 1 feature, one must set nfeatures_max=ngenes_max=1
# and remove_scored_0gain_genes_in_postprocessing_above_interpretability=0, but the genetic algorithm
# will have a harder time finding good features.
#
#nfeatures_max = -1
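# Example from the note above: limit every iteration to exactly one feature
# (a sketch; this makes it harder for the genetic algorithm to find good features):
# nfeatures_max = 1
# ngenes_max = 1
# remove_scored_0gain_genes_in_postprocessing_above_interpretability = 0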

# Maximum genes (transformer instances) per model (and each model within the final model if ensemble) kept.
# Controls number of genes before features are scored, so just randomly samples genes if pruning occurs.
# If restriction occurs after scoring features, then aggregated gene importances are used for pruning genes.
# Instances include all possible transformers, including the original transformer for numeric features.
# -1 means no restrictions except internally-determined memory and interpretability restrictions
#
#ngenes_max = -1

# Like ngenes_max but controls minimum number of genes.
# Useful when DAI by default is making too few genes but many more are wanted.
# This can be useful when one has few input features, so DAI may remain conservative and not make many transformed features. But the user knows that some transformed features may be useful.
# E.g. only the target encoding transformer might have been chosen, and one wants DAI to explore many more possible input features at once.
#ngenes_min = -1

# Minimum features per model (and each model within the final model if ensemble) kept.
# Instances include all possible transformers, including the original transformer for numeric features.
# -1 means no restrictions except internally-determined memory and interpretability restrictions
#
#nfeatures_min = -1

# Whether to limit feature counts by interpretability setting via features_allowed_by_interpretability
#limit_features_by_interpretability = true

# Whether to use out-of-fold predictions of word-based CNN TensorFlow models as transformers for NLP, if TensorFlow is enabled
#enable_tensorflow_textcnn = "auto"

# Whether to use out-of-fold predictions of word-based Bi-GRU TensorFlow models as transformers for NLP, if TensorFlow is enabled
#enable_tensorflow_textbigru = "auto"

# Whether to use out-of-fold predictions of character-level CNN TensorFlow models as transformers for NLP, if TensorFlow is enabled
#enable_tensorflow_charcnn = "auto"

# Whether to use pretrained PyTorch models as transformers for NLP tasks. Fits a linear model on top of pretrained embeddings. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. GPU(s) are highly recommended. Reduce string_col_as_text_min_relative_cardinality closer to 0.0 and string_col_as_text_threshold closer to 0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_transformer = "auto"

# More rows can slow down the fitting process. Recommended values are less than 100000.
#pytorch_nlp_transformer_max_rows_linear_model = 50000

# Whether to use pretrained PyTorch models and fine-tune them for NLP tasks. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. These models only use the first text column, and can be slow to train. GPU(s) are highly recommended. Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_model = "auto"

# Select which pretrained PyTorch NLP model(s) to use. Non-default ones might have no MOJO support. Requires internet connection. Only if PyTorch models or transformers for NLP are set to 'on'.
#pytorch_nlp_pretrained_models = "['bert-base-uncased', 'distilbert-base-uncased', 'bert-base-multilingual-cased']"
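# Example (hypothetical): restrict to a single model from the default list above:
# pytorch_nlp_pretrained_models = "['bert-base-uncased']"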

# Max. number of epochs for TensorFlow models for making NLP features
#tensorflow_max_epochs_nlp = 2

# Accuracy setting equal and above which will add all enabled TensorFlow NLP models below at the start of the experiment for text-dominated problems
# when TensorFlow NLP transformers are set to auto. If set to on, this parameter is ignored.
# Otherwise, at lower accuracy, TensorFlow NLP transformations will only be created as a mutation.
#
#enable_tensorflow_nlp_accuracy_switch = 5

# Path to pretrained embeddings for TensorFlow NLP models, can be a path in the local file system or an S3 location (s3://).
# For example, download and unzip https://nlp.stanford.edu/data/glove.6B.zip
# tensorflow_nlp_pretrained_embeddings_file_path = "/path/on/server/to/glove.6B.300d.txt"
#
#tensorflow_nlp_pretrained_embeddings_file_path = ""

#tensorflow_nlp_pretrained_s3_access_key_id = ""

#tensorflow_nlp_pretrained_s3_secret_access_key = ""

# Allow training of all weights of the neural network graph, including the pretrained embedding layer weights. If disabled, then the embedding layer is frozen, but all other weights are still fine-tuned.
#tensorflow_nlp_pretrained_embeddings_trainable = false

#tensorflow_nlp_have_gpus_in_production = false

#bert_migration_timeout_secs = 600

#enable_bert_transformer_acceptance_test = false

#enable_bert_model_acceptance_test = false

# Whether to parallelize tokenization for BERT models/transformers.
#pytorch_tokenizer_parallel = true

# Number of epochs for fine-tuning of PyTorch NLP models. Larger values can increase accuracy but take longer to train.
#pytorch_nlp_fine_tuning_num_epochs = -1

# Batch size for PyTorch NLP models. Larger models and larger batch sizes will use more memory.
#pytorch_nlp_fine_tuning_batch_size = -1

# Maximum sequence length (padding length) for PyTorch NLP models. Larger models and larger padding lengths will use more memory.
#pytorch_nlp_fine_tuning_padding_length = -1

# Path to pretrained PyTorch NLP models. Note that this can be either a path in the local file system
# (/path/on/server/to/bert_models_folder), a URL or an S3 location (s3://).
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/bert_models.zip
# and unzip and store it in a directory on the instance where DAI is installed.
# ``pytorch_nlp_pretrained_models_dir=/path/on/server/to/bert_models_folder``
#
#pytorch_nlp_pretrained_models_dir = ""

#pytorch_nlp_pretrained_s3_access_key_id = ""

#pytorch_nlp_pretrained_s3_secret_access_key = ""

# Fraction of text columns out of all features to be considered a text-dominated problem
#text_fraction_for_text_dominated_problem = 0.3

# Fraction of text transformers to all transformers above which to trigger a text-dominated problem
#text_transformer_fraction_for_text_dominated_problem = 0.3

# Whether to reduce options for text-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#text_dominated_limit_tuning = true

# Whether to reduce options for image-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#image_dominated_limit_tuning = true

# Threshold for average string-is-text score as determined by internal heuristics.
# It decides when a string column will be treated as text (for an NLP problem) or just as
# a standard categorical variable.
# Higher values will favor string columns as categoricals, lower values will favor string columns as text.
# Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#string_col_as_text_threshold = 0.3

# Threshold for string columns to be treated as text during preview - should be less than string_col_as_text_threshold to allow data whose first 20 rows don't look like text to still work for text-only transformers (0.0 - text, 1.0 - string)
#string_col_as_text_threshold_preview = 0.1

# Minimum fraction of unique values for string columns to be considered as possible text (otherwise categorical)
#string_col_as_text_min_relative_cardinality = 0.1

# Minimum number of uniques for string columns to be considered as possible text (if not already)
#string_col_as_text_min_absolute_cardinality = 10000

# If disabled, require 2 or more alphanumeric characters for a token in Text (Count and TF/IDF) transformers; otherwise create tokens out of single alphanumeric characters. True means that 'Street 3' is tokenized into 'Street' and '3', while false means that it's tokenized into 'Street'.
#tokenize_single_chars = true

# Supported image types. URIs with these endings will be considered as image paths (local or remote).
#supported_image_types = "['jpg', 'jpeg', 'png', 'bmp', 'ppm', 'tif', 'tiff', 'JPG', 'JPEG', 'PNG', 'BMP', 'PPM', 'TIF', 'TIFF']"

# Whether to create absolute paths for images when importing datasets containing images. Can facilitate testing or re-use of frames for scoring.
#image_paths_absolute = false

# Whether to use pretrained deep learning models for processing of image data as part of the feature engineering pipeline. A column of URIs to images (jpg, png, etc.) will be converted to a numeric representation using ImageNet-pretrained deep learning models. If no GPUs are found, this must be set to 'on' to enable.
#enable_tensorflow_image = "auto"

# Supported ImageNet pretrained architectures for Image Transformer. Non-default ones will require internet access to download pretrained models from H2O S3 buckets (To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_1_11.zip and unzip inside tensorflow_image_pretrained_models_dir).
#tensorflow_image_pretrained_models = "['xception']"

# Dimensionality of feature (embedding) space created by Image Transformer. If more than one is selected, multiple transformers can be active at the same time.
#tensorflow_image_vectorization_output_dimension = "[100]"

# Enable fine-tuning of the ImageNet pretrained models used for the Image Transformer. Enabling this will slow down training, but should increase accuracy.
#tensorflow_image_fine_tune = false

# Number of epochs for fine-tuning of ImageNet pretrained models used for the Image Transformer.
#tensorflow_image_fine_tuning_num_epochs = 2

# The list of possible image augmentations to apply while fine-tuning the ImageNet pretrained models used for the Image Transformer. Details about individual augmentations can be found here: https://albumentations.ai/docs/.
#tensorflow_image_augmentations = "['HorizontalFlip']"

# Batch size for Image Transformer. Larger architectures and larger batch sizes will use more memory.
#tensorflow_image_batch_size = -1

# Path to pretrained image models.
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_1_11.zip,
# then extract it in a directory on the instance where Driverless AI is installed.
#
#tensorflow_image_pretrained_models_dir = "./pretrained/image/"

# Max. number of seconds to wait for image download if images are provided by URL
#image_download_timeout = 60

# Maximum fraction of missing elements in a string column for it to be considered as possible image paths (URIs)
#string_col_as_image_max_missing_fraction = 0.1

# Fraction of (unique) image URIs that need to have valid endings (as defined by string_col_as_image_valid_types) for a string column to be considered as image data
#string_col_as_image_min_valid_types_fraction = 0.8

# Whether to use GPU(s), if available, to transform images into embeddings with the Image Transformer. Can lead to significant speedups.
#tensorflow_image_use_gpu = true

# Nominally, the time dial controls the search space, with higher time trying more options, but any keys present in this dictionary will override the automatic choices.
# e.g. ``params_image_auto_search_space="{'augmentation': ['safe'], 'crop_strategy': ['Resize'], 'optimizer': ['AdamW'], 'dropout': [0.1], 'epochs_per_stage': [5], 'warmup_epochs': [0], 'mixup': [0.0], 'cutmix': [0.0], 'global_pool': ['avg'], 'learning_rate': [3e-4]}"``
# Options, e.g. used for time>=8
# # Overfit Protection Options:
# 'augmentation': ``["safe", "semi_safe", "hard"]``
# 'crop_strategy': ``["Resize", "RandomResizedCropSoft", "RandomResizedCropHard"]``
# 'dropout': ``[0.1, 0.3, 0.5]``
# # Global Pool Options:
# avgmax -- sum of AVG and MAX poolings
# catavgmax -- concatenation of AVG and MAX poolings
# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/adaptive_avgmax_pool.py
# ``'global_pool': ['avg', 'avgmax', 'catavgmax']``
# # Regression: No MixUp and CutMix:
# ``'mixup': [0.0]``
# ``'cutmix': [0.0]``
# # Classification: Beta distribution coeff to generate weights for MixUp:
# ``'mixup': [0.0, 0.4, 1.0, 3.0]``
# ``'cutmix': [0.0, 0.4, 1.0, 3.0]``
# # Optimization Options:
# ``'epochs_per_stage': [5, 10, 15]`` # from 40 to 135 epochs
# ``'warmup_epochs': [0, 0.5, 1]``
# ``'optimizer': ["AdamW", "SGD"]``
# ``'learning_rate': [1e-3, 3e-4, 1e-4]``
#params_image_auto_search_space = "{}"

# Nominally, the accuracy dial controls the architectures considered if this is left empty,
# but one can choose specific ones. The options in the list are ordered by complexity.
#image_auto_arch = "[]"

# Any images smaller are upscaled to the minimum. Default is 64, but can be as small as 32 given the pooling layers used.
#image_auto_min_shape = 64

# 0 means automatic based upon time dial of min(1, time//2).
#image_auto_num_final_models = 0

# 0 means automatic based upon time dial of max(4 * (time - 1), 2).
#image_auto_num_models = 0

# 0 means automatic based upon time dial of time + 1 if time < 6 else time - 1.
#image_auto_num_stages = 0

# 0 means automatic based upon time dial or number of models and stages
# set by image_auto_num_models and image_auto_num_stages.
#image_auto_iterations = 0

# 0.0 means automatic based upon the current stage, where stage 0 uses half, stage 1 uses 3/4, and stage 2 uses the full image.
# One can pass 1.0 to override and always use the full image. 0.5 would mean use half.
#image_auto_shape_factor = 0.0

# Control maximum number of cores to use for image auto model parallel data management. 0 will disable mp: https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html
#max_image_auto_ddp_cores = 10

# Percentile value cutoff of input text token lengths for NLP deep learning models
#text_dl_token_pad_percentile = 99

# Maximum token length of input text to be used in NLP deep learning models
#text_dl_token_pad_max = 512

1432# Interpretability setting equal and above which will use automatic monotonicity constraints in
1433# XGBoostGBM/LightGBM/DecisionTree models.
1434#
1435#monotonicity_constraints_interpretability_switch = 7
1436
1437# For models that support monotonicity constraints, and if enabled, show automatically determined monotonicity constraints for each feature going into the model based on its correlation with the target. 'low' shows only monotonicity constraint direction. 'medium' shows correlation of positively and negatively constraint features. 'high' shows all correlation values.
1438#monotonicity_constraints_log_level = "medium"
1439
1440# Threshold, of Pearson product-moment correlation coefficient between numerical or encoded transformed
1441# feature and target, above (below negative for) which will enforce positive (negative) monotonicity
1442# for XGBoostGBM, LightGBM and DecisionTree models.
1443# Enabled when interpretability >= monotonicity_constraints_interpretability_switch config toml value.
1444# Only if monotonicity_constraints_dict is not provided.
1445#
1446#monotonicity_constraints_correlation_threshold = 0.1

# If enabled, only monotonic features with +1/-1 constraints will be passed to the model(s), and features
# without monotonicity constraints (0, as set by monotonicity_constraints_dict or determined automatically)
# will be dropped. Otherwise all features will be in the model.
# Only active when interpretability >= monotonicity_constraints_interpretability_switch or
# monotonicity_constraints_dict is provided.
#
#monotonicity_constraints_drop_low_correlation_features = false

# Manual override for monotonicity constraints. Mapping of original numeric features to the desired constraint
# (1 for positive, -1 for negative, or 0 to disable; True can be set for automatic handling, False is the same as 0).
# Features that are not listed here are treated automatically,
# and so get no constraint (i.e., 0) if interpretability < monotonicity_constraints_interpretability_switch;
# otherwise the constraint is automatically determined from the correlation between each feature and the target.
# Example: {'PAY_0': -1, 'PAY_2': -1, 'AGE': -1, 'BILL_AMT1': 1, 'PAY_AMT1': -1}
#
#monotonicity_constraints_dict = "{}"
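
# Example (illustrative, with hypothetical column names): enable constraints for
# every experiment regardless of the interpretability dial, pin two features
# explicitly, and let all other features be determined automatically:
#monotonicity_constraints_interpretability_switch = 1
#monotonicity_constraints_dict = "{'PAY_0': -1, 'BILL_AMT1': 1}"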

# Exploring feature interactions can be important in gaining better predictive performance.
# The interaction can take multiple forms (i.e. feature1 + feature2 or feature1 * feature2 + ... featureN).
# Although certain machine learning algorithms (like tree-based methods) can do well at
# capturing these interactions as part of their training process, explicitly generating them may
# still help those (or other) algorithms yield better performance.
# The depth of the interaction level (as in "up to" how many features may be combined at
# once to create one single feature) can be specified to control the complexity of the
# feature engineering process. For transformers that use both numeric and categorical features, this constrains
# the number of each type, not the total number. Higher values might be able to make more predictive models
# at the expense of time (-1 means automatic).
#
#max_feature_interaction_depth = -1

# Instead of sampling from min to max (up to max_feature_interaction_depth unless all specified)
# columns allowed for each transformer (0), choose a fixed non-zero number of columns to use.
# Can be set equal to the number of columns to use all columns for each transformer, if allowed by each transformer.
# A negative value -n can be chosen to do a 50/50 mix of sampled and fixed n features.
#
#fixed_feature_interaction_depth = 0

# Accuracy setting at or above which tuning of model parameters is enabled.
# Only applicable if parameter_tuning_num_models=-1 (auto)
#tune_parameters_accuracy_switch = 3

# Accuracy setting at or above which tuning of the target transform for regression is enabled.
# This is useful for time series when, instead of predicting the actual target value, it
# might be better to predict a transformed target variable like sqrt(target) or log(target)
# as a means to control for outliers.
#tune_target_transform_accuracy_switch = 5

# Select a target transformation for regression problems. Must be one of: ['auto',
# 'identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'log_noclip', 'square',
# 'sqrt', 'double_sqrt', 'inverse', 'anscombe', 'logit', 'sigmoid'].
# If set to 'auto', will automatically pick the best target transformer (if accuracy is set to
# tune_target_transform_accuracy_switch or larger, considering the interpretability level of each target transformer),
# otherwise will fall back to 'identity_noclip' (easiest to interpret, Shapley values are in the original space, etc.).
# All transformers except for 'center', 'standardize', 'identity_noclip' and 'log_noclip' perform clipping
# to constrain the predictions to the domain of the target in the training data. Use 'center', 'standardize',
# 'identity_noclip' or 'log_noclip' to disable clipping and to allow predictions outside of the target domain observed in
# the training data (for parametric models or custom models that support extrapolation).
#
#target_transformer = "auto"

# Select the list of target transformers to use for tuning. Only for target_transformer='auto' and accuracy >= tune_target_transform_accuracy_switch.
#
#target_transformer_tuning_choices = "['identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'square', 'sqrt', 'double_sqrt', 'anscombe', 'logit', 'sigmoid']"
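
# Example (illustrative): for a strictly positive, right-skewed regression target
# where large outliers dominate the loss, a fixed log transform can be forced
# instead of letting 'auto' tune it:
#target_transformer = "log"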

# Tournament style (method to decide which models are best at each iteration)
# 'auto' : Choose based upon accuracy and interpretability
# 'uniform' : all individuals in the population compete to win as best (can lead to all, e.g., LightGBM models in the final ensemble, which may not improve ensemble performance due to lack of diversity)
# 'model' : individuals with the same model type compete (good if multiple models do well but some models that do not do as well still contribute to improving the ensemble)
# 'feature' : individuals with similar feature types compete (good if target encoding, frequency encoding, and other feature sets lead to good results)
# 'fullstack' : Choose among optimal model and feature types
# 'model' and 'feature' styles preserve at least one winner for each type (and so 2 total individuals of each type after mutation)
# In each case, a round-robin approach is used to choose the best scores among the model types to choose from.
# If enable_genetic_algorithm=='Optuna', then every individual is self-mutated without any tournament
# during the genetic algorithm. The tournament is only used to prune down individuals for, e.g.,
# tuning -> evolution and evolution -> final model.
#
#tournament_style = "auto"

# Interpretability above which the 'uniform' tournament style is used
#tournament_uniform_style_interpretability_switch = 8

# Accuracy below which the uniform style is used if tournament_style = 'auto' (regardless of other accuracy tournament style switch values)
#tournament_uniform_style_accuracy_switch = 6

# Accuracy at or above which the model style is used if tournament_style = 'auto'
#tournament_model_style_accuracy_switch = 6

# Accuracy at or above which the feature style is used if tournament_style = 'auto'
#tournament_feature_style_accuracy_switch = 13

# Accuracy at or above which the fullstack style is used if tournament_style = 'auto'
#tournament_fullstack_style_accuracy_switch = 13

# Whether to use the penalized score for the GA tournament or the actual score
#tournament_use_feature_penalized_score = true

# Whether to keep poor scores for small data (<10k rows) in case exploration will find a good model.
# Sets tournament_remove_poor_scores_before_evolution_model_factor=1.1,
# tournament_remove_worse_than_constant_before_evolution=false,
# tournament_keep_absolute_ok_scores_before_evolution_model_factor=1.1,
# tournament_remove_poor_scores_before_final_model_factor=1.1,
# tournament_remove_worse_than_constant_before_final_model=true
#tournament_keep_poor_scores_for_small_data = true

# Factor (compared to best score plus each score) beyond which to drop poorly scoring models before evolution.
# This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_evolution_model_factor = 0.7

# Before evolution (after tuning), whether to remove models that are worse than a constant prediction model (optimized to the scorer)
#tournament_remove_worse_than_constant_before_evolution = true

# Before evolution (after tuning), threshold on a scale of 0 (perfect) to 1 (constant model) below which to keep ok scores by absolute value.
#tournament_keep_absolute_ok_scores_before_evolution_model_factor = 0.2

# Factor (compared to best score) beyond which to drop poorly scoring models before building the final ensemble. This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_final_model_factor = 0.3

# Before the final model (after evolution), whether to remove models that are worse than a constant prediction model (optimized to the scorer)
#tournament_remove_worse_than_constant_before_final_model = true

# Driverless AI uses a genetic algorithm (GA) to find the best features, best models and
# best hyperparameters for these models. The GA facilitates getting good results while not
# requiring running/trying every possible model/feature/parameter. This version of the GA has
# reinforcement learning elements - it uses a form of exploration-exploitation to reach
# optimum solutions. This means it will capitalise on models/features/parameters that seem
# to be working well and continue to exploit them even more, while allowing some room for
# trying new (and semi-random) models/features/parameters to avoid settling on a local
# minimum.
# These models/features/parameters tried are what we call individuals of a population. More
# individuals mean more models/features/parameters to be tried and compete to find the best
# ones.
#num_individuals = 2

# Set a fixed number of individuals (if > 0) - useful to compare different hardware configurations. If you want 3 individuals in the GA race to be preserved, choose 6, since 1 mutatable loser is needed per surviving individual.
#fixed_num_individuals = 0

#max_fold_reps_hard_limit = 20

# Number of unique targets or fold counts after which to switch to faster/simpler non-natural sorting and printouts
#sanitize_natural_sort_limit = 1000

# Number of fold ids to report cardinality for, both most common (head) and least common (tail)
#head_tail_fold_id_report_length = 30

# Whether target encoding (CV target encoding, weight of evidence, etc.) could be enabled.
# Target encoding refers to several different feature transformations (primarily focused on
# categorical data) that aim to represent the feature using information of the actual
# target variable. A simple example is to use the mean of the target to replace each
# unique category of a categorical feature. These types of features can be very predictive,
# but are prone to overfitting and require more memory, as they need to store mappings of
# the unique categories and the target values.
#
#enable_target_encoding = "auto"

# For target encoding, whether a model is used to compute Ginis for checking the sanity of the transformer. Requires cvte_cv_in_cv to be enabled. If enabled, CV-in-CV isn't done in case the check fails.
#cvte_cv_in_cv_use_model = false

# For target encoding,
# whether an outer level of cross-fold validation is performed,
# in cases when GINI is detected to flip sign (or have inconsistent sign for weight of evidence)
# between fit_transform on training, transform on training, and transform on validation data.
# The degree to which GINI is poor is also used to perform fold-averaging of look-up tables instead
# of using global look-up tables.
#
#cvte_cv_in_cv = true

# For target encoding,
# when an outer level of cross-fold validation is performed,
# increase the number of outer folds or abort target encoding when GINI between feature and target
# is not close between fit_transform on training, transform on training, and transform on validation data.
#
#cv_in_cv_overconfidence_protection = "auto"

#cv_in_cv_overconfidence_protection_factor = 3.0

#enable_lexilabel_encoding = "off"

#enable_isolation_forest = "off"
# Whether one-hot encoding could be enabled. If auto, then only applied for small data and GLM.
#enable_one_hot_encoding = "auto"

# Limit the number of output features (total number of bins) created by all BinnerTransformers based on this
# value, scaled by accuracy, interpretability and dataset size. 0 means unlimited.
#binner_cardinality_limiter = 50

# Whether simple binning of numeric features should be enabled by default. If auto, then only for
# GLM/FTRL/TensorFlow/GrowNet for time series or for interpretability >= 6. Binning can help linear (or simple)
# models by exposing more signal for features that are not linearly correlated with the target. Note that
# NumCatTransformer and NumToCatTransformer already do binning, but also perform target encoding, which makes them
# less interpretable. The BinnerTransformer is more interpretable, and also works for time series.
#enable_binning = "auto"

# 'tree' uses XGBoost to find optimal split points for binning of numeric features.
# 'quantile' uses quantile-based binning. Might fall back to quantile-based binning if there are too many classes or
# not enough unique values.
#binner_bin_method = "['tree']"

# If enabled, will attempt to reduce the number of bins during binning of numeric features.
# Applies to both tree-based and quantile-based bins.
#binner_minimize_bins = true

# Given a set of bins (cut points along min...max), the encoding scheme converts the original
# numeric feature values into the values of the output columns (one column per bin, and one extra bin for
# missing values, if any).
# Piecewise linear is 0 left of the bin, 1 right of the bin, and grows linearly from 0 to 1 inside the bin.
# Binary is 1 inside the bin and 0 outside the bin. Missing value bin encoding is always binary, either 0 or 1.
# If there are no missing values in the data, then there is no missing value bin.
# Piecewise linear helps to encode growing values and keeps smooth transitions across the bin
# boundaries, while binary is best suited for detecting specific values in the data.
# Both are great at providing features to models that otherwise lack non-linear pattern detection.
#binner_encoding = "['piecewise_linear', 'binary']"

# If enabled (default), include the original feature value as an output feature for the BinnerTransformer.
# This ensures that the BinnerTransformer never has less signal than the OriginalTransformer, since they can
# be chosen exclusively.
#
#binner_include_original = true
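
# Example (illustrative): force binning on with tree-based bins and binary
# encoding only, e.g. to expose specific value ranges to a GLM:
#enable_binning = "on"
#binner_bin_method = "['tree']"
#binner_encoding = "['binary']"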

#isolation_forest_nestimators = 200

# Transformer display names to indicate which transformers to use in the experiment.
# More information on these transformers can be viewed here:
# http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/transformations.html
# This section allows including/excluding these transformations and may be useful when
# simpler (more interpretable) models are sought at the expense of accuracy
# (i.e., transformers to use independent of the interpretability setting).
# for multi-class: '['NumCatTETransformer', 'TextLinModelTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'ClusterDistTransformer',
# 'WeightOfEvidenceTransformer', 'TruncSVDNumTransformer', 'CVCatNumEncodeTransformer',
# 'DatesTransformer', 'TextTransformer', 'OriginalTransformer',
# 'NumToCatWoETransformer', 'NumToCatTETransformer', 'ClusterTETransformer',
# 'InteractionsTransformer']'
# for regression/binary: '['TextTransformer', 'ClusterDistTransformer',
# 'OriginalTransformer', 'TextLinModelTransformer', 'NumToCatTETransformer',
# 'DatesTransformer', 'WeightOfEvidenceTransformer', 'InteractionsTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'NumCatTETransformer',
# 'NumToCatWoETransformer', 'TruncSVDNumTransformer', 'ClusterTETransformer',
# 'CVCatNumEncodeTransformer']'
# This list appears in the experiment logs (search for 'Transformers used')
#
#included_transformers = "[]"
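
# Example (illustrative): restrict feature engineering to untransformed and
# date-derived features only (names as they appear in the lists above):
#included_transformers = "['OriginalTransformer', 'DatesTransformer']"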

# Auxiliary to included_transformers
# e.g. to disable all Target Encoding: excluded_transformers =
# '['NumCatTETransformer', 'CVTargetEncodeF', 'NumToCatTETransformer',
# 'ClusterTETransformer']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_transformers = "[]"

# Exclude list of genes (i.e. genes (built on top of transformers) to not use,
# independent of the interpretability setting).
# Some transformers are used by multiple genes, so this allows different control over feature engineering.
# for multi-class: '['InteractionsGene', 'WeightOfEvidenceGene',
# 'NumToCatTargetEncodeSingleGene', 'OriginalGene', 'TextGene', 'FrequentGene',
# 'NumToCatWeightOfEvidenceGene', 'NumToCatWeightOfEvidenceMonotonicGene',
# 'CvTargetEncodeSingleGene', 'DateGene', 'NumToCatTargetEncodeMultiGene',
# 'DateTimeGene', 'TextLinRegressorGene', 'ClusterIDTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'TruncSvdNumGene', 'ClusterIDTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'CvTargetEncodeMultiGene', 'TextLinClassifierGene',
# 'NumCatTargetEncodeSingleGene', 'ClusterDistGene']'
# for regression/binary: '['CvTargetEncodeSingleGene', 'NumToCatTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'ClusterIDTargetEncodeSingleGene', 'TextLinRegressorGene',
# 'CvTargetEncodeMultiGene', 'ClusterDistGene', 'OriginalGene', 'DateGene',
# 'ClusterIDTargetEncodeMultiGene', 'NumToCatTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'TextLinClassifierGene', 'WeightOfEvidenceGene',
# 'FrequentGene', 'TruncSvdNumGene', 'InteractionsGene', 'TextGene',
# 'DateTimeGene', 'NumToCatWeightOfEvidenceGene',
# 'NumToCatWeightOfEvidenceMonotonicGene', 'NumCatTargetEncodeSingleGene']'
# This list appears in the experiment logs (search for 'Genes used')
# e.g. to disable the interactions gene, use: excluded_genes =
# '['InteractionsGene']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_genes = "[]"

# "Include specific models" lets you choose a set of models that will be considered during experiment training. The
# individual model settings and their AUTO / ON / OFF values mean the following: AUTO lets the internal decision mechanisms determine
# whether the model should be used during training; ON will try to force the use of the model; OFF turns the model
# off during training (it is the equivalent of deselecting the model in the "Include specific models" picker).
#
#included_models = "[]"
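
# Example (illustrative; the exact display names must match DAI's model list,
# so treat these names as hypothetical): only consider two tree-based models:
#included_models = "['LightGBMModel', 'XGBoostGBMModel']"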

# Auxiliary to included_models
#excluded_models = "[]"

#included_scorers = "[]"

# Select transformers to be used for preprocessing before other transformers operate.
# Preprocessing transformers can potentially take any original features and output
# arbitrary features, which will then be used by the normal layer of transformers,
# whose selection is controlled by the toml included_transformers or via the GUI
# "Include specific transformers".
# Notes:
# 1) Preprocessing transformers (and all other layers of transformers) are part of the python and (if applicable) mojo scoring packages.
# 2) Any BYOR transformer recipe or native DAI transformer can be used as a preprocessing transformer.
# So, e.g., a preprocessing transformer can do interactions, string concatenations, or date extractions as a preprocessing step,
# and the next layer of Date and DateTime transformers will use that as input data.
# Caveats:
# 1) One cannot currently do a time-series experiment on a time_column that hasn't yet been made (setup of the experiment only knows about original data, not transformed data).
# However, one can use a run-time data recipe to (e.g.) convert a float date-time into a string date-time, and this will
# be used by DAI's Date and DateTime transformers as well as auto-detection of time series.
# 2) In order to do a time series experiment with the GUI/client auto-selecting groups, periods, etc., the dataset
# must have the time column and groups prepared ahead of the experiment by the user or via a one-time data recipe.
#
#included_pretransformers = "[]"

# Auxiliary to included_pretransformers
#excluded_pretransformers = "[]"

#include_all_as_pretransformers_if_none_selected = false

#force_include_all_as_pretransformers_if_none_selected = false

# Number of full pipeline layers
# (not including the preprocessing layer when included_pretransformers is not empty).
#
#num_pipeline_layers = 1

# There are 2 kinds of data recipes:
# 1) one that adds a new dataset or modifies a dataset outside the experiment by file/url (pre-experiment data recipe)
# 2) one that modifies a dataset during the experiment and python scoring (run-time data recipe)
# This list applies to the 2nd case. One can use the same data recipe code for either case, but note:
# A) the 1st case can make any new data, but is not part of the scoring package.
# B) the 2nd case modifies data during the experiment, so it needs some original dataset.
# The recipe can still create all new features, as long as it has the same *name* for:
# target, weight_column, fold_column, time_column, time group columns.
#
#included_datas = "[]"

# Auxiliary to included_datas
#excluded_datas = "[]"

# Custom individuals to use in the experiment.
# DAI stores most information about model type, model hyperparameters, data science types for input features, transformers used, and transformer parameters in an Individual Recipe (an object that is evolved by mutation within the context of DAI's genetic algorithm).
# Every completed experiment auto-generates python code that corresponds to the individual(s) used to build the final model. This auto-generated python code can be edited offline and uploaded as a recipe, or it can be edited within the custom recipe management editor and saved. This allows code-first access to a significant portion of DAI's internal transformer and model generation.
# Choices are:
# * Empty means all individuals are freshly generated and treated by DAI's AutoML as a container of model and transformer choices.
# * Recipe display names of custom individuals, usually chosen via the UI. If the number of included custom individuals is less than DAI would need, then the remaining individuals are freshly generated.
# The expert experiment-level option fixed_num_individuals can be used to enforce how many individuals to use in the evolution stage.
# The expert experiment-level option fixed_ensemble_level can be used to enforce how many individuals (each with one base model) will be used in the final model.
# These individuals act in a similar way as the feature brain acts for restart and retrain/refit, and one can retrain/refit custom individuals (i.e. skip the tuning and evolution stages) to use them in building a final model.
# See toml make_python_code for more details.
#included_individuals = "[]"

# Auxiliary to included_individuals
#excluded_individuals = "[]"

# Whether to generate python code for the best individuals of the experiment.
# This python code contains a CustomIndividual class that is a recipe that can be edited and customized. The CustomIndividual class itself can also be customized for expert use.
# By default, 'auto' means on.
# At the end of an experiment, the summary zip contains auto-generated python code for the individuals used in the experiment, including the last best population (best_population_indivXX.py where XX iterates the population), the last best individual (best_individual.py), and the final base models (final_indivYY.py where YY iterates the final base models).
# The summary zip also contains an example_indiv.py file that generates other transformers that may be useful but did not happen to be used in the experiment.
# In addition, the GUI and python client allow one to generate custom individuals from an aborted or finished experiment.
# For finished experiments, this will provide a zip file containing the final_indivYY.py files, and for aborted experiments this will contain the best population and best individual files.
# See included_individuals for more details.
#make_python_code = "auto"

# Whether to generate json code for the best individuals of the experiment.
# This json code contains the essential attributes from the internal DAI
# individual class. Reading the json code as a recipe is not supported.
# By default, 'auto' means off.
#
#make_json_code = "auto"

# Maximum number of genes to make for the example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_max = 100

# Minimum number of genes to make for the example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_min = 100

# Select the scorer to optimize the binary probability threshold that is being used in related confusion-matrix-based scorers that are trivial to optimize otherwise: Precision, Recall, FalsePositiveRate, FalseDiscoveryRate, FalseOmissionRate, TrueNegativeRate, FalseNegativeRate, NegativePredictiveValue. Use F1 if the target class matters more, and MCC if all classes are equally important. AUTO will try to sync the threshold scorer with the scorer used for the experiment, otherwise it falls back to F1. The optimized threshold is also used for creating labels in addition to probabilities in MOJO/Python scorers.
#threshold_scorer = "AUTO"

# Auxiliary to included_scorers
#excluded_scorers = "[]"

# Whether to enable constant models ('auto'/'on'/'off')
#enable_constant_model = "auto"

# Whether to enable Decision Tree models ('auto'/'on'/'off'). 'auto' disables decision trees unless it is the only non-constant model chosen.
#enable_decision_tree = "auto"

# Whether to enable GLM models ('auto'/'on'/'off')
#enable_glm = "auto"

# Whether to enable XGBoost GBM models ('auto'/'on'/'off')
#enable_xgboost_gbm = "auto"

# Whether to enable LightGBM models ('auto'/'on'/'off')
#enable_lightgbm = "auto"

# Whether to enable TensorFlow models ('auto'/'on'/'off')
#enable_tensorflow = "auto"

# Whether to enable PyTorch-based GrowNet models ('auto'/'on'/'off')
#enable_grownet = "auto"

# Whether to enable the FTRL (follow the regularized leader) model ('auto'/'on'/'off')
#enable_ftrl = "auto"

# Whether to enable RuleFit support (beta version, no mojo) ('auto'/'on'/'off')
#enable_rulefit = "auto"

# Whether to enable automatic addition of zero-inflated models for regression problems with zero-inflated target values that meet certain conditions: y >= 0, y.std() > y.mean()
#enable_zero_inflated_models = "auto"

# Whether to use dask_cudf even for 1 GPU. If False, will use plain cudf.
#use_dask_for_1_gpu = false

# Number of retrials for dask fit to protect against known xgboost issues https://github.com/dmlc/xgboost/issues/6272 https://github.com/dmlc/xgboost/issues/6551
#dask_retrials_allreduce_empty_issue = 5

# Whether to enable XGBoost RF mode without early stopping.
# Disabled unless switched on.
#
#enable_xgboost_rf = "auto"

# Whether to enable the dask_cudf (multi-GPU) version of XGBoost GBM/RF.
# Disabled unless switched on.
# Only applicable for a single final model without early stopping. No Shapley possible.
#
#enable_xgboost_gbm_dask = "auto"

# Whether to enable multi-node LightGBM.
# Disabled unless switched on.
#
#enable_lightgbm_dask = "auto"

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyperparameter tuning during leakage/shift detection.
# Might be useful to find non-trivial leakage/shift, but usually not necessary.
#
#hyperopt_shift_leak = false

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyperparameter tuning during leakage/shift detection,
# when checking each column.
#
#hyperopt_shift_leak_per_column = false

# Number of trials for Optuna hyperparameter optimization for tuning and evolution models.
# 0 means no trials.
# For small data, 100 is an ok choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# If using RAPIDS or DASK, hyperparameter optimization keeps data on the GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside the inner model fit/predict,
# so this tunes without that overhead.
# However, it can overfit on a single fold when doing tuning or evolution,
# and if using CV, then averaging the fold hyperparameters can lead to unexpected results.
#
#num_inner_hyperopt_trials_prefinal = 0

# Number of trials for Optuna hyperparameter optimization for final models.
# 0 means no trials.
# For small data, 100 is an ok choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# Applies to the final model only, even if num_inner_hyperopt_trials_prefinal=0.
# If using RAPIDS or DASK, hyperparameter optimization keeps data on the GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside the inner model fit/predict,
# so this tunes without that overhead.
# However, for the final model each fold is independently optimized and can overfit on each fold,
# after which predictions are averaged
# (so there is no issue with averaging hyperparameters when doing CV with tuning or evolution).
#
#num_inner_hyperopt_trials_final = 0

# Number of individuals in the final model (all folds/repeats for a given base model) to
# optimize with Optuna hyperparameter tuning.
# -1 means all.
# 0 is the same as choosing no Optuna trials.
# Might only be beneficial to optimize hyperparameters of the best individual (i.e. a value of 1) in the ensemble.
#
#num_hyperopt_individuals_final = -1
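
# Example (illustrative): run 50 Optuna trials when fitting each final base
# model, but only for the best individual in the ensemble:
#num_inner_hyperopt_trials_final = 50
#num_hyperopt_individuals_final = 1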

# Optuna Pruner to use (applicable to XGBoost and LightGBM, which support Optuna callbacks). To disable, choose None.
#optuna_pruner = "MedianPruner"

# Set Optuna constructor arguments for particular applicable pruners.
# https://optuna.readthedocs.io/en/stable/reference/pruners.html
#
#optuna_pruner_kwargs = "{'n_startup_trials': 5, 'n_warmup_steps': 20, 'interval_steps': 20, 'percentile': 25.0, 'min_resource': 'auto', 'max_resource': 'auto', 'reduction_factor': 4, 'min_early_stopping_rate': 0, 'n_brackets': 4, 'min_early_stopping_rate_low': 0, 'upper': 1.0, 'lower': 0.0}"

# Optuna Sampler to use (applicable to XGBoost and LightGBM, which support Optuna callbacks).
#optuna_sampler = "TPESampler"

# Set Optuna constructor arguments for particular applicable samplers.
# https://optuna.readthedocs.io/en/stable/reference/samplers.html
#
#optuna_sampler_kwargs = "{}"

# Whether to enable Optuna's XGBoost Pruning callback to abort unpromising runs. Not done if tuning the learning rate.
#enable_xgboost_hyperopt_callback = true

# Whether to enable Optuna's LightGBM Pruning callback to abort unpromising runs. Not done if tuning the learning rate.
#enable_lightgbm_hyperopt_callback = true

# Whether to enable XGBoost Dart models ('auto'/'on'/'off')
#enable_xgboost_dart = "auto"

# Whether to enable the dask_cudf (multi-GPU) version of XGBoost Dart.
# Disabled unless switched on.
# If there is only 1 GPU, then dask_cudf is only used if use_dask_for_1_gpu is True.
# Only applicable for a single final model without early stopping. No Shapley possible.
#
#enable_xgboost_dart_dask = "auto"

# Whether to enable the dask_cudf (multi-GPU) version of XGBoost RF.
# Disabled unless switched on.
# If there is only 1 GPU, then dask_cudf is only used if use_dask_for_1_gpu is True.
# Only applicable for a single final model without early stopping. No Shapley possible.
#
#enable_xgboost_rf_dask = "auto"
1965
1966# Number of GPUs to use per model hyperopt training task. Set to -1 for all GPUs.
1967# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model across a Dask cluster.
1968# Ignored if GPUs disabled or no GPUs on system.
1969# In multinode context, this refers to the per-node value.
1970#
1971#num_gpus_per_hyperopt_dask = -1
1972
1973# Whether to use (and expect exists) xgbfi feature interactions for xgboost.
1974#use_xgboost_xgbfi = false
1975
1976# Which boosting types to enable for LightGBM (gbdt = boosted trees, rf_early_stopping = random forest with early stopping rf = random forest (no early stopping), dart = drop-out boosted trees with no early stopping
1977#enable_lightgbm_boosting_types = "['gbdt']"
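# For example (hypothetical selection), to let mutations also sample random
# forest with early stopping and dart in addition to boosted trees:
# enable_lightgbm_boosting_types = "['gbdt', 'rf_early_stopping', 'dart']"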

# Whether to enable automatic class weighting for imbalanced multiclass problems. Can make probabilities worse, but can improve confusion-matrix based scorers for rare classes without the need to manually calibrate probabilities or fine-tune the label creation process.
#enable_lightgbm_multiclass_balancing = "auto"

# Whether to enable LightGBM categorical feature support (runs in CPU mode even if GPUs are enabled, and no MOJO is built)
#enable_lightgbm_cat_support = false

# Whether to enable LightGBM linear_tree handling
# (only CPU mode currently, no L1 regularization -- mae objective, and no MOJO build).
#
#enable_lightgbm_linear_tree = false

# Whether to enable LightGBM extra trees mode to help avoid overfitting
#enable_lightgbm_extra_trees = false

# basic: as fast as when no constraints are applied, but over-constrains the predictions.
# intermediate: very slightly slower, but much less constraining while still holding monotonicity, and should be more accurate than basic.
# advanced: slower, but even more accurate than intermediate.
#
#lightgbm_monotone_constraints_method = "intermediate"
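# For example (hypothetical values), when accuracy under monotonicity
# constraints matters more than speed, and monotone splits near the root
# should additionally be penalized:
# lightgbm_monotone_constraints_method = "advanced"
# lightgbm_monotone_penalty = 2.0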

# Forbids any monotone splits on the first x (rounded down) level(s) of the tree.
# The penalty applied to monotone splits on a given depth is a continuous,
# increasing function of the penalization parameter.
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#monotone_penalty
#
#lightgbm_monotone_penalty = 0.0

# Whether to enable the LightGBM CUDA implementation instead of OpenCL.
# CUDA with LightGBM is only supported for Pascal+ (compute capability >=6.0)
#enable_lightgbm_cuda_support = false

# Whether to show constant models in the iteration panel even when not the best model.
#show_constant_model = false

#drop_constant_model_final_ensemble = true

#xgboost_rf_exact_threshold_num_rows_x_cols = 10000

# Select objectives allowed for XGBoost.
# Added to allowed mutations (the default reg:squarederror is in the sample list 3 times)
# Note: logistic, tweedie, gamma, poisson are only valid for targets with positive values.
# Note: The objective relates to the form of the (regularized) loss function,
# used to determine the split with maximum information gain,
# while the metric is the non-regularized metric
# measured on the validation set (external or internally generated by DAI).
#
#xgboost_reg_objectives = "['reg:squarederror']"

# Select metrics allowed for XGBoost.
# Added to allowed mutations (the default rmse and mae are in the sample list twice).
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
#
#xgboost_reg_metrics = "['rmse', 'mae']"

# Select which binary metrics are allowed for XGBoost.
# Added to allowed mutations (all evenly sampled).
#xgboost_binary_metrics = "['logloss', 'auc', 'aucpr', 'error']"

# Select objectives allowed for LightGBM.
# Added to allowed mutations (the default mse is in the sample list 2 times if selected).
# "binary" refers to logistic regression.
# Note: If quantile/huber or fair is chosen and the data is not normalized,
# the recommendation is to use params_lightgbm to specify a reasonable
# value of alpha (for quantile or huber) or fairc (for fair) to LightGBM.
# Note: mse is the same as rmse, corresponding to L2 loss. mae is L1 loss.
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
# Note: The objective relates to the form of the (regularized) loss function,
# used to determine the split with maximum information gain,
# while the metric is the non-regularized metric
# measured on the validation set (external or internally generated by DAI).
#
#lightgbm_reg_objectives = "['mse', 'mae']"

# Select metrics allowed for LightGBM.
# Added to allowed mutations (the default rmse is in the sample list three times if selected).
# Note: If huber or fair is chosen and the data is not normalized,
# the recommendation is to use params_lightgbm to specify a reasonable
# value of alpha (for huber or quantile) or fairc (for fair) to LightGBM.
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
#
#lightgbm_reg_metrics = "['rmse', 'mse', 'mae']"

# Select objectives allowed for binary LightGBM.
# Added to allowed mutations (the default binary is in the sample list 2 times if selected)
#lightgbm_binary_objectives = "['binary', 'xentropy']"

# Select which binary metrics are allowed for LightGBM.
# Added to allowed mutations (all evenly sampled).
#lightgbm_binary_metrics = "['binary', 'binary', 'auc']"

# Select which metrics are allowed for multiclass LightGBM.
# Added to allowed mutations (evenly sampled if selected).
#lightgbm_multi_metrics = "['multiclass', 'multi_error']"

# tweedie_variance_power parameters to try for XGBoostModel and LightGBMModel if tweedie is used.
# First value is default.
#tweedie_variance_power_list = "[1.5, 1.2, 1.9]"

# huber parameters to try for LightGBMModel if huber is used.
# First value is default.
#huber_alpha_list = "[0.9, 0.3, 0.5, 0.6, 0.7, 0.8, 0.1, 0.99]"

# fair c parameters to try for LightGBMModel if fair is used.
# First value is default.
#fair_c_list = "[1.0, 0.1, 0.5, 0.9]"

# poisson max_delta_step parameters to try for LightGBMModel if poisson is used.
# First value is default.
#poisson_max_delta_step_list = "[0.7, 0.9, 0.5, 0.2]"

# quantile alpha parameters to try for LightGBMModel if quantile is used.
# First value is default.
#quantile_alpha = "[0.9, 0.95, 0.99, 0.6]"

# Default reg_lambda regularization for GLM.
#reg_lambda_glm_default = 0.0004

#lossguide_drop_factor = 4.0

#lossguide_max_depth_extend_factor = 8.0

# Parameters for LightGBM to override DAI parameters
# e.g. ``'eval_metric'`` should be used instead of ``'metric'``
# e.g. ``params_lightgbm="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
# e.g. ``params_lightgbm="{'n_estimators': 600, 'learning_rate': 0.1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'binary', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like ``'objective': 'binary'``, unless one really knows what one is doing (e.g. alternative objectives)
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Objective parameters can also be passed for certain chosen (or automatically chosen) objectives:
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
#params_lightgbm = "{}"

# Parameters for XGBoost to override DAI parameters
# similar parameters as for LightGBM, since LightGBM parameters are transcribed from their XGBoost equivalents
# e.g. ``params_xgboost="{'n_estimators': 100, 'max_leaves': 64, 'max_depth': 0, 'random_state': 1234}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_xgboost = "{}"

# Like params_xgboost but for XGBoost random forest.
#params_xgboost_rf = "{}"

# Like params_xgboost but for XGBoost's dart method
#params_dart = "{}"

# Parameters for TensorFlow to override DAI parameters
# e.g. ``params_tensorflow="{'lr': 0.01, 'add_wide': False, 'add_attention': True, 'epochs': 30, 'layers': (100, 100), 'activation': 'selu', 'batch_size': 64, 'chunk_size': 1000, 'dropout': 0.3, 'strategy': '1cycle', 'l1': 0.0, 'l2': 0.0, 'ort_loss': 0.5, 'ort_loss_tau': 0.01, 'normalize_type': 'streaming'}"``
# See: https://keras.io/ , e.g. for activations: https://keras.io/activations/
# Example layers: ``(500, 500, 500), (100, 100, 100), (100, 100), (50, 50)``
# Strategies: ``'1cycle'`` or ``'one_shot'``, See: https://github.com/fastai/fastai
# ``'one_shot'`` is not allowed for ensembles.
# normalize_type: 'streaming' or 'global' (using sklearn StandardScaler)
#
#params_tensorflow = "{}"

# Parameters for XGBoost's gblinear to override DAI parameters
# e.g. ``params_gblinear="{'n_estimators': 100}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_gblinear = "{}"

# Parameters for Decision Tree to override DAI parameters
# parameters should be given as XGBoost equivalents unless a parameter is unique to LightGBM
# e.g. ``'eval_metric'`` should be used instead of ``'metric'``
# e.g. ``params_decision_tree="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
# e.g. ``params_decision_tree="{'n_estimators': 1, 'learning_rate': 1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'logloss', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like ``'objective': 'binary:logistic'``, unless one really knows what one is doing (e.g. alternative objectives)
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Objective parameters can also be passed for certain chosen (or automatically chosen) objectives:
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
#params_decision_tree = "{}"

# Parameters for RuleFit to override DAI parameters
# e.g. ``params_rulefit="{'max_leaves': 64}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_rulefit = "{}"

# Parameters for FTRL to override DAI parameters
#params_ftrl = "{}"

# Parameters for GrowNet to override DAI parameters
#params_grownet = "{}"

# How to handle tomls like params_tune_lightgbm.
# override: For any key in the params_tune_ toml dict, use the list of values instead of DAI's list of values.
# override_and_first_as_default: Like override, but also use the first entry in the tuple/list (if present) as the override replacement for (e.g.) params_lightgbm when using params_tune_lightgbm.
# exclusive: Only tune the keys in the params_tune_ toml dict, unless no keys are present. Otherwise use DAI's default values.
# exclusive_and_first_as_default: Like exclusive, but with the same first-as-default behavior as override_and_first_as_default.
# To fully control hyperparameter tuning, either set "override" mode and include every hyperparameter with at least one value in each list within the dictionary, or choose "exclusive" and rely on DAI's unchanging default values for any keys not given.
# For custom recipes, recipe_dict can be used to pass hyperparameters. If the "get_one()" function is used in a custom recipe and the user_tune passed contains the hyperparameter dictionary equivalent of the params_tune_ tomls, then this params_tune_mode will also work for custom recipes.
#params_tune_mode = "override_and_first_as_default"
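# For example (hypothetical values), to tune only two LightGBM keys and have
# the first entry of each list act as the default:
# params_tune_mode = "exclusive_and_first_as_default"
# params_tune_lightgbm = "{'max_leaves': [64, 128, 256], 'min_child_samples': [5, 20, 100]}"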

# Whether to adjust GBM trees, learning rate, and early_stopping_rounds for GBM models or recipes with _is_gbm=True.
# True: auto mode, which changes trees/LR/stopping if tune_learning_rate=false, early stopping is supported by the model, and the model is a GBM or from a custom individual with the parameter in adjusted_params.
# False: disable any adjusting from tuning-evolution into the final model.
# Setting this to false is required if (e.g.) one changes params_lightgbm or params_tune_lightgbm and wants to preserve the tuning-evolution values into the final model.
# One should also set tune_learning_rate to true to tune the learning_rate, else it will be fixed to some single value.
#params_final_auto_adjust = true

# Dictionary of key:lists of values to use for LightGBM tuning, overrides DAI's choice per key
# e.g. ``params_tune_lightgbm="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
#params_tune_lightgbm = "{}"

# Like params_tune_lightgbm but for XGBoost
# e.g. ``params_tune_xgboost="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_xgboost = "{}"

# Like params_tune_lightgbm but for XGBoost random forest
# e.g. ``params_tune_xgboost_rf="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_xgboost_rf = "{}"

# Dictionary of key:lists of values to use for LightGBM Decision Tree tuning, overrides DAI's choice per key
# e.g. ``params_tune_decision_tree="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
#params_tune_decision_tree = "{}"

# Like params_tune_lightgbm but for XGBoost's Dart
# e.g. ``params_tune_dart="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_dart = "{}"

# Like params_tune_lightgbm but for TensorFlow
# e.g. ``params_tune_tensorflow="{'layers': [(10,10,10), (10, 10, 10, 10)]}"``
#params_tune_tensorflow = "{}"

# Like params_tune_lightgbm but for gblinear
# e.g. ``params_tune_gblinear="{'reg_lambda': [.01, .001, .0001, .0002]}"``
#params_tune_gblinear = "{}"

# Like params_tune_lightgbm but for RuleFit
# e.g. ``params_tune_rulefit="{'max_depth': [4, 5, 6]}"``
#params_tune_rulefit = "{}"

# Like params_tune_lightgbm but for FTRL
#params_tune_ftrl = "{}"

# Like params_tune_lightgbm but for GrowNet
# e.g. ``params_tune_grownet="{'input_dropout': [0.2, 0.5]}"``
#params_tune_grownet = "{}"

# Whether to force max_leaves to 0 if grow_policy is depthwise, and max_depth to 0 if grow_policy is lossguide.
#params_tune_grow_policy_simple_trees = true

# Maximum number of GBM trees or GLM iterations. Can be reduced for lower accuracy and/or higher interpretability.
# Early stopping usually chooses fewer. Ignored if fixed_max_nestimators is > 0.
#
#max_nestimators = 3000

# Fixed maximum number of GBM trees or GLM iterations. If > 0, ignores max_nestimators and disables automatic reduction
# due to lower accuracy or higher interpretability. Early stopping usually chooses fewer.
#
#fixed_max_nestimators = -1

# LightGBM dart mode and normal rf mode do not use early stopping,
# and they will sample from these values for n_estimators.
# XGBoost Dart mode will also sample from these n_estimators.
# Also applies to XGBoost Dask models that do not yet support early stopping or callbacks.
# For default parameters the first value in the list is chosen, while mutations sample from the list.
#
#n_estimators_list_no_early_stopping = "[50, 100, 150, 200, 250, 300]"

# Lower limit on learning rate for final ensemble GBM models.
# In some cases, the maximum number of trees/iterations is insufficient for the final learning rate,
# which can lead to no early stopping being triggered and poor final model performance.
# Then, one can try increasing the learning rate by raising this minimum,
# or one can try increasing the maximum number of trees/iterations.
#
#min_learning_rate_final = 0.01

# Upper limit on learning rate for final ensemble GBM models
#max_learning_rate_final = 0.05

# Factor by which max_nestimators is reduced for tuning and feature evolution
#max_nestimators_feature_evolution_factor = 0.2

# Lower limit on learning rate for feature engineering GBM models
#min_learning_rate = 0.05

# Upper limit on learning rate for GBM models
# To override min_learning_rate and min_learning_rate_final, set this to a smaller value
#
#max_learning_rate = 0.5
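# For example (hypothetical values), to pin the feature-evolution learning rate
# to a single value, set the lower and upper limits equal:
# min_learning_rate = 0.1
# max_learning_rate = 0.1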

# Whether to lock learning rate, tree count, and early stopping rounds for GBM algorithms to the final model values.
#lock_ga_to_final_trees = false

# Whether to tune learning rate for GBM algorithms (if not doing just a single final model).
# If tuning with Optuna, this might help isolate the optimal learning rate.
#
#tune_learning_rate = false

# Max. number of epochs for TensorFlow and FTRL models
#max_epochs = 50

# Number of epochs for TensorFlow for larger data sizes.
#max_epochs_tf_big_data = 5

# Maximum tree depth (and corresponding max max_leaves as 2**max_max_depth)
#max_max_depth = 12

# Default max_bin for tree methods
#default_max_bin = 256

# Default max_bin for LightGBM (64 recommended for GPU LightGBM for speed)
#default_lightgbm_max_bin = 249

# Maximum max_bin for tree features
#max_max_bin = 256

# Minimum max_bin for any tree
#min_max_bin = 32

# Amount of memory at which max_bin = 256 can handle 125 columns and max_bin = 32 can handle 1000 columns.
# As available memory on the system goes higher than this scale, proportionally more columns can be handled at higher max_bin.
# Currently set to 10GB.
#scale_mem_for_max_bin = 10737418240

# Factor by which rf gets more depth than gbdt
#factor_rf = 1.25

# Whether TensorFlow will use all CPU cores, or split among all transformers. Only for transformers, not the TensorFlow model.
#tensorflow_use_all_cores = true

# Whether TensorFlow will use all CPU cores if reproducible is set, or split among all transformers
#tensorflow_use_all_cores_even_if_reproducible_true = false

# Whether to disable TensorFlow memory optimizations. Can help fix tensorflow.python.framework.errors_impl.AlreadyExistsError
#tensorflow_disable_memory_optimization = true

# How many cores to use for each TensorFlow model, regardless of whether GPU or CPU based (0 = auto mode)
#tensorflow_cores = 0

# For TensorFlow models, maximum number of cores to use if tensorflow_cores=0 (auto mode), because the TensorFlow model is inefficient at using many cores. See also max_fit_cores for all models.
#tensorflow_model_max_cores = 4

# How many cores to use for each Bert model and transformer, regardless of whether GPU or CPU based (0 = auto mode)
#bert_cores = 0

# Whether Bert will use all CPU cores, or split among all transformers. Only for transformers, not the Bert model.
#bert_use_all_cores = true

# For Bert models, maximum number of cores to use if bert_cores=0 (auto mode), because the Bert model is inefficient at using many cores. See also max_fit_cores for all models.
#bert_model_max_cores = 8

# Max number of rules to be used for RuleFit models (-1 for all)
#rulefit_max_num_rules = -1

# Max tree depth for RuleFit models
#rulefit_max_tree_depth = 6

# Max number of trees for RuleFit models
#rulefit_max_num_trees = 500

# Enable One-Hot Encoding (which does binning to limit the number of bins to no more than 100 anyway) for categorical columns with fewer than this many unique values.
# Set to 0 to disable.
#one_hot_encoding_cardinality_threshold = 50

# How many levels to choose one-hot for by default instead of other encodings, restricted down to 10x less (down to 2 levels) when the number of columns able to be used with OHE exceeds 500. Note the total number of bins is reduced for bigger data independently of this.
#one_hot_encoding_cardinality_threshold_default_use = 40

# Treat text columns also as categorical columns if the cardinality is <= this value.
# Set to 0 to treat text columns only as text.
#text_as_categorical_cardinality_threshold = 1000

# If num_as_cat is true, then treat numeric columns also as categorical columns if the cardinality is > this value.
# Setting to 0 allows all numeric columns to be treated as categorical if num_as_cat is True.
#numeric_as_categorical_cardinality_threshold = 2

# If num_as_cat is true, then treat numeric columns also as categorical columns to possibly one-hot encode if the cardinality is > this value.
# Setting to 0 allows all numeric columns to be treated as categorical to possibly one-hot encode if num_as_cat is True.
#numeric_as_ohe_categorical_cardinality_threshold = 2

#one_hot_encoding_show_actual_levels_in_features = false

# Fixed ensemble_level
# -1 = auto, based upon ensemble_accuracy_switch, accuracy, size of data, etc.
# 0 = No ensemble, only final single model on validated iteration/tree count
# 1 = 1 model, multiple ensemble folds (cross-validation)
# >=2 = >=2 models, multiple ensemble folds (cross-validation)
#
#fixed_ensemble_level = -1

# If enabled, use cross-validation to determine optimal parameters for the single final model,
# and to be able to create training holdout predictions.
#cross_validate_single_final_model = true

# Model to combine base model predictions, for experiments that create a final pipeline
# consisting of multiple base models.
# blender: Creates a linear blend with non-negative weights that add to 1 (blending) - recommended
# extra_trees: Creates a tree model to non-linearly combine the base models (stacking) - experimental; recommended to also enable cross_validate_meta_learner.
# neural_net: Creates a neural net model to non-linearly combine the base models (stacking) - experimental; recommended to also enable cross_validate_meta_learner.
#
#ensemble_meta_learner = "blender"

# If enabled, use cross-validation to create an ensemble for the meta learner itself. Especially recommended for
# ``ensemble_meta_learner='extra_trees'``, to make unbiased training holdout predictions.
# Will disable MOJO if enabled. Not needed for ``ensemble_meta_learner='blender'``.
#
#cross_validate_meta_learner = false
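# For example (hypothetical), to try a stacked ensemble with unbiased training
# holdout predictions (note: this disables the MOJO):
# ensemble_meta_learner = "extra_trees"
# cross_validate_meta_learner = true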

# Number of models to tune during pre-evolution phase.
# Can be made lower to avoid excessive tuning, or higher to do enhanced tuning.
# ``-1 : auto``
#
#parameter_tuning_num_models = -1

# Number of models (out of all parameter_tuning_num_models) to have as SEQUENCE instead of random features/parameters.
# ``-1 : auto, use at least one default individual per model class tuned``
#
#parameter_tuning_num_models_sequence = -1

# Number of models to add during tuning that cover other cases, like for TS having no TE on time column groups.
# ``-1 : auto, adds additional models to protect against overfit on high-gain training features.``
#
#parameter_tuning_num_models_extra = -1

# Dictionary of model class name (keys) and number (values) of instances.
#num_tuning_instances = "{}"

#validate_meta_learner = true

#validate_meta_learner_extra = false

# Specify the fixed number of cross-validation folds (if >= 2) for feature evolution. (The actual number of splits allowed can be less and is determined at experiment run-time.)
#fixed_num_folds_evolution = -1

# Specify the fixed number of cross-validation folds (if >= 2) for the final model. (The actual number of splits allowed can be less and is determined at experiment run-time.)
#fixed_num_folds = -1

# Set to "on" to force only the first fold for models - useful for quick runs regardless of data
#fixed_only_first_fold_model = "auto"

# Set the number of repeated cross-validation folds for feature evolution and final models (if > 0); 0 is default. Only for ensembles that do cross-validation (so no external validation and not time-series), not for single final models.
#fixed_fold_reps = 0

#num_fold_ids_show = 10

#fold_scores_instability_warning_threshold = 0.25

# Upper limit on the number of rows x number of columns for feature evolution (applies to both training and validation/holdout splits).
# Feature evolution is the process that determines which features will be derived.
# Depending on accuracy settings, a fraction of this value will be used.
#
#feature_evolution_data_size = 300000000

# Upper limit on the number of rows x number of columns for training the final pipeline.
#
#final_pipeline_data_size = 1000000000

# Whether to automatically limit validation data size using feature_evolution_data_size (giving max_rows_feature_evolution shown in logs) for tuning-evolution, and using final_pipeline_data_size and max_validation_to_training_size_ratio_for_final_ensemble for the final model.
#limit_validation_size = true

# Smaller values can speed up final pipeline model training, as validation data is only used for early stopping.
# Note that final model predictions and scores will always be provided on the full dataset provided.
#
#max_validation_to_training_size_ratio_for_final_ensemble = 2.0

# Ratio of minority to majority class of the target column beyond which stratified sampling is done for binary classification. Otherwise perform random sampling. Set to 0 to always do random sampling. Set to 1 to always do stratified sampling.
#force_stratified_splits_for_imbalanced_threshold_binary = 0.01

#force_stratified_splits_for_binary_max_rows = 1000000

# Specify whether to do stratified sampling for validation fold creation for iid regression problems. Otherwise perform random sampling.
#stratify_for_regression = true

# Sampling method for imbalanced binary classification problems. Choices are:
# "auto": sample both classes as needed, depending on data
# "over_under_sampling": over-sample the minority class and under-sample the majority class, depending on data
# "under_sampling": under-sample the majority class to reach class balance
# "off": do not perform any sampling
#
#imbalance_sampling_method = "off"
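# For example (hypothetical), to let DAI choose a sampling scheme for an
# imbalanced binary target and cap the number of sampled bags:
# imbalance_sampling_method = "auto"
# imbalance_sampling_max_number_of_bags = 5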

# For smaller data, there's generally no benefit in using imbalanced sampling methods.
#imbalance_sampling_threshold_min_rows_original = 100000

# For imbalanced binary classification: ratio of majority to minority class at and above which to enable
# special imbalanced models with sampling techniques (specified by imbalance_sampling_method) to attempt to improve model performance.
#
#imbalance_ratio_sampling_threshold = 5

# For heavily imbalanced binary classification: ratio of majority to minority class at and above which to enable only
# special imbalanced models on full original data, without upfront sampling.
#
#heavy_imbalance_ratio_sampling_threshold = 25

# Special handling can include special models, special scorers, and special feature engineering.
#
#imbalance_ratio_multiclass_threshold = 5

# Special handling can include special models, special scorers, and special feature engineering.
#
#heavy_imbalance_ratio_multiclass_threshold = 25

# -1: automatic
#imbalance_sampling_number_of_bags = -1

# -1: automatic
#imbalance_sampling_max_number_of_bags = 10

# Only for shift/leakage/tuning/feature evolution models. Not used for final models. Final models can
# be limited by imbalance_sampling_max_number_of_bags.
#imbalance_sampling_max_number_of_bags_feature_evolution = 3

# Max. size of data sampled during imbalanced sampling (in terms of dataset size),
# controls the number of bags (approximately). Only for imbalance_sampling_number_of_bags == -1.
#imbalance_sampling_max_multiple_data_size = 1.0

# Rank averaging can be helpful when ensembling diverse models when ranking metrics like AUC/Gini
# are optimized. No MOJO support yet.
#imbalance_sampling_rank_averaging = "auto"

# A value of 0.5 means that models/algorithms will be presented a balanced target class distribution
# after applying under/over-sampling techniques on the training data. Sometimes it makes sense to
# choose a smaller value like 0.1 or 0.01 when starting from an extremely imbalanced original target
# distribution. -1.0: automatic
#imbalance_sampling_target_minority_fraction = -1.0

# For binary classification: ratio of majority to minority class at and above which to notify
# of imbalance in the GUI as slightly imbalanced.
# More than ``imbalance_ratio_sampling_threshold`` will say the problem is imbalanced.
#
#imbalance_ratio_notification_threshold = 2.0
2501
2502# List of possible bins for FTRL (largest is default best value)
2503#nbins_ftrl_list = "[1000000, 10000000, 100000000]"
2504
2505# Samples the number of automatic FTRL interactions terms to no more than this value (for each of 2nd, 3rd, 4th order terms)
2506#ftrl_max_interaction_terms_per_degree = 10000
2507
2508# List of possible bins for target encoding (first is default value)
2509#te_bin_list = "[25, 10, 100, 250]"
2510
2511# List of possible bins for weight of evidence encoding (first is default value)
# If only want one value: woe_bin_list = [2]
#woe_bin_list = "[25, 10, 100, 250]"

# List of possible bins for one-hot encoding (first is default value). If left as default, the actual list is changed for given data size and dials.
#ohe_bin_list = "[10, 25, 50, 75, 100]"

# List of max possible number of bins for numeric binning (first is default value). If left as default, the actual list is changed for given data size and dials. The binner will automatically reduce the number of bins based on predictive power.
#binner_bin_list = "[5, 10, 20]"

# If dataset has more columns, then will check only first such columns. Set to 0 to disable.
#drop_redundant_columns_limit = 1000

# Whether to drop columns with constant values
#drop_constant_columns = true

# Whether to detect duplicate rows in training, validation and testing datasets. Done after doing type detection and dropping of redundant or missing columns across datasets, just before the experiment starts, still before leakage detection. Any further dropping of columns can change the amount of duplicate rows. Informative only; if you want to drop rows in training data, make sure to check the drop_duplicate_rows setting. Uses a sample size, given by detect_duplicate_rows_max_rows_x_cols.
#detect_duplicate_rows = true

#drop_duplicate_rows_timeout = 60

# Whether to drop duplicate rows in training data. Done at the start of Driverless AI, only considering columns to drop as given by the user, not considering validation or training datasets or leakage or redundant columns. Any further dropping of columns can change the amount of duplicate rows. Time limited by drop_duplicate_rows_timeout seconds.
# 'auto': Same as 'off'.
# 'weight': If duplicates, then convert dropped duplicates into a weight column for training. Useful when duplicates are added to preserve some distribution of instances expected. Only allowed if no weight column is present, else duplicates are just dropped.
# 'drop': Drop any duplicates, keeping only first instances.
# 'off': Do not drop any duplicates. This may lead to over-estimation of accuracy.
#drop_duplicate_rows = "auto"
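# Example (illustrative, not a default): keep the information carried by duplicate
# rows by folding them into a weight column instead of discarding them outright:
#   drop_duplicate_rows = "weight"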

# If > 0, then acts as sampling size for informative duplicate row detection. If set to 0, will do checks for all dataset sizes.
#detect_duplicate_rows_max_rows_x_cols = 10000000

# Whether to drop columns that appear to be an ID
#drop_id_columns = true

# Whether to avoid dropping any columns (original or derived)
#no_drop_features = false

# Direct control over columns to drop in bulk so can copy-paste large lists instead of selecting each one separately in GUI
#cols_to_drop = "[]"

#cols_to_drop_sanitized = "[]"

# Control over columns to group by for CVCatNumEncode Transformer, default is empty list, which means DAI automatically searches all columns,
# selected randomly or by which have top variable importance.
# The CVCatNumEncode Transformer takes a list of categoricals (or these cols_to_group_by) and uses those columns
# as a new feature to perform aggregations on (agg_funcs_for_group_by).
#cols_to_group_by = "[]"

#cols_to_group_by_sanitized = "[]"

# Whether to sample from given features to group by (True) or to always group by all features (False) when using cols_to_group_by.
#sample_cols_to_group_by = false

# Aggregation functions to use for groupby operations for CVCatNumEncode Transformer, see also cols_to_group_by and sample_cols_to_group_by.
#agg_funcs_for_group_by = "['mean', 'sd', 'min', 'max', 'count']"

# Out of fold aggregations ensure less overfitting, but see less data in each fold. For controlling how many folds used by CVCatNumEncode Transformer.
#folds_for_group_by = 5
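# Example (illustrative, not a default): restrict CVCatNumEncode to two specific
# columns and a smaller set of aggregations, with more folds for stability
# ('state' and 'product_category' are hypothetical column names):
#   cols_to_group_by = "['state', 'product_category']"
#   agg_funcs_for_group_by = "['mean', 'count']"
#   folds_for_group_by = 10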

# Control over columns to force-in. Forced-in features are handled by the most interpretable transformer allowed by experiment
# options, and they are never removed (although model may assign 0 importance to them still).
# Transformers used by default include:
# OriginalTransformer for numeric,
# CatOriginalTransformer or FrequencyTransformer for categorical,
# TextOriginalTransformer for text,
# DateTimeOriginalTransformer for date-times,
# DateOriginalTransformer for dates,
# ImageOriginalTransformer or ImageVectorizerTransformer for images,
# etc.
#cols_to_force_in = "[]"
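# Example (illustrative): always keep two columns in the pipeline, handled by the
# most interpretable applicable transformer ('age' and 'income' are hypothetical
# column names):
#   cols_to_force_in = "['age', 'income']"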

#cols_to_force_in_sanitized = "[]"

# Strategy to apply when doing mutations on transformers.
# Sample mode is default, with tendency to sample transformer parameters.
# Batched mode tends to do multiple types of the same transformation together.
# Full mode does even more types of the same transformation together.
#
#mutation_mode = "sample"

# 'baseline': Explore exemplar set of models with baselines as reference.
# 'random': Explore 10 random seeds for same setup. Useful since nature of genetic algorithm is noisy and repeats might get better results, or one can ensemble the custom individuals from such repeats.
# 'line': Explore good model with all features and original features with all models. Useful as first exploration.
# 'line_all': Like 'line', but enable all models and transformers possible instead of only what base experiment setup would have inferred.
# 'product': Explore one-by-one Cartesian product of each model and transformer. Useful for exhaustive exploration.
#leaderboard_mode = "baseline"

# Controls whether users can launch an experiment in Leaderboard mode from the UI.
#leaderboard_off = false

# Allows control over default accuracy knob setting.
# If default models are too complex, set to -1 or -2, etc.
# If default models are not accurate enough, set to 1 or 2, etc.
#
#default_knob_offset_accuracy = 0

# Allows control over default time knob setting.
# If default experiments are too slow, set to -1 or -2, etc.
# If default experiments finish too fast, set to 1 or 2, etc.
#
#default_knob_offset_time = 0

# Allows control over default interpretability knob setting.
# If default models are too simple, set to -1 or -2, etc.
# If default models are too complex, set to 1 or 2, etc.
#
#default_knob_offset_interpretability = 0
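# Example (illustrative): bias all default experiments toward simpler, faster
# models without touching per-experiment settings:
#   default_knob_offset_accuracy = -1
#   default_knob_offset_time = -1
#   default_knob_offset_interpretability = 1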

# Whether to enable checking text for shift, currently only via label encoding.
#shift_check_text = false

# Whether to use LightGBM random forest mode without early stopping for shift detection.
#use_rf_for_shift_if_have_lgbm = true

# Normalized training variable importance above which to check the feature for shift
# Useful to avoid checking likely unimportant features
#shift_key_features_varimp = 0.01

# Whether to only check certain features based upon the value of shift_key_features_varimp
#shift_check_reduced_features = true

# Number of trees to use to train model to check shift in distribution
# No larger than max_nestimators
#shift_trees = 100

# The value of max_bin to use for trees to use to train model to check shift in distribution
#shift_max_bin = 256

# The min. value of max_depth to use for trees to use to train model to check shift in distribution
#shift_min_max_depth = 4

# The max. value of max_depth to use for trees to use to train model to check shift in distribution
#shift_max_max_depth = 8

# If distribution shift detection is enabled, show features for which shift AUC is above this value
# (AUC of a binary classifier that predicts whether given feature value belongs to train or test data)
#detect_features_distribution_shift_threshold_auc = 0.55

# Minimum number of features to keep, keeping least shifted feature at least if 1
#drop_features_distribution_shift_min_features = 1

# Shift beyond which shows HIGH notification, else MEDIUM
#shift_high_notification_level = 0.8
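# Example (illustrative): only surface strongly shifted features, but escalate
# them to a HIGH notification sooner:
#   detect_features_distribution_shift_threshold_auc = 0.6
#   shift_high_notification_level = 0.7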

# Whether to enable checking text for leakage, currently only via label encoding.
#leakage_check_text = true

# Normalized training variable importance (per 1 minus AUC/R2 to control for leaky varimp dominance) above which to check the feature for leakage
# Useful to avoid checking likely unimportant features
#leakage_key_features_varimp = 0.001

# Like leakage_key_features_varimp, but applies if early stopping is disabled, when one can trust multiple leaks to get uniform varimp.
#leakage_key_features_varimp_if_no_early_stopping = 0.05

# Whether to only check certain features based upon the value of leakage_key_features_varimp. If any feature has AUC near 1, it will consume all variable importance, even if another feature is also leaky. So False is the safest option, but True is generally good if many columns.
#leakage_check_reduced_features = true

# Whether to use LightGBM random forest mode without early stopping for leakage detection.
#use_rf_for_leakage_if_have_lgbm = true

# Number of trees to use to train model to check for leakage
# No larger than max_nestimators
#leakage_trees = 100

# The value of max_bin to use for trees to use to train model to check for leakage
#leakage_max_bin = 256

# The min. value of max_depth to use for trees to use to train model to check for leakage
#leakage_min_max_depth = 6

# The max. value of max_depth to use for trees to use to train model to check for leakage
#leakage_max_max_depth = 8

# When leakage detection is enabled, if AUC (R2 for regression) on original data (label-encoded)
# is above or equal to this value, then trigger per-feature leakage detection
#
#detect_features_leakage_threshold_auc = 0.95

# When leakage detection is enabled, show features for which AUC (R2 for regression,
# for whether that predictor/feature alone predicts the target) is above or equal to this value.
# Feature is dropped if AUC/R2 is above or equal to drop_features_leakage_threshold_auc
#
#detect_features_per_feature_leakage_threshold_auc = 0.8

# Minimum number of features to keep, keeping least leaky feature at least if 1
#drop_features_leakage_min_features = 1

# Ratio of train to validation holdout when testing for leakage
#leakage_train_test_split = 0.25
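# Example (illustrative): make leakage detection more aggressive by triggering
# per-feature checks at a lower overall AUC and checking all features rather
# than only those with high variable importance:
#   detect_features_leakage_threshold_auc = 0.9
#   leakage_check_reduced_features = false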

# Whether to enable detailed traces (in GUI Trace)
#detailed_traces = false

# Whether to enable debug log level (in log files)
#debug_log = false

# Whether to add logging of system information such as CPU, GPU, disk space at the start of each experiment log. Same information is already logged in system logs.
#log_system_info_per_experiment = true

#check_system = true

#check_system_basic = true

# How close to the optimal value (usually 1 or 0) does the validation score need to be to be considered perfect (to stop the experiment)?
#abs_tol_for_perfect_score = 0.0001

# Timeout in seconds to wait for data ingestion.
#data_ingest_timeout = 86400.0

# How many seconds to allow mutate to take; nominally it only takes a few seconds at most. But on a busy system doing many individuals, it might take longer. Optuna can sometimes livelock and hang in scipy's random distribution maker.
#mutate_timeout = 600

# Whether to trust GPU locking for submission of GPU jobs to limit memory usage.
# If False, then wait for the number of GPU submissions to be less than the number of GPUs,
# even if later jobs could be purely CPU jobs that did not need to wait.
# Only applicable if not restricting number of GPUs via num_gpus_per_experiment,
# else have to use resources instead of relying upon locking.
#
#gpu_locking_trust_pool_submission = true

# Whether to steal GPU locks when process is neither on GPU PID list nor using CPU resources at all (e.g. sleeping). Only steal from multi-GPU locks that are incomplete. Prevents deadlocks in case multi-GPU model hangs.
#gpu_locking_free_dead = true

#tensorflow_allow_cpu_only = false

#check_pred_contribs_sum = false

#debug_daimodel_level = 0

#debug_debug_xgboost_splits = false

#log_predict_info = true

#log_fit_info = true

# Amount of time to stall (in seconds) before killing the job (assumes it hung). Reference time is scaled by train data shape of rows * cols to get the stalled_time_kill actually used.
#stalled_time_kill_ref = 440.0

# Amount of time between checks for some process taking a long time; every cycle the full process list will be dumped to console or experiment logs if possible.
#long_time_psdump = 1800

# Whether to dump ps every long_time_psdump
#do_psdump = false

# Whether to check every long_time_psdump seconds and send SIGUSR1 to all children to see where they may be stuck or taking a long time.
#livelock_signal = false

# Value to override number of sockets, in case DAI's determination is wrong, for non-trivial systems. 0 means auto.
#num_cpu_sockets_override = 0

# Value to override number of GPUs, in case DAI's determination is wrong, for non-trivial systems. -1 means auto. Can also set min_num_cores_per_gpu=-1 to allow any number of GPUs for each experiment regardless of number of cores.
#num_gpus_override = -1

# Whether to show GPU usage only when locking. 'auto' means 'on' if num_gpus_override is different than actual total visible GPUs, else it means 'off'
#show_gpu_usage_only_if_locked = "auto"

# Show inapplicable models in preview, to be sure not missing models one could have used
#show_inapplicable_models_preview = false

# Show inapplicable transformers in preview, to be sure not missing transformers one could have used
#show_inapplicable_transformers_preview = false

# Show warnings for models (image auto, Dask multinode/multi-GPU) if conditions are met to use but not chosen, to avoid missing models that could benefit accuracy/performance
#show_warnings_preview = false

# Show warnings for models that have no transformers for certain features.
#show_warnings_preview_unused_map_features = true

# Up to how many input features to determine, during GUI/client preview, unused features. Too many slows preview down.
#max_cols_show_unused_features = 1000

# Up to how many input features to show transformers used for each input feature.
#max_cols_show_feature_transformer_mapping = 1000

# Up to how many input features to show, in preview, that are unused features.
#warning_unused_feature_show_max = 3

#interaction_finder_max_rows_x_cols = 200000.0

#interaction_finder_corr_threshold = 0.95

# Required GINI relative improvement for InteractionTransformer.
# If GINI is not better than this relative improvement compared to original features considered
# in the interaction, then the interaction is not returned. If noisy data, and no clear signal
# in interactions but still want interactions, then can decrease this number.
#interaction_finder_gini_rel_improvement_threshold = 0.5

# Number of transformed Interactions to make as best out of many generated trial interactions.
#interaction_finder_return_limit = 5

# Whether to enable bootstrap sampling. Provides error bars to validation and test scores based on the standard error of the bootstrap mean.
#enable_bootstrap = true

# Minimum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
#
#min_bootstrap_samples = 1

# Maximum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
#
#max_bootstrap_samples = 100

# Minimum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
#
#min_bootstrap_sample_size_factor = 1.0

# Maximum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
#
#max_bootstrap_sample_size_factor = 10.0

# Seed to use for final model bootstrap sampling, -1 means use experiment-derived seed.
# E.g. one can retrain final model with different seed to get different final model error bars for scores.
#
#bootstrap_final_seed = -1
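# Example (illustrative): retrain the final model with a fixed bootstrap seed and
# a pinned number of bootstrap samples for more reproducible error bars:
#   bootstrap_final_seed = 1234
#   min_bootstrap_samples = 20
#   max_bootstrap_samples = 20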

# Benford's law: mean absolute deviance threshold equal and above which integer valued columns are treated as categoricals too
#benford_mad_threshold_int = 0.03

# Benford's law: mean absolute deviance threshold equal and above which real valued columns are treated as categoricals too
#benford_mad_threshold_real = 0.1

# Variable importance below which feature is dropped (with possible replacement found that is better)
# This also sets overall scale for lower interpretability settings.
# Set to lower value if ok with many weak features despite choosing high interpretability,
# or if see drop in performance due to the need for weak features.
#
#varimp_threshold_at_interpretability_10 = 0.001

# Whether to avoid setting stabilize_varimp=false and stabilize_fs=false for time series experiments.
#allow_stabilize_varimp_for_ts = false

# Variable importance is used by genetic algorithm to decide which features are useful,
# so this can stabilize the feature selection by the genetic algorithm.
# This is by default disabled for time series experiments, which can have really diverse behavior in each split.
# But in some cases feature selection is improved in presence of highly shifted variables that are not handled
# by lag transformers and one can set allow_stabilize_varimp_for_ts=true.
#
#stabilize_varimp = true

# Whether to take minimum (True) or mean (False) of delta improvement in score when aggregating feature selection scores across multiple folds/depths.
# Delta improvement of score corresponds to original metric minus metric of shuffled feature frame if maximizing metric,
# and corresponds to negative of such a score difference if minimizing.
# Feature selection by permutation importance considers the change in score after shuffling a feature, and using minimum operation
# ignores optimistic scores in favor of pessimistic scores when aggregating over folds.
# Note, if using tree methods, multiple depths may be fitted, in which case regardless of this toml setting,
# only features that are kept for all depths are kept by feature selection.
# If interpretability >= config toml value of fs_data_vary_for_interpretability, then half data (or setting of fs_data_frac)
# is used as another fit, in which case regardless of this toml setting,
# only features that are kept for all data sizes are kept by feature selection.
# Note: This is disabled for small data since arbitrary slices of small data can lead to disjoint features being important and only aggregated average behavior has signal.
#
#stabilize_fs = true

# Whether final pipeline uses fixed features for some transformers that would normally
# perform search, such as InteractionsTransformer.
# Use what was learned from tuning and evolution (True) or freshly search for new features (False).
# This can give a more stable pipeline, especially for small data or when using interaction transformer
# as pretransformer in multi-layer pipeline.
#
#stabilize_features = true

#fraction_std_bootstrap_ladder_factor = 0.01

#bootstrap_ladder_samples_limit = 10

#features_allowed_by_interpretability = "{1: 10000000, 2: 10000, 3: 1000, 4: 500, 5: 300, 6: 200, 7: 150, 8: 100, 9: 80, 10: 50, 11: 50, 12: 50, 13: 50}"

#nfeatures_max_threshold = 200

#rdelta_percent_score_penalty_per_feature_by_interpretability = "{1: 0.0, 2: 0.1, 3: 1.0, 4: 2.0, 5: 5.0, 6: 10.0, 7: 20.0, 8: 30.0, 9: 50.0, 10: 100.0, 11: 100.0, 12: 100.0, 13: 100.0}"

#drop_low_meta_weights = true

#meta_weight_allowed_by_interpretability = "{1: 1E-7, 2: 1E-5, 3: 1E-4, 4: 1E-3, 5: 1E-2, 6: 0.03, 7: 0.05, 8: 0.08, 9: 0.10, 10: 0.15, 11: 0.15, 12: 0.15, 13: 0.15}"

#meta_weight_allowed_for_reference = 1.0

#feature_cost_mean_interp_for_penalty = 5

#features_cost_per_interp = 0.25

#varimp_threshold_shift_report = 0.3

#apply_featuregene_limits_after_tuning = true

#remove_scored_0gain_genes_in_postprocessing_above_interpretability = 13

#remove_scored_0gain_genes_in_postprocessing_above_interpretability_final_population = 2

#remove_scored_by_threshold_genes_in_postprocessing_above_interpretability_final_population = 7

#show_full_pipeline_details = false

#num_transformed_features_per_pipeline_show = 10

#fs_data_vary_for_interpretability = 7

#fs_data_frac = 0.5

#many_columns_count = 400

#columns_count_interpretable = 200

#round_up_indivs_for_busy_gpus = true

#tuning_share_varimp = "best"

# Graphviz is an optional requirement for native installations (RPM/DEB/Tar-SH, outside of Docker) to convert .dot files into .png files for pipeline visualizations as part of experiment artifacts
#require_graphviz = true

# Unnormalized probability to add genes or instances of transformers with specific attributes.
# If no genes can be added, other mutations
# (mutating model hyperparameters, pruning genes, pruning features, etc.) are attempted.
#
#prob_add_genes = 0.5

# Unnormalized probability, conditioned on prob_add_genes,
# to add genes or instances of transformers with specific attributes
# that have shown to be beneficial to other individuals within the population.
#
#prob_addbest_genes = 0.5

# Unnormalized probability to prune genes or instances of transformers with specific attributes.
# If a variety of transformers with many attributes exists, default value is reasonable.
# However, if one has fixed set of transformers that should not change or no new transformer attributes
# can be added, then setting this to 0.0 is reasonable to avoid undesired loss of transformations.
#
#prob_prune_genes = 0.5

# Unnormalized probability to change model hyperparameters.
#
#prob_perturb_xgb = 0.25

# Unnormalized probability to prune features that have low variable importance, as opposed to pruning entire instances of genes/transformers when prob_prune_genes used.
# If prob_prune_genes=0.0 and prob_prune_by_features==0.0 and prob_prune_by_top_features==0.0, then genes/transformers and transformed features are only pruned if they are:
# 1) inconsistent with the genome
# 2) inconsistent with the column data types
# 3) had no signal (for interactions and cv_in_cv for target encoding)
# 4) transformation failed
# E.g. these toml settings are then ignored:
# 1) ngenes_max
# 2) limit_features_by_interpretability
# 3) varimp_threshold_at_interpretability_10
# 4) features_allowed_by_interpretability
# 5) remove_scored_0gain_genes_in_postprocessing_above_interpretability
# 6) nfeatures_max_threshold
# 7) features_cost_per_interp
# So this acts similarly to no_drop_features, except no_drop_features also applies to shift and leak detection, constant columns are not dropped, and ID columns are not dropped.
#prob_prune_by_features = 0.25

# Unnormalized probability to prune features that have high variable importance,
# in case they have high gain but negative performance on validation and would otherwise maintain poor validation scores.
# Similar to prob_prune_by_features but for high gain features.
#prob_prune_by_top_features = 0.25

# Maximum number of high gain features to prune for each mutation call, to control behavior of prob_prune_by_top_features.
#max_num_prune_by_top_features = 1

# Like prob_prune_genes but only for pretransformers, i.e. those transformers in layers except the last layer that connects to the model.
#prob_prune_pretransformer_genes = 0.5

# Like prob_prune_by_features but only for pretransformers, i.e. those transformers in layers except the last layer that connects to the model.
#prob_prune_pretransformer_by_features = 0.25

# Like prob_prune_by_top_features but only for pretransformers, i.e. those transformers in layers except the last layer that connects to the model.
#prob_prune_pretransformer_by_top_features = 0.25

# When doing restart, retrain, refit, reset these individual parameters to new toml values.
#override_individual_from_toml_list = "['prob_perturb_xgb', 'prob_add_genes', 'prob_addbest_genes', 'prob_prune_genes', 'prob_prune_by_features', 'prob_prune_by_top_features', 'prob_prune_pretransformer_genes', 'prob_prune_pretransformer_by_features', 'prob_prune_pretransformer_by_top_features']"
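# Example (illustrative): freeze an evolved set of transformers by disabling gene
# additions and pruning, so mutations only perturb model hyperparameters:
#   prob_add_genes = 0.0
#   prob_prune_genes = 0.0
#   prob_prune_by_features = 0.0
#   prob_prune_by_top_features = 0.0
#   prob_perturb_xgb = 1.0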

# Max. number of trees to use for all tree model predictions. For testing, when predictions don't matter. -1 means disabled.
#fast_approx_max_num_trees_ever = -1

# Max. number of trees to use for fast_approx=True (e.g., for AutoDoc/MLI).
#fast_approx_num_trees = 250

# Whether to speed up fast_approx=True further, by using only one fold out of all cross-validation folds (e.g., for AutoDoc/MLI).
#fast_approx_do_one_fold = true

# Whether to speed up fast_approx=True further, by using only one model out of all ensemble models (e.g., for AutoDoc/MLI).
#fast_approx_do_one_model = false

# Max. number of trees to use for fast_approx_contribs=True (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_num_trees = 50

# Whether to speed up fast_approx_contribs=True further, by using only one fold out of all cross-validation folds (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_fold = true

# Whether to speed up fast_approx_contribs=True further, by using only one model out of all ensemble models (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_model = true

# Approximate interval between logging of progress updates when making predictions. >=0 to enable, -1 to disable.
#prediction_logging_interval = 300

# Whether to use exploit-explore logic like DAI 1.8.x. False will explore more.
#use_187_prob_logic = true

# Whether to enable cross-validated OneHotEncoding+LinearModel transformer
#enable_ohe_linear = false

#max_absolute_feature_expansion = 1000

#booster_for_fs_permute = "auto"

#model_class_name_for_fs_permute = "auto"

#switch_from_tree_to_lgbm_if_can = true

#model_class_name_for_shift = "auto"

#model_class_name_for_leakage = "auto"

#default_booster = "lightgbm"

#default_model_class_name = "LightGBMModel"

#num_as_cat_false_if_ohe = true

#no_ohe_try = true

# Number of classes above which to include TensorFlow (if TensorFlow is enabled),
# even if not used exclusively.
# For small data this is decreased by tensorflow_num_classes_small_data_factor,
# and for bigger data, this is increased by tensorflow_num_classes_big_data_reduction_factor.
#tensorflow_added_num_classes_switch = 5

# Number of classes above which to only use TensorFlow (if TensorFlow is enabled),
# instead of other models set to 'auto' (models set to 'on' are still used).
# Up to tensorflow_num_classes_switch_but_keep_lightgbm, keep LightGBM.
# If small data, this is increased by tensorflow_num_classes_small_data_factor.
#tensorflow_num_classes_switch = 10

#tensorflow_num_classes_switch_but_keep_lightgbm = 15

#tensorflow_num_classes_small_data_factor = 3

#tensorflow_num_classes_big_data_reduction_factor = 6

# Compute empirical prediction intervals (based on holdout predictions).
#prediction_intervals = true

# Confidence level for prediction intervals.
#prediction_intervals_alpha = 0.9

# Appends one extra output column with predicted target class (after the per-class probabilities).
# Uses argmax for multiclass, and the threshold defined by the optimal scorer controlled by the
# 'threshold_scorer' expert setting for binary problems. This setting controls the training, validation and test
# set predictions (if applicable) that are created by the experiment. MOJO, scoring pipeline and client APIs
# control this behavior via their own version of this parameter.
#pred_labels = true

# Class count above which do not use TextLin Transformer.
#textlin_num_classes_switch = 5

#text_gene_dim_reduction_choices = "[50]"

#text_gene_max_ngram = "[1, 2, 3]"

# Max size (in tokens) of the vocabulary created during fitting of Tfidf/Count/Comatrix based text
# transformers (not CNN/BERT). If multiple values are provided, will use the first one for initial models, and use remaining
# values during parameter tuning and feature evolution. Values smaller than 10000 are recommended for speed,
# and a reasonable set of choices include: 100, 1000, 5000, 10000, 50000, 100000, 500000.
# Note: If force_enable_text_comatrix_preprocess is set to True, then only a selective set of top vocabularies will be used due to computational and memory complexity.
#text_transformers_max_vocabulary_size = "[1000, 5000]"

# Enables caching of BERT embeddings by temporarily saving the embedding vectors to the experiment directory. Set to -1 to cache all text, set to 0 to disable caching.
#number_of_texts_to_cache_in_bert_transformer = -1

# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
# that training score (on training data, not holdout) and validation score differ no more than this absolute value
# (i.e., stop adding trees once abs(train_score - valid_score) > max_abs_score_delta_train_valid).
# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
# To disable, set to 0.0
#max_abs_score_delta_train_valid = 0.0

# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
# that training score (on training data, not holdout) and validation score differ no more than this relative value
# (i.e., stop adding trees once abs(train_score - valid_score) > max_rel_score_delta_train_valid * abs(train_score)).
# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
# To disable, set to 0.0
#max_rel_score_delta_train_valid = 0.0
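# Example (illustrative): stop adding trees once the training score is more than
# 10% better (in relative terms) than the validation score, to curb overfitting:
#   max_rel_score_delta_train_valid = 0.1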

# Whether to search for optimal lambda for given alpha for XGBoost GLM.
# If 'auto', disabled if training data has more rows * cols than final_pipeline_data_size or for multiclass experiments.
# Disabled always for ensemble_level = 0.
# Not always a good approach, can be slow for little payoff compared to grid search.
#
#glm_lambda_search = "auto"

# If XGBoost GLM lambda search is enabled, whether to do search by the eval metric (True)
# or using the actual DAI scorer (False).
#glm_lambda_search_by_eval_metric = false

#gbm_early_stopping_rounds_min = 1

#gbm_early_stopping_rounds_max = 10000000000

# Whether to enable early stopping threshold for LightGBM, varying by accuracy.
# Stops training once validation score changes by less than the threshold.
# This leads to fewer trees, usually avoiding wasteful trees, but may lower accuracy.
# However, it may also improve generalization by avoiding fine-tuning to validation set.
# 0 leads to value of 0 used, i.e. disabled
# > 0 means non-automatic mode using that *relative* value, scaled by first tree results of the metric for any metric.
# -1 means always enable, but the threshold itself is automatic (lower the accuracy, the larger the threshold).
# -2 means fully automatic mode, i.e. disabled unless reduce_mojo_size is true. If true, the lower the accuracy, the larger the threshold.
# NOTE: Automatic threshold is set so relative value of metric's min_delta in LightGBM's callback for early stopping is:
# if accuracy <= 1:
# early_stopping_threshold = 1e-1
# elif accuracy <= 4:
# early_stopping_threshold = 1e-2
# elif accuracy <= 7:
# early_stopping_threshold = 1e-3
# elif accuracy <= 9:
# early_stopping_threshold = 1e-4
# else:
# early_stopping_threshold = 0
#
#enable_early_stopping_threshold = -2.0

#glm_optimal_refit = true
3140
3141# Whether to force-enable the co-occurrence text preprocess. Only applicable to TextTransformer; default is False. Note: This setting will override the choice made by the Gene. Currently MOJO does not support the co-occurrence matrix operation.
3142#force_enable_text_comatrix_preprocess = false
3143
3144# Window size of the neighboring vocabulary being counted during fitting of Co-Occurrence based text
3145# transformers (not CNN/BERT). If multiple values are provided, will use the first one for initial models, and use remaining
3146# values during parameter tuning and feature evolution. Values smaller than 5 are recommended for speed and memory,
3147# defaults are 3, 2, 4.
3148#text_gene_comatrix_window_size_choices = "[3, 2, 4]"
3149
3150# Max. number of top variable importances to save per iteration (GUI can only display a max. of 14)
3151#max_varimp_to_save = 100
3152
3153# Max. number of top variable importances to show in logs during feature evolution
3154#max_num_varimp_to_log = 10
3155
3156# Max. number of top variable importance shifts to show in logs and GUI after final model built
3157#max_num_varimp_shift_to_log = 10
3158
3159# Skipping just avoids the failed transformer.
3160# Sometimes python multiprocessing swallows exceptions,
3161# so skipping and logging exceptions is also a more reliable way to handle them.
3162# Recipe can raise h2oaicore.systemutils.IgnoreError to ignore error and avoid logging error.
3163# Features that fail are pruned from the individual.
3164# If that leaves no features in the individual, then backend tuning, feature/model tuning, final model building, etc.
3165# will still fail since DAI should not continue if all features are from a failed state.
3166#
3167#skip_transformer_failures = true
3168
3169# Skipping just avoids the failed model. Failures are logged depending upon detailed_skip_failure_messages_level.
3170# Recipe can raise h2oaicore.systemutils.IgnoreError to ignore error and avoid logging error.
3171#
3172#skip_model_failures = true
3173
3174# Skipping just avoids the failed scorer if among many scorers. Failures are logged depending upon detailed_skip_failure_messages_level.
3175# Recipe can raise h2oaicore.systemutils.IgnoreError to ignore error and avoid logging error.
3176# Default is True to avoid failing in, e.g., final model building due to a single scorer.
3177#
3178#skip_scorer_failures = true
3179
3180# Skipping avoids the failed recipe. Failures are logged depending upon detailed_skip_failure_messages_level.
3181# Default is False because runtime data recipes are one-time at start of experiment and expected to work by default.
3182#
3183#skip_data_recipe_failures = false
3184
3185# Whether final model transformer failures can be skipped for layers beyond the first layer in a multi-layer pipeline.
3186#can_skip_final_upper_layer_failures = true
3187
3188# Verbosity level for logging failure messages of failed and then skipped transformers or models.
3189# Full failures always go to disk as *.stack files,
3190# which upon completion of the experiment are included in the details folder within the experiment log zip file.
3191#
3192#detailed_skip_failure_messages_level = 1
3193
3194# Whether to not just log errors of recipes (models and transformers) but also show a high-level notification in the GUI.
3195#
3196#notify_failures = true
3197
3198# Instructions for 'Add to config.toml via toml string' in GUI expert page
3199# Self-referential toml parameter, for setting any other toml parameters as a string of tomls
3200# separated by \n
3201# (spaces around \n are ok).
3202# Useful when toml parameter is not in expert mode but want per-experiment control.
3203# Setting this will override all other choices.
3204# In expert page, each time expert options saved, the new state is set without memory of any prior settings.
3205# The entered item is a fully compliant toml string that would be processed directly by toml.load().
3206# One should include 2 double quotes around the entire setting, or double quotes need to be escaped.
3207# One enters into the expert page text as follows:
3208# e.g. ``enable_glm="off"
3209# enable_xgboost_gbm="off"
3210# enable_lightgbm="on"``
3211# e.g. ``""enable_glm="off"
3212# enable_xgboost_gbm="off"
3213# enable_lightgbm="off"
3214# enable_tensorflow="on"""``
3215# e.g. ``fixed_num_individuals=4``
3216# e.g. ``params_lightgbm="{'objective':'poisson'}"``
3217# e.g. ``""params_lightgbm="{'objective':'poisson'}"""``
3218# e.g. ``max_cores=10
3219# data_precision="float32"
3220# max_rows_feature_evolution=50000000000
3221# ensemble_accuracy_switch=11
3222# feature_engineering_effort=1
3223# target_transformer="identity"
3224# tournament_feature_style_accuracy_switch=5
3225# params_tensorflow="{'layers': (100, 100, 100, 100, 100, 100)}"``
3226# e.g. ``""max_cores=10
3227# data_precision="float32"
3228# max_rows_feature_evolution=50000000000
3229# ensemble_accuracy_switch=11
3230# feature_engineering_effort=1
3231# target_transformer="identity"
3232# tournament_feature_style_accuracy_switch=5
3233# params_tensorflow="{'layers': (100, 100, 100, 100, 100, 100)}"""``
3234# If you see: "toml.TomlDecodeError" then ensure toml is set correctly.
3235# When set in the expert page of an experiment, these changes only affect experiments and not the server
3236# Usually should keep this as empty string in this toml file.
3237#
3238#config_overrides = ""
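# For example (illustrative only, using settings documented elsewhere in this file), a
# server-level override disabling GLM and LightGBM for all experiments could look like
# this (triple quotes form a standard TOML multi-line string):
#
# config_overrides = """
# enable_glm="off"
# enable_lightgbm="off"
# """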
3239
3240# Whether to dump every scored individual's variable importance to csv/tabulated/json files. Produces files like:
3241# individual_scored_id%d.iter%d.<hash>.features.txt for transformed features.
3242# individual_scored_id%d.iter%d.<hash>.features_orig.txt for original features.
3243# individual_scored_id%d.iter%d.<hash>.coefs.txt for absolute importance of transformed features.
3244# There are txt, tab.txt, and json formats for some files, and "best_" prefix means it is the best individual for that iteration
3245# The hash in the name matches the hash in the files produced by dump_modelparams_every_scored_indiv=true that can be used to track mutation history.
3246#dump_varimp_every_scored_indiv = false
3247
3248# Whether to dump every scored individual's model parameters to csv/tabulated/json file
3249# produces files like: individual_scored.params.[txt, csv, json].
3250# Each individual has a hash that matches the hash in the filenames produced if dump_varimp_every_scored_indiv=true,
3251# and the "unchanging hash" is the first parent hash (None if that individual is the first parent itself).
3252# These hashes can be used to track the history of the mutations.
3253#
3254#dump_modelparams_every_scored_indiv = true
3255
3256# Number of features to show in the model dump of every scored individual
3257#dump_modelparams_every_scored_indiv_feature_count = 3
3258
3259# Number of past mutations to show in the model dump of every scored individual
3260#dump_modelparams_every_scored_indiv_mutation_count = 3
3261
3262# Whether to write model parameters of every scored individual to separate files like individual_scored_id%d.iter%d*params* (true) or append to a single file (false)
3263#dump_modelparams_separate_files = false
3264
3265# Whether to dump every scored fold's timing and feature info to a *timings*.txt file
3266#
3267#dump_trans_timings = false
3268
3269# Whether to delete preview timings if transformer timings were written
3270#delete_preview_trans_timings = true
3271
3272# Attempt to create at most this many exemplars (actual rows behaving like cluster centroids) for the Aggregator
3273# algorithm in unsupervised experiment mode.
3274#
3275#unsupervised_aggregator_n_exemplars = 100
3276
3277# Attempt to create at least this many clusters for clustering algorithm in unsupervised experiment mode.
3278#
3279#unsupervised_clustering_min_clusters = 2
3280
3281# Attempt to create no more than this many clusters for clustering algorithm in unsupervised experiment mode.
3282#
3283#unsupervised_clustering_max_clusters = 10
3284
3285#use_random_text_file = false
3286
3287#runtime_estimation_train_frame = ""
3288
3289#enable_bad_scorer = false
3290
3291#debug_col_dict_prefix = ""
3292
3293#return_early_debug_col_dict_prefix = false
3294
3295#return_early_debug_preview = false
3296
3297#wizard_random_attack = false
3298
3299#wizard_enable_back_button = true
3300
3301#wizard_deployment = ""
3302
3303#wizard_repro_level = -1
3304
3305#wizard_sample_size = 100000
3306
3307#wizard_model = "rf"
3308
3309# Maximum number of columns allowed to start an experiment. This threshold exists to constrain the complexity and duration of Driverless AI's processes.
3310#wizard_max_cols = 100000
3311
3312# How many seconds to allow preview to take for Wizard.
3313#wizard_timeout_preview = 30
3314
3315# How many seconds to allow leakage detection to take for Wizard.
3316#wizard_timeout_leakage = 60
3317
3318# How many seconds to allow duplicate row detection to take for Wizard.
3319#wizard_timeout_dups = 30
3320
3321# How many seconds to allow variable importance calculation to take for Wizard.
3322#wizard_timeout_varimp = 30
3323
3324# How many seconds to allow dataframe schema calculation to take for Wizard.
3325#wizard_timeout_schema = 60
3326
3327#max_reorder_experiments = 100
3328
3329# Default upper bound on the number of experiments owned per user. A negative value means an infinite quota.
3330#default_experiments_quota_per_user = -1
3331
3332# Dictionary mapping usernames to experiment quota values; overrides the above default for the specified set of users.
3333# e.g: ``override_experiments_quota_for_users="{'user1':10,'user2':20,'user3':30}"`` to set user1 with a quota of 10 experiments,
3334# user2 with a quota of 20 experiments and user3 with a quota of 30 experiments.
3335#
3336#override_experiments_quota_for_users = "{}"
3337
3338# authentication_method
3339# unvalidated : Accepts user id and password. Does not validate password.
3340# none: Does not ask for user id or password. Authenticated as admin.
3341# openid: Uses OpenID Connect provider for authentication. See additional OpenID settings below.
3342# oidc: Renewed OpenID Connect authentication using authorization code flow. See additional OpenID settings below.
3343# pam: Accepts user id and password. Validates user with operating system.
3344# ldap: Accepts user id and password. Validates against an ldap server. Look
3345# for additional settings under LDAP settings.
3346# local: Accepts a user id and password. Validated against an htpasswd file provided in local_htpasswd_file.
3347# ibm_spectrum_conductor: Authenticate with IBM conductor auth api.
3348# tls_certificate: Authenticate with Driverless by providing a TLS certificate.
3349# jwt: Authenticate by JWT obtained from the request metadata.
3350#
3351#authentication_method = "unvalidated"
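# For example (illustrative values only), to validate users against an LDAP server,
# this setting can be combined with the LDAP options defined further below:
#
# authentication_method = "ldap"
# ldap_server = "ldap.example.com"
# ldap_port = "389"
# ldap_search_base = "ou=people,dc=example,dc=com"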
3352
3353# Additional authentication methods that will be enabled for the clients. Login forms for each method will be available on the ``/login/<authentication_method>`` path. Comma separated list.
3354#additional_authentication_methods = "[]"
3355
3356# The default amount of time in hours before a user is signed out and must log in again. This setting is used when a default timeout value is not provided by ``authentication_method``.
3357#authentication_default_timeout_hours = 72.0
3358
3359# When enabled, the user's session is automatically prolonged, even when they are not interacting directly with the application.
3360#authentication_gui_polling_prolongs_session = false
3361
3362# OpenID Connect Settings:
3363# Refer to the OpenID Connect Basic Client Implementation Guide for details on how OpenID authentication flow works
3364# https://openid.net/specs/openid-connect-basic-1_0.html
3365# base server URI to the OpenID Provider server (ex: https://oidp.ourdomain.com)
3366#auth_openid_provider_base_uri = ""
3367
3368# URI to pull OpenID config data from (you can extract most of required OpenID config from this url)
3369# usually located at: /auth/realms/master/.well-known/openid-configuration
3370#auth_openid_configuration_uri = ""
3371
3372# URI to start authentication flow
3373#auth_openid_auth_uri = ""
3374
3375# URI to make request for token after callback from OpenID server was received
3376#auth_openid_token_uri = ""
3377
3378# URI to get user information once access_token has been acquired (ex: list of groups user belongs to will be provided here)
3379#auth_openid_userinfo_uri = ""
3380
3381# URI to logout user
3382#auth_openid_logout_uri = ""
3383
3384# callback URI that the OpenID provider will use to send the 'authentication_code'
3385# This is the OpenID callback endpoint in Driverless AI. Most OpenID providers need this to be HTTPS.
3386# (ex. https://driverless.ourdomain.com/openid/callback)
3387#auth_openid_redirect_uri = ""
3388
3389# OAuth2 grant type (usually authorization_code for OpenID, can be access_token also)
3390#auth_openid_grant_type = ""
3391
3392# OAuth2 response type (usually code)
3393#auth_openid_response_type = ""
3394
3395# Client ID registered with OpenID provider
3396#auth_openid_client_id = ""
3397
3398# Client secret provided by OpenID provider when registering Client ID
3399#auth_openid_client_secret = ""
3400
3401# Scope of info (usually openid). Can be a space-delimited list of more than one; possible
3402# values listed at https://openid.net/specs/openid-connect-basic-1_0.html#Scopes
3403#auth_openid_scope = ""
3404
3405# What key in user_info JSON should we check to authorize user
3406#auth_openid_userinfo_auth_key = ""
3407
3408# What value should the key have in user_info JSON in order to authorize user
3409#auth_openid_userinfo_auth_value = ""
3410
3411# Key that specifies username in user_info JSON (we will use the value of this key as username in Driverless AI)
3412#auth_openid_userinfo_username_key = ""
3413
3414# Quote method from urllib.parse used to encode payload dict in Authentication Request
3415#auth_openid_urlencode_quote_via = "quote"
3416
3417# Key in Token Response JSON that holds the value for access token expiry
3418#auth_openid_access_token_expiry_key = "expires_in"
3419
3420# Key in Token Response JSON that holds the value for refresh token expiry
3421#auth_openid_refresh_token_expiry_key = "refresh_expires_in"
3422
3423# Expiration time in seconds for access token
3424#auth_openid_token_expiration_secs = 3600
3425
3426# Enables advanced matching for OpenID Connect authentication.
3427# When enabled, an ObjectPath (<http://objectpath.org/>) expression is used to
3428# evaluate the user identity.
3429#
3430#auth_openid_use_objectpath_match = false
3431
3432# ObjectPath (<http://objectpath.org/>) expression that will be used
3433# to evaluate whether user is allowed to login into Driverless.
3434# Any expression that evaluates to True means user is allowed to log in.
3435# Examples:
3436# Simple claim equality: `$.our_claim is "our_value"`
3437# List of claims contains required value: `"expected_role" in @.roles`
3438#
3439#auth_openid_use_objectpath_expression = ""
3440
3441# Sets token introspection URL for OpenID Connect authentication. (needs to be an absolute URL) Needs to be set when API token introspection is enabled. Is used to get the token TTL when set and IDP does not provide expires_in field in the token endpoint response.
3442#auth_openid_token_introspection_url = ""
3443
3444# Sets a URL to which the user is redirected after logging out, when set. (needs to be an absolute URL)
3445#auth_openid_end_session_endpoint_url = ""
3446
3447# If set, server will use these scopes when it asks for the token on the login. (space separated list)
3448#auth_openid_default_scopes = ""
3449
3450# Specifies the source from which user identity and username is retrieved.
3451# Currently supported sources are:
3452# user_info: Retrieves username from UserInfo endpoint response
3453# id_token: Retrieves username from ID Token using
3454# `auth_openid_id_token_username_key` claim
3455#
3456#auth_oidc_identity_source = "userinfo"
3457
3458# Claim holding the preferred username in the message carrying the user identity, which will be used as the username in the application. The user identity source is specified by `auth_oidc_identity_source`, and can be e.g. the UserInfo endpoint response or the ID Token
3459#auth_oidc_username_claim = ""
3460
3461# OpenID-Connect Issuer URL, which is used for automatic provider info discovery. E.g. https://login.microsoftonline.com/<client-id>/v2.0
3462#auth_oidc_issuer_url = ""
3463
3464# OpenID-Connect Token endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
3465#auth_oidc_token_endpoint_url = ""
3466
3467# OpenID-Connect Token introspection endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
3468#auth_oidc_introspection_endpoint_url = ""
3469
3470# Absolute URL to which user is redirected, after they log out from the application, in case OIDC authentication is used. Usually this is absolute URL of DriverlessAI Login page e.g. https://1.2.3.4:12345/login
3471#auth_oidc_post_logout_url = ""
3472
3473# Key-value mapping of extra HTTP query parameters in an OIDC authorization request.
3474#auth_oidc_authorization_query_params = "{}"
3475
3476# When set to True, will skip cert verification.
3477#auth_oidc_skip_cert_verification = false
3478
3479# When set will use this value as the location for the CA cert, this takes precedence over auth_oidc_skip_cert_verification.
3480#auth_oidc_ca_cert_location = ""
3481
3482# Enables the option to use a Bearer token for authentication with the RPC endpoint.
3483#api_token_introspection_enabled = false
3484
3485# Sets the method that is used to introspect the bearer token.
3486# OAUTH2_TOKEN_INTROSPECTION: Uses OAuth 2.0 Token Introspection (RFC 7662)
3487# endpoint to introspect the bearer token.
3488# This is useful when 'openid' is used as the authentication method.
3489# Uses 'auth_openid_client_id' and 'auth_openid_client_secret' to
3490# authenticate with the authorization server and
3491# `auth_openid_token_introspection_url` to perform the introspection.
3492#
3493#api_token_introspection_method = "OAUTH2_TOKEN_INTROSPECTION"
3494
3495# Sets the minimum set of scopes that the access token needs to have
3496# in order to pass the introspection. Space separated.
3497# This is passed to the introspection endpoint and also verified after the response
3498# for servers that don't enforce scopes.
3499# Keeping this empty turns the verification off.
3500#
3501#api_token_oauth2_scopes = ""
3502
3503# Which field of the response returned by the token introspection endpoint should be used as a username.
3504#api_token_oauth2_username_field_name = "username"
3505
3506# Enables the option to initiate a PKCE flow from the UI in order to obtain tokens usable with Driverless clients
3507#oauth2_client_tokens_enabled = false
3508
3509# Sets up client id that will be used in the OAuth 2.0 Authorization Code Flow to obtain the tokens. Client needs to be public and be able to use PKCE with S256 code challenge.
3510#oauth2_client_tokens_client_id = ""
3511
3512# Sets up the absolute url to the authorize endpoint.
3513#oauth2_client_tokens_authorize_url = ""
3514
3515# Sets up the absolute url to the token endpoint.
3516#oauth2_client_tokens_token_url = ""
3517
3518# Sets up the absolute url to the token introspection endpoint. It's displayed in the UI so that clients can inspect the token expiration.
3519#oauth2_client_tokens_introspection_url = ""
3520
3521# Sets up the absolute redirect url where Driverless handles the redirect part of the Authorization Code Flow. This is <Driverless base url>/oauth2/client_token
3522#oauth2_client_tokens_redirect_url = ""
3523
3524# Sets up the scope for the requested tokens. Space separated list.
3525#oauth2_client_tokens_scope = "openid profile ai.h2o.storage"
3526
3527# ldap server domain or ip
3528#ldap_server = ""
3529
3530# ldap server port
3531#ldap_port = ""
3532
3533# Complete DN of the LDAP bind user
3534#ldap_bind_dn = ""
3535
3536# Password for the LDAP bind
3537#ldap_bind_password = ""
3538
3539# Provide Cert file location
3540#ldap_tls_file = ""
3541
3542# Set to true to use SSL, false otherwise
3543#ldap_use_ssl = false
3544
3545# the location in the DIT where the search will start
3546#ldap_search_base = ""
3547
3548# A string that describes what you are searching for. You can use Python substitution to have this constructed dynamically. (only {{DAI_USERNAME}} is supported)
3549#ldap_search_filter = ""
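# For example (illustrative), a filter matching the entry whose uid equals the
# Driverless AI login name ({{DAI_USERNAME}} is the only supported substitution):
#
# ldap_search_filter = "(uid={{DAI_USERNAME}})"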
3550
3551# ldap attributes to return from search
3552#ldap_search_attributes = ""
3553
3554# specify key to find user name
3555#ldap_user_name_attribute = ""
3556
3557# When using this recipe, needs to be set to "1"
3558#ldap_recipe = "0"
3559
3560# Deprecated, do not use
3561#ldap_user_prefix = ""
3562
3563# Deprecated, Use ldap_bind_dn
3564#ldap_search_user_id = ""
3565
3566# Deprecated, use ldap_bind_password
3567#ldap_search_password = ""
3568
3569# Deprecated, use ldap_search_base instead
3570#ldap_ou_dn = ""
3571
3572# Deprecated, use ldap_base_dn
3573#ldap_dc = ""
3574
3575# Deprecated, use ldap_search_base
3576#ldap_base_dn = ""
3577
3578# Deprecated, use ldap_search_filter
3579#ldap_base_filter = ""
3580
3581# Path to the CRL file that will be used to verify client certificate.
3582#auth_tls_crl_file = ""
3583
3584# What field of the subject will be used as the source for the username or other values used for further validation.
3585#auth_tls_subject_field = "CN"
3586
3587# Regular expression that will be used to parse subject field to obtain the username or other values used for further validation.
3588#auth_tls_field_parse_regexp = "(?P<username>.*)"
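# For example (illustrative), if the certificate subject CN had the form
# "jsmith/Engineering", the following regexp would capture only the part before the
# slash as the username:
#
# auth_tls_field_parse_regexp = "(?P<username>[^/]+)"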
3589
3590# Sets up how the user identity is obtained
3591# REGEXP_ONLY: Will use 'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
3592# to extract the username from the client certificate.
3593# LDAP_LOOKUP: Will use LDAP server to lookup for the username.
3594# 'auth_tls_ldap_server', 'auth_tls_ldap_port',
3595# 'auth_tls_ldap_use_ssl', 'auth_tls_ldap_tls_file',
3596# 'auth_tls_ldap_bind_dn', 'auth_tls_ldap_bind_password'
3597# options are used to establish the connection with the LDAP server.
3598# 'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
3599# options are used to parse the certificate.
3600# 'auth_tls_ldap_search_base', 'auth_tls_ldap_search_filter', and
4801#license_manager_ssl_certs = "true"
4802
4803# Amount of time that Driverless AI workers will keep retrying to startup and obtain a lease from
4804# the license manager before timing out. Time out will cause worker startup to fail.
4805#
4806#license_manager_worker_startup_timeout = 3600000
4807
4808# Emergency setting that will allow Driverless AI to run even if there are issues communicating with,
4809# or obtaining leases from, the License Manager server.
4810# This is an encoded string that can be obtained from either the license manager ui or the logs of the license
4811# manager server.
4812#
4813#license_manager_dry_run_token = ""
4814
4815# Choose LIME method to be used for creation of surrogate models.
4816#mli_lime_method = "k-LIME"
4817
4818# Choose whether surrogate models should be built for original or transformed features.
4819#mli_use_raw_features = true
4820
4821# Choose whether time series based surrogate models should be built for original features.
4822#mli_ts_use_raw_features = false
4823
4824# Choose whether to run all explainers on the sampled dataset.
4825#mli_sample = true
4826
4827# Set maximum number of features for which to build Surrogate Partial Dependence Plot. Use -1 to calculate Surrogate Partial Dependence Plot for all features.
4828#mli_vars_to_pdp = 10
4829
4830# Set the number of cross-validation folds for surrogate models.
4831#mli_nfolds = 3
4832
4833# Set the number of columns to bin in case of quantile binning.
4834#mli_qbin_count = 0
4835
4836# Number of threads for H2O instance for use by MLI.
4837#h2o_mli_nthreads = 8
4838
4839# Use this option to disable MOJO scoring pipeline. Scoring pipeline is chosen automatically (from MOJO and Python pipelines) by default. In case of certain models MOJO vs. Python choice can impact pipeline performance and robustness.
4840#mli_enable_mojo_scorer = true
4841
4842# When the number of rows is above this limit, sample for MLI when scoring UI data.
4843#mli_sample_above_for_scoring = 1000000
4844
4845# When the number of rows is above this limit, sample for MLI when training surrogate models.
4846#mli_sample_above_for_training = 100000
4847
4848# The sample size, number of rows, used for MLI surrogate models.
4849#mli_sample_size = 100000
4850
4851# Number of bins for quantile binning.
4852#mli_num_quantiles = 10
4853
4854# Number of trees for Random Forest surrogate model.
4855#mli_drf_num_trees = 100
4856
4857# Speed up predictions with a fast approximation (can reduce the number of trees or cross-validation folds).
4858#mli_fast_approx = true
4859
4860# Maximum number of interpreters status cache entries.
4861#mli_interpreter_status_cache_size = 1000
4862
4863# Max depth for Random Forest surrogate model.
4864#mli_drf_max_depth = 20
4865
4866# Not only sample for training, but also sample for scoring.
4867#mli_sample_training = true
4868
4869# Regularization strength for k-LIME GLM's.
4870#klime_lambda = "[1e-06, 1e-08]"
4871
4872# Regularization distribution between L1 and L2 for k-LIME GLM's.
4873#klime_alpha = 0.0
4874
4875# Max cardinality for numeric variables in surrogate models to be considered categorical.
4876#mli_max_numeric_enum_cardinality = 25
4877
4878# Maximum number of features allowed for k-LIME k-means clustering.
4879#mli_max_number_cluster_vars = 6
4880
4881# Use all columns for k-LIME k-means clustering (this will override `mli_max_number_cluster_vars` if set to `True`).
4882#use_all_columns_klime_kmeans = false
4883
4884# Strict version check for MLI
4885#mli_strict_version_check = true
4886
4887# MLI cloud name
4888#mli_cloud_name = ""
4889
4890# Compute original model ICE using per feature's bin predictions (true) or use "one frame" strategy (false).
4891#mli_ice_per_bin_strategy = false
4892
4893# By default DIA will run for categorical columns with cardinality <= mli_dia_default_max_cardinality.
4894#mli_dia_default_max_cardinality = 10
4895
4896# By default DIA will run for categorical columns with cardinality >= mli_dia_default_min_cardinality.
4897#mli_dia_default_min_cardinality = 2
4898
4899# When the number of rows is above this limit, sample for the MLI transformed Shapley calculation.
4900#mli_shapley_sample_size = 100000
4901
4902# Enable MLI keeper which ensures efficient use of filesystem/memory/DB by MLI.
4903#enable_mli_keeper = true
4904
4905# Enable MLI Sensitivity Analysis
4906#enable_mli_sa = true
4907
4908# Enable priority queues based explainers execution. Priority queues restrict available system resources and prevent system over-utilization. Interpretation execution time might be (significantly) slower.
4909#enable_mli_priority_queues = true
4910
4911# Explainers are run sequentially by default. This option can be used to run all explainers in parallel, which can - depending on hardware strength and the number of explainers - decrease interpretation duration. Consider explainer dependencies, the random order of explainers, and hardware over-utilization.
4912#mli_sequential_task_execution = true
4913
4914# When the number of rows is above this limit, sample for Disparate Impact Analysis.
4915#mli_dia_sample_size = 100000
4916
4917# When the number of rows is above this limit, sample for Partial Dependence Plot.
4918#mli_pd_sample_size = 25000
4919
4920# Use dynamic switching between Partial Dependence Plot numeric and categorical binning and UI chart selection in case of features which were used both as numeric and categorical by experiment.
4921#mli_pd_numcat_num_chart = true
4922
4923# If 'mli_pd_numcat_num_chart' is enabled, then use numeric binning and chart if feature unique values count is bigger than threshold, else use categorical binning and chart.
4924#mli_pd_numcat_threshold = 11
4925
4926# In New Interpretation screen show only datasets which can be used to explain a selected model. This can slow down the server significantly.
4927#new_mli_list_only_explainable_datasets = false
4928
4929# Enable async/await-based non-blocking MLI API
4930#enable_mli_async_api = true
4931
4932# Enable main chart aggregator in Sensitivity Analysis
4933#enable_mli_sa_main_chart_aggregator = true
4934
4935# Threshold above which to sample for Sensitivity Analysis (number of rows after sampling).
4936#mli_sa_sampling_limit = 500000
4937
4938# Run main chart aggregator in Sensitivity Analysis when the number of dataset instances is bigger than given limit.
4939#mli_sa_main_chart_aggregator_limit = 1000
4940
4941# Use predict_safe() (true) or predict_base() (false) in MLI (PD, ICE, SA, ...).
4942#mli_predict_safe = false
4943
4944# Number of max retries should the surrogate model fail to build.
4945#mli_max_surrogate_retries = 5
4946
4947# Allow use of symlinks (instead of file copy) by MLI explainer procedures.
4948#enable_mli_symlinks = true
4949
4950# Fraction of memory to allocate for h2o MLI jar
4951#h2o_mli_fraction_memory = 0.45
4952
4953# Add TOML string to Driverless AI server config.toml configuration file.
4954#mli_custom = ""
4955
4956# To exclude e.g. Sensitivity Analysis explainer use: excluded_mli_explainers=['h2oaicore.mli.byor.recipes.sa_explainer.SaExplainer'].
4957#excluded_mli_explainers = "[]"
4958
4959# Enable RPC API performance monitor.
4960#enable_ws_perfmon = false
4961
4962# Number of parallel workers when scoring using MOJO in Kernel Explainer.
4963#mli_kernel_explainer_workers = 4
4964
4965# Use Kernel Explainer to obtain Shapley values for original features.
4966#mli_run_kernel_explainer = false
4967
4968# Sample input dataset for Kernel Explainer.
4969#mli_kernel_explainer_sample = true
4970
4971# Sample size for input dataset passed to Kernel Explainer.
4972#mli_kernel_explainer_sample_size = 1000
4973
4974# 'auto' or int. Number of times to re-evaluate the model when explaining each prediction. More samples lead to lower variance estimates of the SHAP values. The 'auto' setting uses nsamples = 2 * X.shape[1] + 2048. This setting is disabled by default and DAI determines the right number internally.
4975#mli_kernel_explainer_nsamples = "auto"
4976
4977# 'num_features(int)', 'auto' (default for now, but deprecated), 'aic', 'bic', or float. The l1 regularization to use for feature selection (the estimation procedure is based on a debiased lasso). The 'auto' option currently uses aic when less than 20% of the possible sample space is enumerated, otherwise it uses no regularization. THE BEHAVIOR OF 'auto' WILL CHANGE in a future version to be based on 'num_features' instead of AIC. The aic and bic options use the AIC and BIC rules for regularization. Using 'num_features(int)' selects a fixed number of top features. Passing a float directly sets the alpha parameter of the sklearn.linear_model.Lasso model used for feature selection.
4978#mli_kernel_explainer_l1_reg = "aic"
4979
4980# Max runtime for Kernel Explainer in seconds. Default is 900, which equates to 15 minutes. Setting this parameter to -1 means to honor the Kernel Shapley sample size provided regardless of max runtime.
4981#mli_kernel_explainer_max_runtime = 900
4982
4983# Tokenizer used to extract tokens from text columns for MLI.
4984#mli_nlp_tokenizer = "tfidf"
4985
4986# Number of tokens used for MLI NLP explanations. -1 means all.
4987#mli_nlp_top_n = 20
4988
4989# Maximum number of records used by MLI NLP explainers.
4990#mli_nlp_sample_limit = 10000
4991
4992# Minimum number of documents in which a token has to appear. Integer means absolute count, float means percentage.
4993#mli_nlp_min_df = 3
4994
4995# Maximum number of documents in which a token has to appear. Integer means absolute count, float means percentage.
4996#mli_nlp_max_df = 0.9
4997
4998# The minimum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
4999#mli_nlp_min_ngram = 1
5000
5001# The maximum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
5002#mli_nlp_max_ngram = 1
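To make the int-vs-float semantics of `mli_nlp_min_df`/`mli_nlp_max_df` concrete, here is an illustrative sketch (not DAI's implementation) of how a token's document frequency would be filtered under the defaults:

```python
def passes_df_filters(doc_freq, n_docs, min_df=3, max_df=0.9):
    """An int threshold is an absolute document count; a float is a fraction of documents."""
    lo = min_df if isinstance(min_df, int) else min_df * n_docs
    hi = max_df if isinstance(max_df, int) else max_df * n_docs
    return lo <= doc_freq <= hi

# A token in 5 of 100 documents passes the defaults (3 <= 5 <= 90):
print(passes_df_filters(5, 100))   # True
# A token in 95 of 100 documents exceeds max_df = 0.9 (95 > 90):
print(passes_df_filters(95, 100))  # False
```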

# Mode used to choose N tokens for MLI NLP.
# "top" chooses N top tokens.
# "bottom" chooses N bottom tokens.
# "top-bottom" chooses math.floor(N/2) top and math.ceil(N/2) bottom tokens.
# "linspace" chooses N evenly spaced out tokens.
#mli_nlp_min_token_mode = "top"
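The four modes above can be sketched against a ranked token list; this is an illustrative helper (not DAI code) assuming tokens are ordered from most to least important:

```python
import math

def choose_tokens(tokens_ranked, n, mode="top"):
    """Select n tokens from a list ranked most-to-least important."""
    if mode == "top":
        return tokens_ranked[:n]
    if mode == "bottom":
        return tokens_ranked[-n:]
    if mode == "top-bottom":
        return tokens_ranked[:math.floor(n / 2)] + tokens_ranked[-math.ceil(n / 2):]
    if mode == "linspace":
        step = (len(tokens_ranked) - 1) / (n - 1)
        return [tokens_ranked[round(i * step)] for i in range(n)]
    raise ValueError(mode)

ranked = list("abcdefgh")
print(choose_tokens(ranked, 4, "top"))         # ['a', 'b', 'c', 'd']
print(choose_tokens(ranked, 3, "top-bottom"))  # ['a', 'g', 'h']
print(choose_tokens(ranked, 4, "linspace"))    # ['a', 'c', 'f', 'h']
```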

# The number of top tokens to be used as features when building token based feature importance.
#mli_nlp_tokenizer_max_features = -1

# The number of top tokens to be used as features when computing text LOCO.
#mli_nlp_loco_max_features = -1

# The tokenizer method to use when tokenizing a dataset for surrogate models. Choose either 'TF-IDF' or 'Linear Model + TF-IDF'. The latter first runs TF-IDF to get tokens and then fits a linear model between the tokens and the target; token importances are based on the coefficients of the linear model. Default is 'Linear Model + TF-IDF'. Only applies to NLP models.
#mli_nlp_surrogate_tokenizer = "Linear Model + TF-IDF"

# The number of top tokens to be used as features when building surrogate models. Only applies to NLP models.
#mli_nlp_surrogate_tokens = 100

# Ignore stop words for MLI NLP.
#mli_nlp_use_stop_words = true

# List of words to filter out before generation of text tokens, which are passed to MLI NLP LOCO and surrogate models (if enabled). Default is 'english'. Pass in custom stop words as a list, e.g., ['great', 'good'].
#mli_nlp_stop_words = "english"

# Append passed-in list of custom stop words to the default 'english' stop words.
#mli_nlp_append_to_english_stop_words = false

# Enable MLI for image experiments.
#mli_image_enable = true

# The maximum number of rows allowed when getting the local explanation result. Increasing the value may jeopardize overall performance; change the value only if necessary.
#mli_max_explain_rows = 500

# The maximum number of rows allowed when getting the NLP token importance result. Increasing the value may consume too much memory and negatively impact performance; change the value only if necessary.
#mli_nlp_max_tokens_rows = 50

# The minimum number of rows to enable parallel execution for NLP local explanations calculation.
#mli_nlp_min_parallel_rows = 10

# Run legacy defaults in addition to current default explainers in MLI.
#mli_run_legacy_defaults = false

# Run explainers sequentially for one given MLI job.
#mli_run_explainers_sequentially = false

# Set dask CUDA/RAPIDS cluster settings for single node workers.
# Additional environment variables can be set, see: https://dask-cuda.readthedocs.io/en/latest/ucx.html#dask-scheduler
# e.g. for ucx use: {} dict version of: dict(n_workers=None, threads_per_worker=1, processes=True, memory_limit='auto', device_memory_limit=None, CUDA_VISIBLE_DEVICES=None, data=None, local_directory=None, protocol='ucx', enable_tcp_over_ucx=True, enable_infiniband=False, enable_nvlink=False, enable_rdmacm=False, ucx_net_devices='auto', rmm_pool_size='1GB')
# WARNING: Do not add arguments like {'n_workers': 1, 'processes': True, 'threads_per_worker': 1}; this will lead to hangs, as the CUDA cluster handles these itself.
#
#dask_cuda_cluster_kwargs = "{'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"

# Set dask cluster settings for single node workers.
#
#dask_cluster_kwargs = "{'n_workers': 1, 'processes': True, 'threads_per_worker': 1, 'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"
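The `*_kwargs` values above are Python dict literals stored as strings. A hedged sketch of how such a string can be parsed safely before being handed to a dask cluster constructor such as `dask.distributed.LocalCluster(**kwargs)` (the parsing shown here is illustrative, not necessarily DAI's internal code path):

```python
import ast

raw = ("{'n_workers': 1, 'processes': True, 'threads_per_worker': 1, "
       "'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}")

# ast.literal_eval parses Python literals without executing arbitrary code.
kwargs = ast.literal_eval(raw)
print(kwargs["n_workers"], kwargs["protocol"])  # 1 tcp
```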

# Whether to start dask workers on this multinode worker.
#
#start_dask_worker = true

# Set dask scheduler env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_scheduler_env = "{}"

# Set dask cuda scheduler env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_scheduler_env = "{}"

# Set dask scheduler options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_scheduler_options = ""

# Set dask cuda scheduler options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_scheduler_options = ""

# Set dask worker env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_worker_env = "{'NCCL_P2P_DISABLE': '1', 'NCCL_DEBUG': 'WARN'}"

# Set dask worker options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_worker_options = "--memory-limit 0.95"

# Set dask cuda worker options.
# Similar options as dask_cuda_cluster_kwargs.
# See https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
# "--rmm-pool-size 1GB" can be set to give 1GB to RMM for more efficient RAPIDS operation.
#
#dask_cuda_worker_options = "--memory-limit 0.95"

# Set dask cuda worker env.
# See: https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
# https://ucx-py.readthedocs.io/en/latest/dask.html
#
#dask_cuda_worker_env = "{}"

# See https://docs.dask.org/en/latest/setup/cli.html
# e.g. ucx is optimal, while tcp is most reliable
#
#dask_protocol = "tcp"

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_server_port = 8786

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_dashboard_port = 8787

# See https://docs.dask.org/en/latest/setup/cli.html
# e.g. ucx is optimal, while tcp is most reliable
#
#dask_cuda_protocol = "tcp"

# See https://docs.dask.org/en/latest/setup/cli.html
# port + 1 is used for the dask dashboard
#
#dask_cuda_server_port = 8790

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_dashboard_port = 8791

# If empty string, auto-detect an IP capable of reaching the network.
# Required to be set if using worker_mode=multinode.
#
#dask_server_ip = ""

# Number of processes per dask (not cuda-GPU) worker.
# If -1, uses dask default of cpu count + 1 + nprocs.
# If -2, uses DAI default of total number of physical cores. Recommended for heavy feature engineering.
# If 1, assumes tasks are mostly multi-threaded and can use the entire node per task. Recommended for heavy multinode model training.
# Only applicable to dask (not dask_cuda) workers.
#
#dask_worker_nprocs = 1

# Number of threads per process for dask workers.
#dask_worker_nthreads = 1

# Number of threads per process for dask_cuda workers.
# If -2, uses DAI default of physical cores per GPU,
# since there must be exactly 1 worker per GPU.
#
#dask_cuda_worker_nthreads = -2

# See https://github.com/dask/dask-lightgbm
#
#lightgbm_listen_port = 12400

# Whether to enable the Jupyter server.
#enable_jupyter_server = false

# Port for the Jupyter server.
#jupyter_server_port = 8889

# Whether to enable the Jupyter server browser.
#enable_jupyter_server_browser = false

# Whether to allow root access to the Jupyter server browser.
#enable_jupyter_server_browser_root = false

# Hostname (or IP address) of a remote Triton inference service (outside of DAI), to be used when auto_deploy_triton_scoring_pipeline
# and make_triton_scoring_pipeline are not disabled. If set, check triton_model_repository_dir_remote and triton_server_params_remote as well.
#
#triton_host_remote = ""

# Path to the model repository directory for a remote Triton inference server outside of Driverless AI. All Triton deployments for all users are stored in this directory. Requires write access to this directory from Driverless AI (shared file system). This setting is optional. If not provided, each model deployment will be uploaded over the gRPC protocol.
#triton_model_repository_dir_remote = ""

# Parameters to connect to the remote Triton server, only used if triton_host_remote and
# triton_model_repository_dir_remote are set.
# Note: 'model-control-mode' needs to be set to 'explicit' in order to allow DAI to upload models to the remote
# Triton server.
#triton_server_params_remote = "{'http-port': 8000, 'grpc-port': 8001, 'metrics-port': 8002, 'model-control-mode': 'explicit'}"

#triton_log_level = 0

#triton_model_reload_on_startup_count = 0

#triton_clean_up_temp_python_env_on_startup = true

# When set to true, CPU executors will strictly run just CPU tasks.
#multinode_enable_strict_queue_policy = false

# Controls whether CPU tasks can run on GPU machines.
#multinode_enable_cpu_tasks_on_gpu_machines = true

# Storage medium to be used to exchange data between the main server and remote worker nodes.
#multinode_storage_medium = "minio"

# How long-running tasks are scheduled.
# multiprocessing: forks the current process immediately.
# singlenode: shares the task through redis and needs a worker running.
# multinode: same as singlenode, but also shares the data through minio
# and allows workers to run on different machines.
#
#worker_mode = "singlenode"

# Redis settings
#redis_ip = "127.0.0.1"

# Redis settings
#redis_port = 6379

# Redis database. Each DAI instance running on the redis server should have a unique integer.
#redis_db = 0

# Redis password. Randomly generated at main server startup; by default it will show up in the config file uncommented. If you are running more than one Driverless AI instance per system, make sure each instance is connected to its own redis queue.
#main_server_redis_password = "PlWUjvEJSiWu9j0aopOyL5KwqnrKtyWVoZHunqxr"

# If set to true, the config will get encrypted before it gets saved into the Redis database.
#redis_encrypt_config = false

# The port that Minio will listen on; this only takes effect if the current system is a multinode main server.
#local_minio_port = 9001

# Location of the main server's minio server.
#main_server_minio_address = "127.0.0.1:9001"

# Access key of the main server's minio server.
#main_server_minio_access_key_id = "GMCSE2K2T3RV6YEHJUYW"

# Secret access key of the main server's minio server.
#main_server_minio_secret_access_key = "JFxmXvE/W1AaqwgyPxAUFsJZRnDWUaeQciZJUe9H"

# Name of the minio bucket used for file synchronization.
#main_server_minio_bucket = "h2oai"

# S3 global access key.
#main_server_s3_access_key_id = "access_key"

# S3 global secret access key.
#main_server_s3_secret_access_key = "secret_access_key"

# S3 bucket.
#main_server_s3_bucket = "h2oai-multinode-tests"

# Maximum number of local tasks processed at once, limited to no more than the total number of physical (not virtual) cores divided by two (minimum of 1).
#worker_local_processors = 32
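The cap described for `worker_local_processors` can be sketched as follows; this is an illustrative helper under the stated rule (min of the requested value and half the physical cores, floored at 1), not a DAI API:

```python
def effective_local_processors(requested, physical_cores):
    """Cap the requested worker count at physical_cores // 2, with a floor of 1."""
    return min(requested, max(1, physical_cores // 2))

print(effective_local_processors(32, 16))  # 8
print(effective_local_processors(32, 2))   # 1
```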

# A concurrency limit for the 3 priority queues, only enabled when worker_remote_processors is greater than 0.
#worker_priority_queues_processors = 4

# A timeout before which a scheduled task is bumped up in priority.
#worker_priority_queues_time_check = 30

# Maximum number of remote tasks processed at once. If the value is set to -1, the system will automatically pick a reasonable limit depending on the number of available virtual CPU cores.
#worker_remote_processors = -1

# If worker_remote_processors >= 3, the factor by which each task reduces its threads; used by various packages like datatable, lightgbm, xgboost, etc.
#worker_remote_processors_max_threads_reduction_factor = 0.7

# Temporary file system location for multinode data transfer. This has to be an absolute path with equivalent configuration on both the main server and remote workers.
#multinode_tmpfs = ""

# When set to true, will use 'multinode_tmpfs' as the datasets store.
#multinode_store_datasets_in_tmpfs = false

# How often the server should extract results from the redis queue, in milliseconds.
#redis_result_queue_polling_interval = 100

# Sleep time for worker loop.
#worker_sleep = 0.1

# For how many seconds the worker should wait for the main server minio bucket before it fails.
#main_server_minio_bucket_ping_timeout = 180

# A JSON list of up to two objects, where each object defines a worker node profile with name, num_cpus, num_gpus, memory_gb, gpu_is_mig. Currently, the profiles must be named CPU and GPU. The GPU profile must have num_gpus greater than 0. An example worker_spec: [{"name": "CPU", "num_cpus": 8, "num_gpus": 2, "memory_gb": 32, "gpu_is_mig": true}].
#worker_node_spec = ""
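Since `worker_node_spec` is a JSON list, it can be validated with standard JSON tooling. A hedged sketch with two illustrative profiles (the values below are made up for the example; only the field names come from the description above):

```python
import json

spec = json.loads("""[
  {"name": "CPU", "num_cpus": 8, "num_gpus": 0, "memory_gb": 32, "gpu_is_mig": false},
  {"name": "GPU", "num_cpus": 8, "num_gpus": 2, "memory_gb": 64, "gpu_is_mig": true}
]""")

# The profiles must be named CPU and GPU, and the GPU profile needs num_gpus > 0:
names = {profile["name"] for profile in spec}
gpu_profile = next(profile for profile in spec if profile["name"] == "GPU")
print(names == {"CPU", "GPU"} and gpu_profile["num_gpus"] > 0)  # True
```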

# How long the worker should wait on redis db initialization, in seconds.
#worker_start_timeout = 30

#worker_no_main_server_wait_time = 1800

#worker_no_main_server_wait_time_with_hard_assert = 30

# For how many seconds the worker must be unresponsive before being marked unhealthy.
#worker_healthy_response_period = 300

# Whether to enable a priority queue for worker nodes to schedule experiments.
#
#enable_experiments_priority_queue = false

# Exposes the Driverless AI base version when enabled.
#expose_server_version = true

# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#enable_https = false

# https settings (see enable_https above for generating a self-signed certificate)
#ssl_key_file = "/etc/dai/private_key.pem"

# https settings (see enable_https above for generating a self-signed certificate)
#ssl_crt_file = "/etc/dai/cert.pem"
5323
5324# https settings
5325# Passphrase for the ssl_key_file,
5326# either use this setting or ssl_key_passphrase_file,
5327# or neither if no passphrase is used.
5328#ssl_key_passphrase = ""
5329
5330# https settings
5331# Passphrase file for the ssl_key_file,
5332# either use this setting or ssl_key_passphrase,
5333# or neither if no passphrase is used.
5334#ssl_key_passphrase_file = ""
5335
5336# SSL TLS
5337#ssl_no_sslv2 = true
5338
5339# SSL TLS
5340#ssl_no_sslv3 = true
5341
5342# SSL TLS
5343#ssl_no_tlsv1 = true
5344
5345# SSL TLS
5346#ssl_no_tlsv1_1 = true
5347
5348# SSL TLS
5349#ssl_no_tlsv1_2 = false
5350
5351# SSL TLS
5352#ssl_no_tlsv1_3 = false
5353
5354# https settings
5355# Sets the client verification mode.
5356# CERT_NONE: Client does not need to provide the certificate and if it does any
5357# verification errors are ignored.
5358# CERT_OPTIONAL: Client does not need to provide the certificate and if it does
5359# certificate is verified against set up CA chains.
5360# CERT_REQUIRED: Client needs to provide a certificate and certificate is
5361# verified.
5362# You'll need to set 'ssl_client_key_file' and 'ssl_client_crt_file'
5363# When this mode is selected for Driverless to be able to verify
5364# it's own callback requests.
5365#
5366#ssl_client_verify_mode = "CERT_NONE"
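The three mode names mirror Python's standard `ssl` constants; a hedged sketch of the mapping (how DAI applies these internally is an assumption here):

```python
import ssl

# Illustrative mapping of the config values onto Python's ssl verify modes.
MODES = {
    "CERT_NONE": ssl.CERT_NONE,          # no client cert, errors ignored
    "CERT_OPTIONAL": ssl.CERT_OPTIONAL,  # cert optional, verified if given
    "CERT_REQUIRED": ssl.CERT_REQUIRED,  # cert required and verified
}

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = MODES["CERT_REQUIRED"]
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```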

# https settings
# Path to the Certification Authority certificate file. This certificate is
# used to verify the client certificate when client authentication is turned on.
# If this is not set, clients are verified using default system certificates.
#
#ssl_ca_file = ""

# https settings
# Path to the private key that Driverless AI will use to authenticate itself when
# CERT_REQUIRED mode is set.
#
#ssl_client_key_file = ""

# https settings
# Path to the client certificate that Driverless AI will use to authenticate itself
# when CERT_REQUIRED mode is set.
#
#ssl_client_crt_file = ""

# If enabled, the webserver will serve xsrf cookies and verify their validity upon every POST request.
#enable_xsrf_protection = true

# Sets the `SameSite` attribute for the `_xsrf` cookie; options are "Lax", "Strict", or "".
#xsrf_cookie_samesite = ""

#enable_secure_cookies = false

# When enabled, each authenticated access is verified by comparing the IP address that initiated the session with the IP address of the current request.
#verify_session_ip = false

# Enables automatic detection of forbidden/dangerous constructs in custom recipes.
#custom_recipe_security_analysis_enabled = false

# List of modules that can be imported in custom recipes. The default empty list means all modules are allowed except for banlisted ones.
#custom_recipe_import_allowlist = "[]"

# List of modules that cannot be imported in custom recipes.
#custom_recipe_import_banlist = "['shlex', 'plumbum', 'pexpect', 'envoy', 'commands', 'fabric', 'subprocess', 'os.system', 'system']"

# Regex pattern list of calls which are allowed in custom recipes.
# An empty list means everything (except for the banlist) is allowed.
# E.g. if only `os.path.*` is in the allowlist, a custom recipe can only call methods
# from the `os.path` module and the built-in ones.
#
#custom_recipe_method_call_allowlist = "[]"

# Regex pattern list of calls which are rejected in custom recipes.
# E.g. if `os.system` is in the banlist, a custom recipe cannot call `os.system()`.
# If `socket.*` is in the banlist, a recipe cannot call any method of the socket module, such as
# `socket.socket()` or any `socket.a.b.c()`.
#
#custom_recipe_method_call_banlist = "['os\\.system', 'socket\\..*', 'subprocess.*', 'os.spawn.*']"
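A small sketch of how the default banlist patterns match fully-qualified call names; the `fullmatch` anchoring used here is an illustrative choice, not necessarily how DAI evaluates the patterns:

```python
import re

# Default patterns from custom_recipe_method_call_banlist above.
banlist = [r"os\.system", r"socket\..*", r"subprocess.*", r"os.spawn.*"]

def call_is_banned(qualified_name):
    """True if a fully-qualified call name matches any banned pattern."""
    return any(re.fullmatch(pattern, qualified_name) for pattern in banlist)

print(call_is_banned("os.system"))     # True
print(call_is_banned("socket.a.b.c"))  # True
print(call_is_banned("os.path.join"))  # False
```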

# List of regex patterns representing dangerous sequences/constructs
# which could be harmful to the whole system and should be banned from code.
#
#custom_recipe_dangerous_patterns = "['rm -rf', 'rm -fr']"

# If enabled, a user can log in from 2 browsers (scripts) at the same time.
#allow_concurrent_sessions = true

# Extra HTTP headers.
#extra_http_headers = "{}"

# If enabled, the webserver will add a Content-Security-Policy header to all responses. This header helps to prevent cross-site scripting (XSS) attacks by specifying which sources of content are allowed to be loaded by the browser.
#add_csp_header = true

# By default Driverless AI issues cookies with the HTTPOnly and Secure attributes (morsels) enabled. In addition, the SameSite attribute is set to 'Lax', as it is the default in modern browsers. This config overrides the default key/value pairs (morsels).
#http_cookie_attributes = "{'samesite': 'Lax'}"

# Enable column imputation.
#enable_imputation = false

# Adds an advanced settings panel to experiment setup, which allows creating
# custom features and more.
#
#enable_advanced_features_experiment = false

# Specifies whether Driverless AI uses H2O Storage or H2O Entity Server for
# a shared entities backend.
# h2o-storage: Uses legacy H2O Storage.
# entity-server: Uses the new HAIC Entity Server.
#
#h2o_storage_mode = "h2o-storage"

# Address of the H2O Storage endpoint. Keep empty to use local storage only.
#h2o_storage_address = ""

# Whether to use remote projects stored in H2O Storage instead of local projects.
#h2o_storage_projects_enabled = false

# Whether the channel to the storage should be encrypted.
#h2o_storage_tls_enabled = true

# Path to the certification authority certificate that the H2O Storage server identity will be checked against.
#h2o_storage_tls_ca_path = ""

# Path to the client certificate to authenticate with the H2O Storage server.
#h2o_storage_tls_cert_path = ""

# Path to the client key to authenticate with the H2O Storage server.
#h2o_storage_tls_key_path = ""

# UUID of a Storage project to use instead of the remote HOME folder.
#h2o_storage_internal_default_project_id = ""

# Deadline for RPC calls with H2O Storage, in seconds. Sets the maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it.
#h2o_storage_rpc_deadline_seconds = 60

# Deadline for RPC bytestream calls with H2O Storage, in seconds. Sets the maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it. This value is used for uploading and downloading artifacts.
#h2o_storage_rpc_bytestream_deadline_seconds = 7200

# The Storage client manages its own access tokens derived from the refresh token received on user login. When this option is set, an access token with the scopes defined here is requested. (space-separated list)
#h2o_storage_oauth2_scopes = ""

# Maximum message size of an RPC request in bytes. Requests larger than this limit will fail.
#h2o_storage_message_size_limit = 1048576000

# Maximum message size of an RPC request in bytes. Requests larger than this limit will fail.
#h2o_authz_message_size_limit = 1048576000

# If `h2o_mlops_ui_url` is provided alongside `enable_storage`, DAI is able to redirect the user to the MLOps app upon clicking the Deploy button.
#h2o_mlops_ui_url = ""

# If `feature_store_ui_url` is provided alongside `enable_file_systems`, DAI is able to redirect the user to the Feature Store app upon clicking the Feature Store button.
#feature_store_ui_url = ""

# H2O Secure Store server endpoint URL.
#h2o_secure_store_endpoint_url = ""

# Enable TLS communication between DAI and the H2O Secure Store server.
#h2o_secure_store_enable_tls = true

# Path to the client certificate to authenticate with the H2O Secure Store server. This is only effective when h2o_secure_store_enable_tls=True.
#h2o_secure_store_tls_cert_path = ""

# Whether to enable or disable linking datasets into projects.
#h2o_storage_dataset_linking_enabled = true

# Whether to enable or disable linking experiments into projects.
#h2o_storage_experiment_linking_enabled = true

# Keystore file that contains secure config.toml items like passwords, secret keys, etc. The keystore is managed by the h2oai.keystore tool.
#keystore_file = ""

# Verbosity of logging
# 0: quiet (CRITICAL, ERROR, WARNING)
# 1: default (CRITICAL, ERROR, WARNING, INFO, DATA)
# 2: verbose (CRITICAL, ERROR, WARNING, INFO, DATA, DEBUG)
# Affects server and all experiments
#log_level = 1

# Whether to collect relevant server logs (h2oai_server.log, dai.log from systemctl or docker, and the h2o log).
# Useful when sending logs to H2O.ai.
#collect_server_logs_in_experiment_logs = false

# When set, will migrate all user entities to the defined user upon startup; this is mostly useful during
# instance migration via H2O's AIEM/Steam.
#migrate_all_entities_to_user = ""

# Whether to have all user content isolated into a directory for each user.
# If set to false, all users' content is in a single common directory,
# recipes are shared, and the brain folder for restart/refit is shared.
# If set to true, each user has a separate folder for all user tasks,
# recipes are isolated to each user, and the brain folder for restart/refit is
# only for the specific user.
# Migration from false to true or back to false is allowed for
# all experiment content accessible by the GUI or python client,
# all recipes, and starting an experiment with the same settings, restart, or refit.
# However, after switching to per-user mode, the common brain folder is no longer used.
#
#per_user_directories = true

# List of file names to ignore during dataset import. Any files with these names will be skipped when
# DAI creates a dataset. Example: a directory contains 3 files: [data_1.csv, data_2.csv, _SUCCESS].
# DAI will only attempt to create a dataset using files data_1.csv and data_2.csv; the _SUCCESS file will be ignored.
# Default is to ignore _SUCCESS files, which are commonly created when exporting data from Hadoop.
#
#data_import_ignore_file_names = "['_SUCCESS']"
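The example from the comment above boils down to a simple name filter; an illustrative sketch (not DAI's import code):

```python
# Default from data_import_ignore_file_names:
ignore = ["_SUCCESS"]

# The directory from the example above:
files = ["data_1.csv", "data_2.csv", "_SUCCESS"]
imported = [f for f in files if f not in ignore]
print(imported)  # ['data_1.csv', 'data_2.csv']
```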

# For data import from a directory (multiple files), allow column types to differ and perform an upcast during import.
#data_import_upcast_multi_file = false

# If set to true, will explode columns with list data type when importing parquet files.
#data_import_explode_list_type_columns_in_parquet = false

# List of file types that Driverless AI should attempt to import data as, if no file extension exists in the file name.
# If no file extension is provided, Driverless AI will attempt to import the data starting with the first type
# in the defined list. Default ["parquet", "orc"].
# Example: 'test.csv' (file extension exists) vs 'test' (file extension DOES NOT exist).
# NOTE: see the supported_file_types configuration option for more details on supported file types.
#
#files_without_extensions_expected_types = "['parquet', 'orc']"
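The fallback behavior can be sketched with a small helper (illustrative only; DAI's actual detection logic may differ):

```python
import os

def candidate_types(filename, fallback=("parquet", "orc")):
    """If the name has an extension, trust it; otherwise try the fallback list in order."""
    ext = os.path.splitext(filename)[1]
    return [ext.lstrip(".")] if ext else list(fallback)

print(candidate_types("test.csv"))  # ['csv']
print(candidate_types("test"))      # ['parquet', 'orc']
```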

# do_not_log_list: add configurations that you do not wish to be recorded in logs here. They will still be stored in experiment information so child experiments can behave consistently.
#do_not_log_list = "['cols_to_drop', 'cols_to_drop_sanitized', 'cols_to_group_by', 'cols_to_group_by_sanitized', 'cols_to_force_in', 'cols_to_force_in_sanitized', 'do_not_log_list', 'do_not_store_list', 'pytorch_nlp_pretrained_s3_access_key_id', 'pytorch_nlp_pretrained_s3_secret_access_key', 'auth_openid_end_session_endpoint_url']"

# do_not_store_list: add configurations that you do not wish to be stored at all here. They will not be remembered across experiments, so this is not applicable to data science related items that could be controlled by a user. These items are automatically not logged.
#do_not_store_list = "['h2o_authz_action_prefix', 'h2o_authz_user_prefix', 'h2o_authz_result_cache_ttl_sec', 'pip_install_options']"

# Memory limit in bytes for datatable to use during parsing of CSV files. -1 for unlimited, 0 for automatic, >0 for a constraint.
#datatable_parse_max_memory_bytes = -1

# Delimiter/separator to use when parsing tabular text files like CSV. Automatic if empty. Must be provided at system start.
#datatable_separator = ""

# Whether to enable ping of system status during DAI data ingestion.
#ping_load_data_file = false

# Period between checking DAI status. Should be small enough to avoid slowing the parent that stops the ping process.
#ping_sleep_period = 0.5

# Precision of how data is stored.
# 'datatable' keeps original datatable storage types (i.e. bool, int, float32, float64) (experimental)
# 'float32' best for speed, 'float64' best for accuracy or very large input values, 'datatable' best for memory
# 'float32' allows numbers up to about +-3E38 with relative error of about 1E-7
# 'float64' allows numbers up to about +-1E308 with relative error of about 1E-16
# Some calculations, like the GLM standardization, can only handle up to the square root of these maximums for data values,
# so GLM with 32-bit precision can only handle values up to about 1E19 before standardization generates inf values.
# If you see "Best individual has invalid score" you may require higher precision.
#data_precision = "float32"
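The float32 vs float64 trade-off above can be demonstrated with the standard library alone, by round-tripping values through IEEE-754 single precision:

```python
import struct

def to_float32(x):
    """Round-trip a Python float through IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# ~1E-7 relative error: adding 1e-8 to 1.0 is lost in float32 but kept in float64.
print(to_float32(1.0 + 1e-8) == 1.0)  # True
print((1.0 + 1e-8) == 1.0)            # False (Python floats are float64)

# Squaring during standardization overflows float32 for values above ~sqrt(3E38):
print(to_float32(2e19) ** 2 > 3.4e38)  # True: would be inf if stored as float32
```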

# Precision of most data transformers (same options and notes as data_precision).
# Useful for higher precision in transformers with numerous operations that can accumulate error.
# Also useful if you want faster performance for transformers but otherwise want data stored in high precision.
#transformer_precision = "float32"

# Whether to raise ulimit soft limits up to the hard limits (for the DAI server app, which is not a generic user app).
# Prevents resource limit problems in some cases.
# Restricted to no more than limit_nofile and limit_nproc for those resources.
#ulimit_up_to_hard_limit = true

#disable_core_files = false

# Limit on the number of open files.
# Should be consistent with start-dai.sh.
#limit_nofile = 131071

# Limit on the number of threads.
# Should be consistent with start-dai.sh.
#limit_nproc = 16384

# Whether to compute the training, validation, and test correlation matrix (table and heatmap pdf) and save it to disk.
# alpha: WARNING: currently single threaded and quadratically slow for many columns
#compute_correlation = false

# Whether to dump a correlation heatmap to disk.
#produce_correlation_heatmap = false

# Value above which to report high correlation between original features.
#high_correlation_value_to_report = 0.95

# If true, experiments aborted by a server restart will automatically restart and continue upon user login.
#restart_experiments_after_shutdown = false

# When an environment variable is set for a toml value, consider that an override of that toml value. Experiments remember toml values for scoring, and this treats any environment setting as equivalent to putting OVERRIDE_ in front of the environment key.
#any_env_overrides = false

# Include a byte order mark (BOM) when writing CSV files. Required to support UTF-8 encoding in Excel.
#datatable_bom_csv = false
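The BOM behavior can be illustrated with Python's standard library; the `utf-8-sig` codec prepends the byte order mark that Excel expects (this is a generic demonstration, not datatable's writer):

```python
import csv
import io

buf = io.BytesIO()
text = io.TextIOWrapper(buf, encoding="utf-8-sig", newline="")
csv.writer(text).writerow(["München", "Zürich"])
text.flush()

# The first three bytes are the UTF-8 BOM (EF BB BF):
print(buf.getvalue()[:3] == b"\xef\xbb\xbf")  # True
```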
5629
5630# Whether to enable debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files.
5631#debug_print = false
5632
5633# Level (0-4) for debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files. 1-2 is normal, 4 would lead to highly excessive debug and is not recommended in production.
5634#debug_print_level = 0
5635
5636#return_quickly_autodl_testing = false
5637
5638#return_quickly_autodl_testing2 = false
5639
5640#return_before_final_model = false
5641
5642# Whether to check if config.toml keys are valid and fail if not valid
5643#check_invalid_config_toml_keys = true
5644
5645#predict_safe_trials = 2
5646
#fit_safe_trials = 2

#allow_no_pid_host = true

#enable_autodl_system_insights = true

#enable_deleting_autodl_system_insights_finished_experiments = true

#main_logger_with_experiment_ids = true

# Reduce memory usage during final ensemble feature engineering (1 uses most memory, larger values use less memory)
#final_munging_memory_reduction_factor = 2

# How much more memory a typical transformer needs than the input data.
# Can be increased if, e.g., final model munging uses too much memory due to parallel operations.
#munging_memory_overhead_factor = 5

#per_transformer_segfault_protection_ga = false

#per_transformer_segfault_protection_final = false

# How often to check resources (disk, memory, cpu) to see if submission needs to be stalled.
#submit_resource_wait_period = 10

# Stall submission of subprocesses if system CPU usage is higher than this threshold in percent (set to 100 to disable). A reasonable number is 90.0 if activated.
#stall_subprocess_submission_cpu_threshold_pct = 100

# Restrict/stall submission of subprocesses if the DAI fork count (across all experiments) per unit ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated.
#stall_subprocess_submission_dai_fork_threshold_pct = -1.0

# Restrict/stall submission of subprocesses if the experiment fork count (across all experiments) per unit ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated. For small data this adds an overhead of about 0.1s per task submitted due to checks, so it can slow down scoring for tests.
#stall_subprocess_submission_experiment_fork_threshold_pct = -1.0

# Whether to restrict pool workers even if not used, by reducing the number of pool workers available. Good if there is a really huge number of experiments; otherwise, it is best to have all pool workers ready and only stall submission of tasks, so the system can adapt dynamically to a multi-experiment environment.
#restrict_initpool_by_memory = true

# Whether to terminate experiments if the available system memory falls below memory_limit_gb_terminate
#terminate_experiment_if_memory_low = false

# Memory in GB below which the experiment is terminated, if terminate_experiment_if_memory_low=true.
#memory_limit_gb_terminate = 5

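For example, the two settings above can be combined to terminate running experiments when available system memory drops below a chosen floor (the 10 GB value here is illustrative):

```toml
# Terminate experiments when less than 10 GB of system memory remains.
terminate_experiment_if_memory_low = true
memory_limit_gb_terminate = 10
```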
# A fraction with valid values between 0.1 and 1.0 that determines the disk usage quota for a user. This quota is checked during dataset imports and experiment runs.
#users_disk_usage_quota = 1.0

# Path to use for the scoring directory, relative to the run path
#scoring_data_directory = "tmp"

#num_models_for_resume_graph = 1000

# Internal helper to remember whether exclusive mode was changed
#last_exclusive_mode = ""

#mojo_acceptance_test_errors_fatal = true

#mojo_acceptance_test_errors_shap_fatal = true

#mojo_acceptance_test_orig_shap = true

# Which MOJO runtimes should be tested as part of the mini acceptance tests
#mojo_acceptance_test_mojo_types = "['C++', 'Java']"

# Create MOJO for feature engineering pipeline only (no predictions)
#make_mojo_scoring_pipeline_for_features_only = false

# Replaces target encoding features by their input columns. Instead of CVTE_Age:Income:Zip, this will create Age:Income:Zip. Only when make_mojo_scoring_pipeline_for_features_only is enabled.
#mojo_replace_target_encoding_with_grouped_input_cols = false

# Use pipeline to generate transformed features, when making predictions, bypassing the model that usually converts transformed features into predictions.
#predictions_as_transform_only = false

# If set to true, ensures that only the current instance can access its database
#enable_single_instance_db_access = true

# DCGM daemon address. DCGM has to be in standalone mode on the remote/local host.
#dcgm_daemon_address = "127.0.0.1"

# Deprecated - maps to enable_pytorch_nlp_transformer and enable_pytorch_nlp_model in 1.10.2+
#enable_pytorch_nlp = "auto"

# How long to wait per GPU for tensorflow/torch to run during system checks.
#check_timeout_per_gpu = 20

# Whether to fail start-up if GPU checks cannot be run successfully
#gpu_exit_if_fails = true

#how_started = ""

#wizard_state = ""

# Whether to enable pushing telemetry events to a configured telemetry receiver in 'telemetry_plugins_dir'.
#enable_telemetry = false

# Directory to scan for telemetry recipes.
#telemetry_plugins_dir = "./telemetry_plugins"

# Whether to enable TLS to communicate to H2O.ai Telemetry Service.
#h2o_telemetry_tls_enabled = false

# Timeout value when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_rpc_deadline_seconds = 60

# H2O.ai Telemetry Service address in H2O.ai Cloud.
#h2o_telemetry_address = ""

# H2O.ai Telemetry Service access token file location.
#h2o_telemetry_service_token_location = ""

# TLS CA path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_ca_path = ""

# TLS certificate path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_cert_path = ""

# TLS key path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_key_path = ""

# Whether to enable pushing audit events to a configured Audit Trail receiver in 'audit_trail_plugins_dir'.
#enable_audit_trail = false

# Whether to return full stack trace error logs to the audit trail API
#enable_debug_error_audit_trail = false

# Timeout value when communicating to H2O.ai Audit Trail Service.
#h2o_audit_trail_rpc_deadline_seconds = 60

# H2O.ai Audit Trail Service address in H2O.ai Cloud.
#h2o_audit_trail_address = ""

# Path to the Kubernetes service account token for Audit Trail and AuthZ.
#h2o_k8s_service_token_location = "/var/run/secrets/kubernetes.io/serviceaccount/token"

# Enable H2O.ai AuthZ.
#enable_h2o_authz = false

# The endpoint (host:port) of the H2O.ai AuthZ Policy Server in H2O.ai Cloud.
#h2o_authz_policy_server_endpoint = ""

# H2O.ai HAIC engine name for the Driverless AI instance, which contains the
# workspace ID. Example:
# /workspaces/<workspace name>/daiEngines/<engine name>
#
#haic_engine_name = ""

# Whether to disable downloading logs via both API and UI. Note: this setting does not apply to admin users.
#disable_download_logs = false

# Enable time series lag-based recipe with lag transformers. If disabled, the same train-test gap and periods are used, but no lag transformers are enabled. If disabled, the set of feature transformations is quite limited without lag transformers, so consider setting enable_time_unaware_transformers to true in order to treat the problem as more like an IID type problem.
#time_series_recipe = true

# Whether causal splits are used when time_series_recipe is false, or whether to use the same train-gap-test splits when lag transformers are disabled (default behavior). For train-test gap, period, etc. to be used when the lag-based recipe is disabled, this must be false.
#time_series_causal_split_recipe = false

# Whether to use lag transformers when using causal-split for validation
# (as occurs when not using time-based lag recipe).
# If no time groups columns, lag transformers will still use time-column as sole time group column.
#
#use_lags_if_causal_recipe = false

# 'diverse': explore a diverse set of models built using various expert settings. Note that it's possible to rerun another such diverse leaderboard on top of the best-performing model(s), which will effectively help you compose these expert settings.
# 'sliding_window': If the forecast horizon is N periods, create a separate model for each of the (gap, horizon) pairs of (0,n), (n,n), (2*n,n), ..., (2*N-1, n) in units of time periods.
# The number of periods to predict per model n is controlled by the expert setting 'time_series_leaderboard_periods_per_model', which defaults to 1.
#time_series_leaderboard_mode = "diverse"

# Fine-control to limit the number of models built in the 'sliding_window' mode. Larger values lead to fewer models.
#time_series_leaderboard_periods_per_model = 1

# Whether to create larger validation splits that are not bound to the length of the forecast horizon.
#time_series_merge_splits = true

# Maximum ratio of training data samples used for validation across splits when larger validation splits are created.
#merge_splits_max_valid_ratio = -1.0

# Whether to keep a fixed-size train timespan across time-based splits.
# That leads to roughly the same amount of train samples in every split.
#
#fixed_size_train_timespan = false

# Provide date or datetime timestamps (in same format as the time column) for custom training and validation splits like this: "tr_start1, tr_end1, va_start1, va_end1, ..., tr_startN, tr_endN, va_startN, va_endN"
#time_series_validation_fold_split_datetime_boundaries = ""
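As a sketch of the boundary format, assuming a daily time column formatted like 2020-01-01, two custom train/validation splits could be specified as follows (all dates are illustrative):

```toml
# Split 1: train Jan-Jun, validate Jul. Split 2: train Jan-Sep, validate Oct.
time_series_validation_fold_split_datetime_boundaries = "2020-01-01, 2020-06-30, 2020-07-01, 2020-07-31, 2020-01-01, 2020-09-30, 2020-10-01, 2020-10-31"
```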

# Set fixed number of time-based splits for internal model validation (actual number of splits allowed can be less and is determined at experiment run-time).
#time_series_validation_splits = -1

# Maximum overlap between two time-based splits. Higher values increase the amount of possible splits.
#time_series_splits_max_overlap = 0.5

# Earliest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 or 201004022312 can be converted to a valid date/datetime, but 1000 or 100004 or 10000402 or 10004022313 can not, and neither can 201000 or 20100500 etc.
#min_ymd_timestamp = 19000101

# Latest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 can be converted to a valid date/datetime, but 3000 or 300004 or 30000402 or 30004022313 can not, and neither can 201000 or 20100500 etc.
#max_ymd_timestamp = 21000101

# Maximum number of data samples (randomly selected rows) for date/datetime format detection
#max_rows_datetime_format_detection = 100000

# Manually disables certain datetime formats during data ingest and experiments.
# For example, ['%y'] will avoid parsing columns that contain '00', '01', '02' string values as a date column.
#
#disallowed_datetime_formats = "['%y']"

# Whether to use datetime cache
#use_datetime_cache = true

# Minimum amount of rows required to utilize datetime cache
#datetime_cache_min_rows = 10000

# Automatically generate is-holiday features from date columns
#holiday_features = true

#holiday_country = ""

# List of countries for which to look up the holiday calendar and to generate is-Holiday features
#holiday_countries = "['UnitedStates', 'UnitedKingdom', 'EuropeanCentralBank', 'Germany', 'Mexico', 'Japan']"

# Max. sample size for automatic determination of time series train/valid split properties, only if time column is selected
#max_time_series_properties_sample_size = 250000

# Maximum number of lag sizes to use for lag-based time-series experiments. These are sampled from if sample_lag_sizes is enabled, else all are taken (-1 == automatic)
#max_lag_sizes = 30

# Minimum required autocorrelation threshold for a lag to be considered for feature engineering
#min_lag_autocorrelation = 0.1

# How many samples of lag sizes to use for a single time group (single time series signal)
#max_signal_lag_sizes = 100

# If enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size, especially when many columns are unavailable for prediction.
#sample_lag_sizes = false

# If sample_lag_sizes is enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size. Defaults to -1 (auto), in which case it's the same as the feature interaction depth controlled by max_feature_interaction_depth.
#max_sampled_lag_sizes = -1

# Override lags to be used
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 produce from 1 to 21 in step of 3
# e.g. 5-21 produce from 5 to 21
# e.g. 5-21:3 produce from 5 to 21 in step of 3
#
#override_lag_sizes = "[]"

# Override lags to be used for features that are not known ahead of time
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 produce from 1 to 21 in step of 3
# e.g. 5-21 produce from 5 to 21
# e.g. 5-21:3 produce from 5 to 21 in step of 3
#
#override_ufapt_lag_sizes = "[]"

# Override lags to be used for features that are known ahead of time
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 produce from 1 to 21 in step of 3
# e.g. 5-21 produce from 5 to 21
# e.g. 5-21:3 produce from 5 to 21 in step of 3
#
#override_non_ufapt_lag_sizes = "[]"

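To illustrate the lag-override syntax described above, the following are equivalent ways of requesting lags for any of the three override settings (the values are illustrative; uncomment only the variant needed):

```toml
# Exact list of lag sizes to use.
override_lag_sizes = "[7, 14, 21]"
# Equivalent range forms:
#override_lag_sizes = "21"      # lags 1 to 21
#override_lag_sizes = "21:3"    # lags 1 to 21 in steps of 3
#override_lag_sizes = "5-21"    # lags 5 to 21
#override_lag_sizes = "5-21:3"  # lags 5 to 21 in steps of 3
```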
# Smallest considered lag size
#min_lag_size = -1

# Whether to enable feature engineering based on selected time column, e.g. Date~weekday.
#allow_time_column_as_feature = true

# Whether to enable integer time column to be used as a numeric feature.
# If using time series recipe, using time column (numeric time stamps) as input features can lead to a model that
# memorizes the actual time stamps instead of features that generalize to the future.
#
#allow_time_column_as_numeric_feature = false

# Allowed date or date-time transformations.
# Date transformers include: year, quarter, month, week, weekday, day, dayofyear, num.
# Date transformers also include: hour, minute, second.
# Features in DAI will show up as get_ + transformation name.
# E.g. num is a direct numeric value representing the floating point value of time,
# which can lead to over-fitting if used on IID problems. So this is turned off by default.
#datetime_funcs = "['year', 'quarter', 'month', 'week', 'weekday', 'day', 'dayofyear', 'hour', 'minute', 'second']"

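For purely daily data with no intraday structure, one might restrict the transformations to date-level functions only (an illustrative subset of the default list above):

```toml
# Drop hour/minute/second functions for purely daily data.
datetime_funcs = "['year', 'quarter', 'month', 'week', 'weekday', 'day', 'dayofyear']"
```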
# Whether to filter out date and date-time transformations that lead to unseen values in the future.
#
#filter_datetime_funcs = true

# Whether to consider time groups columns (tgc) as standalone features.
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that tgc_allow_target_encoding independently controls if time column groups are target encoded.
# Use allowed_coltypes_for_tgc_as_features for control per feature type.
#
#allow_tgc_as_features = true

# Which time groups columns (tgc) feature types to consider as standalone features,
# if the corresponding flag "Consider time groups columns as standalone features" is set to true.
# E.g. all column types would be ["numeric", "categorical", "ohe_categorical", "datetime", "date", "text"]
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that if lag-based time series recipe is disabled, then all tgc are allowed features.
#
#allowed_coltypes_for_tgc_as_features = "['numeric', 'categorical', 'ohe_categorical', 'datetime', 'date', 'text']"

# Whether various transformers (clustering, truncated SVD) are enabled,
# that otherwise would be disabled for time series due to
# potential to overfit by leaking across time within the fit of each fold.
#
#enable_time_unaware_transformers = "auto"

# Whether to group by all time groups columns for creating lag features, instead of sampling from them
#tgc_only_use_all_groups = true

# Whether to allow target encoding of time groups. This can be useful if there are many groups.
# Note that allow_tgc_as_features independently controls if tgc are treated as normal features.
# 'auto': Choose CV by default.
# 'CV': Enable out-of-fold and CV-in-CV (if enabled) encoding
# 'simple': Simple memorized targets per group.
# 'off': Disable.
# Only relevant for time series experiments that have at least one time column group apart from the time column.
#tgc_allow_target_encoding = "auto"

# If allow_tgc_as_features is true or tgc_allow_target_encoding is true, whether to try both possibilities to see which does better during tuning. Safer than forcing one way or the other.
#tgc_allow_features_and_target_encoding_auto_tune = true

# Enable creation of holdout predictions on training data
# using moving windows (useful for MLI, but can be slow)
#time_series_holdout_preds = true

# Max number of splits used for creating final time-series model's holdout/backtesting predictions. With the default value '-1' the same amount of splits as during model validation will be used. Use 'time_series_validation_splits' to control amount of time-based splits used for model validation.
#time_series_max_holdout_splits = -1

#single_model_vs_cv_score_reldiff = 0.05

#single_model_vs_cv_score_reldiff2 = 0.0

# Whether to blend ensembles in link space, so that the inverse link function can be applied to get predictions after blending.
# This allows Shapley values to sum up to the final predictions after applying the inverse link function:
# preds = inverse_link(blend(base learner predictions in link space)) = inverse_link(sum(blend(base learner Shapley values in link space))) = inverse_link(sum(ensemble Shapley values in link space)).
# For binary classification, this is only supported if inverse_link = logistic = 1/(1+exp(-x)).
# For multiclass classification, this is only supported if inverse_link = softmax = exp(x)/sum(exp(x)).
# For regression, this behavior happens naturally if all base learners use the identity link function, otherwise it is not possible.
#blend_in_link_space = true

# Whether to speed up time-series holdout predictions for back-testing on training data (used for MLI and metrics calculation). Can be slightly less accurate.
#mli_ts_fast_approx = false

# Whether to speed up Shapley values for time-series holdout predictions for back-testing on training data (used for MLI). Can be slightly less accurate.
#mli_ts_fast_approx_contribs = true

# Enable creation of Shapley values for holdout predictions on training data
# using moving windows (useful for MLI, but can be slow), at the time of the experiment. If disabled, MLI will
# generate Shapley values on demand.
#mli_ts_holdout_contribs = true

# Values of 5 or more can improve generalization by more aggressive dropping of least important features. Set to 1 to disable.
#time_series_min_interpretability = 5

# Dropout mode for lag features in order to achieve an equal n.a.-ratio between train and validation/test. The independent mode performs a simple feature-wise dropout, whereas the dependent one takes lag-size dependencies per sample/row into account.
#lags_dropout = "dependent"

# Normalized probability of choosing to lag non-targets relative to targets (-1.0 = auto)
#prob_lag_non_targets = -1.0

# Method to create rolling test set predictions, if the forecast horizon is shorter than the time span of the test set. One can choose between test time augmentation (TTA) and a successive refitting of the final pipeline.
#rolling_test_method = "tta"

#rolling_test_method_max_splits = 1000

# Apply TTA in one pass instead of using rolling windows for internal validation split predictions. Note: Setting this to 'False' leads to significantly longer runtimes.
#fast_tta_internal = true

# Apply TTA in one pass instead of using rolling windows for test set predictions. This only applies if the forecast horizon is shorter than the time span of the test set. Note: Setting this to 'False' leads to significantly longer runtimes.
#fast_tta_test = true

# Probability for new Lags/EWMA gene to use default lags (determined by frequency/gap/horizon, independent of data) (-1.0 = auto)
#prob_default_lags = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on interactions (-1.0 = auto)
#prob_lagsinteraction = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on aggregations (-1.0 = auto)
#prob_lagsaggregates = -1.0

# Time series centering or detrending transformation. The free parameter(s) of the trend model are fitted and the trend is removed from the target signal, and the pipeline is fitted on the residuals. Predictions are made by adding back the trend. Note: Can be cascaded with 'Time series lag-based target transformation', but is mutually exclusive with regular target transformations. The robust centering or linear detrending variants use RANSAC to achieve a higher tolerance w.r.t. outliers. The Epidemic target transformer uses the SEIR model: https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SEIR_model
#ts_target_trafo = "none"

# Dictionary to control Epidemic SEIRD model for de-trending of target per time series group.
# Note: The target column must correspond to I(t), the infected cases as a function of time.
# For each training split and time series group, the SEIRD model is fitted to the target signal (by optimizing
# the free parameters shown below for each time series group).
# Then, the SEIRD model's value is subtracted from the training response, and the residuals are passed to
# the feature engineering and modeling pipeline. For predictions, the SEIRD model's value is added to the residual
# predictions from the pipeline, for each time series group.
# Note: Careful selection of the bounds for the free parameters N, beta, gamma, delta, alpha, rho, lockdown,
# beta_decay, beta_decay_rate is extremely important for good results.
# - S(t) : susceptible/healthy/not immune
# - E(t) : exposed/not yet infectious
# - I(t) : infectious/active <= target column
# - R(t) : recovered/immune
# - D(t) : deceased
# ### Free parameters:
# - N : total population, N=S+E+I+R+D
# - beta : rate of exposure (S -> E)
# - gamma : rate of recovering (I -> R)
# - delta : incubation period
# - alpha : fatality rate
# - rho : rate at which people die
# - lockdown : day of lockdown (-1 => no lockdown)
# - beta_decay : beta decay due to lockdown
# - beta_decay_rate : speed of beta decay
# ### Dynamics:
# if lockdown >= 0:
# beta_min = beta * (1 - beta_decay)
# beta = (beta - beta_min) / (1 + np.exp(-beta_decay_rate * (-t + lockdown))) + beta_min
# dSdt = -beta * S * I / N
# dEdt = beta * S * I / N - delta * E
# dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
# dRdt = (1 - alpha) * gamma * I
# dDdt = alpha * rho * I
# Provide lower/upper bounds for each parameter you want to control the bounds for. Valid parameters are:
# N_min, N_max, beta_min, beta_max, gamma_min, gamma_max, delta_min, delta_max, alpha_min, alpha_max,
# rho_min, rho_max, lockdown_min, lockdown_max, beta_decay_min, beta_decay_max,
# beta_decay_rate_min, beta_decay_rate_max. You can change any subset of parameters, e.g.,
# ts_target_trafo_epidemic_params_dict="{'N_min': 1000, 'beta_max': 0.2}"
# To get SEIR model (in cases where death rates are very low, can speed up calculations significantly):
# set alpha_min=alpha_max=rho_min=rho_max=beta_decay_rate_min=beta_decay_rate_max=0, lockdown_min=lockdown_max=-1.
#
#ts_target_trafo_epidemic_params_dict = "{}"

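Putting the guidance above together, a config that bounds the population size and exposure rate while reducing SEIRD to SEIR (as described in the comment block) could look like this (all bounds are illustrative):

```toml
# Bound population and exposure rate; zero out death and decay terms to get SEIR.
ts_target_trafo = "epidemic"
ts_target_trafo_epidemic_params_dict = "{'N_min': 1000, 'N_max': 1000000, 'beta_max': 0.2, 'alpha_min': 0, 'alpha_max': 0, 'rho_min': 0, 'rho_max': 0, 'beta_decay_rate_min': 0, 'beta_decay_rate_max': 0, 'lockdown_min': -1, 'lockdown_max': -1}"
```

Note that the `"epidemic"` value for `ts_target_trafo` is an assumption here; the valid values for that setting are listed in the Expert Settings documentation.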
#ts_target_trafo_epidemic_target = "I"

# Time series lag-based target transformation. One can choose between difference and ratio of the current and a lagged target. The corresponding lag size can be set via 'Target transformation lag size'. Note: Can be cascaded with 'Time series target transformation', but is mutually exclusive with regular target transformations.
#ts_lag_target_trafo = "none"

# Lag size used for time series target transformation. See setting 'Time series lag-based target transformation'. -1 => smallest valid value = prediction periods + gap (automatically adjusted by DAI if too small).
#ts_target_trafo_lag_size = -1

# Maximum number of columns sent from the UI to the backend in order to auto-detect TGC
#tgc_via_ui_max_ncols = 10

# Maximum frequency of duplicated timestamps for TGC detection
#tgc_dup_tolerance = 0.01

# Timeout in seconds for time-series properties detection in UI.
#timeseries_split_suggestion_timeout = 30.0

# Weight TS model scores by split number raised to this power.
# E.g. use 1.0 to weight the split closest to the horizon by a factor
# equal to the number of splits, relative to the oldest split.
# Applies to tuning models and final back-testing models.
# If 0.0 (default) is used, the median function is used, else the mean is used.
#
#timeseries_recency_weight_power = 0.0

# Every *.toml file in this directory is read and processed the same way as the main config file.
#user_config_directory = ""

# IP address for the procsy process.
#procsy_ip = "127.0.0.1"

# Port for the procsy process.
#procsy_port = 12347

# Request timeout (in seconds) for the procsy process.
#procsy_timeout = 3600

# IP address for use by MLI.
#h2o_ip = "127.0.0.1"

# Port of H2O instance for use by MLI. Each H2O node has an internal port (web port+1, so by default port 12349) for internal node-to-node communication
#h2o_port = 12348

# IP address for the Driverless AI HTTP server.
#ip = "127.0.0.1"

# Port for the Driverless AI HTTP server.
#port = 12345

# A list of two integers indicating the port range to search over, and dynamically find an open port to bind to (e.g., [11111,20000]).
#port_range = "[]"

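For example, instead of the fixed default port, the server can be told to search a range for an open port to bind to; binding to all interfaces via `0.0.0.0` is a common convention, shown here as an illustrative assumption:

```toml
# Listen on all interfaces; pick the first open port between 11111 and 20000.
ip = "0.0.0.0"
port_range = "[11111, 20000]"
```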
# Strict version check for DAI
#strict_version_check = true

# File upload limit (default 100GB)
#max_file_upload_size = 104857600000

# Data directory. All application data and files related to datasets and
# experiments are stored in this directory.
#data_directory = "./tmp"

# Sets a custom path for the master.db. Use this to store the database outside the data directory,
# which can improve performance if the data directory is on a slow drive.
#db_path = ""

# Datasets directory. If set, it denotes the location from which all
# datasets are read and into which they are written. Typically this location is configured
# on an external file system to allow more granular control over just the datasets volume.
# If empty, defaults to data_directory.
#datasets_directory = ""

# Path to the directory where the logs of HDFS, Hive, JDBC, and KDB+ data connectors will be saved.
#data_connectors_logs_directory = "./tmp"

# Subdirectory within data_directory to store server logs.
#server_logs_sub_directory = "server_logs"

# Subdirectory within data_directory to store pid files for controlling kill/stop of DAI servers.
#pid_sub_directory = "pids"

# Path to the directory which will be used to save MapR tickets when MapR multi-user mode is enabled.
# This is applicable only when enable_mapr_multi_user_mode is set to true.
#
#mapr_tickets_directory = "./tmp/mapr-tickets"

# MapR tickets duration in minutes. If set to -1, the default value is used
# (not specified in the maprlogin command); otherwise the specified configuration
# value is used, but no less than one day.
#
#mapr_tickets_duration_minutes = -1

# Whether to delete, at server start, all temporary uploaded files left over from failed uploads.
#
#remove_uploads_temp_files_server_start = true

# Whether to run through the entire data directory and remove all temporary files.
# Can lead to slow start-up time if there is a large number (much greater than 100) of experiments.
#
#remove_temp_files_server_start = false

# Whether to delete temporary files after experiment is aborted/cancelled.
#
#remove_temp_files_aborted_experiments = true

# Whether to opt in to usage statistics and bug reporting
#usage_stats_opt_in = true

# Configurations for an HDFS data source
# Path of hdfs core-site.xml
# core_site_xml_path is deprecated, please use hdfs_config_path
#core_site_xml_path = ""

# (Required) HDFS config folder path. Can contain multiple config files.
#hdfs_config_path = ""

# Path of the principal key tab file. Required when hdfs_auth_type='principal'.
# key_tab_path is deprecated, please use hdfs_keytab_path
#
#key_tab_path = ""

# Path of the principal key tab file. Required when hdfs_auth_type='principal'.
#
#hdfs_keytab_path = ""

# Whether to delete preview cache on server exit
#preview_cache_upon_server_exit = true

# When this setting is enabled, any user can see all tasks running in the system, including their owner and an identification key. If this setting is turned off, users can see only their own tasks.
#all_tasks_visible_to_users = true

# When enabled, the server exposes the Health API at /apis/health/v1, which provides a system overview and utilization statistics
#enable_health_api = true

#notification_url = "https://s3.amazonaws.com/ai.h2o.notifications/dai_notifications_prod.json"

# When enabled, the notification scripts inherit
# the parent process's (Driverless AI) environment variables.
#
#listeners_inherit_env_variables = false

# Notification scripts
# - the variable points to the location of a script which is executed at a given event in the experiment lifecycle
# - the script should have the executable flag enabled
# - use of an absolute path is suggested
# The on experiment start notification script location
#listeners_experiment_start = ""

# The on experiment finished notification script location
#listeners_experiment_done = ""

# The on experiment import notification script location
#listeners_experiment_import_done = ""

# Notification script triggered when building of MOJO pipeline for experiment is
# finished. The value should be an absolute path to executable script.
#
#listeners_mojo_done = ""

# Notification script triggered when rendering of AutoDoc for experiment is
# finished. The value should be an absolute path to executable script.
#
#listeners_autodoc_done = ""

# Notification script triggered when building of python scoring pipeline
# for experiment is finished.
# The value should be an absolute path to executable script.
#
#listeners_scoring_pipeline_done = ""

# Notification script triggered when experiment and all its artifacts selected
# at the beginning of experiment are finished building.
# The value should be an absolute path to executable script.
#
#listeners_experiment_artifacts_done = ""

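Putting the notification hooks together, a minimal setup pointing two lifecycle events at executable scripts via absolute paths might look like this (the paths are illustrative):

```toml
# Run these executable scripts when an experiment starts/finishes.
listeners_experiment_start = "/opt/dai/hooks/on_experiment_start.sh"
listeners_experiment_done = "/opt/dai/hooks/on_experiment_done.sh"
```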
# Whether to run quick performance benchmark at start of application
#enable_quick_benchmark = true

# Whether to run extended performance benchmark at start of application
#enable_extended_benchmark = false

# Scaling factor for number of rows for extended performance benchmark. For rigorous performance benchmarking,
# values of 1 or larger are recommended.
#extended_benchmark_scale_num_rows = 0.1

# Number of columns for extended performance benchmark.
#extended_benchmark_num_cols = 20

# Seconds to allow for testing memory bandwidth by generating numpy frames
#benchmark_memory_timeout = 2

# Maximum portion of total VM memory to use for numpy memory benchmark
#benchmark_memory_vm_fraction = 0.25

# Maximum number of columns to use for numpy memory benchmark
#benchmark_memory_max_cols = 1500

# Whether to run quick startup checks at start of application
#enable_startup_checks = true

# Application ID override, which should uniquely identify the instance
#application_id = ""

# After how many seconds to abort MLI recipe execution plan or recipe compatibility checks.
# Blocks the main server from all activities, so a long timeout is not desired, especially in case of hanging processes,
# while a short timeout can too often lead to abortions on a busy system.
#
#main_server_fork_timeout = 10.0

# After how many days the audit log records are removed.
# Set to 0 to disable removal of old records.
#
#audit_log_retention_period = 5

# Time to wait after performing a cleanup of temporary files for in-browser dataset upload.
#
#dataset_tmp_upload_file_retention_time_min = 5
