
Experiment settings: Audio regression

The settings for an audio regression experiment are listed and described below.

General settings

Dataset

This setting defines the dataset for the experiment.

Problem category

This setting defines a particular general problem type category, for example, audio.

Note
  • The selected problem category (for example, audio) determines the options in the Problem type setting.
  • The From experiment option enables you to reuse the settings of another (built) experiment.
    • The From experiment option is unavailable when you select AutoDL as the experience level.

Experiment

This setting defines the experiment H2O Hydrogen Torch references to initialize the experiment settings. H2O Hydrogen Torch initializes the experiment settings with the values from the selected (built) experiment.

Setting dependency

This setting is available only if From experiment is selected in the Problem category setting.

Problem type

This setting defines the problem type of the experiment, which also defines the settings H2O Hydrogen Torch displays for the experiment.

Note
  • The selected problem category (in the Problem category setting) determines the available problem types.
  • The selected problem type and experience level determine the settings H2O Hydrogen Torch displays for the experiment.

Import config from YAML

This setting specifies the YAML file that defines the experiment settings.

Note
  • H2O Hydrogen Torch supports YAML file import and export functionality. You can download the config settings of finished experiments, make changes, and re-upload them when starting a new experiment in any instance of H2O Hydrogen Torch, as sketched below.
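
For example, you could adjust a downloaded config programmatically before re-uploading it. A minimal sketch, assuming PyYAML and purely hypothetical setting keys:

```python
import yaml  # PyYAML; assumes the config is a plain YAML mapping

with open("experiment_config.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["epochs"] = 20  # hypothetical key name, for illustration only

with open("experiment_config_edited.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```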

Use previous experiment weights

This setting determines whether to initialize the model weights with the weights from the experiment specified in the Experiment setting.

Note

Model weights can only be reused from an experiment (model) of the same problem type and backbone.

Tip

This setting can be useful if you want to continue training from a built experiment.

Setting dependency

The Use previous experiment weights setting is available only if From experiment is selected in the Problem category setting.

Experiment name

This setting defines the name of the experiment.

Dataset settings

Train dataframe

This setting specifies the path to a file containing a dataframe with the training records H2O Hydrogen Torch uses for model training in the experiment. Here, 'file' denotes a file adhering to the dataset format for the problem type of the experiment. To learn more, see Dataset formats.

Note
  • The records are combined into mini-batches when training the model.
  • If a validation dataframe is provided, a fold column is not needed in the train dataframe.
  • To import datasets for inference only, when defining the settings for an experiment, set the Train dataframe setting to None while setting the Test dataframe setting to the relevant dataframe (as a result, H2O Hydrogen Torch utilizes the relevant dataset for predictions and not for training).
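
For illustration, a minimal audio regression train dataframe could look like the following sketch; the column names are assumptions, and Dataset formats remains the authoritative reference:

```python
import pandas as pd

# Hypothetical layout: one audio file name column and one numeric label column.
train_df = pd.DataFrame(
    {
        "audio": ["clip_001.wav", "clip_002.wav", "clip_003.wav"],
        "label": [0.42, 1.37, 0.05],
    }
)
train_df.to_csv("train.csv", index=False)
```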

Data folder

Defines the location of the folder containing assets (for example, images or audio clips) the model utilizes for training. H2O Hydrogen Torch loads assets from this folder during training.

Validation strategy

This setting specifies the validation strategy H2O Hydrogen Torch uses for the experiment.

Tip

To properly assess the performance of your trained models, it is common practice to evaluate them on separate holdout data that the model has not seen during training.

Details
Options

  • K-fold cross-validation
    • This option splits the data using the provided optional fold column in the train data or performs an automatic 5-fold cross-validation in the absence of a fold column.
  • Grouped k-fold cross-validation
    • This option allows you to specify a group column based on which the data is split into folds.
  • Custom holdout validation
    • This option specifies a separate holdout dataframe.
  • Automatic holdout validation
    • This option allows you to specify a holdout validation sample size that is automatically generated.
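
Conceptually, grouped k-fold cross-validation keeps all records that share a group value (for example, the same speaker) in the same fold. A sketch using scikit-learn, which is an assumption for illustration and not necessarily what H2O Hydrogen Torch uses internally:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(10).reshape(-1, 1)           # dummy features
groups = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]    # e.g., one group per speaker

for train_idx, val_idx in GroupKFold(n_splits=5).split(X, groups=groups):
    # No group ever appears in both the training and validation indices.
    print(sorted(set(np.array(groups)[val_idx])))
```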

Validation dataframe

This setting defines a file containing a dataframe with validation records that H2O Hydrogen Torch uses to evaluate the model during training.

Note
  • Setting a Validation dataframe requires the Validation strategy setting to be set to Custom holdout validation. When you provide a validation dataframe, H2O Hydrogen Torch fully respects that choice and does not perform any internal cross-validation. In other words, the model is trained on the full provided train dataframe, and model performance is evaluated on the provided validation dataframe.
  • The validation dataframe should have the same format as the train dataframe but does not require a fold column.

Setting dependency

The Validation dataframe setting is only available when you select Custom holdout validation in the Validation strategy setting.

Selected folds

This setting defines the selected validation fold(s) in case of cross-validation; a separate model is trained for each value selected. Each model utilizes the corresponding part of the data as a holdout sample to assess performance while the model is fitted to the rest of the records from the training dataframe. As a result, folds estimate how the model performs in general when used to make predictions on data not used during model training.

Note

H2O Hydrogen Torch allows running experiments on a single selected fold for faster experimenting and multiple selected folds to gain more trust in the model's generalization and performance capabilities.

Setting dependency

This setting is available only when the Validation strategy setting is not set to Custom holdout validation or Automatic holdout validation.

Test dataframe

This setting defines a file containing a dataframe with test records that H2O Hydrogen Torch uses to test the model.

Note
  • The test dataframe should have the same format as the train dataframe but does not require a label column.
  • To import datasets for inference only, when defining the settings for an experiment, set the Train dataframe setting to None while setting the Test dataframe setting to the relevant dataframe (as a result, H2O Hydrogen Torch utilizes the relevant dataset for predictions and not for training).

Data folder test

Defines the location of the folder containing assets (for example, images, texts, or audio clips) H2O Hydrogen Torch utilizes to test the model. H2O Hydrogen Torch loads the assets from this folder when testing the model.

Setting dependency

The Data folder test setting is only available when you specify a test dataframe in the Test dataframe setting.

Unlabeled dataframe

Defines a separate CSV or Parquet file (depending on the problem type) containing a dataframe with unlabeled records that H2O Hydrogen Torch utilizes to generate pseudo labels. H2O Hydrogen Torch first trains the model with the provided labeled data (Train dataframe). Right after, the model predicts pseudo labels for the provided unlabeled dataframe before doing another training run that combines the original labels and pseudo labels.

Note
  • Image regression | Image classification | Image object detection
    • The unlabeled dataframe only needs to contain a single image column.
  • Text regression | Text classification
    • The unlabeled dataframe only needs to contain a single text column.
  • Audio regression | Audio classification | Speech recognition
    • The unlabeled dataframe only needs to contain a single audio column.
  • Image regression | Image classification | Image object detection | Audio regression | Audio classification | Speech recognition
    • Assets (images or audio clips) need to be located in the Data folder (setting).
  • All supported problem types
    • The training time can significantly increase depending on the size of the unlabeled data.

Tip

As labeling can be expensive, having additional unlabeled data is quite common. Providing this unlabeled data to H2O Hydrogen Torch trains the model in a semi-supervised manner, potentially improving the model quality in contrast to training only on labeled data.

Label columns

This setting defines the name(s) of the dataframe column(s) that refer to the target value(s) an H2O Hydrogen Torch experiment can aim to predict.

Audio column

Defines the dataframe column storing the names of the audio files that H2O Hydrogen Torch loads from the Data folder and Data folder test when training and testing the model.

Data sample

This setting defines the percentage of the data to use for the experiment. The default percentage is 100%.

Tip

Changing the default value can significantly increase the training speed, but it might lead to substantially lower accuracy. Using 100% of the data for final models is highly recommended.

Data sample choice

This setting specifies the data H2O Hydrogen Torch samples according to the percentage set in the Data sample setting. H2O Hydrogen Torch does not sample the unselected data.

Setting dependency

The Data sample choice setting is only available if the value in the Data sample setting is less than 1.0 (100%).

Audio settings

Audio parameters

Defines whether H2O Hydrogen Torch (Auto) or you (Manual) define the values for the following audio settings:

  • Sample rate
  • Training chunk seconds
  • STFT window size
  • Hop size
  • Mel frequency bins
  • Minimum frequency
  • Maximum frequency
Details
Options

Details
Audio classification | Audio regression

  • Auto
    • H2O Hydrogen Torch calculates the Sample rate and Training chunk seconds using the training samples and assigns default values to the following audio settings:
      • STFT window size
      • Hop size
      • Mel frequency bins
      • Minimum frequency
      • Maximum frequency
  • Manual
    • You can specify all audio settings.
      • Sample rate
      • Audio channels
      • Training chunk seconds
      • STFT window size
      • Hop size
      • Mel frequency bins
      • Minimum frequency
      • Maximum frequency
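
These settings map naturally onto a mel spectrogram computation. A sketch with torchaudio, which is an assumption for illustration; the internal pipeline may differ, and all values are examples:

```python
import torchaudio

# Illustrative mapping of the audio settings above onto torchaudio parameters.
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=32000,   # Sample rate
    n_fft=1024,          # STFT window size
    hop_length=512,      # Hop size
    n_mels=128,          # Mel frequency bins
    f_min=50.0,          # Minimum frequency
    f_max=14000.0,       # Maximum frequency
)
waveform, sr = torchaudio.load("clip_001.wav")  # hypothetical file
spectrogram = mel(waveform)
```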

Sample rate

Defines the sample rate (Hz) that H2O Hydrogen Torch utilizes to resample the audio files for training and inference (validation and prediction). This setting becomes useful when audio files in the dataset have mixed sample rates (22 kHz, 32 kHz, 44 kHz, etc.).

Note
  • Resampling the audio files to a common sample rate can result in faster training.
  • The Sample rate setting is only available if Manual is selected in the Audio parameters setting.
  • The Auto option selects the most common sample rate from the training set.
  • Speech recognition
    • 16,000 Hz is a good default, and most contemporary speech architectures are pretrained at this rate.

Audio channels

This setting specifies the number of audio channels to be applied to audio files during model training.

Note
  • If the actual number of audio channels in an audio file is higher than the specified value, the file is truncated; if it is lower, the file is padded accordingly.
  • If this setting is set to the default value of 1, the raw audio is averaged across channels whenever an audio file has more than one audio channel.
  • Truncation of audio files occurs based on channel index, that is, audio_waveform[:audio_channels, :].
  • For padding, new zero-padded channels are added before existing channels.
  • Audio samples used for playback in the user interface are pre-converted to mono.
  • Spectrogram visualizations in the experiment insights tabs only utilize the last channel.
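
Taken together, the averaging, truncation, and padding rules above could be sketched as follows; this is an illustration of the described behavior, not the actual implementation:

```python
import torch

def adjust_channels(waveform: torch.Tensor, audio_channels: int) -> torch.Tensor:
    """Sketch of the channel logic; waveform has shape (channels, samples)."""
    if audio_channels == 1:
        # Default: average across channels to mono.
        return waveform.mean(dim=0, keepdim=True)
    if waveform.shape[0] > audio_channels:
        # Truncate by channel index: audio_waveform[:audio_channels, :].
        return waveform[:audio_channels, :]
    if waveform.shape[0] < audio_channels:
        # Add new zero-padded channels before the existing ones.
        pad = torch.zeros(audio_channels - waveform.shape[0], waveform.shape[1])
        return torch.cat([pad, waveform], dim=0)
    return waveform
```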

Training chunk seconds

Grid search hyperparameter

Defines the chunk size in seconds that H2O Hydrogen Torch uses to sample the audio for training. Shorter audio clips are padded with zeros.

Note
  • The Training chunk seconds setting is only available if Manual is selected in the Audio parameters setting.
  • If Auto is selected in the Audio parameters setting, H2O Hydrogen Torch selects a maximum clip duration of no longer than 60 seconds.

STFT window size

Grid search hyperparameter

Defines the window size H2O Hydrogen Torch uses for the Short-time Fourier transform (STFT).

Note
  • There is a trade-off between time and frequency resolution in spectrograms: shorter windows improve the temporal resolution at the expense of frequency resolution.
  • The STFT window size setting is only available if Manual is selected in the Audio parameters setting.

Hop size

Grid search hyperparameter

Defines the number of audio samples H2O Hydrogen Torch uses between adjacent short-time Fourier transform (STFT) columns.

Note
  • Smaller values can improve the temporal resolution in the spectrogram by using more overlapping windows.
  • The Hop size setting is only available if Manual is selected in the Audio parameters setting.

Mel frequency bins

Grid search hyperparameter

Defines the number of frequency bins H2O Hydrogen Torch uses on the Mel scale spectrogram.

Note
  • Larger values can result in a better frequency resolution, although they require longer windows.
  • The Mel frequency bins setting is only available if Manual is selected in the Audio parameters setting.

Minimum frequency

Grid search hyperparameter

Defines the minimum frequency (Hz) H2O Hydrogen Torch uses for spectrograms.

Note

The Minimum frequency setting is only available if Manual is selected in the Audio parameters setting.

Maximum frequency

Grid search hyperparameter

Defines the maximum frequency (Hz) H2O Hydrogen Torch uses for spectrograms.

Note

The Maximum frequency setting is only available if Manual is selected in the Audio parameters setting.

Spectrogram normalization

Grid search hyperparameter

Defines the transformation applied to normalize the spectrogram data before training the model.

Details
Options

Details
Audio classification | Audio regression

  • No
    • No normalization is applied to the spectrogram.
  • Image
    • Calculates mean and standard deviation per spectrogram and then applies normalization: subtracts mean and divides by standard deviation.
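
The Image option amounts to per-spectrogram standardization. A minimal sketch; the epsilon guard is an assumption for numerical stability:

```python
import torch

def normalize_image_style(spec: torch.Tensor) -> torch.Tensor:
    # "Image" option: subtract the per-spectrogram mean, divide by its std.
    return (spec - spec.mean()) / (spec.std() + 1e-6)
```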

Augmentation settings

Mix audio

Grid search hyperparameter

This setting defines the audio mix augmentation to utilize during model training.

Details
Options

Details
Audio regression | Audio classification

  • Disabled: No mix augmentations are applied.
  • Mixup: Mixup adds (mixes) two audios based on a random ratio.

Mix target

Grid search hyperparameter

This setting defines the target (label) mix augmentation to apply during model training.

Details
Options

Details
Image regression | 3D image regression | Image classification | 3D image classification | Image object detection | Image semantic segmentation | 3D image semantic segmentation | Image instance segmentation | Audio regression | Audio classification

  • Ratio: Two classification targets are averaged based on the sample ratio during model training.
  • Min: The minimum of both targets is taken while ignoring the ratio during model training.
  • Max: The maximum of both targets is taken while ignoring the ratio during model training.

Mix concentration

Grid search hyperparameter

This setting defines the concentration parameter value of the Beta probability distribution to generate mix ratios. A larger value leads to more equal ratios (50% - 50%) for mixing.

Setting dependency
  • Image problem types: The Mix concentration setting is only available when the Mixup option is selected in the Mix image setting.
  • Audio problem types: The Mix concentration setting is only available when the Mixup option is selected in the Mix audio setting.
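
For intuition, the mix ratio can be drawn from a symmetric Beta distribution parameterized by this concentration value and then applied to both the audio and the target. A sketch under that assumption:

```python
import numpy as np

def mixup(audio_a, audio_b, label_a, label_b, concentration: float):
    # Larger concentration pushes ratios toward 0.5 (more equal mixing).
    r = np.random.beta(concentration, concentration)
    mixed_audio = r * audio_a + (1 - r) * audio_b
    mixed_label = r * label_a + (1 - r) * label_b  # the "Ratio" mix target
    return mixed_audio, mixed_label
```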

Mix probability

Grid search hyperparameter

This setting defines the probability value to apply mix augmentation. The mix probability value is used for each batch or mix iteration.

Example

If the mixing probability is specified as 0.3, mix augmentation is applied to each batch (or mix iteration) with a probability of 0.3.

Setting dependency
  • Image problem types: The Mix probability setting is only available when the Mixup option is selected in the Mix image setting.
  • Audio problem types: The Mix probability setting is only available when the Mixup option is selected in the Mix audio setting.

Mix iterations

Grid search hyperparameter

  • Image problem types: This setting defines the number of times to apply mix augmentation on each batch. The larger the value, the more images are mixed into a single train sample.
  • Audio problem types: This setting defines the number of times to apply mix augmentation on each batch. The larger the value, the more audios are mixed into a single train sample.

Setting dependency
  • Image problem types: The Mix iterations setting is only available when the Mixup option is selected in the Mix image setting.
  • Audio problem types: The Mix iterations setting is only available when the Mixup option is selected in the Mix audio setting.

Architecture settings

Pretrained

Grid search hyperparameter

Defines whether the neural network should start with pretrained weights. When this setting is On, training starts from a model pretrained on a generic task. When turned Off, the neural network starts with random initial weights.

Backbone

Grid search hyperparameter

Defines the backbone neural network architecture to train the model.

Note
  • Image regression | Image classification | Image metric learning | Audio regression | Audio classification
    • H2O Hydrogen Torch accepts backbone neural network architectures from the timm library (select or enter the architecture name)
  • Image object detection
    • H2O Hydrogen Torch provides several state-of-the-art backbone neural network architectures for model training. When you select Faster R-CNN or FCOS as the model type for the experiment, you can input any architecture name from the timm library. When you select EfficientDet as the model type for the experiment, you can input any architecture name from the efficientdet-pytorch library.
  • Image semantic segmentation | Image instance segmentation
    • H2O Hydrogen Torch accepts backbone neural network architectures from the segmentation-models-pytorch library (select or enter the architecture name).
  • 3D image regression | 3D image classification
    • H2O Hydrogen Torch accepts backbone (encoder) neural network architectures from a subset (resnet and efficientnet) of the timm library (select or enter the architecture name).
  • Text regression | Text classification | Text token classification | Text span prediction | Text sequence to sequence | Text metric learning
    • H2O Hydrogen Torch accepts backbone neural network architectures from the Hugging Face library (select or enter the architecture name)
  • Speech recognition
    • HuggingFace Wav2Vec2 CTC models are supported

Tip
  • All problem types
    • Usually, it is good to use simpler architectures for quicker experiments and larger models when aiming for the highest accuracy
  • Speech recognition
    • If possible, leverage backbones pre-trained closely to your use case (for example, noisy audio, casual speech, etc.)
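
For audio regression, backbones come from the timm library, as noted above. A sketch of instantiating one; the architecture name is only an example:

```python
import timm

# Create a timm backbone as a feature extractor (num_classes=0 drops the
# classification head; a problem-specific head is then fitted on top).
backbone = timm.create_model("tf_efficientnetv2_b3", pretrained=True, num_classes=0)
```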

Pool

Grid search hyperparameter

This setting defines the global pooling method H2O Hydrogen Torch uses in the model architecture before the final fully connected layer. Instead of adding a fully connected layer on top of the feature maps, global pooling is applied to each feature map beforehand.

Note

Certain backbones (for example, ViT) do not require pooling. Accordingly, H2O Hydrogen Torch does not display this setting for them.

Details
Options

Details
Image regression | Image classification | Image metric learning | Audio regression | Audio classification

  • Average
    • H2O Hydrogen Torch applies global average pooling.
  • CatAverageMax
    • H2O Hydrogen Torch concatenates global average and max poolings.
  • GeM
    • H2O Hydrogen Torch applies generalized mean (GeM) pooling.
  • Max
    • H2O Hydrogen Torch applies a global max pooling.
  • MeanAverageMax
    • H2O Hydrogen Torch calculates the mean between global average and max poolings.

Details
Text regression | Text classification | Text metric learning

  • Average
    • H2O Hydrogen Torch applies global average pooling.
  • GeM
    • H2O Hydrogen Torch applies generalized mean (GeM) pooling.
  • Max
    • H2O Hydrogen Torch applies a global max pooling.
  • First token
    • This option enables H2O Hydrogen Torch to use the output of the first token.
  • Last token
    • This option enables H2O Hydrogen Torch to use the output of the last unpadded token.
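
The pooling options can be sketched as follows for a feature map of shape (batch, channels, height, width); the GeM exponent p=3 is a common default and an assumption here:

```python
import torch
import torch.nn.functional as F

def global_pool(features: torch.Tensor, method: str) -> torch.Tensor:
    avg = F.adaptive_avg_pool2d(features, 1).flatten(1)
    mx = F.adaptive_max_pool2d(features, 1).flatten(1)
    if method == "Average":
        return avg
    if method == "Max":
        return mx
    if method == "CatAverageMax":
        return torch.cat([avg, mx], dim=1)
    if method == "MeanAverageMax":
        return (avg + mx) / 2
    if method == "GeM":
        p = 3.0  # assumed exponent
        pooled = F.adaptive_avg_pool2d(features.clamp(min=1e-6).pow(p), 1)
        return pooled.pow(1.0 / p).flatten(1)
    raise ValueError(method)
```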

Dropout

Grid search hyperparameter

This setting defines the dropout rate before the final fully connected layer that H2O Hydrogen Torch applies during model training. The dropout rate helps the model generalize better by randomly dropping a share of the neural network connections.

Training settings

Loss function

Grid search hyperparameter

This setting defines the loss function H2O Hydrogen Torch utilizes during model training. The loss function is a differentiable function measuring the prediction error. The model utilizes gradients of the loss function to update the model weights during training.

Details
Options

Details
Image regression | 3D image regression | Text regression | Audio regression

  • MAE
    • H2O Hydrogen Torch utilizes the mean absolute error (L1 norm) as the loss function.
  • MSE
    • H2O Hydrogen Torch utilizes the mean squared error (squared L2 norm) as the loss function.
  • RMSE
    • H2O Hydrogen Torch utilizes the root mean squared error (L2 norm) as the loss function (see the sketch after these options).

Details
Image classification | 3D image classification | Text classification | Audio classification

  • BCE
    • H2O Hydrogen Torch uses binary cross entropy loss.
  • Classification
    • This default classification loss automatically chooses between BCE (multi-label) and CrossEntropy (multi-class) for classification.
  • CrossEntropy
    • H2O Hydrogen Torch utilizes multi-class cross entropy loss as a loss function.
  • SigmoidFocal
  • SoftmaxFocal

Details
Image semantic segmentation | 3D image semantic segmentation | Image instance segmentation

  • BCE
    • H2O Hydrogen Torch uses binary cross entropy loss.
  • BCEDice
    • H2O Hydrogen Torch uses binary cross entropy loss and Dice loss with weights 2 and 1, respectively.
  • BCELovasz
    • H2O Hydrogen Torch uses binary cross entropy loss and Lovasz loss with equal weights.
  • Dice
    • H2O Hydrogen Torch uses Dice loss.
  • Focal
  • FocalDice
    • H2O Hydrogen Torch uses Focal loss and Dice loss with weights 2 and 1, respectively.
  • Jaccard
    • H2O Hydrogen Torch uses Jaccard loss.

Details
Image metric learning | Text metric learning

Details
Text token classification | Text span prediction | Text sequence to sequence

  • CrossEntropy
    • H2O Hydrogen Torch utilizes multi-class cross entropy loss as a loss function.

Details
Speech recognition

  • CTC Loss
    • H2O Hydrogen Torch utilizes Connectionist Temporal Classification (CTC) loss as a loss function.
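
The MAE, MSE, and RMSE options listed above for the regression problem types correspond to standard PyTorch losses; an illustrative sketch:

```python
import torch
import torch.nn.functional as F

def regression_loss(pred: torch.Tensor, target: torch.Tensor, name: str) -> torch.Tensor:
    if name == "MAE":
        return F.l1_loss(pred, target)       # mean absolute error (L1 norm)
    if name == "MSE":
        return F.mse_loss(pred, target)      # mean squared error
    if name == "RMSE":
        return torch.sqrt(F.mse_loss(pred, target))  # root mean squared error
    raise ValueError(name)
```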

Optimizer

Grid search hyperparameter

This setting defines the algorithm or method (optimizer) to use for model training. The selected optimizer determines how the model updates the attributes of the neural network, such as its weights and learning rate, to reduce the training loss.

Details
Options

Learning rate

Grid search hyperparameter

This setting defines the learning rate H2O Hydrogen Torch uses when training the model, specifically when updating the neural network's weights. The learning rate is the speed at which the model updates its weights after processing each mini-batch of data.

Note
  • The learning rate is an important setting to tune, as it balances under- and overfitting.
  • The number of epochs highly impacts the optimal value of the learning rate.

Differential learning rate layers

Defines the learning rate to apply to certain layers of a model. H2O Hydrogen Torch applies the regular learning rate to layers without a specified learning rate.

Details
Options

Details
Image regression | Image classification | Text regression | Text classification | Text token classification | Audio regression | Audio classification

  • Backbone
    • H2O Hydrogen Torch applies a different learning rate to a body of the neural network architecture.
  • Head
    • H2O Hydrogen Torch applies a different learning rate to a head of the neural network architecture.

Details
Image object detection

The options for an image object detection experiment differ based on the selected Model type (setting). Options:

  • If you select EfficientDet as the experiment's Model type (setting), the following options are available:

    Details
    Options

    • Backbone
      • H2O Hydrogen Torch applies a different learning rate to a body of the EfficientDet architecture.
    • FPN
      • H2O Hydrogen Torch applies a different learning rate to a Feature Pyramid Network (FPN) block of the EfficientDet architecture.
    • class_net
      • H2O Hydrogen Torch applies a different learning rate to a classification head of the EfficientDet architecture.
    • box_net
      • H2O Hydrogen Torch applies a different learning rate to a box regression head of the EfficientDet architecture.

  • If you select Faster R-CNN as the experiment's Model type (setting), the following options are available:

    Details
    Options

    • Body
      • H2O Hydrogen Torch applies a different learning rate to a body of the Faster R-CNN architecture.
    • FPN
      • H2O Hydrogen Torch applies a different learning rate to a Feature Pyramid Network (FPN) block in the Faster R-CNN architecture.
    • RPN
      • H2O Hydrogen Torch applies a different learning rate to a Region Proposal block of the Faster R-CNN architecture.
    • ROI heads
      • H2O Hydrogen Torch applies a different learning rate to the Faster R-CNN architecture proposal heads.

  • If you select FCOS as the experiment's Model type (setting), the following options are available:

    Details
    Options

    • Body
      • H2O Hydrogen Torch applies a different learning rate to a body of the FCOS architecture.
    • FPN
      • H2O Hydrogen Torch applies a different learning rate to a Feature Pyramid Network (FPN) block of the FCOS architecture.
    • classification_head
      • H2O Hydrogen Torch applies a different learning rate to the classification head of the FCOS architecture.
    • regression_head
      • H2O Hydrogen Torch applies a different learning rate to a box regression head of the FCOS architecture.

Details
Image semantic segmentation

  • Encoder
    • H2O Hydrogen Torch applies a different learning rate to the encoder of the neural network architecture.
  • Decoder
    • H2O Hydrogen Torch applies a different learning rate to the decoder of the neural network architecture.
  • Segmentation head
    • H2O Hydrogen Torch applies a different learning rate to the head of the neural network architecture.

Details
3D image semantic segmentation | Text sequence to sequence

  • Encoder
    • H2O Hydrogen Torch applies a different learning rate to the encoder of the neural network architecture.
  • Decoder
    • H2O Hydrogen Torch applies a different learning rate to the decoder of the neural network architecture.

Details
Image instance segmentation

  • Encoder
    • H2O Hydrogen Torch applies a different learning rate to the encoder of the neural network architecture.
  • Decoder
    • H2O Hydrogen Torch applies a different learning rate to the decoder of the neural network architecture.
  • Segmentation head
    • H2O Hydrogen Torch applies a different learning rate to the head of the neural network architecture.

Details
Image metric learning | Text metric learning

  • Backbone
    • H2O Hydrogen Torch applies a different learning rate to a body of the neural network architecture.
  • Neck
    • H2O Hydrogen Torch applies a different learning rate to a neck of the neural network architecture.
  • Loss
    • H2O Hydrogen Torch applies a different learning rate to an ArcFace block of the neural network architecture.

Details
Text regression

  • Backbone
    • H2O Hydrogen Torch applies a different learning rate to a body of the neural network architecture.

Details
Text span prediction

  • qa_outputs

Tip

A common strategy is to apply a lower learning rate to the backbone of a model for better convergence and training stability.

Note

Different layers are available for different problem types.
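
In PyTorch terms, differential learning rates correspond to optimizer parameter groups. A sketch for the backbone/head split; the grouping logic is an illustration, not how H2O Hydrogen Torch defines its groups:

```python
import timm
import torch

model = timm.create_model("resnet18", pretrained=False, num_classes=1)
head_params = list(model.get_classifier().parameters())
head_ids = {id(p) for p in head_params}
backbone_params = [p for p in model.parameters() if id(p) not in head_ids]

optimizer = torch.optim.AdamW(
    [
        {"params": backbone_params, "lr": 1e-5},  # lower backbone learning rate
        {"params": head_params, "lr": 1e-4},      # regular head learning rate
    ]
)
```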

Batch size

Grid search hyperparameter

This setting defines the number of training examples a mini-batch uses during an iteration of the training model to estimate the error gradient before updating the model weights. In other words, this setting defines the batch size used per GPU.

Note

During model training, the training data is packed into mini-batches of a fixed size.

Automatically adjust batch size

If this setting is turned On, H2O Hydrogen Torch checks whether the Batch size specified fits into the GPU memory. If a GPU out-of-memory (OOM) error occurs, H2O Hydrogen Torch automatically decreases the Batch size by a factor of 2 until it fits into the GPU memory or the Batch size equals 1.
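
The described behavior amounts to a retry loop along these lines; train_step is a hypothetical stand-in for one training attempt:

```python
import torch

def fit_batch_size(train_step, batch_size: int) -> int:
    """Halve the batch size on GPU out-of-memory errors until training fits (sketch)."""
    while batch_size > 1:
        try:
            train_step(batch_size)
            return batch_size
        except torch.cuda.OutOfMemoryError:
            batch_size //= 2  # decrease by a factor of 2 and retry
    train_step(1)
    return 1
```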

Drop last batch

This setting drops the last incomplete batch during model training when turned On.

Note

H2O Hydrogen Torch groups the train data into mini-batches of equal size during the training process, but the last batch can have fewer records than the others. Not dropping the last batch can lead to a less robust gradient estimation while causing a more volatile training step.

Epochs

Grid search hyperparameter

This setting defines the number of epochs to train the model. In other words, it specifies the number of times the learning algorithm goes through the entire training dataset.

Note
  • The Epochs setting is an important setting to tune because it balances under- and overfitting.
  • The learning rate highly impacts the optimal value of the epochs.
  • For the following supported problem types, H2O Hydrogen Torch enables you to utilize and deploy a pretrained model trained on zero epochs (H2O Hydrogen Torch does not train the model, and the pretrained model (experiment) can be deployed as-is):
    • Speech recognition
    • Text sequence to sequence
    • Text span prediction

Schedule

Grid search hyperparameter

This setting defines the learning rate schedule H2O Hydrogen Torch utilizes during model training. Specifying a learning rate schedule prevents the learning rate from staying the same. Instead, a learning rate schedule causes the learning rate to change over iterations, typically decreasing the learning rate to achieve a better model performance and training convergence.

Details
Options

  • Constant
    • H2O Hydrogen Torch applies a constant learning rate during the training process.
  • Cosine
    • H2O Hydrogen Torch applies a cosine learning rate that follows the values of the cosine function.
  • Linear
    • H2O Hydrogen Torch applies a linear learning rate that decreases the learning rate linearly.

Warmup epochs

Grid search hyperparameter

This setting determines the number of epochs to warmup for gradually increasing the learning rate from 0 to the specified value. The learning rate increases linearly during the warmup period, allowing the model to adapt to the learning process gradually.

Note

You can set the value of this setting as a ratio of an epoch. For instance, setting it to 0.1 means warmup is performed for only 10% of the first full epoch.
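
Combining the Schedule and Warmup epochs settings, the effective learning rate over training could look like the following sketch of a cosine schedule with linear warmup:

```python
import math

def lr_at(step: int, total_steps: int, warmup_steps: int, base_lr: float) -> float:
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr toward 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))
```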

Weight decay

Grid search hyperparameter

This setting defines the weight decay that H2O Hydrogen Torch uses for the optimizer during model training.

Note

Weight decay is a regularization technique that adds an L2 norm of all model weights to the loss function, which often improves model generalization.

Gradient clip

Grid search hyperparameter

This setting defines the maximum norm of the gradients H2O Hydrogen Torch allows during model training. It defaults to 0 (no clipping). When a value greater than 0 is specified, H2O Hydrogen Torch clips the gradients during model training, using the specified value as an upper limit for the norm of the gradients, calculated with the Euclidean norm over all gradients per batch.

Note

This setting can help model convergence when extreme gradient values cause high volatility of weight updates.
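
In PyTorch, this corresponds to clipping the total gradient norm after the backward pass and before the optimizer step. A self-contained sketch with a stand-in model:

```python
import torch

model = torch.nn.Linear(4, 1)            # stand-in model
loss = model(torch.randn(2, 4)).sum()
loss.backward()

# Clip the Euclidean norm of all gradients per batch to the configured limit.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```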

Grad accumulation

Grid search hyperparameter

This setting defines the number of gradient accumulations before H2O Hydrogen Torch updates the neural network weights during model training.

Note
  • Grad accumulation can be beneficial if only small batches are selected for training. With gradient accumulation, the loss and gradients are calculated after each batch, but the model weights are updated only after the selected number of accumulations. You can control the batch size through the Batch size setting.
  • Changing the default value of Grad accumulation might require adjusting the learning rate and batch size.
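
A sketch of the accumulation loop; loader, compute_loss, and optimizer are placeholders passed in by the caller:

```python
def train_with_accumulation(loader, compute_loss, optimizer, grad_accumulation: int = 4):
    """Update the model weights only after the selected number of accumulations."""
    optimizer.zero_grad()
    for step, batch in enumerate(loader):
        loss = compute_loss(batch) / grad_accumulation  # keep the gradient scale comparable
        loss.backward()
        if (step + 1) % grad_accumulation == 0:
            optimizer.step()
            optimizer.zero_grad()
```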

Save best checkpoint

This setting determines if H2O Hydrogen Torch should save the model weights of the epoch exhibiting the best validation metric. When turned On, H2O Hydrogen Torch saves the model weights for the epoch exhibiting the best validation metric. When turned Off, H2O Hydrogen Torch saves the model weights after the last epoch is executed.

Note
  • This setting should be turned On with care, as it has the potential to lead to overfitting on the validation data.
  • The default goal should be to tune models so that the last or a very late epoch is the best epoch.
  • If an evident decline in later epochs is observed in the logs, it is usually better to adjust hyperparameters, such as reducing the number of epochs or increasing regularization, instead of turning this setting On.

Evaluation epochs

This setting defines the number of epochs H2O Hydrogen Torch uses before each validation loop for model training. In other words, it determines the frequency (in a number of epochs) to run the model evaluation on the validation data.

Note
  • Increasing the number of Evaluation epochs can speed up an experiment.

Setting dependency

The Evaluation epochs setting is available only if the Save best checkpoint setting is turned Off.

Evaluate before training

Determines whether to perform a validation run before training. This setting is potentially helpful for assessing the performance of zero-shot pretrained backbones and for checking the modeling pipeline.

Note

The following supported problem types support externally pretrained zero-shot models (problem types without this support fit a new head on top of a backbone):

  • Text span prediction
  • Text sequence to sequence
  • Speech recognition

Calculate train metric

This setting determines whether the model metric should also be calculated for the training data at the end of the training. When On, the model metric is calculated for the training data. The resulting values do not indicate the true model performance because they are based on the same data records used for model training, but they can give insights into over- or underfitting.

Train validation data

This setting defines whether the model should use the entire train and validation dataset during model training. When turned On, H2O Hydrogen Torch uses the whole train dataset and validation data to train the model.

Note
  • H2O Hydrogen Torch also evaluates the model on the provided validation fold. Validation is always only on the provided validation fold.
  • H2O Hydrogen Torch uses both datasets for model training if you provide a train and validation dataset.
    • To define a training dataset, use the Train dataframe setting. For more information, see Train dataframe.
    • To define a validation dataset, use the Validation dataframe setting. For more information, see Validation dataframe.
  • Turning On the Train validation data setting should produce a model that you can expect to perform better because H2O Hydrogen Torch trained the model on more data. Note, though, that using the entire train dataset and out-of-fold validation dataset generally causes the model's accuracy to be overstated, as information from the validation data is incorporated into the model during the training process.

    Note

    If you have five folds and set fold 0 as validation, H2O Hydrogen Torch usually trains on folds 1-4 and reports on fold 0. With Train validation data turned On, fold 0 is added to the training, but H2O Hydrogen Torch still reports its accuracy. As a result, the reported accuracy is overstated for fold 0 but should be better for any unseen (test) data or production scenarios. For that reason, you usually want to consider this setting after running your experiments and deciding on models.

Setting dependency

This setting is only available if you turned the Save best checkpoint setting Off.

Run interpretations

Determines whether the experiment (model) generates validation interpretation insights at the end of the experiment. Validation interpretation insights are only available for image, text, and audio classification and regression experiments.

Build scoring pipelines

Determines whether the experiment (model) automatically generates an H2O MLOps pipeline and Python scoring pipeline at the end of the experiment. If turned Off, you can still create scoring pipelines on demand when the experiment is complete (for example, when you click Download scoring or Download MLOps).

Export to ONNX

Determines whether H2O Hydrogen Torch attempts to export the trained model to the Open Neural Network Exchange (ONNX) format. If successful, the model in ONNX format is available in the scoring pipeline.

ONNX target device

This setting defines the target device on which the Open Neural Network Exchange (ONNX) model runs. H2O Hydrogen Torch conducts model optimization for either CPU or GPU devices.

Prediction settings

Metric

This setting defines the metric to evaluate the model's performance.

Batch size inference

This setting defines the batch size of examples to utilize for inference.

Note

Selecting 0 sets the Batch size inference to the same value used for the Batch size setting.

Inference chunk method

Defines the inference chunk method H2O Hydrogen Torch uses for predictions.

Details
Options

Details
Audio classification | Audio regression

  • Fix
    • Forces the same chunk size for all audio clips, using zero padding for shorter clips and truncating for longer clips.
    Note

    Fix as the model's inference chunk method enables batch processing with more efficient GPU usage.

  • Varying
    • H2O Hydrogen Torch reads shorter clips without padding.

Max inference chunk seconds

Defines the maximum chunk size in seconds that H2O Hydrogen Torch uses from the audio. Shorter audio clips are used as-is.

Setting dependency

The Max inference chunk seconds setting is only available if you select Varying in the Inference chunk method setting.

Inference chunk seconds

Defines the exact chunk size in seconds that H2O Hydrogen Torch uses to load the audio for predictions. Shorter audio clips are padded with zeros.

Setting dependency

The Inference chunk seconds setting is only available if you select Fix in the Inference chunk method setting.
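
For intuition, the Fix option pads or truncates every clip to one exact duration. An illustrative sketch:

```python
import torch
import torch.nn.functional as F

def fix_chunk(waveform: torch.Tensor, sample_rate: int, chunk_seconds: float) -> torch.Tensor:
    """Pad or truncate a (channels, samples) clip to an exact duration (sketch)."""
    target = int(sample_rate * chunk_seconds)
    if waveform.shape[-1] >= target:
        return waveform[..., :target]  # truncate longer clips
    return F.pad(waveform, (0, target - waveform.shape[-1]))  # zero-pad shorter clips
```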

Environment settings

GPUs

This setting determines the list of GPUs H2O Hydrogen Torch can use for the experiment. GPUs are listed by name, referring to their system ID (starting from 1). If no GPUs are selected, H2O Hydrogen Torch utilizes the CPU for model training.

Number of seeds per run

This setting defines the number of seeds to use for a single run. If more than one seed is selected, each experiment runs multiple times.

Note
  • Deep learning models can sometimes exhibit a certain randomness in individual runs. Running an experiment multiple times with multiple seeds can give insights into the stability of results.
  • In case of high randomness, a better judgment about the performance of a model with certain hyperparameter settings can be made by comparing the average results across seeds, for example, in a grid search scenario.

Number of GPUs per run

This setting defines the number of GPUs to use for a single run when training the model. A single run might represent a single fold, a single seed run, or a single grid search run.

Example

If 5 GPUs are available, it is possible to run a 5-fold cross-validation in parallel using a single GPU per fold.

Note
  • The available GPUs are the ones that can be enabled using the GPUs setting.
  • If the number of GPUs is less than or equal to 1, this setting (Number of GPUs per run) is not available.

Mixed precision training

Determines whether to use mixed-precision during model training. When turned Off, H2O Hydrogen Torch does not use mixed-precision for training.

Note

Mixed-precision is a technique that helps decrease memory consumption and increases training speed.

Mixed precision inference

Determines whether to use mixed-precision during model inference.

Note

Mixed-precision is a technique that helps decrease memory consumption and increases inference speed.

Sync batch normalization

Determines whether to synchronize batch normalization across GPUs in distributed data-parallel (DDP) mode. In other words, when turned On, multi-GPU training synchronizes the batch normalization layers of the model across GPUs. In a nutshell, H2O Hydrogen Torch with multiple GPUs splits the batch across GPUs, so a normalization layer only has access to the part of the batch stored on its device. This works out of the box, but collecting the data from all GPUs to normalize the entire batch can give better results.

Note

When turned On, data scientists can expect the training speed to drop slightly while the model's accuracy improves. However, the accuracy improvement rarely happens in practice and only occurs under specific problem types and defined batch sizes.
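
In PyTorch DDP terms, this corresponds to converting the model's BatchNorm layers to their synchronized variants. A sketch; H2O Hydrogen Torch handles this internally:

```python
import torch

model = torch.nn.Sequential(torch.nn.Conv1d(1, 8, 3), torch.nn.BatchNorm1d(8))
# Replace BatchNorm layers with synchronized variants for multi-GPU (DDP) training.
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
```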

Number of workers

This setting defines the number of workers H2O Hydrogen Torch uses for the DataLoader. In other words, it defines the number of CPU processes to use when reading and loading data to GPUs during model training.

Seed

This setting defines the random seed value that H2O Hydrogen Torch uses during model training. It defaults to -1, an arbitrary value. When the value is modified (not -1), the random seed makes results reproducible: defining a seed aids in obtaining predictable and repeatable results every time. Keeping the default seed value (-1) leads to different random numbers at every invocation.
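
Reproducibility via a fixed seed typically involves seeding every random number generator in play. An illustrative sketch; the internal implementation may differ:

```python
import random

import numpy as np
import torch

def seed_everything(seed: int) -> None:
    random.seed(seed)                 # Python RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs
```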

Logging settings

Logger

This setting defines the logger type that H2O Hydrogen Torch uses for model training.

Details
Options

  • None
    • This option does not use any logger.
  • Neptune
    • This option utilizes Neptune as a logger to track the experiment. To use Neptune, you must define the following settings: Neptune API token and Neptune project.

Neptune API token

This setting defines the Neptune API token to validate all subsequent Neptune API calls.

Setting dependency

This setting is available if you select Neptune in the Logger setting.

Neptune project

This setting defines the Neptune project.

Setting dependency

This setting is available if you select Neptune in the Logger setting.

Log grad norm

This setting determines whether to log the total grad norm before and after clipping.

Note

This setting adds a small overhead during the experiment runtime but can help determine if the gradients are exploding or unstable.

Tip

Turn this setting On if you suspect unstable gradients; based on the logged norms, you can then choose a value for the Gradient clip setting to prevent exploding gradients.

Number of audios

This setting defines the number of audios to show in the experiment Insights tab.

AutoDL settings

Time budget

This setting specifies the number of experiments that H2O Hydrogen Torch will generate, each with different values for certain hyperparameters referred to as grid search hyperparameters.

Details
Options
  • 1
    • This option selects several values for certain grid search hyperparameters. Up to 10 child experiments are generated when you run (start) the parent experiment.
      • Image regression: Backbone options: "tf_efficientnetv2_b3", "resnet50"; Learning Rate options: 0.001, 0.0003; Epochs options: 5, 10. Image size is fixed at 224x224 pixels.
      • 3D image regression: Backbone options: "resnet18d", "tf_efficientnet_b0_ns"; Learning Rate options: 0.001, 0.0003; Epochs options: 5, 10. Image size is fixed at 128x128x32 pixels.
      • Image classification:
      • 3D image classification:
      • Image object detection:
      • Image semantic segmentation:
      • 3D image semantic segmentation:
      • Image instance segmentation:
      • Image metric learning:
      • Text regression:
      • Text classification:
      • Text token classification:
      • Text span prediction:
      • Text sequence-to-sequence:
      • Text metric learning:
      • Image and text classification:
      • Audio regression:
      • Audio classification:
      • Speech recognition:
      • Graph node regression:
      • Graph node classification:
      • Multi-modal causal language modeling:
  • 2
    • This option selects several values for certain grid search hyperparameters. Up to 50 child experiments are generated when you run (start) the parent experiment.
      • Image regression: Backbone options: "tf_efficientnetv2_b3", "resnet50"; Learning Rate options: 0.001, 0.0003, 0.0001; Epochs options: 5, 10; Augmentation Strategy options: "Soft", "Medium". Image size is fixed at 224x224 pixels.
      • 3D image regression: Backbone options: "resnet18d", "tf_efficientnet_b1_ns"; Learning Rate options: 0.001, 0.0003, 0.0001; Epochs options: 5, 10; Augmentation Strategy options: "Soft", "Medium". Image size is fixed at 128x128x128 pixels.
      • Image classification:
      • 3D image classification:
      • Image object detection:
      • Image semantic segmentation:
      • 3D image semantic segmentation:
      • Image instance segmentation:
      • Image metric learning:
      • Text regression:
      • Text classification:
      • Text token classification:
      • Text span prediction:
      • Text sequence-to-sequence:
      • Text metric learning:
      • Image and text classification:
      • Audio regression:
      • Audio classification:
      • Speech recognition:
      • Graph node regression:
      • Graph node classification:
      • Multi-modal causal language modeling:
  • 3
    • This option selects several values for certain grid search hyperparameters. When you run (start) the parent experiment, up to 100 child experiments are generated.
      • Image regression: Backbone options: "tf_efficientnetv2_b3", "resnet50", "eca_nfnet_l0"; Learning Rate options: 0.001, 0.0003, 0.0001; Epochs options: 5, 10; Augmentation Strategy options: "Soft", "Medium"; Mix Augmentations options: "Disabled", "Mixup". Image size is fixed at 384x384 pixels.
      • 3D image regression: Backbone options: "resnet34d", "tf_efficientnet_b3_ns"; Learning Rate options: 0.001, 0.0003, 0.0001; Epochs options: 10, 20; Augmentation Strategy options: "Soft", "Medium"; Mix Augmentations options: "Disabled", "Mixup". Image size is fixed at 256x256x128 pixels.
      • Image classification:
      • 3D image classification:
      • Image object detection:
      • Image semantic segmentation:
      • 3D image semantic segmentation:
      • Image instance segmentation:
      • Image metric learning:
      • Text regression:
      • Text classification:
      • Text token classification:
      • Text span prediction:
      • Text sequence-to-sequence:
      • Text metric learning:
      • Image and text classification:
      • Audio regression:
      • Audio classification:
      • Speech recognition:
      • Graph node regression:
      • Graph node classification:
      • Multi-modal causal language modeling:

Note

This setting is only available if you select AutoDL as the experience level.

