AutoML

The AutoML section enables iterative fine-tuning using intelligent agent-driven configuration. Users specify how many experiments they’d like to run, and the system automatically adjusts settings like learning rate and backbone selection to find the best-performing model.

AutoML Runs Table

When you navigate to the AutoML section from the top nav or the homepage, you’ll see a table of all AutoML runs in your project.

Each row shows:

  • Name: Searchable name of the AutoML run
  • Progress: Completed / total experiments
  • Best Score: Performance metric of the best experiment
  • Created: Date/time the run was started
  • Status: Completed, running, or queued
  • Actions: Rename or delete

Starting an AutoML Run

Click + New AutoML to begin a new run. This launches the same setup flow as starting a regular experiment, with the AutoML toggle enabled by default. No manual hyperparameter configuration is required.

Once you click “Start Training”:

  • You’ll be prompted to set the number of experiments (default is 5)
  • The system builds models iteratively (see the sketch after this list):
    • Starts with a base config
    • Evaluates results
    • Adjusts parameters (e.g. backbone, learning rate)
    • Starts the next experiment
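
A minimal sketch of that configure-train-evaluate-adjust loop, in Python. Every name here is a placeholder for illustration: train_and_evaluate stands in for launching one experiment and reading back its validation loss, and the backbone and learning-rate options (and the round-robin adjustment) are assumptions, not the agent’s actual search strategy.

    import math
    import random

    # Illustrative search spaces; the real agent's candidate settings are not documented.
    BACKBONES = ["resnet50", "efficientnet_b0", "vit_base"]
    LEARNING_RATES = [1e-3, 5e-4, 1e-4]

    def train_and_evaluate(config):
        """Stand-in for launching one experiment and reading its validation loss."""
        random.seed(repr(sorted(config.items())))  # deterministic fake score per config
        return random.uniform(1.0, 3.0)

    def run_automl(num_experiments=5):
        # Start with a base config.
        config = {"backbone": BACKBONES[0], "learning_rate": LEARNING_RATES[0]}
        best_config, best_loss = None, math.inf
        for i in range(num_experiments):
            # Evaluate results for the current configuration.
            loss = train_and_evaluate(config)
            if loss < best_loss:
                best_config, best_loss = config, loss  # track the best score so far
            # Adjust parameters, then start the next experiment. A simple
            # round-robin sweep is used here; the real agent adapts its
            # choices to the results it has observed.
            config = {
                "backbone": BACKBONES[(i + 1) % len(BACKBONES)],
                "learning_rate": LEARNING_RATES[(i + 1) % len(LEARNING_RATES)],
            }
        return best_config, best_loss

    print(run_automl(num_experiments=5))  # default of 5 matches the prompt above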

Viewing an Active Run

Once started, you’ll see:

  • Status: Queued, running, or completed
  • Progress: e.g. 2/5
  • Best Score: Metric of the current best-performing configuration
  • Created: When the run was started
  • Stop / Delete: Controls for halting or removing the run

Experiments Table

This section displays:

  • Name of each individual experiment
  • Experiment ID
  • Dataset used
  • Created timestamp
  • Status: Completed, validating, or queued
  • Validation Metrics: e.g. validation loss, perplexity (see the note below)

Clicking on any experiment takes you to the Experiment Detail View (see: Experiments Documentation).
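
The documentation doesn’t define these metrics, but for language-model experiments perplexity is conventionally the exponential of the mean per-token cross-entropy loss, so the two columns rank experiments identically. A quick Python illustration of that relationship (the function name is ours, not the product’s):

    import math

    def perplexity(mean_cross_entropy_loss: float) -> float:
        """Conventional relation: perplexity = exp(mean per-token cross-entropy)."""
        return math.exp(mean_cross_entropy_loss)

    # A validation loss of 2.0 nats/token gives a perplexity of about 7.39,
    # so a lower loss and a lower perplexity identify the same best experiment.
    print(perplexity(2.0))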

AutoML Logs

This expandable section shows:

  • Backbone models selected
  • Parameter configurations tested
  • Status of internal experiment launches

Connected Experiments

Once the run completes, a visual flow chart shows the iterative steps AutoML took, with each model’s loss and perplexity values displayed for comparison.

