# Reusable workflows
Reusable workflows allow you to call one workflow from another, enabling modular workflow design and reducing duplication.
## Schema

See the Schema Reference for the `#Trigger` (with the `callable` field), `#WorkflowCall`, and `#Job` definitions.
## Making Workflows Callable

Mark a workflow as callable by setting `trigger.callable: true`:
```yaml
id: deploy-model
name: Deploy Model

trigger:
  callable: true
  inputs:
    model_path:
      type: string
      required: true
    deploy_token:
      type: string
      required: true
      secret: true # Masked in logs and UI

jobs:
  deploy:
    steps:
      - name: Deploy
        env:
          TOKEN: ${{ .inputs.deploy_token }}
        run: h2o deploy ${{ .inputs.model_path }} --token $TOKEN
```
## Workflow ID and Resource Names
Each workflow has two identifiers:

- **Workflow ID** (`id` field): a user-defined unique identifier that forms part of the resource name.
  - Used when calling workflows via `WorkflowCall`.
  - Required field in the workflow definition.
- **Workflow Name** (`name` field): an optional display name.
  - Not unique; multiple workflows can have the same display name.
  - Not used for `WorkflowCall` references.
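To illustrate the distinction, two workflows may share a display name as long as their IDs differ. A minimal sketch (the IDs below are hypothetical):

```yaml
# Workflow A
id: deploy-model-staging # unique; this is what WorkflowCall references
name: Deploy Model       # display only; may repeat across workflows
---
# Workflow B
id: deploy-model-prod    # a different, unique id
name: Deploy Model       # the same display name is allowed
```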
### Resource Name Format

The full resource name combines the workspace ID and workflow ID:

```
/workspaces/{workspace-id}/workflows/{workflow-id}
```

Example: `/workspaces/abc123/workflows/deploy-model`
### Reference Formats in WorkflowCall

When calling workflows, you can use two formats:

**Short ID** (same workspace):

```yaml
workflow:
  name: deploy-model # Expands to /workspaces/{current-workspace}/workflows/deploy-model
```

**Full Resource Name** (cross-workspace calls):

```yaml
workflow:
  name: /workspaces/other-workspace-id/workflows/deploy-model
```
## Calling Workflows

Call a workflow from a job using the `workflow` field:
```yaml
id: train-and-deploy
name: Train and Deploy

secrets:
  - name: workspaces/xxx/secrets/deploy-token
    as: deploy_token

jobs:
  train:
    steps:
      - name: Train model
        run: python train.py
      - name: Upload model
        upload:
          path: ./model/
          destination: drive://models/latest/

  deploy:
    depends_on: [train]
    workflow:
      name: deploy-model # Short ID - references workflow in same workspace
      inputs:
        model_path: "drive://models/latest/"
        deploy_token: ${{ .secrets.deploy_token }}
```
Cross-workspace example:

```yaml
deploy-to-prod:
  workflow:
    name: /workspaces/prod-workspace-123/workflows/deploy-service # Full resource name
    inputs:
      version: "v1.2.3"
```
## Input Passing

### Regular Inputs

Pass inputs using the `workflow.inputs` map. Input values support expressions:
```yaml
jobs:
  process:
    workflow:
      name: workflow-process-data
      inputs:
        dataset: ${{ .inputs.dataset_name }}
        preprocessing: "standard"
```
### Secret Inputs

Mark an input with `secret: true` to mask its value in logs and the UI:
```yaml
# Callable workflow
inputs:
  api_key:
    type: string
    secret: true # Value will be masked
```

```yaml
# Caller
secrets:
  - name: workspaces/xxx/secrets/api-key
    as: api_key

jobs:
  call:
    workflow:
      name: workflow-api-integration
      inputs:
        api_key: ${{ .secrets.api_key }}
```
## Behavior

### Concurrency Control
Each workflow uses its own concurrency settings independently. Called workflows do not inherit parent concurrency groups.
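As a sketch of what this means in practice: a called workflow's own concurrency settings apply even when it is invoked from a parent. The `concurrency.group` field shown below is an assumption for illustration; check the Schema Reference for the exact field names.

```yaml
# Called workflow: its own concurrency group applies,
# regardless of any group configured by the caller.
id: deploy-model
concurrency:
  group: deploy-prod # hypothetical field; see Schema Reference
trigger:
  callable: true
```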
### Failure Propagation

When a called workflow fails, the parent job fails, triggering standard failure handling (`cancel_on_failure`, `depends_on`).
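For example, using the fields already shown on this page: if the called workflow below fails, its parent `deploy` job fails, and jobs that depend on it are handled by the standard failure rules. The `notify.sh` script is hypothetical.

```yaml
jobs:
  deploy:
    workflow:
      name: deploy-model # if this called workflow fails, the deploy job fails
  notify:
    depends_on: [deploy] # subject to standard failure handling when deploy fails
    steps:
      - name: Notify
        run: ./notify.sh # hypothetical script
```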
## Example: Parallel Processing

Callable workflow:
```yaml
id: process-dataset

trigger:
  callable: true
  inputs:
    dataset_name:
      type: string
      required: true

jobs:
  process:
    steps:
      - name: Download
        download:
          source: drive://datasets/raw/${{ .inputs.dataset_name }}/
          path: ./data/
      - name: Process
        run: python process.py --input ./data/
      - name: Upload
        upload:
          path: ./data/processed/
          destination: drive://datasets/processed/${{ .inputs.dataset_name }}/
```
Caller (parallel processing):
```yaml
id: multi-dataset-processing
cancel_on_failure: false

jobs:
  process-users:
    workflow:
      name: process-dataset # Short ID of the callable workflow above
      inputs:
        dataset_name: "users"

  process-transactions:
    workflow:
      name: process-dataset
      inputs:
        dataset_name: "transactions"

  train:
    depends_on: [process-users, process-transactions]
    steps:
      - name: Train model
        run: python train.py --data drive://datasets/processed/
```