Available in: GBM, DRF, Deep Learning, GLM, GAM, Naïve-Bayes, K-Means, XGBoost, AutoML
N-fold cross-validation is used to validate a model internally, i.e., to estimate the model's performance without having to sacrifice a validation split. When building cross-validated models, H2O builds nfolds cross-validation models plus one overarching model trained on all of the training data. For example, if you specify nfolds=5, then 6 models are built: the first 5 are the cross-validation models, each trained on 80% of the training data, and each produces a frame of holdout predictions for its own fold. You can save each of these prediction frames by enabling the keep_cross_validation_predictions option. Note that this option is disabled by default.
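The scheme above can be sketched in plain Python. This is a schematic illustration of the n-fold layout (nfolds cross-validation models plus one overarching model, with holdout predictions assembled per fold), not H2O's actual implementation; the toy "model" that just averages its training targets is a stand-in for any real learner.

```python
def mean_model(rows):
    """Toy 'model': training just computes the mean of the targets."""
    return sum(rows) / len(rows)

def cross_validate(y, nfolds):
    n = len(y)
    # Assign rows to folds round-robin (H2O supports several assignment schemes).
    folds = [list(range(k, n, nfolds)) for k in range(nfolds)]
    holdout_preds = [None] * n
    cv_models = []
    for k in range(nfolds):
        held_out = set(folds[k])
        train_idx = [i for i in range(n) if i not in held_out]
        # Each cross-validation model trains on (nfolds-1)/nfolds of the data
        # (80% when nfolds=5) ...
        model = mean_model([y[i] for i in train_idx])
        cv_models.append(model)
        # ... and predicts only on its held-out fold.
        for i in folds[k]:
            holdout_preds[i] = model
    # The overarching model is trained on all of the training data.
    final_model = mean_model(y)
    return cv_models, final_model, holdout_preds

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
cv_models, final_model, holdout_preds = cross_validate(y, nfolds=5)
print(len(cv_models) + 1)  # 6 models total when nfolds=5
```

With nfolds=5 the function builds 6 models in total, and every row receives exactly one holdout prediction, made by the one cross-validation model that did not see that row during training.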
More information is available in the Cross-Validation section.