# Configuration
The configuration parameters can be specified either on the command line or in a properties file. The properties file makes it simpler to maintain and share settings across instances, for example, if running in a cluster.
## Create the properties file
To create the properties file, use the following command:
java -Dcreateproperties=true -jar ai.h2o.mojos.jar > myproduction.properties
This command generates a properties file populated with defaults that you can adjust for your environment. To use this file at runtime, add the following command-line argument:
-Dpropertiesfilename=myproduction.properties
If no propertiesfilename is set, the default properties file H2OaiRestServer.properties is used. If no properties file exists, the program defaults are used.
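For illustration, the generated properties file contains key = value pairs that can be edited in place. The fragment below is a minimal sketch using example values taken from the parameter table later in this section; the generated file will contain many more entries:

```properties
# myproduction.properties -- illustrative values only
ModelDirectory = /H2O.ai/Models/
modelname = pipeline.mojo
scorelog = true
extendedoutput = true
```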
## Create Autogen
The server can be run to generate notebooks and then terminate, instead of serving as an endpoint. This provides an easy way to create templates.
Parameter | Default | Description |
---|---|---|
createautogen | | Name of the model to use for notebook generation |
createnotebook | SQL | Type of notebook to generate |
notebookfilename | Modelname+type | Overrides the name of the generated output file |
The current directory is used as the location of the model unless the ModelDirectory parameter is included on the command line.
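Putting these parameters together, an autogen run might look like the following. This is a sketch, assuming the same jar used above and that the autogen parameters are passed as -D system properties like the other options in this document:

```
java -Dcreateautogen=pipeline.mojo -Dcreatenotebook=SQL -jar ai.h2o.mojos.jar
```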
## Parameters
The properties file lets you tailor the following settings to your environment.
Parameter | Description |
---|---|
ModelDirectory | This is the location of the Driverless AI models (.mojo) and the H2O-3 models (.zip). In Windows environments, use a double backslash (\\) for the path. ModelDirectory = /H2O.ai/Models/ |
Modelname | The default model name to use if no model name is passed with a scoring request. modelname = pipeline.mojo |
SecureEndPoints | Defines the start of a URI for which authentication is required. SecureEndPoints = /modelsecure** |
SecureEndPointsAllowedIP | This is an IP address for which requests are always allowed regardless of the SecureEndPoints setting. This is useful for specific management requests from a defined IP. The default is for any request to be accepted. SecureEndPointsAllowedIP = 0.0.0.0/0 |
Restuser | This is the default username for operations that require standard user access. restuser = h2o |
Restpass | This is the encrypted password for the restuser. restpass = aDJvMTIz |
Adminuser | The username required for administrative requests to the server. adminuser = h2oadmin |
adminpass | This is the encrypted password for the admin user. adminpass = aDJvMTIz |
ScorerHostIP | If this instance should forward recipes to a specific host, this is the forwarding destination. ScorerHostIP = 127.0.0.1 |
ScorerHostPort | Used in conjunction with the ScorerHostIP for supporting recipes. ScorerHostPort = 9191 |
ScorerRecycle | If ScorerPool is set to zero (default) and ScorerRecycle is true (default), a single Python model is executed and is dynamically started and stopped as needed. ScorerRecycle = true |
ScorerPool | The number of Python HTTP servers running in this instance. This is also the port index added to ScorerHostPort for connections, which allows multiple models to be started at the same time, with 0 being the default value. A single instance with ScorerRecycle set to true is usually more maintainable, except when handling concurrent requests to different models. ScorerPool = 0 |
Scorelog | If enabled (true), then when a prediction is made, the model name, UUID, input variables, and predictions are written to standard out as part of an audit trail. scorelog = true |
Modelmonitor | If enabled (true), then when a prediction is made, a specially formatted line is written to the standard out file, which is then used by the Model Monitoring component. modelmonitor = true |
Extendedoutput | If enabled (true), then all the predictions are returned, otherwise only the first result is returned. extendedoutput = true |
Explainability | If enabled, then the prediction will also return klime reason codes if the model supports returning them. explainability = false |
ExplainabilityShapley | If set to true and if the model supports this function, then the shapley reason codes are returned with the prediction. explainabilityShapley = false |
ExplainabilityShapContrib | If set to true and if the model supports this function, then the shapley reason codes are returned with the prediction for Driverless AI models. explainabilityShapContrib = false |
ExplainabilityUsesFloats | Some older models use floats for numeric features; enable this setting for those models. explainabilityUsesFloats = true |
DetectShift | Enables checking whether the input row differs from the training data. This requires the experiment summary to be available on the REST endpoint. DetectShift = false |
DetectShiftTolerance | Acceptable difference, expressed as a percentage. Only valid if DetectShift is enabled. DetectShiftTolerance = 0.0f |
DetectShiftEnforce | If DetectShift is enabled and the input row's difference is equal to or larger than the DetectShiftTolerance value, a blank response is sent instead of the prediction. DetectShiftEnforce = false |
Cachefeatures | If the model is already loaded, its feature list can be reused. This saves processing time at the expense of the memory needed to hold the list of features. cachefeatures = true |
UnloadModels | If memory is needed to load a new model but memory is limited, and this parameter is set to true, the model with the lowest use count is unloaded. UnloadModels = true |
RetryOnUnload | If enabled, the request that triggered the unloading of models is retried. Three attempts are made, after which an exception (HTTP 500) is returned to the caller. RetryOnUnload = true |
PreloadModels | A comma-separated list of models to load at startup; otherwise, models are loaded on demand. PreloadModels = pipeline.mojo,mymodel.mojo |
InputSeperator | If specified, this will be used as a feature separator. This is usually not set. InputSeperator = , |
SecureModel | The REST server can monitor for attacks on the model, such as probing. SecureModel = false |
SecureModelLogging | If set to true, the request is logged to standard out. SecureModelLogging = false |
SecureModelFeatureChangeDistance | Minimum value distance between feature request inputs. SecureModelFeatureChangeDistance = 1000 |
SecureModelFeatureChangeNumber | Minimum number of features that should change per request. SecureModelFeatureChangeNumber = 2 |
SecureModelFeatureChangePercent | Minimum feature change for this request. SecureModelFeatureChangePercent = 8 |
SecureModelNumberOfReqests | Number of requests from a single source IP. SecureModelNumberOfReqests = 50 |
SecureModelAction | If true, a blank result will be returned if suspicious requests are received. SecureModelAction = false |
SecureModelTrackingAgeMS | How long to remember requests from the same source IP. Default is 1 hour. SecureModelTrackingAgeMS = 3600000 |
SecureModelTrackingSize | Maximum number of source locations to remember. SecureModelTrackingSize = 1000 |
SecureModelMSBetweenReqests | Minimum time between requests in milliseconds from the same source. SecureModelMSBetweenReqests = 4000 |
setConvertInvalidNumbersToNa | Specific setting for H2O-3 models. setConvertInvalidNumbersToNa = false |
setConvertUnknownCategoricalLevelsToNa | Specific setting for H2O-3 models. setConvertUnknownCategoricalLevelsToNa = false |
IgnoreEmptyStrings | For both Driverless AI and H2O-3 models, if the input is parsed but contains an empty string, this specifies whether the model should receive an empty string. IgnoreEmptyStrings = false |
ResultLength | Truncates the returned prediction to the specified number of digits. Using zero (0) disables truncation. resultLength = 0 |
IncludeUUID | When writing scoring details to standard out, specifies whether to include the model's UUID. includeUUID = true |
numberWorkers | A Snowflake-specific setting that controls the number of threads used for scoring. numberWorkers = 10 |
SnowflakeAllowedFunctions | A Snowflake-specific setting that ensures only requests from this function are accepted. SnowflakeAllowedFunctions = H2O.* |
EnableModelBuild | A Snowflake-specific setting that enables model building for this instance. EnableModelBuild = True |
SecureModelAllowedAgent | A Snowflake-specific setting that accepts requests only from these specific source locations. SecureModelAllowedAgent = .*snowflake.* |
SF_function | A Snowflake-specific setting that defines the template for returning a populated SQL function create call. sf_function = H2OPredicit sf_url = <ADD_AWS_API_GATEWAY_NAME>.execute-api.us-east-1.amazonaws.com/<ADD_AWS_API_GATEWAY_STAGE_NAME> |
RemoveQuotes | If the input features are in quotes, this specifies whether the quotes should be removed before scoring. RemoveQuotes = True |
AutoGen | Specifies whether auto generation of scoring code templates is enabled. Setting Autogen to false also disables OpenAPI generation via http://hostname:port/v3/api-docs/. Autogen = True |
AutogenHost | The endpoint used to generate the code. This has a separate interface for isolation. AutogenHost = 127.0.0.1:8080 |
AutogenType | Allows or restricts which templates are available. For example, you may want only Snowflake templates to be available. AutogenType = <valid regex expression>. Examples: .* (the default; lists all templates); .*Snowflake.* (returns only Snowflake templates); ((?!PowerBI).)*$ (does not return templates for PowerBI). Call curl -u h2o:h2o123 -s http://127.0.0.1:8080/autogen?notebook=listall to get all the available template types. |
TempWorkingDir | Defines the location that eScorer uses for operations that require temporary files to be written and accessed. Note that this setting must end with a / , for example, /opt/H2O/temp/ . If no value is set and /opt/H2O/temp/ does not exist, the JVM runtime setting java.io.tmpdir is used. TempWorkingDir = <valid directory location> |
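Several of the parameters above can be illustrated concretely. The sketch below assumes the sample restpass value is plain Base64 (the table calls it "encrypted", so verify the real encoding against the product documentation) and uses hypothetical template names to demonstrate the AutogenType regexes:

```python
import base64
import ipaddress
import re

# restpass / adminpass: the sample value "aDJvMTIz" is the Base64
# encoding of "h2o123" (an observation about the example values, not a
# statement about the server's actual password scheme).
print(base64.b64encode(b"h2o123").decode("ascii"))  # aDJvMTIz

# SecureEndPointsAllowedIP: the default 0.0.0.0/0 matches every IPv4
# address, so every source IP bypasses the SecureEndPoints check.
allow = ipaddress.ip_network("0.0.0.0/0")
print(ipaddress.ip_address("203.0.113.7") in allow)  # True

# AutogenType: filtering hypothetical template names with the example
# regexes from the table.
templates = ["SnowflakeSQL", "PowerBI", "PythonNotebook"]
print([t for t in templates if re.fullmatch(r".*Snowflake.*", t)])
# ['SnowflakeSQL']
print([t for t in templates if re.fullmatch(r"((?!PowerBI).)*$", t)])
# ['SnowflakeSQL', 'PythonNotebook']
```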