Automatically generate a keystore for H2O Flow SSL.
This option can be set to either "internal" or "external". When set to "external", the H2O context is created by connecting to an existing H2O cluster; otherwise it creates an H2O cluster living inside Spark, which means that each Spark executor will run one H2O instance.
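As a minimal sketch of the two modes, the setting can be expressed as a plain Spark configuration entry (the property name spark.ext.h2o.backend.cluster.mode is taken from Sparkling Water documentation; verify it against your version):

```python
# Hedged sketch: the two backend modes as they would be passed to Spark.
# The property name below is an assumption; check your Sparkling Water docs.
def backend_conf(mode):
    if mode not in ("internal", "external"):
        raise ValueError("mode must be 'internal' or 'external'")
    return {"spark.ext.h2o.backend.cluster.mode": mode}

# "internal": H2O nodes run inside Spark executors.
# "external": connect to an already running H2O cluster.
print(backend_conf("internal"))
```

In practice this entry would be supplied via SparkConf or a --conf flag at submit time.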
Interval used to ping and check the H2O backend status.
Enable or disable the web UI on the H2O client node.
Extra properties passed to H2O client during startup.
Allows overriding the base URL of the Flow UI, including the scheme, which is shown to the user.
Location of iced directory for the driver instance.
Ignore SPARK_PUBLIC_DNS setting on the H2O client.
IP address of the H2O client node.
Location of log directory for the driver instance.
H2O log level for the client running in the Spark driver.
Subnet selector for the H2O client - if the mask is specified, the Spark network setup is not taken into account.
Port on which H2O client publishes its API.
Print detailed messages to the client's stdout.
Exact client port on which to access the web UI.
Configuration property - name of the H2O cloud.
Configuration property - timeout for cloud up.
Enable/Disable the listener which kills the H2O cloud when there is a change in the underlying cluster's topology.
H2O's URL context path.
Timeout in milliseconds specifying how often the H2O backend checks whether the Sparkling Water client (either H2O client or REST) is connected
Enable/Disable exit on unsupported Spark parameters.
Path to the Flow directory.
Extra http headers for Flow UI
Decide whether Scala cells run synchronously or asynchronously.
Maximum number of parallel Scala cell jobs.
Enable hash login.
Offset between the API (web) port and the internal communication port: api_port + port_offset = h2o_port.
Secure internal connections with automatically generated credentials.
Path to Java KeyStore file.
Alias for the certificate in the keystore used to secure Flow.
Password for Java KeyStore file.
Enable Kerberos login.
Enable LDAP login.
Login configuration file.
If a scoring MOJO instance is not used within a Spark executor JVM for a given timeout in milliseconds, it's evicted from executor's cache.
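The eviction rule described above is a simple idle-timeout check; a hedged sketch (the names and the cache shape are illustrative, not the actual Sparkling Water internals):

```python
import time

# Sketch of idle-timeout eviction: a cached MOJO entry is dropped once it
# has been unused longer than timeout_ms. `cache` maps a model id to its
# last-used timestamp in milliseconds; all names here are assumptions.
def evict_idle(cache, timeout_ms, now_ms=None):
    now_ms = int(time.time() * 1000) if now_ms is None else now_ms
    return {mojo: last for mojo, last in cache.items()
            if now_ms - last <= timeout_ms}

cache = {"model_a": 1_000, "model_b": 9_000}
# model_a has been idle for 9_000 ms > 5_000 ms, so it is evicted.
print(evict_idle(cache, timeout_ms=5_000, now_ms=10_000))  # -> {'model_b': 9000}
```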
Extra properties passed to H2O nodes during startup.
Location of log directory for remote nodes.
H2O internal log level for launched remote nodes.
Subnet selector for H2O nodes running inside executors - if the mask is specified, the Spark network setup is not taken into account.
Configuration property - base port used for individual H2O nodes configuration.
Set how often, in seconds, stack traces are taken on each H2O node.
Limit on the number of threads used by H2O; the default of -1 means unlimited.
Password for the client authentication.
Enable/Disable the Sparkling Water REPL.
Number of executors started at the start of H2O services; the default is 1.
Enable/Disable check for Spark version.
Path to Java KeyStore file used for the internal SSL communication.
Username for the cluster and client authentication.