Version: 1.2.0

Helm values

These are the default values used to configure Feature Store Helm charts:
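A minimal override file can be passed to Helm with `-f`. The values below are illustrative only (the hostname, credentials, and chart reference are placeholders, not shipped defaults):

```yaml
# overrides.yaml -- illustrative values only
global:
  onlineEnabled: true
  uiEnabled: true
core:
  database:
    dsn: jdbc:postgresql://postgres.example.com:5432/featurestore?user=fs&password=changeme
```

Apply it with, for example, `helm upgrade --install feature-store <chart> -f overrides.yaml`.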

## Global Chart parameters
##
global:
## @param global.onlineEnabled Deploy Feature-Store Online services
## @param global.uiEnabled Deploy Feature-Store Website service
## @param global.ingressEnabled Enable ingress record generation for Feature-Store
## @param global.telemetryEnabled Enable telemetry generation for Feature-Store
## @param global.namespaceOverride Override default namespace - .Release.Namespace
## @param global.log4jConfigMapOverride Use different config map with log4j configuration for all components
## @param global.licenceKey Licence key required to use Feature Store
## @param global.tiers.freemium Override default freemium definition
##
  onlineEnabled: false
  uiEnabled: false
  ingressEnabled: false
  telemetryEnabled: false
  namespaceOverride:
  log4jConfigMapOverride:
  licenceKey:
  tiers:
    freemium:

## @param global.cloud.discovery.enabled Register Feature Store in Cloud Discovery
## @param global.cloud.discovery.grpcApiPublicUri Public URI of the GRPC API service
## @param global.cloud.discovery.restApiCorePublicUri Public URI of the REST API of core service
## @param global.cloud.discovery.restApiOnlinePublicUri Public URI of the REST API of online service
## @param global.cloud.discoveryServiceUrl H2O.ai Discovery Service URL
## @param global.cloud.appStoreServiceName H2O.ai App Store Service name
## @param global.cloud.h2oGpteServiceName H2O.ai GPTe Service name
## @param global.cloud.apiSecured Flag indicating whether the Feature Store external URL uses SSL (it should be true in most cases); used for generating notebook examples
## @param global.cloud.h2oGPTe.enabled Flag should be set to true if Feature Store should integrate with H2O GPTe
## @param global.cloud.h2oGPTe.url URI of the H2O GPTe API service
## @param global.cloud.h2oGPTe.apiKey API Key for the H2O GPTe API service. If empty, the user access token will be used.
## @param global.cloud.drive.enabled Flag should be set to true if Feature Store should integrate with H2O Drive
## @param global.cloud.drive.stsEndpointUri URI of H2O Drive STS service
## @param global.cloud.drive.region Region used when requesting temporary credentials from Drive STS service
## @param global.cloud.drive.roleArn RoleArn used when requesting temporary credentials from Drive STS service
## @param global.cloud.drive.roleSessionName RoleSessionName used when requesting temporary credentials from Drive STS service
## @param global.cloud.drive.durationSeconds Validity duration for the requested temporary credentials
## @param global.cloud.drive.endpointUri URI of H2O Drive service
## @param global.cloud.drive.bucketNamePrefix First part of a bucket name that will be used for accessing Drive objects (keys)
##
  cloud:
    discovery:
      enabled: false
      grpcApiPublicUri:
      restApiCorePublicUri:
      restApiOnlinePublicUri:
    discoveryServiceUrl:
    appStoreServiceName: appstore
    h2oGpteServiceName: h2ogpte
    apiSecured: true
    h2oGPTe:
      enabled: false
      url:
      apiKey:
    drive:
      enabled: false
      stsEndpointUri:
      region:
      roleArn:
      roleSessionName:
      durationSeconds: 3600
      endpointUri:
      bucketNamePrefix: users-
## @param global.imagePullSecrets Docker registry secret names as an array
##
  imagePullSecrets: []
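## Example (illustrative values, not defaults): registering Feature Store in Cloud
## Discovery and enabling H2O Drive temporary credentials could look like this;
## the URIs and role ARN below are placeholders:
##   cloud:
##     discovery:
##       enabled: true
##       grpcApiPublicUri: https://feature-store-grpc.example.com
##     drive:
##       enabled: true
##       stsEndpointUri: https://drive-sts.example.com
##       roleArn: arn:aws:iam::123456789012:role/drive-access
##       durationSeconds: 3600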

## @param global.extraTrustedCertificates.configMapName Config Map name to store additional trusted CA certificates
## @param global.extraTrustedCertificates.caBundleKey Config Map key name to store CA bundle, default - ca_bundle.pem
##
  extraTrustedCertificates:
    configMapName:
    caBundleKey:

## @param global.kafka.username Kafka username
## @param global.kafka.password Kafka password
  kafka:
    username:
    password:

## @param global.online.databaseBackend Database Provider for online feature store [redis, postgres]
## @param global.online.redis.authMethod Authentication method for redis. Available methods are [iam, password]. For IAM, redis version 7 or above is required
## @param global.online.redis.nodes List of redis nodes in format "host:port". Cluster connection is created in case more than one node is provided
## @param global.online.redis.timeout Timeout to finish redis commands. Expressed in ISO 8601-1 duration format. (default 10s)
## @param global.online.redis.database Redis database
## @param global.online.redis.secret Redis secret used by all nodes
## @param global.online.redis.isCluster Force cluster connection in case only one node is provided in @param global.online.redis.nodes
## @param global.online.redis.isSecure Enable SSL/TLS connection to the cluster
## @param global.online.redis.iam.userId IAM-enabled user id in AWS ElastiCache
## @param global.online.redis.iam.replicationGroupId AWS ElastiCache replication group id
## @param global.online.redis.iam.region Region where AWS ElastiCache is deployed
## @param global.online.redis.pool.maxTotal Maximum number of connections in the pool (default 8)
## @param global.online.redis.pool.maxIdle Maximum number of idle connections in the pool (default 8)
## @param global.online.postgres.dsn JDBC Postgres connection string (https://jdbc.postgresql.org/documentation/use/#connecting-to-the-database). Connection string must include authentication details.
## @param global.online.postgres.authMethod Authentication method for Postgres. Available methods are [iam, password]
## @param global.online.postgres.housekeepingInterval The duration specifying how often the job removing invalid records from online storage runs. Expressed in ISO 8601-1 duration format
## @param global.online.postgres.maximumPoolSize Controls the maximum number of connections that HikariCP will keep in the pool, including both idle and in-use connections.
## @param global.online.postgres.minimumIdle Controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value, HikariCP will make a best effort to restore them quickly and efficiently.
## @param global.online.postgres.idleTimeout Controls the maximum amount of time (in milliseconds) that a connection is allowed to sit idle in the pool. Whether a connection is retired as idle or not is subject to a maximum variation of +30 seconds, and an average variation of +15 seconds. A connection will never be retired as idle before this timeout. A value of 0 means that idle connections are never removed from the pool.
## @param global.online.signatureVerificationKey Key used for generating/validating message signatures during online ingestion
  online:
    databaseBackend:
    redis:
      authMethod: "password"
      nodes: []
      database: 0
      secret:
      isCluster:
      isSecure:
      iam:
        userId:
        replicationGroupId:
        region:
      pool:
        maxTotal:
        maxIdle:
    postgres:
      dsn:
      authMethod: "password"
      housekeepingInterval: PT1H
      maximumPoolSize:
      minimumIdle:
      idleTimeout:
    signatureVerificationKey:
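## Example (illustrative values, not defaults): a TLS-secured three-node Redis
## cluster backend for the online store could be configured as follows; the
## hostnames and secret are placeholders:
##   online:
##     databaseBackend: redis
##     redis:
##       authMethod: password
##       nodes: ["redis-0.example.com:6379", "redis-1.example.com:6379", "redis-2.example.com:6379"]
##       isCluster: true
##       isSecure: true
##       secret: <redis-password>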

## @param global.storage.commonConfig.tempCredentialsExpiration Duration representing maximal validity of temporary credentials generated for interaction with Feature Store storage. Expressed in ISO 8601-1 duration format
## @param global.storage.offline.username Backend Storage username
## @param global.storage.offline.password Backend Storage password
## @param global.storage.offline.backend Offline Storage Provider [s3, datalakegen2, snowflake, gcp]
## @param global.storage.offline.s3.authMethod Authentication method for storage. Available methods are [iam, password]
## @param global.storage.offline.s3.endpoint S3 Endpoint URL (required if storage backend is S3)
## @param global.storage.offline.s3.region S3 Region (required if storage backend is S3)
## @param global.storage.offline.s3.sessionRoleArn Role ARN to assume temporary access to S3 (required if storage backend is S3 and Minio is not used as storage proxy)
## @param global.storage.offline.s3.kmsEnabled Flag should be set to true if storage bucket in S3 is encrypted by KMS
## @param global.storage.offline.gcp.projectId Project Id for GCP Storage
## @param global.storage.offline.gcp.credentials Content of gcp storage file (encoded in base64) for offline storage
## @param global.storage.offline.datalakegen2.authMethod Authentication method for storage. Available methods are [password, ami]
## @param global.storage.offline.datalakegen2.ami.tenantId Tenant Id for Azure Managed Identities (Optional)
## @param global.storage.offline.datalakegen2.ami.endpoint Endpoint for Azure Managed Identities (Optional)
## @param global.storage.offline.datalakegen2.ami.clientId Client Id for Azure Managed Identities (Optional)
## @param global.storage.offline.datalakegen2.ami.accountName Account name required when using Azure Managed Identities
## @param global.storage.offline.datalakegen2.ami.aadpodidbindingValue Value for label aadpodidbinding required when using Azure Managed Identities
## @param global.storage.offline.snowflake.authMethod Authentication method for storage. Available methods are [password, privateKey]
## @param global.storage.offline.snowflake.endpoint Snowflake Endpoint URL (required if storage backend is Snowflake)
## @param global.storage.offline.snowflake.account Snowflake account
## @param global.storage.offline.snowflake.warehouse Snowflake warehouse
## @param global.storage.offline.snowflake.database Snowflake database name (required if storage backend is Snowflake)
## @param global.storage.offline.snowflake.role Snowflake role name
## @param global.storage.offline.snowflake.readOnlyViewSparkUser Snowflake user name used by Spark for read-only view retrieval
## @param global.storage.offline.snowflake.readOnlyViewSparkPassword Snowflake user password used by Spark for read-only view retrieval
## @param global.storage.offline.snowflake.readOnlyViewSparkPrivateKey Snowflake user private key used by Spark for read-only view retrieval
## @param global.storage.offline.snowflake.readOnlyViewSparkPassphrase Snowflake user passphrase for the private key (if encrypted) used by Spark for read-only view retrieval
## @param global.storage.offline.snowflake.privateKey Backend Storage private key
## @param global.storage.offline.snowflake.passphrase Backend Storage passphrase for the private key (if encrypted)
## @param global.storage.offline.containers.root Name of the root container. If defined, all sub-containers are handled as folders in that bucket, otherwise they are handled as individual buckets. Not used in Snowflake offline storage; use global.storage.offline.snowflake.database instead
## @param global.storage.offline.containers.data Name of the data container. If global.storage.offline.containers.root is defined, it represents a folder within that container, otherwise it represents a separate container.
## @param global.storage.offline.storageIdConversionEnabled Enable conversion of resource ids (project, feature set) to storage ids (should be enabled if Feature Store was used prior to version 0.15.0)
## @param global.storage.supporting.username Supporting Storage username
## @param global.storage.supporting.password Supporting Storage password
## @param global.storage.supporting.backend Storage Provider [datalakegen2, s3, gcp]
## @param global.storage.supporting.s3.authMethod Authentication method for supporting storage. Available methods are [iam, password]
## @param global.storage.supporting.s3.endpoint S3 Endpoint URL (required if storage backend is S3)
## @param global.storage.supporting.s3.region S3 Region (required if storage backend is S3)
## @param global.storage.supporting.s3.sessionRoleArn Role ARN to assume temporary access to S3 (required if storage backend is S3 and Minio is not used as storage proxy)
## @param global.storage.supporting.s3.kmsEnabled Flag should be set to true if storage bucket in S3 is encrypted by KMS
## @param global.storage.supporting.gcp.projectId Project Id for GCP Storage
## @param global.storage.supporting.gcp.credentials Content of gcp storage file (encoded in base64) for supporting storage
## @param global.storage.supporting.datalakegen2.authMethod Authentication method for supporting storage. Available methods are [password, ami]
## @param global.storage.supporting.datalakegen2.ami.tenantId Tenant Id for Azure Managed Identities (Optional)
## @param global.storage.supporting.datalakegen2.ami.endpoint Endpoint for Azure Managed Identities (Optional)
## @param global.storage.supporting.datalakegen2.ami.clientId Client Id for Azure Managed Identities (Optional)
## @param global.storage.supporting.datalakegen2.ami.accountName Account name required when using Azure Managed Identities
## @param global.storage.supporting.datalakegen2.ami.aadpodidbindingValue Value for label aadpodidbinding required when using Azure Managed Identities
## @param global.storage.supporting.containers.root Name of the root container. If defined, all sub-containers are handled as folders in that bucket, otherwise will be handled as individual buckets
## @param global.storage.supporting.containers.artifacts Name of the artifacts container. If global.storage.supporting.containers.root is defined, it represents folder within this container, otherwise represents separate container.
## @param global.storage.supporting.containers.temp Name of the temporary container. If global.storage.supporting.containers.root is defined, it represents folder within this container, otherwise represents separate container.
## @param global.storage.supporting.containers.online Name of the data bucket/folder for online to offline storage
## @param global.storage.supporting.containers.preview Name of the preview container. If global.storage.supporting.containers.root is defined, it represents folder within this container, otherwise represents separate container.
## @param global.storage.supporting.containers.retrieve Name of the retrieve container. If global.storage.supporting.containers.root is defined, it represents folder within this container, otherwise represents separate container.

  storage:
    commonConfig:
      tempCredentialsExpiration: PT1H
    offline:
      username:
      password:
      backend:
      s3:
        authMethod: "password"
        endpoint:
        region:
        sessionRoleArn:
        kmsEnabled: false
      gcp:
        projectId:
        credentials:
      datalakegen2:
        authMethod: "password"
        ami:
          tenantId:
          endpoint:
          clientId:
          accountName:
          aadpodidbindingValue:
      snowflake:
        authMethod: "password"
        privateKey:
        passphrase:
        endpoint:
        account:
        database:
        warehouse:
        role:
        readOnlyViewSparkUser:
        readOnlyViewSparkPassword:
        readOnlyViewSparkPrivateKey:
        readOnlyViewSparkPassphrase:
      containers:
        root:
        data:
      storageIdConversionEnabled: false
    supporting:
      username:
      password:
      backend:
      s3:
        authMethod: "password"
        endpoint:
        region:
        sessionRoleArn:
        kmsEnabled: false
      gcp:
        projectId:
        credentials:
      datalakegen2:
        authMethod: "password"
        ami:
          tenantId:
          endpoint:
          clientId:
          accountName:
          aadpodidbindingValue:
      containers:
        root:
        artifacts:
        temp:
        online:
        preview:
        retrieve:
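## Example (illustrative values, not defaults): an S3 offline storage using IAM
## authentication with a single root bucket; the endpoint, region, role ARN,
## and bucket names are placeholders:
##   storage:
##     offline:
##       backend: s3
##       s3:
##         authMethod: iam
##         endpoint: https://s3.us-east-1.amazonaws.com
##         region: us-east-1
##         sessionRoleArn: arn:aws:iam::123456789012:role/feature-store-offline
##       containers:
##         root: feature-store-offline
##         data: data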

## @param global.config.idpMetadataUri URI to openid-configuration file
## @param global.config.notificationChannels Name of the Notification Channels where notifications will be sent. Allowed values [kafka,logs]. If empty, notifications are disabled
## @param global.config.messaging.kafka.healthCheckRequestTimeout Max request time to execute a request to the kafka broker in health check. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicModificationEnabled If enabled, topics will be created/updated automatically
## @param global.config.messaging.kafka.authMethod Authentication method for Kafka. Available methods are [iam, password]
## @param global.config.messaging.kafka.tlsEnabled Use a secure connection to access Kafka
## @param global.config.messaging.kafka.bootstrapServers Kafka Bootstrap Servers
## @param global.config.messaging.kafka.topics.notifications Kafka Notifications Topic Name
## @param global.config.messaging.kafka.topics.jobUpdates Kafka Job Updates Topic Name
## @param global.config.messaging.kafka.topics.onlineInput Kafka Online Input Topic Name
## @param global.config.messaging.kafka.topics.onlineOfflineIngest Kafka Online Offline Ingest Topic Name
## @param global.config.messaging.kafka.topics.onlineStorageCleanup Kafka Online Storage Cleanup Topic Name
## @param global.config.messaging.kafka.topics.modificationEvents Kafka Modification Events Topic Name
## @param global.config.messaging.kafka.topics.telemetry Kafka Telemetry Events Topic Name
## @param global.config.messaging.kafka.topics.telemetryAggregated Kafka Telemetry Aggregated Events Topic Name
## @param global.config.messaging.kafka.topics.uiNotifications Kafka UI Notifications Topic Name
## @param global.config.messaging.kafka.topicsConfig.notifications.numPartitions Number of partitions for notifications topic
## @param global.config.messaging.kafka.topicsConfig.notifications.replicationFactor Replication factor for notifications topic
## @param global.config.messaging.kafka.topicsConfig.notifications.retentionPolicy Retention policy for notifications topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.jobUpdates.numPartitions Number of partitions for jobUpdates topic
## @param global.config.messaging.kafka.topicsConfig.jobUpdates.replicationFactor Replication factor for jobUpdates topic
## @param global.config.messaging.kafka.topicsConfig.jobUpdates.retentionPolicy Retention policy for jobUpdates topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.jobUpdates.consumer.concurrency kafka consumer concurrency (default 1)
## @param global.config.messaging.kafka.topicsConfig.onlineInput.numPartitions Number of partitions for onlineInput topic
## @param global.config.messaging.kafka.topicsConfig.onlineInput.replicationFactor Replication factor for onlineInput topic
## @param global.config.messaging.kafka.topicsConfig.onlineInput.retentionPolicy Retention period for onlineInput topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.onlineInput.consumer.concurrency kafka consumer concurrency (default 1)
## @param global.config.messaging.kafka.topicsConfig.onlineOfflineIngest.numPartitions Number of partitions for onlineOfflineIngest topic
## @param global.config.messaging.kafka.topicsConfig.onlineOfflineIngest.replicationFactor Replication factor for onlineOfflineIngest topic
## @param global.config.messaging.kafka.topicsConfig.onlineOfflineIngest.retentionPolicy Retention policy for onlineOfflineIngest topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.onlineOfflineIngest.consumer.concurrency kafka consumer concurrency (default 1)
## @param global.config.messaging.kafka.topicsConfig.onlineStorageCleanup.numPartitions Number of partitions for onlineStorageCleanup topic
## @param global.config.messaging.kafka.topicsConfig.onlineStorageCleanup.replicationFactor Replication factor for onlineStorageCleanup topic
## @param global.config.messaging.kafka.topicsConfig.onlineStorageCleanup.retentionPolicy Retention policy for onlineStorageCleanup topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.modificationEvents.numPartitions Number of partitions for modificationEvents topic
## @param global.config.messaging.kafka.topicsConfig.modificationEvents.replicationFactor Replication factor for modificationEvents topic
## @param global.config.messaging.kafka.topicsConfig.modificationEvents.retentionPolicy Retention policy for modificationEvents topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.modificationEvents.consumer.concurrency kafka consumer concurrency (default 1)
## @param global.config.messaging.kafka.topicsConfig.telemetry.numPartitions Number of partitions for telemetry topic
## @param global.config.messaging.kafka.topicsConfig.telemetry.replicationFactor Replication factor for telemetry topic
## @param global.config.messaging.kafka.topicsConfig.telemetry.retentionPolicy Retention policy for telemetry topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.telemetryAggregated.numPartitions Number of partitions for telemetry aggregated topic
## @param global.config.messaging.kafka.topicsConfig.telemetryAggregated.replicationFactor Replication factor for telemetry aggregated topic
## @param global.config.messaging.kafka.topicsConfig.telemetryAggregated.retentionPolicy Retention policy for telemetry aggregated topic. Expressed in ISO 8601-1 duration format
## @param global.config.messaging.kafka.topicsConfig.uiNotifications.numPartitions Number of partitions for ui notifications topic
  config:
    idpMetadataUri:
    notificationChannels: []
    messaging:
      kafka:
        healthCheckRequestTimeout: "PT3S"
        topicModificationEnabled: false
        authMethod: "password"
        tlsEnabled: true
        bootstrapServers:
        topics:
          notifications:
          jobUpdates:
          onlineInput:
          onlineOfflineIngest:
          onlineStorageCleanup:
          modificationEvents:
          telemetry:
          telemetryAggregated:
          uiNotifications:
        topicsConfig:
          notifications:
            numPartitions:
            replicationFactor:
            retentionPolicy:
          jobUpdates:
            numPartitions:
            replicationFactor:
            retentionPolicy:
            consumer:
              concurrency: 1
          onlineInput:
            numPartitions:
            replicationFactor:
            retentionPolicy:
            consumer:
              concurrency: 1
          onlineOfflineIngest:
            numPartitions:
            replicationFactor:
            retentionPolicy:
            consumer:
              concurrency: 1
          onlineStorageCleanup:
            numPartitions:
            replicationFactor:
            retentionPolicy:
            consumer:
              concurrency: 1
          modificationEvents:
            numPartitions:
            replicationFactor:
            retentionPolicy:
            consumer:
              concurrency: 1
          telemetry:
            numPartitions:
            replicationFactor:
            retentionPolicy:
          telemetryAggregated:
            numPartitions:
            replicationFactor:
            retentionPolicy:
          uiNotifications:
            numPartitions: 1
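## Example (illustrative values, not defaults): sizing the jobUpdates topic with
## three partitions, replication factor 3, and a 7-day retention (ISO 8601-1
## duration); the bootstrap server addresses are placeholders:
##   config:
##     messaging:
##       kafka:
##         bootstrapServers: kafka-0.example.com:9092,kafka-1.example.com:9092
##         topicModificationEnabled: true
##         topicsConfig:
##           jobUpdates:
##             numPartitions: 3
##             replicationFactor: 3
##             retentionPolicy: P7D
##             consumer:
##               concurrency: 2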

## Core service deployment parameters
##
core:
## @param core.javaOpts Additional JVM configuration (e.g. GC logs)
  javaOpts:

## @param core.rbac.create Specifies whether RBAC resources should be created
##
  rbac:
    create: true

## @param serviceAccount.core.roleArn Core IAM role ARN to associate with Kubernetes cluster
  serviceAccount:
    core:
      roleArn:

## @param core.log4jConfigMapOverride Use different config map with log4j configuration for core pod
  log4jConfigMapOverride:

## Core pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable security context for the pods
## @param podSecurityContext.runAsUser Set Core pod's Security Context runAsUser
## @param podSecurityContext.fsGroup Set Core pod's Security Context fsGroup
##
  podSecurityContext:
    enabled: true
    runAsUser: 1001
    fsGroup: 1001

## Core containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enable Core containers' Security Context
## @param containerSecurityContext.runAsUser Set Core containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Core containers' Security Context runAsNonRoot
## @param containerSecurityContext.allowPrivilegeEscalation Controls whether the container process can gain more privileges than its parent process
## e.g.
## enabled: true
## runAsUser: 1001
## runAsNonRoot: true
## allowPrivilegeEscalation: false
##
  containerSecurityContext:
    enabled: false

## @param core.replicaCount Number of Core Service replicas
##
  replicaCount: 2

## @param core.commonLabels Labels to add to all deployed objects
##
  commonLabels: {}

## Core Service Autoscaling configuration
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
## @param autoscaling.enabled Enable Horizontal POD autoscaling for Core Service
## @param autoscaling.minReplicas Minimum number of Core Service replicas
## @param autoscaling.maxReplicas Maximum number of Core Service replicas
## @param autoscaling.targetCPU Target CPU utilization percentage
## @param autoscaling.targetMemory Target Memory utilization percentage
##
  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 10
    targetCPU: 50
    targetMemory: 50

## @param core.database.authMethod Authentication method for Postgres. Available methods are [iam, password]
## @param core.database.dsn JDBC Postgres connection string (https://jdbc.postgresql.org/documentation/use/#connecting-to-the-database). Connection string must include authentication details.
## @param core.database.existingDSNSecret Instead of @param core.database.dsn, provide an existing secret containing the Postgres connection string
## @param core.database.existingDSNSecretKey Key in @param core.database.existingDSNSecret secret
## @param core.database.maximumPoolSize Controls the maximum number of connections that HikariCP will keep in the pool, including both idle and in-use connections.
## @param core.database.minimumIdle Controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value, HikariCP will make a best effort to restore them quickly and efficiently.
## @param core.database.idleTimeout Controls the maximum amount of time (in milliseconds) that a connection is allowed to sit idle in the pool. Whether a connection is retired as idle or not is subject to a maximum variation of +30 seconds, and an average variation of +15 seconds. A connection will never be retired as idle before this timeout. A value of 0 means that idle connections are never removed from the pool.
  database:
    authMethod: "password"
    dsn:
    existingDSNSecret:
    existingDSNSecretKey:
    maximumPoolSize:
    minimumIdle:
    idleTimeout:
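## Example (illustrative, not a default): a core database DSN; per the JDBC
## documentation linked above, the connection string must embed the credentials.
## The host, database name, and credentials below are placeholders:
##   database:
##     dsn: jdbc:postgresql://postgres.example.com:5432/featurestore?user=fs&password=changeme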
## @param core.core.salt Random value which will be used to encode PATs
## @param core.core.jobsCredentialsKey Random value which will be used to encode user credentials
## @param core.core.idpClientSecret IdP Client Secret
## @param core.core.idpPatClientSecret IdP Client Secret for exchanging PAT to access token
##
  core:
    salt:
    jobsCredentialsKey:
    idpClientSecret:
    idpPatClientSecret:

## @param core.att.onlinestore.enabled Enable support for storing feature-sets in redis (online store)
## @param core.att.onlinestore.username Online storage username
## @param core.att.onlinestore.password Online storage password
## @param core.att.onlinestore.baseUrl Base url for online features
##
  att:
    onlinestore:
      enabled: false
      username:
      password:
      baseUrl:

## @param core.docker.image Core Service Docker image path
## @param core.docker.tag Core Service image tag
##
  docker:
    image: "h2oai/feature-store-core"
    tag:

## @param core.config.idpClientId IdP Client ID
## @param core.config.idpRealm IdP Realm required when idpPatExchangeEnabled
## @param core.config.idpPublicMetadataUri URI to openid-configuration file for public client/app
## @param core.config.idpPublicClientId Public IdP Client ID
## @param core.config.idpPublicScopes IdP scopes list for public client/app
## @param core.config.idpRedirectUri IdP redirect uri
## @param core.config.idpScopes IdP scopes list
## @param core.config.idpIdClaimKey JWT Claim name which will be used as user id
## @param core.config.idpRolesClaimKey Name of claim storing roles
## @param core.config.idpAdminRoleName Name of role to identify user as admin
## @param core.config.idpReviewerRoleNames Names of roles to identify user as reviewer
## @param core.config.idpForbiddenToLoginRoleNames Names of roles that are not allowed to login into Feature Store
## @param core.config.idpPatClientId IdP Client ID for PAT token exchange to access token required when idpPatExchangeEnabled
## @param core.config.idpPatExchangeEnabled Flag to enable exchanging a PAT for an access token
## @param core.config.idpServerUri IdP URI, required when idpPatExchangeEnabled is set
## @param core.config.jobsCollectionTTLDays TTL for finished jobs
## @param core.config.disableStatistics Disable statistics computation
## @param core.config.validationNamingRegex Regular expression used for validating names for projects and featuresets
## @param core.config.managementReadiness Names of readiness probes to know when a container is ready to start accepting traffic. Available values: [readinessState,kafka]
## @param core.config.livenessReadiness Names of liveness probes to know when to restart a container. Available values: [livenessState,kafka]
## @param core.config.spark.applicationRetries Number of retries for executing POST request in spark operator for spark application creation
## @param core.config.spark.failureRetries Number of retries in case of job failure
## @param core.config.spark.submissionFailureRetries Number of retries in case submission failure
## @param core.config.spark.restartPolicy Restart policy mode [onfailure, never]
## @param core.config.spark.submissionFailureIntervalSeconds Linear backoff interval (in seconds) between submission retries
## @param core.config.spark.failureIntervalSeconds Linear backoff interval (in seconds) between retries in case of job failure
## @param core.config.spark.ttlSeconds Delete finished jobs (k8's resources) older than TTL value (in seconds)
## @param core.config.memoryCache.featureSets.evictionTimeoutMinutes Timeout in minutes after feature sets stored in the memory cache are evicted
## @param core.config.autoProjectOwners List of e-mail addresses which are added as owners for each project created. The users provided in this list must first log in for this functionality to take effect
## @param core.config.autoProjectEditors List of e-mail addresses which are added as editors for each project created. The users provided in this list must first log in for this functionality to take effect
## @param core.config.isProjectLockedByDefault Configure if project should be locked by default during creation
## @param core.config.defaultFeatureSetFlow Default value for feature set flow (applied during registration). Available values: [ONLINE_ONLY, OFFLINE_ONLY, OFFLINE_ONLINE_MANUAL, OFFLINE_ONLINE_AUTOMATIC]
## @param core.config.defaultProjectsPageSize Default page size for listing projects
## @param core.config.defaultFeatureSetsPageSize Default page size for listing feature sets
## @param core.config.defaultFeaturesPageSize Default page size for listing features
## @param core.config.defaultScheduledTasksPageSize Default page size for listing scheduled tasks
## @param core.config.defaultPersonalAccessTokensPageSize Default page size for listing personal access tokens
## @param core.config.defaultFeatureSetsDraftPageSize Default page size for listing feature sets drafts
## @param core.config.defaultArtifactsPageSize Default page size for listing artifacts
## @param core.config.defaultReviewProcessPageSize Default page size for listing reviews
## @param core.config.defaultJobsPageSize Default page size for listing jobs
## @param core.config.prohibitedCliMethods Comma-separated list of gRPC methods that are prohibited from CLI use, for example [ai.h2o.featurestore.api.v1.CoreService/DeleteFeatureSet]
## @param core.config.patMaxTokensPerUser Maximum number of personal access tokens user can create, -1 means unlimited
## @param core.config.patMaximumAllowedTokenDuration Maximum allowed amount of time before a personal access token expires. Default is unlimited (expressed as zero duration PT0H). Expressed in ISO 8601-1 duration format.
## @param core.config.featureSetDraftExpirationPeriod When specified, a feature set draft becomes eligible for housekeeping cleanup after this period
## @param core.config.housekeeping.authenticationCode The time-to-live duration for records in the authentication code table.
## @param core.config.housekeeping.job The time-to-live duration for records in the job table.
## @param core.config.housekeeping.sessionAccessPaths The time-to-live duration for records in the session_access_paths table.
## @param core.config.housekeeping.pkce The time-to-live duration for records in the pkce table.
## @param core.config.housekeeping.cachedResponse The time-to-live duration for records in the cached_response table.
## @param core.config.housekeeping.recentlyUsedResourcesLimit The number of records (per user) to keep in the recently_used_resource table.
## @param core.config.housekeeping.uploadedFile The time-to-live duration for uploaded files that are not used anywhere.
## @param core.config.dashboard.defaultPopularFeatureSetsSize Default number of popular feature sets returned
## @param core.config.dashboard.defaultRecentlyUsedResourcesSize The size of the list returned when asking for recently used resources
## @param core.config.dashboard.defaultPinnedFeatureSetsSize Default number of pinned feature sets returned
## @param core.config.reviewProcess.enabledScope Enables the review process for all feature sets or only for sensitive ones. Allowed values: [ALL, SENSITIVE]. If empty, the review process is disabled.
## @param core.config.webUrl Feature Store web URL (set only when the Feature Store Web service is deployed)
## @param core.config.ingestion.featureSetPreviewNumRows Number of rows to show in feature set preview
## @param core.config.ingestion.featureSetPreviewNumCols Number of columns to show in feature set preview
## @param core.config.scheduler.housekeepingCron Feature store housekeeping cron. How often the tasks should run. For example: "10 * * * * *"
## @param core.config.scheduler.housekeepingTask Feature store housekeeping lock name
## @param core.config.scheduler.housekeepingLockAtMostFor Specifies how long the lock should be kept in case the executing node dies
## @param core.config.scheduler.housekeepingLockAtLeastFor Specifies the minimum amount of time for which the lock should be kept
## @param core.config.scheduler.housekeepingLockNameForFeatureSearchText Feature store housekeeping lock name for cleaning feature search table
## @param core.config.scheduler.housekeepingCronForFeatureSearchText Feature store housekeeping cron. How often the tasks related to cleaning feature search table should run. For example: "0 0 */24 * * *"
## @param core.config.scheduler.housekeepingLockNameForFeatureSetDrafts Feature store housekeeping lock name for cleaning feature set drafts table
## @param core.config.scheduler.housekeepingCronForFeatureSetDrafts Feature store housekeeping cron. How often the tasks related to cleaning feature set drafts table should run. For example: "0 0 * * * *"
## @param core.config.uiNotifications.enabled Enable UI notifications
## @param core.config.authz.userserver.enabled Enable integration with the H2O authz userserver
## @param core.config.authz.userserver.address Address of the userserver gRPC API
## @param core.config.authz.userserver.negotiationType Negotiation type (tls, plaintext, plaintext_upgrade)
config:
idpClientId:
idpRealm:
idpPublicMetadataUri:
idpPublicClientId:
idpPublicScopes:
idpRedirectUri:
idpScopes:
idpIdClaimKey: sub
idpRolesClaimKey: roles
idpAdminRoleName: admin
idpReviewerRoleNames:
- reviewer
idpForbiddenToLoginRoleNames: []
idpPatClientId:
idpPatExchangeEnabled:
idpServerUri:
jobsCollectionTTLDays:
disableStatistics: false
validationNamingRegex: ".*"
managementReadiness: readinessState, kafka
livenessReadiness: livenessState, kafka
spark:
applicationRetries: 5
failureRetries: 2
submissionFailureRetries: 3
restartPolicy: onfailure
submissionFailureIntervalSeconds:
failureIntervalSeconds:
ttlSeconds:
memoryCache:
featureSets:
evictionTimeoutMinutes: 10
autoProjectOwners: []
autoProjectEditors: []
isProjectLockedByDefault: false
defaultFeatureSetFlow: "OFFLINE_ONLINE_AUTOMATIC"
defaultProjectsPageSize: 100
defaultFeatureSetsPageSize: 100
defaultFeaturesPageSize: 100
defaultScheduledTasksPageSize: 100
defaultPersonalAccessTokensPageSize: 50
defaultFeatureSetsDraftPageSize: 50
defaultArtifactsPageSize: 50
defaultReviewProcessPageSize: 50
defaultJobsPageSize: 50
prohibitedCliMethods: []
patMaxTokensPerUser: -1
patMaximumAllowedTokenDuration: PT0H
featureSetDraftExpirationPeriod: P30D
housekeeping:
authenticationCode: PT10M
job: P30D
sessionAccessPaths: PT1H
pkce: PT1H
cachedResponse: PT1H
recentlyUsedResourcesLimit: 20
uploadedFile: P3D
dashboard:
defaultPopularFeatureSetsSize: 10
defaultRecentlyUsedResourcesSize: 20
defaultPinnedFeatureSetsSize: 10
reviewProcess:
enabledScope:
webUrl:
ingestion:
featureSetPreviewNumRows: 100
featureSetPreviewNumCols: 50
scheduler:
housekeepingCron:
housekeepingTask:
housekeepingLockAtMostFor:
housekeepingLockAtLeastFor:
housekeepingLockNameForFeatureSearchText:
housekeepingCronForFeatureSearchText:
housekeepingLockNameForFeatureSetDrafts:
housekeepingCronForFeatureSetDrafts:
authz:
userserver:
enabled: false
address:
negotiationType: tls
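
## As a sketch, a minimal override file for a few of the core settings above might look
## like this (the specific values are illustrative, not recommendations; durations use
## ISO 8601-1 format and the scheduler cron uses the six-field format shown above):
##
## # values-override.yaml (illustrative fragment)
## core:
##   config:
##     defaultFeatureSetFlow: "OFFLINE_ONLY"
##     defaultProjectsPageSize: 50
##     patMaximumAllowedTokenDuration: "P90D"   # tokens expire after 90 days
##     featureSetDraftExpirationPeriod: "P14D"  # drafts eligible for cleanup after 14 days
##     scheduler:
##       housekeepingCron: "0 0 * * * *"        # run housekeeping hourly
##
## Applied with: helm upgrade --install feature-store <chart> -f values-override.yaml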

## @param core.extraContainerVolumes Volumes that can be used in deployment pods
##
extraContainerVolumes: []
# - name: log4j-config
# mountPath: /opt/h2oai/config/
# subPath: log4j2.properties # (optional)
# configMap: log4j-config
# readOnly: true

## @param core.env Extra environment variables that will be passed to deployment pods
##
env: {}

## @param core.service.api.annotations Additional annotations for the API Service resource.
## @param core.service.web.annotations Additional annotations for the Web Service resource.
##
service:
api:
annotations: {}
web:
annotations: {}

## @param core.nodeSelector Node labels for Core pod assignment
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
##
nodeSelector: {}

## @param core.tolerations Tolerations for Core pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## @param core.affinity Affinity specification for Core pods
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Core containers' resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param core.resources.limits The resources limits for the Controller container
## @param core.resources.requests The requested resources for the Controller container
##
resources:
limits: {}
requests: {}

## Spark Operator service deployment parameters
##
sparkoperator:
## @param sparkoperator.javaOpts Additional JVM configuration (e.g. GC logs)
javaOpts:

## @param sparkoperator.log4jConfigMapOverride Use different config map with log4j configuration for sparkoperator pod
log4jConfigMapOverride:

## @param sparkoperator.rbac.create Specifies whether RBAC resources should be created
##
rbac:
create: true

## @param serviceAccount.sparkOperator.roleArn Spark operator IAM role ARN to associate with Kubernetes cluster
## @param serviceAccount.spark.roleArn Spark driver & executor IAM role ARN to associate with Kubernetes cluster
serviceAccount:
sparkOperator:
roleArn:
spark:
roleArn:

## Spark Operator pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable security context for the pods
## @param podSecurityContext.runAsUser Set Spark Operator pod's Security Context runAsUser
## @param podSecurityContext.fsGroup Set Spark Operator pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
runAsUser: 1001
fsGroup: 1001

## Spark Operator containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enable Spark Operator containers' Security Context
## @param containerSecurityContext.runAsUser Set Spark Operator containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Spark Operator containers' Security Context runAsNonRoot
## @param containerSecurityContext.allowPrivilegeEscalation Whether a process can gain more privileges than its parent process
## e.g.
## enabled: true
## runAsUser: 1001
## runAsNonRoot: true
## allowPrivilegeEscalation: false
##
containerSecurityContext:
enabled: false

## @param sparkoperator.commonLabels Labels to add to all deployed objects
##
commonLabels: {}

## @param sparkoperator.docker.image Spark Operator Service Docker image path
## @param sparkoperator.docker.tag Spark Operator Service image tag
##
docker:
image: "h2oai/feature-store-spark-operator"
tag:

## @param sparkoperator.driverlessAiLicenseKey Driverless AI license key used for MOJO transformation in derived feature sets
driverlessAiLicenseKey:

## @param sparkoperator.database.authMethod Authentication method for Postgres. Available methods are [iam, password]
## @param sparkoperator.database.dsn JDBC Postgres connection string (https://jdbc.postgresql.org/documentation/use/#connecting-to-the-database). Connection string must include authentication details.
## @param sparkoperator.database.existingDSNSecret Instead of @param sparkoperator.database.dsn, provide an existing secret containing the Postgres connection string
## @param sparkoperator.database.existingDSNSecretKey Key in the @param sparkoperator.database.existingDSNSecret secret
## @param sparkoperator.database.maximumPoolSize The property controls the maximum number of connections that HikariCP will keep in the pool, including both idle and in-use connections.
## @param sparkoperator.database.minimumIdle The property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value, HikariCP will make a best effort to restore them quickly and efficiently.
## @param sparkoperator.database.idleTimeout This property controls the maximum amount of time (in milliseconds) that a connection is allowed to sit idle in the pool. Whether a connection is retired as idle or not is subject to a maximum variation of +30 seconds, and average variation of +15 seconds. A connection will never be retired as idle before this timeout. A value of 0 means that idle connections are never removed from the pool.
database:
authMethod: "password"
dsn:
existingDSNSecret:
existingDSNSecretKey:
maximumPoolSize:
minimumIdle:
idleTimeout:
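
## A sketch of the two ways to supply the connection string, per the parameters above
## (the secret name, key, and pool sizes below are hypothetical examples):
##
## sparkoperator:
##   database:
##     authMethod: "password"
##     # Either an inline DSN:
##     # dsn: "jdbc:postgresql://db:5432/featurestore?user=fs&password=secret"
##     # ...or a reference to a pre-created secret holding the same string:
##     existingDSNSecret: "feature-store-db-dsn"
##     existingDSNSecretKey: "dsn"
##     maximumPoolSize: 10
##     minimumIdle: 2
##     idleTimeout: 600000   # milliseconds; 0 means idle connections are never removed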

## @param sparkoperator.config.spark.docker.image Spark Job Docker image path
## @param sparkoperator.config.spark.docker.tag Spark Job Docker image label/tag
## @param sparkoperator.config.spark.pullImagePolicy Pull docker image policy [Always, IfNotPresent, Never]
## @param sparkoperator.config.spark.minExecutors Minimum number of Spark executors to be used
## @param sparkoperator.config.spark.maxExecutors Maximum number of Spark executors to be used
## @param sparkoperator.config.spark.log4jConfigMapOverride Use different config map with log4j configuration for spark driver/executor pods
## @param sparkoperator.config.spark.extraOptions Spark properties, ref: https://spark.apache.org/docs/latest/running-on-kubernetes.html#spark-properties
##
config:
spark:
docker:
image: "h2oai/feature-store-spark"
tag:
pullImagePolicy:
minExecutors: 1
maxExecutors: 5
log4jConfigMapOverride:
## example:
## extraOptions:
## - spark.kubernetes.driver.request.cores=2
## - spark.kubernetes.driver.limit.cores=2
## - spark.kubernetes.executor.request.cores=2
## - spark.kubernetes.executor.limit.cores=2
extraOptions:

## @param sparkoperator.extraConfigmapMounts ConfigMaps to mount in Spark Operator pods
##
extraConfigmapMounts: []
# - name: log4j-config
# mountPath: /opt/h2oai/config/
# subPath: log4j2.properties # (optional)
# configMap: log4j-config
# readOnly: true

## @param sparkoperator.env Extra environment variables that will be passed to deployment pods
##
env: {}

## @param sparkoperator.sparkTemplates.driver Template YAML to define the driver pod
## @param sparkoperator.sparkTemplates.executor Template YAML to define the executor pod
##
sparkTemplates:
driver:
executor:

## @param sparkoperator.nodeSelector Node labels for Spark Operator pod assignment
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
##
nodeSelector: {}

## @param sparkoperator.tolerations Tolerations for Spark Operator pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## @param sparkoperator.affinity Affinity specification for Spark Operator pods
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Spark Operator container's resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param sparkoperator.resources.limits The resources limits for the Controller container
## @param sparkoperator.resources.requests The requested resources for the Controller container
##
resources:
limits: {}
requests: {}


## Online-Store service deployment parameters
##
onlinestore:
## @param onlinestore.javaOpts Additional JVM configuration (e.g. GC logs)
javaOpts:

## @param onlinestore.log4jConfigMapOverride Use different config map with log4j configuration for online-store pod
log4jConfigMapOverride:

## @param serviceAccount.core.roleArn Core IAM role ARN to associate with Kubernetes cluster
serviceAccount:
core:
roleArn:

## Online-Store pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable security context for the pods
## @param podSecurityContext.runAsUser Set Online-Store pod's Security Context runAsUser
## @param podSecurityContext.fsGroup Set Online-Store pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
runAsUser: 1001
fsGroup: 1001

## Online-Store containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enable Online-Store containers' Security Context
## @param containerSecurityContext.runAsUser Set Online-Store containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Online-Store containers' Security Context runAsNonRoot
## @param containerSecurityContext.allowPrivilegeEscalation Whether a process can gain more privileges than its parent process
## e.g.
## enabled: true
## runAsUser: 1001
## runAsNonRoot: true
## allowPrivilegeEscalation: false
##
containerSecurityContext:
enabled: false

## @param onlinestore.replicaCount Number of Online-Store Service replicas
##
replicaCount: 2

## @param onlinestore.commonLabels Labels to add to all deployed objects
##
commonLabels: {}

## Online-Store Service Autoscaling configuration
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
## @param autoscaling.enabled Enable Horizontal POD autoscaling for Online-Store Service
## @param autoscaling.minReplicas Minimum number of Online-Store Service replicas
## @param autoscaling.maxReplicas Maximum number of Online-Store Service replicas
## @param autoscaling.targetCPU Target CPU utilization percentage
## @param autoscaling.targetMemory Target Memory utilization percentage
##
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 10
targetCPU: 50
targetMemory: 50
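
## A sketch of enabling the Horizontal Pod Autoscaler for the Online Store, assuming
## these keys sit under the onlinestore section as shown above (target percentages are
## illustrative; HPA requires a metrics source such as metrics-server in the cluster):
##
## onlinestore:
##   replicaCount: 2        # superseded once autoscaling is enabled
##   autoscaling:
##     enabled: true
##     minReplicas: 2
##     maxReplicas: 10
##     targetCPU: 70        # percent of requested CPU
##     targetMemory: 80     # percent of requested memory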

## @param onlinestore.config.managementReadiness Names of readiness probes to know when a container is ready to start accepting traffic. Available values: [readinessState,kafka,redis]
## @param onlinestore.config.livenessReadiness Names of liveness probes to know when to restart a container. Available values: [livenessState,kafka,redis]
## @param onlinestore.config.onlineOfflineIngestionCutoffTimeInMinutes Cutoff time to trigger ingestion from online to offline. Expressed in minutes
config:
managementReadiness: readinessState, kafka
livenessReadiness: livenessState, kafka
onlineOfflineIngestionCutoffTimeInMinutes: 15

## @param onlinestore.docker.image Online Store Service Docker image path
## @param onlinestore.docker.tag Online Store Service image tag
##
docker:
image: "h2oai/feature-store-online-store"
tag:

## @param onlinestore.extraConfigmapMounts ConfigMaps to mount in Online Store pods
##
extraConfigmapMounts: []
# - name: log4j-config
# mountPath: /opt/h2oai/config/
# subPath: log4j2.properties # (optional)
# configMap: log4j-config
# readOnly: true

## @param onlinestore.env Extra environment variables that will be passed to deployment pods
##
env: {}

## @param onlinestore.nodeSelector Node labels for Online Store pod assignment
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
##
nodeSelector: {}

## @param onlinestore.tolerations Tolerations for Online Store pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## @param onlinestore.affinity Affinity specification for Online Store pods
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Online Store containers' resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param onlinestore.resources.limits The resources limits for the Controller container
## @param onlinestore.resources.requests The requested resources for the Controller container
##
resources:
limits: {}
requests: {}


## Web service deployment parameters
##
web:
## Web pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable security context for the pods
## @param podSecurityContext.runAsUser Set Web pod's Security Context runAsUser
## @param podSecurityContext.fsGroup Set Web pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
runAsUser: 1001
fsGroup: 1001

## Web containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enable Web containers' Security Context
## @param containerSecurityContext.runAsUser Set Web containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Web containers' Security Context runAsNonRoot
## @param containerSecurityContext.allowPrivilegeEscalation Whether a process can gain more privileges than its parent process
## e.g.
## enabled: true
## runAsUser: 1001
## runAsNonRoot: true
## allowPrivilegeEscalation: false
##
containerSecurityContext:
enabled: false

## @param web.replicaCount Number of Web Service replicas
##
replicaCount: 2

## @param web.commonLabels Labels to add to all deployed objects
##
commonLabels: {}

## @param web.docker.image Web Service Docker image path
## @param web.docker.tag Web Service image tag
##
docker:
image: "h2oai/feature-store-web"
tag:

## @param web.extraConfigmapMounts ConfigMaps to mount in Web pods
##
extraConfigmapMounts: []

## @param web.env Extra environment variables that will be passed to deployment pods
##
env: {}

## @param web.nodeSelector Node labels for Web pod assignment
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
##
nodeSelector: {}

## @param web.tolerations Tolerations for Web pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## @param web.affinity Affinity specification for Web pods
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Web containers' resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param web.resources.limits The resources limits for the web container
## @param web.resources.requests The requested resources for the web container
##
resources:
limits: {}
requests: {}


## Telemetry service deployment parameters
##
telemetry:
## @param telemetry.javaOpts Additional JVM configuration (e.g. GC logs)
javaOpts:

## @param telemetry.log4jConfigMapOverride Use different config map with log4j configuration for telemetry pod
log4jConfigMapOverride:

## @param serviceAccount.core.roleArn Core IAM role ARN to associate with Kubernetes cluster
serviceAccount:
core:
roleArn:

## Telemetry pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable security context for the pods
## @param podSecurityContext.runAsUser Set Telemetry pod's Security Context runAsUser
## @param podSecurityContext.fsGroup Set Telemetry pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
runAsUser: 1001
fsGroup: 1001

## Telemetry containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enable Telemetry containers' Security Context
## @param containerSecurityContext.runAsUser Set Telemetry containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Telemetry containers' Security Context runAsNonRoot
## @param containerSecurityContext.allowPrivilegeEscalation Whether a process can gain more privileges than its parent process
## e.g.
## enabled: true
## runAsUser: 1001
## runAsNonRoot: true
## allowPrivilegeEscalation: false
##
containerSecurityContext:
enabled: false

## @param telemetry.replicaCount Number of Telemetry Service replicas
##
replicaCount: 1

## @param telemetry.commonLabels Labels to add to all deployed objects
##
commonLabels: {}

## @param telemetry.config.managementReadiness Names of readiness probes to know when a container is ready to start accepting traffic. Available values: [readinessState,kafka]
## @param telemetry.config.livenessReadiness Names of liveness probes to know when to restart a container. Available values: [livenessState,kafka]
config:
managementReadiness: readinessState, kafka
livenessReadiness: livenessState, kafka

## @param telemetry.docker.image Telemetry Service Docker image path
## @param telemetry.docker.tag Telemetry Service image tag
##
docker:
image: "h2oai/feature-store-telemetry"
tag:

## @param telemetry.window.timeDuration The duration specifying how often the aggregation should occur. Expressed in ISO 8601-1 duration format
##
window:
timeDuration: "PT60S"

## @param telemetry.kafkaApplicationId Kafka application id to uniquely identify streaming sessions
##
kafkaApplicationId:

## @param telemetry.service.host Telemetry service address
## @param telemetry.service.enableKeepAlive Whether the telemetry connection should be kept alive
## @param telemetry.service.keepAliveWithoutCalls Whether the telemetry connection should be kept alive without active calls
## @param telemetry.service.negotiationType Negotiation type for the connection between the client and the telemetry service
##
service:
host:
enableKeepAlive: true
keepAliveWithoutCalls: true
negotiationType: "plaintext"

## @param telemetry.nodeSelector Node labels for Telemetry pod assignment
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
##
nodeSelector: {}

## @param telemetry.tolerations Tolerations for Telemetry pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## @param telemetry.affinity Affinity specification for Telemetry pods
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Telemetry containers' resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param telemetry.resources.limits The resources limits for the Controller container
## @param telemetry.resources.requests The requested resources for the Controller container
##
resources:
limits: {}
requests: {}

## @param telemetry.extraConfigmapMounts ConfigMaps to mount in Telemetry pods
##
extraConfigmapMounts: []
# - name: log4j-config
# mountPath: /opt/h2oai/config/
# subPath: log4j2.properties # (optional)
# configMap: log4j-config
# readOnly: true

## @param telemetry.env Extra environment variables that will be passed to deployment pods
##
env: {}


## Installer Tests deployment parameters
##
test:

## @param test.docker.image Installer Tests Docker image path
## @param test.docker.tag Installer Tests image tag
##
docker:
image: "h2oai/feature-store-installer-tests"
tag:

## @param test.userAuthTokenSecret Kubernetes secret name containing Test User's Personal Access Token used for installer tests
## @param test.userAuthTokenSecretKey Key in the @param test.userAuthTokenSecret secret containing the Test User's Personal Access Token
##
userAuthTokenSecret: "feature-store-test-user-auth-token"
userAuthTokenSecretKey: "token"
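
## The referenced secret must exist before the tests run. A sketch of a matching
## Kubernetes Secret manifest, using the default names above (the token value is a
## placeholder to be filled in with a real Personal Access Token):
##
## apiVersion: v1
## kind: Secret
## metadata:
##   name: feature-store-test-user-auth-token   # must match test.userAuthTokenSecret
## type: Opaque
## stringData:
##   token: "<personal-access-token>"           # key must match test.userAuthTokenSecretKey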


## Configure the ingress resource that allows you to access the Feature-Store installation
## ref: https://kubernetes.io/docs/user-guide/ingress/
##
ingress:
## @param ingress.apiVersion Ingress API version
##
apiVersion: networking.k8s.io/v1

## @param ingress.ingressClassName IngressClass that is used to implement the Ingress (Kubernetes 1.18+)
## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
##
ingressClassName: ""

api:
## @param ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
## Use this parameter to set the required annotations for cert-manager, see
## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
##
## e.g:
## annotations:
## nginx.ingress.kubernetes.io/backend-protocol: GRPC
## nginx.ingress.kubernetes.io/grpc-backend-for-port: fs-api-port
## nginx.ingress.kubernetes.io/proxy-read-timeout: 3600s
## nginx.ingress.kubernetes.io/proxy-send-timeout: 3600s
## nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
##
annotations:

## @param ingress.hostname Default host for the ingress record
##
hostname:

## @param ingress.path Path for the API endpoint
##
path: /

## @param ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
## TLS certificates will be retrieved from a TLS secret with name: `{{- printf "%s-tls" .Values.ingress.hostname }}`
## You can:
## - Use the `ingress.secrets` parameter to create this TLS secret
## - Rely on cert-manager to create it by setting the corresponding annotations
## - Rely on Helm to create self-signed certificates by setting `ingress.selfSigned=true`
##
tls: false

web:
## @param ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
## Use this parameter to set the required annotations for cert-manager, see
## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
##
## e.g:
## annotations:
## cert-manager.io/cluster-issuer: cluster-issuer-name
##
annotations:

## @param ingress.hostname Default host for the ingress record
##
hostname:

## @param ingress.callbackPath Path for the IdP redirect url
##
callbackPath: /Callback

## @param ingress.apiPath Path for the REST API endpoint
##
apiPath: /api

## @param ingress.onlineApiPath Path for the Online Store REST API endpoint
##
onlineApiPath: /online/api

## @param ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
## TLS certificates will be retrieved from a TLS secret with name: `{{- printf "%s-tls" .Values.ingress.hostname }}`
## You can:
## - Use the `ingress.secrets` parameter to create this TLS secret
## - Rely on cert-manager to create it by setting the corresponding annotations
## - Rely on Helm to create self-signed certificates by setting `ingress.selfSigned=true`
##
tls: false
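
## Pulling the pieces above together, a sketch of an ingress configuration with TLS on
## both endpoints (hostnames, the ingress class, and the cert-manager issuer name are
## illustrative; with tls enabled, a secret named `<hostname>-tls` is expected per the
## naming rule above, or cert-manager can create it from the annotation):
##
## ingress:
##   ingressClassName: "nginx"
##   api:
##     hostname: "feature-store-api.example.com"
##     path: /
##     tls: true
##     annotations:
##       nginx.ingress.kubernetes.io/backend-protocol: GRPC
##       cert-manager.io/cluster-issuer: "letsencrypt"
##   web:
##     hostname: "feature-store.example.com"
##     tls: true
##     annotations:
##       cert-manager.io/cluster-issuer: "letsencrypt"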
