Logging

By default, all Feature Store services write their logs to stdout in JSON format.

Log structure

We use the following JSON template:

{
  "time": {
    "$resolver": "timestamp",
    "pattern": {
      "format": "yyyy-MM-dd'T'HH:mm:ss.SS'Z'",
      "timeZone": "UTC"
    }
  },
  "level": {
    "$resolver": "level",
    "field": "name"
  },
  "message": {
    "$resolver": "message",
    "stringified": true
  },
  "error": {
    "$resolver": "exception",
    "field": "message",
    "stringified": true
  },
  "user": {
    "$resolver": "mdc",
    "key": "userId"
  },
  "grpc_server_method": {
    "$resolver": "mdc",
    "key": "grpcServerMethod"
  },
  "thread_id": {
    "$resolver": "thread",
    "field": "name"
  },
  "logger": {
    "$resolver": "logger",
    "field": "name"
  },
  "stacktrace": {
    "$resolver": "exception",
    "field": "stackTrace",
    "stackTrace": {
      "stringified": true
    }
  }
}
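
For illustration, a single log line rendered with this template might look like the following. All field values here are hypothetical, and fields that resolve to nothing (such as error and stacktrace when no exception is logged, or user when the MDC key is unset) are omitted from the output:

{"time":"2024-01-15T10:42:07.12Z","level":"INFO","message":"Request handled successfully","user":"user-42","grpc_server_method":"featurestore.CoreService/ListFeatureSets","thread_id":"grpc-default-executor-0","logger":"featurestore.core.CoreService"}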

Customize log format

It is possible to provide a custom log4j2 configuration. To override the default log4j2.properties file, follow the instructions below:

Core services

Prepare the new log4j2.properties file that you want to use. The file has to be packaged into a Kubernetes ConfigMap. For example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: log4j-config
data:
  log4j2.properties: |
    monitorInterval=5

    # Root logger option
    rootLogger.level=INFO
    rootLogger.appenderRefs=stdout
    rootLogger.appenderRef.stdout.ref=STDOUT
    appenders=console

    # Direct log messages to stdout
    appender.console.type=Console
    appender.console.name=STDOUT
    appender.console.layout.type=PatternLayout
    appender.console.layout.pattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
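
Assuming the manifest above is saved as log4j-configmap.yaml (an illustrative file name), the ConfigMap can be created with:

kubectl apply -f log4j-configmap.yaml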

The next step is to configure the Helm chart to pick up the new properties:

extraConfigmapMounts:
  - name: log4j-config
    mountPath: /log4j/
    configMap: log4j-config
    readOnly: true

env:
  JAVA_LOG4J_PROPERTIES: /log4j/log4j2.properties

Use the extraConfigmapMounts parameter to mount the ConfigMap as a file inside the pod. Then, override the default location of the log properties file by setting the JAVA_LOG4J_PROPERTIES environment variable.
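
To verify the override from inside a running pod, you can check the environment variable and the mounted file. The pod name below is a placeholder, and the commands assume the container image ships the usual core utilities:

kubectl exec <core-service-pod> -- printenv JAVA_LOG4J_PROPERTIES
kubectl exec <core-service-pod> -- cat /log4j/log4j2.properties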

Spark jobs

Prepare the new log4j2.properties file that you want to use. The file has to be packaged into a Kubernetes ConfigMap. For example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: log4j-config
data:
  log4j2.properties: |
    monitorInterval=5

    # Root logger option
    rootLogger.level=INFO
    rootLogger.appenderRefs=stdout
    rootLogger.appenderRef.stdout.ref=STDOUT
    appenders=console

    # Direct log messages to stdout
    appender.console.type=Console
    appender.console.name=STDOUT
    appender.console.layout.type=PatternLayout
    appender.console.layout.pattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

The next step is to configure the Helm chart to pick up the new properties:

sparkoperator:
  config:
    spark:
      extraOptions:
        - spark.driver.extraJavaOptions="-Dlog4j2.configurationFile=file:///log4j/log4j2.properties"
        - spark.executor.extraJavaOptions="-Dlog4j2.configurationFile=file:///log4j/log4j2.properties"

  sparkTemplates:
    driver:
      apiVersion: v1
      kind: Pod
      spec:
        volumes:
          - name: log4j-config
            configMap:
              name: log4j-config
        containers:
          - name: spark-kubernetes-driver # This name is interpreted as the Spark driver's main container
            volumeMounts:
              - name: log4j-config
                mountPath: /log4j/
    executor:
      apiVersion: v1
      kind: Pod
      spec:
        volumes:
          - name: log4j-config
            configMap:
              name: log4j-config
        containers:
          - name: spark-kubernetes-executor # This name is interpreted as the Spark executor's main container
            volumeMounts:
              - name: log4j-config
                mountPath: /log4j/

The new properties have to be configured for both driver and executor pods by adding extra Java options in sparkoperator.config.spark.extraOptions. Apart from that, the new properties file has to be mounted into the Spark pods, which is done by modifying the sparkoperator.sparkTemplates section.
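
Once a Spark job starts with this configuration, driver and executor logs should follow the PatternLayout from the ConfigMap instead of the default JSON output. As a quick check (the pod name is a placeholder), inspect the driver output:

kubectl logs <spark-driver-pod>

Lines should now look like 2024-01-15 10:42:07 INFO  SomeClass:42 - Some message rather than JSON objects.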

