How to configure CRI-O logLevel in OpenShift 4?
Environment
- Red Hat OpenShift Container Platform (RHOCP) 4
Issue
- When troubleshooting CRI-O related issues, it is recommended to adjust the log level verbosity according to the issue being tracked.
Resolution
As documented in the official OpenShift documentation, perform the following steps.
- Create a ContainerRuntimeConfig custom resource to configure the CRI-O logLevel:
$ cat <<EOF > custom-loglevel.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: custom-loglevel
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-crio: custom-loglevel
  containerRuntimeConfig:
    logLevel: debug
EOF
$ oc create -f custom-loglevel.yaml
- Verify that the resource was created:
$ oc get ctrcfg
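To confirm the value that was applied, the logLevel field can also be read back directly; a sketch, assuming the resource name custom-loglevel from the step above:

```shell
# Should print "debug" once the resource exists on the cluster
oc get ctrcfg custom-loglevel -o jsonpath='{.spec.containerRuntimeConfig.logLevel}{"\n"}'
```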
- To roll out the logLevel changes to all the worker nodes, add custom-crio: custom-loglevel under labels in the MachineConfigPool configuration.
Note: This will mark each node SchedulingDisabled and NotReady while the changes are applied to the respective worker nodes one by one.
$ oc edit machineconfigpool worker
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-04-10T16:39:39Z
  generation: 1
  labels:
    custom-crio: custom-loglevel   # <--- add this label
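Instead of editing the pool interactively, the label can also be applied non-interactively; a one-line sketch with the same effect, assuming the pool name worker:

```shell
# Same result as adding the label via "oc edit machineconfigpool worker"
oc label machineconfigpool worker custom-crio=custom-loglevel
```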
- Check that a new 99-worker-XXX-containerruntime machine config is created and that a new rendered-worker machine config is created:
$ oc get machineconfigs
- The changes will now be rolled out to each node in the worker pool via the new rendered-worker machine config.
You can verify that the latest rendered-worker machine config has been rolled out to the pool successfully:
$ oc get mcp
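To block until the rollout finishes rather than polling oc get mcp, a hedged sketch (the timeout value here is an arbitrary assumption; adjust it to your cluster size):

```shell
# Wait for the worker pool to report the Updated condition
oc wait machineconfigpool/worker --for=condition=Updated --timeout=30m
```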
For changing the CRI-O log level on master nodes, you can do the following.
- Create a ContainerRuntimeConfig custom resource to configure the CRI-O logLevel:
$ cat <<EOF > master-custom-loglevel.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: master-custom-loglevel
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-crio: master-custom-loglevel
  containerRuntimeConfig:
    logLevel: debug
EOF
$ oc create -f master-custom-loglevel.yaml
- Verify that the resource was created:
$ oc get ctrcfg
- To roll out the logLevel changes to all the master nodes, add custom-crio: master-custom-loglevel under labels in the MachineConfigPool configuration.
Note: This will mark each node SchedulingDisabled and NotReady while the changes are applied to the respective master nodes one by one.
$ oc edit machineconfigpool master
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-04-10T16:39:39Z
  generation: 1
  labels:
    custom-crio: master-custom-loglevel   # <--- add this label
- Check that a new 99-master-XXX-containerruntime machine config is created and that a new rendered-master machine config is created:
$ oc get machineconfigs
- The changes will now be rolled out to each node in the master pool via the new rendered-master machine config.
You can verify that the latest rendered-master machine config has been rolled out to the pool successfully:
$ oc get mcp
Note: To revert the changes implemented using a ContainerRuntimeConfig custom resource, you must delete the CR. Removing the label from the machine config pool does not revert the changes.
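For example, to revert both pools by deleting the custom resources created above:

```shell
# Deleting the ContainerRuntimeConfig CRs restores the default logLevel;
# the machine config operator rolls the nodes back automatically
oc delete ctrcfg custom-loglevel master-custom-loglevel
```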
Diagnostic Steps
- The available options for the CRI-O logLevel:
| Verbosity | Description |
|---|---|
| panic | PanicLevel, highest level of severity. Logs and then calls panic with the message. |
| fatal | FatalLevel. Logs and then calls logger.Exit(1). It will exit even if the logging level is set to panic. |
| error | Any error which is fatal to the operation, but not the service or application (can't open a required file, missing data, etc.). |
| warn | WarnLevel. Non-critical entries that deserve eyes. |
| info | InfoLevel. The message includes any fields passed at the log site, as well as any fields accumulated on the logger. |
| debug | DebugLevel. Usually only enabled when debugging. Very verbose logging. |
- Now check cri-o logs:
$ oc debug node/worker-0
To use host binaries, run `chroot /host`
Pod IP: 10.0.90.40
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# journalctl -u crio | tail -5
Sep 28 05:25:40 worker-0 crio[1379]: time="2021-09-28 05:25:40.015886418Z" level=debug msg="......"
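Each journal line carries a level=&lt;verbosity&gt; field that can be filtered or extracted with standard tools; a minimal offline sketch using the sample line above (no cluster needed):

```shell
# Sample CRI-O journal line, copied from the output above
line='Sep 28 05:25:40 worker-0 crio[1379]: time="2021-09-28 05:25:40.015886418Z" level=debug msg="......"'
# Extract the level= field; on a node the same filter works on "journalctl -u crio" output
level=$(printf '%s\n' "$line" | sed -n 's/.*level=\([a-z]*\).*/\1/p')
echo "$level"   # debug
```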
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.