How to transition the collectors and the default log store from Red Hat OpenShift Logging 5 to 6
The following document describes how to transition the log collector and log store from Red Hat OpenShift Logging (RHOL) 5 to 6. The guide includes the required steps and configuration file modifications. It does not include any steps for migrating data between the two versions: no data migration process is needed, because the breaking change is only on the collector side.
In summary, after applying the following steps:
- Red Hat OpenShift Logging Operator and Loki Operator will be running version 6.
- The configuration files will be modified to work with RHOL 6.
Prerequisites
- Red Hat OpenShift Logging Operator 5.8/5.9 installed
- Loki Operator provided by Red Hat 5.8/5.9 installed
- Vector must be defined as the Log Collector. If the Log Collector is not Vector, see the following two Red Hat Knowledge Articles:
- How to migrate Fluentd to Vector in Red Hat OpenShift Logging 5.5+ versions?
- Migrating the log collector from Fluentd to Vector reducing the number of logs duplicated in RHOCP 4
- Loki must be defined as the Log Storage. If the Log Storage is not Loki, migrate the default Log Storage following the Red Hat Knowledge Article: Migrating the default log store from Elasticsearch to Loki in OCP 4. (This step can be omitted if Red Hat Elasticsearch is in use and the transition from Elasticsearch to Loki is desired at the same moment as the upgrade from Red Hat OpenShift Logging v5 to v6.)
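The collector and log store types currently in use can be confirmed directly from the ClusterLogging CR before starting; a minimal check, assuming the CR is named instance in the openshift-logging namespace, which should return vector and lokistack respectively:

$ oc -n openshift-logging get clusterlogging instance -o jsonpath='{.spec.collection.type}'
$ oc -n openshift-logging get clusterlogging instance -o jsonpath='{.spec.logStore.type}'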
Current Stack (5.x)
Assume the current stack looks like the example below, which represents a fully managed OpenShift Logging stack (Vector, Loki) including collection, forwarding, and storage.
Disclaimer: the stack might vary regarding resources/nodes/tolerations/nodeSelectors/collector type/backend storage used. After reviewing this guide, refer also to Configure Tolerations, NodeSelector & Resources for collector pods in OpenShift Logging 6 for additional information about collector configurations in OpenShift Logging 6.
Cluster Logging Custom Resource instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: vector
  logStore:
    lokistack:
      name: <Name_of_LokiStack>
    type: lokistack
  managementState: Managed
LokiStack Custom Resource
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: <Name_of_LokiStack>
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: "2024-06-01"
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
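Before starting the transition, it can be useful to confirm that the current stack is healthy; a quick check, assuming the stack runs in the openshift-logging namespace and the LokiStack name from the manifest above:

$ oc -n openshift-logging get pods
$ oc -n openshift-logging get lokistack <Name_of_LokiStack> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'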
Steps for upgrading to the new Stack (6.x)
Step 1: Remove the Red Hat LogFileMetricExporter Custom Resource (CR)
Backup the LogFileMetricExporter CR
$ oc get logfilemetricexporters instance -n openshift-logging -o yaml > lfme_backup.yaml
Delete the LogFileMetricExporter CR
$ oc delete logfilemetricexporters instance -n openshift-logging
Step 2: Update the Red Hat Logging Operator from 5.8/5.9 to 6:
Following the guide Updating the Red Hat OpenShift Logging Operator, change the Subscription update channel to stable-6.y. Note that Logging 6.2 and later versions don't support upgrading directly from Logging 5.x, so the upgrade may need to be performed in multiple steps; alternatively, Logging 5.x can be uninstalled and a Logging 6.2+ version installed instead.
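The channel change can also be applied from the CLI; a sketch assuming the Subscription is named cluster-logging in the openshift-logging namespace (the actual Subscription name may differ depending on how the Operator was installed):

$ oc -n openshift-logging get subscription cluster-logging -o jsonpath='{.spec.channel}'
$ oc -n openshift-logging patch subscription cluster-logging --type merge -p '{"spec":{"channel":"stable-6.0"}}'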
Step 3: Update the Red Hat Loki Operator from 5.8/5.9 to 6:
Following the guide Updating the Loki Operator, change the Subscription update channel to stable-6.y.
Note: if Red Hat Elasticsearch was in use at the moment of upgrading to Logging v6, the collector can be configured to forward logs to Red Hat Loki when transitioning from RHOL v5 to v6, while Red Hat Elasticsearch is either kept for checking the old logs or removed entirely. Steps:
- Install the Red Hat Loki Operator v6
- If Red Hat Elasticsearch needs to be kept as read-only, delete the ownerReferences from the Elasticsearch and Kibana resources as documented in the Red Hat Documentation section "Log Storage", so that they are not removed when the ClusterLogging Custom Resource (CR) is deleted
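Removing the ownerReferences can be done with a JSON patch; a sketch assuming the default resource names elasticsearch and kibana in the openshift-logging namespace:

$ oc -n openshift-logging patch elasticsearch/elasticsearch --type json -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'
$ oc -n openshift-logging patch kibana/kibana --type json -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'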
Step 4: Create and manage the Service Account.
Create the Service Account:
$ oc -n openshift-logging create serviceaccount collector
Bind the cluster role to the Service Account so it can write logs to the Red Hat LokiStack:
$ oc -n openshift-logging adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector
Add additional cluster roles to the collector Service Account so it can collect the different types of logs:
$ oc -n openshift-logging adm policy add-cluster-role-to-user collect-application-logs -z collector
$ oc -n openshift-logging adm policy add-cluster-role-to-user collect-audit-logs -z collector
$ oc -n openshift-logging adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
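The resulting cluster role bindings can be verified before continuing; the collector Service Account should appear as a subject of each role:

$ oc get clusterrolebindings -o wide | grep -E 'logging-collector-logs-writer|collect-(application|audit|infrastructure)-logs'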
Step 5: Delete the ClusterLogging instance and deploy the ClusterLogForwarder observability Custom Resource
Delete the ClusterLogging instance:
Note that the ClusterLogging instance is not needed anymore since Logging 6.0; see the documentation section Upgrading to Logging 6.0.
$ oc delete clusterlogging <CR name> -n <namespace>
Check no collector pods are running:
$ oc get pods -l component=collector -A
Check that no clusterLogForwarder.logging.openshift.io resources exist:
$ oc get clusterLogForwarder.logging.openshift.io -A
If any such CRs exist, they belong to the old 5.x Logging stack and need to be removed. Make a backup and delete them before deploying any clusterLogForwarder.observability.openshift.io CR with the new API version.
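If old CRs are found, a backup can be taken before deleting them; for example (the CR name and namespace are placeholders to replace with the actual values returned by the previous command):

$ oc get clusterlogforwarder.logging.openshift.io -A -o yaml > clf_v1_backup.yaml
$ oc delete clusterlogforwarder.logging.openshift.io <CR_name> -n <namespace>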
Move the Vector checkpoints for the ClusterLogging CR instance
Follow the Resolution section from the Red Hat Knowledge Article: "How to migrate Vector checkpoints in RHOCP 4"
Deploy the ClusterLogForwarder observability Custom Resource:
$ cat << EOF | oc create -f -
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: <Name_of_LokiStack>
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
EOF
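Once the new ClusterLogForwarder is created, the new collector pods and the CR status conditions can be checked:

$ oc -n openshift-logging get pods | grep collector
$ oc -n openshift-logging get clusterlogforwarder.observability.openshift.io collector -o jsonpath='{.status.conditions}'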
Step 6. Delete the Red Hat Log Visualization from the web console
If the Red Hat Log Visualization was not enabled in Logging v5 (for example, because Kibana was used as the Log Visualization), the resources to be deleted might not be present.
Until Steps 6 and 7 are completed, the Log Visualization web console won't be available.
$ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}' | grep "logging-view-plugin" && oc patch consoles.operator.openshift.io/cluster --type json -p='[{"op": "remove", "path": "/spec/plugins"}]'
Note: the JSON Patch "remove" operation takes no value, and the patch above removes the whole spec.plugins list. If other console plugins are enabled, edit the console resource instead and remove only the logging-view-plugin entry from the list.
console.operator.openshift.io/cluster patched
$ oc get consoleplugins logging-view-plugin && oc delete consoleplugins logging-view-plugin
NAME AGE
logging-view-plugin 45m
consoleplugin.console.openshift.io "logging-view-plugin" deleted
Step 7: Install the Cluster Observability Operator:
Following the guide Installing the Cluster Observability Operator.
Step 8: Deploy the UIPlugin to enable the Log section in the Observe tab:
$ cat << EOF | oc create -f -
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: <Name_of_LokiStack>
EOF
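After creating the UIPlugin, it can be verified that it exists and that the console plugin it manages has been registered back in the console configuration:

$ oc get uiplugin logging
$ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'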
Step 9: Recreate the logfilemetricexporters CR
$ oc create -f lfme_backup.yaml
Step 10. Delete clusterlogforwarders.logging.openshift.io and clusterloggings.logging.openshift.io CRD
$ oc delete crd clusterloggings.logging.openshift.io clusterlogforwarders.logging.openshift.io
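Afterwards, it can be confirmed that the old CRDs are gone and that only the new observability.openshift.io API group remains:

$ oc get crd | grep -E 'clusterlogforwarders|clusterloggings'
$ oc api-resources --api-group=observability.openshift.io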
Step 11. Uninstall the Red Hat Elasticsearch Operator
If Elasticsearch is no longer used by other components (Service Mesh, Jaeger, Tracing, etc.), uninstall the Elasticsearch Operator as documented in the Red Hat Documentation section: Uninstalling Elasticsearch.
It's possible to verify that no Elasticsearch CR exists by running the command:
$ oc get elasticsearch -A