Collector configmap not generated when same input used twice for the same output in RHOL 6

Solution Verified

Environment

  • Red Hat OpenShift Container Platform (RHOCP)
    • 4
  • Red Hat OpenShift Logging (RHOL)
    • 6.0
  • Vector

Issue

  • When the same input is referenced in inputRefs twice for the same output, the configmap containing the Vector configuration is not generated

  • All the inputConditions, outputConditions, and pipelineConditions show reason: ValidationSuccess and status: "True", but the collector configmap is still not generated

  • The clusterLogForwarder custom resource (CR) reports in status.conditions that it is not Ready because of a ValidationFailure, but it doesn't indicate which validation is failing:

    - lastTransitionTime: "2025-02-24T18:04:27Z"
      message: one or more of inputs, outputs, pipelines, filters have a validation
        failure
      reason: ValidationFailure
      status: "False"
      type: observability.openshift.io/Valid
    - lastTransitionTime: "2025-02-24T18:04:27Z"
      message: ""
      reason: ValidationFailure
      status: "False"
      type: Ready
    

Resolution

This issue has been reported to Red Hat engineering. It is tracked in Bug LOG-6758, and a fix was delivered in RHOL 6.2.1 through the errata RHBA-2025:3908.

If this issue still occurs in the environment after updating, open a support case in the Red Hat Customer Portal referring to this solution.

Workaround

Remove the inputRef that is used twice for the same outputRef in the pipelines. Besides restoring the configmap generation, this also avoids the logs being duplicated in the destination.
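Using the example CR shown in the Diagnostic Steps below, one possible fix is to delete the container-logs pipeline, since its only input, application, already reaches default-lokistack through the logging-loki pipeline. This is a sketch of the resulting spec, not the only valid layout:

```yaml
# pipelines after removing the duplicate "application" inputRef:
# the container-logs pipeline is dropped because "application" is already
# routed to default-lokistack by the logging-loki pipeline
pipelines:
  - inputRefs:
      - audit
    name: syslog
    outputRefs:
      - default-lokistack
  - inputRefs:
      - infrastructure
      - application
    name: logging-loki
    outputRefs:
      - default-lokistack
```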

Root Cause

The operator incorrectly marks a ClusterLogForwarder that routes multiple inputs to a LokiStack output as invalid, due to a defect in its internal validation logic.

Diagnostic Steps

In the example below, it's assumed that:

  • the clusterLogForwarder CR is called collector
  • the namespace where the clusterLogForwarder CR is created is openshift-logging
  • the inputRef used twice for the same outputRef default-lokistack is application

  1. Verify that the clusterLogForwarder CR has the same inputRef defined twice for the same outputRef in the pipelines:

    $ ns="openshift-logging"
    $ cr="collector"
    $ oc get obsclf $cr -n $ns -o yaml
    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: collector
      namespace: openshift-logging
    [...]
      pipelines:
        - inputRefs:
            - audit
          name: syslog
          outputRefs:
            - default-lokistack
        - inputRefs:
            - infrastructure
            - application
          name: logging-loki
          outputRefs:
            - default-lokistack
        - inputRefs:
            - application
          name: container-logs
          outputRefs:
            - default-lokistack
      serviceAccount:
        name: collector
    
  2. Verify that the clusterLogForwarder CR reports in status.conditions that it is not Ready, without indicating which part of the configuration failed validation:

    $ oc get obsclf $cr -n $ns -o yaml
    [...]
    status:
      conditions:
      - lastTransitionTime: "2025-02-24T18:09:41Z"
        message: 'permitted to collect log types: [application audit infrastructure]'
        reason: ClusterRolesExist
        status: "True"
        type: observability.openshift.io/Authorized
      - lastTransitionTime: "2025-02-24T18:04:27Z"
        message: one or more of inputs, outputs, pipelines, filters have a validation
          failure
        reason: ValidationFailure
        status: "False"
        type: observability.openshift.io/Valid
      - lastTransitionTime: "2025-02-24T18:04:27Z"
        message: ""
        reason: ValidationFailure
        status: "False"
        type: Ready
    
  3. Verify that the collector configmap containing the Vector configuration is not created by the Cluster Logging Operator:

    $ oc get cm $cr-config -n $ns
    Error from server (NotFound): configmaps "collector-config" not found
    

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.