How to configure the ClusterLogForwarder to forward logs to the Red Hat Managed Elasticsearch in Red Hat OpenShift Logging 6
Environment
- Red Hat OpenShift Container Platform (RHOCP) 4
- Red Hat OpenShift Logging (RHOL) 6
- Vector
- Elasticsearch
Issue
-
How can the collector be configured to continue forwarding logs to the Red Hat Managed Elasticsearch after migrating to Red Hat OpenShift Logging (RHOL) 6?
-
After migrating to Logging 6 and configuring the forwarding to the Red Hat Managed Elasticsearch following the example from the documentation section "Red Hat Managed Elasticsearch", Elasticsearch is not receiving any logs and the collectors show errors like:
2025-01-27T17:36:00.944148Z ERROR sink{component_kind="sink" component_id=output_default_elasticsearch component_type=elasticsearch}: vector::sinks::elasticsearch::service: Response contained errors. error_code="http_response_200" response=Response { status: 200, version: HTTP/1.1, headers: {"content-length": "2138", "content-type": "application/json; charset=UTF-8", "gap-upstream-address": "localhost:9200", "date": "Mon, 27 Jan 2025 17:36:00 GMT"}, body: b"{\"took\":0,\"ingest_took\":0,\"errors\":true,\"items\":[{\"create\":{\"_index\":\"infrastructure-write\",\"_type\":\"_doc\",\"_id\":\"MDlkZTE4YTEtNGRkZS00NTg1LWFiNjEtNTNmZjU2ODBmNDY1\",\"status\":404,\"error\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the index\",\"index_uuid\":\"_na_\",\"index\":\"infrastructure-write\"}}},{\"create\":{\"_index\":\"infrastructure-write\",\"_type\":\"_doc\",\"_id\":\"ZGM1YjY2NTEtZjcwOC00NTYzLThiMDktOGY3OGZhNTY1MjMx\",\"status\":404,\"error\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the index\",\"index_uuid\":\"_na_\",\"index\":\"infrastructure-write\"}}},{\"create\":{\"_index\":\"infrastructure-write\",\"_type\":\"_doc\",\"_id\":\"OWNkZDk3MTEtNWQyYS00NDA3LWEyYzAtMmY5NzQzODRiYWMz\",\"status\":404,\"error\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the index\",\"index_uuid\":\"_na_\",\"index\":\"infrastructure-write\"}}},{\"create\":{\"_index\":\"infrastructure-write\",\"_type\":\"_doc\",\"_id\":\"Y2U0NDY4Y2EtMzM0Yy00MjQ4LWFmZWMtYTRkN2NlYWMwOGI0\",\"status\":404,\"error\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the 
index\",\"index_uuid\":\"_na_\",\"index\":\"infrastructure-write\"}}},{\"create\":{\"_index\":\"infrastructure-write\",\"_type\":\"_doc\",\"_id\":\"MzkzNjg4NmEtYWM1MS00ZjNmLWJlMWYtYTAyMWM0ZmJjMThm\",\"status\":404,\"error\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the index\",\"index_uuid\":\"_na_\",\"index\":\"infrastructure-write\"}}},{\"create\":{\"_index\":\"infrastructure-write\",\"_type\":\"_doc\",\"_id\":\"ZmJiODQxZjYtYWE3OS00MGU3LTk5YjgtOWU0ODZiMjM3NmZl\",\"status\":404,\"error\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the index\",\"index_uuid\":\"_na_\",\"index\":\"infrastructure-write\"}}}]}" }
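Note that the bulk response above comes back with HTTP status 200 even though every item in it failed: the Elasticsearch bulk API reports per-item failures inside the JSON body, not in the HTTP status. A quick way to surface the failure reasons from a captured response body is to extract the "reason" fields; the snippet below inlines a shortened sample of the body for illustration:

```shell
# Shortened sample of the bulk response body seen in the collector error.
body='{"errors":true,"items":[{"create":{"_index":"infrastructure-write","status":404,"error":{"type":"index_not_found_exception","reason":"no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the index"}}}]}'

# Extract the distinct failure reasons from the per-item errors.
echo "$body" | grep -o '"reason":"[^"]*"' | sort -u
```

The same extraction works on a full body saved from the collector logs; all items in the reported error share the single index_not_found_exception reason.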
Resolution
Define an output for each type of log to be forwarded to the Red Hat Managed Elasticsearch and reference it in a pipeline.
Below is a ClusterLogForwarder custom resource example where the 3 types of logs are forwarded to the Red Hat Managed Elasticsearch.
Step 1. Create the clf_elasticsearch.yaml file
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  managementState: Managed
  outputs:
  - name: audit-elasticsearch
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: audit-write
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  - name: app-elasticsearch
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: app-write
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  - name: infra-elasticsearch
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: infra-write
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  pipelines:
  - name: app
    inputRefs:
    - application
    outputRefs:
    - app-elasticsearch
  - name: audit
    inputRefs:
    - audit
    outputRefs:
    - audit-elasticsearch
  - name: infra
    inputRefs:
    - infrastructure
    outputRefs:
    - infra-elasticsearch
  serviceAccount:
    name: collector
Step 2. Create the ClusterLogForwarder resource:
$ oc create -f clf_elasticsearch.yaml
clusterlogforwarder.observability.openshift.io/collector created
Step 3. Review that the collectors are restarted and the error is no longer present
$ ns="openshift-logging"
$ oc get pods -l app.kubernetes.io/component=collector -n $ns
NAME READY STATUS RESTARTS AGE
collector-72mhq 1/1 Running 0 5m6s
collector-74qzl 1/1 Running 0 5m6s
collector-h8kqp 1/1 Running 0 5m6s
collector-hlwkd 1/1 Running 0 5m6s
collector-qgjs6 1/1 Running 0 5m6s
collector-wp9xf 1/1 Running 0 5m6s
$ pods=$(oc get pods -l app.kubernetes.io/component=collector -n $ns -o name)
$ for pod in $(echo $pods); do oc logs $pod -n $ns | grep -c "\[action.auto_create_index\] contains.* which forbids automatic creation of the index"; done
0
0
0
0
0
0
Step 4. Review in Kibana that the logs are visible
Root Cause
Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
The Red Hat Managed Elasticsearch forbids the automatic creation of indices whose names don't follow one of these index patterns:
- infra-.*write
- audit-.*write
- app-.*write
As can be observed in the verification implemented in the code on GitHub.
If the index in the ClusterLogForwarder custom resource is set to:
index: '{.log_type||"noindex"}-write'
and considering that .log_type can take the values infrastructure, application and audit, the collector will try to forward the logs to the indices called infrastructure-write, application-write and audit-write. Elasticsearch rejects the ingestion of the infrastructure and application logs because it does not allow the automatic creation of those indices.
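The mismatch can be reproduced locally. The loop below simulates the per-record expansion of the dynamic index template and checks each result against the allowed patterns (expressed here as a single regular expression for illustration); only the expansion for audit logs matches:

```shell
# Allowed index patterns of the Red Hat Managed Elasticsearch, combined
# into one extended regex for this local check.
allowed='^(app|infra|audit)-.*write$'

# Simulate the expansion of '{.log_type||"noindex"}-write' per log type.
for log_type in application infrastructure audit; do
  index="${log_type}-write"
  if echo "$index" | grep -Eq "$allowed"; then
    echo "$index: accepted"
  else
    echo "$index: rejected"
  fi
done
```

This is why, with the dynamic index, audit logs may still be ingested while application and infrastructure logs are rejected with index_not_found_exception.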
Diagnostic Steps
- Verify that the Cluster Logging version is v6
$ ns="openshift-logging"
$ oc get csv -n $ns -l "operators.coreos.com/cluster-logging.openshift-logging="
- Verify that the collector pods show the same error as below:
$ pods=$(oc get pods -l app.kubernetes.io/component=collector -n $ns -o name)
$ for pod in $(echo $pods); do oc logs $pod -n $ns ; done |grep "\[action.auto_create_index\] contains.* which forbids automatic creation of the index" |tail -1
2025-01-27T20:58:45.812354Z ERROR sink{component_kind="sink" component_id=output_default_elasticsearch component_type=elasticsearch}: vector::sinks::util::retries: Not retriable; dropping the request. reason="error type: index_not_found_exception, reason: no such index and [action.auto_create_index] contains [-*-write] which forbids automatic creation of the index" internal_log_rate_limit=true
$ for pod in $(echo $pods); do oc logs $pod -n $ns | grep -c "\[action.auto_create_index\] contains.* which forbids automatic creation of the index"; done
321
488
127
270
163
208
- Verify that in the ClusterLogForwarder there exists an output definition for log forwarding to the Red Hat Managed Elasticsearch where the index pattern is not app-.*write, infra-.*write or audit-.*write. An incorrect index definition leading to the error can be:
$ cr="instance"
$ oc get clusterLogForwarder $cr -n $ns -o yaml
--- OUTPUT OMITTED ---
spec:
  managementState: Managed
  outputs:
  - elasticsearch:
      index: '{.log_type||"app"}-write'
      url: https://elasticsearch:9200
      version: 6
    name: default-elasticsearch
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
    type: elasticsearch
--- OUTPUT OMITTED ---
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.