Logging alerts
Configuring logging alerts.
Abstract
Chapter 1. Default logging alerts
Logging alerts are installed as part of the Red Hat OpenShift Logging Operator installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to Enable Operator recommended cluster monitoring on this namespace when installing the Red Hat OpenShift Logging Operator.
Default logging alerts are sent to the OpenShift Container Platform monitoring stack Alertmanager in the openshift-monitoring namespace, unless you have disabled the local Alertmanager instance.
1.1. Accessing the Alerting UI from the Administrator perspective
You can access the Alerting user interface (UI) through the Administrator perspective of the OpenShift Container Platform web console.
Prerequisites
- You have administrator permissions.
- You have access to the OpenShift Container Platform web console.
Procedure
- From the Administrator perspective, go to Observe → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting rules pages.
1.2. Red Hat OpenShift Logging Operator alerts
The following alerts are generated by the Vector collector. You can view these alerts in the OpenShift Container Platform web console.
Table 1.1. Vector collector alerts
| Alert | Message | Description | Severity |
|---|---|---|---|
| | | Vector is reporting that Prometheus could not scrape a specific Vector instance. | Critical |
| | | Collectors are consuming too much node disk on the host. | Warning |
| | | At least 10% of sent requests responded with "HTTP 403 Forbidden" for collector "<instance>" in namespace <namespace> for the output "<output>". | Critical |
1.3. Loki Operator alerts
The following alerts are generated by the Loki Operator. You can view these alerts in the OpenShift Container Platform web console.
Table 1.2. Loki Operator alerts
| Alert | Message | Description | Severity |
|---|---|---|---|
| | | One or more Loki ingesters are failing to flush at least 20% of their chunks to backend storage over a 5-minute period. This indicates issues with storage connectivity, authentication, or storage capacity that require immediate intervention. | critical |
| | | At least 10% of requests result in `5XX` server errors. | critical |
| | | At least 10% of write requests to the lokistack-gateway result in `5XX` server errors. | critical |
| | | At least 10% of query requests to the lokistack-gateway result in `5XX` server errors. | critical |
| | | A panic was triggered. | critical |
| | | The 99th percentile is experiencing latency higher than 1 second. | critical |
| | | At least 10% of requests receive the rate limit error code. | warning |
| | | The storage path is experiencing slow read response rates. | warning |
| | | The write path is experiencing high load, causing back pressure on storage flushing. | warning |
| | | The read path has a high volume of queries, causing longer response times. | warning |
| | | Loki is discarding samples during ingestion because they fail validation. | warning |
| | | The … | warning |
| | | One or more of the deployed LokiStacks contains an outdated storage schema configuration. | warning |
Chapter 2. Custom logging alerts
You can configure the LokiStack deployment to produce customized alerts and recorded metrics. To use customized alerting and recording rules, you must enable the LokiStack ruler component.
LokiStack log-based alerts and recorded metrics are triggered by providing LogQL (Grafana documentation) expressions to the ruler component.
To provide these expressions, you must create an `AlertingRule` custom resource (CR) containing Prometheus-compatible alerting rules, or a `RecordingRule` CR containing Prometheus-compatible recording rules (Prometheus documentation).
Administrators can configure log-based alerts or recorded metrics for application, audit, or infrastructure tenants. Users without administrator permissions can configure log-based alerts or recorded metrics for the application tenants of applications that they have access to.
Application, audit, and infrastructure alerts are sent by default to the OpenShift Container Platform monitoring stack Alertmanager in the openshift-monitoring namespace, unless you have disabled the local Alertmanager instance. If the Alertmanager that is used to monitor user-defined projects in the openshift-user-workload-monitoring namespace is enabled, application alerts are sent to the Alertmanager in that namespace by default.
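Routing application alerts to the user-defined-project Alertmanager requires that monitoring for user-defined projects is enabled. This is standard OpenShift monitoring configuration rather than a logging-specific setting; as a minimal sketch, it is turned on through the `cluster-monitoring-config` ConfigMap in the `openshift-monitoring` namespace:

```yaml
# Sketch: enable monitoring for user-defined projects.
# Running a separate Alertmanager for user workloads additionally requires
# configuration in the user-workload-monitoring-config ConfigMap; see the
# OpenShift monitoring documentation for details.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```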
2.1. Configuring the ruler
When the LokiStack ruler component is enabled, users can define a group of Content from grafana.com is not included.LogQL (Grafana documentation) expressions that trigger logging alerts or recorded metrics.
Administrators can enable the ruler by modifying the LokiStack custom resource (CR).
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator and the Loki Operator.
- You have created a `LokiStack` CR.
- You have administrator permissions.
Procedure
Enable the ruler by ensuring that the `LokiStack` CR has the following spec configuration:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: <name>
  namespace: <namespace>
spec:
# ...
  rules:
    enabled: true 1
    selector: 2
      matchLabels:
        <label_name>: "true" 3
    namespaceSelector: 4
      matchLabels:
        <label_name>: "true" 5
```

1. Enable Loki alerting and recording rules in your cluster.
2. Specify the selector for the alerting and recording resources.
3. Add a custom label that can be added to namespaces where you want to enable the use of logging alerts and metrics.
4. Specify the namespaces in which the alerting and recording rules are defined for the Loki Operator. If undefined, only the rules defined in the same namespace as the `LokiStack` are used.
5. Add a custom label that can be added to namespaces where you want to enable the use of logging alerts and metrics.
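The selectors only match namespaces and rule resources that carry the chosen label. As a hedged example, assuming the placeholder namespace `app-ns` and the label key `openshift.io/cluster-monitoring` (any label key can be used, as long as it matches the `<label_name>` in the `LokiStack` CR), the label can be applied with:

```terminal
$ oc label namespace app-ns openshift.io/cluster-monitoring="true"
```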
2.2. Authorizing LokiStack rules RBAC permissions
Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users.
The following cluster roles for alerting and recording rules are available for LokiStack:
| Rule name | Description |
|---|---|
| `alertingrules.loki.grafana.com-v1-admin` | Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch `AlertingRule` resources. |
| `alertingrules.loki.grafana.com-v1-crdview` | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to `AlertingRule` resources. |
| `alertingrules.loki.grafana.com-v1-edit` | Users with this role have permission to create, update, and delete `AlertingRule` resources. |
| `alertingrules.loki.grafana.com-v1-view` | Users with this role can read `AlertingRule` resources. |
| `recordingrules.loki.grafana.com-v1-admin` | Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch `RecordingRule` resources. |
| `recordingrules.loki.grafana.com-v1-crdview` | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to `RecordingRule` resources. |
| `recordingrules.loki.grafana.com-v1-edit` | Users with this role have permission to create, update, and delete `RecordingRule` resources. |
| `recordingrules.loki.grafana.com-v1-view` | Users with this role can read `RecordingRule` resources. |
2.2.1. Examples
To apply cluster roles for a user, you must bind an existing cluster role to a specific username.
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster.
The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster:
Example cluster role binding command for alerting rule CRUD permissions in a specific namespace
$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>
The following command gives the specified user administrator permissions for alerting rules in all namespaces:
Example cluster role binding command for administrator permissions
$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>
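The namespace-scoped binding created by the `oc adm policy add-role-to-user` command can also be expressed declaratively. This is a hedged sketch: the binding name is illustrative and the namespace and username are placeholders, while the ClusterRole name is the one used in the commands above:

```yaml
# Sketch of a namespace-scoped RoleBinding equivalent to
# `oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>`.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alertingrules-admin   # illustrative name
  namespace: <namespace>
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: <username>
roleRef:
  kind: ClusterRole
  name: alertingrules.loki.grafana.com-v1-admin
  apiGroup: rbac.authorization.k8s.io
```

Using a `ClusterRoleBinding` instead, with the same `roleRef`, grants the role across all namespaces, mirroring the `oc adm policy add-cluster-role-to-user` command.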
2.3. Creating a log-based alerting rule with Loki
The `AlertingRule` CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single `LokiStack` instance. In addition, the webhook validation definition provides support for rule validation conditions:

- If an `AlertingRule` CR includes an invalid `interval` period, it is an invalid alerting rule.
- If an `AlertingRule` CR includes an invalid `for` period, it is an invalid alerting rule.
- If an `AlertingRule` CR includes an invalid LogQL `expr`, it is an invalid alerting rule.
- If an `AlertingRule` CR includes two groups with the same name, it is an invalid alerting rule.
- If none of the above applies, the alerting rule is considered valid.
| Tenant type | Valid namespaces for `AlertingRule` CRs |
|---|---|
| audit | `openshift-logging` |
| infrastructure | `openshift-*`, `kube-*`, `default` |
| application | All other namespaces. |
Prerequisites
- Red Hat OpenShift Logging Operator 5.7 and later
- OpenShift Container Platform 4.13 and later
Procedure
Create an `AlertingRule` custom resource (CR):

Example infrastructure AlertingRule CR

```yaml
apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: loki-operator-alerts
  namespace: openshift-operators-redhat 1
  labels: 2
    openshift.io/cluster-monitoring: "true"
spec:
  tenantID: infrastructure 3
  groups:
    - name: LokiOperatorHighReconciliationError
      rules:
        - alert: HighPercentageError
          expr: | 4
            sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job)
              /
            sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job)
              > 0.01
          for: 10s
          labels:
            severity: critical 5
          annotations:
            summary: High Loki Operator Reconciliation Errors 6
            description: High Loki Operator Reconciliation Errors 7
```

1. The namespace where this `AlertingRule` CR is created must have a label matching the LokiStack `spec.rules.namespaceSelector` definition.
2. The `labels` block must match the LokiStack `spec.rules.selector` definition.
3. `AlertingRule` CRs for `infrastructure` tenants are only supported in the `openshift-*`, `kube-*`, or `default` namespaces.
4. The value for `kubernetes_namespace_name:` must match the value for `metadata.namespace`.
5. The value of this mandatory field must be `critical`, `warning`, or `info`.
6. This field is mandatory.
7. This field is mandatory.
Example application AlertingRule CR

```yaml
apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: app-user-workload
  namespace: app-ns 1
  labels: 2
    openshift.io/cluster-monitoring: "true"
spec:
  tenantID: application
  groups:
    - name: AppUserWorkloadHighError
      rules:
        - alert:
          expr: | 3
            sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job)
          for: 10s
          labels:
            severity: critical 4
          annotations:
            summary: This is an example summary. 5
            description: This is an example description. 6
```

1. The namespace where this `AlertingRule` CR is created must have a label matching the LokiStack `spec.rules.namespaceSelector` definition.
2. The `labels` block must match the LokiStack `spec.rules.selector` definition.
3. The value for `kubernetes_namespace_name:` must match the value for `metadata.namespace`.
4. The value of this mandatory field must be `critical`, `warning`, or `info`.
5. The value of this mandatory field is a summary of the rule.
6. The value of this mandatory field is a detailed description of the rule.
Apply the `AlertingRule` CR:

$ oc apply -f <filename>.yaml
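After the CR is applied, you can confirm that the validation webhook accepted the rule by listing the resources. A hedged example, using the fully qualified resource name registered by the Loki Operator CRD (`<namespace>` is a placeholder for the namespace where you created the rule):

```terminal
$ oc get alertingrules.loki.grafana.com -n <namespace>
```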