Red Hat OpenShift Pipelines Operator version is higher than supported by the cluster version (OCP 4.13 and 4.15)
Environment
Scenario 1:
• Red Hat OpenShift Pipelines 1.15.x
• Red Hat OpenShift Container Platform 4.13
Scenario 2:
• Red Hat OpenShift Pipelines 1.21.x
• Red Hat OpenShift Container Platform 4.15
Issue
For several hours on February 3, 2026, Red Hat released 4.18 Red Hat Operators catalog content into 4.12-4.17 clusters. The tags have since been restored to point at catalogs appropriate for each 4.y release. Clusters running versions 4.12 through 4.17 during the window from 2026-02-03 21:16 UTC through 2026-02-04 05:18 UTC were impacted by the 4.18 catalog content.
Specifically, operators with installPlanApproval: Automatic on their Subscription.operators.coreos.com resources, and with updates recommended in the 4.18 catalog, had their installed version updated to the version present in the 4.18 catalog, regardless of whether that version was appropriate for the OCP version the cluster was running.
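To see which operators on a cluster were candidates for this unintended update, you can list Subscriptions that use automatic install plan approval. The jq filter below is shown against a small sample document so its output is visible; against a live cluster you would pipe `oc get subscriptions.operators.coreos.com -A -o json` into the same filter.

```shell
# List Subscriptions with installPlanApproval: Automatic.
# On a cluster, pipe `oc get subscriptions.operators.coreos.com -A -o json`
# into this filter; a sample document is used here for illustration.
cat <<'EOF' | jq -r '.items[]
    | select(.spec.installPlanApproval == "Automatic")
    | "\(.metadata.namespace)\t\(.metadata.name)\t\(.spec.channel)"'
{"items":[
  {"metadata":{"namespace":"openshift-operators","name":"openshift-pipelines-operator-rh"},
   "spec":{"installPlanApproval":"Automatic","channel":"latest"}}
]}
EOF
```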
Resolution
There are two solutions available depending on your cluster version and constraints:
Scenario 1: OpenShift Pipelines 1.15.x on OCP 4.13
Solution 1 – Upgrading OCP version (Recommended)
Upgrade the OpenShift cluster to 4.14.x, as OCP 4.13 is out of support.
Solution 2 – Downgrading Operator to 1.14.x
Prerequisites
- You must have cluster-admin access to the OpenShift cluster
- The jq command-line tool must be installed
- The current OpenShift Pipelines version must be 1.15.x
- TektonConfig must be in Ready state
- CRITICAL: No PipelineRuns or TaskRuns should be in Running or Pending state
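The prerequisites above can be checked quickly before starting. The sketch below verifies jq availability locally; the oc queries assume access to the cluster and are only run where oc is available.

```shell
# Pre-flight sketch for the prerequisites above (not part of the official
# downgrade script). jq is checked locally; the oc queries need cluster access.
command -v jq >/dev/null || { echo "jq is required but not installed"; exit 1; }

if command -v oc >/dev/null; then
  oc auth can-i '*' '*' --all-namespaces        # "yes" indicates cluster-admin
  oc get tektonconfig config \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect "True"
  oc get csv -n openshift-operators | grep openshift-pipelines-operator-rh  # expect v1.15.x
fi
```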
Check for Running Workloads
IMPORTANT: PipelineRuns and TaskRuns created in 1.15.x cannot be patched after downgrade to 1.14.x due to field incompatibilities. Attempting to patch will result in:
Error from server (BadRequest): admission webhook "webhook.pipeline.tekton.dev" denied the request:
mutation failed: cannot decode incoming new object: json: unknown field "DisableInlineSpec"
Before proceeding, verify no active workloads are running:
# Check for running or pending PipelineRuns
oc get pipelineruns -A -o json | \
jq -r '.items[] | select(.status.conditions[]? | select(.type=="Succeeded" and .status=="Unknown")) | "\(.metadata.namespace)\t\(.metadata.name)"'
# Check for running or pending TaskRuns
oc get taskruns -A -o json | \
jq -r '.items[] | select(.status.conditions[]? | select(.type=="Succeeded" and .status=="Unknown")) | "\(.metadata.namespace)\t\(.metadata.name)"'
If any workloads are found, cancel them, delete them, or wait for them to finish.
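To cancel an in-flight run rather than wait, you can patch its spec.status field. The helper functions below are a sketch (the names are illustrative, not from the downgrade script): patching spec.status to "Cancelled" stops a PipelineRun, while TaskRuns use the value "TaskRunCancelled".

```shell
# Sketch: helpers to cancel in-flight runs before the downgrade.
# cancel_pipelinerun / cancel_taskrun are illustrative names.
cancel_pipelinerun() {   # usage: cancel_pipelinerun <name> <namespace>
  oc patch pipelinerun "$1" -n "$2" --type merge \
    -p '{"spec":{"status":"Cancelled"}}'
}
cancel_taskrun() {       # usage: cancel_taskrun <name> <namespace>
  oc patch taskrun "$1" -n "$2" --type merge \
    -p '{"spec":{"status":"TaskRunCancelled"}}'
}
# e.g. cancel_pipelinerun my-pipelinerun my-namespace
```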
Procedure
- Download the downgrade script from the repository:
curl -LO https://raw.githubusercontent.com/openshift-pipelines/operator-downgrade/main/osp-downgrade-1.15-1.14.sh
chmod +x osp-downgrade-1.15-1.14.sh
- Execute the script:
./osp-downgrade-1.15-1.14.sh
The script performs the following operations:
- Validates current environment and creates backup
- Removes the 1.15.x operator subscription and CSV
- Installs the 1.14.x operator from the pipelines-1.14 channel
- Removes incompatible configuration parameters
- Waits for TektonConfig to reach Ready state
- Verifies workload preservation
- Monitor the script output. The downgrade typically completes in 3-5 minutes.
==========================================
✅ DOWNGRADE COMPLETE
==========================================
Operator downgraded:
From: openshift-pipelines-operator-rh.v1.15.x
To: openshift-pipelines-operator-rh.v1.14.x
TektonConfig status:
Version: 1.14.x
Ready: True
Verification
# Check operator version
oc get csv -n openshift-operators -l operators.coreos.com/openshift-pipelines-operator-rh.openshift-operators
# Check TektonConfig version and status
oc get tektonconfig config
# Verify all components are running
oc get pods -n openshift-pipelines
oc get pods -n openshift-operators -l name=openshift-pipelines-operator
Test creating a new workload:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: test-downgrade
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: test-post-downgrade
  namespace: test-downgrade
spec:
  taskSpec:
    steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      script: |
        #!/bin/sh
        echo "Downgrade verification successful"
EOF
# Verify it runs successfully
oc get taskrun test-post-downgrade -n test-downgrade
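To confirm the test TaskRun actually succeeded, filter its Succeeded condition. The jq filter below is shown against a sample status document so the output is visible; against the cluster you would pipe `oc get taskrun test-post-downgrade -n test-downgrade -o json` into the same filter and expect "True".

```shell
# Extract the Succeeded condition status from a TaskRun. With a cluster,
# pipe `oc get taskrun test-post-downgrade -n test-downgrade -o json` into
# the same filter; a sample document is used here for illustration.
cat <<'EOF' | jq -r '.status.conditions[] | select(.type=="Succeeded") | .status'
{"status":{"conditions":[{"type":"Succeeded","status":"True","reason":"Succeeded"}]}}
EOF
```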
Note: Backup files are stored in osp-backup-<timestamp>/ in the current directory
Troubleshooting
1. TektonConfig Patch Timeout
If the script reports a timeout when patching TektonConfig:
⚠️ Warning: Failed to patch TektonConfig after 120s
Wait 30-60 seconds for webhook CA bundle propagation, then manually run the patch:
oc patch tektonconfig config --type='merge' -p '{"spec":{"addon":{"params":[{"name":"communityClusterTasks","value":"true"},{"name":"clusterTasks","value":"true"},{"name":"pipelineTemplates","value":"true"}]}}}'
Verify TektonConfig becomes Ready:
oc get tektonconfig config -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
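Rather than re-running the check by hand, you can poll until TektonConfig reports Ready. The loop below is a sketch (the function names are illustrative); check_ready is the only cluster-specific piece.

```shell
# Sketch of a readiness wait loop. check_ready queries the cluster;
# wait_for_ready polls it until it prints "True" or attempts run out.
check_ready() {
  oc get tektonconfig config \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
}

wait_for_ready() {       # usage: wait_for_ready <attempts> <delay-seconds>
  attempts=$1 delay=$2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    [ "$(check_ready 2>/dev/null)" = "True" ] && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# e.g. wait up to 5 minutes: wait_for_ready 30 10
```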
2. Cannot Modify Existing Pipeline Runs
Note: This is expected behaviour, not a failure.
PipelineRuns and TaskRuns created in 1.15.x are read-only after downgrade. They can be:
Viewed:
oc get pipelinerun <name> -n <namespace> -o yaml
Deleted:
oc delete pipelinerun <name> -n <namespace>
They cannot be patched or have labels/annotations modified.
To modify a workload: delete and recreate it. New PipelineRuns created after the downgrade will work normally.
Scenario 2: OpenShift Pipelines 1.21.x on OCP 4.15
Solution 1 – Upgrading OCP version (Recommended)
Upgrade the OpenShift cluster to 4.16.x, as OpenShift Pipelines 1.21.x requires OCP 4.16 or later for full support.
Solution 2 – Downgrading Operator to 1.20.x
Prerequisites
- You must have cluster-admin access to the OpenShift cluster
- The jq command-line tool must be installed
- The current OpenShift Pipelines version must be 1.21.x
- TektonConfig must be in Ready state
- CRITICAL: No PipelineRuns or TaskRuns should be in Running or Pending state
Important Notes
Results API with Embedded Database:
- If Results API is enabled WITHOUT an external database, the script will automatically:
- Backup the PostgreSQL database before downgrade
- Disable Results API and delete PVC/PV for clean state
- Re-enable Results API after downgrade
- Restore the database from backup
Results API with External Database:
- If an external database is configured (spec.result.is_external_db: true), no database backup/restore is needed
- Results API remains enabled throughout the downgrade
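To check which of the two cases applies before running the script, inspect the is_external_db flag on the TektonConfig. The jq expression below is shown against sample documents so its output is visible; against the cluster you would pipe `oc get tektonconfig config -o json` into the same expression.

```shell
# Report whether Results is configured with an external database. The
# `// false` default covers clusters where the field is unset. With a
# cluster, pipe `oc get tektonconfig config -o json` into this filter;
# sample documents are used here for illustration.
cat <<'EOF' | jq -r '.spec.result.is_external_db // false'
{"spec":{"result":{"is_external_db":false}}}
EOF
```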
Check for Running Workloads
IMPORTANT: Running workloads should complete before downgrade to avoid interruption and ensure clean reconciliation.
Before proceeding, verify no active workloads are running:
# Check for running or pending PipelineRuns
oc get pipelineruns -A -o json | \
jq -r '.items[] | select(.status.conditions[]? | select(.type=="Succeeded" and .status=="Unknown")) | "\(.metadata.namespace)\t\(.metadata.name)"'
# Check for running or pending TaskRuns
oc get taskruns -A -o json | \
jq -r '.items[] | select(.status.conditions[]? | select(.type=="Succeeded" and .status=="Unknown")) | "\(.metadata.namespace)\t\(.metadata.name)"'
If any workloads are found, cancel them, delete them, or wait for them to finish.
Procedure
- Download the downgrade script from the repository:
curl -LO https://raw.githubusercontent.com/openshift-pipelines/operator-downgrade/main/osp-downgrade-1.21-1.20.sh
chmod +x osp-downgrade-1.21-1.20.sh
- Execute the script:
./osp-downgrade-1.21-1.20.sh
The script performs the following operations:
- Validates current environment and creates backups
- Backs up Results API database if using embedded PostgreSQL
- Removes the 1.21.x operator subscription and CSV
- Installs the 1.20.x operator from the pipelines-1.20 channel
- Removes incompatible configuration parameters (route_enabled, route_tls_termination)
- Waits for TektonConfig to reach Ready state
- Restores Results API database if backup was taken
- Verifies workload preservation
- Monitor the script output. The downgrade typically completes in 5-8 minutes.
==========================================
✅ DOWNGRADE COMPLETE
==========================================
Operator downgraded:
From: openshift-pipelines-operator-rh.v1.21.0
To: openshift-pipelines-operator-rh.v1.20.2
TektonConfig status:
Version: 1.20.2
Ready: True
Backup location: osp-backup-<timestamp>/
Verification
# Check operator version
oc get csv -n openshift-operators -l operators.coreos.com/openshift-pipelines-operator-rh.openshift-operators
# Check TektonConfig version and status
oc get tektonconfig config
# Verify all components are running
oc get pods -n openshift-pipelines
oc get pods -n openshift-operators -l name=openshift-pipelines-operator
Test creating a new workload:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: test-downgrade
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: test-post-downgrade
  namespace: test-downgrade
spec:
  taskSpec:
    steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      script: |
        #!/bin/sh
        echo "Downgrade verification successful"
EOF
# Verify it runs successfully
oc get taskrun test-post-downgrade -n test-downgrade
Note: Backup files are stored in osp-backup-<timestamp>/ in the current directory
Troubleshooting
1. TektonConfig Not Ready After 15 Minutes
If TektonConfig doesn't become Ready within the timeout:
# Check TektonConfig status
oc get tektonconfig config -o yaml
# Check operator logs
oc logs deployment/openshift-pipelines-operator -n openshift-operators --tail=100
# Check component status
oc get tektonpipeline,tektontrigger,tektonchain,tektonresult -A
Common issues:
- TektonChain not ready: Check chain controller logs
- TektonResult not ready: Check Results API pods and database connectivity
- Post-upgrade stuck: Check for stale InstallerSets:
oc get tektoninstallerset
2. Results API Pods Not Starting
If Results API pods are stuck after restoration:
# Check pod status
oc get pods -n openshift-pipelines | grep result
# Check pod logs
oc logs -n openshift-pipelines <pod-name>
# Verify PVC was created
oc get pvc -n openshift-pipelines | grep postgres
# Check if database restoration completed
POSTGRES_POD=$(oc get pods -n openshift-pipelines -l app.kubernetes.io/name=tekton-results-postgres --no-headers | awk '{print $1}')
oc exec -n openshift-pipelines "$POSTGRES_POD" -- psql -U result -d tekton-results -c "\dt"
If database restoration failed, you can manually restore:
cat osp-backup-<timestamp>/results-db-backup.sql | \
oc exec -i -n openshift-pipelines <postgres-pod> -- psql -U result -d tekton-results
3. Webhook Validation Errors
If you see webhook-related errors during parameter removal, the script should automatically handle this by:
- Backing up webhooks
- Deleting them temporarily
- Applying the cleaned TektonConfig
- Restoring webhooks
If manual intervention is needed:
# Check webhook status
oc get mutatingwebhookconfigurations.admissionregistration.k8s.io webhook.operator.tekton.dev
oc get validatingwebhookconfigurations.admissionregistration.k8s.io validation.webhook.operator.tekton.dev
# Restore from backup if needed
oc apply -f osp-backup-<timestamp>/webhooks/mutating-webhook.yaml
oc apply -f osp-backup-<timestamp>/webhooks/validating-webhook.yaml
Root Cause
On 2026-02-03, the v4.18 redhat-operators catalog was accidentally pushed to the v4.12 through v4.17 tags. Clusters running those older OCP versions consumed the v4.18 content and failed to recover automatically when corrected catalog content was shipped (OCPBUGS-75921). In addition, some clusters had enabled automatic updates for OLM-installed operators. On those clusters, if the 4.18 catalog recommended an update from the operator version currently installed, the operator was updated automatically, potentially to a version that is not compatible with the version of OCP the cluster is running.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.