Procedure to enable Clair scanning of images in disconnected environments


Environment

  • Quay Operator 3.13.x and below
  • Quay 3.10.0 and above
  • Clair 4.7.1 and above

Issue

  • In a Quay Operator-based deployment, Clair scanning does not work in a disconnected environment.
  • All security scan results in the Quay UI show as Queued.

Resolution

Note: As of Quay Operator v3.14, Clair embeds the CPE mapping data required for grading Red Hat product images correctly.
For v3.14 and later, follow the procedure in the documentation - 8.1. Setting up Clair in a disconnected OpenShift Container Platform cluster - to enable Clair scanning in disconnected environments.

The procedure below applies to v3.13.x and earlier.

  1. Create a PVC for Clair Common Product Enumeration (CPE) files using the following definition:
# clair-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clair-cpe
  namespace: quay-enterprise
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: STORAGE_CLASS_NAME_HERE         # specify the storage class that you're using on your cluster

$ oc create -f clair-pvc.yaml

  2. Create a temporary CPE directory locally and download the two mapping files from these links:

$ mkdir cpe && cd cpe
$ curl -L -O https://www.redhat.com/security/data/metrics/repository-to-cpe.json 
$ curl -L -O https://access.redhat.com/security/data/metrics/container-name-repos-map.json
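As an optional sanity check (our suggestion, not part of the original procedure), you can confirm that a mapping file parses as valid JSON before uploading it. The snippet below validates a small stand-in file; run the same python3 -m json.tool command against the two downloaded files:

```shell
# Illustrative check with a stand-in file; run the same command against
# repository-to-cpe.json and container-name-repos-map.json after downloading.
printf '{"sample-repo": ["cpe:/o:redhat:enterprise_linux:9"]}' > sample-mapping.json
python3 -m json.tool sample-mapping.json > /dev/null && echo "sample-mapping.json: valid JSON"
```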
  3. Create a UBI pod (named ubi-minimal here) which we'll use to upload the CPE files to the PVC. We can use the following pod definition:
# ubi-minimal.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubi-minimal
  namespace: quay-enterprise
spec:
  containers:
  - name: ubi-minimal
    command: ["/bin/bash", "-c", "sleep 86400"]
    image: registry.redhat.io/ubi9/ubi:latest
    volumeMounts:
    - name: clair-cpe-mount
      mountPath: /data
  volumes:
  - name: clair-cpe-mount
    persistentVolumeClaim:
      claimName: clair-cpe

$ oc create -f ubi-minimal.yaml
$ oc cp repository-to-cpe.json ubi-minimal:/data/
$ oc cp container-name-repos-map.json ubi-minimal:/data/
$ oc delete pod ubi-minimal
  4. Create the Clair PostgreSQL PVC, service account, sample PostgreSQL config ConfigMap, and the deployment itself:
# clair-database-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clair-postgres-13
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: STORAGE_CLASS_HERE             # specify the storage class that you're using on your cluster

$ oc create -f clair-database-pvc.yaml
# clair-postgres-conf-sample.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: clair-postgres-conf-sample
data:
  postgresql.conf.sample: |
    huge_pages = off
    logging_collector = on
    log_filename = 'postgresql-%a.log'
    log_truncate_on_rotation = on
    log_rotation_age = 1d
    log_rotation_size = 0

$ oc create -f clair-postgres-conf-sample.yaml
# clair-postgres-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: clair-postgres
  annotations:
    quay-component: clair-postgres

$ oc create -f clair-postgres-service-account.yaml
# clair-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clair-postgres
  labels:
    quay-component: clair-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      quay-component: clair-postgres
  template:
    metadata:
      labels:
        quay-component: clair-postgres
    spec:
      serviceAccountName: clair-postgres
      containers:
      - name: postgresql-13
        image: registry.redhat.io/rhel9/postgresql-13:1-161
        ports:
        - containerPort: 5432
          name: postgres
        env:
        - name: POSTGRESQL_USER
          value: "clair-database"
        - name: POSTGRESQL_PASSWORD
          value: "password"
        - name: POSTGRESQL_DATABASE
          value: "clair-database"
        - name: POSTGRESQL_ADMIN_PASSWORD
          value: "password"
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/pgsql/data
        - name: clair-postgres-conf-sample
          mountPath: /usr/share/pgsql/postgresql.conf.sample
          subPath: postgresql.conf.sample
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
      volumes:
      - name: clair-postgres-conf-sample
        configMap:
          name: clair-postgres-conf-sample
      - name: postgres-data
        persistentVolumeClaim:
          claimName: clair-postgres-13
---
apiVersion: v1
kind: Service
metadata:
  name: clair-postgres
  labels:
    quay-component: clair-postgres
  annotations:
    quay-component: clair-postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
      protocol: TCP
      name: postgres
      targetPort: 5432
  selector:
    quay-component: clair-postgres

$ oc create -f clair-postgres-deployment.yaml

Replace the values for the Clair PostgreSQL database name, user, and password as you see fit. The PostgreSQL pod should come up shortly.
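Before moving on, you can optionally verify that the database pod is up and accepting connections. These verification commands are our suggestion, not part of the original procedure; pg_isready ships in the PostgreSQL image:

```
$ oc get pods -l quay-component=clair-postgres
$ oc exec deploy/clair-postgres -- pg_isready
```

pg_isready should report that the local server is accepting connections.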

  5. Create the file kustomization.yaml, which we'll use to modify the way the operator deploys Clair:
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../tmp

patches:
- target:
    version: v1
    kind: Deployment
    name: clair-app
    group: "apps"
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/volumeMounts/-
      value:
        name: clair-cpe-mount
        mountPath: /data
    - op: add
      path: /spec/template/spec/volumes/-
      value:
        name: clair-cpe-mount
        persistentVolumeClaim:
          claimName: clair-cpe

$ oc create configmap clair-cpe-override --from-file=kustomization.yaml -n openshift-operators

Here, -n openshift-operators is the default namespace where the operator is deployed; replace it if the operator is deployed elsewhere. Next, create the modified Subscription for the operator:

# quay-operator-subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators
spec:
  channel: quay-v3.10
  installPlanApproval: Automatic
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: quay-operator.v3.10.0
  config:
    volumes:
    - name: config-volume
      configMap:
        name: clair-cpe-override
    volumeMounts:
    - mountPath: /workspace/kustomize/overlays/current/kustomization.yaml
      subPath: kustomization.yaml
      name: config-volume

$ oc replace -f quay-operator-subscription.yaml

A new operator pod should be scheduled in the openshift-operators namespace. Again, replace the namespace value in the Subscription with the namespace where the operator actually lives.

  6. Create the initial Clair config file with the following properties:
# clair-config.yaml
http_listen_addr: :8080
introspection_addr: :8089
log_level: debug
indexer:
  connstring: host=clair-postgres port=5432 dbname=clair-database user=CLAIR_USER_HERE password=CLAIR_USER_PASSWORD sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
  airgap: true
  scanner:
    repo:
      rhel-repository-scanner:
        repo2cpe_mapping_file: /data/repository-to-cpe.json
    package:
      rhel_containerscanner:
        name2repos_mapping_file: /data/container-name-repos-map.json
matcher:
  connstring: host=clair-postgres port=5432 dbname=clair-database user=CLAIR_USER_HERE password=CLAIR_USER_PASSWORD sslmode=disable
  disable_updaters: true
  max_conn_pool: 100
  migrations: true
  indexer_addr: clair-indexer
notifier:
  connstring: host=clair-postgres port=5432 dbname=clair-database user=CLAIR_USER_HERE password=CLAIR_USER_PASSWORD sslmode=disable
  delivery_interval: 1m
  poll_interval: 5m
  migrations: true
  webhook:
    callback: http://quay-clair-app/notifier/api/v1/notifications
    target: https://{QUAY_REGISTRY_NAME}-quay-{QUAY_NAMESPACE_NAME}.apps.{CLUSTER_NAME}.{CLUSTER_DOMAIN}/secscan/notification

The target under .notifier.webhook must be modified for the security notifications to work properly. QUAY_REGISTRY_NAME here is the name of the QuayRegistry custom resource that you deploy.
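As a purely hypothetical example, with a QuayRegistry named example-registry in the quay-enterprise namespace, a cluster name of mycluster, and a cluster domain of example.com, the filled-in target would look like:

```
target: https://example-registry-quay-quay-enterprise.apps.mycluster.example.com/secscan/notification
```

All of these values are placeholders; substitute your own registry name, namespace, and cluster domain.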

  7. Add the clair-config.yaml key/value pair to the ConfigBundleSecret:
    a) If the QuayRegistry is already created, update the config bundle secret by running:
$ oc get quayregistry <QUAY_REGISTRY_NAME> -n <quay_namespace> -o jsonpath={.spec.configBundleSecret}
  init-config-bundle   <-- this is the config bundle secret name

# Encode the clair-config.yaml with base64:
$ base64 -w 0 clair-config.yaml 

$ oc patch secret <configbundlesecret> -p '{"data":{"clair-config.yaml":"<base64_encoded_output>"}}' -n <quay-namespace>

(or)

    b) If the QuayRegistry is not yet created, create the initial config bundle from the file by running:

$ oc create secret generic init-config-bundle --from-file=clair-config.yaml
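As a quick, purely illustrative round trip of the encoding used in step a) (with a stand-in file, not the real clair-config.yaml), base64 -w 0 emits a single unwrapped line suitable for the secret's data field, and decoding it returns the original content:

```shell
# Stand-in file for illustration only; use your real clair-config.yaml in practice.
printf 'log_level: debug\n' > sample-clair-config.yaml
ENCODED=$(base64 -w 0 sample-clair-config.yaml)
# The encoded value is a single line with no wrapping:
echo "$ENCODED"
# Decoding recovers the original file content:
printf '%s' "$ENCODED" | base64 -d
```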
  8. Finally, create or modify the QuayRegistry custom resource and deploy it:
# quay-registry.yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: QUAY_REGISTRY_NAME
spec:
  configBundleSecret: init-config-bundle
  components:
  - kind: clair
    managed: true
    overrides:
      replicas: 1        # must be set because we're mounting a PVC inside Clair
  - kind: horizontalpodautoscaler
    managed: false       # must be set to `false` because we're mounting a PVC inside Clair
  - kind: clairpostgres  
    managed: false       # must be set to `false` since we have manually deployed Clair PostgreSQL database

$ oc create -f quay-registry.yaml

After quay-registry.yaml has been created, the operator should reconcile the deployment with new pods.

  9. Watch for the pods to transition to the Running state, then upload the CVE data according to Mapping repositories to Common Product Enumeration information.
  • With the CPE files in place, all images in the registry should be scanned successfully and the vulnerability report should be available from the Quay UI.

Notable caveats:

  • Clair cannot be scaled to more than one pod; this is a consequence of mounting a PVC directly inside Clair. If any change to the Clair configuration requires the operator to redeploy Quay or Clair and create new pods, the new Clair pod may get stuck in the ContainerCreating status until the old pod is deleted manually. Afterwards, the new pod can be scheduled and the old ReplicaSet is deleted by the operator.

  • The advantage of this approach is that the operator still manages Clair and its service and creates the security keys required for Quay-Clair communication. It also automatically makes Clair trust Quay and the certificates that Quay exposes, so there should be no x509 errors. An alternative would be to make Clair unmanaged and deploy it manually, but that requires more configuration with secrets and ConfigMaps that the operator otherwise adds automatically.

  • This solution was tested with Quay and Clair deployed in a cluster on AWS, using Quay Operator 3.10.0. While there is no hard version limit, we cannot guarantee that this deployment will work on other operator versions. We also do not recommend upgrading the operator after the kustomize template is applied.

Root Cause

  • In order to index RHEL-based images properly, Clair needs access to two CPE files which describe the relationship between installed packages and the repositories where they reside. This information is then cross-referenced with the CVE database when a security report is created.
  • If these files are not available, the indexer errors out and package detection does not complete successfully.
  • In addition to this issue, the Quay Operator does not properly render Clair's config file from the provided initial config stored in the init config bundle. This forces us to build the whole Clair configuration from scratch instead of relying on the operator to populate required fields (for instance, the database connection string).

The two issues are tracked in the following JIRAs:

Diagnostic Steps

Entries in the clair-app pod log:

$ oc logs <clair-app-pod> -n <quay-ns>
{"level":"info","layer":"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4","scanner":"rhel-repository-scanner","state":"ScanLayers","request_id":"dfadc90529db141d","manifest":"sha256:e1e2634d0b71ee373d1caa3db8ecd80833847894f3a182ac9a51ffb615a7ea85","component":"indexer/LayerScanner.scanLayer","kind":"repository","error":"rhel: unable to create a mappingFile object","time":"2024-01-03T13:56:33Z"}
{"level":"info","manifest":"sha256:e1e2634d0b71ee373d1caa3db8ecd80833847894f3a182ac9a51ffb615a7ea85","state":"ScanLayers","request_id":"dfadc90529db141d","component":"indexer/controller/Controller.Index","time":"2024-01-03T13:56:33Z","message":"layers scan done"}
{"level":"error","request_id":"dfadc90529db141d","component":"indexer/controller/Controller.Index","manifest":"sha256:e1e2634d0b71ee373d1caa3db8ecd80833847894f3a182ac9a51ffb615a7ea85","state":"ScanLayers","error":"failed to scan all layer contents: rhel: unable to create a mappingFile object","time":"2024-01-03T13:56:33Z","message":"error during scan"}
{"level":"info","request_id":"dfadc90529db141d","component":"libindex/Libindex.Index","manifest":"sha256:e1e2634d0b71ee373d1caa3db8ecd80833847894f3a182ac9a51ffb615a7ea85","time":"2024-01-03T13:56:33Z","message":"index request done"}
{"level":"info","component":"httptransport/New","request_id":"dfadc90529db141d","remote_addr":"10.130.1.167:32852","method":"POST","request_uri":"/indexer/api/v1/index_report","status":500,"duration":11838.101818,"time":"2024-01-03T13:56:33Z","message":"handled HTTP request"}
{"level":"info","request_id":"5bebb4d0734a8033","component":"libindex/Libindex.Index","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","time":"2024-01-03T13:56:34Z","message":"index request start"}
{"level":"info","component":"indexer/controller/Controller.Index","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","time":"2024-01-03T13:56:34Z","message":"starting scan"}
{"level":"info","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","state":"CheckManifest","component":"indexer/controller/Controller.Index","time":"2024-01-03T13:56:34Z","message":"manifest to be scanned"}
{"level":"info","component":"indexer/controller/Controller.Index","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","state":"FetchLayers","time":"2024-01-03T13:56:34Z","message":"layers fetch start"}
{"level":"info","component":"indexer/controller/Controller.Index","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","state":"FetchLayers","time":"2024-01-03T13:57:57Z","message":"layers fetch success"}
{"level":"info","component":"indexer/controller/Controller.Index","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","state":"FetchLayers","time":"2024-01-03T13:57:57Z","message":"layers fetch done"}
{"level":"info","component":"indexer/controller/Controller.Index","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","state":"ScanLayers","time":"2024-01-03T13:57:57Z","message":"layers scan start"}
{"level":"info","layer":"sha256:d8190195889efb5333eeec18af9b6c82313edd4db62989bd3a357caca4f13f0e","request_id":"5bebb4d0734a8033","state":"ScanLayers","scanner":"rhel_containerscanner","component":"rhel/rhcc/scanner.Scan","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","kind":"package","path":"root/buildinfo/Dockerfile-rhel-els-8.6-921","time":"2024-01-03T13:57:57Z","message":"found buildinfo Dockerfile"}
{"level":"info","state":"ScanLayers","scanner":"rhel_containerscanner","component":"indexer/LayerScanner.scanLayer","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","kind":"package","layer":"sha256:d8190195889efb5333eeec18af9b6c82313edd4db62989bd3a357caca4f13f0e","request_id":"5bebb4d0734a8033","error":"rhcc: unable to create a mappingFile object","time":"2024-01-03T13:57:57Z"}
{"level":"info","state":"ScanLayers","layer":"sha256:d8190195889efb5333eeec18af9b6c82313edd4db62989bd3a357caca4f13f0e","kind":"repository","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","component":"indexer/LayerScanner.scanLayer","scanner":"rhel-repository-scanner","request_id":"5bebb4d0734a8033","error":"rhel: unable to create a mappingFile object","time":"2024-01-03T13:57:57Z"}
{"level":"info","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","request_id":"5bebb4d0734a8033","state":"ScanLayers","kind":"repository","layer":"sha256:97da74cc6d8fa5d1634eb1760fd1da5c6048619c264c23e62d75f3bf6b8ef5c4","component":"indexer/LayerScanner.scanLayer","scanner":"rhel-repository-scanner","error":"rhel: unable to create a mappingFile object","time":"2024-01-03T13:57:58Z"}
{"level":"info","component":"indexer/controller/Controller.Index","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","state":"ScanLayers","time":"2024-01-03T13:57:58Z","message":"layers scan done"}
{"level":"error","component":"indexer/controller/Controller.Index","request_id":"5bebb4d0734a8033","manifest":"sha256:eb34aa94fea61a8d2d91d00264c7b31621ba639946b0f50cfc199596b50d636f","state":"ScanLayers","error":"failed to scan all layer contents: rhcc: unable to create a mappingFile object","time":"2024-01-03T13:57:58Z","message":"error during scan"}

Entries in the quay-app pod logs show a 500 error:

$ oc logs <quay-app-pod> -n <quay-ns>
securityworker stdout | 2024-01-03 13:56:42,805 [96] [ERROR] [util.secscan.v4.api] Security scanner endpoint responded with non-200 HTTP status code: 500
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.