Storage configuration for OpenShift Virtualization 4.21.x on Google Cloud

With the release of Red Hat OpenShift Virtualization 4.21.1, Google Cloud support is now generally available. Running OpenShift Virtualization on Google Cloud requires the following additional configuration steps:

Create a default Hyperdisk storage class

Google Cloud clusters ship with a standard-csi StorageClass that uses standard persistent disks. For OpenShift Virtualization workloads, your cluster default storage class must be a Hyperdisk Balanced StorageClass backed by a storage pool.

Verify volume attachment limits per node

Some Google Cloud machine types report a low volume attachment limit (e.g. 15) to OpenShift Container Platform. If you plan to run many VMs per node, you can verify and potentially override this limit.

Configure unlimited snapshot restores

On Google Cloud, standard snapshots (using pd.csi.storage.gke.io) are limited to 6 restores per hour per snapshot. Using a VolumeSnapshotClass with snapshot-type: images removes this limit. In 4.21, the GCP PD CSI driver operator does not include this VolumeSnapshotClass, so you must create it manually by running a single command. Image-type snapshots must be created from RWO sources, but can be restored to RWX volumes (e.g. for live migration).

Prerequisites

  • Red Hat OpenShift Container Platform 4.21.5
  • OpenShift Virtualization 4.21.1 deployed on Google Cloud
  • Google Cloud PD CSI driver installed and configured
  • Cluster admin access to create a StorageClass and apply a VolumeSnapshotClass

Procedure

Step 1.1: Create a Hyperdisk StorageClass

If you do not already have a default Hyperdisk StorageClass, create it first. This will be used for regular VM snapshots and for VM disks.

Create a file named hyperdisk-storageclass.yaml with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sp-balanced-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    storageclass.kubevirt.io/is-default-virt-class: "true"
allowVolumeExpansion: true
parameters:
  storage-pools: projects/<project-id>/zones/<zone>/storagePools/<pool-name>
  type: hyperdisk-balanced
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Apply:

oc apply -f hyperdisk-storageclass.yaml

Step 1.2: Remove the default annotation from standard-csi StorageClass

After creating the Hyperdisk StorageClass as the new default, remove the default annotation from standard-csi to avoid having two default storage classes:

oc annotate storageclass standard-csi storageclass.kubernetes.io/is-default-class-

Verify that only your Hyperdisk StorageClass is marked as default:

oc get storageclass
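As a quick sanity check, the listing can also be filtered for the (default) marker that oc prints next to default storage classes; any count other than one means the annotations still need adjusting (the pattern below assumes the standard oc table output):

```shell
# Count storage classes marked "(default)" in the oc table output;
# exactly one (the Hyperdisk class) is expected after removing the
# annotation from standard-csi.
DEFAULTS=$(oc get storageclass 2>/dev/null | grep -c '(default)')
echo "default storage classes: ${DEFAULTS}"
```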

Step 2: Verify volume attachment limits

Check the maximum volume attachment limit reported by each node:

oc get csinode -o custom-columns="NAME:.metadata.name,MAX-VOLUMES:.spec.drivers[0].allocatable.count"

If any node shows a low value (e.g. 15), you might need to apply an override label before running workloads at scale. For details and override instructions, see Volume Attachment Limit Per Node.
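To spot affected nodes at a glance, the per-node counts can be filtered through awk; the threshold of 32 below is an arbitrary illustration, not a product limit:

```shell
# Print the name and reported attachment limit for every node, then
# flag nodes whose limit falls below an example threshold of 32.
oc get csinode \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.drivers[0].allocatable.count}{"\n"}{end}' \
  2>/dev/null | awk '$2 < 32 {print $1 " reports only " $2 " attachable volumes"}'
```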

Step 3.1: Prepare to create the VolumeSnapshotClass for images

Before you create the additional VolumeSnapshotClass for images, confirm that all existing OS image imports have completed. Run the following command and examine the output, confirming that each image has a value of true in the READYTOUSE column:

oc get volumesnapshot,dataimportcron -n openshift-virtualization-os-images

Each DataImportCron should have a corresponding VolumeSnapshot with READYTOUSE: true. If any snapshot is missing or not ready, wait for the import to complete before proceeding.

Note: If OS images are still importing when you create the additional VolumeSnapshotClass, some images may not be snapshotted correctly.
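Rather than re-running the command by hand, readiness can be polled in a small loop; the namespace comes from this article, and the 30-second interval is an arbitrary choice:

```shell
# Poll until no VolumeSnapshot in the OS images namespace reports
# readyToUse=false. Note: the loop also exits if no snapshots exist
# yet, so confirm the DataImportCrons have started first.
until ! oc get volumesnapshot -n openshift-virtualization-os-images \
        -o jsonpath='{.items[*].status.readyToUse}' 2>/dev/null | grep -q false; do
  echo "Waiting for OS image snapshots to become ready..."
  sleep 30
done
```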

Step 3.2: Create the VolumeSnapshotClass for images

The Google Cloud PD CSI driver operator does not provision this VolumeSnapshotClass in 4.21. Create it manually by applying the official asset:

oc apply -f https://raw.githubusercontent.com/openshift/gcp-pd-csi-driver-operator/main/assets/volumesnapshotclass_images.yaml

This creates a VolumeSnapshotClass named csi-gce-pd-vsc-images with snapshot-type: images. It is not set as the default, so regular VM snapshots continue to use your default VolumeSnapshotClass.
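Based on the names in this article, the applied manifest is expected to resemble the following sketch; field values other than the class name, driver, and snapshot-type parameter are assumptions, and the upstream asset is authoritative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-gce-pd-vsc-images
driver: pd.csi.storage.gke.io   # same driver as the StorageClass above
deletionPolicy: Delete          # assumed; check the upstream asset
parameters:
  snapshot-type: images         # selects image-type rather than standard snapshots
```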

Verification:

  • Confirm the VolumeSnapshotClass exists:

    oc get volumesnapshotclass csi-gce-pd-vsc-images -o yaml | grep snapshot-type
    

    You should see snapshot-type: images.

  • After OS images are imported, check that snapshots in the OS images namespace (openshift-virtualization-os-images) use csi-gce-pd-vsc-images:

    oc get volumesnapshot -n openshift-virtualization-os-images -o yaml | grep csi-gce-pd-vsc-images
    
