How to wipe OpenStack disks when reinstalling OpenShift Container Platform 4
Environment
- Red Hat OpenShift Container Platform 4 (RHOCP)
- Red Hat OpenStack Platform (RHOSP)
Issue
To securely and completely wipe the disk content of a VM in OpenStack, the approach depends on the type of storage backend (Cinder volumes, ephemeral disks on local compute host, or Ceph-backed instances).
Resolution
- Ephemeral Disk (Default Compute Storage, local disk on KVM/QEMU host)
Wipe from the compute node (host):
$ virsh list --all # Get instance UUID or name
$ virsh domblklist <instance-name> # Get disk path
# Example: securely wipe the disk
$ sudo shred -v -n 3 -z /var/lib/nova/instances/<instance-uuid>/disk
This overwrites the file with random data three times, then adds a final zeroing pass (-z).
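The shred step can be rehearsed safely on a scratch file first, to confirm the final pass really leaves only zeros (the /tmp paths below are illustrative stand-ins for the real instance disk):

```shell
# Scratch file standing in for /var/lib/nova/instances/<uuid>/disk
dd if=/dev/urandom of=/tmp/fake-disk.img bs=1M count=1 2>/dev/null

# Same wipe as above: three random passes plus a final zeroing pass (-z)
shred -n 3 -z /tmp/fake-disk.img

# Verify the final pass left only zero bytes by comparing against a
# same-sized all-zero file
dd if=/dev/zero of=/tmp/zeros.img bs=1M count=1 2>/dev/null
cmp -s /tmp/fake-disk.img /tmp/zeros.img && echo "wipe verified: all zeros"

rm -f /tmp/fake-disk.img /tmp/zeros.img
```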
- Cinder Volume Backed VM
Identify the attached volume:
$ openstack server show <vm-id> -c volumes_attached
Detach the volume and set it to the available state before wiping:
$ openstack server remove volume <vm-id> <volume-id>
$ openstack volume set --state available <volume-id>
SSH into a host with access to the volume backend (LVM, iSCSI, Ceph) and run:
If LVM:
$ sudo dd if=/dev/zero of=/dev/mapper/cinder--volumes-<volume-id> bs=1M status=progress
Or, if the device supports discard (for example thin-provisioned or SSD-backed storage):
$ sudo wipefs -a /dev/mapper/cinder--volumes-<volume-id>
$ sudo blkdiscard /dev/mapper/cinder--volumes-<volume-id>
Note that wipefs only erases filesystem signatures; the blkdiscard step is what discards the remaining blocks.
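The dd zero-fill can likewise be rehearsed on a scratch image before running it against the real /dev/mapper device (the /tmp path is an illustrative stand-in):

```shell
# Scratch image standing in for /dev/mapper/cinder--volumes-<volume-id>
dd if=/dev/urandom of=/tmp/cinder-vol.img bs=1M count=4 2>/dev/null

# Zero-fill in place, mirroring the LVM wipe step
# (conv=notrunc keeps the image size unchanged, like a block device)
dd if=/dev/zero of=/tmp/cinder-vol.img bs=1M count=4 conv=notrunc status=none

# Compare against a same-sized all-zero file to confirm the wipe
dd if=/dev/zero of=/tmp/zeros.img bs=1M count=4 2>/dev/null
cmp -s /tmp/cinder-vol.img /tmp/zeros.img && echo "volume image zeroed"

rm -f /tmp/cinder-vol.img /tmp/zeros.img
```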
- Ceph-backed Cinder Volumes (RBD)
Delete the image with the rbd tool (removing the image frees its objects in the pool):
$ rbd rm volumes/<volume-id>
Note: Cinder typically names its RBD images volume-<volume-id> within the pool.
Or overwrite the image in place:
$ rbd bench --io-type write --io-pattern seq --io-total 10G --pool volumes <volume-id>
Set --io-total to at least the image size; a sequential pattern ensures every block is overwritten, which a random pattern does not guarantee.
Or map the image and zero-fill it:
$ sudo rbd map volumes/<volume-id>   # prints the /dev/rbdX device it creates
$ sudo dd if=/dev/zero of=/dev/rbdX bs=1M status=progress
$ sudo rbd unmap /dev/rbdX
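Whichever backend is in use, it is worth verifying the wipe before reinstalling. A minimal check (the is_zeroed helper name is ours, not a standard tool) that a file or mapped device reads back as all zeros:

```shell
# is_zeroed: succeed only if the given file/device contains nothing but
# NUL bytes (i.e. the zero-fill or final shred pass completed).
# Note: this streams the whole input, so it can be slow on large devices.
is_zeroed() {
  [ "$(tr -d '\0' < "$1" | wc -c)" -eq 0 ]
}

# Demonstration on scratch files; in practice, point it at the real
# device or image (e.g. /dev/rbdX after mapping).
dd if=/dev/zero of=/tmp/clean.img bs=64K count=4 2>/dev/null
printf 'leftover data' > /tmp/dirty.img

is_zeroed /tmp/clean.img && echo "clean.img: zeroed"
is_zeroed /tmp/dirty.img || echo "dirty.img: NOT zeroed"

rm -f /tmp/clean.img /tmp/dirty.img
```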
Root Cause
Reinstalling OpenShift Container Platform can fail in unexpected ways if the disks still contain data from a previous deployment; leftover partition tables, filesystem signatures, or storage metadata may be picked up during installation.
Diagnostic Steps
For Ephemeral Disk (Nova local storage):
# List all instances on the compute node
virsh list --all
# Get disk block device paths for the instance
virsh domblklist <instance-name or ID>
# Check file type and size
ls -lh /var/lib/nova/instances/<instance-uuid>/
# Check if the VM is still running (avoid wiping active disks)
virsh dominfo <instance-name>
# Optional: inspect the disk content before wipe
qemu-img info /var/lib/nova/instances/<instance-uuid>/disk
For Cinder volumes:
# List volumes and get volume ID
openstack volume list
# Get volume details
openstack volume show <volume-id>
# Check if the volume is attached
openstack server show <vm-id> -c volumes_attached
# Detach if needed
openstack server remove volume <vm-id> <volume-id>
# Set volume to available state
openstack volume set --state available <volume-id>
# On Cinder node with LVM:
sudo lvdisplay | grep <volume-id>
sudo lvs | grep <volume-id>
# Identify the device path
sudo dmsetup ls | grep <volume-id>
# Optional: check for partitions
sudo fdisk -l /dev/mapper/cinder--volumes-<volume-id>
For Ceph RBD:
# List volumes in Ceph pool
rbd ls -p volumes
# Get info on the specific volume
rbd info volumes/<volume-id>
# Check if volume is in use
rbd status volumes/<volume-id>
# Optionally map the volume to a block device
rbd map volumes/<volume-id>
# Inspect mapped device
lsblk
sudo fdisk -l /dev/rbdX
# Unmap when done
rbd unmap /dev/rbdX
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.