Enabling AMD Secure Encrypted Virtualization in RHEL 8

Secure Encrypted Virtualization (SEV) is a memory encryption feature for KVM virtual machines (VMs) hosted on certain AMD hardware. SEV encrypts data currently in use by the VM, which significantly improves the security of the environment.

IMPORTANT: RHEL 8 currently provides SEV as a Technology Preview. Therefore, Red Hat cannot provide support for VMs with SEV configured, the instructions below may not work properly, and using SEV in a production environment is discouraged.

To set up SEV on your VM, do the following:

Prerequisites

  • To use SEV, the host machine must have certain specialized hardware. To verify this is the case, look for sev among the CPU flags on the host:
# cat /proc/cpuinfo | grep sev
sme ssbd sev ibpb
  • Ensure that SEV is enabled in the host kernel:
$ cat /sys/module/kvm_amd/parameters/sev
1

If this command outputs 1, SEV is enabled on your host. If it outputs 0, do the following to enable SEV:

  1. Add the following options to the kernel command line:
mem_encrypt=on kvm_amd.sev=1
  2. To make the changes persistent, create the /etc/modprobe.d/sev.conf file and add the following line to it:
options kvm_amd sev=1
  • The KVM virtualization software on your host must support SEV. To verify this:
  1. Stop the libvirtd service, clean the domain capabilities cache, then start libvirtd again:
# systemctl stop libvirtd.service
# rm -f /var/cache/libvirt/qemu/capabilities/*
# systemctl start libvirtd.service
  2. Use the virsh domcapabilities command. If its output includes <sev supported='yes'>, your virtualization stack supports SEV.
# virsh domcapabilities
<domainCapabilities>
[...]
  <features>
    [...]
    <sev supported='yes'>
      <cbitpos>47</cbitpos>
      <reducedPhysBits>1</reducedPhysBits>
    </sev>
    [...]
  </features>
</domainCapabilities>
  • The VM intended for SEV must be using the Q35 machine type. To verify, use the following command and replace safeashouses with the name of your VM:
# virsh dumpxml safeashouses | grep "type arch"
<type arch='x86_64' machine='pc-q35-3.0'>hvm</type>

If the output includes q35, the VM is using Q35.

  • The VM’s boot disk currently cannot use the virtio-blk bus type.
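The virsh domcapabilities check above can also be scripted. The following is a minimal Python sketch that parses the command's XML output and reports the SEV parameters; the embedded sample string stands in for real virsh domcapabilities output on a SEV-capable host.

```python
import xml.etree.ElementTree as ET

# Sample output, standing in for: virsh domcapabilities
DOMCAPS = """\
<domainCapabilities>
  <features>
    <sev supported='yes'>
      <cbitpos>47</cbitpos>
      <reducedPhysBits>1</reducedPhysBits>
    </sev>
  </features>
</domainCapabilities>"""

def sev_params(domcaps_xml):
    """Return (cbitpos, reducedPhysBits) if SEV is supported, else None."""
    root = ET.fromstring(domcaps_xml)
    sev = root.find("./features/sev")
    if sev is None or sev.get("supported") != "yes":
        return None
    return (int(sev.findtext("cbitpos")),
            int(sev.findtext("reducedPhysBits")))

print(sev_params(DOMCAPS))  # (47, 1) on a SEV-capable stack
```

The two returned values are exactly the ones needed for the <launchSecurity> element configured later in the Procedure section.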

Procedure

  1. Configure the VM to use an OVMF boot loader. To do so, edit the XML configuration of the VM using the virsh edit command and point to the OVMF binary file:
<os>
  <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/edk2/ovmf/OVMF_CODE.fd</loader>
</os>
  2. Edit the XML configuration again and enable memory locking for the VM. Include the <locked/> parameter in the <memoryBacking> setting:
<memoryBacking>
  <locked/>
</memoryBacking>
  3. Set a memory limit on the machine.slice systemd unit on the host. To do so, use one or both of the following commands, and replace 62G and 63G with values appropriate for your system.
# systemctl set-property machine.slice MemoryHigh=62G
# systemctl set-property machine.slice MemoryMax=63G

The memory backing setting in the previous step may cause the host to be vulnerable to a DoS attack that uses malicious QEMU code. Setting an appropriate memory limit prevents this vulnerability.

NOTE: The above example commands ensure that if machine.slice starts consuming more than 62 GB of memory, the system will provide it with only a minimum amount of memory resources, and that machine.slice cannot consume more than 63 GB of memory. Such values can be useful for example on a host with 64 GB RAM, so that at least 1 GB is always reserved for the host operating system.

However, these memory values may not be optimal for your system. For example, if this causes your VMs to degrade in performance, increase the limits accordingly. For more information on managing system resources, see the systemd.resource-control(5) man page.
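The arithmetic behind the example limits can be sketched as follows. The 1 GB host reserve and the 1 GB gap between the soft and hard limits are taken from the example above, not fixed requirements:

```python
def machine_slice_limits(total_ram_gb, host_reserve_gb=1, soft_gap_gb=1):
    """Derive MemoryMax/MemoryHigh values for machine.slice.

    MemoryMax is the hard cap, leaving host_reserve_gb for the host OS.
    MemoryHigh is a soft limit set soft_gap_gb below the hard cap, so the
    slice is throttled before it can hit the cap.
    """
    memory_max = total_ram_gb - host_reserve_gb
    memory_high = memory_max - soft_gap_gb
    return {"MemoryHigh": f"{memory_high}G", "MemoryMax": f"{memory_max}G"}

# A 64 GB host reproduces the values used in the example commands above.
print(machine_slice_limits(64))  # {'MemoryHigh': '62G', 'MemoryMax': '63G'}
```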

  4. Enable emulated IOMMU on the VM's virtio devices. To do so, edit the XML configuration and add the following lines to the <devices> section:
<controller type='virtio-serial' index='0'>
  <driver iommu='on'/>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iommu='on'/>
</controller>
<memballoon model='virtio'>
  <driver iommu='on'/>
</memballoon>
<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
  <driver iommu='on'/>
</rng>
<interface type='network'>
  <driver name='qemu' iommu='on'/>
  <rom enabled='no'/>
</interface>
  5. Enable SEV as the launch security type for the VM. To do so, add the following lines to the XML configuration, and adjust cbitpos and reducedPhysBits to values provided by the virsh domcapabilities command:
<launchSecurity type='sev'>
  <cbitpos>47</cbitpos>
  <reducedPhysBits>1</reducedPhysBits>
  <policy>0x0001</policy>
</launchSecurity>

Note that this setting must be outside of the <devices> section, unlike the IOMMU setting in the previous step.
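The <policy> value is a bitmask whose bits are defined by the AMD SEV firmware interface. The bit names and values below follow that specification (they are not taken from the article above, so treat them as an illustration); a small helper for composing the value might look like this:

```python
# SEV guest policy bits, as defined in the AMD SEV firmware API
# specification (illustrative; verify against the spec for your firmware).
SEV_POLICY_BITS = {
    "NODBG":  0x1,   # debugging of the guest is disallowed
    "NOKS":   0x2,   # sharing keys with other guests is disallowed
    "ES":     0x4,   # SEV-ES is required
    "NOSEND": 0x8,   # the guest may not be sent to another platform
    "DOMAIN": 0x10,  # the guest may not migrate outside the domain
    "SEV":    0x20,  # the guest may not migrate to non-SEV platforms
}

def sev_policy(*flags):
    """Compose the <policy> bitmask from named SEV policy flags."""
    return sum(SEV_POLICY_BITS[f] for f in flags)

# The example configuration above uses 0x0001, i.e. only NODBG set.
print(f"0x{sev_policy('NODBG'):04x}")  # 0x0001
```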

  6. To verify that SEV has been enabled successfully, use the following command in the VM:
# dmesg | grep SEV

If the command displays any output, SEV is running.

Additional resources
For further information on improving the security of your VMs in RHEL 8, see the Configuring and managing virtualization document.
