Configuring security services

Red Hat OpenStack Services on OpenShift 18.0

Configuring the security features for Red Hat OpenStack Services on OpenShift

Abstract

Customize security features for Red Hat OpenStack Services on OpenShift based on the requirements of your environment.

Providing feedback on Red Hat documentation

We appreciate your feedback. Tell us how we can improve the documentation.

To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.

Procedure

  1. Log in to the Red Hat Atlassian Jira.
  2. Click the following link to open a Create issue page: Create issue.
  3. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
  4. Click Create.
  5. Review the details of the bug you created.

Chapter 1. Scheduling fernet key rotation

For security purposes, the fernet keys in your Red Hat OpenStack Services on OpenShift (RHOSO) environment are automatically rotated. To meet the unique security requirements of your environment, you can modify the frequency with which fernet key rotations occur as well as the number of old decryption keys kept after each rotation.

1.1. Updating fernet key rotation frequency

In Red Hat OpenStack Services on OpenShift (RHOSO), you can update the frequency with which the Identity service (keystone) rotates its fernet keys.

Procedure

  1. Open the OpenStackControlPlane custom resource (CR) for editing:

    $ oc edit openstackcontrolplane openstack-control-plane
  2. In the properties field of the Identity service (keystone) configuration, add the following:

      fernetMaxActiveKeys:
        default: <active_keys>
        description: FernetMaxActiveKeys - Maximum number of fernet token keys after rotation
        type: int
      fernetRotationDays:
        default: <days>
    • Replace <active_keys> with the number of keys to keep active. The default is 5.
    • Replace <days> with the number of days between fernet key rotations.
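
For example, to keep four active keys and rotate every two days, the fragment from the previous step might read as follows (the values shown are illustrative):

```yaml
fernetMaxActiveKeys:
  default: 4
  description: FernetMaxActiveKeys - Maximum number of fernet token keys after rotation
  type: int
fernetRotationDays:
  default: 2
```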

Chapter 2. Adding custom TLS certificates for Red Hat OpenStack Services on OpenShift

When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), TLS-e (TLS everywhere) is enabled by default. TLS is handled by cert-manager, which applies both ingress (public) encryption and re-encryption to each pod. Currently, disabling TLS on RHOSO is not supported.

2.1. TLS in Red Hat OpenStack Services on OpenShift

When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), most API connections are protected by TLS.

Note

TLS is not currently available for the internal Alert Manager Web UI service endpoint.

You might be required to protect public APIs by using your own internal certificate authority. To replace the automatically generated certificates, you must create a secret that contains your additional CA certificates, including all certificates in the required chains of trust.

You can apply trusted certificates from your own internal certificate authority (CA) to public interfaces on RHOSO. The public interface is where ingress traffic meets the service’s route. Do not attempt to manage encryption on internal (pod level) interfaces.

If you decide to apply trusted certificates from your own internal certificate authority (CA), you will need the following information.

DNS names

For each service to which you apply a custom certificate, you need its DNS hostname to generate the certificate. You can get a list of public hostnames by using the following command: oc get -n openstack routes

Note

To use a single certificate for two or more services, use a wildcard in the DNS name field, or list multiple DNS names in the subject alt names field. If you do not use a wildcard, then you must update the certificate in the event of a route hostname change.

Duration
To update a service’s certificate in OpenShift, the service must be restarted. The duration for the certificate is the longest amount of time a service can stay live without being restarted, subject to your internal security policies.
Usages
You must include key encipherment, digital signature, and server auth in the list of usages in your certificate.
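
As an illustration of these requirements, the following sketch produces a self-signed certificate with a wildcard DNS name and the required usages. The hostname is illustrative; in production you would submit a certificate signing request to your CA instead of self-signing:

```shell
# Self-signed certificate with a wildcard SAN covering all public routes,
# plus the required usages: key encipherment, digital signature, server auth.
# The domain is an illustrative placeholder.
openssl req -x509 -newkey rsa:3072 -nodes \
  -keyout tls.key -out tls.crt -days 365 \
  -subj "/CN=openstack.apps.ocp.openstack.lab" \
  -addext "subjectAltName=DNS:*.apps.ocp.openstack.lab" \
  -addext "keyUsage=critical,digitalSignature,keyEncipherment" \
  -addext "extendedKeyUsage=serverAuth"
```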

Updating TLS to use custom certificates requires edits to both the control plane and the data plane.

The following are the default TLS settings that are used unless you change them:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
spec:
  tls:
    default:
      ingress:
        ca:
          duration: 87600h
        cert:
          duration: 43800h
        enabled: true
      podLevel:
        enabled: true
        internal:
          ca:
            duration: 87600h
          cert:
            duration: 43800h
        libvirt:
          ca:
            duration: 87600h
          cert:
            duration: 43800h
        ovn:
          ca:
            duration: 87600h
          cert:
            duration: 43800h
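
If your security policies require shorter certificate lifetimes, you can override individual durations rather than the whole block. The following sketch, with an illustrative value and using the spec.tls layout from the procedures later in this guide, shortens only the ingress certificate duration to one year:

```yaml
spec:
  tls:
    ingress:
      cert:
        duration: 8760h  # one year; the CA duration keeps its default
```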

2.2. Adding custom CA certificates to the control plane

When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), default CA certificates are also deployed on the control plane. When you add a custom CA certificate from a Red Hat Satellite Server (RHSS) or another third-party certificate authority to your RHOSO control plane, RHOSO services can validate certificates issued by those third-party certificate authorities.

To accomplish this, you must add your custom CA certificate into a bundle that includes all certificates that OpenStack services can verify against.

Note

If TLS is not enabled on a node set, you must enable it, which requires redeploying the data plane.

Procedure

  1. Create a PEM-formatted bundle, for example, mybundle.pem. Include all the CA certificates that you want OpenStack to trust.
  2. Create a manifest file called cacerts.yaml that includes the mybundle.pem created in the previous step. Include all the certificates in chains of trust if applicable:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cacerts
      namespace: openstack
    type: Opaque
    data:
      myBundleExample: <cat mybundle.pem | base64 -w0>
      CACertExample: <cat cacert.pem | base64 -w0>
    • Replace mybundle.pem with the name of your certificate or certificate bundle. The results are pasted as the value of the myBundleExample field.
    • Replace cacert.pem with the name of your CA certificate.
  3. Create the secret from the manifest file:

    $ oc apply -f cacerts.yaml
  4. Edit the openstack_control_plane.yaml custom resource (CR) file and add your bundle as the parameter for caBundleSecretName:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: myctlplane
    spec:
      tls:
        podLevel:
          enabled: true
        caBundleSecretName: cacerts
  5. Apply the control plane changes:

    $ oc apply -f openstack_control_plane.yaml
  6. Determine whether TLS is enabled on each node set by running the following command, which returns true if TLS is enabled on the specified node set:

    $ oc get openstackdataplanenodeset <node_set_name> -n <namespace> -o json | jq .spec.tlsEnabled
    • Replace <node_set_name> with the name of the OpenStackDataPlaneNodeSet CR that the node belongs to.
    • Replace <namespace> with the namespace of the required Red Hat OpenStack Services on OpenShift (RHOSO) environment, for example, openstack.
  7. If TLS is not enabled, you must enable it:

    1. Open the OpenStackDataPlaneNodeSet CR file for each node set on the data plane, and enable TLS in each:

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneNodeSet
      metadata:
        name: <node_set_name>
        namespace: openstack
      spec:
        tlsEnabled: true
    2. Save the updated OpenStackDataPlaneNodeSet CR files and apply the updates:

      $ oc apply -f openstack_data_plane.yaml -n <namespace>
  8. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. This name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character.
  9. Add the OpenStackDataPlaneNodeSet CRs to the OpenStackDataPlaneDeployment CR file:

    spec:
      ...
      nodeSets:
        - <node_set_name>
  10. Save the OpenStackDataPlaneDeployment CR deployment file.
  11. Deploy the modified OpenStackDataPlaneNodeSet CRs:

    $ oc create -f openstack_data_plane_deploy.yaml -n <namespace>

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  12. Verify that the modified OpenStackDataPlaneNodeSet CRs are deployed:

    $ oc get openstackdataplanedeployment -n <namespace>
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete
    
    
    $ oc get openstackdataplanenodeset -n <namespace>
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
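
The base64 values in the cacerts secret (step 2 of this procedure) can be generated with a short script. The following sketch creates a throwaway self-signed certificate to stand in for your real CA bundle; substitute your own mybundle.pem, and add one data key per bundle or certificate:

```shell
# Generate a stand-in PEM bundle (replace with your real CA bundle).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -out mybundle.pem -days 1 -subj "/CN=example-ca"

# Build the Secret manifest with the base64-encoded bundle inlined.
cat > cacerts.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: cacerts
  namespace: openstack
type: Opaque
data:
  myBundleExample: $(base64 -w0 < mybundle.pem)
EOF
```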

2.3. Updating the control plane with custom certificates for public services

You might be required to protect public APIs by using your own internal certificate authority (CA). To replace the automatically generated route certificates with common signed certificates from your CA, you must create a secret that contains your additional CA certificate, and all certificates in the chain of trust.

Prerequisites

  • You have a list of each of the public services to which to apply your custom service certificates. You can get this list by using the oc get -n openstack routes command. Use this information to determine the number of certificates you must create, the DNS names for those certificates, and the relevant services to edit in the openstack_control_plane.yaml custom resource (CR).
  • You have a service certificate for each of the public services.

Procedure

  1. Create a manifest file called cacerts.yaml that includes all CA certificates. Include all certificates in chains of trust if applicable:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cacerts
      namespace: openstack
    type: Opaque
    data:
      myBundleExample: <cat mybundle.pem | base64 -w0>
      CACertExample: <cat cacert.pem | base64 -w0>
    • Replace mybundle.pem with the name of your certificate or certificate bundle. The results are pasted as the value of the myBundleExample field.
    • Replace cacert.pem with the name of your CA certificate.
  2. Create the secret from the manifest file:

    $ oc apply -f cacerts.yaml
  3. Create a manifest file for each secret named api_certificate_<service>_secret.yaml:

    apiVersion: v1
    kind: Secret
    metadata:
      name: api-certificate-<service>-secret
      namespace: openstack
    type: kubernetes.io/tls
    data:
      tls.crt: <cat tlscrt.pem | base64 -w0>
      tls.key: <cat tlskey.pem | base64 -w0>
      ca.crt: <cat cacrt.pem | base64 -w0>
    • Replace <service> with the name of the service that this secret is for.
    • Replace tlscrt.pem with the name of your signed certificate.
    • Replace tlskey.pem with the name of your private key.
    • Replace cacrt.pem with the name of your CA certificate.
  4. Create the secret:

    $ oc apply -f api_certificate_<service>_secret.yaml
  5. Edit the openstack_control_plane.yaml custom resource and add your bundle as the parameter for caBundleSecretName:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: myctlplane
    spec:
      tls:
        podLevel:
          enabled: true
        caBundleSecretName: cacerts
  6. Apply the secret service certificates to each of the public services under the apiOverride field. For example enter the following for the Identity service (keystone):

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: myctlplane
      namespace: openstack
    spec:
      ...
      keystone:
        apiOverride:
          tls:
            secretName: api-certificate-keystone-secret

    The edits for the Compute service (nova) and noVNCProxy appear as follows:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: myctlplane
      namespace: openstack
    spec:
    ...
      nova:
        apiOverride:
          tls:
            secretName: api-certificate-nova-secret
          route: {}
        cellOverride:
          cell1:
            noVNCProxy:
              tls:
                secretName: api-certificate-novavncproxy-secret
  7. Apply the control plane changes:

    $ oc apply -f openstack_control_plane.yaml
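
Before you create the kubernetes.io/tls secrets in step 3, you can confirm that each signed certificate and private key belong together by comparing their public keys. The following sketch uses a throwaway pair for illustration; substitute your own tlscrt.pem and tlskey.pem:

```shell
# Generate a stand-in certificate and key pair (replace with your own files).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tlskey.pem -out tlscrt.pem -days 1 -subj "/CN=demo"

# Extract the public key from each and compare; a match means the pair
# can safely go into the same kubernetes.io/tls secret.
openssl x509 -in tlscrt.pem -noout -pubkey > cert.pub
openssl pkey -in tlskey.pem -pubout > key.pub
cmp -s cert.pub key.pub && echo "certificate and key match"
```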

2.4. Updating the control plane with a single custom certificate for public services

You might be required to protect public APIs by using your own internal certificate authority (CA). To replace the automatically generated route certificates with a common signed certificate from your CA, you must create a secret that contains your CA certificate, and all certificates in the chain of trust.

Prerequisites

  • You have a list of each of the public services to which to apply your custom service certificate. You can get this list by using the oc get -n openstack routes command. Use this information for the DNS names for the certificate, and to find the relevant services to edit in the openstack_control_plane.yaml custom resource (CR).

Procedure

  1. Create a signed certificate that includes the hostname for every service in the alt_names section:

    [alt_names]
    DNS.1 = barbican-public-openstack.apps.ocp.openstack.lab
    DNS.2 = cinder-public-openstack.apps.ocp.openstack.lab
    DNS.3 = glance-default-public-openstack.apps.ocp.openstack.lab
    DNS.4 = horizon-openstack.apps.ocp.openstack.lab
    DNS.5 = keystone-public-openstack.apps.ocp.openstack.lab
    DNS.6 = manila-public-openstack.apps.ocp.openstack.lab
    DNS.7 = neutron-public-openstack.apps.ocp.openstack.lab
    DNS.8 = nova-novncproxy-cell1-public-openstack.apps.ocp.openstack.lab
    DNS.9 = nova-public-openstack.apps.ocp.openstack.lab
    DNS.10 = placement-public-openstack.apps.ocp.openstack.lab
  2. Create a manifest file called cacerts.yaml that includes all CA certificates. Include all certificates in chains of trust if applicable:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cacerts
      namespace: openstack
    type: Opaque
    data:
      myBundleExample: <cat mybundle.pem | base64 -w0>
      CACertExample: <cat cacert.pem | base64 -w0>
    • Replace mybundle.pem with the name of your certificate or certificate bundle. The results are pasted as the value of the myBundleExample field.
    • Replace cacert.pem with the name of your CA certificate.
  3. Create the secret from the manifest file:

    $ oc apply -f cacerts.yaml
  4. Create a manifest file for a secret named certificate-secret.yaml:

    apiVersion: v1
    kind: Secret
    metadata:
      name: certificate-secret
      namespace: openstack
    type: kubernetes.io/tls
    data:
      tls.crt: <cat tlscrt.pem | base64 -w0>
      tls.key: <cat tlskey.pem | base64 -w0>
      ca.crt: <cat cacrt.pem | base64 -w0>
    • Replace tlscrt.pem with the name of your signed certificate.
    • Replace tlskey.pem with the name of your private key.
    • Replace cacrt.pem with the name of your CA certificate.
  5. Create the secret:

    $ oc apply -f certificate-secret.yaml
  6. Edit the openstack_control_plane.yaml custom resource and add your bundle as the parameter for caBundleSecretName:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: myctlplane
    spec:
      tls:
        podLevel:
          enabled: true
        caBundleSecretName: cacerts
  7. Apply the secret service certificates to each of the public services under the apiOverride field. For example, enter the following for the Identity service (keystone):

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: myctlplane
      namespace: openstack
    spec:
      ...
      keystone:
        apiOverride:
          tls:
            secretName: certificate-secret

    The edits for the Compute service (nova) and noVNCProxy appear as follows:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: myctlplane
      namespace: openstack
    spec:
    ...
      nova:
        apiOverride:
          tls:
            secretName: certificate-secret
          route: {}
        cellOverride:
          cell1:
            noVNCProxy:
              tls:
                secretName: certificate-secret
  8. Apply the control plane changes:

    $ oc apply -f openstack_control_plane.yaml
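
The alt_names list in step 1 can be carried into a certificate signing request with openssl, as in the following sketch. Two hostnames are shown and the file names are illustrative; add a DNS: entry for every public route before submitting the CSR to your CA:

```shell
# Certificate signing request carrying multiple route hostnames in
# subjectAltName. Extend the -addext value with one DNS: entry per route.
openssl req -new -newkey rsa:3072 -nodes \
  -keyout single.key -out single.csr \
  -subj "/CN=keystone-public-openstack.apps.ocp.openstack.lab" \
  -addext "subjectAltName=DNS:keystone-public-openstack.apps.ocp.openstack.lab,DNS:nova-public-openstack.apps.ocp.openstack.lab"
```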

2.5. Using your CA certs on remote clients

If you do not use a trusted CA from a public entity, openstack client commands fail with an SSL verification error and require the --insecure option to succeed. Use the following steps to communicate securely with the OpenStack API by using a private certificate authority.

Prerequisites

  • You have deployed RHOSO with default certificates, or have used custom certificates that are not signed by a public certificate authority.

Procedure

  1. Log in to OpenShift with global administrative permissions.
  2. Extract the CA certificate for the public endpoints from the rootca-public secret:

    $ oc get secret rootca-public -o json | jq -r '.data."ca.crt"' | base64 -d > ca.crt
  3. Transfer the ca.crt file to the client that accesses the OpenStack API.
  4. Update your authentication file with the path to ca.crt.

    1. If you use a clouds.yaml authentication file, add the cacert parameter:

      clouds:
          secure:
              cacert: </path/to/ca.crt>
      • Replace </path/to/ca.crt> with the absolute path and name of the CA cert on your system.
    2. If you use a resource credentials (RC) file, add the exported OS_CACERT variable:

      $ export OS_CACERT=</path/to/ca.crt>
      • Replace </path/to/ca.crt> with the absolute path and name of the CA cert on your system.
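
A complete clouds.yaml entry might then look like the following sketch. The auth values are illustrative placeholders; only the cacert line is specific to this procedure:

```yaml
clouds:
    secure:
        auth:
            auth_url: https://keystone-public-openstack.apps.ocp.openstack.lab
            username: admin
            password: <password>
            project_name: admin
            user_domain_name: Default
            project_domain_name: Default
        region_name: regionOne
        cacert: /home/cloud-user/ca.crt
```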

Chapter 3. Custom issuers for cert-manager

An issuer is a resource that acts as a certificate authority for a specific namespace, and is managed by the cert-manager Operator. TLS-e (TLS everywhere) is enabled in Red Hat OpenStack Services on OpenShift (RHOSO) environments, and it uses the following issuers by default:

  • rootca-internal
  • rootca-libvirt
  • rootca-ovn
  • rootca-public

3.1. Creating a custom issuer

You can create custom ingress issuers as well as custom internal issuers. To create and manage your own certificates for internal endpoints, you must create a custom internal issuer.

Procedure

  1. Create a custom issuer in a file named rootca-custom.yaml:

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: <issuer_name>
    spec:
      ca:
        secretName: <secret_name>
    • Replace <issuer_name> with the name of your custom issuer, for example, rootca-ingress-custom.
    • Replace <secret_name> with the name of the Secret CR used by the certificate for your custom issuer. If you do not include a secret, one is created automatically.
  2. Create a certificate in a file named ca-issuer-certificate.yaml:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: <issuer_name>
    spec:
      commonName: <issuer_name>
      isCA: true
      duration: <hours>
      privateKey:
        algorithm: RSA
        size: 3072
      issuerRef:
        name: selfsigned-issuer
        kind: Issuer
      secretName: <secret_name>
    • Replace <issuer_name> with the name of your custom issuer. This matches the issuer created in the first step.
    • Replace <hours> with the duration in hours, for example, a value of 87600h is equivalent to 3650 days, or about 10 years.
    • Replace <secret_name> with the name of the Secret CR used by the certificate for your custom issuer. If you do not include a secret, one is created automatically.
  3. Create the issuer and certificate:

    $ oc create -f rootca-custom.yaml
    $ oc create -f ca-issuer-certificate.yaml
  4. Add the custom issuer to the TLS service definition in the control plane CR file.

    1. If your custom issuer is an ingress issuer, the custom issuer is defined under the ingress attribute as shown below:

      apiVersion: core.openstack.org/v1beta1
      kind: OpenStackControlPlane
      metadata:
        name: openstack-control-plane
      spec:
        tls:
          ingress:
            enabled: true
            ca:
              customIssuer: <issuer_name>
        ...
      • Replace <issuer_name> with the name of your custom issuer. This matches the issuer created in the first step.
    2. If your custom issuer is an internal issuer, the custom issuer is defined at the pod level under the internal attribute as shown below:

      apiVersion: core.openstack.org/v1beta1
      kind: OpenStackControlPlane
      metadata:
        name: myctlplane
      spec:
        tls:
          ingress:
            enabled: true
          podLevel:
            enabled: true
            internal:
              ca:
                customIssuer: <issuer_name>
      • Replace <issuer_name> with the name of your custom issuer. This matches the issuer created in the first step.
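
The issuerRef in the certificate above points to a self-signed bootstrap issuer named selfsigned-issuer. If no such issuer exists in the openstack namespace yet, a minimal definition looks like the following sketch:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: openstack
spec:
  selfSigned: {}
```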

Chapter 4. Enabling TLS on a deployed RHOSO environment

TLS is enabled by default in Red Hat OpenStack Services on OpenShift (RHOSO) environments. If you disabled TLS when you deployed your RHOSO environment, or if you adopted a Red Hat OpenStack Platform 17.1 deployment into a RHOSO environment, you can re-enable TLS after deployment.

Important
  • Enabling TLS on a deployed RHOSO environment involves some data plane downtime, because connectivity to RabbitMQ and OVS from the control plane is lost during the redeployment.

    • If your deployment uses the default configuration where no floating IP connectivity is directed through the control plane, then this downtime does not affect the workload hosted on the RHOSO environment.
    • If your deployment routes traffic through the control plane, then the downtime impacts the workload hosted on the RHOSO environment.
  • New workloads cannot be created and existing workloads cannot be managed with the OpenStack API while the control plane and data plane are being updated.

4.1. Enabling TLS on a deployed RHOSO environment error messages

The following error messages are logged when connectivity to RabbitMQ and OVS from the control plane is lost during the redeployment to enable TLS:

  • Extract from the nova-compute log:

    Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.037 2 ERROR oslo.messaging._drivers.impl_rabbit [-] [98752a36-cf06-4d26-aee8-f5b21bf55aef] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: <RecoverableConnectionError: unknown error>. Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: <RecoverableConnectionError: unknown error>
    Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.566 2 ERROR oslo.messaging._drivers.impl_rabbit [-] [8c795961-cb17-4a6d-82ee-25c862316b40] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: timed out. Trying again in 32 seconds.: socket.timeout: timed out
  • Extract from the OVN controller log:

    ovn_controller[55433]: 2024-08-09T11:35:47Z|00452|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Aug 09 11:35:47 edpm-compute-0
    ovn_controller[55433]: 2024-08-09T11:35:47Z|00453|jsonrpc|WARN|tcp:ovsdbserver-sb.openstack.svc:6642: error parsing stream: line 0, column 0, byte 0: invalid character U+0015 Aug 09 11:35:47 edpm-compute-0
    ovn_controller[55433]: 2024-08-09T11:35:47Z|00454|reconnect|WARN|tcp:ovsdbserver-sb.openstack.svc:6642: connection dropped (Protocol error)

4.2. Enabling TLS on a RHOSO environment after deployment

If TLS is disabled in your deployed Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can re-enable it on an operational RHOSO environment with minimal disruption.

Prerequisites

  • The RHOSO environment is deployed on a Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
  • You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following spec.tls configuration, if not already present:

    spec:
      tls:
        ingress:
          ca:
            duration: 87600h0m0s
          cert:
            duration: 43800h0m0s
          enabled: true
        podLevel:
          enabled: true
          internal:
            ca:
              duration: 87600h0m0s
            cert:
              duration: 43800h0m0s
          libvirt:
            ca:
              duration: 87600h0m0s
            cert:
              duration: 43800h0m0s
          ovn:
            ca:
              duration: 87600h0m0s
            cert:
              duration: 43800h0m0s
    • If the tls configuration is already present in the CR file, ensure that podLevel.enabled is set to true.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. The RabbitMQ pods cannot change their TLS configuration while they are running. Delete the existing RabbitMQ pods so that the control plane replaces them with pods that have TLS enabled:

    $ oc delete pod -n openstack -l app.kubernetes.io/component=rabbitmq
  5. Wait for the control plane to be ready:

    $ oc wait openstackcontrolplane -n openstack --for=condition=Ready --timeout=400s -l core.openstack.org/openstackcontrolplane

    While you wait for the control plane to be ready, you cannot create new workloads or manage existing workloads with the OpenStack API. The nova-compute service on the data plane nodes cannot connect to the cell1 RabbitMQ instance and reports as down:

    $ oc rsh openstackclient
    $ openstack compute service list -c Binary -c Host -c Status -c State
    +----------------+-------------------------------------+---------+-------+
    | Binary         | Host                                | Status  | State |
    +----------------+-------------------------------------+---------+-------+
    | nova-conductor | nova-cell0-conductor-0              | enabled | up    |
    | nova-scheduler | nova-scheduler-0                    | enabled | up    |
    | nova-conductor | nova-cell1-conductor-0              | enabled | up    |
    | nova-compute   | edpm-compute-0.ctlplane.example.com | enabled | down  |
    | nova-compute   | edpm-compute-1.ctlplane.example.com | enabled | down  |
    +----------------+-------------------------------------+---------+-------+

    The OVN controller and the OVN metadata agent cannot connect to the southbound database:

    $ openstack network agent list -c 'Agent Type' -c Host -c Alive -c State
    +------------------------------+-------------------------------------+-------+-------+
    | Agent Type                   | Host                                | Alive | State |
    +------------------------------+-------------------------------------+-------+-------+
    | OVN Controller Gateway agent | crc                                 | :-)   | UP    |
    | OVN Controller agent         | edpm-compute-1.ctlplane.example.com | XXX   | UP    |
    | OVN Metadata agent           | edpm-compute-1.ctlplane.example.com | XXX   | UP    |
    | OVN Controller agent         | edpm-compute-0.ctlplane.example.com | XXX   | UP    |
    | OVN Metadata agent           | edpm-compute-0.ctlplane.example.com | XXX   | UP    |
    +------------------------------+-------------------------------------+-------+-------+
    Note

    The existing workload is not impacted if workload traffic is not routed through the control plane.

  6. Open the OpenStackDataPlaneNodeSet CR file for each node set on the data plane, and enable TLS in each:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: <node_set_name>
      namespace: openstack
    spec:
      tlsEnabled: true
    • Replace <node_set_name> with the name of the OpenStackDataPlaneNodeSet CR that the node belongs to.
  7. Save the updated OpenStackDataPlaneNodeSet CR files and apply the updates:

    $ oc apply -f openstack_data_plane.yaml -n openstack
  8. Check that TLS is enabled on each node set:

    $ oc get openstackdataplanenodeset <node_set_name> -n openstack -o json | jq .spec.tlsEnabled
    true
  9. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character.
    Tip

    Give the OpenStackDataPlaneDeployment CR file a descriptive name that indicates the purpose of the modified node set.

  10. Add the OpenStackDataPlaneNodeSet CRs that you modified to enable TLS:

    spec:
      nodeSets:
        - <node_set_name>
    • Provide the required <node_set_name> for each node set on the data plane.
  11. Save the OpenStackDataPlaneDeployment CR deployment file.
  12. Deploy the modified OpenStackDataPlaneNodeSet CRs:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10 -n openstack

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  13. Verify that the modified OpenStackDataPlaneNodeSet CRs are deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete
    
    
    $ oc get openstackdataplanenodeset -n openstack
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

  14. Verify that the nova-compute service has reconnected to RabbitMQ over TLS:

    $ oc rsh openstackclient
    $ openstack compute service list -c Binary -c Host -c Status -c State
    +----------------+-------------------------------------+---------+-------+
    | Binary         | Host                                | Status  | State |
    +----------------+-------------------------------------+---------+-------+
    | nova-conductor | nova-cell0-conductor-0              | enabled | up    |
    | nova-scheduler | nova-scheduler-0                    | enabled | up    |
    | nova-conductor | nova-cell1-conductor-0              | enabled | up    |
    | nova-compute   | edpm-compute-0.ctlplane.example.com | enabled | up    |
    | nova-compute   | edpm-compute-1.ctlplane.example.com | enabled | up    |
    +----------------+-------------------------------------+---------+-------+
  15. Verify that the OVN agents are running again:

    $ openstack network agent list -c 'Agent Type' -c Host -c Alive -c State
    +------------------------------+-------------------------------------+-------+-------+
    | Agent Type                   | Host                                | Alive | State |
    +------------------------------+-------------------------------------+-------+-------+
    | OVN Controller Gateway agent | crc                                 | :-)   | UP    |
    | OVN Controller agent         | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
    | OVN Metadata agent           | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
    | OVN Controller agent         | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
    | OVN Metadata agent           | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
    +------------------------------+-------------------------------------+-------+-------+
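
The OpenStackDataPlaneDeployment CR assembled in steps 9 to 12 might look like the following sketch; the file name and node set names are illustrative:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  # Descriptive, unique name that indicates the purpose of this deployment run
  name: openstack-data-plane-tls
  namespace: openstack
spec:
  nodeSets:
    # One entry per modified OpenStackDataPlaneNodeSet CR
    - openstack-edpm-compute
    - openstack-edpm-networker
```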

4.3. Deploying RHOSO with TLS disabled

TLS is enabled by default when you deploy Red Hat OpenStack Services on OpenShift (RHOSO), but you can disable it if your environment requires it.

Note

You can re-enable TLS on an operational RHOSO environment with minimal disruption.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following spec.tls configuration, if not already present:

    spec:
      tls:
        ingress:
          enabled: false
        podLevel:
          enabled: false
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml
  4. Open the OpenStackDataPlaneNodeSet CR file for each node set on the data plane, and disable TLS by setting spec.tlsEnabled to false:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: <node_set_name>
      namespace: openstack
    spec:
      tlsEnabled: false
    • Replace <node_set_name> with the name of the OpenStackDataPlaneNodeSet CR.
  5. Save the updated OpenStackDataPlaneNodeSet CR files and apply the updates:

    $ oc apply -f openstack_data_plane.yaml
  6. Verify that TLS is disabled on every node set:

    $ oc get openstackdataplanenodeset <node_set_name> -n openstack -o json | jq .spec.tlsEnabled

Chapter 5. Configuring LDAP on RHOSO

To connect Red Hat OpenStack Services on OpenShift (RHOSO) to LDAP so that your OpenStack users authenticate by using pre-established LDAP identities, complete the following tasks:

  1. Use the OpenStack CLI to create the domain.
  2. Use RHOSO to create a secret that contains the required configuration.
  3. Mount the secret to the service by using the OpenStackControlPlane custom resource file.

5.1. Configuring LDAP by using Red Hat Identity Management

Use the OpenStack CLI or the OpenStack Dashboard (horizon) to create OpenStack domains.

Prerequisites

  • A pre-established Red Hat Identity Management server.

Procedure

  1. Create an OpenStack domain:

    $ openstack domain create <name>
    • Replace <name> with the name of your OpenStack domain.
  2. Create a file called keystone-domains.yaml that defines the keystone-domains secret. This secret is mounted into the /etc/keystone/domains configuration directory:

    apiVersion: v1
    kind: Secret
    metadata:
      name: keystone-domains
      namespace: openstack
    type: Opaque
    stringData:
        keystone.<domain_name>.conf: |
            [identity]
            driver = ldap
            [ldap]
            url = ldaps://localhost
            user = cn=openstack,ou=Users,dc=director,dc=example,dc=com
            password = RedactedComplexPassword
            suffix = dc=domain,dc=example,dc=com
            user_tree_dn = ou=Users,dc=domain,dc=example,dc=com
            user_objectclass = person
            group_tree_dn = ou=Groups,dc=example,dc=org
            group_objectclass = groupOfNames
            use_tls = True
  3. Create the secret:

    $ oc apply -f keystone-domains.yaml
  4. Open your OpenStackControlPlane custom resource (CR) file and add the secret by using the extraMounts field:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      keystone:
        template:
          customServiceConfig: |
            [identity]
            domain_specific_drivers_enabled = True
          extraMounts:
          - name: v1
            region: r1
            extraVol:
                - propagation:
                  - Keystone
                  extraVolType: Conf
                  volumes:
                  - name: keystone-domains
                    secret:
                      secretName: keystone-domains
                  mounts:
                  - name: keystone-domains
                    mountPath: "/etc/keystone/domains"
                    readOnly: true
  5. Apply the changes to your OpenStack control plane CR:

    $ oc apply -f openstack_control_plane.yaml
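
The Identity service reads one configuration file per domain from the directory named by the domain_config_dir option, which defaults to /etc/keystone/domains and therefore matches the mountPath used above. If you mount the secret elsewhere, you must set the option explicitly; a sketch of the resulting customServiceConfig:

```yaml
customServiceConfig: |
  [identity]
  domain_specific_drivers_enabled = True
  # Only needed when the secret is mounted somewhere other than the
  # default /etc/keystone/domains
  domain_config_dir = /etc/keystone/domains
```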

Chapter 6. Configuring a Luna HSM back end to work with the RHOSO Key Manager service

When you install Red Hat OpenStack Services on OpenShift (RHOSO), you have the option of using the Key Manager service with either a default SimpleCrypto back end, or using it with a Luna hardware security module (HSM). Using a hardware security module provides hardened protection for storing keys.

When you use a Luna HSM, the Key Manager service communicates with the Luna HSM by using a PKCS #11 interface to load libraries provided by Thales. To integrate your RHOSO deployment with a Luna HSM, complete the procedures in the following sections.

6.1. Adding the Luna HSM client to the Key Manager service

Build a new image for the Key Manager service that integrates the required Thales software. You must repeat this step when you update RHOSO.

Creating an Ansible playbook to build this image simplifies the process of configuring RHOSO for your Luna HSM. The ansible-role-rhoso-luna-hsm RPM, which is part of the RHOSO repository, contains roles that are required for this playbook.

The following playbook automates downloading the barbican-api and barbican-worker images from the Red Hat source repository, adding the Luna client software, and storing the resulting images in your destination repository.

The following steps are run from any system from which you can execute Ansible playbooks.

Prerequisites

Procedure

  1. Use DNF to install ansible-role-rhoso-luna-hsm:

    $ sudo dnf -y install ansible-role-rhoso-luna-hsm
  2. Place the Luna minimal client image for Linux in a known location. In this procedure, the image is placed in /opt/luna.
  3. Move a copy of the Luna minimal client for Linux tarball to /opt/luna:

    $ mv <LunaClient-Minimal-10.7.2.x86_64.tar> /opt/luna
    • Replace <LunaClient-Minimal-10.7.2.x86_64.tar> with the name of your Luna Minimal client for Linux tarball.
  4. Create a playbook called custom-image.yaml that creates the custom Key Manager image:

    ---
    - name: Create and upload the custom Key Manager image
      hosts: localhost
      tasks:
        - name: Build and push the custom barbican images
          ansible.builtin.include_role:
            name: rhoso_luna_hsm
            tasks_from: create_image
          vars:
            barbican_src_image_registry: "quay.io:5001"
            barbican_src_image_namespace: "openstack-k8s-operators"
            barbican_src_image_tag: "latest"
            barbican_dest_image_registry: "<my_registry_url>:5001"
            barbican_dest_image_namespace: "openstack-k8s-operators"
            barbican_dest_image_tag: "luna-custom"
            image_registry_verify_tls: "<true|false>"
            luna_minclient_src: "file:///opt/luna/<filename>"
    • Replace <my_registry_url> with the URL for your registry.
    • Replace <true|false> with either true or false based on the requirements of your image registry.
    • Replace <filename> with the name of your source image, for example: LunaClient-Minimal-10.7.2.x86_64.tar.
  5. Run the playbook:

    $ ansible-playbook custom-image.yaml

6.2. Creating secrets for the Key Manager service

Create secrets for the Key Manager service to enable secure communication with the Luna HSM back end in Red Hat OpenStack Services on OpenShift (RHOSO). These secrets authorize the Key Manager service to authenticate with the hardware and manage encryption keys.

The following steps use the keys, certificates, and configuration for your Luna HSM to create two secrets. The first, called login_secret, contains your HSM partition password. The second, called luna_data_secret, contains your certificates, keys, and chrystoki.conf configuration file. These secrets are required in your Red Hat OpenShift Container Platform environment to enable secure communication between the Key Manager service and your HSM. You create an Ansible playbook to identify the client certificates to be copied in.

Prerequisites

Procedure

  1. Place the Luna certificate and key into the /opt/luna directory tree:

    $ cp <luna_client_name>.pem /opt/luna
    $ cp <luna_client_name>Key.pem /opt/luna
    • Replace <luna_client_name> with the name of your Luna certificate.
  2. Download the server certificate from the HSM device:

    $ scp -O <hsm-device.example.com:server.pem> /opt/luna/
  3. Optional: If you have more than one HSM for HA, get the certificate from every HSM and concatenate them into a single file:

    $ scp -O <hsm-device-01.example.com:server-01.pem> /opt/luna/
    $ scp -O <hsm-device-02.example.com:server-02.pem> /opt/luna/
    $ cat /opt/luna/server-01.pem > /opt/luna/CAFile.pem
    $ cat /opt/luna/server-02.pem >> /opt/luna/CAFile.pem
  4. Update your chrystoki.conf file to look similar to the following:

    Note

    The contents of the LunaClient-Minimal tarball are extracted to the /usr/local/luna/ directory in the Key Manager container. You must update the paths in your chrystoki.conf file to match this example.

    Chrystoki2 = {
      LibUNIX = /usr/local/luna/libs/64/libCryptoki2.so;
      LibUNIX64 = /usr/local/luna/libs/64/libCryptoki2.so;
    }
    
    Luna = {
      DefaultTimeOut = 500000;
      PEDTimeout1 = 100000;
      PEDTimeout2 = 200000;
      PEDTimeout3 = 10000;
      KeypairGenTimeOut = 2700000;
      CloningCommandTimeOut = 300000;
      CommandTimeOutPedSet = 720000;
    }
    
    CardReader = {
      RemoteCommand = 1;
    }
    
    Misc = {
      PE1746Enabled = 0;
      ToolsDir = ./bin/64;
      PartitionPolicyTemplatePath = ./ppt/partition_policy_templates;
      ProtectedAuthenticationPathFlagStatus = 0;
      MutexFolder = ./lock;
    }
    
    LunaSA Client = {
      ReceiveTimeout = 20000;
      SSLConfigFile = /usr/local/luna/openssl.cnf;
      ClientPrivKeyFile = /usr/local/luna/<luna_client_name>Key.pem;
      ClientCertFile = /usr/local/luna/<luna_client_name>.pem;
      ServerCAFile = /usr/local/luna/CAFile.pem;
      NetClient = 1;
      TCPKeepAlive = 1;
      ServerName00 = <ip_address>;
      ServerPort00 = 1792;
      ServerHtl00 = 0;
    }
    • Replace <luna_client_name> with the name of your Luna certificate.
    • Replace <ip_address> with the IP address of your Luna HSM.
  5. Optional: If you are configuring HA, you must include additional entries for the IP addresses of each HSM, and configure the VirtualToken, HASynchronize, and HAConfiguration sections:

    ...
      ServerName00 = <ip_address>;
      ServerPort00 = 1792;
      ServerHtl00 = 0;
      ServerName01 = <ip_address>;
      ServerPort01 = 1792;
      ServerHtl01 = 0;
    }
    
    VirtualToken = {
      VirtualToken00Label = myHAGroup;
      VirtualToken00SN = <virtual_token_sn>;
      VirtualToken00Members = <virtual_token_member>,<virtual_token_member>;
    }
    
    HASynchronize = {
      myHAGroup = 1;
    }
    
    HAConfiguration = {
      haLogStatus = enabled;
    }
    • Replace <virtual_token_sn> with the serial number of your first partition prefixed with a 1. For example, for partition 545000014, use a value of 1545000014.
    • Replace <virtual_token_member> with the serial numbers of the partitions from the HSMs you are using.
  6. Move the chrystoki.conf configuration file to /opt/luna:

    $ mv chrystoki.conf /opt/luna
  7. Create an Ansible playbook called create-luna-secrets.yaml to create the required secrets:

    ---
    - name: Create secrets with the HSM certs and hsm-login credentials
      hosts: localhost
      tasks:
        - name: Create the login and data secrets
          ansible.builtin.include_role:
            name: rhoso_luna_hsm
            tasks_from: create_secrets
          vars:
            luna_client_name: <luna_client_name>
            chrystoki_conf_src: "/opt/luna/chrystoki.conf"
            luna_server_cert_src: "/opt/luna/<server.pem>"
            luna_client_cert_src: "/opt/luna/"
            luna_partition_password: "<my_partition_password>"
            kubeconfig_path: "<kubeconfig_path>"
            oc_dir: "<path_to_oc>"
            luna_data_secret: "luna_data_secret"
            login_secret: "login_secret"
    • Replace <luna_client_name> with the name of your Luna certificate.
    • Replace <server.pem> with the name of your server certificate.
    • Replace <my_partition_password> with your HSM partition password.
    • Replace <kubeconfig_path> with the path to your .kube configuration file. For example: $HOME/.kube/config.
    • Replace <path_to_oc> with the output of which oc.
  8. Run the Ansible playbook:

    $ ansible-playbook create-luna-secrets.yaml
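
The virtual token serial number rule described in step 5 (the first partition's serial number prefixed with a 1) can be sketched in shell; the partition serial number is the example value from the text:

```shell
# Derive the virtual token serial number from the first partition's serial.
partition_sn=545000014
virtual_token_sn="1${partition_sn}"
echo "${virtual_token_sn}"   # prints 1545000014
```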

6.3. Modifying the OpenStackVersion CR for the Key Manager custom image

Update the OpenStack version by using the OpenStackVersion custom resource (CR). The following procedure shows the CR that defines the custom container images.

Procedure

  1. Create a CR file with the following contents:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackVersion
    metadata:
      name: openstack-galera-network-isolation
      namespace: openstack
    spec:
      customContainerImages:
          barbicanAPIImage: <api_image>
          barbicanWorkerImage: <worker_image>
    • Replace <api_image> with the registry and path to the custom barbicanAPIImage.
    • Replace <worker_image> with the registry and path to the custom barbicanWorkerImage.
  2. Apply the OpenStackVersion CR:

    $ oc apply -f <filename>
    • Replace <filename> with the OpenStackVersion CR file name.

6.4. Configuring the Key Manager service for the Luna HSM

You must modify the Key Manager (barbican) service section of the OpenStackControlPlane custom resource (CR) to fully integrate your Luna HSM with Red Hat OpenStack Services on OpenShift (RHOSO).

Procedure

  1. Configure the OpenStackControlPlane CR:

    1. Optional: If you have secrets that are stored by using the RHOSO Key Manager simple_crypto back end, keep those secrets available by enabling multiple back ends:

      spec:
        barbican:
          apiOverride:
            route: {}
          enabled: true
          template:
            globalDefaultSecretStore: pkcs11
            enabledSecretStores:
              - pkcs11
              - simple_crypto
    2. Configure the Key Manager service for use with the Luna HSM:

      spec:
        barbican:
          apiOverride:
            route: {}
          enabled: true
          template:
            globalDefaultSecretStore: pkcs11
            enabledSecretStores:
              - pkcs11
              - simple_crypto
            apiTimeout: 90
            barbicanAPI:
              apiTimeout: 0
              customServiceConfig: |
                [secretstore:pkcs11]
                secret_store_plugin = store_crypto
                crypto_plugin = p11_crypto
                [p11_crypto_plugin]
                plugin_name = PKCS11
                library_path = /usr/local/luna/libs/64/libCryptoki2.so
                token_serial_number = <serial_number>
                mkek_label = <mkek_label>
                hmac_label = <hmac_label>
                encryption_mechanism = CKM_AES_GCM
                aes_gcm_generate_iv = true
                hmac_key_type = CKK_GENERIC_SECRET
                hmac_keygen_mechanism = CKM_GENERIC_SECRET_KEY_GEN
                hmac_keywrap_mechanism = CKM_AES_KEY_WRAP_KWP
                key_wrap_mechanism = CKM_AES_KEY_WRAP_KWP
                key_wrap_generate_iv = true
                always_set_cka_sensitive = true
                os_locking_ok = false
              pkcs11:
                loginSecret: "login_secret"
                clientDataSecret: "luna_data_secret"
                clientDataPath: /usr/local/luna/config
      • Replace <serial_number> with the token serial number of your HSM. If you are using HA, you must replace <serial_number> with the virtual token serial number. For more information, see Creating secrets for the Key Manager service.
      • Replace <mkek_label> with a user-defined label. If you have already defined this label, you must use the same one.
      • Replace <hmac_label> with a user-defined label. If you have already defined this label, you must use the same one.

        Note

        Use one of the following options to identify the HSM to use. These options are mutually exclusive and have the following order of precedence:

        Parameter             Value                    Precedence
        token_serial_number   <serial_number>          1 - Highest
        token_labels          Comma-delimited list     2 - Middle
        slot_id               <slot_id>                3 - Lowest

  2. Deploy the OpenStackControlPlane CR:

    $ oc apply -f openstack_control_plane.yaml
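
As the precedence table above notes, the HSM can also be selected by token label instead of serial number. An illustrative fragment of the same customServiceConfig, with a hypothetical partition label (use only one of the three selectors at a time):

```ini
[p11_crypto_plugin]
plugin_name = PKCS11
library_path = /usr/local/luna/libs/64/libCryptoki2.so
token_labels = <partition_label>
```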

Chapter 7. Configuring a Proteccio HSM back end to work with the RHOSO Key Manager service

When you install Red Hat OpenStack Services on OpenShift (RHOSO), you have the option of using the Key Manager (barbican) service with either a default SimpleCrypto back end, or using it with a hardware security module (HSM). Using a hardware security module provides hardened protection for storing keys.

When you use a Trustway HSM, the Key Manager service communicates with the Trustway HSM by using a PKCS #11 interface to load libraries provided by Eviden. To integrate your RHOSO deployment with a Proteccio HSM, complete the procedures in the following sections.

7.1. Tested software versions for the Trustway hardware security module

The following table details the versions of software tested by Red Hat.

Software    Version
cryptoki    2.20
CRYPTO      167
Firmware    147, 167
FPGA        -1596587865
library     3.17
MCS         65539

7.2. Adding the Trustway HSM client to the Key Manager service

Build a new image for the Key Manager service that integrates the required Proteccio software. You must repeat this step when you update RHOSO. Creating an Ansible playbook to build this image simplifies the process of configuring RHOSO for your Trustway HSM. The ansible-role-rhoso-proteccio-hsm RPM, which is part of the RHOSO repository, contains roles that are required for this playbook. The following playbook automates the required tasks for configuring the Trustway HSM back end to work with the RHOSO Key Manager service:

  • Downloads the barbican-api and barbican-worker images from the Red Hat source repository
  • Adds the Trustway client software to the images
  • Stores the resulting images in your destination repository
  • Creates OpenShift secrets for the Key Manager service

The playbook uses keys, certificates, and configuration for your Trustway HSM to create two secrets. One is called login_secret, which contains your HSM password or PIN. The other secret is called proteccio_data_secret, and it contains your certificates, keys, and the proteccio.rc configuration file. These secrets are required in your Red Hat OpenShift Container Platform (RHOCP) environment to enable secure communication between the Key Manager service and your HSM. You can use an Ansible playbook to identify the client certificates to be copied in.

Prerequisites

  • The Trustway client image for Linux. For information about obtaining this software, contact Eviden.
  • An available image service, such as an internally available Quay service, or an account with quay.io. For more information, see Deploying the Red Hat Quay Operator on OpenShift Container Platform.
  • The client certificate and the key for your Trustway HSM.
  • The Trustway HSM certificate file.
  • You are running commands on a workstation on which you can run Ansible playbooks.

Procedure

  1. Use DNF to install ansible-role-rhoso-proteccio-hsm:

    $ sudo dnf -y install ansible-role-rhoso-proteccio-hsm
  2. Place the Trustway client image for Linux, as well as the client certificate and the client key, into the /opt/proteccio directory tree:

    $ cp <trustway_client_cert>.crt /opt/proteccio
    $ cp <trustway_client_key>.key /opt/proteccio
    $ cp <Proteccio3.06.05.iso> /opt/proteccio
    • Replace <trustway_client_cert> with the file name of your client certificate.
    • Replace <trustway_client_key> with the file name of your client key.
    • Replace <Proteccio3.06.05.iso> with the name of your Trustway client for Linux ISO.
  3. Retrieve the server certificate from the HSM device, and copy it to the /opt/proteccio directory. For more information about retrieving the server certificate from your Proteccio HSM, see the vendor documentation.
  4. Optional: If you have more than one HSM for HA, get the certificate for each HSM and place them all in the /opt/proteccio directory.
  5. Update your proteccio.rc file to look similar to the following:

    [PROTECCIO]
    IPaddr=<Trustway_HSM_IP_address>
    SSL=1
    SrvCert=<HSM_Certificate_Name>.CRT
    
    [CLIENT]
    Mode=0
    LoggingLevel=7
    LogFile=/var/log/barbican/proteccio.log
    StatusFile=/var/log/barbican/HSM_Status.log
    ClntKey=<Client_Certificate_Name>.key
    ClntCert=<Client_Certificate_Name>.crt
    • Replace <Trustway_HSM_IP_address> with the IP address of your Trustway HSM.
    • Replace <HSM_Certificate_Name> with the name of your Trustway certificate.
    • In the file above, Mode=0 means that only a single HSM device is in place.
    • Replace <Client_Certificate_Name> with your client certificate name.
  6. Optional: If you are configuring HA, you must include additional entries for the IP addresses of each HSM. Each new HSM must be inside its own [PROTECCIO] section. Additionally, you must change the Mode parameter inside the [CLIENT] section to a value of either 1 or 2. For more information, see the official Eviden documentation.

    [PROTECCIO]
    IPaddr=<Trustway_HSM-2_IP_address>
    SSL=1
    SrvCert=<HSM-2_Certificate_Name>.CRT
    
    [CLIENT]
    Mode=2
    • Replace <Trustway_HSM-2_IP_address> with the IP address of your second Trustway HSM.
    • Create a new [PROTECCIO] section with the corresponding parameters for every subsequent Trustway unit you have in your environment.
  7. Move the proteccio.rc configuration file to /opt/proteccio:

    $ mv proteccio.rc /opt/proteccio
  8. Create a playbook called ansible-proteccio.yaml with the following contents:

    ---
    - name: Create the custom image and secrets for the Trustway HSM
      hosts: localhost
      vars:
        Trustway_client_name: <name>
        Trustway_server_cert_src: "/opt/proteccio/<server.pem>"
        Trustway_partition_password: "<password>"
        Trustway_data_secret: "Trustway_data_secret"
        login_secret: "login_secret"
        barbican_dest_image_namespace: "<namespace>"
        proteccio_client_src: "file:///opt/proteccio/<iso_file>"
        proteccio_password: "<proteccio_pin>"
        kubeconfig_path: "<kubeconfig_path>"
        oc_dir: "<directory>"
      roles:
        - rhoso_proteccio_hsm
    • Replace <name> with the name of your Trustway certificate.
    • Replace <server.pem> with the name of your server certificate.
    • Replace <password> with your HSM partition password.
    • Replace <namespace> with your account name for Quay.io or another container registry.
    • Replace <iso_file> with the name of the Proteccio client ISO file.
    • Replace <proteccio_pin> with the PIN that you use to log in to the Proteccio HSM.
    • Replace <kubeconfig_path> with the full path to your kubeconfig file. For example: $HOME/.kube/config.
    • Replace <directory> with the full path to the OpenShift client (oc) location.
  9. Run the playbook:

    $ ansible-playbook ansible-proteccio.yaml
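
For the HA case described in step 6, a complete two-HSM proteccio.rc might look like the following sketch; the IP addresses and file names are illustrative, and the Mode value must be chosen per the Eviden documentation:

```ini
[PROTECCIO]
IPaddr=192.0.2.10
SSL=1
SrvCert=hsm-01.CRT

[PROTECCIO]
IPaddr=192.0.2.11
SSL=1
SrvCert=hsm-02.CRT

[CLIENT]
Mode=2
LoggingLevel=7
LogFile=/var/log/barbican/proteccio.log
StatusFile=/var/log/barbican/HSM_Status.log
ClntKey=client.key
ClntCert=client.crt
```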

7.3. Modifying the OpenStackVersion CR for the Key Manager custom image

Update the OpenStack version by using the OpenStackVersion custom resource (CR). The following procedure shows the CR that defines the custom container image.

Procedure

  1. Create a CR file with the following contents:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackVersion
    metadata:
      name: openstack-galera-network-isolation
      namespace: openstack
    spec:
      customContainerImages:
          barbicanAPIImage: <api_image>
          barbicanWorkerImage: <worker_image>
    • Replace <api_image> with the registry and path to the custom barbicanAPIImage.
    • Replace <worker_image> with the registry and path to the custom barbicanWorkerImage.
  2. Apply the OpenStackVersion CR:

    $ oc apply -f <filename>
    • Replace <filename> with the OpenStackVersion CR file name.

7.4. Configuring the Key Manager service for the Trustway HSM

You must modify the Key Manager (barbican) service section of the OpenStackControlPlane custom resource (CR) to fully integrate your Trustway HSM with Red Hat OpenStack Services on OpenShift (RHOSO).

Procedure

  1. Configure the Key Manager service within your OpenStackControlPlane CR for use with the Trustway HSM:

    spec:
      barbican:
        apiOverride:
          route: {}
        enabled: true
        template:
          globalDefaultSecretStore: pkcs11
          enabledSecretStores:
            - pkcs11
            - simple_crypto
          apiTimeout: 90
          barbicanAPI:
            apiTimeout: 0
            customServiceConfig: |
              [secretstore:pkcs11]
              secret_store_plugin = store_crypto
              crypto_plugin = p11_crypto
              [p11_crypto_plugin]
              plugin_name = PKCS11
              library_path = /opt/tw_proteccio/lib/libnethsm.so
              token_labels = <token_label>
              mkek_label = <mkek_label>
              hmac_label = <hmac_label>
              encryption_mechanism = CKM_AES_CBC
              hmac_key_type = CKK_GENERIC_SECRET
              hmac_keygen_mechanism = CKM_GENERIC_SECRET_KEY_GEN
              hmac_mechanism = CKM_SHA256_HMAC
              key_wrap_mechanism = CKM_AES_CBC_PAD
              key_wrap_generate_iv = true
              always_set_cka_sensitive = true
              os_locking_ok = false
    
            pkcs11:
              loginSecret: "login_secret"
              clientDataSecret: "proteccio-data"
              clientDataPath: /etc/proteccio
    • Replace <token_label> with the token label of your HSM. If you are using HA, you must replace <token_label> with the virtual token serial number.
    • Replace <mkek_label> with a user-defined label. If you have already defined this label, you must use the same one.
    • Replace <hmac_label> with a user-defined label. If you have already set this up, you must use the same label.

      Note

      Use one of the following options to identify the HSM to use. These options are mutually exclusive and have the following order of precedence:

      Parameter             Value                    Precedence
      token_serial_number   <serial_number>          1 - Highest
      token_labels          Comma-delimited list     2 - Middle
      slot_id               <slot_id>                3 - Lowest

  2. Optional: If you have secrets that are stored by using the RHOSO Key Manager simple_crypto back end, keep those secrets available by enabling multiple back ends:

    spec:
      barbican:
        apiOverride:
          route: {}
        enabled: true
        template:
          globalDefaultSecretStore: pkcs11
          enabledSecretStores:
            - pkcs11
            - simple_crypto
  3. Deploy the OpenStackControlPlane CR:

    $ oc apply -f openstack_control_plane.yaml

Chapter 8. Configuring federated authentication in RHOSO

Red Hat supports only Red Hat’s single sign-on (SSO) technology as the identity provider for Red Hat OpenStack Services on OpenShift (RHOSO). If you use another vendor, contact Red Hat Support for a support exception.

8.1. Deploying RHOSO with a single sign-on federated IDP

Federation allows users to log in to the OpenStack Dashboard (horizon) by using Red Hat’s single sign-on (SSO) technology.

Note

By default, users who log out of the OpenStack Dashboard are not logged out of SSO.

Making use of a single sign-on federated solution requires modifications of the Identity service (keystone). You can use a secret to configure Red Hat OpenStack Services on OpenShift (RHOSO) Identity service to be integrated into your federated authentication solution.

Note

Your federation client must have implicit flow enabled.

Prerequisites

  • You have installed RHOSO.
  • You have a SSO federated solution in your environment.

Procedure

  1. Retrieve the Identity service (keystone) endpoint:

    $ oc get keystoneapis.keystone.openstack.org -o json | jq '.items[0].status.apiEndpoints.public'
  2. Provide your SSO administrator with the following redirect URIs as well as the web origin:

    https://<keystoneURL>/v3/auth/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/websso/
    https://<keystoneURL>/v3/auth/OS-FEDERATION/websso/openid
    
    webOrigins: https://<keystoneURL>
    • Replace <keystoneURL> with the URL retrieved in step 1. This URL must end in a trailing /.
    • Replace <idp_name> with a value of your choosing, for example, kcipaIDP.

      In response, your SSO administrator provides you with a ClientID and a ClientSecret.

      Note

      The chosen <idp_name> value must match all referenced <idp_name> values in this procedure.
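
The redirect URI that you hand to the SSO administrator is assembled from the Identity service endpoint and your chosen <idp_name>; a small shell sketch with example values:

```shell
# Build the WebSSO redirect URI; note the mandatory trailing slash.
keystone_url="https://keystone-public-openstack.apps.example.com"  # from step 1
idp_name="kcipaIDP"                                                # your chosen value
redirect_uri="${keystone_url}/v3/auth/OS-FEDERATION/identity_providers/${idp_name}/protocols/openid/websso/"
echo "${redirect_uri}"
```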

  3. Retrieve the Memcached hostname:

    1. For an IPv4 deployment run the following command:

      $ oc get memcacheds.memcached.openstack.org -n openstack -o json | jq -r '.items[0].status.serverList[0] | split(":")[0]'
    2. For an IPv6 deployment, run the following command:

      $ oc get memcacheds.memcached.openstack.org -n openstack -o json | jq -r '.items[0].status.serverListWithInet[0]'
  4. Create a keystone-httpd-override.yaml CR file and add the following configuration:

    apiVersion: v1
    kind: Secret
    metadata:
      name: keystone-httpd-override
      namespace: openstack
    type: Opaque
    stringData:
      federation.conf: |
        # Example OIDC directives for the *public* endpoint
        OIDCClaimPrefix "OIDC-"
        OIDCScope "openid email profile"
        OIDCClaimDelimiter ";"
        OIDCPassUserInfoAs "claims"
        OIDCPassClaimsAs "both"
        OIDCClientID "<my_client_id>"
        OIDCClientSecret "<my_client_secret>"
        OIDCCryptoPassphrase "<crypto_pass>"
        OIDCProviderMetadataURL <metadata_url>
        OIDCResponseType "id_token"
        OIDCOAuthClientID "<my_oauth_client_id>"
        OIDCOAuthClientSecret "<my_oauth_client_secret>"
        OIDCOAuthIntrospectionEndpoint "<https://my_oauth_introspection_endpoint>"
        OIDCRedirectURI "{{ .KeystoneEndpointPublic }}/v3/auth/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/websso/"
    
        <LocationMatch "/v3/auth/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/websso">
            AuthType "openid-connect"
            Require valid-user
        </LocationMatch>
    
        <Location ~ "/v3/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/auth">
            AuthType oauth20
            Require valid-user
        </Location>
    
        <LocationMatch "/v3/auth/OS-FEDERATION/websso/openid">
            AuthType "openid-connect"
            Require valid-user
        </LocationMatch>
    • Replace <my_client_id> with your client ID to use for the OpenID Connect provider handshake. You must get this from your SSO administrator.
    • Replace <my_client_secret> with the client secret to use for the OpenID Connect provider handshake. You must get this from your SSO administrator after providing your redirect URLs.
    • Replace <crypto_pass> with a secure passphrase to use when encrypting data for the OpenID Connect handshake. This is a user-defined value.
    • Replace <metadata_url> with the URL that points to your OpenID Connect provider metadata, in the format https://<FQDN>/realms/<realm>/.well-known/openid-configuration. Your SSO administrator provides the requisite <FQDN> and organization-specific <realm> name for your OpenID provider.
    • Replace <https://my_oauth_introspection_endpoint> with the value provided by the SSO administrator.
    • Replace <idp_name> with your chosen string that creates a unique redirect URL, for example, kcipaIDP. You must use the same value for the keystoneFederationIdentityProviderName parameter and in the LocationMatch and Location directive arguments.

      Important

      The full value for the OIDCRedirectURI parameter must end in a trailing /.

  5. Create the secret:

    $ oc create -f keystone-httpd-override.yaml
  6. Get the URL for the OpenStack Dashboard:

    $ oc get horizons.horizon.openstack.org -o json | jq -r '.items[0].status.endpoint'
  7. Edit the keystone section of the OpenStackControlPlane CR file and add the secret:

    keystone:
      template:
        customServiceConfig: |
          [federation]
          trusted_dashboard=<horizon_endpoint>/dashboard/auth/websso/
          [openid]
          remote_id_attribute=HTTP_OIDC_ISS
          [auth]
          methods = password,token,oauth1,mapped,application_credential,openid
        httpdCustomization:
          customConfigSecret: keystone-httpd-override
    • Replace <horizon_endpoint> with the value you retrieved in step 6.
    • Remove external from the methods = comma delimited list.
    • Add the httpdCustomization.customConfigSecret parameter and set its value to the name of the secret that you created from the keystone-httpd-override.yaml CR file.
  8. Edit the horizon section of the OpenStackControlPlane CR file to configure the OpenStack Dashboard (horizon):

    horizon:
      template:
        customServiceConfig: |
          # Point Horizon to the Keystone public endpoint
          OPENSTACK_KEYSTONE_URL = "<keystone_endpoint>/v3"
    
          # Enable WebSSO in Horizon
          WEBSSO_ENABLED = True
    
          # Provide login options in Horizon's dropdown menu
          WEBSSO_CHOICES = (
            ("credentials", _("Keystone Credentials")),
            ("OIDC", _("OpenID Connect")),
          )
    
          # Map Horizon's "OIDC" choice to the Keystone IDP and protocol
          WEBSSO_IDP_MAPPING = {
            "OIDC": ("<idp_name>", "openid"),
          }
    • Replace <keystone_endpoint> with the value you retrieved in the first step.
    • Replace <idp_name> with your chosen string that creates a unique redirect URL, for example, kcipaIDP.
  9. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml

8.2. Integrating the Identity service with a single sign-on federated IdP

After you deploy Red Hat OpenStack Services on OpenShift (RHOSO) with Red Hat’s single sign-on (SSO) technology for federation, you must integrate SSO with RHOSO.

Procedure

  1. Create a federated domain:

    $ openstack domain create <federated_domain_name>
    • Replace <federated_domain_name> with the name of the domain you are managing with your identity provider, for example, my_domain.

      For example:

      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description |                                  |
      | enabled     | True                             |
      | id          | b493634c9dbf4546a2d1988af181d7c9 |
      | name        | my_domain                        |
      | options     | {}                               |
      | tags        | []                               |
      +-------------+----------------------------------+
  2. Set up the federation identity provider:

    $ openstack identity provider create --remote-id https://<sso_fqdn>:9443/realms/<realm> --domain <federated_domain_name> <idp_name>
    • Replace <sso_fqdn> with the fully qualified domain name for your SSO identity provider.
    • Replace <realm> with the SSO realm. The default realm is master.
    • Replace <federated_domain_name> with the name of the federated domain that you created in step 1, for example, my_domain.
    • Replace <idp_name> with the string that you chose when deploying SSO to create the unique redirect URL, for example, kcipaIDP.

      For example:

      +-------------------+-----------------------------------------------------+
      | Field             | Value                                               |
      +-------------------+-----------------------------------------------------+
      | authorization_ttl | None                                                |
      | description       | None                                                |
      | domain_id         | b493634c9dbf4546a2d1988af181d7c9                    |
      | enabled           | True                                                |
      | id                | kcipaIDP                                            |
      | remote_ids        | https://sso.fqdn.local:9443/realms/master           |
      +-------------------+-----------------------------------------------------+
  3. Create a mapping file that is unique to the identity needs of your cloud:

    $ cat > mapping.json << EOF
    [
        {
            "local": [
                {
                    "user": {
                        "name": "{0}"
                    },
                    "group": {
                        "domain": {
                            "name": "<federated_domain_name>"
                        },
                        "name": "<federated_group_name>"
                    }
                }
            ],
            "remote": [
                {
                    "type": "OIDC-preferred_username"
                }
            ]
        }
    ]
    EOF
    • Replace <federated_domain_name> with the domain you created in step 1, for example, my_domain.
    • Replace <federated_group_name> with the name of the federated group that you create in a later step, for example, my_fed_group.
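      Before you create the mapping rules, you can optionally validate the file locally. The following sketch writes an example mapping, using the sample domain and group names from this procedure, and checks that it is valid JSON:

      ```shell
      # Sketch: write an example mapping file and confirm that it parses as
      # JSON before passing it to "openstack mapping create". The domain and
      # group names are the example values from this procedure.
      cat > mapping.json << 'EOF'
      [
          {
              "local": [
                  {
                      "user": {
                          "name": "{0}"
                      },
                      "group": {
                          "domain": {
                              "name": "my_domain"
                          },
                          "name": "my_fed_group"
                      }
                  }
              ],
              "remote": [
                  {
                      "type": "OIDC-preferred_username"
                  }
              ]
          }
      ]
      EOF
      python3 -m json.tool mapping.json > /dev/null && echo "mapping.json: valid JSON"
      ```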
  4. Use the mapping file to create the federation mapping rules for RHOSO:

    $ openstack mapping create --rules <mapping_file> <mapping_rules>
    • Replace <mapping_file> with the name of the mapping file that you created in the previous step, for example, mapping.json.
    • Replace <mapping_rules> with the name of the mapping rules created from this file, for example, IPAmap.
  5. Create a federated group:

    $ openstack group create --domain <federated_domain_name> <federated_group_name>
    • Replace <federated_domain_name> with the name of the domain that you created in step 1, for example, my_domain.
    • Replace <federated_group_name> with the name of the federated group that you specified in the mapping file, for example, my_fed_group.
  6. Create an Identity service (keystone) project:

    $ openstack project create --domain <federated_domain_name> <federated_project_name>
    • Replace <federated_project_name> with the name of the Identity service project.
  7. Add the Identity service federation group to a role:

    $ openstack role add --group <federated_group_name> --group-domain <federated_domain_name> --project <federated_project_name> --project-domain <federated_domain_name> member
  8. Create the OpenID federation protocol:

    $ openstack federation protocol create openid --mapping <mapping_rules> --identity-provider <idp_name>
    • Replace <mapping_rules> with the name of the mapping rules you created from your mapping file, for example, IPAmap.
    • Replace <idp_name> with your chosen string that creates the unique redirect URL, for example, kcipaIDP.

Chapter 9. Configuring multi-realm federated authentication in RHOSO

You can configure Red Hat OpenStack Services on OpenShift (RHOSO) Identity service (keystone) and Dashboard (horizon) to provide multi-realm federated authentication using OpenID Connect (OIDC) as the protocol. Multi-realm federation allows users to log in to the OpenStack Dashboard by using single sign-on (SSO) and select from one of several external Identity Providers (IdPs).

9.1. Deploying RHOSO with multiple federated Identity Providers

Multi-realm federation allows users to log in to the Red Hat OpenStack Services on OpenShift (RHOSO) Dashboard by using single sign-on (SSO) and select from one of several external Identity Providers (IdPs).

Note

The RHOSO deployment of multiple federated IdPs implements the Web SSO authentication flow because the OpenStack CLI does not support multiple IdPs.

Prerequisites

  • You have installed RHOSO.
  • You have multiple external OpenID Connect (OIDC) IdPs configured in your environment.

Procedure

  1. Choose a name to uniquely identify each IdP.

    In this example there are two IdPs, whose names are referenced as <idp_name_1> and <idp_name_2>.

  2. Obtain the following settings from each IdP administrator:

    • The FQDN for each IdP that is referenced in this procedure as <fqdn_1> and <fqdn_2>.
    • The federation Realm Name for each IdP that is referenced in this procedure as <realm_name_1> and <realm_name_2>.
    • The Client ID for each IdP that is referenced in this procedure as <client_id_1> and <client_id_2>.
    • The Client Secret for each IdP that is referenced in this procedure as <client_secret_1> and <client_secret_2>.
    • The Provider Metadata URL for each IdP that is referenced in this procedure as <provider_metadata_url_1> and <provider_metadata_url_2>.
  3. Retrieve the Identity service (keystone) public endpoint:

    $ oc get keystoneapis.keystone.openstack.org -o json | jq '.items[0].status.apiEndpoints.public'

    This Identity service endpoint is referenced in this procedure as <keystone_url>.

  4. Provide the IdP administrators with the following information:

    • Web origin:

      https://<keystone_url>
    • Redirect URIs:

      https://<keystone_url>/v3/auth/OS-FEDERATION/websso/openid

      Provide a URI for each IdP that contains its unique IdP name and ends in a trailing /. Send each URI to the respective IdP administrator:

      https://<keystone_url>/v3/auth/OS-FEDERATION/identity_providers/<idp_name_1>/protocols/openid/websso/
      https://<keystone_url>/v3/auth/OS-FEDERATION/identity_providers/<idp_name_2>/protocols/openid/websso/
    • Each federation client must have Implicit flow enabled and not Authorization code flow.
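      The per-IdP redirect URIs can be generated with a short loop. In this sketch, the Keystone host and IdP names are example values:

      ```shell
      # Sketch: build one redirect URI per IdP; each URI must end in a trailing /.
      KEYSTONE_URL="keystone-public-openstack.apps.example.com"  # example value
      for IDP_NAME in idp_name_1 idp_name_2; do                  # example IdP names
        echo "https://${KEYSTONE_URL}/v3/auth/OS-FEDERATION/identity_providers/${IDP_NAME}/protocols/openid/websso/"
      done
      ```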
  5. Create a custom resource (CR) file for a secret called keystone-httpd-override:

    apiVersion: v1
    kind: Secret
    metadata:
     name: keystone-httpd-override
     namespace: openstack
    type: Opaque
    stringData:
     federation.conf: |
       # Example OIDC directives for the *public* endpoint
       OIDCClaimPrefix "OIDC-"
       OIDCResponseType "id_token"
       OIDCScope "openid email profile"
       OIDCClaimDelimiter ";"
       OIDCPassUserInfoAs "claims"
       OIDCPassClaimsAs "both"
       OIDCCryptoPassphrase "<crypto_pass>"
       OIDCRedirectURI "<keystone_url>/v3/redirect_uri/"
       OIDCMetadataDir "/var/lib/httpd/metadata"
       OIDCAuthRequestParams "prompt=login"
       <IfModule headers_module>
         <Location "/v3/local-logout/clear">
           Header always add Set-Cookie "mod_auth_openidc_session=deleted; Path=/; Max-Age=0; HttpOnly; Secure; SameSite=None"
         </Location>
       </IfModule>
    
       RewriteEngine On
    
       RewriteRule ^/v3/auth/OS-FEDERATION/identity_providers/(<idp_name_1>|<idp_name_2>)/protocols/openid/websso$ \
         /v3/local-logout/clear [R=302,L]
    
       RewriteRule ^/v3/local-logout/clear$ \
         /v3/auth/OS-FEDERATION/websso/openid [R=302,L,QSA,NE]
    
       <Location "/v3/auth/OS-FEDERATION/websso/openid">
         AuthType openid-connect
         Require  valid-user
       </Location>
    
       <Location "/v3/redirect_uri">
         AuthType openid-connect
         Require  valid-user
       </Location>
    Important

    The full value of the OIDCRedirectURI parameter must end in a trailing /.

    • Replace <crypto_pass> with a user-defined passphrase to use when encrypting data for the OpenID Connect handshake.
    • Replace <keystone_url> with the Identity service endpoint value that you retrieved in step 3.
    • Replace <idp_name_1> and <idp_name_2> with their unique IdP names that you specified in step 1.
    • The following OIDC parameter and associated Apache configuration are designed to provide the most failsafe solution for supporting the login of users from multiple IdPs. Consequently, previous sessions are not saved, and users must reauthenticate after they log out of the Dashboard:

       OIDCAuthRequestParams "prompt=login"
      
       <IfModule headers_module>
         <Location "/v3/local-logout/clear">
           Header always add Set-Cookie "mod_auth_openidc_session=deleted; Path=/; Max-Age=0; HttpOnly; Secure; SameSite=None"
         </Location>
       </IfModule>
        RewriteEngine On
        RewriteRule ^/v3/auth/OS-FEDERATION/identity_providers/(<idp_name_1>|<idp_name_2>)/protocols/openid/websso$ \
         /v3/local-logout/clear [R=302,L]
        RewriteRule ^/v3/local-logout/clear$ \
         /v3/auth/OS-FEDERATION/websso/openid [R=302,L,QSA,NE]

      If users in your multiple federated IdP deployment do not belong to more than one IdP, you can allow users to reopen a closed Dashboard session without reauthenticating. In this case, you must remove this OIDC parameter and provide a different Apache LocationMatch configuration that saves previous sessions.

  6. Create the keystone-httpd-override secret:

    $ oc create -f keystone-httpd-override.yaml
  7. Retrieve the URL for the Dashboard:

    $ oc get horizons.horizon.openstack.org -o json | jq -r '.items[0].status.endpoint'
  8. Use the following Ansible playbook to create a secret called federation-realm-data:

    - name: Download realm1 OpenID configuration
      ansible.builtin.uri:
        url: "<provider_metadata_url_1>"
        method: GET
        return_content: true
        validate_certs: false
      register: openid_wellknown_config1
    
    - name: Download realm2 OpenID configuration
      ansible.builtin.uri:
        url: "<provider_metadata_url_2>"
        method: GET
        return_content: true
        validate_certs: false
      register: openid_wellknown_config2
    
    - name: Set federation_config_items
      ansible.builtin.set_fact:
        federation_config_items:
          - filename: "<fqdn_1>%2Fauth%2Frealms%2F<realm_name_1>.conf"
            contents: |
              {
                "scope" : "openid email profile"
              }
          - filename: "<fqdn_1>%2Fauth%2Frealms%2F<realm_name_1>.client"
            contents: "{{ {'client_id': '<client_id_1>', 'client_secret': '<client_secret_1>'} | to_json }}"
          - filename: "<fqdn_1>%2Fauth%2Frealms%2F<realm_name_1>.provider"
            contents: |
              {{ openid_wellknown_config1.content }}
          - filename: "<fqdn_2>%2Fauth%2Frealms%2F<realm_name_2>.conf"
            contents: |
              {
                "scope" : "openid email profile"
              }
          - filename: "<fqdn_2>%2Fauth%2Frealms%2F<realm_name_2>.client"
            contents: "{{ {'client_id': '<client_id_2>', 'client_secret': '<client_secret_2>'} | to_json }}"
          - filename: "<fqdn_2>%2Fauth%2Frealms%2F<realm_name_2>.provider"
            contents: |
              {{ openid_wellknown_config2.content }}
    - name: Generate the final federation_config.json string (as a dictionary)
      ansible.builtin.set_fact:
        _raw_federation_config_json_value: |
          {
          {% for item in federation_config_items %}
            "{{ item.filename }}": {{ item.contents }}{% if not loop.last %},{% endif %}
          {% endfor %}
          }
    - name: Final JSON string for Secret stringData
      ansible.builtin.set_fact:
        federation_config_json_string: "{{ _raw_federation_config_json_value }}"
    
    - name: Create a Kubernetes Secret with federation metadata
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          type: Opaque
          metadata:
            name: federation-realm-data
            namespace: openstack
          stringData:
            federation-config.json: "{{ federation_config_json_string }}"
    • Replace the IdP variables with the values you obtained from the IdP administrators in step 2.
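      The %2F sequences in the filename values above are URL encoding of the issuer path (<fqdn>/auth/realms/<realm>), which is the naming scheme that mod_auth_openidc expects for per-issuer metadata files. The following sketch, with example values, shows how one filename is derived:

      ```shell
      # Sketch: derive a ".client" filename by URL-encoding an example issuer path.
      FQDN="sso.example.com"    # example value
      REALM="myrealm"           # example value
      python3 -c "from urllib.parse import quote; print(quote('${FQDN}/auth/realms/${REALM}', safe='') + '.client')"
      # Prints: sso.example.com%2Fauth%2Frealms%2Fmyrealm.client
      ```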
  9. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  10. Edit the keystone section of the OpenStackControlPlane CR:

    keystone:
     template:
       customServiceConfig: |
         [federation]
         trusted_dashboard=<horizon_endpoint>/dashboard/auth/websso/
         [openid]
         remote_id_attribute=HTTP_OIDC_ISS
         [auth]
         methods = password,token,oauth1,mapped,application_credential,openid
       httpdCustomization:
         customConfigSecret: keystone-httpd-override
         federatedRealmConfig: federation-realm-data
    • Replace <horizon_endpoint> with the Dashboard URL you retrieved in step 7.
    • Remove external from the methods = comma delimited list.
    • Add the httpdCustomization.customConfigSecret parameter and set this value to the key created in the keystone-httpd-override.yaml CR file in step 5.
    • Add the httpdCustomization.federatedRealmConfig parameter and set this value to the federation-realm-data secret created by the Ansible Playbook in step 8.
  11. Edit the horizon section of the OpenStackControlPlane CR:

    horizon:
     template:
       customServiceConfig: |
         # Point horizon to the keystone public endpoint
         OPENSTACK_KEYSTONE_URL = "<keystone_url>/v3"

         # Enable WebSSO in horizon
         WEBSSO_ENABLED = True

         # Provide login options in the horizon dropdown menu
         WEBSSO_CHOICES = (
           ("credentials", _("Keystone Credentials")),
           ("OIDC1", _("OpenID Connect IdP1")),
           ("OIDC2", _("OpenID Connect IdP2")),
         )

         # Map the "OIDC" choices of horizon to the keystone IdPs and protocol
         WEBSSO_IDP_MAPPING = {
           "OIDC1": ("<idp_name_1>", "openid"),
           "OIDC2": ("<idp_name_2>", "openid"),
         }
    • Replace <keystone_url> with the Identity service endpoint that you retrieved in step 3.
    • Replace <idp_name_1> and <idp_name_2> with the unique IdP names that you chose in step 1.
  12. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml

Chapter 10. Configuring a Single Keystone Multiple OpenStacks multi-region deployment to simplify user management and configuration

The Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment simplifies user management and configuration for multiple regions.

In standard multi-region RHOSO deployments, each region is isolated with its own Identity (keystone) and Dashboard (horizon) services. This requires separate user accounts for each region, making credential management and rotation difficult.

SKMO multi-region RHOSO deployment architecture

An SKMO deployment requires the following architecture to facilitate the simplified user management and configuration:

  • An SKMO deployment consists of a single central region and multiple workload regions.

    Important

    When you deploy each workload region, you must define a unique RHOSO namespace, a unique region name, and unique Identity service user names for all the OpenStack services that communicate with the Identity service. For more information about the unique networking and configuration requirements of the SKMO deployment, see Plan your Single Keystone Multiple OpenStacks deployment.

  • The central region provides the Dashboard (horizon) service that is shared by all the regions of the SKMO deployment.
  • The central region provides a centralized Identity (keystone) service:

    • You must use the Identity service of the central region to create the default administrator user for each workload region.
    • You must use the Identity service of the central region to create the catalog entries for the public and private endpoints of the Identity service for each workload region.
  • The centralized Identity and Dashboard services provide a single pane of glass for the simplified configuration and management of the users. Each end user has a single set of credentials. You can enable or disable their access to every region in the central region. For more information, see Deploy Single Keystone Multiple OpenStacks.

10.1. Plan your Single Keystone Multiple OpenStacks deployment

The Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment adopts an interdependent regional architecture. The central region provides centralized Dashboard (horizon) and Identity (keystone) services that are relied upon by all the other workload regions. Therefore, you must implement the following requirements for a successful SKMO deployment:

  1. The interdependence between the workload and the central regions requires that every region must provide the following unique identifications for a successful SKMO deployment:

    • A unique namespace to differentiate the RHOSO deployment of each region. For more information, see Create a unique namespace for each workload region.

      Important

      The RHOSO deployment namespace forms part of the DNS name for each OpenStack service. If you do not use different RHOSO namespaces for every region, conflicts occur between services in your different regions.

    • A unique region name defined by the spec of their Identity (keystone) service in the OpenStackControlPlane custom resource (CR). For more information, see Modify the deployment of each workload region.

      Note

      When the central Red Hat OpenStack Services on OpenShift (RHOSO) region is deployed, this region is called regionOne by default. If you use a workload region naming convention, then you can rename the region name of the central region to make it more easily identifiable. For more information, see Rename the central region.

    • The OpenStack services that communicate with the Identity service must use uniquely named Identity service users to simplify the task of managing their individual credentials. For more information, see Modify the deployment of each workload region.

      Warning

      If you do not specify unique Identity service user names for all the OpenStack services that communicate with the Identity service, changing the password for a service user disrupts that service in every workload region that uses the same user, unless you first schedule a maintenance window for all of these regions.

  2. The interdependence between the workload and the central regions imposes the following restrictions that must be met by the logical networking topology of your DNS configuration:

    • Each workload region must resolve the DNS name of the Identity service in the central region and access it.
    • The data plane nodes in each workload region must resolve the DNS name of the Identity service in the central region and access it.
    • After deploying the workload regions, the Dashboard (horizon) service in the central region must resolve the DNS names in the service catalog of every workload region and access them.
  3. The interdependence between the Identity services in each workload region and the Identity service of the central region changes how the service-to-service communication for the workload regions are routed. For more information, see Create the public and private endpoints of each workload region.

    In a normal OpenStack deployment, such as the central region, the Identity service has both a public and an internal endpoint. These endpoints exist in separate networks to keep internal service-to-service communication separate from public traffic. However, the workload regions must send all of their internal service-to-service communication traffic to the public endpoint, and therefore the public network, of the central region. Even though this traffic is encrypted, it is more vulnerable to DDoS attacks because it is not isolated on a separate internal network, which makes it easier for external attackers to intercept these messages.

  4. The barbican-keystone-listener service requires access to the RabbitMQ message queue so that when a project is deleted by the Identity service (keystone), it can tell the Key Manager service (barbican) to clean up the related secrets and the other artifacts that it manages.

    In an SKMO deployment, the RabbitMQ message queue of the central region, not the workload regions, contains the necessary Identity service (keystone) messages. For this reason, the barbican-keystone-listener services in the workload regions cannot know when projects are deleted, and the Key Manager service cannot clean them up. Therefore, you must implement a third-party application, such as Skupper, and configure your SKMO deployment to allow the barbican-keystone-listener services in the workload regions to access the RabbitMQ message queue in the central region to clean up deleted projects.

10.2. Deploy Single Keystone Multiple OpenStacks

A Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment creates a centralized Dashboard (horizon) and Identity (keystone) service to provide a single pane of glass for the simplified configuration and management of the users. Each end user has a single set of credentials and their access to every workload region can be enabled or disabled in the central region.

Note

You must not manually configure the Dashboard (horizon) service of the central region to connect to the various workload regions because a Managing regions dropdown list is added to the UI automatically for a SKMO deployment to allow users to select the required workload regions. For more information, see SKMO Dashboard region configuration.

Prerequisites

Procedure

  1. Deploy the central region called regionOne by default, unless you rename it. For more information, see Rename the central region.

    The deployment of the central region does not require a data plane.

  2. Create the default administrator Identity service user for each workload region in the central region. These Identity service users must be granted the admin role in the admin project of the central region. For more information, see Create the default administrator user for each workload region.
  3. Create the catalog entries for the public and private endpoints of the Identity service in each workload region by using the Identity service in the central region. For more information, see Create the public and private endpoints of each workload region.

    Note

    Both the public and private endpoints of the Identity service in each workload region point to the public Identity service endpoint in the central region.

  4. Modify and deploy each workload region. An important part of this deployment modification involves creating a unique region name and unique Identity service users for each workload region. For more information, see Modify the deployment of each workload region.
  5. After you deploy a workload region of a multi-region Red Hat OpenStack Services on OpenShift (RHOSO) Single Keystone Multiple OpenStacks (SKMO) deployment, you must configure the deployed central region to trust this workload region. For more information, see Configure the central region to trust a deployed workload region.

10.3. Rename the central region

When the central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment is deployed, this region is called regionOne by default. If you use a workload region naming convention when you name the workload regions, you can rename the central region to make it more easily identifiable.

Prerequisites

  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have the oc command line tool installed on your workstation.

Procedure

  1. In the central region, set the default RHOSO namespace:

    $ oc project <central-region-namespace>
    • Replace <central-region-namespace> with the name of the unique namespace for the central region, for example openstack.
  2. Edit the OpenStackControlPlane CR of your central region on your workstation:

    $ oc edit openstackcontrolplane <name>
    • Replace <name> with the name of your YAML OpenStackControlPlane CR. You can use the following command to retrieve this name: oc get openstackcontrolplane.
  3. Configure the region parameter of the Identity service (keystone):

    ...
      spec:
        ...
        keystone:
          ...
          template:
            ...
            region: <central-region-name>
    • Replace <central-region-name> with the region name for your central region.
  4. Save and close the editor to apply this change.
  5. Wait for the deployment of the control plane to reach the Ready status:

    $ oc wait openstackcontrolplane <name> --for=condition=Ready --timeout=600s

10.4. Create the default administrator user for each workload region

You must use the Identity service (keystone) of the deployed central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment to create the default administrator user for each workload region. These workload administrator users must be granted the admin role in the admin project of the central region. For more information, see Deploy Single Keystone Multiple OpenStacks.

Prerequisites

  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have the oc command line tool installed on your workstation.

Procedure

  1. In the central region, set the default RHOSO namespace:

    $ oc project <central-region-namespace>
    • Replace <central-region-namespace> with the name of the unique namespace for the central region, for example openstack.
  2. Access the remote shell for the OpenStackClient pod from your workstation to run OpenStack CLI commands:

    $ oc rsh openstackclient
  3. Create the default administrator user for each workload region:

    $ openstack user create --domain Default --project <central-region-admin-project> --project-domain Default --password <workload-region-admin-password> <workload-region-admin-name>
    • Replace <central-region-admin-project> with the name of the admin project in the central region that is admin by default.
    • Replace <workload-region-admin-password> with the password of the default administrator user of each workload region.

      Note

      Set this password as the value of the AdminPassword: parameter of the Secret custom resource (CR) file that you must create to provide secure access to the RHOSO service pods when you deploy each workload region.

    • Replace <workload-region-admin-name> with the name of the default administrator user of each workload region, for example admin-two.
  4. Add the following roles to each default workload region administrator user:

    $ openstack role add --project admin --project-domain Default --user <workload-region-admin-name> --user-domain Default admin
    $ openstack role add --system all --user <workload-region-admin-name> --user-domain Default admin

10.5. Create the public and private endpoints of each workload region

You must use the Identity service (keystone) of the deployed central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment to create the catalog entries for the public and private endpoints of the Identity service in each workload region. For more information, see Deploy Single Keystone Multiple OpenStacks.

Both the public and private endpoints of the Identity service in each workload region must specify the public Identity service endpoint of the central region.

Therefore, even though the internal service-to-service communication traffic of the workload regions is encrypted, it is more vulnerable to DDoS attacks because it is not segregated on a separate internal network, which makes it easier for external attackers to intercept these messages.

Prerequisites

  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have the oc command line tool installed on your workstation.

Procedure

  1. In the central region, set the default RHOSO namespace:

    $ oc project <central-region-namespace>
    • Replace <central-region-namespace> with the name of the unique namespace for the central region, for example openstack.
  2. Access the remote shell for the OpenStackClient pod from your workstation to run OpenStack CLI commands:

    $ oc rsh openstackclient
  3. Obtain the public Identity service endpoint of the central region:

    $ openstack endpoint list --region <central-region-name> --service keystone --interface public
    • Replace <central-region-name> with the name of your central region, which is regionOne by default.
  4. Copy this URL. It is referenced in this procedure as <central-region-public-keystone-url>.
  5. Create the public and private endpoints of each workload region to specify the public Identity service endpoint of the central region:

    $ openstack endpoint create --region <workload-region-name> keystone public <central-region-public-keystone-url>
    $ openstack endpoint create --region <workload-region-name> keystone internal <central-region-public-keystone-url>
    • Replace <workload-region-name> with the name of the required workload region, for example regionTwo.

      This creates the catalog entries for the public and private endpoints of the Identity service in each workload region in the Identity service catalog of the central region.

10.6. Modify the deployment of each workload region

You must modify the deployment of each workload region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment because the Dashboard (horizon) and Identity (keystone) services of the central region are shared with all the workload regions. For more information, see Deploy Single Keystone Multiple OpenStacks.

An important part of this deployment modification involves creating a unique region name and unique Identity service users for each workload region. For more information, see Plan your Single Keystone Multiple OpenStacks deployment.

Procedure

  1. Create a unique RHOSO deployment namespace for each workload region to differentiate these RHOSO deployments from each other. For more information, see Create a unique namespace for each workload region.
  2. Specify the password of the administrator user that you manually created for each workload region as the value of the AdminPassword: parameter when you create the Secret custom resource (CR) to provide secure access to the RHOSO service pods in each workload region. For more information, see Create the default administrator user for each workload region and Provide secure access to RHOSO services in the Deploying Red Hat OpenStack Services on OpenShift guide.
  3. Obtain the CA certificate from the central region and add it to a secret in each workload region so it can be trusted. For more information, see Configure each workload region to trust the central region.
  4. Create a modified OpenStackControlPlane custom resource (CR) for each workload region. For more information, see Modify the control plane CR of each workload region.

    This involves defining a unique region name in the spec of the Identity (keystone) service for each workload region and creating unique Identity service users.

10.6.1. Create a unique namespace for each workload region

You must create a unique namespace for every workload region of your Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment. This unique namespace is necessary to differentiate the RHOSO deployment of each region.

Important

The RHOSO deployment namespace forms part of the DNS name for each OpenStack service. If you do not use different namespaces for every region, conflicts occur between services in your different regions.

Prerequisites

  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
  • You have the oc command line tool installed on your workstation.

Procedure

  1. Create a project in your deployed RHOSO environment:

    $ oc new-project <workload-region-namespace>
    • Replace <workload-region-namespace> with the name of the unique namespace for each workload region, for example openstack-two.
  2. Ensure that this namespace is labeled to enable privileged pod creation by the OpenStack Operators:

    $ oc get namespace <workload-region-namespace> -ojsonpath='{.metadata.labels}' | jq
    {
      "kubernetes.io/metadata.name": "<workload-region-namespace>",
      "pod-security.kubernetes.io/enforce": "privileged",
      "security.openshift.io/scc.podSecurityLabelSync": "false"
    }
  3. If the security context constraint (SCC) is not "privileged", use the following commands to change it:

    $ oc label ns <workload-region-namespace> security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    $ oc label ns <workload-region-namespace> pod-security.kubernetes.io/enforce=privileged --overwrite

10.6.2. Configure each workload region to trust the central region

After you deploy the central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must configure each workload region to trust the central region. For more information, see Deploy Single Keystone Multiple OpenStacks.

In the following procedure, the Identity service region name of the central region is regionOne:

Prerequisites

  • You are logged on to a workstation that has access to your Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
  • You have the oc command line tool installed on your workstation.

Procedure

  1. In the central region, set the default RHOSO namespace:

    $ oc project <central-region-namespace>
    • Replace <central-region-namespace> with the name of the unique namespace for the central region, for example openstack.
  2. Obtain the CA certificate of the central region and extract it into a file, for example regionOne-ca.crt:

    Note

    To decode the certificate before creating the output .crt file, add | base64 -d to this command.

    $ oc get secret rootca-public -o yaml | yq '.data."ca.crt"' > regionOne-ca.crt
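The encode and decode steps that the Note describes can be sketched as a self-contained round trip. This is an illustration only: the certificate content below is a stand-in string, not a real CA certificate.

```shell
# Stand-in certificate content for illustration only
printf -- '-----BEGIN CERTIFICATE-----\ndemo\n-----END CERTIFICATE-----\n' > source-ca.crt

# A secret stores the certificate base64 encoded, as in .data."ca.crt"
# (-w0 disables line wrapping; GNU coreutils)
encoded=$(base64 -w0 < source-ca.crt)

# Decoding with base64 -d restores the original PEM text
printf '%s' "$encoded" | base64 -d > regionOne-ca.crt

cmp -s source-ca.crt regionOne-ca.crt && echo "round trip ok"
```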
  3. Copy the regionOne-ca.crt file to a deployed workload region.
  4. In this workload region, set the default RHOSO namespace:

    $ oc project <workload-region-namespace>
    • Replace <workload-region-namespace> with the name of the unique namespace for this workload region, for example openstack-two.
  5. Create a PEM-formatted bundle in this workload region, for example custom-ca-certs.pem, that includes the contents of this regionOne-ca.crt file and all the other custom CA certificates that you want each workload region to trust.
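A PEM-formatted bundle is simply the certificates concatenated into one file. A minimal sketch with stand-in files, using the example file names from this procedure:

```shell
# Stand-in certificates for illustration; in the procedure these are real
# CA files, such as regionOne-ca.crt and any other CAs to trust
printf 'CENTRAL-REGION-CA\n' > regionOne-ca.crt
printf 'OTHER-CUSTOM-CA\n' > other-custom-ca.crt

# Concatenate all certificates that this workload region must trust
cat regionOne-ca.crt other-custom-ca.crt > custom-ca-certs.pem
```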
  6. Create a manifest file for a secret in this workload region that specifies the contents of the custom-ca-certs.pem bundle created in the previous step. In this example, this manifest file is called custom-ca-certs.yaml and the secret is called custom-ca-certs:

    apiVersion: v1
    data:
      custom-ca-certs.pem: <contents-of-PEM-bundle>
    kind: Secret
    metadata:
      annotations:
      name: custom-ca-certs
      namespace: <workload-region-namespace>
    type: Opaque
    • Replace <contents-of-PEM-bundle> with the base64 encoded string of the contents of the PEM-formatted bundle called custom-ca-certs.pem that you created in step 5, which includes the CA certificate from the central region. You can get this base64 encoded string by using the following command: cat custom-ca-certs.pem | base64 -w0.
    • Replace <workload-region-namespace> with the name of the unique namespace that you created for this workload region, for example openstack-two.
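Rather than pasting the encoded string by hand, you can generate the manifest with command substitution. This is a sketch using stand-in bundle content and the example openstack-two namespace; in the procedure, custom-ca-certs.pem already exists from the earlier step.

```shell
# Stand-in bundle content for illustration only
printf 'DEMO-BUNDLE\n' > custom-ca-certs.pem

# Generate the Secret manifest, filling in the base64 encoded bundle
cat > custom-ca-certs.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: custom-ca-certs
  namespace: openstack-two
type: Opaque
data:
  custom-ca-certs.pem: $(base64 -w0 < custom-ca-certs.pem)
EOF
```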
  7. Create the secret in this workload region from the manifest file. In this example, this manifest file is called custom-ca-certs.yaml:

    $ oc apply -f custom-ca-certs.yaml

Repeat steps 3 to 7 for every deployed workload region.

10.6.3. Modify the control plane CR of each workload region

You must modify the OpenStackControlPlane custom resource (CR) for each workload region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment because the Dashboard (horizon) and Identity (keystone) services of the central region are shared with all the workload regions.

Note

An important part of modifying the OpenStackControlPlane CR for each workload region involves creating a unique region name and unique Identity service users for each workload region.

Procedure

  1. Create a file named openstack_control_plane.yaml on your workstation to define the OpenStackControlPlane CR for this workload region:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: <workload-region-namespace>
    spec:
      secret: <workload-region-secret>
    • Replace <workload-region-namespace> with the name of the unique namespace for this workload region, in this example openstack-two.
    • Replace <workload-region-secret> with the name of the Secret CR for this workload region, in this example osp-secret.
  2. Perform steps 3 to 6 of Creating the control plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
  3. Edit the OpenStackControlPlane CR of this workload region and specify the name of the secret containing all the CA certificates for this workload region including the CA certificate from the central region, in this example custom-ca-certs:

    Note

    If this section does not exist, you must add it.

    ...
      spec:
        ...
        tls:
          ...
          caBundleSecretName: custom-ca-certs
  4. Edit the OpenStackControlPlane CR of this workload region and disable the Dashboard (horizon) service:

    ...
      spec:
        ...
        horizon:
          ...
          enabled: false
  5. Edit the OpenStackControlPlane CR of this workload region and configure the Identity (keystone) service:

    Note

    You might need to remove default service configuration, such as metadata settings, or settings that configure this Identity service as a load balancer.

    ...
      spec:
        ...
        keystone:
          ...
          template:
            ...
            externalKeystoneAPI: true
            adminProject: <central-region-admin-project>
            adminUser: <workload-region-admin-name>
            region: <workload-region-name>
            override:
              ...
              service:
                ...
                internal:
                  endpointURL: <central-region-public-keystone-url>
                ...
                public:
                  endpointURL: <central-region-public-keystone-url>
    • Replace <central-region-admin-project> with the name of the admin project in the central region, which is admin by default.
    • Replace <workload-region-admin-name> with the name of the default administrator user of this workload region, for example admin-two.
    • Replace <workload-region-name> with the name of this workload region, for example regionTwo.
    • Replace <central-region-public-keystone-url> with the public Identity service endpoint of the central region.
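Putting the placeholders together, the Identity service configuration for the regionTwo example might look like the following. The endpoint URL shown here is a hypothetical value; use the <central-region-public-keystone-url> that you copied from the central region:

```yaml
keystone:
  template:
    externalKeystoneAPI: true
    adminProject: admin
    adminUser: admin-two
    region: regionTwo
    override:
      service:
        internal:
          endpointURL: https://keystone-public-openstack.apps.central.example.com
        public:
          endpointURL: https://keystone-public-openstack.apps.central.example.com
```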
  6. Edit the OpenStackControlPlane CR of this workload region to specify unique Identity service user names for all the OpenStack services that communicate with the Identity service.

    Note

    If you do not specify unique Identity service user names, then changing the password for a service user disrupts this service in all the workload regions that use the same user, unless you schedule a maintenance window for all of these regions first.

    The following OpenStack services commonly communicate with the Identity service and require unique Identity service user names:

    ...
      spec:
      ...
      barbican:
        ...
        template:
          ...
          serviceUser: <workload-region-barbican-serviceUser>
      ...
      cinder:
        ...
        template:
          ...
          serviceUser: <workload-region-cinder-serviceUser>
      ...
      glance:
        ...
        template:
          ...
          serviceUser: <workload-region-glance-serviceUser>
      ...
      neutron:
        ...
        template:
          ...
          serviceUser: <workload-region-neutron-serviceUser>
      ...
      nova:
        ...
        template:
          ...
          serviceUser: <workload-region-nova-serviceUser>
      ...
      placement:
        ...
        template:
          ...
          serviceUser: <workload-region-placement-serviceUser>
      ...
      swift:
        ...
        template:
          ...
          swiftProxy:
            ...
            serviceUser: <workload-region-swift-serviceUser>
    • Replace <workload-region-barbican-serviceUser> with the unique Identity service user name for the Key Manager service (barbican) of this workload region, for example barbican-two.
    • Replace <workload-region-cinder-serviceUser> with the unique Identity service user name for the Block Storage service (cinder) of this workload region, for example cinder-two.
    • Replace <workload-region-glance-serviceUser> with the unique Identity service user name for the Image service (glance) of this workload region, for example glance-two.
    • Replace <workload-region-neutron-serviceUser> with the unique Identity service user name for the Networking service (neutron) of this workload region, for example neutron-two.
    • Replace <workload-region-nova-serviceUser> with the unique Identity service user name for the Compute service (nova) of this workload region, for example nova-two.
    • Replace <workload-region-placement-serviceUser> with the unique Identity service user name for the Placement service (placement) of this workload region, for example placement-two.
    • Replace <workload-region-swift-serviceUser> with the unique Identity service user name for the Object Storage service (swift) of this workload region, for example swift-two.
  7. If you use a back end that communicates with the Identity service (keystone), you must specify the unique name of this workload region when you configure this back end.

    Note

    Back ends that do not communicate with the Identity service, such as Red Hat Ceph Storage, do not require any additional configuration.

    For example, the Object Storage service (swift) back end for the Image service (glance) communicates with the Identity service. Therefore, when you configure this back end, you must specify the name of this workload region:

    ...
      spec:
      ...
      glance:
        ...
        template:
          ...
          customServiceConfig: |
              [DEFAULT]
              enabled_backends = default_backend:swift
    
              [glance_store]
              default_backend = default_backend
    
              [default_backend]
              swift_store_create_container_on_put = True
              swift_store_auth_version = 3
              swift_store_auth_address = {{ .KeystoneInternalURL }}
              swift_store_endpoint_type = internalURL
              swift_store_user = service:glance
              swift_store_key = {{ .ServicePassword }}
              swift_store_region = <workload-region-name>
    • Replace <workload-region-name> with the name of this workload region, for example regionTwo.
  8. Perform all the remaining steps, starting from step 7, of Creating the control plane in the Deploying Red Hat OpenStack Services on OpenShift guide. Replace the openstack namespace in these commands, specified as -n openstack, with the name of the unique namespace that you created for the workload region, in this example -n openstack-two.

10.7. Configure the central region to trust a deployed workload region

After you deploy a workload region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must configure the deployed central region to trust this workload region.

Prerequisites

  • You are logged on to a workstation that has access to your Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
  • You have the oc command line tool installed on your workstation.

Procedure

  1. In the deployed workload region, set the default RHOSO namespace:

    $ oc project <workload-region-namespace>
    • Replace <workload-region-namespace> with the name of the unique namespace for this workload region, for example openstack-two.
  2. Obtain the CA certificate of this workload region and extract it into a file, for example regionTwo-ca.crt:

    Note

    To decode the certificate before creating the output .crt file, add | base64 -d to this command.

    $ oc get secret rootca-public -o yaml | yq '.data."ca.crt"' > regionTwo-ca.crt
  3. Copy the regionTwo-ca.crt file to the deployed central region.
  4. In the central region, set the default RHOSO namespace:

    $ oc project <central-region-namespace>
    • Replace <central-region-namespace> with the name of the unique namespace for the central region, for example openstack.
  5. Edit your OpenStackControlPlane CR of the central region:

    $ oc edit openstackcontrolplane <name>
    • Replace <name> with the name of your YAML OpenStackControlPlane CR. You can use the following command to retrieve this name: oc get openstackcontrolplane.
  6. If your OpenStackControlPlane CR in the central region contains the spec.tls.caBundleSecretName parameter:

    1. Obtain the name of the secret that contains the PEM-formatted bundle of all the other custom CA certificates, including any applicable chains of trust, that the central region trusts, for example custom-ca-certs:

      ...
        spec:
          ...
          tls:
            ...
            caBundleSecretName: custom-ca-certs
    2. Edit the specified secret, in this example custom-ca-certs:

      $ oc edit secret custom-ca-certs
    3. Append the contents of the CA certificate of the deployed workload region, for example regionTwo-ca.crt, to the PEM-formatted bundle that contains all the other custom CA certificates that the central region trusts.
    4. Save and exit the editor to automatically apply the changes to the secret, in this example custom-ca-certs.
  7. If your OpenStackControlPlane CR in the central region does not contain the spec.tls.caBundleSecretName parameter:

    1. Create a PEM-formatted bundle, for example custom-ca-certs.pem, that includes the contents of this regionTwo-ca.crt file.
    2. Create a manifest file for the secret in the central region that specifies the contents of the custom-ca-certs.pem bundle created in the previous step. In this example the manifest file is called custom-ca-certs.yaml and the secret is called custom-ca-certs:

      apiVersion: v1
      data:
        custom-ca-certs.pem: <contents-of-PEM-bundle>
      kind: Secret
      metadata:
        annotations:
        name: custom-ca-certs
        namespace: <namespace>
      type: Opaque
      • Replace <namespace> with the namespace of the central region, in this example openstack.
      • Replace <contents-of-PEM-bundle> with the base64 encoded string of the contents of the PEM-formatted bundle you created called custom-ca-certs.pem that includes the CA certificate from regionTwo. You can get this base64 encoded string by using the following command: cat custom-ca-certs.pem | base64 -w0.
    3. Create the secret in the central region from the manifest file. In this example the manifest file is called custom-ca-certs.yaml:

      $ oc apply -f custom-ca-certs.yaml
    4. Edit your OpenStackControlPlane CR of the central region:

      $ oc edit openstackcontrolplane <name>
      • Replace <name> with the name of your YAML OpenStackControlPlane CR. You can use the following command to retrieve this name: oc get openstackcontrolplane.
    5. Add the secret that you have created, in this example custom-ca-certs:

      ...
        spec:
          ...
          tls:
            ...
            caBundleSecretName: custom-ca-certs
    6. Save and close the editor to automatically apply this change.
    7. Wait for the deployment of the control plane of the central region to reach the Ready status:

      $ oc wait openstackcontrolplane <name> --for=condition=Ready --timeout=600s

Next steps

  • Extract the catalog entries for this workload region and make sure that the Dashboard (horizon) service in the central region can resolve the DNS names in the service catalog of this workload region and access them.

10.8. SKMO Dashboard region configuration

When you deploy a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must not configure the Dashboard (horizon) service of the central region to let users select which region to log in to.

In a standard multi-region OpenStack deployment, each region is an isolated OpenStack deployment with its own Dashboard (horizon) and Identity (keystone) services. For this reason, you must configure the Dashboard (horizon) service to log in to the Identity (keystone) service of each region. This configuration creates a dropdown list on the Login page for users to select the required isolated region, or more specifically, the Identity (keystone) service of this region.

In the SKMO deployment, the central region provides a centralized Identity (keystone) service that is used for logging in to the entire multi-region deployment. Therefore, do not configure the dropdown list of regions on the Login page of the Dashboard: no matter which workload region a user selects, the central region is always used, which confuses your users.

Note

The SKMO Dashboard automatically provides the Managing regions dropdown list in the UI to allow users to select the required workload region.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.