Configuring security services
Configuring the security features for Red Hat OpenStack Services on OpenShift
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback. Tell us how we can improve the documentation.
To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.
Procedure
- Log in to the Red Hat Atlassian Jira.
- Click the following link to open a Create Issue page: Create issue.
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Click Create.
- Review the details of the bug you created.
Chapter 1. Scheduling fernet key rotation
For security purposes, the fernet keys in your Red Hat OpenStack Services on OpenShift (RHOSO) environment are automatically rotated. To meet the unique security requirements of your environment, you can modify the frequency with which fernet key rotations occur as well as the number of old decryption keys kept after each rotation.
1.1. Updating fernet key rotation frequency
In Red Hat OpenStack Services on OpenShift (RHOSO), you can update the frequency with which the Identity service (keystone) rotates its fernet keys.
Procedure
Edit the `OpenStackControlPlane` custom resource (CR):
$ oc edit openstackcontrolplane openstack-control-plane
Under the `properties` field of the Identity service (keystone) configuration, add the following:

```yaml
fernetMaxActiveKeys:
  default: <active_keys>
  description: FernetMaxActiveKeys - Maximum number of fernet token keys after rotation
  type: int
fernetRotationDays:
  default: <days>
```

- Replace `<active_keys>` with the number of keys to keep active. The default is `5`.
- Replace `<days>` with the number of days between fernet key rotations.
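For example, a completed configuration might look like the following sketch. The placement of the fields under `spec.keystone.template`, and the values shown, are illustrative assumptions based on the operator layout used elsewhere in this guide; adjust them to your environment:

```yaml
spec:
  keystone:
    template:
      fernetMaxActiveKeys: 7  # example: keep seven keys active
      fernetRotationDays: 2   # example: rotate every two days
```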
Chapter 2. Adding custom TLS certificates for Red Hat OpenStack Services on OpenShift
When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), TLS-e (TLS everywhere) is enabled by default. TLS is handled by cert-manager, which applies both ingress (public) encryption and pod-level reencryption. Currently, disabling TLS on RHOSO is not supported.
2.1. TLS in Red Hat OpenStack Services on OpenShift
When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), most API connections are protected by TLS.
TLS is not currently available for the internal Alert Manager Web UI service endpoint.
You might be required to protect public APIs by using your own internal certificate authority (CA). To replace the automatically generated certificates, you must create a secret that contains your additional CA certificates, including all certificates in the required chains of trust.
You can apply trusted certificates from your own internal certificate authority (CA) to public interfaces on RHOSO. The public interface is where ingress traffic meets the service’s route. Do not attempt to manage encryption on internal (pod level) interfaces.
If you decide to apply trusted certificates from your own internal certificate authority (CA), you will need the following information.
- DNS names
For each service you apply your own custom certificate to, you will need its DNS hostname for the process of generating the certificate. You can get a list of public hostnames using the following command:
$ oc get -n openstack routes

Note: To use a single certificate for two or more services, use a wildcard in the DNS name field, or list multiple DNS names in the subject alternative names field. If you do not use a wildcard, you must update the certificate if a route hostname changes.
- Duration
- To update a service’s certificate in OpenShift, the service must be restarted. Therefore, set the certificate duration to the longest amount of time a service can stay live without being restarted, subject to your internal security policies.
- Usages
- You must include `key encipherment`, `digital signature`, and `server auth` within the list of usages in your certificate.
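If cert-manager issues your certificate, you can declare these usages directly on a Certificate resource. The following is a minimal sketch; the resource name, secret name, DNS name, and issuer reference are placeholder assumptions:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: keystone-public-cert        # hypothetical name
  namespace: openstack
spec:
  secretName: keystone-public-tls   # hypothetical secret name
  dnsNames:
    - keystone-public-openstack.apps.example.com  # replace with your route hostname
  usages:
    - key encipherment
    - digital signature
    - server auth
  issuerRef:
    name: my-internal-ca-issuer     # hypothetical issuer
    kind: Issuer
```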
Updating TLS to use custom certificates requires edits to both the control plane and the data plane.
The following are the default TLS settings, which are used if you do not change them:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
spec:
  tls:
    ingress:
      ca:
        duration: 87600h
      cert:
        duration: 43800h
      enabled: true
    podLevel:
      enabled: true
      internal:
        ca:
          duration: 87600h
        cert:
          duration: 43800h
      libvirt:
        ca:
          duration: 87600h
        cert:
          duration: 43800h
      ovn:
        ca:
          duration: 87600h
        cert:
          duration: 43800h
```

- To create a custom TLS certificate for each public service, see Updating the control plane with custom certificates for public services.
- To create a single custom TLS certificate to apply to the public services, see Updating the control plane with a single custom certificate for public services.
2.2. Adding custom CA certificates to the control plane
When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), default CA certificates are also deployed on the control plane. When you add a custom CA certificate from a Red Hat Satellite Server (RHSS) or another third-party certificate authority to your RHOSO control plane, RHOSO services can validate certificates issued by that third-party certificate authority.
To accomplish this, you must add your custom CA certificate into a bundle that includes all certificates that OpenStack services can verify against.
If TLS is not enabled on each node set, then it must be enabled, which requires deploying the data plane.
Procedure
Create a PEM-formatted bundle, for example, `mybundle.pem`. Include all the CA certificates that you want OpenStack to trust.

Create a manifest file called `cacerts.yaml` that includes the `mybundle.pem` file created in the previous step. Include all the certificates in chains of trust if applicable:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cacerts
  namespace: openstack
type: Opaque
data:
  myBundleExample: <cat mybundle.pem | base64 -w0>
  CACertExample: <cat cacert.pem | base64 -w0>
```

- Replace `mybundle.pem` with the name of your certificate or certificate bundle. Paste the output as the value of the `myBundleExample` field.
- Replace `cacert.pem` with the name of your CA certificate.
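The values in the `data` section are the base64-encoded contents of the files. A minimal sketch of producing such a value, assuming a bundle file named `mybundle.pem` exists:

```shell
# Encode the PEM bundle as a single line of base64, as required for the
# data field of a Secret. mybundle.pem is a placeholder file name.
base64 -w0 mybundle.pem > mybundle.b64
cat mybundle.b64
```

Alternatively, `oc create secret generic` with `--from-file` performs the base64 encoding for you.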
Create the secret from the manifest file:
$ oc apply -f cacerts.yaml
Edit the `openstack_control_plane.yaml` custom resource (CR) file and add your bundle as the value of the `caBundleSecretName` parameter:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
spec:
  tls:
    podLevel:
      enabled: true
    caBundleSecretName: cacerts
```

Apply the control plane changes:
$ oc apply -f openstack_control_plane.yaml
Determine whether TLS is enabled on each node set by running the following command, which returns `true` if TLS is enabled on the specified node set:
$ oc get openstackdataplanenodeset <node_set_name> -n <namespace> -o json | jq .spec.tlsEnabled

- Replace `<node_set_name>` with the name of the `OpenStackDataPlaneNodeSet` CR that the node belongs to.
- Replace `<namespace>` with the namespace of the required Red Hat OpenStack Services on OpenShift (RHOSO) environment, for example, `openstack`.
If TLS is not enabled, you must enable it:
Open the OpenStackDataPlaneNodeSet CR file for each node on the data plane, and enable TLS in each:
```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: <node_set_name>
  namespace: openstack
spec:
  tlsEnabled: true
```
Save the updated OpenStackDataPlaneNodeSet CR files and apply the updates:
$ oc apply -f openstack_data_plane.yaml -n <namespace>
Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:
```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: <node_set_deployment_name>
```

- Replace `<node_set_deployment_name>` with the name of the `OpenStackDataPlaneDeployment` CR. This name must be unique, must consist of lowercase alphanumeric characters, `-` (hyphen), or `.` (period), and must start and end with an alphanumeric character.
Add the OpenStackDataPlaneNodeSet CRs to the OpenStackDataPlaneDeployment CR file:
```yaml
spec:
  ...
  nodeSets:
    - <node_set_name>
```

- Save the OpenStackDataPlaneDeployment CR deployment file.
Deploy the modified `OpenStackDataPlaneNodeSet` CRs:
$ oc create -f openstack_data_plane_deploy.yaml -n <namespace>
You can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10
If the `oc logs` command returns an error similar to the following, increase the `--max-log-requests` value:
error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
Verify that the modified OpenStackDataPlaneNodeSet CRs are deployed:
```
$ oc get openstackdataplanedeployment -n <namespace>
NAME                   STATUS   MESSAGE
openstack-data-plane   True     Setup Complete

$ oc get openstackdataplanenodeset -n <namespace>
NAME                   STATUS   MESSAGE
openstack-data-plane   True     NodeSet Ready
```
For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
2.3. Updating the control plane with custom certificates for public services
You might be required to protect public APIs by using your own internal certificate authority (CA). To replace the automatically generated route certificates with common signed certificates from your CA, you must create a secret that contains your additional CA certificate, and all certificates in the chain of trust.
Prerequisites
- You have a list of each of the public services to which you apply your custom service certificates. You can get this list by using the `oc get routes -n openstack` command. Use this information to determine the number of certificates you must create and the DNS names for those certificates, and to find the relevant services to edit in the `openstack_control_plane.yaml` custom resource (CR).
- You have a service certificate for each of the public services.
Procedure
Create a manifest file called `cacerts.yaml` that includes all CA certificates. Include all certificates in chains of trust if applicable:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cacerts
  namespace: openstack
type: Opaque
data:
  myBundleExample: <cat mybundle.pem | base64 -w0>
  CACertExample: <cat cacert.pem | base64 -w0>
```

- Replace `mybundle.pem` with the name of your certificate or certificate bundle. Paste the output as the value of the `myBundleExample` field.
- Replace `cacert.pem` with the name of your CA certificate.
Create the secret from the manifest file:
$ oc apply -f cacerts.yaml
Create a manifest file for each secret, named `api-certificate-<service>-secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-certificate-<service>-secret
  namespace: openstack
type: kubernetes.io/tls
data:
  tls.crt: <cat tlscrt.pem | base64 -w0>
  tls.key: <cat tlskey.pem | base64 -w0>
  ca.crt: <cat cacrt.pem | base64 -w0>
```

- Replace `<service>` with the name of the service that this secret is for. Secret names must consist of lowercase alphanumeric characters, `-` (hyphen), or `.` (period).
- Replace `tlscrt.pem` with the name of your signed certificate.
- Replace `tlskey.pem` with the name of your private key.
- Replace `cacrt.pem` with the name of your CA certificate.

Create the secret:
$ oc apply -f api-certificate-<service>-secret.yaml

Edit the `openstack_control_plane.yaml` custom resource and add your bundle as the value of the `caBundleSecretName` parameter:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
spec:
  tls:
    podLevel:
      enabled: true
    caBundleSecretName: cacerts
```

Apply the secret service certificates to each of the public services under the `apiOverride` field. For example, enter the following for the Identity service (keystone):

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
  namespace: openstack
spec:
  ...
  keystone:
    apiOverride:
      tls:
        secretName: api-certificate-keystone-secret
```

The edits for the Compute service (nova) and noVNCProxy appear as follows:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
  namespace: openstack
spec:
  ...
  nova:
    apiOverride:
      tls:
        secretName: api-certificate-nova-secret
      route: {}
    cellOverride:
      cell1:
        noVNCProxy:
          tls:
            secretName: api-certificate-novavncproxy-secret
```

Apply the control plane changes:
$ oc apply -f openstack_control_plane.yaml
2.4. Updating the control plane with a single custom certificate for public services
You might be required to protect public APIs by using your own internal certificate authority (CA). To replace the automatically generated route certificates with a common signed certificate from your CA, you must create a secret that contains your CA certificate, and all certificates in the chain of trust.
Prerequisites
- You have a list of each of the public services to which you apply your custom service certificate. You can get this list by using the `oc get routes -n openstack` command. Use this information to determine the DNS names for the certificate, and to find the relevant services to edit in the `openstack_control_plane.yaml` custom resource (CR).
Procedure
Create a signed certificate that includes the hostname for every service in the `alt_names` section:

```
[alt_names]
DNS.1 = barbican-public-openstack.apps.ocp.openstack.lab
DNS.2 = cinder-public-openstack.apps.ocp.openstack.lab
DNS.3 = glance-default-public-openstack.apps.ocp.openstack.lab
DNS.4 = horizon-openstack.apps.ocp.openstack.lab
DNS.5 = keystone-public-openstack.apps.ocp.openstack.lab
DNS.6 = manila-public-openstack.apps.ocp.openstack.lab
DNS.7 = neutron-public-openstack.apps.ocp.openstack.lab
DNS.8 = nova-novncproxy-cell1-public-openstack.apps.ocp.openstack.lab
DNS.9 = nova-public-openstack.apps.ocp.openstack.lab
DNS.10 = placement-public-openstack.apps.ocp.openstack.lab
```
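For testing, you can generate such a multi-SAN certificate yourself with openssl. This is a sketch only: the hostnames, file names, and validity period are placeholder assumptions, and in production you would instead have the certificate signed by your internal CA:

```shell
# Sketch: self-signed certificate carrying several SANs plus the usages
# required by RHOSO (key encipherment, digital signature, server auth).
# All names and the 365-day validity are examples only.
openssl req -x509 -newkey rsa:3072 -nodes \
  -keyout tlskey.pem -out tlscrt.pem -days 365 \
  -subj "/CN=keystone-public-openstack.apps.ocp.openstack.lab" \
  -addext "subjectAltName=DNS:keystone-public-openstack.apps.ocp.openstack.lab,DNS:nova-public-openstack.apps.ocp.openstack.lab" \
  -addext "keyUsage=digitalSignature,keyEncipherment" \
  -addext "extendedKeyUsage=serverAuth"

# Confirm the SANs that ended up in the certificate:
openssl x509 -in tlscrt.pem -noout -ext subjectAltName
```

The `-addext` option requires OpenSSL 1.1.1 or later.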
Create a manifest file called `cacerts.yaml` that includes all CA certificates. Include all certificates in chains of trust if applicable:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cacerts
  namespace: openstack
type: Opaque
data:
  myBundleExample: <cat mybundle.pem | base64 -w0>
  CACertExample: <cat cacert.pem | base64 -w0>
```

- Replace `mybundle.pem` with the name of your certificate or certificate bundle. Paste the output as the value of the `myBundleExample` field.
- Replace `cacert.pem` with the name of your CA certificate.
Create the secret from the manifest file:
$ oc apply -f cacerts.yaml
Create a manifest file for a secret named `certificate-secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: certificate-secret
  namespace: openstack
type: kubernetes.io/tls
data:
  tls.crt: <cat tlscrt.pem | base64 -w0>
  tls.key: <cat tlskey.pem | base64 -w0>
  ca.crt: <cat cacrt.pem | base64 -w0>
```

- Replace `tlscrt.pem` with the name of your signed certificate.
- Replace `tlskey.pem` with the name of your private key.
- Replace `cacrt.pem` with the name of your CA certificate.
Create the secret:
$ oc apply -f certificate-secret.yaml
Edit the `openstack_control_plane.yaml` custom resource and add your bundle as the value of the `caBundleSecretName` parameter:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
spec:
  tls:
    podLevel:
      enabled: true
    caBundleSecretName: cacerts
```

Apply the secret service certificate to each of the public services under the `apiOverride` field. For example, enter the following for the Identity service (keystone):

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
  namespace: openstack
spec:
  ...
  keystone:
    apiOverride:
      tls:
        secretName: certificate-secret
```

The edits for the Compute service (nova) and noVNCProxy appear as follows:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
  namespace: openstack
spec:
  ...
  nova:
    apiOverride:
      tls:
        secretName: certificate-secret
      route: {}
    cellOverride:
      cell1:
        noVNCProxy:
          tls:
            secretName: certificate-secret
```

Apply the control plane changes:
$ oc apply -f openstack_control_plane.yaml
2.5. Using your CA certs on remote clients
If you do not use a trusted CA from a public entity, openstack client commands fail with an SSL verification error unless you add the --insecure option. You can communicate securely with the OpenStack API by using a private certificate authority (CA) with the following steps.
Prerequisites
- You have deployed RHOSO with default certificates, or have used custom certificates that are not signed by a public certificate authority.
Procedure
- Log in to OpenShift with global administrative permissions.
Extract the CA certificate for the public endpoints from the `rootca-public` secret:
$ oc get secret rootca-public -o json | jq -r '.data."ca.crt"' | base64 -d > ca.crt
Transfer the `ca.crt` file to the client that accesses the OpenStack API.

Update your authentication file with the path to `ca.crt`. If you use a `clouds.yaml` authentication file, add the `cacert` parameter:

```yaml
clouds:
  secure:
    cacert: </path/to/ca.crt>
```

- Replace `</path/to/ca.crt>` with the absolute path and name of the CA certificate on your system.
If you use a resource credentials (RC) file, update the file with the exported `OS_CACERT` variable:
$ export OS_CACERT=</path/to/ca.crt>

- Replace `</path/to/ca.crt>` with the absolute path and name of the CA certificate on your system.
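Before you distribute `ca.crt` to clients, you can confirm locally that a service certificate chains to it. The following is a sketch; `server.crt` is a placeholder for a certificate retrieved from one of your public endpoints:

```shell
# Check that server.crt was issued by the CA in ca.crt.
# Prints "server.crt: OK" when the chain verifies.
openssl verify -CAfile ca.crt server.crt
```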
Chapter 3. Custom issuers for cert-manager
An issuer is a resource that acts as a certificate authority for a specific namespace, and is managed by the cert-manager Operator. TLS-e (TLS everywhere) is enabled in Red Hat OpenStack Services on OpenShift (RHOSO) environments, and it uses the following issuers by default:
- rootca-internal
- rootca-libvirt
- rootca-ovn
- rootca-public
3.1. Creating a custom issuer
You can create custom ingress as well as custom internal issuers. To create and manage your own certificates for internal endpoints, you must create a custom internal issuer.
Procedure
Create a custom issuer in a file named `rootca-custom.yaml`:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: <issuer_name>
spec:
  ca:
    secretName: <secret_name>
```

- Replace `<issuer_name>` with the name of your custom issuer, for example, `rootca-ingress-custom`.
- Replace `<secret_name>` with the name of the Secret CR used by the certificate for your custom issuer. If you do not include a secret, one is created automatically.
Create a certificate in a file named `ca-issuer-certificate.yaml`:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: <issuer_name>
spec:
  commonName: <issuer_name>
  isCA: true
  duration: <hours>
  privateKey:
    algorithm: RSA
    size: 3072
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
  secretName: <secret_name>
```

- Replace `<issuer_name>` with the name of your custom issuer. This matches the issuer created in the first step.
- Replace `<hours>` with the duration in hours. For example, a value of `87600h` is equivalent to 3650 days, or about 10 years.
- Replace `<secret_name>` with the name of the Secret CR used by the certificate for your custom issuer. If you do not include a secret, one is created automatically.
Create the issuer and certificate:
$ oc create -f rootca-custom.yaml
$ oc create -f ca-issuer-certificate.yaml
Add the custom issuer to the TLS service definition in the control plane CR file.
If your custom issuer is an ingress issuer, the custom issuer is defined under the `ingress` attribute:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  tls:
    ingress:
      enabled: true
      ca:
        customIssuer: <issuer_name>
  ...
```

- Replace `<issuer_name>` with the name of your custom issuer. This matches the issuer created in the first step.
If your custom issuer is an internal issuer, the custom issuer is defined at the pod level under the `internal` attribute:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: myctlplane
spec:
  tls:
    ingress:
      enabled: true
    podLevel:
      enabled: true
      internal:
        ca:
          customIssuer: <issuer_name>
```

- Replace `<issuer_name>` with the name of your custom issuer. This matches the issuer created in the first step.
Chapter 4. Enabling TLS on a deployed RHOSO environment
TLS is enabled by default in Red Hat OpenStack Services on OpenShift (RHOSO) environments. If you disabled TLS when you deployed your RHOSO environment, or if you adopted your Red Hat OpenStack Platform 17.1 deployment to a RHOSO environment, then you can reenable TLS after deployment.
Enabling TLS on a deployed RHOSO environment involves some data plane downtime, because connectivity to RabbitMQ and OVS from the control plane is lost during the redeployment.
- If your deployment uses the default configuration where no floating IP connectivity is directed through the control plane, then this downtime does not affect the workload hosted on the RHOSO environment.
- If your deployment routes traffic through the control plane, then the downtime will impact the workload hosted on the RHOSO environment.
- New workloads cannot be created and existing workloads cannot be managed with the OpenStack API while the control plane and data plane are being updated.
4.1. Enabling TLS on a deployed RHOSO environment error messages
The following error messages are logged when connectivity to RabbitMQ and OVS is lost from the control plane during the redeployment to enable TLS:
Extract from the `nova-compute` log:

```
Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.037 2 ERROR oslo.messaging._drivers.impl_rabbit [-] [98752a36-cf06-4d26-aee8-f5b21bf55aef] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: <RecoverableConnectionError: unknown error>. Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: <RecoverableConnectionError: unknown error>
Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.566 2 ERROR oslo.messaging._drivers.impl_rabbit [-] [8c795961-cb17-4a6d-82ee-25c862316b40] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: timed out. Trying again in 32 seconds.: socket.timeout: timed out
```
Extract from the OVN controller log:
```
Aug 09 11:35:47 edpm-compute-0 ovn_controller[55433]: 2024-08-09T11:35:47Z|00452|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Aug 09 11:35:47 edpm-compute-0 ovn_controller[55433]: 2024-08-09T11:35:47Z|00453|jsonrpc|WARN|tcp:ovsdbserver-sb.openstack.svc:6642: error parsing stream: line 0, column 0, byte 0: invalid character U+0015
Aug 09 11:35:47 edpm-compute-0 ovn_controller[55433]: 2024-08-09T11:35:47Z|00454|reconnect|WARN|tcp:ovsdbserver-sb.openstack.svc:6642: connection dropped (Protocol error)
```
4.2. Enabling TLS on a RHOSO environment after deployment
If TLS is disabled in your deployed Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can reenable it on an operational RHOSO environment with minimal disruption.
Prerequisites
- The RHOSO environment is deployed on a Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
-
You are logged on to a workstation that has access to the RHOCP cluster as a user with
cluster-adminprivileges.
Procedure
Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.

Add the following `spec.tls` configuration, if it is not already present:

```yaml
spec:
  tls:
    ingress:
      ca:
        duration: 87600h0m0s
      cert:
        duration: 43800h0m0s
      enabled: true
    podLevel:
      enabled: true
      internal:
        ca:
          duration: 87600h0m0s
        cert:
          duration: 43800h0m0s
      libvirt:
        ca:
          duration: 87600h0m0s
        cert:
          duration: 43800h0m0s
      ovn:
        ca:
          duration: 87600h0m0s
        cert:
          duration: 43800h0m0s
```

- If the `tls` configuration is already present in the CR file, ensure that `podLevel` is enabled.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
The RabbitMQ pods cannot change their TLS configuration while they are running. Therefore, you must delete the existing RabbitMQ pods so that the control plane re-creates them with TLS enabled:
$ oc delete pod -n openstack -l app.kubernetes.io/component=rabbitmq
Wait for the control plane to be ready:
$ oc wait openstackcontrolplane -n openstack --for=condition=Ready --timeout=400s -l core.openstack.org/openstackcontrolplane
While waiting for the control plane to be ready, new workloads cannot be created and existing workloads cannot be managed with the OpenStack API. The `nova-compute` service on the data plane nodes cannot connect to the cell1 RabbitMQ instance and reports as down:

```
$ oc rsh openstackclient
$ openstack compute service list -c Binary -c Host -c Status -c State
+----------------+-------------------------------------+---------+-------+
| Binary         | Host                                | Status  | State |
+----------------+-------------------------------------+---------+-------+
| nova-conductor | nova-cell0-conductor-0              | enabled | up    |
| nova-scheduler | nova-scheduler-0                    | enabled | up    |
| nova-conductor | nova-cell1-conductor-0              | enabled | up    |
| nova-compute   | edpm-compute-0.ctlplane.example.com | enabled | down  |
| nova-compute   | edpm-compute-1.ctlplane.example.com | enabled | down  |
+----------------+-------------------------------------+---------+-------+
```
The OVN controller and the OVN metadata agent cannot connect to the southbound database:
```
$ openstack network agent list -c 'Agent Type' -c Host -c Alive -c State
+------------------------------+-------------------------------------+-------+-------+
| Agent Type                   | Host                                | Alive | State |
+------------------------------+-------------------------------------+-------+-------+
| OVN Controller Gateway agent | crc                                 | :-)   | UP    |
| OVN Controller agent         | edpm-compute-1.ctlplane.example.com | XXX   | UP    |
| OVN Metadata agent           | edpm-compute-1.ctlplane.example.com | XXX   | UP    |
| OVN Controller agent         | edpm-compute-0.ctlplane.example.com | XXX   | UP    |
| OVN Metadata agent           | edpm-compute-0.ctlplane.example.com | XXX   | UP    |
+------------------------------+-------------------------------------+-------+-------+
```
Note: The existing workload is not impacted if workload traffic is not routed through the control plane.
Open the `OpenStackDataPlaneNodeSet` CR file for each node on the data plane, and enable TLS in each:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: <node_set_name>
  namespace: openstack
spec:
  tlsEnabled: true
```

- Replace `<node_set_name>` with the name of the `OpenStackDataPlaneNodeSet` CR that the node belongs to.
Save the updated `OpenStackDataPlaneNodeSet` CR files and apply the updates:
$ oc apply -f openstack_data_plane.yaml -n openstack
Check that TLS is enabled on each node set:

```
$ oc get openstackdataplanenodeset <node_set_name> -n openstack -o json | jq .spec.tlsEnabled
true
```
Create a file on your workstation to define the `OpenStackDataPlaneDeployment` CR:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: <node_set_deployment_name>
```

- Replace `<node_set_deployment_name>` with the name of the `OpenStackDataPlaneDeployment` CR. The name must be unique, must consist of lowercase alphanumeric characters, `-` (hyphen), or `.` (period), and must start and end with an alphanumeric character.

Tip: Give the `OpenStackDataPlaneDeployment` CR file a descriptive name that indicates the purpose of the modified node set.
Add the `OpenStackDataPlaneNodeSet` CRs that you modified to enable TLS:

```yaml
spec:
  nodeSets:
    - <node_set_name>
```

- Provide the required `<node_set_name>` for each node set on the data plane.
Save the `OpenStackDataPlaneDeployment` CR deployment file.

Deploy the modified `OpenStackDataPlaneNodeSet` CRs:
$ oc create -f openstack_data_plane_deploy.yaml -n openstack
You can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -n openstack -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10 -n openstack
If the `oc logs` command returns an error similar to the following, increase the `--max-log-requests` value:
error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
Verify that the modified `OpenStackDataPlaneNodeSet` CRs are deployed:

```
$ oc get openstackdataplanedeployment -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   True     Setup Complete

$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   True     NodeSet Ready
```
For information about the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
Verify that the `nova-compute` service is connected again to the TLS-enabled RabbitMQ instance:

```
$ oc rsh openstackclient
$ openstack compute service list -c Binary -c Host -c Status -c State
+----------------+-------------------------------------+---------+-------+
| Binary         | Host                                | Status  | State |
+----------------+-------------------------------------+---------+-------+
| nova-conductor | nova-cell0-conductor-0              | enabled | up    |
| nova-scheduler | nova-scheduler-0                    | enabled | up    |
| nova-conductor | nova-cell1-conductor-0              | enabled | up    |
| nova-compute   | edpm-compute-0.ctlplane.example.com | enabled | up    |
| nova-compute   | edpm-compute-1.ctlplane.example.com | enabled | up    |
+----------------+-------------------------------------+---------+-------+
```
Verify that the OVN agents are running again:
```
$ openstack network agent list -c 'Agent Type' -c Host -c Alive -c State
+------------------------------+-------------------------------------+-------+-------+
| Agent Type                   | Host                                | Alive | State |
+------------------------------+-------------------------------------+-------+-------+
| OVN Controller Gateway agent | crc                                 | :-)   | UP    |
| OVN Controller agent         | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
| OVN Metadata agent           | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
| OVN Controller agent         | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
| OVN Metadata agent           | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
+------------------------------+-------------------------------------+-------+-------+
```
4.3. Deploying RHOSO with TLS disabled
TLS is enabled by default when you deploy Red Hat OpenStack Services on OpenShift (RHOSO), but you can disable it if you need to. You can later re-enable TLS on an operational RHOSO environment with minimal disruption.
Procedure
Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.

Add the following `spec.tls` configuration, if it is not already present:

```yaml
spec:
  tls:
    ingress:
      enabled: false
    podLevel:
      enabled: false
```

Update the control plane:
$ oc apply -f openstack_control_plane.yaml
Open the `OpenStackDataPlaneNodeSet` CR file for each node on the data plane, and disable TLS by setting `spec.tlsEnabled` to `false`:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: <node_set_name>
  namespace: openstack
spec:
  tlsEnabled: false
```

- Replace `<node_set_name>` with the name of the `OpenStackDataPlaneNodeSet` CR that the node belongs to.
Save the updated `OpenStackDataPlaneNodeSet` CR files and apply the updates:
$ oc apply -f openstack_data_plane.yaml
Verify that TLS is disabled on every node set:
$ oc get openstackdataplanenodeset <node_set_name> -n openstack -o json | jq .spec.tlsEnabled
Chapter 5. Configuring LDAP on RHOSO
To connect Red Hat OpenStack Services on OpenShift to LDAP so that your OpenStack users authenticate by using pre-established LDAP identities, do the following:
- Use the OpenStack CLI to create the domain.
- Use RHOSO to create a secret that contains the required configuration.
- Mount the secret to the service by using the `OpenStackControlPlane` custom resource file.
5.1. Configuring LDAP by using Red Hat Identity
Use the OpenStack CLI or the OpenStack Dashboard (horizon) to create OpenStack domains.
Prerequisites
- A pre-established Red Hat Identity server.
Procedure
Create an OpenStack domain:
$ openstack domain create <name>
  - Replace `<name>` with the name of your OpenStack domain.
- Create a `keystone-domains` secret in a file called `keystone-domains.yaml`. This secret is mounted into the `/etc/keystone/domains` configuration directory:

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: keystone-domains
    namespace: openstack
  type: Opaque
  stringData:
    keystone.<domain_name>.conf: |
      [identity]
      driver = ldap

      [ldap]
      url = ldaps://localhost
      user = cn=openstack,ou=Users,dc=director,dc=example,dc=com
      password = RedactedComplexPassword
      suffix = dc=domain,dc=example,dc=com
      user_tree_dn = ou=Users,dc=domain,dc=example,dc=com
      user_objectclass = person
      group_tree_dn = ou=Groups,dc=example,dc=org
      group_objectclass = groupOfNames
      use_tls = True
  ```

- Create the secret:

  ```
  $ oc apply -f keystone-domains.yaml
  ```
Open your
OpenStackCustomResourcecustom resource (CR) file and add the secret by using theextraMountsfield:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: keystone: template: customServiceConfig: | [identity] domain_specific_drivers_enabled = True extraMounts: - name: v1 region: r1 extraVol: - propagation: - Keystone extraVolType: Conf volumes: - name: keystone-domains secret: secretName: keystone-domains mounts: - name: keystone-domains mountPath: "/etc/keystone/domains" readOnly: trueApply the changes to your OpenStack control plane CR:
$ oc apply -f openstack_control_plane.yaml
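For reference, each `keystone.<domain_name>.conf` key in the secret configures one domain, so you can serve several LDAP domains from the same secret. The following fragment is a hypothetical filled-in entry for a domain named `my_domain`; the hostname, bind DN, password, and suffix are placeholder values, and the key name must match the domain that you created with `openstack domain create`:

```yaml
stringData:
  keystone.my_domain.conf: |
    [identity]
    driver = ldap

    [ldap]
    url = ldaps://ldap.example.com
    user = cn=svc-keystone,ou=Users,dc=example,dc=com
    password = ExamplePassword123
    suffix = dc=example,dc=com
    user_tree_dn = ou=Users,dc=example,dc=com
    user_objectclass = person
    use_tls = True
```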
Chapter 6. Configuring a Luna HSM back end to work with the RHOSO Key Manager service
When you install Red Hat OpenStack Services on OpenShift (RHOSO), you have the option of using the Key Manager service with either a default SimpleCrypto back end, or using it with a Luna hardware security module (HSM). Using a hardware security module provides hardened protection for storing keys.
When you use a Luna HSM, the Key Manager service communicates with the Luna HSM by using a PKCS #11 interface to load libraries provided by Thales. To integrate your RHOSO deployment with a Luna HSM, you must complete the following steps:
6.1. Adding the Luna HSM client to the Key Manager service
Build a new image for the Key Manager service that integrates required Thales software. You must repeat this step when you update RHOSO.
Creating an Ansible playbook to build this image simplifies the process of configuring RHOSO for your Luna HSM. The `ansible-role-rhoso-luna-hsm` RPM, which is part of the RHOSO repository, contains the roles that this playbook requires.
The following playbook automates downloading the barbican-api and barbican-worker images from the Red Hat source repository, adding the Luna client software, and storing the resulting image in your destination repository.
The following steps are run from any system from which you can execute Ansible playbooks.
Prerequisites
- The Luna minimal client image for Linux. For information about obtaining this software, contact Thales.
- An available image service, such as an internally available Quay service, or an account with quay.io. For more information, see Deploying the Red Hat Quay Operator on OpenShift Container Platform.
Procedure
- Use DNF to install `ansible-role-rhoso-luna-hsm`:

  ```
  $ sudo dnf -y install ansible-role-rhoso-luna-hsm
  ```
- Place the Luna minimal client image for Linux in a known location. In this procedure, the image is placed in `/opt/luna`. Move a copy of the Luna minimal client for Linux tarball to `/opt/luna`:

  ```
  $ mv <LunaClient-Minimal-10.7.2.x86_64.tar> /opt/luna
  ```
  - Replace `<LunaClient-Minimal-10.7.2.x86_64.tar>` with the name of your Luna minimal client for Linux tarball.
- Create a playbook called `custom-image.yaml` that creates the custom Key Manager image:

  ```yaml
  ---
  - name: Create and upload the custom Key Manager image
    hosts: localhost
    tasks:
      - ansible.builtin.include_role:
          name: rhoso_luna_hsm
          tasks_from: create_image
        vars:
          barbican_src_image_registry: "quay.io:5001"
          barbican_src_image_namespace: "openstack-k8s-operators"
          barbican_src_image_tag: "latest"
          barbican_dest_image_registry: "<my_registry_url>:5001"
          barbican_dest_image_namespace: "openstack-k8s-operators"
          barbican_dest_image_tag: "luna-custom"
          image_registry_verify_tls: "<true|false>"
          luna_minclient_src: "file:///opt/luna/<filename>"
  ```
  - Replace `<my_registry_url>` with the URL for your registry.
  - Replace `<true|false>` with either `true` or `false`, based on the requirements of your image registry.
  - Replace `<filename>` with the name of your source image, for example, `LunaClient-Minimal-10.7.2.x86_64.tar`.
Run the playbook:
$ ansible-playbook custom-image.yaml
6.2. Creating secrets for the Key Manager service
Create secrets for the Key Manager service to enable secure communication with the Luna HSM back end in Red Hat OpenStack Services on OpenShift (RHOSO). These secrets authorize the Key Manager service to authenticate with the hardware and manage encryption keys.
The following steps use the keys, certificates, and configuration for your Luna HSM to create two secrets. One is called `login_secret`, and it contains your HSM partition password. The other is called `luna_data_secret`, and it contains your certificates, keys, and `chrystoki.conf` configuration file. These secrets are required in your Red Hat OpenShift Container Platform environment to enable secure communication between the Key Manager service and your HSM. You create an Ansible playbook that identifies the client certificates to be copied in.
Prerequisites
- The client certificate for your Luna HSM. For more information, see "Comparing NTLS and STC" in the Thales documentation.
- You must disable `ntls ipcheck` on your Luna HSM. For more information, see "ntls ipcheck" in the Thales documentation.
Procedure
- Place the Luna certificate and key into the `/opt/luna` directory tree:

  ```
  $ cp <luna_client_name>.pem /opt/luna
  $ cp <luna_client_name>Key.pem /opt/luna
  ```
  - Replace `<luna_client_name>` with the name of your Luna certificate.
Download the server certificate from the HSM device:
```
$ scp -O <hsm-device.example.com:server.pem> /opt/luna/
```
- Optional: If you have more than one HSM for HA, get the certificate from each HSM and concatenate them into a single file:

  ```
  $ scp -O <hsm-device-01.example.com:server-01.pem> /opt/luna/
  $ scp -O <hsm-device-02.example.com:server-02.pem> /opt/luna/
  $ cat /opt/luna/server-01.pem > /opt/luna/CAFile.pem
  $ cat /opt/luna/server-02.pem >> /opt/luna/CAFile.pem
  ```
- Update your `chrystoki.conf` file to look similar to the following:

  Note: The contents of the LunaClient-Minimal tarball are extracted to the `/usr/local/luna/` directory in the Key Manager container. You must update the paths in your `chrystoki.conf` file to match this example.

  ```
  Chrystoki2 = {
    LibUNIX = /usr/local/luna/libs/64/libCryptoki2.so;
    LibUNIX64 = /usr/local/luna/libs/64/libCryptoki2.so;
  }
  Luna = {
    DefaultTimeOut = 500000;
    PEDTimeout1 = 100000;
    PEDTimeout2 = 200000;
    PEDTimeout3 = 10000;
    KeypairGenTimeOut = 2700000;
    CloningCommandTimeOut = 300000;
    CommandTimeOutPedSet = 720000;
  }
  CardReader = {
    RemoteCommand = 1;
  }
  Misc = {
    PE1746Enabled = 0;
    ToolsDir = ./bin/64;
    PartitionPolicyTemplatePath = ./ppt/partition_policy_templates;
    ProtectedAuthenticationPathFlagStatus = 0;
    MutexFolder = ./lock;
  }
  LunaSA Client = {
    ReceiveTimeout = 20000;
    SSLConfigFile = /usr/local/luna/openssl.cnf;
    ClientPrivKeyFile = /usr/local/luna/<luna_client_name>Key.pem;
    ClientCertFile = /usr/local/luna/<luna_client_name>.pem;
    ServerCAFile = /usr/local/luna/CAFile.pem;
    NetClient = 1;
    TCPKeepAlive = 1;
    ServerName00 = <ip_address>;
    ServerPort00 = 1792;
    ServerHtl00 = 0;
  }
  ```
  - Replace `<luna_client_name>` with the name of your Luna certificate.
  - Replace `<ip_address>` with the IP address of your Luna HSM.
- Optional: If you are configuring HA, you must include additional entries for the IP addresses of each HSM, as well as configurations for the `VirtualToken`, `HASynchronize`, and `HAConfiguration` parameters:

  ```
  ...
    ServerName00 = <ip_address>;
    ServerPort00 = 1792;
    ServerHtl00 = 0;
    ServerName01 = <ip_address>;
    ServerPort01 = 1792;
    ServerHtl01 = 0;
  }
  VirtualToken = {
    VirtualToken00Label = myHAGroup;
    VirtualToken00SN = <virtual_token_sn>;
    VirtualToken00Members = <virtual_token_member>,<virtual_token_member>;
  }
  HASynchronize = {
    myHAGroup = 1;
  }
  HAConfiguration = {
    haLogStatus = enabled;
  }
  ```
  - Replace `<virtual_token_sn>` with the serial number of your first partition prepended with a 1. For example, for partition `545000014`, use a value of `1545000014`.
  - Replace `<virtual_token_member>` with the serial numbers of the partitions from the HSMs that you are using.
- Move the `chrystoki.conf` configuration file to `/opt/luna`:

  ```
  $ mv chrystoki.conf /opt/luna
  ```
- Create an Ansible playbook called `create-luna-secrets.yaml` to create the required secrets:

  ```yaml
  ---
  - name: Create secrets with the HSM certs and hsm-login credentials
    hosts: localhost
    tasks:
      - ansible.builtin.include_role:
          name: rhoso_luna_hsm
          tasks_from: create_secrets
        vars:
          luna_client_name: <luna_client_name>
          chrystoki_conf_src: "/opt/luna/chrystoki.conf"
          luna_server_cert_src: "/opt/luna/<server.pem>"
          luna_client_cert_src: "/opt/luna/"
          luna_partition_password: "<my_partition_password>"
          kubeconfig_path: "<kubeconfig_path>"
          oc_dir: "<path_to_oc>"
          luna_data_secret: "luna_data_secret"
          login_secret: "login_secret"
  ```
  - Replace `<luna_client_name>` with the name of your Luna certificate.
  - Replace `<server.pem>` with the name of your server certificate.
  - Replace `<my_partition_password>` with your HSM partition password.
  - Replace `<kubeconfig_path>` with the path to your kubeconfig file, for example, `$HOME/.kube/config`.
  - Replace `<path_to_oc>` with the output of `which oc`.
Run the Ansible playbook:
$ ansible-playbook create-luna-secrets.yaml
6.3. Modifying the OpenStackVersion CR for the Key Manager custom image
Update the OpenStack version by using the `OpenStackVersion` custom resource (CR). The following procedure shows the CR that defines the custom container images.
Procedure
Create a CR file with the following contents:
```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack-galera-network-isolation
  namespace: openstack
spec:
  customContainerImages:
    barbicanAPIImage: <api_image>
    barbicanWorkerImage: <worker_image>
```
  - Replace `<api_image>` with the registry and path to the custom `barbicanAPIImage`.
  - Replace `<worker_image>` with the registry and path to the custom `barbicanWorkerImage`.
- Apply the `OpenStackVersion` CR:

  ```
  $ oc apply -f <filename>
  ```

  - Replace `<filename>` with the `OpenStackVersion` CR file name.
6.4. Configuring the Key Manager service for the Luna HSM
You must modify the Key Manager (barbican) service section of the `OpenStackControlPlane` custom resource (CR) to fully integrate your Luna HSM with Red Hat OpenStack Services on OpenShift (RHOSO).
Procedure
- Configure the `OpenStackControlPlane` CR.

  Optional: If you have saved secrets that use the RHOSO Key Manager `simple_crypto` back end, keep those secrets available by enabling multiple back ends:

  ```yaml
  spec:
    barbican:
      apiOverride:
        route: {}
      enabled: true
      template:
        globalDefaultSecretStore: pkcs11
        enabledSecretStores:
          - pkcs11
          - simple_crypto
  ```

- Configure the Key Manager service for use with the Luna HSM:

  ```yaml
  spec:
    barbican:
      apiOverride:
        route: {}
      enabled: true
      template:
        globalDefaultSecretStore: pkcs11
        enabledSecretStores:
          - pkcs11
          - simple_crypto
        apiTimeout: 90
        barbicanAPI:
          apiTimeout: 0
          customServiceConfig: |
            [secretstore:pkcs11]
            secret_store_plugin = store_crypto
            crypto_plugin = p11_crypto

            [p11_crypto_plugin]
            plugin_name = PKCS11
            library_path = /usr/local/luna/lib/libCryptoki2.so
            token_serial_number = <serial_number>
            mkek_label = <mkek_label>
            hmac_label = <hmac_label>
            encryption_mechanism = CKM_AES_GCM
            aes_gcm_generate_iv = true
            hmac_key_type = CKK_GENERIC_SECRET
            hmac_keygen_mechanism = CKM_GENERIC_SECRET_KEY_GEN
            hmac_keywrap_mechanism = CKM_AES_KEY_WRAP_KWP
            key_wrap_mechanism = true
            key_wrap_generate_iv = true
            always_set_cka_sensitive = true
            os_locking_ok = false
        pkcs11:
          loginSecret: "login_secret"
          clientDataSecret: "luna_data_secret"
          clientDataPath: /usr/local/luna/config
  ```
  - Replace `<serial_number>` with the token serial number of your HSM. If you are using HA, you must replace `<serial_number>` with the virtual token serial number. For more information, see Creating secrets for the Key Manager service.
  - Replace `<mkek_label>` with a user-defined label. If you have already defined this label, you must use the same one.
  - Replace `<hmac_label>` with a user-defined label. If you have already defined this label, you must use the same one.

  Note: Use one of the following options to identify the HSM. These options are mutually exclusive and have the following order of precedence:
| Parameter | Value | Precedence |
|---|---|---|
| `token_serial_number` | `<serial_number>` | 1 - Highest |
| `token_labels` | Comma-delimited list of labels | 2 - Middle |
| `slot_id` | `<slot_id>` | 3 - Lowest |
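For example, if you prefer to select the HSM by label rather than by serial number, a hypothetical `[p11_crypto_plugin]` fragment could read as follows; the label values are placeholders:

```ini
[p11_crypto_plugin]
plugin_name = PKCS11
library_path = /usr/local/luna/lib/libCryptoki2.so
# token_labels replaces token_serial_number; do not set both,
# because token_serial_number takes precedence.
token_labels = myPartition1,myPartition2
mkek_label = my_mkek_label
hmac_label = my_hmac_label
```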
- Deploy the `OpenStackControlPlane` CR:

  ```
  $ oc apply -f openstack_control_plane.yaml
  ```
Chapter 7. Configuring a Proteccio HSM back end to work with the RHOSO Key Manager service
When you install Red Hat OpenStack Services on OpenShift (RHOSO), you have the option of using the Key Manager (barbican) service with either a default SimpleCrypto back end, or using it with a hardware security module (HSM). Using a hardware security module provides hardened protection for storing keys.
When you use a Trustway HSM, the Key Manager service communicates with the Trustway HSM by using a PKCS #11 interface to load libraries provided by Eviden. To integrate your RHOSO deployment with a Proteccio HSM, you must complete the following steps:
7.1. Tested software versions for the Trustway hardware security module
The following table details the versions of software tested by Red Hat.
| Software | Version |
|---|---|
| cryptoki | 2.20 |
| CRYPTO | 167 |
| Firmware | 147, 167 |
| FPGA | -1596587865 |
| library | 3.17 |
| MCS | 65539 |
7.2. Adding the Trustway HSM client to the Key Manager service
Build a new image for the Key Manager service that integrates the required Proteccio software. You must repeat this step when you update RHOSO. Creating an Ansible playbook to build this image simplifies the process of configuring RHOSO for your Trustway HSM. The `ansible-role-rhoso-proteccio-hsm` RPM, which is part of the RHOSO repository, contains the roles that this playbook requires. The following playbook automates the required tasks for configuring the Trustway HSM back end to work with the RHOSO Key Manager service:
- Downloads the `barbican-api` and `barbican-worker` images from the Red Hat source repository
- Adds the Trustway client software to the images
- Stores the resulting images in your destination repository
- Creates OpenShift secrets for the Key Manager service
The playbook uses keys, certificates, and configuration for your Trustway HSM to create two secrets. One is called login_secret, which contains your HSM password or PIN. The other secret is called proteccio_data_secret, and it contains your certificates, keys, and the proteccio.rc configuration file. These secrets are required in your Red Hat OpenShift Container Platform (RHOCP) environment to enable secure communication between the Key Manager service and your HSM. You can use an Ansible playbook to identify the client certificates to be copied in.
Prerequisites
- The Trustway client image for Linux. For information about obtaining this software, contact Eviden.
- An available image service, such as an internally available Quay service, or an account with quay.io. For more information, see Deploying the Red Hat Quay Operator on OpenShift Container Platform.
- The client certificate and the key for your Trustway HSM.
- The Trustway HSM certificate file.
- You are running commands on a workstation on which you can run Ansible playbooks.
Procedure
- Use DNF to install `ansible-role-rhoso-proteccio-hsm`:

  ```
  $ sudo dnf -y install ansible-role-rhoso-proteccio-hsm
  ```
- Place the Trustway client image for Linux, as well as the client certificate and the client key, into the `/opt/proteccio` directory tree:

  ```
  $ cp <trustway_client_cert>.crt /opt/proteccio
  $ cp <trustway_client_key>.key /opt/proteccio
  $ cp <Proteccio3.06.05.iso> /opt/proteccio
  ```
  - Replace `<trustway_client_cert>` with the file name of your client certificate.
  - Replace `<trustway_client_key>` with the file name of your client key.
  - Replace `<Proteccio3.06.05.iso>` with the name of your Trustway client for Linux ISO.
- Retrieve the server certificate from the HSM device, and copy it to the `/opt/proteccio` directory. For more information about retrieving the server certificate from your Proteccio HSM, see the vendor documentation.
- Optional: If you have more than one HSM for HA, get the certificate for each HSM and put them all in the `/opt/proteccio` directory.
- Update your `proteccio.rc` file to look similar to the following:

  ```ini
  [PROTECCIO]
  IPaddr=<Trustway_HSM_IP_address>
  SSL=1
  SrvCert=<HSM_Certificate_Name>.CRT

  [CLIENT]
  Mode=0
  LoggingLevel=7
  LogFile=/var/log/barbican/proteccio.log
  StatusFile=/var/log/barbican/HSM_Status.log
  ClntKey=<Client_Certificate_Name>.key
  ClntCert=<Client_Certificate_Name>.crt
  ```
  - Replace `<Trustway_HSM_IP_address>` with the IP address of your Trustway HSM.
  - Replace `<HSM_Certificate_Name>` with the name of your Trustway certificate.
  - In the file above, `Mode=0` means that only a single HSM device is in place.
  - Replace `<Client_Certificate_Name>` with your client certificate name.
- Optional: If you are configuring HA, you must include additional entries for the IP addresses of each HSM. Each new HSM must be inside its own `[PROTECCIO]` section. Additionally, you must change the `Mode` parameter inside the `[CLIENT]` section to a value of either `1` or `2`. For more information, see the official Eviden documentation.

  ```ini
  [PROTECCIO]
  IPaddr=<Trustway_HSM-2_IP_address>
  SSL=1
  SrvCert=<HSM-2_Certificate_Name>.CRT

  [CLIENT]
  Mode=2
  ```
  - Replace `<Trustway_HSM-2_IP_address>` with the IP address of your second Trustway HSM.
  - Create a new `[PROTECCIO]` section with the corresponding parameters for every subsequent Trustway unit in your environment.
- Move the `proteccio.rc` configuration file to `/opt/proteccio`:

  ```
  $ mv proteccio.rc /opt/proteccio
  ```
- Create a playbook called `ansible-proteccio.yaml` with the following contents:

  ```yaml
  ---
  - name: Create the Key Manager images and secrets for the Trustway HSM
    hosts: localhost
    vars:
      Trustway_client_name: <name>
      Trustway_server_cert_src: "/opt/proteccio/<server.pem>"
      Trustway_partition_password: "<password>"
      Trustway_data_secret: "Trustway_data_secret"
      login_secret: "login_secret"
      barbican_dest_image_namespace: "<namespace>"
      proteccio_client_src: "file:///opt/proteccio/<iso_file>"
      proteccio_password: "<pin>"
      kubeconfig_path: "<kubeconfig_path>"
      oc_dir: "<directory>"
    roles:
      - rhoso_proteccio_hsm
  ```
  - Replace `<name>` with the name of your Trustway certificate.
  - Replace `<server.pem>` with the name of your server certificate.
  - Replace `<password>` with your HSM partition password.
  - Replace `<namespace>` with your account name for Quay.io or another container registry.
  - Replace `<iso_file>` with the name of the Proteccio client ISO file.
  - Replace `<pin>` with the PIN that you use to log in to the Proteccio HSM.
  - Replace `<kubeconfig_path>` with the full path to your OpenShift configuration file.
  - Replace `<directory>` with the full path to the OpenShift client location.
Run the playbook:
$ ansible-playbook ansible-proteccio.yaml
7.3. Modifying the OpenStackVersion CR for the Key Manager custom image
Update the OpenStack version by using the OpenStackVersion custom resource (CR). The following procedure shows the CR that defines the custom container image.
Procedure
Create a CR file with the following contents:
```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack-galera-network-isolation
  namespace: openstack
spec:
  customContainerImages:
    barbicanAPIImage: <api_image>
    barbicanWorkerImage: <worker_image>
```
  - Replace `<api_image>` with the registry and path to the custom `barbicanAPIImage`.
  - Replace `<worker_image>` with the registry and path to the custom `barbicanWorkerImage`.
- Apply the `OpenStackVersion` CR:

  ```
  $ oc apply -f <filename>
  ```

  - Replace `<filename>` with the `OpenStackVersion` CR file name.
7.4. Configuring the Key Manager service for the Trustway HSM
You must modify the Key Manager (barbican) service section of the `OpenStackControlPlane` custom resource (CR) to fully integrate your Trustway HSM with Red Hat OpenStack Services on OpenShift (RHOSO).
Procedure
- Configure the Key Manager service within your `OpenStackControlPlane` CR for use with the Trustway HSM:

  ```yaml
  spec:
    barbican:
      apiOverride:
        route: {}
      enabled: true
      template:
        globalDefaultSecretStore: pkcs11
        enabledSecretStores:
          - pkcs11
          - simple_crypto
        apiTimeout: 90
        barbicanAPI:
          apiTimeout: 0
          customServiceConfig: |
            [secretstore:pkcs11]
            secret_store_plugin = store_crypto
            crypto_plugin = p11_crypto

            [p11_crypto_plugin]
            plugin_name = PKCS11
            library_path = /opt/tw_proteccio/lib/libnethsm.so
            token_labels = <token_label>
            mkek_label = <mkek_label>
            hmac_label = <hmac_label>
            encryption_mechanism = CKM_AES_CBC
            hmac_key_type = CKK_GENERIC_SECRET
            hmac_keygen_mechanism = CKM_GENERIC_SECRET_KEY_GEN
            hmac_mechanism = CKM_SHA256_HMAC
            key_wrap_mechanism = CKM_AES_CBC_PAD
            key_wrap_generate_iv = true
            always_set_cka_sensitive = true
            os_locking_ok = false
        pkcs11:
          loginSecret: "login_secret"
          clientDataSecret: "proteccio-data"
          clientDataPath: /etc/proteccio
  ```
  - Replace `<token_label>` with the token label of your HSM. If you are using HA, you must replace `<token_label>` with the virtual token serial number.
  - Replace `<mkek_label>` with a user-defined label. If you have already defined this label, you must use the same one.
  - Replace `<hmac_label>` with a user-defined label. If you have already defined this label, you must use the same one.

  Note: Use one of the following options to identify the HSM. These options are mutually exclusive and have the following order of precedence:
| Parameter | Value | Precedence |
|---|---|---|
| `token_serial_number` | `<serial_number>` | 1 - Highest |
| `token_labels` | Comma-delimited list of labels | 2 - Middle |
| `slot_id` | `<slot_id>` | 3 - Lowest |
- Optional: If you have saved secrets that use the RHOSO Key Manager `simple_crypto` back end, keep those secrets available by enabling multiple back ends:

  ```yaml
  spec:
    barbican:
      apiOverride:
        route: {}
      enabled: true
      template:
        globalDefaultSecretStore: pkcs11
        enabledSecretStores:
          - pkcs11
          - simple_crypto
  ```

- Deploy the `OpenStackControlPlane` CR:

  ```
  $ oc apply -f openstack_control_plane.yaml
  ```
Chapter 8. Configuring federated authentication in RHOSO
Red Hat supports only Red Hat’s single sign-on (SSO) technology as the identity provider for Red Hat OpenStack Services on OpenShift (RHOSO). If you use another vendor, contact Red Hat Support for a support exception.
8.1. Deploying RHOSO with a single sign-on federated IDP
Federation allows users to log in to the OpenStack Dashboard (horizon) by using Red Hat’s single sign-on (SSO) technology.
By default, users who log out of the OpenStack Dashboard are not logged out of SSO.
To use a single sign-on federated solution, you must modify the Identity service (keystone). You can use a secret to configure the Red Hat OpenStack Services on OpenShift (RHOSO) Identity service to integrate with your federated authentication solution.
Your federation client must have implicit flow enabled.
Prerequisites
- You have installed RHOSO.
- You have an SSO federated solution in your environment.
Procedure
Retrieve the Identity service (keystone) endpoint:
$ oc get keystoneapis.keystone.openstack.org -o json | jq '.items[0].status.apiEndpoints.public'
- Provide your SSO administrator with the following redirect URIs, as well as the web origin:

  ```
  https://<keystoneURL>/v3/auth/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/websso/
  https://<keystoneURL>/v3/auth/OS-FEDERATION/websso/openid

  webOrigins: https://<keystoneURL>
  ```
  - Replace `<keystoneURL>` with the URL retrieved in step 1. This URL must end in a trailing `/`.
  - Replace `<idp_name>` with a value of your choosing, for example, `kcipaIDP`.

  In response, your SSO administrator provides you with a `ClientID` and a `ClientSecret`.

  Note: The chosen `<idp_name>` value must match all referenced `<idp_name>` values in this procedure.
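The redirect URIs can be assembled mechanically from the Identity service endpoint. The following sketch uses a hypothetical keystone URL and IdP name; substitute the URL returned in step 1 and your own `<idp_name>`:

```shell
# Hypothetical values; substitute your own keystone URL and IdP name.
KEYSTONE_URL="https://keystone-public-openstack.apps.ocp.example.com"
IDP_NAME="kcipaIDP"

# Note the trailing slash on the first (websso) redirect URI.
printf '%s\n' \
  "${KEYSTONE_URL}/v3/auth/OS-FEDERATION/identity_providers/${IDP_NAME}/protocols/openid/websso/" \
  "${KEYSTONE_URL}/v3/auth/OS-FEDERATION/websso/openid"
```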
Retrieve the Memcached hostname:
  - For an IPv4 deployment, run the following command:

    ```
    $ oc get memcacheds.memcached.openstack.org -n openstack -o json | jq -r '.items[0].status.serverList[0] | split(":")[0]'
    ```

  - For an IPv6 deployment, run the following command:
$ oc get memcacheds.memcached.openstack.org -n openstack -o json | jq -r '.items[0].status.serverListWithInet[0]'
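The IPv4 jq expression splits the `host:port` entry on the colon and keeps only the host. You can try it locally against a hypothetical sample of the Memcached CR status:

```shell
# Hypothetical sample of the Memcached CR status (IPv4 case).
cat > /tmp/memcached.json <<'EOF'
{"items":[{"status":{"serverList":["memcached-0.memcached.openstack.svc:11211"]}}]}
EOF

# split(":")[0] drops the :11211 port and leaves only the hostname.
jq -r '.items[0].status.serverList[0] | split(":")[0]' /tmp/memcached.json
```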
- Create a `keystone-httpd-override.yaml` CR file and add the following configuration:

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: keystone-httpd-override
    namespace: openstack
  type: Opaque
  stringData:
    federation.conf: |
      # Example OIDC directives for the *public* endpoint
      OIDCClaimPrefix "OIDC-"
      OIDCScope "openid email profile"
      OIDCClaimDelimiter ";"
      OIDCPassUserInfoAs "claims"
      OIDCPassClaimsAs "both"
      OIDCClientID "<my_client_id>"
      OIDCClientSecret "<my_client_secret>"
      OIDCCryptoPassphrase "<crypto_pass>"
      OIDCProviderMetadataURL <metadata_url>
      OIDCResponseType "id_token"
      OIDCOAuthClientID "my_oauth_client_id"
      OIDCOAuthClientSecret "12345678"
      OIDCOAuthIntrospectionEndpoint "<https://my_oauth_introspection_endpoint>"
      OIDCRedirectURI "{{ .KeystoneEndpointPublic }}/v3/auth/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/websso/"

      <LocationMatch "/v3/auth/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/websso">
        AuthType "openid-connect"
        Require valid-user
      </LocationMatch>

      <Location ~ "/v3/OS-FEDERATION/identity_providers/<idp_name>/protocols/openid/auth">
        AuthType oauth20
        Require valid-user
      </Location>

      <LocationMatch "/v3/auth/OS-FEDERATION/websso/openid">
        AuthType "openid-connect"
        Require valid-user
      </LocationMatch>
  ```
  - Replace `<my_client_id>` with your client ID to use for the OpenID Connect provider handshake. You must get this from your SSO administrator.
  - Replace `<my_client_secret>` with the client secret to use for the OpenID Connect provider handshake. You must get this from your SSO administrator after providing your redirect URLs.
  - Replace `<crypto_pass>` with a secure passphrase to use when encrypting data for the OpenID Connect handshake. This is a user-defined value.
  - Replace `<metadata_url>` with the URL that points to your OpenID Connect provider metadata, in the format `https://<FQDN>/realms/<realm>/.well-known/openid-configuration`. The SSO administrator provides the requisite `<FQDN>` and organization-specific `<realm>` name for your OpenID provider.
  - Replace `<https://my_oauth_introspection_endpoint>` with the value provided by the SSO administrator.
  - Replace `<idp_name>` with your chosen string that creates a unique redirect URL, for example, `kcipaIDP`. This value must be replaced for the `keystoneFederationIdentityProviderName` parameter and the `LocationMatch` and `Location` directive arguments.

  Important: The full value for the `OIDCRedirectURI` parameter must end in a trailing `/`.
Create the secret:
$ oc create -f keystone-httpd-override.yaml
Get the URL for the OpenStack Dashboard:
$ oc get horizons.horizon.openstack.org -o json | jq -r '.items[0].status.endpoint'
- Edit the `keystone` section of the `OpenStackControlPlane` CR file and add the secret:

  ```yaml
  keystone:
    template:
      customServiceConfig: |
        [federation]
        trusted_dashboard=<horizon_endpoint>/dashboard/auth/websso/

        [openid]
        remote_id_attribute=HTTP_OIDC_ISS

        [auth]
        methods = password,token,oauth1,mapped,application_credential,openid
      httpdCustomization:
        customConfigSecret: keystone-httpd-override
  ```
  - Replace `<horizon_endpoint>` with the value that you retrieved in step 6.
  - Remove `external` from the `methods =` comma-delimited list.
  - Add the `httpdCustomization.customConfigSecret` parameter and set its value to the secret that you created in the `keystone-httpd-override.yaml` CR file.
- Edit the `horizon` section of the `OpenStackControlPlane` CR file to configure the OpenStack Dashboard (horizon):

  ```yaml
  horizon:
    template:
      customServiceConfig: |
        # Point Horizon to the Keystone public endpoint
        OPENSTACK_KEYSTONE_URL = "<keystone_endpoint>/v3"

        # Enable WebSSO in Horizon
        WEBSSO_ENABLED = True

        # Provide login options in Horizon's dropdown menu
        WEBSSO_CHOICES = (
            ("credentials", _("Keystone Credentials")),
            ("OIDC", _("OpenID Connect")),
        )

        # Map Horizon's "OIDC" choice to the Keystone IDP and protocol
        WEBSSO_IDP_MAPPING = {
            "OIDC": ("<idp_name>", "openid"),
        }
  ```
  - Replace `<keystone_endpoint>` with the value that you retrieved in the first step.
  - Replace `<idp_name>` with your chosen string that creates a unique redirect URL, for example, `kcipaIDP`.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml
Next steps
- Integrating the Identity service with a single sign-on federated IdP
8.2. Integrating the Identity service with a single sign-on federated IdP
After you deploy Red Hat OpenStack Services on OpenShift (RHOSO) with Red Hat’s single sign-on (SSO) technology for federation, you must integrate SSO with RHOSO.
Procedure
Create a federated domain:
$ openstack domain create <federated_domain_name>
  - Replace `<federated_domain_name>` with the name of the domain that you are managing with your identity provider, for example, `my_domain`.

  For example:
```
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| enabled     | True                             |
| id          | b493634c9dbf4546a2d1988af181d7c9 |
| name        | my_domain                        |
| options     | {}                               |
| tags        | []                               |
+-------------+----------------------------------+
```
Set up the federation identity provider:
$ openstack identity provider create --remote-id https://<sso_fqdn>:9443/realms/<realm> --domain <federated_domain_name> <idp_name>
  - Replace `<sso_fqdn>` with the fully qualified domain name for your SSO identity provider.
  - Replace `<realm>` with the SSO realm. The default realm is `master`.
  - Replace `<federated_domain_name>` with the name of the federated domain that you created in step 1, for example, `my_domain`.
  - Replace `<idp_name>` with the string that you chose when deploying SSO to create the unique redirect URL, for example, `kcipaIDP`.

  For example:
```
+-------------------+-------------------------------------------+
| Field             | Value                                     |
+-------------------+-------------------------------------------+
| authorization_ttl | None                                      |
| description       | None                                      |
| domain_id         | b493634c9dbf4546a2d1988af181d7c9          |
| enabled           | True                                      |
| id                | kcipaIDP                                  |
| remote_ids        | https://sso.fqdn.local:9443/realms/master |
+-------------------+-------------------------------------------+
```
Create a mapping file that is unique to the identity needs of your cloud:
```
$ cat > mapping.json << EOF
[
    {
        "local": [
            {
                "user": {
                    "name": "{0}"
                },
                "group": {
                    "domain": {
                        "name": "<federated_domain_name>"
                    },
                    "name": "<federated_group_name>"
                }
            }
        ],
        "remote": [
            {
                "type": "OIDC-preferred_username"
            }
        ]
    }
]
EOF
```
  - Replace `<federated_domain_name>` with the domain that you created in step 1, for example, `my_domain`.
  - Replace `<federated_group_name>` with the name of the federated group that you create in a later step, for example, `my_fed_group`.
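Before creating the mapping, you can sanity-check `mapping.json` locally with jq. The sketch below writes a mapping with hypothetical domain and group names and confirms that the remote attribute carries the `OIDC-` claim prefix configured earlier in the Apache directives:

```shell
# Hypothetical domain and group values for illustration only.
cat > /tmp/mapping.json << 'EOF'
[
  {
    "local": [
      {
        "user": { "name": "{0}" },
        "group": {
          "domain": { "name": "my_domain" },
          "name": "my_fed_group"
        }
      }
    ],
    "remote": [
      { "type": "OIDC-preferred_username" }
    ]
  }
]
EOF

# The file must parse as a JSON array of rules, and the remote attribute
# name must start with the OIDCClaimPrefix value ("OIDC-").
jq -r '.[0].remote[0].type' /tmp/mapping.json
```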
Use the mapping file to create the federation mapping rules for RHOSO:
$ openstack mapping create --rules <mapping_file> <mapping_rules>
  - Replace `<mapping_file>` with the name of the mapping file that you created in the previous step, for example, `mapping.json`.
  - Replace `<mapping_rules>` with the name of the mapping rules created from this file, for example, `IPAmap`.
Create a federated group:
$ openstack group create --domain <federated_domain_name> <federated_group_name>
  - Replace `<federated_domain_name>` with the name of the domain that you created in step 1, for example, `my_domain`.
  - Replace `<federated_group_name>` with the name of the federated group that you specified in the mapping file, for example, `my_fed_group`.
Create an Identity service (keystone) project:
$ openstack project create --domain <federated_domain_name> <federated_project_name>
  - Replace `<federated_project_name>` with the name of the Identity service project.
Add the Identity service federation group to a role:
$ openstack role add --group <federated_group_name> --group-domain <federated_domain_name> --project <federated_project_name> --project-domain <federated_domain_name> member
Create the OpenID federation protocol:
$ openstack federation protocol create openid --mapping <mapping_rules> --identity-provider <idp_name>
- Replace <mapping_rules> with the name of the mapping rules that you created from your mapping file, for example, IPAmap.
- Replace <idp_name> with your chosen string that creates the unique redirect URL, for example, kcipaIDP.
Chapter 9. Configuring multi-realm federated authentication in RHOSO
You can configure Red Hat OpenStack Services on OpenShift (RHOSO) Identity service (keystone) and Dashboard (horizon) to provide multi-realm federated authentication using OpenID Connect (OIDC) as the protocol. Multi-realm federation allows users to log in to the OpenStack Dashboard by using single sign-on (SSO) and select from one of several external Identity Providers (IdPs).
9.1. Deploying RHOSO with multiple federated Identity Providers
Multi-realm federation allows users to log in to the Red Hat OpenStack Services on OpenShift (RHOSO) Dashboard by using single sign-on (SSO) and select from one of several external Identity Providers (IdPs).
The RHOSO deployment of multiple federated IdPs implements the Web SSO authentication flow because the OpenStack CLI does not support multiple IdPs.
Prerequisites
- You have installed RHOSO.
- You have multiple external OpenID Connect (OIDC) IdPs configured in your environment.
Procedure
Choose a name to uniquely identify each IdP.
In this example there are two IdPs, whose names are referenced as <idp_name_1> and <idp_name_2>.
Obtain the following settings from each IdP administrator:
- The FQDN for each IdP, referenced in this procedure as <fqdn_1> and <fqdn_2>.
- The federation Realm Name for each IdP, referenced in this procedure as <realm_name_1> and <realm_name_2>.
- The Client ID for each IdP, referenced in this procedure as <client_id_1> and <client_id_2>.
- The Client Secret for each IdP, referenced in this procedure as <client_secret_1> and <client_secret_2>.
- The Provider Metadata URL for each IdP, referenced in this procedure as <provider_metadata_url_1> and <provider_metadata_url_2>.
Retrieve the Identity service (keystone) public endpoint:
$ oc get keystoneapis.keystone.openstack.org -o json | jq '.items[0].status.apiEndpoints.public'
This Identity service endpoint is referenced in this procedure as <keystone_url>.
Provide the IdP administrators with the following information:
Web origin:
https://<keystone_url>
Redirect URIs:
https://<keystone_url>/v3/auth/OS-FEDERATION/websso/openid
Provide a URI for each IdP containing their unique IdP name, which must end in a trailing /. You must send each URI to their respective IdP administrator:
https://<keystone_url>/v3/auth/OS-FEDERATION/identity_providers/<idp_name_1>/protocols/openid/websso/
https://<keystone_url>/v3/auth/OS-FEDERATION/identity_providers/<idp_name_2>/protocols/openid/websso/
- Each federation client must have the Implicit flow enabled, not the Authorization code flow.
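The per-IdP WebSSO URIs above follow a fixed pattern, so they can be generated and validated with a short script. The endpoint and IdP names below are hypothetical placeholders, not values from your deployment:

```python
# Sketch: build the per-IdP WebSSO redirect URIs described above and check
# that each one ends in the required trailing slash.
# "keystone.openstack.example.com" stands in for the endpoint retrieved
# with `oc get keystoneapis...`; the IdP names are also placeholders.
keystone_url = "https://keystone.openstack.example.com"
idp_names = ["idpOne", "idpTwo"]

def websso_uri(base, idp):
    """Return the OS-FEDERATION WebSSO URI for one identity provider."""
    return (f"{base}/v3/auth/OS-FEDERATION/identity_providers/"
            f"{idp}/protocols/openid/websso/")

uris = [websso_uri(keystone_url, name) for name in idp_names]
for uri in uris:
    # The trailing slash is mandatory; IdPs reject mismatched redirect URIs.
    assert uri.endswith("/"), "each redirect URI must end in a trailing slash"
    print(uri)
```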
Create a custom resource (CR) file for a secret called keystone-httpd-override:

apiVersion: v1
kind: Secret
metadata:
  name: keystone-httpd-override
  namespace: openstack
type: Opaque
stringData:
  federation.conf: |
    # Example OIDC directives for the *public* endpoint
    OIDCClaimPrefix "OIDC-"
    OIDCResponseType "id_token"
    OIDCScope "openid email profile"
    OIDCClaimDelimiter ";"
    OIDCPassUserInfoAs "claims"
    OIDCPassClaimsAs "both"
    OIDCCryptoPassphrase "<crypto_pass>"
    OIDCRedirectURI "<keystone_url>/v3/redirect_uri/"
    OIDCMetadataDir "/var/lib/httpd/metadata"
    OIDCAuthRequestParams "prompt=login"
    <IfModule headers_module>
      <Location "/v3/local-logout/clear">
        Header always add Set-Cookie "mod_auth_openidc_session=deleted; Path=/; Max-Age=0; HttpOnly; Secure; SameSite=None"
      </Location>
    </IfModule>
    RewriteEngine On
    RewriteRule ^/v3/auth/OS-FEDERATION/identity_providers/(<idp_name_1>|<idp_name_2>)/protocols/openid/websso$ \
      /v3/local-logout/clear [R=302,L]
    RewriteRule ^/v3/local-logout/clear$ \
      /v3/auth/OS-FEDERATION/websso/openid [R=302,L,QSA,NE]
    <Location "/v3/auth/OS-FEDERATION/websso/openid">
      AuthType openid-connect
      Require valid-user
    </Location>
    <Location "/v3/redirect_uri">
      AuthType openid-connect
      Require valid-user
    </Location>

Important
The full value of the OIDCRedirectURI parameter must end in a trailing /.
- Replace <crypto_pass> with a user-defined passphrase to use when encrypting data for the OpenID Connect handshake.
- Replace <keystone_url> with the Identity service endpoint value that you retrieved in step 3.
- Replace <idp_name_1> and <idp_name_2> with the unique IdP names that you specified in step 1.

The following OIDC parameter and associated Apache configuration provide the most failsafe solution that supports the login of users from multiple IdPs. Consequently, previous sessions are not saved, and users must reauthenticate after they log out of the Dashboard:
OIDCAuthRequestParams "prompt=login"
<IfModule headers_module>
  <Location "/v3/local-logout/clear">
    Header always add Set-Cookie "mod_auth_openidc_session=deleted; Path=/; Max-Age=0; HttpOnly; Secure; SameSite=None"
  </Location>
</IfModule>
RewriteEngine On
RewriteRule ^/v3/auth/OS-FEDERATION/identity_providers/(<idp_name_1>|<idp_name_2>)/protocols/openid/websso$ \
  /v3/local-logout/clear [R=302,L]
RewriteRule ^/v3/local-logout/clear$ \
  /v3/auth/OS-FEDERATION/websso/openid [R=302,L,QSA,NE]

If users in your multiple federated IdP deployment do not belong to more than one IdP, you can allow users to reopen a Dashboard they have closed without providing any authentication. In this case, you must remove this OIDC parameter and provide a different Apache LocationMatch configuration to save the previous sessions.
Create the keystone-httpd-override secret:

$ oc create -f keystone-httpd-override.yaml
Retrieve the URL for the Dashboard:
$ oc get horizons.horizon.openstack.org -o json | jq -r '.items[0].status.endpoint'
Use the following Ansible playbook to create a secret called federation-realm-data:

- name: Download realm1 OpenID configuration
  ansible.builtin.uri:
    url: "<provider_metadata_url_1>"
    method: GET
    return_content: true
    validate_certs: false
  register: openid_wellknown_config1

- name: Download realm2 OpenID configuration
  ansible.builtin.uri:
    url: "<provider_metadata_url_2>"
    method: GET
    return_content: true
    validate_certs: false
  register: openid_wellknown_config2

- name: Set federation_config_items
  ansible.builtin.set_fact:
    federation_config_items:
      - filename: "<fqdn_1>%2Fauth%2Frealms%2F<realm_name_1>.conf"
        contents: |
          { "scope" : "openid email profile" }
      - filename: "<fqdn_1>%2Fauth%2Frealms%2F<realm_name_1>.client"
        contents: "{{ {'client_id': <client_id_1>, 'client_secret': <client_secret_1> } | to_json }}"
      - filename: "<fqdn_1>%2Fauth%2Frealms%2F<realm_name_1>.provider"
        contents: |
          {{ openid_wellknown_config1.content }}
      - filename: "<fqdn_2>%2Fauth%2Frealms%2F<realm_name_2>.conf"
        contents: |
          { "scope" : "openid email profile" }
      - filename: "<fqdn_2>%2Fauth%2Frealms%2F<realm_name_2>.client"
        contents: "{{ {'client_id': <client_id_2>, 'client_secret': <client_secret_2>} | to_json }}"
      - filename: "<fqdn_2>%2Fauth%2Frealms%2F<realm_name_2>.provider"
        contents: |
          {{ openid_wellknown_config2.content }}

- name: Generate the final federation_config.json string (as a dictionary)
  ansible.builtin.set_fact:
    _raw_federation_config_json_value: |
      {
      {% for item in federation_config_items %}
        "{{ item.filename }}": {{ item.contents }}{% if not loop.last %},{% endif %}
      {% endfor %}
      }

- name: Final JSON string for Secret stringData
  ansible.builtin.set_fact:
    federation_config_json_string: "{{ _raw_federation_config_json_value }}"

- name: Create a Kubernetes Secret with federation metadata
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: federation-realm-data
        namespace: openstack
      stringData:
        federation-config.json: "{{ federation_config_json_string }}"

- Replace the IdP variables with the values that you obtained from the IdP administrators in step 2.
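The playbook's filenames encode each IdP's issuer path with %2F in place of /, which is how mod_auth_openidc matches files in its metadata directory to a provider. The following Python sketch of that naming scheme uses hypothetical FQDN, realm, and client values:

```python
import json
import urllib.parse

# Sketch of the metadata file naming used in the playbook above: each realm
# gets .conf/.client/.provider files named after its issuer path, with "/"
# percent-encoded as %2F. The FQDN, realm, and client values are hypothetical.
fqdn = "sso.example.com"
realm = "realmOne"

issuer_path = f"{fqdn}/auth/realms/{realm}"
encoded = urllib.parse.quote(issuer_path, safe="")  # encodes "/" as %2F

files = {
    f"{encoded}.conf": json.dumps({"scope": "openid email profile"}),
    f"{encoded}.client": json.dumps(
        {"client_id": "horizon", "client_secret": "s3cret"}  # placeholders
    ),
}
for name in sorted(files):
    print(name)
```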
Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
Edit the keystone section of the OpenStackControlPlane CR:

keystone:
  template:
    customServiceConfig: |
      [federation]
      trusted_dashboard=<horizon_endpoint>/dashboard/auth/websso/
      [openid]
      remote_id_attribute=HTTP_OIDC_ISS
      [auth]
      methods = password,token,oauth1,mapped,application_credential,openid
    httpdCustomization:
      customConfigSecret: keystone-httpd-override
      federatedRealmConfig: federation-realm-data
- Replace <horizon_endpoint> with the Dashboard URL that you retrieved in step 7.
- Remove external from the methods = comma-delimited list.
- Add the httpdCustomization.customConfigSecret parameter and set its value to the secret created from the keystone-httpd-override.yaml CR file in step 5.
- Add the httpdCustomization.federatedRealmConfig parameter and set its value to the federation-realm-data secret created by the Ansible playbook in step 8.
Edit the horizon section of the OpenStackControlPlane CR:

horizon:
  template:
    customServiceConfig: |
      # Point horizon to the keystone public endpoint
      OPENSTACK_KEYSTONE_URL = "<keystone_endpoint>/v3"
      # Enable WebSSO in horizon
      WEBSSO_ENABLED = True
      # Provide login options in the horizon dropdown menu
      WEBSSO_CHOICES = (
          ("credentials", _("Keystone Credentials")),
          ("OIDC1", _("OpenID Connect IdP1")),
          ("OIDC2", _("OpenID Connect IdP2")),
      )
      # Map the "OIDC" choice of horizon to the keystone IDP and protocol
      WEBSSO_IDP_MAPPING = {
          "OIDC1": ("<idp_name1>", "openid"),
          "OIDC2": ("<idp_name2>", "openid"),
      }

Update the control plane:
$ oc apply -f openstack_control_plane.yaml
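A quick consistency check for the Dashboard settings: every entry in WEBSSO_CHOICES other than credentials needs a matching entry in WEBSSO_IDP_MAPPING, and in this deployment every mapping uses the openid protocol. A sketch with hypothetical IdP names:

```python
# Sketch: verify that every WebSSO choice offered in the horizon dropdown
# (other than local "credentials" login) maps to a keystone IdP/protocol
# pair. The IdP names "idpOne"/"idpTwo" are hypothetical placeholders.
WEBSSO_CHOICES = (
    ("credentials", "Keystone Credentials"),
    ("OIDC1", "OpenID Connect IdP1"),
    ("OIDC2", "OpenID Connect IdP2"),
)
WEBSSO_IDP_MAPPING = {
    "OIDC1": ("idpOne", "openid"),
    "OIDC2": ("idpTwo", "openid"),
}

for key, _label in WEBSSO_CHOICES:
    if key == "credentials":
        continue  # local keystone login needs no IdP mapping
    idp, protocol = WEBSSO_IDP_MAPPING[key]
    assert protocol == "openid", "this deployment federates over OIDC only"
    print(f"{key} -> {idp}/{protocol}")
```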
Chapter 10. Configuring a Single Keystone Multiple OpenStacks multi-region deployment to simplify user management and configuration
The Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment simplifies user management and configuration for multiple regions.
In standard multi-region RHOSO deployments, each region is isolated with its own Identity (keystone) and Dashboard (horizon) services. This requires separate user accounts for each region, making credential management and rotation difficult.
- SKMO multi-region RHOSO deployment architecture
An SKMO deployment requires the following architecture to facilitate the simplified user management and configuration:
An SKMO deployment consists of a single central region and multiple workload regions.

Important
When you deploy each workload region, you must define a unique RHOSO namespace, a unique region name, and unique Identity service user names for all the OpenStack services that communicate with the Identity service. For more information about the unique networking and configuration requirements of the SKMO deployment, see Plan your Single Keystone Multiple OpenStacks deployment.
The
centralregion provides the Dashboard (horizon) service that is shared by all the regions of the SKMO deployment. The
centralregion provides a centralized Identity (keystone) service:-
You must use the Identity service of the
centralregion to create the default administrator user for eachworkloadregion. -
You must use the Identity service of the
centralregion to create the catalog entries for the public and private endpoints of the Identity service for eachworkloadregion.
-
You must use the Identity service of the
-
The centralized Identity and Dashboard services provide a
single pane of glassfor the simplified configuration and management of the users. Each end user has a single set of credentials. You can enable or disable their access to every region in thecentralregion. For more information, see Deploy Single Keystone Multiple OpenStacks.
10.1. Plan your Single Keystone Multiple OpenStacks deployment
The Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment adopts an interdependent regional architecture. The central region provides centralized Dashboard (horizon) and Identity (keystone) services that are relied upon by all the other workload regions. Therefore, you must implement the following requirements for a successful SKMO deployment:
The interdependence between the workload and the central regions requires that every region provides the following unique identifications for a successful SKMO deployment:

- A unique namespace to differentiate the RHOSO deployment of each region. For more information, see Create a unique namespace for each workload region.

Important
The RHOSO deployment namespace forms part of the DNS name for each OpenStack service. If you do not use different RHOSO namespaces for every region, conflicts occur between services in your different regions.

- A unique region name, defined by the spec of the Identity (keystone) service in the OpenStackControlPlane custom resource (CR). For more information, see Modify the deployment of each workload region.

Note
When the central Red Hat OpenStack Services on OpenShift (RHOSO) region is deployed, this region is called regionOne by default. If you use a workload region naming convention, you can rename the central region to make it more easily identifiable. For more information, see Rename the central region.

- Unique Identity service user names for the OpenStack services that communicate with the Identity service, to simplify the task of managing their individual credentials. For more information, see Modify the deployment of each workload region.

Warning
If you do not specify unique Identity service user names for all the OpenStack services that communicate with the Identity service, changing the password for a service user disrupts this service in all the workload regions that use the same user, unless you first schedule a maintenance window for all of these regions.

The interdependence between the workload and the central regions imposes the following restrictions that must be met by the logical networking topology of your DNS configuration:

- Each workload region must resolve the DNS name of the Identity service in the central region and access it.
- The data plane nodes in each workload region must resolve the DNS name of the Identity service in the central region and access it.
- After you deploy the workload regions, the Dashboard (horizon) service in the central region must resolve the DNS names in the service catalog of every workload region and access them.

The interdependence between the Identity services in each workload region and the Identity service of the central region changes how the service-to-service communication for the workload regions is routed. For more information, see Create the public and private endpoints of each workload region.

In a normal OpenStack deployment like the central region, the Identity service has both a public and an internal endpoint. These endpoints exist in separate networks to keep internal service-to-service communication separate from public traffic. However, the workload regions must send all of their internal service-to-service communication traffic to the public endpoint, and therefore the public network, of the central region. Even though the internal service-to-service communication traffic of the workload regions is encrypted, it is more vulnerable to DDoS attacks because it is not isolated on a separate internal network, making it easier for external attackers to intercept these messages.

The barbican-keystone-listener service requires access to the RabbitMQ message queue so that when a project is deleted by the Identity service (keystone), it can tell the Key Manager service (barbican) to clean up the related secrets and the other artifacts that it manages.

In an SKMO deployment, the RabbitMQ message queue of the central region, not the workload regions, contains the necessary Identity service messages. For this reason, the barbican-keystone-listener services in the workload regions cannot know when projects are deleted, and the Key Manager service cannot clean up the related artifacts. Therefore, you must implement a third-party application like Scupper and configure your SKMO deployment to allow the barbican-keystone-listener services in the workload regions to access the RabbitMQ message queue in the central region to clean up deleted projects.
10.2. Deploy Single Keystone Multiple OpenStacks
A Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment creates a centralized Dashboard (horizon) and Identity (keystone) service to provide a single pane of glass for the simplified configuration and management of the users. Each end user has a single set of credentials and their access to every workload region can be enabled or disabled in the central region.
Do not manually configure the Dashboard (horizon) service of the central region to connect to the various workload regions. A Managing regions dropdown list is added to the UI automatically for an SKMO deployment so that users can select the required workload region. For more information, see SKMO Dashboard region configuration.
Prerequisites
- The interdependent regional architecture of the SKMO deployment requires a number of unique requirements to be met to ensure a successful deployment. For more information, see Plan your Single Keystone Multiple OpenStacks deployment.
Procedure
Deploy the central region, which is called regionOne by default, unless you rename it. For more information, see Rename the central region.

The deployment of the central region does not require a data plane.

- Create the default administrator Identity service user for each workload region in the central region. These Identity service users must be granted the admin role in the admin project of the central region. For more information, see Create the default administrator user for each workload region.
- Create the catalog entries for the public and private endpoints of the Identity service in each workload region by using the Identity service in the central region. For more information, see Create the public and private endpoints of each workload region.

Note
Both the public and private endpoints of the Identity service in each workload region point to the public Identity service endpoint in the central region.

- Modify and deploy each workload region. An important part of this deployment modification involves creating a unique region name and unique Identity service users for each workload region. For more information, see Modify the deployment of each workload region.
- After you deploy a workload region, configure the deployed central region to trust this workload region. For more information, see Configure the central region to trust a deployed workload region.
10.3. Rename the central region
When you deploy the central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment, the region is called regionOne by default. If you use a workload region naming convention when you name the workload regions, you can rename the central region to make it more easily identifiable.
Prerequisites
-
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges. -
You have the
occommand line tool installed on your workstation.
Procedure
In the
centralregion, set the default RHOSO namespace:$ oc project <central-region-namespace>
-
Replace
<central-region-namespace>with the name of the unique namespace for thecentralregion, for exampleopenstack.
-
Replace
Edit the
OpenStackControlPlaneCR of yourcentralregion on your workstation:$ oc edit openstackcontrolplane <name>
-
Replace
<name>with the name of your YAMLOpenStackControlPlaneCR. You can use the following command to retrieve this name:oc get openstackcontrolplane.
-
Replace
Configure the
regionparameter of the Identity service (keystone):... spec: ... keystone: ... template: ... region: <central-region-name>-
Replace
<central-region-name>with the region name for yourcentralregion.
-
Replace
- Save and close the editor to apply this change.
Wait for the deployment of the control plane to reach the
Readystatus:$ oc wait openstackcontrolplane <name> --for=condition=Ready --timeout=600s
10.4. Create the default administrator user for each workload region
You must use the Identity service (keystone) of the deployed central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment to create the default administrator user for each workload region. These workload administrator users must be granted the admin role in the admin project of the central region. For more information, see Deploy Single Keystone Multiple OpenStacks.
Prerequisites
-
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges. -
You have the
occommand line tool installed on your workstation.
Procedure
In the
centralregion, set the default RHOSO namespace:$ oc project <central-region-namespace>
-
Replace
<central-region-namespace>with the name of the unique namespace for thecentralregion, for exampleopenstack.
-
Replace
Access the remote shell for the
OpenStackClientpod from your workstation to run OpenStack CLI commands:$ oc rsh openstackclient
Create the default administrator user for each
workloadregion:$ openstack user create --domain Default --project <central-region-admin-project> --project-domain Default --password <workload-region-admin-password> <workload-region-admin-name>
-
Replace
<central-region-admin-project>with the name of the admin project in thecentralregion that isadminby default. Replace
<workload-region-admin-password>with the password of the default administrator user of each`workload` region.NoteSet this password as the value of the
AdminPassword:parameter of theSecretcustom resource (CR) file that you must create to provide secure access to the RHOSO service pods when you deploy eachworkloadregion.-
Replace
<workload-region-admin-name>with the name of the default administrator user of eachworkloadregion, for exampleadmin-two.
-
Replace
Add the following roles to each default workload region administrator user:

$ openstack role add --project admin --project-domain Default --user <workload-region-admin-name> --user-domain Default admin
$ openstack role add --system all --user <workload-region-admin-name> --user-domain Default admin
10.5. Create the public and private endpoints of each workload region
You must use the Identity service (keystone) of the deployed central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment to create the catalog entries for the public and private endpoints of the Identity service in each workload region. For more information, see Deploy Single Keystone Multiple OpenStacks.
Both the public and private endpoints of the Identity service in each workload region must specify the public Identity service endpoint of the central region.
Therefore, even though the internal service-to-service communication traffic of the workload regions is encrypted, it is more vulnerable to DDoS attacks because it is not segregated on a separate internal network, making it easier for external attackers to intercept these messages.
Prerequisites
-
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges. -
You have the
occommand line tool installed on your workstation.
Procedure
In the
centralregion, set the default RHOSO namespace:$ oc project <central-region-namespace>
-
Replace
<central-region-namespace>with the name of the unique namespace for thecentralregion, for exampleopenstack.
-
Replace
Access the remote shell for the
OpenStackClientpod from your workstation to run OpenStack CLI commands:$ oc rsh openstackclient
Obtain the public Identity service endpoint of the
centralregion:$ openstack endpoint list --region <central-region-name> --service keystone --interface public
-
Replace <central-region-name> with the name of your
centralregion, which isregionOneby default.
-
Replace <central-region-name> with the name of your
-
Copy this URL that is referenced in this procedure as
<central-region-public-keystone-url>. Create the public and private endpoints of each
workloadregion to specify the public Identity service endpoint of thecentralregion:$ openstack endpoint create --region <workload-region-name> keystone public <central-region-public-keystone-url> $ openstack endpoint create --region <workload-region-name> keystone internal <central-region-public-keystone-url>
Replace <workload-region-name> with the name of the required
workloadregion, for exampleregionTwo.This creates the catalog entries for the public and private endpoints of the Identity service in each
workloadregion in the Identity service catalog of thecentralregion.
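The resulting catalog shape can be sketched as a simple lookup table: the central region keeps distinct public and internal keystone endpoints, while every workload region registers the central public URL for both interfaces. All URLs and region names below are hypothetical examples:

```python
# Sketch of the keystone service catalog after the endpoint creation above.
# "regionOne" is the central region; "regionTwo"/"regionThree" are workload
# regions. All URLs are hypothetical placeholders.
central_public = "https://keystone.central.example.com"
catalog = {}

def register_region(catalog, region, public_url, internal_url):
    """Record the keystone endpoints for one region."""
    catalog[(region, "public")] = public_url
    catalog[(region, "internal")] = internal_url

# The central region keeps separate public and internal endpoints.
register_region(catalog, "regionOne", central_public,
                "https://keystone.internal.central.example.com")
# Each workload region reuses the central public endpoint for both interfaces.
for region in ("regionTwo", "regionThree"):
    register_region(catalog, region, central_public, central_public)

for region in ("regionTwo", "regionThree"):
    assert catalog[(region, "public")] == central_public
    assert catalog[(region, "internal")] == central_public
print(sorted(catalog))
```

This is what forces the workload regions' internal service-to-service traffic onto the central region's public network, as described above.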
10.6. Modify the deployment of each workload region
You must modify the deployment of each workload region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment because the Dashboard (horizon) and Identity (keystone) services of the central region are shared with all the workload regions. For more information, see Deploy Single Keystone Multiple OpenStacks.
An important part of this deployment modification involves creating a unique region name and unique Identity service users for each workload region. For more information, see Plan your Single Keystone Multiple OpenStacks deployment.
Procedure
-
Create a unique RHOSO deployment namespace for each
workloadregion to differentiate these RHOSO deployments from each other. For more information, see Create a unique namespace for eachworkloadregion. -
Specify the password of the administrator user that you manually created for each
workloadregion as the value of theAdminPassword:parameter when you create theSecretcustom resource (CR) to provide secure access to the RHOSO service pods in eachworkloadregion. For more information, see Create the default administrator user for eachworkloadregion and Provide secure access to RHOSO services in the Deploying Red Hat OpenStack Services on OpenShift guide. -
Obtain the CA certificate from the
centralregion and add it to a secret in eachworkloadregion so it can be trusted. For more information, see Configure eachworkloadregion to trust thecentralregion. Create a modified
OpenStackControlPlanecustom resource (CR) for eachworkloadregion. For more information, see Modify the control plane CR of eachworkloadregion.This involves defining a unique region name defined in the
specof the Identity (keystone) service for eachworkloadregion and creating unique Identity service users.
10.6.1. Create a unique namespace for each workload region
You must create a unique namespace for every workload region of your Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment. This unique namespace is necessary to differentiate the RHOSO deployment of each region.
The RHOSO deployment namespace forms part of the DNS name for each OpenStack service. If you do not use different namespaces for every region, conflicts occur between services in your different regions.
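The conflict can be illustrated with the usual Kubernetes service DNS form, <service>.<namespace>.svc.<cluster-domain>: only a unique namespace keeps the keystone service names of two regions distinct. The service name and cluster domain below are hypothetical examples:

```python
# Sketch: why each region needs its own namespace. Kubernetes service DNS
# names are <service>.<namespace>.svc.<cluster-domain>, so two regions that
# shared a namespace would collide on the same service name.
# "keystone-public" and "cluster.local" are hypothetical placeholders.
def service_dns(service, namespace, cluster_domain="cluster.local"):
    return f"{service}.{namespace}.svc.{cluster_domain}"

region_one = service_dns("keystone-public", "openstack")       # central
region_two = service_dns("keystone-public", "openstack-two")   # workload

# Unique namespaces keep the two names distinct.
assert region_one != region_two
print(region_one)
print(region_two)
```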
Prerequisites
-
You are logged on to a workstation that has access to the RHOCP cluster, as a user with
cluster-adminprivileges. -
You have the
occommand line tool installed on your workstation.
Procedure
Create a project in your deployed RHOSO environment:
$ oc new-project <workload-region-namespace>
-
Replace
<workload-region-namespace>with the name of the unique namespace for eachworkloadregion, for exampleopenstack-two.
-
Replace
Ensure that this namespace is labeled to enable privileged pod creation by the OpenStack Operators:

$ oc get namespace <workload-region-namespace> -ojsonpath='{.metadata.labels}' | jq
{
  "kubernetes.io/metadata.name": "<workload-region-namespace>",
  "pod-security.kubernetes.io/enforce": "privileged",
  "security.openshift.io/scc.podSecurityLabelSync": "false"
}

If the security context constraint (SCC) is not "privileged", use the following commands to change it:

$ oc label ns <workload-region-namespace> security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns <workload-region-namespace> pod-security.kubernetes.io/enforce=privileged --overwrite
10.6.2. Configure each workload region to trust the central region
After you deploy the central region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must configure each workload region to trust the central region. For more information, see Deploy Single Keystone Multiple OpenStacks.
In the following procedure, the Identity service region name of the central region is regionOne.
Prerequisites
-
You are logged on to a workstation that has access to your Red Hat OpenShift Container Platform (RHOCP) cluster as a user with
cluster-adminprivileges. -
You have the
occommand line tool installed on your workstation.
Procedure
In the
centralregion, set the default RHOSO namespace:$ oc project <central-region-namespace>
-
Replace
<central-region-namespace>with the name of the unique namespace for thecentralregion, for exampleopenstack.
-
Replace
Obtain the CA certificate in the
centralregion and extract it into a file, for exampleregionOne-ca.crt:NoteTo decode the certificate before creating the output .crt file, add
| base64 -dto this command.$ oc get secret rootca-public -o yaml | yq '.data."ca.crt"' > regionOne-ca.crt
-
Copy the
regionOne-ca.crtfile to a deployedworkloadregion. In this
workloadregion, set the default RHOSO namespace:$ oc project <workload-region-namespace>
-
Replace
<workload-region-namespace>with the name of the unique namespace for thisworkloadregion, for exampleopenstack-two.
-
Replace
-
Create a PEM-formatted bundle in this
workloadregion, for examplecustom-ca-certs.pemthat includes the contents of thisregionOne-ca.crtfile and all the other custom CA certificates that you want eachworkloadregion to trust. Create a manifest file for a secret in this
workloadregion that specifies the contents of thecustom-ca-certs.pembundle created in the previous step. In this example, this manifest file is calledcustom-ca-certs.yamland the secret is calledcustom-ca-certs:apiVersion: v1 data: custom-ca-certs.pem: <contents-of-PEM-bundle> kind: Secret metadata: annotations: name: custom-ca-certs namespace: <workload-region-namespace> type: Opaque
-
Replace
<contents-of-PEM-bundle>with the base64 encoded string of the contents of the PEM-formatted bundle created in step 2 calledcustom-ca-certs.pemthat includes the CA certificate from thecentralregion. You can get this base64 encoded string by using the following command:cat custom-ca-certs.pem | base64 -w0. -
Replace
<workload-region-namespace>with the name of the unique namespace that you created for thisworkloadregion, for exampleopenstack-two.
-
Replace
Create the secret in this
workloadregion from the manifest file. In this example, this manifest file is calledcustom-ca-certs.yaml:$ oc apply -f custom-ca-certs.yaml
Repeat steps 3 to 7 for every deployed workload region.
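The <contents-of-PEM-bundle> value is plain base64 of the bundle file, equivalent to cat custom-ca-certs.pem | base64 -w0. A minimal Python sketch with a placeholder certificate body:

```python
import base64

# Sketch: build the base64 string used for <contents-of-PEM-bundle>.
# The certificate body below is a hypothetical placeholder, not a real CA.
pem_bundle = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIIBplaceholderRegionOneCA\n"
    "-----END CERTIFICATE-----\n"
)

# Equivalent of `cat custom-ca-certs.pem | base64 -w0` (no line wrapping).
encoded = base64.b64encode(pem_bundle.encode()).decode()

# The value round-trips: decoding returns the original PEM text, which is
# what the Secret's `data` field requires.
assert base64.b64decode(encoded).decode() == pem_bundle
print(encoded[:24])
```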
Next steps
-
Edit the
OpenStackControlPlanecustom resource (CR) of eachworkloadregion and add thiscustom-ca-certssecret as the value of thespec.tls.caBundleSecretNameparameter. For more information, see Modify the control plane CR of eachworkloadregion.
10.6.3. Modify the control plane CR of each workload region
You must modify the OpenStackControlPlane custom resource (CR) for each workload region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment because the Dashboard (horizon) and Identity (keystone) services of the central region are shared with all the workload regions.
An important part of modifying the OpenStackControlPlane CR for each workload region involves creating a unique region name and unique Identity service users for each workload region.
Prerequisites
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
- You have the oc command line tool installed on your workstation.
- You have deployed the control plane of the central region.
- You have created the default administrator Identity service user for this workload region in the central region. For more information, see Create the default administrator user for each workload region.
- You have created the public and private endpoints of this workload region to specify the public Identity service endpoint of the central region. For more information, see Create the public and private endpoints of each workload region.
- You have created a unique RHOSO namespace for this workload region. For more information, see Create a unique namespace for each workload region.
- You have created a secret containing all the CA certificates for this workload region, including the CA certificate from the central region. For more information, see Configure each workload region to trust the central region.
Procedure
- Create a file on your workstation in this workload region named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: <workload-region-namespace>
    spec:
      secret: <workload-region-secret>

  - Replace <workload-region-namespace> with the name of the unique namespace for this workload region, in this example openstack-two.
  - Replace <workload-region-secret> with the name of the Secret CR for this workload region, in this example osp-secret.
- Perform steps 3 to 6 of Creating the control plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Edit the OpenStackControlPlane CR of this workload region and specify the name of the secret containing all the CA certificates for this workload region, including the CA certificate from the central region, in this example custom-ca-certs:

  Note: If this section does not exist, then you must add it.

    spec:
      ...
      tls:
        ...
        caBundleSecretName: custom-ca-certs

- Edit the OpenStackControlPlane CR of this workload region and disable the Dashboard (horizon) service:

    spec:
      ...
      horizon:
        ...
        enabled: false

- Edit the OpenStackControlPlane CR of this workload region and configure the Identity (keystone) service:

  Note: You might need to remove default service configuration, such as metadata, or if this Identity service is configured as a load balancer.

    spec:
      ...
      keystone:
        ...
        template:
          ...
          externalKeystoneAPI: true
          adminProject: <central-region-admin-project>
          adminUser: <workload-region-admin-name>
          region: <workload-region-name>
          override:
            ...
            service:
              ...
              internal:
                endpointURL: <central-region-public-keystone-url>
              ...
              public:
                endpointURL: <central-region-public-keystone-url>
  - Replace <central-region-admin-project> with the name of the admin project in the central region, which is admin by default.
  - Replace <workload-region-admin-name> with the name of the default administrator user of this workload region, for example admin-two.
  - Replace <workload-region-name> with the name of this workload region, for example regionTwo.
  - Replace <central-region-public-keystone-url> with the public Identity service endpoint of the central region.
- Edit the OpenStackControlPlane CR of this workload region to specify unique Identity service user names for all the OpenStack services that communicate with the Identity service.

  Note: If you do not specify unique Identity service user names, then changing the password for a service user disrupts this service in all the workload regions that use this same user, unless you schedule a maintenance window for all of these regions first.

  The following OpenStack services commonly communicate with the Identity service and require unique Identity service user names:

    spec:
      ...
      barbican:
        ...
        template:
          ...
          serviceUser: <workload-region-barbican-serviceUser>
      ...
      cinder:
        ...
        template:
          ...
          serviceUser: <workload-region-cinder-serviceUser>
      ...
      glance:
        ...
        template:
          ...
          serviceUser: <workload-region-glance-serviceUser>
      ...
      neutron:
        ...
        template:
          ...
          serviceUser: <workload-region-neutron-serviceUser>
      ...
      nova:
        ...
        template:
          ...
          serviceUser: <workload-region-nova-serviceUser>
      ...
      placement:
        ...
        template:
          ...
          serviceUser: <workload-region-placement-serviceUser>
      ...
      swift:
        ...
        template:
          ...
          swiftProxy:
            ...
            serviceUser: <workload-region-swift-serviceUser>
  - Replace <workload-region-barbican-serviceUser> with the unique Identity service user name for the Key Manager service (barbican) of this workload region, for example barbican-two.
  - Replace <workload-region-cinder-serviceUser> with the unique Identity service user name for the Block Storage service (cinder) of this workload region, for example cinder-two.
  - Replace <workload-region-glance-serviceUser> with the unique Identity service user name for the Image service (glance) of this workload region, for example glance-two.
  - Replace <workload-region-neutron-serviceUser> with the unique Identity service user name for the Networking service (neutron) of this workload region, for example neutron-two.
  - Replace <workload-region-nova-serviceUser> with the unique Identity service user name for the Compute service (nova) of this workload region, for example nova-two.
  - Replace <workload-region-placement-serviceUser> with the unique Identity service user name for the Placement service (placement) of this workload region, for example placement-two.
  - Replace <workload-region-swift-serviceUser> with the unique Identity service user name for the Object Storage service (swift) of this workload region, for example swift-two.
- If you use a back end that communicates with the Identity service (keystone), then you must specify the unique name of this workload region when configuring this back end.

  Note: Back ends that do not communicate with the Identity service, like Red Hat Ceph Storage, do not require any additional configuration.

  For example, the Object Storage service (swift) back end for the Image service (glance) communicates with the Identity service. Therefore, when you configure this back end you must specify the name of this workload region:

    spec:
      ...
      glance:
        ...
        template:
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:swift
            [glance_store]
            default_backend = default_backend
            [default_backend]
            swift_store_create_container_on_put = True
            swift_store_auth_version = 3
            swift_store_auth_address = {{ .KeystoneInternalURL }}
            swift_store_endpoint_type = internalURL
            swift_store_user = service:glance
            swift_store_key = {{ .ServicePassword }}
            swift_store_region = <workload-region-name>
  - Replace <workload-region-name> with the name of this workload region, for example regionTwo.
- Perform all the remaining steps, starting from step 7, of Creating the control plane in the Deploying Red Hat OpenStack Services on OpenShift guide, but replace the openstack namespace of these commands, specified as -n openstack, with the name of the unique namespace you created for the workload region, in this example openstack-two. In this example, specify -n openstack-two for these commands.
10.7. Configure the central region to trust a deployed workload region
After you deploy a workload region of a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must configure the deployed central region to trust this workload region.
Prerequisites
- You are logged on to a workstation that has access to your Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
- You have the oc command line tool installed on your workstation.
Procedure
- In the deployed workload region, set the default RHOSO namespace:

    $ oc project <workload-region-namespace>

  - Replace <workload-region-namespace> with the name of the unique namespace for this workload region, for example openstack-two.
- Obtain the CA certificate of this workload region and extract it into a file, for example regionTwo-ca.crt:

  Note: To decode the certificate before creating the output .crt file, add | base64 -d to this command.

    $ oc get secret rootca-public -o yaml | yq '.data."ca.crt"' > regionTwo-ca.crt
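The value stored in the secret is base64 encoded, which is why the note suggests appending | base64 -d. A minimal sketch of that decoding step, using an illustrative placeholder value rather than a real certificate:

```shell
# Sketch: decode a base64-encoded CA certificate as it is stored in a
# Kubernetes secret. The encoded value here is built from a placeholder
# string for illustration; in the procedure it comes from the yq query.
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64 -w0)

# Equivalent of appending `| base64 -d` to the oc/yq pipeline:
printf '%s' "$encoded" | base64 -d > regionTwo-ca.crt

cat regionTwo-ca.crt
```

If you skip the decode step, the .crt file contains the base64 string rather than a PEM certificate, and appending it to a trust bundle later will fail.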
- Copy the regionTwo-ca.crt file to the deployed central region.
- In the central region, set the default RHOSO namespace:

    $ oc project <central-region-namespace>

  - Replace <central-region-namespace> with the name of the unique namespace for the central region, for example openstack.
- Edit your OpenStackControlPlane CR of the central region:

    $ oc edit openstackcontrolplane <name>

  - Replace <name> with the name of your OpenStackControlPlane CR. You can use the following command to retrieve this name: oc get openstackcontrolplane.
- If your OpenStackControlPlane CR in the central region contains the spec.tls.caBundleSecretName parameter:

  - Obtain the name of the secret that contains the PEM-formatted bundle of all the other custom CA certificates, in chains of trust if applicable, that the central region trusts, for example custom-ca-certs:

      spec:
        ...
        tls:
          ...
          caBundleSecretName: custom-ca-certs

  - Edit the specified secret, in this example custom-ca-certs:

      $ oc edit secret custom-ca-certs

  - Append the contents of the CA certificate of the deployed workload region, for example regionTwo-ca.crt, to the PEM-formatted bundle containing all the other custom CA certificates that the central region trusts.
  - Save and exit the editor to automatically apply the changes to the secret, in this example custom-ca-certs.
- If your OpenStackControlPlane CR in the central region does not contain the spec.tls.caBundleSecretName parameter:

  - Create a PEM-formatted bundle, for example custom-ca-certs.pem, that includes the contents of this regionTwo-ca.crt file.
  - Create a manifest file for the secret in the central region that specifies the contents of the custom-ca-certs.pem bundle created in the previous step. In this example, the manifest file is called custom-ca-certs.yaml and the secret is called custom-ca-certs:

      apiVersion: v1
      data:
        custom-ca-certs.pem: <contents-of-PEM-bundle>
      kind: Secret
      metadata:
        name: custom-ca-certs
        namespace: <namespace>
      type: Opaque

    - Replace <namespace> with the namespace of the central region, in this example openstack.
    - Replace <contents-of-PEM-bundle> with the base64-encoded string of the contents of the PEM-formatted bundle you created, called custom-ca-certs.pem, that includes the CA certificate from regionTwo. You can get this base64-encoded string by using the following command: cat custom-ca-certs.pem | base64 -w0.
  - Create the secret in the central region from the manifest file. In this example, the manifest file is called custom-ca-certs.yaml:

      $ oc apply -f custom-ca-certs.yaml

- Edit your OpenStackControlPlane CR of the central region:

    $ oc edit openstackcontrolplane <name>
  - Replace <name> with the name of your OpenStackControlPlane CR. You can use the following command to retrieve this name: oc get openstackcontrolplane.
- Add the secret that you have created, in this example custom-ca-certs:

    spec:
      ...
      tls:
        ...
        caBundleSecretName: custom-ca-certs

- Save and close the editor to automatically apply this change.
- Wait for the deployment of the control plane of the central region to reach the Ready status:

    $ oc wait openstackcontrolplane <name> --for=condition=Ready --timeout=600s
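The oc wait command above blocks until the Ready condition is met or the timeout expires. For intuition, the polling it performs can be sketched as follows; check_ready is a hypothetical stand-in for querying the CR status (for example with oc get openstackcontrolplane -o jsonpath) and is stubbed with a counter here so the sketch is self-contained:

```shell
# Sketch of the polling performed by `oc wait --for=condition=Ready`.
# check_ready stands in for the real status query and reports Ready on
# the third poll, purely for illustration.
attempts=0
check_ready() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

polls_left=10   # stand-in for the --timeout=600s budget
until check_ready; do
  polls_left=$((polls_left - 1))
  if [ "$polls_left" -le 0 ]; then
    echo "timed out waiting for Ready" >&2
    exit 1
  fi
  sleep 1
done
echo "control plane Ready after $attempts polls"
```

If the timeout expires before the control plane is Ready, oc wait exits non-zero, so you can use it directly in scripts to gate subsequent steps.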
Next steps
- Extract the catalog entries for this workload region and make sure that the Dashboard (horizon) service in the central region can resolve the DNS names in the service catalog of this workload region and access them.
10.8. SKMO Dashboard region configuration
When deploying a Single Keystone Multiple OpenStacks (SKMO) multi-region Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must not configure the Dashboard (horizon) service of the central region to let a user select which region to log in to.
In a standard multi-region OpenStack deployment, each region is an isolated OpenStack deployment with its own Dashboard (horizon) and Identity (keystone) service. For this reason, you must configure the Dashboard (horizon) service to log into the Identity (keystone) service of each region. This configuration creates a dropdown list on the Login page for users to select the required isolated region, or more specifically, the Identity (keystone) service of that region.
In the SKMO deployment, the central region provides a centralized Identity (keystone) service that is used for logging into the entire multi-region deployment. Therefore, do not configure the dropdown list of regions on the Login page of the Dashboard: no matter which workload region a user selects, the central region is always used, which confuses your users.
The SKMO Dashboard automatically provides the Managing regions dropdown list in the UI to allow users to select the required workload region.