Installing
Installing OpenShift Service Mesh
Abstract
Chapter 1. Supported platforms and configurations
Before you can install Red Hat OpenShift Service Mesh 3.2.3, you must subscribe to OpenShift Container Platform and install OpenShift Container Platform in a supported configuration. If you do not have a subscription on your Red Hat account, contact your sales representative for more information.
1.1. Supported platforms for Service Mesh
The following platform versions support Service Mesh control plane version 3.2.3:
- Red Hat OpenShift Container Platform version 4.18 or later
- Red Hat OpenShift Dedicated version 4
- Azure Red Hat OpenShift (ARO) version 4
- Red Hat OpenShift Service on AWS (ROSA)
The Red Hat OpenShift Service Mesh Operator supports many versions of Istio.
If you are installing Red Hat OpenShift Service Mesh on a "Restricted network", follow the instructions for your chosen OpenShift Container Platform infrastructure.
For additional information about Red Hat OpenShift Service Mesh lifecycle and supported platforms, see the "Support Policy".
1.2. Supported configurations for Service Mesh
Red Hat OpenShift Service Mesh supports the following configurations:
- This release of Red Hat OpenShift Service Mesh is supported on OpenShift Container Platform x86_64, IBM Z®, IBM Power®, and Advanced RISC Machine (ARM).
- A single OpenShift Container Platform cluster has all Service Mesh components.
- Configurations that do not integrate external services such as virtual machines.
Red Hat OpenShift Service Mesh does not support the EnvoyFilter configuration except where explicitly documented.
1.3. Supported network configurations for Service Mesh
You can use the following OpenShift networking plugins for the Red Hat OpenShift Service Mesh:
- OpenShift SDN.
- OVN-Kubernetes. See "About the OVN-Kubernetes network plugin" for more information.
- Third-party CNI plugins that OpenShift Container Platform certifies and Service Mesh validates through conformance testing. See "Certified OpenShift CNI plugins" for more information.
1.4. Supported configurations for Kiali
Access the Kiali console through supported web browsers by using the mandatory OpenShift authentication strategy, which leverages cluster role-based access control (RBAC) to manage user permissions.
- The Kiali console is supported on Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple Safari browsers.
- Red Hat OpenShift Service Mesh (OSSM) supports only the openshift authentication strategy when you deploy Kiali. The openshift strategy controls access based on the user's role-based access control (RBAC) roles in OpenShift Container Platform.
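As a sketch of how this looks in practice, the authentication strategy is set in the spec.auth.strategy field of the Kiali custom resource (the resource name and namespace below are illustrative):

```yaml
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  auth:
    # openshift is the only strategy OSSM supports;
    # access is then governed by OpenShift RBAC roles.
    strategy: openshift
```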
1.5. Additional resources
- OpenShift Operator Life Cycles
- About Red Hat OpenShift Service Mesh installation
- Installing Red Hat OpenShift Service Mesh on AWS
- Installing Red Hat OpenShift Service Mesh on AWS with user-provisioned infrastructure
- Installing Red Hat OpenShift Service Mesh on bare metal
- Installing Red Hat OpenShift Service Mesh on vSphere
- Installing Red Hat OpenShift Service Mesh on IBM Z® and IBM® LinuxONE
- Installing Red Hat OpenShift Service Mesh on IBM Power®
- About the OVN-Kubernetes network plugin
- Certified OpenShift CNI plugins
- Restricted network
- Support Policy
Chapter 2. Installing OpenShift Service Mesh
Installing OpenShift Service Mesh consists of three main tasks: installing the Red Hat OpenShift Service Mesh Operator, deploying Istio, and customizing the Istio configuration. Then, you can also install the sample bookinfo application to send traffic through the mesh and explore mesh functionality.
2.1. About deploying Istio using the Red Hat OpenShift Service Mesh Operator
To deploy Istio using the Red Hat OpenShift Service Mesh Operator, you must create an Istio resource. Then, the Operator creates an IstioRevision resource, which represents one revision of the Istio control plane.
Based on the IstioRevision resource, the Operator deploys the Istio control plane, which includes the istiod Deployment resource and other resources.
The Red Hat OpenShift Service Mesh Operator might create additional instances of the IstioRevision resource, depending on the update strategy defined in the Istio resource.
Before installing OpenShift Service Mesh 3, ensure that you do not run OpenShift Service Mesh 2 and OpenShift Service Mesh 3 in the same cluster; running both causes conflicts unless they are configured correctly. To migrate from OpenShift Service Mesh 2, see "Migrating from OpenShift Service Mesh 2.6".
2.1.1. About Istio control plane update strategies
The update strategy affects how the Operator performs the update. The spec.updateStrategy field in the Istio resource configuration determines how the OpenShift Service Mesh Operator updates the Istio control plane.
When the Operator detects a change in the spec.version field or identifies a new minor release with a configured vX.Y-latest alias, it initiates an upgrade procedure. For each mesh, you select one of two strategies:
- InPlace
- RevisionBased
InPlace is the default strategy for updating OpenShift Service Mesh. Both update strategies apply to sidecar and ambient modes.
If you use ambient mode, you must update the Istio Container Network Interface (CNI) and ZTunnel components in addition to the standard control plane update procedures.
Red Hat recommends the InPlace update strategy for ambient mode. Using RevisionBased updates with ambient mode has limitations and requires manual intervention.
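For illustration, you select the update strategy in the Istio resource. The following sketch (the version value is an assumption for this example) pins a control plane version and chooses the default InPlace strategy:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  # The Operator starts an upgrade when this field changes, or when
  # a configured vX.Y-latest alias resolves to a new release.
  version: v1.24.6
  updateStrategy:
    # InPlace (default) or RevisionBased
    type: InPlace
```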
2.2. Installing the Service Mesh Operator
For clusters without OpenShift Service Mesh instances, install the Service Mesh Operator. OpenShift Service Mesh operates cluster-wide and needs a scope configuration to prevent conflicts between Istio control planes. For clusters with OpenShift Service Mesh 3 or later, see "Deploying multiple service meshes on a single cluster".
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.18 or later.
- You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
Procedure
- In the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
- Search for the Red Hat OpenShift Service Mesh 3 Operator.
- Locate the Service Mesh Operator, and click to select it.
- When the prompt that discusses the community operator opens, click Continue.
- Click Install.
On the Install Operator page, perform the following steps:
- Select an Update Channel.
  - Choose the stable channel to install the latest stable version of the Red Hat OpenShift Service Mesh 3 Operator. It is the default channel for installing the Operator.
  - To install a specific version of the Red Hat OpenShift Service Mesh 3 Operator, choose the corresponding stable-<version> channel. For example, to install the Red Hat OpenShift Service Mesh Operator version 3.0.x, use the stable-3.0 channel.
- Select All namespaces on the cluster (default) as the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be available to all namespaces in the cluster.
- Select Automatic as the Approval Strategy. This ensures that the Operator Lifecycle Manager (OLM) handles future upgrades to the Operator automatically. If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
- Click Install to install the Operator.
Verification
- Click Operators → Installed Operators to verify that the Service Mesh Operator is installed. Succeeded should show in the Status column.
2.2.1. About Service Mesh custom resource definitions
Installing the Red Hat OpenShift Service Mesh Operator also installs custom resource definitions (CRD) that administrators can use to configure Istio for Service Mesh installations.
The Operator Lifecycle Manager (OLM) installs two categories of CRDs:
- Sail Operator CRDs
- Istio CRDs
Sail Operator CRDs define custom resources for installing and maintaining the Istio components required to operate a service mesh. These custom resources belong to the sailoperator.io API group and include the Istio, IstioRevision, IstioCNI, and ZTunnel resource kinds.
You can use Istio CRDs to configure the mesh and manage your services. These CRDs define custom resources in several istio.io API groups, such as networking.istio.io and security.istio.io. The CRDs also include various resource kinds, such as AuthorizationPolicy, DestinationRule, and VirtualService, that administrators use to configure a service mesh.
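As a hedged illustration of these custom resources (the resource name, namespace, and service account are hypothetical), the following AuthorizationPolicy from the security.istio.io API group allows only the productpage service account to call the reviews workload:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: reviews-allow-productpage
  namespace: bookinfo
spec:
  # Apply the policy to pods labeled app=reviews.
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - from:
    - source:
        # Only requests authenticated as the productpage service account pass.
        principals: ["cluster.local/ns/bookinfo/sa/bookinfo-productpage"]
```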
2.3. About Istio deployment
To deploy Istio, you must create two resources: Istio and IstioCNI. The Istio resource deploys and configures the Istio control plane. The IstioCNI resource deploys and configures the Istio Container Network Interface (CNI) plugin.
You should create these resources in separate projects; therefore, you must create two projects as part of the Istio deployment process.
You can use the OpenShift web console or the OpenShift CLI (oc) to create a project or a resource in your cluster.
In the OpenShift Container Platform, a project functions as a Kubernetes namespace with additional annotations that define the allowed range of user IDs. Typically, the OpenShift Container Platform web console uses the term project, and the CLI uses the term namespace, but the terms are essentially synonymous.
2.3.1. Creating the Istio project using the web console
The Service Mesh Operator deploys the Istio control plane to a project that you create. In this example, istio-system is the name of the project.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator.
- You have logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, enter a name for the project in the Name field. For example, istio-system. The other fields offer supplementary information to the Istio resource definition and are optional.
- Click Create. The Service Mesh Operator deploys Istio to the project you specified.
2.3.2. Creating the Istio resource using the web console
Create the Istio resource that will contain the YAML configuration file for your Istio deployment. The Red Hat OpenShift Service Mesh Operator uses information in the YAML file to create an instance of the Istio control plane.
Prerequisites
- You have installed the Service Mesh Operator.
- You have logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-system in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click Istio.
- Click Create Istio.
- Select the istio-system project from the Namespace drop-down menu.
- Click Create. This action deploys the Istio control plane.
When State: Healthy displays in the Status column, Istio is successfully deployed.
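For reference, a minimal Istio resource similar to what the console pre-populates might look like the following sketch (the version value is illustrative):

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  # Namespace where the control plane (istiod) is deployed.
  namespace: istio-system
  version: v1.24.6
```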
2.3.3. Creating the IstioCNI project using the web console
The Service Mesh Operator deploys the Istio CNI plugin to a project that you create. In this example, istio-cni is the name of the project.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator.
- You have logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, enter a name for the project in the Name field. For example, istio-cni. The other fields offer supplementary information and are optional.
- Click Create.
2.3.4. Creating the IstioCNI resource using the web console
Create an Istio Container Network Interface (CNI) resource, which has the configuration file for the Istio CNI plugin. The Service Mesh Operator uses the configuration specified by this resource to deploy the CNI pod.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator.
- You have logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-cni in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click IstioCNI.
- Click Create IstioCNI.
- Ensure that the name is default.
- Click Create. This action deploys the Istio CNI plugin.
When State: Healthy displays in the Status column, the Istio CNI plugin is successfully deployed.
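For reference, a minimal IstioCNI resource might look like the following sketch (the version value is illustrative):

```yaml
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  # The IstioCNI resource must be named default.
  name: default
spec:
  # Namespace where the CNI pods are deployed.
  namespace: istio-cni
  version: v1.24.6
```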
2.4. Scoping the Service Mesh with discovery selectors
Service Mesh includes workloads that meet the following criteria:
- The control plane has discovered the workload.
- The workload has an Envoy proxy sidecar injected.
By default, the control plane discovers workloads in all namespaces across the cluster, with the following results:
- Each proxy instance receives configuration for all namespaces, including workloads not enrolled in the mesh.
- Any workload with the appropriate pod or namespace injection label receives a proxy sidecar.
In shared clusters, you might want to limit the scope of Service Mesh to only certain namespaces. This approach is especially useful if many service meshes run in the same cluster.
2.4.1. About discovery selectors
With discovery selectors, the mesh administrator can control which namespaces the control plane can access. By using a Kubernetes label selector, the administrator sets the criteria for the namespaces visible to the control plane, excluding any namespaces that do not match the specified criteria.
istiod always opens a watch to OpenShift for all namespaces. However, discovery selectors discard objects from unselected namespaces early in processing, which minimizes cost.
The discoverySelectors field accepts an array of Kubernetes selectors, which apply to labels on namespaces. You can configure each selector for different use cases:
- Custom label names and values. For example, configure all namespaces with the label istio-discovery=enabled.
- A list of namespace labels by using set-based selectors with OR logic. For example, configure namespaces with istio-discovery=enabled OR region=us-east1.
- Inclusion and exclusion of namespaces. For example, configure namespaces with istio-discovery=enabled and the label app=helloworld.
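A sketch of how these use cases map onto the discoverySelectors array (label names and values are examples): entries in the array are ORed together, while conditions inside a single entry are ANDed:

```yaml
meshConfig:
  discoverySelectors:
  # Entry 1: namespaces labeled istio-discovery=enabled ...
  - matchLabels:
      istio-discovery: enabled
  # ... OR entry 2: namespaces whose region label is us-east1.
  - matchExpressions:
    - key: region
      operator: In
      values:
      - us-east1
```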
Discovery selectors are not a security boundary. istiod continues to have access to all namespaces even when you have configured the discoverySelector field.
2.4.2. Scoping a Service Mesh by using discovery selectors
You can restrict the namespaces that Service Mesh manages by configuring discoverySelectors in the Istio resource to include only specific labeled namespaces.
Prerequisites
- You have the OpenShift Service Mesh operator installed.
- You have created an Istio CNI resource.
Procedure
- Add a label to the namespace containing the Istio control plane, for example, the istio-system namespace, by running the following command:
  $ oc label namespace istio-system istio-discovery=enabled
- Change the Istio control plane resource to include a discoverySelectors section with the same label, similar to the following example:
  kind: Istio
  apiVersion: sailoperator.io/v1
  metadata:
    name: default
  spec:
    namespace: istio-system
    values:
      meshConfig:
        discoverySelectors:
        - matchLabels:
            istio-discovery: enabled
- Apply the Istio custom resource (CR) by running the following command:
  $ oc apply -f istio.yaml
- Ensure that all namespaces that will contain workloads that are to be part of the Service Mesh have both the discoverySelector label and, if needed, the appropriate Istio injection label.
  Note: Discovery selectors help restrict the scope of a single Service Mesh and are essential for limiting the control plane scope when you deploy many Istio control planes in a single cluster.
2.5. About the Bookinfo application
Installing the bookinfo example application consists of two main tasks: deploying the application and creating a gateway so the application is accessible outside the cluster.
You can use the bookinfo application to explore service mesh features. Using the bookinfo application, you can easily confirm that requests from a web browser pass through the mesh and reach the application.
The bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, lists book details (ISBN, number of pages, and other information), and book reviews.
The mesh exposes the bookinfo application, and the mesh configuration defines how the microservices serve requests. The review information comes from one of three services: reviews-v1, reviews-v2, or reviews-v3. If you deploy the bookinfo application without defining the reviews virtual service, then the mesh uses a round-robin rule to route requests to a service.
By deploying the reviews virtual service, you can specify a different behavior. For example, you can specify that if a user logs into the bookinfo application, then the mesh routes requests to the reviews-v2 service, and the application displays reviews with black stars. If a user does not log in to the bookinfo application, then the mesh routes requests to the reviews-v3 service, and the application displays reviews with red-colored stars.
For more information, see "Bookinfo Application".
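The login-based routing described above can be sketched as a reviews VirtualService in the style of the community bookinfo samples (the end-user header and the example user name follow the upstream sample; treat the exact values as illustrative):

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews
  http:
  # Logged-in users (bookinfo forwards an end-user header) get v2 (black stars).
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  # All other requests get v3 (red stars).
  - route:
    - destination:
        host: reviews
        subset: v3
```

The subsets v2 and v3 are assumed to be defined in a corresponding DestinationRule.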
2.5.1. Deploying the Bookinfo application
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.18 or later.
- You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
Procedure
- In the OpenShift Container Platform web console, navigate to the Home → Projects page.
- Click Create Project.
- Enter bookinfo in the Project name field. The Display name and Description fields offer supplementary information and are not required.
- Click Create.
- Apply the Istio discovery selector and injection label to the bookinfo namespace by entering the following command:
  $ oc label namespace bookinfo istio-discovery=enabled istio-injection=enabled
  Note: In this example, the name of the Istio resource is default. If the Istio resource name is different, you must set the istio.io/rev label to the name of the Istio resource instead of adding the istio-injection=enabled label.
- Apply the bookinfo YAML file to deploy the bookinfo application by entering the following command:
  $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
Verification
- Verify that the bookinfo service is available by running the following command:
  $ oc get services -n bookinfo
  You should see output similar to the following example:
  NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
  details       ClusterIP   172.30.137.21   <none>        9080/TCP   44s
  productpage   ClusterIP   172.30.2.246    <none>        9080/TCP   43s
  ratings       ClusterIP   172.30.33.85    <none>        9080/TCP   44s
  reviews       ClusterIP   172.30.175.88   <none>        9080/TCP   44s
- Verify that the bookinfo pods are available by running the following command:
  $ oc get pods -n bookinfo
  You should see output similar to the following example:
  NAME                             READY   STATUS    RESTARTS   AGE
  details-v1-698d88b-km2jg         2/2     Running   0          66s
  productpage-v1-675fc69cf-cvxv9   2/2     Running   0          65s
  ratings-v1-6484c4d9bb-tpx7d      2/2     Running   0          65s
  reviews-v1-5b5d6494f4-wsrwp      2/2     Running   0          65s
  reviews-v2-5b667bcbf8-4lsfd      2/2     Running   0          65s
  reviews-v3-5b9bd44f4-44hr6       2/2     Running   0          65s
  When the Ready column displays 2/2, the proxy sidecar was successfully injected. Confirm that Running displays in the Status column for each pod.
- Verify that the bookinfo application is running by sending a request to the bookinfo page. Run the following command:
  $ oc exec "$(oc get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
2.5.2. About accessing the Bookinfo application using a gateway
The Red Hat OpenShift Service Mesh Operator does not deploy gateways. Gateways are not part of the control plane. As a security best practice, you can deploy Ingress and Egress gateways in a different namespace from the namespace that contains the control plane.
You can deploy gateways by using either the Gateway API or the gateway injection method.
2.5.3. Accessing the Bookinfo application by using Istio gateway injection
Gateway injection uses the same mechanisms as Istio sidecar injection to create a gateway from a Deployment resource coupled with a Service resource. The Service resource is accessible from outside an OpenShift Container Platform cluster.
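A minimal sketch of this pattern, assuming the upstream gateway-injection convention (the inject.istio.io/templates: gateway annotation and the image: auto placeholder); the actual sample file used in the procedure below may differ:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        # Inject a gateway proxy instead of a regular sidecar.
        inject.istio.io/templates: gateway
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: istio-proxy
        # The injector replaces this placeholder with the proxy image.
        image: auto
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080
```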
Prerequisites
-
You have logged in to the OpenShift Container Platform web console as
cluster-admin. - You have installed the Red Hat OpenShift Service Mesh Operator.
-
You have deployed the
Istioresource.
Procedure
- Create the istio-ingressgateway deployment and service by running the following command:
  $ oc apply -n bookinfo -f ingress-gateway.yaml
  Note: This example uses a sample ingress-gateway.yaml file that is available in the Istio community repository.
- Configure the bookinfo application to use the new gateway. Apply the gateway configuration by running the following command:
  $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo
  Note: To configure gateway injection with the bookinfo application, this example provides a sample gateway configuration file that you must apply in the application's namespace.
- Use a route to expose the gateway external to the cluster by running the following command:
  $ oc expose service istio-ingressgateway -n bookinfo
- Change the YAML file to automatically scale the pod when ingress traffic increases. You can see the following example configuration for reference:
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    labels:
      istio: ingressgateway
      release: istio
    name: ingressgatewayhpa
    namespace: bookinfo
  spec:
    maxReplicas: 5
    metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 80
          type: Utilization
      type: Resource
    minReplicas: 2
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: istio-ingressgateway
  This example sets the maximum replicas to 5 and the minimum replicas to 2, and adds a replica when average CPU usage reaches 80%.
- Specify the minimum number of pods that must be running on the node. You can see the following example configuration for reference:
  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    labels:
      istio: ingressgateway
      release: istio
    name: ingressgatewaypdb
    namespace: bookinfo
  spec:
    minAvailable: 1
    selector:
      matchLabels:
        istio: ingressgateway
  spec.minAvailable ensures that one replica keeps running if a pod is restarted on a new node.
- Obtain the gateway hostname and the URL for the product page by running the following command:
  $ HOST=$(oc get route istio-ingressgateway -n bookinfo -o jsonpath='{.spec.host}')
- Verify that the productpage is accessible from a web browser by running the following command:
  $ echo "productpage URL: http://$HOST/productpage"
2.5.4. Accessing the Bookinfo application by using Gateway API
Manage gateway resources in Red Hat OpenShift Service Mesh by using the Kubernetes Gateway API, which transitioned from manual installation to automated platform management in recent OpenShift Container Platform releases.
In OpenShift Container Platform 4.15 and later, Red Hat OpenShift Service Mesh implements the Gateway API custom resource definitions (CRDs). However, in OpenShift Container Platform 4.18 and earlier, the CRDs are not installed by default. Therefore, in OpenShift Container Platform 4.15 through 4.18, you must manually install the CRDs. Starting with OpenShift Container Platform 4.19, these CRDs are automatically installed and managed, and you can no longer create, update, or delete them.
For details about enabling Gateway API for Ingress in OpenShift Container Platform 4.19 and later, see "Configuring ingress cluster traffic" in the OpenShift Container Platform documentation.
Red Hat provides support for using the Kubernetes Gateway API with Red Hat OpenShift Service Mesh. Red Hat does not offer support for the Kubernetes Gateway API custom resource definitions (CRDs). This procedure uses community Gateway API CRDs for demonstration purposes only.
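For orientation, the sample bookinfo-gateway.yaml file used in the procedure below contains resources along these lines (a hedged sketch; field values may differ from the actual sample):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: bookinfo
spec:
  # The Istio control plane implements this gateway class.
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  parentRefs:
  - name: bookinfo-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /productpage
    backendRefs:
    - name: productpage
      port: 9080
```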
Prerequisites
-
You have logged in to the OpenShift Container Platform web console as
cluster-admin. - You have installed the Red Hat OpenShift Service Mesh Operator.
-
You have deployed the
Istioresource.
Procedure
- Enable the Gateway API CRDs for OpenShift Container Platform 4.18 and earlier by running the following command:
  $ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || { oc kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0" | oc apply -f -; }
- Create and configure a gateway by using the Gateway and HTTPRoute resources by running the following command:
  $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo
  Note: To configure a gateway with the bookinfo application by using the Gateway API, this example provides a sample gateway configuration file that you must apply to the application's namespace.
- Ensure that the Gateway API service is ready and has an address allocated by running the following command:
  $ oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo
- Retrieve the host by running the following command:
  $ export INGRESS_HOST=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}')
- Retrieve the port by running the following command:
  $ export INGRESS_PORT=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
- Retrieve the gateway URL by running the following command:
  $ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
- Obtain the URL of the product page by running the following command:
  $ echo "http://${GATEWAY_URL}/productpage"
Verification
- Verify that the productpage is accessible from a web browser.
2.6. Customizing Istio configuration
Customize the Istio control plane by using the values field in the Istio resource to apply advanced Helm configuration settings optimized for OpenShift environments.
When you create this resource by using the OpenShift Container Platform web console, it is pre-populated with configuration settings to enable Istio to run on OpenShift.
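As an illustrative customization (accessLogFile is a standard mesh-wide option; treat this as an example rather than a recommended default), you might enable Envoy access logging through the values field:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      # Write Envoy access logs to each proxy's stdout.
      accessLogFile: /dev/stdout
```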
Procedure
- Click Operators → Installed Operators.
- Click Istio in the Provided APIs column.
- Click the Istio instance, named default, in the Name column.
- Click YAML to view the Istio configuration and make modifications.
For a list of available configurations for the values field, refer to "Istio's artifacthub chart documentation".
2.7. About Istio High Availability
Running the Istio control plane in High Availability (HA) mode prevents single points of failure, and ensures continuous mesh operation even if an istiod pod fails.
By using HA, if one istiod pod becomes unavailable, another one continues to manage and configure the Istio data plane, preventing service outages or disruptions. HA provides scalability by distributing the control plane workload, enables graceful upgrades, supports disaster recovery operations, and protects against zone-wide mesh outages.
There are two ways for a system administrator to configure HA for the Istio deployment:
- Defining a static replica count: This approach involves setting a fixed number of istiod pods, providing a consistent level of redundancy.
- Using autoscaling: This approach dynamically adjusts the number of istiod pods based on resource usage or custom metrics, providing more efficient resource consumption for fluctuating workloads.
2.7.1. Configuring Istio HA by using autoscaling
Configure the Istio control plane in High Availability (HA) mode to prevent a single point of failure, and ensure continuous mesh operation even if one of the istiod pods fails.
Autoscaling defines the minimum and maximum number of Istio control plane pods that can operate. OpenShift Container Platform uses these values to scale the number of control planes in operation based on resource usage, such as CPU or memory, to efficiently respond to the varying number of workloads and overall traffic patterns within the mesh.
Prerequisites
- You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have installed the Red Hat OpenShift Service Mesh Operator.
- You have deployed the Istio resource.
Procedure
- In the OpenShift Container Platform web console, click Installed Operators.
- Click Red Hat OpenShift Service Mesh 3 Operator.
- Click Istio.
- Click the name of the Istio installation. For example, default.
- Click YAML.
- Change the Istio custom resource (CR) similar to the following example:
  apiVersion: sailoperator.io/v1
  kind: Istio
  metadata:
    name: default
  spec:
    namespace: istio-system
    values:
      pilot:
        autoscaleMin: 2
        autoscaleMax: 5
        cpu:
          targetAverageUtilization: 80
        memory:
          targetAverageUtilization: 80
  - spec.values.pilot.autoscaleMin specifies the minimum number of Istio control plane replicas that always run.
  - spec.values.pilot.autoscaleMax specifies the maximum number of Istio control plane replicas, allowing for scaling based on load. To support HA, there must be at least two replicas.
  - spec.values.pilot.cpu.targetAverageUtilization sets the target CPU usage for autoscaling to 80%. If the average CPU usage exceeds this threshold, the Horizontal Pod Autoscaler (HPA) automatically increases the number of replicas.
  - spec.values.pilot.memory.targetAverageUtilization sets the target memory usage for autoscaling to 80%. If the average memory usage exceeds this threshold, the HPA automatically increases the number of replicas.
Verification
- Verify the status of the Istio control plane pods by running the following command:
  $ oc get pods -n istio-system -l app=istiod
  You should see output similar to the following example:
  NAME                      READY   STATUS    RESTARTS   AGE
  istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
  istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s
  Two istiod pods are running, which is the minimum requirement for an HA Istio control plane and indicates that a basic HA setup is in place.
2.7.1.1. API settings for Service Mesh HA autoscaling mode
Use the following Istio custom resource definition (CRD) parameters when you configure a service mesh for High Availability (HA) by using autoscaling.
Table 2.1. HA API parameters
| Parameter | Description |
|---|---|
| autoscaleMin | Defines the minimum number of istiod pods. OpenShift uses this parameter only if you enable the Horizontal Pod Autoscaler (HPA) for the Istio deployment. This is the default behavior. |
| autoscaleMax | Defines the maximum number of istiod pods. For OpenShift to automatically scale the number of istiod pods, you must also configure metrics for autoscaling to work properly. If you do not configure any metrics, the autoscaler cannot scale the deployment up or down. OpenShift uses this parameter only if you enable the Horizontal Pod Autoscaler (HPA) for the Istio deployment. This is the default behavior. |
| cpu.targetAverageUtilization | Defines the target CPU usage for the istiod deployment. |
| memory.targetAverageUtilization | Defines the target memory usage for the istiod deployment. |
| autoscaleBehavior | You can use the autoscaleBehavior field to configure scaling behavior for the HPA. For more information, see "Configurable Scaling Behavior". |
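A hedged sketch of the autoscaleBehavior field, which is passed through to the HPA behavior stanza (the values shown are illustrative, not recommendations):

```yaml
spec:
  values:
    pilot:
      autoscaleBehavior:
        scaleDown:
          # Wait 5 minutes before scaling down, then remove
          # at most one istiod pod per minute.
          stabilizationWindowSeconds: 300
          policies:
          - type: Pods
            value: 1
            periodSeconds: 60
```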
2.7.2. Configuring Istio HA by using replica count
Configure the Istio control plane for high availability (HA) by setting a static replica count to ensure continuous mesh operation and redundancy across multiple istiod pods.
The replica count defines a fixed number of Istio control plane pods that can operate. Use replica count for mesh environments where the control plane workload is relatively stable or predictable, or when you prefer to manually scale the istiod pod.
Prerequisites
- You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have installed the Red Hat OpenShift Service Mesh Operator.
- You have deployed the Istio resource.
Procedure
Obtain the name of the Istio resource by running the following command:
$ oc get istio -n istio-system
You should see output similar to the following example:
NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
default   1           1       0        default           Healthy   v1.24.6   24m
The name of the Istio resource is default.
Update the Istio custom resource (CR) by adding the autoscaleEnabled and replicaCount parameters by running the following command:
$ oc patch istio default -n istio-system --type merge -p '
spec:
  values:
    pilot:
      autoscaleEnabled: false
      replicaCount: 2
'
- spec.values.pilot.autoscaleEnabled disables autoscaling and ensures that the number of replicas remains fixed.
- spec.values.pilot.replicaCount specifies the number of Istio control plane replicas. To support HA, there must be at least two replicas.
Verification
Verify the status of the Istio control pods by running the following command:
$ oc get pods -n istio-system -l app=istiod
You should see output similar to the following example:
NAME                      READY   STATUS    RESTARTS   AGE
istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s
Two istiod pods are running, which is the minimum requirement for an HA Istio control plane and indicates that a basic HA setup is in place.
2.8. Additional resources
- Bookinfo Application (Istio documentation)
- Deploying multiple service meshes on a single cluster
- Configuring ingress cluster traffic
- Horizontal Pod Autoscaling (Kubernetes documentation)
- Label selectors (Kubernetes documentation)
- Resources that support set-based requirements (Kubernetes documentation)
- Istio’s artifacthub chart documentation
- Configurable Scaling Behavior (Kubernetes documentation)
- Migrating from OpenShift Service Mesh 2.6
Chapter 3. Sidecar injection
Enable security, observability, and traffic management by deploying sidecar proxies to intercept network traffic within each application pod in the mesh.
3.1. About sidecar injection
Automate proxy deployment in the mesh by using namespace or pod-level labels to trigger sidecar injection and associate workloads with a specific control plane.
When you apply a valid injection label to the pod template defined in a deployment, any new pods created by that deployment automatically receive a sidecar. Similarly, applying a pod injection label at the namespace level ensures any new pods in that namespace include a sidecar.
Injection happens at pod creation through an admission controller, so changes appear on individual pods rather than the deployment resources. To confirm sidecar injection, check the pod details directly using oc describe, where you can see the injected Istio proxy container.
3.2. Identifying the revision name
Manage sidecar injection by applying revision-specific labels to workloads, which allows the Red Hat OpenShift Service Mesh Operator to automate control plane association through IstioRevision resources.
The naming of an IstioRevision depends on the spec.updateStrategy.type setting in the Istio resource. If set to InPlace, the revision shares the Istio resource name. If set to RevisionBased, the revision name follows the format <Istio resource name>-v<version>. Typically, each Istio resource corresponds to a single IstioRevision. However, during a revision-based upgrade, many IstioRevision resources might exist, each representing a distinct control plane instance.
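For example, a RevisionBased Istio resource named my-mesh running Istio version 1.23.0 produces an IstioRevision named my-mesh-v1-23-0. A minimal sketch of such a resource follows; the version value is assumed for illustration:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: my-mesh
spec:
  namespace: istio-system
  version: v1.23.0        # assumed version for illustration
  updateStrategy:
    type: RevisionBased   # revision name becomes my-mesh-v1-23-0
```

With type: InPlace instead, the revision would simply be named my-mesh.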
To see available revision names, use the following command:
$ oc get istiorevisions
You should see output similar to the following example:
NAME              READY   STATUS    IN USE   VERSION   AGE
my-mesh-v1-23-0   True    Healthy   False    v1.23.0   114s
3.2.1. Enabling sidecar injection with default revision
When the service mesh’s IstioRevision name is default, you can use the following labels on a namespace or a pod to enable sidecar injection:
| Resource | Label | Enabled value | Disabled value |
|---|---|---|---|
| Namespace | istio-injection | enabled | disabled |
| Pod | sidecar.istio.io/inject | "true" | "false" |
You can also enable injection by setting the istio.io/rev: default label in the namespace or pod.
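For example, a Namespace manifest that enables injection through the revision label might look like the following sketch; the namespace name is an assumption for illustration:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo            # assumed namespace name
  labels:
    istio.io/rev: default   # equivalent to istio-injection: enabled for the default revision
```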
3.2.2. Enabling sidecar injection with other revisions
When the IstioRevision name is not default, use the specific IstioRevision name with the istio.io/rev label to map the pod to the required control plane and enable sidecar injection. To enable injection, set the istio.io/rev: <revision name> label in either the namespace or the pod; adding it to both is not required.
For example, with the revision shown earlier, the following labels would enable sidecar injection:
| Resource | Enabled label | Disabled label |
|---|---|---|
| Namespace | istio.io/rev=my-mesh-v1-23-0 | istio-injection=disabled |
| Pod | istio.io/rev=my-mesh-v1-23-0 | sidecar.istio.io/inject="false" |
If you apply both labels, the istio-injection label overrides the revision label and assigns the namespace to the default revision.
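To make the precedence rule concrete, consider a sketch of a namespace that carries both labels; this combination is shown only to illustrate the documented behavior, and the namespace name is an assumption:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo                       # assumed namespace name
  labels:
    istio-injection: enabled           # takes precedence
    istio.io/rev: my-mesh-v1-23-0      # ignored when istio-injection is present
```

Because istio-injection overrides the revision label, this namespace is assigned to the default revision, not my-mesh-v1-23-0.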
3.3. Enabling sidecar injection
To show different approaches for configuring sidecar injection, the following procedures use the Bookinfo application.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
- Optional: You have deployed the workloads that you want to include in the mesh. In the following examples, you deployed the Bookinfo application to the bookinfo namespace, but did not configure sidecar injection (step 5 in the "Deploying the Bookinfo application" procedure). For more information, see "Deploying the Bookinfo application".
3.3.1. Enabling sidecar injection with namespace labels
In this example, the control plane injects a sidecar proxy into all workloads, making this the best approach when you want to include most workloads in the mesh.
Procedure
Verify the revision name of the Istio control plane using the following command:
$ oc get istiorevisions
You should see output similar to the following example:
NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
default   Local   True    Healthy   False    v1.23.0   4m57s
Since the revision name is default, you can use the default injection labels without referencing the exact revision name.
Verify that workloads already running in the required namespace show 1/1 containers as READY by using the following command. This confirms that the pods are running without sidecars.
$ oc get pods -n bookinfo
You should see output similar to the following example:
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
To apply the injection label to the bookinfo namespace, run the following command at the CLI:
$ oc label namespace bookinfo istio-injection=enabled
namespace/bookinfo labeled
To ensure the control plane applies sidecar injection, redeploy the workloads in the bookinfo namespace. Use the following command to perform a rolling update of all workloads:
$ oc -n bookinfo rollout restart deployments
Verification
Verify the rollout by checking that the new pods display 2/2 containers as READY, confirming successful sidecar injection, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-7745f84ff-bpf8f        2/2     Running   0          55s
productpage-v1-54f48db985-gd5q9   2/2     Running   0          55s
ratings-v1-5d645c985f-xsw7p       2/2     Running   0          55s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          55s
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          55s
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          55s
3.3.2. Excluding a workload from the mesh
You can exclude specific workloads from sidecar injection even if you enabled namespace-wide injection.
This example is for demonstration purposes only. The bookinfo application requires all workloads to be part of the mesh for proper functionality.
Procedure
- Open the application’s Deployment resource in an editor. In this case, exclude the ratings-v1 service.
- Change the spec.template.metadata.labels section of your Deployment resource to include the label sidecar.istio.io/inject: 'false' to disable sidecar injection.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: 'false'

Note
Adding the label to the top-level labels section of the Deployment does not affect sidecar injection.
Updating the deployment triggers a rollout, creating a new ReplicaSet with updated pods.
Verification
Verify that the updated pods do not contain a sidecar container and show 1/1 containers in the READY column by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-6bc7b69776-7f6wz       2/2     Running   0          29m
productpage-v1-54f48db985-gd5q9   2/2     Running   0          29m
ratings-v1-5d645c985f-xsw7p       1/1     Running   0          7s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          29m
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          29m
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          29m
3.3.3. Enabling sidecar injection with pod labels
You can include individual workloads for sidecar injection instead of applying it to all workloads within a namespace, making it ideal for scenarios where only a few workloads need to be part of a service mesh. This example also demonstrates the use of a revision label for sidecar injection, where the Istio resource is created with the name my-mesh. A unique Istio resource name is required when multiple Istio control planes are present in the same cluster or during a revision-based control plane upgrade.
Procedure
Verify the revision name of the Istio control plane by running the following command:
$ oc get istiorevisions
You should see output similar to the following example:
NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
my-mesh   Local   True    Healthy   False    v1.23.0   47s
Since the revision name is my-mesh, use the revision label istio.io/rev=my-mesh to enable sidecar injection.
Verify that workloads already running show 1/1 containers as READY, indicating that the pods are running without sidecars, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
- Open the application’s Deployment resource in an editor. In this case, update the ratings-v1 service.
- Update the spec.template.metadata.labels section of your Deployment to include the appropriate pod injection or revision label. In this case, istio.io/rev: my-mesh:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        istio.io/rev: my-mesh

Note
Adding the label to the top-level labels section of the Deployment resource does not impact sidecar injection.
Updating the deployment triggers a rollout, creating a new ReplicaSet with the updated pods.
Verification
Verify that only the ratings-v1 pod now shows 2/2 containers as READY, indicating that the sidecar has been successfully injected, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-559cd49f6c-b89hw       1/1     Running   0          42m
productpage-v1-5f48cdcb85-8ppz5   1/1     Running   0          42m
ratings-v1-848bf79888-krdch       2/2     Running   0          9s
reviews-v1-6b7444ffbd-7m5wp       1/1     Running   0          42m
reviews-v2-67876d7b7-9nmw5        1/1     Running   0          42m
reviews-v3-84b55b667c-x5t8s       1/1     Running   0          42m
- Repeat for other workloads that you want to include in the mesh.
3.4. Enabling sidecar injection with namespace labels and an IstioRevisionTag resource
To use the istio-injection=enabled label when your revision name is not default, you must create an IstioRevisionTag resource with the name default that references your Istio resource.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
- Optional: You have deployed the workloads that you want to include in the mesh. In the following examples, you deployed the Bookinfo application to the bookinfo namespace, but did not configure sidecar injection (step 5 in the "Deploying the Bookinfo application" procedure). For more information, see "Deploying the Bookinfo application".
Procedure
Find the name of your Istio resource by running the following command:
$ oc get istio
You should see output similar to the following example:
NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
default   1           1       1        default-v1-24-3   Healthy   v1.24.3   11s
In this example, the Istio resource uses the name default, but the underlying revision is called default-v1-24-3.
Create the IstioRevisionTag resource in a YAML file similar to the following example:

apiVersion: sailoperator.io/v1
kind: IstioRevisionTag
metadata:
  name: default
spec:
  targetRef:
    kind: Istio
    name: default

Apply the IstioRevisionTag resource by running the following command:
$ oc apply -f istioRevisionTag.yaml
Verify that a new IstioRevisionTag resource exists in your cluster by running the following command:
$ oc get istiorevisiontags.sailoperator.io
Example output:
NAME      STATUS    IN USE   REVISION          AGE
default   Healthy   True     default-v1-24-3   4m23s
In this example, the new tag is referencing your active revision, default-v1-24-3. Now you can use the istio-injection=enabled label as if your revision has the name default.
Confirm that the pods are running without sidecars by running the following command. Any workloads that are already running in the required namespace should show 1/1 containers in the READY column.
$ oc get pods -n bookinfo
You should see output similar to the following example:
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
Apply the injection label to the bookinfo namespace by running the following command:
$ oc label namespace bookinfo istio-injection=enabled
namespace/bookinfo labeled
To ensure the control plane applies sidecar injection, redeploy the workloads in the bookinfo namespace by running the following command:
$ oc -n bookinfo rollout restart deployments
Verification
Verify the rollout by running the following command and confirming that the new pods display 2/2 containers in the READY column:
$ oc get pods -n bookinfo
You should see output similar to the following example:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-7745f84ff-bpf8f        2/2     Running   0          55s
productpage-v1-54f48db985-gd5q9   2/2     Running   0          55s
ratings-v1-5d645c985f-xsw7p       2/2     Running   0          55s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          55s
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          55s
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          55s
3.5. Additional resources
Chapter 4. Istio ambient mode
Istio ambient mode provides a sidecar-less architecture for Red Hat OpenShift Service Mesh that reduces operational complexity and resource overhead by using node-level Layer 4 (L4) proxies and optional Layer 7 proxies.
4.1. About Istio ambient mode
To understand the Istio ambient mode architecture, see the following definitions:
- ZTunnel proxy
- A per-node proxy that manages secure, transparent Transmission Control Protocol (TCP) connections for all workloads on the node. It operates at Layer 4 (L4), offloading mutual Transport Layer Security (mTLS) and L4 policy enforcement from application pods.
- Waypoint proxy
- An optional proxy that runs per service account or namespace to offer advanced Layer 7 (L7) features such as traffic management, policy enforcement, and observability. You can apply L7 features selectively to avoid the overhead of sidecars for every service.
- Istio CNI plugin
- Redirects traffic to the Ztunnel proxy on each node, enabling transparent interception without requiring modifications to application pods.
Istio ambient mode offers the following benefits:
- Simplified operations that remove the need to manage sidecar injection, reducing the complexity of mesh adoption and operations.
- Reduced resource consumption with a per-node Ztunnel proxy that provides L4 service mesh features and an optional waypoint proxy that reduces resource overhead per pod.
- Incremental adoption that enables workloads to join the mesh with L4 features such as mutual Transport Layer Security (mTLS) and basic policies, with optional waypoint proxies added later to use L7 service mesh features, such as HTTP (L7) traffic management.

Note
The L7 features require deploying waypoint proxies, which introduces minimal additional overhead for the selected services.

- Enhanced security that provides a secure, zero-trust network foundation with mTLS by default for all meshed workloads.
Ambient mode is a newer architecture and might involve different operational considerations than traditional sidecar models.
While well-defined discovery selectors allow a service mesh deployed in ambient mode to run alongside a mesh in sidecar mode, this scenario has not been thoroughly validated. To avoid potential conflicts, install Istio ambient mode only on clusters that do not have an existing Red Hat OpenShift Service Mesh installation. Ambient mode remains a Technology Preview feature.
Istio ambient mode is not compatible with clusters that use Red Hat OpenShift Service Mesh 2.6 or earlier. You must not install or use them together.
4.2. Installing Istio ambient mode
You can install Istio ambient mode on OpenShift Container Platform 4.19 or later and Red Hat OpenShift Service Mesh 3.1.0 or later with the required Gateway API custom resource definitions (CRDs).
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.19 or later.
- You have installed the OpenShift Service Mesh Operator 3.1.0 or later in the OpenShift Container Platform cluster.
- You have logged in to the OpenShift Container Platform cluster either through the web console as a user with the cluster-admin role, or with the oc login command, depending on the installation method.
- You have configured the OVN-Kubernetes Container Network Interface (CNI) to use local gateway mode by setting the routingViaHost field as true in the gatewayConfig specification for the Cluster Network Operator. For more information, see "Configuring gateway mode".
Procedure
Install the Istio control plane:
Create the istio-system namespace by running the following command:
$ oc create namespace istio-system
Create an Istio resource named istio.yaml similar to the following example:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  profile: ambient
  values:
    pilot:
      trustedZtunnelNamespace: ztunnel

Important
You must set the profile field to ambient and configure the .spec.values.pilot.trustedZtunnelNamespace value to match the namespace where you install the ZTunnel resource.
Apply the Istio custom resource (CR) by running the following command:
$ oc apply -f istio.yaml
Wait for the Istio control plane to contain the Ready status condition by running the following command:
$ oc wait --for=condition=Ready istios/default --timeout=3m
Install the Istio Container Network Interface (CNI):
Create the istio-cni namespace by running the following command:
$ oc create namespace istio-cni
Create the IstioCNI resource named istio-cni.yaml similar to the following example:

apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  namespace: istio-cni
  profile: ambient
  values:
    cni:
      ambient:
        reconcileIptablesOnStartup: true

Set the spec.profile field to ambient and the spec.values.cni.ambient.reconcileIptablesOnStartup field to true. The reconcileIptablesOnStartup field enables the IstioCNI agent to detect and fix incompatible iptables rules in already-running ambient pods when the CNI agent starts up, handling scenarios like upgrades or rule drift.
Apply the IstioCNI CR by running the following command:
$ oc apply -f istio-cni.yaml
Wait for the IstioCNI resource to contain the Ready status condition by running the following command:
$ oc wait --for=condition=Ready istiocni/default --timeout=3m
Install the Ztunnel proxy:
Create the ztunnel namespace for the Ztunnel proxy by running the following command:
$ oc create namespace ztunnel
The namespace name for the ztunnel project must match the trustedZtunnelNamespace parameter in the Istio configuration.
Create the ZTunnel resource named ztunnel.yaml similar to the following example:

apiVersion: sailoperator.io/v1alpha1
kind: ZTunnel
metadata:
  name: default
spec:
  namespace: ztunnel
  profile: ambient

Apply the ZTunnel CR by running the following command:
$ oc apply -f ztunnel.yaml
Wait for the ZTunnel pods to contain the Ready status condition by running the following command:
$ oc wait --for=condition=Ready ztunnel/default --timeout=3m
4.3. About discovery selectors and Istio ambient mode
Istio ambient mode includes a workload in the mesh when the control plane discovers the workload and the appropriate label enables traffic redirection through the Ztunnel proxy.
By default, the control plane discovers workloads in all namespaces across the cluster. As a result, each proxy receives configuration for every namespace, including workloads that are not enrolled in the mesh. In shared or multitenant clusters, limiting mesh participation to specific namespaces helps reduce configuration overhead and supports running multiple service meshes within the same cluster.
For more information about discovery selectors, see "Scoping the Service Mesh with discovery selectors".
4.3.1. Scoping the Service Mesh with discovery selectors in Istio ambient mode
To limit the scope of the OpenShift Service Mesh in Istio ambient mode, you can configure the discoverySelectors parameter in the meshConfig section of the Istio resource. This configuration controls which namespaces the control plane discovers based on label selectors.
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.19 or later.
- You have created an Istio control plane resource.
- You have created an IstioCNI resource.
- You have created a ZTunnel resource.
Procedure
Add a label to the namespace containing the Istio control plane resource, for example, the istio-system namespace, by running the following command:
$ oc label namespace istio-system istio-discovery=enabled
Add a label to the namespace containing the IstioCNI resource, for example, the istio-cni namespace, by running the following command:
$ oc label namespace istio-cni istio-discovery=enabled
Add a label to the namespace containing the ZTunnel resource, for example, the ztunnel namespace, by running the following command:
$ oc label namespace ztunnel istio-discovery=enabled
Change the Istio control plane resource to include a discoverySelectors section with the same label:
Create a YAML file with the name istio-discovery-selectors.yaml similar to the following example:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  profile: ambient
  values:
    pilot:
      trustedZtunnelNamespace: ztunnel
    meshConfig:
      discoverySelectors:
      - matchLabels:
          istio-discovery: enabled

Apply the YAML file to the Istio control plane resource by running the following command:
$ oc apply -f istio-discovery-selectors.yaml
4.4. Deploying the Bookinfo application in Istio ambient mode
You can deploy the bookinfo sample application in Istio ambient mode without sidecar injection by using the ZTunnel proxy.
For more information about the bookinfo application, see "About the Bookinfo application".
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.19 or later, which includes the supported Kubernetes Gateway API custom resource definitions (CRDs) required for Istio ambient mode.
- You have logged in to the OpenShift Container Platform cluster either through the web console as a user with the cluster-admin role, or with the oc login command, depending on the installation method.
- You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
- You have created an IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created a ZTunnel resource, and the Operator has deployed the necessary ZTunnel pods.
Procedure
Create the bookinfo namespace by running the following command:
$ oc create namespace bookinfo
Add the istio-discovery=enabled label to the bookinfo namespace by running the following command:
$ oc label namespace bookinfo istio-discovery=enabled
Apply the bookinfo YAML file to deploy the bookinfo application by running the following command:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo.yaml
Apply the bookinfo-versions YAML file to deploy the bookinfo application by running the following command:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo-versions.yaml
Verify that the bookinfo pods are running by entering the following command:
$ oc -n bookinfo get pods
You should see output similar to the following example:
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-54ffdd5947-8gk5h      1/1     Running   0          5m9s
productpage-v1-d49bb79b4-cb9sl   1/1     Running   0          5m3s
ratings-v1-856f65bcff-h6kkf      1/1     Running   0          5m7s
reviews-v1-848b8749df-wl5br      1/1     Running   0          5m6s
reviews-v2-5fdf9886c7-8xprg      1/1     Running   0          5m5s
reviews-v3-bb6b8ddc7-bvcm5       1/1     Running   0          5m5s
Verify that the bookinfo application is running by entering the following command:
$ oc exec "$(oc get pod -l app=ratings -n bookinfo \
  -o jsonpath='{.items[0].metadata.name}')" \
  -c ratings -n bookinfo \
  -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
Add the bookinfo application to the Istio ambient mesh by labeling either the entire namespace or the individual pods:
To include all workloads in the bookinfo namespace, apply the istio.io/dataplane-mode=ambient label to the bookinfo namespace by running the following command:
$ oc label namespace bookinfo istio.io/dataplane-mode=ambient
- To include only specific workloads, apply the istio.io/dataplane-mode=ambient label directly to individual pods. See the "Additional resources" section for more details on the labels used to add or exclude workloads in a mesh.

Note
Adding workloads to the ambient mesh does not require restarting or redeploying application pods. Unlike sidecar mode, the number of containers in each pod remains unchanged.
Confirm that the Ztunnel proxy has successfully opened listening sockets in the pod network namespace by running the following command:
$ istioctl ztunnel-config workloads --namespace ztunnel
You should see output similar to the following example:
NAMESPACE      POD NAME                         ADDRESS        NODE                          WAYPOINT   PROTOCOL
bookinfo       details-v1-54ffdd5947-cflng      10.131.0.69    ip-10-0-47-239.ec2.internal   None       HBONE
bookinfo       productpage-v1-d49bb79b4-8sgwx   10.128.2.80    ip-10-0-24-198.ec2.internal   None       HBONE
bookinfo       ratings-v1-856f65bcff-c6ldn      10.131.0.70    ip-10-0-47-239.ec2.internal   None       HBONE
bookinfo       reviews-v1-848b8749df-45hfd      10.131.0.72    ip-10-0-47-239.ec2.internal   None       HBONE
bookinfo       reviews-v2-5fdf9886c7-mvwft      10.128.2.78    ip-10-0-24-198.ec2.internal   None       HBONE
bookinfo       reviews-v3-bb6b8ddc7-fl8q2       10.128.2.79    ip-10-0-24-198.ec2.internal   None       HBONE
istio-cni      istio-cni-node-7hwd2             10.0.61.108    ip-10-0-61-108.ec2.internal   None       TCP
istio-cni      istio-cni-node-bfqmb             10.0.30.129    ip-10-0-30-129.ec2.internal   None       TCP
istio-cni      istio-cni-node-cv8cw             10.0.75.71     ip-10-0-75-71.ec2.internal    None       TCP
istio-cni      istio-cni-node-hj9cz             10.0.47.239    ip-10-0-47-239.ec2.internal   None       TCP
istio-cni      istio-cni-node-p8wrg             10.0.24.198    ip-10-0-24-198.ec2.internal   None       TCP
istio-system   istiod-6bd6b8664b-r74js          10.131.0.80    ip-10-0-47-239.ec2.internal   None       TCP
ztunnel        ztunnel-2w5mj                    10.128.2.61    ip-10-0-24-198.ec2.internal   None       TCP
ztunnel        ztunnel-6njq8                    10.129.0.131   ip-10-0-75-71.ec2.internal    None       TCP
ztunnel        ztunnel-96j7k                    10.130.0.146   ip-10-0-61-108.ec2.internal   None       TCP
ztunnel        ztunnel-98mrk                    10.131.0.50    ip-10-0-47-239.ec2.internal   None       TCP
ztunnel        ztunnel-jqcxn                    10.128.0.98    ip-10-0-30-129.ec2.internal   None       TCP
4.5. About waypoint proxies in Istio ambient mode
After setting up Istio ambient mode with ztunnel proxies, you can add waypoint proxies to enable advanced Layer 7 (L7) processing features that Istio provides.
Istio ambient mode separates the functionality of Istio into two layers:
- A secure Layer 4 (L4) overlay managed by ztunnel proxies
- An L7 layer managed by optional waypoint proxies
A waypoint proxy is an Envoy-based proxy that performs L7 processing for workloads running in ambient mode. It functions as a gateway to a resource such as a namespace, service, or pod. You can install, upgrade, and scale waypoint proxies independently of applications. The configuration uses the Kubernetes Gateway API.
You can lower resource usage in Red Hat OpenShift Service Mesh by using waypoint proxies to serve many workloads within a shared security boundary, such as a namespace, instead of running a separate proxy for every pod.
A destination waypoint enforces policies by acting as a gateway. All incoming traffic to a resource, such as a namespace, service, or pod, passes through the waypoint for policy enforcement.
The ztunnel node proxy manages L4 functions in ambient mode, including mutual Transport Layer Security (mTLS) encryption, L4 traffic processing, and telemetry. Ztunnel and waypoint proxies communicate using HTTP-Based Overlay Network (HBONE), a protocol that tunnels traffic over HTTP/2 CONNECT to mutual TLS (mTLS) on port 15008.
You can add a waypoint proxy if workloads require any of the following L7 capabilities:
- Traffic management
- Advanced HTTP routing, load balancing, circuit breaking, rate limiting, fault injection, retries, and timeouts
- Security
- Authorization policies based on L7 attributes such as request type or HTTP headers
- Observability
- HTTP metrics, access logging, and tracing for application traffic
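As an illustrative sketch of the L7 security capability, the following AuthorizationPolicy restricts traffic to a service to GET requests only. The targetRefs field attaches the policy to the service so that a waypoint proxy enforces it; the policy name, namespace, and service name are assumptions for this example:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-get-only   # hypothetical policy name
  namespace: bookinfo          # assumed namespace
spec:
  targetRefs:
  - kind: Service
    group: ""
    name: productpage          # assumed service served by a waypoint
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]       # L7 attribute enforced by the waypoint proxy
```

Policies that match on L7 attributes such as HTTP methods or headers take effect only when a waypoint proxy handles the traffic; the ztunnel alone enforces only L4 policies.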
4.6. Deploying waypoint proxies by using the Gateway API
You can deploy waypoint proxies by using the Kubernetes Gateway resource.
Prerequisites
- You have logged in to an OpenShift Container Platform 4.19 or later cluster, which provides the supported Kubernetes Gateway API custom resource definitions (CRDs) required for ambient mode functionality.
- You have the Red Hat OpenShift Service Mesh Operator 3.2.0 or later installed on the OpenShift cluster.
- You have Istio deployed in ambient mode.
- You have applied the required labels to workloads or namespaces to enable ztunnel traffic redirection.
Istio ambient mode is not compatible with clusters that use Red Hat OpenShift Service Mesh 2.6 or earlier. You must not deploy both versions in the same cluster.
Procedure
On OpenShift Container Platform 4.18 and earlier, install the community-maintained Kubernetes Gateway API CRDs by running the following command:

$ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  { oc apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml; }

Starting with OpenShift Container Platform 4.19, the Gateway API CRDs are installed by default.
Note: The CRDs are community maintained and not supported by Red Hat. Upgrading to OpenShift Container Platform 4.19 or later, which includes supported Gateway API CRDs, might disrupt applications.
4.7. Deploying a waypoint proxy
You can deploy a waypoint proxy in the bookinfo application namespace to route traffic through the Istio ambient data plane and enforce L7 policies.
Prerequisites
- You have logged in to OpenShift Container Platform 4.19 or later, which provides the supported Kubernetes Gateway API custom resource definitions (CRDs) required for ambient mode functionality.
- You have the Red Hat OpenShift Service Mesh Operator 3.2.0 or later installed on the OpenShift cluster.
- You have Istio deployed in ambient mode.
- You have deployed the bookinfo sample application for the following example.
- You have added the istio.io/dataplane-mode=ambient label to the target namespace.
Procedure
Deploy a waypoint proxy in the bookinfo application namespace similar to the following example.
You can see the following example configuration for reference:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  labels:
    istio.io/waypoint-for: service
  name: waypoint
  namespace: bookinfo
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE

Apply the waypoint custom resource (CR) by running the following command:

$ oc apply -f waypoint.yaml
The istio.io/waypoint-for: service label indicates that the waypoint handles traffic for services. The label determines the type of traffic processed. For more information, see "Waypoint traffic types".

Enroll the bookinfo namespace to use the waypoint by running the following command:

$ oc label namespace bookinfo istio.io/use-waypoint=waypoint

After enrolling the namespace, requests from any pod using the ambient data plane to services in bookinfo route through the waypoint for L7 processing and policy enforcement.
Verification
Confirm that the waypoint proxy manages all the services in the bookinfo namespace by running the following command:

$ istioctl ztunnel-config svc --namespace ztunnel

Example output:

NAMESPACE   SERVICE NAME     SERVICE VIP      WAYPOINT   ENDPOINTS
bookinfo    details          172.30.15.248    waypoint   1/1
bookinfo    details-v1       172.30.114.128   waypoint   1/1
bookinfo    productpage      172.30.155.45    waypoint   1/1
bookinfo    productpage-v1   172.30.76.27     waypoint   1/1
bookinfo    ratings          172.30.24.145    waypoint   1/1
bookinfo    ratings-v1       172.30.139.144   waypoint   1/1
bookinfo    reviews          172.30.196.50    waypoint   3/3
bookinfo    reviews-v1       172.30.172.192   waypoint   1/1
bookinfo    reviews-v2       172.30.12.41     waypoint   1/1
bookinfo    reviews-v3       172.30.232.12    waypoint   1/1
bookinfo    waypoint         172.30.92.147    None       1/1
You can also configure only specific services or pods to use a waypoint by labeling the required service or pod. When enrolling a pod explicitly, also add the istio.io/waypoint-for: workload label to the corresponding gateway resource.
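For example, to enroll a single pod rather than a whole namespace, you could deploy a waypoint labeled for workload traffic and label the pod itself. The following sketch is illustrative only; the waypoint name waypoint-workload is an assumption:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint-workload    # hypothetical name for illustration
  namespace: bookinfo
  labels:
    istio.io/waypoint-for: workload    # this waypoint handles traffic addressed to workloads
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
```

You would then label the individual pod, for example with oc label pod <pod_name> -n bookinfo istio.io/use-waypoint=waypoint-workload, instead of labeling the namespace.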
4.8. Enabling cross-namespace waypoint usage
You can use a cross-namespace waypoint to allow resources in one namespace to route traffic through a waypoint deployed in a different namespace.
Procedure
Add the istio-discovery=enabled label to the default namespace by running the following command:

$ oc label namespace default istio-discovery=enabled
Create a Gateway resource that allows workloads in the bookinfo namespace to use the waypoint-default from the default namespace similar to the following example.
You can see the following example configuration for reference:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint-default
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            kubernetes.io/metadata.name: bookinfo

Apply the cross-namespace waypoint by running the following command:

$ oc apply -f waypoint-default.yaml
Add the labels required to use a cross-namespace waypoint:
Add the istio.io/use-waypoint-namespace label to specify the namespace where the waypoint is present by running the following command:

$ oc label namespace bookinfo istio.io/use-waypoint-namespace=default

Add the istio.io/use-waypoint label to specify the waypoint to use by running the following command:

$ oc label namespace bookinfo istio.io/use-waypoint=waypoint-default
4.9. About Layer 7 features in ambient mode
Ambient mode includes stable Layer 7 (L7) capabilities implemented through the Gateway API HTTPRoute resource and the Istio AuthorizationPolicy resource.
The AuthorizationPolicy resource works in both sidecar and ambient modes. In ambient mode, you can target authorization policies for ztunnel enforcement or attach them for waypoint enforcement. To attach a policy to a waypoint, include a targetRef that references either the waypoint itself or a Service configured to use that waypoint.
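To illustrate the difference, the following sketch shows an L4-only policy that ztunnel can enforce without a waypoint: it matches workloads with a selector and restricts sources by identity, using no targetRefs and no L7 attributes. The app label and the service account identity are assumptions for illustration:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-l4    # hypothetical name for illustration
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: productpage    # assumed workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/bookinfo/sa/ratings    # assumed service account identity
```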
You can attach Layer 4 (L4) or L7 policies to the waypoint proxy to ensure correct identity-based enforcement. After the waypoint becomes part of the traffic path, the destination ztunnel recognizes traffic by the identity of the waypoint.
Istio peer authentication policies, which configure mutual TLS (mTLS) modes, are supported by ztunnel. In ambient mode, ztunnel ignores policies that set the mode to DISABLE because HBONE traffic always enforces mTLS. For more information, see "Peer authentication".
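For example, a peer authentication policy like the following sketch has no effect on ambient workloads, because ztunnel always secures HBONE traffic with mTLS. The policy name and namespace are assumptions for illustration:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: disable-mtls-example    # hypothetical; shown only to illustrate the ignored mode
  namespace: bookinfo
spec:
  mtls:
    mode: DISABLE    # ignored by ztunnel in ambient mode
```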
4.10. Routing traffic using waypoint proxies
You can use a deployed waypoint proxy to split traffic between different versions of the Bookinfo reviews service for feature testing or A/B testing.
Procedure
Create the traffic routing configuration similar to the following example:
You can see the following example configuration for reference:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
  namespace: bookinfo
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 90
    - name: reviews-v2
      port: 9080
      weight: 10

Apply the traffic routing configuration by running the following command:

$ oc apply -f traffic-route.yaml
Verification
Access the productpage service from within the ratings pod by running the following command:

$ oc exec "$(oc get pod -l app=ratings -n bookinfo \
  -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo \
  -- curl -sS productpage:9080/productpage | grep -om1 'reviews-v[12]'

Most responses (90%) contain reviews-v1 output, while a smaller portion (10%) contain reviews-v2 output.
4.11. Adding authorization policy
Use a Layer 7 (L7) authorization policy to explicitly allow the curl service to send GET requests to the productpage service while blocking all other operations.
Procedure
Create the authorization policy similar to the following example:
You can see the following example configuration for reference:
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-waypoint
  namespace: bookinfo
spec:
  targetRefs:
  - kind: Service
    group: ""
    name: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/curl/sa/curl
    to:
    - operation:
        methods: ["GET"]

Apply the authorization policy by running the following command:

$ oc apply -f authorization-policy.yaml

Note: The targetRefs field specifies the service targeted by the authorization policy of the waypoint proxy.
Verification
Create a namespace for a curl client by running the following command:

$ oc create namespace curl

Deploy a curl client by running the following command:

$ oc apply -n curl -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/curl/curl.yaml

Apply the label for ambient mode to the curl namespace by running the following command:

$ oc label namespace curl istio.io/dataplane-mode=ambient
Verify that a GET request to the productpage service succeeds with an HTTP 200 response when made from the curl pod in the curl namespace, by running the following command:

$ oc -n curl exec deploy/curl -- sh -c \
  'curl -s -o /dev/null -w "HTTP %{http_code}\n" http://productpage.bookinfo.svc.cluster.local:9080/productpage'

Verify that the applied authorization policy denies a POST request to the same service with an HTTP 403 response, by running the following command:

$ oc -n curl exec deploy/curl -- sh -c \
  'curl -s -o /dev/null -w "HTTP %{http_code}\n" -X POST http://productpage.bookinfo.svc.cluster.local:9080/productpage'

Verify that a GET request from another service, such as the ratings pod in the bookinfo namespace, is also denied with RBAC: access denied, by running the following command:

$ oc exec "$(oc get pod -l app=ratings -n bookinfo \
  -o jsonpath='{.items[0].metadata.name}')" \
  -c ratings -n bookinfo \
  -- curl -sS productpage:9080/productpage

Clean up the resources by running the following commands:
Delete the curl application by running the following command:

$ oc delete -n curl -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/curl/curl.yaml

Delete the curl namespace by running the following command:

$ oc delete namespace curl
4.12. Additional resources
- Configuring gateway mode
- Scoping the mesh with discovery selectors
- About the Bookinfo application
- Ambient mode architecture (Istio documentation)
- Adding workloads to a mesh in ambient mode (Istio documentation)
- Waypoint traffic types (Istio documentation)
- Peer authentication (Istio documentation)
Chapter 5. OpenShift Service Mesh and cert-manager
The cert-manager tool provides a unified API to manage X.509 certificates for applications in a Kubernetes environment. You can use cert-manager to integrate with public or private key infrastructures (PKI) and automate certificate renewal.
5.1. About the cert-manager Operator istio-csr agent
The cert-manager Operator for Red Hat OpenShift enhances certificate management for securing workloads and control plane components in Red Hat OpenShift Service Mesh and Istio. It supports issuing, delivering, and renewing certificates used for mutual Transport Layer Security (mTLS) through cert-manager issuers.
By integrating Istio with the istio-csr agent, which the cert-manager Operator manages, you enable Istio to request and manage the certificates directly. The integration simplifies security configuration and centralizes certificate management within the cluster.
You must install the cert-manager Operator for Red Hat OpenShift before you create and install your Istio resource.
5.1.1. Integrating Service Mesh with the cert-manager Operator by using the istio-csr agent
Integrate the cert-manager Operator with OpenShift Service Mesh by deploying the istio-csr agent and configuring an Istio resource to process certificate signing requests for workloads and the control plane.
Prerequisites
- You have installed the cert-manager Operator for Red Hat OpenShift version 1.15.1.
- You have logged in to OpenShift Container Platform 4.14 or later.
- You have installed the OpenShift Service Mesh Operator.
- You have an IstioCNI instance running in the cluster.
- You have installed the istioctl command.
Procedure
Create the istio-system namespace by running the following command:

$ oc create namespace istio-system

Patch the cert-manager Operator to install the istio-csr agent by running the following command:

$ oc -n cert-manager-operator patch subscription openshift-cert-manager-operator \
  --type='merge' -p \
  '{"spec":{"config":{"env":[{"name":"UNSUPPORTED_ADDON_FEATURES","value":"IstioCSR=true"}]}}}'

Create the root certificate authority (CA) issuer by creating an Issuer object for the istio-csr agent:

Create a new project for installing the istio-csr agent by running the following command:

$ oc new-project istio-csr
Create an Issuer object similar to the following example:

Note: The selfSigned issuer serves demonstration purposes, testing, or proof-of-concept environments. For production deployments, use a secure and trusted CA.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: istio-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  isCA: true
  duration: 87600h
  secretName: istio-ca
  commonName: istio-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  subject:
    organizations:
    - cluster.local
    - cert-manager
  issuerRef:
    name: selfsigned
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  ca:
    secretName: istio-ca

Create the objects by running the following command:

$ oc apply -f issuer.yaml
Wait for the istio-ca certificate to contain the "Ready" status condition by running the following command:

$ oc wait --for=condition=Ready certificates/istio-ca -n istio-system
Create the IstioCSR custom resource:

Create the IstioCSR custom resource similar to the following example:

apiVersion: operator.openshift.io/v1alpha1
kind: IstioCSR
metadata:
  name: default
  namespace: istio-csr
spec:
  istioCSRConfig:
    certManager:
      issuerRef:
        name: istio-ca
        kind: Issuer
        group: cert-manager.io
    istiodTLSConfig:
      trustDomain: cluster.local
    istio:
      namespace: istio-system

Create the istio-csr agent by running the following command:

$ oc create -f istioCSR.yaml

Verify that the istio-csr deployment is ready by running the following command:

$ oc get deployment -n istio-csr
Install the Istio resource:

Note: The configuration disables the built-in CA server for Istio and forwards certificate signing requests from istiod to the istio-csr agent. The istio-csr agent obtains certificates for both istiod and mesh workloads from the cert-manager Operator. The istio-csr agent generates the istiod TLS certificate, and the system mounts it into the pod at a known location.

Create the Istio object similar to the following example:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.24-latest
  namespace: istio-system
  values:
    global:
      caAddress: cert-manager-istio-csr.istio-csr.svc:443
    pilot:
      env:
        ENABLE_CA_SERVER: "false"

Create the Istio resource by running the following command:

$ oc apply -f istio.yaml

Verify that the Istio resource displays the "Ready" status condition by running the following command:

$ oc wait --for=condition=Ready istios/default -n istio-system
5.1.2. Verifying Service Mesh with the cert-manager Operator using the istio-csr agent
You can use the sample httpbin service and sleep application to verify traffic between workloads. Check the workload proxy certificate to verify a successful cert-manager Operator installation.
Procedure
Create the following namespaces:

Create the apps-1 namespace by running the following command:

$ oc new-project apps-1

Create the apps-2 namespace by running the following command:

$ oc new-project apps-2

Add the istio-injection=enabled label to the namespaces:

Add the istio-injection=enabled label to the apps-1 namespace by running the following command:

$ oc label namespaces apps-1 istio-injection=enabled

Add the istio-injection=enabled label to the apps-2 namespace by running the following command:

$ oc label namespaces apps-2 istio-injection=enabled
Deploy the httpbin app in the namespaces:

Deploy the httpbin app in the apps-1 namespace by running the following command:

$ oc apply -n apps-1 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml

Deploy the httpbin app in the apps-2 namespace by running the following command:

$ oc apply -n apps-2 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml

Deploy the sleep app in the namespaces:

Deploy the sleep app in the apps-1 namespace by running the following command:

$ oc apply -n apps-1 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml

Deploy the sleep app in the apps-2 namespace by running the following command:

$ oc apply -n apps-2 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml
Verify that the created apps have sidecars injected:

Verify that the created apps have sidecars injected in the apps-1 namespace by running the following command:

$ oc get pods -n apps-1

Verify that the created apps have sidecars injected in the apps-2 namespace by running the following command:

$ oc get pods -n apps-2
Create a mesh-wide strict mutual Transport Layer Security (mTLS) policy similar to the following example:
Note: Enabling PeerAuthentication in strict mTLS mode verifies correct certificate distribution and functional mTLS communication between workloads.

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT

Apply the mTLS policy by running the following command:

$ oc apply -f peer_auth.yaml
Verify that the apps-1/sleep app can access the apps-2/httpbin service by running the following command:

$ oc -n apps-1 exec "$(oc -n apps-1 get pod \
  -l app=sleep -o jsonpath={.items..metadata.name})" \
  -c sleep -- curl -sIL http://httpbin.apps-2.svc.cluster.local:8000

You should see output similar to the following example:

HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
content-type: text/html; charset=utf-8
date: Wed, 18 Jun 2025 09:20:55 GMT
x-envoy-upstream-service-time: 14
server: envoy
transfer-encoding: chunked
Verify that the apps-2/sleep app can access the apps-1/httpbin service by running the following command:

$ oc -n apps-2 exec "$(oc -n apps-2 get pod \
  -l app=sleep -o jsonpath={.items..metadata.name})" \
  -c sleep -- curl -sIL http://httpbin.apps-1.svc.cluster.local:8000

You should see output similar to the following example:

HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
content-type: text/html; charset=utf-8
date: Wed, 18 Jun 2025 09:21:23 GMT
x-envoy-upstream-service-time: 16
server: envoy
transfer-encoding: chunked
Verify that the httpbin workload certificate matches as expected by running the following command:

$ istioctl proxy-config secret -n apps-1 \
  $(oc get pods -n apps-1 -o jsonpath='{.items..metadata.name}' --selector app=httpbin) \
  -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
  | base64 --decode | openssl x509 -text -noout

You should see output similar to the following example:

...
Issuer: O = cert-manager + O = cluster.local, CN = istio-ca
...
X509v3 Subject Alternative Name:
    URI:spiffe://cluster.local/ns/apps-1/sa/httpbin
5.1.3. Uninstalling Service Mesh with the cert-manager Operator by using the istio-csr agent
Uninstall the cert-manager Operator and the istio-csr agent from OpenShift Service Mesh after verifying that no mesh components depend on the agent or its issued certificates to avoid service disruption.
Procedure
Remove the IstioCSR custom resource by running the following command:

$ oc -n <istio-csr_project_name> delete istiocsrs.operator.openshift.io default
Remove the related resources:
List the cluster-scoped resources by running the following command:
$ oc get clusterrolebindings,clusterroles -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr"
Save the names of the listed resources for later reference.
List the resources in the namespace where the istio-csr agent is deployed by running the following command:

$ oc get certificate,deployments,services,serviceaccounts -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name>
Save the names of the listed resources for later reference.
List the resources in the namespaces where Red Hat OpenShift Service Mesh or Istio is deployed by running the following command:

$ oc get roles,rolebindings \
  -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" \
  -n <istio_csr_project_name>
Save the names of the listed resources for later reference.
For each resource listed in the previous steps, delete it by running the following command:
$ oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>
5.2. Additional resources
Chapter 6. Multi-cluster topologies
Multi-cluster topologies are useful for organizations with distributed systems or environments seeking enhanced scalability, fault tolerance, and regional redundancy.
6.1. About multi-cluster mesh topologies
In a multi-cluster mesh topology, you install and manage a single Istio mesh across many OpenShift Container Platform clusters, enabling communication and service discovery between the services.
Two factors determine the multi-cluster mesh topology: the control plane topology and the network topology. Each factor has two options, so there are four possible multi-cluster mesh configurations.
- Multi-Primary Single Network: Combines the multi-primary control plane topology and the single network topology models.
- Multi-Primary Multi-Network: Combines the multi-primary control plane topology and the multi-network network topology models.
- Primary-Remote Single Network: Combines the primary-remote control plane topology and the single network topology models.
- Primary-Remote Multi-Network: Combines the primary-remote control plane topology and the multi-network topology models.
6.1.1. Control plane topology models
A multi-cluster mesh must use one of the following control plane topologies:
- Multi-Primary: In this configuration, a control plane is present on every cluster. Each control plane observes the API servers in all of the other clusters for services and endpoints.
- Primary-Remote: In this configuration, the control plane is present only on one cluster, called the primary cluster. No control plane runs on any of the other clusters, called remote clusters. The control plane on the primary cluster discovers services and endpoints and configures the sidecar proxies for the workloads in all clusters.
6.1.2. Network topology models
A multi-cluster mesh must use one of the following network topologies:
- Single Network: All clusters reside on the same network and there is direct connectivity between the services in all the clusters. There is no need to use gateways for communication between the services across cluster boundaries.
- Multi-Network: Clusters reside on different networks and there is no direct connectivity between services. Gateways enable communication across network boundaries.
6.2. Multi-Cluster configuration overview
To configure a multi-cluster topology, you must perform the following actions:
- Install the OpenShift Service Mesh Operator for each cluster.
- Create or have access to root and intermediate certificates for each cluster.
- Apply the security certificates for each cluster.
- Install Istio for each cluster.
6.2.1. Creating certificates for a multi-cluster topology
Create the root and intermediate certificate authority (CA) certificates for two clusters.
Prerequisites
- You have OpenSSL installed locally.
Procedure
Create the root CA certificate:
Create a key for the root certificate by running the following command:
$ openssl genrsa -out root-key.pem 4096
Create an OpenSSL configuration file named root-ca.conf for the root CA certificate.
You can see the following example configuration for reference:

[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn

[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign

[ req_dn ]
O = Istio
CN = Root CA
Create the certificate signing request by running the following command:

$ openssl req -sha256 -new -key root-key.pem \
  -config root-ca.conf \
  -out root-cert.csr

Create a shared root certificate by running the following command:

$ openssl x509 -req -sha256 -days 3650 \
  -signkey root-key.pem \
  -extensions req_ext -extfile root-ca.conf \
  -in root-cert.csr \
  -out root-cert.pem
Create the intermediate CA certificate for the East cluster:
Create a directory named east by running the following command:

$ mkdir east

Create a key for the intermediate certificate for the East cluster by running the following command:

$ openssl genrsa -out east/ca-key.pem 4096

Create an OpenSSL configuration file named intermediate.conf in the east/ directory for the intermediate certificate of the East cluster.
You can see the following example configuration for reference:

[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn

[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName = @san

[ san ]
DNS.1 = istiod.istio-system.svc

[ req_dn ]
O = Istio
CN = Intermediate CA
L = east

Create a certificate signing request by running the following command:

$ openssl req -new -config east/intermediate.conf \
  -key east/ca-key.pem \
  -out east/cluster-ca.csr

Create the intermediate CA certificate for the East cluster by running the following command:

$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile east/intermediate.conf \
  -in east/cluster-ca.csr \
  -out east/ca-cert.pem

Create a certificate chain from the intermediate and root CA certificates for the East cluster by running the following command:

$ cat east/ca-cert.pem root-cert.pem > east/cert-chain.pem && cp root-cert.pem east
Create the intermediate CA certificate for the West cluster:
Create a directory named west by running the following command:

$ mkdir west

Create a key for the intermediate certificate for the West cluster by running the following command:

$ openssl genrsa -out west/ca-key.pem 4096

Create an OpenSSL configuration file named intermediate.conf in the west/ directory for the intermediate certificate of the West cluster.
You can see the following example configuration for reference:

[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn

[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName = @san

[ san ]
DNS.1 = istiod.istio-system.svc

[ req_dn ]
O = Istio
CN = Intermediate CA
L = west

Create a certificate signing request by running the following command:

$ openssl req -new -config west/intermediate.conf \
  -key west/ca-key.pem \
  -out west/cluster-ca.csr

Create the certificate by running the following command:

$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile west/intermediate.conf \
  -in west/cluster-ca.csr \
  -out west/ca-cert.pem

Create the certificate chain by running the following command:

$ cat west/ca-cert.pem root-cert.pem > west/cert-chain.pem && cp root-cert.pem west
6.2.2. Applying certificates to a multi-cluster topology
Apply root and intermediate certificate authority (CA) certificates to the clusters in a multi-cluster topology.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have access to two OpenShift Container Platform clusters with external load balancer support.
- You have created the root CA certificate and intermediate CA certificates for each cluster, or they have been made available to you.
Procedure
Apply the certificates to the East cluster of the multi-cluster topology:
Log in to the East cluster by running the following command:

$ oc login https://<east_cluster_api_server_url>

Set up the environment variable that holds the oc command context for the East cluster by running the following command:

$ export CTX_CLUSTER1=$(oc config current-context)

Create a project called istio-system by running the following command:

$ oc get project istio-system --context "${CTX_CLUSTER1}" || oc new-project istio-system --context "${CTX_CLUSTER1}"

Configure Istio to use network1 as the default network for the pods on the East cluster by running the following command:

$ oc --context "${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1

Create the CA certificates, certificate chain, and the private key for Istio on the East cluster by running the following command:

$ oc get secret -n istio-system --context "${CTX_CLUSTER1}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER1}" \
  --from-file=east/ca-cert.pem \
  --from-file=east/ca-key.pem \
  --from-file=east/root-cert.pem \
  --from-file=east/cert-chain.pem

Note: If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates are present in the east/ directory. If your certificates are in a different directory, change the syntax accordingly.
Apply the certificates to the West cluster of the multi-cluster topology:
Log in to the West cluster by running the following command:

$ oc login https://<west_cluster_api_server_url>

Set up the environment variable that holds the oc command context for the West cluster by running the following command:

$ export CTX_CLUSTER2=$(oc config current-context)

Create a project called istio-system by running the following command:

$ oc get project istio-system --context "${CTX_CLUSTER2}" || oc new-project istio-system --context "${CTX_CLUSTER2}"

Configure Istio to use network2 as the default network for the pods on the West cluster by running the following command:

$ oc --context "${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2

Create the CA certificate secret for Istio on the West cluster by running the following command:

$ oc get secret -n istio-system --context "${CTX_CLUSTER2}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER2}" \
  --from-file=west/ca-cert.pem \
  --from-file=west/ca-key.pem \
  --from-file=west/root-cert.pem \
  --from-file=west/cert-chain.pem

Note: If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates are present in the west/ directory. If your certificates are in a different directory, change the syntax accordingly.
Next steps
- Install Istio on all the clusters comprising the mesh topology.
6.3. Installing a multi-primary multi-network mesh
Install Istio in the multi-primary multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have created certificates for the multi-cluster mesh.
- You have applied certificates to the multi-cluster topology.
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed.
In on-premise environments, such as those running on bare metal, OpenShift Container Platform clusters often do not include a native load-balancer capability. A service of type LoadBalancer, such as the istio-eastwestgateway, is therefore not automatically assigned an external IP address. To ensure the external IP assignment required for cross-cluster communication, cluster administrators must install and configure the MetalLB Operator. MetalLB is valuable in bare metal or bare metal-like infrastructures when fault-tolerant access to an application through an external IP address is necessary. After deployment, MetalLB provides a platform-native load balancer. In addition to bare metal, the MetalLB Operator can provide load balancing for installations on other infrastructures that might lack native load-balancer capability, including:
- VMware vSphere
- IBM Z® and IBM® LinuxONE
- IBM Z® and IBM® LinuxONE for Red Hat Enterprise Linux (RHEL) KVM
- IBM Power®
For more information, see MetalLB Operator.
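As a sketch of the MetalLB configuration such an environment needs, the following resources define an address pool and announce it in Layer 2 mode so that the istio-eastwestgateway LoadBalancer service receives an external IP address. The pool name and the address range here are placeholders, not values from this procedure; substitute addresses that are routable in your network, and follow the MetalLB Operator documentation for the authoritative steps.

```yaml
# Hypothetical address pool; replace the range with addresses
# available in your on-premise network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: eastwest-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.10-192.0.2.20
---
# Announce the pool in Layer 2 mode so LoadBalancer services,
# such as istio-eastwestgateway, are assigned an external IP.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: eastwest-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - eastwest-pool
```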
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:

$ export ISTIO_VERSION=1.24.3
Install Istio on the East cluster:
Create an Istio resource on the East cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF

Wait for the control plane to return the Ready status condition by running the following command:

$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the East cluster by running the following command:

$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml

Expose the services through the gateway by running the following command:

$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Create an Istio resource on the West cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
EOF

Wait for the control plane to return the Ready status condition by running the following command:

$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the West cluster by running the following command:

$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml

Expose the services through the gateway by running the following command:

$ oc --context "${CTX_CLUSTER2}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Create the istio-reader-service-account service account for the East cluster by running the following command:

$ oc --context="${CTX_CLUSTER1}" create serviceaccount istio-reader-service-account -n istio-system

Create the istio-reader-service-account service account for the West cluster by running the following command:

$ oc --context="${CTX_CLUSTER2}" create serviceaccount istio-reader-service-account -n istio-system

Add the cluster-reader role to the East cluster by running the following command:

$ oc --context="${CTX_CLUSTER1}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system

Add the cluster-reader role to the West cluster by running the following command:

$ oc --context="${CTX_CLUSTER2}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system

Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:

$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 \
    --create-service-account=false | \
    oc --context="${CTX_CLUSTER1}" apply -f -

Install a remote secret on the West cluster that provides access to the API server on the East cluster by running the following command:

$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER1}" \
    --name=cluster1 \
    --create-service-account=false | \
    oc --context="${CTX_CLUSTER2}" apply -f -
6.3.1. Verifying a multi-cluster topology
Deploy sample applications and verify traffic on a multi-cluster topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the computer you will use to run these instructions.
- You have installed a multi-cluster topology.
Procedure
Deploy sample applications on the East cluster:
Create a sample application namespace on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" get project sample || oc --context="${CTX_CLUSTER1}" new-project sample

Label the application namespace to support sidecar injection by running the following command:

$ oc --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled

Deploy the helloworld application:

Create the helloworld service by running the following command:

$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample

Create the helloworld-v1 deployment by running the following command:

$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l version=v1 -n sample

Deploy the sleep application by running the following command:

$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample

Wait for the helloworld application on the East cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1

Wait for the sleep application on the East cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep
Deploy the sample applications on the West cluster:
Create a sample application namespace on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" get project sample || oc --context="${CTX_CLUSTER2}" new-project sample

Label the application namespace to support sidecar injection by running the following command:

$ oc --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled

Deploy the helloworld application:

Create the helloworld service by running the following command:

$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample

Create the helloworld-v2 deployment by running the following command:

$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l version=v2 -n sample

Deploy the sleep application by running the following command:

$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample

Wait for the helloworld application on the West cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2

Wait for the sleep application on the West cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep
Verification
For the East cluster, send 10 requests to the helloworld service by running the following command:

$ for i in {0..9}; do \
    oc --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
  done

Verify that you see responses from both clusters, meaning both version 1 and version 2 of the service appear in the responses.
For the West cluster, send 10 requests to the helloworld service by running the following command:

$ for i in {0..9}; do \
    oc --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
  done

Verify that you see responses from both clusters, meaning both version 1 and version 2 of the service appear in the responses.
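Rather than scanning the responses by eye, you can count the distinct versions in the loop output. The following sketch uses canned responses in the standard helloworld format (`Hello version: v1, instance: ...`) in place of the live curl calls, so only the counting logic is shown; the pod names are hypothetical.

```shell
# Canned responses standing in for the output of the live curl loop;
# in a real check, capture the output of the `for i in {0..9}` loop instead.
responses='Hello version: v1, instance: helloworld-v1-77489ccb5f-lt9x9
Hello version: v2, instance: helloworld-v2-5b46bc9f84-bl556
Hello version: v1, instance: helloworld-v1-77489ccb5f-lt9x9'

# Extract the version field and count the distinct values; cross-cluster
# load balancing is working when both v1 and v2 appear (count of 2).
distinct=$(printf '%s\n' "$responses" \
  | sed -n 's/.*version: \(v[0-9]*\).*/\1/p' \
  | sort -u | wc -l | tr -d ' ')
echo "distinct versions: $distinct"
```

If the count stays at 1, requests are not crossing the cluster boundary, which usually points at the East-West gateway or the remote secrets.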
6.3.2. Removing a multi-cluster topology from a development environment
After experimenting with the multi-cluster functionality in a development environment, remove the multi-cluster topology from all the clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed a multi-cluster topology.
Procedure
Remove Istio and the sample applications from the East cluster of the development environment by running the following command:

$ oc --context="${CTX_CLUSTER1}" delete istio/default ns/istio-system ns/sample ns/istio-cni

Remove Istio and the sample applications from the West cluster of the development environment by running the following command:

$ oc --context="${CTX_CLUSTER2}" delete istio/default ns/istio-system ns/sample ns/istio-cni
6.4. Installing a primary-remote multi-network mesh
Install Istio in a primary-remote multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster. The East cluster is the primary cluster and the West cluster is the remote cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the computer you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:

$ export ISTIO_VERSION=1.24.3
Install Istio on the East cluster:
Set the default network for the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1

Create an Istio resource on the East cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      externalIstiod: true
EOF

Setting spec.values.global.externalIstiod to true enables the control plane installed on the East cluster to serve as an external control plane for other remote clusters.

Wait for the control plane to return the Ready status condition by running the following command:

$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the East cluster by running the following command:

$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml

Expose the control plane through the gateway so that services in the West cluster can access it by running the following command:

$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-istiod.yaml

Expose the application services through the gateway by running the following command:

$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Save the IP address of the East-West gateway running in the East cluster by running the following command:
$ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
    -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Create an Istio resource on the West cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network2
    global:
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF

Annotate the istio-system namespace in the West cluster so that the East cluster's control plane manages it by running the following command:

$ oc --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1

Set the default network for the West cluster by running the following command:

$ oc --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2

Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:

$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    oc --context="${CTX_CLUSTER1}" apply -f -

Wait for the Istio resource to return the Ready status condition by running the following command:

$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the West cluster by running the following command:

$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml

Note: Because you installed the West cluster with the remote profile, exposing the application services on the East cluster also exposes them on the East-West gateways of both clusters.
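On some platforms the load balancer publishes a DNS hostname rather than an IP address, so the `.ip` jsonpath used to save the gateway address can come back empty. The following sketch shows a fallback that prefers the `ip` field of the Service's load-balancer ingress status and falls back to `hostname`; it runs against a canned status fragment here (the hostname is a placeholder), with the live `oc` command noted in the comments.

```shell
# Canned ingress status standing in for live output; against a real cluster use:
#   oc --context="${CTX_CLUSTER1}" -n istio-system get svc istio-eastwestgateway \
#     -o jsonpath='{.status.loadBalancer.ingress[0]}'
ingress_json='{"hostname":"a1b2c3.elb.example.com"}'

# Prefer the ip field; fall back to hostname when the platform's
# load balancer publishes a DNS name instead of an IP address.
ip=$(printf '%s' "$ingress_json" | sed -n 's/.*"ip":"\([^"]*\)".*/\1/p')
host=$(printf '%s' "$ingress_json" | sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p')
DISCOVERY_ADDRESS=${ip:-$host}
echo "DISCOVERY_ADDRESS=${DISCOVERY_ADDRESS}"
```

Whichever form you obtain, export it as DISCOVERY_ADDRESS before creating the remote Istio resource.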
6.5. Installing Kiali in a multi-cluster mesh
Install Kiali in a multi-cluster mesh configuration on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the latest Kiali Operator on each cluster.
- You have Istio installed in a multi-cluster configuration on each cluster.
- You have istioctl installed on the computer you use to run these instructions.
- You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have configured a metrics store so that Kiali can query metrics from all the clusters. Kiali queries metrics and traces from these endpoints.
Procedure
Install Kiali on the East cluster:
Create a YAML file named kiali.yaml that creates a namespace for the Kiali deployment. You can see the following example configuration for reference:

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default
  external_services:
    prometheus:
      auth:
        type: bearer
        use_kiali_token: true
      thanos_proxy:
        enabled: true
        url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091

Note: The endpoint for this example uses OpenShift Monitoring to configure metrics. For more information, see "Configuring OpenShift Monitoring with Kiali".
Apply the YAML file on the East cluster by running the following command:

$ oc --context cluster1 apply -f kiali.yaml

Ensure that the Kiali custom resource (CR) is ready by running the following command:

$ oc wait --context cluster1 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m

You should see output similar to the following example:

kiali.kiali.io/kiali condition met

Display your Kiali Route hostname by running the following command:

$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'

You should see output similar to the following example:

kiali-istio-system.apps.example.com

Create a Kiali CR on the West cluster in a file named kiali-remote.yaml.
You can see the following example configuration for reference:
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default
  auth:
    openshift:
      redirect_uris:
        # Replace kiali-route-hostname with the hostname from the previous step.
        - "https://{kiali-route-hostname}/api/auth/callback/cluster2"
  deployment:
    remote_cluster_resources_only: true

The Kiali Operator creates the resources necessary for the Kiali server on the East cluster to connect to the West cluster. The Kiali server is not installed on the West cluster.
Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-remote.yaml
Ensure that the Kiali CR is ready by running the following command:
$ oc wait --context cluster2 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m
Create a remote cluster secret so that the Kiali installation in the East cluster can access the West cluster.
Create a long-lived API token bound to the kiali-service-account service account in the West cluster. Kiali uses this token to authenticate to the West cluster.
You can see the following example configuration for reference:
apiVersion: v1
kind: Secret
metadata:
  name: "kiali-service-account"
  namespace: "istio-system"
  annotations:
    kubernetes.io/service-account.name: "kiali-service-account"
type: kubernetes.io/service-account-token

Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-svc-account-token.yaml
Create a kubeconfig file and save it as a secret in the namespace on the East cluster where the Kiali deployment is present.

To simplify this process, use the kiali-prepare-remote-cluster.sh script to generate the kubeconfig file. Download the script by running the following curl command:

$ curl -L -o kiali-prepare-remote-cluster.sh https://raw.githubusercontent.com/kiali/kiali/master/hack/istio/multicluster/kiali-prepare-remote-cluster.sh
Make the script executable by running the following command:

$ chmod +x kiali-prepare-remote-cluster.sh
Run the script, passing the East and West cluster contexts so that it generates the kubeconfig secret, by running the following command:

$ ./kiali-prepare-remote-cluster.sh --kiali-cluster-context cluster1 --remote-cluster-context cluster2 --view-only false --kiali-resource-name kiali-service-account --remote-cluster-namespace istio-system --process-kiali-secret true --process-remote-resources false --remote-cluster-name cluster2
Note: Use the --help option to display additional details about how to use the script.
Trigger the reconciliation loop so that the Kiali Operator registers the remote cluster secret in the Kiali CR by running the following command:
$ oc --context cluster1 annotate kiali kiali -n istio-system --overwrite kiali.io/reconcile="$(date)"
Wait for the Kiali resource to become ready by running the following command:

$ oc --context cluster1 wait --for=condition=Successful --timeout=2m kialis/kiali -n istio-system

Wait for the Kiali server to become ready by running the following command:

$ oc --context cluster1 rollout status deployments/kiali -n istio-system
Log in to Kiali.

When you first access Kiali, log in to the cluster that has the Kiali deployment. In this example, access the East cluster.

Display the hostname of the Kiali route by running the following command:

$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'

Navigate to the Kiali URL in your browser: https://<your-kiali-route-hostname>.
Log in to the West cluster through Kiali.
To see other clusters in the Kiali UI, you must first log in to those clusters through Kiali.
- Click the user profile dropdown in the upper-right menu.
- Select Login to West. The OpenShift login page appears and prompts you for your West cluster credentials.
Verify that Kiali shows information from both clusters.
- Click Overview and verify that you can see namespaces from both clusters.
- Click Navigate and verify that you see both clusters on the mesh graph.
6.6. Additional resources
Chapter 7. Deploying multiple service meshes on a single cluster
You can use the Red Hat OpenShift Service Mesh to operate many service meshes in a single cluster, with each mesh managed by a separate control plane. Using discovery selectors and revisions prevents conflicts between control planes.
7.1. About deploying multiple control planes
You can configure a cluster to host multiple control planes by deploying unique Istio resources in separate namespaces and using revision labels to manage sidecar injection for specific workloads.
Each Istio resource must also configure discovery selectors to specify which namespaces the Istio control plane observes. Only namespaces with labels that match the configured discovery selectors can join the mesh. Additionally, discovery selectors determine which control plane creates the istio-ca-root-cert config map in each namespace; this config map distributes the root certificate that services use to encrypt traffic with mutual TLS within each mesh.
When adding an additional Istio control plane to a cluster with an existing control plane, ensure that the existing Istio instance has discovery selectors configured to avoid overlapping with the new control plane.
All control planes in a cluster share a single IstioCNI resource, which you must update independently of the individual control planes.
7.2. Using multiple control planes on a single cluster
You can use discovery selectors to limit the visibility of an Istio control plane to specific namespaces in a cluster.
By combining discovery selectors with control plane revisions, you can deploy multiple control planes in a single cluster, ensuring that each control plane manages only its assigned namespaces. This approach avoids conflicts between control planes and enables soft multi-tenancy for service meshes.
7.2.1. Deploying the first control plane
You deploy the first control plane by creating its assigned namespace.
Prerequisites
- You have installed the OpenShift Service Mesh operator.
- You have created an Istio Container Network Interface (CNI) resource.

Note: You can run the following command to check for existing Istio instances:

$ oc get istios
- You have installed the istioctl binary on your local machine.
You can extend this approach to more than two control planes. The maximum number of service meshes in a single cluster depends on the available cluster resources.
Procedure
Create the namespace for the first Istio control plane, called istio-system-1, by running the following command:

$ oc new-project istio-system-1
Label the first namespace, which the Istio discoverySelectors field uses, by running the following command:

$ oc label namespace istio-system-1 istio-discovery=mesh-1
Create a YAML file named istio-1.yaml with the name mesh-1 and the discovery selector label mesh-1, similar to the following example:

kind: Istio
apiVersion: sailoperator.io/v1
metadata:
  name: mesh-1
spec:
  namespace: istio-system-1
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: mesh-1
# ...

Create the first Istio resource by running the following command:

$ oc apply -f istio-1.yaml
To prevent workloads in mesh-1 from exchanging unencrypted traffic with other meshes, deploy a PeerAuthentication resource that enforces mutual TLS (mTLS) within the mesh-1 data plane. Apply the PeerAuthentication resource in the istio-system-1 namespace from a configuration file, such as peer-auth-1.yaml, by running the following command:

$ oc apply -f peer-auth-1.yaml

You can see the following example configuration for reference:

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "mesh-1-peerauth"
  namespace: "istio-system-1"
spec:
  mtls:
    mode: STRICT
7.2.2. Deploying the second control plane
After deploying the first control plane, you can deploy the second control plane by creating its assigned namespace.
Procedure
Create a namespace for the second Istio control plane, called istio-system-2, by running the following command:

$ oc new-project istio-system-2
Label the second namespace, which the Istio discoverySelectors field uses, by running the following command:

$ oc label namespace istio-system-2 istio-discovery=mesh-2
Create a YAML file named istio-2.yaml similar to the following example:

kind: Istio
apiVersion: sailoperator.io/v1
metadata:
  name: mesh-2
spec:
  namespace: istio-system-2
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: mesh-2
# ...

Create the second Istio resource by running the following command:

$ oc apply -f istio-2.yaml
Deploy a policy so that workloads in the istio-system-2 namespace accept only mutual TLS traffic by applying a configuration file, such as peer-auth-2.yaml, with the following command:

$ oc apply -f peer-auth-2.yaml

You can see the following example configuration for reference:

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "mesh-2-peerauth"
  namespace: "istio-system-2"
spec:
  mtls:
    mode: STRICT
7.2.3. Verifying multiple control planes
Verify that both Istio control planes deploy and run as expected. You can validate that the istiod pod is successfully running in each Istio system namespace.
Procedure
Verify that the control plane in
istio-system-1manages the workloads by running the following command:$ oc get pods -n istio-system-1
You should see output similar to the following example:
NAME                            READY   STATUS    RESTARTS   AGE
istiod-mesh-1-b69646b6f-kxrwk   1/1     Running   0          4m14s
Verify that the control plane in
istio-system-2manages the workloads by running the following command:$ oc get pods -n istio-system-2
You should see output similar to the following example:
NAME                            READY   STATUS    RESTARTS   AGE
istiod-mesh-2-8666fdfc6-mqp45   1/1     Running   0          118s
7.3. Deploying application workloads in each mesh
To deploy application workloads, assign each workload to a separate namespace.
Procedure
Create an application namespace called app-ns-1 by running the following command:

$ oc create namespace app-ns-1

To ensure the first control plane discovers the namespace, add the istio-discovery=mesh-1 label by running the following command:

$ oc label namespace app-ns-1 istio-discovery=mesh-1

To enable sidecar injection into all the pods by default, while mapping the pods in this namespace to the first control plane, add the istio.io/rev=mesh-1 label to the namespace by running the following command:

$ oc label namespace app-ns-1 istio.io/rev=mesh-1

Optional: You can verify the mesh-1 revision name by running the following command:

$ oc get istiorevisions

Deploy the sleep and httpbin applications by running the following command:

$ oc apply -n app-ns-1 \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml

Wait for the httpbin and sleep pods to run with sidecars injected by running the following command:

$ oc get pods -n app-ns-1
You should see output similar to the following example:
NAME                       READY   STATUS    RESTARTS   AGE
httpbin-7f56dc944b-kpw2x   2/2     Running   0          2m26s
sleep-5577c64d7c-b5wd2     2/2     Running   0          91m
Create a second application namespace called app-ns-2 by running the following command:

$ oc create namespace app-ns-2

Create a third application namespace called app-ns-3 by running the following command:

$ oc create namespace app-ns-3

Add the istio-discovery=mesh-2 label and the istio.io/rev=mesh-2 revision label to both namespaces to match the discovery selector of the second control plane by running the following command:

$ oc label namespace app-ns-2 app-ns-3 istio-discovery=mesh-2 istio.io/rev=mesh-2

Deploy the sleep and httpbin applications to the app-ns-2 namespace by running the following command:

$ oc apply -n app-ns-2 \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml

Deploy the sleep and httpbin applications to the app-ns-3 namespace by running the following command:

$ oc apply -n app-ns-3 \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Optional: Use the following command to wait for a deployment to be available:
$ oc wait deployments -n app-ns-2 --all --for condition=Available
Verification
After deploying the applications, use the istioctl ps command to verify that the correct control plane manages each workload.

Verify that the istio-system-1 control plane manages the workloads by running the following command:

$ istioctl ps -i istio-system-1
You should see output similar to the following example:
NAME                                CLUSTER      CDS            LDS            EDS            RDS            ECDS      ISTIOD                          VERSION
httpbin-7f56dc944b-vwfm5.app-ns-1   Kubernetes   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   IGNORED   istiod-mesh-1-b69646b6f-kxrwk   1.23.0
sleep-5577c64d7c-d675f.app-ns-1     Kubernetes   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   IGNORED   istiod-mesh-1-b69646b6f-kxrwk   1.23.0
Verify that the istio-system-2 control plane manages the workloads by running the following command:

$ istioctl ps -i istio-system-2
You should see output similar to the following example:
NAME                                CLUSTER      CDS              LDS              EDS              RDS              ECDS      ISTIOD                          VERSION
httpbin-7f56dc944b-54gjs.app-ns-3   Kubernetes   SYNCED (3m59s)   SYNCED (3m59s)   SYNCED (3m59s)   SYNCED (3m59s)   IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
httpbin-7f56dc944b-gnh72.app-ns-2   Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
sleep-5577c64d7c-k9mxz.app-ns-2     Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
sleep-5577c64d7c-m9hvm.app-ns-3     Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
Verify that the mesh restricts application connectivity to local workloads:
Send a request from the sleep pod in app-ns-1 to the httpbin service in app-ns-2 to check that the communication fails by running the following command:

$ oc -n app-ns-1 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000
The PeerAuthentication resources created earlier enforce mutual TLS (mTLS) traffic in STRICT mode within each mesh. Each mesh uses its own root certificate, managed by the istio-ca-root-cert config map, which prevents communication between meshes. You should see output that indicates a communication failure, similar to the following example:

HTTP/1.1 503 Service Unavailable
content-length: 95
content-type: text/plain
date: Wed, 16 Oct 2024 12:05:37 GMT
server: envoy
Confirm that communication within a single mesh works by sending a request from the sleep pod in the app-ns-2 namespace to the httpbin service in the app-ns-3 namespace, both of which mesh-2 manages, by running the following command:

$ oc -n app-ns-2 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000
You should see output similar to the following example:
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
content-type: text/html; charset=utf-8
date: Wed, 16 Oct 2024 12:06:30 GMT
x-envoy-upstream-service-time: 8
server: envoy
transfer-encoding: chunked
7.4. Additional resources
Chapter 8. External control plane topology
You can use the external control plane topology to isolate the control plane from the data plane on separate clusters.
8.1. About external control plane topology
The external control plane topology improves security and offers the ability to host the Service Mesh as a service. In this configuration, one cluster hosts and manages the Istio control plane, while other clusters host the applications.
8.1.1. Installing the control plane and data plane on separate clusters
Install Istio on a control plane cluster and a separate data plane cluster. This installation approach provides increased security.
You can adapt these instructions for a mesh spanning more than one data plane cluster. You can also adapt these instructions for multiple meshes with multiple control planes on the same control plane cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on the control plane cluster and the data plane cluster.
- You have installed istioctl on the workstation that you use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install on all the clusters by running the following command:

$ export ISTIO_VERSION=1.24.3
Create a REMOTE_CLUSTER_NAME environment variable that defines the name of the cluster by running the following command:

$ export REMOTE_CLUSTER_NAME=cluster1
Set up the environment variable that contains the oc command context for the control plane cluster by running the following command:

$ export CTX_CONTROL_PLANE_CLUSTER=<context_name_of_the_control_plane_cluster>
Set up the environment variable that contains the oc command context for the data plane cluster by running the following command:

$ export CTX_DATA_PLANE_CLUSTER=<context_name_of_the_data_plane_cluster>
Set up the ingress gateway for the control plane:
Create a project called istio-system by running the following command:

$ oc get project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}"

Create an Istio resource on the control plane cluster to manage the ingress gateway by running the following command:

$ cat <<EOF | oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      network: network1
EOF

Create the ingress gateway for the control plane by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/controlplane-gateway.yaml

Get the assigned IP address for the ingress gateway by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Store the IP address of the ingress gateway in an environment variable by running the following command:
$ export EXTERNAL_ISTIOD_ADDR=$(oc -n istio-system --context="${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Install Istio on the data plane cluster:
Create a project called external-istiod on the data plane cluster by running the following command:

$ oc get project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}"

Create an Istio resource on the data plane cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_DATA_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: external-istiod
spec:
  version: v${ISTIO_VERSION}
  namespace: external-istiod
  profile: remote
  values:
    defaultRevision: external-istiod
    global:
      remotePilotAddress: ${EXTERNAL_ISTIOD_ADDR}
      configCluster: true
    pilot:
      configMap: true
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network1
EOF

- spec.values.global.configCluster: true identifies the data plane cluster as the source of the mesh configuration.
Create a project called istio-cni on the data plane cluster by running the following command:

$ oc get project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}"

Create an IstioCNI resource on the data plane cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_DATA_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-cni
EOF
Set up the external Istio control plane on the control plane cluster:
Create a project called external-istiod on the control plane cluster by running the following command:

$ oc get project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}"

Create a ServiceAccount resource on the control plane cluster by running the following command:

$ oc --context="${CTX_CONTROL_PLANE_CLUSTER}" create serviceaccount istiod-service-account -n external-istiod

Store the API server address for the data plane cluster in an environment variable by running the following command:
$ DATA_PLANE_API_SERVER=https://<hostname_or_IP_address_of_the_API_server_for_the_data_plane_cluster>:6443
Install a remote secret on the control plane cluster that provides access to the API server on the data plane cluster by running the following command:
$ istioctl create-remote-secret \
  --context="${CTX_DATA_PLANE_CLUSTER}" \
  --type=config \
  --namespace=external-istiod \
  --service-account=istiod-external-istiod \
  --create-service-account=false \
  --server="${DATA_PLANE_API_SERVER}" | \
  oc --context="${CTX_CONTROL_PLANE_CLUSTER}" apply -f -

Create an
Istio resource on the control plane cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: external-istiod
spec:
  version: v${ISTIO_VERSION}
  namespace: external-istiod
  profile: empty
  values:
    meshConfig:
      rootNamespace: external-istiod
      defaultConfig:
        discoveryAddress: $EXTERNAL_ISTIOD_ADDR:15012
    pilot:
      enabled: true
      volumes:
        - name: config-volume
          configMap:
            name: istio-external-istiod
        - name: inject-volume
          configMap:
            name: istio-sidecar-injector-external-istiod
      volumeMounts:
        - name: config-volume
          mountPath: /etc/istio/config
        - name: inject-volume
          mountPath: /var/lib/istio/inject
      env:
        INJECTION_WEBHOOK_CONFIG_NAME: "istio-sidecar-injector-external-istiod-external-istiod"
        VALIDATION_WEBHOOK_CONFIG_NAME: "istio-validator-external-istiod-external-istiod"
        EXTERNAL_ISTIOD: "true"
        LOCAL_CLUSTER_SECRET_WATCHER: "true"
        CLUSTER_ID: cluster2
        SHARED_MESH_CONFIG: istio
    global:
      caAddress: $EXTERNAL_ISTIOD_ADDR:15012
      configValidation: false
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
EOF

Create Gateway and VirtualService resources so that the sidecar proxies on the data plane cluster can access the control plane by running the following command:

$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: external-istiod-gw
  namespace: external-istiod
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 15012
        protocol: tls
        name: tls-XDS
      tls:
        mode: PASSTHROUGH
      hosts:
        - "*"
    - port:
        number: 15017
        protocol: tls
        name: tls-WEBHOOK
      tls:
        mode: PASSTHROUGH
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: external-istiod-vs
  namespace: external-istiod
spec:
  hosts:
    - "*"
  gateways:
    - external-istiod-gw
  tls:
    - match:
        - port: 15012
          sniHosts:
            - "*"
      route:
        - destination:
            host: istiod-external-istiod.external-istiod.svc.cluster.local
            port:
              number: 15012
    - match:
        - port: 15017
          sniHosts:
            - "*"
      route:
        - destination:
            host: istiod-external-istiod.external-istiod.svc.cluster.local
            port:
              number: 443
EOF

Wait for the external-istiod Istio resource on the control plane cluster to return the "Ready" status condition by running the following command:

$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m

Wait for the Istio resource on the data plane cluster to return the "Ready" status condition by running the following command:

$ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m

Wait for the IstioCNI resource on the data plane cluster to return the "Ready" status condition by running the following command:

$ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istiocni/default --timeout=3m
Verification
Deploy sample applications on the data plane cluster:
Create a namespace for sample applications on the data plane cluster by running the following command:

$ oc --context "${CTX_DATA_PLANE_CLUSTER}" get project sample || oc --context="${CTX_DATA_PLANE_CLUSTER}" new-project sample

Label the namespace for the sample applications to support sidecar injection by running the following command:

$ oc --context="${CTX_DATA_PLANE_CLUSTER}" label namespace sample istio.io/rev=external-istiod

Deploy the helloworld application:

Create the helloworld service by running the following command:

$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample

Create the helloworld-v1 deployment by running the following command:

$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
  -l version=v1 -n sample
Deploy the sleep application by running the following command:

$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/sleep/sleep.yaml -n sample

Verify that the pods in the sample namespace have a sidecar injected by running the following command:

$ oc --context="${CTX_DATA_PLANE_CLUSTER}" get pods -n sample

The terminal should return 2/2 in the READY column for each pod in the sample namespace.

Example output:

NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-6d65866976-jb6qc   2/2     Running   0          1m
sleep-5fcd8fd6c8-mg8n2           2/2     Running   0          1m
Verify that internal traffic can reach the applications on the cluster:
Verify that a request can be sent to the helloworld application through the sleep application by running the following command:

$ oc exec --context="${CTX_DATA_PLANE_CLUSTER}" -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello

The terminal should return a response from the helloworld application.

Example output:

Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc
Install an ingress gateway to expose the sample application to external clients:
Create the ingress gateway by running the following command:

$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/refs/heads/main/chart/samples/ingress-gateway.yaml -n sample

Confirm that the ingress gateway is running by running the following command:

$ oc get pod -l app=istio-ingressgateway -n sample --context="${CTX_DATA_PLANE_CLUSTER}"

The terminal should return output confirming that the gateway is running.

Example output:

NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-7bcd5c6bbd-kmtl4   1/1     Running   0          8m4s
Expose the helloworld application through the ingress gateway by running the following command:

$ oc apply -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld-gateway.yaml -n sample --context="${CTX_DATA_PLANE_CLUSTER}"

Set the gateway URL environment variable by running the following command:

$ export INGRESS_HOST=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'); \
export INGRESS_PORT=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'); \
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
Verify that external traffic can reach the applications on the mesh:
Confirm that the helloworld application is accessible through the gateway by running the following command:

$ curl -s "http://${GATEWAY_URL}/hello"

The helloworld application should return a response.

Example output:

Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc
Chapter 9. Istioctl tool
Use the istioctl command-line utility to perform diagnostic and debugging tasks for OpenShift Service Mesh 3 components.
9.1. Support for Istioctl
OpenShift Service Mesh 3 supports a selection of Istioctl commands.
- Supported Istioctl commands

| Command | Description |
|---|---|
| admin | Manage the control plane (istiod) configuration |
| analyze | Analyze the Istio configuration and print validation messages |
| completion | Generate the autocompletion script for the specified shell |
| create-remote-secret | Create a secret with credentials to allow Istio to access remote Kubernetes API servers |
| help | Display help about any command |
| proxy-config | Retrieve information about the proxy configuration from Envoy (Kubernetes only) |
| proxy-status | Retrieve the synchronization status of each Envoy in the mesh |
| remote-clusters | List the remote clusters each istiod instance is connected to |
| validate | Validate the Istio policy and rules files |
| version | Print the build version information |
| waypoint | Manage the waypoint configuration |
| ztunnel-config | Update or retrieve the current Ztunnel configuration |
9.2. Installing the Istioctl tool
Install the istioctl command-line utility to debug and diagnose Istio service mesh deployments.
Prerequisites
- You have access to the OpenShift Container Platform web console.
- You have installed the OpenShift Service Mesh 3 Operator.
-
You have created at least one
Istioresource.
Procedure
Confirm which version of the Istio resource runs on the installation by running the following command:

$ oc get istio -ojsonpath="{range .items[*]}{.spec.version}{'\n'}{end}" | sed s/^v// | sort

If there are multiple Istio resources with different versions, select the latest version. The latest version is displayed last.

- In the OpenShift Container Platform web console, click the Help icon and select Command Line Tools.
- Click Download istioctl. Choose the version and architecture that matches your system.
Extract the istioctl binary file:

If you are using a Linux operating system, run the following command:

$ tar xzf istioctl-<VERSION>-<OS>-<ARCH>.tar.gz
- If you are using an Apple Mac operating system, unpack and extract the archive.
- If you are using a Microsoft Windows operating system, use the zip software to extract the archive.
Move to the uncompressed directory by running the following command:
$ cd istioctl-<VERSION>-<OS>-<ARCH>
Add the istioctl client to the path by running the following command:

$ export PATH=$PWD:$PATH
Confirm that the istioctl client version and the Istio control plane version match, or are within one version of each other, by running the following command:

$ istioctl version
You should see output similar to the following example:
client version: 1.20.0
control plane version: 1.24.3_ossm
data plane version: none
Chapter 10. Enabling mutual Transport Layer Security
You can use Red Hat OpenShift Service Mesh to customize the communication security between the microservices that make up your application. Mutual Transport Layer Security (mTLS) is a protocol that enables two parties to authenticate each other.
10.1. About mutual Transport Layer Security (mTLS)
In OpenShift Service Mesh 3, you use the Istio resource instead of the ServiceMeshControlPlane resource to configure mTLS settings.
In OpenShift Service Mesh 3, you configure STRICT mTLS mode by using the PeerAuthentication and DestinationRule resources. You set TLS protocol versions through Istio Workload Minimum TLS Version Configuration.
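As an illustration of the workload minimum TLS version setting, the following sketch raises the minimum version mesh-wide on the Istio resource. The meshConfig.meshMTLS.minProtocolVersion field and the TLSV1_3 value are assumptions based on the upstream Istio mesh configuration API, not settings confirmed by this document:

```yaml
# Hypothetical sketch: raise the minimum TLS version for workload-to-workload
# mTLS traffic. Field names follow the upstream Istio MeshConfig API.
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      meshMTLS:
        minProtocolVersion: TLSV1_3  # sidecars reject TLS 1.2 and lower
```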
Review the following Istio resources and concepts to configure mTLS settings properly:
PeerAuthentication
- Defines the type of mTLS traffic a sidecar accepts. PERMISSIVE mode allows both plain text and mTLS traffic. STRICT mode requires mTLS for all incoming traffic.

DestinationRule
- Configures the type of TLS traffic a sidecar sends. In DISABLE mode, the sidecar sends plain text. In SIMPLE, MUTUAL, and ISTIO_MUTUAL modes, the sidecar establishes a TLS connection.

Auto mTLS
- Ensures the mesh uses mTLS by default to encrypt all inter-mesh traffic, regardless of the PeerAuthentication mode configuration. The enableAutoMtls global mesh configuration field controls auto mTLS, which OpenShift Service Mesh 2 and 3 enable by default. The mTLS setting operates entirely between sidecar proxies, requiring no changes to application or service code.
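If you need to turn auto mTLS off, for example to manage TLS settings exclusively through DestinationRule resources, the following sketch sets the enableAutoMtls field on an Istio resource. Treat this as a minimal illustration of the field, not a recommended configuration:

```yaml
# Hypothetical sketch: disable auto mTLS so that DestinationRule resources
# fully control whether sidecars send mTLS or plain-text traffic.
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      enableAutoMtls: false
```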
By default, PeerAuthentication uses PERMISSIVE mode, allowing sidecars in the Service Mesh to accept both plain text and mTLS-encrypted traffic.
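Because PeerAuthentication policies can also target individual workloads, the following sketch scopes PERMISSIVE mode to a single app by using a selector. The example-app label is a hypothetical placeholder:

```yaml
# Hypothetical sketch: keep PERMISSIVE mode for one workload while the rest
# of the namespace can be moved to STRICT mode.
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: example-app-permissive
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      app: example-app  # placeholder workload label
  mtls:
    mode: PERMISSIVE
```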
10.2. Enabling strict mTLS mode by using the namespace
You can restrict workloads to accept only encrypted mTLS traffic by enabling STRICT mode in a PeerAuthentication resource.
You can see the following example configuration for reference:
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: <namespace>
spec:
  mtls:
    mode: STRICT
If you disable auto mTLS and apply STRICT mode to PeerAuthentication, you must create a DestinationRule resource with MUTUAL or ISTIO_MUTUAL mode to enable mTLS for all destination hosts in the <namespace>.
You can see the following example configuration for reference:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: enable-mtls
  namespace: <namespace>
spec:
  host: "*.<namespace>.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

10.3. Enabling strict mTLS across the whole service mesh
You can configure mTLS across the entire mesh by applying the PeerAuthentication policy to the istiod namespace, such as istio-system. The istiod namespace name must match the spec.namespace field of your Istio resource.
You can see the following example configuration for reference:
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
Additionally, create a DestinationRule resource to disable mTLS for communication with the API server, as it does not have a sidecar. Apply similar DestinationRule configurations for other services without sidecars.
You can see the following example configuration for reference:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: api-server
  namespace: istio-system
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE

10.4. Validating encryption with Kiali
The Kiali console offers several ways to validate whether your applications, services, and workloads have mutual Transport Layer Security (mTLS) encryption enabled.
The Services Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. Kiali also displays a lock icon in the Network section next to ports that use mTLS configuration.