About

Red Hat OpenShift Service Mesh 3.2

About OpenShift Service Mesh

Red Hat OpenShift Documentation Team

Abstract

This document provides an overview of OpenShift Service Mesh features.

Chapter 1. About OpenShift Service Mesh

You can use Red Hat OpenShift Service Mesh to manage the connectivity, security, and observability of microservices. Based on the "Istio project", OpenShift Service Mesh provides a centralized control point in your application.

1.1. Introduction to Red Hat OpenShift Service Mesh

Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the application code.

The mesh introduces an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication.

Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can change, redirect, or create new requests to other services.

1.2. Core features

Red Hat OpenShift Service Mesh provides several key capabilities uniformly across a network of services:

  • Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions.
  • Service Identity and Security - Provide services in the mesh with a verifiable identity, and protect service traffic as it flows over networks of varying degrees of trustworthiness.
  • Policy Enforcement - Apply organizational policy to the interaction between services, and ensure that access policies are enforced and resources are fairly distributed among consumers. You make policy changes by configuring the mesh, not by changing application code.
  • Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.

1.3. Additional resources

Chapter 2. Understanding OpenShift Service Mesh

You can use Red Hat OpenShift Service Mesh to connect, secure, and monitor microservices in your Red Hat OpenShift Service Mesh environment. Core resources, Kiali integrations, and observability components form the service mesh ecosystem.

2.1. Red Hat OpenShift Service Mesh resources

Use Red Hat OpenShift Service Mesh resources to create, customize, and manage control planes while enabling workload connectivity and revision-based canary update strategies across the mesh.

The following two parts constitute Red Hat OpenShift Service Mesh:

  • Red Hat OpenShift Service Mesh resources
  • Kiali provided by Red Hat

The following three parts constitute Kiali:

  • Kiali Operator provided by Red Hat
  • Kiali Server
  • OpenShift Service Mesh Console (OSSMC) plugin

OpenShift Service Mesh integrates with the following:

  • Observability components such as:

    • OpenShift Monitoring
    • Red Hat OpenShift distributed tracing platform
    • Red Hat OpenShift distributed tracing data collection Operator
  • cert-manager
  • Argo rollouts

The Red Hat OpenShift Service Mesh Operator manages the lifecycle of your Istio control planes. Instead of creating a new configuration schema, the OpenShift Service Mesh Operator APIs are built around Istio’s Helm chart APIs.

Note
  • Though the Red Hat OpenShift Service Mesh APIs are built around Istio’s Helm chart APIs, installing or managing Istio directly by using Helm charts is not supported.
  • The Red Hat OpenShift Service Mesh Custom Resource Definition (CRD) values fields expose all installation and configuration options found in Istio’s Helm charts.

2.1.1. Istio resource

The Istio resource manages your Istio control planes. It is a cluster-wide resource, because the Istio control plane operates in and requires access to the entire cluster.

To select a namespace to run the control plane pods in, you can use the spec.namespace field.

Note

The spec.namespace field is immutable: to move a control plane to another namespace, you must remove the Istio resource and re-create it with a different spec.namespace.

You can access all Istio custom resource definition (CRD) options through spec.values fields similar to the following example:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.22.3
  namespace: istio-system
  updateStrategy:
    type: InPlace
  values:
    pilot:
      resources:
        requests:
          cpu: 100m
          memory: 1024Mi

You can run the following command to see all the customization options:

$ oc explain istios.spec.values

OpenShift Service Mesh supports multiple Istio versions so that you can perform canary updates of the control plane. You can set the version field to the new version either by specifying the full version or by using the v<x>.<y>-latest alias, which automatically selects the latest patch release for a specific minor version. For example, setting v1.23-latest ensures that the Operator maintains the latest release of Istio 1.23.
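
For example, the following Istio resource uses the version alias to pin the control plane to the latest available Istio 1.23 patch release:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.23-latest
  namespace: istio-system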

OpenShift Service Mesh supports two different update strategies for your control planes:

InPlace
The OpenShift Service Mesh Operator immediately replaces your existing control plane resources with the ones for the new version.
RevisionBased
Uses Istio’s canary update mechanism by creating a second control plane to which you can migrate your workloads to complete the update.

After creating an Istio resource, OpenShift Service Mesh generates a revision name for the resource based on the updateStrategy, and creates a corresponding IstioRevision.
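
For example, to use the canary update mechanism, you can declare the control plane with the RevisionBased strategy; compared with the earlier InPlace example, only the updateStrategy type changes:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.22.3
  namespace: istio-system
  updateStrategy:
    type: RevisionBased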

2.1.2. IstioRevision resource

The IstioRevision is a cluster-wide resource and the lowest-level API OpenShift Service Mesh provides. It is usually not created by the user, but by the Operator itself. Its schema closely resembles that of the Istio resource. Instead of representing the state of a control plane you want to be present in your cluster, it represents a revision of that control plane.

A control plane revision is an instance of Istio with a specific version and revision name. You can use the revision name to add workloads or entire namespaces to the mesh, for example by using the istio.io/rev=<REVISION_NAME> label.
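
For example, you can run a command similar to the following to add a namespace to the mesh by revision name; the namespace and revision name shown here are placeholders:

$ oc label namespace <namespace> istio.io/rev=<REVISION_NAME>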

You can think of the relationship between the Istio and IstioRevision resources as similar to the relationship between a Kubernetes replica set and its pods. Creating a replica set results in the automatic creation of pods, which in turn triggers the instantiation of your containers.

Similarly, users create an Istio resource that instructs the OpenShift Service Mesh Operator to create a matching IstioRevision resource, which in turn triggers the creation of the Istio control plane. To do that, the OpenShift Service Mesh Operator copies all of your relevant configuration from the Istio resource to the IstioRevision resource.

2.1.3. IstioRevisionTag resource

The IstioRevisionTag resource represents a stable revision tag that functions as an alias for Istio control plane revisions. With the stable tag, prod, you can use the label istio.io/rev=prod to inject proxies into your workloads. When you perform an upgrade to a control plane with a new revision name, you can update your tag to point to the new revision instead of having to relabel your workloads and namespaces. For more information, see "Stable revision labels".

With the OpenShift Service Mesh Operator, the IstioRevisionTag resource can reference not only an IstioRevision resource but also an Istio resource. When you reference an Istio resource, after you update your control plane, the underlying IstioRevision resource changes, and the OpenShift Service Mesh Operator automatically updates your revision tag. You only need to restart your deployments to re-inject the new proxies.

The IstioRevisionTag resource has one field in its spec: targetRef, which can reference an Istio or IstioRevision resource. After deploying the IstioRevisionTag, you can use both the istio.io/rev=default and istio-injection=enabled labels to inject proxies into your workloads.

Important

You can only use the istio-injection label for revisions and revision tags that have the name default, such as the IstioRevisionTag resource in the following example:

apiVersion: sailoperator.io/v1
kind: IstioRevisionTag
metadata:
  name: default
spec:
  targetRef:
    kind: Istio
    name: prod
  • spec.targetRef.kind: The kind of resource the tag references. The value can be either Istio or IstioRevision.
  • spec.targetRef.name: The name of the resource the tag references. The value can be either the name of an Istio or IstioRevision resource.

2.1.4. IstioCNI resource

The OpenShift Service Mesh Operator manages the lifecycle of Istio’s Container Network Interface (CNI) plugin separately. To install Istio’s CNI plugin, you create an IstioCNI resource.

The IstioCNI resource is a cluster-wide resource as it installs a daemon set that operates on all nodes of your cluster. You can select a version by setting the spec.version field, as you can see in the example that follows. To update the CNI plugin, change the version field to the version you want to install. Similar to the Istio resource, it also has a values field that exposes all of the options provided in the istio-cni chart:

apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v1.22.3
  namespace: istio-cni
  values:
    cni:
      cniConfDir: /etc/cni/net.d
      excludeNamespaces:
      - kube-system

2.2. Red Hat OpenShift Service Mesh and Kiali

Kiali derives from the open source Kiali project. For more information, see "Kiali project". The following three parts compose Kiali provided by Red Hat:

  • Kiali Operator provided by Red Hat
  • Kiali Server
  • OpenShift Service Mesh Console (OSSMC) plugin

Working together, they form the user interface (UI) for OpenShift Service Mesh. Kiali provides visibility into your service mesh by showing you the microservices and how they connect to each other.

Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh.

Kiali provides an interactive graph view of your mesh namespaces in near real time that provides visibility into features such as circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, such as applications, services, workloads, and can display the interactions with contextual information and charts on the selected graph node or edge.

Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and so on. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Integrating Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection into the Kiali console provides distributed tracing capabilities.

2.2.1. Kiali architecture

Kiali Server (back end)
This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali Server does not need storage. When deploying the Server to a cluster, you set configurations in config maps and secrets.
Kiali console (front end)
The Kiali console is a web application. The console queries the Kiali Server for data to present it to the user.

In addition, Kiali depends on external services and components provided by the container application platform and Istio.

Red Hat Service Mesh Istio
Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although you can install Kiali and Istio separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which Prometheus and the Red Hat OpenShift Service Mesh cluster API expose.
Prometheus
You can optionally include a dedicated Prometheus instance. When you enable Istio telemetry, Prometheus stores the metrics data. Kiali uses this Prometheus data to build the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali’s features will not work without Prometheus.
OpenShift Container Platform API
Kiali uses the OpenShift Container Platform API to fetch and resolve service mesh configurations. For example, Kiali queries the cluster API to retrieve definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations such as virtual services, destination rules, route rules, gateways, quotas, and so on.
Tracing
Tracing is optional, but when you install Red Hat OpenShift distributed tracing platform and configure Kiali, the Kiali console includes a tab that displays distributed tracing data, and tracing integration on the graph itself. Note that tracing data is not available if you disable Istio’s distributed tracing feature. Also note that users can see tracing data only for namespaces that they have access to.
Grafana
Grafana is optional. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that Grafana is not supported as part of OpenShift Container Platform or OpenShift Service Mesh.

2.2.2. Kiali features

OpenShift Service Mesh integrates the Kiali console and provides the following capabilities:

Health
Quickly identify issues with applications, services, or workloads.
Topology
Visualize how your applications, services, or workloads communicate through the Kiali graph.
Metrics
You can chart service mesh and application performance for Go, Node.js, and other frameworks with predefined metrics dashboards, or create your own custom dashboards.
Tracing
Follow the path of a request through various microservices that make up an application by using Red Hat OpenShift distributed tracing platform (Tempo) integration.
Validations
Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on).
Configuration
Optional ability to create, update, and delete Istio routing configuration by using wizards or directly in the YAML editor in the Kiali Console.

2.2.3. OpenShift Service Mesh Console (OSSMC) plugin

The OpenShift Service Mesh Console (OSSMC) plugin is an OpenShift Container Platform plugin for Red Hat OpenShift Service Mesh. It integrates much of the Kiali interface into the OpenShift Console, adding a Service Mesh main menu option with dedicated screens and integrating Service Mesh tabs throughout the console.

The Kiali Operator provided by Red Hat installs the OSSMC plugin, which requires the Kiali Server component. The OSSMC plugin has its own custom resource (CR) for configuration.
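
A minimal OSSMC plugin configuration resembles the following sketch. The OSSMConsole kind and the kiali.io API group follow the upstream Kiali project's conventions, and the namespace shown here is an assumption; check the CRDs installed by the Operator in your cluster before applying it:

apiVersion: kiali.io/v1alpha1
kind: OSSMConsole
metadata:
  name: ossmconsole
  namespace: openshift-operators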

2.3. Red Hat OpenShift Service Mesh and Observability

Red Hat OpenShift Service Mesh integrates with Red Hat Observability components, including:

OpenShift Monitoring

The Cluster Monitoring Operator (CMO) deploys monitoring stack components by default in every OpenShift Container Platform installation and manages them. These components include Prometheus, Alertmanager, Thanos Querier, and so on. The CMO also deploys the Telemeter Client, which sends a subset of data from platform Prometheus instances to Red Hat to ease Remote Health Monitoring for clusters.

When you have added your application to the mesh, you can monitor the in-cluster health and performance of your applications running on OpenShift Container Platform with metrics and customized alerts for CPU and memory usage, network connectivity, and other resource usage.
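
For example, a PodMonitor similar to the following sketch can scrape Envoy proxy metrics from a mesh namespace through OpenShift Monitoring. The label selector and port name follow common Istio conventions and are assumptions here, not fixed values:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: istio-proxies-monitor
  namespace: istio-system
spec:
  selector:
    matchExpressions:
    - key: istio-prometheus-ignore
      operator: DoesNotExist
  podMetricsEndpoints:
  - path: /stats/prometheus
    interval: 30s
    port: http-envoy-prom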

Red Hat OpenShift distributed tracing platform

Red Hat OpenShift Service Mesh uses Red Hat OpenShift distributed tracing platform to allow developers to view call flows in a microservice application.

Two parts make up the integration of Red Hat OpenShift distributed tracing platform with Red Hat OpenShift Service Mesh: Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection.

Red Hat OpenShift distributed tracing platform (Tempo)

Provides distributed tracing capabilities to monitor and troubleshoot transactions in complex distributed systems. For more information, see "Grafana Tempo".

For more information about distributed tracing platform (Tempo), its features, installation, and configuration, see "Red Hat OpenShift distributed tracing platform (Tempo)".

Red Hat OpenShift distributed tracing data collection

Red Hat OpenShift distributed tracing data collection is based on the open source OpenTelemetry project, which aims to offer unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat OpenShift distributed tracing data collection product provides support for deploying and managing the OpenTelemetry Collector and simplifies workload instrumentation. For more information, see "OpenTelemetry project".

The OpenTelemetry Collector can receive, process, and forward telemetry data in many formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs. For more information, see "OpenTelemetry Collector".

For more information about distributed tracing data collection, its features, installation, and configuration, see: "Red Hat OpenShift distributed tracing data collection".

2.4. Red Hat OpenShift Service Mesh and cert-manager

The cert-manager tool is a solution for X.509 certificate management on Kubernetes. It delivers a unified API to integrate applications with private or public key infrastructure (PKI), such as Vault, Google Cloud Certificate Authority Service, Let’s Encrypt, and other providers.

The cert-manager tool ensures that the certificates are valid and up-to-date by attempting to renew the certificates at a configured time before they expire.

For Istio users, cert-manager also provides integration with istio-csr, which is a certificate authority (CA) server that handles certificate signing requests (CSR) from Istio proxies. The server then delegates signing to cert-manager, which forwards CSRs to the configured CA server.
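
When you delegate certificate signing to istio-csr, the control plane must be pointed at the istio-csr service instead of Istiod's built-in CA. The following is a hedged sketch of the relevant Istio resource values; the caAddress and the ENABLE_CA_SERVER setting assume istio-csr is installed with its default service name in the cert-manager namespace:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    global:
      caAddress: cert-manager-istio-csr.cert-manager.svc:443
    pilot:
      env:
        ENABLE_CA_SERVER: "false"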

2.5. Red Hat OpenShift Service Mesh and Argo Rollouts

Red Hat OpenShift Service Mesh, when used with Argo Rollouts, provides more advanced routing capabilities by using Istio, and does not require the sidecar container configuration.

You can use OpenShift Service Mesh to split traffic between two application versions.

  • Canary version: A new version of an application where you gradually route the traffic.
  • Stable version: The current version of an application. After the canary version is stable and all user traffic is directed to it, it becomes the new stable version. The previous stable version is discarded.

Istio support within Argo Rollouts uses the Gateway and VirtualService resources to handle traffic routing.

  • Gateway: You can use a Gateway to manage inbound and outbound traffic for your mesh. The gateway is the entry point of OpenShift Service Mesh and handles traffic requests sent to an application.
  • VirtualService: VirtualService defines traffic routing rules and the percentage of traffic that goes to underlying services, such as the stable and canary services.
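
For example, a Rollout that delegates traffic splitting to Istio references the VirtualService and one of its routes by name, as in the following sketch; the resource, service, and route names are illustrative, and the pod template and selector are omitted for brevity:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  strategy:
    canary:
      canaryService: canary-service   # Service selecting canary pods
      stableService: stable-service   # Service selecting stable pods
      trafficRouting:
        istio:
          virtualService:
            name: rollouts-demo-vsvc  # VirtualService managed by Argo Rollouts
            routes:
            - primary                 # named route whose weights are adjusted
      steps:
      - setWeight: 20                 # send 20% of traffic to the canary
      - pause: {}                     # wait for manual promotion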

2.6. Additional resources

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.