Installation Guidance: SAP Edge Integration Cell on OpenShift


Preface

About this document

This document provides guidance for Red Hat and SAP customers and partners on installing and managing the SAP Integration Suite Edge Integration Cell on the OpenShift Container Platform.

Audience

This guide is intended for system administrators, solution architects, and technical professionals involved in deploying and managing SAP Integration Suite Edge Integration Cell on OpenShift.

This document does not contain step-by-step details of installation or other tasks, as these are covered in the relevant documentation on http://access.redhat.com/. Links to the appropriate documents are provided where required.

Interactive Resources:

Scripts and playbooks

Any scripts are provided as-is, without any form of support or warranty. Customers may modify the provided scripts at will.

1. Red Hat & SAP - Edge Solutions

What is it? - Solution Components and Definitions

1.1 Edge Lifecycle Management [ELM]

SAP Edge Lifecycle Management (ELM) is the solution used to deploy and register SAP payloads on customers' on-premise Kubernetes platforms, without SAP managing the infrastructure.

For detailed information, refer to the Administration Guide for SAP Edge Lifecycle Management on help.sap.com.

1.2 Edge Integration Cell [EIC]

SAP Edge Integration Cell (EIC) is a hybrid integration runtime that enables businesses to manage APIs and execute integration scenarios within their private landscape, keeping sensitive data on-premise.

For more information, refer to the SAP Edge Integration Cell documentation on help.sap.com.

2. Prerequisites for installing SAP Integration Suite Edge Integration Cell

2.1 About this section

Note: This section is a reworked version of SAP's documentation. Some of the language is kept generic to stay as close to the original as possible; we have added links and content specific to running the Edge Integration Cell on Red Hat OpenShift 4.14 [OCP].

For the most accurate and up-to-date information, please refer to the latest SAP Notes on me.sap.com.

2.2 Synopsis

You plan to install SAP Integration Suite Edge Integration Cell on OpenShift Container Platform. This may be a fresh installation or an addition to an existing deployment.

Note: For your landscape variant, make sure you have deployed the appropriate Kubernetes and storage infrastructure.

2.3 Deployment Solution

The following section defines the platforms that Edge Integration Cell [EIC] can run on. EIC is deployed using Edge Lifecycle Management [ELM], which provides the delivery and management channel for EIC and, in the future, other potential SAP payloads at the edge.

Edge Integration Cell is released for use on OpenShift, with persistent storage provided by its dynamic volume provisioning.

2.3a Cluster Access Models for ELM Registration

When registering an OpenShift cluster with SAP Edge Lifecycle Management (ELM), you can choose between two security models:

Full Cluster-Admin Access (Default)

The traditional approach where ELM uses a kubeconfig with full cluster-admin privileges to manage the cluster. This is the simplest setup but grants broad permissions.

Restricted Access Model (Supported)

A security-enhanced approach that uses fine-grained Role-Based Access Control (RBAC) instead of full cluster-admin rights. This model:

  • Uses a dedicated Service Account with precisely scoped permissions
  • Requires pre-configured namespaces and Red Hat OpenShift Service Mesh 3.x
  • Is suitable for shared/multi-tenant cluster environments
  • Provides enhanced security isolation
  • Is fully supported by SAP for production deployments
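
As a minimal sketch of what fine-grained RBAC for the restricted model can look like (the service account, namespace, and rule set below are illustrative assumptions, not SAP's authoritative permission list — consult the SAP ELM documentation for the exact requirements):

```yaml
# Illustrative only: a dedicated service account with narrowly scoped
# permissions, instead of a cluster-admin kubeconfig.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elm-registration        # assumed name
  namespace: sap-elm            # assumed pre-created namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elm-deployer            # assumed name
  namespace: sap-elm
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "secrets",
                "deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: elm-deployer-binding
  namespace: sap-elm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: elm-deployer
subjects:
  - kind: ServiceAccount
    name: elm-registration
    namespace: sap-elm
```

The point of the model is that the kubeconfig handed to ELM is bound to this service account's token rather than to a cluster-admin user.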

Important Notes:

In general, newer OpenShift versions are planned to be supported. SAP intends to perform compatibility tests on new OpenShift versions. Support for new versions will be communicated via the corresponding SAP Note on me.sap.com.

As older OpenShift [OCP] versions phase out according to the OpenShift Container Platform Life Cycle Policy, they cannot undergo verification for compatibility with newer patches of Edge Integration Cell. Consequently, the minimum required OpenShift version rises over time. The goal is to support OpenShift versions in alignment with the OpenShift Extended Update Support (EUS) lifecycle and beyond. Initially, we validated even-numbered versions, such as 4.14 and 4.16, offering extended support and a streamlined validation process with SAP ELM EIC teams. Starting from OpenShift 4.18, we will validate all major OpenShift releases, including odd-numbered versions. This expanded cadence allows us to better prepare for upgrades, incorporate customer feedback more effectively, and enhance the stability and reliability of deployments across all supported versions.

Note: Currently, only container images for the linux/amd64 operating system architecture are supported. It is recommended to start new deployments on higher OpenShift versions, as new versions are frequently released.

OCP best architectural practices can be found in the documentation here.

2.4 Sizing Information

Minimum CPU / Memory requirements for High Availability (HA) and non-HA (agent / worker nodes)

Availability Mode | CPU / Memory    | Persistent Volumes
non-HA            | 8 CPU / 32 GiB  | 101 GiB
HA                | 16 CPU / 64 GiB | 204 GiB

Please note that the CPU and memory requirements listed above represent the total resources needed for the entire EIC application, not for individual worker nodes.

The Reference Architectures section in this document provides general guidelines for sizing based on various workloads.

For deployment on OCP, the best practice for sizing and scale can be found here.

Where appropriate, Red Hat supports single-node installation; more information can be found on me.sap.com.

SAP best practices can be found on me.sap.com; please check for the latest guidance, as it is updated constantly. Please also refer to the SAP Integration Suite Sizing Guidelines on help.sap.com.

Maximum number of pods per node

A non-HA setup will use ~40 pods (including kube components), depending on the platform. An HA setup will use around 80 to 90 pods with the minimum configuration.

OpenShift nodes with appropriately sized resources can comfortably accommodate these requirements. For detailed information, please refer to this link.
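
The pod counts above are well within the OpenShift default of 250 pods per node. Should a shared cluster need a higher limit, it can be adjusted with a KubeletConfig targeting a machine config pool; a sketch follows (the value 500 and the pool selector are examples — verify the label on your worker pool before applying):

```yaml
# Example only: raise the per-node pod limit via a KubeletConfig.
# The selector below targets the default worker machine config pool.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 500   # illustrative value; default is 250
```

Applying a KubeletConfig triggers a rolling reboot of the affected nodes, so plan the change outside of productive hours.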

2.5 Networking

Best practices for OCP networking can be found here; configuration will vary depending on client needs and deployment architecture.

In OpenShift, a diverse array of Container Network Interface (CNI) solutions is available, including the default OVN-Kubernetes and OpenShift SDN. Effective planning is important when configuring IP addressing and determining subnet sizes, so that the anticipated number of pods can be accommodated efficiently during OpenShift installation. Comprehensive network configuration parameters are documented for the OpenShift installation process. After installation, the cluster network range can also be expanded if you need more IP addresses for additional nodes.
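
For illustration, the networking stanza of an OpenShift install-config.yaml might look like the following (all CIDRs are example values — size them for your expected pod and node counts):

```yaml
# Example install-config.yaml networking section (values are illustrative).
networking:
  networkType: OVNKubernetes
  clusterNetwork:
    - cidr: 10.128.0.0/14   # pod IP space
      hostPrefix: 23        # each node receives a /23 (~510 pod IPs)
  serviceNetwork:
    - 172.30.0.0/16         # cluster service IPs
  machineNetwork:
    - cidr: 10.0.0.0/16     # subnet the nodes live in
```

With hostPrefix 23, the ~40 to ~90 EIC pods per cluster described above fit comfortably even if co-located on a few nodes.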

To ensure external accessibility to the Istio Ingress Gateway—a core component of the EIC solution—the recommended approach is to configure a LoadBalancer service. For environments like bare-metal clusters, MetalLB provides a robust LoadBalancer solution tailored for infrastructures without cloud-native load balancers, enabling fault-tolerant access to applications via external IP addresses. Alternatively, OpenShift Routes can be used to expose the Ingress Gateway, though this approach has architectural differences and operational restrictions compared to a LoadBalancer service. See Section 3.9 for details on support scope and technical considerations.

MetalLB can be installed through OpenShift OperatorHub. Detailed information on MetalLB can be found here.
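
Once the MetalLB Operator is installed and a MetalLB instance is created, a layer-2 setup amounts to an address pool plus an advertisement; in the sketch below the pool name and IP range are placeholders for your environment:

```yaml
# Example MetalLB layer-2 configuration (address range is a placeholder).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: eic-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.100-192.168.10.110   # IPs MetalLB may assign to LoadBalancer services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: eic-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - eic-pool
```

With this in place, the Istio Ingress Gateway service of type LoadBalancer receives an external IP from the pool automatically.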

The SAP Edge Integration Cell leverages the SAP Cloud Connector to establish a communication channel with the SAP Business Technology Platform (BTP) in the cloud. This configuration requires the Integration Cell to initiate an internet connection, as it serves as a vital link between SAP BTP applications and on-premise systems.

The SAP Cloud Connector operates as a reverse invoke proxy, enabling secure and efficient communication between the on-premise network and SAP BTP. Note that an air-gapped network setup is not applicable in this scenario.

For more detailed information, please refer to the official SAP documentation on the SAP Cloud Connector at help.sap.com.

2.6 SAP Message Service

Message Service supports specific configuration tiers. Tiers define internal configuration settings like maximum connections (concurrent client connections), maximum number of queue messages (references to messages queued for delivery to consumers) or maximum spool size (guaranteed messages are stored in a message spool on a persistent volume). Additionally, it defines the resource requirements for CPU, memory, and disk.

For an HA setup, Message Service runs 3 pods (primary, backup, monitoring) having the same CPU and memory requirements. Primary and backup use the same persistent volume size, whereas the monitoring pod only requires 3 Gi.

Service Tier | Max Connections           | Max Queue Messages [millions] | CPU Limit | Memory Limit [GiB] | Persistent Volume [GiB]
100          | 100                       | 100                           | 2         | 3.4                | 100
250          | 250 (VPN) / 1000 (System) | 100                           | 2         | 6.5                | 100
1K           | 1000                      | 240                           | 2         | 6.5                | 350
10K          | 10000                     | 240                           | 4         | 13.9               | 350

For production setups, it is recommended to consider Message Service Tier 250 or higher. Memory limits need to be considered when choosing K8s agent / worker node instance types.

Please be aware: the Service Tier also defines system limits for the configuration parameters Max Endpoints (Max Queues; the JMS Adapter requires 3 internal queues per JMS queue), Max Egress Flows (consumers), Max Ingress Flows (producers), Max Transacted Sessions (same system limit value as Max Connections), and Max Transactions (5 times the system limit value of Max Connections).

Note: If you require a messaging tier beyond the numbers listed in the table, please contact SAP to explore options for obtaining third-party services, such as Solace, to achieve higher throughput.

2.7 Additional services consumed by Edge Integration Cell

2.7.1 Deployment best practices

Additional Services, external to SAP Edge Integration Cell, are required to maintain state - these services are commonly deployed by many customers on existing OCP clusters.

There are three types of services required:

  1. Database Service - Used for storing local persistent data
    • PostgreSQL (recommended for most deployments)
    • SAP HANA DB (supported alternative - see Section 2.7.2a)
  2. [Optional] Data Store Service - Used for caching state and persisting data of SAP EIC API proxy
    • Redis (can be replaced by Valkey or SAP HANA DB)
    • Valkey (open-source alternative to Redis - see Section 2.7.3)
    • SAP HANA DB (supported alternative - see Section 2.7.2a)
  3. [Optional] Local Container Registry - Quay is an SAP-certified solution as a local container registry for disconnected implementations and is used by Edge Lifecycle Management to deploy and manage the Edge Integration Cell (Quay is included as part of OCP - more information here)

PostgreSQL, Redis, and Quay can be easily deployed and managed using the Operator Framework (operatorframework.io). Valkey can be deployed using Helm charts or standard Kubernetes manifests. For POC environments, external database and data store services are optional: SAP EIC deploys self-contained pods if they are left unconfigured during installation. However, this is not recommended for production.

Note: As an alternative to PostgreSQL and Redis/Valkey, SAP HANA DB can serve as both the database and data store service. For detailed information, see SAP Note 3247839 on me.sap.com.

Red Hat's OperatorHub is the web console interface that OCP cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager [OLM]. OLM also supports interactions from command-line tools, similar to managing normal Kubernetes resources. More information on OperatorHub and Operators can be found here.
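
As an example of the OLM workflow described above, a command-line installation reduces to applying a Subscription object. The sketch below follows the Crunchy Postgres operator, but the channel, package name, and catalog source are assumptions that should be verified against the current OperatorHub catalog:

```yaml
# Example OLM Subscription (channel/name/source must be verified in OperatorHub).
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: crunchy-postgres-operator
  namespace: openshift-operators
spec:
  channel: v5                      # assumed channel name
  name: crunchy-postgres-operator  # assumed package name in the catalog
  source: certified-operators      # assumed catalog source
  sourceNamespace: openshift-marketplace
```

OLM then resolves the package, installs the operator, and keeps it updated according to the subscribed channel.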

2.7.2 PostgreSQL Service

PostgreSQL versions 12 and 15 (preferred) are currently supported as external databases.
For backup and restore options, please refer to the PostgreSQL Backup and Restore documentation.

2.7.2.1 Operator for Enterprise PostgreSQL solution offered via Red Hat Marketplace

Please refer to the following pages for the detailed information:

  • https://swc.saas.ibm.com/en-us/redhat-marketplace/products/crunchy-postgresql-for-kubernetes
  • https://access.crunchydata.com/documentation/postgres-operator/latest
  • https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability

Crunchy Data, which offers the Enterprise-level Postgres Operator through Red Hat Marketplace, supports the solution directly via a subscription-based model. This model provides various tiers with different response times and service levels, including bug fixes, security patches, updates, and technical support. A subscription is required for using the software in third-party consulting or support services. For more information, please refer to their Terms of Use on crunchydata.com.

If you are using OpenShift deployed on hyperscalers, you can also use managed database services such as:

  • Azure: Azure Database for PostgreSQL

  • AWS: Amazon RDS for PostgreSQL

    PostgreSQL connectivity and user information is required during solution deployment.
    Required user privileges: GRANT ALL on the schema.
    Users require permission to create database objects such as tables and indexes, in addition to standard SQL operations.
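
A sketch of the corresponding PostgreSQL setup follows; the role, database, and schema names are placeholders to adapt to your naming conventions:

```sql
-- Illustrative only: create a dedicated EIC user with the privileges
-- described above (all names and the password are placeholders).
CREATE ROLE eic_user LOGIN PASSWORD 'change-me';
CREATE DATABASE eic_db OWNER eic_user;

-- Connect to the new database (psql meta-command), then grant schema rights:
\c eic_db
GRANT ALL ON SCHEMA public TO eic_user;
-- eic_user can now create tables, indexes, and other objects in the schema.
```

The connection string, user, and password created here are the values supplied during EIC deployment.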

2.7.2a SAP HANA DB Service

SAP HANA DB 2.0 SPS 08 is currently supported as an external database and data store service for SAP Edge Integration Cell.

SAP HANA DB can serve as both the database and data store service:

  • Storing local persistent data (database function)
  • Caching state and persisting data of SAP EIC API proxy (data store function)

Important Notes:

Note: SAP HANA DB may be an appropriate choice for organizations already using SAP HANA in their landscape.

2.7.3 Redis / Valkey Service

Redis versions 6.x and 7.x, and Valkey versions 8.0–8.2 (compatible with Redis 7.2.x), are currently supported as external data stores.
For backup and restore options, please refer to the Redis persistence or Valkey persistence documentation.

2.7.3.1 Enterprise Operator Solution REDIS offered by Redis Enterprise via Red Hat Marketplace

  • https://swc.saas.ibm.com/en-us/redhat-marketplace/products/redis-labs-enterprise
  • https://redis.io/docs/latest/operate/rs/

This solution is directly backed by the Redis Labs team, as detailed in Appendix 1 of the Redis Enterprise Software Subscription Agreement. The agreement categorizes support services into Support Services, Customer Success Services, and Consulting Services, offering assistance from basic troubleshooting to advanced consultancy and ongoing optimization tailored to diverse customer needs.

If you are running OpenShift on hyperscalers, you can use managed data store services such as:

  • Azure: Azure Cache for Redis

  • AWS: Amazon ElastiCache

2.7.3.2 Valkey

Valkey is an open-source, high-performance key/value data store forked from Redis. It is wire-protocol compatible with Redis, making it a drop-in replacement for the data store service.

Supported versions: 8.0–8.2 (compatible with Redis 7.2.x). Minimum requirements: 1 CPU / 1 GiB Memory.

Valkey can be deployed on OpenShift using Helm charts or standard Kubernetes manifests. For more information, refer to the Valkey documentation on valkey.io.
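
A minimal single-instance Valkey deployment, sized to the stated 1 CPU / 1 GiB minimum, could look like the following (the namespace and image tag are assumptions; a production setup should use a StatefulSet with a persistent volume):

```yaml
# Minimal example: a single Valkey pod at the documented minimum sizing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: valkey
  namespace: eic-datastore        # assumed namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: valkey
  template:
    metadata:
      labels:
        app: valkey
    spec:
      containers:
        - name: valkey
          image: valkey/valkey:8.0   # official image; pin the exact tag you validate
          ports:
            - containerPort: 6379    # Redis-compatible wire protocol
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
            limits:
              cpu: "1"
              memory: 1Gi
```

A ClusterIP Service in front of this Deployment then provides the host/port pair entered as the data store endpoint during EIC deployment.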

If you are using OpenShift on hyperscalers, you can use managed services compatible with Valkey such as:

  • AWS: Amazon ElastiCache (Valkey engine)

  • Google Cloud: Memorystore for Valkey

2.7.4 Quay Container Image Registry Service

Quay is a secure, scalable container registry that integrates seamlessly with OpenShift, providing robust image storage, management, vulnerability scanning, and distribution capabilities for your containerized applications. It is fully supported by Red Hat.

2.8 Storage

Shared storage can be attached to store external data for certain pods. The volume binding mode 'Immediate' is required for this shared storage.

For OCP, storage configuration depends on the underlying infrastructure. An overview of OpenShift Container Platform storage can be found here. By default, OpenShift installs CSI drivers and creates storage classes for most backing storage during the OpenShift installation process. Please refer to the table below; detailed information can be found here.

In general, K8s worker nodes should use ephemeral disks for the OS (normally the default). The ephemeral OS disk size should be at least 80 GB. Ephemeral storage per K8s node should allow for sufficient temporary storage, especially if large messages are to be processed using the stream cache. The ephemeral storage limit for the Edge Integration Cell worker pod is 10 Gi by default.

Allocating 120 GB of disk space per OpenShift node ensures ample storage capacity to support various workloads effectively.

Dedicated storage classes can be used for Message Service, PostgreSQL, and Redis (non-productive deployment); otherwise, the default storage class is used. Besides the RWX use case mentioned above, disk-based storage classes (not file-based storage or NFS) should be used in general. Please make sure that the storage classes used support volume expansion; otherwise, increasing persistent volumes is not possible.

Verify if your OpenShift cluster's existing storageClass has the necessary features to provision EIC and external services according to the requirements. Detailed information is available here. If the existing storageClass lacks the required features, consider installing OpenShift Data Foundation (ODF) on top of the existing storage service.
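
The two properties called out above are visible directly in a storage class definition. As a sketch, a class satisfying both requirements could look like this (the class name, provisioner, and parameters are environment-specific examples):

```yaml
# Example storage class meeting the stated requirements:
# volumeBindingMode Immediate and allowVolumeExpansion enabled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: eic-block              # placeholder name
provisioner: ebs.csi.aws.com   # environment-specific CSI driver (AWS EBS shown)
parameters:
  type: gp3                    # driver-specific parameter
volumeBindingMode: Immediate   # required for the shared storage use case
allowVolumeExpansion: true     # required so persistent volumes can be grown later
reclaimPolicy: Delete
```

Inspecting your existing classes with `oc get storageclass -o yaml` shows whether these fields are already set as needed.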

Your OpenShift cluster may come pre-configured with default storage classes for block storage. Examples include the thin storage class for VMware vSphere, standard-csi/ssd-csi for GCP, or gp3-csi/gp2-csi for AWS. These default storage classes can be used as block storage.

If "Enable Shared File System" is set to "true" during the deployment process of Edge Integration Cell, you may need an additional storage class for file storage, such as AWS EFS. Detailed instructions for creating an AWS EFS storage class can be found in the OpenShift documentation.
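
As a sketch, such an EFS-backed file storage class references the EFS CSI driver and a file system ID (the fileSystemId below is a placeholder for your own EFS file system):

```yaml
# Example AWS EFS storage class for the Shared File System use case
# (fileSystemId is a placeholder).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap          # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
```

This class can then be selected as the Shared File System Storage Class during EIC deployment.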

Alternatively, by installing ODF, you can leverage the SSD disks attached to the nodes or existing storage class as backing storage and establish unified storage classes for block, file, and object storage. This ensures seamless deployment of EIC on OpenShift across any environment.

While using ODF, the ocs-storagecluster-ceph-rbd storage class can provide RWO/RWX block volumes and can be set as the default storage class. ODF also supports expanding Persistent Volume Claims, offering more flexibility in managing persistent storage resources. ODF supports persistent volumes with various access modes (RWO and RWX for both filesystem and block volumes), providing the underlying storage service for EIC and its external services. Additionally, during EIC deployment, the ocs-storagecluster-cephfs storage class, offering RWX filesystem volumes, can serve as the Shared File System Storage Class when 'Enable Shared File System' is set to 'true'.

Note: When using ODF CephFS for the Shared File System Storage Class with standard cluster-admin deployments, you may encounter "Permission Denied" errors due to SELinux MCS label conflicts. See Red Hat KB 7137220 for the resolution. This issue does not affect Restricted-Access deployments.

The table below provides an overview of the storage requirements for various components, detailing their access modes, whether they are optional, any relevant comments, and the required persistent volume sizes:

Component                                           | Access Mode | Comments       | Persistent Volume (GiB)
Local Services - PostgreSQL database                | RWO         | Non-productive | 50
Local Services - Redis data store                   | RWO         | Non-productive | 10
Monitoring - Prometheus (Optional)                  | RWO         |                | 20
Shared storage - Java thread, heap dumps (Optional) | RWX         |                | 50
Message Service Tier 100                            | RWO         | Non-productive | 100
Message Service Tier 250 (VPN/System)               | RWO         |                | 100
Message Service Tier 1K                             | RWO         |                | 350
Message Service Tier 10K                            | RWO         |                | 350
Crunchy PostgreSQL                                  | RWO         |                | 100
Redis Enterprise                                    | RWO         |                | 60

2.9 Message Service Storage requirements

Storage class for Message Service should use disks that provide 1000 IOPS performance or higher. It is recommended to use SSDs when possible.

2.10 Reference Architectures

The following content includes reference setups and general sizing guidelines for both High Availability (HA) and non-HA configurations. For detailed guidance and support, please consult with the Red Hat team.

Workloads specify the CPU, memory, and storage needs for each application or service like SAP Edge Integration Cell, OpenShift Data Foundation, Crunchy Postgres, and Redis Enterprise. These details are crucial for understanding individual resource requirements. For simplicity, we will use Message Service Tier 250 as a reference. Please adjust accordingly if you prefer to use a different Message Service tier.

In contrast, the specifications for Master Nodes, ODF Nodes, and Worker Nodes outline the hardware configurations necessary for different nodes within the OpenShift cluster. These specifications provide the infrastructure to support all deployed applications effectively.

The following table presents various deployment configurations for OpenShift tailored for the SAP Edge Integration Cell (EIC), segmented into High Availability (HA) and Non-HA setups. Each configuration details the specific node composition, including master, worker, and OpenShift Data Foundation (ODF) as well as dependent external services. This comprehensive overview accommodates a range of operational needs, from resilient HA deployments to streamlined single-node setups, offering flexibility to optimize OpenShift implementations for diverse infrastructure environments.

Running EIC in HA mode requires an HA OpenShift cluster; EIC HA on a non-HA OpenShift cluster is not applicable (N/A). The validated OpenShift HA configurations for EIC HA are:

  • 3 Master Nodes + 3 Worker Nodes + 3 ODF Nodes
  • 3 Master Nodes + 3 Worker Nodes
  • 3 Nodes Cluster
  • 3 Master Nodes + 3 Worker Nodes (Without ODF)

Non-HA EIC configurations are covered in Section 2.10.2.

2.10.1 Reference Architecture for running EIC in HA mode

2.10.1.1 3 Master Nodes + 3 Worker Nodes + 3 ODF Nodes

Workloads                     | Resources
SAP Edge Integration Cell     | 16 CPU / 64 GiB RAM / 204 GiB PV storage
OpenShift Data Foundation     | 30 CPU / 72 GiB RAM / 3 storage devices
Crunchy Postgres (2 replicas) | 2 CPU / 4 GiB RAM / 100 GiB PV storage
Redis Enterprise (3 nodes)    | 12 CPU / 18 GB RAM / 60 GB PV storage

Node Type       | Resources
Master Node * 3 | 4 CPUs / 16 GB RAM / 120 GB Disk Storage
ODF Node * 3    | 10 CPUs / 24 GB RAM / 120 GB Disk Storage (+Additional storage >= 0.5 T)
Worker Node * 3 | 12 CPUs / 30 GB RAM / 120 GB Disk Storage

2.10.1.2 3 Master Nodes + 3 Worker Nodes

Workloads                     | Resources
SAP Edge Integration Cell     | 16 CPU / 64 GiB RAM / 204 GiB PV storage
OpenShift Data Foundation     | 30 CPU / 72 GiB RAM / 3 storage devices
Crunchy Postgres (2 replicas) | 2 CPU / 4 GiB RAM / 100 GiB PV storage
Redis Enterprise (3 nodes)    | 12 CPU / 18 GB RAM / 60 GB PV storage

Node Type       | Resources
Master Node * 3 | 4 CPUs / 16 GB RAM / 120 GB Disk Storage
Worker Node * 3 | 22 CPUs / 54 GB RAM / 120 GB Disk Storage (+Additional storage >= 0.5 T)

2.10.1.3 3 Nodes Cluster

Workloads                    | Resources
SAP Edge Integration Cell    | 16 CPU / 64 GiB RAM / 204 GiB PV storage
OpenShift Data Foundation    | 30 CPU / 72 GiB RAM / 3 storage devices
Crunchy Postgres (1 replica) | 2 CPU / 4 GiB RAM / 100 GiB PV storage
Redis Enterprise (1 node)    | 12 CPU / 18 GB RAM / 60 GB PV storage

Node Type       | Resources
Master Node * 3 | 26 CPUs / 68 GB RAM / 240 GB Disk Storage (+Additional storage >= 0.5 T)

2.10.1.4 3 Master Nodes + 3 Worker Nodes (Without ODF)

Workloads                     | Resources
SAP Edge Integration Cell     | 16 CPU / 64 GiB RAM / 204 GiB PV storage
Crunchy Postgres (2 replicas) | 2 CPU / 4 GiB RAM / 100 GiB PV storage
Redis Enterprise (3 nodes)    | 12 CPU / 14 GB RAM / 60 GB PV storage

Node Type       | Resources
Master Node * 3 | 4 CPUs / 16 GB RAM / 120 GB Disk Storage
Worker Node * 3 | 16 CPUs / 38 GB RAM / 120 GB Disk Storage

2.10.1.5 3 Master Nodes + 3 Worker Nodes (With external SAP HANA DB)

This configuration uses SAP HANA DB 2.0 SPS 08 as a unified database and data store service, eliminating the need for separate PostgreSQL and Redis instances. For detailed information, see SAP Note 3247839 on me.sap.com.

Workloads                                           | Resources
SAP Edge Integration Cell                           | 16 CPU / 64 GiB RAM / 204 GiB PV storage
OpenShift Data Foundation                           | 30 CPU / 72 GiB RAM / 3 storage devices
SAP HANA DB 2.0 SPS 08 (External or Managed Service)| See SAP HANA documentation for sizing requirements

Node Type       | Resources
Master Node * 3 | 4 CPUs / 16 GB RAM / 120 GB Disk Storage
ODF Node * 3    | 10 CPUs / 24 GB RAM / 120 GB Disk Storage (+Additional storage >= 0.5 T)
Worker Node * 3 | 10 CPUs / 26 GB RAM / 120 GB Disk Storage

Note: SAP HANA DB can be deployed externally (on-premise or as a managed service in hyperscaler environments). This configuration uses a single database system for both database and data store functions. For SAP HANA sizing and resource requirements, please refer to the SAP HANA documentation.


2.10.2 Reference Architecture for Running EIC in Non-HA Mode

2.10.2.1 3 Master Nodes + 3 Worker Nodes

Workloads                    | Resources
SAP Edge Integration Cell    | 8 CPU / 32 GiB RAM / 101 GiB PV storage
OpenShift Data Foundation    | 24 CPU / 72 GiB RAM / 3 storage devices
Crunchy Postgres (1 replica) | 1 CPU / 2 GiB RAM / 100 GiB PV storage
Redis Enterprise (1 node)    | 6 CPU / 6 GB RAM / 10 GB PV storage

Node Type       | Resources
Master Node * 3 | 4 CPUs / 16 GB RAM / 120 GB Disk Storage
Worker Node * 3 | 14 CPUs / 38 GB RAM / 120 GB Disk Storage (+Additional storage >= 0.3 T)

2.10.2.2 3 Master Nodes + 3 Worker Nodes (External Services deployed outside of OpenShift)

Workloads                 | Resources
SAP Edge Integration Cell | 8 CPU / 32 GiB RAM / 161 GiB PV storage
OpenShift Data Foundation | 24 CPU / 72 GiB RAM / 3 storage devices

Node Type       | Resources
Master Node * 3 | 4 CPUs / 16 GB RAM / 120 GB Disk Storage
Worker Node * 3 | 12 CPUs / 36 GB RAM / 120 GB Disk Storage (+Additional storage >= 0.3 T)

2.10.2.3 3 Nodes Cluster

Workloads                                | Resources
SAP Edge Integration Cell                | 8 CPU / 32 GiB RAM / 101 GiB PV storage
OpenShift Data Foundation (Compact Mode) | 24 CPU / 72 GiB RAM / 3 storage devices
Crunchy Postgres (1 replica)             | 1 CPU / 2 GiB RAM /

Node Type       | Resources
Master Node * 3 | 18 CPUs / 54 GB RAM / 120 GB Disk Storage (+Additional storage >= 0.3 T, e.g. SSD disks for Bare Metal servers, vSan VMDK, etc)

2.10.2.4 Single Node OpenShift

Workloads                 | Resources
SAP Edge Integration Cell | 8 CPU / 32 GiB RAM / 11 GiB PV storage

Node Type   | Resources
Single Node | 18 CPUs / 50 GB RAM / 200 GB Disk Storage (+Additional 300 GB storage to be used by e.g., LVM storage operator)

Note: If your cluster uses storage services other than ODF, refer to the guidelines for reference architecture and sizing. Ensure that sufficient resources are available to handle the storage service workloads.

2.10.2.5 3 Master Nodes + 3 Worker Nodes

Workloads                    | Resources
SAP Edge Integration Cell    | 8 CPU / 32 GiB RAM / 101 GiB PV storage
Crunchy Postgres (1 replica) | 1 CPU / 2 GiB RAM / 100 GiB PV storage
Redis Enterprise (1 node)    | 6 CPU / 6 GB RAM / 10 GB PV storage

Node Type          | Resources
Master Node        | 4 CPUs / 16 GB RAM / 120 GB Disk Storage
Worker Node        | 8 CPUs / 18 GB RAM / 120 GB Disk Storage
Additional Storage | 300 GB Persistent Storage

For a managed platform, Red Hat offers OpenShift Dedicated, a fully managed OpenShift service on Amazon Web Services (AWS) and Google Cloud, on which EIC can be deployed. Sizing considerations will vary based on whether you enable the Shared File System, on the Message Service tier, and on other EIC options. Please note that the certification plan for this is currently under discussion. If you are interested in this solution, please let us know.

Additionally, you can use hosted control planes in OpenShift Container Platform to strengthen security, optimize resource use by consolidating control planes on fewer nodes, and streamline multicluster management for enhanced operational efficiency in hybrid-cloud environments.


3. Installation

For more detailed instructions and troubleshooting tips, refer to the official SAP documentation on help.sap.com. This resource provides in-depth guides, best practices, and FAQs to help you successfully install and manage your SAP Edge Integration Cell. Additionally, watch the following video for all the necessary steps for the initial setup: Initial Setup of SAP Edge Integration Cell.

3.1 Installing OpenShift

Before installing OpenShift Container Platform, choose between self-managing the cluster or using a managed service. Self-management involves deploying OpenShift on your chosen infrastructure, while managed services include options like OpenShift Dedicated, OpenShift Online, and services on Azure, AWS, etc. For detailed installation preparation, refer to OpenShift Installation Preparation.

3.2 Installing OpenShift Data Foundation (Optional)

For detailed instructions on installing OpenShift Data Foundation across various platforms, please refer to the official documentation at Red Hat OpenShift Data Foundation 4.14.

3.3 Installing External Services

Note: This step may be optional for a POC. You might not need to set up external database and data store instances. If you don't enable or configure them during the SAP Edge Integration Cell (EIC) installation, EIC will automatically deploy self-contained Postgres and Redis pods within its own service namespace.

Database and Data Store Options:

3.3.1 Installing CrunchyData Postgres service

To install the CrunchyData Postgres service on OpenShift, follow the detailed instructions provided in the CrunchyData Postgres Operator Installation Guide. This guide offers steps to deploy and configure the CrunchyData Postgres Operator, ensuring a smooth setup of your PostgreSQL environment within the OpenShift platform. For additional resources, including example scripts and YAML configurations for setting up a test PostgreSQL instance, refer to the SAP Edge Integration Cell External Services Examples.
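As an illustration only, a minimal test instance created through the operator might look like the following sketch. All names, sizes, and versions here are assumptions for a throwaway test cluster — verify the fields against the Crunchy Postgres Operator v5 documentation before use:

```yaml
# Sketch of a minimal PostgresCluster custom resource (illustrative values)
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: eic-postgres          # hypothetical name
  namespace: postgres-operator
spec:
  postgresVersion: 15
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 10Gi
```

The linked External Services Examples repository contains the authoritative example manifests for EIC.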

3.3.2 Installing Redis Enterprise Service

For detailed instructions on installing Redis Enterprise Service on OpenShift, consult the Redis Enterprise Operator Installation Guide. This guide provides step-by-step procedures to effectively deploy and configure the Redis Enterprise Operator, ensuring optimal setup and integration of Redis Enterprise within your OpenShift environment. Additionally, for example scripts and YAML configurations to set up a test Redis instance, consult the SAP Edge Integration Cell External Services Examples.
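For orientation, a minimal RedisEnterpriseCluster resource might look like the following sketch. The name, namespace, and node count are illustrative assumptions — check the Redis Enterprise Operator documentation for the full set of required fields:

```yaml
# Sketch of a minimal RedisEnterpriseCluster custom resource (illustrative values)
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec                    # hypothetical name
  namespace: redis-enterprise
spec:
  nodes: 3                     # minimum recommended for a quorum
```

The External Services Examples repository linked above contains the example manifests used for EIC testing.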

3.4 Installing Quay container registry (Optional)

If you need to set up a local Quay container registry to facilitate the installation of EIC in a disconnected environment, you can follow the detailed instructions in the Red Hat Quay Operator Installation Guide.

3.5 Configuring Proxy (Optional)

If you wish to use a proxy for running the ELM/EIC on OpenShift, follow the instructions below to configure the HTTP(S) proxy settings.

The HTTP(S) proxy must be configured for each component in the setup, and each component treats its corresponding No Proxy settings differently:

  • management host (or jump host)
  • OpenShift cluster
  • ELM/EIC

The following sections assume the following configuration:

  • Cluster's base domain: example.com
  • Cluster name: foo (API is listening at api.foo.example.com:6443)
  • Local proxy server: http://proxy.example.com:3128
  • Management host's hostname: jump.example.com (add its shortname jump to the NO_PROXY settings)
  • Local network CIDR: 192.168.128.0/24
  • OpenShift's service network default range: 172.30.0.0/16
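To make the interplay of these parameters concrete, the following sketch assembles a no_proxy value from the example configuration above. The values are the assumptions listed, not output of the SAP or Red Hat tooling:

```shell
# Assemble a no_proxy list from the example configuration above
BASE_DOMAIN="example.com"
JUMP_SHORT="jump"                 # management host shortname
LOCAL_CIDR="192.168.128.0/24"     # local network
SERVICE_CIDR="172.30.0.0/16"      # OpenShift service network default range

# Loopback, the jump host, all cluster subdomains, and internal networks bypass the proxy
NO_PROXY="localhost,127.0.0.1,${JUMP_SHORT},.${BASE_DOMAIN},${LOCAL_CIDR},${SERVICE_CIDR}"
echo "${NO_PROXY}"
```

The same list (minus the service network, which only the cluster itself needs) appears again in the management-host profile script below.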

3.5.1 Configuring HTTP Proxy on the management host

Export the Proxy environment variables on your management host according to your Linux distribution. For RHEL, follow the instructions on applying a system-wide proxy. For example, in BASH:

sudo tee /etc/profile.d/http_proxy.sh > /dev/null <<EOF
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1,jump,.example.com,192.168.128.0/24
EOF
source /etc/profile.d/http_proxy.sh

In this example, .example.com is a wildcard pattern that matches any subdomains like foo.example.com.

3.5.2 Configuring HTTP Proxy on the OpenShift cluster

Typically, OpenShift is configured to use the proxy during installation, but it is also possible to set or re-configure it afterward. An example configuration might look like this:
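A minimal sketch of the cluster-wide proxy object, using the example proxy address and domains from section 3.5 (verify the field names against the OpenShift cluster-wide proxy documentation):

```yaml
# Sketch of the cluster-wide proxy configuration (illustrative values)
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .example.com,192.168.128.0/24
```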

Keep in mind that wildcard characters (e.g., *.example.com) are not supported by OpenShift. The complete no_proxy list, which includes container and service networks and additional service names, is generated automatically and stored in the .status.noProxy field of the proxy object:

3.5.3 Configuring HTTP Proxy for the ELM/EIC

The Edge Lifecycle Management (ELM) uses the proxy settings from the environment on the management host configured earlier. This is essential for ELM to communicate with the SAP image registry (proxied), local image registry, and OpenShift API (not proxied).

During ELM's initialization phase, set the Proxy settings when prompted. For detailed instructions on configuring the "Bypass HTTP Proxy For" settings, please refer to the information provided here.

You can generate the value for the 'Bypass HTTP Proxy For' option using the following helper command:

bash <(curl -s https://raw.githubusercontent.com/redhat-sap/sap-edge/master/utils/get_no_proxy.sh)
# to see the usage and options, append `--help`
bash <(curl -s https://raw.githubusercontent.com/redhat-sap/sap-edge/master/utils/get_no_proxy.sh) --help

3.6 Configuring Restricted Access for ELM (Optional)

Note: This section is optional and only applies if you plan to use the Restricted Access Model instead of the default full cluster-admin approach. This is a fully supported feature for production deployments.

Overview

The restricted access model allows you to onboard an OpenShift cluster into SAP Edge Lifecycle Management using fine-grained RBAC permissions instead of full cluster-admin rights. This approach is particularly valuable for:

  • Shared/Multi-tenant clusters where multiple teams or applications coexist
  • Enhanced security requirements that mandate least-privilege access
  • Compliance scenarios where cluster-admin access is restricted by policy
  • Production deployments where security isolation is a priority

Key Benefits

  • Least-Privilege Access: ELM operates with only the permissions it needs
  • Security Isolation: Limits the blast radius in case of security incidents
  • Multi-Tenancy Support: Enables safe coexistence with other workloads
  • Audit Compliance: Meets regulatory requirements for controlled access

Configuration Process

The restricted access setup consists of five main stages that must be completed before registering the cluster with ELM:

  1. Prepare Namespaces - Create and configure dedicated namespaces with appropriate security annotations
  2. Configure OpenShift Service Mesh 3.x - Deploy Red Hat OpenShift Service Mesh 3.x to manage network traffic
  3. Apply RBAC Permissions - Apply fine-grained permissions using resources from SAP Note 3618713
  4. Generate Kubeconfig File - Create a kubeconfig using the dedicated Service Account
  5. Register Cluster in ELM - Add the cluster as an Edge Node with restricted access enabled
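Stage 4 (generating the kubeconfig) can be sketched as follows. All resource names here are hypothetical placeholders — the authoritative Service Account and namespace names come from the resources in SAP Note 3618713:

```shell
# Hypothetical names for illustration; use the names from SAP Note 3618713
SA=edgelm-sa
NS=sap-edgelm
SERVER=$(oc whoami --show-server)
TOKEN=$(oc create token "$SA" -n "$NS")   # short-lived Service Account token

# Build a standalone kubeconfig for ELM
# (add --certificate-authority=<ca.crt> if your API CA is not in the system trust store)
oc config set-cluster edgelm --server="$SERVER" --kubeconfig=edgelm-kubeconfig
oc config set-credentials "$SA" --token="$TOKEN" --kubeconfig=edgelm-kubeconfig
oc config set-context edgelm --cluster=edgelm --user="$SA" --namespace="$NS" \
  --kubeconfig=edgelm-kubeconfig
oc config use-context edgelm --kubeconfig=edgelm-kubeconfig
```

The resulting edgelm-kubeconfig file is what you paste into the ELM UI during registration; keep it secure.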

Documentation Resources

For complete step-by-step instructions, detailed commands, verification procedures, and troubleshooting guidance, please refer to the official documentation:

Primary Documentation:

Additional Resources:

Prerequisites

Before proceeding with the restricted access configuration, ensure you have:

  • Cluster-admin privileges on the target OpenShift cluster (required for initial setup only)
  • Access to SAP Note 3618713 (requires SAP S-user credentials)
  • Downloaded resources.zip from SAP Note 3618713 containing RBAC manifests
  • Red Hat OpenShift Service Mesh 3.x Operators installed on the cluster
  • OpenShift CLI (oc) authenticated to your cluster
  • OpenShift Container Platform 4.14+ (tested versions for Service Mesh 3.x)

Important Notes

  • Production Ready: This feature is fully supported by SAP for production deployments
  • Timing: Configuration must be completed before registering the cluster with ELM
  • Critical Setting: When registering in the ELM UI, you must check the "Restricted Access to Kubernetes cluster" checkbox
  • Security: Keep the generated edgelm-kubeconfig file secure - it provides access to your cluster
  • Service Mesh Options: Choose between shared service mesh (standard) or dedicated service mesh (enhanced isolation) based on your security requirements

Quick Reference: Registration Steps

When you're ready to register the cluster after completing the configuration:

  1. In the ELM UI, start the Add an Edge Node process
  2. Enter a name for your Edge Node
  3. ⚠️ CRITICAL: Check "Restricted Access to Kubernetes cluster" checkbox
  4. Provide the contents of the generated edgelm-kubeconfig file
  5. Complete the remaining configuration steps as guided by the UI

3.7 Installing Edge Lifecycle Management

For detailed instructions on installing Edge Lifecycle Management, please refer to the SAP Edge Lifecycle Management Installation Guide.

3.8 Installing Edge Integration Cell

To set up and manage the Edge Integration Cell, consult the SAP Integration Suite Documentation.

3.9 Exposing the SAP EIC Istio Ingress Gateway (Optional)

The Istio ingress-gateway service requires an external IP or hostname to be assigned by your platform. This is a requirement for SAP EIC deployment to complete successfully.

Understanding Your Load Balancer Options

Before proceeding, check if your OpenShift platform already has an external Load Balancer integrated. In enterprise on-premise environments, clusters are often already behind hardware or software load balancers. If a load balancer is already configured and integrated with the cluster to handle type: LoadBalancer services, you do not need to install MetalLB or use Routes — your existing infrastructure will automatically assign an external IP to the istio-ingressgateway service.

If no load balancer infrastructure exists, you have the following options:

  • Option 1: LoadBalancer Provider — Use MetalLB or any other load balancer solution that can provide external IPs for type: LoadBalancer services. MetalLB is an officially supported Red Hat operator, but it is not mandatory if another load-balancing solution is already in place.

  • Option 2: OpenShift Routes — An alternative for environments without load balancer infrastructure, using the native OpenShift routing layer. See important support scope and technical restrictions below.

Option 1: Using a LoadBalancer Provider (e.g., MetalLB)

If your OpenShift cluster is running on bare-metal infrastructure without an existing load balancer, you can use MetalLB or a similar provider. MetalLB provides LoadBalancer functionality by assigning external IPs to services. Note that MetalLB is simply one provider option — any solution that can satisfy the type: LoadBalancer service requirement will work.

Step 1: Install MetalLB

You can install MetalLB through OpenShift OperatorHub. Once installed, configure MetalLB for your cluster and assign external IPs to services.
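For example, a Layer 2 configuration might look like the following sketch. The address range is illustrative and must be replaced with addresses routable in your network; verify the CRDs against the MetalLB Operator documentation:

```yaml
# Sketch of a MetalLB Layer 2 configuration (illustrative address range)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: eic-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.20-192.0.2.30
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: eic-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - eic-pool
```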

Step 2: Verify the External IP

Since the Istio Ingress Gateway service is already configured as a LoadBalancer type service, MetalLB will automatically allocate an external IP for it. To verify the external IP, use the following command:

oc get svc istio-ingressgateway -n istio-system

You should see an EXTERNAL-IP assigned to the service:

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
istio-ingressgateway   LoadBalancer   172.30.179.193   192.0.2.22    443:32057/TCP   10m

You will need to use DNS to correlate the SAP EIC "Default Virtual Host" with the EXTERNAL-IP allocated by MetalLB, so that traffic is correctly routed.
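For a quick test without touching corporate DNS, the mapping can be simulated on the test client. The hostname and IP below are illustrative values:

```
# /etc/hosts entry on the test client (illustrative values)
192.0.2.22   eic.example.com   # replace with your Default Virtual Host and MetalLB IP
```

In production, create a proper DNS A record pointing the Default Virtual Host at the allocated EXTERNAL-IP.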

Step 3: Test External Access

Once MetalLB assigns an external IP, you can use the assigned IP to test external access. For example:

curl -k --request GET --url https://[Default Virtual Host]/http/example

Option 2: Using OpenShift Routes

[IMPORTANT] Support Scope for OpenShift Routes

While SAP Edge Integration Cell (EIC) can be exposed via OpenShift Routes to accommodate strict on-premise security policies, this configuration deviates from the standard SAP delivery model (which typically relies on LoadBalancer or NodePort services).

  • Red Hat Support: Red Hat validates and supports the OpenShift Route configuration and the underlying ingress infrastructure.
  • SAP Support: SAP support covers the EIC application logic.
  • Customer Responsibility: Any functional limitations, connectivity issues, or feature unavailability resulting specifically from the use of Routes (e.g., inability to support raw TCP without SNI, or loss of source IP preservation) are the responsibility of the customer.

OpenShift Routes are a native way to expose services using a hostname. This method works by mapping a domain name directly to an internal Kubernetes service, bypassing the need for a service of type LoadBalancer.

Architectural Differences: Routes vs. Standard Services

When deploying SAP EIC on OpenShift, customers must understand the architectural differences between the standard Kubernetes approach and the OpenShift Route approach:

Feature      Standard (NodePort / LoadBalancer)   OpenShift Route (HAProxy)
OSI Layer    Layer 4 (Transport)                  Layer 7 (Application) & TLS
Protocols    TCP, UDP                             HTTP, HTTPS, TLS (SNI-based TCP)
Client IP    Preserved (usually)                  Not preserved (backend sees router IP)*
Port Usage   High ports (30000-32767)             Standard ports (80/443)

*Note: The OpenShift Router (HAProxy) acts as a TCP proxy — it creates a new TCP connection to the backend in all termination modes, replacing the original client IP with the router's own IP. Source IP preservation is not available with this deployment topology.

Traffic Flow with External Load Balancer

In many enterprise on-premise deployments, an external load balancer (e.g., F5, HAProxy, NetScaler) sits in front of the OpenShift cluster. When using OpenShift Routes with an external load balancer, the entire chain must be configured for end-to-end TLS passthrough:

Client → DNS → External LB (L4 Passthrough) → OpenShift Router (Passthrough Route) → Istio Gateway (TLS Termination)

Each component in the chain has specific requirements:

  1. DNS: The EIC Virtual Host hostname (e.g., eic.apps.<cluster-domain>) must resolve to the external load balancer's VIP address.

  2. External Load Balancer: Must be configured for Layer 4 TCP Passthrough on port 443 — it must not terminate SSL/TLS. The LB forwards the raw encrypted bytes to the OpenShift Router nodes.

  3. OpenShift Router: Must have a Passthrough Route whose spec.host exactly matches the hostname used by clients and the external LB. The Router uses the SNI header to select the correct Route — if no Route matches, traffic is silently dropped.

  4. Istio Gateway (EIC): Receives the encrypted traffic and terminates TLS using the certificates configured in the ELM Keystore.

Common Pitfalls:

  • Hostname Mismatch: The OpenShift Route hostname must match the hostname clients use to connect (the Virtual Host configured in EIC). A mismatch causes the Router to drop traffic with no visible error, since SNI-based routing fails silently.

  • SSL Termination at the External LB: If the external load balancer terminates SSL (instead of L4 passthrough), the SNI header is lost after re-encryption, and the OpenShift Router cannot match the traffic to the correct Passthrough Route. The external LB must forward traffic as-is.

  • Wrong Route Termination Mode: The Route must use termination: passthrough. Using termination: edge or no termination causes the Router to either terminate TLS prematurely or treat the traffic as unencrypted HTTP — both break the EIC traffic flow.
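To check that SNI routing works end to end, you can probe through the external load balancer with an explicit server name. The placeholders below are the example values from this section, and the command requires network access to the LB:

```shell
# The -servername value must match the Route host exactly,
# otherwise HAProxy finds no matching Route and drops the connection
openssl s_client -connect <LB-VIP>:443 \
  -servername eic.apps.<cluster-domain> </dev/null | head -20
```

A successful probe prints the certificate chain served by the Istio Gateway; a silent disconnect usually indicates an SNI/hostname mismatch or SSL termination at the external LB.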

Operational Restrictions

Due to the nature of OpenShift Routes, the following restrictions apply to this deployment topology:

  • Raw TCP Traffic: OpenShift Routes rely on Server Name Indication (SNI) to route traffic on port 443. Any SAP EIC component or API artifact that uses "Raw TCP" (without TLS/SNI) cannot be exposed via a standard Route on port 443.

  • UDP Traffic: OpenShift Routes do not support UDP traffic. Any feature requiring UDP is unavailable in this topology.

  • Mutual TLS (mTLS): mTLS between the client and the application is supported with the Passthrough Route used in this guide, since the OpenShift Router forwards the TLS connection directly to Istio without terminating it. However, mTLS is not compatible with edge or re-encrypt Routes, where the router terminates the client TLS session.

Manual Workaround for Route-Based Deployment

When using OpenShift Routes, the Istio deployment may wait for an external IP assignment. Use the following workaround to allow the deployment of EIC to proceed:

  1. Retry the ELM solution operation if it has failed or timed out.

  2. While the Istio solution is waiting for deployment to complete, manually patch the istio-ingressgateway service to assign the external hostname of your OpenShift Route:

    oc patch svc istio-ingressgateway -n <gateway-namespace> --type='merge' -p '
    status:
      loadBalancer:
        ingress:
        - hostname: "eic.apps.<cluster-domain>"
    '
    

    Replace <gateway-namespace> with the namespace where your Istio ingress gateway is deployed (see note below), and eic.apps.<cluster-domain> with your actual Route hostname.

  3. This change allows the Istio deployment to continue by satisfying the external IP/hostname requirement.

Step 1: Define the Route

Create a YAML file to define the Route. Below is an example configuration for exposing the Istio Ingress Gateway using passthrough TLS termination.

In this case, when you deploy SAP EIC, use a hostname like eic.apps.<cluster-domain> as the "Default Virtual Host".

Important — Namespace Selection:

The Route must be created in the same namespace as the istio-ingressgateway service. The namespace depends on your deployment model:

Deployment Model                       Gateway Namespace
Standard (cluster-admin)               istio-system
Restricted Access (Service Mesh 3.x)   istio-gateways

Adjust the metadata.namespace in the YAML below accordingly.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: eic-https-route
  namespace: istio-system  # Use 'istio-gateways' for restricted-access deployments
  labels:
    app: istio-ingressgateway
spec:
  host: eic.apps.<cluster-domain>
  to:
    kind: Service
    name: istio-ingressgateway
    weight: 100
  port:
    targetPort: https
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None

Replace <cluster-domain> with the domain configured during your OpenShift installation, and update the namespace if using the restricted-access model.

Step 2: Apply the Route

Use the following command to create the Route:

oc apply -f eic-route.yaml

Step 3: Verify the Route

After applying the Route, verify its creation with:

oc get route -n istio-system   # Use 'istio-gateways' for restricted-access deployments

You should see output similar to the following:

NAME              HOST/PORT                   PATH   SERVICES               PORT    TERMINATION   WILDCARD
eic-https-route   eic.apps.<cluster-domain>          istio-ingressgateway   https   passthrough   None

Step 4: Test External Access

You can now access the Istio Ingress Gateway using the hostname specified in the Route. For example:

curl -k --request GET --url https://eic.apps.<cluster-domain>/http/example