EAP 7 clustering deployment boundaries in an OCP cluster
Environment
- Red Hat JBoss Enterprise Application Platform (EAP)
- 7.x image
- Red Hat OpenShift Container Platform (OCP)
- 4.x
Issue
- How to combine multiple OCP deployments in a single cluster?
- How are EAP 7 deployments separated?
Resolution
For information about clustering capabilities, see the solution EAP 7 image clustering in OCP 4.
Regarding the boundaries of the clustering capabilities, the JGroups DNS_PING and KUBE_PING protocols are specific to a deployment.
This means DNS_PING can detect cluster members only inside the same namespace and the same deployment.
The per-deployment limitation is imposed by the service (which is required for the communication), whose selector targets a single deployment.
Workaround for clustering two deployments in the same namespace (Project)
To cluster two deployments in the same namespace, the user can change the selector on the DNS_PING service so that EAP clusters with other deployments in that namespace. As shown in Diagnostic Steps, the respective service has a selector set to the deploymentConfig name, eap-app:
$ oc get service eap-app-ping -o yaml
apiVersion: v1
kind: Service
metadata:
annotations:
description: The JGroups ping port for clustering.
...
spec:
clusterIP: None
clusterIPs:
- None
...
publishNotReadyAddresses: true
selector:
deploymentConfig: eap-app <------------------------ here the deploymentConfig is set for the eap-app
Note that it might be necessary to adjust the NetworkPolicy to make this work, e.g. by enabling communication between namespaces.
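As a sketch of the selector change described above (the shared label app-cluster: shared and the second deployment name eap-app-2 are hypothetical, chosen for illustration), both pod templates can be given a common label and the ping service re-pointed at it:

```shell
# Add a common (hypothetical) label to both pod templates so one service selects both
$ oc patch dc/eap-app -p '{"spec":{"template":{"metadata":{"labels":{"app-cluster":"shared"}}}}}'
$ oc patch dc/eap-app-2 -p '{"spec":{"template":{"metadata":{"labels":{"app-cluster":"shared"}}}}}'

# Re-point the ping service: add the shared label and drop the per-deployment key
# (setting a key to null removes it in a strategic merge patch)
$ oc patch service eap-app-ping -p '{"spec":{"selector":{"app-cluster":"shared","deploymentConfig":null}}}'
```

Once both deployments roll out pods carrying the shared label, the DNS lookup against eap-app-ping returns the pod IPs of both deployments.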
Deployment vs DeploymentConfig Selector
The service (via its iptables rules) knows which pods to address based on the pod labels matching the service selector. The labels are set based on the deployment/deploymentConfig name:
deployment labels:
selector:
deployment: eap-app <------------------------ here the deployment is set for the eap-app
deploymentConfig labels:
selector:
deploymentConfig: eap-app <------------------------ here the deploymentConfig is set for the eap-app
However, a Deployment's label (on the pods and in the Deployment's YAML) changes with each revision of the YAML. So if the user adds annotations or other labels to the Deployment YAML, the label on the pods will change, and the user must adjust the service's selector accordingly:
After two modifications (annotations added), these are the labels:
"deployment": "eap-app-2", <---------------------- deployment label changes
"deploymentConfig": "eap-app",
"deploymentconfig": "eap-app"
The service for the pod above should then be:
selector:
deployment: eap-app-2 <--- the revision matters - the deployment label changes with each revision of the Deployment
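One way to avoid chasing the revision-dependent deployment label is to select on a label the user controls. A minimal sketch; the cluster-group: eap-app label below is hypothetical:

```yaml
# In the Deployment's pod template - a stable, user-managed label:
spec:
  template:
    metadata:
      labels:
        cluster-group: eap-app
---
# In the ping service - select on that stable label instead of "deployment":
spec:
  selector:
    cluster-group: eap-app
```

Because the label is set explicitly rather than generated, later edits to the Deployment YAML do not invalidate the service selector.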
DNS_PING properties
As explained in the solution EAP 7 image clustering in OCP 4, DNS_PING relies on the service that is set in the dns_query property.
To make an EAP 7 deployment cluster with deployments in another OCP cluster, it might be necessary to use JDBC_PING.
With DNS_PING, two ports come into play, each with a different purpose: OPENSHIFT_DNS_PING_SERVICE_PORT (8888) and the TCP/UDP transport port (7600). By default, 8888 is used for the DNS lookup against the ping service (default eap-app-ping), and all cluster communication between EAP/RHDG instances happens on 7600.
| Usage | Port |
|---|---|
| DNS lookup | 8888 |
| Between EAP/RHDG instances | 7600 |
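If NetworkPolicies are in place, both ports in the table above must be allowed between the member pods. A minimal sketch, assuming the pods carry the label application: eap-app (the policy name is also an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-jgroups        # hypothetical name
spec:
  podSelector:
    matchLabels:
      application: eap-app   # assumed pod label
  ingress:
  - ports:
    - port: 8888             # DNS_PING discovery contact
      protocol: TCP
    - port: 7600             # JGroups TCP transport
      protocol: TCP
```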
EAP 7 clusters everything together
Even though JGroups is configured with a specific service, eap-app-ping, if other services select the pods, the cluster will be formed with those new pods as well.
Example:
eap-app-ping 127.1.1.1:8888,127.1.1.2:8888 40m
...
example-webservices 127.1.1.1:8888,127.1.1.2:8888,127.1.1.3:8888 13m <---------------- 3 pods
The cluster will be formed with more pods than eap-app-ping alone detects:
01:50:38,722 INFO [org.infinispan.CLUSTER] (thread-14,ee,eap-name-2-4-l2rr4) ISPN000094: Received new cluster view for channel ee: [eap-app-2-lzrrt|3] (4) [eap-app-pod1, eap-app-pod2, eap-name-pod1, ]
EAP 7 will cluster all the pods it can contact through any service - even pods that are not behind the service named in dns_query.
Root Cause
The application that ships with the EAP 7 image, ROOT.war, is not clustered. To activate the EAP 7 clustering capabilities, a clustered application (e.g. a web application marked <distributable/> in its web.xml) is required.
The DNS_PING property async_discovery_use_separate_thread_per_request: if enabled, a separate thread is used for every discovery request. It can be used with or without async_discovery.
Related solutions
| Issue | Solution |
|---|---|
| EAP Operator Load balancer service | JBoss EAP 7 Operator creates LoadBalancer service |
Diagnostic Steps
- To verify the current stack via jboss-cli:
$ /subsystem=jgroups/stack=tcp:read-resource(include-runtime)
- To verify the protocols set on the deployment:
$ oc get dc/eap-app -o yaml | grep -B 5 -A 5 JGROUPS
deploymentConfig: eap-app
name: eap-app
spec:
containers:
- env:
- name: JGROUPS_PING_PROTOCOL
value: dns.DNS_PING
- name: OPENSHIFT_DNS_PING_SERVICE_NAME
value: eap-app-ping
- name: OPENSHIFT_DNS_PING_SERVICE_PORT
value: "8888"
- name: MQ_CLUSTER_PASSWORD
value: O2eiQKTQ
- name: MQ_QUEUES
- name: MQ_TOPICS
- name: JGROUPS_CLUSTER_PASSWORD
value: vqUNfE62
- name: AUTO_DEPLOY_EXPLODED
value: "false"
- name: ENABLE_GENERATE_DEFAULT_DATASOURCE
value: "false"
- To confirm clustering, search the logs for ISPN000078/ISPN000079 (channel ee starting) and ISPN000094 (new cluster view)
- To verify the DNS_PING protocol and service settings:
$ cat /opt/eap/standalone/configuration/standalone-openshift.xml | grep -A 3 -B 3 dns
<stacks>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="dns.DNS_PING">
<property name="dns_query">eap-app-ping</property>
<property name="async_discovery_use_separate_thread_per_request">true</property>
</protocol>
<protocol type="MERGE3"/>
--
</stack>
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="dns.DNS_PING">
<property name="dns_query">eap-app-ping</property>
<property name="async_discovery_use_separate_thread_per_request">true</property>
</protocol>
<protocol type="MERGE3"/>
- To list all the services:
$ oc get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
eap-app ClusterIP 172.30.112.183 <none> 8080/TCP 49m
eap-app-ping ClusterIP None <none> 8888/TCP 49m
^ The service is headless, so the ping goes straight to the pods, without a load balancer.
- For details on the service:
$ oc get service eap-app-ping -o yaml
apiVersion: v1
kind: Service
metadata:
annotations:
description: The JGroups ping port for clustering.
openshift.io/generated-by: OpenShiftNewApp
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
creationTimestamp: "2022-12-04T00:40:44Z"
labels:
app: eap74-basic-s2i <-------------------------------------- S2I label
app.kubernetes.io/component: eap74-basic-s2i
app.kubernetes.io/instance: eap74-basic-s2i
application: eap-app
template: eap74-basic-s2i
xpaas: 7.4.0
name: eap-app-ping
namespace: eap-galleon
resourceVersion: "26078"
uid: 22c0bb18-a666-4fe4-8339-a163375b2a02
spec:
clusterIP: None
clusterIPs:
- None
internalTrafficPolicy: Cluster <------------------- internalTrafficPolicy
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: ping
port: 8888 <--------------------------------- port
protocol: TCP
targetPort: 8888 <-------------------------------- target port
publishNotReadyAddresses: true
selector:
deploymentConfig: eap-app <----------------------- deployment config - which deployments apply for this service
sessionAffinity: None
type: ClusterIP <---------------------------------- ClusterIP
status:
loadBalancer: {}
- Getting the endpoints will show the eap-app-ping endpoints and the pod IPs:
$ oc get ep,svc -o wide
NAME ENDPOINTS AGE
endpoints/eap-app-ping 1.1.1.127:8888,1.1.1.128:8888 75m
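To check which addresses DNS_PING will discover, the headless service can be resolved from inside one of the pods. A sketch, assuming nslookup is present in the image; the pod name is an example:

```shell
# Each A record returned for the headless service is a member pod IP
$ oc exec eap-app-1-abcde -- nslookup eap-app-ping
```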
KUBE_PING example:
A KUBE_PING setup won't have an eap-app-ping service; instead it relies on the Kubernetes API and a headless service - see the StatefulSet below:
$ oc get statefulset eap-example -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
image.openshift.io/triggers: '[{ "from": { "kind":"ImageStreamTag", "name":"quay.io/wildfly-quickstarts/clusterbench:latest"},
"fieldPath": "spec.template.spec.containers[?(@.name==\"eap-example\")].image"}]'
wildfly.org/wildfly-server-generation: "1"
creationTimestamp: "2022-12-04T01:56:50Z"
generation: 1
labels:
app.kubernetes.io/managed-by: eap-operator
app.kubernetes.io/name: eap-example
app.openshift.io/runtime: eap
name: eap-example
namespace: eap7-test
ownerReferences:
- apiVersion: wildfly.org/v1alpha1
blockOwnerDeletion: true
controller: true
kind: WildFlyServer
name: eap-example
uid: 0ddfe306-d12e-4085-8cf4-f4bef03345c0
resourceVersion: "52440"
uid: c02deb17-9b9d-4a93-b0b5-38e85a073711
spec:
podManagementPolicy: Parallel
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/managed-by: eap-operator
app.kubernetes.io/name: eap-example
app.openshift.io/runtime: eap
serviceName: eap-example-headless
template:
metadata:
annotations:
wildfly.org/server-type: generic
creationTimestamp: null
labels:
app.kubernetes.io/managed-by: eap-operator
app.kubernetes.io/name: eap-example
app.openshift.io/runtime: eap
com.company: Red_Hat
rht.comp: EAP
rht.prod_name: Red_Hat_Runtimes
rht.prod_ver: 2022-Q1
rht.subcomp_t: application
wildfly.org/operated-by-headless: active
wildfly.org/operated-by-loadbalancer: active
spec:
containers:
- env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_LABELS
value: app.kubernetes.io/managed-by=eap-operator,app.kubernetes.io/name=eap-example,app.openshift.io/runtime=eap
- name: STATEFULSET_HEADLESS_SERVICE_NAME
value: eap-example-headless
image: quay.io/wildfly-quickstarts/clusterbench:latest
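Note that KUBE_PING discovers members by querying the Kubernetes API for pods matching KUBERNETES_LABELS in KUBERNETES_NAMESPACE, so the pod's service account needs permission to list pods. A sketch, assuming the default service account in the eap7-test namespace shown above:

```shell
# Grant the pod's service account read access to the API objects in its namespace
$ oc policy add-role-to-user view system:serviceaccount:eap7-test:default -n eap7-test
```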
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.