Testing Network Bandwidth in OpenShift Using an iPerf Container


The general guide for testing network bandwidth in RHEL can be found in the article "How to test network bandwidth?".

Step 1: Build a Container Image with iPerf Inside

NOTE: To build the iperf3 container image, you must be on a Red Hat subscribed system, which is required to pull from registry.redhat.io and install packages with yum.

Dockerfile

This simple Dockerfile is all that is needed. The default run behavior for this container is to sleep forever, giving you a chance to get a remote shell into the Pod once deployed. From there, you can execute the iperf3 commands.

FROM registry.redhat.io/rhel8/support-tools:latest
RUN yum install -y iperf3
# Sleep forever, but trap TERM/INT so the container exits quickly when killed.
ENTRYPOINT trap : TERM INT; sleep infinity & wait

Build

Build the image with either podman or docker:

$ podman build -t quay.io/example/iperf .
$ docker build -t quay.io/example/iperf .

Push

You may push the image to a public registry such as Quay.io (https://quay.io/). If public registries are not accessible from your OpenShift cluster, push the image to your private registry instead. Again, use either podman or docker:

$ podman push quay.io/example/iperf
$ docker push quay.io/example/iperf
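If the registry requires authentication, you will likely need to log in before pushing. A minimal sketch, assuming quay.io and the example image name used above (substitute your own registry and account):

```shell
# Authenticate to the registry, then push the image built in the previous step.
podman login quay.io
podman push quay.io/example/iperf
```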

Step 2: Deploy Server and Client iPerf Pods

Two Pods are needed to use iPerf: a client to transmit packets and a server to listen for them.

apiVersion: v1
kind: Pod
metadata:
  name: iperf-server
spec:
  #hostNetwork: true #<--------- UNCOMMENT THIS SETTING IF THE IPERF POD MUST RUN IN THE HOST NETWORK
  nodeName: iperf-server-node.example.net #<-------- REPLACE THIS WITH THE NODE WHERE IPERF SERVER MUST RUN
  containers:
  - name: server
    image: quay.io/example/iperf  #<---------- REPLACE THIS
---
apiVersion: v1
kind: Pod
metadata:
  name: iperf-client
spec:
  #hostNetwork: true #<--------- UNCOMMENT THIS SETTING IF THE IPERF POD MUST RUN IN THE HOST NETWORK
  nodeName: iperf-client-node.example.net #<-------- REPLACE THIS WITH THE NODE WHERE IPERF CLIENT MUST RUN
  containers:
  - name: client
    image: quay.io/example/iperf  #<---------- REPLACE THIS

Write the contents above to a file called iperf-pods.yaml and then run:

$ oc create -f iperf-pods.yaml

After a few moments, you should see two Pods in a Running state:

$ oc get pods 
NAME           READY   STATUS    RESTARTS   AGE
iperf-client   1/1     Running   0          6s
iperf-server   1/1     Running   0          6s

NOTE: If you see ErrImagePull or ImagePullBackOff, make sure you have properly pushed the image from Step 1 and entered the image reference correctly in the YAML above.

NOTE: If you uncommented the hostNetwork: true setting and have errors creating the pods, check that the default service account of the namespace has the right SCCs assigned to allow pods on the host network.
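If the Pods fail admission with hostNetwork: true enabled, one way to grant the needed permission is sketched below. This assumes the Pods run under the namespace's default service account and that granting the hostnetwork SCC to that account is acceptable in your environment; <namespace> is a placeholder:

```shell
# Grant the hostnetwork SCC to the default service account of the
# namespace where the iPerf Pods are created.
oc adm policy add-scc-to-user hostnetwork -z default -n <namespace>
```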

NOTE: You should be careful to check which Node each test Pod is running on. The goal of your network test may determine where you want the Pods to run.
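To confirm which Node each test Pod landed on, the wide output includes a NODE column:

```shell
# -o wide adds the IP and NODE columns to the Pod listing.
oc get pods -o wide
```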

Step 3: Run the Test

Now that the iPerf client and server are running, you must identify the iPerf server Pod IP:

$ oc get pod iperf-server -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP
iperf-server   1/1     Running   0          8s    10.130.2.13

Next, open a remote shell in both Pods and begin the test:

Server

$ oc exec -it iperf-server -- iperf3 -i 5 -s

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Client

$ oc exec -it iperf-client -- iperf3 -i 5 -t 60 -c $(oc get pod iperf-server -o jsonpath='{.status.podIP}')

or

$ oc exec -it iperf-client -- iperf3 -i 5 -t 60 -c <iPerf Server Pod IP>

Once you execute the command in the client Pod, the test begins.
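The same client command accepts standard iperf3 options for other test scenarios, for example reversing the transfer direction or switching to UDP. The flags below are standard iperf3 options; substitute the server Pod IP as before:

```shell
# Reverse mode (-R): the server transmits and the client receives.
oc exec -it iperf-client -- iperf3 -i 5 -t 60 -R -c <iPerf Server Pod IP>

# UDP test (-u) with a 1 Gbit/s target bitrate (-b) instead of TCP.
oc exec -it iperf-client -- iperf3 -i 5 -t 60 -u -b 1G -c <iPerf Server Pod IP>
```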

Reference Test

The following test was run using these parameters:

  • Amazon Web Services (AWS)
  • Two r5.xlarge instances to run test Pods
  • OpenShift 4.4.11
  • No additional workload outside of OpenShift Operators and Deployments
  • Default SDN Plugin (NetworkPolicy for OCP 4)

Server Pod Logs

$ oc exec -it iperf-server -- iperf3 -i 5 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.131.2.12, port 50062
[  5] local 10.130.2.15 port 5201 connected to 10.131.2.12 port 50064
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-5.00   sec  2.22 GBytes  3.82 Gbits/sec                  
[  5]   5.00-10.00  sec  2.26 GBytes  3.88 Gbits/sec                  
[  5]  10.00-15.00  sec  2.22 GBytes  3.82 Gbits/sec                  
[  5]  15.00-20.00  sec  2.23 GBytes  3.84 Gbits/sec                  
[  5]  20.00-25.00  sec  2.22 GBytes  3.81 Gbits/sec                  
[  5]  25.00-30.00  sec  2.23 GBytes  3.84 Gbits/sec                  
[  5]  30.00-35.00  sec  2.26 GBytes  3.88 Gbits/sec                  
[  5]  35.00-40.00  sec  2.26 GBytes  3.88 Gbits/sec                  
[  5]  40.00-45.00  sec  2.26 GBytes  3.89 Gbits/sec                  
[  5]  45.00-50.00  sec  2.15 GBytes  3.69 Gbits/sec                  
[  5]  50.00-55.00  sec  2.23 GBytes  3.83 Gbits/sec                  
[  5]  55.00-60.00  sec  2.18 GBytes  3.75 Gbits/sec                  
[  5]  60.00-60.04  sec  18.0 MBytes  3.81 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-60.04  sec  26.7 GBytes  3.83 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Client Pod Logs

$ oc exec -it iperf-client -- iperf3 -i 5 -t 60 -c $(oc get pod iperf-server -o jsonpath='{.status.podIP}')
Connecting to host 10.130.2.15, port 5201
[  4] local 10.131.2.12 port 50064 connected to 10.130.2.15 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-5.00   sec  2.24 GBytes  3.85 Gbits/sec  195   1.25 MBytes       
[  4]   5.00-10.00  sec  2.26 GBytes  3.88 Gbits/sec   58   1.18 MBytes       
[  4]  10.00-15.00  sec  2.22 GBytes  3.82 Gbits/sec  531   1.06 MBytes       
[  4]  15.00-20.00  sec  2.23 GBytes  3.84 Gbits/sec  243   1.08 MBytes       
[  4]  20.00-25.00  sec  2.22 GBytes  3.81 Gbits/sec   13   1018 KBytes       
[  4]  25.00-30.00  sec  2.23 GBytes  3.84 Gbits/sec   13    926 KBytes       
[  4]  30.00-35.00  sec  2.26 GBytes  3.88 Gbits/sec   92    980 KBytes       
[  4]  35.00-40.00  sec  2.26 GBytes  3.88 Gbits/sec   15   1.27 MBytes       
[  4]  40.00-45.00  sec  2.26 GBytes  3.89 Gbits/sec  159   1.21 MBytes       
[  4]  45.00-50.00  sec  2.15 GBytes  3.69 Gbits/sec  340   1.14 MBytes       
[  4]  50.00-55.00  sec  2.23 GBytes  3.83 Gbits/sec   63    939 KBytes       
[  4]  55.00-60.00  sec  2.18 GBytes  3.75 Gbits/sec   15   1.29 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  26.8 GBytes  3.83 Gbits/sec  1737             sender
[  4]   0.00-60.00  sec  26.7 GBytes  3.83 Gbits/sec                  receiver

iperf Done.
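When testing is complete, the test Pods can be removed with the same manifest used to create them:

```shell
# Delete both iPerf Pods defined in the manifest from Step 2.
oc delete -f iperf-pods.yaml
```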