How to test network bandwidth? (iperf3 qperf netcat nc ttcp)
Environment
- Red Hat Enterprise Linux
- Network
Issue
- How to test network bandwidth?
- How to use tools such as iperf, qperf, netcat (nc), or ttcp to see the maximum available networking speed?
- Need to measure a system's maximum TCP and UDP bandwidth throughput. How can this be achieved?
- Where can I download the iperf utility package?
- How to test network throughput without special tools such as iperf?
- What is iperf and can I use it on a RHEL machine?
Resolution
Several solutions are available for testing network bandwidth:
Downloading and installing iperf3
Add the EPEL Repository
If using RHEL 7, 8, or 9, this step can be skipped as iperf3 is included in the supported channels.
If using RHEL 6 or RHEL 5, add the EPEL repository to get a ready-made RPM package:
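For example, the EPEL release package can be installed directly from the Fedora project. The exact URL is an assumption here; EPEL 5 and EPEL 6 are end-of-life, so older release packages may only be available from the EPEL archives:
# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm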
Install iperf3 Package
# yum install iperf3
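Once installed, the version can be confirmed with:
# iperf3 --version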
Compile from source code
If you are unable to add EPEL, you may compile iperf3 from the upstream source.
The latest version of the iperf source code is at https://github.com/esnet/iperf
Other OSes such as BSD or UNIX can likely compile this source code if required.
Instructions for building are provided on GitHub, and in the README.md file in the source.
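As a rough sketch, the usual autotools sequence applies (prerequisites such as gcc and make are assumed to be installed; consult the README for the authoritative steps):
# git clone https://github.com/esnet/iperf.git
# cd iperf
# ./configure
# make
# make install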
Windows, Mac, iOS, Android, etc
Builds of iperf3 for other environments are available at https://iperf.fr/iperf-download.php
Bandwidth Test
iperf3 has the notion of a "client" and "server" for testing network throughput between two systems.
The following example provides periodic bandwidth updates and performs a test for 60 seconds, which should be long enough to fully exercise a network. Larger send and receive buffer sizes can additionally be set with the -w option to maximise throughput (see the Notes below).
Server
On the server system, iperf3 is told to listen for a client connection:
server # iperf3 -i 5 -s
-i the interval to provide periodic bandwidth updates
-s listen as a server
See man iperf3 for more information on specific command line switches.
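If the server should keep running in the background, or listen on a non-default port, it can also be started as a daemon, for example:
server # iperf3 -s -D -p 5002
-D run the server as a daemon
-p the port to listen on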
Client
On the client system, iperf3 is told to connect to the listening server via hostname or IP address:
client # iperf3 -i 5 -t 60 -c <server hostname or ip address>
-i the interval to provide periodic bandwidth updates
-t the time to run the test in seconds
-c connect to a listening server at...
See man iperf3 for more information on specific command line switches.
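iperf3 performs a TCP test by default. To measure UDP throughput instead, the client can request a UDP test with a target bitrate, for example:
client # iperf3 -u -b 1G -i 5 -t 60 -c <server hostname or ip address>
-u use UDP rather than TCP
-b the target bitrate (the UDP default is only 1 Mbit/sec, so specify a higher value; 0 means unlimited)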
Test Results
Both the client and server report their results once the test is complete:
Server
server # iperf3 -i 10 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.0.0.2, port 22216
[ 5] local 10.0.0.1 port 5201 connected to 10.0.0.2 port 22218
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 17.5 GBytes 15.0 Gbits/sec
[ 5] 10.00-20.00 sec 17.6 GBytes 15.2 Gbits/sec
[ 5] 20.00-30.00 sec 18.4 GBytes 15.8 Gbits/sec
[ 5] 30.00-40.00 sec 18.0 GBytes 15.5 Gbits/sec
[ 5] 40.00-50.00 sec 17.5 GBytes 15.1 Gbits/sec
[ 5] 50.00-60.00 sec 18.1 GBytes 15.5 Gbits/sec
[ 5] 60.00-60.04 sec 82.2 MBytes 17.3 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-60.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-60.04 sec 107 GBytes 15.3 Gbits/sec receiver
Client
client # iperf3 -i 10 -t 60 -c 10.0.0.1
Connecting to host 10.0.0.1, port 5201
[ 4] local 10.0.0.2 port 22218 connected to 10.0.0.1 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-10.00 sec 17.6 GBytes 15.1 Gbits/sec 0 6.01 MBytes
[ 4] 10.00-20.00 sec 17.6 GBytes 15.1 Gbits/sec 0 6.01 MBytes
[ 4] 20.00-30.00 sec 18.4 GBytes 15.8 Gbits/sec 0 6.01 MBytes
[ 4] 30.00-40.00 sec 18.0 GBytes 15.5 Gbits/sec 0 6.01 MBytes
[ 4] 40.00-50.00 sec 17.5 GBytes 15.1 Gbits/sec 0 6.01 MBytes
[ 4] 50.00-60.00 sec 18.1 GBytes 15.5 Gbits/sec 0 6.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 107 GBytes 15.4 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 107 GBytes 15.4 Gbits/sec receiver
Reading the Result
Between these two systems, we could achieve a bandwidth of 15.4 gigabits per second, or approximately 1835 MiB (mebibytes) per second.
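The conversion from bits to bytes is straightforward:
15.4 Gbit/s ÷ 8 bits per byte = 1.925 GB/s
1.925 × 10^9 bytes/s ÷ 1,048,576 bytes per MiB ≈ 1835 MiB/s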
Notes
- The server listens on TCP port 5201 by default. This port will need to be allowed through any firewalls present (see the example after this list). The port used can be changed with the -p command line option.
- There is a bug with iperf3 in UDP mode where the server reports a bandwidth of zero. This is expected; use the client reading for the bandwidth count: https://github.com/esnet/iperf/issues/314
- When using the -w option to set the buffer size, it cannot be larger than net.core.rmem_max or net.core.wmem_max on either the sender or the receiver. In other words, both the sender and receiver systems must have rmem_max and wmem_max at least as large as the -w option size.
- Per the iperf FAQ (software.es.net), it is expected that iperf2 may outperform iperf3 when testing with parallel streams (the -P option).
- Extremely high bandwidth such as 40 Gbps or 100 Gbps may require a different testing method altogether: iperf3 does not reach full speed on fast network connections
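For example, on a system running firewalld (assumed here; adjust for your firewall and zone), the default iperf3 port can be opened on the server with:
server # firewall-cmd --permanent --add-port=5201/tcp
server # firewall-cmd --reload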
Root Cause
Be aware that the theoretical maximum transfer rate will not be what is achieved in practice. For additional details, read the Factors limiting actual performance, criteria for real decisions section on Wikipedia's List of device bit rates page.
Any given transfer is only as fast as the slowest link.
A Request For Enhancement (RFE) has been filed to have iperf included in RHEL 7. This is tracked under Red Hat Private Bug 913329 - [RFE] include iperf in RHEL.
Alternative Options
qperf
qperf is a network bandwidth and latency measurement tool which works over many transports including TCP/IP, RDMA, UDP, and SCTP.
It is available in the RHEL Server channel, so no third-party packages are required.
More details are available at: How to use qperf to measure network bandwidth and latency performance?
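As a brief illustration (see the linked article for full details), qperf runs as a listener on one system and the client names the tests to run, for example TCP bandwidth and latency:
server # qperf
client # qperf <server hostname or ip address> tcp_bw tcp_lat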
nc (netcat) and dd
If iperf is not an option for the environment, a simple throughput test can be performed with nc (netcat) and dd.
dd is in the coreutils package, and nc is in the nc package (nmap-ncat on RHEL 7 and later), both provided by Red Hat.
On the target, start a netcat listener. In the following example, netcat is listening on TCP port 12345:
server # nc -l -n 12345 > /dev/null
Have the client connect to the listener. The dd command will report throughput/second:
# dd if=/dev/zero bs=1M count=10240 | nc -n <server hostname or ip address> 12345
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 42.1336 s, 255 MB/s
Note that netcat uses a smaller buffer than other tools and this buffer size cannot be changed. This introduces a bottleneck, so throughput will be significantly lower with netcat than with purpose-built tools such as iperf or TTCP.
ttcp
The ttcp (Test TCP) tool has RPMs available in third-party repositories.
TTCP works by creating a TCP pipe; it is up to the user to send data over the pipe.
server # ttcp -l 1048576 -b 4194304 -r -s
-l the buffer size to read from the TCP socket
-b the socket buffer size
-r receive
-s sink (discard) data after receiving
A client can use dd to generate data to send over the pipe:
client # dd if=/dev/zero bs=1M count=10240 | ttcp -l 1048576 -b 4194304 -t -n 10240 <server hostname or ip address>
-l the buffer size to write to the TCP socket
-b the socket buffer size
-t transmit
-n the number of buffers to send
The server shows only the TTCP result:
ttcp-r: 10737418240 bytes in 7.157 real seconds = 1465034.870 KB/sec +++
ttcp-r: 25074 I/O calls, msec/call = 0.292, calls/sec = 3503.254
ttcp-r: 0.009user 4.366sys 0:07real 60% 0i+0d 1804maxrss 0+256pf 19857+1csw
The client shows results from both dd and from TTCP:
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 7.1567 s, 1.5 GB/s
ttcp-t: 10737418240 bytes in 7.157 real seconds = 1465070.691 KB/sec +++
ttcp-t: 10240 I/O calls, msec/call = 0.716, calls/sec = 1430.733
ttcp-t: 0.022user 4.218sys 0:07real 59% 0i+0d 1064maxrss 0+256pf 163783+3csw
Diagnostic Steps
- Bandwidth on virtual machines can be influenced greatly by memory pressure and CPU load.
Physical machines can also be affected, but virtual machines are affected more.
If there is substantial load on the system, the true available bandwidth will not be shown.
Example:
Running this command in the background before an iperf test on a VM with low memory/CPU:
dd if=/dev/zero of=/dev/null &
will produce the following results
--------------------------------->
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 173 MBytes 145 Mbits/sec
[ 4] 10.0-20.0 sec 168 MBytes 141 Mbits/sec
[ 4] 20.0-30.0 sec 178 MBytes 149 Mbits/sec
Running the same test without any load results in significantly higher bandwidth
---------------------------------->
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 9.87 GBytes 8.48 Gbits/sec
[ 3] 10.0-20.0 sec 9.80 GBytes 8.42 Gbits/sec
^C[ 3] 0.0-22.5 sec 22.1 GBytes 8.46 Gbits/sec
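When finished, stop the artificial load started above (assuming the background dd is job 1 in the same shell):
# kill %1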