About iPERF3
iPERF3 [1] (Internet Protocol Performance) is a free, open-source, cross-platform command-line tool for measuring network performance in terms of bandwidth and speed (TCP and UDP). It is highly reliable compared with many other network bandwidth and speed testing tools, and it is particularly effective for testing network performance between two servers.
You need at least two servers to run an iperf3 test – a source server (the client) and a destination server (the listening server).
Install IPERF3
Ubuntu / Debian: apt-get install iperf3
CentOS / Rocky Linux: yum install iperf3
Windows: Download and extract iperf-3.*-win64.zip
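On any of these platforms you can confirm that the binary is available and check its version with:
Command: iperf3 --version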
Pros of iPERF3
The big advantage of iperf3 is that it can run multiple test connections between two servers simultaneously. This is crucial when testing connections of 1 Gbps or higher, as opposed to downloading a test file with wget, which holds only one connection and therefore cannot utilize the full capacity of the link (usually 1 Gbps at most).
Using iperf3 you can test the bandwidth from an origin server (the iperf3 client) to a destination server (the iperf3 listening server). Both need sufficient bandwidth for the test you are after: if you are performing a 10 Gbps speed test, both servers need a 10GE uplink, and the listening server should ideally have a higher bandwidth capacity than the client, e.g. 20 Gbps.
Using iperf3 commands on the client (in this case your server or PC), you run a benchmark test and get the throughput of all test connections.
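As a minimal sketch of that setup (203.0.113.10 is a placeholder address for the destination server, not a real endpoint), the destination runs a listener and the client connects to it:
Command (destination server): iperf3 -s
Command (source server): iperf3 -c 203.0.113.10 -P 20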
Cons of traditional speedtest services
Firstly, speed test providers run tests between your server and the web servers on their side, so you should check what uplink ports their servers have. The majority of today's speedtest servers run on a 1 Gbps uplink, so it is physically impossible to measure higher throughput than that uplink capacity.
Secondly, if you are downloading a test file hosted on your server from your home device, you are always limited by the speed of your home connection (the speed you subscribed to with your local ISP). Given that the average internet connection speed worldwide is around 80 Mbps [2], you will never be able to test a server with 10 Gbps unmetered bandwidth efficiently.
Other forms of connection testing, such as the wget command, also have limitations (a single connection at a time), which render them ineffective for testing network throughput in high-bandwidth environments.
Commands
Here is a command for running an iperf3 test from your client server. The only other data you need is the IP address or hostname of the iperf3 listening server. It is recommended to run several parallel streams (TCP/UDP), because a single stream is hashed to one physical uplink interface of an access switch. Use the parameter -P n, where n is the number of parallel flows.
Command: iperf3 -P 20 -c ams.speedtest.clouvider.net -p 5203
This test runs 20 simultaneous connections against the iperf3 listening server ams.speedtest.clouvider.net on port 5203.
Command: iperf3 -P 20 -c ams.speedtest.clouvider.net -p 5203 -R
This test is almost the same but runs in reverse mode (-R): the server sends and the client receives, which corresponds to measuring download throughput.
Command via Docker: docker run -it --rm -p 5201:5201 -p 5201:5201/udp r0gger/iperf3 -c ams.speedtest.clouvider.net -p 5203
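These tests all use TCP, which is the default. To test UDP instead, a sketch along the following lines should work, assuming the same listening server accepts UDP on that port; the default UDP bitrate is only 1 Mbit/sec, so set a target rate per stream with -b (1 Gbit/sec here is purely an example):
Command: iperf3 -u -b 1G -P 10 -c ams.speedtest.clouvider.net -p 5203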
Options
-p, --port server port to listen on/connect to (default = 5201)
-f, --format [kmgtKMGT] format to report: Kbits, Mbits, Gbits, Tbits
-i, --interval seconds between periodic throughput reports
-F, --file name xmit/recv the specified file
-A, --affinity n/n,m set CPU affinity
-B, --bind <host> bind to the interface associated with the address
-V, --verbose more detailed output
-J, --json output in JSON format
--logfile f send output to a log file
--forceflush force flushing output at every interval
--timestamps <format> emit a timestamp at the start of each output line (using optional format string as per strftime(3))
-d, --debug emit debugging output
-v, --version show version information and quit
-h, --help show this message and quit
-s, --server run in server mode
-D, --daemon run the server as a daemon
-I, --pidfile file write PID file
-1, --one-off handle one client connection then exit
--server-bitrate-limit #[KMG][/#] server's total bit rate limit (default 0 = no limit)
(optional slash and number of secs interval for averaging total data rate. Default is 5 seconds)
--rsa-private-key-path path to the RSA private key used to decrypt authentication credentials
--authorized-users-path path to the configuration file containing user credentials
-c, --client <host> run in client mode, connecting to <host>
--sctp use SCTP rather than TCP
-X, --xbind bind SCTP association to links
--nstreams # number of SCTP streams
-u, --udp use UDP rather than TCP
--connect-timeout # timeout for control connection setup (ms)
-b, --bitrate #[KMG][/#] target bitrate in bits/sec (0 for unlimited)
(default 1 Mbit/sec for UDP, unlimited for TCP)
(optional slash and packet count for burst mode)
--pacing-timer #[KMG] set the timing for pacing, in microseconds (default 1000)
--fq-rate #[KMG] enable fair-queuing based socket pacing in bits/sec (Linux only)
-t, --time # time in seconds to transmit for (default 10 secs)
-n, --bytes #[KMG] number of bytes to transmit (instead of -t)
-k, --blockcount #[KMG] number of blocks (packets) to transmit (instead of -t or -n)
-l, --length #[KMG] length of buffer to read or write (default 128 KB for TCP, dynamic or 1460 for UDP)
--cport bind to a specific client port (TCP and UDP, default: ephemeral port)
-P, --parallel # number of parallel client streams to run
-R, --reverse run in reverse mode (server sends, client receives), i.e. download
--bidir run in bidirectional mode. Client and server send and receive data.
-w, --window #[KMG] set window size / socket buffer size
-C, --congestion set TCP congestion control algorithm (Linux and FreeBSD only)
-M, --set-mss # set TCP/SCTP maximum segment size (MTU - 40 bytes)
-N, --no-delay set TCP/SCTP no delay, disabling Nagle's Algorithm
-4, --version4 only use IPv4
-6, --version6 only use IPv6
-S, --tos N set the IP type of service, 0-255.
The usual prefixes for octal and hex can be used, i.e. 52, 064 and 0x34 all specify the same value.
--dscp N or --dscp val set the IP dscp value, either 0-63 or symbolic.
Numeric values can be specified in decimal, octal and hex (see --tos above).
-L, --flowlabel N set the IPv6 flow label (only supported on Linux)
-Z, --zerocopy use a 'zero copy' method of sending data
-O, --omit N omit the first n seconds
-T, --title str prefix every output line with this string
--extra-data str data string to include in client and server JSON
--get-server-output get results from server
--udp-counters-64bit use 64-bit counters in UDP test packets
--repeating-payload use repeating pattern in payload, instead of randomized payload (like in iperf2)
--username username for authentication
--rsa-public-key-path path to the RSA public key used to encrypt authentication credentials
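As an illustration of combining several of the options above (the file name result.json is only an example), the following runs a 30-second reverse test with 20 parallel streams, 5-second reports and JSON output written to a log file:
Command: iperf3 -c ams.speedtest.clouvider.net -p 5203 -P 20 -R -t 30 -i 5 -J --logfile result.json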
Test results (client side):
- Interval: the time window covered by each report line.
- Transfer: the amount of data transferred during that interval. The data is generated only for the test and is discarded once the test completes.
- Bitrate (bandwidth): the rate at which the data was transferred, e.g. in Mbits/sec or Gbits/sec.
Each line gives the sender and receiver results for every stream. The most important results are the last two [SUM] lines, which show the aggregate result of the bandwidth test across all parallel streams. In this example, the total network bandwidth reached 8.50 Gbits/sec.
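For orientation, the two [SUM] lines look roughly like this (an abridged illustration, not captured output; the byte counts are placeholders consistent with the 8.50 Gbits/sec figure above):
[SUM]   0.00-10.00  sec  9.90 GBytes  8.50 Gbits/sec    0    sender
[SUM]   0.00-10.00  sec  9.88 GBytes  8.49 Gbits/sec         receiver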
Common issues
iperf3: error - the server is busy running a test. try again later
This error indicates that the server is currently occupied with an ongoing test and cannot accept a new test request until the current one completes. Wait and retry, or try another port if the server listens on several. It is also possible that other issues are preventing the server from responding.
iperf3: error - unable to connect to server: Device or resource busy
iperf3: interrupt - the server has terminated
These errors indicate that the iperf3 client is unable to establish a connection because the server's resources or network device are currently occupied or unavailable.
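In either case, a common workaround (assuming the listening server exposes more than one port, which many public speedtest endpoints do) is simply to retry against a neighboring port, for example:
Command: iperf3 -P 20 -c ams.speedtest.clouvider.net -p 5204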
Sources
1. iperf3: A TCP, UDP, and SCTP network bandwidth measurement tool: https://github.com/esnet/iperf
2. Average worldwide internet speed: https://en.wikipedia.org/wiki/List_of_sovereign_states_by_Internet_connection_speeds
3. Read more: https://medium.com/@i.anujpratapsingh/how-to-use-iperf3-for-network-bandwidth-testing-fa92c096b01c