We’ve covered many aspects of iPerf on our blog, and I recently noticed that iPerf3 version 3.19 added native Multi-Path TCP (MPTCP) support. In this post we’ll explain what MPTCP is, why it matters, and walk through a hands-on demo using two Raspberry Pis to show its resilience in action.
What is MPTCP?
Multi-Path TCP is an extension to standard TCP (defined in RFC 8684) that allows a single TCP connection to use multiple network paths at the same time.

With regular TCP, a connection is tied to a single pair of IP addresses. If that path degrades or fails, the connection drops. MPTCP solves this by splitting one logical connection across multiple physical paths, each as a separate “subflow.”
The main benefits are:
- Resilience – if one path fails, traffic shifts to another without dropping the connection
- Throughput aggregation – on truly independent paths, bandwidth can be combined
- Better resource utilization – traffic shifts dynamically toward less congested paths
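On Linux, an application opts in to MPTCP simply by passing a dedicated protocol number to the socket call; everything else looks like ordinary TCP. A minimal sketch in Python (assuming a Linux host; `IPPROTO_MPTCP` is protocol number 262 and is only exposed as a `socket` constant in newer Python versions, so we fall back to plain TCP when the kernel refuses it):

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; older Python versions don't define the constant.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket(prefer_mptcp=True):
    """Return (socket, protocol_name), falling back to TCP if MPTCP is unavailable."""
    if prefer_mptcp:
        try:
            return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP), "mptcp"
        except OSError:
            pass  # kernel without MPTCP support, or net.mptcp.enabled=0
    return socket.socket(socket.AF_INET, socket.SOCK_STREAM), "tcp"

sock, proto = open_stream_socket()
print(proto)  # "mptcp" on an MPTCP-enabled kernel, otherwise "tcp"
sock.close()
```

This fallback pattern matters in practice: an MPTCP connection to a non-MPTCP peer simply negotiates down to regular TCP, so applications lose nothing by asking for it.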
The protocol was developed by Olivier Bonaventure and his team at UCLouvain in Belgium, and its first major real-world deployment came from Apple. Many people use Siri while walking or driving. As they move farther away from a WiFi access point, the TCP connection Siri uses to stream voice eventually fails, resulting in error messages. To address this, Apple has been using MPTCP since iOS 7 — when a user issues a Siri voice command, iOS establishes a connection over both WiFi and cellular, so if WiFi drops, the connection hands over to cellular seamlessly, as described in this video about MPTCP at Apple.
MPTCP is now used on all iPhones to provide seamless handovers and improve performance for Siri, Apple Music, and other applications. This deployment has also encouraged 3GPP to adopt MPTCP for the ATSSS service, which will allow future 5G smartphones to seamlessly switch between WiFi and cellular networks. Cloudflare has also written about how it is changing connectivity more broadly. On the infrastructure side, the Linux kernel has supported MPTCPv1 natively since kernel 5.6 (2020).
Installing iPerf3 3.19 or Later from Source
One way to demonstrate MPTCP is by using iPerf3 3.19 or later. Most Linux distributions lag behind on this. Debian Bookworm’s repository ships iPerf3 3.12, so you need to build from source.
First install the kernel headers matching your running kernel. This step is required for MPTCP to be detected at compile time:
apt install linux-headers-$(uname -r) git build-essential
Then clone and build:
git clone https://github.com/esnet/iperf.git
cd iperf
git checkout 3.21
./configure
make
make install
Verify MPTCP support was compiled in:
iperf3 --help | grep mptcp
  -m, --mptcp               use MPTCP rather than plain TCP
If the `--mptcp` flag appears, you’re good.
You also need a kernel with MPTCP support enabled:
sysctl net.mptcp.enabled
net.mptcp.enabled = 1
Raspberry Pi OS Bookworm (kernel 6.6+) has MPTCP enabled by default. Older Raspbian kernels do not.
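The same toggle can be read programmatically, since the sysctl is backed by `/proc/sys/net/mptcp/enabled`. A small hedged helper (returns `None` where the file does not exist, e.g. on non-Linux systems or pre-5.6 kernels):

```python
from pathlib import Path

def mptcp_enabled():
    """Return True/False from net.mptcp.enabled, or None if the kernel lacks MPTCP."""
    path = Path("/proc/sys/net/mptcp/enabled")
    try:
        return path.read_text().strip() == "1"
    except OSError:
        return None  # sysctl absent: kernel built without MPTCP, or not Linux
```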
Lab Setup
For this demo we used two Raspberry Pi 4s running Raspberry Pi OS Bookworm, each with a wired (eth0) and wireless (wlan0) interface on the same 172.31.0.0/24 network:
| Device | eth0 | wlan0 |
| --- | --- | --- |
| Client | 172.31.0.133 | 172.31.0.173 |
| Server | 172.31.0.218 | 172.31.0.237 |
Configuring MPTCP Endpoints
MPTCP needs to know which local interfaces to use as subflow endpoints. These settings are not persistent across reboots, so a reboot will cleanly reset them if something goes wrong.
On the client (172.31.0.133):
ip mptcp endpoint flush
ip mptcp endpoint add 172.31.0.133 dev eth0 subflow
ip mptcp endpoint add 172.31.0.173 dev wlan0 subflow
ip mptcp limits set subflows 2 add_addr_accepted 4
On the server (172.31.0.218):
ip mptcp endpoint flush
ip mptcp endpoint add 172.31.0.218 dev eth0 signal
ip mptcp endpoint add 172.31.0.237 dev wlan0 signal
ip mptcp limits set subflows 2 add_addr_accepted 4
The `signal` flag on the server tells MPTCP to advertise the server’s additional addresses to the client via the ADD_ADDR option. This allows the client to open subflows to both server interfaces.
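Since the endpoint commands follow a fixed pattern, they can be generated per host. A sketch that only builds the command strings (the addresses and interface names are this lab’s; nothing is executed here, and on a real host you would run the output with root privileges):

```python
def mptcp_endpoint_commands(endpoints, flag, subflows=2, add_addr_accepted=4):
    """Build the `ip mptcp` setup commands for one host.

    endpoints: list of (address, device) pairs for this host
    flag: "subflow" on the client, "signal" on the server
    """
    cmds = ["ip mptcp endpoint flush"]
    cmds += [f"ip mptcp endpoint add {addr} dev {dev} {flag}" for addr, dev in endpoints]
    cmds.append(f"ip mptcp limits set subflows {subflows} add_addr_accepted {add_addr_accepted}")
    return cmds

# Client side of the lab setup:
for cmd in mptcp_endpoint_commands([("172.31.0.133", "eth0"), ("172.31.0.173", "wlan0")],
                                   "subflow"):
    print(cmd)
```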
Running the Test
Start the server with an explicit IPv4 bind address. Without `-B`, iPerf3 defaults to an IPv6 listener, and MPTCP on Linux currently only supports IPv4:
iperf3 -s -B 0.0.0.0
On the client, open two terminals. In the first, monitor MPTCP subflow events:
ip mptcp monitor
In the second, run the test:
iperf3 -c 172.31.0.218 -t 10 --mptcp
Watching MPTCP Negotiate Subflows
As soon as the connection starts, `ip mptcp monitor` shows the subflow negotiation in real time:
$ ip mptcp monitor
[       CREATED] token=26d472c7 remid=0 locid=0 saddr4=172.31.0.133 daddr4=172.31.0.218 sport=56420 dport=5201
[   ESTABLISHED] token=26d472c7 remid=0 locid=0 saddr4=172.31.0.133 daddr4=172.31.0.218 sport=56420 dport=5201
[     ANNOUNCED] token=26d472c7 remid=2 daddr4=172.31.0.237 dport=5201
[SF_ESTABLISHED] token=26d472c7 remid=0 locid=2 saddr4=172.31.0.173 daddr4=172.31.0.218 sport=58871 dport=5201 backup=0 ifindex=3
Step by step:
- The main subflow is created and established over eth0: 172.31.0.133 to 172.31.0.218
- The server advertises its wlan0 address (172.31.0.237) via ADD_ADDR
- A second subflow is established over the client’s wlan0: 172.31.0.173 to 172.31.0.218
Two independent paths are now active for a single TCP connection.
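The monitor output is line-oriented and easy to post-process if you want to script checks around it. A hedged sketch of a parser (field names are taken from the output above; real `ip mptcp monitor` output may carry additional fields depending on the event type):

```python
def parse_mptcp_event(line):
    """Split one `ip mptcp monitor` line into (event_name, fields_dict)."""
    event, _, rest = line.partition("]")
    event = event.lstrip("[ ").strip()            # e.g. "SF_ESTABLISHED"
    fields = dict(kv.split("=", 1) for kv in rest.split())
    return event, fields

event, fields = parse_mptcp_event(
    "[SF_ESTABLISHED] token=26d472c7 remid=0 locid=2 saddr4=172.31.0.173 "
    "daddr4=172.31.0.218 sport=58871 dport=5201 backup=0 ifindex=3"
)
print(event, fields["saddr4"])  # SF_ESTABLISHED 172.31.0.173
```

Piping `ip mptcp monitor` into a few lines like this is a cheap way to alert on subflow loss in a longer-running test.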
The Failover Demo
With the test running, we brought down wlan0 on the client at around the 3-second mark:
ip link set wlan0 down
Here is the iPerf3 output:
$ iperf3 -c 172.31.0.218 -t 10 --mptcp
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   947 Mbits/sec   93    208 KBytes
[  5]   1.00-2.00   sec   110 MBytes   924 Mbits/sec   66    212 KBytes
[  5]   2.00-3.00   sec   108 MBytes   904 Mbits/sec   75    209 KBytes
[  5]   3.00-4.00   sec  34.4 MBytes   288 Mbits/sec   22    192 KBytes   <-- wlan0 down
[  5]   4.00-5.00   sec  85.2 MBytes   715 Mbits/sec    0    342 KBytes   <-- recovering
[  5]   5.00-6.00   sec   110 MBytes   925 Mbits/sec         407 KBytes   <-- fully recovered
[  5]   6.00-7.00   sec                919 Mbits/sec         421 KBytes
[  5]   7.00-8.00   sec   109 MBytes   914 Mbits/sec         427 KBytes
[  5]   8.00-9.00   sec                918 Mbits/sec         430 KBytes
[  5]   9.00-10.00  sec                                      431 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   998 MBytes   837 Mbits/sec  256   sender
At second 3, throughput dropped from around 900 Mbits/sec to 288 Mbits/sec. MPTCP detected the subflow loss, retransmitted the in-flight data, and shifted all traffic to the surviving eth0 subflow. Within two seconds the connection was back to full speed, without ever dropping.
With regular TCP, taking down the active interface would have killed the connection outright.
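That "within two seconds" figure can be read straight off the per-second bitrates. A small illustrative sketch (the numbers are this lab run's; "recovered" here means back within 5% of the pre-failure average, a threshold we picked for illustration):

```python
def recovery_time(bitrates, fail_second, tolerance=0.05):
    """Count degraded 1-second intervals between the failure and recovery.

    bitrates: per-second throughput samples (Mbits/sec)
    fail_second: index of the interval where the failure occurred
    """
    baseline = sum(bitrates[:fail_second]) / fail_second   # pre-failure average
    for i, rate in enumerate(bitrates[fail_second:]):
        if rate >= baseline * (1 - tolerance):
            return i  # intervals spent below the recovery threshold
    return None  # never recovered within the trace

# Per-second Mbits/sec from the failover run (seconds 0-5):
rates = [947, 924, 904, 288, 715, 925]
print(recovery_time(rates, fail_second=3))  # -> 2 degraded seconds
```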
A Note on Bandwidth Aggregation
You might expect MPTCP to show higher throughput than regular TCP since it uses two interfaces. In our lab it did not. MPTCP averaged 837 Mbits/sec compared to 937 Mbits/sec for regular TCP. Both interfaces share the same upstream switch, so there are no truly independent paths to aggregate. MPTCP also adds some overhead for managing subflows and resequencing data.
Bandwidth aggregation with MPTCP requires genuinely independent paths, for example a wired connection and a cellular link on separate ISPs. The resilience benefit however works even on a shared LAN, as shown above.
Conclusion
MPTCP support in iPerf3 3.19 makes it straightforward to test and validate MPTCP deployments on Linux. The setup requires a kernel with MPTCP enabled (Linux 5.6+), kernel headers installed before building iPerf3 from source, and endpoint configuration via `ip mptcp`.
The main takeaway from this demo is that MPTCP’s value in most deployments is not bandwidth aggregation but connection resilience. A two-second dip and full recovery is a very different outcome from a dropped connection, which is why Apple, Cloudflare, and a growing number of network operators are deploying it.
