L2TP + IPsec speeds

I did some throughput measurements using 4 identical VirtualBox VMs running RouterOS 6.44.3, each with 1 CPU, hosted on the same 12-core Xeon.
Basically, an L2TP tunnel is about as fast as direct routing (sometimes even slightly faster), so I count a reduction of less than 5%.
IPsec alone brings a 20-25% throughput reduction.
However, when L2TP and IPsec are used together, throughput drops by 50% (I was using the wrong MTU)…

Configurations attached.

MT1 <-> MT2 <-> MT3  <-> MT4

Speed (in Mbps) between MT1 and MT4 using BTest:

MT2-MT3 Link    MTU     UDP/s   UDP/r   UDP/b     1xTCP/s  1xTCP/r  1xTCP/b
Routing no FW   1500    390     447     180/180   260      230      110/110
L2TP no IPsec   1460    396     401     169/169   298      265      130/100
L2TP+IPsec      1460!   205     190     120/66    186      180      77/77
L2TP+IPsec      1420    280     280     120/120   208      200      80/80
IPsec 128-bit   1500    320     318     136/136   230      216      104/80
IPsec 256-bit   1500    327     318     140/140   230      211      95/95

(s = send, r = receive, b = both directions; ! = MTU causing fragmentation)
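For reproducibility, the measurements above can be run with RouterOS's built-in bandwidth test from MT1; the target address below stands in for MT4 and is an assumption, not taken from the attached configs.

```
# Single-stream TCP test in both directions (hypothetical MT4 address)
/tool bandwidth-test address=10.0.4.1 protocol=tcp direction=both duration=30s

# UDP test; the datagram size can only be set for UDP
/tool bandwidth-test address=10.0.4.1 protocol=udp direction=both \
    local-udp-tx-size=1400 remote-udp-tx-size=1400 duration=30s
```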

MT.zip (2.46 KB)

After lowering the MTU/MRU to 1420 for L2TP+IPsec to avoid fragmentation, I get results closer to what I expected:

L2TP+IPsec      1420    280     280     120/120   208      200      80/80
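For reference, a sketch of how the tunnel MTU/MRU can be lowered on both ends (the interface name l2tp-out1 is the RouterOS default and an assumption; the actual names are in the attached configs):

```
# Server side: applies to dynamically created L2TP interfaces
/interface l2tp-server server set max-mtu=1420 max-mru=1420

# Client side (interface name is an assumption)
/interface l2tp-client set l2tp-out1 max-mtu=1420 max-mru=1420
```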

Can you test L2TP + IPsec with Multilink Protocol activated (MRRU=1600 on both sides)? Don’t use TCP MSS clamping - set change-tcp-mss=no in the PPP profile on both sides too!

I’m unsure how to use that setting properly, but with MTU=1420, MRRU=1600, and no MSS clamping in the firewall or the PPP profile, I got about 5% less throughput than with MTU=1460.

Server site:

Client site:
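A sketch of the setup I tested, assuming the default PPP profile and the default client interface name:

```
# Enable Multilink PPP by setting MRRU on both ends
/interface l2tp-server server set mrru=1600
/interface l2tp-client set l2tp-out1 mrru=1600

# Disable TCP MSS clamping in the PPP profile
/ppp profile set default change-tcp-mss=no

# Check that no mangle rule clamps the MSS either
/ip firewall mangle print where action=change-mss
```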

Did you test with iperf3 on both sides? Test with different TCP packet sizes too!

As I mentioned above, the MRRU setup is the slowest of all the options, which seems normal to me: it is not designed to solve speed issues.
Fragmentation is to be avoided.

I am a bit puzzled that IPsec alone, without any MTU adjustment, did not exhibit comparable slowness.
I used the BTest server for practical reasons. The packet size can only be set for UDP; for TCP, I wonder whether it adjusts the segment size using MTU discovery.
I might have to redo the tests using iPerf to sort all that out…

REM: the MTU should probably be even lower for L2TP+IPsec, but I arrived at that value after observing the raw traffic with Wireshark, so it is probably fine-tuned only for BTest traffic…
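As a rough sanity check on that value, the L2TP-in-IPsec per-packet overhead can be estimated; the ESP part varies with cipher, padding, and NAT-T, so the figures below are assumptions rather than measured values:

```
# Outer IP (20) + UDP (8) + L2TP (8) + PPP (2) around the inner packet
:put (20 + 8 + 8 + 2)
# 38 bytes before ESP is added

# With an inner MTU of 1420, the room left for ESP (IV, padding, ICV)
# on a 1500-byte link is:
:put (1500 - 1420 - 38)
# 42 bytes
```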

I obtained comparable measurements using iPerf2 (from stresslinux_32bit_11.4.i686-0.7.106.vmx), so I won’t repost the table, as there are no significant changes to the results highlighted earlier.
The only new interesting part was running iPerf with the -m switch.

Mode     ROS MTU   MSS ('iPerf -m')   MTU ('iPerf -m')
Routing  1500      1448               1500
L2TP     1420      1368               1408
IPsec    1500      1386               1426
OVPN     1500      1350               1390
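The MSS column is consistent with the usual 52 bytes of per-segment TCP/IP overhead (20 IP + 20 TCP + 12 for the TCP timestamp option, assuming timestamps are negotiated):

```
:put (1500 - 20 - 20 - 12)
# 1448, matching the Routing row
:put (1420 - 20 - 20 - 12)
# 1368, matching the L2TP row
```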

What’s the recommendation, then?