I did some throughput measurements using four identical VirtualBox VMs running RouterOS 6.44.3, each with 1 CPU, hosted on the same 12-core Xeon.
Basically the L2TP tunnel is as fast as direct routing (sometimes even slightly faster), so I count the reduction as less than 5%.
IPSec alone brings a 20-25% throughput reduction.
However, when L2TP and IPSec are used at the same time, the results deteriorate by 50% (I was using the wrong MTU)…
Configurations attached.
MT1 <-> MT2 <-> MT3 <-> MT4
Speed (in Mbps) between MT1 and MT4 using BTest (s = send, r = receive, b = both directions):
MT2-MT3 Link    MTU    UDP/s  UDP/r  UDP/b    1xTCP/s  1xTCP/r  1xTCP/b
Routing no FW   1500   390    447    180/180  260      230      110/110
L2TP no IPSec   1460   396    401    169/169  298      265      130/100
L2TP+IPSec      1460!  205    190    120/66   186      180      77/77
L2TP+IPSec      1420   280    280    120/120  208      200      80/80
IPSec 128bits   1500   320    318    136/136  230      216      104/80
IPSec 256bits   1500   327    318    140/140  230      211      95/95
Can you test L2TP+IPSec with the Multilink Protocol activated (MRRU=1600 on both sides)? Don't use TCP MSS clamping either: set it to no in the ppp profile on both sides!
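For the record, that request translates to something like the following on RouterOS (a sketch; the interface name l2tp-out1 and the default profile are assumptions, adjust to the actual config):

```
# client side (interface name is an assumption)
/interface l2tp-client set l2tp-out1 mrru=1600
# server side
/interface l2tp-server server set mrru=1600
# disable TCP MSS clamping in the profile actually used by the tunnel
/ppp profile set default change-tcp-mss=no
```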
As I mentioned above, MRRU is the slowest of all the options, which seems normal to me: it is not designed to solve speed issues.
Fragmentation is to be avoided.
I am a bit puzzled that IPSec alone, without any MTU adjustment, did not exhibit comparable slowness.
I used the BTest server for practical reasons. Packet size can only be set for UDP; for TCP, I wonder whether it adjusts the segment size using MTU discovery.
Might have to redo tests using iPerf to sort that all out…
REM: the MTU should probably be even lower for L2TP+IPSec, but I arrived at that value by observing raw traffic with Wireshark, so it is probably fine-tuned only for BTest traffic…
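A back-of-envelope overhead estimate lands close to the 1420 that worked. The header sizes below are my assumptions for L2TP inside ESP transport mode with AES-CBC and HMAC-SHA1, for illustration only:

```shell
LINK_MTU=1500
# assumed per-packet overheads (bytes)
OUTER_IP=20; ESP_HDR=8; ESP_IV=16; ESP_TRAILER=2; ESP_ICV=12
UDP=8; L2TP=8; PPP=2
FIXED=$((OUTER_IP + ESP_HDR + ESP_IV + ESP_TRAILER + ESP_ICV + UDP + L2TP + PPP))
# round down to the AES block size (16) so ESP padding cannot push
# the packet over the link MTU
INNER=$(( (LINK_MTU - FIXED) / 16 * 16 ))
echo "$INNER"   # 1424
```

That gives 1424, in the same ballpark as the 1420 I settled on, so the tuned value looks plausible rather than BTest-specific.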
I obtained comparable measurements with iPerf2 (using stresslinux_32bit_11.4.i686-0.7.106.vmx), so I won't repost the table, as there are no significant changes to the results highlighted earlier.
The only new interesting part was running iPerf with the -m switch.
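For reference, the runs looked like this (addresses are placeholders). In iPerf2, -m (--print_mss) reports the MSS the TCP connection actually negotiated, which shows whether the TCP test already shrank its segments to fit the tunnel MTU:

```shell
# on MT4 (server side)
iperf -s -m
# on MT1 (client side; address is a placeholder)
iperf -c 192.0.2.4 -m
```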