So I have a virtualized CHR running in a datacentre, with a GRE tunnel running over IPsec to my home router, a hEX v3 (RB750Gr3).
For some reason, over the IPsec tunnel I only seem to be able to achieve around 20Mbps with a bandwidth test from the CHR to the RB750Gr3. However, if I run the bandwidth test directly against the public IP of the RB750Gr3 (over the internet, bypassing the IPsec tunnel), I can achieve maximum throughput.
The resources allocated to the CHR are:
CPU: 4 cores of a Xeon D-1531 processor
Memory: 1 GB
Network: VirtIO adapters
When running a bandwidth test over the IPSEC tunnel, the CPU of the CHR sits at 25% with one core maxed out at 100%. The CPU of the hEX v3 sits happily around 10%.
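One way to narrow down what is eating that core is the built-in RouterOS profiler, which breaks CPU usage down per internal task (look for something like "encrypting"):

```
/tool profile cpu=all
```

This is just a diagnostic sketch; run it while the bandwidth test is active so the relevant task shows up.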
Does anyone have any idea how I could improve performance over IPsec? I realize the hEX has hardware acceleration, but shouldn’t I be achieving more than a measly 20Mbps over an IPsec tunnel between these routers?
Do you have a server with a single CPU core capable of more than 2.2GHz? Your post clearly shows the test maxing out one core of the CHR. That is your limitation.
First things first: what version is your CHR? Make sure it is at least 6.39, and ensure your hardware and hypervisor are passing the AES extensions through to the CHR VM.
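A quick way to verify the passthrough (a sketch, not RouterOS-specific: RouterOS itself doesn't expose CPU flags, and the host-passthrough setting mentioned below is a KVM/QEMU assumption) is to boot any Linux live/rescue image on the same VM and check the guest-visible CPU flags:

```shell
# Check whether the hypervisor exposes the AES-NI instruction set
# to the guest; if not, IPsec crypto falls back to plain software.
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI visible to guest"
else
    echo "AES-NI NOT visible - check the hypervisor CPU model (e.g. host-passthrough on KVM)"
fi
```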
Try aes-128-cbc or even no encryption at all (AH protocol).
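On the RouterOS side that is a one-line change per box; a sketch, assuming the default proposal is in use (the policy comment below is a placeholder for however you identify your policy):

```
# Use AES-128-CBC for the phase-2 proposal (the fastest option with AES-NI):
/ip ipsec proposal set default enc-algorithms=aes-128-cbc auth-algorithms=sha1
# Or drop encryption entirely and use AH (authentication only):
/ip ipsec policy set [find comment="gre-over-ipsec"] ipsec-protocols=ah
```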
It is not useful to assign 4 processors to CHR - it will not use them for parallel processing for tasks like this.
It must be another problem then. I am using the AH protocol between a 2011 and a CCR and I can saturate the link.
(which I cannot do using ESP because of the slow CPU in the 2011)
With your 750Gr3 you have accelerated AES and you could do ESP without problems, but apparently for you the CHR side is the bottleneck.
Put a 750Gr3 there too
I’m running CHR on Intel Haswell, without TSX, to support high availability failover to an Intel Xeon E5-2640v3 CPU. I’ve confirmed AES passthrough by booting the CHR guest into a CentOS 7 recovery environment.
That equates to 5.2 Gbps when using AES-128-CBC encryption within the virtual guest. I don’t see L2TP/IPsec in CHR reporting ‘Hardware AEAD’ when reviewing the installed SAs either…
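For anyone wanting to check the same thing, the offload status is visible per-SA:

```
# The flags column marks hardware-offloaded SAs (Hardware AEAD):
/ip ipsec installed-sa print
```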
A thought: it does not appear you are accounting for the various MTU overheads anywhere, so it may be an issue with path MTU discovery (PMTUD) or the TCP maximum segment size (MSS). Things to remember:
- Traditional Ethernet MTU is 1500, so most things use this by default
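To put rough numbers on that for GRE over IPsec (the header sizes below are the standard ones, the exact ESP figure varies with cipher and padding, and the interface name is a placeholder):

```
# Worked example for GRE over IPsec ESP:
#   1500  physical Ethernet MTU
#   -  24  GRE encapsulation (20-byte delivery IP header + 4-byte GRE header)
#   - ~53  ESP overhead for AES-CBC + SHA1 (SPI/seq, IV, padding, ICV)
#   => ~1423 usable, so 1400 is a comfortably safe tunnel MTU
/interface gre set [find name="gre-tunnel1"] mtu=1400
# Clamp TCP MSS on traffic leaving via the tunnel (MTU minus 40 bytes
# of IP+TCP headers) so broken PMTUD doesn't blackhole large packets:
/ip firewall mangle add chain=forward protocol=tcp tcp-flags=syn \
    out-interface=gre-tunnel1 action=change-mss new-mss=1360 tcp-mss=1361-65535
```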