A simple GRE tunnel over IPsec transport mode with AES-CBC (tried other algorithms as well), between a CCR1036 and an RB1100AHx2, maxes out at about 50 Mbit aggregate throughput: 25/25 tx/rx, 5/45 tx/rx, etc.
The same tunnel with the IPsec policy disabled gets over 500 Mbit aggregate throughput.
Running the same setup between two RB1100AHx2s, I get about 500 Mbit aggregate throughput with encryption enabled.
Obviously some problem with the CCR.
If I just do IPsec tunnel mode across the CCR, performance seems good; unfortunately, I need to use routing protocols.
ROS 6.12, also tried 6.11 with the same results (6.11 actually seemed slower?)
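For reference, a minimal sketch of a GRE-over-IPsec-transport setup like the one described, using hypothetical addresses (203.0.113.x, 10.255.0.x) and ROS 6.x syntax; exact parameter names may differ slightly between 6.x builds:

```
# CCR1036 side (public IP 203.0.113.1, peer is the RB1100AHx2 at 203.0.113.2)
/interface gre add name=gre-to-rb1100 local-address=203.0.113.1 remote-address=203.0.113.2
/ip address add address=10.255.0.1/30 interface=gre-to-rb1100

# Transport-mode IPsec policy protecting only the GRE traffic between the endpoints
/ip ipsec proposal add name=aes-cbc enc-algorithms=aes-128-cbc auth-algorithms=sha1
/ip ipsec peer add address=203.0.113.2 secret=changeme exchange-mode=main
/ip ipsec policy add src-address=203.0.113.1/32 dst-address=203.0.113.2/32 \
    protocol=gre action=encrypt level=require tunnel=no proposal=aes-cbc
```

The RB1100AHx2 side would mirror this with the addresses swapped. With `tunnel=no` the policy matches the GRE packets themselves (protocol 47), which is what ties the observed slowdown to the encrypted GRE path.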
I can second this. GRE (or IPIP) + IPsec seems very slow between a CCR and an 1100AHx2.
I also tried almost all AES variants, but the performance seems to be limited to around 50 Mbps.
Over the coming weeks I will spend some time testing all variants of IPsec connections.
The strange thing is that plain IPsec tunnels seem much faster than IPsec over IPIP or GRE tunnels.
We have one IPsec tunnel from a SonicWall NSA 3500 series to our CCR1036; it tops out at 17-18 MB/s (around 180 Mbps), which is fair because our CCR is connected at 500 Mbps and the SonicWall at 200 Mbps. This IPsec uses AES-256 at Phase 2.
Both are fiber connections to the same ISP, so in the best case they are both connected to the same switch.
I will be back with a table of tested speeds for all the IPsec variants.
Same here. If I use tunnel mode instead of transport mode for subnet-to-subnet communication, it's as fast as you would expect it to be. It only slows down when tunneling GRE or IPIP. It sounds like an operating-system quirk when the two are combined. Unfortunately, I need to run GRE tunnels for routing protocols, multicast, and VPLS.
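For comparison, this is roughly what the fast tunnel-mode policy looks like for subnet-to-subnet traffic, again with hypothetical subnets and peer addresses (ROS 6.x syntax):

```
# Tunnel-mode policy: encrypt traffic between two LANs directly, no GRE/IPIP involved
/ip ipsec proposal add name=aes-cbc enc-algorithms=aes-128-cbc auth-algorithms=sha1
/ip ipsec peer add address=203.0.113.2 secret=changeme exchange-mode=main
/ip ipsec policy add src-address=192.168.1.0/24 dst-address=192.168.2.0/24 \
    sa-src-address=203.0.113.1 sa-dst-address=203.0.113.2 \
    tunnel=yes action=encrypt level=require proposal=aes-cbc
```

The trade-off named above applies: a policy-based tunnel like this carries no interface, so routing protocols, multicast, and VPLS cannot run over it.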
I've tried several variations: all three AES modes (GCM, CTR, CBC), 3DES, Blowfish, and both SHA and MD5 hashing. I even set up L2TP/IPsec and got the same results. If I replace the CCR with an RB1100AHx2, speed is nearly 10x faster on the same config.
I have two CCRs, but haven't tested between them; they are a redundant pair. The two CCRs will end up serving five sites, each with two RB1100AHx2s. I suppose I could create a tunnel between them to test, but without hearing from support, it's kind of a moot point.
One week and multiple emails to support, no response. I know they know about it. After searching the forums, I can see it is a problem with their code on the Tilera CPU. Alpha quality at best.
They have already responded: that's the way it is. The TCP connection, GRE tunnel, forwarding, and IPsec are all processed on one core. Apparently one core of an RB1100AHx2 is faster than one core of a Tilera.
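If you want to confirm the single-core behavior on your own hardware, RouterOS can show per-core load while a transfer is running; something like the following (run during a bandwidth test) should show one core pegged while the rest sit idle:

```
# Per-core utilization snapshot
/system resource cpu print

# Live breakdown of what each core is spending time on (encrypting, networking, etc.)
/tool profile cpu=all
```

This is a diagnostic sketch, not a fix, but it makes the one-flow-per-core limitation visible.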
What I need to know is whether there are any specs on the various hardware platforms for performance, especially VPN throughput. That is something I need access to; then we can compare the specs to real-world results.
Mikrotik needs to be more forthcoming all the way around, and to swat bugs rather than add new features, on all hardware platforms.
I landed on this page because I'm having exactly the issue described in this topic.
@mrz: can you be so kind to share the relevant parts of your config, to see how your tunnel was configured?
This really looks like a serious issue and I think it would be good to look into this…
The CCR performance problem you are experiencing occurs because you are using IPsec and a GRE tunnel with only one tunnel to one other host. By design, RouterOS will not use more than one or two cores for this.
You can boost performance by using multiple tunnels to the same host and load balancing across them. For example, if you try the MikroTik btest server, you will find that a single session won't use more than one core, but running many sessions from the same computer with 64-byte packets can drive CPU usage to 100%.
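A rough sketch of the multiple-tunnel approach, assuming (hypothetically) that each router has two public addresses so that each GRE tunnel and its IPsec SA pair are distinct, plus an ECMP route to spread flows across the tunnels:

```
# Two GRE tunnels between the same pair of routers, using distinct endpoint addresses
/interface gre add name=gre1 local-address=203.0.113.1 remote-address=203.0.113.5
/interface gre add name=gre2 local-address=203.0.113.2 remote-address=203.0.113.6
/ip address add address=10.255.1.1/30 interface=gre1
/ip address add address=10.255.2.1/30 interface=gre2

# ECMP route toward the remote subnet: flows are balanced across both gateways
/ip route add dst-address=192.168.2.0/24 gateway=10.255.1.2,10.255.2.2
```

Each tunnel would get its own transport-mode IPsec policy as before. Since RouterOS balances ECMP per flow rather than per packet, a single TCP stream still rides one tunnel (and one core); the gain only shows up with many concurrent flows.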