tazdan
just joined
Posts: 16
Joined: Tue Nov 19, 2013 2:09 am

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Tue Apr 24, 2018 1:29 am

Hyper-V works because it does not assemble packets into 64k buffers. But this assembly happens only for traffic which source and destination is also virtual guest. If destination is physical router outside VM environment then there should be no problem with MPLS.
Thanks for the update. So is MikroTik working on a solution for this, or is this something VMware needs to change? What can we do to make it work, without migrating an entire VMware infrastructure to Hyper-V, that is!
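For background on why the explicit-null setting in the thread title matters: with use-explicit-null, the egress router still receives MPLS-encapsulated frames (reserved label 0) rather than plain IP, which is exactly the traffic the hypervisor's receive offloads then mishandle. A minimal, illustrative sketch of the RFC 3032 label stack entry encoding in Python (not RouterOS code, just to show what changes on the wire):

```python
import struct

# MPLS reserved label values (RFC 3032)
IPV4_EXPLICIT_NULL = 0  # egress still receives an MPLS-encapsulated packet
IMPLICIT_NULL = 3       # signalled only; the penultimate hop pops the label stack

def encode_label_entry(label: int, tc: int = 0, bottom: bool = True, ttl: int = 64) -> bytes:
    """Pack one 32-bit MPLS label stack entry: label(20) | TC(3) | S(1) | TTL(8)."""
    if not 0 <= label < 2 ** 20:
        raise ValueError("label must fit in 20 bits")
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

# With use-explicit-null=yes the frame reaching the egress CHR is MPLS,
# not plain IP -- the traffic the host's receive offloads then coalesce badly.
print(encode_label_entry(IPV4_EXPLICIT_NULL).hex())  # -> 00000140
```

The printed entry `00000140` is label 0, traffic class 0, bottom-of-stack set, TTL 64.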
 
nickdwhite
just joined
Posts: 11
Joined: Thu Jun 22, 2006 11:41 pm

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Thu Apr 26, 2018 7:15 pm

Hyper-V works because it does not assemble packets into 64k buffers. But this assembly happens only for traffic which source and destination is also virtual guest. If destination is physical router outside VM environment then there should be no problem with MPLS.

Are there multiple issues here? Are you saying that the speed issue should not exist if you simply have a single CHR per host on ESXi? I had 4 VM routers on 4 physical hosts (ESXi), all daisy-chained, and was still seeing the speed issue running from R1 to R4.

R1 <-> R2 <-> R3 <-> R4
(each of these is a separate Supermicro server with a single CHR VM installed)
 
aussiewan
newbie
Posts: 26
Joined: Wed Sep 07, 2011 5:28 am

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Mon Jul 23, 2018 8:04 am

Are there any updates on this issue? In particular, have there been any improvements since RouterOS 6.42, which has a heap of hypervisor integration improvements?
 
aussiewan
newbie
Posts: 26
Joined: Wed Sep 07, 2011 5:28 am

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Wed Jul 25, 2018 2:19 am

For those following, I emailed support and received the following response:
As far as I can tell the problem is reported and on the TODO list, but I cannot say exactly when it will be resolved.
One of the hypervisors that works best, with the least amount of problems, is Hyper-V; if this MPLS problem is really a big issue for you, then you might try switching to Hyper-V.
 
StubArea51
Trainer
Posts: 1739
Joined: Fri Aug 10, 2012 6:46 am
Location: stubarea51.net
Contact:

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Tue Aug 07, 2018 8:12 pm

Thanks for the update!
 
jkat
just joined
Posts: 1
Joined: Sun May 06, 2018 11:23 pm

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Mon Jan 07, 2019 1:15 pm

Any update on this?
 
konstantinJFK
newbie
Posts: 25
Joined: Wed Mar 08, 2017 3:44 pm
Location: Milan, Italy
Contact:

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Fri May 10, 2019 12:36 am

Hyper-V works because it does not assemble packets into 64k buffers. But this assembly happens only for traffic which source and destination is also virtual guest. If destination is physical router outside VM environment then there should be no problem with MPLS.
I can confirm the issue is still present in the 6.43.13 build.

The workaround of disabling large receive offload (LRO) for the whole host DOES NOT WORK:

https://kb.vmware.com/s/article/2055140


What to do?
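For anyone else trying to reproduce this: my reading of the linked KB is that the host-wide workaround comes down to a pair of advanced settings. This is a sketch only (setting names are taken from the KB and may differ by ESXi version), and as noted above it did not help in this case:

```shell
# Host-wide LRO disable for VMXNET3 vNICs, per VMware KB 2055140.
# Verify setting names with 'esxcli system settings advanced list' first.
esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0

# Confirm the values took effect:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
```

Reportedly the guest VMs need a full power cycle (not just an in-guest reboot) before offload changes apply.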
 
jbaird
newbie
Posts: 48
Joined: Tue May 10, 2011 6:11 am

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Fri Nov 01, 2019 4:00 pm

Does anyone know if this has been fixed? I had planned on rolling ESXi for a pair of VPLS aggregation routers, but it sounds like I may need to consider Hyper-V instead?
 
IPAsupport
Frequent Visitor
Posts: 62
Joined: Fri Sep 20, 2019 4:02 pm

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Tue Nov 05, 2019 3:45 pm

I'd use Hyper-V. I haven't seen any notification that this has been fixed.

I've heard other people mention that Proxmox with Open vSwitch works, but I haven't tested or confirmed that.
 
aussiewan
newbie
Posts: 26
Joined: Wed Sep 07, 2011 5:28 am

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Tue May 12, 2020 6:19 am

Hi all,

The latest stable release, 6.45.9, includes the following note:
*) system - correctly handle Generic Receive Offloading (GRO) for MPLS traffic;

Does anyone know if this fixes the issue covered in this thread? I don't have time to lab anything up for the moment to test.

Regards,
Philip
 
nz_monkey
Forum Guru
Posts: 2104
Joined: Mon Jan 14, 2008 1:53 pm
Location: Over the Rainbow
Contact:

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Tue May 12, 2020 8:31 am

Hi all,

The latest stable release, 6.45.9, includes the following note:
*) system - correctly handle Generic Receive Offloading (GRO) for MPLS traffic;

Does anyone know if this fixes the issue covered in this thread? I don't have time to lab anything up for the moment to test.

Regards,
Philip

I am curious, but am also in the same predicament with time :(


Kevin/Derek @ IPArchitechs have you guys had a chance to test this in your lab yet ?
 
IPAsupport
Frequent Visitor
Posts: 62
Joined: Fri Sep 20, 2019 4:02 pm

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Wed Sep 02, 2020 3:19 am

We haven't tested that yet, but as soon as we do, we'll share the results.
 
mducharme
Trainer
Posts: 1777
Joined: Tue Jul 19, 2016 6:45 pm
Location: Vancouver, BC, Canada

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Mon Apr 05, 2021 2:45 am

I am testing this - I am seeing promising results but still some weird behaviour.

When running a TCP btest on a hardware router (RB1100AHx2) going across an MPLS network to a CHR, I'm seeing full rates for send and receive.
When I run the btest on the CHR against the same RB1100AHx2, I get full rates for receive but only 2-3 Mbps for send. That is really bizarre to me: why does btest show full rates in the CHR-to-hardware direction when the hardware side runs the test, but not when the CHR sends, since it is the same direction? It seems the behaviour changes depending on which side initiated the btest.

VPLS seems fine from hardware to CHR, I can get full rates across the VPLS tunnel regardless of which side initiates the btest.

Things get even stranger with two CHRs routed to each other through a CCR, e.g. CHR1 <--> CCR <--> CHR2. When running a TCP btest on CHR2 against CHR1, I am seeing 1 Gbps receive and ~3 Mbps send. When I run btest on CHR1 against CHR2, I also see 1 Gbps receive and ~3 Mbps send. Whether the traffic is going from CHR1 to CHR2 or vice versa doesn't seem to matter; the only consistent pattern is that sending is always slow from the CHR that initiated the btest, while receiving is always fast.

Also, I tried connecting two CHRs via a VPLS tunnel. It doesn't seem to pass any traffic other than neighbor discovery: adding IPs to both sides of the tunnel and pinging the far side gives no response, and ARP does not complete (although a dynamic entry without the C flag appears in the ARP table with the far side's MAC). So CHR-to-CHR VPLS does not seem to work at all.

I have tried changing the use-explicit-null setting on both CHRs and it does not change this behaviour.

Update: I figured out the reason for the different send/receive behaviour. I have advertise filters on MPLS so that only the loopbacks are advertised for tunnel purposes. Btest doesn't allow specifying the source interface, so depending on the direction it is initiated in, the traffic may or may not have labels applied. When a label is pushed, it is slow, so the same problem described before still exists.
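To illustrate that diagnosis, here is a hypothetical RouterOS v6 sketch (prefixes are placeholders): with an advertise filter that only announces the loopbacks, traffic is only label-switched when it involves a loopback, so a ping sourced from the loopback exercises the slow label-push path even though btest cannot set a source address:

```
# Advertise only the loopback range via LDP (placeholder prefixes):
/mpls ldp advertise-filter add prefix=10.255.0.0/24 advertise=yes
/mpls ldp advertise-filter add prefix=0.0.0.0/0 advertise=no

# Ping the far-side loopback sourced from the local loopback,
# forcing labelled (label-pushed) traffic:
/ping 10.255.0.2 src-address=10.255.0.1
```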

So it looks like it is *not* fixed, but maybe I still have to disable LRO and TSO on the ESXi host. I will try that next.
 
bajodel
Long time Member
Posts: 551
Joined: Sun Nov 24, 2013 8:30 am
Location: Italy

Re: MPLS - massive throughput difference on CHR when using explicit nulls

Sun Jan 16, 2022 1:05 pm

Any update on this? I'm labbing with MPLS on CHRs (1 Mbps upload limit) and I cannot figure it out. Thanks.
