MikroTik? Please?
Any updates on this, MikroTik? I've been holding off on deploying CHR for MPLS because of this issue and would love to see it fixed.
Same here - been waiting for a LONG TIME! Hopefully there'll be an update soon…
Any updates on this?
Not that I have seen in the release notes, and nothing that MikroTik has told me about.
That being said, I haven't actually re-run the tests in quite some time; perhaps I'll get a chance soon to try again and see if anything has changed.
BUMP
I've been running an 8-router MikroTik lab (on ESXi) for several weeks and can confirm this is still broken in 6.41.3. I have not tried the latest release candidate (6.42rc52) though - I might give that a test tonight.
I moved my lab to Hyper-V Core 2012 R2, and can confirm that MPLS runs fine on that.
That's great info… We did a bunch of CHR testing on different hypervisors for BGP and presented the results at MUM Europe 2018 in Berlin. Hyper-V beat ESXi and Proxmox (KVM) by a significant margin.
Hyper-V works because it does not assemble packets into 64 KB buffers. However, this assembly only happens for traffic whose source and destination are both virtual guests; if the destination is a physical router outside the VM environment, then there should be no problem with MPLS.
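One hypothetical way to check whether a given host is doing this assembly, from inside the CHR itself, is to watch the size of received frames with the packet sniffer (the interface name here is a placeholder):

# run on the receiving CHR while traffic is flowing
/tool sniffer quick interface=ether1
# on a 1500-byte link, received frames several KB in size mean the
# hypervisor is coalescing packets (LRO/GRO) before the guest sees
# them - which is exactly what breaks labeled MPLS traffic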
Thanks for the update - so is MikroTik working on a solution for this, or is this something VMware needs to change? What can we do to make it work - without migrating an entire VMware infrastructure to Hyper-V, that is?!
Are there multiple issues here? Are you saying that the speed issue should not exist if you simply have a single CHR per host on ESXi? I had 4 VM routers on 4 physical hosts (ESXi), all daisy-chained, and was still seeing the speed issue running from R1 to R4.
R1 ↔ R2 ↔ R3 ↔ R4
(each of these is a separate Supermicro server with a single CHR VM installed)
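For anyone wanting to reproduce a lab like this, a minimal per-router LDP configuration might look something like the sketch below (addresses and interface names are hypothetical, and an IGP such as OSPF is assumed to already be distributing the loopbacks):

# loopback to use as the LDP LSR ID / transport address
/interface bridge add name=loopback
/ip address add address=10.255.0.1/32 interface=loopback
# enable LDP and run it on the links towards the neighbouring routers
/mpls ldp set enabled=yes lsr-id=10.255.0.1 transport-address=10.255.0.1
/mpls ldp interface add interface=ether1
/mpls ldp interface add interface=ether2
# verify label exchange and the resulting label-switched paths
/mpls ldp neighbor print
/mpls forwarding-table print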
Are there any updates on this issue? In particular, have there been any improvements since RouterOS 6.42, which included a heap of hypervisor integration improvements?
For those following, I emailed support and received the following response:
As far as I can tell, the problem is reported and on the TODO list, but I cannot tell exactly when it will be resolved.
One of the best-working hypervisors, with the least amount of problems, is Hyper-V; if this MPLS problem is a really big issue for you, then you might try switching to Hyper-V.
Thanks for the update!
Any update on this?
I can confirm the issue is still present in the 6.43.13 build.
The workaround of disabling large receive offload (LRO) for the whole host DOES NOT WORK!
https://kb.vmware.com/s/article/2055140
What to do?
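For reference, the host-wide workaround described in that KB article comes down to setting advanced options along these lines in the ESXi shell (affected VMs typically need a power cycle afterwards) - and, as noted above, it did not help with the MPLS problem:

# disable VMXNET3 LRO host-wide, per VMware KB 2055140
esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
# confirm the current values
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO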
Does anyone know if this has been fixed? I had planned on rolling out ESXi for a pair of VPLS aggregation routers, but it sounds like I may need to consider Hyper-V instead.
I'd use Hyper-V… I haven't seen any notification that this has been fixed.
I've heard other people mention that Proxmox with Open vSwitch works, but I haven't tested or confirmed that.
Hi all,
The latest stable release, 6.45.9, includes the following note:
*) system - correctly handle Generic Receive Offloading (GRO) for MPLS traffic;
Does anyone know if this fixes the issue covered in this thread? I don't have time to lab anything up at the moment to test, but if anyone does, a rough check is sketched below.
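A hypothetical verification on a pair of CHRs running 6.45.9 on the same ESXi host would be to confirm the far end is reached via a label and then push TCP across the LSP (the address and credentials below are placeholders):

# on the near-end CHR: check the destination shows up with an outgoing label
/mpls forwarding-table print
# measure TCP throughput across the LSP to the far-end loopback
/tool bandwidth-test address=10.255.0.4 protocol=tcp direction=both user=admin password=""
# if TCP across the LSP now matches plain-IP throughput, the GRO fix is working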
Regards,
Philip