I recently noticed I only get 200 Mb/s inter-VLAN routing, while the CPU is only hitting 40-50%. Is there a way to get the CPU utilized more? I also noticed FastTrack is not working when doing inter-VLAN routing.
I have checked the cables and devices using an unmanaged switch, and they all hit 1 Gbps.
I am trying to learn vlans on ROS too. I just recently got a hEX S. I know Ubiquiti ER-X much better, but I am starting to understand how ROS does things.
You may be hitting a single-core limit; the MT7621A SoC has 2 physical cores (4 threads with hyper-threading).
If I am understanding your config correctly, all bridge ports (2-5) are set with pvid=10 untagged, and ether3 (tagged 30) and ether5 (tagged 20, 40, 50) are hybrid ports.
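For reference, the config I'm picturing from that description would look roughly like this (a sketch only; the bridge name and the explicit untagged list are my assumptions, not from your export):

```
/interface bridge
add name=bridge1 vlan-filtering=yes
/interface bridge port
add bridge=bridge1 interface=ether2 pvid=10
add bridge=bridge1 interface=ether3 pvid=10
add bridge=bridge1 interface=ether4 pvid=10
add bridge=bridge1 interface=ether5 pvid=10
/interface bridge vlan
add bridge=bridge1 vlan-ids=10 untagged=ether2,ether3,ether4,ether5
add bridge=bridge1 vlan-ids=30 tagged=bridge1,ether3
add bridge=bridge1 vlan-ids=20 tagged=bridge1,ether5
add bridge=bridge1 vlan-ids=40 tagged=bridge1,ether5
add bridge=bridge1 vlan-ids=50 tagged=bridge1,ether5
```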
Can you show a diagram of how things are connected?
Does
/interface bridge port print
show the H flag, indicating hardware offloading is enabled?
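On v7 the offload status shows up as an H flag on each port entry; the output looks roughly like this (illustrative only, bridge and interface names assumed):

```
[admin@hEX] > /interface bridge port print
Flags: I - INACTIVE; H - HW-OFFLOAD
 #     INTERFACE  BRIDGE   HW   PVID
 0  H  ether2     bridge1  yes    10
 1  H  ether3     bridge1  yes    10
 2  H  ether4     bridge1  yes    10
 3  H  ether5     bridge1  yes    10
```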
How are you testing? iperf3? Copying large file from NAS?
Have you tested throughput on untagged vlan 10 between two ports? I realize this traffic won’t be routed, but just want to make sure this traffic isn’t going through the CPU.
What is connected to the two hybrid ports, ether3 and ether5? Access points, or vlan-aware switch?
Did you test between vlan 10 and vlan 20 from two different ports, as well as from the same port (ether5)? Also between ether3 vlan 30 and ether5 vlan 50?
You’re using bridge-based VLANs, which are only fully hardware-accelerated on a limited number of newer devices. While it works, it does result in the performance you are seeing.
Port 1: WAN
Port 2: PC
Port 3: virtual switch inside Hyper-V Hypervisor (2nd pc on vlan 10, Virtual NAS on vlan 30)
Port 4: TV
Port 5: Access point
I tested with iperf3 and with a large file. Hardware switching is enabled (H is visible).
The throughput between devices on the same vlan using the hEX S is 1 Gbps (tested from PC to Server and from PC to NAS).
Thank you for your help
50% of all cores combined means one core is at 100%. You can check by running:
/tool profile cpu=all
or
/system/resource/cpu/print
The reason for the slow speed may be the non-ARM CPU and the lack of a route cache in RouterOS 7.x.
MikroTik seems to be moving to ARM SoCs, and I don’t think the other/legacy devices will get any better.
So what is the solution? Is there anything I can do in RouterOS, or do I have to get a managed switch and trunk everything to the CPU instead of using bridge-based VLANs? Would I then get 1 Gbps inter-VLAN routing?
If the bottleneck is routing, I don’t see how using an external switch will help, since inter-vlan traffic needs to be routed (through the CPU).
Since vlan 10 is trusted, can your Hyper-V host give the virtual NAS a second interface, with a virtual interface in vlan 10? Then the largest consumers of the data (the PCs?) will be on the same layer-2 network as the NAS, and no routing will be required for devices on vlan 10; the traffic will just be switched.
You will lose all ability to firewall that traffic in the hEX S, but you may be able to firewall at the virtual NAS.
Do you trust your TV to be on the trusted network? It is currently on vlan 10. I would have expected it to be on the IoT network. But perhaps it is streaming from the NAS?
I don’t know enough about ROS to be much more help, it’s like the blind leading the blind.
It seems that jookray suggested that route caching (is that FastTrack?) may have been available in v6. But hardware-assisted bridging wasn’t available for the RB760iGS until 7.1rc5, so if you downgraded you would probably need to use an external switch, or possibly configure VLANs in the switch section (which is no longer the recommendation, if I understand correctly).
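For the switch-section approach, the config would be something along these lines (a rough sketch from memory, untested on the hEX S’s MT7621 switch chip; verify the exact parameter names against the MikroTik switch chip features documentation before relying on it):

```
/interface ethernet switch port
set ether2 vlan-mode=secure vlan-header=always-strip default-vlan-id=10
set ether5 vlan-mode=secure vlan-header=leave-as-is default-vlan-id=10
/interface ethernet switch vlan
add switch=switch1 vlan-id=10 ports=ether2,ether5,switch1-cpu
add switch=switch1 vlan-id=20 ports=ether5,switch1-cpu
```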
Or get a faster router.
Unless there is a way to enable fast path/FastTrack, and I don’t know whether that works on the hEX S or not. I haven’t gotten that far yet.
The route cache was removed from the Linux kernel; it is not coming back.
I’ve looked at the config again; it seems that it should be enabled. You could try using jumbo frames internally, if supported.
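If you do try larger frames, the MTU has to be raised on the Ethernet ports, the bridge, and every end device. Note the hEX family has a fairly small maximum L2MTU (around 2026 if I remember right, so nowhere near 9000-byte jumbo frames); check /interface print for the max-l2mtu of your ports first. A sketch, assuming the bridge is named bridge1:

```
/interface ethernet
set ether2,ether3,ether4,ether5 mtu=2000
/interface bridge
set bridge1 mtu=2000
```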
If you really need near 1 Gbps or more, I think the only way is an RB5009 or RB4011, as they are far more capable.
For example, my RB5009 reaches at least 2.5 Gbps between VLANs; I don’t have two 10G devices to test any higher.
The TV is on vlan 10 for casting. However, I haven’t really used casting, so I might as well put it on the IoT vlan. I will give the NAS a 2nd interface on the trusted vlan, though I find this a “dirty” solution.
1 Gbps with FastTrack would be ideal. If FastTrack does not work with inter-VLAN routing, I would like to at least get the advertised 380 Mb/s. Right now I am at half that speed.
The RB5009 does not have PoE out, which I use for my access point. And I don’t really want to buy the older RB4011, which is at the same price as the RB5009.
I still have a hard time understanding why FastTrack does not work when doing inter-VLAN routing but does work for WAN-to-LAN traffic.
What’s new in 7.2rc2 (2022-Jan-28 11:00):
*) bridge - added fast-path and inter-VLAN routing FastTrack support when vlan-filtering is enabled;
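For reference, FastTrack is turned on by an ordinary forward-chain firewall rule, like the one in the default configuration; once a build with that changelog entry is installed, the same rule should cover inter-VLAN traffic too (rules shown in their usual default-config form; adjust comments to taste):

```
/ip firewall filter
add chain=forward action=fasttrack-connection connection-state=established,related \
    comment="fasttrack established/related"
add chain=forward action=accept connection-state=established,related \
    comment="accept established/related"
```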
I am using my hEX in a lab situation (trying to find out whether we can use the hEX S where we use ER-Xs), and since I am starting fresh with ROS, I am focusing on v7 and have the latest testing version loaded. In the latest version I did notice quite a few bridge fixes; note the first one: “*) bridge - fixed FastPath when using “frame-types=admit-only-untagged-and-priority-tagged” setting;”
What’s new in 7.2rc4 (2022-Feb-22 13:37):
*) bridge - fixed FastPath when using “frame-types=admit-only-untagged-and-priority-tagged” setting;
*) bridge - fixed IP address on untagged bridge interface when vlan-filtering is enabled (introduced in v7.2rc2);
*) bridge - fixed PPPoE packet forwarding when using “use-ip-firewall-for-pppoe” setting;
*) bridge - fixed destination NAT when using “use-ip-firewall” setting;
*) bridge - fixed filter and NAT “set-priority” on ARM64 devices;
*) bridge - fixed filter rules when using interface lists;
*) bridge - fixed priority tagged frame forwarding when using “frame-types=admit-only-untagged-and-priority-tagged” setting;
But I wouldn’t use the testing version in a production environment. I would wait at a minimum until it reached stable, but normally I wait for software to mature a bit before moving to production, so I prefer the long-term releases. But if you are using VLANs on a bridge with the vlan-filtering option on an RB760iGS or RB750Gr3, you will see poor performance unless the hardware assist is in use. And as you discovered, without FastTrack, routing performance won’t be stellar either.