Hardware offloading for RB5009 or any RB series?

Hi everyone,
Just trying to get some thoughts and knowledge from those of you out there who have experience with this and have tested it.

First off, the reason I got curious about hardware offloading on the RB5009 is that I have a bridge interface named “bridge-customer” with multiple ports in it:
5 x VLAN interfaces (ether1.100, ether2.100, ether3.100, ether4.100, ether5.100)
and 1 physical interface (ether6).

ether1 to ether5 go to multiple switches, and traffic from those switches reaches the 5009 tagged as VLAN 100.
The device on ether6 doesn’t have VLAN capability, so untagged frames coming in on ether6 become part of the same L2 domain as ether1.100 to ether5.100.

I run DHCP on top of bridge-customer.
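For context, the relevant part of my config looks roughly like this (a trimmed sketch, not a full export; ether2.100 through ether5.100 are analogous to ether1.100, and the DHCP pool/network details are left out):

```
# Sketch of the current setup (trimmed)
/interface vlan
add name=ether1.100 interface=ether1 vlan-id=100

/interface bridge
add name=bridge-customer

/interface bridge port
add bridge=bridge-customer interface=ether1.100
add bridge=bridge-customer interface=ether6

# DHCP server runs on top of the bridge
/ip dhcp-server
add name=dhcp-customer interface=bridge-customer
```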

I just noticed that “Hardware Offload” is active on ether6, but not on the VLAN interfaces. I noticed this because when there’s heavy traffic received on ether1.100 to ether5.100, the CPU spikes; 500 Mbps can trigger 50% CPU usage. Mind you, I don’t have any filter rules, only a single NAT rule.

I don’t have a way to generate traffic on the device attached to ether6, so it’s an open question to me whether traffic from that hardware-offloaded port contributes to the CPU spike or not.

The website says the bridging and routing forwarding speeds of the RB5009 are the same. So it makes me think: if I do inter-VLAN routing with all ports as part of the bridge (so the ports are HW-offload active), tag the VLANs, and enable VLAN filtering, will it make a difference in terms of performance?
[Attachment: hw off.png]

Layer2 misconfiguration
VLAN in bridge with a physical interface
https://wiki.mikrotik.com/wiki/Manual:Layer2_misconfiguration#VLAN_in_bridge_with_a_physical_interface

Solution
To avoid compatibility issues you should use bridge VLAN filtering.
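In practice, for the setup from the first post, that would look something like this (a sketch, assuming VLAN 100 and the interface names mentioned above; the physical ports join the bridge directly instead of via per-port VLAN interfaces):

```
# Bridge VLAN filtering instead of VLAN interfaces in the bridge
/interface bridge
add name=bridge-customer vlan-filtering=yes

/interface bridge port
# trunk ports towards the switches (VLAN 100 arrives tagged)
add bridge=bridge-customer interface=ether1
add bridge=bridge-customer interface=ether2
add bridge=bridge-customer interface=ether3
add bridge=bridge-customer interface=ether4
add bridge=bridge-customer interface=ether5
# access port for the untagged device
add bridge=bridge-customer interface=ether6 pvid=100

/interface bridge vlan
# the bridge itself is a tagged member so the router can still do DHCP/routing for VLAN 100
add bridge=bridge-customer vlan-ids=100 \
    tagged=bridge-customer,ether1,ether2,ether3,ether4,ether5 untagged=ether6

# L3 interface for the DHCP server on VLAN 100
/interface vlan
add name=vlan100-customer interface=bridge-customer vlan-id=100
```

When applying this remotely, it is safer to add the bridge ports and bridge VLAN entries first and enable vlan-filtering last, to avoid locking yourself out.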

I was thinking the same thing, but then again, this is not a switch, and the bridging and routing capabilities of the device show the same numbers in their test results.
Anyway, I will be testing that in the lab. Hoping for a positive result.

I also have an RB5009 (for two weeks now) and configured it using the VLAN bridge approach.
recent documentation: https://help.mikrotik.com/docs/display/ROS/Bridging+and+Switching#BridgingandSwitching-VLANExample-TrunkandAccessPorts

When running iperf from a computer on VLAN10 to a NAS on VLAN2 running the iperf server, I get pretty close to 1 Gbps throughput (960-ish Mbps).
The CPU on the RB5009 is doing close to nothing, 1 or 2% … so yeah, HW offload works nicely.

It was already set up like that when I unboxed it (RouterOS 7.6); currently running 7.8.

Thank you for the feedback!

The block diagram answered my question:
https://i.mt.lv/cdn/product_files/RB5009UGS_220852.png

Essentially, I need to put the interfaces into the bridge to activate HW offload, then follow the VLAN bridge method, so I can make use of the 10 Gbps full-duplex capability. https://help.mikrotik.com/docs/display/ROS/Bridging+and+Switching#BridgingandSwitching-VLANExample-TrunkandAccessPorts

My issue was fixed after I enabled FastTrack!
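For anyone finding this later, the usual FastTrack rule pair looks like this (a sketch; in RouterOS 7 the fasttrack-connection action also accepts an hw-offload flag on hardware that supports it):

```
/ip firewall filter
add chain=forward action=fasttrack-connection hw-offload=yes connection-state=established,related comment="fasttrack established/related"
add chain=forward action=accept connection-state=established,related
```

FastTracked connections bypass most of the firewall and queue processing, which is why the CPU load drops so dramatically.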