Has anyone else seen limited throughput between ports of the ether1-5 switch group, even though the hardware switch chip is in use (ether1 as master for the ether2-5 slaves), so the traffic should never reach the CPU?

This is an RB1100AH running ROS 5.24. I couldn't experiment with it much as it's in production, but simply moving the cables from ether2-5 to ports of a separate gigabit switch (a good old Dell PowerConnect 5324, which ether1 is also connected to) made things work much better. Despite gigabit links, throughput across the RB1100AH switch group was only about 20-30 Mbps (rough measurement with speedtest.net). It only affected traffic between switch group ports; ether1-to-CPU throughput was fine.

It's a "router on a stick" with a few tagged VLANs: basically it serves PPPoE to customers on one VLAN and advertises their IP addresses via OSPF to BGP routers (Vyatta on x86; looking to try the CCR soon) on another. If it matters, the switch was passing tagged VLAN traffic but had no special VLAN configuration (vlan-mode "fallback", so it should be transparent to any VLAN traffic; VLANs are only configured on the CPU side).
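For reference, the relevant config was along these lines (a sketch from memory, not the exact production export; VLAN IDs and interface names here are illustrative, and this is the pre-6.41 master-port syntax used on ROS 5.x):

```
# Hardware switching: ether2-5 switched to ether1 via the switch chip,
# so inter-port traffic should stay off the CPU
/interface ethernet
set ether2 master-port=ether1
set ether3 master-port=ether1
set ether4 master-port=ether1
set ether5 master-port=ether1

# No switch-chip VLAN table entries; ports left in vlan-mode=fallback,
# which should pass tagged frames through untouched
/interface ethernet switch port
set ether1,ether2,ether3,ether4,ether5 vlan-mode=fallback

# VLANs configured only on the CPU side, on the master port
# (VLAN IDs are made up for this example)
/interface vlan
add name=vlan-pppoe interface=ether1 vlan-id=100
add name=vlan-uplink interface=ether1 vlan-id=200
```

If I get a maintenance window I'll try to redo the measurement with /tool bandwidth-test between two hosts on the switch group ports, since speedtest.net numbers are pretty rough.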
The reason I tried to use the RB1100AH switch group in the first place was to extend the port count of the 5324, but adding another 5324 instead looks like the better plan now. I've seen similar issues reported about the RB250GS (or was it the RB260GS?); perhaps it's related (same switch chip).