Bonding and clients

Hello
I have set up a bonding test (balance-rr + ARP monitoring) as follows:
(PC1)—1Gbps—(RB433GL#1)===[Bonding 100Mbps x 2]===(RB433GL#2)—1Gbps—(PC2)

Fault tolerance is working fine, but aggregation looks strange to me:

  • If I do a single bandwidth test (btest) between PC1 and PC2, I am capped at 100Mbps (roughly 50Mbps on each link)
  • If I do a single bandwidth test (btest) between PC1 and RB433GL#2, I get ~140Mbps (capped by the RB433's CPU)
  • If I do a multiple-connection test (iperf, 50 parallel connections) between PC1 and PC2, I get the maximum (~190Mbps)

So I can't explain why I have to set up multiple parallel connections to actually aggregate the links; is this a normal bonding feature?
Also, why does aggregation work fine with a single connection between PC1 and RB433GL#2 but not between PC1 and PC2…

PC1 is in the 192.168.0.0/24 network
Bonding is in the 192.168.1.0/24 network
PC2 is in the 192.168.2.0/24 network
and routes are set up on both RB433s

If anybody can enlighten me… thanks

Depending on the aggregation mode used (you didn’t mention what you use) this is normal behaviour. Have a look at MT’s documentation: http://wiki.mikrotik.com/wiki/Manual:Interface/Bonding

You’ll mostly find LACP (IEEE 802.3ad) out in the wild because it’s an official standard and it’s widely supported, even by rather cheap rackmount switches. What you describe is a typical finding and correct / intended behaviour: connections consistently use the same endpoint (port) within an aggregated link. You gain link stability and overall bandwidth, but single connections are limited to (and ensured of) single-port speed.
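To illustrate why each connection sticks to one port: under 802.3ad the outgoing slave is chosen by a transmit hash over header fields, so one flow always hashes to the same link. Here is a simplified sketch, loosely modelled on the Linux bonding driver's layer2 and layer3+4 `xmit_hash_policy` options (the exact hash in RouterOS/Linux differs in detail, and all MACs/IPs/ports below are made up):

```python
# Simplified transmit-hash sketch, loosely modelled on the Linux bonding
# driver's xmit_hash_policy options; real implementations differ in detail.

def hash_layer2(src_mac: bytes, dst_mac: bytes, n_slaves: int) -> int:
    """layer2-style policy: XOR of the last MAC octets, modulo slave count."""
    return (src_mac[-1] ^ dst_mac[-1]) % n_slaves

def hash_layer3_4(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
                  n_slaves: int) -> int:
    """layer3+4-style policy: fold ports and IPs together, modulo slave count."""
    return ((src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xFFFF)) % n_slaves

pc1_mac = bytes.fromhex("001122334455")  # hypothetical addresses
pc2_mac = bytes.fromhex("66778899aabb")

# A single PC1->PC2 connection always hashes to the same slave ...
slave = hash_layer2(pc1_mac, pc2_mac, 2)
print([hash_layer2(pc1_mac, pc2_mac, 2) for _ in range(5)])  # same slave 5 times

# ... while many parallel connections (varying source ports) can spread
# across both links under a layer3+4 policy.
ip1, ip2 = 0xC0A80001, 0xC0A80201  # 192.168.0.1 and 192.168.2.1
slaves = {hash_layer3_4(ip1, ip2, sport, 5001, 2)
          for sport in range(40000, 40050)}
print(slaves)  # both slaves get used
```

This is why 50 parallel iperf connections fill both links while a single btest stream cannot: the hash is deterministic per flow, by design, to keep packets of one stream in order.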

Thank you for your answer.
I have used balance-rr + ARP monitoring, as stated in my first post. There is no significant change with 802.3ad; I’ll take it as normal behaviour then.
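For balance-rr specifically, the usual explanation for the single-flow cap is packet reordering: consecutive packets of one TCP stream are striped across links with slightly different latencies, arrive out of order, and the resulting duplicate ACKs/retransmits throttle the flow. A toy model of the effect, with hypothetical (not measured) latencies:

```python
# Toy model of balance-rr striping one TCP flow across two links with
# slightly different one-way latencies (hypothetical numbers).

SEND_INTERVAL = 0.5          # ms between packets leaving the sender
LINK_LATENCY = [1.0, 3.0]    # ms latency of slave 0 and slave 1

arrivals = []
for i in range(10):
    link = i % 2                         # round-robin striping
    send_time = i * SEND_INTERVAL
    arrivals.append((send_time + LINK_LATENCY[link], i))

# Receiver sees packets sorted by arrival time, not by sequence number.
arrival_order = [seq for _, seq in sorted(arrivals)]
print(arrival_order)  # -> [0, 2, 4, 1, 6, 3, 8, 5, 7, 9]
# TCP interprets this reordering as loss and backs off, capping one stream.
```

With 802.3ad the per-flow hash avoids this reordering entirely, which matches your observation that switching modes didn't lift the single-stream cap: no common mode stripes one TCP flow across links without this penalty.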