CCR-1009-8G-1S-1S+ tops out at ~500Mbps?

Hi Folks,

I emailed Mikrotik support about this almost a month ago, but haven’t had a reply, so hoping someone on the forum here can help!

We bought a CCR-1009 just over a month ago, when we moved into a new datacenter, to try to drop our power usage (we'd been using an old HP DL360 G3!). Our configuration is: we have 2x 1Gbps ethernet ports to our upstream, bonded as bond1 using LACP, connected to ether3 and ether4. We also have 2x 1Gbps ethernet ports on ether1 and ether2, bonded as bond0 using LACP to our switch (an HP ProCurve J4906A, 3400cl-48G). bond0 carries 8 or so VLANs, providing various subnets to our internal and customer servers.

Our upstream had been having some throughput issues, so they asked us to spin up a VM for them to run iperf etc. between our network and theirs. They eventually tracked it down to a 1Gbps-per-ASIC limit on their core switch (d'oh!), re-jigged the ports in the bond to their core router, and fixed that issue. That in turn revealed that we simply couldn't do more than around 480Mbps to any host outside our network.

That led me to run iperf between hosts on our own network, and I found I can't do more than 500Mbps between hosts when crossing VLANs. I fired up Winbox to check port utilisations, and discovered that traffic from server1 to server2 goes in and out the same interface in the bond, while traffic from server2 to server1 uses the other interface. I then involved other servers and found the right combination (of MAC addresses, I assume) so that traffic would come in one interface and out the other on the bond; with that I could get to 600Mbps throughput. It got me wondering whether it was a similar switch-ASIC issue to the one our upstream had been seeing, but I found that I can run multiple concurrent iperf sessions between multiple servers on our network, provided I don't cross VLANs (and so don't have to go through the CCR), and they will all happily pull 1Gbps at the same time.
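For anyone reproducing this, the per-member utilisation I was eyeballing in Winbox is also visible from the CLI while iperf runs (interface names per our setup above):

```
# watch which bond member each flow is hashed onto
/interface monitor-traffic ether1,ether2
```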

So then I tried running the iperfs on multiple machines at once, crossing VLAN boundaries, but that just halved my throughput: instead of 480Mbps, I was getting ~240Mbps in/out on each port.

We were on 6.10, but have upgraded to 6.15 without seeing any difference. When I open Resources in Winbox, I can see that with one iperf running, one CPU core is at 99%, and with two iperfs running, two CPU cores are at 99%. So I'm assuming each ethernet interface is being handled by a single CPU core, which hits capacity at around 500Mbps in+out?
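The same per-core numbers are visible from the CLI, for anyone checking over SSH:

```
# per-core load; with one iperf running, a single core sits at ~99%
/system resource cpu print
```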

Is there any way to make this faster (i.e. hit line speed)? Have I maybe configured something incorrectly? I noticed that ether1-4 are on a switch chip and ether5-8 are not, so I have moved things around so our bonds are now on ether5/6 and ether7/8, but this has made no noticeable difference.

I'd really appreciate any help you can offer :)

Thanks,

Damien

same issue here

CCR1016, bonded towards a Cisco 3750.
iperf tops out at around 900Mbit/s without routing, within the same VLAN (Host A connected to the CCR1016, Host B connected to a 3750 port).


interVLAN routing tops out at 240Mbit/s.

I also have a support ticket open with Mikrotik and have not heard back. I was running 6.13 and am now on 6.17, with the same issue.

# jul/22/2014 00:02:35 by RouterOS 6.17

/interface bridge
add comment=CORE l2mtu=1590 name=bridgeVLAN10 priority=0x1
add comment=MGMT name=bridgeVLAN99 priority=0x2


/interface bonding
add mode=802.3ad name=bonding-C3750-0 slaves=ether3,ether4 transmit-hash-policy=layer-2-and-3

/interface vlan
add interface=bonding-C3750-0 name=vlan10-c3750 vlan-id=10
add interface=bonding-C3750-0 name=vlan99-c3750 vlan-id=99

/interface bridge port
add bridge=bridgeVLAN99 interface=vlan99-c3750
add bridge=bridgeVLAN10 interface=vlan10-c3750

/ip address
add address=192.168.1.254/19 interface=bridgeVLAN10 network=192.168.0.0
add address=10.254.1.1/16 interface=bridgeVLAN99 network=10.254.0.0

802.3ad doesn't do per-packet load balancing, just per-flow, so any individual flow will not exceed the speed of the single port it's going over. With L2+L3 hashing chosen, pretty much every connection from a given computer through a local default route will hash to the same link. Better load balancing would be L3+L4, but it seems Mikrotik doesn't have an LACP-compatible L3/L4 hashing mechanism.
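For what it's worth, RouterOS does accept transmit-hash-policy=layer-3-and-4 on a bonding interface, e.g. against the config posted above (untested by me; the docs warn this mode is not fully 802.3ad compliant, since packets of a single flow can end up split and reordered across links):

```
/interface bonding
set [ find name=bonding-C3750-0 ] transmit-hash-policy=layer-3-and-4
```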

I think I do understand that.

But that does not explain why we get 900Mbit/s over the bonded link within the same VLAN, while interVLAN routing only does 200Mbit/s over the same physical environment.

(The only thing changed: assigning the physical port to a different VLAN bridge.)

Ah, yeah, that probably falls into the "not quite fully multithreaded" category. Does one of the CPUs get pegged?

No, overall load is 1-4%, with a single core maxing out at 10%.

Are you aware that CCR1009 ports 1-4 belong to the same switch group? They connect to the CPU through a single gigabit link, so if you're testing between those ports, you're pretty likely filling up the link between the switch chip and the CPU.
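On models that expose the switch chip, you can check which ports share a switch group from the CLI (assuming the 1009 exposes it the same way other switch-chip models do):

```
/interface ethernet switch port print
```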

Not sure about the 1009, though the 1016-12G has no switch chip, and Mikrotik says each port is hard-wired to a vCPU, so each should get 1Gbit/s. Bonding should therefore get N x 1Gbit/s overall, or close to 1Gbit/s per session.

Even with ROS 6.19 I do not get over 300Mbit/s. There seems to be a slight improvement of 20-30Mbit/s with the latest ROS, though it's still far from 1Gbit/s.

Again, this is interVLAN routing on a CCR1016, with bonded ethernet interfaces towards the source (a Cisco 3750 stack).