Hi Folks,
I emailed MikroTik support about this almost a month ago but haven't had a reply, so I'm hoping someone on the forum here can help!
We bought a CCR-1009 just over a month ago, when we moved into a new datacenter, to cut our power usage (we'd been running an old HP DL360 G3!). Our configuration: two 1 Gbps ethernet ports to our upstream, connected to ether3 and ether4 and bonded as bond1 using LACP. Another two 1 Gbps ports, ether1 and ether2, are bonded as bond0 using LACP to our switch (HP ProCurve J4906A, 3400cl-48G). bond0 carries eight or so VLANs, providing various subnets to our internal and customer servers.
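In case it helps, the relevant part of our config looks roughly like this (written from memory rather than a live export; the VLAN name and ID are just illustrative):

```
/interface bonding
add name=bond0 slaves=ether1,ether2 mode=802.3ad transmit-hash-policy=layer-2
add name=bond1 slaves=ether3,ether4 mode=802.3ad transmit-hash-policy=layer-2
/interface vlan
add name=vlan100 interface=bond0 vlan-id=100
```

(transmit-hash-policy=layer-2 is the default, so we haven't explicitly set it.)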
Our upstream has been having some throughput issues, so they asked us to spin up a VM they could use to run iperf etc. between our network and theirs. They eventually tracked it down to a 1 Gbps-per-ASIC limit on their core switch (d'oh!), re-jigged the ports in the bond to their core router, and fixed that. Which then revealed that we simply couldn't do more than around 480 Mbps to any host outside our network.
That led me to run iperf between hosts on our own network, where I found I can't do more than 500 Mbps between hosts when crossing VLANs. I fired up Winbox to check port utilisation and discovered that traffic from server1 to server2 goes in and out the same interface in the bond, while traffic from server2 to server1 uses the other interface. By involving other servers I found the right combination (MAC addresses, I assume) so that traffic would come in one interface and out the other, and with that I could get 600 Mbps throughput. It got me wondering whether it was a switch ASIC issue like the one our upstream had been seeing, but I found I can run multiple concurrent iperf sessions between multiple servers on our network, all happily pulling 1 Gbps at the same time, provided I don't cross VLANs (and so don't go through the CCR).
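From reading the bonding docs, I gather the per-host-pair stickiness comes from the bond's transmit-hash-policy: with the default layer-2 policy, one MAC pair always hashes to one member link. If I've understood correctly, hashing on IPs and ports as well should spread flows better, something like:

```
/interface bonding set bond0 transmit-hash-policy=layer-3-and-4
```

Though I assume the switch's own LACP hashing would also need to distribute the return traffic, so this alone may not be the whole story.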
So then I tried running the iperfs on multiple machines at once across VLAN boundaries, but that just halved my throughput: instead of 480 Mbps, I was getting ~240 Mbps in/out on each port.
We were on RouterOS 6.10 but have upgraded to 6.15 without seeing a difference. When I open Resources in Winbox, I can see that with one iperf running, one CPU core sits at 99%, and with two iperfs running, two CPU cores are at 99%. So I'm assuming each ethernet interface is being handled by one CPU core, and hitting capacity at around 500 Mbps in+out?
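For what it's worth, the per-core numbers from the CLI match what Winbox shows; I've been watching them with:

```
/system resource cpu print
/tool profile cpu=all
```

(/tool profile at least gives a hint of what the busy core is spending its time on, though I'm not sure how to read all the categories.)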
Is there any way to make this faster, i.e. hit line speed? Have I maybe configured something incorrectly? I notice that ether1-4 are on a switch chip and ether5-8 are not, so I've moved the bonds to ether5/6 and ether7/8, but this has made no noticeable difference.
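In case I'm misreading the port layout, this is how I checked which ports sit on the switch chip (assuming the switch menu on the CCR behaves like on other models):

```
/interface ethernet switch port print
```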
I’d really appreciate any help you can offer!
Thanks,
Damien