
RB1100 performance when shaping 1Gbps

Posted: Mon Jan 17, 2011 10:24 am
by mihaialdea
Hello,

I currently have RouterOS 4.16 set up as a transparent bridge shaping the traffic for a couple of servers, some of which are VPS nodes. All in all there are 500+ queues, and at peak times about 500Mbps goes through it. At the time I installed it I had an Intel server with an X3340 lying around, which performed outstandingly with only 2 VPS nodes and about 500 queues, but it was only handling about 50Mbps at peak times.
After I added a couple more servers and around 20 more queues, the traffic rose to about 500Mbps (sustained throughput during peak times). However, the clients started complaining: there are latency problems and packet loss, and as I'm writing this I can't even log in to the shaper, nor to the machines behind it.
My question is: how much throughput can an RB1100 handle properly? I know its CPU is roughly 10 times slower than the quad-core X3340, but I'm thinking that with a good instruction set on the PowerPC CPU it would perform quite well at lower throughput.

From your experience, what is the maximum throughput an RB1100 can sustain without packet loss and dropped packets (other than the normal queue packet drops)? I want to know whether it's better to buy a couple of RB1100 routers and split the shaped traffic load across them, or whether I should forget about them and go for a fat i7 server with Intel server NICs.
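For reference, each of my queues is just a plain simple queue on the bridge, along these lines (the address and limits here are made-up examples, not my actual config):

```
/queue simple
add name="vps-node-01" target-addresses=203.0.113.10/32 \
    max-limit=100M/100M comment="example only - one queue per server/VPS"
```

Multiply that by 500+ targets and that's the whole workload the box has to handle.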

Thank you.

Re: RB1100 performance when shaping 1Gbps

Posted: Mon Jan 17, 2011 6:59 pm
by disca
Can't answer your question re throughput - we're not pushing that much traffic through here.

We have an RB1000 shaping 100Mbit/s or so of webserver-type traffic (and providing firewall). It uses around 10-15% CPU. I think this could be reduced quite a bit with rule optimisation etc.
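By rule optimisation I mean things like accepting established and related connections at the top of the forward chain, so the bulk of packets match the first rule or two and skip the rest. A rough sketch (adapt chain/rule order to your own setup):

```
/ip firewall filter
add chain=forward connection-state=established action=accept \
    comment="most traffic matches here and skips the rules below"
add chain=forward connection-state=related action=accept
```

Counting how many packets hit each rule (the counters in /ip firewall filter print stats) tells you which rules are worth moving up.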

If power usage is an issue, I think you'd find a couple of RB1100s use quite a lot less than a normal 1U server etc.

In terms of network design - I'd much rather have multiple smaller units sharing the load than one large server - greater redundancy possible, etc.