I’ve already tried that, I’ve bridged ports 1 and 6. Same thing…
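For reference, that bridge test is only a couple of lines in the CLI; a minimal sketch (the bridge and port names are whatever your config uses):

```
/interface bridge add name=bridge1
/interface bridge port add bridge=bridge1 interface=ether1
/interface bridge port add bridge=bridge1 interface=ether6
```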
What does support@mikrotik.com say about these issues?
If you are going to ask them, include a supout.rif, a description, and a link to this thread.
Anyone care to try v5RC8?
The scheduled downtime could be anywhere from 1 minute for the upgrade itself to 5 minutes, or even 1 hour if you need to recover the router. Prepare a .backup beforehand just in case, or simply remember the config or write down the details so you can recreate it.
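If it helps, preparing that fallback is quick from the CLI; a minimal sketch (the file names are just examples):

```
# binary backup of the full configuration
/system backup save name=before-upgrade
# human-readable export you can re-run or copy settings from
/export file=before-upgrade
```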
I’ve done some further tests that led to new conclusions.
I put a dedicated machine behind it and started downloading an ISO from a local mirror server. The speed went up to ~500 Mbps with everything turned off: no queues, no conntrack, no firewall, nothing.
It was still capable of handling about 200 Mbps with conntrack on, one packet marking rule (just to test) and one queue. The only difference was that now I had a dedicated out-of-band port. I know that this should not be relevant, but anyway.
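To be concrete, the “one rule and one queue” test was roughly this kind of config (v4/v5-era syntax from memory; the mark name, subnet and limits are placeholders):

```
# connection tracking stays on for this test
/ip firewall connection tracking set enabled=yes
# a single packet-marking rule, just to exercise mangle
/ip firewall mangle add chain=forward action=mark-packet new-packet-mark=test-mark passthrough=no
# one simple queue over the test subnet
/queue simple add name=test-queue target-addresses=192.168.1.0/24 max-limit=200M/200M
```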
So I tried to put it in a real-life situation and put an online gaming server behind it. 60-80% CPU usage at about 30 Mbps of traffic and about 10k active connections. So I believe that once you hit the 50 Mbps mark with normal traffic you will get packet loss and delays.
Excuse me for not thinking of that before, but here’s a screenshot of the traffic/CPU usage. Right now it’s 9:35 AM and the gamers are sleeping, which is why the traffic is so low. Yet the CPU usage is abnormally high.

What does support@mikrotik.com say about these issues?
If you are going to ask them, include a supout.rif, a description, and a link to this thread. Anyone care to try v5RC8?
The scheduled downtime could be anywhere from 1 minute for the upgrade itself to 5 minutes, or even 1 hour if you need to recover the router. Prepare a .backup beforehand just in case, or simply remember the config or write down the details so you can recreate it.
I’ll do that. I first asked the forums because you guys have real-life experience with these toys, while MT apparently only benchmarks them and puts the test results in a document.
Anyone care to try v5RC8?
The scheduled downtime could be anywhere from 1 minute for the upgrade itself to 5 minutes, or even 1 hour if you need to recover the router. Prepare a .backup beforehand just in case, or simply remember the config or write down the details so you can recreate it.
I haven’t tested it on the RB1100 yet, but I tested it on an Intel server and it had some very strange problems. Read more here: http://forum.mikrotik.com/t/hardware-requirements-for-a-high-performance-traffic-shaper/44434/1
Thanks.
Seems that something is seriously wrong in the config itself if you get that much CPU load under such low traffic.
BTW, did you notice these sexy routers under “Made for Mikrotik”?

Network-optimized or not, I doubt any PPC matches a quad-core setup. And if those Ethernet interfaces are at least on PCIe x1, they would surely fit your demands. Too bad they, IMO, cost at least twice as much as the 1100.
Seems that something is seriously wrong in the config itself if you get that much CPU load under such low traffic.
BTW, did you notice these sexy routers under “Made for Mikrotik”?
Network-optimized or not, I doubt any PPC matches a quad-core setup. And if those Ethernet interfaces are at least on PCIe x1, they would surely fit your demands. Too bad they, IMO, cost at least twice as much as the 1100.
Hi,
I got the point: go for x86. And I did, and you wouldn’t imagine the experience… I was relatively OK with an SR1630GP with its onboard NICs as long as I didn’t throw more than 200 Mbps at it. Due to the PCI interface limitations, it was limited to 200 Mbps.
I do however need to shape about 1 Gbps, so I figured this can’t be done by one machine; I need two that could each handle ~500 Mbps. So I bought a PRO/1000 ET with the 82576 chipset and plugged it in. The only MT version that sees the board is 5.0rc7 (for some reason rc8 doesn’t load them properly). I had to do some tweaks, such as lifting the interface MTU to 1504 because of the multiple tagged VLANs I had there (4.16 didn’t need that, but it also didn’t have the multi-CPU support that 5.0 has).
5.0 is rather shabby with respect to shaping tagged VLANs; everything almost comes to a stall at some points, but after disabling “Use IP Firewall For VLAN” in the bridge settings, it all works OK.
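Roughly, the tweaks I mean look like this (v5-era syntax; the interface and VLAN names are just examples):

```
# raise the MTU so tagged frames (1500 + 4-byte 802.1Q header) fit
/interface ethernet set [find name="ether2"] mtu=1504
/interface vlan add name=vlan100 vlan-id=100 interface=ether2
# stop passing VLAN-tagged bridge traffic through the IP firewall
# (only relevant when use-ip-firewall is enabled in the bridge settings)
/interface bridge settings set use-ip-firewall-for-vlan=no
```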
Bottom line here is that the RB1100 is too damn slow, and going for top-notch hardware isn’t an option either, due to the lack of support for the (I quote) “expensive uncommon chipset” that the 82576 is.
I’ve even opened a topic to find out the top config that someone is using in production. I already feel sorry for the money spent on the RB1100, and now I’m starting to feel sorry about the money spent on the PRO/1000 ET; it seems it’s too damn new for Mikrotik to fully support it.
…Due to the PCI interface limitations, it was limited to 200 Mbps…
Could you elaborate a little bit more on this?
I read a slightly different story here: http://ixbtlabs.com/articles2/gigeth32bit/index.html
…the throughput of the Gigabit Ethernet reaches 1000 Mbit/s which is approximately equal to 120MB/s, i.e. it’s nearing the speed of the 33MHz 32bit PCI bus.
and right now I am unable to actually test a transfer that passes through two PCI Gigabit cards on the same PC.
It depends on the system. You can have multiple PCI channels in a single mobo, or all of them can be lumped into one channel.
PCI 32-bit/33 MHz caps out at ~1064 Mbit/s.
PCI 32-bit/66 MHz or 64-bit/33 MHz caps out at ~2128 Mbit/s.
PCI 64-bit/66 MHz caps out at ~4264 Mbit/s.
So if you have a mobo with 2 Gbit NICs in PCI slots and those slots share the same channel, you would not be able to reach 1 Gbit full-duplex transfer speed; for that you would need a 66 MHz PCI slot, in 32-bit or 64-bit flavors. However, if each NIC sits on its own PCI channel, then even with 32-bit/33 MHz you could do 1 Gbit full duplex.
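As a rough sanity check on those figures (theoretical bus peaks of width × clock, before any protocol overhead; the exact values depend on whether you count the clock as 33 or 33.33 MHz):

$$
\begin{aligned}
32\ \text{bit} \times 33.33\ \text{MHz} &\approx 1066\ \text{Mbit/s} \;(\approx 133\ \text{MB/s})\\
32\ \text{bit} \times 66.66\ \text{MHz} = 64\ \text{bit} \times 33.33\ \text{MHz} &\approx 2133\ \text{Mbit/s}\\
64\ \text{bit} \times 66.66\ \text{MHz} &\approx 4266\ \text{Mbit/s}
\end{aligned}
$$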
So I tried to put it in a real-life situation and put an online gaming server behind it. 60-80% CPU usage at about 30 Mbps of traffic and about 10k active connections. So I believe that once you hit the 50 Mbps mark with normal traffic you will get packet loss and delays.
For comparison, I do over 50 Mbps with 30K active connections in the connection tracker and some firewall rules, but no queues, on an RB493AH with under 40% CPU.
I too was looking for a MikroTik-made, ready-to-go solution and did quite some research on it. It seems like the older 1333 MHz RB1000 performs better than the RB1100, but it’s more expensive and difficult to get. And once you look into it, it seems it’s not that expensive to build your own custom x86 machine, or to buy an already-made one in those good-lookin’ 19" rack cases. What’s more, in a custom-built machine you can put an HDD, which doesn’t suffer from rewrite cycles, and in general I’ve noticed that all my x86-based machines perform way better with RouterOS than any MikroTik PPC or mipsbe box.
For comparison, I do over 50 Mbps with 30K active connections in the connection tracker and some firewall rules, but no queues, on an RB493AH with under 40% CPU.
It may have something to do with the tagged VLANs. I transport a few tagged VLANs to the switch port where the MT is plugged in. If I disable “Use IP Firewall For VLAN” under the bridge settings I get <1 ms ping replies to the hosts on the tagged VLANs, while with the option enabled I get 20-40 ms when there’s about 200 Mbps passing through. This happens on an x86 machine, but I assume the same thing happens on a RouterBOARD.
I too was looking for a MikroTik-made, ready-to-go solution and did quite some research on it. It seems like the older 1333 MHz RB1000 performs better than the RB1100, but it’s more expensive and difficult to get. And once you look into it, it seems it’s not that expensive to build your own custom x86 machine, or to buy an already-made one in those good-lookin’ 19" rack cases. What’s more, in a custom-built machine you can put an HDD, which doesn’t suffer from rewrite cycles, and in general I’ve noticed that all my x86-based machines perform way better with RouterOS than any MikroTik PPC or mipsbe box.
I’m seriously thinking of buying something from here:
http://www.itxdepot.com/xcart/home.php?cat=12000
for my future servers running RouterOS. While RouterOS is an absolutely astonishing product and it’s worth every penny, I can’t say the same thing about the RB1100. If you have over 200 Mbps of traffic to pass through it, you’ll have to start disabling connection tracking, not do packet marking, not use too many firewall rules and not use simple queues. Basically, if you have a couple of hundred Mbps passing through, you need to turn the RB1100 into an unmanaged switch. I’d go for a Linksys switch, or a second-hand Cisco for that matter, at half the price.
So yes, I would definitely recommend RouterOS + x86 and strongly advise against the RB1100 for more than 50 Mbps of throughput. My personal advice, after trying different stuff, is not to aim for high-end devices (quad-core Xeons with 82576 or 85675 network chips). For a bridge/router, go for PRO/1000 GT NICs and dual-core CPUs with clock speeds as high as possible (e.g. a Core 2 Duo @ 3.06 GHz). This type of hardware has been in production for many years, and the above-mentioned NICs are very stable according to the MikroTik HCL. I haven’t tested it myself; I rely only on this:
depending on the packet size, we can get even 1.3Gbit throughput with RB1100 and firewall and conntrack on. Maybe there is some other problem?
That’s great, but with how many firewall rules, etc.? The RB1100 is lacking in CPU power for something that’s aimed at being a core or near-core router.
The reality is that x86 is where you have to go when you have a core router handling MPLS with 50-100 BGP-based VPLS circuits running, 20-40 PPPoE sessions coming in over a VLAN and ~100 IP-based firewall mangle rules. Add in OSPF and a BGP setup with 1000-1500 routes, and you want to push 100+ Mbit of traffic.
We’re currently at the limits of the RB800 we have as an outer core router running half of that and pushing 30 Mbit. The CPU is sitting at 70%+ all the time, and we will be skipping the RB1100 upgrade and going straight to x86.
Here’s hoping that the RB1200 is a multi-core or multi-CPU system designed around ROS 5’s SMP support and can handle 1 Gbit of routed traffic in a setup close to what you see in a medium to large ISP’s core.
I didn’t say it would be more powerful than a 16-core Xeon router; currently we have no device to compete with that. But it is not as bad as the above poster implied, and we get much better results with them.
I should hope so, with 16 cores.
There are a lot of problems with going down the x86 path; ROS is rather picky and has had some interesting bugs with certain hardware in the past. It would be nice if MT produced a router aimed at medium to large ISPs that need the grunt but also need the backing from MT to support it. Something in the $1200-1600 USD range, with options for 512 MB/1 GB/2 GB/4 GB of RAM and enough CPU power to do 1 Gbit+ whilst doing BGP, routing, MPLS, etc.
Either that, or come up with a golden x86 1-2RU server that you can say, without a doubt, ROS won’t have any issues on.
Just an idea
ok, on it
I think no one here is saying that the RB1100 is a bad product. It’s just time to admit that it’s intended for small to medium-sized ISPs or companies that run lots of computers and do not need complex solutions the 1100 cannot handle. Many people are confused by seeing an expensive, high-end MikroTik router with 13 gigabit ports, and so they expect it to perform as a top-class router. Seems like gigabit is fitted there only as a means to get >100 Mbps of traffic through a single port. And what’s more, the evasive action from MikroTik, always giving notes about packet size, reminds me of Apple when they said “You’re holding it wrong”.
Besides, ain’t x86 systems more flexible? I mean, you can upgrade and replace various parts if they stop functioning or don’t seem to pack enough juice anymore, whereas in the case of an RB you have to throw it all away.
For the most important links, connect each cable to a different switch group; then throughput will be better. I.e. the speed from port 3 to port 4 is not as good as from port 3 to port 7.
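For anyone wondering what a “switch group” is in CLI terms, here is a rough sketch (RouterOS v5-era master-port switching; the assumption that ether1-5 and ether6-10 sit on separate switch chips on the RB1100 is mine, so verify it for your board):

```
# ports slaved to a master-port are switched in hardware within that group
/interface ethernet set [find name="ether2"] master-port=ether1
/interface ethernet set [find name="ether7"] master-port=ether6
# traffic the CPU handles between two ports of the SAME group shares that
# chip's single uplink to the CPU, which is presumably why e.g. port 3 -> 4
# is slower than port 3 -> 7 (different groups, two separate CPU uplinks)
```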