MT 2.9 as BGP edge-router

Dear Mikrotik People,
hello Forum,

We are a Germany-based ISP and we plan to deploy several software-based routers at some of our
locations. We plan to use MikroTik RouterOS as edge routers with BGP peers to our upstream and
peering providers. We want to connect these edge routers to our core L3 network using trunked
Gigabit links (802.3ad LACP), and we want them to be stable, resilient and fast.

For now, I've got some questions that I don't want to ask the support staff alone, but also
the community that has already deployed MT router solutions and actually lives with them:


1) Experience in edge networks (upstream/peering >1 Gbit/s)

Are there any people using MT routers in upstream or peering situations that handle
1 Gbit/s+ of bandwidth? What are your suggestions regarding CPU and hardware?

  • forwarding rate @ P4 3.0 GHz
  • with and without firewall

As an example, a P4 3.0 GHz with an Intel quad-port 1000 MT Gigabit Ethernet adapter will be
used as the single transit router to our primary upstream. What packet rates can we expect
the router to still handle?
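For reference, here is the back-of-the-envelope packet-rate math I am working from (plain Gigabit Ethernet wire overhead assumed, no VLAN tagging):

```python
# Theoretical line-rate packet estimates for a Gigabit Ethernet link.
# Each frame costs 20 extra bytes on the wire (8 B preamble + 12 B
# inter-frame gap) on top of the frame itself.

LINK_BPS = 1_000_000_000  # 1 Gbit/s
WIRE_OVERHEAD = 20        # bytes per frame on the wire

def max_pps(frame_bytes: int, link_bps: int = LINK_BPS) -> int:
    """Theoretical packets per second at a given Ethernet frame size."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD) * 8
    return link_bps // bits_per_frame

print(max_pps(64))    # minimum-size frames, worst case: 1488095 pps
print(max_pps(1500))  # full-size frames, best case: 82236 pps
```

So "1 Gbit/s" can mean anything between roughly 80 kpps and almost 1.5 Mpps, and it is the packet rate, not the bit rate, that loads the CPU.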

What impact does it have on the forwarding rate that neither HT nor SMP can be used?
What is the reason MT can't handle dual processors?


2) Kernel used in MT software

We are interested to know what kernel version RouterOS 2.9 is based on.
Is it already based on 2.6.x?

Is there NAPI support for Intel 1000 Mbit/s adapters in MT?


3) BGP implementation

As we must rely on our BGP network, I'm specifically interested in MT's BGP implementation.
Is there support for communities, filters, path prepending, route maps and all the other
more advanced stuff? Is there some kind of status logging for our neighbor connections?
Can we even send that to another (external) syslog server?
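For concreteness, this is roughly the kind of configuration we'd hope to be able to write (syntax guessed from later RouterOS releases, not verified on 2.9 routing-test; the address, prefix and chain name are placeholders):

```
# neighbor/route status logging to an external syslog server (address assumed)
/system logging action set remote remote=192.0.2.10
/system logging add topics=route action=remote

# example outbound filter with path prepending (routing-filter style)
/routing filter add chain=upstream-out prefix=198.51.100.0/24 \
    set-bgp-prepend=3 action=accept
```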


4) iBGP & VRRP on same interface

As mentioned, we want to use several MT peering/upstream routers that consolidate traffic
onto our core L3 network. We want each of them to speak to the other peering routers via iBGP,
and gateway backup should be done via VRRP. Is that possible on the same (bonding) interface
that faces the L3 core?
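Roughly what I have in mind, in (assumed) RouterOS CLI terms; interface names, addresses and the AS number are placeholders, and I don't know whether 2.9 accepts exactly this syntax:

```
# LACP bond towards the L3 core
/interface bonding add name=bond1 mode=802.3ad slaves=ether1,ether2
/ip address add address=10.0.0.2/24 interface=bond1

# VRRP instance on top of the bond for gateway redundancy
/interface vrrp add name=vrrp1 interface=bond1 vrid=10 priority=254
/ip address add address=10.0.0.1/32 interface=vrrp1

# iBGP session to the other edge router across the same subnet
/routing bgp instance set default as=64500
/routing bgp peer add remote-address=10.0.0.3 remote-as=64500
```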


I’m really looking forward to gathering more information about this promising project.

I would love to hear from people that actually use MT routers in production for similar
purposes.

Routing people - would you advise switching edge routers to MT? Right now, our overall network
load peaks at ~200 Mbit/s, with roughly 40-50 kpps. We plan to increase that load soon.

Thanks,
Frank

  1. We haven't tried 1 Gbit/s, but at 400 Mbit/s with a P4 2.8 GHz we see load averages of 30%. We use an Intel Pro/1000 MT dual NIC. At 1 Gbit/s your PCI bus would probably run out, so it's better to spread across PCI slots.

  2. Don't know the kernel, but we use 2.8.28 and it works well.

  3. BGP by default does not support filters, communities and the more advanced stuff. You can use 2.9 (which we have not tried) with routing-test, and it is supposed to have these features. By far, standard BGP is the stable one.

  4. We tried VRRP on the same interface but had some trouble keeping it alive. It worked, then suddenly stopped (this was a while back, with 2.8.21); we have removed it since and have not done it again. Don't know about bonding.

If you're looking for a decent router with good features, I will stand behind MT. The standard BGP works, and until they take routing-test and make it the standard routing package, we will continue to use the base routing package. It will definitely work for 200 Mbit/s.

In any case, you can always use multiple boxes, as well as set up one box for BGP and separate the functions.


Hello nikhil,

thank you for your answer.

Now we’re one step further.

I'm asking about the kernel version, and especially about NAPI support for Intel adapters, because NAPI massively offloads the CPU under high incoming packet load, e.g. during DoS attacks and the like. Published figures claim that a single Intel 1000 T adapter can handle up to 870 kpps (!) on a Pentium III with NAPI kernel support, which has been available since Linux kernel 2.4.19 and in the 2.6.x series.

Is there anyone who knows what kernel tree and version the MT software uses and can help me out with this question?

Second, if we use routing-test on 2.9, will we be able to configure it in the Winbox client, or only on the telnet console? I thought I read something about that in the forum…

Are there any people who have deployed 2.9 on BGP edges and done advanced BGP, with route maps, prepends, communities and filtering, for a transit AS?
I could not find much in the documentation…

thanks again, folks…

Why would you not use Cisco at your core? I would not prefer to have something like MT at the core of the network. I suppose you don't have time to lose on something that is still under development and not finished yet, like the routing-test package.

My style would be: use something that really works and is designed for the job.

Don't forget that the conventional PCI bus has a combined theoretical maximum of about 1 Gbit/s, minus overhead, divided by 2 (each packet crosses the bus going in and then out again).
You'd need a better bus for higher speeds.
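In rough numbers (nominal bus clocks, ignoring arbitration and protocol overhead; a sketch, not a measurement):

```python
# Back-of-the-envelope bus throughput for routing.
# Conventional PCI: 32-bit @ 33 MHz; PCI-X: 64-bit @ 133 MHz.
# A routed packet crosses the bus twice (NIC -> RAM, RAM -> NIC),
# so usable forwarding capacity is roughly half the raw figure.

def bus_mbit(width_bits: int, clock_hz: float) -> float:
    """Raw theoretical bus bandwidth in Mbit/s."""
    return width_bits * clock_hz / 1e6

pci = bus_mbit(32, 33e6)    # 1056.0 Mbit/s raw
pcix = bus_mbit(64, 133e6)  # 8512.0 Mbit/s raw

print(pci, pci / 2)         # ~528 Mbit/s usable for forwarding
print(pcix, pcix / 2)       # ~4256 Mbit/s usable for forwarding
```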

Here is a link about NAPI support I just ran into, too… it could be compiled into the kernel if it isn't already.

http://www.ntop.org/PF_RING.html

Sam

Thanks for all answers.

Before reading this forum, it seemed to me that MT really works and is designed for the job. I still believe in software-based routers, but I also still have to rely on an advanced BGP implementation. I would love to hear a developer's voice on this issue.

Are we talking about PCI-X 133 MHz? The limit you are referring to sounds to me like the limit of standard 33 MHz, 32-bit PCI. Of course, we plan to use PCI-X with a quad-port Gigabit Intel NIC. We won't use the full capacity of this NIC: we will bond two interfaces towards the L3 core, point the third interface at our first carrier, and use the fourth as a 100 Mbit/s backup to the same carrier, so the total routable load will range somewhere between 2 and 2.1 Gbit/s, subtracting overhead and interface restrictions.

I think PCI-X will do that job. But will MT, too? Should we use Xeon processors rather than P4s?

Thanks, Sam! That's exactly what I'm talking about. Just in case NAPI isn't enabled in the current MT kernel: are we able to recompile the MT kernel ourselves, or can this only be done by the developers?

Who do I have to contact to get clarification on this?

many thanks again,
frank