RAM, RAM, and more RAM. Other than that, I have seen no major issues. I have read posts from people with problems but never experienced any myself. Watch your upgrades, though, and always have a backup unit when doing them.
PS: a nice processor will help too, but more than anything, RAM.
RAM is what is making us look away from the Cisco 7206s in our stable, because of the 256 MB limit of the NPE-300 (262 MHz). CEF crashes, and a reload is necessary if you want to stay in production. The current IOS for the Cisco 7206 doesn’t list that as a resolved caveat (IMHO).
We’d like the Sun Fire X2100 (Opteron, 2.6 GHz) at $745 if we could get favorable references on a DS3 card for its PCI bus.
There are a lot of things MikroTik could do better, but a Cisco 7206 burping/crashing under attack is an ugly sight, and the smell is worse.
Just a thought: use the 7206 to terminate the DS3 and hand it off to the MT box via Ethernet. That eliminates any potential driver issues or problems with a DS3 card, and the 7206 will be plenty sufficient for that task once you remove BGP from it…
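For what it’s worth, here’s a rough sketch of the MT side of that setup (addresses and AS numbers are made up, and the parameter names follow the newer routing package, so 2.9’s syntax may differ): the DS3 stays on the 7206, the MT box sits behind it on Ethernet, and the eBGP session to the upstream runs multihop across the 7206.

# hypothetical addressing: 7206 inside interface 10.0.0.1/30, MT box 10.0.0.2/30,
# upstream peering address 192.0.2.1 in AS 64496, our AS 64500
/ip address add address=10.0.0.2/30 interface=ether1
# static route so the MT box can reach the upstream's peering address through the 7206
/ip route add dst-address=192.0.2.1/32 gateway=10.0.0.1
# eBGP to the upstream, multihop because the 7206 sits in the path
/routing bgp instance set default as=64500
/routing bgp peer add name=upstream1 remote-address=192.0.2.1 remote-as=64496 multihop=yes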
BTW, what’s your free memory level on your 7206? I’ve got a 7204, NPE-300, 256 MB, full routes from two providers, ~50 MB free memory, but every 60 seconds my ping times jump sky high during BGP updates… I’m quite fed up with it.
ITX has been under DoS attack for about 10 days, aimed at a couple of sites we host in the community’s interest: one having to do with spam and one having to do with fixes for M$ exploits (see the recent DDJ article).
The symptom is that CEF crashes after exhausting memory (free memory down as low as 411.91 kB), with processor utilization higher than 85%.
Level 3 claims not to have a traceback facility, so they have chosen not to help with the spoofed addresses. As a consequence, I can no longer recommend Level 3 as an upstream.
We run 3 BGP sessions, two to upstreams and one to Akamai (in house), plus OSPF (customer announcements), on a 7206 NPE-300 with 256 MB (the max). The other box, a 7206 with an NPE-400 and 256 MB, is not causing grief at this time.
We are happy to hear about any real-world solutions; I like your idea of using the 7206 as a front end to the MT, keeping the DS3 terminated on the 7206. The bugs recently fixed in ROS’s BGP code concern me somewhat; I believe there was one about multiple BGP neighbors. BGP is hard to implement correctly.
Be aware that you’ll lose a lot of the ability to configure your BGP sessions if you move to ROS. The last place I worked, our border router was a ROS box, after the ancient Cisco box we had couldn’t handle the full routing table any more. We were dual-homed, with full tables from each provider. Something as simple as shutting off one session for testing? You can’t; you have to remove the config for that provider, and you’d better write down the settings before you do. Want to block a default route from one of your providers? I never could figure that one out; the prefix lists don’t work as expected. Want to change the weights of routes from a provider? Forget it, not possible.
If you’re in a vanilla environment, nothing special, ROS is an option, but I couldn’t recommend it otherwise.
Routing → Filters work in my experience, dual-homed with full route loads.
routing bgp peer disable XX - disables a BGP peer and drops any loaded routes from that peer
The main thing is to set the in and out filter names in the peer details; then, when using Routing → Filter, you can apply the right filter to the right peer.
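To make that concrete, here’s the sort of thing I mean (peer and chain names are made up, and the parameter names come from the newer routing package, so 2.9’s routing-test syntax may differ): drop a default route from one provider, prefer one provider’s routes, and take a session down for testing without deleting it.

# attach in/out filter chains to each peer
/routing bgp peer set isp1 in-filter=isp1-in out-filter=isp1-out
/routing bgp peer set isp2 in-filter=isp2-in out-filter=isp2-out
# discard a default route learned from isp1
/routing filter add chain=isp1-in prefix=0.0.0.0/0 prefix-length=0 action=discard
# prefer isp1 for everything else (a rough stand-in for Cisco weight/local-pref tweaks)
/routing filter add chain=isp1-in action=accept set-bgp-local-pref=200
/routing filter add chain=isp2-in action=accept set-bgp-local-pref=100
# take one session down for testing without removing its config
/routing bgp peer disable isp1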
The only issue with ROS BGP is its stability; set next-hop was a problem in .28 and .29 but was fixed in .30.
Not in 2.8, and we stopped doing automatic upgrades after the 2.8 fiasco, where they didn’t regression-test changes to BGP and broke routing for anybody who upgraded. That was around 2.8.24 or 2.8.26, I believe. Taking your border router down for an upgrade is enough trouble for an ISP; taking it down and not having it be usable afterwards teaches you never to trust that vendor again.
How many small ISPs do you know of that have a test lab with a duplicate border router sitting there for the occasional test? And for the record, I’m not saying don’t use BGP; I’m warning people that MT’s implementation is not feature rich, nor do they have a good track record of regression testing their own product before releasing it.
Your first post in this thread included your own example of that in 2.9.
Exactly. I started with 2.9.27 and have had good experiences. Like I said, have a backup one to run every time you do an upgrade. Testing is always best.
MT BGP is mostly working very well, but outbound filters do not work properly in all cases; I just ran into this today.
iBGP is still a little squirrelly. 512 MB of RAM is good for full route tables, and a beefy CPU is required if you want minimal problems.
We have been using .27 on 2 border routers for months now. We are not receiving all routes because they lock up when exchanging routes between themselves.
Having a dev environment for a BGP setup is required; you cannot expect it to work reliably under load until you’ve tested it in dev. Chicken-and-egg problem though: how can you get a dev BGP feed to test with? :)
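One cheap way around the chicken-and-egg problem is to fake the feed: a spare ROS box (or any BGP daemon) on the bench announcing a pile of test prefixes to the router under test. Something like this, with made-up AS numbers and addresses, again in the newer routing package syntax:

# on the lab "upstream" box (AS 64511), originate some test prefixes
/routing bgp instance set default as=64511
/routing bgp network add network=198.51.100.0/24
/routing bgp network add network=203.0.113.0/24
/routing bgp peer add name=dut remote-address=10.99.0.2 remote-as=64500
# on the router under test (AS 64500)
/routing bgp instance set default as=64500
/routing bgp peer add name=lab-feed remote-address=10.99.0.1 remote-as=64511

It won’t look like a real full table, but it’s enough to exercise filters and session handling before you touch the border.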
I’m warning people that MT’s implementation is not feature rich,
How can you say that if you have not even used it in more than a year? Since 2.8, almost everything has changed.
And we’d know this how? The 2.9 PDF manual doesn’t even include a chapter for BGP any more; you removed it after 2.8. And don’t tell me about the routing-test package; I’m talking about the default routing package.
MikroTik violates the trust of its customers every time it releases a version that hasn’t been regression-tested well enough to avoid introducing bugs into previously working packages. If you’re going to compete with other companies for the network-edge market, you had better start hiring for your own test lab.