Community discussions

fflo
newbie
Topic Author
Posts: 46
Joined: Wed Jan 02, 2019 7:59 am

BGP full table routing on CCR2xxx with route filters

Wed Jul 19, 2023 5:02 pm

Hi,

Running a BGP full table on CCR2xxx equipment works smoothly only if the "Input Filter" (and "Output Filter") is disabled.
Enabling an "Input Filter" list on a BGP full-table session to filter out invalid prefixes results in one CPU thread getting stuck at 100%, and route updates need more than 10 minutes to be processed.

Is there a way to implement prefix filters without having to struggle with poor performance?
We haven't noticed such behavior using native prefix filters in Bird running on plain Linux.

Thanks for your feedback.
 
wiseroute
Member
Posts: 352
Joined: Sun Feb 05, 2023 11:06 am

Re: BGP full table routing on CCR2xxx with route filters

Wed Jul 19, 2023 5:22 pm

hello,

is this ccr in production, or just a test feed?

which routeros version? and in what role? i mean: ebgp/ibgp? direct peering or rr-client?

if you have a sample config and screenshots - maybe @mrz could help you.
 
sirbryan
Member
Posts: 316
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: BGP full table routing on CCR2xxx with route filters

Wed Jul 19, 2023 5:52 pm

What is the affinity set at? It should be "alone" and "alone" (which puts the BGP input and output processes on their own cores). Also, what is your filter? A list of dozens of bogons, or a regex of some kind?

I've got a couple of CCR2116s pulling in full tables and filtering out everything beyond a single AS (one filter line), and they load up pretty quickly. They are all running 7.10.
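For reference, rough sketches of both things I mean (the connection name "transit-a" and AS 65001 are placeholders, adjust to your setup):

# put BGP input and output processing on their own cores
/routing/bgp/connection set transit-a input.affinity=alone output.affinity=alone

# a one-line filter that keeps only routes whose AS path starts with one AS
if (bgp-as-path "^65001") { accept; } else { reject; }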
 
chechito
Forum Guru
Posts: 3007
Joined: Sun Aug 24, 2014 3:14 am
Location: Bogota Colombia
Contact:

Re: BGP full table routing on CCR2xxx with route filters

Wed Jul 19, 2023 7:16 pm

Related Content

Advanced BGP tips: affinity
https://youtu.be/py4up-lO8zY
 
jplitza
just joined
Posts: 9
Joined: Mon Sep 20, 2021 4:12 pm

Re: BGP full table routing on CCR2xxx with route filters

Thu Oct 05, 2023 1:46 pm

I can totally confirm the observation of the OP. We have nearly 100 filter rules. Not counting jumps, there are roughly 22 "ebgp-in" and 5 "ebgp-out" for each peer (some shared). See below.

Changing filters attached to only one peer can cause 10 minutes of 100% CPU load on one core.

The setup is relatively normal I'd say:
Transit A   Transit B   Peering P
    |           |           |
    +-------+   |    +------+
            |   |    |
     +------+---+----+------+
     | CCR2216-1G-12XS-2XQ  |
     +------+---+----+------+
            |   |    |
   +--------+   +    +------+
   |            |           |
iBGP I     Customer C1  Customer C2
So we get full tables from transits A and B, apply bogon filtering (which sadly requires one rule per bogon network, since filtering via address lists doesn't match longer prefixes), community filtering and setting and some local preference calculation. Basically, the ebgp-in chain for transit looks like this (each line is one item in /routing/filter/rule, with some jumps omitted):
delete bgp-communities ^64496:;
set bgp-local-pref 170;
if (dst in 0.0.0.0/8 && dst-len in 8-32) { reject; }
if (dst in 10.0.0.0/8 && dst-len in 8-32) { reject; }
if (dst in 100.64.0.0/10 && dst-len in 10-32) { reject; }
if (dst in 127.0.0.0/8 && dst-len in 8-32) { reject; }
if (dst in 169.254.0.0/16 && dst-len in 16-32) { reject; }
if (dst in 172.16.0.0/12 && dst-len in 12-32) { reject; }
if (dst in 192.0.0.0/29 && dst-len in 29-32) { reject; }
if (dst in 192.0.2.0/24 && dst-len in 24-32) { reject; }
if (dst in 192.168.0.0/16 && dst-len in 16-32) { reject; }
if (dst in 198.18.0.0/15 && dst-len in 15-32) { reject; }
if (dst in 198.51.100.0/24 && dst-len in 24-32) { reject; }
if (dst in 203.0.113.0/24 && dst-len in 24-32) { reject; }
if (dst in 240.0.0.0/4 && dst-len in 4-32) { reject; }
if (dst in 255.255.255.255/32 && dst-len == 32) { reject; }
append bgp-communities 64496:120;
if (bgp-local-pref > 0) { set bgp-local-pref -bgp-path-len; }
if (bgp-communities includes graceful-shutdown) { set bgp-local-pref 0; }
if (bgp-communities includes blackhole) { set blackhole yes; }
if (bgp-communities any-list restrict-hw-offload and not bgp-as-path [[:TOP_AS:]]$) { set suppress-hw-offload yes; }
rpki-verify default; if (rpki invalid) { reject } else { accept }
After the routes are accepted into the RIB, they are forwarded to iBGP and customers, filtered by communities, prefixes, and source:
if (dst == 192.0.2.0/24) { accept; }
if (dst == 198.51.100.0/24) { accept; }
if (dst == 203.0.113.0/24) { accept; }
if (not dst in 2001:db8::/32 && dst-len in 1-48 && protocol ospf && afi ipv6) { accept; }
if (bgp-communities any-list redistribute-to-customers) { accept; }
I built a lab with CHR and limited the CPU time the VMs may use. /routing/stats/process indicates that each transit session used ~2.5 minutes of process time, while the customer session took >3 minutes (probably consecutive). That's a bit surprising, given that in my test setup the customer session exports many routes and only receives one, and exporting (5 filter rules, no best-path selection) should be much faster than importing (22 filter rules plus best-path selection).

Without filters, process times are around 1 minute (except for one transit, which only took 10s - I guess it came in first and no best-path selection was necessary).

Any hints on how to optimize this setup? Is "if (A || B || C) { accept; }" faster than "if (A) { accept; }", "if (B) { accept; }"? It's not clear to me how setting affinity could benefit this setup, since all the documentation and also the YouTube video only indicate it helps when there are few cores (of which the CCR2216 surely has enough). And I'm already using input.accept-nlri on customer sessions.
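For clarity, the combined variant I'm asking about would be a single rule like this (sketch only, I haven't measured it; the dst-len conditions are omitted for brevity):

if (dst in 192.0.2.0/24 || dst in 198.51.100.0/24 || dst in 203.0.113.0/24) { reject; }

instead of three separate single-condition rules as shown above.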

(Side note I noticed while labbing: the dhcp-client times out installing a default route on boot, because the routing stack is already busy parsing BGP.)
 
chubbs596
Frequent Visitor
Posts: 90
Joined: Fri Dec 06, 2013 6:07 pm

Re: BGP full table routing on CCR2xxx with route filters

Mon Oct 09, 2023 6:17 pm

Hi,

we are seeing a similar issue with long processing times and high CPU load, even on a powerful CHR with 10x 3 GHz AMD Rome cores. For me it mostly affects output/advertised routes. We have about 75 active peers, and we also make use of jump rules to a set of specified dst-address filters, along with setting some communities, BGP MED, etc.

depending on the change we make to the out-filters, it loads all 10 CHR CPU cores to 80-90% for about 30 seconds before the updates are sent out to peers.

normally we would change something like "set bgp-out-med xx;" or "set bgp-communities some-list;".

On v6 this normally took 2-5 seconds to apply.

We have a support case open with MikroTik but are not really getting any feedback.

If this takes so long on a CHR with 10 powerful x86 cores, I can see it taking more than 10 minutes on a CCR2004 or CCR2x16 router.

From what I can gather from some other discussions, it seems RouterOS parses all filters for every route in the routing table, so the more routes you have in the RIB, the worse this issue gets.
