> As minutes pass, the fasttracked connections will gradually drop from the ASIC and go back to the CPU, until there is no benefit from running L3-offloaded fasttrack. And lastly, some customers on the AGG DHCP server (100.64.0.0/19 CGNAT local connected route) will not receive incoming UDP port 5060 packets for SIP VoIP services. But the local PPPoE connected routes on this router are not affected at all.
> And again, we turn off L3HW offloading at the switch, and voila, everything gets fixed as soon as the CPU takes over.

You might want to check these notes from the docs:
> Advanced Monitor shows some info

Yes, how many routes fit into the switch ASIC's TCAM for l3hw depends on the kind of routes and the specific TCAM implementation. AFAIK there is also no way to show the available TCAM space on MikroTik switches with l3hw.
Still, it is a reasonable expectation that routes are loaded into the switch ASIC's TCAM as far as they fit, and that the rest is handled by the CPU. What we saw in our tests with l3hw on the CCR2x16 is that routing in general starts to struggle once the l3hw TCAM is full and RouterOS has to decide dynamically which routes to handle on the CPU and which to hand off to the switch ASIC. This is admittedly no trivial problem to solve, but as long as it remains an issue, running l3hw is risky.
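The v7.3+ behavior the docs describe (quoted in the notes below), offloading routes with longer prefixes first when the hardware table fills, can be sketched as a toy model. This is illustrative only, not MikroTik's actual selection algorithm:

```python
# Toy model of the RouterOS v7.3+ behavior described in the docs notes:
# when the hardware route table is full, routes with longer prefixes are
# offloaded first and shorter prefixes stay on the CPU.
# Illustrative sketch only, not MikroTik's actual selection algorithm.

def split_routes(prefix_lengths, hw_capacity):
    """Split prefix lengths into (hw, cpu) lists, longest-prefix-first."""
    ordered = sorted(prefix_lengths, reverse=True)
    return ordered[:hw_capacity], ordered[hw_capacity:]

hw, cpu = split_routes([0, 8, 16, 22, 24, 24, 29, 30], hw_capacity=4)
print(hw)   # [30, 29, 24, 24]
print(cpu)  # [22, 16, 8, 0]
```

This also matches the `ipv4-shortest-hw-prefix: 24` readings in the monitor outputs later in the thread: everything /24 and longer fit into hardware, everything shorter went to the CPU.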
Yes, we caught that, but in this case what happens is that it eventually drops to 0 fasttracked connections on the L3HW-offloading monitor (we have been using it via CLI; the WinBox monitor was just added in this last version and shows the same). I can understand that only a limited number of connections will be candidates for offloaded fasttrack, but it keeps dropping until it doesn't work at all.
1 Depends on the complexity of the routing table. Whole-byte IP prefixes (/8, /16, /24, etc.) occupy less HW space than others (e.g., /22). Starting with RouterOS v7.3, when the Routing HW table gets full, only routes with longer subnet prefixes are offloaded (/30, /29, /28, etc.) while the CPU processes the shorter prefixes. In RouterOS v7.2 and before, Routing HW memory overflow led to undefined behavior. Users can fine-tune what routes to offload via routing filters (for dynamic routes) or suppressing hardware offload of static routes. IPv4 and IPv6 routing tables share the same hardware memory.
2 When the HW limit of Fasttrack or NAT entries is reached, other connections will fall back to the CPU. MikroTik's smart connection offload algorithm ensures that the connections with the most traffic are offloaded to the hardware.
3 Fasttrack connections share the same HW memory with ACL rules. Depending on the complexity, one ACL rule may occupy the memory of 3-6 Fasttrack connections.
https://help.mikrotik.com/docs/spaces/R ... iceSupport
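The "fine-tune what routes to offload" note above can be sketched in RouterOS v7 syntax. The `suppress-hw-offload` route flag exists in v7; the routing-filter set-action below is written from memory and the chain name is an example, so verify both against your RouterOS version:

```
# Keep a static route on the CPU (suppress-hw-offload is a RouterOS v7 route flag):
/ip/route/add dst-address=198.51.100.0/24 gateway=10.0.0.1 suppress-hw-offload=yes

# Keep short dynamic prefixes on the CPU so hardware space is left for
# more-specific routes. Chain name "bgp-in" is an example; verify the
# set-action syntax against your RouterOS version:
/routing/filter/rule/add chain=bgp-in rule="if (dst-len < 24) { set suppress-hw-offload yes; } accept"
```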
> talking about Offloading Fasttrack Connections in L3 hw offload scenarios:

Correct, we have been using the CLI tool until 7.18, and the GUI tool still shows the same. When offloading starts, it reports somewhere around ~3500 fasttracked connections and the CPU drops to 0-1%, but the routes problem drops at least half the traffic of the downstream AC, and HW fasttracking keeps falling until it reaches 0; you can see the CPU rise as it drops, until there is no HW fasttrack left. We are evaluating building a loaded x86 server to run RouterOS and replace the CCR2216 while this gets sorted out, which will probably take a long time!
Quoting part of my original post, which is probably a little bit too long. TL;DR: in general, just common carrier services that were already running on the previous CCR1072 for years with no issues. The upstream port that does firewalling is set to l3-hw-offloading=no, as expected for this type of config. The rest of the ports, including the downstream port for the downstream AC router (CCR2216), are set to HW offload.
> MPLS is 0% HW offloaded. Any packets with a label will hit the router's CPU. The only saving grace is the limited MPLS/VPLS FastPath support, which will somewhat accelerate the software-based forwarding.
> Because of this, the CCR2116 and CCR2216 are really only suitable for very basic ISPs doing pure L3 forwarding. As soon as you scale to more than a single AS ingress/egress point, you will need MPLS for traffic engineering, and MikroTik doesn't currently have a router that can provide high-performance MPLS push/pop operations.
> In theory MikroTik could add ASIC offload for MPLS label operations on the CCR2116/CCR2216, but as the Marvell Prestera CX/DX switch ASICs are designed for datacentre/campus switching, I expect that in ISP edge use cases there would be issues with the number of prefixes in the ASIC's forwarding database, and with the speed of adding/updating/removing prefixes/labels in the FDB. There are also a number of issues with MPLS on the Marvell Prestera CX/DX ASICs which I expect MikroTik would also run into if they added support.

Yup, we are aware of this, and I would LOVE for them to add more VPLS/MPLS offloading capabilities (RFC 2544 fails on VPLS over CPU). But in our case we use LDP/MPLS solely for loopback VPLS transport; it is filtered to loopback addresses only, so no customer or backbone traffic runs over MPLS on our network. In the extensive testing we have done trying to figure out what is causing this issue, we even completely disabled LDP/MPLS, with absolutely no change in the outcome. Like I mentioned before, offloading does what it is expected to do (moves CPU load over to the ASIC), but it breaks routes in the system.
> the amount of problems related in this topic simultaneously (l3 hw offload, fasttrack hw offload, MPLS) leads me to think that you are combining too many roles on the same device
> maybe you should consider in your design the possibility of segregating/separating the different roles/functions: the divide-and-conquer principle
> if you are not mixing features on the same machine, I think you should create a separate topic for each specific issue to make things clearer

Not really; it would appear so, but no, it is just a common AGG/BNG router. Our network is fairly segmented by function. The edges run BGP for peering/IP-transit only, the cores do internal BGP and OSPF routing only, and the aggregation and access routers handle CGNAT and customer transit as expected. Nothing is at the point where one box runs too many roles, and this is the same setup that has run trouble-free for at least the last 5-6 years on the CCR1072 routers we are replacing.
> a "CCR1072 drop-in replacement"

I think it was a marketing statement, and from the perspective of product segmentation it can be true. But the CCR1072 and CCR2216 are fundamentally very different machines; only if a plethora of very specific factors align can you expect a direct swap-and-replace operation. I hope you find a solution.

Yep… What sold us was the ASIC capabilities of the CCR2xxx series, even though the CPU is inferior to the 1072's. In the end the hardware is more than capable of getting the job done, or at least on paper it is... but the software doesn't want to cooperate with us.
> Is everything in the main routing table or are you running any VRFs?

Only the main table.
> Hi! We also see this problem on 2116/2216 in our network.

Could you share your setup scenario and a bit more detail on the exact issues and triggers you observe in your case?
> I have a dozen 2116's in production. Three as border routers (two pulling in full tables), two as BGP aggregation, two more for CGNAT, and a handful more as provider edge to downstream BGP customers.
> While we have a combined 50 Gbps available to us (10 Gbps to 5 providers), we only use about 5 Gbps at peak, serving just north of 1100 subscribers across four downstream providers. No MPLS is needed anywhere. L3HW offload shaves about 5-10% of CPU load, depending on the router's function (no L3HW offload on CGNAT). None of them peaks over 25% when in all-CPU mode.
> If you look at the stats on the product pages, with L3HW offload taken out of the equation, the 2116 outperforms the 2216 in CPU-only routing throughput for some reason.
> Personally, I'm building some Ampere-based machines and testing RouterOS performance on them, both in CHR and bare-metal modes, with the intention of putting CPU-heavy loads on those machines when the time comes. I envision a small cluster of 2-4 of those machines with a number of redundant CHRs performing the various functions that L3HW can't offload. One of those server builds (with a mix of new and used parts) runs about the same cost as a brand-new 2216. You could get a switch of your choosing with high port density, run the Ampere servers as a router-on-a-stick with two 40/100 Gbps ports LAG'd into that switch, and come out ahead.

The two 2216's we have as Edge/Border BGP-only routers perform flawlessly (multi-homed setup). They see a combined throughput of ~22 Gbit/s and manage the entire internet IPv4 and IPv6 table (no problems there; the only caveat is to NEVER open the routing table via WinBox or it could crash, so we manage these via CLI only). They have all ports hardware offloaded, do BGP IP-transit with 3 other providers directly, and also run an IXP session on one of the 100G ports. Apart from one time when the filters got stuck and we had to disable and re-enable them, they perform like they should, and since they are fully hardware offloaded, CPU usage never goes past 2-5% max, and that much only because IPv6 traffic is not offloaded.
We will keep the 2216's as Edge routers only, and we are in the process of replacing the BNG 2216's with Juniper MX240's. It is quite the expensive ordeal, training-wise as well, but oh well...
> Are you doing anything with route filters to suppress L3HW offload and prioritize offloading certain routes?

No, the route filters are only for BGP accept rules, prepend priorities, etc. All 900k+ IPv4 routes are HW offloaded. Like I said, no issues with those; problems arise when using them as BNG routers.
> Fascinating, since the docs state 16-36K for the 2116/98DX3255 and 60K-120K for the 2216/98DX8525.
>
> I'm running 7.15.3 on my BGP borders/aggs (7.16 locked up routes and required a reboot after a few days; I haven't tried 7.17 or 7.18 yet).
>
> My busiest border:
>
> ipv4-routes-total: 740438
> ipv4-routes-hw: 24405
> ipv4-routes-cpu: 716041
> ipv4-shortest-hw-prefix: 24
> ipv4-hosts: 13
> route-queue-size: 0
> fasttrack-ipv4-conns: 0
> fasttrack-hw-min-speed: 0
> nexthop-cap: 8192
> nexthop-usage: 102
>
> The router next to it:
>
> ipv4-routes-total: 762299
> ipv4-routes-hw: 1075
> ipv4-routes-cpu: 761222
> ipv4-shortest-hw-prefix: 25
> ipv4-hosts: 124
> route-queue-size: 0
> fasttrack-ipv4-conns: 0
> fasttrack-hw-min-speed: 0
> nexthop-cap: 8192
> nexthop-usage: 8192
>
> What version of RouterOS are you running?

Oh… that doesn't look right… Is this a 2216? Running BGP as Edge on a 2216 will offload all routes, even though the documentation states a smaller number. The 15 routes you see on the CPU in my screenshot are the connected routes for the NNI addresses between peers and the loopback, which is expected because they need to be handled by the CPU for ICMP and protocol purposes. The version is visible in the screenshot as well, I think 7.17.2, and it has been up since that version was released, a couple of months at least, stable so far. No wonder you see quite some CPU usage: you are practically running fully on the CPU, as 90%+ of your routes are not offloaded to HW, so everything goes through the CPU. If properly hardware offloaded, you will see close to 0% CPU usage. What does your single Linux-type bridge config look like?
> No, it's a 2116. I loaded 7.18.2 on one of my backup routers that participates in ingesting the tables. 24K looks like the middle of 16K and 34K.
>
> ipv4-routes-total: 740007
> ipv4-routes-hw: 24301
> ipv4-routes-cpu: 715706
> ipv4-shortest-hw-prefix: 24
> ipv4-hosts: 102
> route-queue-size: 0
> nexthop-cap: 8192
> nexthop-usage: 115
> vxlan-mtu-packet-drop: 0
> fasttrack-ipv4-conns: 0
> fasttrack-hw-min-speed: 0

Ahh well… I cannot say for the 2116, we don't run any as Edge routers, so maybe that is the max capacity for that particular ASIC. At least I can vouch for the 2216: it offloads almost a million routes with no problem, and will partially offload the IPv6 table as well, but the ASIC gets full, so we keep that turned off since our IPv6 traffic is under 1G as of today.
Here is one of them. HW offloads the entire table with no issues. 994k routes total.
ipv4-routes-total: 594945
ipv4-routes-hw: 95231
ipv4-routes-cpu: 499713
ipv4-shortest-hw-prefix: 0
ipv4-hosts: 5
route-queue-size: 0
route-queue-rate: 66
route-process-rate: 66
lpm-cap: 27200
lpm-usage: 24319
lpm-bank-cap: 1360
lpm-bank-usage: 1360,1360,1360,1360,1360,1360,1360,1360,1360,1127,72,0,1360,1360,1360,1360,1360,1360,1360,1360
nexthop-cap: 8192
nexthop-usage: 323
ipv4-routes-total: 751578
ipv4-routes-hw: 23930
ipv4-routes-cpu: 727649
ipv4-shortest-hw-prefix: 24
ipv4-hosts: 20
route-queue-size: 0
route-queue-rate: 291691
route-process-rate: 291691
fasttrack-ipv4-conns: 0
fasttrack-queue-size: 0
fasttrack-queue-rate: 0
fasttrack-process-rate: 0
fasttrack-hw-min-speed: 0
fasttrack-hw-offloaded: 0
fasttrack-hw-unloaded: 0
lpm-cap: 6720
lpm-usage: 6192
lpm-bank-cap: 336
lpm-bank-usage: 330,281,270,276,336,336,336,336,256,285,288,174,336,336,336,336,336,336,336,336
pbr-cap: 4608
pbr-usage: 0
pbr-lpm-bank: 1
nat-usage: 0
nexthop-cap: 8192
nexthop-usage: 104
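Side note on the LPM numbers in these printouts: both list 20 `lpm-bank-usage` values, and in each case `lpm-cap` equals 20 times `lpm-bank-cap` (20 × 1360 = 27200, 20 × 336 = 6720). That the LPM table is organized as 20 banks is an inference from the numbers, not documented behavior:

```python
# Inference from the monitor printouts above (not documented behavior):
# lpm-cap appears to equal 20 LPM banks times lpm-bank-cap.
banks = 20  # both printouts list 20 lpm-bank-usage values

for bank_cap, lpm_cap in [(1360, 27200), (336, 6720)]:
    assert banks * bank_cap == lpm_cap
    print(f"{banks} banks x {bank_cap} = {lpm_cap}")
```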
> *Update* Meanwhile, we have 2 tier2 techs currently obtaining the JNCIA-Junos certification as a last-ditch effort.

One of the more common solutions for MikroTik operators that need MPLS features in hardware is to use a whitebox solution like IP Infusion + UfiSpace.
ipv4-routes-total: 751768
ipv4-routes-hw: 118103
ipv4-routes-cpu: 633664
ipv4-shortest-hw-prefix: 24
ipv4-hosts: 79
route-queue-size: 0
route-queue-rate: 80
route-process-rate: 80
nexthop-cap: 8192
nexthop-usage: 85
vxlan-mtu-packet-drop: 0
fasttrack-ipv4-conns: 0
fasttrack-queue-size: 0
fasttrack-queue-rate: 0
fasttrack-process-rate: 0
fasttrack-hw-min-speed: 0
fasttrack-hw-offloaded: 0
fasttrack-hw-unloaded: 0
lpm-cap: 27200
lpm-usage: 24957
lpm-bank-cap: 1360
lpm-bank-usage: 1196,1129,1111,1107,1360,1360,1360,1360,1071,1092,1102,1080,1360,1360,1360,1360,1360,1360,1325,1144
pbr-cap: 8192
pbr-usage: 0
pbr-lpm-bank: 2
nat-usage: 0
Maybe the ccr2216 ASIC has something extra not mentioned in the docs, or I'm doing something wrong?
It's offloading 4x the number of routes my 2116's do (which is to be expected), but not 700K routes.
@elcano89, how many peers does your 2216 have? I'm wondering what else is in your config that gets it to offload 900K routes.
[@fn.edgemkt.hatorey.edge00] /interface/ethernet/switch/l3hw-settings> monitor once
ipv4-routes-total: 994494
ipv4-routes-hw: 994478
ipv4-routes-cpu: 15
ipv4-shortest-hw-prefix: 0
ipv4-hosts: 4
route-queue-size: 0
fasttrack-ipv4-conns: 0
fasttrack-hw-min-speed: 0
nexthop-cap: 8192
nexthop-usage: 87
[@fn.edgemkt.hatorey.edge00] /interface/ethernet/switch/l3hw-settings> /
[@fn.edgemkt.hatorey.edge00] > system/routerboard/print
routerboard: yes
model: CCR2216-1G-12XS-2XQ
serial-number: HJ20ACFBR7W
firmware-type: al64v3
factory-firmware: 7.16.1
current-firmware: 7.17.2
upgrade-firmware: 7.17.2
[@fn.edgemkt.hatorey.edge00] >
> One of the more common solutions for MikroTik operators that need MPLS features in hardware is to use a whitebox solution like IP Infusion + UfiSpace.

Ahh... HW-offloaded VPLS... That would be the day! Don't get my hopes up too soon.
It's a much better value than Juniper.
Hopefully we'll see MikroTik release some MPLS hw offload features this year. It's been a pain point for a lot of ISPs.
********************************************************************************
* RFC-2544 Conformance testsuite.
********************************************************************************
General settings:
File name : RFC-fibernethatorrey-1g
Technician name : Winder Torres
Description : RFC-fibernethatorrey-1g
Note :
Working rate : Layer 1
Product name : AMO-10000-LT
Unit identifier : FX.ACAC.SJU.TELXIUS
Firmware version : AMO-10000-LT_7.9.1_23673
Operation mode : Standard
Serial number : C414-1302
Assembly : 500-090-03:22:26:00
********************************************************************************
Testsuite settings:
Name : RFC CIENA - 1 Gbps
Description : RFC CIENA - 1 Gbps
Throughput test : Enabled
Delay test : Enabled
Frame loss test : Enabled
Back-to-back test: Disabled
Strict failure : Disabled
Verbose report : Disabled
Exclude VLAN size: Yes
********************************************************************************
Peer settings:
Testing layer : Layer-2
Peer MAC address : 04:79:FD:28:9A:65
NID MAC address : 00:15:AD:66:9C:36
Ethertype : 0x8902
Opcode : 3
MEG level : 7
********************************************************************************
Error codes:
Binary search failure (1), Out Of Order or duplicate failure (2),
Frame loss failure (3), Loss of connection with peer (4),
********************************************************************************
Throughput settings:
Trial duration : 30 secs
Maximum rate : 960 Mbps
Minimum rate : 1 Mbps
Step size : 10 Mbps
Frame loss : 0.3%
Fine stepping : False
Binary duration : 2 secs
Frame sizes : 64 128 256 512 1024 1280 1518 9000 Bytes
Started at : 2025-03-04 11:35:49+00:00
********************************************************************************
Frame Tx Tx Rx Rx Frame Status
Size rate rate rate rate loss
(bytes) (Mbps) (frames/sec) (Mbps) (frames/sec) (%)
------- --------- ------------ --------- ------------ ------ ------
64 330 491,071 329.752 490,703 0.1 Pass
128 590 498,311 589.851 498,185 0.1 Pass
256 960 434,783 959.459 434,537 0.1 Pass
512 960 225,564 959.793 225,515 0.1 Pass
1024 960 114,943 959.909 114,932 0.1 Pass
1280 960 92,308 959.982 92,306 0.1 Pass
1518 960 78,023 960 78,023 0.0 Pass
9000 960 13,304 959.843 13,302 0.1 Pass
********************************************************************************
Delay settings:
Trial duration : 60 secs
Maximum rate : 960 Mbps
Minimum rate : 1 Mbps
Step size : 10 Mbps
Fine stepping : False
Binary duration : 2 secs
Frame loss : 0.3%
Frame sizes : 64 128 256 512 1024 1280 1518 9000 Bytes
Started at : 2025-03-04 11:44:16+00:00
********************************************************************************
Frame Tx Minimum Average Maximum Minimum Average Maximum Frame Status
Size rate delay delay delay DV DV DV loss
(bytes) (Mbps) (usec) (usec) (usec) (usec) (usec) (usec) (%)
------- --------- ------- ------- ------- ------- ------- ------- ------ ------
64 330 6106 6246 119075 0 1 112696 0.1 Pass
128 590 6109 6313 314621 0 1 308074 0.1 Pass
256 960 6111 6244 112619 0 0 106185 0.1 Pass
512 960 6131 6219 176943 0 1 170372 0.1 Pass
1024 960 6156 6222 13224 0 0 6470 0.1 Pass
1280 960 6174 6273 117997 0 1 111548 0.1 Pass
1518 960 6194 6307 331435 0 2 324918 0.0 Pass
9000 960 6714 6777 13058 0 4 6236 0.1 Pass
********************************************************************************
Frame loss settings:
Trial duration : 30 secs
Maximum rate : 960 Mbps
Step size : 10 Mbps
Fine stepping : False
Binary duration : 2 secs
Frame sizes : 64 128 256 512 1024 1280 1518 9000 Bytes
Started at : 2025-03-04 11:55:47+00:00
********************************************************************************
Frame Tx Tx Rx Rx Frame Status
Size rate rate rate rate loss
(bytes) (Mbps) (frames/sec) (Mbps) (frames/sec) (%)
------- --------- ------------ --------- ------------ ------ ------
64 80 119,048 79.988 119,030 0.1 Fail (3)
64 70 104,167 69.992 104,155 0.1 Fail (3)
64 60 89,286 59.999 89,284 0.1 Fail (3)
64 50 74,405 49.997 74,400 0.1 Fail (3)
64 40 59,524 40 59,524 0.1 Fail (3)
64 30 44,643 29.998 44,640 0.1 Fail (3)
64 20 29,762 20 29,762 0.0 Pass
64 10 14,881 10 14,881 0.1 Fail (3)
128 240 202,703 239.948 202,659 0.1 Fail (3)
128 230 194,257 229.950 194,215 0.1 Fail (3)
128 220 185,811 219.992 185,804 0.1 Fail (3)
128 210 177,365 209.851 177,239 0.1 Fail (3)
128 200 168,919 199.979 168,901 0.1 Fail (3)
128 190 160,473 189.952 160,433 0.1 Fail (3)
128 180 152,027 179.961 151,994 0.1 Fail (3)
128 170 143,581 169.987 143,570 0.1 Fail (3)
128 160 135,135 159.969 135,109 0.1 Fail (3)
128 150 126,689 149.988 126,679 0.1 Fail (3)
128 140 118,243 139.994 118,238 0.1 Fail (3)
128 130 109,797 129.996 109,794 0.1 Fail (3)
128 120 101,351 119.997 101,348 0.1 Fail (3)
128 110 92,905 109.999 92,905 0.1 Fail (3)
128 100 84,459 99.998 84,458 0.1 Fail (3)
128 90 76,014 89.999 76,013 0.1 Fail (3)
128 80 67,568 80 67,568 0.1 Fail (3)
128 70 59,122 70 59,122 0.0 Pass
128 60 50,676 60 50,676 0.1 Fail (3)
128 50 42,230 50 42,230 0.1 Fail (3)
128 40 33,784 40 33,784 0.0 Pass
128 30 25,338 30 25,338 0.0 Pass
256 260 117,754 260 117,753 0.1 Fail (3)
256 250 113,225 249.972 113,212 0.1 Fail (3)
256 240 108,696 240 108,696 0.0 Pass
256 230 104,167 229.971 104,154 0.1 Fail (3)
256 220 99,638 219.972 99,625 0.1 Fail (3)
256 210 95,109 209.993 95,105 0.1 Fail (3)
256 200 90,580 199.999 90,579 0.1 Fail (3)
256 190 86,051 189.998 86,050 0.1 Fail (3)
256 180 81,522 179.997 81,521 0.1 Fail (3)
256 170 76,993 170 76,993 0.1 Fail (3)
256 160 72,464 160 72,464 0.1 Fail (3)
256 150 67,935 150 67,935 0.0 Pass
256 140 63,406 140 63,406 0.1 Fail (3)
256 130 58,877 129.996 58,875 0.1 Fail (3)
256 120 54,348 120 54,348 0.0 Pass
256 110 49,819 110 49,819 0.0 Pass
512 530 124,530 529.813 124,486 0.1 Fail (3)
512 520 122,180 519.807 122,135 0.1 Fail (3)
512 510 119,831 509.921 119,812 0.1 Fail (3)
512 500 117,481 499.857 117,448 0.1 Fail (3)
512 490 115,132 489.985 115,128 0.1 Fail (3)
512 480 112,782 479.949 112,770 0.1 Fail (3)
512 470 110,432 469.999 110,432 0.1 Fail (3)
512 460 108,083 459.975 108,077 0.1 Fail (3)
512 450 105,733 450 105,733 0.0 Pass
512 440 103,383 439.981 103,379 0.1 Fail (3)
512 430 101,034 429.970 101,027 0.1 Fail (3)
512 420 98,684 419.999 98,684 0.1 Fail (3)
512 410 96,335 410 96,335 0.0 Pass
512 400 93,985 399.993 93,983 0.1 Fail (3)
512 390 91,635 389.994 91,634 0.1 Fail (3)
512 380 89,286 379.999 89,286 0.1 Fail (3)
512 370 86,936 369.998 86,936 0.1 Fail (3)
512 360 84,586 359.990 84,584 0.1 Fail (3)
512 350 82,237 350 82,237 0.1 Fail (3)
512 340 79,887 340 79,887 0.1 Fail (3)
512 330 77,538 330 77,538 0.1 Fail (3)
512 320 75,188 320 75,188 0.0 Pass
512 310 72,838 309.984 72,835 0.1 Fail (3)
512 300 70,489 299.972 70,482 0.1 Fail (3)
512 290 68,139 290 68,139 0.0 Pass
512 280 65,789 280 65,789 0.0 Pass
1024 960 114,943 960 114,943 0.1 Fail (3)
1024 950 113,745 949.998 113,745 0.1 Fail (3)
1024 940 112,548 939.931 112,540 0.1 Fail (3)
1024 930 111,351 930 111,351 0.1 Fail (3)
1024 920 110,153 919.842 110,134 0.1 Fail (3)
1024 910 108,956 909.919 108,946 0.1 Fail (3)
1024 900 107,759 900 107,759 0.0 Pass
1024 890 106,561 889.975 106,558 0.1 Fail (3)
1024 880 105,364 879.705 105,329 0.1 Fail (3)
1024 870 104,167 869.921 104,157 0.1 Fail (3)
1024 860 102,969 859.998 102,969 0.1 Fail (3)
1024 850 101,772 849.921 101,763 0.1 Fail (3)
1024 840 100,575 839.909 100,564 0.1 Fail (3)
1024 830 99,377 829.987 99,376 0.1 Fail (3)
1024 820 98,180 819.903 98,169 0.1 Fail (3)
1024 810 96,983 809.998 96,983 0.1 Fail (3)
1024 800 95,785 799.987 95,784 0.1 Fail (3)
1024 790 94,588 789.947 94,582 0.1 Fail (3)
1024 780 93,391 779.992 93,390 0.1 Fail (3)
1024 770 92,194 769.941 92,186 0.1 Fail (3)
1024 760 90,996 759.970 90,993 0.1 Fail (3)
1024 750 89,799 749.994 89,798 0.1 Fail (3)
1024 740 88,602 739.998 88,601 0.1 Fail (3)
1024 730 87,404 730 87,404 0.0 Pass
1024 720 86,207 719.982 86,205 0.1 Fail (3)
1024 710 85,010 710 85,010 0.0 Pass
1024 700 83,812 699.998 83,812 0.1 Fail (3)
1024 690 82,615 689.902 82,603 0.1 Fail (3)
1024 680 81,418 680 81,418 0.0 Pass
1024 670 80,220 669.948 80,214 0.1 Fail (3)
1024 660 79,023 660 79,023 0.0 Pass
1024 650 77,826 650 77,826 0.0 Pass
1280 960 92,308 959.846 92,293 0.1 Fail (3)
1280 950 91,346 949.975 91,344 0.1 Fail (3)
1280 940 90,385 940 90,385 0.0 Pass
1280 930 89,423 930 89,423 0.0 Pass
1518 960 78,023 960 78,023 0.1 Fail (3)
1518 950 77,211 949.986 77,210 0.1 Fail (3)
1518 940 76,398 939.927 76,392 0.1 Fail (3)
1518 930 75,585 929.949 75,581 0.1 Fail (3)
1518 920 74,772 919.924 74,766 0.1 Fail (3)
1518 910 73,960 909.993 73,959 0.1 Fail (3)
1518 900 73,147 899.933 73,142 0.1 Fail (3)
1518 890 72,334 889.999 72,334 0.1 Fail (3)
1518 880 71,521 879.969 71,519 0.1 Fail (3)
1518 870 70,709 869.957 70,705 0.1 Fail (3)
1518 860 69,896 859.999 69,896 0.1 Fail (3)
1518 850 69,083 850 69,083 0.0 Pass
1518 840 68,270 839.982 68,269 0.1 Fail (3)
1518 830 67,458 829.995 67,457 0.1 Fail (3)
1518 820 66,645 820 66,645 0.0 Pass
1518 810 65,832 809.929 65,827 0.1 Fail (3)
1518 800 65,020 800 65,020 0.0 Pass
1518 790 64,207 789.996 64,206 0.1 Fail (3)
1518 780 63,394 780 63,394 0.0 Pass
1518 770 62,581 770 62,581 0.0 Pass
9000 960 13,304 959.800 13,301 0.1 Fail (3)
9000 950 13,165 949.907 13,164 0.1 Fail (3)
9000 940 13,027 939.285 13,017 0.1 Fail (3)
9000 930 12,888 929.815 12,885 0.1 Fail (3)
9000 920 12,749 919.997 12,749 0.1 Fail (3)
9000 910 12,611 909.957 12,610 0.1 Fail (3)
9000 900 12,472 899.996 12,472 0.1 Fail (3)
9000 890 12,334 889.901 12,332 0.1 Fail (3)
9000 880 12,195 879.849 12,193 0.1 Fail (3)
9000 870 12,057 869.773 12,053 0.1 Fail (3)
9000 860 11,918 859.835 11,916 0.1 Fail (3)
9000 850 11,779 849.903 11,778 0.1 Fail (3)
9000 840 11,641 839.885 11,639 0.1 Fail (3)
9000 830 11,502 829.758 11,499 0.1 Fail (3)
9000 820 11,364 819.846 11,362 0.1 Fail (3)
9000 810 11,225 809.996 11,225 0.1 Fail (3)
9000 800 11,086 799.781 11,083 0.1 Fail (3)
9000 790 10,948 790 10,948 0.0 Pass
9000 780 10,809 779.893 10,808 0.1 Fail (3)
9000 770 10,671 769.998 10,671 0.1 Fail (3)
9000 760 10,532 759.927 10,531 0.1 Fail (3)
9000 750 10,394 749.916 10,392 0.1 Fail (3)
9000 740 10,255 739.998 10,255 0.1 Fail (3)
9000 730 10,116 729.937 10,116 0.1 Fail (3)
9000 720 9,978 719.815 9,975 0.1 Fail (3)
9000 710 9,839 710 9,839 0.0 Pass
9000 700 9,701 699.926 9,700 0.1 Fail (3)
9000 690 9,562 689.999 9,562 0.1 Fail (3)
9000 680 9,424 679.985 9,423 0.1 Fail (3)
9000 670 9,285 669.909 9,284 0.1 Fail (3)
9000 660 9,146 660 9,146 0.0 Pass
9000 650 9,008 649.887 9,006 0.1 Fail (3)
9000 640 8,869 640 8,869 0.0 Pass
9000 630 8,731 629.978 8,730 0.1 Fail (3)
9000 620 8,592 619.999 8,592 0.1 Fail (3)
9000 610 8,453 609.997 8,453 0.1 Fail (3)
9000 600 8,315 599.924 8,314 0.1 Fail (3)
9000 590 8,176 589.920 8,175 0.1 Fail (3)
9000 580 8,038 579.969 8,037 0.1 Fail (3)
9000 570 7,899 569.961 7,899 0.1 Fail (3)
9000 560 7,761 560 7,761 0.0 Pass
9000 550 7,622 550 7,622 0.0 Pass
********************************************************************************
Ended at : 2025-03-04 13:36:39+00:00
Testsuite status : Completed
********************************************************************************
> Maybe the ccr2216 ASIC has something extra not mentioned in the docs, or I'm doing something wrong? @elcano89, how many peers does your 2216 have? I'm wondering what else is in your config that gets it to offload 900K routes.

We just deployed them and didn't give it much thought; I thought this was normal behaviour... This edge is running 7.17.2, which doesn't seem to offer that extra info about the LPM, and it has 3 peers: 2 are IP-transit with full-table output, and 1 is a local IXP peering for local ISPs. It is in fact offloaded, since there is practically no traffic present on the bridge. Traffic that runs on the CPU shows up in the bridge's throughput, but offloaded traffic won't show on the bridge. My printout is shown above.
[admin@MikroTik] > interface/ethernet/switch/l3hw-settings/advanced/monitor
ipv4-routes-total: 942338
ipv4-routes-hw: 942326
ipv4-routes-cpu: 11
ipv4-shortest-hw-prefix: 0
ipv4-hosts: 1
route-queue-size: 0
route-queue-rate: 0
route-process-rate: 0
nexthop-cap: 8192
nexthop-usage: 85
vxlan-mtu-packet-drop: 0
fasttrack-ipv4-conns: 0
fasttrack-queue-size: 0
fasttrack-queue-rate: 0
fasttrack-process-rate: 0
fasttrack-hw-min-speed: 0
fasttrack-hw-offloaded: 0
fasttrack-hw-unloaded: 0
lpm-cap: 27200
lpm-usage: 11
lpm-bank-cap: 1360
lpm-bank-usage: 1
0
0
0
1
0
0
0
4
0
0
0
5
0
0
0
0
0
0
0
pbr-cap: 8192
pbr-usage: 0
pbr-lpm-bank: 2
nat-usage: 0
[admin@MikroTik] > interface/ethernet/switch/l3hw-settings/advanced/monitor
ipv4-routes-total: 942337
ipv4-routes-hw: 37729
ipv4-routes-cpu: 904608
ipv4-shortest-hw-prefix: 30
ipv4-hosts: 1
route-queue-size: 0
route-queue-rate: 0
route-process-rate: 0
nexthop-cap: 8192
nexthop-usage: 85
vxlan-mtu-packet-drop: 0
fasttrack-ipv4-conns: 0
fasttrack-queue-size: 0
fasttrack-queue-rate: 0
fasttrack-process-rate: 0
fasttrack-hw-min-speed: 0
fasttrack-hw-offloaded: 0
fasttrack-hw-unloaded: 0
lpm-cap: 27200
lpm-usage: 22392
lpm-bank-cap: 1360
lpm-bank-usage: 1022
1033
1027
1028
1221
1033
1040
678
1358
1245
1254
1242
1246
1249
1273
1260
1101
1024
1031
1027
pbr-cap: 8192
pbr-usage: 0
pbr-lpm-bank: 2
nat-usage: 0
See the @raimondsp explanation:
viewtopic.php?p=985463#p913427
When indexing the entire IP address range (0.0.0.0 - 255.255.255.255), you can offload as many routes as needed, as long as you also have a default route (0.0.0.0/0) using the same next-hop.
Thanks for the detailed explanation, that helps a lot on how to plan hw-offload usage for different scenarios. If this isn't added to the official docs, it definitely should be!
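That "same next-hop as the default route" rule can be sketched as a toy model. The prune step below is my assumption about what the compression amounts to, not the actual RouterOS algorithm: a more-specific route whose next-hop matches its longest covering prefix adds no forwarding information, so it costs no extra hardware entry.

```python
import ipaddress

def prune_redundant(routes):
    """routes: dict {prefix_str: next_hop}.
    Drop any route whose next-hop equals that of its longest
    already-kept covering prefix (it changes nothing for LPM)."""
    ordered = sorted(routes, key=lambda p: ipaddress.ip_network(p).prefixlen)
    kept = {}
    for p in ordered:
        net = ipaddress.ip_network(p)
        # find the longest kept prefix that covers this route
        cover = None
        for q in kept:
            qn = ipaddress.ip_network(q)
            if net != qn and net.subnet_of(qn):
                if cover is None or qn.prefixlen > ipaddress.ip_network(cover).prefixlen:
                    cover = q
        if cover is None or kept[cover] != routes[p]:
            kept[p] = routes[p]   # carries new forwarding info
    return kept

routes = {
    "0.0.0.0/0":       "10.0.0.1",  # default route
    "203.0.113.0/24":  "10.0.0.1",  # same next-hop as default -> free
    "198.51.100.0/24": "10.0.0.2",  # different next-hop -> must stay
}
print(prune_redundant(routes))
# only 0.0.0.0/0 and 198.51.100.0/24 survive
```

This matches the thread's observations: with a default route and a single upstream next-hop, nearly the whole table is "free"; spread the same table across many distinct next-hops and real LPM entries start burning fast.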
...
ipv4-routes-total: 772402
ipv4-routes-hw: 772386
ipv4-routes-cpu: 12
ipv4-shortest-hw-prefix: 0
ipv4-hosts: 93
route-queue-size: 0
route-queue-rate: 7
route-process-rate: 7
fasttrack-ipv4-conns: 0
fasttrack-queue-size: 0
fasttrack-queue-rate: 0
fasttrack-process-rate: 0
fasttrack-hw-min-speed: 0
fasttrack-hw-offloaded: 0
fasttrack-hw-unloaded: 0
lpm-cap: 27200
lpm-usage: 12
lpm-bank-cap: 1360
lpm-bank-usage: 3
0
0
0
3
0
0
0
1
0
0
0
5
0
0
0
0
0
0
0
pbr-cap: 8192
pbr-usage: 0
pbr-lpm-bank: 2
nat-usage: 0
nexthop-cap: 8192
nexthop-usage: 85
There’s a lot of positive input here around getting 900k+ routes in hardware forwarding. I will warn folks that this alone is not a surprising feat if all next-hops resolve the same way, especially with a default route. The real test would be multiple next-hops installed in the FIB, since that usually represents a multi-homed ASN, and the usual case for BGP routing is, well, multi-homing.
multiple next hops was pretty well covered in MikroTik's reply here: viewtopic.php?t=215416#p1136615
The capabilities of Qumran and other commodity chips are pretty well known, as is the pricing to handle full tables in a whitebox switch, which is at least 10x the price of a CCR2216.
What's interesting to me is how far you can push the Prestera ASIC - for example if you have a single transit peer and a small IX with limited routes, then you may be able to fit everything in hardware. That won't be everyone's use case but understanding the limits of the ASIC is very helpful in design and planning.
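As a rough planning aid for that kind of design work, here is a pessimistic fit check that ignores next-hop compression entirely. The caps are the values the advanced monitor reports earlier in this thread; assume they vary by model and ROS version, and that real behaviour with compression is better than this estimate.

```python
# Worst-case fit check: every route needs its own LPM entry
# (i.e. no next-hop sharing at all).
LPM_CAP = 27200       # lpm-cap from the monitor output above
NEXTHOP_CAP = 8192    # nexthop-cap from the monitor output above

def fits_in_hw(route_count, distinct_nexthops):
    """True if everything could offload even without any
    next-hop compression helping out."""
    return route_count <= LPM_CAP and distinct_nexthops <= NEXTHOP_CAP

# single transit + small IX, as in the scenario above (hypothetical numbers)
print(fits_in_hw(20000, 300))    # fits even in the worst case
print(fits_in_hw(940000, 85))    # full tables only fit via compression
```

If the worst case fits, the design is safe regardless of how the next-hops are spread; if it does not, you are betting on the compression behaviour discussed above.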
Fast-track is a driver optimization. Makes no sense that Mikrotik should develop customized drivers for 3rd party hardware.

My understanding is that fastpath is the component that requires driver integration. I'm talking about MikroTik making a package that includes the already-written Mellanox ASAP2 driver, plus fasttrack-like code that offloads what would have been fasttrack flows to the ASAP2 driver rather than to fastpath.