5009 high CPU vs. CCR1009

We recently had a CCR1009 at a tower site die, so we replaced it with an RB5009 running RouterOS 7.7. CPU usage jumped from 5-10% to 30-40%. Profile shows an even distribution across CPUs, with Networking taking the largest portion. The router is passing 350-750 Mbps depending on time of day. We're running OSPF and private BGP, with approximately 400 routes. We're using the bridge for VLAN filtering, but could bypass that if it would help. No MPLS, NAT, PPPoE, or VPN. We're using RB4011s at similar sites with similar config/traffic, and they all sit around 10-20% CPU usage. Is this to be expected with the 5009, or is there something we should be looking at in our config?
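Since the bridge is doing the VLAN filtering, one quick thing to check (a sketch only, adjust names to your config) is whether the bridge ports still have hardware offloading active. Bridged traffic is only switched by the RB5009's switch chip when the port shows the "H" flag; otherwise every bridged packet is handled by the CPU:

```
# Show bridge ports; an "H" flag means the port is hardware-offloaded
/interface bridge port print

# Re-enable hardware offloading on all bridge ports if it was turned off
/interface bridge port set [find] hw=yes
```

Note that on many MikroTik models enabling vlan-filtering on the bridge disables hardware offloading entirely; whether the RB5009's switch chip can offload VLAN filtering depends on the RouterOS version, so it's worth confirming the H flag is still present after your VLAN config is applied.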

Did your comparison use the same RouterOS version? You didn't state what was on the CCR1009 or the RB4011s…

My guess is a combo of different firmware and something off in your configuration.
The RB4011 is only an ARM32 CPU with 512 MB of storage, while the RB5009 is an ARM64 CPU with 1 GB of storage.
Test-results-wise, the RB5009 has more throughput than the RB4011, and the more rules you add, the better the RB5009 fares.

In terms of comparing the CCR1009 (Tile) to ARM64, I have no way to assess that…
Tile had 9 cores vs. 4 for the ARM devices.
Throughput-wise, Tile with any rules loaded quickly dropped to basically the same numbers as the RB5009.
Where Tile really outperforms the RB5009 is multiple-VPN-tunnel throughput.

Assuming maybe the extra RAM and cores gave Tile an advantage???

Good point, I forgot to mention. The old CCR1009 was on 6.48.x (don't recall the specific version). Our RB4011s are mostly on 6.48.6. The configuration was copied from the CCR1009 to the 5009, with the exception of the new way OSPF and BGP are configured in v7. Looking at MikroTik's test-results page for routing with 25 filter rules and 512-byte packets, the 5009 should be slightly better than the 1009 and 50% better than the 4011. It's a production site, so it's not easy to test config changes.

RouterOS v7 comes with a newer Linux kernel that no longer supports the route cache, so slightly higher CPU load for the same amount of routing is expected in most cases. It might not explain a larger difference (e.g., 30% vs. 10% CPU load) under the same traffic conditions, though.

Hadn't thought about the route cache, but the router has fewer than 400 routes total. I disabled spanning tree, SNMP, and most of the firewall rules, and saw no difference. For the most part it is just passing 500 Mbps through the router, and that takes 40% CPU.

The route cache is not about routing protocols; it's about deciding the next hop for every individual packet passing through the router. The same overhead exists even if the router has only one single static route set (0.0.0.0/0 gw), though in that case the lookup is probably slightly faster than with a large routing table. Resolving a route means associating the MAC address of the next hop with the destination IP address (and that table can be huge even when using a single next hop). The route cache meant that this association could persist for quite a long time (even if it had in reality been invalidated for some reason), while without it the association persists only a short while.
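Conceptually (a simplified Python sketch for illustration, not RouterOS internals; the table contents are made up), a route cache memoizes the per-destination next-hop resolution so that repeated packets to the same destination skip the full route-plus-ARP lookup, which is the work the router now repeats much more often:

```python
import time

# Hypothetical routing and ARP tables, for illustration only
ROUTING_TABLE = [("10.0.0.0/8", "192.168.1.1"), ("0.0.0.0/0", "192.168.1.254")]
ARP_TABLE = {"192.168.1.1": "aa:bb:cc:dd:ee:01",
             "192.168.1.254": "aa:bb:cc:dd:ee:02"}

def full_lookup(dst_ip):
    """Expensive path: longest-prefix match, then ARP resolution.
    (Grossly simplified: pick 10/8's gateway, else the default route.)"""
    gw = ROUTING_TABLE[0][1] if dst_ip.startswith("10.") else ROUTING_TABLE[1][1]
    return ARP_TABLE[gw]

class RouteCache:
    """Maps destination IP -> next-hop MAC, with a time-to-live."""
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.entries = {}  # dst_ip -> (mac, expiry_time)

    def next_hop_mac(self, dst_ip):
        now = time.monotonic()
        entry = self.entries.get(dst_ip)
        if entry and entry[1] > now:
            return entry[0]            # cache hit: cheap per-packet path
        mac = full_lookup(dst_ip)      # cache miss: full lookup
        self.entries[dst_ip] = (mac, now + self.ttl)
        return mac

cache = RouteCache()
print(cache.next_hop_mac("10.1.2.3"))  # first packet: full lookup
print(cache.next_hop_mac("10.1.2.3"))  # subsequent packets: cached
```

Without the cache, every packet takes the `full_lookup()` path, which is why even a small routing table shows higher per-packet CPU cost on v7.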