PPPoE low performance

I’d like to ask why PPPoE in RouterOS has such low performance. EoIP is several times faster (approx 5x in my setup, judging by CPU load and bandwidth used). I use dynamic simple queues, if that matters.

Are you using encryption on the PPPoE server?

No encryption, no compression.
Interestingly, with PPPoE the CPU load depends significantly on the number of PPPoE clients, more than on the traffic.

and what if you try without simple queues? =)

Is there any alternative for rate limiting with RADIUS?

Sure. Use the ‘Address-List’ parameter in the PPP profile (or send it from RADIUS), then manually create a few PCQ queues and mark packets for them according to those address lists.
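A minimal sketch of that approach (names like `pppoe-users`, the 2M/1M rates, and the v3/v4-era `global-out`/`global-in` queue-tree parents are illustrative assumptions, not the poster’s actual config):

```
# put every PPPoE client's address into an address list via the profile
/ppp profile set pppoe-default address-list=pppoe-users

# one PCQ type per direction: sub-stream per dst-address (download) / src-address (upload)
/queue type add name=pcq-down kind=pcq pcq-classifier=dst-address pcq-rate=2M
/queue type add name=pcq-up kind=pcq pcq-classifier=src-address pcq-rate=1M

# mark packets for clients in the list
/ip firewall mangle add chain=forward dst-address-list=pppoe-users \
    action=mark-packet new-packet-mark=clients-down passthrough=no
/ip firewall mangle add chain=forward src-address-list=pppoe-users \
    action=mark-packet new-packet-mark=clients-up passthrough=no

# queue trees applying the PCQ types to the marked traffic
/queue tree add parent=global-out packet-mark=clients-down queue=pcq-down
/queue tree add parent=global-in packet-mark=clients-up queue=pcq-up
```

With this layout the router keeps two static queues regardless of how many clients are online; PCQ creates a per-address sub-queue internally, which is much cheaper than one dynamic simple queue per session.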

Unfortunately, I also use the Framed-Route attribute to create dynamic routes for some clients. So PCQ will not work for me, as it is a per-IP-address limitation, not the per-interface one I need.

Yep, it would be nice if we could have some PCQ classifier rules, like ‘use 192.168.0.1 as a fake src-address for the whole 192.168.0.0/28 subnet’ or ‘round each dst-address to /29’… Normis, one more feature request for v5? =)

+1 :frowning:

Ok, thank you for the PCQ tip! I have implemented it and the CPU load dropped somewhat (by 10-20%).

Has anybody profiled where the CPU time is mostly spent?

Eliminating “action=jump” in “/ip firewall mangle” reduced CPU load even more.

I wonder why “action=jump” is so slow.

At 11:45 I “optimized” the mangle table with a jump, to avoid checking the same address list in 16 duplicate rules.
At 17:00 I restored the previous version without jumps.
(graph from an RB1000)
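For illustration, the two mangle layouts being compared might look like this (chain and mark names are hypothetical; the 16 per-service rules are abbreviated):

```
# "optimized" variant: match the address list once, then jump to a named chain
/ip firewall mangle add chain=forward src-address-list=customers \
    action=jump jump-target=customer-marks
/ip firewall mangle add chain=customer-marks protocol=tcp dst-port=80 \
    action=mark-packet new-packet-mark=web
# ...15 more marking rules in chain=customer-marks, without the list match...

# plain variant: every rule repeats the address-list match in chain=forward
/ip firewall mangle add chain=forward src-address-list=customers \
    protocol=tcp dst-port=80 action=mark-packet new-packet-mark=web
# ...15 more rules, each repeating src-address-list=customers...
```

Intuitively the jump variant should be cheaper, which is what makes the observed CPU increase surprising; the follow-up posts trace it to the dynamic rules created by change-tcp-mss=yes.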

Let me answer myself. It appears the cause of the “jump” slowness was change-tcp-mss=yes in the PPP profile.
It creates many hidden dynamic entries in the firewall mangle table, and “jump” apparently uses a linear search to find the named chain.

I’m not sure that search is linear… Anyway, many dynamic rules are not good - each packet has to be processed by each rule…

I can only explain the significant rise in CPU usage after adding a new chain and a jump by a linear search through the mangle forward chain.
Anyway, change-tcp-mss=no is set now, the PCQ queues are in place, and CPU usage is below 50% :smiley:
(200 PPPoE users, 120 Mbit/s on an RB1000)

congratulations =)

Yes, if you need change-tcp-mss=yes on many active PPPoE sessions, it’s always better to set change-tcp-mss=no and create corresponding static rules instead.
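A sketch of that static replacement, assuming the standard PPPoE MTU of 1492 (so an MSS of 1452); the exact values depend on your link:

```
# disable per-session dynamic MSS rules in the profile
/ppp profile set pppoe-default change-tcp-mss=no

# one static rule clamping the MSS on TCP SYNs for all forwarded traffic,
# instead of one hidden dynamic rule per PPPoE session
/ip firewall mangle add chain=forward protocol=tcp tcp-flags=syn \
    tcp-mss=1453-65535 action=change-mss new-mss=1452
```

The tcp-mss=1453-65535 match ensures the rule only rewrites packets advertising an MSS larger than the link can carry, leaving smaller values untouched.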