I measured the performance of queuing algorithms on a cAP ac running RouterOS 7.6, starting from the default configuration, which is a basic firewall/NAT router. The WAN and LAN sides were wired to computers running Fedora. The tests were done with crusader over IPv4. Shapers were set to 1 Gbps.
CAKE shaper using simple queues, no FastTrack: 189 Mbps
CAKE without shaper using simple queues, no FastTrack: 192 Mbps
CAKE shaper using queue tree (ether1, bridge), no FastTrack: 196 Mbps
CAKE without shaper using queue tree (ether1, bridge), no FastTrack: 199 Mbps
CAKE shaper using interface queue, no FastTrack: 221 Mbps
fq-codel using simple queues, no FastTrack: 237 Mbps
fq-codel using queue tree (ether1, bridge), no FastTrack: 252 Mbps
CAKE shaper using queue tree (ether1, bridge), with FastTrack: 379 Mbps
CAKE without shaper using queue tree (ether1, bridge), with FastTrack: 379 Mbps
CAKE shaper using queue tree (ether1, ether2), with FastTrack: 385 Mbps
CAKE without shaper using queue tree (ether1, ether2), with FastTrack: 393 Mbps
CAKE shaper using interface queue, with FastTrack: 505 Mbps
fq-codel using queue tree (ether1, bridge), with FastTrack: 537 Mbps
fq-codel using queue tree (ether1, ether2), with FastTrack: 581 Mbps
codel using queue tree (ether1, ether2), with FastTrack: 756 Mbps
No queues with FastTrack: 942 Mbps (1384 Mbps WAN + LAN combined)
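For reference, a minimal sketch of the kind of queue-tree CAKE setup tested above. The names and exact parameters are my assumptions, not an /export of the test configuration, so verify against your RouterOS version:

```
# Sketch only: a queue type with kind=cake, attached via a queue tree to
# ether1 (WAN, upload) and the bridge (download). 1G matches the shaper
# rate used in the tests above.
/queue type add name=cake-shaper kind=cake cake-bandwidth=1G
/queue tree add name=upload parent=ether1 queue=cake-shaper
/queue tree add name=download parent=bridge queue=cake-shaper
```

The "CAKE without shaper" variants would instead leave the bandwidth to CAKE's defaults and rely on the queue tree alone.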
When FastTrack is combined with queues, traffic between subnets also gets limited. It would be nice if MikroTik let FastTrack apply a packet mark when packets are passed to the queue tree: it could reuse the packet mark that was on the packet when the connection got fast-tracked. That would remove this limitation, so you could shape the WAN at higher speeds while avoiding limiting traffic between VLANs.
The best configurations when using a bridge and a single subnet seem to be:
CAKE without shaper using queue tree (ether1, bridge) with FastTrack (379 Mbps)
fq-codel using queue tree (ether1, bridge) with FastTrack (537 Mbps)
The best configurations when using a single LAN port and a single subnet seem to be:
CAKE shaper using interface queue, with FastTrack: 505 Mbps
fq-codel using queue tree (ether1, ether2) with FastTrack: 581 Mbps
These are best-case bandwidth results; for good latencies you'll need to stay below these numbers.
I don’t understand your methodology for this bit. It doesn’t look like you tested fq_codel on the interface queue all by itself: no trees, no shaping, just replacing what is a (very small) FIFO, in its default mode?
In my part of the world (Linux, OpenWrt, iOS), fq_codel is the native qdisc on the interface, with no shaping.
So far as I understand, MikroTik allows the same but defaults to a very small FIFO, or SFQ?
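On Linux the interface qdisc can be swapped for fq_codel directly, no shaping involved. A sketch (the interface name is an example):

```
# Replace the root qdisc on eth0 with fq_codel in its default mode
tc qdisc replace dev eth0 root fq_codel
# Inspect drop/mark counts and per-queue statistics
tc -s qdisc show dev eth0
```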
In general I care that I have low latency, always, and sometimes taking a bandwidth hit to get it is the right thing, so having a latency figure from the tests would also be good. If I got 700 Mbit/s out of fq_codel with 5 ms of latency and jitter, and a gigabit out of the other with 300 ms of latency, I know which I would choose.
I was interested in the performance of shaping with AQM below link rates, so I did not measure plain interface queues. The no-queues result is mostly there to give an idea of the performance overhead. I’ve not included latency results as these tests are all CPU limited. My impression is that “no queue” also doesn’t use a qdisc at all, but just the hardware queue itself.
I have, however, tested fq-codel as an interface queue on the WAN and LAN interfaces with FastTrack. It gets 1283 Mbps WAN + LAN combined (93% of the no-queues result).
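For anyone wanting to reproduce the interface-queue variant, something along these lines should work. This is a sketch, with names as assumptions; check the syntax against your RouterOS version:

```
# Create an fq-codel queue type and use it as the interface queue
# on the WAN (ether1) and LAN (ether2) ports
/queue type add name=fq-codel-if kind=fq-codel
/queue interface set ether1 queue=fq-codel-if
/queue interface set ether2 queue=fq-codel-if
```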
I don’t know what they mean by no queues. There is always a queue: if they have a zero-length FIFO plus a ring buffer of some size on this product, it’s still a queue. Figuring out when they drop packets from it, and how big it is, is always on my mind.
A lot of subsystems in the Linux kernel now have BQL, which moderates the size of the ring buffer significantly and makes fq_codel more effective at reducing latencies. I recently discovered the mvpp2 driver (used in the RB5009) didn’t have it, and we put it in for Linux 6.1. The results were nice.
BQL: https://lwn.net/Articles/469652/
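Whether a driver has BQL can be checked from sysfs: the byte_queue_limits directory only exists when the driver uses the BQL API. Interface and queue names here are examples:

```
# Current dynamic limit BQL has settled on for tx queue 0 of eth0;
# this path is absent on drivers without BQL support
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
```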
With ECN enabled end-to-end, fq_codel will mark packets rather than drop them, reducing p99 latency.
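On Linux endpoints, TCP ECN negotiation is controlled by a sysctl; with it enabled on both ends, an ECN-capable AQM like fq_codel marks instead of drops:

```
# 1 = request ECN on outgoing connections and accept it on incoming ones
sysctl -w net.ipv4.tcp_ecn=1
```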
“They” as in MikroTik? The interface queue type is shown in the corresponding tab and can be changed there without adding a Simple Queue or a Queue Tree. Ethernet ports show only-hardware-queue by default. Any non-default queue type you create should be available there as well.
On RouterOS 7.6 on a cAP ac, I’m unable to get HTB with fq-codel/cake and max-limit shaping to work on the bridge interface (for download), despite disabling FastPath; the wlan interfaces are dynamically added by CAPsMAN. Do you have a working /export example?