I’m interested in a QoS setup where the queues only come into effect when packets would otherwise be lost, i.e. when the interface queue actually starts to fill. My MikroTik device uses an LTE interface, and depending on where I take it, the speed can range from 1 to 100 Mbps. If I used queue trees the usual way, I would have to pick some fixed speed to start limiting at, which means the QoS would either never kick in or never use all the speed available. Is there a way to do this that doesn’t depend on a fixed speed setting?
Is this even possible?
Bump. I think this kind of queueing is also called SQM (Smart Queue Management).
I’m afraid the role of SQM is different from what you expect: it uses ECN to notify endpoints that the queue is getting full, in order to avoid the need to actually drop packets, and it takes the specific per-packet overhead of the bottleneck link into account so the shaping can work more precisely, i.e. with a smaller margin.
But it still relies on prior knowledge of the available bandwidth.
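To illustrate that point: even with an SQM-style queue such as CAKE in RouterOS v7, the available bandwidth has to be stated up front. A hypothetical fragment (the 20M figure and the lte1 interface name are assumptions):

```
# RouterOS v7 sketch - CAKE still needs the bandwidth configured explicitly.
/queue type add name=cake-up kind=cake cake-bandwidth=20M
/queue interface set lte1 queue=cake-up
```

If the actual LTE uplink drops below 20M, the bottleneck moves back into the ISP network and CAKE has nothing to manage.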
To determine the currently available bandwidth dynamically, you’d have to “measure” it. In practice that means pinging some remote target with echo requests that get the highest priority at your egress. Because the ISP doesn’t respect any priority markings and queues the ICMP echo requests and responses together with the rest of your traffic, no response at all to such an echo request indicates that shaping takes place in the ISP network, and a higher-than-usual delay of the response indicates buffering there. Of course this is not a precise measurement, and it only gives useful results while the volume of the “natural” traffic is high enough to be shaped, and even then only in one of the directions at a time.
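A minimal sketch of such a measurement in RouterOS scripting (v7 `as-value` ping syntax; the target address and the 100ms threshold are placeholder assumptions, not recommendations):

```
# Ping a nearby target and compute the average RTT; no replies suggests
# shaping upstream, a high average suggests buffering upstream.
:local replies [/ping 1.1.1.1 count=5 as-value]
:local sum 0ms
:local got 0
:foreach r in=$replies do={
    :if ([:typeof ($r->"time")] = "time") do={
        :set sum ($sum + ($r->"time"))
        :set got ($got + 1)
    }
}
:if ($got = 0) do={
    :log warning "no echo replies - upstream likely shaping"
} else={
    :local avg ($sum / $got)
    :if ($avg > 100ms) do={ :log info "avg RTT $avg - upstream likely buffering" }
}
```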
So adapting the queue limit-at and max-limit parameters to the currently available bandwidth would require quite a lot of scripting, and it would need to run continuously: the bandwidth of a mobile network is shared among users, so when just a few users compete for the capacity of a cell, the arrival or departure of a single user has a significant effect on the bandwidth available to the others.
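As a very rough illustration of the scripting involved, here is a hypothetical AIMD-style adjuster one could run from /system scheduler every few seconds. The queue name wan-up, the ping target, the 100ms threshold, and the 1M/100M floor and ceiling are all assumptions:

```
# Shrink max-limit quickly when latency indicates buffering upstream,
# grow it back slowly otherwise; clamp between a floor and a ceiling.
:local replies [/ping 1.1.1.1 count=3 as-value]
:local sum 0ms
:local got 0
:foreach r in=$replies do={
    :if ([:typeof ($r->"time")] = "time") do={
        :set sum ($sum + ($r->"time"))
        :set got ($got + 1)
    }
}
:local congested false
:if ($got = 0) do={ :set congested true } else={
    :if (($sum / $got) > 100ms) do={ :set congested true }
}
:local cur [/queue tree get [find name="wan-up"] max-limit]
:local new $cur
:if ($congested) do={
    :set new ($cur * 80 / 100)
} else={
    :set new ($cur * 105 / 100)
}
:if ($new < 1000000) do={ :set new 1000000 }
:if ($new > 100000000) do={ :set new 100000000 }
/queue tree set [find name="wan-up"] max-limit=$new
```

Even this toy version shows why it has to run continuously: a single measurement is stale within seconds on a shared cell.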
The only place where priority-based queueing (e.g. using DSCP or the 802.1p VLAN priority bits) can work with a radio interface is on the radio device itself.
When you have a MikroTik device with an LTE card, that would theoretically be possible. E.g. the WiFi interfaces do it when WMM is enabled.
I don’t know whether the LTE cards can do it. To try, set up a mangle rule like:
/ip firewall mangle
add action=set-priority chain=postrouting new-priority=from-dscp-high-3-bits \
passthrough=yes
That sets the packet priority at the Linux kernel level according to the DSCP field of each packet (assuming DSCP is what you want to classify on).
If the LTE device driver understands priority, it will then transmit higher-priority packets before lower-priority ones.
I have no MikroTik LTE device so I cannot try it. But with WiFi it works.
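For reference, the working WiFi case mentioned above looks like this on the legacy wireless package (the wlan1 interface name is an assumption; the newer wifi package configures this differently):

```
# Enable WMM so the driver honours the per-packet priority set by mangle.
/interface wireless set [find default-name=wlan1] wmm-support=enabled
```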