PCC between WAN PPPoE and a static gateway on bridge, and DSCP management

I have a PCC setup between a WAN PPPoE connection and a static gateway on the bridge, and I have DSCP configured on the bridge interface.
I would like DSCP to handle the static gateway's traffic separately (the DSCP handling works, but badly).
Does anyone know how to do this, or have an example to suggest?

Well, since you’ve chosen to create a new topic rather than continue in the original one, it would make sense to attach the existing configuration and topology drawing here.

Anyway, we have to start from discussing the purpose of the bandwidth management (QoS handling) and how useful it is to base it on DSCP marks.

The point is that QoS handling works well in private networks and in the outgoing direction from them, but much worse across the internet. Most ISPs ignore DSCP values (in fact, respecting them would contradict the network neutrality principle), but what is worse, they even modify the DSCP values as they forward the packets. So while you can let the endpoint devices in your two LANs set DSCP values in the packets, and tell the Mikrotik to prioritize these packets into queues based on those DSCP values when forwarding them, it is not reliable to use the DSCP values as a basis for prioritization of packets which came in via WAN interfaces. And from what I can see in your mangle rules, except using protocol=icmp to assign the packets the packet-mark representing the highest priority, DSCP is the only basis for prioritization of all the other (non-icmp) packets.
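To illustrate the transmit-side case where DSCP can be trusted, here is a minimal sketch of mangle rules that honor DSCP only on traffic entering from your own LAN (the interface name `bridge`, the DSCP value 46, and the packet-mark names are assumptions for illustration, not taken from your configuration):

```routeros
# Sketch only: trust DSCP values set by your own LAN endpoints and
# translate them into packet-marks for a queue tree. Names are assumed.
/ip firewall mangle
add chain=forward in-interface=bridge dscp=46 action=mark-packet \
    new-packet-mark=prio-voice passthrough=no comment="EF, set by LAN endpoints"
add chain=forward in-interface=bridge action=mark-packet \
    new-packet-mark=prio-normal passthrough=no comment="everything else from LAN"
```

The same approach deliberately does not match on DSCP for packets arriving via the WAN, for the reasons above.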

So to sum it up, you can traffic-shape TCP flows in WAN->LAN direction by throttling them at the path towards LAN endpoint devices, but DSCP is in most cases not a good basis for that.

For LAN to LAN, prioritization based on DSCP is perfectly OK if the reason is the limited capacity on the wireless link between the two houses (i.e. if a device in house 1 is connected to a GigabitEthernet port but the link to the other house is only a 50 Mbit/s one). But if you want to keep the wireless WAN gateway in house 2 and use it as an uplink for house 1, you should place a more complex queue tree in front of the link to house 2: e.g. prioritize the WAN packets relative to each other by DSCP into sub-queues within one parent queue, but prioritize this whole parent queue above another parent queue which would handle all the remaining traffic on the inter-house link, again prioritized into sub-queues within that parent queue. The distribution of the total bandwidth of the link between the two parent queues would be subject to some fine-tuning, in the sense that each of them would have to get some minimum guaranteed bandwidth and could borrow the other one’s quota while the other one is not using it. Of course there is no point in assigning the WAN parent queue more bandwidth than the LTE gateway can use in the upload direction.

But even leaving its imperfections aside, the above is only half of the solution, because at the house 2 end of the inter-house link there is no traffic-shaping device, so both the WAN gateway and the endpoint device(s?) send packets towards house 1 at the full speed of their Ethernet interfaces. The packets thus compete for the bandwidth on the inter-house link without control and get lost randomly.

For me, the only argument for keeping WAN 2 (the LTE gateway) in house 2 is redundancy, i.e. that house 2 would still have an internet connection if the inter-house link (or mains power in house 1) went down. In all other respects, such a setup only adds complexity. By moving the LTE gateway to house 1 and connecting it to a dedicated interface of the Mikrotik, the whole bandwidth of the inter-house link would be used only for the traffic of the devices in house 2 themselves.

That’s exactly the point: I put the LTE gateway in the second house to have redundancy in case the Wi-Fi link between the houses failed.
Later I thought I could use the LTE gateway even if the WAN1 connection collapsed.
In the end, load balancing between the two connections would be the best option for me!

Now, I don’t need QoS on the route to the second house or on the LTE gateway at all, but it would be useful to keep it on WAN1 as it was originally.

In that case, do just that: skip the QoS handling on the inter-house link.

The traffic from/to the LTE gateway and to house 2 arrives at the 'Tik on the eth10 interface; eth10 is part of the bridge, and the whole bridge has this QoS rule:
queue treeprint.txt (4.83 KB)
Separating the DSCP handling of the LTE gateway from the rest of the bridge is no small thing for me…

Neither for me because I never needed to set up queue handling on any of my Mikrotik networks, and I’m not going to learn how to do that on a platform I can access only indirectly :slight_smile:

OK sindy, but I get very high latencies inside the LAN during heavy downloads, and since I have VoIP, the only solution I’ve found to the problem is DSCP, which is why I use it.
I agree with you that the simplest things are also the best!
PS: I’ve now disabled the DSCP handling; I’ll watch how the LAN behaves and report back.

I didn’t say you should not use traffic prioritization, I just say that my experience shows you cannot trust received DSCP values on downlink traffic and you should therefore use different criteria to assign priorities to it, and that I’ve been lucky so far that I did not need to deal with traffic prioritization on Mikrotik myself.

The biggest issue in general is that you have to compensate for the missing ability to prioritize the download traffic at the ISP side of your WAN link by throttling the non-real-time download traffic at your side, in order to leave room on the downlink for the real-time traffic.

Not long ago it was basically enough to throttle all TCP and let all UDP flow freely, because the amount of application protocols with loss management using UDP as transport was negligible; with the emergence of QUIC it makes sense to throttle also that one. And it may be much easier to identify RTP packets by source, destination and size than to identify QUIC packets.

Most likely your ISPs are honest and offer a stable connection; mine instead sells a 30/3 connection, but those values are never respected and the latency varies between 8 and 50 ms, so using DSCP is a must!
So all I have to do is leave the LTE gateway as a backup connection with failover only (which I’ve seen sometimes doesn’t work when the ISP keeps the connection up but not the internet connectivity), activated by a script.

I feel that we understand the same words very differently.

When you say dscp, you seem to actually have in mind handling packets based on their priority.

When I say dscp, I have in mind the field in IP header of the packet which is used to deliver the information about packet priority to downstream systems, which may use it but also may not.

And I totally agree with you that assigning priority to packets and handling them according to that priority is necessary wherever the total traffic routed to a link can exceed the physical bandwidth of that link and you need to save real-time traffic like RTP from being dropped. The download direction of a WAN link while many file downloads are in progress is a typical example.

The only thing I keep saying is that it is not safe to trust the value in the DSCP header field of packets which have arrived through the internet, because it often changes during transport over parts of the network which are not under your control, and that you should therefore use other criteria to assign priority to packets received from the WANs. It is perfectly OK to use the DSCP value which has been assigned by a device in your own network to prioritize the upload direction on the WANs.

It’s these words that make me curious… how could it be done? :bulb: :bulb:

The fact that it is not easy in general is the reason why the DSCP field exists :slight_smile:

However, for commercial VoIP services in particular, the typical case is that the SIP and RTP packets come from a limited number of IP addresses, so you can identify these addresses and prioritize anything that comes from them. If this is not possible, UDP packets with both src-port and dst-port values higher than 8000 and a size below 500 bytes are also good candidates to be RTP packets. I have even seen someone here identify Skype connections, but prioritizing Skype requires connection-marking, and since each connection can only have a single connection-mark, you end up with connection-marks like wan1_skype and wan2_skype and complex marking rules.
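A heuristic mangle rule along those lines might look like this (a sketch only; the exact port range, size threshold, and the packet-mark name are assumptions to be tuned for your service):

```routeros
# Sketch: small UDP packets with both ports above 8000 are likely RTP.
# The mark name "probably-rtp" is assumed for illustration.
/ip firewall mangle
add chain=prerouting protocol=udp src-port=8000-65535 dst-port=8000-65535 \
    packet-size=0-500 action=mark-packet new-packet-mark=probably-rtp \
    passthrough=no comment="small UDP with high ports: likely RTP"
```

Matching on known provider IP addresses, where possible, is more reliable than this size/port heuristic.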

I hadn’t thought it could be done. If I understood correctly, in my case I have a Cisco ATA with these settings: proxy 83.211.227.21, RTP port min/max 13456/16482, and SIP port 5061. Could I, with mangle rules and a queue tree, always give the highest priority to the VoIP service? :open_mouth:

I would say that if you give the highest priority to anything which comes through WAN to the LAN IP of that ATA (supposing you’ll configure it with a static IP address or create for it a static dhcp lease) and vice versa, you’re good.
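With your ATA’s data, that could be sketched roughly as follows (the LAN IP 192.168.88.10 and the packet-mark name are assumptions; substitute the static address or DHCP lease you actually give the ATA):

```routeros
# Sketch: mark everything to and from the ATA's LAN IP for the
# highest-priority queue. Address and mark name are assumed.
/ip firewall mangle
add chain=forward dst-address=192.168.88.10 action=mark-packet \
    new-packet-mark=voip passthrough=no comment="WAN -> ATA"
add chain=forward src-address=192.168.88.10 action=mark-packet \
    new-packet-mark=voip passthrough=no comment="ATA -> WAN"
```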

But I have to repeat what I wrote earlier - prioritization on transmission side is easy, every time the previous packet has left and you choose what to transmit next, you check the queues starting from the highest priority one, and you transmit the packet waiting in the first non-empty queue found. This approach is useless with prioritization on receiving side, where you have to create an outgoing bandwidth limiter for packets being forwarded to LAN and use it to limit all streams from sources which are able to adjust their sending speed to the rate on which they get acknowledgements back, and the summary bandwidth for all these streams must be lower than the physical downlink bandwidth of the WAN. The difference between the two is where the real time traffic has to fit.

So if your downlink bandwidth is 30 Mbit/s as you say, and you want to support two phone calls simultaneously, you have to leave a margin of at least 200 kbit/s, so the max-limit of the queue shaping the “throttlable” traffic in the WAN->LAN direction must be 29.8 Mbit/s. In fact there are also other UDP services which cannot be throttled, so you need to keep the margin higher; I would use 1 Mbit/s, or at least 0.5 Mbit/s. The real-time traffic must bypass the throttling queue subtree in order for the whole setup to serve its purpose.
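The arithmetic above could translate into a queue tree roughly like this (a sketch under the assumption that a packet-mark `bulk` is already assigned by mangle rules to all throttlable WAN->LAN traffic, and that VoIP packets carry no such mark and therefore bypass the shaper entirely; the queue name is assumed):

```routeros
# Sketch: shape only marked "bulk" traffic towards the LAN, keeping a
# ~1 Mbit/s margin of the 30M downlink free for real-time traffic.
/queue tree
add name=downlink-throttle parent=bridge packet-mark=bulk max-limit=29M \
    comment="30M downlink minus margin reserved for RTP and other UDP"
```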

Your current setup for WAN->LAN direction behaves as if the LAN was the bottleneck, so it does not throttle the “throttlable” traffic below the downlink bandwidth of the WAN, so it doesn’t prevent RTP packets from getting lost while being sent by the ISP down your WAN link. So if you don’t have problems hearing the other party at your end while heavy downloads are in progress, the only explanation is that the ISP does prioritize the RTP packets as it sends them to you, which usually only happens if the ISP is also your VoIP service provider. But if it is the case, the whole prioritization of the WAN->LAN direction at your side is not necessary.

That’s exactly what gives me problems: the WISP does not provide stable bandwidth. Sometimes it gives me 30/3 Mbit/s, other times as little as 3/1, depending on how much traffic there is (a real disgrace, and it costs a lot). That’s why I used that type of queue tree, plus a script that adapts the bandwidth limits.

But that’s two different problems.

If the ISP has overbooked backbone links, i.e. if the sum of download rates sold to the clients exceeds the backbone capacity, causing your actual downlink bandwidth to drop down to 1/10 of the nominal when everybody in the neighbourhood starts downloading, and if the ISP doesn’t discriminate between traffic categories in any way, there is essentially nothing you can do - which is my exact case here.

If such ISP is at least clever enough to throttle all subscribers proportionally, it would still make sense to implement what I wrote, i.e. throttling of the throttlable traffic 200 kbit/s below the currently available downlink bandwidth and keeping that margin for real time traffic. I only wonder how do you measure the currently available downlink bandwidth?

I have never been able to understand how to do this without using a script, since the maximum upload and download bandwidth varies continuously during peak-traffic hours.

I only wonder how do you measure the currently available downlink bandwidth?

I use the 'Tik Tools: I enable the Btest server and bandwidth test, and look at the graph and the maximum and minimum values.

So you have a btest server somewhere else (behind the remote end of your WAN link) and use it to measure the WAN link parameters? Even if so, this is a measurement method which only works if there is no other traffic on your WAN link, otherwise the measurement in progress fights for the WAN bandwidth with your regular traffic so you measure a nonsense and affect your regular traffic.

I always thought that the 'Tik bandwidth test took precedence over all LAN traffic. :frowning:
But anyway, I think it’s obvious that when I have 3 Mbps instead of 30 and very little traffic in the LAN, the WISP is reducing the bandwidth.

After letting it shake down a bit in my head overnight, I have an idea for you.

You may watch for RTP packets being sent from your ATA (not for SIP packets, because the ATA has to re-register and send keepalive packets every couple of seconds even when no call is in progress), and while RTP is being sent, you can reduce the max-limit on the bridge for the throttlable traffic to, say, 2 Mbit/s (assuming that the downlink bandwidth at that time is reduced but you still get your own 3 Mbit/s or so). If the ISP doesn’t provide a fair distribution between clients, doing so would be useless.

To watch for the RTP being sent, you would use the following pre-routing rule:

/ip firewall mangle
add action=add-dst-to-address-list address-list=watch-rtp address-list-timeout=5s chain=prerouting protocol=udp src-address=ip.of.the.ata src-port=13456-16482

You would then schedule a script to run every second which checks for the presence of an item on that address list; if the list is not empty, the script reduces the queue max-limit. Once the call ends and the address list becomes empty, it sets the max-limit back to 30 Mbit/s.
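Such a scheduler-plus-script pair could be sketched like this (the queue name `downlink-throttle`, the script name, and the two limits are assumptions; the address-list name `watch-rtp` matches the mangle rule above):

```routeros
# Sketch: every second, throttle bulk downloads hard while the
# "watch-rtp" address list (refreshed by the mangle rule) is non-empty.
/system script add name=rtp-watch source={
    :if ([:len [/ip firewall address-list find list=watch-rtp]] > 0) do={
        # a call is in progress: leave most of the downlink for RTP
        /queue tree set [find name=downlink-throttle] max-limit=2M
    } else={
        # no call: let downloads use the full nominal downlink
        /queue tree set [find name=downlink-throttle] max-limit=30M
    }
}
/system scheduler add name=check-rtp interval=1s on-event=rtp-watch
```

The 5-second address-list timeout in the mangle rule gives the list a short grace period after the last RTP packet, so the limit is not flapped mid-call.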

This way, you would always throttle the downloads to some minimum bandwidth during each call, and let them run at full available download bandwidth when no call is in progress.