Queue Trees

Hello,

I am trying to control bandwidth based on IP. The best way I found was to use queue trees; I can control outgoing bandwidth but cannot do incoming. If I select new connections, should I also select established? Will it not restrict twice? The traffic I want to control is from one port.

/ip firewall mangle
add action=mark-connection chain=forward comment=Upload connection-state=new disabled=no new-connection-mark=Upload-Client1 passthrough=yes src-address-list=Client1
add action=mark-connection chain=forward comment=Download connection-state=new disabled=no dst-address-list=Client1 new-connection-mark=Download-Client1 passthrough=yes

add action=mark-packet chain=forward comment=Upload connection-mark=Upload-Client1 disabled=no new-packet-mark=Upload-Client1 passthrough=yes src-address-list=Client1
add action=mark-packet chain=forward comment=Download connection-mark=Download-Client1 disabled=no dst-address-list=Client1 new-packet-mark=Download-Client1 passthrough=yes

/queue type
add kind=pcq name=Client1-Download pcq-classifier=dst-address pcq-dst-address6-mask=64 pcq-rate=10M pcq-src-address6-mask=64
add kind=pcq name=Client1-Upload pcq-classifier=src-address pcq-dst-address6-mask=64 pcq-rate=10M pcq-src-address6-mask=64

/queue tree
add name=Client1-Download packet-mark=Download-Client1 parent=global queue=Client1-Download
add name=Client1-Upload packet-mark=Upload-Client1 parent=global queue=Client1-Upload


/ip firewall address-list
add address=192.168.255.0/29 list=Client1


Thank you.

Of course you cannot control incoming traffic’s bandwidth directly. The maximum you can do is to slow down the delivery of the incoming traffic from the ISP to the local destination, and if the transport or application protocol of a given session supports some kind of feedback to the sender, this will make the sender reduce the bandwidth towards your end device, but this doesn’t work for all protocols.

Also, you seem to have misunderstood the role of connection marking. Many tutorials say “assign a connection-mark first, and later translate it to a packet-mark” but they don’t explain why. The idea is that if the classification conditions are complex, it saves CPU to evaluate them just once, when handling the initial packet of each connection, and to save the result as a connection-mark in the connection’s context as maintained by the connection tracking module of the firewall. Every subsequent packet of that same connection then doesn’t need to be re-checked against all the classification conditions, because the connection tracking module attaches the connection-mark to it as a metafield, which the subsequent stages of the firewall can match on just like the actual fields of the packet headers. This is also the reason why connection-state=new is part of the conditions of the action=mark-connection rule - it avoids re-evaluating the potentially complex match conditions for every packet.

Each connection can have at most one connection-mark; if any packet belonging to an already marked connection matches an action=mark-connection rule, the connection-mark of the connection gets overwritten with that rule’s new-connection-mark value.

In your rules below, each connection gets connection-marked with either Upload-Client1 (if initiated by one of the clients) or Download-Client1 (if initiated from outside towards one of the clients acting as server in that particular connection), thanks to the connection-state=new match condition which only matches on the initial packet of each connection.

To place the download and upload packets into different queues, you have to let the action=mark-packet rules match on more than just the connection-mark (which must be common to both directions, otherwise it loses its purpose), e.g. on in-interface(-list). So whatever comes in via WAN is download, hence it gets the packet-mark Download-Client1; whatever does not come in via WAN is upload, hence it gets the packet-mark Upload-Client1.
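For illustration, a sketch of such mangle rules, assuming your WAN-facing ports are grouped into an interface list named WAN (the list name and the mark names are placeholders, adjust them to your setup):

/ip firewall mangle
add action=mark-connection chain=forward comment=Client1 connection-state=new new-connection-mark=Client1 passthrough=yes src-address-list=Client1
add action=mark-connection chain=forward comment=Client1 connection-state=new new-connection-mark=Client1 passthrough=yes dst-address-list=Client1
add action=mark-packet chain=forward connection-mark=Client1 in-interface-list=WAN new-packet-mark=Download-Client1 passthrough=no
add action=mark-packet chain=forward connection-mark=Client1 in-interface-list=!WAN new-packet-mark=Upload-Client1 passthrough=no

Note passthrough=no on the mark-packet rules: once a packet-mark is assigned, there is no need to evaluate any further mangle rules for that packet.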

Thank you so much for the explanation. With my scenario I am not sure how to control bandwidth using the router. I am using OSPF, so there is a chance some interfaces will change from incoming to outgoing when links flap; I do not have one particular interface acting as WAN in a router, it is part of an OSPF network. I was thinking about how I can control bandwidth in that same router. Outgoing works great; the issue is incoming.

Well, nobody says you have to distinguish between “incoming” and “outgoing” by a particular interface, you can use address-list for that. So whatever matches dst-address-list=internal-subnets is considered “incoming”, whereas whatever matches src-address-list=internal-subnets is considered “outgoing” - in terms of an individual packet, not a connection. For connections, “incoming” means “initiated by a remote client towards a local server” (and consists of both incoming and outgoing individual packets), and “outgoing” means “initiated by a local client towards a remote server” (and also consists of both incoming and outgoing individual packets).

What I was saying was only that if you want to enforce bandwidth restriction by packet direction, not by connection setup direction, you have to translate the same connection-mark to one of the two distinct packet-mark values depending on packet direction in order to handle each direction by another queue. Or not use connection marking at all, for the cost of more CPU spent per packet.
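As a sketch of that translation using address-lists instead of interfaces (the single connection-mark Client1 here is a placeholder; both mark-connection rules would set this same mark regardless of who initiated the connection):

/ip firewall mangle
add action=mark-packet chain=forward connection-mark=Client1 dst-address-list=Client1 new-packet-mark=Download-Client1 passthrough=no
add action=mark-packet chain=forward connection-mark=Client1 src-address-list=Client1 new-packet-mark=Upload-Client1 passthrough=no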

As you say that the same interface can be an “uplink” one now and a “downlink” one a minute later, it means to me that the bandwidth enforcement must be deployed on multiple routers, as the packets belonging to the same connection may pass through one router now and through another one a minute later.

Maybe I still don’t understand the nature of your problem?

Hi Again,

I just want to control bandwidth, if possible, from the router. I can use an address list for this, but when I put these rules in, I can only control upload traffic and not incoming traffic. I have tried to look around the web and could not find any good examples of what I want to do. Would it be possible to direct me to some examples of how this is normally done? I tried those rules but they did not help much.

Thank you again.

This is what I’ve already written in my first response. There is no way to directly limit the traffic the router receives from elsewhere, and it is not a missing capability of RouterOS, it is pure physics - once an upstream router sends a packet to your router, it does arrive there no matter what you do on your router itself. The only thing you can directly affect on your router is the rate at which you forward those received packets downstream (further towards the destination), and by doing so, indirectly affect the incoming flow for those types of traffic which can accommodate to that.

If the recipient has some means to inform the sender about the receive rate (or lost packets), the sender may lower the bandwidth based on this information - either by just sending slower in the case of non-real-time data like web pages, or by choosing a compression with lower quality but higher compression efficiency in the case of real-time data. If no feedback channel is available, the output queue will simply overflow and what doesn’t fit into it will be lost, but this will not affect the bandwidth occupied on reception.

Thank you for helping me understand this. I have another question: when limiting users by IP address, would the bandwidth be shared equally among all users, or can each of them use as much as they want as long as they do not exceed the total allocation? Thank you

There is a special queue type, called PCQ, which allows automatic distribution of the queue’s common bandwidth quota among “flows”. The flows are distinguished by any combination (source address prefix, source port, destination address prefix, destination port) configured for such a queue.

Configuration of a per-flow bandwidth limit is optional. If it is not configured, a single flow may take all the bandwidth allowed for the whole queue when there are no competing flows; if it is configured, each flow can only use up to that limit even if no competing flows exist.

If each of the N flows requires more than 1/N of the queue bandwidth, all of them get restricted to 1/N of the queue bandwidth. If some of the flows require less than that, their share is equally distributed among those which require more (but only up to the per flow limit if configured).
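For illustration, a PCQ setup along those lines (the 10M total and 2M per-flow values are made-up examples; set pcq-rate=0 to leave the per-flow limit unconfigured):

/queue type
add kind=pcq name=pcq-download pcq-classifier=dst-address pcq-rate=2M
add kind=pcq name=pcq-upload pcq-classifier=src-address pcq-rate=2M
/queue tree
add name=Client1-Download packet-mark=Download-Client1 parent=global queue=pcq-download max-limit=10M
add name=Client1-Upload packet-mark=Upload-Client1 parent=global queue=pcq-upload max-limit=10M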

I’m not sure whether the above is an answer to your question, though.

Thank you. Looking at this example, https://wiki.mikrotik.com/wiki/Manual:Queues_-_PCQ_Examples, which distributes bandwidth equally, what I want is to allocate a total bandwidth, like 5 Mb, and that subnet or group should not exceed the 5 Mb; I don’t want to do equal distribution. How can I do that?

So when there are two devices and each of them tries hard to push through 5 Mb, while the common cap is 5 Mb in total, how should those common 5 Mb be distributed between the two, given that you explicitly do not want them distributed equally? Or are you saying that you don’t need equal distribution but you don’t mind if it happens?
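For the record, if the goal is only a common cap with no fairness enforced among the devices, a non-PCQ queue is enough, e.g. a simple queue targeting the whole subnet (the subnet is taken from the address-list posted earlier; 5M is the example cap):

/queue simple
add name=Client1-total target=192.168.255.0/29 max-limit=5M/5M

Within that cap, the devices then compete for the bandwidth on a first-come, first-served basis rather than getting equal shares.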