I want to set up a specific traffic shaping but I can’t seem to find out how to implement it.
So, I want to limit the bandwidth to 100mbit up/down per IP (on the LAN side) for every IP, without having separate queues for each IP.
This is easy enough with PCQ.
But if for example IP 10.0.0.10 has 1 connection that fills the 100mbit then any other connection on that IP will be slow.
I want to have 100mbit per IP (regardless of source/dest port), and then apply PCQ on top of that so those 100mbit per IP are shared evenly across any new connections, making the whole thing much more responsive even though it is limited.
Also I prefer not to do this based on IPs (meaning different rules/queues per IP).
I have multiple /24s behind the router and I want by default all IPs of those /24s to be limited like that and I will exclude whichever IPs don’t need to be limited.
I tried setting up a queue tree with an inner (parent) queue with a PCQ queue type that limits (pcq-rate) to 100mbit based on source/dest IP on the PCQ options.
Then added a leaf (child) queue with a different PCQ queue type that shares the bandwidth essentially for each connection by using src/dst IPs & Ports on the PCQ options.
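For reference, the attempted setup described above would look roughly like this (the type names, tree names, and the lan-traffic packet mark are placeholders, not taken from an actual config):

```
# Inner (parent) PCQ type: one 100mbit sub-queue per IP
/queue type add name=per-ip kind=pcq pcq-rate=100M \
    pcq-classifier=src-address,dst-address
# Leaf (child) PCQ type: pcq-rate=0 means no per-stream cap,
# so bandwidth is only divided evenly per connection
/queue type add name=per-conn kind=pcq pcq-rate=0 \
    pcq-classifier=src-address,dst-address,src-port,dst-port
/queue tree add name=inner parent=global queue=per-ip
/queue tree add name=leaf parent=inner packet-mark=lan-traffic queue=per-conn
```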
Unfortunately this does not work: only the leaf queue's PCQ is enforced, and the documentation on the subject is really bad and outdated, so I don't know what I am doing wrong (I am definitely not doing this right; it's more like trial and error).
Does anyone know how to implement this type of Traffic Shaping/QoS ?
I would like to know as well, because like you said the documentation for it is outdated and the unofficial tutorials I could find are not that reliable.
The router probably does not do anything to share unevenly between connections from the same IP, it is the end system that does that.
When you have a queue with traffic shaping, the router will drop packets; what happens next depends on the TCP implementation.
Normally to avoid that you could try SFQ but I think it cannot be used in your situation. You could experiment with it to see if it solves the problem for 1 system.
If you want 100meg available to everyone while also capping the total bandwidth for the whole /24 at 100meg, then you need to change the address mask from /32 to /24 in the PCQ options.
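In PCQ terms, that change would be something like this (assuming an existing queue type named pcq-down that classifies on dst-address):

```
# Group sub-queues on the /24 instead of the /32, so the whole
# subnet shares a single 100meg PCQ sub-queue
/queue type set pcq-down pcq-dst-address-mask=24
```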
I want to have a 100mbit limit per IP (/32) without having to create queues for each IP (there are multiple /24s behind the router, hence thousands of IPs).
That’s pretty easy by using PCQ. It will apply the limit using a single queue. It works like a charm.
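A minimal sketch of that single-queue approach, assuming one LAN subnet of 10.0.0.0/24 (the names and the 1G max-limit are placeholders):

```
# One PCQ sub-queue per /32, each capped at 100mbit
/queue type add name=pcq-up-100 kind=pcq pcq-rate=100M pcq-classifier=src-address
/queue type add name=pcq-down-100 kind=pcq pcq-rate=100M pcq-classifier=dst-address
# A single simple queue covers the whole subnet; PCQ splits it per IP
/queue simple add name=per-ip-cap target=10.0.0.0/24 \
    queue=pcq-up-100/pcq-down-100 max-limit=1G/1G
```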
The issue with this approach alone is that when someone does for example an ftp transfer to IP 1.1.1.1 the PCQ will apply the limit of 100mbit to that IP.
But when another user tries to connect to that IP at the same time, it's nearly impossible, because the ftp transfer keeps the queue full of packets (with drops); that's normal behavior when applying limits.
What I want to do is apply a second level of PCQ to each IP: 100mbit per /32 from the first PCQ, and then within those 100mbit per /32 another PCQ that divides the traffic based on src-address/dst-address, essentially sharing the available 100mbit among the users requesting it.
So when the FTP transfer occurs and takes up all 100mbit and another user is trying to access the same IP the second PCQ will kick in and divide the traffic allowing the second user to access the IP.
Yes the FTP session will get less traffic and more drops but the other users accessing the same server will not notice any lag or loss (because of how PCQ works).
My problem is that I don't know how to apply this double layer of PCQ (if I may call it that).
PS: I understand that if there are, for example, 100 servers behind the router and the router uplink is 1gbit, and all servers try to reach 100mbit, there won't be enough bandwidth available anyway. I know that and it is not an issue. I simply need to apply the 100mbit limit without slowing down every connection to the throttled IP (when the max limit is reached).
There is a router with multiple servers behind it (colocation/dedicated).
The servers are all connected to gbit switches but they are allowed to have 100mbit connections to the internet.
But locally (on the same subnet/vlan) they must not be limited, so we cannot set the switch ports manually at 100mbit.
So we want to apply these limits from a central point (the router).
The servers come and go all the time so creating queues per IP/server is not feasible.
That's why I am trying to implement this using PCQ, to keep the queues to a minimum (fewer errors, less processing overhead, etc.).
I presume you use 1.1.1.1 here as an example IP of one system inside your own network - is that correct?
Or is 1.1.1.1 an outside system that two of your internal systems both do FTP to?
Again, what happens to individual TCP sessions to a system that is bandwidth limited is heavily dependent on the OS running on that system and the parameters of the network stack.
As I noted, you can try to influence that behaviour by using SFQ.
I disagree. I know that what I am trying to do with PCQ works, and works really well. I know for a fact that I can limit the bandwidth without every connection getting slowed down. That’s the whole point of PCQ. I’ve been using it for ages. Please read the documentation to understand what PCQ does and how it works.
The problem is, as I believe I have described extensively, how to apply 2 different PCQ policies to the same packets. First, limit each IP (/32) as a whole to 100mbit.
Second, apply PCQ (or SFQ for that matter) to those available 100mbit per IP so that they are shared equally between the connections made to that IP.
With ‘the catch’ being that I want to do this with a few global queues/mangle rules. Not separate rules per IP as this would be very difficult to manage (there are thousands of IPs behind the router)
To my understanding, when you limit the bandwidth, what happens to the connections of any system behind the 'limiter' depends on the device applying the limit and how it queues/drops the excess packets, not on how the end system handles the rate-limited packets.
When the router drops the excess packets of a new connection, how on earth would the end system's network stack handle packets it doesn't even receive in the first place? What you are saying makes no sense for the specific case I am describing.
The bottleneck is the router that puts the limit, not the end system.
Thus the ‘fix’ must be applied on the router that limits the bandwidth.
Most likely you haven’t understood my problem. Please carefully re-read my posts.
SFQ is slightly different from PCQ. I don't see how it matters at all for what I am asking; it doesn't change anything.
Can you elaborate or provide a real world example that suits what I am asking?
What happens depends on how the end system is reacting to the dropping of packets.
Is it reacting selfishly, by trying to re-send the dropped packets as quickly as possible and therefore optimizing the rate of the connection to its own benefit, or is it reacting cooperatively by recognizing that the connection is apparently overburdened and it may be better to downscale the TCP window and/or the transmission rate to a value where there is less packet loss.
When it uses the cooperative method, it also allows other connections to the same system to behave more reasonably.
It could even try to balance the bandwidth of multiple TCP sessions that way.
When the systems don’t want to be cooperative, you will have to enforce that behaviour externally. And when you don’t want to put effort into that, you are in trouble.
I understand your situation, and you can easily accomplish this by putting another MikroTik board behind the current one.
The first one, closest to the servers, limits the servers' uploads to 100M per IP, and the second router limits the incoming traffic from any address to each address in your network.
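A rough sketch of that split, with made-up names and 10.0.0.0/24 standing in for the server subnet:

```
# Router A (closest to the servers): cap upload at 100M per server IP
/queue type add name=up-per-ip kind=pcq pcq-rate=100M pcq-classifier=src-address
/queue simple add name=srv-up target=10.0.0.0/24 \
    queue=up-per-ip/default max-limit=1G/1G

# Router B (outer): cap download at 100M per destination IP
/queue type add name=down-per-ip kind=pcq pcq-rate=100M pcq-classifier=dst-address
/queue simple add name=srv-down target=10.0.0.0/24 \
    queue=default/down-per-ip max-limit=1G/1G
```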
i.e. Now for each IP (parent), distribute all allocated bandwidth (100Mbps) equally between the different ports (applications) used by that IP.
So any single client (IP) can't saturate their own allocated bandwidth with a single application.
So a user (IP) uploading 100Mbps with a torrent client doesn't consume all the bandwidth for that same user (IP); instead, the bandwidth is balanced across all the applications (ports) used by said user (IP).
I presume that using the pcq-classifier=src-address,src-port and pcq-classifier=dst-address,dst-port combo would work as a one-liner, as described here
I have a feeling that using pcq-classifier=src-address,dst-address,dst-port and pcq-classifier=src-address,dst-address,src-port would produce better groups (PCQ sub-streams), for example downloading with a browser from a single server IP (src-address) port 443 (src-port) to a client IP (dst-address, multiple ports), but this depends on whether the server is outside or inside the router (where you would reverse the classifiers).
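Expressed as queue types, those two classifier combinations would look something like this (the names are placeholders):

```
# Sub-streams grouped by server/client address pair plus dst-port
/queue type add name=pcq-grp-dst kind=pcq pcq-rate=100M \
    pcq-classifier=src-address,dst-address,dst-port
# Same grouping but keyed on src-port instead (reverse direction)
/queue type add name=pcq-grp-src kind=pcq pcq-rate=100M \
    pcq-classifier=src-address,dst-address,src-port
```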