The first two queues seem to work fine to limit bandwidth to the two clients, though as you will see I have set queue 0 to be the RED type, which seems to restrict traffic slightly more than the pfifo type. Why is this, and what is the correct queue type in this scenario? The third queue (2) throttles all traffic from other addresses in the 192.168.0.0/24 network, and seems to work fine. The fourth queue just monitors the total traffic on the public interface and also seems to work fine. I have added the fifth queue (4) because I would like to implement SFQ for all traffic from and to the clients, as I imagine the upstream from public (to the Internet) will get busy at times. However, this queue appears to do nothing at the bottom of the table, and if moved to the top of the table it allows all traffic to pass unrestricted.
What am I doing wrong? What is the correct way to implement SFQ for all traffic passing to and from an interface?
If you have CPU and memory resources available (in this case you surely do), RED performs better than pfifo.
No packet can enter two simple queues at a time. If a packet is caught by one queue, then it escapes from all the others. Therefore your third queue will ‘shadow’ the subsequent ones.
There are no better or worse queue types - they are just different. Classless queues (schedulers), which by nature do not limit data rate (FIFO, SFQ, RED, but not PCQ), cannot be more or less accurate, as there is no way for them to be accurate at all. The only comparison that can be made is how effective they are in particular cases. For example, RED is good for TCP, because TCP can adapt to packet loss and decrease its traffic speed before the actual limit is reached, so the channel is used more effectively. But the same RED algorithm is not so good for UDP or ICMP, where a packet loss is simply a packet loss - for such traffic SFQ or FIFO may be better. Also note that there is no difference between PFIFO and BFIFO except the measurement units they use to limit their wait buffer (queue).
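To illustrate the last point, here is a sketch of defining the two FIFO variants as custom queue types (the queue names are illustrative; the parameters follow the RouterOS /queue type syntax): pfifo-limit is measured in packets, bfifo-limit in bytes.

```
/queue type add name=fifo-packets kind=pfifo pfifo-limit=50
/queue type add name=fifo-bytes kind=bfifo bfifo-limit=64000
```

With a typical 1500-byte MTU, the two limits above describe buffers of roughly comparable size - the only difference is the unit in which the buffer is counted.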
That is normal, as simple queues are put in two places simultaneously - in global-in (the direct queue) and in global-out (the reverse queue). I think I’ve mentioned this effect in the forum some weeks ago - please do search. Anyway, queue #0 matching src-address=192.168.0.103/32 in global-in and queue #3 matching everything in global-out makes a good explanation for the observed effect. When two entries match the same thing in the same place, only the first one actually works.
Thanks for your reply, lastguru. To ask my original question another way: when I set the two clients to the same download bandwidth, each equal to the total available upstream bandwidth, laptop2 always dominates. For example, with 1024kbps of upstream bandwidth available and both clients set to a download max-limit of 1024kbps, when both are downloading laptop1 gets approximately 250kbps and laptop2 approximately 750kbps.
I want to give both clients an equal slice of the available bandwidth, i.e. approx 512kbps each when they are both downloading. This seems to be the behaviour of an SFQ queue, but what queues do I actually need to set up on the MikroTik?
set 0,1 limit-at=0/512000 max-limit=0/1024000 interface=local_interface queue=sfq
disable 2,3,4
This will limit download only. It should work like this: when both are downloading, each receives 512kbps, but if one is idle, the other can go up to 1024kbps.
Thanks for your reply, Dave. I will try this later today but I’m sure it will work. However, this is fine with just two clients (as I have now - this is only a test setup), but what if I have, say, 50 clients and the number is changing all the time? Each time I add or remove a client I have to reset the limit-at value to bandwidth/number-of-clients for every client. Of course, this becomes much more complex if each client’s speed setting is different, e.g. 10 clients at 512kbps, 20 clients at 768kbps, 20 clients at 1024kbps. Is this the only way?
[admin@pad001X] ip firewall mangle> print
Flags: X - disabled, I - invalid, D - dynamic
0 src-mac-address=00:04:25:9E:00:81 action=passthrough mark-connection=CN02_conn
How do you measure the actual speeds? And how long does the test run?
How do the downloads from laptop1 and laptop2 differ?
Such a setup has undefined behaviour in the short term. In the long run, with random traffic, it will eventually equalize user sessions, but there is no explicit guarantee of that. That is especially true if you are using long buffers (try reducing the queue buffer) or different queue types on different clients (pfifo may be more aggressive than RED). That is why PCQ was made - try that on all your customers at once (not one PCQ for each client, but one for all of them).
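As a sketch of the “reduce the queue buffer” suggestion (the queue type name is illustrative, and the single-value queue= assignment assumes the 2.9-era simple-queue syntax): a custom pfifo type with a short buffer can be created and assigned to the existing client queues.

```
/queue type add name=short-fifo kind=pfifo pfifo-limit=10
/queue simple set 0,1 queue=short-fifo
```

A shorter buffer drops packets sooner, which makes TCP back off earlier and reduces the head start one session can build over another.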
I am usually doing the same downloads but even if I download from two different websites the results are very similar.
So I would have a parent PCQ queue set to the maximum upstream bandwidth available, and then child queues of the default type to limit the speed of each particular client? I have tried this and there is no change.
I am thinking that I am doing something fundamentally wrong. I am particularly puzzled by the section in the manual which says that SFQ cannot limit data rate at all, yet I appear to be doing exactly that, so I’m obviously not fully understanding what is going on. Could you post an example of using PCQ to equalize the speed to each client?
For the given task it would be better to change this example like this:
3. do not include pcq-rate in both entries (i.e., pcq-rate would be “0”)
4. use max-limit here with the values of maximal total download and upload
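Put together, the adjusted setup might look like the sketch below. This is an assumption-laden illustration, not a verified configuration: the 192.168.0.0/24 target is taken from the thread, the queue names are made up, and the queue=upload/download pair follows newer RouterOS syntax (older 2.9-era releases may take a single queue type value).

```
# PCQ types with pcq-rate=0: each active sub-stream gets an
# equal share of whatever the parent max-limit allows
/queue type add name=pcq-download kind=pcq pcq-classifier=dst-address pcq-rate=0
/queue type add name=pcq-upload kind=pcq pcq-classifier=src-address pcq-rate=0

# one simple queue for all clients; max-limit caps the totals
/queue simple add name=all-clients target-addresses=192.168.0.0/24 \
    max-limit=1024000/1024000 queue=pcq-upload/pcq-download
```

With pcq-rate=0 the per-client rate is not fixed, so adding or removing clients needs no reconfiguration: PCQ divides the max-limit equally among whoever is active at the moment.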