We are having issues with simple queues, but only when the traffic is over 10M. We have thousands of customers on sub-5M plans using simple queues, and they all work perfectly. If we set up a customer with a 30M download / 5M upload plan, a single connection tops out at 17M, although they do get their full 5M upload. If we disable the simple queue, I can get a 93M/87M speedtest. If I set up the same shaping with an advanced queue and packet marking, everything works as expected.
We have noticed, though, that if we shape the customer to 30M with a simple queue, they can:
- download an ISO at 10M
- download an exe file at 10M
- and still get 10M on a speedtest
for a total of 30M of traffic through the simple queue.
What we are seeing is that we cannot reliably get a single TCP connection through a simple queue past 10M. The customer can reach a total of 30M, but only when it comes from multiple streams. If we remove the simple queue, with no other changes, everything works perfectly.
We have found a way around this, but we would prefer to shape all customers the same way.
Thank you very much. We have tested the ethernet-default queue type and so far it seems to be working. We just need to roll this out on the network now, but it works in the test lab.
We have the same problem, and it is very easily reproducible. Our system uses FreeRADIUS to assign a Mikrotik-Rate-Limit of 512k/10240k to clients, and MikroTik assigns "default-small" as the queue type.
To reproduce the problem, we go to a high site and run a TCP bandwidth test in the send direction toward the client's IP.
On the graph the client ALWAYS gets 4mbit. If you remove the dynamically created simple queue, the client suddenly gets up to their full 10mbit.
To test whether "default-small" is the problem, we go to Queue Types and change default-small from a queue size of 10 packets to 50 packets. Now our 10mbit client gets a very good speed.
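A rough bandwidth-delay-product (BDP) calculation is consistent with what we saw: a single TCP stream needs the bottleneck queue to buffer roughly one BDP's worth of packets to keep the link full. This is a sketch only; the 50 ms round-trip time and 1500-byte MTU are assumptions, not measured values from our network.

```python
# Estimate how many full-size packets a queue should hold so that a
# single TCP stream can fill a shaped link (rule of thumb: buffer ~ BDP).
def bdp_packets(rate_bps, rtt_s, mtu_bytes=1500):
    """Bandwidth-delay product of the path, expressed in full-size packets."""
    bdp_bytes = rate_bps * rtt_s / 8  # bits in flight, converted to bytes
    return bdp_bytes / mtu_bytes

# 10 Mbit/s plan with an assumed 50 ms RTT:
print(round(bdp_packets(10_000_000, 0.050)))  # -> 42 packets
```

At an assumed 50 ms RTT, a 10mbit stream needs on the order of 42 full-size packets of buffering, so the 10-packet default-small queue is far too small, while 50 packets is just enough, which would explain why raising the queue size fixed it.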
So, I've just changed the default-small queue size from 10 to 50 packets, and that seems to do the trick for the high-speed customers. But now the lower-speed customers will see higher latency.
Changing MikroTik's internal queue defaults feels wrong, and we are worried about a huge network-wide change (we have many high sites) that might help the big customers while adding latency for the small ones. Can anybody give us more insight into what is going on here?
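As a rough sanity check on the latency concern: the worst-case queueing delay of a FIFO is the time it takes to drain a full queue at the shaped rate, so the same 50-packet queue that is harmless on a 10mbit plan adds over a second on a 512k plan. This is back-of-the-envelope arithmetic with an assumed 1500-byte MTU, not a measurement.

```python
# Worst-case delay added by a full FIFO of N maximum-size packets
# when drained at the shaped rate.
def queue_delay_ms(queue_packets, rate_bps, mtu_bytes=1500):
    """Time to drain a full queue at the given rate, in milliseconds."""
    return queue_packets * mtu_bytes * 8 / rate_bps * 1000

print(queue_delay_ms(10, 512_000))     # 512k plan, 10-packet queue:  ~234 ms
print(queue_delay_ms(50, 512_000))     # 512k plan, 50-packet queue: ~1172 ms
print(queue_delay_ms(50, 10_240_000))  # 10M plan, 50-packet queue:    ~59 ms
```

This suggests the trade-off is real: one global queue size cannot suit both plan tiers, which is presumably why the per-plan behaviour differs so sharply.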