Interface Queue Causing Slow Performance

For many years we have witnessed the effect the default interface queue has on throughput. There are many forum threads with frustrated users who have never had closure on why almost all MikroTik devices with access to high-capacity Internet fail to perform with default interface queues. The answer has been to move away from only-hardware-queue or ethernet-default and instead use multi-queue-ethernet-default, then increase the queue size drastically.

In our case, we have increased our multi-queue-ethernet-default queue size many times as bandwidth demand has increased. Lately, we were using a queue size of 1000 and were unable to exceed 4Gbps until we increased it to 1500.

The question for everyone who has had to do this is simple: what is the recommended interface queue configuration for a router that has excess capacity?

In our case we have 10Gbps of Internet access with a CCR1072-1G-8S+ on RouterOS v6.43.7. Interface queues alone are responsible for the degraded throughput.

What do you mean with “fail to perform”?
Are you running benchmark tests or are you doing realistic everyday traffic?
Do you know that increasing queue size is generally considered a bad idea? (see https://www.bufferbloat.net/ )

If you have excess capacity, why use queues at all? :wink: Someone else on the forum had a production setup where the backbone was unlimited, and policing/shaping happened closer to the clients.

1500 packets × 1500 B/packet ≈ 2.25 MB

4 Gbps ÷ 8 bits/byte = 500 MB/s

2.25 MB ÷ 500 MB/s = 4.5 ms, which looks fine from a bufferbloat point of view.
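The arithmetic above (worst-case added latency = queue capacity in bytes divided by the link's drain rate) can be sketched as a small helper; the function name and structure here are illustrative, not from the original post:

```python
# Back-of-the-envelope bufferbloat estimate: a full queue of `packets`
# frames, each `bytes_per_packet` long, drained at `link_bps`, adds at
# most queue_bytes / drain_rate seconds of queuing latency.

def queue_delay_ms(packets: int, bytes_per_packet: int, link_bps: float) -> float:
    queue_bytes = packets * bytes_per_packet          # total buffered data
    drain_bytes_per_s = link_bps / 8                  # bits/s -> bytes/s
    return queue_bytes / drain_bytes_per_s * 1000     # seconds -> ms

# The thread's numbers: a 1500-packet queue of 1500-byte frames on a 4 Gbps link.
print(round(queue_delay_ms(1500, 1500, 4e9), 2))  # -> 4.5 (ms)
```

Note this is a worst-case figure: it assumes the queue is completely full and every frame is a full-size 1500-byte packet.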

Sorry to resurrect an old thread.
Can you please explain in detail your calculation?

Is there a final answer on changing the interface queue type to multi-queue-ethernet-default?