The configuration is this:
- I have RB433 with one miniPCI card (R52H), wireless interface set to ap-bridge and one client (RB711) connected to it.
- I have two firewall mangle rules: chain output -> mark packet "OutputMark" and chain prerouting -> mark packet "OutputMark"
- I have one queue in Queue tree called WLan1Queue, type is PCQ (classifier all four checkboxes - src port, src address, dst port, dst address), limit-at=1 max-limit=100M, priority=8
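For reference, the setup described above would look roughly like this in the RouterOS CLI. This is a sketch reconstructed from the description, not an export from my router; the PCQ type name pcq-all and the parent interface wlan1 are assumptions:

```
/ip firewall mangle
add chain=output action=mark-packet new-packet-mark=OutputMark
add chain=prerouting action=mark-packet new-packet-mark=OutputMark

/queue type
add name=pcq-all kind=pcq pcq-classifier=src-address,dst-address,src-port,dst-port

/queue tree
add name=WLan1Queue parent=wlan1 packet-mark=OutputMark queue=pcq-all \
    limit-at=1 max-limit=100M priority=8
```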
Now the problem:
- I start a ping from the AP to the client (300ms interval, 50-byte packets); the average is 1ms
- Then I start bandwidth test (send UDP to client, no limit)
- As the bandwidth test reaches the capacity of the wireless link (within a few seconds), the problem appears:
- There are 3 PCQ sub-queues created (according to the classifier: one for the ping, one for the bandwidth test data and one for the bandwidth test control connection - that is what you can see in the Torch tool)
- So the outgoing traffic is divided fairly and the round-robin algorithm works well
- The problem is that the ping to the client now shows avg. 62ms
- The wireless link is without any interference (it is indoors), the bandwidth test shows almost 30Mbps, CCQ is 95-100% and the data rate stays at 54Mbps the whole time.
- I was thinking that this was still some weird wireless problem, so I connected the devices with a UTP cable (ether1 to ether1) and ran the same test, but with max-limit=200M (so that HTB does not limit the traffic). A similar thing occurred:
The ping was 128ms.
- I also tried switching to NStreme, 10Mb Ethernet, 802.11n and different wireless cards; everything behaves the same, except that the latency and available bandwidth vary with the interface "type".
- If you set max-limit lower than the capacity of the link (either wireless or wired), then HTB does the limiting and no extra delay occurs.
- I have read the HTB manual a hundred times and the interesting sentence is this (refman 3.0):
Please note this part: "When there is a possibility to send out a packet, HTB queries all its self slots in order of priority, starting with highest priority on the lowest level, till lowest priority on highest level."
What exactly does this mean? Well, I thought that the interface (let's say the wireless card) can tell the OS that it is able to send. Or the OS can ask the wireless card "Are you able to send?" and it would reply "Yes". If it worked that way, there couldn't be any delay. Where would it come from? If the card is able to send 30Mbps (using 1500-byte UDP packets), that means 2500 packets per second, i.e. a 0.4ms delay (one way) per packet. But a constant 62ms delay?
And on the Ethernet interface: 97Mbps of UDP traffic = ~8000 pps = 0.125ms per packet. But a constant 128ms delay?
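The per-packet arithmetic above can be checked with a few lines of Python (the throughput numbers are the ones measured in my tests):

```python
def per_packet_ms(throughput_bps, packet_bytes=1500):
    """Time needed to serialize one packet at the given throughput, in ms."""
    pps = throughput_bps / (packet_bytes * 8)  # packets per second
    return 1000.0 / pps

# 30 Mbps wireless -> 2500 pps -> 0.4 ms per packet
print(per_packet_ms(30e6))
# ~97 Mbps Ethernet -> ~8083 pps -> ~0.124 ms per packet
print(per_packet_ms(97e6))
```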
So it seems that THERE IS A HIDDEN BUFFER (FIFO) on each real interface!
And its size can be calculated:
802.11 interface: 155 packets (1500 bytes)
NStreme interface: 73 packets (1500 bytes)
Ethernet interface: 1000 packets (1500 bytes)
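These sizes follow from the steady-state delay multiplied by the packet rate; a back-of-the-envelope sketch in Python, using the delays and throughputs measured above (the NStreme figure would come from its own measurements, not shown here, and the Ethernet result comes out near rather than exactly 1000):

```python
def hidden_buffer_packets(delay_s, throughput_bps, packet_bytes=1500):
    """Number of packets the hidden FIFO must hold to sustain the observed delay."""
    pps = throughput_bps / (packet_bytes * 8)  # packets per second
    return delay_s * pps

# 802.11: 62 ms delay at 30 Mbps -> 155 packets
print(round(hidden_buffer_packets(0.062, 30e6)))
# Ethernet: 128 ms delay at ~97 Mbps -> roughly a 1000-packet FIFO
print(round(hidden_buffer_packets(0.128, 97e6)))
```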
This presents a huge problem for wireless links! For 100Mb Ethernet you can easily set an HTB limit of 95Mbps and everything will work fine (it is full duplex and the data rate is fixed). But on a wireless link you never know how much bandwidth is actually available. If you set an HTB limit of, say, 20Mbps, it will work perfectly as long as at least 20Mbps is available. But what happens when interference occurs (whether from other wireless protocols, or "interference" from another 802.11 network operating on the same frequency)? HTB will not limit the traffic; it will keep pushing it into the real interface through the "hidden buffer", and as the bandwidth drops, the ping increases, since it now takes more time to transmit those 155 packets (for 802.11a).
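To illustrate how the delay grows as the link degrades: the time to drain a full 155-packet FIFO depends only on the current link rate. The 10 Mbps degraded rate below is an illustrative assumption, not a measurement:

```python
def buffer_delay_ms(buffer_packets, throughput_bps, packet_bytes=1500):
    """Time to drain a full FIFO of buffer_packets at the given link rate, in ms."""
    return buffer_packets * packet_bytes * 8 / throughput_bps * 1000

# 155-packet FIFO at the full 30 Mbps -> the 62 ms seen in the test
print(buffer_delay_ms(155, 30e6))
# Same FIFO if interference drops the link to 10 Mbps -> 186 ms
print(buffer_delay_ms(155, 10e6))
```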
Has anybody come across this problem? Why is this buffer there and what determines its size? Can it be set in ROS (for example to 10 packets instead of 155)? Thank you for your answers.