Dude can do this. If you set up a ping, it will average out the ping times over time.
Yes, I know. I'm sorry, I didn't explain the whole story behind it.
The fact that I just want the average value from the ROS ping tool statistics doesn't mean I only want to "see" this value; I want to store it in a variable inside the ROS device too.
What's the point? With simple QoS, ICMP at priority=1 and all other traffic at priority=8, we get completely different ping numbers; even under full load, ping times are under 50ms.
Yes, you are correct, but not exactly. Let me try to explain better what's happening; maybe I'm doing something wrong.
Who knows?
Since the symptom shows the same face on different hardware and scenarios, just consider the following:
- MT ROS on a P4 3GHz, 512MB RAM and two radios;
- The first one (wlan1) as station, 5.8GHz turbo, default txpower, etc.;
- The second one (wlan2) as ap-bridge, 2.4GHz B only, default txpower, same etc. (I've tried tons of changes anyway, with no success);
Ok, no mystery so far, and a P4 has plenty of processing power for just one 802.11b AP. Basically: one public IP address on wlan1, a default gateway, local DNS caching, RADIUS AAA, a public IP pool, a PPPoE server over wlan2 and, of course, some sort of traffic shaping. Such a simple setup is completely sufficient for people to surf the web well. Let's talk about queues and shapers.
Our first implementation: one simple queue per IP from the pool, each applied to a customer's IP address individually (no traffic priority at all); 1/3 of the pool at 256k/256k max-limit and the rest at 128k/128k only. Then you (the customer) see nice pings when doing nothing and nasty pings when the queue is full. Different queue types like pfifo and sfq change the ping responses you see, but they don't change the fact that when the queue is full, latency is high.
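For reference, that first implementation was just a flat list of per-customer simple queues, roughly like this (the addresses and names are made up for illustration; the target option name varies between ROS versions):

```
# Sketch of the first implementation: one simple queue per customer IP,
# no priority settings at all. Addresses are hypothetical examples.
/queue simple add name=cust-10.0.0.10 target-addresses=10.0.0.10/32 max-limit=256k/256k
/queue simple add name=cust-10.0.0.11 target-addresses=10.0.0.11/32 max-limit=128k/128k
```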
We noticed that some congested wireless APs showed high latency for all clients from time to time, even when the customer being measured was doing nothing. It also happened on non-congested APs, but in those cases it was clearly a radio issue on the customer side. Then we took a shot at priority, expecting it to solve the high latency, and it worked very nicely!
Our latest implementation: mangling different types of traffic, as you know: tcp syn, icmp, tcp 80, 25, 110, p2p and finally a default one. Using queue tree, one main queue for download on global-out and another for upload on global-in, each with five inner child classes labeled as follows: icmp, high_prio, mid_prio, low_prio and p2p. Each of these five inner classes has two or more leaf child classes. Most of the leaf classes are PCQ type with the rates mentioned above, including p2p and default traffic. Some PCQ types have zero as the rate.
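To give an idea of the structure (these marks, names and rates are illustrative examples, not our exact rules), the mangle + queue tree layout looks roughly like this:

```
# Illustrative sketch of the download side; upload mirrors it on global-in.
/ip firewall mangle add chain=forward protocol=icmp action=mark-packet \
    new-packet-mark=icmp passthrough=no
/ip firewall mangle add chain=forward protocol=tcp dst-port=80 action=mark-packet \
    new-packet-mark=http passthrough=no
# ... tcp syn, 25, 110, p2p and the default mark in the same fashion

/queue type add name=pcq-down kind=pcq pcq-rate=256k pcq-classifier=dst-address
/queue tree add name=download parent=global-out
/queue tree add name=down-icmp parent=download packet-mark=icmp priority=1
/queue tree add name=down-high_prio parent=download priority=2
/queue tree add name=down-http parent=down-high_prio packet-mark=http queue=pcq-down
```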
After this latest implementation latency was great, I mean, low: about 2ms, even with the customer surfing at 2Mbit. People can play online games and watch YouTube; they are happy. But there's some sort of imperfection, dunno; from time to time people still get high latency.
The total AP throughput is under those max-limit values, queues are green, wireless is OK, very good signal and CCQ, plenty of processor resources. Well...
I see that one station alone can surf at 5Mbit with no limits, but 40 stations don't raise the total throughput above 900kbit (due to wireless/protocol overhead, I think), and the average throughput this AP reaches with limits is 3Mbit. Let's call this 900k thing the "foot limit". We put about 100 customers per 802.11b access point in the access-list; about 60% of the list have radios associated and about 50% are PPPoE clients effectively connected (peak values). I thought that if many people wanted to transfer simultaneously, our latest implementation would solve the high latency. It didn't really, but it is the best implementation we have done so far. It seems that priority doesn't work well while the queues are green; but if I set the main download queue max-limit equal to the "foot limit" when congested, latency gets low and priorities get back to work. Since that is a fixed value, though, when the AP is not congested people should be able to surf at 2Mbit, not 900k.
Then I thought I could implement a script to measure this latency and dynamically adjust some queues' max-limit, i.e. periodically measure latency; if it is above "X", deduct "Y kbit" from queue "Z"'s max-limit; if below, add it back; and so on... like an AGC (automatic gain control).
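In pseudo-ROS-script the idea would be something like the sketch below. The queue name, thresholds and steps are made-up placeholders, and the measurement step is exactly the missing piece: getting the ping average into a variable, which is what I asked about above.

```
# Pseudocode sketch of the "AGC" loop, run from the scheduler.
# Not working ROS script as-is: [getAvgLatency] does not exist,
# it stands for the "average ping into a variable" problem.
:local lat [getAvgLatency]
# X: latency threshold in ms (example value)
:local threshold 100
# Y: adjustment step in bits/s (example value)
:local step 64000
:local cur [/queue tree get [find name="download"] max-limit]
:if ($lat > $threshold) do={
    /queue tree set [find name="download"] max-limit=($cur - $step)
} else={
    /queue tree set [find name="download"] max-limit=($cur + $step)
}
```

In practice the result would also need to be clamped between the "foot limit" and the uncongested 2Mbit ceiling so it doesn't drift out of range.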
What could we call this?
Adaptive Latency Immunity?
That's it, basically. I've made some pictures:
This is what the queue tree structure looks like:
This is a typical latency issue (green 0-200ms, yellow 201-500, red 501+):
I hope this post explains the idea or, perhaps, raises a better one.
If not, just ask for any other details you need.
Thank you for the attention.
Ozelo