Best performance under load?

Hi,

I have just set up part of a network using RouterOS on PCs (Celeron 600) and CM9 a/b/g cards. The scenario looks like this:

Net – Airca8PRO (AP) – 5 km (5500 MHz) – CM10 (client) ROS1 CM10 (AP) – 4 km (5180 MHz) – CM10 (client) ROS2

When I test both links separately they are OK (24 Mbps). We constantly monitor our net by sending 10 fast 1450-byte packets every 2 minutes. Currently the network is very lightly loaded (maybe just those pings). If I leave everything as is, I get packet loss of up to 50-60% on the monitoring pings. When I run BTtest between ROS1 and ROS2 I get 0% packet loss. Sorry if that is a lame question, but the whole thing just does not make much sense to me.

Thanks for your insight. Tom

BTW, it’s all just simply routed, no OSPF, shaping or other stuff.
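For reference, the kind of burst probe the monitoring does can be sketched in plain Python over UDP loopback. This is only an illustration of the probe pattern (10 back-to-back 1450-byte packets, count the replies); the loopback echo responder stands in for the real routers, and it is not the actual monitoring tool:

```python
import socket
import threading

PROBES, SIZE = 10, 1450  # matches the monitoring burst described above

def echo_server(sock):
    # reply to each incoming datagram with the same payload
    while True:
        try:
            data, addr = sock.recvfrom(2048)
        except OSError:
            return
        sock.sendto(data, addr)

# stand-in for the remote router: a UDP echo responder on loopback
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(1.0)
cli.connect(srv.getsockname())

# fire the whole burst back to back, then collect replies
sent = received = 0
for _ in range(PROBES):
    cli.send(bytes(SIZE))  # one 1450-byte probe
    sent += 1
for _ in range(PROBES):
    try:
        cli.recv(2048)
        received += 1
    except socket.timeout:
        break

loss_pct = 100.0 * (sent - received) / sent
print(f"{received}/{sent} replies, {loss_pct:.0f}% loss")
```

Over loopback this sees no loss; on the wireless path the same burst pattern is what shows the 50-60% drops.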

What are you pinging, and from where? If you are polling and send a ping burst (or flood), packets can get lost on a lightly loaded network.

I am pinging ROS1 and ROS2 from Net. Yes, they are short bursts. BUT - here is some data:

Net - ROS2 without load

1458 bytes from 10.2.21.1: icmp_seq=998 ttl=59 time=25.8 ms
1458 bytes from 10.2.21.1: icmp_seq=999 ttl=59 time=20.9 ms
1458 bytes from 10.2.21.1: icmp_seq=1000 ttl=59 time=20.4 ms

--- 10.2.21.1 ping statistics ---
1000 packets transmitted, 899 received, 10% packet loss, time 38119ms
rtt min/avg/max/mdev = 19.479/35.192/196.361/19.200 ms, pipe 7, ipg/ewma 38.157/28.576 ms
gw:~ #

Net - ROS2 with BTtest between ROS1 and ROS2

1458 bytes from 10.2.21.1: icmp_seq=998 ttl=59 time=41.9 ms
1458 bytes from 10.2.21.1: icmp_seq=999 ttl=59 time=46.1 ms
1458 bytes from 10.2.21.1: icmp_seq=1000 ttl=59 time=84.2 ms

--- 10.2.21.1 ping statistics ---
1000 packets transmitted, 999 received, 0% packet loss, time 62403ms
rtt min/avg/max/mdev = 38.423/52.016/119.428/9.176 ms, pipe 3, ipg/ewma 62.466/53.065 ms

Net - ROS1 with BTtest between ROS1 and ROS2

1458 bytes from 10.2.20.1: icmp_seq=998 ttl=60 time=18.5 ms
1458 bytes from 10.2.20.1: icmp_seq=999 ttl=60 time=22.8 ms
1458 bytes from 10.2.20.1: icmp_seq=1000 ttl=60 time=17.5 ms

--- 10.2.20.1 ping statistics ---
1000 packets transmitted, 998 received, 0% packet loss, time 29653ms
rtt min/avg/max/mdev = 17.420/22.277/85.892/6.790 ms, pipe 4, ipg/ewma 29.682/20.549 ms

Net - ROS1 without load

1458 bytes from 10.2.20.1: icmp_seq=997 ttl=60 time=19.6 ms
1458 bytes from 10.2.20.1: icmp_seq=999 ttl=60 time=61.4 ms
1458 bytes from 10.2.20.1: icmp_seq=1000 ttl=60 time=39.1 ms

--- 10.2.20.1 ping statistics ---
1000 packets transmitted, 895 received, 10% packet loss, time 36913ms
rtt min/avg/max/mdev = 17.497/31.479/216.114/25.139 ms, pipe 9, ipg/ewma 36.950/28.854 ms

As you can see, this is 1000 1450-byte packets using an adaptive ping rate. I consider that quite a good load for testing. Still the effect remains. I just don't get why running a bandwidth test between ROS1 and ROS2 influences the link from Net to ROS1?
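The loss figures in the ping summaries above follow directly from the sent/received counts, which is a quick way to sanity-check them:

```python
def loss_pct(sent, received):
    """Packet loss as a whole-percent figure, like ping's summary line."""
    return round(100 * (sent - received) / sent)

# counts taken from the ping summaries above
print(loss_pct(1000, 899))  # unloaded Net - ROS2: 10
print(loss_pct(1000, 999))  # loaded Net - ROS2:    0
print(loss_pct(1000, 895))  # unloaded Net - ROS1: 10
```

Roughly 100 of every 1000 packets vanish on the idle link, and effectively none once the link carries traffic.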

OK, I guess I get it now - when I run BTtest between ROS1 and ROS2, Winbox is running from the Net, which means there is some 10 kbps or so of stable flow even on the first link, which is enough to make things right. Still, this behavior is something I was not used to on 802.11b links or even a WSD 802.11a bridge. Is there something I can tune to get rid of it / minimize it? Other than constantly loading the links? Except for the obvious answer - get some customers to generate the load :wink:
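One workaround, absent real customer traffic, is a small constant trickle of packets toward the far end so the link never goes fully idle. A minimal sketch, assuming a harmless UDP target; the address, port, rate, and 1-second run limit here are all placeholders (in practice you would point it at the far router and run it continuously):

```python
import socket
import time

TARGET = ("127.0.0.1", 9)  # placeholder: aim at the far router in practice
INTERVAL = 0.1             # 100 ms between packets, i.e. a few kbps of keepalive
PAYLOAD = b"\0" * 64       # small datagram, just enough to keep the link busy

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 0
deadline = time.monotonic() + 1.0  # run for 1 s here; run indefinitely for real
while time.monotonic() < deadline:
    sock.sendto(PAYLOAD, TARGET)   # fire-and-forget; no reply expected
    sent += 1
    time.sleep(INTERVAL)
print(f"sent {sent} keepalive packets")
```

A few kbps of steady flow is in the same ballpark as the Winbox traffic that appeared to stabilize the first link.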

Are you running Nstreme? Any polling will behave like this when the link is idle and gets a lower priority.

Just a thought, but could there be an issue with the routing tables getting cleared? As you are sending a fast burst, it might be just the first set of packets that are getting dropped while the route is determined. This can happen on any router.

If you are using polling, turn it off; it is not necessary for point-to-point links. We often saw dropped packets or intermittent high ping times on Wave Rider gear, and it was simply the way the polling worked vs. the timing of the ping packets.
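For what it's worth, polling on RouterOS wireless links is controlled through the Nstreme settings, so something along these lines should turn it off on both ends (the interface name is a placeholder, and parameter names may differ between RouterOS versions, so check yours first):

```
# inspect the current Nstreme/polling settings
/interface wireless nstreme print

# disable polling on the link interface (wlan1 is a placeholder name)
/interface wireless nstreme set wlan1 enable-polling=no
```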

erik

I had polling turned on, but turning it off didn't make a difference. Still the same behavior when there is no load… On the link between ROS1 and ROS2 I do use Nstreme.

As far as routing is concerned, I am just using static routes, so there should be no routing table manipulation going on (at least I hope OSPF is not kicking in).