The reason I bought them was to set them up in a ~300m bridge with the goal of achieving the lowest possible latency while maintaining 60Mbps downlink speeds. I was originally looking at some Ubiquiti NanoBridges, but one of the guys on their forum said he thought nstreme was the only thing that might achieve the latency I'm looking for. See original thread here:
Unfortunately, after setting them up my ping varies between 5 and 10ms. I'm sure we can get it down. I'm currently reading through the wiki, but can you point me toward the settings I should read about to start tuning for low latency?
I did some testing today with the following setup:
Laptop 1 -- Ethernet -- 5HPnD (station bridge) -- wireless link -- 5HPnD (bridge) -- Ethernet -- Laptop 2
Using nv2, when pinging from Laptop 1 to the bridge radio I saw pings between 20ms and 30ms, but only ~7ms from Laptop 2 to the station bridge. (Note: I may have this backwards, but I know for sure the behavior was different pinging one way vs. the other.)
Luckily, using nstreme provides much better pings both ways, usually around 0.7-0.8ms with occasional spikes. Any ideas on this behavior?
I will look into those settings. If it makes a difference, this bridge will be for a specific purpose, and mostly UDP packets will be sent across it. It will need to be able to handle TCP packets, but the latency on those doesn't matter. So my thinking is that, for UDP, anything having to do with ACKs can mostly be done away with?
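If we stick with nstreme, my plan is to experiment with the framing and polling settings, since framing aggregates packets (good for throughput, but it can add latency). A rough sketch of what I mean, assuming the wireless interface is wlan1 and with values that are starting points rather than recommendations:

# enable nstreme on the link (must match on both radios)
/interface wireless set wlan1 wireless-protocol=nstreme
# turn off frame aggregation; keep polling on for the PtP link
/interface wireless nstreme set wlan1 enable-polling=yes framer-policy=none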
Hello. Someone from MT support - perhaps Uldis - once wrote on the board that the HW retries setting has no effect if NV2 mode is selected.
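For reference, the setting in question sits on the wireless interface itself and is only visible with Advanced Mode in WinBox (a sketch; wlan1 and the value are assumptions):

# reportedly ignored when wireless-protocol=nv2
/interface wireless set wlan1 hw-retries=7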
So which is it?
Is it possible - and I have mentioned this several times before - that when you select NV2 or Nstreme, MT could simply grey out (or put a simple red X in the data field of) any setting not used by that wireless protocol, thus avoiding time wasted trying to adjust those settings?
Yes, as I mentioned above, Nstreme does seem to work much better than nv2, but I'm still trying to tune it as much as possible, if anyone has more suggestions.
Hate to be this guy - some of the other operators in our country are reporting latency of 1-2ms on UBNT's AirMax stuff - so one supposes it's possible on MT as well (being TDD).
We have reasonable results from NV2 in P2P - around 5ms stable but that is as low as we have ever seen.
23 km link, nv2, RB800 - RB800, noisy environment. CCQ is 91/58% TX/RX, -62/-62 dBm, SNR 52. R5hn + UBNT Rocket dishes (and yes: align, align, align... and more align on the antenna).
Up till now we have aligned the PtP for max CCQ and best signal, even reducing data rates to increase CCQ, and that pinging was done beyond both sides of the PtP; now I must try pinging just the PtP /30 addresses and see if I can reduce the ping time.
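For anyone repeating this, pinging the far end of the PtP /30 directly from the near radio would look something like the following (the address is a made-up example):

/ping 172.16.0.2 count=100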
I spent an hour adjusting and could not reduce ping times. The first thing I noticed is that the number of packets sent by NV2 is much higher: 380 vs. 140.
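(For comparison, per-interface packet rates can be watched with something like this, interface name assumed:

/interface monitor-traffic wlan1 once

which prints rx/tx packets per second.)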
My personal conclusion is that NV2's high pings may be caused by CPU load spikes. I base this on another observation: when I tried to set up a script to reboot a board on high CPU (http://forum.mikrotik.com/t/a-script-to-calculate-average-cpu-load/20573/1), the test ran perfectly under low CPU load and all of the samples were shown in the log. Running the same test under high load (a TCP bandwidth test to localhost, 127.0.0.1), it would not run fully, and most of the time only 2 of the 5 samples were in the log, so the script could not establish an average while the CPU was running at 100%. (I could have reduced the number of samples, but did not want false alarms.) So if a script cannot run fully, what else can be affected?
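As an illustration, here is a minimal sketch of the kind of sampling loop I mean (my own sketch, not the exact script from the linked thread; it takes 5 cpu-load readings one second apart and logs each):

# sample cpu-load 5 times, one second apart, logging each reading
:local sum 0
:for i from=1 to=5 do={
    :local load [/system resource get cpu-load]
    :set sum ($sum + $load)
    :log info "cpu sample $i: $load%"
    :delay 1s
}
:log info ("average cpu over 5 samples: " . ($sum / 5) . "%")

In my test, it was log entries like these that stopped appearing once the CPU hit 100%.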
My thought: it would help if the max CPU load could somehow be limited to 99% rather than 100%, if that can be achieved.
n21roadie means that options that are not available in a certain mode should be greyed out, as I also mentioned before. It creates confusion; you yourself became a victim of it just now!