I changed out the dishes on my 1/4 mile link to 29dBi parabolics (2ft). Each end has 60ft of LMR400. It's running on a 1.8GHz Celeron with MT 2.8.21, AR5213 cards, in 5ghz-turbo mode. The goal for this connection was absolutely the fastest speed, with total reliability. This link is actually redundantly load balanced (multi-gateway).
I ran it in Nstreme, with polling enabled and dynamic frame size. I ran the UDP test and got 82Mbps, and the TCP test and got 65Mbps. So this confirms, in a real RF environment, what I believe Stephen Patrick did in a lab environment. Performance was similar on both links.
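For reference, a minimal sketch of the Nstreme settings described above. The interface name `wlan1` is an assumption, and this is 2.9-era console syntax - it may differ slightly on 2.8.21:

```
# Enable Nstreme with polling and dynamic frame sizing on the wireless
# interface ("wlan1" assumed - check /interface wireless print for yours).
/interface wireless nstreme set wlan1 enable-nstreme=yes \
    enable-polling=yes framer-policy=dynamic-size
```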
Now, for some strange reason, when I first establish the link, it oscillates on and off over and over with Nstreme enabled. Without Nstreme, it connects instantly. If I then turn on Nstreme, it will work. My signals are in the -50 area. If I disable the interface after it works, and enable it again, it starts oscillating.
The other strange thing is that after I disable the station interface to test a broken link (failover), it loses IP connectivity until I disable/enable the AP side. That could be worked around with a netwatch script (on the AP side), but I'd rather avoid it if I could.
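If the workaround does turn out to be necessary, a netwatch entry along these lines could bounce the AP interface when the far end stops answering pings. This is only a sketch - the far-end address and the interface name `wlan1` are assumptions, not taken from the actual config:

```
# Ping the station side of the link; if it goes down, bounce the AP's
# wireless interface ("wlan1" assumed) to force the link to re-establish.
/tool netwatch add host=192.168.10.2 interval=10s timeout=1s \
    down-script="/interface wireless disable wlan1; :delay 2; /interface wireless enable wlan1"
```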
Each side has its own subnet:
192.168.1.0
192.168.2.0
Each link has its own subnet:
192.168.10.1 - 192.168.10.2
192.168.11.1 - 192.168.11.2
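With that addressing, the "redundantly load balanced (multi-gateway)" setup could be expressed as an ECMP route - RouterOS load-balances when a route lists multiple gateways. A sketch for one side, assuming the far-end link addresses above (the actual config may differ):

```
# On the 192.168.1.0/24 side: reach the other building's subnet via
# both wireless links, equal-cost (per-connection load balancing).
/ip route add dst-address=192.168.2.0/24 \
    gateway=192.168.10.2,192.168.11.2
```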
But for some reason netwatch keeps thinking the link is down even though the wireless reconnects.
Nice to see such good results!
(Yes, these are pretty much what we get in the lab, though 72Mbps TCP was achieved; there could be other setup differences. Also, I think TCP performance is more likely to be "distance related" than UDP, as there are more packets flying in the "back" direction.)
What RouterOS version was used for this test?
Were you using WDS, and bridged ethernet/wireless interfaces?
(or was it a routed connection?)
29dBi’s for a 400m link ! Crumbs - that should give a superb link margin.
Using Nstreme does require higher quality received RF signals than without, though -50 is quite respectable.
Does changing the frequency alter things?
Well, my dishes are 5.3GHz - I only have 3 choices for frequency.
Yes, it is a routed connection, AP/station. It's routed because this wasn't purpose-built for just a single link: it provides dual redundant load-balanced links. It is also a wireless "hub" that not only receives an incoming internet signal (and NATs it), but each end also transmits to another building (4 buildings in all), each on 192.168.1.x, 192.168.2.x, 192.168.3.x, 192.168.4.x.
But as far as back-and-forth performance, I did a both-directions test and got 36Mbps of TCP each way, and about 39Mbps of UDP. And that was each link individually, not dual-Nstreme.
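For anyone wanting to reproduce a both-directions run like the one above, the built-in tool can be started from the console. The target address here is assumed to be the far end of one link, per the subnets listed earlier:

```
# Simultaneous transmit + receive TCP test to the far end of one link.
/tool bandwidth-test address=192.168.10.2 protocol=tcp direction=both
```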
Link margin was the reason for the 29s. We originally had 19dBi panels, went to 24dBi panels, and also tried 26dBi grids. This link has to be high speed even in the worst conditions - it's not a backhaul for an ISP, it's a corporate network link.
Also, I forgot to mention I had AES encryption enabled on the link. That may be why it ran slower, but I don't consider this "slow" by any means.
Hey good feedback.
BTW what CPU utilisation was it showing?
Any other tasks other than wireless P2P that the router is doing?
Also interesting that AES wasn’t slowing things up much.
I was running the bandwidth test ON the system, and it was running around 50-60% CPU. I'm sure if I ran it on another computer it would be less, but why bother with all that if it's not the limiting factor?
These were bridged with WDS, not routed.
Here we were running bandwidth test through the router, in your case it would be eating up the rest - and you have more CPU power anyway (and more heat too!)
And the heat is the reason for needing bigger dishes - we had the luxury of rack-mounting the computers in the server room, in nice 2U rackmount cases, but that means long cable runs. Unfortunately, one building's third-floor server room was 50ft from the roof conduit. The other building was a single story with a 50ft tower, so that's the other side's cable length (plus 10ft for getting from the tower to the server).
These servers were nice though. In the back we drilled 4 holes and fitted N-female connectors. It looks more like a wireless appliance than a server - like a Cisco router, but black. One connector receives our wireless internet signal from the tower, one connects to an adjacent building, and the other 2 are the 1/4 mile equal-cost redundant link between the 2 buildings. Each side has its own internet feed from our tower.
I would like to look into getting some fast boards I could mount in a sealed NEMA enclosure so I can do the same thing with only 2 to 3 feet of antenna cable (or even just 8-12" N-male pigtails if I could mount a panel antenna on the NEMA enclosure itself)
I hope the RB533 boards are fast enough to do these super-speed Nstreme links…
Yup you are right it’s a complete nightmare using consumer boards.
Basically:
EPIA-V does not work properly in any BIOS version we used - you get <10Mbps throughput. I know other users have EPIA-Vs in use with differing results, but not ~70Mbps throughput, so please correct me if I'm wrong.
EPIA-Ms provide the throughput, but at max speed there are "lock-ups" which halt MT completely. I think that is a PCI bus lock-up, either a function of the BIOS or of the VIA chipset.
Pentium Mobile boards: absolutely perfect, wonderful performance from the one we have, but there is more heat than VIA, and much more $$!
Custom VIA boards: we have currently solved the "lock-up" problem, but there is more testing and development to do.
Our goal is "outdoor grade" weather-sealed hardware, which is not achievable using off-the-shelf motherboard hardware. Others, please do comment with your experiences.
If anyone knows more on motherboards, please comment. Anyone who is interested in knowing more about what we are doing at CableFree, contact me off-line. Note we don't sell boards - we are an MT OEM and sell complete systems. Not a sales pitch, please don't flame.