NetBox 5 lab / real-life results

Hello, a few hours ago I received my first two NetBox 5 units (RB911G-5HPacD).
Let me share my results so far with them.

Both devices are close to the default configuration at the moment. Both are upgraded to v6.18 with the latest firmware.

First test conditions:
Devices are in two different rooms separated by a wall, with 1.5 dBi omni antennas attached to them.
band : 5GHz-only-ac
channel width : 20/40/80MHz Ceee
protocol : 802.11
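For reference, the conditions above should correspond to something like the following wireless configuration (a sketch only - the interface name wlan1 and the exact option spellings are assumptions; check /interface wireless print on your own board):

```
/interface wireless set wlan1 \
    band=5ghz-onlyac \
    channel-width=20/40/80mhz-Ceee \
    wireless-protocol=802.11
```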

The test is done between the boards - not perfect, I know, but the devices are currently in an office with only a 100 Mbit/s LAN attached.
netbox5-802.11-idle.jpg
netbox5-802.11-send.jpg
netbox5-802.11-receive.jpg
Later today I will test with nstreme and nv2. In a few days this will be mounted in production - an 8 km link with RocketDish antennas.

2nd test - everything is the same except:
protocol : nstreme
netbox5-nstreme-send.jpg
netbox5-nstreme-receive.jpg
I expected better throughput here, but it is worse than plain 802.11ac. Maybe I need to play with the frequencies a bit.

:laughing:
Man, hang them outside - indoor testing does not make much sense.

As I said, in a few days the boards will be outside on an 8 km link. I will post those results too.

Better result, again with 802.11; everything the same except a different frequency:
netbox5-802.11-send.jpg
Tested nv2 as well, but the results are bad - the throughput is very unstable. I will not be doing any more tests inside - with these antennas I cannot get more than a 520 Mbit/s wireless rate even though the signal is very good.

Great, and with which antennas?

The antennas I have and will be using are RocketDish 5G-30 (5 GHz).

A good choice could be the mANT30 instead of the RocketDish.

Cetalfio

Actually, “ac” is supported only by plain 802.11.
Nstreme and NV2 are not optimized for this new protocol.

It’s true - I have both, and the mANT is better made and has better performance.

Has anyone tested with the v6.19rc versions - any improvements there?
I did another test with nv2 this morning and got better results, but the throughput is still unstable. With 802.11 I get a very stable speed, but this is just a test - I am not sure it will manage to keep low latency under thousands of connections.

As can be seen, nothing changed in the wireless in the rc:

What’s new in 6.19rc12 (2014-Aug-21 11:53):

*) ippool - improve performance when acquiring address without preference;
*) partitions - copying partitions did not work on some boards;
*) bridge - added “Auto Isolate” stp enhancement (802.1q-2011, 13.25.5);
*) ipsec - when peer config is changed kill only relevant SAs;
*) vpls - do not abort BGP connection when receiving invalid 12 byte nexthop encoding;
*) dns-update - fix zone update;
*) dhcpv4 server - support multiple radius address lists;
*) console - added unary operator ‘any’ that evaluates to true if argument is not null or nothing value;
*) CCR - improved performance;
*) firewall - packet defragmenting will only happen with connection tracking enabled;
*) firewall - optimized option matching order with-in a rule;
*) firewall - rules that require CONNTRACK to work will now have Invalid flag when CONNTRACK is disabled;
*) firewall - rules that require use-ip-firewall to work will now have invalid flag when use-ip-firewall is disabled;
*) firewall - rules that have interface with “Slave” flag specified as in-/out-interface will now have Invalid flag;
*) firewall - rules that have interface without “Slave” flag specified as in-/out-bridge-port will now have Invalid flag;
*) firewall - rules with Invalid flags will now be auto-commented to explain why;
*) l2tp - force l2tp to not use MPPE encryption if IPsec is used;
*) sstp - force sstp to not use MPPE encryption (it already has TLS one);

The latest RC release has further improvements for 802.11ac and nv2 - please check with the new build.

I have upgraded to v6.19rc12. The devices will be installed at their location shortly, and I will post more results from real-life use.

Hello,
Below are my tests, done with two RB SXT G-5HPacD units on v6.18 with wireless-fp in PtP mode (configured via QuickSet as a WDS bridge).

Devices are installed on towers (one at 50 m height, the other at 45 m; perfect LOS, distance between them 0.85 km).

Tests were done from the AP side, because it is connected via fiber.

Tests were done with btest. I tested with the speed limited to 200 Mbit/s for send/receive/both.
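Those limited-speed runs can be started from the AP's console with the bandwidth-test tool - a sketch, assuming the station answers on 192.168.88.2 (an illustrative address) and btest is allowed there:

```
/tool bandwidth-test address=192.168.88.2 protocol=udp direction=transmit local-tx-speed=200M
/tool bandwidth-test address=192.168.88.2 protocol=udp direction=receive remote-tx-speed=200M
/tool bandwidth-test address=192.168.88.2 protocol=udp direction=both local-tx-speed=200M remote-tx-speed=200M
```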

This is the frequency usage picture from the AP side (the strange thing is that the AP is crashing in frequency scanning mode… probably a bug?):
freq.test.png
So I did tests on several frequencies, to be sure there was no influence from other links, and in superchannel mode.

  1. The frequency 5045 MHz (superchannel) was selected due to the noisy environment.


Tests for NV2 protocol
1a - protocol nv2

transfer to station
nv2-send.png
reception from station
nv2-receive.png
Duplex rate to station
nv2-both-200.200.png

I have noticed that there is no possibility to get 200 Mbit/s full duplex from this link,
so I reduced the rx rate to 100 Mbit/s.
nv2-both-200.100.png
IMO the problem is with the radio link, not a CPU limitation.
And what the hell - which CPU load measurement should I believe: /tool profile, or the Winbox resources window?
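For what it's worth, both numbers are also available from the console; /tool profile shows a per-process breakdown, while the resource output shows the overall cpu-load figure that Winbox displays:

```
# per-process CPU usage breakdown
/tool profile cpu=total
# overall cpu-load - the same value the Winbox Resources window shows
/system resource print
```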

Next, tests with Nstreme (polling enabled, framer policy: best-fit, framer limit: 4000)
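For reference, those Nstreme settings map to something like this (a sketch - assuming the wireless interface is wlan1):

```
/interface wireless nstreme set wlan1 \
    enable-nstreme=yes \
    enable-polling=yes \
    framer-policy=best-fit \
    framer-limit=4000
```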

TX
nstreme-send.png
RX
nstreme-receive.png
Note! This test was successful only on the 3rd attempt; the first ones ended with station disconnection.

Next ----- duplex on Nstreme

zyzelis, what are the conditions of the link?
CyberTod, did you already install the 8 km link?

Duplex 200/200 Mbit/s
nstreme-both-200-200.png
After the crashes it works, but poorly and without responding to pings from the station.

Here is duplex 200/100 Mbit/s:
nstreme-both-200-100.png
Quick note:
The only way to get results with the “both” test is to run the one-way tests first (receive and then send, or vice versa).

Normis, what do you mean by “conditions”?
I wrote the conditions three posts above…
Do you need some additional info?

Sorry, I missed that post. Now I see it.