I've configured a 100 m point-to-point transparent link with Nstreme and obtained a stable 45 Mbps of real TCP throughput.
When moving to Nv2 I get an unstable 15-18 Mbps. What am I missing?
I'm using a 2x2 802.11n setup on an RB433AH with R52Hn and RouterOS version 5.5.
Do you think it is possible to get a real 70 Mbps TCP in this situation?
Have a nice Sunday.
f.
need more info, equipment, signal on both sides, etc.
Thanks for your answer, Normis; I'm honored to talk with you.
Find attached the Bridge and Station configurations, plus the registrations from both sides.
The hardware is RB411AH with R52Hn; the boards are housed in a waterproof metallic enclosure with two N-type RF connectors. Each connector feeds a 19 dBi single-polarization flat panel: one panel is vertically polarized, the other horizontally. The two panels are mounted on a mast with about 60 cm between their centers.
The link distance is more or less 80 meters. The current transmit power is set to 0 dBm on both sides.
I've played with Nv2 before; with Nv2 I saw remarkable throughput testing from RB433AH to RB433AH (about 70 Mbps TCP and a little less than 200 Mbps UDP), but poor TCP/UDP performance between two XP PCs connected directly to the RB433AH Ethernet ports.
Moving to Nstreme, I get the same performance (45 Mbps TCP / 80 Mbps UDP) regardless of the pair of test devices (PCs and/or RB433AHs), but it is lower than with Nv2. All PC tests were performed with Bandwidth Test for Windows and Connection Count = 1.
Hope this helps clarify my scenario.
Now my questions are:
a) What is wrong in my configuration? Keep in mind this is the first time I've dealt with 802.11n and MIMO on MikroTik RouterOS…
b) What is the best I can get from such an installation?
Thanks in advance.
Registrations.txt (2.44 KB)
Station-Config.txt (4.63 KB)
Bridge-Config.txt (4.57 KB)
Check with Normis whether 0 dBm is a valid level; I didn't think MT cards could be configured that low!
(This was an expensive option; did you not consider the SXT?)
Yes, I did. Unfortunately I had some external constraints that forced me to use flat panels as small as possible (no dishes or dual-polarization panels). Also, I need two Ethernet ports, one for data and one for management, to allow for LACP aggregation. Once these issues are fixed, I have been commissioned to add a second link aggregated through LACP. Any experience with this?
We had some links working at -2 dBm,
so it has to be a valid level.
(0 dBm is 1 mW)
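The dBm-to-milliwatt relation behind that remark, as a quick Python sketch (the -2 and 6 dBm values are the levels mentioned in this thread):

```python
def dbm_to_mw(dbm: float) -> float:
    """Convert power in dBm to milliwatts: mW = 10^(dBm/10)."""
    return 10 ** (dbm / 10)

print(dbm_to_mw(0))   # 1.0 mW, as stated above
print(dbm_to_mw(-2))  # ~0.63 mW
print(dbm_to_mw(6))   # ~3.98 mW
```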
Raise both sides to 5 or 6 dBm. The amplifier circuitry doesn't work right at such low power with the R52Hn.
This is my 20 km link:

Everything is at defaults; only the data rates are set manually.
This is actual Internet traffic, but it only lasts 5-10 minutes (I don't know when the data rates will drop below 104 Mbit; in the photo the tx/rx rates are currently 216/216).
Please read about my problem:
http://forum.mikrotik.com/t/tx-rx-rates-drops-after-5-10min-using-nstreme/49023/1
Please, if someone can help me.
DrLove73, try with Nstreme and see what your data rates are then.
I get very bad results with Nv2 (25-80 Mbit going up and down, even though my data rates were 270/300 Mbit); I don't know what it is.
I asked the forum whether there is a special way to set up Nv2, but got no answer.
I get good results with Nstreme, but I have a problem; read the link.
Tnx
Posting the latest updates on my battle with the 100 m PtP link…
I've noticed that moving from Nstreme to Nv2 had two main consequences:
- the round-trip time increased from less than 2 ms to about 12 ms (why? I figure this is due to the TDMA nature of the link…)
- the bridge-to-bridge bandwidth, tested with Bandwidth Test in TCP mode, increased from 40/45 to 70 Mbps (congratulations to MikroTik for this improvement!)
I remembered that increasing the bandwidth and latency of a link can lead to TCP window size issues… some numbers:
Nstreme: 45 Mbps × 2 ms = 45,000,000 bit/s × 0.002 s = 90,000 bit = 11,250 bytes
Nv2: 70 Mbps × 12 ms = 70,000,000 bit/s × 0.012 s = 840,000 bit = 105,000 bytes
I like to imagine that these numbers represent the amount of data the link can "buffer". This "link buffer" does not affect connectionless protocols (it simply adds some delay) but has a great impact on connection-oriented transmission such as TCP, where the endpoints need mutual confirmation of the transmitted segments. For maximum TCP performance, the transmitting device must completely fill the "link buffer" before stopping to wait for an acknowledgement; if it is not completely filled, some useful transmission time goes unused and performance drops. The amount of data TCP sends before waiting for an acknowledgement is the TCP window size.
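The arithmetic above is the classic bandwidth-delay product; as a minimal Python sketch of the same calculation (the rates and round-trip times are the values measured on this link):

```python
def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the data 'in flight' on the link, in bytes."""
    return rate_bps * rtt_s / 8

# Measured values from the two wireless protocols
nstreme = bdp_bytes(45_000_000, 0.002)  # 45 Mbps, ~2 ms RTT
nv2     = bdp_bytes(70_000_000, 0.012)  # 70 Mbps, ~12 ms RTT

print(f"Nstreme BDP: {nstreme:,.0f} bytes")  # 11,250 bytes
print(f"Nv2 BDP:     {nv2:,.0f} bytes")      # 105,000 bytes
```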
I've tried Nv2 with two different tests (iperf.exe on Windows XP):
- iperf -c 172.16.0.109 → the default TCP window size (65536 bytes) is used → I got barely 20 Mbps, the same sad result, compared to a 300/300 Mbps data rate;
- iperf -c 172.16.0.199 -w 800k → an 800,000-byte TCP window size is used → I got an incredible 90 Mbps!!!
Starting a second test in the opposite direction, I got 60 Mbps, for a total aggregate wireless TCP throughput of 150 Mbps, which is really good compared to the 300/300 data rates. I consider this a truly remarkable result, another proof of the excellent work done at MikroTik.
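One way to see why the default window capped the first test: a single TCP stream cannot move more than roughly one window per round trip. A quick sketch under that assumption, using the ~12 ms RTT measured above (the observed throughput can be lower still, since other factors also play a role):

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on single-stream TCP throughput: one window per round trip."""
    return window_bytes * 8 / rtt_s / 1e6

# ~12 ms RTT measured on the Nv2 link
print(max_tcp_throughput_mbps(65536, 0.012))    # ~43.7 Mbps ceiling with the default XP window
print(max_tcp_throughput_mbps(800_000, 0.012))  # ~533 Mbps: the window is no longer the bottleneck
```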
Now my question is: how can I manage this situation? In my scenario it is really impossible to tweak the TCP settings on the various hosts in the network. Is there a way to solve the problem? Is there something wrong in my wireless configuration that leads to this TCP window size issue?
I really think I need some help…
f.
p.s. LACP aggregation works perfectly!