More hops and low throughput in one TCP connection

If you have more hops on NV2 and you test using speedtest (your customer does; speedtest makes one TCP connection) or btest (one direction, TCP, 1 connection), your throughput will be at most 15 Mbit/s.
But when you create more TCP connections (20 is the default), the throughput in my case is 50 Mbit/s.
Why is it like this? (I did the same on a UBNT link, and there: with 20 TCP streams 45 Mbit/s, with 1 TCP stream 38 Mbit/s.)

This can be bad when your customer has paid for higher speeds… and tests it using one connection…

Rumors say it is caused by TCP timing / the adaptive tcp_window_size mechanism. A TCP stream tries to find the proper value for the window size (which allows it to send data without acknowledging each packet). Since the NV2 delay is relatively high, the actual TCP window size for a single TCP connection stays low, so the transfer rate is low too (because the stream waits for an ACK too often).
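The window/latency relationship described above is just the bandwidth-delay product: one TCP stream can carry at most roughly window_size / RTT. A minimal sketch (the 64 KiB window and the RTT figures below are illustrative assumptions, not measurements from this thread):

```python
def max_tcp_rate_mbit(window_bytes: int, rtt_s: float) -> float:
    """Upper bound for a single TCP stream: window / round-trip time."""
    return window_bytes * 8 / rtt_s / 1e6

# Classic 64 KiB window (no window scaling in effect):
window = 64 * 1024

# Low-latency link, assumed ~5 ms RTT: the window is not the bottleneck.
print(round(max_tcp_rate_mbit(window, 0.005), 1))  # ≈ 104.9 Mbit/s

# High-latency multi-hop NV2 path, assumed ~35 ms RTT:
print(round(max_tcp_rate_mbit(window, 0.035), 1))  # ≈ 15.0 Mbit/s
```

Note that 20 parallel streams each get their own window, which is why the aggregate test reaches 50 Mbit/s while a single stream stalls near 15 Mbit/s.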

Is there any possible solution for this? Is this also a problem on UBNT devices (AirMax)?

If this really is a problem of a too-small TCP window, then it is caused by the host-to-host delay. If NV2 contributes a big portion of the delay, the only way is to lower the NV2 delay. If you can, try to avoid multiple NV2 links in the chain; i.e. use NV2 only on the 'last mile', on the AP.
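The advice above can be made concrete: delays of chained NV2 links add up, and the single-stream cap shrinks in proportion. A rough sketch, assuming a hypothetical ~10 ms of RTT added per NV2 hop and a fixed 64 KiB window:

```python
def single_stream_cap_mbit(window_bytes: int, rtt_s: float) -> float:
    """Bandwidth-delay-product limit for one TCP stream."""
    return window_bytes * 8 / rtt_s / 1e6

WINDOW = 64 * 1024          # assumed window, no scaling
PER_HOP_RTT = 0.010         # assumed RTT contribution per NV2 hop

for hops in (1, 2, 3, 4):
    rtt = hops * PER_HOP_RTT
    print(hops, "hop(s):", round(single_stream_cap_mbit(WINDOW, rtt), 1), "Mbit/s")
```

With these assumed numbers the cap halves with every doubling of the hop count (≈52 Mbit/s at one hop, ≈13 Mbit/s at four), which is the reasoning behind keeping NV2 only on the last mile.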

Some people recommend using a 'TCP optimizer' – a utility you run on the customer's PC that tweaks the Windows TCP stack settings and makes the stack behave better on higher-latency connections. I don't think that is the proper or best way.

It would be fine if Nstreme worked properly with multi-chain and 802.11n…

Yes, I also hope Nstreme will work properly with multi-chain and on N cards…
NV2 is for APs… (too big a latency in comparison with Nstreme or dual Nstreme)