RB SXT G-5HPacD and RB921UAGS-5SHPacD MultiHop TCP Bandwidth

Hello everybody,

we set up a link with the new AC products like this:
network.png
So there are 4 wireless links in total between the different areas. If we test each wireless hop individually we get above 100 Mbit/s TCP, but over the whole path from br05rtr01 to bg03rtr01 only about 30 Mbit/s TCP.
At the moment we are using the nv2 wireless protocol with static WDS, and the network is fully routed with OSPF and MPLS on each router, SXT and NetMetal.

Can you tell me what settings we should check to get more throughput? Is this some timing issue, or how can I improve it?

What about:
Interface Queues = default at the moment
Ethernet Flow Control = both tx/rx off
MTU = L2 MTU on all devices is min. 1600
Wireless Channel Width = 80 MHz at the moment, using AC
Data Rates = default
NV2 TDMA Settings = default
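
For reference, these are the settings I mean, roughly as they look on the RouterOS CLI (a sketch only - the interface names `wlan1`/`ether1` are placeholders for your own):

```
# Queue type currently assigned to each interface (default right now)
/queue interface print

# Ethernet flow control, off in both directions
/interface ethernet set ether1 tx-flow-control=off rx-flow-control=off

# L2 MTU of min. 1600 for MPLS
/interface ethernet set ether1 l2mtu=1600

# 80 MHz AC channel, nv2 protocol, TDMA settings at default
/interface wireless set wlan1 channel-width=20/40/80mhz-Ceee \
    wireless-protocol=nv2
```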

Signals are around -40 to -60 dBm and CCQ is about 70%.

Thanks for your info.

Regards
Andy

Hi Andy,

There are a few things you should check to help debug this.

  1. Test bandwidth from the CCR at the green site to each hop in turn to see where the throughput takes a dive.
  2. You may be getting self-interference between two radio hops, e.g. in the red zone, the blue/red link may be interfering with the red/yellow link as they will both be transmitting at the same time. Make sure you have plenty of isolation (either in distance or channel separation).
  3. Are ALL devices shown in this diagram? e.g. are there any switches that are not shown? Some switches with small or no buffering can cause lots of packet loss when moving packets between different-sized links.
  4. Have you tried limiting the bandwidth test at the source? i.e., rather than letting it find the max, try setting the speed limit to, say, 50 Mbps and see what happens. If congestion is happening inside the network it may be uncontrolled and causing overall bad performance due to unexpected packet loss.
  5. Check the CPU on each RB while the test is running. Remember that a TCP bandwidth test running from an RB will max out the CPU very quickly. If one of the hops is running high CPU that will also impact performance. Ideally use a laptop at either end to run the bandwidth test; at worst, use UDP. TCP running from the RB will not give you the speeds you expect.
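
For points 1 and 4, the built-in test can be run against each hop in turn and rate-limited at the source, something like this (the address and credentials are placeholders for your own):

```
# Test one hop at a time to find where throughput drops
/tool bandwidth-test address=10.0.1.2 protocol=tcp direction=both \
    user=admin password=""

# Cap the test below the expected maximum to check for congestion effects
/tool bandwidth-test address=10.0.1.2 protocol=tcp direction=transmit \
    local-tx-speed=50M user=admin password=""
```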

Let us know how you get on.

Rich

This is an old problem with NV2. Search for some older threads covering this issue. It is a problem MT has not solved for a long time now. The only solution is to use other products for backhaul.
It does not affect aggregate bandwidth, but it does affect the single-user experience.

Make sure to use the same fixed TDMA settings on all hops.
Have a configuration like this:
Station ↔ (AP + AP) ↔ (Station + Station) ↔ (AP + AP) etc.

Use minimal power settings.

Use only modulation rates where you achieve 95-100% CCQ.
Try a different frequency, e.g. 10-20 MHz above or below.
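
A sketch of that kind of tuning on a 6.x wireless interface named `wlan1` (the exact values here are examples, not recommendations - pick what matches your links):

```
# Same fixed TDMA period on every hop (value in ms)
/interface wireless set wlan1 wireless-protocol=nv2 tdma-period-size=2

# Minimal TX power that still gives a clean signal (10 dBm is an example)
/interface wireless set wlan1 tx-power-mode=all-rates-fixed tx-power=10

# Restrict data rates to the MCS indexes that hold 95-100% CCQ on this path
/interface wireless set wlan1 rate-set=configured \
    ht-supported-mcs=mcs-0,mcs-1,mcs-2,mcs-3,mcs-4,mcs-5

# Shift the channel 10-20 MHz to dodge interference (5180 is an example)
/interface wireless set wlan1 frequency=5180
```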

Good luck!

Hi Ste,

I had a search for what you’re referencing - presumably you’re talking about issues like this: http://forum.mikrotik.com/t/nstreme-problem/82783/1

Obviously there’s an issue with some variant of configuration, but I have to say I haven’t experienced this myself. Maybe it only occurs with 3+ hops, but I tried this out just today on a 2-hop NV2-to-NV2 link and had no issues with throughput using 1 TCP stream. Tested with both bridged and routed setups on v6.22 throughout. Admittedly I only had RBs to test with and (as expected) the CPU maxed out when running a TCP bandwidth test, but I was happily passing circa 80 Mbps when the CPU hit 100% over a 2-hop path that I know can only achieve 95 Mbps max (one device is an SXT Lite5 with 100 Mbps ethernet). Exact setup was SXT > wireless > SXT > wired > SXT > wireless > SXT.

This said, I have the two SXTs in the middle connected via an RB260GSP, and for a while I was seeing collisions on a supposedly full-duplex link along with the expected poor performance under load. A reboot of the RB260 fixed that, and afterwards everything ran at full speed. TBH, there seems to be a lot of instability in the low-end switch products when it comes to mixed 100 Mbps/1 Gbps environments, so this doesn’t surprise me too much.

Like I say, there’s obviously a variant of config that is problematic (as you’ve experienced), but I don’t think NV2 is automatically to blame. Andy should still run through a proper diagnostic process, and if it does turn out to be NV2, then maybe an analysis of the config would be in order to see how it differs from that of users who don’t have any issues.

Andy, FWIW I’ve also had this running at good speeds with much the same setup - OSPF+MPLS+VPLS with PPPoE running over the top. Only 2 wireless hops, both running NV2. No WDS (I don’t believe you need it if you’re running MPLS or routing).
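
In case it helps, the VPLS part of a setup like that is only a few lines on ROS 6.x - a sketch, with the peer address and IDs as placeholders (LDP between the loopbacks must already be up):

```
# VPLS pseudowire to the remote router's loopback
/interface vpls add name=vpls-to-remote remote-peer=10.255.0.2 vpls-id=1:1

# Bridge the pseudowire with the local customer-facing port
/interface bridge add name=br-customer
/interface bridge port add bridge=br-customer interface=vpls-to-remote
/interface bridge port add bridge=br-customer interface=ether2
```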

There was some talk about NV2 not being “tuned” for AC yet - I’m not sure of the status on that. My setup has some AC devices, but they’re connected to SXT-Ns, so everything runs in N-only mode. I think you need to at least find the first place where the speed starts to drop - it will reduce the number of parts you need to debug.

Rich

I don’t see how this can be an nv2 problem… do you think nv2 knows that you use multiple hops?
In my experience this is always an interference problem caused by little or no distance between antennas. Mounting a station and an AP/bridge on the same mast will always be a problem.

It adds up with every hop. More hops give slower TCP speed, while UDP speed stays at the speed of the slowest link in the chain.
It was not there with Nstreme. I see it at different sites under different wireless conditions. Replacing a link with other gear helps.
I guess it has to do with how nv2 queues/aggregates packets. I did extensive tests, changing all parameters including frequency, over a long period of time and across ROS versions.

Hi,

We tested 802.11 instead of nv2 and the speed almost doubled. Now I get around 50-60 Mbit/s while using only 1 TCP connection. With NV2 I had to use multiple connections to get the full speed. CCQ is also better, around 90%. But I would still like to push about 120 Mbit/s through the whole link.



I will try this. At the moment I have a setup like AP (Station AP) (Station AP) (Station AP). Is there a technical reason for doing it the way you described?


  1. The link from green to blue is the worst/longest, I think, with the biggest drop. I got 100 Mbit/s with nv2 on that single link, but it dropped from 60 to 30 Mbit/s when passing over the complete path. I will check the NV2 configuration one more time this weekend.
  2. Most of our devices are separated by distance, and all of them use different channels.
  3. All the devices are shown in the diagram. I thought it doesn’t make any difference whether I connect SXT ↔ SXT directly or SXT ↔ Switch ↔ SXT.
  4. I tried that once but didn’t get more through the link than the unlimited maximum.
  5. First we tested with the integrated BW test and then used a laptop with a server on the other side. UDP throughput was better, but we need TCP.

Thanks for your replies!

Hi,

We’re having similar problems with NV2. Bandwidth on each PtP link is OK, but passing through a hop or two it gets very bad…
Did you solve the problem?

Regards

Yes. Buy something else. MT does not care.

Where do you think the problem is, in the PtP links or the routers?

In the PtP links with nv2. Combining MT routers with licensed gear, I do not see these problems.
Using Nstreme, I don’t see this problem either.

And as usual, not a single word from Mikrotik :frowning:

Hello!

I just repeated an almost identical scenario.

CCR9 — wire > 921UAGS-5SHPacT — wireless > 921UAGS-5SHPacT — wire > 260GS — wire > 922UAGS-5HPacT — wireless > 921UAGS-5SHPacT — wire > 911G-5HPacD — wireless > SXT G-5HPacD — wire > 911G-5HPacD — wireless > 911G-5HPacD — wire > 751U-2HnD [END]

I used only one chain, lowered TX power to get signals of -40 to -60 dBm, OSPF, MPLS, a port-less loopback bridge as the transport address, and static WDS. I set the TDMA period to 1 ms to lower latency. Ping between hops is ~5-10 ms.
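
The loopback-bridge / LDP part of that setup looks roughly like this on ROS 6.x (the 10.255.0.1/32 address is a placeholder, one per router):

```
# Port-less bridge acting as a loopback interface
/interface bridge add name=loopback
/ip address add address=10.255.0.1/32 interface=loopback

# Advertise the loopback via OSPF (backbone area assumed)
/routing ospf network add network=10.255.0.1/32 area=backbone

# Bind LDP to the loopback so MPLS sessions survive link flaps
/mpls ldp set enabled=yes lsr-id=10.255.0.1 transport-address=10.255.0.1
```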

Is there something I’m missing?

I tested TX with BT from the CCR to END and got 91-95 Mbps; note that the last wired link is 100 Mbps, as in your example. The 751U CPU was at 99-100%.
I tested RX with BT from the CCR to END and got 82-90 Mbps; the 751U CPU was at 100%.

Was the BT TCP or UDP? If TCP, how many connections?

Thanks for indicating that.

Here are my results: TCP connections used => TX speed + CPU load:

1 => ~51 Mbps + ~58%
2 => ~53 Mbps + ~60%
5 => ~67 Mbps + ~75%
8 => ~80 Mbps + ~90%
13 => ~88 Mbps + ~95%
16 => ~93 Mbps + ~100%
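
Numbers like these are presumably produced by varying the stream count on the built-in test, something like this (the address is a placeholder):

```
# Single-stream TCP test: roughly what one FTP/speedtest user sees
/tool bandwidth-test address=10.255.0.9 protocol=tcp direction=transmit \
    tcp-connection-count=1 user=admin password=""

# Many streams approach the aggregate capacity of the path
/tool bandwidth-test address=10.255.0.9 protocol=tcp direction=transmit \
    tcp-connection-count=16 user=admin password=""
```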

Have you tested any device/setup with tcp-connection-count=1? I have a strong feeling that you won’t get encouraging results. In real life there are usually many TCP connections.

The problem we have is that a single user with a single TCP connection (speedtest.net, …) cannot achieve the full speed.
The total aggregate speed of the link is not the problem.
Can you repeat the test with the Nstreme protocol?

A speed tester (speedtest.net) uses 4 TCP streams. An FTP user uses 1 stream. Under less ideal conditions (longer distance/lower signal) the difference between TCP and UDP is much worse. So I see speeds of 10-15 Mbit/s at places in my network where I see a stable 60-70 Mbit/s with UDP.

Nv2 has its limits at the moment. I recommend you use Nstreme instead of Nv2; for the setup I created, it worked perfectly.

But you tried Nstreme in the lab. Try making outdoor links of several km with 95% CCQ and a -55 dBm signal and you will see the problems with Nstreme.
Nstreme has perfect latency, but the “not polled for long” error still exists. This should be fixed.