Has anyone noticed a problem with NV2 where, even on an easy setup like the one below (one main link tested at 150 Mbps TCP, and on the remote side a sector with 1-2 subscribers connected for the test, where a test to device POP1 gives 100+ Mbps), a Btest from the Subscriber to the CORE router reaches only 30-40 Mbps max?? CORE <wifi 150 Mbps> POP1 --Gbit LAN-- Sector <wifi 100+ Mbps> Subscriber
Which hop drops the speed, the first one (core<->POP1) or the second one (sector<->subscriber)?
I’m sure you’re aware that there are some issues using NV2 on ac-capable routerboards (most notably ARM-based ones) … there are plenty of posts in this part of the forum.
The drop is on the whole route Subscriber <> Core. Independent tests Subscriber <> POP1 give 100+ Mbps, and POP1 <> CORE gives 150 Mbps, but the route from Subscriber to Core through POP1 drops to less than half of that.
Devices are Netmetal at the Core, POP, and Sector; the Subscriber is an LHG.
TCP is subject to a few things that can lower average throughput compared to UDP:
latency … does ping/traceroute show any anomaly in RTT times?
Note that TDD can affect the timing of ACK packets (sent in the reverse direction) … even more so if there are two (or more) TDD links in a row and the Tx windows are not aligned (if they are optimally aligned for one direction, they will be mis-aligned for the other direction, so it’s really hard to optimize this aspect). This might show up as RTT jitter (the usual ping, which sends one packet every second, probably won’t show it; wireshark on the sender would). If ACK packets don’t arrive at the sender regularly, the sender doesn’t transmit additional data regularly … if the irregularity is large enough (this depends on TCP window size and average throughput … with the initial TCP window size being pretty small, the irregularity can easily be too big), the sender will not be able to develop adequate transmit speed.
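To put rough numbers on the window/RTT relationship: a single TCP stream can never exceed window ÷ RTT, so any RTT inflation from stacked TDD frames directly caps throughput. A minimal Python sketch with purely illustrative window and RTT values (not measurements from this setup):

```python
# Back-of-the-envelope: single-stream TCP throughput is capped at window / RTT.

def max_tcp_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Upper bound on one TCP stream: window size divided by round-trip time."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# Assumed values: a 64 KiB window over a 3 ms vs. a 15 ms RTT path.
for rtt_ms in (3, 15):
    cap = max_tcp_throughput_mbps(64 * 1024, rtt_ms)
    print(f"RTT {rtt_ms:2d} ms -> cap {cap:.1f} Mbps")
```

With an assumed 64 KiB window, going from 3 ms to 15 ms RTT drops the ceiling from roughly 175 Mbps to roughly 35 Mbps … the same ballpark as the reported 30-40 Mbps, though that coincidence is only suggestive, not a diagnosis.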
dropped packets … any retransmission shrinks the TCP window size, which affects throughput a lot.
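The loss effect can be roughly quantified with the simplified Mathis approximation (throughput ≈ MSS / (RTT · √p)); this is a textbook model, not something measured on these links, and the MSS, RTT, and loss-rate values below are assumptions:

```python
import math

def mathis_throughput_mbps(mss_bytes: float, rtt_ms: float, loss_rate: float) -> float:
    """Simplified Mathis et al. model: rate ~ MSS / (RTT * sqrt(p))."""
    return mss_bytes * 8 / ((rtt_ms / 1000) * math.sqrt(loss_rate)) / 1e6

# Assumed: 1460-byte MSS, 10 ms RTT, loss rates from 0.01% up to 1%.
for p in (0.0001, 0.001, 0.01):
    print(f"loss {p:.2%} -> ~{mathis_throughput_mbps(1460, 10, p):.0f} Mbps")
```

The point of the model is the √p in the denominator: a tenfold increase in loss rate cuts achievable single-stream throughput by about a factor of three.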
router buffer bloat … this affects the sender’s ability to fine-tune its Tx rate, so the Tx rate can fluctuate quite a lot … which can negatively impact overall average throughput.
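The worst-case extra latency a bloated queue adds is just buffer size divided by link rate; a quick sketch with an assumed buffer size and link speed (both made up for illustration):

```python
def buffer_delay_ms(buffer_bytes: float, link_mbps: float) -> float:
    """Worst-case queueing delay added by a completely full buffer."""
    return buffer_bytes * 8 / (link_mbps * 1e6) * 1000

# Assumed: a 1 MB queue draining into a 40 Mbps wireless link.
print(f"{buffer_delay_ms(1_000_000, 40):.0f} ms")  # prints "200 ms"
```

That extra 200 ms of queueing delay feeds straight back into the window/RTT ceiling discussed above, which is why bloated buffers hurt TCP even when no packets are lost.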
Most of those shortcomings are hidden if the test is done using multiple parallel TCP streams … each stream has its own share of problems, but since they behave independently, statistics evens them out and overall performance becomes smooth (and can reach connection limits).
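A toy simulation of that statistics argument: if each stream’s instantaneous rate fluctuates independently, the relative jitter of the aggregate shrinks roughly as 1/√n. The per-stream rate distribution here is an arbitrary assumption chosen only to make the effect visible:

```python
import random
import statistics

random.seed(42)  # reproducible toy data

def aggregate_rates(n_streams: int, samples: int = 2000) -> list[float]:
    """Sample the combined rate of n independently fluctuating streams.
    Each stream's instantaneous rate is uniform on 0..10 Mbps (arbitrary)."""
    return [sum(random.uniform(0.0, 10.0) for _ in range(n_streams))
            for _ in range(samples)]

for n in (1, 4, 16):
    agg = aggregate_rates(n)
    rel_jitter = statistics.stdev(agg) / statistics.mean(agg)
    print(f"{n:2d} streams: relative jitter {rel_jitter:.2f}")
```

The expected trend is roughly 0.58 → 0.29 → 0.14: each quadrupling of the stream count halves the relative fluctuation, which is why multi-stream Btest runs look smooth while a single stream struggles.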