Strange slow RX but not TX

Hello everyone,

I recently purchased a MikroTik CCR2004 and set up my network with the following specifications:

ISP Connection: 1 Gbps down / 700 Mbps up, with IPv4 provided through IPv6 tunneling (Free, a French ISP).

Issue Description:
I am experiencing very slow download speeds when trying to clone a Git repository or download large files over HTTP. The download speed caps at around 100 KB/s, while the router’s CPU usage remains around 0%.

Interestingly, while running speedtests from various devices, I consistently get results over 500 Mbps. But a file download during that same test is still capped (downloading from a browser or with wget changes nothing: still 100 KB/s, from any machine, Linux or macOS).

Additional Observations:

Enabling a VPN (such as ProtonVPN) boosts the download speed to over 30 MB/s, which seems crazy to me.
Disabling all firewall rules, including fasttrack, does not change the slow download behavior.
When using iperf3, I achieve good results, but using the -R option also caps speeds at about 100 KB/s.
Streaming 4K videos on YouTube works seamlessly, with no buffering, even while two 4K Netflix streams are active.

Testing Environment:

I’ve tested downloads from multiple devices connected to the network.
Within my VLAN setup, iperf tests between VLANs show that traffic saturates the Ethernet link without issues, with low CPU usage on the router.

Configuration:
I will include my MikroTik configuration in the attachments for reference.

Questions:

Does anyone have suggestions on what could be causing this issue?
Are there specific settings or configurations I should check to resolve the slow download speeds without the VPN?
I have some concerns about my MTU configuration, but that part is a bit obscure to me.

Thank you for your help!
slow_internet.rsc (18.6 KB)

Most likely this is MTU related, as you mention IPv4-in-IPv6.

I have a similar setup at home and had to set the MTU to 1460; you will probably also need to clamp the MSS to the MTU.
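On RouterOS, MSS clamping is usually done with a mangle rule. A minimal sketch, assuming the tunnel interface is named ipipv6 and 1420 is just an example value:

/ip firewall mangle add chain=forward protocol=tcp tcp-flags=syn tcp-mss=1421-65535 action=change-mss new-mss=1420 out-interface=ipipv6 passthrough=yes

(There is also new-mss=clamp-to-pmtu if you prefer to let RouterOS derive the value.)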

You can check your actual MTU here: http://speedguide.net:8080/

Optionally, you can test whether this is the case with iperf3 by reducing the MSS with the option “-M 1420”.

Here is the result from speedguide:

« SpeedGuide.net TCP Analyzer Results » 
Tested on: 2024.11.21 13:41 
IP address: xx.xx.xxx.xx 
Client OS/browser: Mac OS (Firefox 132.0) 
 
TCP options string: 020405b4010303060101080a89a5c79c0000000004020000 
MSS: 1460 
MTU: 1500 
TCP Window: 131712 (not multiple of MSS) 
RWIN Scaling: 6 bits (2^6=64) 
Unscaled RWIN : 2058 
Recommended RWINs: 64240, 128480, 256960, 513920, 1027840 
BDP limit (200ms): 527 Mbps (53 Megabytes/s) 
BDP limit (500ms): 211 Mbps (21 Megabytes/s) 
MTU Discovery: OFF 
TTL: 54 
Timestamps: ON 
SACKs: ON 
IP ToS: 00000000 (0)

I tried what you proposed, something like iperf3 -R -M 1420 -c 185.93.2.193, but it fails with: iperf3: error - unable to set TCP/SCTP MSS: Invalid argument.

In terms of MTU, if I’m not wrong, in order:

  • sfp port ACTUAL MTU 1700 L2 MTU 1796
  • vlan836 ACTUAL MTU 1700 L2 MTU 1792
  • ipipv6 ACTUAL MTU 1500 L2 MTU 65535
  • bridge1 ACTUAL MTU 1500 L2 MTU 1596

Where do you propose to change the MTU?

Interesting… according to speedguide you have a normal MTU - but this is strange, as tunneling IPv4 in IPv6 (without tricks along the way) will definitely make it lower.

As to iperf3 - not sure why it fails. Did you run it on Linux (or another *nix-like system) or Windows? It could also be version-dependent.

As to changing the MTU on an interface - set it to something relatively low (~1400) on the interface that has your default gateway for IPv4. If that works, you can increase it a bit to find the maximum that still works.
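On RouterOS that would be something like the following, assuming the IPv4 default gateway sits on the tunnel interface that appears as ipipv6 in your list:

/interface set ipipv6 mtu=1400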

If you are on Linux (behind your MikroTik), you could adjust the MTU for the default route only - like “ip route replace default via <gateway-ip> mtu 1400” - instead of changing the interface MTU on the router itself.
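You can then verify that the override took effect with ip route get, for example:

ip route get 185.93.2.193

The output should list “mtu 1400” among the route attributes if the change was applied.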

Another potential issue is that your router or provider blocks ICMP “fragmentation needed” messages - under normal circumstances there is no need to fiddle with the MTU, as it is auto-discovered via Path MTU Discovery, but some providers block such ICMPs, or they get dropped by your router.
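A quick way to test this from a Linux host is to ping with the Don’t Fragment bit set while varying the payload size (payload + 28 bytes of IP/ICMP headers = packet size):

ping -M do -s 1472 185.93.2.193   # 1500-byte packets
ping -M do -s 1432 185.93.2.193   # 1460-byte packets

(On macOS the equivalent flag is -D.) If the larger size simply times out instead of reporting “Frag needed”, those ICMP messages are being dropped somewhere on the path.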

Just for testing, disable hardware offload in /interface bridge port for a moment and redo the tests.
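For reference, that can be done for all bridge ports at once (and reverted with hw=yes):

/interface bridge port set [find] hw=no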

I disabled all hardware offloading, but unfortunately, it did not resolve the issue.

Regarding my testing with iperf, I found that it performed well when run on Linux, in contrast to my earlier tests on macOS. Without the -R option I achieved 500 Mbps, but using -R resulted in a drop to just 2 Mbps, so it’s the same as before.

Moreover, I conducted further iperf tests using the -P 20 option, which opens 20 parallel streams, each carrying about 2 Mbps. Does this information help with troubleshooting?

Interestingly, when I connect through a VPN, the slow download issue disappears entirely. I suspect this might be related to my use of WireGuard, which operates over UDP, whereas the issue appears to be isolated to received (RX) TCP traffic. Could this be an indication of an MTU issue, and if so, do you mind explaining why?

EDIT: After some reading on MTU, I understand now that the issue is indeed at the MTU level, since it works over UDP and, by extension, with WireGuard (whose tunnel interface uses a smaller MTU, so its packets already fit through the constrained path). Now I don’t know where to adjust the MTU properly :confused:

Small update: I may have tested incorrectly the first time, but I don’t have the issue over IPv6. If you have any other ideas, I’d be happy to hear them.

If IPv6 is not affected, that explains why your YouTube and Netflix tests do well: they would use IPv6, as do many speedtest.net test servers nowadays. So it looks like the problem only affects IPv4 TCP traffic.
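If you want to confirm that split directly, iperf3 can force the address family with its -4/-6 flags, assuming a server reachable over both (the hostname here is just a placeholder):

iperf3 -4 -R -c dualstack.example.net
iperf3 -6 -R -c dualstack.example.net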

When you perform the iperf3 test with -R (the test that produced only 100 KB/s), can you check on the sender side (the remote side) whether a Cwnd column is available, and what values that column shows?

Here is the result. Is this what you are looking for?

Server output:
Accepted connection from xx.xx.xx.xx, port 48694
[  5] local 185.93.2.193 port 5201 connected to xx.xx.xx.xx port 48704
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   439 KBytes  3.59 Mbits/sec  116    127 KBytes       
[  5]   1.00-2.00   sec   510 KBytes  4.17 Mbits/sec  364    147 KBytes       
[  5]   2.00-3.00   sec   607 KBytes  4.97 Mbits/sec  136   55.1 KBytes       
[  5]   3.00-4.00   sec   669 KBytes  5.48 Mbits/sec  486    301 KBytes       
[  5]   4.00-5.00   sec   287 KBytes  2.35 Mbits/sec  277    206 KBytes       
[  5]   5.00-6.00   sec   250 KBytes  2.05 Mbits/sec  267   52.3 KBytes       
[  5]   6.00-7.00   sec   486 KBytes  3.98 Mbits/sec  333   69.3 KBytes       
[  5]   7.00-8.00   sec  1.51 MBytes  12.6 Mbits/sec  711    228 KBytes       
[  5]   8.00-9.00   sec   525 KBytes  4.29 Mbits/sec  620   56.6 KBytes       
[  5]   9.00-10.00  sec   356 KBytes  2.92 Mbits/sec   24   73.5 KBytes       
[  5]  10.00-11.00  sec   831 KBytes  6.81 Mbits/sec    0   17.0 KBytes       
[  5]  11.00-12.00  sec   831 KBytes  6.81 Mbits/sec    0   17.0 KBytes       
[  5]  12.00-13.00  sec   891 KBytes  7.30 Mbits/sec    0   14.1 KBytes       
[  5]  13.00-14.00  sec   831 KBytes  6.81 Mbits/sec    0   17.0 KBytes       
[  5]  14.00-15.00  sec  1.08 MBytes  9.09 Mbits/sec   46   91.9 KBytes       
[  5]  15.00-16.00  sec   337 KBytes  2.76 Mbits/sec  165   67.9 KBytes       
[  5]  16.00-17.00  sec   653 KBytes  5.35 Mbits/sec    0   17.0 KBytes       
[  5]  17.00-18.00  sec  1.16 MBytes  9.73 Mbits/sec    0   17.0 KBytes       
[  5]  18.00-19.00  sec   810 KBytes  6.64 Mbits/sec   81   41.0 KBytes       
[  5]  19.00-20.00  sec   387 KBytes  3.17 Mbits/sec  315   62.2 KBytes       
[  5]  20.00-21.00  sec   665 KBytes  5.44 Mbits/sec  393   90.5 KBytes       
[  5]  21.00-22.00  sec   708 KBytes  5.80 Mbits/sec  487   89.1 KBytes       
[  5]  22.00-23.00  sec  3.02 MBytes  25.4 Mbits/sec  1814   63.6 KBytes       
[  5]  23.00-24.00  sec   204 KBytes  1.67 Mbits/sec  507    189 KBytes       
[  5]  24.00-25.00  sec   154 KBytes  1.26 Mbits/sec  175   21.2 KBytes       
[  5]  25.00-26.00  sec   245 KBytes  2.00 Mbits/sec  235   82.0 KBytes       
[  5]  26.00-27.00  sec   672 KBytes  5.50 Mbits/sec  205   33.9 KBytes       
[  5]  27.00-28.00  sec   831 KBytes  6.81 Mbits/sec    0   19.8 KBytes       
[  5]  28.00-29.00  sec  1.04 MBytes  8.75 Mbits/sec   53    115 KBytes       
[  5]  29.00-30.00  sec   632 KBytes  5.18 Mbits/sec  227   59.4 KBytes       
[  5]  30.00-31.00  sec   577 KBytes  4.73 Mbits/sec  106   73.5 KBytes       
[  5]  31.00-32.00  sec   950 KBytes  7.78 Mbits/sec    0   17.0 KBytes       
[  5]  32.00-33.00  sec  1.04 MBytes  8.76 Mbits/sec    0   17.0 KBytes       
[  5]  33.00-34.00  sec  1.04 MBytes  8.76 Mbits/sec    0   19.8 KBytes       
[  5]  34.00-35.00  sec  1.04 MBytes  8.76 Mbits/sec    0   17.0 KBytes       
[  5]  35.00-36.00  sec  1.10 MBytes  9.24 Mbits/sec    0   17.0 KBytes       
[  5]  36.00-37.00  sec  1.16 MBytes  9.73 Mbits/sec    0   17.0 KBytes       
[  5]  37.00-38.00  sec  1.10 MBytes  9.24 Mbits/sec    0   17.0 KBytes       
[  5]  38.00-39.00  sec  1.10 MBytes  9.25 Mbits/sec   46   45.2 KBytes       
[  5]  39.00-40.00  sec   679 KBytes  5.56 Mbits/sec  210    153 KBytes       
[  5]  40.00-41.00  sec   560 KBytes  4.59 Mbits/sec  352   91.9 KBytes       
[  5]  41.00-42.00  sec   298 KBytes  2.45 Mbits/sec   27   53.7 KBytes       
[  5]  42.00-43.00  sec   772 KBytes  6.32 Mbits/sec    0   28.3 KBytes       
[  5]  43.00-44.00  sec  2.03 MBytes  17.0 Mbits/sec    0   22.6 KBytes       
[  5]  44.00-45.00  sec  1.97 MBytes  16.5 Mbits/sec    0   25.5 KBytes       
[  5]  45.00-46.00  sec  1.29 MBytes  10.8 Mbits/sec  117    167 KBytes       
[  5]  46.00-47.00  sec   597 KBytes  4.89 Mbits/sec  263    113 KBytes       
[  5]  47.00-48.00  sec   950 KBytes  7.78 Mbits/sec    0    110 KBytes       
[  5]  48.00-49.00  sec   823 KBytes  6.74 Mbits/sec    0   76.4 KBytes       
[  5]  49.00-50.00  sec   847 KBytes  6.94 Mbits/sec  178   62.2 KBytes       
[  5]  50.00-51.00  sec   416 KBytes  3.41 Mbits/sec   49   87.7 KBytes       
[  5]  51.00-52.00  sec  1.22 MBytes  10.2 Mbits/sec    0   25.5 KBytes       
[  5]  52.00-53.00  sec  2.09 MBytes  17.5 Mbits/sec    0   39.6 KBytes       
[  5]  53.00-54.00  sec  1.05 MBytes  8.83 Mbits/sec  221    188 KBytes       
[  5]  54.00-55.00  sec   247 KBytes  2.03 Mbits/sec  207   8.48 KBytes       
[  5]  55.00-56.00  sec   697 KBytes  5.72 Mbits/sec  214    277 KBytes       
[  5]  56.00-57.00  sec   116 KBytes   949 Kbits/sec  200   36.8 KBytes       
[  5]  57.00-58.00  sec   238 KBytes  1.95 Mbits/sec    6   33.9 KBytes       
[  5]  58.00-59.00  sec   297 KBytes  2.43 Mbits/sec    0   17.0 KBytes       
[  5]  59.00-60.00  sec   831 KBytes  6.81 Mbits/sec    0   17.0 KBytes       
[  5]  60.00-60.04  sec  59.4 KBytes  12.4 Mbits/sec    0   17.0 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.04  sec  48.3 MBytes  6.74 Mbits/sec  10233             sender

FYI, on Linux you can get the server output with:

iperf3 -R --get-server-output -c 185.93.2.193

Yes, thank you. The congestion window (for the sender side; on your side it is further limited by the receive window) is way too small, and it has no chance to increase before encountering a lot of retransmissions (probably due to packet loss). It looks like the round-trip delay from you to the iperf3 server is about 18-20 ms. When there are retransmissions, the sender has to reduce the congestion window (cf. congestion-avoidance algorithms), as you can observe. It looks like retransmissions kick in whenever the Cwnd grows larger than about 30 KB.
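As a rough sanity check: TCP throughput is bounded by cwnd / RTT, so with the ~30 KB ceiling seen above and an 18-20 ms round trip,

30 KB / 0.02 s ≈ 1.5 MB/s ≈ 12 Mbit/s

which is right around the single-digit Mbit/s numbers in your -R run.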

If you use a bandwidth-delay-product calculator, like this one https://calculator.academy/bandwidth-delay-product-calculator/ or this one https://www.speedguide.net/bdp.php, you’ll see that if the congestion window can’t get bigger than those numbers, you won’t get higher bandwidth. That’s also why the sum of the bandwidths is larger when you open 20 connections with -P 20: the Cwnd limit appears to be about the same for each individual connection. For comparison, here is a test from my place to the same server. I’m located on a different continent with a much higher ping time (193 ms), and the congestion window can steadily increase to 8.74 MBytes without retransmissions:

iperf3-cwnd.png
My bet is that when you remove the -R option, the sender side (you) can achieve much larger Cwnd sizes. When you perform iperf3 tests between your VLANs, do you see large or small Cwnd values? Because the delay within your LAN is much smaller (sub-millisecond), you might not notice the effect of a limited congestion window on the final bandwidth.

Here is the result without -R:

root@docker:~# iperf3 --get-server-output -c 185.93.2.193 -t 10
Connecting to host 185.93.2.193, port 5201
[  5] local 192.168.43.251 port 43242 connected to 185.93.2.193 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  58.4 MBytes   490 Mbits/sec   95    216 KBytes       
[  5]   1.00-2.00   sec  71.4 MBytes   599 Mbits/sec   19    304 KBytes       
[  5]   2.00-3.00   sec  73.0 MBytes   612 Mbits/sec   22    297 KBytes       
[  5]   3.00-4.00   sec  78.0 MBytes   654 Mbits/sec    4    293 KBytes       
[  5]   4.00-5.00   sec  73.6 MBytes   617 Mbits/sec   12    206 KBytes       
[  5]   5.00-6.00   sec  64.6 MBytes   542 Mbits/sec   46    270 KBytes       
[  5]   6.00-7.00   sec  76.7 MBytes   644 Mbits/sec    3    262 KBytes       
[  5]   7.00-8.00   sec  75.6 MBytes   634 Mbits/sec   12    260 KBytes       
[  5]   8.00-9.00   sec  64.8 MBytes   543 Mbits/sec    6    204 KBytes       
[  5]   9.00-10.00  sec  72.4 MBytes   607 Mbits/sec    1    307 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   708 MBytes   594 Mbits/sec  220             sender
[  5]   0.00-10.04  sec   707 MBytes   591 Mbits/sec                  receiver

The ping to this server is good.

root@docker:~# ping 185.93.2.193
PING 185.93.2.193 (185.93.2.193) 56(84) bytes of data.
64 bytes from 185.93.2.193: icmp_seq=1 ttl=58 time=3.18 ms
64 bytes from 185.93.2.193: icmp_seq=2 ttl=58 time=3.44 ms
64 bytes from 185.93.2.193: icmp_seq=3 ttl=58 time=3.07 ms
64 bytes from 185.93.2.193: icmp_seq=4 ttl=58 time=2.60 ms
64 bytes from 185.93.2.193: icmp_seq=5 ttl=58 time=2.68 ms
--- 185.93.2.193 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 2.602/2.993/3.442/0.315 ms

Locally, between two desktops not on the same VLAN, I have no Retr, the Cwnd is bigger, and I saturate the gigabit link.

root@docker:~# iperf3 --get-server-output -c 192.168.11.3 -t 10
Connecting to host 192.168.11.3, port 5201
[  5] local 192.168.43.251 port 40466 connected to 192.168.11.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   950 Mbits/sec    0    362 KBytes       
[  5]   1.00-2.00   sec   112 MBytes   938 Mbits/sec    0    362 KBytes       
[  5]   2.00-3.00   sec   112 MBytes   938 Mbits/sec    0    362 KBytes       
[  5]   3.00-4.00   sec   112 MBytes   938 Mbits/sec    0    362 KBytes       
[  5]   4.00-5.00   sec   112 MBytes   938 Mbits/sec    0    362 KBytes       
[  5]   5.00-6.00   sec   112 MBytes   939 Mbits/sec    0    362 KBytes       
[  5]   6.00-7.00   sec   112 MBytes   939 Mbits/sec    0    362 KBytes       
[  5]   7.00-8.00   sec   112 MBytes   938 Mbits/sec    0    362 KBytes       
[  5]   8.00-9.00   sec   112 MBytes   939 Mbits/sec    0    362 KBytes       
[  5]   9.00-10.00  sec   111 MBytes   933 Mbits/sec    0    362 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1.09 GBytes   938 Mbits/sec                  receiver

and with -R

Accepted connection from 192.168.43.251, port 54910
[  5] local 192.168.11.3 port 5201 connected to 192.168.43.251 port 54926
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   951 Mbits/sec    0    452 KBytes       
[  5]   1.00-2.00   sec   111 MBytes   930 Mbits/sec    0    452 KBytes       
[  5]   2.00-3.00   sec   110 MBytes   926 Mbits/sec    0    519 KBytes       
[  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec    0    542 KBytes       
[  5]   4.00-5.00   sec   112 MBytes   937 Mbits/sec    0    663 KBytes       
[  5]   5.00-6.00   sec   111 MBytes   933 Mbits/sec    2    619 KBytes       
[  5]   6.00-7.00   sec   110 MBytes   923 Mbits/sec    3    491 KBytes       
[  5]   7.00-8.00   sec   111 MBytes   933 Mbits/sec    4    489 KBytes       
[  5]   8.00-9.00   sec   111 MBytes   933 Mbits/sec    0    509 KBytes       
[  5]   9.00-10.00  sec   110 MBytes   923 Mbits/sec    0    580 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   932 Mbits/sec    9             sender

The problem with using public servers (including iperf3 servers) is that there may be bottlenecks other than the “last mile”. I tried the iperf3 server from the screenshots of @CGGXANNX and got shitty performance in both directions, with a fair amount of retransmissions … and for TCP, retransmissions are a sure way to kill any kind of performance (a retransmission both means re-sending data and shrinks the TCP window to a fraction of what it could be, so throughput in the next few seconds is low even without further retransmissions). When I tried a few other public iperf3 servers, I got varying results (but all of them were better and more consistent than the ones I got from the server in the mentioned screenshot).

So when determining the performance of a router, it’s important to be 200% sure that there are no other bottlenecks.

I agree with you, Mkx, but I ended up running this iperf test because I had very slow internet speeds over IPv4, such as when pulling from GitHub. The issue disappeared the moment I used a VPN, even with a VPN server in the same location (city).

Also, I tested the same iperf server with the equipment provided by my ISP instead of the CCR, and I got good results with no retransmissions. The issue is that I don’t want to keep that equipment, because it’s very old and I have my own gear.

I have the same problem on a CCR2216, with L3 HW offloading enabled or disabled. I even went so far as to connect the web server directly to the CCR2216, and downloading a file runs at 200 Kbps, whereas previously it downloaded at my full link speed at home, which is 400 Mbps.

Multi-stream HTTP downloads, such as speedtest.net, give full line speed. Streaming is absolutely perfect, even in 4K/8K.

I suspect that something in the recent OS releases has a bug. I’m on 7.16.1.

Thanks for sharing, I was going crazy… Did you try to downgrade?
On my side, I dumped the packets between the ONU and my ISP’s hardware and compared them with the ONU-to-CCR connection.
While using my ISP’s hardware the packets are clearly identified as an ipipv6 tunnel, whereas coming from the CCR there are only plain IPv6 packets, and I see some TCP RSTs.
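For anyone who wants to reproduce the capture: IPv4-in-IPv6 encapsulation shows up as IPv6 next header 4 (IPIP), so a tcpdump filter along these lines should isolate the tunneled packets (the interface name is a placeholder):

tcpdump -n -i eth0 'ip6 proto 4'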

I plan to try something that shouldn’t change anything: putting my SFP port in a bridge and handling VLAN 836 with bridge VLAN filtering instead of an L3 VLAN interface. I don’t have much hope for this part…

I just checked: I was on 7.15.1, upgraded to 7.16.1, and I have the same issue. It is still present in 7.17rc2 too.

Did you only get the issue when you went from 7.15 to 7.16?

No, I also had the issue in 7.15, and I’ve just tested 7.17rc2: still the same.

I also have the same issue: only my IPv4 download is very slow, 100 KB/s maximum.

Which Freebox do you have?
Is it configured in router mode or not?
If in router mode, did you set up IPv6 prefix delegation?