Mikrotik WireGuard Tx Drops errors

Initial situation:
Typical VPN scheme Site-A <-> Site-B.
E50UG routers on each side + WireGuard.
Connection parameters:
Site-A
GPON 300 Mbps
Site-B
PPPoE FO 100 Mbps

Everything works perfectly.
The VPN throughput is close to the physical bandwidth of the combined channel, ~100 Mbps, and sometimes slightly higher. :slight_smile:
... but there is a nuance....
When transferring large files from Site-A to Site-B (read: "with sustained maximum channel load"), errors appear...
Tx drops, and only on the "wg" (WireGuard) interface on Site-A.
The error rate is 0.02%-0.07%, i.e., "conditionally averaged" between 2 and 7 drops per 10,000 transmitted packets.
At the same time, absolutely all other interfaces have no errors at all.
Transmission in the opposite direction does not lead to any errors.
Errors occur completely randomly, without any pattern (most likely a manifestation of jitter on the "backbone" channel).
It seems this is caused by the bandwidth difference (300/100 Mbps) between Site-A and Site-B.
I suspect this could probably be solved by some kind of "buffering" via queues and the like...
But these errors, with their "one-sided" manifestation, are slightly annoying and I want to remove them, yet my knowledge of WireGuard on MikroTik is clearly insufficient for a proper diagnosis and for working out the right solution.

On the other hand, I certainly understand that some packet loss is normal for UDP, and there may be no point in doing anything at all...
I would like to hear opinions about this situation from people with more experience and knowledge.

Thanks!

I have been using WireGuard for a few years, with both devices in a site-to-site VPN. Here is my personal setup with two RB5009s over 5G/1G fiber connections.

I've checked the other instances I have at work, and sometimes they drop packets too; nothing important, the service works fine for me.

[admmikrotik@router50a] > /interface/print stats-detail where name=wg1
 0   R   ;;; wg1 - interco50a
         name="wg1" last-link-down-time=2026-01-02 13:57:08 last-link-up-time=2026-01-02 13:57:23 
         link-downs=1 rx-byte=13 580 148 164 tx-byte=49 354 592 512 rx-packet=27 853 381 
         tx-packet=50 862 654 rx-drop=33 tx-drop=1 tx-queue-drop=0 rx-error=2 tx-error=202 fp-rx-byte=0 
         fp-tx-byte=0 fp-rx-packet=0 fp-tx-packet=0
[admmikrotik@router70a] > /interface/print stats-detail where name=wg1
23   R   ;;; wg1 - interco70a
         name="wg1" last-link-down-time=2026-01-02 13:57:06 last-link-up-time=2026-01-02 13:57:21 link-downs=3 
         rx-byte=48 269 375 340 tx-byte=13 627 840 440 rx-packet=49 581 302 tx-packet=27 916 716 rx-drop=0 
         tx-drop=87 tx-queue-drop=0 rx-error=4 tx-error=476 fp-rx-byte=0 fp-tx-byte=0 fp-rx-packet=0 fp-tx-packet=0
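As a quick sanity check on those counters, the drop rate is just drops divided by transmitted packets. A small Python snippet (plain arithmetic, using the router70a numbers above):

```python
def drop_rate_percent(drops: int, packets: int) -> float:
    """Return dropped packets as a percentage of transmitted packets."""
    return 100.0 * drops / packets

# tx-drop=87 out of tx-packet=27 916 716 (router70a wg1 stats above)
print(f"{drop_rate_percent(87, 27_916_716):.5f}%")  # roughly 0.00031%
```

which is two orders of magnitude below the 0.02-0.07% reported in the original post.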

Seeing my setups, I have a lot of trouble getting above 10/12 Mb/s... I'm impressed by the performance of yours!

Thanks for sharing this, it helps with fixing the errors.

I ran a number of additional tests and finally confirmed that this is due to the packet buffering rules in WireGuard.

The external WAN interfaces work flawlessly despite the difference in channel speeds and load, as can be seen from the traffic data and graphs.
So it is the WireGuard interface that fails to hand the packet off to the WAN interface and discards it.
Perhaps this can be solved by increasing the WAN interface's buffer for receiving packets from WireGuard, but I do not yet know how to do this correctly.
So let it work as is, and I will try to study/understand whether buffering can be set up correctly, and how. :wink:

Some test results are attached below.

==================================================

Data sent from Site-B to Site-A (100 Mbps >>> 300 Mbps)

==============

Data sent from Site-A to Site-B (300 Mbps >>> 100 Mbps)

===============

iperf3 test results:

Host 192.168.1.9 @ Site-A (iperf3 client) & host 192.168.2.34 @ Site-B (iperf3 server)

Send from Site-A to Site-B

Send from Site-B to Site-A

========================================================

The problem is solved!!!
It seems that WireGuard on MikroTik does not offer any way to configure queues on the wg interface itself. At least I didn't find it in the documentation or in the CLI, and there is not even a mention of such a possibility.
Therefore, only one way remains: set up the correct queue on the WAN interface.

I decided to start with "smart" queues.
Changing the WAN interface queue from "multi-queue-ethernet-default" to "fq-codel", with default settings, solved all the issues!
A test transmitting more than 18M packets produced 0 errors!

:grinning_face:


I'm interested in your solution. Can you share your queue config?

When running the integrated bandwidth test, I get 300/500 Mbit/s between the sites.
But when actual traffic is routed through this interface, everything slows down... maybe MSS/MTU, but I have already added mangle rules:

/ip firewall mangle
add action=change-mss chain=forward new-mss=1360 out-interface=wg1 protocol=tcp tcp-flags=syn
add action=change-mss chain=forward in-interface=wg1 new-mss=1360 protocol=tcp tcp-flags=syn

I have also excluded the wg1 IP address range from being fasttracked.

I don't have any MSS/MTU settings or mangle rules.

My PPPoE ISP MTU is 1480, therefore my "WireGuard" MTU for the wg interfaces is 60 bytes less: 1420. So on both ends I have wg interfaces with mtu=1420.
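For reference, a minimal sketch of setting that MTU in the RouterOS v7 CLI (the interface name wg1 is an assumption):

```
/interface/wireguard set [find name=wg1] mtu=1420
```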

If neighbor discovery is used, then add 255.255.255.255/32 to the allowed addresses in the peer. Or turn off discovery (set the discovery interface list to none).
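A sketch of that peer change in the RouterOS CLI (the interface name and the remote subnet are placeholders for illustration):

```
/interface/wireguard/peers set [find interface=wg1] allowed-address=192.168.2.0/24,255.255.255.255/32
```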

And queues:

  1. Create an FQ-CoDel queue type

  2. Assign it to the WAN interface
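In RouterOS terms, the two steps above look roughly like this (the queue type name and the WAN interface name ether1-wan are assumptions for illustration):

```
# 1. Create an FQ-CoDel queue type with default settings
/queue/type add name=fq-codel-default kind=fq-codel

# 2. Assign it to the WAN interface (replacing multi-queue-ethernet-default)
/queue/interface set ether1-wan queue=fq-codel-default
```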

I think that's all :slight_smile: