Initial situation:
Typical site-to-site VPN scheme: Site-A <-> Site-B.
E50UG routers on each side + WireGuard.
Connection parameters:
Site-A: GPON, 300 Mbps
Site-B: PPPoE over fiber optics (FO), 100 Mbps
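For context, a site-to-site WireGuard setup like this on RouterOS v7 typically looks roughly as follows. This is only an illustrative sketch: the interface name `wg1`, keys, addresses, and endpoint are placeholders, not the actual config from this thread.

```shell
# Site-A side (RouterOS v7) - all names and addresses are placeholders
/interface wireguard add name=wg1 listen-port=13231
/interface wireguard peers add interface=wg1 \
    public-key="<Site-B-public-key>" \
    endpoint-address=siteb.example.com endpoint-port=13231 \
    allowed-address=10.10.10.2/32,192.168.2.0/24 \
    persistent-keepalive=25s
/ip address add address=10.10.10.1/30 interface=wg1
```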
Everything is working perfectly.
The VPN speed almost matches the physical bandwidth of the resulting channel, ~100 Mbps, and sometimes higher.
... but there is a nuance....
When transferring large files from Site-A to Site-B (read: with constant maximum channel load), errors appear.
Tx Drops counters grow only on the "wg" (WireGuard) interface on Site-A.
The error rate is 0.02% to 0.07%, i.e. roughly 2 to 7 drops for every 10,000 transmitted packets.
At the same time, absolutely all other interfaces show no errors at all.
Transmission in the opposite direction does not produce any errors.
The errors occur completely randomly, without any pattern (most likely a manifestation of variation in the upstream "backbone" channel).
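The drop counters described above can be watched on RouterOS like this (the interface names `wg1` and `ether1` are placeholders for the actual tunnel and WAN interfaces):

```shell
# Per-interface packet and drop counters (includes tx-drop)
/interface print stats where name="wg1"
# Live throughput view of the tunnel and the WAN side
/interface monitor-traffic wg1,ether1
```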
It seems this is due to the bandwidth difference (300/100) between Site-A and Site-B.
I suspect it can probably be solved by some kind of "buffering" via queues and the like.
But the one-sided manifestation of these errors is slightly annoying and I want to remove it, while my knowledge of WireGuard on MikroTik is clearly insufficient for proper diagnosis and for working out the right solution.
On the other hand, I do understand that some packet loss is normal for UDP, and there may be no point in doing anything at all.
I would like to hear opinions about this situation from people with more experience and knowledge.
I ran several more tests and finally confirmed that this is due to packet buffering in WireGuard.
The external WAN interfaces work flawlessly despite the difference in channel speeds and load, as the traffic data and graphs show.
So the WireGuard interface cannot hand the packet over to the WAN interface and discards it.
Perhaps this can be solved by increasing the WAN interface's buffer for packets arriving from WireGuard, but I do not yet know how to do this correctly.
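One way to "increase the buffer" on the WAN interface would be a plain FIFO queue with a larger packet limit (a sketch; the queue name `big-pfifo` and the WAN interface `ether1` are assumptions, and the limit value is only an example):

```shell
# Define a FIFO queue type with a larger packet buffer
/queue type add name=big-pfifo kind=pfifo pfifo-limit=500
# Attach it to the WAN interface (ether1 is a placeholder)
/queue interface set ether1 queue=big-pfifo
```

Note that simply enlarging a FIFO trades drops for added latency (bufferbloat), which is why an AQM-style queue may be preferable.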
So let it work as is, and I will try to study/understand if it is possible and how to set up buffering correctly.
The problem is solved!!!
It seems that WireGuard on MikroTik has no way to configure its queue programmatically. At least I didn't find it in the documentation or in the CLI, and there is not even a mention of such an option.
Therefore, only one way is left: set up the correct queue on the WAN interface.
I decided to start with "smart" queues.
Changing "multi-queue-ethernet-default" to "FQ-CoDel", with default settings, solved all the issues!
Test with transmission of > 18M packets - 0 errors!
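On RouterOS v7 the change described above can be done along these lines (a sketch: `ether1` stands for the actual WAN interface and `fq-codel-wan` is an assumed queue-type name):

```shell
# Create an fq-codel queue type with default parameters
/queue type add name=fq-codel-wan kind=fq-codel
# Replace multi-queue-ethernet-default on the WAN interface
/queue interface set ether1 queue=fq-codel-wan
# Verify which queue type the interface now uses
/queue interface print where name=ether1
```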
I'm interested in your solution. Can you share your queue config?
When running the integrated bandwidth test, I get 300/500 Mbit/s between sites.
But when actual traffic is routed through this interface, everything slows down... maybe MSS/MTU, though I have already added an MSS mangle rule.
My PPPoE ISP MTU is 1480, therefore my WireGuard MTU for the wg interfaces is 60 bytes less: 1420. So on both ends I have wg interfaces with mtu=1420.
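A TCP MSS clamp of the kind mentioned above usually looks like this on RouterOS (a sketch, not the poster's actual rule; `wg1` is a placeholder interface name):

```shell
# Match the tunnel MTU described above
/interface wireguard set wg1 mtu=1420
# Clamp TCP MSS on SYN packets leaving via the tunnel
/ip firewall mangle add chain=forward protocol=tcp tcp-flags=syn \
    out-interface=wg1 action=change-mss new-mss=clamp-to-pmtu \
    comment="Clamp MSS for traffic leaving via WireGuard"
```

A mirror rule with `in-interface=wg1` is often added as well so that both directions are clamped.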
If neighbor discovery is used, add 255.255.255.255/32 to the peer's allowed addresses. Or turn off discovery (set the discovery interface list to none).