Multi-Gb/s connection with bonding

Hello,

I have the following setup:
Dell PowerEdge R220 with two 1GbE NICs, running Windows Server 2019 Essentials. I created a NIC team in Switch Independent mode and connected both ports to my router.
RB1100AHx4 router. Two ports are in a "Dell" bond using balance-rr mode; another three ports are in a "Synology" bond using 802.3ad mode. Both bonds are in the same bridge.
Synology RS1219+ with SSD cache; its three NICs are in a bond in LACP mode, connected to the router.

My plan is/was to create a multi-Gb/s link between the Dell server and the Synology NAS. However, with this setup I only get about 113 MB/s in a file transfer, which is roughly the 1 Gb/s limit, I think.
I have read that 802.3ad cannot increase throughput for a single session, only across multiple sessions. But I have also read the opposite.
Anyway, could you please tell me whether my plan is feasible and, if so, how? Or must I opt for a direct 10Gb link between the two devices?

Thank you in advance

I wonder where you read the opposite; can you give a link?


If you can use a multi-threaded file transfer protocol, in which multiple TCP sessions carry different parts of the same file in parallel, and the bonds are configured to take the L4 (port) information into account when choosing a link for a packet, you may be lucky and the threads may land on different links.
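For what it's worth, on the RouterOS side the per-packet link choice is controlled by the bond's transmit hash policy. A minimal sketch, assuming the bond towards the NAS is called "synology-bond" (the name is an assumption, adjust to your config):

```
# Hash on L3+L4 info (src/dst IP and src/dst port), so different
# TCP sessions can be spread across the member links of the bond.
# Bond name "synology-bond" is assumed here.
/interface bonding set [find name="synology-bond"] transmit-hash-policy=layer-3-and-4
```

Note this only influences the router-to-NAS direction; the NAS-to-router direction depends on the hash policy configured on the Synology bond, and the server-to-router direction on the load-balancing mode of the Windows NIC team. On the Windows side, robocopy with the /MT switch is one way to get a multi-threaded transfer.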