Link Aggregation (LACP): how to get a stable 2Gb/s

Hello - sorry for my English…
I have a Synology 4-bay NAS with 2x1Gb Ethernet set up as a bond in Dynamic Link Aggregation (LACP) mode.
http://prntscr.com/ctqe7e
My MikroTik CCR1009-8G-1S-1S+ configuration:
http://prntscr.com/ctqegp
It is connected via 10Gb SFP+ to a MikroTik switch.
My problem is that I can't download more than 1Gb/s total from the NAS, even with 10 PCs and Macs pulling at once.
On one cable I get about 980 Mbit/s, on the second 0. Sometimes after restoring the config from a backup I see 1.4Gb/s (one interface at 980Mb/s, the second at about 300-400 Mb/s), but after a reboot the speed drops back to 1Gb/s. If I set load balancing to round-robin I see 500/500 on both cables. The LAG is built on separate MikroTik interfaces (ether7-8), each with its own full 1Gb/s.

  • I see the same thing if I connect two CCR1009s by LACP: the internal bandwidth can't exceed 1Gb/s.
    I hope you understand me. Any idea what is wrong?

I am not sure if this helps, but this configuration works well with Juniper.

Std export:

/interface bonding
add comment="Primary Bonded Interface" name=bonding1 slaves=sfp1,sfp2 transmit-hash-policy=layer-2-and-3

Verbose export:

/interface bonding
add arp=enabled arp-interval=100ms arp-ip-targets="" comment="Primary Bonded Interface" disabled=no down-delay=0ms lacp-rate=30secs link-monitoring=mii mii-interval=100ms min-links=0 mode=balance-rr mtu=1500 name=bonding1 primary=none slaves=sfp1,sfp2 transmit-hash-policy=layer-2-and-3 up-delay=0ms

If you’re really using a MikroTik switch, you cannot use LACP (the only option is balance-xor; I don’t know if Synology supports that).
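
If you do end up on balance-xor, the RouterOS side would look roughly like this (a sketch only - the bonding1 name and sfp slaves are taken from the export above; the Synology side would also need to be set to a static/XOR aggregation, not LACP):

/interface bonding
add mode=balance-xor name=bonding1 slaves=sfp1,sfp2 transmit-hash-policy=layer-2-and-3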
The CCR1009 should not be a problem; just remember to use ether5-8 and not ether1-4 (ether1-4 share 1Gbps in total to the CPU).

Use 802.3ad LACP on both sides, then tell us your experience.
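
On the RouterOS side that would be something like this (a sketch - interface names taken from the export above; layer-3-and-4 hashing is my suggestion for spreading multiple client flows, not part of the original config):

/interface bonding
add mode=802.3ad lacp-rate=30secs link-monitoring=mii name=bonding1 slaves=sfp1,sfp2 transmit-hash-policy=layer-3-and-4

The Synology bond must also be set to IEEE 802.3ad Dynamic Link Aggregation, otherwise the LAG will not negotiate.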

In general Link-Aggregation won’t make a single connection faster than the native link speed of a single link.
The only benefits of link-aggregation are redundancy and overall better throughput when serving multiple clients, because each client connection will be placed on one of the links.

The only way to get faster throughput on a single connection is per-packet load sharing (round-robin, i.e. balance-rr), but this is not widely supported and will most likely break other things due to packet mis-ordering.
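
For completeness, switching an existing bond to per-packet round-robin is a one-liner in RouterOS (assuming the bonding1 name from the export above) - just expect possible TCP reordering and retransmits:

/interface bonding
set bonding1 mode=balance-rr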