CSS610-8G-2S+IN Switching Capacity

Hi,

I recently got a CSS610-8G-2S+IN switch specifically because it comes with 2 x SFP+ ports (along with 8 x 1G ports). With a 10GbE SFP+ transceiver connected to a 10G uplink, I assumed all 8 x 1G ports would be able to switch at full 1G speed to any servers upstream. But an iperf3 test I ran suggests the 8 x 1G ports are bottlenecked at a maximum of 1G through the 10G SFP+ uplink, meaning the 8 x 1G ports seem to share a 1G backplane (not sure if that's the right term).

Is that correct? I hope not, because I don't see the purpose of those 2 x SFP+ ports if the 8 x 1GbE ports cannot switch at full speed through the 10GbE SFP+ port.

How exactly did you test throughput?

Does this diagram help?

CSS610_Switch_10G_Test.JPG
I know the bottleneck is not on the server or the CRS309 switch: I attached test laptops to the CRS309 and ran the same test against the server, and they had no trouble going above 1G. Is there any other information that would help here?
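
For reference, each laptop ran a plain iperf3 test against the server, roughly like this (192.168.1.10 is just a placeholder for the server's address):

    # on the server
    iperf3 -s
    # on a laptop attached to the CRS309
    iperf3 -c 192.168.1.10 -t 30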

A few quick questions:
Have you verified that the links from the CSS610 to the CRS309 and from the CRS309 to the server have negotiated at 10G?
Have any Ingress Rate or Egress Rate limits been applied to sfp1 and sfp2 on the CSS610?
Are you running the CRS309 in SwOS or RouterOS mode? If RouterOS, is the bridge connecting sfp1 and sfp8 (according to your picture) running in hardware mode (HW=yes)? See the example commands below for how to check.
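
If it's RouterOS, something along these lines on the CRS309 console should show the negotiated rates and whether the bridge ports are hardware-offloaded (interface names are a guess, adjust to your setup; in SwOS you'd check link status in the web UI instead):

    # negotiated link rate on the SFP+ ports
    /interface ethernet monitor sfp-sfpplus1 once
    /interface ethernet monitor sfp-sfpplus8 once
    # bridge ports flagged with "H" are running with hardware offload
    /interface bridge port print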

OK, found the issue. The laptops weren't on the same network as the server, so the traffic was routing through an upstream router (which only has a 1-Gigabit port). That explains the bottleneck. Amateur mistake. I can confirm that with 4 laptops connected to the CSS610 (this time on the same network), all simultaneously running the "iperf3 -c" command, throughput at the server end reached 3.9 Gbps.
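
For anyone repeating this: a single iperf3 server only handles one test at a time, so one way to run all four clients simultaneously is to start one listener per laptop on its own port, roughly like this (the server address and ports are just placeholders):

    # on the server: one listener per laptop
    iperf3 -s -p 5201 &
    iperf3 -s -p 5202 &
    iperf3 -s -p 5203 &
    iperf3 -s -p 5204 &
    # on each laptop, against its own port
    iperf3 -c 192.168.1.10 -p 5201 -t 30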

iperf3_result.png
Happy with that (I don't have another 4 laptops to push the test any further).
I can sleep better now. :)
Thanks very much for the help.