CRS310 and issues with different speed/ports

Hello
CRS310 8G+ with 7.16.1

A single bridge with all the ports on it.
Uplink port 10G
All the other ports go to servers at 2.5G (each server has 2x2.5G)

I have tried configuring the bonds as LACP, but I am not able to saturate the uplink. There is an evident buffer issue:
peak speed, low speed, peak speed, low speed, and so on.
If I set the uplink port (sfpplus1) to 2.5G (it goes to a CRS317 on 7.15.3 with l3-hw), the speed is stable at about 2.3G.

Is there a difference between SWITCHING ports with different speeds and ROUTING (l3-hw) between a bridge of 2.5G ports and a 10G uplink port?

Since we ROUTE, we shouldn't have the issue that we have in SWITCHING.

In 7.16.x, in WinBox, go to: Switch → QoS → Port

There you will be able to see detailed drop stats. If you confirm a buffer issue, you may be able to tune QoS settings to mitigate the situation (this mostly applies to hardware switching, i.e. L2 and/or L3 HW-offloaded forwarding).
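If you prefer the terminal, the same counters should be reachable from the CLI. A sketch, assuming the CLI path on 7.16.x mirrors the WinBox menu above (verify the exact path with tab completion on your unit):

```shell
# Per-port QoS/drop counters (path assumed to mirror WinBox: Switch -> QoS -> Port)
/interface/ethernet/switch/qos/port print stats

# General per-port ethernet stats also show drops and pause frames,
# which helps confirm where buffering occurs:
/interface/ethernet print stats where name=sfp-sfpplus1
```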

Thank you for your answer.
However, my question is now:

if we keep the switched ports at the same speed and have another port ROUTING, should we avoid the issue?

@maggiore,

> A single bridge with all the ports on it.
> Uplink port 10G
> all the ports to servers with 2.5G (each server has 2x2.5)
>
> I have tried configuring the bond to LACP, I am not able to saturate the uplink. There is an evident buffer issue.

I'm sorry, I don't quite understand your question. I mean: did you run a stress test from your LACP server ports to the uplink, or something else? iperf? How many servers ran the test concurrently? There should be at least four servers to saturate the uplink port.
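For reference, a concurrent-saturation test along those lines could look like this. A sketch assuming four 2.5G servers sending to one host behind the 10G uplink (the IP 10.0.0.1 and port numbers are placeholders):

```shell
# On the receiving host behind the 10G uplink (placeholder IP 10.0.0.1),
# run one iperf3 server per sender -- iperf3 handles one test per port:
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &
iperf3 -s -p 5203 &
iperf3 -s -p 5204 &

# On each of the four 2.5G servers, started at the same time,
# each pointing at its own port (-t 30 runs for 30 seconds):
iperf3 -c 10.0.0.1 -p 5201 -t 30   # server 1; use 5202..5204 on the others
```

Four senders at ~2.5G each offer roughly 10G of load, which is what it takes to actually stress the uplink and its buffers.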

And a bonded 2.5G NIC should output around 2.5G, so what is the problem?

I am not sure what your issue is, but routing using L3HW is just "L3 switching".

Hello
I have made some tests with iperf3. If we send from a 2.5G port to the 10G port, on to another unit with a 10G port, the flow goes up and down, as the buffer fills and empties and so on. If I keep the uplink port at 2.5G, the stream is linear, without the up/down.

I will try to explain my question:

The bridge is composed of 2.5G ports, all at the same speed.
If I ROUTE via the uplink port instead of switching, should I get the same buffer issue?
On a standard RB (without L3-hw), when I route between two different interfaces, the traffic flows via the CPU and no issues are present.

The problem with communication pausing and/or packets being dropped when there's a speed change (most notably from faster to slower, e.g. a 10Gbps ingress port and a 2.5Gbps egress port) is buffering. A switch has only a certain amount of buffer, and if there's a burst of frames, the switch needs to buffer them. If the buffer is full, the switch can only drop frames ... or, if flow control is in use, send pause frames to the sender(s).

If the device is routing, there's no reason for it to behave any differently ... unless CPU processing overhead adds enough delay to reduce throughput. Another possibility is that when a certain port is used "stand-alone" for routing only, the port provides "feedback" and blocks sending further packets/frames while its Tx buffer is full.
As already mentioned, when L3HW offload is in action, routing becomes switching, and since it's wire-speed, the same problems can arise.

When all ports run at the same speed, the chances of seeing it are smaller, but it's not impossible, because traffic from multiple ingress ports may egress via the same port, which again fills the buffer.

As I already noted, a solution might be to enable flow control. However, some users prefer not to, because at least on some MT devices flow control is pretty disruptive ... when any egress port's buffers get full, all ports emit pause frames ... even ports that carry traffic egressing via un-congested ports. The explanation for this disruptive behaviour is that the switch has no way of knowing which egress port an ingress frame will use until the frame has already been received (typical of "store-and-forward" operation).
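On RouterOS, flow control is a per-ethernet-port property. A sketch of enabling it on the congested uplink (the port name sfp-sfpplus1 is taken from the thread; adjust for your unit, and note the caveat above that pause frames can stall traffic on uncongested paths too):

```shell
# Enable 802.3x flow control on the 10G uplink.
# tx-flow-control emits pause frames when this port's buffers fill;
# rx-flow-control honours pause frames received. Values: auto/on/off.
/interface/ethernet set sfp-sfpplus1 tx-flow-control=on rx-flow-control=on

# Verify the setting took effect:
/interface/ethernet print detail where name=sfp-sfpplus1
```

For pause frames to help, the device on the other end of the link must honour them as well, so check the equivalent setting on the CRS317 side too.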