First of all, best wishes to all from a new member of this forum.
As a fan of MikroTik products, I decided to install three CRS312-4C+8XG switches in our company, because our old 1G switches weren't holding up well after a couple of years.
So I installed them in our network, and our Dell R530 got a 10G network card (a Dell-branded Intel X550-T dual-port RJ45 10G Ethernet card).
Now to my problem:
If I transfer files or run performance tests with iperf under Ubuntu, there is a performance issue with 1Gbit network clients.
When I transfer files from a client to the server, the speed corresponds to a 1G connection, but from the server to a client I get a max of 30M.
If I connect the same 1G device directly to the server, the connection is fast in both directions.
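For reference, I test both directions from the client side roughly like this (the server address is a placeholder):

    # on the server (the Dell R530)
    iperf3 -s
    # on the 1G client: client -> server direction (fast for me)
    iperf3 -c 10.0.0.10
    # server -> client direction (the slow one); -R reverses the test
    iperf3 -c 10.0.0.10 -R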
Try enabling flow control on the 10G link - the symptoms suggest the switch's buffer is too small, so bursty traffic causes packet loss. With flow control, the switch can tell the server to slow down, effectively moving the queue from the switch (which has a small on-chip buffer) to the server, which has much more buffer space.
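On the Linux server side, pause frame support can be checked and enabled with ethtool, roughly like this (eth0 stands in for the 10G interface; the switch port needs flow control enabled in SwOS/RouterOS as well):

    # show the current pause (flow control) settings
    ethtool -a eth0
    # enable rx and tx pause frames
    ethtool -A eth0 rx on tx on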
Sounds like you are not switching but bridging in software, i.e. using that little CPU to push the data. Make sure you have hw=yes on all bridge ports, and that all your VLANs are actually configured in the bridge VLAN settings, not as logical interfaces.
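On RouterOS this can be verified from the CLI, roughly like this (bridge1 is a placeholder bridge name):

    # list bridge ports; hardware-offloaded ports carry an "H" flag
    /interface bridge port print
    # enable hardware offload on all bridge ports
    /interface bridge port set [find] hw=yes
    # do VLAN filtering on the bridge itself, not on logical interfaces
    /interface bridge set bridge1 vlan-filtering=yes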
CRS3xx devices are dual-boot - is it running SwOS or RouterOS, and which version?
Is flow control really working? Check Tx/Rx pause frame counters in port statistics on both ends of the link.
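On the Linux end, the NIC's pause counters can usually be read from the driver statistics (counter names vary by driver; eth0 is a placeholder), and RouterOS shows them in the ethernet stats:

    # Linux: pause frame counters, if the driver exposes them
    ethtool -S eth0 | grep -i pause
    # RouterOS: per-port counters, including rx/tx pause frames
    /interface ethernet print stats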
A slow CPU would limit traffic in both directions, so I don't think that is the case here.
Sorry for the long time since my last update.
I've tested a lot, but there's not really any progress.
The switches are running SwOS, and I think flow control is working.
Tomorrow I'll run new tests with an Intel X520 SFP+ network card, and then I'll also check the pause frames.
I've hit the same problem: the gigabit interface can only transfer at about 30M. Have you found any solution? If you could share the root cause and fix, it would be greatly appreciated.
I have the same problem.
PC to NAS speed:
  1G  - 10G : 40-60M
  10G - 10G : 300M
  10G - 1G  : 40-60M
  1G  - 1G  : 110M
NAS to PC speed:
  1G  - 10G : 110M
  10G - 10G : 300M
  10G - 1G  : 80M
  1G  - 1G  : 110M
Whether I use SwOS or RouterOS, my CRS312 can't reach good speeds.
My NAS is a Synology DS1819+, and all my computers show the same problem. Since I've seen others reporting the same thing, I think this is a bug.
Same problems here: NAS and PC connected to a CRS309 with S+RJ10 modules, SwOS 2.1, tested with iperf3.
NAS to PC: 100M
PC to NAS: <30MB
It looks just like a duplex-mismatch problem from old network environments. Very disappointed with the CRS309…
I have also suffered from this performance problem (a 10GbE link carrying 1GbE end-to-end traffic). It still exists on the latest stable firmware as of this writing (6.48).
This issue is fixed for me in 6.49beta11. There are a variety of fixes and changes in this version; I recommend that anyone suffering from queue issues or burst-traffic throughput problems on 10GbE connections try it out.
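On RouterOS the beta can be pulled in by switching the update channel, roughly like this (depending on the version, betas may live on the testing or the development channel):

    # switch the update channel, then fetch and install the new version
    /system package update set channel=testing
    /system package update check-for-updates
    /system package update install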
Have any of you confirmed that this problem is gone? Second important question: has anyone checked that hw offload is enabled and working? (I guess this applies to RouterOS users, but perhaps SwOS has something similar.)
I HAVE THIS PROBLEM TOO.
Brand-new unit, received only hours ago. Latest stable software. Operating in SwOS mode. Single default VLAN. No special configuration. Just a straight switch.
The issue turns out to rear its head when traffic flows from higher-speed ports onto lower-speed ports.
This is true EVEN WHEN THE OFFERED LOAD is less than the speed capability of the egress port.
For example, iperf3 test traffic ingressing on a 10Gig port bottlenecks when departing on a 2.5Gbps port, EVEN WHEN the offered load is externally constrained to be LESS than 2.5Gbps.
As an example: my ISP rate-shapes us down to 2.3Gbps. The switch connects to the router at 10Gig. If traffic flows to a client connected to the switch at 10Gbps, the client receives 2.3Gbps. If that same client has its Ethernet interface speed changed to 2.5G, the iperf3 result drops to 400-500Mbps, a fifth of what should pass through without incident!
Ethernet flow control does nothing. I've tried squeezing the TCP window (to avoid buffer overruns). I've tried UDP. Nothing I have tried breaks through the bottleneck.
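For reference, the variations I tried looked roughly like this (10.0.0.20 stands in for the receiving client; none of them helped):

    # TCP paced below the 2.5G egress rate
    iperf3 -c 10.0.0.20 -b 2G
    # TCP with a deliberately small window to cap in-flight data
    iperf3 -c 10.0.0.20 -w 64K
    # UDP at a fixed offered load below the egress rate
    iperf3 -c 10.0.0.20 -u -b 2G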
The observed behavior suggests that the Marvell chip is not operating in true store-and-forward mode. Maybe it operates in some sort of crossbar switching mode when the ingress and egress ports are configured at the same speed, BUT when the egress port is at a different speed, it looks like it routes the traffic through some SLOW adjunct path for rate adaptation. That rate-adaptation pathway is not powerful enough to saturate a 2.5Gbps link. (I even tried 1Gbps egress, and that cannot be saturated when doing rate adaptation either.)
Is there some secret fix for this that is not in the literature? Can anyone clue me in?
Imagine all the new high-end PCs with 2.5Gig Ethernet ports. 10Gig file servers (and even latest-generation 10Gig ISP connections) will apparently be SIGNIFICANTLY bottlenecked delivering to 2.5Gig clients (400-500Mbps, not the hoped-for 2.5Gbps).