I wonder how I can enable jumbo frames on my new CRS504-4XQ-IN. When I try to set the MTU to 9000 it does not accept it; I tried setting it on the bridge and on the interfaces, and both did not work.
According to the marketing material jumbo frames should be supported.
So how do I enable them?
On the PC side I have set 9000, and with a direct cable from PC1 to PC2 it works.
But not through the switch.
Also, when I run a benchmark I get 12 Gbit/s at best.
If it looks like the device "does not accept" your MTU, you are missing an interface. To increase the MTU on a bridge, you need to increase it on all physical interfaces attached to that bridge as well.
I have a CRS504 here that easily reaches close to 100 Gbit/s. If you are measuring noticeably less, it must have something to do with your configuration (traffic hitting the CPU, etc.) or your way of measuring (running the speed test from the device itself, etc.).
You can share your configuration here for better feedback.
You first need to increase the L2MTU of all the ports that are members of the bridge. Then you can increase the MTU on those ports (you cannot set the MTU to be greater than the L2MTU). The MTU of the bridge cannot exceed the smallest L2MTU of all its member ports.
Another thing: if you are using a QSFP module, let's say in port 1, only add interface qsfp28-1-1 to the bridge; leave the other interfaces enabled but otherwise unconfigured.
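To illustrate the order of operations, a minimal RouterOS console sketch (interface and bridge names like qsfp28-1-1 and bridge1 are assumptions; adjust them and the L2MTU value to your setup):

```
# 1) raise the L2MTU on every bridge member port first
/interface ethernet set qsfp28-1-1 l2mtu=9092
# 2) then raise the MTU on those ports (must not exceed the L2MTU)
/interface ethernet set qsfp28-1-1 mtu=9000
# 3) finally set the bridge MTU (cannot exceed the smallest member L2MTU)
/interface bridge set bridge1 mtu=9000
```

Repeat steps 1 and 2 for every port that is a member of the bridge before touching the bridge MTU.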
Could you maybe provide your config for reference?
Also, how do I best test the performance? With iperf3 I now reach 30 Gbit/s, still a far cry from the 100 it should be able to do.
There is more to the config, but for cleanliness I removed most of the L3 setup, custom scripts, etc.
[note: parts of the config are related to RoCE, don’t get confused (; ]
Regarding your performance testing: make sure to test from endpoint to endpoint (which I guess you are doing with iperf3), and also make sure you are not running into a single-TCP-stream bottleneck, e.g. test with multiple parallel TCP connections (iperf3 -P 10 …). With that I do get close to 100 Gbit/s.
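For reference, a typical multi-stream test looks like this (the server address is a placeholder for your receiving endpoint):

```
# on the receiving endpoint
iperf3 -s

# on the sending endpoint: 10 parallel TCP streams, 30-second run
iperf3 -c 192.168.88.10 -P 10 -t 30
```

The summary line at the end reports the aggregate throughput across all streams, which is the number to compare against the link speed.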
Are you testing on Windows or on Linux?
I can't get more than 30 Gbit/s with iperf3 at its best, but when testing Samba on a ramdisk with multiple clients I can get 60 Gbit/s in total, so maybe it's a Windows issue?
LOL, Windows suxxx… well, the Windows client version. From a Windows Server to a Windows Server I can get 80-100 Gbit/s with 10 iperf3 streams and 9000 MTU, on the exact same machines.
Which Windows Server version did you test? It might be due to the default congestion control algorithm on older Windows Server versions (DCTCP) differing from the one on the desktop versions (currently CUBIC).
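If you want to check this, you can inspect and change the congestion provider from an elevated PowerShell prompt. A sketch (the Datacenter template name is an assumption; the templates present vary between Windows versions):

```powershell
# show the congestion provider for each TCP settings template
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider

# switch e.g. the Datacenter template from DCTCP to CUBIC
# (assumption: this template is the one your connections use)
Set-NetTCPSetting -SettingName Datacenter -CongestionProvider CUBIC
```

Re-run the iperf3 test afterwards to see whether the algorithm was the limiting factor.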