I set up a CRS518-16XS-2XQ to connect six ESXi servers for initial staging, using 14x 10GbE links. For staging I also use one 1G uplink to connect the staging notebook. The switch runs in bridge mode and has the latest stable firmware installed. I reset the switch beforehand and only configured the IP for management access. The rest is pretty much default (all ports are members of the default bridge).
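For reference, this is roughly how I verify the bridge membership and hardware-offload state on the RouterOS CLI (a sketch with default names; `bridge1` is the default bridge on my unit, your port names may differ):

```routeros
# list all bridge ports with their hardware-offload flag (H)
/interface bridge port print

# show only ports that are NOT hardware-offloaded -
# traffic on these would be forwarded by the CPU instead of the switch chip
/interface bridge port print where hw=no
```

All SFP28 ports show up as members of `bridge1` with hardware offload enabled.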
All server nodes connected via 10GbE can communicate with each other, but no packets are forwarded between the 10G and the 1G ports. I tried connecting the staging laptop to the management port and also using a 1G copper transceiver, with no success.
10G <> 10G ports: OK
1G <> 1G ports: OK
10G <> 1G ports: NOK
I know nobody would use a 1G uplink on a 10G switch in production; we only use this setup for staging.
Is this the intended behaviour of this switch model?
What switch are you using? The one you are talking about only has SFP28 and QSFP28 ports. Do you mean you are just running the SFP28 ports at 10 Gb rates?
Regardless, this switch has a 1 Gb link between the switch chip and the CPU, and then a 100 Mb management link per the block diagram.
So aside from the obvious fact that from SFP28 → management you have a maximum bandwidth of 100 Mb/s, you are likely overflowing the buffers with all the down-switching in the path, not to mention the anemic CPU.
Yes, I use 10G (SFP+) transceivers, not 25G. For 1G I tried (temporarily) using the mgmt port and also an SFP+/copper transceiver (like the MikroTik S-RJ10). I cannot even ping a host connected to the 10G ports from a 1G port. Even the 100 Mbit of the mgmt port would be fine.
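To narrow down whether this is an L2 forwarding problem, I checked the bridge's MAC table from the switch itself (a sketch; `192.168.88.10` stands in for one of the ESXi hosts, substitute your own addresses):

```routeros
# show learned MAC addresses and the bridge port each was learned on -
# if hosts behind the 1G port never appear here, frames are not reaching the bridge
/interface bridge host print

# ping an ESXi host on a 10G port directly from the switch
/ping 192.168.88.10 count=4
```

The MACs of the 10G hosts are learned fine; nothing is ever learned on the 1G side.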
Yes, both links come up. I can even ping from the SFP+/copper transceiver in port 14, for instance, to the host connected to the mgmt port. But not to the hosts connected via 10G fiber.