Well, the CRS504 is not designed for server rooms - it has all the wrong features. As I see it, it's made for a city/metro network.
Go buy white-box/bare-metal 100G switches if you need them for your servers. I don't see why MikroTik should make one.
Two VMware ESXi servers with two 100-Gig ports each (4 ports for redundant links)
Two NAS servers with two 100-Gig ports each (4 ports for redundant links)
Two 100-Gig uplink/downlink ports to other switches (2 ports)
So far, for a simple tiny network room of 2 servers, 2 NAS devices, and 2 uplinks/downlinks, we are already at 10 ports.
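For what it's worth, that port math can be tallied like this (the device counts are just the ones listed above, nothing official):

```python
# Port tally for the redundant setup sketched above (counts taken from the list).
devices = {
    "ESXi server": (2, 2),      # 2 servers, 2x 100G ports each
    "NAS": (2, 2),              # 2 NAS boxes, 2x 100G ports each
    "uplink/downlink": (2, 1),  # 2 links to other switches, 1 port each
}
total_ports = sum(count * ports for count, ports in devices.values())
print(total_ports)  # 10 ports needed, vs. the 4 on the CRS504
```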
IMO, a four-port 100-Gig switch is almost useless if you want proper network redundancy for more than 3 100-Gig devices.
I would not like that setup. I run something larger and plan to switch it over:
3 servers with 100G links.
The last port is either an uplink to a switch with 25G ports (and 2x 100G), or a breakout cable connecting a file server, a database server, and a backup server, with the last cable used as an uplink.
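As a sketch of that allocation (the port names and the 100G-to-4x25G breakout split are my assumptions, not a tested config):

```python
# Hypothetical layout of the 4 QSFP28 ports; qsfp28-4 carries a 100G -> 4x25G breakout.
layout = {
    "qsfp28-1": ["ESXi server 1 @100G"],
    "qsfp28-2": ["ESXi server 2 @100G"],
    "qsfp28-3": ["ESXi server 3 @100G"],
    "qsfp28-4": ["file server @25G", "database @25G", "backup @25G", "uplink @25G"],
}
endpoints = sum(len(v) for v in layout.values())
print(endpoints)  # 7 endpoints hang off 4 physical ports
```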
Two of those switches - you talk of redundancy, but then you go with only one switch?
That is enough to handle the needs of even a not-so-small company. Remember that the 3 servers (running virtualization) can be dual-socket EPYC with up to 6 TB RAM each. Right now I use dual-socket EPYC machines with 0.5 TB each.
Traffic to the dedicated database and backup servers is way lower than the cross traffic of the VMs - hyperconverged storage puts an insane load on the network. The backup does not have to deal with that.
So, the platform is perfectly capable of handling needs even most small companies cannot dream of - it is just tight: you lack the spare 100G ports to plug in another server during migrations etc.
I just find it odd that, given that the platform supports 6 ports, they decided to make a 4-port model. Maybe it's an experiment to keep costs down while they evaluate feasibility.
If you go that way and want redundancy (which totally makes sense there), you want 2 switches so your ESXi traffic does not break down when you update the firmware.