The connections between the various racks are already in place and cannot be changed (the number of fibers is fixed). The switches will be connected to PCs, and the servers directly to the core switches. There are various VLANs, but basically each switch serves a distinct room (classroom). Redundancy is not needed if giving it up gets me faster backbones.
Ah I see. If the inter-rack connections are fixed then there's not much remaining flexibility.
My main thought is that the CCR2216-1G-12XS-2XQ is an expensive and rather strange device that seems a little ahead of the curve. Only a couple of MikroTik devices actually support 25GbE, and notably that list does not include the CRS354 or CRS328. If this is going to be a 10GbE-only design and you only need L2 and L3 capabilities that can be hardware-offloaded, the CRS326-24S+2Q+RM seems like more than enough to handle the role.
The thing I'm most concerned about is how you're actually going to use most of this bandwidth in a layer-2 environment. Unless you're going to be on the bleeding edge and use the MLAG support in v7.x.x, a lot of the links in this diagram are simply going to have to block. Assuming the 2x100GbE LAG in the original diagram is the preferred path closest to the root bridge, then either half of the 10GbE links to the racks will have to block, or all of the 40GbE links within the racks will have to block, or some combination of the two depending on the specific situation. Racks A and D will end up in the bizarre situation where one 40GbE link and one 10GbE link have to block.
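To make the blocking behavior concrete, here's a rough sketch you can play with. It's a simplified model, not real RSTP (no bridge election, no per-port tie-breaking): it just runs shortest-path-by-cost from an assumed root bridge, and any link not on a shortest path is the kind of link spanning tree would put into blocking. The switch names, link speeds, and cost values are my own illustrative assumptions about your diagram, not your actual config.

```python
import heapq

# Simplified RSTP-style path costs: faster link = lower cost.
# These numbers are illustrative, not the 802.1 recommended values.
COST = {100: 200, 40: 500, 10: 2000}

# Hypothetical subset of the diagram: two cores joined by the 100GbE LAG,
# two racks each with a pair of edge switches, 10GbE uplinks to the cores,
# and a 40GbE link inside each rack.
links = [
    ("core1", "core2", 100),
    ("core1", "rackA-sw1", 10),
    ("core2", "rackA-sw2", 10),
    ("rackA-sw1", "rackA-sw2", 40),
    ("core1", "rackD-sw1", 10),
    ("core2", "rackD-sw2", 10),
    ("rackD-sw1", "rackD-sw2", 40),
]

def spanning_tree(root, links):
    """Dijkstra by path cost from the root bridge: links on a shortest
    path keep forwarding, every other link ends up blocking."""
    graph = {}
    for a, b, speed in links:
        graph.setdefault(a, []).append((b, COST[speed], (a, b)))
        graph.setdefault(b, []).append((a, COST[speed], (a, b)))
    dist, parent_link = {root: 0}, {}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost, link in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                parent_link[nbr] = link
                heapq.heappush(heap, (nd, nbr))
    forwarding = set(parent_link.values())
    blocked = [(a, b) for a, b, _ in links if (a, b) not in forwarding]
    return forwarding, blocked

forwarding, blocked = spanning_tree("core1", links)
print("blocked links:", blocked)
```

With these assumed costs, both 40GbE intra-rack links come out blocked (the 100GbE LAG plus the 10GbE uplinks form the tree), which is exactly the "expensive links sitting idle" problem: you've paid for 40GbE that carries no traffic until a failure.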
If you really want a lot of this bandwidth to be useful, I recommend considering a more regularized design where the edge switches connect to a layer-2 aggregation layer or a collapsed layer-2/layer-3 aggregation rather than everything sort of connecting to everything else.