Are these switches and routers all located together, or farther apart from each other? If they're together, you might be able to do something with 10Gbit DAC connections and build a form of stack with the hardware listed.
CRS switches can’t do STP or RSTP (yet?) unless you take ports out of the switch group and put them on a bridge, which will increase the load on the CPU. The CRS224 has a 400MHz CPU, which will prove a bottleneck in such a configuration.
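For reference, that bridge workaround looks roughly like this (RouterOS v6 syntax from memory, so double-check parameter names on your version; `ether2`/`ether3` are placeholder ports):

```
# Take the ports out of the hardware switch group
# (pre-6.41 style: clear master-port so frames go to the CPU)
/interface ethernet
set ether2 master-port=none
set ether3 master-port=none

# Create a software bridge running RSTP and add the ports to it
/interface bridge
add name=bridge1 protocol-mode=rstp
/interface bridge port
add bridge=bridge1 interface=ether2
add bridge=bridge1 interface=ether3
```

Keep in mind that every frame crossing that bridge is now forwarded in software, which is exactly why the 400MHz CPU becomes the bottleneck.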
In general, I would look at a 3650 or 2960 for switching and MikroTik for routing. It depends a lot on what you actually do with the switches (are they near each other, do you stack them, which features are in place, etc.). You may also want to check the quality of the copper cabling when moving to gigabit.
Some, though not all, of the features you mentioned are already supported in CRS switches and RouterBoard switch chips. Hybrid ports (tagged and untagged VLANs on the same port) are supported on all RouterBoard switch chips, and the Access Control List in most RouterBoard switch chips provides enough options to achieve the same functionality as DHCP snooping, ARP guard, and other port-security features.
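As an illustration, a hybrid port on an Atheros-based RouterBoard switch chip can be sketched something like this (v6 `/interface ethernet switch` syntax; the port names and VLAN IDs are made up, and `vlan-mode`/`vlan-header` behavior differs a bit between chips, so verify against the wiki for your board):

```
# ether2 is the hybrid port: untagged VLAN 20 plus tagged VLAN 30
# ether10 is a plain trunk carrying both VLANs tagged
/interface ethernet switch port
set ether2 vlan-mode=secure vlan-header=leave-as-is default-vlan-id=20
set ether10 vlan-mode=secure vlan-header=add-if-missing

# Permit both VLANs on those ports in the switch-chip VLAN table
/interface ethernet switch vlan
add switch=switch1 vlan-id=20 ports=ether2,ether10
add switch=switch1 vlan-id=30 ports=ether2,ether10
```

And for the DHCP-snooping-style protection, a switch ACL rule that drops rogue DHCP server replies arriving on an access port could look roughly like this (again an assumption-laden sketch, not tested):

```
# Drop UDP frames with source port 67 (DHCP server) coming in on ether2
/interface ethernet switch rule
add switch=switch1 ports=ether2 mac-protocol=ip protocol=udp src-port=67 new-dst-ports=""
```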
I have to agree that MikroTik products aren’t quite up to snuff for what one would typically expect in a business deployment. Sure, there are some features that provide similar functionality to what one finds in the higher end Cisco/Nortel/etc. switches, but they’re not quite the same, and there’s definitely a learning curve to figure out how to use them effectively.
I’d love to see MT come up with a true edge switch, i.e. 48x 10/100/1000 ports with 4x 10Gb or 2x 40Gb uplinks. I’d even be happy with 24x 10/100/1000 ports if I could get 4x 10Gb ports - two for uplink, two for stacking. To complement this, I’d also love to see an aggregation/core switch, something like a CRS with 24 or more 10Gb ports and a single 10/100/1000 port simply for management purposes. This aggregation switch would then have at least 4x additional 10Gb or 2x 40Gb uplink ports to go to a core router. I realize that 40Gb is pretty pricey, but MT has proved that 10Gb doesn’t have to be. This would also allow the flexibility to connect high-throughput servers on 10Gb ports, such as a VMware host with multiple guest OSes each requiring a full 1Gb of bandwidth, or a SAN that could provide multiple 1Gb streams to several clients simultaneously.
The upcoming CCR1072-8s+ will be a nice unit, but for those of us who like redundant data paths, it restricts us to running at most 4x CRS226s, which uses up all the 10Gb ports on the CRSes, so there’s no possibility of stacking. Of course, the lack of STP/RSTP makes multipathing a tricky endeavor, so that would need to be added to the ROS code as well.
Don’t get me wrong, I like MT equipment. I think it’s great for prosumers like me. I just think they fall a little short on a few features and configurations that, if addressed, would result in a huge increase in demand.