…using the same chipset that is in the high-end routers/switches now (i.e. Marvell Prestera).
The CRS504-4XQ-IN is just a little too limiting. Yes, four ports are good - but you lose one port for an uplink, and possibly two for a chain. That leaves 2-3 ports usable. On a 6-port switch - which the chipset supports - that is not 2-3 usable ports but 4-5, up to double the number of effective ports.
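To make the arithmetic above concrete, here is a trivial sketch using only the numbers from this thread (one port lost to a plain uplink, two lost when chaining):

```python
def usable_ports(total: int) -> tuple[int, int]:
    """Return (worst, best) usable ports on a switch with `total` ports:
    worst case loses two ports to a chain, best case loses one to an uplink."""
    return total - 2, total - 1

print(usable_ports(4))  # (2, 3) - the CRS504 case
print(usable_ports(6))  # (4, 5) - what the chipset could support
```

Going from 4 to 6 ports does not add 50% capacity in practice - it roughly doubles the ports left over after infrastructure overhead.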
It is interesting to see that while the 98DX8525 can in theory do 6 x 100G, that was not implemented - maybe it is a power/size requirements thing,
or maybe product segmentation, to differentiate it enough from a possible future 8 x 100G device.
4 x 100G fits many scenarios; you just need to add a secondary switch below the CRS504 for access/distribution tasks. I think the CRS504 was simply not designed to be an all-in-one solution.
That said, I have this problem: I am going to switch soon to a 4x100, and I am running a 3-machine cluster, so I can go 3x100 plus one 4x25 (i.e. fan-out) cable for uplink - but that leaves me ZERO reserve capacity. I would really love a little more headroom - I would really love the 6x100. It would be a small step; I would expect the next larger step to be a 16x100, once that becomes available chip-wise.
I could use some 100-Gig switches with 8 to 64 ports.
I suspect Mikrotik may not have anything to address the need for NOC networks needing 100-Gig core switches anytime in the near future.
I am with you - I could use some too, but you are likely right: there simply is no ready SoC to put in. What is there is proprietary and not available to MikroTik.
Hence my asking for the 6-port. The switch chip they use can actually handle 6x100G, and given that they use this chip already, they "simply" have to pack it into a proper case. There is no question whether they can do it - they do, just not in the form of 6x100.
I think something like one of the following would be greatly wanted and used:
Add some of the popular 100-Gig switch drivers into ROS x86, then make it a network operating system.
Add some of the popular 100-Gig switch drivers into SwOS on x86, then make it a network operating system.
Presto - MikroTik would have an ONIE (Open Network Install Environment) x86 switch operating system that answers the need for many 100 to 400-Gig ports (8 to 64 ports), and that can run on many different and popular ONIE switches.
I hope MikroTik is working on an 8 x 100G switch, but it will take months to come - maybe not until next year - and of course it will be far more expensive.
Additionally, an 8 x 100G switch puts MikroTik in a predicament, almost obliging it to release a possible CCR2316 with 4 x 100G + 12 x 25G.
So we are talking about a completely new line of products, and new challenges for MikroTik - not something trivial like adding two 100G interfaces to an existing switch.
Unless you have a chipset for MikroTik, they CANNOT be working on an 8x100G - the chip they use right now is, as I keep saying, limited to 6x100, and I am not aware of a larger low-cost chip.
Also, they have drivers etc. for this chip nailed down - including hardware-level NAT, actually, if I read the manual correctly.
AND they have motherboards that use the 24 x 25G lanes the chip has, so "all" they have to do is make mobos that expose them as 6x100.
Not free, but nothing here that they have not already solved.
Contrary to your proposal of a totally new chip - they do not do chips at all, so far.
Most - if not all - switch chip manufacturers have ready-to-go drivers. Getting those drivers into an operating system such as MikroTik's Linux-based ROS might be as easy as adding the driver source code to the existing ROS Linux source tree and compiling. There might be some minor tweaks, such as to scripts handling the number of ports and port types, if they do not auto-detect port counts and port speeds.
So now you may understand that maybe they do not want to make it, beyond the question of whether it can be made or not.
Surely, if you put in an advance order for 10,000 units of that hypothetical product, they will think twice about it - but not just for your personal lab needs.
This is not the first time somebody on this forum has requested a new device that fits their personal needs. I am sure most released devices come either from a well-established need among many customers, or from a specific need of a small number of customers who order a lot of units. That is the way it works: economy of scale is only achieved by selling many units of a product, so you ensure strong demand for it, keep an eye on your competition, and:
make sure you do not create products that cannibalize each other.
I think today a 6 x 100G switch would cannibalize the existing 4 x 100G, so until the 4 x 100G scales its sales up to a certain point, a 6 x 100G will not see the light. Maybe it already exists, but it is a business decision not to release it.
How many 100-Gig switches are all of the world's network server rooms planning to order over the next 5 years?
How many of those 100-Gig switches will have more than four ports?
Does MikroTik have any 100-Gig switches with more than four ports?
A MikroTik ONIE operating system (ported from x86 ROS and/or SwOS) could get MikroTik into many new carrier-class enterprise L2/L3 markets, because ONIE switches (basic white-box / bare-metal switches) have already been on the market for roughly a decade. Bare-metal ONIE 32-port 100-Gig switches can be found for around $8k to $16k, and double that for 64-port 100-Gig switches.
So, without a MikroTik many-port 100-Gig switch, how many of those server rooms will settle for MikroTik 10-Gig networks, where each switch has a maximum of four 100-Gig interfaces?
Two VMware ESXi servers with two 100-Gig ports each (4 ports for redundant communications)
Two NAS servers with two 100-Gig ports each (4 ports for redundant communications)
Two 100-Gig uplink/downlink ports to other switches (2 ports)
So far - for a simple tiny network room of two servers, two NAS devices, and redundant uplink/downlink - we are already at 10 ports.
IMO, a four-port 100-Gig switch is almost useless if you want proper network redundancy for more than three 100-Gig devices.
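The tally above can be written out as a quick sketch (device counts and per-device port counts are just the ones listed in this post):

```python
# Port tally for the minimal redundant server room described above.
devices = {
    "ESXi servers": (2, 2),   # (count, 100-Gig ports each) - dual-homed
    "NAS servers":  (2, 2),   # dual-homed as well
    "uplinks":      (1, 2),   # two redundant uplink/downlink ports
}
total = sum(count * ports for count, ports in devices.values())
print(total)  # 10 - more than two 4-port switches (8 ports) can provide
```

Even this tiny room overflows a redundant pair of 4-port switches, which is the point being made here.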
I would not like that setup. I run something larger and plan to switch that over.
3 servers with 100G links.
The last port is either an uplink to the switch with 25G ports (and two 100G) or a breakout cable, connecting a file server, a database server, a backup server, and - on the last strand - an uplink.
Two of those switches, that is - you talk of redundancy, but then you go with one switch.
That is enough to handle even the needs of a not-so-small company. Remember that the three servers (running virtualization) can be dual-socket EPYCs with up to 6 TB RAM each. Right now I use dual-socket EPYCs with 0.5 TB each.
Traffic to the dedicated database and backup servers is way lower than the cross-traffic of the VMs - hyperconverged storage is insanely heavy on the network. The backup does not have to deal with that.
So, the platform is perfectly capable of handling needs most small companies cannot even dream of - it is just tight: you lack the 100G ports to plug in another server during migrations, etc.
I just find it odd that, given that the platform supports six ports, they decided to make a 4-port. Maybe it is an experiment to keep costs down while they evaluate feasibility.
If you go that way and want redundancy (which totally makes sense there), you want two switches, so your ESXi traffic does not break down when you update the firmware.
Re: … If you go that way and want redundancy (which makes sense there totally) you want 2 switches …
Just now putting in the purchase request for two 32-port 100-Gig switches running SONiC Enterprise. Would have been nice to have MikroTik as a possible switch solution - but I can't wait any longer. If they work well, then I will be getting several more.
As I said, it’s not targeted at VMware/hypervisor setups. Anything less than 24 ports in a datacenter doesn’t make sense. And for redundancy you want two switches at minimum.
What if… just bear with me for a sec, but what if RouterOS 10 had a SONiC installer/image and you could install RouterOS on a white-box open-networking switch?
Why wouldn’t hardware/software vendors want to use standards for routers/switches, if the software could take advantage of the switch ASIC’s features?
I suspect that there are no near-future tik plans for a many-ports 100-Gig switch ( or for WhiteBox switch support ).
Which is why I have some Sonic 32-port 100-Gig switches on the way to my NOC ( and several more soon after that ) - I can’t wait any longer for tik.
** Once you go 100-Gig , all other slower switches are slooowwwww — especially when running NFS or iSCSI or SMB or high-speed data backups/transfers **
How would you rate SONiC vs Cumulus? I use some Mellanox/Nvidia switches with Cumulus but have wondered how SONiC would be to admin.