Just PoE powering. No need for another PSU.
"Maybe make an expander module that can be mounted on the front or back of the rack, via a fiber-optic cable, with power connected back to the switch. Would make it easy to have top-of-rack front and back switch ports."
And airflow from left to right. Hmm... better to make front and back ports.
Yeah, I'm waiting for the CRS326-24S+2Q+ because the current CRS317 also has pretty low port density. Not even going to ask for 48 SFP+ ports in a single switch, because I may grow old before it happens...
"Personally, I would be much happier if we finally get the CRS354-48P-4S+2Q+, which was presented almost a year ago. I actually had to buy a switch from a different company because we needed to clean up a rack and there is not a single 48-port switch in MikroTik's range. :'( The largest is 24 ports and that is not enough nowadays. Instead of giant leaps, let's increase the density in the usual way."
How about... a vertical switch >_> only 5 cm deep, so that you could mount it behind normal equipment (especially shorter gear), somewhat like a giant rackmount PDU. Standard 1U 48-port switches are already a mess when the cable arrangement is not managed.
That layout would be a pain in the.......rack....
A front-side high-density layout would be OK for 3 or 4 rack units, but a lot of space would be wasted in depth.
You can mount 19" equipment in front and back of a rack, the only notable problem can be air-flow if two devices share same U-position and force air towards their corresponding back sides (one against the other) ... not sure how would vertical switch make any difference in this regard?how about... vertical switch >_> like only 5cm deep so that you could mount it behind normal equipment (especially shorter ones), somewhat like giant rackmount PDU.
A vertical switch would technically use 0U, since it could be mounted behind normal equipment just like PDUs are usually mounted behind servers in the same U row. The only downside would be airflow obstruction, since it would realistically put a metal wall behind the equipment's exhaust.
True, but from experience I can tell you that servers have insanely overkill cooling for extreme worst-case scenarios. Such an obstruction probably wouldn't change much (especially since you probably don't need to cover half of the back door with switches, just one or two 2U switches; I estimate the CRS326 depth at a bit above 2U height). I regularly replace those 15k RPM blowers with silent 5k fans that have significantly lower airflow, and they keep up. It's just that in the server world 40 degrees in the chassis means "overheating", whereas gaming desktops with noise-optimised cooling usually run 50-60 degrees ambient in the chassis. You can compare it to the CRS317 vs the CRS326: the 317 enables its fans above 40 degrees, while the 326, being passively cooled, is totally fine with 70+ degrees on the CPU. It's not that the 317 has a lower tolerance, it's just configured that way following the "better safe than sorry" rule.
You're right, mounting two regular pieces of equipment in the same U-position is only possible for short equipment, and that's what I had in mind.
But then I'd never mount just anything behind a full server chassis where it could obstruct the warm-air exhaust... a 1U server can easily consume 500 W+ (and generate just as much heat) and I wouldn't like to dampen the airflow cooling that oven.
Not a bad idea, but only if mounted IN FRONT of the other equipment.
+1, but it'd probably need switch clustering to be implemented in the first place, so I guess it'd be hard to do with the current RouterOS capabilities.
Don't think this concept will work. What I would like to see is a "port expander":
a Master Switch with all the intelligence and 1 or 2 expand ports
an Expander with 24 or 48 ports, without intelligence
Just expand the port count without the need to manage another switch (Nexus-like).
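For what it's worth, that master-plus-dumb-expander arrangement is roughly what IEEE 802.1BR calls a controlling bridge with port extenders. Here is a toy sketch of the single-management-view idea; the names and port counts are purely illustrative, not any vendor's implementation:

```python
# Toy model of the "port expander" idea (controlling bridge + dumb port extenders):
# the expander has no management plane of its own; its ports simply show up as
# extra ports on the master switch. Names/numbers are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Expander:
    name: str
    port_count: int            # e.g. 24 or 48 unmanaged ports

@dataclass
class MasterSwitch:
    name: str
    local_ports: int
    expanders: list[Expander] = field(default_factory=list)

    def attach(self, expander: Expander) -> None:
        """Plug an expander into one of the master's expand ports."""
        self.expanders.append(expander)

    def all_ports(self) -> list[str]:
        """Single management view: local ports plus every extended port."""
        ports = [f"{self.name}/ether{i + 1}" for i in range(self.local_ports)]
        for exp in self.expanders:
            ports += [f"{self.name}/{exp.name}-port{i + 1}" for i in range(exp.port_count)]
        return ports

master = MasterSwitch("crs-master", local_ports=24)
master.attach(Expander("expander1", 48))
print(len(master.all_ports()), "ports managed from one device")
```

The point is that the expander contributes nothing but ports; all configuration lives on the master.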
I thought of rear mounting since we were talking about environments with long servers, and servers always have rear-facing network cards. For example, at work we have racks where all the cables are at the back of the rack, and in the end the switch is the only device with front-facing ports; it's always a pain since we constantly struggle to trace cables to the back of the rack.
Cable management must be on the front side of the rack to avoid headaches when maintaining patch-cord connections.
That would add one additional "0" to the price. I've seen the price difference between modular hardware and the non-modular equivalent from other vendors. It's still hard for me to understand why anyone would buy modular hardware...
"I had suggested in a previous similar post to build a chassis with blades. Could be switch blades, routing blades, whatever port configuration/speed. Could even have a fan blade, just in case. Hot-swappable power supplies. One blade is old? A faster one comes out? No problem, swap it."
Agreed, but it depends on the clients and their needs. Cisco, or a more extensive MikroTik range? Some might prefer the 2nd choice.
From what I understand, actual stacking is a work in progress. IIRC someone mentioned after the MUM that the QSFP+ ports in the CRS326-24S+2Q+RM are supposed to be used for more advanced stacking/clustering (not just uplinks).
"Other idea I just had: stackable switches that get managed as a single device. Could be a good compromise."
"Personally, I would be much happier if we finally get the CRS354-48P-4S+2Q+, which was presented almost a year ago."
They are on the site.
+1
"Small DIN switches... wide-input DC power. It'd be nice to be able to mount small switches on walls in closets, cabinets, backboards, industrial situations, etc."
Good idea!
A lot of industrial and DIN-mounted gear is horrible from a software and management perspective. It is mostly built to last, not to be flexible. And you can't build a big network with just plain switches.
"I've used them in a few buildings and while the idea and the hardware are great, their management software (nexman) is kind of mediocre."
I think it would be better if the switches were sideways and in a tray: you pull out the tray and "easily" work on the cabling. This would require some extra decimetres of cabling and a cable holder inside the tray. It would still be a bigger hassle compared to traditional switches.
This is just a crazy theoretical concept to spark a discussion. What would you guys think about a switch that is aligned like this, with ports facing upwards, taking up 2U of space? The device would be able to slide out to access the ports. Many more ports than on a regular 2U switch.
Image is pure concept art, nothing real.
Sadly, breakouts of more than 4 cables/ports are only supported starting with 400G ports, 400GBASE-SR16 (802.3bs), 400GBASE-FR8 (802.3bs), 400GBASE-LR8 (802.3bs), and 400GBASE-ER8 (802.3cn) being the relevant standards.
"Maybe add some MTP-24 ports for use with these splitters?"
Power supply could happen via PoE-PD, possibly with PoE-passthrough.
Could also work with ... fibre-to-fibre.
The "port expander adapter", with its 803.1br switch chip, will need some power, being an active cable. Since you'd want 10GBASE-T on the side facing the switch, you're bound to 802.3bt-PD (power delivery) for the port to be usable as "PoE-in".Took slightly out of context on purpose ... but how would PoE happen in this case? ;-)Power supply could happen via PoE-PD, possibly with PoE-passthrough.
Could also work with ... fibre-to-fibre.
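Just to put some rough numbers on that power question (a back-of-the-envelope sketch; only the 802.3bt PD limits are standard figures, the per-port and chip wattages below are assumptions):

```python
# Rough PoE budget check for a hypothetical 24-port "port expander" powered as an 802.3bt PD.
# Only the PD input limits come from 802.3bt; every other figure is an assumed estimate.

POE_BT_TYPE3_PD_W = 51.0   # max power available at the PD, 802.3bt Type 3 (Class 6)
POE_BT_TYPE4_PD_W = 71.3   # max power available at the PD, 802.3bt Type 4 (Class 8)

ports = 24
watts_per_gige_phy = 0.7   # assumption: copper GigE PHY power, per port
switch_chip_w = 5.0        # assumption: switch/extender silicon + board overhead
uplink_10gbaset_w = 3.5    # assumption: 10GBASE-T PHY facing the master switch

total_w = ports * watts_per_gige_phy + switch_chip_w + uplink_10gbaset_w
print(f"Estimated expander draw: {total_w:.1f} W")
print(f"Fits 802.3bt Type 3 budget ({POE_BT_TYPE3_PD_W} W): {total_w <= POE_BT_TYPE3_PD_W}")
print(f"Fits 802.3bt Type 4 budget ({POE_BT_TYPE4_PD_W} W): {total_w <= POE_BT_TYPE4_PD_W}")
```

Under those assumptions a plain, non-PoE-passthrough expander fits comfortably within a single 802.3bt port's budget; any PoE passthrough would obviously need a real PSU.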
+1
"Small DIN switches... wide-input DC power."
Re cooling:
"I think the concept has merit but I would worry about cooling. The application of this high-density switch worries me due to a single point of failure."
Why? It is quite easy to put a stronger exhaust fan in and use the PSU as part of the ventilation system. If you are worried about wear and tear on the PSU fan, just make it fanless and use a shroud to guide the air from the discrete internal fans, just like is done with CPU coolers in 1U chassis.
In my opinion, a router/switch with an internal power supply will always run warmer/hotter than an identical device with an external DC power supply.
But You talked about cooling!
Well - yes --- it is always possible to do some fan modification/changes to get the type of cooling you desire.
For me, the issue is not cooling.
For me, the issue is getting up to or greater than 7 days of backup battery run time when the power goes out.
Typically, a 110 VAC UPS powering some MikroTik APs and a MikroTik switch ends up consuming more power from the batteries just to run the UPS itself. What is the cost of an external 110 VAC UPS, plus the additional batteries, able to run half a dozen MikroTik devices on battery power for about a week without utility power?
By staying DC you avoid the inefficient battery-to-AC inverter step to get 110 VAC, followed by other 110 VAC power supplies to get back down to board-level DC power.
MikroTik does have some optional DC replacement power supplies, made by MikroTik to replace a MikroTik 110 VAC power supply. However, those power supplies are not able to power a 24-port (or larger) all-PoE switch. And the power supply for something like their 48-port PoE switch with 4 SFP+ and 2 QSFP+ ports has two fans dedicated just to cooling the AC power supply and another two fans to cool the motherboard.
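A back-of-the-envelope sizing of that week-of-runtime point (the load, overhead and efficiency figures below are assumptions for illustration, not measurements):

```python
# Rough battery-bank sizing for ~7 days of runtime, comparing an AC UPS path
# (battery -> inverter -> AC PSU -> board DC) against powering the gear from DC directly.
# All loads and efficiencies below are assumptions.

hours = 7 * 24
load_w = 60.0                 # assumption: half a dozen small MikroTik devices
inverter_eff = 0.85           # assumption: battery-to-110VAC inverter efficiency
acpsu_eff = 0.85              # assumption: 110VAC-to-board-DC supply efficiency
ups_overhead_w = 15.0         # assumption: power the UPS burns just to run itself
battery_v = 24.0              # nominal battery bank voltage

# Energy drawn from the batteries over the week for each approach:
ac_path_wh = (load_w / (inverter_eff * acpsu_eff) + ups_overhead_w) * hours
dc_path_wh = (load_w / 0.95) * hours   # assumption: ~95% efficient DC-DC conversion

print(f"AC UPS path: {ac_path_wh / 1000:.1f} kWh (~{ac_path_wh / battery_v:.0f} Ah at {battery_v:.0f} V)")
print(f"Direct DC:   {dc_path_wh / 1000:.1f} kWh (~{dc_path_wh / battery_v:.0f} Ah at {battery_v:.0f} V)")
```

Under those assumptions the AC round trip plus the UPS's own overhead costs roughly half again as much battery capacity for the same week of runtime, which is essentially the argument for staying DC.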
"In my opinion , a router/switch with an internal power supply will always run warmer/hotter than an identical device with an external DC power supply. "
Yes, I agree with you: it would be great to be able to power it directly from DC. I think Mikrotik should do a DC-DC PSU, just so we could use it with more devices.
But these PSU can't be used on all models with an internal PSU! One thing I think they should do is exactly the same thing the ATX standard did for PC PSUs: modularity. Even if they did one standard to hot swapabble and another to "normal" ones. This would be great.
I assume you have seen some of my recent posts on the DC mods I've done on the 24-port PoE and 48-port PoE devices (converting them from 110 VAC to 24 VDC). I know they run cooler now, because there are no longer any internal AC power supplies - but I did keep the power-supply fans connected, just because they were already there.
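For anyone considering a similar 24 VDC conversion, the main design constraint is input current rather than heat. A rough sanity check (the load figures are illustrative assumptions, not measurements of any particular model):

```python
# Rough input-current estimate for a switch converted to a 24 VDC supply.
# Load figures are illustrative assumptions, not measurements of any specific model.

def input_current_a(load_w: float, supply_v: float, conversion_eff: float = 0.9) -> float:
    """Current drawn from the supply feed for a given board + PoE load."""
    return load_w / (supply_v * conversion_eff)

base_load_w = 40.0        # assumption: switch electronics only
poe_load_w = 300.0        # assumption: delivered PoE load across the ports

for label, load in (("no PoE load", base_load_w), ("with PoE load", base_load_w + poe_load_w)):
    print(f"{label}: {input_current_a(load, 24.0):.1f} A at 24 VDC "
          f"vs {input_current_a(load, 110.0):.2f} A at 110 VAC")
```

At 24 V the feed wiring, connectors and fusing have to handle several times the current of a mains feed for the same load, which is worth keeping in mind before swapping the PSU.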
https://mikrotik.com/product/rb5009ug_s_in
Although higher port density would be nice, the design doesn't seem feasible to me. In my opinion you should rather focus on new RB or CCR/CRS models which can be dual-mounted in 1U. I definitely see a market for such devices in cases where redundancy is required but budget and space are limited (for example on customer premises, branch offices, etc.).
An RB4011 with a smaller width and two SFP+ ports instead of one would be perfect:
- One fiber (ISP A / ISP B) to each RB as upstream
- One fiber between both RBs, for example for running iBGP
- 8-12 copper ports for attaching customer switches, etc.
That would be a CRS328-24G-4S+RM.
"I would like to see a CRS326-24G-2S+RM with 4 SFP+ or 2 QSFP+ cages. This would close the gap between the CRS326-24G-2S+RM and the CRS326-24S+2Q+RM."
It's creative, but I'm not sure how useful it would be when actually mounted in a rack... e.g. "able to slide out to access the ports" seems like a cabling nightmare.
If you're looking to get density INSIDE a rack, perhaps a "double-ended" switch with Ethernet ports on the "front" and the "back" (power in the middle, accessed from the side?). That might get more density per U, and it would solve an actual issue I run into: some rack equipment uses "front" Ethernet ports while other gear has them on the "back". A switch that uses both sides might be more useful than trying to adapt a rack to top-mounted Ethernet ports.
"What about 3 rows in 1U?"
Or at least 5 rows in 2U.
https://www.youtube.com/watch?v=5kILCRsachk
Today, I need/want 4 to 8 many-port 100-Gig switches (2- and 4-port 100-Gig switches do not even come close to what I now need).
In the next few years we will be wanting 200 Gbit/s QSFP56, then a few more years and it will be 400 Gbit/s QSFP-DD. The demand for bigger and bigger links is growing.