It is unfortunate that there are no clear images of the sides, but it is possible that the screw holes for the mounting brackets are symmetrical, so you could mount the whole unit backwards.
Of course you should not do that in a cold aisle/hot aisle server room environment.
That would be an easily solvable mechanical problem (a drill and a few self-tapping screws, or proper threading of the drilled holes with a tap). The real issues, IMHO, are the space needed for the IEC power connectors[1] (if the rack has a cover/door, some racks place it very near the front panel of devices) and the fans, which would blow air "the wrong way", taking air in from the front instead of the back.
There is no “stock” option for reverse mounting. I would have preferred that.
It comes with those rear support rails, but they’re barely long enough for a 24”-deep quad-post rack. I have to use a shelf to support mine in my 32”-deep racks.
RouterOS on an RDS2216 is the same as on a CCR2216. You can choose whether or not to enable rose-storage, container, or any of the other packages. The RDS2216 comes with twice the amount of RAM, the same processor, and a better collection of ports for a homelab than the CCR2216.
The only reason I would purchase a CCR2216 over an RDS2216 is if I needed either the 25Gbps switching (in which case you may as well spend the money on a CRS520) or the 200-400Gbps of layer 3 switching (HW offload). In my setup, I usually do the router-on-a-stick configuration with routers LAG’d into an MLAG stack and use VLANs as cross connects between all the routers, switches, and servers.
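For illustration, the router side of that setup might look something like this in RouterOS (a minimal sketch; the interface names, VLAN IDs, and addresses are all made up):

```
# LACP bond towards the MLAG switch stack (hypothetical ports)
/interface bonding
add name=bond-core mode=802.3ad slaves=sfp28-1,sfp28-2 \
    transmit-hash-policy=layer-3-and-4
# VLANs on top of the bond act as point-to-point cross connects
/interface vlan
add name=vlan-xc-server vlan-id=101 interface=bond-core
add name=vlan-xc-peer vlan-id=102 interface=bond-core
# /30 addressing on each cross connect
/ip address
add address=10.0.101.1/30 interface=vlan-xc-server
add address=10.0.102.1/30 interface=vlan-xc-peer
```

Each VLAN then behaves like a dedicated wire to one router, switch, or server, while physically everything rides the same LACP bond.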
You are right, the $800 price difference is significant, and I can imagine that being the deciding factor in some cases. A few other things that people should also consider:
the 12 (vs 4) fans on the RDS2216 will create extra noise (OK for a data centre, not great for a homelab)
I happened to stumble across a 6-month old CCR2216 on eBay UK for just under $1500. Keen to put it through its paces and see how well it works out in practice.
As for the cables: once every few years, so definitely not an issue.
Well, the main problem with the RDS2216 (unless that has been fixed) is that it only allows running simple containers for a single function, not an entire Linux machine as a VM.
To consider this as a replacement for an existing VMware or Proxmox server, that is really a requirement.
Is there a container that allows you to host a VM in hardware virtualization mode, either for an ARM64 machine or (even better) for an AMD64 machine?
That’s an excellent point! The extra amount of RAM is most welcome.
As for the mix of ports: SFP+ modules will work the same in SFP28 ports, and the 2 x 10G RJ45 can be reproduced with adapter modules that those who buy this hardware most likely already have as spares and/or lying around from other projects.
To be honest, the deciding factor was finding a lightly used CCR2216 on eBay for half the original price. The switch chip is important (this is where CCR2004 disappoints the most) and I need a device that can handle 25Gbps WAN, as well as the near-future 100Gbps WAN.
Now that I took a proper look at that CRS520, I can imagine it superseding my CRS312 in a few years. Thanks for the tip!
Your setup sounds like what I am slowly building towards. I think that it will take me at least another 5 years to justify the hardware investment, but the writing is on the wall.
Is there a link that describes your setup in more detail where I can learn more about it?
The container runtimes that are typically used for this type of isolation are gVisor, Kata Containers, or Firecracker. You still get containers, but with sandboxing guarantees. They all have tradeoffs, and the company I work for currently leans on gVisor (runsc), which is made available as part of the install. All of this is Linux-based, so in theory it should work on RouterOS, but there are no plans to integrate it. The focus is managed K8s distributions (EKS, GKE, AKS, OKE, etc.) and, soon, bare metal Linux instances, but not router hardware.
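As a concrete example of the non-RouterOS route: on a plain Linux Docker host, registering runsc as an extra runtime is just a daemon config entry (the binary path below assumes a standard gVisor install; adjust to yours):

```
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

With that in `/etc/docker/daemon.json` and the daemon restarted, `docker run --runtime=runsc ...` starts the container under the gVisor sandbox instead of the default runc.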
Going back to my initial comment, I want my router to rip packets (pun intended), and my Threadripper/EPYC to handle the rest. My latest addition to the now sprawling homelab is a 9970X (same as Linus’ PC), which right now is the perfect hardware to test high-performance containers. With the recent addition of 8 x 4TB NVMe PCIe 4.0 x4 flash storage, the CPU & storage capabilities of an RDS2216 look similar to a Raspberry Pi in comparison. The RDS2216 is a strong all-in-one unit, and I pick the 9970X as my fast containers & storage host. I am also running a slower 5700X & i9-9900K, but that’s mostly backup (because 2 of everything).
OK, I mentioned it because we have an aging ESXi server running VMs with things like the Unifi network controller app, RADIUS servers, an intranet server with a MySQL database, and some monitoring, each as a Linux installation in a VM.
It would be nice if, instead of buying a new Dell server, we could have one or two RDS2216 units do all that. But it would be too much effort to research how to repackage all of that into containers; at most I would want to do a reinstall of Debian for ARM64 as part of the migration. That seems to be beyond what the RDS2216 can do, so we will probably use Proxmox instead.
You could try replacing the RADIUS servers with the RDS and bring more value to the device.
I’m in a similar situation, where I have to decide which device to use: a CCR2216, an RDS, or staying with my Unifi setup.
Unfortunately I already ordered the 25Gbit service. Now I have to decide whether to change it to 10Gbit and stay with my Unifi setup, where I have a Dream Machine SE, a 24-port PoE switch, an access point, and some other devices.
I also thought about buying a CCR2004 and using it as a modem only.
Because a vendor change, even for a small lab like mine, is a financial challenge, I would be thankful for any shared experience and information about MikroTik devices in combination with Init7.
Have you made a decision yet, and have you perhaps already gathered some experience with one of these devices?
We do not have an RDS yet, we only have CCR and RB5009 devices.
I have tried migrating RADIUS to User-Manager in RouterOS, but it is not really suitable because it is geared towards paying users on wireless, with all bookkeeping done via RADIUS accounting, while what we need is only RADIUS authentication. It can do that, but there is absolutely no logging; enhancement request filed.
Another issue is the difficulty of synchronizing user lists between different RouterOS devices. And I do not like that User-Manager users live in the RouterOS config instead of exclusively in the User-Manager database (which we could synchronize). The large number of users clutters our config exports.
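For reference, the authentication-only part that does work today is small; a minimal RouterOS 7 User-Manager sketch (the router name, address, secret, and user below are placeholders) looks roughly like:

```
# Enable the User-Manager package (RouterOS 7)
/user-manager
set enabled=yes
# Register the NAS device that will send RADIUS Access-Requests
/user-manager router
add name=office-switch address=10.0.0.2 shared-secret=ChangeMe
# A plain authentication user (no accounting involved)
/user-manager user
add name=alice password=ChangeMeToo
```

That covers Access-Request/Access-Accept, but as noted above, nothing about it is logged, and the users end up in the device config rather than only in the database.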
Not one in particular that I can think of. Kevin here goes into the whats and whys of switch-centric design. His demo is more elaborate than mine, but the principles are the same:
Use switches (MLAG stacks) to hand off to routers and servers
Use VLANs as a way to create point-to-point links between routers and external services
Use LACP and two-of-everything (where it makes sense) for ultimate reliability during both outages and day-to-day maintenance
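On the switch side, the hand-off in that design boils down to one VLAN-filtering bridge trunking the cross-connect VLANs; a rough RouterOS sketch (interface names and VLAN IDs are invented) could be:

```
# One bridge with VLAN filtering on the CRS
/interface bridge
add name=bridge1 vlan-filtering=yes
# LACP bonds towards a router and a server become bridge ports
/interface bridge port
add bridge=bridge1 interface=bond-router1
add bridge=bridge1 interface=bond-server1
# Trunk the point-to-point VLANs to both
/interface bridge vlan
add bridge=bridge1 tagged=bond-router1,bond-server1 vlan-ids=101,102
```

With two such switches in an MLAG pair, either one can be drained for maintenance without dropping the LAGs.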