It works perfectly with Proxmox, though it's not an easy slot-in solution; still, given the direction VMware is heading, it's possibly worth investigating.
To be honest, I also had trouble using NFS, though from what I can see it's implemented more for MikroTik-to-MikroTik use.
To get NFS to work on macOS took some command-line tweaking; it’s possible some of those same arguments (or variations) would be needed on some Linux distributions. I haven’t tried NFS from ROSE on VMware yet.
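For anyone hitting the same wall, here's a sketch of the kind of mount that typically works on macOS (the server address and export path are hypothetical; `resvport` is the usual macOS gotcha, since macOS defaults to a non-reserved source port that many NFS servers reject):

```shell
# macOS client side; 10.0.1.10:/export/rds is an example server/export
sudo mkdir -p /private/mnt/rds
sudo mount -t nfs -o vers=3,resvport,nolocks 10.0.1.10:/export/rds /private/mnt/rds
```

On Linux the equivalent would usually just be `mount -t nfs -o vers=3 …` without the macOS-specific options.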
NFS works wonderfully with ESXi. Stick to version 3 if you don't need the v4 features; it's a lot easier to configure.
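For reference, mounting an NFSv3 export as a datastore on ESXi is a two-liner from the host shell (address, export path, and datastore name below are examples, not anything the RDS dictates):

```shell
# ESXi host side; substitute your own server, export, and datastore name
esxcli storage nfs add --host=10.0.1.10 --share=/export/rds --volume-name=rds-nfs
esxcli storage nfs list
```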
Hello there!
Same issue here with NVMe over TCP and iSCSI on VMware; I hope it gets solved soon. The full power of the RDS2216 can't be utilized without support for datacenter environments such as VMware.
So I’m the “Very committed contributor” from the NVMeoTCP and ESXi article above LOL. I also have an RDS.
From what I can gather (and expected), RDS uses the in-kernel Linux NVMeoTCP implementation. This is why one poster says it just works fine with Proxmox: Proxmox is Debian-based, I'm pretty sure, and uses the same NVMeoTCP Linux kernel modules.
If you want an NVMeoTCP target for ESXi as an initiator, you will indeed need an implementation of NVMeoTCP that supports Fused Commands. The only open-source one I know of is SPDK (spdk.io), which also runs in user space with performance benefits, but there is no appliance per se that I have seen for running it, though I'm sure someone might have a pre-built container image for it.
There are some enterprise vendors, like Lightbits Labs, that offer more of a virtual appliance and are also certified as an NVMeoTCP target for ESXi, but sadly I just talked to some folks there the other day and they don't offer any type of community or homelab license.
If anyone finds a SPDK container image please post, or maybe I’ll just whip one up based on some work from a few years ago.
https://gist.github.com/singlecheeze/0bbc2c29a5b6670887127b93f7b71e3f
Please note I did some janky loopback in my quick and oh-so-dirty testing of SPDK above, as I was only validating that it did indeed work and met the ESXi Fused Commands requirement. Performance would probably be much more consistent without the jank, and maybe I'll retest. Many enterprise storage vendors use SPDK in their NVMeoTCP offerings and products, to my knowledge.
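For anyone wanting to reproduce this on a generic Linux box, here's a minimal sketch of standing up an SPDK NVMe/TCP target, following the upstream SPDK docs (the listen IP, NQN, and serial are examples; the `Malloc0` bdev is just a RAM-backed test disk, not persistent storage):

```shell
# Build SPDK and start the user-space NVMe-oF target
git clone https://github.com/spdk/spdk && cd spdk
git submodule update --init
./configure && make -j"$(nproc)"
sudo ./build/bin/nvmf_tgt &

# Create a TCP transport, back a namespace with a test bdev, and listen
sudo ./scripts/rpc.py nvmf_create_transport -t TCP
sudo ./scripts/rpc.py bdev_malloc_create -b Malloc0 1024 512    # 1 GiB, 512 B blocks
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000001
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 192.168.1.10 -s 4420
```

For real storage you'd swap the malloc bdev for an NVMe or AIO bdev pointing at actual disks.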
@Jetman77, welcome to the forum!
Just a quick note: as the name suggests, SPDK (Storage Performance Development Kit) is just a development kit, not a full appliance, so to make use of it, you’ll need an actual SPDK-based app.
As for running an SPDK-based app in a Mikrotik RouterOS v7 container, that’s unfortunately not possible. ROS v7 is a closed embedded Linux without user-space access to PCI devices, hugepages, or custom kernel modules. You’d need a full Linux distro with VFIO support to use SPDK in a container or otherwise.
Have any of you already installed Nextcloud as a container on it? Is there a manual for this?
Hello,
we are encountering crashes on this device.
Basically, after a few hours or days it will drop all traffic, so we have to reboot the device. It will not respond to WinBox, MAC telnet, or the watchdog.
We're basically running a few VLANs, a hotspot, queues, and a few subnets.
When the crash happens, we see on the other side of the SFP link that the link goes down and up, but passes zero traffic.
Are there any workarounds for the QoS issue? It should be fixed in the new 7.19rc, but the RDS2116 is rebooting twice daily. We think it is some kind of buffer overflow or something similar that occurs when we use queues.
There's also a similar problem in this forum thread: http://forum.mikrotik.com/t/ccr2216-2116-switch-port-flapping/182662/4
- the devices have the same issue as the CCR2116, where a port sometimes goes down and up but the internet stops working; we have to do a hard power cycle to get it working again.
The CCR2116 had better behaviour: it would respond to the watchdog and reboot itself, but the RDS2116 will not.
Quick update for those still working on fan noise: the RDS does come with high-quality Nidec fans, no problem there, but they definitely have a "beehive" ring to them once they spool up. Sure, all fans do, but the Nidecs are really whiny at medium to high speed. Though my previous attempts with all the 9Pxx/9Gxx Sanyo Denkis failed, as they refused to "listen" to the PWM signal applied by the router, I managed to find a good replacement: 109P0412P3H013. It does not come with vanes like the Nidecs do, so the flow is not as directional, and its max speed is about 9k RPM as opposed to 18k for the Nidecs, but they make a difference in noise at similar RPM, with a frequency profile noticeably lower than the Nidecs (high frequencies are much more noticeable than lower ones). And yes, they will accept the PWM signal from the router, and the top speed of 9k RPM is plenty based on what we've seen so far. I also tested them in a CCR2004 (the r2 flavor w/ PWM) and was equally successful. They seem to be much more lenient about the PWM signal they will accept. Note that the batch of fans I used dates back to 2019; I can't guarantee newer batches will behave the same.
Long story short, if you don’t care about noise, stick to the original Nidecs, they’re great, but if you do, do me a favor and skip the Noctua nonsense and try the Sanyo Denkis, they work and are much better fans overall.
There may be hope. 7.20b mentions hardware passthrough to containers…
Hello!
I can confirm a big “no-go” with VMware ESXi and iSCSI. I tried to test it on VMware ESXi 7.0.3 build-23794027
Regarding the IQN, I would suggest making it like "iqn.1996-06.com.mikrotik:{identity}:{slot}", so with default RouterOS settings and NVMe slot 1 it will look like "iqn.1996-06.com.mikrotik:mikrotik:nvme1". But you will have the opportunity to change /System/Identity to a proper device name like "mtrds01", and then you will have "iqn.1996-06.com.mikrotik:mtrds01:nvme1"; on the next RDS device you can go with "iqn.1996-06.com.mikrotik:mtrds02:nvme1".
That way, when you attach iSCSI devices to ESXi, it can properly distinguish nvme1 on mtrds01 from nvme1 on mtrds02. Also, there should be a proper unique SCSI device UUID on each device, as well as a storage vendor name.
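To illustrate why the distinct IQNs matter, this is roughly how both RDS units would be added on the ESXi side (adapter name and addresses are examples); without unique IQNs, the target list at the end can't tell the two nvme1 LUNs apart:

```shell
# ESXi host shell; vmhba64 and the portal addresses are placeholders
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.1.11:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.1.12:3260
esxcli storage core adapter rescan --adapter=vmhba64
esxcli iscsi adapter target list
```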
For example, this is how I configure block device properties with a Linux SCST iSCSI target:
DEVICE c1vd239 {
  eui64_id 0x6388d5770dad4828a09c4ea8348aa255
  t10_dev_id dev-c1vd239
  t10_vend_id iscsistorage01
}
So in MikroTik's case, those fields could look like:
eui64_id 0x{generated from device MAC address + slot number + random part} # !! generated once per system factory reset !!
t10_dev_id slot-nvmeX # slot number
t10_vend_id {value of /System/Identity} # no spaces or special characters here as far as I know
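As a sketch of what "MAC + slot + random" could mean in practice, here's a hypothetical generator for a 16-byte ID like the one above (the MAC and slot number are made up; a real implementation would persist the result so it only changes on factory reset):

```shell
#!/bin/sh
# Hypothetical sketch: build a 16-byte (32 hex digit) ID from
# MAC (6 bytes) + slot (1 byte) + random (9 bytes).
MAC="48:8f:5a:12:34:56"                               # example device MAC
SLOT=1                                                 # example NVMe slot number
RAND=$(od -An -N9 -tx1 /dev/urandom | tr -d ' \n')    # 9 random bytes as hex
ID="0x$(echo "$MAC" | tr -d ':')$(printf '%02x' "$SLOT")${RAND}"
echo "$ID"
```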
Best regards,
Evgenii D.
Hello!
We use several (if not tens) of SPDK-based Linux NVMe-RDMA storage systems. Ubuntu plus the instructions from the SPDK documentation, and you have your NVMe-TCP or NVMe-RDMA (we use RDMA) storage appliance. In our case the manual labor is creating the initial config via the command line and then managing the storage by hand. Surprisingly, it works well with ESXi and vSphere clusters, and it is faster than SCST iSCSI on the same hardware.
The downside for us is that a simple Intel server with a proper disk backplane and a dual-port 25GbE card costs around $10,000, so we were very excited about the RDS, which costs ~$2,000.
But unfortunately, the devil is in the details, and currently I cannot even test NVMe-TCP or iSCSI performance.
I really hope the iqn/nqn problem will be solved soon.
Best regards,
Evgenii D.
Hi everyone! I started testing the MikroTik RDS2216.
I encountered a few oddities and have some questions:
- I tried to mount the disk on a Windows system using NVMe over TCP. Windows 10 and Windows Server 2025 with the StarWind NVMe-oF Initiator driver were used for testing. In both cases the connection or discovery hung, and at that moment the RDS rebooted. An error record is created in the logs:
router was rebooted without proper shutdown, probably kernel failure
kernel failure in previous boot
StarNVMeoF_Ctrl and the standard Windows iSCSI Initiator were used. I did not test the built-in NVMe-oF support in Windows Server 2025 because, according to reviews, it does not work yet.
When I used Linux as the initiator, no such problems were observed.
Has anyone else who mounted a disk on Windows via NVMe over TCP encountered a similar problem?
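For comparison, the Linux initiator path that worked for me is just the stock nvme-cli flow (the target address is an example, and the NQN is a placeholder; use whatever the RDS advertises in discovery):

```shell
# Linux initiator side (nvme-cli); address and NQN are placeholders
sudo modprobe nvme-tcp
sudo nvme discover -t tcp -a 192.168.1.10 -s 4420
sudo nvme connect -t tcp -a 192.168.1.10 -s 4420 -n nqn.1996-06.com.mikrotik:nvme1
sudo nvme list    # the RDS namespace should now appear as /dev/nvmeXnY
```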
- On Synology storage you can create separate LUN devices on one RAID array with the Btrfs file system, mount them on different systems, and take snapshots of them on the storage itself. Is it possible to organize this with ROSE, or to export Btrfs subvolumes as disks over NVMe over TCP? So far the only idea I have is to mount a file as a block device and then take snapshots of it, but the performance of this is extremely low.
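The file-as-block-device idea can at least get copy-on-write snapshots almost for free on a generic Linux box; a sketch, assuming a Btrfs filesystem mounted at /srv/luns (paths and sizes are examples, and this says nothing about how ROSE itself handles it):

```shell
# Keep each LUN image in its own Btrfs subvolume so it can be snapshotted
sudo btrfs subvolume create /srv/luns/vm1
sudo truncate -s 100G /srv/luns/vm1/disk.img            # sparse backing file
DEV=$(sudo losetup -fP --show /srv/luns/vm1/disk.img)   # expose as /dev/loopN
echo "backing device: $DEV"

# Point-in-time, copy-on-write snapshot of the whole subvolume
sudo btrfs subvolume snapshot -r /srv/luns/vm1 "/srv/luns/vm1-snap-$(date +%F)"
```

The loop device can then be exported via the kernel iSCSI/NVMe target; the snapshot shares data blocks with the live image, so it's cheap until the image diverges.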
It would be great if you could send a supout.rif file, made after this happens while running the latest RouterOS beta, to support@mikrotik.com
I was really eager to get one, and I did, but there seems to be a problem with the PCIe disk backplane: RouterOS does not detect any disks (besides the M.2 SATA on the motherboard). The cables seem to be connected, both the power and the PCIe cable, but the backplane is not even warm; dead cold…
I have raised a ticket to support but still waiting.
Hi friend,
I am thinking of buying this model to replace a Netgear RN3138 NAS that is getting old and no longer receives updates. My main concern is that it will be located in part of the living room, so I need it to be quiet. For example, I currently have the CCR2116, and it makes no noise at all. With this modification, will it no longer be so noisy?
It is at the bottom of the rack and CCR on top.
Regards
