aglabs
newbie
Topic Author
Posts: 49
Joined: Mon Dec 28, 2020 1:05 am

RDS2216 Experience

Fri Apr 11, 2025 2:19 am

I haven't seen much information posted yet on the recently released RDS platform, so I figured I'd share the highlights of my adventure so far. Apologies for the wall of text, but hopefully some of the information here is useful to others.

I ended up using this M.2-to-U.2 adapter: https://www.startech.com/en-us/hdd/u2m2e125 It is exactly the right height to fit 20 of them into the RDS2216.

The build quality is generally good and typical of MikroTik hardware, with one exception.

The plastic drive caddies with plastic rails seem to have some assembly issues. Both of the 2 units I received needed adjustments. On unit 1 it was impossible to even use the bottom two bays of the center 4 drive bays; on unit 2 I could not get a good enough connection for the drive to work until I made similar adjustments. Initially I thought the ports were dead, until I tested with a drive without a caddy.

On unit 1, I had to loosen the 4 screws on the top and bottom of the case that support the center 4 drive bays, slide in the populated caddy, and tighten them back up.
On unit 2, I had the same issue, but with the second group of 4 drive bays from the right of the unit.

After that minor tweak I had no issue getting 20 drives working in each unit. Other than that, no assembly issues that I could tell. So if you find it tough to seat a caddy when populated, this might be why.

Once powered up I noticed a few things. The first: these are loud. I have some Noctua fans on order to see if I can get a similar airflow rate while quieting the box down a bit. Stock, it's not something you want to run in your house unless it's in a well sound-insulated room or closet.
Edit: I just got my order of Noctua NF-A4x20 fans. There is some extra work to make them fit and not move around, since they are not as thick as the stock fans. My biggest concern was that they are rated for a fraction of the CFM of the stock fans, but once installed they seemed to have no issue maintaining the box's temperatures in an 80°F room.
Edit 2: See below; tuning the fans to 15-20% lowers the noise substantially - no real need to convert to Noctua fans.


Opening WinBox, I had to go back to the block diagram to make sure I wasn't crazy: I thought this had a switch chip with offload capabilities, yet no Switch menu appears in WebFig or WinBox. I found that /interface/ethernet/switch is present on the CLI, and when I look at the l3hw config it shows enabled, but the l3hw monitor shows l3hw is not running. I assume this will be implemented at a later date.
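For anyone else poking at this, the checks are quick from the CLI (a sketch; the l3hw-settings path is as I understand it on recent 7.x, so verify the exact menu names on your unit):

# list the switch chip(s) and the per-chip L3 HW offload flag
/interface/ethernet/switch/print
# global L3 HW settings - config shows enabled here, while the monitor reports not running
/interface/ethernet/switch/l3hw-settings/print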

As of 7.18.2, it is not currently possible to use WinBox or WebFig to set up Btrfs; attempting to do so will crash WinBox, both new and old versions.

I will say I am a fan of the way MikroTik has done the disk menu logic; the idea of placing a file image on a provisioned disk, setting up RAID, and so on works well.

I also found that the documentation at https://help.mikrotik.com/docs/spaces/R ... 9711/Btrfs does not work with more than 2 drives. It suggests:
/disk/btrfs/filesystem/add-device [find where present-devs=<disk-name-1>] device=<disk-name-2>
/disk/btrfs/filesystem/add-device [find where present-devs=<disk-name-1>] device=<disk-name-3>
/disk/btrfs/filesystem/add-device [find where present-devs=<disk-name-1>] device=<disk-name-4>
Only the first command works; the rest seem to be ignored. Instead, I used /disk/btrfs/filesystem/print to get the id of the entry containing the first two disks, then:
/disk/btrfs/filesystem/add-device number=<id from before> device=<disk-name-3>
and so on for each remaining disk.



As far as disk performance, I haven't done extensive testing yet, but on 10GbE with Btrfs configured it has no issue saturating line rate for reads and writes on large file copies over NFS and NVMe over TCP (using a file on Btrfs). NFS does have a weird I/O latency issue with reads after a lot of random disk I/O, but it seems to clear up a few minutes later. Also on the to-do list: sharing out individual disks via NVMe over TCP to a server to see if multiple sessions net more performance, and testing a RAID setup to see if some of the performance behavior I saw with NFS/SMB goes away.
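For reference, this is roughly how I attach the NVMe/TCP export from a Linux client with nvme-cli (a sketch; the address, port, and subsystem name are placeholders - use whatever the RDS actually advertises):

# discover NVMe/TCP subsystems exported by the RDS
nvme discover -t tcp -a 192.168.88.10 -s 4420
# connect to a discovered subsystem by its NQN
nvme connect -t tcp -a 192.168.88.10 -s 4420 -n <subsystem-nqn>
# the namespace then appears as a local block device
nvme list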

SMB performance is interesting. Copying to the RDS, I can hit about 5-6Gbps (large files), but copying from it I only see 30MBps. I have it on my to-do list to try a container running Samba to see what happens there.
Edit: this turned out to be an issue with a CCR2116 and l3hw - once l3hw was restarted, the issue went away. The RDS sustains 10Gbit transfers.

Overall, I am a fan of these boxes. For the price, no one else really makes this form factor with these capabilities. I'm excited to see what they will bring to this box.
Last edited by aglabs on Sun Apr 13, 2025 3:16 am, edited 2 times in total.
 
sirbryan
Member
Posts: 477
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: RDS2216 Experience

Fri Apr 11, 2025 7:45 am

There's a fix in 7.19b for SMB shares having issues when backed by BTRFS (to macOS?). I had poor SMB results on 7.18, but downgrading to 7.17.x proved to work just fine. (I had the issues on an ARM CHR VM on 7.18.x as well, so it wasn't specific to the RDS.)
 
aglabs
newbie
Topic Author
Posts: 49
Joined: Mon Dec 28, 2020 1:05 am

Re: RDS2216 Experience

Sat Apr 12, 2025 4:24 am

There's a fix in 7.19b for SMB shares having issues when backed by BTRFS (to macOS?). I had poor SMB results on 7.18, but downgrading to 7.17.x proved to work just fine. (I had the issues on an ARM CHR VM on 7.18.x as well, so it wasn't specific to the RDS.)
Thanks for this. I did confirm the same SMB behavior I reported also exists on a CCR2116. I was able to downgrade it and confirm you are right: 7.17 does make reads more consistent with writes. Very interesting.

Unfortunately the RDS won't downgrade below 7.18 for me.

Also a note on SMB performance in a container: copying to the RDS goes at about 6Gbps, copying from it at 1Gbps, which is faster than the built-in SMB service but still rather slow. This seems to be a regression in 7.18.
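For anyone wanting to replicate the container test, a minimal smb.conf along these lines is enough for a throughput run (a sketch - the share name and path are examples; /data is wherever the disk is mounted into the container):

[global]
   server min protocol = SMB3
   # skip printer and NetBIOS noise in a throwaway test container
   load printers = no
   disable netbios = yes

[testshare]
   path = /data
   read only = no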
 
CallPut
just joined
Posts: 2
Joined: Sat Apr 12, 2025 7:33 am

Re: RDS2216 Experience

Sat Apr 12, 2025 7:49 am

About the noise problem of this device: I figured out that if you just set the fan minimum percent to 20% in the System Health settings, the noise is gone.
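On the CLI that should be a one-liner (a sketch; the property name is what I understand recent 7.x uses - verify with /system/health/settings/print on your unit):

# keep the fans at a 20% floor instead of the louder default curve
/system/health/settings/set fan-min-speed-percent=20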
 
sirbryan
Member
Posts: 477
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: RDS2216 Experience

Sat Apr 12, 2025 11:38 pm

Unfortunately the RDS won't downgrade below 7.18 for me.
Try 7.19b if you can live with a beta for a little while.

Also a note on SMB performance in a container: copying to the RDS goes at about 6Gbps, copying from it at 1Gbps, which is faster than the built-in SMB service but still rather slow. This seems to be a regression in 7.18.
Try a container with its data on a bare drive formatted as ext4, or use mdraid-style RAID instead of Btrfs. I know I got 6-8Gbps both ways with containers at some point.
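If you want to try the bare-drive route, formatting from the RouterOS CLI is one command (a sketch; the slot name is an example, and this wipes the drive):

# format a single NVMe drive as ext4 - destroys any existing data on it
/disk/format-drive nvme1 file-system=ext4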
 
aglabs
newbie
Topic Author
Posts: 49
Joined: Mon Dec 28, 2020 1:05 am

Re: RDS2216 Experience

Sun Apr 13, 2025 12:25 am

About the noise problem of this device: I figured out that if you just set the fan minimum percent to 20% in the System Health settings, the noise is gone.
Yeah, nice find. I had the thought this morning to try this; much more bearable. Guess I'll put the Noctua fans back on the shelf for another project!
 
aglabs
newbie
Topic Author
Posts: 49
Joined: Mon Dec 28, 2020 1:05 am

Re: RDS2216 Experience

Sun Apr 13, 2025 12:28 am

Unfortunately the RDS won't downgrade below 7.18 for me.
Try 7.19b if you can live with a beta for a little while.

Also a note on SMB performance in a container: copying to the RDS goes at about 6Gbps, copying from it at 1Gbps, which is faster than the built-in SMB service but still rather slow. This seems to be a regression in 7.18.
Try a container with its data on a bare drive formatted as ext4, or use mdraid-style RAID instead of Btrfs. I know I got 6-8Gbps both ways with containers at some point.

I figured out the performance issues. My curiosity was piqued when I started seeing a lot of inconsistency here. I'm now able to copy to/from at line rate on 10GbE. It turns out that on the CCR2116 doing the inter-VLAN routing, l3hw had stopped working; once I cycled l3hw on it, all my issues went away. Note to self: don't be lazy, and next time test without network hops.
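For anyone hitting the same thing, cycling it is just toggling the offload flag on the switch chip (a sketch; the switch name is whatever /interface/ethernet/switch/print shows on your CCR):

# turn L3 hardware offloading off and back on to restart it
/interface/ethernet/switch/set switch1 l3-hw-offloading=no
/interface/ethernet/switch/set switch1 l3-hw-offloading=yes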

Also, I originally set up mdraid to play with, but for some reason, after trying Btrfs, whenever I try to remove Btrfs and go back to mdraid, the RAID device gets stuck in the "clear" state and won't format. I haven't tried to dig into it, since Btrfs is working well for me now that the network issue is fixed.
 
aglabs
newbie
Topic Author
Posts: 49
Joined: Mon Dec 28, 2020 1:05 am

Re: RDS2216 Experience

Sun Apr 13, 2025 4:25 am

The latest fun one: when transferring a large quantity of files (100k+) via SMB to the RDS2216, any time you open Files in WinBox, it stops responding and you have to kill the WinBox process. This was repeatable, so I opened a support case: SUP-185295. I did see mention of the file browser potentially being overhauled in 7.20, so I will wait and see.

Also (SUP-185296): I noticed that when a share name starts with an uppercase letter, e.g. Dropbox, after some period of time a duplicate lowercase share appears when browsing the SMB URI from Windows - both dropbox and Dropbox show up, with duplicate data, even though IP > SMB only shows Dropbox shared.
 
dag
just joined
Posts: 7
Joined: Mon Dec 16, 2019 8:48 pm
Location: Dallas, TX

Re: RDS2216 Experience

Mon Apr 14, 2025 3:02 pm

Yeah, nice find. I had the thought this morning to try this; much more bearable. Guess I'll put the Noctua fans back on the shelf for another project!
Yeah, I wouldn't try Noctuas; they're "quiet" because in real life they move very little air. Plus, in my experience MikroTik's PWM implementation is not universal. Sunon and Nidec fans work (the ones MikroTik uses in most of their gear), and Delta fans also work, but Sanyo Denki or Papst fans run full throttle no matter what you do (tested on a CCR2216, RDS2216, and so on). You might run into the same issue with Noctuas. It's a shame, because Sanyo Denki fans actually do move air, and their noise/CFM ratio is quite good. The 9GA0412P3M01 would have been a great drop-in replacement (I tested one I had lying around with the RDS; it runs full speed no matter what).
 
normis
MikroTik Support
Posts: 27078
Joined: Fri May 28, 2004 11:04 am
Location: Riga, Latvia
Contact:

Re: RDS2216 Experience

Mon Apr 14, 2025 3:05 pm

There are a lot of improvements for the RDS and the File menu in general in 7.20 and 7.21; we are actively fixing and improving a lot of what is discussed in this topic.
 
jaclaz
Forum Guru
Posts: 2873
Joined: Tue Oct 03, 2023 4:21 pm

Re: RDS2216 Experience

Mon Apr 14, 2025 4:07 pm

The 9GA0412P3M01 would have been a great drop-in replacement (I tested one I had lying around with the RDS; it runs full speed no matter what).
It seems the Sunon accepts a wide range of PWM frequencies, 22-28 kHz, while the Sanyo Denki specs mention 25 kHz only, so they are likely more "strict".
 
sirbryan
Member
Posts: 477
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: RDS2216 Experience

Mon Apr 14, 2025 8:27 pm

I can confirm the Noctuas don't respond to speed control. They stay at 5K RPM, and the CPU rapidly approaches 60+°C.

[Edit]

Looks like they do respond to fan control, but since the machine was so hot they were running full bore the whole time.

I'm testing a half-and-half setup with five Noctuas on the power supply side and five stock fans on the CPU side. It seems to be working OK, combined with an 18% minimum fan speed and a 60°C CPU temp setting.

[Update]

I put the stock fans all back in and just adjusted the speeds to an acceptable level so that it doesn't have to ramp up every few minutes.
Last edited by sirbryan on Sat Apr 19, 2025 8:11 pm, edited 2 times in total.
 
aglabs
newbie
Topic Author
Posts: 49
Joined: Mon Dec 28, 2020 1:05 am

Re: RDS2216 Experience

Tue Apr 15, 2025 6:31 am

Interesting. I ran an A4x20 PWM and it did indeed respond to PWM for me: setting 60% netted about 4K RPM, setting 10% roughly 1.2K RPM. The fans would spike to 5K RPM (their max) when the temp rose above 60°C, then come back down. They seemed to have no issue maintaining temps in a 78-80°F room for me.

I did end up going back to stock fans with a 15% baseline. It ramps up frequently under load, but also seems able to handle higher ambient temps.

For what it's worth, the Noctua CFM rating is less than 1/4 that of the stock fans.
 
dag
just joined
Posts: 7
Joined: Mon Dec 16, 2019 8:48 pm
Location: Dallas, TX

Re: RDS2216 Experience

Sat Apr 19, 2025 4:01 pm

For what it's worth, the Noctua CFM rating is less than 1/4 that of the stock fans.
Noctua is mostly marketing hype; their CFM ratings are so pathetic it's no wonder they're quiet. In a switch like this, though, you want to focus on static pressure rather than CFM; it's hard to move air through a crowded 1U server. I use Sanyo Denki whenever I can; you will hardly find such quality and such a wide selection in this price range. If only MikroTik's PWM implementation were more standards-compliant…
 
ISPIE001
just joined
Posts: 8
Joined: Thu Jan 18, 2018 12:00 pm

Re: RDS2216 Experience

Thu Apr 24, 2025 11:32 pm

Has anyone had any luck getting VMware to successfully connect to an iSCSI partition on an RDS2216? It seems the LUN being offered is not suitable, and the ESXi host fails to complete the connection.
Any help here would be greatly appreciated.
 
sirbryan
Member
Posts: 477
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: RDS2216 Experience

Fri Apr 25, 2025 7:48 am

Has anyone had any luck getting VMware to successfully connect to an iSCSI partition on an RDS2216? It seems the LUN being offered is not suitable, and the ESXi host fails to complete the connection.
Any help here would be greatly appreciated.
MikroTik isn't using the proper IQN naming convention (iqn.date.domain:target) that VMware expects. It simply exports the slot name (nvme1) instead of something like iqn.1996-06.com.mikrotik:nvme1. Discovery doesn't work from VMware, but it works from iscsiadm in Linux (ip.add.ress:3260,1), which shows the simple target names.

Unfortunately, colons aren't allowed in slot names, so you can't "fake" the IQN format to get VMware to accept the target names. (Tested with ESXi 6.7.)
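For reference, discovery and login from Linux look like this (a sketch; the address and target name are examples):

# discover targets the RDS advertises - these show up as bare slot names, not IQNs
iscsiadm -m discovery -t sendtargets -p 192.168.88.10:3260
# log in to one of the discovered targets
iscsiadm -m node -T nvme1 -p 192.168.88.10:3260 --login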
 
CallPut
just joined
Posts: 2
Joined: Sat Apr 12, 2025 7:33 am

Re: RDS2216 Experience

Fri Apr 25, 2025 10:32 am

Has anyone had any luck getting VMware to successfully connect to an iSCSI partition on an RDS2216? It seems the LUN being offered is not suitable, and the ESXi host fails to complete the connection.
Any help here would be greatly appreciated.
MikroTik isn't using the proper IQN naming convention (iqn.date.domain:target) that VMware expects. It simply exports the slot name (nvme1) instead of something like iqn.1996-06.com.mikrotik:nvme1. Discovery doesn't work from VMware, but it works from iscsiadm in Linux (ip.add.ress:3260,1), which shows the simple target names.

Unfortunately, colons aren't allowed in slot names, so you can't "fake" the IQN format to get VMware to accept the target names. (Tested with ESXi 6.7.)
This issue is likely unrelated to the IQN naming convention. From my online research, ESXi's NVMe over TCP support requires the target to support NVMe fused commands. Maybe the RouterOS NVMe/TCP target module (nvmet-tcp) does not support fused commands?

The following article mentions a similar issue:
https://koutoupis.com/2022/04/22/vmware ... -over-tcp/
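One way to check from a Linux client, after connecting with nvme-cli, is to read the controller's fused-operation capability field (a sketch; the device path is an example):

# the "fuses" field of the identify-controller data indicates fused command support (0 = none)
nvme id-ctrl /dev/nvme1 | grep -i fuses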
 
sirbryan
Member
Posts: 477
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: RDS2216 Experience

Fri Apr 25, 2025 3:39 pm

This issue is likely unrelated to the IQN naming convention. From my online research, ESXi's NVMe over TCP support requires the target to support NVMe fused commands. Maybe the RouterOS NVMe/TCP target module (nvmet-tcp) does not support fused commands?

The following article mentions a similar issue:
https://koutoupis.com/2022/04/22/vmware ... -over-tcp/

We're talking about iSCSI, not NVMe.

But having both work with VMware would be a huge win.
 
ISPIE001
just joined
Posts: 8
Joined: Thu Jan 18, 2018 12:00 pm

Re: RDS2216 Experience

Fri Apr 25, 2025 7:51 pm

Surely this is something MikroTik could fix very easily. (Allowing those characters in the naming convention might help.)

My only alternative here is to migrate 6 ESXi hosts and 40 VMs over to Proxmox, but that is not something I have time for right now.

So this green thing will probably end up on eBay. A pity they didn't mention this in the flashy videos.
 
sirbryan
Member
Posts: 477
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: RDS2216 Experience

Mon Apr 28, 2025 8:33 pm

What's wrong with testing on NFS, at least unless/until ROSE properly exports iSCSI IQNs? You could surely spin up one or two Proxmox hosts for testing (which is what I'm doing).

(I'll take it if you don't want it.)
 
ISPIE001
just joined
Posts: 8
Joined: Thu Jan 18, 2018 12:00 pm

Re: RDS2216 Experience

Tue Apr 29, 2025 7:57 pm

It works perfectly with Proxmox, though it's not an easy slot-in solution; with the direction VMware is heading, it's possibly worth investigating.
I also had trouble using NFS, to be honest; from what I can see, it's implemented more for MikroTik-to-MikroTik use.
 
sirbryan
Member
Posts: 477
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: RDS2216 Experience

Wed Apr 30, 2025 7:20 pm

I also had trouble using NFS, to be honest; from what I can see, it's implemented more for MikroTik-to-MikroTik use.

Getting NFS to work on macOS took some command-line tweaking; it's possible some of those same arguments (or variations) would be needed on some Linux distributions. I haven't tried NFS from ROSE on VMware yet.
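For anyone else on macOS, the tweaks in question are mount options along these lines (a sketch; the server address and export path are examples - resvport is the usual culprit, since macOS defaults to a non-reserved source port that many NFS servers reject):

# mount an NFSv3 export from the RDS on macOS
sudo mkdir -p /Volumes/rds
sudo mount -t nfs -o resvport,nolocks,vers=3 192.168.88.10:/nfs-share /Volumes/rds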
 
psannz
Member Candidate
Posts: 130
Joined: Mon Nov 09, 2015 3:52 pm
Location: Stuttgart, Germany

Re: RDS2216 Experience

Wed Apr 30, 2025 8:03 pm

It works perfectly with Proxmox - but not an easy slot in solution, although with the direction of VmWare possibly worth investigating.
I also had trouble using NFS to be honest, thought from what I can see its implemented more for mikrotik-to-mikrotik use
NFS works wonderfully with ESXi. Stick to version 3 if you don't need the v4 features, it's a lot easier to configure.
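For reference, mounting an NFSv3 datastore from the ESXi shell is a one-liner (a sketch; host, export path, and datastore name are examples):

# mount an NFSv3 export as a datastore
esxcli storage nfs add -H 192.168.88.10 -s /nfs-share -v RDS-Datastore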
 
carlosmdarribas
just joined
Posts: 1
Joined: Sun May 11, 2025 8:01 pm

Re: RDS2216 Experience

Sun May 11, 2025 8:04 pm

Hello there!
Same issue with NVMe over TCP and iSCSI here with VMware; I hope it gets solved soon. The full power of the RDS2216 can't be realized without support for datacenter environments such as VMware.