Rose Data Server (RDS2216) review: a real-world storage server, with basic stats

I recently grabbed a Rose Data Server (RDS2216) and hooked its 10G SFP+ ports directly to my two Proxmox hosts.

I'm running NVMe over TCP to the nodes, and tested against both local and shared storage.

As of writing, this Rose is hosting Mastodon (aus.social) and Pixelfed (pixelfed.au), as well as other fediverse services!


Note: I believe this ~500 MB/s is expected because the 10G SFP+ DAC is maxing out. I've set up jumbo frames, which boosted throughput by about 10%. This would likely be better if I had 100G DACs (and even better once Rose supports multipath, for 200G of links if you wanted to saturate the full 20 disks).
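For anyone wanting to replicate the jumbo-frame setup on the Proxmox side, a minimal sketch (the interface name and peer IP are placeholders; both ends of the DAC, and anything in between, need the same MTU):

```shell
# Placeholder interface name - substitute your storage-facing NIC
ip link set dev enp1s0f0 mtu 9000

# Verify jumbo frames work end-to-end: 8972 bytes of ICMP payload
# (9000 MTU - 20 IPv4 header - 8 ICMP header), with don't-fragment set
ping -M do -s 8972 -c 3 10.0.0.2
```

If the ping fails with "message too long", something on the path is still at MTU 1500.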

                         RoSE NVMe/TCP RAID5   Local NVMe RAID1   Local SSD RAID5 (ZFS with Intel Optane cache)
Random Read IOPS         27.4K                 144K               159K
Random Read Latency      4,667µs               884µs              796µs
Random Write IOPS        42.5K                 74.4K              87.6K
Random Write Latency     3,002µs               1,708µs            1,445µs
Sequential Read          597 MB/s              6,059 MB/s         5,342 MB/s
Single-thread Latency    234µs                 69µs               67µs
Single-thread IOPS       4.0K                  11.5K              11.8K
ioping Latency           486µs                 273µs              337µs
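The post doesn't say which tool produced the numbers above (other than ioping), but fio reports the same metrics. A minimal sketch of pulling IOPS and mean completion latency out of fio's JSON output (`fio --output-format=json`) - the sample values below are made up, only the field paths follow fio's schema:

```python
import json

# Made-up sample in fio's JSON output schema, just to show the field paths:
# jobs[n].read.iops and jobs[n].read.clat_ns.mean (completion latency, ns)
sample = json.loads("""
{
  "jobs": [
    {
      "jobname": "randread",
      "read": {
        "iops": 27400.0,
        "clat_ns": { "mean": 4667000.0 }
      }
    }
  ]
}
""")

for job in sample["jobs"]:
    rd = job["read"]
    # Convert IOPS to thousands and latency from ns to µs for readability
    print(f'{job["jobname"]}: {rd["iops"]/1000:.1f}K IOPS, '
          f'{rd["clat_ns"]["mean"]/1000:.0f}µs mean latency')
```

Running fio against the NVMe/TCP block device with `--direct=1` and piping the JSON through something like this makes it easy to rebuild a comparison table like the one above.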

The disks are 8 x Samsung 960GB PM983 Gen3 x4 NVMe in RAID 5, plus 2 marked as spares.
The RAID array is then served as a block device over NVMe over TCP; my Proxmox nodes format the block device with LVM (the nodes handle all of the LVM volume smarts).
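For anyone curious what that looks like on the Proxmox side, a sketch using nvme-cli and Proxmox's pvesm (the target IP, NQN, device path, and names are all placeholders - check `nvme list` for the real device):

```shell
# Discover and connect to the NVMe/TCP target (IP and NQN are placeholders)
nvme discover -t tcp -a 10.0.0.1 -s 4420
nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2024-01.example:rds2216-raid5

# The remote namespace now appears as a local block device
nvme list

# Put LVM on it and register it as shared storage in Proxmox
pvcreate /dev/nvme1n1
vgcreate rose_vg /dev/nvme1n1
pvesm add lvm rose-lvm --vgname rose_vg --shared 1
```

With `--shared 1`, both nodes see the same volume group, which is what makes the node-to-node migration work without moving disk data.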

In short, this performance is perfectly cromulent (the low latency is excellent). I've moved the bulk of my VMs' OS disks (EFI/cloud-init/etc.) onto it in Proxmox, and node-to-node migration is working perfectly without fault.

I have a little bit of fear about putting all of my services on such a new product, but I figure the features will continue to improve over time. From a speed and stability perspective, so far I am very happy with it.

Very low cost, but only basic storage features (it does one job).

Requests to staff:

  • I would rather have a “storage OS” mode that disables everything I don't care about and has improved features focused on storage.
  • Please release a 6-8 x 100Gbit QSFP28 switch that I can use as a dedicated storage switch (no routing required for this) - a refreshed CRS504-4XQ-IN, maybe?
  • Please support native NVMe multipathing (NVMe-MPIO) for NVMe over TCP, to aggregate throughput across multiple NICs.
  • Please add more monitoring/metrics/alerts for SMART and NVMe disk health, plus NVMe-over-TCP details/diagnostics.
  • Alerts on disk failures (SNMP and email?).
  • Please consider supporting ZFS/bcachefs or other filesystems (I am not a Btrfs hater, and it works nicely). I'm only using 2GB of the 32GB of memory, so ZFS would work nicely with 16-24GB for cache.
  • Native LVM management (as used by Proxmox shared storage).
  • Native backup support to external storage (attached USB storage would be great).
  • The Rose product sheet mentions “clustering”, but never explains what kind.
  • What are the 2 x SFF-8644 ports (24G each) for? A SAS/SATA expander?
  • Please create a separate storage category in this forum.

If anybody is aware of tuning at the Rose/Proxmox host/guest level, please feel free to tell us!

This is about as much information as you can currently get out of WinBox.
The RAID statistics need to be updated to show actual RAID information, and there isn't much health information available on the disks, such as SMART or NVMe metadata.
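Until that improves, some of this can be queried from the initiator side with nvme-cli, assuming the target passes the log pages through (the device path is a placeholder - check `nvme list` for yours):

```shell
# SMART / health log for the fabric-attached controller (placeholder path)
nvme smart-log /dev/nvme1

# Controller error log, if the target supports it
nvme error-log /dev/nvme1
```

Whether these return anything useful over NVMe/TCP depends on what the Rose target implements, so treat this as a diagnostic to try rather than a guarantee.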