7.8beta2 adds new package ROSE-storage

What’s new in 7.8beta2 (2023-Jan-20 12:27):
!) storage - added new “rose-storage” package support for extended disk management and monitoring functionality (ARM, ARM64, Tile and x86) (CLI only)

What’s new in 7.8beta3
*) rose-storage - added support for GPT partitioning (CLI-only)
*) rose-storage - added support for authentication and encryption for SMB (CLI-only)
*) rose-storage - fixed “rose-storage” package update (needs manual upgrade from 7.8beta2)
*) rose-storage - fixed SMB support for macOS clients
*) rose-storage - various stability fixes for SMB
*) rose-storage - prioritize block device export (iSCSI, nvme-tcp) over mounting

Manual (being updated):
https://help.mikrotik.com/docs/x/YYBwCQ

OMG, great features!!!

Windows users, be aware that the default built-in NFS client does not support NFS v4.
https://learn.microsoft.com/en-us/windows-server/storage/nfs/nfs-overview

Maybe someone more competent with Windows systems can comment on this. Are there any good third-party tools or workarounds?
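For reference, once the optional “Client for NFS” feature is enabled, the built-in client is used like this from an elevated prompt (the share path here is just a guess based on the slot names used later in this thread):

mount -o anon \\192.168.x.x\raid1-part1 Z:

Since it speaks NFSv3 at most, it presumably won’t talk to a v4-only export.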

I tried it out. I couldn’t get NFS working with my Mac either. Happy to try more, but curious what the NFS path should be (e.g. is it the slot= ?).
SMB has an smb-share= parameter, but there didn’t seem to be an equivalent for NFS.

See: http://forum.mikrotik.com/t/v7-8beta-testing-is-released/163742/1

A simple “/” should work as the path. Currently NFS/SMB is applied to the whole disk. If only a specific folder is needed on the client side, use “slot/folder”.
E.g. on a Linux initiator:
sudo mount -t nfs 10.155.166.7:/sata2/stuff /mnt/files
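On the server side, sharing is enabled on the disk entry itself. By analogy with the smb-share= parameter mentioned above, it should look something like the following, though the parameter name here is a guess (check /disk print detail for the actual list on your build):

/disk set sata2 nfs-sharing=yes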

Found the magic “-o” incantations, see http://forum.mikrotik.com/t/v7-8beta-testing-is-released/163742/91

sudo mkdir <mnt_point>
sudo mount -t nfs -o vers=4.0,hard,bg,intr,resvport,rw <dns_name>:/<ROSE_slot_name> <mnt_point>

This worked for me with raid1-part1 being the slot for the RAID1 SSDs in an RB1100. I’d tried just -o vers=4.0 originally, but apparently that wasn’t enough.

The minimum required was -o vers=4,resvport, with rw needed if you want it writable. The other options (hard,bg,intr) control error handling, background mount retries, and interruptibility, and can easily be looked up.
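To make the mount persist across reboots, a standard fstab entry along the same lines should work (same placeholders as above):

<dns_name>:/<ROSE_slot_name>  <mnt_point>  nfs  vers=4,resvport,rw,hard,bg  0  0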

With the Blackmagic Disk Speed Test, I’m maxing out the speed of the M.2 drive (2800 Mbps in this case). I love it.

How does SMB work from non-router clients? I got it working between a hAP AX3 and a 2116 by specifying the disk name in the smb-share parameter, with smb-user as blank, but from macOS command line, variations on sudo mount -t smbfs -o nopassprompt smb://192.168.x.x/nvme1 /tmp/blah fail with an authentication error, and “guest” doesn’t work either.
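One more variation I haven’t tried yet is mount_smbfs’s native //user@host/share form, with GUEST as the user and -N to suppress the password prompt:

sudo mount -t smbfs -N //GUEST@192.168.x.x/nvme1 /tmp/blah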

I got an NVMe-to-SATA adapter to add some drives to the 2116.

http://forum.mikrotik.com/t/v7-8beta-testing-is-released/163742/98

Pretty slick to have file sharing options that approach source device throughput over the LAN.

The macOS fix for SMB will be available in the next release.

Hello,

please fix this (SUP-104510). Thank you

This issue is not related to the new ROSE package. SMB in the ROSE package currently supports only the SMB 2.1, 3.0, and 3.1.1 dialects.

Curious about the recommended protocol/scheme for mounting a container image on one RouterOS device from another RouterOS device using ROSE. I imagine it doesn’t matter much in a lot of cases, but some thoughts on how best to use ROSE as remote storage for a container use case would be helpful.

E.g. on the same 1G or 2.5G LAN segment, between, say, an RB5009 as client and an RB1100AHx4 as storage server. Does the specific hardware (assuming it supports ROSE) even matter in the selection process? E.g. is iSCSI or NVMe-over-TCP more CPU/memory intensive? If the container host has limited CPU/memory, sacrificing maximum disk speed to keep CPU/memory usage lower might be a worthwhile trade-off in some cases too…
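For the block-device route, I assume the client side would be another /disk entry, with parameter names along the lines of the type=nfs syntax shown elsewhere in this thread (a sketch, not verified syntax):

/disk add type=iscsi iscsi-address=192.168.x.x iscsi-iqn=<target_iqn> slot=remote1

The trade-off is that a block device arrives raw (the client formats and owns the filesystem), while NFS/SMB shares arrive as ready-made file storage that several clients can use at once.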

“slot” is the name of the folder you will get in “Files”, i.e. it is the local mount point.
I got the NFS client working with a Linux NFS server using this:

/disk
add nfs-address=192.168.1.3 nfs-share=/local/mikrotik slot=nfs type=nfs

It creates a folder “nfs” in /file with the mounted Linux directory in it.
Frustratingly, there are no error messages at all. When the mount does not succeed, you simply will not get the new folder in /file, with no other indication of what went wrong.
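Since RouterOS stays silent, checking from the server side at least confirms the export exists and is reachable. On the Linux server, exportfs -v lists what is actually exported, and showmount -e from any other machine shows what the server offers:

exportfs -v
showmount -e 192.168.1.3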

The rose-storage package also adds commands under /file/sync.
However, that is completely undocumented and not so easy to figure out…

This is rsync; I will add documentation for this tomorrow.

After attaching an SSD via an NVMe-SATA adapter (see the 7.8beta2 thread), I exported the SSD via NFS on a 2116. I then mounted that export from a hAP AX3 and created a container with its storage pointed at the NFS share:

/disk
# mount the 2116's NFS export locally as slot "nfs1"
add nfs-address=192.168.x.x nfs-share=pcie1-sata1-part1 slot=nfs1 type=nfs
/container mounts
# expose the frr-etc directory on the share as /etc/frr inside the container
add dst=/etc/frr name=frr-etc src=/nfs1/frr-etc
/container config
# pull images from Docker Hub, using the share as scratch space
set registry-url=https://registry-1.docker.io tmpdir=nfs1/docker-pull
/container
# the container's root filesystem also lives on the NFS share
add remote-image=frrouting/frr:v8.4.0 interface=veth1 root-dir=nfs1/frr-root logging=yes mounts=frr-etc
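After the add, the container still has to be started; assuming it ends up as entry 0:

/container print
/container start 0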

(Note: @antonsb, is there a way for all of the attributes of the container add command, such as remote-image, to be displayed in the export or print detail output, so that we can easily replicate/reload containers if things go south, or copy the config from one router to another?)

Given the BFD vs. other “nerdy” features debate going on in the other thread, I thought I’d experiment with FRR in containers.

Fortunately, the hAP AX3 and the Mac are able to mount the NFS share simultaneously, which has been useful in debugging issues with this container. For example, the FRR container creates its /etc/frr directory with non-root permissions, with the result that the container can’t create or write new config files (particularly vtysh.conf, zebra.conf, etc.). The parent directory (nfs1/frr-etc in this case) had a UID:GID of 32868:32869, but the files created by the container were 32768:32768 (frr:frr inside the container). I couldn’t change permissions from within the container, but I was able to fix them from the Mac.
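The fix from the Mac amounted to a recursive chown along these lines (the path depends on where the share is mounted; /Volumes/nfs1 here is illustrative):

sudo chown -R 32768:32768 /Volumes/nfs1/frr-etc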

Hello,

I did run this command

/disk add nfs-address=192.168.1.3 nfs-share=/local/mikrotik slot=nfs type=nfs

but I do not see any nfs directory in Files. Is that correct? I see nfs only in disk management.

Do you not understand that that was just an example, and that the nfs-address and nfs-share parameters have to be adjusted to your local setup?

I changed the NFS address but not the NFS share. So I will create a folder NFS on my disk and then use the adjusted command. How do I remove the current NFS share?

Thank you
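Removing it should just be a matter of deleting the /disk entry you added (assuming the slot is still named “nfs”):

/disk print
/disk remove [find slot=nfs]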