sudo mkdir <mnt_point>
sudo mount -t nfs -o vers=4.0,hard,bg,intr,resvport,rw <dns_name>:/<ROSE_slot_name> <mnt_point>
Hello, the macOS fix for SMB will be available in the next release. This issue is not related to the new ROSE package; SMB in the ROSE package currently supports only the SMB2.1, SMB3.0, and SMB3.1.1 dialects.

Hello, please fix this (SUP-104510). Thank you.
I tried it out. I couldn't get NFS working with the Mac either. Happy to try more, but curious what the NFS path should be (e.g. is it the slot= ?). SMB had it as smb-share=, but there wasn't an equivalent for NFS.

Slot is the name of the folder you will get in "Files", i.e. it is the local mountpoint.
/disk
add nfs-address=192.168.1.3 nfs-share=/local/mikrotik slot=nfs type=nfs
The rose-storage module also has command additions in /file/sync. However, that is completely undocumented and not so easy to figure out...

This is rsync; I will add documentation for it tomorrow.
/disk
add nfs-address=192.168.x.x nfs-share=pcie1-sata1-part1 slot=nfs1 type=nfs
/container mounts
add dst=/etc/frr name=frr-etc src=/nfs1/frr-etc
/container config
set registry-url=https://registry-1.docker.io tmpdir=nfs1/docker-pull
/container
add remote-image=frrouting/frr:v8.4.0 interface=veth1 root-dir=nfs1/frr-root logging=yes mounts=frr-etc
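If it's helpful, once the image has finished extracting, the container can be started and checked from the same menu. A minimal sketch (the find expression assumes the root-dir used above; adjust to your setup):

```
# start the FRR container created above and watch its status
/container start [find root-dir=nfs1/frr-root]
/container print
# open a shell inside it once status=running
/container shell [find root-dir=nfs1/frr-root]
```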
/disk add nfs-address=192.168.1.3 nfs-share=/local/mikrotik slot=nfs type=nfs
Rebooting
Failed to stop diskd: std failure: timeout (13)
could not umount system: Resource busy
/disk add iscsi-address=192.168.x.x iscsi-iqn=iqn.2005-10.org.freenas.ctl:rose-iscsi slot=rose-iscsi type=iscsi
[admin@lab-hap-ax3] /disk> print
Flags: B - BLOCK-DEVICE; M - MOUNTED; F - FORMATTING
Columns: SLOT, MODEL, SERIAL, INTERFACE, SIZE, FREE, FS, RAID-MASTER, ISCSI-STATE
# SLOT MODEL SERIAL INTERFA SIZE FREE FS RAID ISCSI-STA
0 M nfs1 nfs://192.168.xx.xx/pcie1-sata1-part1 network 235 152 605 184 235 006 853 120 nfs none
1 BM rose-iscsi iSCSI Disk 0050569aa261004 network 137 438 969 856 134 145 351 680 ext4 none connected
Hello, how do I remove or edit an existing NFS entry?

At /disk, you'll type print, which will show you the disks, then you'll type remove and the number to the left of the bad NFS share.
Thank you very much... that worked.
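For reference, a sketch of both options (the item number 0 and the new share path are illustrative; use whatever print shows for your entry):

```
/disk print
# remove the broken share by its item number...
/disk remove 0
# ...or edit it in place instead
/disk set 0 nfs-address=192.168.1.3 nfs-share=/new/export
```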
Feel free to suggest other protocols and features that should be supported.

None – great work here. You've given a whole new life to the RB1100s we have. Just the RAID options alone are super useful. And y'all have multiple protocols for containers to use a remote disk...
[user@mt] /disk> /console/inspect request=syntax path=disk
Columns: TYPE, SYMBOL, SYMBOL-TYPE, NESTED, NONORM, TEXT
TYPE SYMBOL SYMBOL-TYPE NESTED NONORM TEXT
syntax collection 0 yes
syntax .. explanation 1 no go up to root
syntax add explanation 1 no Create a new item
syntax comment explanation 1 no Set comment for items
syntax copy explanation 1 no
syntax disable explanation 1 no Disable items
syntax edit explanation 1 no
syntax eject-drive explanation 1 no
syntax enable explanation 1 no Enable items
syntax export explanation 1 no Print or save an export script that can be used to restore configuration
syntax find explanation 1 no Find items by value
syntax format-drive explanation 1 no
syntax get explanation 1 no Gets value of item's property
syntax monitor-traffic explanation 1 no
syntax nvme-discover explanation 1 no
syntax print explanation 1 no Print values of item properties
syntax raid-scrub explanation 1 no
syntax remove explanation 1 no Remove item
syntax reset explanation 1 no
syntax reset-counters explanation 1 no
syntax set explanation 1 no Change item properties
syntax unset explanation 1 no
[user@mt] /disk> /console/inspect request=syntax path=disk,set
Columns: TYPE, SYMBOL, SYMBOL-TYPE, NESTED, NONORM, TEXT
TYPE SYMBOL SYMBOL-TYPE NESTED NONORM TEXT
syntax collection 0 yes
syntax <numbers> explanation 1 no List of item numbers
syntax comment explanation 1 no Short description of the item
syntax crypted-backend explanation 1 no
syntax disabled explanation 1 no Defines whether item is ignored or used
syntax encryption-key explanation 1 no
syntax iscsi-address explanation 1 no
syntax iscsi-export explanation 1 no
syntax iscsi-iqn explanation 1 no
syntax iscsi-port explanation 1 no
syntax nfs-address explanation 1 no
syntax nfs-export explanation 1 no
syntax nfs-share explanation 1 no
syntax nvme-tcp-address explanation 1 no
syntax nvme-tcp-export explanation 1 no
syntax nvme-tcp-name explanation 1 no
syntax nvme-tcp-port explanation 1 no
syntax parent explanation 1 no
syntax partition-offset explanation 1 no
syntax partition-size explanation 1 no
syntax raid-chunk-size explanation 1 no
syntax raid-device-count explanation 1 no
syntax raid-master explanation 1 no
syntax raid-max-component-size explanation 1 no
syntax raid-member-failed explanation 1 no
syntax raid-role explanation 1 no
syntax raid-type explanation 1 no
syntax ramdisk-size explanation 1 no
syntax self-encryption-password explanation 1 no
syntax slot explanation 1 no
syntax smb-address explanation 1 no
syntax smb-export explanation 1 no
syntax smb-password explanation 1 no
syntax smb-share explanation 1 no
syntax smb-user explanation 1 no
syntax tmpfs-max-size explanation 1 no
syntax type explanation 1 no
Can it only sync between a path and the network? Real rsync can also sync between two pathnames, so I tried:
Bit of a long shot, but how about RDMA protocols such as RoCE or iWARP?
One use case I'm picturing is a router with a large NVMe/mSATA store serving small partitions to end-user routers for containers like Pi-hole and Uptime Kuma, to avoid tons of USB sticks out in the field.
Hello, do you know of any Samba server which has a web UI and works in a RouterOS container? I have bought a hAP ax3, but Hikvision cameras are not able to connect to anything from MikroTik. SMBv1 in ROS 7.7 does not work with the cameras, and neither does NFS from ROS 7.8; ROSE storage is not working.

I haven't found (or really looked for) any pre-built Samba servers. I built my own using Alpine Linux and adding the samba package, then created a custom config file. That would be a quick way to do it. But if you have more granular or detailed authentication requirements, you'd have to keep looking or build your own.
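For what it's worth, a minimal sketch of such an Alpine-based Samba image (no web UI; the Alpine version, share config, and exposed port are assumptions to adapt):

```dockerfile
FROM alpine:3.18
RUN apk add --no-cache samba
# smb.conf is your own custom config defining the shares and authentication
COPY smb.conf /etc/samba/smb.conf
EXPOSE 445
# run smbd in the foreground so the container stays up
CMD ["smbd", "--foreground", "--no-process-group"]
```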
While we're making feature requests...

Or anything with journalling, so you can do back-in-time restores of a file...
ZFS - probably more complicated
Likely? Look at, for example, the ZeroTier package: MT released it at version 1.6.6 and it has never been updated since then... ZT is already at 1.10.x.
Yeah, both Btrfs and ZFS are great choices, but as the latter is a third-party add-on (originating from Sun Microsystems) it would likely be harder to maintain.
Same with WireGuard. If you cannot commit to maintaining it, leave it!
[admin@lab] /disk> add iscsi-address=192.168.0.100 iscsi-iqn=iqn.2004-04.com.qnap:ts-251:iscsi.lab.f84517 slot=nas type=iscsi
[admin@lab] /disk> pr
action timed out - try again, if error continues contact MikroTik support and send a supout file (13)
root@ubuntu:~# iscsiadm --mode discovery --type sendtargets --portal 192.168.0.100
192.168.0.100:3260,1 iqn.2004-04.com.qnap:ts-251:iscsi.lab.f84517
root@ubuntu:~# iscsiadm --mode node --targetname iqn.2004-04.com.qnap:ts-251:iscsi.lab.f84517 --portal 192.168.0.100 --login
Logging in to [iface: default, target: iqn.2004-04.com.qnap:ts-251:iscsi.lab.f84517, portal: 192.168.0.100,3260]
Login to [iface: default, target: iqn.2004-04.com.qnap:ts-251:iscsi.lab.f84517, portal: 192.168.0.100,3260] successful.
root@ubuntu:~# dmesg |tail
[ 1146.271269] scsi host1: iSCSI Initiator over TCP/IP
[ 1146.304530] scsi 1:0:0:0: Direct-Access QNAP iSCSI Storage 4.0 PQ: 0 ANSI: 5
[ 1146.310141] sd 1:0:0:0: Attached scsi generic sg1 type 0
[ 1146.311223] sd 1:0:0:0: [sdb] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
[ 1146.312015] sd 1:0:0:0: [sdb] Write Protect is off
[ 1146.312040] sd 1:0:0:0: [sdb] Mode Sense: 43 00 00 08
[ 1146.313021] sd 1:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 1146.324904] sd 1:0:0:0: [sdb] Preferred minimum I/O size 512 bytes
[ 1146.324929] sd 1:0:0:0: [sdb] Optimal transfer size 8388608 bytes
[ 1146.365988] sd 1:0:0:0: [sdb] Attached SCSI disk
With the latest kernels, Btrfs is strong enough to be used in a production solution (I use it for my own NAS: raid1c3 for metadata, raid5 for data). I think it would be the easiest to implement and the most compatible across CPU architectures (and it eats less RAM than ZFS).
If MikroTik is seriously considering porting a ZFS derivative to RouterOS, they clearly have no idea of the complexity involved in terms of kernel extensions, user-space management processes, and admin tools. All licensing issues must also be coordinated and approved by Mr. Ellison before even considering starting the implementation process.
In terms of complexity, ZFS is orders of magnitude more difficult to implement than BFD.
Take my advice: if you're really looking to implement some sort of volume management or advanced file system, pick something that already exists in the Linux sphere. And since this is obviously not MikroTik's primary area of expertise, please do your homework before making any promises.
I really hope for their own sake they don't try to enter the storage-solutions market.

I'd think the immediate need is so @Normis doesn't need a Synology NAS at his house, rather than a data-center solution...
Must have been an issue on the QNAP side, as an iSCSI disk shared from a Linux host works without problems on CHR.

@rameex43
Make RAID1 with NVMe/TCP drives.
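A rough sketch of what that could look like, assuming two NVMe/TCP targets; the addresses, NQNs, and slot names are illustrative, and the parameter names follow the /disk syntax listing shown earlier in the thread (exact values may differ on your version):

```
# connect two NVMe/TCP network drives
/disk add type=nvme-tcp nvme-tcp-address=192.168.1.10 nvme-tcp-name=nqn.2023-01.example:disk1 slot=nvme1
/disk add type=nvme-tcp nvme-tcp-address=192.168.1.11 nvme-tcp-name=nqn.2023-01.example:disk2 slot=nvme2
# create a RAID1 array and join both network drives to it
/disk add type=raid raid-type=raid1 slot=raid1
/disk set nvme1 raid-master=raid1
/disk set nvme2 raid-master=raid1
```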
@Babujnik
Thanks, we will try to fix this.
@sirbryan
Simple partitioning will be available in upcoming versions. We will check what we can do about LVM or ZFS.
@issme
RDMA is a different beast; it is currently not planned.
[user@lab] /container> add remote-image=pihole/pihole:latest interface=test envlist=pihole_envs root-dir=iscsi/pihole
[user@lab] /container> pr
0 name="961c061d-628b-4b29-9cc5-1b25152952f8" tag="pihole/pihole:latest" os="" arch="" interface=test envlist="pihole_envs" root-dir=iscsi/pihole mounts="" dns="" status=extracting
[user@lab] /container> pr
0 name="961c061d-628b-4b29-9cc5-1b25152952f8" tag="pihole/pihole:latest" os="linux" arch="amd64" interface=test envlist="pihole_envs" root-dir=iscsi/pihole mounts="" dns="" status=error
10:03:49 container,info,debug importing remote image: pihole/pihole, tag: latest
10:03:49 system,info item added by noyes
10:03:51 container,info,debug getting layer sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7
10:03:59 container,info,debug layer sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7 downloaded
10:04:07 container,info,debug was unable to import, container 961c061d-628b-4b29-9cc5-1b25152952f8
:global path "/nfs1/images/disk1"
# ...
/container add file=[:pick "$(path)/$(containername).tar" 1 999] ...
Maybe it would be nice if everything acted as if there was an implicit "chroot" to that directory.

LOL, 100% agree there should be some "implicit chroot", but I'm left wondering, since Docker has been around a while, how many people even know "chroot".
It would be awesome if you could include an S3-compatible client (Linux s3cmd maybe?) for auto backup with versioning and recovery.
How about just an "ln -s" or some kind of "alias"? For example, I'd still like to have /disk1 (or whatever) just be a symlink/alias to the busX-partY scheme.
mount 192.168.100.135:/var/nfs/general /nfs/general
/disk add type=nfs nfs-address=192.168.100.135:/var/nfs-shares/general slot=dockers
invalid value for argument nfs-address
/disk add type=nfs nfs-address=192.168.100.135 slot=dockers
file pr
Columns: NAME, TYPE, SIZE, CREATION-TIME
# NAME TYPE SIZE CREATION-TIME
0 auto-before-reset.backup backup 26.4KiB jan/01/1970 01:00:04
1 dockers disk apr/01/2023 10:55:33
2 dockers/var/nfs directory apr/01/2023 11:00:47
3 dockers/var/nfs/general directory apr/01/2023 11:22:50
4 dockers/mnt/test directory apr/01/2023 12:09:19
5 dockers/var directory apr/01/2023 11:00:47
6 dockers/mnt directory apr/01/2023 12:09:19
/disk
add nfs-address=192.168.100.135 nfs-share=/var/nfs/general slot=dockers type=nfs
Try with NFS v3, that works for me...

Hmm, I can't really force that on the NAS. I can enable/disable NFSv4.1, but other than that it's "enable or disable" NFS as a whole.