Now that RouterOS v7.1rc3 supports running containers, I took the time to create a Docker image to run RIPE Atlas Software Probe on my MikroTik router.
The image is based on the official code provided by RIPE NCC, with a few tweaks to make it run under Alpine Linux. Alpine is based on musl rather than glibc, which makes the image a lot smaller and faster to run in containers. I'm currently running it on my hAP ac³ and it is working great.
Using this image is very simple:
# Create veth interface
/interface/veth/add address=172.16.0.1/24 gateway=172.16.0.254 name=veth1
# Create bridge interface
/interface/bridge/add admin-mac=00:53:FF:1A:2B:3C auto-mac=no mtu=1500 name=bridge-docker
# Add IPv4 to bridge
/ip/address/add address=172.16.0.254/24 interface=bridge-docker
# Add veth to bridge
/interface/bridge/port/add bridge=bridge-docker ingress-filtering=no interface=veth1
# Add bridge to LAN (so it can do NAT)
/interface/list/member/add interface=bridge-docker list=LAN
# Create mounts for /var/atlas-probe/etc and /var/atlas-probe/status
/container/mounts/add dst=/var/atlas-probe/etc name=atlas-probe-etc src=atlas-probe-etc
/container/mounts/add dst=/var/atlas-probe/status name=atlas-probe-status src=atlas-probe-status
# Create container
/container/add dns=172.16.0.254 hostname=ripe-atlas interface=veth1 mounts=atlas-probe-etc,atlas-probe-status root-dir=ripe-atlas start-on-boot=yes remote-image=ctassisf/ripe-atlas-alpine:latest
# Start container
/container/start number=0
# Check if container is status=running
/container/print
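One extra note that may matter depending on your RouterOS version (not part of the original steps, just a hedged sketch): pulling a remote-image only works once a registry URL is configured, and newer releases also require containers to be allowed in device-mode. The disk1 slot below is a placeholder for whatever storage your router actually has:
# Allow containers in device-mode (newer RouterOS releases; needs confirmation via the power button or a cold reboot)
/system/device-mode/update container=yes
# Point the container subsystem at Docker Hub and give it a temporary directory for pulls (disk1 is a placeholder)
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/pull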
This is still a work in progress, so there may be something not quite right. I'm also interested in making IPv6 work (either natively or through NAT), but I haven't been able to do so yet.
You can see my probe running here. I’m also attaching some screenshots showing how my hAP ac³ is handling the load (it doesn’t make a dent lol).
Hello CTassisF
Thank you for the tutorial.
I’ve tried it and it works like a charm - until a reboot.
After a reboot I cannot start the container. WebFig/WinBox/CLI: nothing works, nothing happens, no messages, and the container stays stopped. There are also no messages in the log (logging is enabled on the container and as a topic in /system/logging).
I'm storing the image tar file, as well as the mount points, on an external USB stick (ext4 filesystem). I'm using a hAP ac².
Have you ever encountered anything like this? I saw some messages on the forum, but those bugs were resolved in the 7.5beta releases; I'm using 7.6 stable now.
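For reference, this is roughly how I have the logging enabled (a sketch; the container number 0 is just what it happens to be on my router):
# Enable logging on the container entry itself
/container/set 0 logging=yes
# Send the container topic to the in-memory log
/system/logging/add topics=container action=memory
# Inspect the log after a failed start attempt
/log/print where topics~"container"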
It works great on a hAP ax², but I have noticed that there are a lot of writes to the disk, due to the Atlas probe constantly writing temporary data to /var/atlasdata.
I think it would be best to mount /var/atlasdata as a 32 MB tmpfs so that the internal flash doesn't die too soon.
I have measured that without tmpfs it does 300,000 sector writes daily, so it's safe to assume it could kill MikroTik flash in just a few months; the same goes for lower-quality pendrives.
The temporary solution I have found is to make a RAM disk on the MikroTik itself, because doing it from inside the container turned out to be harder than I thought without running the container as root.
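A rough sketch of what I mean, assuming a RouterOS release that supports tmpfs in the /disk menu (the slot name and exact parameters may differ on your version):
# Create a 32 MB RAM disk so the probe's temporary data never touches flash
/disk/add type=tmpfs slot=tmpfs1 tmpfs-max-size=32M
# Mount it over the write-heavy directory inside the container
/container/mounts/add dst=/var/atlasdata name=atlas-data src=/tmpfs1
# Stop and start (or recreate) the container so the new mount takes effect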
Good job, I will try it when I have some free time. My Atlas probes are currently running on RPis; maybe it's time to migrate.
BTW, do you know of a working “looking glass” container solution for MikroTik devices?
Thanks for posting, CTassisF. I'm very interested in this project, since I'd like to run a software probe on my router, and this package seems to be exactly what I'm looking for.
I set up a veth and a bridge and connected everything on the network side. Then I added a container and pulled your image from the Docker registry.
This does appear to work, and I can start the container and see that it’s running.
I'm a little confused about how the container gets a unique key and how I can get that key to persist. It appears the key is generated randomly when the container first starts. What happens when there is a new version of the software probe and I pull a new image from Docker Hub? Is my key lost, and do I just have to register the new one?
Lastly, it appears the image on Docker Hub contains Alpine 3.20.3 rather than the most recent release, 3.21.2. Is this expected?
You're correct: the key is randomly generated on the container's first start. The key is stored in a directory inside the container, so you have to use mounts (volumes) to persist it across version updates, which require recreating the container.
Recently, there was a change to how these mounts should be created. Here are the correct instructions:
/probe/etc/ripe-atlas is where the key and other config data are stored. /probe/var/run/ripe-atlas/status is where the status of currently running tests is stored. Mounting /probe/var/spool/ripe-atlas/data as a volume is optional but recommended: it is a directory with a lot of write activity, so it is better to put it on a tmpfs than to fry your USB storage.
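In the same form as the original walkthrough, the updated mounts could look roughly like this (a sketch: the mount names are placeholders, and the tmpfs source assumes a RAM disk like the one described earlier in the thread):
# Persist the probe key and configuration across container recreation
/container/mounts/add dst=/probe/etc/ripe-atlas name=atlas-etc src=atlas-etc
# Persist the status of currently running tests
/container/mounts/add dst=/probe/var/run/ripe-atlas/status name=atlas-status src=atlas-status
# Optional but recommended: back the write-heavy data directory with a tmpfs instead of flash
/container/mounts/add dst=/probe/var/spool/ripe-atlas/data name=atlas-data src=/tmpfs1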
Please test with these mounts/volumes and let me know if it works for you.
Regarding Alpine 3.20: I'm waiting on a pull request that fixes the original ripe-atlas-software-probe code so I can build it against Alpine Linux 3.21.