CCR1xxx with containers

If I’m able to build a linux with tile-gx support, will my tile CCR1 be able to run containers?

Do you have a TILE build chain to create the OCI images?

If I had it, I would not be asking the question, would I?

I’m asking whether Mikrotik is going to support the Tile architecture now that it has been bought by Nvidia, which means the end of the Tile arch. Mikrotik has invested heavily in this architecture, so the question is: will the Tile arch get container support? It is a simple binary question, yes or no (and “maybe” in this case means no).

Since ppc64le and mips64le are already supported, I don’t see any technical reason why Tile could not be supported. Mikrotik has to maintain the Tile kernel tree themselves anyway.

All of the CCR1xxx series have been discontinued. Since they are not an active product any more, I doubt any development time will be spent on new features for that platform. They will be supported (according to Mikrotik) for at least 5 years from date of purchase. These updates do not guarantee new functionality.

I see. Thank you for the information.

No because Mikrotik never released the Container package for them.. I was not happy about this decision… My CCR1036 and CCR1016 routers with 16GB RAM would have made great hosts..

It seemed to be planned up until 7.0 actually launched..

You’ve missed the point of my Socratic hint. What I wanted you to think about and realize is that even had MikroTik waved a magic wand and caused container support to appear in the TILE builds of RouterOS, how would you build the OCI images it needs to consume? No images, no running containers.

The current list of available build platforms for Docker is:


{
  "supported": [
    "linux/amd64",
    "linux/arm64",
    "linux/riscv64",
    "linux/ppc64le",
    "linux/s390x",
    "linux/386",
    "linux/mips64le",
    "linux/mips64",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "aarch64",
    "arm",
    "mips64",
    "mips64le",
    "ppc64le",
    "riscv64",
    "s390x"
  ]
}

(The command that produces that output is “docker run --privileged --rm tonistiigi/binfmt --install all”.)

Without TILE CPU support from Docker, there can be no OCI images for RouterOS to consume short of getting someone else (who?) to provide a complete build toolchain.

Note that this also answers the far more common questions about MIPS CPUs. There is upstream support for 64-bit MIPS CPUs, but not for the tiny 32-bit MIPS CPUs MT uses in so many of their products.

Chicken-Egg…

With no Tile systems able to run Docker, there is no reason to create the toolchain.

The system has to be able to run Docker first; otherwise there is no point. It isn’t possible to run containers without the package, so even if someone provided the build toolchain, it still couldn’t work.

Building OCI images does not require Docker Engine to run on the target CPU. BuildKit allows cross-compiling from any supported host, provided you’ve installed the CPU emulators using the instructions linked from my prior post.
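To make the cross-compilation point concrete, here is a hypothetical multi-platform Dockerfile of the sort buildx consumes. Go is used only because its compiler can target foreign CPUs natively; the image and path names are my own illustration, not anything from RouterOS:

```dockerfile
# Sketch of a buildx cross-build. BUILDPLATFORM, TARGETOS, and TARGETARCH
# are set automatically by BuildKit for each requested platform.
FROM --platform=$BUILDPLATFORM golang:1.22 AS build
ARG TARGETOS TARGETARCH
WORKDIR /src
COPY . .
# The build stage runs on the host CPU; only the output targets the foreign one.
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

You would invoke it with something like “docker buildx build --platform linux/arm64,linux/ppc64le .”. Note that “linux/tile” cannot appear in that list: neither Go’s GOARCH set, QEMU, nor Docker’s platform list knows how to target it.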

My point is, the set of available emulators for doing this cross-compilation does not include one for TILE CPUs.

In principle, someone could produce the QEMU and binfmt stuff needed to support this, but Docker, Inc. has not done so. It is possible for a third party to do it, but until someone does, there is no point in MT producing a corresponding container.npk package for that platform.

The only hope I see stems from the fact that MT clearly has a TILE CPU cross-compilation toolchain internally, for their own uses. While I doubt that it’s set up to integrate with Docker Engine directly as-is, they could do the work to push that up through the QEMU and Linux kernel binfmt projects so that buildx could then consume it. The next question this raises, though, is what is their incentive for doing that for someone else’s obsolete CPU architecture?

Alright, so let’s say I do that because I want to run Docker on my CCR1036. It will do me no good, because Mikrotik still won’t allow it to run. Put in the hours to get it ready, just to hope it gets enabled?

If the only thing stopping me was me (or someone else) putting in the effort, then it would be worth doing to make it happen.

Correction: you want to run OCI containers on your CCR1xxx. The tooling produced by Docker, Inc can produce and consume OCI images, but OCI is not “Docker”, and Docker isn’t the only way to produce these OCI images.

This is not a pointlessly niggly distinction. There is no sensible reason to believe that RouterOS contains any substantial amount of code written by Docker, Inc. employees. There may be some kernel patches and such, but nothing approaching a container runtime. The most barebones expression of Docker’s runtime is runc, which is about 11 megs installed on my nearest-to-hand build platform, whereas the container.npk package is about one-hundredth that size.

The next-nearest competitors I’m aware of are crun and systemd-container at about 1.5 megs each. RouterOS’s container runtime is stripped-down even by these standards.


If the only thing stopping me was me (or someone else) putting in the effort, then it would be worth doing to make it happen.

Have you tried proposing that to MT and have a rejection message in-hand, or are you presuming failure from the start?

Have you ever talked to MT support to suggest a feature? How did that turn out for you?

Why/what would I need to propose? That if the package were available, someone would make it work?

As I said, there is no reason to invest the time when the end-goal can’t be achieved, regardless of the time/money invested.

You’d need to get TILE support into QEMU, then get the Linux kernel’s binfmt feature to recognize TILE binaries and send them down to QEMU for CPU emulation. This is how cross-compilation works under both Docker’s BuildKit and Red Hat’s Podman, at the least.
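For context on what that binfmt registration looks like, the entry below is the one QEMU’s qemu-binfmt-conf.sh installs for aarch64 (the exact magic/mask bytes are reproduced from memory, so treat them as illustrative). There is no equivalent entry to write for TILE, because there is no qemu-tilegx interpreter to point it at:

```
# Format written to /proc/sys/fs/binfmt_misc/register:
#   :name:type:offset:magic:mask:interpreter:flags
# The magic matches the ELF header (\x7fELF) plus e_machine 0xb7 (AArch64).
:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F
```

With that entry registered, the kernel hands any aarch64 ELF binary to qemu-aarch64-static transparently, which is exactly the mechanism BuildKit relies on for foreign-CPU build steps.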

QEMU does not currently support TILE. I’ve found some 2015 patches for it, but as far as I can tell, it is not the case that they were released in QEMU for a time, then later removed.

Another way to come to the same conclusion on Red Hat-style Linuxes is to say:


$ dnf search qemu | grep user-static

That yields your set of possible cross-compilation targets on that system, available to Podman, the premiere OCI-compatible alternative to Docker.


Have you ever talked to MT support to suggest a feature?

Yes. RouterOS 7.14 contains a change to the container runtime that I arm-twisted them into implementing. It wasn’t easy, but it did land.

Oh, and one more minor detail: you’d also need to provide at least one container base image for TILE, without which you wouldn’t have the TILE compiler and library binaries for QEMU to run during the OCI image build steps.

Note, for example, that Alpine — a very popular container runtime base — is not yet ported to TILE.

This is all a tremendous amount of work, but if someone were to do it and hand MT a working image — one that instantiated and successfully ran under QEMU — they would have a hard time arguing against producing a container.npk for TILE.

So: a tremendous amount of work to eventually, maybe, get container.npk. No one is going to undertake that for a “maybe, eventually.”

What is the argument against releasing it now, knowing no one can use it?

I can’t even get fixes for features that don’t follow the provided documentation. Asking Mikrotik for features is a waste of time, and these are small adjustments to features that already exist.

Then this “no one” is going to get exactly what they deserve: nothing.

MT’s incentive to do all that work is zero. If you don’t show MT that it can be done, they won’t take the final step of building container.npk on TILE for you.


What is the argument against releasing it now, knowing no one can use it?

Doesn’t that question answer itself? Until you have a build of QEMU supporting TILE, a useful base image for TILE to bootstrap other container images with, and a binfmt patch that ties the two together, a TILE build of container.npk has zero value.

The way I read your replies is that you think all this is MikroTik’s fault, which is an odd stance, since MikroTik runs none of the prerequisite projects; only the final linchpin, building container.npk itself, is theirs.


Asking Mikrotik for features is a waste of time

Yes, that’s why the RouterOS changelogs are zero in length and infinitely far between.

Oh, wait…that’s not the case at all, is it?

As I already told you, RouterOS 7.14 not only has a change I wanted, implemented at my direct behest, it’s in relation to the container runtime. It can be done.

I didn’t say there were no feature updates and changes being made; I said asking for a feature change is a waste of time. Mikrotik does what they want, feature-wise. The last time I asked for a feature to follow the provided documentation, the official response was “there are no plans to fix”. It was a BGP option/setting that worked for IPv4 but not IPv6.

I have accepted that containers will never work on the Tile architecture, but I am still disappointed in the decision.

I am not going to put a thousand hours into a project for Mikrotik to maybe allow it after it is working. I am disappointed that Mikrotik pulled support for containers on the Tile architecture. The higher-end CCRs seemed like the ideal place for containers, the CCR1036 and CCR1016 in particular. Just don’t use the MicroSD slot for storage, though; it is extremely slow.

I requested that the configuration import/export include hashed user passwords around 6.44; that is a small, minor change.

I’ve asked for proper changelogs since I started using RouterOS. That hasn’t happened either; some changes are listed, many are not.

The lack of changelogs is one of the big reasons I’m trying to get my network off of RouterOS. I’m not sure what to move to yet, but something else.

What I am trying to get across to you is that it isn’t MikroTik’s decision. They could build container.npk for TILE today, and it would still not get you containers on TILE.


Mikrotik pulled support for containers for the Tile architecture..

“Pulled?” It never existed.

What they did is the same thing I did above: looked for the available tooling, found none, and decided not to waste development resources providing it.


The higher end CCRs seemed like the ideal place for containers

If that’s the case, then why didn’t Docker do it for all of the other TILE-based host types? Or Podman? Or Rancher?

For that matter, why does the keystone project upstream from them all (QEMU) not support TILE?

Answer: it’s a dead architecture with no good reason to keep pouring development effort into it, none of which is MikroTik’s fault.

Maybe, but without it, it CAN’T happen.

It was one of the promised v7 features though, up until v7 was released, then it became just ARM/x86.

I don’t work there, I wasn’t in those meetings. They have added a number of architectures since they started.

I accept that containers will never exist on Mikrotik’s Tile platform, but I am still disappointed by it..

The same argument applies to QEMU TILE support and the requisite base container image needed to bootstrap the first practical image. Why is MT to blame for not providing the last-0.1% bit when none of the rest exists?


“Pulled?” It never existed.

It was one of the promised v7 features though

[citation needed]


They have added a number of architectures since they started.

Yes…after QEMU support for the CPU in question and the base images were provided.