Is there a list of drivers supported by x86, CHR-x86, and CHR-ARM64?

I’m testing a number of small router options, including Intel N100 and N150 boxes, as well as Raspberry Pi CM5, with 2.5G and 5G interfaces.

I have some Realtek 2.5G USB dongles that are recognized and usable from x86 on the N150, but I can’t seem to get them to load with USB passthrough on the CHR running on the Pi. (I haven’t tried them with passthrough on Intel yet.)

CHR on the Pi is freaking out when I bridge all the devices through (a 5G and two 2.5Gs, each port on its own bridge), so I was hoping passthrough would work. The fact that it doesn’t leads me to believe the ARM CHR doesn’t have all the USB drivers that bare-metal x86 has. And the extra-nics package doesn’t help either.

Do you install the CHR on Pi bare metal?

I know it’s possible to run PVE or ESXi on a Pi, then run the CHR ARM64 inside it. But I’d prefer to write the CHR image to an SD card and boot the Pi from it.

I have not figured out how to get that to work. I do have it working on an Ampere processor, but CHR ARM64 requires UEFI boot, and UEFI on the RPi5 is a bit of a challenge at the moment.

I thought the idea of CHR was to let the host OS/hypervisor handle the hardware and present generic interfaces (e.g. virtio) to the guest, being CHR in this instance.

This is turning into a fascinating thread…

As far as addressing the original question goes, I’m not aware that MT has ever published an up-to-date list of supported third-party hardware for any given version. But if you extract the squashfs images from the NPKs, you can look at the contents of /lib/modules/* yourself to see at least the exact list of kernel modules each release ships with. Whether a particular version of a given kernel module supports your given NIC, however, is hard to say.
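If you want to try that inspection yourself, a rough sketch follows. It assumes binwalk and squashfs-tools are installed and that NPKs carry an embedded squashfs image (which they do, though the exact offsets vary by release); the NPK filename is hypothetical:

```shell
# Carve the embedded squashfs out of the NPK (filename is an example)
binwalk -e routeros-arm64.npk
cd _routeros-arm64.npk.extracted

# Unpack the squashfs root filesystem
unsquashfs -d rootfs *.squashfs

# List every kernel module the release ships with
ls rootfs/lib/modules/*/

# Check for a specific driver, e.g. the Realtek USB NIC module
find rootfs -name 'r8152*'
```

Seeing a module present only tells you it ships; whether that build recognizes your particular chipset revision is a separate question.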

For example: I have no experience with Realtek USB NICs, but some Googling tells me that the RTL8156 is the 2.5G USB chipset, and that more recent versions of the r8152 driver/module have support for the 8156. It sounds like this didn’t show up, though, until a few Linux kernel mainline versions well after the 5.6.x used by ROS7, so if the RTL8156 is supported by ROS7 on x86, then MT must have backported that driver version or the later chipset support into their kernel. I can confirm that r8152.ko also makes an appearance in the ARM64 version of ROS7 (in the base system package, not extra-nics); however, if it won’t recognize the USB dongle, then perhaps this build of r8152.ko is not being made from the same sources as the x86 one? (That would be weird, though, IMO…) I can also confirm that extra-nics does not include any additional USB NIC drivers.

Also just FYI: though “x86” and “CHR” releases were slightly more differentiated back in ROS6 than they are in 7, even back with 6 the differentiation was more about which kernel you were running; there weren’t really different “x86” vs “CHR” builds being made. ROS6 x86 shipped with 3 kernels (and 3 sets of kernel modules): 32-bit uniprocessor, 32-bit SMP, and 64-bit SMP.

The 64-bit SMP kernel made its debut with CHR (and in fact ROS6 CHR was x86_64 only), but actually shipped with every subsequent release of “x86” as well; it was just hidden from view in most cases unless you were running the CHR release. The 64-bit kernel had more drivers built for it than the 32-bit kernel did (notably, the PV NIC drivers for various hypervisors), and there were ways to activate the 64-bit kernel on a bare-metal install, at which point you could take advantage of any extra driver support the x86_64 version provided. So you could, for example, install normal “x86” within a VM instance, license it normally, but switch to the 64-bit kernel and thus have all the functionality and benefits of CHR, including working PV NICs for your favorite hypervisors.

As of ROS7, though, driver support for “x86” and “CHR” is unified (and in fact I believe “x86” is 64-bit-only now anyway).


I would love clarification on what this poster was communicating. When I first read it, I thought he was saying that he DOES write the raw CHR image to an SD card and successfully boots it bare-metal. But it doesn’t sound like @sirbryan interpreted it that way; instead he answered it as if it were a question about what’s possible, not a statement of what is.

I also don’t have access to any non-MT / third-party ARM64 hardware platforms to run any tests on myself. But…man, if there are non-Ampere ARMv8 platforms that the ARM64 version of ROS can boot bare-metal on, that changes everything! As sirbryan hints at, even assuming ARM64 ROS has no Ampere-specific instruction dependencies and contains no runtime checks to ensure it’s actually running on either an Ampere chip or an official RB/CCR device, booting on any third-party SBC would still require UEFI support on that hardware.

In the past, though, on x86, I have tried playing around with booting a CHR image on bare metal, and it doesn’t work. It appears that MT actually added some checks to see if it is running as a hypervisor guest, and if those checks fail, it artificially halts / refuses to continue booting. Since I don’t have access to third-party ARM64 hardware, I can’t test it / know for sure, but I would not be surprised if the same were true for CHR-ARM64.

This is too bad, since the difference between CHR and non-CHR is extremely skin deep: both use the exact same kernels / binaries / squashfs images, and although I haven’t found its location yet, there just appears to be some flag set somewhere on the filesystem that tells ROS whether it is installed in “CHR” mode or “legacy” mode. The only real difference in behavior between the two (aside from the hypervisor check) is the licensing.

There are positives and negatives to both licensing models, and it would be nice to be given the option to use the CHR licensing even when running bare-metal, because one of its perks is that licenses are transferable. (I honestly don’t see what the downside would be to MT in allowing people to choose CHR licensing on bare-metal installs…what risks would that expose MT’s business model to?)


There are of course different goals that different people might have when running a virtualized router, all equally valid. The one you brought up (of abstracting the hardware differences away) is definitely one valid goal. But virtio/PV interfaces and drivers have their own downsides, notably that they are usually less performant than being able to talk directly to the hardware itself, using the native drivers for that hardware. So some people who just want to have a single host do multiple things & run multiple guests with virtualization (with one of them being a router), but otherwise want to squeeze the most performance out of it, might choose to dedicate networking hardware to the router guest by passing certain interfaces to that guest directly.

Then there’s also SR-IOV, which is a kind of “best of both worlds” approach since the NIC can be shared amongst multiple guests without a paravirtualization layer adding any overhead. But this is specific/limited to PCIe, and does require both explicit support on the part of the PCIe device, as well as specific driver support on the part of the guest (each SR-IOV capable network interface has its own vendor-specific driver running on both the host side and the guest side).
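For the curious, on Linux hosts SR-IOV virtual functions are typically created through sysfs. A minimal sketch, assuming an SR-IOV-capable NIC named eth0 (the interface name and VF count are assumptions), IOMMU enabled in firmware, and the vendor's VF driver available in the guest:

```shell
# How many VFs does this NIC advertise?
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create 4 virtual functions on the physical function
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs now show up as their own PCIe functions,
# each of which can be passed through to a different guest
lspci | grep -i 'virtual function'
ip link show eth0        # VF MAC/VLAN settings hang off the PF here
```

Each VF passed to a guest still needs that vendor's VF driver inside the guest, which loops back to the driver-availability question this whole thread is about.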

About the RTL8156 (and all the RTL851x devices that use r8152.ko): Realtek provides up-to-date GPL-2.0 driver source code for Linux that can even be built for kernel 2.6.x: https://www.realtek.com/Download/List?cate_id=585

I guess MikroTik uses this instead of backporting from the newer kernel source tree.
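If you want to sanity-check a dongle on a generic Linux box (not ROS) against that vendor source, the build is the usual out-of-tree module dance. A sketch, assuming kernel headers are installed and you grabbed the tarball from the Realtek link above (archive name is a guess):

```shell
# Unpack Realtek's out-of-tree r8152 source
tar xf r8152-*.tar.bz2 && cd r8152-*

# Build against the running kernel and install the module
make
sudo make install
sudo depmod -a

# Load it and watch the kernel claim the RTL8156 dongle
sudo modprobe r8152
dmesg | tail
```

If the vendor driver claims the dongle on stock Linux but the same hardware is invisible to CHR-ARM64, that points at the ROS module build rather than the hardware.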

I was responding to the first question, of whether I run it bare metal or not. Based on the context, I assumed the author meant “But I would prefer to install the CHR image on an SD card and boot into the Pi.”

I have a HoneyComb LX2 that does support UEFI boot, and I was able to get ARM64 RouterOS 7 to boot just fine. Unfortunately, it doesn’t see the LX2160A’s SFP+ NICs (it would be amazing if it did). If you want anything to work on that board, you have to use the PCIe slot or USB3 ports. For that one, I’ll have to use Proxmox/KVM, manually initialize the NICs (NXP’s stuff is kind of funky), and bridge those through to RouterOS.


There is a post somewhere where @normis says there should be no problem running CHR “natively” on x86, and that the ARM64 CHR ISO is the only way to install RouterOS on Ampere (and other UEFI-capable ARM64 systems). I don’t recall exactly how I did it, but I did get CHR to work on x86 during my various tests.


In situations where CHR won’t boot natively on the hardware (e.g. the Raspberry Pi), I use the hypervisor as a shell in which to start RouterOS. But where possible, I still want RouterOS to have unfettered access to the NICs to make up (as best I can) for the losses the hypervisor introduces.
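Under Proxmox that "unfettered access" boils down to device passthrough. A sketch of the relevant commands, where the VM ID, PCI address, and USB vendor:product ID are all assumptions (0bda:8156 is how RTL8156 dongles commonly enumerate):

```shell
# Pass a whole PCIe NIC through to the CHR guest (VM ID 100)
qm set 100 --hostpci0 0000:01:00.0

# Pass a USB dongle through by vendor:product ID
qm set 100 --usb0 host=0bda:8156
```

The catch, as this thread shows: passthrough only moves the driver problem into the guest. The device appears to CHR as raw hardware, so CHR itself must ship a driver for it; no driver in the guest, no interface.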

Ideally, I’d love it if MikroTik could make CHR boot on Raspberry Pi 4 & 5. I don’t know if they view it as cutting into their hardware margins, but the RPi 5 is a formidable device, with a 2+GHz quad-core CPU (rivaling a hAP AX3, RB4011/5009/CCR2004) and optional 2, 4, 8, or 16GB of RAM and a 1x PCIe slot, perfect for up to 6Gbps of routing. With their low power requirements, you could power them over Ethernet from a L3HW capable switch like the NetPower16, and send CPU-bound tasks (like Wireguard) to it. And with some of these high-density compute module boards, you could put a number of these routers in a small footprint, running tasks like BGP route reflectors, VPN concentrators, firewalls, etc. If RouterOS figured out a way to expose the GPIO ports for use by scripts, they would make for amazing little telemetry boxes that could be managed by The Dude (or whatever else).

Total loss should be under 10% if the hypervisor is getting all the hardware virtualization support it needs. In terms of your project, this doesn’t mean 2.5G - 10% = 2.25G, it means it needs 10% more CPU to drive the card to its limits. Therefore, the only way this doesn’t work out for you is if it’s pegging the CPU and you need to squeeze those last fractional gigabits out another way.
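To make that arithmetic concrete (the baseline CPU figure here is entirely made up for illustration):

```shell
# A ~10% hypervisor overhead costs extra CPU; it does not shave
# 10% off the line rate the NIC can reach.
baseline_cpu=60                            # assumed % CPU to push 2.5G bare-metal
virt_cpu=$(( baseline_cpu * 110 / 100 ))   # same 2.5G, ~10% more CPU virtualized
echo "bare-metal: ${baseline_cpu}% CPU, virtualized: ${virt_cpu}% CPU, both at 2.5G"
```

So virtualization only caps your throughput once the inflated CPU demand actually hits 100%.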

My 10% number is based both on non-CHR testing with hypervisors here and on CHR reports seen here on the forum. Modern hardware virtualization has gotten really steenkin’ good.

To be fair, most of that testing has been on x86_64. My narrower experience with ARM64 virtualization has been on Apple Silicon, a very different platform from the Pi5 when it comes to hypervisors.

With CHR on Proxmox (KVM/QEMU) on an RPi 5, I can get up to 3-4Gbps in speed tests, both generated by the router and passing through it (router-on-a-stick), on both 5Gbps and 10Gbps PCIe cards. It fills up a 2.5Gbps card no problem. (2.5G USB adapters are a mixed bag, as mentioned earlier.)

The virtualization overhead feels like more than 10%. From Linux natively, I can peg the PCIe bus at 6Gbps with a one-way iperf3 to/from my Mac using a 10Gbps adapter. I haven’t tried OpenWRT, or installing FRR/Quagga and routing through that, but it’s easy to imagine the performance would be significantly better.

Of course, the next step would be testing PPS in all scenarios, to uncover card and CPU limitations. But for my original use case as a router for a small relay site with 10-30 subscribers, 2.5-5Gbps would be just fine.
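For anyone repeating these measurements, the throughput and PPS probes can all be driven with stock iperf3; the server IP below is an assumption:

```shell
# Single-stream TCP, 30 seconds
iperf3 -c 192.168.88.10 -t 30

# Four parallel streams, to rule out a single-flow bottleneck
iperf3 -c 192.168.88.10 -t 30 -P 4

# Reverse direction (server sends), to test the other path
iperf3 -c 192.168.88.10 -t 30 -R

# Small-packet UDP at a fixed offered load: a rough PPS stress test
iperf3 -c 192.168.88.10 -u -b 2.5G -l 64
```

The small-packet UDP run is the one that tends to expose CPU and virtualization limits long before the big-frame TCP numbers do.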

My org played around with CHR back before ROS7 came out, but based on our experience running it on HyperV (using paravirtualized network interfaces), we experienced very strange and sporadic packet loss, seemingly tied to “microbursting”, that we have never experienced on bare metal, even when the host was barely loaded and there were plenty of available CPU cycles and bandwidth to go around. When we briefly tried to load production traffic onto it after initial labs looked good, we quickly received many complaints from end-users who started noticing problems. Although we continue to use virtual ROS instances for non-bandwidth-intensive tasks, the whole experience really soured us on using virtualization for anything requiring good, no-compromises forwarding performance…y’know, the main purpose of a router, heh.

We picked HyperV for that test based on the IP Architechs talk on CHR route table ingest performance, where HV seemed to be the clear winner. Perhaps we chose poorly: although HV was clearly head-and-shoulders faster than the other hypervisors they tested at that particular task, it did the worst in actual packet forwarding performance (though their tests seemed to indicate it was at least “good enough”). What is puzzling about their experience, though, is just how much running ROS under most hypervisors RADICALLY slowed down BGP convergence times, relative to both HyperV and bare metal. That’s a major red flag. But then again, so is the relative forwarding performance of ROS under HyperV vs. the other contenders. Granted, we haven’t re-tested in a while, and maybe things have improved either on the ROS side or within the various hypervisors, but at least at the time, it was obvious there were many hidden and not-well-understood costs to virtualizing ROS…and the reasons for them have never been explained to my satisfaction by anyone to this day.