This is turning into a fascinating thread…
As far as addressing the original question goes, I’m not aware of MT ever publishing an up-to-date list of supported third-party hardware for any given version. But if you extract the squashfs images from the NPKs, you can take a look at the contents of /lib/modules/* yourself and see at least the exact list of kernel modules each release ships with. Whether a particular version of a given kernel module actually supports your particular NIC, however, is harder to say.
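If it helps, here’s roughly how I’d script that inspection step. This is just a sketch under a couple of assumptions: that the NPK embeds a single little-endian squashfs that can be located by its “hsqs” magic, and that you have unsquashfs available on the machine doing the extraction.

```python
# Sketch: carve the squashfs out of a RouterOS NPK and list its kernel modules.
# Assumes one embedded little-endian squashfs ("hsqs" magic) and that the
# `unsquashfs` tool is installed; nothing here is an official MT layout spec.
import subprocess
import sys

npk_path = sys.argv[1]                      # e.g. some routeros-*.npk you downloaded
data = open(npk_path, "rb").read()

offset = data.find(b"hsqs")                 # little-endian squashfs superblock magic
if offset < 0:
    sys.exit("no squashfs magic found -- the layout assumption doesn't hold")

with open("rootfs.sqfs", "wb") as out:      # carve from the magic to end of file
    out.write(data[offset:])

# List only the kernel-module paths so you can see which drivers ship.
listing = subprocess.run(
    ["unsquashfs", "-l", "rootfs.sqfs"],
    capture_output=True, text=True, check=True,
).stdout
for line in listing.splitlines():
    if "/lib/modules/" in line and line.endswith(".ko"):
        print(line)
```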
For example: I have no experience with Realtek USB NICs, but some Googling tells me that the RTL8156 is the 2.5G USB chipset, and that more recent versions of the r8152 driver/module have support for the 8156. It sounds like that support didn’t show up, though, until a few mainline kernel versions after the 5.6.x used by ROS7, so if the RTL8156 is supported by ROS7 on x86, then MT must have backported that driver version (or at least the later chipset support) into their kernel. I can confirm that r8152.ko also makes an appearance in the ARM64 version of ROS7 (in the base system package, not extra-nics); however, if you can’t get it to recognize the USB dongle, then perhaps this build of r8152.ko is not being made from the same sources as the one for x86? (That would be weird, though, IMO…) I can also confirm that extra-nics does not include any additional USB NIC drivers.
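One way to check whether a shipped r8152.ko even claims the 8156 is to look at its module aliases once you’ve extracted it. A sketch, with the caveats that the module path / kernel version below are made up, and that 0bda:8156 is the USB ID I’d expect for that chipset (verify against lsusb on your actual dongle):

```python
# Sketch: ask modinfo whether an extracted r8152.ko advertises a USB alias
# for the RTL8156. The path / kernel version below are hypothetical placeholders.
import subprocess

ko_path = "squashfs-root/lib/modules/5.6.3/kernel/drivers/net/usb/r8152.ko"

info = subprocess.run(
    ["modinfo", ko_path], capture_output=True, text=True, check=True
).stdout
aliases = [l for l in info.splitlines() if l.startswith("alias:")]

# USB aliases look like usb:v0BDAp8156d*... -- vendor 0BDA (Realtek), product 8156.
if any("v0BDAp8156" in a for a in aliases):
    print("this r8152.ko claims the RTL8156")
else:
    print("no RTL8156 alias found -- likely built from an older driver version")
```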
Also just FYI: though “x86” and “CHR” releases were slightly more differentiated back in ROS6 than they are in 7, even then the differentiation was more about which kernel you were running; there weren’t really separate “x86” vs “CHR” builds being made. ROS6 x86 shipped with 3 kernels (& 3 sets of kernel modules): 32-bit uniprocessor, 32-bit SMP, and 64-bit SMP. The 64-bit SMP kernel made its debut with CHR (and in fact ROS6 CHR was x86_64-only), but actually shipped with every subsequent release of “x86” as well; it was just hidden from view in most cases unless you were running the CHR release. The 64-bit kernel had more drivers built for it than the 32-bit kernels did (notably, the PV NIC drivers for various hypervisors), and there were ways to activate the 64-bit kernel on a bare-metal install, at which point you could take advantage of any extra driver support the x86_64 version provided. So you could, for example, install normal “x86” within a VM instance, license it normally, but switch to the 64-bit kernel and thus get all the functionality and benefits of CHR, including working PV NICs for your favorite hypervisors. As of ROS7, though, driver support for “x86” and “CHR” is unified (and in fact I believe “x86” is 64-bit-only now anyway).
I would love clarification on what this poster was communicating. When I first read it, I thought he was saying that he DOES write the raw CHR image to an SD card & successfully boots it bare-metal. But it doesn’t sound like @sirbryan interpreted it that way; instead he answered it as if it were a question about what’s possible, rather than a statement of something already working.
I also don’t have access to any non-MT / third-party ARM64 hardware platforms to run any tests on myself. But…man, if there are non-Ampere ARMv8 platforms that the ARM64 version of ROS can boot bare-metal on, that changes everything! As sirbryan hints at, I’m sure that even assuming ARM64 ROS has no Ampere-specific instruction dependencies & also contains no runtime checks that it’s actually running on either an Ampere chip or an official RB/CCR device, for it to work on any third-party SBCs at all would still require UEFI support on that hardware.
In the past, though, on x86, I have tried playing around with booting a CHR image on bare metal, and it doesn’t work. It appears that MT actually added some checks for whether it is running as a hypervisor guest, and if those checks fail, it artificially halts / refuses to continue booting. Since I don’t have access to third-party ARM64 hardware, I can’t test it / know for sure, but I would not be surprised if the same were true for CHR-ARM64. This is too bad, since the difference between CHR and non-CHR is extremely skin-deep…both use the exact same kernels / binaries / squashfs images, and although I haven’t found the location of it yet, it just appears there is some flag set on the filesystem somewhere that tells ROS whether it is installed in “CHR” mode or “legacy” mode. The only real difference in behavior between the two (aside from the hypervisor check) is the licensing. There are positives and negatives to both licensing models, and it would be nice to be given the option to use the CHR licensing even when running bare-metal, because one of its perks is that licenses are transferable. (I honestly don’t see what the downside would be to MT in allowing people to choose the CHR licensing on bare-metal installs…what risks would that expose MT’s business model to?)
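For what it’s worth, coming back to that guest check for a second: I obviously don’t know what MT’s actual test looks like, but on x86 Linux the usual signal is simply the CPUID “hypervisor present” bit, which the kernel exposes as a flag in /proc/cpuinfo. A generic sketch only (not MT’s code, and the ARM64 equivalent would look different):

```python
# Sketch of a generic "am I running as a hypervisor guest?" check on x86 Linux.
# This is NOT MikroTik's actual test -- just the common signal: the CPUID
# "hypervisor present" bit shows up as the `hypervisor` flag in /proc/cpuinfo.

def running_under_hypervisor() -> bool:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return "hypervisor" in line.split()
    return False

if __name__ == "__main__":
    print("guest" if running_under_hypervisor() else "bare metal (or flag not exposed)")
```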
There are of course different goals that different people might have when running a virtualized router, all equally valid. The one you brought up (abstracting the hardware differences away) is definitely one valid goal. But virtio/PV interfaces and drivers have their own downsides, notably that they are usually less performant than talking to the hardware directly using its native drivers. So some people who just want a single host to do multiple things & run multiple guests under virtualization (one of them being a router), but otherwise want to squeeze the most performance out of it, might choose to dedicate networking hardware to the router guest by passing certain interfaces through to that guest directly. Then there’s also SR-IOV, which is a kind of “best of both worlds” approach, since the NIC can be shared amongst multiple guests without a paravirtualization layer adding overhead; but this is specific/limited to PCIe, and it requires both explicit support on the part of the PCIe device and specific driver support on the part of the guest (each SR-IOV-capable network interface has its own vendor-specific driver running on both the host side and the guest side).
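To make the host-side half of that concrete, here’s roughly what creating the virtual functions looks like through the standard sysfs knob. Again just a sketch: the interface name and VF count are placeholders, and it assumes a NIC whose PF driver actually supports SR-IOV (plus root privileges):

```python
# Sketch: spawn SR-IOV virtual functions on the host via the standard sysfs
# interface, so each VF can then be passed through to a guest (e.g. a router VM).
# Interface name and VF count are placeholders; run as root.
from pathlib import Path

iface = "enp3s0f0"                                       # hypothetical SR-IOV-capable NIC
dev = Path(f"/sys/class/net/{iface}/device")

print("VFs supported by this NIC:", (dev / "sriov_totalvfs").read_text().strip())

# Writing a count asks the PF driver to create that many VFs; the kernel
# requires resetting to 0 first if some VFs are already allocated.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text("4")
```

Each VF then shows up as its own PCIe function that you can hand to a guest with normal PCI passthrough, and the guest loads the vendor’s VF driver for it.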