I am thinking about upgrading my local network to 10Gb using fiber cables. While I found MikroTik switches with SFP+ ports, I can't find a router with at least two SFP+ ports and 2.5Gb Ethernet ports, at least not with a decent CPU at a reasonable price. I'm thinking about:
While strictly not vouching for those boxes … consider installing Proxmox on one of them and using a virtualized CHR (with PCIe passthrough if supported) to test, and after that purchase a P10 or P-unlimited license.
Although I doubt VT-d is supported on either of those two models.
Note - a CHR VM running on Proxmox without PCI passthrough is still fast enough to handle about 6 Gig of throughput.
FYI: I don't use PCI passthrough on any VMs in my Proxmox cluster, mostly because I want/need the ability to live-migrate my active routers from one Proxmox server to another without having to worry about whether my PCI network cards are identically configured.
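To make that concrete, here is a minimal sketch of how such a CHR VM might be set up on Proxmox with VirtIO bridges only (no passthrough); the VM ID, storage name, bridge names, and ROS version are just example values:

    # fetch and unpack the CHR raw disk image (7.18.2 used as an example)
    wget https://download.mikrotik.com/routeros/7.18.2/chr-7.18.2.img.zip
    unzip chr-7.18.2.img.zip

    # create the VM with two VirtIO NICs on Linux bridges (WAN + LAN)
    qm create 100 --name chr --memory 2048 --cores 4 \
      --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1

    # import the CHR image as the boot disk and start the VM
    qm importdisk 100 chr-7.18.2.img local-lvm
    qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
    qm start 100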
What kind of hardware have you tested this on, Tom?
My main router used to be an RB5009 up until a month ago. I switched my internet to 5Gb fiber, so I started using my MS-01 with Proxmox & CHR, but I have both WAN and LAN on Proxmox bridges that are then connected to the CHR. I've benchmarked about 4.5Gbps best case, but it typically runs at around 3.6-4Gbps. During the benchmark it'll climb up, hang for a millisecond, and then jump down before climbing up again. Not sure if it's my setup or just my ISP.
You need to check what hardware those boxes use - network card, storage controller, and so on. Then make sure drivers for those devices are included by default in Linux kernel 5.6.3 (the kernel RouterOS v7 is based on). It's not possible to add any custom drivers yourself.
If you are not sure how to check this, I suggest running CHR in a virtual environment on either Windows or Linux, depending on which system you are more comfortable with.
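If you want to do that driver check directly, and assuming you can boot any live Linux on the box (or get the exact chip list from the seller), the PCI vendor/device IDs are what to compare against the kernel's driver tables:

    # list network controllers with their PCI vendor:device IDs
    lspci -nn | grep -i ethernet

    # hypothetical example output for a box with i226-V ports:
    # 02:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I226-V [8086:125c]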
I’ve tested CHR on many hardware servers running Proxmox.
All of them have 10-Gig or 40-Gig network cards.
All of them have Xeon CPUs.
Most of my Proxmox servers have hyper-threading disabled.
Moments ago, I tested a CHR on a Proxmox server (24 x Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz, 2 sockets).
4 CPUs
ROS 7.18.2
tools-speedtest to 127.0.0.1 (test to itself)
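For anyone wanting to reproduce this on their own CHR, the loopback test on ROS v7 is simply:

    /tool speed-test address=127.0.0.1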
Your CHR running under Proxmox should get about the same speed to itself.
Note - when running through a network interface, you will not get these 60+ Gig speeds. This is a decent test to see how fast your CHR ROS device can process packets internally, not counting the physical/virtual interfaces the packets pass through.
Note - be sure to set your Proxmox server to be tuned for throughput versus response.
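Two examples of that kind of throughput tuning (exact knobs depend on your hardware, so treat these as a sketch): pin the CPU frequency governor to performance on the Proxmox host, and enable VirtIO multiqueue on the CHR's NICs so packet processing can spread across its vCPUs:

    # on the Proxmox host (Debian package: linux-cpupower)
    cpupower frequency-set -g performance

    # give VM 100's VirtIO NIC multiple queues, matching its vCPU count
    qm set 100 --net0 virtio,bridge=vmbr0,queues=4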
Just for the heck of it, I performed a speedtest to 127.0.0.1 on a MikroTik CRS326-24S+2Q+ (not really a performance router):
TCP-Download: 360Mbps local-cpu-load:100%
TCP-Upload: 362Mbps local-cpu-load:100% remote-cpu-load:100%
UDP-Download: 15.8Gbps local-cpu-load:98% remote-cpu-load:98%
UDP-Upload: 15.6Gbps local-cpu-load:96% remote-cpu-load:96%
The point I want to make with this second, much slower speedtest is that CPU speed and core count can make a big difference in how fast ROS can process Layer-3 packets, independent of the physical interface.
-and another speedtest-
This test was from a CHR on one Proxmox server - through a switch - to a different CHR on a different Proxmox server.
TCP-Download: 5.32Gbps local-cpu-load:21%
TCP-Upload: 7.57Gbps local-cpu-load:16% remote-cpu-load:18%
UDP-Download: 7.30Gbps local-cpu-load:29% remote-cpu-load:45%
UDP-Upload: 9.52Gbps local-cpu-load:64% remote-cpu-load:24%
And to compare apples to oranges … on the same two Proxmox servers that host the two CHRs I was testing, I ran a VyOS router iperf3 speedtest and got the following speeds:
11.3 GBytes  9.73 Gbits/sec  92 retransmits  (sender)
11.3 GBytes  9.72 Gbits/sec                  (receiver)
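For context, those two summary lines are standard iperf3 output (the 92 is the sender's retransmit count); reproducing such a test between two VMs is just:

    # on one VM
    iperf3 -s
    # on the other (10.0.0.2 is a placeholder address)
    iperf3 -c 10.0.0.2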
Note - none of the above use PCI passthrough. All my VMs on Proxmox use VirtIO for the VM network interfaces.
The machine you linked to on Ali appears to be this Topton product. Though I've yet to pull the trigger on any of them, I've been keeping my eye on this and similar boxes by other manufacturers, as I too have a strong interest in good-value x86 hardware that can run ROS bare-metal.
When it comes to networking, the machine appears to be available in various combinations of interfaces. There is a common set of four 2.5Gbit/s copper ports, but you can additionally add either 4 SFP ports, 2 SFP+ ports, or 4 SFP+ ports. The 4 SFP expansion is built on an Intel i350 chipset, the 2 SFP+ expansion on an Intel 82599 (X520) chipset, and the 4 SFP+ expansion on an Intel X710 chipset. Given its age, I have a hard time believing it wouldn't work, but it is unclear from my searches so far how well this particular variant of the i350 (the -AM4) is supported on ROS - though who cares, as it is the least interesting option anyway. The X520 is WELL supported by ROS for sure, both v6 and v7, although some consider it "long in the tooth" by this point (I use it myself in plenty of x86 boxes, and with a few caveats it largely works great). The X710 is supported by ROS, but v7-only.
The 4 built-in 2.5Gbit/s ports are the wildcard. They appear to be backed by an Intel i226-V chipset. This was not supported in ROS 7 out of the gate. I have not found any solid reports of people using it successfully under ROS in my brief searching, but also not too many reports of people even trying it, period. Support for its older brother, the i225, was apparently finally added in v7.8, but I can’t find any solid information about the i226. If you or anyone you know happen to end up rolling the dice and obtaining one of these, please do report back on your experience.
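If anyone does end up testing one, two console commands should settle it quickly: whether ROS binds a driver to the i226 ports, and what the PCI bus reports regardless (the second command is available on x86 builds):

    /interface ethernet print
    /system resource pci print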
Obviously not everything available on the market is supported (there are no third-party drivers), but a bunch of legacy hardware and most of the mainstream drivers are included, for example the NICs listed below. You simply have to check for each device whether a driver is available.
Unless you're going with known-working physical interfaces, as others have already suggested, the only way to know for sure whether a particular box will do the job is to buy it, put a hypervisor on it, and test with CHR (or the x86 ISO as a VM) and PCI passthrough. If all ports can be passed through and show up in the VM, then ROS should support everything natively.
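For reference, the passthrough part of that test on Proxmox looks roughly like this (Intel CPU assumed; the PCI address and VM ID are placeholders):

    # 1. enable the IOMMU in /etc/default/grub, then run update-grub and reboot:
    #    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # 2. find the NIC's PCI address and hand it to the VM
    lspci -nn | grep -i ethernet
    qm set 100 --hostpci0 0000:01:00.0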
I've been doing tests with physical vs. virtual and found that native ROS on bare metal (x86 and arm64) is a mixed bag. If I see that a supposedly supported NIC doesn't show up, their support team has been pretty good at getting those device drivers working (enabling a stock Linux driver or adding a PCI ID to the existing drivers). The downside is that anything they add from here on out will only ever run on 7.20 or later, which may or may not be a problem for you.
If you're trying to squeeze out every bit of performance, CHR + PCI passthrough is a good compromise versus native-only. That way, whatever NICs ROS doesn't natively support can be bridged through by the hypervisor. Plus you can snapshot or clone your CHR instances and have a much easier time restoring working configs as a whole machine.
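On Proxmox, those operations are one-liners, e.g.:

    # snapshot the CHR before a risky change; roll back if it goes wrong
    qm snapshot 100 pre-upgrade
    qm rollback 100 pre-upgrade

    # full clone to a new VM ID for testing
    qm clone 100 101 --name chr-test --full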
Ideally, you could combine a mini-PC router with a CRS300, like the CRS310 with eight 2.5G ports and two SFP+. The CRS300s can do wire-speed routing between LAN segments, and the mini-PC could then be used as a router-on-a-stick for anything leaving the LAN(s).
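In that router-on-a-stick design, the mini-PC sees each LAN segment as a VLAN on a single trunk port; a minimal ROS sketch (VLAN IDs, addresses, and the interface name are made up for illustration):

    /interface vlan add interface=sfp-sfpplus1 vlan-id=10 name=vlan10-lan
    /interface vlan add interface=sfp-sfpplus1 vlan-id=20 name=vlan20-dmz
    /ip address add interface=vlan10-lan address=10.0.10.1/24
    /ip address add interface=vlan20-dmz address=10.0.20.1/24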
I have read somewhere that Proxmox cannot migrate a VM that is using PCI passthrough.
If this is correct, how do you migrate a ROS instance (x86 and/or CHR) using PCI passthrough to another Proxmox server?
Is it possible to migrate if the Proxmox servers are identical?
OP has suggested they're NOT interested in CHR. But that decision has side effects… RouterOS needs a supported networking card, and unless it's listed, no one can say "for sure" it will work.
Now, personally, the slight overhead of virtualization (especially with a passed-through network card) is worth it for all the other benefits of being a VM (snapshots/backups, migration, testing, easily movable licensing, etc.) - using native x86 puts all your eggs in one basket.
OP was asking about whether ROS supported the box he was looking at. My ideas were mainly targeted at a way to test/vet any x86 hardware, especially if you’re budget-constrained. It would be awful if you spent money on a box only to find that it doesn’t work and you have no option to return it. (I have a box very similar to the one he’s looking at buying, and IIRC only the SFP+ worked on ROS7; the 2.5’s did not.)
If the hardware isn’t supported by ROS natively, a hypervisor will provide some level of support for all of the ports on the box. Pass-through for supported NICs gives the buyer as-close-to-native as possible, while making the unsupported NICs available, albeit virtualized.
OP is only buying one little box; he's not building a DC cluster of routers and switches that needs maximum uptime, so most of the hypervisor-cluster benefits aren't part of the issue. Even then, if the hardware's chosen purposes benefit from passed-through PCI cards, you could still set up a redundant pair (or cluster) of CHRs, each with their own card, just as you might put together a cluster of CCR2x16s and reap the benefits of failover and redundancy.
(Now to your question: you cannot live-migrate a VM with a passed-through PCI device. I have a pair of Minisforum MS-01s running Proxmox in my lab, each with an additional Mellanox 40G card, and I've tested passing both the X710s and the Mellanox through to the VM. I have migrated shut-down CHRs between the two, and since everything is identical between the two hosts, the CHR will boot and run on either one, as long as the chosen PCI card is available on the host.)
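For reference, that offline move is just the normal migrate command with the VM stopped (VM ID and node name are placeholders):

    # with VM 100 shut down; the CHR will only boot on the target
    # if the same PCI device exists there
    qm migrate 100 pve2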
Do you happen to know what Ethernet chip was being used for the 2.5s (PCI vendor and device IDs would be even cooler)? And I'm curious when you last tried it (what ROS version)?
I tried 7.17, 7.18, and possibly 7.19. I didn't spend a ton of time playing with RouterOS on it, since I'd rather be able to use all the ports. It's now part of my Proxmox cluster of mini-PCs, which includes some Minisforum MS-01s, Intel NUC i7s, and a GMKtec NucBox. This CWWK and the NucBox, along with Raspberry Pi 5s, are possible candidates for small tower/micro-pop sites where I need a router-on-a-stick to do CPU-based work (i.e. WireGuard tunnels, maybe VXLAN encapsulation/decapsulation, and other things that CRS300s can't do in hardware).
The only thread I have been able to unearth so far with anybody at all talking about i226-V compatibility with ROS is this one from late 2023, which seems to imply that the interfaces do actually show up for them.
In light of your testimony, though, I’m wondering if they typo’d i225-V, which ROS added support for in late 2022.
I tested about five different boxes at the time, so I honestly can't remember which ports worked and which didn't, but suffice it to say, not all six showed up on that box with native CHR/x86 at the time of testing. 7.19 or 7.20 may be a different story, since I've submitted a bunch of requests to them.