It is possible to make it easier in eBGP by using options like “redistribute connected”. Of course it should be avoided, but in such a limited environment it can be used.
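For reference, a minimal sketch of what that looks like on RouterOS v7 (the connection name, peer address and AS numbers here are placeholders, not from anyone’s real setup):

    # eBGP connection that redistributes connected routes to the peer
    /routing bgp connection
    add name=upstream remote.address=192.0.2.1 remote.as=65010 as=65001 \
        local.role=ebgp output.redistribute=connected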
Whether IP management is an extra burden depends on the underlying VPN. Of course when you use an L2 VPN it is, but I normally use an L3 VPN and it can be simple (e.g. L2TP/IPsec). Especially now that in v7 you do not have to configure every peer separately but rather can use templates and connections without peer address specification (inbound only)…
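As a rough sketch of that v7 template/listen approach (names, addresses and AS numbers are made up for illustration): a template holds the shared settings and one listen-only connection accepts any peer from a given range, so you do not have to add every remote address by hand.

    # shared settings for all dynamic peers
    /routing bgp template
    add name=vpn-peers as=65001 routing-table=main output.redistribute=connected
    # listen-only connection: no per-peer remote address, just an allowed range
    /routing bgp connection
    add name=vpn-clients templates=vpn-peers listen=yes local.role=ibgp \
        local.address=10.255.0.1 remote.address=10.255.0.0/24 remote.as=65001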
It might interest you to know that Ubiquiti (yes, them) is using VPP on their 60 GHz Wave line now. I’m not a big fan of how ADHD Ubiquiti is, so don’t worry about me being a fanboi. It was just interesting when I dug into the CLI on a Wave device and found a vppctl binary; I was able to tease the config out and, yep.
On a related note, I think there might be a niche to be filled in our market. It might make sense to offer a generic arm64-based routing appliance that runs all open source software. Run whatever you want on it. It’s just Linux. Offer a value-add OS and support maybe.
I know Netgate offers something like this with TNSR, but it’s not quite a perfect fit for us.
I really like Winbox, and especially couldn’t live without RoMON (which has saved me countless truck rolls in the middle of the night).
OpenWrt can run on quite a number of MikroTiks. From there, you can add FRR and potentially VPP. Contributing to OpenWrt, maybe by just donating hardware, could be your path to this.
Anyone that has a decent 60 GHz product uses VPP. UBNT was using it before “Wave”: their initial Qualcomm 60 GHz gear used VPP, since most vendors were jumping on the Terragraph wagon and Terragraph uses VPP and DPDK. Even if manufacturers opted for PTMP over Terragraph mesh, I think Terragraph really brought the pieces together and gave manufacturers the software concepts to make these products a reality. It’s a no-brainer that MikroTik should implement VPP/DPDK. The question is whether they are chasing consumer markets or service-provider markets. They would see far more service-provider sales with routers that have muscle. The only real way forward is either L3HW that is big enough to handle full tables, or 8- and 16-core VPP/DPDK routers that can move 100G w/o L3HW offload.
Remember, there are new Ampere chips coming soon. Those might come with some routing surprises as well. I doubt they are being brought in just for the control plane on Marvell switch chips.
Any news on which Ampere CPU version MikroTik will use? At least they have more CPUs and higher clock rates… are they trying to reach high performance now, similar to v7 on the Xeon E5-2699v4 models? hehehe..
I hope they also add further support for newer AMD EPYC CPUs on MikroTik… the 128-core and 256-core versions would probably make a nice, insane Frankenstein router.
I’m a bit mixed on the new high core count moves. Frankly, for pure routing the 2004 and 2116 are blazing fast; optimizing RouterOS to use more cores would just keep making those better and better.
The biggest problem I’m seeing still comes down to single-core performance. BGP is way faster, but it can still get stomped handily by an FRR box on a modern Intel chip. A shaping tree gets stuck on a single CPU core. SNMP gets stuck on a single CPU core.
Granted, you could work through some of those limitations by making them multi-threaded, but some things are just not well suited for it. A shaper tree, for instance, sort of needs to have the top-level HTB on a single core (or some careful balancing of average traffic, which would need a separate, new tool from MikroTik), and BGP needs to have each peer on a single core, because moving data between cores is so much slower that it negates the multi-threading gains.
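To illustrate the serialization point (the queue names, packet marks and limits below are made up): every child ultimately borrows from one root HTB, so that root has to be processed in one place.

    # single top-level HTB; all children borrow bandwidth from it
    /queue tree
    add name=shaper-root parent=global max-limit=1000M
    add name=data-down parent=shaper-root packet-mark=data-down max-limit=900M
    add name=voip-down parent=shaper-root packet-mark=voip-down max-limit=100M priority=1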
I think it’s a long shot, but MikroTik dropping a 13th/14th-gen Xeon box or a similar Ryzen box with really high core speeds would be awesome. I know there are ways to get RouterOS on that hardware (CHR, hacking the x86 installer into a CHR on bare metal, etc.), but I’m not really interested in that. I want a MIKROTIK box in 1U with port options etc., like a CCR2004-* but with a very high-clock, high-performance CPU. Frankly, the only ARM-series chip that would be interesting here is Apple’s; everyone else is well behind on single-core performance. The i3-13100F would be so incredible in a CCR2004-12SFP+ format…
Why use CHR?? MikroTik RouterOS v7 runs great on bare metal, with support for SAS and SSD drives now. I think CHR is useless unless you intend to run the old v6 RouterOS. We have gen12 and gen13 machines running RouterOS v7 on bare-metal x86_64 with multithreading enabled, and it runs great: around 10 to 12 Gbps of traffic at a little over 18% CPU usage, for PPPoE with over 5k users.
We will try to set up some test servers with scripts to generate traffic and PPPoE users, and see if we can push it up to 15k users for testing.
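Something like this is what we have in mind for the client side, a rough sketch only (the interface, usernames and password are placeholders):

    # spin up 100 PPPoE test sessions on ether2
    :for i from=1 to=100 do={
        /interface pppoe-client add name="pppoe-test$i" interface=ether2 user="test$i" password="test123" add-default-route=no disabled=no
    }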
Has anyone managed to configure an MTU greater than 1500? My connection only comes up with an MTU of 1500 on the port; with anything larger, it will not establish.
I mean when configuring IS-IS between Cisco and MikroTik.
OT - In my opinion, one of the major advantages of CHR is that the platform becomes hardware-agnostic and can be moved or upgraded “live”, including network sessions, to new hardware without any downtime (aka Hyper-V/vSphere live migration). A perfect fit for a datacenter virtual BNG using IS-IS, I might add. Additionally, performance-wise, with today’s modern drivers supporting DirectPath/SR-IOV it’s as fast as bare metal, and the overhead of the hypervisor is barely measurable.
Thus from a purely operational and production perspective, CHR has almost nothing but advantages.
I totally agree with this. I lost a lot of hours trying to bring up an adjacency between RouterOS (CCR2004) and IOS XR (NCS540 series). It does not work with an MTU above 1500; I tried tuning l2-mtu, mtu and lsp-max-size with no success. The session came up and LSPs were exchanged only when the MTU was set to 1500 on both ends.
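For anyone hitting the same thing, the workaround on the RouterOS side was simply clamping the IS-IS-facing interface back to 1500 (the interface name below is just an example) and matching the MTU on the IOS XR side:

    # keep the IS-IS-facing port at the default 1500-byte MTU
    /interface ethernet
    set [ find default-name=sfp-sfpplus1 ] mtu=1500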
Also, BFD is not an option for IS-IS: basically, you cannot bring an IS-IS adjacency down using BFD (the option is not available, as it is for OSPF). So, for fast convergence you must rely on tight IS-IS timers.
Any plans/roadmap for improving IS-IS support in RouterOS?
When we saw it tagged as “working”, we spent a lot of time making sure we were not doing something wrong and trying to figure out why it didn’t work,
because, honestly speaking, I don’t see any improvement in IS-IS from 7.12 until now.