The gigabit NIC(s) are planned to serve multi-ISP WAN.
The SFP+ port is connected to a Cisco switch, with a 10G-LR-SM transceiver on both sides.
We noticed that on the Cisco side the status shows “Link is up” with 10G and SFP+, but on the MikroTik side the interface shows 10G, Slave, Running and yet reports “no link”.
On the PC, Winbox can discover the MikroTik router when connected to the same switch over LAN or Wi-Fi, but there is no result when pinging a network IP or the router gateway.
This feels like a driver support problem to me. What matters here is far less the NICs’ rated speeds than the chipsets involved. RouterOS isn’t a general-purpose Linux distribution with every single upstream driver included.
I’m not aware of an official listing of what is available, short of digging through the Linux kernel source tarball. Your rights under the GPL allow you to request that from MikroTik, but I suspect this forum post is more likely to be helpful to you, to start at least.
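In the meantime, it is worth checking what RouterOS actually detects before assuming anything. A minimal sketch from the console, where ether5 is only an assumed name for whichever interface your SFP+ port enumerates as:

  # list the PCI devices RouterOS can see, then the ethernet interfaces it created for them
  /system resource pci print
  /interface ethernet print detail
  # live link state and negotiated rate for the suspect port
  /interface ethernet monitor ether5 once

If the card shows up under PCI but no matching ethernet interface is created, that points at a missing driver rather than a cabling or transceiver problem.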
Meanwhile, would you mind explaining why you went down this path rather than buying a CCR2004-16G-2S+ or better, which otherwise appears to meet your needs? It likely draws a third of the power, takes half the rack space, and puts out far less heat and noise while still achieving the necessary level of performance.
I’m not guaranteeing that a CCR2004 is a drop-in replacement for what you’ve hand-built. I’m just saying I wouldn’t go beyond MikroTik hardware for running RouterOS until I’d convinced myself nothing I could buy pre-built would meet my performance needs, and “dual 10G” doesn’t sound anywhere near that limit.
@tareqbd, do yourself a big favour and buy an MT box, or at least skip flaky x86-64 drivers and instead go with ESXi running CHR or a similar option for close-to-bare-metal speed.
First of all, the client set a limited budget for us to start something with a MikroTik product.
The single x86 box would handle load balancing, NAT, DHCP, PPPoE, the Hotspot server, etc.
The client will have 2000+ concurrent hotspot users and 3 to 6 1-Gbit WAN links.
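As a rough sketch, the multi-WAN baseline we have in mind would be something like the following; the gateway addresses and interface names are only placeholders, and the real deployment would add PCC load balancing and the hotspot on top of it:

  # two default routes, WAN1 preferred, WAN2 as failover via route distance
  /ip route add dst-address=0.0.0.0/0 gateway=203.0.113.1 distance=1 check-gateway=ping
  /ip route add dst-address=0.0.0.0/0 gateway=198.51.100.1 distance=2 check-gateway=ping
  # source NAT out of whichever WAN interface the traffic leaves on
  /ip firewall nat add chain=srcnat out-interface=ether1 action=masquerade
  /ip firewall nat add chain=srcnat out-interface=ether2 action=masquerade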
The client initially got a proposal from another supplier for a CCR1016, but a local MikroTik partner advised them that for 2000+ users they would need at minimum a CCR1036 or CCR1072.
Hence the client came up with the idea of alternative solutions and asked whether we could offer them the same.
I have also placed a request earlier through the MikroTik tech support portal and am awaiting their response.
The entire CCR1xxx range is end-of-life. What isn’t discontinued outright is old stock meant for replacing broken hardware in existing installations. I would never recommend them for new projects today.
For 2000+ users they would need minimum CCR1036 or CCR1072.
I can’t speak to user load limits, but in my mind, the next step up in the line from the 2004 is the 2116, then the 2216. If none of those three will do what you want, only then would I advise you to chase CHR or native x86 ROS solutions.
For home or small business a 2004, 5009 or 2116 is fine. But for DC or edge use, a decent bare-metal box stomps the 2116 and 2216. Will it consume more power? Of course; that goes without question. Most service providers are not counting watts; they need to be able to run multiple full BGP tables without waiting all day. For consumers it would be illogical to go with CHR or bare metal, as there are many other models that will adequately provide the needed performance, silently and at low power consumption. That is especially valid if one lives in a third-world-electricity country like Denmark or Germany.
That’s a myth; it might have been true in the old days of virtualization. Mellanox ConnectX Ethernet cards are good, even v3, and most HP server built-in NICs are sufficient as well. Achieving bare-metal speed is just a matter of enabling SR-IOV.
It’s quite common for CHRs to hit an invisible wall after several Gbps even with SR-IOV. Maybe it will eventually get there, but it’s not close right now for anything moving a significant amount of data.
I’m not aware of any ‘magic’ walls so far, only misconfigured systems. We’re able to shovel several hundred gigabits using modern SR-IOV and zero-copy drivers without any major impact on our BNG systems. That goes for both IB and ETH drivers.
If you can’t get SFP+ to work with the built-in x86-64 RoS drivers, your only option is virtualization using CHR on, for example, ESXi or Hyper-V.
ESXi usually works very well on most HP servers like the DL360, offering performance close to bare-metal speed using the built-in NICs. If you prefer Windows as a host OS, Hyper-V is another good option.
I am also having trouble getting the Broadcom BCM57840 quad-port SFP+ 10G card to work; it is not recognized in RoS 7.1.11, the latest stable version.
The other dual-port SFP+ 10G Broadcom card, the BCM57711, is recognized and works well in 7.1.11. I thought of swapping the 2-port card for the 4-port card, but it is still not compatible.
Under System Resources > PCI the card shows up as in the picture below. Has anyone managed to get support for this card on the x86_64 version?
Well, they do exist on bare metal… it’s the PCIe lanes.
I have set up 2 Dell R630 units on the test bench, both with 2 Xeon E5-2699A v4 CPUs and 128 GB of RAM each.
I have 2 Mellanox dual-port MCX354-FCBT cards with the firmware upgraded and Ethernet mode set; both NICs are PCIe 3.0 x8. I have set up a bond on each machine with the 2 Mellanox interfaces in it.
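The bond itself is nothing fancy, roughly along these lines; the interface names are placeholders and the 802.3ad mode is only an assumption, adjust to whatever hashing suits the test:

  # one 2-port bond per machine; ether1/ether2 stand in for the Mellanox ports
  /interface bonding add name=bond1 slaves=ether1,ether2 mode=802.3ad transmit-hash-policy=layer-3-and-4
  /ip address add address=10.0.0.1/24 interface=bond1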
I can only get 64 Gbps of full throughput max, running an average of 6.8 to 7 million packets per second on TCP, with CPU usage at 19%. I am waiting for another set of MCX354 cards to arrive so I can insert them into another PCIe slot in each server and run more tests. But hey, anything over 30 Gbps of throughput in routing mode, with no FastPath, is amazing on this old hardware. For routing purposes and PPPoE it can do more than the newer CCR2216, which costs twice the price of each server; I am not talking about ASIC or switch-chip functions, because x86_64 server hardware does not have them. But BGP and PPPoE use routing, not the ASIC or switch chip, so it is a must to know what kind of usage your MikroTik router will need to handle.
Now I am just wondering how MikroTik would behave on newer Xeon Gold or newer AMD EPYC 64- or 128-core CPUs with PCIe 4.0 x8 and x16 slots; has anyone tested that? Using full CPU throttle against a local bandwidth-test server I have managed to get nearly 140 Gbps of full throughput on TCP with the CPUs at 85% usage. On UDP I am not sure whether the limitation is hardware or software, but I could not get past 450 Gbps of throughput even with an average of 40% CPU usage.
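For anyone who wants to reproduce the numbers, the test is just RouterOS's built-in bandwidth test between the two boxes, something like the following; the address is a placeholder and the duration is arbitrary:

  # on the receiving box
  /tool bandwidth-test-server set enabled=yes authenticate=no
  # on the sending box
  /tool bandwidth-test address=10.0.0.2 protocol=tcp direction=both duration=30s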