I have 4 x RouterOS v7 bare metal routers running on Intel E5-2699 CPUs.
They load a full Internet BGP table in roughly 15-20 seconds.
A 10G bidirectional (20G aggregate) TCP speed test in WinBox from MikroTik to MikroTik (both E5-2699) shows about 20% CPU load.
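For anyone wanting to reproduce a test like this, a MikroTik-to-MikroTik TCP bandwidth test can be started from the CLI as well as from WinBox (a sketch; the address is a placeholder for your peer router):

```routeros
# Run on one router; 10.0.0.2 is a placeholder for the other router's address.
# direction=both pushes traffic in both directions at once (aggregate load).
/tool bandwidth-test address=10.0.0.2 protocol=tcp direction=both duration=30s
```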
Hi, could you please share your RouterOS v7 setup: CHR or bare metal?
And which SFP+ NICs are you using?
I am having trouble with an old R620 with E5-2697v2 CPUs, 128 GB RAM, and Intel X520 cards: I'm getting RX errors at low traffic, averaging just 130 Mbps.
Beforehand I ran a bandwidth test from two CCR1036s through a CRS317 switch in SwOS mode, forwarding more than 8 Gbps of TCP traffic to the server's WAN port, and not a single RX-error packet showed up in 30 consecutive minutes. But after I started the PPPoE server connected to client devices, at only 130 Mbps I started getting RX errors on the WAN link port.
I have upgraded the Intel NIC firmware from the Dell website to the latest available (April 2023).
I am running out of ideas. I am running bare metal v7.12.1.
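In case it helps narrow this down, the per-NIC hardware error counters can be read from the RouterOS CLI to see which kind of RX error is incrementing (a sketch; "ether1" is a placeholder for the X520 WAN port name):

```routeros
# Hardware counters for the WAN NIC, including rx-fcs-error and rx-overflow.
# An fcs-error count suggests a link/cabling/optics problem; overflow suggests
# the NIC's receive queues are being overrun.
/interface/ethernet/print stats where name=ether1
# Live traffic view of the same port:
/interface/monitor-traffic ether1
```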
BTW: my issue is not CPU-related. I have tested this same server with a Mellanox MCX455 100G card connected in Ethernet mode, with an R420 on the other side also using a Mellanox card, and an aggregate bandwidth test between the two machines reached 64 Gbps of full throughput, which is the maximum a PCIe 3.0 x8 slot can handle, with over 3 million pps sent and received on both servers at around 30% average CPU usage and not a single RX-error packet. As soon as we fire up the Intel cards, we start getting issues on the WAN side. We tested both cards on the Dell server.
As for testing bottlenecks on these MikroTik servers: I found out the hard way that you need enough RAM, populated in the correct slots, for the PCIe bus to reach full throughput to the CPU. I started with only 16 GB of RAM on the server, and with the 100G cards I could not forward more than 15 Gbps aggregate in a bandwidth test. Once I upgraded both servers to 128 GB in the correct slot positions, we fired up a new bandwidth test and got the full 64 Gbps of throughput through the PCIe 100G NICs in Ethernet mode.
I have heard that a PCIe 3.0 x16 slot can do up to about 128 Gbps of full throughput, but I only have one 100G x16 card; on the other side I was running a Mellanox 40G card in a PCIe 3.0 x8 slot, so both machines were linked at 40 Gbps.
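Those numbers line up with the PCIe 3.0 line rate: 8 GT/s per lane with 128b/130b encoding. A quick back-of-the-envelope calculation (ignoring TLP/protocol overhead, so real throughput is a bit lower):

```python
# Rough usable bandwidth of a PCIe link: lanes * transfer rate * line encoding.
# This ignores TLP/protocol overhead, so real-world throughput is somewhat lower.
GT_PER_LANE = {3: 8.0, 4: 16.0}          # GT/s per lane by PCIe generation
ENCODING = {3: 128 / 130, 4: 128 / 130}  # 128b/130b encoding for gen 3 and 4

def pcie_gbps(gen: int, lanes: int) -> float:
    """Approximate usable Gbit/s for a PCIe link of the given gen and width."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes

print(round(pcie_gbps(3, 8)))   # x8 slot: ~63 Gbps, matching the ~64 Gbps observed
print(round(pcie_gbps(3, 16)))  # x16 slot: ~126 Gbps, close to the quoted 128 Gbps
```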