lmatys
newbie
Topic Author
Posts: 28
Joined: Fri Aug 23, 2013 1:39 pm

x86 Mikrotik v7 performance - choosing the x86 CPU

Tue Sep 05, 2023 6:56 pm

Hello.

Focusing on traffic/packet throughput with MikroTik v7 installed directly as a bare-metal system, which x86 CPU would be more suitable for a full-table BGP router:

Intel Xeon E5-2690 v4 with 14 cores at 2.60 GHz
or
Intel Xeon E5-2699 v4 with 22 cores BUT at 2.20 GHz

BGP convergence time is always important, but it is not the top priority.

I'm fairly sure that more cores can handle more simultaneous traffic flows, but I'm not sure how the v7 architecture divides traffic between CPU cores. Could a lower CPU core clock lead to lower single-flow throughput? And what kind of difference are we talking about?

What are your experiences?
Best.
 
ConradPino
Member
Posts: 337
Joined: Sat Jan 21, 2023 12:44 pm
Contact:

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Wed Sep 06, 2023 2:45 am

I believe this relationship has some merit:
  • 14 cores × 2.6 GHz = 36.4 GHz aggregate
  • 22 cores × 2.2 GHz = 48.4 GHz aggregate
I expect the lower clock speed to increase single-packet latency.
I expect 22 cores to keep more concurrent packets in flight.
RouterOS v6 and v7 ship different Linux kernel versions; I don't recall which.

When port speed is the bottleneck, CPU speed and core count are irrelevant.
Case in point: examine the hAP ax3 test results: https://mikrotik.com/product/hap_ax3#fndtn-testresults
Those numbers look very close to the 2.5 Gbps port speed IMO.
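To make the cores-versus-clock trade-off concrete, here is a rough sketch of that model. The Gbps-per-GHz factor is an illustrative placeholder, not a measured RouterOS number; only the min() structure matters: a single flow stays on one core, aggregate traffic spreads across all of them, and the port line rate caps both.

```python
# Back-of-the-envelope model: a single flow stays on one core, so it tracks
# per-core clock; aggregate traffic spreads across cores, so it tracks
# cores * clock; the port line rate caps both.
GBPS_PER_GHZ = 1.0  # hypothetical forwarding rate per GHz of one core

def estimate(cores: int, clock_ghz: float, port_gbps: float) -> tuple[float, float]:
    single_flow = min(clock_ghz * GBPS_PER_GHZ, port_gbps)
    aggregate = min(cores * clock_ghz * GBPS_PER_GHZ, port_gbps)
    return single_flow, aggregate

for name, cores, clock in [("E5-2690 v4", 14, 2.6), ("E5-2699 v4", 22, 2.2)]:
    single, agg = estimate(cores, clock, port_gbps=20.0)  # 2 x 10G ports
    print(f"{name}: ~{single:.1f} Gbps per flow, ~{agg:.1f} Gbps aggregate")
```

With ports as the cap, both CPUs saturate 2x10G in aggregate; the per-flow number is where the clock difference would show up.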
 
jspool
Member
Posts: 469
Joined: Sun Oct 04, 2009 4:06 am
Location: Oregon

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Wed Sep 06, 2023 8:43 am

I have 4 RouterOS v7 bare-metal routers running on Intel E5-2699 CPUs.
They load a full Internet BGP table in about 15-20 seconds.
A 10G both-directions (20G aggregate) TCP speed test in WinBox from MikroTik to MikroTik (both E5-2699) shows about 20% load.
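(Taken at face value, 20 Gbps at 20% load extrapolates to roughly 20 / 0.20 = 100 Gbps of forwarding capacity, though CPU load rarely scales that linearly, so treat that as an optimistic upper bound.)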
 
lmatys
newbie
Topic Author
Posts: 28
Joined: Fri Aug 23, 2013 1:39 pm

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Wed Sep 06, 2023 10:32 am

I'm using a few servers with v7 and the E5-2690, and the overall performance and BGP performance look impressive.
But without a lab I can't check which would be the bottleneck: the E5-2690 or the 2x10GE ports.

Thank you, best.
 
PortalNET
Member Candidate
Posts: 126
Joined: Sun Apr 02, 2017 7:24 pm

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Fri Dec 15, 2023 4:03 pm

Hi jspool, could you please share how you run RouterOS v7: CHR or bare metal?

And which SFP+ NICs are you using?

I am having trouble with an old R620 with E5-2697 v2 CPUs, 128 GB RAM, and Intel X520 cards: I am getting RX errors at low traffic, averaging 130 Mbps.

Beforehand, I ran a bandwidth test from 2 CCR1036s through a CRS317 switch in SwOS mode, forwarding more than 8 Gbps of TCP traffic to the server's WAN port, and not a single RX error showed in 30 consecutive minutes. After I started the PPPoE server connected to client devices, at 130 Mbps I start getting RX errors on the WAN link port.

I have upgraded the Intel NIC firmware to the latest available from the Dell website (April 2023).

I am running out of ideas. I am running bare-metal v7.12.1.


BTW, my issue is not CPU-wise. I have tested this same server with a Mellanox MCX455 100G card connected in Ethernet mode, against an R420 also fitted with a Mellanox card, and an aggregate bandwidth test from one machine to the other reached 64 Gbps of throughput, which is the max PCIe 3.0 x8 can handle, with over 3 million pps being sent and received on both servers at an average of 30% CPU usage, without a single RX error. As soon as we fire up the Intel cards, we start getting issues on the WAN side. We tested both cards on the Dell server.

As for testing the bottleneck on these MikroTik servers: I found out the hard way that you need enough RAM, in the correct slot positions, for the PCIe bus lanes to reach full throughput to the CPU. I started with only 16 GB of RAM on the server, and with the 100G cards I could not forward more than 15 Gbps aggregate in a bandwidth test. Once I upgraded both servers to 128 GB in the correct slot positions, we fired up a new bandwidth test and got the full 64 Gbps through the PCIe 100G NICs in Ethernet mode.

I have heard a PCIe 3.0 x16 slot can go up to 128 Gbps of throughput, but I only have one 100G x16 card; on the other side I was running a Mellanox 40 Gbps card in a PCIe 3.0 x8 slot, so both machines were linked at 40 Gbps.
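Those ceilings line up with the raw PCIe math: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, which is just under 8 Gbps of usable bandwidth per lane before protocol overhead. A quick sketch:

```python
# Usable PCIe bandwidth per direction, before TLP/protocol overhead
# (which shaves off a few more percent in practice).
LANE_RATES = {
    "3.0": (8.0, 128 / 130),   # 8 GT/s per lane, 128b/130b encoding
    "4.0": (16.0, 128 / 130),  # 16 GT/s per lane, same encoding
}

def pcie_gbps(gen: str, lanes: int) -> float:
    gt_per_s, efficiency = LANE_RATES[gen]
    return gt_per_s * efficiency * lanes

print(f"PCIe 3.0 x8:  ~{pcie_gbps('3.0', 8):.0f} Gbps")   # ~63 Gbps, the ceiling seen above
print(f"PCIe 3.0 x16: ~{pcie_gbps('3.0', 16):.0f} Gbps")  # ~126 Gbps
```

So the observed 64 Gbps on an x8 slot is essentially the slot running flat out, and an x16 slot tops out near the quoted 128 Gbps figure.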
 
Larsa
Forum Guru
Posts: 1068
Joined: Sat Aug 29, 2015 7:40 pm
Location: The North Pole, Santa's Workshop

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Fri Dec 15, 2023 4:28 pm

A suggestion is to start by focusing on the network interface, which is generally the most crucial component, whether it is used on bare metal or as a virtual NIC (vNIC) in CHR. A well-developed driver is also a prerequisite and can be a showstopper, determining whether the NIC can be used with SR-IOV and so on. The rest is just raw CPU power, which typically suffices regardless of the model, usually having significantly more internal throughput than the NICs.

I probably don't need to point out that production testing is of course a necessity.
 
jspool
Member
Posts: 469
Joined: Sun Oct 04, 2009 4:06 am
Location: Oregon

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Fri Dec 15, 2023 7:45 pm

I am using bare metal with Mellanox ConnectX-5 series cards. I initially tried newer Intel cards but had odd issues; support stated that there were issues with the Intel driver, so I swapped over to Mellanox and haven't had any issues since. The other thing to keep in mind is NUMA. I converted my dual-CPU setup to a more powerful single CPU and moved the NICs to the slots that are connected to the installed CPU. Since then it has been consistent. With dual CPUs I could never get the same performance, as traffic was constantly crossing the QPI bus, crippling performance.
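For anyone wanting to check NIC-to-CPU locality before committing to a layout: the NUMA node of each PCI NIC is visible in standard Linux sysfs. RouterOS itself has no shell, so this is a lab check to run from a stock Linux live image on the same hardware; a minimal sketch:

```python
# List which NUMA node each network interface's PCI device is attached to,
# via standard Linux sysfs. A value of -1 means single-node or unknown.
from pathlib import Path

for dev in sorted(Path("/sys/class/net").iterdir()):
    numa = dev / "device" / "numa_node"
    if numa.exists():  # virtual interfaces (lo, bridges) have no PCI device
        print(f"{dev.name}: NUMA node {numa.read_text().strip()}")
```

If a NIC reports a different node than the CPU doing the forwarding, every packet crosses the inter-socket link (QPI on these Xeons), which is exactly the penalty described above.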
 
PortalNET
Member Candidate
Posts: 126
Joined: Sun Apr 02, 2017 7:24 pm

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Sat Dec 16, 2023 5:41 am

Hi. SR-IOV is not in play because we decided to run bare metal from an HDD, no VMware or CHR. I have a couple of Broadcom BCM57xxx dual-SFP+ cards and will give them a go tomorrow: I will try removing the Dell i350 mezzanine card with its 2 Intel SFP+ ports, and the X520-DA2 as well, and put in the two Broadcom dual-SFP+ cards just for the sake of testing.
 
PortalNET
Member Candidate
Posts: 126
Joined: Sun Apr 02, 2017 7:24 pm

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Sat Dec 16, 2023 5:43 am




Interesting, I will try out a dual-SFP+ Mellanox card as well; I think I have one in stock just in case. I need to figure out this odd issue: why it is throwing just RX errors, and why after about 200k errors the card crashes.
 
PortalNET
Member Candidate
Posts: 126
Joined: Sun Apr 02, 2017 7:24 pm

Re: x86 Mikrotik v7 performance - choosing the x86 CPU

Mon Jan 01, 2024 9:19 pm


Hiya, first of all happy new year.

I just had time now to log in and check the posts.

You mentioned NUMA above; were you referring to NVIDIA NUMA GPUs instead?

If you don't mind me asking, what kind of single CPU are you using?

I have a pair of E5-2699 v4s lying around; I am just watching eBay for the right cheap server, an R630 or R730, so I can plug them in and try them out.

Would using two E5-2699 v4s be a bad combination? Is there a model with both a higher core count and a higher clock rate than the E5-2699 v4 worth testing?

I did see the AMD EPYC 64-core/128-thread CPUs; I am just not sure if MikroTik supports them.
