ROS v5 beta: new Intel drivers

In ROS v5 we have new Intel drivers (igb xxx), and I want to know which parameters these drivers are loaded with.
If I use an Intel card with the 82576E chipset (8 queues on RX and 8 queues on TX), how many queues per port does the new driver use? And does it use MSI-X interrupts?
Can we see this information in ROS v5 beta? And can we change these parameters in the driver?

Normis, can you answer this question?

I don’t understand the question :slight_smile:

Ok ))
I have an Intel 82576 Gigabit Network Controller which supports 8 receive queues per port (2-port 1 Gbps PCI Express card, E1G142ET).
This Ethernet card works with MSI-X interrupts (MSI-X enables the Ethernet controller to direct interrupt messages to multiple processor cores), but in ROS v5 we see:

irq=17 owner="[0 0 IO-APIC-fasteoi eth1]"

irq=19 owner="[0 0 IO-APIC-fasteoi eth2]"

This card is using the old fasteoi interrupt mode.

Questions:

  1. How can I enable MSI-X interrupts?
  2. How can I enable 8 receive queues per port on the i82576 Ethernet card?
  3. How can I see which parameters the ‘igb’ driver is loaded with?

This output is from a Linux router that uses the same i82576 Ethernet card (the ‘igb’ driver loaded with the parameter “modprobe igb IntMode=3,3”):

dmesg

Intel(R) Gigabit Ethernet Network Driver - version 1.3.8.6
Copyright (c) 2007-2008 Intel Corporation.
PCI: Enabling device 0000:01:00.0 (0000 → 0003)
ACPI: PCI Interrupt 0000:01:00.0[A] → GSI 16 (level, low) → IRQ 169
PCI: Setting latency timer of device 0000:01:00.0 to 64
igb: 0000:01:00.0: igb_validate_option: Interrupt Mode set to 3
igb: eth0: igb_probe: Intel(R) Gigabit Ethernet Network Connection
igb: eth0: igb_probe: (PCIe:2.5Gb/s:Width x4) 00:1b:21:2e:9c:a4
igb: eth0: igb_probe: Using MSI-X interrupts. 4 rx queue(s), 1 tx queue(s)
PCI: Enabling device 0000:01:00.1 (0000 → 0003)
ACPI: PCI Interrupt 0000:01:00.1 → GSI 17 (level, low) → IRQ 193
PCI: Setting latency timer of device 0000:01:00.1 to 64
igb: 0000:01:00.1: igb_validate_option: Interrupt Mode set to 3
igb: eth1: igb_probe: Intel(R) Gigabit Ethernet Network Connection
igb: eth1: igb_probe: (PCIe:2.5Gb/s:Width x4) 00:1b:21:2e:9c:a5
igb: eth1: igb_probe: Using MSI-X interrupts. 4 rx queue(s), 1 tx queue(s)

cat /proc/interrupts | grep eth

140: 23 63 0 0 PCI-MSI-X eth0-Q0
148: 23 0 0 63 PCI-MSI-X eth0-Q1
156: 23 43 20 0 PCI-MSI-X eth0-Q2
164: 23 0 0 63 PCI-MSI-X eth0-Q3
172: 2 0 0 0 PCI-MSI-X eth0
180: 21 67 0 0 PCI-MSI-X eth1-Q0
188: 21 0 67 0 PCI-MSI-X eth1-Q1
196: 21 20 0 47 PCI-MSI-X eth1-Q2
204: 21 0 67 0 PCI-MSI-X eth1-Q3
212: 1 0 0 0 PCI-MSI-X eth1
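The /proc/interrupts listing above can be summarized per interface. Below is a small shell sketch that counts how many MSI-X queue vectors each interface registered; it runs on a shortened hard-coded copy of those lines, since on a live Linux router you would pipe in `grep eth /proc/interrupts` instead:

```shell
# Shortened sample of the /proc/interrupts output shown above.
sample='140: 23 63 0 0 PCI-MSI-X eth0-Q0
148: 23 0 0 63 PCI-MSI-X eth0-Q1
172: 2 0 0 0 PCI-MSI-X eth0
180: 21 67 0 0 PCI-MSI-X eth1-Q0
188: 21 0 67 0 PCI-MSI-X eth1-Q1'

# Lines ending in -Qn are per-queue vectors; group them by interface name.
queues=$(printf '%s\n' "$sample" \
  | awk '$NF ~ /-Q[0-9]+$/ { split($NF, a, "-"); n[a[1]]++ }
         END { for (d in n) print d, n[d] }' \
  | sort)
printf '%s\n' "$queues"
```

On the full listing each port would show 4 queue vectors, matching the dmesg line “Using MSI-X interrupts. 4 rx queue(s), 1 tx queue(s)”.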

An important question.
There is no reason to use new Intel NICs if MikroTik uses only 1 CPU per NIC.
Many people are waiting for ROS v5 because they want to increase routing speed in MikroTik.
Yes, many people use MikroTik as a router.
Right now routing is limited by CPU power.
To increase speed we have to use all processors.
To use all processors we have to buy NICs with multi-CPU (MSI-X + queues) support, like the Intel E1G142ET (82576),
but if the driver uses only 1 CPU per NIC, there is no point in ROS v5 for routing purposes.

igb driver parameters:
IntMode 0-3
0 - Legacy interrupts, single queue
1 - MSI interrupts, single queue
2 - MSI-X interrupts, single queue (default)
3 - MSI-X interrupts, multiple queues !!!

RSS 0-8
0 - Assign up to whichever is less: number of CPUs or number of queues
X - Assign X queues, where X is less than or equal to the maximum number of queues

NOTE: for 82575-based adapters the maximum number of queues is 4; for 82576-based and newer adapters it is 8.


We need the following parameters:
“IntMode=3,3” for the E1G142ET (2-port NIC)
“IntMode=3,3,3,3” for the E1G144ET (4-port NIC)
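On plain Linux with Intel's out-of-tree igb driver, such per-port options are usually made persistent through a modprobe options file (parameter names as in Intel's igb README; RouterOS itself exposes no such file). A minimal sketch that writes the file to the current directory just to show its content:

```shell
# 2-port card: MSI-X + multiple queues on both ports; RSS=0 means
# "min(number of CPUs, number of queues)".
# On a real system this file belongs in /etc/modprobe.d/.
cat > igb.conf <<'EOF'
options igb IntMode=3,3 RSS=0,0
EOF
cat igb.conf
```

After placing the file, the driver would be reloaded with `rmmod igb && modprobe igb`, and dmesg should then report “Using MSI-X interrupts” with multiple RX queues.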

  • we want to see a string like “Using MSI-X interrupts. 4 rx queue(s), 1 tx queue(s)”
  • and the load of each CPU, like “Cpu0: 30%, Cpu1: 30%, Cpu2: 33%…”
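Per-core load like “Cpu0: 30%, Cpu1: 30%” is exactly what Linux tools derive from two snapshots of /proc/stat. A minimal sketch of that calculation, using two hard-coded snapshots (field order assumed: user nice system idle) instead of a live read:

```shell
# Two snapshots of per-CPU counters, taken ~1s apart on a real system
# (hard-coded here; fields: cpuN user nice system idle).
printf 'cpu0 100 0 50 850\ncpu1 10 0 5 985\n'  > stat0.txt
printf 'cpu0 190 0 80 930\ncpu1 12 0 6 1082\n' > stat1.txt

# busy% = 100 * (busy1 - busy0) / (total1 - total0), busy = user+nice+system
load=$(paste stat0.txt stat1.txt | awk '
  { b0 = $2+$3+$4; t0 = b0+$5
    b1 = $7+$8+$9; t1 = b1+$10
    printf "%s: %d%%\n", $1, 100*(b1-b0)/(t1-t0) }')
printf '%s\n' "$load"
```

With the sample counters this reports cpu0 at 60% and cpu1 at 3% — a lopsided distribution like the one being complained about in this thread.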

Hello Normis, do you understand our question now? ))

One queue per interface. Basically we are using defaults in all of the drivers except for our cards.

But why??? More than one queue can increase router performance!!
And what are “our cards” - the RB816?

all routerboards have ethernets, and we also have pci ethernet cards

But why don’t you want to use MSI-X interrupts and more than 1 queue?? Please explain.
And which MikroTik Ethernet card has Gigabit speed? I only see 100M cards on the site.
The RB1000 does not have enough power for routing my traffic.

normis, please for the love of god enable multiple queues and msi-x for hardware that supports it. there is no other way RouterOS can ever reach 10G throughput and acceptable packets per second figures…

i can’t exactly remember where i saw that video (from some new zealand network operators group meeting), but two guys from new zealand built an 8-core x86 linux-based router with Intel 10G cards that could push upwards of 3.5 million pps and about 40G of traffic in total.

one critical part of this was using the multiple queues and msi-x interrupts for the intel 10G cards. if mikrotik doesn’t want to completely fall behind in raw throughput performance - then please enable the msi-x and multiple queue per interface support.

especially the newer intel cards and (i think) some broadcom cards heavily benefit from that on multicore systems.

yes, please at least make it a configuration item. we won’t be able to keep running routeros on our network in the coming years without it. end users are already getting 50-100 Mbps internet connections; ISPs need way more than that.

It’s funny that, until this thread, there were no problems :slight_smile:

It’s simple:

  1. When many users look at the “CPU load” in MikroTik they see usage less than 25% and think everything is OK. But they don’t know that the per-processor load looks like ‘90/0/0/0’ (1 Ethernet / quad core), which is very bad.
  2. Every year traffic in networks increases dramatically, so MikroTik’s performance is no longer enough to handle the new traffic.

To solve the problem of high-speed forwarding on a PC, people use “new” Intel NICs.
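Once a NIC exposes one MSI-X vector per queue (as in the dmesg above), Linux lets each vector be pinned to its own core via `/proc/irq/<n>/smp_affinity`, which is how the 90/0/0/0 pattern gets spread out. A small sketch of computing the affinity bitmask (the write itself needs the live router and root):

```shell
# smp_affinity takes a hex bitmask of allowed CPUs: bit N set = cpuN allowed.
cpu=3
mask=$(printf '%x' $((1 << cpu)))
echo "$mask"
# On a live system, pinning queue vector IRQ 164 (eth0-Q3 in the listing
# above) to cpu3 would then be:
#   echo "$mask" > /proc/irq/164/smp_affinity
```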

If ROS does not fully support the new NICs, people will use FreeBSD, Linux, Vyatta, …; those who have enough money will use Cisco routers.
Because traffic increases every year, ROS usage will become limited to wireless or small networks.

I’d even say that for ten years nobody seemed to notice any problems, so your statement doesn’t make any sense (i.e. nothing changed)

The problems are not reported because they should not exist in the first place )

Back then people didn’t have big traffic; now many of us have more than 3-5 Gb/s. If you have 100-500 Mb/s of traffic then you don’t need MSI-X and multiple queues, but otherwise you can’t work without them.
That’s why many vendors (like Intel and Broadcom) create drivers and chips that support many queues per port (to use more than one CPU core), and that’s why many Linux distributions have started to use these drivers.
Vyatta, for example, says that Vyatta is much better than Cisco, and many people have started to use Vyatta, especially after this document: http://www.vyatta.com/downloads/whitepapers/Intel_Router_solBrief_r04.pdf

Why doesn’t MikroTik want to be a Cisco competitor??? Don’t you need the money??

You are the first to raise this question, so I guess it’s not as important as you think it is. We will look into the matter though, thanks for the suggestion.

I’d even say that for ten years nobody seemed to notice any problems

Clearly you must have skipped over those countless reports of multi-core systems only using one core for packet routing :wink: using those drivers and enabling multiple queues would let multi-core systems use all available CPU power for routing. It should help a lot with packets-per-second throughput, and especially with 10G interfaces.

Enabling msi-x and multiple queues is definitely one step into the right direction :slight_smile:

I have raised this question 2 or 3 times, but no one was interested, because no one cares about router performance with more than 1 Gb/s of traffic (in a real network, not a synthetic test).
Normis - you could compile an experimental package for ROS, like Xen or wireless-test, and people who need real performance could use that package…
I think many of us would test it and send feedback to MikroTik.