10G with Mikrotik WORKING

We tested it and decided to share with all of you, as it has never been done before here on the forum:

The test was done with Intel 10G copper NICs in an Intel LGA1366 machine, sending to a machine with the same setup.

Send:
exe-net-10g-1.jpg
Receive:
exe-net-10g-receive.jpg
Both:
exe-net-10g-both.jpg
In Interfaces it shows only about 1300M; we suppose it is a bug. It will be reported to support.
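For anyone reproducing this, RouterOS's built-in bandwidth test is the usual tool for such a run; a minimal sketch (the peer address 10.0.0.2 and the duration are placeholders, not values from the original test):

    # On the receiving machine: enable the bandwidth-test server
    /tool bandwidth-server set enabled=yes authenticate=no

    # On the sending machine: push UDP traffic at the peer
    /tool bandwidth-test address=10.0.0.2 protocol=udp direction=both duration=60s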

Nice!

Now turn on conntrack :stuck_out_tongue:

Conntrack was on the whole time.

With conntrack off, it shows 4G on interfaces, so it seems it is not a bug after all.
exe-net-10g-contrack-off.jpg
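For reference, connection tracking is toggled from the CLI like this (per the v4/v5-era syntax):

    # Disable connection tracking - stateless forwarding, less CPU per packet
    /ip firewall connection tracking set enabled=no

    # Re-enable it for the stateful comparison
    /ip firewall connection tracking set enabled=yes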

Very nice then! I’m curious about the performance hit that using various ROS features has. Any chance you could do some mangle rules on it, packet marking etc.?
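A simple way to exercise that would be a pair of mangle rules in the forward chain; a sketch (the subnet and mark names are made up for illustration):

    # Mark connections from a test subnet, then mark every packet in them
    /ip firewall mangle add chain=forward src-address=192.168.0.0/24 action=mark-connection new-connection-mark=test-conn passthrough=yes
    /ip firewall mangle add chain=forward connection-mark=test-conn action=mark-packet new-packet-mark=test-pkt passthrough=no

Note that mark-connection needs connection tracking enabled, so this would also show the conntrack cost at the same time.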

It’s great to see wire speed (albeit on an octo-core box), but in reality there are very few applications that would call for using MT with 10Gbit NICs as a pure bridge :slight_smile:

I agree :)

The problem is that we couldn’t find a motherboard that assigns different IRQs to the PCI-e slots. New motherboards give only two different IRQs across all PCI-e slots, so many NICs share the same IRQ, and that is the problem.

As soon as we find a suitable motherboard, we will show the same test with many rules.

P.S. I am also not sure why these 10G NICs get very warm when working.

Judging by the fact that you can push 10Gbit through an x86 box, I would say the cards are doing a lot of the heavy lifting. They do have some grunty ASICs on them.

Per the specifications, the maximum operating temperature is 55°C. I haven’t measured the temperature yet, but I think it is more like 60-70°C.

P.S. It is as warm as a graphics card without a cooler… :)

Hmm, does the card have heat sinks on any part of it? I’d be worried at those temps; you could run into stability issues.

Can you load normal Linux or even Windows and redo the tests / take temps? The cards might need some fans running over them.

I agree, it should have coolers. Yes, there are two large heat sinks.

If you unplug the HDD and leave the system without booting, the temps are the same. It seems to be normal: I also plugged a graphics card without a cooler into the x16 slot, and it reached the same temperature as the Intel NICs.

Please run a BW test to loopback (127.0.0.1) - just interested ))

Also, what about TCP tests and real-life traffic?
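Both of those can be run with the same tool; a sketch (the peer address and stream count are placeholders):

    # Loopback test: exercises the box's own packet path, no NIC involved
    /tool bandwidth-test address=127.0.0.1 protocol=udp direction=both

    # TCP test toward the peer, several streams to spread load across cores
    /tool bandwidth-test address=10.0.0.2 protocol=tcp tcp-connection-count=20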

In the meantime, we found out that you get better bandwidth with an i7 in comparison to a Xeon.

The i7-950 beats the Xeon E5620 in just about every way. For example, in the bandwidth test the Xeon achieves 6.4G/5G, while the i7 reaches 9.7G/9.7G very quickly.

The problem is only with motherboards. All new Intel motherboards, including server versions, issue the same IRQ to the PCI-e slots, and that is very bad for performance, not just with Mikrotik but with Windows too if all slots are used.

So if you use all the PCI-e slots on new boards and get IRQ conflicts, you see bandwidth saturation and RX drops.

I cannot understand why it is so difficult for Intel to make chipsets with better IRQ balancing across slots.

I spoke with an Intel representative; he told me that I am wrong and that IRQs are different on new boards. He was referring to the fact that the OS assigns different IRQs to the interfaces, and that is true. But he could not understand that the problem is with the IRQs that the BIOS, not the OS, assigns to the slots. The same IRQ for all PCI-e slots means big trouble in terms of performance and bandwidth.

They told us that they cannot guarantee anything with Mikrotik, but with Windows they can. So we took an X58 motherboard and used all the PCI-e slots with Windows 7. As all PCI-e slots share one IRQ (in our case 11), performance was very bad.

So it hurts Windows too, for no good reason. The other thing is that most people do not use anything extra in the PCI-e slots besides a graphics card, so in most cases they never hit the IRQ and performance problems. But then why do they put 5-6 slots on a motherboard?

We also tested an Intel server board with two x8 PCI-e slots, one x4 slot, and two integrated Intel Ethernet interfaces.
Put cards in all the slots and all of them get the same IRQ (in our case IRQ 10), which is bad for performance and stability and causes bandwidth saturation.
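On an x86 RouterOS box the sharing is visible directly; shared slots show up as several NICs listed against one IRQ:

    # List IRQs and the devices attached to them (output layout varies by version)
    /system resource irq print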

I have faced the same issue when I tried to set up some kind of QoS/traffic-prioritization solution. The server I used was a Dell PowerEdge with a quad-core Xeon and six 1Gbit Ethernet PCI-X cards (one card was dual-port), configured in bridge mode between two Cisco switches set up with EtherChannel to aggregate three 1Gbit interfaces; the parent queue I used was global-out. When traffic reached or exceeded 1Gbit, a lot of RX drops occurred, and this had a huge impact on service.
I tried another scenario with fewer interfaces, using only four 1Gbit interfaces to create two different bridge groups, and in this case the parent queues were the physical interfaces rather than global-out. When traffic reached or exceeded 1Gbit, the same behavior was seen, but now the server even stopped responding from time to time. A third scenario with two servers, each with two 1Gbit ports aggregated with Cisco EtherChannel, had no problems and no RX drops; moving back to one server brought the same issues again. I think the only way to provide more than 1Gbit of throughput is to use 10G interfaces, or is there any hardware that has overcome the IRQ issue?

Regards.

Faton
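For context, the two single-server scenarios above differ mainly in the queue parent, roughly like this (a sketch with illustrative names and limits, not Faton's actual config):

    # Scenario 1: one parent queue on global-out (all traffic funnels through it)
    /queue tree add name=all-out parent=global-out max-limit=1000M

    # Scenario 2: parent queues on the physical bridge-member interfaces
    /queue tree add name=out-ether1 parent=ether1 max-limit=1000M
    /queue tree add name=out-ether2 parent=ether2 max-limit=1000M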

This can also happen if you use Realtek or Marvell PCI-e Ethernet cards with any version of Mikrotik v4.x or v5.x, as the drivers in the kernel (or something else) are very bad. Use only Intel.

We also tested a few X58 motherboards with Windows when 3 or more PCI-e slots are used at the same time; you get plenty of blue screens. So, as all PCI-e slots use the same IRQ, the problem is on Intel's side, since they now make motherboards/chipsets with this issue. And all new motherboards are like this.

This is the key: if somebody knows of hardware which supports the newest processors and whose BIOS gives different IRQs to the PCI-e slots, please post. Maybe AMD?

Did you have to tune any settings to achieve this and/or lock the TX and RX chains to different processors, or was it all done “out of the box”, so to speak, and worked straight away?

It will work “out of the box”, but it is better if you tune it: assign a different CPU core to each IRQ of the 10G NIC.
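On multi-core x86 builds this is done under /system resource irq; a rough sketch (item numbers and core choices are examples, and the exact properties may differ between versions):

    # See which IRQs the NIC's queues landed on
    /system resource irq print

    # Pin two of them to separate cores
    /system resource irq set 5 cpu=1
    /system resource irq set 6 cpu=2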

Do you still have it set up, or have the machines been moved elsewhere now?

Interested to know if it works correctly (graphing + interface rates + firewall etc) on v5.0 and above.

Can you post more of the hardware info too please?
What model Intel NICs (so we know what’s supported ;-D)

It is in a different setup now.

NICs are: Intel 10 Gigabit AT2 Server Adapter (PCI Express x8, 10GBase-T), code: INE10G41AT2

So far it works correctly. On the Ether > Interface > Status tab it displays 10G, but on the Ethernet tab there is no option to choose 10G, and when you choose 1G the link stays at 10G, so forcing the speed doesn’t work.

Anyway, this doesn’t affect the NIC’s operation; it works OK.
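Until the speed selector catches up, the negotiated rate can at least be checked from the CLI (ether1 is a placeholder name):

    # Show live link status, including the negotiated 10G rate
    /interface ethernet monitor ether1 once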

Anyone have recommendations for hardware that I can put a dozen 10GigE ports on and have them work well?