Well, in short, my question is this: WTF is the ‘none’ queue type?
I’ll expand my question a bit… from the docs:
Starting from v5.8 there is a new kind, ‘none’, and a new default queue, ‘only-hardware-queue’. All RouterBOARDs will have this new queue type set as the default interface queue.
And what about x86?
When I try to change an Ethernet interface’s queue to ‘only-hardware-queue’, WinBox says “Couldn’t change Interface Queue - only-hardware-queue allowed only on interfaces for which it is the default queue (6)”. But when I create my own queue type with kind=none, I can apply it to my ether2… does it work at all?..
It is a well-known fact that hardware/driver-based solutions work faster than generic software solutions (especially with SMP). The same applies to interface queues. By specifying queue type “none” for an interface, all control of packet queuing falls entirely to the driver, bypassing the software queue. This gives you the fastest and least resource-demanding way to work with packets, but it also requires specific changes/features in the driver, which is why at this point the feature is supported only in a few RouterBoard Ethernet drivers.
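For reference, the experiment described earlier in the thread looks roughly like this from the CLI (interface and queue names are just examples; on some versions you may need to address the /queue interface entry by number rather than by interface name):

```
# create a custom queue type whose kind is "none"
# (all packet queuing is left to the interface driver)
/queue type add name=my-none kind=none

# applying it to an interface is accepted, unlike only-hardware-queue,
# which WinBox/CLI only allows where it is already the default
/queue interface set ether2 queue=my-none
```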
x86 Ethernet driver support is not planned; you will have to make do with MQ-FIFO for now.
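If I read this right, on x86 the closest option is the built-in multi-queue type rather than kind=none. A minimal sketch (queue name and limit are my own illustrative values):

```
# multi-queue pfifo: one FIFO per hardware transmit queue,
# useful on multi-queue NICs with SMP
/queue type add name=my-mq kind=mq-pfifo mq-pfifo-limit=50
/queue interface set ether1 queue=my-mq
```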
If you’re referring to my ticket, then at least in 5.8 it’s NOT fixed; absolutely no changes. I’ll try to check v5.9 in the lab, but I need to generate a HUGE amount of traffic for that…
p.s. MT has so many Janises… I’ll start mixing you up soon…
By the way… on one router, in Tools → Profile, ‘queueing’ takes 8-10% of CPU. The router doesn’t contain any queues, and I set the queue type for the interfaces to ‘none’, but ‘queueing’ still takes a lot of CPU time. What’s wrong here?..
128 packets per 2 seconds… that is tough. The problem was fixed so that (even if you have fewer than those 128 packets) the packet following the template packet will be delivered. So the sequence of packets should be correct now.
Check what value you have set for cache-entries. The default of 4k is a reasonable amount for most users, but if you have a lot of connections, then when you reach the limit the oldest connection data is taken out and scheduled to be sent to the target. With the data amounts you named, the 4k limit seems way too small for you: one packet can contain around 40 flow entries, which would mean ~10120 connections terminating every second, and that is unlikely even with your amount of traffic.
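The back-of-the-envelope arithmetic behind that claim can be sketched as follows (the ~40 flows per export packet is from the post above; the 253 packets/s export rate is my own illustrative figure, chosen so the result matches the number quoted):

```python
# Rough sizing check for the traffic-flow cache.
# A NetFlow v5 export packet holds roughly 40 flow records.
FLOWS_PER_PACKET = 40

def flows_per_second(export_packets_per_second: int) -> int:
    """Flows expiring per second, inferred from the export packet rate."""
    return export_packets_per_second * FLOWS_PER_PACKET

# An export rate of ~253 packets/s would imply ~10120 flows terminating
# every second -- far beyond what a 4k cache-entries limit comfortably holds.
print(flows_per_second(253))  # -> 10120
```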
In addition, you can try increasing the inactive flow timeout from 15s to something larger (like 30s).
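Both suggestions map onto /ip traffic-flow settings; a minimal sketch (the 16k value is just an example of "larger than 4k", pick what fits your connection counts):

```
# grow the flow cache and relax the inactive timeout
/ip traffic-flow set cache-entries=16k inactive-flow-timeout=30s
```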
What do you think about bringing back the RB44GV (but at 1/3 of the old price), making PCI-E and PCI-X versions, and writing drivers for those that would allow better performance through queue-type=none?
What can you say about setting queue type=none on a wireless card? Would there be any benefit if that were made possible?
Chupaka, any real-life tests? Are there any benefits with multi-queue-ethernet-default, only-hardware-queue, or the “none” queue?
Normis, will the Intel driver get support for those new queues at some point?
Unfortunately, I’m scared of switching to ‘none’ queues on remote hardware routers, so my only testing has been on ESXi, and I can’t see any difference. Furthermore, I’m not even sure this queue type is actually in effect there: MT staff say it works on RBs only.