new 'none' queue type

well, in short, my question sounds like this: wtf is ‘none’ queue type? :slight_smile:

I’ll expand my question a bit… from docs:

"Starting from v5.8 there is new kind 'none' and new default queue 'only-hardware-queue'. All RouterBOARDs will have this new queue type set as default interface queue"

and what’s with x86?

when I try to change ethernet queue to ‘only-hardware-queue’, WinBox says “Couldn’t change Interface Queue - only-hardware-queue allowed only on interfaces for which it is the default queue (6)”. but when I create my own queue type and set type=none, I can apply this type to my ether2… does it work at all?..
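For reference, the steps described above can be reproduced from the CLI. This is just a sketch of what I did; the queue name `my-none` is arbitrary, and the exact behavior on x86 is the open question here:

```
# create a custom queue type of kind "none" (name is just an example)
/queue type add name=my-none kind=none

# apply it to ether2 - WinBox refuses "only-hardware-queue" here,
# but a custom kind=none type is accepted
/queue interface set ether2 queue=my-none

# verify what is actually set
/queue interface print
```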

It is a well-known fact that hardware/driver-based solutions work faster than generic software solutions (especially with SMP). The same applies to interface queues. By specifying queue type "none" for the interface, all control of packet queuing falls entirely to the driver, bypassing software. This gives you the fastest, least resource-demanding way to work with packets, but it also requires specific changes/features in the driver, which is why at this point this feature is supported only in a few RouterBOARD Ethernet drivers.

x86 Ethernet driver support is not planned; you will have to make do with MQ-FIFO for now

thanks, Normis. and how does it work when I set type=none on x86 platform?

it will not work; there is a bug that allows you to set it, but you shouldn’t

but it does work even after reboot, that’s why I’m asking :slight_smile: does it stay ‘ethernet-default’ or something?..

p.s. please fix [Ticket#2011092666000014] before this one… NetFlow is unusable for now…

you will probably get crashes soon. and no benefits.

it is already fixed :slight_smile:

if you mean my ticket, then at least in 5.8 it’s NOT fixed: absolutely no changes. I’ll try to check v5.9 in the lab, but I need to generate a HUGE amount of traffic for this…

p.s. MT has so many Janises… I’ll start mixing you up soon…

5.9 includes fix for the traffic-flow problem, 5.8 was too far in release already to include those changes.

I don’t know what exactly you fixed, but in 5.9 NetFlow v5 still sends no more than 128 packets per 2 seconds - I can’t see any change at all :slight_smile:

vmware esxi 5; router uptime is 3.5 days; 10.7 TiB traffic over 2 ethernets and 1 ipip tunnel; 4 pcq queues
so far so good =)

UPD: uptime 11d 16:55:21 =) 36 TiB of data :slight_smile:

by the way… on one router, in Tools → Profile, ‘queueing’ takes 8-10% of CPU. The router does not contain any queues, and I set the queue type for interfaces to ‘none’, but ‘queueing’ still takes a lot of CPU time - what’s wrong here?..
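The same per-task CPU figures can also be checked from the CLI, which is handy on remote routers. A minimal sketch, assuming the v5 `/tool profile` syntax:

```
# sample CPU usage per internal task for 5 seconds;
# look for the "queueing" entry in the output
/tool profile duration=5
```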

128 packets per 2 seconds… that is tough. The problem that was fixed is that (if you have fewer than those 128 packets) the packet following the template packet will now be delivered, so the sequence of packets should be correct now.

check what value you have set for cache-entries. The default value of 4k is a reasonable amount for most users, but if you have a lot of connections, then when you reach the limit the oldest connection data is evicted and scheduled to be sent to the target. With the data amounts you named, it seems that the 4k limit is way too small for you, as one packet can contain around 40 flow entries, meaning you would have ~10120 connections terminating every second, and that is unlikely even with your amounts of traffic.

In addition, you can try to increase the inactive flow timeout from 15s to some larger value (like 30s)
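The tuning suggested above could look roughly like this from the CLI. The exact accepted values for cache-entries depend on the RouterOS version, and 128k / 30s here are only example values, not recommendations:

```
# enlarge the flow cache and relax the inactive timeout (example values)
/ip traffic-flow set cache-entries=128k inactive-flow-timeout=0:00:30

# verify the resulting settings
/ip traffic-flow print
```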

What do you think about bringing back the RB44GV (but at 1/3 of the old price), making PCI-E and PCI-X versions, and writing drivers for those that would allow better performance through queue-type=none?

And what can you say about setting queue type=none on a wireless card? Would that be of any benefit if it were made possible?

Thank you.

15d uptime, btw =)

also, I recently upgraded my x86 router and saw this:

[admin@MikroTik] > queue interface pr
Flags: D - dynamic 
 #   INTERFACE                            QUEUE                            DEFAULT-QUEUE                           
 0   ether1                               multi-queue-ethernet-default     ethernet-default                        
 1   ether2                               multi-queue-ethernet-default     ethernet-default                        
 2   ether3                               only-hardware-queue              only-hardware-queue                     
[admin@MikroTik] >

so, there ARE x86 cards with DEFAULT-QUEUE=only-hardware-queue? :slight_smile:

chupaka, any real-life tests? Is there any benefit with multi-queue-ethernet-default, only-hardware-queue, or the “none” queue?
Normis, will the Intel driver someday get support for those new queues?

unfortunately, I’m scared of switching to ‘none’ queues on remote hardware routers, so my only testing is on ESXi, and I can’t see any difference. Furthermore, I’m not even sure this queue type is actually in effect: MT staff says it works on RBs only :slight_smile:

Why do we still have wireless-default (SFQ) as the default on the Wi-Fi interface in v5.21 2012.10.10 ?

I think we should now have 1. a codel queue (as the best option) plus 2. only-hardware-queue as second best?