There are quite a few such tools, but this one is one of the more popular.

Is this "free download manager" using a given port? If yes, you can capture its traffic using the port; otherwise, I suggest you have a look at the "connection rate" feature.
I read the wiki. Now, given my connection numbers, 50 Mbps down and 1 up - is it effective when applied to downloads at all?

What I posted was just a snippet of the "connection rate" configuration.
This configuration is very good in situations where a heavy download disturbs normal internet operation. I strongly suggest you read the wiki about connection rate. With this configuration, you will be able to separate the heavy download, whatever it is (P2P, download manager, YouTube, etc.), from the "normal" web browsing, VoIP, etc.
In what I posted, you will see that there is a queue acting as the parent with a limit of 1M, and two other child queues with the same limit but different priorities. Later, I changed this configuration a little, always leaving 128 kbps of space for the child queue with the highest priority.
If you read the wiki you will understand me better.
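For reference, a rough sketch of the kind of queue tree described above (the 1M limit and 128k guarantee come from the post; the interface name and packet-mark names are placeholders, not the poster's actual export):

```
# Parent capped at the ~1M upload
/queue tree add name=up-parent parent=WAN max-limit=1M

# Child for normal traffic: highest priority, 128k guaranteed
/queue tree add name=up-normal parent=up-parent packet-mark=normal \
    priority=1 limit-at=128k max-limit=1M

# Child for heavy traffic: same max limit, lower priority, no guarantee
/queue tree add name=up-heavy parent=up-parent packet-mark=heavy \
    priority=8 max-limit=1M
```

With limit-at=128k on the high-priority child, interactive traffic always has some room even when the heavy child is saturating the parent.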
1/30 is a very real ADSL line. I can't get anything else in Spain.

Your 1 Mbps upload relative to the 30 Mbps real download is terribly low. This internet line is very asymmetric! But that is not the issue we are discussing right now.
Hmmm, I don't think "earlier" is possible. When connections are started, the router has no idea how long or how big they are going to be.

I monitored my QoS setup over the last week. What I noticed is that marking connections based on connection-bytes with connection-rate isn't good enough when connections are short-lived: the download manager opens dozens of connections for every file, and there are many files (think 50-80 100 MB files in a queue, which goes pretty fast). To mark a new connection as "heavy" takes some 500 KB of download at regular priority, and as new connections are created often and in large quantities, all those first 500 KBs noticeably slow down regular HTTP traffic.
Need some more ideas on how to catch multi-connection heavy stuff earlier...
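To make the problem concrete, this is roughly the kind of mangle setup being described: a connection is only remarked as heavy after it crosses the byte/rate matchers, so its first ~500 KB always travels with the regular mark. The thresholds and mark names here are illustrative, not the poster's exact rules:

```
# Mark every new port-80 connection as plain "http"
/ip firewall mangle add chain=forward protocol=tcp dst-port=80 \
    connection-mark=no-mark action=mark-connection \
    new-connection-mark=http passthrough=yes

# Re-mark a connection as heavy only once it has carried ~500 KB
# at a sustained rate - everything before that stays "regular"
/ip firewall mangle add chain=forward connection-mark=http \
    connection-bytes=500000-0 connection-rate=200k-100M \
    action=mark-connection new-connection-mark=http-heavy passthrough=yes
```

With dozens of short-lived connections per file, most traffic never reaches the remark rule at all, which is exactly the complaint above.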
Well, everyone can buy a car and do with it what he wants... as long as you stay within the driving laws. Why these laws? Well, without them it would become a massacre on the roads.

I'm a rather strong believer in net neutrality. If someone is paying me for a certain download speed, then they have every right to do whatever they want with the speed that they paid for. It's also a matter of professional ethics. If someone is paying you their money for a certain download rate, then you should give it to them and not say, 'Oh, you have a 1 Mb connection, except when you try to download stuff; then I'm going to throttle you down.' The only time you really get to use the bandwidth you pay for is when downloading. If you're overselling your bandwidth, that's just a bad business practice.
The way around that is to say you have a 256 kb connection burstable to 6 Mb, so web pages will download fast but extended downloads will drop down to the paid bandwidth. That way you are not misrepresenting your service.
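RouterOS can express exactly that "advertise the sustained rate, allow a burst" model with the burst parameters on a simple queue. A sketch, with hypothetical client address and values (burst is allowed up to burst-limit as long as the average rate over burst-time stays below burst-threshold):

```
# "256k sustained, burstable to 6M" for one client (example values)
/queue simple add name=client1 target=192.168.88.10/32 \
    max-limit=256k/256k burst-limit=6M/6M \
    burst-threshold=192k/192k burst-time=16s/16s
```

Short page loads ride the 6M burst; a long download's average rate soon exceeds the threshold and the queue settles at the honest 256k.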
We are actually talking about two types of queues here:

I don't know how many times I've read it... it took a while to digest.
The problem still seems to be there: there will be many heavy connections marked as regular and waiting to be remarked as heavy. But while all of them are waiting, the regular queue will be overloaded, which directly translates into slow browsing.

That said, in the default PCQ queue config, packets compete within the same queue. When done as you suggested, there will be a bunch of queues competing for the same bandwidth. What is the advantage of the second way? You can hard-limit the rate, but I guess you don't want to, because if no one is browsing at some moment, what's the point of limiting downloads and wasting bandwidth? Heavy stuff should be limited only when necessary, not always...
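The hard-limit-vs-no-limit distinction above maps onto PCQ's pcq-rate setting. A sketch with example names and a hypothetical 512k cap:

```
# pcq-rate=0: sub-queues share the parent's bandwidth with no
# individual cap - nothing is wasted when the link is otherwise idle
/queue type add name=pcq-down-free kind=pcq \
    pcq-classifier=dst-address pcq-rate=0

# pcq-rate=512k: every sub-queue is hard-limited to 512k, even
# when spare bandwidth is available
/queue type add name=pcq-down-capped kind=pcq \
    pcq-classifier=dst-address pcq-rate=512k
```

With pcq-rate=0 the only limiting comes from the parent/child max-limit, which matches the "limit heavy stuff only when necessary" preference.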
I agree with you, somehow. This topic is about making life easier for the client, whatever his limits are. For example, these configurations are optimal.
Well, if that is what we call "technical traffic shaping", then that is what we do.

Most people support net neutrality, but it usually means opposing shaping done for business/political reasons (P2P shaping by Comcast is a good example). Technical traffic shaping has always been there, and usually isn't really felt by users, as it merely balances traffic.
I really don't get the point of doing that.

So this child will now be filled with more, but smaller, queues than before.

That's what I had with the initial setup: all connections created with medium priority, and after some conn-bytes at conn-rate, some of them get downgraded to low priority. I'm not sure I understand what we gain by creating individual queues per src and dst combination, though - the overall bandwidth distribution remains the same, doesn't it? And the main problem is the very fact that all of them are initially in the medium-priority bucket, screaming and kicking at each other.

But if we now give this new child a lower priority than the previous one, we guarantee that newly made port 80 connections start with a higher priority than already running "heavy" ones.
I think this is good, thanks man.
Another approach maybe:
Normally we use PCQs with the dst- or src-address classifier only. So the queue groups all srcIP/port - dstIP/port connections into one queue as long as the src IP is the same (and thus all coming from one client).
If the mangle now filters all port 80 traffic to give it a connection mark and a packet mark, the queue tree then uses a queue type to put all these connections in. Normally we have pcq-rate set at "0" to allow all connections to balance their load inside that queue, and we set max-limit in the child of the queue tree (and prioritise) to limit ALL similar (= port 80) connections from ALL clients.
So if one client has already made plenty of connections that are put in the queue belonging to that client, new connections have a hard time entering.
What happens now if we add src-port as an extra classifier? Then a new queue is made for each src-port and src-IP combination each time a new connection is opened. Now we create multiple PCQs from one client, each with a different src port. Port 80 connections from a client, if made by a download manager, will only differ in src port.
By also separating queues based on src port, each new connection doesn't have to compete with already existing queues, but only with other queues (from other clients) in the queue tree. And here we can rate-limit and prioritize with the help of the mangle connection-rate and size matchers.
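The classifier change being proposed is a one-line difference in the queue type definition. A sketch with placeholder names:

```
# Classic: one PCQ sub-queue per client
/queue type add name=pcq-per-client kind=pcq \
    pcq-classifier=src-address pcq-rate=0

# Proposed: adding src-port splits a download manager's parallel
# connections into separate sub-queues, one per connection
/queue type add name=pcq-per-conn kind=pcq \
    pcq-classifier=src-address,src-port pcq-rate=0
```

The first type makes all of a client's connections share one sub-queue; the second gives each src-IP/src-port combination its own.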
How will it work:
The client's download manager opens several connections.
The connection tracker registers connections based on src-IP/port - dst-IP/port combinations. Since each connection from the download manager has a different src port, separate connections are registered by the connection tracker.
Mangle 'sees' all of these, and since they are all port 80, gives them the "http" connection mark.
Now several streams of packets labeled with connection mark "http" are created for that client.
All these streams are then matched by the packet-mark rule using the "connection-mark" matcher and given the packet mark "HTTP".
So now we have several streams (from each client, and from many clients), all with the packet mark "HTTP".
So far this is a normal mangle setup for QoS.
But now we want to distinguish, within this group of connections, the slow or short-lived ones from the fast and/or long-lasting ones.
So we mangle the stream of "HTTP"-labeled packets again, and with the help of the "connection-bytes" and "connection-rate" matchers we give them different new packet marks, like "HTTP_normal" and "HTTP-Heavy".
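The mangle stage described above could look roughly like this (thresholds and mark names are examples; rule order matters, since the heavy rule must match before the catch-all normal rule):

```
# 1. Connection mark for all port-80 traffic
/ip firewall mangle add chain=forward protocol=tcp dst-port=80 \
    action=mark-connection new-connection-mark=http passthrough=yes

# 2. Heavy first: once a connection passes the byte and rate
#    matchers, its packets get the HTTP-Heavy mark
/ip firewall mangle add chain=forward connection-mark=http \
    connection-bytes=500000-0 connection-rate=200k-100M \
    action=mark-packet new-packet-mark=HTTP-Heavy passthrough=no

# 3. Everything else keeps the normal mark
/ip firewall mangle add chain=forward connection-mark=http \
    action=mark-packet new-packet-mark=HTTP_normal passthrough=no
```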
Now we set up the queue tree.
We have a parent, limited to the total physically available bandwidth.
Then we create children:
The first child puts all port 80 traffic with the packet mark "HTTP_normal" in a queue and gives it a normal priority (5); we set the limit-at rate for a bandwidth guarantee and max-limit for the maximum throughput. (Set it to the same max-limit as the parent; see the other topic for this.)
Now we also make a new child for traffic that in mangle has been given the packet mark "HTTP-Heavy". Since we want "HTTP_normal" traffic to have preference, we give this child a lower priority. We can also give it a lower guaranteed bandwidth, so it has to compete on priority with other traffic sooner. We can even give it a lower max rate than the other, so it will never completely fill the pipe.
Regarding the queue types we use:
For normal traffic we use a normal PCQ type with the src-address classifier only. For ALL port 80 connections (new for browsing or new for download) it guarantees that all these connections share the total bandwidth of this child equally, with unlimited per-sub-queue speed (pcq-rate=0). (The total limiting is done in the child for all these connections together, and also by the per-client shaping for each client individually.)
Now, if some of these connections hit the connection-rate and size matchers in mangle, their packets suddenly get a different label: "HTTP-Heavy".
Since the child made for normal browsing only looks at packets marked "HTTP_normal", they now disappear from that child. So we have to make a new child.
This one looks for the label "HTTP-Heavy". We also change the queue type: we make a new queue type and give it the classifiers src-address AND src-port, so each connection now suddenly gets its own queue.
This child gets a lower priority, so new "same kind of traffic but not heavy yet" ("HTTP_normal") connections get preference over these.
We can now also further limit the speed of these existing but newly labeled packet streams by setting a rate limit in the queue. Since each different src port, even from the same client, now has its own connection, and thus its own queue, we can limit this queue. We can set it to any limit we want, as long as it is lower than the total available for all port 80 traffic for that client. (If not, only priority would make a difference.)
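Putting the whole proposal together, the queue types and tree might look like this. All names, rates, and priorities are illustrative guesses, not a tested configuration (the poster says himself he hasn't tried it yet):

```
# Queue types: shared sub-queues for normal, per-connection for heavy
/queue type add name=pcq-http-normal kind=pcq \
    pcq-classifier=src-address pcq-rate=0
/queue type add name=pcq-http-heavy kind=pcq \
    pcq-classifier=src-address,src-port pcq-rate=256k

# Queue tree: normal browsing wins on priority; each heavy
# connection is additionally capped by pcq-rate above
/queue tree add name=total parent=global max-limit=50M
/queue tree add name=http-normal parent=total packet-mark=HTTP_normal \
    queue=pcq-http-normal priority=5 limit-at=10M max-limit=50M
/queue tree add name=http-heavy parent=total packet-mark=HTTP-Heavy \
    queue=pcq-http-heavy priority=7 limit-at=1M max-limit=40M
```

Because the heavy type classifies on src-address plus src-port, every remarked connection lands in its own sub-queue and can be rate-limited individually, which is the point of the whole exercise.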
I think this way you could even make several steps in limiting long-living connections.
I don't know if I am right about all this. Maybe I am overlooking something, and I also haven't tried this setup yet.
So any input, even if it makes me look stupid, is still appreciated!