I’d like to suggest a new queueing type - basically how NetEqualizer does it.
For a known, fixed Internet bandwidth, prioritize each user's access based on that user's historical bandwidth usage, computed with a moving average (or in discrete steps, either way, so long as it keeps a history). Guarantee a minimum per-user bandwidth: perhaps some fraction of the known, fixed Internet bandwidth multiplied by a user-specifiable percentage (a per-user queue).
In other words, a per-user queue of some fixed amount, but with the remaining bandwidth prioritized (not limited) based on each user's historical usage.
This way, heavy users will never have their speeds choked to zero, but they can still use the full link when others are not using it. Still, the lighter the user, the further ahead in the queue they go. So say I have used 2 GB this week and X has used 1 GB, we are sharing a 256 Kbps link, and the 'reserve bandwidth' is set at 25%: I will get at least 256/2 * 0.25 = 32 Kbps while X is downloading. When X finishes downloading at 224 Kbps, I can go back to using the full 256 Kbps for my download. If I hold off for a while and X gets up to 3 GB for the week, our roles will change, and I will get fast speeds for my downloads. If we are about even, we will share the link equally.
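To make the idea concrete, here is a rough Python sketch of the split described above. The inverse-usage weighting is my own assumption for a priority function (the actual formula isn't spelled out here); the link rate and reserve percentage come straight from the example.

```python
# Sketch only: floor + inverse-usage weighting are illustrative
# assumptions, not NetEqualizer's actual algorithm.

LINK_KBPS = 256          # known, fixed Internet bandwidth
RESERVE_FRACTION = 0.25  # user-specifiable reserve percentage

def allocate(usage_gb, active):
    """Split LINK_KBPS among the active users: guarantee each a
    reserved floor, then hand out the remainder weighted by the
    inverse of historical usage (lighter users get more)."""
    floor = LINK_KBPS / len(active) * RESERVE_FRACTION
    remainder = LINK_KBPS - floor * len(active)
    # Lighter historical usage -> larger weight -> bigger share.
    weights = {u: 1.0 / max(usage_gb[u], 0.001) for u in active}
    total = sum(weights.values())
    return {u: floor + remainder * weights[u] / total for u in active}

# The example from above: I've used 2 GB this week, X has used 1 GB,
# and we are both active on the 256 Kbps link.
shares = allocate({"me": 2.0, "x": 1.0}, ["me", "x"])
# Both of us get at least the 32 Kbps floor, and X (the lighter
# user) gets the larger share of the remainder.
```

Note that with this particular weighting the heavy user gets more than the bare 32 Kbps floor; how steeply priority falls off with usage is exactly the knob such a scheme would need to expose.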
In my opinion this is the fairest way to do traffic shaping. Heavy users still get full rein of the line when it is free (at night, etc.), while light users get maximum speed all the time. Let me know what you think.
Oh, by the way, the reason a moving average of some kind is necessary is that otherwise, at the start of the week or month, everyone will be at 0 GB; so if I log on for a little bit in the morning, I will end up with a slower connection for a while than even a heavy user. To prevent this strange behavior near the rollover time, it could keep a per-week total but use the last four weeks (for instance) to decide my priority.
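Here's a minimal sketch of that rolling history, just to illustrate the idea; the four-week window and the class shape are my own choices, not anything from NetEqualizer.

```python
from collections import deque

# Sketch of the rollover fix described above: keep a weekly usage
# counter, but compute priority from the last four completed weeks
# plus the current week, so a weekly reset doesn't zero everyone out.
WINDOW_WEEKS = 4  # illustrative window size

class UsageHistory:
    def __init__(self):
        # Fixed-length history: appending a 5th week drops the oldest.
        self.weeks = deque([0.0] * WINDOW_WEEKS, maxlen=WINDOW_WEEKS)
        self.current = 0.0

    def record(self, gb):
        """Add traffic (in GB) to the current week's counter."""
        self.current += gb

    def rollover(self):
        """At the weekly reset, archive the finished week."""
        self.weeks.append(self.current)
        self.current = 0.0

    def priority_usage(self):
        """Usage figure to prioritize on: recent history, not just
        the (possibly freshly reset) current week."""
        return sum(self.weeks) + self.current
```

Right after a rollover, `priority_usage()` still reflects last week's traffic instead of dropping to zero, which is the whole point.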
The NetEqualizer white paper breaks down the technology pretty well. Taking a look at it might give you some ideas on how to do something similar. I personally don't know.
I do know their algorithms work stunningly well (we have a 70Mbit system), but I have no idea how to achieve the same effect. It would be great if one of the brighter members could figure out how to get close using MT!
I learned a little bit more about how the NetEqualizer does queuing and thought I'd add it to the body of knowledge on this site. Their core technology comes from open source, so I am not giving away any trade secrets.
This adds to the conversation above, which describes how the NetEqualizer shares an entire network by dynamically slowing heavy streams during times of congestion.
They also deploy fixed rate limits for anyone who is interested. These are static rate limits set up by IP, using techniques they have developed in the Linux kernel. You can have a fixed rate limit on a NetEqualizer and still allow it to fall back and slow heavy streams automatically during times of peak usage.
For standard rate limiting they put packets in a queue, but only after that user is "nearing" their allotted amount of data for the current second. The word "nearing" is used loosely. Say a user has an assigned rate cap of 100 kbps and has already transferred 90 kb in the current second; the NetEqualizer would then queue the last couple of packets for that second. Very simple, but effective. They actually have two thresholds to start this queuing: an early one (each second) and a later one that kicks in more harshly if the early one does not do the trick.
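The two-threshold idea can be sketched like this. The 90% "early" and 98% "late" thresholds (and the delay values) are my own guesses for illustration only; I don't know NetEqualizer's real numbers.

```python
import time

# Sketch of two-threshold, per-second queuing as described above.
# Threshold values and delays are illustrative assumptions.
RATE_CAP_BPS = 100_000  # assigned rate cap: 100 kbps
EARLY_THRESHOLD = 0.90  # start queuing gently
LATE_THRESHOLD = 0.98   # queue harshly if the early one wasn't enough

def packet_delay(bits_sent_this_second):
    """Delay (in seconds) to impose before sending the next packet,
    based on how close the user is to this second's allotment."""
    used = bits_sent_this_second / RATE_CAP_BPS
    if used >= LATE_THRESHOLD:
        # Harsh: hold the packet until the next second starts.
        return 1.0 - (time.time() % 1.0)
    if used >= EARLY_THRESHOLD:
        # Gentle: a short delay spreads the last packets out.
        return 0.01
    return 0.0
```

Below 90 kb in the current second packets flow untouched; near the cap they get a small delay; at the cap they wait for the next second's allotment.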