We are running into issues with Netflix consuming too much of our bandwidth, especially at night. When I torch the Netflix traffic, I can see that each customer's feed comes from a single IP address on port 80; it is sometimes spread across more than one connection at a time, but it appears to come from only one IP address at a time. On inspection, Netflix originates its server traffic from thousands of IP addresses. My idea is to throttle inbound port 80 traffic to a maximum of 2 Mbps per inbound IP address.
Can anyone think of a reason not to do this? I realize that some customers might see worse performance from other legitimate sites that really are HTTP traffic, but I cannot think of a good reason against it.
Also, does anyone know the best way to create this throttle?
I knew this would come up this week! You are in the U.S. (Texas, the last time I checked). You should do a Google search for “net neutrality”. If you restrict Netflix or Skype (or any service that depends on high bandwidth), you are now in violation of a federal regulation. You can throttle bandwidth generally, but you cannot throttle bandwidth based on the service.
You are incorrect. Netflix states that they need a minimum of 1.5 Mbps of bandwidth for their service. If I provide them with 2 Mbps, I am not denying them service, nor am I degrading its quality. More than 1.5 Mbps merely allows the user's buffer to fill faster.
Also, I am not throttling them specifically. ALL port 80 traffic would be managed this way. I would make the argument that Netflix using more bandwidth than they need denies access to other Net services.
I am not saying it is correct, or that I like it. Actually I don’t like it. If you restrict all your clients to 2M, then you are ok. If you restrict your clients to 2M for most sites, but restrict connections to Netflix or Skype to 1M, then you are in violation of the FCC regulation.
ADD: I restrict my clients to 1M (less than required for Netflix), but the bandwidth restriction applies to all services and websites. That is not in violation of the FCC regulation either, as long as it does not discriminate between websites.
ADD: My last question is: can you use burst rates in the U.S. now? If I allow a burst of 2M for 12 minutes (YouTube videos), then drop back to 1M (Netflix), is that discriminating?
To do this, you basically need to set up queues for specific kinds of traffic, and the most effective way to go about it is with mangle rules. There are a couple of approaches you can take; rough config sketches for both follow below.
1.) Set up mangle rules that will mark connections with a dst. port of 80 and 443, then mark packets based on that connection mark. In your queue tree, set up PCQ and set a hard limit of 2 Mbps for that packet mark on a per-user basis.
2.) Set up four mangle rules: two will mark connections with a dst. port of 80 and 443, the first marking connections that have transferred less than 10 MB and the second marking connections that have transferred more than 10 MB. Set up the other two mangle rules to mark the packets accordingly. Then set up a queue tree that assigns a higher priority to the packets from connections under 10 MB and a lower priority to the ones over 10 MB.
The second method has the advantage of allowing unrestricted bandwidth for HTTP, and normal web browsing should not be impacted by people downloading large files over HTTP or HTTPS or watching Netflix. When normal web browsing isn't going on, Netflix will be able to take what it needs when it needs it. You can get fancier and assign different priorities to different kinds of traffic you know about, and assume everything else is something you don't care about.
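If it helps, the mangle side could look roughly like this. It is just an untested sketch; the mark names, the 10 MB threshold and the interface are placeholders you would adjust to your own setup. For the first approach, mark all HTTP/HTTPS and feed it through a PCQ type that caps each destination address at 2M:

/ip firewall mangle
add chain=forward protocol=tcp dst-port=80,443 action=mark-connection new-connection-mark=http-conn passthrough=yes
add chain=forward connection-mark=http-conn action=mark-packet new-packet-mark=http-pkt passthrough=no
/queue type
add name=pcq-down-2M kind=pcq pcq-rate=2M pcq-classifier=dst-address
/queue tree
add name=http-down parent=LAN-bridge packet-mark=http-pkt queue=pcq-down-2M

For the second approach, the connection-bytes matcher does the light/heavy split (10 MB is 10485760 bytes; a 0 on either end of the range means no limit on that side):

/ip firewall mangle
add chain=forward protocol=tcp dst-port=80,443 connection-bytes=0-10485760 action=mark-connection new-connection-mark=http-normal passthrough=yes
add chain=forward protocol=tcp dst-port=80,443 connection-bytes=10485760-0 action=mark-connection new-connection-mark=http-heavy passthrough=yes
add chain=forward connection-mark=http-normal action=mark-packet new-packet-mark=http-normal-pkt passthrough=no
add chain=forward connection-mark=http-heavy action=mark-packet new-packet-mark=http-heavy-pkt passthrough=no

Once a connection crosses the 10 MB mark, the second rule re-marks it as heavy and its packets land in the lower-priority queue from then on.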
@Feklar: Both of those solutions are now in violation of a new regulation that the FCC approved on December 21st. To restrict ANY SERVICE (email, ftp, ssh, etc.) more than any other is now a violation of FCC regulations.
The first solution is exactly what he was asking for, restricting all HTTP/HTTPS (Netflix can use both) to no more than 2 Mbps per end user.
The second solution is fairer and is simply QoS. You are not really discriminating against one service over another; what you are doing is reordering packets so that normal web browsing can still get the full bandwidth available when needed, regardless of what others are watching on Netflix or downloading as larger files. However, when the bandwidth is not needed by normal web browsing, Netflix and other downloads are more than free to take everything that is there.
(snip)However, when the bandwidth is not needed by normal web browsing, Netflix and other downloads are more than free to take everything that is there.
Violation…
ADD: Your bandwidth throttling is based on ports 80 and 443. Those are only two of several services available. You may throttle on total bandwidth only, regardless of the port.
ADD2: I hope all of you understand (including Feklar)…I don’t like this. At first, it seemed ok, but the more I looked into it…
Pages 47-52 cover Reasonable Network Management and specifically bring up congestion management. If a few end users are crowding out the ability of others to access online content (e.g. by downloading movies via Netflix), he is within his rights to temporarily limit the amount of bandwidth they receive. In his case it would be limiting the heavy HTTP downloads for a very short period of time to allow other services through, so their service is not degraded by the few. Once the other services are done with their thing, the heavy HTTP stuff can continue unhindered.
I'll need to do some more reading up on it as well, but I've noticed that a lot of news articles slant what is going on one way or the other and don't really present everything that is there. The news articles I read basically all came off as saying you can't do any kind of rate limiting or QoS, but reading through some of the actual document, there is more leeway in there than they made it out to be.
I would agree that if they took out all of the options for a network to protect itself and ensure that everyone gets a fair amount of access, the few would abuse everything, make everything a nightmare to deal with, and we would be back at square one. The way they worded that section is that they will evaluate those kinds of situations on a case-by-case basis, but you don't need their permission to take reasonable actions.
I read those pages you recommended. More karma for you! That was not online yet when I found out about it last week. All ISPs in the US should be familiar with those pages in the doc Feklar posted above.
This should get you started. Add additional rules for HTTPS (you can just reuse the connection marks unless you want to treat it differently) and any other traffic you may want to classify.
When I enter the examples provided, I get an invalid statement. What appears to be missing is the “New Connection Mark.” When I add that entry to the statement, it becomes valid, but no traffic is marked. This occurs on all four statements. I cannot seem to find the flaw…
I have instituted the 4 packet marking mangle rules, but I am trying to decide which option for QoS is better for me:
I have enough overall bandwidth on my network; however, we are a WISP and our access points are also getting overwhelmed by the Netflix/video traffic.
If I set Heavy HTTP traffic at the lowest priority, I understand that it will delay these packets during high usage, but won’t the actual bandwidth consumption be the same?
If I set Heavy HTTP traffic to cap at 2 Mbps, I should see less bandwidth consumption, which, in turn, should relieve congestion on my access points.
Based on this, does it seem that for me, setting the queue for 2 Mbps per Heavy HTTP connection would be the better option?
It's really a design decision on your part and depends on what kind of service you are selling. You are correct that by just giving heavy HTTP a lower priority you will delay or drop the heavy stuff in favor of the light stuff, but if there is space for it, it will all go through, so actual consumption will be close to the same.
I'm not sure if you rate limit each end user or not, but that would be a better place to look. If you sell them a service at a rate limit, it would not be a good idea to set a hard cap on HTTP for each end user that is less than the rate limit they are paying for. That would very likely be a violation of the Net Neutrality rules that SerferTim and I were trading back and forth on, and if I were an end user, I would not be happy about that kind of setup either. It would be a much better idea to build out the capacity of your APs to support the extra end users.
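If you do want per-subscriber limits, that is normally done with a simple queue per customer (or a PCQ covering the whole customer subnet) rather than a per-protocol cap. Just as an illustration, with a made-up address and rate, and using the v5-style parameter name (newer RouterOS calls it target):

/queue simple
add name=customer-example target-addresses=192.0.2.10/32 max-limit=2M/2M

That caps the customer's upload and download at 2M each, no matter what service the traffic is.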
So I am attempting to understand the hard cap on the Heavy HTTP traffic. Using PCQ on a queue tree, and the way I have marked my traffic, would the 2 Mbps cap on Heavy HTTP apply to each port connection or to the entire destination IP address? More precisely, if there is a data stream on port 80 from one source, would ALL of the port 80 traffic be restricted to 2 Mbps for the duration of the stream, or just that individual connection?
You don't need to use PCQ for the 2nd option; a simple pfifo is more than enough. PCQ dynamically divides a queue into smaller sub-queues based on the parameters you feed it, and that is still an option if you want it. The queue tree should look something like this:
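Just as a rough sketch (LAN-bridge and the packet-mark names are placeholders; match them to whatever you used in your mangle rules, and pick numbers that fit your link):

/queue tree
add name=download parent=LAN-bridge max-limit=10M
add name=http-normal parent=download packet-mark=http-normal-pkt limit-at=7M max-limit=10M priority=1 queue=default
add name=http-heavy parent=download packet-mark=http-heavy-pkt limit-at=3M max-limit=10M priority=8 queue=default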
This sets a hard limit of 10M on the queue, specifically 10M on the LAN interface. Of that, 7M is guaranteed for Normal HTTP and the other 3M is guaranteed for Heavy; when the 7M is not fully used by “Normal HTTP”, the heavy traffic can take the rest.