> Honestly, the only time that you **REALLY** need it off is with VoIP. It really screws with the packets and creates tons of jitter.

So nobody can really switch it on? I'd presume all traffic nowadays has VoIP packets in it as well (if you are an internet provider)?
> More bandwidth? Without knowing all the details of your network, links, load, traffic, etc., there is really no way to answer that. If you have oversold your bandwidth, you don't have a lot of choice. You can try traffic shaping, throttling, etc. Flow Control is not going to help when the physical links are maxed out. It stops traffic indiscriminately, so it is NOT a good option when dealing with VoIP.

Bandwidth can't be the problem. We have a 300/300Mb symmetric line and rarely see combined client traffic coming close to 200...
What you will need to do is analyze your traffic, figure out where it is all being used, and then decide whether you have to buy more bandwidth or do some QoS.
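If it helps, a quick way to watch live where the bandwidth goes on a MikroTik is Torch; a minimal sketch, assuming ether1 is the upstream interface:

/tool torch interface=ether1 src-address=0.0.0.0/0 dst-address=0.0.0.0/0

That shows per-host/per-connection throughput on the link in real time, which is usually enough to spot the heavy users.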
> auto is the same as on, except that when auto-negotiation=yes the flow control status is resolved by taking into account what the other end advertises.

Also using a custom queue type: pfifo with a queue size of 10 packets for the PPPoE server...
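For reference, a sketch of what that could look like in RouterOS (the queue and interface names are illustrative):

/queue type add name=pppoe-small-fifo kind=pfifo pfifo-limit=10
/interface ethernet set ether1 rx-flow-control=auto tx-flow-control=auto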
> Perception of a fast network is very important regardless of what package they have signed up for, so to this end we use short-term traffic bursts. Most customers generally don't check speeds until they notice a connection has slowed down, and we are trying to ensure they don't have a reason to complain!

I cannot agree more... I have found that one of the #1 factors in providing that "snappy" browsing experience lies in a very fast DNS cache, and in making all your users actually use it.
I agree. But what is the best way to achieve it?
@WirelessRudy: Yes, earthing in Spain or any other country with very hot weather can be a challenge...
> The best is not trusting anything already installed (unless you measure it and it tests OK on a dry, hot summer day). To install your own grounding, you can source everything needed from an electricity supplier: special conductivity salts, and the ground rods or plates best suited for the terrain: http://www.dielectroindustrial.es/syste ... 202011.pdf

Thanks for the link, very interesting! Did you take the effort to find something for me in Spanish, or are you Spanish? I am not, although I have lived here in the Alicante region for 16 years now... My languages are Dutch, English, Spanish, German... in that order... but reading the document in Spanish is not a problem...
> Regarding speedtest, do you have your own servers? Most servers are deployed by other (W)ISPs, so measuring using those servers will give you rather mixed results, as most, as you said, oversubscribe.

Do you refer to the http://www.speedtest.net servers? No, we don't have our own. I am thinking of it now. But my assistant thinks it's a bad idea, since we are only behind a 300/300Mbit line from our provider and such a server would be open to everybody else in the world! (Although servers are picked after a ping test. I'd presume this will guarantee the rest of the world will not pick our server?)
On the other hand, I am thinking it might be a good idea, so our clients test directly against our server and thus get the best results...
> Are you sure yours is a backpressure problem and not a TCP connection or fragmentation issue?

Not sure what you mean by a "back pressure" problem. To be honest, with so many variables that can be set in different places in our network and in the routers, most of them in use for years, I am without direction.
In fact, we had an 'audit' done by an IT engineer last year and he pointed to some things we could do better, but overall we never got much help out of that. Our network was/is too complicated for simple general advice...
> Do you use simple queues at the PPPoE AC to limit speeds?

Yes, in our CCR1016-12G that is connected to the provider's Cisco we have the PPPoE server assigning simple queues to the authenticated clients.
We have an Intel i7 3.2GHz with 8GB RAM running a CHR virtual environment, which in turn runs The Dude and User Manager to manage the clients.
In that User Manager we have some profiles for the clients. There are no priorities set for users; because we never reach the maximum of our assigned bandwidth, priorities are of no use in this router (?)
The difference between what clients' connections are rated at and the burst we give them is quite big, and the burst can last quite a long time. I've done this since I don't mind people making a 5-minute heavy download at great speed, but I don't want people starting downloads at the highest speeds that last for hours... but maybe this whole setup needs review after the many years we have basically used this same principle (I only increased the speeds and prolonged the burst times together with the data thresholds...).
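As a sketch of that principle, a static simple queue with a long, generous burst could look like this (the target address and values are made up for illustration; in our setup the queues are created dynamically by the PPPoE server from the User Manager profile):

/queue simple add name=client-example target=10.10.0.2/32 max-limit=10M/10M burst-limit=30M/30M burst-threshold=8M/8M burst-time=300s/300s

The burst only activates while the running average stays below the threshold, so a short heavy download flies, while an hours-long one settles back to the contracted 10M.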
> I cannot agree more... I have found that one of the #1 factors in providing that "snappy" browsing experience lies in a very fast DNS cache, and in making all your users actually use it.
add action=dst-nat chain=dstnat comment="re-route dns requests to Google DNS" dst-port=53 protocol=udp to-addresses=8.8.8.8 to-ports=53
add action=dst-nat chain=dstnat comment="re-route dns requests to Google DNS" dst-port=53 protocol=tcp to-addresses=8.8.8.8 to-ports=53
> Thanks for the link, very interesting! Did you take the effort to find something for me in Spanish, or are you Spanish? [...]

Main reason was to ease the local sourcing of the components; I knew you're located in Alicante. And yes, I am Spanish.
> Great discussion going on here. So keep it going!

So this would mean, looking from your gateway 'into' your network: once Gigabit is passed and 100M is reached further down the 'pipe', you'd better not use gigabit again?
One tidbit I'd like to add is that if you're going back and forth between 100Mbps and 1Gbps several times in a path, this can cause issues if the equipment isn't well aware of it - suppose a router can transmit across a gigabit interface to some device via a switch, but the next device is a 100Mbps-eth device.
Suppose you have this:
(R1) ---gige---> [switch] -----faste---> {bridge} --faste---> [switch] ---gige--> (R2)
R1 and R2 see it as a gigabit link. Even if you queue at 100Mbps limits, there can be issues because the traffic still crosses the wire at gigabit speeds, and in small timescales, you may exceed the line rate of the 100M device links... and if the {bridge} happens to be experiencing congestion/interference/lower TX/RX rates - the problem can get worse.
You would be better off using 100Mbps links from the routers to the switches in this case so that the routers can't barf packets at the switches faster than they can clear the 100Mbps links.
> Regarding your network, pointing out possible enhancements is just one part of tuning a production network... then comes actual deployment of the enhancements, measurements, testing, and reasoning out the results towards further improvements or pointing out bottlenecks and "weak links in the chain"... it's not too unusual to unleash hidden flaws or problems while doing so.

Well, I just switched the gateway redirector on again. We'll see for how long the DNS cache does us good. And in upgrading the network to answer the growing demand from customers, and to keep pace with or stay ahead of the competition, we can and will offer more speed to the customers. Hence, in a network that always worked fine with 4Mb assigned to clients who only occasionally used it (apart from the usual lice using P2P), we now see people bringing back their smart TVs to start watching streaming video on demand: the provider says he delivers 15Mb now, so why not 'eat' that... So yes, any flaws will pop up...
> Wireless networks in production have an "organic" component of sorts; it is usually necessary to closely watch and monitor them for an extended time, and to slowly introduce changes one at a time, to be able to tell and measure the before-and-after difference and thoroughly audit them; sometimes fixing a bottleneck reveals another one upstream or downstream, thus changing the whole game.

I agree. It's alive, it's alive! But yes, you fix one thing just to run into something new...
Regarding the DNS cache, I meant deploying a recursive resolver DNS cache locally. Google services can change rather quickly, so TTL is something to especially watch out for while caching their records.
The difference between using Google DNS or the provider's DNS versus a local, fast recursive cache can be several orders of magnitude; and this is where everything your customers routinely do on the network starts, every time:
DNS requests are resolved by the local cache in 1/8th of the time that using external DNS would take. That means slicing 1,750ms off all requests (i.e., roughly 250ms instead of 2,000ms), and more importantly: you're optimizing the traffic that is amongst the most critical for a smooth, snappy network user experience, if not the most critical.
Redirection is usually the easiest way of making (forcing) everybody use the cache, but it may not work for all your customers depending on how they resolve DNS; usually a minority will need their IP bypassed, as their resolver will complain or refuse to work when redirected.
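In RouterOS, that bypass can be a plain accept rule placed above the redirect rules; the address-list name and client IP below are just examples:

/ip firewall address-list add list=dns-no-redirect address=10.10.5.17
/ip firewall nat add chain=dstnat action=accept protocol=udp dst-port=53 src-address-list=dns-no-redirect comment="bypass DNS redirect for picky resolvers"
/ip firewall nat add chain=dstnat action=accept protocol=tcp dst-port=53 src-address-list=dns-no-redirect comment="bypass DNS redirect for picky resolvers"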
> Since then we actually started to point each CPE and every other router directly to the OpenDNS servers, so requests just pass through the gateway router.

Don't do this. Run your own DNS resolvers inside your own network. They query the root servers and don't leech off another provider's DNS infrastructure. I used to be a DNS provider, and it sucks when everyone sucks up your resources for no return.
In the last year we tried the 'dst-nat re-route to itself' with the DNS cache server in the gateway again, but after a week or so we once more ran into DNS issues at the clients. Many pages became unresponsive, so we disabled the DNS cache in MikroTik again and went back to forwarding to the Google servers. (A special test program revealed that these, for us, are faster than OpenDNS or any other DNS server.)
/ip dns
set allow-remote-requests=yes cache-max-ttl=1d cache-size=2048KiB max-udp-packet-size=4096 query-server-timeout=2s query-total-timeout=10s servers=8.8.8.8,8.8.4.4,208.67.222.222,208.67.220.220
/ip firewall nat
add action=redirect chain=dstnat comment="re-route dns requests to the router's own DNS cache" dst-port=53 protocol=udp to-ports=53
add action=redirect chain=dstnat comment="re-route dns requests to the router's own DNS cache" dst-port=53 protocol=tcp to-ports=53
> Flow Control, should I use it?

OK, with your post and reading up a bit more, it seems the camp of 'No, don't use it!' is winning...
IMHO: NO. We now have VoIP, and greater, more modern QoS systems that can handle this type of traffic the right way.
Extract from Wikipedia: https://en.wikipedia.org/wiki/Ethernet_flow_control
It is an old mechanism; it was the FIRST mechanism!: "The first flow control mechanism, the PAUSE frame, was defined by the Institute of Electrical and Electronics Engineers (IEEE) task force that defined full duplex Ethernet link segments. The IEEE standard 802.3x was issued in 1997."
The original motivation for the pause frame was to handle network interface controllers (NICs) that did not have enough buffering to handle full-speed reception.
Ethernet Flow control disturbs the Ethernet class of service (defined in IEEE 802.1p), as the data of all priorities are stopped to clear the existing buffers, which might also consist of low priority data.
@zerobyte:
Any input on the mixed 100M/1000M theme?
Is it indeed true that once the route from gateway to client has passed the first 100M cable, it is not wise to use 1000M in the network further down towards the client?
But what if the client himself installs gigabit in his LAN?
Or is the 'mixture' issue only an issue if we use switched or bridged interfaces in a route?
And what about the transfer from Ethernet to wireless?
I can imagine traffic that comes from an internet server at full speed (initially 300Mb from my provider), running over gigabit cables to my CCR and then to a NetMetal, will see its first bottleneck in a wireless link with a 200Mb connection rate. After that, again some gigabit cables, another CCR, and more gigabit cable to the next NetMetal that serves a 100Mb link. Finally, further down the road, we will even come across a 100M cable, and the P2MP network will only deliver some 10, 20, 40(?) Mbs to the client.
All in all, at the first CCR, my gateway, the PPPoE server assigned a simple queue for that specific client that limits the speed to 10, 20 or 30Mbps...
So how does packet behaviour look in such a 'mixed speed' environment?
> @zerobyte: Any input on the mixed 100M/1000M theme? [...]

Yeah, OK, but does it hurt if, behind the bottleneck (which could well be a 100M cable), we find another stretch of the path formed by a gigabit cable?
Think of this device:
[image: a wooden hourglass]
... and all should be clear. Note that the sand falls quite freely through the lower, large portion of the hourglass. It doesn't really matter how fast the network is beyond the bottleneck: if your network can push too much traffic up against a bottleneck, then you get this effect. If the sand were only allowed to fall into the upper half of the hourglass at the same rate that it drains through the neck, then there wouldn't be a pile of sand in the top half.
There's always going to be a bottleneck, but it's how well the bottleneck handles traffic and how well the devices feeding data into the bottleneck handle this that can make the difference between smooth performance and jittery performance.
In general, a traffic shaper will give much better results than a hard limitation that simply fifo queues a few Kilobytes of data. The reason I said "router" in my example is that most routers have a lot of qos and traffic management tools at their disposal, and many switches/bridges have only limited tools to this effect. For instance, Ubiquiti gear will prioritize certain traffic based on DSCP values - if the router is classifying and marking important traffic with high-priority DSCP values, then the wireless gear will behave better during congestion than it would if there were no QoS at all.
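As a sketch of that classification step on a MikroTik, marking VoIP traffic with DSCP EF (46) so downstream gear can prioritize it; the SIP/RTP ports are assumptions and depend on your VoIP platform:

/ip firewall mangle add chain=forward action=change-dscp new-dscp=46 protocol=udp dst-port=5060 comment="SIP signalling -> EF"
/ip firewall mangle add chain=forward action=change-dscp new-dscp=46 protocol=udp dst-port=10000-20000 comment="assumed RTP port range -> EF"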
If you were to implement a queue tree on the R1 router which shapes the traffic to fit well through the 100Mbps links (I usually shape just a tad slower than the physical bottleneck) then you're going to have more control over performance than if a simple store-and-forward switch receives frames too fast to write them out the 100M interface.
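A minimal sketch of that idea on R1, assuming ether2 faces the 100Mbps path, shaping to 95M, just under the physical bottleneck:

/ip firewall mangle add chain=forward action=mark-packet new-packet-mark=downstream passthrough=no out-interface=ether2 comment="everything heading towards the 100M path"
/queue tree add name=shape-to-100M parent=ether2 packet-mark=downstream max-limit=95M queue=default

This way R1 absorbs and schedules the excess itself instead of letting a dumb store-and-forward switch drop it.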
Obviously, there's no one-size-fits-all rule of thumb here - suppose you have a gig-e link into a switch and there are 4 possible 100Mbps paths that traffic could take beyond the switch - you can't just shape to 100M and call it a day.
> In general, no: if you have a 100M device going through a switch which forwards along a 1G interface, there's no way for your device to overflow the capacity of the 1G interface, so it doesn't hurt to go from smaller to larger, even with "dumb" devices like workgroup switches, media converters, etc.

OK, clear as a bone... but what about the mix with wireless?
Just realize that networks aren't unidirectional, so in the return direction things are going from fast to slow...
Anyway, if a link looks like this:
----======------ then you're never going to overflow the buffers at the 100M/1G boundaries, because it's 100M at the ends...
if it looks like this:
====-----===== then you could possibly run into problems.
> Nice wireless ASCII art.

OK, the conclusion sort of is:
Worst case, try setting the interfaces to 100M (if the wireless link is at or below 100M) - otherwise, I don't think there's a lot on the physical link that you can/must do. Any tweaking you investigate should come in the form of queues/traffic shaping.
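For the record, forcing a RouterOS Ethernet port to 100M full duplex looks like this (ether5 is just an example):

/interface ethernet set ether5 auto-negotiation=no speed=100Mbps full-duplex=yes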
> Don't do this. Run your own DNS resolvers inside your own network. [...]

These days you can easily install Pi-hole with Unbound on many routers/micro-servers and you're done.
If you are really fancy, you run anycast DNS inside your network: advertise a single address in multiple places using OSPF (100.100.100.100 is a great IP to use) and it will automatically be served from the closest location.
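A rough sketch of that on each resolver node, in RouterOS v6 syntax (the loopback interface name and OSPF area are assumptions):

/interface bridge add name=lo-anycast comment="loopback holding the anycast DNS address"
/ip address add address=100.100.100.100/32 interface=lo-anycast
/routing ospf network add network=100.100.100.100/32 area=backbone
/ip dns set allow-remote-requests=yes

Every node advertises the same /32, and the OSPF path cost decides which cache a given client actually reaches.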
> Blast from the past, talking about Pi-hole on a 6-year-old topic named "Flow Control, should I use it?"

Well yes, if only MikroTik routers were using up-to-date hardware and kernels, they would not still suffer from flow-control-related issues, which they still do 6 years later.
Totally related. Bravo!