Which is better (reliability, throughput, latency) for use with Nv2: station-wds, station-bridge, or station with MPLS, for PtMP 802.11g Wi-Fi?
I’m wondering the same thing. Maybe people can share their experiences?
My point-to-point shots are on different subnets, routed to my APs. My AP interfaces are bridged, and the bridges are routed. My customers are NATed.
The DHCP server on the AP bridge gives out IPs to the CPEs; each CPE runs NAT and gives its clients addresses on the same 192.168.88.0/24 subnet. My customers can play each other online, but it would be hard for one of them to hack another.
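As a rough sketch of that setup (interface names, pool ranges, and addresses here are hypothetical examples, not the poster's actual config), the AP side runs a DHCP server on the bridge and each CPE masquerades its LAN:

```
# On the tower AP (AP bridge side): hand out IPs to the CPEs
/ip pool add name=cpe-pool ranges=10.0.0.10-10.0.0.250
/ip dhcp-server add name=cpe-dhcp interface=bridge-ap address-pool=cpe-pool disabled=no
/ip dhcp-server network add address=10.0.0.0/24 gateway=10.0.0.1 dns-server=10.0.0.1

# On each CPE: NAT the customer's 192.168.88.0/24 LAN behind the wireless address
/ip firewall nat add chain=srcnat out-interface=wlan1 action=masquerade
```

Because every customer LAN sits behind its own CPE's NAT, customers can reach each other's public-facing services but not poke around inside each other's LANs.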
In order for any traffic to pass from tower to tower it must be routed. Within a tower it's all one network, so far; my system hasn't grown large enough for that to be a problem.
If you bridge it all the way to the customer you will have endless, uncontrollable network chatter. While it won't amount to a significant bitrate, it will cause all sorts of problems on your network.
The absolute least per-packet overhead is basic station mode (it sends IP packets without an Ethernet header), whereas every other mode sends an additional Ethernet header. It is possible to use station mode with MPLS when the station side is acting as a router, but an additional Ethernet header is most likely still sent, not counting the routing protocols required, which would probably need only IP.
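A minimal sketch of a plain station-mode CPE that routes instead of bridging (SSID, addresses, and interface names are illustrative assumptions; ROS v6 syntax):

```
# CPE as a basic station (no WDS / station-bridge), acting as a router
/interface wireless set wlan1 mode=station ssid=TowerAP band=5ghz-a/n wireless-protocol=nv2

# Routed WAN side toward the AP, routed LAN side toward the customer
/ip address add address=10.1.0.2/29 interface=wlan1
/ip address add address=192.168.88.1/24 interface=ether1
/ip route add dst-address=0.0.0.0/0 gateway=10.1.0.1
```

Since the station only forwards routed traffic, there is no need for the 4-address WDS framing that station-wds or station-bridge would add on the air.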
I would profile my configuration before rolling it out; what seems easy in a lab gets tedious after a while in production, especially if you need to debug.
Hey 0ldman,
Could you post an example of a PtP link, with radio config and address scheme?
And could you be more specific about which interfaces are bridged, and how the bridges are routed?
I’m trying to work out an addressing scheme for my network that will be helpful when implementing OSPF. At the moment, we have a couple dozen APs linked by a “backbone” using 5GHz and Ethernet. The radios use ap-bridge & dynamic wds, and all backbone interfaces are bridged. (Customer-facing interfaces are on different subnets, served DHCP by the local AP, etc.)
The problem is, in ROS 5.x and 6.x, ap-bridge to ap-bridge via WDS causes packet storms, at least in our network environment. People talk about routing between APs rather than bridging; the problem is, I have yet to see a working example of a routed backbone configuration. Do you just assign a separate IP address to each backbone interface, then configure OSPF to share each AP's internal routes with the other APs? What systematic numbering scheme(s) have people developed?
I have been working with MT for ~5 years, but my background is in software & mathematics, and I have never had a professor or experienced network engineer to pass on best-practices for what might seem like elementary questions. I have gotten away with re-inventing the wheel thus far, but I’m reaching the limits of trial-and-error.
The radios use ap-bridge & dynamic wds, and all backbone interfaces are bridged. (Customer-facing interfaces are on different subnets, served DHCP by the local AP, etc.)
Be careful about doing dynamic things with your links; the dynamic stuff should rather live at Layer 3.
The problem is, in ROS 5.x and 6.x, ap-bridge to ap-bridge via WDS causes packet storms, at least in our network environment. People talk about routing between APs rather than bridging; the problem is, I have yet to see a working example of a routed backbone configuration. Do you just assign a separate IP address to each backbone interface, then configure OSPF to share each AP's internal routes with the other APs? What systematic numbering scheme(s) have people developed?
I have made several numbering schemes for several networks, and it’s a topic I don’t usually discuss.
However, think about your network and make an informed guess as to how many IPs you require for management of CPEs and other last-mile equipment. Router loopbacks should be in a different address range from CPEs and link addresses, since you don’t want them to be aggregated (it will become apparent later); equipment link addresses should be in a different address range from both loopbacks and equipment management. None of those need to be public addresses, but if you already have a PA/PI allocation, I recommend the loopbacks be public; the reasons will become apparent later on.
If you are thinking public addresses only for the customers, then that is a big question that depends on your last-mile access method. DHCP leaves you little choice; it also has several issues that generally leave it best suited inside an organization rather than as an access method, and later on it will be hard to justify your allocation/usage, no matter what allocation size you pick.
I guess what I’m trying to say is that your choice of last-mile access method is the most prone to problems, the most easily broken (intentionally or not), and the hardest to get right.
The best access method is often determined by your equipment; your configuration just has to compensate for the flaws in the access method. Very few of the methods commonly implemented in an enterprise (a controlled environment) work well in an uncontrolled environment (last mile and wide area networks generally).
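To make the loopback advice above concrete, here is a hedged sketch of one possible plan (all ranges are made-up examples, not a recommendation for any specific network): loopbacks from 10.255.0.0/24 as one /32 per router, backbone links carved as /29s out of 10.1.0.0/16, and CPE management out of 10.0.0.0/16. On RouterOS a loopback is typically an empty bridge:

```
# A loopback interface on RouterOS: an empty bridge with a /32 on it
/interface bridge add name=loopback
/ip address add address=10.255.0.1/32 interface=loopback
```

Keeping loopbacks in their own range means you can advertise them as host routes without their being swallowed by an aggregate, so each router stays reachable for management even when individual links move or fail.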
I have been working with MT for ~5 years, but my background is in software & mathematics, and I have never had a professor or experienced network engineer to pass on best-practices for what might seem like elementary questions. I have gotten away with re-inventing the wheel thus far, but I'm reaching the limits of trial-and-error.
There is no such thing as an industry-wide best practice, though some equipment manufacturers publish best practices for using their own equipment. Designing, operating, and maintaining wide area networks is actually an enormous subject and is very tightly dictated by your choice of equipment.
I’ve been working with MikroTik RouterOS since before version 2.3 (I don’t remember my exact first version number). I came into this line of work as a programmer; my specialty back then was writing compilers, transcoders, and interpreters (mostly for languages I was developing on my own). Compression algorithms were a hobby.
Quick example:
Tower 1 AP: ether1 is the handoff subnet; wlan1 and wlan2 are bridged as the AP bridge, 10.0.0.0/24.
wlan3 is a PtP shot to tower 2, 10.1.0.0/29.
Tower 2 AP: wlan1 is a station on 10.1.0.0/29; ether1, wlan2, wlan3, and wlan4 are bridged as the AP bridge, 10.0.1.0/24.
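The tower 2 side of that layout might look roughly like this (a sketch only; the exact addresses, interface names, router-id, and OSPF use are my assumptions on top of the description above, in ROS v6 syntax):

```
# Tower 2: routed PtP uplink to tower 1, local AP bridge for clients
/ip address add address=10.1.0.2/29 interface=wlan1       # PtP link subnet
/ip address add address=10.0.1.1/24 interface=bridge-ap   # local client subnet

# If using OSPF for the backbone, advertise the link and the local subnet
/routing ospf instance set default router-id=10.255.0.2
/routing ospf network add network=10.1.0.0/29 area=backbone
/routing ospf network add network=10.0.1.0/24 area=backbone
```

With each tower advertising its own /24, tower-to-tower traffic is routed over the /29 link subnets rather than bridged, which avoids the WDS packet storms described earlier in the thread.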
Lately I’ve been adding the next PtP shot as a client of the AP bridge (the PtP station uses the same subnet as the clients) and routing it behind ether1 of that client. Right now I have a few towers feeding from the 5GHz AP of a neighboring tower.

As ether1 is generally attached to my AP bridge, I can easily add a PtP AP to feed the secondary tower, and if I have a problem with the feed I can roll over to the 5GHz AP, or even back-feed from the PtP AP to the secondary 5GHz AP with a little work. Most of the time my customers never know the system has a problem. I’ve had entire PtP shots die and only be down for a few minutes. Half the time it requires a drive, but I can usually get something going and prevent an emergency tower climb.