Do you have the PPPoE server (NAS) at a central router in your NOC, or a small one at each tower? What are the pros and cons?
I think a central place is better, so you have a single point to authenticate users and do traffic logging.
The CONS are:
you have to encapsulate the traffic from each tower to the NOC in the PPPoE tunnel, so QoS is more difficult.
you have to route much bigger packets (EoIP or VPLS) to carry L2 from each tower to the NOC, because PPPoE works at L2
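As a rough RouterOS sketch of that L2 encapsulation (addresses, interface names and the tunnel-id are all hypothetical), the tower side could bridge the AP into an EoIP tunnel towards the NOC:

/interface eoip add name=eoip-noc remote-address=10.255.0.1 tunnel-id=42
/interface bridge add name=br-access
/interface bridge port add bridge=br-access interface=eoip-noc
/interface bridge port add bridge=br-access interface=wlan1

The NOC side mirrors this, bridging its end of the tunnel to the interface where the PPPoE server listens. Keep in mind the GRE overhead eats into the path MTU.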
Currently we have centrally located NAS/PPPoE servers (we have multiple) and it works well. I’m looking at pushing them further out into the network and using OSPF for the core/distribution layer of the network and using PPPoE to handle the access layer. That way it’s just layer 2 from the AP to the customer, and routed layer 3 the rest of the way.
Makes it easy for failover and load balancing across links using a dynamic routing protocol like OSPF vs trying to bond or use other layer 2 protocols to accomplish the same thing.
I agree. I run the same architecture now. I want to move to fully routed layer 3. I’m looking for a way (if one exists) to distribute the public IPs from the central POP. If you use OSPF, I think you have to segment the IP block and route a single segment to each AP/tower (wasting some IPs).
While both methods are certainly possible, we typically use the following rules of thumb when providing our consulting as part of our core network platforms (http://www.neology.co.za).
Relevant to Both
Have a decent monitoring platform and make sure it polls your PPPoE platforms to ensure they’re running
Move your AAA to the core - this allows for effective billing and IP block allocations
Make sure you plan your subnetting/IP allocations and use an intelligent RADIUS (AAA) setup to do this - allocate IPs from the correct pools based on customer sites etc
Think about inter-customer traffic - do you want to keep ‘edge’ customers away from your precious backhauls - do you plan to offer end-to-end VPNs for customers (and should this traverse your core)?
Dynamic Routing protocols for the win - but try not to have too much flapping going on (more on this later)
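As an illustration of the per-site pool idea (pool names and ranges here are invented), RouterOS can hold a pool per tower and let RADIUS select it with the Framed-Pool reply attribute:

/ip pool add name=pool-tower1 ranges=172.16.0.10-172.16.0.250
/ip pool add name=pool-tower2 ranges=172.16.1.10-172.16.1.250

A RADIUS reply carrying Framed-Pool=pool-tower1 then hands the session an address from that tower’s block, which keeps allocations aggregatable per site.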
Centralised
Appropriate during initial stages with a few sites or in situations where remote sites are possibly prone to vandalism or theft
If limited field staff are available, centralising makes it easier to fix/maintain
If the backhaul is not your own you may have limited VLAN/MTU/privacy and want to encapsulate all traffic
Inbound traffic that you wish to block (e.g. torrents) gets stopped at the core - instead of being pushed all the way down to the edge sites
Two customers on the same tower would have to backhaul all the way to the core to communicate - not ideal if that happens often
Single point of failure - but you can build decent resilience/failover if you have decent monitoring and AAA
MultiSite/Distributed
Each individual site is self-contained meaning customer sessions terminate close to the customer
Bandwidth management, QoS tagging etc can be done at the edge
Inter customer traffic does not have to go to the core - depends on your routing architecture
If all backhauls fail, the customer may be left with a connected session but no internet - script a redirect for customers on failures
Multiple internet breakouts can be effectively used
A customer that sees multiple towers can fail over in case of a failure on one highsite
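For the “script and redirect customers on failures” point above, a minimal RouterOS sketch (the monitor host and redirect rule are purely illustrative) could use netwatch to toggle a pre-built redirect when upstream dies:

/ip firewall nat add chain=dstnat protocol=tcp dst-port=80 action=redirect to-ports=8080 comment=walled-garden disabled=yes
/tool netwatch add host=8.8.8.8 down-script="/ip firewall nat enable [find comment=walled-garden]" up-script="/ip firewall nat disable [find comment=walled-garden]"

The redirect would land customers on a local status page explaining the outage instead of a dead connection.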
Routing and architecture comments
Would recommend OSPF only for the connected backhaul links
iBGP between sites internally
Configure subnets/aggregates based on site locations to keep the entries required to a minimum
Consider MPLS if you want to do backhaul hiding and some nice engineering
Keep in mind that any tunnels you build add overhead and can adversely affect performance
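As a sketch of that OSPF/iBGP split on RouterOS v6 (the ASN, addresses and interface names are hypothetical):

# OSPF only on the connected backhaul links
/routing ospf network add network=10.10.10.0/30 area=backbone
# iBGP between site loopbacks carries the customer aggregates
/routing bgp instance set default as=65001 router-id=10.255.0.1
/routing bgp peer add name=site2 remote-address=10.255.0.2 remote-as=65001 update-source=lo0

OSPF stays small and fast-converging for link failures, while iBGP carries the bulk of the prefixes without flapping the IGP.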
Hope this provides some food for thought. Maybe a last link of relevance from one of my business partners regarding the value of PPP (TheRodent) - Open Access Networks, or “PPP” is not dead
can you point me to some documentation about how to manage /32 routes in RouterOS ?
I’m using this via an EoIP PtP link between tower and NAS, but I have a lot of overhead in latency and fragmentation (latency is about 2 ms over the wireless link but 60 ms inside the EoIP tunnel over that same link). I have to give VPLS a try; maybe it is better than EoIP.
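If you do try VPLS, a minimal RouterOS v6 sketch (the LSR IDs, interface name and vpls-id are hypothetical) is LDP on the routed wireless link plus a pseudowire between loopbacks:

/mpls ldp set enabled=yes lsr-id=10.255.0.1 transport-address=10.255.0.1
/mpls ldp interface add interface=wlan-backhaul
/interface vpls add name=vpls-noc remote-peer=10.255.0.2 vpls-id=10:1

VPLS framing overhead is typically smaller than EoIP’s GRE wrap, which may help with the fragmentation you’re seeing.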
It is a very interesting approach. Is the network running now, or is it a closed project?
Have a look at my recent post; you can see an architecture that provides VPLS from each tower to the core (NOC). You can have two VPLS tunnels from each tower to two centralized PPPoE servers.
Thanks to OSPF, the public IP addresses assigned to PPPoE clients are routed, because there is another OSPF announcement between the routerboard running the PPPoE server and the routerboard (core router) that is linked to the upstream internet provider via BGP transit.
So the public IPs are always reachable, and if one PPPoE server fails, all its PPPoE clients automatically connect to the other PPPoE server, and OSPF dynamic routing keeps them reachable!
Resurrecting this old thread… I’m looking to carry my PPPoE /32s in iBGP. Has anyone tried this? How do you summarize the /32s? I suppose redistribute connected routes into BGP and filter out what you don’t want BGP to advertise. But then how do you supernet the many /32s? Any help very much appreciated.
I’d not use redistribution. If you planned the allocations per tower/site well, you could do something like this:
1.- Create a blackhole to the subnet allocated to the site (say it’s 172.16.0.0/24)
ip route add type=blackhole dst-address=172.16.0.0/24
or
(optional, but suggested) tag your 172.16.0.0/24 prefix with a community representing where it is, so you can filter or do other neat things with it in the future
ip route add type=blackhole dst-address=172.16.0.0/24 bgp-communities=XXX:XXXXX
2.- Announce the /24 into iBGP yourself (a network statement, not redistribution), so the rest of the network learns the aggregate
3.- Make your customers get a /32 from this subnet and don’t redistribute anything
This way your network only sees a /24, and the site itself is the only one seeing /32s. When traffic reaches the router on the site, if there’s no /32 active (meaning no customer active using that IP) the router will just blackhole the packet (discard it)
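For completeness, announcing that blackholed /24 into iBGP without any redistribution can be a plain network statement (RouterOS v6 syntax, same example prefix as above):

/routing bgp network add network=172.16.0.0/24 synchronize=no

Because the blackhole route keeps the prefix in the routing table, the aggregate stays announced even when no customer /32 is up.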
Hello, from my experience the best way of doing this is to run a PPPoE server on each tower, but it really depends on the number of subscribers: if you’re queueing 2000 subscribers on one router, it’ll see high CPU peaks during mass events (for example, some providers used to increase bandwidth at midnight). On the other hand, it’s easier to administer one core PPPoE server than several PPPoE servers.