Multiple QinQ trunks arrive at the CCR over a single SFP+ port (no bonding, just one port)
Each outer QinQ VLAN carries multiple inner VLANs, for example:
vlan300
    vlan1000
    vlan1001
    ..
vlan301
    vlan1000
    vlan1001
    ..
vlan302
    vlan1000
    vlan1001
    ..
vlan
    vlan1000
    vlan1001
    ..
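For reference, this layout maps to stacked VLAN interfaces in RouterOS, roughly like this (a minimal sketch; the SFP+ interface name is an assumption):

```
# outer (S-tag) QinQ VLAN on the physical SFP+ port
/interface vlan add name=qinq300 vlan-id=300 interface=sfp-sfpplus1
# inner (C-tag) VLANs stacked on top of the outer VLAN interface
/interface vlan add name=qinq300-v1000 vlan-id=1000 interface=qinq300
/interface vlan add name=qinq300-v1001 vlan-id=1001 interface=qinq300
# repeat for vlan301, vlan302, ...
```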
Each inner VLAN has to serve IPs to subscribers using PPPoE
OSPF is running on those dynamic interfaces (passive)
I'm wondering which would be the better and more stable option:
One PPPoE server per VLAN; with the example above that would be 8 PPPoE servers, but in my actual scenario there are more (15+)
Create a bridge, add every inner VLAN as a bridge port with the same horizon value, and then configure one PPPoE server on top of the bridge
Any experiences running many (15+) PPPoE servers on the same chassis?
I think using a bridge would mean some performance loss, but it would simplify the config.
Any advice or similar experiences would be useful!
Thanks!
Since PPPoE hits the CPU anyway, you won't see any noticeable performance drop from bridging all the interfaces.
And PPPoE stability has never been an issue in my experience with MikroTik; it's the one thing that's worked perfectly every single time for me. I much prefer MikroTik's PPPoE client, as it reconnects very quickly after a drop, unlike most other vendors (typically 30-120 seconds).
It’s more a management perspective you need to look at
Benefits of individual servers
You see the PPPoE sessions listed under each interface, so it’s easier to see “Oh that customer is connected to this link”
You can name each PPPoE server instance, so from the client's router you can check the AC/service name to again determine "they are on this link"
Easier to see when something is not working, because one or two interfaces will have no PPPoE sessions on them. That's easier to spot than remembering "oh, we're supposed to have 723 sessions and only 680 are active now" vs. noticing "hang on, why are there no sessions on this interface at all?"
If for some reason you need to, you can turn off PPPoE server on just that 1 interface/VLAN
Can use different PPPoE profiles, e.g. connections added to different interface/address lists, different DNS servers, etc.
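As a sketch, the individual-server approach looks like this per inner VLAN (interface, service, and profile names are assumptions):

```
# one PPPoE server instance per inner VLAN, each with its own service name
/interface pppoe-server server add service-name=svc-300-1000 \
    interface=qinq300-v1000 default-profile=ppp-default \
    one-session-per-host=yes disabled=no
/interface pppoe-server server add service-name=svc-300-1001 \
    interface=qinq300-v1001 default-profile=ppp-default \
    one-session-per-host=yes disabled=no
```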
Benefits of bridged setup
Only 1 PPPoE server instance that you never need to touch again; just add new VLANs/interfaces and put them in the bridge, done
Easier and less work to scale when you are using lots of interfaces (e.g. VPLS): just put them in the bridge with a horizon value
I typically prefer the bridged setup because all our PPPoE sessions are standardized: they all get the same DNS server details, all get added into an interface list, and universal firewall rules apply to that list. It's far easier for us to just add a VLAN on an interface and slap it in the bridge. Less room for user error than creating a new PPPoE server instance and forgetting to, e.g., pick the correct PPP profile
We use the bridged setup with horizon set. Our PPPoE concentrator has a few special PPPoE servers that need to go to separate RADIUS servers vs. our regular RADIUS servers. As a result, each of those PPPoE server instances needs a matching RADIUS server entry whose Called-ID matches the PPPoE server name. We combine all of our customers that use the regular RADIUS servers into one bridge. If they were split, we would have to duplicate our default RADIUS server entry 30 times with different Called-IDs matching the individual PPPoE servers, since they cannot all share the same name.
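For comparison, the bridged variant is roughly this (names are assumptions; the key detail is the identical horizon value on every subscriber port):

```
/interface bridge add name=br-pppoe
# every inner VLAN becomes a bridge port with the same horizon value,
# so customers cannot reach each other at L2
/interface bridge port add bridge=br-pppoe interface=qinq300-v1000 horizon=1
/interface bridge port add bridge=br-pppoe interface=qinq300-v1001 horizon=1
# single PPPoE server on top of the bridge
/interface pppoe-server server add service-name=isp interface=br-pppoe \
    default-profile=ppp-default one-session-per-host=yes disabled=no
```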
Hi guys, sorry to bump an old topic, but this has somehow confused me now.
I have the following scenario at the moment, and I have some doubts regarding CPU usage and the PPPoE server.
We have a deployed scenario with a CCR1036 (2xSFP+ model) and a 16-PON OLT.
The CCR's SFP+1 receives the internet link from our BGP CCR, and SFP+2 is a 10 Gbps uplink connected to the OLT's 10G port. So far so good.
We have one PPPoE server with one VLAN; this VLAN is the same for all 16 PONs on the OLT, and we currently have around 700 PPPoE sessions.
We use the same DNS servers for all devices, and our RADIUS server delivers only 4 or 5 different profiles for internet plans.
Would it be optimal to keep it as it is? Or would it be better, CPU-wise on the CCR, to use a different VLAN per PON, with a separate uplink per PON from the OLT connected directly to its own CCR Ethernet interface, each with its own unique VLAN and PPPoE server?
I have read that on the CCR block diagram each interface is directly connected to the CPU, right? I'm just not sure whether each interface is tied to a specific set of Tilera CPU cores, or whether a single interface has access to all of the Tilera cores on the MikroTik.
Any tips on which scenario would be ideal? Split each PON on the OLT into its own unique VLAN, each with its own uplink connected directly to a dedicated Ethernet interface on the MikroTik, or just use one 10G uplink connected directly to the MikroTik's 10G SFP+ interface with one VLAN and one PPPoE server?
BTW, just to clarify: the OLT has two 10G SFP+ uplink ports plus 8 Ethernet ports, which is why I thought I could split the OLT so that each of the 8 ports works as an uplink carrying a specific VLAN (paired with a specific PON VLAN) and connects directly to a CCR Ethernet port with the same VLAN.
I'm just curious which option would be optimal for low CPU consumption on the CCR and best throughput.
You're overthinking it. There will be zero difference in CPU usage. PPPoE sessions terminate on the router and must hit its CPU. And no, they are not tied to individual cores or anything like that.
Just throw everything into a bridge with a horizon value (the same on all ports) to avoid traffic flowing between customers. They can still 'route' to each other, but you don't want L2 traffic bleeding between them, which is the reason for setting horizon.
Thanks for your input. So if I understood correctly, it's wiser to leave just one SFP+ port connected directly to the OLT's SFP+ uplink port, right? With all PON ports and the 10G uplink port bridged, and no VLANs set at all?
I'm confused about the horizon value, though. Is that set in the SFP+ port's interface configuration, or do you mean creating a bridge interface?
It's not the multiple PPPoE servers you have to worry about.
If all that QinQ VLAN filtering is done in software, that can limit your performance.
I think the best PPPoE performance is achieved with multiple PPPoE servers, each serving its respective VLAN, with a simple configuration (no QinQ, no software VLAN filtering) running in FastPath mode.
Every feature or config you add carries a cumulative performance penalty.
I understand that in some scenarios running in FastPath mode is not feasible; the point is that your performance expectations should be consistent with that.
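As a quick sanity check, you can see whether FastPath is allowed globally and whether traffic is actually taking it by looking at the per-interface counters:

```
# global FastPath setting (allow-fast-path) and active state
/ip settings print
# per-interface counters include fp-rx-packet / fp-tx-packet when FastPath is used
/interface print stats
```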
We evaluated such a setup for quite a similar use case.
Not willing to risk any bridges dragging down the CPU, we went for a CCR1072 and later (for smaller setups) a CCR2004.
The older CCR takes forever to load such a config (a reboot takes up to 20 minutes), but it is running 7k+ VLAN interfaces and bridging about 6k of them into several software bridges (different use cases). Each bridge holds two further ports that connect to the BRAS (PPPoE servers). We basically use the CCRs as big bridges with lots of ports attached and then move the PPPoE traffic to separate servers (as we need some features MT/ROSv6/v7 does not offer).
But for testing purposes we have already attached a PPPoE server to some of those bridges, and it also works quite well as a fully integrated PPPoE-server-monster-bridge.
You might also want to look at bridge filters and/or horizon, or even enable RSTP, to prevent weird loops and/or broadcast issues.
FYI, the CCR2004 reboots in seconds (ROSv7). We have yet to try moving the older platforms to ROSv7, but taking down several thousand users for 20 minutes is not an easy decision.
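Enabling RSTP on such a bridge is a one-liner (the bridge name here is an assumption):

```
# switch the bridge's loop-prevention protocol to RSTP
/interface bridge set [find name="br-vlans"] protocol-mode=rstp
```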
I'd seriously doubt this argument: PPPoE is usually a user-space process, while bridging is done in kernel space.
If PPPoE is a kernel module in ROS, then I stand corrected, but I would be surprised.