Tue Nov 05, 2013 8:12 pm
Our internal route reflectors (over-powered x86 boxes) maintain ~350 BGP sessions each without breaking a sweat (0-2% typical CPU utilization). These sessions are not very busy or heavy, as they do not carry Internet routes or customer routes, only internal management subnets and L2VPNs.
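For anyone wanting to replicate that kind of setup, a minimal sketch of an iBGP route reflector on RouterOS v6 looks something like this (AS number, addresses, and names are examples, not our actual config):

```routeros
# hypothetical sketch: iBGP route reflector, RouterOS v6 syntax
# enable reflection between clients on the BGP instance
/routing bgp instance
set default as=65000 client-to-client-reflection=yes router-id=10.0.0.1

# each internal client gets a peer entry marked route-reflect=yes;
# sessions sourced from a loopback so they survive link flaps
/routing bgp peer
add name=rr-client-1 remote-address=10.0.0.11 remote-as=65000 \
    route-reflect=yes update-source=loopback
add name=rr-client-2 remote-address=10.0.0.12 remote-as=65000 \
    route-reflect=yes update-source=loopback
```

With sessions that only carry a few internal prefixes each, per-peer overhead stays tiny, which is why a few hundred of them barely register on CPU.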
Our PPPoE concentrators (RB1100AHs) each maintain ~200 fairly busy VPLS tunnels (along with ~1000 PPPoE sessions carried by the VPLS tunnels), and sit at ~60-80% CPU (the vast majority of that consumed by queuing). The only 'problem' we have seen with that many VPLS tunnels is inaccurate information in the /bridge host table; traffic gets forwarded correctly, but MAC addresses are displayed as being on the wrong interface.
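The basic building block on the concentrator side is a VPLS tunnel bridged to a PPPoE server interface; a rough sketch (peer addresses, VPLS IDs, and names are illustrative only):

```routeros
# hypothetical sketch: VPLS tunnel feeding a PPPoE server, RouterOS v6 syntax
# one VPLS tunnel per remote site, terminated on the concentrator
/interface vpls
add name=vpls-site1 remote-peer=10.0.0.21 vpls-id=100:1 disabled=no

# bridge the VPLS interface so the PPPoE server can listen on it
/interface bridge
add name=br-pppoe
/interface bridge port
add bridge=br-pppoe interface=vpls-site1

# PPPoE server bound to the bridge; subscriber sessions ride the VPLS
/interface pppoe-server server
add interface=br-pppoe service-name=internet disabled=no
```

The /bridge host quirk mentioned above shows up in `/interface bridge host print` on that bridge: entries can list the wrong VPLS port even though forwarding itself is correct.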
I don't see any particular reason why those couldn't scale further, as long as one is careful with the design. 200 busy/heavy BGP sessions would probably kill just about anything, and would probably be handled better with a multi-tier reflector system. To push 500 PPPoE-bearing VPLS tunnels to a concentrator, it would probably be best to separate the functions (VPLS tunnels to a dedicated L2VPN PE router, that hands them off to a dedicated concentrator, with a dedicated shaping/queuing box behind that).
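As a sketch of the multi-tier reflector idea: second-tier reflectors peer upward as ordinary clients and reflect downward to their own cluster, so no single box holds all the sessions. Cluster ID and addresses below are made up for illustration:

```routeros
# hypothetical sketch: second-tier route reflector, RouterOS v6 syntax
# distinct cluster-id per tier-2 reflector pair avoids reflection loops
/routing bgp instance
set default as=65000 client-to-client-reflection=yes cluster-id=10.0.1.1

# upstream session to the top-tier reflector (we are its client)
/routing bgp peer
add name=tier1-rr remote-address=10.0.0.1 remote-as=65000 \
    update-source=loopback

# downstream clients in our own cluster
add name=local-client-1 remote-address=10.0.1.11 remote-as=65000 \
    route-reflect=yes update-source=loopback
```

Each tier-2 box then only carries its own cluster's churn plus one upstream session, which is what keeps "busy" session counts per box manageable.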
--Eric