+1
+1
- 1000000000000000000000000000000000000
+1 for implementing MLPPP server
Dear Mikrotik staff…
+1 for implementing MLPPP server
+1 UP
Mikrotik Staff, please check this:
https://bsdrp.net/documentation/examples/aggregating_multiple_isp_links
Trust me, there's no MLPPP support on IOS-XR and it's not even planned. Indeed, it would be nice to have it.
It varies. Per-packet load sharing will restrict your RTT to the worst value among your available paths/links, which severely limits TCP performance in an unpredictable way. Per-destination/flow sharing, on the other hand, will limit the throughput of a flow to the capacity of a single member link.
Per-packet load sharing can (and in many cases will) result in out-of-sequence (OoS) packet delivery, which again ruins throughput and can break several fragile protocols.
IMO, unless you're severely limited in link throughput you shouldn't really use per-packet load sharing, especially because low-speed links tend to have high jitter (in absolute values), which is exactly what causes the OoS behaviour.
On a high-speed link, though, the per-flow throughput limitation isn't important anymore.
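The out-of-sequence effect described above is easy to demonstrate with a toy simulation: round-robin two links with different one-way delays and look at the arrival order. The latency and spacing values here are made-up assumptions, purely for illustration:

```python
# Sketch: per-packet load sharing across two links with unequal latency
# reorders packets at the receiver.
def arrival_order(num_packets, latencies_ms):
    """Return packet sequence numbers in the order they arrive."""
    arrivals = []
    for seq in range(num_packets):
        link = seq % len(latencies_ms)   # round-robin per-packet sharing
        send_time = seq * 1.0            # assume 1 ms between sends
        arrivals.append((send_time + latencies_ms[link], seq))
    arrivals.sort()                      # receiver sees packets by arrival time
    return [seq for _, seq in arrivals]

# Two links with 10 ms and 50 ms one-way delay (hypothetical values):
print(arrival_order(8, [10, 50]))  # → [0, 2, 4, 6, 1, 3, 5, 7]
```

Every packet sent over the slow link lands after the whole burst on the fast link, so TCP sees massive reordering; with equal latencies the order stays intact.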
Hi Mikrotik Staff, I implemented an MLPPP server in the lab on a Linux machine with the MPD service, and it seems to work fine.
You only need: set link enable multilink
Below is the mpd.conf:
startup:

poes:
    set ippool add p0 10.11.12.0 10.11.12.19
    create bundle template poes_b
    set bundle enable compression
    set ccp yes mppc
    set mppc yes e40
    set mppc yes e128
    set mppc yes stateless
    set iface group pppoe
    set iface up-script /usr/local/sbin/vpn-linkup-poes
    set iface down-script /usr/local/sbin/vpn-linkdown-poes
    set iface idle 0
    set iface disable on-demand
    set iface disable proxy-arp
    set iface enable tcpmssfix
    set iface mtu 1500
    set ipcp no vjcomp
    set ipcp ranges 10.10.10.1/32 ippool p0
    set ipcp dns 8.8.8.8
    create link template poes_l pppoe
    set link action bundle poes_b
    set auth max-logins 5
    set pppoe iface em1
    set link enable multilink
    set link no pap chap
    set link enable pap
    set link keep-alive 60 180
    set link max-redial -1
    set link mru 1492
    set link latency 1
    set link enable incoming
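On the RouterOS side, multilink negotiation on a PPP-type client is tied to the MRRU setting (setting mrru enables MP negotiation on the link). A hedged sketch of a matching PPPoE client, where the interface name, credentials and mrru value are assumptions and should be checked against the RouterOS manual:

/interface pppoe-client add name=pppoe-out1 interface=ether1 user=test password=test mrru=1600 disabled=no

With mrru left at the default (disabled), the client will not negotiate multilink at all, which is why the server-side "set link enable multilink" alone is not enough.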
+1 for MLPPP
+1 for MLPPP server