RouterOS 3.10 MLPPP (PPPoE-Client) with CISCO 6400

Hi,

we test a RouterOS PPPoE Client on a RB/333:

[admin@MikroTik] > interface pppoe-client print
Flags: X - disabled, R - running
0 R name="pppoe-out1" max-mtu=1480 max-mru=1480 mrru=disabled
interface=ether2,ether3 user="username" password="password"
profile=mlpppoe service-name="" ac-name="" add-default-route=yes
dial-on-demand=no use-peer-dns=yes allow=pap,chap



[admin@MikroTik] > ppp profile print
Flags: * - default
1 name="mlpppoe" use-compression=no use-vj-compression=no use-encryption=no
only-one=no change-tcp-mss=yes

What we see is that the PPPoE client only connects once, across ether3. We didn't see any second login request on the RADIUS server or on the Cisco, and a Torch on ether2 didn't show any PPPoE discovery packets either.

Question:
Does anybody have a working Cisco configuration for running MLPPP with MikroTik RouterOS? We have it running successfully with Cisco to Cisco and also with Cisco to LANCOM.
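For comparison, MLPPP-capable PPPoE termination on IOS usually hangs off a virtual template. A minimal sketch only - the interface names, template number and authentication methods here are assumptions, not a verified 6400 configuration:

```
! Hypothetical sketch -- names and numbers are placeholders,
! not a tested Cisco 6400 config.
interface Virtual-Template1
 ip unnumbered Loopback0
 ppp authentication pap chap
 ! enable Multilink PPP on sessions cloned from this template
 ppp multilink
```

Whether the client's second link joins the bundle then depends on the LCP endpoint discriminator and MRRU negotiation on both sides.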


Regards
Lutz

Got both PPPoE-Clients up and running now.

But:

we have two real DSL lines for testing, both with only 1 Mbit/s downstream. If I run a bandwidth test, it toggles between 700 kbit/s and 1.5 Mbit/s.

Will still do some more tests.

Regards
Lutz

The next test with 2 * 3 Mbit ADSL uplinks works nearly fine. At the moment we get 5 Mbit/s throughput; maybe with some more tuning we will get more.

Regards
Lutz

Super!! Thanks for the data. I’m just waiting to get my pair of DSL lines installed to give this a try.

George

But keep in mind that all DSL uplinks in an MLPPP bundle must have exactly the same link speed, otherwise you will run into problems. The first tests were made with a 768 kbit and a 1 Mbit downstream DSL line, and that didn't work well.

Regards
Lutz

That’s interesting. I would have expected n times the slowest link…

George

No, then you run into problems with latency and toggling link speed under heavy load on the MLPPP bundle, because MLPPP tries to fill all links to the maximum possible bandwidth. But if the link speeds differ, you get problems with packet reordering and fragmentation, so the throughput toggles from low to high and back to low.


Regards
Lutz

I think I know a solution for you IF it is a standard MLPPP implementation (I don't have a place to try it, but still): just set a lower MTU on the Ethernet interface with the slower link!

Let's say you have 750k:1M - the MTU on the interfaces should be 1125:1500 bytes.

Not sure, but I think you also need to specify the MRRU.
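On RouterOS that suggestion would look roughly like this. A sketch only - the interface names, the 750k/1M split and the MRRU value are assumptions (and per the config posted above, mrru is currently disabled):

```
# Hypothetical sketch -- interface names and values are assumptions.
# Scale the MTU of the slower link: 1500 * 750k/1M = 1125 bytes.
/interface ethernet set ether2 mtu=1125
/interface ethernet set ether3 mtu=1500
# MLPPP is only negotiated when an MRRU is set on the client:
/interface pppoe-client set pppoe-out1 mrru=1600
```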

That doesn't help in an MLPPP environment.

Regards
Lutz

Did you actually try it? What was the behavior? Because MLPPP divides packets into fragments one way or another, the MTU can affect the size of the fragments going through.