nabuk
newbie
Topic Author
Posts: 47
Joined: Sun Sep 05, 2004 1:45 pm
Location: Italy

STEN, what about final pppoe configuration ?

Sat Jan 27, 2007 12:30 pm

Hi,
definitively, what is the right PPPoE configuration for a MikroTik with a bridge made of an ethernet and an EoIP interface, where the customers do PPPoE?

At the moment I understand that MikroTik has some bugs in the MTU/MRU/MSS settings: some customers can browse without problems, but others have problems due to MSS.

In some posts Sten said that everything works fine if we set MTU and MRU to 1492, set "change tcp mss=no" in the PPP profile, and add some firewall rules.
Is that right? What are the correct firewall rules?
The server is a P4 computer with 4 ethernet interfaces: one connects it to the internet, the other 3 are direct PPPoE or EoIP terminations.
The PPPoE server is on a bridge interface.


Regards
 
sten
Forum Veteran
Forum Veteran
Posts: 922
Joined: Tue Jun 01, 2004 12:10 pm

Sat Jan 27, 2007 10:43 pm

If you can enforce that all clients have an MTU/MRU of 1492, you can use these rules:
/ ip firewall mangle {
  add chain=forward action=jump jump-target=mss tcp-flags=syn disabled=no comment=\[tcp\],\_mss
  add chain=mss action=change-mss protocol=tcp tcp-flags=syn new-mss=1452 tcp-mss=1453-65535 disabled=no comment=\[tcp\],\_mss\_1452\_for\_mtu\_1492
  add chain=mss action=change-mss protocol=tcp tcp-flags=syn new-mss=536 tcp-mss=!536-1460 disabled=yes comment=\[tcp\],\_mss\_fixation
  add chain=mss action=change-mss protocol=tcp tcp-flags=syn new-mss=clamp-to-pmtu disabled=yes comment=\[tcp\],\_mss\_clamp\-to\-pmtu
}
The first rule in the mss chain does the right thing: it reduces the MSS to the correct level whenever the MSS is bigger than desired.
The second mss rule fixes MSS values outside the desired range at 536; you might want to keep it disabled. It helps people overcome stupid MSS hacks (which certain P2P users use to lessen their upload performance) but may come at a price: it can break some ancient AS/400 IP stacks.
The last rule also does the right thing, but can only be performed one way.

Basically, just paste this and make sure the jump rule in "forward" always stays at the top.

But if you use EoIP to terminate these users, you might want to choose a smaller MTU/MRU to avoid the fragmentation that the EoIP tunnel overhead will cause. You must then adjust the MSS values accordingly.
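The MSS figures in the rules above follow from simple header arithmetic: the MSS that fits a link is its MTU minus the IP and TCP headers. A minimal sketch (assuming no IP or TCP options, i.e. 20-byte headers each):

```python
# MSS that fits a given link MTU without fragmentation,
# assuming 20-byte IP and 20-byte TCP headers (no options).
IP_HEADER = 20
TCP_HEADER = 20

def mss_for_mtu(mtu):
    """Largest TCP segment payload that fits in one packet of size `mtu`."""
    return mtu - IP_HEADER - TCP_HEADER

print(mss_for_mtu(1492))  # PPPoE link with MTU 1492 -> 1452, matching new-mss=1452
print(mss_for_mtu(1500))  # plain ethernet -> 1460, the upper bound in the rules
```

This is why the first mss rule clamps anything in 1453-65535 down to 1452: those values cannot fit a 1492-byte PPPoE link in one packet.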
Move along. Nothing to see here.
 
nabuk
newbie
Topic Author
Posts: 47
Joined: Sun Sep 05, 2004 1:45 pm
Location: Italy

Thu Feb 01, 2007 9:18 am

Ok,
I'll try it. About PPPoE over an EoIP tunnel: 1480 or a lower MTU?
 
sten
Forum Veteran
Forum Veteran
Posts: 922
Joined: Tue Jun 01, 2004 12:10 pm

Thu Feb 01, 2007 10:07 pm

It's plain math, but I don't have a way to verify all the numbers at the moment.

The maximum size of an IP packet on plain old ethernet is 1500 bytes.
The overhead of IP inside IP (EoIP) is 20 bytes.
The overhead of EoIP (GRE) is, I'm not sure, but let's say 10 bytes.
The overhead of Ethernet inside EoIP is usually 14 bytes.
The overhead of PPPoE inside Ethernet is 6 bytes.

So the maximum size of the IP packet inside PPPoE packet can be calculated by doing:

1500
- 20 ip inside ip overhead
- 10 eoip overhead <-- needs adjustment
- 14 ethernet header of pppoe packet
- 6 pppoe header
-------
1450 optimal size of pppoe MTU/MRU
====


So a safe value is somewhere around 1450. And if you ever bother to find out what the EoIP overhead really is, you can recalculate accordingly.

For some stupid reason, certain PPP implementations will balk unless the MTU/MRU is divisible by 8. So maybe you want to try 1440?
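The arithmetic above can be written out directly. Note the 10-byte GRE/EoIP figure is the post's own guess and would need adjustment once the real overhead is known:

```python
# Worked version of the overhead subtraction above.
ETHERNET_MTU   = 1500  # max IP packet on plain ethernet
OUTER_IP       = 20    # IP header carrying the EoIP tunnel
EOIP_GRE       = 10    # uncertain, per the post ("needs adjustment")
INNER_ETHERNET = 14    # ethernet header of the tunnelled PPPoE frame
PPPOE          = 6     # PPPoE header

pppoe_mtu = ETHERNET_MTU - OUTER_IP - EOIP_GRE - INNER_ETHERNET - PPPOE
print(pppoe_mtu)  # 1450

# For PPP stacks that want an MTU divisible by 8, round down:
safe_mtu = pppoe_mtu - (pppoe_mtu % 8)
print(safe_mtu)   # 1448 (1440, suggested in the post, is an even rounder choice)
```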

Now remember: a larger MTU will work, but it will lead to fragmentation of the EoIP tunnel, which means more work for the infrastructure, higher latency, and almost double the potential for packet loss.

Now that is a pretty massive overhead but it's up to you to decide whether this would lead to a good design.
