max-mtu ignored in Multilink PPPoE connections, v5.0beta4

Hi, all. I believe this is a legit bug, so I’ll probably send a support request, but I figured I’d post here first just in case someone else runs into the same issue. Sorry this turned out so long… it’s a somewhat complicated issue.

To reproduce, create a PPPoE Client interface that is attached to at least two eth interfaces (it can be reproduced even if only one eth interface is actually up; as long as PPPoE is configured for at least two eth interfaces, it will try to start multilink PPP).

The PPPoE interface’s max-mtu attribute doesn’t seem to behave as I’d expect.

The IP MTU is fixed at 1492 and is not affected by the max-mtu setting. If it has to be fixed to anything when using MLPPP, it should probably be 1487, which would produce 1500-byte PPPoE frames after the additional 5 bytes of MLPPP overhead.
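For concreteness, here's the arithmetic I'm working from (a quick sketch; the header sizes are the ones in the cheat-sheet at the end of this post):

```python
# Header overheads inside a 1500-byte Ethernet payload (the PPPoE frame).
ETH_PAYLOAD = 1500   # max PPPoE frame size (Ethernet MTU)
PPPOE_HDR   = 6      # PPPoE header
PPP_HDR     = 2      # PPP header
MLPPP_HDR   = 4      # PPP Multilink header
INNER_PPP   = 1      # inner PPP header at the start of each MLPPP fragment

# Plain PPPoE: IP MTU = 1500 - 6 - 2 = 1492
plain_ip_mtu = ETH_PAYLOAD - PPPOE_HDR - PPP_HDR
print(plain_ip_mtu)   # 1492

# Multilink PPPoE: the extra 5 bytes (4 + 1) push the IP MTU down to 1487
mlppp_ip_mtu = plain_ip_mtu - MLPPP_HDR - INNER_PPP
print(mlppp_ip_mtu)   # 1487
```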

The auto-generated TCP MSS-adjust mangle rule is also affected. The rule for inbound SYNs, which adjusts the MSS of outbound traffic, seems fixed at 1452 (which corresponds to the fixed 1492 IP MTU).

On the plus side, the rule for outbound SYNs, which adjusts the MSS of inbound traffic, correctly tracks the max-mru setting.
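The MSS numbers follow directly from the MTUs by subtracting the 40 bytes of IPv4 + TCP headers:

```python
def mss_for(ip_mtu):
    """TCP MSS = IP MTU minus 20-byte IPv4 header minus 20-byte TCP header."""
    return ip_mtu - 40

print(mss_for(1492))  # 1452 -- the fixed value I'm seeing for outbound traffic
print(mss_for(1487))  # 1447 -- what it should be when MLPPP is in use
```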

Changing the max-mtu setting has a different effect I wasn't expecting: it appears to limit the maximum (outer) PPP frame size (in other words, the maximum PPPoE payload size) before multilink PPP fragmentation occurs. Ideally, the maximum PPP frame size would be 1494 bytes, but unfortunately it seems to be capped at 1492 bytes.

I’m not sure of the best way to fix this, but I’d start by changing the max-mtu setting so it affects the IP MTU and the TCP MSS-adjust rule on the inbound side (affecting outbound TCP traffic). To be more user-friendly, maybe cap it at 1492 for “normal” PPP, and at 1487 for multilink PPP (when multiple interfaces are added or the MRRU setting is specified)?
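The capping behaviour I have in mind would look roughly like this (a sketch only, not RouterOS internals; the function and parameter names are made up):

```python
def effective_ip_mtu(max_mtu, num_links, mrru_configured):
    """Hypothetical cap applied to the user-supplied max-mtu setting.

    Plain PPPoE allows an IP MTU of up to 1492; multilink PPP (more than
    one underlying interface, or an explicit MRRU) allows up to 1487.
    """
    is_multilink = num_links > 1 or mrru_configured
    cap = 1487 if is_multilink else 1492
    return min(max_mtu, cap)

print(effective_ip_mtu(1500, num_links=2, mrru_configured=False))  # 1487
print(effective_ip_mtu(1480, num_links=1, mrru_configured=False))  # 1480
```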

I think that the max-mtu setting should not affect the maximum PPP frame size at all. I’d either make this a fixed setting (fixed at 1494 bytes, which would create a 1500-byte PPPoE frame), or add it as a new attribute, or possibly make it auto-configured based on the MTU of the underlying interface(s) (seems complicated, especially if the underlying interfaces have different MTUs).

The bottom line of all of this:

Since the IP MTU is hard-coded to 1492, IP packets larger than 1487 bytes get fragmented at the multilink PPP layer (even don’t-fragment packets). This forces the other end of the PPPoE tunnel to reassemble the MLPPP fragments, which is CPU-intensive for most routers (including MikroTiks). Custom MSS-adjust rules help for TCP, but I don’t know of any workaround for other protocols…
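For TCP, the custom rule I'm using looks something like this (the interface name is a placeholder for your PPPoE client interface; 1447 = 1487 − 40):

```
/ip firewall mangle add chain=forward out-interface=pppoe-out1 \
    protocol=tcp tcp-flags=syn tcp-mss=1448-65535 \
    action=change-mss new-mss=1447
```

A matching rule in the inbound direction shouldn't be necessary if max-mru is set correctly, since that side already tracks the setting.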

If anyone has any input, I’d love to hear it!

Here’s a multilink PPPoE frame cheat-sheet:

  Ethernet headers
  /-----------------\
   PPPoE            6 byte header
   /---------------\
    PPP             2 byte header
    /-------------\
     PPP Multilink  4 byte header
     /-----------\
      PPP           1 byte header (present only at beginning of MLPPP fragments)
      /---------\
       IP           1487 bytes max, headers+payload
      \---------/
     \-----------/
    \-------------/
   \---------------/
  \-----------------/
                    1500 bytes max