A GRE tunnel interface has a read-only field showing the "Actual MTU".
How is this field calculated? Is it a static calculation based on the header overhead?
(For example, when IPsec is enabled the value decreases, so this could be true.)
If so, what base MTU does it start from? Is that a static value of 1500, or the actual MTU of
the interface the packets are routed out of? Setting a lower MTU on the port the GRE traffic is routed to
does NOT change the Actual MTU field... is that to be expected?
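To make the "static calculation" hypothesis concrete, here is a minimal sketch of what such a calculation would look like. The header sizes are the standard ones for GRE over IPv4; the base value of 1500 and the function name are my assumptions, not confirmed behavior of the device.

```python
# Hedged sketch: if "Actual MTU" is a static calculation, it would subtract
# the encapsulation overhead from a base MTU (1500 assumed here).
# Header sizes are standard for GRE over IPv4; this is not vendor-confirmed.

OUTER_IPV4 = 20   # outer IPv4 header added by the tunnel
GRE_BASE = 4      # minimal GRE header (no key, no checksum)
GRE_KEY = 4       # extra bytes if a GRE key is configured

def actual_mtu(base_mtu: int = 1500, keyed: bool = False,
               ipsec_overhead: int = 0) -> int:
    """Payload MTU left after GRE (and optional IPsec) encapsulation."""
    overhead = OUTER_IPV4 + GRE_BASE
    if keyed:
        overhead += GRE_KEY
    return base_mtu - overhead - ipsec_overhead

print(actual_mtu())            # 1476 -- the value typically shown for plain GRE
print(actual_mtu(keyed=True))  # 1472
```

If the field were derived from the egress interface's MTU instead, lowering that MTU should have shifted the result, which is exactly what I am not observing.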
Is this field adjusted in any way when ICMP "fragmentation needed" messages are received after transmitting
a too-large datagram?
I am debugging issues that arise when using a GRE tunnel over an internet connection with a 1492-byte MTU
(a PPPoE link without RFC 4638). Packets of the maximum size, equal to the "Actual MTU", are silently dropped.
It may well be that the internet router does not return the correct ICMP message, or that it is dropped
somewhere along the path; I have not been able to debug that yet. Setting a static MTU to a lower value fixes
the issue, but I am curious what the functionality of the Actual MTU field is and whether it should be able to
resolve this issue when everything is working correctly.
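The arithmetic behind the drops can be spelled out, assuming plain GRE over IPv4 with no key or checksum (24 bytes of encapsulation overhead); the constant names below are mine:

```python
# Worked arithmetic for the failure described above.
# Assumption: plain GRE over IPv4, no key/checksum = 24 bytes of overhead.

PATH_MTU = 1492          # PPPoE link without RFC 4638
GRE_OVERHEAD = 20 + 4    # outer IPv4 header + minimal GRE header
ACTUAL_MTU = 1476        # 1500 - 24, as shown on the tunnel interface

outer_size = ACTUAL_MTU + GRE_OVERHEAD
print(outer_size)              # 1500 -- 8 bytes too large for the path
print(outer_size > PATH_MTU)   # True: the encapsulated packet cannot fit

# An inner MTU of PATH_MTU - GRE_OVERHEAD would fit, which matches the
# observation that setting a lower static MTU fixes the issue.
print(PATH_MTU - GRE_OVERHEAD) # 1468
```

So unless the Actual MTU field reacts to path MTU discovery (or is derived from the real egress MTU), a full-size inner packet will always produce a 1500-byte outer packet that exceeds the 1492-byte path.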