You've said the /interface l2tp-client was running, haven't you? If so, /interface l2tp-server print at the server side should show an <l2tp-username> interface as well.
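For reference, a minimal pair could look like the following (the interface name, user name and password are placeholders of my own, and I'm assuming the client at c.c.c.c connects to s.s.s.s as elsewhere in this thread):

# on the server side (s.s.s.s)
/interface l2tp-server server set enabled=yes
# on the client side (c.c.c.c)
/interface l2tp-client add name=l2tp-to-s connect-to=s.s.s.s user=l2tp-user password=l2tp-pass disabled=no

Once both sides are up, /interface l2tp-server print on the server should list the dynamic interface for that user.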
Yes, it was up, but with some invalid (or empty) IP addresses, as I hadn't set the remote-address and local-address in the secret - my bad, I have never used L2TP with MikroTik (yet).
You have to set the remote-address and local-address on the /ppp secret row, but you have to use a different pair of addresses from the c.c.c.c and s.s.s.s used to establish the L2TP tunnel. And there still seems to be some misunderstanding - it is correct that there is only a single L2TP tunnel. The round-robin distribution will be applied to the L2TP transport packets, not to the L2TP payload ones.
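Purely as an illustration, on the server side it could look like this (the 10.255.255.x pair is just a placeholder I made up; any pair not otherwise used in your network will do):

/ppp secret add name=l2tp-user password=l2tp-pass service=l2tp local-address=10.255.255.1 remote-address=10.255.255.2

The local-address ends up on the server's side of the L2TP tunnel and the remote-address is handed out to the client's l2tp-client interface.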
A packet that passes through the output chain is a packet sent by the router itself; such packets never pass through prerouting. And vice versa, packets that come from outside do pass through prerouting but never pass through output.
But once a payload packet gets encapsulated into a transport one, the transport one is a separate packet from the point of view of the firewall.
So the actual payload packet arrives via an Ethernet interface, gets handled by prerouting, filter and postrouting, and "leaves" via L2TP. L2TP encapsulates it into a transport packet, which is sent by the router itself, so it passes through output and postrouting and "leaves" via IPIP. IPIP encapsulates it into its own transport packet, which is also sent by the router itself, passes through output and postrouting, and gets caught by the IPsec policy and encapsulated again.
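If it helps to see the distribution idea in config form, here is a rough sketch for two IPIP transports; the interface names, routing marks and the nth counters are made up and only illustrate the principle (RouterOS v6 syntax, where a route still takes a routing-mark directly):

# L2TP transport packets are generated by the router itself, so they show up in the output chain (UDP/1701 towards s.s.s.s)
/ip firewall mangle add chain=output protocol=udp dst-port=1701 dst-address=s.s.s.s nth=2,1 action=mark-routing new-routing-mark=via-ipip1 passthrough=no
/ip firewall mangle add chain=output protocol=udp dst-port=1701 dst-address=s.s.s.s action=mark-routing new-routing-mark=via-ipip2 passthrough=no
# one route per mark, each pointing into its own IPIP tunnel
/ip route add dst-address=s.s.s.s/32 gateway=ipip1 routing-mark=via-ipip1
/ip route add dst-address=s.s.s.s/32 gateway=ipip2 routing-mark=via-ipip2

The first rule takes every second L2TP transport packet, the second rule takes the rest, so the two IPIP tunnels are loaded alternately.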
Of course you can (and have to) assign routing-mark values to the packets you want to send via the L2TP tunnel (unless you want almost everything to go through there, or just a few destination subnets regardless of the source).
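A minimal sketch of that, again with made-up names (192.168.88.0/24 standing for whatever LAN should use the tunnel, and l2tp-to-s for the L2TP interface name):

# mark the payload traffic that should be sent through the L2TP tunnel
/ip firewall mangle add chain=prerouting src-address=192.168.88.0/24 action=mark-routing new-routing-mark=via-l2tp passthrough=no
# and a route for that mark pointing into the L2TP interface
/ip route add dst-address=0.0.0.0/0 gateway=l2tp-to-s routing-mark=via-l2tp

In a real setup you would normally also exclude local destinations from getting the mark, otherwise traffic between local subnets may end up in the tunnel as well.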
Thank you very much for your patience, I now fully understand this concept.
Now everything is up and running, however there is not much of a performance difference compared to bonding/EoIPs. Transfer speed (measured with speedtest.net) almost seems to decrease as I add more IPIPs, i.e. with a single IPIP it reached ~21 MBps (note: the ISP's limitation only kicks in after a few minutes of constant transfer, so this test was not yet affected), then I added 2 more IPIPs and the transfer dropped to somewhere below 10 MBps (but this may be due to the time of day: I tested in the morning with one IPIP, when it was 5am at S, and I tested with more IPIPs when it was already 8 or 9am at S - who knows).
However, it works as it should: every packet now reaches its destination and I don't have any NATting or routing problems any more. Thank you!
I have only two questions left: is it possible that creating more transports (IPIPs) for L2TP causes lower throughput between the two routers than if L2TP has only a single transport IPIP? Maybe it's due to the mangles? CPU load doesn't even reach 25%, though.
It seems the best result can be achieved by using as few transports as possible (2), however even then they don't seem to provide the latency needed for streaming video.
Normally the stream I wish to watch hits the transport with a big initial burst (keyframe), then the transport is relaxed for a few seconds, then another big burst (the next keyframe) arrives. However, if the transport is not fast enough (both in terms of bandwidth and latency), the video will stall, as it doesn't get the next frame in time.
I have no influence over the stream itself (i.e. I can't change it, although another codec would be much better suited to this tunnel), but I have another Tik at another location which is not behind NAT, so I have an idea (question 2): since, due to this stream's behaviour, the ISP limitation might not be an issue here, what would be the fastest, most responsive connection between two Tiks if they are:
Server (Video stream comes through) -- MT1 -- direct connection to internet -- ISP2 -- MT2 -- Client (wish to watch the stream)
It doesn't need to be secured; I want to use this purely for video passthrough, as fast and responsive as possible.
Thank you!