If the payload connection has to be a single one, I’m afraid there is no way to beat that - Mikrotik only supports MLPPP as a PPPoE client, and no other load distribution method can spread a single connection across multiple links. If you have in mind a total bandwidth shared by multiple connections, it should be doable. But it’s probably just a matter of time until OTE starts throttling connections by IP addresses alone, so you might end up with 15 IP addresses at the UK end just to get each of your 15 tunnels throttled separately.
Thank you, this is so infuriating. Netflix and social media get no throttling but my VPN does. Super stupid question, but isn’t there a way to get the two CCRs to connect without it looking like a VPN to the ISP?
EU law used to forbid throttling, but the rules were relaxed due to the pandemic: “important” services like Facebook are not throttled, while business VPNs are.
As you mention EU law, could it be that OTE only throttles links to UK addresses now that Brexit took place?
If the throttling only affects certain types of traffic, you can try an SSTP VPN (as it uses TCP port 443, same as https, at server side) where the client end will be in Greece. But VPNs using TCP as transport have issues of their own, and SSTP doesn’t seem to be a speed champion. Try and see.
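For reference, a rough sketch of what such an SSTP setup could look like in RouterOS - all addresses, certificate, user and interface names here are made up, adjust them to your own setup:

```
# UK CCR (server side) - listens on TCP 443, same port as https
/interface sstp-server server set enabled=yes port=443 certificate=uk-server-cert
/ppp secret add name=gr-client password=ChangeMe service=sstp \
    local-address=10.99.0.1 remote-address=10.99.0.2

# Greek CCR (client side) - initiates the connection towards the UK
/interface sstp-client add name=sstp-to-uk connect-to=203.0.113.10:443 \
    user=gr-client password=ChangeMe profile=default-encryption disabled=no
```

The server needs a certificate it can present on port 443, hence the `certificate=` parameter; without one the clients will refuse to connect unless certificate verification is disabled.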
… and one more idea: as Google uses QUIC on UDP port 443 rather than TCP, maybe OTE is not brave enough to throttle Google services? I.e. an IKEv2 tunnel with UDP port 443 at the UK side might not be throttled. If the Greek end has a public IP on the CCR itself, you have to force the use of NAT-T for the IPsec, as otherwise it would use bare ESP for the transport packets, which would spoil the trick. It must be IKEv2 because IKEv1 uses two distinct UDP ports at the responder side.
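A purely hypothetical sketch of the port trick (invented addresses; I have not verified that the NAT-T port float survives the redirect, and depending on your RouterOS version the pre-shared key may live under /ip ipsec identity rather than on the peer):

```
# UK CCR (responder) - redirect incoming UDP 443 to the local NAT-T port
/ip firewall nat add chain=dstnat protocol=udp dst-port=443 \
    in-interface=ether1-wan action=redirect to-ports=4500

# Tinos CCR (initiator) - point the IKEv2 peer at UDP port 443
/ip ipsec peer add address=203.0.113.10 port=443 exchange-mode=ike2 \
    secret=ChangeMe
```

Treat this as a starting point for experimentation only; if the initiator insists on starting on port 500, the trick won’t fly without further NAT rules at the Greek side as well.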
An ISP I worked with here had a somewhat similar scheme, but they slowed the connection down depending on how much data the single connection had transferred. The first 10 MB went at full speed, 5 Mbps; after 10 MB the connection slowed to 3 Mbps, after 15 MB to 1.5 Mbps, and eventually to 128 kbps for the remainder of its lifetime. Worked well for speedtest results, did not work well for VPN.
The solution was to switch to a connectionless tunnel - GRE, IPoE, something similar. Each packet was effectively a new connection.
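In RouterOS terms, that stateless approach could look roughly like this (the public and tunnel addresses are placeholders):

```
# Plain GRE - nothing stateful for the ISP to track beyond the IP pair
/interface gre add name=gre-to-uk local-address=198.51.100.5 \
    remote-address=203.0.113.10
/ip address add address=172.16.200.1/30 interface=gre-to-uk
/ip route add dst-address=172.16.105.0/24 gateway=172.16.200.2
```

Note that plain GRE carries the payload unencrypted, so if privacy matters you would still have to run IPsec on top of it.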
When talking about “not throttling google” I had rather in mind matching on UDP port 443 than matching on a list of Google IP addresses - matching on a single port value is much less resource-consuming than matching on an address list. But if the ISP uses per-connection bandwidth management, maybe it doesn’t make a big difference.
Regarding GRE, IPIP etc., it is a lottery - OTE may drop these protocols completely, or it may connection-track them just by IP addresses.
For your (@kevinds) case, where the bandwidth is gradually throttled with the amount of data transported, I could imagine two or three L2TP tunnels with a routing failover and a script that would disable and re-enable them in turns and assign a different source port to each connection, but where the connections are throttled all the time, such an approach wouldn’t help.
@AJSG, to my surprise, unlike with SSTP, it is not possible to configure a remote port other than 1701 for /interface l2tp-client in RouterOS, so my idea of using server-side port 443 for L2TP would be hard to implement - a workaround exists but it can easily cause a brain overheat. Hence the test of treatment of connections to UDP port 443 is much easier with IKEv2.
But coming back to the initial idea of multiple tunnels in parallel - it should actually be possible to use mangle rules to distribute the traffic among all of those tunnels “manually”, but since this will cause packets to arrive missequenced at the receiving side (smaller packets sent just after longer ones will often overtake them), the result may be worse than expected. But still worth trying.
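A rough sketch of such a per-packet distribution over three tunnels using the nth matcher in mangle - the interface names, remote subnet and routing-mark names are invented. Note the 3,1 / 2,1 / catch-all pattern: each rule only sees the packets not caught by the rules above it, so this is what yields even thirds:

```
/ip firewall mangle
add chain=prerouting in-interface=lan dst-address=172.16.105.0/24 nth=3,1 \
    action=mark-routing new-routing-mark=via-l2tp1 passthrough=no
add chain=prerouting in-interface=lan dst-address=172.16.105.0/24 nth=2,1 \
    action=mark-routing new-routing-mark=via-l2tp2 passthrough=no
add chain=prerouting in-interface=lan dst-address=172.16.105.0/24 \
    action=mark-routing new-routing-mark=via-l2tp3 passthrough=no
/ip route
add dst-address=172.16.105.0/24 gateway=l2tp-out1 routing-mark=via-l2tp1
add dst-address=172.16.105.0/24 gateway=l2tp-out2 routing-mark=via-l2tp2
add dst-address=172.16.105.0/24 gateway=l2tp-out3 routing-mark=via-l2tp3
```

Since the tunnels are point-to-point interfaces, they can be used directly as gateways in the marked routes.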
So I’ve tried the last idea, to “manually” distribute the traffic among three separate L2TP tunnels, each limited to 200 kbps at transport side, imitating the OTE behaviour. It works - for the payload, I’ve got 580 kbps throughput for a single test UDP connection and about 560 kbps for a single test TCP connection.
What I didn’t expect was that if you set up multiple /interface l2tp-client to the same server, they use the same transport connection, so you have to take some extra measures to prevent this.
If you want all the minute details of the setup, let me know.
UDP is not affected at all by the ISP - I get the full available bandwidth (34.7 Mbps) with a connection count of 1, while TCP gives me 1.01 Mbps for a single connection. Even with a connection count of 50, today I cannot even get to 33 Mbps. I suspect that the idea of multiple tunnels might be too complex to implement.
I am on the line with OTE at the moment to see if I can at least get a fixed IP out of them. Our current external IP addresses are virtual.
That changes the whole point of view. Both bare L2TP and IPsec in NAT-T mode use UDP, so if UDP sessions are not throttled, the issue is somewhere else, because the IPsec transport packets look the same no matter what payload packets (UDP, TCP, other ones) they carry, so the only difference the ISP can see between them is size.
What could make a difference would be if OTE was restricting VPNs selectively - UDP to ports 1701 and 4500 to cover bare L2TP and NAT-T IPsec.
So when you say that “UDP is not affected at all whilst TCP is throttled to 1 Mbps”, do you run the bandwidth test towards the public IP of the UK machine or inside the L2TP/IPsec tunnel?
TCP throughput is heavily affected by round-trip time, so if the network path between OTE and UK has changed and the delay has grown, this may be the explanation. What’s the RTT indicated by ping from Greece to the UK inside and outside the VPN tunnel?
Since the Greek end is on an island, OTE may have had to use some microwave backup due to an outage of the regular wired/optical line.
I’m testing within the VPN tunnel, local gateway to remote gateway, i.e. 172.16.110.1 to 172.16.105.1 (both sides are CCR1016s), using the bandwidth test tool built into RouterOS. However, this is backed up by external testing. Both inside and outside the VPN I can download from Google or watch Netflix no problem, and I can see Rx flying. When I try to move files or similar within the VPN, Rx collapses.
I tried using the Teamviewer file transfer within the VPN, and I get 6-7 times the speed I see when transferring the same file over the VPN directly.
Tends to be pretty steady around 70 ms (TTL 64) within the VPN to the internal IPs, 90 ms (TTL 64) to 8.8.8.8 (all zero packet loss). Outside the VPN, ping to 8.8.8.8 is 56 ms and TTL is 116.
So that I understood it properly - you’ve got one PC with Teamviewer connected to each of the two CCRs, and any traffic from the LAN in Greece reaches internet via the VPN and thus via the CCR in UK, is that correct?
Because the only way I can imagine how the additional encapsulation could improve the VPN performance would be that the Teamviewer file transfer would use multiple TCP sessions in order to address the RTT issue. It never came to my mind to check that.
70 ms may be quite a lot for TCP.
Another thing is what is the base for comparison? I.e. was it significantly better in the past, or you just noticed the slowness now but you have no hard data from the past?
What’s the ping RTT if you ping the UK public address from Greece outside the tunnel?
The Teamviewer file transfer was just a test to see if (like you write below) it uses multiple connections, which I assume it does. With the VPN connected I can log in to both CCRs via Winbox, as both networks x.x.110.x and x.x.105.x are visible.
Yes, until about the 10th-12th of August I did not have any of these issues. Throttling used to be illegal in Europe, but they changed the laws due to lockdown. Unfortunately they also allowed Netflix et al. to evade these rules, as clearly they are so important to home working (or lack of working!).
My external IP (Virgin in London, x.x.218.71): 75 ms, pretty stable, 0 loss, TTL 49.
It was already this high back in July when the connection was allowing full bandwidth usage; obviously certain days were better than others, but I am always talking about “max available”, not theoretical. So if the London Virgin line’s upload is 30, then the Tinos download would max out at about 28. On a day where the Virgin upload is 10, I would be happy with 8-ish download.
I don’t think the change of behaviour is caused by some action of the ISP intended to slow down VPNs.
The thing is that no matter what you do inside the L2TP/IPsec tunnel, the ISP can see the whole tunnel traffic as a single UDP connection from port 4500 of the Tinos CCR to port 4500 of the London CCR (as your IP at Tinos is not a public one, London probably sees the connection to come from another port than 4500, but that changes nothing). So if they were using per-connection throttling, the bandwidth inside the VPN would be the same no matter whether you were using UDP, single-session TCP or multi-session TCP.
Many people are actually complaining about the throughput of L2TP/IPsec here on the forum, but so far I haven’t seen a clear conclusion about the external conditions that cause it. My assumption is that it is somehow related to packet size and/or arrival order, where the latter may depend on the former. Both IPsec and TCP are sensitive to packet arrival order to some extent.
If you feel like helping me analyse that, could you please sniff into a file at both devices while downloading a file via the tunnel? If yes, I’ll give you a more detailed instruction on that.
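In case it helps, the capture itself is just something like this (file name and tunnel interface name are placeholders for yours):

```
# capture the tunnel traffic to a file while the download runs
/tool sniffer set file-name=tunnel-test.pcap filter-interface=l2tp-to-uk
/tool sniffer start
# ... run the file download ...
/tool sniffer stop
```

The resulting .pcap can then be pulled off the router under /file and opened in Wireshark.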
As a quick shot, try setting max-mtu and max-mru to 1300, first at the L2TP server and then at the L2TP client. Each change will probably make the tunnel drop and re-establish. If this change makes the TCP speed good again on a single connection, the issue is definitely related to the handling of small packets.
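Something like this, assuming invented names for your L2TP interfaces:

```
# UK CCR (server side) first:
/interface l2tp-server server set max-mtu=1300 max-mru=1300
# then the Tinos CCR (client side):
/interface l2tp-client set l2tp-to-uk max-mtu=1300 max-mru=1300
```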
I will do as you suggested over the next few days. I must stress that I did not have any of these issues until the beginning of August; everything was working fine in July. It makes sense that it can only be down to one of three things: the Virgin ISP, the OTE ISP, or the new RouterOS package.
Considering that many commercial services, such as Netflix, Google Docs etc., are unaffected, but direct connections (such as internal file transfers) are, my assumption is that the ISPs are targeting private VPN traffic.