L2TP disconnects after every 8 hours

Hello.
I have a MikroTik 750Gr2, with internet access configured as follows. Two ports are combined in a bridge: one port has a public ("white") IP, and the second interface is set with another public IP and is the uplink through which the internet comes in. Ports 3-5 are in a switch group and face the local network. Users reach the internet through NAT. L2TP is set up on the MikroTik; users connect with the Windows (7 and XP) L2TP client and use LAN resources. But every 8 hours the connection unexpectedly disconnects, and the MikroTik log records:

failed to begin ipsec sa negotiation
<L2tp-test: terminating …- hungup
<L2tp-test: disconnected

Help me please.

[admin@Mikrotik750GR2] /ip ipsec peer> print detail
Flags: X - disabled, D - dynamic
0 address=0.0.0.0/0 local-address=:: passive=no port=500 auth-method=pre-shared-key secret="111111111"
generate-policy=port-override policy-template-group=*FFFFFFFF exchange-mode=main-l2tp send-initial-contact=yes
nat-traversal=yes hash-algorithm=sha1 enc-algorithm=aes-256,aes-192,aes-128,3des dh-group=modp1024 lifetime=1h
dpd-interval=disable-dpd dpd-maximum-failures=5

[admin@Mikrotik750GR2] /ip ipsec proposal> print detail
Flags: X - disabled, * - default
0 * name="default" auth-algorithms=sha1 enc-algorithms=aes-256-cbc,aes-128-cbc,3des lifetime=30m pfs-group=modp1024

Following this thread. I am having the same issue with the 8-hour disconnect.

You can try changing the profile from default-encryption to default and test whether that solves the issue.

Sometimes the encryption gets out of sync, with the result that the tunnel is terminated and then reconnects.

Hi
I am facing a similar issue: my L2TP client gets disconnected every 1 minute 14 seconds. I have tried adjusting the keepalive time and session timeout, but without success. Can you guide me as to what the issue could be?
I get the following log on the L2TP client:
disconnected
initializing
connecting…
terminating… - session closed
disconnected

mukes, in your situation you aren't ever connected: the client just keeps connecting, but it never gets connected.

Hi
I have exactly the same problem… all my L2TP/IPsec sessions get disconnected after exactly 8 hours.

Did anyone manage to find a solution to this?

Same here (disconnect after 8 h)…

I have the same problem. The VPN is disconnected every 8 h, and then it can't be reconnected for circa 1 minute…

Hello, I have exactly the same problem. My IPsec/L2TP connection drops every 8 hours, and it takes up to 50 minutes to recover. I've looked through the logs but was not able to find anything wrong. I've checked on the server side - the timeout there is 23 hours; on the MikroTik I could not find where a timeout can be set up.

What else I could check/look at to fix this?

Same here.
rb1100ahx4 6.42.1
~8h on L2TP/IPSec

All of you: can you tell us what version of ROS you have? We need to see if you are all on the same version; maybe there is a bug in it!

This is not ROS version related! I have had the same issue since I started using MikroTik products. Now I'm on version 6.42.1, but the issue has been there in every version since 6.38, or maybe 6.36 - I don't remember exactly, but for more than 3 years…

Yeah, the version doesn't matter.
It happens to me sometimes with some "routers".
Did you check whether the IP is changing on one side? Maybe the problem (I didn't check it) is the IP being changed by the ISP…

No, the IP does not change, because both sites have static IPs, and the links have not been disconnected. It is MikroTik related, and the disconnect happens exactly 8 h after the connection is made.

Yes, it happens to me also, but for now I haven't cared about the disconnections, because at the moment my links can tolerate them; someday, though, I will need a continuous link.

I have some L2TP over IPsec links. They don't show this behaviour. The weirder part is that it takes 50 minutes to reconnect.

/ppp active print detail
Flags: R - radius 
 0   name="victor" service=l2tp caller-id="---.---.126.90" address=0.0.0.0 uptime=2d10h48m15s encoding="cbc(aes) + hmac(sha256)" session-id=0x81200026 limit-bytes-in=0 limit-bytes-out=0 
 1   name="alfandega2" service=l2tp caller-id="---.---.8.65" address=0.0.0.0 uptime=21h50m45s encoding="cbc(aes) + hmac(sha256)" session-id=0x81200032 limit-bytes-in=0 limit-bytes-out=0 
 2   name="alfandega1" service=l2tp caller-id="---.---.166.68" address=0.0.0.0 uptime=14h49m54s encoding="cbc(aes) + hmac(sha256)" session-id=0x81200039 limit-bytes-in=0 limit-bytes-out=0

It must be some kind of timeout, or scheduled change on the ISP’s network.

On my home router now (the receiving side):

Flags: R - radius
0 name="casavzla" service=l2tp caller-id="186.xx.xx.xx" address=192.168.16.11 uptime=3d14h33m3s encoding="cbc(aes) + hmac(sha256)"
session-id=0x81002F85 limit-bytes-in=0 limit-bytes-out=0

1 name="mayjo" service=l2tp caller-id="95.xx.xx.xx" address=192.168.16.10 uptime=9h31m1s encoding="cbc(aes) + hmac(sha256)" session-id=0x8100301C
limit-bytes-in=0 limit-bytes-out=0

I don’t know how to print the outgoing ppp/pptp…

hgonzale, what are the clients in your case?
The thing is that, as this topic made me curious, I started an L2TP/IPsec connection using the embedded VPN client of Windows 10 and used it so that there would be real traffic through the L2TP session - and it broke down as well. In my case it didn't take exactly 8 hours but something like 7:36 until the Windows client decided to renew IPsec Phase 1; the client then took so long between tearing down the old SA and starting to establish the new one that the MikroTik managed to tear down the L2TP layer on inactivity in the meantime. See the commented tour below.

The DHCP lease time on the laptop side is 10 minutes so it is unlikely that this would be related, as there were tens of DHCP renewals which didn’t break the IPsec. So I’ll try another round during the night, this time with an Android device.

On top of that, there is no ISP involved - the laptop is connected using WiFi to one 'Tik (uptime much longer than between now and the L2TP breakdown), and the L2TP/IPsec connection passes through NATting OpenWRT device and gets to the other 'Tik which is the L2TP/IPsec server.

When the IPsec connection is initially established, the client declares sincerely the Phase 1 lifetime limitation to 8 hours:


11:22:22 ipsec,debug Compared: Local:Peer 
11:22:22 ipsec,debug (lifetime = 86400:28800)

28800 seconds means 8 hours


11:22:22 ipsec,debug (lifebyte = 0:0) 
11:22:22 ipsec,debug enctype = AES-CBC:AES-CBC 
11:22:22 ipsec,debug (encklen = 256:256) 
11:22:22 ipsec,debug hashtype = SHA:SHA 
11:22:22 ipsec,debug authmethod = pre-shared key:pre-shared key 
11:22:22 ipsec,debug dh_group = 2048-bit MODP group:2048-bit MODP group 
11:22:22 ipsec,debug an acceptable proposal found.
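As a sanity check of the lifetime comparison above, the effective Phase 1 lifetime ends up being the shorter of the two proposed values (a minimal sketch; the "responder accepts the shorter lifetime" behaviour is my reading of the IKEv1 proposal check, not something stated in the log):

```python
# Values from the "Compared: Local:Peer" log line above.
local_lifetime = 86400   # RouterOS side, seconds (24 h)
peer_lifetime = 28800    # Windows client's proposal, seconds (8 h)

# The negotiated SA lives for the shorter of the two lifetimes,
# i.e. the client's 8-hour proposal wins here.
negotiated = min(local_lifetime, peer_lifetime)
print(negotiated, negotiated / 3600)  # 28800 seconds = 8.0 hours
```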

After this, the connection establishes and just works, only Phase 2 is renegotiated from time to time without impact.
Nothing indicates a problem just before the breakdown:


18:57:21 ipsec,debug KA: 192.168.10.88[4500]->10.0.0.5[4500] 
18:57:21 ipsec,debug 1 times of 1 bytes message will be sent to 10.0.0.5[4500] 
18:57:21 ipsec,debug,packet ff

KA means KeepAlive and it is an IPsec keepalive here. These are sent three times a minute.


18:57:29 l2tp,debug,packet sent control message to 10.0.0.5:1701 from 192.168.10.88:1701 
18:57:29 l2tp,debug,packet     tunnel-id=5, session-id=0, ns=456, nr=4 
18:57:29 l2tp,debug,packet     (M) Message-Type=HELLO 
18:57:29 l2tp,debug,packet rcvd control message (ack) from 10.0.0.5:1701 to 192.168.10.88:1701 
18:57:29 l2tp,debug,packet     tunnel-id=1263, session-id=0, ns=4, nr=457

This is an L2TP keepalive: the server sends HELLO and the client responds with an ack. These are sent once a minute and are asynchronous to the IPsec KeepAlives.


18:57:41 ipsec,debug KA: 192.168.10.88[4500]->10.0.0.5[4500] 
18:57:41 ipsec,debug 1 times of 1 bytes message will be sent to 10.0.0.5[4500] 
18:57:41 ipsec,debug,packet ff 

18:58:01 ipsec,debug KA: 192.168.10.88[4500]->10.0.0.5[4500] 
18:58:01 ipsec,debug 1 times of 1 bytes message will be sent to 10.0.0.5[4500] 
18:58:01 ipsec,debug,packet ff 

18:58:21 ipsec,debug KA: 192.168.10.88[4500]->10.0.0.5[4500] 
18:58:21 ipsec,debug 1 times of 1 bytes message will be sent to 10.0.0.5[4500] 
18:58:21 ipsec,debug,packet ff
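The "three times a minute" cadence of the NAT-T keepalives can be checked against the timestamps logged above (a quick sketch using only the logged times):

```python
from datetime import datetime

# Timestamps of the four "KA:" messages logged above.
times = ["18:57:21", "18:57:41", "18:58:01", "18:58:21"]
parsed = [datetime.strptime(t, "%H:%M:%S") for t in times]

# Differences between consecutive keepalives, in seconds.
intervals = [(b - a).total_seconds() for a, b in zip(parsed, parsed[1:])]
print(intervals)  # [20.0, 20.0, 20.0] -> one keepalive every 20 seconds
```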

Below is where the trouble begins:


18:58:24 ipsec,debug ===== received 92 bytes from 10.0.0.5[4500] to 192.168.10.88[4500] 
18:58:24 ipsec,debug,packet 2a859f74 83bfff84 66b1ac17 dc1967e4 08100501 227b959d 0000005c 35e16fc6 
18:58:24 ipsec,debug,packet 227f11e3 5d1d573e 97169e66 7d53809e 1c2cf21d e2a39f2d 55a276b0 2f09b4b2 
18:58:24 ipsec,debug,packet b9ccda68 403e04f4 d4f31281 4ab50866 ce73f92a 25b48241 04fba3be 
18:58:24 ipsec,debug receive Information. 
18:58:24 ipsec,debug compute IV for phase2 
18:58:24 ipsec,debug phase1 last IV: 
18:58:24 ipsec,debug 108b0de7 933fdadc c36cb287 3ee353ad 227b959d 
18:58:24 ipsec,debug hash(sha1) 
18:58:24 ipsec,debug encryption(aes) 
18:58:24 ipsec,debug phase2 IV computed: 
18:58:24 ipsec,debug c625865e d69af68e d7672100 66f32a20 
18:58:24 ipsec,debug encryption(aes) 
18:58:24 ipsec,debug IV was saved for next processing: 
18:58:24 ipsec,debug 4ab50866 ce73f92a 25b48241 04fba3be 
18:58:24 ipsec,debug encryption(aes) 
18:58:24 ipsec,debug with key: 
18:58:24 ipsec,debug 180cc989 150aa766 f2f526af bb0819cd c17f8f66 6632fc13 2eba948d c143a772 
18:58:24 ipsec,debug decrypted payload by IV: 
18:58:24 ipsec,debug c625865e d69af68e d7672100 66f32a20 
18:58:24 ipsec,debug decrypted payload, but not trimed. 
18:58:24 ipsec,debug 0c000018 7badbada 4bd6bb2c 2aaf50c0 56d9c747 d2b78da3 0000001c 00000001 
18:58:24 ipsec,debug 01100001 2a859f74 83bfff84 66b1ac17 dc1967e4 00000000 00000000 00000000 
18:58:24 ipsec,debug padding len=1 
18:58:24 ipsec,debug skip to trim padding. 
18:58:24 ipsec,debug decrypted. 
18:58:24 ipsec,debug 2a859f74 83bfff84 66b1ac17 dc1967e4 08100501 227b959d 0000005c 0c000018 
18:58:24 ipsec,debug 7badbada 4bd6bb2c 2aaf50c0 56d9c747 d2b78da3 0000001c 00000001 01100001 
18:58:24 ipsec,debug 2a859f74 83bfff84 66b1ac17 dc1967e4 00000000 00000000 00000000 
18:58:24 ipsec,debug HASH with: 
18:58:24 ipsec,debug 227b959d 0000001c 00000001 01100001 2a859f74 83bfff84 66b1ac17 dc1967e4 
18:58:24 ipsec,debug hmac(hmac_sha1) 
18:58:24 ipsec,debug HASH computed: 
18:58:24 ipsec,debug 7badbada 4bd6bb2c 2aaf50c0 56d9c747 d2b78da3 
18:58:24 ipsec,debug hash validated. 
18:58:24 ipsec,debug begin. 
18:58:24 ipsec,debug seen nptype=8(hash) len=24 
18:58:24 ipsec,debug seen nptype=12(delete) len=28 
18:58:24 ipsec,debug succeed. 
18:58:24 ipsec,debug 10.0.0.5 delete payload for protocol ISAKMP

So the client has sent us a request to delete the IPsec Phase 1 (ISAKMP), which consequently takes down Phase 2 (ESP in this case) as well.


18:58:24 ipsec,info purging ISAKMP-SA 192.168.10.88[4500]<=>10.0.0.5[4500] spi=2a859f7483bfff84:66b1ac17dc1967e4. 
18:58:24 ipsec purged IPsec-SA proto_id=ESP spi=0xeb151c6 
18:58:24 ipsec purged IPsec-SA proto_id=ESP spi=0x7670525 
18:58:24 ipsec,debug an undead schedule has been deleted. 
18:58:24 ipsec removing generated policy

The line above is important: as the policy has been removed, the L2TP packets will no longer be matched and sent via the SA, even though the SA itself still exists at this point.


18:58:24 ipsec purged ISAKMP-SA 192.168.10.88[4500]<=>10.0.0.5[4500] spi=2a859f7483bfff84:66b1ac17dc1967e4. 
18:58:24 ipsec,debug purged SAs. 
18:58:24 ipsec,info ISAKMP-SA deleted 192.168.10.88[4500]-10.0.0.5[4500] spi:2a859f7483bfff84:66b1ac17dc1967e4 rekey:1 
18:58:24 ipsec KA remove: 192.168.10.88[4500]->10.0.0.5[4500] 
18:58:24 ipsec,debug KA tree dump: 192.168.10.88[4500]->10.0.0.5[4500] (in_use=1) 
18:58:24 ipsec,debug KA removing this one...

Demolition of the IPsec connection is complete. The L2TP transport packets cannot get anywhere until the IPsec connection is established again. But it's almost time to send an L2TP HELLO…


18:58:29 l2tp,debug,packet sent control message to 10.0.0.5:1701 from 192.168.10.88:1701 
18:58:29 l2tp,debug,packet     tunnel-id=5, session-id=0, ns=457, nr=4 
18:58:29 l2tp,debug,packet     (M) Message-Type=HELLO 
18:58:30 l2tp,debug,packet sent control message to 10.0.0.5:1701 from 192.168.10.88:1701 
18:58:30 l2tp,debug,packet     tunnel-id=5, session-id=0, ns=457, nr=4 
18:58:30 l2tp,debug,packet     (M) Message-Type=HELLO 
18:58:31 l2tp,debug,packet sent control message to 10.0.0.5:1701 from 192.168.10.88:1701 
18:58:31 l2tp,debug,packet     tunnel-id=5, session-id=0, ns=457, nr=4 
18:58:31 l2tp,debug,packet     (M) Message-Type=HELLO 
18:58:33 l2tp,debug,packet sent control message to 10.0.0.5:1701 from 192.168.10.88:1701 
18:58:33 l2tp,debug,packet     tunnel-id=5, session-id=0, ns=457, nr=4 
18:58:33 l2tp,debug,packet     (M) Message-Type=HELLO 
18:58:37 l2tp,debug,packet sent control message to 10.0.0.5:1701 from 192.168.10.88:1701 
18:58:37 l2tp,debug,packet     tunnel-id=5, session-id=0, ns=457, nr=4 
18:58:37 l2tp,debug,packet     (M) Message-Type=HELLO 
18:58:45 l2tp,debug,packet sent control message to 10.0.0.5:1701 from 192.168.10.88:1701 
18:58:45 l2tp,debug,packet     tunnel-id=5, session-id=0, ns=457, nr=4 
18:58:45 l2tp,debug,packet     (M) Message-Type=HELLO 
18:58:53 l2tp,debug tunnel 1263 received no replies, disconnecting

You can see that the L2TP HELLOs are retransmitted, doubling the delay with each retransmission (0.5 s, 1 s, 2 s, 4 s, 8 s), so after 23.5 s in total the server gives up waiting for an ack and initiates the disconnection process.
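The give-up time can be reconstructed from that retransmission schedule (a small sketch; the 0.5 s initial delay is inferred, since the log timestamps only have one-second resolution):

```python
# First HELLO at t = 0; each retransmission doubles the previous delay.
delays = [0.5 * 2 ** i for i in range(5)]   # [0.5, 1.0, 2.0, 4.0, 8.0]

# After the last retransmission the server waits one more 8 s period
# before declaring the tunnel dead.
give_up = sum(delays) + 8
print(delays, give_up)  # give_up = 23.5 seconds
```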


18:58:53 l2tp,debug tunnel 1263 entering state: dead 
18:58:53 l2tp,debug session 1 entering state: dead 
18:58:53 l2tp,ppp,debug <10.0.0.5>: LCP lowerdown 
18:58:53 l2tp,ppp,debug <10.0.0.5>: LCP closed 
18:58:53 l2tp,ppp,debug <10.0.0.5>: CCP lowerdown 
18:58:53 l2tp,ppp,debug <10.0.0.5>: BCP lowerdown 
18:58:53 l2tp,ppp,debug <10.0.0.5>: BCP down event in starting state 
18:58:53 l2tp,ppp,debug <10.0.0.5>: IPCP lowerdown 
18:58:53 l2tp,ppp,debug <10.0.0.5>: IPCP closed 
18:58:53 l2tp,ppp,debug <10.0.0.5>: IPV6CP lowerdown 
18:58:53 l2tp,ppp,debug <10.0.0.5>: IPV6CP down event in starting state 
18:58:53 l2tp,ppp,debug <10.0.0.5>: MPLSCP lowerdown 
18:58:53 l2tp,ppp,debug <10.0.0.5>: CCP close 
18:58:53 l2tp,ppp,debug <10.0.0.5>: BCP close 
18:58:53 l2tp,ppp,debug <10.0.0.5>: IPCP close 
18:58:53 l2tp,ppp,debug <10.0.0.5>: IPV6CP close 
18:58:53 l2tp,ppp,debug <10.0.0.5>: MPLSCP close 
18:58:53 l2tp,ppp,info l2tp-server-dedecek: terminating... - hungup 
18:58:53 l2tp,ppp,debug <10.0.0.5>: LCP lowerdown 
18:58:53 l2tp,ppp,debug <10.0.0.5>: LCP down event in starting state 
18:58:53 l2tp,ppp,info,account dedecek logged out, 27387 24129137 55951331 123213 106836 
18:58:53 l2tp,ppp,info l2tp-server-dedecek: disconnected 
18:58:53 ipsec,debug unbind ::ffff:192.168.99.1

Three seconds later, which is 32 seconds after it tore down the previous Phase 1, the client initiates the establishment of a new session:


18:58:56 ipsec,debug ===== received 408 bytes from 10.0.0.5[4500] to 192.168.10.88[4500] 
18:58:56 ipsec,debug,packet 42b08e69 f8f6c26e 00000000 00000000 01100200 00000000 00000198 0d0000d4 
18:58:56 ipsec,debug,packet 00000001 00000001 000000c8 01010005 03000028 01010000 80010007 800e0100 
18:58:56 ipsec,debug,packet 80020002 80040014 80030001 800b0001 000c0004 00007080 03000028 02010000 
18:58:56 ipsec,debug,packet 80010007 800e0080 80020002 80040013 80030001 800b0001 000c0004 00007080 
18:58:56 ipsec,debug,packet 03000028 03010000 80010007 800e0100 80020002 8004000e 80030001 800b0001 
18:58:56 ipsec,debug,packet 000c0004 00007080 03000024 04010000 80010005 80020002 8004000e 80030001 
18:58:56 ipsec,debug,packet 800b0001 000c0004 00007080 00000024 05010000 80010005 80020002 80040002 
18:58:56 ipsec,debug,packet 80030001 800b0001 000c0004 00007080 0d000018 01528bbb c0069612 1849ab9a 
18:58:56 ipsec,debug,packet 1c5b2a51 00000001 0d000018 1e2b5169 05991c7d 7c96fcbf b587e461 00000009 
18:58:56 ipsec,debug,packet 0d000014 4a131c81 07035845 5c5728f2 0e95452f 0d000014 90cb8091 3ebb696e 
18:58:56 ipsec,debug,packet 086381b5 ec427b1f 0d000014 4048b7d5 6ebce885 25e7de7f 00d6c2d3 0d000014 
18:58:56 ipsec,debug,packet fb1de3cd f341b7ea 16b7e5be 0855f120 0d000014 26244d38 eddb61b3 172a36e3 
18:58:56 ipsec,debug,packet d0cfb819 00000014 e3a5966a 76379fe7 07228231 e5ce8652 
18:58:56 ipsec,debug Marking ports as changed 
18:58:56 ipsec,debug Marking ports as changed 
18:58:56 ipsec,debug === 
18:58:56 ipsec,info respond new phase 1 (Identity Protection): 192.168.10.88[4500]<=>10.0.0.5[4500]

It then took another 2 seconds until new SAs were negotiated and installed:


...
18:58:58 ipsec,debug call pk_sendupdate 
18:58:58 ipsec,debug encryption(aes-cbc) 
18:58:58 ipsec,debug hmac(sha1) 
18:58:58 ipsec,debug call pfkey_send_update_nat 
18:58:58 ipsec IPsec-SA established: ESP/Transport 10.0.0.5[4500]->192.168.10.88[4500] spi=0x1f67a4 
18:58:58 ipsec,debug pfkey update sent. 
18:58:58 ipsec,debug encryption(aes-cbc) 
18:58:58 ipsec,debug hmac(sha1) 
18:58:58 ipsec,debug call pfkey_send_add_nat 
18:58:58 ipsec IPsec-SA established: ESP/Transport 192.168.10.88[4500]->10.0.0.5[4500] spi=0xf1a4f34 
18:58:58 ipsec,debug pfkey add sent. 
18:58:58 ipsec,debug ===== received 76 bytes from 10.0.0.5[4500] to 192.168.10.88[4500] 
18:58:58 ipsec,debug,packet 2a859f74 83bfff84 66b1ac17 dc1967e4 08100501 db32ba58 0000004c f1e52518 
18:58:58 ipsec,debug,packet baeb8459 5c9cdab5 29193055 b74da572 854a337a be9c47ed 70ba26e1 0004899f 
18:58:58 ipsec,debug,packet e0e045e9 bfbb4850 fb354c32 
18:58:58 ipsec 10.0.0.5 unknown Informational exchange received.
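Putting the logged timestamps together shows why the re-establishment comes too late: the L2TP tunnel is declared dead before the new SAs are in place (a quick calculation from the times above):

```python
from datetime import datetime

def t(s):
    """Parse an HH:MM:SS timestamp from the log."""
    return datetime.strptime(s, "%H:%M:%S")

ipsec_torn_down = t("18:58:24")   # client deletes Phase 1 (ISAKMP)
tunnel_dead     = t("18:58:53")   # L2TP server gives up on HELLO acks
sa_reinstalled  = t("18:58:58")   # new IPsec SAs established

budget = (tunnel_dead - ipsec_torn_down).total_seconds()
actual = (sa_reinstalled - ipsec_torn_down).total_seconds()
print(budget, actual)  # 29.0 s available, 34.0 s actually needed
```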

And it took another 8 seconds until the client started sending its own HELLO keepalives, still within the old session (see the ns and nr values), which is however too late to help anything.


18:59:06 l2tp,debug,packet rcvd control message from 10.0.0.5:1701 to 192.168.10.88:1701 
18:59:06 l2tp,debug,packet     tunnel-id=1263, session-id=0, ns=4, nr=457 
18:59:06 l2tp,debug,packet     (M) Message-Type=HELLO 
18:59:16 l2tp,debug,packet rcvd control message from 10.0.0.5:1701 to 192.168.10.88:1701 
18:59:16 l2tp,debug,packet     tunnel-id=1263, session-id=0, ns=4, nr=457 
18:59:16 l2tp,debug,packet     (M) Message-Type=HELLO 
18:59:26 l2tp,debug,packet rcvd control message from 10.0.0.5:1701 to 192.168.10.88:1701 
18:59:26 l2tp,debug,packet     tunnel-id=1263, session-id=0, ns=4, nr=457 
18:59:26 l2tp,debug,packet     (M) Message-Type=HELLO



As the Android client also limits the Phase 1 lifetime to 8 hours, I'll first check what the renegotiation looks like in the Android case, and then try whether configuring a shorter lifetime limit on the RouterOS side makes the client(s) behave differently.

All mine are other MikroTiks.

I have a dial-up PPTP connection to my server without encryption, but it is not in the list.
Those shown are only dial-in; I need to list the dial-out ones too, but I don't know how.

The results with my version of the embedded Android client are even more cryworthy than with Windows 10.

The Android client, like the Windows 10 one, declares a 28800-second Phase 1 lifetime in its Phase 1 proposal, and when this time expires, RouterOS drops the connection, without any attempt from the Android side to re-establish it before or after the drop. But Android still shows the VPN connection as active and stubbornly attempts to use it, so you can see its "packets/bytes sent" counters grow while "packets/bytes received" stays unchanged, several hours after the connection went down.

I've limited the Phase 1 lifetime on the MikroTik side, assuming that it might actively terminate the Phase 1 security association and thus provoke the client into a renewal, or that the client might proactively renew the session from its side once the end of the lifetime announced by the MikroTik approaches; well, neither of these happens. The MikroTik keeps the session alive (presumably because it is configured in server mode and is thus unable to renew it), and Android doesn't bother to renew it either, so the session continues to run. The Windows client behaves the same way. I expect both sessions to end the same way as when a 24 h lifetime is set on the MikroTik side: after 8 hours.

So I assume the gents in Redmond became aware of the issue and added auto-renewal to the Windows 10 client (which explains why those sessions do not last exactly 8 hours, as reported before), but the auto-renewal takes too much time (so far?) for the L2TP server not to give up.

If someone here happens to own some iThing, it might be interesting for the audience here to check how the iOS clients behave in this regard.