I have two regions in gcloud with two different address ranges (192.xx.xx.0/24 and 192.yy.yy.0/24).
A tunnel (IKEv2) is created for each of these ranges.
Each of the tunnels works on my router on its own; however, when I try to run both at the same time, traffic stops going through one of them.
In the /ipsec/policy tab all tunnels have the status “established”.
This is likely not a problem on the gcloud side, as I have the same tunnels set up on a Fortigate in another location.
It is also not a problem with the RouterOS version (originally 6.47.x, then 6.49.x, now 7.6).
Have you perhaps encountered similar problems? Below I paste the anonymized configuration of the tunnels. IMO it looks like it’s losing some static routes, but I can’t trace what I’m doing wrong. Can you guys help?
Nothing in your configuration seems strange to me, and multiple connections from the same local IP address to multiple distinct remote peers are nothing unusual too.
Is the MikroTik connected to the internet directly or via some firewall (or even NAT) device? I can only imagine a firewall device having issues with handling bare ESP; if so, forcing NAT into the path might help.
And just as a blind shot, try setting level=unique for the policies.
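To make the blind shot concrete, setting level=unique on the two relevant policies could look roughly like this (the comment selectors below are placeholders for however your policies are identified, not taken from your config):

```
/ip ipsec policy
set [ find comment="gcloud" ] level=unique
set [ find comment="gcloud_fra" ] level=unique
```

With level=unique, each policy negotiates its own SA pair instead of reusing an SA that another policy to the same peer already established.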
I know it’s nothing unusual (on this MikroTik I have 4 more tunnels to other locations, and with those there was no problem; the only difference is exchange-mode=main).
This router has a direct connection to the internet (via PPPoE).
An interesting detail: if I have both connections up (gcloud_peer + gcloud_peer_fra), the traffic only goes to the gcloud_peer peer address, while if I stop that one, the traffic appears on the gcloud_peer_fra peer. The other tunnels work without any interruption and do not lose any packets.
As for the firewall rules: there is nothing special there (ESP and AH have accept rules at the very top of the chain); NAT is similar (the address lists for these tunnels have accept rules in src-nat).
Unfortunately, changing to level=unique did not bring any change.
This router has quite a long configuration to paste in its entirety (a lot of things to hide), so I don’t know if it makes sense to process and paste it all, but let me know and I’ll prepare something.
Disable both peers (or identities), then enable them again, to clean up all 4 SAs and let them be recreated. Start pinging both destination networks as you did (it doesn’t matter that one of the pings will not work).
Then show me the output of the following (of course you can obfuscate the public IPs):
/ip ipsec active-peers print (only the rows for the relevant peers are interesting)
/ip ipsec installed-sa print where src-address~"ip.of.peer.1|ip.of.peer.2" or dst-address~"ip.of.peer.1|ip.of.peer.2" (again, you can obfuscate the public IPs; no point in obfuscating the keys, as they are ephemeral, and if you disable the peers/identities again before posting, these values cannot be misused).
Sorry, one more time, but with /ip/ipsec/installed-sa/print detail … - I didn’t know this was necessary in ROS 7 to see the number of bytes and packets handled by each SA.
That’s what I was assuming: the first one, with src-address=ip-gcloud_peer_fra, shows only one packet transported, while the two with src-address=ip-local_router show about 450 packets each, whereas the one with src-address=ip-gcloud_peer shows 880 packets (so roughly twice 450) with an average size of 84 bytes.
To double-check, disable and re-enable the peers again, ping only the server in the private subnet behind gcloud_peer_fra for a minute or so, and then check the same output of /ip/ipsec/installed-sa print detail …. If my assumption is correct, you’ll see non-0 packet counts for the SA with src-address=ip-local_router dst-address=ip-gcloud_peer_fra (correct) and for the one with src-address=ip-gcloud_peer dst-address=ip-local_router (incorrect).
Don’t worry, so do I. Maybe our careers are just too short so far
It seems like a bug to me, so I’d first talk to Gcloud support. But you may try to adapt to that bug by configuring both policies with peer=gcloud_peer,gcloud_peer_fra. In theory, both policies will get negotiated with gcloud_peer while both peers are available, and if communication with gcloud_peer eventually gets lost, they will negotiate with gcloud_peer_fra instead (and stay there even if gcloud_peer becomes available again, until gcloud_peer_fra eventually fails).
But I’ve only tried this method where the remote peers were creating the policies dynamically at their end, so it may fail miserably here.
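A sketch of that workaround on the policy side (the local subnet below is a placeholder, and whether peer= accepts a comma-separated list depends on your ROS version, so treat this as untested):

```
/ip ipsec policy
add peer=gcloud_peer,gcloud_peer_fra tunnel=yes src-address=192.168.0.0/24 dst-address=192.xx.xx.0/24
add peer=gcloud_peer,gcloud_peer_fra tunnel=yes src-address=192.168.0.0/24 dst-address=192.yy.yy.0/24
```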
If both tunnels were in the same region, that could be a solution. On the other hand, one tunnel terminates in Poland, the other in Germany, so perhaps this is the bug…
Especially since I have exactly these two subnets set up in another location using a Fortigate, and there is no problem there…
The fact that the two tunnels are intended for two different countries did not prevent Gcloud from sending data from Germany via the tunnel built for Poland. So inside their network something ignores the regional partitioning. That’s what I had in mind when saying you have to adjust your side to their buggy behaviour.
I changed the rule in Google Cloud VPN: the policy now includes both subnets (even though the GUI doesn’t suggest at all that you can enter a subnet from a different region), threw out the google_fra config completely, added that subnet to the policy using google_peer, and now everything works fine. Finally!
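For the record, the working end state on the MikroTik side is then roughly this sketch (the local subnet is a placeholder; both remote subnets now ride over the single google_peer tunnel):

```
/ip ipsec policy
add peer=google_peer tunnel=yes src-address=192.168.0.0/24 dst-address=192.xx.xx.0/24
add peer=google_peer tunnel=yes src-address=192.168.0.0/24 dst-address=192.yy.yy.0/24
```

On the Google side, the tunnel’s traffic selectors have to list both subnets, even though the console only suggests subnets from the tunnel’s own region.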
Thank you very much for the messages above, which finally helped me fix the problem!