We have a CHR running in our Amazon Web Services virtual private cloud. It's been a bit of a journey getting there, but I'm glad we did it, as it's good to have MikroTik flexibility in AWS.
However, AWS does things a little differently in the networking department so it's not always easy to work out what is going on.
As background: our VPC is 10.100.0.0/16, and the CHR sits in the 10.100.1.0/24 subnet.
The CHR is operating as a VPN server, and addresses given to L2TP clients are in the 10.101.0.0/16 range.
The CHR also has an IPsec connection to our office, which is on 10.11.0.0/16.
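For reference, this is the state I'm working from on the CHR; the commands are standard RouterOS, and the addresses are just the ones described above:

```
# Routes the CHR knows about (office LAN, VPN client range, default via the IGW)
/ip route print

# IPsec policies, which grab matching traffic before normal routing applies
/ip ipsec policy print
```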
When I ping from a VPN client to a host on the same VPC subnet as the CHR, the ping goes out with the MAC address of the CHR's eth1 interface, as you would expect, and comes back correctly with the MAC of the local instance.
However, when I ping a host on the office LAN and look at the sniffer on the CHR (filtering only for pings):
1) I see the packet come in on the L2TP interface (no MAC address, as expected).
2) I then see the ping reply coming back from the Ethernet MAC of the Amazon gateway. The route tables know about 10.101.0.0/16, but why would the traffic go to the AWS gateway first if the CHR is receiving the traffic back over the IPsec tunnel and knows the next hop is its own VPN client?
3) I see the ping reply exiting on the L2TP interface, correctly addressed to the host on the VPN.
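(For reference, the sniffer filter is just ICMP across all interfaces, something like:)

```
# Live capture of ICMP only, on all interfaces
/tool sniffer quick ip-protocol=icmp
```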
Ping works fine, but I'm confused as to why the AWS gateway is involved at all. The only thing I can think of is that for some reason the CHR is forwarding the echo reply to the IGW when it comes in from the office IPsec tunnel (the IGW is the default route for the CHR). But why would it, if it has knowledge of all the routes it needs? (This is all made a little more complicated by the IPsec policies required in addition to the routing tables.) I'm guessing that I'm not seeing the echo request leaving the CHR, or the reply returning to it, because it travels over the IPsec tunnel and that is "not an interface"?
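If that guess is right, the encrypted leg should still be visible on the wire as ESP rather than ICMP. Sniffing for ESP on the Ethernet interface should show it (a sketch only; I'm assuming the tunnel leaves via ether1, and your interface name may differ):

```
# Policy-based IPsec has no tunnel interface; the encrypted packets
# appear as ESP on the physical interface instead
/tool sniffer quick interface=ether1 ip-protocol=ipsec-esp
```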
What am I missing here?