I’m in a state of light shock… I’ve reset a hAP ac² to defaults and built an IKEv2 IPsec peer & identity, and on the remote peer, I’ve created an identity referring to a mode-config with address=192.168.209.1 and split-include=0.0.0.0/0. So at the initiator side, it looks as follows:
[me@MyTik] > ip ipsec export
...
/ip ipsec mode-config
set [ find default=yes ] src-address-list=mode-config-list
/ip ipsec peer
add address=192.168.10.84/32 exchange-mode=ike2 name=peer1
/ip ipsec identity
# Suggestion to use stronger pre-shared key or different authentication method
add generate-policy=port-strict mode-config=request-only peer=peer1 secret=averysecureone
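For completeness, the responder side would look roughly like this. This is a sketch from memory, not an export: only the address=192.168.209.1 and split-include=0.0.0.0/0 values come from the actual setup, the mode-config name and passive=yes are my assumptions:

/ip ipsec mode-config
add address=192.168.209.1 name=cfg1 split-include=0.0.0.0/0 system-dns=no
/ip ipsec peer
add exchange-mode=ike2 name=peer1 passive=yes
/ip ipsec identity
add generate-policy=port-strict mode-config=cfg1 peer=peer1 secret=averysecureone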
[me@MyTik] > ip firewall address-list print
Flags: X - disabled, D - dynamic
# LIST ADDRESS CREATION-TIME TIMEOUT
0 mode-config-list 192.168.99.0/24 may/27/2019 21:04:34
[me@MyTik] > ip address print
Flags: X - disabled, I - invalid, D - dynamic
# ADDRESS NETWORK INTERFACE
0 ;;; defconf
192.168.99.1/24 192.168.99.0 bridge
1 D 192.168.10.83/24 192.168.10.0 ether1
2 D 192.168.209.1/32 192.168.209.1 ether1
[me@MyTik] > ip ipsec policy print
Flags: T - template, X - disabled, D - dynamic, I - invalid, A - active, * - default
0 T * group=default src-address=::/0 dst-address=::/0 protocol=all proposal=default template=yes
1 DA src-address=192.168.209.1/32 src-port=any dst-address=0.0.0.0/0 dst-port=any protocol=all action=encrypt level=unique
ipsec-protocols=esp tunnel=yes sa-src-address=192.168.10.83 sa-dst-address=192.168.10.84 proposal=default ph2-count=1
I have intentionally kept the default firewall rules, so the chain=forward action=fasttrack-connection rule in /ip firewall filter was in place. Hence I expected the usual behaviour: every Nth packet of a fasttracked connection is not fasttracked, and these Nth packets are caught by the IPsec policy and delivered, while the actually fasttracked ones miss the IPsec policy due to fasttracking and take the default route to nowhere (if I disable the peer, the laptop connected in 192.168.99.0/24 cannot reach anything, as the default route's gateway is the LAN bridge).
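For reference, the defconf rule in question looks like this (quoted from memory of the 6.44 defaults, so treat it as approximate):

/ip firewall filter
add action=fasttrack-connection chain=forward comment="defconf: fasttrack" connection-state=established,related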
To my surprise, the packet count in fasttracked connections at the “client” side is equal to the one at the “server” side. So even actually fasttracked packets get caught by the policy and delivered via the SA:
client:
[me@MyTik] > ip firewall connection print where dst-address~"216.58.201.78:443"
Flags: E - expected, S - seen-reply, A - assured, C - confirmed, D - dying, F - fasttrack, s - srcnat, d - dstnat
# PR.. SRC-ADDRESS DST-ADDRESS TCP-STATE TIMEOUT ORIG-RATE REPL-RATE ORIG-PACKETS REPL-PACKETS
0 SAC Fs tcp 192.168.99.254:57468 216.58.201.78:443 established 23h59m46s 0bps 0bps 2 654 4 019
server (where fasttracking is off):
[me@vpn-server] > ip firewall connection print where dst-address~"216.58.201.78:443"
Flags: E - expected, S - seen-reply, A - assured, C - confirmed, D - dying, F - fasttrack, s - srcnat, d - dstnat
# PR.. SRC-ADDRESS DST-ADDRESS TCP-STATE TIMEOUT ORIG-RATE REPL-RATE ORIG-PACKETS REPL-PACKETS
0 SAC s tcp 192.168.209.1:57468 216.58.201.78:443 established 23h59m43s 0bps 0bps 2 648 4 019
So given that even the actually fasttracked packets are caught by the IPsec policy, there is no way to separate the "real" firewall processing from the IPsec policy matching by inserting an IPIP hop between the two. Therefore there is no way to split these two stages of processing of the same packet so that each stage could be executed by a different CPU core/thread.
I wonder whether this is an intentional change in 6.44 or something else, and what the impact on performance is.
So what remains at your side is to replicate this setup and see the difference in throughput between fasttracking off and fasttracking on when PPPoE, IPsec and firewalling are done on a single machine. My home uplink of 20 Mbit/s is not suitable for any stress testing.
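If you replicate it, something like /tool bandwidth-test run against the responder from behind the initiator should be enough to show the difference; the address is the one from my setup, the rest of the parameters are placeholders:

/tool bandwidth-test address=192.168.10.84 protocol=tcp direction=both duration=30s user=admin password=""

Running it once with the fasttrack rule enabled and once with it disabled, while watching CPU load in /tool profile, should make the single-core bottleneck visible.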
BTW, I suspect that the fact that the IPsec processing is pinned to one thread/core is intentional. The receiving side is not happy when packets arrive out of order, and to ensure that they don't, all of them have to be processed by the same thread on the sending side. So it may well be that with several SAs you'd get a higher overall throughput, but your application scenario doesn't leave any room for several SAs.