After scripting around the MikroTik policy issue, the connection seems to be usable for a while.
Yesterday we started testing throughput on this link, and the result was really poor.
We got only about 80 Mbps on this link.
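As a rough sanity check (my own conversion, not part of the test output), the per-stream NETIO figures below are in the same ballpark:

# first NETIO line below reports 9116.44 KByte/s; times 8 bits that is about 73 Mbit/s
# (or ~75 Mbit/s if KByte means 1024 bytes), which matches the ~80 Mbps we see overall
echo "9116.44 * 8 / 1000" | bc -l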
dne@xxx:~/netio/bin$ ./linux-x86_64 -s
NETIO - Network Throughput Benchmark, Version 1.31
(C) 1997-2010 Kai Uwe Rommel
UDP server listening.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 9116.44 KByte/s
Sending to client, packet size 1k ... 7694.83 KByte/s
Receiving from client, packet size 2k ... 9058.97 KByte/s
Sending to client, packet size 2k ... 7827.14 KByte/s
Receiving from client, packet size 4k ... 8145.50 KByte/s
Sending to client, packet size 4k ... 7817.76 KByte/s
Receiving from client, packet size 8k ... 8898.51 KByte/s
Sending to client, packet size 8k ... 7899.10 KByte/s
Receiving from client, packet size 16k ... 8588.56 KByte/s
Sending to client, packet size 16k ... 7376.13 KByte/s
Receiving from client, packet size 32k ... 8714.80 KByte/s
Sending to client, packet size 32k ... 7822.97 KByte/s
Done.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 64 ... 8076.24 KByte/s
Sending to client, packet size 64 ... 7912.76 KByte/s
Done.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 128 ... 9060.91 KByte/s
Sending to client, packet size 128 ... 7719.30 KByte/s
Done.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 256 ... 8312.16 KByte/s
Sending to client, packet size 256 ... 5824.54 KByte/s
Done.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 512 ... 8834.41 KByte/s
Sending to client, packet size 512 ... 7961.55 KByte/s
Done.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1k ... 8579.08 KByte/s
Sending to client, packet size 1k ... 7761.30 KByte/s
Done.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1460 ... 8563.63 KByte/s
Sending to client, packet size 1460 ... 8163.70 KByte/s
Done.
TCP server listening.
TCP connection established ...
Receiving from client, packet size 1500 ... 8807.63 KByte/s
Sending to client, packet size 1500 ... 7358.85 KByte/s
Done.
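For anyone trying to reproduce this: the server above was simply started with ./linux-x86_64 -s, and the client is pointed at the server's address. The lines below are a sketch based on NETIO 1.31's usage text rather than a transcript of what we ran (-t is TCP, -b a fixed block size; the address is the LAN IP from the iperf output further down):

# default sweep over 1k-32k block sizes (matches the first block of output above)
./linux-x86_64 -t 192.168.88.111
# single fixed block size; repeated for 64, 128, 256, 512, 1k, 1460 and 1500 bytes
./linux-x86_64 -t -b 64 192.168.88.111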
dne@xxx:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.88.111 port 5001 connected with 172.31.22.165 port 39198
------------------------------------------------------------
Client connecting to 172.31.22.165, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 6] local 192.168.88.111 port 43887 connected with 172.31.22.165 port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 40.1 MBytes 33.5 Mbits/sec
[ 4] 0.0-10.1 sec 44.6 MBytes 37.2 Mbits/sec
[ 5] local 192.168.88.111 port 5001 connected with 172.31.22.165 port 39199
[ 5] 0.0-10.1 sec 86.4 MBytes 71.8 Mbits/sec
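For completeness, these numbers came from plain iperf (v2) runs on the AWS instance; reconstructed from the server output above, the client side would have been roughly:

# bidirectional test (-d), which produced the paired ~33/37 Mbit/s results
iperf -c 192.168.88.111 -d
# plain single-stream test, which produced the ~72 Mbit/s result
iperf -c 192.168.88.111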
Edit:
The instance (m4.10xlarge) on the AWS side is connected to the VPC via 10 Gbit, and the local side via 1 Gbit.
The uplink for the CCR on the datacenter side is 1 Gbit, dedicated to this test.
We were able to get 1.7 Gbps of IPsec traffic at a 1500-byte MTU between two CCR1036 routers in our lab. It sounds like there may be other factors in the transport or at the AWS endpoint that are limiting you.
Can you provide details on how you are testing? While I am able to saturate the link with test traffic, performance is greatly impacted by the connection quality (details here). This also shows up in iperf3 test traffic, but is easier to overcome with multiple parallel streams (and acceptance of some loss). I'm wondering whether you were only testing TCP and disregarding retransmissions and loss.
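For example, something along these lines (the target is just the LAN address from your output; stream count, duration and UDP rate are arbitrary):

# several parallel TCP streams; iperf3 reports retransmits per stream
iperf3 -c 192.168.88.111 -P 8 -t 30
# UDP at a fixed offered rate, to see jitter and packet loss directly
iperf3 -c 192.168.88.111 -u -b 500M -t 30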