All my clients are MikroTik RouterBOARD devices that connect to the main office with L2TP/IPsec.
Great, so the pre-requisite that the clients never give up is met.
I tested with a CCR router and it gives the same problem.
That's no surprise. Establishing a connection takes a lot of processing, spread over several steps. Too large a delay in any of these steps is fatal for the connection attempt as a whole, and under such a load the chance that none of the steps fails for a particular connection is close to zero, so none of the attempts ever succeeds and the load stays high forever.
Do you have a suggestion for this problem?
Sure, I have already described the suggestion above. You have to moderate, in the firewall, the flow of the initial packets to UDP port 500 towards the IPsec stack, using the limit matcher in /ip firewall filter. This reduces the number of connection attempts that even get started. As dropping the initial packets of the excess attempts is a much less CPU-intensive task, the dropping will not prevent the connections whose initial packets were let through from completing; once those complete, the CPU load decreases and the next batch of connection attempts can be let through.
So replace the single rule
action=accept chain=input protocol=udp dst-port=500,1701,4500
in
/ip firewall filter with the following three:
action=accept chain=input dst-port=500 limit=5/10s,0 protocol=udp
action=drop chain=input dst-port=500 protocol=udp
action=accept chain=input protocol=udp dst-port=1701,4500
and that should do the trick.
If the rule to be replaced looks different in your firewall, or if you don't use a stateful firewall, post the output of
/ip firewall filter export.
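In case it helps, this is roughly how the replacement could be done from the CLI; a minimal sketch, assuming the existing rule shows up as item 3 in the print output (both the item number 3 and the comment "limit-ike" are placeholders of my choosing, adjust them to your setup):
/ip firewall filter print
/ip firewall filter add chain=input protocol=udp dst-port=500 limit=5/10s,0 action=accept comment="limit-ike" place-before=3
/ip firewall filter add chain=input protocol=udp dst-port=500 action=drop place-before=3
/ip firewall filter add chain=input protocol=udp dst-port=1701,4500 action=accept place-before=3
/ip firewall filter disable 3
Disabling rather than removing the old rule keeps it around in case you need to roll back; check the item number with print immediately before running the add commands, as the numbers are only assigned by print.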
5 attempts in 10 s should be on the safe side; if it helps, you may be able to allow more. 5 attempts in 10 s is 30 new connections per minute, so it takes about 17 minutes for all 500 clients to recover.
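And if 5 per 10 s turns out to be unnecessarily tight once the backlog has cleared, the limit can be adjusted in place, e.g. (again using the placeholder comment from the sketch above):
/ip firewall filter set [find comment="limit-ike"] limit=10/10s,0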
If some of the clients connect from public addresses, the server configuration needs to be modified, otherwise connections from those clients would ruin the idea.