You have misunderstood how the /ip ipsec peer configuration works.
When an initial packet arrives from the remote initiator, its source IP address and the exchange mode are matched against the address and exchange-mode items of all rows under /ip ipsec peer, requiring an exact match of exchange-mode and taking the best match of address. Best match of address means that if more than one row matches, the one with the longest address prefix (mask) is chosen. So if there are three peers with exchange-mode=ike2 and address values 0.0.0.0/0, 100.100.100.0/24, and 100.100.100.5/32, the third one is chosen if the initial IKEv2 packet comes from 100.100.100.5, the second one if the packet comes from 100.100.100.6, and the first one if the packet comes from 100.100.99.3.
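As a sketch (all other peer parameters omitted; the addresses are the example ones above):

```
/ip ipsec peer
add exchange-mode=ike2 address=0.0.0.0/0        comment="catch-all"
add exchange-mode=ike2 address=100.100.100.0/24 comment="subnet match"
add exchange-mode=ike2 address=100.100.100.5/32 comment="host match"
```

An initiator at 100.100.100.5 matches the /32 row, one at 100.100.100.6 the /24 row, and one at 100.100.99.3 only the catch-all.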
If a local-address other than the default 0.0.0.0 (meaning any local address of the routerboard) is specified for a row in /ip ipsec peer, it is also taken into account in the matching.
This is the stage where the rest of the configuration elements are chosen. So all actual remote peers whose initial packets match the same row in /ip ipsec peer
- must be distinguished from one another by their ID, which is looked up as the name item under /ip ipsec user (if auth-method=pre-shared-key-xauth) or in an external user database using RADIUS (if auth-method=eap-radius or rsa-signature-hybrid),
- must make do with the same settings of the responder peer, namely the profile, generate-policy, and policy-group.
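So the responder side typically ends up with a single shared peer row plus one /ip ipsec user entry per client. A minimal sketch, assuming auth-method=pre-shared-key-xauth so that /ip ipsec user is consulted (names and secrets are placeholders):

```
/ip ipsec peer
add address=0.0.0.0/0 auth-method=pre-shared-key-xauth passive=yes \
    secret="shared-secret-here" profile=default
/ip ipsec user
add name=client01 password="client01-password"
add name=client02 password="client02-password"
```

Note that the peer's secret is common to all clients; only the name/password pairs distinguish them.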
In your configuration, you have set two /ip ipsec peer rows with the same address and exchange-mode, which means only the one declared first is ever used. What should have warned you is that there is no remote-id parameter of the peer, nor any link between the /ip ipsec user items and the /ip ipsec peer items.
Now regarding the assignment of addresses and matching policies to the initiators (clients). As a single /ip ipsec peer row is used at the responder (server) side to create several distinct connections to actual remote peers whose addresses are unknown in advance, the policies for them must be generated dynamically, as the sa-dst-address of each policy can only be determined when the remote peer comes up. So the responder peer configuration must be set to generate-policy=port-strict or port-override.
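On the responder peer row, that could look like the following (the policy group name is a placeholder):

```
/ip ipsec peer
set [find address=0.0.0.0/0] generate-policy=port-strict \
    policy-template-group=clients
```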
The IP address of the client can either be configured statically at the client (or an existing one can be used), or the server can assign it using the mode-config procedure, as can the subnets which the client can reach via the tunnel. In the first case, the policy (or policies) are configured statically at the client as well, because all of their parameters (src-address, dst-address, sa-src-address, sa-dst-address) are known in advance. In the second case, generate-policy at the client must also be set to port-strict or port-override.
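A static client-side policy would be a sketch like this (all addresses are made up: 192.168.88.0/24 is the client LAN, 10.0.0.0/24 the subnet behind the server, 203.0.113.10 the client's WAN address, 198.51.100.1 the server):

```
/ip ipsec policy
add src-address=192.168.88.0/24 dst-address=10.0.0.0/24 tunnel=yes \
    sa-src-address=203.0.113.10 sa-dst-address=198.51.100.1 \
    proposal=default
```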
If you insist on running the GRE tunnels to clients, I’d recommend setting the address and policies statically at each client device. Given that you probably cannot fully automate the link between the public IP you assign to a particular client and the client’s location, you cannot simply create 20 identical client configurations that differ only in user name and password. You also need the link between the user name and the public address tunneled to it, and this link goes via the private address which the client uses and to which the public one is tunneled via the GRE.
But if you can wrap your head around the idea of using no GRE at all, you can assign the public IPs directly using mode-config if you set these addresses in the /ip ipsec user rows at the server side. In that case, you have to assign a mode-config profile to the /ip ipsec peer row at both the client and server side, where at the client side it would be a modified version of the default request-only profile and at the server side one created for this use case.
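I don’t have a lab at hand to verify the exact parameter set, but the sketch would be along these lines (profile names and the address are placeholders):

```
# server side: mode-config profile handing out the per-user address
/ip ipsec mode-config
add name=hand-out address-prefix-length=32 system-dns=no
/ip ipsec user
add name=client01 password="client01-password" address=198.51.100.65
/ip ipsec peer
set [find address=0.0.0.0/0] mode-config=hand-out

# client side: request the address from the server
/ip ipsec peer
set [find address=198.51.100.1/32] mode-config=request-only
```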
The easiest way to configure the policy templates at the responder side is to set a single policy template in the group, saying src-address=0.0.0.0/0 dst-address=0.0.0.0/0. But this is dangerous, as it would let a misconfigured peer steal the whole responder’s traffic by setting src-address=0.0.0.0/0 in its own policy. The safest approach is to create the exact policies for each client as templates. Or you can simply create a single template with src-address and dst-address subnets as narrow as still sufficient for it to work: if you decide to assign the public addresses directly using mode-config, you would set the template to src-address=0.0.0.0/0 dst-address=the.public.subnet.you.use/its-mask.
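E.g., assuming 198.51.100.64/26 stands in for the public subnet you hand out:

```
/ip ipsec policy group
add name=clients
/ip ipsec policy
add template=yes group=clients src-address=0.0.0.0/0 \
    dst-address=198.51.100.64/26
```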
On the client side, the policy template’s src-address and dst-address may stay the default ones.
One important point is encryption. Do you really need it, given that you’d only encrypt the path from the client to your server, and from there on the data would flow as the camera has sent them anyway? If you don’t, you can set enc-algorithms=null in the proposals at both sides and thus save some CPU (unless hardware acceleration of authentication is unavailable when the encryption algorithm is null; I’ve never tried that). If you use ssh, https, or winbox to connect to the clients via the tunnel, you don’t need the IPsec to encrypt the management access either.
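Disabling encryption while keeping authentication would then be something like (sha256 is just an example choice):

```
/ip ipsec proposal
set [find default=yes] enc-algorithms=null auth-algorithms=sha256
```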
On the other hand, you do want the highest available protection against someone connecting instead of your client. So use strong encryption algorithms in the /ip ipsec peer profile, and use a hex-encoded 128-byte random string (i.e. 256 hex symbols) as the /ip ipsec peer secret and other such strings as the /ip ipsec user password of the individual clients.
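You can generate such strings on any Linux box with e.g. `openssl rand -hex 128` and paste them in (the find expressions are placeholders for your actual rows):

```
/ip ipsec peer
set [find address=0.0.0.0/0] secret="paste-the-256-hex-chars-here"
/ip ipsec user
set [find name=client01] password="another-such-string-here"
```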
If you choose the way of direct assignment of the public IP addresses to the client devices, there is one more thing you need to cover, namely the port forwarding to the cameras when the public IP is not known in advance. The dst-nat rules have to match on in-interface=the-wan-if-name ipsec-policy=in,ipsec so that they are only applied to traffic coming in via the IPsec tunnel.
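A sketch of such a rule, with a made-up WAN interface name, camera address, and the RTSP port as an example:

```
/ip firewall nat
add chain=dstnat in-interface=ether1 ipsec-policy=in,ipsec \
    protocol=tcp dst-port=554 action=dst-nat \
    to-addresses=192.168.1.10 to-ports=554
```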