Good morning everyone, it's been a while since I last stopped by this great community.
To give some context: a large part of my work infrastructure currently runs on MikroTik, from routers and firewalls to VLAN network segmentation, internet administration, etc.
I use a Cloud Core Router as the main firewall in front of the web services.
• (Current) The setup is currently as follows:
Internet WAN 300 connections ---- MikroTik Firewall NAT ---- LAN all connections to Web Server 1
• (To be implemented) I would like to balance all incoming connections and divide them across several web servers:
WAN 300 connections ---- MikroTik Firewall NAT ---- LAN 100 connections to Web Server 1 (HTTP)
                                               ---- LAN 100 connections to Web Server 2 (HTTP)
                                               ---- LAN 100 connections to Web Server 3 (HTTP)
I have done a lot of research, but I only managed to find information on load balancing for WISPs (i.e. balancing multiple WAN uplinks, not incoming connections to servers).
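For completeness, a PCC-based dst-nat split along the lines discussed below can be sketched like this. All addresses are placeholders (public IP 203.0.113.10, web servers 192.168.88.11-13); adjust to your own addressing:

```
/ip firewall nat
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    per-connection-classifier=src-address:3/0 \
    action=dst-nat to-addresses=192.168.88.11
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    per-connection-classifier=src-address:3/1 \
    action=dst-nat to-addresses=192.168.88.12
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    per-connection-classifier=src-address:3/2 \
    action=dst-nat to-addresses=192.168.88.13
```

Classifying on src-address means every connection from a given client hashes to the same remainder, so that client always lands on the same backend.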
This is if you care about the same client always going to the same server. If you don't, use per-connection-classifier=src-address-and-port, or the nth=3,x matcher instead of PCC.
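A non-sticky variant using the nth matcher might look like the sketch below (same placeholder addresses). Note that each rule keeps its own counter, so the usual cascade is nth=3,1, then nth=2,1 on what's left, then a plain catch-all:

```
/ip firewall nat
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    nth=3,1 action=dst-nat to-addresses=192.168.88.11
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    nth=2,1 action=dst-nat to-addresses=192.168.88.12
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    action=dst-nat to-addresses=192.168.88.13
```

The first rule takes every third new connection, the second takes half of the remaining two thirds, and the last one catches the rest, giving roughly a 1/3 split each.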
If you simply disable one of these rules, all connections that would otherwise match it won't go anywhere, which is of course very bad. You can add another rule after these, without the per-connection-classifier option, which would point to some backup server and catch all connections not caught by the previous rules. So normally, with all three active, the backup rule won't be used.

The trouble is, you get into "it was not made for this" territory very fast. If you don't have a dedicated backup server, you'll have to use one of the three. But what if that one goes down? You can work around it by e.g. adding backup rules for all three servers, and when disabling a PCC rule, you'd also disable the backup rule for the same server. As long as at least one server is active, it will work. But it's still not very good, because everything from the inactive server will go to only one backup, so one of the remaining servers will get twice the traffic of the other. You could continue with ever more complex rule changes, but it's not very practical.
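The backup rule described above could be sketched like this (placeholder addresses again, backup server at 192.168.88.14), placed after the three PCC rules so it only matches connections left over when one of them is disabled:

```
/ip firewall nat
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    action=dst-nat to-addresses=192.168.88.14 \
    comment="backup - only matches when a PCC rule above is disabled"
```

With all three PCC rules enabled, every new connection matches remainder 0, 1 or 2, so this rule sees no traffic.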
The right solution is a dedicated load balancer. You'd unconditionally forward the port to it, and it would take care of the backends. To avoid a single point of failure, you can have two balancers and either switch between them using netwatch or use VRRP to make failover automatic.
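The netwatch variant could be sketched on the MikroTik roughly as follows. The balancer IPs 192.168.88.21/22 and the comment tag are placeholders; when the primary balancer stops answering pings, the script repoints the dst-nat rule at the secondary:

```
/ip firewall nat
add chain=dstnat protocol=tcp dst-port=80 dst-address=203.0.113.10 \
    action=dst-nat to-addresses=192.168.88.21 comment="to-balancer"
/tool netwatch
add host=192.168.88.21 interval=10s \
    down-script="/ip firewall nat set [find comment=\"to-balancer\"] to-addresses=192.168.88.22" \
    up-script="/ip firewall nat set [find comment=\"to-balancer\"] to-addresses=192.168.88.21"
```

With VRRP instead, the two balancers would share a virtual IP and the dst-nat rule would simply point at that VIP, so the router needs no scripting at all.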