"In my setup I have Matrox X Series x64 Bit hardware which comes with inbuilt 5 Ports of 10G each."
I only know Matrox as a graphic card vendor, but x64 and 5×10 Gbit/s interfaces indicate that packet throughput is not an issue.
"Load balancing have a issue that it break https connections."
This gives me two bits of information: that the actual issue is not load balancing (or, more exactly, load distribution) as such but how exactly it is done, and that you get a separate public subnet or even just a separate public address on each of the three links connected to ether1 .. ether3.
So following the principle "we don't deliver what you ordered, we deliver what you actually wanted", I dare to translate the requirement to "avoid load balancing" into a requirement to "make sure that all connections of any given PPPoE client will always get src-nated to the same public IP, except if the respective uplink is broken".
"Splitting the pppoe clients among the 3 uplinks can be manually or automatically what ever Mikrotik Guru's suggest."
Laziness is the mother of evolution, so automatic distribution would be my personal preference, as it is maintenance-free. The drawback of manual distribution is that you have to manually choose an uplink for each client; the positive is that you can manually choose an uplink for each client, which allows you to redistribute the clients later if the uplinks are not loaded evenly.
So
supposing you've got three uplinks with three individual /30 public subnets (what a waste of valuable resources!), you would create six routing tables with default routes; three of them use all three gateways each with different priority, and three of them use just one gateway each.
/ip route
add gateway=uplink.1.gw.ip routing-mark=prefer-1 check-gateway=ping
add gateway=uplink.2.gw.ip routing-mark=prefer-1 distance=2 check-gateway=ping
add gateway=uplink.3.gw.ip routing-mark=prefer-1 distance=3 check-gateway=ping
add gateway=uplink.2.gw.ip routing-mark=prefer-2 check-gateway=ping
add gateway=uplink.3.gw.ip routing-mark=prefer-2 distance=2 check-gateway=ping
add gateway=uplink.1.gw.ip routing-mark=prefer-2 distance=3 check-gateway=ping
add gateway=uplink.3.gw.ip routing-mark=prefer-3 check-gateway=ping
add gateway=uplink.1.gw.ip routing-mark=prefer-3 distance=2 check-gateway=ping
add gateway=uplink.2.gw.ip routing-mark=prefer-3 distance=3 check-gateway=ping
add gateway=uplink.1.gw.ip routing-mark=use-1
add gateway=uplink.2.gw.ip routing-mark=use-2
add gateway=uplink.3.gw.ip routing-mark=use-3
The check-gateway=ping on the prefer-X routes makes a route inactive when its gateway stops responding, so the fallback to the next-distance route actually happens even if the interface itself stays up.
The distribution of the clients' connections is done using mangle rules.
/interface list add name=WAN
/interface list member
add list=WAN interface=uplink-1
add list=WAN interface=uplink-2
add list=WAN interface=uplink-3
/ip firewall mangle
#accept mid-connection (i.e. already connection-marked) download packets straight away
add chain=prerouting in-interface-list=WAN connection-mark=!no-mark action=accept
#send mid-connection (i.e. already connection-marked) upload packets through their previously assigned uplink
add chain=prerouting connection-mark=use-1 action=mark-routing new-routing-mark=use-1 passthrough=no
add chain=prerouting connection-mark=use-2 action=mark-routing new-routing-mark=use-2 passthrough=no
add chain=prerouting connection-mark=use-3 action=mark-routing new-routing-mark=use-3 passthrough=no
#handling the first response packet - set the connection-mark depending on the uplink through which the connection has actually been established
add chain=prerouting in-interface=uplink-1 action=mark-connection new-connection-mark=use-1 passthrough=no
add chain=prerouting in-interface=uplink-2 action=mark-connection new-connection-mark=use-2 passthrough=no
add chain=prerouting in-interface=uplink-3 action=mark-connection new-connection-mark=use-3 passthrough=no
#handling the first request packet - prefer an uplink depending on the IP address of the customer; note the negation, as this packet arrives from a client, not from a WAN interface
add chain=prerouting in-interface-list=!WAN per-connection-classifier=src-address:3/0 action=mark-routing new-routing-mark=prefer-1 passthrough=no
add chain=prerouting in-interface-list=!WAN per-connection-classifier=src-address:3/1 action=mark-routing new-routing-mark=prefer-2 passthrough=no
add chain=prerouting in-interface-list=!WAN per-connection-classifier=src-address:3/2 action=mark-routing new-routing-mark=prefer-3 passthrough=no
The whole idea here is that for the first request packet from any given customer, the same one of the last three rules above always matches, because the per-connection-classifier calculates the hash only from the customer's IP address. So the first packet is always sent to the same routing table, where the route with distance=1 is used unless that path is down; the other two can be used only while the preferred path is down.
The first response packet (the second one of the connection) arrives via one of the uplinks; this indicates which uplink has actually been chosen for the connection, so we need to remember this information in order to tie that connection to that uplink for the rest of its existence, even if the uplink eventually breaks down. This is the job of the three
action=mark-connection rules.
If you choose manual linking of a client to an uplink, the easiest way in my view is to
- create three /ip firewall address-list items, prefer-1, prefer-2, prefer-3, and three /ppp profile items, each referring to one of these three address lists
- let each client prefer a particular uplink by setting the corresponding /ppp profile item in their /ppp secret row
- replace the per-connection-classifier match conditions in the three rules assigning the routing-mark to the initial packets with src-address-list=prefer-X match conditions.
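A minimal sketch of the manual variant; the profile names (via-1 etc.) and the client credentials are just examples. The address-list parameter of the /ppp profile adds the address assigned to the client to the given firewall address list automatically, so you don't maintain the lists by hand:
/ppp profile
add name=via-1 address-list=prefer-1
add name=via-2 address-list=prefer-2
add name=via-3 address-list=prefer-3
/ppp secret
add name=client-a password=example service=pppoe profile=via-1
#and in the mangle rules, instead of the three per-connection-classifier ones:
/ip firewall mangle
add chain=prerouting in-interface-list=!WAN src-address-list=prefer-1 action=mark-routing new-routing-mark=prefer-1 passthrough=no
add chain=prerouting in-interface-list=!WAN src-address-list=prefer-2 action=mark-routing new-routing-mark=prefer-2 passthrough=no
add chain=prerouting in-interface-list=!WAN src-address-list=prefer-3 action=mark-routing new-routing-mark=prefer-3 passthrough=no
To move a client to another uplink later, you just change the profile in its /ppp secret row.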
In both cases, the
src-nat or
masquerade rules match on the respective
out-interface names.
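For completeness, such NAT rules could look as follows; the to-addresses values are placeholders for the public address on each uplink (with action=masquerade you'd simply omit to-addresses):
/ip firewall nat
add chain=srcnat out-interface=uplink-1 action=src-nat to-addresses=uplink.1.public.ip
add chain=srcnat out-interface=uplink-2 action=src-nat to-addresses=uplink.2.public.ip
add chain=srcnat out-interface=uplink-3 action=src-nat to-addresses=uplink.3.public.ip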
Since you have a single upstream ISP, it would be much better than any of the above if the ISP didn't use public subnets directly on the uplinks, but instead handed all 12 public addresses (or even just 8, a single /29) over to you completely and used a private interconnection subnet on each uplink to route traffic towards these public IPs. In that case, you'd be able to use ECMP to distribute the load among all three uplinks and still NAT any given customer to the same public IP. If one of the uplinks failed, all the existing connections would seamlessly keep using the remaining two. The ISP would have to do the same ECMP of course.
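In that scenario, the whole mangle exercise above would shrink to a single ECMP default route; the gateway addresses below are placeholders for the ISP's side of the three private interconnection subnets:
/ip route
add dst-address=0.0.0.0/0 gateway=interconnect.1.gw,interconnect.2.gw,interconnect.3.gw check-gateway=ping
RouterOS picks one of the listed gateways per source/destination address combination, which spreads the load while keeping any given flow on a single uplink; and since the public addresses are routed to you rather than bound to a particular link, the same src-nat works regardless of which uplink carries the packets.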