I’m trying to set up a large network and want to find the best possible design for it.
Here are the prerequisites:
I have multiple public IP addresses, let’s say subnets 1.1.1.0/24, 2.2.2.0/24 and 3.3.3.0/24.
I need complete isolation between clients, so VLANs are a must.
I also need redundancy, so VRRP is a must as well.
Here’s what we have now:
2x ISPs with BGP full routing tables on two MikroTik CCR1072s.
So the best case is an active-active scenario (or active-backup).
So this is what I want:
The same networks are advertised through BGP on both routers (I have already set this up and it is working).
VRRP, so that if one router fails, the other starts forwarding traffic instead. This is partially set up: since I have multiple subnets, I have created many VRRP interfaces on the internal-facing ports. The problem is that I lose 3 IPs per subnet to VRRP, and VRRP traffic is broadcast on the subnets, where it shows up in the clients’ traffic.
I have not found a proper way to implement VLANs at this scale.
Let’s say I have many clients (200 at the same time); then I need to set up 200 VLAN interfaces, which I cannot create in bulk, so I have to create them one by one. How can I set up VLANs and VRRP across that many VLANs effectively, to get complete isolation?
I hope I’ve been thorough enough with my explanation.
Here are some of the configurations already in place (a sample to help you understand my situation).
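As an illustration, here is a minimal sketch of what one subnet’s VRRP currently looks like; ether2 as the internal-facing port and the specific addresses are assumptions:

```
# per-subnet setup today on the internal-facing port (names and addresses assumed)
/ip address add address=1.1.1.252/24 interface=ether2
# (the second router uses 1.1.1.253/24 on its matching port)
/interface vrrp add name=vrrp-net1 interface=ether2 vrid=1 priority=254
/ip address add address=1.1.1.254/24 interface=vrrp-net1
```

This is where the 3 lost IPs come from: one real address per router plus the shared virtual address, all in the same /24.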
I read in another forum post that, for best results, it’s better to run a single VRRP instance on a direct link between the two routers, and use an up/down script to bring the other interfaces up or down.
You can run VRRP for multiple networks, but it seems you’re running all of the instances on the same underlying interface. You should run it on the layer 3 interfaces that actually forward the traffic; based on your post, these are likely the VLAN interfaces, with the shared IP assigned to each VRRP interface as a /32.
Creating all of this can be scripted, and the configuration can be generated with a short script in the language of your preference.
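For example, a RouterOS script along those lines, as a sketch only: the trunk port (ether2), the VLAN ID range 100-299, the VRID scheme and the per-VLAN 10.0.x.254/32 gateway addressing are all assumptions, and the backup router would run the same loop with a lower priority:

```
# create 200 client VLANs, each with its own VRRP instance and a shared /32 gateway
# (ether2, VLAN IDs 100-299 and the 10.0.x.254 addressing are assumptions)
:for i from=100 to=299 do={
    /interface vlan add name=("vlan" . $i) vlan-id=$i interface=ether2
    /interface vrrp add name=("vrrp" . $i) interface=("vlan" . $i) \
        vrid=($i - 99) priority=254 version=3
    /ip address add address=("10.0." . ($i - 100) . ".254/32") interface=("vrrp" . $i)
}
```

Paste the loop into the terminal on both routers (changing priority on the backup) and VRRP elects a master per VLAN; VRID must stay within 1-255, which the 200-VLAN range above respects.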
I’m running VRRP on the same interface that all customers are assigned to.
VRRP interface ↔ Cisco switch ↔ Proxmox cluster server.
The issue with “this should be the VLAN interfaces with the shared IP assigned to the VRRP interface as a /32” is that I have several /24 prefixes cut down to /25, /26 and /27.
Say two customers are on the same /24 prefix, one in VLAN 100 and one in VLAN 101: how would they communicate with the VRRP gateway? Can the VRRP gateway be a member of multiple VLANs, and if so, how?
You should set up one VRRP per physical interface.
Regarding losing 3 IPs per subnet: not correct, you will lose only 2 IPs on a subnet that is running VRRP on IPv4. Or set up VRRPv3 on IPv6 and don’t lose any IPs.
How is it possible that I have it working with only one IP address? I have .254 configured on both routers as the IP on the VRRP interface, and the physical interface has no IPs configured at all.
The VRRP parent interfaces also don’t need to match the subnet of the IPs attached to the VRRP interfaces. Documentation and training will always show them in the same subnet, but you can run a /30 or even a /31 on the VRRP parent interfaces.
Clients often want redundant links and infrastructure, and therefore assume we need to grow routing subnets to /29. The following sample topology shows a simulation where a single IP is shared between two routers, using 10.255.255.0/29 on the VRRP parent interfaces (it could also be a /30):
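A sketch of that topology on the first router; the interface name (ether3) and the 1.1.1.254 shared client gateway are assumptions:

```
# router 1 (master): small transfer subnet on the VRRP parent interface,
# shared client gateway attached to the VRRP interface as a /32
/ip address add address=10.255.255.1/29 interface=ether3
/interface vrrp add name=vrrp1 interface=ether3 vrid=1 priority=254
/ip address add address=1.1.1.254/32 interface=vrrp1
# router 2 (backup) mirrors this with 10.255.255.2/29 and a lower priority
```

Only the two transfer-subnet addresses are consumed; the shared client IP lives on the VRRP interface and follows whichever router is master.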
Operat0r:
You may want to search these forums for the MikroTik high-availability script solution, where a single VRRP interface is used to track router master status and the configuration is automatically transferred between the routers. It generally requires switches to provide uplinks to both routers via VLANs, but we have numerous routers set up to work like this with excellent results.
It is not trivial, as you really should read the resulting scripts to understand how they work.