CRS317 in HA environment

Hi

I would like to create a network like this. Is the part with LAGG and LACP possible with the CRS317-1G-16S+RM (I don’t think this one is stackable?)?
If not, what is a possible solution?

Thanks

Hi

Something went wrong with the image. Here is the good one:


Hypervisor as in a virtual port channel or a hot standby?

I don’t understand you well, Dude2048. The hypervisors are bonded via LACP (802.3ad). Or an active-passive setup is also possible.

The problem is the redundancy across the switches. Normally I would stack the two switches and create an LACP LAG with one member port on each switch, or I would need something like Multi-Chassis Trunking.
What is the best way to create this redundancy with CRS317s?
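For reference, the "normal" setup described above could be sketched like this on a Linux hypervisor with iproute2 (interface names eth0/eth1 are hypothetical; this only works when both switch ports terminate in a single LACP system, i.e. a stack or MC-LAG pair, which the CRS317 does not offer):

```shell
# Sketch: 802.3ad (LACP) bond on the hypervisor side.
# Requires both switch ports to present one LACP partner system.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
# Slaves must be down before being enslaved
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```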

Stacking switches is not supported, and you can’t bond across them with LACP like you want. Maybe something with VRRP.

I don’t care about stacking, but MC-LAG would be great.

VRRP isn’t a real option for this scenario, I think. I need the 10G switching capacity, not routing.
MC-LAG would indeed be a great addition.

But for now, what would be my best option in this situation?
Is it an option to bond two server ports in active-passive and connect each server port to a port on a different switch? Should I then also create a trunk between the two switches?
Maybe other ideas?
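The active-passive idea above could look like this on a Linux hypervisor (a sketch only; NIC names eth0/eth1 are hypothetical). active-backup needs no switch-side support, so each NIC can go to a different, independent CRS317:

```shell
# Sketch: active-backup bond, one NIC per switch.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
# Optional: prefer eth0 whenever its link is up
ip link set bond0 type bond primary eth0
```

A trunk between the two switches is still needed with this layout, so that two hypervisors whose active NICs happen to land on different switches can reach each other.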

Haven’t tried this before with a hypervisor, but you could probably:

  • Bridge the two 10G interfaces of the hypervisor
  • Configure RSTP

This way you would have redundancy, but no throughput aggregation.
The same applies for firewalls.
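A rough sketch of that bridge idea on a Linux hypervisor (hypothetical NIC names eth0/eth1; note the kernel bridge only speaks classic STP by itself, so for true RSTP you would run something like mstpd on top of the bridge):

```shell
# Sketch: bridge both 10G NICs, enable spanning tree so the
# redundant path is blocked instead of forming a loop.
ip link add br0 type bridge stp_state 1
ip link set eth0 master br0
ip link set eth1 master br0
ip link set br0 up
ip link set eth0 up && ip link set eth1 up
```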

The main goal of the setup is redundancy, not throughput.
And you mean bridging the hypervisor NICs in active-backup? With a trunk between the two switches, because the hypervisors have to talk to each other even when one link of a hypervisor is down?

No, I meant something like this: https://support.sonus.net/display/DSCDOC150/Creating+a+Virtual+Switch+in+VMware+ESXi
All hypervisors I know of have built-in redundancy capabilities.
Hyper-V has dynamic teaming.

I came here searching for exactly the same need with the same hardware! I read that you can achieve redundancy and traffic aggregation using bonding mode 6 (balance-alb):

https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.l0wlcb00/l0wlcb00_bondingmodes.html

This setup is said to work with two “dumb switches”. What is unclear to me is the behavior when one switch or link fails: timeout/recovery of active IP sessions, whether the switches have to be linked together, etc. None of the websites I found on this topic provided enough behavioral details or practical examples…

My use case is for a GlusterFS cluster storing KVM virtual machine images. Has anybody come up with a working setup? If not, I’ll configure a test bench for this in my lab.
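For anyone testing this, a mode 6 setup could be sketched like this on a Linux/KVM host (NIC names eth0/eth1 are hypothetical). balance-alb requires no switch support; it balances incoming traffic by rewriting ARP replies so different peers learn different slave MACs:

```shell
# Sketch: balance-alb (bonding mode 6) across two independent switches.
ip link add bond0 type bond mode balance-alb miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```

Failover detection is governed by miimon (here 100 ms link polling); after a failure the bond sends gratuitous ARPs so peers re-learn the surviving MAC, which is what lets active sessions recover. Whether the two switches need a direct link depends on the rest of the topology, not on the bond mode itself.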

Would be very interested to learn about your progress / results with this.

When thinking about bonding with MikroTik, one should study this document in detail.

I went with fs.com switches, which have MLAG support.