One thing to consider which may affect your setup is VMware licensing. If you are using the vSphere/ESXi Evaluation license, you'll lose functionality and full feature/resource utilization after 60 days. If that's the case, it's best to just scale back to the ESXi standard vSwitch (i.e., stick to the base ESXi feature set) rather than the vSphere Distributed Switch features.
Below may be irrelevant:
You can create VLAN interfaces and attach them to one physical Ethernet/SFP+ router interface (i.e., your uplink to the ESXi port), which automatically changes the interface encapsulation to an 802.1Q trunk. Then put IP addresses on the VLAN interfaces. On the ESXi side, create a standard vSwitch, add a port group for each VLAN you want to tag, and then assign each port group to the respective VM(s).
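On the MikroTik side, that's roughly this in RouterOS (interface name, VLAN IDs, and subnets below are made-up placeholders; swap in your own):

```shell
# Create one VLAN interface per VLAN on the trunk port facing ESXi
# (sfp-sfpplus1 is a placeholder for your actual uplink interface)
/interface vlan add name=vlan10 vlan-id=10 interface=sfp-sfpplus1
/interface vlan add name=vlan20 vlan-id=20 interface=sfp-sfpplus1

# Give each VLAN interface a gateway address within its subnet
/ip address add address=10.0.10.1/24 interface=vlan10
/ip address add address=10.0.20.1/24 interface=vlan20
```

With the gateway addresses on the VLAN interfaces, the router handles inter-VLAN routing automatically; ESXi only needs to tag the traffic.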
I'm using an Enterprise Plus license, so I have indefinite access to Distributed Switch and all of its features. What I don't have is an NSX license. I currently have 8 VLANs to account for, and may add more in the future.
I'm not sure if I understood your last suggestion. Let me know if I missed the target (likely):
- create a new standard vSwitch
- assign one or more physical NICs (uplinks) to the vSwitch
- create one portgroup for each VLAN, on said vSwitch
- assign individual VLAN IDs to each portgroup
- assign one or more vmkernel NICs to each portgroup
- assign IPv4 addresses (within each VLAN's intended subnet) to each vmkernel NIC
- migrate all VMs to the newly tagged portgroups
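In esxcli terms, I believe the vSwitch/port-group part of those steps would look something like this (vSwitch, port group, and uplink names are placeholders, and my actual VLAN IDs differ):

```shell
# Create a standard vSwitch and attach a physical NIC as its uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# One port group per VLAN, each tagged with its VLAN ID (this is the VST part)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=PG-VLAN10
esxcli network vswitch standard portgroup set --portgroup-name=PG-VLAN10 --vlan-id=10
```

Then I'd repeat the last two commands for each of my 8 VLANs before migrating the VMs over.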
If the above process is correct, I've attempted it before. It relies on VST, if I recall correctly (where the virtual switch handles VLAN tagging, as opposed to the VMs or another entity). The two main issues I ran into with that plan were that not all of my VMs were able to migrate successfully, and any that did make it had no Internet connectivity after the migration, which destroyed their ability to download vital security patches. I'll ignore the first issue for the time being, since that would probably require a separate trip to the VMware Community forums to resolve. Proper routing (outside of each VLAN) is one thing that may have to be done by a separate entity. There are three main options that I know of for this role (of inter-VLAN router):
- ESXi: dedicated router VM (RouterOS, VyOS, etc.)
- NSX-T: Gateway/Logical Router
- Opaque: Routing defined on dedicated hardware (MikroTik)
The first option requires yet another VM, which in turn needs compute resources from the hypervisor host and will be present on all port groups/VLANs. If the hypervisor host ever comes under resource contention, I'd think network performance could take a hit, leaving less headroom if something goes wrong. The second option requires a paid subscription for NSX-T, which is unrealistic. The third option would offload the task to a physical router/switch (which I already own).
Here are a few questions that I have as a result of this:
- What's handling DHCP when I define VLANs in vSphere? Is it the VLAN router I mentioned above?
- Which will perform better in most scenarios - a physical router/switch or a vSwitch?
- Which will perform more consistently if my ESXi host happens to run low on system resources?
- What are the benefits of using a vSwitch (necessitates VST) in this scenario?
- If I end up using the third option, then would I be better off defining the VLANs on the physical router/switch as well?
Sorry if my questions seem dumb or obvious. These are the things that came to mind when reviewing all of the data so far. I've been at this issue for at least one and a half months, and haven't had much luck on the vSphere VST side so far.