Most people know that using different subnets for management traffic, vMotion, storage, and so on is a best practice, but some may not understand why.
Routing 101: Two paths to the same place
Say, for example, you have a laptop that is connected to both Wi-Fi and the wired LAN at home or in the office (on the same subnet). Which connection is used when you browse the Internet or print to a local network printer?
The answer is the first connection you hooked up, or, more technically correct, the route with the higher order of preference. When two NICs are connected to the same subnet, your local routing table will have two entries for the directly connected subnet, e.g.
192.168.1.0/24 via en1 metric 100
192.168.1.0/24 via en0 metric 100
When a packet is sent (e.g. to your gateway at 192.168.1.1), the operating system looks up the routing table and picks the first match, which in this instance would be en1. Of course, if the two routes have different metrics, the route with the smaller metric takes preference.
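You can see this for yourself by inspecting the local routing table. A minimal sketch, assuming a macOS or Linux laptop (interface names and metrics will differ on your machine):

# Show the local routing table (macOS and Linux)
netstat -rn

# Linux alternative that shows metrics explicitly
ip route show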
But if you have both NICs connected to different subnets, e.g.
192.168.1.0/24 via en1 metric 100
172.16.1.0/24 via en0 metric 100
Then it becomes clear which path to take when you try to reach your printer at 172.16.1.15 or your NAS at 192.168.1.100.
The same thing happens on an ESXi host when you have two (or more) vmkernel NICs on the same subnet. Separating the subnets ensures that the desired traffic takes the correct path out.
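A quick way to verify this on an ESXi host is sketched below; the vmk number and address are hypothetical, so substitute your own:

# List the VMkernel routing table
esxcli network ip route ipv4 list

# Test reachability from a specific vmkernel interface
vmkping -I vmk1 172.16.1.15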
Interface binding
Some may wonder whether interface binding can be used instead, similar to running ping -I <intf>. The answer is yes! Interface binding is used for Multi-NIC vMotion (5.1 and newer) and for the software iSCSI initiator, where iSCSI multipathing requires two or more different vmknics within the same subnet. But this works only with vMotion, the software iSCSI initiator, or other ESXi services specifically designed for NIC binding.
For NIC binding to work, the general requirement is that only one active physical NIC is present in the NIC teaming configuration, e.g.
// vSwitch setup
vSwitch0 = eth0, eth1
vSwitch1 = eth2, eth3
vSwitch2 = eth4, eth5

// No port binding
vmk0  management  192.168.1.11/24  via vSwitch0 (active: eth0, eth1)
vmk1  iscsi-hb    172.16.1.11/24   via vSwitch1 (active: eth2, eth3)

// Port binding services
vmk2  iscsi-1    172.16.1.12/24  via vSwitch1 (active: eth2, unused: eth3)
vmk3  iscsi-2    172.16.1.13/24  via vSwitch1 (active: eth3, unused: eth2)
vmk4  vmotion-1  172.16.2.11/24  via vSwitch2 (active: eth4, standby: eth5)
vmk5  vmotion-2  172.16.2.12/24  via vSwitch2 (active: eth5, standby: eth4)
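With the vmknics and teaming overrides laid out as above, the iSCSI side of the binding could look something like the sketch below; the adapter name vmhba33 is an assumption, so check yours first:

# Find the software iSCSI adapter name (e.g. vmhba33)
esxcli iscsi adapter list

# Bind the two dedicated iSCSI vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

# Verify the port bindings
esxcli iscsi networkportal list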
vSphere 5.1 and iSCSI heartbeat
The iSCSI heartbeat, which uses a regular ICMP ping, did not bind to a specific interface prior to vSphere 5.1. Back in the good old days, it was a best practice to create three vmknics and reserve the vmknic with the lowest index number for the iSCSI heartbeat to give it routing priority. VMware addressed this in vSphere 5.1 and later by making the iSCSI heartbeat bind to an interface, but there have been reports of it not working as intended.
vSphere 6.0 and TCP/IP Stacks
VMware introduced independent routing tables (known as TCP/IP stacks) in vSphere 5.5, but they were cumbersome to configure via the CLI. In vSphere 6.0, three different TCP/IP stacks are available by default so that Management, vMotion and Provisioning (cloning, snapshots, etc.) traffic can be configured to route differently. For the Cisco folks, this is easily explained as VRF. This introduction allows vMotion and Provisioning traffic to be routed. Although not usually needed, vMotion routing will be required if you want Long Distance vMotion to go across two different (routed) subnets.
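As a rough sketch of what this looks like from the CLI on a 6.0 host (the portgroup name vMotion-PG and the exact option spellings are assumptions; check the esxcli help on your build):

# List the TCP/IP stacks available on the host
esxcli network ip netstack list

# Create a vmkernel interface on the dedicated vMotion stack
esxcli network ip interface add --interface-name=vmk4 --portgroup-name=vMotion-PG --netstack=vmotion

# View the routing table of the vMotion stack only
esxcli network ip route ipv4 list --netstack=vmotion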