Tag: vmware

  • Flash crashing with vSphere Client, fix for Mac


    Looks like a new Adobe Flash (a.k.a. Shockwave Flash) update caused widespread panic among users of the Flash-based vSphere Web Client 6.x, leaving them with a “Shockwave Flash has Crashed” message and no vSphere Client. The immediate fix is to downgrade Adobe Flash.

    VMware has released an official KB article, KB 2151945, but it provides instructions for Windows users only.


    Here’s how to fix it for Mac users

    1. Go to Adobe’s Archived Adobe Flash Player versions page.
    2. Scroll down to Flash Player Archives and download Flash Player 27.0.0.159 (released 10/10/2017) or use this direct link.
    3. Once the package is downloaded it should extract automagically. If it does not, extract it.
    4. Open the folder fp_27 and sub-folder 27_0_r0_159.
    5. Double click uninstall_flashplayer27_0r0_159_mac.dmg to mount the image, and run the Adobe Flash Player Uninstaller. This should uninstall the existing Flash Player on your computer.
    6. You will need to close your browser(s) at this point. Bookmark this page for reference later.

    For users of Firefox

    1. After uninstalling, double click flashplayer27_0r0_159_mac.dmg and then (re)Install Adobe Flash Player. Your browser(s) should still remain closed at this point.
    2. You will be prompted to select how you want Adobe to update Flash. Be sure to select Notify me to install updates.
    3. Unmount the two disk images you mounted earlier.

    For users of Chrome

    1. Delete the current flash version. Open Terminal, and run these commands:
      cd ~/Library/Application\ Support/Google/Chrome/PepperFlash/
      rm -rf 27.0.0.170
    2. Double click flashplayer27_0r0_159_macpep.dmg and then (re)Install Adobe Flash Player.
    3. You will be prompted to select how you want Adobe to update Flash. Be sure to select Notify me to install updates.
    4. Unmount the two disk images you mounted earlier.
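    The exact PepperFlash version directory may differ from machine to machine, so before running the rm command above it helps to check what is actually installed. Here is a small Python sketch that lists the version folders, assuming the default Chrome profile location on macOS (the function name is mine, not part of any official tooling):

```python
import pathlib

def installed_pepperflash_versions(profile_dir: str) -> list[str]:
    """List PepperFlash version folders under a Chrome profile directory,
    newest first. Assumes folder names look like '27.0.0.170'."""
    base = pathlib.Path(profile_dir).expanduser() / "PepperFlash"
    versions = [p.name for p in base.iterdir() if p.is_dir()]
    # Sort numerically per version component, newest first
    return sorted(versions,
                  key=lambda v: tuple(int(x) for x in v.split(".")),
                  reverse=True)

# Default macOS location; adjust if your Chrome profile lives elsewhere:
# installed_pepperflash_versions("~/Library/Application Support/Google/Chrome")
```

    Whatever version the script reports is the folder to delete in step 1.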

    You should now be able to get back to work in VMware vSphere Client. Drop me a note if the instructions do not work for you, I’ll be glad to update the content.

  • Why are multiple subnets needed in vSphere ESXi?


    Most admins know that having separate subnets for management traffic, vMotion, storage, etc. is a best practice, but some may not understand why.

    Routing 101: Two paths to the same place

    Say, for example, you have a laptop connected to both Wi-Fi and wired LAN at home or in the office (on the same subnet). Which connection is used when you browse the Internet or even print to a local network printer?

    The answer is the first connection you hooked up, or, more technically, the route with the higher order of preference. When two NICs are connected to the same subnet, your local routing table will have two entries for the directly connected subnet, e.g.

    192.168.1.0/24 via en1 metric 100
    192.168.1.0/24 via en0 metric 100

    When a packet is sent (e.g. to your gateway at 192.168.1.1), the operating system looks up the route table and picks the first match, which in this instance would be en1. Of course, if the two routes have different metrics, the route with the smaller metric is preferred.

    But if you have both NICs connected to different subnets, e.g.

    192.168.1.0/24 via en1 metric 100
    172.16.1.0/24 via en0 metric 100

    Then it becomes clear which path to take when you try to get to your printer at 172.16.1.15 or your NAS at 192.168.1.100.

    The same thing happens when you have two (or more) vmkernel NICs on the same subnet. Separating the subnets will ensure that the desired traffic takes the correct path out.
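    The lookup logic described above can be sketched in a few lines of Python. This is a toy model of how a host picks between overlapping routes, not how ESXi (or any real OS) implements it:

```python
import ipaddress

# Toy routing table: (destination network, interface, metric)
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "en1", 100),
    (ipaddress.ip_network("192.168.1.0/24"), "en0", 100),
]

def pick_route(dest, table):
    """Longest-prefix match first, then lowest metric, then table order --
    roughly the order of preference a host applies."""
    dest_ip = ipaddress.ip_address(dest)
    candidates = [r for r in table if dest_ip in r[0]]
    # Longest prefix wins; among equals, smallest metric;
    # Python's stable sort preserves table order for full ties.
    candidates.sort(key=lambda r: (-r[0].prefixlen, r[2]))
    return candidates[0][1] if candidates else None

pick_route("192.168.1.1", routes)  # both routes match; en1 wins by table order

# With the NICs on different subnets there is exactly one match:
distinct = [
    (ipaddress.ip_network("192.168.1.0/24"), "en1", 100),
    (ipaddress.ip_network("172.16.1.0/24"), "en0", 100),
]
pick_route("172.16.1.15", distinct)  # only en0's subnet matches
```

    With the same-subnet table the answer depends on ordering, which is exactly the ambiguity that separate subnets remove.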

    Interface binding

    Some may wonder why interface binding can’t be used, similar to running ping -I <intf>? The answer is that it can! Interface binding is used for Multi-NIC vMotion (5.1 and newer) and for the Software iSCSI initiator, where iSCSI multipathing requires two or more vmknics within the same subnet. But this works only with vMotion, the Software iSCSI initiator, or other specific ESXi services designed for NIC binding.
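    A rough analogue of this binding, in plain Python, is pinning a socket to a specific source address before sending. This is only an illustration of the concept (ESXi vmknic binding is configured through the vSphere UI or esxcli, not sockets):

```python
import socket

def bound_udp_socket(source_ip):
    """Create a UDP socket pinned to a specific local address, so traffic
    leaves with that interface's IP -- a rough analogue of ping -I."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((source_ip, 0))  # port 0 = let the OS pick an ephemeral port
    return sock

# Using loopback so the example runs anywhere:
s = bound_udp_socket("127.0.0.1")
s.getsockname()[0]  # -> '127.0.0.1'
```

    Without such an explicit bind, the OS falls back on the routing-table lookup described earlier, which is why only the services built for NIC binding can safely share a subnet.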

    To allow NIC binding to work, the general requirement is that only one active physical NIC can be present in the NIC teaming configuration, e.g.

    // vSwitch setup
    vSwitch0 = eth0, eth1
    vSwitch1 = eth2, eth3
    vSwitch2 = eth4, eth5
    
    // No port binding
    vmk0 management 192.168.1.11/24 via vSwitch0 (active: eth0, eth1)
    vmk1 iscsi-hb 172.16.1.11/24 via vSwitch1 (active: eth2, eth3)
    
    // Port binding services
    vmk2 iscsi-1 172.16.1.12/24 via vSwitch1 (active: eth2, unused: eth3)
    vmk3 iscsi-2 172.16.1.13/24 via vSwitch1 (active: eth3, unused: eth2)
    vmk4 vmotion-1 172.16.2.11/24 via vSwitch2 (active: eth4, standby: eth5)
    vmk5 vmotion-2 172.16.2.12/24 via vSwitch2 (active: eth5, standby: eth4)

    vSphere 5.1 and iSCSI heartbeat

    iSCSI heartbeat, which uses a regular ICMP ping, did not bind to a specific interface prior to vSphere 5.1. Back in the good old days, it was a best practice to create three vmknics and leave the vmknic with the lowest index number for iSCSI heartbeat to give it routing priority. In vSphere 5.1 and later, VMware addressed this and made iSCSI heartbeat bind to an interface, but there have been reports of it not working as intended.

    vSphere 6.0 and TCP/IP Stacks

    VMware introduced independent routing tables (known as TCP/IP stacks) in vSphere 5.5, but they were cumbersome to configure via CLI. In vSphere 6.0 three different TCP/IP stacks are available by default, so that Management, vMotion and Provisioning (cloning, snapshots, etc.) traffic can be configured to route differently. For the Cisco folks, this is easily explained as VRF. This allows vMotion and Provisioning traffic to be routed. Although not usually needed, vMotion routing will be required if you want Long Distance vMotion across two different (routed) subnets.