Thursday, August 20, 2015

Nexus 1000v - will the network fail if the VSMs fail?

At my current client there has been some concern regarding the robustness of the network in relation to the Nexus 1000v. It's a vSphere 5.5 environment running on Vblock with Nexus 1000v switches (bundled with the Vblock).

The question was whether the network on the ESXi hosts is dependent on the two management 1KV VMs, and whether the network will fail entirely if these two VMs are down.

Furthermore, there was a question of whether all ESXi traffic flows through these two VMs; the management VMs were being perceived as actual switches.

The answer is probably pretty straightforward for most, but I decided to verify anyway.

Two notes first:

1) By adding Nexus 1000v to your environment you may gain some benefits, but you also add complexity. Seen through vCenter, a plain virtual distributed switch (vDS) is simply easier to understand and manage. Some network admins may disagree, of course.

2) From a bit of Googling and also from general experience, it doesn't seem like that many people are actually using the 1KVs. There is not much info to be found online, and what is there seems a bit outdated.

That said, let's get to it:

The Nexus 1000v infrastructure consists of two parts.

1) Virtual Supervisor Module (VSM). This is a small virtual appliance for management and configuration. You can have one or two. With two VMs, they run in active/passive mode.

2) Virtual Ethernet Module (VEM). These modules are installed/pushed to each of the ESXi hosts in the environment.

All configuration of networks/VLANs is done in the VSMs (NX-OS interface) and then pushed to the VEMs. From vCenter it looks like a regular vDS, but you can see in the description that it is a 1KV, see below:

Even if both VSMs should fail, the network will continue to work as before on all ESXi hosts. The network state and configuration is kept locally on each ESXi host in the VEM. However, control is lost and no changes can be made until the VSMs are up and synced again.

As for the other question: no, the VM traffic does not flow through the VSMs; they are only for management. The VM Ethernet traffic flows through the pNICs in the ESXi hosts and on to the network infrastructure, the same as with standard virtual switches and vDSes. This means that the VSMs cannot be a bandwidth bottleneck or a single point of failure in that sense.
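As a quick sanity check, the VEM can be inspected directly on an ESXi host. A minimal sketch, assuming shell access to the host and the Nexus 1000v VEM VIB installed:

```shell
# Verify that the VEM is loaded and running on this host
vem status -v

# List the local ports the VEM is switching
# (VM traffic stays on the host's pNICs, not the VSMs)
vemcmd show port
```

On the active VSM, `show module` should list each host's VEM as a module. If both VSMs are down, these host-local commands still work and traffic keeps flowing.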

For documentation: see this video from 14:45 to 15:30 (about 45 seconds).

Below are two diagrams that show the overview of VSM and VEM:

Wednesday, July 22, 2015

Cold Migration of Shared Disks (Oracle RAC and NFS clusters)

Certain applications use shared disks for clustering features (e.g. Oracle RAC and NFS clusters). These can be vMotion'ed between hosts (for RAC, be careful with monster VMs and high loads, as the cluster timeout has a low threshold that can be reached during cut-over), but svMotion is not possible. Migration of the disks has to be done while both virtual machines (or all VMs in the cluster) are shut down (cold migration). The method is: shut down both the primary and secondary node; remove the shared disk to be migrated from the secondary node (without deleting it); migrate the disk to the new LUN from the primary VM; and re-add the disk to the secondary node after the migration has completed (including configuration of the multi-writer flag for the disk). After this, both VMs can be booted.

Note: These are not RDM disks but regular vmdk's with the multi-writer flag set.

Steps to migrate shared disks (Oracle RAC and NFS)

  • Identify the two VMs that share disks, note the VM names
  • Identify the disk(s) that should be migrated to the new LUN, note the SCSI ID for each disk (e.g. SCSI (1:0))
  • Note (mostly for Oracle RAC) whether the disk is configured in Independent and persistent mode
  • Ensure a maintenance/blackout window is in place
  • Shut down both VMs
  • For secondary VM, go to Edit Settings -> Options -> General -> Configuration Parameters (see screen dump below) and verify if the “multi-writer” flag is set for the disks to be moved
  • While both VMs are shut down, remove the disk(s) from the secondary VM (without deleting it)
  • From primary VM, right click and choose Migrate. Migrate the disk(s) to the new LUN
  • Wait for the process to finish
  • On secondary VM, go to Edit Settings -> Hardware -> Add. Select Hard disk and Use existing hard disk. Browse for the disk in the new location and click Add. Make sure the same SCSI ID is used as before
  • For secondary VM, go to Edit Settings -> Options -> General -> Configuration Parameters -> Add row and add the multi-writer flag to each of the re-added disks.
  • (If disk is/was configured in Independent and persistent mode, go to Edit settings -> Hardware -> Mark the disk -> under Mode, check the Independent check-box and verify that the Persistent option is set)
  • Boot the primary VM, boot the secondary VM
  • Ensure that the application is functioning as expected. Done
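For reference, the multi-writer flag set via Configuration Parameters in the steps above ends up as .vmx entries like these (the SCSI ID and file name below are examples; match them to what was noted before the disk was removed):

```
scsi1:0.fileName = "shared-disk-1.vmdk"
scsi1:0.sharing = "multi-writer"
```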

Monday, July 13, 2015

Permanent Device Loss (PDL) and HA on vSphere 5.5

At my current client we are doing a number of non-functional requirement (NFR) tests involving storage. One of them is about removing a LUN to see if HA kicks in.

The setup is an EMC VPLEX Metro stretched cluster (or cross-cluster) configured in Uniform mode. So it's an active-active setup with site-replicated storage and 50% of the hosts on each site, running vSphere 5.5.

What complicates things in a stretched Metro cluster in Uniform mode is that even though storage is replicated between sites, the ESXi hosts only see storage on their own site. So if you kill a LUN on site A in VPLEX, the hosts in site A will not be able to see the LUNs on site B, and HA is therefore required.

My initial thought was that cutting/killing a LUN on the VPLEX would make the VMs on that LUN freeze indefinitely until storage becomes available again. This is what happened earlier with vSphere 4.x and it was a real pain for the VMware admins (an all-paths-down (APD) scenario).

However, as of vSphere 5.0 U1 and later, HA can handle a Permanent Device Loss (PDL), where a LUN becomes unavailable while the ESXi hosts are still running and the array is still able to communicate with the hosts (if the array is down, you have an APD and HA will not kick in).

In vSphere 5.5, HA will handle PDL automatically if you configure two advanced settings which are non-default. Go to ESXi host -> Configuration -> Advanced Settings and set the following:

  • VMkernel.Boot.terminateVMOnPDL = yes
  • Disk.AutoremoveOnPDL = 0

This has been documented well by Duncan Epping on Yellow-Bricks, and there is a bit more info here for 5.0 U1.

See screen dump below for settings:

A reboot of the ESXi host is required for the two changes to take effect.
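The same two settings can also be applied from the ESXi shell instead of the client; a sketch, assuming ESXi 5.5 (verify the option paths against your build):

```shell
# Kernel boot setting: terminate VMs on PDL so HA can restart them elsewhere
esxcli system settings kernel set -s terminateVMOnPDL -v TRUE

# Keep PDL devices listed instead of auto-removing them
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0

# Verify both values
esxcli system settings kernel list -o terminateVMOnPDL
esxcli system settings advanced list -o /Disk/AutoremoveOnPDL
```

As with the GUI method, a reboot is required for the kernel setting to take effect.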

Thursday, April 2, 2015

Dead paths in ESXi 5.5 on LUN 0

At a client recently, going over the ESXi logs, I found that a certain entry was spamming the /var/log/vmkwarning logs. This was not just on one host but on all hosts. The entry was:

Warning: NMP: nmpPathClaimEnd:1192: Device, seen through path vmhba1:C0:T1:L0 is not registered (no active paths)

As it was on all hosts, the indication was that the error or misconfiguration was not in the ESXi hosts but probably at the storage layer.

In vCenter, two dead paths for LUN 0 were shown on each host under Storage Adapters. However, it didn't seem to affect any LUNs actually in use:

The environment is running Vblock with Cisco UCS hardware and VNX7500 storage. The ESXi hosts boot from LUN. UIM is used to deploy both LUNs and hosts. VPLEX is used for active-active between sites (Metro cluster).

The ESXi boot LUN has ID 0 and is provisioned directly via the VNX. The LUNs for virtual machines are provisioned via the VPLEX, and their IDs start from 1.

However, ESXi still expects a LUN with ID 0 from the VPLEX. If it is not present, the above error will show.


To fix the issue, present a small "dummy" LUN to all the hosts via the VPLEX with LUN id 0. It can be a thin provisioned 100 MB LUN. Rescan the hosts. But don't add the datastore to the hosts, just leave it presented to the hosts but not visible/usable in vCenter. This will make the error go away.

When storage later has to be added, the dummy LUN will show up as an available 100 MB LUN, and operations staff will likely know not to add this particular LUN.

From a storage perspective the steps are the following:

  • Manually create a small thin LUN on the VNX array
  • Present it to the VPLEX storage group on the VNX
  • Claim the device on the VPLEX
  • Create a virtual volume
  • Present it to the storage views with LUN ID 0
  • Note: don't create a datastore on the LUN
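After the dummy LUN has been presented, the fix can be verified from the hosts; a sketch, assuming ESXi 5.5 shell access:

```shell
# Rescan all HBAs so the new LUN 0 is picked up
esxcli storage core adapter rescan --all

# Check whether any paths are still dead
esxcli storage core path list | grep -i "state: dead"

# Confirm that the nmpPathClaimEnd warnings have stopped
tail /var/log/vmkwarning.log
```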
Update 2015.07.21:
According to VCE, adding this LUN 0 is not supported with UIM/P (the provisioning tool for Vblock). We started seeing issues with the re-adapt function for UIM/P and storage issues after that, so we had to remove the LUN 0. So far, there is no fix if using UIM/P.

Thursday, March 26, 2015

Moving EMC VPLEX Witness server to the cloud - vCloud Air

To have a full active-active storage setup with live fail-over in case of a site failure for EMC VPLEX (with e.g. VNX or VMAX below), a Witness server is required. This is a small OVF Linux appliance (based on SLES). The Witness server must be placed in a third failure domain, i.e. a third physical site.

If this is not done, then manual intervention is required to activate the remaining site. This is described here (EMC documentation) and here (VMware documentation).

I have seen at multiple clients that a third site is not available and then the Witness server is placed on one of the two sites.

I looked into whether the Witness server can be moved to a cloud provider. Apparently it cannot be moved to Amazon AWS due to a specific kernel parameter set in the appliance (SLES) that doesn't match the underlying AWS hypervisor, which is Xen-based (this is what I've been told).

My thought was that the new VMware vCloud Air IaaS solution could be used, as it is based on VMware ESXi and the Witness server normally runs on an ESXi host. Contacting EMC in both Denmark and Sweden did not yield an answer; they didn't know whether this could be done, and the official VPLEX documentation doesn't specify anything in this regard (link above).

However, after a bit of digging I found an EMC whitepaper that describes this exact situation (it is from 2015 and probably quite new).

It is technically possible and supported by EMC. The white paper includes documentation, installation steps, and security details. EMC professional services can assist with install/config if required.

Update 2015.07.22: We have now implemented this successfully in production and it was a fairly painless process.

Link to white paper

It should be ensured that proper monitoring is set up for the Witness server. Connection state can be verified manually in the VPLEX Unisphere interface (see below). Also VPLEX can be configured to send SNMP traps to a monitoring tool for alerting.
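The connection state can also be checked from the VPLEX CLI; a hedged sketch, assuming the cluster-witness context has been configured in VPlexcli:

```shell
# From VPlexcli on the VPLEX management server, list the witness components;
# an operational state of "in-contact" for both clusters and the witness
# server indicates a healthy setup
ll /cluster-witness/components/
```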

Saturday, August 30, 2014

VMworld account - change email and update account status (partner, alumni, VCP, and VMUG member)

Changing email

For the VMworld account, it is possible to change most information via the 'edit profile', but not the email.
To change the email, you have to open a ticket with VMworld support.

Here is info and link:

"Due to security protocol, the registration team are unable to make changes to account details.  The account support team will be able to action this for you.  They can be contacted via an online line found here"

Updating Partner, Alumni, VCP status

After changing my email address, all info connected to the VMworld account about Partner, Alumni, VCP, and VMUG status was gone (this info is required to obtain e.g. the Alumni discount). To update this info, another ticket has to be created. It took about one working day.

Here is info and link (same link as above):

"In order to validate your Alumni, VCP, Partner and VMug you need to ensure that your previously used e-mail addresses are all linked.

Please contact the account support team quoting your Partner ID and years you have previously attended on the link below who will be able to assist."

Sunday, December 22, 2013

VMFS heap depletion

Over the past couple of days, we've had a VM that has crashed a number of times. When you try to open the VM console you get a black screen and a yellow MKS error at the top of the console. Strangely enough, the VM can still be pinged. After powering it off and on again it boots, but before long the same thing happens again. Also, vMotion did not work for a number of VMs and failed with the following error:

"The operation is not allowed in the current state"

In the vmkwarning log there are the following entries:

"WARNING: HBX: 1889: Failed to initialize VMFS3 distributed locking on volume 50d136be-62d92875-869a-10604bace2cc: Out of memory"

"WARNING: Fil3: 2034: Failed to reserve volume f530 28 1 50d136be 62d92875 6010869a cce2ac4b 0 0 0 0 0 0 0"

"WARNING: Heap: 2525: Heap vmfs3 already at its maximum size. Cannot expand."

"WARNING: Heap: 2900: Heap_Align(vmfs3, 6160/6160 bytes, 8 align) failed.  caller: 0x41800d8e84e9"

I found a KB article and a post from Cormac Hogan that explains the issue.

In ESXi 5.0 U1 the default VMFS heap size is set to 80 MB, which means that the maximum total size of open vmdk files is 8 TB. When that limit is reached, VMs can't access their disks.

There are two ways to fix this:

  • Upgrade to ESXi 5.1 U1 (or ESXi 5.0 Patch 5)
  • Increase VMFS3.MaxHeapSizeMB to 256 (default is 80) under Configuration -> Advanced Settings and reboot the host

Upgrading to ESXi 5.1 U1 increases the maximum total size of open vmdk's to 60 TB instead of 8 TB.

Increasing the heap size to 256 MB will increase the maximum to 25 TB.
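The heap setting can also be changed from the ESXi shell; a sketch, assuming ESXi 5.x (the value is in MB):

```shell
# Raise the VMFS heap from the default 80 MB to 256 MB
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256

# Verify the new value
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB
```

A host reboot is still required afterwards for the change to take effect.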