Unknown status of Hardware Acceleration

While reading VMware's documentation, I came across a cool feature called Hardware Acceleration in the storage guide. It reminded me of an outage about a year ago: our NetApp filer crashed due to a motherboard problem and some of its datastores failed, so we had to move the virtual machines from that filer to another one. We noticed that storage vMotion performance was remarkably high: the data moved roughly twice as fast as a regular storage vMotion. That's the advantage of Hardware Acceleration.

My first task this year was standardizing the virtualization environment. While checking the Hardware Acceleration settings, I found an interesting problem: the same LUNs showed different statuses on different ESXi 5 hosts in a cluster. Some hosts showed Hardware Acceleration as enabled, while others showed Unknown.

The storage is an EMC CLARiiON CX series array with ALUA enabled. I found that the working hosts had the VAAI filter attached to the LUNs, while the non-working hosts had nothing.

Figure 1   Working Host

Figure 2   Non-working Host
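
Beyond the vSphere Client view, the status can also be pulled per LUN with PowerCLI. The sketch below is an example only; it assumes an open Connect-VIServer session and reads the VStorageSupport property, which reports vStorageSupported, vStorageUnsupported, or vStorageUnknown for each disk.

# List the hardware acceleration (VAAI) status of every disk LUN on each host.
foreach ($vmhost in Get-VMHost) {
    Get-ScsiLun -VmHost $vmhost -LunType disk |
        Select-Object @{N='Host';E={$vmhost.Name}}, CanonicalName,
                      @{N='HWAccel';E={$_.ExtensionData.VStorageSupport}}
}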

ESXi 5 automatically attaches filters according to the LUN properties, so this issue indicated that the LUN properties were different on different ESXi 5 hosts. That pointed to a storage-layer issue. After troubleshooting with EMC, we found that the Failover Mode of the LUNs was different for each host; it should be 4 instead of the default 1.
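
A quick way to compare LUN properties across hosts is to check which SATP claimed each device: with Failover Mode 4 (ALUA), the CLARiiON LUNs should be claimed by VMW_SATP_ALUA_CX, while the default mode 1 shows up under VMW_SATP_CX. A minimal PowerCLI sketch (the cluster name is a placeholder), equivalent to running esxcli storage nmp device list on each host:

# Show which SATP claimed each device on every host in the cluster.
foreach ($vmhost in Get-Cluster "Cluster01" | Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost
    $esxcli.storage.nmp.device.list() |
        Select-Object @{N='Host';E={$vmhost.Name}}, Device, StorageArrayType
}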

Please be aware that storage activity on the host will be interrupted when you change the Failover Mode, so put the host into maintenance mode first.
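
Entering and exiting maintenance mode can be scripted as well; a small sketch, assuming an open Connect-VIServer session and a placeholder host name:

# Evacuate the host before changing the Failover Mode on the array side.
# Running VMs must be moved off first (e.g. by DRS in a fully automated cluster).
Get-VMHost "esxi01.example.com" | Set-VMHost -State Maintenance -Evacuate
# ...change the Failover Mode, rescan, then bring the host back:
Get-VMHost "esxi01.example.com" | Set-VMHost -State Connected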

Regarding Failover Mode, I had a discussion with a storage engineer. He told me that different storage vendors use different names for "Failover Mode"; some vendors may ask you to choose the OS type of the target machine instead. For EMC arrays there are five modes; please refer to page 10 of the EMC document.

vMotion fails with the error: A general system error occurred. Invalid fault

The vSphere Client popped up the following error when I put some ESXi 5.0 hosts into maintenance mode:

A general system error occurred. Invalid fault

That message is really no help for troubleshooting. I found a KB article on the VMware website, but it didn't match my case.

My virtual machines were intact; I could change settings, remove them from inventory, or power the boxes on and off. So what was the issue?

I found the following message in hostd.log:

2013-01-18T01:18:10.177Z [39489B90 info 'Default' opID=DDBEEEE7-0000023A-78] File path provided /vmfs/volumes/4fef9740-0b0c0cee-c1a4-e8393521ff62/VM-01 does not exist or underlying datastore is inaccessible: /vmfs/volumes/4fef9740-0b0c0cee-c1a4-e8393521ff62/VM-01

I also found this message in vmware.log:

2013-01-18T01:19:41.966Z| vmx| Migrate_SetFailure: Timed out waiting for migration start request.

The logs indicate that ESXi could not identify the location of the VM configuration file. As a result, ESXi didn't know the IP address family of the VM and was also unable to allocate memory on the target host.

But my datastores were accessible and I could browse their contents, so I think the only explanation is that the ESXi host was still using stale datastore information. A rescan fixed the problem.
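
The rescan can be kicked off from PowerCLI too; a quick sketch with a placeholder host name, assuming an open Connect-VIServer session:

# Rescan all HBAs and refresh VMFS volumes so the host drops its stale
# datastore information.
Get-VMHost "esxi01.example.com" | Get-VMHostStorage -RescanAllHba -RescanVmfs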

Move multiple datastores to a folder

We are moving virtual machines from old storage to new datastores today. There are a lot of old datastores that need to be removed after the migration; as a safety measure, I moved all the old datastores into a folder first and then ran the decommission process.

There are more than 60 datastores, and the vSphere Client does not allow moving them all in one go. Here is a PowerCLI script that can help move multiple datastores to a folder.

Note: Please make sure your folder name is unique.

When you create datastores.txt, make sure the first line is "Name", with one datastore name on each line.
Example:
Name
datastore1
datastore2
datastore3

Move-Datastores
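
For reference, a minimal sketch of what such a script can look like. It assumes an open Connect-VIServer session and PowerCLI 5.1 or later (which has Move-Datastore); the folder name "Decommission" is a placeholder for your own folder:

# Move every datastore named in datastores.txt into an existing datastore folder.
$folder = Get-Folder -Name "Decommission" -Type Datastore
Import-Csv -Path .\datastores.txt | ForEach-Object {
    Get-Datastore -Name $_.Name | Move-Datastore -Destination $folder
}

Because Get-Folder matches by name, the folder name must be unique, which is why the note above matters.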