I just googled this issue and found that some people have hit a similar problem. Here were my problem and solution; I hope they're useful for you.
The virtual machine may be lost from the vCenter Server inventory for some reason, and it also disappears from the ESXi inventory when you connect to the individual host directly. You can find the VM process with esxcli vm process list, but it doesn't show up in vim-cmd vmsvc/getallvms.
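A quick way to spot this mismatch from the ESXi shell is to diff the two lists by display name. This is just a sketch assuming an ESXi 5.x shell; the helper names are mine, not VMware's:

```shell
#!/bin/sh
# Sketch, assuming an ESXi 5.x shell; the helper names are mine.

# Pull the "Display Name:" fields out of `esxcli vm process list` output.
parse_running_names() {
    sed -n 's/^ *Display Name: *//p' | sort
}

# Pull the Name column out of `vim-cmd vmsvc/getallvms` output
# (the first line is the header row).
parse_inventory_names() {
    awk 'NR > 1 { print $2 }' | sort
}

# On the host you would pipe the real commands through the helpers:
#   esxcli vm process list  | parse_running_names   > /tmp/running.txt
#   vim-cmd vmsvc/getallvms | parse_inventory_names > /tmp/inventory.txt
#   comm -23 /tmp/running.txt /tmp/inventory.txt   # running but not in inventory
```

Any name printed by that last comm line has a live process but no inventory entry, which is exactly the symptom above.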
If you try to reload the VM manually with the command vim-cmd vmsvc/reload [vmid], it returns an error:
dynamicType = <unset>,
faultCause = (vmodl.MethodFault) null,
msg = "Unable to find a VM corresponding to "vmid"",
You can also see the VM process in esxtop.
My solution was to remove the ESXi host from vCenter Server, restart the management services, browse the VM folder on the datastore and add the VM back to the inventory, then join the host back to vCenter Server.
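The middle steps can also be done from the command line: vim-cmd solo/registervm does the same thing as adding the .vmx through the datastore browser. A sketch, assuming SSH access to the host; the datastore path is illustrative, and ECHO=echo is my own dry-run guard:

```shell
#!/bin/sh
# Sketch of the re-register step, assuming an ESXi 5.x shell.
# Set ECHO=echo to preview the commands instead of running them.
recover_vm() {
    vmx="$1"
    ${ECHO:-} services.sh restart             # restart all management services
    ${ECHO:-} vim-cmd solo/registervm "$vmx"  # add the VM back to the inventory
}

# Example (illustrative path):
#   recover_vm /vmfs/volumes/datastore1/VM-01/VM-01.vmx
```

Removing the host from vCenter Server and joining it back is still done in the vSphere Client, before and after these commands.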
It's been a month; I was busy making our environment more stable, with a lot of troubleshooting, WebEx sessions and discussion. A few days ago I noticed random VMs kept vMotioning constantly. Some VMs got into a strange situation, showing orphaned, invalid or unknown status, but were still online.
I couldn't find any evidence of why the VMs went into these states. One more thing I noticed was that the CPU and memory utilization of the ESXi 5.1 hosts showed 0 on vCenter Server 5.1.
The following is not a mature conclusion; it's my inference based on DRS, HA and that particular 0 value for CPU/memory. I also discussed it with VMware BCS support.
The VMs changed to abnormal status because vMotion was interrupted by something, most likely HA kicking in due to intermittent network/storage failures. The chance of that is high, since DRS kept trying to move heavy-workload VMs to a host reporting 0 CPU/memory.
You have to upgrade to the latest ESXi 5.1 build or vCenter Server 5.1 Update 1c to permanently fix this problem.
Choose one of the following options as a temporary workaround; the issue will come back.
1. Restart the ESXi management agents.
2. Disconnect and reconnect the ESXi host in the vSphere Client.
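For option 1, the agents can be restarted from the ESXi shell. A sketch assuming SSH access to an ESXi 5.x host; ECHO=echo is my own dry-run guard:

```shell
#!/bin/sh
# Sketch for option 1, assuming an ESXi 5.x shell.
# hostd is the host agent and vpxa the vCenter agent; restarting them
# does not touch running VMs. Set ECHO=echo to preview only.
restart_agents() {
    ${ECHO:-} /etc/init.d/hostd restart
    ${ECHO:-} /etc/init.d/vpxa restart
}
```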
Update: you have to upgrade both the ESXi hosts and vCenter Server to permanently fix the problem.
The vSphere Client popped up the following error when I put some ESXi 5.0 hosts into maintenance mode.
A general system error occurred. Invalid fault
That message is really no help for troubleshooting. I found a KB article on the VMware website, but it wasn't my case.
My virtual machines were intact; I could change settings, remove them from the inventory or power the boxes on/off, so what was the issue?
I found the following message in hostd.log:
2013-01-18T01:18:10.177Z [39489B90 info 'Default' opID=DDBEEEE7-0000023A-78] File path provided /vmfs/volumes/4fef9740-0b0c0cee-c1a4-e8393521ff62/VM-01 does not exist or underlying datastore is inaccessible: /vmfs/volumes/4fef9740-0b0c0cee-c1a4-e8393521ff62/VM-01
I also found messages in vmware.log:
2013-01-18T01:19:41.966Z| vmx| Migrate_SetFailure: Timed out waiting for migration start request.
The logs indicate that ESXi cannot identify the location of the VM configuration file, which means ESXi doesn't know the IP address family of the VM and also cannot allocate memory on the target host.
But my datastore was accessible and I could browse its contents. I think the only explanation is that the ESXi host was still using stale datastore information; a re-scan can fix the problem.
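The re-scan can be triggered from the vSphere Client or from the shell. A sketch, assuming SSH access to the ESXi 5.x host; ECHO=echo is my own dry-run guard:

```shell
#!/bin/sh
# Sketch of the storage re-scan, assuming an ESXi 5.x shell.
# Set ECHO=echo to preview the commands instead of running them.
rescan_storage() {
    ${ECHO:-} esxcli storage core adapter rescan --all  # re-scan every HBA
    ${ECHO:-} vmkfstools -V                             # re-discover VMFS volumes
}
```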