Tag: storage

  • PortChannel does not work on Cisco UCS Fabric Interconnect

    Whatever you configure on the MDS, whatever you configure on the Cisco UCS FIs, whatever you do for the port channel on both sides, the Cisco UCS uplink ports always stay down with the error message Initialize failed or Error disabled.

    Congratulations! Your device hit an MDS firmware bug: https://tools.cisco.com/bugsearch/bug/CSCtr01652/?reffering_site=dumpcr.

  • Device or Resource Busy

    You may have read my post How to find which ESXi 5.1 host lock the VM; it explains how to figure out which host has locked down a file.

    But sometimes you may face a similar problem that needs a different solution.

    You are able to browse the file by CLI or GUI, but cannot delete it either way. It returns device or resource busy or a similar error message.

    You can try the following command to delete the file or folder:

    rm -rf [file or folder name]

  • How to get HBA WWPN of ESXi hosts

    It's been a busy month; I haven't updated my blog since I came back from Phuket with my wife. I'm running multiple projects, a little overloaded.

    Just a quick share: my storage team asked me to provide the WWPNs of all hosts for a health check. It's a nightmare to pull that data out of the vSphere Client or Web Client, but I just found a way to get it.

    Get-VMHost -Location "cluster name" | Get-VMHostHba -Type FibreChannel | Select VMHost,Device,@{N="WWPN";E={"{0:X}" -f $_.PortWorldWideName}}

    The interesting part is "{0:X}" -f $_.PortWorldWideName

    {0:X} is a .NET format string that converts the value to hexadecimal.

    -f is the PowerShell format operator, which applies the format string on its left to the value on its right.

    $_.PortWorldWideName is the value you want to convert.
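    To illustrate what the conversion does, here is a minimal Python sketch of my own (not from the original post) that turns a decimal PortWorldWideName into the colon-separated WWPN form storage teams usually expect; the sample value is made up:

```python
def format_wwpn(port_wwn: int) -> str:
    """Convert a decimal WWPN (as PowerCLI returns it) to colon-separated hex."""
    hex_str = format(port_wwn, "016x")  # a WWPN is 8 bytes -> 16 hex digits
    return ":".join(hex_str[i:i + 2] for i in range(0, 16, 2))

# Hypothetical sample value for illustration only
print(format_wwpn(0x2101001B32A3C4D5))  # -> 21:01:00:1b:32:a3:c4:d5
```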

  • Windows cannot be installed on drive 0 partition 1

    I think Windows Server 2012 will be the next popular server OS, just like Windows Server 2008; it's also a nice hypervisor OS in the virtual world. What do you think?

    Installation is the first step to experiencing the OS, and you may see some strange problems during that step, just like me. Today's topic occurred a long time ago; I just want to share it with people who may face a similar issue.

    That was an HP blade system with local disks attached, but you may see a similar problem from other vendors. When you select the disk to install the OS on, the installer may say Windows can't be installed on drive 0 partition 1, or Windows cannot be installed on this disk. This computer's hardware may not support booting to this disk. Ensure that the disk's controller is enabled in the computer's BIOS menu.

    That's because the boot volume is not set on the array controller. For example, on HP servers you have to reboot and press F8 after the BIOS checks the array controller to enter the array controller management interface. Then go to Select Boot Volume in the main menu, select Direct Attached Storage, and then select the disk you want to install the OS on. Follow the wizard to continue booting.

    If the problem persists, go back to the array controller management interface, rebuild the array, and select the boot volume again; that should fix the problem.

  • Nodes in the ESXi cluster may report corruption after reboot host or attach device

    VCE just released a new KB, vce2563, to describe the issue.

    If your ESXi 5.x hosts are connected to a VMAX running Enginuity 5876.159.102 or later, you may see this particular issue after rebooting an ESXi host or attaching storage if you enabled the block delete feature of VAAI.

    To check the option status you can run the following command in PowerCLI:

    Get-VMHost -Location "cluster name" | Get-VMHostAdvancedConfiguration -Name VMFS3.EnableBlockDelete

  • Error 2931 The connection to the VMM agent on the virtualization server was lost

    Windows Server 2012 is the biggest competitor of VMware vSphere. There are adequate reasons to use Hyper-V 2012 instead of vSphere 5.x, but it's still very hard for newbies; we spent more than 30 hours trying to figure out how to create a cluster in SCVMM 2012 SP1. The software is easy to install but hard to configure. I saw "failed" everywhere; it's not a mature product in my view.

    We installed Windows Server 2012 Datacenter edition on HP BL460 blades; the storage is a NetApp FAS2240 (maybe wrong, I'm not a storage guy). We got the following error message when we created a Hyper-V cluster in SCVMM 2012 SP1.

    Error (2931)
    VMM is unable to complete the request. The connection to the VMM agent on the virtualization server (xxx) was lost.
    Unknown error (0x80338029)

    Recommended Action
    Ensure that the Windows Remote Management (WS-Management) service and the VMM agent are installed and running and that a firewall is not blocking HTTPS traffic.

    This can also happen due to DNS issues. Try and see if the server (dcahyv04.amat.com) is reachable over the network and can be looked up in DNS. You can ping the virtualization server from VMM management server and make sure that the IP address returned matches the IP address locally obtained from the virtualization server.

    If the error still persists, restart the virtualization server, and then try the operation again.

    The SCVMM job always failed at "Mounts storage disk on xxxx".

    Initially I thought something was wrong with the services, so I checked the mentioned Windows Remote Management service, but it was up and running. Then I noticed the WINS servers were not set, but still no luck.

    Why did the job always fail on mounting storage? Maybe something related to disk operations? The SCVMM server is a remote server, so it must operate the disks remotely. I tried connecting to the Hyper-V server with the Computer Management tool remotely, and it showed me RPC is unavailable when I clicked the Disk Management node. Aha, a firewall problem: the SCVMM server had its firewall disabled but the Hyper-V server had its enabled, so the RPC ports were blocked on the client side.

    Sometimes cluster creation succeeded after I disabled the firewall, but the Hyper-V server still seemed to have a hard time mounting storage.

    Since SCVMM mounts/unmounts storage on each Hyper-V host during cluster creation, it took a very long time to mount storage before the job failed. We suspected something related to storage; finally, we installed NetApp Host Utilities 6.0.1 and NetApp MPIO 4.0.1, which solved the problem.

    To summarize: you must enable remote management on the Hyper-V host (such as the Remote Registry service, etc.), allow the required ports in Windows Firewall, and install the storage MPIO plugin as well. BTW, you should disable UAC on Windows Server 2012; it's different from Windows Server 2008, check http://social.technet.microsoft.com/wiki/contents/articles/13953.windows-server-2012-deactivating-uac.aspx.

    That’s just first step to make Hyper-V successful. 🙂

  • How to configure vSAN on nested ESXi hosts with SSD hard disk

    There are a lot of articles introducing the vSAN feature with step-by-step guides. I referred to William Lam's article & Duncan's article to configure vSAN in my lab. I was sure I followed their steps exactly, but I could not see anything in the disk field under Disk Management.

    Please note: the following steps do not work for ESXi 6.0 RC on VMware Workstation 10. You have to set scsix:y.virtualssd = 0 in the vmx file to mark the disk as non-SSD. Please refer to William's article for details.

    After looking into it more deeply, I found something interesting. I ran:

    esxcli storage core device list

    and got this output:

    mpx.vmhba1:C0:T1:L0
    Display Name: Local VMware, Disk (mpx.vmhba1:C0:T1:L0)
    Has Settable Display Name: false
    Size: 5120
    Device Type: Direct-Access
    Multipath Plugin: NMP
    Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T1:L0
    Vendor: VMware,
    Model: VMware Virtual S
    Revision: 1.0
    SCSI Level: 2
    Is Pseudo: false
    Status: on
    Is RDM Capable: false
    Is Local: true
    Is Removable: false
    Is SSD: true
    Is Offline: false
    Is Perennially Reserved: false
    Queue Full Sample Size: 0
    Queue Full Threshold: 0
    Thin Provisioning Status: unknown
    Attached Filters:
    VAAI Status: unsupported
    Other UIDs: vml.0000000000766d686261313a313a30
    Is Local SAS Device: false
    Is Boot USB Device: false
    No of outstanding IOs with competing worlds: 32

    Initially, I thought the disk was marked as SSD because I ran the command to enable SSD. Actually it's not like that: it shows SSD because my physical hard disk is an SSD! I don't have to run the command introduced in the articles to turn SSD on; it's naturally SSD. lol

    What I needed to do was actually the total opposite. These are the steps I used to enable vSAN:

    1. Create two disks.

    2. Log in to the ESXi host by SSH.

    3. Run the following command and find the two disks you want to use for vSAN. Record the runtime name.

    esxcli storage core device list

    4. Run the following command to mark one disk as non-SSD.

    esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device vmhba1:C0:T2:L0 --option "disable_ssd"

    5. Follow the articles above to enable the vSAN ports, create the cluster, enable vSAN on the cluster, and join the ESXi hosts to the cluster.

  • How to decode ESXi 5.x SCSI error code

    Storage is a critical component of virtualization, and a lot of VM performance issues are related to storage latency. In some cases you may see an error message like this in the vmkernel log:

    2014-02-11T07:18:20.541Z cpu8:425351)ScsiDeviceIO: 2331: Cmd(0x4124425bc700) 0x2a, CmdSN 0xd5 from world 602789 to dev “naa.514f0c5c11a00025” failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0

    It looked much like the language of another planet the first time I saw it. :) Let's see how to "translate" it into human language.

    First, I split it into several sections:

    a) 2014-02-11T07:18:20.541Z cpu8:425351)

    b) ScsiDeviceIO: 2331: Cmd(0x4124425bc700) 0x2a, CmdSN 0xd5

    c) from world 602789

    d) to dev “naa.514f0c5c11a00025”

    e) failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0

    Section A shows the UTC time when the error occurred.

    Section B shows which command was sent and its serial number. (The opcode 0x2a is the SCSI WRITE(10) command.)

    Section C shows which world the command is related to.

    You can find out which world it is with the following command:

    ps | grep 602789

    Section D shows which storage device reported the error.

    You can identify which datastore it is with the following command if your datastore contains a single LUN:

    esxcfg-scsidevs -m naa.514f0c5c11a00025

    You can also check the LUN settings and information with the following commands:

    esxcli storage core device list -d naa.514f0c5c11a00025

    esxcli storage nmp device list -d naa.514f0c5c11a00025

    Section E shows the SCSI sense code. That's the part I want to detail.

    It breaks down into two parts:

    SCSI status code: H:0x0 D:0x2 P:0x0

    H means host status

    D means device status

    P means plugin status

    Sense data: 0x4 0x44 0x0

    0x4 is the Sense Key

    0x44 is the Additional Sense Code

    0x0 is the Additional Sense Code Qualifier (ASCQ)

    Before decoding, you should translate each code to NNh notation (0xNN = NNh). For example, 0x7a = 7Ah and 0x77 = 77h.

    The SCSI status code is easy to decode. You just need to change the format and look up the code at http://www.t10.org/lists/2status.htm.

    In our example H:0x0 D:0x2 P:0x0, host code 0x0 (00h) means the ESX host side is good, device code 0x2 (02h) means CHECK CONDITION, and plugin status code 0x0 (00h) means the LUN plugin is good. (I originally glossed device code 0x2 as "device is not ready" for easy understanding, but strictly CHECK CONDITION means something different: the device is reporting sense data. Thanks Tony for pointing that out.)

    The sense data is a little more complicated. You have to refer to two links: http://www.t10.org/lists/2sensekey.htm and http://www.t10.org/lists/asc-num.txt.

    In our example 0x4 0x44 0x0, Sense Key 0x4 (4h) means HARDWARE ERROR, the Additional Sense Code is 0x44 (44h), and the ASC Qualifier is 0x0 (00h); combine the two codes as 44h/00h, which means INTERNAL TARGET FAILURE.

    Okay, then let's put all the decoded language together:

    The ESX host side is good; the device returned CHECK CONDITION; the LUN plugin is good; the sense data reports HARDWARE ERROR: INTERNAL TARGET FAILURE.

    Actually I dumped this code from an fnic firmware/driver incompatibility case. Does it make your troubleshooting easier? :)
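    The decoding steps above can be sketched as a small script. This is a minimal Python illustration of my own; the lookup tables contain only a few entries hand-copied from the T10 lists, not the full tables:

```python
import re

# Partial lookup tables; a handful of entries copied from the T10 lists
# (2status.htm, 2sensekey.htm, asc-num.txt). Extend as needed.
HOST_STATUS = {0x0: "OK"}
DEVICE_STATUS = {0x0: "GOOD", 0x2: "CHECK CONDITION", 0x8: "BUSY"}
PLUGIN_STATUS = {0x0: "OK"}
SENSE_KEYS = {0x0: "NO SENSE", 0x4: "HARDWARE ERROR", 0x5: "ILLEGAL REQUEST"}
ASC_ASCQ = {(0x44, 0x0): "INTERNAL TARGET FAILURE"}

def decode(line: str) -> str:
    """Pull H/D/P status and sense bytes out of a vmkernel log line."""
    m = re.search(
        r"H:0x([0-9a-f]+) D:0x([0-9a-f]+) P:0x([0-9a-f]+).*?"
        r"sense data: 0x([0-9a-f]+) 0x([0-9a-f]+) 0x([0-9a-f]+)",
        line, re.IGNORECASE)
    if not m:
        return "no SCSI status found"
    h, d, p, key, asc, ascq = (int(x, 16) for x in m.groups())
    return "; ".join([
        f"host: {HOST_STATUS.get(h, hex(h))}",
        f"device: {DEVICE_STATUS.get(d, hex(d))}",
        f"plugin: {PLUGIN_STATUS.get(p, hex(p))}",
        f"sense: {SENSE_KEYS.get(key, hex(key))} - "
        f"{ASC_ASCQ.get((asc, ascq), f'{asc:02X}h/{ascq:02X}h')}",
    ])

log = ('ScsiDeviceIO: 2331: Cmd(0x4124425bc700) 0x2a, CmdSN 0xd5 from world '
       '602789 to dev "naa.514f0c5c11a00025" failed H:0x0 D:0x2 P:0x0 '
       'Valid sense data: 0x4 0x44 0x0')
print(decode(log))
# -> host: OK; device: CHECK CONDITION; plugin: OK; sense: HARDWARE ERROR - INTERNAL TARGET FAILURE
```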

    You could also refer to following links to get more detail:

    Understanding SCSI device/target NMP errors/conditions in ESX/ESXi 4.x and ESXi 5.x

    Understanding SCSI host-side NMP errors/conditions in ESX 4.x and ESXi 5.x

    Interpreting SCSI sense codes in VMware ESXi and ESX

  • vHBAs and other PCI devices may stop responding in ESXi 5.x when using Interrupt Remapping

    Your vHBAs or other PCI devices may stop responding in ESXi 5.x when using the Interrupt Remapping feature.

    This issue only impacts UCS blade BIOS version 1.4(3c); it was fixed in 1.4(3j).

    Please refer to http://kb.vmware.com/kb/1030265 to see how to disable the Interrupt Remapping feature in ESXi 5.x.

    Also refer to https://tools.cisco.com/bugsearch/bug/CSCty96722.

  • IPv6 link in NetApp SMVI backup log

    NetApp Virtual Storage Console is my favorite tool to manage and back up data on NetApp-attached ESXi hosts; it has a lot of benefits for securing VM data more efficiently.

    The installation is pretty simple and it requires very few resources; you can even install it on a multi-role virtual machine. But the first headache may be the backup log…

    The default report URL is an IPv6 address in NetApp Virtual Storage Console. You have to add a parameter to the wrapper.conf file manually. Here are the detailed steps:

    This procedure has to be repeated after NetApp Virtual Storage Console is upgraded.

    1) Shut down SMVI server (via Windows service).

    2) Open wrapper.conf in C:\Program Files\NetApp\Virtual Storage Console\smvi\server\etc

    3) Locate this section:

    Java Additional Parameters
    wrapper.java.additional.1=-XX:MaxPermSize=128m
    wrapper.java.additional.2=-Dcom.sun.management.jmxremote
    wrapper.java.additional.3=-Dcommon.dir=.
    wrapper.java.additional.4=-Dorg.apache.cxf.Logger=org.apache.cxf.common.logging.Log4jLogger

    4) Add the following line:

    wrapper.java.additional.5=-Djava.net.preferIPv4Stack=true

    5) Start SMVI server (via Windows service).