NPIV support in VMware ESX4

Whilst revising for the VCP4 Beta Exam and replying to a thread on the VMTN Forum, I’ve come across a couple of instances where information on using NPIV in VMware ESX 4 is lacking. The only good post I can find is Jason Boche’s N_Port ID Virtualization (NPIV) and VMware Virtual Infrastructure, but it was written and tested against ESX 3.5. So I have decided to find out as much as I can and post it here.

Definition: NPIV stands for N_Port (Node Port) ID Virtualization

What does NPIV do? NPIV is a useful Fibre Channel feature which allows a physical HBA (Host Bus Adapter) to register multiple N_Port IDs with the fabric. Normally, a physical HBA has only one N_Port ID; with NPIV, a single physical HBA can present multiple unique N_Port IDs. ESX4 can use NPIV to provide more Fibre Channel identities than the physical maximums allow, which are currently 8 HBAs per host or 16 HBA ports per host. See the image above for a graphical representation of NPIV.

What are the Advantages of using NPIV?

  • Standard storage management methodology across physical and virtual servers.
  • Portability of access privileges during VM migration.
  • Fabric performance, as NPIV provides quality of service (QoS) and prioritization for ensured VM-level bandwidth assignment.
  • Auditable data security due to zoning (one server, one zone).

Can NPIV be used with VMware ESX4? Yes! But NPIV can only be used with RDM disks; it will not work with virtual disks. VMs with regular virtual disks use the WWNs of the host’s physical HBAs. To use NPIV with ESX4 you need the following:

  • The FC switches used to access storage must be NPIV-aware.
  • The ESX host’s physical HBAs must support NPIV.

Currently, the following vendors and types of HBA provide this support:

  • QLogic – any 4 Gb HBA.
  • Emulex – 4 Gb HBAs with NPIV-compatible firmware.

How does NPIV work with VMware ESX4? When NPIV is enabled on a virtual machine, 8 WWN (World Wide Name) pairs, each consisting of a WWPN (port name) and a WWNN (node name), are assigned to that VM at creation. Once the VM has been powered on, the VMkernel instantiates a VPORT (virtual port) on the physical HBA, which is used to access the Fibre Channel network. Once the VPORT is ready, the VM tries each of these WWN pairs in sequence to discover an access path to the Fibre Channel network.

A VPORT appears to the FC network as a physical HBA because it presents its own unique WWNs, but an assigned VPORT is removed from the ESX host when the VM is powered off.
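A WWN is a 64-bit identifier, conventionally written as eight colon-separated hex bytes (e.g. 21:00:00:e0:8b:05:05:04). As a quick sketch, the WWNN/WWPN pairs described above can be sanity-checked like this; the helper and the sample values are hypothetical, not a VMware API:

```python
import re

# A Fibre Channel WWN is 64 bits, written as eight colon-separated hex bytes.
WWN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$", re.IGNORECASE)

def is_valid_wwn(wwn):
    """Return True if wwn looks like a well-formed Fibre Channel WWN."""
    return bool(WWN_RE.match(wwn))

# A WWNN/WWPN pair in the style ESX generates (values are made up):
pair = {"wwnn": "28:fd:00:0c:29:00:00:22", "wwpn": "28:fd:00:0c:29:00:00:23"}
assert all(is_valid_wwn(v) for v in pair.values())
```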

How is NPIV configured in VMware ESX4?

Before you try to enable NPIV on a VM, the VM must have an RDM added. If your VM does not, the NPIV options are greyed out and you will see this warning:

Once you have connected your RDM to your VM, the NPIV options become available to you:

You can now either manually add the WWNs to the VM’s .vmx file or let ESX generate them for you.
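If you go the manual route, ESX stores the NPIV WWNs in the VM’s .vmx file as wwn.node and wwn.port entries. A sketch of what that looks like, where the values and their exact formatting are made-up examples (the safest approach is to let ESX generate a set once and copy the syntax it produces):

```
wwn.node = "28fd000c29000022"
wwn.port = "28fd000c29000023,28fd000c29000024"
```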

Note: These screenshots were taken on ESX3.5 (thanks, Jason Boche) as I don’t have access to an ESX4 test lab with 8 NPIV-enabled HBAs. As you can see, only 4 port WWNs were created; this is due to the maximum HBA limit on ESX3.5. The maximum on ESX4, as stated earlier, is 8 HBAs, so when WWNs are generated on ESX4 you will see 8 port WWNs instead of the pictured 4.

The WWNs generated for the VM must then be added to the Fibre Channel fabric in order for the VM to be able to access the storage network. If this step isn’t followed, the LUN (Logical Unit Number) cannot be presented to the VM.
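The exact commands for this step depend on your switch vendor. As a sketch, on a Brocade fabric you would zone the VM’s generated WWPN together with the array’s port roughly like this; the zone name, config name, and WWNs below are all made-up examples:

```
zonecreate "npiv_vm1_zone", "28:fd:00:0c:29:00:00:23; 50:06:01:60:41:e0:60:22"
cfgadd "production_cfg", "npiv_vm1_zone"
cfgenable "production_cfg"
```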

Once you have completed all of these steps, your RDM will now be available in your VM’s OS.

Updated:

NPIV-enabled virtual machines cannot be Storage vMotioned (sVMotion), but they can be vMotioned!

If you want to use vMotion for a virtual machine with NPIV enabled, make sure that the RDM file is located on the same datastore where the virtual machine configuration file resides.
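A quick way to sanity-check this before attempting a vMotion, sketched in Python; the example path and the RDM pointer naming convention (*-rdm*.vmdk) are assumptions to adapt to your environment:

```python
import glob
import os

def rdm_colocated(vmx_path):
    """Return True if an RDM pointer file (*-rdm*.vmdk) sits in the
    same directory as the VM's .vmx file, which the note above gives
    as a prerequisite for vMotion with NPIV."""
    vm_dir = os.path.dirname(vmx_path)
    return bool(glob.glob(os.path.join(vm_dir, "*-rdm*.vmdk")))

# Hypothetical example path on an ESX host:
# rdm_colocated("/vmfs/volumes/datastore1/myvm/myvm.vmx")
```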

If you do want to use vMotion with an NPIV-enabled VM, you must make sure that the WWNs created on the new host are added to the Fibre Channel fabric, otherwise the VM will not see your RDMs.

Comments

  • another simon

    I can't quite get my brain around why svMotion doesn't work with NPIV+RDM. I would think that normal vMotion would have more issues (moving VMs/RDMs between hosts), yet it's svMotion (moving VMs/RDMs between datastores) that's forbidden. What exactly is breaking here?
    Secondly, regarding “If you do want to use vMotion with a NPIV enabled VM, you must make sure that the WWN’s that are created on the new Host are added to the Fibre Channel Fabric otherwise the VM will not see your RDM’s”, why wouldn't the originally generated NPIV WWNs work on any host in the cluster?
    Lastly, has any of this changed in ESX4 Update 1?

    Thanks!

  • c’mon!

    Valid questions above by Another Simon. Got any answers?
