VMware VMDirectPath I/O

What is VMware VMDirectPath I/O?

VMDirectPath allows guest operating systems to directly access an I/O device, bypassing the virtualization layer. This direct path, or passthrough, can improve performance for VMware ESX systems that use high-speed I/O devices, such as 10 Gigabit Ethernet. A single VM can connect to up to two passthrough devices.

VMDirectPath I/O is experimentally supported for the following storage and network I/O devices:

  • QLogic QLA25xx 8 Gb Fibre Channel adapters
  • Emulex LPe12000 8 Gb Fibre Channel adapters
  • LSI 3442e-R and 3801e (1068 chip based) 3 Gb SAS adapters
  • Intel 82598 10 Gigabit Ethernet controller
  • Broadcom 57710 and 57711 10 Gigabit Ethernet controllers

VMware regularly adds support for new hardware. Check whether your hardware is supported in the VMware Hardware Compatibility Guide portal.
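To make the passthrough mechanism a little more concrete: once a device has been enabled for passthrough on the host, it is attached to a VM as a PCI passthrough entry in the VM's .vmx configuration file. The fragment below is only a sketch; the key names follow the pciPassthruN convention, but the device/vendor IDs and PCI address shown are placeholders for illustration, not values from a real system:

```
# Illustrative .vmx fragment for one passthrough device.
# IDs below are placeholders; use the values of your actual device.
pciPassthru0.present  = "TRUE"
pciPassthru0.vendorId = "8086"      # 8086 = Intel (example: an Intel 82598 10 GbE controller)
pciPassthru0.deviceId = "xxxx"      # PCI device ID of your adapter
pciPassthru0.id       = "04:00.0"   # host PCI address of the device (example)
```

The VM sees the device as if it were physically installed, which is why passthrough devices cannot be shared between VMs the way a virtual switch or virtual disk can.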


NPIV support in VMware ESX4

Whilst revising for the VCP4 Beta Exam and also replying to a thread on the VMTN Forum, I’ve come across a couple of instances where there is a lack of information on using NPIV in VMware ESX 4. The only good post I can find is Jason Boche‘s N_Port ID Virtualization (NPIV) and VMware Virtual Infrastructure, but his post was written and tested using ESX 3.5. So I have decided to find out as much information as I can and post it here.

Definition: NPIV stands for N_Port (Node Port) ID Virtualization

What does NPIV do? NPIV is a useful Fibre Channel feature that allows a physical HBA (Host Bus Adapter) to have multiple Node Ports. Normally, a physical HBA has only one N_Port ID; NPIV enables multiple unique N_Port IDs per physical HBA. ESX4 can use NPIV to allow more Fibre Channel connections than the physical maximum, which is currently 8 HBAs per host or 16 HBA ports per host. See the image above for a graphical representation of NPIV.
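To give a rough idea of how the extra N_Port IDs surface in practice: when NPIV world wide names are assigned to a VM, they are recorded in that VM's .vmx file so the virtual port can log in to the fabric with its own identity. The fragment below is an assumption based on the general shape of these entries, not an authoritative reference; the key names and WWN values are illustrative only:

```
# Illustrative .vmx fragment for NPIV-assigned WWNs.
# Key names and the WWN values are examples, not from a real configuration.
wwn.node = "28:4a:00:0c:29:00:00:10"   # virtual node WWN for this VM
wwn.port = "28:4a:00:0c:29:00:00:11"   # virtual port WWN used to log in to the fabric
```

Because each virtual port has its own WWN, the SAN can zone and mask LUNs per VM rather than per physical HBA, which is the main operational benefit of NPIV.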
