Defining VN-Link

The misunderstanding of Cisco’s enhanced network products for VMware environments has hit critical mass.  At this point very few people know what does what, how it works, and when to use it.  Hopefully this post will demystify some of it.


VN-Link:

VN-Link is the product name for a family of products; it does not specifically refer to any one product, so forget the idea of hardware vs. software implementation, etc.  Think of the Nexus family of switches: 1000v, 2000, 4000, 5000, 7000.  All are different products solving different design goals, but all are components of the Data Center 3.0 portfolio.  The separate products that fall under VN-Link are described below:

Nexus 1000v:

The Nexus 1000v is a Cisco software switch for VMware environments.  It consists of two components: a Virtual Supervisor Module (VSM), which acts as the control plane, and a Virtual Ethernet Module (VEM), which acts as the data plane.  Two VSMs operate in an active/standby fashion for HA, and each VMware host gets a VEM.  The switch is managed through the Cisco NX-OS CLI and looks/smells/feels like a physical switch from a management perspective…that’s the whole point:

‘Network teams, here’s your network back, thanks for letting us borrow it.’  – The Server Team

The Nexus 1000v does not rely on any mystical magic such as VN-Tag (discussed shortly) to forward frames.  Standard Ethernet rules apply and MAC-based forwarding stays the same.  The software switch itself is proprietary (just like any hardware/software you buy from anyone), but the protocol used is standards-based Ethernet.
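To make the “looks/smells/feels like a physical switch” point concrete, here is a minimal sketch of what network teams work with on the VSM: port-profiles defined in the NX-OS CLI that show up in vCenter as port groups.  The profile name and VLAN number below are made-up examples, not values from any real deployment:

```
! Hypothetical port-profile on the VSM; the name and VLAN are examples only.
! "vmware port-group" pushes this profile to vCenter as a port group.
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

The server team then simply picks the resulting port group for a VM’s vNIC; the network team owns the policy end to end.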

Hypervisor Bypass/Direct-Path I/O:

Hypervisor bypass is the ability for a VM to access PCIe adapter hardware directly in order to reduce the overhead on a physical VMware host’s CPU.  This can be done with almost any PCIe device using VMware’s Direct-Path I/O.  The advantage is less host CPU/memory overhead for I/O virtualization.  The disadvantages are that there is currently no support for vMotion, and there are limits on the number of Direct-Path I/O devices per host.  This doesn’t require Cisco hardware or software, but Cisco does have a device that makes it more appealing in blade servers with limited PCIe devices (the VIC, discussed later.)

Pass Through Switching (PTS):

PTS is a capability of the Cisco UCS blade system.  It relies on management intelligence in the UCS Manager and switching intelligence on each host to pull management of the virtual network into UCS Manager.  This allows a single point of management for the entire access layer, including the virtual switching environment.  Hooray: less management overhead, more doing something that matters!

PTS directly maps a virtual machine’s virtual NIC to an individual physical NIC port across a virtualized pass-through switch.  No internal switching is done in the VMware environment; instead, switching and policy enforcement are handled by the upstream Fabric Interconnect.  What makes this usable is the flexibility in the number of interfaces provided by the VIC, discussed next.

Virtual Interface Card (VIC), the card formerly known as Palo:

The Virtual Interface Card is a DCB- and FCoE-capable I/O card that is able to virtualize the PCIe bus to create multiple interfaces and present them to the operating system.  Theoretically the card can create a mix of 128 virtual Ethernet and Fibre Channel interfaces, but the real usable number is 58.  Don’t get upset about the numbers; your operating system can’t even support 58 PCIe devices today ;-).  Each virtual interface is known as a VIF and is presented to the operating system (any OS) as an individual PCIe device.  The operating system can then do anything it chooses, and is capable of, with the interfaces.

In the example of VMware, the VMware OS (yes, there is an actual OS installed on the bare metal underneath the VMs) can assign those virtual interfaces (VIFs) to vSwitches, VMkernel ports, or Service Console ports, as it could with any other physical NIC.  It can also assign them to the 1000v, use them for Direct-Path I/O, or use them with Pass-Through Switching.  Even more important is the flexibility to use separate VIFs for each of these purposes on the same host (read: none of these are mutually exclusive.)  The VIC relies on VN-Tag for identification of individual VIFs; it is the only technology discussed in this post that uses VN-Tag (although there are others.)


VN-Tag:

VN-Tag is a frame tagging method that Cisco has proposed to the IEEE and is used in several Cisco hardware products.  VN-Tag serves two major purposes:

1) It provides individual identification for virtual interfaces (VIFs.)

2) It allows a VN-Tag capable Ethernet switch to switch and forward frames for several VIFs sharing a set of uplinks.  For example, if VIF 1 and VIF 2 both use port 1 as an uplink to a VN-Tag capable switch, the VN-Tag allows the switch to forward a frame from VIF 1 back down the same link to VIF 2, because the destination VIF is different from the source VIF.
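The hairpin-forwarding rule above can be sketched in a few lines of Python.  This is purely an illustration of the logic, not the actual VN-Tag frame format: the field names, the VIF-to-port table, and the function are all invented for the example.

```python
# Illustrative sketch of the VN-Tag hairpin rule: a VN-Tag capable switch may
# send a frame back out the port it arrived on, as long as the destination
# VIF differs from the source VIF.  All names here are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TaggedFrame:
    src_vif: int  # VIF that sourced the frame
    dst_vif: int  # VIF the frame is destined for


# Hypothetical mapping of VIF -> physical switch port; VIFs 1 and 2
# share port1 as their uplink, as in the example above.
VIF_TO_PORT = {1: "port1", 2: "port1", 3: "port2"}


def egress_port(frame: TaggedFrame, ingress_port: str) -> Optional[str]:
    """Pick the egress port for a tagged frame.

    Classic Ethernet forbids reflecting a frame back out its ingress port;
    with VN-Tag the distinct destination VIF makes the hairpin unambiguous,
    so the result below is allowed to equal ingress_port.
    """
    if frame.dst_vif == frame.src_vif:
        return None  # never reflect a frame to its own source VIF
    return VIF_TO_PORT.get(frame.dst_vif)


# Frame from VIF 1 to VIF 2 arrives on port1 and legally hairpins on port1.
print(egress_port(TaggedFrame(src_vif=1, dst_vif=2), ingress_port="port1"))  # port1
```

The point of the sketch is the comparison in the middle: without the per-VIF identification carried by the tag, the switch would have no safe way to tell this hairpin apart from a forwarding loop.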

VN-Tag has been successfully used in production environments for over a year.  If you’re using a Nexus 2000, you’re already using VN-Tag.  VN-Tag is used by the Nexus 2000 Series Switches, the UCS I/O Module (IOM), and the Cisco Virtual Interface Card (VIC.)  The switching for these devices is handled by one of the two VN-Tag capable switches: the Nexus 5000 or the UCS 6100 Fabric Interconnect.  Currently all implementations of VN-Tag use hardware to write the tags.

– Joe Onisick

14 thoughts on “Defining VN-Link”

  1. Thanks for this post Joe. If I understand correctly, the functionality that is currently lacking is vMotion-aware PTS within the UCS system. A VM can take advantage of PTS today within UCS, but a running VM is pinned to a host?

    Do you have any links or documents that go into more detail about how the management is done? Your first paragraph under the PTS heading sounds intriguing. I would just like to see what the implementation looks like from a management perspective in UCS Manager and vCenter.

    1. When using PTS, guest VMs are capable of vMotion to other hosts participating in the same DVS (since that’s really what PTS is doing under the covers) – which also requires that it be within the same UCS. The target host must also use the Cisco VIC (aka Palo).

  2. Rawley, as far as management of PTS goes it’s handled within the UCS service profile in much the same way as physical server deployment is handled. Full VMware functionality is still available. The best resources for understanding the management of PTS will be the UCS configuration guides from Cisco.

  3. Hi Dave,

    In one of the Cisco DCUCI student’s guides (volume 2) it is said that: “For Cisco UCS systems hosting more than 50 virtual machines per Cisco UCS VIC M81KR adapter, consider Cisco Nexus 1000V instead of Cisco UCS VIC M81KR/PTS/DVS.”

    Why is that?? I can’t see the reason (nor can the other Cisco specialists and trainers I asked)

    Thanks in advance

  4. @Pedroyaja, The current Generation 1 Cisco VIC supports a max of 58 interfaces when used within UCS blades, due to addressing usage for the virtual interfaces. Additionally, most operating systems support fewer PCIe devices than that, which means some virtual interfaces wouldn’t be recognized. This is why, for scalability above 50, the Nexus 1000v may be a better choice.


  5. Yeah, I see that, but why “instead”? Why not use them together? When using the VIC the OS can see up to 58 interfaces (if the OS were capable of doing that), so you can even use VMware Direct Path to free up CPU workload, but this is not necessary. You can use the VIC to show the OS, for instance, four NICs and four HBAs without Direct Path, can’t you?? Or perhaps I am very wrong about the use of VICs and the Nexus 1000v

    Thanks very much Joe

  6. Ok. I think I am starting to understand. I think it refers to replacing the use of VIC+PTS with the Nexus 1000v when you are over 50 VMs on an ESX host. If not, I give up…

  7. @pedroyaja You are correct, the courseware may be slightly misleading. The Cisco VIC/Palo and Nexus 1000v are not mutually exclusive and can be used very well together, as you state. The comparison should really be UCS’s Pass-Through Switching (PTS) or Hypervisor Bypass/Direct-Path I/O vs. the 1000v, dependent on scalability.

    While the Nexus 1000v does not require the VIC, it could benefit from the additional flexibility provided by the virtual interface capabilities of the VIC.

  8. @Pedro Garcia Yes! That is exactly correct, please see my comment #9. The solutions are all similar in functionality and benefit and it makes them quite confusing. Definitely not an easy subject.
