One of the most often requested features in the early days of UCS was the ability to directly attach 10GE storage devices (both Ethernet and FCoE based) to the UCS Fabric Interconnects.
Up until UCSM 1.4, only two types of Ethernet port configurations existed in UCS – Server Ports (those connected to IO Modules in the chassis) and Uplink Ports (those connected to the upstream Ethernet switches). Because UCS treated all Uplink ports equally, you could not connect an end device such as a storage array or server to those ports in a supported manner. There were, of course, clever customers who found ways to do it – but it wasn’t the “right” or optimal way to do it.
Especially within the SMB market, many customers may not have existing 10G Ethernet infrastructures outside of UCS, or FC switches to connect storage to. For these customers, UCS could often provide a “data center in a box”, with the exception of storage connectivity. For Ethernet-based storage, all storage arrays had to be connected to some external Ethernet switch, while FC arrays had to be connected to a FC switch. Adding a 10G Ethernet or FC switch just for a few ports didn’t make a lot of financial sense, especially if those customers didn’t have any additional need for those devices beyond UCS.
With UCSM 1.4, all of that changes. Of course, the previous method of connecting to upstream Ethernet and FC switches still exists, and will still be the proper topology for many customers. Now, however, a new set of options has been opened.
Take a look at some of the new port types available in UCSM 1.4:
New in 1.4 are the Appliance, FCoE Storage, Monitoring Ethernet, Monitoring FC, and Storage FC port types.
I’ll cover the Monitoring types in a later post.
On the Ethernet side of things, the Appliance and FCoE Storage port types allow for the direct connection of Ethernet storage devices to the Fabric Interconnects.
The Appliance port is intended for connecting Ethernet-based storage arrays (such as those serving iSCSI or NFS) directly to the Fabric Interconnect. If you recall from previous posts, in the default deployment mode (Ethernet Host Virtualizer), UCS selects one Uplink port to accept all broadcast and multicast traffic from the upstream switches. With this new port type, any port configured as an Appliance Port will never be selected to receive that broadcast/multicast traffic from the Ethernet fabric, and VLAN support on the port can be configured independently of the other Uplink ports.
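For those who prefer the command line, here’s a minimal sketch of what creating an Appliance port looks like from the UCSM CLI. I’m quoting this from memory, so treat the exact scope names and prompts as approximate, and the slot/port numbers (expansion slot 2, port 1 here) as examples only – the same result can be reached in the GUI by right-clicking an unconfigured Ethernet port on the Fabric Interconnect.

UCS-A# scope eth-storage
UCS-A /eth-storage # scope fabric a
UCS-A /eth-storage/fabric # create interface 2 1
UCS-A /eth-storage/fabric/interface* # commit-buffer

Also keep in mind that the VLANs available to Appliance ports are kept in their own list, separate from the regular LAN cloud VLANs (the “Appliances” node in the LAN tab, or, if I recall correctly, under the same eth-storage scope in the CLI), so your storage VLAN needs to be defined there as well.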
The FCoE Storage Port type provides similar functionality to the Appliance Port type, while extending FCoE protocol support beyond the Fabric Interconnect. Note that this is not intended for an FCoE connection to another FCF (FCoE Forwarder) such as a Nexus 5000 – only direct connection of FCoE storage devices (such as those produced by NetApp and EMC) is supported. When an Ethernet port is configured as an FCoE Storage Port, traffic is expected to arrive without a VLAN tag. The Ethernet headers are stripped away and a VSAN tag is added to the FC frame. As with the existing native FC port configuration, only one VSAN is supported per FCoE Storage Port. Think of these ports like an Ethernet “access” port – the traffic is expected to arrive un-tagged, and the switching device (in this case, the Fabric Interconnect) tags the frames with a VSAN to keep track of them internally. When the frames are eventually delivered to the destination (typically the CNA on the blade), the VSAN tag is removed before delivery. Again, it’s very similar to traffic flowing through a traditional Ethernet switch, access port to access port: even though both the sending and receiving devices expect un-tagged traffic, it’s still tagged internally within the switch while in transit.
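Again from memory (so the syntax may differ slightly in your release), configuring an FCoE Storage Port from the CLI looks roughly like this – slot 2, port 2 is just an example:

UCS-A# scope fc-storage
UCS-A /fc-storage # scope fabric a
UCS-A /fc-storage/fabric # create interface fcoe 2 2
UCS-A /fc-storage/fabric* # commit-buffer

The single VSAN the port carries is then assigned to the interface (in the GUI it’s a drop-down on the port; in the CLI, if I recall correctly, you create the storage VSAN and add the port as a member) – that VSAN is the internal tag described above.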
The Storage FC Port type allows for the direct attachment of an FC storage device to one of the native FC ports on the Fabric Interconnect expansion modules. As with the FCoE Storage Port type, the FC frames arriving on these ports are expected to be un-tagged – so no connection to an MDS FC switch, etc. Each Storage FC Port is assigned a VSAN number to keep the traffic separated within the UCS Unified Fabric. When used in this way, the Fabric Interconnect does not provide any FC zoning configuration capabilities – all devices within a particular VSAN are allowed, at least at the FC switching layer (FC2), to communicate with each other. The expectation is that the devices themselves, through techniques such as LUN Masking, will provide the access control. This is acceptable for small implementations, but does not scale well for larger, more enterprise-like configurations. In those situations, an external FC switch should be used either for connectivity or to provide zoning information – the so-called “hybrid model”. I’ll cover the hybrid model in a later post.
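For completeness, a similar from-memory sketch for a Storage FC Port on one of the expansion module FC ports – once more, the slot/port numbers are examples and the exact syntax may vary:

UCS-A# scope fc-storage
UCS-A /fc-storage # scope fabric a
UCS-A /fc-storage/fabric # create interface fc 2 1
UCS-A /fc-storage/fabric* # commit-buffer

Remember that direct-attached FC or FCoE storage requires the Fabric Interconnect to be in Fibre Channel switching mode, and changing the switching mode causes the Fabric Interconnect to reboot – plan accordingly.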
Dave,
You read my mind, great post! One other note about the Appliance port: it’s also useful for devices like the Nexus 1010, which not only provides Nexus 1000v VSM functionality but also several other network appliance services, with more on the roadmap. It’s basically becoming a virtual Cat 6K.
Joe
Are there any good pointers to documentation, other than the CLI docs and your post here, that describe what an Appliance port is and how to use it? I’ve been digging around, but have come up empty.
Haven’t seen much yet, but maybe I can help. What’s your question?
Hey Dave,
I am trying to use the direct storage connect feature. I wanted to know if we need to enable Ethernet switching mode for a directly attached NAS appliance? If so, how does it affect my FC switching mode?
Mohit –
Ethernet and Fibre Channel switching modes are independent of one another – you can set them individually.
If you’re using the Ethernet “Appliance” port type, you do not need Ethernet switching mode – in fact, the whole point of the Appliance type is to allow that type of connection while remaining in Ethernet Host Virtualizer mode.
If you want to use the FCoE Storage Port type, you’ll need to be in Fibre Channel Switching mode, but not Ethernet switching mode.
Hope this helps.
When I connected my NAS using an appliance port, I found that for the vNIC of the blade server to communicate with the NAS, there had to be a connected uplink port on the 6100 – even though I created a private VLAN for the appliance port and the vNIC. Why?
Peter – I’ve answered your question here.
Could you guys qualify this: “Only direct connection of FCoE storage devices (such as those produced by NetApp and EMC) are supported.”
I’m looking at putting together a direct attached storage solution (without a fabric switch) and I’m trying to figure out my options. First being for a packaged remote solution (yes I’m familiar with Flexpod and vBlock), and next being for primary datacenter storage.
Our storage and server groups are split so I want something server side that I can manage for ‘local’ SAN boot and perhaps VMware datastores without going northbound to the primary fabric. So perhaps an internal VSAN for booting to the FCoE capable array and then access to our existing VSANs that are accessible via the northbound FC ports for access to all other tiers of storage.
Also, for the FCoE-enabled arrays, where is the zoning handled? I know that isn’t available in the current Balboa release.
End game is to get rid of local spinning disks in my blades (1000+ right now) but still be cost effective and play nice with my storage folks…
Great post Dave, it really explains VSANs quite nicely. In regards to setting up a SAN Cloud VSAN to external Fibre Channel switches, I’ve created a tutorial that your readers might find useful: http://www.sysadmintutorials.com/tutorials/cisco/cisco-ucs/cisco-ucs-fabric-interconnect-vsan-setup/
Dave,
I have a question regarding Appliance Ports. Can this also be used to connect an F5 Big-IP load balancer? And if so, what do we need to do to get it to work?
Thanks in Advance!
The appliance port effectively acts like an “access” port on any other switch. So you can connect anything to it that you’d normally have on an access port. I’m not familiar with the functioning of the F5, so I don’t know if it would be useful in this situation.
I too am attempting to connect an F5 Big-IP load balancer to the 6120.
We have 2 6120s connected redundantly to the UCS blade chassis and 1 F5 connected via 2 links (one to each 6120), but we’re having problems with traffic stopping and starting. If I disable one interface (only connect to 1 6120) everything works fine. I’ve been playing with STP, RSTP, MSTP without success.
Has anyone gotten an F5 to connect to a pair of 6120s?
Thanks!
Hi Dave,
Just wanted to update that I read the latest FAQ on FC direct connect and it says that FC direct connect support for production use has been withdrawn by Cisco. This was due to unexpected behavior seen by customers and Cisco support center when using it even for small deployments. The temporary fix is to introduce 2 MDS or Nexus switches (for redundancy).
Support for direct connect will be re-introduced at a later date with a new firmware release.
I just thought you may want to verify and update the blog.
Thanks
I’ve heard similar, but haven’t found any official notice… do you have any links you can refer me to?
Hello guys,
I’m wondering: if I attach a NetApp filer with FCoE to a FI and configure the two ports as FCoE storage ports, will I be able to use both FCoE and NFS, or will only FCoE be available?
Thank you
Georges
Only FCoE at present. Ports are configured as FCoE ports *or* Appliance ports… even though the NetApp is capable of doing both functions on the same port, the Fabric Interconnect isn’t as of today. Future software releases should enable this capability.
So that means if I want to use both FCoE and NFS, with just the Fabric Interconnect and the NetApp directly attached, what configuration should I use?
You’d need two ports going to the NetApp. One for FCoE, one for NFS.
And by enabling Switch Mode and/or FC Switch Mode, should it work as a workaround?
Not sure what you’re getting at – you’ll need FC Switch Mode for direct-attach FCoE, but you don’t need Ethernet Switch Mode to use an Appliance port.
Thank you very much Dave
My pleasure!
Anyone have an issue with 6120s doing a server cold DHCP boot, getting a DHCP address on port-channel 1 but failing on port-channel 2? Any help would be appreciated.
The topology is a Nexus 7K with port-channels to the A and B sides. The 6120 is in EH mode. The port-channels going to gateway 2 fail the DHCP boot request.
Hi Larry –
Never seen this myself – anyone else?
– Dave
How do appliance ports handle redundancy/failover on the Fabric Interconnects? For instance, if I have an appliance connecting to an appliance port on both FI A and B, and the vNIC on the appliance VLAN is configured for Fabric A with failover enabled, how would that blade connect to the appliance port if Fabric A goes down? Would it travel northbound out the uplink ports and back down to Fabric B? Thanks for the help!
Well, it really depends on what protocol you’re using, the nature of the failure, the behavior of the target, and the target you’re trying to reach.
Assuming that the “A” Fabric Interconnect indeed has failed, the appliance port on A will also be down. The vNIC will failover to B (assuming failover is configured, as per your comment). Now, we have to look at what your blade is trying to do. If it’s pure L2, trying to reach the MAC address of the port connected to the now-failed Fabric A, the traffic will be received by FI-B, and since that MAC address is unknown to that FI, will be sent out the uplink port. The upstream L2 environment, having no path to the MAC address (since FI-A is down), will flood the frame out all ports except the ingress (that came from FI-B). In any case, that MAC address is no longer accessible and the frame will eventually be dropped by all node ports.
If you’re running something L3 (iSCSI or NFS for example), the Ethernet frame generated will still be targeted for the old MAC address, at least until your IP stack decides to age out the ARP entry. Once that happens, a new ARP request will be sent, at which point the MAC address on the FI-B appliance interface will be learned *assuming that your appliance has also failed the IP address over to the other port*. Traffic can then be switched locally in FI-B, and no trip to the uplink port is required.
A better solution (assuming L3) would be to have an IP address on each of the two appliance interfaces (one for A, one for B), and have your application/initiator/etc. address both – fabric failover doesn’t really come into play here (though it could). When FI-A goes down, that path and IP would get marked unreachable, and you’d begin (or continue, assuming an active/active configuration) using the interface on FI-B – again, switched locally and not having to go to the upstream L2 environment.
Did you ever cover the information on configuring the external switch to provide zoning information – “hybrid model”?