FCoE vs. iSCSI vs. NFS

The following was just a short note I wrote in an internal discussion about FCoE vs. iSCSI vs. NFS – and spurred by Tony Bourke’s discussion about methods for implementing FCoE.

This wasn’t intended to be a detailed analysis, just a couple of random musings.   Comments, as always, are welcome.

—–

While NFS and iSCSI are completely different approaches to accessing
storage, they both “suffer” from the same ailment – TCP.   Remember folks,
TCP was developed in the ’70s for the express purpose of connecting
disparate networks over long, high-latency, and likely unreliable links.  The
overhead placed on every conversation solely to address those conditions
simply isn’t appropriate in the datacenter.  We’re talking about a
protocol written to support links slower than your Bluetooth headset.  🙂

iSCSI is a hack, plain and simple.  It solves a cost problem, not a
technology one.  Even its name is misleading – iSCSI.   It isn’t SCSI over
IP – it’s SCSI over TCP over IP. So call it tSCSI or tiSCSI.
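If it helps to picture the layering, here’s a toy sketch in Python – nothing to do with any real stack, just the wrappers a SCSI command (or NFS request) rides inside on its way to the array:

```python
# Toy illustration of the encapsulation stacks discussed above -- not a
# protocol implementation, just the layering each storage operation rides on.

ENCAPSULATION = {
    # "iSCSI" is really SCSI carried in iSCSI PDUs over TCP over IP
    "iSCSI": ["Ethernet", "IP", "TCP", "iSCSI PDU", "SCSI CDB"],
    # NFS is file access (no SCSI at all) over RPC over TCP over IP
    "NFS":   ["Ethernet", "IP", "TCP", "RPC", "NFS operation"],
    # Native Fibre Channel carries SCSI (via FCP) directly in FC frames
    "FC":    ["FC frame", "FCP", "SCSI CDB"],
    # FCoE keeps the FC frame intact and wraps it in Ethernet -- no IP, no TCP
    "FCoE":  ["Ethernet", "FCoE header", "FC frame", "FCP", "SCSI CDB"],
}

if __name__ == "__main__":
    for proto, layers in ENCAPSULATION.items():
        print(f"{proto:>5}: " + " / ".join(layers))
```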

I’m not saying they’re not “good enough”, but why settle for “good enough”
when “better” is getting much closer in price?  On the array side, I
expect more and more vendors to go the NetApp route – all protocols in one
box – just turn on which ones you want to use (via appropriate licensing,
of course).  10G DCB makes this even easier and more attractive – one
port, you pick the protocol you’re comfortable with.

As one of my coworkers points out, FCoE is a bit of a cannon – and for many customers,
their storage challenges are more on the mosquito scale.

Fibre Channel was developed with storage in mind as a datacenter protocol,
and I haven’t seen one yet that I like better for moving SCSI commands around *in
the datacenter*.   I’m sure someone will develop a new protocol at some
point that utilizes DCB-specific architectures to replace iSCSI and
FCoE… but why?   If you want a high performance, low latency,
made-for-storage protocol, run FC over whatever wire you feel like.   If
you want a low-cost solution utilizing commodity
hardware/switching/routing, use iSCSI and/or NFS.   I don’t know that
there’s a new problem to solve here.

For customers that already have and know FC, FCoE is a no-brainer.
There’s nothing new to learn about how to control access – you’re just replacing
the wires.  iSCSI and NFS introduce whole new mechanisms and mindsets into
accessing storage if you’re not used to them.

I saw a quote the other day saying that Fibre Channel is like smoking –
if you’re not already doing it, there’s no reason to start now.   I get
the sentiment, but I don’t agree.   FC as a protocol is the right tool for
a lot of jobs – but it’s not the right tool for every job.

UCSM 1.4: Direct attach appliance/storage ports!

One of the most often requested features in the early days of UCS was the ability to directly attach 10GE storage devices (both Ethernet and FCoE based) to the UCS Fabric Interconnects.

Up until UCSM 1.4, only two types of Ethernet port configurations existed in UCS – Server Ports (those connected to IO Modules in the chassis) and Uplink Ports (those connected to the upstream Ethernet switches).   Because UCS treated all Uplink ports equally, you could not, in a supported manner, connect an end device such as a storage array or server to those ports.   There were, of course, clever customers who found ways to do it – but it wasn’t the “right” or optimal way to do it.

Especially within the SMB market, many customers may not have an existing 10G Ethernet infrastructure outside of UCS, or FC switches to connect storage to.   For these customers, UCS could often provide a “data center in a box”, with the exception of storage connectivity.   Ethernet-based storage arrays had to be connected to some external Ethernet switch, while FC arrays had to be connected to an FC switch.   Adding a 10G Ethernet or FC switch just for a few ports didn’t make a lot of financial sense, especially if those customers had no need for those devices beyond UCS.

With UCSM 1.4, all of that changes.   Of course, the previous method of connecting to upstream Ethernet and FC switches still exists, and will still be the proper topology for many customers.  Now, however, a whole new set of options opens up.

Take a look at some of the new port types available in UCSM 1.4:

New in 1.4 are the Appliance, FCoE Storage, Monitoring Ethernet, Monitoring FC, and Storage FC port types.

I’ll cover the Monitoring types in a later post.

On the Ethernet side of things, the Appliance and FCoE Storage port types allow for the direct connection of Ethernet storage devices to the Fabric Interconnects.

The Appliance port is intended for connecting Ethernet-based storage arrays (such as those serving iSCSI or NFS) directly to the Fabric Interconnect.   If you recall from previous posts, in the default deployment mode (Ethernet Host Virtualizer), UCS selects one Uplink port to accept all broadcast and multicast traffic from the upstream switches.   With the new Appliance port type, a port configured as an Appliance Port will never be selected to receive that broadcast/multicast traffic from the Ethernet fabric, and you can configure VLAN support on the port independently of the other Uplink ports.
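If a sketch helps, here’s a toy model of that selection behavior in Python – the port names and roles below are invented, and the real election logic inside the Fabric Interconnect is certainly more involved than “pick the first uplink”:

```python
# Toy model of Ethernet Host Virtualizer picking a single port to receive
# broadcast/multicast traffic -- simplified; port names/roles are hypothetical.

ports = [
    {"name": "Eth1/1", "role": "uplink"},     # candidate for broadcast receiver
    {"name": "Eth1/2", "role": "uplink"},     # candidate for broadcast receiver
    {"name": "Eth1/9", "role": "appliance"},  # NFS/iSCSI array: never selected
    {"name": "Eth1/3", "role": "server"},     # chassis-facing: never selected
]

def elect_broadcast_receiver(ports):
    """Only true Uplink ports are eligible; Appliance ports are skipped."""
    candidates = [p for p in ports if p["role"] == "uplink"]
    return candidates[0]["name"] if candidates else None

print(elect_broadcast_receiver(ports))  # -> "Eth1/1", never the appliance port
```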

The FCoE Storage Port type provides similar functionality to the Appliance Port type, while extending FCoE protocol support beyond the Fabric Interconnect.   Note that this is not intended for an FCoE connection to another FCF (FCoE Forwarder) such as a Nexus 5000 – only the direct connection of FCoE storage devices (such as those produced by NetApp and EMC) is supported.   When an Ethernet port is configured as an FCoE Storage Port, traffic is expected to arrive without a VLAN tag.   The Ethernet headers are stripped away and a VSAN tag is added to the FC frame.   As with the previous FC port configuration, only one VSAN is supported per FCoE Storage Port.

Think of these ports like an Ethernet “access” port – the traffic is expected to arrive un-tagged, and the switching device (in this case, the Fabric Interconnect) tags the frames with a VSAN to keep track of them internally.   When the frames are eventually delivered to the destination (typically the CNA on the blade), the VSAN tag is removed before delivery.   It’s very similar to traffic flowing through a traditional Ethernet switch, access port to access port – even though both the sending and receiving devices expect un-tagged traffic, it’s still tagged internally within the switch while in transit.
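Here’s a rough mental model of that access-port-like behavior, sketched in Python – the Fabric Interconnect obviously isn’t doing anything of the sort in code, but the tag-on-ingress / strip-on-egress flow looks roughly like this:

```python
# Mental model of the FCoE Storage Port behavior: frames arrive untagged,
# get tagged with the port's VSAN inside the Fabric Interconnect, and the
# tag is stripped again before delivery to the blade's CNA.

def ingress(frame: dict, port_vsan: int) -> dict:
    """Untagged frame arrives on an FCoE Storage Port; tag it internally."""
    assert "vsan" not in frame, "FCoE Storage Ports expect untagged traffic"
    return {**frame, "vsan": port_vsan}

def egress(frame: dict) -> dict:
    """Deliver to the destination (e.g. a blade CNA); strip the internal VSAN tag."""
    frame = dict(frame)
    frame.pop("vsan", None)
    return frame

array_frame = {"src": "array", "dst": "blade CNA", "payload": "FCP/SCSI"}
inside_fi = ingress(array_frame, port_vsan=100)  # tagged only while in transit
delivered = egress(inside_fi)                    # untagged again at the CNA
```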

The Storage FC Port type allows for the direct attachment of an FC storage device to one of the native FC ports on the Fabric Interconnect expansion modules.  As with the FCoE Storage Port type, the FC frames arriving on these ports are expected to be un-tagged – so no connection to an MDS FC switch, etc.   Each Storage FC Port is assigned a VSAN number to keep the traffic separated within the UCS Unified Fabric.   When used this way, the Fabric Interconnect does not provide any FC zoning capabilities – all devices within a particular VSAN are allowed, at least at the FC switching layer (FC-2), to communicate with each other.   The expectation is that the devices themselves will provide the access control, through techniques such as LUN masking.   This is acceptable for small implementations, but it does not scale well for larger, more enterprise-like configurations.   In those situations, an external FC switch should be used, either for connectivity or to provide zoning information – the so-called “hybrid model”.   I’ll cover the hybrid model in a later post.
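A rough sketch of what that means in practice – the initiator/target names and LUN maps below are invented purely for illustration, with the array’s LUN masking standing in for the zoning the Fabric Interconnect isn’t doing:

```python
# Toy model: with direct-attach Storage FC Ports there is no zoning, so any
# initiator in a VSAN can reach any target in that VSAN at the FC layer.
# The array's LUN masking is what actually restricts access.

vsan_membership = {
    "blade1-vHBA": 100,
    "blade2-vHBA": 100,
    "array-ctrl-a": 100,
}

# Hypothetical LUN masking table maintained on the array itself.
lun_masking = {
    "array-ctrl-a": {"blade1-vHBA": [0, 1]},   # blade2 gets no LUNs
}

def fc_layer_reachable(initiator, target):
    """Without zoning, same VSAN means the fabric will switch frames between them."""
    return vsan_membership[initiator] == vsan_membership[target]

def visible_luns(initiator, target):
    if not fc_layer_reachable(initiator, target):
        return []
    return lun_masking.get(target, {}).get(initiator, [])

print(visible_luns("blade1-vHBA", "array-ctrl-a"))  # [0, 1]
print(visible_luns("blade2-vHBA", "array-ctrl-a"))  # [] -- reachable, but masked out
```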