FCoE vs. iSCSI vs. NFS

The following was just a short note I wrote in an internal discussion about FCoE vs. iSCSI vs. NFS – and spurred by Tony Bourke’s discussion about methods for implementing FCoE.

This wasn’t intended to be a detailed analysis, just a couple of random musings.   Comments as always are welcome.

—–

While NFS and iSCSI are completely different approaches to accessing
storage, they both “suffer” from the same ailment – TCP.   Remember folks,
TCP was developed in the '70s for the express purpose of connecting
disparate networks over long, latent, and likely unreliable links.  The
overheads placed onto communication solely to address these criteria
simply aren’t appropriate in the datacenter.  We’re talking about a
protocol written to support links slower than your Bluetooth headset.  🙂
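
One concrete example: iSCSI sessions typically disable Nagle's algorithm, a TCP feature designed to conserve bandwidth on slow links by coalescing small writes, because the added latency is exactly what you don't want for block I/O.  Here's a minimal sketch (mine, not from the original discussion; the host and port are placeholders) of how an initiator-style connection gets tuned:

```python
import socket

def open_storage_connection(host: str, port: int = 3260) -> socket.socket:
    """Open a TCP connection tuned the way most iSCSI initiators tune theirs.

    Port 3260 is the standard iSCSI target port; the host is a placeholder.
    """
    sock = socket.create_connection((host, port))
    # Disable Nagle's algorithm: send small PDUs immediately instead of
    # batching them, trading a little bandwidth for lower I/O latency.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```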

iSCSI is a hack, plain and simple.  It solves a cost problem, not a
technology one.  Even its name is misleading – iSCSI.   It isn’t SCSI over
IP – it’s SCSI over TCP over IP. So call it tSCSI or tiSCSI.
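
To make the layering concrete, here's a rough back-of-the-envelope sketch (my numbers, not from the original note) of the headers each approach wraps around a SCSI command.  The byte counts assume no IP/TCP options and no VLAN tag, and the FCoE figures are approximate; this compares the stacks, it isn't a benchmark.

```python
# Common textbook header sizes, in bytes; approximate, for rough comparison only.
ISCSI_STACK = {
    "Ethernet header + FCS": 14 + 4,
    "IPv4 header": 20,
    "TCP header": 20,
    "iSCSI Basic Header Segment": 48,
}

FCOE_STACK = {
    "Ethernet header + FCS": 14 + 4,
    "FCoE encapsulation header + trailer": 14 + 4,  # approximate
    "FC frame header + CRC": 24 + 4,
}

for name, stack in (("iSCSI", ISCSI_STACK), ("FCoE", FCOE_STACK)):
    print(f"{name}: {sum(stack.values())} bytes of headers per frame")
```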

I’m not saying they’re not “good enough”, but why settle for “good enough”
now that “better” is getting much closer in price?  On the array side, I
expect more and more vendors to go the NetApp route – all protocols in one
box – just turn on which ones you want to use (via appropriate licensing,
of course).  10G DCB makes this even easier and more attractive – one
port, you pick the protocol you’re comfortable with.

As one of my coworkers points out, FCoE is a bit of a cannon – and for many customers,
their storage challenges are closer to mosquito scale.

Fibre Channel was developed with storage in mind as a datacenter protocol,
and I haven’t seen one yet that I like better for moving SCSI commands
around *in the datacenter*.   I’m sure someone will develop a new protocol at some
point that utilizes DCB-specific architectures to replace iSCSI and
FCoE… but why?   If you want a high performance, low latency,
made-for-storage protocol, run FC over whatever wire you feel like.   If
you want a low-cost solution utilizing commodity
hardware/switching/routing, use iSCSI and/or NFS.   I don’t know that
there’s a new problem to solve here.

For customers that already have and know FC, FCoE is a no-brainer.
There’s nothing new to learn about controlling access; you’re just replacing
the wires.  iSCSI and NFS introduce whole new mechanisms and mindsets into
accessing storage if you’re not used to them.
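
As a hedged illustration of those different mindsets (the identifiers below are made-up examples, not anyone's real configuration): FC and FCoE both key access control off World Wide Port Names, so existing zoning and LUN-masking habits carry straight over, while iSCSI and NFS each bring their own identities and mechanisms.

```python
# Hypothetical examples of what an admin actually works with in each world.
access_control_models = {
    "FC / FCoE": {
        "identity": "WWPN, e.g. 50:06:01:60:3b:20:19:12",
        "mechanism": "fabric zoning + array LUN masking",
    },
    "iSCSI": {
        "identity": "IQN, e.g. iqn.1998-01.com.example:host-01",
        "mechanism": "target ACLs, CHAP secrets, VLAN/IP filtering",
    },
    "NFS": {
        "identity": "client hostname or IP address",
        "mechanism": "export rules (host lists, read-only, root squash)",
    },
}
```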

I saw a quote the other day that said that Fibre Channel is like smoking –
if you’re not already doing it, there’s no reason to start now.   I get
the sentiment, but I don’t agree.   FC as a protocol is the right tool for
a lot of jobs – but it’s not the right tool for every job.

10 thoughts on “FCoE vs. iSCSI vs. NFS”

  1. No offense, but reading all these talking head bloggers regurgitating Cisco Kool Aid is getting old. Does anyone, anywhere have an original idea?? Reading Cisco white papers and Cisco press propaganda books does not an engineer make.

    By the way, I do like Cisco. But the proliferation of Cisco warriors all over the place is really frustrating.

    1. Why bother hiding behind a pseudonym Melvin? If you have something to say, by all means join the conversation. All opinions are welcome here (see previous discussions with Brocade’s Brook Reams as proof). I won’t take the bait on your troll, but I’m happy to engage in meaningful conversation and debate.

  2. I would disagree with the statement that iSCSI is a hack, or at least it’s no more of a hack than FCoE is. iSCSI puts SCSI commands on top of the TCP/IP stack, and FCoE puts SCSI on top of FCP on top of FC on top of Ethernet 🙂

  3. So, Joe Smith (if that is your real name), do you have anything insightful or even slightly helpful to say or are you just trolling and avoiding substance to hide your ignorance of this technology?

    1. I think part of our difference of opinion is how we’re looking at the intent of the protocols. Sure, they both move SCSI around, but beyond that, there are some significant differences. iSCSI is built to run on top of any TCP stack, with no consideration for what’s below it. You point out that this is a “pro” – every NIC and switch you own can run it. That’s true. But it’s also part of the “con” of iSCSI – there’s no requirement that what’s below TCP be suitable for datacenter-class communications, that it be reliable, have storage-typical latency expectations, etc.
      FCoE, on the other hand, does place requirements on what the transport layers are doing. Yes, this requires new hardware and increases costs – but it also changes the support capabilities – in my opinion, in a good way. There are storage-specific tools to monitor, manage, and troubleshoot an FCoE connection. With iSCSI, you’ve got your traditional Ethernet tools, your traditional IP tools, your traditional TCP tools, etc. I keep going back to what J. Michel Metz said (and I’m paraphrasing) – “You care about performance when things go right. You care about resiliency when things go wrong.”

  4. Speaking of NetApp, did you guys (Tony and Dave) see the IBM performance test on FC vs FCoE vs iSCSI? I’m curious what your thoughts were on how close the performance was for each of the protocols.

  5. Please do not lump FCoE, FC, and iSCSI with NFS. NFS is a much different beast than block-level protocols. I will also state that not all implementations of any of the above storage protocols are created equal.

    iSCSI takes SCSI and allows for packet forwarding and routing, removing many of the inflexibilities of SCSI and FC. The RFC standard also allows some really interesting features not available in FCP.

    NFS is not a block-level protocol; it introduces file system characteristics and should never be compared to block protocols, and doing so shows great ignorance of the volume manager and file system stack.

  6. Well, Steven, you probably should do a little more reading on my blog before calling me ignorant of storage and filesystem concepts.

    I never said that NFS and block protocols were similar. In fact, the very first statement of this post was that NFS and iSCSI were completely different methods of accessing storage (block vs. file). The only reason that NFS entered the discussion at all is that to the consumer of storage, it’s a valid method of accessing remote storage.

    Block and file protocols, however, must be compared – because the users of storage compare them when deciding on what products and protocols to use. We geeks spend lots of hours debating the finer points of various protocols, but at some decision-making levels, it’s simply a matter of having access to the quantity and quality of storage needed for a given task that meets performance, reliability, and maintainability requirements.

  7. I strongly agree with Dave.
    Most IT users forget that iSCSI is not at all cost-effective, let alone high-performing. If you want iSCSI to be efficient and competitive with FC, you eventually end up investing three times the cost of a fibre-based solution.
    Users who grab a $100 D-Link switch on eBay and run iSCSI over it aren’t serious users. They are not even worth mentioning as an example for implementation. When you have heavy-weight storage requirements, you can’t rely on the Ethernet protocol, which is one of the most inefficient protocols out there (those who remember Token Ring would recall its roughly 96% efficiency vs. about 20-28% for Ethernet).
    FC is expensive because FC is professional. iSCSI is also expensive, and any low-cost solution will eventually be a low-end solution!
