I get asked a lot why Cisco doesn’t have feature X, or doesn’t support hardware Y, in their UCS product line. A recent discussion with a coworker reminded me that lots of those questions are still out there, so I might as well give my opinion on them.
Disclaimer: I don’t work for Cisco, I don’t speak for Cisco, these are just my random musings about the various questions I hear.
Why doesn’t Cisco have non-Intel blades, like AMD or RISC-type architectures? Are they going to in the future?
As of today, Intel processors (the Xeon 5500/5600, 6500/7500 families) represent the core (pun intended) of the x86 processor market. Sure, even Intel has other lines (Atom, for one), and AMD still makes competitive processors, but most benchmarks and analysts (except for those employed by other vendors) agree that Intel is the current king. AMD has leapfrogged Intel in the past, and may do so again in the future, but for right now – Intel is where it’s at.
If you look at this from a cost-to-engineer perspective, it starts to make sense. It would cost Cisco just as much to develop an AMD-based blade as it does to develop one around the more popular and common Intel processors. Cisco may be losing the business of customers who prefer AMD, but until they’ve run out of customers on the Intel side of things, it just doesn’t make financial sense to attack the AMD space as well.
As for RISC/Unix type architectures (really, any non-x86 platform), whose chip would they use? HP? Not likely. IBM? Again, why support a competitor’s architecture, especially one as proprietary as IBM’s? (Side note: I’m a really big fan of IBM AIX systems, just not in the “blade” market.) Roll their own? Why bother? It’s still a question of return on investment. Even if Cisco could convince customers to abandon their existing proprietary architectures for a Cisco proprietary processor, how much business do you really think they’d do? Nowhere near enough to justify the development cost.
Why doesn’t Cisco have Infiniband adapters for their blades? What about the rack-mount servers?
One of the key concepts in UCS is the unified fabric, using only Ethernet as the chassis-to-Fabric Interconnect transport. By eliminating protocol-specific cabling (Fibre Channel, Infiniband, etc), the overall complexity of the environment is reduced and bandwidth is flexibly allocated between upper-layer (above Ethernet) protocols. Instead of having separate cabling and modules for different protocols (a la legacy blade architectures), any protocol needed is encapsulated over Ethernet. Fibre Channel over Ethernet (FCoE) is the first such implementation in UCS, but it certainly won’t be the last.
Infiniband as a protocol has a number of compelling features for certain applications, so I could definitely see Cisco supporting RDMA over Converged Ethernet (RoCE) in the future. RoCE does for Infiniband what FCoE does for Fibre Channel: the underlying transport is replaced with Ethernet, while the protocol itself stays intact. Proponents of Infiniband will point to the transport’s legendary latency characteristics, specifically that latency is low and predictable. The UCS unified fabric provides just such an environment: low, predictable latency that’s consistent in both inter- and intra-chassis traffic.
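To make the encapsulation idea concrete, here’s a minimal Python sketch of the concept. It’s my own illustration, not real FCoE/RoCE framing code: the native frame rides untouched inside an ordinary Ethernet frame, and only the EtherType (0x8906 for FCoE, 0x8915 for RoCE v1) tells the fabric what it’s carrying. The MAC addresses are placeholders.

```python
from dataclasses import dataclass

# Registered EtherType values for the two encapsulations discussed above.
ETHERTYPE_FCOE = 0x8906   # Fibre Channel over Ethernet
ETHERTYPE_ROCE = 0x8915   # RDMA over Converged Ethernet (RoCE v1)


@dataclass
class EthernetFrame:
    """A (very) simplified Ethernet frame: the unified-fabric transport."""
    dst_mac: str
    src_mac: str
    ethertype: int
    payload: bytes  # the native FC or Infiniband-style frame rides here, untouched


def encapsulate(native_frame: bytes, ethertype: int,
                src_mac: str, dst_mac: str) -> EthernetFrame:
    """Wrap a native protocol frame in Ethernet without modifying it.

    This mirrors the point in the text: the upper-layer protocol stays
    intact; only the underlying transport changes to Ethernet.
    """
    return EthernetFrame(dst_mac=dst_mac, src_mac=src_mac,
                         ethertype=ethertype, payload=native_frame)


# The same mechanism carries either protocol; only the EtherType
# (and, of course, the payload format) differs.
fc_frame = b"...raw Fibre Channel frame..."       # placeholder payload
fcoe = encapsulate(fc_frame, ETHERTYPE_FCOE,
                   "00:00:00:00:00:01", "00:00:00:00:00:02")
```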
As for the rack-mount servers, there’s nothing stopping customers from purchasing and installing their own PCI Infiniband adapters. Cisco isn’t producing one, and won’t directly support it – but rather treats it as a 3rd party device to be supported by that manufacturer.
What about embedded hypervisors?
Another key feature of UCS is that the blades themselves are stateless, at least in theory. They have no identity (MACs, WWNs, UUIDs, etc.) and no personality (boot order, BIOS configuration) until one is assigned by the management architecture. Were the blades to carry an embedded hypervisor, that statelessness would be lost. Even though it’s potentially a very small amount of stateful data (an IP address, etc.), it’s still there. Of the questions on my list, this is the one most likely to become a supported feature. My expectation is that at some point in the future, UCS Manager will be able to “push” an embedded hypervisor, along with its configuration, to the blade along with the service profile. By making UCS Manager the true, stateful owner of the configuration data, having a “working copy” on the blade becomes less of an issue, as the sketch below tries to illustrate.
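Here’s a hypothetical Python sketch of that idea. None of these names (ServiceProfile, associate, hypervisor_image) are Cisco’s; this is not the UCS Manager API, just a way to show identity and personality, including a speculative embedded-hypervisor payload, living in the manager rather than on the blade.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical illustration only -- not the UCS Manager API.
# The point: all identity lives in the manager's profile object;
# the blade itself is blank until a profile is associated with it.


@dataclass
class ServiceProfile:
    name: str
    uuid: str                       # drawn from a UUID pool
    macs: list[str]                 # drawn from MAC pools
    wwns: list[str]                 # drawn from WWN pools
    boot_order: list[str]           # e.g. ["san", "local-disk"]
    bios_settings: dict = field(default_factory=dict)
    hypervisor_image: Optional[str] = None     # speculative: pushed with the profile
    hypervisor_config: dict = field(default_factory=dict)


@dataclass
class Blade:
    slot: int
    profile: Optional[ServiceProfile] = None   # stateless until associated


def associate(blade: Blade, profile: ServiceProfile) -> None:
    """Associate a profile with a blade: identity, personality, and any
    embedded-hypervisor payload all flow down from the manager."""
    blade.profile = profile


def disassociate(blade: Blade) -> None:
    """Return the blade to a blank, identity-free member of the pool."""
    blade.profile = None
```

In this model the copy of the hypervisor configuration on the blade is just a cache of what the manager owns, which is why keeping a “working copy” there stops being a problem for statelessness.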
Final thoughts…
I’ve used this analogy in the past, so I’ll repeat it here: I look at UCS as sort of the Macintosh of the server world. It’s a closely controlled set of hardware, designed to provide the best possible user experience at the cost of not supporting some edge-case configurations or feature sets. No, you can’t have Infiniband, or GPUs on the blade, or embedded hypervisors. The fact is that the majority of data center workloads don’t need these features. If you do need them, there are plenty of vendors that provide them. If you want a single vendor for all your servers, regardless of edge-case requirements, there are certainly vendors that provide that too (HP, IBM, etc).

In my opinion, though, it’s the breadth of those product offerings that makes those solutions less attractive. In accommodating every possible use case, you end up with a very complex architecture. Cisco UCS is streamlined to provide the best possible experience for the bulk of data center workloads. Cisco doesn’t need to be, and as near as I can tell doesn’t want to be, an “everything to everybody” solution. Pick something you can do really, really well and do it better than anyone else. Let the “other guys” work on the edge cases.

Yes, that will cost Cisco some business. Believe it or not, despite what the rhetoric on Twitter would have you believe, there’s enough business out there for all of these server vendors. Even if they’re wildly successful in replacing legacy servers with UCS, Cisco isn’t going to run HP or IBM or Dell out of business. They don’t need to. They can make a lot of money, and make a lot of customers very happy, co-existing in the marketplace with these vendors. Cisco provides yet another choice. If it doesn’t meet your needs, don’t buy it. 🙂
No offense or disrespect is intended to my HP and IBM colleagues. You guys make cool gear too, you’re just solving the problems in a different way. Which way is “best”? Well, now, that really comes down to the specific customer, doesn’t it?