Brocade’s Flawed FCoE “Study”

Disclosures
I do not work for Cisco, Brocade, or any of the companies mentioned here. I do work for a reseller that sells some of these products, but this post (like all posts on this site) is my opinion only and does not necessarily reflect the views of my employer or any of the manufacturers listed here. Evaluator Group Inc. did invite me to have a call with them to discuss the study via a tweet sent from the @evaluator_group Twitter account. Mr. Fellows also emailed me to offer a call. After my analysis, I deemed such a call unnecessary.

Ok, with that out of the way…

The “Study”

There were quite a few incredulous tweets floating around this week after Brocade publicized an “independent” study performed by Russ Fellows of Evaluator Group Inc. It was also reviewed by Chris Mellor of The Register, which is how I came to know about it. In the review, Mr. Mellor states that “Brocade’s money was well spent,” though I beg to differ.

As of this posting, the study is still available from the Evaluator Group Inc. website, though I would hope that after some measure of peer review, it will be removed given how deeply flawed it is. As I do not have permission to redistribute the study, I will instead suggest that you get a copy from that site and follow along.

The stated purpose of the study was to compare traditional Fibre Channel (hereafter FC) against Fibre Channel over Ethernet (hereafter FCoE), specifically as a SCSI transport between blade servers and solid state storage. To reduce equipment requirements, only a single path was designed into the test, unlike a production environment, which would have two at a minimum. The report further stated that an attempt would be made to keep the amount of bandwidth available to each scenario equal.

The Tech

The storage vendor was not disclosed, though it should be fairly irrelevant (with one exception to be noted below). The storage was connected via two 16Gb FC links to a Brocade 6510 switch. The Brocade 6510 is a “top of rack” style traditional FC switch that is not capable of FCoE.

The chosen architecture for the FC test was an HP c7000 blade enclosure containing two blades, using a Brocade FC switch. The embedded Brocade switch is connected to the Brocade 6510 via a single 16Gb FC link.

The FCoE test was performed using a Cisco UCS architecture, consisting of a single Fabric Interconnect connected via four 10Gb converged Ethernet links to a single blade chassis containing two blades. The Fabric Interconnect is connected to the Brocade 6510 via two 8Gb FC links. As of this writing, the only FC connectivity supported by Cisco UCS is 10Gb FCoE or 1/2/4/8Gb FC.

So what’s the problem?

There are many, many fundamental flaws with the study. I eventually ran out of patience to catalog them individually, so I’m instead going to call out some of the most egregious transgressions.

To start, let’s consider testing methodology. The stated purpose of this test was to evaluate storage connectivity options, narrowed to FC and FCoE. It was not presented as a comparison of server vendors. As such, as many variables as possible should be eliminated to isolate the effects of the protocol and transport. This is the first place that this study breaks down. Why was Cisco UCS chosen? If the effects of protocol and transport are truly the goal of the test, why would the HP c7000 not also be the best choice? There are several ways to achieve FCoE in a c7000, both externally and internally.

The storage in use is connected via two 16Gb FC links. The stated reason for this is that the majority of storage deployments still use FC instead of FCoE, which is certainly true. The selection of the Brocade 6510 is interesting, however, in that Brocade has other switches that would have been capable of supporting FCoE and FC simultaneously. It’s clear that the choice of an FC-only switch was designed to force the FCoE traffic to be de-encapsulated before going to the storage. Already we can see that we are not testing FC vs. FCoE, but rather FC natively end to end vs. one hop of FCoE. Even so, the latency and performance impact caused by the encapsulation of the FC protocol into Ethernet is negligible. The storage vendor was not disclosed, and as such, I do not know if it could have also supported FCoE, making for a true end-to-end FCoE test. Despite the study’s claim, end-to-end FCoE is not immature and has been successfully deployed by many customers.

In UCS architecture, all traffic is converged between the blade chassis and the Fabric Interconnect. All switching, management, configuration, etc., occurs within the Fabric Interconnect. The use of four 10Gb Ethernet links between the chassis and Fabric Interconnect is significant overkill given the stated goal of maintaining similar bandwidth between the tests. At worst, two links would have been required to provide each blade with a dedicated 10Gb of bandwidth. Presumably, the decision to go with four was so that the claim could be made that more bandwidth was made available per blade than was available to the 16Gb-capable blades in the HP solution. The study did not disclose the logical configuration of the UCS blades, but the performance data suggests a configuration of a single vHBA per blade. In this configuration, the vHBA would follow a single 10Gb path from the blade to the Fabric Interconnect (via the IO Module), and would in turn be pinned to a single 8Gb FC uplink. Already you can see that regardless of the number of links provided from chassis to Fabric Interconnect, the bottleneck will be the 8Gb FC uplink. The second blade’s vHBA would be pinned (automatically, mind you) to the second 8Gb FC uplink. Essentially, in this configuration, each blade has 8Gb of FC bandwidth to the Brocade 6510 switch. The VIC 1240 converged network adapter (CNA) on the blade is capable of 20Gb of bandwidth to each fabric. The creation of a second vHBA and allowing the operating system to load balance across them would have provided more bandwidth. The study mentions the use of a software FCoE initiator as part of the reason for increased CPU utilization; more on that below.
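To make the pinning behavior concrete, here is a minimal sketch of the idea (illustrative only; the function and names are hypothetical, not the UCS Manager API):

```python
# Minimal sketch of UCS-style vHBA-to-uplink pinning as described
# above -- hypothetical names, not the actual UCS Manager API.

FC_UPLINK_GBPS = 8  # two 8Gb FC uplinks from the Fabric Interconnect

def pin_vhbas(vhbas, uplinks):
    """Distribute each vHBA onto a single FC uplink (no aggregation)."""
    return {vhba: uplinks[i % len(uplinks)] for i, vhba in enumerate(vhbas)}

pinning = pin_vhbas(["blade1-vHBA0", "blade2-vHBA0"],
                    ["fc-uplink-1", "fc-uplink-2"])
for vhba, uplink in pinning.items():
    # Each blade's storage traffic is capped by its one pinned uplink,
    # no matter how many 10Gb links sit between blade and chassis.
    print(f"{vhba} -> {uplink} (max {FC_UPLINK_GBPS}Gb nominal)")
```

With a single vHBA per blade, adding more chassis-to-Fabric Interconnect links changes nothing; the pinned FC uplink remains the bottleneck.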

We didn’t understand the technology, but…

In the “ease of use” comparison, it was noted that the HP environment was configured in three hours, whereas it took eight hours to configure UCS. The study makes it clear that the testers did not have the requisite skill to configure UCS and required the support of an outside VAR (who was not named) to complete the configuration. The study also states that the HP environment was configured without assistance. Clearly the engineering team involved here was skilled in HP and not UCS. How this reflects poorly on the product (and especially FC vs. FCoE – that’s the point, right?) is beyond me. I can personally configure (and have configured) a UCS environment like this in well under an hour. It would probably take me eight hours to perform a similar configuration on an HP system, given my lack of hands-on experience in configuring them. That is not a flaw of the HP product, and I wouldn’t penalize it as such. (There are lots of reasons I like UCS over the HP c7000, but that’s significantly beyond the scope of this post.)

Many of the “ease of use” characteristics cited reflected an all-Brocade environment – similar efficiencies would have existed in an all-Cisco environment as well, which the study neglected to test.

A software what?

The study observes a spike in CPU utilization with increased link utilization, which is (incorrectly) attributed to the use of a software FCoE initiator. This one point threw me (and others) off quite a bit, as it is extremely rare to use a software FCoE initiator, and non-existent when FCoE-capable hardware is present (such as the VIC 1240 in use here). After a number of confusing tweets from the @evaluator_group Twitter account, it became clear that while they say they were using a software initiator, it was a misunderstanding of the Cisco VIC 1240 – again pointing to a lack of skill and experience with the product. My suspicion is that the spike in CPU utilization (and latency, and corresponding increase in response times) occurred not due to the FCoE protocol, but rather the queuing that was required when the two 8Gb FC links (13.6Gb/s of total bandwidth available, though not aggregated – each vHBA will be pinned to one uplink) became saturated. This is entirely consistent with observed application/storage performance when links are saturated. All of this is speculation, however, as the logical configuration of the UCS was not provided. Despite there being similar total bandwidth available, neither server would have been able to burst above 6.8Gb/s, leading to queuing (and the accompanying latency/response impact).
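Where does 6.8Gb/s come from? Simple line-rate arithmetic: 8Gb FC actually signals at 8.5 Gbaud and uses 8b/10b encoding, so only eight of every ten bits on the wire carry data. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope math behind the 6.8Gb/s figure cited above.
# 8Gb FC signals at 8.5 Gbaud with 8b/10b encoding, so only 8 of
# every 10 bits on the wire are payload.

LINE_RATE_GBAUD = 8.5
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b

per_link = LINE_RATE_GBAUD * ENCODING_EFFICIENCY
print(f"Usable data rate per 8Gb FC link: {per_link:.1f} Gb/s")      # 6.8
print(f"Two pinned uplinks combined:      {2 * per_link:.1f} Gb/s")  # 13.6

# Because each vHBA is pinned to one uplink, no single server can
# burst past 6.8Gb/s; traffic beyond that queues, which shows up as
# exactly the latency/response-time spike the study observed.
```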

Is that all?

I could go on and on with individual points that were wrong, misleading, or poorly designed, but I don’t actually think it’s necessary. Once the real purpose of the test (Brocade vs. Cisco) becomes clear, every conclusion reached in the FC vs. FCoE discussion (however incorrect) is moot.

If Brocade really wants to fund an FC vs. FCoE study that will stand up to scrutiny, it needs to use the same servers (no details were provided on specific CPUs in use – they could have been wildly different for all we know), the same chassis, and really isolate the protocol as they claimed to do. Here’s the really sad part – Brocade could have proven what they wanted to (that 16Gb FC is faster than 10Gb FCoE) in a fair fight. Take the same HP chassis used for the FC test, and put in an FCoE module (with CNAs on the servers) instead. Connect via FCoE to a Brocade FCoE capable switch, and use FCoE capable storage. Despite the test’s claim, there’s a lot of FCoE storage out there in production – just ask NetApp and EMC. At comparable cable counts, 16Gb FC will be faster than 10Gb FCoE. What a shock, huh? Instead, this extraordinarily flawed “study” has cost Brocade and unfortunately Evaluator Group Inc. a lot of credibility.

I’m not anti-Brocade (though I do prefer MDS for FC switching, which is not news to anyone who knows me), I’m not anti-FC (I still like it a lot, though I think pure FC networks’ days are numbered), I’m just really, really anti-FUD. Compete on tech, compete on features, compete on value, compete on price, compete on whatever it is that makes you different. Just don’t do it in a misleading, dishonest way. Respect your customers enough to know they’ll see through blatant misrepresentations, and respect your products enough to let them compete fairly.

Updated: Check out Tony Bourke’s great response here.

Dave’s Pet Peeves

This post could also be named “Why market share doesn’t matter”, “Why I don’t care what now-standard ideas you invented”, or “What have you done for me lately?”

In my career, like most of you, I have sat through too many product presentations, marketing pitches, and technical demos to count. I have talked to countless engineers, account managers, architects, gurus, and charlatans. Some folks fit more than one category.

It struck me recently that I tune out almost immediately when I hear, in the context of explaining why I should choose a particular product, that “we have the largest market share and ship more units of this tech than any of our competitors!” Why? Because if that’s your lead story, and not the quality or innovativeness of your tech, you’re riding on past success and inertia. I’m not interested in inertia. I want to see what you’re doing that’s INTERESTING.

A certain large OEM told me that they invented a particular technology class (when in fact it was invented – more or less – by a company they acquired), as a basis for why their technology was superior. Now, don’t get me wrong – “we’ve been doing this longer than anyone else, and therefore have had more time to refine our solution” is a perfectly valid argument – but not as your lead story. Likewise, telling me (this was another OEM) that you invented a particular idea (even though you didn’t) and that everyone else is copying you now should NOT be your marketing pitch. In fact, if you tell me that you came up with an idea first and then everyone else jumped on the bandwagon later, it actually makes me want to look at your competitors. They had the advantage of seeing what you did right and wrong, and crafted their solutions afterwards – what’s to say that theirs aren’t better? First to market does NOT necessarily mean best.

Just because you were first to market doesn’t mean you’re not the best either – I’m just saying that to me, that fact is irrelevant.

I’ve been accused of being biased to particular technology companies, but that’s not actually the truth. I’m biased to technologies that make sense to me and solve real problems that I have or see in my customers’ environments. If my “A Game” technology (see link for Joe Onisick’s explanation in the context of Cisco UCS) competes with your product, it isn’t because I dislike your technology, it’s because for the problems I’m trying to solve, I prefer this one. Come out with something better, and I’ll look at it.

In my new role, I have the advantage of partnering with many different OEMs and selecting the right products to meet my customers’ needs. These needs are not always purely technical. But as a technical guy, I’m going to start with my “A Game” solution unless a customer requirement dictates something else, or something better technically comes along.

So this is my message to AMs, PMs, and anyone else that wants to convince me (and I’m very open to being convinced, I just have very high standards) to look at their technology: Do not lead with market share, time in the market, or that you invented a particular class of technology. Tell me what you do that’s innovative, solves my customers’ problems, and does it better/faster/cheaper than your competitors. THAT is what I care about.

Why doesn’t Cisco…?

I get asked a lot why Cisco doesn’t have feature X or support hardware Y in their UCS product line. A recent discussion with a coworker reminded me that lots of those questions are out there, so I might as well give my opinion on them.

Disclaimer : I don’t work for Cisco, I don’t speak for Cisco, these are just my random musings about the various questions I hear.

Why doesn’t Cisco have non-Intel blades, like AMD or RISC-type architectures?  Are they going to in the future?

As of today, Intel processors (the Xeon 5500/5600, 6500/7500 families) represent the core (pun intended) of the x86 processor market.  Sure, even Intel has other lines (Atom, for one), and AMD still makes competitive processors, but most benchmarks and analysts (except for those employed by other vendors) agree that Intel is the current king.   AMD has leapfrogged Intel in the past, and may do so again in the future, but for right now – Intel is where it’s at.

If you look at this from a cost-to-engineer perspective, it starts to make sense. It will cost Cisco just as much to develop an AMD-based blade as it does to develop one for the more popular and common Intel processors. Cisco may be losing the business of customers that prefer AMD, but until they’ve run out of customers on the Intel side of things, it just doesn’t make financial sense to attack the AMD space as well.

As for RISC/Unix type architectures (really, any non-x86 platform), whose chip would they use? HP’s? Not likely. IBM’s? Again, why support a competitor’s architecture – especially one as proprietary as IBM’s. (Side note – I’m a really big fan of IBM AIX systems, just not in the “blade” market.) Roll their own? Why bother? It’s still a question of return on investment. Even if Cisco could convince customers to abandon their existing proprietary architectures for a Cisco proprietary processor, how much business do you really think they’d do? Nowhere near enough to justify the development cost.

Why doesn’t Cisco have Infiniband adapters for their blades?  What about the rack-mount servers?

One of the key concepts in UCS is the unified fabric, using only Ethernet as the chassis-to-Fabric Interconnect transport. By eliminating protocol-specific cabling (Fibre Channel, Infiniband, etc.), the overall complexity of the environment is reduced and bandwidth can be flexibly allocated between upper-layer (above Ethernet) protocols. Instead of having separate cabling and modules for different protocols (a la legacy blade architectures), any protocol needed is encapsulated over Ethernet. Fibre Channel over Ethernet (FCoE) is the first such implementation in UCS, but certainly won’t be the last.
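To show just how thin that encapsulation layer is, here is a deliberately simplified sketch (illustrative only; the real FCoE header also carries version bits, reserved fields, and encoded SOF/EOF delimiters):

```python
# Deliberately simplified sketch of FCoE encapsulation -- the real
# FCoE header adds version bits, reserved fields, and encoded SOF/EOF
# delimiters, but the principle is the same: the FC frame rides
# inside an Ethernet frame, untouched.

FCOE_ETHERTYPE = 0x8906  # the IEEE-assigned EtherType for FCoE

def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap an intact FC frame in a minimal Ethernet header."""
    ethertype = FCOE_ETHERTYPE.to_bytes(2, "big")
    return dst_mac + src_mac + ethertype + fc_frame

frame = encapsulate(b"\x00" * 36, b"\xaa" * 6, b"\xbb" * 6)
print(len(frame), "bytes on the wire for a 36-byte FC frame")
```

Because the FC frame itself (headers, payload, CRC) is carried as-is, encapsulation is framing rather than protocol translation – which is why its latency cost is negligible.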

Infiniband as a protocol has a number of compelling features for certain applications, so I could definitely see Cisco supporting RDMA over Converged Ethernet (RoCE) in the future. RoCE does for Infiniband what FCoE does for Fibre Channel: the underlying transport is replaced with Ethernet, while the protocol is kept intact. Proponents of Infiniband will point to the transport’s legendary latency characteristics – specifically, low and predictable. The UCS unified fabric provides just such an environment: low, predictable latency that’s consistent in both inter- and intra-chassis applications.

As for the rack-mount servers, there’s nothing stopping customers from purchasing and installing their own PCI Infiniband adapters.   Cisco isn’t producing one, and won’t directly support it – but rather treats it as a 3rd party device to be supported by that manufacturer.

What about embedded hypervisors?

Another key feature of UCS is that the blades themselves are stateless, at least in theory. No identity (MACs, WWNs, UUIDs, etc.), no personality (boot order, BIOS configuration) until one is assigned by the management architecture. Were the blades to have an embedded hypervisor, that statelessness would be lost. Even though it’s potentially a very small amount of stateful data (IP address, etc.), it’s still there. Of the questions on my list, this is the one most likely to be supported eventually. My expectation is that at some point in the future, UCS Manager will be able to “push” an embedded hypervisor, along with its configuration, to the blade along with the service profile. By making UCS Manager the true stateful owner of the configuration data, having a “working copy” on the blade becomes less of an issue.
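For readers unfamiliar with the model, here is a rough illustration of what “identity lives in the manager, not the blade” means (the field names are invented for illustration and are not the UCS Manager object model):

```python
# Rough illustration of the stateless-blade model described above.
# Field names are invented for illustration -- this is not the UCS
# Manager object model.
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """All identity and personality live here, not on the blade."""
    uuid: str
    mac_addresses: list[str]
    wwpns: list[str]
    boot_order: list[str]
    bios_settings: dict = field(default_factory=dict)

def associate(profile: ServiceProfile, blade_slot: str) -> str:
    # Applying a profile stamps a blank blade with its identity;
    # disassociating returns the blade to an interchangeable state.
    return f"Blade {blade_slot} now presents UUID {profile.uuid}"

web01 = ServiceProfile(
    uuid="c0ffee00-0000-0000-0000-000000000001",
    mac_addresses=["00:25:b5:00:00:01"],
    wwpns=["20:00:00:25:b5:00:00:01"],
    boot_order=["san", "local-disk"],
)
print(associate(web01, "chassis-1/slot-3"))
```

An embedded hypervisor breaks this picture precisely because its configuration would live on the blade rather than in the profile – unless, as speculated above, the profile learns to carry it.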

Final thoughts…

I’ve used this analogy in the past, so I’ll repeat it here. I look at UCS as sort of the Macintosh of the server world. It’s a closely controlled set of hardware intended to provide the best possible user experience, at the cost of not supporting some edge-case configurations or feature sets. No, you can’t have Infiniband, or GPUs on the blade, or embedded hypervisors. The fact is that the majority of data center workloads don’t need these features. If you need those features, there are plenty of vendors that provide them. If you want a single vendor for all your servers – regardless of edge-case requirements – there are certainly vendors that provide that (HP, IBM, etc.). In my opinion, though, it’s the breadth of those product offerings that makes those solutions less attractive. In accommodating every possible use case, you end up with a very complex architecture. Cisco UCS is streamlined to provide the best possible experience for the bulk of data center workloads.

Cisco doesn’t need to be – or want to be, as near as I can tell – an “everything to everybody” solution. Pick something you can do really, really well and do it better than anyone else. Let the “other guys” work on the edge cases. Yes – that will cost Cisco some business. Believe it or not, despite what the rhetoric on Twitter would have you believe, there’s enough business out there for all of these server vendors. Cisco, even if they’re wildly successful in replacing legacy servers with UCS, isn’t going to run HP or IBM or Dell out of business. They don’t need to. They can make a lot of money, and make a lot of customers very happy, co-existing in the marketplace with these vendors. Cisco provides yet another choice. If it doesn’t meet your needs, don’t buy it. 🙂

No offense or disrespect is intended to my HP and IBM colleagues.   You guys make cool gear too, you’re just solving the problems in a different way.   Which way is “best”?  Well, now, that really comes down to the specific customer doesn’t it?

The Problem With Vendor Sponsored Testing

I suppose this post has been a long time coming.

It was spurred into reality by an exchange with @bladeguy, who pointed out that Cisco, too, sponsors tests of their equipment – just like HP and the Tolly reports. At first, I’d intended to do a comparison of the Tolly reports and the Principled Technologies reports, looking for obvious (or not so obvious) bias. Once I started down that path, however, I realized it really wasn’t necessary. Sponsored tests (from any organization) will always be biased, and therefore unreliable from a technical perspective. There are always tuning parameters that the “loser” will insist were wrong and skewed the results; there are always different ways to architect the test that would have given the “loser” an edge. That’s why they’re sponsored tests.

I commented briefly before on Tolly’s first HP vs. Cisco UCS report, so I won’t repeat it here. Suffice it to say, the bloggers and such did a pretty good job of chopping it up.

My issue with the Tolly reports I’ve seen thus far – the bandwidth debacle and the airflow test – is that they simply don’t pass the smell test. Yes, they’re repeatable. Yes, the testing results are defensible. But the conclusions and declarations of a “winner”? Not so much. I’m not faulting Tolly here. They’re doing their job, producing the product that their customer has asked them for. These tests aren’t sponsored by the HP engineering groups (for whom I have a lot of respect) looking to validate their technological prowess – they’re sponsored by the marketing departments to provide ammunition for the sales process. As such, do you really think they’re going into this looking for a fair fight? Of course not. They’re going to stack the deck in their favor as much as they think they can get away with (and knowing marketing departments, more than they can get away with). That’s what marketing groups (from every vendor) do.

@bladeguy pointed out that Cisco has engaged Principled Technologies to do some testing of UCS equipment versus both legacy and current HP equipment. At first glance, I didn’t detect any significant bias – especially in the tests comparing legacy equipment to current UCS gear. I’m not sure how any bias could be construed, since they’re really just showing the advantages and consolidation ratios possible when moving from old gear to new gear. Obviously you can’t compare against Cisco’s legacy servers (since there aren’t any), and HP servers are the logical choice since they have a huge server market share. I would suspect that similar results would have been achieved comparing legacy HP equipment against current HP equipment as well. HP can perform (and I’m sure has performed) similar tests if they’d like to demonstrate that.

The more troublesome tests are those comparing current generations of equipment from two manufacturers.   The sponsor of the test will always win, or that report will never see the light of day.  That’s simply how it works.   Companies like Tolly, Principled Technologies, etc aren’t going to bite the hand that feeds them.   As such, they’re very careful to construct the tests such that the sponsor will prevail.   This is no secret in the industry.   It’s been discussed many times before.

Even the Principled Technologies tests that compared current generations of hardware looked like pretty fair fights to me.  If you look closely at the specifications of the tested systems, they really tend to reveal the benefits of more memory, or other such considerations, as opposed to the hardware itself.   @bladeguy pointed out several items in the Principled Technologies tests that, in his opinion, skewed the results towards Cisco.   I’m not in any position to refute his claims – but the items he mentioned really come down to tuning.   So essentially he’s saying that the HP equipment in the test wasn’t tuned properly, and I’m certainly not going to argue that point.   As a sponsored test, the sponsor will be victorious.

And therein lies the problem.   Sponsored tests are meaningless, from any vendor.   I simply don’t believe that sponsored tests provide value to the technical community.  But that’s ok – they’re not targeted at the technical community.   They’re marketing tools, used by sales and marketing teams to sway the opinions of management decision makers with lots of “independent” results.    If I want to know which server platform is better for my environment, I’m going to do my own research, and if necessary invite the vendors in for a bake-off.   Prove it to me, with your tuning as necessary, and I’ll have the other vendors do the same.

My real problem with these tests, understanding that they’re not aimed at the technical community, is that many in the technical community use them to “prove” that their platform is superior to whomever they’re competing against at the moment. Like politics, these kinds of arguments just make me tired. Anyone coming into the argument already has their side picked – no amount of discussion is going to change their mind. My point in blogging about UCS is not to sell it – I don’t sell gear. It’s because I believe in the platform and enjoy educating about it.

I happen to prefer Cisco UCS, yes.   If you’ve ever been in one of my UCS classes, you’ll have also heard me say that HP and IBM – Cisco’s chief rivals in this space – also make excellent equipment with some very compelling technologies.   The eventual “best” solution simply comes down to what’s right for your organization.   I understand why these sponsored tests exist, but to me, they actually lessen the position of sponsor.   They make me wonder, “if your product is so good, why stack the deck in the test?”   The answer to that, of course, is that the engineers aren’t the ones requesting or sponsoring the test.

As came up in my UCS class today, for the vast majority of data center workloads, the small difference in performance that you might be able to get out of Vendor X or Vendor Y’s server is completely meaningless. When I used to sell and install storage, I would get asked which company’s storage to buy if the customer wanted maximum performance. My answer, every single time, was “HP, or IBM, or HDS, or EMC, or…” Why? Because technology companies are always leapfrogging each other on IOPS and MB/s and any other metric you can think of. Whatever’s fastest today in a particular set of circumstances will get replaced tomorrow by someone else.

So what’s the solution? True independent testing, of course. How do you do true independent testing? You get a mediator (Tolly, Principled Technologies, etc. are fine), have representatives from both manufacturers agree on the testing criteria, and allow each manufacturer to submit their own architecture and tuning to meet those criteria. The mediator then performs the tests. The results are published with the opportunity for both manufacturers to respond. Think any marketing organization from any company would ever allow this to happen? The standard line in the testing industry is “Vendor X was invited to participate, but declined.” Of course they declined. They’d already lost before the first test was run. I wouldn’t expect Cisco to participate in an HP-sponsored Tolly test any more than I’d expect HP to participate in a Cisco-sponsored Principled Technologies test.

Don’t chase that 1% performance delta.   You’ll just waste time and money.   Find the solution that meets your organizational needs, provides you the best management and support options, and buy it.   Let some other chump chase that magical “fastest” unicorn.   It doesn’t exist.  As in all things IT, “it depends.”

All comments are welcome, including those from testing organizations!

Great UCS write-up by Joe Onisick

If you’re not currently following Joe’s blog over at definethecloud.wordpress.com, you should start.

He just posted another great article on why UCS is his server platform of choice.   Before you write him off as just another Cisco fan-boy, definitely take a look at his logic.   Even if you have another vendor preference, he presents some excellent points to consider.

Take a look : http://definethecloud.wordpress.com/2010/05/23/why-cisco-ucs-is-my-a-game-server-architecture/

Fantastic post on statelessness : HP VirtualConnect vs. Cisco UCS

M. Sean McGee posted this great comparison of VirtualConnect and UCS. I’ve often struggled to give students a clear picture of the differences – HP will tell you that “VirtualConnect is just as good, and we’ve been doing it for years!” Well, yes… it does some things similarly, and you can’t argue the timeframe. But UCS does a lot more – and until now, I didn’t have a great source that directly compared them. From now on, all I have to do is send them to M. Sean McGee’s post!

The State of Statelessness

Tolly Report

A lot of people have been asking me what I think of the recently released Tolly report comparing the bandwidth of the HP and Cisco blade solutions.

The short answer is, I don’t think much of it. It’s technically sound, and the conclusions it reaches are perfectly reasonable – for the conditions of the tests they performed. In keeping with Tolly’s charter, the tests were repeatable, documented, and indisputable. The problem is, the results of the testing only tell half the story. The *conclusions* they reach, on the other hand, aren’t as defensible.

It’s really not necessary to get into a point by point rebuttal.   At least not for me.   I’m sure Cisco will be along any minute to do just that.

The facts that Tolly established during the report aren’t groundbreaking or surprising. Essentially, the tests were built to demonstrate the oversubscription of links between UCS chassis and Fabric Interconnects, which Cisco is quite willing to disclose at 2:1. These tests were paired with HP comparisons of blade-to-blade traffic on the same VLAN, which in HP architectures keeps the traffic local to the chassis. The interesting thing is that if the traffic were between the same blades but in different VLANs, the Cisco and HP solutions would have performed identically (assuming the same aggregation-layer architecture). In other words, the Tolly report’s figures depend on a specific configuration and test scenario – the Cisco values won’t change (or get any worse) no matter how you change the variables, while the HP values will vary widely.
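For context, the 2:1 figure is straightforward arithmetic, assuming the first-generation chassis layout (eight half-width blades with 10Gb each toward the fabric, and four 10Gb uplinks per IO Module):

```python
# Oversubscription arithmetic for a UCS chassis, assuming the
# first-generation layout: eight half-width blades at 10Gb each,
# four 10Gb uplinks from the IO Module to the Fabric Interconnect.

BLADES, BLADE_GBPS = 8, 10
UPLINKS, UPLINK_GBPS = 4, 10

server_facing = BLADES * BLADE_GBPS    # 80 Gb/s of worst-case demand
fabric_facing = UPLINKS * UPLINK_GBPS  # 40 Gb/s toward the FI

print(f"Oversubscription: {server_facing / fabric_facing:.0f}:1")  # 2:1
# The ratio is baked into the hardware, so it doesn't move with the
# traffic pattern.
```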

And that, my friends, is where I see the true benefit of Cisco’s architecture.   Change the conditions of the test repeatedly, and you’ll get the same results.

I’m not faulting Tolly in this case. Not at all. They were asked to test a certain set of conditions, and did so thoroughly and presumably accurately. It’s just that you can’t take that set of data and use it to make any kind of useful comparison between the platforms. The real world is much more complicated than a strictly controlled set of test objectives. Do we really think that HP went to Tolly and asked for a fair fight? Of course not.

Cisco Kicks HP to the Curb

Well, it’s official.   HP is not going to be a Cisco gold partner any longer.

Given HP and Cisco’s very public competition, I can’t say this is any surprise.  While HP certainly has contributed significant sales to Cisco in the past in the form of routing and switching equipment, HP has aggressively moved to position their own products in front of Cisco’s recently (and why wouldn’t they?).

Does anyone think this actually means much for the sales of either company, or is it more of a “yawn” type move?

Full Story Here

Excellent discussion and comparison of UCS and HP blade offerings

Over at By the Bell, there’s a great post and commentary on differences between HP’s offerings and Cisco UCS.   As many of the comments point out, the post itself is a bit biased towards UCS, but the comments are an excellent set of discussions about the various pros and cons of the solutions.  As good as the post is, the comments (I think) are even better.   Check it out!