Great UCS write-up by Joe Onisick

If you’re not currently following Joe’s blog over at definethecloud.wordpress.com, you should start.

He just posted another great article on why UCS is his server platform of choice.   Before you write him off as just another Cisco fan-boy, definitely take a look at his logic.   Even if you have another vendor preference, he presents some excellent points to consider.

Take a look : http://definethecloud.wordpress.com/2010/05/23/why-cisco-ucs-is-my-a-game-server-architecture/

UCS with Disjoint L2 Domains

How do we deal with disjoint L2 domains in UCS?

To start, what’s a disjoint L2 domain?  This is where you have two Ethernet “clouds” that never connect, but must be accessed through the same UCS Fabric Interconnect.   Take, for example, a multi-tenant scenario where multiple customers’ servers live within the same UCS cluster but must access different L2 domains.

How do we ensure that all traffic from Customer A’s blades goes only to their cloud, while Customer B’s blades connect only to theirs?

The immediately obvious answer is to use UCS pin groups to tie each customer’s interfaces (through their vNIC configuration) to the uplinks that go to their cloud.   Unfortunately, this only solves half of the problem.
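
For reference, here’s roughly what that pinning looks like from the UCS Manager CLI.  The pin group name, service profile, vNIC, and port numbers below are all invented for the example, so substitute your own:

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # create pin-group CustomerA-PinGroup
    UCS-A /eth-uplink/pin-group # create target a
    UCS-A /eth-uplink/pin-group/target # set port 1/17
    UCS-A /eth-uplink/pin-group/target # commit-buffer

Then the pin group gets tied to the customer’s vNIC inside their service profile:

    UCS-A# scope org /
    UCS-A /org # scope service-profile CustomerA-SP1
    UCS-A /org/service-profile # scope vnic eth0
    UCS-A /org/service-profile/vnic # set pin-group CustomerA-PinGroup
    UCS-A /org/service-profile/vnic # commit-buffer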

In the default operational mode of the Fabric Interconnects (called Ethernet Host Virtualizer, sometimes called End Host Virtualizer, or EHV), only one uplink is designated to receive multicast and broadcast traffic.   EHV mode assumes a single L2 fabric on the uplinks (VLAN considerations notwithstanding), so in this example, broadcasts and multicasts from only one of the two clouds would ever be accepted.   Obviously, this is a problem for the customer attached to the other cloud.
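
As an aside: if you want to verify which uplink the Fabric Interconnect has designated as the broadcast/multicast receiver, there’s an internal NX-OS command I’ve seen used for this.  It’s undocumented, so treat it as an assumption (and the VLAN ID as a placeholder):

    UCS-A# connect nxos a
    UCS-A(nxos)# show platform software enm internal info vlandb id 100

The output should list the designated receiver interface for that VLAN.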

The only way to get around this is to put the Fabric Interconnects into Ethernet Switching mode.   This causes the Fabric Interconnect to behave as a standard L2 switch, spanning tree considerations and all.  Now uplinks can receive broadcasts and multicasts regardless of which fabric they connect to.   This does, however, increase the administrative overhead of the Fabric Interconnects and reduce your flexibility in uplink configuration: since spanning tree will block redundant paths, all uplinks into the same L2 domain must now be bundled into a port channel in order to actually use their combined bandwidth.
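
For what it’s worth, the mode change itself is short from the UCS Manager CLI, but be warned that committing it reboots the Fabric Interconnect, so plan accordingly.  Something like:

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # set mode switch
    UCS-A /eth-uplink # commit-buffer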

To me, a more ideal approach would be to leave the Fabric Interconnects in EHV mode and insert another layer of L2 switches to perform the split between fabrics.

This configuration allows the Fabric Interconnect to remain in EHV mode and has the upstream L2 switches performing the split between the L2 domains.  ACLs can be configured on the L2 switches as necessary to isolate the networks, something that cannot be done on the Fabric Interconnect regardless of mode.
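
To sketch the kind of isolation I have in mind: on a Catalyst-style upstream switch, a VLAN access map can drop any traffic that tries to cross between the customer networks.  Every VLAN ID and subnet below is invented purely for the example:

    ! Assume Customer A lives on VLAN 10 (10.1.0.0/16)
    ! and Customer B on VLAN 20 (10.2.0.0/16)
    ip access-list extended CROSS-CUSTOMER
     permit ip 10.1.0.0 0.0.255.255 10.2.0.0 0.0.255.255
     permit ip 10.2.0.0 0.0.255.255 10.1.0.0 0.0.255.255
    !
    ! Drop cross-customer traffic, forward everything else
    vlan access-map ISOLATE 10
     match ip address CROSS-CUSTOMER
     action drop
    vlan access-map ISOLATE 20
     action forward
    !
    vlan filter ISOLATE vlan-list 10,20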

Both of these scenarios assume that the two customer L2 clouds use different VLAN numbering, since there’s no capability in UCS to distinguish between the same VLAN number on two different fabrics.   There are certainly L3 and other translation tricks you could use to accommodate overlapping VLANs, but that’s an entirely different post.  🙂