UCS Fabric Failover Examined

Fabric failover is a unique feature of the Cisco Unified Computing System that provides a “teaming-like” function in hardware. This function is entirely transparent to the operating system running on the server and does not require any configuration inside the OS. I find this feature quite useful, because it creates resiliency for UCS blades or rack servers without depending on any drivers or configuration inside the operating system.

I have often encountered servers that were supposed to be redundantly connected to the network, as they were physically connected to two different switches. However, due to missing or misconfigured teaming, these servers would still lose their connectivity if the primary link failed. Therefore, a feature that offers resiliency against path failures for Ethernet traffic without any need for teaming configuration inside the operating system is very interesting to me. This is especially true for bare-metal Windows or Linux servers on UCS blades or rack servers.

In this post I do not intend to cover the basics of fabric failover, as this has already been done excellently by other bloggers. So if you need a quick primer or refresher on this feature, I recommend that you read Brad Hedlund’s classic post “Cisco UCS Fabric Failover: Slam Dunk? or So What?”.

Instead of rehashing the basic principles of fabric failover, I intend to dive a bit deeper into the UCSM GUI, UCSM CLI and NX-OS CLI to examine and illustrate the operation of this feature inside UCS. This serves a dual purpose: gaining more insight into the actual implementation of the fabric failover feature and becoming more familiar with some essential UCS screens and commands.
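As a quick illustration of how little configuration this feature requires, fabric failover is enabled per vNIC with a single `set fabric` command in the UCSM CLI. The sketch below is a minimal example; the org, service profile and vNIC names are hypothetical placeholders for your own environment.

```
UCS-A# scope org /
UCS-A /org # scope service-profile SP-Example    <-- hypothetical service profile name
UCS-A /org/service-profile # scope vnic eth0
UCS-A /org/service-profile/vnic # set fabric a-b
UCS-A /org/service-profile/vnic* # commit-buffer
```

Here `set fabric a-b` pins the vNIC to fabric interconnect A with automatic failover to fabric B; no teaming driver or bonding configuration is needed inside the operating system.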


Cisco Nexus 1000v ISSU

Now that Cisco has finally released the software for the Cisco ASA 1000v virtual security appliance, I am eager to install it and get a feel for its capabilities. However, before I can get to the fun part and start playing with the virtual ASA, I will have to upgrade the virtual infrastructure in my lab to the correct versions to support it. Primarily, this means that I will have to upgrade both the Nexus 1000v and the VNMC that I have running in my lab.

In a lab situation, the simplest way to accomplish this would be to just deploy a completely new instance of the Nexus 1000v and then restore its configuration from a backup, but I decided that it would be more educational to actually perform an upgrade from my current Nexus 1000v version 4.2(1)SV1(5.1a) to the version 4.2(1)SV1(5.2) that is required to support the ASA 1000v. According to the Cisco documentation it should be possible to perform this procedure as a hitless In-Service Software Upgrade (ISSU), so I decided to put that to the test and see if there are any gotchas in this process.
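For reference, an NX-OS ISSU on the Nexus 1000v VSM boils down to checking the impact of the new images and then installing them with `install all`. The commands below are a rough sketch; the kickstart and system image filenames are illustrative, not the exact names of the 4.2(1)SV1(5.2) release files.

```
! Check whether the upgrade can be performed non-disruptively (illustrative filenames)
n1000v# show install all impact kickstart bootflash:n1000v-kickstart.bin system bootflash:n1000v-system.bin

! Perform the actual in-service upgrade of both VSMs
n1000v# install all kickstart bootflash:n1000v-kickstart.bin system bootflash:n1000v-system.bin
```

The impact check reports per-module whether the upgrade is hitless before anything is installed, which is exactly the claim I want to verify in this post.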
