Cisco Nexus 1000v ISSU

Now that Cisco has finally released the software for the Cisco ASA 1000v virtual security appliance, I am eager to install it and get a feel for its capabilities. However, before I can get to the fun part and start playing with the virtual ASA, I will have to upgrade the virtual infrastructure in my lab to the correct versions to support it. Primarily, this means that I will have to upgrade both the Nexus 1000v and the VNMC that I have running in my lab.

In a lab situation, the simplest way to accomplish this would be to just deploy a completely new instance of the Nexus 1000v and then restore its configuration from a backup, but I decided that it would be more educational to actually perform an upgrade from my current Nexus 1000v version 4.2(1)SV1(5.1a) to the version 4.2(1)SV1(5.2) that is required to support the ASA 1000v. According to the Cisco documentation it should be possible to perform this procedure as a hitless In-Service Software Upgrade (ISSU), so I decided to put that to the test and see if there are any gotchas in the process.

Although I am not particularly fond of reading documentation, I firmly believe in “reading the fine manual” and following the recommended procedure when it comes to software upgrades. It wouldn’t be the first time that I set myself up for a painful troubleshooting and recovery exercise by not paying enough attention to the upgrade documentation and release notes. So I dig into the Installation and Upgrade Guide to see what I am getting myself into.

To keep this post readable I will paraphrase the content of the upgrade guide and apply it to my home lab environment.

<disclaimer> Reading a blog post never substitutes for reading the actual documentation, so be warned: In this post I may leave out obvious steps (you do have backups, don’t you?), I will skip steps that do not apply to my particular lab situation, and I might even make the odd mistake or typo here or there. If you get yourself in trouble based on the information in this post, you only have yourself to blame!</disclaimer>

To start the procedure, I copy the kickstart and system images to the boot flash of the primary VSM.
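
In my case I pulled the images in over SCP. A minimal sketch, assuming an SCP server reachable through the management VRF (the server address and username here are hypothetical, and the exact copy syntax may differ slightly per release):

n1kv# copy scp://tom@192.168.37.10/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin bootflash: vrf management
n1kv# copy scp://tom@192.168.37.10/nexus-1000v-mz.4.2.1.SV1.5.2.bin bootflash: vrf management

Then I verify that both images are available on the boot flash of the primary VSM: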

n1kv# dir bootflash://sup-active/
      77824     Sep 06 14:05:43 2012  accounting.log
       4096     Apr 25 23:05:06 2012  core/
       4096     Apr 25 23:05:06 2012  log/
      16384     Apr 25 23:04:36 2012  lost+found/
        944     Sep 11 17:18:04 2012  mts.log
   19536384     Apr 25 23:04:54 2012  nexus-1000v-kickstart-mz.4.2.1.SV1.5.1a.bin
   19540480     Sep 11 18:03:44 2012  nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin
   80812928     Apr 25 23:04:58 2012  nexus-1000v-mz.4.2.1.SV1.5.1a.bin
   76989616     Sep 11 18:06:03 2012  nexus-1000v-mz.4.2.1.SV1.5.2.bin
       7985     Sep 11 17:18:29 2012  stp.log.1
       4096     Aug 10 11:39:56 2012  sysdebug/
       8569     Sep 06 14:05:18 2012  system.cfg.new
       4096     Apr 25 23:05:31 2012  vdc_2/
       4096     Apr 25 23:05:31 2012  vdc_3/
       4096     Apr 25 23:05:31 2012  vdc_4/
   17416906     Apr 25 23:05:04 2012  vnmc-vsmpa.1.3.1d.bin

Usage for bootflash://sup-active
  329252864 bytes used
 1265623040 bytes free
 1594875904 bytes total

At this point, these images have not been synchronized to the boot flash of the standby VSM. This will be taken care of by the install all command later on.

n1kv# dir bootflash://sup-standby/
      77824     Sep 11 17:18:13 2012  accounting.log
       4096     Apr 25 23:05:06 2012  core/
       4096     Apr 25 23:05:06 2012  log/
      16384     Apr 25 23:04:36 2012  lost+found/
        750     Sep 11 17:18:04 2012  mts.log
   19536384     Apr 25 23:04:54 2012  nexus-1000v-kickstart-mz.4.2.1.SV1.5.1a.bin
   80812928     Apr 25 23:04:58 2012  nexus-1000v-mz.4.2.1.SV1.5.1a.bin
       5752     Sep 11 17:18:29 2012  stp.log.1
       1546     Sep 11 17:17:53 2012  system.cfg.new
       4096     Apr 25 23:05:31 2012  vdc_2/
       4096     Apr 25 23:05:31 2012  vdc_3/
       4096     Apr 25 23:05:31 2012  vdc_4/
   17416906     Sep 11 17:18:27 2012  vnmc-vsmpa.1.3.1d.bin

Usage for bootflash://sup-standby
  232587264 bytes used
 1362288640 bytes free
 1594875904 bytes total

Because this is a test of the ISSU procedure, I start a ping between two Windows XP VMs that are running behind the VEMs of my Nexus 1000v. However, during the first part of the procedure we will only touch the VSMs, which are not in the traffic path anyway, so no disruption should be experienced during this step.
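
Nothing fancy is needed for that; a continuous ping from one of the XP VMs does the job (the target address is a hypothetical example from my lab subnet):

C:\> ping -t 192.168.37.102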

To verify that the upgrade will be non-disruptive I first execute the show install all impact command:

n1kv# show install all impact kickstart bootflash:nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin system bootflash:nexus-1000v-mz.4.2.1.SV1.5.2.bin | no-more

Verifying image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin for boot variable "kickstart".
[####################] 100% -- SUCCESS

Verifying image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin for boot variable "system".
[####################] 100% -- SUCCESS

Verifying image type.
[####################] 100% -- SUCCESS

Extracting "system" version from image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin.
[####################] 100% -- SUCCESS

Extracting "kickstart" version from image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin.
[####################] 100% -- SUCCESS

Notifying services about system upgrade.
[####################] 100% -- SUCCESS

Compatibility check is done:
Module  bootable          Impact  Install-type  Reason
------  --------  --------------  ------------  ------
     1       yes  non-disruptive         reset  
     2       yes  non-disruptive         reset  

Images will be upgraded according to following table:
Module       Image         Running-Version             New-Version  Upg-Required
------  ----------  ----------------------  ----------------------  ------------
     1      system         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     1   kickstart         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     2      system         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     2   kickstart         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes

Module         Running-Version                                         ESX Version       VSM Compatibility       ESX Compatibility
------  ----------------------  ----------------------------------------------------  ----------------------  ----------------------
     3         4.2(1)SV1(5.1a)         VMware ESXi 5.0.0 Releasebuild-623860 (3.0)              COMPATIBLE              COMPATIBLE
     4         4.2(1)SV1(5.1a)         VMware ESXi 5.0.0 Releasebuild-623860 (3.0)              COMPATIBLE              COMPATIBLE

The output confirms that the upgrade will be non-disruptive and that my ESXi hosts are compatible with the new software. So it is time to kick off the VSM upgrade for real:

n1kv# install all kickstart bootflash:nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin system bootflash:nexus-1000v-mz.4.2.1.SV1.5.2.bin

Verifying image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin for boot variable "kickstart".
[####################] 100% -- SUCCESS

Verifying image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin for boot variable "system".
[####################] 100% -- SUCCESS

Verifying image type.
[####################] 100% -- SUCCESS

Extracting "system" version from image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin.
[####################] 100% -- SUCCESS

Extracting "kickstart" version from image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin.
[####################] 100% -- SUCCESS

Notifying services about system upgrade.
[####################] 100% -- SUCCESS

Compatibility check is done:
Module  bootable          Impact  Install-type  Reason
------  --------  --------------  ------------  ------
     1       yes  non-disruptive         reset  
     2       yes  non-disruptive         reset  

Images will be upgraded according to following table:
Module       Image         Running-Version             New-Version  Upg-Required
------  ----------  ----------------------  ----------------------  ------------
     1      system         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     1   kickstart         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     2      system         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     2   kickstart         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes

Module         Running-Version                                         ESX Version       VSM Compatibility       ESX Compatibility
------  ----------------------  ----------------------------------------------------  ----------------------  ----------------------
     3         4.2(1)SV1(5.1a)         VMware ESXi 5.0.0 Releasebuild-623860 (3.0)              COMPATIBLE              COMPATIBLE
     4         4.2(1)SV1(5.1a)         VMware ESXi 5.0.0 Releasebuild-623860 (3.0)              COMPATIBLE              COMPATIBLE

Do you want to continue with the installation (y/n)?  [n] y

Install is in progress, please wait.

Syncing image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin to standby.
[####################] 100% -- SUCCESS

Syncing image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin to standby.
[####################] 100% -- SUCCESS

Setting boot variables.
[####################] 100% -- SUCCESS

Performing configuration copy.
[####################] 100% -- SUCCESS
2012 Sep 17 15:29:08 n1kv %PLATFORM-2-MOD_REMOVE: Module 2 removed (Serial number T0C29A963AA)
2012 Sep 17 15:29:39 n1kv %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable

Module 2: Waiting for module online.
 -- SUCCESS

Notifying services about the switchover.
[####################] 100% -- SUCCESS

"Switching over onto standby".

At this point my SSH session is disconnected, because the primary VSM now executes a switchover to the secondary VSM, which has booted with the new software. I reconnect to the Nexus 1000v VSM management IP address to follow the rest of the process.
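
Reconnecting is simply a matter of opening a new SSH session to the shared VSM management address (192.168.37.60 in my lab; the admin username is an assumption):

tom@skywalker:~$ ssh admin@192.168.37.60

Once I am back in, I can confirm that the secondary VSM has been upgraded to the new version, while the primary VSM is still going through its boot process: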

n1kv# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
2    0      Virtual Supervisor Module         Nexus1000V          active *
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw      
---  ------------------  ------------------------------------------------  
2    4.2(1)SV1(5.2)      0.0                                              
3    4.2(1)SV1(5.1a)     VMware ESXi 5.0.0 Releasebuild-623860 (3.0)      
4    4.2(1)SV1(5.1a)     VMware ESXi 5.0.0 Releasebuild-623860 (3.0)      

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
2    192.168.37.60    NA                                    NA
3    192.168.37.53    564df2b3-2253-1fd5-27c2-13540ccf8d04  esxi-1.lab.layerzero.nl
4    192.168.37.54    564df736-d512-ce8f-b6a5-bddfa60aebdb  esxi-2.lab.layerzero.nl

* this terminal session

After a couple of minutes I see the primary VSM coming online as the standby supervisor:

n1kv# 2012 Sep 17 15:31:00 n1kv %PLATFORM-2-MOD_DETECT: Module 1 detected (Serial number :unavailable) Module-Type Virtual Supervisor Module Model :unavailable

At this point both VSMs have been upgraded to the 4.2(1)SV1(5.2) software, but the VEMs on the ESXi hosts are still at the previous version 4.2(1)SV1(5.1a), as can be seen from the output of the show module command:

n1kv# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          ha-standby
2    0      Virtual Supervisor Module         Nexus1000V          active *
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw      
---  ------------------  ------------------------------------------------  
1    4.2(1)SV1(5.2)      0.0                                              
2    4.2(1)SV1(5.2)      0.0                                              
3    4.2(1)SV1(5.1a)     VMware ESXi 5.0.0 Releasebuild-623860 (3.0)      
4    4.2(1)SV1(5.1a)     VMware ESXi 5.0.0 Releasebuild-623860 (3.0)      

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.37.60    NA                                    NA
2    192.168.37.60    NA                                    NA
3    192.168.37.53    564df2b3-2253-1fd5-27c2-13540ccf8d04  esxi-1.lab.layerzero.nl
4    192.168.37.54    564df736-d512-ce8f-b6a5-bddfa60aebdb  esxi-2.lab.layerzero.nl

* this terminal session

To verify that the installation has completed properly, you can issue the show install all status command. However, this command only shows the installation log for the module that you are currently on, so you need to use the attach command to verify both VSM modules:

n1kv# show install all status 
This is the log of last installation.

Continuing with installation, please wait
Trying to start the installer... 

Module 2: Waiting for module online.
 -- SUCCESS

Install has been successful.
n1kv# attach module 1
Attaching to module 1 ...
To exit type 'exit', to abort type '$.' 
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
n1kv(standby)# show install all status
This is the log of last installation.

Verifying image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin for boot variable "kickstart".
 -- SUCCESS

Verifying image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin for boot variable "system".
 -- SUCCESS

Verifying image type.
 -- SUCCESS

Extracting "system" version from image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin.
 -- SUCCESS

Extracting "kickstart" version from image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin.
 -- SUCCESS

Notifying services about system upgrade.
 -- SUCCESS

Compatibility check is done:
Module  bootable          Impact  Install-type  Reason
------  --------  --------------  ------------  ------
     1       yes  non-disruptive         reset  
     2       yes  non-disruptive         reset  

Images will be upgraded according to following table:
Module       Image         Running-Version             New-Version  Upg-Required
------  ----------  ----------------------  ----------------------  ------------
     1      system         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     1   kickstart         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     2      system         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes
     2   kickstart         4.2(1)SV1(5.1a)          4.2(1)SV1(5.2)           yes

Module         Running-Version                                         ESX Version       VSM Compatibility       ESX Compatibility
------  ----------------------  ----------------------------------------------------  ----------------------  ----------------------
     3         4.2(1)SV1(5.1a)         VMware ESXi 5.0.0 Releasebuild-623860 (3.0)              COMPATIBLE              COMPATIBLE
     4         4.2(1)SV1(5.1a)         VMware ESXi 5.0.0 Releasebuild-623860 (3.0)              COMPATIBLE              COMPATIBLE

Install is in progress, please wait.

Syncing image bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin to standby.
 -- SUCCESS

Syncing image bootflash:/nexus-1000v-mz.4.2.1.SV1.5.2.bin to standby.
 -- SUCCESS

Setting boot variables.
 -- SUCCESS

Performing configuration copy.
 -- SUCCESS

Module 2: Waiting for module online.
 -- SUCCESS

Notifying services about the switchover.
 -- SUCCESS

"Switching over onto standby".
n1kv(standby)# exit

So now it is time to upgrade the VEMs. Because I do not have VMware Update Manager deployed in my lab, I will follow the manual upgrade procedure.

The first step in this process is to notify the VMware administrator of the upgrade from the Nexus 1000v VSM using the following command:

n1kv# vmware vem upgrade notify 
Warning: 
Please ensure the hosts are running compatible ESX versions for the upgrade. Refer to corresponding
"Cisco Nexus 1000V and VMware Compatibility Information" guide.

This command notifies the VMware administrator of the availability of an upgrade, which he can either accept or reject.

Note: When I tried to execute the vmware vem upgrade notify command I discovered that my SVS connection was broken after the upgrade. This turned out to be related to DNS name resolution for the SVS connection: when I issued the connect command, I got the error ERROR: [VMWARE-VIM] Operation could not be completed due to connection failure.host nor service provided, or not known. getaddrinfo failed in tcp_connect(). When I replaced the remote hostname statement that I was using to specify the DNS name for my vCenter server with a remote ip address statement, the VSM connected without problems. Apparently something in the new VSM software has broken DNS resolution for the SVS connection, and unfortunately this is not mentioned anywhere in the release notes. For the sake of continuing the upgrade I simply decided to stick with the remote IP address instead of the remote hostname for the SVS connection; the change is sketched below.
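
For reference, the change boils down to roughly the following; the connection name vcenter and the vCenter address are placeholders for my lab values, and the connection has to be brought down before it can be modified:

n1kv# configure terminal
n1kv(config)# svs connection vcenter
n1kv(config-svs-conn)# no connect
n1kv(config-svs-conn)# no remote hostname
n1kv(config-svs-conn)# remote ip address 192.168.37.10
n1kv(config-svs-conn)# connect

With the SVS connection restored, I can continue with the VEM upgrade.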

From the VSM we can now verify that the notification has been sent:

n1kv# show vmware vem upgrade status

Upgrade VIBs: System VEM Image
Upgrade Status: Upgrade Availability Notified in vCenter
Upgrade Notification Sent Time: Tue Sep 18 10:59:03 2012
Upgrade Status Time(vCenter):  
Upgrade Start Time: 
Upgrade End Time(vCenter): 
Upgrade Error: 
Upgrade Bundle ID:
    VSM: VEM410-201208144101-BG
    DVS: VEM410-201204142101-BG

Somewhat naively, I expected this command to trigger some sort of immediate visual notification, such as a pop-up or an alarm in the vSphere client. However, all it does is generate an event and create an action box in the summary tab for the Nexus 1000v distributed vSwitch, as can be seen in the screenshot below. It would certainly be possible to create an alarm in vCenter that triggers on this event, but by default there is no direct notification to the VMware administrator. On the other hand, I would assume that upgrades are generally coordinated between the VMware and network teams anyway, so there is probably no real need for direct notification.

This screenshot shows where you can find the notification that allows the VMware administrator to accept or reject the upgrade:

Upgrade notification

I accept the upgrade, which changes the state of the “configuration issues” box to the following:

Upgrade in progress

Now I move back to the VSM to verify the upgrade status from the VSM side:

n1kv# show vmware vem upgrade status

Upgrade VIBs: System VEM Image
Upgrade Status: Upgrade Accepted by vCenter Admin
Upgrade Notification Sent Time: Tue Sep 18 10:59:03 2012
Upgrade Status Time(vCenter): Tue Sep 18 11:41:26 2012 
Upgrade Start Time: 
Upgrade End Time(vCenter): 
Upgrade Error: 
Upgrade Bundle ID:
    VSM: VEM410-201208144101-BG
    DVS: VEM410-201204142101-BG

To proceed with the upgrade, we issue the vmware vem upgrade proceed command. If you are running VMware Update Manager, this pushes the new VEM software to all the hosts; without Update Manager, it just generates an error message in vCenter that can safely be ignored.

Now we can move on to the actual installation of the VEM software on the ESXi hosts. Before installing, I need to determine which VIB file to use for the upgrade. This information can be found in the compatibility document for release 4.2(1)SV1(5.2). Running the vmware -v command on one of my ESXi hosts reveals the build information that I need to cross-reference against the tables in the compatibility document:

~ # vmware -v
VMware ESXi 5.0.0 build-623860

Based on this build number, I will need to install VEM software version cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib on both of my ESXi hosts.
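
With SSH enabled on the ESXi hosts, copying the VIB to the /tmp directory on both hosts is a simple scp from my workstation (a sketch; the file is assumed to be in my working directory):

tom@skywalker:~$ scp cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib root@esxi-1.lab.layerzero.nl:/tmp/
tom@skywalker:~$ scp cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib root@esxi-2.lab.layerzero.nl:/tmp/

Then I proceed to install the software using the VMware vCLI: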

tom@skywalker:~$ esxcli -s esxi-1.lab.layerzero.nl software vib install -v /tmp/cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib
Enter username: root
Enter password: 
 [MaintenanceModeError]
 MaintenanceMode is required to remove: [Cisco_bootbank_cisco-vem-v142-esx_4.2.1.1.5.1a.0-3.0.1]; install: [].
 Please refer to the log file for more details.

Of course, the upgrade documentation had warned me about this, but just to see what would happen I tried to install the software without first moving the host into maintenance mode. Luckily, the host does not allow me to go through with the upgrade, which would have disrupted connectivity for the VMs. So now I perform the procedure the proper way and put the first host in maintenance mode.
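
Besides using the vSphere client, maintenance mode can also be entered from the vCLI; a sketch with vicfg-hostops (check the exact syntax against your vCLI version):

tom@skywalker:~$ vicfg-hostops --server esxi-1.lab.layerzero.nl --operation enter

After DRS has moved the VMs off the host, I retry the VEM installation: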

tom@skywalker:~$ esxcli -s esxi-1.lab.layerzero.nl software vib install -v /tmp/cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib
Enter username: root
Enter password: 
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: Cisco_bootbank_cisco-vem-v144-esx_4.2.1.1.5.2.0-3.0.1
   VIBs Removed: Cisco_bootbank_cisco-vem-v142-esx_4.2.1.1.5.1a.0-3.0.1
   VIBs Skipped:

The installation completes properly this time and on the VSM I see the VEM being removed and reconnected:

n1kv# 2012 Sep 18 13:42:12 n1kv %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 3 (heartbeats lost)
2012 Sep 18 13:42:13 n1kv %VEM_MGR-2-MOD_OFFLINE: Module 3 is offline
2012 Sep 18 13:43:04 n1kv %VEM_MGR-2-VEM_MGR_DETECTED: Host esxi-1 detected as module 3
2012 Sep 18 13:43:04 n1kv %VEM_MGR-2-MOD_ONLINE: Module 3 is online

The show module and vemcmd show version commands confirm that the module has been upgraded to the new version:

n1kv# show module vem 3
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
3    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw      
---  ------------------  ------------------------------------------------  
3    4.2(1)SV1(5.2)      VMware ESXi 5.0.0 Releasebuild-623860 (3.0)      

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
3    192.168.37.53    564df2b3-2253-1fd5-27c2-13540ccf8d04  esxi-1.lab.layerzero.nl
n1kv# module vem 3 execute vemcmd show version
VEM Version: 4.2.1.1.5.2.0-3.0.1
VSM Version: 4.2(1)SV1(5.2)
System Version: VMware ESXi 5.0.0 Releasebuild-623860

Note that these commands also list the ESXi release build of the host, so rather than using the vmware -v command on the host, I could have used these commands to determine the build version. However, this trick only works for software upgrades, not for fresh installations, because this information is only visible when the VEM is already connected to the VSM.

To upgrade the second ESXi host in my lab, I move the first host out of maintenance mode and then repeat the procedure above for the second host.
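
In vCLI terms, that amounts to something like this sketch (same hedge as before on the vicfg-hostops syntax):

tom@skywalker:~$ vicfg-hostops --server esxi-1.lab.layerzero.nl --operation exit
tom@skywalker:~$ vicfg-hostops --server esxi-2.lab.layerzero.nl --operation enter
tom@skywalker:~$ esxcli -s esxi-2.lab.layerzero.nl software vib install -v /tmp/cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib
tom@skywalker:~$ vicfg-hostops --server esxi-2.lab.layerzero.nl --operation exit

After the installation of the VIB completes, I verify that all the VSMs and VEMs are now running version 4.2(1)SV1(5.2):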

n1kv# show module 
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw      
---  ------------------  ------------------------------------------------  
1    4.2(1)SV1(5.2)      0.0                                              
2    4.2(1)SV1(5.2)      0.0                                              
3    4.2(1)SV1(5.2)      VMware ESXi 5.0.0 Releasebuild-623860 (3.0)      
4    4.2(1)SV1(5.2)      VMware ESXi 5.0.0 Releasebuild-623860 (3.0)      

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.37.60    NA                                    NA
2    192.168.37.60    NA                                    NA
3    192.168.37.53    564df2b3-2253-1fd5-27c2-13540ccf8d04  esxi-1.lab.layerzero.nl
4    192.168.37.54    564df736-d512-ce8f-b6a5-bddfa60aebdb  esxi-2.lab.layerzero.nl

* this terminal session

To indicate to vCenter that the upgrade has now been completed and to clear the associated configuration warning on the dvSwitch summary page I issue the following command:

n1kv# vmware vem upgrade complete
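
If you want to double-check from the VSM side, you can look at the upgrade status once more; after the complete command, the upgrade-related fields should be cleared again (I am omitting the output here, as the exact wording differs per release):

n1kv# show vmware vem upgrade status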

Finally, there is one step that the upgrade guide doesn’t mention: updating the VEM feature level. With every new software release, new features may be introduced that require the latest version of the VEM software on the hosts. In a mixed environment where some hosts have been upgraded and some haven’t, it would be dangerous if the Nexus 1000v simply allowed you to configure these features without warning. Therefore, the system has a configurable VEM feature level. If you upgrade a Nexus 1000v but forget to update the VEM feature level, the switch may not allow you to configure certain features.

For example, when mac-pinning was first introduced, an upgraded Nexus 1000v VSM would not allow you to configure the mac-pinning feature until you had updated the feature level to the latest version. You can argue whether it is always necessary to perform this step, or whether it is only needed when you enable new features that require it. For completeness’ sake, however, I have included it here to show how you can verify the VEM feature level and update it when necessary:

n1kv# show system vem feature level 
Current feature level: 4.2(1)SV1(5.1)
n1kv# system update vem feature level ?
  <1-1>    Version number index from the list above

n1kv# system update vem feature level
Feature      Version
Level        String
--------------------
1            4.2(1)SV1(5.2)
n1kv# system update vem feature level 1
n1kv# show system vem feature level
Current feature level: 4.2(1)SV1(5.2)

And finally, let’s not forget to save the configuration for the new software version.

n1kv# copy run start
[########################################] 100%

This completes the upgrade procedure for the Nexus 1000v.

As mentioned before, I had a continuous ping running between a couple of VMs throughout the entire procedure to confirm that the upgrade was non-disruptive. I lost a few pings during the vMotions that took place when I put the hosts in maintenance mode, but that was to be expected. Essentially, the procedure as laid out in the upgrade guide provides a non-disruptive upgrade path, but it strongly depends on vMotion. The procedure requires you to place the hosts in maintenance mode one by one and then update the VEM software on each of them, which can be a time-consuming process. I am sure that VMware Update Manager would have simplified this, but the underlying process to attain a non-disruptive upgrade is essentially the same.

The fact that I lost the connection between vCenter and my VSMs during this upgrade due to the DNS issue only confirms that it is wise to perform a dry run of the upgrade process in a lab environment before you go through with an actual upgrade in production. Doing a test run helps you get a feel for the procedure and what is required, and it uncovers potential glitches. I hope that this post has at least given some insight into what ISSU on the Nexus 1000v entails.

Now that my Nexus 1000v is running the latest and greatest software, I am ready to move on to updating VNMC and then finally installing the ASA 1000v!

3 thoughts on “Cisco Nexus 1000v ISSU”

  1. You have forgotten to upgrade the VMware DVS version! This way, VMware is still displaying the old version on the switch summary page.

    • Hmm, I wasn’t aware of that command (“vmware dvs-version”). Makes me wonder whether it already existed when I did this upgrade back in 2012…

      Anyway, thanks for the comment!

      Tom
