OTV and LISP are two interesting new data center technologies that are worth examining when you are studying for a Cisco Data Center certification, such as CCNP or CCIE Data Center. Unfortunately, not everybody can afford a couple of Nexus 7000s to play with. As an instructor for Fast Lane I regularly have access to Nexus based labs, but I still thought that it would be nice to have a lab setup of my own to experiment with. Fortunately, there is now a very nice way to get some hands-on experience with these protocols through the Cisco Cloud Services Router (CSR) 1000v, which I blogged about earlier.
The CSR 1000v is based on the same IOS XE code that runs on the ASR 1000, which supports both OTV and LISP. So I decided to try to build a lab to test VM mobility using OTV and LISP in my home lab using a number of CSR 1000v instances.
Note: The CSR runs IOS-XE, not NX-OS, and as a result it uses a different command set to configure OTV and LISP. In that sense, it cannot replace practicing with actual Nexus gear for the purposes of exam preparation. However, it does allow you to examine the underlying structures and mechanisms of the technologies, and it allows you to get an idea of the common configuration elements in an OTV or LISP configuration.
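For comparison, the same basic OTV setup on a Nexus 7000 would look roughly like the sketch below. This is written from memory with placeholder interface names and multicast groups, so treat it as an illustration of the NX-OS command set rather than a verified configuration:

feature otv
otv site-identifier 0001.0001.0001
otv site-vlan 2001
!
interface Ethernet1/1
  ip igmp version 3
!
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 200
  no shutdown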
The Basic Lab Setup
I decide to implement the following topology:
I create two separate VLANs on my vSWITCH which I intend to bridge together using OTV between the routers “dc1-otv” and “dc2-otv”. I create two VMs, one in each VLAN, to which I assign IP addresses from the same IP subnet 192.168.200.0/24. VM “VM-1” gets IP address 192.168.200.1 and “VM-2” gets 192.168.200.2. The routers “dc1-xtr”, “dc2-xtr”, and “branch-xtr” will be configured as LISP xTRs later.
I put the following basic configuration on router dc1-otv:
!
hostname dc1-otv
!
enable secret cisco
!
no ip domain lookup
!
interface GigabitEthernet1
 ip address 10.200.200.1 255.255.255.0
 no shutdown
!
router ospf 1
 router-id 10.200.200.1
 network 10.200.200.1 0.0.0.0 area 0
!
line vty 0 4
 exec-timeout 0 0
 password cisco
 login
!
And I put a similar configuration on router dc2-otv:
!
hostname dc2-otv
!
enable secret cisco
!
no ip domain lookup
!
interface GigabitEthernet1
 ip address 10.200.200.2 255.255.255.0
 no shutdown
!
router ospf 1
 router-id 10.200.200.2
 network 10.200.200.2 0.0.0.0 area 0
!
line vty 0 4
 exec-timeout 0 0
 password cisco
 login
!
The next thing I do is to prepare the OTV join interface for multicast operation. I enable multicast routing, set the IGMP version to 3, and enable PIM in passive mode on the OTV join interface:
!
ip multicast-routing distributed
!
interface GigabitEthernet1
 ip pim passive
 ip igmp version 3
!
Note: Unlike the Nexus 7000, the CSR requires multicast routing to be enabled in order to enable the IGMP functionality that is required for OTV. On the Nexus 7000 it is not necessary to enable multicast routing and PIM. Simply setting the IGMP version to 3 is sufficient on that platform.
Next I configure the OTV site ID and create the Overlay interface on router dc1-otv with the following parameters:
otv site-identifier 0001.0001.0001
!
interface Overlay1
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/24
 otv join-interface GigabitEthernet1
 no shutdown
Of course I configure router dc2-otv in a similar manner:
otv site-identifier 0002.0002.0002
!
interface Overlay1
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/24
 otv join-interface GigabitEthernet1
 no shutdown
I verify the OTV configuration and confirm that the adjacency between the two routers has been established:
dc1-otv#show otv overlay 1
Overlay Interface Overlay1
 VPN name                 : None
 VPN ID                   : 1
 State                    : UP
 AED Capable              : No, site interface not up
 IPv4 control group       : 239.1.1.1
 Mcast data group range(s): 232.1.1.0/24
 Join interface(s)        : GigabitEthernet1
 Join IPv4 address        : 10.200.200.1
 Tunnel interface(s)      : Tunnel0
 Encapsulation format     : GRE/IPv4
 Site Bridge-Domain       : None
 Capability               : Multicast-reachable
 Is Adjacency Server      : No
 Adj Server Configured    : No
 Prim/Sec Adj Svr(s)      : None

dc1-otv#show otv adjacency
Overlay 1 Adjacency Database
Hostname                      System-ID      Dest Addr       Up Time   State
dc2-otv                       001e.bd03.a200 10.200.200.2    00:00:47  UP
One thing worth noticing is that the OTV devices are not marked as “AED capable” yet. This is because the OTV site VLAN is not configured and operational at this point. The site VLAN configuration is done slightly differently on the CSR compared to the Nexus 7000. The CSR is not a switch, and therefore does not support direct configuration of VLANs. Instead of a site VLAN, a site bridge-domain is configured. The bridge-domain represents the broadcast domain and can be linked to interfaces and VLAN tags using so-called “service instances”. To set up the site VLAN I use the following commands on router dc1-otv:
!
otv site bridge-domain 2001
!
interface GigabitEthernet2
 no shutdown
 service instance 2001 ethernet
  encapsulation dot1q 2001
  bridge-domain 2001
And similarly, I configure the following on router dc2-otv:
!
otv site bridge-domain 2002
!
interface GigabitEthernet2
 no shutdown
 service instance 2002 ethernet
  encapsulation dot1q 2002
  bridge-domain 2002
These commands essentially create a bridged domain on the router, which is then associated with interface GigabitEthernet2 for frames that carry an 802.1Q VLAN tag of 2001. For more information about Ethernet service instances, refer to Configuring Ethernet Virtual Connections on the Cisco ASR 1000 Series Router in the ASR 1000 configuration guide.
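If you want to double-check that the service instance and bridge-domain actually came up, the EVC state can be inspected on the CSR. The exact commands and output vary a bit between IOS-XE releases, but something along these lines should work:

dc1-otv#show bridge-domain 2001
dc1-otv#show ethernet service instance interface GigabitEthernet2 detail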
At this point the OTV overlay has become fully operational on both sides:
dc1-otv#show otv overlay 1
Overlay Interface Overlay1
 VPN name                 : None
 VPN ID                   : 1
 State                    : UP
 AED Capable              : Yes
 IPv4 control group       : 239.1.1.1
 Mcast data group range(s): 232.1.1.0/24
 Join interface(s)        : GigabitEthernet1
 Join IPv4 address        : 10.200.200.1
 Tunnel interface(s)      : Tunnel0
 Encapsulation format     : GRE/IPv4
 Site Bridge-Domain       : 2001
 Capability               : Multicast-reachable
 Is Adjacency Server      : No
 Adj Server Configured    : No
 Prim/Sec Adj Svr(s)      : None
In this lab the site bridge-domain configuration is fairly meaningless, because there is only a single OTV edge device per site. Therefore, I just attached the bridge-domain to an arbitrary interface and VLAN tag, simply to ensure that the overlay interface would become operational. In reality, you should take care that the VLAN selected for the site bridge-domain is properly extended between OTV edge devices within the site, but not carried across the overlay.
The final piece in this configuration is to actually extend some VLANs across the OTV overlay. Again, this is done through a bridge-domain and corresponding service instance configurations. I add the following configuration to router dc1-otv:
!
interface GigabitEthernet2
 service instance 201 ethernet
  encapsulation untagged
  rewrite ingress tag push dot1q 200 symmetric
  bridge-domain 200
!
interface Overlay1
 service instance 201 ethernet
  encapsulation dot1q 200
  bridge-domain 200
!
And I add a similar configuration on dc2-otv:
!
interface GigabitEthernet2
 service instance 202 ethernet
  encapsulation untagged
  rewrite ingress tag push dot1q 200 symmetric
  bridge-domain 200
!
interface Overlay1
 service instance 202 ethernet
  encapsulation dot1q 200
  bridge-domain 200
!
This configuration is a little peculiar, which has to do with the specifics of my lab setup. The intention is to create a single VLAN 200 stretched across the two DC sites. However, in my lab this is all set up on a common virtual infrastructure. To still create two separate “VLAN 200” instances I essentially created two VMware port-groups and associated VLANs (VLAN 201 and VLAN 202). CSR dc1-otv and VM-1 are both connected to VLAN 201. Similarly, CSR dc2-otv and VM-2 are connected to VLAN 202 (see diagram). As a result, the “VLAN 200” frames arrive as untagged frames on the internal interfaces of CSR dc1-otv and dc2-otv. These frames then need to be bridged across the cloud as VLAN 200 frames. In a more realistic scenario the frames would already arrive with VLAN 200 tags on the internal interfaces and the rewrite commands would be unnecessary.
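For reference, in that more realistic scenario (frames arriving on GigabitEthernet2 already tagged with VLAN 200) the service instance on the internal interface would reduce to something like the following sketch, while the Overlay1 side would stay the same:

interface GigabitEthernet2
 service instance 200 ethernet
  encapsulation dot1q 200
  bridge-domain 200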
With these final steps the OTV configuration is finished and I can put it to the test. So I ping from VM-1 in DC-1 (192.168.200.1) to VM-2 in DC-2 (192.168.200.2):
The ping succeeds, confirming that OTV is operational.
Update: Brantley Richbourg and Brandon Farmer pointed out in the comments that this ping fails if you don’t have your VMware vSwitch set to accept promiscuous mode. Initially, I didn’t notice this behavior, because I already had my vSwitch set to accept promiscuous mode for different reasons. I retested the lab with promiscuous mode set to “reject” and confirmed that this stops the ping from working. The explanation for this behavior is that the frames from VM-1 to VM-2 have the VM-2 MAC address as the destination MAC address, which is not registered to the CSR virtual NIC. For unicast MAC frames, the vSwitch normally only sends frames with a particular destination MAC address to the VM that is associated with this MAC address (if it is local) or to an uplink (if it is remote). Therefore, the VM-1 to VM-2 frame is not sent to the CSR VM, so the CSR never sees the frame. As a result, the frame cannot be forwarded across the overlay. When promiscuous mode is set to “accept” on the vSwitch, the CSR receives all traffic on VLAN 201, allowing it to forward the VM-1 to VM-2 traffic across the overlay. So, if this ping fails in your lab, make sure that you have your vSwitch set to accept promiscuous mode! Thanks to Brantley and Brandon for pointing out this potential issue with the lab setup!
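If you prefer the ESXi command line over the vSphere client for this change, something like the following should do the trick on a standard vSwitch (assuming the vSwitch is named vSwitch1; port-group level overrides and distributed switches are configured differently):

esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true
esxcli network vswitch standard policy security get --vswitch-name=vSwitch1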
Now, let’s verify the MAC address entries and ARP entries on the OTV edge devices:
dc1-otv#show otv route

Codes: BD - Bridge-Domain, AD - Admin-Distance,
       SI - Service Instance, * - Backup Route

OTV Unicast MAC Routing Table for Overlay1

 Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)
----------------------------------------------------------
 0    200  200    000c.296a.a4ad 40    BD Eng Gi2:SI201
 0    200  200    000c.297c.e283 50    ISIS   dc2-otv

2 unicast routes displayed in Overlay1

----------------------------------------------------------
2 Total Unicast Routes Displayed

dc1-otv#show otv arp-nd-cache

Overlay1 ARP/ND L3->L2 Address Mapping Cache

BD     MAC            Layer-3 Address  Age (HH:MM:SS) Local/Remote
 200    000c.297c.e283 192.168.200.2    00:00:33       Remote
Also, the internal OTV IS-IS database can be examined to confirm that the MAC addresses are advertised by the OTV edge devices:
dc1-otv#show otv isis database detail

Tag Overlay1:
IS-IS Level-1 Link State Database:
LSPID                 LSP Seq Num  LSP Checksum  LSP Holdtime      ATT/P/OL
dc2-otv.00-00         0x0000000C   0xBED7        933               0/0/0
  Area Address: 00
  NLPID:        0xCC 0x8E
  Hostname: dc2-otv
  Metric: 10         IS-Extended dc1-otv.01
  Layer 2 MAC Reachability: topoid 0, vlan 200, confidence 1
    000c.297c.e283
dc1-otv.00-00       * 0x0000000C   0xE51A        959               0/0/0
  Area Address: 00
  NLPID:        0xCC 0x8E
  Hostname: dc1-otv
  Metric: 10         IS-Extended dc1-otv.01
  Layer 2 MAC Reachability: topoid 0, vlan 200, confidence 1
    000c.296a.a4ad
dc1-otv.01-00       * 0x0000000A   0x2916        753               0/0/0
  Metric: 0          IS-Extended dc1-otv.00
  Metric: 0          IS-Extended dc2-otv.00
Although it is usually not necessary to dive into the IS-IS database that is used by the OTV control plane, it is nice to be able to take a peek under the hood.
So now that we have a working OTV setup that extends VLAN 200 and the corresponding subnet 192.168.200.0/24 across the two DC sites, it is time to add LISP to optimize the inbound routing for mobile VMs.
Preparing for LISP
To start, I add two additional routers to the network, which will act as LISP ingress tunnel routers (ITR) and egress tunnel routers (ETR) for their respective sites: router dc1-xtr and router dc2-xtr. I connect these routers to the IP core and enable HSRP on the interface that faces the stretched VLAN 200. This results in the following basic configurations:
!
hostname dc1-xtr
!
enable secret cisco
!
no ip domain lookup
!
interface GigabitEthernet1
 ip address 10.200.200.3 255.255.255.0
 no shutdown
!
interface GigabitEthernet2
 ip address 192.168.200.252 255.255.255.0
 standby 200 ip 192.168.200.254
 no shutdown
!
router ospf 1
 router-id 10.200.200.3
 network 10.200.200.3 0.0.0.0 area 0
!
line vty 0 4
 exec-timeout 0 0
 password cisco
 login
!
!
hostname dc2-xtr
!
enable secret cisco
!
no ip domain lookup
!
interface GigabitEthernet1
 ip address 10.200.200.4 255.255.255.0
 no shutdown
!
interface GigabitEthernet2
 ip address 192.168.200.253 255.255.255.0
 standby 200 ip 192.168.200.254
 no shutdown
!
router ospf 1
 router-id 10.200.200.4
 network 10.200.200.4 0.0.0.0 area 0
!
line vty 0 4
 exec-timeout 0 0
 password cisco
 login
!
While doing my verifications I notice something interesting in the behavior of HSRP:
dc1-xtr#show standby brief
                     P indicates configured to preempt.
                     |
Interface   Grp  Pri P State   Active          Standby         Virtual IP
Gi2         200  100   Active  local           unknown         192.168.200.254

dc2-xtr#show standby brief
                     P indicates configured to preempt.
                     |
Interface   Grp  Pri P State   Active          Standby         Virtual IP
Gi2         200  100   Active  local           unknown         192.168.200.254
Both routers dc1-xtr and dc2-xtr consider themselves to be the active router and do not list a standby router. Is OTV not properly bridging the traffic of these routers across the overlay? Let’s try a quick ping to see if these routers have connectivity across the extended VLAN 200:
dc1-xtr#ping 192.168.200.253
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.200.253, timeout is 2 seconds:
..!!!
Success rate is 60 percent (3/5), round-trip min/avg/max = 1/1/2 ms
So OTV seems to properly bridge the packets between routers dc1-xtr and dc2-xtr across the overlay. Let’s have a closer look at the OTV edge devices:
dc1-otv#show otv detail | include FHRP
FHRP Filtering Enabled   : Yes

dc1-otv#show otv route

Codes: BD - Bridge-Domain, AD - Admin-Distance,
       SI - Service Instance, * - Backup Route

OTV Unicast MAC Routing Table for Overlay1

 Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)
----------------------------------------------------------
 0    200  200    0000.0c07.acc8 40    BD Eng Gi2:SI201
*0    200  200    0000.0c07.acc8 50    ISIS   dc2-otv
 0    200  200    000c.290b.cab0 50    ISIS   dc2-otv
 0    200  200    000c.295d.29e5 40    BD Eng Gi2:SI201
 0    200  200    000c.296a.a4ad 40    BD Eng Gi2:SI201
 0    200  200    000c.297c.e283 50    ISIS   dc2-otv

6 unicast routes displayed in Overlay1

----------------------------------------------------------
6 Total Unicast Routes Displayed

dc1-otv#sh otv isis database detail

Tag Overlay1:
IS-IS Level-1 Link State Database:
LSPID                 LSP Seq Num  LSP Checksum  LSP Holdtime      ATT/P/OL
dc2-otv.00-00         0x00000025   0xF13B        1178              0/0/0
  Area Address: 00
  NLPID:        0xCC 0x8E
  Hostname: dc2-otv
  Metric: 10         IS-Extended dc1-otv.01
  Layer 2 MAC Reachability: topoid 0, vlan 200, confidence 1
    0000.0c07.acc8 000c.290b.cab0 000c.297c.e283
dc1-otv.00-00       * 0x00000025   0x5EE0        1180              0/0/0
  Area Address: 00
  NLPID:        0xCC 0x8E
  Hostname: dc1-otv
  Metric: 10         IS-Extended dc1-otv.01
  Layer 2 MAC Reachability: topoid 0, vlan 200, confidence 1
    000c.295d.29e5 000c.296a.a4ad
dc1-otv.01-00       * 0x0000000D   0x2319        659               0/0/0
  Metric: 0          IS-Extended dc1-otv.00
  Metric: 0          IS-Extended dc2-otv.00
As it turns out, OTV on the CSR has FHRP filtering built-in and enabled by default. This means that it is not necessary to configure customized access-lists to filter HSRP hellos across the overlay. Interestingly enough, it does seem to advertise the HSRP MAC address through OTV IS-IS. When you configure manual FHRP filtering on a Nexus 7000 you would usually suppress these advertisements as well as the actual HSRP packets. It looks like this behavior on the CSR could lead to continual MAC updates for the HSRP MAC address, which in turn could affect control plane stability. This may be a point worth investigating further if you are considering deploying CSR/ASR-based OTV in production. On the other hand, I really like the fact that the CSR has FHRP filtering straight out of the box and that it is controllable through a simple command (otv filter-fhrp), rather than a cumbersome access-list configuration.
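If you ever need the HSRP hellos to cross the overlay, for example to run a single first-hop gateway for both sites, it should simply be a matter of negating that command on the overlay interface. I did not test this scenario, but the sketch would be:

interface Overlay1
 no otv filter-fhrp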
Now that we have verified the basic setup we can start on the actual LISP configuration. I start by configuring my LISP map-server and map-resolver. I want to make these vital functions redundant and I want to separate these functions from the LISP tunnel routers (xTR) to keep the configurations a bit cleaner. Although it is definitely possible to put the map-server/map-resolver functions on a router that also acts as a LISP xTR, I think this might make the configurations a bit harder to understand. Also, I intend to take some LISP sniffer traces on this lab later and having separate IP addresses for the different functions will make these traces more easily readable.
Of course I could have added two more routers to my lab, but since the OTV routers are separate from the LISP xTRs anyway, I decide to make these routers perform the role of LISP map-servers and map-resolvers. First, I add the LISP map-server function to routers dc1-otv and dc2-otv:
router lisp
 site DC1-DC2
  authentication-key DC1-DC2-S3cr3t
  eid-prefix 192.168.200.0/24 accept-more-specifics
 !
 ipv4 map-server
I added the accept-more-specifics keyword to the EID prefix to allow registration of individual mobile /32 routes later.
Next, I set up the map-resolver function on routers dc1-otv and dc2-otv:
interface Loopback37
 description LISP map-resolver anycast address
 ip address 10.37.37.37 255.255.255.255
!
router ospf 1
 network 10.37.37.37 0.0.0.0 area 0
!
router lisp
 ipv4 map-resolver
To make the LISP resolver function redundant I add an anycast address (10.37.37.37) to both dc1-otv and dc2-otv. This address will be configured as the LISP map-resolver address on the LISP xTRs.
Note: When configuring anycast IP addresses on routers, you should take proper care that these addresses never get selected as a router ID for any routing protocol. This is why I specifically configured the OSPF router ID on these routers using the router-id command.
Now that the LISP map-server and map-resolver have been set up, I configure the routers dc1-xtr and dc2-xtr as LISP xTRs for the 192.168.200.0/24 EID space that is associated with the OTV extended VLAN 200. On router dc1-xtr I add the following commands:
router lisp
 locator-set DC1
  10.200.200.3 priority 10 weight 50
 !
 database-mapping 192.168.200.0/24 locator-set DC1
 !
 ipv4 itr map-resolver 10.37.37.37
 ipv4 itr
 ipv4 etr map-server 10.200.200.1 key DC1-DC2-S3cr3t
 ipv4 etr map-server 10.200.200.2 key DC1-DC2-S3cr3t
 ipv4 etr
And on router dc2-xtr I add the following commands:
router lisp
 locator-set DC2
  10.200.200.4 priority 10 weight 50
 !
 database-mapping 192.168.200.0/24 locator-set DC2
 !
 ipv4 itr map-resolver 10.37.37.37
 ipv4 itr
 ipv4 etr map-server 10.200.200.1 key DC1-DC2-S3cr3t
 ipv4 etr map-server 10.200.200.2 key DC1-DC2-S3cr3t
 ipv4 etr
The only real difference between these configurations is the locator IP address, which is 10.200.200.3 for DC1 and 10.200.200.4 for DC2.
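Apart from checking the map-servers, the locally configured mapping can also be verified directly on each xTR. I am not showing the output here, but a command like the following should list the 192.168.200.0/24 EID prefix with its local locator:

dc1-xtr#show ip lisp database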
Next, I verify that the EID prefix 192.168.200.0/24 has been registered on the map-servers:
dc1-otv#show lisp site detail
LISP Site Registration Information

Site name: DC1-DC2
Allowed configured locators: any
Allowed EID-prefixes:
  EID-prefix: 192.168.200.0/24
    First registered:     00:04:05
    Routing table tag:    0
    Origin:               Configuration, accepting more specifics
    Merge active:         No
    Proxy reply:          No
    TTL:                  1d00h
    State:                complete
    Registration errors:
      Authentication failures:   0
      Allowed locators mismatch: 0
    ETR 10.200.200.3, last registered 00:00:09, no proxy-reply, map-notify
                      TTL 1d00h, no merge, hash-function sha1, nonce 0x24CDDA34-0x0CF777A1
                      state complete, no security-capability
                      xTR-ID 0x78ED0ACF-0x8B46F5F7-0xF896252E-0xEC47696E
                      site-ID unspecified
      Locator       Local  State      Pri/Wgt
      10.200.200.3  yes    up          10/50
    ETR 10.200.200.4, last registered 00:00:19, no proxy-reply, map-notify
                      TTL 1d00h, no merge, hash-function sha1, nonce 0xEB7D2794-0x64B6413D
                      state complete, no security-capability
                      xTR-ID 0x58ABCEE0-0x5DF04443-0xBB0C7B58-0x4AEB3FD8
                      site-ID unspecified
      Locator       Local  State      Pri/Wgt
      10.200.200.4  yes    up          10/50
To test the LISP functionality, I need to have a third site that I can run connectivity tests from. This is the role of the branch router in the lab topology. I add the following basic configuration to that router:
!
hostname branch-xtr
!
enable secret cisco
!
no ip domain lookup
!
ip dhcp pool BRANCH-LAN
 network 172.16.37.0 255.255.255.0
 default-router 172.16.37.254
!
interface GigabitEthernet1
 ip address 10.200.200.5 255.255.255.0
 no shutdown
!
interface GigabitEthernet2
 ip address 172.16.37.254 255.255.255.0
 no shutdown
!
router ospf 1
 router-id 10.200.200.5
 network 10.200.200.5 0.0.0.0 area 0
!
line vty 0 4
 exec-timeout 0 0
 password cisco
 login
!
Because this router is part of the LISP enabled network and the prefix 172.16.37.0/24 is part of the EID space, we need to create a corresponding LISP site configuration for this site on the LISP map-servers. So I add the following commands to routers dc1-otv and dc2-otv:
router lisp
 site BRANCH
  authentication-key Br@nch-S3cr3t
  eid-prefix 172.16.37.0/24
Now that the map-servers have been set up, we can add the LISP xTR configuration on the branch router. I add the following to router branch-xtr:
router lisp
 database-mapping 172.16.37.0/24 10.200.200.5 priority 10 weight 100
 ipv4 itr map-resolver 10.37.37.37
 ipv4 itr
 ipv4 etr map-server 10.200.200.1 key Br@nch-S3cr3t
 ipv4 etr map-server 10.200.200.2 key Br@nch-S3cr3t
 ipv4 etr
And again, I confirm that the EID prefix has been properly registered on the map-servers:
dc1-otv#show lisp site name BRANCH
Site name: BRANCH
Allowed configured locators: any
Allowed EID-prefixes:
  EID-prefix: 172.16.37.0/24
    First registered:     00:00:11
    Routing table tag:    0
    Origin:               Configuration
    Merge active:         No
    Proxy reply:          No
    TTL:                  1d00h
    State:                complete
    Registration errors:
      Authentication failures:   0
      Allowed locators mismatch: 0
    ETR 10.200.200.5, last registered 00:00:11, no proxy-reply, map-notify
                      TTL 1d00h, no merge, hash-function sha1, nonce 0x4A43B6D4-0xF8540179
                      state complete, no security-capability
                      xTR-ID 0xE8323230-0xFABD8623-0x2448E48B-0x2B80C3A0
                      site-ID unspecified
      Locator       Local  State      Pri/Wgt
      10.200.200.5  yes    up          10/100
So now it is time to put the LISP configurations to the test and ping from VM-3 to VM-1 and VM-2. The pings succeed as expected:
However, at this point we have only implemented a straightforward LISP setup without introducing the VM mobility concept. The branch router does not know where to forward packets for individual VM IP addresses. It only knows how to reach the EID prefix 192.168.200.0/24 of VLAN 200. When I actually perform a traceroute to VM-1 and VM-2 from VM-3 I see that the traffic to both VMs is routed through DC-1:
This means that traffic from VM-3 to VM-2 actually needs to go across the OTV interconnect between DC1 and DC2 in order to reach VM-2. Clearly, this is sub-optimal and this is where LISP mobility comes in. By registering individual /32 EID prefixes for each VM, the location of the VMs can be tracked in the LISP enabled network and the traffic flow can be optimized. So I add the following commands to router dc1-xtr:
router lisp
 dynamic-eid MOBILE-VMS
  database-mapping 192.168.200.0/28 locator-set DC1
  map-server 10.200.200.1 key DC1-DC2-S3cr3t
  map-server 10.200.200.2 key DC1-DC2-S3cr3t
  map-notify-group 224.0.0.37
!
interface GigabitEthernet2
 lisp mobility MOBILE-VMS
 lisp extended-subnet-mode
And I add similar commands on dc2-xtr:
router lisp
 dynamic-eid MOBILE-VMS
  database-mapping 192.168.200.0/28 locator-set DC2
  map-server 10.200.200.1 key DC1-DC2-S3cr3t
  map-server 10.200.200.2 key DC1-DC2-S3cr3t
  map-notify-group 224.0.0.37
!
interface GigabitEthernet2
 lisp mobility MOBILE-VMS
 lisp extended-subnet-mode
For the mobile VMs I selected a sub-prefix of the overall 192.168.200.0/24 prefix that belongs to the twin-DC site. Of course, I could also have used the complete prefix for mobility. In that case the regular LISP database mapping for that prefix should be removed.
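To illustrate, a sketch of that alternative on router dc1-xtr would look like this (untested in this lab; dc2-xtr would be configured analogously with locator-set DC2):

router lisp
 no database-mapping 192.168.200.0/24 locator-set DC1
 dynamic-eid MOBILE-VMS
  database-mapping 192.168.200.0/24 locator-set DC1
  map-server 10.200.200.1 key DC1-DC2-S3cr3t
  map-server 10.200.200.2 key DC1-DC2-S3cr3t
  map-notify-group 224.0.0.37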
Note: There is one other thing that is specifically worth noting about this configuration: I configured the link-local multicast group 224.0.0.37 as the LISP map-notify group. This is not a best practice and I should have used a regular ASM multicast group from the 239.0.0.0/8 range. However, when I first configured LISP using a random 239.0.0.0/8 ASM group I was experiencing all sorts of problems. My VMs were reporting duplicate IP addresses and the traceroutes from VM-3 were inconsistent. After a couple of hours of troubleshooting, I finally figured out that this was caused by the fact that the LISP map-notify multicast group wasn’t properly forwarded across OTV between routers dc1-xtr and dc2-xtr. I tried to tackle this problem in various ways, including converting my OTV configuration to a unicast setup with adjacency servers, but to no avail. In the end I started suspecting IGMP snooping (or the lack thereof) in the whole chain of vSwitches, bridge-domains, and OTV to be the cause of the problem. To test this hypothesis I decided to change the multicast group to a link-local group, because those groups should always be flooded within a VLAN regardless of IGMP snooping. It is a bit of a hack, and clearly it isn’t a real solution for the underlying multicast problem. But at least implementing this workaround allowed me to further concentrate on LISP rather than an obscure multicast issue. I am hoping that this is a VMware vSwitch problem, rather than an OTV multicast problem, but further testing is needed to pinpoint the issue.
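For those who want to experiment with the unicast alternative that I mentioned above, the adjacency-server commands on the CSR look roughly like the following sketch (one edge device acts as the adjacency server and the other one points to it; the multicast groups are removed in this mode):

! on dc1-otv
interface Overlay1
 no otv control-group 239.1.1.1
 no otv data-group 232.1.1.0/24
 otv adjacency-server unicast-only
!
! on dc2-otv
interface Overlay1
 no otv control-group 239.1.1.1
 no otv data-group 232.1.1.0/24
 otv use-adjacency-server 10.200.200.1 unicast-only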
So with the proper LISP configuration in place we can now test the traceroutes from VM-3 to VM-1 and VM-2 again and see if the path was optimized:
The two different RLOC addresses for the VMs can also be verified in the LISP map-cache on the branch router branch-xtr:
branch-xtr#show ip lisp map-cache
LISP IPv4 Mapping Cache for EID-table default (IID 0), 3 entries

0.0.0.0/0, uptime: 00:47:51, expires: never, via static send map-request
  Negative cache entry, action: send-map-request
192.168.200.1/32, uptime: 00:45:47, expires: 23:54:47, via map-reply, complete
  Locator       Uptime    State      Pri/Wgt
  10.200.200.3  00:05:12  up          10/50
192.168.200.2/32, uptime: 00:46:54, expires: 23:55:38, via map-reply, complete
  Locator       Uptime    State      Pri/Wgt
  10.200.200.4  00:35:25  up          10/50
As a final test I move VM-1 from DC-1 to DC-2. Due to the way the lab is set up this is not a vMotion, but simply a change of port-group from VLAN 201 to VLAN 202. When I trace again after the move I confirm that the traceroute now goes through router dc2-xtr (10.200.200.4). During the move I also ran a continuous ping from VM-3 to VM-1 and I only lost a couple of packets. Of course, the LISP map-cache on router branch-xtr also reflects the change in RLOC IP address:
branch-xtr#show ip lisp map-cache 192.168.200.1
LISP IPv4 Mapping Cache for EID-table default (IID 0), 3 entries

192.168.200.1/32, uptime: 00:53:55, expires: 23:59:16, via map-reply, complete
  Sources: map-reply
  State: complete, last modified: 00:00:43, map-source: 10.200.200.4
  Active, Packets out: 195 (~ 00:01:56 ago)
  Locator       Uptime    State      Pri/Wgt
  10.200.200.4  00:00:43  up          10/50
    Last up-down state change:         00:00:43, state change count: 1
    Last route reachability change:    never, state change count: 0
    Last priority / weight change:     never/never
    RLOC-probing loc-status algorithm:
      Last RLOC-probe sent:            never
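If the map-cache in your own lab does not reflect the move right away, you can force a fresh map-request with the LISP Internet Groper or simply clear the cache by hand; both commands should be available on the CSR (output not shown):

branch-xtr#lig 192.168.200.1
branch-xtr#clear ip lisp map-cache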
At this point, we have a working configuration combining OTV with LISP mobility, which is a good base for further experimentation with these protocols. Despite the multicast problems that I experienced, I feel that a virtual lab setup with a couple of CSR 1000v’s is a nice addition to the toolbox for testing advanced routing and data center technologies like LISP and OTV.
Note: For those that want to try this out in their own labs, I published my complete configurations for reference here.