
Over the last 12 months I’ve been doing a lot of work that has involved the Cisco Nexus 1000v, and during this time I came to realise that there wasn’t a huge amount of recent information available online about it.

Because of this I’m going to put together a short post covering what the 1000v is, and a few points around its deployment.

What is the Nexus 1000v?

The blurb on the VMware website defines the 1000v as “…a software switch implementation that provides an extensible architectural platform for virtual machines and cloud networking”, and the Cisco website says, “This switch: Extends the network edge to the hypervisor and virtual machines, is built to scale for cloud networks, forms the foundation of virtual network overlays for the Cisco Open Network Environment and Software Defined Networking (SDN).”

So that’s all fine and good, but what does this mean for us? Well, the 1000v is a software-only switch that sits inside the ESXi hypervisor (and KVM or Hyper-V, if they’re your poison) and, on vSphere, leverages VMware’s built-in Distributed vSwitch functionality.

It utilizes the same NX-OS codebase and CLI as any of the hardware Nexus switches, so if you’re familiar with the Nexus 5k, you can manage a 1000v easily enough, too.

This offers some compelling features over the normal dvSwitch, such as LACP link aggregation, QoS, Traffic Shaping, vPath, and Enhanced VXLAN, and it also allows a clean administrative boundary between servers and networking, even down to the VM level. Obviously all the bonuses of a standard dvSwitch around centralised management also still apply.
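
To give a flavour of the NX-OS side before we get into deployment, an uplink port-profile using LACP might look something like the sketch below. The profile name and VLANs here are made up for illustration; the deployment section later covers the real configuration.

  • 1000v(config)# port-profile type ethernet LACP_Uplink
  • 1000v(config-port-prof)# vmware port-group
  • 1000v(config-port-prof)# switchport mode trunk
  • 1000v(config-port-prof)# switchport trunk allowed vlan 10,20
  • 1000v(config-port-prof)# channel-group auto mode active
  • 1000v(config-port-prof)# no shut
  • 1000v(config-port-prof)# state enabled

The “channel-group auto mode active” line is what enables LACP negotiation with the upstream switch, rather than a static channel.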

Components of the 1000v

The 1000v is made up of two main components: a pair of VSMs, and the VEMs. The VSMs are the Virtual Supervisor Modules, which equate to the supervisor modules in a physical modular chassis switch, and the VEMs are like the I/O line cards that provide access to the network.

Virtual Supervisor Modules

The VSM runs as a VM within the cluster, with a second VSM running as standby on another ESXi host. Good practice would be to set a DRS anti-affinity rule to prevent both VSMs from living on the same host, so that a single host failure can’t take out both.


Virtual Ethernet Modules

The VEM is the software module that embeds in the ESXi kernel and ties the server into the 1000v.

1000v Deployment

There are two ways of deploying the 1000v: Layer 2 mode (which is deprecated), and Layer 3 mode, which allows the VSMs to sit on a different subnet to the ESXi hosts.

Deploying the 1000v is relatively straightforward, and this post is not designed to be a step-by-step guide to installing the 1000v (Cisco’s documentation can be found here). The later versions of the 1000v have a GUI installer which makes initial deployment simple.
Once the VSM pair has been deployed you need to:

Create an L3 SVS domain (the SVS config sets how the VEMs connect to the VSMs) and define your L3 control interface

  • 1000v(config)# svs-domain
  • 1000v(config-svs-domain)# domain id 10
  • 1000v(config-svs-domain)# no packet vlan
  • 1000v(config-svs-domain)# no control vlan
  • 1000v(config-svs-domain)# svs mode L3 interface mgmt0
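
You can sanity-check the result with “show svs domain”, which should report your domain ID and an L3 control mode:

  • 1000v# show svs domain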

Create an SVS connection to link the VSM with vCenter

  • 1000v(config)# svs connection vcenter
  • 1000v(config-svs-conn)# protocol vmware-vim
  • 1000v(config-svs-conn)# remote ip address 192.168.1.50
  • 1000v(config-svs-conn)# vmware dvs datacenter-name London
  • 1000v(config-svs-conn)# connect
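
Once the connect command returns, verify that the VSM can actually talk to vCenter; the output should report the connection as enabled and connected:

  • 1000v# show svs connections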

Create your Ethernet (physical uplink) and vEthernet (VM-facing) port-profiles, and add L3 capability to your ESXi management vmk port-profile

  • 1000v(config)# port-profile type vethernet esxi-mgmt
  • 1000v(config-port-prof)# capability l3control
  • Warning: Port-profile ‘esxi-mgmt’ is configured with ‘capability l3control’. Also configure the corresponding access vlan as a system vlan in:
  • * Port-profile ‘esxi-mgmt’.
  • * Uplink port-profiles that are configured to carry the vlan
  • 1000v(config-port-prof)# vmware port-group
  • 1000v(config-port-prof)# switchport mode access
  • 1000v(config-port-prof)# switchport access vlan 5
  • 1000v(config-port-prof)# no shut
  • 1000v(config-port-prof)# system vlan 5
  • 1000v(config-port-prof)# state enabled
  • 1000v(config-port-prof)# port-profile type ethernet InfrUplink_DvS
  • 1000v(config-port-prof)# vmware port-group
  • 1000v(config-port-prof)# switchport mode trunk
  • 1000v(config-port-prof)# switchport trunk allowed vlan 5
  • 1000v(config-port-prof)# channel-group auto
  • 1000v(config-port-prof)# no shut
  • 1000v(config-port-prof)# system vlan 5
  • 1000v(config-port-prof)# state enabled

Note the point above where you have to put a “system vlan” on your l3control interface: this ensures that network traffic on that VLAN always remains in the forwarding state, even before a VEM is programmed by the VSM, which is especially important in the case of the control interface.
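
You can confirm the system VLAN stuck by checking the profile (the profile name here matches the example above):

  • 1000v# show port-profile name esxi-mgmt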

Deploy the VEMs to the ESXi hosts (this can be done from the GUI)

Once the VEMs are on the ESXi hosts, you need to migrate the ESXi management vmk onto the 1000v; the VEMs (and therefore the ESXi hosts) will then show up in the 1000v when you run the ‘show module’ command.
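
A quick sanity check from the VSM at this point (module numbers will vary per environment):

  • 1000v# show module
  • 1000v# show module vem mapping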

At this point we have communication between the VSM and the VEMs within ESXi, and we can start configuring port-profiles for our non-management traffic.
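
As an example, a vEthernet port-profile for a VM data VLAN might look something like this (the name and VLAN are illustrative; remember the uplink port-profile also needs to trunk that VLAN):

  • 1000v(config)# port-profile type vethernet VM-Data
  • 1000v(config-port-prof)# vmware port-group
  • 1000v(config-port-prof)# switchport mode access
  • 1000v(config-port-prof)# switchport access vlan 10
  • 1000v(config-port-prof)# no shut
  • 1000v(config-port-prof)# state enabled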

Simple, right?



The following lab guide walks through building a 1000v environment in VMware Workstation, and is based on a guide by Robert Burns.

  • Download ESXi ISO from VMware and save to a directory accessible from your Workstation. This tutorial is using VMware-VMvisor-Installer-5.5.0.update02-2068190.x86_64.iso
  • Download vCenter Server Appliance OVA file and save to a directory accessible from your Workstation. This tutorial is using VMware-vCenter-Server-Appliance-5.5.0.10000-1624811_OVF10.ova
  • Download the Nexus 1000v software from Cisco.com and extract to a folder on your computer (this tutorial is using Nexus1000v.4.2.1.SV2.2.3)

Launch VMware workstation

File > New Virtual Machine…

Select Custom (advanced)

Click Next

Set Hardware compatibility to Workstation 10.0

Click Next

Choose Install from: Installer disc image file (iso):

Browse to the directory where you saved the ESXi ISO. It should state “VMware ESXi 5 detected”.

Click Next.

Name the new Virtual Machine (e.g. ESX01)

Specify the Location where you wish to save the virtual machine files

Click Next

Specify 2 Processors with 2 Cores per processor (you can select just one core if you cannot spare the resources).

Click Next

Set the memory to 6 GB (6144 MB). This is required if you want to test running both VSMs on the same ESX host. Specify less RAM if you do not have enough resources.

Click Next

Select “Use host-only networking” to create a LAN segment for all your VMs that is also accessible from your Workstation PC.

Click Next

Select SCSI Controller: LSI Logic (Recommended) I/O Controller Type

Click Next

Select SCSI (Recommended) disk type.

Click Next

Select “Create a new virtual disk”

Click Next

Leave default allocation of 40 GB disk capacity and select Split virtual disk into multiple files. You can specify more disk space if you intend installing many VMs on the ESX local disk store.

Click Next.

Specify the name for the Disk File e.g. ESX01.vmdk

Click Next.

Click Finish.

NOTE: In previous attempts, I selected “Customize Hardware…” and added 3 additional Network Adapters. However, these additional NICs were not recognised, so they will be added later. YMMV.

Power on the virtual machine (if it doesn’t start automatically).

Press Enter to continue

Press F11 to accept and continue

Press Enter to continue

Use the arrow keys to highlight the correct keyboard layout, then press Enter to continue

Enter your password on each line ensuring they match, then press Enter to continue

Press F11 to install

Wait for it to install. My installation sat at 28% for a long time.

Press Enter to reboot.

Install completed.

Right-click ESX01 > Power > Shutdown Guest

Repeat to create ESX02, ESX03 and ESX04.

File > Open…

Browse to VMware-vCenter-Server-Appliance-5.5.0.10000-1624811_OVF10.ova and click Open

Click Import

After Install, right-click the VM and choose Settings

Change the Network Adapter from Bridged to Host Only and click OK

Power on the VCSA and wait until the install is completed (note: the install can take quite a while, as can each boot-up of this appliance).

Open a browser to https://192.168.63.137:5480

username: root

password: vmware

Click Login.

Review the EULA (lol), tick “Accept license agreement” and click Next.

Wait for the VCSA to be happy (huh?)…

Specify “Configure with default settings” and click Next.

Review the configuration and click Start.

Be prepared for the “Configuring SSO” stage to take a very long time:

Wait until all four configuration items get green check marks, then click Close.

Have a poke around the VMware vCenter Server Appliance configuration GUI. Note: this is just the appliance admin GUI, not the vSphere Web Client GUI, which you will see next.

Use the link on the top right of the screen to log out user root.

3.1 Configure VMware Datacenter

Connect to the vSphere Web Client by browsing to https://192.168.63.137/vsphere-client

Log in as root (password: vmware) – wait a while for the login to complete

From the VMware vSphere Web Client Home page, go to vCenter > Hosts and Clusters > localhost > Create Datacenter.

Enter a name for the Datacenter e.g. Lab DC

Wait for the client to validate the input.

Go to Localhost > Lab DC > Create a Cluster

Specify the name for the cluster e.g. 1000v-Cluster

Go to Localhost > Lab DC > 1000v-Cluster > Add a host

Enter the IP address of your first ESXi host e.g. 192.168.63.133


Enter the username and password for the host.

Note the Security Alert (this can be ignored in a lab environment) and click Yes to connect to the host.

Review the Host Summary and click Next

As we are using trial licenses, select (No License Key) and click Next

Do not enable lockdown mode. Click Next.

Review the final summary and click Finish.


Repeat to add ESX02 to 1000v-Cluster

Repeat to create a new Cluster named “vSwitch-Cluster” in the “Lab DC” datacenter and add ESX03 and ESX04 to it.

From the directory you unzipped the Nexus 1000v files to, go to the VSMInstaller_App directory and launch Nexus1000V-install_CNX.jar

Select Cisco Nexus 1000V Complete Installation and choose Custom (you may have to wait a few seconds once you have selected Custom)

Read all of the Pre-Requisites, as they contain very useful information.

Click Next.

Enter the IP address of the VCSA appliance, leave the port as 443 and enter the username and password details.

Username: root

Password: vmware

Click Next.

Enter the VSM deployment details. If you only want to use a single host for the primary and secondary VSM, you can enter the same host details twice. Choose Layer 2 connectivity mode and set the Domain ID to something memorable, e.g. 100. Leave all the Port Groups assigned to “VM Network”. You may wish to save this configuration before clicking Next, as this will save a bit of time if you ever have to repeat this step. Once you are happy, click Next.
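
Behind the scenes, the installer’s Layer 2 choice corresponds roughly to an svs-domain configuration like this on the VSM (assuming domain ID 100, and VLAN 1 for both control and packet traffic, as in this lab):

  • 1000v(config)# svs-domain
  • 1000v(config-svs-domain)# domain id 100
  • 1000v(config-svs-domain)# control vlan 1
  • 1000v(config-svs-domain)# packet vlan 1
  • 1000v(config-svs-domain)# svs mode L2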


Review the details and click Next.

Wait for the install to complete. This could take a while.

Once completed, click Finish.

Next, select Virtual Ethernet Module Installation.

Review the pre-requisites and click Next.

Enter the vCenter Server credentials as before and click Next.

Enter the VSM credentials and click Next.

Select “Install VEM and add module to Nexus 1000v” and specify the management VLAN as 1.

Click Next.

Use the CTRL key to select both the hosts from the 1000v-Cluster and click Next.

Review the details and click Finish.

Review the final Summary page and click Close.

Log into https://192.168.63.137:9443/vsphere-client/#


Go to vCenter > Hosts and Clusters

Note that there will be an alarm against each of the hosts on which the Nexus 1000v VEM is installed stating that connection has been lost. This is a historical alarm and can be cleared (click on Reset to Green).

SSH into your VSM

Verify all modules are correctly installed by issuing the ‘show module’ command

Verify the high availability status of the active and standby VSMs by issuing the ‘show redundancy status’ command.

Let’s look at the networking for the Standard vSwitch Cluster.


Log into the Web Client and go to vCenter > Hosts and Clusters > localhost > Lab DC > vSwitch-Cluster. Select one of the hosts and go to Networking > Virtual switches.