This page documents the steps and components that made up my local SDN testbed. We may even cover some GENI stuff.

Note: I recently expanded this build to include iSCSI storage. Part 1 of 3 can be found here:

Phase 1

  1. Step 1 - ESXi Box
  2. My ESXi box is a cute little HP-Compaq dc7700 Small Form Factor (2VA7311PMT, BIOS Rev. 786E1 v01.10) with a 120GB HD - it only just supports virtualization in the BIOS. Speaking of which, I had to ensure that hardware virtualization was supported and enabled (you should check this first), allow removable media boot, and change the boot order so that USB was selected first. Your boot order may differ. I have actually done both DVD and USB installs of ESXi. Currently my boxes run 5.0. I started with 6GB of RAM but went up to 8GB.

    I used a tool to create the bootable USB media, but I can't remember which one - will update later. ESXi installs quickly and usually without any problems. The only time my installs have bombed out is when the hardware simply would not support virtualization. So...older boxes.
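    Two pieces of this that can be sketched from a Linux shell: checking for hardware virtualization (e.g. from a live USB before committing to a box) and writing the installer image. The ISO filename and /dev/sdX device below are placeholders, and depending on the ESXi version a raw dd may not produce bootable media - a tool like UNetbootin or Rufus may be needed instead.

    ```shell
    # Any output here means the CPU advertises VT-x (vmx) or AMD-V (svm).
    # No output = virtualization unsupported, and the ESXi install will bomb out.
    grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

    # Write the installer ISO to a USB stick.
    # WARNING: replace /dev/sdX with your actual USB device - dd overwrites it.
    sudo dd if=VMware-VMvisor-Installer-5.0.0.iso of=/dev/sdX bs=4M
    sudo sync
    ```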

  3. Step 2 - Management and vSphere
  4. Once the install completes, you have to decide how you are going to manage the box. It also helps to have some idea of the networks you are going to use for the virtual machines. Since I knew I was going to have several VMs and VM networks (VMnets), I connected mine to a local switch and trunked to the ESXi box.

    Thus, the management network will be different from the VM networks. I chose the 10 net and VLAN 10 (original, I know) and set the ESXi interface to VLAN ID 4095, which means all VLANs - in other words, a trunk. On the switch I created VLAN 10 and added a laptop with vSphere installed. From this point on, you will almost never need to touch the keyboard on the ESXi box. Everything will be done via vSphere.
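    For reference, the VLAN 4095 trunking setup can also be done from the ESXi shell instead of the vSphere GUI. A sketch - the port group name "VM Network" is an assumption; substitute whatever port group you created:

    ```shell
    # List the standard vSwitch port groups and their current VLAN IDs
    esxcli network vswitch standard portgroup list

    # Set the port group to VLAN 4095 (VGT mode: passes all VLAN tags, i.e. a trunk)
    esxcli network vswitch standard portgroup set -p "VM Network" -v 4095
    ```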

  5. Step 3 - VM Creation
  6. At this point it is time to open up vSphere and log into the ESXi box remotely.

Phase 2

  1. Step 1 - Adding Virtual Machines
  2. Since the resources of my ESXi node were limited, I decided on a combination of Ubuntu 14 desktop and WinXP VMs. Most of my testing used the Ubuntu VMs. My method was to upload the .iso to the ESXi datastore and run the installs from there. Remember that storage will eventually become a problem, as both the VMs and the ISOs are stored locally.

    As you can see, I also modified the internal networking of the ESXi box quite a bit. If you look closely you can also see the 4095 value I used to trunk to ESXi. By default you will have the internal vSwitch0, a VMware construct that emulates a standard Ethernet switch, although the performance is quite a bit different. This vSwitch is tied to your physical adapter; my ESXi box had only one Ethernet port on its NIC. BTW - your VMs will also see an emulated Intel 1Gbps link. Lastly, when you set up VMs they will all be connected to vSwitch0, even if they are in different VMnets. So, how do you test your SDN or virtual routing/switching? My approach was to install a second vSwitch and connect several VMs down there. These VMs are isolated from the outside world, which means the only way for traffic to exit vSwitch1 is if I set up switching or routing between the vSwitches. You dig?
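    The second-vSwitch setup can be sketched from the ESXi shell as well. The names vSwitch1 and "Isolated" are my choices - the isolation comes from simply not attaching an uplink:

    ```shell
    # Create a second vSwitch with no physical uplink attached,
    # so VMs on it cannot reach the outside world on their own
    esxcli network vswitch standard add -v vSwitch1

    # Add a port group for the isolated VMs to connect to
    esxcli network vswitch standard portgroup add -p Isolated -v vSwitch1

    # Verify: vSwitch1 should show an empty Uplinks list
    esxcli network vswitch standard list
    ```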

  3. Step 2 - Building OVS and POX
  4. OK, SDN is still new. Generally speaking, SDN topologies are comprised of networking devices that support OpenFlow (the protocol used to configure network devices) and a controller. Recently (2014-15) this general definition has been modified, but it is the one I've been working with for this testbed. For example, articles are now written about controlled and uncontrolled SDN topologies. But again, I used a controller. My choice was POX. I fooled around with OpenDaylight and a couple of others, but things were changing so fast that I settled on a more stable package. For most folks the network device is based on Open vSwitch (OVS), so I used that package as well.

    I added this image because sometimes it is tough to see how things are connected. The controller is connected to OVS and uses ports 6633 and 6634 to exchange OpenFlow messages. I have documented the OpenFlow messaging in a YouTube playlist on my channel. Two other Ubuntu VMs were added, and note that the OVS VM spans the two vSwitches.

  5. Step 3 - POX and OVS configuration
    Let's start with OVS. This part is pretty straightforward, but there are some "under the hood" components, so getting it running does not necessarily mean that you understand everything. In the early days (2013 :P ) the OVS install was painful and there were lots of things to add, but I found that the Ubuntu apt install went very smoothly: apt-get install openvswitch-common openvswitch-switch. For later use I also installed Wireshark and ssh on all of my Ubuntu nodes: apt-get install ssh wireshark-gnome.
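    Pulled together, the per-node install plus a couple of sanity checks looks roughly like this (the wireshark-gnome package name matches the Ubuntu 14 era of this build; newer releases renamed it):

    ```shell
    # Open vSwitch, plus the extras I put on every Ubuntu node
    sudo apt-get install -y openvswitch-common openvswitch-switch
    sudo apt-get install -y ssh wireshark-gnome

    # Sanity checks: version prints, and the OVS database answers
    # (an empty "show" is fine - it means the daemon is up with no bridges yet)
    sudo ovs-vsctl --version
    sudo ovs-vsctl show
    ```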

    On the controller VM I installed git (apt-get install git) and then did a git clone of POX.
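    A sketch of the controller-side setup. The clone URL is the upstream noxrepo POX repository on GitHub (correct to my knowledge, but verify), and forwarding.l2_learning is one of POX's stock components:

    ```shell
    sudo apt-get install -y git
    git clone https://github.com/noxrepo/pox
    cd pox

    # Launch POX with the stock L2 learning-switch component;
    # by default it listens for OpenFlow switches on TCP port 6633
    ./pox.py forwarding.l2_learning
    ```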

    Now, you can tell OVS exactly what you want it to do, or you can let it happen automatically. I have posted all of the commands I used in a text file on the configs page: SDN configs. The basic idea is this: configure your switch topology, connect the controller, and then either provide the manual commands or the auto-config options for the topology.
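    The "manual commands" path boils down to something like this. The bridge name br0, the interface names eth1/eth2, and the controller address 10.0.0.100 are placeholders from my topology - substitute your own:

    ```shell
    # Create an OVS bridge and attach the VM-facing interfaces to it
    sudo ovs-vsctl add-br br0
    sudo ovs-vsctl add-port br0 eth1
    sudo ovs-vsctl add-port br0 eth2

    # Point the switch at the POX controller (OpenFlow over TCP 6633)
    sudo ovs-vsctl set-controller br0 tcp:10.0.0.100:6633

    # Optional: in "secure" mode the switch will NOT fall back to normal
    # L2 switching if the controller disappears - handy when testing
    sudo ovs-vsctl set-fail-mode br0 secure
    ```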

    BTW - I got my start at the GENI conferences and using their systems - pretty cool. Lots of tutorials and configurations can be found here: tutorials Many of the tutorials can be run on any SDN topology.

    As there is a database that keeps track of the switch state and what the controller wants to do, there are a couple of commands that you should familiarize yourself with: ovs-vsctl tells the switch what to do, and ovs-ofctl gives you access to forwarding info, among other things. To make your life easier, I strongly urge you to keep track of not only your IP addresses but your MACs as well. This is especially helpful when trying to remember which virtual interface is connected to which. When I was all done, I completed a couple of tests to make sure that traffic could go where I wanted it to:
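    My sanity tests amounted to roughly the following (br0 and the 10-net address are from my setup; substitute yours):

    ```shell
    # What does the OVS database think the topology is?
    sudo ovs-vsctl show

    # OpenFlow's view: port numbers, MAC addresses, and link state per port
    sudo ovs-ofctl show br0

    # Record each VM's MAC next to its IP - it makes tracing which virtual
    # interface connects to which much less painful later
    ip addr show

    # Finally: can traffic actually cross the switch between VMs?
    ping -c 3 10.0.0.20
    ```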

    Of course the real objective is to get to the flows and the flow table via the ovs-ofctl dump-flows command.
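    Spelled out (br0 is my bridge name), with a note on what to read in the output:

    ```shell
    # Dump the flow table: each entry shows packet/byte counters, the match
    # fields (e.g. in_port, dl_src, dl_dst) and the actions (e.g. output:2)
    sudo ovs-ofctl dump-flows br0

    # Watching the table while pinging between VMs lets you see the
    # controller installing flows as packets arrive
    watch -n 1 'sudo ovs-ofctl dump-flows br0'
    ```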

    Problems: There are lots of places where an "oops" can pop up. My pain points came from a few sources: misconfiguration of interfaces or commands, firewall settings (especially on Fedora), missing packages, or mismatches between what I wanted my VMs to do and what my SDN topology was trying to do. For example, let your VMs forward all traffic => promiscuous mode. Remember that Ubuntu network interfaces can be frustrating if you are not changing the right things or touching the right place. :)
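    On the ESXi side, the promiscuous setting lives in the vSwitch security policy. A sketch using esxcli (the vSwitch name is an assumption, and depending on your ESXi version this may need to be done in the vSphere client under the vSwitch security properties instead):

    ```shell
    # Allow VMs on the isolated vSwitch to see - and therefore forward - all traffic
    esxcli network vswitch standard policy security set -v vSwitch1 --allow-promiscuous true

    # Confirm the policy took effect
    esxcli network vswitch standard policy security get -v vSwitch1
    ```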

    Note: You should see OpenFlow messages between the controller and the switch as soon as you complete portions of the config. Traffic should be forwarded between the network nodes as soon as your controller is operational.

  7. Capturing OpenFlow Packets
    Wait a minute - you said I should look at OpenFlow packets! But how? Good catch. Well, let's think about what has to happen: we want to capture traffic on a virtual link between the controller and the switch, and the ESXi box is on an isolated network. There are a couple of ways to go about this. You could capture on the VM itself, but the capture files are not very portable at that point. Or you could remote into the VM and run the capture over that session, which is what I decided to do.

    First - I installed OpenSSH server: apt-get install openssh-server, then edited /etc/ssh/sshd_config to enable X11 forwarding, set password authentication (or not), and allow me as a user. Restart ssh. Second - I installed Xming on my Windows management machine so that I had an X Windows environment once I ssh'd in. Third - I enabled X11 forwarding in my PuTTY configuration. Lastly, once I was ssh'd in, I ran sudo wireshark &. This should allow you to capture over your ssh connection. Boom. Done. In the end, my topology looked something like this:
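    Strung together, the ssh/X11 pieces look roughly like this. The user name and VM address are placeholders; on Ubuntu the server config is /etc/ssh/sshd_config:

    ```shell
    # On the VM: install the ssh server and set the relevant options
    sudo apt-get install -y openssh-server
    # In /etc/ssh/sshd_config make sure these are set:
    #   X11Forwarding yes
    #   PasswordAuthentication yes   # or no, if you use keys
    #   AllowUsers youruser
    sudo service ssh restart

    # From the management machine (with an X server such as Xming running,
    # or ssh -X from a Linux box), connect and launch Wireshark remotely:
    ssh -X youruser@10.0.0.10
    sudo wireshark &
    ```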

    As you can see, I did some stuff with Mininet and some other management details, but this should be enough to get you started on your project - you can do this!