OpenStack with ACI Integration – Part 1 (A General Discussion of OpenStack with ACI)

Contributors:  Soumitra Mukherji and Alec Chamberlain
With Expert Guidance from: Filip Wardzichowski

If you have been using ACI as your data center fabric, then like most enterprise customers you probably have an ACI-integrated VMM domain and are reaping the benefits of it.  ACI also integrates seamlessly with OpenStack, creating an OpenStack VMM domain.  For larger enterprise customers and service providers, OpenStack is the de facto private/public cloud solution.  One benefit of ACI/OpenStack integration is that (unlike VMware-integrated ACI) the OpenStack dashboard, Horizon (think of it as OpenStack's vCenter), is the single source of truth.  If your developers are already using OpenStack, they don't need to learn ACI or even get involved with it.  They can use Horizon or the OpenStack CLI and configure their environment just as they did before.  Behind the scenes, the OpenStack controller talks to the Cisco APIC and builds the Tenants and all the necessary plumbing.  The network administrator can then use the ACI controller (APIC) to view the entire infrastructure.

A little disclaimer about the guided OpenStack/ACI integration that we will show:

In a production environment you will want to install OpenStack directly on bare-metal servers.  However, for both Part 2 and Part 3 we will install OpenStack on top of VMware hypervisors (i.e., nested virtualization).  I don't have spare bare-metal servers lying around to install OpenStack on and test ACI integration with.  I suspect a lot of you are in the same situation, and this approach gives you an avenue to get familiar with OpenStack and ACI integration without disrupting your VMware users.  Further, it's really handy to take snapshots of your VMs at critical points, so you can fall back if you make a mistake during the install process.

We will not go into the gory details of the inner workings of OpenStack, because that is beyond the scope of these articles.  You can do a Google search on the subject and spend a good month learning it.  The focus of these articles is to give you a good understanding and get you up and running with ACI-integrated OpenStack.

We plan to write four articles in total:

  • Part 1: Briefly discuss OpenStack and ACI-integrated OpenStack, i.e., the OpenStack VMM domain.
  • Part 2: Installing OpenStack and integrating with ACI using Packstack (with CentOS, either the open-source or the Red Hat version).  This method is unsupported, but it works and gives you a good understanding of the process.  Packstack is essentially a bundle of scripts (Puppet manifests driven by Python) packed together.  It is a fire-and-forget method: you bring up OpenStack and integrate it with ACI, but if you need to make any changes to the OpenStack infrastructure later, you have to do it manually.  In our guided install we will have 4 VMs (across 2 ESXi hypervisors): one to install from, one for the OpenStack controller, and two for OpenStack computes.
  • Part 3: Installing OpenStack and integrating with ACI using Red Hat's director-based method with a registered Red Hat ISO.  This is one of the supported professional methods for installing OpenStack (there are several other supported methods).  In this method we will again use 4 VMware VMs across 2 ESXi hosts.  The first is the install VM, also called the director.  The director is basically a small OpenStack cluster in itself, known as the undercloud.  You then use the director to spin up the OpenStack cluster that you will actually use, called the overcloud.  The servers built by the undercloud are used as hypervisors: in our case, one of them will be the controller and the other two will be computes.  This method is called TripleO, which stands for OpenStack On OpenStack.  Since this is a production-grade install, the scripts provided by Red Hat do everything for you, including PXE-booting the controllers and computes and installing the required software (the overcloud) on them, ready for use.  When you install OpenStack on bare metal using this method, you point the PXE control at the IMC/CIMC (Integrated Management Controller) of each bare-metal server.  The script uses those IMC IPs to power-cycle the servers at the appropriate times and PXE-boot them.  The protocol used to power the bare metal on and off is IPMI (Intelligent Platform Management Interface).  Since we will be doing this install on top of VMware VMs, we can't point the script at an IMC for IPMI.  As a workaround we will use a virtual BMC (Baseboard Management Controller).  This interacts with vCenter to make the system think it is talking to an IMC: the IPMI commands to power the VMs up and down are sent to vCenter, which takes the appropriate action.
I have docker-compose code for this, which you can install in a few minutes to serve this purpose.  I will guide you through it at the appropriate time.
  • Part 4: Using ACI-integrated OpenStack.  Here we will explore building an OpenStack Project (a Tenant in ACI): creating networks, routers, NAT and floating IPs, building VMs, building virtual disks for VM use, and so on.  At each step of the way we will show you what happens on the ACI side.
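As a taste of what Part 4 will cover, the developer-side workflow boils down to a handful of OpenStack CLI commands.  The sketch below is illustrative only: the names (demo-net, ext-net, cirros, m1.tiny) are placeholder assumptions, and it presumes credentials have already been sourced from an openrc file.  In an ACI-integrated cloud, each of these commands is mirrored by the plug-in as Tenant/Bridge Domain/EPG objects on the APIC.

```shell
# Create a project network and a subnet on it
openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 10.10.1.0/24 demo-subnet

# Create a router, attach the subnet, and set the external gateway
# (ext-net is assumed to be a pre-existing external network)
openstack router create demo-router
openstack router add subnet demo-router demo-subnet
openstack router set --external-gateway ext-net demo-router

# Boot an instance on the network and allocate a floating IP for it
openstack server create --image cirros --flavor m1.tiny --network demo-net demo-vm
openstack floating ip create ext-net
```

These commands require a running cloud and valid credentials, so treat them as a reference workflow rather than a copy-paste script.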

In this Part 1 article, let's discuss the topic: what is OpenStack?

OpenStack is an open-source platform for building Infrastructure as a Service (IaaS).  In short, it can be your de facto one-stop private/public cloud infrastructure.

OpenStack powers Walmart's e-commerce site, serving 80+ million people each month. CERN, the European research organization, also uses OpenStack for its private cloud, the largest OpenStack cloud in the world, with nearly 200,000 cores.

You can also spin up VMs, containers, and third-party services such as Kubernetes on it.

Figure 1: What is OpenStack?

OpenStack itself does not implement constructs like hypervisors and networking; it uses already-existing components.  For hypervisors it can use KVM, Xen, or even VMware ESXi.  For networking it can use Open vSwitch (OVS), Linux Bridge, and others.
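If you want to peek at this plumbing on a compute node, OVS ships a CLI for it.  A minimal look (a sketch, assuming the ML2/Open vSwitch reference driver and root access; bridge names like br-int are the conventional defaults, not guaranteed):

```shell
# List the OVS bridges the Neutron agent creates (typically br-int, br-tun, br-ex)
sudo ovs-vsctl list-br

# Dump the full bridge/port/interface layout, including the VM tap ports
sudo ovs-vsctl show
```

The output maps each instance's virtual NIC to a port on the integration bridge, which is exactly the layer the ACI OpFlex/OVS integration plugs into.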

OpenStack is built of several pieces glued together.  These pieces are called projects, and they talk to each other using APIs.  The diagram below is a representation of some of these pieces.

Figure 2: OpenStack Projects.

Being an open-source platform has its benefits and downsides.  There is a ton of documentation online, but it can get a bit (sometimes very) confusing because there is just so much material out there.  For a production environment, I feel that having a paid Red Hat subscription is essential so you can get their support when needed.  I also want to point out that OpenStack is heavily used by developers; needless to say, it is assumed that you are fairly comfortable with Linux operating systems.

If you are not very comfortable with Linux operating systems and Linux networking, don't worry for the purposes of these guided installs: we will guide you through every step of the way.  If you follow these guides, you will learn as you go.  Be forewarned, though: you will need to invest some time in this.

Listed below are some of the main OpenStack pieces (projects).

  • Neutron provides the networking capability for OpenStack. It helps to ensure that each of the components of an OpenStack deployment can communicate with one another quickly and efficiently.

  • Nova is the OpenStack project that provides a way to provision compute instances (aka virtual servers). Nova supports creating virtual machines and bare-metal servers (through the use of Ironic), and has limited support for system containers. Nova runs as a set of daemons on top of existing Linux servers to provide that service.
  • Ironic is an OpenStack project which provisions bare metal (as opposed to virtual) machines. It may be used independently or as part of an OpenStack Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova), Network (neutron), Image (glance), and Object (swift) services.
  • Horizon is the dashboard behind OpenStack. It is the only graphical interface to OpenStack, so for users wanting to give OpenStack a try, this may be the first component they actually “see.” Developers can access all of the components of OpenStack individually through an application programming interface (API), but the dashboard gives system administrators a look at what is going on in the cloud and the ability to manage it as needed.

  • Keystone provides identity services for OpenStack. It is essentially a central list of all of the users of the OpenStack cloud, mapped against all of the services provided by the cloud, which they have permission to use. It provides multiple means of access, meaning developers can easily map their existing user access methods against Keystone.

  • Cinder is a block storage component, which is more analogous to the traditional notion of a computer being able to access specific locations on a disk drive. This more traditional way of accessing files might be important in scenarios in which data access speed is the most important consideration.
  • Glance provides image services to OpenStack. In this case, “images” refers to images (or virtual copies) of hard disks. Glance allows these images to be used as templates when deploying new virtual machine instances.

  • Swift is OpenStack's scalable, distributed object storage, used for redundant data storage across clusters of standardized servers that can hold petabytes of accessible data. It is a long-term storage system for large amounts of static data that can be retrieved and updated.
  • Ceilometer provides telemetry services, which allow the cloud to provide billing services to individual users of the cloud. It also keeps a verifiable count of each user’s system usage of each of the various components of an OpenStack cloud. Think metering and usage reporting.

  • Heat is the orchestration component of OpenStack, which allows developers to store the requirements of a cloud application in a file that defines what resources are necessary for that application. In this way, it helps to manage the infrastructure needed for a cloud service to run.
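The project list above maps almost one-to-one onto the unified `openstack` CLI.  Once a cloud is up and credentials are sourced, each service can be exercised individually (a sketch; each command assumes the corresponding service is actually deployed):

```shell
openstack token issue     # Keystone: authenticate and obtain a token
openstack image list      # Glance: available disk images
openstack server list     # Nova: compute instances
openstack network list    # Neutron: networks
openstack volume list     # Cinder: block-storage volumes
openstack stack list      # Heat: orchestration stacks
```

Under the hood, each command is just a REST API call to the respective project's endpoint, which is exactly how the projects talk to one another as well.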

The diagram below is a representation of the main and most commonly used OpenStack projects.  There are many more.

Figure 3: Most commonly used OpenStack Projects

At this point you may be wondering what the benefits of integrating OpenStack with ACI are.

The Neutron reference implementation provides a functional networking solution for OpenStack.  However, it presents various challenges that the OpenStack/ACI integration solves:

  • Implementing Layer 2 and Layer 3 services over existing network infrastructure is extremely complicated and limited to basic provisioning only.
  • To overcome the previous point, overlay technologies can be used (for example, GRE or VXLAN) for implementing Layer 2 or Layer 3 services. However, with the reference implementation, the overlay technologies lack scale and limit visibility for network administrators. Overlay technologies can also introduce performance challenges with servers not supporting the hardware offload of the tunnel encapsulation.
  • Communication between OpenStack instances and external endpoints (EPs) must be routed through the Neutron servers, which may become a performance bottleneck and limit high availability.
  • NAT/SNAT functions run centralized on Neutron servers, representing a performance choke point and lacking high availability solutions.
  • Network operators have limited to no visibility into OpenStack resources.

Source: Cisco ACI Unified Plug-in for OpenStack Architectural Overview

References Used:

Cisco ACI Unified Plug-in for OpenStack Architectural Overview
