### Openstack with ACI Integration – Part 3 (installing using Red Hat Director)

Contributors:  Soumitra Mukherji and Alec Chamberlain
With Expert Guidance from: Filip Wardzichowski

In Part 1 of this article, I mentioned that this series would have 4 parts:

• Part 1: General discussion of OpenStack / ACI integration
• Part 2: Guided install for OpenStack/ACI with open-source CentOS 7 or with Red Hat-registered CentOS 7 using Packstack (works but is unsupported; good to learn to get a better understanding of the integration)
• Part 3: Guided install for OpenStack/ACI using a Red Hat-registered CentOS 7 ISO and the supported director-based install with undercloud and overcloud
• Part 4: Using OpenStack/ACI to build a full working Project (Tenant in ACI) and inspecting what happens on the ACI side

I am writing Part 3 before Part 2 because I’ve been working on the screenshots and they are ready. It takes me a tremendous amount of time to write these articles in detail. Part 2 should be coming soon.

Repeat of the disclaimer I stated in Part 1:

In a production environment you will want to install OpenStack directly on bare-metal servers. However, we will install OpenStack on top of VMware hypervisors (i.e., nested virtualization) for both Part 2 and Part 3. I don’t have spare bare-metal servers lying around to install OpenStack on and test the ACI integration. I suspect a lot of you are in the same situation, and this will give you an avenue to get familiar with OpenStack and ACI integration without disrupting the VMware users. Further, it’s really handy to take snapshots of your VMs at critical points, so you can fall back in case you make a mistake along the install process.

Highly Recommended:

If you want to understand the architecture for OpenStack/ACI integration, I strongly suggest going through this document:
Cisco ACI Unified Plug-in for OpenStack Architectural Overview

Before we start implementing OpenStack/ACI integration using Red Hat Director based install, let’s discuss a few items about the architecture.

As mentioned in Part 1, OpenStack comprises several projects that talk to each other (via queues) to make the entire solution work. OpenStack does not define its own hypervisors or networking as such, but uses pre-existing hypervisors that are proven and work well, most commonly KVM (others such as Xen or ESXi can also be used). For networking, Open vSwitch or Linux bridges are most commonly used.

The project that deals with the networking piece of OpenStack is called Neutron.

Neutron architecture comprises two main portions:

1. The Core Plugin, which defines the basic Layer 2 connectivity
2. The Service Plugins, which define how services such as routers, load balancers, etc. are implemented

The Core Plugin is implemented by two functional categories:

1. Type Drivers: maintain any needed type-specific network state, and perform provider network validation and tenant network allocation
2. Mechanism Drivers: responsible for taking the information established by the Type Driver and ensuring that it is properly applied, given the specific networking mechanisms that have been enabled

The diagram below is a representation of this.

For OpenStack/ACI integration, new plugins for Type Drivers, Mechanism Drivers, and Service Plugins have been implemented.
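To ground this, the snippet below is a hedged sketch of how these drivers get selected in Neutron’s ML2 configuration (ml2_conf.ini) once the Cisco plugin is installed. The exact driver names and defaults vary by plugin version, so treat every value here as an assumption to verify against your release’s documentation:

```ini
[ml2]
# Type drivers: opflex is added by the Cisco plugin alongside the stock vlan/vxlan drivers
type_drivers = opflex,vlan,vxlan
# Networks created by tenants default to the opflex type
tenant_network_types = opflex
# Mechanism driver: apic_aim translates Neutron objects into APIC policy via Cisco AIM
mechanism_drivers = apic_aim
```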

The Cisco ACI OpenStack Neutron Plugin is only supported with commercially supported OpenStack distributions.

In other words, a Packstack (open-source) deployment of the OpenStack/ACI integration is not officially supported. However, as mentioned previously, it does work, is easy to implement, and gives us a good learning experience.

The integration can be done in two different modes:

1) OpFlex Mode (OpFlex-ovs): In this option, Cisco APIC controls the upstream Open vSwitch (OVS) running on each  Nova compute node by using the OpFlex protocol. This requires installing Cisco OpFlex and OVS agents running on each of the compute nodes. This deployment option implements a virtual machine manager (VMM) on Cisco APIC to provide the fabric administrator maximum visibility of the OpenStack cloud. When choosing the OpFlex mode, the Cisco ACI OpenStack Plug-in replaces the Neutron node datapath enabling fully distributed Layer 2, anycast gateway, DHCP, metadata optimization, distributed NAT, and floating IP enforcement.

2) Non-OpFlex Mode: In this option, Cisco APIC only programs the physical fabric and treats OpenStack tenant traffic as part of Physical domains (PhysDoms). This option can leverage SR-IOV or OVS-DPDK on the compute nodes and does not require installing Cisco agents on the nodes.

For a list of challenges that Openstack/ACI integration solves, please see Part 1 of this writeup.

In this writeup, we will discuss and demonstrate the OpenStack/ACI integration in OpFlex mode.

With the OpFlex ACI integration mode, the plugins that are implemented are shown in the diagram below.

When the OpenStack/ACI integration is done, the OpenStack CLI / Horizon becomes the single source of truth (unlike the VMware vCenter / ACI integration). If your developers are already using OpenStack, they don’t need to learn ACI or even get involved with ACI. They can use Horizon or OpenStack CLI commands and configure their environment just like they did before. Behind the scenes, the OpenStack controller will talk to the Cisco APIC and build the tenants and all the necessary plumbing. The ACI controller (APIC) can then be used by the network administrator to view all the infrastructure.

The diagram below (from the CCO Architecture Overview document) gives you an overall picture of this workflow.

1. The OpenStack tenant administrator configures the OpenStack networks through standard Neutron calls or GBP calls using CLI, Heat, Horizon, or REST API calls.
2. The ML2Plus mechanism driver translates the networks created into Cisco AIM policies. Cisco AIM stores the new configuration in its database and pushes network profiles to Cisco APIC through Cisco APIC REST API calls.
3. Cisco APIC creates related network profiles, for example, Cisco ACI tenants, bridge domains, EPGs, and contracts. If OpFlex agent is installed on the compute nodes, OVS rules are also configured accordingly.
4. The OpenStack administrator creates instances and attaches them to the previously created OpenStack networks.
5. Cisco APIC is notified that new instances have been spawned. Consequently, Cisco APIC pushes the related policies to the leaf switches where the compute nodes running these VMs are attached.
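To make step 1 concrete, here is a hedged sketch of the standard Neutron calls (via the OpenStack CLI) that kick off this workflow. The names and subnet range are my own illustrative assumptions; the commands are collected and printed so the sketch is self-contained, but on a deployed overcloud you would run each line directly:

```shell
# Each of these CLI calls ends up, via the ML2 mechanism driver and Cisco AIM,
# as an ACI object: the network becomes an EPG/BD, the router becomes contracts.
cmds=(
  "openstack network create demo-net"
  "openstack subnet create --network demo-net --subnet-range 172.16.10.0/24 demo-subnet"
  "openstack router create demo-router"
  "openstack router add subnet demo-router demo-subnet"
)
printf '%s\n' "${cmds[@]}"
```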

What is OpFlex?

OpFlex is an open and extensible policy protocol designed to transfer declarative networking policies such as those used in Cisco ACI to other devices. With OpFlex, the policy model native to Cisco ACI can be extended all the way down into the virtual switches running on the OpenStack hosts. This OpFlex extension to the compute node allows Cisco ACI to use OVS to support common OpenStack features such as routing, Source NAT (SNAT) and floating IP in a distributed manner.

All Cisco ACI leaf switches provide an OpFlex proxy service. The OpFlex agents running on the hosts are connected to the proxy through the Cisco ACI infrastructure network, commonly referred to as the infra VLAN.

The compute nodes are provisioned with a Linux subinterface to communicate on the infra VLAN and to obtain a Cisco APIC-provisioned IP address through DHCP from the Cisco ACI tunnel endpoint (TEP) pool. Once IP connectivity is established and the OpFlex-aware agent can connect to the proxy and query Cisco ACI policies, the compute node effectively becomes an extended Cisco ACI leaf.

The OVS agents communicate to the OpFlex proxy through the infra VLAN of the Cisco ACI fabric.

The OpFlex communication happens through the TCP protocol on port 8009. Data between the OpFlex proxy and agent can be encrypted with SSL, which is enabled by default.

In Part 3 we will be implementing the OpenStack/ACI integration with a registered Red Hat ISO.

The Packstack OpenStack implementation is generally meant for a quick-and-dirty install. It is easier to do since it has fewer components. It is generally done as an All-In-One (AIO) install, meaning the controller and compute are deployed on the same node.

In the Packstack install method, all OpenStack modules are implemented as systemd services. In the Red Hat commercial undercloud (director) / overcloud method, all OpenStack components are containerized. This is shown in the diagram below.

What will be done in Part 3:

• Part 3: Installing OpenStack and integrating with ACI using Red Hat’s director-based method with a registered Red Hat ISO. This is one of the supported professional methods to install OpenStack (there are several other supported methods). In this method we will again use 4 VMware VMs across 2 ESXi hosts. The first one is the install VM, also called the director. You install the director software there, which makes it a small OpenStack cluster in its own right; this is called the undercloud. You then use the director to spin up the OpenStack cluster that you will actually use; this is called the overcloud. The servers built by the undercloud will be the overcloud nodes: in our case, one controller and 2 computes. This method is called TripleO, which stands for OpenStack On OpenStack. Since this is a production-grade install, the scripts provided by Red Hat will do everything for you, including PXE-booting the controller and computes and installing the required software (the overcloud) on them for you to start using. When you install OpenStack on bare metal using this method, you point the PXE control to the IMC/CIMC (Integrated Management Controller) of each bare-metal server. The script uses those IMC IPs to power the bare metals on and off at the appropriate times and PXE-boot them. The protocol used to power the bare metals on/off is IPMI (Intelligent Platform Management Interface). Since we will be doing this install on top of VMware VMs, we can’t point the script at an IMC for IPMI. As a workaround we will use a Virtual BMC (vBMC, a virtual Baseboard Management Controller). This fools the system into thinking it is talking to an IMC: the IPMI commands to power the VMs up and down get sent to vCenter, which takes the appropriate action.
I have the docker-compose code for this which you can install in a few minutes to serve this purpose.  I will guide you through this at the appropriate time.

First, Let’s look at the RedHat documentation to see what the connectivity should look like:

From Figure 3.1 of the RedHat documentation the connectivity needed shows as below.  I’ve crossed out the items that we will not use in this guided install.

What will be our physical topology for this guided lab:

In this guided lab, I am doing the install on top of ESXi hypervisors, as previously stated. I am going to use two different ESXi hypervisors managed by a vCenter. This will allow me to spin up the director node, the 2 compute nodes, and the controller node across those 2 hypervisors. We’ll make OpenStack think they are bare metal; it won’t know any better. If you want to do this over bare metal directly (the production way), you will need to adjust accordingly.

I’ve connected one uplink from each hypervisor to a port on the ACI leaf. I don’t have extra uplinks, so I will just do it with single links. Ideally you would use bonded links going to 2 leaves. However, for the sake of simplicity, I would urge you to just follow my example to get it up and running and get familiar with the integration. The diagram below shows my physical connectivity for the two ESXi hosts.

From the diagram above note the following:

• For our lab guided install, we will not have separate Swift/Cinder  and Ceph Storage nodes.  Ceph Storage nodes are responsible for providing object storage in a Red Hat OpenStack Platform environment.
• NIC1 is used for PXE and for DHCP (the director/undercloud node provides these services).  NIC2 and NIC3 are bonded together and will have subinterfaces defined on the Openstack Nodes for all the vlans required.   In our Guided Install we will use  one NIC for everything.  There will be 802.1q subinterfaces on these single NICs for each Node (including the undercloud/director node).  This will make the connectivity very simple and give you a good understanding of how this works without getting distracted.
• Notice that the DHCP PXE boot vlan is connected to every node
• Notice that the Internal API and Cluster mgmt vlan connects to both Computes and Controllers
• Notice that the Tenant vlan connects to Computes and Controllers
• Notice that the API vlan connects to Computes and Controllers
• Notice that the External vlan connects to the Director and Controllers

Study the figure above properly.  Notice the following:

• I have 2 esxi hypervisors (10.0.0.51 and 10.0.0.53) that are connected via single uplink to ACI Leaf.
• From Leaf 102,  I will create an L3 Out in common Tenant and will advertise the BD subnet 100.100.160.0/24 to the outside.
• The L3Out will be configured in the common Tenant (the most logical place) and peer with the ISN SVI. This is the same ISN that I use for my Multi-Site ACI connectivity (as a side reference). I also want to point out that when we build Projects from OpenStack (a Project is equivalent to an ACI tenant), those tenants will be built in ACI in their own space. We will look at this in Part 4: Using ACI-Integrated OpenStack.
• My Edge ASA has a static route to 100.100.160.0/24 and is also configured with SNAT, so that endpoints in BD extNet with IP in range of 100.100.160.0/24 can reach the Internet
• I have an OOB mgmt switch that connects to the OOB of every device. All my OpenStack nodes will have one NIC only, for simplicity. The CentOS VMs will be configured with 802.1q tagging.
• I also need to ssh in to the director to configure it and install the required OpenStack software (the undercloud software). I will also have to execute the overcloud script from the director once the undercloud is installed. For that reason, notice how I brought VLAN 500 (OOB/mgmt) into the trunk that goes to leaf-102.
• I’ve manually created a DVS that spans those 2 ESXi hosts, with the uplinks as shown in the diagram. All the nodes will have only 1 NIC, which connects to the one port group on that DVS, “RH-Director-Services”. That port group is configured as a trunk. The PXE packets will go over it too, untagged (since PXE has to be untagged).

Now, let’s take a look at the logical Topology that we will be using in our guided install.  The diagram below depicts this.

From the diagram above, note the following:

• All the OpenStack nodes — computes, controller, and even the undercloud node — have one NIC, ens192, that connects to the port group RH-Director-Services on the DVS. Remember that RH-Director-Services is a trunk port group. There will be 802.1q sub-interfaces configured on the nodes themselves to access the correct networks as needed.
• The VLANs shown in the diagram all go through the ACI fabric. All VLANs except for the L3Out VLAN (VLAN 100) will have EPGs built with static bindings towards the ESXi hosts for this connectivity. We’ll discuss the ACI configuration for this shortly.
• All those EPGs will belong to their own BD and to a common VRF that we spin up in common Tenant.  Details will be shown shortly.

Since in this guided lab we are doing everything over ESXi hypervisors (nested virtualization), I want to discuss the configuration of the DVS on ESXi.

Please study the diagram below for this.

Notice the following:

• all the VMs are connected to port group RH-Director Services
• The Port Group is configured as a Trunk
• Uplinks are vmnic4 on both hypervisors.
• Please ignore vmnic3 which goes to a different esxi and is not relevant here
• Also, notice that there is a VM called “NTP-DNS-RSOpenstack” in the VM list.  This VM is something that we’ll spin up to provide NTP/DNS/vBMC services.  We’ll do this in a bit using docker containers with code that you can get from my git repo and spin up in a few minutes.

Also Pay attention to the DVS Global Setting as shown in the diagram below:

Note the points below:

LLDP on the DVS is disabled because we don’t want the DVS to consume the LLDP packets. LLDP packets from the OpenStack nodes will pass through the DVS and go to the ACI leaf.

For the port group, set the MTU to 9000 bytes. The ACI port MTU is already 9000 by default. The reason for this is that later we could use this setup to install OpenShift on top of OpenStack, and that will require larger MTUs. As a side note, when we configure the overcloud, we will set the MTU to 9000 on the physical node NIC and on all the VLANs except for the extNet VLAN. That is because on the Internet you won’t get a 9000-byte MTU, and we want to minimize PMTU discovery. The other mechanism is fragmentation, which routers often refuse to do, for good reason. We’ll configure that VLAN for 1500 bytes. However, the port group in the DVS will be capable of supporting 9000 bytes for the other VLANs.
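Once things are up, a do-not-fragment ping is a quick way to confirm that jumbo frames actually pass end to end. The block below just derives the right probe size; the target address is an assumption taken from the extNet gateway used later in this lab:

```shell
# For a 9000-byte IP MTU the largest ICMP payload is:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
# -M do sets the Don't Fragment bit on Linux; run this from a node on the 9000-byte path
echo "ping -M do -s ${PAYLOAD} 100.100.160.254"
```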

DVS Port Group Settings should be as shown in the diagram below:

Notice the following:

Because we will be doing nested virtualization, we will need to set Promiscuous Mode, MAC Address Changes, and Forged Transmits to Accept (see figure above).

Also, because we are doing nested virtualization, ensure that the 2 computes have “Hardware Assisted Virtualization” turned on (from vCenter). (See figure below.)

Now, let’s take a look at the ACI settings that are needed to do all this OpenStack infrastructure plumbing.

Let’s start with the Fabric Access Policies. In my case, leaf-101 has port 1/20 with policy group RH-Director attached to it, as shown below.

Similarly, leaf-102 has ports 1/19 and 1/25 attached to the RH-Director policy group, as shown below.

Policy group “RH-Director” is attached to AEP “RH-Director”. Also notice that I have LLDP enabled on the ports. Please see the diagram below.

The AEP “RH-Director” is associated with a physical domain and also an L3Out domain, as shown below. It is also important that you enable the Infra VLAN on the AEP: our OpenStack/ACI-integrated nodes will communicate with the leaves using VXLAN and OpFlex.

Notice below that the physical domain and the L3Out domain defined in ACI are both associated with the same VLAN pool, “RH-Director”, containing all the VLANs we spoke about. I’ve included VLAN 3967 too, but that’s not really needed: VLAN 3967 is our ACI infra VLAN (chosen during initial APIC setup) and will be passed through anyway because the AEP has the Infra VLAN checkbox enabled. I like to put it in the VLAN pool regardless, so I don’t forget which infra VLAN was used for the initial ACI setup.

Please also ssh to the APIC and gather the information by cat’ing the file:

This will tell you what your initial ACI infra VLAN was during APIC setup. Also make note of the TEP pool and the GIPo address used for the fabric during initial setup, as well as the ACI fabric release. Verify that the leaves and spines are all running the same release.

In the common Tenant, spin up a new VRF “RH-Director”. Also spin up the BDs as shown below, one for each VLAN. All BDs should belong to the VRF we created. Enable unicast routing only for VLAN 160 (extNet) and VLAN 161 (PXE); all the other BDs should be pure L2 BDs. I’ve used GW IPs of 100.100.160.254/24 and 192.168.24.254/24 respectively. Please see the figure below.

Now create EPGs for each of the VLANs. Each EPG should be tied to its respective BD. Please see the figure below.

Create static bindings for each EPG towards the hypervisors. Use the ports that we identified before and created the access policies for. All static binding ports should use their respective VLANs, and they should all be trunk ports. Please see the figure below.

For PXE VLAN 161, use 802.1P encapsulation, since PXE packets are sent untagged. See the figure below.

Now, create a L3Out in Common Tenant “RH-Director” and associate with the VRF.  Make sure to tie in the L3Domain “RH-Director” that we created earlier.   In this case, I will just use static default route instead of using any routing protocol.  Feel free to do otherwise.

The Figure below shows the L3Out Node Profile configuration and the static default pointing to the ISN router SVI

The Figure below shows the L3Out Logical Interface Profile.  Notice that we are using SVI vlan 100 to match with the SVI on the ISN router (our external Router).  Also notice that I’ve kept MTU at 1500 for this.

The Figure below shows the L3Out External EPG configuration.  Note that I’ve used prefix 0/0 with “external Subnets for External EPG” flag and also have any/any contract both provider and consumer.

On the BD for extNet, make sure that the IP is flagged for “Advertise Externally”.  Also tie in the L3Out that you just created.  Please see diagram below.

Make sure to tie in the same any/any contract to the extNet that you used for L3Out external EPG.  Tie it in both as consumer and provider.  Please see figure below.

The last step in getting the OpenStack infra ready from the ACI perspective is to go to System / System Settings / Fabric-Wide Settings and make sure that “OpFlex Client Authentication” is unchecked. Please see the figure below.

We are all done with the Openstack Required connectivity !

Next we need to spin up a VM for Services.  This will serve as your NTP, DNS and vBMC server (running on docker). NTP is an integral part of this configuration.  If NTP is not configured properly,  Director based install will fail.  vBMC will be used for powering on/off the VMs accordingly during PXE boot.  Local DNS is nice to have, so you can resolve the names of your Openstack cluster and also resolve global names.   I would suggest bringing up a Ubuntu 18.04 or 20.04 VM with one NIC and attach it to port group RH-Director-Services.   Figure below is a representation of this connectivity for the Services VM.

Boot up the VM and then do sudo -i to go in as root. Modify the /etc/netplan/01-network-manager-all.yaml configuration to define 802.1q sub-interfaces. Put one sub-interface on VLAN 161 (the PXE boot VLAN) and the other on VLAN 500 (the OOB VLAN — use whatever VLAN you use for OOB). Make sure to adjust your OOB VLAN 500 IP and default gateway according to your needs. VLAN 161 is just an isolated network inside ACI, so I suggest keeping it just like I show. Your netplan file should look like below.
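As a reference point, here is a hedged sketch of what such a netplan file can look like. The parent NIC name (ens160), the renderer, and the VLAN 500 addressing are assumptions you must adapt; the VLAN 161 address matches the Services VM IP (192.168.24.20) referenced later in instackenv.json:

```yaml
network:
  version: 2
  renderer: NetworkManager        # matches the 01-network-manager-all.yaml file name
  ethernets:
    ens160: {}                    # parent NIC, no IP of its own
  vlans:
    ens160.161:                   # VLAN 161: PXE - isolated network inside ACI
      id: 161
      link: ens160
      addresses: [192.168.24.20/24]
    ens160.500:                   # VLAN 500: OOB/mgmt - adjust IP/gateway to your network
      id: 500
      link: ens160
      addresses: [10.0.160.20/16]
      gateway4: 10.0.0.1
```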

Next, execute the command netplan try as root, and hit Enter when asked for confirmation. Your VM should now be able to talk on both VLAN 161 and VLAN 500.

First, install Docker and docker-compose on that VM. Follow the instructions in the README file (at the very bottom):

https://github.com/soumukhe/docker-ntp

Installing NTP container:

On the Services VM, make sure you are ssh’d in as a user (not root).

```
git clone https://github.com/soumukhe/docker-ntp.git
cd docker-ntp
vi docker-compose.yml
```

Change the last line to the NTP server that you use. As an example, if your NTP server is 10.100.254.130, then the end of the docker-compose.yml should read:

```
environment:
  - NTP_SERVERS=10.100.254.130
```

Now start the docker container:

```
docker-compose up --build -d
```

check with: docker ps

Your OpenStack nodes, including the director, will point to this NTP server over VLAN 161. The NTP server will synchronize its time with the real NTP server at 10.100.254.130 (over OOB VLAN 500) and pass it along to the OpenStack nodes (over PXE VLAN 161).

Installing the vBMC container and setting it up:

Since the Services VM has an OOB connection on VLAN 500, it should be able to reach vCenter. When installing the overcloud from the director, the director script will send IPMI commands to the vBMC server, which will in turn talk to vCenter to power the virtual machines (OpenStack nodes) off and on. It will then PXE-boot them and install all the software needed for the OpenStack overcloud.

To install the vBMC container and configure it, please follow:

https://github.com/soumukhe/vBMC-docker-compose

I’ve written very explicit instructions on how to set it up; it should take only a few minutes to get it going and to test whether IPMI commands properly power the intended VMs on and off.
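For orientation, an IPMI sanity check against vBMC looks roughly like the line printed below. The host is the Services VM’s PXE-VLAN address from this lab; the port, user, and password are assumptions that must match what you configured in vBMC (it typically exposes one UDP port per managed node):

```shell
# Build the ipmitool probe; it is echoed so this sketch is self-contained.
VBMC_HOST=192.168.24.20   # Services VM on PXE VLAN 161
VBMC_PORT=6230            # assumed: the vBMC port you mapped to one of the nodes
CMD="ipmitool -I lanplus -H ${VBMC_HOST} -p ${VBMC_PORT} -U admin -P password power status"
echo "${CMD}"             # run it for real from the director to see the chassis power state
```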

Installing DNS container:

To install the DNS container and get it going, please follow:

https://github.com/soumukhe/coredns_docker-compose

It should take you just a few minutes to get it set up and working.

As a last step to ensure all containers are running on the Service VM, execute the command:

docker ps --format '{{ .Names }}' | sort

It’s now time to start bringing up the VMs that will host Director and the Nodes.

You can choose what is comfortable for you on resources for the VMs.

I would suggest:

• 16 GB RAM, 16 vCPUs, and a 150 GB disk (thin provisioned to save space) for the director
• 48 GB RAM, 48 vCPUs, and a 350 GB disk (thin provisioned to save space) for the controller
• 48 GB RAM, 48 vCPUs, and a 350 GB disk (thin provisioned to save space) for each of the 2 computes
• Make sure each VM has 1 NIC, connected to the RH-Director-Services DVS port group

This gives you enough horsepower to spin up many VMs on OpenStack once you are done. However, if you just want to test it out with basic CirrOS-image VMs, you can go much lower.

Note: it is a good idea to take snapshots of the controller and compute nodes while the disk is still raw and unformatted. Later, if the deploy bombs, you can go back to this state in a minute. Otherwise you will have to delete the disk from vCenter and add it back again.

In this guided lab, we will install Red Hat OSP 13 (based on the upstream Queens release) and integrate it with ACI 5.2.

You will need rhel-server-7.9-x86_64-dvd.iso, from:

It’s now time to setup IP for vlan 500 (OOB network), so you can ssh to it.

Setup basic network connectivity:

```
ip a      # this will show you the name of the NIC; in my case it was ens192
sudo -i
cd /etc/sysconfig/network-scripts
```

Edit the files below, changing the OOB IP to the one you want to give your director.

vi ifcfg-ens192:

```
DEVICE=ens192
TYPE="Ethernet"
ONBOOT=yes
BOOTPROTO=none
MTU=9000
```

vi ifcfg-ens192.500:

```
TYPE="Ethernet"
DEVICE="ens192.500"
IPV4_FAILURE_FATAL="no"
ONBOOT="yes"
IPADDR="10.0.160.1"
PREFIX="16"
GATEWAY="10.0.0.1"
VLAN=yes
```

systemctl restart NetworkManager

Ping your default gateway for OOB.  In my case this is 10.0.0.1.  If ping is successful then you are ready to start working.

You don’t need to worry about the rest of the sub-interface configurations needed on the director node.  The Undercloud Install script will take care of that.

in my case:    ssh soumukhe@10.0.160.1

From here start following the RedHat documentation at section 4.2.
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/director_installation_and_usage/installing-the-undercloud#creating-the-stack-user

### Next, complete sections:

#### 4.3. Creating directories for templates and images
#### 4.4. Setting the undercloud hostname
#### 4.5. Registering and updating your undercloud

For my pool I used:

```
subscription-manager attach --pool=xxx   # obtain the pool ID by reading the Red Hat documentation
```

In my setup I set a variable for the pool ID and used it this way:

```
POOLID=$(subscription-manager list --available --all --matches="Red Hat OpenStack" | grep Pool | awk '{print$3}')
subscription-manager attach --pool=$POOLID
```

Continue with:

### 4.6. Installing the director packages

Skip 4.7, unless you want to do it:   4.7. Installing ceph-ansible

Next go to:

### 4.8. Configuring the director

Here, you will copy the sample undercloud.conf.sample file to your home directory.  You will then have to modify it to your parameters.

This is where things might get a bit complicated.  If you’ve followed this guide this far, you can download my sample configs from my git repo.  This will have all the files that I needed to modify to make the install with ACI Integration work.

cd to your home directory (as stack user) and then do:

git clone https://github.com/soumukhe/RedHat-Openstack_ACI-sampleConfigs.git

Now do:
cd RedHat-Openstack_ACI-sampleConfigs

The directory structure of the sample configs is shown below. Notice that undercloud.conf is under the directory home/stack. You can look at that config and adjust your undercloud.conf accordingly. If you’ve followed this guide all the way, except for the VLAN number for OOB (VLAN 500 in my case), then you will only need to change that part in the undercloud.conf in your home directory.

### 4.11. Installing the director

This could take 45 minutes to an hour to complete.

In case you get cut off midway during the install (it happened to me once, due to my home wireless going off air), you will need to run the install again. However, this might fail because of a config lock. In that case, do the following:

```
sudo rm /var/run/os-refresh-config.lock
sudo yum repolist
sudo reboot
# ssh back in to the director, then:
openstack undercloud install
```

Another note I wanted to point out: in the Packstack install method you have to disable SELinux by doing the following:

```
vi /etc/selinux/config
SELINUX=disabled
```

Reboot and verify with getenforce.

Do not disable SELinux for the director-based install. If you do, the overcloud install will fail. If you had done this, revert to SELINUX=enforcing and reboot the director.

Complete the rest of section 4.11, then jump to 4.12 and 4.12.1, followed by section 4.13.

### 5.1. Registry Methods

It is easiest to choose the Local Registry option. The Satellite registry is probably the recommended option, but you have to do more configuration for that.

### 5.5. Using the undercloud as a local registry

Read item 3 in section 5.5, then do items 4 and 5:

4) Modify the local_registry_images.yaml file and include the following parameters to authenticate with registry.redhat.io:

5) Log in to registry.redhat.io and pull the container images from the remote registry to the undercloud.

Finish off with item 6 in the section 5.5 to test that images have been downloaded

Now we move on to Section 6.1

### 6.1. Registering Nodes for the Overcloud

For this we will do “introspection” of the controller and compute nodes, so the director knows about the nodes. We will point the script at vBMC so it can be used for IPMI.

Copy the “instackenv.json” file from the sample repo to the /home/stack directory. Change the MAC addresses of the nodes in that file to match the controller and compute MACs that vCenter shows for them. Also make sure that the IP of the Services VM is correct; in my example it is 192.168.24.20. The pm_user and pm_password values should be correct as well.
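For orientation, below is a hedged sketch of what one node entry in instackenv.json typically looks like for a vBMC-backed node in OSP 13. The MAC, port, and credentials are placeholders; follow the sample file from the repo for the authoritative layout:

```json
{
  "nodes": [
    {
      "name": "overcloud-controller-0",
      "pm_type": "pxe_ipmitool",
      "pm_addr": "192.168.24.20",
      "pm_port": "6230",
      "pm_user": "admin",
      "pm_password": "password",
      "mac": ["00:50:56:aa:bb:cc"]
    }
  ]
}
```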

Now, run the below commands:

```
openstack overcloud node import --validate-only ~/instackenv.json
openstack overcloud node import ~/instackenv.json
openstack baremetal node list
openstack overcloud node introspect --all-manageable --provide   # the actual introspection; VMs will be powered on and off
openstack baremetal node list
```

This should show you a list of the bare metals (VMs in this case) that got introspected.

check the properties of the nodes by the commands:

```
openstack baremetal node show overcloud-controller-0 -c properties
openstack baremetal node show overcloud-compute-0 -c properties
openstack baremetal node show overcloud-compute-1 -c properties
```

modify the properties of the controller and computes as shown below:

```
openstack baremetal node set --property capabilities='profile:control,boot_option:local,cpu_hugepages:true,cpu_txt:true,cpu_vt:true,cpu_aes:true,cpu_hugepages_1g:true,boot_mode:bios' overcloud-controller-0
openstack baremetal node set --property capabilities='profile:compute,boot_option:local,cpu_hugepages:true,cpu_txt:true,cpu_vt:true,cpu_aes:true,cpu_hugepages_1g:true,boot_mode:bios' overcloud-compute-0
openstack baremetal node set --property capabilities='profile:compute,boot_option:local,cpu_hugepages:true,cpu_txt:true,cpu_vt:true,cpu_aes:true,cpu_hugepages_1g:true,boot_mode:bios' overcloud-compute-1
```

Delete the control and compute flavors that were created during the undercloud installation:

openstack flavor delete control
openstack flavor delete compute

Recreate the control and compute flavors based on the properties shown by the earlier command. In my case, “openstack baremetal node show <name_of_node> -c properties” showed 48 GB of memory, 48 vCPUs, and roughly 340 GB of disk for both compute and control, so I would do the following.

openstack flavor create --ram 49152 --disk 349 --vcpus 48 control
openstack flavor create --ram 49152 --disk 349 --vcpus 48 compute

Now, modify the properties of the flavors to match the introspection results by the following commands:

openstack flavor set \
  --property capabilities:profile="control" \
  --property capabilities:boot_option='local' \
  --property capabilities:cpu_hugepages='true' \
  --property capabilities:cpu_txt='true' \
  --property capabilities:cpu_vt='true' \
  --property capabilities:cpu_aes='true' \
  --property capabilities:cpu_hugepages_1g='true' \
  --property capabilities:boot_mode='bios' \
  control

openstack flavor set \
  --property capabilities:profile="compute" \
  --property capabilities:boot_option='local' \
  --property capabilities:cpu_hugepages='true' \
  --property capabilities:cpu_txt='true' \
  --property capabilities:cpu_vt='true' \
  --property capabilities:cpu_aes='true' \
  --property capabilities:cpu_hugepages_1g='true' \
  --property capabilities:boot_mode='bios' \
  compute

Check that the properties match:

openstack baremetal node show overcloud-controller-0 -c properties
openstack baremetal node show overcloud-compute-0 -c properties
openstack baremetal node show overcloud-compute-1 -c properties
openstack flavor show control
openstack flavor show compute

Now we need to do a few things before we run the overcloud creation script for Openstack / ACI integration:

First, let’s put the required environment files in the templates directory:

cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml ~/templates/aci_roles_data.yaml
cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml ~/templates/roles_data.yaml   # keep the original file name as well, in case you want to deploy without ACI integration
cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml ~/templates

Note: network_data.yaml needs to be modified in your templates directory to match your VLANs and IP ranges. It's pretty self-explanatory if you look at the file; compare it with the network_data.yaml from the cloned sample repo. If you've followed the guide exactly, you could just copy the network_data.yaml from the cloned sample repo to your templates directory. There should already be an overcloud_images.yaml file in the templates directory.
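For orientation, each network in network_data.yaml is a short stanza like the following (the keys are the standard TripleO ones; the VLAN and subnet values here are placeholders, so substitute your own):

```yaml
- name: InternalApi
  name_lower: internal_api
  vip: true
  vlan: 163                       # placeholder; set to your internal-API VLAN
  ip_subnet: '172.16.2.0/24'      # placeholder subnet
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
```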

At this point we need to follow the ACI / Openstack Integration guide at:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/openstack/ACI-Installation-Guide-for-Red-Hat-Using-OSP13-Director/ACI-Installation-Guide-for-Red-Hat-Using-OSP13-Director_chapter_010.html

From CCO, download the following files:

openstack-ciscorpms-repo-13.0-1065.tar.gz
tripleo-ciscoaci-13.0-1065.noarch.rpm

Now execute the commands:

sudo yum --nogpgcheck localinstall tripleo-ciscoaci-13.0-1065.noarch.rpm   # installs the dependencies
/opt/ciscoaci-tripleo-heat-templates/tools/build_openstack_aci_containers.py -z /home/stack/openstack-ciscorpms-repo-13.0-1065.tar.gz -u 192.168.24.1

The second command uses the upstream Docker images as a base to build the required containers and pushes them to the local Docker registry. It creates a /home/stack/templates/ciscoaci_containers.yaml environment file. In this case 192.168.24.1 is the IP of your Director node on VLAN 161, which was configured by the undercloud install based on the contents of the undercloud.conf file.

This should take about 45 minutes to process. It will leave the environment file named “ciscoaci_containers.yaml” in the /home/stack/templates directory.

After this, you also need to modify the ~/templates/aci_roles_data.yaml file based on the Cisco document. If you followed the guide, you could just copy the aci_roles_data.yaml file from my cloned git repo to your templates directory.

You also need to create an environment file for the ACI install. You can copy the file “ciscoaci.yaml” to your templates directory.
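The ciscoaci.yaml environment file mainly tells the plug-in how to reach the APIC. As a rough sketch only (verify the parameter names against the Cisco install guide for your plug-in version; all values below are placeholders):

```yaml
parameter_defaults:
  ACIApicHosts: 10.0.0.1          # APIC address(es); placeholder
  ACIApicUsername: admin
  ACIApicPassword: password       # placeholder; this is also your Horizon admin password
  ACIApicSystemId: overcloud      # must be unique per OpenStack instance on the fabric
  ACIApicInfraVlan: 4093          # the ACI infra VLAN
```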

You should also set the MTU to 9000 for the interfaces and VLANs, except for the extNet VLAN 160. That way you will be able to install OpenShift on top of OpenStack later without issues.

To make the custom NIC Template do the following:

1. Generate template files

/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py \      -p /usr/share/openstack-tripleo-heat-templates \      -r /home/stack/templates/aci_roles_data.yaml \      -n /home/stack/templates/network_data.yaml \      -o /home/stack/openstack-tripleo-heat-templates-rendered --safe

2. Copy the generated single-nic-vlans directory contents to ~/templates/custom-nics

cp -r ~/openstack-tripleo-heat-templates-rendered/network/config/single-nic-vlans/   ~/templates/custom-nics/

3. Change the MTU settings in compute.yaml and controller.yaml to 9000

cd  ~/templates/custom-nics/

In compute.yaml and controller.yaml, change nic1 and the VLANs to MTU 9000 (don't change the external network, because you don't want fragmentation there).

You can just copy the contents of the custom-nics directory  from the cloned repo directory to your ~/templates/custom-nics directory.

Example of the controller.yaml change:
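A hedged sketch of what the change looks like in the rendered single-nic-vlans controller.yaml (the structure follows the standard template; set mtu: 9000 on nic1 and on each VLAN, but not on the external VLAN):

```yaml
network_config:
  - type: ovs_bridge
    name: bridge_name
    mtu: 9000                # bridge MTU raised to 9000
    members:
      - type: interface
        name: nic1
        mtu: 9000            # jumbo frames on the physical NIC
        primary: true
      - type: vlan
        mtu: 9000            # repeat for each VLAN except the external one
        vlan_id:
          get_param: InternalApiNetworkVlanID
        addresses:
          - ip_netmask:
              get_param: InternalApiIpSubnet
```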

You also need a node-info.yaml file, which defines the number of controllers and computes. You can copy it directly from my cloned sample repo into your ~/templates folder.
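The node-info.yaml file is short; a minimal sketch (the parameter names are the standard TripleO ones, and the counts match this guide's one controller and two computes):

```yaml
parameter_defaults:
  OvercloudControlFlavor: control   # the flavors created earlier
  OvercloudComputeFlavor: compute
  ControllerCount: 1
  ComputeCount: 2
```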

There is a deploy.sh file in the cloned git directory. Copy that to your ~/templates directory as well.
Please note: we are forcing the OpenStack hypervisor to use qemu instead of kvm. That is because we are doing nested virtualization, and kvm will not work in this case.

• In a production environment you will want to use kvm, not qemu, since you will deploy on real bare metal rather than on VMware VMs (nested virtualization).
• qemu (without kvm) is a type 2 hypervisor: it runs in userspace, everything is emulated in software, and performance will be poor.
• kvm is a type 1 hypervisor and runs in kernel space.
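For orientation, deploy.sh boils down to a single openstack overcloud deploy command. A hedged sketch under the file names used in this guide (compare against the actual deploy.sh from the cloned repo; the exact list of -e environment files matters):

```shell
#!/bin/bash
# Sketch only - use the deploy.sh from the cloned repo as the source of truth.
openstack overcloud deploy --templates \
  -r /home/stack/templates/aci_roles_data.yaml \
  -n /home/stack/templates/network_data.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /home/stack/templates/ciscoaci_containers.yaml \
  -e /home/stack/templates/ciscoaci.yaml \
  -e /home/stack/templates/node-info.yaml
# The qemu-instead-of-kvm setting for nested virtualization is carried in one
# of the environment files (TripleO parameter NovaComputeLibvirtType: qemu).
```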

Note: before running the deploy script, check each of the files listed in the script against the files from my cloned repo. Remember that changes were made in each of the files; you need to verify that you've made the appropriate changes too.

Finally, cd to ~/templates and run the deploy script: ./deploy.sh

This will take around 2 hours to run, at the end of which you will have a fully integrated OpenStack/ACI fabric.

Note:

There are also 2 other deploy scripts in the cloned git repo.

• deploy.sh.mtu1500-noACI.sh    # can be used for deploying without ACI Integration
• deploy.sh.mtu1500-ACI.sh        # can be used for deploying with ACI Integration with default of 1500 MTU

If there is a mistake in your files and the install fails, you need to fix the YAML files, delete the partially deployed overcloud, and run the deploy script again. To do this:

openstack baremetal node list   # list the bare metal nodes; they will most likely show Provisioning State: active

Put all of the nodes in maintenance mode:

openstack baremetal node maintenance set overcloud-controller-0
openstack baremetal node maintenance set overcloud-compute-0
openstack baremetal node maintenance set overcloud-compute-1
openstack baremetal node list   # should show Maintenance: True

Now do:

openstack stack delete overcloud

Check periodically that the overcloud has finished deleting:

openstack stack list

After the stack has been deleted, the nodes will automatically power off. However, the OS that was installed on the previous run is still present on the disks of the controller and compute nodes. Go to vCenter and revert to the snapshot of the raw-disk state for the controllers and computes. This should take a minute or so per node.

Once the stack is deleted, take the nodes out of maintenance:

openstack baremetal node maintenance unset overcloud-controller-0
openstack baremetal node maintenance unset overcloud-compute-0
openstack baremetal node maintenance unset overcloud-compute-1
openstack baremetal node list   # should show Maintenance: False and Provisioning State: available

You could now run the deploy script again.

If for whatever reason you want to run the introspection again, you must first delete the nodes, one at a time, with: openstack baremetal node delete <node_name>. Then you can run introspection again and set the properties.

After Successful deployment you should see something like below:

It will also create an overcloudrc file for you, which you can source to start using the OpenStack environment from the CLI.

As an example, below is the output of “openstack hypervisor list” after sourcing the overcloudrc file.

Notice that unlike a Packstack install, the TripleO method uses Docker containers for all of its services, on both controllers and computes. Below you will see that the controller has 61 containers in this case (computes have about 10). You can SSH to the controllers or computes from the Director node without a password: use heat-admin@<controller_ip>, which you can get from “openstack server list”.

You could also interact directly with containers to get relevant information.

Below I am checking the Nova cell mappings:

    [root@controller-0 ~]# docker exec -it nova_api bash
    ()[root@controller-0 /]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Also notice that even though the cluster uses only one controller in this case, it still uses the High Availability mechanism through Pacemaker. Check with “pcs status” on the controller.

Notice below that ACI shows the 3 nodes, 1 Controller and 2 computes in the OpenStack VMM domain

There are no faults as you can see below

You can also browse to the Horizon dashboard. Use the password you specified in the ciscoaci.yaml file; the username is admin.

If you look at Compute, you can see the 2 computes online with their resource utilization.

Now you can also copy the contents of the /etc/hosts file from the controller or a compute and populate the DNS server with that info. To make this easy, I've written a simple bash script, called convert.db, which you will find in the cloned repo. Just copy the contents of the controller's /etc/hosts file into the file etc.hosts.txt and execute the script; the DNS-format records are written to db.txt.
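The conversion is essentially a one-line awk over the hosts file. A minimal standalone sketch of the idea (the sample host entries are made up; the real convert.db in the repo is the source of truth):

```shell
# Sample input in the format of /etc/hosts lines copied from the controller.
cat > etc.hosts.txt <<'EOF'
192.168.24.10 controller-0.localdomain controller-0
192.168.24.11 compute-0.localdomain compute-0
EOF

# Emit "shortname IN A ip" for every IPv4 host entry, skipping loopback.
awk '$1 ~ /^[0-9]/ && $1 !~ /^127\./ { print $NF " IN A " $1 }' etc.hosts.txt > db.txt
cat db.txt
```

For the sample input above, this prints `controller-0 IN A 192.168.24.10` and `compute-0 IN A 192.168.24.11`.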

References:

### Cisco ACI Unified Plug-in for OpenStack Architectural Overview

Cisco-ACI-Plug-in-for-OpenStack-Architectural-Overview.pdf

### Cisco APIC OpenStack Plug-in Release Notes, Release 5.2(1)

https://www.cisco.com/c/en/us/td/docs/dcn/aci/openstack/release-notes/5x/cisco-apic-openstack-release-notes-521.html

### Cisco Virtualization Matrix

https://www.cisco.com/c/dam/en/us/td/docs/Website/datacenter/aci/virtualization/matrix/virtmatrix.html

### Installing Openstack OSP13/ACI Integration

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/openstack/ACI-Installation-Guide-for-Red-Hat-Using-OSP13-Director/ACI-Installation-Guide-for-Red-Hat-Using-OSP13-Director_chapter_010.html

### Red Hat OSP 13 Install Guide

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/director_installation_and_usage/chap-introduction

### Troubleshooting overcloud

https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/troubleshooting/troubleshooting-overcloud.html

### Node Replacement:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/sect-scaling_the_overcloud