Openstack with ACI Integration – Part 2 (Installing with Packstack)

Contributors:  Soumitra Mukherji and Alec Chamberlain
With Expert Guidance from: Filip Wardzichowski

As discussed in Part 1, this writeup consists of four parts:

  • Part 1:  General Discussion of Openstack / ACI Integration
  • Part 2: Guided install of Openstack/ACI with opensource CentOS7 or with Red Hat registered CentOS7 using packstack (works but is unsupported; good to learn in order to get a better understanding of the integration)
  • Part 3: Guided install of Openstack/ACI using the Red Hat registered CentOS7 ISO and the supported director-based install with undercloud and overcloud
  • Part 4:  Using Openstack/ACI to build a full working Project (Tenant in ACI) and inspecting what happens on the ACI side

Repeat of the disclaimer I stated in Part 1:

In a production environment you will want to install Openstack directly on bare-metal servers. However, we will install Openstack on top of VMware hypervisors (i.e., nested virtualization) for both Part 2 and Part 3. I don't have spare bare-metal servers lying around to install Openstack on and test the ACI integration with, and I suspect a lot of you are in the same situation. This approach gives you an avenue to get familiar with Openstack and ACI integration without disrupting the VMware users. Further, it's really handy to be able to take snapshots of your VMs at critical points, so you can fall back if you make a mistake during the install process.

Highly Recommended:

If you want to understand the Architecture for OpenStack/ACI Integration, I strongly suggest going through this document:
Cisco ACI Unified Plug-in for OpenStack Architectural Overview

Before we start implementing OpenStack/ACI integration using Packstack, let’s discuss a few items about the architecture.

As mentioned in Part 1, OpenStack comprises several projects that talk to each other (via message queues) to make the entire solution work. Openstack does not define its own hypervisor or networking as such; it uses pre-existing components that are proven and work well: most commonly KVM for the hypervisor (others such as Xen or ESXi can also be used), and most commonly Open vSwitch or Linux bridges for the networking.

The project that deals with the networking piece of OpenStack is called Neutron.

The Neutron architecture comprises two main portions:

  1. The Core Plugin, which defines the basic Layer 2 connectivity
  2. The Service Plugins, which define how services such as routers, load balancers, etc. are implemented.

The Core Plugin is implemented by 2 functional categories:

  1. Type Drivers:  maintain any needed type-specific network state, and perform provider network validation and tenant network allocation
  2. Mechanism Drivers: responsible for taking the information established by the TypeDriver and ensuring that it is properly applied given the specific networking mechanisms that have been enabled.

The diagram below is a representation of this.

Figure 1: Neutron Project Implementation Architecture
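To make these concrete: on a stock (non-ACI) Packstack node, the Type Drivers and Mechanism Drivers in use can be read straight out of the ML2 configuration. A hedged example of what you would typically see (exact values depend on your answer file):

grep -E '^(type_drivers|tenant_network_types|mechanism_drivers)' /etc/neutron/plugins/ml2/ml2_conf.ini
# typical output on a stock install, before any ACI integration:
# type_drivers = vxlan,flat,vlan
# tenant_network_types = vxlan
# mechanism_drivers = openvswitch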

For OpenStack/ACI Integration new plugins for Type Drivers, Mechanism Drivers and Service Plugins have been implemented.

The Cisco ACI OpenStack Neutron Plugin is only supported with commercially supported OpenStack distributions.

In other words, a PackStack-deployed (opensource) Openstack/ACI integration is not officially supported. However, as mentioned previously, it does work, is easy to implement and gives us a good learning experience.

There are two different modes in which the integration can be done:

1) OpFlex Mode (OpFlex-ovs): In this option, Cisco APIC controls the upstream Open vSwitch (OVS) running on each  Nova compute node by using the OpFlex protocol. This requires installing Cisco OpFlex and OVS agents running on each of the compute nodes. This deployment option implements a virtual machine manager (VMM) on Cisco APIC to provide the fabric administrator maximum visibility of the OpenStack cloud. When choosing the OpFlex mode, the Cisco ACI OpenStack Plug-in replaces the Neutron node datapath enabling fully distributed Layer 2, anycast gateway, DHCP, metadata optimization, distributed NAT, and floating IP enforcement.

2) Non-OpFlex Mode: In this option, Cisco APIC only programs the physical fabric and treats OpenStack tenant traffic as part of Physical domains (PhysDoms). This option can leverage SR-IOV or OVS-DPDK on the compute nodes and does not require installing Cisco agents on the nodes.

For a list of challenges that Openstack/ACI integration solves, please see Part 1 of this writeup.

In this writeup we will be discussing and demonstrating the Openstack/ACI Integration with Opflex Integration mode.

With Opflex ACI Integration mode the plugins that are implemented are shown below in the diagram.

Figure 2: ACI Plugins for Openstack/ACI Integration in Opflex Mode

When Openstack/ACI integration is done, the OpenStack CLI / Horizon becomes the single source of truth (unlike the VMware vCenter / ACI integration). If your developers are already using Openstack, they don't need to learn ACI or even get involved with it. They can use Horizon or the Openstack CLI and configure their environment just as they did before. Behind the scenes, the Openstack controller talks to the Cisco APIC and builds the Tenants and all the necessary plumbing. The network administrator can then use the ACI controller (APIC) to view all of the infrastructure.

The diagram below (from the CCO Architecture Overview document) gives you an overall picture of this workflow.

Figure 3: Workflow (from CCO document listed above, Figure 9)
  1. The OpenStack tenant administrator configures the OpenStack networks through standard Neutron calls or GBP calls using CLI, Heat, Horizon, or REST API calls (a sample CLI call is shown after this list).
  2. The ML2 plus mechanism driver translates the networks created into Cisco AIM policies. Cisco AIM stores the new configuration into its database and pushes network profiles into Cisco APIC through Cisco APIC REST API calls.
  3. Cisco APIC creates related network profiles, for example, Cisco ACI tenants, bridge domains, EPGs, and contracts. If OpFlex agent is installed on the compute nodes, OVS rules are also configured accordingly.
  4. The OpenStack administrator creates instances and attaches them to the previously created OpenStack networks.
  5. Cisco APIC is notified that new instances have been spawned. Consequently, Cisco APIC pushes the related policies to the leaf switches where the compute nodes running these VMs are attached.
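As an example of step 1, a completely standard Neutron/OpenStack CLI call is all that is needed; the network and subnet names below are purely illustrative:

source ~/keystonerc_admin
openstack network create web-net
openstack subnet create --network web-net --subnet-range 172.16.10.0/24 web-subnet
# Behind the scenes the apic_aim mechanism driver and Cisco AIM translate this into the
# corresponding ACI objects (Tenant/BD/EPG); no ACI knowledge is needed by the user.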

What is OpFlex?

OpFlex is an open and extensible policy protocol designed to transfer declarative networking policies such as those used in Cisco ACI to other devices. With OpFlex, the policy model native to Cisco ACI can be extended all the way down into the virtual switches running on the OpenStack hosts. This OpFlex extension to the compute node allows Cisco ACI to use OVS to support common OpenStack features such as routing, Source NAT (SNAT) and floating IP in a distributed manner.

All Cisco ACI leaf switches provide an OpFlex proxy service. The OpFlex agents running on the hosts connect to the proxy through the Cisco ACI infrastructure network, commonly referred to as the infra VLAN.

The compute nodes are provisioned with a Linux subinterface to communicate with the infra VLAN and to obtain a Cisco APIC-provisioned IP address through DHCP from the Cisco ACI tunnel endpoint (TEP) pool. Once IP connectivity is established and the OpFlex-aware agent can connect to the proxy and query Cisco ACI policies, the compute node effectively becomes an extended Cisco ACI leaf.

The OVS agents communicate to the OpFlex proxy through the infra VLAN of the Cisco ACI fabric.

The OpFlex communication happens through the TCP protocol on port 8009. Data between the OpFlex proxy and agent can be encrypted with SSL, which is enabled by default.
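Once the OpFlex agents are installed later in this writeup, this channel can be sanity-checked from any compute node. A minimal check, assuming the subinterface naming and infra VLAN from my topology:

ip -4 addr show ens224.3967   # should hold a DHCP-assigned address from the ACI TEP pool on the infra VLAN
ss -tnp | grep 8009           # should show established OpFlex session(s) from opflex-agent to the leaf proxy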

In Part 2 we will be implementing the Openstack/ACI integration with PackStack. 

As mentioned earlier, this is an opensource install method rather than a commercial one and is not officially supported. However, I feel it gives you a good understanding of the implementation.

The PackStack Openstack implementation is generally meant for a quick-and-dirty install. It is easier to do since it has fewer components, and it is usually done as an All-In-One (AIO) deployment, meaning the Controller and Compute run on the same node. In our example we will deploy one node that is Controller/Compute and 2 nodes that are Compute only.

In the Packstack install method, all Openstack modules run as systemd services. In the Red Hat commercial undercloud (director) / overcloud method, all Openstack components are containerized. This is shown in the diagram below.

Figure 4. Comparing PackStack Install to Redhat Director based install.
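You can see this for yourself once the Packstack run completes: every OpenStack component is just a systemd unit on the node. For example:

systemctl list-units --type=service | grep -E 'neutron|nova|glance|cinder'
# In the director-based install (Part 3), the same components run as containers instead
# (visible with "docker ps" on the overcloud nodes) rather than as plain systemd services.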

What will be done in Part 2:

  • Part 2: Installing Openstack and integrating with ACI using Packstack (with CentOS7 – either the opensource or the Red Hat registered version). This is unsupported but works and gives you a good understanding of the process. Packstack is essentially a bundle of scripts (Puppet manifests and Python) packed together. It is a fire-and-forget method: you bring up Openstack and integrate it with ACI, but if you need to make any changes to the Openstack infrastructure later, you have to make them manually. In our guided install we will have 4 VMs (across 2 ESXi hypervisors): one for the install server, one for the Openstack Controller and two for the Openstack Computes.

Below figure shows the Logical Topology that we will implement.

Figure 5: Logical Topology

Points to note from the Logical Topology.

  • One install server called "packstack". This is not strictly needed; you could run the install from the Controller node itself, but it keeps things nice and clean.
  • 2 Compute Nodes
  • 1 Controller Node
  • Each node has 3 NICs: ens192, ens224 and ens256
  • 4 VLANs in total:
    • Untagged port group on the DVS, VLAN 192, for the tunnel network (192.168.24.0/24, per the answer file) on ens256
    • VLAN 101 (subinterface ens224.101) for extnet, prefix 100.100.101.0/24
    • VLAN 3967 (subinterface ens224.3967) for the OpFlex connection to the ACI leaf (3967 because my ACI infra VLAN is 3967; the IP is assigned automatically via DHCP from the TEP pool)
    • Untagged VMware standard switch port group on ens192 for OOB/mgmt
  • VLANs 101 and 192 are plumbed through ACI static bindings and connect to their own EPG/BD as shown in the diagram
  • All nodes have 16 GB of memory, 16 vCPUs and a 350 GB hard drive. You can adjust based on what you need.

The Physical topology is shown in the figure below:

Figure 6. Physical Topology

Points to note from Physical Topology:

  • Each UCS Server connects to a port on one leaf.
  • The L3Out physical connection is on eth1/25 on Leaf-101 and peers with the ISN on VLAN 500, neighbor IP 10.0.0.9/16
  • The SVI for the L3Out is 10.0.140.253/16
  • No routing protocol is used for the L3Out connection; a default static route to 10.0.0.9 is used instead
  • The external network is 100.100.101.0/24; the BD gateway IP is 100.100.101.254/24
  • The ASA does SNAT for 100.100.101.0/24 and has a static route for 100.100.101.0/24 pointing to 10.0.140.253
  • Manually created DVS with 2 port groups. 
  • Using 2 vCenter Hosts for all the VMs (Nodes for Openstack)

The Figure below shows the DVS

Figure 7. DVS view

The Figure below shows the global properties of the DVS:

Note:

  • MTU is set to 9000
  • LLDP is disabled, so that the lldp packets can reach all the way to the Nodes
Figure 8. Global Properties of DVS

The Figure below shows properties of DVS Port group OpenstackServices-Trunk-AllVlans

Note:

  • Promiscuous Mode, MAC Address Changes and Forged Transmits are set to Accept, because we will do nested virtualization
  • This port group is set to trunk allowing all vlans
Figure 9: Properties of OpenstackServices-Trunk-AllVlans DVS port group

The Figure below shows properties for DVS port group Openstack-Tunnel-Vlan192

Note:

  • Promiscuous Mode, MAC Address Changes and Forged Transmits are set to Accept, because we will do nested virtualization
  • This port group is set to access mode with vlan 192
Figure 10: Properties of Openstack-Tunnel-Vlan192 DVS Port Group

Also, since we are doing Nested Virtualization, we need to ensure to turn on Hardware Assisted Virtualization for all Computes (including Controller, since Controller will also be a Compute in this case)

Figure 10a

The figure below shows the VMs spun up on the ESXi hosts for the Openstack node installs.

Note:

  • host 10.0.0.51 has the Controller and the Install Server
  • Host 10.0.0.53 has the 2 Computes and the yum Repo Server for ACI Plugins ( Note: you could setup the Install Server as a repo if you did not want to have a separate repo server)
Figure 11: VMWare Host Views

ACI Access Policies for Leaf-101 and Leaf-102 are shown below

Figure 12a: Leaf Access Policies

The interface policy group packstack is shown below.

Notice it’s associated with:

  • AEP: packstack
  • lldp: on
Figure 12b: Policy Group packstack

AEP packstack has:

  • Infra Vlan checked
  • associated with 2 domains:
    • L3Out Domain: packstack 
    • Physical Domain: packstack
Figure 12c: AEP Details

Both Physical Domain packstack and L3Out Domain packstack have the same vlan pool packstack associated with them

Figure 12d. Domains and vLan Pool Association

On ACI, we created a VRF in Common Tenant

Figure 12. ACI VRF

2 Bridge Domains were created, one for Tunnel and one for extNet. 

  • The BDs belong to the VRF Created.
  • Unicast is enabled on both BDs, with GW IPs as shown below.
Figure 12. ACI BDs

2 EPGs were created, one for ExtNet and one for Tunnel

The EPGs belong to the respective BDs

Figure 12. ACI EPGs

EPG ExtNet is associated with physical domain packstack

Static bindings with vlan-101 are configured for leaf-101 port 1/19 and leaf-102 port 1/20

Figure 12a. Static bindings and Domain for EPG ExtNet

EPG ExtNet has any/any contract associated with it

Figure 12a. Contract Association with extEPG

EPG OpenstackTunnel is associated with physical domain packstack

Static bindings with vlan-192 are configured for leaf-101 port 1/19 and leaf-102 port 1/20

Figure 12b. Static bindings and Domain for EPG OpenstackTunnel

The figure below shows the L3Out that was created

L3Out is associated with the VRF packstack and L3Out Domain is packstack

Figure 13. L3Out

The figure below shows the default Static Route added to the L3Out

Figure 13. L3Out

The figure below shows the interface profile of the L3Out.

Note that the MTU is kept at 1500 bytes and the interface is configured as an SVI with encap VLAN 500 and IP 10.0.140.253/16.

Figure 14. L3Out Interface Profile

The figure below shows that the any/any contract is associated with the L3Out external EPG. Also, the prefix for the L3Out EPG is 0.0.0.0/0 with the "External Subnets for the External EPG" scope.

Figure 15. External EPG Configuration

Our infrastructure is now all ready!

Next, we need to bring up our VMs. In this case, it's easiest to bring up the Install VM and clone it 4 times: 1 for the Controller, 2 for the Computes and 1 for the yum repo server. Then you would just have to change the IPs, hostnames, etc. on the clones, which is very fast. (Note: you could set up the Install server as a repo if you did not want to have a separate repo server.)

You can use either the opensource CentOS7 ISO or the Red Hat registered CentOS7 ISO.

In this example, I will use the Red Hat registered CentOS image, but other than a few steps for registering and enabling the proper repos, the method is pretty much the same.

Spin up your CentOS VM using ISO from vCenter. 

If using RedHat registered ISO, you can follow this link for the initial instructions on registering, etc.
https://access.redhat.com/articles/1127153

Remember that for Red Hat we will be using Openstack OSP13. The equivalent opensource version is Queens.

Brief instructions for Red Hat ISO:

You need to do this on each individual Controller/Compute node:
subscription-manager register

subscription-manager attach --pool=xxx # Please find your pool ID following the Red Hat documentation listed above

In my setup I set a variable for pool ID and used it this way:

POOLID=$(subscription-manager list --available --all --matches="Red Hat OpenStack" | grep Pool | awk '{print $3}')
subscription-manager attach --pool=$POOLID

subscription-manager repos --disable="*"   # quotes prevent the shell from expanding the asterisk

subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rh-common-rpms
subscription-manager repos --enable=rhel-7-server-extras-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-devtools-rpms


yum install yum-utils -y

yum update -y

One thing to keep in mind when installing the ISO: do not take the default disk layout. With PackStack, the compute storage comes out of the root ("/") filesystem, while the default install gives most of the space to the /home volume. Change that to give /home about 50 GB and the rest to "/".

Figure 17: Volume Configuration while installing ISO

Once the ISO is installed, ssh into the VM.

Then do the below:

1) Add environment settings (as root):

cat > /etc/environment << EOF
LANG=en_US.utf-8
LC_ALL=en_US.utf-8
EOF

2) Disable firewall
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld

3) stop NetworkManager
systemctl status NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager

4) add network service (simpler than NetworkManager)
systemctl enable network
systemctl start network
systemctl status network

5) disable selinux
vi /etc/selinux/config
SELINUX=disabled

reboot and verify with getenforce
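If you prefer a non-interactive way to make the same SELinux change (equivalent to the vi edit above):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config    # confirm the change, then reboot and check with getenforce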

Now, if using opensource CentOS 7 do the following:

sudo yum install -y centos-release-openstack-queens 
sudo yum update -y
sudo yum install -y openstack-packstack

If using the Red Hat Release do the following:

sudo yum update -y
sudo yum install -y openstack-packstack

Now, go ahead and configure your subinterfaces properly: ens224.101 and ens224.3967 (3967 could be different in your case, depending on your ACI infra VLAN setting).

Example below:


ip a      # this will show you the name of the NIC. 
sudo -i
cd /etc/sysconfig/network-scripts

vi the files below and change accordingly

vi  ifcfg-ens224

DEVICE=ens224
TYPE="Ethernet"
ONBOOT=yes
BOOTPROTO=none
MTU=9000

vi  ifcfg-ens224.101

TYPE="Ethernet"
DEVICE="ens224.101"
IPV4_FAILURE_FATAL="no"
ONBOOT="yes"
IPADDR="100.100.101.17"
PREFIX="24"
VLAN=yes

vi  ifcfg-ens224.3967

DEVICE="ens224.3967"
TYPE=Ethernet
NM_CONTROLLED=no
VLAN=yes
ONPARENT=yes
MTU=9000
BOOTPROTO=dhcp
DHCPRELEASE=1
PEERDNS=yes
ONBOOT="yes"



vi route-ens224.3967

ADDRESS0=224.0.0.0
NETMASK0=240.0.0.0
GATEWAY0=0.0.0.0
METRIC0=1000
vi /etc/dhcp/dhclient-ens224.3967.conf 

send dhcp-client-identifier 01:00:50:56:a9:44:60; # 01 followed by the MAC address of this node's ens224 NIC (the 01 prefix is not a typo)
request subnet-mask, domain-name, domain-name-servers, host-name;
send host-name n-packstack-compute-0; # change the host-name per node
option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
option ms-classless-static-routes code 249 = array of unsigned integer 8;
option wpad code 252 = string;
also request rfc3442-classless-static-routes;
also request ms-classless-static-routes;
also request static-routes;
also request wpad;
also request ntp-servers;

Follow this by:

sudo systemctl restart network

Make sure to do ping tests to check connectivity.
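A minimal set of checks, using the addresses from my topology (substitute your own):

ping -c 3 100.100.101.254        # extnet BD gateway in ACI, reached via ens224.101
ip -4 addr show ens224.3967      # should have picked up a DHCP address from the ACI infra TEP pool
ip route show | grep 224.0.0.0   # the multicast route via the infra subinterface should be present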

At this time, make sure to take a snapshot of this VM, so that you can fall back to this state if needed. (Quick tip: do not snapshot the memory and it will go much faster.)

Go ahead and clone this VM: one clone for the Controller, two for the Computes, and one for the yum repo server. (Note: you could set up the Install server as a repo if you did not want to have a separate repo server.)

Go to the console of each VM and change the IP on the mgmt/OOB interface ens192, so you can ssh in.

ssh in to each individual VM and change the IPs of all the interfaces accordingly.

Remember to vi and change /etc/hosts, /etc/hostname, /etc/sysconfig/network-scripts/ifcfg-ens192 and ifcfg-ens224.101, plus the per-node dhclient config for ens224.3967 (a minimal sketch follows).
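A minimal sketch of the per-clone changes, assuming my hostnames and OOB IPs (listed further below); the extnet IP used here is purely illustrative:

hostnamectl set-hostname rh-ps-os-compute-0
vi /etc/hosts                             # update the entry for this node
sed -i 's/^IPADDR=.*/IPADDR=10.0.150.32/' /etc/sysconfig/network-scripts/ifcfg-ens192
sed -i 's/^IPADDR=.*/IPADDR=100.100.101.18/' /etc/sysconfig/network-scripts/ifcfg-ens224.101   # pick a free extnet IP per node
vi /etc/dhcp/dhclient-ens224.3967.conf    # update the MAC in dhcp-client-identifier and the host-name for this clone
systemctl restart network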

If using Red Hat Registered ISO, you will have to re-register the other nodes.

use the commands shown below:
Make sure to do sudo -i  to go in as root.  Do everything from root from now on.

subscription-manager register --force
subscription-manager attach --pool=xxx  # please find your pool ID based on Red Hat Instructions

In my setup I set a variable for pool ID and used it this way:

POOLID=$(subscription-manager list --available --all --matches="Red Hat OpenStack" | grep Pool | awk '{print $3}')
subscription-manager attach --pool=$POOLID

You now need to generate a packstack answer file.

Use the command:

packstack --gen-answer-file myopenstack.answer

An answer file called myopenstack.answer will be generated in your current directory (/root if you are working as root).

You will need to modify the answerfile before running it.

In my case, my
OOB for Controller is 10.0.150.31
OOB for Compute-1 is: 10.0.150.32
OOB for Compute-2 is 10.0.150.33

Also, please pay attention to the other IPs in the answer file (e.g., CONFIG_NTP_SERVERS) and change them based on your environment.

The settings I used are below; change the IPs to match your own setup:

openstack-config --set ~/myopenstack.answer general CONFIG_DEFAULT_PASSWORD superSekret
openstack-config --set ~/myopenstack.answer general CONFIG_SERVICE_WORKERS 2
openstack-config --set ~/myopenstack.answer general CONFIG_NTP_SERVERS 10.81.254.131
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVS_EXTERNAL_PHYSNET extnet
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_L3_EXT_BRIDGE br-ex
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVN_BRIDGE_MAPPINGS extnet:br-ex
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVS_BRIDGE_IFACES br-ex:ens224.101
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS extnet:br-ex
openstack-config --set ~/myopenstack.answer general CONFIG_DEBUG_MODE y
openstack-config --set ~/myopenstack.answer general CONFIG_CONTROLLER_HOST 10.0.150.31
openstack-config --set ~/myopenstack.answer general CONFIG_NETWORK_HOSTS 10.0.150.31,10.0.150.32,10.0.150.33
openstack-config --set ~/myopenstack.answer general CONFIG_COMPUTE_HOSTS 10.0.150.32,10.0.150.33,10.0.150.31
openstack-config --set ~/myopenstack.answer general CONFIG_KEYSTONE_ADMIN_PW superSekret
openstack-config --set ~/myopenstack.answer general CONFIG_KEYSTONE_API_VERSION v3
openstack-config --set ~/myopenstack.answer general CONFIG_HEAT_INSTALL n
openstack-config --set ~/myopenstack.answer general CONFIG_PROVISION_DEMO n
openstack-config --set ~/myopenstack.answer general CONFIG_PROVISION_TEMPEST n
openstack-config --set ~/myopenstack.answer general CONFIG_NAGIOS_INSTALL n
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_ML2_TYPE_DRIVERS vxlan,flat,vlan
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVS_TUNNEL_IF ens256
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS 192.168.24.0/24
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVN_TUNNEL_IF ens256
openstack-config --set ~/myopenstack.answer general CONFIG_NEUTRON_OVN_TUNNEL_SUBNETS 192.168.24.0/24
openstack-config --set ~/myopenstack.answer general CONFIG_CEILOMETER_INSTALL n
openstack-config --set ~/myopenstack.answer general CONFIG_AODH_INSTALL n
openstack-config --set ~/myopenstack.answer general CONFIG_AMQP_HOST 10.0.150.31
openstack-config --set ~/myopenstack.answer general CONFIG_MARIADB_HOST 10.0.150.31
openstack-config --set ~/myopenstack.answer general CONFIG_REDIS_HOST 10.0.150.31
openstack-config --set ~/myopenstack.answer general CONFIG_CINDER_VOLUMES_SIZE 150G
openstack-config --set ~/myopenstack.answer general CONFIG_SWIFT_STORAGE_SIZE 20G

At this time, run packstack with your answer file; this will install Openstack with 1 Controller and 2 Computes, but without ACI integration yet.

packstack --answer-file=myopenstack.answer -d

This could take almost 2 hours to run.  At the end of the run you should see something like this:

Figure 18: Successful Packstack Run with No ACI Integration

For ACI integration you will now need to set up a yum repo and put the ACI plugin packages in that repo. You will then have to go to each node (Controller and Computes) and configure them to use that repo for the plugin install.

( Note: you could setup the Install Server as a repo if you did not want to have a separate repo server)

To set up the repo, copy the ACI plugin tarball (openstack-ciscorpms-repo-13.0-1065.tar.gz, downloaded from Cisco) to the repo server and do the following:

mkdir -p /home/mypackage_dir/repo
cp openstack-ciscorpms-repo-13.0-1065.tar.gz /home/mypackage_dir/repo
cd /home/mypackage_dir/repo
tar -xvf openstack-ciscorpms-repo-13.0-1065.tar.gz

cd /etc/yum.repos.d

Copy the below:

cat <<EOF> customrepo.repo
[myrepo]
name=My custom repository
baseurl=file:///home/mypackage_dir/repo
enabled=1
gpgcheck=0
EOF

If apache httpd is not installed, install it (check with systemctl status httpd):
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd


sudo ln -s    /home/mypackage_dir/repo      /var/www/html/repo

sudo yum update -y
sudo yum clean all
sudo yum repolist all

The yum repo is now set up!

Now, ssh to each node (the Controller and the 2 Computes).

Make sure you do sudo -i to go in as root.

Copy the below to each of the nodes (use your repo server IP in place of 10.0.140.40 below):

cat > /etc/yum.repos.d/myrepo.repo << EOF
[myreposerver]
name=My RPM System Package Repo
baseurl=http://10.0.140.40/repo
enabled=1
gpgcheck=0
EOF

follow this by:

yum clean all
yum repolist

On all 3 nodes (1 Controller and 2 Computes), install the plugins:

yum -y install \
aci-integration-module \
acitoolkit \
agent-ovs \
apicapi \
ciscoaci-puppet \
ethtool \
libmodelgbp \
libopflex \
libuv \
lldpd \
neutron-opflex-agent \
noiro-openvswitch-lib \
noiro-openvswitch-otherlib \
nova-sriov-nics \
openstack-dashboard-gbp \
openstack-heat-gbp \
openstack-neutron-gbp \
opflex-agent \
opflex-agent-lib \
opflex-agent-renderer-openvswitch \
prometheus-cpp-lib \
python2-networking \
python2-tabulate \
python-django-horizon-gbp \
python-gbpclient \
python-meld3 \
supervisor

on Controller do the following:

Change Neutron core plugin and service plugins:

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2plus
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins group_policy,ncp,apic_aim_l3,metering

Change ML2 config:

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers opflex,local,flat,vlan,gre,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types opflex
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers apic_aim
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers apic_aim,port_security

DHCP agent config:

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT ovs_integration_bridge br-int
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT force_metadata True

Create ML2 ACI config:

Note: the apic_system_id value (DMZ-Packstack) will be the name of the VMM domain in ACI.

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini DEFAULT apic_system_id DMZ-Packstack
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini apic_aim_auth auth_plugin v3password
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini apic_aim_auth auth_url http://$(hostname -i | awk '{print $1}'):35357/v3/
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini apic_aim_auth username admin
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini apic_aim_auth password superSekret
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini apic_aim_auth user_domain_name default
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini apic_aim_auth project_domain_name default
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini apic_aim_auth project_name admin
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini ml2_apic_aim enable_optimized_metadata True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini ml2_apic_aim enable_optimized_dhcp True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini ml2_apic_aim enable_keystone_notification_purge True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini group_policy policy_drivers aim_mapping
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini group_policy extension_drivers aim_extension,proxy_group,apic_allowed_vm_name,apic_segmentation_label

Modify the AIM config:

Note: please change the APIC host IP, username and password accordingly.

openstack-config --set /etc/aim/aim.conf DEFAULT debug True
openstack-config --set /etc/aim/aim.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/aim/aim.conf DEFAULT control_exchange neutron
openstack-config --set /etc/aim/aim.conf DEFAULT default_log_levels neutron.context=ERROR
openstack-config --set /etc/aim/aim.conf oslo_messaging_rabbit rabbit_host $(hostname -i | awk '{print $1}')
openstack-config --set /etc/aim/aim.conf oslo_messaging_rabbit rabbit_port 5672
openstack-config --set /etc/aim/aim.conf oslo_messaging_rabbit rabbit_hosts $(hostname -i | awk '{print $1}'):5672
openstack-config --set /etc/aim/aim.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/aim/aim.conf oslo_messaging_rabbit rabbit_userid guest
openstack-config --set /etc/aim/aim.conf oslo_messaging_rabbit rabbit_password guest
openstack-config --set /etc/aim/aim.conf oslo_messaging_rabbit rabbit_ha_queues False
openstack-config --set /etc/aim/aim.conf database connection $(egrep "^connection" /etc/neutron/neutron.conf | sed 's/.*=//')
openstack-config --set /etc/aim/aim.conf aim agent_down_time 75
openstack-config --set /etc/aim/aim.conf aim poll_config False
openstack-config --set /etc/aim/aim.conf aim max_operation_retry 5
openstack-config --set /etc/aim/aim.conf aim aim_system_id DMZ-Packstack
openstack-config --set /etc/aim/aim.conf apic apic_hosts 10.0.0.58
openstack-config --set /etc/aim/aim.conf apic apic_username soumukhe
openstack-config --set /etc/aim/aim.conf apic apic_password soumukhe_APIC_Password
openstack-config --set /etc/aim/aim.conf apic apic_use_ssl True
openstack-config --set /etc/aim/aimctl.conf DEFAULT apic_system_id DMZ-Packstack
openstack-config --set /etc/aim/aimctl.conf apic apic_entity_profile packstack
openstack-config --set /etc/aim/aimctl.conf apic apic_provision_hostlinks False
openstack-config --set /etc/aim/aimctl.conf apic apic_provision_infra False
openstack-config --set /etc/aim/aimctl.conf apic scope_infra True
openstack-config --set /etc/aim/aimctl.conf apic_vmdom:DMZ-Packstack encap_mode vxlan

Now, vi /etc/aim/aimctl.conf and put in the actual connectivity (host links) from the compute nodes to the leaf ports.

Figure 19: /etc/aim/aimctl.conf connection information
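I have not reproduced the exact file contents here; as a rough sketch only, assuming the [apic_switch:<leaf-id>] section format from Cisco's plugin configuration examples and the leaf/port layout of my physical topology (treat the figure above and the Cisco documentation as the authority):

[apic_switch:101]
rh-ps-os-control-0 = 1/19                       # hypervisor hostname(s) = module/port on that leaf
[apic_switch:102]
rh-ps-os-compute-0,rh-ps-os-compute-1 = 1/20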

On the Controller, initialize AIM and start the processes:

aimctl db-migration upgrade head
aimctl config update
aimctl infra create
aimctl manager load-domains --enforce
systemctl start aim-aid
systemctl start aim-event-service-polling
systemctl start aim-event-service-rpc
systemctl enable aim-aid
systemctl enable aim-event-service-polling
systemctl enable aim-event-service-rpc

Verify that the AIM processes are running:

systemctl status aim-* | grep -B 2 Active:
Figure 20: output of systemctl aim-* (viewing from controller)

The VMM domain should now show up in ACI

Figure 21: VMM Domain should show up

When you double-click and go in, you will probably notice that the VMM AEP attachment is not showing up. This might be a bug; however, remember this is the packstack-based install and is not officially supported. In the Red Hat director-based install (Part 3), this association shows up properly.

Figure 22: AEP might not be associated with VMM Domain

If the VMM domain does not show as associated, go in the APIC UI to Fabric / Access Policies / Policies / Global / Attachable Access Entity Profiles / packstack and add the VMM domain association manually, as shown below.

Figure 23: Adding in the AEP to the VMM domain manually

Copy the updated Neutron auth policy:

/bin/cp -rf /etc/group-based-policy/policy.d/policy.json /etc/neutron/policy.json

Update the Neutron daemon definition and restart services:

sed -i '/^ExecStart=/ s/$/ --config-file \/etc\/neutron\/plugins\/ml2\/ml2_conf_cisco_apic\.ini/' /usr/lib/systemd/system/neutron-server.service
systemctl daemon-reload
systemctl restart neutron-server
systemctl restart neutron-dhcp-agent

The Controller part of the integration is now complete. Next we need to do the compute part of the ACI integration. Remember that in this packstack setup the Controller is also a Compute, so you will have to do the below configuration on all the nodes, i.e., the Controller and the 2 Computes.

systemctl stop neutron-openvswitch-agent
systemctl disable neutron-openvswitch-agent
systemctl mask neutron-openvswitch-agent

yum -y install agent-ovs neutron-opflex-agent

For the below, make sure to change the value of the "name" key in the JSON based on the node you are working on.

For instance, for my naming structure:

  • Controller:    “name”: “rh-ps-os-control-0”
  • Compute-0:    “name”: “rh-ps-os-compute-0”
  • Compute-1:    “name”: “rh-ps-os-compute-1”

Copy the below 2 snippets onto each node after changing the value of the "name" key. Each snippet is written as a heredoc into its own file under /etc/opflex-agent-ovs/conf.d/ (the OpFlex agent reads the .conf files in that directory); the filename used for the first snippet below is illustrative.

cat << EOF > /etc/opflex-agent-ovs/conf.d/10-opflex-connection.conf   # filename is illustrative
{
    "opflex": {
        "domain": "comp/prov-OpenStack/ctrlr-[DMZ-Packstack]-DMZ-Packstack/sw-InsiemeLSOid",
        "name": "rh-ps-os-control-0",
        "peers": [
            {"hostname": "10.8.0.30", "port": "8009"}
        ],
        "ssl": {
            "mode": "encrypted"
        }
    }
}
EOF


cat << EOF > /etc/opflex-agent-ovs/conf.d/20-vxlan-aci-renderer.conf
{
    "renderers": {
        "stitched-mode": {
            "int-bridge-name": "br-fabric",
            "access-bridge-name": "br-int",
            "encap": {
                "vxlan": {
                    "encap-iface": "br-fab_vxlan0",
                    "uplink-iface": "ens224.3967",
                    "uplink-vlan": 3967,
                    "remote-ip": "10.8.0.32",
                    "remote-port": 8472
                }
            },
            "flowid-cache-dir": "/var/lib/opflex-agent-ovs/ids"
        }
    }
}
EOF

After completing the above step do the below on each of the nodes:

ovs-vsctl add-br br-fabric
ovs-vsctl add-port br-fabric br-fab_vxlan0 -- set Interface br-fab_vxlan0 type=vxlan options:remote_ip=flow options:key=flow options:dst_port=8472
ovs-vsctl set bridge br-int protocols=[]


systemctl restart opflex-agent
systemctl restart neutron-opflex-agent
systemctl enable opflex-agent
systemctl enable neutron-opflex-agent
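A quick sanity check on each node before moving on:

ovs-vsctl show                                           # br-fabric with port br-fab_vxlan0 should now exist alongside br-int
systemctl status opflex-agent neutron-opflex-agent | grep -B 2 Active: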

Verify from APIC that all the nodes show up in the VMM Domain

Figure 24: Verifying from APIC UI that the VMM domain shows all nodes

Next, on each node, modify the iptables rules to allow UDP 8472 (VXLAN) traffic to be accepted.

To do this vi /etc/sysconfig/iptables and add the entry:

-A INPUT -p udp -m multiport --dports 8472 -m comment --comment "vxlan" -m state --state NEW -j ACCEPT

As an example on my compute-0 node, the added entry shows as follows:

Figure 25: Permitting udp 8472 (vxlan) on all nodes

Follow this by restarting iptables:

systemctl restart iptables

The Last Step is to clean up all the older Neutron and OVS configuration:

ovs-vsctl del-port patch-tun
ovs-vsctl del-port patch-int
ovs-vsctl del-br br-tun
. /root/keystonerc_admin
for agent in $(neutron agent-list | grep -E "neutron-openvswitch-agent|neutron-l3-agent" | awk -F '|' '{print $2}'); do neutron agent-delete $agent; done
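After the cleanup, you can confirm that the deleted openvswitch/L3 agents are gone and only the remaining agents (e.g., DHCP and metadata) are listed:

. /root/keystonerc_admin
neutron agent-list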

You should be all done !

Verify from UI that the VMM domain shows no faults:

Figure 26: VMM domain shows no faults from APIC

In the PackStack install, before accessing the Horizon dashboard, you will need to make one quick modification.

On Controller:

vi /etc/httpd/conf.d/15-horizon_vhost.conf

add the line:  ServerAlias *
Figure 27: adding ServerAlias * to /etc/httpd/conf.d/15-horizon_vhost.conf

Follow this by restarting httpd service

systemctl restart httpd

Now try to Access Horizon Dashboard. 

While accessing it, you can also check the logs with the following command (this will show you connection failures if you have any):

tail -f /var/log/httpd/access_log       # do this on the controller.  Remember Horizon is running there

Figure 28: Checking access logs in case you have problems accessing Horizon: on Controller: tail -f /var/log/httpd/access_log

On the Controller you should now have a keystonerc_admin file. Look at the file for the controller's IP (OS_AUTH_URL) and point your browser to that IP.

In my case: http://10.0.150.31

Figure 29. Looking at the keystonerc_admin file

When you point your browser to the IP, if you did the install with the Red Hat registered CentOS7, you will see the screen below.

Figure 30: Horizon Dashboard for Red Hat based PackStack Install

When you point your browser to the IP, if you did the install with opensource CentOS7, you will see the screen below.

Figure 30: Horizon Dashboard for opensource PackStack Install

Log in with the username admin and the password shown in the keystonerc_admin file. Note that you specified this password earlier in the answer file (CONFIG_KEYSTONE_ADMIN_PW) before running the packstack install.

Figure 31. Looking at the computes from Horizon Dashboard

You can also check from the CLI. Remember that for the PackStack install, your credentials environment file lives on the Controller and is named keystonerc_admin.

(You won't need your install server anymore unless you want to run the packstack install again.)

Source the file and run openstack hypervisor list, as shown below.
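On the Controller:

. /root/keystonerc_admin
openstack hypervisor list     # all 3 nodes should be listed (the Controller also acts as a Compute here)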

Figure 32. looking at hypervisors from CLI (from Controller)

For troubleshooting, look at the /var/log directory on the Controller and the Computes.

Figure 33: contents of /var/log (controller and computes)

This Completes Part 2 of this writeup.

References:

If using RedHat registered ISO, you can follow this link for the initial instructions on registering, etc.

https://access.redhat.com/articles/1127153

Cisco ACI Unified Plug-in for OpenStack Architectural Overview

Cisco-ACI-Plug-in-for-OpenStack-Architectural-Overview.pdf

Cisco ACI Virtualization Compatibility Matrix

https://www.cisco.com/c/dam/en/us/td/docs/Website/datacenter/aci/virtualization/matrix/virtmatrix.html

