Contributors: Soumitra Mukherji and Alec Chamberlain
With Expert Guidance from: Filip Wardzichowski
If you followed Parts 1 through 3, you should have an OpenStack / ACI integrated fabric ready to use.
- Part 1: general discussion of OpenStack and ACI integration
- Part 2: showed how to do the integration using Packstack
- Part 3: showed how to do the integration using Red Hat's director-based install
In this Part 4, we will explore building an OpenStack Project (a Tenant in ACI): creating networks, routers, NAT and floating IPs, building VMs, building virtual disks for VM use, and so on. At each step of the way we will show you what happens on the ACI side.
In the guide below, I will show the usage from Red Hat's director-based install. If you are using the Packstack-installed integration, it is very similar, except that you execute all commands from the controller. The default credentials file that you have to source in the Packstack case is "keystonerc".
Though you can do almost all configuration from the Horizon UI, I will show here how to do it from the CLI, because that's what most developers use, and because it makes it easy to write automation scripts for full infrastructure deployment, which speeds things up tremendously.
We'll have 3 sections in this writeup:
1) Creating a Project (Tenant in ACI) and all associated objects like networks, routers, SNAT pool, and Floating IP pool; checking endpoints and viewing what happens on the ACI side
2) Creating flavors and instances, creating and attaching floating IPs and security rules and viewing the rules from ACI (as Host Protection Profiles), attaching to servers using virsh, ssh to the floating IP from an external device, ssh using ip netns exec (Linux namespaces), viewing endpoints
3) Working with Cinder Volumes (block storage)
The Horizon UI is shown below. You can log into it with admin credentials and the password that you set during deployment.

You can see the default admin and service projects that it creates as shown below:

From APIC you can see that the VMM domain got created

You can also see that I have 1 Controller and 2 Computes

Everything is clean, i.e. no faults

1) Creating Project (Tenant in ACI) / and all associated objects like networks, routers, SNAT pool, Floating IP pool, checking endpoints and viewing what happens in ACI
After the initial OpenStack / ACI integration is completed, if you look at Tenants you will see that a VRF called "CommonUnroutedVRF" has appeared in the common Tenant. Nothing else has happened yet: no BDs, EPGs, etc.

Since we ran the overcloud deployment from the ~/templates directory, it created the credentials file "overcloudrc" in the templates directory. Copy that to the stack user's home directory (on the director node), then go to the home directory and source it.
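A minimal sketch of those two steps (assuming the stack user's home directory and the ~/templates path mentioned above):
cp ~/templates/overcloudrc ~/
cd ~
source overcloudrc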

1. Check from the command line what the default projects are
openstack project list

2. Let's create a new Project
openstack project create soumukhe-openstack

3. Create roles for the admin user on the project
openstack role add --project soumukhe-openstack --user admin admin
openstack role add --project soumukhe-openstack --user admin _member_

4. Copy the overcloudrc file to a file with a name that makes it evident which file you have sourced. In my case, I copied it to "overcloudrc-Project-soumukhe". Then vi the new file and add a last line to it, so that when you source it the prompt indicates where you are; I've used the line shown below. Also, make sure to change the OS_PROJECT_NAME variable to the new project name.
Note: There are a lot of role assignments that you can do, and you can create new users with new roles. In the example below, I'm still going to use the admin user, but set the working project context to the correct one. That way, I don't have to add a "--project" option to each command.
export PS1='[\u@\h \W(overcloud_admin_soumukhe)]\$ '
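Putting step 4 together, a sketch of the copy and edits (the sed pattern assumes the file contains an "export OS_PROJECT_NAME=" line, as a stock overcloudrc does; adjust the names to your project):
cp ~/overcloudrc ~/overcloudrc-Project-soumukhe
sed -i 's/^export OS_PROJECT_NAME=.*/export OS_PROJECT_NAME=soumukhe-openstack/' ~/overcloudrc-Project-soumukhe
# append the PS1 line shown above as the last line of the new file (with vi), then:
source ~/overcloudrc-Project-soumukhe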

Now when you source the new credentials file, it will be evident which context you are in

5. Create External Network
First find the dn (distinguished name) of the external EPG (the one that was created manually in ACI). You can do this from the APIC UI by downloading the object for the external EPG, from the API (Visore), or with aimctl.
a) finding DN from the downloaded object from APIC UI

b) finding dn from Visore

c) Finding the DN from "aimctl" commands. This option is good for developers because they will not have access to the APIC. The commands to use are shown below:
aimctl manager external-network-list
aimctl manager external-network-get <tenantofL3> <L3OutName> <L3OutEPGName>
- In the case of a director-based install, this command needs to be executed in the aim container, since all services are containerized.
- In the case of Packstack you can execute the command directly on the controller.
First, ssh in to the controller (from the director node) using the heat-admin username. Note that the figure below shows that we are using the name of the controller instead of the IP. That is because in the director install we set up our own local DNS; Part 3 showed you how to do that. If you did not set up the local DNS server, obtain the IP from the following command (you need to source the stackrc file first):
openstack server list
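For reference, the ssh itself looks like this (the hostname is an assumption based on the default director naming; substitute the IP from the command above if you did not set up DNS):
ssh heat-admin@overcloud-controller-0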


To find the docker container id and name, use the below command:
docker ps | grep aim | awk '{print $1, $12}'

Running the aim commands against the aim container
docker exec e9b615d9d886 aimctl manager external-network-list
docker exec e9b615d9d886 aimctl manager external-network-get common RH-Director-L3Out RH-DirectorInstP

6. Now create the external Network
neutron net-create external-net \
--router:external \
--apic:distinguished_names type=dict ExternalNetwork=uni/tn-common/out-L3OutOpenstack/instP-L3OutOS-EPG

Observe from APIC UI: New BD will get deployed in Common Tenant.

Also observe that the BD belongs to "RH-Director", the original VRF that we had manually created during the infra VLAN deployment

This BD is also Associated with the L3Out
Unicast routing is enabled; however, there is no IP defined yet for the BD

New AppProfile and EPG also came in:

EPG is associated with the new Bridge Domain

Also a new Contract got associated with the EPG

Looking at Contracts, the contract is also associated with the L3Out external EPG

Contract has allow all filters

A user Tenant came in, but no VRF/BD/EPG/L3Out has been created in the user Tenant. An App Profile shell has been created with no EPG in it. Use the CLI to verify that the Project ID is the same as what the ACI Tenant shows

7. Create a subnet in the external Network (for SNAT)
neutron subnet-create external-net 100.100.161.0/27 --name external-sub-snat --disable-dhcp --apic:snat_host_pool True
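To double-check what was just created (a quick check using the subnet name from the command above; the apic:snat_host_pool flag should show up among the fields):
neutron subnet-show external-sub-snat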

Common Tenant now shows the BD IP come in:

No changes have been made on the actual ACI Tenant (OpenStack Project)

8. Now create your private Network (where VM will reside)
openstack network create myNetwork

The user Tenant now has both an EPG and a BD. Note that the BD belongs to the VRF called UnroutedVRF in the common Tenant; there is no VRF in the user Tenant as of now

The BD in User Tenant is mapped to the Common Tenant Unrouted VRF

9. Attach a subnet to your Private Network
openstack subnet create subnet1 --subnet-range 100.11.0.0/24 --dns-nameserver 8.8.8.8 --network myNetwork

At this point you will notice that the subnet has not shown up yet in the User Tenant BD

10. Now create a neutron Router called R2 and attach the Private Network to it
openstack router create R2

openstack router add subnet R2 subnet1

Notice that the subnet now shows up in the Tenant BD; a new VRF, "DefaultRoutedVRF", also showed up

Advertise Externally Flag is also set

Tenant EPG also has the VMM Domain assigned to it and contracts from common Tenant


The contract is in the common Tenant. You can see from the figure below how the contract got associated from the user EPG to the L3Out EPG in user space

The user L3Out is a dummy L3Out. There is no Logical Node Profile / Interface Profile defined for this L3Out

11. Also attach the router to the external network
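The CLI command is not shown in the capture below; one way to do it, assuming the router and network names used so far, is:
openstack router set --external-gateway external-net R2
(or, with the neutron client: neutron router-gateway-set R2 external-net)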

12. Create a Floating IP Pool in the external-net for connectivity from outside to the Instance
neutron subnet-create external-net 100.100.161.224/27 \
--name fip-subnet \
--allocation-pool start=100.100.161.226,end=100.100.161.254 \
--disable-dhcp \
--gateway 100.100.161.225

In ACI, notice that the SNAT pool gateway is added as a secondary IP on the BD in the common Tenant, and the Advertise Externally flag is turned on

Also notice that the BD has the L3Out tied to it, so both the SNAT IPs and the floating (NAT) IPs will be advertised out through the L3Out

The Infrastructure is ready. Now let’s create some instances.
2) Creating flavors and instances, creating and attaching floating IPs and security rules, attaching to servers using virsh, ssh to the floating IP, ip netns, viewing endpoints
1. Download the CirrOS image and upload it to Glance
For CirrOS to work properly you need to download it as follows:
curl --compressed -J -L -O http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
Note: For image download in qcow format look at:
https://docs.openstack.org/image-guide/obtain-images.html

upload the image to Glance
openstack image create --min-disk 1 --disk-format qcow2 --file ./cirros-0.5.1-x86_64-disk.img cirros1
openstack image show cirros1

2. Create a flavor for compute instance
openstack flavor create --disk 1 --ram 768 --id 5 m1.tiny1

3. Create instance in your private network
First look at the network created to find out the ID of the private network
openstack network list

Now create the instance on that Network. Name the Instance smcirros_inst1, use flavor 5 (created earlier) and use the cirros1 image uploaded to glance earlier
openstack server create --image cirros1 --flavor 5 --nic net-id=4594595c-2c37-4df8-8495-f12db3777c38 smcirros_inst1

ensure Server is active
openstack server list

See details about the server (instance). Look to see which hypervisor the server has been spun up on
openstack server show smcirros_inst1

Let's spin up another instance and see where that falls.
openstack server create --image cirros1 --flavor 5 --nic net-id=4594595c-2c37-4df8-8495-f12db3777c38 cirros2

List the instances
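That is the same command as before:
openstack server list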

See details of Instance 2. Notice that this instance fell on compute-1
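The corresponding command, using the name given to the second instance above:
openstack server show cirros2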

4. Creating and attaching Floating IP
By default, instances can reach the outside. However, if you want to reach an instance from the outside (for ssh, a web server, a db, etc.), you will need to create a 1:1 NAT (floating IP) for it.
First look at subnets and Networks you have defined
openstack network list
openstack subnet list

Now create the floating IP. Remember you created a floating IP subnet earlier and attached it to the external network; now we will assign IPs from that pool.
Note: you can create as many floating IPs as you wish
The format is: openstack floating ip create --subnet <floating-IP subnet pool created earlier> <external network the floating IP pool uses>
openstack floating ip create --subnet 7f483c3b-7c0a-4631-9008-e3d9bebd5a64 b3b892f3-13a8-4aac-8d15-b21e0c9b4414

List the floating IPs
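That is:
openstack floating ip list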

attach the floating ip to instance 1 (smcirros_inst1)
openstack server add floating ip smcirros_inst1 100.100.161.229

Now if you look at the floating IP list, you will see that it is attached to the fixed IP of instance 1
openstack floating ip list
openstack server list

So you can see that we have one instance attached to a floating IP and the other is not. This is typical in data center design: perhaps instance 1 is the web interface whereas instance 2 hosts a db that the web server communicates with internally
5. Create Security Rules
By default instances can talk to the outside with SNAT, but for outside devices to talk to the inside you also need to add security rules. Let's allow icmp and ssh.
First find the project and the security group ID associated with that project
openstack project list
openstack security group list

Now Create security rule for icmp and ssh
openstack security group rule create --remote-ip 0.0.0.0/0 --protocol icmp --ingress e13f5e42-3945-4245-98c8-fccffd9a7f8c
openstack security group rule create --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 22 --ingress e13f5e42-3945-4245-98c8-fccffd9a7f8c

Now, list the security rules
openstack security group rule list

You can verify these security rules from the APIC. They are called HPPs in the APIC; HPP stands for Host Protection Policies. You will find them in your user tenant under Tenant/Policies/Host Protection.

Look at the figure below for our 2 rules that we added

6. Log into the server
There are many ways to get into the server:
- using virsh console access
- using ip netns exec ssh access
- targeting the floating IP from outside
Let's go in with the virsh console first.
What is virsh?
As mentioned in Part 1 of this writeup, OpenStack does not define its own hypervisors but uses pre-existing hypervisors like qemu (type 2 – userspace) and kvm (type 1 – kernel space). Libvirt is the library used to manage qemu and kvm.
Virsh is a command-line utility that is used to communicate with libvirt.

source: https://www.youtube.com/watch?v=HfNKpT2jo7U
Continuing with using virsh to console in to the instance…
Since this instance fell on compute-0, let's first ssh to compute-0 from the director.
Note: since I've installed a local DNS (shown how to in Part 3), I can resolve compute-0 by name. If you did not do that, then use the IP. To find the IP, source the stackrc file and then run "openstack server list".
ssh heat-admin@overcloud-compute-0
sudo -i # make sure to go in as root

find out the virsh console ID and then console in
virsh list
virsh console <console_id>

If you happen to do this fast enough, before the instance is fully booted, you will see it boot. (Notice you can also see logs, etc., which I won't show here.)

Ping google.com from the instance. (Note that I've configured NAT on the external ASA to route the external network range 100.100.161.0/27 via SNAT. Remember we defined that subnet earlier and tied it to the external network.)

Using the floating IP to access the instance
Now let's try to go in via the floating IP. In our setup only instance 1 is attached to a floating IP. Check with:
openstack server list
openstack floating ip list

We see from the above that floating IP 100.100.161.229 is associated with smcirros_inst1. From another VM outside, in my case a jumpbox, I will try to ping and ssh to it
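From the jumpbox that amounts to something like this (a sketch; the default CirrOS user is cirros, and you will be prompted for its password unless you injected a key):
ping 100.100.161.229
ssh cirros@100.100.161.229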

Using netns to access the instance
Namespaces and cgroups are two of the main kernel technologies most of the new trend on software containerization (think Docker) rides on. To put it simple, cgroups are a metering and limiting mechanism, they control how much of a system resource (CPU, memory) you can use. On the other hand, namespaces limit what you can see. Thanks to namespaces processes have their own view of the system’s resources.
source: https://blogs.igalia.com/dpino/2016/04/10/network-namespaces/
Continuing with netns to access the instance…
First, ssh to the controller
openstack server list # from director
ssh heat-admin@controller_ip

Next, look for the namespaces. Look for the dhcp namespace
ip netns

now look at the interfaces defined in that namespace
ip netns exec qdhcp-4594595c-2c37-4df8-8495-f12db3777c38 ip a
You will notice that the private IP subnet we had defined for our instances is connected via a tap interface.

now ssh in to an instance using that namespace
openstack server list # to get the private ip
ip netns exec <namespace_id> ssh cirros@<private_ip>

7. Viewing Endpoints
Endpoints can be viewed through APIC just like we normally do. Also, notice that connectivity is through vxlan

From the OpenStack side, you can ssh to a compute node and go to /var/lib/opflex/files/endpoints to view the endpoints that compute node is hosting.
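For example, on the compute node (a sketch using the directory quoted above; the exact path can vary between plugin versions):
ls /var/lib/opflex/files/endpoints
cat /var/lib/opflex/files/endpoints/<one_of_the_files>   # JSON describing the endpoint: MAC, IP, EPG, interface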

3) Working with Cinder Volumes (block storage)
Creating and attaching volumes
Until now, when we've spun up instances, we've used the built-in ephemeral disks. Ephemeral disks have the lifetime of the VM: when the VM is destroyed, they go away. If you want to create permanent disks, you can create volumes.
In this example we will do the following:
- create a block volume (cinder)
- attach the volume to instance 1 and mount it
- write a file to it
- detach the volume from instance 1
- attach the volume to instance 2 and mount it
- view the file that we had written to that volume when it was attached to instance 1
first let’s make sure cinder services are up and running properly
cinder service-list

next check to see what volumes are there. In our case we don’t have any, because we did not create any
openstack volume list

Create a volume of 1GB
openstack volume create --size 1 vol1

list volumes again
openstack volume list

get a list of instances
openstack server list

Log into instance 1 (ssh, virsh, or netns)
Then show the current mounts. You will see that the device present is vda
mount # check for mount devices

now from director, add the volume
openstack server add volume smcirros_inst1 vol1
openstack volume list # check what device name is used to attach. Below it shows /dev/vdb

issue the mount command again to see the new device
ls /dev/vd* # verify that vdb is there inside the instance
mount # to verify that it's not really mounted

make the file system
sudo mkfs.ext3 /dev/vdb

Mount the file system
sudo mkdir /mydisk
sudo mount /dev/vdb /mydisk

mount now shows the disk mounted
mount

Currently the filesystem is owned by root, so you won't have permission to write to it. Change the group and owner to the cirros user
sudo chgrp -R cirros /mydisk
sudo chown -R cirros /mydisk

Now write a simple file in that file system
echo hello > /mydisk/hello.txt

Now unmount the disk inside the instance and verify that it's gone
sudo umount /dev/vdb
mount # to verify that the device is unmounted

Now follow the steps below to disconnect the volume from instance 1 and attach the volume to instance 2
openstack server remove volume smcirros_inst1 vol1 # on director
openstack server add volume cirros2 vol1 # on director
ip netns exec qdhcp-4594595c-2c37-4df8-8495-f12db3777c38 ssh cirros@100.11.0.35 # execute this from controller to attach to instance 2. got ip from output of openstack server list.
sudo mkdir /mydisk # inside instance 2
sudo mount /dev/vdb /mydisk # inside instance 2
ls -l /mydisk/ # inside instance 2
cat /mydisk/hello.txt # inside instance 2
We can now read that file on instance 2

Conclusion:
By following these writeups (Parts 1 through 4), I believe you will get a good head start on installing and using ACI-integrated OpenStack.
Of course, I want to point out that OpenStack is very extensive and you can go very in-depth into it. You can go into the architecture of how the internal bridges br-int and br-fabric are instantiated, how packets flow (data plane and control plane), and the interaction between the different components, including aim, opflex, neutron, nova, etc.
For troubleshooting you would need to be familiar with tools such as ovs-vsctl, ovs-dpctl, and ovs-ofctl, and be able to map ports, capture and trace packets, etc.
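As a starting point, a couple of those commands run on a compute node (just a sketch; the bridge names are the ones mentioned above):
sudo ovs-vsctl show                              # list the OVS bridges (br-int, br-fabric) and their ports
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int   # dump the flow rules on the integration bridge (OpFlex-managed bridges typically speak OpenFlow 1.3, hence the -O flag)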
If you use it over the course of time you will get very familiar with it.
From the end user's and developer's perspective, using this is pretty intuitive, as you've seen in Part 4. From the network operator's perspective, they have a complete view of what the developer is building and doing. From the developer's perspective, they don't have to deal with the network folks; they are in control (or so they think 🙂).