OpenStack with ACI Integration – Part 4 (Using ACI-Integrated OpenStack)

Contributors:  Soumitra Mukherji and Alec Chamberlain
With Expert Guidance from: Filip Wardzichowski

If you followed Parts 1 through 3, you should have an OpenStack/ACI-integrated fabric ready to use.

  • Part 1: general discussion of OpenStack and ACI integration
  • Part 2: how to do the integration using Packstack
  • Part 3: how to do the integration using Red Hat's director-based install

In this Part 4, we will explore building an OpenStack project (a tenant in ACI): creating networks, routers, NAT and floating IPs, spinning up VMs, building virtual disks for the VMs to use, and so on. At each step of the way we will show you what happens on the ACI side.

In the guide below, I will show the workflow on Red Hat's director-based install. If you are using the Packstack-installed integration, it is very similar, except that you execute all commands from the controller. The default credentials file that you have to source in the Packstack case is “keystonerc”.

Though you can do almost all of this configuration from the Horizon UI, I will show how to do it from the CLI, because that is what most developers use, and it is easy to turn the CLI steps into automation scripts for full infrastructure deployment, which speeds things up tremendously.
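For example, once you are familiar with the steps below, they can be chained into a small script (a minimal sketch; the credentials file, network, subnet, and router names are the ones used later in this walkthrough):

#!/bin/bash
set -e
source ~/overcloudrc                # overcloud credentials created during deployment
openstack network create myNetwork  # private network for the instances
openstack subnet create subnet1 --subnet-range 100.11.0.0/24 --dns-nameserver 8.8.8.8 --network myNetwork
openstack router create R2
openstack router add subnet R2 subnet1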

We'll have three sections in this writeup:

1) Creating a project (tenant in ACI) and all associated objects like networks, routers, SNAT pool, and floating IP pool; checking endpoints; and viewing what happens in ACI

2) Creating flavors and instances; creating and attaching floating IPs and security rules and viewing the rules in ACI (as Host Protection Policies); attaching to servers using virsh; ssh to the floating IP from an external device; ssh using ip netns exec (Linux namespaces); viewing endpoints

3) Working with Cinder Volumes (block storage)

The Horizon UI is shown below.  You can log into it with admin credentials and the password that you set during deployment.

Figure 1: Horizon UI

You can see the default admin and service projects that it creates as shown below:

Figure 2. Default admin and service projects

From APIC you can see that the VMM domain got created

Figure 3. Openstack Director VMM domain

You can also see that I have 1 Controller and 2 Computes

Figure 4. 1 Controller and 2 Computes

Everything is clean, i.e., there are no faults.

Figure 5. No Faults

1) Creating a project (tenant in ACI) and all associated objects like networks, routers, SNAT pool, and floating IP pool; checking endpoints; and viewing what happens in ACI

After the initial OpenStack/ACI integration is completed, if you look at Tenants you will see that a VRF called “CommonUnroutedVRF” has come in under the common tenant. Nothing else has happened yet: no BDs, EPGs, etc.

Figure 6. New VRF in the common Tenant

Since we ran the overcloud deployment from the ~/templates directory, it created the credentials file “overcloudrc” in the templates directory. Copy that to the stack user's home directory (on the director node), then go to the home directory and source it.
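For example (run as the stack user on the director node):

cp ~/templates/overcloudrc ~/
cd ~
source overcloudrc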

Figure 7. Sourcing the credentials file
1. Check from the command line what the default projects are:

openstack project list
Figure 7a. Viewing the default projects

2. Let's create a new project

openstack project create soumukhe-openstack
Figure 8. Creating a new project

3. Create roles for the admin user on the new project

openstack role add --project soumukhe-openstack --user admin admin

openstack role add --project soumukhe-openstack --user admin _member_
Figure 9. Creating Role for new project

4. Copy the overcloudrc file to a new file with a name that makes it evident which project you are in when you source it. In my case, I copied it to “overcloudrc-Project-soumukhe”. Then vi the new file: change the OS_PROJECT_NAME variable to the new project name, and add the line below as the last line so that when you source the file the prompt indicates where you are.

Note: there are a lot of role assignments that you can do, and you can create new users with new roles. In the example below, I'm still going to use the admin user, but set the working project context to the correct one. That way, I don't have to add “--project” to each command.

export PS1='[\u@\h \W(overcloud_admin_soumukhe)]\$ '
Figure 10: Copying and creating the new credentials file, so you are in the correct project
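A minimal sketch of that copy-and-edit, assuming the file and project names used in this example and that the file contains a line of the form “export OS_PROJECT_NAME=...”:

cp ~/overcloudrc ~/overcloudrc-Project-soumukhe
# point the credentials at the new project
sed -i 's/^export OS_PROJECT_NAME=.*/export OS_PROJECT_NAME=soumukhe-openstack/' ~/overcloudrc-Project-soumukhe
# append the prompt line as the last line of the file
cat >> ~/overcloudrc-Project-soumukhe <<'EOF'
export PS1='[\u@\h \W(overcloud_admin_soumukhe)]\$ '
EOF
source ~/overcloudrc-Project-soumukhe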

Now when you source the new credentials file, it will be evident which context you are in.

Figure 11. Sourcing the new credentials file

5. Create the external network

First find the DN (distinguished name) of the external EPG (which was created manually in ACI). You can do this from the APIC UI by downloading the object for the external EPG, from the API (Visore), or with aimctl.

a) Finding the DN from the object downloaded from the APIC UI

Figure 12. Finding DN from the downloaded code

b) Finding the DN from Visore

Figure 13. Finding DN from Visore

c) Finding the DN from “aimctl” commands. This option is useful for developers because they typically will not have access to the APIC. The commands to use are shown below:

aimctl manager external-network-list 
aimctl manager external-network-get <tenantofL3> <L3OutName> <L3OutEPGName>
  • In the case of the director-based install, these commands need to be executed in the aim container, since all services are containerized.
  • In the case of Packstack, you can execute the commands directly on the controller.

First, ssh to the controller (from the director node) using the heat-admin username. Note that the figure below shows us using the name of the controller instead of the IP; that is because in the director-based install we set up our own local DNS (Part 3 showed how to do that). If you did not set up the local DNS server, obtain the IP with the following command (you need to source the stackrc file first):

openstack server list
Figure 14. Getting IPs of the nodes after sourcing stackrc
Figure 15. ssh to controller with heat-admin

To find the docker container id and name, use the below command:

docker ps | grep aim | awk '{print $1, $12}'
Figure 16. Finding container that runs aim

Running the aim commands against the aim container

docker exec e9b615d9d886 aimctl manager external-network-list
docker exec e9b615d9d886 aimctl manager external-network-get common RH-Director-L3Out RH-DirectorInstP
Figure 17. Finding the DN using aim commands

6. Now create the external Network

neutron net-create external-net \
--router:external \
--apic:distinguished_names type=dict ExternalNetwork=uni/tn-common/out-L3OutOpenstack/instP-L3OutOS-EPG
Figure 18. Creating the external Network

Observe from the APIC UI that a new BD gets deployed in the common tenant.

Figure 19. Observe how the BD got deployed in the common tenant

Also observe that the BD belongs to “RH-Director”, the original VRF that we manually created during the infra VLAN deployment.

Figure 20. BD is associated with original VRF that we had manually created during install of OpenStack

This BD is also associated with the L3Out.

Unicast routing is enabled; however, there is no IP defined on the BD as of yet.

Figure 21. Unicast is enabled on the BD and it is associated with the original L3Out. There is no IP on the BD at this time

New AppProfile and EPG also came in:

Figure 22. New App Profile and EPG came in

EPG is associated with the new Bridge Domain

Figure 23. EPG belongs to the newly created BD

Also a new Contract got associated with the EPG

Figure 24. New Contract got associated with the EPG

Looking at Contracts, the contract is associated also with the L3Out External EPG

Figure 25. Contract’s other end is the L3Out EPG

Contract has allow all filters

Figure 26. Allow All filters for the contract

A user tenant came in, but no VRF, BD, EPG, or L3Out has been created in it. An Application Profile shell has been created with no EPG in it. Use the CLI to verify that the project ID is the same as what the ACI tenant shows.
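For example, from the CLI (project name from this example):

openstack project show soumukhe-openstack   # the id field should match what the ACI tenant shows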

Figure 27. A new Tenant also got created based on the Openstack created Project

7. Create a subnet in the external Network (for SNAT)

neutron subnet-create external-net 100.100.161.0/27  --name external-sub-snat --disable-dhcp   --apic:snat_host_pool True
Figure 28. Creating a subnet in the external Network

The common tenant now shows the BD IP has come in:

Figure 29. BD gateway got deployed in the common tenant when the subnet was associated with the external network

No changes have been made on the actual ACI user tenant (the OpenStack project).

Figure 30. User Tenant configuration has not changed at this time

8. Now create your private Network (where VM will reside)

openstack network create myNetwork
Figure 31. Creating Private Network, where Instances will be spun up

The user tenant now has both an EPG and a BD. Note that the BD belongs to the unrouted VRF in the common tenant; there is no VRF in the user tenant as of now.

Figure 32. After Private Network Creation, User Tenant in ACI got EPG and BD Spun up

The BD in User Tenant is mapped to the Common Tenant Unrouted VRF

Figure 33. User Tenant EPG belongs to UnroutedVRF in Common Tenant

9.  Attach a subnet to your Private Network

 openstack subnet create subnet1 --subnet-range 100.11.0.0/24 --dns-nameserver 8.8.8.8 --network myNetwork
Figure 34. Attaching Subnet to Private Network

At this point you will notice that the subnet has not shown up yet in the User Tenant BD

Figure 35. User Tenant BD still has no subnet (GW) defined

10. Now create a neutron Router called R2 and attach the Private Network to it

openstack router create R2
Figure 36. Creating the Router
openstack router add subnet R2 subnet1
Figure 37. Attaching the private subnet to the router

Notice that the subnet now shows up on the tenant BD, and a new VRF, “DefaultRoutedVRF”, also showed up.

Figure 38. BD Gateway came in when Private Network subnet got attached to Router

Advertise Externally Flag is also set

Figure 39. Advertise Externally Flag is set on Subnet

The tenant EPG also has the VMM domain assigned to it, along with contracts from the common tenant.

Figure 40. The OpenStack Domain is attached to User EPG
Figure 41. Contract Provider/Consumer got attached to EPG

The contract is in the common tenant. You can see from the figure below how the contract associates the user EPG with the L3Out EPG in the user space.

Figure 42. Contract (defined in common tenant) is applied to EPG in user tenant and L3Out EPG in user Tenant

The user L3Out is a dummy L3Out. There is no Logical Node Profile / Interface Profile defined for this L3Out.

Figure 43. The L3Out in user space is a dummy L3Out. No Node/Interface Profile is defined

11. Also attach the router to the external network
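The command is not shown in the capture below; assuming the router and external network names used in this example, it would be something like:

neutron router-gateway-set R2 external-net
# or, with the unified CLI:
openstack router set --external-gateway external-net R2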

Figure 44. Attaching Router to the external network

12. Create a Floating IP Pool in the external-net for connectivity from outside to the Instance

neutron subnet-create external-net   100.100.161.224/27  \
 --name fip-subnet    \
 --allocation-pool start=100.100.161.226,end=100.100.161.254  \
 --disable-dhcp    \
--gateway 100.100.161.225
Figure 45. Creating Floating IP Pool (for connectivity from outside to instance)

In ACI, notice that the gateway of this new subnet is added as a secondary IP on the BD in the common tenant, and the Advertise Externally flag is turned on.

Figure 46. Floating IP was assigned to BD in common Tenant as secondary IP

Also notice that the BD has the L3Out tied to it, so both the SNAT pool and the floating IP pool will be advertised out through the L3Out.

Figure 46a. Both the SNAT and floating IP pools are tied to the L3Out of the common tenant

The Infrastructure is ready.  Now let’s create some instances.

2) Creating flavors and instances; creating and attaching floating IPs and security rules; attaching to servers using virsh; ssh to the floating IP; ip netns; viewing endpoints

1. Download the CirrOS image and upload it to Glance

For CirrOS to work properly, download it as follows:

curl --compressed -J -L -O http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img

Note: For image download in qcow format look at:
https://docs.openstack.org/image-guide/obtain-images.html

Figure 47. Downloading qcow images

upload the image to Glance

openstack image create --min-disk 1 --disk-format qcow2 --file ./cirros-0.5.1-x86_64-disk.img cirros1
openstack image show cirros1
Figure 48. Uploading Image to Glance

2. Create a flavor for compute instance

openstack flavor create --disk 1 --ram 768 --id 5 m1.tiny1
Figure 49. Creating Flavor for Instance

3. Create an instance in your private network

First, list the networks to find the ID of the private network:

openstack network list
Figure 50. Getting Private Network ID

Now create the instance on that Network.  Name the Instance smcirros_inst1, use flavor 5 (created earlier) and use the cirros1 image uploaded to glance earlier

openstack server create --image cirros1 --flavor 5 --nic net-id=4594595c-2c37-4df8-8495-f12db3777c38  smcirros_inst1
Figure 51. Creating the instance

ensure Server is active

openstack server list
Figure 52. listing the servers

See details about the server (instance).  Look to see which hypervisor the server has been spun up on

openstack server show smcirros_inst1
Figure 53. Looking at details of instance

Let's spin up another instance and see where it lands.

openstack server create --image cirros1 --flavor 5 --nic net-id=4594595c-2c37-4df8-8495-f12db3777c38 cirros2
Figure 54. Spinning up instance 2

List the instances
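openstack server list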

Figure 55. Listing instances

See details of Instance 2.  Notice that this instance fell on compute-1
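openstack server show cirros2   # the second instance created above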

Figure 56. Instance 2 got spun up on compute-1

4. Creating and attaching a floating IP

By default, instances can reach the outside. However, if you want to reach an instance from the outside (for ssh, a web server, a database, etc.), you will need to create a 1:1 NAT (floating IP) for it.

First look at subnets and Networks you have defined

openstack network list
openstack subnet list
Figure 57. Checking the subnets and networks

Now create the floating IP. Remember you created a floating IP subnet earlier and attached it to the external network; now we will assign IPs from this pool.

Note: you can create as many floating IPs as you wish

The format is: openstack floating ip create --subnet <floating_ip_subnet_created_earlier> <external_network_that_the_floating_ip_pool_uses>

openstack floating ip create --subnet 7f483c3b-7c0a-4631-9008-e3d9bebd5a64 b3b892f3-13a8-4aac-8d15-b21e0c9b4414

Figure 58. Floating IP 100.100.161.229 got assigned

List the floating IPs
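openstack floating ip list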

Figure 59. Listing Floating IPs

attach the floating ip to instance 1 (smcirros_inst1)

openstack server add floating ip smcirros_inst1 100.100.161.229
Figure 60. Attaching Floating IP to an instance

Now if you look at the floating IP list, you will see that it is attached to the fixed IP of instance 1.

openstack floating ip list
openstack server list
Figure 61. Looking at floating ip attachment

So you can see that we have one instance attached to a floating IP while the other is not. This is typical in data center design: perhaps instance 1 is the web front end, whereas instance 2 hosts a database that the web server communicates with internally.

5. Create Security Rules

By default, instances can talk to the outside via SNAT, but for outside devices to talk to the inside you also need to add security rules. Let's allow ICMP and SSH.

first find the project and the security group id associated with that project

openstack project list
openstack security group list
Figure 62. Looking at Security Group ID associated with project

Now create security rules for ICMP and SSH:

openstack security group rule create --remote-ip 0.0.0.0/0 --protocol icmp  --ingress    e13f5e42-3945-4245-98c8-fccffd9a7f8c

openstack security group rule create --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 22  --ingress    e13f5e42-3945-4245-98c8-fccffd9a7f8c
Figure 64. Creating Security Rules (in default group for project)

Now, list the security rules

openstack security group rule list
Figure 65. Listing Security Group Rules

You can verify these security rules from the APIC; there they are called HPPs (Host Protection Policies). They will be found in your user tenant under Tenant/Policies/Host Protection.

Figure 66. Looking at Host Protection Policies on APIC

Look at the figure below for our 2 rules that we added

Figure 64. Looking at the HPP rules from APIC for the rules we added

6. Log into the server

There are many ways to get into the server:

  • using virsh console access
  • using ip netns exec ssh access
  • targeting the floating IP from outside

Let’s go in with virsh console first:


What is virsh?

As mentioned in Part 1 of this writeup, OpenStack does not define its own hypervisor but uses existing hypervisors such as QEMU (type 2, user space) and KVM (type 1, kernel space). Libvirt is the library used to manage QEMU and KVM.

virsh is a command-line utility used to communicate with libvirt.

Figure 64a. what is virsh

source: https://www.youtube.com/watch?v=HfNKpT2jo7U


continuing with using virsh to console in to instance…

Since this instance fell on compute-0, let’s first ssh to compute-0 from director. 

Note: since I've installed a local DNS server (Part 3 showed how), I can resolve compute-0 by name. If you did not do that, then use the IP. To find the IP, source stackrc and then run “openstack server list”.

ssh heat-admin@overcloud-compute-0
sudo -i # make sure to go in as root
Figure 65 ssh to compute-0 and then going in as root

Find the instance's ID with virsh and then console in:

virsh list
virsh console <console_id>
Figure 66. Console in to instance using virsh

If you happen to do this quickly enough, before the instance has fully booted, you will see it boot. (Notice you can also see logs, etc., which I won't show here.)

Figure 67. Watching Instance boot

Ping google.com from the instance. (Note that I've configured NAT on the external ASA to route the external network range 100.100.161.0/27 via SNAT; remember we defined that subnet earlier and tied it to the external network.)
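ping -c 3 google.com   # run from inside the instance console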

Figure 68. Pinging google.com from instance

Using the floating IP to access the instance

Now let's try to get in via the floating IP. In our setup, only instance 1 has a floating IP attached. Check with:

openstack server list
openstack floating ip list
Figure 69. Checking floating ip

We see from above that floating IP 100.100.161.229 is associated with smcirros_inst1.  From another vm outside, in my case a jumpbox, I will try to ping and ssh to it

Figure 70. Ping and ssh to instance 1 from outside using the floating IP attached to the instance

Using netns to access the instance


Namespaces and cgroups are two of the main kernel technologies most of the new trend on software containerization (think Docker) rides on. To put it simple, cgroups are a metering and limiting mechanism, they control how much of a system resource (CPU, memory) you can use. On the other hand, namespaces limit what you can see. Thanks to namespaces processes have their own view of the system’s resources.

source: https://blogs.igalia.com/dpino/2016/04/10/network-namespaces/
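As a quick illustration of that “own view” idea (a generic sketch, not specific to this deployment; requires root):

sudo ip netns add demo           # create a new, empty network namespace
sudo ip netns exec demo ip a     # inside it, only a down loopback interface is visible
sudo ip netns delete demo        # clean up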


continuing with netns to access instance…

First, ssh to the controller

openstack server list # from director 
ssh heat-admin@controller_ip
Figure 71. ssh to controller

Next, look at the namespaces and identify the DHCP namespace:

ip netns
Figure 72. Looking at namespaces and identifying the dhcp namespace

now look at the interfaces defined in that namespace

ip netns exec qdhcp-4594595c-2c37-4df8-8495-f12db3777c38 ip a

You will notice that the private IP subnet we defined for our instances is connected via a tap interface.

Figure 73. Looking at interfaces in the qdhcp namespace on the controller

now ssh in to an instance using that namespace

openstack server list   # to get the private ip
ip netns exec <namespace_id> ssh cirros@<private_ip>
Figure 74. Using netns to ssh in

7. Viewing Endpoints

Endpoints can be viewed through the APIC just as we normally do. Also notice that the connectivity is over VXLAN.

Figure 75. Viewing endpoints from APIC (vxlan encapsulation used)

From the OpenStack side, you can ssh to a compute node and go to /var/lib/opflex/files/endpoints to view the endpoints that the compute node is hosting.
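For example (a sketch using the path given above; the endpoint file names are generated per port, so yours will differ):

ssh heat-admin@overcloud-compute-0
sudo ls /var/lib/opflex/files/endpoints                    # one file per endpoint hosted on this compute node
sudo cat /var/lib/opflex/files/endpoints/<endpoint_file>   # inspect one endpoint definition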

Figure 76. Viewing endpoints from compute node

3) Working with Cinder Volumes (block storage)

Creating and attaching volumes

Until now, when we've spun up instances, we've used the built-in ephemeral disks. Ephemeral disks have the lifetime of the VM: when the VM is destroyed, they go away. If you want persistent disks, you can create volumes.

In this example we will do the following:

  • create a block volume (cinder)
  • attach the volume to instance 1 and mount it
  • write a file to it
  • detach the volume from instance 1
  • attach the volume to instance 2 and mount it
  • view the file that we had written to that volume when it was attached to instance 1

first let’s make sure cinder services are up and running properly

cinder service-list
Figure 77. cinder service-list to verify cinder services are running

next check to see what volumes are there.  In our case we don’t have any, because we did not create any

openstack volume list
Figure 78: check for existing volumes

Create a volume of 1GB

openstack volume create --size 1 vol1
Figure 79. Create a block volume (cinder)

list volumes again

openstack volume list
Figure 80. listing volumes again to verify new volume we created in the list

get a list of instances

openstack server list

Figure 81: Listing the existing instances

log into instance 1 (ssh, virsh, netns)
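For example, from the controller via the DHCP namespace found earlier (substitute instance 1's private IP from “openstack server list”):

ip netns exec qdhcp-4594595c-2c37-4df8-8495-f12db3777c38 ssh cirros@<instance1_private_ip>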

Then show the current mounts. You will see the only device present is vda:

mount # check for mount devices

 

Figure 82. Checking for mount devices

now from director, add the volume

openstack server add volume smcirros_inst1 vol1
openstack volume list # check what device name is used to attach. Below it shows /dev/vdb
Figure 83. Attaching the Volume to instance 1

Check again from inside the instance; the new device shows up but is not yet mounted:

ls /dev/vd*  # verify that vdb is there inside the instance
mount # to verify that it's not really mounted
Figure 84. Checking from inside instance to verify vdb is present, but not mounted

make the file system

sudo mkfs.ext3 /dev/vdb
Figure 85. Making the file system

Mount the file system

sudo mkdir /mydisk
sudo mount /dev/vdb /mydisk
Figure 86. Mounting the file system

mount now shows the disk mounted

mount
Figure 87. Verify that vdb is mounted

Currently the filesystem is owned by root, so you won't have permission to write to it. Change the group and owner to the cirros user.

sudo chgrp -R cirros /mydisk
sudo chown -R cirros /mydisk
Figure 88. changing group and owner to cirros user

Now write a simple file in that file system

echo hello > /mydisk/hello.txt
Figure 89. Write a file to the new file system

Now unmount the disk inside the instance and verify that it's gone.

sudo umount /dev/vdb
mount # to verify that the device is unmounted
Figure 90: unmounting the device

Now follow the steps below to detach the volume from instance 1 and attach it to instance 2.

openstack server remove volume smcirros_inst1 vol1   # on director
openstack server add volume cirros2 vol1 # on director

ip netns exec qdhcp-4594595c-2c37-4df8-8495-f12db3777c38 ssh cirros@100.11.0.35        # execute this from controller to attach to instance 2.  got ip from output of openstack server list.

sudo mkdir /mydisk # inside instance 2
sudo mount /dev/vdb /mydisk # inside instance 2
ls -l  /mydisk/ # inside instance 2
cat /mydisk/hello.txt # inside instance 2

We can now read that file on instance 2

Figure 91: Reading file from cinder mounted volume on instance 2

Conclusion:

By following these writeups (Parts 1 through 4), I believe you will get a good head start on installing and using ACI-integrated OpenStack.

Of course, I want to point out that OpenStack is very extensive and you can go much deeper into it: the architecture of how the internal bridges br-int and br-fabric are instantiated, how packets flow (data plane and control plane), and the interaction between the different components, including AIM, OpFlex, Neutron, Nova, and so on.

For troubleshooting, you would need to be familiar with tools such as ovs-vsctl, ovs-dpctl, and ovs-ofctl, and be able to map ports, capture and trace packets, and so on.

As you use it over time, you will become very familiar with it.

From the end user's and developer's perspective, using this is pretty intuitive, as you've seen in Part 4. From the network operator's perspective, they have a complete view of what the developer is building and doing. From the developer's perspective, they don't have to deal with the network folks; they are in control (or so they think 🙂).

 

 

