Configuring Cisco ACI with Ansible AWX running in Docker

We are looking into an APIC integration with a security product that requires an AWX Ansible server to push configuration changes to our APIC. Seeing a chance to kill two birds with one stone, I wanted to take the opportunity to highlight the steps I took installing AWX into our lab environment. I copied a very simple playbook I found on GitHub to test the connection between AWX and ACI; this way I know our AWX server is ready for our next integration.

Previously having only worked with Ansible on the CLI, the switch to a GUI for creating playbooks was an interesting one. With AWX you gain the ability to create and manage different tasks (playbooks) for different users, groups, and roles, independent of the underlying Linux UID/GID of the system hosting AWX.

AWX is the open-source foundation of Ansible Tower. Red Hat purchased Ansible around 2015, and Ansible Tower is the commercial version based on AWX, maintained and supported by Red Hat. Ansible AWX and Ansible Tower have extremely similar features, though AWX lacks the obvious integration into the Red Hat ecosystem.

The AWX project has announced plans to keep the two products in lock-step in terms of their main features. You will have to decide which platform is right for your environment.


  • 1 Linux VM – CentOS 7
    Figure 1: Screenshot showing my VM settings; these are honestly overkill.
  • AWX 17.1.0 Repo
  • Docker
  • Docker-compose
  • Ansible (to run the playbook that installs the necessary AWX Docker containers)

AWX Install Options

Overall, installing AWX was pretty straightforward once I found a version that works with Docker. After version 18.0, Docker-based AWX is no longer officially supported, and I ran into many issues trying to get it working on AWX 19.5.1.

If you followed my article on Kubernetes, you might have a working K8s cluster that you can deploy AWX into. Using K8s will allow you to deploy a newer version such as 19.5.1; I deployed the latest AWX into my Kubernetes cluster and it works just the same. I am outlining the Docker installation method because readers are more likely to have access to Docker than K8s.
Figure 2: Screenshot of the AWX GitHub repo dropping official support for Docker-based installs after AWX 18.

AWX Install for AWS/EC2 based install with Terraform

A big thank you to Soumitra for documenting and scripting the AWX install into an EC2 instance!


This procedure will create everything for you and install AWX ready to be used. It will create objects on AWS such as a VPC, CIDRs, subnets, an Internet Gateway, Security Groups, and an EC2 instance. It will also install docker, docker-compose, and Ansible, and place the scripts in the home directory. You just have to SSH in to the EC2 instance and run those scripts (which are numbered 1.x, 2.x, 3.x, 4.x).

The actual bring-up should not take more than 10 minutes. Before you start, go to the AWS console and create an AWS Key and Secret, which you will need to enter in the "" file before running the script.
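As a rough sketch, the credentials/variables file for a setup like this might look like the following. The filename and variable names here are assumptions for illustration; match them to whatever the repo's script actually expects:

```hcl
# Hypothetical terraform.tfvars -- names are illustrative, not from the repo
aws_access_key     = "AKIA................"   # from the AWS console
aws_secret_key     = "your-secret-key"
aws_region         = "us-east-1"
awx_admin_password = "ChangeMe123!"          # desired AWX admin password
```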

⚠️ The Security Group rules configured by this script are wide open because this is for a POC. If you want to tighten them down, please change the terraform resource definition in before executing the script.

📗 You could retrofit this script without too much effort to install AWX on vCenter-based ESXi as well. Please feel free to do so if you desire.

1) From the Linux/Mac box where you will do the install, make sure you have the terraform binary installed:
          a) browse to , go to the bottom, and copy the link to the terraform binary for your platform.
          b) on your Mac or Linux box, do a curl -O <the copied link>
          c) unzip the file that you just curled down, e.g. unzip
          d) sudo mv terraform /usr/local/bin

2) clone this directory:   git clone
3) cd awx_on_ec2_terraform
4) vi and put in your AWS access key, secret key, and desired AWX password

5) run the terraform script:
      a) terraform init
      b) terraform validate
      c) terraform apply

6) the output on the screen will give you the EC2 public IP. SSH to the EC2 instance as ec2-user@publicIP and run the following scripts:
     ./       # this ansible playbook will take 5 minutes to run

7) Now you can browse to your AWX UI. Point your browser to http://<public_IP of ec2>.
     a) you can run terraform output from the terraform workspace directory (where you ran terraform from) to see the public IP
     b) the username will be admin and the password will be the one you specified in the file

8) To destroy:
     to tear down the full set of AWS constructs this script built: terraform destroy
     to destroy just the EC2 instance: terraform destroy --target aws_instance.dummy-phy-ec2

From here you can jump to the section: Administering AWX GUI

AWX VM Preparation for vSphere based install

I am not going to go through creating a Linux VM in vSphere; create a VM with your Linux distribution of choice. I used CentOS for this installation because dnf was available after the initial OS install without any additional configuration. The AWX installation playbook uses dnf to download and configure the packages needed to build the required Docker images.

Once you have the OS installed, you will need git; depending on your initial OS configuration, it might already be available.

yum install -y git

Now let's install docker and docker-compose. I am not going to walk you through the specific installation steps because they are already documented very nicely by Docker directly.

Follow these two guides for CentOS; if you are using a different distro (e.g. Ubuntu), search for the guide specific to your setup.
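For reference, the CentOS steps from Docker's guides boil down to roughly the following. This is a sketch, not a substitute for the official docs, and the docker-compose release pinned here (1.29.2) is an assumption; grab whatever current version the Compose guide points at:

```shell
# Add Docker's CentOS repository and install Docker Engine
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker

# Install the standalone docker-compose binary (version is an assumption)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```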

After installing the docker pieces, we can clone the AWX Git repo. Make sure to specify the 17.1.0 branch with the following command.

git clone -b 17.1.0

Change to the AWX directory and let's edit our installation inventory file, located at `/home/cisco/awx/installer/inventory`. This file is extremely lengthy and there are many different options you might need to enable or configure for your environment. Below are the options that I had in my AWX inventory file. These options will create a directory at `/var/lib/awx/projects` where you will create additional directories for all locally created AWX projects.

[cisco@awx installer]$ cat inventory
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python3"

admin_password=<Your Password Here>
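The snippet above is trimmed down. The setting that actually controls the local projects directory is the installer's project_data_dir option, which (to my recollection) ships commented out in the 17.1.0 inventory; if your copy differs, search the file for project_data_dir. Uncommented, it looks like:

```
project_data_dir=/var/lib/awx/projects
```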

After you have made your edits, you can install AWX just like any other Ansible playbook.
📗 Note: If you don't have Ansible installed in the VM, you can install it with:

sudo pip3 install ansible

To spin up the AWX Docker containers, run the Ansible playbook:

[cisco@awx installer]$ pwd
[cisco@awx installer]$
ansible-playbook -i inventory install.yml

Once the playbook has successfully completed, you will have an AWX instance ready for creating Ansible playbooks. The application is a total of four Docker containers. However, to work with ACI I wanted to ensure the ACI Ansible modules were installed, which means installing the collection inside our AWX containers. Let's grab a list of the running Docker containers.

[cisco@awx installer]$ docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED      STATUS      PORTS                                   NAMES
4fa9fdd1c3a1   ansible/awx:17.1.0   "/usr/bin/tini -- /u…"   3 days ago   Up 3 days   8052/tcp                                awx_task
460d4b3e8243   ansible/awx:17.1.0   "/usr/bin/tini -- /b…"   3 days ago   Up 3 days   0.0.0.0:80->8052/tcp, :::80->8052/tcp   awx_web
22d46fb22095   postgres:12          "docker-entrypoint.s…"   3 days ago   Up 3 days   5432/tcp                                awx_postgres
3a302b6dddf9   redis                "docker-entrypoint.s…"   3 days ago   Up 3 days   6379/tcp                                awx_redis
[cisco@awx installer]$ docker exec -it 4f sh

From the output above, the top two lines are the containers into which I installed the ACI Ansible collection, using the following commands.

[cisco@awx installer]$ docker exec -it awx_task ansible-galaxy collection install cisco.aci
[cisco@awx installer]$ docker exec -it awx_web ansible-galaxy collection install cisco.aci

At this point we are almost ready to configure inside the AWX GUI.

Administering AWX GUI

AWX has built-in integrations with many software repositories and version control tools, such as git and Subversion. As mentioned above, we defined a directory in the Ansible inventory for manual projects. Change to your defined directory and create a new subdirectory; you will need a new directory for each project you create if you use the Manual source control type.

[cisco@awx installer]$ cd /var/lib/awx/projects/
[cisco@awx projects]$ pwd
[cisco@awx projects]$ sudo mkdir aci
[sudo] password for cisco:
[cisco@awx projects]$

With the directory created, we can navigate to the VM management IP address and log in to the AWX GUI with the username and password configured in the inventory file shown above. Navigate under Resources to Projects; once inside Projects, click the Add button.

Figure 3: Screenshot showing the AWX GUI.

Figure 4: Screenshot guiding through the creation of adding a Project.

Figure 5: Screenshot guiding through creating an AWX project and selecting the directory that we created.

With the Project created we can move onto creating the other AWX items. I created an Inventory next.
Figure 6: Screenshot guiding through creating an Inventory.

For this article I am demoing a very basic Ansible playbook I found online; it is not written in a manner I would use for an Enterprise deployment. I just needed to test the connection from AWX to my ACI fabric for a different integration.
Figure 7: Screenshot showing ACI Inventory file. There is only 1 APIC in the cluster.

Figure 8: Screenshot guiding through creating a Template.

In order to create a template, there has to be a playbook in our project directory. I found this one online; you can copy it to test out AWX in your lab. You will need to use sudo to create the file.

[cisco@awx aci]$ pwd
[cisco@awx aci]$ ls
[cisco@awx aci]$ cat aci-epg-demo.yml
- name: Create a Tenant, App profile, VRF, Bridge Domain, and 10 EPGs
  hosts: localhost
  gather_facts: false
  vars:
#    aci_host: IPAddressOfACI
#    aci_username: admin
#    aci_password: SuppliedPassword
    aci_tenant: gtest
    aci_ap_name: gtest-apps
    aci_bd: gtest-bridge
    aci_vrf: gtest-vrf
    aci_epgs:
      - _epg: Texas
        _desc: obviously the best
      - _epg: Arizona
        _desc: really hot
      - _epg: Utah
        _desc: boring
      - _epg: Nevada
        _desc: hot and expensive
      - _epg: Oklahoma
        _desc: wind whips
      - _epg: Nebraska
        _desc: corn right...corn
      - _epg: Idaho
        _desc: waffle fries
      - _epg: Washington
        _desc: rain I think
      - _epg: California
        _desc: beautiful to visit
      - _epg: Florida
        _desc: watch out for Florida man

  tasks:
  - name: Create a tenant
    cisco.aci.aci_tenant:
      host: "{{ aci_host }}"
      username: "{{ aci_username }}"
      password: "{{ aci_password }}"
      tenant: "{{ aci_tenant }}"
      description: Aleccham Ansible Tower Testing
      state: present
      validate_certs: false

  - name: Add a new AP
    cisco.aci.aci_ap:
      host: "{{ aci_host }}"
      username: "{{ aci_username }}"
      password: "{{ aci_password }}"
      tenant: "{{ aci_tenant }}"
      ap: "{{ aci_ap_name }}"
      description: Greg Ansible Testing
      state: present
      validate_certs: false

  - name: Add a new VRF to a tenant
    cisco.aci.aci_vrf:
      host: "{{ aci_host }}"
      username: "{{ aci_username }}"
      password: "{{ aci_password }}"
      vrf: "{{ aci_vrf }}"
      tenant: "{{ aci_tenant }}"
      descr: Greg Ansible Testing
      policy_control_preference: enforced
      policy_control_direction: ingress
      state: present
      validate_certs: false

  - name: Add Bridge Domain
    cisco.aci.aci_bd:
      host: "{{ aci_host }}"
      username: "{{ aci_username }}"
      password: "{{ aci_password }}"
      tenant: "{{ aci_tenant }}"
      bd: "{{ aci_bd }}"
      vrf: "{{ aci_vrf }}"
      state: present
      validate_certs: false

  - name: Add new EPGs
    cisco.aci.aci_epg:
      host: "{{ aci_host }}"
      username: "{{ aci_username }}"
      password: "{{ aci_password }}"
      tenant: "{{ aci_tenant }}"
      ap: "{{ aci_ap_name }}"
      epg: "{{ all_epgs._epg }}"
      description: "{{ all_epgs._desc }}"
      bd: "{{ aci_bd }}"
      priority: unspecified
      intra_epg_isolation: unenforced
      state: present
      validate_certs: false
    loop: "{{ aci_epgs }}"
    loop_control:
      loop_var: all_epgs
[cisco@awx aci]$
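If you want to sanity-check the playbook from the shell before wiring it into an AWX template, you can run it ad hoc inside the awx_task container (the installer mounts the projects directory into the containers). The APIC address and credentials below are placeholders for your own values:

```shell
# Run the playbook once from the awx_task container, supplying the
# commented-out connection vars as extra vars (values are placeholders)
docker exec -it awx_task ansible-playbook \
  /var/lib/awx/projects/aci/aci-epg-demo.yml \
  -e aci_host=10.0.0.1 -e aci_username=admin -e aci_password='YourPassword'
```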

Once that playbook file is created in your project directory we can hop back to the AWX GUI and select it for our template. You will notice that I populate some of the variables to change what was created in my ACI fabric. I have included them below.

Figure 9: Screenshot guiding through the creation of a template. Notice I populated the variables field to change what I was configuring in ACI.

aci_host: ""
aci_username: "admin"
aci_password: "YOURPASSWORD"
aci_tenant: "awx-test"
aci_ap_name: "awx-test-apps"
aci_bd: awx-test-bridge
aci_vrf: awx-test-vrf
aci_epgs:
  - _epg: Texas
    _desc: obviously the best
  - _epg: Arizona
    _desc: really hot
  - _epg: Utah
    _desc: boring
  - _epg: Nevada
    _desc: hot and expensive
  - _epg: Oklahoma
    _desc: wind whips

Once the template is created, go ahead and launch it. This will test the connection between AWX and ACI. If you configured everything correctly, you should see output very similar to what you would see on the CLI when running a playbook with Ansible.

Figure 10: Screenshot showing the successful completion of the template after Launch.

Figure 11: Screenshot showing the created tenant, BD, EPGs, and VRF. The names were taken from the variables, as you can see if you compare the playbook to what is shown in the ACI GUI.

