ACI/Cloud Extension Usage Primer (AWS) – Cloud Tenant Only (deploying Application Load Balancer with Service Graph on AWS)

In this blog post I will show how to use MSO to configure an ACI Tenant on the AWS cloud.   In an upcoming post I will discuss how to extend your Data Center from the physical Fabric to the AWS Cloud ACI Fabric.  Once you know how to create Tenants on the cloud side, extending to the physical Fabric or between different cloud providers will be easy, especially if you are already familiar with using Cisco MSO.

I also want to point out that if you had 2 different cloud providers, for instance AWS and Azure, you could orchestrate from the MSO and push consistent policies for your workloads across those 2 providers in your ACI Tenant.  You don’t necessarily need to have your Tenant pushed to your on-premises Fabric if you don’t want to.

I will discuss some of the concepts behind the configurations and show you step by step how to configure a working Tenant.  I will also go to the AWS console after each step to illustrate what happens on the AWS side as you configure.  If you have your AWS/ACI integration already set up, you can follow along with these examples.  I believe that doing so will really help clarify the concepts and make you comfortable with the ACI Cloud Extension technology very quickly.

I also share my GitHub docker-compose scripts, so you can use them to quickly install docker and docker-compose, deploy customized web servers (containers) on AWS EC2 instances, and test out external connectivity and load balancing using an AWS ALB (Application Load Balancer).   Running those scripts takes about 3 minutes per EC2 instance.

By now, you all know the concept of ACI Anywhere.   ACI Anywhere is basically a marketing term meaning ACI can go anywhere: it gives you the ability to extend your data center anywhere, with a single pane of glass for orchestration, management, and policies (security and connectivity) across the entire ACI Anywhere infrastructure.  Those of us who have been in this field for a while, and have used the old-fashioned way of implementing VPLS or OTV to stretch data centers while duplicating policies manually across different data centers and trying to keep them in sync, will see the value of this right away!

ACI Anywhere has progressed as follows:

  • (Private Cloud) Single Stand Alone Fabric:   Release 1.0 – Nov 13th 2013
  • (Private Cloud / ext) MultiPod:  Release 2.0 – August 2016
  • (Private Cloud / ext) MultiSite: Release 3.0 – October 2017
  • (Private Cloud / ext) Remote Leaf: Release 3.1 – Dec 2017
  • (Private Cloud / ext) Virtual Pod: Release 4.0 – October 2018
  • (Public Cloud / ext) ACI Cloud Extension (AWS): Release 4.1 – March 2019
  • (Public Cloud / ext) ACI Cloud Extension (Azure): Release 4.2 – Sep 2019

ext = extension

In this post we will discuss ACI/AWS usage and how to provision your Tenants on the AWS side.   While going through this, I will point out what happens on the AWS side as you configure, by showing you screenshots from the AWS console.

This post will not discuss how to install and integrate the physical Fabric and Cloud Fabric.   That might come in a future post.  Suffice it to say that if you have already done a Multisite setup (or even Multipod), you won’t find it too hard.   There are a number of steps, but it’s all pretty automated for the most part.

However for a short overview, below is a diagram that shows how the basic connectivity works:

On the cloud side we spin up an APIC (the ACI SDN Controller) in a chosen AWS Region/Availability Zone.  This APIC actually runs on an AWS virtual machine, also known as an EC2 instance (in the case of Azure it runs on top of an Azure Virtual Machine).  Since this APIC is running in the cloud, we call it the cAPIC (Cloud APIC).

We spin up another 2 virtual CSR routers that peer BGP EVPN on the control plane with the physical Fabric Spines.  For the data plane we use VXLAN-based tunneling, just like Multisite.   The actual connectivity from the AWS side to the physical side is of course across the IPv4 cloud using IPsec tunnels.  The tunnels run from the AWS vCSR routers (again running on EC2 instances) to the on-site VPN devices, in most cases CSR routers (but they could be anything else that can terminate IPsec tunnels).

Notice that there are not really any Spines or Leaves on the AWS side.   The important thing to remember is that the infra tenant goes in its own AWS account, and each Tenant needs an additional AWS account.  In other words, one Tenant per AWS account.  However, inside that Tenant you can create multiple VRFs.    The Tenants can be in any AWS Region/Availability Zones.

Figure 1

The orchestration for the Fabric is done through the MSO (Multisite Orchestrator, earlier called MSC, Multisite Controller), just like a normal Multisite configuration.   The MSO can be on site (running on Docker Swarm or the K8S Service Engine platform) or can run on AWS, also on Service Engine (the underlying nodes being AWS EC2 instances).   As long as the MSO can reach both the physical APIC and the cAPIC, you are set.  Incidentally, I highly recommend using the Service Engine to host the MSO.  In my experience the MSO running on SE is very snappy, and the upgrade for the MSO running on SE is really nice: it’s just 2 clicks through the GUI.   The older Docker Swarm-based MSO upgrade was a real pain, because you always had to struggle with Python modules and with docker release support and upgrades.  Remember, the SE is running Kubernetes and thus supports rolling upgrades and distributed deployments, which is what makes all this possible.  The complexity of Kubernetes is hidden under the covers, so you don’t have to deal with it.   The SE OVA can be downloaded from CCO, and the MSO code running on SE can be downloaded from the Cisco App Store.

Figure 2

What you need to know to be able to follow through:

  • Working (hands on) Knowledge of ACI
  • Working (hands on) knowledge of using MSO with Multisite ACI Fabric
  • Some knowledge of AWS; of course more is better, but this is not a must.

I’m assuming that if you are reading this, you are already quite familiar with ACI, Multisite, and using MSO.   Before diving into provisioning, we need to discuss a few items on the AWS side.

In AWS, there is the concept of Regions and Availability Zones.   Availability Zones (AZs) are groups of AWS data centers that are tightly meshed together and have very high resiliency and connectivity.  A Region is a group of Availability Zones in a particular geographic area.  In other words, AWS has many Regions throughout the world, and each Region has a number of Availability Zones.   Depending on your requirements, you would have your workloads in one Region (distributed across multiple AZs in that Region), or in multiple Regions (distributed across multiple AZs in each Region) if you are a multinational organization; each area would probably choose a Region close to it.  Inside a Region you would use one or multiple Availability Zones and spread your workload across them.  As an example, you could place your web farm EC2 instances in 4 AZs in a Region and then place an AWS Load Balancer in front to load balance external traffic to the web servers across those AZs.   This gives you really good performance (since the load on each web server will be low) and also very high uptime: if there is a problem with one AZ, you will not even notice it.  You could then go one step further, spread the web farm across different Regions as well, and put a Network Load Balancer in front whose targets are the per-Region Application Load Balancers.

Keep in mind that different AWS Regions have different numbers of AZs.  As an example, the North Virginia Region has 6 AZs as of today.
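If you have the AWS CLI handy, you can check this yourself.  Below is a minimal sketch, assuming the `aws` CLI is installed and credentials are configured:

```shell
# List the Availability Zone names a given region exposes.
# Assumes the AWS CLI is installed and credentials are configured.
list_azs() {
  aws ec2 describe-availability-zones \
    --region "$1" \
    --query 'AvailabilityZones[].ZoneName' \
    --output text
}
# Example: list_azs us-east-1
```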

Figure 3

Now let’s discuss how you would spin up an EC2 instance on AWS and get connectivity to it from the outside world.

  • Step 1:  You would log into the AWS Console
  • Step 2:  You would create a VPC (Virtual Private Cloud).  A VPC is a container that hosts the AWS objects you create.  You can think of it as a VRF from the normal networking point of view.  You would put a big prefix on the VPC, sort of like defining a route aggregate with the superset IP prefix that the VRF will contain (as an example, 10.0.0.0/16 in Region N. Virginia).  Notice that these will generally be private addresses.
  • Step 3:  You will define one or more subnets in one or more AZs inside that VPC.  Each subnet needs to be a subset of the big VPC prefix.  Let’s say we choose 10.0.1.0/24 in us-east-1a (which is an AZ in the N. Virginia Region)
  • Step 4:  Inside that AZ you could put some broad Network ACLs to define what traffic can ingress/egress on your subnet in that AZ.  Keep in mind that this is a Network Access List, so it is not stateful
  • Step 5:  You will spin up an EC2 instance and place it in your defined subnet for the AZ.  You will give it an IP address from that subnet (for example 10.0.1.100/24), or you can choose to have an IP auto-assigned from that subnet.
  • Step 6:  You will choose auto-assign public IP (or attach an Elastic IP to it, which is basically a pinned-down public IP given to you by Amazon, for which you will have to pay a little extra).
  • Step 7:  You will define a Security Group (or use an existing one) that defines who can access that EC2 instance on what protocol (ingress/egress).  Note that a Security Group is stateful.
  • Step 8:  You will create an IGW (Internet Gateway), associate it with your VPC, and then add it to the route table for your subnet

So, as you can see, there are quite a few steps, although once you’ve done it a few times it’s really very easy to follow and works flawlessly.  Of course, you can also do many more things now, like adding firewalls, NAT gateways, labels, storage, and so on.
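The networking steps above also map naturally onto a CloudFormation template.  Below is a trimmed, illustrative sketch (the resource names are my own, and it covers only the VPC/subnet/IGW/routing pieces discussed above, not the EC2 instance or Security Group):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16          # Step 2: the VPC "big prefix"
  WebSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.1.0/24          # Step 3: a subnet in one AZ
      AvailabilityZone: us-east-1a
      MapPublicIpOnLaunch: true       # Step 6: auto-assign public IPs
  IGW:
    Type: AWS::EC2::InternetGateway
  IGWAttach:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC
      InternetGatewayId: !Ref IGW     # the IGW step: tie the IGW to the VPC
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC
  DefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref IGW             # default route out via the IGW
  SubnetRTAssoc:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref WebSubnet
      RouteTableId: !Ref RouteTable
```

This is also essentially what the "cloud formation template" mechanism mentioned later in this post automates for you.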

What to keep in mind when building a Tenant on the AWS side using MSO:

The way the integration works is that on the AWS side we push cloud-native configuration objects.   The objects that you are already familiar with in ACI don’t necessarily exist on the cloud side.

As an example, in ACI we create Tenants, VRFs, BDs, App Profiles, EPGs and contracts.

Don’t try to create a BD from MSO and push it to the AWS site.  Similarly, you don’t bind an EPG with a VMM binding or static binding.  The way we define which EC2 instance belongs to which EPG is by matching labels on that EPG against those defined on EC2 instances.  You could also be very broad and say match all instances in an AZ.

In fact an EC2 instance can have multiple labels and thus you can put that EC2 instance on multiple EPGs.

For this reason, it’s important to visualize which ACI objects map to which AWS objects.  Once you’ve seen this, it’s pretty much common sense.

Below is a diagram that shows you the mapping between ACI objects and AWS objects:

Figure 4

Let’s get going:

Now that we’ve covered the basics, let’s actually configure and follow through.   If your AWS/ACI integration is already set up, you can follow this exercise.

What We will do:

Part 1

  • Build a Tenant in AWS, so we can spin up 2 EC2 instances in 2 different AZs in an AWS Region
  • We will make sure that external connectivity is available so we can SSH in
  • We will SSH in to the EC2 instances and then download the code from my Git repository to install docker and docker-compose and spin up customized web containers (docker), all using those scripts.
  • We will browse from our local machine to make sure it’s working.

Part 2

  • We will then put an AWS ALB in front of the 2 EC2 web servers (using MSO, with a Service Graph on AWS)
  • We will check from our browser to verify that Load Balancing is working

Part 1 Implementation

Build The Tenant In MSO

This is pretty much the same as what you would normally do on MSO for physical Fabric Tenants: build the Tenant, and associate the AWS site and the user.  The only difference is that when you associate the AWS site, you will have to choose the AWS account number.

Figure 5
Figure 6

( * A note on adding the AWS Tenant account number: when adding it, you will be presented with the option of choosing an Untrusted Account / Trusted Account / Organization Account.

Figure 7

For lab simplicity you could choose the option Untrusted.   You would then have to enter your Access Key ID and Cloud Secret Access Key.  The problem with this approach is that the Access Key ID or Secret Access Key can expire after some period of time, so you would have to redo the association with the new credentials.   Managing keys is not something you may want to do on a regular basis.

For that reason, in the long run, for a production environment you will probably want to create the trust relationship between the infra account and the Tenant account as a permanent Trusted account.

How to setup a permanent Tenant Trusted account:

  • If your cAPIC is at release 4.2(3x) or later, this process has been greatly simplified.  You can browse to your cAPIC and copy a link to a “cloud formation template”.  You then go to the AWS console on the Tenant account and run the “cloud formation template” from there.
  • Prior to 4.2(3x), the process is a bit more cumbersome.  You have to go to your AWS infra Tenant account first and download the cloud formation template to an S3 bucket (AWS object storage).  Then you have to log in to the AWS Tenant account and run the “cloud formation template” from the Tenant account.

I intend to do a more detailed writeup on this with screenshots in a future post.  Stay tuned… )

[Updated 5/22/2020: a new detailed step-by-step post on this topic was written on 5/22/2020]

Figure 8

Assuming that you have done one of the options above and successfully created the trust relationship and pushed your Tenant from MSO:

Let’s see what happened on AWS after we saved the Tenant.   Below you will see that nothing happened.  The VPC you see there is the default VPC that already existed.  If I had pushed a Tenant to the physical Fabric from MSO, you would see the Tenant pop up on the physical Fabric.  It’s important to remember that the equivalent of a Tenant in AWS is the account number.  So, all you’ve done is get ready to start creating the objects in AWS.

Figure 9

Now, Let’s go add our Schema.

Figure 10

At the same time, create a Template in the Schema.  Associate the Template with the previously created Tenant.  Also, add the Template to the AWS site.

Figure 11

At this time, still nothing has happened on the AWS side.

Now, let’s go ahead and create a VRF.    You will see that on the Site Local properties of the Template (under Site 3), the icon has changed to red, and if you click on it, it will say that required information is missing on Site Locals.

Figure 12

Let’s go under Site Local, click on the VRF, and configure the AZs for use in the Region; in this case we choose us-east-1.  This means that we will be able to place our workloads in us-east-1a and also us-east-1b.

Next click on the CIDR, so we can add the CIDR address.

Figure 13

Now, let’s go ahead and create the CIDR.

Keep in mind what these mean:

  • CIDR = the VPC’s big subnet
  • Subnet = a per-AZ subnet

We create the CIDR of 10.0.0.0/16 and 2 subnets: 10.0.1.0/24 in us-east-1a and 10.0.2.0/24 in us-east-1b

Click on the Main Template and then click on Deploy to Sites

Figure 14

Now, check the AWS console to see if anything happened.   Still nothing!

[Updated on 5/24/2020:

Starting from cAPIC release 5.0 (internal code name Jordan), this behavior has changed.  The VPC is pushed to AWS as soon as the VRF is created.  This feature was implemented to:

  1. accommodate the integration of Cloud ACI with the AWS TGW solution.
  2. accommodate brownfield use, where you might want to import AWS VPCs / AZ subnets into Cloud ACI.

A more detailed writeup on this will follow.  Stay tuned…]

The reason for this is that we have not defined an EPG.  Since we have not yet configured anything in AWS that would use these objects, nothing is created.

Figure 15

Click on the Main Template and define the App Profile and EPG.  For the EPG, under Common Properties (right side pane), put in the name of the EPG (EPG-WEB in our case).

Do not put anything under On-Premise Properties, because we won’t push this EPG to the on-prem Fabric

Under Cloud Properties (right side pane) for the EPG, make sure to associate the EPG with the VRF.   Remember, in the AWS case the EPG goes directly to the VRF (not a BD).

Once you do that, click on Deploy to Sites

Figure 16

You will see what will be pushed to AWS Site

Figure 17

Now, let’s go ahead and check the AWS console.  You will now see that the VPC has been pushed down to AWS, along with the 2 subnets we defined.

Figure 18
Figure 19

Now, it’s time to do the equivalent of the VMM binding to the EPG.   However, we will do this with EPG Selectors.  Basically, we’ll click on the Site Local / EPG, and then click on Selector

Figure 20

For the selector, we will put a label: the key:value pair server:web.

server Equals web.   Later, when we bring up our EC2 servers for WEB, we will put these labels on them.   Of course, save the config

Figure 21

Click on the Main Template and hit Deploy to Sites

Figure 22

The next item is to create the external connectivity (the equivalent of an L3Out).   We will also need to create a contract to apply to the external connectivity.

Now, let’s create a Contract Filter.  We will allow everything on the filter, so we will choose Ethertype = unspecified.

Of course, save it.

Figure 23
Figure 24

Next, create the contract and associate the filter with the contract, then hit Deploy to Sites

Figure 25

Let’s Check what happened in AWS.

In AWS, go to EC2 / Security Groups.  You will see that the contract we defined has now appeared there.   So, Contract == Security Group Rule (the actual Security Group is the EPG)

Figure 26

Let’s now do our external connectivity.  For AWS you don’t need to define an L3Out.  Just define the External EPG

On the Main Template, create the External EPG.   On the right hand pane, click CLOUD; in Common Properties, put in the name of the External EPG and associate it with the VRF.  In Cloud Properties, associate it with the App Profile we created.

Figure 27

Notice that on the Site Local Template, you now have the red “i”.  If you click on it, you will see that it says that required information is missing.

 

Figure 28

In the Site Local instantiation of the Template, click on the External EPG and then click on Selector in the right hand pane.

Figure 29

For the selector, put in:      IP Address   Equals     0.0.0.0/0,   meaning all prefixes.  Save it.

Figure 30

Let’s see what happened on the AWS side.   You will notice that we now have an IGW (Internet Gateway) defined and tied to our main VPC.

Figure 31

However, the subnets still don’t have the IGW in their route tables

Figure 32

Let’s go ahead and tie the contract to EPG-WEB as provider and the External EPG as consumer

Figure 33
Figure 34

Click Deploy to Sites

Figure 35

Now, let’s check on the AWS side.   You will see that the IGW now appears in the route table for our subnets

Figure 36

Let’s also browse to the cAPIC and look at the contracts.  On the cAPIC UI, go to the Dashboard, click on the Target icon, and then click on EPG Communication

Figure 37

Next Click on Select Contract

Figure 38

Select Your Contract and click on the Select Button

Figure 39

Here you can quickly see the details of the contract and the consumer/provider EPGs.   Click on the Contract Filter

Figure 40

Here you can see the details of the Contract Filter

Figure 41

All done with the configuration for the 1st part!

Now, it’s time to spin up the EC2 instances, bring up web server containers on them, and test them

Spinning up the EC2 Instances:

Go to the AWS Console, Services/EC2/Instances, and click on Launch Instance

Figure 42

Select the EC2 image at the top of the list

Figure 43
Figure 44

Choose the following:

  • Network:   your VPC that was pushed down by MSO/cAPIC
  • Subnet:   the first subnet, 10.0.1.0/24 in us-east-1a
  • Auto-Assign Public IP:  Enable
  • Network Interfaces:  keep at Auto-Assign

Click on Next: Add Storage

Figure 45
Figure 46

Just take the default storage, then go ahead and click on Next: Add Tags

Figure 47

If you recall, on MSO we set up the EPG Selector as server:web

So, let’s put that tag here.  The implication is that this EC2 instance will go into that EPG.
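(If you prefer, the same tag can be applied from the AWS CLI instead of the console.  A small sketch, where the instance ID is a placeholder and the function name is my own:)

```shell
# Tag an EC2 instance with server:web so that the EPG selector
# (server Equals web) picks it up. The instance ID is a placeholder.
tag_web_server() {
  aws ec2 create-tags \
    --resources "$1" \
    --tags Key=server,Value=web
}
# Example: tag_web_server i-0123456789abcdef0
```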

After that, click on Review and Launch.   There is no need to configure a Security Group: cAPIC will automatically instruct AWS to attach the correct Security Group to the EC2 server, because it is matched by the correct selector

Figure 48

On the next page, click on Launch.  Notice that the Security Group that you see here is the default Security Group.  That will change as soon as we bring up the instance, because cAPIC will instruct AWS to change it

Figure 49

On the next page, you are given the option to use an existing SSH key pair or create a new one.   Since I already have one which I downloaded earlier, I’m not going to create one.  If you create one, please make sure to hit the download button and download it, and put it in the directory from which you will ssh in.  Don’t lose this key file.  If you do lose it, you won’t be able to ssh in to those servers later.

Click on Launch Instance

Figure 50

On next page, click on View Instances

 

Figure 51

This will take you to the EC2 instances page.  Here you can see your newly launched EC2 instance.

  • Wait for the Instance State to be “running”    
  • Notice the public IP is shown (here it is 54.167.82.213)
  • Notice the auto assigned private IP in my case is 10.0.1.11
  • Notice that the EC2 instance is in us-east-1a
  • Click on the Security Group associated with this instance
Figure 52

Notice that the security group rule (contract) that was defined from MSO is now attached to the EC2 instance.

Figure 53

 

Bring up the other EC2 instance in the us-east-1b AZ:

Now follow the same procedure to bring up EC2 instance 2 in the other availability zone.  I’m not going to show the screen captures for that anymore.

Choose the following:

  • Network:   your VPC that was pushed down by MSO/cAPIC
  • Subnet:   the second subnet, 10.0.2.0/24 in us-east-1b
  • Auto-Assign Public IP:  Enable
  • Network Interfaces:  keep at Auto-Assign
  • Label: server:web

 

In my case, my public IP is 35.172.138.158

Figure 54

Let’s now go to our local machine Terminal Session and try to ping the public IPs for the 2 EC2 instances.

In my case:

  • 54.167.82.213   EC2 Instance 1, running in us-east-1a
  • 35.172.138.158   EC2 Instance 2, running in us-east-1b

You can see that they both work.  So, all good !

 

Figure 55
Figure 56

Let’s now go ahead and install the docker container Web Server.

I’ve already got this all coded up in my git repository, so it should be really fast.

The first step is to copy the .pem (private key) file that you downloaded earlier from AWS to the directory on your local machine from which you will be doing your ssh.

Figure 57

Second step:   Do a “sudo chmod 400 *.pem” to change the permissions on it

Figure 58

Now, ssh into the first EC2 instance using the key.  Note that the default user on the Amazon Linux AMI is ec2-user

in my case:  “ssh -i sm-aws-lab.pem ec2-user@54.167.82.213”

Figure 59

Let’s update the packages with “sudo yum update -y”

I’m attaching the end part of the update to show that it updated successfully.

Figure 60

Next:   Let’s install the docker container for Web Services.  The steps are very simple and fast.

For this EC2 instance (the 1st one) do the following:

  • sudo yum install git -y
  • git clone https://github.com/soumukhe/aws-aci-lb-ec2-1.git
  • cd aws-aci-lb-ec2-1/
  • ./1.install_docker.sh
  • exit and ssh back in, followed by cd aws-aci-lb-ec2-1/
  • ./2.install_docker_compose.sh
  • docker-compose up --build -d  (please note that there are 2 dashes before build; some web page renderings show them as one dash)

Once you are done, do a “docker ps” to ensure the web container is running.  Notice that we’ve mapped port 9002 of the EC2 instance to the container’s port 443
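For reference, the compose file in the repo amounts to roughly the sketch below.  The service name and build context here are illustrative (the real file is in the cloned repo); the 9002 -> 443 port mapping is the part that matters:

```yaml
# Illustrative sketch only -- the actual docker-compose.yml lives in the
# git repo cloned above.
version: "3"
services:
  web:
    build: .                 # build the web server image from the repo's Dockerfile
    ports:
      - "9002:443"           # host port 9002 -> container port 443 (HTTPS)
    restart: unless-stopped
```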

Figure 61

Bring up the WEB docker container for the 2nd EC2 instance.  Note: the git repository is different for the 2nd instance.   Below is the procedure.

For this EC2 instance (the 2nd one) do the following:

  • ssh in to the 2nd instance using your key and the public IP of the 2nd instance
  • sudo yum update -y
  • sudo yum install git -y
  • git clone https://github.com/soumukhe/aws-aci-lb-ec2-2.git
  • cd aws-aci-lb-ec2-2/
  • ./1.install_docker.sh
  • exit and ssh back in, followed by cd aws-aci-lb-ec2-2/
  • ./2.install_docker_compose.sh
  • docker-compose up --build -d  (please note that there are 2 dashes before build; some web page renderings show them as one dash)

Once you are done, do a “docker ps” to ensure the web container is running.  Notice that we’ve again mapped port 9002 of the EC2 instance to the container’s port 443

 

Figure 62

Next on your local Machine, open 2 browser sessions.

You should reach the web interface of both EC2 instances, as shown below:

Figure 63
Figure 64

All Good !!!

Part 2:   Configuration and attaching AWS ALB (Application Load Balancer) from MSO/cAPIC (service graph on AWS)

In this section, we will continue with the above configuration and attach an AWS ALB with a Service Graph so that traffic to the 2 web servers will be load balanced.

Let’s first recall what we would do in the case of a physical fabric (using MSO).

  • We would create a L4L7 Device from APIC
  • From MSO, we would then create a Service Graph, attach the L4L7 Device, connect up the connector interfaces, and attach the Service Graph to the correct contract
  • We would then go to the load balancer UI or CLI and configure the rules, VIP, target groups, and so on (if using an unmanaged-mode Service Graph).  If using a managed-mode Service Graph, we would do the configuration from the APIC UI (or API).   I highly recommend unmanaged mode for several reasons in the on-premises ACI Service Graph deployment, but I won’t get into that here.

For AWS, we do essentially the same:

  • We first have to create a L4L7 Device (AWS ALB) from cAPIC
  • We will then go to MSO, create a Service Graph, attach the L4L7 Device, and attach the Service Graph to the correct contract
  • We will then create the rules, targets, and so on for the load balancer through the MSO.   This is managed mode of course, and it’s really easy to do in the cloud provider case.

Let’s get going…

Step 1:  Go to cAPIC and create L4L7 device (AWS ALB)

On cAPIC, go to Application Management/Services/Actions and click on Create Devices

Figure 65

In the Create Device form page:

  • add the Name of the L4L7 Device
  • Choose Internet Facing
  • Click on Availability Zone
Figure 66

Next, click on Select Availability Zone

 

Figure 67

Select us-east-1a and click Select

Figure 68

On the Next Page, Click on Select Subnet

Figure 69

Next page, select the 10.0.1.0 subnet and hit the Select button

 

Figure 70

Click Add

Figure 71

Click Save

Figure 72

On the next page, click Add Availability Zone and add the us-east-1b AZ with the 10.0.2.0 subnet

Figure 73

When done, hit the Save Button

Figure 74

Let’s check what happened from AWS Console.

On the AWS console, go to Services/EC2/Load Balancers.  You will notice that the ALB you created is there, but there are no Listeners configured for it

Figure 75

Now,  click on Target Groups and you will see that there are no Targets at this time

Figure 76

Let’s go back to MSO

On MSO, in your Schema, go to your Main Template:

  • Add Service Graph
  • Name the Service Graph appropriately
  • Click Load Balancer
  • Drag the Load Balancer to Canvas
Figure 77

You will now see that your Site Local Template has the red “i”.  Click on it and it will tell you that “Required Information is missing on Site Local”

Figure 78

Now, on the Site Local instantiation of the template, click on your Service Graph, and on the right hand pane under Template Properties, click on the Service Graph icon

Figure 79

Now on the Select Device form page, Select the Load Balancer you created from cAPIC, then click on Save

Figure 80

Now, go to the Main Template, click on your external contract, and add the Service Graph to the contract

Figure 81

Back in the local instantiation of the template in Site Local (you will see the red “i” icon saying information is missing), click on the contract and then on the Load Balancer device in the right hand pane

Figure 82

In the Add Listeners form that pops up, click on ADD Listeners

Figure 83

In the Add Listeners form page do the following:

  • Name your Listener (in my case, “WebFarm”)
  • Choose Protocol: HTTP
  • Choose Port: 80
  • Click on Add Rule to add a new rule
  • Give the new rule a name (Rule1 in my case)
  • Put in the path to the home directory of the web server, i.e. “/”
  • Protocol:  HTTPS  (since we are using HTTPS on our web server)
  • Port: 9002, since our web server (docker container) is listening on port 9002
  • The web farm EPG:  EPG-WEB (in my case)
Figure 84
Figure 85
Figure 86
Figure 87
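Purely to illustrate what cAPIC programs on your behalf (in managed mode you would not do this by hand), the listener rule filled in above corresponds roughly to the following AWS CLI call.  The ARNs, priority, and function name are placeholders of my own:

```shell
# Rough AWS CLI equivalent of the MSO listener rule. cAPIC does this
# for you in managed mode -- shown only for illustration.
# Both ARN arguments are placeholders.
create_webfarm_rule() {
  listener_arn=$1
  target_group_arn=$2
  aws elbv2 create-rule \
    --listener-arn "$listener_arn" \
    --priority 10 \
    --conditions Field=path-pattern,Values='/' \
    --actions Type=forward,TargetGroupArn="$target_group_arn"
}
```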

Now, go to Main Template and click DEPLOY TO SITES

Figure 88
Figure 89

Now, Let’s go to AWS console and check.

Go To Services/EC2/Load Balancers.

You will see the Listener Rule now exists.  Click on the Rule to see it

Figure 90

You can see the Forwarding Path and the Target (Forward To)

Click on Forward To

Figure 91

This takes you to the Target Groups.  Look at the registered targets and their target health.   They all look good!

Figure 92

Let’s look from cAPIC

On cAPIC click on the Target Icon, then click on EPG Communication

Figure 93

On Next Page, click on Select Contract

Figure 94

Choose your External Contract and click Select

Figure 95

Notice below that you can see all the details of your contract and Service Graph.  Once done viewing, click on Cancel

Figure 96
Figure 97

Let’s Test out if Load Balancing is working.

  • First, we need to get the DNS name of the ALB as provided by AWS
  • Go to AWS Console/EC2/Load Balancers and copy the URL for the DNS Name

In my case it happens to be:

AWS-ALB-VRF1-us-east-1-1631484734.us-east-1.elb.amazonaws.com

Figure 98

Next, in your local machine’s browser of choice, put that in as the URL.  You will see your web page pop up.

Every time you hit the refresh button in your browser, it will round-robin between the servers
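You can also watch the round robin from a terminal instead of the browser by looping with curl.  A minimal sketch, assuming curl is installed; the function name is mine, and the ALB DNS name is whatever your console shows:

```shell
# Hit the ALB repeatedly and print the first line of each response.
# If round robin is working, consecutive hits land on different servers.
probe_lb() {
  url=$1
  n=${2:-6}
  i=0
  while [ "$i" -lt "$n" ]; do
    # -s: silent, -k: accept a self-signed cert if the page is HTTPS
    curl -sk "$url" | head -n 1
    i=$((i + 1))
  done
}
# Example: probe_lb "http://AWS-ALB-VRF1-us-east-1-1631484734.us-east-1.elb.amazonaws.com"
```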

 

Figure 99
Figure 100
Figure 101
Figure 102

Load Balancing is working !!!

 

Conclusion:

We configured an ACI Tenant with a web server farm on the AWS Cloud.  We also brought up EC2 instances with web docker containers in them and successfully instantiated an ALB with a Service Graph.

Keep in mind that we went back and forth a lot between MSO, the AWS console, and cAPIC.   That is because I wanted to show you what is happening under the hood.  In reality you can do most of the orchestration from the MSO!

Now that you are comfortable creating a cloud-only Tenant on AWS using MSO, it will be really easy for you to configure Tenants across your physical on-site Fabric and Cloud Fabric using MSO.   Stay tuned for blog write-ups on that.
