In this blog post I will show how to use MSO to configure an ACI stretched Tenant between an onPrem ACI Fabric and a Cloud Native ACI Fabric with a Shared L3Out.
Please go through the post on “ACI/Cloud Extension Usage Primer (AWS) – Cloud Tenant Only (deploying Application Load Balancer with Service Graph on AWS)” first. I will not repeat things I said there; otherwise this post would get really long and distracting.
If you have your own ACI/AWS integration ready, you can follow along with the exercise I show here. That will really solidify the concepts.
Before we start, let’s get some theory clear. Notice that I don’t mention stretching the BD between onPrem Fabric and AWS Cloud Native Fabric. That is because cloud providers don’t have normal broadcast/multicast in the cloud infrastructure. Remember from the previous article that the equivalent of BD in AWS is VPC-Subnet (in AZ).
Now, if you think about it, we do stretch the EPG, but what does that mean? Remember that the equivalent of an EPG in AWS is the security group. In ACI, at the end of the day, that's exactly what an EPG is too: a security group. So, when we stretch an EPG between onPrem and AWS, it means that endpoints in that EPG (be it an EPG on ACI or a security group on AWS) are in the same security group. In other words, they can talk to each other freely without any contracts needed. However, the actual prefixes for the endpoints on the onPrem side and the AWS side have to be different and cannot overlap.
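The non-overlap requirement is easy to sanity-check programmatically. Below is a minimal Python sketch (the prefixes are made-up examples, not from any real fabric) that performs the same disjointness test you should do before assigning subnets to either side of a stretched EPG:

```python
import ipaddress

# Hypothetical endpoint subnets for illustration: the onPrem BD subnet and
# the AWS VPC-Subnet that share the stretched EPG must not overlap.
onprem_subnet = ipaddress.ip_network("10.10.1.0/24")  # onPrem BD subnet (example)
aws_subnet = ipaddress.ip_network("10.0.1.0/24")      # AWS VPC-Subnet in an AZ (example)

# overlaps() is the same containment test you should do mentally before
# assigning prefixes to either side of a stretched EPG.
assert not onprem_subnet.overlaps(aws_subnet), "stretched EPG sides must use disjoint prefixes"
print("prefixes are disjoint - OK")
```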
Let's discuss with diagrams so this concept is clear. We have a very simple topology here, with only one stretched EPG, to make it easy to understand. We will use this exact topology in the lab exercise below. I highly suggest trying this out in your own setup to get comfortable with it.
I'm assuming you already read through the previous writeup and followed along in your own lab. For that reason, in this writeup I will be a bit more concise and not go through the gory details with screenshots for every place you have to click. However, I will point out the important things that you should not miss.
The diagram below shows:
- one EPG stretched between onPrem and AWS
- one VM on onPrem side
- One EC2 on AWS side
- Pay attention to the fact that, in the same EPG, the IP subnets have to be different for the endpoints on either side (remember, the cloud does not do traditional broadcast/multicast, so you cannot stretch L2 domains)
- onPrem has its own L3Out connection, and so does the AWS side (an IGW (Internet Gateway))
- On the onPrem side I have a prefix, 100.65.0.1/32, that both the onPrem and AWS sides need to be able to reach
- On the AWS side, endpoints need reachability to the Internet at large
Now, let’s make sure we understand what’s possible and what’s not possible to do.
If you look at the diagram below, you will see that:
- The onPrem side endpoints will always be able to reach prefixes learned from the local L3Out (if you configure it to do so)
- The AWS side endpoints will always be able to reach Internet prefixes, as allowed by the security groups and destination prefixes that you configure
- onPrem endpoints and AWS endpoints will always be able to talk to each other freely, because they are in the same EPG
The next diagram shows that, if you wished, you could make the AWS endpoints reach the onPrem L3Out endpoints.
The next diagram shows that endpoints on the onPrem side cannot use the external connectivity on the AWS side, even if you wanted them to. (Of course, you could implement a NAT server/service in AWS and hack something together, but that's not an inbuilt function.) The limitation has nothing to do with ACI; it's the way cloud providers configure their infrastructure. In any case, I don't think most folks want their onPrem Internet traffic going out through the cloud provider's external link: you get charged for the amount of traffic, and it can add up.
The next diagram shows the logical configuration needed to achieve this. You will need to configure:
2 External EPGs:
- One for access control for OnPrem ACI Endpoints
- The other one for the AWS side, so that you can advertise the AWS VPC prefix out, and advertise the prefixes learned from the onPrem L3Out to the vCSRs on the AWS side
Keep in mind that the prefixes in those extEPGs need to be very specifically configured for this to work. I'll show you that with screenshots in the implementation section of this post.
Of course, you could get as granular as you want. For instance, you can put multiple labels on your AWS workloads and then pull the workloads you want into a different EPG as well (for example, an EC2 instance could then belong to multiple EPGs). Labels/tagging are so powerful! No wonder modern systems like Kubernetes use labels so extensively!
As in the case above, the prefixes in those extEPGs need to be very specifically configured for this to work. I'll show you that with screenshots in the implementation section.
Before we start the actual implementation, we need to make sure we understand the guidelines and limitations. Please read the CCO documentation: Shared On-Premises L3Out for Cisco Cloud APIC Workloads.
Guidelines and Limitations
ACI Multi-Site multi-cloud deployments support a combination of any two cloud sites (AWS, Azure, or both) and two on-premises sites for a total of four sites.
If you plan to use an Amazon Web Services cloud site, you cannot use the same account for multiple Tenants. This includes the infra Tenant as well as any user Tenants you may configure.
The on-premises L3Out and cloud EPGs must not be in tenant common.
The on-premises L3Out, the cloud EPG, and the Contract must all be in the same Tenant.
When configuring the L3Out contract, the scope of the contract can be vrf if the cloud EPG uses the same VRF as the L3Out. The scope must be tenant if a separate VRF is configured for the cloud EPG.
When an on-premises L3Out has a contract with a cloud EPG in a different VRF, the VRF in which the cloud EPG resides cannot be stretched to the on-premises site and cannot have a contract with any other VRF in the on-premises site.
When configuring an external subnet in an on-premises external EPG:
Mark the external subnet with a shared route-control flag to have a contract with a cloud EPG.
The external subnet must be marked even if the external EPG and the cloud EPG are in the same VRF.
Specify the external subnet as a non-zero subnet. Only explicit subnets can be exported to the cloud.
The external subnet must not overlap with another external subnet.
Aggregation of routes is not supported. In other words, you must add individual subnets, such as 10.10.11.0/24, rather than a single aggregate prefix that covers them.
For cloud subnet routes to be advertised out of the fabric, you must configure the on-premises L3Out to enable the Export Route Control Subnet flag. You can do this in your on-premises APIC GUI by navigating to the Create Subnet window, adding the cloud subnet, and checking the Export Route Control Subnet checkbox.
The external subnet that is configured as classification subnet in the on-premises external EPG must have been learned through the routing protocol in the L3Out or created as a static route.
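The subnet rules above (explicit prefixes only, no 0.0.0.0/0 exported to the cloud, no overlaps, no aggregates) lend themselves to a quick programmatic sanity check. Here is a small Python sketch; the helper name and the example prefixes are mine, purely for illustration:

```python
import ipaddress

def validate_ext_epg_subnets(subnets):
    """Sanity-check external EPG classification subnets against the
    guidelines above: explicit (non-zero) prefixes only, no overlaps."""
    nets = [ipaddress.ip_network(s) for s in subnets]
    problems = []
    for n in nets:
        if n.prefixlen == 0:
            problems.append(f"{n}: 0.0.0.0/0 cannot be exported to the cloud; use explicit subnets")
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                problems.append(f"{a} overlaps {b}: external subnets must not overlap")
    return problems

# Individual subnets (no aggregates), as the guidelines require:
print(validate_ext_epg_subnets(["10.10.10.0/24", "10.10.11.0/24"]))  # []
# An aggregate that covers one of the individual subnets is flagged:
print(validate_ext_epg_subnets(["10.10.0.0/16", "10.10.11.0/24"]))
```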
Now that we've got the basics covered, let's move on to the practical implementation scenario. Please follow along in your own AWS/ACI lab if possible.
First, create your Tenant as shown in the diagram below.
Tie it to the AWS Tenant account. Use Untrusted or Trusted (we talked about this in the previous post). The CCO doc on external services for cloud (how to create an untrusted user) can also guide you through this.
Next, Add Your Schema
Put in the details of the Schema as shown in the diagram below.
Next, create your ACI and AWS Cloud Native objects in the common Template: App Profile, EPG, BD, VRF. Make sure to associate the EPG with the BD and the BD with the VRF, and to put the BD subnet in under the BD. Make sure to stretch your BD. Remember, we are not stretching the BD to AWS per se, but this is required to satisfy the logic later for the EPGs to realize that they are on a stretched BD.
For EPG make sure to associate the VRF with both On-Premise Properties and Cloud Properties
Remember in AWS, this will create a Security Group
Site Local for Site3 (AWS) will show the red ”i” saying information is missing
Click on the VRF and add in the AWS Region and then hit the + button on CIDR
Put in the details for your desired CIDR. Keep in mind that these will generally be Private Addresses
Click on SiteLocal for Site3 (AWS) and put in the cloud EPG Selector
Put in a key:value pair for the selector.
In my case, I did “tier = app”. Later, when we spin up the EC2 instances, remember to put the label “tier=app” on them.
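To make the selector behavior concrete, here is a tiny Python sketch of the matching logic. The tag dictionaries and the helper are hypothetical, for illustration only; the real classification happens on cAPIC/AWS, where any instance carrying the key:value tag is pulled into the EPG (i.e., into its security group):

```python
# A cloud EPG selector is effectively a tag match: any EC2 instance carrying
# the configured key:value pair lands in the EPG (security group).
def matches_selector(instance_tags, selector):
    key, value = selector
    return instance_tags.get(key) == value

selector = ("tier", "app")  # the selector configured in MSO above

ec2_app = {"Name": "web-1", "tier": "app"}  # hypothetical instance tags
ec2_db = {"Name": "db-1", "tier": "db"}

print(matches_selector(ec2_app, selector))  # True  -> classified into the stretched EPG
print(matches_selector(ec2_db, selector))   # False -> not in this EPG
```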
On Your Site Local for OnPrem Site, do your VMM or Static binding for EPG
Add in your VMM domain details (if doing VMM binding)
Click on the Shared Template and hit “DEPLOY TO SITES”
Verify what’s being pushed to where and hit Deploy
Go To AWS and do some basic checks:
VPC has been deployed, subnet has been deployed
Check the AWS subnet that got pushed from MSO
Check on the AWS Security Group pushed by MSO (EPG)
Check on the VGW that came in on AWS. Remember VGW is the virtual gateway in AWS Tenant space that is used to source the peering with the infra Tenant CSRs running in AWS
Check on the Site-to-Site VPN (the VPN from the Tenant account VGW to the infra account vCSRs). This might take up to 10 minutes to reach the “available” state.
Bring up an EC2 Instance:
Create a new key pair (cert) and download it, or use an existing one
Connect to your onPrem VM (from the vCenter console, or whatever) and do connectivity checks
Check your VPC Subnet Route Table. You will notice no IGW (Internet Gateway).
The VGW is there, of course. Without an IGW, EC2 instances cannot communicate directly with the outside world through AWS.
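To see why the missing IGW route matters, here is a Python sketch of longest-prefix-match route resolution over a simplified version of this route table (the VGW route-target ID is made up):

```python
import ipaddress

def next_hop(route_table, dest_ip):
    """Longest-prefix match, the way a VPC route table resolves a destination."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(p), nh)
               for p, nh in route_table.items()
               if dest in ipaddress.ip_network(p)]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Route table as pushed so far: the local VPC route and the VGW route, no IGW.
routes = {
    "10.0.0.0/16": "local",        # the VPC CIDR (example)
    "100.65.0.1/32": "vgw-xxxx",   # onPrem prefix via the VGW (hypothetical ID)
}

print(next_hop(routes, "10.0.1.10"))   # 'local'
print(next_hop(routes, "100.65.0.1"))  # 'vgw-xxxx'
print(next_hop(routes, "8.8.8.8"))     # None -> no IGW default route, no Internet
```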
Creating the IGW on AWS from MSO: the IGW lives only on the AWS site, so let's create a new Template in our schema, called “AWS only”, and associate it with the AWS Site only (Site 3). Associate it with the Tenant as well.
On that AWS Only Template, create the External EPG “AWSOnlyExtEPG”. Make sure the CLOUD tab is selected. Associate the VRF in Common Properties and the App Profile in Cloud Properties.
Click on ExtEPG Selector
Give it a name and a key:value of IP Address = 0.0.0.0/0.
This means all destinations will be allowed, subject to the contracts that we will use.
Let's create the contract in the main Shared Template; for lab purposes, attach a filter of “unspecified” to the contract. Then associate the contract with the user EPG as provider.
Go back to the AWS Only Template and associate the contract with the External EPG as consumer.
Check back on the AWS Console to verify that the IGW is now present in the subnet route table.
If you recall when we checked the Public IP of the AWS EC2, it was 22.214.171.124
Let’s ping and then SSH in from our local machine (going through Internet)
Let's ping our VM IP from there (126.96.36.199). It works, even though the VM is on the private ACI Fabric (traffic is going through the VXLAN tunnel from the CSR to the OnSite ACI spine). That is because the EPG is stretched (same security group).
Now let’s configure our onPrem L3Out. Make a Site Template for OnPrem (call it OnPrem only). As usual, associate the Tenant and the onPrem Site (Site 1) in my case.
On the OnPrem Only Site Template, create the L3Out. In my case I call it “OnPremL3Out”
Associate the L3Out with the VRF
Go Back to Shared Template and create:
- Contract “OnPremL3OutContract”
- Also another contract called “AWS-2-OnPremExtEPGcontract”
- Make sure the scopes of both these contracts are set to the same value: either both “tenant” or both “vrf”
- Apply the standard AllowAll filter to both contracts for the lab
For both contracts, tie them to the user EPG as provider. In my case, I'm using the same AllowAll filter (since it's a lab).
Make sure to go to your Site Local for Site1 (the onPrem site) in the Shared Template and tie the L3Out to the user BD.
Now, go back to the OnPrem Only Template and create the external EPG “OnPrem-extEPG”. Make sure to associate it with the VRF and the L3Out, and attach the OnPremL3OutContract as consumer. Then hit the + button on Subnet to add the extEPG subnet.
In our case, we will add 0.0.0.0/0 since it's a lab. The implication of this is that the physical-site user EPGs can reach any prefix advertised by the physical peering router connected to the OnSite Border Leaf.
Choose the prefixes that you want to be reachable from AWS through the OnSite L3Out. In my case I used 100.65.0.1/32 (I could have used 100.65.0.0/24 if the onPrem ACI BL had been learning that prefix from the L3Out).
Use the following Scope options:
- Shared Route Control Subnet (remember, Shared means on the way in: share what I learned from outside with another VRF inside the fabric)
- External Subnets for External EPG (this is like an access list for the destination prefix, in our case 100.65.0.1/32)
For understanding these knobs: Please read the article on https://unofficialaciguide.com called: “Understanding Scope Of Prefixes in L3 Out External EPG in ACI”
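In code terms, the “External Subnets for External EPG” flag behaves like a destination-prefix match: traffic is classified into the extEPG (and the contract applies) only if the destination falls inside one of the flagged subnets. A minimal Python illustration (the helper is hypothetical; the real classification happens in the fabric):

```python
import ipaddress

def classified_into_ext_epg(dest_ip, ext_subnets):
    """Destination-prefix match, the ACL-like semantics of
    'External Subnets for External EPG'."""
    dest = ipaddress.ip_address(dest_ip)
    return any(dest in ipaddress.ip_network(s) for s in ext_subnets)

ext_subnets = ["100.65.0.1/32"]  # the flagged subnet from this lab

print(classified_into_ext_epg("100.65.0.1", ext_subnets))  # True  -> contract applies
print(classified_into_ext_epg("100.65.0.2", ext_subnets))  # False -> not classified
```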
Now, hit the + button on SUBNET again and this time add another prefix: the prefix for the AWS VPC, in my case 10.0.0.0/16. This is needed so there is a route back to AWS from the outside world on your OnSite side (otherwise, return traffic from your onPrem side to the AWS VPC will have no way back).
Note: Please use the main VPC subnet here, not the AZ subnet. Notice, my VPC subnet is 10.0.0.0/16 and that’s what I am using in the extEPG prefix. My AZ subnet is 10.0.1.0/24, so I don’t use that.
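You can verify the containment relationship with Python's ipaddress module, using my example prefixes; the VPC CIDR covers the AZ subnet (and any other AZ subnets carved out later), so advertising it once from the extEPG is enough:

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")   # the main VPC CIDR from this lab
az_subnet = ipaddress.ip_network("10.0.1.0/24")  # my AZ subnet, carved out of the VPC CIDR

# The AZ subnet is contained in the VPC CIDR, so the VPC CIDR is the right
# prefix to put in the extEPG.
print(az_subnet.subnet_of(vpc_cidr))  # True
```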
Use the following Scope options:
- Export Route Control (remember Export means export what I learn from another L3Out to the outside world, i.e. transit routing)
Next, Click on all the Templates and DEPLOY TO SITES where necessary
Note: From my testing: if, after bringing up the peering router to the ACI border leaf, you later change the scope of the prefixes and notice that connectivity does not work, it's because a BGP update needs to be triggered (or it may take time). One quick way to force this is to go to the contract (in my case on the main shared template, OnSite+AWS), change the scope of the contract, say from vrf to tenant, and Deploy to Sites; then change it back from tenant to vrf and deploy again.
You can always go to your peering router and check that you are learning the VPC prefix there. Then go to one of the infra CSRs on the AWS side and verify in the BGP L2VPN EVPN routes that you are learning the required prefixes from the onPrem ACI fabric side.
Screenshot shown below. Do this only if you have to.
- Go to your Tenant on the onPrem APIC and finish off the config for the L3Out.
- Create your Node Profile and Interface Profile for L3Out
- Also, make sure to turn on the routing protocol and attach L3Out Domain
Note: You will see some strange-looking objects pop up in your physical APIC, as shown below. These are shadow objects created by the system for proper operation. Please do not delete or mess with those objects. An explanation of these objects is given in the appendix section of this document.
Configure your peering Router (peering to OnPrem ACI Border Leaves) and verify OSPF neighbors are up
Check the routes on the peering router. Ensure that you can see:
- AWS VPC Prefix coming in from ACI ( in my case 10.0.0.0/16)
- Your OnSite prefix coming in from ACI (in my case 188.8.131.52/24)
From AWS EC2 ping 184.108.40.206 (going directly out of AWS):
From AWS EC2 ping 100.65.0.1 (the endpoint through your OnSite L3Out):
From AWS EC2 ping 220.127.116.11 (the VM IP on the OnSite ACI Fabric):
Checking on the Infra virtual CSR on AWS
show vrf, make a note of the RD for your VRF. In this case 65003:2949121
show nve peers, make sure that the NVE (Network Virtualization Endpoint) peering is up
show nve vni
show bgp l2vpn evpn, make sure you can see the desired prefixes coming in from the onPremise ACI Fabric
Following up on the discussion of Fig 53:
Explanation of the Shadow objects:
Use Case: Stretched EPG b/w OnPrem and Cloud
- Cloud site’s EPG will be shadowed in OnPrem as InstP (with name prefix “–msc-“) and it will be under L3Out (with name prefix “–msc-l3out-“)
- OnPrem site’s EPG will be shadowed in the Cloud site as InstP (with name prefix “–msc-“) and it will be under the existing AP.
Use Case: OnPrem EPG has a contract with Cloud External EPG
- OnPrem site’s EPG will be shadowed in Cloud site as InstP along with L3Out with regular naming and no prefix will be used.
- Cloud site’s External EPG will be shadowed in OnPrem as InstP with regular name under L3out which will have name of cloud AP.
Use Case: OnPrem External EPG has a contract with Cloud EPG
- Cloud site’s EPG will be shadowed in OnPrem as InstP (with name prefix “–msc-“) and it will be under L3Out (with name prefix “–msc-l3out-“)
- OnPrem site’s External EPG will be shadowed in Cloud site as InstP (with name prefix “–msc-“) and it will be under AP (with name prefix “–msc-ap-“)
The reason for using prefix is a design choice between MSO and cAPIC.
- apPrefix = “–msc-ap-“
- instpPrefix = “–msc-“
- l3OutPrefix = “–msc-l3out-“
- prefixListPrefix = “–msc-prefix-list-“
- contractPrefix = “–msc-“
References:
- Cisco Cloud ACI on AWS White Paper
- Cisco Cloud ACI on Microsoft Azure White Paper
- Internet Service for Cisco Cloud APIC Workloads: (how to create untrusted user)
- Cisco Cloud APIC for AWS Installation Guide, Release 4.2(x)
- Shared On-Premises L3Out for Cisco Cloud APIC Workloads
- Cloud APIC Install Guide-Azure
- Cisco Cloud on Azure White Paper
- Cloud APIC Install / Upgrade & Configure Guide