Table of Contents:
- Credits
- Introduction
- Considerations
- Cloud APIC relationship to Brownfield VPC
- Implementation Steps overview
- Ensure that Contract Based Routing Is Turned on ⚠️
- Proof of Concept
a. POC Topology
a.1. Explanation of POC Topology
a.2. Releases used in this POC
b. Terraform: Spin up ACI Tenant using Terraform
c. Terraform: Spin up EC2 in AWS Tenant
d. Terraform: Spin up Brownfield Environment using Terraform
e. cAPIC: Create VRF for brownfield VPC and import brownfield VPC to cAPIC
f. Brownfield: Create TGW Attachment for brownfield VPC
g. Brownfield: Configure/Edit the Brownfield VPC Route Tables
h. NDO: Create brownfield EPG in the new VRF (where brownfield VPC was imported)
i. NDO: Create and Apply Contract between brownfield EPG and greenfield EPG
j. Brownfield: Configure proper Security Group Rules for brownfield EPG
k. Testing Reachability between Brownfield and Greenfield EC2s - Note on how to configure reachability from brownfield to a different ACI Site
- References
Credits
This guide and proof-of-concept lab is based on Huyen Duong and Minako Higuchi's presentations. Both Huyen and Minako are Cisco TMEs from the Cloud Network BU. Needless to say, they are experts in the Cloud & ACI arena, and I thank them for their constant guidance and hand-holding.
I also want to thank Lionel Hercot from the BU who has helped me tremendously with the Terraform scripts that we will use in this POC to spin up the basic ACI Tenant.
We will be using Terraform in this POC to spin up the basic ACI Tenant with EC2s and the brownfield environment with EC2s. That way the POC can be done quickly, and we can concentrate on what needs to be done for the brownfield integration without getting distracted by the run-of-the-mill tasks of creating the prerequisites. Installing Terraform and spinning up the prerequisites is really easy, so even if you are not familiar with Terraform, that should not be an issue. However, if you would rather create the prerequisites manually, you can certainly do that as well.
Introduction
If you already have resources deployed in AWS, you can now connect your brownfield VPCs to the cAPIC-created VPCs on AWS using a Transit Gateway attachment. This means connectivity from ACI Fabric created VPCs to the brownfield VPCs can go over the AWS backbone directly. Security policies can be applied to this connectivity based on requirements.
Prior to the 25.0.2 release, cAPIC supported only greenfield environments, where cAPIC created the VPCs, CIDRs, subnets, etc. Existing brownfield environments (where SGs, VPCs, etc. were already created by the user without cAPIC) could not coexist in a cAPIC-managed site.
- Starting from the 25.0.2 release, co-existence with brownfield environments on AWS is supported.
- cAPIC supports importing an existing brownfield VPC and automates network and security policy connectivity from the cAPIC-managed VPCs and SGs.
- In the 25.0.2 release, cAPIC doesn't configure or provision anything in the existing brownfield VPC. The assumption is that security and routing on the existing VPC will be owned by the customer.
- This feature support is new on AWS. Support on Azure has existed since cAPIC 5.2. Please see: Cloud ACI 5.2: Azure Brownfield Integration with ACI Fabric
You can follow this article, which will guide you step by step through the procedures involved in integrating an AWS brownfield environment with the ACI Fabric Tenant. For this proof-of-concept lab, please use an AWS subscription where you have cAPIC 25.0.2 or higher already installed. Also, to make this POC easier, the brownfield environment and the ACI Tenant will be installed in the same AWS account. This is not necessary; however, keep in mind that if the AWS accounts for the ACI Tenant and the brownfield environment were different, you would have to configure the proper privileges between the 2 AWS accounts to allow for Transit Gateway attachments.
Considerations
The following guidelines and restrictions apply specifically for unmanaged (brownfield) cloud context profiles:
- A given VPC ID of an unmanaged VPC cannot be mapped to two different unmanaged cloud context profiles on a Cisco Cloud APIC; a given VPC ID can be used to create only one unmanaged cloud context profile.
- The brownfield AWS account and the greenfield (ACI Tenant) AWS account can be the same AWS account or different AWS accounts. If they are different AWS accounts, you will need to create the proper permissions in AWS (using a CloudFormation template) so that the Transit Gateway attachment between the 2 accounts is allowed.
- The region should be the same as the one where the brownfield VPC has been created.
- The CIDR should be the same as the one configured in the brownfield VPC.
- Even though you can selectively import all or a particular set of CIDRs under the brownfield VPC, you cannot import a brownfield VPC without its primary CIDR. Importing the primary CIDR is mandatory when importing a brownfield VPC.
- A hosted VRF can't be used for importing a brownfield VPC. (A write-up on hosted VRFs will be coming soon.)
Cloud APIC relationship to Brownfield VPC
- For greenfield deployments, Cloud APIC creates the TGW and VPC and auto-configures the TGW attachment between the greenfield VPC and the greenfield TGW.
- When you register a brownfield VPC with Cisco Cloud APIC, the following configurations take place:
- An inventory pull is performed on the brownfield resource group or VPC.
- Based on the contracts between the brownfield cloud EPGs and the greenfield cloud EPGs, Cisco Cloud APIC makes the necessary configurations only in certain areas.
- Cloud APIC programs the security group rules for the greenfield VPC to allow inbound and outbound traffic to and from the brownfield VPCs. Cloud APIC does not program the security group rules for the brownfield VPC.
- Cloud APIC does not configure the TGW attachment for the brownfield VPC, nor does it program any route tables or routes for the brownfield VPC. In order for the brownfield VPC to communicate with the greenfield VPC, you must manually make the following configurations (a CLI sketch follows this list):
- From AWS Console on brownfield environment: Create the transit gateway VPC attachment with the infra transit gateway
- From AWS Console on brownfield environment: Create the route table for the brownfield VPC and subnet.
- From AWS Console on brownfield environment: Add the routes where the destinations are the greenfield CIDRs and the next hop is the AWS Infra transit gateway VPC attachment
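For reference, the same manual steps can also be done with the AWS CLI instead of the console. Below is a minimal sketch; the tgw-, vpc-, subnet-, and rtb- IDs are placeholders, and 10.140.0.0/16 is the greenfield CIDR this POC uses later. Substitute your own values:
# 1. Create the TGW attachment from the brownfield VPC to the shared ACI Infra TGW
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0 --subnet-ids subnet-0123456789abcdef0
# 2. (If needed) create a route table for the brownfield VPC and associate it with the subnet
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0
# 3. Point the greenfield CIDR at the ACI Infra TGW
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 10.140.0.0/16 --transit-gateway-id tgw-0123456789abcdef0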
Implementation Steps overview
Below are the steps required to achieve this integration. The steps are categorized by what needs to be done on the ACI Fabric Side and what needs to be done on the brownfield side.
- Brownfield: Verify brownfield VPC created on AWS user account
- ACI: Create VRF for brownfield VPC (a new tenant may be created if needed)
- ACI: Import brownfield VPC into Cloud APIC
- Brownfield: Create TGW Attachment for brownfield VPC
- Brownfield: Configure or edit brownfield VPC Route Table
- ACI: Create brownfield EPG in new VRF (where brownfield VPC imported to)
- ACI: Apply contract between brownfield EPG and greenfield EPG
- Brownfield: Configure proper Security Group rules for brownfield EPG
A graphical representation of this is shown below:
Figure 1: High Level Workflow for ACI/AWS brownfield integration
Ensure that Contract Based Routing Is Turned on
As of the cloud APIC 25.0(2) release, a new option called ‘Contract Based Routing’ (CBR) has been added to the cloud APIC configuration options.
CBR extends the routing and security split feature to internal VRFs communication. This includes route map-based route leaking between pairs of VRFs that are part of the same ACI domain.
At this time, please ensure from the cAPIC setup of 25.0(2) that CBR is enabled.
⚠️If you forget to do this, the integration will not work because the contracts will not propagate the routes.
To turn CBR on, go to cAPIC Setup and turn it on.
Figure 1a: cAPIC setup for 25.0.2
Turn on CBR if it is not turned on.
Figure 1b: Turn on CBR (Contract Based Routing)
Proof of Concept
POC Topology
We will use the topology shown below for this POC.
Figure 2: Topology for this POC
Explanation of POC Topology
AWS/ACI Fabric:
- It is assumed that you already have the ACI Fabric installed in AWS.
- ACI Tenant will be created with 1 VPC/One CIDR and 3 subnets.
- EC2 will be spun up in subnet2.
- AWS Internet Gateway will be deployed so we can ssh to the EC2.
- The ACI Tenant topology and EC2 will be spun up using Terraform.
Brownfield Environment:
- This consists of 2 VPCs, each with its own CIDR and subnet.
- The 2 VPCs will be attached by a Transit Gateway, so that they can talk to each other.
- EC2s will be spun up on each of these VPCs.
- AWS Internet Gateway will be deployed so we can ssh to the EC2s.
- The brownfield topology and EC2s will be spun up using Terraform.
📗Note:
- After the brownfield environment is spun up, you should be able to ping between the EC2s of the brownfield environment. The brownfield environment, including the VPCs, the brownfield Transit Gateway, and the EC2s, will be set up by Terraform.
- We did not need to create 2 VPCs in the brownfield environment. This POC does so to illustrate a typical scenario where brownfield environments have multiple VPCs connected via a Transit Gateway.
- Communication will not work between the brownfield EC2s and the ACI EC2s. This is what we will configure.
- In the diagram we show NDO. If you had only one ACI Fabric site (the AWS site), you could do the entire configuration through cAPIC. However, if you have a multisite setup (more than one site, be it other cloud or physical sites), you will need NDO. Our Terraform script will also run against NDO, so if you want to follow this POC using Terraform to build the ACI Tenant, you will need NDO.
Releases used in this POC
- ND: 2.1.1e
- NDO: 3.6.1e
- cAPIC for AWS: 25.0.2
Terraform: Spin up ACI Tenant using Terraform
Please use the procedure below from your local Mac or Linux desktop:
git clone https://github.com/soumukhe/AciAWSBrField-IntegrationPOC.git
This will download a repo containing 3 different directories.
In each directory there are 3 variable files:
- vars.tf
- terraform.tfvars
- override.tf
Make sure to populate your AWS access keys and secret keys in the override.tf file.
You can also change variable values in terraform.tfvars as needed. Feel free to also modify values in vars.tf if you want.
You will need to have Terraform installed. If you don't, it will only take you a couple of minutes. Just follow Terraform with Cisco Nexus Dashboard Orchestrator for building Hybrid Cloud and end to end services
Next:
cd AciAWSBrField-IntegrationPOC/
cd aci_tenant/
Verify that you have filled in all the fields in override.tf with proper values.
Figure 3: Fill in the values for override.tf
Next:
You will need to set the environment variable TF_CLI_ARGS_apply='-parallelism=1'. This is needed for Terraform scripts to run against NDO. Source the file below to set up the environment variable.
source ./FirstSourceParallelism.env
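If you prefer not to source the file, the equivalent shell command is shown below. (This is a sketch that assumes the .env file does nothing other than set this one variable.)
export TF_CLI_ARGS_apply='-parallelism=1' # Terraform appends this argument to every 'terraform apply' run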
Next, run the terraform script:
terraform init
terraform validate
terraform apply
Once this is done, your ACI Tenant should be set up based on the diagram. You can verify from NDO and cAPIC. Feel free to also look at your AWS Infra account to verify the objects created.
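If you prefer the CLI over the console for that last check, a quick way to list the VPCs and their CIDRs (assuming your AWS CLI is configured for the same account and region) is:
aws ec2 describe-vpcs --query 'Vpcs[].{ID:VpcId,CIDR:CidrBlock}' --output table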
Terraform: Spin up EC2 in AWS Tenant
In this step we will spin up an EC2 in AWS Tenant.
Steps:
cd ../awsEC2-onACI_Infra/
Make sure to edit the override.tf file and put in the required parameters:
Figure 4: Make sure override.tf for ACI/AWS Tenant EC2 parameters are populated
You also need to unset the Terraform variable that we set earlier. To do this, please do the following:
source unset_env_first.env
Next:
terraform init
terraform validate
terraform apply
Your EC2 will spin up and the output on the screen will show you the public IP address of the EC2. You can view this information later at any time by doing:
terraform refresh
terraform output
Figure 6: Output of running terraform apply.
You can ssh into the EC2 with ec2-user@public_ip
Figure 7: ssh to ACI Fabric EC2 with ec2-user
Since the Terraform code also installed an Apache web server and created an index.html file, you can curl localhost from the EC2 ssh session and see the private IP of the EC2. (You could also curl the public IP from your desktop to get this information.)
Figure 8: curl shows the private IP of the ec2
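The two checks look like this; the public IP placeholder is whatever terraform output reported:
curl http://localhost # from inside the EC2 ssh session
curl http://<EC2 public IP> # from your own desktop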
Terraform: Spin up Brownfield Environment using Terraform
Since for the POC we don't have a brownfield environment, we will spin one up using Terraform.
Figure 9: Spinning up brownfield environment using Terraform
Steps:
cd ../BField_with_TGW/
Make sure to edit the override.tf file and put in the required parameters:
Figure 10: Populating override.tf file
You also need to unset the Terraform variable that we set earlier. To do this, please do the following:
source unset_env_first.env
Next:
terraform init
terraform validate
terraform apply
Your brownfield topology will now get set up and your brownfield EC2s will spin up; the output on the screen will show you the public IP addresses of the EC2s. You can view this information later at any time by doing:
terraform refresh
terraform output
Figure 11: Output showing EC2 Public IPs for Brownfield.
You can ssh into the brownfield EC2s or curl to them to get the private IPs.
SSH to one of the EC2s and ping the private IP of the other one to verify that it works.
Figure 11a: ssh to one ec2 on brownfield and ping the private ip of the other ec2
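A minimal version of this check, with placeholders standing in for the addresses that terraform output reported:
ssh ec2-user@<brownfield1 EC2 public IP>
ping -c 3 <brownfield2 EC2 private IP> # run from inside the brownfield1 EC2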
cAPIC: Create VRF for brownfield VPC and import brownfield VPC to cAPIC
In this step (2 steps combined) we will create the VRFs for the brownfield VPCs and import the brownfield VPCs into those VRFs.
Figure 12: create VRF in ACI and import vpc from brownfield
📗Note:
Currently we need to do this from cAPIC and not NDO. This is a brand-new feature of cAPIC 25.0.2 for AWS, and it will take until the next NDO release cycle for this feature to catch up. NDO can create VRFs, but this is a special VRF for brownfield import, which NDO cannot create yet.
From NDO: go to Infrastructure Sites and browse directly to the cAPIC for AWS.
Figure 13: Browsing to cAPIC
On Cloud APIC, create two VRFs named brVrf1 and brVrf2 to represent the brownfield VPCs. Associate those VRFs with the ACI Tenant we created.
📗 Note: In this POC, the ACI Tenant and the brownfield environment are in the same AWS account.
On cAPIC, go to Application Management/VRFs and create:
- brVrf1
- brVrf2
Figure 14: Creating VRF from cAPIC
Make sure to choose the Tenant that we created earlier (with Terraform) to associate with this VRF.
Figure 15: Creating brVrf1 VRF in ACI for importing brownfield1-vpc
Repeat the steps for the 2nd VRF.
Figure 16. Creating brVrf2 VRF in ACI for importing brownfield2-vpc
Now click on the Intent button and select Unmanaged VPC.
Figure 17: Clicking on Intent button and selecting Unmanaged VPC
Choose the first VPC, brownfield1-vpc.
Figure 18: Choosing brownfield1-vpc
On the next screen, you will have to choose the VRF this VPC gets associated with. Choose the 1st VRF that you created in ACI: brVrf1
Figure 19: Associating with the ACI VRF created, brVrf1
On the next screen, choose the 'continue to build CTX' option. If you just finish here, you will then have to go to the CTX menu, build the cloud context profile there, and associate it with the VRF, which is also a perfectly fine way of doing it.
- Make sure to put in a Ctx name such as brCtx1
- Attach the TGW (the TGW should have been created in the initial cAPIC setup)
- The resources to import will auto-populate from the brownfield VPC CIDR
The completed screen should look like below:
Figure 20: Completed screen for brownfield1-vpc import
Repeat the same steps for brownfield2-vpc import. The completed screen is shown below:
Figure 21: Completed screen for brownfield2-vpc import
Brownfield: Create TGW Attachment for brownfield VPC
Figure 22: Create TGW Attachment for brownfield VPC
cAPIC has shared the Infra TGW with the Tenant AWS account for the same region. Now, from the Tenant account, we need to create the TGW attachments.
Figure 22a: Need to build TGW Attachments to shared TGW in region
Steps:
From the AWS Tenant account (where the brownfield environment and the ACI Tenant are), go to VPC/Transit Gateway Attachments/Create transit gateway attachment.
Figure 23: Go to Transit Gateway Attachments and create Attachment on AWS Tenant Account
Now Create the attachment:
Make sure to choose the Transit Gateway for the ACI Infra Tenant from the choices. You may want to look at the TGW gateway ID in the AWS Infra account to make sure it matches up.
Your final screen with attachment configuration should look like below:
Figure 24: Transit GW attachment for brownfield1-vpc to ACI Infra Tenant TGW
Watch the screen on the AWS Console to make sure that the Transit Gateway Attachment goes to the Available state.
Figure 25: Making sure the TGW attachment is in available state
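You can also watch the attachment state from the CLI; a quick sketch, assuming a placeholder attachment ID:
aws ec2 describe-transit-gateway-vpc-attachments --transit-gateway-attachment-ids tgw-attach-0123456789abcdef0 --query 'TransitGatewayVpcAttachments[0].State' # repeat until this returns "available"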
Repeat the same steps in the AWS Tenant account to create the TGW attachment from brownfield2-vpc to the ACI Fabric TGW.
Figure 24a: Transit GW attachment for brownfield2-vpc to ACI Infra Tenant TGW
Watch the screen on the AWS Console to make sure that this Transit Gateway Attachment also goes to the Available state.
Figure 25a: Making sure the TGW attachment is in available state
Brownfield: Configure/Edit the Brownfield VPC Route Tables
In this step we will modify the route tables for the VPCs in the brownfield environment.
Figure 26: Modifying the Route Tables for the brownfield VPCs
The diagram below makes it obvious why we need to do this.
Figure 27: Reason for modifying the route table
In the AWS Tenant/brownfield account, go to VPC/Route Tables and identify the route tables for the brownfield VPCs. These were created initially by Terraform and are clearly marked with tags, as you can see below:
Figure 28: Identifying the brownfield VPC route tables
Click on brownfield1-RT and click on Edit Routes:
Figure 29: Click on Edit Routes for brownfield1-RT
Add the route for the ACI VRF CIDR of 10.140.0.0/16 with a next hop of the ACI Infra Transit Gateway.
Figure 30: Adding the Route for brownfield1-RT
Repeat the steps for the other brownfield route table, brownfield2-RT.
Figure 31: Adding the Route for brownfield2-RT
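To double-check the routes from the CLI, you can dump each route table (the route table ID below is a placeholder):
aws ec2 describe-route-tables --route-table-ids rtb-0123456789abcdef0 --query 'RouteTables[0].Routes' # the 10.140.0.0/16 entry should show the ACI Infra TGW as its target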
NDO: Create brownfield EPG in the new VRF (where brownfield VPC was imported)
Figure 32: Step for creating brownfield EPG in the new VRF
In this step we will create 2 EPGs, one corresponding to each brownfield VPC. Remember, in cloud ACI, EPGs create Security Group rules.
- EPG1: brEPG1
- EPG2: brEPG2
Though we can certainly create the EPGs from cAPIC, we will instead do this from NDO. The reason is that if you also wanted the brownfield EC2s to be able to talk to other sites, you would need to use NDO. Remember that whenever you deal with multisite configurations, NDO is a requirement.
For this to work, we first need to import the VRF that we created from cAPIC into NDO.
- Go to NDO, and then to the schema and Tenant that were created by Terraform.
- Click on the Shared Template and then on Import.
- Choose your AWS site to import from.
Figure 33: NDO Import Screen
Click on VRF and import both VRFs, brVrf1 and brVrf2, that you created in cAPIC.
Figure 34: Importing the cAPIC created VRFs.
Once you are done, click on the Shared Template and then Deploy to sites.
Now you need to create the EPGs. This works just as it normally does in NDO: create an application profile, or use the existing application profile, and create the EPGs. Make sure to tie the EPGs to the correct VRFs.
Create the following EPGs:
- brEpg1 belongs to brVrf1
- brEpg2 belongs to brVrf2
Figure 35: Creating the 2 EPGs from NDO
Go to the main shared template and deploy.
From the site-local template, choose the 2 newly created EPGs and add selectors to them. Match the selectors with the subnets or CIDRs (depending on what you want) of the brownfield VPCs.
📗Note: You will need to use IP Based Selector for this to work.
Selector Values:
br-epg1-Selector: IP Equals 100.72.1.0/24
br-epg2-Selector: IP Equals 100.72.2.0/24
Figure 36: Creating Selectors for the EPGs
NDO: Create and Apply Contract between brownfield EPG and greenfield EPG
In this step we will create a contract between brownfield EPGs and ACI Tenant EPGs as needed.
Figure 37: Create Contract Step
Use the following values:
- contract: br-2-aci-epgs
- scope: Tenant
- filter: Any (already created by Terraform)
Apply the contract between the brownfield EPGs and the ACI Tenant EPG (both consumer and provider).
Contract provider/consumer applied to brEpg1, brEpg2, and epg1.
⚠️Note: We are being very lax here with security rules since this is a POC. In a real production environment, you will obviously analyze this and lock it down as needed.
From the main template, click Deploy.
Before the actual deployment, you can look at the Deployment Plan as shown below:
Figure 38: Looking at deployment plan
Brownfield: Configure proper Security Group Rules for brownfield EPG
The last step in the integration is to configure the required Security Group rules for the brownfield EPGs.
Figure 39: Configuring Security Group Rule for brownfield EPGs as required.
In the case of our POC, Terraform was used, and the security group it created is wide open (based on the Terraform plan that we ran). In production, you would want to tighten the security group as required.
The lax security group that was created by the Terraform script is shown below: (from AWS console, go to VPC/Security Groups)
Figure 40: Current Security Group for brownfield VPC brownfield1-vpc
The security group for the other VPC, brownfield2-vpc, is similar and allows all ingress and egress traffic.
For the purposes of this POC, where we are only running connectivity tests, this is good enough, and we won't make any changes to the Security Group rules.
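If you did want to tighten things, a sketch of what that could look like with the AWS CLI is below, allowing only ping and ssh in from the greenfield CIDR. The sg- ID is a placeholder for your brownfield security group; in a real cleanup you would also revoke the wide-open 0.0.0.0/0 ingress rule.
# allow ICMP (ping) from the greenfield CIDR only
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol icmp --port -1 --cidr 10.140.0.0/16
# allow ssh from the greenfield CIDR only
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 10.140.0.0/16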
Testing Reachability between Brownfield and Greenfield EC2s
To test reachability between the brownfield and greenfield (ACI Fabric) endpoints, we will do a quick test and ping from the ACI Fabric EC2 to the brownfield EC2s using their private IPs.
Step 1:
From the Terraform directory for the ACI EC2 (awsEC2-onACI_Infra), do the following:
terraform output # this will show you the Public IP of the EC2
ssh ec2-user@publicIP
Figure 41: get public IP and ssh in to ACI Fabric EC2
Step 2:
From the Terraform directory for the brownfield EC2s (BField_with_TGW), do the following:
terraform refresh # this will refresh the terraform state and reconcile differences (if needed)
terraform output # this will show you the Public IP of the EC2s
curl PublicIP # do this for both the brownfield EC2s to get their Private IP
Figure 42: Getting the Private IPs of the brownfield EC2s.
Step 3:
From the ACI Fabric EC2, ping the private IPs of the brownfield EC2s.
Figure 43: Ping test from ACI Fabric EC2 to private IPs of brownfield EC2s
Note on how to configure reachability from brownfield to a different ACI Site
As you can see, we have demonstrated Brownfield <-> Greenfield integration.
If you wanted to achieve connectivity to a different ACI site (be it another cloud site, AWS/Azure, or a physical ACI Fabric site), the same procedure can be used.
- Ensure that in the brownfield VPC route table you enter the CIDR or subnet for the greenfield (ACI Tenant) VPCs of that site and point to the ACI Fabric TGW as the next hop (see the CLI sketch after this list).
- Create a contract between the EPG that you created in the ACI Fabric (the one that maps to the unmanaged VRF; brEpg1 and brEpg2 in our case) and the EPG of the other fabric. Make sure the contract scope is correct. 📗 This has to be done using NDO. Remember, whenever you are making any intersite configuration, NDO is a must.
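The route-table part of this is the same one-liner we used earlier, just with the remote site's CIDR; a sketch with placeholder values:
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block <remote ACI site CIDR> --transit-gateway-id tgw-0123456789abcdef0 # the shared ACI Infra TGW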