Connecting ACI/AWS Cloud Fabric to External DC/Branch Site via TGW

Table of Contents:

  1. Introduction
  2. Solution Requirements
  3. Basic Concept
  4. Implementation Steps overview
  5. Ensure that Contract Based Routing Is Turned on ⚠️
  6. Proof of Concept
    a. Releases used in this POC
    b. Terraform: Initial Tenant & External Infrastructure + ec2 spinups
    c. spin up csr for physical infrastructure
    d. initial config of csr on physical infrastructure
    e. changing ec2s for password authentication
    f. Going through the implementation steps
        f.1. cAPIC: Create external Subnet Pool
        f.2. cAPIC: Create external VRF
        f.3. cAPIC: Create External Network
        f.4. cAPIC: Download configuration for external router
        f.5. External Router: Apply the configuration
               f.5.a Verify that IPSec tunnels and BGP peering are up
        f.6. NDO: Configure route leaking from external/internal (both directions)
               f.6.a NDO: Import External VRF from cAPIC
        f.7. NDO: Create External EPG in Infra Tenant
        f.8. NDO: Create Global Contract in Tenant Space and apply contract between user EPG and External EPG
  7. Testing
  8. How to put static route for EC2 for a prefix 🗝
  9. References

Introduction

From cAPIC 25.0.2 (for AWS) and higher, you can now connect to external (non-ACI) sites directly from the AWS Transit Gateway (TGW). cAPIC already installs a TGW for the ACI infrastructure, and this TGW is used for the connectivity.

The main benefit of this feature is that it gives users better network performance and reduces the burden on the cloud CSRs.

📗 Note: Cloud CSR IPSec performance will improve drastically when the Cloud CSR1KVs are replaced by Cisco Catalyst 8000V Edge Software BYOL (possibly by mid-March 2022).

Figure 1: ACI Fabric on AWS connecting to branch site from TGW

Prior to cAPIC 25.0.2, you had the option of connecting to external sites with IPv4 IPSec/BGP peering between the cloud routers and physical routers (for both AWS and Azure). That option is still valid and supported. For a complete POC of that scenario, please follow Cisco Cloud ACI Generic External Connectivity.

Needless to say, connecting between different ACI Cloud Fabrics and onPrem ACI Fabrics has always been supported. In this article we are discussing specifically ACI Cloud Fabric to non-ACI external sites.

Figure 2: Option for connecting to external Site with IPSec/BGP peering from Cloud CSR

Solution Requirements

To implement this feature you need the following:

  • Cloud APIC version 25.0.2 or higher
  • IPsec and BGP capable routers on the external Sites
    • Both IKEv1 and IKEv2 are supported (IKEv2 being the default)
  • Optional: Nexus Dashboard Orchestrator (NDO) 3.5(1) or higher. Please note that using NDO is always recommended, because if you want to configure connectivity between different ACI sites (cloud/onPrem), NDO is a requirement.

📗 Note: This is a brand-new feature and certain aspects of it cannot be configured directly from NDO at this time. However, we can configure the currently unsupported pieces directly from cAPIC and then use NDO for the rest. This will be resolved in newer releases of NDO.

Basic Concept

The diagram below shows what we are trying to achieve. We’ll use this same topology for the Proof of Concept walkthrough.

Figure 3: What we are trying to achieve.

  • On the left-hand side we have an ACI Fabric on AWS with an EC2 instance running in the ACI Tenant
  • On the right-hand side we have a physical infrastructure with a CSR router and a VM running on the LAN behind the inside interface of the CSR.
    • 📗 Note:
      • this can be any router that supports IPsec, BGP and VRFs
      • for the proof of concept, we’ll have the physical side (CSR and VM) simulated on AWS in a separate VPC. This allows a quick proof of concept, since we don’t have to configure NAT and physical firewall rules in the physical DMZ to allow communication between the physical CSRs and the AWS TGW for the IPsec tunnels.
  • The goal is to enable the EC2 on the ACI/AWS tenant to communicate with the VM on the physical infrastructure using their private IPs. This will be done through the secure IPsec tunnel between the TGW and the physical router. In essence, your AWS/ACI fabric and physical infrastructure are, for all practical purposes, part of your enterprise’s routing domain.

Implementation Steps overview

The actual implementation is quite simple. If you have a networking background, you can relate to this just by looking at the figure below. Also, note that the configuration template for the onPrem CSR can be downloaded from cAPIC, so it’s really easy to get this going.

📗 Note: We will be using a combination of NDO and cAPIC to orchestrate. As mentioned previously, you could use only cAPIC to do this; however, if you have more than one ACI site, you will need NDO.

⚠️ Though we would like the entire configuration for this setup to be done from NDO, the feature for peering to an external site from TGW has not yet come into NDO. We will do some initial configuration from cAPIC and then do the rest from NDO. This feature should come into NDO within a few releases.

Below are the steps required to achieve this integration. The steps are categorized by what needs to be done on the ACI Fabric Side and what needs to be done on the External side.

  1. cAPIC: Create external Subnet Pool
  2. cAPIC: Create external VRF
  3. cAPIC: Create External Network
  4. cAPIC: Download configuration for external router
  5. External Router: Apply the configuration
  6. NDO: Configure route leaking from external/internal (both directions)
  7. NDO: Create External EPG in Infra Tenant
  8. NDO: Create Global Contract in Tenant Space and apply contract between user EPG and External EPG

The implementation steps are depicted in the diagram below:

Figure 4: Implementation Steps

Not surprisingly, if you compare these steps to Cisco Cloud ACI Generic External Connectivity, you will see that they are extremely similar. There are just a few items that differ from an orchestration perspective.

Ensure that Contract Based Routing Is Turned on

As of Cloud APIC version 25.0(2), a new option called ‘Contract Based Routing’ (CBR) has been added to the Cloud APIC configuration options.

CBR extends the routing and security split feature to internal VRFs communication. This includes route map-based route leaking between pairs of VRFs that are part of the same ACI domain.

Before proceeding, please ensure from the cAPIC setup (25.0(2)) that CBR is enabled.

⚠️ If you forget to do this, this integration will not work because the contracts will not propagate the routes.

To turn CBR on, go to cAPIC Setup and turn it on.

Figure 5: cAPIC setup for 25.0.2

Turn on CBR if it is not turned on.

Figure 6: Turn on CBR (Contract Based Routing)

Proof Of Concept

Releases used in this POC
  • ND: 2.1.1e
  • NDO: 3.6.1e
  • cAPIC for AWS: 25.0.2f
Terraform: Initial Tenant & External Infrastructure + ec2 spinups
  • It is assumed that you already have the AWS cAPIC 25.0.2f installed and set up in the AWS Infra Account.
  • It is assumed that you already have ND and NDO set up. For this Proof of Concept, I will be using NDO 3.6.1e
  • You now need to bring up the initial ACI Tenant on AWS and bring up the EC2 instance in the ACI/AWS Tenant account. You also need to bring up the physical external branch site. As mentioned before, we will be simulating the external branch site on AWS itself. That will make it very easy and fast to go through this POC, as we will not need to worry about on-site firewall configurations to allow connectivity.

Further, we will be using Terraform to spin up the prerequisites, so we don’t waste time with mundane configurations.

Please use the below procedure from your local mac or linux desktop:

git clone https://github.com/soumukhe/ACI_AWS_ExternalConnectivityPOC.git

This will download a parent folder called ACI_AWS_ExternalConnectivityPOC and 3 different directories under the parent directory.

aci_tenant                 # Used to spin up an ACI Tenant with 1 VPC, 3 subnets with Transit Gateway connectivity to the ACI Infra Tenant, and other associated objects
awsEC2-onACI_tenant        # Please use the aci_tenant script first.  This script will spin up an EC2 with Apache installed on the ACI Tenant
phyDcOnAwsSimuilated       # This script will create a simulated external Data Center environment that you can then integrate with the ACI Tenant.  This will help you
                             get familiar with the integration without getting distracted by having to set up the basic external Data Center and ACI Tenant.
                             The simulated physical DC will have 1 VPC with 1 CIDR and 2 subnets.  It will also have an IGW so you can ssh in to the EC2 that will be spun up by the plan.

In each directory there are 3 variable files:

  • vars.tf
  • terraform.tfvars
  • override.tf

The only one you need to modify is the override.tf file. If you want to modify any of the others, you are welcome to do so.

The actual bring-up should not take more than 10 minutes. Before you start, go to the AWS console and create an AWS Key and Secret, which you will need to enter in the "override.tf" file in each directory before running the script.

πŸ— The AWS Key and Secret should be from the AWS Tenant account, not the AWS Infra Account

Please edit the override.tf file in each of the 3 directories and populate the required fields, such as your AWS access keys and secret keys and the NDO-related username/password.
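
For reference, a populated override.tf generally looks like the sketch below. The variable names here are illustrative only (assumptions on my part, not copied from the repo); match them to the names already declared in each directory’s vars.tf:

# override.tf - illustrative sketch; variable names must match those declared in vars.tf
variable "aws_access_key" {
  default = "AKIAxxxxxxxxxxxxxxxx"    # key from the AWS Tenant account, not the Infra account
}
variable "aws_secret_key" {
  default = "your-secret-key"
}
variable "ndo_username" {
  default = "admin"                   # NDO login used by the scripts
}
variable "ndo_password" {
  default = "your-ndo-password"
}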

⚠️ The Security Group rules configured by this script are wide open because this is a POC. If you want to tighten them down, please change the terraform resource definition in main.tf before executing the script.

If Terraform is not already installed, please install it as shown below; it will only take a minute.

From the linux/mac box where you will do the install:

browse to https://terraform.io/downloads,  go to the bottom and right-click and copy the download link for the terraform binary for your platform.
on your mac or linux box,  do a curl -O <the copied link>
unzip the file that you just curled in.  e.g.  unzip terraform_1.1.6_linux_386.zip
sudo mv terraform /usr/local/bin
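
You can confirm the install with:

terraform version     # should print something like Terraform v1.1.6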

To bring up the infrastructure do the following:

1) bring up the ACI Tenant

cd  aci_tenant 
source  FirstSourceParallelism.env 
terraform init
terraform apply

2) spin up ec2 in the ACI Tenant

cd  awsEC2-onACI_tenant
source  unset_env_first.env
terraform init
terraform apply

3) spin up the external branch infrastructure and ec2 in it

cd  phyDcOnAwsSimuilated 
source  unset_env_first.env
terraform init
terraform apply

The script uploads your ssh public keys to the EC2s, so you can ssh into them with:
ssh ec2-user@public_ip
The public IPs will be shown on the screen after the script finishes running. If you want to look at them later, just do:
terraform output from the directory where you ran the script.
You can also see the public IPs from the AWS console.

spin up csr for physical infrastructure
  • Go to AWS console/Marketplace and click on discover products
    Figure 7: AWS Marketplace/discover products

  • type in csr in the search bar and click on the CSR BYOL product
    Figure 8: Choose CSR BYOL

  • Click on Continue to subscribe and then on Continue to Configuration
    Figure 9: Continue to Subscribe and Continue to Configuration

  • Next, click on Continue to Launch
    Figure 10: Continue to Launch

  • Make sure to choose "Launch through EC2" and then click on the Launch button
    Figure 11: Launching through EC2

  • Keep the default of t2.medium and click on Next: Configure Instance Details
    Figure 12: Keep t2.medium and click Next: Configure Instance Details

  • If you used my terraform code to spin up the physical infra, then match the choices below. If you did your own, please use similar logic. Make sure to bring up the 2nd interface on the CSR as well and put it in the other subnet as shown below
    Figure 13: CSR values to be used.

  • Hit the Next: Add Storage button, on the following page Next: Add Tags, followed on the next page by Next: Configure Security Group
    Figure 14: Choose as shown here

  • If you used my Terraform script to create the AWS physical infra, then choose the existing security group allow_all-sgroup as shown below
    Figure 15: Use the allow_all security group

  • Next click Review and Launch followed by the Launch button
    Figure 16: Review and Launch, followed by Launch

  • Here you will be asked to choose your ssh key. Choose the one that was already created by the Terraform Script, Acknowledge and click Launch Instance
    Figure 17: Choose your SSH public key

  • Your AWS CSR should spin up now. Go to EC2 Instances, click on Networking, and make a note of the ENI for the primary interface (the one that you associated with the 100.127.2.x subnet)
    Figure 18: Make a note of the ENI

  • Next, in the AWS Console search bar, search for Elastic and choose Elastic IP
    Figure 19: Choose Elastic IP

  • Choose an available Elastic IP and click on Associate. If you don’t have any, first click on Allocate Elastic IP address and create one
    Figure 20: Associate Elastic IP

  • Now Choose the Network Interface (the ENI you made a note of earlier) and associate that with the CSR Gig 1
    Figure 21: Finish Associating Elastic IP with CSR Gig 1

  • It is a good idea to name the instances, so it’s easy to understand visually
    Figure 22: Naming the instances

You should now be able to ssh to the CSR. Please use the username ec2-user: ssh ec2-user@ip. Get the IP from the instance details. Remember, the ssh key used was from the ~/.ssh/ directory of your linux/mac box, so use the same box where you ran the terraform script to ssh in.

Figure 23: ssh to ec2

initial config of csr on physical infrastructure

Configure the CSR with another user, such as admin, that does not require ssh keys, so you can log in from anywhere:

username admin priv 15 secret 0 SomePassword!
aaa new-model
aaa authentication login default local
aaa authorization exec default local 

line con 0
 stopbits 1

line vty 0 4
 transport input ssh

line vty 5 20
 transport input ssh
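
Before closing your current session, it is worth verifying that the new admin user works by opening a second ssh session from your desktop:

ssh admin@<csr-public-ip>     # should now prompt for the admin password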

The ec2-user username is configured by default with ssh keys. Leave that configuration alone:

username ec2-user privilege 15
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip ssh pubkey-chain
  username ec2-user
   key-hash ssh-rsa 82181A7A92D40D721B919F562AA7259C ec2-user

Gig1, the public-facing interface, will by default have a config like the one below. Similarly, Gig2, the inside interface, will have the default config shown after it:

!
interface GigabitEthernet1
 ip address dhcp
 ip nat outside
 negotiation auto
 no mop enabled
 no mop sysid
end
!
interface GigabitEthernet2
 ip address dhcp
 negotiation auto
 no mop enabled
 no mop sysid
 shutdown
end
!
interface VirtualPortGroup0     ! you don't need this, so you can just leave it alone
 vrf forwarding GS
 ip address 192.168.35.101 255.255.255.0
 ip nat inside
 no mop enabled
 no mop sysid
end

Modify the config (as shown below) to make it plain vanilla. Use the AWS console to find out the private IPs and match them up.
⚠️ Please pay attention to the order: configure G2 first, and change the G1 interface config only after sshing in via the EC2 (see the note on the G1 interface below).
Figure 24: Looking at the private IPs from AWS Console / EC2 instance for CSR to match up IPs

vrf definition phyint
 rd 102:102
 route-target export 65444:1
 route-target import 65444:1
 !
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet1    # don't change G1 first, otherwise you will get cut off. Remember you used this interface to ssh in. Do the G2 config first, then ssh to the Phy Infra EC2 and from there ssh to the router using the G2 IP. Use the admin password you created, since that EC2 does not have the keys for the CSR's ec2-user
 ip address 100.127.2.127 255.255.255.0
 negotiation auto
 no mop enabled
 no mop sysid
end
!
interface GigabitEthernet2
 vrf forwarding phyint
 ip address 100.127.1.191 255.255.255.0
 negotiation auto
 no mop enabled
 no mop sysid
end

Figure 25: You ssh’d in to the CSR via G1, so don’t change that first. Configure G2 and then ssh in from the EC2 using the admin credentials you configured on the CSR.
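
Once both interfaces are configured, a quick sanity check from the CSR helps. These are standard IOS-XE show commands; the 100.127.1.30 EC2 IP below is from my setup:

show ip interface brief       # G1 and G2 should be up/up with the IPs above
show vrf                      # Gi2 should be listed under vrf phyint
ping vrf phyint 100.127.1.30  # the phy infra EC2 should answer over the connected subnet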

changing ec2s for password authentication

It is also a good idea to enable password-based (keyless) ssh to the EC2 instances for convenience. That way, if you want to ssh from the CSR to the EC2 instance, you can do so even though the ssh keys are not present on the CSR. To ssh from the CSR to the EC2 you would then do ssh -vrf phyint -l ec2-user 100.127.1.30 (example from my setup below)
Figure 26: ssh from csr to EC2

To enable password-based authentication on the EC2s, do the following:

For the physical infra ec2,  ssh into it using the public IP.  ssh ec2-user@ip (you can look up the public IP from the AWS console or do a terraform refresh followed by terraform output to find it)

 sudo passwd ec2-user        # put in a password  
 sudo vim /etc/ssh/sshd_config
 PasswordAuthentication yes    # find the line and change to yes

 sudo systemctl restart sshd
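
If the restart complains, sshd's test mode can pinpoint syntax errors in sshd_config (it prints nothing when the config is valid):

 sudo sshd -t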

Your basic configs have all been done now. It’s time to start the integration of the cloud ACI/AWS fabric with the external branch network.


Going through the implementation steps
cAPIC: Create external Subnet Pool

Figure 27: cAPIC: Create external Subnet Pool

GoTo:

  • cAPIC Setup
  • Region Management
  • Next
  • Configure IPSec Tunnel Subnet Pool. This pool must be in the 169.254.0.0/16 range.

📗 Note: The link-local prefix 169.254.0.0/16 (as defined in RFC 3927) is an AWS requirement for IPSec from TGW. On a separate note, GCP also uses this range for purposes like IPSec tunnels from cloud-native routers.

Figure 27: Configuring IPSec Tunnel Subnet Pool from cAPIC

cAPIC: Create external VRF

Figure 28: cAPIC: Create external VRF

GoTo:
VRF/Actions/Create VRF
Create the VRF. I named mine tgwExtVrf; choose the Infra Tenant
Figure 29: Creating External VRF

cAPIC: Create External Network

Figure 30: cAPIC: Create External Network

GoTo:

  • External Networks/Actions/Create External Network
  • put in a name for the network (I used tgwExtNet), choose the external VRF, select TGW, and select the TGW name (as defined in the initial cAPIC setup)
  • Click Add VPN Network
  • put in a name (I put in toBranch1), click on Add IPSec Peer
  • put in the public IP of the onPrem CSR, in my case 100.26.6.218 (from the simulated physical CSR on AWS). Choose IKEv2, put in the BGP ASN of the physical site (in my case it is 65444), and choose the subnet pool that you created earlier
  • click add and finish off the configuration
    Figure 31: Creating the External Network
cAPIC: Download configuration for external router

Figure 32: Download the Configuration

GoTo:

  • External Networks/Action/Download External Device Configuration Files
  • Check the check boxes for all the devices. I have only 1 external router in this POC, so only 1 device
  • Download the file. You will get a zipped file.

Figure 33: Downloading the external device (router) configuration

External Router: Apply the configuration

Figure 34: External Router: Apply the configuration

Unzip the file downloaded in the previous step.
You will get text files in there; we have 1 file, because we have 1 router. Open it up in a text editor (I used Atom). In our case this one external device will peer IPSec (and BGP inside it) with the TGW. If you study the file, you will see that there are 2 sections to the configuration, one for each tunnel. I’ve marked them so you can see for yourself.

Contents of the file I got

! -----------------------------------------
! Device: 100.26.6.218
! version: v1.0.1
! -----------------------------------------

! ****** Section 1 for Tunnel1 to TGW******
! The following file contains configuration recommendation to connect an external networking device with the cloud ACI Fabric
! The configurations here are provided for an IOS-XE based device. The user is expected to understand the configs and make any necessary amends before using them
! on the external device. Cisco does not assume any responsibility for the correctness of the config.

! Tunnel to 100.26.6.218 1.100 for hcextnwTunnIf.acct-[infra]/region-[us-east-1]/hubCtx-[1]-id-[0]/ext-[tgwExtNet_us-east-1]/vpn-[toBranch1]/rtr-default-peer-100.26.6.218/src-0-dest-[100.26.6.218]
! USER-DEFINED: please define rd: RD
! USER-DEFINED: please provide preshared-key: 1822256987766307149915818417586587148539
! USER-DEFINED: please define router-id: ROUTER-ID
! USER-DEFINED: please define gig-number: GIG-NUMBER
! USER-DEFINED: please define gig-gateway: GIG-GATEWAY
! ikev: ikev2
! vrf-name: tgwExtVrf
! user name: ifc
! tunnel counter: 1
! IPV4 address: IPv4-ADDR
! tunnel interface destination: 100.26.6.218
! tunne id: 100
! BGP peer address: 169.254.11.2
! BGP peer neighbor address: 169.254.11.1
! BGP peer ASN: 65444
! hcloudHubCtx ASN: 65428

vrf definition tgwExtVrf
    rd RD:1
    address-family ipv4
    exit-address-family
exit

interface Loopback0
    vrf forwarding tgwExtVrf
    ip address 41.41.41.41 255.255.255.255
exit

crypto ikev2 proposal ikev2-1
    encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
    integrity sha512 sha384 sha256 sha1
    group 24 21 20 19 16 15 14 2
exit

crypto ikev2 policy ikev2-1
    proposal ikev2-1
exit

crypto ikev2 keyring keyring-ifc-1
    peer peer-ikev2-keyring
        address IPv4-ADDR
        pre-shared-key 1822256987766307149915818417586587148539
    exit
exit

crypto ikev2 profile ikev-profile-ifc-1
    match address local interface GIG-NUMBER
    match identity remote address IPv4-ADDR 255.255.255.255
    identity local address 100.26.6.218
    authentication remote pre-share
    authentication local pre-share
    keyring local keyring-ifc-1
    lifetime 3600
    dpd 10 5 periodic
exit

crypto ipsec transform-set ikev-transport-ifc-1 esp-gcm 256
    mode tunnel
exit

crypto ipsec profile ikev-profile-ifc-1
    set transform-set ikev-transport-ifc-1
    set pfs group14
    set ikev2-profile ikev-profile-ifc-1
exit

interface Tunnel100
    vrf forwarding tgwExtVrf
    ip address 169.254.11.2 255.255.255.252
    ip mtu 1400
    ip tcp adjust-mss 1400
    tunnel source GIG-NUMBER
    tunnel mode ipsec ipv4
    tunnel destination IPv4-ADDR
    tunnel protection ipsec profile ikev-profile-ifc-1
exit

ip route IPv4-ADDR 255.255.255.255 GIG-NUMBER GIG-GATEWAY

router bgp 65444
    bgp router-id ROUTER-ID
    bgp log-neighbor-changes

    address-family ipv4 vrf tgwExtVrf
        network 41.41.41.41 mask 255.255.255.255
        neighbor 169.254.11.1 remote-as 65428
        neighbor 169.254.11.1 ebgp-multihop 255
        neighbor 169.254.11.1 activate
    exit-address-family
exit

! ****** Section 2 for Tunnel2 to TGW******
! The following file contains configuration recommendation to connect an external networking device with the cloud ACI Fabric
! The configurations here are provided for an IOS-XE based device. The user is expected to understand the configs and make any necessary amends before using them
! on the external device. Cisco does not assume any responsibility for the correctness of the config.

! Tunnel to 100.26.6.218 2.200 for hcextnwTunnIf.acct-[infra]/region-[us-east-1]/hubCtx-[1]-id-[0]/ext-[tgwExtNet_us-east-1]/vpn-[toBranch1]/rtr-default-peer-100.26.6.218/src-1-dest-[100.26.6.218]
! USER-DEFINED: please define rd: RD
! USER-DEFINED: please provide preshared-key: 1371563253226002843416776641410824850772
! USER-DEFINED: please define router-id: ROUTER-ID
! USER-DEFINED: please define gig-number: GIG-NUMBER
! USER-DEFINED: please define gig-gateway: GIG-GATEWAY
! ikev: ikev2
! vrf-name: tgwExtVrf
! user name: ifc
! tunnel counter: 2
! IPV4 address: IPv4-ADDR
! tunnel interface destination: 100.26.6.218
! tunne id: 200
! BGP peer address: 169.254.11.6
! BGP peer neighbor address: 169.254.11.5
! BGP peer ASN: 65444
! hcloudHubCtx ASN: 65428

vrf definition tgwExtVrf
    rd RD:1
    address-family ipv4
    exit-address-family
exit

interface Loopback0
    vrf forwarding tgwExtVrf
    ip address 41.41.41.41 255.255.255.255
exit

crypto ikev2 proposal ikev2-1
    encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
    integrity sha512 sha384 sha256 sha1
    group 24 21 20 19 16 15 14 2
exit

crypto ikev2 policy ikev2-1
    proposal ikev2-1
exit

crypto ikev2 keyring keyring-ifc-2
    peer peer-ikev2-keyring
        address IPv4-ADDR
        pre-shared-key 1371563253226002843416776641410824850772
    exit
exit

crypto ikev2 profile ikev-profile-ifc-2
    match address local interface GIG-NUMBER
    match identity remote address IPv4-ADDR 255.255.255.255
    identity local address 100.26.6.218
    authentication remote pre-share
    authentication local pre-share
    keyring local keyring-ifc-2
    lifetime 3600
    dpd 10 5 periodic
exit

crypto ipsec transform-set ikev-transport-ifc-2 esp-gcm 256
    mode tunnel
exit

crypto ipsec profile ikev-profile-ifc-2
    set transform-set ikev-transport-ifc-2
    set pfs group14
    set ikev2-profile ikev-profile-ifc-2
exit

interface Tunnel200
    vrf forwarding tgwExtVrf
    ip address 169.254.11.6 255.255.255.252
    ip mtu 1400
    ip tcp adjust-mss 1400
    tunnel source GIG-NUMBER
    tunnel mode ipsec ipv4
    tunnel destination IPv4-ADDR
    tunnel protection ipsec profile ikev-profile-ifc-2
exit

ip route IPv4-ADDR 255.255.255.255 GIG-NUMBER GIG-GATEWAY

router bgp 65444
    bgp router-id ROUTER-ID
    bgp log-neighbor-changes

    address-family ipv4 vrf tgwExtVrf
        network 41.41.41.41 mask 255.255.255.255
        neighbor 169.254.11.5 remote-as 65428
        neighbor 169.254.11.5 ebgp-multihop 255
        neighbor 169.254.11.5 activate
    exit-address-family
exit

At the top of each section there are 5 placeholder values that are referenced throughout the rest of that section. You will need to fill these values in at the appropriate places in the configuration file content.

Figure 34a: Values you need to get before configuring onPrem CSR

📗 Note: The easiest way I’ve found to do this is to copy each section to a different text file, then get the values and do a find & replace in each of the 2 text files.
⚠️ Make sure to do a case-sensitive find, otherwise the replace will really mess up the config

Values you need to fill in:

  • RD : this gets appended to the RD value of the VRF. I used 1, so RD:1 became 1:1
  • ROUTER-ID : for this I looked at the config and used the loopback0 value of 41.41.41.41
  • GIG-NUMBER : This is the Gig Interface of physical CSR that’s facing the internet, in my case it is GigabitEthernet1
  • GIG-GATEWAY : This is the gateway IP for the external-facing interface. In my case, this is the simulated physical CSR on AWS and my private IP for that interface is 100.127.2.127/24, so my gateway is 100.127.2.1
  • IPv4-ADDR : This is the public IP that AWS assigned (on the AWS side) for the VPN Gateway’s tunnel interface.

How to find the AWS-side tunnel interface public IPs (so you can get the value of IPv4-ADDR above):
In the AWS Console, go to VPC / Virtual Private Network / Site-to-Site VPN Connections and look at the Outside IP Address. For each of the files (sections), you need to use one of these IPs.

📗 Note: Both tunnels show as down right now, because the physical CSR has not been configured yet.

Figure 35: Finding the Public IPs of the Tunnel Interfaces on AWS
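
If you prefer the command line to a text editor, the same case-sensitive find & replace can be scripted. Below is a minimal sketch with GNU sed, assuming you saved the two sections as tunnel1.txt and tunnel2.txt (the file names are my own choice) and plugging in my values from above:

# GNU sed syntax; on macOS use gsed or adapt the \b word-boundary escapes
sed -i 's/\bRD\b/1/g; s/ROUTER-ID/41.41.41.41/g; s/GIG-NUMBER/GigabitEthernet1/g; s/GIG-GATEWAY/100.127.2.1/g; s/IPv4-ADDR/3.88.123.224/g' tunnel1.txt
sed -i 's/\bRD\b/1/g; s/ROUTER-ID/41.41.41.41/g; s/GIG-NUMBER/GigabitEthernet1/g; s/GIG-GATEWAY/100.127.2.1/g; s/IPv4-ADDR/52.87.97.32/g' tunnel2.txt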

Finally, after doing a case-sensitive find & replace, my 2 files look like this:

Modified Config for Tunnel 1:

! ----------------Tunnel-1----------------

! The following file contains configuration recommendation to connect an external networking device with the cloud ACI Fabric
! The configurations here are provided for an IOS-XE based device. The user is expected to understand the configs and make any necessary amends before using them
! on the external device. Cisco does not assume any responsibility for the correctness of the config.

! Tunnel to 100.26.6.218 1.100 for hcextnwTunnIf.acct-[infra]/region-[us-east-1]/hubCtx-[1]-id-[0]/ext-[tgwExtNet_us-east-1]/vpn-[toBranch1]/rtr-default-peer-100.26.6.218/src-0-dest-[100.26.6.218]
! USER-DEFINED: please define rd: 1
! USER-DEFINED: please provide preshared-key: 1822256987766307149915818417586587148539
! USER-DEFINED: please define router-id: 41.41.41.41
! USER-DEFINED: please define gig-number: GigabitEthernet1
! USER-DEFINED: please define gig-gateway: 100.127.2.1
! ikev: ikev2
! vrf-name: tgwExtVrf
! user name: ifc
! tunnel counter: 1
! IPV4 address: 3.88.123.224
! tunnel interface destination: 100.26.6.218
! tunne id: 100
! BGP peer address: 169.254.11.2
! BGP peer neighbor address: 169.254.11.1
! BGP peer ASN: 65444
! hcloudHubCtx ASN: 65428

vrf definition tgwExtVrf
    rd 1:1
    address-family ipv4
    exit-address-family
exit

interface Loopback0
    vrf forwarding tgwExtVrf
    ip address 41.41.41.41 255.255.255.255
exit

crypto ikev2 proposal ikev2-1
    encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
    integrity sha512 sha384 sha256 sha1
    group 24 21 20 19 16 15 14 2
exit

crypto ikev2 policy ikev2-1
    proposal ikev2-1
exit

crypto ikev2 keyring keyring-ifc-1
    peer peer-ikev2-keyring
        address 3.88.123.224
        pre-shared-key 1822256987766307149915818417586587148539
    exit
exit

crypto ikev2 profile ikev-profile-ifc-1
    match address local interface GigabitEthernet1
    match identity remote address 3.88.123.224 255.255.255.255
    identity local address 100.26.6.218
    authentication remote pre-share
    authentication local pre-share
    keyring local keyring-ifc-1
    lifetime 3600
    dpd 10 5 periodic
exit

crypto ipsec transform-set ikev-transport-ifc-1 esp-gcm 256
    mode tunnel
exit

crypto ipsec profile ikev-profile-ifc-1
    set transform-set ikev-transport-ifc-1
    set pfs group14
    set ikev2-profile ikev-profile-ifc-1
exit

interface Tunnel100
    vrf forwarding tgwExtVrf
    ip address 169.254.11.2 255.255.255.252
    ip mtu 1400
    ip tcp adjust-mss 1400
    tunnel source GigabitEthernet1
    tunnel mode ipsec ipv4
    tunnel destination 3.88.123.224
    tunnel protection ipsec profile ikev-profile-ifc-1
exit

ip route 3.88.123.224 255.255.255.255 GigabitEthernet1 100.127.2.1

router bgp 65444
    bgp router-id 41.41.41.41
    bgp log-neighbor-changes

    address-family ipv4 vrf tgwExtVrf
        network 41.41.41.41 mask 255.255.255.255
        neighbor 169.254.11.1 remote-as 65428
        neighbor 169.254.11.1 ebgp-multihop 255
        neighbor 169.254.11.1 activate
    exit-address-family
exit

Modified Config for Tunnel 2:

! ----------------Tunnel-200  to TGW ----------------

! The following file contains configuration recommendation to connect an external networking device with the cloud ACI Fabric
! The configurations here are provided for an IOS-XE based device. The user is expected to understand the configs and make any necessary amends before using them
! on the external device. Cisco does not assume any responsibility for the correctness of the config.

! Tunnel to 100.26.6.218 2.200 for hcextnwTunnIf.acct-[infra]/region-[us-east-1]/hubCtx-[1]-id-[0]/ext-[tgwExtNet_us-east-1]/vpn-[toBranch1]/rtr-default-peer-100.26.6.218/src-1-dest-[100.26.6.218]
! USER-DEFINED: please define rd: 1
! USER-DEFINED: please provide preshared-key: 1371563253226002843416776641410824850772
! USER-DEFINED: please define router-id: 41.41.41.41
! USER-DEFINED: please define gig-number: GigabitEthernet1
! USER-DEFINED: please define gig-gateway: 100.127.2.1
! ikev: ikev2
! vrf-name: tgwExtVrf
! user name: ifc
! tunnel counter: 2
! IPV4 address: 52.87.97.32
! tunnel interface destination: 100.26.6.218
! tunne id: 200
! BGP peer address: 169.254.11.6
! BGP peer neighbor address: 169.254.11.5
! BGP peer ASN: 65444
! hcloudHubCtx ASN: 65428

vrf definition tgwExtVrf
   rd 1:1
   address-family ipv4
   exit-address-family
exit

interface Loopback0
   vrf forwarding tgwExtVrf
   ip address 41.41.41.41 255.255.255.255
exit

crypto ikev2 proposal ikev2-1
   encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
   integrity sha512 sha384 sha256 sha1
   group 24 21 20 19 16 15 14 2
exit

crypto ikev2 policy ikev2-1
   proposal ikev2-1
exit

crypto ikev2 keyring keyring-ifc-2
   peer peer-ikev2-keyring
       address 52.87.97.32
       pre-shared-key 1371563253226002843416776641410824850772
   exit
exit

crypto ikev2 profile ikev-profile-ifc-2
   match address local interface GigabitEthernet1
   match identity remote address 52.87.97.32 255.255.255.255
   identity local address 100.26.6.218
   authentication remote pre-share
   authentication local pre-share
   keyring local keyring-ifc-2
   lifetime 3600
   dpd 10 5 periodic
exit

crypto ipsec transform-set ikev-transport-ifc-2 esp-gcm 256
   mode tunnel
exit

crypto ipsec profile ikev-profile-ifc-2
   set transform-set ikev-transport-ifc-2
   set pfs group14
   set ikev2-profile ikev-profile-ifc-2
exit

interface Tunnel200
   vrf forwarding tgwExtVrf
   ip address 169.254.11.6 255.255.255.252
   ip mtu 1400
   ip tcp adjust-mss 1400
   tunnel source GigabitEthernet1
   tunnel mode ipsec ipv4
   tunnel destination 52.87.97.32
   tunnel protection ipsec profile ikev-profile-ifc-2
exit

ip route 52.87.97.32 255.255.255.255 GigabitEthernet1 100.127.2.1

router bgp 65444
   bgp router-id 41.41.41.41
   bgp log-neighbor-changes

   address-family ipv4 vrf tgwExtVrf
       network 41.41.41.41 mask 255.255.255.255
       neighbor 169.254.11.5 remote-as 65428
       neighbor 169.254.11.5 ebgp-multihop 255
       neighbor 169.254.11.5 activate
   exit-address-family
exit

Now, just copy and paste those config files on the onPrem CSR.
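
Paste the contents in configuration mode and save when done:

conf t
 ! paste the edited contents of each file here
end
copy running-config startup-config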

Also, since we placed the onPrem CSR inside interface in a VRF, "vrf phyint" (for physical internal), we need to do route leaking from that VRF to the external VRF and vice versa. For this you will have to make slight modifications to your onPrem CSR config. I am showing the full BGP configuration here, along with the route-target export and import statements on the VRFs that are responsible for the route leaking between the VRFs.


!
vrf definition GS
 rd 100:100
 !
 address-family ipv4
 exit-address-family
!
vrf definition phyint
 rd 102:102
 route-target export 65444:1
 route-target import 65444:1
 !
 address-family ipv4
 exit-address-family
!         
vrf definition tgwExtVrf
 rd 1:1   
 route-target export 65444:1
 route-target import 65444:1
 !
 address-family ipv4
 exit-address-family
!
router bgp 65444
 bgp router-id 41.41.41.41
 bgp log-neighbor-changes
 !
 address-family ipv4 vrf phyint
  network 100.127.1.0 mask 255.255.255.0
 exit-address-family
 !
 address-family ipv4 vrf tgwExtVrf
  network 41.41.41.41 mask 255.255.255.255
  neighbor 169.254.11.1 remote-as 65428
  neighbor 169.254.11.1 ebgp-multihop 255
  neighbor 169.254.11.1 activate
  neighbor 169.254.11.5 remote-as 65428
  neighbor 169.254.11.5 ebgp-multihop 255
  neighbor 169.254.11.5 activate
 exit-address-family
Verify that IPSec tunnels and BGP peering are up

Check on the router and also on the AWS Console:

show crypto session
show ip bgp vpnv4 vrf tgwExtVrf sum
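
You can also confirm route leaking between the two VRFs on the CSR. Once the cloud side is fully configured, prefixes learned in tgwExtVrf should also appear in phyint, and 100.127.1.0/24 should be exported into tgwExtVrf:

show ip route vrf tgwExtVrf
show ip route vrf phyint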

Figure 35a: Checking IPSec tunnels and BGP peering

Figure 35b: Checking to make sure IPSec tunnels are up from AWS Console

NDO: Configure route leaking from external/internal (both directions)

Figure 36: NDO: Configure route leaking from external/internal (both directions)

📗 Note: Before we can do this, we need to import the external VRF into NDO

NDO: Import External VRF from cAPIC
  • From NDO, create a schema
  • Create a template, Associate template with Infra Tenant
  • Associate template with AWS Site

Create a Schema as shown below.
Figure 37: Creating a schema

Select a Template type of ACI Multi-cloud
Figure 38: Choosing the template type

Associate the Template with the Infra Tenant
Figure 39: Associating Template with Infra Tenant

Associate the template with the Site (I only have 1 AWS site in this setup)
Figure 40: Associate template with AWS Site

Now, import the VRF into the template as shown below:
Figure 41: Importing the external VRF to NDO

Now, we can configure route leaking.
Please follow the steps to leak routes as shown in the articles below:
Leak External Routes to Cloud user Tenant
Leak routes from Cloud User VRF to external VRF

NDO: Create External EPG in Infra Tenant

Figure 42: NDO: Create External EPG in Infra Tenant

The steps to create an External EPG in the Infra Tenant were shown at:
Create extEPG to represent external network

NDO: Create Global Contract in Tenant Space and apply contract between user EPG and External EPG

Figure 43: NDO: Create Global Contract in Tenant Space and apply contract between user EPG and External EPG

For implementing this, please follow the article:
Apply Contract between extEPG and Cloud EPG

Testing

For testing, let’s ping from the physical infra CSR inside interface to the ACI cloud EC2 (private IPs).

Figure 44: Successful ping from the physical infra CSR inside interface to the ACI cloud EC2 (private IPs)
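
The ping is sourced from the inside interface so that it originates in the phyint VRF. The command takes the form below, where 10.140.3.x stands for whatever private IP your cloud EC2 received:

ping vrf phyint 10.140.3.x source GigabitEthernet2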

For pinging from the onPrem VM to the cloud EC2, we first need to add a static route in AWS. On the EC2 itself, simply putting in an OS-level static route like sudo route add -net 10.140.3.0/24 gw 100.127.1.191 dev eth0 will not work!

How to put static route for EC2 for a prefix 🗝

Please see: How to put static route for EC2 for a prefix
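
In short, the route for the cloud prefix has to go into the VPC route table, pointing at the CSR’s inside ENI, and source/dest check has to be disabled on that ENI. A minimal AWS CLI sketch of the idea (the eni-/rtb- IDs below are placeholders; the linked article walks through the actual procedure):

aws ec2 modify-network-interface-attribute --network-interface-id eni-0123456789abcdef0 --no-source-dest-check
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 10.140.3.0/24 --network-interface-id eni-0123456789abcdef0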

Testing the ping from the onPrem VM to the cloud EC2, after following the procedure above to add the static route:

Figure 45: Successful ping from onPrem VM to cloud EC2

References

Cloud ACI Documentation


