Cisco Cloud ACI Generic External Connectivity

Table of Contents:

  1. Credits
  2. Introduction
  3. Solution Requirements
  4. Basic Concept
  5. Implementation Steps overview
  6. Proof of Concept
    a. Initial Tenant & External Infrastructure bringup
    b. Spin up CSR for physical infrastructure
    c. Initial config of CSR on physical infrastructure
    d. Changing EC2s for password authentication
    e. Workflow For Configuring from NDO
        e.1. Create ext VRF on Cloud CSRs
        e.2. Create external devices
        e.3. Create Site Local Subnet Pool
        e.4. Deploy and Download ext device config
        e.5. Apply config to ext device
        e.6. Leak External Routes to Cloud user Tenant
        e.7. Leak routes from Cloud User VRF to external VRF
        e.8. Create extEPG to represent external network
        e.9. Apply Contract between extEPG and Cloud EPG
  7. Testing
  8. How to put a static route on an EC2 for a prefix
  9. References

Credits

I want to sincerely thank Huyen Duong, TME and Cloud Expert from Cisco Systems, Singapore, for teaching me and guiding me through this.

Introduction

After cAPIC release 5.2(1), the release numbering changed to 25.0(1). As of this writing, the latest release available is 25.0(2e). Physical APICs will continue with 5.x and increasing release numbers.

  • Before release 25.x, you could do MultiSite configurations between Cloud ACI Fabrics (on AWS/Azure) and Physical ACI Fabrics using NDO (Nexus Dashboard Orchestrator).

  • In Cloud ACI 5.2 for Azure, you could also integrate Azure brownfield environments into the Cloud ACI Fabric. You can read more about that and follow through a Proof Of Concept at: Cloud ACI 5.2: Azure Brownfield Integration with ACI Fabric

  • It was also possible to connect the Azure Cloud ACI Fabric to on-premises ACI using an Azure Express Route Gateway

  • From release 25.x of cAPIC you can connect any external infrastructure (non-ACI Fabric) to AWS / Azure / GCP based ACI Cloud Fabrics. Keep in mind we are not talking about connecting the Cloud ACI Fabric through the provider’s Internet Gateway connection (like IGW); that was always possible. What we are talking about here is connectivity through secure IPsec tunnels from the Cloud ACI Fabric to the external non-ACI fabrics, so that you can treat the ACI Fabrics and the non-ACI fabrics as one internal routing domain (i.e. the private IP endpoints of both the ACI Fabric and the non-ACI fabrics have connectivity through the secure IPsec tunnel)

  • cAPIC 25.x also introduces a lot of new features including GCP support. For details please see: Cloud ACI Landing Page

Solution Requirements

To implement this feature you need the following:

  • Cloud APIC version 25.0(1) or higher
  • Nexus Dashboard 3.5(1) or higher
  • IPsec and BGP capable routers on the external Sites
    • Both IKEv1 and IKEv2 are supported (IKEv2 being the default)

Basic Concept

The Diagram below shows what we are trying to achieve. We’ll use this same topology for the Proof Of Concept Walkthrough.
file
Figure 1: What we are trying to achieve.

  • On the left-hand side we have an ACI Fabric on AWS with an EC2 instance running in the ACI Tenant
  • On the right-hand side we have a physical infrastructure with a CSR router and a VM running on the LAN behind the inside interface of the CSR.
    • 📗 Note:
      • this can be any router that supports IPsec, BGP and VRFs
      • for the proof of concept, we’ll have the physical side (CSR and VM) on AWS as well, in a separate VPC. This allows a quick proof of concept, since we don’t have to configure NAT and physical firewall rules in a physical DMZ to allow communication between the physical CSRs and the Cloud CSRs for the IPsec tunnels.
  • The goal is to enable the EC2 in the ACI AWS tenant to communicate with the VM in the physical infrastructure using their private IPs. This will be done through the secure IPsec tunnel. In essence, your AWS/ACI Fabric and physical infrastructure are, for all practical purposes, part of one enterprise routing domain.

Implementation Steps overview

The actual implementation is quite simple. If you have a networking background, you can relate to this just by looking at the figure below. Also, note that the configuration template for the on-Prem CSR can be downloaded from NDO, so it’s really easy to get this going.

file
Figure 2: Implementation Steps Overview

As shown in the diagram above, the implementation steps are:

a. Bring up VRFs on the ACI Infra CSRs and on-Prem CSRs
b. Bring up IPsec tunnels between the Infra CSRs and on-Prem CSRs (the on-Prem CSR configuration can be downloaded from NDO)
c. Leak/redistribute the needed routes on each side
d. Create security policies on the Cloud Fabric side (EPGs with Contracts)

Proof Of Concept

Initial Tenant & External Infrastructure bringup
  • It is assumed that you have cAPIC 25.x already installed and set up in the AWS Infra Account. For the Proof Of Concept I will be using cAPIC 25.0(2e) on AWS
  • It is assumed that you already have ND and NDO set up. For the Proof Of Concept, I will be using NDO 3.6
  • You now need to bring up the initial ACI Tenant on AWS and bring up the EC2 instance in the ACI/AWS Tenant account.
  • The next step is to bring up the physical infrastructure, the CSR and the VM as shown in the diagram. As mentioned earlier, we will bring this up in AWS itself in a separate VPC and spin up the VM (EC2). We’ll also need to create 2 subnets in the VPC (as shown in the diagram above). Each of these subnets will be tied to 1 of the interfaces of the CSR. We also need to create an IGW, so we can ssh into the EC2 and the CSR. For the CSR we will use an Elastic IP, so its public IP does not change. Please refer to Figure 1.
    • You can spin this up manually, or use the terraform code provided to spin up the AWS infrastructure and EC2 in minutes and then spin up the CSR manually from the AWS console.
    • If you want to use my terraform code, please ssh to the linux / mac where terraform is installed and do the below:
      git clone https://github.com/soumukhe/Terraform-aws-physical-dummy.git
      cd Terraform-aws-physical-dummy
      modify the override.tf file and put in your AWS credentials for the account where this will be spun up (could be the same ACI tenant account)
      modify the terraform.tfvars file as needed
      terraform init
      terraform validate
      terraform apply
  • Once this is done, it will print out the IP of the EC2 and you can ssh to it
  • Next you will need to spin up the CSR on AWS (the one on the right-hand side of the figure)
Spin up CSR for physical infrastructure
  • Go to AWS console/Marketplace and click on discover products
    file
    Figure 3: AWS Marketplace/discover products

  • type in csr in the search bar and click on the CSR BYOL product
    file
    Figure 4: Choose CSR BYOL

  • Click on Continue to subscribe and then on Continue to Configuration
    file
    Figure 5: Continue to Subscribe and Continue to Configuration

  • Next, click on Continue to Launch
    file
    Figure 6: Continue to Launch

  • Make sure to choose "Launch through EC2" and then click on the Launch button
    file
    Figure 7: Launching through EC2

  • Keep the default of t2.medium and click on Next: Configure Instance Details
    file
    Figure 8: Keep t2.medium and click Next: Configure Instance Details

  • If you used my terraform code to spin up the physical infra, then match the choices below. If you did your own, use similar logic. Make sure to bring up the 2nd interface on the CSR as well and put it in the other subnet as shown below
    file
    file
    file
    Figure 9: CSR values to be used.

  • Hit the Next: Add Storage button, on the following page Next: Add Tags, followed on the next page by Next: Configure Security Group
    file
    Figure 10: Choose as shown here

  • If you used my terraform script to create the AWS physical infra, then choose the existing security group allow_all-sgroup as shown below
    file
    Figure 10: Use the allow_all security group

  • Next click Review and Launch followed by the Launch button
    file
    file
    Figure 11: Review and Launch, followed by Launch

  • Here you will be asked to choose your ssh key. Choose the one that was already created by the terraform script, acknowledge, and click Launch Instance
    file
    Figure 12: Choose your SSH public key

  • Your AWS CSR should spin up now. Go to EC2 Instances, click on Networking, and make a note of the ENI for the primary interface (the one that you associated the 100.127.2.x subnet with)
    file
    Figure 13: Make a note of the ENI

  • Next, in the AWS Console search bar, search for Elastic and choose Elastic IP
    file
    file
    Figure 14: Choose Elastic IP

  • Choose an available Elastic IP and click on Associate. If you don’t have any, first click on Allocate Elastic IP address and create one
    file
    Figure 15: Associate Elastic IP

  • Now choose the Network Interface (the ENI you made a note of earlier) and associate it with the CSR Gig1
    file
    Figure 16: Finish Associating Elastic IP with CSR Gig 1

  • It is a good idea to name the instances, so it’s easy to understand visually
    file
    Figure 17: Naming the instances

You should now be able to ssh to the CSR. Please use the username ec2-user, i.e. do ssh ec2-user@<ip>. Get the IP from the instance details. Remember the ssh key used was from the ~/.ssh/ directory of the linux/mac box, so use that same box where you ran the terraform script from to be able to ssh in.
file

file
Figure 18: ssh to ec2

Initial config of CSR on physical infrastructure

Configure the CSR with another user, like admin, which does not require ssh keys, so you can log in from anywhere:

username admin priv 15 secret 0 SomePassword!
aaa new-model
aaa authentication login default local
aaa authorization exec default local 

line con 0
 stopbits 1

line vty 0 4
 transport input ssh

line vty 5 20
 transport input ssh

The user ec2-user is configured by default with ssh keys. Leave that configuration alone:

username ec2-user privilege 15
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip ssh pubkey-chain
  username ec2-user
   key-hash ssh-rsa 82181A7A92D40D721B919F562AA7259C ec2-user

Gig1, which is the public-facing interface, will by default have a config like the one below. Similarly, Gig2, which is the inside interface, will have the config shown below it.

!
interface GigabitEthernet1
 ip address dhcp
 ip nat outside
 negotiation auto
 no mop enabled
 no mop sysid
end
!
interface GigabitEthernet2
 ip address dhcp
 negotiation auto
 no mop enabled
 no mop sysid
 shutdown
end
!
interface VirtualPortGroup0    # you don't need this, so you can just leave it alone
 vrf forwarding GS
 ip address 192.168.35.101 255.255.255.0
 ip nat inside
 no mop enabled
 no mop sysid
end

Modify the config (as shown below) to make it plain vanilla. Use the AWS console to find out the private IPs and match them up.
⚠️ Pay attention to the order: change the G1 interface config only after G2, by sshing in from the EC2
file
Figure 19: Looking at the private IPs from AWS Console / EC2 instance for CSR to match up IPs

vrf definition phyint
 rd 102:102
 route-target export 64550:1           # you may have to change the value based on what you get from NDO later
 route-target import 64550:1            # you may have to change the value based on what you get from NDO later
 !
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet1    # don't change G1 first, otherwise you will get cut off. Remember you used this interface to ssh in. First do the G2 config, then ssh to the Phy Infra EC2 and from there ssh into the router using the G2 IP. Use the admin password you created for this ssh, since that EC2 will not have the keys for the CSR's ec2-user
 ip address 100.127.2.33 255.255.255.0
 negotiation auto
 no mop enabled
 no mop sysid
end
!
interface GigabitEthernet2
 vrf forwarding phyint
 ip address 100.127.1.194 255.255.255.0
 negotiation auto
 no mop enabled
 no mop sysid
end

file
Figure 19: you ssh’d into the CSR via G1, so don’t change that first. Configure G2, then ssh in from the EC2 using the admin credentials you configured on the CSR.
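Before moving on, you can sanity-check the new interface and VRF state from the CSR CLI with standard IOS-XE show commands (the inside EC2 address 100.127.1.132 is the one from this POC; yours will differ):

! verify interface state and addressing
show ip interface brief
! confirm Gig2 landed in the phyint VRF
show vrf phyint
! check the phyint routing table and reachability to the inside EC2
show ip route vrf phyint
ping vrf phyint 100.127.1.132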

Changing EC2s for password authentication

It is also a good idea to enable password-based ssh to the EC2 instances for convenience. That way, if you want to ssh from the CSR to the EC2 instance, you can do that even though the ssh keys are not present on the CSR. To ssh from the CSR to the EC2 you would then do ssh -vrf phyint -l ec2-user 100.127.1.132 (example from my setup below)
file
Figure 19.a: ssh from csr to EC2

For creating password-based authentication on the EC2, do the following:

For the physical infra EC2, ssh into it using the public IP: ssh ec2-user@<ip> (you can look up the public IP from the AWS console, or do a terraform refresh followed by terraform output to find it)

 sudo passwd ec2-user        # put in a password  
 sudo vim /etc/ssh/sshd_config
 PasswordAuthentication yes    # find the line and change to yes

 sudo systemctl restart sshd

Your basic configs have all been done now. Next, configure from NDO, obtain the physical CSR config from NDO, and paste it in. After that you will be done.


Workflow For Configuring from NDO

The workflow for configuring the external connectivity is shown in the diagram below:
file
Figure 20: Workflow to be used for external Connectivity

Create ext VRF on Cloud CSRs

file
Figure 20a: create ext VRF on Cloud CSRs

Deployment Considerations:

  • The same workflow can be used to provision an external VRF to multiple Cloud sites
  • You must use a separate template for different types of Cloud sites (for example AWS and Azure)
    • You can use the same VRF name (for example extVrf1) in those separate templates, if desired, as they represent different objects anyway
  • You can associate the same template to multiple Cloud sites of the same type (stretched template deployment model)
    • The same external VRF object defined in the template is in this case provisioned in all the Cloud sites of the same type

Create a Schema as shown below.
file
Figure 21: Creating a schema

Select a Template type of ACI Multi-cloud
file
Figure 22: Choosing the template type

Associate the Template with the Infra Tenant
file
Figure 23: Associating Template with Infra Tenant

Associate the template with the Site (I only have 1 AWS site in this setup)
file
Figure 24: Associate template with AWS Site

Add a VRF in the main Schema
file
Figure 25: Adding a VRF

Deploy the Template
file
Figure 26: Deploying the Template

Create external devices

file
Figure 27: Creating external devices

Creating external devices means telling NDO what the public IPs of the physical site CSRs are. In this POC, I’m using just 1 CSR, so I will only create 1 external device.

On NDO go to Site Connectivity and click on Configure

file
Figure 27: Site Connectivity/Configure

You will see multiple options here.

  1. Control Plane Configuration: This is where you put in global parameters for BGP peering timers and OSPF area information, used when you have Multisite Fabric configurations (multiple ACI Cloud and physical fabrics)
  2. IPN Devices: This is where you put in your IPN information if doing Multisite
  3. External Devices: This is for external Connectivity (what we are doing now)
  4. IPSec Tunnel Pools: This is where you define the IP pool from which tunnel interfaces will be addressed for the IPsec tunnels

In our case, since we are only doing external connectivity right now, we only need to look at items 3 and 4 above.

Click on External Devices and then Add External Devices
file
Figure 28: Adding External Devices

We need to put in the public IP of our physical site’s CSR here. Also notice that our physical site BGP ASN is 65444.
file
Figure 29: gathering the public IP of the CSR and the ASN number for the physical CSR

file
Figure 30: Populating the External Device information

The completed configuration will now look like this:
file
Figure 31: Completed External Device Configuration

Create Site Local Subnet Pool

file
Figure 32: Site Local Subnet Pool

This is the IP block from which tunnel interfaces will be addressed on the Cloud CSRs and external CSRs to bring up the IPsec tunnels. This step is optional. If you don’t configure this, the value will be taken from the Global External Subnet Pool.

📗 Note: notice that I had not created any subnet pool during cAPIC initial configuration. This is no longer necessary and can be configured directly from NDO

file
Figure 32: No Tunnel Subnet Pool defined on cAPIC initial configuration

From NDO, you will see that a default pool of 100.68.0.0/16 has been configured for this. Clicking on the "i" icon, you will see the meaning of this.

file
Figure 33: Global External Subnet Pool.

Click on Add Named Subnet Pool
file
Figure 34: click on Site Specific Subnet Pool

Add the pool value. In this POC, I’ve added 10.181.0.0/16
file
Figure 35: Site Specific Subnet Pool value added

Deploy and Download ext device config

file
Figure 36: Deploy & Download ext device config

In this step we will tie in the External Devices & the tunnel subnet pools to the AWS site.
Go to Site Connectivity and then the AWS site. Then click on Add External Connection
file
Figure 37: Add External Connection

Associate the site with the external devices and subnet pool that you configured earlier.
Also, note that we are using IKEv2; IKEv1 is also an option.
file
Figure 38: Tie in the External Device and ext subnet pool.

The completed form will look like below:
file
Figure 39: Completed External Connectivity

Next, click on Deploy & Download External Device Config files. A zip file will be downloaded to your local machine. Unzip it; the configuration that you need to apply is in a text file. Since I only have 1 site in this POC, I get only 1 text file, with the ext CSR config in that file.

file
Figure 40: Download the ext CSR configs

Apply config to ext device

file
Figure 40a: apply config to ext device

Open up the file in a text editor and study it. You will only have to make minimal configuration changes, to match up the interfaces. As an example, I had to change Gig2 to Gig1 and change the next hop for my default GW of the CSR for Gig1. Other than that, all I had to do was copy and paste the configuration.

The downloaded and modified configuration in this case is shown below:

! -----------------------------------------
! Device: 100.26.6.218
! version: v1.0.5
! -----------------------------------------

! ----------------Tunnel-100  to 1st Cloud CSR----------------
! The following file contains configuration recommendation to connect an external networking device with the cloud ACI Fabric
! The configurations here are provided for an IOS-XE based device. The user is expected to understand the configs and make any necessary amends before using them
! on the external device. Cisco does not assume any responsibility for the correctness of the config.

! Tunnel to 100.26.6.218 1.100 [ikev2] for hctunnIf.acct-[infra]/region-[us-east-1]/context-[overlay-1]-addr-[100.30.0.0/25]/csr-[ct_routerp_us-east-1_1:0]/tunn-10
! USER-DEFINED: please define gig-gateway: GIG-GATEWAY
! USER-DEFINED: please define GigabitEthernet1 if required
! USER-DEFINED: please define tunnel-id: 100 if required
! USER-DEFINED: please define vrf-name: infra:AWS-ExtVRF1 if required
! USER-DEFINED: please define source public IP: 34.236.125.151 if 0.0.0.0 ip still not provided by aws.
! Device:            100.26.6.218
! Tunnel ID:         100
! Tunnel counter:    1
! Tunnel address:    10.181.0.5
! Tunnel Dn:         acct-[infra]/region-[us-east-1]/context-[overlay-1]-addr-[100.30.0.0/25]/csr-[ct_routerp_us-east-1_1:0]/tunn-10
! VRF name:          infra:AWS-ExtVRF1
! ikev:              ikev2
! Bgp Peer addr:     10.181.0.6
! Bgp Peer asn:      65444
! Source Public IP:  34.236.125.151
! PreShared key:     1ZL8EICZ9HZ5OQ23XGKHF0WRAKIXXE0GXERM4O915SP37IT41RLNJ49HUJNMO7AX
! ikev profile name: ikev2-100

vrf definition infra:AWS-ExtVRF1
    rd 1:1

    address-family ipv4
        route-target export 64550:1
        route-target import 64550:1
    exit-address-family
exit

crypto ikev2 proposal ikev2-infra:AWS-ExtVRF1
    encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
    integrity sha512 sha384 sha256 sha1
    group 24 21 20 19 16 15 14 2
exit

crypto ikev2 policy ikev2-infra:AWS-ExtVRF1
    proposal ikev2-infra:AWS-ExtVRF1
exit

crypto ikev2 keyring keyring-ikev2-100
    peer peer-ikev2-keyring
        address 34.236.125.151
        pre-shared-key 1ZL8EICZ9HZ5OQ23XGKHF0WRAKIXXE0GXERM4O915SP37IT41RLNJ49HUJNMO7AX
    exit
exit

crypto ikev2 profile ikev2-100
    match address local interface GigabitEthernet1
    match identity remote address 34.236.125.151 255.255.255.255
    identity local address 100.26.6.218
    authentication remote pre-share
    authentication local pre-share
    keyring local keyring-ikev2-100
    lifetime 3600
    dpd 10 5 on-demand
exit

crypto ipsec transform-set ikev2-100 esp-gcm 256
    mode tunnel
exit

crypto ipsec profile ikev2-100
    set transform-set ikev2-100
    set pfs group14
    set ikev2-profile ikev2-100
exit

interface Tunnel100
    vrf forwarding infra:AWS-ExtVRF1
    ip address 10.181.0.6 255.255.255.252
    ip mtu 1400
    ip tcp adjust-mss 1400
    tunnel source GigabitEthernet1
    tunnel mode ipsec ipv4
    tunnel destination 34.236.125.151
    tunnel protection ipsec profile ikev2-100
exit

ip route 34.236.125.151 255.255.255.255 GigabitEthernet1 100.127.2.1

router bgp 65444

address-family ipv4 vrf infra:AWS-ExtVRF1
    redistribute connected
    maximum-paths eibgp 32

    neighbor 10.181.0.5 remote-as 65300
    neighbor 10.181.0.5 ebgp-multihop 255
    neighbor 10.181.0.5 activate
    neighbor 10.181.0.5 send-community both

    distance bgp 20 200 20
exit-address-family

! ----------------Tunnel-200  to 2nd Cloud CSR----------------

! The following file contains configuration recommendation to connect an external networking device with the cloud ACI Fabric
! The configurations here are provided for an IOS-XE based device. The user is expected to understand the configs and make any necessary amends before using them
! on the external device. Cisco does not assume any responsibility for the correctness of the config.

! Tunnel to 100.26.6.218 2.200 [ikev2] for hctunnIf.acct-[infra]/region-[us-east-1]/context-[overlay-1]-addr-[100.30.0.0/25]/csr-[ct_routerp_us-east-1_0:0]/tunn-10
! USER-DEFINED: please define gig-gateway: GIG-GATEWAY
! USER-DEFINED: please define GigabitEthernet1 if required
! USER-DEFINED: please define tunnel-id: 200 if required
! USER-DEFINED: please define vrf-name: infra:AWS-ExtVRF1 if required
! USER-DEFINED: please define source public IP: 34.203.26.205 if 0.0.0.0 ip still not provided by aws.
! Device:            100.26.6.218
! Tunnel ID:         200
! Tunnel counter:    2
! Tunnel address:    10.181.0.1
! Tunnel Dn:         acct-[infra]/region-[us-east-1]/context-[overlay-1]-addr-[100.30.0.0/25]/csr-[ct_routerp_us-east-1_0:0]/tunn-10
! VRF name:          infra:AWS-ExtVRF1
! ikev:              ikev2
! Bgp Peer addr:     10.181.0.2
! Bgp Peer asn:      65444
! Source Public IP:  34.203.26.205
! PreShared key:     1ZL8EICZ9HZ5OQ23XGKHF0WRAKIXXE0GXERM4O915SP37IT41RLNJ49HUJNMO7AX
! ikev profile name: ikev2-200

vrf definition infra:AWS-ExtVRF1
    rd 1:1

    address-family ipv4
        route-target export 64550:1
        route-target import 64550:1
    exit-address-family
exit

crypto ikev2 proposal ikev2-infra:AWS-ExtVRF1
    encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
    integrity sha512 sha384 sha256 sha1
    group 24 21 20 19 16 15 14 2
exit

crypto ikev2 policy ikev2-infra:AWS-ExtVRF1
    proposal ikev2-infra:AWS-ExtVRF1
exit

crypto ikev2 keyring keyring-ikev2-200
    peer peer-ikev2-keyring
        address 34.203.26.205
        pre-shared-key 1ZL8EICZ9HZ5OQ23XGKHF0WRAKIXXE0GXERM4O915SP37IT41RLNJ49HUJNMO7AX
    exit
exit

crypto ikev2 profile ikev2-200
    match address local interface GigabitEthernet1
    match identity remote address 34.203.26.205 255.255.255.255
    identity local address 100.26.6.218
    authentication remote pre-share
    authentication local pre-share
    keyring local keyring-ikev2-200
    lifetime 3600
    dpd 10 5 on-demand
exit

crypto ipsec transform-set ikev2-200 esp-gcm 256
    mode tunnel
exit

crypto ipsec profile ikev2-200
    set transform-set ikev2-200
    set pfs group14
    set ikev2-profile ikev2-200
exit

interface Tunnel200
    vrf forwarding infra:AWS-ExtVRF1
    ip address 10.181.0.2 255.255.255.252
    ip mtu 1400
    ip tcp adjust-mss 1400
    tunnel source GigabitEthernet1
    tunnel mode ipsec ipv4
    tunnel destination 34.203.26.205
    tunnel protection ipsec profile ikev2-200
exit

ip route 34.203.26.205 255.255.255.255 GigabitEthernet1 100.127.2.1

router bgp 65444

address-family ipv4 vrf infra:AWS-ExtVRF1
    redistribute connected
    maximum-paths eibgp 32

    neighbor 10.181.0.1 remote-as 65300
    neighbor 10.181.0.1 ebgp-multihop 255
    neighbor 10.181.0.1 activate
    neighbor 10.181.0.1 send-community both

    distance bgp 20 200 20
exit-address-family

At that point, also check the route-target and modify it for the vrf phyint on the CSR. The route-target is actually the BGP ASN of the cAPIC site followed by :1 (64550:1 in this POC).

!
vrf definition phyint
 rd 102:102
 route-target export 64550:1
 route-target import 64550:1
 !
 address-family ipv4
 exit-address-family
!     
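A quick way to confirm that the route-targets line up is to compare both VRF definitions on the physical CSR; the import/export values under phyint must match the ones NDO generated for infra:AWS-ExtVRF1 (64550:1 in this POC):

! both VRF definitions should show matching route-target import/export values
show running-config | section vrf definition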

Make slight modifications to the BGP config to configure route leaking between the external VRF and the phyint VRF on the physical CSR.

router bgp 65444
 bgp log-neighbor-changes
 !
 address-family vpnv4
 exit-address-family
 !
 address-family ipv4 vrf infra:AWS-ExtVRF1
  redistribute connected
  neighbor 10.181.0.1 remote-as 65300
  neighbor 10.181.0.1 ebgp-multihop 255
  neighbor 10.181.0.1 activate
  neighbor 10.181.0.1 send-community both
  neighbor 10.181.0.5 remote-as 65300
  neighbor 10.181.0.5 ebgp-multihop 255
  neighbor 10.181.0.5 activate
  neighbor 10.181.0.5 send-community both
  maximum-paths eibgp 32
  distance bgp 20 200 20
 exit-address-family
 !
 address-family ipv4 vrf phyint
  network 100.127.1.0 mask 255.255.255.0
 exit-address-family

You might also want to put a default route for the G2 interface VRF, pointing to the .1 address of the subnet for that VRF. In the case of this POC it is as shown below:

ip route vrf phyint 0.0.0.0  0.0.0.0  GigabitEthernet2 100.127.1.1

After this, you will notice that your IPsec tunnels are up and BGP is up.
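On the physical CSR, the tunnel and peering state can be verified with the following IOS-XE show commands (VRF name as generated by NDO for this POC):

! IKEv2/IPsec sessions for both tunnels should show as UP-ACTIVE
show crypto session
show crypto ikev2 sa
! Tunnel100 and Tunnel200 should be up/up
show ip interface brief | include Tunnel
! neighbors 10.181.0.1 and 10.181.0.5 should reach the Established state
show ip bgp vpnv4 vrf infra:AWS-ExtVRF1 summary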

file
Figure 41: IPsec Tunnels are up

You will also notice that the IPsec tunnels between the physical site CSR and the Cloud CSRs are up
file
Figure 42: IPsec tunnels are up

BGP peers will be up, but no prefixes will be learnt on the Phy-Site CSR at this point, because we are not advertising prefixes from the ACI site CSRs to the physical site CSR; we have not configured route leaking on the ACI Cloud CSRs yet
file
Figure 43: No prefixes learnt via BGP on Phy-Site CSR

On the ACI Infra CSRs, you will notice that the subnet for the physical site LAN, 100.127.1.0/24, has been learnt, because we already configured route leaking on the physical site CSR
file
Figure 44: ACI Infra Cloud CSRs have learnt the physical site LAN prefix

Leak External Routes to Cloud user Tenant

file

Figure 45: Leak external routes to Cloud user Tenant

At this point we will leak the physical site 100.127.1.0/24 prefix to the ACI Tenant VRF on the Cloud CSRs
file
Figure 46: Leak prefixes from ext VRF to Tenant VPC

Go to the Infra Schema, click on Site Local Template / VRF and click on Add Leak Route
file
Figure 47: Add Leak Route

Here you will put in the information about where the route needs to be leaked to. Route leaking in this case is from the ext VRF to the ACI Tenant VRF. The route to be leaked is the LAN prefix of the external site, which in this case is 100.127.1.0/24
file
Figure 48: Leak prefix from ext VRF to user Tenant VRF

Now, click on the main template and hit the Deploy to sites button
file
Figure 49: Deploy to Sites

Leak routes from Cloud User VRF to external VRF

file
Figure 50: Leak routes from Cloud user VRF to external VRF

At this point we will leak the tenant prefix 10.140.3.0/24 to the ext VRF on the Cloud CSRs
file
Figure 51: Leak prefixes from Tenant VRF to ext VRF

Go to the Tenant Schema, click on Site Local Template / VRF and click on Add Leak Route
file
Figure 52: Add Leak Route from Tenant VRF to External VRF

Here you will put in the information about where the route needs to be leaked to. Route leaking in this case is from the Tenant VRF to the ACI Cloud CSR external VRF. The route to be leaked in this case is 10.140.3.0/24
file
Figure 53: Leaking 10.140.3.0/24 to the Cloud ACI CSR external VRFs

Now, click on the main template and hit the Deploy to sites button
file
Figure 54: Deploy to Sites

Now that route leaking on the ACI Cloud CSRs has been configured, you will see that BGP prefixes have been learnt on the physical CSR. Notice that 10.140.3.0/24 is now on the physical site CSR in both the external VRF and the internal VRF
file

file

file
Figure 55: BGP Prefix has been learnt
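A minimal check on the physical CSR that the cloud tenant prefix landed in both VRFs (prefix and VRF names from this POC):

! learnt over BGP in the external VRF...
show ip route vrf infra:AWS-ExtVRF1 10.140.3.0 255.255.255.0
! ...and imported into the inside VRF via the matching route-targets
show ip route vrf phyint 10.140.3.0 255.255.255.0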

Create extEPG to represent external network

file
Figure 56: Create external EPG in Infra Schema

Now that we have the routing up and working, the last thing to look at is creating the external EPG in the Infra Schema, so we can attach a contract to it. Remember that ACI is a whitelist model and we have to explicitly allow communication between EPGs.

Also, remember that EPGs have a relationship to Application Profiles. So, we have to first create an App Profile as shown below.

file
Figure 57: Create App Profile in Infra Schema

Now create the Cloud EPG in Infra Schema and tie it to the Application Profile that you created
file
Figure 58: Create Cloud External EPG in Infra Tenant

Now click on the Site Local Template, click on the EPG that you created, make sure to tag it as External-Site, and click on Add Selector
file
Figure 59: Choosing the External-Site type for the External EPG that we created, and clicking Add Selector

Give the selector a name. Make sure to choose the prefix on the physical infrastructure that you need to be able to communicate with. In my case this prefix is 100.127.1.0/24
file
Figure 60: Choosing the Selector for the External EPG

Once done, please make sure to click on the main Template and hit Deploy to sites.

Apply Contract between extEPG and Cloud EPG

file
Figure 61: Apply contract between extEPG and Cloud EPG

For this, create a Global Contract in the Tenant Schema with an any/any filter (for a POC this is OK, but for production you might want to tighten it down)

file
Figure 62: Create Global Contract in Tenant Schema with any/any filter

Apply the Contract as both Consumer and Provider on the Tenant Cloud EPG and also on the Infra External EPG as shown below.

file

file
Figure 63: Apply the contract both Provider/Consumer to Infra extEPG and Tenant Cloud EPG

That should be all. Now it’s time for testing.

Testing

For testing, let’s ping from the Physical Infra CSR inside interface to the ACI Cloud EC2 (private IPs).
file
Figure 64: Ping Test from on-Prem CSR phy internal interface to ACI-Fabric Interface

file
Figure 65: As you can see the ping is successful.
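The ping above originates in the phyint VRF and is sourced from the inside interface, along these lines (10.140.3.10 is a stand-in for the cloud EC2’s private IP; look up the actual address in the AWS console):

ping vrf phyint 10.140.3.10 source GigabitEthernet2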

Now let’s try to ping from the on-prem non-ACI site VM (actually an EC2) to the ACI/AWS Fabric EC2.
Obviously we need to put a route on the on-site VM saying that for prefix 10.140.3.0/24 the next hop is the on-Prem CSR’s internal interface IP.

⚠️ However, there is a catch to this. This would work fine on a real VM, but on an AWS EC2, trying to change the next hop with a simple static route does not work.

file
Figure 65: Ping from onPrem VM to cloud EC2.

Let’s try this by putting the static route on the on-prem EC2:
sudo route add -net 10.140.3.0/24 gw 100.127.1.194 dev eth0

file

file
Figure 66: Applying Static route to VM onPrem (EC2)

Now Let’s try the ping
file
Figure 67: Static routes on EC2s don’t work!

You could also run a packet debug on the CSR; you will notice that the packets are not reaching the CSR next hop at all. For packet debug on the CSR, please see the article A Practical Guide to using Azure vNET Peering with Cloud ACI, which explains how to do it.

How to put a static route on an EC2 for a prefix

First, let’s delete the static route we put on the EC2, because it’s no good.
sudo route delete -net 10.140.3.0/24

file
Figure 68: Deleting the useless static route on the EC2

Next, let’s solve the problem. 2 things need to be done for a static route to work on AWS.

  1. On the AWS Console go to VPC, choose the subnet for the EC2, and make a note of the Route Table ID. Next go to the VPC Route Table, edit that route table, and add the static route there. The next hop should be the ENI (Elastic Network Interface) of the CSR G2 (the next hop). You can find the ENI ID by going to EC2 / CSR Instance / Networking

file
Figure 69: Modifying Route Table in AWS

  2. Next go to EC2 / Network Interfaces and to the ENI for the CSR G2 (the next hop for the route). Click Actions and then Change source/dest check. Disable the source/dest check.

file
Figure 70: Disabling Source/Destination Check on the next hop ENI

📗 Note: Putting a static route on the EC2 along with disabling source/dest check on the CSR G2 ENI would also work. However, it would only work for that particular EC2 in that subnet; other EC2s would then each need a static route as well. If you equate this to a traditional legacy LAN, it’s like putting a static route on the next-hop router vs. putting a static route directly on the EC2.

Now Try the ping again:
file
Figure 71: Ping works after modifying Routing Table of Subnet and disabling Source/Destination Check for next hop ENI

References

Cloud ACI Documentation


