AWS Direct Connect for connecting AWS/ACI Fabric to onPrem ACI Fabric

Table of Contents:

  1. Introduction
  2. Quick Introduction to AWS Direct Connect
  3. Azure and GCP equivalent for AWS Direct Connect and ACI support
  4. ACI/AWS Fabric Implementation with Direct Connect
    4a. Implementation
    4b. Hosted VIF Method
    4c. Second Method: associating VGW with DXGW in Master Account
  5. NDO: Connecting onPrem/AWS ACI Fabric
  6. References

Introduction

Recently, I had the opportunity to help a few customers connect their ACI onPrem fabric to an ACI/AWS fabric via AWS Direct Connect. I am documenting this here to help the ACI community accomplish this goal.

Quick Introduction to AWS Direct Connect

  • Direct Connect offers a dedicated network connection between your AWS VPC and your onPremise network.
  • Not having to connect through the Internet gives you low latency and consistent bandwidth.
  • Data transfer cost for egress is lower over Direct Connect than over the Internet.
  • Not having packets traverse the public Internet between onPrem and AWS is inherently more secure.
  • You can use larger MTUs: 9001 bytes for Private VIFs (minus packet overhead), compared to the 1500-byte MTU when going over the Internet (minus packet overhead).
  • Leverages the AWS global network backbone.
  • Direct Connect (DX) Locations are provided by authorized 3rd-party providers.
  • Get 1, 10, or 100 Gbps with Dedicated Connections, or sub-1 Gbps bandwidth by leveraging an AWS DX Partner (also known as a Direct Connect Hosted Connection).

When you set up Direct Connect, you have to configure something called a VIF (Virtual Interface). What this essentially does is extend an 802.1Q VLAN from the AWS side all the way to your onPremise router. You then bring up BGP over it to exchange prefixes between your onPrem and AWS sides.

There are 3 kinds of VIF:

  • Private VIF: This is used for private connectivity to your VPC.
  • Public VIF: This gives you the ability to reach public services such as S3 buckets from onPrem, without having to go through the Internet.
    📗 Note: you can also reach S3 buckets or other services (that use AWS public IPs) from onPrem using a Private VIF, by utilizing Interface VPC Endpoints with Endpoint Services. This is also known as PrivateLink connectivity.
  • Transit VIF: Instead of connecting your DX link to the VPC, this kind of VIF connects to a Transit Gateway, giving you more flexibility.
    📗 Note: MTU for a Private VIF can go up to 9001 bytes, whereas a Transit VIF supports up to 8500 bytes.

Figure 1: MTU based on VIF type

Direct Connect Gateway (DXGW)
There is also another AWS object called the Direct Connect Gateway (basically a router). When using a Private VIF over DX, it is always advisable to use a DXGW. Without a DXGW, Direct Connect can only connect to the home Region (the Region where the Direct Connect Location is; N. Virginia, for instance). A DXGW, on the other hand, has global scope, so you can attach multiple VPCs (up to 10) in AWS regardless of which Region the Direct Connect Location resides in. In the case of Transit VIFs, a DXGW is a requirement, and you can connect one Direct Connect Gateway to 3 TGWs.
⚠️ DXGW is not applicable for Public VIFs

Hosted VIFs
Most enterprises will have their Direct Connect connection belonging to a particular AWS account. It is not practical, nor will the business generally allow, a brand new Direct Connect connection for every AWS account. Take, for instance, the case of ACI/AWS. You install the ACI fabric on AWS in a particular account. This account is called the ACI Infra account. Now, you want to leverage Direct Connect. Chances are the enterprise already has a Direct Connect connection on another AWS account that they manage, and they will probably not entertain a new DX connection for the Infra account. Further, if you have multiple regions configured for the Infra account, you cannot expect the enterprise to give you a Direct Connect connection for each of these. This is where the Hosted VIF comes in. The administrator of the AWS account that owns the Direct Connect connection can create the VIF and share it with the AWS account that needs to use it. In this writeup, I will refer to this account that owns the Direct Connect as the "master AWS account".
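For illustration, the master account can also allocate a hosted Private VIF to the Infra account with the AWS CLI. This is only a minimal sketch: the connection ID, account number, VLAN, and BGP addressing below are hypothetical placeholders (the addressing simply mirrors the ISN example used later in this writeup).

# run from the master AWS account that owns the DX connection
# (all IDs, the VLAN, and the addressing are hypothetical)
aws directconnect allocate-private-virtual-interface \
    --connection-id dxcon-ffabc123 \
    --owner-account 444455556666 \
    --new-private-virtual-interface-allocation \
        'virtualInterfaceName=aci-infra-vif,vlan=100,asn=64512,mtu=9001,amazonAddress=192.168.0.2/30,customerAddress=192.168.0.1/30,addressFamily=ipv4'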

I’ve covered enough about Direct Connect technology from AWS above to get you going for ACI Connectivity purposes. For a more detailed understanding of Direct Connect, please see AWS documentation at: AWS Direct Connect Documentation

Azure and GCP equivalent for AWS Direct Connect and ACI support

  • Azure: ExpressRoute, supported for ACI/Azure Integration
  • GCP: Interconnect, support for ACI/GCP Integration coming soon

ACI/AWS Fabric Implementation with Direct Connect

A high-level diagram of the ACI/AWS fabric with Direct Connect is shown in the figure below.
Please note the following:

  • ACI Infra VPCs are tied to DXGW (Direct Connect Gateway)
  • Each Infra Account will need a private VIF
  • The DX Location (such as Equinix) is where cross connects are established between the customer/partner colo router and the AWS DX router
  • IPSec is optional (most enterprises have encryption in transit and at rest, so adding extra IPSec overhead is not necessary)
  • Once connectivity between onPrem and the AWS Infra VPCs is achieved, the rest of the operations, i.e. BGP EVPN (control plane) and VXLAN (data plane) between the Cat8Kvs and the ACI physical Spines, are no different than in the non-DX case (Internet connectivity)

Figure 2: High Level Diagram of ACI/AWS fabric with DX to onPrem

Since most customers opt not to use IPSec between the DX-connected ACI/AWS fabric and onPrem, the diagram below shows this case specifically.
Figure 2a: High Level Diagram of ACI AWS Fabric with DX to onPrem with no IPSec

Please note the following:

  • The Cat8Kvs will use Gig4 for BGP EVPN/VXLAN connectivity to the Spine
  • For IPSec, Gig3 is used. IPSec connectivity is from the C8Kv to the IPN/ISN. If IPSec is not used, then Gig3 has no purpose
  • We need to enable Route Propagation for the route table associated with Gig4 of the Cat8Kvs. Do not put static routes instead. ⚠️ Using static routes will limit the MTU to 1500 bytes (we’ll discuss this further in the implementation part of this writeup).

The above figures show you the mechanics behind the DX integration. Obviously, you will have at least 2 ISNs in the onPrem site.

  • You could create 2 VIFs and peer BGP from each ISN to the DXGW.
  • You could create 2 BGP peerings over the same VIF.

Figure 3: BGP Peering from each ISN requires multiple VIFs or multiple BGP peerings over same VIF

Also, note that you will probably want to allow connections to the cAPIC/AWS and Cat8Kv management interfaces only from the enterprise itself. You can easily accomplish this with proper route leaking. ⚠️ Be careful: AWS allows only 100 prefixes to be advertised over a Private VIF to the VPC. Summarize prefixes when advertising through BGP.

Figure 4: OOB reachability for cAPIC/Cat8Kvs from enterprise only

📗⚠️ Note/Caution: It is very important that on the ISN you also redistribute (from OSPF to BGP, with route maps) the TEP pool of the ACI fabric. This is because, from the leaf side, outgoing packets go directly through a VXLAN tunnel from the leaf to the C8Kv Gig4 interface.

To summarize:
Data Plane Traffic Flow

  • From onPrem Leaf to AWS egress: VXLAN Tunnel from Leaf to C8KV Gig 4 interface
  • From AWS to onPrem Leaf egress: VXLAN Tunnel from C8KV to onPrem Spine.

You can quickly determine the TEP pool used for the onPrem fabric via the command:

cat /data/data_admin/sam_exported.config

Figure 4a: Determine TEP Pool IP of onPrem Fabric and make sure to advertise to AWS (on ISN)

If your onPrem fabric is a MultiPod, each Pod will have a different TEP pool. Make sure to advertise every TEP pool to the AWS side.

Implementation

We will assume that you will not have a direct DX connection on the ACI Infra AWS account. We’ll also go through the steps of creating the DX connection, but if you already have a DX connection on another AWS account which you are allowed to use, you can obviously skip those steps.

There are 2 methods you can use to achieve this:
a) Hosted VIF
b) Associating VGW with DXGW in Master AWS Account

Figure 5: Hosted VIF Method

Figure 6: Associating VGW with DXGW in Master AWS Account

Hosted VIF Method

As mentioned earlier, if you already have a DX connection on a master AWS account, you can skip the steps for creating the DX connection.

Step 1: Create the DX Connection. Do this from the AWS console.

Figure 7: DX Screen in AWS console

Once you click on "Create Connection", please follow the instructions in the diagram below.

Figure 8: Creating the DX Connection

Once you finish, you will see that your DX connection stays in the pending state for some time. It will ultimately go to the down state.
📗 Note: You will not be charged by AWS until:
you submit your LOA-CFA (Letter of Authorization and Connecting Facility Assignment) and provide it to the partner for the cross-connect,
OR:
90 days pass without you doing anything, in which case you will be charged based on the port speed you selected.
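For reference, the same connection request can also be made with the AWS CLI, and the LOA-CFA can be retrieved once it is available. This is a hedged sketch: the location code, connection name, and connection ID are illustrative, and the LOA retrieval assumes the base64-encoded PDF is returned in the loaContent field.

# hypothetical example: request a 1 Gbps dedicated connection at a DX Location
aws directconnect create-connection \
    --location EqDC2 \
    --bandwidth 1Gbps \
    --connection-name aci-onprem-dx

# once the LOA-CFA is ready, retrieve and decode it (loaContent is a base64-encoded PDF)
aws directconnect describe-loa \
    --connection-id dxcon-ffabc123 \
    --output text --query loaContent | base64 --decode > loa.pdf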

You can download the LOA from the AWS console as shown below. You will need to download it and provide it to the DX provider.

Figure 9: Downloading the LOA

The LOA will look like the example shown below:
Figure 10: What LOA looks like

In the next step, you will need to create the Private VIF as shown below:
Figure 11: Click on VIF

Now choose Private VIF as shown below:
📗 Note: the limit is 50 Private VIFs per DX connection
For DX Quotas, please see: https://docs.aws.amazon.com/directconnect/latest/UserGuide/limits.html

Figure 12: Creating the Private VIF

Continue creating the Private VIF as shown below:
📗 Note: choose the AWS Infra account for the Hosted VIF
Figure 13: Continue Creating the Private VIF

  • Make sure to go to Advanced Settings and configure the IP addresses for the AWS side and your ISN side for BGP peering.
  • Also, don’t forget to select 9001 MTU (Jumbo).
    Figure 14: Additional Settings for VIF

Once the VIF is created, it will show in the confirming state. It is waiting for the administrator of the Infra account to accept the VIF.

Figure 15: VIF in confirming state

By clicking on the VIF, you can observe/verify the details of the VIF as shown below:

Figure 16: Details of VIF

Next, go to the ACI Infra AWS account:
Create your DXGW in the AWS/ACI Infra account as shown below:
Figure 17: Creating DXGW in ACI/AWS Infra account

Once done, you will notice that the DXGW is in the available state.

Figure 18: DXGW in Available State
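The DXGW can also be created with the AWS CLI. A minimal sketch, run from the ACI/AWS Infra account, is shown below; the gateway name is hypothetical and the Amazon-side ASN simply mirrors the ASN used in the ISN example later in this writeup.

# run from the ACI/AWS Infra account (name and ASN are illustrative)
aws directconnect create-direct-connect-gateway \
    --direct-connect-gateway-name aci-infra-dxgw \
    --amazon-side-asn 64512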

Next we need to create a VGW and attach the VGW to the ACI/AWS Infra VPC. For this, first verify the Infra VPC as shown below (the assumption is that the cAPIC install in the ACI/AWS Infra account was already done).

Figure 20: Verify VPC for ACI Infra Account

Now create your VGW

Figure 21: Creating VGW

Attach the VGW to the ACI/AWS Infra VPC.
📙 Note: one VPC can only have one VGW attached to it.

Figure 22: Attaching VGW to ACI/AWS Infra VPC

Once done, you will be presented with a success screen
Figure 23: VGW Attached successfully to ACI/AWS Infra VPC
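Equivalently, the VGW can be created and attached from the AWS CLI. A minimal sketch follows; the VGW ID, VPC ID, and ASN are hypothetical placeholders.

# run from the ACI/AWS Infra account (IDs and ASN are illustrative)
aws ec2 create-vpn-gateway --type ipsec.1 --amazon-side-asn 64512
aws ec2 attach-vpn-gateway \
    --vpn-gateway-id vgw-0abc12345def67890 \
    --vpc-id vpc-0123456789abcdef0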

Now go to the DX screen in AWS (Infra account) and click on VIF. You will notice that the hosted VIF created in the AWS Master account shows up here. Click on the VIF and accept the Hosted VIF request.

Figure 24: Hosted VIF created from AWS Master account shows up in Infra Account.

Figure 25: Accepting the VIF.
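The same acceptance can be done from the AWS CLI. In this sketch, with hypothetical IDs, the VIF is tied to the DXGW at accept time (the console flow in this writeup does that as a separate step).

# run from the ACI/AWS Infra account (IDs are hypothetical)
aws directconnect confirm-private-virtual-interface \
    --virtual-interface-id dxvif-aaaa1111 \
    --direct-connect-gateway-id 11112222-3333-4444-5555-666677778888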

Next, tie in the DXGW to the VIF.
📙 Note: the Direct Connect Gateway (DXGW) is a highly available global resource (as opposed to regional). Also note that you can associate up to 30 Private VIFs per DXGW. This means that if you had 2 Private VIFs, one per onPrem router, you would still need only 1 DXGW.

Figure 26: Tying in the DXGW to the VIF

Once done, you will see that the DXGW shows the VIF attached.

Figure 27: Observing the VIF associated with DXGW

Next we have to tie in the VGW to the DXGW. For this, click on the "Gateway Association" tab and then click on Associate Gateway.

Figure 28: Associating VGW with DXGW, go to Gateway Association Tab

Next, choose the VGW you created earlier. Also make sure to put the Infra account subnet (as observed earlier from the Infra VPC) in the allowed prefix list. This is like creating a route-map to advertise the Infra prefix to onPrem.

Figure 29: Completing the VGW/DXGW association
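For reference, the association (including the allowed prefix list) can also be created with the AWS CLI. In this hedged sketch the IDs are hypothetical, and 100.30.0.0/24 is the infra cAPIC CIDR used in the ISN configuration later in this writeup.

# run from the ACI/AWS Infra account (IDs are hypothetical)
aws directconnect create-direct-connect-gateway-association \
    --direct-connect-gateway-id 11112222-3333-4444-5555-666677778888 \
    --gateway-id vgw-0abc12345def67890 \
    --add-allowed-prefixes-to-direct-connect-gateway cidr=100.30.0.0/24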

Next, go to Route Tables and enable route propagation on the route tables for the Infra cAPIC, based on what needs to talk to onPrem. You can find the IPs from cAPIC (or from the AWS console) and match the route tables accordingly.

This in effect creates a BGP route map that allows prefixes from the onPrem side to be advertised into the VPC.

⚠️ Do not add a static route (pointing to the VGW as next hop) for onPrem instead. Doing so will limit the MTU to 1500 bytes.

a) cAPIC management interface route-table, so you can reach it from onPrem if needed (optional)
b) C8Kv mgmt interfaces (optional)
c) C8Kv Gig4 interfaces (mandatory; this is the interface that will be used for BGP peering towards the ISN)

In the example below, we want the prefix 100.127.0.1/30 to be advertised to the VPC. Route propagation will accomplish this. (Of course, on the ISN you will need to make sure that the route map permits the prefix to be advertised out.)

Figure 30: Example of Route Propagation results.
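Route propagation can also be enabled per route table from the AWS CLI, as in the minimal sketch below (the route table and VGW IDs are hypothetical).

# enable route propagation from the VGW on the route table associated with
# the C8Kv Gig4 subnet (IDs are hypothetical); do NOT use static routes instead,
# or the MTU will drop to 1500 bytes
aws ec2 enable-vgw-route-propagation \
    --route-table-id rtb-0123456789abcdef0 \
    --gateway-id vgw-0abc12345def67890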

The figure below shows how route propagation will accept more routes as needed (provided they are advertised out on the ISN BGP peering).

Figure 31: Learning onPrem OOB routes in appropriate subnet route tables.

You will notice that the VIF shows the BGP peering as down. This is because the ISN BGP peering towards the AWS side has not been configured yet.

Figure 32: BGP Peering shows to be down because ISN BGP towards AWS is not configured yet.

A sample ISN configuration is shown below:

interface Ethernet3.100
  encapsulation dot1q 100
  vrf forwarding infra:isn
  ip address 192.168.0.1/30
!
router bgp 64512
  bgp router-id 172.16.0.1
  bgp log-neighbor-changes
  neighbor 192.168.0.2 remote-as 64512
  neighbor 192.168.0.2 password 0xyBXG3Doz5Mc0vCtxGfejzr
  !
  address-family ipv4 vrf infra:isn
    ! Be careful: AWS allows up to 100 prefixes advertised to the VPC; summarize if needed!
    redistribute connected
    ! Below, advertise the TEP pool of the onPrem ACI fabric
    ! (cat /data/data_admin/sam_exported.config on the onPrem APIC to find the TEP pool)
    network 10.8.0.0/16
    neighbor 192.168.0.2 activate
!
! Need to pass on the prefix block of cAPIC/AWS to the physical Spine side
! Note: 100.30.0.0/24 is the IP CIDR for the infra cAPIC
access-list 1000 permit 100.30.0.0 0.0.0.255
!
route-map ospf permit 10
  match ip address 1000
!
router ospf infra:isn
  vrf infra:isn
  router-id 172.16.0.1
  redistribute bgp 64512 route-map ospf

The diagram below will make this configuration more obvious:

Figure 32a: Understanding BGP Configuration on the ISN to peer with AWS
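Once the ISN side is configured, the BGP session state can also be verified from the AWS side with the CLI; a small sketch with a hypothetical VIF ID is shown below.

# check the BGP state of the VIF from the AWS side (VIF ID is hypothetical);
# look for bgpStatus under bgpPeers in the output
aws directconnect describe-virtual-interfaces \
    --virtual-interface-id dxvif-aaaa1111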


Second Method: associating VGW with DXGW in Master Account

As mentioned previously, there is another method to configure the DX connection from the Master account to the ACI/AWS Infra account.

Figure 33: DX connection without using Hosted VIF

  • In this method, the VIF, DX connection, and DX Gateway are created in the Master account itself.
  • Once done, go to the ACI/AWS Infra account, create the VGW, and associate it with the Infra VPC.
  • Next, on the ACI/AWS Infra account, go to the DX screen, click on Virtual Private Gateway, choose the VGW, and then click on Associate Direct Connect Gateway.

Figure 33a: Second Method: Associating VGW with Direct Connect Gateway of Master AWS Account

Now, make sure to choose the Master AWS Account ID and the DXGW ID of the Master AWS Account as shown below:

Figure 34: Choosing the DXGW ID of Master AWS account from the AWS/ACI Infra Account.

Once done, you will see that the status of the proposed association is in the "requested" state.

Figure 35: Requested state for proposed DXGW Proposal

Now, go to the AWS Master account and accept the proposal:
Figure 36: Accepting the Proposal from AWS Master account
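This cross-account proposal flow can also be driven from the AWS CLI. A hedged sketch follows; all IDs and account numbers are hypothetical (the first command runs from the Infra account, the second from the Master account).

# from the ACI/AWS Infra account: propose associating the local VGW
# with the Master account's DXGW (IDs and accounts are hypothetical)
aws directconnect create-direct-connect-gateway-association-proposal \
    --direct-connect-gateway-id 11112222-3333-4444-5555-666677778888 \
    --direct-connect-gateway-owner-account 111122223333 \
    --gateway-id vgw-0abc12345def67890 \
    --add-allowed-prefixes-to-direct-connect-gateway cidr=100.30.0.0/24

# from the AWS Master account: accept the proposal
aws directconnect accept-direct-connect-gateway-association-proposal \
    --direct-connect-gateway-id 11112222-3333-4444-5555-666677778888 \
    --proposal-id aaaabbbb-cccc-dddd-eeee-ffff00001111 \
    --associated-gateway-owner-account 444455556666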

From here on, follow method 1 to configure the ISN BGP peering and complete the connectivity. Also, don’t forget to configure route propagation for the route table of the C8Kv Gig4 subnet.

NDO: Connecting onPrem/AWS ACI Fabric

Once your DX connection is up, go to NDO, Configure Infra, and click on your AWS site. On the rightmost pane, add the Inter-Site Connectivity from AWS to onPrem.

Key Items to remember here are:

  • Connection Type: Make sure to choose Private Connection
  • Protocol: BGP EVPN
  • IPSec: Disabled

Once done, your sites should be connected. You can SSH into your C8Kvs and check the BGP EVPN peering and the route tables for the infra VRF (see the sketch after the figure below).

Figure 37: Joining the AWS and onPrem ACI Fabrics from NDO
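As a quick check, assuming SSH access to the C8Kv management IP (the IP, username, and VRF name below are hypothetical placeholders that vary per deployment), something like the following can be run:

# hypothetical management IP and credentials; adjust to your deployment
ssh admin@10.10.10.10 "show bgp l2vpn evpn summary"

# infra VRF routing table on the C8Kv (the VRF name varies per deployment)
ssh admin@10.10.10.10 "show ip route vrf <infra-vrf>"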

References

Cloud ACI Documentation

AWS Direct Connect Documentation

AWS Direct Connect Quotas: https://docs.aws.amazon.com/directconnect/latest/UserGuide/limits.html


