Table of Contents:
- Simulated Hybrid Topology
- Relevant Route Leak configuration for onPrem Router
In a previous post, AWS Direct Connect for connecting AWS/ACI Fabric to onPrem ACI Fabric, I showed in detail how to implement a hybrid fabric between onPrem and the AWS cloud through AWS Direct Connect.
In that post, I mentioned that it is also possible to set up the topology so that NDO-to-CNC communication also happens through the AWS Direct Connect link, as depicted in Figure 4 of the previous article. I’ve copied that diagram below as Figure 1 of this article.
Figure 1: NDO to CNC through direct connect
The onPrem router where BGP to the Direct Connect Gateway terminates is, in this case, the ISN router, though it does not need to be. That router has multiple interfaces: one goes toward the spine, and the other goes to the OOB management segment. Your topology on the onPrem side might be different, but the basic idea is the same; as you go through this writeup, you can adapt the configuration to your topology.
The interface going toward the onPrem ACI Fabric Spine is in the ISN VRF, while the interface going toward the ND/NDO is in the management VRF.
Some customers have reached out for help with the route-leaking configuration between the management VRF and the ISN VRF on the onPrem router that makes this possible.
For NDO to be able to reach the CNC (in AWS) through Direct Connect, two items need to be addressed:
- Routing table: The NDO IP needs to be advertised through BGP to AWS, and the CNC IP needs to be advertised into the management VRF. However, the NDO is in the management VRF, while the BGP session is in the ISN VRF, so route leaking between the VRFs has to be configured.
- Security group in AWS: on the security group attached to the CNC OOB ENI, modify the inbound rules to allow the NDO host IP to reach CNC on port 443, plus ICMP and SSH if you want to run ping tests and SSH to CNC from NDO.
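As a sketch of the security-group change, the inbound rules could also be added with the AWS CLI; the security-group ID `sg-0123456789abcdef0` below is a placeholder for your environment, and `10.20.20.169/32` is the NDO host IP used throughout this writeup:

```
# Allow the NDO host to reach CNC on HTTPS (443).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 10.20.20.169/32

# Optional: allow SSH from the NDO host.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 10.20.20.169/32

# Optional: allow all ICMP from the NDO host for ping tests.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol icmp --port -1 \
    --cidr 10.20.20.169/32
```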
📙 I am only discussing how to allow NDO to communicate with CNC through Direct Connect. You will probably also want to access CNC directly through the private connection; you can simply extend this configuration to allow for that as well.
For item #2, the security group in AWS, all you have to do is identify the appropriate security group and add the inbound rule.
This writeup will discuss item #1, the route-leak configuration on the onPrem side.
⚠️ Please make sure that, on the AWS side, route propagation is enabled on the routing table for the subnet that hosts the CNC OOB mgmt ENI.
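For reference, route propagation can also be enabled from the AWS CLI rather than the console; the route-table and virtual-gateway IDs below are placeholders:

```
aws ec2 enable-vgw-route-propagation \
    --route-table-id rtb-0123456789abcdef0 \
    --gateway-id vgw-0123456789abcdef0
```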
Since I don’t have a real Direct Connect in my lab, and the main purpose of this writeup is to show the routing configuration needed on the onPrem router for the route leaking and the BGP advertisement of the appropriate prefixes, I will demonstrate this with a simulated hybrid topology.
Figure 2: Simulated Hybrid Topology
Explanation of Simulated Hybrid Topology:
- In this topology, I’ve used two AWS accounts:
- One account is the AWS Infra account
- The other account simulates onPrem
- A C8Kv router has been spun up in the onPrem account, which establishes a site-to-site VPN to the AWS Infra account through a Transit Gateway (TGW)
- The ACI Spine, NDO, and CNC are just Ubuntu EC2 instances, used to validate that the route leak is working and prove that connectivity is good
- ND/NDO is in the management VRF
- The ACI Spine is in ISN-VRF
📙 I am not going to show the configuration for bringing up the site-2-site VPN, since that is pretty routine and not relevant to this discussion. However, I do want to point out that in this topology I’ve doubled up on G2 (which is in ISN-VRF) for both the IPSec connectivity and the ACI Spine connectivity.
⚠️ If you want to simulate this topology yourself, please feel free to do so. A few things to keep in mind:
a) on the TGW routing table, make sure to enable route propagation
b) on the routing table for the CNC ENI, make sure to add a route for 0.0.0.0/0 with the TGW as next hop
c) for the C8Kv onPrem IPSec router, make sure to disable the source/destination check in AWS for both ENIs
d) on the routing table for the ND/NDO ENI, make sure to add a route for the CNC IP 10.10.10.99 pointing to the ENI of G2 of the C8Kv
e) The IPSec configuration needed on the C8Kv for the site-2-site VPN was downloaded from the AWS Site-2-Site VPN configuration menu and modified. Since the IPSec connectivity terminates on G2, which is in ISN-VRF, a few modifications were needed to make it work. The relevant configuration to bring up the IPSec site-to-site VPN on the C8Kv is shown below. Note the addition of "match fvrf ISN-VRF" to the crypto ikev2 policy and crypto ikev2 profile; the tunnel interface also requires "tunnel vrf ISN-VRF".
```
crypto ikev2 proposal PROPOSAL1
 encryption aes-cbc-128
 integrity sha1
 group 2
!
crypto ikev2 policy POLICY1
 match fvrf ISN-VRF                 ! <------------------------
 match address local 10.20.20.10
 proposal PROPOSAL1
!
crypto ikev2 keyring KEYRING1
 peer 184.108.40.206
  address 220.127.116.11
  pre-shared-key PF8Tj47q7poc5vBoTzsQk5OdxsyhCkVM
!
crypto ikev2 profile IKEV2-PROFILE
 match fvrf ISN-VRF                 ! <------------------------
 match address local interface GigabitEthernet2
 match identity remote address 18.104.22.168 255.255.255.255
 authentication remote pre-share
 authentication local pre-share
 keyring local KEYRING1
 lifetime 28800
 dpd 10 10 on-demand
!
crypto isakmp keepalive 10 10
!
crypto ipsec security-association replay window-size 128
!
crypto ipsec transform-set ipsec-prop-vpn-066b40d9738c55949-0 esp-aes esp-sha-hmac
 mode tunnel
!
crypto ipsec df-bit clear
!
crypto ipsec profile ipsec-vpn-066b40d9738c55949-0
 set transform-set ipsec-prop-vpn-066b40d9738c55949-0
 set pfs group2
 set ikev2-profile IKEV2-PROFILE
!
interface GigabitEthernet2
 vrf forwarding ISN-VRF
 ip address dhcp
 negotiation auto
 no mop enabled
 no mop sysid
end
!
interface Tunnel1
 vrf forwarding ISN-VRF
 ip address 169.254.110.46 255.255.255.252
 ip tcp adjust-mss 1379
 tunnel source GigabitEthernet2
 tunnel mode ipsec ipv4
 tunnel destination 22.214.171.124
 tunnel vrf ISN-VRF                 ! <------------------------
 tunnel protection ipsec profile ipsec-vpn-066b40d9738c55949-0
 ip virtual-reassembly
end
!
interface GigabitEthernet2
 vrf forwarding ISN-VRF
 ip address 10.20.20.10 255.255.255.128
 negotiation auto
 no mop enabled
 no mop sysid
end
```
Verifying that IPSec is up
Figure 2a: Verifying IPSec is up
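The IPSec state shown in Figure 2a can be checked with the standard IOS-XE show commands; a quick sketch (outputs omitted here):

```
! The IKEv2 SA should show the AWS peer in READY state.
show crypto ikev2 sa
! The crypto session should report UP-ACTIVE.
show crypto session
! Encrypt/decrypt counters should increment as traffic flows.
show crypto ipsec sa
```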
First let’s look at the VRF related configurations.
```
vrf definition ISN-VRF
 rd 100:100
 !
 address-family ipv4
  import map NDO-IP                 ! <--------- route-target route-map
  export map CNC-IP                 ! <--------- route-target route-map
  route-target export 1000:1000
  route-target import 1000:1000
 exit-address-family
!
vrf definition management
 rd 101:101
 !
 address-family ipv4
  import map CNC-IP                 ! <--------- route-target route-map
  export map NDO-IP                 ! <--------- route-target route-map
  route-target export 1000:1000
  route-target import 1000:1000
 exit-address-family
!
ip route vrf ISN-VRF 0.0.0.0 0.0.0.0 GigabitEthernet2 10.20.20.1
ip route vrf management 0.0.0.0 0.0.0.0 10.20.20.129
ip route vrf management 10.20.20.169 255.255.255.255 GigabitEthernet1 10.20.20.129   ! <----- host route for NDO pointing to default gateway of the same interface
ip route vrf ISN-VRF 10.10.10.99 255.255.255.255 169.254.110.45                      ! <----- host route for CNC pointing to the BGP peer IP
!
route-map NDO-IP permit 0
 match ip address prefix-list NDO-IP
!
route-map CNC-IP permit 0
 match ip address prefix-list CNC-IP
!
ip prefix-list CNC-IP seq 5 permit 10.10.10.99/32    ! <----- host IP for CNC OOB
ip prefix-list NDO-IP seq 5 permit 10.20.20.169/32   ! <----- host IP for NDO OOB
!
interface GigabitEthernet2
 vrf forwarding ISN-VRF
 ip address 10.20.20.150 255.255.255.128
 negotiation auto
 no mop enabled
 no mop sysid
end
!
interface Tunnel1
 vrf forwarding ISN-VRF
 ip address 169.254.110.46 255.255.255.252
 ip tcp adjust-mss 1379
 tunnel source GigabitEthernet2
 tunnel mode ipsec ipv4
 tunnel destination 126.96.36.199
 tunnel vrf ISN-VRF
 tunnel protection ipsec profile ipsec-vpn-066b40d9738c55949-0
 ip virtual-reassembly
end
!
interface GigabitEthernet2
 vrf forwarding ISN-VRF
 ip address 10.20.20.10 255.255.255.128
 negotiation auto
 no mop enabled
 no mop sysid
end
```
Points to note on the above VRF-related configurations:
- Both VRFs import and export route targets with route-maps. The route-maps allow only the NDO IP and the CNC IP to be leaked to the other VRF.
- Static host routes for the CNC IP and the NDO IP have been added appropriately. This is because the BGP configuration advertises these prefixes with network statements, and BGP will not advertise a network statement unless a matching route exists in the routing table.
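To spot-check the route leak on the onPrem router itself, a few standard IOS-XE show commands can be used (outputs omitted here):

```
! The CNC host route should appear in the management VRF...
show ip route vrf management 10.10.10.99
! ...and the NDO host route should appear in ISN-VRF.
show ip route vrf ISN-VRF 10.20.20.169
! Confirm the import/export route-maps are attached to each VRF.
show vrf detail ISN-VRF
show vrf detail management
```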
Next, let’s look at the BGP configuration:
```
router bgp 65000
 bgp log-neighbor-changes
 bgp graceful-restart
 neighbor 169.254.110.45 remote-as 64512
 neighbor 169.254.110.45 ebgp-multihop 255
 !
 address-family vpnv4
 exit-address-family
 !
 address-family ipv4 vrf ISN-VRF
  network 10.10.10.99 mask 255.255.255.255   ! <--- CNC host IP explicitly advertised so it is seen in vrf management
  network 10.20.20.0 mask 255.255.255.128
  neighbor 169.254.110.45 remote-as 64512
  neighbor 169.254.110.45 ebgp-multihop 255
  neighbor 169.254.110.45 activate
 exit-address-family
 !
 address-family ipv4 vrf management
  network 10.20.20.128 mask 255.255.255.128
  network 10.20.20.169 mask 255.255.255.255  ! <--- NDO host IP explicitly advertised so it is seen in vrf ISN-VRF
 exit-address-family
```
Points to note on the above BGP-related configurations:
- The CNC host IP is being explicitly advertised so that it is seen in vrf management
- The NDO host IP is being explicitly advertised so that it is seen in vrf ISN-VRF
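To verify the BGP side, a couple of standard IOS-XE show commands can be used (outputs omitted here):

```
! The eBGP session to the AWS peer should be Established.
show bgp vpnv4 unicast all summary
! The NDO host route should be advertised to the AWS peer from ISN-VRF.
show bgp vpnv4 unicast vrf ISN-VRF neighbors 169.254.110.45 advertised-routes
```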
As you can see below:
1) CNC IP is being advertised to VRF management of onPrem Router
Figure 3: CNC IP shows in VRF Management of onPrem Router
2) NDO IP is being advertised to VRF ISN of onPrem Router
Figure 4: NDO IP shows up in VRF ISN of onPrem Router
3) The Spine can ping CNC OOB. This is not really needed; the Spine only needs to reach Gig4 of the cloud routers. You can modify the appropriate security groups in AWS accordingly.
Figure 5: ACI onPrem Spine can reach CNC OOB (not really needed)
4) ND/NDO can reach CNC OOB
Figure 6: ND/NDO can reach CNC OOB
📙 Note: CNC will still require outgoing Internet access to be able to make API calls to AWS. However, incoming Internet access can be blocked by modifying the security group accordingly. You could also use a NAT gateway in a public subnet and configure the route tables for the CNC OOB ENI accordingly to be able to reach the Internet.
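If you use the NAT gateway option, the default route for the CNC OOB subnet can be pointed at the NAT gateway; a sketch with placeholder resource IDs:

```
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0
```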
[AWS Direct Connect Quotas](https://docs.aws.amazon.com/directconnect/latest/UserGuide/limits.html "AWS Direct Connect Quotas")