Consuming AWS Native Services from applications running on onPrem ACI Fabric

Table of Contents:

  1. Introduction
  2. Native Service examples on AWS shown in this writeup
  3. Overall Example Topology & Explanation
  4. Route53 Private Hosted Zone setup for oncloud.com
  5. CoreDNS Install and setup for onprem.com
  6. Route53 Resolver Inbound Endpoint
  7. Route53 Resolver Outbound Endpoint
  8. S3 VPC Interface Endpoint
  9. EC2 VPC Interface Endpoint
  10. SQS VPC Interface Endpoint
  11. References

Introduction

Customers using Hybrid AWS/CNC with onPrem ACI fabric often ask how they can consume AWS Native Services through private connectivity (meaning not through the Internet). Native Cloud Services (for example, S3, SQS, DynamoDB) don’t normally have an ENI on the customer VPCs, so they generally have to be accessed through the public Internet using the URL of the Native Service.

AWS offers VPC Interface Endpoints for this purpose. Using VPC Interface Endpoints for AWS Services, the service can be accessed directly from a private VPC, or even from a hybrid onPrem site, without having to go through the Internet. Implementing a VPC Interface Endpoint installs ENIs on the VPC subnets, and these ENIs serve as the entry point to the Native Service.

The implication of this is that any AWS Native Service can be accessed directly from onPrem apps running on VMs on the ACI Fabric, or from apps running on private EC2s in the CNC Fabric.

⚠️ AWS also supports VPC Gateway Endpoints. However, VPC Gateway Endpoints don’t install ENIs on the VPC subnets, so they only offer access to the service from the VPC where the Gateway Endpoint is implemented. Connectivity from onPrem to the service would not work. Further, Gateway Endpoints only support S3 and DynamoDB. Interface Endpoints are the newer implementation and are what need to be used for this purpose.
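Later sections walk through creating these Interface Endpoints from the AWS console. For orientation, the aws cli equivalent looks roughly like the sketch below, assuming region us-east-1 (the region used later in this writeup); all IDs are hypothetical placeholders. Note the --vpc-endpoint-type Interface flag, which is what distinguishes this from a Gateway Endpoint.

# Sketch only: CLI equivalent of the console workflow shown later (all IDs are placeholders)
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.s3 \
    --subnet-ids subnet-0aaaa1111bbbb2222c subnet-0dddd3333eeee4444f \
    --security-group-ids sg-0123456789abcdef0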

Native Service examples on AWS shown in this writeup

In this writeup we will go through detailed implementation of the following AWS Services:

  1. DNS Integration between onPrem and AWS with Route53 Resolver inbound and outbound endpoints
  2. S3 VPC Interface Endpoints, so you can use S3 from onPrem through a private connection
  3. EC2 VPC Interface Endpoints, so you can use the aws cli for EC2s from onPrem through a private connection
  4. SQS VPC Interface Endpoints, so you can use SQS from onPrem through a private connection

Overall Example Topology & Explanation

The Overall Topology for this hybrid fabric is shown in the diagram below.
file
Figure 1: Overall Hybrid Topology

Explanation of Topology:

  • OnPrem side is an ACI Fabric with 1 Spine and 1 Leaf
  • The DNS Domain for the onPrem side is onprem.com. DNS Domain for cloud side is oncloud.com.
  • onpremvm1 is the onPrem VM from where we will do our testing. This VM has no Internet connectivity
  • onpremDNS is the VM on onPrem running CoreDNS. This VM has a 2nd NIC that has Internet connectivity so it can talk to upstream DNS providers for resolving names outside of its domain.
  • On the Cloud Side there is an AWS Infra and an AWS Tenant
  • On the Cloud side Route53 Private DNS Hosted zone is oncloud.com
  • On the Cloud side there are 2 Service Subnets (for VPC interface endpoint ENIs) that have been configured from NDO. They are in different zones in the same Region
  • On the Cloud side each Zone has a /28 subnet for Infra TGW Connectivity
  • On the Cloud side an EC2 has been brought up on one of the Service Subnets, "svc-subnet1", for testing purposes. Generally speaking you would want to bring up dedicated subnets for VPC Interface Endpoint services and EC2s. However, in this case I did it this way to make the diagram easier to follow.
  • From NDO, a Tenant and EPG is stretched across onPrem and Cloud in this example. Since the EPG is a stretched EPG the onPrem VM and cloud EC2 should be able to talk to each other.

Route53 Private Hosted Zone setup for oncloud.com

For the Cloud side we set up a very basic domain using a Route53 Private Hosted Zone. We put 1 entry in the Hosted Zone for the AWS EC2 running on it.

The figures below show the workflow to do this from the AWS Console.

file
Figure 2: Created a new Private Hosted Zone called oncloud.com

file
Figure 3: Associate the VPC that was created from NDO to this Route53 Private Hosted Zone

file
Figure 4: Added an A record for the EC2 running on AWS. I called it app1, so the fully qualified name is app1.oncloud.com

file
Figure 5: Verifying that EC2 can resolve the name app1.oncloud.com
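If you prefer the aws cli over the console, the same zone and record can be created roughly as below (a sketch; the caller reference is arbitrary, and the VPC ID, hosted zone ID, and EC2 private IP are hypothetical placeholders):

# Sketch: create the private hosted zone and associate the NDO-created VPC (placeholder IDs)
aws route53 create-hosted-zone \
    --name oncloud.com \
    --caller-reference oncloud-zone-001 \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0

# Add the A record for app1.oncloud.com (placeholder zone ID and IP)
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"app1.oncloud.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"10.70.1.10"}]}}]}'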

CoreDNS Install and setup for onprem.com

Spin up an Ubuntu VM. The VM should have one connection to the leaf for the EPG we created for this tenant. The other NIC should connect to another EPG or switch through which it can reach the Internet.

The figure below shows the configuration of the network interfaces for the Ubuntu VM. Note that ens192 is connected to our EPG and its MTU has been set to 1350 (1500 - 100 for IPSec - 50 for VXLAN).

file
Figure 6: Configuration of NICs for the CoreDNS Ubuntu VM.

Next, clone my git repo so you can install the CoreDNS container. Make sure to be in the home directory of the Ubuntu VM.
Follow the instructions in README.MD to install docker/docker-compose.

git clone https://github.com/soumukhe/coredns_docker-compose.git

Once cloned, change directory to "coredns_docker-compose". Here you will see the following:

  • docker-compose.yml
  • Dockerfile
  • config directory

Change directory to the config directory. Here, rename db.osp.com to db.onprem.com. Then vi that file and make the 2 changes shown by the arrows (a sketch of the resulting zone file follows the list below).

  1. change the domain name to onprem.com.
  2. put in an A record for your onPrem VM (you can remove the other records)
    file
    Figure 6a: change domain name and put in A record for onPrem VM
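For reference, after those two edits the zone file should look roughly like the sketch below (the serial number and both IPs are hypothetical placeholders; your repo copy may differ in detail):

; db.onprem.com -- sketch with placeholder values
$ORIGIN onprem.com.
@   3600 IN SOA ns.onprem.com. admin.onprem.com. (
            2023010101 ; serial (placeholder)
            7200       ; refresh
            3600       ; retry
            1209600    ; expire
            3600 )     ; minimum TTL
@          IN NS ns.onprem.com.
ns         IN A  172.16.1.53   ; the CoreDNS server (placeholder IP)
onpremvm1  IN A  172.16.1.11   ; A record for the onPrem test VM (placeholder IP)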

Once done, modify the "Corefile" to look as shown below.
file
Figure 7: What the Corefile should look like

📙 Note: for all other domains, the resolution request is forwarded to the nameservers in /etc/resolv.conf. resolv.conf in my case points to Google's DNS server.
file
Figure 8: contents of /etc/resolv.conf for DNS Server
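Putting Figures 7 and 8 together, the Corefile ends up with roughly this shape (a sketch; the in-container path to the zone file is an assumption about the repo layout):

onprem.com {
    file /etc/coredns/db.onprem.com    # serve our own zone (path is an assumption)
    log
    errors
}

. {
    forward . /etc/resolv.conf         # all other domains go to the upstream resolvers
    log
    errors
}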

The final files in that directory should be like shown in the figure below.
file
Figure 9: Final contents of "coredns_docker-compose"

Now build and bring up the container with the command:

docker-compose up -d

⚠️ Make sure to read the README section titled "UDP 53 is used by resolv.conf, so docker container won’t be able to map 53:53/udp on base system" for how to work around it.
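The README has the authoritative steps; for orientation, on Ubuntu the port is usually held by systemd-resolved's stub listener, and a typical workaround looks like the sketch below (not necessarily the repo's exact procedure):

# See what currently owns UDP 53 (often systemd-resolved's stub listener on 127.0.0.53)
sudo ss -ulpn 'sport = :53'

# One common workaround: disable the stub listener, then restart systemd-resolved
sudo sed -i 's/^#\?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved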

Check to make sure that your container is running with the command:

docker-compose ps

file
Figure 10: Verify that CoreDNS container is running

Now for the last part, we need to ssh to onpremvm1 and configure it to use our DNS Server.
The netplan yaml for onpremvm1 is shown below.

file
Figure 11: netplan yaml file for onpremvm1

📙 Note from the figure above that the search domain also includes oncloud.com. This is because later, once we finish the Route53 Resolver Inbound Endpoint configuration, we can resolve EC2s running on the cloud by short name (instead of having to type the fully qualified name).
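For reference, the netplan yaml in Figure 11 follows this general shape (a sketch; the interface name, addresses, and CoreDNS server IP are hypothetical placeholders, and the MTU matches the 1350 calculated earlier):

# /etc/netplan/00-installer-config.yaml -- sketch with placeholder values
network:
  version: 2
  ethernets:
    ens192:
      mtu: 1350                       # 1500 - 100 (IPSec) - 50 (VXLAN)
      addresses: [172.16.1.11/24]     # placeholder
      nameservers:
        addresses: [172.16.1.53]      # the onPrem CoreDNS server (placeholder)
        search: [onprem.com, oncloud.com]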

To verify that it can resolve the name, check with nslookup (or dig)
file
Figure 12: verify that name resolution works for onpremvm1

Also, verify that names on the Internet can be resolved. You will need to resolve the VPC endpoint DNS names later for connecting to the AWS Native Services through private IPs.
file
Figure 13: Name resolution on Internet

Keep in mind that even though name resolution works for google.com, we cannot ping it from onpremvm1 since this VM has no Internet connectivity.
file
Figure 14: onpremvm1 does not have Internet connectivity

Route53 Resolver Inbound Endpoint

Route53 Resolver Inbound Endpoint is used so that onPrem VMs/hosts can resolve private IPs of EC2s/hosts on the cloud side. It brings up 2 ENIs on 2 VPC subnets that you specify. All we have to do is put a forwarding entry in the onPrem DNS server for the Cloud domain, in our case oncloud.com.

To bring up the inbound endpoint, go to the AWS console, navigate to Route53, select Resolver/Inbound endpoints, and create a new one.

  • Make sure to choose the VPC that was created by NDO/CNC
  • Also, choose the Security Group that was created by NDO/CNC

file
Figure 15: Creating Route53 Resolver Inbound Endpoint

Next choose 2 availability zones and the 2 subnets. In this case these will be the service EPG subnets that we created from NDO/CNC.
file
Figure 16: Choosing the zones and service EPG subnets

Tag the endpoint
file
Figure 17: Tagging the endpoint

The Resolver Inbound Endpoint will take a few minutes to become operational. Once operational, check the IP addresses it used for the ENIs installed on the subnets.
file
Figure 18: Checking the IP addresses for the Inbound Resolver Endpoint ENIs

Next go to your onPrem DNS server. Here you will need to add a forward entry for oncloud.com domain pointing to the IP address of the Resolver Inbound Endpoints.

To do this, modify the Corefile and add the forwarding entry as shown in the figure below.
file
Figure 19: Forwarding Entry added to CoreDNS Corefile for oncloud.com
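The added stanza looks roughly like this (the two IPs are placeholders for the Inbound Endpoint ENI addresses noted in Figure 18):

oncloud.com {
    forward . 10.70.1.5 10.70.2.5   # Route53 Resolver Inbound Endpoint ENIs (placeholders)
    log
    errors
}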

📙 Every time you change anything in the CoreDNS config, do a "docker-compose restart" for it to take effect.

docker-compose restart

Verify that you can resolve the cloud EC2 from onpremvm1 as shown below
file
Figure 20: onPrem VM can resolve Cloud VM Private IP

Since this is a hybrid cloud setup using NDO/CNC, you should be able to ping the private IP of the cloud EC2 using its DNS name.
file
Figure 22: Pinging from onPrem VM to cloud EC2 using DNS entry

Route53 Resolver Outbound Endpoint

Route53 Resolver Outbound Endpoint is used so that Cloud EC2s/hosts can resolve private IPs of onPrem VMs/hosts. It brings up 2 ENIs on 2 VPC subnets that you specify. All we have to do is create a forwarding rule that says to forward onprem.com to the onPrem DNS server.

To bring up the outbound endpoint, go to the AWS console, navigate to Route53, select Resolver/Outbound endpoints, and create a new one.

  • Make sure to choose the VPC that was created by NDO/CNC
  • Also, choose the Security Group that was created by NDO/CNC
    file
    Figure 23: Creating Route53 Resolver Outbound Endpoint

Choose the zones and subnets for the service subnets that were created from NDO/CNC
file
Figure 24: choosing zones and subnets

Finally Tag the resource
file
Figure 25: Tagging the Resolver Outbound Endpoint

Once the endpoint is available, you can see the IPs of the ENIs it installed in the Service Subnets.
file

Go to the bottom of the configuration for the endpoint and click on create rule.
file
Figure 26: Creating the forwarding Rule

For the details of the rule:

  • name the rule
  • choose the domain that needs to be forwarded, in our case onprem.com
  • choose the VPC which was created by NDO/CNC
    file
    Figure 27: Entering the DNS Domain that needs to be forwarded

Enter the onPrem CoreDNS Server IP addresses where the resolution request for onprem.com needs to be forwarded.
Also Tag the resource
file
Figure 28: Entering the onPrem CoreDNS server IP.
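If you prefer the aws cli, the rule can be created and attached to the VPC roughly as below (a sketch; the endpoint ID, rule ID, VPC ID, and CoreDNS IP are hypothetical placeholders):

# Sketch: forward onprem.com queries out through the Outbound Endpoint (placeholder IDs)
aws route53resolver create-resolver-rule \
    --creator-request-id onprem-fwd-001 \
    --name onprem-forward \
    --rule-type FORWARD \
    --domain-name onprem.com \
    --resolver-endpoint-id rslvr-out-0123456789abcdef0 \
    --target-ips Ip=172.16.1.53,Port=53

# Associate the rule with the NDO/CNC-created VPC
aws route53resolver associate-resolver-rule \
    --resolver-rule-id rslvr-rr-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0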

Verify that from the EC2 on the cloud you can resolve and ping onpremvm1 using its fully qualified name
file
Figure 29: Verifying onPrem hostname resolution from Cloud side

S3 VPC Interface Endpoint

S3 VPC Interface Endpoint can be used so that onpremvm1 can use S3 privately (without having to go through the Internet)

First let’s set up a bucket with no custom policies.
file
Figure 30: Setup a new bucket

On the AWS Console, go to VPC/Endpoints and create a VPC Endpoint. Name the Endpoint and select AWS services.
file
Figure 31: Creating VPC Interface Endpoint for S3

Make sure you choose the Interface Endpoint for S3 and not the Gateway Endpoint
file
Figure 32: Choosing VPC Interface Endpoint for S3

Select the VPC that was created by NDO/CNC
file
Figure 33: Choosing the VPC created by NDO/CNC

Select the 2 AZs and the service subnets that were created by NDO/CNC
file
Figure 34: Choosing the Zones and subnets for the Service-Subnets created by NDO/CNC

Select the Security Group created by NDO/CNC
file
Figure 35: Choosing the Security Group created by NDO/CNC

Tag the resource
file
Figure 36: Tagging the Endpoint

Once the Interface Endpoint is in available state, look at details and make a note of the DNS Name for the resource
file
Figure 37: DNS Name for VPC Interface Endpoint

Make a note of the IPs (private IPs) that were assigned to the new ENIs created by the interface endpoint. Go to the onPrem VM and verify that the endpoint DNS name can be resolved.
file
Figure 38: Verifying that onpremvm1 can resolve the AWS Interface Endpoint

On onpremvm1, install AWS CLI v2. Follow the AWS documentation at: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
The basic procedure is as below:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

📙 Since onpremvm1 does not have an Internet connection, download "awscliv2.zip" to the onprem DNS server and scp it to onpremvm1. Then unzip and install on onpremvm1.
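That transfer might look like this (user and host names are placeholders):

# On the onprem DNS server (which has Internet access):
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
scp awscliv2.zip ubuntu@onpremvm1:~    # placeholder user/host

# Then on onpremvm1:
unzip awscliv2.zip && sudo ./aws/install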

Check with:

aws --version
aws configure --profile mytenant     # to configure it
export AWS_DEFAULT_PROFILE=mytenant  # to set a default profile

Now you can use aws cli to work with S3

aws s3 ls --endpoint-url http://vpce-040db050456464062-egbfhsms.s3.us-east-1.vpce.amazonaws.com

file
Figure 39: Listing the bucket from onpremvm1

📙 Note:
optional parameter: --region us-east-1
I did not need it because I had configured my default region for the profile to be us-east-1 in aws configure.

You can do other operations as usual. Below is an example of uploading a text file to S3 from onpremvm1 without using the Internet.
file
Figure 40: example of uploading a text file to S3 from onpremvm1 without using Internet

Similarly, you can delete the object from the bucket directly from the onPrem VM through the private connection.
file
Figure 41: Removing bucket object from onpremvm1 (no Internet)
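The operations in Figures 40 and 41 follow this pattern; the bucket name is a placeholder, and the endpoint URL is the one noted in Figure 37:

# Upload a file to the bucket through the Interface Endpoint (bucket name is a placeholder)
aws s3 cp test.txt s3://my-test-bucket/ \
    --endpoint-url http://vpce-040db050456464062-egbfhsms.s3.us-east-1.vpce.amazonaws.com

# Delete the object, again without touching the Internet
aws s3 rm s3://my-test-bucket/test.txt \
    --endpoint-url http://vpce-040db050456464062-egbfhsms.s3.us-east-1.vpce.amazonaws.com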

EC2 VPC Interface Endpoint

The purpose of the EC2 VPC Interface Endpoint is so you can use the aws cli ec2 commands from the onPrem VM without an Internet connection.

You can look at the reference commands at:
https://docs.aws.amazon.com/cli/latest/reference/ec2/index.html

The procedure for creating the endpoint is the same as shown in the S3 VPC Interface Endpoint section.
The screenshots below can serve as a reference example.
file
file
file
file
Figure 42: Creating the EC2 VPC Interface Endpoint

file
Figure 43: Once done, make a note of the DNS Name for the endpoint.

file
Figure 44: Verifying that onPrem VM can resolve those names

Below are some examples of using the aws cli for ec2.
file
file
file
Figure 45: OnPrem VM using aws cli ec2 commands through Private Connection
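The commands in Figure 45 follow this pattern (the endpoint URL below is a hypothetical placeholder patterned on the EC2 endpoint DNS name from Figure 43):

# List instances through the EC2 Interface Endpoint (endpoint URL is a placeholder)
aws ec2 describe-instances \
    --endpoint-url https://vpce-0123456789abcdef0-abcdefgh.ec2.us-east-1.vpce.amazonaws.com

# Any other ec2 subcommand works the same way, for example:
aws ec2 describe-instance-status \
    --endpoint-url https://vpce-0123456789abcdef0-abcdefgh.ec2.us-east-1.vpce.amazonaws.com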

SQS VPC Interface Endpoint

The purpose of the SQS VPC Interface Endpoint is so you can use the aws cli sqs commands from the onPrem VM without an Internet connection.

You can look at the reference commands at:
https://docs.aws.amazon.com/cli/latest/reference/sqs/index.html

Before we proceed, create a new SQS queue as shown in the figure below.
file
Figure 46: Creating a very basic SQS Queue with all default configuration options

file
Figure 47: Grab the URL for the SQS queue

Now we need to create the SQS VPC Interface Endpoint to use that SQS queue from onPrem without going through the Internet.

The procedure for creating the endpoint is the same as shown in the S3 VPC Interface Endpoint section.
The screenshots below can serve as a reference example.
file
file
file
file
file
Figure 48: Creating the SQS VPC Interface Endpoint

file
Figure 49: Once done, make a note of the DNS Name for the endpoint.

file
Figure 50: Verifying that onPrem VM can resolve those names

Below is an example of sending a message to the SQS queue from the onPrem VM through private IPs.

file
Figure 51: Sending message to SQS queue from onPrem VM
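The send in Figure 51 follows this pattern (the queue URL is the one grabbed in Figure 47, shown here as a placeholder, and the endpoint URL is a hypothetical placeholder patterned on the SQS endpoint DNS name from Figure 49):

# Send a message to the queue through the SQS Interface Endpoint (URLs are placeholders)
aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-test-queue \
    --message-body "hello from onpremvm1" \
    --endpoint-url https://vpce-0123456789abcdef0-abcdefgh.sqs.us-east-1.vpce.amazonaws.com

# You could also pull the message back from onPrem instead of checking the console:
aws sqs receive-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-test-queue \
    --endpoint-url https://vpce-0123456789abcdef0-abcdefgh.sqs.us-east-1.vpce.amazonaws.com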

We know from Figure 51 that the message went through. However, if you wanted to, you could verify from the AWS console as shown below.
file
file
file
Figure 52: Verifying SQS message received from AWS Console

References

https://docs.aws.amazon.com/cli/latest/index.html
