ACI/Cloud Extension Usage Primer (Azure) – Simple Service Graph with Azure Application Gateway & vNET Peering

Release 5.0.2e of cAPIC has extensive support for integrating load balancer and firewall services with your workloads in Azure Cloud.  Once you understand the basic rules for deploying these services, you will be able to integrate them efficiently to suit your requirements.

Below is a list of service integration features that are available in Azure Cloud with release 5.0.2e of cAPIC.

  • Multi-Node Service Insertion capability / Inter-VNET contract with Multi-Node Service Graph
  • Support for Azure Application Gateway v2 SKU
  • Support for Azure Load Balancer
  • Static IP support for both Azure Application Gateway and Azure Load Balancer
  • Support for third-party firewall
  • Redirect (UDR: User Defined Route) support for service graph

If you recall, we had built an overlay-2 pseudo-VRF in the Infra vNET (HUB vNET) during the initial cAPIC 5.0.2 setup.  overlay-2 can be used as a central area to host your services so they can be used by multiple vNETs.

In this write-up I will start with a very simple insertion of an Azure Application Gateway.  vNET peering is not a requirement for this particular example (you could do the same with VGW), but I want to build this on top of the vNET peering build that we did in the last article.  That way we can continue with this exercise, and in subsequent write-ups we will evolve it into a multi-node service insertion exercise using the hub and spoke (vNET peering) topology.

When you start deploying your services in the Infra vNET (overlay-2), this will require the use of vNET peering. That makes it very convenient to place your service devices in a central location so you can share them between multiple vNETs.  This is also referred to as the HUB and SPOKE topology.

I was initially planning to write one article to show you all this, but on further reflection, I decided to break this into 3 parts. This will make it easier to understand in smaller chunks, build your skills and learn the basic rules, so you can then move on to more complex builds that you might want to implement for your particular requirements. 

The 3 articles I plan on writing on this are:

  • (current write-up) Build on the last vNET peering build and insert an Azure Application Gateway in the provider vNET
  • Replace the Azure Application Gateway with an Azure Network Load Balancer
  • Use the overlay-2 VRF in the Infra vNET as a central area to deploy multi-node service insertion

Before we start on the Azure Application Gateway discussion/implementation, let’s quickly go over what the Azure Application Gateway and the Azure Network Load Balancer are.

Azure Application Gateways:

This is also known as the Azure Application Load Balancer or ALB.  It is basically a specialized Layer 7 load balancer that balances web traffic, namely HTTP and HTTPS.   It can also do URL filtering, redirection and forwarding based on user-defined rules.

ALBs can be deployed in 2 ways:

  • Internet-facing:  Also known as North/South
  • Internal-facing:  Also known as East/West

In the ACI implementation model, the ALB is deployed by associating the ALB with a Service Graph and then tying the Service Graph to an ACI contract.  Servers in the provider EPG are dynamically added to the backend pool.

Azure Network Load Balancers:

This is also known as the Azure NLB. It is a Layer 4 device and distributes inbound packets based on L4 ports.   In other words, it can load balance any TCP/UDP port.   NLBs can also be deployed Internet-facing or internal-facing.

There are 2 modes of operation for the NLB (a sketch of what these modes look like in native Azure terms follows this list):

  • Forward Mode: Here you specifically list which ports you want to forward to the backend server farms.
  • HA Port Mode: This mode will forward all TCP/UDP ports to the backend server farms.
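
For illustration only, the two modes roughly correspond to the following native Azure load-balancing rules. In a cAPIC-managed deployment these rules are pushed for you when you configure the service graph; the Azure CLI commands below, and the resource group / load balancer names in them, are placeholders shown purely to make the distinction concrete (HA ports is expressed as protocol All with ports 0, and is only available on an internal Standard LB).

  • # Forward mode: forward only TCP port 80 (placeholder resource names)
  • az network lb rule create --resource-group <your-rg> --lb-name <your-nlb> --name fwd-http --protocol Tcp --frontend-port 80 --backend-port 80 --backend-pool-name <your-pool>
  • # HA Port mode: protocol All with ports 0 forwards every TCP/UDP port
  • az network lb rule create --resource-group <your-rg> --lb-name <your-nlb> --name ha-ports --protocol All --frontend-port 0 --backend-port 0 --backend-pool-name <your-pool>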

In the ACI implementation model, the NLB is deployed by associating the NLB with a Service Graph and then tying the Service Graph to an ACI contract. Servers in the provider EPG are dynamically added to the backend pool.

Now that we’ve discussed the basic theory of ALB/NLB, let’s move on with our ALB build on top of our last vNET peering build.  Please follow along with this exercise in your own setup to get a good hands-on understanding.

Where we left off in the last vNET peering build.

We had built a Tenant with 2 vNETs, WEB and APP.  We utilized vNET peering and explored what happens under the hood and how the packets flow in this sort of topology. As a recap, our previous build topology is shown below.

Figure 1

The diagram below shows the logical representation of the build.  Remember that in the Infra vNET we had also built overlay-2 with RFC 6598 IPs.  We won’t use overlay-2 in this exercise.  We’ll use it when we demonstrate the multi-node Service Graph implementation.

Also, note that the figure below shows the components of the Hub vNET (overlay-1 and overlay-2) just for completeness, because we are using vNET peering for traffic to go from the consumer region to the provider region and vice versa (using the built-in NLB in overlay-1 and the cloud CSRs for routing).  However, this Service Graph could very well have been done with VGW peering instead of vNET peering.  In that scenario, there is no requirement that the provider region have cloud CSRs.  You only need to pay attention to that rule if the service devices were being placed in the Hub vNET (overlay-2).

However, please keep in mind the benefits of using vNET peering instead of VGW.   VGW peering is IPsec over the regular Internet.  This means you will be subject to limited bandwidth (around 1.25 Gbps) and will incur higher, unpredictable latency.    vNET peering, on the other hand, is static peering that rides entirely on the Azure backbone.  Your packets don’t traverse the Internet to go from region to region, so you get the benefit of much higher bandwidth (around 20 Gbps) and much lower, predictable latency when using vNET peering.

Figure 2

Also, remember we had a contract between the WEB EPG and the APP EPG, WEB being the consumer and APP being the provider.  We used vNET peering and the vNETs peered to the HUB vNET.  The HUB vNET was in the East region, since the East region was the only region that had the cloud CSRs (it is also our home region).

I want to list 3 basic rules that you should always follow when doing service integration in a Cloud ACI fabric in Azure.

  1. If using the HUB vNET (overlay-2) to host the service devices, the Service Graph devices should always be built in the provider vNET region where cloud CSRs are present. In our first example we are not using overlay-2 to host the ALB.  So, technically we could place the ALB in either the West US region or the East US region (regardless of whether the region has CSRs or not), as long as the vNET in that region is the provider side of the contract (the service device still has to be on the provider side).  However, to keep the flow consistent for this example, we will build the ALB in the eastus region in the Tenant (VRF-APP) vNET itself.  Remember, our provider was EPG-APP, which is in VRF-APP in the East US region.
  2. The service devices should not use the IP subnet used by the workloads themselves.  Bring up a new, unique subnet for the service devices within the same vNET CIDR.
  3. If a VM in the provider region (the one with the service graph) also has Internet connectivity through the Azure Internet gateway, that VM cannot use an Azure Basic public IP.  It must use an Azure Standard SKU public IP.

To continue with our build of ALB, we have to do a few things to our existing vNET Peering Tenant Topology. (please see markings in diagram below)

  1. Since the APP VM was on the provider side and it was configured for Internet access, we need to change its public IP from a Basic to a Standard SKU IP
  2. To test out the ALB we need to spin up another APP VM (this time, use a Standard SKU IP from the beginning on that one)
  3. Create the ALB device in the provider region (in this example, directly on the Tenant VRF, VRF-APP)
  4. Create the Service Graph and attach it to the EW contract
Figure 3

Below is a logical diagram of the Service Graph implementation. 

Note that we are following the rules we listed above:

  1. ALB is in the provider vNET.
  2. ALB is in its own subnet (10.80.254.0/24).   APP VMs are in the 10.80.5.0/24 subnet.  The CIDR for the vNET is 10.80.0.0/16.
  3. Also note that on the ALB, the unchecked boxes mean that there is no redirect configured on the ALB.  ALBs always do SNAT (Source NAT) and DNAT (Destination NAT), so configuring redirect on the ALB is not an option that’s required or available.
Figure 4

Once we finish with the implementation we will do some packet captures to see how the SNAT and DNAT are taking place.   What we will see is represented below.

Figure 5

Task 1/2: Let’s start with our first set of tasks to build this topology.

Adding Standard SKU Public IP to existing APP VM and spinning up another APP VM with Standard SKU Public IP

Figure 6

To do this, go to the Azure console, type in “public” in the search bar and select Public IP addresses.  Click Add and create 2 Standard SKU public IPs: one for APP-VM1 and one for APP-VM2.
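
If you prefer the CLI over the portal, a minimal Azure CLI sketch for this step is shown below.  The resource group and IP names are placeholders/assumptions; adjust them to your environment (Standard SKU public IPs use static allocation).

  • # placeholder names; Standard SKU implies static allocation
  • az network public-ip create --resource-group <your-rg> --name APP-VM1-pubIP --sku Standard --allocation-method Static
  • az network public-ip create --resource-group <your-rg> --name APP-VM2-pubIP --sku Standard --allocation-method Static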

Figure 7

Now let’s go to our previous APP VM and change the original Basic SKU IP to the Standard SKU IP that we created.   Go to the APP VM, click on Networking and then on the network interface for that VM.

Figure 8

Go to IP Configuration / ipconfig1 and then disassociate the old Basic IP from this NIC

Figure 9

Now, associate the Standard SKU IP we created for APP VM1 with this NIC and save.
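
As an aside, the same association can be done from the Azure CLI; a hedged sketch is below, where the NIC name is a placeholder for whatever your environment uses.

  • # placeholder NIC name; attaches the Standard SKU IP to ipconfig1
  • az network nic ip-config update --resource-group <your-rg> --nic-name <app-vm1-nic> --name ipconfig1 --public-ip-address APP-VM1-pubIP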

Figure 10

Now spin up the 2nd APP VM.   Make sure to use the 2nd Standard SKU IP we created (the one for APP VM2).   Also, don’t forget to tag the VM as “tier==app” (or whatever you had used for the EPG selector when creating the EPG, such as “name==app” or “tagp==app”).  That’s the EPG selector that we had used for that EPG from MSO.
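
If you want to script this step, a rough Azure CLI sketch is below.  The image, admin user, vNET/subnet and resource group names are all assumptions, so substitute the values cAPIC created for your APP vNET and subnet; the important parts are attaching the Standard SKU public IP we made and applying the tag so the EPG selector picks the VM up.

  • # placeholder names/image; key points: existing Standard SKU public IP + tier=app tag
  • az vm create --resource-group <your-rg> --name APP-VM2 --image <ubuntu-image> --admin-username azureuser --generate-ssh-keys --vnet-name <vrf-app-vnet> --subnet <app-subnet> --public-ip-address APP-VM2-pubIP --tags tier=app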

Figure 11

Since it is an ALB that we are deploying, we need to test that the load balancer is working using port 80.  So, we need to deploy web servers on the APP VMs.  To do this quickly, you can use the instructions below and pull in the docker-compose files from my GitHub repository.

Figure 12

Before you do the below, please make sure that you allow packets to go out to the Internet from EPG-APP.  To do this, make sure that you have also associated the NS-APP-C2 contract from EPG-APP (consumer) to EXT-EPG-APP (provider).  Do this from the MSO.

      Installing Web Server for APP VM1

  1. From Azure console get the public IP of APP-VM1
  2. SSH to APP-VM1
  3. Run the below commands to get your web container ready (should take a few minutes only)
  • sudo  -i
  • apt-get update && apt-get upgrade  -y
  • echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
  • sysctl  -p
  • sysctl --system
  • exit
  • sudo apt install docker.io -y
  • sudo systemctl start docker
  • sudo systemctl enable docker
  • sudo groupadd docker
  • sudo usermod -aG docker $USER
  • log out and ssh back in for this to work
  • docker  --version
  • sudo apt install docker-compose  -y
  • git clone https://github.com/soumukhe/aws-aci-lb-ec2-1.git
  • cd aws-aci-lb-ec2-1/
  • docker-compose up  --build -d
  • echo "I am app-vm1 " > ~/aws-aci-lb-ec2-1/html/index.html 
Figure 13
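
Before moving on to APP VM2, you can optionally sanity-check the container locally from the APP-VM1 shell.  This assumes the compose file publishes the web container on host port 9001, which is the backend port we will reference later when configuring the listener.

  • # run on APP-VM1; should print "I am app-vm1"
  • curl -s http://localhost:9001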

Similarly, for APP VM2, do the below:

Installing Web Server for APP VM2

  • sudo  -i 
  • apt-get update && apt-get upgrade  -y 
  • echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf 
  • sysctl  -p 
  • exit
  • sudo apt install docker.io -y
  • sudo systemctl start docker 
  • sudo systemctl enable docker 
  • sudo groupadd docker 
  • sudo usermod -aG docker $USER 
  • log out and ssh back in for this to work
  • docker  --version
  • sudo apt install docker-compose  -y
  • git clone https://github.com/soumukhe/aws-aci-lb-ec2-2.git
  • cd aws-aci-lb-ec2-2/
  • docker-compose up  --build -d 
  • echo "I am app-vm2 " > ~/aws-aci-lb-ec2-2/html/index.html
Figure 14

We are now ready with the VMs.

Task 3: Create 2nd subnet for ALB and create ALB

Figure 14a

For this, go to MSO, Site Local Template / VRF-APP (the provider vNET).  Add the 2nd subnet of 10.80.254.0/24 (give it a name if you wish).  Make sure to save and deploy.

Figure 15

Now, on cAPIC, verify from Application Management / Cloud Context Profiles that the ALB subnet you put in on VRF-APP is showing and that there are no faults.

Figure 16

Verify from the Azure console also that both subnets are there on the egress route for the APP object
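
You can also do this check from the Azure CLI.  A minimal sketch is below; the resource group, vNET and route table names are placeholders for whatever cAPIC created in your tenant.

  • # list both subnets in the provider vNET (placeholder names)
  • az network vnet subnet list --resource-group <your-rg> --vnet-name <vrf-app-vnet> -o table
  • # list the routes in the egress route table, if you want to check those as well
  • az network route-table route list --resource-group <your-rg> --route-table-name <egress-route-table> -o table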

Figure 17

Now, let’s create the Provider Side ALB

Figure 18

On cAPIC, go to Application Management / Services / Devices and click on Actions.  Click Create Device (if you recall, the process of creating the device in an on-prem physical fabric is exactly the same; the device has to be created on the APIC).

Figure 19

On the Device Creation page, choose options similar to the diagram below

Figure 20
Figure 21

On the next page, finish off the device creation from cAPIC.  You could also configure the Service Graph from cAPIC, but let’s be consistent and do it from MSO.

Figure 22

On the MSO UI, go to your Schema, Main Template, click on Service Graph, name it appropriately, and then drag the load balancer icon to the drop device box as shown in the diagram below.

Figure 23

Now, go to the Site Local instantiation of the Template, click on Service Graph, click on Load Balancer and then associate the ALB with the Service Graph.

Figure 24

Now, go to your main template, click on the EW Contract and attach the Service Graph to the contract

Figure 25

Now, it’s time to configure the listeners.  This is where you configure the rules, backend pools, etc.  Click on Add Listeners. (You have to click on the Site Local Template, then on EW-C1 and then on the Load Balancer icon for this option to show up.)

Figure 26

Add a name for the listener, and to keep it simple, just modify the default rule to serve your purpose

Figure 27

In the rule, select Forward To.   Our Docker container listens on port 9001 (do a docker ps on the APP VM and you will see this, as shown below). For the backend pool choose EPG-APP (remember, the endpoints in that EPG are dynamically added to the backend pool).  Don’t forget to go to the Main Template and click Deploy to Sites.
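
For reference, a quick way to see the host-port mapping on the APP VM is the below; the exact container name will depend on the compose project, but you should see host port 9001 mapped to port 80 in the container.

  • # run on the APP VM; look for something like 0.0.0.0:9001->80/tcp
  • docker ps --format 'table {{.Names}}\t{{.Ports}}'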

Figure 28

All Done Configuring !!!

Time to Test

SSH to the WEB VM (our consumer) and curl to the VIP IP 10.80.254.10.  You will notice that it reaches APP-VM1 and then APP-VM2 in a round-robin fashion.
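
A simple way to watch the round robin in action from the WEB VM shell is a small curl loop like the one below (assuming the listener frontend is on the default HTTP port).

  • # run on the WEB VM; responses should alternate between "I am app-vm1" and "I am app-vm2"
  • for i in 1 2 3 4; do curl -s http://10.80.254.10; done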

Figure 29

While doing the curl from the WEB VM to the ALB VIP, do some tcpdumps on the WEB and APP VMs.

  • APP VM: sudo tcpdump  -i eth0 -n -s 150  -vv net 10.80.254.0/24
  • WEB VM:  sudo tcpdump  -i eth0 -n -s 150  -vv net 10.80.254.0/24

You will notice that on the APP VM the packets are not being received from the VIP.  They are being received from some other IPs in the VIP subnet.

Figure 30

You could also use Azure Network Watcher to capture packets.

Figure 31

To do this, go to the Azure console, search for “network” and click on Network Watcher

Figure 32

Click on Packet Capture and then click Add.   Choose the correct resource group, and choose an APP VM.  Keep the default storage account.  I chose a minimal capture.  Optionally, you can even set up capture filters.  Click Save.
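
If you would rather drive Network Watcher from the CLI, a hedged sketch is below; the resource group, VM and storage account names are placeholders.

  • # start a capture against an APP VM (placeholder names)
  • az network watcher packet-capture create --resource-group <your-rg> --vm APP-VM1 --name alb-capture --storage-account <your-storage-account>
  • # stop it when you are done curling (location is the VM's region)
  • az network watcher packet-capture stop --location eastus --name alb-capture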

Figure 33

Refresh your screen till you see the capture is running

Figure 34

Now do your curls to the ALB VIP again and then stop the capture

Figure 35

Click on stopped capture and download the capture

Figure 36

Open up the capture in Wireshark on your local machine and decode it.  You will notice the same results we saw from tcpdump.  Namely, on the APP VM, packets are coming from some IPs in the LB subnet and not from the VIP itself.

Figure 36a

One more item I wanted to point out: since we are using vNET peering, our main traffic between vNETs is still going through our Infra load balancers and then getting routed on the cloud CSRs.   (When we use service devices on overlay-2 for multi-node service graphs in a later write-up, you will see that this is not the case.)  If you want to verify this, you can run the debugs on the CSR as shown below.

The commands on the CSRs to be used are as follows:

  • debug platform packet-trace packet 128
  • debug platform condition ipv4 10.70.5.4/32 both
  • debug platform condition start
  • debug platform packet-trace packet 128

To View:

  • show platform packet-trace statistics
  • show platform packet-trace summary

To Stop:

  • debug platform condition stop
Figure 37

Viewing your Tenant ALB from Azure Console.

Go to the Azure console, search for “application” and click on Application Gateway.  You can see your ALB there.  Click on the ALB.

Figure 38

View your Frontend config, Backend Pool members and VIP IP from there.

Figure 39

View what port the listener is listening on

Figure 40

View and Test health Probes from Application Gateway (ALB)

Figure 41

Click Test to see results of Health Probe

Figure 42

Results of the health probe (they look good).   Note that our health probes are going out on TCP port 9001, since that’s what we configured from MSO for the listener backend port (the Docker web containers expose port 9001 on the host and map it to port 80 in the web container).

Figure 43

You can also view Application Gateway / Backend health and confirm that the health probes are sent on port 9001.
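
The same backend health view is available from the Azure CLI; a minimal sketch (with placeholder resource group and gateway names) is below.

  • # placeholder names; shows per-backend health, including the port 9001 probe results
  • az network application-gateway show-backend-health --resource-group <your-rg> --name <your-alb-name>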

Figure 44
