ACI/Cloud Extension Usage Primer (Azure) – ACI 5.0.2 cAPIC Feature Listing and First Time Setup differences

In July of 2020, Cisco released ACI 5.0.2 cAPIC with rich feature support for Azure Cloud. In this writeup I will list the new features and then guide you through the First Time Setup differences for the ACI 5.0.2 cAPIC on Azure.

In subsequent writeups I will discuss the main features individually.

Nothing has changed in 5.0.2 cAPIC for AWS compared to 5.0.1 cAPIC, but it’s still better to keep all cAPICs and physical APICs running a uniform release.

Note: MSO release 3.0.2d was released at the same time; it is required in order to use the new 5.0.2 cAPIC Azure features from MSO. MSO 3.0.2d can run on the traditional Docker Swarm platform or on the SE 1.1.3d (K8s-based) platform. The instructions for installing the SE 1.1.3d OVA and the MSO 3.0.2d app on that platform have been added in the “Deploying MSO on Cisco Application Service Engine” post on unofficial.

The new feature summary is as follows:

  • Azure VNET Peering
  • Support for 3rd party firewall service chaining in Azure
  • Support static IP for load balancer
  • Tag-based search on Cloud APIC
  • Support Azure NLB automation with service chaining
  • Support Azure Multi-node service graph
  • Support Inter-VNET / VPC services
  • Multiple CIDR / subnet blocks in the infra VNET, with UDR in both user VNET and infra VNET for Azure
  • Custom naming for cloud resources
  • Support more than 32 characters for Tenant+VRF name
  • Support for comma-separated filters for rule creation in contract

cAPIC 5.0.2 First Time Setup Differences:

Please follow the previous post on installing Azure cAPIC from scratch with the following differences and additional items as you setup release 5.0.2 cAPIC.

Differences in First Time setup of cAPIC 5.0.2 as compared to cAPIC 5.0.1 for Azure:

  1. Make sure to subscribe to Cloud CSR 17.1, as compared to 5.0.1 cAPIC Azure, which needed Cloud CSR 16.12
  2. When deploying from the Azure Marketplace, some new options will show up that are fairly self-explanatory; you can take the defaults or change them if you wish
Figure 1

2a) Added 7/21/2020

Another thing came to my attention, and this is an important one. Starting with this release of cAPIC, the ARM template will not automatically configure the cAPIC PIP (Public IP) with a permanent public IP address. When you come to the template section that has the Public IP Address field, please make sure not to take the default “(new) CloudApic-pip”. If you did, it would give you a Basic SKU with a dynamic IP, which means that after a reboot the OOB IP address of the cAPIC would change. Your MSO mapping of the cAPIC and your on-prem perimeter firewall would then have to be modified, and that is no good.

When filling in the “Public IP Address” field in the template, click “Create new”, then change to either the “Standard” SKU or “Basic” with “Static” assignment.

This is shown in the diagram below.

Figure 1a
Figure 1b

In case you forgot to do this at initial install time, you can also change it later by disassociating the current dynamic IP, creating a new PIP (Standard, or Basic with Static), and associating it with the OOB NIC of the cAPIC.
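If you want to check whether an existing cAPIC PIP will survive a reboot, you can inspect the JSON from `az network public-ip show`. Below is a minimal sketch of that check; it assumes the camelCase field names (`sku.name`, `publicIpAllocationMethod`) that the Azure CLI emits in its JSON output, and the sample records are purely illustrative.

```python
import json

def pip_is_persistent(pip_json):
    """Return True if this public IP keeps its address across reboots.

    A Standard SKU PIP is always static; a Basic SKU PIP is only
    safe when its allocation method is explicitly Static.
    """
    pip = json.loads(pip_json)
    sku = (pip.get("sku") or {}).get("name", "Basic")
    method = pip.get("publicIpAllocationMethod", "Dynamic")
    return sku == "Standard" or method == "Static"

# Illustrative records shaped like `az network public-ip show -o json` output
dynamic_pip = '{"sku": {"name": "Basic"}, "publicIpAllocationMethod": "Dynamic"}'
static_pip = '{"sku": {"name": "Standard"}, "publicIpAllocationMethod": "Static"}'

print(pip_is_persistent(dynamic_pip))  # False - address changes on reboot
print(pip_is_persistent(static_pip))   # True
```

If the check comes back False for your cAPIC PIP, fix it via the disassociate / recreate / reassociate procedure described above.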

3) After cAPIC deployment has finished and you point your browser to cAPIC to do first time setup, in the Region Setup there are a few differences.

  • If you want to utilize the new vNET Peering feature (which is somewhat like the AWS TGW feature, with some differences; it will be discussed in another writeup on vNET Peering for Azure), then enable vNET Peering (highly recommended)
  • Enable Inter-Site Connectivity if this Azure fabric will be tied to other physical ACI or cloud ACI (Azure/AWS) sites

In my example below, I chose the home region (the region where both the cAPIC and the associated CSRs are deployed) to be Azure region eastus. I could have deployed CSRs in more regions as well, but chose not to.

Figure 2

4) Continuing with the FTS (First Time Setup) from the cAPIC, when it comes time to add the CIDRs, it is imperative that you add at least 3 CIDRs (of /24). The first one is the primary; the others are needed for the Network Load Balancer (the infra/hub NLB, which will be discussed in more detail in the future vNET Peering post) and for the CSRs in different regions. Note: changing this or adding more later can be a disruptive operation, so you want to get it right from the beginning.

Figure 3
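Before typing the CIDRs into the FTS screen, it can help to sanity-check the plan offline. The sketch below uses Python’s standard `ipaddress` module; the 10.33.x prefixes are hypothetical placeholders (use whatever fits your addressing plan), and the point is simply that you need at least three non-overlapping /24s.

```python
import ipaddress

# Hypothetical infra CIDR plan - the FTS requires at least three /24s:
# one primary, one for the infra/hub NLB, one for CSRs in other regions.
infra_cidrs = [
    ipaddress.ip_network("10.33.0.0/24"),  # primary
    ipaddress.ip_network("10.33.1.0/24"),  # infra/hub NLB
    ipaddress.ip_network("10.33.2.0/24"),  # CSRs in additional regions
]

assert len(infra_cidrs) >= 3, "cAPIC 5.0.2 FTS needs at least 3 CIDRs"
for i, a in enumerate(infra_cidrs):
    for b in infra_cidrs[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("infra CIDR plan is valid")
```

Checking this up front matters because, as noted above, changing the infra CIDRs after the fact can be disruptive.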

5) Continuing with first time setup, towards the end there is an option for a “Cloud Resource Naming” convention. You can customize this to suit your purposes; I have chosen to keep the default.

Figure 4

6) After the initial setup has finished, there is still a little bit of work to be done. You will have to add some more user subnets in the Infra VRF. These are not the same as the ones you defined in step 4 above; those are used by the system, the CSRs, the infra NLB, and so on. These new ones are the subnets you will use later for services such as 3rd party firewall interfaces (trusted / untrusted / management), infra load balancers that you may want to have for tenant vNET use, and the like. These prefixes will go to a pseudo VRF called overlay-2. overlay-2 is not really a VRF, though cAPIC lists it as one; if you look at it later from the Azure console you will realize this. All of this will make much more sense when you follow the Multi-node Service Graph article that I plan to write later. For now, just follow along with me.

To do this, go to cAPIC, Application Management and click on Cloud Context Profiles

Figure 5

On the next page, hit the edit icon

Figure 6

Now, click on Add CIDR. Also make sure that Hub Networking (vNET Peering) is turned off; these prefixes will not go to overlay-2 if Hub Networking is turned on.

Figure 7

Now, add a CIDR for overlay-2. I chose to use the RFC 6598 address range 100.64.0.0/10, so I used 100.64.0.0/16. Don’t attempt to make this Primary.

I’ve included the following:

  • Primary CIDR: 100.64.0.0/16
  • loadbalancer: 100.64.5.0/24
  • firewall-untrust: 100.64.6.0/24
  • firewall-trust: 100.64.7.0/24
  • HubVMSubnet1: 100.64.8.0/24
  • service-mgmt: 100.64.254.0/24
Figure 8
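As a quick consistency check on a plan like this, the sketch below (plain Python `ipaddress`, using the subnet names and prefixes from my list above) verifies that every service subnet falls inside the overlay-2 CIDR and that none of them overlap.

```python
import ipaddress

# overlay-2 CIDR, carved from RFC 6598 shared address space
primary = ipaddress.ip_network("100.64.0.0/16")

# Service subnets from the list above (yours may differ)
subnets = {
    "loadbalancer": ipaddress.ip_network("100.64.5.0/24"),
    "firewall-untrust": ipaddress.ip_network("100.64.6.0/24"),
    "firewall-trust": ipaddress.ip_network("100.64.7.0/24"),
    "HubVMSubnet1": ipaddress.ip_network("100.64.8.0/24"),
    "service-mgmt": ipaddress.ip_network("100.64.254.0/24"),
}

for name, net in subnets.items():
    assert net.subnet_of(primary), f"{name} is outside {primary}"
nets = list(subnets.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("overlay-2 subnet plan is consistent")
```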

Now, go ahead and save all this. Make sure that HUB Network Peering is not enabled.

Figure 9

Now you need to edit again, so that you can turn on HUB Network Peering

Figure 10

Now, go ahead and Enable Hub Network Peering and then Save

Figure 11

This concludes the setup. If you are using MSO, you still need to add this site to MSO and set up the on-site CSRs based on the suggested configuration that MSO gives you (from Configure Infra / Download Only).


Checking/Verifying:

Like anything else, we should verify that all is good. 

On cAPIC, go to Event Analytics / Faults and check for faults. In my case, I have not licensed my cAPIC, so I know I will have a license fault that I don’t wish to see; I therefore filter out license faults and show the other raised faults

Figure 12

Also, check from Dashboard for faults

Figure 14

Check InterSite Connectivity from cAPIC and verify that it looks good.  Sometimes it might take the cAPIC a while to catch up.   It’s also a good idea to ssh to the virtual CSRs and use the following commands to make sure that things are good.

  • show ip int brief
  • show crypto session | i status
  • show bgp l2vpn evpn sum
Figure 15
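If you have several CSRs to check, the status lines from `show crypto session | i status` are easy to tally with a short script. Below is a minimal parsing sketch; the sample output is illustrative (real output has one `Session status:` line per IPsec session, with states such as UP-ACTIVE or DOWN-NEGOTIATING), and fetching the output over SSH is left to your tool of choice.

```python
def crypto_sessions_up(show_output):
    """Count sessions from `show crypto session | i status` output.

    Returns (up_active, total) based on the `Session status:` lines.
    """
    lines = [l for l in show_output.splitlines() if "Session status:" in l]
    up = sum(1 for l in lines if "UP-ACTIVE" in l)
    return up, len(lines)

# Illustrative CSR output; real sessions/peers differ per deployment
sample = """\
Session status: UP-ACTIVE
Session status: UP-ACTIVE
Session status: DOWN-NEGOTIATING
"""
up, total = crypto_sessions_up(sample)
print(f"{up}/{total} IPsec sessions UP-ACTIVE")  # prints "2/3 IPsec sessions UP-ACTIVE"
```

Anything less than all sessions UP-ACTIVE warrants a closer look at the intersite tunnels before moving on.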

Check Inter Region Connectivity

Figure 16

Check details of infra subnet Allocation from cAPIC (Application Management / Cloud Context Profiles).  Make sure that no faults are showing there

Figure 17

On cAPIC, click on Application Management / Services and double-click on the Hub NLB (Network Load Balancer). Make sure it is healthy.

Figure 18

Now Click on Application Management / Services / Service Graph to see the in-built hub Service Graph.  Make sure it’s healthy

Figure 19

On cAPIC, click on Application Management / VRFs and look at overlay-2. It shows as a VRF, but really is not.

Figure 20

Go to the Azure console and click on the resource group where you installed the cAPIC, arrange to group by type, and check the vNETs; you will only see overlay-1.

Figure 21

If you go to the overlay-2 egress routes, you will see the subnets that you defined in overlay-2 (RFC 6598 addresses in my case).

Figure 22

If you click on the HUB Load Balancer and look at the front end IP for the HUB NLB, you will see its IP (10.33.1.36). Note that this IP came from the initial subnets set in cAPIC FTS (not from the overlay-2 defined IPs).

Figure 23

If you click on the HUB NLB Back End Pool, you will see that the backend IPs match the private IPs of the cloud CSRs on gig2

Figure 24

You will also notice that the HUB NLB is sending health probes on port 22

Figure 25

If you look at the metrics for the NLB and filter for Health Probe Status, you should see it at 100%

Figure 26

Also check Data Plane Availability; it should be at 100%

Figure 27

On MSO, you can also look at the Consolidated View for all Sites
