Cloud ACI 5.2: A compelling case for Cisco ACI Hybrid/Multicloud Data Centers

With the Covid pandemic, many enterprises have come to the realization that maintaining a large on-premises Data Center infrastructure may not be the most cost-effective approach.  A Hybrid Cloud Data Center (on-premises + cloud), or a Multicloud Data Center (multiple interconnected DC fabrics in one or more cloud providers), may make more sense.  Apart from a quicker turnaround, it gives you the benefit of a pay-as-you-go model instead of large capital expenditures that you have to amortize annually.  You also avoid having to deploy and maintain your own infrastructure, along with the recurring rack space, racking-and-stacking, cabling, and cooling expenses that come with it.

Cloud ACI is no longer a new technology; it is a mature technology that gives you the ability to deploy Hybrid and Multicloud Data Center fabrics and make them work together seamlessly.  Cloud ACI 5.2 makes the case even more compelling with a multitude of new enhancements.  In this article, I will briefly discuss the main enhancements.  In future write-ups I will discuss these items in more detail, where you can also follow along and do Proofs of Concept to get more familiar with them.  Currently, Cisco Cloud ACI supports AWS and Azure; GCP support is around the corner.

Note: Cloud ACI 5.2 should be out in the AWS and Azure Marketplaces any time now.

AWS and Azure Related Enhancements in Cloud ACI 5.2:

  1. If you have two or more cloud fabrics in the same cloud provider, you can now use the cloud provider's backbone to interconnect these Data Centers (DCI).  Prior to this, you needed to build IPsec tunnels over the Internet between the sites.  This gives you high bandwidth and predictable latency between cloud sites belonging to the same cloud provider.
  2. If your Data Centers can communicate with each other without using the public Internet (Direct Connect, ExpressRoute), there is no longer a need to assign public IP addresses to the CSR interfaces.  BGP peering for the control plane and VXLAN NVE for the data plane can now all run over the private IP addresses of the CSRs.
  3. The CSR version now shows up in the cAPIC GUI, and cAPIC and CSR upgrades are decoupled.
  4. The initial setup parameters for cloud APIC intersite connectivity have now been moved to MSO.  This gives you more granular intersite connectivity options and also prevents mistakes like assigning duplicate or overlapping subnets for intersite connectivity (a simple sketch of that kind of validation follows this list).  You will need MSO release 3.3(x) or above, which runs on the Nexus Dashboard platform (2.2 and above).  ND is available to run on-premises as an OVA or in cloud provider sites.  A fully managed SaaS version of ND will be available in the near future.
  5. If using IPsec tunnels between sites, you can now use IKEv2 for both AWS and Azure (Azure already had this in 5.1).  IKEv1 is still an option.
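To make the overlapping-subnet point in item 4 concrete, here is a minimal Python sketch of the kind of validation MSO now performs on the intersite connectivity pools you enter.  The site names and CIDRs are made up; this is only an illustration of the check, not how MSO implements it.

```python
import ipaddress
from itertools import combinations

# Hypothetical intersite connectivity pools, one per site (placeholder values).
intersite_pools = {
    "aws-site-1":   "10.100.0.0/24",
    "azure-site-1": "10.100.0.0/25",   # deliberately overlaps with aws-site-1
    "onprem-dc":    "10.101.0.0/24",
}

# Flag any pair of sites whose pools overlap -- the mistake MSO now guards against.
for (site_a, cidr_a), (site_b, cidr_b) in combinations(intersite_pools.items(), 2):
    if ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b)):
        print(f"Overlap: {site_a} ({cidr_a}) conflicts with {site_b} ({cidr_b})")
```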

AWS Related Enhancements:

  1. cAPIC now supports the AWS TGW Connect attachment type in the infra VPC.  The CSRs act as the NVA (Network Virtual Appliance), and BGP peering is built from the CSRs to the TGW for the control plane (over GRE tunnels).  This gives you the benefit of not having to deploy two TGWs per region.  The TGW is a highly available AWS resource, and you now only need one TGW per region, cutting down on your AWS resource consumption (see the first sketch after this list).
  2. If a tenant region is not the infra region and is not a hub region (meaning no CSRs were deployed in that region), connectivity from that tenant region to other DC sites (on-premises or cloud) can still take place over high-speed TGW inter-region peering and then through the infra CSRs (see the second sketch after this list).  Prior to 5.2, you would either need to bring up CSRs in that non-infra hub region or configure VGW tunnels between the tenant VPC and the infra VPC.
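For item 1, here is a rough boto3 sketch of what a TGW Connect attachment and Connect peer look like at the AWS API level.  cAPIC automates this for you, so the region, IDs, addresses, and ASN below are purely hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # infra region is an assumption

# Layer a Connect attachment (GRE) on top of an existing VPC attachment (the transport).
connect = ec2.create_transit_gateway_connect(
    TransportTransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # hypothetical
    Options={"Protocol": "gre"},
)
connect_attach_id = connect["TransitGatewayConnect"]["TransitGatewayAttachmentId"]

# Create a Connect peer: a GRE tunnel plus a BGP session toward the appliance (the CSR).
ec2.create_transit_gateway_connect_peer(
    TransitGatewayAttachmentId=connect_attach_id,
    PeerAddress="10.10.0.10",               # CSR GRE endpoint IP (hypothetical)
    InsideCidrBlocks=["169.254.200.0/29"],  # link-local /29 for the BGP inside addresses
    BgpOptions={"PeerAsn": 65001},          # CSR BGP ASN (hypothetical)
)
```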
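And for item 2, the TGW inter-region peering that carries traffic from a tenant-only region toward the infra region boils down to two calls.  Again, this is only a hedged sketch with made-up regions, IDs, and account numbers; cAPIC/MSO drive this in practice.

```python
import boto3

ec2_tenant = boto3.client("ec2", region_name="us-west-2")  # tenant-only region (assumption)
ec2_infra = boto3.client("ec2", region_name="us-east-1")   # infra/hub region (assumption)

# Request a peering attachment from the tenant-region TGW to the infra-region TGW.
req = ec2_tenant.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0aaaaaaaaaaaaaaaa",      # hypothetical tenant-region TGW
    PeerTransitGatewayId="tgw-0bbbbbbbbbbbbbbbb",  # hypothetical infra-region TGW
    PeerAccountId="111111111111",                  # hypothetical account ID
    PeerRegion="us-east-1",
)
attach_id = req["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]

# Once the attachment reaches the pendingAcceptance state, accept it on the infra side.
ec2_infra.accept_transit_gateway_peering_attachment(TransitGatewayAttachmentId=attach_id)
```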

Azure Related Enhancements:

  1. Brownfield import for Azure.  If you already have resources deployed in Azure, you can now connect your brownfield vNETs to the cAPIC-managed vNETs using vNET peering.  This means connectivity from the ACI fabric vNETs to the brownfield vNETs can go directly over Azure's backbone.  Security policies can be attached to this connectivity based on your requirements (see the sketch after this list).
  2. You can now do tenant vNET peerings across Azure Active Directories.  This will be very useful for B2B connectivity.  Prior to this, tenant vNET peerings for Azure using cAPIC were only possible across subscriptions in the same Azure Active Directory.
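To give a feel for what the brownfield and cross-directory peerings in items 1 and 2 amount to on the Azure side, here is a minimal azure-mgmt-network sketch of creating one direction of a vNET peering.  cAPIC drives this for you; every subscription ID, resource group, and vNET name below is a hypothetical placeholder, and for cross-AD peering the credential would also have to be valid in the remote directory.

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical placeholders throughout.
subscription_id = "<aci-tenant-subscription-id>"
credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, subscription_id)

brownfield_vnet_id = (
    "/subscriptions/<brownfield-subscription-id>/resourceGroups/<brownfield-rg>"
    "/providers/Microsoft.Network/virtualNetworks/<brownfield-vnet>"
)

# Peer the cAPIC-managed tenant vNET toward the brownfield vNET.
# A matching peering must also be created in the opposite direction.
poller = network_client.virtual_network_peerings.begin_create_or_update(
    "<aci-tenant-rg>",     # resource group of the cAPIC-managed vNET
    "<aci-tenant-vnet>",   # cAPIC-managed vNET name
    "to-brownfield-vnet",  # name for this peering
    {
        "remote_virtual_network": {"id": brownfield_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
    },
)
print(poller.result().peering_state)
```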
