Table of Contents:
- Quick Refresher of Cisco Cloud Network Controller
2a. Unified Object Mapping for CCNC
- CCNC Installation Guide Overview
3a. CCNC GCP Install Steps
3b. CCNC GCP Pre-requisites
3c. CCNC GCP Install from Marketplace
- CCNC GCP First Time Setup
- Onboarding CCNC and APICs to Nexus Dashboard and managing from Nexus Dashboard Orchestrator
- Tenant onboarding on GCP CCNC Fabric
From release 25.0(5) onwards, Cisco Cloud Network Controller (CNC) in GCP supports BGP-EVPN connectivity for inter-site connections between the Google Cloud Site and other Cloud Sites (AWS/Azure) as well as on-premises ACI Sites. External Connectivity and Brownfield Connectivity are also supported, just as they are in AWS and Azure CNC Fabrics.
Previously, I wrote an article that introduced the concept of the CNC Google Site. At that time, BGP-EVPN inter-site connectivity for the CNC Google Site was not supported. With CNC release 25.0(5) onwards, this support has arrived, making the CNC GCP Site on par with the AWS and Azure CNC Sites.
In this article, I will guide you through the installation of CNC Google Site. All documentation about CNC can be found at CCO: https://www.cisco.com/c/en/us/support/cloud-systems-management/cloud-application-policy-infrastructure-controller/series.html. This writeup is not intended to be an alternative for the CCO documentation, but rather additional guidance with screenshots as needed.
Cisco Cloud Network Controller, previously known as Cisco CAPIC, is the SDN controller used for orchestration, visualization, and troubleshooting of the Cisco CNC Fabric in a public cloud. It is a key part of the Cisco ACI extension to the public cloud, providing consistent policy, security, and analytics for workloads deployed in on-premises data centers and the public cloud.
The extension also provides an automated connection between on-premises data centers and the public cloud with easy provisioning and monitoring, and a single point for managing, monitoring, and troubleshooting policies across on-premises data centers and the public cloud or between cloud sites.
Cisco Cloud Network Controller runs as a virtual instance on a supported public cloud to provide automated connectivity, policy translation, and enhanced visibility of workloads in the public cloud. It translates policies received from NDO and programs them into cloud-native constructs, such as VPCs and security groups on AWS and VNets, application security groups, and network security groups on Microsoft Azure. The controller is deployed through the public cloud Marketplace, such as AWS Marketplace, Azure Marketplace and GCP Marketplace.
Cisco Cloud Router (CCR) is an important component deployed in the public cloud platforms; it is used for inter-site communication between on-premises sites and the public cloud platforms.
The figure below shows how communication is achieved using secure private connectivity between Public Cloud and On-Prem.
Figure 1: High Level Connectivity Diagram showing how Secure Private Connectivity is achieved between Public Cloud and On-Prem
This document will provide high-level guidance on how to deploy CCNC in Google Cloud. Currently, CNC can be deployed in AWS, Azure, and GCP. When CCNC is deployed in the public cloud and initial configuration of CCNC is completed, it builds the CCNC fabric in the respective cloud.
A user can then use the CCNC interface to build the desired network infrastructure for consumption. Further, a higher-level controller called the Nexus Dashboard Orchestrator can be used, which orchestrates across several different cloud CCNC Fabrics and onPrem ACI Fabrics. NDO sends API calls to each CCNC, which, in turn, is responsible for orchestrating its domain. NDO allows orchestration of Network Infrastructure across different clouds and on-Premise Fabrics.
Figure 2: Hybrid/Multi-Cloud Solution using NDO
When infrastructure is built in the cloud using CCNC, it is represented by CCNC objects. Even though each cloud provider has different names and mechanisms for their cloud objects, the CCNC maps them to a common set of objects, making configuration consistent and easier across a multicloud or hybrid cloud (with NDO).
As an example, a CCNC VRF equates to a Virtual Network in Azure, a VPC in AWS, and a VPC in GCP. The diagram below shows the most common objects utilized by CCNC and their mappings across different cloud providers.
Figure 3: Unified Object Mapping for CCNC
CCNC installation is done from the Marketplace of the respective Cloud Provider. Detailed instructions for installing CCNC in the cloud can be found on CCO at: https://www.cisco.com/c/en/us/support/cloud-systems-management/cloud-application-policy-infrastructure-controller/series.html
⚠️ This document is not meant to replace the CCO document but rather supplement it with screen captures for additional assistance.
There are some major differences in different Public Cloud Provider Architectures. CCNC Fabric architecture also differs internally between the different Public Cloud Providers, AWS/Azure/GCP. As an example, VPCs are Global in GCP, but are regional in AWS and Azure. Subnets are regional in GCP and Azure, but are zonal in AWS.
The chart below shows the most used object differences between AWS/Azure/GCP that you should pay attention to:
Figure 4: most commonly used object differences between AWS/Azure/GCP
📙 The table above is not an exhaustive list of all objects, but rather the most commonly used objects in CCNC.
- In the CCNC architecture for GCP, when 2 VPCs (CCNC VRFs) communicate with each other, they do so using a spoke-to-spoke model, unlike AWS and Azure, which use a spoke-to-hub model.
- Cloud Routers for CCNC in GCP have 2 interfaces, whereas in AWS and Azure they have 4 interfaces.
From an end-user perspective, these details may not be necessary to know, since the NDO/CCNC orchestrator has a universal model and does the necessary configuration under the covers.
For a detailed understanding of CCNC/GCP, please see: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/multi-cns-gcp-wp.html
Pre-req1: Enable Appropriate APIs & Services
The following API and services need to be enabled on the project where CCNC will be deployed.
- Compute Engine API
- Cloud Deployment Manager V2 API
- Cloud Logging API
- Cloud Pub/Sub API
- Cloud Resource Manager API
- Cloud Runtime Configuration API
- Identity and Access Management (IAM) API
- Service Usage API
📙 Note that the following additional APIs and services should be enabled automatically when you enable all of the APIs and services listed above:
- IAM Service Account Credentials API
- Cloud OS Login API
- Recommender API
If they are not enabled automatically, enable them manually.
To enable the APIs, go to the GCP Navigation Menu, then choose “APIs & Services”.
Figure 5: Enabling Appropriate APIs & Services
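If you prefer the CLI over the console, the same APIs can be enabled with gcloud. This is a sketch, assuming the gcloud SDK is installed, you are authenticated, and MY_PROJECT is a placeholder for your actual project ID:

```shell
# Enable the APIs required by CCNC on the target project.
# MY_PROJECT is a placeholder; substitute your project ID.
gcloud services enable \
  compute.googleapis.com \
  deploymentmanager.googleapis.com \
  logging.googleapis.com \
  pubsub.googleapis.com \
  cloudresourcemanager.googleapis.com \
  runtimeconfig.googleapis.com \
  iam.googleapis.com \
  serviceusage.googleapis.com \
  --project=MY_PROJECT

# Verify which services are now enabled:
gcloud services list --enabled --project=MY_PROJECT
```

This also makes the pre-requisite easy to repeat across projects or to drop into an automation pipeline.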
Pre-req2: Assign appropriate permissions to the Google APIs Service Agent service account.
When you enabled the APIs and Services in the previous step, a service account should have been automatically created for the Google APIs Service Agent. This service account is of the format <project-number>@cloudservices.gserviceaccount.com:
Figure 6: Locating the automatically created GCP Service Account for Google API Service Agent
Next, click on the edit icon on the right side of this Service Account Row and add the following permissions to this Service Account.
Project IAM Admin
📙 The Editor permission for this service account will be there by default. Please do not delete it; just add the additional permissions listed above.
Figure 7: Adding Permissions to Service Account
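The same role assignment can be scripted with gcloud. A sketch, assuming MY_PROJECT and the project number 123456789012 are placeholders, and using the Project IAM Admin role from the step above (verify the full role list against the CCO install guide for your release):

```shell
# Grant Project IAM Admin to the Google APIs Service Agent account.
# MY_PROJECT and 123456789012 are placeholders for your project ID and number.
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:123456789012@cloudservices.gserviceaccount.com" \
  --role="roles/resourcemanager.projectIamAdmin"
```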
Pre-req3: Verify that the quota for N2 CPUs is at least 16 in the region where CCNC will be deployed.
To check this, follow the steps below:
- Click on the Navigation Menu in GCP
- Go to IAM & Admin and select Quotas
- In the Filter table box, search for "CPU"
- Select the appropriate CPU metric (for example, "vCPUs (N2)")
- Choose the region where CCNC will be deployed
- Verify that the quota for N2 CPU is at least 16. If it is not, request a quota increase.
Figure 8: Navigating to the Quotas menu
You can scroll through the list of Quotas to find N2 CPUs for your region or you can apply a filter to make it easier to locate.
Figure 10: Adding Filter for N2 CPUs and for the region where CCNC will be deployed
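The quota check can also be done from the CLI. A sketch, assuming gcloud and jq are installed; N2_CPUS is the quota metric name exposed by the Compute Engine API, and us-central1 is just an example region:

```shell
# Show the N2 CPU quota for the chosen region (us-central1 as an example).
gcloud compute regions describe us-central1 --format=json \
  | jq '.quotas[] | select(.metric == "N2_CPUS")'
# Expect "limit" to be 16 or higher; otherwise request a quota increase.
```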
Pre-req4: Have an SSH public key available, or create a key pair to obtain a new public key.
The purpose of this key pair is to SSH into the CCNC when required.
If you want to create a new key pair, you can use the ssh-keygen utility to do so:
ssh-keygen -t rsa -f ~/.ssh/cnc/ssh-key -C admin
If you already have an SSH key pair that you want to use, you can do that as well. In the example below, the key being used is from an Ubuntu jump box.
Figure 10: ssh public key from Ubuntu Jump Box
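As a sketch of the key-generation step above (the path and comment are just examples), ssh-keygen can also be run fully non-interactively by passing -N for an empty passphrase:

```shell
# Create the target directory and generate an RSA key pair.
# -N "" sets an empty passphrase; adjust the path and -C comment as needed.
mkdir -p ~/.ssh/cnc
ssh-keygen -t rsa -f ~/.ssh/cnc/ssh-key -N "" -C admin

# The public key to paste into the CCNC deployment parameters:
cat ~/.ssh/cnc/ssh-key.pub
```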
Pre-req5: Create a table that lists the GCP Deployment Template Parameters
Figure 11: Table of GCP Deployment Parameters.
Next, CCNC is installed from the GCP Marketplace using Deployment Manager.
For the deployment parameters, please refer to the table above.
Go to Marketplace and search for “Cisco Cloud Network Controller.”
Figure 12: Searching for CCNC in Marketplace
Next, launch the deployment.
Figure 13: Launching the CCNC deployment in GCP
For the deployment manager parameters, use the table you created in Figure 11.
📙 You could use an existing service account or create a new one.
Figure 14: Completing the Marketplace deployment of CCNC
Wait for the deployment to complete as shown below.
Figure 15: CCNC deployment Completed.
After CCNC is installed in the cloud, First Time Setup needs to be completed by pointing your browser to the CCNC IP. After this setup is completed, CCNC will spin up Cisco Cloud Routers in the respective cloud. These routers are part of the CCNC fabric.
The CCNC Public IP can be found from the GCP VM Console as shown below.
- Login to GCP Console.
- From the Navigation Menu, choose “Compute Engine”.
- Click on “VM Instances”.
- Find the CCNC VM and look at its External IP Address.
Use the External IP Address to point your browser to the CCNC IP and proceed with the First Time Setup.
Figure 16: Finding the Public IP of CCNC VM
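The external IP can also be pulled with gcloud (a sketch; the instance-name filter "capic" is an assumption, so adjust it to match the name you gave the CCNC VM):

```shell
# List matching VM names with their external (NAT) IPs.
# The "capic" name filter is an example; change it to your CCNC VM's name.
gcloud compute instances list \
  --filter="name ~ capic" \
  --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"
```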
Point your browser to CCNC IP and login with admin user and the password you provided during initial setup.
Figure 17: Logging into GCP CCNC
You will be greeted with the welcome screen, followed by First Time Setup.
Figure 18: GCP CCNC First Time Setup Welcome Screen
Next you will see 4 Sections for the setup. You need to configure each section.
- DNS and NTP Servers
- Region Management
- Advanced Settings
- Smart Licensing
Figure 18: First Time Setup Sections
📙 Note: There is a 90-day evaluation period for Smart Licensing
DNS and NTP Sample Configuration is shown below.
Figure 19: Sample DNS, NTP Configuration
For Advanced Settings, you would normally choose Contract Based Routing.
Figure 20: Contract Based Routing
For Region Management Parameters, please refer to the table created before.
The home region is the region where CCNC was installed during the Deployment Manager setup. This region always needs to have Cloud Routers installed. You can choose additional regions based on where you plan to have workloads. These additional regions can also have Cloud Routers if desired. Having extra Cloud Routers in other regions provides extra capacity, and traffic between other sites and the extra region does not have to traverse the home region.
You need to determine whether to configure inter-site connectivity using C8Kv or external network connectivity using GCP Native Routers.
• Catalyst 8000Vs: Click the box in this column for a region if you want to use the Cisco Catalyst 8000V router for inter-site use cases. This functionality was introduced in release 25.0(5) and allows you to configure a BGP-EVPN connection for inter-site connectivity between a Google Cloud site and other cloud sites or an ACI on-premises site using Cisco Catalyst 8000V routers. Refer to "Inter-Site Connectivity Using BGP-EVPN" in the Cisco Cloud Network Controller for Google Cloud User Guide for more information.
• External Connectivity using Google Cloud Routers: Click the box in this column for any region where you want to use the Google Cloud router for external network connectivity. This allows you to configure an IPv4 connection between a Google Cloud site and non-Google Cloud sites or an external device, where a VPN connection is created between a Google Cloud router and an external device. Refer to "External Network Connectivity" in the Cisco Cloud Network Controller for Google Cloud User Guide for more information.
Figure 21: Choosing Regions where Cloud Routers Or External Connectivity using Google Cloud Routers is permitted
📙 Note: You will notice that all regions are selected by default. This is because VPC is a global resource in GCP.
Subnet Pools for Cloud Routers
The prefix chosen during the initial Deployment Manager setup belongs to the home region. If you enabled Cloud Routers in additional regions, you will need to add an additional /24 pool for each of those regions.
Figure 22: Subnet Pools for Cloud Routers
📙 The entry for 10.23.0.0/24 came from the Deployment Manager parameters entered.
Leave the Hub Network and IPsec tunnel pool at their defaults.
Figure 23: Hub Network and IPsec Tunnel Pool
Enter the parameters for the Cloud Routers by using the table created earlier.
Figure 24: Cloud Router Parameters
Once finished, the Cloud Routers will begin to spin up; wait for them to come up.
Figure 25: Viewing Cloud Router spin-up from the GCP Console
📙 Notice that the Cloud Routers in GCP have 2 interfaces as opposed to 4 in AWS and Azure
Next, in CNC, go to Infrastructure > Inter-Site Connectivity and wait until the Cloud Routers show as ready.
The Map will get auto-populated based on the regions you had selected.
Figure 26: Map Location Auto Populated
Cloud Routers will show a Deployment Status of Success.
Figure 27: Cloud Routers Successfully Deployed
CCNC and APICs can function as standalone management systems, but there are benefits to onboarding and managing them from the Nexus Dashboard Orchestrator. This includes a single pane of glass for the entire infrastructure, regardless of whether it is in different cloud or on-prem domains, and the ability to create tenants with policies across different cloud and on-prem domains, providing multi-cloud and hybrid-cloud capabilities.
The following section provides a high-level overview of how to onboard sites on the Nexus Dashboard Orchestrator. It is important to note that the Nexus Dashboard Orchestrator is an application that runs on the Nexus Dashboard, which is deployed as a Kubernetes Cluster and can run on-prem, AWS, or Azure. However, the installation and day0 configuration of the Nexus Dashboard and Nexus Dashboard Orchestrator are beyond the scope of this document.
Figure 28: Nexus Dashboard on AWS and Azure Marketplace
Sites first need to be added to Nexus Dashboard before you can manage them from Nexus Dashboard Orchestrator.
After logging into ND, all sites need to be added.
Figure 29: Adding Sites to Nexus Dashboard
Next, navigate to Nexus Dashboard Orchestrator and set all the sites to managed.
Figure 30: Managing Sites on NDO
Once sites are managed and show OK, Site Connectivity on NDO needs to be completed.
Figure 31: Site Connectivity Screenshots
Once the Site Connectivity configurations are completed, the user needs to deploy and download the onPrem IPsec router configurations. These configurations can mostly be copied and pasted onto the onPrem IPsec routers as-is.
Figure 32: Deploying and Downloading Configurations for onPrem IPSec Devices
📙 In the screenshots above, 2 onPrem ACI Fabrics are present, which makes this a hybrid cloud deployment. However, you don’t need onPrem ACI sites; without them, this would be a Multi-Cloud Fabric deployment.
Once the configurations are applied, you can verify on the onPrem IPsec routers and on the Cloud Routers that all interfaces and IPsec tunnels are up, as shown below.
Figure 33: Verifying all interfaces are up and IPSec tunnels are up, viewing from on-Premise IPSec routers
Figure 34: Verifying Tunnels and IPSec connections from Cloud Router in GCP
Tenant configurations across sites (including the GCP CCNC Site) are done from NDO in the same way as before. As mentioned previously, each Tenant in GCP requires a separate GCP Project. The Tenant GCP Project may or may not be under a folder structure. Onboarding a GCP CCNC tenant can be done as a Managed Tenant or an Unmanaged Tenant. The permissions/roles needed for the GCP Tenant project can be found in the CCO document: Cisco Cloud Network Controller for Google Cloud User Guide, Release 26.0(x).
📙 For a managed Tenant, if the GCP Tenant Project is under a Folder Structure, then the Roles need to be assigned at the folder level.
To illustrate with an example, below you will see that I have 2 GCP Tenant Projects that are under a main Folder Structure. In this case, I have to assign the service account (that was created during the CNC deployment) to the main folder. Additionally, I will need to give it the roles shown below.
- Cloud Functions Service Agent
- Compute Instance Admin (v1)
- Compute Network Admin
- Compute Security Admin
- Logging Admin
- Pub/Sub Admin
- Storage Admin
Figure 35: Tenant Projects in this case reside under a folder structure
Figure 36: Assigning Service Account to Main Folder and adding Additional Roles