Nexus Dashboard 2.1 – New Features – Federated Clusters

Nexus Dashboard 2.1 incorporates many new features and improvements.  In this write-up, I will primarily discuss the Federated Nexus Dashboard configuration and why you would use it, and also cover some of the underlying improvements and changes.  I will also show how to send API calls to Federated Nexus Dashboard clusters to retrieve federation-related information.

Installing ND is pretty simple.  However, in my opinion, before installing you should spend some time thinking about the design, document it with diagrams and IP allocations, and get the connectivity ready based on your intended design.  For help with the hardware install, the previous write-up I did is still valid.  For the virtual form factor, it’s really intuitive: you load the OVA, point your browser to it, put in some basic configuration parameters such as the fabric interface IP/mask, NTP, DNS, search domains, and other member information, and away you go.

ND is available in:

  • Hardware form factor (ISO)
  • Virtual machine form factor (OVA or QCOW2). For the virtual images you can download two different kinds of images: one for running MSO only, and one for running additional apps, e.g. NDI 6.0 and above (Nexus Dashboard Insights, which consolidates the older NAE and NI into one product).
  • The SaaS version of ND
  • The cloud version of ND (GCP/AWS/Azure).  This version is suitable for running NDO.

Once the install is completed and you point your browser to ND (with admin/rescue-user), you will be greeted with a list of new features as shown below:

What’s New in 2.1(1d)

  • Connect and monitor multiple clusters from One View
  • Support DCNM sites on Nexus Dashboard cloud clusters (AWS and Azure)
  • Dual stack IPv4 and IPv6 for management and data networks
  • Increased cluster size for Nexus Dashboard deployed in VMware ESX
  • LDAP connectivity verification
  • Resource profiles for virtual nodes in VMware ESX
  • Co-hosting of services on virtual clusters
  • Help Center
  • External provider verification

The first one in the list, “Connect and monitor multiple clusters from One View”, refers to Federated ND Clusters.  This is primarily what we will discuss in this write-up.

One more architectural change I wanted to point out: the underlying container runtime of the K8s ND cluster is now CRI-O rather than Docker.   The reason for this change is that, as you might be aware, K8s deprecated the Docker runtime as of release 1.20 and moved toward CRI-compliant runtimes such as CRI-O (which presents a smaller attack surface for exploits).  Docker support was slated to be completely removed in K8s release 1.22.    Incidentally, the version of K8s in use for ND 2.1 happens to be 1.16, as you can see in the figure below.

kubectl version   # execute on ND (on one of the Master Nodes)
Figure 1: ND2.1 uses K8s 1.16

From a user perspective, the implication of this is that you cannot use docker commands any more.  In fact, the “docker” command won’t even be available, because Docker is not installed.  Cisco TAC can use “crictl” commands (as root on ND) to interact with containers directly if they wish.  However, you should generally use kubectl commands instead, for instance:

kubectl describe pod <pod-name> -n <namespace>
kubectl exec -it <pod-name> -n <namespace> -- sh

Keep in mind that you don’t have to be a Kubernetes expert to use Nexus Dashboard.  One of the great things about ND is that all of this underlying architecture should be of no concern to non-advanced users.

Below is an example of using the “kubectl exec” command from root access.

kubectl get pods -n kafka
kubectl exec -it cruisecontrol-7558b9c5d-hj6cx -n kafka -- sh
Figure 2: Use of kubectl exec command from root

Below you can see the equivalent use of “crictl” commands for the same purpose.

Figure 3: using crictl from root on ND

Federated ND Clusters:

Before the release of ND 2.1, the only way to increase capacity for an ND cluster was to increase the number of ND nodes.  The Cisco Nexus Dashboard platform at its base consists of 3 K8s master nodes.   When required, you can scale horizontally by adding 4 more worker nodes, and even 2 standby nodes in case you need to do a quick master node recovery.  Please see the ND Sizing Tool.

This is an ACI blog, so we won’t talk about DCNM, but I wanted to point out that Nexus Dashboard is for more than just ACI.  Let’s say you have more than one fabric (ACI/DCNM), for example 3 ACI fabrics in different geographic locations.  If you wanted to use ND with all your fabrics, you would have to decide how to distribute your ND cluster members (nodes).   Perhaps you would put all the nodes in one site and add all the sites from that ND cluster (with proper in-band connectivity to each fabric through L3Outs), as shown in the example of the previous write-up.

Or perhaps you would want to distribute the nodes across 2 or even 3 sites for redundancy purposes.  This would work, but remember that the ND cluster members have to sync with each other at the K8s level across the wide area.  There is no right or wrong answer on the design; depending on your topology, you would have to make that decision, taking into account the latency and bandwidth you have over the wide area.   Since IPN-to-IPN connectivity generally has higher bandwidth, perhaps you could carve out a VRF in the IPN for this traffic.

Would it not be nice if you could install one ND cluster in each site and have them federated, so you get a single pane of glass?  Each ND cluster would be its own entity.  For the K8s-level cluster sync, the nodes would use the local high-speed, low-latency network.  However, from a higher-level ND perspective, you could see each fabric and its sites from one place.  Further, applications like NDI could be brought up on the ND clusters whose local sites you would like to monitor, and you could then access that NDI information from any cluster’s NDI application.  The proxy function of ND is used under the covers to accomplish this.  That is exactly what ND Federation is all about.

Architecture of the ND Federation:

In a Federated ND cluster deployment, each ND cluster can have its own sites and each ND cluster is its own entity.  One of the ND clusters (the one you choose to build your federation from, by adding federation members) is the primary.  You can install apps like NDI on any cluster, and NDI can then be used to monitor any site on any of the federated clusters.

Each cluster has a Site Manager (SM).  The sync- and query-related information required for federation is transferred through bidirectional communication between each non-primary ND cluster’s API Gateway and the primary ND cluster’s API Gateway.  The primary ND cluster also runs a Federation Manager (FM), where the federation member information is held; the non-primary ND clusters run a client piece of the FM locally.  The FM is responsible for syncing members, the sites of members, and key-related information.   This is depicted in the figure below.

Figure 4: Architecture of the Federated Nexus Dashboard

How to setup a Federation:

Setting up an ND Federation is really simple.  It can be done from the UI or through the API.  Since this is a one-time setup, I’ll show the UI method here.

Step 1: Go to Cluster Configuration / Multi-Cluster Connectivity and click on Connect Cluster.  Do this from the ND cluster that you wish to be the primary.

Figure 5: Setting up ND Cluster – Step 1

Step 2: Populate the information for the member ND cluster that you wish to be part of this federation.

Figure 6. Populating the required values for Federation Member – Step 2

That’s all that’s required to add members.  Add other members from the primary ND as needed.
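The same member add that the UI performs can also be sketched through the API, using the /api/config/federation/member endpoint covered later in this write-up.  Everything below — the IPs, credentials, and login domain — is a placeholder, and the actual call is commented out since it needs a live cluster and an authenticated session.

```shell
# Hypothetical addresses for illustration only.
PRIMARY=203.0.113.10    # the primary ND cluster (where members are added)

# Body for the member add; "url" points at the cluster being joined.
cat > addMember.json <<'EOF'
{
  "url": "https://203.0.113.20",
  "userName": "admin",
  "loginDomain": "local"
}
EOF

# Actual call (commented out; requires a live cluster and a session cookie
# obtained via a login POST, as shown in the API section of this write-up):
# curl -s -k -H "Content-Type: application/json" -X POST \
#      "https://$PRIMARY/api/config/federation/member" -d @addMember.json -b cookie.txt
echo "addMember.json written (target primary: $PRIMARY)"
```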

Looking around a Federated ND Cluster:

Go to the Sites page of any of the ND clusters and click on the top bar, where it shows the name of the cluster.  Notice that in this particular setup I’m sitting on dmz-nd-cluster2, and this ND cluster has only one site, called Fabric7, associated with it.

Figure 7. Looking around a Federated ND Cluster

Once you click, you will see that you can choose other members of this federation.  Also note that “dmz-nd-cluster2” is the primary.   Let’s click on dmz-nd-cluster1.

Figure 8. Switching view to a different ND cluster

You will now see that you have switched over to the ND Federation member “dmz-nd-cluster1”.  You will also notice that this cluster has a different ACI site, “fabric8”, associated with it.

Figure 9. Looking at a Federation member ND cluster

You will also notice that on the top right bar, there is a “Multi-cluster Dashboard” button available.  Click on that.

Figure 10: Multi-Cluster Dashboard

Here you can see a consolidated view of all the members of the Federated ND cluster as shown below.

Figure 11. Multi-Cluster Dashboard Consolidated view

Using API Calls to ND:

You can view and try out the API documentation from: https://<nd_ip>/apidocs/

As of release 2.1 of ND, the API documentation has not been officially published; it should get published in an upcoming ND update.  For that reason, I will show you some API calls that you can use with ND.

API calls can be made to ND for POST, GET, DELETE, etc. You can use whatever method you currently prefer for sending API calls.  In this write-up I will demonstrate 2 commonly used methods.

  • Postman
  • curl

Using Postman to send API calls:

Open up your Chrome Postman app and create the necessary environment files.  In this example I have 2 ND clusters (one physical with 3 nodes and one virtual with only 1 node).  I have made 2 environment files in Postman, so I can send API calls to either one by choosing the appropriate environment.  The figure below shows my environment file setup.

Figure 12. Chrome Postman Environment files

Before you can send API calls, you first need to send a login request with the proper credentials and capture the returned tokens; those tokens are then used on subsequent API calls to ND.

Note:  Postman Interceptor is a great tool for discovering which API calls are needed for a given function.  Once it is set up, you simply browse to ND in Chrome, and Postman Interceptor will clearly show you the URL, body, and headers needed to make each API call.   In the example below, we find out which API call is needed to log in to ND.

Setup Steps:

a) Go to the Chrome browser and type in chrome://extensions.  Make sure the Postman extension is enabled (install it if it’s not there).
b) Go to the Postman app and make a new collection called “interceptor”.

Figure 12a. install Postman Interceptor and create New Empty Collection

c) In the Postman app, click on Interceptor (make sure to choose Source: Interceptor and not Proxy).
d) Save requests to the “interceptor” collection.
e) Capture requests.

f) From the Chrome browser, log in to ND.
g) A new request will pop up under “interceptor”; click on it to view it.
h) You can see the URL and the body of the request.

i) Click on the Code button.
j) Choose curl, Python, or any other option, and you can clearly see the code you need (in this example, the curl code to send the request).

Figure 12c. Observing the code that was sent for login to ND


Now that we have the API parameters needed for login from Postman Interceptor, let’s go about creating a login request from Postman in a brand new collection.

For this, make sure you’ve chosen the desired environment file and then create a Postman POST request with the following parameters:

Method: POST
URI: https://{{nd}}/login
Body:
{
  "userName": "{{username}}",
  "userPasswd": "{{password}}",
  "domain": "{{domain}}"
}

A screenshot of this is shown in the figure below.

Figure 13. Postman Setup for ND Login Screenshot

Now go to the Tests tab for that post and put in the logic to capture the Tokens.  Please put in the following snippet there:

var jsonData = JSON.parse(responseBody)
postman.setEnvironmentVariable("jwttoken", jsonData.jwttoken);
postman.setEnvironmentVariable("token", jsonData.token);
tests["response code is 200"] = responseCode.code === 200;

A screenshot of this is shown below:

Figure 14. Postman setup to capture required Tokens

Now that you’ve created the login POST, execute it by hitting the “Send” button.   You should get a successful login, and your tokens should get captured.  The screenshot below shows what the output will look like.

Figure 15. Output of Successful Login using Postman Post

You are now ready to make Postman Calls.

Try out these calls:

Get Federation Manager:  
Method: GET
URI: https://{{nd}}/api/config/federation/manager/mo

Get Federation Members:
Method: GET
URI: https://{{nd}}/api/config/federation/members

Get Sites:
Method: GET
URI: https://{{nd}}/api/config/class/v2/sites/

For each of these calls, set up an individual Postman GET request.  Also make sure that in the headers of each call you put in the values of the captured tokens, by using the key:value pairs captured automatically by the initial login POST:

jwttoken :  {{jwttoken}}
token : {{token}}
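Outside Postman, the same header-based authentication can be sketched with curl.  The token values below are placeholders; in practice they come from the jwttoken and token fields of the /login response.  The curl line is commented out since it targets a live cluster.

```shell
# Placeholder values; replace with your ND address and the tokens
# returned by the /login call.
ND=203.0.113.10
JWTTOKEN="example-jwt-token"
TOKEN="example-token"

# The same GET that Postman sends, with the tokens passed as request headers.
URL="https://$ND/api/config/federation/members"
# curl -s -k -H "jwttoken: $JWTTOKEN" -H "token: $TOKEN" "$URL"
echo "$URL"
```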

A Screenshot of this is shown below.

Figure 16: putting in the correct Header key:values for Postman requests

Now you are ready to send the requests to ND.   First execute the login request once to capture the current tokens, then execute the desired requests.  Below is an example of executing the “Get Federation Manager” request.

Figure 17. Output of Get Request from Postman

Using curl to send API calls:

Using curl to send API calls is very simple, and has the advantage of being easy to incorporate into scripts.  For this demonstration, I will show a very basic curl script that you can use to send requests to ND.  You can run it from any Linux box or even a Mac.   Please make sure to install the “jq” package, which should take only a few seconds.

Make a directory, and in that directory make a file containing the JSON body of the ND login request:

{
  "userName": "soumukhe",
  "userPasswd": "superSekret",
  "domain": "raddb"
}

In my case, I named the file loginPayload.json; its contents are shown below.

Figure 18. creating the json file for payload
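As a quick sketch, the payload file can also be created and sanity-checked straight from the shell (using the same example credentials shown above, which you would replace with your own):

```shell
# Create the login payload file (example credentials from this write-up).
cat > loginPayload.json <<'EOF'
{
  "userName": "soumukhe",
  "userPasswd": "superSekret",
  "domain": "raddb"
}
EOF

# Verify the file is valid JSON before pointing curl at it.
python3 -m json.tool loginPayload.json > /dev/null && echo "loginPayload.json is valid JSON"
```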

Next, create the script file with the request that you want to send.  As an example, to send an API call to get information about the Federation Manager, you would create a file such as the one below:


#!/bin/bash

# Define the IP of a master node for the ND cluster you want to target
HOST= # my ND Fed Primary - dmz-nd-cluster2
#HOST= # my ND Fed Member - dmz-nd-cluster1

# Cookie jar used to hold the session cookie returned by login
COOKIEFILE=cookie.txt

# Common curl options for POST, GET, DELETE.
# Arrays are used so the quoted Content-Type header survives word splitting.
# (Swap -s for -v if you want verbose output.)
CURL_OPTS_POST=(-s -k -H "Content-Type: application/json" -X POST)
CURL_OPTS_GET=(-s -k -H "Content-Type: application/json" -X GET)
CURL_OPTS_DEL=(-s -k -H "Content-Type: application/json" -X DELETE)

# Get cookie
curl "${CURL_OPTS_POST[@]}" "https://$HOST/login" -d @loginPayload.json -c "$COOKIEFILE" > /dev/null

# Send desired request
curl "${CURL_OPTS_GET[@]}" "https://$HOST/api/config/federation/manager/mo" -b "$COOKIEFILE" | jq .

The screenshot below shows the created file in my setup.

Figure 19. my file

Make sure to make the file executable.

chmod 775 <script_file>

Now, run the script.  The results from my run are shown below.

Figure 20. Running the script

A Corner Case Situation of Federation Primary Total Failure (unreachable forever):

Now that you know how to run API calls to ND, let’s discuss a very unlikely corner-case situation.

Let’s first recap a few items.

  • In a Federated ND cluster, the cluster from which you add members is your Primary Federation Cluster (the one running the FM)
  • There can be only 1 Primary in the Federated ND Cluster
  • All Federation Member adds/deletes need to be done from the Primary ND Cluster

What would happen if the Primary Federation Cluster totally crashed and burned?

First, let’s discuss why I call this a corner case.   The chances of this happening are extremely remote.  Your clusters (including the Primary Federation ND Cluster) should each contain at least 3 ND masters, and the chances of all 3 masters crashing and getting corrupted are slim.

However, if for some remote reason this does happen, what are the consequences?

Basically, your federation members can still operate on their own, managing their local sites.  Let’s say you had 3 federated ND clusters, A, B, and C, and A was the Primary Federation Cluster.  For some reason, cluster A became totally dead and can never be brought back up.   Clusters B and C will now not learn about changes to each other through the federation; each cluster can still perform operations for its own local sites.   Let’s say NDI was installed on clusters B and C while the Federation Primary (cluster A) was up and running.  After this catastrophic failure of cluster A totally disappearing, NDI on cluster B will still be able to manage and get information from the sites of cluster C.  However, clusters B and C will be stuck with the current federation forever unless one cluster is forced to take over as primary.

At this point, knowing that the Primary ND Cluster is gone and will never return, you will need to force one of the other clusters to take over as primary and add the other ND clusters as members of that new federation.

Since this is such a remote and highly unlikely corner case, it can only be done using API calls with a force flag.  There is no GUI method for it.

The API Calls for Forced takeover of Primary Federation Function.

There are 2 API calls that you will use.  The first forces a member to take over as Federation Primary; the second must be executed for each member that you need to force-join to the new Federation Primary.

  1. Force one of the members to take over
  2. Add the members to the new Federation Primary by force


Force a Federation Member to take over as Federation Primary:

Method: POST
URI: https://<MemberIP_that_will_take_over_as_primary>/api/config/federation/manager
Body:
{
  "federationName": "<name of federation>",
  "force": true
}

Note: "force" is optional if the name above is different from the original federation name.

Add the members to the new Federation Primary by force:

Method: POST
URI: https://<MemberIP_that_will_take_over_as_primary>/api/config/federation/member
Body:
{
  "url": "<url of fed member>",
  "force": true,
  "userName": "<user with admin permission>",
  "loginDomain": "<login domain>"
}
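Tying this together, a minimal shell sketch of the two forced-takeover calls might look like the following.  The IPs, federation name, and credentials are placeholders, and the curl lines are commented out since they target a live cluster; the session cookie would be obtained with a login POST as in the earlier script.

```shell
# Hypothetical addresses: one surviving member takes over as primary,
# another is re-added by force.
NEWPRIMARY=198.51.100.10
MEMBER_URL="https://198.51.100.20"

# 1. Force the chosen member to take over as Federation Primary.
cat > takeover.json <<'EOF'
{
  "federationName": "myFederation",
  "force": true
}
EOF
# curl -s -k -H "Content-Type: application/json" -X POST \
#      "https://$NEWPRIMARY/api/config/federation/manager" -d @takeover.json -b cookie.txt

# 2. Force-add each remaining cluster as a member of the new primary.
cat > addMemberForce.json <<EOF
{
  "url": "$MEMBER_URL",
  "force": true,
  "userName": "admin",
  "loginDomain": "local"
}
EOF
# curl -s -k -H "Content-Type: application/json" -X POST \
#      "https://$NEWPRIMARY/api/config/federation/member" -d @addMemberForce.json -b cookie.txt
echo "takeover and member payloads written"
```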

