Updates:
- 5/18/2021: Deciphering ND Licensing
- 4/14/2021:
- Please use release 2.0.1d, not release 2.0.1b (which got deferred). On CCO downloads, please search for Nexus Dashboard. 2.0.1d can be found at: https://software.cisco.com/download/home/286327743/type/286328258/release/2.0(1d)
- 2.0(2) should be out anytime now
By now you are probably already familiar with Cisco Application Service Engine. We have previously written some articles about it:
- Deploying MSO on Cisco Application Service Engine (OVA based SE) — updated 1/12/2021
- Deploying MSO on Cisco Application Service Engine ( AWS AMI Based SE)
- Deploying Cisco Application Service Engine (SE) for ACI – Fabric Internal Mode – 1.1.2i
You can think of Cisco Nexus Dashboard as the deluxe version of Cisco Application Service Engine. Many improvements have been made to the CASE code, and it has been rebranded as Cisco Nexus Dashboard.
Nexus Dashboard is a high-performance platform that can host Cisco apps and approved 3rd-party apps for all your Day-2 Operations requirements. To start off, you can deploy ACI MSO, Network Insights, and Network Assurance Engine, giving you orchestration, assurance, telemetry/flow analysis, and troubleshooting/correlation features from a single pane of glass. Nexus Dashboard is about more than just ACI, but this being an ACI-only blog, we will limit the scope to ACI and related discussions.

The Nexus Dashboard ISO image can be downloaded from CCO in the same location where you would download the Application Service Engine code. In fact, the Application Service Engine 2.0 ISO image is the Nexus Dashboard image. Installation has been greatly simplified and automated (compared to the SE install). Once the Nexus Dashboard image is installed, you can simply point to the Cisco App Store and download and install the MSO (Multi-Site Orchestrator), Network Insights (NI), and NAE images. The MSO release that works on Nexus Dashboard must be 3.2.x or later. Also, note that NIA (Network Insights Advisor) and NIR (Network Insights for Resources) have now been combined into a unified NI application. The NAE (Network Assurance Engine) release for running on Nexus Dashboard is 5.1.x or later.
Note: NAE will also get integrated with NI in a later release (NI 6.0)
Summary of what the main applications do:
- MSO: Orchestrate ACI Multisite Fabrics (both physical and hybrid/cloud)
- NI: Unification of NIA and NIR. This one application gives you visibility, monitoring, analytics, and correlation, plus advisories and PSIRTs based on your setup/fabric hardware.
- NAE: Verification/assurance of intent against the actual configuration and changes made in the ACI fabric. You can also do what-if analysis and set up policies which define your intent for your fabric (for instance, security-based policies or configuration/feature-based policies); if configurations are made that are not compliant with your policies, NAE will notify you. All this is done by collecting data at intervals (known as epochs) and using mathematical modeling.

The figure below shows how you can click and install your desired app from Nexus Dashboard.

The underlying architecture is based on Kubernetes, and the hardware architecture is massively scalable. Nexus Dashboard is deployed on the Nexus Dashboard platform, a cluster of specialized high-performance UCS servers with fast drives and plenty of memory and CPU. The Cisco Nexus Dashboard platform at its base consists of 3 K8s master nodes. When required, you can scale horizontally by adding 4 more worker nodes, and even 2 standby nodes in case you need to do quick master node recovery (please see the ND Sizing Tool). If you are already familiar with Kubernetes, you will appreciate that bringing up a full working Kubernetes cluster (especially with multiple masters) is not for the faint of heart. The Nexus Dashboard is more than a multi-master K8s platform running on Linux: its architecture makes it possible to run Cisco and 3rd-party apps, giving the end user a single pane of glass for all things related to your ACI fabric (and other fabrics like DCNM-managed ones, which we won't talk about in this guide).

Nexus Dashboard will be available as OVA and cloud images at a later date; I will update this post when they are available. The OVA and cloud versions will of course be less scalable and are targeted at running one or two apps at most, such as MSO. For now, you will need to use the Nexus Dashboard platform (a cluster of UCS servers) to run Nexus Dashboard.
If you have already purchased SE hardware, it can host Nexus Dashboard. You can also upgrade your SE 1.1.3x code to Nexus Dashboard code.
Some of the highlights of Nexus Dashboard capabilities are listed in the figure below.

Licensing requirements for Cisco Nexus Dashboard are shown below:

For Licensing Information, please look at the Licensing CCO URL here.
As an example, you will notice that for running only MSO, you will need a minimum of the “DCN Advantage” license. Please also look at the ND Ordering Site.

Note: I initially found the Licensing document a bit hard to read and understand, so I brainstormed this with a colleague and we were able to decipher it. I am pointing this out below to make it easier for you to understand in case you too find it a bit confusing.
NI (Network Insights), which in release 6.0 will also merge with NAE into a single app, requires a DCN Premier license. The NI feature set is also called the Day-2 Ops feature, and if you look at the diagram below, Day-2 Ops is only available in the DCN Premier license.
Follow the line from DCN Premier all the way to the bottom as shown below. You will notice that this leads to the subscription-based Premier licenses. This means that if you need the Day-2 Ops feature (NI), you will need to buy the subscription-based Premier license. There is no perpetual license for DCN Premier.

Also, notice that along the way, the lines from all the stated licenses intersect the box that says “Cisco Nexus 9500/9300/9200”, which leads you to the Add-on licenses that you can also purchase if you need those features. As you can see, Security, Storage, and NDB can be purchased in either subscription or perpetual mode as Add-on licenses to any of the 5 main licenses.

Now let's say you have a DCN Advantage or Essentials license and would like to use NI 6.0 (NI+NAE). Even if you have perpetual licenses, follow the lines from all the stated licenses through the “Cisco Nexus 9500/9300/9200” box, which leads you to the Add-on licenses available as subscription or perpetual. As you can see, the Day-2 Ops bundle needed for NI 6.0 (NI+NAE) is only available as a subscription Add-on.

Now, let's say you wanted to use ND just to run MSO with a Multi-Site fabric, nothing else. To figure out your possible license options, follow the DCN Advantage line; it leads to either a subscription-based or a perpetual Advantage license. So, if you wanted to run MSO on ND and nothing else, you would need the DCN Advantage license, which you can buy in perpetual or subscription mode.

Nexus Dashboard Installation:
As mentioned earlier, the installation for Nexus Dashboard has been greatly simplified compared to the older Application Service Engine. Full install instructions on CCO can be found here:
Below, I will review the highlights of the method you can follow and guide you through it.
Hardware install: Please follow the CCO guide to install/rack/stack your hardware. Note that, just like in Cisco Application Service Engine, there are two kinds of interfaces on each of the ND nodes (UCS computes): the data network (also sometimes called the fabric interface) and the management network (also sometimes called OOB). For the hardware-based ND platform, each of these (data and mgmt) comprises two interfaces, bonded together in active/standby mode. The fabric interfaces are 10G/25G SFP-based and the mgmt interfaces are 1G/10G copper.

Data network ( also known as Fabric Interface) is used for:
- Nexus Dashboard node clustering
- Application to application communication
- Nexus Dashboard nodes to Cisco APIC nodes communication
- For example, the network traffic for Day-2 Operations applications such as NAE.
Management Network is used for:
- Accessing the Nexus Dashboard GUI
- Accessing the Nexus Dashboard CLI via SSH
- DNS and NTP communication
- Nexus Dashboard firmware upload
- Cisco DC App Center (AppStore)
ND tries to use the OOB interface by default to communicate with Intersight. If that fails, it tries the other interfaces. Users also have the option of configuring a proxy to reach Intersight.
ACI Inband Network Connectivity Requirement
The fabric/data network needs to be able to reach the inband APIC IP (yes, you have to set that up, and I will go over it). The management network is your normal OOB network. You can connect the ACI inband network directly to the ACI fabrics using L2 connections (L2Outs), but I think it makes much more sense to use an L3Out to the inband network. Think of your ND cluster as residing in your NMC (Network Management Center) rather than co-located with your ACI fabric. Further, since your ACI fabrics are likely all in different locations, it just makes sense to use an L3Out routed connection for this purpose.
The diagram below is a representation of the connectivity.

Added 4/20/2021:
As a side note: if your ND nodes are split across sites, there is a requirement that the data interfaces have L2 adjacency between them. This requirement only applies if your ND nodes are managing DCNM-based fabrics; it is needed for the streaming telemetry from the DCNM controller to ND. You don't need to worry about it if your ND nodes are used only for ACI fabric management. You could use any L2 extension technology to accomplish this, e.g. VXLAN EVPN, a traditional L2 stretch, VPLS, and so on. So, to reiterate: for ND with ACI fabrics this is not a concern.
Example of Connectivity in my lab:
For my lab install, I decided to connect using an L3Out to ACI inband. I think this makes the most sense, since the sites will be far apart in a real live Multisite ACI fabric. In the figure below you will see that:
- I have 2 Fabrics, Fabric 7 and Fabric 8.
- Both Fabrics (7 & 8) use vlan 60 for the inband
- I’ve used the following subnets for inband connectivity:
- Fabric 8 inband: 10.1.60.0/28
- Fabric 7 inband: 10.1.60.16/28
- DMZ ISN L3Out Peer: 10.1.60.32/28
- DMZ ISN To ND Cluster: 10.1.60.48/28
- Note: I’ve changed the L3Out SVI MAC addresses to unique values. This is because in this setup, both fabric L3Outs connect to the same router (ISN) on vlan 61, VRF DMZ. If I did not do that, I would have duplicate MACs.
- I only have my L3Out from one leaf, and since this is a lab, I can get away with it. Ideally, in production, I would use a vPC for the L3Out connection.
- I’ve also used a tag map and tagged the VRF for Fabric 7 with 777777 and Fabric 8 with 888888. This is a convenience factor and helps me look at prefixes from the router and know where they came from.

Steps to Create the Inband ACI Network:
First, create your fabric access policies. In the example below, I show only the configuration for Fabric 8; Fabric 7 will be similar.

Next, I go to the mgmt tenant on APIC and tag the VRF inb (please create VRF inb if it's not there). Notice I've tagged VRF inb with tag 888888 for Fabric 8. Again, this is optional, but it makes sense for future troubleshooting if needed.

Next, create your BD inb and make sure to map it to VRF inb. Configure the default gateway from the inband subnet you assigned; in my case, the IP is 10.1.60.1/28. Also notice that I am not going to tie my L3Out to the BD itself. It's better practice to advertise the BD subnet outside using a route map on the L3Out. This gives you better, more granular control, plus you can do multiple things with the route map as needed.
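As a side note, if you would rather script this step than click through the GUI, below is a minimal sketch of the same BD configuration pushed via the APIC REST API. This is a hedged example, not the exact method used above: it assumes the standard fvBD/fvRsCtx/fvSubnet classes, and $APIC plus the admin password are placeholders you would adapt. The subnet scope of "public" marks it as advertised externally, which the L3Out route map can then match.

# Log in first to obtain an APIC-cookie token (the password is a placeholder):
TOKEN=$(curl -sk -X POST "https://$APIC/api/aaaLogin.json" \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}' \
  | python3 -c 'import sys,json; print(json.load(sys.stdin)["imdata"][0]["aaaLogin"]["attributes"]["token"])')

# Create BD inb in tenant mgmt, bind it to VRF inb, and add the 10.1.60.1/28 gateway subnet
curl -sk -b "APIC-cookie=$TOKEN" -X POST "https://$APIC/api/mo/uni/tn-mgmt/BD-inb.json" \
  -d '{"fvBD":{"attributes":{"name":"inb"},"children":[
        {"fvRsCtx":{"attributes":{"tnFvCtxName":"inb"}}},
        {"fvSubnet":{"attributes":{"ip":"10.1.60.1/28","scope":"public"}}}]}}'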

Next, create your inband EPG. Note that the inband EPG is a special EPG and is created in the Node Management EPG section. Make sure to tie BD inb to this EPG. Also, notice that I've used an any/any contract in this case for the L3Out, configured as both provider and consumer. Of course, in a real production fabric, you will want to lock this down to what's needed.

Next, I go to Static Node Management Addresses and give every leaf, spine, and APIC an IP from the subnet that I configured for Fabric 8 inband (on the BD).
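If you are scripting instead, here is a hedged sketch of the same step via the API. It assumes the mgmtRsInBStNode class under the default management profile, "In-Band-EPG" is a placeholder for whatever you named your inband EPG, and it reuses the $TOKEN from the earlier login example.

# Assign node-101 an inband IP from the Fabric 8 /28; repeat for each leaf, spine, and APIC
curl -sk -b "APIC-cookie=$TOKEN" -X POST \
  "https://$APIC/api/mo/uni/tn-mgmt/mgmtp-default/inb-In-Band-EPG.json" \
  -d '{"mgmtRsInBStNode":{"attributes":{"tDn":"topology/pod-1/node-101","addr":"10.1.60.2/28","gw":"10.1.60.1"}}}'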

Next, create your L3Out for routing inband prefixes to the outside. Make sure the L3Out references VRF inb.

Next, add a route map to advertise the prefix out. I've also changed the OSPF external prefix type to external type-1, which is good practice.

Do not forget to add your contract (provider/consumer, based on your security needs) to the L3Out.
Lastly, please make sure to go to Global Configuration and change your Connectivity Preference for APIC if needed. In my case, I don't want to change it.

Your inband connectivity for the fabrics is now done.
Make sure to do the matching configuration on your router. In my case, since I connected both Fabric 7 and Fabric 8 to the same router (lab scenario), I've done the matching configuration on that one router. Once you configure your router, make sure that OSPF is up, and do ping tests from the router to the APIC inband and leaf/spine inband IPs.
My Matching Configuration on the external router:
vrf context DMZ
ip route 0.0.0.0/0 10.1.1.2
address-family ipv4 unicast
!
interface Ethernet1/48
description ToLeaf101-Fabric8 for ACI-Inband
switchport
switchport mode trunk
switchport trunk allowed vlan 61
no shutdown
!
interface Ethernet1/45
description ToLeaf101-Fabric7 for ACI-Inband
switchport
switchport mode trunk
switchport trunk allowed vlan 61
no shutdown
!
interface Ethernet1/19-24
description Connection to SE Cluster for ND
switchport
switchport access vlan 62
no shutdown
!
interface Vlan61
description Inband connection to Fabric7 E1/45 and Fabric8 E1/48
no shutdown
vrf member DMZ
ip address 10.1.60.34/28
ip ospf network broadcast
ip router ospf 200 area 0.0.0.0
!
interface Vlan62
description e1/19-24 for SE connection
no shutdown
vrf member DMZ
ip address 10.1.60.49/28
ip ospf network broadcast
ip router ospf 200 area 0.0.0.0
!
router ospf 200
vrf DMZ
router-id 10.1.100.1
default-information originate
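To run the checks just mentioned from the router side, a few standard NX-OS commands along these lines will do (the VRF name and addresses are from my lab; adjust for yours):
show ip ospf neighbors vrf DMZ (adjacencies to the ACI border leafs should be FULL)
show ip route ospf-200 vrf DMZ (the inband prefixes from both fabrics should appear)
ping 10.1.60.2 vrf DMZ (APIC inband IP for Fabric 8, as an example)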
Installing Nexus Dashboard from ISO:
If you order a new Nexus Dashboard Platform, you will probably have it come in with the Nexus Dashboard image already installed. If you want/need to install from scratch, then the procedure has been greatly simplified.
First, download the ISO image for Nexus Dashboard from CCO (search for Application Service Engine in downloads; anything from version 2.0.x is Nexus Dashboard even if it says Service Engine). After the download is complete, point your browser to the CIMC of each node. Then map the downloaded ISO as a virtual CD/DVD as shown below.

Power Cycle the nodes.

Hit F6 for the BIOS boot menu when the banner comes up during reboot.

In the BIOS boot menu, select boot from Cisco vKVM-Mapped DVD.

The Nodes will start booting and installing the OS and Nexus Dashboard.

From the KVM console, after a while, it will seem that it's hung. However, the system is still installing.

Please go ahead and watch from the serial console instead. For this, first SSH to the CIMC; from there, do a “connect host”.
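For example (the CIMC IP is a placeholder):
ssh admin@<node-cimc-ip>
connect host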

After the install is complete, all the nodes will shut down. Go to your CIMC UI and power on the nodes. After the nodes have come up, hit Enter on the node you want to be your first master node. Do not touch the other nodes! (You don't even need to go into the initial setup on any node except the initial master.) It will ask you to configure:
- The rescue-user password
- The management network IP
- The gateway for the management network
Go ahead and configure them. Again, do not touch the other nodes.

Note: In case you made a mistake, or you want to wipe the ND config clean and restart from the initial configuration screen, you can do so with the command: acs reboot factory-wipe
After this step, after a few minutes the screen will tell you that you can point your browser to https://management_ip. Go ahead and do that now. This will take you to the bootstrap (first-time) configuration for ND. Use your rescue-user password to log in.

Populate the cluster related information.
- ND Cluster Name
- NTP Info
- DNS Info
- DNS Search Domain
- K8s App Network (change from the default if needed)
- K8s Service Network (change from the default if needed)
Hit Next to proceed

Next, edit the 1st master node to fill in its details.

Make sure to put the Node Detail information in:
- Name
- Serial Number: auto-populated
- Management IP and Gateway: already configured for the first master, just double-check
- Data Network IP and Gateway

Now click on Add Node.

Put in the information for the 2nd master node. Note that you have to enter:
- The CIMC IP and credentials for the 2nd master node; hit Verify. This will auto-populate the serial number.
- The name of the node
- The management network IP/gateway
- The data network IP/gateway
- The VLAN, if you used 802.1Q trunking (generally you won't)

Repeat for the 3rd Master Node

On the next page, you can see the details and edit the information if you made a mistake. Hit Next.

The next page is the confirmation page. It shows a summary of all the information you put in. If it is correct, hit Configure; otherwise, hit Previous and edit/correct.

You will now see the progress screen.

If you watch from the console, you will see the configuration getting applied to Master 2 and Master 3. This is done using the CIMC serial console.

It will take a good 45 minutes to 1 hour from here for the cluster to come up and be in a working state. You can watch this in the progress bar. Nodes might reboot during the install.

You are all done with the Basic Install of ND.
Configuring Radius Authentication for ND
Browse to ND and log in with admin and the rescue-user password you configured.
Adding authentication servers is almost a must for a production fabric. Also, remote authenticated users will be able to do cross-launch / single sign-on, meaning they can go to the APIC console directly from ND or from MSO without having to enter their credentials again.
Authentication Servers supported are:
- LDAP
- TACACS
- RADIUS
In this example we will add a RADIUS server for authentication. Please note that you need to configure your AV pair for RADIUS in a specific way for MSO and APIC to work. Kindly look at the avPair configuration section in the unofficial guide writeup Upgrading ACI Fabric and MSO.
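As a hedged illustration only (the authoritative format is in that writeup), a FreeRADIUS users-file entry for an admin user would look something along these lines, with the APIC-style AV pair carried in the standard Cisco-AVPair attribute. The username and password here are made up:

# /etc/raddb/users (the path varies by distribution)
nduser  Cleartext-Password := "MySecretPass"
        Cisco-AVPair = "shell:domains=all/admin/"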
Click on Administrative/Authentication, and in the work pane click Actions/Create Login Domain.

Create a login domain and name it something (raddb in my case).
Now, click “Add Provider” and put in the information for your RADIUS provider:
- Host Name/IP
- Authorization Protocol: default is PAP
- Port: default is 1812. (Notice that in my case I'm using port 10000, since I am running FreeRADIUS in a container and mapped port 1812 to port 10000; see the sketch after this list.)
- Key
- Timeout and Retry Interval
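For reference, the container setup mentioned in the port bullet above can be reproduced roughly like this (a sketch using the public freeradius/freeradius-server image; the raddb volume path is a placeholder):

# Map host UDP port 10000 to the RADIUS authentication port 1812 in the container
docker run -d --name freeradius \
  -p 10000:1812/udp \
  -v /path/to/raddb:/etc/raddb \
  freeradius/freeradius-server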

Now log in to ND and you will see the option to select an authentication domain. Choose the RADIUS domain that you created and log in with your RADIUS user credentials.

Adding in Sites
After logging in to the UI with your RADIUS username/password, go to Dashboard and click Add Site.

Populate all the information about the Site as shown below.
It is very important that:
- Site Name: If you intend to back up the configuration from a previous version of MSO and restore it to the MSO running on ND (MSO release 3.2.x and later is supported on ND), make sure that the site name defined in ND exactly matches the site name you defined on the previous version of MSO. If you do not do that, you will not be able to restore the configuration on the MSO running on ND. This is because sites are now defined in ND, and the apps (MSO and NI, for instance) read the site names from ND. If the site name mismatches during the restore from the previous MSO backup, the restore will fail, complaining about an undefined site.
- For the IP address of the APIC, you need to populate the inband IP address of the APIC, not the OOB IP. NI will not work properly if you don't do this (though MSO will be OK).
- Put in the exact inband EPG name that you configured for that fabric

After this, everything will look good and your site will be added. Add the other sites too; in my case, I added Fabric 8 as my 2nd site.
Note: If you see an error like the one below while adding a site, it means that you previously added a site from SE or ND and then deleted the ND configuration without deleting the site from ND first. APIC has some objects that get configured via API calls from ND, and these will prevent you from adding the site back again.

The solution for this is to go to the APIC object browser, browse for the following objects, and then wipe out any references to the previous objects by sending API calls to APIC (using Postman or equivalent curl):
- aaaServiceNodeCluster
- analyticsCluster
This will show you the object names. See the example in the figure below:

Use Postman to delete the objects by name and then add the site again from ND.
https://{{apic}}/api/node/mo/uni/fabric/analytics/cluster-Object_Name_Found.json
https://{{apic}}/api/node/mo/uni/userext/snclstr-cluster-Object_Name_Found.json
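If you prefer curl over Postman, a hedged equivalent looks like this, reusing an APIC-cookie token obtained as in the earlier BD example (Object_Name_Found is whatever name the object browser returned):

curl -sk -b "APIC-cookie=$TOKEN" -X DELETE \
  "https://$APIC/api/node/mo/uni/fabric/analytics/cluster-Object_Name_Found.json"
curl -sk -b "APIC-cookie=$TOKEN" -X DELETE \
  "https://$APIC/api/node/mo/uni/userext/snclstr-cluster-Object_Name_Found.json"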

Checking Cross Launch for Remote Users (users authenticated by Radius in my case)
In my lab setup, I completed adding both the sites. Let’s check out the cross launch functionality.
First, I log in to ND with remote user credentials, using the RADIUS login domain that I created.

Let's go to Sites, and for the Fabric 7 APIC, let's click “Open”.

I immediately get taken into the APIC UI without entering any credentials. The credentials used were the same RADIUS username/password that I used to log into ND.

Setting Up Intersight on ND
By setting up Intersight, you will gain the following functionality:
- ND cluster provides access to Intersight for applications
- NI will use Intersight for bug/PSIRT/recommendation updates
You can learn more about Intersight by going to the following CCO documentation: Cisco Intersight Overview
To set up Intersight, first log into the ND UI. Then go to System Overview and click “Setup Intersight”.

Verify that “Device ID” and “Claim ID” show up. If not, click Settings:

Make sure:
- Device Connector is on
- Choose Read-Only or Allow Control
- Choose Auto Update

If the settings are correct and your connectivity is good, then “Device ID” and “Claim ID” should show up.
Next, click on Intersight.


If you have an Intersight account, log in. If not, create one.

Once you are in, click on “Target” and “Claim a new Target”.

Next, click on “Cisco Nexus Dashboard”.

Copy the “Device ID” and “Claim Code” from the ND UI and paste them in here.

You will now see that the ND cluster has been claimed in Intersight.

If you double-click on that entry, you will see that all the nodes in the ND cluster have been claimed.

List of Useful CLI Commands from the SSH Terminal on ND:
Cluster Troubleshooting:
- acs health — displays cluster health information and any existing issues (for more detail, use acs health -d).
- acs cluster config — displays the cluster configuration.
- acs cluster masters — displays the master nodes' configuration.
- acs cluster workers — displays the worker nodes' configuration.
- acs cluster standbys — displays the standby nodes' configuration.
- acs techsupport collect — collects tech support information.
- acs version — returns the Nexus Dashboard version.
Resetting Devices:
- acs reboot — reboots the node.
- acs reboot clean — removes all data for Nexus Dashboard and applications, but preserves the Nexus Dashboard bootstrap configuration and pod images. (When you first bring up your Nexus Dashboard cluster, the initial deployment process installs all required pod images; retaining them speeds up cluster bring-up after reboot.)
- acs reboot clean-wipe — removes all data for Nexus Dashboard and applications, including application images, but preserves the Nexus Dashboard bootstrap configuration. When the cluster boots up again, the pod images will be re-installed.
- acs reboot factory-reset — removes all data for Nexus Dashboard and applications, including the cluster bootstrap configuration, but preserves application images; retaining them speeds up cluster bring-up.
- acs reboot factory-wipe — removes all data for Nexus Dashboard and applications, including application images and the cluster bootstrap configuration. When the cluster boots up again, the pod images will be re-installed.
Conclusion:
Nexus Dashboard is the next-generation, deluxe version of SE. It has a microservices-based architecture that gives you a single pane of glass for orchestration, visibility, telemetry, and troubleshooting of your fabric end to end. Cisco Nexus Dashboard is massively scalable and easy to install and operate. It has very high availability, given that it is K8s-based and has multiple masters. You can scale horizontally when needed by adding 4 additional worker nodes and 2 standby nodes. Cisco Nexus Dashboard can also host approved 3rd-party apps. And Cisco Nexus Dashboard is about more than ACI: it also supports DCNM fabrics.
References:
- Cisco Nexus Dashboard Users Guide
- Nexus Dashboard Hardware Installation Guide
- Nexus Dashboard Deployment Guide
- Cisco Intersight Overview
- ND Sizing Tool
- Licensing CCO URL
- ND Ordering Site