Terraform with Cisco Nexus Dashboard Orchestrator for building Hybrid Cloud and end to end services

Table of contents

  1. Introduction
  2. What We Will Demo in this Article
  3. A Brief Introduction to Terraform
    1. Characteristics of Terraform
    2. Important Pointers of Terraform
    3. Terraform Providers
    4. Terraform HCL code structure
    5. What Order are Resources built
    6. Terraform Backends
    7. Provisioners
    8. Executing The HCL Code
    9. Types of IAC Tools
    10. Terraform Commands
    11. Items to Remember for Terraform
    12. Security Considerations when uploading Terraform HCL code to git
  4. Let’s begin with our deployment
    1. Pre-Requisites
    2. Download & Install Terraform
    3. git clone the sample code
    4. modify override.tf files
    5. create aci tenant in aws tenant account
      1. Important Note on parallelism env for Terraform with NDO
      2. Checking from NDO
      3. Checking the terraform Module graph with terraform graph
    6. create ec2 instances in the ACI/AWS tenant infrastructure
    7. spinning up Elastic Kubernetes Cluster on ACI/AWS tenant infrastructure
  5. References


In the past, setting up data center infrastructure has been a very manual process. Humans have to physically rack and stack the Data Center Infrastructure. Then the infrastructure has to be configured so that connectivity to Servers/Services/Security can be properly implemented. Finally, the Servers/Services have to be manually provisioned and segregated based on the requirements.

If more servers and services are needed on a temporary or permanent basis, the process has to be manually repeated. Expanding data centers to the cloud is a perfect solution to speed up the process. With Hybrid Cloud Data Centers, you can easily expand or pull back your Data Center resources in the cloud as needed.

Cloud ACI with Cisco Nexus Dashboard Orchestrator gives you the capability to create hybrid cloud data centers easily. The Data Center infrastructure across physical and cloud (or any combination of physical and/or multiple cloud provider clouds) can be configured through the Nexus Dashboard Orchestrator. The end result is that you can have your services securely anywhere (physical/cloud or any combination) and this is transparent to the end user.

Once the infrastructure is ready, you can use the cloud provider’s console or API-based provisioning methods for spinning up compute and services. Provisioning using the cloud console is generally done for quick one-time configurations. However, in the cloud, services are often spun up and destroyed depending on the needs. Automation for this is essential for any DevOps based operations, so that the required services can be spun up quickly and destroyed consistently and without errors.

Every cloud provider has their own automation method. Some examples are shown below:

  • AWS: Cloud Formation Template
  • Azure: Azure Resource Manager
  • GCP: Cloud Deployment Manager

Though these cloud automation/deployment tools are excellent for the particular cloud provider, Terraform works across all cloud providers. Terraform also works with NDO. The implication of this is that you can build your entire Data Center Infrastructure across physical and different cloud providers using Terraform with NDO. You can then use Terraform to spin up the required services in the cloud, and you have an end to end Hybrid Data Center in minutes. This also gives you the capability to break the infrastructure and services down as needed (to save on costs). Further, the infrastructure will always be consistent and error free since you don’t have to do manual provisioning.

What We Will Demo in this Article

In this article, we will do the following:

  1. Spin up a full ACI Tenant in AWS using Terraform with NDO
  2. Spin up Web Servers on the AWS Tenant / ACI Infrastructure using Terraform
  3. Spin up a full Kubernetes Cluster (Elastic Kubernetes Services) on the ACI Infrastructure on AWS using eksctl utility

I have purposely only used AWS in this demo to keep it simple. You can easily modify the provided scripts to extend the tenant to a physical ACI Fabric as needed. You can follow through this writeup to get very familiar with the process and modify the scripts as you wish.

For this demo, we will run the free (non cloud) version of Terraform on our local desktop.


A Brief Introduction to Terraform

Terraform is a multi-cloud automation software product created by HashiCorp®. Recently, HashiCorp announced its Initial Public Offering (Nov 29th, 2021).

Some of the highlights of Terraform are listed below:

  • Simple configuration and very fast learning curve
  • Supports multiple Platforms and clouds, has hundreds of providers
  • Easy integration with Configuration Management tools like Ansible
  • Easily extensible with plugins
  • Free for local install. Terraform Cloud has a free tier and a paid business tier with extra team management and other features. There is also a paid Enterprise version for air-gapped installs.

You can see a full list of Terraform Feature Matrix at: https://www.datocms-assets.com/2885/1602500234-terraform-full-feature-pricing-tablev2-1.pdf

Figure 1: Terraform Feature Matrix

Characteristics of Terraform
  • Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
  • Terraform is idempotent – you can apply multiple times with no changes
  • You can do dry run with “terraform plan”
  • Terraform is immutable – resources are replaced rather than upgraded in place
  • Terraform is Declarative (not Imperative/Procedural)
  • Agentless (unlike Chef, Puppet, *SaltStack)

*Note: SaltStack can run both with agent and agentless

Important Pointers of Terraform
  • Terraform is written in the Go language. Go is not needed unless you are writing/modifying providers
  • Terraform configurations for your infrastructure are generally written in HCL (HashiCorp Configuration Language). This is very close to JSON
  • Terraform keeps state, unlike Ansible, which means that once you give control to Terraform, you should not do manual configurations (unless you just do initial provisioning and discard the state file)
    Example: change the name of a VM manually and execute the script again:
    Ansible will deploy a brand new VM with the original name specified in the Ansible Playbook.
    Terraform will rename the manually changed VM back to the original VM name. This demonstrates the Declarative nature of Terraform, where the user defines the desired state only and Terraform complies.
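As a minimal HCL sketch of that declarative behavior (the AMI ID and names here are illustrative, not from this demo):

```hcl
# The desired state is fully declared here: one EC2 instance named "web-01".
# If the Name tag is changed manually in the AWS console, the next
# "terraform apply" detects the drift and renames it back to "web-01".
resource "aws_instance" "web" {
  ami           = "ami-12345678" # illustrative AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-01"
  }
}
```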

Figure 2: Terraform is a Declarative Model, Ansible is an Imperative Model.

Terraform Providers
  • A Provider is the underlying code that takes in your config.
  • The Provider takes your config and sends API calls to the controller, like APIC/MSO/vCenter/AWS/Azure/GCP/DigitalOcean, etc.
  • There are 2 types of Providers: approved and 3rd party (community providers)
  • Approved means HashiCorp has verified and certified the provider

Terraform List Of Providers can be found here: https://registry.terraform.io/browse/providers
Figure 3: Terraform Providers

Terraform HCL code structure
  • All information, with detailed examples for every provider, can be found at terraform.io
  • Code is written in HCL (HashiCorp Configuration Language), which is very similar to JSON
  • Basic structures that you need to configure are providers, resources, data-sources, variables and functions
  • You can also upload your module to the Terraform Registry for easy sharing with your team. The Terraform Registry is closely tied to git, so you upload your module to git and then register it in the Terraform Registry (public or private)
  • Public Registry modules can be used by anyone
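For example, consuming a module from the public Registry takes just a module block. The module shown here is a real public Registry module; the name and CIDR values are illustrative:

```hcl
module "vpc" {
  # a widely used public Registry module
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = "demo-vpc"      # illustrative values
  cidr = "10.0.0.0/16"
}
```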
What Order are Resources built

Terraform builds a dependency graph from the Terraform configurations, and walks this graph to generate plans, refresh state, and more.

  • Implicit Dependency:
    • Terraform automatically infers when one resource depends on another through interpolation of configuration. e.g. create ec2 before lb
  • Explicit Dependency:
    • If the dependency is not known to Terraform, you can put a "depends_on" argument in the resource declaration, which forces the other resource to be built first. e.g. if an RDS database needs to be built before the ec2 instance, put "depends_on = [aws_db_instance.my_instance]"
  • By default, Terraform runs up to 10 operations in parallel. You can limit the number of concurrent operations with the flag "-parallelism=n"
  • Terraform can print out a very nice graphic of your module relationship with “terraform graph” command
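The two dependency styles can be sketched like this (resource names and AMI IDs are illustrative):

```hcl
# Implicit dependency: the EIP references aws_instance.web, so Terraform
# knows it must create the instance before the Elastic IP.
resource "aws_instance" "web" {
  ami           = "ami-12345678" # illustrative
  instance_type = "t3.micro"
}

resource "aws_eip" "web_ip" {
  instance = aws_instance.web.id
}

# Explicit dependency: nothing here references the database, so depends_on
# forces the database to be built before the app instance.
resource "aws_db_instance" "my_instance" {
  # ...
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"
  depends_on    = [aws_db_instance.my_instance]
}
```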
Terraform Backends
  • As mentioned before, Terraform keeps state. As an example, if you manually change the name of an EPG or AWS EC2 or Azure VM, Terraform will know that you changed it and on the next apply will revert it back.
  • Terraform state is kept in a state file on Backends
  • For local installs, the Backend state file is not encrypted. State is kept in a file called "terraform.tfstate"
  • For Cloud installs, the Backend state file is encrypted and transferred over TLS if you are executing locally

Provisioners
  • Terraform also has Provisioners
  • There are 2 main types of Provisioners (there are also null and file provisioners):
    • local-exec for running local command-line executables
    • remote-exec for running commands on the remote resource, for instance bringing up an ec2 instance and then doing an “apt update and apt install nginx”
  • Provisioners are Creation-Time by default. They only run once during initial creation
  • Destroy-Time provisioners are used so that, when doing a destroy, certain things are uninstalled first, such as an anti-virus agent, so that the anti-virus master server knows that the endpoint should not be managed any more
  • It is not recommended to use Provisioners extensively
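A hedged sketch of both provisioner types on one resource (the AMI and commands are illustrative; a real remote-exec would also need a connection block with SSH details):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-12345678" # illustrative
  instance_type = "t3.micro"

  # remote-exec runs on the newly created instance (over SSH)
  provisioner "remote-exec" {
    inline = [
      "sudo apt update",
      "sudo apt install -y nginx",
    ]
  }

  # local-exec runs on the machine executing terraform
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> created_ips.txt"
  }
}
```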
Executing The HCL Code

Once you write the initial Terraform Code, you can deploy and modify the code as shown in the flow diagram below.
Figure 4: Executing the Terraform Code

Types of IAC Tools

It’s important to realize that all IAC tools have their place and each is suitable for its use case. Terraform in no way obsoletes any of the existing tools. Depending on the use case, the proper IAC tool needs to be selected.

Figure 5: Different IAC Tools

IAC tools can also be used in combination. The illustration below shows Terraform being used to spin up an AWS EC2 instance and utilizing the Terraform local provisioner to make Ansible calls to configure the EC2 instance (perhaps install some application on it). As mentioned before, the Terraform remote provisioner can also be used to configure the EC2 instance, but as a practice, it’s not recommended.

Figure 6. Making Ansible calls from Terraform

Terraform Commands

A list of Terraform Commands available can be seen below:

aciadmin@ubuntu-jump:~/Terraform/AWS-EKS-Terraform$ terraform -help
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure

All other commands:
  console       Try Terraform expressions at an interactive command prompt
  fmt           Reformat your configuration in the standard style
  force-unlock  Release a stuck lock on the current workspace
  get           Install or upgrade remote Terraform modules
  graph         Generate a Graphviz graph of the steps in an operation
  import        Associate existing infrastructure with a Terraform resource
  login         Obtain and save credentials for a remote host
  logout        Remove locally-stored credentials for a remote host
  output        Show output values from your root module
  providers     Show the providers required for this configuration
  refresh       Update the state to match remote systems
  show          Show the current state or a saved plan
  state         Advanced state management
  taint         Mark a resource instance as not fully functional
  test          Experimental support for module integration testing
  untaint       Remove the 'tainted' state from a resource instance
  version       Show the current Terraform version
  workspace     Workspace management

Global options (use these before the subcommand, if any):
  -chdir=DIR    Switch to a different working directory before executing the
                given subcommand.
  -help         Show this help output, or the help for a specified subcommand.
  -version      An alias for the "version" subcommand.


Items to Remember for Terraform


  • Terraform workspace is a folder where you keep the terraform code
  • Terraform files always end in a .tf or .tfvars extension
    • in a workspace you can have separate .tf files to keep them separated by their function, such as variables.tf for defining variables and main.tf for the main code with the resource definitions. main.tf will pick up its variable values from variables.tf while executing. You can optionally put everything in one .tf file
    • The most common convention for the .tf files are as such:
      • main.tf: Most of the functional code goes here
      • variables.tf: This file is used for defining and storing default values of variables
      • outputs.tf: Defines what is shown at the end of a terraform run
      • override.tf or some-name_override.tf files can be used to override variable values. You could have multiple some-name_override.tf files and Terraform will read them alphabetically (indexed on some-name), but this is not suggested, because it makes troubleshooting the code difficult
  • "terraform fmt" command can visually format the terraform code nicely (makes it look very professional)
  • "terraform graph" creates a file showing the visual interactions between modules of the Terraform code
  • "terraform init" has to be used the first time or when introducing new modules
  • "terraform refresh" is a useful command to refresh state
  • "terraform destroy" can be used with a target, such as terraform destroy -target "target-resource-name"
  • "terraform apply" can be run with "-auto-approve" flag, so it won’t ask for confirmation.
    • "terraform apply" inherently does a "terraform plan"
  • "terraform import" can be used to import existing resources into state file and can be helpful to create the terraform resource block
  • "terraform console" is a very good tool to test out/validate terraform function syntax you want to incorporate in your script
  • Data Sources in Terraform are used to get information about resources external to Terraform and use them to setup Terraform resources
  • the state file is stored locally in "terraform.tfstate". For Terraform Cloud, you can keep state files in the cloud
    • state file contains sensitive information (passwords/keys)
  • a lock file is created with the name of ".terraform.lock.hcl"
  • Lines can be commented by #
  • Blocks can be commented by /* terraform code lines here */
  • Terraform Variables can be defined in multiple ways with the following order of precedence (highest to lowest):
    • command line flag – run as a command line switch
    • set in terraform.tfvars file
    • Environment variables, by doing "export TF_VAR_variableName=value"
    • Default variable value from variables.tf file (and overwritten by values from override.tf)
    • User Manual Entry: if value of variable is not specified anywhere
  • Terraform Resources describe one or more infrastructure elements
    • you can use Meta-Arguments in resources such as:
      • depends_on, count, for_each
  • Terraform Provisioners can be used to run scripts (not recommended).
    • Provisioners only run on the 1st run. You can force them to run every time with triggers (inside a null_resource block) as shown below:
      triggers = {
        build_number = timestamp()
      }
  • There are different kind of provisioners:
    • null provisioner
    • file provisioner
    • local provisioner
    • remote provisioner
  • Destroy-Time provisioners can be used so that the provisioned software is removed before the rest of the destroy. This can sometimes be useful, as in the case of de-registering agents from compute.

    resource "aws_instance" "web" {
      # ...
      provisioner "local-exec" {
        when    = destroy
        command = "echo 'Destroy-time provisioner'"
      }
    }

    Security Considerations when uploading Terraform HCL code to git

As mentioned previously, Terraform is stateful and sensitive information like passwords and keys are stored in the state files. For this reason, if you want to upload your code to git, you need to ensure that you don’t upload the sensitive information.

I would recommend the following:

  • Don’t put any sensitive information in variables.tf or hardcode it in main.tf. Don’t put sensitive information in terraform.tfvars. If you look at the configs that you download for the demo, you will see that variables.tf has the variable block with sensitive information defined, but the actual keys and passwords are not included there.
    Figure 6a. Part of my variables.tf file showing sensitive information not included there

  • create an override.tf file where you repeat the blocks for keys & passwords and put your sensitive information in this override.tf file.

  • Now in your git local repo, create a .gitignore file with contents like the below. This would ensure that sensitive information is not uploaded to git when you do a git push origin master

aciadmin@ubuntu-jump:~/gitstuff/terraform-NDO-AWS_with_ec2_eks$ cat .gitignore
#  Local .terraform directories
**/.terraform/*

# .tfstate files (including backup state file)
*.tfstate
*.tfstate.*

# .tfvars files
*.tfvars

# no creds
*credentials*

# terraformrc file
.terraformrc
terraform.rc

# override.tf files with sensitive information
override.tf
override.tf.json
*_override.tf
*_override.tf.json
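If you want to sanity-check the ignore rules before pushing, git can tell you which paths it will skip. This sketch builds a throwaway repo with a minimal .gitignore covering the categories above (file names are illustrative):

```shell
# build a scratch repo just to test ignore rules
git init -q ignore-demo && cd ignore-demo
printf 'override.tf\n*.tfstate\n*.tfvars\n' > .gitignore
touch override.tf terraform.tfstate terraform.tfvars main.tf

# git check-ignore prints the paths that match an ignore rule
git check-ignore override.tf terraform.tfstate terraform.tfvars

# main.tf matches nothing, so check-ignore exits non-zero for it
git check-ignore main.tf || echo "main.tf will be pushed"
```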


Let’s begin with our deployment

Pre-Requisites

  • Have a Linux-based workstation to work from (either Mac or Linux). I’ll do the demo on Ubuntu
  • Install cAPIC infra tenant on AWS account (please look at previous writeups for this)
  • Install ND 2.1(1e) or higher and NDO version 3.6 or higher (please look at previous writeup for this).
  • On board the AWS Infra Tenant on ND & NDO (please look at previous writeups for this)
  • On the AWS account for the tenant, create an AWS user (from IAM) with programmatic access and download the security credentials for that user (access key ID and secret access key)
  • On your Ubuntu box, make sure to install AWS CLI version 2
  curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
  unzip awscliv2.zip
  sudo ./aws/install
  • check with: aws --version
    Figure 6.b: checking aws version

  • Create an aws cli default profile for that user

  aws configure --profile tfuser     ( # tfuser is the username you created in AWS for this)
  • you can set an environment variable (export AWS_PROFILE=tfuser) so that your aws cli uses that user identity by default (note: you don’t have to do this for Terraform use, but we will use this later)
  • Check that aws cli is working: aws sts get-caller-identity
    -- output --
    {
        "UserId": "SOMEUSERID",
        "Account": "XXXXXXXXXX",
        "Arn": "arn:aws:iam::XXXXXXX:user/tfuser"
    }
Download & Install Terraform

Terraform comes in a single binary. To download it,

  • go to https://terraform.io
  • click on Download CLI
  • copy the link for your operating system
  • on your ubuntu VM, do curl -O "<the link you copied>"
  • unzip the zip file
  • sudo mv terraform /usr/local/bin

Figure 7: Installing Terraform

Check with terraform version
Figure 8: terraform version

git clone the sample code

Clone the sample git code to your ubuntu VM

git clone https://github.com/soumukhe/terraform-NDO-AWS_with_ec2_eks.git

Figure 9: Clone the sample code from git

Change Directory to "terraform-NDO-AWS_with_ec2_eks/"
a tree command in that directory will show you the following file structure
Figure 9: Structure of cloned git repo

modify override.tf files

Go to each of the main directories and modify the "override.tf" files.

  • Please put in the values from your environment.
    ⚠️ The "override.tf" files in each directory are different, so please don’t copy the same one to each directory. Modify each of them separately.

Below you can see the contents of "override.tf" in the aciAWS_infra directory.

#  use this override.tf to put in confidential data

#  Populate values based on your AWS values
variable "awsstuff" {
  type = object({
    aws_account_id    = string
    aws_access_key_id = string
    aws_secret_key    = string
  })
  default = {
    aws_account_id    = "populate_me"
    aws_access_key_id = "populate_me"
    aws_secret_key    = "populate_me"
  }
}

#  Populate values based on your ND configuration
variable "creds" {
  type = map(any)
  default = {
    username = "populate_me"
    password = "populate_me"
    url      = "https://ip_of_nd/"
    domain   = "put_in_auth_domain_defined_in_ND" # if you are using a local user, comment this out.
                                                  # Make sure to also comment out in variables.tf file.
  }
}

create aci tenant in aws tenant account
  • change directory to "aciAWS_infra"
    Now, modify the terraform.tfvars file and put in the values for your desired tenant variables. My values are shown below:

# Values of variables to override default values defined in variables.tf
# These are my sample variable values.  Please change based on your requirements.
aws_site_name = "AWS10" # the site name for the AWS site as seen on ND

schema_name   = "SM-Terraform-Tenant-Schema" # give it a name for the schema as you wish
template_name = "shared-template"            # use a template name as you wish
vrf_name      = "vrf1"                       # use a vrf name as you wish
bd_name       = "bd1"                        # use a bd name as you wish
anp_name      = "anp1"                       # use an ANP name as you wish
epg_name      = "epg1"                       # use an EPG name as you wish
region_name   = "us-east-1"                  # Make sure that you choose a region that was enabled in cAPIC initial setup

cidr_ip = "" # CIDR IP as you wish for the VPC to be created in AWS tenant account

subnet1 = "" # subnet should belong to CIDR
zone1   = "us-east-1a"    # az should be the 1st az in the chosen region.

subnet2 = "" # subnet should belong to CIDR
zone2   = "us-east-1b"    # az should be the 2nd az in the chosen region.

subnet3 = "" # subnet should belong to CIDR
zone3   = "us-east-1b"    # az should be the 2nd az in the chosen region.  Only 2 zones are allowed per region currently in cAPIC

epg_selector_value = "" # EPG Selector to ensure proper Security Rules as defined by ACI Contracts

user_association = "soumukhe" #  the user to get associated with the tenant


Important Note on parallelism env for Terraform with NDO

Make sure to source the file parallelism.env
⚠️ if you don’t do this, NDO based "terraform apply" will show many errors and will not work

  • By default, Terraform has a concurrency of 10 parallel runs. You can limit the number of concurrent operations with flag "-parallelism=n"
  • in the case of NDO with Terraform, you want to limit this to one. Instead of passing -parallelism=1 on the command line with terraform apply, it is easier to just set the environment variable for it. There is a file called parallelism.env in the directory with contents: export TF_CLI_ARGS_apply='-parallelism=1' . Just source the file as shown below:
    . ./parallelism.env 
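To see what sourcing the file actually does, here is the same step reproduced standalone (the env file is recreated inline so the snippet is self-contained):

```shell
# recreate parallelism.env with the same contents as the repo copy
cat > parallelism.env <<'EOF'
export TF_CLI_ARGS_apply='-parallelism=1'
EOF

# source it into the current shell; terraform picks this variable up
# automatically and appends -parallelism=1 to every "terraform apply"
. ./parallelism.env
echo "$TF_CLI_ARGS_apply"   # prints -parallelism=1
```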

    do a "terraform init"
    Figure 10: terraform init

Check with "terraform version"
Figure 11: terraform version

Now do "terraform apply"
You will see the terraform plan output and it will prompt you to accept with a yes
The bottom part of "terraform apply" output is shown below:
Figure 12: terraform apply (bottom part)

Your ACI Infrastructure on the AWS Tenant Account will now be set up
Bottom part of output is shown below:
Figure 13: output of "terraform apply"

📗 Note: you can destroy the tenant with terraform destroy
However, for my script it seems you have to run "terraform destroy" 3 times. This is probably due to some interpolation error in my script that I will have to investigate
Figure 14: terraform destroy (unfortunately in my script I have to run it 3 times)

In case you tested with "terraform destroy" please make sure to do "terraform apply" again to deploy the ACI/AWS tenant

Checking from NDO

You can now check from NDO to see what the tenant looks like.

  • On NDO, right click on the schema
  • click on "Deployed View"

Figure 15: Clicking on deployed view

The deployed view will look like below
Figure 16: Deployed View from NDO

Checking the terraform Module graph with terraform graph

To accomplish this do the following:

terraform graph > graph.dot
sudo apt install graphviz
cat graph.dot | dot -Tsvg > graph.svg

Then scp the graph.svg file to your local workstation and open it in a browser.

The output should look like below:
Figure 16: viewing output of "terraform graph"


create ec2 instances in the ACI/AWS tenant infrastructure
  • change directory to: awsEC2-onACI_Infra
  • edit main.tf and change the filter to match your tenant name
    data "aws_vpcs" "vpc_id" {
      tags = {
        AciPolicyDnTag = "*-sm-terraform-*" # adding filter for "uni/tn-sm-terraform-T1/ctxprofile-vrf1-us-east-1"
      }
    }

    Now, do the following:

    terraform init
    terraform apply
    put in the number of ec2 instances you want spun up

    Figure 17: terraform init in awsEC2-onACI_Infra workspace

Figure 18: spinning up 2 ec2 instances in ACI/AWS tenant infrastructure

  • The script will spin up 2 ec2 instances (since we asked for 2)
  • it will install Apache2 on both instances
  • it will install the ssh public key from your ~/.ssh directory to the ec2 instances
  • it will then spit out the value of the IPs for each instance and you can curl and ssh to the ec2s
    • If later you want to see the IPs, you can do "terraform refresh" followed by "terraform output"

Figure 19: output of run (bottom part)

Figure 20: curling to the public IPs

For ssh, user name is ec2-user
Figure 21: ssh to one of the ec2 instances

You can use "terraform destroy" to destroy the ec2 instances


spinning up Elastic Kubernetes Cluster on ACI/AWS tenant infrastructure
1. Download and extract the latest release of eksctl with the following command.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

2. Move the extracted binary to /usr/local/bin.
sudo mv /tmp/eksctl /usr/local/bin

3. Test that your installation was successful with the following command.
eksctl version
  • install kubectl utility
sudo apt install kubectl
  • do a terraform init
  • follow with: terraform plan

This will spit out the subnet-ids for your ACI/AWS tenant environment as shown below:
Figure 22: using Terraform to get subnet ID of subnets

Using the output from "terraform plan":

  • modify the cluster_config.yaml file and populate the subnet-IDs.
  • Modify the zones if needed
    ⚠️Remember that cAPIC currently (release 25.x) supports only 2 zones, so you will have to install the EKS cluster in 2 zones
  • modify the vpc id
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: acme-test-cluster
  region: us-east-1
  version: "1.21"

vpc:
  id: vpc-XXX                             # put in vpcID from output of terraform output
  subnets:
    public:
      us-east-1a: { id: subnet-YYY }      # put in subnetID from output of terraform output
      us-east-1b: { id: subnet-ZZZ }      # put in subnetID from output of terraform output
    private:
      us-east-1a: { id: subnet-YYY }      # put in subnetID from output of terraform output
      us-east-1b: { id: subnet-ZZZ }      # put in subnetID from output of terraform output

nodeGroups:
  - name: acme-test-cluster-workers
    minSize: 3
    maxSize: 6
    desiredCapacity: 3
    instanceType: m5.2xlarge
    labels: {role: worker}
    ssh:
      publicKeyPath: ~/.ssh/id_rsa.pub
    tags:
      nodegroup-role: worker
    iam:
      withAddonPolicies:
        externalDNS: true
        #certManager: true
        #albIngress: true

# usage:  eksctl create cluster --config-file ./cluster_config.yaml
#         eksctl delete cluster --config-file ./cluster_config.yaml

My modification of cluster-config.yaml file looks like below:
Figure 23: My modified cluster-config.yaml

  • go to AWS Console to VPC/subnets and make the subnets public as shown below:
    Figure 24: Making the subnets public from AWS Console

  • set the environment variable for the aws user you had created earlier and make sure it’s working

aws sts get-caller-identity
  • Now run the script to create the eks cluster
    eksctl create cluster --config-file ./cluster_config.yaml

The script will take about 20 minutes to create the cluster


Figure 24: Creating EKS Cluster


eksctl get clusters
kubectl get nodes
kubectl config view   (and other kubectl commands)

Figure 25: eksctl get clusters

Figure 26: kubectl get nodes

  • checking from cAPIC to see endpoints in EPG
    Figure 27: Checking from cAPIC for endpoints in EPG

To delete the cluster use the following command:

eksctl delete cluster --config-file ./cluster_config.yaml

Figure 28: Deleting EKS cluster


References

terraform related:

eksctl related:

From other blogs:

Something to investigate:

