Part 2 – Cisco ACI 5.2 and Kubernetes 1.21 CNI Feature Highlights

Introduction

In this article we examine our Cisco ACI integration with Kubernetes and demonstrate different features of the Cisco ACI CNI. If you do not have a running cluster, revisit Part 1 of this series to learn about deploying Kubernetes with the ACI CNI. This cluster is a nested Kubernetes cluster running on VMware and connected to ACI; however, there are so many other Kubernetes deployment options out there that may fit your environment better, such as KubeVirt, kops, OpenStack, or OpenShift.

The benefits of utilizing the Cisco ACI CNI are the following:

  • Easy connectivity between K8s pods, virtual machines, and bare-metal servers
  • Enhanced security through the combination of Cisco ACI and Kubernetes security and network policies
  • Automatic load-balancing configuration pushed to the upstream switching hardware
  • Multi-tenancy via Cisco APIC, which Kubernetes does not offer natively
    • Achieved via multiple isolated K8s clusters, or namespaces isolated via ACI policies
  • Kubernetes cluster information at the network level via API telemetry
    • Node information
    • Pods
    • Namespaces
    • Deployments
    • Services
    • etc.

https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf

This article assumes you have access to an application running in your Kubernetes cluster. This can be something as simple as nginx :smiley:
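If you need a quick stand-in, a throwaway nginx deployment is enough to follow along (a minimal example; the full manifest used later in this article works just as well):

cisco@k8s-01:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
cisco@k8s-01:~$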

Network Diagram

Visibility into your Kubernetes Cluster

As shown in the previous article, the APIC GUI presents a wealth of information about our Kubernetes cluster. Earlier we covered the information pertaining to the physical cluster; now let's look at the logical Kubernetes objects in the APIC GUI.

Figure 1: Screenshot showing the namespaces that have been created in our Kubernetes cluster.

Looking at the Namespaces tab, we can see information pertaining to each of the configured namespaces in our K8s cluster. Each namespace provides a count of Kubernetes constructs such as Services, Deployments, ReplicaSets, Pods, and Events, and clicking on those constructs reveals further information about them.

Figure 2: Screenshot showing the information displayed after clicking total number of deployments.
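If you ever want to sanity-check the counts the APIC reports, the same objects can be listed straight from the cluster with kubectl (shown here against kube-system as an example; any namespace works):

cisco@k8s-01:~$ kubectl get deployments,replicasets,pods,services -n kube-system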

Now, taking a look at the Services tab, we can see the different services that have been created inside of the K8s cluster. Clicking on one presents further information in the APIC GUI, such as its IP, service type, and DNS name. This can be used to quickly gather information in a troubleshooting scenario.

Figure 3: Screenshot showing the Services available in my Kubernetes lab.
Figure 4: Screenshot showing the additional information when clicking on a specific service.

The bulk of the Kubernetes information pertaining to your cluster is presented here under the VMM domain. However, there are a few other places where the ACI CNI provides additional visibility into our Kubernetes cluster. If we navigate to the tenant created by acc-provision, we can see the numerous containers running in the cluster. Just like any other endpoint within ACI, our K8s containers are tracked and recorded in the COOP database, and ACI updates the location of the containers as they are spun up or down in the cluster. This allows developers and network admins to quickly work together to nail down where an issue may be occurring.
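You can also confirm this endpoint learning from the APIC CLI. Below is a quick sketch using moquery against the fvCEp (client endpoint) class; the prompt is a hypothetical APIC hostname, and the IP is one of the pod IPs from my lab, so substitute your own:

apic1# moquery -c fvCEp -f 'fv.CEp.ip=="10.113.0.36"'

The returned object carries the pod's IP, MAC, and encap as learned by the fabric.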

EPG Visibility of Containers and Nodes

All of the Kubernetes containers and nodes can be found in the acc-provision tenant that was defined in your aci_cni_config.yaml. In our example that is tenant k8s_pod.

To find the EPG that holds your application containers, navigate to your Kubernetes tenant and find the EPG aci-containers-default. In this EPG you will find ALL of the containers for the applications you have created within your Kubernetes cluster. This EPG will not show the Kubernetes DNS containers or the ACI CNI containers; the K8s DNS containers can be found in the EPG aci-containers-system.

Figure 5: Screenshot showing the location of application containers in the APIC GUI.
Figure 6: Screenshot showing the K8s DNS containers and their EPG.

Easily Secure Kubernetes Applications and Cluster through ACI Contracts

When we push our aci_cni_config.yaml using the acc-provision tool, many ACI contracts are automatically provisioned. These contracts are designed for the least privilege required to stand up your Kubernetes cluster and the ACI containers.

Figure 7: Screenshot showing the contracts applied to the default container EPG.

It's important to note that, because of a core ACI construct, any endpoints in the same EPG can communicate with each other. These contracts are purely for containers to communicate with endpoints in other EPGs or tenants, depending on your contract configuration. This is why, when all of the Kubernetes containers are in the aci-containers-default EPG, they can all communicate with each other.

Some deployments will require that you place Kubernetes containers into EPGs other than the ones created by the acc-provision tool. This is another benefit of the ACI CNI, which allows administrators to quickly and securely connect containers to bare-metal servers by simply placing them in the same EPG. Follow the steps listed in the integration guide:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_Kubernetes_Integration_with_ACI.html#task_ggz_svz_r1b

For my example, I am going to use the recommendation from the installation guide of using contract masters. I went ahead and created an EPG in the tenant created by the acc-provision tool.

Figure 8: Screenshot showing the aci-containers-default EPG as the contract master for our user-created EPG – annotate.

This EPG will need the pod BD, the K8s VMM domain, and the aci-containers-default EPG configured as its contract master in order to inherit the applied contracts (a scripted alternative is sketched after the figures below).

Figure 9: Screenshot showing the new annotate EPG with the correct BD and Contract master settings.
Figure 10: Screenshot showing the VMM Domain association to our annotate EPG.
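If you would rather script this than click through the GUI, the same objects can be pushed through the APIC REST API. The sketch below is illustrative only: it assumes you already have an authenticated session token, and the app profile and BD names (aci-containers-k8s_pod and aci-containers-k8s_pod-pod-bd) are guesses at the acc-provision naming in my lab, so verify them against your fabric before posting:

cisco@k8s-01:~$ curl -sk -b "APIC-cookie=<token>" -X POST \
    https://<apic>/api/mo/uni/tn-k8s_pod.json \
    -d '{
      "fvAp": {
        "attributes": { "name": "annotations" },
        "children": [
          { "fvAEPg": {
              "attributes": { "name": "annotations-epg" },
              "children": [
                { "fvRsBd":           { "attributes": { "tnFvBDName": "aci-containers-k8s_pod-pod-bd" } } },
                { "fvRsDomAtt":       { "attributes": { "tDn": "uni/vmmp-Kubernetes/dom-k8s_pod" } } },
                { "fvRsSecInherited": { "attributes": { "tDn": "uni/tn-k8s_pod/ap-aci-containers-k8s_pod/epg-aci-containers-default" } } }
              ]
          } }
        ]
      }
    }'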

Now that we have our ACI configuration in place, we can annotate our Kubernetes deployments. Using the ACI CNI we have the ability to annotate deployments three different ways, and I will be using the same deployment file for each method. You can find the Kubernetes deployment file below. This application will run in its own namespace – myapp – though you could also deploy it in the – default – namespace.

cisco@k8s-01:~$ kubectl create ns myapp
namespace/myapp created
cisco@k8s-01:~$
cisco@k8s-01:~$ cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

cisco@k8s-01:~$

Then we will deploy the app into the K8s cluster.

cisco@k8s-01:~$ kubectl create -f nginx.yml -n myapp
deployment.apps/nginx-deployment created
cisco@k8s-01:~$
  • Option 1 – acikubectl

The ACI CNI provides a CLI tool for easily interacting with an integrated cluster. This tool shortens the syntax required to annotate your Kubernetes deployments; compare it against the longer kubectl syntax in Option 2 below. Using the deployment shared above, I set the default endpoint group for the myapp namespace with the following.

cisco@k8s-01:~$ acikubectl set default-eg namespace myapp -t k8s_pod -a annotations -g annotations-epg
Setting default endpoint group:
Endpoint Group:
  Tenant: k8s_pod
  App profile: annotations
  Endpoint group: annotations-epg
cisco@k8s-01:~$

After running the command above and looking at the Operational tab of our annotations EPG, we now have visibility into our new deployment in the newly created EPG.
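Under the hood, acikubectl is writing an opflex annotation onto the namespace object itself, which you can verify from the cluster (the jsonpath output should include an opflex.cisco.com/endpoint-group entry referencing tenant k8s_pod, app profile annotations, and annotations-epg):

cisco@k8s-01:~$ kubectl get ns myapp -o jsonpath='{.metadata.annotations}'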

  • Option 2 – kubectl

You can do the same thing by manually applying the annotation using kubectl and some JSON data. Be sure to clean up the previous deployment if you are trying the different options; deleting the namespace is quick and easy.

cisco@k8s-01:~$ kubectl create -f nginx.yml -n myapp
deployment.apps/nginx-deployment created
cisco@k8s-01:~$ kubectl --namespace=myapp annotate deployment nginx-deployment opflex.cisco.com/endpoint-group='{"tenant":"k8s_pod","app-profile":"annotations","name":"annotations-epg"}'
deployment.apps/nginx-deployment annotated
cisco@k8s-01:~$
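If you need to move the deployment back to the default EPG (for instance, before trying the next option), kubectl's trailing-dash syntax removes the annotation again:

cisco@k8s-01:~$ kubectl --namespace=myapp annotate deployment nginx-deployment opflex.cisco.com/endpoint-group-
deployment.apps/nginx-deployment annotated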
  • Option 3 – Deployment yaml

You can also just annotate your deployment file.

cisco@k8s-01:~$ cat nginx2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: myapp
  annotations:
    opflex.cisco.com/endpoint-group: '{ "tenant":"k8s_pod", "app-profile":"annotations", "name":"annotations-epg"  }'
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

cisco@k8s-01:~$
Figure 11: Screenshot showing the Nginx pods with annotations applied after moving to their new EPG.
cisco@k8s-01:~$ kubectl get pods -o wide -n myapp
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-66b6c48dd5-2xwd9   1/1     Running   0          39h   10.113.0.36   k8s-03   <none>           <none>
nginx-deployment-66b6c48dd5-5zg5x   1/1     Running   0          39h   10.113.0.93   k8s-05   <none>           <none>
nginx-deployment-66b6c48dd5-x8n59   1/1     Running   0          39h   10.113.0.94   k8s-05   <none>           <none>
cisco@k8s-01:~$

L4-L7 Load Balancing Integration

The ACI CNI also provides load balancing for external Kubernetes services. This configuration is pushed automatically when you create the service in Kubernetes, whether via kubectl or by specifying it in your service.yml.

If you remember back to our aci_cni_config.yaml, we defined a few subnets for these services; I will highlight where we can find them on the CLI and in the GUI.

net_config:
  node_subnet: 100.100.170.1/16         # Subnet to use for nodes
  pod_subnet: 10.113.0.1/16          # Subnet to use for Kubernetes Pods
  extern_dynamic: 10.114.0.1/24      # Subnet to use for dynamic external IPs <--
  extern_static: 10.115.0.1/24       # Subnet to use for static external IPs <--
  node_svc_subnet: 10.116.0.1/24     # Subnet to use for service graph <--
  cluster_svc_subnet: 10.117.0.1/24  # Subnet used for Cluster-IP Services <--  
  kubeapi_vlan: 3031                 # The VLAN used by the physdom for nodes
  service_vlan: 3032                 # The VLAN used by LoadBalancer services
  infra_vlan: 3967                   # The VLAN used by ACI infra

To create a K8s service we can use either of the following procedures against our previous nginx deployment.

cisco@k8s-01:~$ kubectl expose deployment nginx-deployment --port=80 --target-port=80 --name=nginx-service --type=LoadBalancer -n myapp
service/nginx-service exposed
cisco@k8s-01:~$
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
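If you go the manifest route, save the YAML above to a file (I am assuming the name nginx-service.yml here) and create it in the same namespace:

cisco@k8s-01:~$ kubectl create -f nginx-service.yml -n myapp
service/nginx-service created
cisco@k8s-01:~$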

Once the service has been created, we can look at its details using the following.

cisco@k8s-01:~$ kubectl get svc -n myapp
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.117.0.227   10.114.0.13   80:32082/TCP   8s
cisco@k8s-01:~$

From the output above we can see that the service has been assigned two different IPs from our defined service subnets, and the service should be available on port 80.

  • 10.117.0.227 – the cluster IP, drawn from our cluster_svc_subnet.
  • 10.114.0.13 – drawn from extern_dynamic; our service will be available externally at this IP.
Figure 12: Screenshot showing access to our external Nginx service at the provided IP address.
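If a browser is not handy, the same check works from any host that can route to the external subnet. A quick curl sketch (the prompt is a hypothetical external client, and your response headers may differ):

user@client:~$ curl -sI http://10.114.0.13
HTTP/1.1 200 OK
Server: nginx/1.14.2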

When this service is created in Kubernetes, our CNI pushes additional configuration to our ACI fabric, in the VRF and tenant where our L3Out is defined in the aci_cni_config.yaml.

  aep: esxi-1-aaep               # The AEP for ports/VPCs used by this cluster
  vrf:                              # This VRF used to create all kubernetes EPs
    name: K8sIntegrated
    tenant: common                  # This can be system-id or common
  l3out:
    l3domain: packstack
    name: L3OutOpenstack                   # Used to provision external IPs
    external_networks:
    - L3OutOS-EPG                      # Used for external contracts
Figure 13: Screenshot showing the external EPGs that are automatically created when a service is created in Kubernetes.

Defined inside of these L3ExtEPGs are external subnets with /32 routes to the IP addresses shown by the kubectl get svc command.
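These /32 entries can be confirmed from the APIC CLI as well; a quick moquery sketch against the l3extSubnet class, filtered on the external IP of our service:

apic1# moquery -c l3extSubnet | grep 10.114.0.13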

cisco@k8s-01:~$ kubectl get svc -n myapp
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.117.0.227   10.114.0.13   80:32082/TCP   8s
cisco@k8s-01:~$
Figure 14: Screenshot showing that the IP address of our service is the same as what is shown from kubectl on the CLI.

If we look at what contract is applied, we will see that whatever port we specified in our service.yaml or kubectl command is pushed as a provided contract on that L3ExtEPG.

Figure 15: Screenshot showing the contract that is created by the ACI CNI when you have a LoadBalancer-type K8s service.

Looking closer at the contract we see the specific port, which in our case is simply port 80.

Figure 16: Screenshot showing the contract and its subject created by the ACI CNI.

Figure 17: Screenshot showing the contract filter that is created with the port information from our service.yaml or kubectl command.

Path of the Packet and a Little PBR

Exposing a Kubernetes service externally requires that the traffic be load-balanced, and in this scenario Kubernetes expects the load balancing to be handled upstream. Luckily, the ACI CNI provides a solution utilizing PBR service graphs. Traffic that is received at the external service IP, in our example 10.114.0.13, will be redirected to one of the nodes that hosts an endpoint for that service. Each node has a service endpoint that handles traffic for the external services hosted on that node. ACI provides the NAT function because Kubernetes is not configured to handle traffic sent to the external service IP. I took the following image and updated it for our example so you can visualize what is occurring with the traffic.

Figure 18: Graphical representation of our Kubernetes L3Out with a load-balanced Kubernetes service.

This service graph appears in the tenant where our L3Out has been deployed.

Figure 19: Screenshot showing the K8s service graph template pushed by acc-provision.
Figure 20: Screenshot showing the devices and their device interfaces for each of our K8s nodes.
Figure 21: Screenshot showing the device selection policies.
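If you want to verify these PBR pieces from the CLI instead of the GUI, the redirect policies and their destinations live in the vnsSvcRedirectPol and vnsRedirectDest classes (a hedged sketch; object names will differ per fabric):

apic1# moquery -c vnsSvcRedirectPol
apic1# moquery -c vnsRedirectDest

Each vnsRedirectDest entry should line up with the service endpoint IP and MAC of one of your Kubernetes nodes.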
