Part 1 – Cisco ACI 5.2 and Kubernetes 1.21 CNI Integration

Introduction

In this article we will examine Cisco ACI’s integration with Kubernetes. I will demonstrate one method to deploy a VMware nested Kubernetes cluster; however, there are many other Kubernetes deployment options that may fit your environment better, such as KubeVirt, kops, or even OpenShift (similar, but the underlying Kubernetes is still present and supported with Cisco ACI).

The benefits of utilizing the Cisco ACI CNI are the following:

  • Easy connectivity between K8 pods, Virtual Machines, and bare-metal servers
  • Enhanced security through the combination of Cisco ACI and Kubernetes security and network policies
  • Automatic load-balancing configuration pushed to the upstream switching hardware
  • Multi-tenancy via Cisco APIC (Kubernetes does not offer multi-tenancy natively)
    • Achieved via multiple isolated K8 clusters or by isolating namespaces via ACI policies
  • Kubernetes Cluster information at the network level via API Telemetry
    • Node information
    • Pods
    • Namespaces
    • Deployments
    • Services
    • etc.

https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf

THIS IS MEANT ONLY FOR A LAB ENVIRONMENT, THOUGH YOU COULD SCALE THE DEPLOYMENT UP :smiley:

Network Diagram

Lab Requirements

  • 1 Ansible Server
  • 1 vCenter
  • 1 ESXI
  • 1 VM Template – I used Ubuntu 18.04, but any Ubuntu release can be made to work.
  • WWW Internet access through ACI Fabric L3 Out Defined in ACC-Config.yaml

Integrating K8 and ACI through a proxy is possible, but it is VERY painful to configure NO_PROXY on the K8 hosts. You could instead download the images from cisco.com and upload them into your local container image repo, but we will not be covering that here.
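If you do go down the proxy path anyway, the pain is that everything cluster-internal has to bypass the proxy. Below is a rough sketch of the kind of exemptions that end up on each K8 host, using this lab's addressing purely as illustrative values (the proxy hostname is made up, and CIDR support in no_proxy varies between tools):

# /etc/environment (illustrative only; adjust for your proxy and subnets)
http_proxy="http://proxy.example.com:8080"
https_proxy="http://proxy.example.com:8080"
# Loopback, the APIC, node/pod subnets and cluster-internal DNS suffixes must all bypass the proxy
no_proxy="localhost,127.0.0.1,10.0.0.58,100.100.0.0/16,10.113.0.0/16,.svc,.cluster.local"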

My Kubernetes Installation Method – Ansible + kubeadm

https://github.com/camrossi/aci_kubeadm

MY GOAL IS TO PROVIDE YOU WITH FILES YOU CAN COPY AND EDIT MINIMALLY FOR YOUR ENVIRONMENT. THE README IN THE REPO ABOVE IS VERY HELPFUL.

To install my Kubernetes cluster I made use of a GitHub repo maintained by a colleague, Cam Rossi, who is a TME at Cisco. It is similar to kubespray in that it utilizes Ansible as an automation platform. However, it was not without its issues; while running the Ansible script I had to manually intervene at times to get a successful Kubernetes deployment.

I found that the Ansible script was looping because it could not connect to the newly cloned virtual machines: the hosts did not have their IP configuration present on the system. I used the VMware Web Console to edit the network configuration of the K8 hosts manually, since there was no way to SSH to the hosts without an IP configured on the mgmt interface.

The main benefits of using this repo are the following:

  • Automagically creates all the required ACI objects (L3Out, tenant, BDs, EPGs)
  • Reminds us to properly configure our L3Out logical interfaces and apply L3Out domains
  • Automatically creates K8 master and minions from the VM template
  • Configures the correct Linux settings for kubeadm, as shown in the kubeadm requirements below – disabling swap, for example

It may not be hands-off, but it sure beats doing everything manually.

Creating a VM template

I am not going to cover creating a VM in vCenter and turning it into a VM template. I simply installed Ubuntu 18.04 on a VM and installed VMware Guest Tools. You can use the article below to learn how to accomplish this. The VM needs one interface; beyond that requirement, the HW specs will come from our yaml file configured in a later section.

  • Ubuntu 18
  • New VM was configured with:
    • 2 CPU, 16GB HD, 2GB RAM and 2 NIC
    • username/pass: cisco/cisco123
  • New Virtual Machine Requirement
    • One NIC
    • Install SSH and Python
    • Power off the VM and create a snapshot
      • If you call the snapshot anything other than "Base", edit the vm_snapshotname variable in the inventory/group_vars/all.yml file.

https://kb.vmware.com/s/article/1022525

Ansible Server Configuration

Ansible Host Software Setup
  • Install latest ansible version (I tested this with 2.9)
  • Python 2.7.9 or higher (I tested with 2.7.12)
  • Install PIP
    • curl https://bootstrap.pypa.io/get-pip.py | python
  • Install pyvmomi (I tested pyvmomi 6.5.0.2017.5)
    • pip install pyvmomi
  • Install python-netaddr
    • pip install netaddr
  • If using a password to authenticate the SSH session, you need to install sshpass (see its installation guide)
    • Disable host key checking: edit the file /etc/ansible/ansible.cfg and set host_key_checking = False
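
A quick sanity check that the tooling on the Ansible host matches the versions above (a sketch; exact output will vary):

ansible --version            # expect a 2.9.x release
python --version             # expect 2.7.9 or higher
pip show pyvmomi netaddr     # confirm both Python modules are installed
which sshpass                # only needed if you authenticate with a password
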
Configure Ansible Playbook

Clone the repository above

git clone https://github.com/camrossi/aci_kubeadm

First we need to create/edit an inventory file defining the MGMT IPs and the Node VLAN IPs for our K8 hosts.

cisco@k8ansible:~/aci_kubeadm/inventory$ cat inventory
# Needed for OSX
#localhost ansible_python_interpreter=/usr/bin/python

# Here we define the basic parameters of our hosts.
# The first is the hostname,  ansible_ssh_host=IP is the management interface IP address
# ip=IP is the ACI facing interface IP address.
k8s-01 ansible_ssh_host=10.0.20.3 ip=100.100.170.2
k8s-02 ansible_ssh_host=10.0.20.4 ip=100.100.170.3
k8s-03 ansible_ssh_host=10.0.20.5 ip=100.100.170.4
k8s-04 ansible_ssh_host=10.0.20.6 ip=100.100.170.5
k8s-05 ansible_ssh_host=10.0.20.7 ip=100.100.170.6
# ## configure a host if your nodes are not directly reachable
# bastion ansible_ssh_host=x.x.x.x

[kube-master]
k8s-01

[kube-node]
k8s-02
k8s-03
k8s-04
k8s-05

[k8s-cluster:children]
kube-node
kube-master

## If you want to deploy VMs from template add them below
[vmware-vm:children]
kube-node
kube-master
cisco@k8ansible:~/aci_kubeadm/inventory$
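
Before kicking off the full playbook it can be worth a quick ad-hoc connectivity check against whatever hosts are already up, using the management IPs from the inventory above. A minimal sketch (it assumes sshpass is installed and the credentials from all.yml):

ansible -i inventory/inventory all -m ping
# Hosts that have not been cloned yet will simply fail to answer; that is expected at this stage.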

Next we will configure the vSphere settings and edit our aci-cni-config.yaml file. The variables for these two processes in the playbook are controlled by the same file, all.yml. You will need to edit the variables for your environment.

cisco@k8ansible:~/aci_kubeadm/inventory/group_vars$ pwd
/home/cisco/aci_kubeadm/inventory/group_vars
cisco@k8ansible:~/aci_kubeadm/inventory/group_vars$ cat all.yml
#User and passwords to access the Kubernetes VM.
## The ansible_sudo_pass is the same that is used for the Ansible Host to install packages on it.
ansible_ssh_pass: cisco123
ansible_user: cisco
ansible_sudo_pass: cisco123

# vCenter Credentials and Parameters
vm_vsphere_host: 10.0.8.60
vm_vsphere_user: administrator@lab.com
vm_vsphere_password: password
vm_vsphere_datacenter: ACI-DC
vm_notes: "aleccham K8"
vm_folder: Ansible-K8
vm_template: k8base   # The name of your Template.
vm_linked_clone: no               # If you want a linked clone you also need to pass the name of the snapshot in the next variable.
vm_snapshotname:               # The name of your snampshot
esxi_hostname: 10.0.8.10
vm_datastore: datastoreFab8-DC               # THe DS name, the Template and the VM must be in the same DS for linked clones.
resource_pool: K8sIntegrated

##Host Config
# Mgmt IP address is derived from the inventory file "ansible_ssh_host" variable
ram_size: 4096                     #(MB)
hd_size: 25                         #(GB)
mgmt_network: net-10.0.x.x           # The name of your Management Network Port Group in vCenter
mgmt_netmask: 255.255.0.0
domain: cisco.com
upstream_dns_servers:               # The DNS server to be configured in your VMs
  - 8.8.8.8

## The name of the linux interface for the management and aci interface. Change if requried
mgmt_if_name: ens160
aci_if_name: ens192
aci_if_mtu: 1600                   # You need to set this to at least 1600

acc_provision_installer:  acc-provision_5.2.1.1-62_amd64.deb   # Download from cisco.com
kube_version: 1.21.4-00             # This is the versioning number as per Ubuntu Packages. (apt-cache show <package> | grep Version)
kubeadm_token: fqv728.htdmfzf6rt9blhej  # This is the kubeadm token. You should probably change it.

# Configuration for ACI Fabric
## This tell acc-provision which version of kubernetes we are using. You can find the complete list by running
## acc-provision --list-flavor
k8s_flavor: kubernetes-1.21
aci_user: admin
aci_admin_pass: passsword
aci_config:
  system_id: k8s_pod     # Every opflex cluster must have a distict ID
  apic_hosts:                       # List of APIC hosts to connect for APIC API
  - 10.0.0.58
  vmm_domain:                       # Kubernetes VMM domain configuration
    encap_type: vxlan               # Encap mode: vxlan or vlan
    mcast_range:                    # Every opflex VMM must use a distinct range
      start: 225.100.3.1
      end: 225.100.3.255
    mcast_fabric: 225.1.1.3
    nested_inside:                  # If the K8S node are VM specify here the VMM domain.
      type: vmware                  # A PortGroup called like system_id will be automatically created
      name: Fab-8-VMM
  # The following resources must already exist on the APIC,
  # they are used, but not created by the provisioning tool.
  aep: esxi-1-aaep               # The AEP for ports/VPCs used by this cluster
  vrf:                              # This VRF used to create all kubernetes EPs
    name: K8sIntegrated
    tenant: common                  # This can be system-id or common
  l3out:
    l3domain: packstack
    name: L3OutOpenstack                   # Used to provision external IPs
    external_networks:
    - L3OutOS-EPG                      # Used for external contracts
#
# Networks used by Kubernetes
#
net_config:
  node_subnet: 100.100.170.1/16         # Subnet to use for nodes
  pod_subnet: 10.113.0.1/16          # Subnet to use for Kubernetes Pods
  extern_dynamic: 10.114.0.1/24      # Subnet to use for dynamic external IPs
  extern_static: 10.115.0.1/24       # Subnet to use for static external IPs
  node_svc_subnet: 10.116.0.1/24     # Subnet to use for service graph
  cluster_svc_subnet: 10.117.0.1/24  # Subnet used for Cluster-IP Services
  kubeapi_vlan: 3031                 # The VLAN used by the physdom for nodes
  service_vlan: 3032                 # The VLAN used by LoadBalancer services
  infra_vlan: 3967                   # The VLAN used by ACI infra

#
# Configuration for container registry
# DO NOT CHANGE
#
registry:
  image_prefix: noiro
cisco@k8ansible:~/aci_kubeadm/inventory/group_vars$

The repository was written for an older version of Kubernetes and a different version of ACI. I made the following changes for ACI 5.2 and Kubernetes 1.21.

In lab_setup.yml I needed to update the acc-provision .deb version for my environment in the task "- name: Install ACC Provision on the local host". You will need to go to the Cisco VMM Compatibility Matrix to determine which version is appropriate for your environment.

https://www.cisco.com/c/dam/en/us/td/docs/Website/datacenter/aci/virtualization/matrix/virtmatrix.html
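
Once you have picked a version from the matrix and installed the .deb, you can confirm what is actually on the Ansible host and which flavors it supports. A quick sketch (the --list-flavor form is the one quoted in the all.yml comment; check acc-provision --help if your build differs):

dpkg -l | grep acc-provision   # shows the installed acc-provision package and its version
acc-provision --list-flavor    # lists the Kubernetes/OpenShift flavors this build knows about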

cisco@k8ansible:~/aci_kubeadm$ cat lab_setup.yml
---

# ACI fabric config must exist before we start deploying K8S, so I install acc provision on the ansible host.
# and I push the ACI config from there. After I will also install it on the kube-master
- hosts: 127.0.0.1
  gather_facts: no
  vars:
   aci_login: &aci_login
     hostname: "{{ aci_config.apic_hosts[0] }}"
     username: '{{ aci_user }}'
     password: '{{ aci_admin_pass }}'
     use_proxy: 'no'
     validate_certs: false
  tasks:
      - name: Install ACC Provision on the local host
        apt:
          deb: roles/aci-host/files/acc-provision_5.2.1.1-62_amd64.deb
        delegate_to: localhost
        become: true

      - name: Create ACI Tenant for the Cluster
        aci_tenant:
          <<: *aci_login
          name: "{{ aci_config.system_id }}"
        delegate_to: localhost

      - name: Create ACI VRF
        aci_vrf:
          <<: *aci_login
          tenant: "{{ aci_config.vrf.tenant }}"
          name: "{{ aci_config.vrf.tenant }}"
          state: present
        delegate_to: localhost

      - name: Add L3OUT
        aci_l3out:
          <<: *aci_login
          tenant: "{{ aci_config.vrf.tenant }}"
          name: "{{ aci_config.l3out.name }}"
          domain: "packstack"
          vrf: "{{ aci_config.vrf.name }}"
          route_control: [ "export" ]
        delegate_to: localhost

      - name: Add extEPG
        aci_l3out_extepg:
          <<: *aci_login
          tenant: "{{ aci_config.vrf.tenant }}"
          l3out: "{{ aci_config.l3out.name }}"
          name: "{{ item }}"
        with_items: "{{ aci_config.l3out.external_networks }}"
        delegate_to: localhost
      - pause:
          prompt: "Connect to APIC NOW and finish confiuring your L3OUT, you nee                                                                                                                                             d to ADD your Nodes, routing protocol and the subnet in the ExtEPGs. Once done h                                                                                                                                             it return"
      - name: Generate CNI config and Push APIC Config
        delegate_to: localhost
        command: acc-provision --flavor="{{ k8s_flavor }}"  -a -u "{{ aci_user }}" -p "{{ aci_admin_pass }}" -c inventory/group_vars/all.yml  -o aci-cni-config.yaml

- hosts: vmware-vm
  gather_facts: False
  roles:
    - vmware-vm

- hosts: k8s-cluster
  gather_facts: True
  roles:
    - aci-host

- hosts: kube-master
  gather_facts: True
  roles:
    - master

- hosts: kube-node
  gather_facts: True
  roles:
    - node

cisco@k8ansible:~/aci_kubeadm$

You will need to place the appropriate acc-provision .deb (available from the download link below) in the following directory on your Ansible host.

https://software.cisco.com/download/home/285968390/type/286304714/release/5.2(3.20211025)

cisco@k8ansible:~/aci_kubeadm/roles/aci-host/files$ pwd
/home/cisco/aci_kubeadm/roles/aci-host/files
cisco@k8ansible:~/aci_kubeadm/roles/aci-host/files$ ls
acc-provision_5.2.1.1-62_amd64.deb
cisco@k8ansible:~/aci_kubeadm/roles/aci-host/files$

Based on restrictions in my lab I needed to edit some of my VMware parameters. You may need to do this as well, so I want to point out where this configuration occurs in the installation directory.

cisco@k8ansible:~/aci_kubeadm/roles/vmware-vm/tasks$ cat main.yml
---

- name: Clone and Configure VM
  vmware_guest:
    validate_certs: False
    hostname: "{{ vm_vsphere_host }}"
    username: "{{ vm_vsphere_user }}"
    password: "{{ vm_vsphere_password }}"
    datacenter: "{{ vm_vsphere_datacenter }}"
    folder: "{{ vm_vsphere_datacenter }}/vm/{{ vm_folder }}"
    name: "{{ inventory_hostname }}"
    template: "{{ vm_template }}"
    linked_clone: "{{ vm_linked_clone }}"
    snapshot_src: "{{ vm_snapshotname }}"
    esxi_hostname: "{{  esxi_hostname }}"
    resource_pool: "{{ resource_pool }}"

    # Folder is in the format of <DataCenter>/vm/<Folder> the "vm" is hard coded in vcenter and you always need it.
    annotation: "{{ vm_notes }}"
    state: poweredon
    wait_for_ip_address: True
    disk:
    - size_gb: "{{hd_size}}"
      type: thin
      ## you can speficy a DS if you want
      datastore: "{{ vm_datastore }}"
    hardware:
      memory_mb: "{{ram_size}}"

    networks:
    - name: "{{ mgmt_network  }}"
      type: static
      ip: "{{ ansible_ssh_host }}"
      netmask: "{{ mgmt_netmask }}"
      start_connected: True
      connected: True
      allow_guest_control: True
  delegate_to: localhost

- name: Wait 600 seconds for the VMs to be reachable over SSH
  wait_for_connection:

- name: Create ACI Interface
  vmware_guest_network:
    validate_certs: False
    hostname: "{{ vm_vsphere_host }}"
    username: "{{ vm_vsphere_user }}"
    password: "{{ vm_vsphere_password }}"
    datacenter: "{{ vm_vsphere_datacenter }}"
    name: "{{ inventory_hostname }}"
    cluster: "Cluster1"
    gather_network_info: false
    networks:
      - name: "{{ aci_config.system_id }}"
        dvswitch_name: "{{ aci_config.vmm_domain.nested_inside.name }}"
        state: new
        start_connected: True
        connected: True
  delegate_to: localhost
- pause:
    prompt: "vmware_guest_network has a bug and will not select the correct portgroup. Go to vCenter and select the right port group for the ACI interface and then press RETURN"
cisco@k8ansible:~/aci_kubeadm/roles/vmware-vm/tasks$

ACI Fabric Pre-Requisites

VMM

Your fabric needs to have basic connectivity pre-configured for your hosts.
This script assumes you are deploying VMs, so it expects ACI to be configured with VMM integration to your vCenter.

Tenant(s), VRF and L3OUT

For the tenant configuration you have two options:

  • Configure your kubernetes VRF and L3OUT in the common tenant and have a separate tenant for the Kubernetes cluster (Preferred Option)
  • Configure everything in a dedicated Kubernetes tenant

I would recommend the first option because un-provisioning a cluster deletes the Kubernetes tenant, and if you are redeploying the cluster multiple times you would otherwise need to re-configure the VRF and L3OUT every time.
The demo configuration assumes you have deployed the VRF and the L3OUT in common.

Figure 1: Screenshot showing the L3out that needs to match what is in our yaml file.

You can also see that the AEP in the ACI-CNI-config.yml is the AEP of the VMM domain, since we are doing a nested VMware installation.

Figure 2: Screenshot showing the AEP that matches what is in our ACI-CNI-config.yml
    nested_inside:                  # If the K8S node are VM specify here the VMM domain.
      type: vmware                  # A PortGroup called like system_id will be automatically created
      name: Fab-8-VMM
  # The following resources must already exist on the APIC,
  # they are used, but not created by the provisioning tool.
  aep: esxi-1-aaep               # The AEP for ports/VPCs used by this cluster
  vrf:                              # This VRF used to create all kubernetes EPs
    name: K8sIntegrated
    tenant: common                  # This can be system-id or common
  l3out:
    l3domain: packstack
    name: L3OutOpenstack                   # Used to provision external IPs
    external_networks:
    - L3OutOS-EPG  

Deploy The K8 Cluster

Once you have configured the necessary files you can run the playbook with the following command to get very verbose output. If you hit any errors, this output will help you troubleshoot what needs to be changed.

cisco@k8ansible:~$ cd aci_kubeadm/
cisco@k8ansible:~/aci_kubeadm$  ansible-playbook -i inventory/inventory -b lab_setup.yml -vvvv

As I mentioned previously, you will need to add the network configuration file to the host system during the Ansible playbook run. One easy way to do this is to create the file on your VM template BEFORE running the playbook. That way, when the template is cloned, the file is already present on the system without you having to create it by hand; you only need to edit the IP addresses instead of creating the whole structure.

Basic K8 Host Network Configuration

You can apply the netplan configuration with the following command.

sudo netplan apply

If you wish to test your edited netplan config before committing it, you can use the following command.

sudo netplan try
cisco@k8s-01:~$ cd /etc/netplan/
cisco@k8s-01:/etc/netplan$ ls
01-netcfg.yaml
cisco@k8s-01:/etc/netplan$ cat 01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
     ens160:
        addresses: [ 10.0.20.3/16 ]
        dhcp4: no
        dhcp6: no
     ens192:
        match:
            macaddress: 00:50:56:96:f4:a4
        set-name: ens192
        dhcp4: no
        dhcp6: no
        mtu: 9000
  vlans:
     node-3031:
       id: 3031
       link: ens192
       addresses: [ 100.100.170.2/16 ]
       gateway4: 100.100.170.1
       nameservers:
          addresses:
          - 8.8.8.8
     infra-3967:
       id: 3967
       link: ens192
       dhcp4: yes
       routes:
       - to: 224.0.0.0/4
         scope: link
cisco@k8s-01:/etc/netplan$
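
Once the config is applied, a quick way to confirm the VLAN sub-interfaces came up as expected (interface names taken from the file above):

ip addr show node-3031       # should carry the 100.100.170.x/16 node address
ip addr show infra-3967      # should have picked up a DHCP address on the ACI infra VLAN
ip route | grep 224.0.0.0    # the multicast route for the infra VLAN should be present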

Sometimes after reloading the hosts the 224.0.0.0/4 route is not present in the routing table, and it is quicker to add it than to reload the host or restart network services. You can use the following script.

cisco@k8s-01:~$ cat route-add.sh
route add -net 224.0.0.0/4 dev infra-3967
cisco@k8s-01:~$
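
The script above relies on the legacy route utility from net-tools; on images that only ship iproute2, the equivalent command would be roughly:

sudo ip route add 224.0.0.0/4 dev infra-3967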

Verifying you have a Successful Kubernetes Installation

After the Ansible playbook has successfully run, let's verify that our Kubernetes cluster is working as expected. If all your nodes are in the Ready state, then we can move on to verifying the ACI configuration.

cisco@k8s-01:~$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
k8s-01   Ready    control-plane,master   46d   v1.21.4
k8s-03   Ready    <none>                 46d   v1.21.4
k8s-04   Ready    <none>                 46d   v1.21.4
k8s-05   Ready    <none>                 46d   v1.21.4
cisco@k8s-01:~$
cisco@k8s-01:~$ kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-mgff7          1/1     Running   14         46d
coredns-558bd4d5db-mghl6          1/1     Running   14         46d
etcd-k8s-01                       1/1     Running   2          46d
kube-apiserver-k8s-01             1/1     Running   2          46d
kube-controller-manager-k8s-01    1/1     Running   2          46d
kube-proxy-8j829                  1/1     Running   2          46d
kube-proxy-b8sxr                  1/1     Running   2          46d
kube-proxy-pjl6l                  1/1     Running   2          46d
kube-proxy-xrgtz                  1/1     Running   2          46d
kube-scheduler-k8s-01             1/1     Running   2          46d
metrics-server-5b974f8c7f-zqzzd   1/1     Running   0          19d
cisco@k8s-01:~$
cisco@k8s-01:~$  kubectl get pods -A
NAMESPACE               NAME                                        READY   STATUS              RESTARTS   AGE
aci-containers-system   aci-containers-controller-5c69d6ffb-ftfml   1/1     Running             3          46d
aci-containers-system   aci-containers-host-5jj4f                   3/3     Running             7          46d
aci-containers-system   aci-containers-host-5wv6c                   3/3     Running             13         46d
aci-containers-system   aci-containers-host-kpzj8                   3/3     Running             10         46d
aci-containers-system   aci-containers-host-qg5k9                   3/3     Running             14         46d
aci-containers-system   aci-containers-openvswitch-6c26d            1/1     Running             2          46d
aci-containers-system   aci-containers-openvswitch-fmcgw            1/1     Running             1          46d
aci-containers-system   aci-containers-openvswitch-jjpld            1/1     Running             1          46d
aci-containers-system   aci-containers-openvswitch-q5fdl            1/1     Running             2          46d
kube-system             coredns-558bd4d5db-mgff7                    1/1     Running             14         46d
kube-system             coredns-558bd4d5db-mghl6                    1/1     Running             14         46d
kube-system             etcd-k8s-01                                 1/1     Running             2          46d
kube-system             kube-apiserver-k8s-01                       1/1     Running             2          46d
kube-system             kube-controller-manager-k8s-01              1/1     Running             2          46d
kube-system             kube-proxy-8j829                            1/1     Running             2          46d
kube-system             kube-proxy-b8sxr                            1/1     Running             2          46d
kube-system             kube-proxy-pjl6l                            1/1     Running             2          46d
kube-system             kube-proxy-xrgtz                            1/1     Running             2          46d
kube-system             kube-scheduler-k8s-01                       1/1     Running             2          46d
kube-system             metrics-server-5b974f8c7f-zqzzd             1/1     Running             0          19d

Now let's take a look at what was created in the APIC GUI.
First you will notice a newly created tenant that is tagged as having been created by an outside controller.

Figure 3: Screenshot showing the newly created Kubernetes tenant

Of course, if we navigate over to the VMM Domains tab we will find our new Kubernetes VMM domain.
Figure 4: Screenshot showing the newly created K8 VMM domain.

You will notice that the names for these policies come from our yaml configuration file. Look at system_id: k8s_pod.

aci_config:
  system_id: k8s_pod     # Every opflex cluster must have a distict ID
  apic_hosts:                       # List of APIC hosts to connect for APIC API
  - 10.0.0.58

Inside our newly created tenant you will find four EPGs and two BDs. The BDs have the subnets that are defined in our configuration file, as well as the L3Out attached.

Figure 5: Screenshot showing the created EPGs and BDs

Figure 6: Screenshot showing the created subnets.
Figure 7: Screenshot showing the L3out attached to the BD
net_config:
  node_subnet: 100.100.170.1/16         # Subnet to use for nodes
  pod_subnet: 10.113.0.1/16          # Subnet to use for Kubernetes Pods

aci_config:
  vrf:                              # This VRF used to create all kubernetes EPs
    name: K8sIntegrated
    tenant: common                  # This can be system-id or common
  l3out:
    l3domain: packstack
    name: L3OutOpenstack                   # Used to provision external IPs
    external_networks:
    - L3OutOS-EPG  

Now we will look at some of the information under the VMM domain, which shows our attached AEP and multicast group. If we click some of the other tabs, such as Nodes, we can see the hostname and status of each node in our K8 cluster.

Figure 8: Screenshot showing the K8 Domain attached to our defined AEP and the created mCast address group.

Figure 9: Screenshot showing connected K8 nodes.

Remember that we are doing a VMware nested installation, which means a trunk port group has been created in vCenter that our nodes are attached to.

Figure 10: Screenshot showing the created Trunk Port Group settings

Remember that this configuration comes from our config yaml.

  kubeapi_vlan: 3031                 # The VLAN used by the physdom for nodes
  service_vlan: 3032                 # The VLAN used by LoadBalancer services
  infra_vlan: 3967                   # The VLAN used by ACI infra

Other Kubernetes Installation Methods I have experience integrating with Cisco ACI

Kubeadm By Hand

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/

You could configure as many Ubuntu servers as needed for your lab by hand and use kubeadm to stand up the K8 cluster. MY FILES ABOVE CAN BE COPIED INTO THE HOSTS TO SPEED UP THIS PROCESS. The issue with this process is, of course, that it is very manual and introduces lots of places for errors and fat-finger mistakes. Since we are installing our K8 hosts on an ESXi server, you could make a base machine, clone this golden image, and then perform the kubeadm setup by hand on each node.

Here are some kubeadm requirements; we will go over the ACI requirements that must also be met in a later section.

  • A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.
  • 2 GB or more of RAM per machine (any less will leave little room for your apps).
  • 2 CPUs or more.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node.
  • Certain ports are open on your machines.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
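
The last item, disabling swap, is handled by the playbook, but if you are building hosts by hand it looks roughly like this (a sketch, not the repo's exact tasks):

sudo swapoff -a                              # turn swap off for the running system
sudo sed -i '/ swap / s/^/#/' /etc/fstab     # comment out swap entries so it stays off after a reboot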

In Linux, to configure DHCP on a VLAN sub-interface you need to create a configuration for the specific interface, as shown below. This is controlled by the client identifier in the dhclient.conf file, which is 01 followed by the MAC address of ens192 (00:50:56:96:f4:a4 in this lab).

cisco@k8s-01:~$ cat /etc/dhcp/dhclient.conf
send dhcp-client-identifier 01:00:50:56:96:f4:a4;
request subnet-mask, domain-name, domain-name-servers, host-name;
send host-name k8s-01;

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
option ms-classless-static-routes code 249 = array of unsigned integer 8;
option wpad code 252 = string;

also request rfc3442-classless-static-routes;
also request ms-classless-static-routes;
also request static-routes;
also request wpad;
also request ntp-servers;
timeout 10;
cisco@k8s-01:~$

The Kubernetes repo is added as well.

cisco@k8s-01:~$ cat /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
cisco@k8s-01:~$
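
With that repo entry in place, installing the pinned 1.21.4-00 packages by hand looks roughly like the following (the playbook does the equivalent; the key URL is the one Google published for this apt repo at the time):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y kubelet=1.21.4-00 kubeadm=1.21.4-00 kubectl=1.21.4-00
sudo apt-mark hold kubelet kubeadm kubectl   # keep apt from upgrading the cluster underneath you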

Some Linux bridge networking settings need to be edited as well. This is, of course, done by the script automatically.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Be sure to flush the iptables rules.

sudo iptables -F

Then proceed with the ACI CNI installation.

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_Kubernetes_Integration_with_ACI.html

Once you have pushed the configuration using the acc-provision tool, attach the nodes to the newly created vDS trunk port group in vCenter, then bring up the cluster using kubeadm init.
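
If you go this manual route, the init and CNI steps look roughly like the following, using the pod subnet and token from this lab's all.yml (adjust both for your environment; the manifest name is whatever you passed to acc-provision with -o):

sudo kubeadm init --pod-network-cidr=10.113.0.0/16 --token=fqv728.htdmfzf6rt9blhej
# as your regular user, set up kubectl access
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# apply the ACI CNI manifest generated by acc-provision (aci-cni-config.yaml in this playbook)
kubectl apply -f aci-cni-config.yaml
# finally, join each worker with the kubeadm join command printed by kubeadm init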

Kubespray

https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
https://github.com/kubernetes-sigs/kubespray
Kubespray is an open-source project that utilizes Ansible to deploy and configure an enterprise-ready K8 cluster. The configuration is much more in-depth than any of the previously mentioned methods.

Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:

  • An HA cluster
    • Easy installation of the recommended 3 master nodes
  • Composable attributes
  • Support for most popular Linux distributions

Wrap Up

Congratulations!! We have a working Kubernetes cluster that is integrated with Cisco ACI. In the next part to this series we will dive into interacting with our ACI CNI and deploying some applications. This will cover creating K8 External services, annotating Pods/EPGs, and interacting with the K8 cluster.

References

https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
https://github.com/kubernetes-sigs/kubespray
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_Kubernetes_Integration_with_ACI.html
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/
https://software.cisco.com/download/home/285968390/type/286304714/release/5.2(3.20211025)
https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf
https://github.com/camrossi/aci_kubeadm
https://kb.vmware.com/s/article/1022525
https://www.cisco.com/c/dam/en/us/td/docs/Website/datacenter/aci/virtualization/matrix/virtmatrix.html
