In this post, I will cover a full, practical Azure Kubernetes Service (AKS) integration deployment with Azure File Share services for apps. We will cover the following topics:
- Cisco cAPIC/Azure AKS integration (step by step)
- Creating a file share to be used by the K8s apps (we’ll spin up a simple nginx deployment with 2 replicas and mount the file share as volumes in the nginx containers)
- Security, Data Plane and Control Plane for the K8s cluster
I do also want to point out that in this example, I’m not going to integrate AKS with the Cisco ACI CNI. We will use the native Azure CNI with Calico network policy. Integrating the Cisco CNI (using the acc-provision tool) would give you more granular control from ACI and hence additional benefits. For example, if you integrated AKS with the Cisco CNI, you would get visibility into the actual apps/namespaces/services that you spin up at the K8s level directly in cAPIC. It would also give you the capability to control East/West traffic between the apps using ACI contracts.
In the case I will demonstrate below (without the Cisco ACI CNI), you can still control North/South access with ACI contracts (we’ll demo this below). However, for East/West traffic (between apps in the cluster itself) you will not be able to use ACI contracts, because that traffic stays inside the cluster. You can, however, use traditional K8s Network Policies to achieve East/West traffic policies between pods in the cluster (we’ll demo this in this writeup as well).
This is a practical, hands-on guide that shows you how to set up the entire integration. You will get the most value out of it if you follow along and repeat the steps in your own ACI/Azure fabric (with release 5.1.2 of cAPIC, to be released on CCO in a few weeks).
Starting with cAPIC release 5.1.2, native Azure services are supported for cAPIC integration. Prior to this release, EPGs were identified by label matching only (IP/region/custom label), so native Azure service support was not possible. You could bring up an EPG, put EPG selector labels on it, and then spin up VMs/3rd-party firewalls etc. in Azure, which had traditional NICs; you put matching labels on the NICs, and based on the label match the Security Groups in Azure would get configured. This effectively pulled your endpoint VMs into the correct EPG (Security Group in Azure). From there you could use all the bells and whistles of ACI and configure policies based on your requirements.
Release 5.1.2 of cAPIC for Azure introduces the concept of service EPGs. Service EPGs match based on services and some other fields (depending on the service). Native services in Azure do not have traditional endpoints (NICs) like an Azure VM does. With service EPGs, you can now match Azure services and pull them into service EPGs. From there you can again utilize the bells and whistles of ACI and create your required policy. This effectively gives you the capability of integrating native Azure services with cAPIC.
Below is a screenshot from the cAPIC UI showing the different Native Services that you can integrate.

Let’s go through a high level overview of the tasks that are involved for this integration.
a) First, from cAPIC we will create a subnet where we will pull in the Azure storage service. We will create this subnet in the hub (though you could do this in the tenant space itself)

b) Next, we will create the Storage Account in Azure.

c) After that from cAPIC we will create the Tenant/VRF/Subnet/Service EPG for AKS

d) Next we will bring up the AKS cluster in Azure

e) Finally, we will create and apply a contract between the AKS service EPG and the storage service EPG. We will also create a contract between the AKS service EPG and an external EPG to control who/what ports can access the AKS services that we spin up.

Before we start, let’s spin up an Ubuntu VM that we will use as a jumpbox and as the place from which we will execute the K8s kubectl commands to talk to the K8s API server. This jumpbox can be anywhere, on your physical site or in a cloud site, as long as it has access to the Internet. You could also do this from your local Mac using equivalent commands (if that’s what you are using).
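If you prefer to create the jumpbox itself with the Azure CLI (for example from Azure Cloud Shell), a minimal sketch is shown below. The resource group, VM name, size and region are just example values; any Ubuntu VM with outbound Internet access will do.
# example only: create a resource group and a small Ubuntu VM to act as the jumpbox
az group create --name JumpboxRG --location eastus
az vm create --resource-group JumpboxRG --name aks-jumpbox --image UbuntuLTS --size Standard_B2s --admin-username aciadmin --generate-ssh-keys
# note the publicIpAddress in the output, then: ssh aciadmin@<publicIP>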

Once you are done installing the Ubuntu jumpbox, ssh into it and do the below:
a. Install the Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
b. Make a directory where you will execute the commands from
mkdir newAksAZ && cd newAksAZ
c. Log in to your Azure account from the jumpbox
az login -u <azureUsername@domain.com> -p <azurePassword>
d. List all the subscriptions you have under that Azure account (in case you have multiple subscriptions)
az account list -o table
e. Set your subscription to the one you will use to create your Tenant/AKS cluster
az account set -s <subscriptionID>
f. Create the Azure service principal
az ad sp create-for-rbac --name azure-cli --create-cert
You will now see output of this sort on your Ubuntu terminal (in my case, my Ubuntu SSH user ID for the jumpbox is aciadmin).
Save the output to a text file in a safe place!

g. Now copy the private key from the home directory to the working directory
cp ~/tmpFileName.pem . # don’t forget the “.” at the end, assuming you are sitting in the working directory
h. Now log in to your Azure account using the service principal (make sure to use the output that you got in step f above for the fields)
az login --service-principal --username yourAppID --tenant yourAzureTenantID --password tmpFileName.pem
i. For convenience, make a bash script that logs in to your Azure account and sets the subscription to the one you will be using:
cat > setsubscription.sh <<EOF
#!/bin/bash
az login --service-principal --username yourAppID --tenant yourAzureTenantID --password tmpFileName.pem > /dev/null
az account set -s <yourSubscriptionID>
EOF
j. Now set proper permissions on your login script:
chmod 775 setsubscription.sh
k. From now on you can quickly log in to your Azure account and set the working subscription by executing the script:
./setsubscription.sh
You can verify that you are on the correct subscription with the command below:
az account show

First, we’ll set up the networking for the Azure file storage. We’ll do this in the hub vNet (though you don’t have to).

We’ll add a new subnet to house the Azure file share. While creating the subnet, make sure to give it a private link label; I called mine newaksprivatelinklabel. This private link label will be used to associate the service EPG with the subnet.

Now, let’s create the EPG where the storage will reside. Remember that this is meant for Azure Storage, which is a native Azure service, so we will configure this as a service EPG.


We will choose EPG Type: Service. For storage, we will choose Deployment Type: Azure Storage and Access Type: Private, and also select the private link label we created earlier in the hub CTX. Associating this private link label with the EPG makes the EPG part of that subnet.
Note: Azure has many types of storage:
- Containers: for blob/unstructured data
- File Shares: serverless SMB/NFS file shares
- Tables: For tabular data storage
- Queues: queue-style storage for scaling apps
In this case, we are choosing Azure Storage, since that covers all the cases.

For the EP selector, choose a globally unique alphanumeric value. This will have to match the storage account name that we will create in the next step. This effectively pulls the native storage service in as an endpoint in the ACI fabric.
Note: an easy way to generate a (hopefully unique) global name from the Ubuntu jumpbox is to use the shell’s $RANDOM variable:
echo newaksstoragefs$RANDOM

Next, we will create the Azure file storage. You can, of course, create it from the UI, but in this case we will create it directly from our Ubuntu jumpbox using the Azure CLI.
a. Run the Azure login script from the Ubuntu jumpbox:
./setsubscription.sh

b. Next, on the jumpbox, create a shell script “createAKSstorage.sh” to create your storage account. Modify the values in the script to suit your environment. Remember, the value of AKS_PERS_STORAGE_ACCOUNT_NAME should be the endpoint selector value you entered earlier in the storage service EPG.
#!/bin/bash
# modify the below as you need to
AKS_PERS_STORAGE_ACCOUNT_NAME=newaksstoragefs25386
AKS_PERS_RESOURCE_GROUP=NewAksStorageRG
AKS_PERS_LOCATION=eastus
AKS_PERS_SHARE_NAME=mynewaksshare
###
# Don't modify the below
az group create --name $AKS_PERS_RESOURCE_GROUP --location $AKS_PERS_LOCATION
az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -l $AKS_PERS_LOCATION --sku Standard_LRS
AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -o tsv)
az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
# Show the needed env values
echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY
c. Make the script executable:
chmod 775 createAKSstorage.sh
d. Run the script (source it so the variables remain set in your shell):
source ./createAKSstorage.sh
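Because we sourced the script, the AZURE_STORAGE_CONNECTION_STRING variable is still set in the shell, so you can do a quick optional sanity check that the share really exists (a small sketch):
# should list the share we just created (mynewaksshare)
az storage share list --connection-string $AZURE_STORAGE_CONNECTION_STRING -o table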

Now, let’s go to Azure UI and check for Private Endpoints.

Click on the private link for your new file share

Here you will be able to see the private endpoint of the storage service. Notice that in my case the subnet for the CTX was 100.67.75.0/24, and the private endpoint got a value of 100.64.75.4
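If you prefer the CLI, you should also be able to see the private endpoint connection directly on the storage account (a hedged sketch, reusing the variables from createAKSstorage.sh; the exact output shape can vary by CLI version):
# lists the private endpoint connection(s) attached to the storage account
az storage account show -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP --query privateEndpointConnections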

You can also verify that the private endpoint is showing up in the EPG (from cAPIC)

Take a look at the Network Security Group that got associated with that EPG

From Azure UI, you can see the equivalent Security Group

Now, on Azure UI, let’s go to that storage (we created with our script) and upload a text file to it.

Open up the storage account you created

Click on the file share that you created (by script) in that storage account

Upload some text file to that file share. In my case, I made a simple index.html file in my local Mac’s download directory and uploaded it to the Azure file share.
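If you would rather do the upload from the jumpbox instead of the Azure UI, the Azure CLI can push the file to the share directly (a small sketch, assuming index.html is in your current directory and the variables from createAKSstorage.sh are still set):
# upload the file to the Azure file share
az storage file upload --share-name $AKS_PERS_SHARE_NAME --source ./index.html --connection-string $AZURE_STORAGE_CONNECTION_STRING
# verify the upload
az storage file list --share-name $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING -o table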

We are all done creating the Native Azure Storage, uploading a file to it and associating our service EPG with that storage

Let’s first create the Tenant and VRF where we will install the AKS (K8s) cluster. (By now you know how to do that, so I won’t waste your time.)

Next, create the Cloud Context Profile for the VRF.

In this case we don’t need a private link label, because cAPIC will manage the subnet for the K8s nodes. I’m using a CIDR block of 172.24.0.0/22, and the subnet is the same.

Now, let’s create the service EPG for AKS

In this case we are going to make the EPG a service EPG, Cloud Native Managed, and Private

For EP Selector, choose the subnet that you created earlier in the AKS CTX.

Now, let’s create a contract between the service EPG of AKS and service EPG of Storage

Export the contract to the Tenant

Apply the contract

Now, let’s create the AKS Cluster. In this case we will use azure cli from our ubuntu jumpbox. (though it could be created from the UI)
az login -u username@acme.com -p myAzureSecretPassword
az account set -s subscriptionID

Add the necessary Azure Providers (if needed):
a. First check if you have the necessary providers:
az provider show -n Microsoft.OperationsManagement -o table
az provider show -n Microsoft.OperationalInsights -o table
b. If they are not registered, use the following commands to register:
az provider register --namespace Microsoft.OperationsManagement
az provider register --namespace Microsoft.OperationalInsights

Bring up the AKS Cluster in the correct resource group (created by cAPIC)
a) Set the variable ID to the subnet ID. You can get this with the query below:
ID=$(az network vnet subnet list \
-g CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus --vnet-name NewAKSClusterVRF \
--query '[].id' | tr -d '[' | tr -d ']' | tr -d '"')
b) Bring up the cluster in the cAPIC provisioned vNET
az aks create \
--enable-managed-identity \
--resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus \
--name NewAKSCluster1 \
--location eastus \
--zones 1 2 \
--node-count 2 \
--enable-addons monitoring \
--kubernetes-version 1.19.0 \
--load-balancer-sku standard \
--network-plugin azure \
--network-policy calico \
--vnet-subnet-id $ID \
--docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.2.0.10 \
--service-cidr 10.2.0.0/24
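If you want to check on progress from another terminal while the create command runs (or if you started it with --no-wait), you can poll the provisioning state; a small sketch, reusing the resource group and cluster name from above:
# prints Creating while the cluster is being built, Succeeded when done
az aks show --resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus --name NewAKSCluster1 --query provisioningState -o tsv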
The command might take a few minutes to run and bring up the AKS cluster. After completion, let’s add the kubectl utility to the jumpbox.
sudo az aks install-cli
Next, check the kube config file. You will see that it is empty.
kubectl config view

Now, let’s populate the kube config file for this cluster
az aks get-credentials \
--resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus \
--name NewAKSCluster1

You will now notice that the kubeconfig file is populated with the user, cluster, context and the correct certificates.
kubectl config view
Your kubectl commands are now operational. Try these out:
kubectl get nodes
kubectl get ns
kubectl cluster-info
kubectl top nodes --sort-by cpu

In production, you will probably never stop the AKS cluster. However, in a PoC you might want to stop it when you are not testing (to save cost). Let’s create some quick scripts so we can show/start/stop the AKS cluster whenever we want.
First you need to add the aks-preview extension (the scripts below will not work without it):
az extension add --name aks-preview
Create these three scripts as shown below:
showAksState.sh
#!/bin/bash
az aks show --name NewAKSCluster1 --resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus

stopAksCluster.sh
#!/bin/bash
az aks stop --name NewAKSCluster1 --resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus

startAksCluster.sh
#!/bin/bash
az aks start --name NewAKSCluster1 --resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus
Don’t forget to do a chmod 775 on them.
You are all done bringing up the AKS cluster in the ACI service EPG. The endpoint nodes can be seen in the cAPIC cloud context profile.
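To cross-check from the cluster side, you can list the node internal IPs with kubectl; they should fall inside the 172.24.0.0/22 subnet that cAPIC provisioned:
# the INTERNAL-IP column should show addresses from the cAPIC-managed subnet
kubectl get nodes -o wide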

From Azure Console, you can also verify that the Security Groups have been automatically populated

Now that we are done with the AKS cluster bringup and the storage configuration, let’s bring up a simple nginx deployment with 2 replicas that uses the shared Azure file storage.

a) On your jumpbox, execute the Azure login script:
./setsubscription.sh
b) Set the environment variables as shown below (substitute your own values):
AKS_PERS_STORAGE_ACCOUNT_NAME=newaksstoragefs25386
AKS_PERS_RESOURCE_GROUP=NewAksStorageRG
AKS_PERS_LOCATION=eastus
AKS_PERS_SHARE_NAME=mynewaksshare
c) Set a variable for your storage key using the command below:
STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
d) Make sure to check the value by:
echo $STORAGE_KEY
Now, let’s bring up the nginx deployment with the shared storage
a) Create a namespace where you will run your app:
kubectl create ns mynamespace
b) Create your K8s secret for the storage account (remember, secrets are namespaced):
kubectl create secret generic azure-secret \
--from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
--from-literal=azurestorageaccountkey=$STORAGE_KEY --namespace=mynamespace
c) Create the K8s storage class. See the yaml file below. Apply with:
kubectl apply -f myazuresc.yaml
myazuresc.yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: myazuresc
parameters:
  skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Delete
volumeBindingMode: Immediate
d) Create the K8s persistent volume. See the yaml file below. (Note: be careful to check the fields here, especially “shareName”; it needs to match the name of the file share that you configured in your storage account. In my case, I had named the file share mynewaksshare.) Apply with:
kubectl apply -f myazurepv.yaml
myazurepv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myazurepv
spec:
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    secretNamespace: mynamespace
    shareName: mynewaksshare
  capacity:
    storage: 3Gi
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
  persistentVolumeReclaimPolicy: Retain
  storageClassName: myazuresc
  volumeMode: Filesystem
e) Create the K8s persistent volume claim (remember, PVCs are namespaced, while PVs are cluster-scoped). See the yaml file below. Apply with:
kubectl apply -f myazurepvc.yaml
myazurepvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myazurepvc
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: myazuresc
  resources:
    requests:
      storage: 3Gi
f) Check that the pvc is bound to the pv:
kubectl -n mynamespace get pvc

g) Create your yaml file for the nginx deployment (deployments are namespaced):
kubectl create deployment mynginx --image=nginx --replicas=2 \
--namespace=mynamespace --dry-run=client -o yaml > mynginx.yaml
Then edit it to fill in the values of:
- {.spec.template.spec.containers[*].ports}
- {.spec.template.spec.volumes}
- {.spec.template.spec.containers[*].volumeMounts}
Completed yaml file below:
mynginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mynginx
  name: mynginx
  namespace: mynamespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mynginx
    spec:
      volumes:
      - name: myazure
        persistentVolumeClaim:
          claimName: myazurepvc
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: myazure
          mountPath: /usr/share/nginx/html
          readOnly: false
        ports:
        - containerPort: 80
        resources: {}
status: {}
h) Start the nginx deployment:
kubectl apply -f mynginx.yaml
Check with: kubectl -n mynamespace get pods -o wide

i) Expose the deployment with an Azure load balancer service (services are namespaced):
kubectl create svc loadbalancer mynginxsvc --tcp=80:80 --namespace=mynamespace --dry-run=client -o yaml > mynginxsvc.yaml
Next, edit mynginxsvc.yaml and change the selector to app: mynginx to match the label of the pods. You can check the pod labels with:
kubectl -n mynamespace get pods --show-labels
mynginxsvc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: mynginxsvc
  name: mynginxsvc
  namespace: mynamespace
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: mynginx
  type: LoadBalancer
status:
  loadBalancer: {}
j) Apply the yaml file:
kubectl apply -f mynginxsvc.yaml
k) Check that the svc is associated with the deployment pods:
kubectl -n mynamespace get ep

l) Check the load balancer public IP:
kubectl -n mynamespace get svc


Security Section:
From the Ubuntu jumpbox, curl the IP of the load balancer to test it out:
curl http://40.80.48.178
You will notice that the curl is not working. This is because we have not explicitly added any contract that allows port 80 to be exposed from the K8s pods.

Let’s add a contract to allow port 80 to be exposed for the data plane. First create the external EPG

In this example, I’ll put 0/0 for the external EPG. You could get as granular as you want.

Now let’s create a contract. I’m allowing port 80 and port 443 in the filter for now.

Apply the contract.
extEPG is consumer and AKS-ClusterEPG is provider

Now try the curl again. Also, try browsing to the load balancer address from your local Mac. Both will work, because you put 0/0 in the external EPG and your contract allows port 80.

East/West Traffic Policy:
As mentioned earlier, in this setup we cannot use ACI policies (contracts) for east/west traffic between pods in the cluster. That is because we are not doing ACI CNI Integration in this setup.
However, we can still use traditional K8s Network Policies for East/West traffic between pods. In this simple example, I will show how to create a network policy that does not allow our nginx deployment to be accessed from any other pod in the cluster.
First, let’s verify that East/West traffic between pods in the cluster is in fact working.
Let’s get the name of the service that we are running in our namespace:
kubectl -n mynamespace get svc

The name of the service we created was mynginxsvc. Let’s also quickly look at the addresses that the pods got:
kubectl -n mynamespace get ep

To test, let’s bring up a temporary busybox pod in the default namespace and see if we can get to the service from the container in that pod.
We’ll use netcat to check whether port 80 of the nginx service can be reached from a pod in the default namespace. You will see that port 80 is reachable from that pod.
kubectl run bb --image=busybox:1.28 --restart=Never --rm -it -- nc -zvw 2 mynginxsvc.mynamespace.svc.cluster.local 80

Now, let’s create a network policy that does not allow the nginx pods to be accessed from any other pods in the cluster. Create the yaml file below and name it mynetpol.yaml.
mynetpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mynetpol
  namespace: mynamespace
spec:
  podSelector:
    matchLabels:
      app: mynginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.24.0.0/24
Create the network policy:
kubectl apply -f mynetpol.yaml

Repeat the same test as before (bringing up a temporary busybox container and using netcat for testing)
kubectl run bb --image=busybox:1.28 --restart=Never --rm -it -- nc -zvw 2 mynginxsvc.mynamespace.svc.cluster.local 80

As you can see, the nginx pods are not reachable any more from other pods in the cluster.
However from North/South, I can still access them.
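To confirm that, simply repeat the earlier North/South test from the jumpbox (same load balancer IP as before; yours will differ):
# still works, and serves the index.html we uploaded to the Azure file share
curl http://40.80.48.178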


Control Plane Security of AKS Cluster:
K8s RBAC for control plane security (authentication and authorization)
Authentication is done via certificates, and authorization with roles/rolebindings, clusterroles/clusterrolebindings.
We’ll make a user called kiwi who can view pods only, but cannot configure or view anything else.
a) On the jumpbox, make a directory called kiwi and go into it:
mkdir kiwi && cd kiwi
b) Make a private key for kiwi:
openssl genrsa -out kiwi.key 2048

c) Create the certificate signing request:
openssl req -new -key kiwi.key -out kiwi.csr -subj '/CN=kiwi/O=kiwi.com/C=US'

d) Check with: openssl req -in kiwi.csr -text

e) Copy and paste the below content in your jumpbox terminal:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: kiwi
spec:
  request: $(cat kiwi.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

f) Run the following command:
kubectl get csr

g) Approve the Certificate: kubectl certificate approve kiwi

h) Verify that the K8s kube-controller-manager has approved and issued the certificate:
kubectl get csr

i) Copy the certificate portion from the output of kubectl get csr kiwi -o yaml. Then do this:
echo <your_copied_buffer> | base64 --decode > kiwi.crt

It is good practice to check that the certificate was copied and decoded properly and that it is valid:
openssl x509 -in kiwi.crt -text
(check the expiry date, issuer and subject line)

j) Now create a role that allows kiwi to view pods:
kubectl create role kiwi --verb=get,list --resource=pods

k) Create the appropriate role binding:
kubectl create rolebinding kiwi --role=kiwi --user=kiwi

l) Add to kubeconfig file:
kubectl config set-credentials kiwi \
--client-key=kiwi.key \
--client-certificate=kiwi.crt \
--embed-certs=true

m) Add the kiwi context:
kubectl config set-context kiwi --cluster=NewAKSCluster1 --user=kiwi

n) Change your context to kiwi:
kubectl config use-context kiwi

o) Test it out with:
kubectl get all
You will notice that for user kiwi everything is denied except viewing pods.
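If you want to check the kiwi permissions more explicitly, kubectl auth can-i is handy (a short sketch; the context name NewAKSCluster1 comes from az aks get-credentials, which names the context after the cluster):
kubectl auth can-i list pods          # yes
kubectl auth can-i get services       # no
kubectl auth can-i create deployments # no
# when you are done testing, switch back to the admin context
kubectl config use-context NewAKSCluster1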

Conclusion:
- cAPIC allows you to integrate the native AKS and File Share services from Azure.
- North/South policies (contracts) can be used to control who can access the services in the cluster. East/West policies within the cluster still need traditional K8s network policies. (If you integrated AKS with the Cisco CNI, you could do all policies with ACI contracts; I’ll do a separate writeup for that later.)
- The users of the AKS services don’t have to change their behavior or learn anything new. The network/system administrator controls the security aspects from cAPIC.