In this post, I will cover a practical, end-to-end deployment of native Azure Kubernetes Service (AKS) integration with cAPIC, together with native Azure File Share services for the apps. We will cover the following topics:
- Cisco cAPIC/Azure AKS integration (step by step)
- Creating a file share to be used by the K8s apps (we’ll spin up a simple nginx deployment with 2 replicas and mount the file share as a volume in the nginx containers)
- Security, Data Plane and Control Plane for the K8s cluster
I also want to point out that in this example I’m not going to integrate AKS with the Cisco ACI CNI. We will use native AKS networking (the kubenet plugin, as shown in the az aks create command later) with Calico network policy. Integrating the Cisco CNI (using the acc-provision tool) would give you more granular control from ACI and additional benefits (support for that in the cloud will come in a later release). For example, if you integrated AKS with the Cisco CNI, you would get visibility into the actual apps/namespaces/services that you spin up at the K8s level directly in cAPIC. It would also give you the capability to control East/West traffic between the apps using ACI contracts.
In the case I will demonstrate below (without the Cisco ACI CNI), you can still control North/South access with ACI contracts (we’ll demo this below). However, for East/West traffic (between apps in the cluster itself) you will not be able to use ACI contracts, because that traffic stays inside the cluster. You can, however, use traditional K8s network policies to achieve East/West traffic control between pods in the cluster (we’ll also demo this in this writeup).
This is a practical, hands-on approach that will guide you through the entire integration and setup. You will get the most value from this if you follow along and repeat the steps in your own ACI/Azure fabric (with release 5.1.2 of cAPIC).
From cAPIC release 5.1.2, native Azure services are supported for cAPIC integration. Prior to this release, EPGs were identified by label matching only (IP/region/custom label). For that reason, native Azure service support was not possible. It meant you could bring up an EPG, put some EPG selector labels on it, then spin up VMs/3rd-party firewalls etc. in Azure which had traditional NICs, put matching labels on the NICs, and based on the label match the security groups in Azure would get configured. This effectively pulled your endpoint VMs into the correct EPG (security group in Azure). From there you could use all the bells and whistles of ACI and configure policies based on your requirements.
Release 5.1.2 of cAPIC for Azure introduces a new concept of service EPGs. Service EPGs match based on services and some other fields (depending on the service). Native services in Azure do not have traditional endpoints (NICs) like an Azure VM does. With service EPGs, you can now match Azure services and pull them into service EPGs. From there you can again utilize the bells and whistles of ACI and create your required policy. This effectively gives you the capability of integrating native Azure services with cAPIC.
Below is a screenshot from the cAPIC UI showing the different Native Services that you can integrate.

Let’s go through a high level overview of the tasks that are involved for this integration.
a) First, from cAPIC we will create a subnet where we will pull in the Azure storage service. We will create this subnet in the hub (though you could do this in the tenant space itself).

b) Next, we will create the Storage Account in Azure.

c) After that from cAPIC we will create the Tenant/VRF/Subnet/Service EPG for AKS

d) Next we will bring up the AKS cluster in Azure

e) Finally, we will create and apply a contract between the AKS service EPG and the storage service EPG. We will also create a contract between the AKS service EPG and an external EPG to control who, and on what ports, can access the AKS services that we spin up.

Before we start, let’s spin up an Ubuntu VM that we will use as a jumpbox and as the place from which we will run the K8s kubectl commands against the K8s API server. This jumpbox can be anywhere, on your physical site or in a cloud site, as long as it has access to the Internet. You could also do this from your local Mac using the equivalent commands (if that’s what you are using).

Once you are done installing the Ubuntu jumpbox, SSH into it and do the following:
- Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
- Make a directory where you will execute the commands from
mkdir newAksAZ && cd newAksAZ
- Login to your azure account from the jumpbox
az login -u <azureUsername@domain.com> -p <azurePassword>
- List all the accounts you have under that Azure Subscription (in case you have multiple subscriptions under that azure account)
az account list -o table
- Set your account to the one you will use to create your Tenant/AKS cluster
az account set -s <subscriptionID>
- Create the Azure service principal
az ad sp create-for-rbac --name azure-cli --create-cert
You will now see the following sort of output on your Ubuntu Terminal:
(in my case, my Ubuntu SSH userID for the jumpbox is aciadmin)
Save the output to a text file in a safe place!
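One way to do that, if you haven’t run the command yet, is to pipe it through tee when you run it (sp-output.json is just a filename I picked for this sketch):
az ad sp create-for-rbac --name azure-cli --create-cert | tee sp-output.json
chmod 600 sp-output.json   # keep the saved credentials readable only by you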

g. Now copy the private key from the home directory to the working directory:
cp ~/tmpFileName.pem .   # don’t forget the “.” at the end, assuming you are sitting in the working directory
h. Now log in to your Azure account using the service principal (make sure to use the output from the service principal creation step above for the fields):
az login --service-principal --username yourAppID --tenant yourAzureTenantID --password tmpFileName.pem
i. For convenience, make a bash script to log in to your Azure account and set the correct subscription ID as the one you will be using:
cat > setsubscription.sh <<EOF
#!/bin/bash
az login --service-principal --username yourAppID --tenant yourAzureTenantID --password tmpFileName.pem > /dev/null
az account set -s <yourWorkingAZ_Subs-ID>
EOF
j. Now set proper permissions on your login script:
chmod 775 setsubscription.sh
k. From now on you can quickly log in to your Azure account and set the working subscription by executing the script as shown below:
./setsubscription.sh
You can verify that you are on the correct subscription ID with the command below:
az account show

First, we’ll set up the networking for Azure File Storage. We’ll do this in the hub vNet (though you don’t have to).

We’ll add a new subnet to house the Azure File Share. While creating the subnet, make sure to give it a private link label. I called it newaksprivatelinklabel. This private link label will be used to associate the storage service EPG with the subnet.

Now, let’s create the EPG (in the Infra tenant) where the storage will reside. Remember that this is meant for Azure Storage, which is a native Azure service, so we will configure it as a service EPG.


Service EPGs have several unique attributes. The CCO document listed here lays them out very nicely and even gives you a table with the different possible values. A screenshot of that is shown below:

We will choose EPG Type: Service. For storage, we will choose Service Type: Azure Storage, Deployment Type: Cloud Native, and Access Type: Private, and also select the private link label we created earlier in the hub CTX. Associating this private link label with the EPG makes the EPG part of that subnet.
The rule of thumb below may not be true all the time, but it helps me quickly remember without having to look at the document:
Cloud Native Managed is generally used when you have a managed service but you still need to attach the endpoint to the VRF from Azure (either the UI or the Azure CLI). That is the case for APIM or AKS.
Let’s take the case of storage. With storage, there is no concept of spinning up the storage and attaching a storage endpoint to a vNet, so for storage we choose Cloud Native. When you use Cloud Native, you also use a private link label to match the endpoints into the EPG.
Note: Azure has many types of storage:
- Containers: for blob/unstructured data
- File shares: serverless SMB/NFS file shares
- Tables: for tabular data storage
- Queues: queue-style storage for scaling apps
In this case, we are choosing Azure Storage, since that covers all the cases.

For the EP selector, choose a unique alphanumeric value. This will have to match the storage account name that we will create in the next step. This effectively pulls the native storage service in as an endpoint in the ACI fabric.
Note: An easy way to generate a (hopefully unique) global name is to generate one from the Ubuntu jumpbox using the shell’s RANDOM variable:
echo newaksstoragefs$RANDOM

Next, we will create the Azure File Storage. You can of course create it from the UI, but in this case we will create it directly from our Ubuntu jumpbox using the Azure CLI.
a. Run the Azure login script from the Ubuntu jumpbox:
./setsubscription.sh

b. Next, on the jumpbox, create a shell script “createAKSstorage.sh” to create your storage account. Modify the values in the script to suit your purposes. Remember that the value of AKS_PERS_STORAGE_ACCOUNT_NAME should be the endpoint selector you entered earlier in the storage service EPG.
#!/bin/bash
# modify the below as you need to
AKS_PERS_STORAGE_ACCOUNT_NAME=newaksstoragefs25386
AKS_PERS_RESOURCE_GROUP=NewAksStorageRG
AKS_PERS_LOCATION=eastus
AKS_PERS_SHARE_NAME=mynewaksshare
###
# Don't modify the below
az group create --name $AKS_PERS_RESOURCE_GROUP --location $AKS_PERS_LOCATION
az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -l $AKS_PERS_LOCATION --sku Standard_LRS
AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -o tsv)
az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
# Show the needed env values
echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY
c. chmod 775 createAKSstorage.sh
d. source ./createAKSstorage.sh   # sourcing keeps the AKS_PERS_* and STORAGE_KEY variables available in your current shell
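Optionally, before moving to the Azure UI, you can sanity-check that the storage account and file share exist straight from the jumpbox (this assumes the variables from the sourced script are still set in your shell):
# List storage accounts in the resource group we just created
az storage account list -g $AKS_PERS_RESOURCE_GROUP -o table
# List the file shares in the new storage account
az storage share list --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --account-key $STORAGE_KEY -o table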

Now, let’s go to Azure UI and check for Private Endpoints.

Click on the Private Link for your new file share.

Here you will be able to see the private endpoint of the storage service. Notice that in my case the subnet for the CTX was 100.67.75.0/24, and the private endpoint got a value of 100.64.75.4.

You can also verify that the private endpoint shows up in the EPG (from cAPIC).

Take a look at the Network Security Group that got associated with that EPG

From Azure UI, you can see the equivalent Security Group

Now, in the Azure UI, let’s go to the storage account we created with our script and upload a text file to it.

Open up the storage account you created

Click on the file share that you created (by script) in that storage account

Upload a text file to that file share. In my case, I made a simple index.html file in my local Mac’s Downloads directory, which I will upload to the Azure file share.
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
<p>I am in the Azure File Share</p>
</body>
</html>
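If you’d rather stay on the jumpbox instead of using the portal, uploading the file with the Azure CLI should also work. This is a sketch assuming index.html is in your current directory and the storage variables from the earlier script are still set:
az storage file upload \
  --share-name $AKS_PERS_SHARE_NAME \
  --source ./index.html \
  --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME \
  --account-key $STORAGE_KEY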

We are all done creating the native Azure storage, uploading a file to it, and associating our service EPG with that storage.

Let’s first create the tenant and VRF where we will install the AKS (K8s) cluster. We could certainly do this in a separate tenant account, but in this example we’ll keep it simple, so for this exercise please use the same Infra account to spin up your tenant. (By now you know how to do that, so I won’t waste your time.)

Next, create the Cloud Context Profile for the VRF.

In this case, we don’t need to use a private link label, because cAPIC will manage the subnet for the K8s nodes. I’m using a CIDR block of 172.24.0.0/22, and the subnet is the same.
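As a quick optional check from the jumpbox, once cAPIC has pushed the configuration you should be able to see the vNet and subnet it created. The resource group and vNet names below are the ones this cluster uses later in the post; adjust them if yours differ:
az network vnet subnet list \
  -g CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus \
  --vnet-name NewAKSClusterVRF -o table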

Now, let’s create the service EPG for AKS

In this case, we are going to make the EPG a service EPG, with deployment type Cloud Native Managed and access type Private.

For EP Selector, choose the subnet that you created earlier in the AKS CTX.

Now, let’s create a contract between the service EPG of AKS and service EPG of Storage

Export the contract to the Tenant

Apply the contract

Now, let’s create the AKS Cluster. In this case we will use azure cli from our ubuntu jumpbox. (though it could be created from the UI)
step a:
Login to your Az account. (In this case the Tenant is built in the Infra account itself)
az login -u username@acme.com -p myAzureSecretPassword
az account set -s subscriptionID
step b:
Check supported k8s versions for the region
az aks get-versions --location eastus --output table
step c:
From the output of “Step b”, choose a version of K8s. I’ve chosen 1.19.7.
Define the Variables below:
RG='CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus'
LOCATION='eastus'
CLUSTER_NAME='NewAKSCluster1'
ROLE='Contributor'
NET_ROLE='Network Contributor'
K8S_VERSION=1.19.7
SSH_KEY=k8sSshKey.pub
step d:
Generate ssh key for AKS, also add the necessary Azure extensions
ssh-keygen -f k8sSshKey # (don't bother to make a passphrase for lab purposes, just hit enter)
az extension add --name aks-preview
az feature register -n AKSNetworkModePreview --namespace Microsoft.ContainerService
az provider register -n Microsoft.ContainerService
# Verify the featurelist
az feature list -o table | grep -i aksnetworkmode
step e:
Define a variable for the service principal name and check to make sure that the SP does not already exist.
SP='calico-aks-sp'
# list service principal
az ad sp list --spn http://$SP --query "[].{id:appId,tenant:appOwnerTenantId,displayName:displayName,appDisplayName:appDisplayName,homepage:homepage,spNames:servicePrincipalNames}"
step f:
Create the service principal and capture its password.
SP_PASSWORD=$(az ad sp create-for-rbac --name "http://$SP" --skip-assignment | jq '.password' | sed -e 's/^"//' -e 's/"$//')
echo $SP_PASSWORD > SP-K8s.pass
step g:
Get the resource group ID and set a variable for it.
Get the service principal client/app ID and set a variable for it.
Assign the Contributor role to the service principal on the resource group.
Get the vNet subnet ID and set a variable for it.
RG_ID=$(az group show -n $RG --query 'id' -o tsv)
CLIENT_ID=$(az ad sp list --display-name $SP --query '[].appId' -o tsv)
az role assignment create --role $ROLE --assignee $CLIENT_ID --scope $RG_ID
ID=$(az network vnet subnet list \
-g CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus --vnet-name NewAKSClusterVRF \
--query '[].id' | tr -d '[' | tr -d ']' | tr -d '"')
step h:
Deploy AKS cluster
az aks create \
--resource-group $RG \
--name $CLUSTER_NAME \
--kubernetes-version $K8S_VERSION \
--nodepool-name 'nix' \
--node-count 2 \
--network-plugin kubenet \
--network-policy calico \
--service-cidr 10.0.0.0/16 \
--dns-service-ip 10.0.0.10 \
--docker-bridge-address 172.17.0.1/16 \
--service-principal $CLIENT_ID \
--vnet-subnet-id $ID \
--client-secret $SP_PASSWORD \
--node-osdisk-size 50 \
--node-vm-size Standard_D2s_v3 \
--output table \
--ssh-key-value $SSH_KEY
The K8s cluster will take a few minutes (around 10 or so) to spin up. You will see the following sort of output on your terminal screen when done:

At this time, quickly go and check from Azure Console, that the K8s Cluster is healthy.
- In the Azure UI, search for Kubernetes in the search bar and click on “Kubernetes services” (make sure to choose the right tenant subscription).
- You will see your cluster there. In the case of this example, you will see “NewAKSCluster1”. Click on it.
- On the sidebar, click on Workloads. Make sure that all the pods show as Ready in the work pane, as shown below.
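If you prefer to check from the CLI instead of the portal, something like the following should confirm the cluster state (this assumes the $RG and $CLUSTER_NAME variables from step c are still set in your shell):
# Should report "Succeeded" once the deployment is complete
az aks show -g $RG -n $CLUSTER_NAME --query provisioningState -o tsv
# Should report "Running"
az aks show -g $RG -n $CLUSTER_NAME --query powerState.code -o tsv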

After completion, let’s add the kubectl utility to the jumpbox.
sudo az aks install-cli
Next, check the kube config file. You will see that it is empty.
kubectl config view

Now, let’s populate the kube config file for this cluster
Note: in case you were running kubectl commands earlier from your jumpbox (or local mac), you probably already have a ~/.kube/config file. Please go and rename that file like so: mv ~/.kube/config ~/.kube/config.orig
az aks get-credentials \
--resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus \
--name NewAKSCluster1

You will now notice that the kube config file is populated with the user, cluster, context, and the correct certificates:
kubectl config view
Your kubectl commands are now operational. Try these out:
kubectl get nodes
kubectl get ns
kubectl get pods --all-namespaces # make sure they are all up and good
kubectl cluster-info
kubectl top nodes --sort-by cpu # this information comes from the metrics server
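If kubectl top complains about missing metrics, you can check that the metrics server is up; on AKS it normally runs as a deployment in kube-system (a quick sanity check, nothing more):
kubectl -n kube-system get deployment metrics-server
kubectl -n kube-system top pods | head -5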

In production you will probably never stop the AKS cluster. However, in a POC you might want to stop it when you are not testing (to save on cost). Let’s create some quick scripts so we can show/start/stop the AKS cluster when we want.
Create these three scripts as shown below:
showAksState.sh:
#!/bin/bash
az aks show --name NewAKSCluster1 --resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus

stopAksCluster.sh:
#!/bin/bash
az aks stop --name NewAKSCluster1 --resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus

startAksCluster.sh:
#!/bin/bash
az aks start --name NewAKSCluster1 --resource-group CAPIC_NewAKSCluster_NewAKSClusterVRF_eastus
Don’t forget to do a chmod 775 on them.
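For example (using the script names above):
chmod 775 showAksState.sh stopAksCluster.sh startAksCluster.sh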
You are all done bringing up the AKS cluster in the ACI service EPG. The node endpoints can be seen from the cAPIC cloud context profile.
Note: I do want to point out that if you look at the EPG in cAPIC and look at the endpoints under Cloud Resources, you will not see the node endpoints there. For Cloud Native Managed, the endpoints show up in cloud context profiles only. For Cloud Native, they show up in both cloud context profiles and EPGs.

From Azure Console, you can also verify that the Security Groups have been automatically populated

Now that we are done with the AKS cluster bring-up and the storage configuration, let’s bring up a simple nginx deployment with 2 replicas that uses the shared Azure file storage.

a) On your Jumpbox, execute the az login script (confirm that the az account set -s <subscription_ID> points to the Infra account).
(Note: If you had created the K8s cluster in a separate tenant account, you would still set the subscription to the Infra account here. Remember, what we are doing below is mounting the Azure file share that we created earlier; kubectl is already set up to talk to the kube-apiserver that was spun up.)
./setsubscription.sh
b) Set the environment variables as shown below (substitute your own values. If you followed the same naming convention as this lab, the only thing that will be different is the AKS_PERS_STORAGE_ACCOUNT_NAME value. You can look this up in the Azure UI, or use the command “az storage account list --resource-group NewAksStorageRG | grep name”.)
AKS_PERS_STORAGE_ACCOUNT_NAME=newaksstoragefs25386
AKS_PERS_RESOURCE_GROUP=NewAksStorageRG
AKS_PERS_LOCATION=eastus
AKS_PERS_SHARE_NAME=mynewaksshare
c) Set a variable for your storage key using the command below:
STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
d) Make sure to check the value by:
echo $STORAGE_KEY
Now, let’s bring up the nginx deployment with the shared storage
a) Create a namespace where you will run your app:
kubectl create ns mynamespace
b) Create your K8s secret for the storage account (remember, secrets are namespaced; the “kubectl api-resources” command makes this obvious):
kubectl create secret generic azure-secret \
--from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
--from-literal=azurestorageaccountkey=$STORAGE_KEY --namespace=mynamespace
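A quick way to confirm the secret landed in the right namespace with both keys (describe shows the key names and sizes, not the values):
kubectl -n mynamespace describe secret azure-secret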
c) Create the K8s storage class. Please use the YAML file below. Apply with:
kubectl apply -f myazuresc.yaml
myazuresc.yaml:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: myazuresc
parameters:
  skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Delete
volumeBindingMode: Immediate
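You can optionally confirm that the storage class registered:
kubectl get sc myazuresc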
d) Create the K8s persistent volume. Please see the YAML file below. Apply with:
(Note: Be careful to check the fields here, especially “shareName”; this needs to match the name of the file share that you configured in your storage account. In my case, I named the file share mynewaksshare.)
kubectl apply -f myazurepv.yaml
myazurepv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myazurepv
spec:
  accessModes:
  - ReadWriteMany
  azureFile:
    secretName: azure-secret
    secretNamespace: mynamespace
    shareName: mynewaksshare
  capacity:
    storage: 3Gi
  mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
  - mfsymlinks
  - nobrl
  persistentVolumeReclaimPolicy: Retain
  storageClassName: myazuresc
  volumeMode: Filesystem
e) Create the K8s persistent volume claim (note: the PV is cluster-scoped, but the PVC is namespaced). Please see the YAML file below. Apply with:
kubectl apply -f myazurepvc.yaml
myazurepvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myazurepvc
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: myazuresc
  resources:
    requests:
      storage: 3Gi
f) Check that the PVC is bound to the PV:
kubectl -n mynamespace get pvc

g) Create your YAML file for the nginx deployment (deployments are namespaced):
kubectl create deployment mynginx --image=nginx --replicas=2 \
--namespace=mynamespace --dry-run=client -o yaml > mynginx.yaml
Then go edit mynginx.yaml and add in (if you’ve been following along, you can just copy the completed YAML file below) the values for:
- {.spec.template.spec.containers[*].ports}
- {.spec.template.spec.volumes}
- {.spec.template.spec.containers[*].volumeMounts}
Completed yaml file below:
mynginx.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mynginx
  name: mynginx
  namespace: mynamespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mynginx
    spec:
      volumes:
      - name: myazure
        persistentVolumeClaim:
          claimName: myazurepvc
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: myazure
          mountPath: /usr/share/nginx/html
          readOnly: false
        ports:
        - containerPort: 80
        resources: {}
status: {}
h) Start the nginx deployment:
kubectl apply -f mynginx.yaml
Check with:
kubectl -n mynamespace get pods -o wide

i) Expose the service with an Azure load balancer (services are namespaced).
Note: if you’ve followed the examples exactly as shown here, you can just copy the YAML file below and apply it.
kubectl create svc loadbalancer mynginxsvc --tcp=80:80 --dry-run=client -o yaml > mynginxsvc.yaml
Next, edit mynginxsvc.yaml: change the selector to app: mynginx and the namespace to mynamespace (to match the labels of the pods; you can check the pod labels with kubectl -n mynamespace get pods --show-labels).
mynginxsvc.yaml:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: mynginxsvc
  name: mynginxsvc
  namespace: mynamespace
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: mynginx
  type: LoadBalancer
status:
  loadBalancer: {}
j) Apply the YAML file:
kubectl apply -f mynginxsvc.yaml
k) Check that the service is associated with the deployment pods:
kubectl -n mynamespace get ep

l) Check the load balancer public IP:
kubectl -n mynamespace get svc


Security Section:
From the Ubuntu jumpbox, curl the IP of the load balancer to test it out:
curl http://40.80.48.178
You will notice that the curl does not work. This is because we have not explicitly added any contract that allows port 80 to be exposed from the K8s pods.

Let’s add a contract to allow port 80 to be exposed for the data plane. First create the external EPG

In this example, I’ll put 0/0 for the external EPG. You could get as granular as you want.

Now let’s create a contract. I’m allowing port 80 and port 443 in the filter for now.

Apply the contract.
The extEPG is the consumer and the AKS-ClusterEPG is the provider.

Now try the curl again. Also try browsing to the load balancer address from your local Mac. Both will work, because you put 0/0 in the external EPG and your contract allows port 80.

East/West Traffic Policy:
As mentioned earlier, in this setup we cannot use ACI policies (contracts) for East/West traffic between pods in the cluster, because we are not doing the ACI CNI integration in this setup.
However, we can still use traditional K8s network policies for East/West traffic between pods. In this simple example, I will show how to create a network policy that does not allow our nginx deployment to be accessed from any other pod in the cluster.
First, let’s verify that East/West traffic between pods in the cluster is in fact working.
Let’s get the name of the service that we are running in our namespace. You will notice that the name of the service is mynginxsvc

To test, let’s bring up a temporary busybox pod in the default namespace and see if we can reach the service from the container in that pod.
We’ll use netcat to check whether port 80 of the nginx service can be reached from that pod in the default namespace. You will see that port 80 is reachable:
kubectl run bb --image=busybox:1.28 --restart=Never --rm -it -- nc -zvw 2 mynginxsvc.mynamespace.svc.cluster.local 80

Since we are using kubenet with Calico network policy, our pods get IPs from the 10.244.x.x pod subnet, which you can verify with “kubectl get ep -n mynamespace”.

Now, let’s create a network policy that does not allow the nginx pods to be accessed from any other pods in the cluster, while still allowing North/South traffic. Create the YAML file below and name it mynetpol.yaml.
mynetpol.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mynetpol
  namespace: mynamespace
spec:
  podSelector:
    matchLabels:
      app: mynginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.240.0.0/8
Create the network policy:
kubectl apply -f mynetpol.yaml
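You can optionally confirm that the policy was created and is selecting the nginx pods:
kubectl -n mynamespace get networkpolicy
kubectl -n mynamespace describe networkpolicy mynetpol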

Repeat the same test as before (bringing up a temporary busybox container and using netcat for testing)
kubectl run bb --image=busybox:1.28 --restart=Never --rm -it -- nc -zvw 2 mynginxsvc.mynamespace.svc.cluster.local 80

As you can see, the nginx pods are no longer reachable from other pods in the cluster.
However, from North/South, I can still access them.


Control Plane Security of AKS Cluster:
K8s RBAC for control plane security (authentication and authorization)
Authentication is done via certificates, and authorization with roles/rolebindings, clusterroles/clusterrolebindings.
We’ll make a user called kiwi who has access to view pods only, but cannot configure or view anything else.
a) On the jumpbox, make a directory called kiwi and go into it:
mkdir kiwi && cd kiwi
b) Make a private key for kiwi:
openssl genrsa -out kiwi.key 2048

c) Create the certificate signing request:
openssl req -new -key kiwi.key -out kiwi.csr -subj '/CN=kiwi/O=kiwi.com/C=US'

d) Check with: openssl req -in kiwi.csr -text

e) Copy and paste the below content in your jumpbox terminal:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: kiwi
spec:
  request: $(cat kiwi.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

f) Run the following command:
kubectl get csr

g) Approve the Certificate: kubectl certificate approve kiwi

h) Verify that the K8s kube-controller-manager has approved and issued the certificate:
kubectl get csr

i) Copy the certificate portion from the output of kubectl get csr kiwi -o yaml, then do this:
echo <your_copied_buffer> | base64 --decode > kiwi.crt

It is good practice to check that the certificate was copied and decoded properly and that it is valid, using the command: openssl x509 -in kiwi.crt -text
(Check the expiry date, the issuer, and the subject line.)

j) Now create a role for kiwi to be able to view pods:
kubectl create role kiwi --verb=get,list --resource=pods

k) Create the appropriate role binding:
kubectl create rolebinding kiwi --role=kiwi --user=kiwi

l) Add to kubeconfig file:
kubectl config set-credentials kiwi \
--client-key=kiwi.key \
--client-certificate=kiwi.crt \
--embed-certs=true

m) Add the kiwi context:
kubectl config set-context kiwi --cluster=NewAKSCluster1 --user=kiwi

n) Change your context to kiwi:
kubectl config use-context kiwi

o) Test it out with: kubectl get all
You will notice that for user kiwi everything is denied other than viewing pods, and only in the default namespace.
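You can also confirm the RBAC behavior with kubectl auth can-i while the kiwi context is active, and then switch back to your admin context. The admin context created by az aks get-credentials is normally named after the cluster; run kubectl config get-contexts to confirm the exact name in your kubeconfig:
kubectl auth can-i list pods                  # yes
kubectl auth can-i create deployments         # no
kubectl auth can-i list pods -n kube-system   # no (the role/rolebinding exist only in the default namespace)

# switch back to the admin context when done
kubectl config get-contexts
kubectl config use-context NewAKSCluster1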

Conclusion:
- cAPIC allows you to integrate the native AKS service and File Share services from Azure.
- North/South policies (contracts) can be used to control who can access the services in the cluster. East/West policies within the cluster still need traditional K8s network policies. (If you integrated AKS with the Cisco CNI, you could do all policies with ACI contracts; I’ll do a separate writeup for that later.)
- The users of the AKS services don’t have to change their behavior or learn anything new. The network/system administrator controls the security aspects from cAPIC.
References: