Part 3 – Monitoring Apps deployed on K8s cluster on ACI Fabric Using AppDynamics/NDI

Introduction

In this article we will focus on monitoring an application running within a Kubernetes cluster. This Kubernetes cluster has been integrated with the Cisco ACI CNI; however, that is not a requirement for the integration or monitoring to work. If you have read the previous articles, you will remember that we previously installed each AppDynamics monitoring agent individually. This becomes extremely tedious when you have to scale the configuration to any number of servers. A solution to this problem is to utilize Kubernetes together with the AppDynamics Auto-Instrumentation feature. This feature runs a set of pods within the Kubernetes cluster that scan defined namespaces for deployments that can be monitored, and it also monitors the Kubernetes nodes hosting those deployments. This instrumentation feature was CRUCIAL to getting data into Nexus Dashboard Insights because it allowed the AppDynamics agents to record exit call data for calls made between the different pods and nodes.

AppD Architecture

Figure 1: Graphic showing the AppD architecture.

When it comes to AppD architecture for monitoring a Kubernetes cluster, not much changes. We still have our Enterprise Console, Controller, and Events Service, as well as the Machine Agent, Application Agent, DB Agent, and Network Agent. The difference is that these agents run as Kubernetes pods (in my deployment, as Docker containers, since I am not utilizing CRI-O). The deployment and configuration of these agents are controlled through Kubernetes YAML files.

Requirements and Setup

The first step to tackle when monitoring a Kubernetes application is standing up the Kubernetes cluster itself. Kubernetes deployment and integration with Cisco ACI WILL NOT be covered in this article; it will come in a separate article. I am assuming that you have a working Kubernetes cluster, AppD Controller, and AppD Enterprise Console.

Once you have all the supporting Kubernetes and AppDynamics components deployed, we can begin configuring our Kubernetes cluster to report data to our AppDynamics Controller. To begin, we will need to install the Kubernetes Metrics Server. This is required to provide the APIs that our AppDynamics pods will poll for information about the K8s cluster.
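For reference, the Metrics Server exposes the metrics.k8s.io aggregated API. Once the installation below is complete, you can query that API directly to see the kind of data being served; this is just an optional sanity check.

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes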

To install the Kubernetes Metrics Server I followed the official GitHub documentation found at the link below.

https://github.com/kubernetes-sigs/metrics-server

I will also go over the exact steps that I took in my environment, because I needed to change some of the default settings.

To start, you can use kubectl to apply the Metrics Server deployment file directly from GitHub.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

After the deployment completed, I found that my Metrics Server pod was not running. After some googling I learned that I needed to enable hostNetwork in the deployment file for the Metrics Server.

cisco@k8s-01:~$ kubectl get pods -A
NAMESPACE               NAME                                        READY   STATUS    RESTARTS   AGE
aci-containers-system   aci-containers-controller-5c69d6ffb-ftfml   1/1     Running   3          26d
aci-containers-system   aci-containers-host-5jj4f                   3/3     Running   7          26d
aci-containers-system   aci-containers-host-5wv6c                   3/3     Running   13         26d
aci-containers-system   aci-containers-host-kpzj8                   3/3     Running   10         26d
aci-containers-system   aci-containers-host-qg5k9                   3/3     Running   14         26d
aci-containers-system   aci-containers-openvswitch-6c26d            1/1     Running   2          26d
aci-containers-system   aci-containers-openvswitch-fmcgw            1/1     Running   1          26d
aci-containers-system   aci-containers-openvswitch-jjpld            1/1     Running   1          26d
aci-containers-system   aci-containers-openvswitch-q5fdl            1/1     Running   2          26d
aci-containers-system   aci-containers-operator-7c898b9877-5vzpg    1/1     Running   2          26d
kube-system             coredns-558bd4d5db-mgff7                    1/1     Running   14         26d
kube-system             coredns-558bd4d5db-mghl6                    1/1     Running   14         26d
kube-system             etcd-k8s-01                                 1/1     Running   2          26d
kube-system             kube-apiserver-k8s-01                       1/1     Running   2          26d
kube-system             kube-controller-manager-k8s-01              1/1     Running   2          26d
kube-system             kube-proxy-8j829                            1/1     Running   2          26d
kube-system             kube-proxy-b8sxr                            1/1     Running   2          26d
kube-system             kube-proxy-pjl6l                            1/1     Running   2          26d
kube-system             kube-proxy-xrgtz                            1/1     Running   2          26d
kube-system             kube-scheduler-k8s-01                       1/1     Running   2          26d
kube-system             metrics-server-dbf765b9b-xpffs              0/1     Running   0
cisco@k8s-01:~$

Below you will find a copy of the changes I made to my Metrics Server deployment to get it running. Comparing against the original manifest, the changes are hostNetwork: true, the added hostPort on the container port, and the --kubelet-insecure-tls flag. You can make the changes with the following command.

kubectl edit deploy -n kube-system metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  annotations: 
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: "{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"metrics-server\"},\"name\":\"metrics-server\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"metrics-server\"}},\"strategy\":{\"rollingUpdate\":{\"maxUnavailable\":0}},\"template\":{\"metadata\":{\"labels\":{\"k8s-app\":\"metrics-server\"}},\"spec\":{\"containers\":[{\"args\":[\"--cert-dir=/tmp\",\"--secure-port=4443\",\"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\"--kubelet-use-node-status-port\",\"--metric-resolution=15s\"],\"image\":\"k8s.gcr.io/metrics-server/metrics-server:v0.5.2\",\"imagePullPolicy\":\"IfNotPresent\",\"livenessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"path\":\"/livez\",\"port\":\"https\",\"scheme\":\"HTTPS\"},\"periodSeconds\":10},\"name\":\"metrics-server\",\"ports\":[{\"containerPort\":4443,\"name\":\"https\",\"protocol\":\"TCP\"}],\"readinessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"path\":\"/readyz\",\"port\":\"https\",\"scheme\":\"HTTPS\"},\"initialDelaySeconds\":20,\"periodSeconds\":10},\"resources\":{\"requests\":{\"cpu\":\"100m\",\"memory\":\"200Mi\"}},\"securityContext\":{\"readOnlyRootFilesystem\":true,\"runAsNonRoot\":true,\"runAsUser\":1000},\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp-dir\"}]}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"priorityClassName\":\"system-cluster-critical\",\"serviceAccountName\":\"metrics-server\",\"volumes\":[{\"emptyDir\":{},\"name\":\"tmp-dir\"}]}}}}\n"
  creationTimestamp: "2021-11-30T23:24:49Z"
  generation: 2
  labels: 
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
  resourceVersion: "3558230"
  uid: 4d05e8a3-49d7-43c6-a719-5e0b9e9d1cd1
spec: 
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector: 
    matchLabels: 
      k8s-app: metrics-server
  strategy:
    rollingUpdate: 
      maxSurge: 25%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata: 
      creationTimestamp: ~
      labels: 
        k8s-app: metrics-server
    spec: 
      containers: 
        - 
          args: 
            - "--cert-dir=/tmp"
            - "--secure-port=4443"
            - "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"
            - "--kubelet-use-node-status-port"
            - "--metric-resolution=15s"
            - "--kubelet-insecure-tls"
          image: "k8s.gcr.io/metrics-server/metrics-server:v0.5.2"
          imagePullPolicy: IfNotPresent
          livenessProbe: 
            failureThreshold: 3
            httpGet: 
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: metrics-server
          ports: 
            - 
              containerPort: 4443
              hostPort: 4443
              name: https
              protocol: TCP
          readinessProbe: 
            failureThreshold: 3
            httpGet: 
              path: /readyz
              port: https
              scheme: HTTPS
            initialDelaySeconds: 20
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: 
            requests: 
              cpu: 100m
              memory: 200Mi
          securityContext: 
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts: 
            - 
              mountPath: /tmp
              name: tmp-dir
      dnsPolicy: ClusterFirst
      hostNetwork: true   # added (this was the key change)
      nodeSelector: 
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: metrics-server
      serviceAccountName: metrics-server
      terminationGracePeriodSeconds: 30
      volumes: 
        - 
          emptyDir: {}
          name: tmp-dir
status: 
  conditions: 
    - 
      lastTransitionTime: "2021-11-30T23:24:49Z"
      lastUpdateTime: "2021-11-30T23:24:49Z"
      message: "Deployment does not have minimum availability."
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    - 
      lastTransitionTime: "2021-11-30T23:24:49Z"
      lastUpdateTime: "2021-11-30T23:29:08Z"
      message: "ReplicaSet \"metrics-server-5b974f8c7f\" is progressing."
      reason: ReplicaSetUpdated
      status: "True"
      type: Progressing
  observedGeneration: 2
  replicas: 2
  unavailableReplicas: 2
  updatedReplicas: 1
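As an alternative to editing the live Deployment interactively, the same changes can be applied non-interactively with a JSON patch. This is only a sketch that assumes the container and port positions match the manifest above; adjust the indexes if your metrics-server version differs.

kubectl patch deploy -n kube-system metrics-server --type=json -p='[
  {"op":"add","path":"/spec/template/spec/hostNetwork","value":true},
  {"op":"add","path":"/spec/template/spec/containers/0/ports/0/hostPort","value":4443},
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}
]'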

Confirm the server is running using the same command we ran previously.

cisco@k8s-01:~$ kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   metrics-server-5b974f8c7f-zqzzd   1/1     Running   0          5m47s
cisco@k8s-01:~$
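As an extra check, you can confirm the Metrics API is actually answering queries. If the server is healthy, kubectl top returns node and pod utilization figures.

kubectl top nodes
kubectl top pods -n kube-system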

Installing an App that Works

While researching and writing this article, I found it very difficult to find an app written in such a way that it reports network data to the AppD Controller. I was fortunate enough to reach out to a colleague, and he provided me with the following app. It does include a load generator, though I was unable to test that out. You can find my application deployment YAML below. It is based on the following GitHub repository if you want to make changes for your environment. This repository also has Helm deployment files if that is your preferred K8s deployment method.

https://github.com/JPedro2/Cloud-Native-Demo/tree/main/smm-1.8.0/multiCloudDeployments/control/Tea-Store

You can copy the below YAML file if you are using the Cisco ACI CNI. I changed the WebUI service to type LoadBalancer so we could make use of the L4-L7 Service Graph integration that Cisco ACI offers to Kubernetes. This allowed me to access the application from an external client.

I deployed the application into the default namespace with the following command.

kubectl create -f <application-deployment.yaml>
cisco@k8s-01:~/teastore$ cat teastore-loadbalancer-ip.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-db
  labels:
    app: teastore-db
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-db
      version: v1
  template:
    metadata:
      labels:
        app: teastore-db
        version: v1
    spec:
      containers:
        - name: teastore-db
          image: descartesresearch/teastore-db
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-db
  labels:
    app: teastore-db
    service: teastore-db
spec:
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: teastore-db
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-registry
  labels:
    app: teastore-registry
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-registry
      version: v1
  template:
    metadata:
      labels:
        app: teastore-registry
        version: v1
    spec:
      containers:
        - name: teastore-registry
          image: brownkw/teastore-registry
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-registry
  labels:
    app: teastore-registry
    service: teastore-registry
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-persistence
  labels:
    framework: java
    app: teastore-persistence
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teastore-persistence
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-persistence
        version: v1
    spec:
      containers:
        - name: teastore-persistence
          image: brownkw/teastore-persistence
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-persistence"
            - name: REGISTRY_HOST
              value: "teastore-registry"
            - name: DB_HOST
              value: "teastore-db"
            - name: DB_PORT
              value: "3306"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-persistence
  labels:
    app: teastore-persistence
    service: teastore-persistence
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-persistence
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-auth
  labels:
    framework: java
    app: teastore-auth
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-auth
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-auth
        version: v1
    spec:
      containers:
        - name: teastore-auth
          image: brownkw/teastore-auth
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-auth"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-auth
  labels:
    app: teastore-auth
    service: teastore-auth
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-auth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-webui-v1
  labels:
    framework: java
    app: teastore-webui
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teastore-webui
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-webui
        version: v1
    spec:
      containers:
        - name: teastore-webui-v1
          image: brownkw/teastore-webui
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 8080
          env:
            - name: HOST_NAME
              value: "teastore-webui"
            - name: REGISTRY_HOST
              value: "teastore-registry"
            - name: PROCESS_PAYMENT
              value: "true"
            - name: VISA_URL
              value: "https://fso-payment-gw-sim.azurewebsites.net/api/payment"
            - name: MASTERCARD_URL
              value: "https://fso-payment-gw-sim.azurewebsites.net/api/payment"
            - name: AMEX_URL
              value: "https://amex-fso-payment-gw-sim.azurewebsites.net/api/payment"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-webui
  labels:
    app: teastore-webui
    service: teastore-webui
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: teastore-webui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-recommender
  labels:
    framework: java
    app: teastore-recommender
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-recommender
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-recommender
        version: v1
    spec:
      containers:
        - name: teastore-recommender
          image: brownkw/teastore-recommender
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-recommender"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-recommender
  labels:
    app: teastore-recommender
    service: teastore-recommender
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-recommender
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-image-v1
  labels:
    framework: java
    app: teastore-image
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-image
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-image
        version: v1
    spec:
      containers:
        - name: teastore-image-v1
          image: brownkw/teastore-image
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-image"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-image
  labels:
    app: teastore-image
    service: teastore-image
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-image

cisco@k8s-01:~/teastore$

If the app deploys successfully, you can navigate to the newly created LoadBalancer service IP. I do not have my Kubernetes cluster integrated with DNS; if you have that in your environment, go to the newly created service URL instead.

cisco@k8s-01:~/teastore$ kubectl get svc
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes             ClusterIP      10.117.0.1     <none>        443/TCP          29d
teastore-auth          ClusterIP      10.117.0.132   <none>        8080/TCP         27h
teastore-db            ClusterIP      10.117.0.193   <none>        3306/TCP         27h
teastore-image         ClusterIP      10.117.0.205   <none>        8080/TCP         27h
teastore-persistence   ClusterIP      10.117.0.180   <none>        8080/TCP         27h
teastore-recommender   ClusterIP      10.117.0.156   <none>        8080/TCP         27h
teastore-registry      ClusterIP      10.117.0.235   <none>        8080/TCP         27h
teastore-webui         LoadBalancer   10.117.0.120   10.114.0.9    8080:30480/TCP   27h
cisco@k8s-01:~/teastore$
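Before moving on, a quick reachability check from an external client never hurts. The IP below is the external IP assigned to teastore-webui in my environment; substitute your own. A 200 (or a redirect code) means the path through the fabric is working.

curl -s -o /dev/null -w '%{http_code}\n' http://10.114.0.9:8080/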
Figure 2: Screenshot of the e-commerce homepage.

AppDynamics Cluster Agent configuration

https://docs.appdynamics.com/21.4/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/install-the-cluster-agent-with-the-kubernetes-cli

Once we have our Kubernetes Metrics Server installed, we are ready to start configuring our AppDynamics cluster agents. The first step for AppDynamics is to create a Kubernetes namespace for our configuration to live in. We can do that with the following command.

cisco@k8s-01:~/appd$ kubectl create ns appdynamics
namespace/appdynamics created

Once created, we can begin configuring the necessary components. You will need to download the cluster agent software from the AppDynamics software download page, transfer that file to wherever you are executing kubectl commands, and unzip it into a directory. I created a directory specifically to extract the zip.

Figure 3: Screenshot showing the cluster agent download on the AppDynamics webpage.
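The transfer-and-extract step looked roughly like this in my environment (the zip filename here is a placeholder; yours will include the version you downloaded):

mkdir -p ~/appd
unzip appdynamics-cluster-agent.zip -d ~/appd
cd ~/appd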

Navigate to your cluster-agent directory and do the following.

cisco@k8s-01:~$ cd appd/
cisco@k8s-01:~/appd$ kubectl create -f cluster-agent-operator.yaml
customresourcedefinition.apiextensions.k8s.io/clusteragents.appdynamics.com created
customresourcedefinition.apiextensions.k8s.io/clustercollectors.appdynamics.com created
customresourcedefinition.apiextensions.k8s.io/infravizs.appdynamics.com created
customresourcedefinition.apiextensions.k8s.io/adams.appdynamics.com created
serviceaccount/appdynamics-operator created
role.rbac.authorization.k8s.io/appdynamics-operator created
rolebinding.rbac.authorization.k8s.io/appdynamics-operator created
deployment.apps/appdynamics-operator created
serviceaccount/appdynamics-cluster-agent created
clusterrole.rbac.authorization.k8s.io/appdynamics-cluster-agent created
clusterrole.rbac.authorization.k8s.io/appdynamics-cluster-agent-instrumentation created
clusterrolebinding.rbac.authorization.k8s.io/appdynamics-cluster-agent created
clusterrolebinding.rbac.authorization.k8s.io/appdynamics-cluster-agent-instrumentation created
cisco@k8s-01:~/appd$

Now that the cluster agent operator is running, we can create a Kubernetes secret to use as the login method for the AppDynamics Controller. To create the secret we will use the following command.

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>

To get this AppDynamics access key, navigate to your AppDynamics Controller GUI and copy the key. You can find it by going to the Gear Icon in the top right -> License -> Account

Figure 4: Screenshot showing the location of the Access Key.
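Once the secret is created you can verify it exists; note that the value is stored base64-encoded, so don't expect to read the key back in plain text.

kubectl -n appdynamics get secret cluster-agent-secret -o yaml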

Once we have successfully created our secret, we can install the rest of the AppDynamics components. First I will create the Machine Agents, also known as Infrastructure Visibility (InfraViz) when talking about monitoring a Kubernetes cluster with AppDynamics. This configuration is controlled with the infraviz.yaml file found in the cluster agent bundle we downloaded from AppDynamics. The YAML file requires some editing for our environment; see a copy of my file below.

cisco@k8s-01:~/appd$ cat infraviz.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: appdynamics-infraviz
  namespace: appdynamics
---
apiVersion: appdynamics.com/v1alpha1
kind: InfraViz
metadata:
  name: appd-infraviz
  namespace: appdynamics
spec:
  controllerUrl: "http://10.0.141.188:8090"
  image: "docker.io/appdynamics/machine-agent-analytics:latest"
  account: "customer1"
  globalAccount: "customer1_da9adba7-c633-4ef4-9997-779e25086ea4"
  enableServerViz: false
  enableContainerHostId: true
  enableMasters: true
  enableDockerViz: false
  netVizImage: appdynamics/machine-agent-netviz:latest
  netVizPort: 3892
  resources:
    limits:
      cpu: 500m
      memory: "1G"
    requests:
      cpu: 200m
      memory: "800M"
cisco@k8s-01:~/appd$

You will need to gather your global account information from the AppD Controller GUI to add to the infraviz.yaml file. It is in the same location as our access key: Gear Icon in the top right -> License -> Account

Figure 5: Screenshot showing the Global Account name in the AppD Controller GUI.

Once the infraviz.yaml file is edited for our environment, we can push the deployment.

cisco@k8s-01:~/appd$ kubectl create -f infraviz.yaml
serviceaccount/appdynamics-infraviz created
infraviz.appdynamics.com/appd-infraviz created
cisco@k8s-01:~/appd$
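InfraViz runs the machine agent as a DaemonSet, one pod per node, which is why four appd-infraviz pods appear in the listing below on my four-node cluster. You can confirm it scheduled on every node with:

kubectl -n appdynamics get daemonset
kubectl -n appdynamics get pods -o wide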

cisco@k8s-01:~/appd$ kubectl get pods -A
NAMESPACE               NAME                                        READY   STATUS              RESTARTS   AGE
aci-containers-system   aci-containers-controller-5c69d6ffb-ftfml   1/1     Running             3          27d
aci-containers-system   aci-containers-host-5jj4f                   3/3     Running             7          27d
aci-containers-system   aci-containers-host-5wv6c                   3/3     Running             13         27d
aci-containers-system   aci-containers-host-kpzj8                   3/3     Running             10         27d
aci-containers-system   aci-containers-host-qg5k9                   3/3     Running             14         27d
aci-containers-system   aci-containers-openvswitch-6c26d            1/1     Running             2          27d
aci-containers-system   aci-containers-openvswitch-fmcgw            1/1     Running             1          27d
aci-containers-system   aci-containers-openvswitch-jjpld            1/1     Running             1          27d
aci-containers-system   aci-containers-openvswitch-q5fdl            1/1     Running             2          27d
aci-containers-system   aci-containers-operator-7c898b9877-5vzpg    1/1     Running             2          27d
appdynamics             appd-infraviz-cl9wd                         2/2     Running             0          6s
appdynamics             appd-infraviz-gghlt                         2/2     Running             0          6s
appdynamics             appd-infraviz-h4pnw                         0/2     ContainerCreating   0          6s
appdynamics             appd-infraviz-hsbwz                         2/2     Running             0          6s
appdynamics             appdynamics-operator-59dd959685-k5z8s       1/1     Running             0          95s
kube-system             coredns-558bd4d5db-mgff7                    1/1     Running             14         27d
kube-system             coredns-558bd4d5db-mghl6                    1/1     Running             14         27d
kube-system             etcd-k8s-01                                 1/1     Running             2          27d
kube-system             kube-apiserver-k8s-01                       1/1     Running             2          27d
kube-system             kube-controller-manager-k8s-01              1/1     Running             2          27d
kube-system             kube-proxy-8j829                            1/1     Running             2          27d
kube-system             kube-proxy-b8sxr                            1/1     Running             2          27d
kube-system             kube-proxy-pjl6l                            1/1     Running             2          27d
kube-system             kube-proxy-xrgtz                            1/1     Running             2          27d
kube-system             kube-scheduler-k8s-01                       1/1     Running             2          27d
kube-system             metrics-server-5b974f8c7f-zqzzd             1/1     Running             0          49m
cisco@k8s-01:~/appd$

Installing AppDynamics Cluster Agent-Application Monitoring

https://docs.appdynamics.com/21.9/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/configure-the-cluster-agent

https://docs.appdynamics.com/21.4/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent

Up to this point we have not installed any application agents. This is because we will be utilizing the AppDynamics Auto-Instrumentation feature to install the agents for us. This configuration is controlled through the cluster-agent.yaml file. Below is an example of the file I used to auto-instrument our application.

cisco@k8s-01:~/teastore$ cat cluster-agent.yaml
apiVersion: appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  #App Name that appears on AppD Controller Cluster Agent Dashboard
  appName: "teastore-app"
  controllerUrl: "http://<IP of AppD Controller>:8090"
  account: "customer1"
  image: "docker.io/appdynamics/cluster-agent:latest"
  serviceAccountName: appdynamics-cluster-agent
  #The namespaces to be monitored in the cluster
  nsToMonitor:
  - "default"
  - "appdynamics"
  #How often Kubernetes warning and state-change events are uploaded to the Controller in seconds
  eventUploadInterval: 10
  #[Dynamically Configurable]
  stdoutLogging: "true"
  #[Dynamically Configurable] Log detail level: INFO, WARNING, DEBUG, or TRACE
  logLevel: "INFO"
  #[Dynamically Configurable] Maximum number of log backups kept. When the maximum number of backups is reached, the oldest log file after the initial log file is deleted.
  logFileBackups: 3
  instrumentationMethod: "Env"
  #Instrument only namespace where tea-store app is deployed (default namespace by default)
  nsToInstrumentRegex: "default"
  #App Name that appears on AppD Controller Cluster Agent Dashboard
  defaultAppName: "teastore-app"
  instrumentationRules:
    - namespaceRegex: "default"
      appName: teastore
      language: java
      labelMatch:
        - framework: java
      imageInfo:
        image: docker.io/appdynamics/java-agent:21.8.0
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
cisco@k8s-01:~/teastore$

Let's call out some of the important fields. appName: is the name of the cluster that will appear in the AppD Controller. You can see it here in our AppD GUI.

Figure 6: Screenshot showing the Kubernetes cluster name in the AppD GUI.

Next we have nsToMonitor:. This list is simply the K8s namespaces that our cluster agent will analyze in order to deploy AppDynamics agents.

Then we come to the application configuration under instrumentationRules:, where we specify the name that will appear under the Applications tab in the AppD GUI (appName:), the namespace our application lives in (namespaceRegex:), and the language running inside our app (language:).

Now we can deploy our AppDynamics cluster agent by using the following command.

kubectl create -f cluster-agent.yaml
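The operator will now spin up the cluster agent pod. A quick way to confirm it came up and registered with the Controller is to check the pod and tail its logs; note that the deployment name here assumes it matches the Clusteragent resource name, so adjust if yours differs.

kubectl -n appdynamics get pods
kubectl -n appdynamics logs deploy/k8s-cluster-agent --tail=20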

You will know that your cluster agent has successfully configured the application agents when you can see the AppDynamics init container image and environment variables when describing your application pod.

cisco@k8s-01:~/teastore$ kubectl describe pod teastore-image-v1-664f874857-qj7wp
Name:         teastore-image-v1-664f874857-qj7wp
Namespace:    default
Priority:     0
Node:         k8s-03/100.100.170.4
Start Time:   Thu, 02 Dec 2021 20:01:37 +0000
Labels:       app=teastore-image
              framework=java
              pod-template-hash=664f874857
              version=v1
Annotations:  APPD_DEPLOYMENT_NAME: teastore-image-v1
              APPD_INSTRUMENTED_CONTAINERS: teastore-image-v1
              APPD_POD_INSTRUMENTATION_STATE: Pending
              APPD_teastore-image-v1_APPNAME: teastore
              APPD_teastore-image-v1_TIERNAME: teastore-image-v1
Status:       Running
IP:           10.113.0.34
IPs:
  IP:           10.113.0.34
Controlled By:  ReplicaSet/teastore-image-v1-664f874857
Init Containers:
  appd-agent-attach-java:
    Container ID:  docker://bedd37b1dc0dab3180cdad01e535c28f1e716652c918aad80ab2c739f284d6af
    Image:         docker.io/appdynamics/java-agent:21.8.0
    Image ID:      docker-pullable://appdynamics/java-agent@sha256:b30c0adf39ebbabfb5543bf3630ef71e54c72c173ac6fe4c498bd3cc6d75f132
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
      -r
      /opt/appdynamics/.
      /opt/appdynamics-java
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 02 Dec 2021 20:01:39 +0000
      Finished:     Thu, 02 Dec 2021 20:01:39 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  75M
    Requests:
      cpu:        100m
      memory:     50M
    Environment:    <none>
    Mounts:
      /opt/appdynamics-java from appd-agent-repo-java (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f964q (ro)
Containers:
  teastore-image-v1:
    Container ID:   docker://d173e77a0045a744bff0343766406f446036ce176dc99e7f5b3f34d01cfc0f7f
    Image:          brownkw/teastore-image
    Image ID:       docker-pullable://brownkw/teastore-image@sha256:6b8aafa49fed2a31ac7d1e3090f3b403059f9bfa16ed35f59f1953e7d0e1246b
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 02 Dec 2021 20:01:41 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY:      <set to the key 'controller-key' in secret 'cluster-agent-secret'>  Optional: false
      HOST_NAME:                                 teastore-image
      REGISTRY_HOST:                             teastore-registry
      JAVA_TOOL_OPTIONS:                          -Dappdynamics.agent.accountAccessKey=$(APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY) -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar
      APPDYNAMICS_CONTROLLER_HOST_NAME:          10.0.141.188
      APPDYNAMICS_CONTROLLER_PORT:               8090
      APPDYNAMICS_CONTROLLER_SSL_ENABLED:        false
      APPDYNAMICS_AGENT_ACCOUNT_NAME:            customer1
      APPDYNAMICS_AGENT_APPLICATION_NAME:        teastore
      APPDYNAMICS_AGENT_TIER_NAME:               teastore-image-v1
      APPDYNAMICS_AGENT_REUSE_NODE_NAME_PREFIX:  teastore-image-v1
      APPDYNAMICS_NETVIZ_AGENT_HOST:              (v1:status.hostIP)
      APPDYNAMICS_NETVIZ_AGENT_PORT:             3892
    Mounts:
      /opt/appdynamics-java from appd-agent-repo-java (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f964q (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  appd-agent-repo-java:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-f964q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  72s   default-scheduler  Successfully assigned default/teastore-image-v1-664f874857-qj7wp to k8s-03
  Normal  Pulling    66s   kubelet            Pulling image "docker.io/appdynamics/java-agent:21.8.0"
  Normal  Pulled     65s   kubelet            Successfully pulled image "docker.io/appdynamics/java-agent:21.8.0" in 526.65323ms
  Normal  Created    65s   kubelet            Created container appd-agent-attach-java
  Normal  Started    65s   kubelet            Started container appd-agent-attach-java
  Normal  Pulling    64s   kubelet            Pulling image "brownkw/teastore-image"
  Normal  Pulled     63s   kubelet            Successfully pulled image "brownkw/teastore-image" in 533.939133ms
  Normal  Created    63s   kubelet            Created container teastore-image-v1
  Normal  Started    63s   kubelet            Started container teastore-image-v1
cisco@k8s-01:~/teastore$

After the AppDynamics monitoring agent images have loaded, we can navigate back to the AppD GUI and take a look at the application monitoring.

Figure 7: Screenshot showing our newly created Java application in the AppD Controller GUI. You can also see the servers that are being monitored.

Now go to the Application Dashboard by double-clicking on the newly created application. You will notice that there is no data being reported. This is because we have not generated any load on the application for the Java agents to report to the AppD GUI and, subsequently, the NDI GUI.

Usually this is done with an application load generator. One tool you could use for that is Locust, and I want to call it out as it is a more enterprise-grade method of application testing. In this guide, however, I simply interacted with the app to generate some load. Since this application is an e-commerce store, I created multiple orders, logged in and out multiple times, and created 404 errors by attempting to access files that don't exist in the app. I list these to give you some ideas.

https://locust.io/
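If you just want quick synthetic traffic without a full load-testing tool, a simple shell loop against the WebUI external IP works as well. This is only a sketch: the IP comes from my environment, and the second URL is a deliberately bogus path used to produce 404s.

for i in $(seq 1 100); do
  curl -s -o /dev/null http://10.114.0.9:8080/
  curl -s -o /dev/null http://10.114.0.9:8080/does-not-exist
  sleep 1
done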

When you have data appearing in the AppD GUI it will look like the screenshot below.

Figure 8: Screenshot showing data appearing in the AppD GUI.

NDI and AppD Reporting Metrics

Looking at the screenshot above, we are seeing application performance data; this is NOT the data that will appear in the NDI GUI. The data that will appear in NDI is located in the Application -> Network Dashboard tab.

Figure 9: Screenshot showing where in the AppD GUI NDI will be receiving data from.

Now if we click on the Network Dashboard, we can see the different metrics for each of the pods in our application.

Figure 10: Screenshot showing where in the AppD GUI NDI will be receiving data.

Now let's take a look at the NDI GUI. If you haven't integrated NDI and AppD yet, take a look at Part 1 of this series. Let's navigate to the Applications section of the NDI GUI, located under the Browse tab.

Figure 11: Screenshot showing where in the NDI GUI we view the AppD information.

Once in the Applications menu, we can see much of the same information that is available in the AppD GUI dashboard.

Figure 12: Screenshot showing the data that was seen in the AppD GUI, now in the NDI GUI.

We can double-click on our application to dive further into the data being reported. There are many different network-related metrics reported to NDI that allow users to analyze where application slowness might be coming from: is it the network, the application, or the server? By combining these two software suites you have the ability to determine where the issue may lie. You can take this monitoring even further by analyzing specific flows with NDI's flow telemetry feature to see if the network slowness is occurring somewhere else in the network path.

Below you will find screenshots of the different metrics reported into the NDI GUI.

Figure 13: Screenshot of Dashboard application statistics.
Figure 14: Screenshot showing the different tier metrics for the app being monitored.
Figure 15: Screenshot of the Application Network Links metrics.
Figure 16: Screenshot showing the different searchable Application anomalies.
Figure 17: Screenshot showing where our application is physically connected in our Cisco ACI fabric.
