Jamf Pro & Kubernetes

Have you ever wanted to spend hours torturing yourself? Containerizing apps and orchestrating them with Kubernetes is the perfect way to do so! Luckily for you, I already spent several hours torturing myself to figure this out!


Prerequisites

  • Kubernetes Cluster with persistent storage
  • Container network plugin (Flannel, Calico, etc)
  • A database (either as a container or external to the cluster)
  • Jamf Pro subscription
  • Jamf Pro container image – GitHub and Docker Hub (a base image and a more recently updated image) both host containers built by Jamf along with some light documentation.

How does it all work?

Jamf Pro Namespace

The Namespace is used to manage the Jamf Pro app and the Tomcat service together. You can also set resource limits for the namespace in addition to the resource limits of the deployment. The resource limits I set here are very beefy compared to what Kubernetes apps normally would have because Jamf Pro is a beefy web app.

# Deploy Jamf Pro namespace
apiVersion: v1
kind: Namespace
metadata:
  name: jamf-pro
  labels:
    name: jamf-pro
---
# Namespace resource limits
apiVersion: v1
kind: ResourceQuota
metadata:
  name: jamf-pro
  namespace: jamf-pro
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 4Gi
    limits.cpu: "16"
    limits.memory: 16Gi
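
Applying these is just a standard kubectl apply. The filename below is only an example for wherever you saved the two documents above:

# Apply the namespace and quota manifests (filename is just an example)
kubectl apply -f jamf-pro-namespace.yaml

# Confirm the quota exists in the new namespace
kubectl get resourcequota -n jamf-pro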

Tomcat Service

This service is used to allow connections to the Jamf Pro web app from outside of the cluster. Calling it the Tomcat service is a bit of a misnomer, I will admit, as Tomcat is actually running in the pods under the app deployment. You can call it whatever you want in your cluster, though.

I use the NodePort type of service because my cluster is self-hosted and I have my own load balancer. If you’re using a cloud Kubernetes cluster and not self-hosting like I am, this would probably be better as a LoadBalancer type of service (see the sketch after the manifest below).

You can set nodePort to any port that Kubernetes supports (30000-32767); targetPort, however, should be set to the same port as containerPort in the Deployment manifest, which in my case is port 8080. The regular port field is still required by the Service spec, so I set it to 8080 as well.

You can also set your own labels to better target the pods that should accept external connections, or just to keep everything organized.

# Deploy service to allow connections to Jamf Pro pods externally
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: jamf-pro
  labels:
    app: jamf-pro
spec:
  type: NodePort
  ports:
  - name: "https"
    nodePort: 31621
    targetPort: 8080
    protocol: TCP
  selector:
    app: jamf-pro
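
If you do go the LoadBalancer route on a cloud cluster, the manifest only changes slightly. This is just a sketch I haven't run on my own cluster; the cloud provider assigns the external address, and the port numbers simply mirror my NodePort manifest above.

# LoadBalancer variant of the Tomcat service (sketch, untested on my self-hosted cluster)
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: jamf-pro
  labels:
    app: jamf-pro
spec:
  type: LoadBalancer
  ports:
  - name: "https"
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: jamf-pro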

Jamf Pro Deployment

This is the important part! This was difficult to piece together because there just isn’t any documentation on getting Jamf Pro to work in Kubernetes. Further trouble comes from the fact that the Jamf Pro container images do not include the ROOT.war file needed to build the web app. You can either:

  1. Create your own container image with a ROOT.war file or a pre-populated /usr/bin/tomcat/webapps/ROOT directory inside the container and upload it to Docker Hub for Kubernetes to pull later (see the Dockerfile sketch after this list).
  2. Use the base image and bind mount the ROOT.war file to the container.
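
If you go with option 1, a minimal Dockerfile might look something like the sketch below. I'm assuming the jamf/jamfpro base image from Docker Hub and that its startup script picks up the WAR from /data/ROOT.war (the same path the bind mount uses later); adjust for whichever image and path you actually use.

# Sketch of a custom image (option 1)
# Assumes the jamf/jamfpro base image and that its startup script deploys /data/ROOT.war
FROM jamf/jamfpro:latest

# Copy your downloaded ROOT.war to the location the startup script expects
COPY ROOT.war /data/ROOT.war

Build it, push it to Docker Hub or a private registry, and point the image field of the Deployment below at it.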

I don’t want to have to deal with constantly updating a custom image, so I chose to bind mount the ROOT.war file into the container. The trouble with this, though, is that Kubernetes doesn’t have a way to bind mount! This means I implemented it with a technically insecure method, but this is not a production deployment so I don’t really have to worry about it.

hostPath is a way to allow containers access to the host filesystem. Anyone familiar with Kubernetes does not like this – containers are supposed to stay, well, contained. This can allow malicious actors access to the host through the container. If you do things properly though, like create a separate partition for the exposed file(s) with strict filesystem restrictions set in /etc/fstab, chroot the directory, and generally follow other good security practices, this should be secure enough. If you’re more worried about security, you’ll just have to juggle updating your internal container image with each new release of Jamf Pro and/or an update to the base container image.
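
As a rough illustration of the fstab side of that, a dedicated partition holding the WAR could be mounted with restrictive options like these (the device and mount point are placeholders for whatever you actually use):

# Example /etc/fstab entry -- device and mount point are placeholders
# ro, noexec, nosuid, and nodev keep the exposed path as locked down as practical
/dev/sdb1   /srv/jamf-webapp   ext4   ro,noexec,nosuid,nodev   0 2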

In order to use hostPath, there are two things that must be included in the manifest:

# This is configured under the container spec (alongside the environment (env) variables) and the mountPath MUST point to /data/ROOT.war.

volumeMounts:
  - name: webapp
    mountPath: /data/ROOT.war
# This is configured under the pod spec (template.spec) section. This must point to a directory and file which exists on EVERY worker node. I've had mixed results with using an SMB/CIFS share and don't generally recommend it (yet).

volumes:
  - name: webapp
    hostPath:
      # Change to where you have your ROOT.war saved on each worker node.
      path: /path/to/ROOT.war

Along with figuring out how to get the ROOT.war file into the container, we also have to pass several environment variables to it. The Jamf GitHub page for the container image shows that we can set the following environment variables:

STDOUT_LOGGING [ true ] / false

DATABASE_HOST [ localhost ]
DATABASE_NAME [ jamfsoftware ]
DATABASE_USERNAME [ jamfsoftware ]
DATABASE_PASSWORD [ jamfsw03 ]
DATABASE_PORT [ 3306 ]

JMXREMOTE true / [ false ]
JMXREMOTE_PORT
JMXREMOTE_RMI_PORT
JMXREMOTE_SSL
JMXREMOTE_AUTHENTICATE
RMI_SERVER_HOSTNAME
JMXREMOTE_PASSWORD_FILE

CATALINA_OPTS
JAVA_OPTS [ -Djava.awt.headless=true ]

PRIMARY_NODE_NAME -- Enable clustering
  This MUST be the ip address of the primary as recognized by Tomcat
  There is no direct JamfPro primary <--> secondary communication so the ip need not be reachable by the secondary directly

POD_NAME -- Enable Kubernetes clustering via downward API
POD_IP -- Enable Kubernetes clustering via downward API

MEMCACHED_HOST -- Enable Memcached caching, assumes port 11211 by default

My example manifest below shows how to use some of these environment variables. The most important of these are the variables describing the database. I chose to use an external MySQL database because I like to have a central database server for my projects, but you can also use a containerized database with a persistent volume. If you choose an external database and aren’t lazy with Kubernetes network rules like I am, you will have to make considerations for the network connection from the pod to an external resource. If you are also lazy or just testing things out, this should Just Work™️ as is. You can use either an IP address or an FQDN here, but if you use an FQDN your pods will of course have to be able to resolve the address.
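
If you do want to be explicit about that pod-to-database connection and your network plugin enforces NetworkPolicy (Calico does, Flannel on its own does not), an egress rule could look roughly like the sketch below. The CIDR is a placeholder for wherever your database server lives, and note that once an egress policy selects a pod, anything not listed (DNS included) is denied, so you may need additional rules.

# Sketch of an egress policy letting the Jamf Pro pods reach an external MySQL server
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mysql-egress
  namespace: jamf-pro
spec:
  podSelector:
    matchLabels:
      app: jamf-pro
  policyTypes:
    - Egress
  egress:
    - to:
        # Placeholder -- replace with your database server's address
        - ipBlock:
            cidr: 203.0.113.10/32
      ports:
        - protocol: TCP
          port: 3306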

Here’s an example of my deployment manifest putting all of this together:

# Jamf Pro deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jamf-pro
  namespace: jamf-pro
  labels:
    app: jamf-pro
spec:
  replicas: 2
  selector:
    matchLabels:
      app: jamf-pro
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jamf-pro
    spec:
      containers:
        - env:
            # Change to your MySQL requirements
            - name: DATABASE_HOST
              value: "database.domain.net"
            - name: DATABASE_NAME
              value: "jamfsoftware"
            - name: DATABASE_USERNAME
              value: "jamfsoftware"
            - name: DATABASE_PASSWORD
              value: "changeit"
            - name: DATABASE_PORT
              value: "3306"
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          name: jamf-pro
          image: jamf/jamfpro:latest
          ports:
            - containerPort: 8080
              name: tomcat-web-svc
          resources:
            requests:
              memory: "1Gi"
              cpu: "1"
            limits:
              memory: "4Gi"
              cpu: "4"
          volumeMounts:
            - name: webapp
              mountPath: /data/ROOT.war
      restartPolicy: Always
      volumes:
        - name: webapp
          hostPath:
            # Change to where you have your ROOT.war saved
            path: /path/to/ROOT.war
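
Once the manifest is applied, a couple of standard kubectl commands will tell you whether the rollout actually worked (they're also where the pod names used in the troubleshooting commands below come from):

# Watch the pods come up
kubectl get pods -n jamf-pro

# Wait for the deployment to finish rolling out
kubectl rollout status deployment/jamf-pro -n jamf-pro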

It’s all gone wrong! Help!

Yeah, that happens with Kubernetes. The most common issues you will run into are that the database cannot be reached, or that the ROOT.war file or the /usr/bin/tomcat/webapps/ROOT directory inside the container does not exist. Other common issues are that you set the resource limits wrong, you can’t reach the pod from outside the cluster, or the cluster network or containers themselves are failing/unstable.

Here are some examples of common issues and how to find logs for them:

  • If the database connection fails, you will see this at the beginning of the kubectl logs result. This log is actually the Catalina log from inside the container.

kubectl logs -n namespace-name jamf-pro-pod-name

kubectl logs -n jamf-pro jamf-pro-7bc189cd58-lbehp

  • If the ROOT.war file cannot be found or mounted inside the container, the kubectl describe results will show this in the events section.

kubectl describe pod -n namespace-name jamf-pro-pod-name

kubectl describe pod -n jamf-pro jamf-pro-7bc189cd58-lbehp

  • If there are issues with the Jamf Pro web app itself, you can use the following command to copy the logs from the container.

kubectl cp namespace-name/jamf-pro-pod-name:/usr/local/tomcat/logs /path/on/host

kubectl cp jamf-pro/jamf-pro-7bc189cd58-lbehp:/usr/local/tomcat/logs /home/shichi/jamf-logs
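
  • If the pods look healthy but you can’t reach the web app from outside the cluster, check the Service and its endpoints. An empty endpoints list means the selector isn’t matching any pods.

kubectl get svc -n namespace-name service-name
kubectl get endpoints -n namespace-name service-name

kubectl get svc -n jamf-pro tomcat
kubectl get endpoints -n jamf-pro tomcat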


Take a look at mine!

I am currently running Jamf Pro in a container orchestrated by Kubernetes and am slowly learning more about how to deploy and secure the app. I don’t trust the internet with login information so you’ll just have to marvel at the login page: https://kubedjamf.rubyraccoon.net