This will be a fairly short post about a misconception I had about one of Kubernetes’ core concepts: the relationship between Deployments and Pods, two of the most commonly used resources in the Kubernetes world. My guess is that I am not the only one who started out with this misunderstanding, so there may well be many others dealing with the same issue I did.
In Kubernetes, a Deployment is an abstraction over multiple Pods running in a cluster of compute nodes. A Pod is a set of Linux containers encapsulating an application component. The most characteristic feature of Pods managed by a Deployment is that all of them run the same set of containers with an identical configuration; they are basically undifferentiated, fungible clones of each other. This makes it easy to scale an application by simply adding more of these “clones”, or to make it resilient against the failure of one or more nodes in the cluster. In my opinion, this concept and its well-constructed implementation are among Kubernetes’ most alluring features.
A Broken Deployment
The YAML manifest below describes a simple Deployment of an NGINX web server. However, as you will notice later on, the Deployment refers to another resource that we will not create before applying it. Exactly this will lead to the error condition I want to use to demonstrate the troubleshooting. Save the manifest below as a file named my-deployment.yaml so that we can apply it later on.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: my-service-account
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```
Now let’s create a fresh namespace for our experiment and apply the manifest above:
```shell
# Create the namespace
kubectl create namespace my-experiment

# Switch the current context so that we don't have to mention the namespace in each command
kubectl config set-context --current --namespace=my-experiment

# Create the deployment
kubectl apply -f my-deployment.yaml
```
The typical train of thought now is: the Deployment has been applied, so the Kubernetes controllers will take care of creating three NGINX Pods, as defined in our manifest. So let’s check with
```shell
kubectl get pods
```
```
No resources found in my-experiment namespace.
```
Argh! Where are the Pods? We did everything right, but why aren’t they showing up? And if something is wrong, shouldn’t there be a Pod with an Error status?
Looking for a Solution
Let’s first check the status of the Deployment itself using
```shell
kubectl get deployment my-deployment
```
```
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   0/3     0            0           8m25s
```
Hmm. No Pods ready and none of them available. We need more details. Run
```shell
kubectl describe deployment my-deployment
```
```
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  10m   deployment-controller  Scaled up replica set my-deployment-679fbb77 to 3
```
The Deployment is fine: no warnings, no errors. From a conceptual perspective, this is actually correct. But why? Because the Deployment does not create the Pods by itself; instead, it creates a ReplicaSet. The actual Pods are not managed by the Deployment, but by this ReplicaSet.
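You can even see this parent-child relationship in the generated ReplicaSet itself: it carries an ownerReference pointing back at the Deployment. A heavily abbreviated sketch of what that looks like (the hash suffix and most fields will differ on your cluster):

```yaml
# Abbreviated and illustrative; on a live cluster you would fetch it with:
#   kubectl get replicaset my-deployment-679fbb77 -o yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-deployment-679fbb77   # name derived from the Deployment plus a hash
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: my-deployment          # the owning Deployment
    controller: true
spec:
  replicas: 3
```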
The event above says: Scaled up replica set my-deployment-679fbb77 to 3. This is why we will now take a more detailed look at this ReplicaSet using:
```shell
kubectl describe replicaset my-deployment-679fbb77
```
```
Events:
  Type     Reason        Age                  From                   Message
  ----     ------        ----                 ----                   -------
  Warning  FailedCreate  7m6s (x18 over 18m)  replicaset-controller  Error creating: pods "my-deployment-679fbb77-" is forbidden: error looking up service account my-experiment/my-service-account: serviceaccount "my-service-account" not found
```
There it is! The ReplicaSet controller complains about a missing ServiceAccount. Since the ServiceAccount does not exist, the controller cannot even create the required Pods. This also explains why our search for Pods came up empty: they were never created in the first place, not even in a failed state.
But wait: why is there even a ServiceAccount in the mix? Because I added one in the original Deployment. Now that we know the problem, we can easily solve it.
In our specific case, a ServiceAccount wouldn’t be necessary for deploying NGINX, but since the Deployment refers to one, we need to make this account available. This is where it is referenced:
```yaml
[...]
    spec:
      serviceAccountName: my-service-account
[...]
```
You could also delete the line mentioning the serviceAccountName, but we will instead solve this by creating a simple ServiceAccount without any specific RoleBindings. This can be achieved with a single command:
```shell
kubectl create serviceaccount my-service-account
```
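If you prefer the declarative route, the same account can be created from a manifest. A minimal sketch, equivalent to the command above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-experiment
```

Apply it with kubectl apply -f, just like the Deployment.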
Meanwhile, the ReplicaSet controller has already backed off from trying to create the Pods; it retries with increasing delays, so it could take a while until the next attempt. Let’s give it a kick in the butt with an explicit restart:
```shell
kubectl rollout restart deployment my-deployment
```
Let’s now check with another kubectl get pods whether the Pods are deployed:
```
NAME                             READY   STATUS    RESTARTS   AGE
my-deployment-84dbcb5f94-psb9x   1/1     Running   0          28s
my-deployment-84dbcb5f94-jx2q8   1/1     Running   0          15s
my-deployment-84dbcb5f94-mxfrt   1/1     Running   0          13s
```
Now things are rolling!
Kubernetes Events Are Often Helpful
In general, if you don’t know where exactly to check for events, it can be helpful to list all events logged in your cluster using
```shell
kubectl get events
```
```
LAST SEEN   TYPE      REASON              OBJECT                              MESSAGE
72s         Normal    ScalingReplicaSet   deployment/my-deployment            Scaled up replica set my-deployment-679fbb77 to 3
31s         Warning   FailedCreate        replicaset/my-deployment-679fbb77   Error creating: pods "my-deployment-679fbb77-" is forbidden: error looking up service account my-experiment/my-service-account: serviceaccount "my-service-account" not found
```
Keep in mind: creating your resource in a specific namespace may trigger the automatic creation of other resources that are not namespace-scoped. A typical example: when you use Dynamic Volume Provisioning to automatically provision storage for each of your Pods, the cluster will create PersistentVolumes. PersistentVolumes are not tied to a specific namespace but are a cluster-wide resource. In such cases it can be helpful to add the -A flag when querying for events, in order to see everything going on across the cluster.
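For example, a command sketch along these lines lists events from all namespaces, sorted by time (this assumes a live cluster; --sort-by takes a JSONPath expression into the event object):

```shell
# List events across all namespaces, oldest first
kubectl get events -A --sort-by=.lastTimestamp
```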
For me personally, “learning by doing” has been a great experience in the Kubernetes world so far, but you have to make sure to grasp the core concepts from the bottom up. Doing so earlier would certainly have spared me some extensive googling. The Kubernetes documentation does not just describe an API; it explains all the concepts and the relations between the different entities in the K8s universe.
Enjoy your day and see you soon!