Kubernetes 101 — Architecture & Networking

Yitaek Hwang
7 min read · Jun 29, 2023


Kubernetes for Developers: Part IV

Photo by Growtika on Unsplash

Last week, we started learning about Kubernetes basics, going over what pods, nodes, and pod controllers are. Now that we know what runs on Kubernetes (i.e. pods), we’ll dive into the high-level architecture of Kubernetes and how those pods can talk to one another. In my experience, Kubernetes networking always tends to confuse developers. But once you understand how Docker networking works, you’ll see parallels and have an easier time grasping those concepts (if you haven’t read Part II, now would be a good time to review).

So without further ado, let’s dive in!

This series is brought to you by DevZero.

DevZero’s platform provides cloud-hosted development environments. With DevZero, you can reuse your existing infrastructure configuration to configure production-like environments for development and testing.

Check out DevZero at devzero.io to get started today!

Kubernetes Architecture

Last week, we learned that pods are the smallest deployable unit in Kubernetes. On the other end of the spectrum, a Kubernetes cluster encompasses all of the components running on one or more nodes.

At a high level, Kubernetes is composed of three sets of components:

  • Control plane components: API server, etcd, kube-scheduler, kube-controller-manager
  • Node components: kubelet, kube-proxy, container runtime
  • Add-ons: DNS, networking plugins, etc

The Kubernetes documentation has a great diagram (copied below) and explanation for all of these components. I won’t bore you by detailing each one, but I’ll list the key concepts and takeaways.

Image credit: Kubernetes documentation
  1. Kubernetes uses a “hub-and-spoke” API pattern, with the API server on the control plane acting as the hub that all requests go through.
  2. Remember the controller pattern from pod controllers (e.g. Deployment, StatefulSet, DaemonSet)? The control plane also runs controllers (via kube-controller-manager) to manage other parts of the cluster, such as nodes, jobs, and endpoints.
  3. Kube-scheduler assigns pods to nodes, taking into account factors like resource requests and constraints such as affinity rules or taints.
  4. On each node, we have an agent called kubelet that ensures that all the containers in a pod are running as specified.
  5. Networking is handled with the help of kube-proxy. This component runs on each node and maintains the rules that forward traffic to and from your pods, enabling communication within the cluster.

Remember how back in Part II we talked about why we need a container orchestrator? All of the components mentioned above work together to provide those benefits, like service discovery (kube-proxy, DNS), self-healing (kubelet, kube-controller-manager, kube-scheduler), and more.

Finally, if you only take one thing away, remember this: the controller pattern underpins the key tenets of the Kubernetes architecture and design. Kubernetes makes use of lots of controllers at every level to take in the desired state and monitor it constantly to maintain that state. So the more you define what the “desired” state should be, the more Kubernetes can do for you in return.

Kubernetes Networking

Now onto the topic that usually trips up a lot of engineers: networking.

Before we talk about it in detail though, let’s take a step back and set the stage. In Part II, we looked at how Docker handles networking, namely via bridge and host networks. When we created our own bridge network, we saw that we were able to reach other containers by name:
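
It looked roughly like this (the network and container names here are placeholders, not necessarily the exact ones from Part II):

```shell
# Create a user-defined bridge network
docker network create my-bridge

# Attach two containers to it
docker run -d --name web --network my-bridge nginx
docker run -d --name client --network my-bridge busybox sleep 3600

# From "client", we can reach "web" by container name thanks to
# Docker's embedded DNS on user-defined bridge networks
docker exec client wget -qO- http://web
```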

With Kubernetes, we are now working with pods that may or may not be on the same node. Also, we are typically working at a larger scale than with plain Docker, so it would be nice to have service discovery for all the available endpoints in the same cluster. From our overview of the Kubernetes architecture, we know that Kubernetes has some components (i.e., kube-proxy, DNS services) that lay the groundwork, but how does it actually work?

Services

Enter Kubernetes services.

Kubernetes services are an abstraction layer that defines a logical set of endpoints to access pods. Think of it like an intra-cluster load balancer in front of your pods.

Let’s take a look at an example to illustrate. Here we have a simple deployment for nginx with 5 replicas. Notice that we have opened up port 80 with the name http:
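
(The manifest below is a representative reconstruction; the deployment name and image tag are stand-ins, while the app: nginx label and the http port name match what the service relies on.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # stand-in name
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # stand-in image tag
          ports:
            - name: http        # named port referenced by the service's targetPort
              containerPort: 80
```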

Since nginx is stateless, we want to evenly distribute network calls to our five nginx pods. To do so, we create a service:
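
(This is a reconstruction consistent with the fields called out in the bullets below; the original embedded manifest isn’t shown here.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx         # matches the label on our nginx pods
  ports:
    - name: http
      port: 8080       # the port exposed to other pods in the cluster
      targetPort: http # resolves to the named containerPort (80) on the nginx pods
```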

The important parts to note here are:

  • name: nginx-service is what we called our service
  • selector: we matched the label key-value pair of app: nginx to select our pods
  • targetPort: we are targeting our named port http which maps to port 80 on our nginx pods
  • port: this is the port we want to expose to others calling our service

Now other pods can call nginx-service:8080 to talk to our nginx pods.
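
If you want to verify this yourself, a quick one-off pod works well (the pod name here is arbitrary):

```shell
# Spin up a temporary busybox pod and hit the service by name
kubectl run tmp-client --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://nginx-service:8080
```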

NOTE: for stateful applications, you need to use a headless service to address the specific pod you want to talk to. This is an advanced topic, so I’ll link the documentation for further reading, but know that the basic concepts still apply.

Service Types

Kubernetes actually has four different types of services. In our nginx example, we didn’t specify a type, so we implicitly created a service of type ClusterIP. This exposes our application to others in the cluster, but what if we want to expose it to clients outside the cluster as well (e.g., an external-facing API service, a frontend, etc.)?

This is where our other three types of services come into play:

  • NodePort: Exposes the service on a static port on each node in the cluster. You can either specify this port number in the service definition or let Kubernetes assign one that is not already taken (see the sketch after this list).
  • LoadBalancer: Exposes the service behind an external load balancer. Kubernetes does not ship with a load balancer implementation natively, so you must use an external one like a cloud provider’s product or MetalLB.
  • ExternalName: Maps the service to an external hostname by having the cluster DNS return a CNAME record (e.g., resolving our nginx service to nginx.example.com).
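
Here is that NodePort sketch, reusing our nginx example (the service name and nodePort value are arbitrary choices for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport   # hypothetical name for this sketch
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 8080
      targetPort: http
      nodePort: 30080            # must fall within the node port range (30000-32767 by default)
```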

In practice, you will most likely only use ClusterIP and NodePort. That’s because the clusters provisioned and managed for you by a cloud provider or your infrastructure team will usually use an ingress controller to set up a reverse proxy into your cluster. Cloud provider ingress products like the AWS ingress controllers (i.e., ALB, NLB) require the underlying services to be of type NodePort. If you’re using a third-party ingress controller like NGINX, Traefik, or Kong, then you can expose ClusterIP services behind an ingress.
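
As a rough sketch (the hostname and ingress class below are placeholders that depend on your setup), an Ingress routing to our nginx-service might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress            # hypothetical name
spec:
  ingressClassName: nginx        # depends on which ingress controller is installed
  rules:
    - host: nginx.example.com    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 8080
```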

Note: For an in-depth guide on Kubernetes Ingress Controllers, you can read my primer.

DNS Resolution

There are a few other nuances that you should be aware of in terms of networking. Going back to our nginx example, I said that other pods can call `nginx-service:8080` to talk to our application. To be more precise though, only pods in the same namespace as our nginx deployment can call `nginx-service:8080`.

A namespace in Kubernetes is a software-defined mechanism for isolating Kubernetes resources. Think of it as a way to organize resources or implement soft multi-tenancy within the cluster (e.g., you can have a namespace per team, per engineer, or per CI run).

Some Kubernetes objects are scoped to a namespace, like deployments and services, while others are cluster-wide, like nodes or storage classes. By default, Kubernetes allows communication between namespaces, but in order to reach a service in another namespace, you need to use a fully qualified domain name (FQDN) in the form of <service-name>.<namespace-name>.svc.cluster.local.
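
If you’re ever unsure which objects are namespaced and which are cluster-wide, kubectl can tell you:

```shell
# Resources that live inside a namespace (deployments, services, pods, ...)
kubectl api-resources --namespaced=true

# Cluster-scoped resources (nodes, storage classes, namespaces themselves, ...)
kubectl api-resources --namespaced=false
```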

Let’s say that our original nginx deployment is running in test-ns-1. If we are calling it from within test-ns-1, we can use nginx-service:8080 or the FQDN nginx-service.test-ns-1.svc.cluster.local:8080. If we are calling it from a different namespace, we need to use the FQDN: nginx-service.test-ns-1.svc.cluster.local:8080. Note that Kubernetes DNS can also resolve partially qualified names, so calling nginx-service.test-ns-1:8080 will work too.
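
From a shell inside a pod (assuming it has curl available), that looks like:

```shell
# From a pod in the same namespace (test-ns-1), all of these resolve:
curl -s http://nginx-service:8080
curl -s http://nginx-service.test-ns-1:8080
curl -s http://nginx-service.test-ns-1.svc.cluster.local:8080

# From a pod in a different namespace, the short name alone won't resolve,
# so include the namespace (or the full FQDN):
curl -s http://nginx-service.test-ns-1:8080
```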

Running Locally

Finally, one other place where developers get confused with Kubernetes networking is when running locally via a small distribution like minikube or kind. Developers used to Docker’s host networking are often frustrated that they can’t just pass a `-p` flag to map everything to localhost.

There are a few ways around this:

  • You can configure your Kubernetes distribution to map certain ports or port ranges to localhost at startup (this may not work on macOS).
  • Use an add-on like minikube’s ingress addon, or helper commands like minikube tunnel, to expose specific ports opened by your service. The exact mechanism depends on the Kubernetes distribution.
  • Or use Kubernetes port-forwarding to mimic Docker’s `-p` host port mapping behavior, as shown below.
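
A minimal example of the port-forwarding approach, reusing the nginx-service from earlier:

```shell
# Forward local port 8080 to our nginx-service (runs until you Ctrl+C)
kubectl port-forward service/nginx-service 8080:8080

# In another terminal, the service is now reachable on localhost,
# much like `docker run -p 8080:80 ...` would behave
curl -s http://localhost:8080
```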

Wrapping Up

We covered a lot of ground in today’s lesson. But I hope you were able to see how concepts from Docker translate to Kubernetes. The two main takeaways to remember are 1) a lot of Kubernetes components use the controller pattern, and 2) services are an abstraction provided by Kubernetes to allow communication to and from pods.

Before jumping into more practical matters, we have one more topic to cover: resource management and scheduling. As a developer, you may not be setting these values and behavior directly, but knowing about them helps you understand and debug failures much faster.

Stay tuned for our next lesson!


Yitaek Hwang

Software Engineer at NYDIG writing about cloud, DevOps/SRE, and crypto topics: https://yitaekhwang.com