Advanced Kubernetes Topics

Yitaek Hwang
Jul 24, 2023 · 5 min read

Kubernetes for Developers: Part VIII

Photo by Michael Dziedzic on Unsplash

Welcome to Part VIII and the final installment of the “Kubernetes for Developers” series!

For the last eight weeks, we’ve been diving deep into Kubernetes. We started with Docker and Kubernetes fundamentals and ended last week with a look at the state of Kubernetes. For the final piece in the series, we’ll wrap up with some advanced topics that you’ll most likely run into in your day-to-day work.

As always, let’s get started!

This series is brought to you by DevZero.

DevZero’s platform provides cloud-hosted development environments. With DevZero, you can reuse your existing infrastructure configuration to configure production-like environments for development and testing.

Check out DevZero to get started today!

Service Mesh

Technically speaking, a service mesh is not a Kubernetes topic. However, I’m including it here because 1) service meshes are commonly used alongside Kubernetes, and 2) understanding service meshes at a high level makes it easier to reason about the problems microservices face at scale.

So what is a service mesh?

It is a mesh of proxy components that run alongside your application (most likely as sidecars in Kubernetes) and offload much of the networking responsibility from the application. Like Kubernetes, a service mesh has a control plane and a data plane. At a high level, the control plane exposes an API and coordinates the lifecycle of the proxies in the data plane. The data plane, in turn, manages network calls between services.
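The split described above can be sketched in a few lines. This is an illustrative toy, not a real mesh API: a control plane holds routing configuration and pushes it down to sidecar proxies, which then resolve and forward service calls using that configuration. All class and method names here are hypothetical.

```python
# Hypothetical sketch of the control plane / data plane split in a service mesh.
class ControlPlane:
    """Holds routing config and pushes it to registered data-plane proxies."""
    def __init__(self):
        self.routes = {}
        self.proxies = []

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.routes = dict(self.routes)  # seed the proxy with current config

    def set_route(self, service, address):
        self.routes[service] = address
        for proxy in self.proxies:  # push updated config to the data plane
            proxy.routes[service] = address

class SidecarProxy:
    """Data-plane component: forwards calls using control-plane config."""
    def __init__(self):
        self.routes = {}

    def forward(self, service):
        # The application talks to the proxy; the proxy resolves the address.
        return f"forwarded to {self.routes[service]}"

cp = ControlPlane()
proxy = SidecarProxy()
cp.register(proxy)
cp.set_route("payments", "10.0.0.7:8080")
```

Note that the application never sees the routing table; it only talks to its local proxy, which is exactly the decoupling a real mesh provides.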

The key thing to know here is that the proxy components in a service mesh facilitate communication between services. This is in contrast with other networking components, like ingresses or API gateways, which handle networking calls from outside the cluster to internal services. The two most popular service mesh implementations are Istio (which uses the Envoy proxy underneath) and Linkerd (which uses its own linkerd2-proxy).

If Kubernetes already provides automatic service discovery and routing via kube-proxy, you may be wondering why a service mesh is needed. It may even look like a bad design choice to add a proxy per service (or per node, depending on the implementation), since doing so adds latency and maintenance burden.

To understand the benefits of a service mesh, we have to look at the complexities of running a lot of services at scale. As you add in more services, managing how those services talk to one another seamlessly starts to become challenging. In fact, there’s a lot of “stuff” that needs to happen to make sure everything “works”. These things include:

  • Security: encrypting messages in transit (e.g., mTLS), access control (i.e., service-to-service authorization)
  • Traffic management: load balancing, routing, retries, timeouts, traffic splitting, etc.
  • Observability: tracking latency/errors/saturation, visualizing service topology, etc.

These are features that all your services can benefit from and share, so offloading these capabilities to the proxy layer that sits in between services helps to decouple these “operational” concerns from your business logic.

As a developer, this decoupling of operational logic from service logic is where you’ll see the most benefit. Most commonly, a service mesh obviates the need to generate self-signed certificates and implement TLS between services at the code level. Instead, you can offload that logic to the service mesh and have the behavior enforced at the mesh layer.

As an added benefit, this makes the lives of our infra/ops friends easier as well. They now have a central platform to control these behaviors. With more and more cloud providers offering a managed service mesh alongside Kubernetes, you’ll probably run into service meshes far more often in the future.

Operator Pattern

One of the best parts of the overall Kubernetes design is its extensibility. Beyond the Kubernetes resources we learned about in previous lessons (e.g., pods, services, namespaces, etc.), Kubernetes allows custom resources to be defined and added via the Kubernetes API. To manage these resources via the familiar controller pattern, Kubernetes also supports custom controllers. The operator pattern combines custom resources and custom controllers into an operator.

Operators are useful when the existing Kubernetes resources and controllers cannot quite support your application behaviors robustly. For example, you may have a complex application with the following set of features:

  • Generates secrets via some distributed key-generation protocol.
  • Uses a consensus algorithm to elect a new leader on failover.
  • Uses nonstandard protocols to communicate with other services.
  • Requires a series of interdependent steps to upgrade.

Native Kubernetes resources and APIs do not provide a sufficient way to accomplish the above. In this case, you can define your own custom resources and controller actions (e.g., key generation, leader election, etc.) to accomplish what you need.
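At the heart of every operator is a reconcile loop: compare the desired state declared in a custom resource’s spec with the state observed in the cluster, and compute actions to converge the two. The sketch below illustrates that loop with plain dictionaries; it is not a real Kubernetes API, and all names are illustrative.

```python
# Illustrative sketch of the reconcile loop behind the operator pattern.
def reconcile(desired, observed):
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))   # resource missing entirely
        elif observed[name] != spec:
            actions.append(("update", name, spec))   # resource drifted from spec
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))   # resource no longer wanted
    return actions

# The custom resource asks for 3 replicas; the cluster runs 1, plus a leftover.
desired = {"mydb": {"replicas": 3, "version": "1.2"}}
observed = {"mydb": {"replicas": 1, "version": "1.2"}, "orphan": {"replicas": 1}}
actions = reconcile(desired, observed)
# actions: update "mydb" to 3 replicas, delete "orphan"
```

A real controller runs this loop continuously against the API server (watching for changes), and the custom actions it takes (key generation, leader election, staged upgrades) are where an operator encodes the domain knowledge a human operator would otherwise apply by hand.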

In practice, unless you are tasked with creating operators for your own service, you’ll most likely use operators that others have created. The most common ones include database operators, monitoring/logging components, and CI/CD components. These operators may not be as complex as the hypothetical example above, but under the hood they group certain resources together and define actions that make operating those services easier.

Wrapping Up

Phew, we covered a lot in the past 8 weeks. My hope for the series was to provide a good starting point for you to understand containers and basic Kubernetes concepts. We walked through why an orchestrator is needed and how Kubernetes builds on Docker concepts. We talked about some popular tools you might encounter as well as some new developments within Kubernetes. Now that you have a solid foundation, you can dive deep into the documentation or other blogs to play around with Kubernetes.

As I mentioned in the State of Kubernetes 2023 article, the developer experience for Kubernetes still leaves a lot to be desired. So don’t be discouraged if you still find a lot of this confusing and daunting. There is a lot of work being done to improve the user experience, and who knows what impact AI will have on this field. In the meantime, understand the key concepts and work with the tools available at your organization.

Thank you again for joining me on this journey! If you liked this series, check out my other Kubernetes guides and articles on my profile!



Yitaek Hwang

Software Engineer at NYDIG writing about cloud, DevOps/SRE, and crypto topics.