Kubernetes, microservices, and beyond
An exploratory guide to Kubernetes, its role in microservices architecture, and its impact on modern cloud-native application development.
Introduction
This post contains summarized notes from several places (links are attached in the reference section); you may consider it a container 101 or Kubernetes 101.
More than Microservices
To recap, we have a basic understanding of the motivation for moving towards a microservices architecture. To summarize, the most compelling reason for microservices is speeding up development: we break the monolith into smaller, portable pieces so that teams can release on their own cycles without coordinating with each other.
The underlying infrastructure is very important to microservices for rapid deployment and continuous delivery. To roll back, deploy, monitor, and test faster, a microservices architecture also requires a high degree of automation.
Microservices are one of the reasons we need advanced tools like Kubernetes.
Start with Containers
For people who have never heard of containers: imagine downloading a mobile app from the App Store — each app is a self-contained application bundled with everything it needs to run on your iPhone. The concept of a container is very similar.
A container image is a packaging format that includes an application and all the dependencies required to run it, like a sandbox for resource-isolated processes in the OS. Containers are a method of OS-level virtualization rather than hardware virtualization: a logical packaging mechanism that ensures applications deploy quickly, reliably, and consistently regardless of the deployment environment.
We don't need to worry about dependency conflicts between containerized applications. Just as importantly, it's more secure because applications don't run directly on the host operating system.
Container technology is built into most mainstream OSs, but it's not exactly easy to use out of the box. Something like Docker provides a nice abstraction on top of container technology to help create, distribute, and run container images on our own servers.
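As a concrete sketch of what Docker's abstraction looks like, here is a minimal Dockerfile for a hypothetical Python web app (the app name, file names, and port are illustrative assumptions, not from any real project):

```dockerfile
# Build a container image for a hypothetical Python web app.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to run it
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

Building (`docker build -t myapp:1.0 .`) and running (`docker run -p 8080:8080 myapp:1.0`) then produces the same environment on any machine with Docker installed, which is exactly the "bundle everything the app needs" idea above.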
Deployment -- Yesterday and Tomorrow
The OLD way to deploy applications was to install them on a host using the OS package manager, which entangles the applications' executables, configuration, libraries, and lifecycles with each other and with the host OS. VM (virtual machine) images help achieve predictable rollouts and rollbacks, but they are heavyweight and non-portable.
The NEW way is to deploy containers. Containers are isolated from each other and from the host, with their own filesystems, process namespaces, and bounded computational resources; thus they are easier to build than VMs and portable across clouds and OS distributions. Containers also allow applications to be version-controlled and replicated.
More than one container: Kubernetes
With Docker, we are able to package containers easily, but that's just the beginning of the story. More problems arrive with app configuration, service discovery, managing updates, and monitoring — that's where Kubernetes enters the picture to manage all this complexity for us.
Kubernetes is an automated container orchestration system, which helps containers be coordinated, distributed, and managed at scale. (Kubernetes is also called k8s because there are 8 characters between the "k" and the "s". :)) It was created by Google, based on their internal project Borg.
K8s Basics
Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. (all the following images are from this tutorial)

Inside a k8s cluster, there is one master (in the center) and multiple nodes surrounding it.
The master is the managing node, responsible for
- deployments
- scheduling work onto the nodes
- monitoring
- replacing a node if anything goes wrong

Each node runs
- a kubelet
  - a process for communication between the master and the node
  - it also manages the Pods and containers
- a container runtime (e.g., Docker, rkt), responsible for
  - pulling the container image from a registry
  - unpacking the container
  - running the app

Inside each node, there are multiple pods. A Pod is an application-specific logical host, containing one or more application containers that always run together on one node.
Each pod contains
- volumes -- the shared storage
- an IP address -- for networking
- container images and other information
In the microservices world, one service is backed by a set of pods.
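As an illustrative sketch of these pieces (all names, images, and ports here are hypothetical), a Pod with a shared volume and a Service that selects it could be declared like this:

```yaml
# A Pod: one or more containers plus shared storage and one IP address
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello             # the Service below selects on this label
spec:
  containers:
  - name: hello
    image: nginx:1.25      # container image pulled by the runtime
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data      # shared storage visible to all containers in the Pod
    emptyDir: {}
---
# A Service: one stable endpoint in front of a set of Pods
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello             # routes traffic to Pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```

The Service finds its Pods by label selector rather than by name, which is what lets the set of pods behind a service grow, shrink, and be replaced freely.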
(The kubectl command provides a bunch of options to interact with Pods.)
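A few common kubectl subcommands, as a sketch (the pod name is hypothetical; these assume a running cluster and a configured kubectl):

```shell
kubectl get pods                   # list pods in the current namespace
kubectl describe pod hello-pod     # detailed state and recent events
kubectl logs hello-pod             # container stdout/stderr
kubectl exec -it hello-pod -- sh   # open a shell inside the container
kubectl delete pod hello-pod       # remove the pod
```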
Health checks are done by the kubelet
- Readiness probes
  - indicate a pod is ready to serve traffic
  - pods that are not ready are removed from the load balancer
- Liveness probes
  - indicate a container is alive
  - containers are restarted if they are not alive
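The probes above are declared per container in the Pod spec. A sketch, assuming an HTTP server on port 80 (paths, ports, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    ports:
    - containerPort: 80
    readinessProbe:          # pod receives traffic only while this passes
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # kubelet restarts the container if this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
```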
Scaling up/down is done by changing the number of replicas in the k8s configs. Auto-scaling is also possible.
K8s supports rolling updates, which allow a Deployment to be updated with zero downtime by incrementally replacing Pod instances with new ones.
During both scaling and updating, traffic is only load-balanced to available pods (those whose readiness probes have passed).
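The pieces above come together in a Deployment sketch: `replicas` controls scaling, and the `RollingUpdate` strategy controls how Pods are replaced during an update (all names and numbers here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 3                # scale up/down by changing this number
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one pod down during the update
      maxSurge: 1            # at most one extra pod created at a time
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25    # bump this tag to trigger a rolling update
        ports:
        - containerPort: 80
```

Scaling can also be done imperatively (`kubectl scale deployment hello-deploy --replicas=5`), and auto-scaling with `kubectl autoscale deployment hello-deploy --min=2 --max=10 --cpu-percent=80`.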
Summary
To fully realize the benefits of a microservices architecture (rapid development and continuous delivery), we need highly automated infrastructure that enables building and deploying containerized applications in a scalable and reliable way. With Kubernetes, we can focus more on designing and architecting services.
Appendix: QA with Adrian
Adrian Cockcroft, the former cloud architect of Netflix who helped make microservices mainstream, also appears in several videos in Scalable Microservices with Kubernetes from Udacity. I put together the following conversations from subtitles auto-generated by Google. I really like how he uses pets and cattle to analogize the relationship between developers and their machines. :)
Q: What role do you see containers playing in the future of application development and deployment?
A: Well if you look at the history of having large systems in the data center. Bare metal machines, typically you'd buy them, you'd depreciate them over three years. They'd sit at the same IP address. They get installed once and used over and over again, and you can think of those machines as pets. What we really want to do is have cattle. Right. So if you lose some, you know, one cow out of a dairy herd you still got milk, right. You can get another cow. That's that kind of model, rather than having some machine that's very specific, and everyone knows its name, and if anything goes wrong with it everyone gets very unhappy. So that transition from sort of these very specific machines to more sort of fleets or herds of machines is something that happened typically when, you know, you can do it with the VMs. But with VMs, you know, basically you usually get them by the hour and they last for a few weeks, and you know they have a much shorter life cycle. But then containers became efficient enough that you could actually get a machine in seconds and you could run it for minutes, and that was a perfectly reasonable thing to do. So, you can create an entire test environment from scratch, run all your tests, shut it down again, and you can have lots of those running in parallel. So now you've got machines, some of them you barely know they exist, they come and go very, very quickly. And finally, sort of, if you take it right to the limit, you could create a container just to run a single request and shut it down again. And that's something that's starting to be called serverless computing.
Q: when it comes to writing containerized apps other than make it smaller, what are problems that no one's talking about?
A: So one of the problems that people get into is if they keep their old organization and the old practices. So if you've got a micro service or an application you are building, and the way you used to build your monolith was you had teams spread all around the world that contributed code into the system and you spent, you know, weeks putting everything together. When we created software it sort of followed that form. This is sometimes called Conway's law, which says that the structure of an organization guides the structure of the code it produces. So what I'm really saying is that you can't build micro services with a waterfall organization.
So people might lose some of the benefits of the microservices pattern due to organizational issues.