The Kubernetes Book by Nigel Poulton & Pushkar Joglekar, chapter name Kubernetes primer

Kubernetes primer

This chapter is split into two main sections.

• Kubernetes background – where it came from etc.

• Kubernetes as the Operating System of the cloud

Kubernetes background

Kubernetes is an application orchestrator. For the most part, it orchestrates containerized cloud-native microservices apps. How about that for a sentence full of buzzwords!

You’ll come across those terms a lot as you work with Kubernetes, so let’s take a minute to explain what each one means.

What is an orchestrator

An orchestrator is a system that deploys and manages applications. It can deploy your applications and dynamically respond to changes. For example, Kubernetes can:

• deploy your application

• scale it up and down dynamically according to demand

• self-heal it when things break

• perform zero-downtime rolling updates and rollbacks

• and more

And the best part about Kubernetes… it can do all of that without you having to supervise or get involved in decisions. Obviously you have to set things up in the first place, but once you’ve done that, you can sit back and let Kubernetes work its magic.

What is a containerised app

A containerized application is an app that runs in a container.

Before we had containers, applications ran on physical servers or in virtual machines. Containers are the next iteration of how we package and run our apps, and they’re faster, more lightweight, and more suited to modern business requirements than servers and virtual machines.

Think of it this way:

• Applications ran on physical servers in the age of open systems (roughly the 1980s and 1990s)

• Applications ran in virtual machines in the age of virtual machines (2000s and into the 2010s)

• Applications run in containers in the cloud-native era (now)

While Kubernetes can orchestrate other workload types, including virtual machines and serverless functions, it’s most commonly used to orchestrate containerised apps.

What is a cloud-native app

A cloud-native application is an application that is designed to meet modern business demands (auto-scaling, self-healing, rolling updates etc.) and can run on Kubernetes.

I feel like it’s important to be clear that cloud-native apps are not applications that will only run on a public cloud. Yes, they absolutely can run on a public cloud, but they can run anywhere that you have Kubernetes – even your on-premises data center.

What is a microservices app

A microservices app is a business application that is built from lots of small specialised parts that communicate and form a meaningful application. For example, you might have an e-commerce app that comprises all of the following small specialised components:

• web front-end

• catalog service

• shopping cart

• authentication service

• logging service

• persistent store

• more…

Each of these individual services is called a microservice. Typically, each can be coded and looked after by a different team, and each can have its own release cadence and can be scaled independently of all others.

For example, you can patch and scale the logging microservice without affecting any of the other application components.

Building applications this way is an important aspect of a cloud-native application.

With all of this in mind, let’s re-phrase that definition that was full of buzzwords…

Kubernetes deploys and manages (orchestrates) applications that are packaged and run as containers (containerized) and that are built in ways (cloud-native microservices) that allow them to scale, self-heal, and be updated in line with modern business requirements.

We’ll talk about these concepts a lot throughout the book, but for now, this should help you understand some of the main industry buzzwords.

Where did Kubernetes come from

Let’s start from the beginning…

Amazon Web Services (AWS) changed the world when it brought us modern-day cloud computing. Since then, everyone else has been trying to catch up.

One of the companies trying to catch up was Google. Google has its own very good cloud and needs a way to blunt the advantage AWS has built up and make it easier for potential customers to use the Google Cloud.

Google has boatloads of experience working with containers at scale. For example, huge Google applications, such as Search and Gmail, have been running at extreme scale on containers for a lot of years – since way before Docker brought us easy-to-use containers. To orchestrate and manage these containerised apps, Google had a couple of in-house proprietary systems. They took the lessons learned from these in-house systems and created a new platform called Kubernetes, which they open-sourced in 2014 and later donated to the newly formed Cloud Native Computing Foundation (CNCF).

Figure 1.1

Since then, Kubernetes has become the most important cloud-native technology on the planet.

Like many modern cloud-native projects, it’s written in Go (Golang), it’s built in the open on GitHub (at kubernetes/kubernetes), it’s actively discussed on IRC channels, you can follow it on Twitter (@kubernetesio), and slack.k8s.io is a pretty good Slack channel. There are also regular meetups and conferences all over the planet.

Kubernetes and Docker

Kubernetes and Docker are complementary technologies. For example, it’s common to develop your applications with Docker and use Kubernetes to orchestrate them in production.

In this model, you write your code in your favourite languages, then use Docker to package it, test it, and ship it. But the final steps of deploying and running it are handled by Kubernetes.

At a high level, you might have a Kubernetes cluster with 10 nodes to run your production applications. Behind the scenes, each node is running Docker as its container runtime. This means that Docker is the low-level technology that starts and stops the containerised applications. Kubernetes is the higher-level technology that looks after the bigger picture, such as deciding which nodes to run containers on, deciding when to scale up or down, and executing updates.

Figure 1.2 shows a simple Kubernetes cluster with some nodes using Docker as the container runtime.

Figure 1.2

As can be seen in Figure 1.2, Docker isn’t the only container runtime that Kubernetes supports. In fact, Kubernetes has a couple of features that abstract the container runtime (make it interchangeable):

1. The Container Runtime Interface (CRI) is an abstraction layer that standardizes the way 3rd-party container runtimes interface with Kubernetes. It allows the container runtime code to exist outside of Kubernetes, but interface with it in a supported and standardized way.

2. Runtime Classes were introduced in Kubernetes 1.12 and promoted to beta in 1.14. They allow for different classes of runtimes. For example, the gVisor or Kata Containers runtimes might provide better workload isolation than the Docker and containerd runtimes.
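To make this concrete, here’s a minimal sketch of a RuntimeClass and a Pod that asks to use it. The API version shown may differ on your cluster, and it assumes your nodes have a gVisor handler configured under the name runsc – treat all the names as illustrative.

apiVersion: node.k8s.io/v1beta1   # RuntimeClass API group (version varies by release)
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                    # assumes the node's gVisor runtime handler is called "runsc"
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor        # run this Pod's containers under the gvisor RuntimeClass
  containers:
  - name: web
    image: nginx:1.17

Whichever runtime class a Pod ends up using, the regular kubectl commands you use to deploy and manage it stay the same.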

At the time of writing, containerd is catching up to Docker as the most commonly used container runtime in Kubernetes. It is a stripped-down version of Docker with just the stuff that Kubernetes needs. It’s pronounced container dee.

While all of this is interesting, it’s low-level stuff that shouldn’t impact your Kubernetes learning experience. For example, whichever container runtime you use, the regular Kubernetes commands and patterns will continue to work as normal.

What about Kubernetes vs Docker Swarm

In 2016 and 2017 we had the orchestrator wars where Docker Swarm, Mesosphere DCOS, and Kubernetes competed to become the de-facto container orchestrator. To cut a long story short, Kubernetes won.

It’s true that Docker Swarm and other container orchestrators still exist, but their development and market-share are small compared to Kubernetes.

Kubernetes and Borg: Resistance is futile!

There’s a good chance you’ll hear people talk about how Kubernetes relates to Google’s Borg and Omega systems.

As previously mentioned, Google has been running containers at scale for a long time – apparently crunching through billions of containers a week. So yes, Google has been running things like search, Gmail, and GFS on lots of containers for a very long time.

Orchestrating these containerised apps was the job of a couple of in-house Google technologies called Borg and Omega. So, it’s not a huge stretch to make the connection with Kubernetes – all three are in the game of orchestrating containers at scale, and they’re all related to Google.

However, it’s important to understand that Kubernetes is not an open-sourced version of Borg or Omega. It’s more like Kubernetes shares its DNA and family history with Borg and Omega. A bit like this… In the beginning was Borg, and Borg begat Omega. Omega knew the open-source community and begat her Kubernetes ;-)

Figure 1.3 - Shared DNA

The point is, all three are separate, but all three are related. In fact, some of the people who built Borg and Omega are involved in building Kubernetes. So, although Kubernetes was built from scratch, it leverages much of what was learned at Google with Borg and Omega.

As things stand, Kubernetes is an open-source project hosted by the CNCF, it’s licensed under the Apache 2.0 license, version 1.0 shipped way back in July 2015, and at the time of writing, we’ve already passed version 1.16.

Kubernetes – what’s in the name

The name Kubernetes (koo-ber-net-eez) comes from the Greek word meaning Helmsman – the person who steers a seafaring ship. This theme is reflected in the logo.

Figure 1.4 - The Kubernetes logo

Apparently, some of the people involved in the creation of Kubernetes wanted to call it Seven of Nine. If you know your Star Trek, you’ll know that Seven of Nine is a female Borg rescued by the crew of the USS Voyager under the command of Captain Kathryn Janeway. Sadly, copyright laws prevented it from being called Seven of Nine. However, the seven spokes on the logo are a tip-of-the-hat to Seven of Nine.

One last thing about the name before moving on. You’ll often see Kubernetes shortened to K8s (pronounced “Kates”). The number 8 replaces the 8 characters between the K and the s – great for tweets and lazy typists like me ;-)

The operating system of the cloud

Kubernetes has emerged as the de facto platform for deploying and managing cloud-native applications. In many ways, it’s like an operating system (OS) for the cloud. Consider this:

• You install a traditional OS (Linux or Windows) on a server, and the OS abstracts the physical server’s resources and schedules processes etc.

• You install Kubernetes on a cloud, and it abstracts the cloud’s resources and schedules the various microservices of cloud-native applications

In the same way that Linux abstracts the hardware differences of different server platforms, Kubernetes abstracts the differences between different private and public clouds. Net result… as long as you’re running Kubernetes, it doesn’t matter if the underlying systems are on premises in your own data center, edge clusters, or in the public cloud.

With this in mind, Kubernetes enables a true hybrid cloud, allowing you to seamlessly move and balance workloads across multiple different public and private cloud infrastructures. You can also migrate to and from different clouds, meaning you can choose a cloud today and not have to stick with that decision for the rest of your life.

 

Cloud scale

Generally speaking, cloud-native microservices applications make our previous scalability challenges look easy – we’ve just said that Google goes through billions of containers per week!

That’s great, but most of us aren’t the size of Google. What about the rest of us?

Well… as a general rule, if your legacy apps have hundreds of VMs, there’s a good chance your containerized cloud-native apps will have thousands of containers. With this in mind, we desperately need help managing them.

Say hello to Kubernetes.

Also, we live in a business and technology world that is increasingly fragmented and in a constant state of disruption. With this in mind, we desperately need a framework and platform that is widely accepted and hides the complexity.

Again, say hello to Kubernetes.

Application scheduling

A typical computer is a collection of CPU, memory, storage, and networking. But modern operating systems have done a great job abstracting most of that. For example, how many developers care which CPU core or exact memory address their application uses? Not many, we let the OS take care of things like that. And it’s a good thing, it’s made the world of application development a far friendlier place.

Kubernetes does a similar thing with cloud and data center resources. At a high level, a cloud or data center is a pool of compute, network and storage. Kubernetes abstracts it. This means we don’t have to hard-code which node or storage volume our applications run on, we don’t even have to care which cloud they run on – we let Kubernetes take care of that. Gone are the days of naming your servers, mapping storage volumes in a spreadsheet, and otherwise treating your infrastructure assets like pets. Systems like Kubernetes don’t care. Gone are the days of taking your app and saying “Run this part of the app on this exact node, with this IP, on this specific volume…”. In the cloud-native Kubernetes world, we just say “Hey Kubernetes, here’s an app. Please deploy it and make sure it keeps running…”.

A quick analogy…

Consider the process of sending goods via a courier service.

You package the goods in the courier’s standard packaging, put a label on it, and hand it over to the courier. The courier is responsible for everything. This includes all the complex logistics of which planes and trucks it goes on, which highways to use, and who the drivers should be. They also provide services that let you do things like track your package and make delivery changes. The point is, the only thing that you have to do is package and label the goods, and the courier abstracts everything else and takes care of scheduling and other logistics.

It’s the same for apps on Kubernetes. You package the app as a container, give it a declarative manifest, and let Kubernetes take care of deploying it and keeping it running. You also get a rich set of tools and APIs that let you introspect (observe and examine) your app. It’s a beautiful thing.

 

Chapter summary

Kubernetes was created by Google based on lessons learned running containers at scale for many years. It was donated to the community as an open-source project and is now the industry standard API for deploying and managing cloud-native applications. It runs on any cloud or on-premises data center and abstracts the underlying infrastructure. This allows you to build hybrid clouds, as well as migrate easily between cloud platforms. It’s open-sourced under the Apache 2.0 license and lives within the Cloud Native Computing Foundation (CNCF).

Tip!

Kubernetes is a fast-moving project under active development. But don’t let this put you off – embrace it. Change is the new normal.

To help you keep up-to-date, I suggest you subscribe to a couple of my YouTube channels:

• #KubernetesMoment: a short weekly video discussing or explaining something about Kubernetes

• Kubernetes this Month: a monthly roundup of all the important things going on in the Kubernetes world

You should also check out:

• My website at nigelpoulton.com

• My video training courses at pluralsight.com and acloud.guru

• My hands-on learning at MSB (msb.com)

• KubeCon and your local Kubernetes and cloud-native meetups

2: Kubernetes principles of operation

In this chapter, you’ll learn about the major components required to build a Kubernetes cluster and deploy an app. The aim is to give you an overview of the major concepts. So don’t worry if you don’t understand everything straight away, we’ll cover most things again as we progress through the book.

We’ll divide the chapter as follows:

• Kubernetes from 40K feet

• Masters and nodes

• Packaging apps

• Declarative configuration and desired state

• Pods

• Deployments

• Services

Kubernetes from 40K feet

At the highest level, Kubernetes is two things:

• A cluster for running applications

• An orchestrator of cloud-native microservices apps

Kubernetes as a cluster

Kubernetes is like any other cluster – a bunch of nodes and a control plane. The control plane exposes an API, has a scheduler for assigning work to nodes, and state is recorded in a persistent store. Nodes are where application services run.

It can be useful to think of the control plane as the brains of the cluster, and the nodes as the muscle. In this analogy, the control plane is the brains because it implements all of the important features such as auto-scaling and zero-downtime rolling updates. The nodes are the muscle because they do the every-day hard work of executing application code.

Kubernetes as an orchestrator

Orchestrator is just a fancy word for a system that takes care of deploying and managing applications.

Let’s look at a quick analogy.

In the real world, a football (soccer) team is made of individuals. No two individuals are the same, and each has a different role to play in the team – some defend, some attack, some are great at passing, some tackle, some shoot… Along comes the coach, and he or she gives everyone a position and organizes them into a team with a purpose. We go from Figure 2.1 to Figure 2.2.

Figure 2.1

Figure 2.2

The coach also makes sure the team maintains its formation, sticks to the game-plan, and deals with any injuries and other changes in circumstance.

Well guess what… microservices apps on Kubernetes are the same.

Stick with me on this…

We start out with lots of individual specialised services. Some serve web pages, some do authentication, some perform searches, others persist data. Kubernetes comes along – a bit like the coach in the football analogy – organizes everything into a useful app and keeps things running smoothly. It even responds to events and other changes.

In the sports world we call this coaching. In the application world we call it orchestration. Kubernetes orchestrates cloud-native microservices applications.

 

How it works

To make this happen, you start out with an app, package it up and give it to the cluster (Kubernetes). The cluster is made up of one or more masters and a bunch of nodes.

The masters, sometimes called heads or head nodes, are in charge of the cluster. This means they make the scheduling decisions, perform monitoring, implement changes, respond to events, and more. For these reasons, we often refer to the masters as the control plane.

The nodes are where application services run, and we sometimes call them the data plane. Each node has a reporting line back to the masters, and constantly watches for new work assignments.

To run applications on a Kubernetes cluster we follow this simple pattern:

1. Write the application as small independent microservices in our favourite languages.

2. Package each microservice in its own container.

3. Wrap each container in its own Pod.

4. Deploy Pods to the cluster via higher-level controllers such as Deployments, DaemonSets, StatefulSets, and CronJobs.

Now then… we’re still near the beginning of the book and you’re not expected to know what all of this means yet. However, at a high-level, Deployments offer scalability and rolling updates, DaemonSets run one instance of a service on every node in the cluster, StatefulSets are for stateful application components, and CronJobs are for short-lived tasks that need to run at set times. There are more than these, but these will do for now.

Kubernetes likes to manage applications declaratively. This is a pattern where you describe how you want your application to look and feel in a set of YAML files. You POST these files to Kubernetes, then sit back while Kubernetes makes it all happen.

But it doesn’t stop there. Because the declarative pattern tells Kubernetes how an application should look, Kubernetes can watch it and make sure things don’t stray from what you asked for. If something isn’t as it should be, Kubernetes tries to fix it.

That’s the big picture. Let’s dig a bit deeper.

Masters and nodes

A Kubernetes cluster is made of masters and nodes. These are Linux hosts that can be virtual machines (VM), bare metal servers in your data center, or instances in a private or public cloud.

Masters (control plane)

A Kubernetes master is a collection of system services that make up the control plane of the cluster.

The simplest setups run all the master services on a single host. However, this is only suitable for labs and test environments. For production environments, multi-master high availability (HA) is a must have. This is why the major cloud providers implement HA masters as part of their hosted Kubernetes platforms such as Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), and Google Kubernetes Engine (GKE).

Generally speaking, running 3 or 5 replicated masters in an HA configuration is recommended.

 

It’s also considered a good practice not to run user applications on masters. This allows masters to concentrate entirely on managing the cluster.

Let’s take a quick look at the different master services that make up the control plane.

The API server

The API server is the Grand Central Station of Kubernetes. All communication, between all components, must go through the API server. We’ll get into the detail later in the book, but it’s important to understand that internal system components, as well as external user components, all communicate via the same API.

It exposes a RESTful API that you POST YAML configuration files to over HTTPS. These YAML files, which we sometimes call manifests, contain the desired state of your application. This desired state includes things like which container image to use, which ports to expose, and how many Pod replicas to run.

All requests to the API Server are subject to authentication and authorization checks, but once these are done, the config in the YAML file is validated, persisted to the cluster store, and deployed to the cluster.

The cluster store

The cluster store is the only stateful part of the control plane, and it persistently stores the entire configuration and state of the cluster. As such, it’s a vital component of the cluster – no cluster store, no cluster.

The cluster store is currently based on etcd, a popular distributed database. As it’s the single source of truth for the cluster, you should run 3 to 5 etcd replicas for high availability, and you should provide adequate ways to recover when things go wrong.

On the topic of availability, etcd prefers consistency over availability. This means that it will not tolerate a split-brain situation and will halt updates to the cluster in order to maintain consistency. However, if etcd becomes unavailable, applications running on the cluster should continue to work, you just won’t be able to update anything.

As with all distributed databases, consistency of writes to the database is vital. For example, multiple writes to the same value originating from different nodes need to be handled. etcd uses the popular RAFT consensus algorithm to accomplish this.

The controller manager

The controller manager implements all of the background control loops that monitor the cluster and respond to events.

It’s a controller of controllers, meaning it spawns all of the independent control loops and monitors them.

Some of the control loops include the node controller, the endpoints controller, and the replicaset controller. Each one runs as a background watch-loop that is constantly watching the API Server for changes – the aim of the game is to ensure the current state of the cluster matches the desired state (more on this shortly).

The logic implemented by each control loop is effectively this:

1. Obtain desired state

2. Observe current state

3. Determine differences

4. Reconcile differences

This logic is at the heart of Kubernetes and declarative design patterns.

Each control loop is also extremely specialized and only interested in its own little corner of the Kubernetes cluster. No attempt is made to over-complicate things by implementing awareness of other parts of the system – each control loop takes care of its own business and leaves everything else alone. This is key to the distributed design of Kubernetes and adheres to the Unix philosophy of building complex systems from small specialized parts.

Note: Throughout the book we’ll use terms like control loop, watch loop, and reconciliation loop to mean the same thing.

The scheduler

At a high level, the scheduler watches the API server for new work tasks and assigns them to appropriate healthy nodes. Behind the scenes, it implements complex logic that filters out nodes incapable of running the task, and then ranks the nodes that are capable. The ranking system is complex, but the node with the highest-ranking score is selected to run the task.

When identifying nodes that are capable of running a task, the scheduler performs various predicate checks. These include: is the node tainted, are there any affinity or anti-affinity rules, is the required network port available on the node, does the node have sufficient free resources, and so on. Any node incapable of running the task is ignored, and the remaining nodes are ranked according to things such as: does the node already have the required image, how much free resource does the node have, and how many tasks is the node already running. Each criterion is worth points, and the node with the most points is selected to run the task.
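To give a feel for the kinds of hints the scheduler works with, here’s a hedged sketch of a Pod spec carrying a few of them. The label, taint key, and resource numbers are illustrative, not required values.

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  nodeSelector:
    disktype: ssd                 # only nodes labelled disktype=ssd pass the filter
  tolerations:
  - key: "dedicated"              # lets the Pod land on nodes tainted dedicated=web:NoSchedule
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
  containers:
  - name: web
    image: nginx:1.17
    resources:
      requests:
        cpu: 500m                 # nodes without this much free CPU and memory are filtered out
        memory: 256Mi
    ports:
    - containerPort: 8080
      hostPort: 8080              # nodes where host port 8080 is already taken are filtered out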

If the scheduler cannot find a suitable node, the task cannot be scheduled and is marked as pending.

The scheduler isn’t responsible for running tasks, just picking the nodes a task will run on.

The cloud controller manager

If you’re running your cluster on a supported public cloud platform, such as AWS, Azure, GCP, DO, or IBM Cloud, your control plane will be running a cloud controller manager. Its job is to manage integrations with underlying cloud technologies and services such as instances, load-balancers, and storage. For example, if your application asks for an internet-facing load-balancer, the cloud controller manager is involved in provisioning an appropriate load-balancer on your cloud platform.

Control Plane summary

Kubernetes masters run all of the cluster’s control plane services. Think of it as the brains of the cluster where all the control and scheduling decisions are made. Behind the scenes, a master is made up of lots of small specialized control loops and services. These include the API server, the cluster store, the controller manager, and the scheduler.

The API Server is the front-end into the control plane and all instructions and communication must go through it. By default, it exposes a RESTful endpoint on port 443.

Figure 2.3 shows a high-level view of a Kubernetes master (control plane).

Figure 2.3 - Kubernetes Master

Nodes

Nodes are the workers of a Kubernetes cluster. At a high level, they do three things:

1. Watch the API Server for new work assignments

2. Execute new work assignments

3. Report back to the control plane (via the API server)

As we can see from Figure 2.4, they’re a bit simpler than masters.

Figure 2.4 - Kubernetes Node (formerly Minion)

Let’s look at the three major components of a node.

Kubelet

The Kubelet is the star of the show on every node. It’s the main Kubernetes agent, and it runs on every node in the cluster. In fact, it’s common to use the terms node and kubelet interchangeably.

 

When you join a new node to a cluster, the process installs kubelet onto the node. The kubelet is then responsible for registering the node with the cluster. Registration effectively pools the node’s CPU, memory, and storage into the wider cluster pool.

One of the main jobs of the kubelet is to watch the API server for new work assignments. Any time it sees one, it executes the task and maintains a reporting channel back to the control plane.

If a kubelet can’t run a particular task, it reports back to the master and lets the control plane decide what actions to take. For example, if a Kubelet cannot execute a task, it is not responsible for finding another node to run it on. It simply reports back to the control plane and the control plane decides what to do.

Container runtime

The Kubelet needs a container runtime to perform container-related tasks – things like pulling images and starting and stopping containers.

In the early days, Kubernetes had native support for a few container runtimes such as Docker. More recently, it has moved to a plugin model called the Container Runtime Interface (CRI). At a high-level, the CRI masks the internal machinery of Kubernetes and exposes a clean documented interface for 3rd-party container runtimes to plug into.

There are lots of container runtimes available for Kubernetes. One popular example is cri-containerd. This is a community-based open-source project porting the CNCF containerd runtime to the CRI interface. It has a lot of support and is replacing Docker as the most popular container runtime used in Kubernetes.

Note: containerd (pronounced “container-dee”) is the container supervisor and runtime logic stripped out from the Docker Engine. It was donated to the CNCF by Docker, Inc. and has a lot of community support. Other CRI-compliant container runtimes exist.

Kube-proxy

The last piece of the node puzzle is the kube-proxy. This runs on every node in the cluster and is responsible for local cluster networking. For example, it implements local IPTABLES or IPVS rules to handle routing and load-balancing of traffic on the Pod network.

Kubernetes DNS

As well as the various control plane and node components, every Kubernetes cluster has an internal DNS service that is vital to operations.

The cluster’s DNS service has a static IP address that is hard-coded into every Pod on the cluster, meaning all containers and Pods know how to find it. Every new Service is automatically registered with the cluster’s DNS so that all components in the cluster can find every Service by name. Some other components that are registered with the cluster DNS are StatefulSets and the individual Pods that a StatefulSet manages.

Cluster DNS is based on CoreDNS (https://coredns.io/).

Now that we understand the fundamentals of masters and nodes, let’s switch gears and look at how we package applications to run on Kubernetes.

Packaging apps for Kubernetes

For an application to run on a Kubernetes cluster it needs to tick a few boxes. These include:

1. Packaged as a container

2. Wrapped in a Pod

3. Deployed via a declarative manifest file

It goes like this… You write an application service in a language of your choice. You build it into a container image and store it in a registry. At this point, the application service is containerized.

Next, you define a Kubernetes Pod to run the containerized application. At the kind of high level we’re at, a Pod is just a wrapper that allows a container to run on a Kubernetes cluster. Once you’ve defined the Pod, you’re ready to deploy it on the cluster.

It is possible to run a standalone Pod on a Kubernetes cluster. But the preferred model is to deploy all Pods via higher-level controllers. The most common controller is the Deployment. It offers scalability, self-healing, and rolling updates. You define Deployments in YAML manifest files that specify things like which image to use and how many replicas to deploy.

Figure 2.5 shows application code packaged as a container, running inside a Pod, managed by a Deployment controller.

Figure 2.5

Once everything is defined in the Deployment YAML file, you POST it to the API Server as the desired state of the application and let Kubernetes implement it.
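As a taster, a minimal Deployment manifest might look something like the following. The names and image are hypothetical placeholders, and we’ll cover the individual fields properly later in the book.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3                     # how many Pod replicas to run
  selector:
    matchLabels:
      app: web                    # manage Pods carrying this label
  template:                       # the Pod wrapped by this Deployment
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web-app:2.00   # hypothetical image
        ports:
        - containerPort: 8080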

Speaking of desired state…

The declarative model and desired state

The declarative model and the concept of desired state are at the very heart of Kubernetes.

In Kubernetes, the declarative model works like this:

1. Declare the desired state of an application (microservice) in a manifest file

2. POST it to the API server

3. Kubernetes stores it in the cluster store as the application’s desired state

4. Kubernetes implements the desired state on the cluster

5. Kubernetes implements watch loops to make sure the current state of the application doesn’t vary from the desired state

Let’s look at each step in a bit more detail.

Manifest files are written in simple YAML, and they tell Kubernetes how you want an application to look. This is called the desired state. It includes things such as which image to use, how many replicas to run, which network ports to listen on, and how to perform updates.

Once you’ve created the manifest, you POST it to the API server. The most common way of doing this is with the kubectl command-line utility. This sends the manifest to the control plane as an HTTP POST, usually on port 443.

Once the request is authenticated and authorized, Kubernetes inspects the manifest, identifies which controller to send it to (e.g. the Deployments controller), and records the config in the cluster store as part of the cluster’s overall desired state. Once this is done, the work gets scheduled on the cluster. This includes the hard work of pulling images, starting containers, building networks, and starting the application’s processes.

Finally, Kubernetes utilizes background reconciliation loops that constantly monitor the state of the cluster. If the current state of the cluster varies from the desired state, Kubernetes will perform whatever tasks are necessary to reconcile the issue.

Figure 2.6

It’s important to understand that what we’ve described is the opposite of the traditional imperative model. The imperative model is where you issue long lists of platform-specific commands to build things.

Not only is the declarative model a lot simpler than long scripts with lots of imperative commands, it also enables self-healing, scaling, and lends itself to version control and self-documentation. It does this by telling the cluster how things should look. If they stop looking like this, the cluster notices the discrepancy and does all of the hard work to reconcile the situation.

But the declarative story doesn’t end there – things go wrong, and things change. When they do, the current state of the cluster no longer matches the desired state. As soon as this happens, Kubernetes kicks into action and attempts to bring the two back into harmony.

Let’s consider an example.

Declarative example

Assume you have an app with a desired state that includes 10 replicas of a web front-end Pod. If a node that was running two replicas fails, the current state will be reduced to 8 replicas, but the desired state will still be 10. This will be observed by a reconciliation loop and Kubernetes will schedule two new replicas to bring the total back up to 10.

The same thing will happen if you intentionally scale the desired number of replicas up or down. You could even change the image you want to use. For example, if the app is currently using v2.00 of an image, and you update the desired state to use v2.01, Kubernetes will notice the difference and go through the process of updating all replicas so that they are using the new version specified in the new desired state.

To be clear: instead of writing a long list of commands to go through the process of updating every replica to the new version, you simply tell Kubernetes you want the new version, and Kubernetes does the hard work for you.
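For example, if the app were defined in a Deployment manifest like the earlier sketch, the relevant portion of the manifest after your edits might look like this – bump the replica count and the image tag, re-POST the file, and Kubernetes handles the rest (names still illustrative):

spec:
  replicas: 10                            # scale out to 10 replicas
  template:
    spec:
      containers:
      - name: web
        image: example.com/web-app:2.01   # was 2.00 – changing the tag triggers a rolling update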

Despite how simple this might seem, it’s extremely powerful and at the very heart of how Kubernetes operates.

You give Kubernetes a declarative manifest that describes how you want an application to look. This forms the basis of the application’s desired state. The Kubernetes control plane records it, implements it, and runs background reconciliation loops that constantly check that what is running is what you’ve asked for. When current state matches desired state, the world is a happy place. When it doesn’t, Kubernetes gets busy fixing it.

Pods

In the VMware world, the atomic unit of scheduling is the virtual machine (VM). In the Docker world, it’s the container. Well… in the Kubernetes world, it’s the Pod.

It’s true that Kubernetes runs containerized apps. However, you cannot run a container directly on a Kubernetes cluster – containers must always run inside of Pods.

Figure 2.7

Pods and containers

The very first thing to understand is that the term Pod comes from a pod of whales – in the English language, a group of whales is called a pod. As the Docker logo is a whale, it makes sense that we call a group of containers a Pod.

The simplest model is to run a single container per Pod. However, there are advanced use-cases that run multiple containers inside a single Pod. These multi-container Pods are beyond the scope of what we’re discussing here, but powerful examples include:

• Service meshes

• Web containers supported by a helper container that pulls the latest content

• Containers with a tightly coupled log scraper

The point is, a Kubernetes Pod is a construct for running one or more containers. Figure 2.8 shows a multi-container Pod.

Figure 2.8
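For reference, a multi-container Pod manifest is simply a Pod with more than one entry under containers. Here’s a hedged sketch with a web container and a tightly coupled log-scraper helper – the names and images are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
  - name: web                     # main application container
    image: nginx:1.17
    ports:
    - containerPort: 80
  - name: log-scraper             # helper container sharing the same Pod environment
    image: example.com/log-scraper:1.0

Both containers share the Pod’s network stack, volumes, and other resources, which is exactly what the next section digs into.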

Pod anatomy

At the highest-level, a Pod is a ring-fenced environment to run containers. The Pod itself doesn’t actually run anything, it’s just a sandbox for hosting containers. Keeping it high level, you ring-fence an area of the host OS, build a network stack, create a bunch of kernel namespaces, and run one or more containers in it. That’s a Pod.

If you’re running multiple containers in a Pod, they all share the same Pod environment. This includes things like the IPC namespace, shared memory, volumes, network stack and more. As an example, this means that all containers in the same Pod will share the same IP address (the Pod’s IP). This is shown in Figure 2.9.

Figure 2.9

If two containers in the same Pod need to talk to each other (container-to-container within the Pod) they can use ports on the Pod’s localhost interface as shown in Figure 2.10.

Figure 2.10

Multi-container Pods are ideal when you have requirements for tightly coupled containers that may need to share memory and storage. However, if you don’t need to tightly couple your containers, you should put them in their own Pods and loosely couple them over the network. This keeps things clean by having each Pod dedicated to a single task. Bear in mind, though, that loose coupling over the network creates a lot of un-encrypted network traffic, so you should seriously consider using a service mesh to secure traffic between Pods and application services.

Pods as the unit of scaling

Pods are also the minimum unit of scheduling in Kubernetes. If you need to scale your app, you add or remove Pods. You do not scale by adding more containers to an existing Pod. Multi-container Pods are only for situations where two different, but complementary, containers need to share resources. Figure 2.11 shows how to scale the nginx front-end of an app using multiple Pods as the unit of scaling.

Figure 2.11 - Scaling with Pods

Pods - atomic operations

The deployment of a Pod is an atomic operation. This means that a Pod is only considered ready for service when all of its containers are up and running. There is never a situation where a partially deployed Pod will service requests. The entire Pod either comes up and is put into service, or it doesn’t, and it fails.

 

A single Pod can only be scheduled to a single node. This is also true of multi-container Pods – all containers in the same Pod will run on the same node.

Pod lifecycle

Pods are mortal. They’re created, they live, and they die. If they die unexpectedly, you don’t bring them back to life. Instead, Kubernetes starts a new one in its place. However, even though the new Pod looks, smells, and feels like the old one, it isn’t. It’s a shiny new Pod with a shiny new ID and IP address.

This has implications on how you should design your applications. Don’t design them so they are tightly coupled to a particular instance of a Pod. Instead, design them so that when Pods fail, a totally new one (with a new ID and IP address) can pop up somewhere else in the cluster and seamlessly take its place.

Deployments

Most of the time you’ll deploy Pods indirectly via a higher-level controller. Examples of higher-level controllers include Deployments, DaemonSets, and StatefulSets.

For example, a Deployment is a higher-level Kubernetes object that wraps around a particular Pod and adds features such as scaling, zero-downtime updates, and versioned rollbacks.

Behind the scenes, Deployments, DaemonSets and StatefulSets implement a controller and a watch loop that is constantly observing the cluster making sure that current state matches desired state.

Deployments have existed in Kubernetes since version 1.2 and were promoted to GA (stable) in 1.9. You’ll see them a lot.

Services and stable networking

We’ve just learned that Pods are mortal and can die. However, if they’re managed via Deployments or DaemonSets, they get replaced when they fail. But replacements come with totally different IP addresses. This also happens when you perform scaling operations – scaling up adds new Pods with new IP addresses, whereas scaling down takes existing Pods away. Events like these cause a lot of IP churn.

The point I’m making is that Pods are unreliable, which poses a challenge… Assume you’ve got a microservices app with a bunch of Pods performing video rendering. How will this work if other parts of the app that need to use the rendering service cannot rely on the rendering Pods being there when they need them?

This is where Services come in to play. Services provide reliable networking for a set of Pods.

Figure 2.12 shows the uploader microservice talking to the renderer microservice via a Kubernetes Service object. The Kubernetes Service is providing a reliable name and IP, and is load-balancing requests to the two renderer Pods behind it.

Figure 2.12

Digging into a bit more detail. Services are fully-fledged objects in the Kubernetes API – just like Pods and Deployments. They have a front-end that consists of a stable DNS name, IP address, and port. On the back-end, they load-balance across a dynamic set of Pods. As Pods come and go, the Service observes this, automatically updates itself, and continues to provide that stable networking endpoint.

The same applies if you scale the number of Pods up or down. New Pods are seamlessly added to the Service and will receive traffic. Terminated Pods are seamlessly removed from the Service and will not receive traffic.

That’s the job of a Service – it’s a stable network abstraction point that provides TCP and UDP load-balancing across a dynamic set of Pods.

As they operate at the TCP and UDP layer, Services do not possess application intelligence and cannot provide application-layer host and path routing. For that, you need an Ingress, which understands HTTP and provides host and path-based routing.
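To illustrate, here’s a sketch of an Ingress performing host and path-based routing. The schema shown is the networking.k8s.io/v1beta1 version current at the time of writing (newer clusters use networking.k8s.io/v1 with a slightly different backend format), and the hostname and Service names are illustrative.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /cart               # shop.example.com/cart goes to the cart Service
        backend:
          serviceName: cart-svc
          servicePort: 80
      - path: /catalog            # shop.example.com/catalog goes to the catalog Service
        backend:
          serviceName: catalog-svc
          servicePort: 80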

Connecting Pods to Services

Services use labels and a label selector to know which set of Pods to load-balance traffic to. The Service has a label selector that is a list of all the labels a Pod must possess in order for it to receive traffic from the Service.

Figure 2.13 shows a Service configured to send traffic to all Pods on the cluster tagged with the following three labels:

• zone=prod

• env=be

• ver=1.3

Both Pods in the diagram have all three labels, so the Service will load-balance traffic to them.

Figure 2.13
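A sketch of the Service in Figure 2.13 might look like this. The Service name and port numbers are illustrative, but the selector lists the three labels from the diagram.

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:                       # only Pods carrying ALL of these labels receive traffic
    zone: prod
    env: be
    ver: "1.3"
  ports:
  - protocol: TCP
    port: 80                      # port the Service listens on
    targetPort: 8080              # port the matching Pods listen on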

Figure 2.14 shows a similar setup. However, an additional Pod, on the right, does not match the set of labels configured in the Service’s label selector. This means the Service will not load balance requests to it.

Figure 2.14

One final thing about Services. They only send traffic to healthy Pods. This means a Pod that is failing health-checks will not receive traffic from the Service.
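Health is signalled by probes defined on the Pod’s containers. The sketch below adds a readinessProbe to a Pod carrying the same three labels – the path, port, and timings are illustrative. While the probe is failing, the Service stops sending the Pod traffic.

apiVersion: v1
kind: Pod
metadata:
  name: healthy-pod
  labels:
    zone: prod
    env: be
    ver: "1.3"
spec:
  containers:
  - name: web
    image: nginx:1.17
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz            # endpoint the kubelet polls to decide readiness
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10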

That’s the basics. Services bring stable IP addresses and DNS names to the unstable world of Pods.

Chapter summary

In this chapter, we introduced some of the major components of a Kubernetes cluster.

The masters are where the control plane components run. Under-the-hood, there are several system-services, including the API Server that exposes a public REST interface to the cluster. Masters make all of the deployment and scheduling decisions, and multi-master HA is important for production-grade environments.

 

Nodes are where user applications run. Each node runs a service called the kubelet that registers the node with the cluster and communicates with the API Server. It watches the API for new work tasks and maintains a reporting channel. Nodes also have a container runtime and the kube-proxy service. The container runtime, such as Docker or containerd, is responsible for all container-related operations. The kube-proxy is responsible for networking on the node.

We also talked about some of the major Kubernetes API objects such as Pods, Deployments, and Services. The Pod is the basic building-block. Deployments add self-healing, scaling and updates. Services add stable networking and load-balancing.

Now that we’ve covered the basics, let’s get into the detail.

3: Installing Kubernetes

In this chapter, we’ll look at a few quick ways to install Kubernetes.

There are three typical ways of getting Kubernetes:

1. Test playground

2. Hosted Kubernetes

3. DIY installation

Kubernetes playgrounds

Test playgrounds are the simplest ways to get Kubernetes, but they’re not intended for production. Common examples include Magic Sandbox (msb.com), Play with Kubernetes (https://labs.play-with-k8s.com/), and Docker Desktop.

With Magic Sandbox, you register for an account and login. That’s it, you’ve instantly got a fully working multi-node private cluster that’s ready to go. You also get curated lessons and hands-on labs.

Play with Kubernetes requires you to login with a GitHub or Docker Hub account and follow a few simple steps to build a cluster that lasts for 4 hours.

Docker Desktop is a free desktop application from Docker, Inc. You download and run the installer, and after a few clicks you’ve got a single-node development cluster on your laptop.

Hosted Kubernetes

Most of the major cloud platforms now offer their own hosted Kubernetes services. In this model, control plane (masters) components are managed by your cloud platform. For example, your cloud provider makes sure the control plane is highly available, performant, and handles all control plane upgrades. On the flipside, you have less control over versions and have limited options to customise.

Irrespective of pros and cons, hosted Kubernetes services are as close to zero-effort production-grade Kubernetes as you will get. In fact, the Google Kubernetes Engine (GKE) lets you deploy a production-grade Kubernetes cluster and the Istio service mesh with just a few simple clicks. Other clouds offer similar services:

• AWS: Elastic Kubernetes Service (EKS)

• Azure: Azure Kubernetes Service (AKS)

• Linode: Linode Kubernetes Engine (LKE)

• DigitalOcean: DigitalOcean Kubernetes

• IBM Cloud: IBM Cloud Kubernetes Service

• Google Cloud Platform: Google Kubernetes Engine (GKE)

With these offerings in mind, ask yourself the following question before building your own Kubernetes cluster: Is building and managing your own Kubernetes cluster the best use of your time and other resources? If the answer isn’t “Hell yes”, I strongly suggest you consider a hosted service.