Kubernetes Services
In the previous chapters, we’ve looked at some Kubernetes objects that are used to deploy and run applications. We looked at Pods as the most fundamental unit for deploying microservices applications, then we looked at Deployment controllers that add things like scaling, self-healing, and rolling updates. However, despite all of the benefits of Deployments, we still cannot rely on individual Pod IPs! This is where Kubernetes Service objects come into play – they provide stable and reliable networking for a set of dynamic Pods.
We’ll divide the chapter as follows:
- Setting the scene
- Theory
- Hands-on with Services
- Real world example
Setting the scene
Before diving in, we need to remind ourselves that Pod IPs are unreliable. When Pods fail, they get replaced with new Pods that have new IPs. Scaling up a Deployment introduces new Pods with new IP addresses, and scaling down a Deployment removes Pods. All of this creates a large amount of IP churn and means Pod IPs cannot be relied on.
You also need to know 3 fundamental things about Kubernetes Services.
First, let’s clear up some terminology. When we talk about a Service with a capital “S”, we’re talking about the Service object in Kubernetes that provides stable networking for Pods. Just like a Pod, ReplicaSet, or Deployment, a Kubernetes Service is a REST object in the API that you define in a manifest and POST to the API Server.
Second, you need to know that every Service gets its own stable IP address, its own stable DNS name, and its own stable port.
Third, you need to know that Services leverage labels to dynamically select the Pods in the cluster they will send traffic to.
Theory
Figure 6.1 shows a simple Pod-based application deployed via a Kubernetes Deployment. It shows a client (which could be another component of the app) that does not have a reliable network endpoint for accessing the Pods. Remember, it’s a bad idea to talk directly to an individual Pod because that Pod could disappear at any point via scaling operations, updates and rollbacks, and failures.
Figure 6.1
Figure 6.2 shows the same application with a Service added into the mix. The Service is associated with the Pods and fronts them with a stable IP, DNS, and port. It also load-balances requests across the Pods.
Figure 6.2
With a Service in front of a set of Pods, the Pods can scale up and down, they can fail, and they can be updated, rolled back… and while events like these occur, the Service in front of them observes the changes and updates its list of healthy Pods. But it never changes the stable IP, DNS, and port that it exposes.
Think of Services as having a static front-end and a dynamic back-end. The front-end, consisting of the IP, DNS name, and port, never changes. The back-end, consisting of the Pods, can be constantly changing.
Labels and loose coupling
Services are loosely coupled with Pods via labels and label selectors. This is the same technology that loosely couples Deployments to Pods and is key to the flexibility provided by Kubernetes. Figure 6.3 shows an example where 3 Pods are labelled as zone=prod and version=1, and the Service has a label selector that matches.
Figure 6.3
In Figure 6.3, the Service is providing stable networking to all three Pods – you can send requests to the Service and it will forward them on to the Pods. It also provides simple load-balancing.
For a Service to match a set of Pods, and therefore send traffic to them, the Pods must possess every label in the Service’s label selector. However, the Pods can have additional labels that are not listed in the Service’s label selector. If that’s confusing, the examples in Figures 6.4 and 6.5 should help.
Figure 6.4 shows an example where the Service does not match any of the Pods. This is because the Service is looking for Pods that have two labels, but the Pods only possess one of them. The logic behind this is a Boolean AND.
Figure 6.4
Figure 6.5 shows an example that does work. It works because the Service is looking for two labels and the Pods in the diagram possess both. It doesn’t matter that the Pods possess additional labels the Service isn’t looking for – all that matters is that the Pods possess every label the Service is looking for.
Figure 6.5
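To make the matching rules concrete, here’s a minimal sketch of the Figure 6.5 scenario using the zone=prod and version=1 labels from Figure 6.3. The object names and the extra env=dev label are assumptions added purely for illustration:

apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  ports:
  - port: 8080
  selector:          # Boolean AND: a Pod must have BOTH labels to match
    zone: prod
    version: "1"
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    zone: prod       # matches
    version: "1"     # matches
    env: dev         # extra label, ignored by the selector
spec:
  containers:
  - name: web
    image: nigelpoulton/k8sbook:latest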
The following excerpts, from a Service YAML and Deployment YAML, show how selectors and labels are implemented. I’ve added comments to the lines of interest.
svc.yml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  ports:
  - port: 8080
  selector:
    app: hello-world   # Label selector
    # Service is looking for Pods with the label `app=hello-world`
deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world   # Pod labels
        # The label matches the Service's label selector
    spec:
      containers:
      - name: hello-ctr    # container name (required) – call it anything you like
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080
In the example files, the Service has a label selector (.spec.selector) with a single value app=hello-world. This is the label that the Service is looking for when it queries the cluster for matching Pods. The Deployment specifies a Pod template with the same app=hello-world label (.spec.template.metadata.labels). This means that any Pods it deploys will have the app=hello-world label. It is these two attributes that loosely couple the Service to the Deployment’s Pods.
When the Deployment and the Service are deployed, the Service will select all 10 Pod replicas and provide them with a stable networking endpoint and load-balance traffic to them.
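Assuming you’ve applied both of the example manifests, you can see the loose coupling in action with two quick commands – the first lists the Pods using the same label selector as the Service, and the second shows the Endpoints object (more on this next) tracking their IPs:

$ kubectl get pods --selector=app=hello-world
$ kubectl get endpoints hello-svc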
Services and Endpoint objects
As Pods come and go (scaling up and down, failures, rolling updates etc.), the Service dynamically updates its list of healthy matching Pods. It does this through a combination of the label selector and a construct called an Endpoints object.
Every Service you create automatically gets an associated Endpoints object. This Endpoints object is just a dynamic list of all the healthy Pods on the cluster that match the Service’s label selector.
It works like this…
Kubernetes is constantly evaluating the Service’s label selector against the current list of healthy Pods on the cluster. Any new Pods that match the selector get added to the Endpoints object, and any Pods that disappear get removed. This means the Endpoints object is always up to date. Then, when a Service is sending traffic to Pods, it queries its Endpoints object for the latest list of healthy matching Pods.
When sending traffic to Pods, via a Service, an application will normally query the cluster’s internal DNS for the IP address of a Service. It then sends the traffic to this stable IP address and the Service sends it on to a Pod. However, a Kubernetes-native application (that’s a fancy way of saying an application that understands Kubernetes and can query the Kubernetes API) can query the Endpoints API directly, bypassing the DNS lookup and use of the Service’s IP.
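For example, you can see exactly what such an application gets back by asking the API server for the Endpoints object directly. A sketch, assuming the hello-svc Service from earlier and the default namespace:

$ kubectl get --raw /api/v1/namespaces/default/endpoints/hello-svc

This returns the same Endpoints object you’d see with kubectl get endpoints hello-svc, just as raw JSON straight from the API.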
Now that you know the fundamentals of how Services work, let’s look at some use-cases.
Accessing Services from inside the cluster
Kubernetes supports several types of Service. The default type is ClusterIP.
A ClusterIP Service has a stable IP address and port that is only accessible from inside the cluster. It’s programmed into the network fabric and guaranteed to be stable for the life of the Service. Programmed into the network fabric is a fancy way of saying the network just knows about it and you don’t need to bother with the details (stuff like low-level IPTABLES and IPVS rules etc).
Anyway, the ClusterIP gets registered against the name of the Service on the cluster’s internal DNS service. All Pods in the cluster are pre-programmed to know about the cluster’s DNS service, meaning all Pods are able to resolve Service names.
Let’s look at a simple example.
Creating a new Service called “magic-sandbox” will trigger the following. Kubernetes will register the name “magic-sandbox”, along with the ClusterIP and port, with the cluster’s DNS service. The name, ClusterIP, and port are guaranteed to be long-lived and stable, and all Pods in the cluster send service discovery requests to the internal DNS and will therefore be able to resolve “magic-sandbox” to the ClusterIP. IPTABLES or IPVS rules are distributed across the cluster that ensure traffic sent to the ClusterIP gets routed to Pods on the backend.
Net result… as long as a Pod (application microservice) knows the name of a Service, it can resolve that to its ClusterIP address and connect to the desired Pods.
This only works for Pods and other objects on the cluster, as it requires access to the cluster’s DNS service. It does not work outside of the cluster.
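You can prove this from inside any Pod. A hedged example – it assumes a Pod whose image includes nslookup and a cluster using the default cluster.local DNS domain:

$ kubectl exec -it <some-pod> -- nslookup magic-sandbox

The name resolves to the Service’s ClusterIP, and the fully qualified name follows the pattern <service-name>.<namespace>.svc.cluster.local – for example, magic-sandbox.default.svc.cluster.local.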
Accessing Services from outside the cluster
Kubernetes has another type of Service called a NodePort Service. This builds on top of ClusterIP and enables access from outside of the cluster.
You already know that the default Service type is ClusterIP, and that it registers a DNS name, virtual IP, and port with the cluster’s DNS. A different type of Service, called a NodePort Service, builds on this by adding another port that can be used to reach the Service from outside the cluster. This additional port is called the NodePort.
The following example represents a NodePort Service:
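The text doesn’t show the manifest itself, but a minimal sketch of such a Service might look like this. The selector label is an assumption – match it to your own Pods’ labels – and targetPort defaults to the same value as port if omitted:

apiVersion: v1
kind: Service
metadata:
  name: magic-sandbox
spec:
  type: NodePort
  ports:
  - port: 8080         # stable port inside the cluster
    nodePort: 30050    # port exposed on every cluster node
  selector:
    app: magic-sandbox # assumed label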
This magic-sandbox Service can be accessed from inside the cluster via magic-sandbox on port 8080, or 172.12.5.17 on port 8080. It can also be accessed from outside of the cluster by sending a request to the IP address of any cluster node on port 30050.
At the bottom of the stack are cluster nodes that host Pods. You add a Service and use labels to associate it with Pods. The Service object has a reliable NodePort mapped to every node in the cluster – the NodePort value is the same on every node. This means that traffic from outside of the cluster can hit any node in the cluster on the NodePort and get through to the application (Pods).
Figure 6.6 shows a NodePort Service where 3 Pods are exposed externally on port 30050 on every node in the cluster. In step 1, an external client hits Node2 on port 30050. In step 2 it is redirected to the Service object (this happens even though Node2 isn’t running a Pod from the Service). Step 3 shows that the Service has an associated Endpoint object with an always-up-to-date list of Pods matching the label selector. Step 4 shows the client being directed to pod1 on Node1.
Figure 6.6
The Service could just as easily have directed the client to pod2 or pod3. In fact, future requests may go to other Pods as the Service performs basic load-balancing.
There are other types of Services, such as LoadBalancer and ExternalName.
LoadBalancer Services integrate with load-balancers from your cloud provider such as AWS, Azure, DO, IBM Cloud, and GCP. They build on top of NodePort Services (which in turn build on top of ClusterIP Services) and allow clients on the internet to reach your Pods via one of your cloud’s load-balancers. They’re extremely easy to setup. However, they only work if you’re running your Kubernetes cluster on a supported cloud platform. E.g. you cannot leverage an ELB load-balancer on AWS if your Kubernetes cluster is running on Microsoft Azure.
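As a sketch, and assuming your cluster is on a supported cloud, converting the magic-sandbox Service to a LoadBalancer is mostly a one-line change:

apiVersion: v1
kind: Service
metadata:
  name: magic-sandbox
spec:
  type: LoadBalancer    # the cloud platform provisions an external load-balancer
  ports:
  - port: 8080
  selector:
    app: magic-sandbox  # assumed label, as before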
ExternalName Services route traffic to systems outside of your Kubernetes cluster (all other Service types route traffic to Pods in your cluster).
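A hedged ExternalName example – the Service name and external hostname below are made up for illustration:

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # in-cluster lookups of external-db return this name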
Service discovery
Kubernetes implements Service discovery in a couple of ways:
- DNS (preferred)
- Environment variables (definitely not preferred)
DNS-based Service discovery requires the DNS cluster-add-on – this is just a fancy name for the native Kubernetes DNS service. I can’t remember ever seeing a cluster without it, and if you followed the installation methods from the “Installing Kubernetes” chapter, you’ll already have this. Behind the scenes it implements:
- Control plane Pods running a DNS service
- A Service object called kube-dns that sits in front of the Pods
- Kubelets program every container with the knowledge of the DNS (via /etc/resolv.conf)
The DNS add-on constantly watches the API server for new Services and automatically registers them in DNS.
This means every Service gets a DNS name that is resolvable across the entire cluster.
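You can see the DNS configuration the kubelet programs into containers by looking at /etc/resolv.conf from inside a Pod. The values below are illustrative – the nameserver is whatever ClusterIP your cluster’s DNS Service was assigned:

$ kubectl exec -it <some-pod> -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5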
The alternative form of service discovery is through environment variables. Every Pod gets a set of environment variables that resolve every Service currently on the cluster. However, this is an extremely limited fall-back in case you’re not using DNS in your cluster.
A major problem with environment variables is that they’re only injected into Pods when the Pod is first created. This means Pods have no way of learning about Services added to the cluster after the Pod itself was created. This is far from ideal, and a major reason DNS is the preferred method. Another limitation shows up in clusters with a lot of Services, where the sheer number of injected variables becomes unwieldy.
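To see what these variables look like, a Service called hello-svc with a ClusterIP of 100.70.40.2 and port 8080 would show up in Pods created after it roughly as follows (values are illustrative and cluster-specific):

$ kubectl exec -it <some-pod> -- env | grep HELLO_SVC
HELLO_SVC_SERVICE_HOST=100.70.40.2
HELLO_SVC_SERVICE_PORT=8080
HELLO_SVC_PORT=tcp://100.70.40.2:8080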
Summary of Service theory
Services are all about providing stable networking for Pods. They also provide load-balancing and ways to be accessed from outside of the cluster.
The front-end of a Service provides a stable IP, DNS name and port that is guaranteed not to change for the entire life of the Service. The back-end of a Service uses labels to load-balance traffic across a potentially dynamic set of application Pods.
Hands-on with Services
We’re about to get hands-on and put the theory to the test.
You’ll augment a simple single-Pod app with a Kubernetes Service. And you’ll learn how to do it in two ways:
- The imperative way (not recommended)
- The declarative way (recommended)
The imperative way
Warning! The imperative way is not the Kubernetes way. It introduces the risk that you make imperative changes and never update your declarative manifests, rendering the manifests incorrect and out-of-date. This introduces the risk that stale manifests are subsequently used to update the cluster at a later date, unintentionally overwriting important changes that were made imperatively.
Use kubectl to declaratively deploy the following Deployment (later steps will be done imperatively).
The YAML file is called deploy.yml and can be found in the services folder in the book’s GitHub repo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr    # container name (required) – call it anything you like
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080
$ kubectl apply -f deploy.yml
deployment.apps/web-deploy created
Now that the Deployment is running, it’s time to imperatively deploy a Service for it.
The command to imperatively create a Kubernetes Service is kubectl expose. Run the following command to create a new Service that will provide networking and load-balancing for the Pods deployed in the previous step.
$ kubectl expose deployment web-deploy \
  --name=hello-svc \
  --target-port=8080 \
  --type=NodePort

service/hello-svc exposed
Let’s explain what the command is doing. kubectl expose is the imperative way to create a new Service object. deployment web-deploy is telling Kubernetes to expose the web-deploy Deployment that you created in the previous step. --name=hello-svc tells Kubernetes to name this Service “hello-svc”, and --target-port=8080 tells it which port the app is listening on (this is not the cluster-wide NodePort that you’ll access the Service on). Finally, --type=NodePort tells Kubernetes you want a cluster-wide port for the Service.
Once the Service is created, you can inspect it with the kubectl describe svc hello-svc command.
$ kubectl describe svc hello-svc
Name:                     hello-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=hello-world
Type:                     NodePort
IP:                       192.168.201.116
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30175/TCP
Endpoints:                192.168.128.13:8080,192.168.128.249:8080 + more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Some interesting values in the output include:
- Selector is the list of labels that Pods must have in order for the Service to send traffic to them
- IP is the permanent internal ClusterIP (VIP) of the Service
- Port is the port that the Service listens on inside the cluster
- TargetPort is the port that the application is listening on
- NodePort is the cluster-wide port that can be used to access the Service from outside the cluster
- Endpoints is the dynamic list of healthy Pod IPs that currently match the Service’s label selector.
Now that you know the cluster-wide port that the Service is accessible on (30175), you can open a web browser and access the app. In order to do this, you will need to know the IP address of at least one of the nodes in your cluster, and you will need to be able to reach it from your browser – e.g. a publicly routable IP if you’re accessing via the internet.
Figure 6.7 shows a web browser accessing a cluster node with an IP address of 54.246.255.52 on the cluster-wide NodePort 30175.
Figure 6.7
The app you’ve deployed is a simple web app. It’s built to listen on port 8080, and you’ve configured a Kubernetes Service to map port 30175 on every cluster node back to port 8080 on the app. By default, cluster-wide ports (NodePort values) are between 30,000 and 32,767. In this example it was dynamically assigned, but you can also specify a port.
Coming up next you’re going to see how to do the same thing the proper way – the declarative way. To do that, you need to clean up by deleting the Service you just created. You can do this with the following kubectl delete svc command.
$ kubectl delete svc hello-svc
service "hello-svc" deleted
The declarative way
Time to do things the proper way… the Kubernetes way.
A Service manifest file
You’ll use the following Service manifest file to deploy the same Service that you deployed in the previous section.
However, this time you’ll specify a value for the cluster-wide port.
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    targetPort: 8080
    protocol: TCP
  selector:
    app: hello-world
Let’s step through some of the lines.
Services are mature objects and are fully defined in the v1 core API group (.apiVersion).
The .kind field tells Kubernetes you’re defining a Service object.
The .metadata section defines a name for the Service. You can also apply labels here. Any labels you add here are used to identify the Service and are not related to labels for selecting Pods.
The .spec section is where you actually define the Service. In this example, you’re telling Kubernetes to deploy a NodePort Service. The port value configures the Service to listen on port 8080 for internal requests, and the NodePort value tells it to listen on 30001 for external requests. The targetPort value is part of the Service’s back-end configuration and tells Kubernetes to send traffic to the application Pods on port 8080. Then you’re explicitly telling it to use TCP (default).
Finally, .spec.selector tells the Service to send traffic to all Pods in the cluster that have the app=hello-world label. This means it will provide stable networking and load-balancing across all Pods with that label.
Before deploying and testing the Service, let’s remind ourselves of the major Service types.
Common Service types
The three common ServiceTypes are:
- ClusterIP. This is the default option and gives the Service a stable IP address internally within the cluster. It will not make the Service available outside of the cluster.
- NodePort. This builds on top of ClusterIP and adds a cluster-wide TCP or UDP port. It makes the Service available outside of the cluster on a stable port.
- LoadBalancer. This builds on top of NodePort and integrates with cloud-based load-balancers.
There’s another Service type called ExternalName. This is used to direct traffic to services that exist outside of the Kubernetes cluster.
The manifest needs POSTing to the API server. The simplest way to do this is with kubectl apply.
The YAML file is called svc.yml and can be found in the services folder of the book’s GitHub repo.
$ kubectl apply -f svc.yml
service/hello-svc created
This command tells Kubernetes to deploy a new object from a file called svc.yml. The .kind field in the YAML file tells Kubernetes that you’re deploying a new Service object.
Introspecting Services
Now that the Service is deployed, you can inspect it with the usual kubectl get and kubectl describe commands.
$ kubectl get svc hello-svc
NAME        TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello-svc   NodePort   100.70.40.2   <none>        8080:30001/TCP   8s
$ kubectl describe svc hello-svc
Name:                     hello-svc
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration...
Selector:                 app=hello-world
Type:                     NodePort
IP:                       100.70.40.2
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30001/TCP
Endpoints:                100.96.1.10:8080, 100.96.1.11:8080 + more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
In the previous example, you exposed the Service as a NodePort on port 30001 across the entire cluster. This means you can point a web browser to that port on any node and reach the Service and the Pods it’s proxying. You will need to use the IP address of a node you can reach, and you will need to make sure that any firewall and security rules allow the traffic to flow.
Figure 6.8 shows a web browser accessing the app via a cluster node with an IP address of 54.246.255.52 on the cluster-wide port 30001.
Figure 6.8
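You can run the same test from the command line with curl. The node IP below is the one from Figure 6.8 – substitute the IP of one of your own nodes:

$ curl http://54.246.255.52:30001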
Endpoints objects
Earlier in the chapter, we said that every Service gets its own Endpoints object with the same name as the Service. This object holds a list of all the Pods the Service matches and is dynamically updated as matching Pods come and go. You can see Endpoints with the normal kubectl commands.
The following command uses ep, the shortname for Endpoints objects.
$ kubectl get ep hello-svc
NAME ENDPOINTS AGE
hello-svc 100.96.1.10:8080, 100.96.1.11:8080 + 8 more... 1m
$ kubectl describe ep hello-svc
Name:         hello-svc
Namespace:    default
Labels:       <none>
Annotations:  endpoints.kubernetes.io/last-change...
Subsets:
  Addresses:          100.96.1.10,100.96.1.11,100.96.1.12...
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  8080  TCP
Events:  <none>
Summary of deploying Services
As with all Kubernetes objects, the preferred way of deploying and managing Services is the declarative way. Labels allow them to send traffic to a dynamic set of Pods. This means you can deploy new Services that will work with Pods and Deployments that are already running on the cluster and already in-use. Each Service gets its own Endpoints object that maintains an up-to-date list of matching Pods.
Real world example
Although everything you’ve learned so far is cool and interesting, the important questions are: How does it bring value? and How does it keep businesses running and make them more agile and resilient?
Let’s take a minute to run through a common real-world example – making updates to applications.
We all know that updating applications is a fact of life – bug fixes, new features, performance improvements etc.
Figure 6.9 shows a simple application deployed on a Kubernetes cluster as a bunch of Pods managed by a Deployment. As part of it, there’s a Service selecting on Pods with labels that match app=biz1 and zone=prod (notice how the Pods have both of the labels listed in the label selector). The application is up and running.
Figure 6.9
Now assume you need to push a new version, but you need to do it without causing downtime.
To do this, you can add Pods running the new version of the app as shown in Figure 6.10.
Behind the scenes, the updated Pods are labelled so that they match the existing label selector. The Service is now load-balancing requests across both versions of the app (version=4.1 and version=4.2). This happens because the Service’s label selector is being constantly evaluated, and its Endpoint object is constantly being updated with new matching Pods.
Once you’re happy with the updated version, forcing all traffic to use it is as simple as updating the Service’s label selector to include the label version=4.2. Suddenly the older Pods no longer match, and the Service will only forward traffic to the new version (Figure 6.11).
Figure 6.11
However, the old version still exists, you’re just not sending traffic to it anymore. This means that if you experience an issue with the new version, you can switch back to the previous version by simply changing the label selector on the Service to select on version=4.1 instead of version=4.2. See Figure 6.12.
Figure 6.12
Now everybody’s getting the old version.
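In manifest terms, switching versions is nothing more than editing the Service’s selector and re-applying the file. A sketch – the Service name and port are assumptions, only the labels come from the figures:

apiVersion: v1
kind: Service
metadata:
  name: biz1-svc          # assumed name
spec:
  ports:
  - port: 8080            # assumed port
  selector:
    app: biz1
    zone: prod
    version: "4.2"        # change to "4.1" to send everyone back to the old version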
This functionality can be used for all kinds of things – blue-green deployments, canary releases, you name it. So simple, yet so powerful.
Clean-up the lab with the following commands. These will delete the Deployment and Service used in the examples.
$ kubectl delete -f deploy.yml
$ kubectl delete -f svc.yml
Chapter Summary
In this chapter, you learned that Services bring stable and reliable networking to apps deployed on Kubernetes. They also perform load-balancing and allow you to expose elements of your application to the outside world (outside of the Kubernetes cluster).
The front-end of a Service is fixed, providing stable networking for the Pods behind it. The back-end of a Service is dynamic, allowing Pods to come and go without impacting the ability of the Service to provide load-balancing.
Services are first-class objects in the Kubernetes API and can be defined in the standard YAML manifest files.
They use label selectors to dynamically match Pods, and the best way to work with them is declaratively.