The Kubernetes Book by Nigel Poulton & Pushkar Joglekar, chapter name Service discovery

Service discovery

In this chapter, you’ll learn what service discovery is, why it’s important, and how it’s implemented in Kubernetes. You’ll also learn some troubleshooting tips.

 

To get the most from this chapter, you should know what a Kubernetes Service object is and how it works.

 

This was covered in the previous chapter.

 

This chapter is split into the following sections:

 

  • Quick background

 

  • Service registration

 

  • Service discovery

 

  • Service discovery and Namespaces

 

  • Troubleshooting service discovery

 

Quick background

 

Applications run inside of containers and containers run inside of Pods. Every Kubernetes Pod gets its own unique IP address, and all Pods connect to the same flat network called the Pod network. However, Pods are ephemeral. This means they come and go and should not be considered reliable. For example, scaling operations, rolling updates, rollbacks and failures all cause Pods to be added or removed from the network.

 

To address the unreliable nature of Pods, Kubernetes provides a Service object that sits in front of a set of Pods and provides a reliable name, IP address, and port. Clients connect to the Service object, which in turn load-balances requests to the target Pods.

 

Note: The word “service” has lots of meanings. When we use it with a capital “S” we are referring to the Kubernetes Service object that provides stable networking to a set of Pods.

 

Modern cloud-native applications are comprised of lots of small independent microservices that work together to create a useful application. For these microservices to work together, they need to be able to discover and connect to each other. This is where service discovery comes into play.

 

There are two major components to service discovery:

 

  • Service registration

 

  • Service discovery

 

Service registration

 

Service registration is the process of a microservice registering its connection details in a service registry so that other microservices can discover it and connect to it.

 

 

Figure 7.1

 

A few important things to note about this in Kubernetes:

 

  1. Kubernetes uses an internal DNS service as its service registry

  2. Services register with DNS (not individual Pods)

  3. The name, IP address, and network port of every Service is registered

 

For this to work, Kubernetes provides a well-known internal DNS service that we usually call the “cluster DNS”. The term well known means that it operates at an address known to every Pod and container in the cluster. It’s implemented in the kube-system Namespace as a set of Pods managed by a Deployment called coredns. These Pods are fronted by a Service called kube-dns. Behind the scenes, it’s based on a DNS technology called CoreDNS and runs as a Kubernetes-native application.

 

The previous sentence contains a lot of detail, so the following commands show how it's implemented. You can run these commands on your own Kubernetes cluster.

 

$ kubectl get svc -n kube-system -l k8s-app=kube-dns
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   192.168.200.10   <none>        53/UDP,53/TCP,9153/TCP   3h44m

$ kubectl get deploy -n kube-system -l k8s-app=kube-dns
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           3h45m

$ kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-fk4c9   1/1     Running   0          3h45m
coredns-5644d7b6d9-s5zlr   1/1     Running   0          3h45m

 

Every Kubernetes Service is automatically registered with the cluster DNS when it’s created. The registration process looks like this (exact flow might slightly differ):

 

  1. You POST a new Service manifest to the API Server

  2. The request is authenticated, authorized, and subjected to admission policies

  3. The Service is allocated a virtual IP address called a ClusterIP

  4. An Endpoints object (or Endpoint slices) is created to hold a list of Pods the Service will load-balance traffic to

  5. The Pod network is configured to handle traffic sent to the ClusterIP (more on this later)

  6. The Service's name and IP are registered with the cluster DNS

 

Step 6 is the secret sauce in the service registration process.

 

We mentioned earlier that the cluster DNS is a Kubernetes-native application. This means it knows it’s running on Kubernetes and implements a controller that watches the API Server for new Service objects. Any time it observes a new Service object, it creates the DNS records that allow the Service name to be resolved to its ClusterIP. This means that applications and Services do not need to perform service registration – the cluster DNS is constantly looking for new Services and automatically registers their details.
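If you want to see this in action, you can resolve a Service name from any Pod that has DNS tools installed, such as one based on the dnsutils image used later in the chapter. The Service name ent and the addresses shown are only examples taken from this chapter; yours will differ.

$ kubectl exec -it dnsutils -- nslookup ent
Server:    192.168.200.10
Address:   192.168.200.10#53

Name:   ent.default.svc.cluster.local
Address: 192.168.201.240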

 

It’s important to understand that the name registered for the Service is the value stored in its metadata.name property. The ClusterIP is dynamically assigned by Kubernetes.

 

apiVersion: v1
kind: Service
metadata:
  name: ent        <<---- Name registered with cluster DNS
spec:
  selector:
    app: web
  ports:
  ...

At this point, the front-end configuration of the Service is registered (name, IP, port) and the Service can be discovered by applications running in other Pods.
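A quick way to view the registered front-end details is to query the Service with kubectl. The ent Service and the values shown are only examples and assume the Service is in the default Namespace.

$ kubectl get svc ent
NAME   TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
ent    ClusterIP   192.168.201.240   <none>        8080/TCP   2m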

 

The service back-end

 

Now that the front-end of the Service is registered, the back-end needs building. This involves creating and maintaining a list of Pod IPs that the Service will load-balance traffic to.

 

As explained in the previous chapter, every Service has a label selector that determines which Pods the Service will load-balance traffic to. See Figure 7.2.

 

 

Figure 7.2

 

Kubernetes automatically creates an Endpoints object (or Endpoint slices) for every Service. These hold the list of Pods that match the label selector and will receive traffic from the Service. They’re also critical to how traffic is routed from the Service’s ClusterIP to Pod IPs (more on this soon).

 

The following command shows the Endpoints object for a Service called ent. It lists the IP addresses and ports of the two Pods that match the label selector.

 

$ kubectl get endpoints ent

NAME   ENDPOINTS                                   AGE
ent    192.168.129.46:8080,192.168.130.127:8080   14m

 

Figure 7.3 shows a Service called ent that will load-balance to two Pods. It also shows the Endpoints object with the IPs of the two Pods that match the Service’s label selector.

 

 

Figure 7.3

 

The kube-proxy process on every node watches the API Server for new Endpoints objects. When it sees them, it creates local networking rules that redirect ClusterIP traffic to Pod IPs. In modern Linux-based Kubernetes clusters, the technology used to create these rules is the Linux IP Virtual Server (IPVS). Older versions of Kubernetes used iptables.
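If you're curious, you can inspect these rules on a node. The following is only a sketch: it assumes your cluster runs kube-proxy in IPVS mode, that you have a shell on a node, and that the ipvsadm tool is installed. The addresses are the ent Service and Pods from the example above.

$ sudo ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.201.240:8080 rr
  -> 192.168.129.46:8080          Masq    1      0          0
  -> 192.168.130.127:8080         Masq    1      0          0

Each ClusterIP and port shows up as a virtual server, and the Pod IPs from the Endpoints object appear as its real servers.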

 

At this point the Service is fully registered and ready to be discovered:

 

  • Its front-end configuration is registered with DNS

 

  • Its back-end configuration is stored in an Endpoints object (or Endpoint slices) and the network is ready to handle traffic

 

Let’s summarise the service registration process with the help of a simple flow diagram.

 

Summarising service registration

 

 

Figure 7.4

 

You POST a new Service configuration to the API Server and the request is authenticated and authorized. The Service is allocated a ClusterIP and its configuration is persisted to the cluster store. An associated Endpoints object is created to hold the list of Pod IPs that match the label selector. The cluster DNS is running as a Kubernetes-native application and watching the API Server for new Service objects. It sees the new Service and registers the appropriate DNS A and SRV records. Every node is running a kube-proxy that sees the new Service and Endpoints objects and creates IPVS rules on every node so that traffic to the Service’s ClusterIP is redirected to one of the Pods that match its label selector.

 

Service discovery

 

Let’s assume there are two microservices applications on a single Kubernetes cluster – enterprise and voyager. The Pods for the enterprise app sit behind a Kubernetes Service called ent and the Pods for the voyager app sit behind another Kubernetes Service called voy.

 

Both are registered with DNS as follows:

 

  • ent: 192.168.201.240

 

  • voy: 192.168.200.217

 

 

Figure 7.5

 

 

For service discovery to work, every microservice needs to know two things:

 

 

  1. The name of the remote microservice they want to connect to

  2. How to convert the name to an IP address

 

The application developer is responsible for point 1 – coding the microservice with the names of microservices they connect to. Kubernetes takes care of point 2.

 

Converting names to IP addresses using the cluster DNS

 

Kubernetes automatically configures every container so that it can find and use the cluster DNS to convert Service names to IPs. It does this by populating every container's /etc/resolv.conf file with the IP address of the cluster DNS Service as well as any search domains that should be appended to unqualified names.

 

Note: An “unqualified name” is a short name such as ent. Appending a search domain converts an unqualified name into a fully qualified domain name (FQDN) such as ent.default.svc.cluster.local.

 

The following snippet shows a container that is configured to send DNS queries to the cluster DNS at 192.168.200.10. It also lists the search domains to append to unqualified names.

 

$ cat /etc/resolv.conf

 

search svc.cluster.local cluster.local default.svc.cluster.local

 

nameserver 192.168.200.10

 

options ndots:5

 

The following snippet shows that nameserver in /etc/resolv.conf matches the IP address of the cluster DNS (the kube-dns Service).

 

$ kubectl get svc -n kube-system -l k8s-app=kube-dns

NAME       TYPE        CLUSTER-IP       PORT(S)                  AGE
kube-dns   ClusterIP   192.168.200.10   53/UDP,53/TCP,9153/TCP   3h53m

 

If Pods in the enterprise app need to connect to Pods in the voyager app, they send a request to the cluster DNS asking it to resolve the name voy to an IP address. The cluster DNS will return the value of the ClusterIP (192.168.200.217).
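As a quick illustration, and assuming the container image in the enterprise Pods includes the getent utility (most glibc-based images do), you could test the lookup from one of those Pods. The Deployment name enterprise is assumed here, the output is only indicative, and the IP is the example voy ClusterIP from above.

$ kubectl exec -it deploy/enterprise -- getent hosts voy
192.168.200.217   voy.default.svc.cluster.local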

 

At this point, the enterprise Pods have an IP address to send traffic to. However, this ClusterIP is a virtual IP (VIP) that requires some more network magic in order for requests to reach voyager Pods.

 

Some network magic

 

 

Once a Pod has the ClusterIP of a Service, it sends traffic to that IP address. However, the address is on a special network called the service network and there are no routes to it! This means the app's container doesn't know where to send the traffic, so it sends it to its default gateway.

 

Note: A default gateway is where a device sends traffic that it doesn't have a specific route for. The default gateway will normally forward traffic to another device with a larger routing table that might have a route for the traffic. A simple analogy might be driving from City A to City B. The local roads in City A probably don't have signposts to City B, so you follow signs to the major highway/motorway. Once on the highway/motorway there is more chance you'll find directions to City B. If the first signpost doesn't have directions to City B, you keep driving until you see one that does. Routing is similar: if a device doesn't have a route for the destination network, the traffic is routed from one default gateway to the next until, hopefully, a device has a route to the required network.

 

The container's default gateway sends the traffic to the Node the container is running on.

 

The Node doesn’t have a route to the service network either, so it sends the traffic to its own default gateway. Doing this causes the traffic to be processed by the Node's kernel, which is where the magic happens!

 

Every Kubernetes Node runs a system service called kube-proxy. At a high-level, kube-proxy is responsible for capturing traffic destined for ClusterIPs and redirecting it to the IP addresses of Pods that match the Service’s label selector. Let’s look a bit closer…

 

kube-proxy is a Pod-based Kubernetes-native app that implements a controller that watches the API Server for new Service and Endpoints objects. When it sees them, it creates local IPVS rules that tell the Node to intercept traffic destined for the Service’s ClusterIP and forward it to individual Pod IPs.

 

This means that every time a Node's kernel processes traffic headed for an address on the service network, a trap occurs and the traffic is redirected to the IP of a healthy Pod matching the Service’s label selector.

 

Kubernetes originally used iptables to do this trapping and load-balancing. However, it was replaced by IPVS in Kubernetes 1.11. This is because IPVS is a high-performance kernel-based L4 load-balancer that scales better than iptables and implements better load-balancing algorithms.
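If you're not sure which mode your cluster is using, one place to look on kubeadm-based clusters (where kube-proxy stores its configuration in a ConfigMap called kube-proxy) is shown below. This is only a sketch; other installers store the configuration elsewhere, and an empty mode value means kube-proxy falls back to its default.

$ kubectl get configmap kube-proxy -n kube-system -o yaml | grep "mode:"
    mode: ipvs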

 

Summarising service discovery

 

Let’s quickly summarise the service discovery process with the help of the flow diagram in Figure 7.6.

 

 

Figure 7.6

 

Assume a microservice called “enterprise” needs to send traffic to a microservice called “voyager”. To start this flow, the “enterprise” microservice needs to know the name of the Kubernetes Service object sitting in front of the “voyager” microservice. We’ll assume it’s called “voy”, but it is the responsibility of the application developer to ensure this is known.

 

An instance of the “enterprise” microservice sends a query to the cluster DNS (defined in the /etc/resolv.conf file of every container) asking it to resolve the name of the “voy” Service to an IP address. The cluster DNS replies with the ClusterIP (virtual IP) and the instance of the “enterprise” microservice sends requests to this ClusterIP. However, there are no routes to the service network that the ClusterIP is on. This means the requests are sent to the container’s default gateway and eventually sent to the Node the container is running on. The Node has no route to the service network, so it sends the traffic to its own default gateway. En route, the request is processed by the Node’s kernel. A trap is triggered and the request is redirected to the IP address of a Pod that matches the Service's label selector.

 

The Node has routes to Pod IPs and the requests reach a Pod and are processed.

 

Service discovery and Namespaces

 

Two things are important if you want to understand how service discovery works within and across Namespaces:

 

  1. Every cluster has an address space

  2. Namespaces partition the cluster address space

 

Every cluster has an address space based on a DNS domain that we usually call the cluster domain. By default, it’s called cluster.local, and Service objects are placed within that address space. For example, a Service called ent will have a fully qualified domain name (FQDN) of ent.default.svc.cluster.local.

The format of the FQDN is <object-name>.<namespace>.svc.cluster.local.

 

Namespaces allow you to partition the address space below the cluster domain. For example, creating a couple of Namespaces called prod and dev will give you two address spaces that you can place Services and other objects in:

 

 

  • dev: <object-name>.dev.svc.cluster.local

 

  • prod: <object-name>.prod.svc.cluster.local

 

Object names must be unique within Namespaces but not across Namespaces. This means that you cannot have two Service objects called “ent” in the same Namespace, but you can if they are in different Namespaces. This is useful for parallel development and production configurations. For example, Figure 7.7 shows a single cluster address space divided into dev and prod, with identical instances of the ent and voy Services deployed to each.

 

 

Figure 7.7

 

Pods in the prod Namespace can connect to Services in the local Namespace using short names such as ent and voy. To connect to objects in a remote Namespace requires FQDNs such as ent.dev.svc.cluster.local and voy.dev.svc.cluster.local.

 

As we’ve seen, Namespaces partition the cluster address space. They are also good for implementing access control and resource quotas. However, they are not workload isolation boundaries and should not be used to isolate hostile workloads.

 

Service discovery example

 

Let’s walk through a quick example.

 

The following YAML is called sd-example.yml in the service-discovery folder of the book's GitHub repo. It defines two Namespaces, two Deployments, two Services, and a standalone jump Pod. The two Deployments have identical names, as do the Services. However, they’re deployed to different Namespaces, so this is allowed. The jump Pod is deployed to the dev Namespace.

 

 

Figure 7.8

 

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: enterprise
  labels:
    app: enterprise
  namespace: dev
spec:
  selector:
    matchLabels:
      app: enterprise
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: enterprise
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - image: nigelpoulton/k8sbook:text-dev
        name: enterprise-ctr
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: enterprise
  labels:
    app: enterprise
  namespace: prod
spec:
  selector:
    matchLabels:
      app: enterprise
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: enterprise
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - image: nigelpoulton/k8sbook:text-prod
        name: enterprise-ctr
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ent
  namespace: dev
spec:
  selector:
    app: enterprise
  ports:
  - port: 8080
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: ent
  namespace: prod
spec:
  selector:
    app: enterprise
  ports:
  - port: 8080
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: jump
  namespace: dev
spec:
  terminationGracePeriodSeconds: 5
  containers:
  - name: jump
    image: ubuntu
    tty: true
    stdin: true

Deploy the configuration to your cluster.

$ kubectl apply -f sd-example.yml
namespace/dev created
namespace/prod created
deployment.apps/enterprise created
deployment.apps/enterprise created
service/ent created
service/ent created
pod/jump created

 

Check the configuration was correctly applied. The following outputs are trimmed and do not show all objects.

 

$ kubectl get all -n dev
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/ent   ClusterIP   192.168.202.57   <none>        8080/TCP   43s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/enterprise   2/2     2            2           43s
<snip>

$ kubectl get all -n prod
NAME          TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
service/ent   ClusterIP   192.168.203.158   <none>        8080/TCP   82s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/enterprise   2/2     2            2           52s
<snip>

The next steps will:

 

  1. Log on to the main container of the jump Pod in the dev Namespace

  2. Check the container’s /etc/resolv.conf file

  3. Connect to the ent app in the dev Namespace using the Service’s short name

  4. Connect to the ent app in the prod Namespace using the Service’s FQDN

 

To help with the demo, the versions of the ent app used in each Namespace are different.

 

Log on to the jump Pod.

 

 

$ kubectl exec -it jump -n dev -- bash
root@jump:/#

 

Your terminal prompt will change to indicate you are attached to the jump Pod.

 

Inspect the contents of the /etc/resolv.conf file and check that the search domains listed include the dev Namespace (search dev.svc.cluster.local) and not the prod Namespace.

 

$ cat /etc/resolv.conf

 

search dev.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local

 

nameserver 192.168.200.10

 

options ndots:5

 

The nameserver value will match the ClusterIP of the kube-dns Service on your cluster. This is the well-known IP address that handles DNS/service discovery traffic.
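If you ever want to double-check this, you can compare the two values from a separate terminal. Any running Pod will do; the jump Pod is used here purely as an example, and the IP shown is from the earlier output.

$ kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'
192.168.200.10
$ kubectl exec -it jump -n dev -- grep nameserver /etc/resolv.conf
nameserver 192.168.200.10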

 

Install the curl utility.

 

$ apt-get update && apt-get install curl -y
<snip>

 

Use curl to connect to the version of the app running in dev by using the ent short name.

 

$ curl ent:8080

 

Hello from the DEV Namespace!

 

Hostname: enterprise-7d49557d8d-k4jjz

 

The “Hello from the DEV Namespace” response proves that curl connected to the dev instance of the app.

 

When the curl command was issued, the container automatically appended dev.svc.cluster.local to the ent name and sent the query to the IP address of the cluster DNS specified in /etc/resolv.conf. DNS returned the ClusterIP for the ent Service running in the local dev Namespace and the app sent the traffic to that address. En route to the Node's default gateway, the traffic triggered a trap in the Node’s kernel and was redirected to one of the Pods hosting the simple web application.

 

Run the curl command again, but this time append the domain name of the prod Namespace. This will cause the cluster DNS to return the ClusterIP for the instance in the prod Namespace, and traffic will eventually reach a Pod running in prod.

 

$ curl ent.prod.svc.cluster.local:8080
Hello from the PROD Namespace!
Hostname: enterprise-5464d8c4f9-v7xsk

 

This time the response comes from a Pod in the prod Namespace.

 

The test proves that short names are resolved to the local Namespace (the same Namespace the app is running in) and connecting across Namespaces requires FQDNs.
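You can also see the two different ClusterIPs being returned without curl by using getent inside the jump Pod. The IP addresses below are the ones from the earlier kubectl get all output and will be different on your cluster.

root@jump:/# getent hosts ent
192.168.202.57    ent.dev.svc.cluster.local
root@jump:/# getent hosts ent.prod.svc.cluster.local
192.168.203.158   ent.prod.svc.cluster.local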

 

Remember to exit your terminal from the container by typing exit.

 

Troubleshooting service discovery

 

Service registration and discovery involves a lot of moving parts. If a single one of them stops working, the whole process can potentially break. Let’s quickly run through what needs to be working and how to check them.

 

Kubernetes uses the cluster DNS as its service registry. It runs as a set of Pods in the kube-system Namespace with a Service object providing a stable network endpoint. The important components are:

 

  • Pods: Managed by the coredns Deployment

 

  • Service: A ClusterIP Service called kube-dns listening on port 53 TCP/UDP

 

  • Endpoint: Also called kube-dns

 

All objects relating to the cluster DNS are tagged with the k8s-app=kube-dns label. This is helpful when filtering kubectl output.

 

Make sure that the coredns Deployment and its managed Pods are up and running.

 

$ kubectl get deploy -n kube-system -l k8s-app=kube-dns
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           2d21h

$ kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-74pv7   1/1     Running   0          2d21h
coredns-5644d7b6d9-s759f   1/1     Running   0          2d21h

 

Check the logs from each of the coredns Pods. You’ll need to substitute the names of the Pods from your own environment. The following output is typical of a working DNS Pod.

 

$ kubectl logs coredns-5644d7b6d9-74pv7 -n kube-system
2020-02-19T21:31:01.456Z [INFO] plugin/reload: Running configuration...
2020-02-19T21:31:01.457Z [INFO] CoreDNS-1.6.2
2020-02-19T21:31:01.457Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb

 

Assuming the Pods and Deployment are working, you should also check the Service and associated Endpoints object. The output should show that the service is up, has an IP address in the ClusterIP field, and is listening on port 53 TCP/UDP.

 

The ClusterIP address for the kube-dns Service should match the IP address in the /etc/resolv.conf files of all containers running on the cluster. If the IP addresses are different, containers will send DNS requests to the wrong IP address.

 

$ kubectl get svc kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   192.168.200.10   <none>        53/UDP,53/TCP,9153/TCP   2d21h

 

The associated kube-dns Endpoints object should also be up and have the IP addresses of the coredns Pods listening on port 53 TCP and UDP.

 

$ kubectl get ep -n kube-system -l k8s-app=kube-dns
NAME       ENDPOINTS                                                           AGE
kube-dns   192.168.128.24:53,192.168.128.3:53,192.168.128.24:53 + 3 more...    2d21h

 

 

 

Once you’ve verified the fundamental DNS components are up and working, you can proceed to perform more detailed and in-depth troubleshooting. Here are a few basic tips.

 

Start a troubleshooting Pod that has your favourite networking tools installed (ping, traceroute, curl, dig, nslookup etc.). The standard gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 image is a popular choice if you don’t have your own custom image with your tools installed. Unfortunately, there is no latest image in the repo. This means you have to specify a version. At the time of writing, 1.3 was the latest version.

 

The following command will start a new standalone Pod called dnsutils, based on the dnsutils image just mentioned. It will also log your terminal on to it.

 

$ kubectl run -it dnsutils \
  --image gcr.io/kubernetes-e2e-test-images/dnsutils:1.3

 

A common way to test DNS resolution is to use nslookup to resolve the kubernetes.default Service that sits in front of the API Server. The query should return an IP address and the name kubernetes.default.svc.cluster.local.

 

# nslookup kubernetes

 

Server:               192.168.200.10

 

Address:             192.168.200.10#53

 

Name:   kubernetes.default.svc.cluster.local

 

Address: 192.168.200.1

 

The first two lines should return the IP address of your cluster DNS. The last two lines should show the FQDN of the kubernetes Service and its ClusterIP. You can verify the ClusterIP of the kubernetes Service by running a kubectl get svc kubernetes command.
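For reference, the output of that command should look something like the following. The ClusterIP is specific to each cluster; the 192.168.200.1 address here simply matches the nslookup example above, and the AGE is illustrative.

$ kubectl get svc kubernetes
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.200.1   <none>        443/TCP   2d21h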

 

Errors such as “nslookup: can’t resolve kubernetes” are possible indicators that DNS is not working. A possible solution is to restart the coredns Pods. These are managed by a Deployment object and will be automatically recreated.

 

The following command deletes the DNS Pods and must be run from a terminal with kubectl installed. If you’re still logged on to the dnsutils Pod you’ll need to type exit to log off.

 

 

$ kubectl delete pod -n kube-system -l k8s-app=kube-dns
pod "coredns-5644d7b6d9-2pdmd" deleted
pod "coredns-5644d7b6d9-wsjzp" deleted

 

Verify that the Pods have restarted and test DNS again.
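A minimal check, reusing commands from earlier in the chapter, might look like this. The Pod names, ages, and addresses are only examples and will differ on your cluster.

$ kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-bkp2w   1/1     Running   0          30s
coredns-5644d7b6d9-qtvhm   1/1     Running   0          30s

$ kubectl exec -it dnsutils -- nslookup kubernetes
Server:    192.168.200.10
Address:   192.168.200.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 192.168.200.1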

 

Summary

 

In this chapter, you learned that Kubernetes uses the internal cluster DNS for service registration and service discovery.

 

All new Service objects are automatically registered with the cluster DNS and all containers are configured to know where to find the cluster DNS. This means that all containers will talk to the cluster DNS when they need to resolve a name to an IP address.

 

The cluster DNS resolves Service names to ClusterIPs. These IP addresses are on a special network called the service network and there are no routes to this network. Fortunately, every cluster Node is configured to trap on packets destined for the service network and redirect them to Pod IPs on the Pod network.