concept liveness probe in category kubernetes

appears as: liveness probe, liveness probes, The liveness probe, The liveness probes
Learn Kubernetes in a Month of Lunches MEAP V07

This is an excerpt from Manning's book Learn Kubernetes in a Month of Lunches MEAP V07.

12.2  Restarting unhealthy Pods with liveness probes

Liveness probes use the same health check mechanism as readiness probes - the test configurations might be identical in your Pod spec - but the action for a failed probe is different. Liveness probes take action at the compute level, restarting Pods if they become unhealthy. A restart means Kubernetes replaces the Pod container with a new one; the Pod itself isn’t replaced, and it continues to run on the same node, just with a new container.
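
For comparison, a readiness probe for the same application could use an identical HTTP GET check - only the field name changes. This is a minimal sketch rather than one of the book's listings; the /healthz path and port match listing 12.2, and the interval is an assumption:

readinessProbe:
  httpGet:
    path: /healthz         # same health endpoint the liveness probe uses
    port: 80
  periodSeconds: 5         # assumed interval - only the probe action needs to match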

Listing 12.2 shows a liveness probe for the random number API. This probe uses the same HTTP GET action, but it has some additional configuration. Restarting a Pod is more invasive than removing it from a Service, and the extra settings help to ensure a restart only happens when we really need it.

Listing 12.2 - api-with-readiness-and-liveness.yaml, adding a liveness probe
livenessProbe:
  httpGet:                 # HTTP GET actions can be used in liveness and
    path: /healthz         # readiness probes - they use the same spec
    port: 80
  periodSeconds: 10        # run the probe every 10 seconds
  initialDelaySeconds: 10  # wait 10 seconds before running the first probe
  failureThreshold: 2      # allow two probes to fail before taking action

This is a change to the Pod spec, so applying the update will create new replacement Pods that start off healthy. This time, when a Pod becomes unhealthy after the application fails, it is removed from the Service thanks to the readiness probe, restarted thanks to the liveness probe, and then added back into the Service.
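
As an assumed walkthrough (standard kubectl commands rather than the book's exercise text), you could apply the spec and watch the restart happen:

# apply the updated Pod spec with both probes
kubectl apply -f api-with-readiness-and-liveness.yaml

# watch the Pods - the RESTARTS count goes up when the liveness probe fires
kubectl get pods --watch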

In that exercise you'll see the liveness probe in action, restarting the Pod when the application fails. The restart creates a new Pod container, but the Pod environment stays the same - it keeps the same IP address, and if the container mounted an EmptyDir volume in the Pod, the new container has access to the files written by the previous one. You can see in figure 12.5 that both Pods are running and ready after the restart, so Kubernetes fixed the failure and healed the application.
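
To illustrate that last point, here is a minimal sketch (not one of the book's exercise files - the image name is a placeholder) of a container mounting an emptyDir volume; files written to the mount path survive a liveness-probe restart because the volume belongs to the Pod, not the container:

spec:
  containers:
  - name: api
    image: example/random-number-api   # placeholder image name
    volumeMounts:
    - name: cache
      mountPath: /cache                # files written here survive container restarts
  volumes:
  - name: cache
    emptyDir: {}                       # volume lives as long as the Pod, not the container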

Figure 12.5 Readiness probes and liveness probes combined help keep applications online
Kubernetes in Action, Second Edition MEAP V05

This is an excerpt from Manning's book Kubernetes in Action, Second Edition MEAP V05.

Defining liveness probes in the pod manifest

The following listing shows an updated manifest for the pod, which defines a liveness probe for each of the two containers, with different levels of configuration.

Listing 6.8 Adding a liveness probe to a pod: kubia-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - name: kubia
    image: luksa/kubia:1.0
    ports:
    - name: http
      containerPort: 8080
    livenessProbe:                                #A
      httpGet:                                    #A
        path: /                                   #A
        port: 8080                                #A
  - name: envoy
    image: luksa/kubia-ssl-proxy:1.0
    ports:
    - name: https
      containerPort: 8443
    - name: admin
      containerPort: 9901
    livenessProbe:                                #B
      httpGet:                                    #B
        path: /ready                              #B
        port: admin                               #B
      initialDelaySeconds: 10                     #B
      periodSeconds: 5                            #B
      timeoutSeconds: 2                           #B
      failureThreshold: 3                         #B

Defining a liveness probe using the minimum required configuration

The liveness probe for the kubia container is the simplest version of a probe for HTTP-based applications. The probe simply sends an HTTP GET request for the path / on port 8080 to determine if the container can still serve requests. If the application responds with an HTTP status between 200 and 399, the application is considered healthy.
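
As an assumed manual check (not a step from the book), you could port-forward to the pod and request the same path yourself to see the status code the probe would receive:

kubectl port-forward pod/kubia-liveness 8080:8080
curl -i http://localhost:8080/       # any status from 200 to 399 counts as a pass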

 As you can see in the envoy container’s liveness probe, you can specify the probe’s target port by name instead of by number.

The liveness probe for the envoy container also contains additional fields. These are best explained with the following figure.

Figure 6.6 The configuration and operation of a liveness probe
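
The figure itself isn't reproduced in this excerpt, so as a brief substitute, here is the envoy probe again with each timing field annotated (the field meanings come from the Kubernetes API, not from the book's callouts):

livenessProbe:
  httpGet:
    path: /ready
    port: admin
  initialDelaySeconds: 10   # wait 10 seconds before running the first probe
  periodSeconds: 5          # then run the probe every 5 seconds
  timeoutSeconds: 2         # each probe must get a response within 2 seconds
  failureThreshold: 3       # restart the container after 3 consecutive failures
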
Kubernetes in Action

This is an excerpt from Manning's book Kubernetes in Action.

4.1.1. Introducing liveness probes

Kubernetes can check if a container is still alive through liveness probes. You can specify a liveness probe for each container in the pod’s specification. Kubernetes will periodically execute the probe and restart the container if the probe fails.

Listing 4.1. Adding a liveness probe to a pod: kubia-liveness-probe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy       #1
    name: kubia
    livenessProbe:                     #2
      httpGet:                         #2
        path: /                        #3
        port: 8080                     #4

For pods running in production, you should always define a liveness probe. Without one, Kubernetes has no way of knowing whether your app is still alive or not. As long as the process is still running, Kubernetes will consider the container to be healthy.
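
HTTP GET is not the only probe mechanism Kubernetes offers. As a hedged sketch (these alternatives are standard Kubernetes probe types, not taken from the listing above), the same container could instead declare a TCP or exec liveness probe:

# alternative 1: a TCP socket probe - passes if a connection to the port can be opened
livenessProbe:
  tcpSocket:
    port: 8080

# alternative 2: an exec probe - passes if the command exits with status 0
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy          # hypothetical file the app maintains while it is healthy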
