Kubernetes Patterns (2024)

The Health Probe pattern is about how an application can communicate its health state to Kubernetes. To be fully automatable, a cloud-native application must be highly observable by allowing its state to be inferred so that Kubernetes can detect whether the application is up and whether it is ready to serve requests. These observations influence the lifecycle management of Pods and the way traffic is routed to the application.

Kubernetes regularly checks the container process status and restarts it if issues are detected. However, from practice, we know that checking the process status is not sufficient to decide about the health of an application. In many cases, an application hangs, but its process is still up and running. For example, a Java application may throw an OutOfMemoryError and still have the JVM process running. Alternatively, an application may freeze because it runs into an infinite loop, deadlock, or some thrashing (cache, heap, process). To detect these kinds of situations, Kubernetes needs a reliable way to check the health of applications. That is, Kubernetes does not need to understand how an application works internally; it needs a check that indicates whether the application is functioning as expected and capable of serving consumers.

The software industry has accepted the fact that it is not possible to write bug-free code. Moreover, the chances for failure increase even more when working with distributed applications. As a result, the focus for dealing with failures has shifted from avoiding them to detecting faults and recovering. Detecting failure is not a simple task that can be performed uniformly for all applications, as all have different definitions of a failure. Also, various types of failures require different corrective actions. Transient failures may self-recover, given enough time, and some other failures may need a restart of the application. Let’s see the checks Kubernetes uses to detect and correct failures.

Process Health Checks

A process health check is the simplest health check the Kubelet constantly performs on the container processes. If the container processes are not running, the container is restarted. So even without any other health checks, the application becomes slightly more robust with this generic check. If your application is capable of detecting any kind of failure and shutting itself down, the process health check is all you need. However, for most cases that is not enough and other types of health checks are also necessary.
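There is nothing to configure for the process check itself; the only related knob is the Pod-level restartPolicy, which controls what happens when a container process exits. A minimal sketch (the Pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-process-check-only   # illustrative name
spec:
  restartPolicy: Always               # the default; alternatives are OnFailure and Never
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator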

Liveness Probes

If your application runs into some deadlock, it is still considered healthy from the process health check's point of view. To detect this kind of issue and any other types of failure according to your application business logic, Kubernetes has liveness probes—regular checks performed by the Kubelet agent that ask your container to confirm it is still healthy. It is important to have the health check performed from the outside rather than in the application itself, as some failures may prevent the application watchdog from reporting its failure. Regarding corrective action, this health check is similar to a process health check, since if a failure is detected, the container is restarted. However, it offers more flexibility regarding what methods to use for checking the application health, as follows:

  • An HTTP probe performs an HTTP GET request to the container IP address and expects a successful HTTP response code between 200 and 399.

  • A TCP Socket probe assumes a successful TCP connection.

  • An Exec probe executes an arbitrary command in the container namespace and expects a successful exit code (0).
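For comparison with the HTTP variant shown in Example 4-1, the other two methods look like the following fragments. These are alternative snippets, not one document, and the port, command, and file path are purely illustrative:

# TCP Socket variant: the probe succeeds if a TCP connection
# to the given port can be established.
livenessProbe:
  tcpSocket:
    port: 8080

# Exec variant: the probe succeeds if the command exits with code 0.
livenessProbe:
  exec:
    command: [ "cat", "/tmp/healthy" ]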

An example HTTP-based liveness probe is shown in Example 4-1.

Example 4-1. Container with a liveness probe
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-liveness-check
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    env:
    - name: DELAY_STARTUP
      value: "20"
    ports:
    - containerPort: 8080
      protocol: TCP
    livenessProbe:
      httpGet:                  # HTTP probe to a health-check endpoint
        path: /actuator/health
        port: 8080
      initialDelaySeconds: 30   # wait 30 seconds before the first liveness check
                                # to give the application some time to warm up

Depending on the nature of your application, you can choose the method that is most suitable for you. It is up to your implementation to decide when your application is considered healthy. However, keep in mind that the consequence of a failing health check is a restart of your container. If restarting your container does not help, there is no benefit to a failing health check, as Kubernetes restarts your container without fixing the underlying issue.
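How quickly Kubernetes declares a probe failed and restarts the container is tunable. The following sketch shows the relevant knobs; the values chosen are illustrative, and the defaults noted in the comments are the Kubernetes defaults:

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30   # delay before the first probe (default 0)
  periodSeconds: 10         # how often to probe (default 10)
  timeoutSeconds: 1         # how long to wait for a response (default 1)
  failureThreshold: 3       # consecutive failures before a restart (default 3)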

Readiness Probes

Liveness checks are useful for keeping applications healthy by killing unhealthy containers and replacing them with new ones. But sometimes a container may not be healthy, and restarting it may not help either. The most common example is when a container is still starting up and not ready to handle any requests yet. Or maybe a container is overloaded, and its latency is increasing, and you want it to shield itself from additional load for a while.

For this kind of scenario, Kubernetes has readiness probes. The methods for performing readiness checks are the same as for liveness checks (HTTP, TCP, Exec), but the corrective action is different. Rather than restarting the container, a failed readiness probe causes the container to be removed from the service endpoints so that it does not receive any new traffic. Readiness probes signal when a container is ready, giving it some time to warm up before getting hit with requests from the service. They are also useful for shielding the container from traffic at later stages, as readiness probes are performed regularly, similarly to liveness checks. Example 4-2 shows how a readiness probe can be implemented by probing the existence of a file the application creates when it is ready for operations.

Example 4-2. Container with readiness probe
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-readiness-check
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    readinessProbe:
      exec:
        command: [ "stat", "/var/run/random-generator-ready" ]

Check for the existence of a file the application creates to indicate it is ready to serve requests. stat returns an error if the file does not exist, causing the readiness check to fail.

Again, it is up to your implementation of the health check to decide when your application is ready to do its job and when it should be left alone. While process health checks and liveness checks are intended to recover from a failure by restarting the container, the readiness check buys time for your application and expects it to recover by itself. Keep in mind that Kubernetes stops routing new requests to your container while it is shutting down, regardless of whether the readiness check still passes after the container has received a SIGTERM signal.

In many cases, you have liveness and readiness probes performing the same checks. However, the presence of a readiness probe gives your container time to start up. Only by passing the readiness check is a Deployment considered to be successful, so that, for example, Pods with an older version can be terminated as part of a rolling update.
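As a sketch of how this plays out, the following hypothetical Deployment only proceeds with a rolling update once new Pods pass their readiness check; the name, replica count, and labels are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-generator
spec:
  replicas: 3
  selector:
    matchLabels:
      app: random-generator
  template:
    metadata:
      labels:
        app: random-generator
    spec:
      containers:
      - image: k8spatterns/random-generator:1.0
        name: random-generator
        readinessProbe:   # new Pods receive traffic, and old Pods are
          exec:           # replaced, only after this check passes
            command: [ "stat", "/var/run/random-generator-ready" ]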

The liveness and readiness probes are fundamental building blocks in the automation of cloud-native applications. Application frameworks such as Spring Boot Actuator, WildFly Swarm health checks, Karaf health checks, or the MicroProfile Health spec for Java provide implementations for offering health probes.
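For example, with Spring Boot Actuator (assuming Spring Boot 2.x, where the health endpoint is exposed over HTTP by default), serving the /actuator/health endpoint used in Example 4-1 is a matter of configuration rather than code; a minimal application.yaml sketch:

# application.yaml — expose only the health endpoint over HTTP
management:
  endpoints:
    web:
      exposure:
        include: health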

To be fully automatable, cloud-native applications must be highly observable by providing a means for the managing platform to read and interpret the application health, and if necessary, take corrective actions. Health checks play a fundamental role in the automation of activities such as deployment, self-healing, scaling, and others. However, there are also other means through which your application can provide more visibility about its health.

The obvious and old method for this purpose is logging. It is a good practice for containers to log any significant events to standard output and standard error and to have these logs collected in a central location for further analysis. Logs are not typically used for taking automated actions, but rather for raising alerts and driving further investigation. A more useful aspect of logs is the postmortem analysis of failures and the detection of errors that would otherwise go unnoticed.

Apart from logging to the standard streams, it is also a good practice to log the reason for exiting a container to /dev/termination-log. This location is the place where the container can state its last will before it vanishes permanently. Figure 4-1 shows the possible options for how a container can communicate with the runtime platform.

Figure 4-1. Container observability options
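The termination-log location mentioned above is configurable per container, and Kubernetes can be told to fall back to the last log lines when nothing was written there; a minimal sketch of the relevant container fields:

containers:
- image: k8spatterns/random-generator:1.0
  name: random-generator
  terminationMessagePath: /dev/termination-log      # the default location
  terminationMessagePolicy: FallbackToLogsOnError   # use last log lines if the file is empty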

Containers provide a unified way of packaging and running applications by treating them like black boxes. However, any container aiming to become a cloud-native citizen must provide APIs for the runtime environment to observe the container health and act accordingly. This support is a fundamental prerequisite for automating container updates and lifecycle in a unified way, which in turn improves the system's resilience and user experience. In practical terms, that means, at a very minimum, your containerized application must provide APIs for the different kinds of health checks (liveness and readiness).

Well-behaved applications must also provide other means for the managing platform to observe the state of the containerized application by integrating with tracing and metrics-gathering libraries such as OpenTracing or Prometheus. Treat your application as a black box, but implement all the necessary APIs to help the platform observe and manage your application in the best way possible.
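As one common example, many Prometheus setups discover scrape targets through Pod annotations like the following. Note that these annotation keys are a widely used convention that depends on how the scrape configuration is set up, not a built-in Kubernetes or Prometheus feature, and the port and path here are illustrative:

metadata:
  annotations:
    prometheus.io/scrape: "true"   # convention honored by many scrape configs
    prometheus.io/port: "8080"     # port where the application exposes metrics
    prometheus.io/path: /metrics   # metrics endpoint path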

The next pattern, Managed Lifecycle, is also about communication between applications and the Kubernetes management layer, but coming from the other direction. It’s about how your application gets informed about important Pod lifecycle events.
