Docker Swarm vs Kubernetes: A Practical Comparison | Better Stack Community (2024)

In the world of container orchestration, two prominent platforms have emerged as leaders: Docker Swarm and Kubernetes. As organizations increasingly adopt containerization to achieve scalability, resilience, and efficient resource utilization, choosing between these two solutions becomes crucial.

This article will compare both platforms, exploring their features, strengths, and trade-offs. By understanding their differences and use cases, you will gain valuable insights to select the ideal container orchestration tool for your specific needs.

We will examine various aspects, including installation and setup, container compatibility, scalability, high availability, networking, ease of use, ecosystem, and security. Through practical hands-on examples, you will get a taste of how each platform tackles common challenges in deploying and managing containerized applications.

Whether you are new to container orchestration or seeking to transition from one platform to another, this article aims to provide comprehensive knowledge to make an educated choice. By the end, you will be well-equipped to decide which option aligns best with your requirements and goals.

Without any further ado, let's learn what Docker Swarm is first.

What is Docker Swarm?

Docker Swarm is a container orchestration tool for managing and deploying applications using a containerization approach. It allows you to create a cluster of Docker nodes (known as a Swarm) and deploy services to the cluster. Docker Swarm is lightweight and straightforward, allowing you to containerize and deploy your software project in minutes.

To manage and deploy applications efficiently, Docker Swarm offers the following capabilities:

  • Cluster Management: You can create a cluster of nodes to manage the application containers. A node can be a manager or a worker (or both).
  • Service Abstraction: Docker Swarm introduces the "Service" component, which lets you set the network configuration, resource limits, and number of container replicas. By managing the services, Docker Swarm ensures the application's desired state is maintained.
  • Load Balancing: Swarm offers built-in load balancing, which allows services inside the cluster to interact without requiring you to define a load balancer configuration manually.
  • Rolling Back: Rolling a service back to its previous version is fully supported in case of a failed deployment.
  • Fault Tolerance: Docker Swarm automatically detects failures in nodes and containers and creates replacements, so that users can keep using the application without noticing any problems.
  • Scalability: With Docker Swarm, you have the flexibility to easily adjust the number of replicas for your containers. This allows you to scale your application up or down based on the changing demands of your workload.
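The service-level settings described above are typically declared in the deploy section of a Compose file. Below is a minimal sketch of such a fragment; the service name, image, and concrete values are placeholders, not part of this tutorial's application:

```
# Hypothetical Compose fragment illustrating Swarm-managed replicas,
# resource limits, and rollback-friendly update settings.
version: '3.8'
services:
  web:
    image: example/web:latest     # placeholder image
    deploy:
      replicas: 3                 # desired number of container replicas
      resources:
        limits:
          cpus: '0.50'            # cap each task at half a CPU
          memory: 256M
      update_config:
        order: start-first        # start new tasks before stopping old ones
        failure_action: rollback  # roll back automatically on a failed update
```

You would deploy such a stack with docker stack deploy and adjust the replica count later with docker service scale.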

Docker Swarm architecture

Below is the diagram representing the Docker Swarm architecture:

[Diagram: Docker Swarm architecture]

In summary, the Swarm cluster consists of several vital components that work together to manage and deploy containers efficiently:

  1. Manager nodes are responsible for receiving client commands and assigning tasks to worker nodes. Inside each manager node, we have the Service Discovery and Scheduler components. The former is responsible for collecting information on the worker nodes, while the latter finds suitable worker nodes to assign tasks to.

  2. Worker nodes are solely responsible for executing the tasks assigned by a manager node.

In the above diagram, the deployment steps typically work like this:

  • The client machines send commands to a manager node, let's say to deploy a new PostgreSQL database to the Swarm cluster.
  • The Swarm manager finds the appropriate worker node to assign tasks for creating the new deployment.
  • Finally, the worker node will create new containers for the PostgreSQL database and manage the containers' states.

Now that you've understood what Docker Swarm is and how it works, let's learn about Kubernetes.

What is Kubernetes?

Kubernetes is a container orchestration tool for managing and deploying containerized applications. It supports a wide variety of container runtimes, including Docker, Containerd, Mirantis, and others, but the most commonly used runtime is Docker. Kubernetes is feature-rich and highly configurable, which makes it ideal for deploying complex applications, but it has a steep learning curve.

Some of the key features of Kubernetes are listed below:

  • Container Orchestration with Pods: A pod is the smallest and simplest deployment unit. Kubernetes allows you to deploy applications by creating and managing one or more containers in pods.
  • Service Discovery: Kubernetes allows containers to interact with each other easily within the same cluster.
  • Load Balancing: Kubernetes' load balancer allows access to a group of pods via an external network. Clients from outside the Kubernetes cluster can access the pods running inside the cluster via the load balancer's external IP. If any of the pods in the group goes down, client requests will automatically be forwarded to other pods, allowing the deployed applications to be highly available.
  • Automatic Scaling: You can scale your application in or out by adjusting the number of deployed containers based on resource utilization or custom-defined metrics.
  • Self-Healing: If one of your pods goes down, Kubernetes will try to recreate that pod automatically so that normal operation is restored.
  • Persistent Storage: It provides support for various storage options, allowing you to persist data beyond the lifecycle of individual pods.
  • Configuration and Secrets Management: Kubernetes allows you to store and manage configuration data and secrets securely, decoupling sensitive information from the application code.
  • Rolling Back: If there's a problem with your deployment, you can easily transition to a previous version of the application.
  • Resource Management: With Kubernetes, you can flexibly set resource constraints when deploying your application.
  • Extensibility: Kubernetes has a vast ecosystem and can be extended with numerous plugins and tools for added functionality. You can also interact with Kubernetes components using the Kubernetes APIs to deploy your application programmatically.

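As one illustration of the automatic scaling feature, Kubernetes can adjust replica counts with a HorizontalPodAutoscaler. The sketch below targets a hypothetical Deployment named web (not part of this tutorial's app) and scales on average CPU utilization:

```
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```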
Kubernetes architecture

[Diagram: Kubernetes architecture]

The above diagram illustrates the Kubernetes architecture. It contains the following components:

  • Control Plane: This is a set of components responsible for managing the operation of a Kubernetes cluster. The main components of the Kubernetes control plane are:
    • Kube API Server: Provides an interface to interact with the Kubernetes cluster. It exposes the Kubernetes API so that clients from outside the cluster can communicate with and control it.
    • Etcd: A distributed key-value store for storing configuration data and the desired state of the cluster.
    • Scheduler: Decides where to place newly created pods within the cluster based on resource requirements, node availability, and other defined rules. It also continuously monitors the cluster's resources to balance the workload across nodes.
    • Kube Controller Manager: Responsible for managing and maintaining the desired state of various Kubernetes components such as pods, services, or volumes. It continuously checks whether the state of these components matches expectations; if it doesn't, the controller manager takes action to restore the problematic components.
  • Kubernetes node: A Kubernetes node (which could be a physical or virtual machine) is responsible for executing the tasks assigned by the control plane. Here are the main components and concepts related to Kubernetes nodes:

    • Kubelet: An agent that runs on each node and manages the containers and their lifecycle. It communicates with the control plane, receives instructions about which containers to run, and ensures that the containers are in the desired state. The Kubelet also monitors the health of the containers and reports their status back to the control plane.
    • Pod: A pod is the smallest deployable unit in a Kubernetes cluster. It represents a single instance of a running process in the cluster. Each node runs a container runtime, which is responsible for pulling container images from a registry, creating container instances, and managing their execution.
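For reference, the pod abstraction described above can be expressed as a minimal manifest. Everything here apart from the apiVersion and kind is an illustrative placeholder:

```
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod           # placeholder name
spec:
  containers:
    - name: hello           # the single container in this pod
      image: nginx:1.25     # placeholder image, pulled by the node's runtime
      ports:
        - containerPort: 80
```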

In the Kubernetes architecture diagram above, the execution steps work like this:

  • The client from outside the Kubernetes cluster interacts with the cluster by executing kubectl commands against the Kubernetes control plane.
  • The control plane assigns deployment tasks to the Kubernetes nodes.
  • The Kubernetes node creates new pods, and the container runtime executes the containers.
  • To allow users outside the cluster to interact with the deployed app, the client sends a request to the Kubernetes control plane to create a load balancer.
  • Finally, users can interact with the deployed application via the load balancer's IP address.

Now that you understand what Kubernetes is, let's continue to the next section to learn the differences between Docker Swarm and Kubernetes.

Comparing Docker Swarm and Kubernetes

Although Docker Swarm and Kubernetes both offer the capabilities to orchestrate containers, they are fundamentally different and serve different use cases.

| Criteria | Docker Swarm | Kubernetes |
| --- | --- | --- |
| Installation and setup | Easy to set up using the docker command | Complicated to set up a cluster manually |
| Types of containers supported | Only works with Docker containers | Supports Containerd, Docker, CRI-O, and others |
| High availability | Provides basic configuration for high availability | Offers feature-rich support for high availability |
| Popularity | Popular | Popular |
| Networking | Supports basic networking features | Supports advanced networking features |
| GUI support | Yes | Yes |
| Learning curve | Easy to get started | Steeper learning curve |
| Complexity | Simple and lightweight | Complex and feature-rich |
| Load balancing | Automatic load balancing | Manual load balancing |
| CLIs | No additional CLI required | Requires additional CLIs such as kubectl |
| Scalability | Does not support automatic scaling | Supports automatic scaling |
| Security | Only supports TLS | Supports RBAC, SSL/TLS, and secrets management |

Setup and installation

Regarding installation and setup, Docker Swarm is easier to set up than Kubernetes. Assuming Docker is already installed, you only need to run docker swarm init to create the cluster, then attach a node to the cluster using docker swarm join.

The steps for setting up a Kubernetes cluster are not as straightforward without using cloud platforms. However, there are tools that ease the process, such as Kubeadm or Kubespray. For example, to create a new cluster using Kubeadm, you need to:

  • Set up the Docker runtime on all the nodes.
  • Install Kubeadm on all the nodes.
  • Initialize the Kubernetes control plane on the master node.
  • Install the network plugin.
  • Join the worker nodes to the master node.
  • Use kubectl get node to check whether all the nodes have been added to the Kubernetes cluster.

Types of containers they support

Docker Swarm only supports Docker containers, but Kubernetes supports any container runtime that implements its Container Runtime Interface, including Docker, Containerd, CRI-O, Mirantis, and others, giving you many options to choose the one that best fits your use case.

High availability

Docker Swarm provides built-in high availability for services. It automatically replicates services across multiple nodes in the Swarm cluster, ensuring that containers are distributed and balanced for fault tolerance.

On the other hand, Kubernetes provides a comprehensive set of features to achieve high availability, such as advanced scheduling to define pod placement based on node availability or resource utilization. Factors such as node health, resource constraints, and workload demands are also considered when distributing pods across the cluster.

Popularity

Both Docker Swarm and Kubernetes are popular in the industry and have been battle-tested for a wide variety of use cases. Docker Swarm is often used where simplicity and fast deployment are the primary considerations, while Kubernetes is well-suited for complex applications that demand advanced features and capabilities. It excels in scenarios where high availability, scalability, and flexibility are crucial requirements.

Networking

Both Docker Swarm and Kubernetes offer networking support. Docker Swarm provides simpler built-in overlay networking capabilities, while Kubernetes supports a more extensive and flexible networking model through its wide range of plugins and advanced networking features.

GUI support

Both Docker Swarm and Kubernetes have GUI tools to help you easily interact with them. With Docker Swarm, you have Swarmpit or Shipyard. For Kubernetes, you have Kubernetes Lens, Kube-dashboard, or Octant.

Learning curve

Docker Swarm is easy to learn and start with since it is lightweight and provides a simple approach to managing Docker containers. Kubernetes, however, has a much steeper learning curve since it offers more features and flexibility for deploying and managing containers.

Complexity

Docker Swarm is designed to be lightweight and easy to use, while Kubernetes is designed to offer many features for deploying complex applications. Thus, Kubernetes is much more complicated to operate than Docker Swarm.

Load Balancing

Docker Swarm offers an automatic load balancing mechanism, ensuring seamless communication and interaction between containers within the Swarm cluster. The load balancing functionality is built in, requiring minimal configuration.

In contrast, Kubernetes provides a more customizable approach to load balancing. It allows you to define and configure load balancers based on your specific requirements. While this requires some manual setup, it grants you greater control over the load balancing configuration within the Kubernetes cluster.

Command-line tools

When working with Docker Swarm, there is no need to install an additional command line tool specifically for cluster management. The standard docker command is sufficient to create and interact with the Swarm cluster.

However, Kubernetes requires the installation of the kubectl command line tool to work with the cluster. kubectl is the primary way to interact with a Kubernetes cluster, and it provides extensive functionality for managing deployments, services, pods, and other resources within the Kubernetes environment.

Scalability

Kubernetes allows you to automatically scale the containers in or out based on resource utilization or other factors. Docker Swarm, on the other hand, does not support automatic scaling.

Security

Docker Swarm and Kubernetes were built with security in mind, and they both provide several features and mechanisms to deploy applications safely. However, Kubernetes supports more authentication and authorization mechanisms, such as Role-Based Access Control (RBAC), SSL/TLS, and secrets management.
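To give a flavor of Kubernetes RBAC, the sketch below defines a Role that may only read pods and binds it to a hypothetical user named jane; all names here are illustrative:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader            # illustrative role name
rules:
  - apiGroups: [""]           # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: User
    name: jane                # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```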

Now that you have a clearer understanding of the distinctions between Docker Swarm and Kubernetes, we will focus on deploying a sample Go application to both platforms. This hands-on experience will provide you with practical insights into the deployment process, showcasing the unique features and considerations of each platform.

Prerequisites

Before following through with the rest of this tutorial, you need to meet the following requirements:

  1. A ready-to-use Linux machine, such as an Ubuntu 22.04 server.
  2. A recent version of Go installed on your machine.
  3. The Git command line for cloning the sample application from its GitHub repo.
  4. A recent version of Docker installed and accessible without using sudo.
  5. A free DockerHub account for storing and sharing Docker images.
  6. A locally installed PostgreSQL instance.

Setting up the demo application

To fully understand how Docker Swarm and Kubernetes work and how they differ, you will deploy a demo application using both tools. This application is a blog service that allows users to create, update, retrieve, and delete blog posts. It is built using the Go programming language, and its data is stored in PostgreSQL.

However, you don't need to be familiar with Go or PostgreSQL to follow along. If you prefer, you can use an application stack that you're more familiar with and just follow the deployment steps to practice using Docker Swarm and Kubernetes.

To test out the application's functionality, let's try to run it directly on the local machine before deploying it through Docker Swarm and Kubernetes.

1. Cloning the GitHub repository

In this step, you will clone this GitHub repository to your machine using the commands below:

git clone https://github.com/betterstack-community/docker-swarm-kubernetes betterstack-swarm-kubernetes

cd betterstack-swarm-kubernetes

The structure of the directory should look like this:

├── config
├── Dockerfile
├── go.mod
├── go.sum
├── handler
├── kubernetes
├── main.go
└── swarm
  • The swarm and kubernetes directories, along with the Dockerfile, are for deploying the app to Docker Swarm and Kubernetes. You can skip them for now.
  • The rest of the directories and files constitute the Go application.

2. Creating a PostgreSQL database in the local environment

Assuming PostgreSQL is installed on your machine, you can go ahead and access the psql command line as the default postgres user by running the following command:

sudo -u postgres psql

In the psql interface, create a new user named "blog_user":

CREATE USER blog_user;

Run the following command to list all existing users in the PostgreSQL database. You should see the blog_user user there.

\du

Output

                                   List of roles
 Role name |                         Attributes                         | Member of
-----------+------------------------------------------------------------+-----------
 blog_user |                                                            | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

Next, change the password of the blog_user user to "blog_password":

ALTER ROLE blog_user WITH PASSWORD 'blog_password';

Afterward, create a new blog_db database by running the following command:

CREATE DATABASE blog_db;

Finally, quit the current session using \q, and then log in again to the blog_db database as the blog_user user. Enter the password for blog_user if prompted:

psql -U blog_user -d blog_db -h localhost

At this stage, you can create a new posts table within the blog_db database by running the following command:

CREATE TABLE IF NOT EXISTS posts (
    ID SERIAL NOT NULL PRIMARY KEY,
    BODY TEXT NOT NULL,
    CREATED_AT TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    UPDATED_AT TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Now that you've successfully set up the database, let's go ahead and run the application in the local environment.

3. Starting the application

To bring up the app in the local environment, you need to install the required dependencies first:

go mod tidy

Then run the following commands to set the necessary environment variables for the Go application to access the PostgreSQL database:

export POSTGRES_PASSWORD=blog_password
export POSTGRES_USER=blog_user
export POSTGRES_HOST=localhost

Afterward, run the following command to start the Go application:

go run main.go

Output

You are connected to the database

Once the application is up and running, open a separate terminal and run the following command to create a new blog entry:

curl --location 'http://localhost:8081/blog' \
  --header 'Content-Type: application/json' \
  --data '{ "body":"this is a great blog" }'

Then run the following command to get the entry with an id of 1:

curl --location 'http://localhost:8081/blog/1'

You should see the result below:

Output

{"id":1,"body":"this is a great blog","created_at":"2023-05-13T15:03:40.461732Z","updated_at":"2023-05-13T15:03:40.461732Z"}

At this point, you've successfully set up the app in the local environment and interacted with its APIs to confirm that it's in working order. Let's now move on to the next section where you will deploy the application using Docker Swarm.

Deploying the application with Docker Swarm

Since you no longer need the PostgreSQL database service to run, you can stop it by running the following command:

sudo service postgresql stop

You can also stop the current application by pressing CTRL+C in the terminal.

1. Create a Docker Swarm cluster

In a Docker Swarm cluster, you will have one or more manager nodes that distribute tasks to one or more worker nodes. To simplify this demonstration, you will only create the manager node, which will be responsible for deploying the services and handling the application workload.

To create a Docker Swarm cluster, run the following command:

docker swarm init

You should see the following output confirming that the current node is a manager, along with instructions to add a worker node to the cluster:

Output

Swarm initialized: current node (rpuk92y8wypwqwnv5kqzk5fik) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token <token> <ip_address>

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Since we won't be adding a worker node to the Swarm cluster, let's move on to the next section and push the Docker image to DockerHub. If you'd like to see a full demonstration of running a highly available Docker Swarm setup in production, please see the linked article.

2. Building the application Docker image

Inside the application directory, you have a Dockerfile with the following contents:

Dockerfile

FROM golang:1.20
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY main.go ./
COPY config/db.go ./config/db.go
COPY handler/handlers.go ./handler/handlers.go
RUN CGO_ENABLED=0 GOOS=linux go build -o /out/main ./
EXPOSE 8081
# Run
ENTRYPOINT ["/out/main"]

The commands inside the Dockerfile instruct Docker to pull the Go version 1.20 image, copy the files from the project directory into the Docker image, and build an application executable named main in the /out directory. It also specifies that the Docker container will listen on port 8081 and that the /out/main binary is executed when the container is started.

To build the application image, run the following command. Remember to replace the <username> placeholder with your DockerHub username:

docker build -t <username>/blog .

You should observe the following output at the end of the process:

Output

. . .
Successfully built 222ca8bc81b8
Successfully tagged <username>/blog:latest

Now that you have successfully built the application image, log in to your DockerHub account from the current session so that you can push the image to your account:

docker login

Output

WARNING! Your password will be stored unencrypted in /home/<user>/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Once login is successful, run the following command to push the image to DockerHub:

docker push <username>/blog

Now that you've successfully pushed the image to your DockerHub account, let's move on to the next section and deploy the application to the Swarm cluster.

3. Deploying the application to the Swarm cluster

In this section, you will deploy the Go application and the PostgreSQL database to the Swarm cluster. Within the swarm directory, you have the following two files:

  • initdb.sql is for initializing the database state by creating a new table named "posts" when deploying the PostgreSQL database to the Swarm cluster.
  • docker-compose.yml is for deploying the PostgreSQL database and the application services to the Swarm cluster.

Open the docker-compose.yml file in your text editor and update the <username> placeholder to your DockerHub username:

swarm/docker-compose.yml

# Use postgres/example user/password credentials
version: '3.6'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=blog_db
      - POSTGRES_USER=blog_user
      - POSTGRES_PASSWORD=blog_password
    ports:
      - '5432:5432'
    volumes:
      - ./initdb.sql:/docker-entrypoint-initdb.d/initdb.sql
    networks:
      - blog-network
  blog:
    image: <username>/blog
    environment:
      - POSTGRES_DB=blog_db
      - POSTGRES_USER=blog_user
      - POSTGRES_PASSWORD=blog_password
      - POSTGRES_HOST=db:5432
    ports:
      - '8081:8081'
    volumes:
      - .:/app
    networks:
      - blog-network
networks:
  blog-network:
    name: blog-network

This file instructs Docker Swarm to create two services, db and blog, as well as a network named blog-network:

  • The db service creates a PostgreSQL database using the specified configuration. Its state is initialized using the initdb.sql file, and it uses the blog-network network.
  • The blog service creates the blog application using the Docker image you previously pushed to DockerHub. You also need to provide environment variables for this service so that the application can access the PostgreSQL database. This service also uses the blog-network network, allowing it to reach the db service through the host URL db:5432.
  • The top-level networks key creates a network named blog-network so the two services can communicate over it.

To deploy these services to the running Swarm cluster, run the following commands in turn:

cd swarm

docker stack deploy --compose-file docker-compose.yml blogapp

Output

Creating network blog-network
Creating service blogapp_db
Creating service blogapp_blog

Afterward, run the following command to verify that the services are running:

docker stack services blogapp

You should see the following results:

ID             NAME           MODE         REPLICAS   IMAGE                  PORTS
uptqkx630zpy   blogapp_blog   replicated   1/1        username/blog:latest   *:8081->8081/tcp
jgb19lcx32y6   blogapp_db     replicated   1/1        postgres:latest        *:5432->5432/tcp

You should also be able to interact with the application endpoints now. Run the following command to create a new blog post:

curl --location 'http://localhost:8081/blog' \
  --header 'Content-Type: application/json' \
  --data '{ "body":"this is a great blog" }'

Then run the command below to retrieve the newly created post:

curl --location 'http://localhost:8081/blog/1'

You should observe the same output as before:

Output

{"id":1,"body":"this is a great blog","created_at":"2023-05-13T08:56:26.140815Z","updated_at":"2023-05-13T08:56:26.140815Z"}

Now that you've successfully deployed the blog application using Docker Swarm, let's move on to the next section and learn how to deploy the application using Kubernetes.

Deploying the application with Kubernetes

To simplify the setup of the Kubernetes cluster, you will use the minikube tool, which allows you to create and run a local Kubernetes cluster on your machine.

Start by following these instructions to install the minikube binary on your machine, then launch the minikube service by running the following command:

minikube start

Afterwards, run the following command to check whether the minikube service is up and running:

minikube kubectl -- get node

You should see the output below:

Output

NAME       STATUS   ROLES           AGE    VERSION
minikube   Ready    control-plane   188d   v1.25.3

For ease of use, you should alias the minikube kubectl -- command to kubectl as follows:

alias kubectl="minikube kubectl --"

Now that the minikube service is up and running, you can proceed with deploying the Go application and PostgreSQL database to the Kubernetes cluster. Let's start by deploying the PostgreSQL database in the next section.

1. Deploying the PostgreSQL database to the Kubernetes cluster

To deploy the PostgreSQL database to the Kubernetes cluster, you need to:

  • Create a Kubernetes Persistent Volume (PV) so that you don't lose the application data when the Kubernetes cluster goes down.
  • Create a Kubernetes Persistent Volume Claim (PVC) to request storage from the Kubernetes PV.
  • Create a Kubernetes ConfigMap containing the initialization SQL query that creates a table named posts when deploying the PostgreSQL database.
  • Create a Kubernetes Deployment to deploy the PostgreSQL database to the Kubernetes cluster.
  • Create a Kubernetes Service, which allows the blog application to access the PostgreSQL database.
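The repository ships these manifests ready-made. As a rough idea of what the claim step involves, a PVC matching a 1Gi "manual" volume might look like the sketch below; the exact contents of the repo's postgres-pvc.yml may differ:

```
# A sketch of a PersistentVolumeClaim for a 1Gi "manual" volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-claim0
spec:
  storageClassName: manual   # must match the PersistentVolume's class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # request the full capacity of the volume
```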

From the current terminal, navigate to the kubernetes directory:

cd ../kubernetes

Next, create a postgres/docker-pg-vol/data directory inside the kubernetes directory to store the persistent data:

mkdir postgres/docker-pg-vol/data -p

Update the path inside the hostPath block in the postgres-volume.yml file with the absolute path to the persistent data directory. The path should look like this: /home/<username>/betterstack-swarm-kubernetes/kubernetes/postgres/docker-pg-vol/data

nano postgres-volume.yml

kubernetes/postgres-volume.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-claim0
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/<username>/betterstack-swarm-kubernetes/kubernetes/postgres/docker-pg-vol/data"

Afterward, run the following command to create the Kubernetes Persistent Volume. If you do not specify the --namespace value, Kubernetes will automatically use the default namespace.

kubectl apply -f postgres-volume.yml

Output

persistentvolume/postgresql-claim0 created

Then run the following command to create the Persistent Volume Claim.

kubectl apply -f postgres-pvc.yml

Output

persistentvolumeclaim/postgresql-claim0 created

To create the ConfigMap containing the initialization SQL query, run the following command:

kubectl apply -f postgres-initdb-config.yml

Output

configmap/postgresql-initdb-config created

Next, run the following command to deploy the PostgreSQL database:

kubectl apply -f postgres-deployment.yml

Output

deployment.apps/postgresql created

Finally, create the Kubernetes service for the PostgreSQL database:

kubectl apply -f postgres-service.yml

Output

service/postgresql created

At this point, the PostgreSQL instance should be up and running. Go ahead and run the command below to confirm:

kubectl get pod

You should see the following result:

Output

NAME                          READY   STATUS    RESTARTS   AGE
postgresql-655746b9f8-n4dcf   1/1     Running   0          2m36s

Run the following command to check for the PostgreSQL service.

kubectl get service

You should see the following output:

Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    14d
postgresql   ClusterIP   10.98.157.228   <none>        5432/TCP   2s

Now that you've successfully deployed the PostgreSQL database to the Kubernetes cluster, let's deploy the blog application next.

2. Deploying the application to the Kubernetes cluster

Before you can deploy your application using Kubernetes, you need to create a Docker secret so that Kubernetes can access your DockerHub account to pull the application image. Note that you need to replace the username, password, and email below with your DockerHub account information:

kubectl create secret docker-registry dockerhub-secret \
  --docker-server=docker.io \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>

Output

secret/dockerhub-secret created

Once the secret is created, open the deployment.yml file in your text editor and edit it as follows:

nano kubernetes/deployment.yml

kubernetes/deployment.yml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blogapp
spec:
  replicas: 2
  selector:
    matchLabels:
      name: blogapp
  template:
    metadata:
      labels:
        name: blogapp
    spec:
      # use the registry secret to pull the private image from DockerHub
      imagePullSecrets:
        - name: dockerhub-secret
      containers:
        - name: application
          image: <username>/blog
          imagePullPolicy: Always
          env:
            - name: POSTGRES_DB
              value: blog_db
            - name: POSTGRES_USER
              value: blog_user
            - name: POSTGRES_PASSWORD
              value: blog_password
            - name: POSTGRES_HOST
              value: postgresql:5432
          ports:
            - containerPort: 8081

Save and close the file, then run the following command to deploy the blog app:

kubectl apply -f deployment.yml

Output

deployment.apps/blogapp created

Then run the following command to create the Kubernetes service for the blog app, so that you can access the app from outside the Kubernetes cluster.

kubectl apply -f service.yml

Output

service/blogapp-service created
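The service.yml file itself is not reproduced in this section; a sketch consistent with the blogapp-service that appears in the next output might look like this. The LoadBalancer type and port 8081 come from that output, while the selector assumes the name: blogapp pod label from deployment.yml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blogapp-service
spec:
  type: LoadBalancer
  selector:
    name: blogapp   # must match the pod label in deployment.yml
  ports:
    - port: 8081
      targetPort: 8081
```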

Run the following command to list the running services inside the Kubernetes cluster.

kubectl get service

You should see the following result:

Output

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
blogapp-service   LoadBalancer   10.97.173.87   <pending>     8081:31983/TCP   5s
kubernetes        ClusterIP      10.96.0.1      <none>        443/TCP          14d
postgresql        ClusterIP      10.96.38.7     <none>        5432/TCP         8m1s

Notice that the EXTERNAL-IP column for blogapp-service shows <pending>. This is because you are using a local Kubernetes cluster (minikube) that does not have an integrated load balancer. To work around this issue, create a minikube tunnel in a new terminal with the command below:

minikube tunnel

Then run the kubectl get service command again. You should see the updated result with a proper EXTERNAL-IP value:

Output

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
blogapp-service   LoadBalancer   10.97.173.87   10.97.173.87   8081:31983/TCP   4m40s
kubernetes        ClusterIP      10.96.0.1      <none>         443/TCP          14d
postgresql        ClusterIP      10.96.38.7     <none>         5432/TCP         12m

Now that you've deployed the blog application to the Kubernetes cluster, you can access it through its external IP, which in the above example is 10.97.173.87. Create a blog post with the following command:

curl --location 'http://<external_ip>:8081/blog' \
  --header 'Content-Type: application/json' \
  --data '{
    "body": "this is a great blog"
  }'

Then run the following command to retrieve the blog post with ID 1:

curl --location 'http://<external_ip>:8081/blog/1'

You should see the following output:

Output

{"id":1,"body":"this is a great blog","created_at":"2023-05-13T08:56:26.140815Z","updated_at":"2023-05-13T08:56:26.140815Z"}

And that's how you deploy the blog application to the Kubernetes cluster.

Comparing the deployment process between Docker Swarm and Kubernetes, we observe distinct trade-offs. Docker Swarm offers a simpler deployment experience, but it lacks built-in support for persistent data storage. In the event of a Docker Swarm cluster failure or recreation, there is a risk of losing the application data.

In contrast, Kubernetes introduces a more intricate deployment process, but it offers the advantage of persistent data storage, ensuring data durability even during cluster disruptions or recreation. Additionally, Kubernetes gives you the flexibility to expose the blog app service however your requirements dictate by leveraging the Kubernetes Service definition file.

Use cases of Kubernetes vs. Docker Swarm

Docker Swarm and Kubernetes, despite both being container orchestration platforms, possess distinct characteristics that cater to different use cases.

Docker Swarm is well-suited for:

  • Beginners in containerization who want to learn how to deploy applications using containerization techniques.
  • Small to medium-sized applications that require straightforward deployment and management.
  • Users who are familiar with Docker and prefer a Docker-centric approach to application deployment, since Docker Swarm integrates seamlessly with the Docker ecosystem.
  • Applications with a relatively stable user base or predictable traffic patterns. Although Docker Swarm lacks the advanced automatic scaling features of Kubernetes, it can handle such scenarios adequately.

Kubernetes should be considered when:

  • You have a solid understanding of Kubernetes components and are proficient in working with the platform. Kubernetes has a steeper learning curve, but it offers a comprehensive solution for deploying and managing containerized applications.
  • Your application is complex and demands extensive customization during deployment. Kubernetes excels at managing large-scale, intricate applications, providing advanced management, scalability, and customization capabilities.
  • Fine-grained control and customization options are crucial for your deployment.
  • Automatic scaling is necessary due to your application's growing user base.

In summary, Docker Swarm suits simpler deployments and those who prefer a Docker-centric approach, while Kubernetes caters to complex applications, fine-grained control, and automatic scaling needs. Carefully assess your requirements and your familiarity with each platform to determine the most suitable choice for your specific use case.

Final thoughts

Throughout this article, you've gained insights into the distinctions between Docker Swarm and Kubernetes, and you've also had practical experience deploying a sample Go application using both platforms.

To further explore technical articles on cloud-native and containerization technologies, we invite you to visit our Better Stack community guides. There, you can find additional resources and in-depth information to enhance your understanding of these technologies.

Thanks for reading!


Article by

Donald Le


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
