Koding Books

Professional, free coding tutorials

Kubernetes: a practical introduction

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google, Kubernetes has become one of the most popular tools for managing containerized workloads in production environments. With Kubernetes, developers can easily deploy and manage applications across multiple hosts, scale applications up or down as needed, and ensure high availability and fault tolerance. This article will explore Kubernetes’s key concepts and features and show you how to get started with this powerful platform.

History

Google originally developed Kubernetes in the early 2010s as an internal tool for managing containerized workloads. Google had been using containers for many years to run its applications and had developed a tool called Borg to manage those containers at scale.

Kubernetes was developed as an open-source descendant of Borg and released to the public in 2014. The project quickly gained popularity and was adopted by many companies to manage containerized workloads in production environments.

In 2015, Google donated the Kubernetes project to the Cloud Native Computing Foundation (CNCF), a non-profit organization that was formed to promote the adoption of cloud-native technologies. Since then, Kubernetes has become one of the most popular open-source projects in the world, with a large and active community of contributors and users.

Today, Kubernetes is used by companies of all sizes to manage containerized workloads in production environments and has become the de facto standard for container orchestration.

Kubernetes and Docker

Kubernetes and Docker are both tools that are commonly used in the context of containerization, but they serve different purposes. Docker is a platform for building, packaging, and distributing containerized applications, while Kubernetes is a platform for orchestrating and managing containerized applications at scale.

In other words, Docker provides a way to create and package container images, while Kubernetes provides a way to deploy and manage those containers across a cluster of machines. Kubernetes can work with any container runtime, but Docker is one of the most popular container runtimes that can be used with Kubernetes.

Kubernetes can manage Docker containers as well as containers from other runtimes, such as containerd and CRI-O. Kubernetes provides a higher-level abstraction for managing containers, allowing developers to focus on the application logic rather than the underlying infrastructure.

Key Features

Here are some of the key features of Kubernetes:

  • Container orchestration: Kubernetes automates the deployment, scaling, and management of containerized applications, making it easy to manage containerized workloads at scale.
  • Service discovery and load balancing: Kubernetes provides built-in service discovery and load balancing, allowing applications to communicate with each other easily and ensuring that traffic is distributed evenly across the cluster.
  • Self-healing: Kubernetes automatically detects and replaces failed containers, ensuring that applications are always available and running as expected.
  • Horizontal scaling: Kubernetes makes it easy to scale applications horizontally by adding or removing containers as needed based on resource utilization or other metrics.
  • Rolling updates and rollbacks: Kubernetes supports rolling updates and rollbacks, allowing you to update your applications without downtime and easily roll back to a previous version if needed.
  • Storage orchestration: Kubernetes provides built-in storage orchestration, allowing you to mount storage volumes to containers and manage data persistence easily.
  • Configuration management: Kubernetes provides a way to manage application configuration and secrets, allowing you to easily manage sensitive information such as passwords and API keys.
  • Extensibility: Kubernetes is highly extensible, with a large ecosystem of plugins and extensions that can be used to customize and extend its functionality.

These are just some of the key features of Kubernetes, and many more make it a powerful platform for managing containerized workloads.
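To make the horizontal scaling feature concrete, here is a sketch of a HorizontalPodAutoscaler manifest that scales a hypothetical deployment named myapp based on CPU utilization. The names and thresholds are illustrative placeholders, not values from a real cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:        # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # target average CPU across pods
```

With this in place, Kubernetes adds or removes replicas to keep average CPU utilization near 70%, staying within the 2–10 replica range.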

Use Cases

Here are some common use cases for Kubernetes:

  • Microservices: Kubernetes is well-suited for managing microservices architectures, where applications are broken down into smaller, independent services that can be deployed and scaled independently.
  • Continuous delivery: Kubernetes can automate the deployment and scaling of applications, making it easier to implement continuous delivery pipelines and release new features quickly and reliably.
  • High availability: Kubernetes provides built-in features for ensuring high availability and fault tolerance, making it a good choice for high-uptime applications.
  • Big data: Kubernetes can manage big data workloads like Apache Spark and Hadoop by providing a scalable and flexible platform for running distributed applications.
  • IoT: Kubernetes can manage IoT workloads, such as edge computing and device management, by providing a platform for deploying and managing containerized applications at the edge.
  • Hybrid and multi-cloud: Kubernetes can manage applications across hybrid and multi-cloud environments, providing a consistent platform for deploying and managing applications regardless of the underlying infrastructure.

These are just a few examples of the many use cases for Kubernetes. Its flexibility and scalability make it a powerful tool for managing containerized workloads in a wide range of environments.

Let’s see Kubernetes in action.

Running Kubernetes locally

Install a container runtime

Kubernetes relies on a container runtime to run containers. The most popular container runtime is Docker, but there are other options such as containerd and CRI-O. You’ll need to install a container runtime on your machine before you can run Kubernetes.

For this example, we’ll be using Docker.

  1. Choose your platform: Docker is available for a variety of platforms, including Windows, macOS, and Linux. Choose the appropriate platform for your machine.
  2. Download Docker: You can download Docker from the official Docker website. Go to the Docker website and download the appropriate version of Docker for your platform.
  3. Install Docker: Once you’ve downloaded the Docker installer, run the installer to install Docker on your machine. Follow the prompts to complete the installation process.
  4. Verify the installation: After the installation is complete, you can verify that Docker is installed correctly by running the docker --version command in a terminal or command prompt. This should display the version of Docker that you installed.

That’s it! Once Docker is installed on your machine, you can start using it to run containers. Note that depending on your platform, additional configuration steps may be required to get Docker up and running. The Docker documentation provides detailed instructions for installing and configuring Docker on various platforms.

Install Kubectl

Here’s an overview of how to install kubectl on your machine:

  1. Choose your platform: kubectl is available for a variety of platforms, including Windows, macOS, and Linux. Choose the appropriate platform for your machine.
  2. Download kubectl: You can download kubectl from the official Kubernetes website. Go to the Kubernetes website and download the appropriate version of kubectl for your platform.
  3. Install kubectl: Once you’ve downloaded the kubectl binary, you’ll need to install it on your machine. The installation process varies depending on your platform. Here are some examples:
    • Linux: On Linux, you can install kubectl using your package manager once the official Kubernetes package repository has been added (kubectl is not in the default Ubuntu repositories). For example, on Ubuntu, after adding the Kubernetes apt repository you can run sudo apt-get install kubectl to install kubectl.
    • macOS: On macOS, you can install kubectl using Homebrew. First, install Homebrew if you haven’t already. Then, run brew install kubectl to install kubectl.
    • Windows: On Windows, you can install kubectl using Chocolatey. First, install Chocolatey if you haven’t already. Then, run choco install kubernetes-cli to install kubectl.
  4. Verify the installation: After the installation is complete, you can verify that kubectl is installed correctly by running the kubectl version command in a terminal or command prompt. This should display the version of kubectl that you installed, as well as the version of the Kubernetes server that you’re connected to (if any).

That’s it! Once kubectl is installed on your machine, you can use it to interact with Kubernetes clusters. Note that you’ll need to configure kubectl to connect to a Kubernetes cluster before you can start using it. The Kubernetes documentation provides detailed instructions for configuring kubectl for various use cases.
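For reference, kubectl reads its connection settings from a kubeconfig file (by default ~/.kube/config). A minimal kubeconfig has roughly the shape below; the server address, names, and token are placeholders you would replace with your cluster’s details:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://127.0.0.1:6443   # API server URL (placeholder)
contexts:
- name: local
  context:
    cluster: local
    user: admin
current-context: local               # context kubectl uses by default
users:
- name: admin
  user:
    token: REPLACE_WITH_YOUR_TOKEN   # or client certificates
```

Most local distributions (Minikube, k3s, Kind) generate this file for you during installation.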

Install Kubernetes distribution

Here’s how to install a Kubernetes distribution on your machine:

  1. Choose a distribution: There are several Kubernetes distributions available, each with its own strengths and weaknesses. Some popular options include:
    • Minikube: Minikube is a lightweight Kubernetes distribution that runs a single-node Kubernetes cluster on your local machine. It’s designed for development and testing purposes, and is a good choice if you’re just getting started with Kubernetes.
    • Kind: Kind (Kubernetes IN Docker) is another lightweight Kubernetes distribution that runs each cluster node as a Docker container. It’s designed for testing and development purposes, and is a good choice if you’re already familiar with Docker.
    • k3s: k3s is a lightweight Kubernetes distribution that’s designed for resource-constrained environments, such as edge devices and IoT devices. It’s a good choice if you need to run Kubernetes on a device with limited resources.
  2. Install the distribution: Once you’ve chosen a Kubernetes distribution, you’ll need to install it on your machine. The installation process varies depending on the distribution. Here are some examples:
    • Minikube: To install Minikube, you’ll need to download the Minikube binary and install it on your machine. The Minikube documentation provides detailed instructions for installing Minikube on various platforms.
    • Kind: To install Kind, you’ll need to install Docker on your machine (if you haven’t already), and then download the Kind binary and install it on your machine. The Kind documentation provides detailed instructions for installing Kind on various platforms.
    • k3s: To install k3s, you’ll need to download the k3s binary and install it on your machine. The k3s documentation provides detailed instructions for installing k3s on various platforms.
  3. Start the Kubernetes cluster: Once you’ve installed the Kubernetes distribution, you can start the Kubernetes cluster using the appropriate command. For example, with Minikube you can start the cluster using the minikube start command.

That’s it! Once the Kubernetes cluster is running, you can start deploying applications and managing the cluster using kubectl. Note that additional configuration steps may be required to get the Kubernetes cluster up and running, depending on your specific use case. The documentation for your chosen Kubernetes distribution should provide detailed instructions for configuring and using the cluster.

I’ll be using k3s.

K3s

Here’s how you can start a Kubernetes cluster using k3s:

Install k3s: First, you must install k3s on your machine. The installation process varies depending on your platform. Here’s an example on Linux:

curl -sfL https://get.k3s.io | sh -

Start the Kubernetes cluster: On Linux, the install script above starts a single-node k3s cluster automatically as a systemd service, so no separate start command is needed. k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml; point kubectl at it by running export KUBECONFIG=/etc/rancher/k3s/k3s.yaml. Alternatively, if you’d rather run k3s inside Docker (for example, on macOS or Windows), you can create a cluster with the companion k3d tool:

k3d cluster create mycluster

This will create a new Kubernetes cluster named mycluster running k3s inside Docker. By default, the cluster will have a single node.

Verify the cluster: After the cluster is created, you can verify that it’s running correctly by running the following command:

kubectl cluster-info

This will display information about the Kubernetes cluster, including the API server URL and the Kubernetes version.

That’s it! Once the Kubernetes cluster is running, you can start deploying applications and managing the cluster using kubectl. Additional configuration steps may be required to get the Kubernetes cluster up and running, depending on your specific use case. The k3s documentation provides detailed instructions for configuring and using the cluster.

Deploying an application to k3s cluster

Here’s how to deploy an application to a Kubernetes cluster using manifests:

Create a manifest: A Kubernetes manifest is a YAML file that describes the desired state of the application. The manifest should include information about the container image, the number of replicas, and any other configuration options. Here’s an example manifest for a simple web application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 80

This manifest describes a deployment with three replicas of a container running the myapp:latest image, listening on port 80.

Apply the manifest: Once you’ve created the manifest, you can apply it to the Kubernetes cluster using the following command:

kubectl apply -f myapp.yaml

This will create the deployment and any associated resources (such as pods) on the Kubernetes cluster.

Verify the deployment: After the deployment is created, you can verify that it’s running correctly by running the following command:

kubectl get pods

This will display a list of pods running on the Kubernetes cluster. You should see the pods associated with your deployment listed here.

That’s it! Once the deployment is running, you can access the application by connecting to the appropriate service or pod IP address and port. Depending on your specific use case, additional configuration steps may be required to get your application up and running. The Kubernetes documentation provides detailed instructions for deploying and managing applications on a Kubernetes cluster.
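To expose the deployment, you would typically create a Service that selects the app: myapp pods. Here is a minimal sketch using a NodePort service; the nodePort value is an illustrative placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp       # matches the pod labels from the deployment
  ports:
  - port: 80         # port the service exposes inside the cluster
    targetPort: 80   # containerPort on the pods
    nodePort: 30080  # port exposed on each node (must be 30000-32767)
```

After applying this manifest with kubectl apply -f, the application would be reachable at http://<node-ip>:30080.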

I have assumed here that you have a containerised application already.

Management of the k3s cluster

Scaling applications: You can scale applications up or down by changing the number of replicas in the deployment manifest. For example, to scale a deployment named myapp to 5 replicas, you can run the following command:

kubectl scale deployment myapp --replicas=5

This will update the deployment to have 5 replicas running.
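Equivalently, you can edit the replicas field in the deployment manifest and re-apply it with kubectl apply, which keeps the manifest as the source of truth. The rest of the manifest from the earlier example is unchanged and omitted here:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5   # changed from 3; re-apply with kubectl apply -f myapp.yaml
```

Note that a manual kubectl scale change will be overwritten the next time the original manifest is applied, so updating the manifest is usually preferable.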

Updating application configurations: You can update the configuration of an application by editing the deployment manifest and then applying the changes using kubectl apply. For example, to update the container image used by a deployment named myapp, you can edit the manifest to specify the new image, and then run the following command:

kubectl apply -f myapp.yaml

This will update the deployment to use the new container image.
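For example, pinning the image to a specific tag in the containers section of myapp.yaml gives you predictable rolling updates and easy rollbacks (myapp:1.1.0 is a placeholder tag):

```yaml
    spec:
      containers:
      - name: myapp
        image: myapp:1.1.0   # was myapp:latest
        ports:
        - containerPort: 80
```

Re-applying the manifest triggers a rolling update, and kubectl rollout undo deployment/myapp reverts to the previous version if needed.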

Monitoring the health of the cluster: You can monitor the health of the Kubernetes cluster using kubectl. For example, to view the status of all pods running on the cluster, you can run the following command:

kubectl get pods

This will display a list of all pods running on the cluster, along with their status.

Managing nodes: You can manage the nodes in the Kubernetes cluster using kubectl. For example, to view the status of all nodes in the cluster, you can run the following command:

kubectl get nodes

This will display a list of all nodes in the cluster, along with their status.

That’s it! Once your applications are deployed, you can use kubectl to manage the Kubernetes cluster. Note that there are many other management tasks that you can perform using kubectl, depending on your specific use case. The Kubernetes documentation provides detailed instructions for managing a Kubernetes cluster using kubectl.

The last Byte…

Kubernetes has become the de facto standard for container orchestration, providing a powerful and flexible platform for managing containerized workloads in production environments. With Kubernetes, developers can easily deploy, scale, and manage applications, while operations teams can ensure high availability and reliability.

Kubernetes has a large and active community of contributors and users and is supported by various vendors and service providers. This has led to a rich ecosystem of tools and services that integrate with Kubernetes, making it easier to use and more powerful than ever.

While Kubernetes can be complex to set up and manage, many resources are available to help developers and operations teams get started. Whether you’re running Kubernetes on your local machine for development and testing purposes or deploying it in a production environment, there are many benefits to using Kubernetes for container orchestration.

Overall, Kubernetes has revolutionized how developers and operations teams manage containerized workloads and is likely to remain a critical tool in the container ecosystem for years to come.

Ali Kayani

https://www.linkedin.com/in/ali-kayani-silvercoder007/
