Kubernetes in Simple English. Part I

Published: March 28, 2025 · Reading time: 8 min

Femi Adigun

Senior Software Engineer & Coach

Updated March 01, 2025

What is Kubernetes?

Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. It is a powerful tool that can help you manage your containerized applications in a production environment.

Architecture Overview

A Kubernetes cluster consists of a master node and one or more worker nodes. The master node is responsible for managing the cluster, while the worker nodes are responsible for running the applications.
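
Once you have a cluster running (see the Minikube section below), you can list its nodes and their roles with kubectl. On a single-node Minikube cluster, the one node acts as both control plane and worker:

kubectl get nodes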

Master Node Components

The master node is the brain of the cluster. It is responsible for managing the cluster and scheduling applications to run on the worker nodes. The master node typically requires fewer resources (CPU, RAM, and storage) than the worker nodes. It consists of several components:

  • API Server: Serves as the cluster gateway.
  • Controller Manager: Watches the cluster state and, when it drifts from the desired state (for example, a pod crashes), restores it by working with other components such as the kubelet.
  • Scheduler: Decides which node a new pod should be placed on; the kubelet on that node is then responsible for actually starting the pod.
  • etcd: A distributed key-value store that holds the cluster's state, conceptually similar to your browser's localStorage but for the whole cluster.
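
If you want to peek at these control-plane components in a running cluster (for example, one started with Minikube), they run as pods in the kube-system namespace:

kubectl get pods -n kube-system

You should see pods such as kube-apiserver, kube-controller-manager, kube-scheduler, and etcd alongside other system pods.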

Worker Node Components

Inside a node, we have pods. A pod is the smallest deployable unit in Kubernetes: a group of one or more containers that are deployed together on the same host. Although a pod can hold multiple containers, the common recommendation is to run only one main container per pod.

  • Kubelet: Runs on every node and is responsible for starting and managing the pods assigned to that node.
  • Kube-proxy: Runs on every node and handles network routing so pods and services can reach each other, acting like a small internal network layer for the cluster.
  • Container runtime: The software responsible for running containers.
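
To see which node each pod landed on, and what is running on a particular node, you can use the following (on Minikube the single node is simply named minikube; replace the node name with one from your own cluster otherwise):

kubectl get pods -o wide
kubectl describe node minikube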

How Does a Cluster Work?

When you deploy an application to a Kubernetes cluster, the master node schedules the application to run on one of the worker nodes. The worker node then pulls the application image from a container registry and runs the application in a container.

When the application is running, Kubernetes monitors it and ensures that it stays healthy. If the application fails, Kubernetes can automatically restart it or move it to another node.
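
You can observe this self-healing behavior yourself once you have a deployment running (the hello-minikube deployment created later in this article works well). Delete one of its pods and watch a replacement appear; the pod name below is a placeholder for one of your own:

kubectl get pods
kubectl delete pod <pod-name>
kubectl get pods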

Getting Started with Minikube and Kubectl

Minikube is a tool that makes it easy to run a single-node Kubernetes cluster on your local machine. Kubectl is the command-line tool used to interact with the Kubernetes cluster.

Installing Minikube (on Mac)

Minikube requires a driver to run the cluster in. On a Mac you can use a hypervisor or container runtime such as HyperKit, VirtualBox, Docker, or QEMU as the driver. In this example, we will use QEMU.

brew install qemu
brew install minikube

kubectl is installed as a dependency of minikube when you install it with Homebrew, so you don't need to install it separately.

minikube start --driver=qemu
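
Once the start command finishes, you can verify that the cluster is up (the exact output varies by version, but the node should report a Ready status):

minikube status
kubectl get nodes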

Minikube runs its own Docker daemon inside the Minikube VM, so you can build your Docker images and run them inside Minikube. Even if you don't have a Docker daemon running on your local machine, you can still build and run your images inside Minikube.
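
As a quick sketch of how that works (my-image is just a placeholder name), you can either point your local Docker CLI at Minikube's internal daemon, or let Minikube build the image for you:

# point the local Docker CLI at Minikube's internal daemon
eval $(minikube docker-env)
docker build -t my-image:latest .

# or build entirely inside Minikube, without a local Docker CLI
minikube image build -t my-image:latest .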

Working with Deployments

Creating a Deployment

A deployment is a Kubernetes resource that defines how to deploy and scale an application. You can create a deployment using a YAML file or by running a kubectl command.

kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4

After creating the deployment, you can view it using the kubectl get deployments command. You can then expose it as a service using the kubectl expose command to make it accessible.
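
For example, with the hello-minikube deployment created above, this might look like the following (port 8080 is the port the echoserver image listens on):

kubectl get deployments
kubectl expose deployment hello-minikube --type=NodePort --port=8080
minikube service hello-minikube --url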

Editing a Deployment

You can edit a deployment using the kubectl edit deployment command. This will open the deployment in a text editor, where you can make changes to the configuration.
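
For example, the following opens the hello-minikube deployment in your editor; if you only want to change the image, kubectl set image is a quicker alternative (this sketch assumes the container was auto-named echoserver by kubectl create deployment):

kubectl edit deployment hello-minikube
kubectl set image deployment/hello-minikube echoserver=k8s.gcr.io/echoserver:1.10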

Scaling a Deployment

You can scale a deployment up or down using the kubectl scale command:

kubectl scale deployment my-deployment --replicas=5
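
You can then confirm that the new replicas were created (my-deployment is a placeholder for your deployment's name):

kubectl get deployment my-deployment
kubectl get pods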

Viewing Logs

You can view the logs of a pod using the kubectl logs command:

kubectl logs my-pod
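
A few useful variations (the pod, container, and deployment names are placeholders):

kubectl logs -f my-pod                     # stream the logs
kubectl logs my-pod -c my-container        # a specific container in a multi-container pod
kubectl logs deployment/my-deployment      # logs from a pod belonging to a deployment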

Deleting a Deployment

If you no longer need a deployment, you can delete it using the kubectl delete deployment command.
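
For example, to remove the hello-minikube deployment created earlier (and the service, if you exposed one):

kubectl delete deployment hello-minikube
kubectl delete service hello-minikube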

Using Configuration Files

You can create a configuration file to define your deployment, service, and other resources in Kubernetes. This file is written in YAML format and contains the blueprint for your application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:latest

You can then apply the configuration file to your cluster using the kubectl apply command. This approach is recommended for production environments as it allows you to store your configuration in version control.
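
Assuming the YAML above is saved as deployment.yaml, applying and checking it looks like this:

kubectl apply -f deployment.yaml
kubectl get deployments

A matching service can be described in a file the same way. The sketch below assumes the container listens on port 8080 and selects pods by the app: my-app label used above:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080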

Conclusion

Kubernetes offers a powerful platform for deploying and managing containerized applications. While there is a learning curve, tools like Minikube make it easier to get started and experiment with Kubernetes locally.

As you become more comfortable with the basics, you can explore more advanced features such as StatefulSets, ConfigMaps, Secrets, and more to build robust, scalable applications.


Femi Adigun

AWS Certified Solutions Architect

Tags:
Kubernetes, Docker, DevOps, Containers