Kubernetes Basics

This article provides a foundational understanding of Kubernetes, a powerful container orchestration system. It's aimed at newcomers to the MediaWiki infrastructure and those looking to understand the platform on which some of our services are deployed.

What is Kubernetes?

Kubernetes (often shortened to K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. Think of it as a platform for running and managing applications packaged as containers (for example, Docker/OCI images). Instead of manually starting, stopping, and scaling containers, Kubernetes automates these tasks, ensuring high availability and efficient resource utilization. Understanding Kubernetes is crucial for anyone involved in deploying and maintaining applications on our server infrastructure, especially given our increasing reliance on containerization.

Core Concepts

Several key concepts form the foundation of Kubernetes. Let's explore these:

  • Pods: The smallest deployable units in Kubernetes. A pod represents a single instance of an application and can contain one or more containers that share storage and network resources. Think of a Pod as a logical host for your application. Pod lifecycle is an important concept to grasp.
  • Deployments: Deployments manage the desired state of your application. They ensure a specified number of pod replicas are running and automatically replace failed pods. They facilitate rolling updates and rollbacks. A minimal example manifest appears after this list.
  • Services: Services provide a stable network endpoint for accessing your application. They abstract away the underlying pods, allowing access even as pods are created, destroyed, and scaled. Service discovery is a core function.
  • Namespaces: Namespaces provide a way to logically isolate resources within a Kubernetes cluster. They’re useful for separating development, staging, and production environments. Namespace organization is vital for larger deployments.
  • Nodes: Nodes are the worker machines where your pods are actually run. These can be physical or virtual machines. Understanding node capacity is important for scaling.
  • Clusters: A Kubernetes cluster is a set of nodes that work together to run your containerized applications.
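
To make these concepts concrete, here is a minimal sketch of a Deployment and a Service written as a Kubernetes YAML manifest. The names, namespace, image, and port are illustrative placeholders rather than values from our actual configuration; something like this would be applied with `kubectl apply -f <file>`.

```yaml
# Hypothetical Deployment: keeps three replicas of a single-container pod running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # placeholder name
  namespace: example         # placeholder namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80
---
# Hypothetical Service: a stable endpoint in front of the pods selected above.
apiVersion: v1
kind: Service
metadata:
  name: example-app
  namespace: example
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 80
```

If a pod backing this Deployment fails, the Deployment controller replaces it, while the Service keeps routing traffic to whichever healthy pods currently match the `app: example-app` label.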

Kubernetes Architecture

A Kubernetes cluster consists of two main components: the control plane and worker nodes.

Control Plane

The control plane manages the cluster. Key components include:

  • kube-apiserver: The central management component. It exposes the Kubernetes API.
  • etcd: A distributed key-value store used to store the cluster's configuration data. etcd backups are critical.
  • kube-scheduler: Decides which node to run a pod on, based on resource requirements and other constraints. A small example of such constraints appears after this list.
  • kube-controller-manager: Runs controller processes that regulate the state of the cluster.
  • cloud-controller-manager: Integrates with cloud providers (like AWS, Azure, or Google Cloud) to manage resources.
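
As a small illustration of the constraints the kube-scheduler works with, here is a hypothetical pod spec that can only be placed on nodes carrying a particular label and offering enough unreserved CPU and memory. The label and figures are made up for the example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-example   # placeholder name
spec:
  nodeSelector:
    disktype: ssd            # hypothetical node label; only matching nodes are candidates
  containers:
    - name: app
      image: nginx:1.25      # placeholder image
      resources:
        requests:            # the scheduler only picks nodes with this much capacity left
          cpu: "500m"
          memory: "256Mi"
```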

Worker Nodes

Worker nodes execute the tasks assigned by the control plane. Key components include:

  • kubelet: An agent that runs on each node and manages pods.
  • kube-proxy: A network proxy that enables communication to pods from inside or outside the cluster.
  • Container runtime (e.g., containerd or CRI-O): Responsible for actually running the containers.


Technical Specifications of Example Nodes

Here are the typical specifications of the worker nodes in our MediaWiki Kubernetes cluster. These are subject to change; treat them as a representative example.

  • CPU: 8 vCPUs
  • Memory (RAM): 32 GB
  • Storage: 100 GB SSD
  • Operating System: Ubuntu 22.04 LTS

Common Kubernetes Commands

Here are some frequently used commands for `kubectl`, the Kubernetes command-line tool:

  • `kubectl get pods`: Lists all pods in the current namespace.
  • `kubectl create deployment <deployment-name> --image=<image-name>`: Creates a new deployment.
  • `kubectl scale deployment <deployment-name> --replicas=<number>`: Scales a deployment to a specified number of replicas.
  • `kubectl describe pod <pod-name>`: Provides detailed information about a pod.
  • `kubectl logs <pod-name>`: Displays the logs from a pod.

Networking in Kubernetes

Kubernetes networking is a complex topic. Key concepts include:

  • CNI (Container Network Interface): An interface for configuring networking between pods. We use Calico networking for our deployments.
  • Ingress: Manages external access to services within the cluster, often providing load balancing and SSL termination. Understanding Ingress controllers is essential.
  • Network Policies: Control traffic flow between pods, allowing or denying connections based on pod labels, namespaces, and ports. Example Ingress and NetworkPolicy manifests appear after this list.
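
To ground these ideas, here are two short, hypothetical manifests: an Ingress routing external HTTP traffic to a Service, and a NetworkPolicy restricting which pods may reach that Service's pods. The hostname, names, labels, and ports are placeholders, not our real configuration.

```yaml
# Hypothetical Ingress: routes HTTP traffic for one hostname to a Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.org          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app    # placeholder Service name
                port:
                  number: 80
---
# Hypothetical NetworkPolicy: only pods labelled app=frontend may reach
# pods labelled app=example-app on TCP port 80.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: example-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80
```

Note that an Ingress only takes effect when an Ingress controller is running in the cluster, and NetworkPolicies are only enforced by a CNI plugin that supports them (Calico does).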

Resource Management

Kubernetes allows you to specify resource requests and limits for your containers:

  • Requests: The minimum amount of resources a container needs to run. The scheduler uses this information to place pods on appropriate nodes.
  • Limits: The maximum amount of resources a container can use. Kubernetes enforces these limits at runtime: CPU usage above the limit is throttled, and a container exceeding its memory limit is terminated.

Understanding resource requests and limits is crucial for efficient resource utilization and preventing resource contention. See our resource allocation guide for more details.
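
As a sketch, requests and limits are declared per container in the pod spec; the values below are placeholders, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25      # placeholder image
      resources:
        requests:            # minimum guaranteed; used by the scheduler for placement
          cpu: "250m"        # a quarter of one CPU core
          memory: "128Mi"
        limits:              # hard ceiling; CPU above this is throttled,
                             # memory above this gets the container OOM-killed
          cpu: "500m"
          memory: "256Mi"
```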
