Question 9 · Section 14

What is a Pod in Kubernetes?


Junior Level

Simple Explanation

A Pod is the smallest deployable unit in Kubernetes. A Pod wraps one or more containers and provides them with shared resources: an IP address, storage, and configuration.

K8s doesn’t work with containers directly because it needs a place for “helpers” (sidecars, init containers) and shared configuration (network, volumes). A Pod is the wrapper that groups one or more containers.

Simple Analogy

A Pod is like a room in a dormitory:

  • One person can live in the room (one container) — the most common case
  • Sometimes two people live in a room (two containers) — if they are closely related
  • Everyone in the room shares a bathroom and kitchen (shared resources)
  • The room has one address (one IP for the entire Pod)

Key Features

  1. One IP for all containers — containers in a Pod communicate via localhost
  2. Shared volumes — containers can read/write the same files
  3. Live together, die together — Pod is created and deleted as a whole

The Most Common Case: One Container per Pod

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp
    image: myapp:1.0
    ports:
    - containerPort: 8080

Why Pod Instead of Just a Container?

Kubernetes needs a place for additional information:

  • Sidecar containers (log collection, monitoring)
  • Init containers (preparation before launching the main one)
  • Shared resources (volumes, network)

Important: Pods are Ephemeral

Pods are ephemeral — they can disappear at any moment. Your code should not store state in a Pod. Use PersistentVolume or an external DB for data.
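As a sketch of keeping state outside the Pod, a container can mount a PersistentVolumeClaim; the claim name `myapp-data` here is hypothetical and would be created separately:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp
    image: myapp:1.0
    volumeMounts:
    - name: data
      mountPath: /var/lib/myapp    # data here survives Pod deletion
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myapp-data        # hypothetical PVC, defined elsewhere
```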

What a Junior Developer Should Remember

  • Pod — smallest deployable unit in K8s
  • Usually one container = one Pod
  • Containers in a Pod share IP and volumes
  • Pods are ephemeral — can disappear at any moment
  • In production, Pods are managed through Deployment, not directly

When NOT to Use Multi-container Pods

DON’T use multi-container Pods if: containers can be separated, they don’t require a shared lifecycle, they need to be scaled independently.


Middle Level

Why Do We Need a Pod?

A Pod is an abstraction for a group of closely related containers that must run together on the same server.

Multi-container Usage Patterns

In most cases the rule is “one container per Pod.” But there are patterns for multiple containers:

1. Sidecar

An auxiliary container running alongside the main one (like a motorcycle's sidecar) that extends its functionality:

spec:
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app

  - name: log-agent        # Sidecar
    image: fluentd
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true

  volumes:
  - name: logs
    emptyDir: {}

Examples: log collector (Fluentd), Service Mesh proxy (Istio sidecar).

2. Adapter

Converts the main application’s output to an external system’s standard:

[App: custom data format] → [Adapter: Prometheus format] → [External monitoring]
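A hedged sketch of the Adapter idea; the exporter image and port below are illustrative (a Prometheus-style exporter that reads the app's custom metrics and re-exposes them):

```yaml
spec:
  containers:
  - name: app
    image: myapp:1.0                 # emits metrics in its own format
  - name: metrics-adapter            # Adapter: converts to Prometheus format
    image: example/exporter:latest   # hypothetical exporter image
    ports:
    - containerPort: 9100            # scraped by the external monitoring system
```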

3. Ambassador

Proxies the main container’s connections to external services:

[App] → [Ambassador: localhost:3306] → [Remote DB: db.example.com:3306]
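A sketch of the Ambassador idea; the proxy image and its flags are hypothetical, the point is that the app talks to localhost while the ambassador handles the remote connection:

```yaml
spec:
  containers:
  - name: app
    image: myapp:1.0                  # connects to localhost:3306
  - name: db-ambassador               # Ambassador: proxies to the remote DB
    image: example/tcp-proxy:latest   # hypothetical proxy image
    args: ["--listen=3306", "--target=db.example.com:3306"]
```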

4. Init Containers

Run and complete before the main containers start:

spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z db 5432; do sleep 2; done']

  containers:
  - name: app
    image: myapp:1.0

Pod Lifecycle

Pending → Running → (Succeeded | Failed)

  • Pending: Pod accepted but containers not yet started (e.g., image being pulled)
  • Running: at least one container is running
  • Succeeded: all containers completed successfully (terminal state)
  • Failed: all containers terminated and at least one exited with an error (terminal state)
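Which terminal phase a Pod can reach depends on its restartPolicy; a minimal sketch of a run-to-completion Pod that ends in Succeeded once its command exits with code 0:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot
spec:
  restartPolicy: Never         # without this, kubelet would restart the container
  containers:
  - name: task
    image: busybox
    command: ['sh', '-c', 'echo done']   # exit 0 → Pod phase: Succeeded
```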

Why Can’t You Work with Pods Directly?

In production, Pods are not created manually but through controllers:

Controller      Purpose
Deployment      Stateless applications (REST API, web services)
StatefulSet     Stateful applications (DBs, queues)
DaemonSet       One Pod per Node (logging, monitoring)
Job/CronJob     One-time and scheduled tasks

These controllers provide self-healing: if a Pod crashes, the controller creates a new one.
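A minimal Deployment sketch: the controller keeps three replicas of the Pod template below alive, recreating any Pod that crashes or is evicted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:                    # the Pod template the controller stamps out
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
```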

Pod Networking

  • Each Pod gets a unique IP address
  • All containers in a Pod share one network namespace
  • Containers communicate via localhost
  • Pods communicate with each other directly (no NAT)
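To illustrate localhost communication, a sketch of a two-container Pod where a helper polls the main container over the shared network namespace (the image and health path are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: myapp:1.0
    ports:
    - containerPort: 8080
  - name: health-poller              # shares the network namespace with app
    image: busybox
    command: ['sh', '-c', 'while true; do wget -qO- http://localhost:8080/health; sleep 10; done']
```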

What a Middle Developer Should Remember

  • Pod — wrapper over a group of closely related containers
  • 90% of cases: one container = one Pod
  • Sidecar pattern — basis for Service Mesh and logging
  • Init Containers — for preparation before launch
  • Pods are ephemeral — use Deployment for management
  • Containers in a Pod share Network and IPC namespaces

Senior Level

Pod as the Atomic Scheduling Unit

A Pod is the fundamental Kubernetes abstraction that defines the boundary of atomicity for scheduling, scaling, and failure.

Architectural Rationale for Pod

Why doesn’t K8s work directly with containers?

  1. Grouping related processes: Some processes must be colocated (on the same Node), share a network stack, have a shared lifecycle.

  2. Unit of atomicity:
    • Scheduler places an entire Pod on one Node
    • All containers in a Pod start/die together
    • Containers in a Pod cannot be split across Nodes
  3. Isolation and sharing: a Pod defines the boundary of sharing (what containers have in common: network, IPC, volumes) and isolation (what stays separate: filesystems, PID namespaces by default).

Linux Namespaces and Pod

All containers in a Pod share:

  • Network namespace: one IP, one set of ports
  • IPC namespace: shared inter-process communication
  • UTS namespace: same hostname

But isolated:

  • PID namespace (by default; can be shared via shareProcessNamespace: true)
  • User namespace (optional)
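The PID-namespace default can be overridden; a sketch using shareProcessNamespace, which lets containers in the Pod see each other's processes (handy for debugging sidecars):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid
spec:
  shareProcessNamespace: true   # all containers share one PID namespace
  containers:
  - name: app
    image: myapp:1.0
  - name: debug
    image: busybox
    command: ['sleep', '3600']  # can now see the app's processes with ps
```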

Pod Spec: Key Sections

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
    version: v1
spec:
  # Scheduling
  nodeSelector:
    disktype: ssd
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"

  # Initialization
  initContainers: [...]

  # Containers
  containers:
  - name: app
    image: myapp:1.0
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "512Mi"
    livenessProbe: {...}
    readinessProbe: {...}
    volumeMounts: [...]

  # Volumes
  volumes: [...]

  # Security
  securityContext:
    runAsNonRoot: true
    fsGroup: 2000

  # DNS
  dnsPolicy: ClusterFirst
  hostname: myapp
  subdomain: default
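The probes elided above might be filled in like this (the paths and timings are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz              # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5              # container restarted if this keeps failing
readinessProbe:
  httpGet:
    path: /ready                # hypothetical readiness endpoint
    port: 8080
  periodSeconds: 5              # Pod removed from Service endpoints on failure
```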

Advanced Patterns

Sidecar for mTLS (Istio)

containers:
- name: app
  image: myapp:1.0
- name: istio-proxy
  image: istio/proxyv2:1.20
  # Intercepts all traffic, provides mTLS

Init Container for Migrations

initContainers:
- name: db-migrate
  image: myapp:1.0
  command: ['java', '-jar', 'app.jar', '--migrate']
containers:
- name: app
  image: myapp:1.0
  command: ['java', '-jar', 'app.jar']

Pod Lifecycle: Deep Dive

Phase transitions:

Pending:    Pod accepted, containers not yet running
            ↓ (images pulled, resources allocated)
Running:    At least one container running
            ↓
            ├─ all containers exit successfully → Succeeded (terminal)
            └─ a container exits with an error  → Failed (terminal)

Container states:

  • Waiting: Container not yet started (pulling image, etc.)
  • Running: Container executing
  • Terminated: Container exited (with exit code, signal, reason)

Pod Disruption Budget (PDB)

For protection against voluntary disruptions (drain, rollout):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp

Ephemeral Containers (beta in v1.23, stable in v1.25)

For debugging without restarting a Pod:

kubectl debug -it myapp-pod --image=busybox --target=app

QoS Classes

Kubernetes assigns a QoS class to a Pod based on requests/limits:

Class        Conditions                                      Eviction priority
Guaranteed   requests == limits for every container          Evicted last
Burstable    Has requests/limits, but not Guaranteed         Middle
BestEffort   No requests or limits set                       Evicted first
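A sketch of a Guaranteed-class Pod: every container sets requests equal to limits for both CPU and memory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: app
    image: myapp:1.0
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"           # equal to requests → QoS class: Guaranteed
        memory: "256Mi"
```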

Summary for Senior

  • Pod — wrapper over a group of closely related containers.
  • Containers in a Pod share Network and IPC namespaces.
  • Sidecar pattern — foundation for Service Mesh, observability, security.
  • Pods are ephemeral — treated as “Cattle”, not “Pets”.
  • QoS classes determine eviction priority.
  • PDB protects against voluntary disruptions.
  • Init Containers — for initialization, Sidecar — for extending functionality.

Interview Cheat Sheet

Must know:

  • Pod — smallest deployable unit in K8s; wrapper over one or more containers
  • 90% of cases: one container = one Pod
  • Containers in a Pod share Network namespace (one IP), IPC, volumes
  • Pods are ephemeral — treated as Cattle, not Pets; don’t store state in a Pod
  • In production, Pods are managed through controllers (Deployment, StatefulSet, DaemonSet)
  • Patterns: Sidecar (extension), Init (preparation), Ambassador (proxy)
  • QoS classes (Guaranteed, Burstable, BestEffort) determine eviction priority

Frequent follow-up questions:

  • “Why doesn’t K8s work with containers directly?” — Needs a place for sidecars, init containers, shared configuration
  • “What is a Sidecar?” — Auxiliary container alongside the main one (logging, service mesh proxy)
  • “What happens if a Pod crashes?” — Controller (Deployment) creates a new one; the Pod itself doesn’t restart
  • “How do containers in a Pod communicate?” — Via localhost (shared network namespace)

Red flags (DO NOT say):

  • “I create Pods directly in production” (use Deployment/StatefulSet)
  • “Pod = always one container” (multi-container Pod is a standard pattern)
  • “Data in a Pod persists after deletion” (Pods are ephemeral)
  • “Containers in a Pod have different IPs” (they share one network namespace)

Related topics:

  • [[What is Kubernetes and why is it needed]] — K8s architecture
  • [[What is Node in Kubernetes]] — where Pods run
  • [[How to organize rolling update in Kubernetes]] — updating Pods