What is Namespace in Kubernetes?
Junior Level
Simple Definition
A Namespace is a way to logically separate resources within a single Kubernetes cluster. Imagine the cluster as a large computer and Namespaces as folders on that computer. Each folder can contain its own Pods, Services, and Deployments with the same names, and they won’t conflict.
A Namespace is not a physical separation (Pods from different namespaces can still communicate over the network). It is a logical grouping for management convenience and RBAC isolation.
Analogy
A Namespace is like an apartment in an apartment building. All apartments are in one building (cluster), but each has its own number (name). Apartment 1 and apartment 2 can each have their own TV (Pod named web-app), and they don’t interfere with each other.
YAML Example
```yaml
# Create Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
# Deployment in namespace production
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0
```
kubectl Example
```bash
# List all namespaces
kubectl get namespaces

# List Pods in a specific namespace
kubectl get pods -n production

# Create a Pod in a namespace
kubectl run my-pod --image=nginx -n staging

# Set the default namespace (to avoid typing -n every time)
kubectl config set-context --current --namespace=production
```
When to Use
- Environment separation: `dev`, `staging`, `production` in one cluster
- Team separation: each team gets its own namespace
- Resource limiting: set CPU/RAM quotas per namespace
- Isolation: different projects don’t see each other’s resources
Middle Level
How it Works
Namespace is a field in the metadata of every namespaced Kubernetes object. The API Server uses it for grouping and filtering resources. When you run `kubectl get pods -n production`, the API Server returns only Pods with `metadata.namespace: production`.
Two types of resources:
- Namespace-scoped — belong to a namespace: Pod, Service, Deployment, ConfigMap, Secret, PVC
- Cluster-scoped — don’t belong to any namespace: Node, PersistentVolume, StorageClass, ClusterRole, Namespace
DNS between namespaces:
```
# Within the same namespace
http://my-service

# Between namespaces (FQDN)
http://my-service.staging.svc.cluster.local

# Format: <service-name>.<namespace>.svc.cluster.local
```
Practical Scenarios
Scenario 1: Environment separation
```
Cluster:
├── namespace: dev        — for development
├── namespace: staging    — for testing
└── namespace: production — for production
```
One cluster, three environments. Team deploys to dev, QA tests in staging, production is stable.
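Declared as manifests, the three environments above might look like this. The `environment` labels are illustrative additions — they come in handy later as selectors for NetworkPolicy and quota tooling:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    environment: dev        # illustrative label, useful for namespaceSelector matching
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: production
```

Apply once with `kubectl apply -f namespaces.yaml` and keep the file in version control alongside the workloads.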
Scenario 2: Resource Quotas
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
    services: "20"
```
The dev team cannot request more than 10 CPU and 20Gi RAM, even if the cluster is large.
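A quota caps the namespace total; a LimitRange complements it by setting per-container defaults and caps, so Pods that omit requests/limits still count against the quota. A sketch with assumed values (the name and numbers are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits          # illustrative name
  namespace: dev
spec:
  limits:
    - type: Container
      defaultRequest:       # injected as requests when a container specifies none
        cpu: "100m"
        memory: 128Mi
      default:              # injected as limits when a container specifies none
        cpu: "500m"
        memory: 512Mi
      max:                  # hard per-container ceiling
        cpu: "2"
        memory: 2Gi
```

Without a LimitRange, a Pod with no requests at all is rejected in a namespace whose quota constrains `requests.cpu`/`requests.memory`, which surprises many teams.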
Scenario 3: RBAC by Namespace
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-reader
  namespace: dev
rules:                      # read-only access, matching the "reader" intent
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-reader-binding
  namespace: dev
subjects:
  - kind: User
    name: developer@company.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-reader
  apiGroup: rbac.authorization.k8s.io
```
The developer sees only namespace dev, has no access to production.
Common Mistakes Table
| Mistake | Consequence | Solution |
|---|---|---|
| Using the `default` namespace | All resources in one namespace, no isolation | Always create separate namespaces for each project |
| Forgetting to specify `-n` in kubectl | Commands run in `default` or the current context; resources “not found” | Use `kubectl config set-context --current --namespace=...` |
| NetworkPolicy not configured | Pods from different namespaces communicate freely, no network isolation | Configure NetworkPolicy to limit cross-namespace traffic |
| ResourceQuota too strict | New Pods are rejected at creation (`exceeded quota`); ReplicaSets report `FailedCreate` | Monitor `kube_resourcequota` metrics, set quotas based on actual consumption |
| Confusing cluster-scoped with namespace-scoped | Trying to create a PV or Node in a namespace fails | Remember: Node, PV, StorageClass, ClusterRole are cluster-scoped |
| Secrets not copied between namespaces | Deployment works in staging but not in production because the Secret wasn’t created | Use External Secrets Operator or copy Secrets manually |
Comparison: Namespace Organization Strategies
| Strategy | Pros | Cons | When to use |
|---|---|---|---|
| By environment (`dev`, `staging`, `prod`) | Simple, clear, easy to quota | All teams in one namespace, possible conflicts | Small teams, single product |
| By team (`team-a`, `team-b`) | Team isolation, independent deployment | Harder to manage environments within a namespace | Large organizations, multi-tenancy |
| By product (`product-a`, `product-b`) | Full product cycle in one namespace | Can be expensive (quotas per product) | Multiple products, different SLAs |
| Hybrid (`team-a-dev`, `team-a-prod`) | Best of both worlds | More namespaces, harder to administer | Medium and large organizations |
When NOT to Use
- One project, one team, one cluster — a Namespace adds complexity without benefit (though still better than `default`)
- Need full isolation — a Namespace provides only logical, not physical, isolation. For full isolation you need a separate cluster
- Need shared configuration for all — ConfigMaps and Secrets are not shared between namespaces. If all services use one config, you must copy it into each namespace
Senior Level
Deep Mechanics: API Server, etcd, and Controller Reconciliation
Storage in etcd:
A Namespace is not just a “folder”. It is a full Kubernetes object (`kind: Namespace`), stored in etcd at the key:

```
/registry/namespaces/<name>
```

Resources in a namespace are stored as:

```
/registry/pods/<namespace>/<pod-name>
/registry/services/<namespace>/<service-name>
```

The API Server filters resources by the `metadata.namespace` field. The request `GET /api/v1/namespaces/production/pods` translates to an etcd range query with the prefix `/registry/pods/production/`.
Namespace Lifecycle Controller:
kube-controller-manager runs the namespace controller, which manages namespace lifecycles:
- Termination: on `kubectl delete namespace X`, the controller sets `status.phase: Terminating` and starts deleting all resources in the namespace
- Finalizer: the Namespace object carries a `kubernetes` finalizer in `spec.finalizers`. It blocks deletion until all resources are deleted
- Garbage Collection: the controller iteratively deletes resources, but if any resource has its own finalizer (e.g., a PVC with `kubernetes.io/pvc-protection`), namespace deletion hangs
Discovery and DNS: CoreDNS (or kube-dns) serves DNS queries inside the cluster. For cross-namespace communication:

```
my-service.staging.svc.cluster.local
```

CoreDNS resolves this to the Service’s ClusterIP using data it watches from the Kubernetes API. If the Service in the other namespace is deleted, DNS returns NXDOMAIN.
Admission Controllers:
- `LimitRanger` — applies the namespace’s LimitRange to Pods on creation
- `ResourceQuota` — checks whether namespace quotas would be exceeded
- `NamespaceLifecycle` — rejects requests to namespaces in the Terminating state
Trade-offs
| Aspect | Trade-off |
|---|---|
| One cluster vs many clusters | One cluster is cheaper and easier to manage, but failure domain is shared. Many clusters = full isolation but more expensive and complex |
| Namespace-per-team vs Namespace-per-env | Per-team = better team isolation. Per-env = simpler CI/CD pipeline. Hybrid = best but more complex |
| Strict vs soft ResourceQuota | Strict = predictable resources, but possible FailedScheduling. Soft = flexibility, but risk of resource starvation |
| NetworkPolicy strict vs permissive | Strict = security, but hard to maintain. Permissive = simple, but any Pod can talk to any |
| Shared vs isolated Control Plane | Shared (one API Server) is cheaper. Isolated (separate clusters) gives failure domain isolation |
Edge Cases (6+)
Edge Case 1: Namespace stuck in Terminating
On kubectl delete namespace X, namespace gets stuck in Terminating. Cause: some resource has a finalizer that can’t complete (e.g., external-provisioner for PVC not responding). Solution:
```bash
# See what's blocking
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n X

# Forceful removal (dangerous!)
kubectl get namespace X -o json \
  | jq '.spec.finalizers=[]' \
  | kubectl replace --raw "/api/v1/namespaces/X/finalize" -f -
```
Edge Case 2: Default namespace has no ResourceQuota
By default, namespace default has no ResourceQuota. If a developer accidentally deploys to default, the Pod can consume unlimited resources, starving other namespaces.
Do not use default namespace in production – it’s a bad practice. Create separate namespaces for dev/staging/prod.
Solution: create ResourceQuota for default and block its use via Admission Webhook.
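One blunt but effective variant of that quota — assuming the goal is to make `default` unusable rather than merely limited — is a zero Pod quota (the object name is illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: block-default       # illustrative name
  namespace: default
spec:
  hard:
    pods: "0"               # any Pod creation in default is rejected with "exceeded quota"
```

This stops workloads immediately without needing a webhook, though a webhook gives a clearer error message to the user.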
Edge Case 3: Cross-namespace Service with ExternalName
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: production
spec:
  type: ExternalName
  externalName: db.database.svc.cluster.local
```

The production Service resolves to an address in the `database` namespace. If the `database` namespace is deleted, nothing fails at creation or update time; DNS queries simply start returning NXDOMAIN, because ExternalName targets are never validated.
Edge Case 4: Cluster-scoped resources “leak” through namespace
StorageClass is cluster-scoped. If team A created StorageClass `fast-ssd`, team B can use it in their namespace without restrictions. For isolation, use a StorageClass with `allowedTopologies` or restrict access via RBAC.
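A sketch of topology restriction, assuming a zone-aware provisioner; the provisioner name and zone value are placeholders, not real identifiers from this cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: example.com/ssd-provisioner    # placeholder: substitute your CSI driver
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - zone-a                           # placeholder zone
```

Note this limits *where* volumes are provisioned, not *who* may reference the class; per-team restrictions still need RBAC or an admission policy.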
Edge Case 5: LimitRange + autoscaling conflict

A LimitRange caps CPU per container (e.g., max 2 CPU). If an autoscaler recommendation or an updated manifest requests 4 CPU, the new Pod is rejected at admission by LimitRanger. (Strictly, HPA changes replica count, not per-Pod resources; growing a Pod’s CPU request is the job of VPA or a manifest change, and that is where this conflict bites.) Result: scaling stalls and the application underperforms.
Edge Case 6: Namespace and Pod Security Standards (PSS)

Kubernetes 1.25+ replaced PodSecurityPolicy with Pod Security Admission (PSA). PSA is applied at the namespace level via labels:

```yaml
metadata:
  labels:
    pod-security.kubernetes.io/enforce: "restricted"
    pod-security.kubernetes.io/audit: "restricted"
    pod-security.kubernetes.io/warn: "restricted"
```

If a namespace has no PSA labels, the cluster-wide default applies (upstream this is `privileged` unless configured otherwise). This can create a false sense of security.
Edge Case 7: Service mesh sidecar injection and namespaces

Istio injects sidecars at the namespace level via a label:

```yaml
metadata:
  labels:
    istio-injection: "enabled"
```

If the namespace lacks the label, the sidecar is not injected: the Pod runs without mTLS and without mesh observability. When migrating between namespaces, it’s easy to forget the label.
Performance Numbers
| Metric | Value |
|---|---|
| API Server namespace filter latency | ~1-5ms for query to namespace with 100 resources |
| etcd key prefix scan (per namespace) | ~5-20ms for namespace with 500 resources |
| Namespace deletion (1000 resources) | 10-60 seconds (depends on finalizers) |
| CoreDNS cross-namespace resolution | 1-5ms |
| Maximum namespaces per cluster | Not limited (practically ~10000, limited by etcd capacity) |
| ResourceQuota evaluation latency | ~1-10ms on Pod creation |
| NetworkPolicy across namespaces | ~5-20ms latency overhead (iptables/ipvs rules) |
Security
- Namespace ≠ Security Boundary — Pod in namespace A can access Pod in namespace B by default. For isolation, you need NetworkPolicy + RBAC
- RBAC namespace-scoped (Role) vs cluster-scoped (ClusterRole) — Role is limited to namespace, ClusterRole — entire cluster. Don’t confuse them!
- Secrets isolation — Secrets visible only in their namespace. But cluster admin (with ClusterRole) can read Secrets from all namespaces
- Pod Security Admission (PSA) — applied at the namespace level. Use `restricted` for production namespaces
- Admission Webhooks — a ValidatingAdmissionWebhook can block deployment to the `default` namespace or require specific labels
- Service Account tokens — automatically created in each namespace. Don’t share Service Accounts between namespaces — this breaks isolation
- NetworkPolicy — the only way to isolate the network between namespaces:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: production
```
Production War Story
Situation: Large company, 50 teams, one cluster with 50 namespaces (one per team). Each team ran dev, staging, and production inside its single namespace. One team (team-payments) worked in a namespace with no ResourceQuota or LimitRange and created a Deployment with `requests.cpu: "100"` on 50 Pods.
What happened:
- 50 Pods requested 5000 CPU, but the cluster had only 2000 CPU
- Pods stuck in `Pending`; kube-scheduler couldn’t place them
- API Server started slowing down due to the large number of `FailedScheduling` events
- kube-controller-manager began backlog reconciliation; API Server latency grew to 5 seconds
- All 50 teams started experiencing deployment and health check problems
- 4-hour outage until administrators deleted the rogue Deployment
Post-mortem and fix:
- ResourceQuota enforced on every namespace (max CPU, memory, pods)
- Deployments without resource requests/limits blocked via a ValidatingAdmissionWebhook
- Namespace-per-env strategy instead of namespace-per-team: `team-payments-dev`, `team-payments-prod`
- Monitoring of `kube_resourcequota` with alerts when approaching limits
- RBAC: teams can deploy only to their own namespaces
- Introduced a “cluster capacity planning” process — quotas reviewed monthly
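The webhook from the fix could be registered roughly as follows. The webhook name, backend Service name, namespace, and path are assumptions; a real setup also needs a `caBundle` and a backend that answers AdmissionReview requests:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-resource-limits
webhooks:
  - name: limits.example.com              # assumed webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                   # reject on webhook outage; trade availability for safety
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
        scope: Namespaced
    clientConfig:
      service:
        name: policy-webhook              # assumed backend Service running the validation logic
        namespace: policy-system
        path: /validate
```

In practice most teams reach for an off-the-shelf policy engine (OPA Gatekeeper, Kyverno) instead of hand-writing the backend.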
Monitoring after fix:
```promql
# Alert: namespace approaching its ResourceQuota
kube_resourcequota{type="used"} / kube_resourcequota{type="hard"} > 0.8

# Alert: more than 5 Pods stuck in Pending per namespace (pair with `for: 5m` in the rule)
sum(kube_pod_status_phase{phase="Pending"}) by (namespace) > 5

# Alert: API Server p99 latency above 1 second
histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket[5m])) by (le)) > 1
```
Monitoring (Prometheus/Grafana)
Key metrics:
```promql
# ResourceQuota usage by namespace
kube_resourcequota{type="used"} / kube_resourcequota{type="hard"}

# Pod count by namespace and phase
sum(kube_pod_status_phase) by (namespace, phase)

# Namespaces stuck in Terminating (pair with `for: 5m` in the alert rule)
kube_namespace_status_phase{phase="Terminating"} == 1

# API Server request latency (p99)
histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket[5m])) by (le))

# etcd request latency (p99)
histogram_quantile(0.99, sum(rate(etcd_request_duration_seconds_bucket[5m])) by (le))

# Unschedulable Pods by namespace
sum(kube_pod_status_unschedulable) by (namespace)
```
Grafana Dashboard panels:
- ResourceQuota usage heatmap (by namespace) — red at > 80%
- Pod count and status distribution — Pending/Running/Failed
- API Server latency p99 — alert at > 1 second
- Namespace lifecycle events — creation/deletion rate
- Cross-namespace network traffic — via NetworkPolicy metrics
- etcd storage usage — namespace keys take ~5-10% of etcd
Highload Best Practices
- Use a namespace-per-env strategy — `team-a-dev`, `team-a-prod` instead of one namespace for all envs
- ResourceQuota on every namespace — without quotas, one namespace can take over the entire cluster
- Default LimitRange — so Pods without explicit requests/limits receive sane default values
- NetworkPolicy deny-by-default — block cross-namespace traffic, allow only what’s needed
- Pod Security Admission `restricted` for production namespaces
- ValidatingAdmissionWebhook — block deployment to the `default` namespace, require labels
- Monitor ResourceQuota usage — alert at 80% utilization
- Watch namespace count — thousands of namespaces increase API Server and etcd load
- Use kubectl contexts — `kubectl config use-context team-a-prod` to avoid deploying to the wrong place
- Cluster capacity planning — monthly quota review and cluster scaling planning
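The deny-by-default practice above can be expressed as a NetworkPolicy with an empty `podSelector` and no allow rules; the namespace name here is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a-prod     # illustrative namespace
spec:
  podSelector: {}            # empty selector matches every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress                 # listing both types with no rules denies all traffic
```

Once this is in place, add narrowly scoped allow policies (e.g., the `deny-cross-namespace` example in the Security section) on top of it.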
Interview Cheat Sheet
Must know:
- Namespace — logical (not physical!) grouping of resources in K8s cluster
- DNS between namespaces: `service.namespace.svc.cluster.local`
- ResourceQuota limits CPU/RAM/Pods per namespace; LimitRange sets defaults for containers
- RBAC: Role (namespace-scoped) vs ClusterRole (entire cluster)
- Namespace ≠ Security Boundary — Pod from A can communicate with B without NetworkPolicy
- Default namespace — not for production; always create separate namespaces
- Cluster-scoped resources: Node, PV, StorageClass, ClusterRole (don’t belong to namespace)
Common follow-up questions:
- “Does namespace isolate network?” — No, only logically; NetworkPolicy is needed for network isolation
- “Namespace stuck in Terminating — why?” — Resource with finalizer can’t complete (e.g., external-provisioner)
- “Maximum namespaces?” — Formally not limited; practically ~10000 (limited by etcd)
- “Are Secrets shared between namespaces?” — No, each Secret visible only in its own namespace
Red flags (DO NOT say):
- “Namespace = full isolation” (needs NetworkPolicy + RBAC)
- “I use default namespace for production” (bad practice)
- “Namespace replaces a separate cluster” (no failure domain isolation)
- “PV belongs to namespace” (PV is a cluster-scoped resource)
Related topics:
- [[What is Kubernetes and why is it needed]] — K8s objects
- [[What is the difference between ConfigMap and Secret]] — Secrets in namespace
- [[How to monitor applications in Kubernetes]] — monitoring by namespace