What types of Service exist (ClusterIP, NodePort, LoadBalancer)?
Junior Level
Simple Explanation
Kubernetes has 4 types of Service. They determine from where you can reach your application:
ClusterIP (default)
- Access: Only within the Kubernetes cluster
- For what: Communication between microservices
- Example: Backend connects to the database
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP  # Can be omitted — it's the default
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```
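Other Pods reach this Service by its in-cluster DNS name (`backend` from the same namespace, or `backend.default.svc.cluster.local` fully qualified). A minimal sketch of a consumer, with the frontend name and image as assumptions:

```yaml
# Hypothetical frontend that talks to the backend Service via cluster DNS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: nginx:1.25  # placeholder image
          env:
            - name: BACKEND_URL
              value: "http://backend:80"  # Service name resolves via cluster DNS
```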
NodePort
- Access: From outside on the port of each server (Node)
- Ports: 30000–32767
- For what: Quick access for testing
```yaml
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # Fixed port (or K8s picks one itself)
```
Access: http://<server-IP>:30080
LoadBalancer
- Access: Public IP from the cloud provider
- For what: Production in the cloud (AWS, GCP, Azure)
```yaml
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
```
The cloud creates a load balancer with a public IP.
Note: LoadBalancer is a wrapper over NodePort, which is a wrapper over ClusterIP, which forwards to Pods. Three levels of nesting.
ExternalName
- Access: DNS alias to an external resource
- For what: Accessing an external DB or API by a local name
```yaml
spec:
  type: ExternalName
  externalName: mydb.example.com
```
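A complete manifest for this fragment might look like the following (the in-cluster alias name `mydb` is an assumption):

```yaml
# Pods can now connect to "mydb" as if it were a local Service;
# cluster DNS returns a CNAME pointing at mydb.example.com.
apiVersion: v1
kind: Service
metadata:
  name: mydb  # alias name, assumed for illustration
spec:
  type: ExternalName
  externalName: mydb.example.com
```

Note there is no `selector` and no `ports` section: nothing is proxied, only DNS is answered.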
Summary Table
| Type | External access | When to use |
|---|---|---|
| ClusterIP | No | Internal communication |
| NodePort | Yes (Node port) | Testing |
| LoadBalancer | Yes (public IP) | Production in cloud |
| ExternalName | No (DNS alias only) | DNS to external resource |
What a Junior Developer Should Remember
- ClusterIP — only within the cluster (microservices)
- NodePort — external access via server port
- LoadBalancer — public IP from the cloud
- ExternalName — DNS alias to an external resource
- For HTTP/HTTPS with multiple services — Ingress saves money. For a single service — LoadBalancer is simpler.
Middle Level
How Each Type Works
ClusterIP
Creates a virtual IP inside the cluster. kube-proxy on each Node programs iptables (or IPVS) rules for routing.
Use Case: Communication between microservices. Backend → Database, Frontend → API.
NodePort
Works in three steps:
1. Traffic arrives at a Node port (30000-32767)
2. It is forwarded to the Service's ClusterIP
3. The ClusterIP load-balances to Pods
Disadvantages:
- Need to know node IPs
- Inconvenient to manage many ports
- No SSL termination
LoadBalancer
In cloud environments:
- K8s creates a NodePort-type Service
- Requests the cloud to create a Load Balancer
- The cloud directs traffic to the NodePort
Use Case: Main entry point for external clients.
Cost: Each LoadBalancer is a separate cloud resource (~$18-25/mo in AWS). 10 services = $180-250/mo for load balancers alone.
ExternalName
Works at the DNS level — returns a CNAME record. Doesn’t use selectors or proxying.
Use Case: Migration from a monolith — an external resource looks like a K8s Service.
What to Choose?
Modern architecture:
- Internal connections → ClusterIP
- External HTTP/HTTPS → ClusterIP + Ingress (one LB for all services)
- Non-HTTP traffic (DB) → LoadBalancer
- External resources → ExternalName
Saving with Ingress
Instead of:
10 services → 10 LoadBalancers → $$$
We use:
10 services (ClusterIP) → 1 Ingress → 1 LoadBalancer → $
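The saving above can be sketched as a single Ingress fanning out to several ClusterIP Services (the hostname, service names, and the NGINX ingress class are assumptions):

```yaml
# One Ingress (one cloud LB) routing to multiple ClusterIP Services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  ingressClassName: nginx  # assumes an NGINX ingress controller is installed
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders  # ClusterIP Service (assumed name)
                port:
                  number: 80
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users  # ClusterIP Service (assumed name)
                port:
                  number: 80
```

Each additional service adds only a `paths` entry, not a new cloud load balancer.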
What a Middle Developer Should Remember
- NodePort — foundation for LoadBalancer
- To save money: Ingress on top of ClusterIP
- LoadBalancer directly — only for non-HTTP traffic
- ExternalName — DNS alias, doesn’t proxy traffic
- Difference: L4 (Service) vs L7 (Ingress) load balancing
When NOT to Use Each Service Type
DON’T use NodePort in production (insecure, inconvenient). DON’T use LoadBalancer for internal services (expensive). DON’T use ExternalName for in-cluster services (pointless).
Senior Level
Architectural Analysis of Service Types
Choosing a Service type is choosing the network exposure level and cost model.
Deep Technical Detail
LoadBalancer: Cloud Implementations
| Provider | Implementation | Features |
|---|---|---|
| AWS | NLB/ALB | NLB (Network Load Balancer) — TCP/UDP load balancer. ALB (Application Load Balancer) — HTTP level. |
| GCP | Cloud LB | Supports internal LB |
| Azure | Azure LB | Integration with AKS |
| On-premise | MetalLB | BGP or Layer2 mode |
Internal LoadBalancer:
```yaml
metadata:
  annotations:
    networking.gke.io/load-balancer-type: "Internal"              # GCP
    service.beta.kubernetes.io/aws-load-balancer-internal: "true" # AWS
```
Creates an LB accessible only within the VPC.
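A full internal LoadBalancer Service on GKE might look like this sketch (the service name, selector, and ports are assumptions; swap the annotation for the AWS one on EKS):

```yaml
# Internal LB: gets a private VPC IP instead of a public one
apiVersion: v1
kind: Service
metadata:
  name: internal-api  # assumed name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"  # GCP-specific
spec:
  type: LoadBalancer
  selector:
    app: internal-api
  ports:
    - port: 80
      targetPort: 8080
```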
NodePort: Limitations
- Port range: 30000-32767 (configurable via the API server flag `--service-node-port-range`)
- One port per Service — two Services can't claim the same nodePort
- Firewall: ports must be opened on all Nodes
LoadBalancer: Multi-port Issues
One LoadBalancer Service can export multiple ports:
```yaml
ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: grpc
    port: 9090
    targetPort: 9090
```
But this creates one LB with two listeners. Cost = one LB.
Service Mesh Integration
In Istio:
- All Services are ClusterIP
- Sidecar intercepts traffic
- Gateway (instead of Ingress) manages external traffic
- mTLS between all services automatically
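The mesh-wide mTLS mentioned above is typically switched on with a single Istio resource; a sketch, assuming Istio is installed in the default `istio-system` namespace:

```yaml
# Require mTLS for all service-to-service traffic in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # mesh-wide scope when placed in the root namespace
spec:
  mtls:
    mode: STRICT  # reject plaintext traffic between sidecars
```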
Troubleshooting LoadBalancer
Problem: LoadBalancer in Pending status.
```shell
kubectl get svc my-service
# NAME         TYPE           EXTERNAL-IP   ...
# my-service   LoadBalancer   <pending>     ...
```
Causes:
- No Cloud Controller Manager
- Cloud quota limit
- Incorrect annotations
Solution:
```shell
kubectl describe svc my-service  # Check events
# Events: Ensuring load balancer → Error creating load balancer
```
MetalLB for On-Premise
```yaml
# MetalLB configuration
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system  # MetalLB's default watch namespace
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
```
BGP mode: advertises IPs via BGP to routers. Layer2 mode: one Node accepts traffic, others forward.
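For Layer2 mode, the pool must also be announced with an `L2Advertisement` resource; a minimal sketch referencing the pool above:

```yaml
# Announce the "default" pool on the local network (Layer2 mode)
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default  # the IPAddressPool to announce
```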
Gateway API — reached GA (v1.0) in late 2023 and gradually replaces Ingress.
Kubernetes Gateway API — a new abstraction for external traffic:
- More expressive than Ingress
- Supports TCP/UDP, TLS
- Role-based (infra vs app teams)
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway  # name assumed for illustration
spec:
  gatewayClassName: istio
  listeners:
    - name: http
      port: 80
      protocol: HTTP
```
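Routing rules then live in separate `HTTPRoute` resources that attach to a Gateway, which is what enables the infra/app team split mentioned above. A sketch, assuming a Gateway named `my-gateway` and a ClusterIP Service named `app`:

```yaml
# App team owns the route; infra team owns the Gateway it attaches to
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: my-gateway  # the Gateway to attach to (assumed name)
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app  # ClusterIP Service (assumed name)
          port: 80
```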
Summary for Senior
- Service type determines “scope of visibility” and cost model.
- NodePort — foundation for LoadBalancer.
- Ingress on top of ClusterIP saves cloud resources.
- LoadBalancer directly — only for non-HTTP protocols.
- On-premise: MetalLB for LoadBalancer emulation.
- Gateway API — the future of external traffic in K8s.
- Service Mesh (Istio) redefines the networking model (L7, mTLS).
Interview Cheat Sheet
Must know:
- ClusterIP — only within the cluster (microservices); default
- NodePort — external access via port 30000-32767 on each Node; for testing
- LoadBalancer — public IP from the cloud provider; for production
- ExternalName — DNS CNAME to an external resource; no proxying
- LoadBalancer = wrapper over NodePort, which = wrapper over ClusterIP
- Ingress on top of ClusterIP saves money (one LB for all HTTP services)
- On-premise: MetalLB emulates LoadBalancer (BGP or Layer2 mode)
Frequent follow-up questions:
- “Why isn’t NodePort for production?” — Need to know node IPs, no SSL, inconvenient port management
- “How much does a LoadBalancer cost in AWS?” — ~$18-25/mo each; 10 services = $180-250/mo
- “LoadBalancer for an internal service?” — No, expensive and insecure; use ClusterIP
- “What is Gateway API?” — Next generation of Ingress (GA since v1.0, late 2023); supports TCP/UDP, not just HTTP
Red flags (DO NOT say):
- “NodePort is the production standard” (insecure, no HA, inconvenient)
- “LoadBalancer for every microservice” (expensive; Ingress saves money)
- “ExternalName proxies traffic” (only DNS CNAME, no proxying)
- “Ingress replaces all Services” (Ingress is for external HTTP, Service for internal L4)
Related topics:
- [[What is Service in Kubernetes]] — Service basics
- [[What is Ingress in Kubernetes]] — HTTP routing
- [[What is Kubernetes and why is it needed]] — general architecture