What is a Thread Pool?
A Thread Pool is a set of pre-created threads that are reused to execute tasks.
Junior Level
Basic Understanding
A Thread Pool is a set of pre-created threads that are reused to execute tasks.
Why: creating a thread costs ~1MB of memory (stack) and 1-10ms of time (OS call + JVM registration). If you have 10,000 short tasks — creating 10,000 threads will kill the server. Thread Pool solves this by reusing a limited number of threads. Instead of creating a new thread for each task, we take an existing thread from the pool.
Analogy
Imagine taxis:
- Without a pool: For each passenger, buy a new car, throw it away after the ride
- With a pool: 10 taxis serve all passengers in turns
Why do we need a thread pool?
// BAD: creating a thread for each task
for (int i = 0; i < 1000; i++) {
    final int taskId = i; // a lambda can capture only (effectively) final locals, not the loop variable itself
    new Thread(() -> process(taskId)).start();
    // Thread creation = ~1MB memory + creation time
    // 1000 threads = ~1GB memory!
}
// GOOD: reusing threads
ExecutorService pool = Executors.newFixedThreadPool(10);
for (int i = 0; i < 1000; i++) {
    final int taskId = i;
    pool.submit(() -> process(taskId));
    // 10 threads process 1000 tasks
}
Advantages
| Advantage | Description |
|---|---|
| Performance | Don’t waste time creating threads (~1-10ms = OS call to create thread + stack allocation + JVM registration) |
| Memory | Each thread = ~1MB stack, pool saves memory |
| Management | Limit the max number of threads |
| Monitoring | Easier to track activity |
Simple Example
// Create a pool of 5 threads
ExecutorService executor = Executors.newFixedThreadPool(5);
// Submit tasks
for (int i = 0; i < 20; i++) {
executor.submit(() -> {
System.out.println("Task executed by thread: " +
Thread.currentThread().getName());
});
}
// Shut down the pool
executor.shutdown(); // Stop accepting new tasks; queued tasks still run
executor.awaitTermination(60, TimeUnit.SECONDS); // Wait for completion (throws InterruptedException)
Manual Pool Creation
ThreadPoolExecutor executor = new ThreadPoolExecutor(
5, // corePoolSize — always-alive threads
10, // maximumPoolSize — max threads
60, // keepAliveTime — lifetime of excess threads
TimeUnit.SECONDS,
new ArrayBlockingQueue<>(100) // task queue
);
Middle Level
ThreadPoolExecutor Anatomy
public ThreadPoolExecutor(
int corePoolSize, // 1. Minimum number of threads
int maximumPoolSize, // 2. Maximum number of threads
long keepAliveTime, // 3. Lifetime of "excess" threads
TimeUnit unit, // 4. Time unit
BlockingQueue<Runnable> workQueue, // 5. Task queue
ThreadFactory threadFactory, // 6. Factory for creating threads
RejectedExecutionHandler handler // 7. Handler on overflow
)
Task Addition Algorithm (CRITICAL!)
New task submitted
        │
        ▼
threads < corePoolSize? ──yes──► create a new thread, run the task
        │ no
        ▼
queue has free space? ──yes──► add the task to the queue
        │ no (queue is full)
        ▼
threads < maximumPoolSize? ──yes──► create an extra thread, run the task
        │ no
        ▼
RejectedExecutionHandler
Important: A thread is created beyond corePoolSize ONLY if the queue is FULL.
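This rule can be observed directly. Below is a minimal sketch (the class name `GrowthDemo` is illustrative): core = 1, max = 3, queue capacity = 2; the tasks block on a latch so each phase of pool growth stays visible.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GrowthDemo {
    // Submits blocking tasks and records getPoolSize() after each phase.
    static int[] observeGrowth() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        int[] sizes = new int[3];
        pool.submit(blocker);            // 1st task: creates the core thread
        Thread.sleep(50);
        sizes[0] = pool.getPoolSize();   // 1
        pool.submit(blocker);            // 2nd and 3rd tasks: absorbed by the queue
        pool.submit(blocker);
        Thread.sleep(50);
        sizes[1] = pool.getPoolSize();   // still 1 — no new thread while the queue has space
        pool.submit(blocker);            // queue is full → thread beyond corePoolSize
        Thread.sleep(50);
        sizes[2] = pool.getPoolSize();   // 2
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return sizes;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int s : observeGrowth()) System.out.println(s);
    }
}
```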
Queue Types
| Queue | Feature | When to use |
|---|---|---|
| LinkedBlockingQueue | Unbounded (default) | Only when you are sure tasks are few |
| ArrayBlockingQueue | Fixed size | Production — OOM protection |
| SynchronousQueue | Capacity = 0 | Each task → new thread |
| PriorityBlockingQueue | Priority queue | Important tasks go first |
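The difference between a zero-capacity and a bounded queue is easy to see outside the pool (class name is illustrative): `offer()` on a SynchronousQueue succeeds only if a consumer is already waiting, while an ArrayBlockingQueue simply fills up.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueDemo {
    public static void main(String[] args) {
        // SynchronousQueue stores nothing: offer() fails when no thread is blocked in take()
        BlockingQueue<String> sync = new SynchronousQueue<>();
        System.out.println(sync.offer("task")); // false — no taker, the element is not stored

        // ArrayBlockingQueue(1): one slot, then full — OOM-safe by construction
        BlockingQueue<String> bounded = new ArrayBlockingQueue<>(1);
        System.out.println(bounded.offer("a")); // true
        System.out.println(bounded.offer("b")); // false — full
    }
}
```

This is exactly why a pool over a SynchronousQueue (like newCachedThreadPool) spawns a new thread per task when all workers are busy: the offer to the queue always fails first.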
RejectedExecutionHandler Strategies
// 1. AbortPolicy (default) — throws exception
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());
// → RejectedExecutionException
// 2. CallerRunsPolicy — executes in the submitting thread
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
// Creates back-pressure: the task runs in the submitting thread, so the submitter
// can't hand over new tasks until it finishes. This automatically slows down the task source.
// 3. DiscardPolicy — silently ignores
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy());
// 4. DiscardOldestPolicy — removes the oldest task from queue
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardOldestPolicy());
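The default AbortPolicy is easy to trigger deterministically. A sketch (class name is illustrative): one worker, a one-slot queue, blocking tasks; the third submission has nowhere to go and is rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // Saturates a 1-thread pool with a 1-slot queue; returns true if AbortPolicy fired.
    static boolean saturate() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1)); // AbortPolicy by default
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        boolean rejected = false;
        pool.submit(blocker); // occupies the only worker (handed to it directly, bypassing the queue)
        pool.submit(blocker); // fills the only queue slot
        try {
            pool.submit(blocker); // queue full, threads == max → AbortPolicy throws
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(saturate());
    }
}
```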
ThreadFactory — Naming Threads
ThreadFactory factory = new ThreadFactory() {
    // AtomicInteger: newThread() may be called from several threads concurrently
    private final AtomicInteger count = new AtomicInteger(1);
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setName("my-pool-worker-" + count.getAndIncrement());
        t.setDaemon(true); // Daemon threads don't prevent JVM shutdown
        return t;
    }
};
ThreadPoolExecutor executor = new ThreadPoolExecutor(
5, 10, 60, TimeUnit.SECONDS,
new ArrayBlockingQueue<>(100),
factory // Custom factory
);
Senior Level
Under the Hood: Internal State
ThreadPoolExecutor stores state in a single AtomicInteger ctl:
private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
// Upper 3 bits — pool state
// Lower 29 bits — thread count
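The packing can be sketched with the same bit arithmetic; the constants below mirror the names in the ThreadPoolExecutor source (extracted into a standalone class for illustration).

```java
public class CtlDemo {
    static final int COUNT_BITS = Integer.SIZE - 3;      // 29 bits for the worker count
    static final int COUNT_MASK = (1 << COUNT_BITS) - 1; // mask for the lower 29 bits
    // Run states live in the upper 3 bits; RUNNING is negative so it sorts before the others
    static final int RUNNING = -1 << COUNT_BITS;

    static int ctlOf(int runState, int workerCount) { return runState | workerCount; }
    static int runStateOf(int ctl)    { return ctl & ~COUNT_MASK; }
    static int workerCountOf(int ctl) { return ctl & COUNT_MASK; }

    public static void main(String[] args) {
        int ctl = ctlOf(RUNNING, 5);
        System.out.println(workerCountOf(ctl));         // 5
        System.out.println(runStateOf(ctl) == RUNNING); // true
    }
}
```

Packing both values into one AtomicInteger lets the pool update state and worker count in a single CAS, without a lock.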
Pool Lifecycle
| State | Accepts tasks? | Executes tasks? | Description |
|---|---|---|---|
| RUNNING | Yes | Yes | Normal operation |
| SHUTDOWN | No | Yes | shutdown() — drains the queue |
| STOP | No | No | shutdownNow() — interrupts active |
| TIDYING | No | No | All tasks completed, threads stopped |
| TERMINATED | No | No | terminated() executed |
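The externally visible part of this lifecycle can be checked through the flag methods. A sketch (class name is illustrative): `isShutdown()` flips after shutdown(), `isTerminated()` only once all tasks have finished.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LifecycleDemo {
    // Captures the lifecycle flags before shutdown, after shutdown, and after termination.
    static boolean[] observe() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        boolean[] flags = new boolean[3];
        flags[0] = !pool.isShutdown();              // RUNNING: still accepting tasks
        pool.submit(() -> {});
        pool.shutdown();                            // RUNNING → SHUTDOWN (queued task still runs)
        flags[1] = pool.isShutdown();               // true from this point on
        pool.awaitTermination(5, TimeUnit.SECONDS);
        flags[2] = pool.isTerminated();             // TERMINATED: tasks done, threads stopped
        return flags;
    }

    public static void main(String[] args) throws InterruptedException {
        for (boolean f : observe()) System.out.println(f);
    }
}
```

Note there is no public method distinguishing SHUTDOWN, STOP and TIDYING — from the outside you only see "shut down" and "terminated".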
Choosing Pool Size
CPU-bound tasks (computations)
Threads = Number of Cores + 1. This is an empirical rule for CPU-bound work: the "+1" compensates for occasional page faults and other brief stalls. On hyper-threaded CPUs, count logical cores. Always measure on your hardware.
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService cpuPool = new ThreadPoolExecutor(
cores + 1, cores + 1,
0L, TimeUnit.MILLISECONDS,
new ArrayBlockingQueue<>(1000)
);
I/O-bound tasks (DB, API, files)
Threads = Number of Cores * (1 + Wait time / Service time)
int cores = Runtime.getRuntime().availableProcessors();
// If 90% of the time is spent waiting (DB, API): W/S = 90/10 = 9
int ioThreads = cores * (1 + 9); // = cores * 10
// W/S is estimated with a profiler: if a DB request takes 100ms and 90ms of it is waiting → W/S = 9
// For an 8-core CPU: ~80 threads
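The formula is simple enough to keep as a helper (names here are illustrative, not a standard API):

```java
public class PoolSizing {
    // Utilization-based sizing: threads = cores * (1 + wait / service)
    static int ioPoolSize(int cores, double waitMs, double serviceMs) {
        return (int) (cores * (1 + waitMs / serviceMs));
    }

    public static void main(String[] args) {
        // A 100ms DB call: 90ms waiting, 10ms on-CPU → W/S = 9
        System.out.println(ioPoolSize(8, 90, 10)); // 80
    }
}
```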
Problem: ThreadLocal Leaks
// BAD: ThreadLocal is not cleared
ThreadLocal<UserContext> context = new ThreadLocal<>();
executor.submit(() -> {
context.set(new UserContext("user123"));
// Task completed, but thread returned to pool!
// Next task will see user123!
});
// GOOD: always clear
executor.submit(() -> {
try {
context.set(new UserContext("user123"));
process();
} finally {
context.remove(); // MANDATORY!
}
});
Graceful Shutdown
public void shutdown(ExecutorService executor) {
executor.shutdown(); // 1. Stop accepting
try {
if (!executor.awaitTermination(60, TimeUnit.SECONDS)) { // 2. Wait
executor.shutdownNow(); // 3. Force
if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
System.err.println("Pool did not terminate");
}
}
} catch (InterruptedException e) {
executor.shutdownNow();
Thread.currentThread().interrupt();
}
}
Diagnostics
Monitoring via Metrics
// Expose to Prometheus/Grafana:
executor.getPoolSize(); // Current pool size
executor.getActiveCount(); // Active threads
executor.getQueue().size(); // Queue size
executor.getCompletedTaskCount(); // Completed tasks
executor.getTaskCount(); // Total tasks
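These counters can be bundled into a one-line snapshot for a log line or a metrics-exporter callback (the method name is illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolStats {
    // One-line snapshot suitable for periodic logging or a gauge callback.
    static String snapshot(ThreadPoolExecutor ex) {
        return String.format("pool=%d active=%d queued=%d completed=%d total=%d",
                ex.getPoolSize(), ex.getActiveCount(), ex.getQueue().size(),
                ex.getCompletedTaskCount(), ex.getTaskCount());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        for (int i = 0; i < 5; i++) pool.submit(() -> {});
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // After termination: completed=5 total=5; pool/active/queued drop to 0
        System.out.println(snapshot(pool));
    }
}
```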
jstack for Analysis
jstack <pid> | grep "my-pool-worker"
Java Flight Recorder
java -XX:StartFlightRecording=filename=rec.jfr MyApp
Events:
- jdk.ThreadPoolSubmit — task submission
- jdk.ThreadPoolTerminate — pool termination
Best Practices
- Always limit the queue — ArrayBlockingQueue, not LinkedBlockingQueue
- Use CallerRunsPolicy — for back-pressure under overload
- Name threads — custom ThreadFactory for debugging
- Clear ThreadLocal — in finally block
- Don’t create pool inside a method — that’s a thread leak
- Monitor queue.size() — growing queue = degradation
- Proper shutdown — shutdown → awaitTermination → shutdownNow
- Pool sizing — CPU-bound: N+1, I/O-bound: N * (1 + W/S)
Interview Cheat Sheet
Must know:
- Thread Pool — a set of reusable threads; creating a new thread = ~1MB stack + 1-10ms
- Task addition algorithm: first corePoolSize → then queue → then up to maximumPoolSize → then rejection
- A thread is created beyond corePoolSize ONLY if the queue is FULL (this is a frequent interview question)
- 4 RejectedExecutionHandler: AbortPolicy (default), CallerRunsPolicy (back-pressure), DiscardPolicy, DiscardOldestPolicy
- CPU-bound pool: N+1 threads; I/O-bound: N * (1 + Wait time / Service time) — the sizing formula from Java Concurrency in Practice
- ThreadPoolExecutor stores state in a single AtomicInteger ctl (3 bits — status, 29 — thread count)
- Lifecycle: RUNNING → SHUTDOWN → STOP → TIDYING → TERMINATED
- ThreadLocal must be cleared in finally — otherwise leaks between tasks in the pool
Frequent follow-up questions:
- Why queue full → create thread, not the other way around? — Queuing a task is cheap, a new thread is expensive (~1MB stack + OS call); the pool scales threads only when a full queue signals sustained overload
- How does LinkedBlockingQueue differ from ArrayBlockingQueue? — The first is unbounded (OOM risk), the second has a limit (production-safe)
- What does CallerRunsPolicy do? — Executes the task in the submitting thread, creating back-pressure (submitter slows down)
- Why does ThreadLocal leak in a pool? — The thread returns to the pool with an uncleared ThreadLocal, the next task sees someone else’s data
Red flags (do NOT say):
- “I create a pool inside a method for each operation” — thread leak, pool should be a singleton
- “I use LinkedBlockingQueue in production” — unbounded queue = OOM risk
- “shutdownNow() immediately stops all threads” — it only sends interrupt(), tasks must handle it themselves
Related topics:
- [[13. What types of Thread Pool exist in Java]]
- [[15. What does ExecutorService do]]
- [[16. What is the difference between Executors.newFixedThreadPool() and newCachedThreadPool()]]
- [[17. What is ForkJoinPool]]