What thread pool is used by default for async methods?
By default, all CompletableFuture async methods (without an Executor parameter) use ForkJoinPool.commonPool().
🟢 Junior Level
All `*Async` methods called without an explicit `Executor` argument run their tasks on `ForkJoinPool.commonPool()`.
```java
// Uses ForkJoinPool.commonPool()
CompletableFuture<String> cf = CompletableFuture.supplyAsync(() -> {
    return "Hello";
});

// thenApplyAsync without an Executor also uses commonPool
cf.thenApplyAsync(s -> s.toUpperCase());
```
`commonPool` characteristics:
- Parallelism: `Runtime.getRuntime().availableProcessors() - 1`
- Algorithm: work-stealing (workers “steal” tasks from each other)
- Thread type: daemon threads (do not prevent JVM shutdown)
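A minimal sketch to verify these characteristics on your own machine (the printed values depend on your core count):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;

public class CommonPoolInfo {
    public static void main(String[] args) {
        // Typically availableProcessors() - 1 (but never reported below 1)
        System.out.println("Parallelism: " + ForkJoinPool.getCommonPoolParallelism());

        // Common-pool workers are daemons: they will not keep the JVM alive
        CompletableFuture<Boolean> daemon = CompletableFuture.supplyAsync(
                () -> Thread.currentThread().isDaemon());
        System.out.println("Daemon worker: " + daemon.join()); // true when on commonPool
    }
}
```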
🟡 Middle Level
The “Shared Kitchen” Problem
`commonPool` is shared by the entire JVM:
- Parallel Streams
- `CompletableFuture`
- Some libraries (Selenium, internal Spring mechanisms)
They all compete for the same threads. If one part of the application runs a heavy stream, async responses in another module will slow down.
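A small demonstration that both consumers land on the same workers (a sketch; on machines where common-pool parallelism is below 2, CompletableFuture falls back to per-task threads instead):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class SharedKitchen {
    public static void main(String[] args) {
        // Parallel streams report commonPool worker threads
        // (the calling thread may also execute some elements)
        List.of(1, 2, 3, 4).parallelStream()
                .forEach(i -> System.out.println("stream: " + Thread.currentThread().getName()));

        // CompletableFuture without an Executor reports the same worker pool
        CompletableFuture.runAsync(() ->
                System.out.println("future: " + Thread.currentThread().getName())).join();
    }
}
```

Both typically print names like `ForkJoinPool.commonPool-worker-N`, confirming they compete for the same threads.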
Blocking (Starvation)
commonPool is designed for CPU-bound tasks.
```java
// ❌ Blocking I/O in commonPool
CompletableFuture.supplyAsync(() -> {
    return httpClient.get(url); // blocks a thread from the shared pool!
});
```
When all threads (usually 7 on an 8-core CPU) are busy waiting for the network, the application will stop executing any async tasks — Thread Starvation.
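The effect can be simulated by occupying every worker with a sleep standing in for blocking I/O (a sketch; the measured delay disappears on machines where CompletableFuture uses per-task threads instead of commonPool):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;

public class StarvationDemo {
    public static void main(String[] args) {
        int workers = ForkJoinPool.getCommonPoolParallelism();
        // Occupy every common-pool worker with a "blocking call" (sleep here)
        for (int i = 0; i < workers; i++) {
            CompletableFuture.runAsync(() -> sleep(2000));
        }
        long start = System.nanoTime();
        // This trivial task must wait until a worker frees up
        CompletableFuture.runAsync(() -> {}).join();
        System.out.printf("Waited %d ms for a free worker%n",
                (System.nanoTime() - start) / 1_000_000);
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```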
Low-core systems
If the common pool cannot support a parallelism of at least 2 (for example, on a 1-2 core machine), `CompletableFuture` does NOT use `commonPool` at all: per its Javadoc, it creates a new thread for each async task instead. The common-pool size can be raised explicitly (the property must be set before the pool is first initialized):
```java
System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "N");
```
When to use your own Executor?
- I/O operations: Always. The pool should be larger (50-100 threads) or use Virtual Threads.
- Isolation: Ensure that a failure in the “Notifications” module does not take down the “Payments” module.
- Monitoring: `commonPool` is hard to monitor. A custom `ThreadPoolExecutor` lets you see queue size and rejected task count.
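A minimal sketch of the metrics a custom `ThreadPoolExecutor` exposes (pool size, queue capacity, and task counts here are arbitrary):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMonitoring {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(10));

        // Submit more tasks than workers so some of them queue up
        for (int i = 0; i < 6; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            });
        }
        // Metrics commonPool cannot give you per-module:
        System.out.println("active:    " + pool.getActiveCount());
        System.out.println("queued:    " + pool.getQueue().size());
        System.out.println("completed: " + pool.getCompletedTaskCount());

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```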
Diagnostics
```bash
# Change pool size via JVM flag
-Djava.util.concurrent.ForkJoinPool.common.parallelism=N

# Programmatic check (Java): ForkJoinPool.getCommonPoolParallelism()

# Worker threads are named ForkJoinPool.commonPool-worker-X.
# If jstack shows them WAITING on network sockets, that is an architectural problem.
jstack <pid>
```
🔴 Senior Level
Internal Implementation
Work-Stealing algorithm:
- LIFO for local tasks: ForkJoin workers process their own tasks in LIFO order. This keeps “hot” data in the CPU cache (L1/L2).
- FIFO for stealing: idle workers “steal” tasks from the opposite end of another worker’s deque (the oldest tasks), minimizing contention with the owner.
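The classic way to observe this in action is a divide-and-conquer `RecursiveTask`: `fork()` pushes subtasks onto the worker’s local deque, and idle workers steal them from the other end:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums the range [from, to) by recursive splitting on the common pool.
public class StealingSum extends RecursiveTask<Long> {
    final long from, to;
    StealingSum(long from, long to) { this.from = from; this.to = to; }

    @Override protected Long compute() {
        if (to - from <= 1_000) {            // small enough: compute directly
            long s = 0;
            for (long i = from; i < to; i++) s += i;
            return s;
        }
        long mid = (from + to) / 2;
        StealingSum left = new StealingSum(from, mid);
        left.fork();                          // pushed onto the local deque
        long right = new StealingSum(mid, to).compute(); // LIFO: newest first
        return right + left.join();           // an idle worker may steal 'left'
    }

    public static void main(String[] args) {
        long sum = ForkJoinPool.commonPool().invoke(new StealingSum(0, 1_000_000));
        System.out.println(sum); // 499999500000
    }
}
```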
Architectural Trade-offs
| Approach | Pros | Cons |
|---|---|---|
| commonPool | No setup needed | Shared resource, hard to monitor |
| FixedThreadPool | Predictable size | No work-stealing |
| Virtual Threads (Java 21+) | Ideal for I/O | Java 21+ only |
Production Strategy
Separate pools by workload type:
```java
// I/O-Bound (HTTP, DB, Files)
Executor ioExecutor = new ThreadPoolExecutor(
    10, 100, 60L, TimeUnit.SECONDS,
    new LinkedBlockingQueue<>(1000),
    new ThreadFactoryBuilder().setNameFormat("io-pool-%d").build(), // Guava
    new ThreadPoolExecutor.CallerRunsPolicy() // Backpressure
);

// CPU-Bound (Calculations, Mapping)
Executor cpuExecutor = Executors.newFixedThreadPool(
    Runtime.getRuntime().availableProcessors()
);

// Virtual Threads (Java 21+), for I/O
Executor vtExecutor = Executors.newVirtualThreadPerTaskExecutor();
```
Isolation at every stage of the chain:
```java
// ❌ Only the first stage runs in its own pool; the rest land in commonPool
CompletableFuture.supplyAsync(task, myExecutor)
    .thenApplyAsync(transform); // commonPool!

// ✅ Isolation maintained
CompletableFuture.supplyAsync(task, myExecutor)
    .thenApplyAsync(transform, myExecutor);
```
Lifecycle Management
```java
@PreDestroy
public void shutdown() {
    ((ExecutorService) ioExecutor).shutdown();
    ((ExecutorService) cpuExecutor).shutdown();
}
```
Without `shutdown()`, non-daemon pool threads linger as “zombie threads” that prevent the process from terminating properly.
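A more complete variant following the shutdown-then-`shutdownNow` sequence recommended in the `ExecutorService` javadoc (the timeout value is arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.execute(() -> System.out.println("work"));
        shutdownGracefully(pool);
    }

    // Stop accepting new tasks, wait for running ones, then force-cancel stragglers
    static void shutdownGracefully(ExecutorService pool) {
        pool.shutdown();
        try {
            if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
                pool.shutdownNow();
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
```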
Best Practices
```java
// ✅ Always use your own Executor in production
CompletableFuture.supplyAsync(task, ioExecutor);

// ✅ CallerRunsPolicy for backpressure
new ThreadPoolExecutor.CallerRunsPolicy();

// ✅ Thread naming for diagnostics (Guava's ThreadFactoryBuilder)
new ThreadFactoryBuilder().setNameFormat("pool-%d").build();

// ❌ commonPool for I/O
// ❌ Blocking in commonPool
// ❌ No shutdown() on exit
```
Summary for Senior
- Default pool — ForkJoinPool.commonPool().
- It is only for fast computations.
- Blocking in commonPool reduces throughput for all tasks using that pool. This is one of the most frequent causes of performance degradation.
- In serious projects, always inject your own `Executor`s via Spring `@Bean`.
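The same idea sketched without the Spring container: the service takes its `Executor` through the constructor, which is exactly what a Spring `@Bean` injection would do (`OrderService` and `loadOrder` are hypothetical names):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

// The Executor is supplied from outside, so each module (and each test)
// can plug in its own isolated pool instead of relying on commonPool.
public class OrderService {
    private final Executor ioExecutor;

    public OrderService(Executor ioExecutor) { this.ioExecutor = ioExecutor; }

    public CompletableFuture<String> loadOrder(long id) {
        return CompletableFuture.supplyAsync(() -> "order-" + id, ioExecutor);
    }

    public static void main(String[] args) {
        OrderService service = new OrderService(Executors.newFixedThreadPool(8));
        System.out.println(service.loadOrder(42).join()); // order-42
    }
}
```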
🎯 Interview Cheat Sheet
Must know:
- By default — ForkJoinPool.commonPool() (availableProcessors - 1)
- Work-Stealing algorithm: LIFO for local, FIFO for stealing
- commonPool only for CPU-bound tasks, for I/O — your own Executor
- All JVM tasks (Parallel Streams, other CFs) share one commonPool
- Pool size: -Djava.util.concurrent.ForkJoinPool.common.parallelism=N
Frequent follow-up questions:
- Why is commonPool bad for I/O? — Few threads (N-1), blocking → thread pool starvation
- How does Work-Stealing work? — Workers process their own tasks (LIFO), “steal” from the tail of others’ queues (FIFO)
- How to diagnose starvation? — jstack: ForkJoinPool.commonPool-worker-X threads in WAITING on network sockets
- What about 1 CPU core? — Common-pool parallelism drops below 2, so CompletableFuture switches to an internal thread-per-task executor instead of commonPool
Red flags (DO NOT say):
- “commonPool is fine for HTTP requests” — blocking I/O → thread starvation
- “commonPool scales infinitely” — limited by availableProcessors - 1
- “Daemon threads prevent JVM shutdown” — daemon threads do NOT prevent, but without shutdown() — zombie threads
Related topics:
- [[13. How to specify your own Executor for CompletableFuture]]
- [[15. Why is it important to avoid blocking operations in CompletableFuture]]
- [[11. What is the difference between thenApply() and thenApplyAsync()]]
- [[14. What is blocking code and how to distinguish it from non-blocking]]