# Why is it important to avoid blocking operations in CompletableFuture
## 🟢 Junior Level
CompletableFuture usually runs in ForkJoinPool.commonPool(), which has very few threads (typically = number of CPU cores - 1).
If one of the threads blocks, other tasks won’t be able to run — this is called thread pool starvation.
```java
// ❌ Dangerous — blocks a ForkJoinPool thread
CompletableFuture.supplyAsync(() -> {
    return httpClient.get(url); // blocking I/O
});

// ✅ Safe — dedicated Executor for I/O
ExecutorService ioExecutor = Executors.newFixedThreadPool(20);
CompletableFuture.supplyAsync(() -> {
    return httpClient.get(url);
}, ioExecutor);
```
Simple analogy:
- ForkJoinPool is like 4 checkout counters in a store
- If one cashier falls asleep — the queue stops moving
- You need to keep the cashiers awake!
## 🟡 Middle Level
### Thread pool starvation
```java
// commonPool has 7 threads (on an 8-core CPU)
// If 7 tasks block — all new tasks wait
for (int i = 0; i < 100; i++) {
    CompletableFuture.supplyAsync(() -> {
        try {
            Thread.sleep(5000); // blocks the thread
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    });
}
// Only the first 7 start immediately
// The remaining 93 wait for a thread to become free
```
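The effect is easy to reproduce with a small dedicated pool (a minimal sketch; a two-thread `ForkJoinPool` is used here instead of `commonPool` so the demo does not disturb other code in the process):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ForkJoinPool;

public class StarvationDemo {
    public static void main(String[] args) throws Exception {
        ForkJoinPool pool = new ForkJoinPool(2); // tiny pool: starves fast
        CountDownLatch bothRunning = new CountDownLatch(2);
        CountDownLatch release = new CountDownLatch(1);

        // Two blocking tasks occupy both pool threads
        for (int i = 0; i < 2; i++) {
            CompletableFuture.runAsync(() -> {
                bothRunning.countDown();
                try {
                    release.await(); // plain blocking — no compensation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, pool);
        }
        bothRunning.await();

        // A third task is queued but cannot start: the pool is starved
        CompletableFuture<String> starved =
                CompletableFuture.supplyAsync(() -> "ran", pool);
        Thread.sleep(200);
        System.out.println("starved task done? " + starved.isDone()); // prints "starved task done? false"

        release.countDown(); // unblock the pool
        System.out.println(starved.join()); // prints "ran"
        pool.shutdown();
    }
}
```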
### Consequences
1. **Deadlock:**

```java
// cf1 holds a pool thread while waiting for cf2; if cf2's task is
// queued behind it in the same saturated pool, it can never run
CompletableFuture<String> cf1 = CompletableFuture.supplyAsync(() -> {
    return cf2.join(); // waits for cf2
});
```
2. **Performance degradation:**

```java
// All threads are occupied with blocking operations
// Latency grows, throughput drops
```
### Typical mistakes
**Blocking in thenApply:**

```java
// ❌ thenApply runs in ForkJoinPool
cf.thenApply(s -> {
    return httpClient.sendBlocking(url); // blocks!
});

// ✅ thenApplyAsync with an I/O Executor
cf.thenApplyAsync(s -> httpClient.sendBlocking(url), ioExecutor);
```
---
## 🔴 Senior Level
### Internal Implementation
**ForkJoinPool.commonPool():**
```java
// Size = Runtime.getRuntime().availableProcessors() - 1 (for 8 cores → 7 threads)
// Can be changed via -Djava.util.concurrent.ForkJoinPool.common.parallelism
// Ideal for CPU-bound tasks; under heavy blocking I/O load it becomes a bottleneck
```
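A sketch of reading and raising that parallelism. Note the caveat: the property only takes effect if it is set before the common pool is first touched anywhere in the process, so the JVM command-line flag is the reliable route:

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolConfig {
    public static void main(String[] args) {
        // Reliable: set on the command line before JVM startup:
        //   java -Djava.util.concurrent.ForkJoinPool.common.parallelism=16 MyApp
        // Programmatic setting works only if it runs before the common
        // pool is initialized by any class in the process:
        System.setProperty(
            "java.util.concurrent.ForkJoinPool.common.parallelism", "16");

        System.out.println("common pool parallelism: "
            + ForkJoinPool.commonPool().getParallelism());
    }
}
```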
**Work-stealing:**

```java
// ForkJoinPool uses work-stealing
// But if all threads are blocked — stealing doesn't help
// Thread 1: [BLOCKED on I/O]
// Thread 2: [BLOCKED on I/O]
// Thread 3: [BLOCKED on I/O]
// ...
// Queue: [waiting tasks...] // nobody is processing
```
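The JDK does offer an escape hatch here: `ForkJoinPool.ManagedBlocker` lets a task announce that it is about to block, so the pool can compensate by activating a spare worker thread. A minimal sketch (the `SleepBlocker` class name is illustrative):

```java
import java.util.concurrent.ForkJoinPool;

// Wraps a blocking call so ForkJoinPool can compensate with a spare thread
public class SleepBlocker implements ForkJoinPool.ManagedBlocker {
    private final long millis;
    private volatile boolean done;

    SleepBlocker(long millis) { this.millis = millis; }

    @Override public boolean block() throws InterruptedException {
        if (!done) {
            Thread.sleep(millis); // the actual blocking call
            done = true;
        }
        return true; // no further blocking needed
    }

    @Override public boolean isReleasable() { return done; }

    public static void main(String[] args) throws InterruptedException {
        // Inside a ForkJoinPool task this triggers thread compensation;
        // on a plain thread it simply runs the blocking call
        ForkJoinPool.managedBlock(new SleepBlocker(100));
        System.out.println("blocked without starving the pool");
    }
}
```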
### Architectural Trade-offs
| Approach | Pros | Cons |
|---|---|---|
| commonPool | No setup needed | CPU-bound only |
| Custom Executor | Full control | Requires management |
| Virtual Threads | Best of both worlds | Java 21+ |
### Edge Cases
1. **Cascading blocking:**

```java
// One blocking call drags others along
cf1.thenApply(s -> blockingCall1(s))
   .thenCompose(r -> blockingCall2(r))
   .thenAccept(result -> blockingCall3(result));
// One CF chain = 3 blocking operations = 3x blocking time
```
2. **Mixed workload:**

```java
// CPU-bound + I/O in the same pool
CompletableFuture.supplyAsync(() -> heavyCalculation()); // CPU
CompletableFuture.supplyAsync(() -> httpClient.get(url)); // I/O
// I/O blocks the thread — CPU tasks wait
```
### Performance
**Thread pool starvation effect:**

- In the worst case, when the pool is fully saturated with tasks, one blocking task reduces throughput by approximately 1/N, where N is the pool size. Actual loss depends on the nature of the tasks and on work-stealing behavior.
- 7 blocking tasks on a 7-thread pool = 100% loss: no new task can start until a thread frees (and a true deadlock if the blocked tasks also wait on futures queued to the same pool)
**Virtual Threads solution:**

- 100,000+ blocking tasks = no problem
- Each blocking call parks the virtual thread; the underlying OS carrier thread is freed to run other tasks
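That claim can be checked directly (requires Java 21+; a minimal sketch using only the JDK):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // try-with-resources: close() waits for submitted tasks to finish
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            long start = System.nanoTime();
            CompletableFuture<?>[] tasks = IntStream.range(0, 10_000)
                .mapToObj(i -> CompletableFuture.runAsync(() -> {
                    try {
                        Thread.sleep(100); // parks the virtual thread only
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }, vt))
                .toArray(CompletableFuture[]::new);
            CompletableFuture.allOf(tasks).join();
            long ms = (System.nanoTime() - start) / 1_000_000;
            // 10,000 concurrent blocking sleeps finish in roughly
            // the time of one, not 10,000 x 100 ms
            System.out.println("10000 blocking tasks in ~" + ms + " ms");
        }
    }
}
```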
### Production Experience
**Detecting starvation:**

```bash
# Thread dump: look for ForkJoinPool workers stuck in BLOCKED/WAITING
jstack <pid> | grep -A 10 "ForkJoinPool"
```

Metrics to watch:
- All worker threads busy (blocked) while new tasks keep queuing
- Queue size growing
- Latency growing
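The same signals can be read in-process from `ForkJoinPool` itself and fed into whatever metrics system is in use (a minimal sketch using only JDK accessors):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolStats {
    public static void main(String[] args) {
        ForkJoinPool pool = ForkJoinPool.commonPool();
        // Starvation signature: the active count pinned at the parallelism
        // level while the submission queue keeps growing
        System.out.println("parallelism: " + pool.getParallelism());
        System.out.println("active:      " + pool.getActiveThreadCount());
        System.out.println("queued:      " + pool.getQueuedSubmissionCount());
    }
}
```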
**Prevention:**

```java
// ✅ Pool separation
ExecutorService cpuExecutor = Executors.newFixedThreadPool(
    Runtime.getRuntime().availableProcessors());
ExecutorService ioExecutor = Executors.newFixedThreadPool(50);

// CPU tasks
CompletableFuture.supplyAsync(cpuTask, cpuExecutor);
// I/O tasks
CompletableFuture.supplyAsync(ioTask, ioExecutor);

// ✅ Virtual Threads (Java 21+)
ExecutorService vThreads = Executors.newVirtualThreadPerTaskExecutor();
CompletableFuture.supplyAsync(anyTask, vThreads);
```
### Best Practices

```java
// ✅ Dedicated Executor for I/O
CompletableFuture.supplyAsync(ioTask, ioExecutor);

// ✅ Virtual Threads
Executors.newVirtualThreadPerTaskExecutor();

// ✅ Thread pool monitoring (for a ThreadPoolExecutor)
metrics.recordQueueSize(executor.getQueue().size());

// ❌ Blocking in commonPool
// ❌ join()/get() without timeout
// ❌ Ignoring thread starvation
```
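For the "join()/get() without timeout" red flag, Java 9+ provides built-in deadlines on `CompletableFuture` itself, so a hung dependency cannot pin a thread forever. A short sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class Timeouts {
    public static void main(String[] args) {
        // Futures that are never completed — stand-ins for hung remote calls
        CompletableFuture<String> hung1 = new CompletableFuture<>();
        CompletableFuture<String> hung2 = new CompletableFuture<>();

        // orTimeout: fails with TimeoutException after the deadline
        try {
            hung1.orTimeout(200, TimeUnit.MILLISECONDS).join();
        } catch (CompletionException e) {
            System.out.println("timed out: "
                + (e.getCause() instanceof TimeoutException)); // prints "timed out: true"
        }

        // completeOnTimeout: falls back to a default value instead of failing
        String value = hung2
            .completeOnTimeout("fallback", 200, TimeUnit.MILLISECONDS)
            .join();
        System.out.println(value); // prints "fallback"
    }
}
```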
## 🎯 Interview Cheat Sheet

**Must know:**
- ForkJoinPool.commonPool() has few threads (availableProcessors - 1)
- Thread pool starvation: all threads blocked, new tasks wait
- Blocking in commonPool reduces throughput for ALL tasks (Parallel Streams, other CFs)
- Cascading blocking: one blocking call drags others in the chain
- Solution: dedicated Executor for I/O, Virtual Threads (Java 21+), pool separation
**Common follow-up questions:**
- What happens if 7 tasks block commonPool? — All new tasks wait; throughput drops to zero until a thread frees (a true deadlock if the blocked tasks wait on each other)
- Does work-stealing help with blocking? — No, if all threads are blocked — stealing doesn’t work
- How to detect starvation? — jstack: ForkJoinPool-worker-X in BLOCKED/WAITING, queue size growing
- Do Virtual Threads solve the problem? — Yes, 100000+ blocking tasks without problems (Java 21+)
**Red flags (DO NOT say):**
- “Blocking in thenApply is fine — it’s lightweight” — thenApply runs in the same thread, blocking kills the pool
- “commonPool scales automatically” — limited to availableProcessors - 1
- “join() without timeout in production is OK” — infinite waiting, cascading failure
**Related topics:**
- [[14. What is blocking code and how to distinguish it from non-blocking]]
- [[12. What thread pool is used by default for async methods]]
- [[13. How to specify a custom Executor for CompletableFuture]]
- [[17. What does supplyAsync() method do and when to use it]]