What is ForkJoinPool and how is it related to parallel streams?
Junior Level
ForkJoinPool is a special thread pool for executing tasks that can be split into subtasks (the “divide and conquer” principle).
Unlike a regular ThreadPoolExecutor (where every thread takes tasks from one shared queue), in ForkJoinPool each thread has its own double-ended queue (Deque) plus a work-stealing mechanism: a free thread "steals" tasks from the opposite end of a busy thread's queue. This reduces queue contention.
Connection with parallel streams: When you call parallelStream(), Java automatically uses ForkJoinPool for parallel processing.
```java
// Uses ForkJoinPool.commonPool() by default
list.parallelStream().map(this::process).collect(toList());
```
Pool size: Usually equals number_of_CPU_cores - 1.
Middle Level
Work-Stealing Algorithm
The main feature of ForkJoinPool:
- Each thread has its own double-ended queue (Deque) of tasks
- When a thread finishes its tasks, it does not sleep but looks at “neighbors’” queues
- It steals a task from the tail of another thread’s queue
This minimizes contention and ensures even load distribution across all cores.
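The divide-and-conquer pattern behind work-stealing can be sketched with a `RecursiveTask` (the class name and the `THRESHOLD` value are illustrative assumptions, not from the source):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative sketch: summing a long[] by recursively splitting the range.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000; // below this, compute sequentially
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {           // small enough: do the work directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                     // push onto this thread's deque; an idle thread may steal it
        long rightSum = right.compute(); // the owner keeps working on the freshest subtask
        return rightSum + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 499999500000
    }
}
```

The forked left half sits in the owner's deque and can be stolen by an idle worker while the owner computes the right half.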
Connection with Parallel Streams
- Source -> Spliterator: Stream splits data into parts
- Spliterator -> ForkJoinTask: Each part is wrapped into a task
- Execution: Tasks are sent to the pool
- Combining: Partial results are merged via the combiner
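The combiner's role is easiest to see in the three-argument `reduce`, where the third function merges partial results produced by different worker threads (a minimal sketch; the class name is illustrative):

```java
import java.util.stream.IntStream;

public class CombinerDemo {
    // Three-argument reduce on a parallel stream: the accumulator folds one
    // element into a partial result, the combiner merges two partial results
    // computed by different worker threads.
    public static int parallelSum() {
        return IntStream.rangeClosed(1, 100)
                .parallel()
                .boxed()
                .reduce(0,
                        Integer::sum,   // accumulator
                        Integer::sum);  // combiner
    }

    public static void main(String[] args) {
        System.out.println(parallelSum()); // 5050
    }
}
```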
Common Pool
ForkJoinPool.commonPool() — a static shared pool:
- Size: `Runtime.getRuntime().availableProcessors() - 1`
- Why `-1`? One thread is reserved for the calling thread, which also participates in processing
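A quick way to check both numbers on a given machine (the output depends on the hardware and on the `common.parallelism` system property):

```java
import java.util.concurrent.ForkJoinPool;

public class PoolSizeDemo {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int parallelism = ForkJoinPool.getCommonPoolParallelism();
        // With default settings and more than one core, parallelism == cores - 1;
        // the -D...common.parallelism property can override this.
        System.out.println("cores=" + cores + ", commonPool parallelism=" + parallelism);
    }
}
```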
Senior Level
LIFO vs FIFO queues
Internal ForkJoin queues work on the principle:
- LIFO for the “owning” thread: it pops the most recently pushed subtask, whose data is still hot in the CPU cache.
- FIFO for “stealing” threads: they take tasks from the opposite end of the deque, where the oldest and largest (least-split) tasks sit. One steal brings a lot of work, and operating at the opposite end avoids conflicting with the owner over the top of the deque.
ManagedBlocker
ForkJoinPool provides the ManagedBlocker interface. If a task reports that it is about to block (e.g., on I/O), the pool can temporarily create a compensation thread so that parallelism does not drop. Standard parallel streams rarely use this mechanism.
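A hedged sketch of the ManagedBlocker contract, with a plain sleep standing in for a real blocking call (the class name is an illustrative assumption):

```java
import java.util.concurrent.ForkJoinPool;

// Sketch of ForkJoinPool.ManagedBlocker: Thread.sleep stands in for
// real blocking work such as I/O or lock acquisition.
public class SleepBlocker implements ForkJoinPool.ManagedBlocker {
    private final long millis;
    private volatile boolean done;

    public SleepBlocker(long millis) { this.millis = millis; }

    @Override
    public boolean block() throws InterruptedException {
        if (!done) {
            Thread.sleep(millis); // the pool may add a compensation thread while we block
            done = true;
        }
        return true; // no further blocking is necessary
    }

    @Override
    public boolean isReleasable() {
        return done; // true once blocking is no longer needed
    }

    public static void main(String[] args) throws InterruptedException {
        SleepBlocker blocker = new SleepBlocker(100);
        ForkJoinPool.managedBlock(blocker); // cooperates with the pool instead of blocking blindly
        System.out.println("released: " + blocker.isReleasable());
    }
}
```

`ForkJoinPool.managedBlock(...)` calls `isReleasable()` and `block()` on behalf of the pool, letting it compensate for the blocked worker.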
Isolation problem in Enterprise
In Spring Boot, using commonPool for everything is bad practice:
- Component A launches a heavy stream -> component B slows down
- Solution: a custom `ForkJoinPool` for critical tasks
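A common workaround is to start the parallel stream from a task submitted to a dedicated pool; its subtasks then run in that pool instead of commonPool. Note this trick relies on behavior that is not part of the documented Stream API contract, and the pool size below is just an example:

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class IsolatedPoolDemo {
    public static long sumInCustomPool() throws Exception {
        ForkJoinPool customPool = new ForkJoinPool(4); // dedicated pool; size 4 is an example
        try {
            // The parallel stream is started from inside a customPool task,
            // so its subtasks execute there rather than in commonPool.
            return customPool.submit(() ->
                    List.of(1, 2, 3, 4, 5).parallelStream()
                            .mapToLong(Integer::longValue)
                            .sum()
            ).get();
        } finally {
            customPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumInCustomPool()); // 15
    }
}
```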
Task Granularity
If tasks are too small, the overhead of creating ForkJoinTask objects outweighs the benefit. If they are too large, work-stealing has nothing to balance. The Stream API balances this through Spliterator.
Diagnostics
- `-Djava.util.concurrent.ForkJoinPool.common.parallelism`: the main control lever
- `ForkJoinPool.getCommonPoolParallelism()`: programmatic way to find the limit
- VisualVM/JConsole: show the Steal Count; if it grows, the pool is working efficiently
Interview Cheat Sheet
Must know:
- `ForkJoinPool` — a thread pool for “divide and conquer” tasks, used by parallel streams
- Unlike `ThreadPoolExecutor`, each thread has its own Deque and a work-stealing algorithm
- Work-stealing: a free thread “steals” a task from the opposite end of another thread’s queue, minimizing contention
- `ForkJoinPool.commonPool()` — a static shared pool, size = `availableProcessors() - 1` (one slot is reserved for the calling thread)
- `parallelStream()` automatically sends tasks to `commonPool()` via `Spliterator`
- In parallel mode, the “owning” thread works in LIFO order; “stealing” threads work in FIFO order (they take the oldest, largest subtasks)
- In Spring Boot, using the shared `commonPool` for everything is an anti-pattern; use a custom `ForkJoinPool`
Common follow-up questions:
- Why `-1` in the commonPool size? — The calling thread itself participates in processing; one slot is reserved for it
- How does ForkJoinPool reduce queue contention? — Each thread has its own Deque; stealing from the opposite end of another queue does not conflict with the owner
- How to find the current commonPool parallelism programmatically? — `ForkJoinPool.getCommonPoolParallelism()`
- What is ManagedBlocker and when is it needed? — An interface for tasks that may block (I/O); the pool creates a compensation thread. Rarely used in streams
Red flags (DO NOT say):
- “ForkJoinPool is the same as ThreadPoolExecutor” — ForkJoinPool has a different architecture: work-stealing, per-thread Deque
- “commonPool always creates as many threads as there are cores” — it creates one fewer; the calling thread itself makes up the difference
- “Parallel stream creates its own pool” — it reuses the shared `ForkJoinPool.commonPool()`
- “You can ignore task granularity” — too-small tasks mean overhead on ForkJoinTask creation; too-large tasks leave work-stealing nothing to do
Related topics:
- [[11. How to create a parallel stream]]
- [[12. What potential problems can occur with parallel streams]]
- [[10. When to use parallel streams]]
- [[9. What are parallel streams]]