What Happens During Rehashing (Resize)?
During rehashing, HashMap creates a new array 2x larger and transfers all elements there.
🟢 Junior Level
When the number of entries exceeds the threshold (capacity × load factor), HashMap allocates a new array twice the size of the old one and transfers every element into it.
Simple example:
```java
HashMap<String, Integer> map = new HashMap<>(); // 16 buckets, threshold = 12
// Add 13 elements...
// The 13th put triggers a resize:
// 1. A new array of 32 buckets is created
// 2. All 13 elements are transferred to the new buckets
// 3. The old array is discarded
```
Analogy: You had a cabinet with 16 shelves, and it got full. You buy a 32-shelf cabinet and move all your things. Some things stay on the same shelf numbers, others move to new ones.
Main point: Resize is a slow O(n) operation. Better to avoid it by setting the right size upfront.
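The trigger point follows from capacity × load factor: a default map has 16 buckets and a threshold of 16 × 0.75 = 12, so the 13th insert doubles it to 32. A minimal runnable sketch (class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ResizeTriggerDemo {
    public static void main(String[] args) {
        int capacity = 16;
        float loadFactor = 0.75f;
        int threshold = (int) (capacity * loadFactor);         // 16 * 0.75 = 12
        System.out.println("resize on insert #" + (threshold + 1)); // the 13th put

        Map<String, Integer> map = new HashMap<>();            // 16 buckets internally
        for (int i = 1; i <= 13; i++) map.put("k" + i, i);     // 13th put doubles 16 -> 32
        System.out.println(map.size());                        // 13
    }
}
```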
🟡 Middle Level
The Bit Trick: (hash & oldCap)
Since capacity is always a power of two, on doubling one bit is added to the mask:
```
Old mask (15): 0000 1111
New mask (31): 0001 1111
                  ^
                  new bit = oldCap (16)
```
Checking one bit determines the element’s fate:
```java
if ((e.hash & oldCap) == 0) {
    // Stays at the old index
} else {
    // Moves to oldIndex + oldCapacity
}
```
Forming Lo and Hi Lists
When traversing one bucket, HashMap creates two temporary lists:
- Lo List: elements that stay at the same index
- Hi List: elements that move to `index + oldCap`
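The split above can be simulated outside of HashMap: take several hashes that share an old bucket and group them by the `hash & oldCap` bit. A sketch (class name and hash values are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class LoHiSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        // Hashes that all land in bucket 5 of the old 16-bucket table (h & 15 == 5):
        int[] bucketHashes = {5, 21, 37, 53};
        List<Integer> lo = new ArrayList<>(), hi = new ArrayList<>();
        for (int h : bucketHashes) {
            if ((h & oldCap) == 0) lo.add(h); // bit clear -> stays at index 5
            else hi.add(h);                   // bit set   -> moves to 5 + 16 = 21
        }
        System.out.println("lo (stay at 5):   " + lo); // [5, 37]
        System.out.println("hi (move to 21): " + hi);  // [21, 53]
    }
}
```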
Order Preservation (Java 8+)
Java 8 uses Tail Insertion — element order is preserved. This prevents linked list cycling (the Java 7 problem). However, HashMap is still NOT thread-safe — on concurrent resize, element loss is possible. For multi-threaded access, use ConcurrentHashMap.
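The thread-safety contrast can be demonstrated: concurrent puts into a ConcurrentHashMap lose no entries even while it resizes repeatedly under load, which plain HashMap does not guarantee. A small stress sketch (class name and sizes are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> map = new ConcurrentHashMap<>(); // try HashMap here to see lost entries
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            final int base = t * 10_000; // disjoint key ranges per thread
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) map.put(base + i, i);
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        System.out.println(map.size()); // 40000 -- nothing lost despite concurrent resizes
    }
}
```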
Tree Transfer
If the bucket is a tree:
- The tree is split into Lo and Hi parts
- If a part has <= 6 elements → untreeify (back to a linked list)
- If > 6 → the part remains a tree in its new bucket (re-treeified if needed)
Overhead
| Resource | Cost |
|---|---|
| Memory | Two arrays simultaneously (old + new) |
| CPU | O(n) — traversal of all elements |
| GC | Pressure from temporary objects and old array |
🔴 Senior Level
Resize Algorithm in Java 8+:
- Create a new array 2x the old size
- For each bucket, split elements into two groups by one bit of hash:
- Lo (bit = 0): stay at the same index
- Hi (bit = 1): move to `index + oldCap`
- The old array is set to null so GC can reclaim it
Why this is efficient: no need to recompute hashCode — just check ONE bit (hash & oldCap). Elements either stay in place or shift by a fixed offset.
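This can be verified directly: for any hash, the index in the doubled table equals either the old index or `oldIndex + oldCap`, and the single bit `hash & oldCap` decides which. A sketch (class name and hash values are illustrative):

```java
public class BitTrickDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        for (int hash : new int[]{5, 21, 7, 54}) {
            int oldIdx = hash & (oldCap - 1);     // index in the 16-bucket table
            int newIdx = hash & (2 * oldCap - 1); // index in the 32-bucket table
            // One bit decides: bit clear -> same index, bit set -> oldIdx + oldCap
            String where = (hash & oldCap) == 0
                    ? "stays at " + oldIdx
                    : "moves to " + (oldIdx + oldCap);
            System.out.println("hash " + hash + ": " + where
                    + " (actual new index " + newIdx + ")");
        }
    }
}
```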
TreeNode.split()
On tree splitting:
```java
// If a part has <= UNTREEIFY_THRESHOLD (6) nodes -- untreeify it
if (lc <= UNTREEIFY_THRESHOLD)
    tab[index] = loHead.untreeify(map);
else {
    tab[index] = loHead;
    if (hiHead != null)        // (else is already treeified)
        loHead.treeify(tab);   // rebalancing
}
```
Memory Pressure
On resize capacity = 1M → capacity = 2M (with 8-byte references, i.e. without compressed oops):
- New array: 2M references × 8 bytes ≈ 16 MB
- Old array: 1M references × 8 bytes ≈ 8 MB, kept alive until the transfer completes
- At peak: ~24 MB of array memory in the heap (roughly halved with -XX:+UseCompressedOops)
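The peak figure is simple arithmetic over the two bucket arrays; a sketch assuming uncompressed 8-byte references (class name is illustrative):

```java
public class ResizeMemoryDemo {
    public static void main(String[] args) {
        long oldCap = 1L << 20;        // 1M buckets
        long newCap = oldCap << 1;     // 2M buckets after doubling
        long refSize = 8;              // bytes per reference without compressed oops
        long oldBytes = oldCap * refSize; // ~8 MB, alive until transfer finishes
        long newBytes = newCap * refSize; // ~16 MB
        System.out.println((oldBytes + newBytes) / (1024 * 1024) + " MB peak"); // 24 MB peak
    }
}
```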
GC Impact
- Old array → immediate garbage (Young Gen, or Old Gen if the map is long-lived)
- Reference redistribution → write barriers for GC
- In G1: may trigger mixed GC if old array is in Old Gen
ConcurrentHashMap: Parallel Resize
In ConcurrentHashMap, resize is performed in parallel:
- Multiple threads can help transfer elements
- `transferIndex` is used for coordination
- `ForwardingNode` marks already-transferred bins and indicates progress
Production Experience
In latency-sensitive systems:
- Resize causes spikes from 10ms to 100+ms
- Solution: pre-calculate capacity
- Monitoring: the JFR `jdk.HashMapResize` event (Java 17+)
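Pre-calculating capacity means sizing so that the expected entry count never crosses the threshold; a sketch of the usual formula (class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class PreSizedMap {
    // Smallest initial capacity that holds `expected` entries without a resize:
    // we need expected <= capacity * 0.75
    static int capacityFor(int expected) {
        return (int) (expected / 0.75f) + 1;
    }

    public static void main(String[] args) {
        int expected = 1_000_000;
        Map<Long, Long> map = new HashMap<>(capacityFor(expected)); // no resize spikes on the hot path
        for (long i = 0; i < expected; i++) map.put(i, i);
        System.out.println(map.size()); // 1000000
    }
}
```

Since Java 19, `HashMap.newHashMap(expected)` performs the same sizing calculation internally.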
🎯 Interview Cheat Sheet
Must know:
- New array = 2x the old one, elements distributed via bit trick (hash & oldCap)
- Lo List: stay in place, Hi List: move to index + oldCap
- hashCode is NOT recomputed — ONE bit is checked
- Java 8+: Tail Insertion preserves order, prevents cycling (Java 7 problem)
- TreeNode.split(): at <= 6 elements → untreeify (back to list)
- GC pressure: old array + new = peak doubling of array memory
Common follow-up questions:
- Why not recompute hash? — on doubling a power of two, one bit is added to the mask
- What happens on concurrent resize? — element loss, undefined behavior
- How does ConcurrentHashMap resize? — in parallel, via transferIndex and ForwardingNode
- How much memory on resize 1M → 2M? — ~24MB peak (~16MB new + ~8MB old, with 8-byte references)
Red flags (DO NOT say):
- “On resize, all hashCodes are recomputed” — no, one bit is checked
- “HashMap is thread-safe on resize” — no, even in Java 8+
- “Elements are distributed randomly” — no, deterministically by one bit
Related topics:
- [[17. What is Capacity in HashMap]]
- [[18. When Does Rehashing Occur in HashMap]]
- [[22. How HashMap Works in a Multi-threaded Environment]]
- [[23. What is ConcurrentHashMap and How Does It Differ from HashMap]]