When Does Rehashing (Resize) Occur in HashMap?
🟢 Junior Level
Rehashing (resize) is the process of increasing HashMap’s internal array size. It occurs in three cases:
- First put() — on HashMap creation the array is empty; the first put creates it
- Threshold exceeded — when the number of elements > capacity * loadFactor
- Treeification with a small table — 8 elements in a bucket and capacity < 64
Example:
```java
HashMap<String, Integer> map = new HashMap<>(); // table = null
map.put("key1", 1);   // first put: array of 16 buckets is created
// ... add 11 more elements (total 12)
map.put("key13", 13); // 13th element exceeds threshold 12: capacity grows to 32
```
Why does it matter? Resize is a slow O(n) operation, so it is better to set the right size upfront.
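As a sketch of the fix (assuming the JDK defaults of load factor 0.75), presizing the map lets all entries fit under the threshold, so no resize happens during loading:

```java
import java.util.HashMap;

public class PresizeExample {
    public static void main(String[] args) {
        // Expecting ~100 entries: pick capacity so that 100 <= capacity * 0.75,
        // avoiding intermediate resizes while loading
        int capacity = (int) (100 / 0.75f) + 1; // 134
        HashMap<String, Integer> map = new HashMap<>(capacity);
        for (int i = 0; i < 100; i++) {
            map.put("key" + i, i);
        }
        System.out.println(map.size()); // 100
    }
}
```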
🟡 Middle Level
Resize Triggers
| Trigger | Condition | Action |
|---|---|---|
| First put | `table == null` | Array of size 16 created |
| Threshold exceeded | `++size > threshold` | Array doubled |
| Treeification | 8 elements in bucket, capacity < 64 | Resize instead of tree |
Threshold Formula
```java
threshold = capacity * loadFactor;
// By default: 16 * 0.75 = 12
```
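The formula can be checked directly. A small sketch — the load factor here is a local constant mirroring the JDK default, not HashMap's actual private field:

```java
public class ThresholdFormula {
    // Mirrors the JDK default load factor; illustration only,
    // not HashMap's internal state.
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        System.out.println(threshold(16, DEFAULT_LOAD_FACTOR)); // 12
        System.out.println(threshold(32, DEFAULT_LOAD_FACTOR)); // 24
        System.out.println(threshold(64, DEFAULT_LOAD_FACTOR)); // 48
    }
}
```

Each doubling of capacity doubles the threshold, which is why a map that just resized has room for as many new entries again before the next resize.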
What Happens During Resize?
- A new array 2x larger is created
- All elements are transferred to the new array
- Threshold is recomputed
Performance:
- Time: O(n) — traverses all elements
- Memory: two arrays exist temporarily
- GC pressure: creates garbage collector load
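The transfer step can be modeled with a short sketch. In Java 8, a node in old bucket j either stays at j or moves to j + oldCap, depending on a single bit of its hash — hashCode is not recomputed. This is a simplified model of that bit check, not the real Node-chain walk:

```java
public class ResizeSplitSketch {
    // During a Java 8 resize, a node in bucket j stays at j when
    // (hash & oldCap) == 0, otherwise it moves to j + oldCap.
    static boolean staysLow(int hash, int oldCap) {
        return (hash & oldCap) == 0;
    }

    public static void main(String[] args) {
        int oldCap = 16;
        // hashes 5, 21, 37, 53 all map to bucket 5 while capacity is 16
        for (int h : new int[] {5, 21, 37, 53}) {
            System.out.println("hash " + h + " -> new bucket " + (h & 31)
                    + (staysLow(h, oldCap) ? " (stayed)" : " (moved to j + oldCap)"));
        }
    }
}
```

So one old chain splits into at most two new chains, which is what keeps the transfer cheap per element.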
How to Avoid Resize?
```java
// If you know there will be 1000 elements:
int capacity = (int) (1000 / 0.75f) + 1; // 1334
Map<String, Integer> map = new HashMap<>(capacity);
```
Common Mistakes
- Not accounting for rounding — new HashMap<>(1000) gives capacity 1024, threshold 768
- Resize in hot path — in a loop with frequent puts, resize causes latency spikes
- Multi-threaded resize — in Java 7, could produce a cyclic linked list and an infinite loop
When Resize is Acceptable
Resize is acceptable: on initial data load (one-time), for non-latency-critical operations. NOT acceptable: in an HTTP request handler’s hot path, in real-time systems.
🔴 Senior Level
Lazy Initialization
```java
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length; // first resize: creates the initial array
    ...
}
```
The first resize occurs on the first put() — this creates the initial array.
Resize on Treeification
Besides threshold resize, at 8 elements in a bucket and capacity < 64, a resize occurs INSTEAD of treeification — this is an attempt to spread elements across new buckets, rather than converting a list to a tree.
Multi-threaded Resize Danger
HashMap is not thread-safe:
- Java 7: Parallel resize → cyclic references → infinite loop
- Java 8+: Fixed via tail insertion, but data loss is still possible
```java
// Thread A and Thread B do resize simultaneously:
// both create a new array and transfer elements
// Result: element loss, undefined behavior
```
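Where concurrent writes are possible, the standard fix is ConcurrentHashMap, whose resize is cooperative and thread-safe. A minimal check — the two threads write disjoint keys, so the final size is deterministic:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    // Two threads insert disjoint key ranges concurrently;
    // ConcurrentHashMap loses no entries, even across internal resizes.
    static int fillConcurrently(int perThread) throws InterruptedException {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread a = new Thread(() -> {
            for (int i = 0; i < perThread; i++) map.put(i, i);
        });
        Thread b = new Thread(() -> {
            for (int i = perThread; i < 2 * perThread; i++) map.put(i, i);
        });
        a.start(); b.start();
        a.join(); b.join();
        return map.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(fillConcurrently(10_000)); // 20000
    }
}
```

The same experiment with a plain HashMap may return a smaller size (lost entries) or worse — which is exactly the danger described above.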
Performance Impact
Resize is one of HashMap’s most expensive operations:
- New array allocation: O(capacity)
- Element copying: O(size)
- Total: O(n)
In latency-sensitive systems, this causes spikes from fractions of a millisecond (for small Maps) to 100+ ms (for Maps with millions of elements; depends on JVM and HashMap size).
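A rough way to observe the cost — a sketch, not a rigorous benchmark (no JIT warmup or iteration control; absolute numbers vary by JVM):

```java
import java.util.HashMap;

public class ResizeCostSketch {
    // Fills a map with n entries and returns the elapsed time in nanoseconds
    static long fillNanos(HashMap<Integer, Integer> map, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) map.put(i, i);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long withResizes = fillNanos(new HashMap<>(), n); // ~20 doublings on the way
        long presized = fillNanos(new HashMap<>((int) (n / 0.75f) + 1), n); // none
        System.out.printf("default: %d ms, presized: %d ms%n",
                withResizes / 1_000_000, presized / 1_000_000);
    }
}
```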
Production Monitoring
Signs of resize problems:
- Periodic delays (spikes) in latency metrics
- Increased GC activity during resize
- Memory profiling: simultaneous existence of two arrays
Best Practices
```java
// Guava:
Map<K, V> map = Maps.newHashMapWithExpectedSize(expectedSize);

// Or manually:
Map<K, V> map = new HashMap<>((int) (expectedSize / 0.75f) + 1);
```
🎯 Interview Cheat Sheet
Must know:
- Three triggers: first put (array creation), ++size > threshold, treeification at capacity < 64
- threshold = capacity * loadFactor; by default 16 * 0.75 = 12
- Resize = O(n): new array allocation + element copying
- Lazy Initialization: array is created on first put, not in constructor (Java 8+)
- Java 7: parallel resize → infinite loop; Java 8+: fixed, but data loss still possible
- Resize in hot path — cause of latency spikes
Common follow-up questions:
- Why does resize double, not +16? — doubling gives amortized O(1) insertion
- Can resize be avoided? — yes, set initialCapacity = (expected / 0.75) + 1
- Why does treeification trigger resize? — attempt to spread elements instead of creating a tree
- Is resize acceptable? — on initial load yes; in hot path of HTTP handler — no
Red flags (DO NOT say):
- “Resize happens on every put” — no, only when threshold is exceeded
- “Java 8 is fully safe on resize” — no, data loss is still possible
- “Resize recomputes hashCode” — no, one bit is checked (hash & oldCap)
Related topics:
- [[16. What is Load Factor in HashMap]]
- [[17. What is Capacity in HashMap]]
- [[19. What Happens During Rehashing]]
- [[22. How HashMap Works in a Multi-threaded Environment]]