What is the Time Complexity of get() and put() Operations in HashMap?
🟢 Junior Level
In HashMap, the main operations (get, put, remove) work very fast — on average in O(1), that is, constant time.
What this means:
```java
HashMap<String, Integer> map = new HashMap<>();
map.put("key1", 100);   // O(1) — instant
map.get("key1");        // O(1) — instant
map.remove("key1");     // O(1) — instant
```
With a good hashCode, lookup takes roughly the same time regardless of the map's size. With a poor hashCode (many collisions), lookup time grows with the number of elements crowded into the same bucket.
Worst case: If all keys land in one bucket, lookup degrades to O(n) in Java 7 or O(log n) in Java 8+ (the bucket becomes a balanced tree). This rarely happens with a proper hashCode().
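A quick way to see this is a key class with a constant hashCode, which forces every entry into one bucket. The map stays correct, only slower. A minimal sketch (BadKey and lookup are made-up names for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class BadHashDemo {
    // Deliberately bad key: every instance hashes to the same bucket.
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // constant hash: all keys collide
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    // Build a map of n colliding keys and look one up. The result is correct,
    // but each get() has to walk the single overcrowded bucket.
    static Integer lookup(int n, int probe) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < n; i++) map.put(new BadKey(i), i);
        return map.get(new BadKey(probe));
    }

    public static void main(String[] args) {
        System.out.println(lookup(1000, 500)); // 500, still correct, just slower
    }
}
```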
🟡 Middle Level
Summary Table
| Operation | Average Case | Worst Case (Java 7) | Worst Case (Java 8+) |
|---|---|---|---|
| get() | O(1) | O(n) | O(log n) |
| put() | O(1)* | O(n) | O(log n) |
| remove() | O(1) | O(n) | O(log n) |
| containsKey() | O(1) | O(n) | O(log n) |
| iteration | O(capacity + size) | O(capacity + size) | O(capacity + size) |
*put() — amortized O(1), since periodic resize = O(n)
Why O(1)?
- Computing hash(key): O(1) — a few bitwise operations
- Determining the index: O(1) — (n - 1) & hash
- Array access: O(1) — direct access by index
- Comparison in bucket: O(1) — 1-2 elements on average
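The first two steps can be sketched directly. Here spread() mirrors what java.util.HashMap.hash() actually does (XOR-folding the high bits into the low bits), while indexFor is a made-up helper name for the (n - 1) & hash step:

```java
public class IndexSketch {
    // Mirrors java.util.HashMap.hash(): fold the high bits into the low bits
    // so that small power-of-two tables still see the whole hashCode.
    static int spread(int h) { return h ^ (h >>> 16); }

    // The index step: capacity is always a power of two, so (capacity - 1)
    // is a bitmask that keeps just the low bits of the hash.
    static int indexFor(int hash, int capacity) { return (capacity - 1) & hash; }

    public static void main(String[] args) {
        int h = spread("key1".hashCode());
        System.out.println(indexFor(h, 16)); // bucket index in a default-sized table, 0..15
    }
}
```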
Amortized Cost of put()
Resize is an O(n) operation, but occurs rarely (geometric progression):
- 16 → 32 → 64 → 128 → …
- Amortized cost: O(1)
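To see why the amortized cost works out to O(1), count the resizes: with the default load factor 0.75 and doubling from capacity 16, a million puts trigger only about 17 resizes. A simulation sketch (resizes is a hypothetical helper, not a HashMap API):

```java
public class ResizeCount {
    // Simulate HashMap's growth policy: start at capacity 16, resize (double)
    // whenever size exceeds capacity * 0.75, and count the resizes.
    static int resizes(int n) {
        int capacity = 16;
        int threshold = 12; // 16 * 0.75
        int count = 0;
        for (int size = 1; size <= n; size++) {
            if (size > threshold) {
                capacity *= 2;
                threshold = (int) (capacity * 0.75f);
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(resizes(1_000_000)); // 17 resizes for a million puts
    }
}
```

Each resize copies all entries, but because capacities double, the total copying over N insertions is about 2N moves, i.e. O(1) per insertion on average.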
Factors Affecting Real-World Speed
| Factor | Impact |
|---|---|
| Cost of equals() | Heavy equals = large constant |
| Quality of hashCode() | Many collisions → O(log n) |
| Load factor | Higher = more collisions |
| Key size | Long strings = slow hashCode/equals |
Common Mistakes
- Thinking O(1) = same time always — the constant depends on hashCode quality
- Ignoring amortization — resize can cause latency spikes
- Confusing with iteration — forEach = O(capacity + size), not O(1)
When NOT to Use HashMap
- Predictable latency needed — resize causes spikes, consider LinkedHashMap with fixed capacity
- Sorting needed — TreeMap O(log n)
- Multi-threaded access — ConcurrentHashMap
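For contrast with the sorted case, a small sketch: TreeMap pays O(log n) per operation but iterates keys in sorted order, which HashMap never guarantees (sortedKeys is a made-up helper for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class MapChoice {
    // Insert keys in arbitrary order; TreeMap keeps them sorted at O(log n) per put.
    static List<String> sortedKeys(String... keys) {
        TreeMap<String, Integer> tree = new TreeMap<>();
        for (String k : keys) tree.put(k, 1);
        return new ArrayList<>(tree.keySet()); // iteration order = natural key order
    }

    public static void main(String[] args) {
        System.out.println(sortedKeys("b", "c", "a")); // [a, b, c]
    }
}
```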
🔴 Senior Level
Internal Breakdown
```java
// get() internals:
// 1. hash(key): ~3 cycles (XOR + shift)
// 2. index = (n-1)&hash: ~1 cycle
// 3. tab[index]: ~1 cycle (L1 cache hit) or ~100 cycles (RAM miss)
// 4. equals(): from 1 cycle (reference) to O(k) (string comparison)
```
Real latency depends on cache locality:
- L1 hit: ~1 ns
- L2 hit: ~4 ns
- L3 hit: ~15 ns
- RAM miss: ~100 ns
Worst Case: When O(n) is Possible
In Java 8+, O(n) is still possible:
- capacity < 64 — treeification doesn't trigger (the table resizes instead), so a bucket with 8+ collisions stays a linked list = O(n)
- hashCode() returns a constant for all keys — every element lands in one bucket; until that bucket treeifies, lookups walk a list
- In Java 7 there is no treeification at all, so heavy collisions always mean O(n)
O(log n) is the degradation limit once a bucket has been treeified.
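The constant-hashCode case is easy to reproduce with real keys: "Aa" and "BB" have the same String.hashCode() (both 2112), so any same-length concatenation of these blocks collides as well. A sketch that generates 2^n colliding keys (collidingKeys is a made-up helper):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CollidingStrings {
    // Build 2^blocks distinct strings that all share one hashCode by
    // concatenating the equal-hash blocks "Aa" and "BB".
    static List<String> collidingKeys(int blocks) {
        List<String> keys = new ArrayList<>();
        keys.add("");
        for (int i = 0; i < blocks; i++) {
            List<String> next = new ArrayList<>();
            for (String k : keys) {
                next.add(k + "Aa");
                next.add(k + "BB");
            }
            keys = next;
        }
        return keys;
    }

    public static void main(String[] args) {
        List<String> keys = collidingKeys(10);
        Set<Integer> hashes = new HashSet<>();
        for (String k : keys) hashes.add(k.hashCode());
        System.out.println(keys.size() + " keys, " + hashes.size() + " distinct hashCode");
        // 1024 keys, 1 distinct hashCode
    }
}
```

Feeding these keys into a HashMap piles them into one bucket; in Java 8+ the bucket treeifies (String is Comparable), keeping lookups at O(log n) instead of O(n).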
Benchmark Data (Java 21, Intel i9)
| Operation | Size | Average Latency | P99 |
|---|---|---|---|
| get() | 1K | 5 ns | 12 ns |
| get() | 1M | 8 ns | 25 ns |
| put() | 1K | 8 ns | 20 ns |
| put() (with resize) | 1M→2M | 150 ns | 5000 ns |
| iteration | 1M | 200 µs | 250 µs |
ConcurrentHashMap vs HashMap
| Metric | HashMap | ConcurrentHashMap |
|---|---|---|
| get (single thread) | 5 ns | 8 ns |
| get (8 threads) | N/A | 60 ns |
| put (single thread) | 8 ns | 15 ns |
| put (8 threads) | N/A | 120 ns |
Edge Cases
- hashCode() = constant: O(n) for Java 7, O(log n) for Java 8+
- Huge capacity, few elements: iteration O(capacity) dominates
- String keys with long values: equals() = O(k), where k is string length
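One mitigating detail for long String keys: String caches its hash value, so the O(k) hashing cost is paid once per key object; equals() inside the bucket still costs O(k) per comparison. A sketch:

```java
public class StringKeyCost {
    public static void main(String[] args) {
        String longKey = "x".repeat(1_000_000); // Java 11+
        // First call walks all k characters: O(k).
        int first = longKey.hashCode();
        // Subsequent calls return the value cached inside the String object.
        int cached = longKey.hashCode();
        System.out.println(first == cached); // true
        // equals() has no such cache: comparing two distinct long strings is O(k) every time.
    }
}
```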
Production Tuning
- Pre-set capacity eliminates O(n) resize
- Monitoring P99 latency reveals resize spikes
- JFR event jdk.HashMapResize (Java 17+) for profiling
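The pre-sizing rule of thumb: to hold expected entries without a single resize, the capacity must satisfy expected ≤ capacity × loadFactor, so pass ⌈expected / 0.75⌉ to the constructor. This is the same formula HashMap.newHashMap(int) (Java 19+) applies internally. A sketch with a made-up helper name:

```java
import java.util.HashMap;
import java.util.Map;

public class PreSize {
    // Capacity large enough that `expected` puts never cross the 0.75 threshold.
    static <K, V> Map<K, V> preSized(int expected) {
        return new HashMap<>((int) Math.ceil(expected / 0.75));
    }

    public static void main(String[] args) {
        Map<Integer, Integer> map = preSized(1_000_000);
        for (int i = 0; i < 1_000_000; i++) map.put(i, i); // no resize along the way
        System.out.println(map.size()); // 1000000
    }
}
```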
🎯 Interview Cheat Sheet
Must know:
- Average case: get/put/remove = O(1) via hash + indexing + equals
- Worst case Java 7: O(n) — linked list in bucket
- Worst case Java 8+: O(log n) — tree on collisions
- Iteration: O(capacity + size) — NOT O(1), traverses all buckets
- put() — amortized O(1): resize O(n) but occurs rarely (geometric progression)
- Real latency depends on cache locality: L1 hit ~1ns, RAM miss ~100ns
Common follow-up questions:
- Why O(1) and not O(log n)? — on average 1-2 elements in a bucket with a good hashCode
- What is amortized complexity? — individual resize is expensive, but on average over N insertions = O(1)
- When is O(n) possible in Java 8+? — capacity < 64, hashCode = constant, or Java 7
- ConcurrentHashMap overhead? — ~60% on single-thread, but the only correct option for multi-thread
Red flags (DO NOT say):
- “O(1) = same time always” — no, the constant depends on hashCode, equals, cache locality
- “Iteration = O(1)” — no, O(capacity + size)
- “HashMap is always faster than TreeMap” — no, TreeMap is O(log n) but gives sorting and predictable latency
Related topics:
- [[21. When Can Time Complexity Become O(n)]]
- [[22. How HashMap Works in a Multi-threaded Environment]]
- [[04. What is a Collision in HashMap]]