Which Map Implementation to Choose? (Comparison)
Java has many Map types. Here's a simple cheat sheet:
🟢 Junior Level
| Map | When to Use |
|---|---|
| HashMap | Default. Fast, simple |
| LinkedHashMap | When insertion order matters (or LRU cache) |
| TreeMap | When sorted keys are needed |
| ConcurrentHashMap | When multiple threads are working |
| WeakHashMap | For temporary data tied to objects |
| EnumMap | When keys are enums |
Example:

```java
// Default case:
Map<String, Integer> map = new HashMap<>();

// Thread-safe:
Map<String, Integer> concurrent = new ConcurrentHashMap<>();

// Sorted (keys in natural order):
Map<String, Integer> sorted = new TreeMap<>();
```
Main rule: If you don’t know what to pick — choose HashMap.
🟡 Middle Level
Comparison Table
| Implementation | Order | Thread Safety | Null Keys | Complexity | When to Use |
|---|---|---|---|---|---|
| HashMap | No | No | Yes | O(1) | Default |
| LinkedHashMap | Insertion order | No | Yes | O(1) | LRU cache, order |
| TreeMap | Sorted | No | No | O(log n) | Navigation, ranges |
| ConcurrentHashMap | No | Yes | No | O(1) | Multi-threading |
| WeakHashMap | No | No | Yes | O(1) | Metadata, cache |
| EnumMap | Enum order | No | No | O(1) | Enum keys |
| IdentityHashMap | No | No | Yes | O(1) | Reference comparison |
LinkedHashMap: LRU Cache
```java
private static final int MAX_SIZE = 100;

Map<K, V> lruCache = new LinkedHashMap<>(MAX_SIZE, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_SIZE;
    }
};
```

The third constructor argument, `accessOrder = true`, switches ordering from insertion order to last-access order.
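A self-contained sketch of the same pattern with concrete types (the capacity of 3 is chosen here just for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    public static void main(String[] args) {
        final int maxSize = 3; // illustrative capacity
        Map<String, Integer> lru = new LinkedHashMap<>(maxSize, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > maxSize;
            }
        };

        lru.put("a", 1);
        lru.put("b", 2);
        lru.put("c", 3);
        lru.get("a");    // touch "a" so it becomes most recently used
        lru.put("d", 4); // evicts "b", the least recently used entry

        System.out.println(lru.keySet()); // [c, a, d]
    }
}
```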
TreeMap: Navigation
```java
TreeMap<String, Integer> tree = new TreeMap<>();
tree.put("a", 1);
tree.put("c", 3);
tree.put("b", 2);

tree.subMap("a", "c"); // {a=1, b=2} — toKey is exclusive
tree.headMap("b");     // {a=1}
tree.tailMap("b");     // {b=2, c=3}
tree.firstKey();       // "a"
tree.lastKey();        // "c"
```
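The navigation API (TreeMap implements NavigableMap) also provides nearest-key lookups; a runnable sketch:

```java
import java.util.TreeMap;

public class TreeMapNav {
    public static void main(String[] args) {
        TreeMap<String, Integer> tree = new TreeMap<>();
        tree.put("a", 1);
        tree.put("c", 3);
        tree.put("b", 2);

        System.out.println(tree.subMap("a", "c")); // {a=1, b=2} — toKey exclusive
        System.out.println(tree.ceilingKey("bb")); // c — smallest key >= "bb"
        System.out.println(tree.floorKey("bb"));   // b — largest key <= "bb"
        System.out.println(tree.firstEntry());     // a=1
    }
}
```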
EnumMap: Maximum Speed
```java
enum Status { ACTIVE, INACTIVE, PENDING }

Map<Status, Integer> counts = new EnumMap<>(Status.class);
// Internally: an array of length Status.values().length, indexed by ordinal() — maximally fast
```
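A runnable sketch of counting events by status (the event list is illustrative):

```java
import java.util.EnumMap;
import java.util.Map;

public class EnumMapDemo {
    enum Status { ACTIVE, INACTIVE, PENDING }

    public static void main(String[] args) {
        Map<Status, Integer> counts = new EnumMap<>(Status.class);
        Status[] events = { Status.ACTIVE, Status.PENDING, Status.ACTIVE };

        for (Status s : events) {
            counts.merge(s, 1, Integer::sum); // array slot by ordinal(), no hashing
        }

        // Iteration follows enum declaration order, not insertion order
        System.out.println(counts); // {ACTIVE=2, PENDING=1}
    }
}
```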
ConcurrentHashMap: Atomic Operations
```java
ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

map.putIfAbsent("key", 0);                           // atomic insert-if-missing
map.computeIfAbsent("key", k -> computeExpensive()); // atomic lazy compute
map.merge("key", 1, Integer::sum);                   // atomic read-modify-write
```
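Because `merge()` is atomic per key, two threads incrementing the same counter lose no updates; a minimal sketch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AtomicOpsDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();

        Runnable worker = () -> {
            for (int i = 0; i < 1_000; i++) {
                hits.merge("page", 1, Integer::sum); // atomic: no lost updates
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(hits.get("page")); // 2000
    }
}
```

With a plain HashMap the same code would race and typically print less than 2000.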
Immutable Maps (Java 9+)

```java
Map<String, Integer> immutable = Map.of("a", 1, "b", 2); // Java 9+

// Or copy an existing map (Java 10+):
Map<String, Integer> copy = Map.copyOf(originalMap);

// Memory-optimized; throws UnsupportedOperationException on modification
```
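A quick demonstration of the immutability contract:

```java
import java.util.Map;

public class ImmutableDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = Map.of("a", 1, "b", 2);
        try {
            m.put("c", 3); // any mutator throws
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable");
        }
        System.out.println(m.get("a")); // 1
    }
}
```

Note that `Map.of` also rejects null keys and null values with a NullPointerException.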
IdentityHashMap
IdentityHashMap — uses == instead of equals() for key comparison. Two different objects with the same fields are considered different keys. Rare use case: when reference identity matters, not content.
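A short sketch contrasting it with HashMap, using two distinct but equal String objects:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityDemo {
    public static void main(String[] args) {
        String a = new String("key");
        String b = new String("key"); // a.equals(b) is true, but a != b

        Map<String, Integer> byEquals = new HashMap<>();
        byEquals.put(a, 1);
        byEquals.put(b, 2); // overwrites: same key by equals()

        Map<String, Integer> byIdentity = new IdentityHashMap<>();
        byIdentity.put(a, 1);
        byIdentity.put(b, 2); // two distinct keys by ==

        System.out.println(byEquals.size());   // 1
        System.out.println(byIdentity.size()); // 2
    }
}
```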
🔴 Senior Level
Internal Implementations
HashMap: Array + linked lists/trees. O(1) via hashing.
LinkedHashMap: HashMap + doubly-linked list for ordering. Overhead ~2 refs per entry.
TreeMap: Red-black tree. O(log n), no hashing, requires Comparable/Comparator.
ConcurrentHashMap: CAS + synchronized on bucket. Volatile fields, multi-threaded resize.
EnumMap: Regular Object[enum.length] array. Access by ordinal().
IdentityHashMap: Open addressing (not chaining). Uses == and System.identityHashCode().
Performance Comparison
| Operation | HashMap | LinkedHashMap | TreeMap | ConcurrentHashMap | EnumMap |
|---|---|---|---|---|---|
| get() | O(1) | O(1) | O(log n) | O(1) | O(1) |
| put() | O(1) | O(1) | O(log n) | O(1) | O(1) |
| iteration | O(cap+n) | O(n) | O(n) | O(cap+n) | O(n) |
| Memory | Base | Base+16B | ~48B/node | Base+volatile | ~array |
| get 1M (ns) | 5 | 6 | 50 | 8 | 1 |
Memory Footprint (per 1M entries)
| Map | Approx. Memory |
|---|---|
| HashMap | ~48MB |
| LinkedHashMap | ~64MB (+2 refs/entry) |
| TreeMap | ~80MB (red-black tree: 3 refs per entry — parent, left, right — plus a color bit) |
| ConcurrentHashMap | ~56MB (volatile + header) |
| EnumMap (10 keys) | ~100 bytes (array only) |
Edge Cases
- TreeMap and null keys: `new TreeMap<>()` throws NPE on a null key. With `Comparator.nullsFirst(...)` it works.
- LinkedHashMap accessOrder: when `true`, every `get()` changes the order (this is what makes LRU work). This is a mutation!
- ConcurrentHashMap size(): under concurrent updates the result is only an estimate (summed from striped counters). Don't rely on it in a hot path.
- EnumMap: only enum keys. Putting a key of another type (via a raw map) throws ClassCastException.
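The TreeMap null-key edge case can be verified directly:

```java
import java.util.Comparator;
import java.util.TreeMap;

public class TreeMapNulls {
    public static void main(String[] args) {
        try {
            new TreeMap<String, Integer>().put(null, 1); // natural ordering can't compare null
        } catch (NullPointerException e) {
            System.out.println("NPE on null key");
        }

        // With an explicit null-tolerant comparator, null keys are allowed
        TreeMap<String, Integer> nullsOk =
                new TreeMap<>(Comparator.nullsFirst(Comparator.naturalOrder()));
        nullsOk.put(null, 0);
        nullsOk.put("a", 1);
        System.out.println(nullsOk.firstKey()); // null sorts first
    }
}
```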
When to Use What
Default path:
```
Need a Map?
├── Multi-threaded access?       → ConcurrentHashMap
├── Keys = Enum?                 → EnumMap
├── Sorting needed?              → TreeMap
├── Insertion order / LRU?       → LinkedHashMap
├── Cache with GC auto-cleanup?  → WeakHashMap
└── Everything else              → HashMap
```
Specialized Libraries
For production caching:
- Caffeine — replacement for Guava Cache, best performance
- Ehcache — distributed caching
- Redis/Memcached — external cache
These libraries provide:
- TTL, LRU, LFU eviction
- Statistics (hit/miss rate)
- Async loading
- Distributed support
Production Decision Matrix
| Requirement | Choice |
|---|---|
| Read-heavy, single-thread | HashMap |
| Read-heavy, multi-thread | ConcurrentHashMap |
| Write-heavy, multi-thread | ConcurrentHashMap |
| Ordered iteration | LinkedHashMap |
| Range queries | TreeMap |
| Minimal memory (enum keys) | EnumMap |
| Cache with GC cleanup | WeakHashMap or Caffeine |
| Immutable config | Map.of() |
| Identity semantics | IdentityHashMap |
🎯 Interview Cheat Sheet
Must know:
- HashMap — default choice, O(1), no order, allows null
- LinkedHashMap — insertion order, LRU cache via accessOrder + removeEldestEntry
- TreeMap — sorting, O(log n), navigation (subMap, headMap, tailMap), no null keys
- ConcurrentHashMap — multi-threading, CAS + synchronized on bucket, no null
- EnumMap — enum keys, internally an array, O(1), fastest and most compact
- WeakHashMap — auto-cleanup by GC, for metadata, not for production caching
- IdentityHashMap — comparison by `==` instead of equals(), open addressing (not chaining)
Common follow-up questions:
- How to make an LRU cache? — LinkedHashMap with accessOrder=true and override removeEldestEntry
- Why does IdentityHashMap use open addressing? — for identity semantics, chaining isn’t needed
- When is TreeMap preferable to HashMap? — range queries or key sorting needed
- Why is EnumMap faster than HashMap? — array with ordinal() as index, no hashCode, no collisions
Red flags (DO NOT say):
- “TreeMap = O(1)” — no, O(log n)
- “WeakHashMap = production cache” — no, no TTL/LRU/limits; use Caffeine
- “IdentityHashMap = same as HashMap” — no, uses `==` instead of equals()
Related topics:
- [[01. How HashMap Works Internally]]
- [[23. What is ConcurrentHashMap and How Does It Differ from HashMap]]
- [[27. What is WeakHashMap]]