Question 21 · Section 10

When Can Time Complexity Become O(n)?

Although HashMap usually operates in O(1) (constant time), in some cases its operations can degrade to O(n) (linear in the number of elements).


🟢 Junior Level


When this happens:

  1. Old Java (before version 8): All collisions stored in a list → O(n)
  2. Small table (capacity < 64): Even with 8 collisions, a tree isn’t created
  3. Very poor hashCode(): If it always returns the same number — everything in one bucket
  4. Iterating the entire map: forEach traverses all buckets, including empty ones

Iteration (forEach) is ALWAYS O(capacity + size) — that is its normal cost, not degradation. Degradation is a separate matter: get/put slowing from O(1) to O(log n) or O(n).

Example:

// Bad hashCode — ALWAYS returns 1
class BadKey {
    @Override public int hashCode() { return 1; }
}
// All elements in one bucket → search like in a list → O(n)
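A fuller, runnable version of this sketch (the class name Demo, the fill helper, and the equals override are additions for illustration — a proper key must override equals alongside hashCode):

```java
import java.util.HashMap;
import java.util.Map;

class BadKey {
    private final int id;
    BadKey(int id) { this.id = id; }
    @Override public int hashCode() { return 1; }   // every key lands in the same bucket
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
}

public class Demo {
    static Map<BadKey, Integer> fill(int n) {
        Map<BadKey, Integer> map = new HashMap<>(); // all n entries share one bucket
        for (int i = 0; i < n; i++) map.put(new BadKey(i), i);
        return map;
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = fill(1_000);
        // get() still returns the right value, but must walk the shared
        // bucket (a chain or tree scan), not a single O(1) probe
        System.out.println(map.get(new BadKey(500))); // prints 500
    }
}
```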

🟡 Middle Level

O(n) Degradation Scenarios

| Scenario | Description | Solution |
| --- | --- | --- |
| Java 7 | No treeification; all collisions go into a list | Upgrade Java |
| capacity < 64 + bad hashCode | Treeification doesn't trigger | Fix the key's hashCode() |
| hashCode = constant | All elements in one bucket | Fix hashCode() |
| Iteration | Traverses all buckets, even empty ones | Don't create huge capacity |
| Hash Flooding Attack | Attacker crafts colliding keys | Java 8+ mitigates |

Detailed Breakdown: capacity < 64

HashMap<BadKey, String> map = new HashMap<>(); // capacity = 16
// Entries 1-8: all in one bucket as a linked list → get/put are O(n)
// 9th entry: the bucket exceeds TREEIFY_THRESHOLD (8), but capacity(16) < 64,
//            so treeifyBin() calls resize() to 32 instead of building a tree
// 10th entry: same check → capacity(32) < 64 → resize() to 64
// 11th entry: capacity(64) >= 64 → the bucket is finally treeified → O(log n)
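The comment trail above can be checked with a small standalone simulation of the resize-before-treeify rule (a model using HashMap's real constants, not the actual internals; class and method names are made up):

```java
public class TreeifySim {
    static final int TREEIFY_THRESHOLD = 8;     // constant from java.util.HashMap
    static final int MIN_TREEIFY_CAPACITY = 64; // constant from java.util.HashMap

    // Simulates inserting 'inserts' keys that all collide into one bucket,
    // logging each resize and the moment the bucket treeifies.
    static String simulate(int inserts) {
        int capacity = 16;          // default initial capacity
        int chain = 0;              // length of the single overloaded bucket
        StringBuilder log = new StringBuilder();
        for (int i = 1; i <= inserts; i++) {
            chain++;
            if (chain > TREEIFY_THRESHOLD) {            // 9th+ node in the bucket
                if (capacity < MIN_TREEIFY_CAPACITY) {
                    capacity <<= 1;                      // resize instead of treeify
                    log.append("insert ").append(i)
                       .append(": resize to ").append(capacity).append('\n');
                } else {
                    log.append("insert ").append(i)
                       .append(": treeify at capacity ").append(capacity).append('\n');
                    break;
                }
            }
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.print(simulate(11));
        // insert 9: resize to 32
        // insert 10: resize to 64
        // insert 11: treeify at capacity 64
    }
}
```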

Iteration: O(capacity + size)

HashMap<String, Integer> map = new HashMap<>(1_000_000);
map.put("one", 1);
// Iteration traverses all 1,000,000 buckets, of which only 1 is filled
// Cost: ~1,000,000 bucket checks for a single entry — O(capacity), not O(size)
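A plain array can stand in for the bucket table to make this cost visible (a simplified model, not HashMap's real iterator; names are illustrative):

```java
public class IterCost {
    // Model: an Object[] plays the role of HashMap's bucket table.
    // Returns { buckets visited, non-empty buckets found }.
    static int[] visitCounts(int capacity, int size) {
        Object[] table = new Object[capacity];
        for (int i = 0; i < size; i++) table[i] = "v";  // 'size' filled buckets
        int visited = 0, nonEmpty = 0;
        for (Object bucket : table) {                   // what the iterator does
            visited++;                                  // every bucket is checked...
            if (bucket != null) nonEmpty++;             // ...even though most are empty
        }
        return new int[] { visited, nonEmpty };
    }

    public static void main(String[] args) {
        int[] r = visitCounts(1_000_000, 1);
        System.out.println(r[0] + " buckets visited, " + r[1] + " non-empty");
    }
}
```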

Solution: Don’t overestimate capacity without need.

Hash Flooding Attack

Attacker crafts keys with the same hash:

  • Java 7: O(n²) on inserting n elements → DoS
  • Java 8+: O(n log n) — mitigated, but overhead remains
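Crafting colliding keys is trivial for String: "Aa" and "BB" hash to the same value (2112), and concatenating colliding blocks preserves the collision, so 2^n colliding keys are easy to mass-produce (the helper name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class Collisions {
    // "Aa".hashCode() == "BB".hashCode() == 2112. Since String's hash is
    // positional, concatenations of same-length colliding blocks also collide:
    // "AaAa", "AaBB", "BBAa", "BBBB" all share one hashCode, and so on.
    static List<String> collidingKeys(int blocks) {
        List<String> keys = new ArrayList<>();
        keys.add("");
        for (int b = 0; b < blocks; b++) {
            List<String> next = new ArrayList<>();
            for (String k : keys) { next.add(k + "Aa"); next.add(k + "BB"); }
            keys = next;                        // doubles the key count each round
        }
        return keys;
    }

    public static void main(String[] args) {
        List<String> keys = collidingKeys(10);  // 2^10 = 1024 keys
        long distinctHashes = keys.stream().map(String::hashCode).distinct().count();
        System.out.println(keys.size() + " keys, " + distinctHashes + " distinct hash");
    }
}
```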

Common Mistakes

  1. Not upgrading Java — Java 7 is vulnerable to DoS via HashMap
  2. Overestimated capacity — slow iteration
  3. Ignoring hashCode — object keys with poor distribution

🔴 Senior Level

Internal Mechanics: MIN_TREEIFY_CAPACITY

static final int MIN_TREEIFY_CAPACITY = 64;

This threshold means that for capacity < 64, HashMap prefers a resize over treeification: until the table grows to 64, the overflowing bucket remains a linked list → O(n).

Why Java 7 Was Vulnerable

In Java 7, on Hash Flooding:

// Attacker sends POST with 10,000 parameters with the same hash
// Tomcat creates a HashMap for parameters
// Inserting 10,000 elements into one bucket: the i-th insert walks i-1 nodes
// ≈ 10,000²/2 = 50,000,000 comparisons
// CPU: 100% → server doesn't respond = DoS
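The quadratic blow-up is just the arithmetic of appending to an ever-longer chain; a toy count (not Tomcat or HashMap code, name is made up):

```java
public class FloodCost {
    // In a single linked-list bucket, the i-th insert walks past i-1 existing
    // nodes, so total work is 0 + 1 + ... + (n-1) = n*(n-1)/2.
    static long comparisons(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) total += i - 1;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(comparisons(10_000)); // 49995000 (~n²/2)
    }
}
```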

CVE-2012-2743 — a known vulnerability.

Edge Case: Comparable and Treeification

If keys don’t implement Comparable:

// HashMap uses tieBreakOrder() (simplified — the real method first compares class names):
static int tieBreakOrder(Object a, Object b) {
    return (System.identityHashCode(a) <= System.identityHashCode(b)) ? -1 : 1;
}

A tree still gets built, but without Comparable keys lookups cannot prune by comparison and may have to search both subtrees — search inside that bucket can remain close to O(n).

O(n) on Iteration: Deep Analysis

for (Map.Entry<K, V> entry : map.entrySet()) { ... }

The iterator traverses:

for (int i = index; i < table.length; ++i) {
    // Skips empty buckets
}

Each empty bucket is one check. At capacity=1M and size=10 — 999,990 “empty” iterations.

Production Impact

In an API gateway with million-capacity and small size:

  • Iteration for logging: 5-10ms instead of expected 0.1ms
  • In aggregate: significant overhead on frequent calls

Mitigation Strategies

  1. Current Java (17+) — treeification + security fixes
  2. Proper hashCode — unit tests on distribution
  3. Adequate capacity — formula (expected / 0.75) + 1
  4. ConcurrentHashMap — for multi-threaded environments, also protected
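The capacity formula from the list above in code (the class and method names are illustrative; Java 19+ also provides HashMap.newHashMap(int), which applies the same sizing internally):

```java
import java.util.HashMap;
import java.util.Map;

public class Sizing {
    // Smallest initial capacity that fits 'expected' entries without a resize,
    // given the default load factor of 0.75.
    static int capacityFor(int expected) {
        return (int) (expected / 0.75f) + 1;
    }

    public static void main(String[] args) {
        int capacity = capacityFor(10_000);
        System.out.println(capacity); // 13334
        Map<String, Integer> map = new HashMap<>(capacity); // no resize up to 10,000 puts
    }
}
```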

Benchmark: O(n) vs O(log n)

At 1024 collisions in one bucket:

  • O(n) = 1024 comparisons ≈ 5 µs
  • O(log n) = 10 comparisons ≈ 0.1 µs
  • Difference: 50x
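The probe counts in the bullets are simple arithmetic (a sanity-check sketch; the microsecond figures above are illustrative, and the names here are made up):

```java
public class BucketSearch {
    // Worst-case probes inside one bucket holding n colliding entries.
    static int listProbes(int n) { return n; }          // linear chain scan
    static int treeProbes(int n) {                      // balanced-tree depth
        return Integer.numberOfTrailingZeros(n);        // exact log2 for powers of two
    }

    public static void main(String[] args) {
        System.out.println(listProbes(1024) + " vs " + treeProbes(1024)); // 1024 vs 10
    }
}
```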

🎯 Interview Cheat Sheet

Must know:

  • Iteration (forEach) is ALWAYS O(capacity + size) — this is normal behavior, NOT degradation
  • get/put degradation: O(1) → O(n) at capacity < 64 + bad hashCode (tree not created)
  • get/put degradation: O(1) → O(log n) on treeification (Java 8+, capacity >= 64)
  • Java 7: O(n) on collisions, vulnerable to Hash Flooding DoS (CVE-2012-2743)
  • At 1024 collisions: O(n) = 1024 comparisons vs O(log n) = 10 — 50x difference
  • Hash Flooding Attack: attacker crafts keys → DoS; Java 8+ mitigated via treeification

Common follow-up questions:

  • Why does capacity < 64 block treeification? — HashMap prefers resizing first: doubling the table spreads entries across more buckets, which is cheaper for a small table than building a tree
  • When is O(n) possible in Java 8+? — at capacity < 64 + 8+ collisions, or hashCode = constant
  • What is tieBreakOrder? — fallback for tree when keys aren’t Comparable, uses System.identityHashCode
  • How to defend against Hash Flooding? — Java 8+ automatically via treeification; monitor hashCode quality

Red flags (DO NOT say):

  • “forEach degrades to O(n)” — no, this is normal complexity O(capacity + size)
  • “Increasing capacity solves collisions” — no, fixing hashCode does
  • “Java 8 is fully protected from DoS” — no, overhead from treeification remains

Related topics:

  • [[20. What is the Time Complexity of get() and put() Operations in HashMap]]
  • [[04. What is a Collision in HashMap]]
  • [[06. What Happens When 8 Elements Are Reached in One Bucket]]
  • [[22. How HashMap Works in a Multi-threaded Environment]]