Question 21 · Section 9

How to avoid race conditions?

Language versions: English Russian Ukrainian

Unlike file 21 (what is race condition), this file describes concrete prevention strategies. Key idea: a race condition occurs when at least one thread reads, at least one writes, and there is no synchronization between them. All strategies boil down to three approaches: synchronize access, isolate data, or eliminate mutable shared state.


Junior Level

Basic Understanding

A race condition occurs when multiple threads access shared data and at least one modifies it. To avoid race conditions, you need to ensure atomicity or isolate data.

What “atomicity” means in practice: an operation appears instantaneous to other threads — either it hasn’t started or it has already completed. An intermediate state is never observed. synchronized achieves this via locking, Atomic classes via CAS (Compare-And-Swap) CPU instructions.
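The CAS primitive behind the Atomic classes can be observed directly via compareAndSet, which succeeds only when the current value equals the expected one — a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // Succeeds: the current value is 10, so it is replaced with 11
        boolean first = value.compareAndSet(10, 11);

        // Fails: the current value is now 11, not the expected 10
        boolean second = value.compareAndSet(10, 12);

        System.out.println(first + " " + second + " " + value.get()); // true false 11
    }
}
```

This read-compare-swap cycle is exactly what incrementAndGet performs internally, retrying until the CAS succeeds.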

What “isolating” data means: each thread works with its own copy, and results are combined only after completion. This costs more memory but completely eliminates competition.

Strategy 1: synchronized

public class SafeCounter {
    private int count = 0;

    // Only one thread can execute this method at a time
    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}

Strategy 2: Atomic Classes

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    // Atomic operation — no race condition
    public void increment() {
        count.incrementAndGet();
    }

    public int getCount() {
        return count.get();
    }
}

Strategy 3: Immutable Objects

// Immutable object — cannot be changed, race condition impossible
public final class Config {
    private final String value;

    public Config(String value) {
        this.value = value;
    }

    public String getValue() {
        return value; // Always safe
    }
}

Strategy 4: ThreadLocal

// Each thread has its own copy — no shared state
ThreadLocal<SimpleDateFormat> dateFormat = ThreadLocal.withInitial(
    () -> new SimpleDateFormat("yyyy-MM-dd")
);

public String formatDate(Date date) {
    return dateFormat.get().format(date); // Safe
}

Comparing Approaches

| Method | Pros | Cons |
| --- | --- | --- |
| synchronized | Reliable, simple | Heavy under contention |
| Atomic | Fast (lock-free) | Only for simple operations |
| Immutability | Ideal for scaling | GC pressure |
| ThreadLocal | Zero contention | Leak risk in pools |
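The difference between strategies 1–2 and no strategy at all is easy to demonstrate: a plain `int` counter loses increments under contention, while an AtomicInteger never does. A minimal sketch (thread and iteration counts are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterRaceDemo {
    static int unsafeCount = 0;                                  // plain field: increments can be lost
    static final AtomicInteger safeCount = new AtomicInteger(); // CAS-based: no lost updates

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    unsafeCount++;               // read-modify-write, not atomic
                    safeCount.incrementAndGet(); // atomic
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        System.out.println("unsafe: " + unsafeCount + " (often less than 400000)");
        System.out.println("safe:   " + safeCount.get()); // always 400000
    }
}
```

The unsafe result varies from run to run — which is precisely why races are so hard to reproduce in tests.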

Middle Level

Synchronization Strategy (Pessimistic)

Locking

// synchronized — critical section
synchronized(lock) {
    // Only one thread — others BLOCKED
    counter++;
    total += counter;
}

// ReentrantLock — more options
ReentrantLock lock = new ReentrantLock();
lock.lock();
try {
    counter++;
} finally {
    lock.unlock(); // Always in finally!
}
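"More options" means, among other things, timed and interruptible acquisition. A sketch using tryLock (the 100 ms timeout is an arbitrary choice):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    // Unlike synchronized, tryLock can give up instead of blocking forever
    public boolean tryIncrement() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) { // wait at most 100 ms
            try {
                counter++;
                return true;
            } finally {
                lock.unlock(); // always in finally
            }
        }
        return false; // lock was busy — the caller decides what to do next
    }

    public int getCounter() {
        return counter;
    }
}
```

This also sidesteps one class of deadlocks: a thread that cannot get the lock backs off instead of waiting indefinitely.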

Atomic Variables (Optimistic)

AtomicInteger counter = new AtomicInteger(0);
final int MAX = 100; // example upper bound

// updateAndGet — atomic check-then-act
counter.updateAndGet(current -> {
    if (current < MAX) {
        return current + 1;
    }
    return current; // Don't increment
});

// computeIfPresent — atomic operation on a map
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
map.computeIfPresent("key", (k, v) -> v + 1);

Isolation Strategy (Optimistic/Design)

Immutable Objects

// Instead of mutating — create a new object
public class State {
    private final int count;
    private final String name;

    public State(int count, String name) {
        this.count = count;
        this.name = name;
    }

    // Copy-on-write
    public State withCount(int newCount) {
        return new State(newCount, this.name);
    }
}

// Usage:
State current = new State(0, "test");
current = current.withCount(1); // New object — no race condition

ThreadLocal

ThreadLocal<UserContext> context = new ThreadLocal<>();

// Important: always clean up in thread pools!
executor.submit(() -> {
    try {
        context.set(new UserContext("user123"));
        process();
    } finally {
        context.remove(); // MANDATORY!
    }
});

Concurrent Collections

// Instead of HashMap + synchronized:
ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

map.putIfAbsent("key", "value");  // Atomic
map.compute("key", (k, v) -> ...); // Atomic
map.merge("key", 1, Integer::sum); // Atomic

// Fine-grained locking (per bucket in Java 8+) — many threads can write simultaneously

Safe Publication Pattern

Race conditions often occur during object creation. Ways to safely publish:

// 1. Static initialization (guaranteed by ClassLoader)
public static final Config INSTANCE = new Config();

// 2. final fields
public class Safe {
    public final String value;
    public Safe(String v) { this.value = v; }
}

// 3. volatile field
private volatile Config config;
config = new Config();

// 4. synchronized
synchronized(lock) {
    config = new Config();
}
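The classic place where these techniques combine is lazy initialization. A sketch of double-checked locking, which is correct only because the field is volatile:

```java
public class LazyConfig {
    private static volatile LazyConfig instance; // volatile: safe publication of the reference

    private LazyConfig() {}

    public static LazyConfig getInstance() {
        LazyConfig local = instance;      // one volatile read on the fast path
        if (local == null) {
            synchronized (LazyConfig.class) {
                local = instance;
                if (local == null) {      // second check, now under the lock
                    instance = local = new LazyConfig();
                }
            }
        }
        return local;
    }
}
```

Without volatile, another thread could observe a non-null reference to a partially constructed object — the exact unsafe-publication race this section is about.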

Comparing Approaches for Highload

| Method | Pros | Cons | When |
| --- | --- | --- | --- |
| synchronized | Reliable, simple | Context switches, deadlock risk | Simple cases |
| Atomic/CAS | Very fast | Burns CPU under high contention | Counters, flags |
| Immutability | Scales well | GC pressure from copying | Functional style |
| ThreadLocal | No contention | Leaks in pools | Request context |

Senior Level

Under the Hood: How JVM Ensures Atomicity

synchronized → Monitor Enter/Exit

monitorenter:
  1. Attempt to acquire the monitor
  2. On contention — the thread goes to the EntryList (BLOCKED)
  3. On acquisition — locally cached values are invalidated, so reads see fresh data

monitorexit:
  1. Release the monitor
  2. All changes made inside the critical section are flushed to main memory

Atomic → CAS at CPU Level

lock cmpxchg [memory], new_value
; The LOCK prefix makes the compare-and-exchange atomic:
; on modern x86 the core locks the cache line holding the cell,
; so no other core can modify it mid-operation

Lock-free Patterns

import java.util.concurrent.atomic.AtomicReference;

// Lock-free stack (simplified Treiber stack)
class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>(null);

    public void push(T value) {
        Node<T> newNode = new Node<>(value);
        Node<T> oldHead;
        do {
            oldHead = head.get();
            newNode.next = oldHead;                          // link to the current top
        } while (!head.compareAndSet(oldHead, newNode));     // retry if head changed
    }
}
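For completeness, a sketch of the same structure with a pop operation as well — the full Treiber stack (the Node type is repeated so the block stands alone):

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber stack: both push and pop use the same CAS retry loop
class TreiberStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> newNode = new Node<>(value);
        Node<T> oldHead;
        do {
            oldHead = head.get();
            newNode.next = oldHead;
        } while (!head.compareAndSet(oldHead, newNode));
    }

    // pop mirrors push: read the head, try to swing it to head.next
    public T pop() {
        Node<T> oldHead;
        do {
            oldHead = head.get();
            if (oldHead == null) {
                return null; // stack is empty
            }
        } while (!head.compareAndSet(oldHead, oldHead.next));
        return oldHead.value;
    }
}
```

Note that a production implementation also has to deal with the ABA problem (e.g. via AtomicStampedReference); this sketch ignores it.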

ConcurrentHashMap Internals

Java 7: Segment-level locking (16 segments)
Java 8+: CAS + synchronized at bucket (node) level

get():       Nearly lock-free (volatile reads)
put():       CAS for new bucket, synchronized for collision
compute():   synchronized on specific bucket — not the whole map
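The practical takeaway: individual ConcurrentHashMap calls are atomic, but a sequence of them is not. Replace check-then-act pairs with the map's compound atomic methods — a minimal sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicMapOps {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        // Racy even on a ConcurrentHashMap: two threads can both read null
        // and both write 1 — each call is atomic, but the pair is not
        Integer v = counts.get("key");
        counts.put("key", v == null ? 1 : v + 1);

        // Atomic: merge synchronizes only on the bucket holding "key"
        counts.merge("key", 1, Integer::sum);

        System.out.println(counts.get("key")); // 2
    }
}
```

The same reasoning applies to putIfAbsent, compute, and computeIfAbsent — they exist precisely to eliminate these read-then-write windows.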

Diagnostics

FindBugs / SpotBugs

// SpotBugs will find:
public class RaceCondition {
    private int data = 0; // Not synchronized, accessed from multiple threads

    public void write() {
        data = 42; // Race condition!
    }

    public int read() {
        return data; // Race condition!
    }
}
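In this particular class every access is a single read or write of one field, so marking it volatile alone removes the data race — though a compound operation like data++ would still need synchronized or an Atomic class. A sketch of the fixed version:

```java
public class FixedRaceCondition {
    private volatile int data = 0; // volatile: every write is visible to all readers

    public void write() {
        data = 42; // a single volatile write — atomic and immediately visible
    }

    public int read() {
        return data; // always observes the latest completed write
    }
}
```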

Stress Testing on ARM/M1

Race conditions surface more readily on architectures with relaxed memory models:

x86: strongly ordered — races manifest less often
ARM: weakly ordered — races manifest more often

Testing on ARM (Apple M1/M2) gives a better chance of catching latent races.

ThreadSanitizer

# ThreadSanitizer itself targets C/C++ (and Go); for Java, the closest
# equivalent is OpenJDK's jcstress harness, which runs tiny concurrent
# tests many times and reports which outcomes were actually observed

Prevention Patterns

1. Confinement (Isolation)

// Data doesn't leave the method — no race condition
public int process() {
    int local = 0; // Local variable — only in this thread
    for (int i = 0; i < 100; i++) {
        local += i;
    }
    return local;
}

2. Immutability

// Immutable = thread-safe by definition
public record User(String name, int age) {}

3. Thread-safe Collections

// CopyOnWriteArrayList — for rare writes, frequent reads
List<String> list = new CopyOnWriteArrayList<>();
list.add("item"); // Copies entire array — expensive for frequent writes

// ConcurrentLinkedQueue — lock-free queue
Queue<String> queue = new ConcurrentLinkedQueue<>();
queue.offer("item"); // CAS — lock-free

Best Practices

  1. Avoid mutable shared state — the best approach
  2. Use Atomic for simple counters and flags
  3. Use ConcurrentHashMap instead of HashMap + synchronized
  4. Immutable objects — thread-safe by definition
  5. ThreadLocal — always clean up in finally
  6. Safe Publication — final, volatile, or synchronized on creation
  7. Test on multi-processor systems — ARM/M1 is more aggressive than x86
  8. Static Analysis — SpotBugs, Error Prone find typical races

When NOT to Apply These Strategies

  • No shared state — if threads don’t exchange data, race condition is impossible
  • Data is immutable — immutable objects are thread-safe by definition, no synchronization needed
  • Read-only operations — if all threads only read, race condition is impossible
  • One thread writes, others read after — if writing happens before reading starts (e.g., initialization before thread start), there’s no race
  • Tolerance for lost updates — for metrics and statistics, it’s sometimes acceptable to lose 1-2 out of a million updates (approximate counters)
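For the metrics case in the last bullet, java.util.concurrent.atomic.LongAdder is often a better fit than AtomicLong: no increments are lost, but sum() is only a snapshot that may lag concurrent writes. A sketch:

```java
import java.util.concurrent.atomic.LongAdder;

// High-throughput counting: LongAdder spreads increments across
// internal cells, so threads rarely contend on the same CAS
class RequestMetrics {
    private final LongAdder requests = new LongAdder();

    void onRequest() {
        requests.increment(); // cheap even under heavy contention
    }

    long total() {
        return requests.sum(); // snapshot; may lag in-flight increments
    }
}
```

The trade-off is that LongAdder offers no compareAndSet — it is a counter, not a general atomic variable.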

synchronized vs Atomic vs ConcurrentHashMap: What to Choose?

| Situation | Choice | Why |
| --- | --- | --- |
| Simple counter (increment) | AtomicLong | CAS is faster than locking, no monitor overhead |
| Multiple operations as one transaction | synchronized | A group of operations must execute atomically |
| Cache/map with concurrent access | ConcurrentHashMap | Per-bucket locking, higher parallelism |
| Request context (user session) | ThreadLocal | Full isolation, zero contention |
| Object created once, never changed | Immutable (final fields) | Thread-safe without synchronization, for free |

Interview Cheat Sheet

Must know:

  • 4 main strategies: synchronized, Atomic classes, Immutable Objects, ThreadLocal
  • synchronized = pessimistic locking (block on entry), Atomic = optimistic/CAS (try without locking)
  • ConcurrentHashMap uses CAS + synchronized at bucket level, not on the whole map
  • Safe Publication: static initialization, final fields, volatile, or synchronized
  • ThreadLocal — always clean up in finally, otherwise leaks in thread pools

Frequent follow-up questions:

  • When to choose Atomic over synchronized? — For simple operations (increment, set, compare-and-swap). For a group of operations as a transaction — synchronized
  • Why is CopyOnWriteArrayList expensive for frequent writes? — Each add copies the entire array, suitable only for rare writes/frequent reads
  • How does x86 differ from ARM in the context of races? — x86 is strongly ordered, ARM is weakly ordered — race conditions appear more often on ARM/M1

Red flags (DO NOT say):

  • “volatile solves all multithreading problems” — volatile is only for visibility, not for atomicity
  • “ConcurrentHashMap is fully lock-free” — get() is nearly lock-free, but put()/compute() use synchronized on bucket
  • “ThreadLocal can’t cause memory leaks” — in thread pools the ThreadLocal lives longer than the task, without remove() the copy remains forever
  • “Immutability is free” — creating many copies creates GC pressure

Related topics:

  • [[19. What conditions are necessary for deadlock to occur]]
  • [[21. What is race condition]]
  • [[23. What are Virtual Threads in Java 21]]
  • [[27. What is the difference between Thread and Runnable]]