Question 1 · Section 9

What is the difference between synchronized and volatile?

synchronized and volatile are two keywords in Java for working with multithreading. They solve different problems:

Junior Level

Basic Understanding

| Characteristic | volatile | synchronized |
| --- | --- | --- |
| What it does | Guarantees visibility (a write by one thread is immediately visible to all readers, not cached locally) | Guarantees visibility + atomicity (a compound operation executes entirely, without interference from other threads) |
| Where applied | Only to fields (variables) | To methods or code blocks |
| Locking | No | Yes (acquires the object's monitor) |
| Deadlock | Impossible | Possible |

volatile example

public class Worker {
    private volatile boolean running = true;

    public void stop() {
        running = false; // All threads will see this value immediately
    }

    public void doWork() {
        while (running) {
            // Working...
        }
    }
}

synchronized example

public class Counter {
    private int count = 0;

    public synchronized void increment() {
        count++; // Not atomic by itself, but the monitor guarantees only one thread runs this at a time
    }

    public synchronized int getCount() {
        return count;
    }
}

When to use which

  • volatile — for flags (stop flag, ready flag), when one thread writes and others read
  • synchronized — when you need to atomically read-modify-write (counters, compound operations)

When volatile is NOT suitable

  1. Compound operations (increment, check-then-act): volatile count++ is NOT atomic — use AtomicInteger
  2. Groups of variables: if you need to atomically update two fields — volatile won’t help, you need synchronized
  3. Dependent variables: if value B depends on A — volatile does not guarantee atomicity of the pair (A, B)
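The first point is easy to demonstrate. Below is a hypothetical demo class (not from the original text) that races several threads incrementing a volatile int and an AtomicInteger side by side: the volatile counter routinely loses updates, while the atomic one never does.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LostUpdateDemo {
    static volatile int volatileCount = 0;                        // visibility only
    static final AtomicInteger atomicCount = new AtomicInteger(); // visibility + atomicity

    // Runs `threads` threads, each incrementing both counters; returns the atomic total.
    static int run(int threads, int incrementsPerThread) throws InterruptedException {
        volatileCount = 0;
        atomicCount.set(0);
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    volatileCount++;               // read-modify-write: updates can be lost
                    atomicCount.incrementAndGet(); // CAS loop: never loses an update
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) t.join();
        return atomicCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int expected = 8 * 100_000;
        System.out.println("atomic   = " + run(8, 100_000) + " (always " + expected + ")");
        System.out.println("volatile = " + volatileCount + " (usually less than " + expected + ")");
    }
}
```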

Middle Level

Java Memory Model (JMM)

Both mechanisms work within the Java Memory Model (JMM), defined in JSR-133.

How volatile works

When writing to a volatile variable, the JVM inserts Memory Barriers:

Memory Barrier (memory fence) — a CPU instruction that prevents reordering of read/write operations around the barrier. Without the barrier, the processor may reorder instructions for optimization, leading to incorrect behavior in a multithreaded environment.

Types of barriers in the JVM:

  • LoadLoad: a read before the barrier completes before any read after it
  • StoreStore: a write before the barrier becomes visible before any write after it
  • LoadStore: a read before the barrier completes before any write after it
  • StoreLoad: a write before the barrier becomes visible before any read after it (the most expensive)

Where “Load” = read from memory, “Store” = write to memory.

  • StoreStore barrier (before the volatile write) — guarantees all regular writes before it become visible before the volatile write itself
  • StoreLoad barrier (after the volatile write) — guarantees subsequent reads see the latest data

// Without volatile — visibility problem
int x = 0;
boolean ready = false; // without volatile

// Thread 1:
x = 42;
ready = true;

// Thread 2:
if (ready) {
    System.out.println(x); // Might print 0! (x not yet visible)
}

// With volatile — visibility guarantee
int x = 0;
volatile boolean ready = false;

// Thread 1:
x = 42;
ready = true; // StoreStore barrier before this write — x is guaranteed to be written first

// Thread 2:
if (ready) {
    System.out.println(x); // Always prints 42!
}

Happens-Before Guarantee

A write to a volatile variable happens-before a subsequent read of that same variable. This means all changes made before the volatile write will be visible to the thread that reads it.
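The pseudo-code above can be turned into a runnable sketch. In this hypothetical Publication class (names are mine, not from the text), the reader spins on the volatile flag; the happens-before edge from the volatile write to the volatile read guarantees the plain payload write is visible too.

```java
public class Publication {
    static int payload = 0;                // plain field
    static volatile boolean ready = false; // volatile flag

    public static int readWhenReady() {
        while (!ready) { /* spin until the writer publishes */ }
        // The volatile read of ready happens-after the volatile write of ready,
        // so payload (written before the flag) is guaranteed visible here.
        return payload;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> System.out.println(readWhenReady())); // prints 42
        reader.start();
        payload = 42;  // ordinary write...
        ready = true;  // ...published by the volatile write
        reader.join();
    }
}
```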

How synchronized works

synchronized provides:

  1. Mutual Exclusion — only one thread executes the code at a time
  2. Visibility — on entering a synchronized block, the cache is invalidated; on exit, data is flushed to memory
  3. Atomicity — compound operations are executed as a single unit

// Problem: volatile does NOT ensure atomicity
volatile int counter = 0;
counter++; // NOT safe! This is 3 operations: read → modify → write

// Solution: synchronized ensures atomicity
synchronized(lock) {
    counter++; // Safe — only one thread executes
}

volatile vs AtomicBoolean: volatile only guarantees visibility. AtomicBoolean guarantees visibility + atomicity via CAS. For a simple flag (running = true), volatile is sufficient. For if (!flag.compareAndSet(false, true)), you need AtomicBoolean.
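The compareAndSet case can be shown concretely. This hypothetical OneTimeInit class (illustrative, not from the text) races many threads at an atomic check-then-act: exactly one wins, which a volatile boolean alone cannot guarantee.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class OneTimeInit {
    private static final AtomicBoolean initialized = new AtomicBoolean(false);

    // compareAndSet makes "check, then act" a single atomic step:
    // exactly one thread observes the false -> true transition.
    static boolean initOnce() {
        return initialized.compareAndSet(false, true);
    }

    // Races `threads` threads at initOnce(); returns how many "won".
    static int race(int threads) throws InterruptedException {
        AtomicInteger winners = new AtomicInteger();
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                if (initOnce()) winners.incrementAndGet();
            });
            pool[i].start();
        }
        for (Thread t : pool) t.join();
        return winners.get();
    }
}
```

With a plain volatile flag, `if (!flag) { flag = true; init(); }` is two separate steps, and several threads can pass the check before any of them writes.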

Double-Checked Locking Pattern

A classic example where volatile is critical:

public class Singleton {
    // MUST be volatile! Without it, a partially initialized object is possible
    private static volatile Singleton instance;

    public static Singleton getInstance() {
        if (instance == null) {                  // First check (no lock)
            synchronized (Singleton.class) {
                if (instance == null) {          // Second check (with lock)
                    instance = new Singleton();  // Object creation
                }
            }
        }
        return instance;
    }
}

Without volatile, the following scenario is possible:

  1. Thread A starts creating instance (allocates memory, but constructor not finished)
  2. JVM reorders: assigns the reference first, then calls the constructor
  3. Thread B sees instance != null and gets a partially initialized object
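Worth knowing as a follow-up: a commonly recommended alternative that sidesteps volatile entirely is the initialization-on-demand holder idiom (sketch below; class name is mine).

```java
public class HolderSingleton {
    private HolderSingleton() {}

    // The nested class is loaded only on the first getInstance() call,
    // and class initialization is guaranteed thread-safe by the JLS —
    // no volatile and no synchronized needed on the read path.
    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }
}
```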

Senior Level

Under the Hood: CPU-Level Implementation

volatile at the CPU level

On x86 architecture, a write to a volatile field is accompanied by an instruction with a lock prefix, which:

The lock prefix is an x86 instruction that locks the cache line for the duration of the operation. Other cores cannot read/write this cache line until the operation completes.

  • Locks the cache line for the duration of the write
  • Sends an invalidation signal to other cores (MESI protocol)
  • Prevents instruction reordering across the barrier

MESI protocol (Modified, Exclusive, Shared, Invalid) — a cache coherency protocol for multi-core CPUs. When a core writes data (Modified), it sends a signal to other cores — their copies become Invalid and must be re-read from RAM.

# Regular write (without volatile):
mov dword ptr [rax], 1

# Volatile write: x86's TSO memory model already provides the LoadLoad,
# StoreStore and LoadStore orderings, so only the StoreLoad barrier needs
# an instruction — HotSpot typically emits a locked no-op after the store:
mov dword ptr [rax], 1
lock add dword ptr [rsp], 0   # StoreLoad barrier (full fence)

synchronized at the bytecode level

synchronized(obj) { /* code */ }

Bytecode:

aload_1         // Load obj
dup
astore_2
monitorenter    // Acquire monitor
/* code */
aload_2
monitorexit     // Release monitor

Important: the compiler generates two monitorexit instructions — one for normal exit, one for exception handling (hidden finally).

Reordering and Memory Barriers

Processors and the JIT compiler reorder instructions for optimization. volatile imposes restrictions:

| Barrier type | Prevents |
| --- | --- |
| LoadLoad | a later read moving before an earlier read |
| StoreStore | a later write moving before an earlier write |
| LoadStore | a later write moving before an earlier read |
| StoreLoad | a later read moving before an earlier write (most expensive) |

  • volatile write = StoreStore (before) + StoreLoad (after)
  • volatile read = LoadLoad + LoadStore (after)

Performance and Highload

Cache Invalidation and False Sharing

public class Counters {
    public volatile int counter1; // May be in the same cache line
    public volatile int counter2; // Writing to counter1 invalidates counter2!
}

Solution — @Contended (sun.misc.Contended in Java 8, jdk.internal.vm.annotation.Contended since Java 9; outside JDK internals it requires -XX:-RestrictContended):

public class ContendedCounters {
    @Contended
    public volatile int counter1;

    @Contended
    public volatile int counter2; // Now in different cache lines
}

Adaptive Spinning

Modern JVMs don’t send a thread to BLOCKED state immediately. First, the thread performs spin-wait — an empty loop waiting for the monitor to be released. If spinning succeeds — we save on context switch. If not — we park the thread via the OS.
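The idea can be sketched at user level. This hypothetical SpinThenParkLock (a rough illustration only — real JVM monitors add biased/thin/fat lock states and adaptive spin heuristics) spins briefly with Thread.onSpinWait, then parks via LockSupport:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class SpinThenParkLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // Phase 1: short spin, hoping the holder releases quickly
        // (saves a context switch if it does).
        for (int i = 0; i < 1_000; i++) {
            if (locked.compareAndSet(false, true)) return;
            Thread.onSpinWait(); // CPU pause hint (Java 9+)
        }
        // Phase 2: give up spinning; park until the releaser unparks us.
        waiters.add(Thread.currentThread());
        while (!locked.compareAndSet(false, true)) {
            LockSupport.park(this); // may wake spuriously, hence the loop
        }
        waiters.remove(Thread.currentThread());
    }

    public void unlock() {
        locked.set(false);
        Thread next = waiters.peek();
        if (next != null) LockSupport.unpark(next);
    }

    // Helper for exercising the lock from several threads.
    static int hammer(int threads, int perThread) throws InterruptedException {
        SpinThenParkLock lock = new SpinThenParkLock();
        int[] count = {0};
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    lock.lock();
                    count[0]++; // plain increment, protected by the lock
                    lock.unlock();
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) t.join();
        return count[0];
    }
}
```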

Comparison Table (Advanced)

| Characteristic | volatile | synchronized |
| --- | --- | --- |
| Guarantee type | Visibility + ordering | Visibility + atomicity + exclusion |
| Level | Variable (field) | Method or code block |
| Locking | No (lock-free) | Yes (blocking) |
| CPU cache | Forces coherence for the variable via barriers (does not disable caching) | Flushes/refreshes locally cached state on entry/exit |
| Deadlock | Impossible | Possible |
| Overhead | Low (memory barrier) | Medium/high (depends on contention) |
| Reentrancy | N/A | Yes |

Diagnostics

# jstack will show BLOCKED threads for synchronized
jstack <pid> | grep "BLOCKED"

# For volatile issues — only behavioral analysis
# JCStress for stress testing

Best Practices

  1. volatile — for flags and publishing immutable objects
  2. synchronized — for compound operations (read-modify-write)
  3. Atomic* — for simple counters (CAS is faster than synchronized)
  4. Private lock — use private final Object lock instead of synchronized(this)
  5. volatile long/double — use volatile when you need atomic 64-bit reads/writes: without it, they may tear on 32-bit JVMs
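Point 4 in action — a minimal sketch (class name is mine): a private lock object means no external code can synchronize on your monitor and cause accidental contention or deadlock, unlike synchronized(this).

```java
public class Inventory {
    // Private lock: invisible to callers, so only this class controls locking.
    private final Object lock = new Object();
    private int items = 0;

    public void add(int n) {
        synchronized (lock) { items += n; }
    }

    public int count() {
        synchronized (lock) { return items; }
    }
}
```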

Interview Cheat Sheet

Must know:

  • volatile guarantees only visibility, synchronized guarantees visibility + atomicity + mutual exclusion
  • volatile applies only to fields, synchronized applies to methods and code blocks
  • volatile does NOT protect compound operations (count++ is not atomic even with volatile)
  • volatile uses memory barriers (StoreStore, StoreLoad), synchronized uses monitor acquisition
  • Deadlock is impossible with volatile, but possible with synchronized
  • On x86, a volatile write is followed by a lock-prefixed instruction (full fence) and cache line invalidation (MESI)
  • Double-Checked Locking requires volatile for correct Singleton publication
  • volatile is lock-free, with lower overhead than synchronized under low contention

Frequent follow-up questions:

  • Can you use volatile for a counter? — No, count++ = read-modify-write (3 operations), use AtomicInteger
  • What happens without volatile in Double-Checked Locking? — JVM may reorder: reference assigned before constructor finishes → partially initialized object
  • Which memory barriers does volatile insert? — Write: StoreStore + StoreLoad; Read: LoadLoad + LoadStore
  • When is volatile preferable to synchronized? — For simple flags (stop, ready), where one thread writes, others read

Red flags (do NOT say):

  • “Volatile makes operations atomic” — no, that’s AtomicInteger/CAS
  • “Synchronized and volatile are interchangeable” — no, they solve different problems
  • “Volatile protects a group of variables” — no, only a single variable
  • “Synchronized is for visibility, volatile is for atomicity” — exactly the opposite

Related topics:

  • [[2. What is happens-before relationship]] — volatile write happens-before read
  • [[3. What is visibility problem]] — volatile solves the visibility problem
  • [[8. What are Atomic classes]] — lock-free alternative for counters
  • [[9. What is CAS (Compare-And-Swap)]] — foundation of Atomic classes