What is Non-Repeatable Read?
Same query, same moment within the transaction — but the result is different.
🟢 Junior Level
Non-Repeatable Read is an anomaly where a repeated query of the same data within one transaction returns a different result because another transaction managed to modify and commit the data. This is a problem because your business logic may make a decision based on the first value, and then discover that the actual value has changed — and the report won’t reconcile.
Simple Example
```sql
-- Transaction A
SELECT price FROM products WHERE id = 1;  -- 100

-- Another transaction:
UPDATE products SET price = 150 WHERE id = 1; COMMIT;

-- Transaction A, same query
SELECT price FROM products WHERE id = 1;  -- 150!
```
At Which Level It Occurs
Occurs at the Read Committed level. At the Repeatable Read and Serializable levels, this problem is impossible.
Difference from Phantom Read
- Non-Repeatable Read: data changed in existing rows (price 100 → 150)
- Phantom Read: new rows appeared (there were 5 records, now there are 6)
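The distinction can be made concrete with a toy in-memory "table" (a sketch with invented names like `AnomalyDemo`, not a real DBMS): re-reading the same row yields a new value (non-repeatable read), while re-counting the rows reveals a new row (phantom).

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration (not a real DBMS): a reader that re-queries shared data
// mid-"transaction" observes both anomalies once a concurrent writer commits.
public class AnomalyDemo {
    record Product(int id, int price) {}

    private static final List<Product> table = new ArrayList<>();

    public static String run() {
        table.clear();
        table.add(new Product(1, 100));

        // Transaction A, read 1: price of product 1, and the row count
        int firstPrice = table.get(0).price();
        int firstCount = table.size();

        // A concurrent transaction commits an UPDATE and an INSERT
        table.set(0, new Product(1, 150));   // causes a non-repeatable read
        table.add(new Product(2, 200));      // causes a phantom read

        // Transaction A, read 2: same queries, different answers
        int secondPrice = table.get(0).price();
        int secondCount = table.size();

        return "price " + firstPrice + "->" + secondPrice
             + ", count " + firstCount + "->" + secondCount;
    }

    public static void main(String[] args) {
        System.out.println(run()); // price 100->150, count 1->2
    }
}
```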
🟡 Middle Level
Scenario
| Time | Transaction A (Read Committed) | Transaction B (Update) |
|---|---|---|
| T1 | BEGIN; | BEGIN; |
| T2 | SELECT price FROM products WHERE id = 1; → 100 | |
| T3 | | UPDATE products SET price = 150 WHERE id = 1; COMMIT; |
| T4 | SELECT price FROM products WHERE id = 1; → 150 | |
| T5 | Logic broken — price changed mid-transaction | |
Why This Happens
Root cause: at the Read Committed level, PostgreSQL creates a new snapshot for EACH statement, not one for the entire transaction. By time T4, Transaction B has already committed, and the fresh per-statement snapshot sees its changes.
Consequences
- Reporting: Balance at the start of the report doesn’t reconcile with the balance when calculating tax
- Invariants: Check `price < limit` passed, but by the time of the purchase the price is already higher
- Multi-step calculations: Intermediate results are based on different data versions
How to Prevent
- Raise the level to Repeatable Read: Snapshot is taken once for the entire transaction
- SELECT … FOR UPDATE: Explicitly lock the row so it cannot be changed until the end of the transaction
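The FOR UPDATE idea can be sketched as a toy row lock (illustrative names like `RowLockDemo`; a real DBMS makes the writer wait rather than rejecting it):

```java
// Toy sketch of SELECT ... FOR UPDATE semantics (not a real lock manager):
// a transaction that locks a row prevents concurrent updates, so re-reading
// the row inside that transaction always returns the same value.
public class RowLockDemo {
    public static int price = 100;
    public static boolean locked = false;

    // SELECT ... FOR UPDATE: read the value and take the row lock
    public static int selectForUpdate() {
        locked = true;
        return price;
    }

    // A concurrent UPDATE: succeeds only if the row is not locked
    public static boolean tryUpdate(int newPrice) {
        if (locked) return false;  // a real DBMS would block the writer here
        price = newPrice;
        return true;
    }

    public static void commit() { locked = false; }

    public static void main(String[] args) {
        int first = selectForUpdate();     // 100, row is now locked
        boolean updated = tryUpdate(150);  // rejected: row is locked
        int second = price;                // still 100, the read is repeatable
        commit();                          // lock released at commit
        System.out.println(first + " " + second + " updated=" + updated);
    }
}
```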
Difference from Phantom Read
| Non-Repeatable Read | Phantom Read |
|---|---|
| Modification of existing rows | Appearance/disappearance of rows |
| UPDATE from another transaction | INSERT/DELETE from another transaction |
| Same rows, different values | Different set of rows |
In practice, the difference is critical: Non-repeatable read breaks read-modify-write (read price, calculated tax, price changed). Phantom read breaks aggregates (counted 100 orders, processed 100, but there are now 105).
🔴 Senior Level
MVCC Snapshot Timing — The Root Cause
Read Committed: Per-Statement Snapshots
In PostgreSQL under Read Committed:
```
BEGIN;
SELECT ...;   -- Snapshot A created (sees the state committed as of this statement)
              -- other transaction commits here
SELECT ...;   -- Snapshot B created (sees the other tx's committed changes)
COMMIT;
```
Each statement gets a fresh snapshot. This is by design — it’s the trade-off for maximum concurrency.
Repeatable Read: Transaction-Level Snapshot
```
BEGIN;
SELECT ...;   -- Snapshot A created and FROZEN for the entire transaction
              -- other transaction commits here (NOT visible)
SELECT ...;   -- still uses Snapshot A (same result as the first SELECT)
COMMIT;
```
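The two timelines can be condensed into a toy MVCC model (a sketch of snapshot timing with invented names like `SnapshotTiming`, not PostgreSQL internals): a snapshot is just a timestamp, and the only difference between the two levels is when that timestamp is taken.

```java
import java.util.ArrayList;
import java.util.List;

// Toy MVCC sketch: the store keeps every committed version of a row together
// with the "time" it was committed. A snapshot is a timestamp; a read returns
// the latest version committed at or before that timestamp.
public class SnapshotTiming {
    record Version(int committedAt, int value) {}

    private static final List<Version> priceHistory = new ArrayList<>();
    private static int clock = 0;

    public static void reset() { priceHistory.clear(); clock = 0; }

    public static void commit(int value) { priceHistory.add(new Version(++clock, value)); }

    static int readAsOf(int snapshot) {
        int result = -1;
        for (Version v : priceHistory)
            if (v.committedAt() <= snapshot) result = v.value();
        return result;
    }

    // Read Committed: a fresh snapshot (current clock) before EACH statement
    public static int[] readCommittedTwice(Runnable concurrentCommit) {
        int first = readAsOf(clock);
        concurrentCommit.run();          // another tx commits in between
        int second = readAsOf(clock);    // new snapshot, sees the commit
        return new int[]{first, second};
    }

    // Repeatable Read: one snapshot taken at the first read, then frozen
    public static int[] repeatableReadTwice(Runnable concurrentCommit) {
        int frozen = clock;              // snapshot for the whole transaction
        int first = readAsOf(frozen);
        concurrentCommit.run();
        int second = readAsOf(frozen);   // same snapshot, same result
        return new int[]{first, second};
    }

    public static void main(String[] args) {
        reset();
        commit(100);
        int[] rc = readCommittedTwice(() -> commit(150));
        System.out.println(rc[0] + " then " + rc[1]);   // 100 then 150

        reset();
        commit(100);
        int[] rr = repeatableReadTwice(() -> commit(150));
        System.out.println(rr[0] + " then " + rr[1]);   // 100 then 100
    }
}
```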
Business Impact Analysis
Financial Systems Example
```java
// Non-Repeatable Read can cause inconsistent calculations
@Transactional // database default isolation (Read Committed in PostgreSQL)
public Statement generateStatement(Long accountId) {
    BigDecimal balance = repo.getBalance(accountId);    // Read 1: 1000
    // Concurrent deposit commits: balance is now 1500
    BigDecimal fee = calculateFee(balance);             // uses 1000
    BigDecimal interest = calculateInterest(balance);   // uses 1000
    // But the total_assets query runs on a fresh snapshot
    BigDecimal total = repo.getTotalAssets(accountId);  // Read 2: 1500
    // Statement shows: balance=1000, total=1500 — inconsistent!
    return new Statement(balance, fee, interest, total); // illustrative constructor
}
```
Solution Approaches
Approach 1: Repeatable Read
```java
@Transactional(isolation = Isolation.REPEATABLE_READ)
public Statement generateStatement(Long accountId) {
    // All reads see the same snapshot
}
```
Approach 2: Pessimistic Locking
```java
// JPQL has no FOR UPDATE syntax; use a pessimistic lock hint instead,
// and Spring Data JPA will emit SELECT ... FOR UPDATE for PostgreSQL
@Lock(LockModeType.PESSIMISTIC_WRITE)
@Query("SELECT a FROM Account a WHERE a.id = :id")
Account findByIdLocked(@Param("id") Long id);

@Transactional
public Statement generateStatement(Long accountId) {
    Account account = repo.findByIdLocked(accountId);
    // Locked — no one can change the row until we commit
}
```
Approach 3: Application-Level Consistency
```java
@Transactional
public Statement generateStatement(Long accountId) {
    // Read all needed data at once into local variables
    var snapshot = repo.getAccountSnapshot(accountId);
    // Use only local variables, no more DB reads
}
```
PostgreSQL-Specific Behavior
UPDATE Visibility Rules
Under Read Committed, PostgreSQL has special behavior for UPDATE/DELETE:
```sql
-- Transaction A (Read Committed)
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
```
If another transaction modified and committed the same row between A’s SELECT and UPDATE:
- PostgreSQL re-evaluates the WHERE clause against the latest committed version
- If the row still matches, the UPDATE proceeds on the new version
- This is called “EvalPlanQual” rechecking
This prevents lost updates under Read Committed, but only for direct UPDATEs, not read-modify-write patterns in application code.
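The contrast can be sketched in application terms (a toy model with invented names like `LostUpdateDemo`, not the real executor): a direct UPDATE recomputes from the value current at write time, while read-modify-write in application code writes back a result computed from a stale read.

```java
// Toy contrast (not the real EvalPlanQual mechanism): a direct
// "UPDATE ... SET balance = balance - 10" recomputes from the latest committed
// value at write time, while an application-level read-modify-write blindly
// writes back a result computed from a stale read, losing the concurrent update.
public class LostUpdateDemo {
    public static int balance = 1000;

    // Direct UPDATE: the expression is evaluated against the current row version
    public static void directUpdate(int delta) {
        balance = balance + delta;       // re-reads balance at write time
    }

    // Application-level read-modify-write with a stale value captured earlier
    public static void staleWriteBack(int staleBalance, int delta) {
        balance = staleBalance + delta;  // overwrites the concurrent change
    }

    public static void main(String[] args) {
        // Case 1: direct UPDATE after a concurrent deposit of 500
        balance = 1000;
        balance += 500;                  // concurrent tx commits a deposit
        directUpdate(-10);               // recomputed as 1500 - 10
        System.out.println(balance);     // 1490, deposit preserved

        // Case 2: read-modify-write with a value read before the deposit
        balance = 1000;
        int read = balance;              // app reads 1000
        balance += 500;                  // concurrent tx commits a deposit
        staleWriteBack(read, -10);       // writes back 1000 - 10
        System.out.println(balance);     // 990, deposit lost!
    }
}
```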
Monitoring Non-Repeatable Read Issues
```sql
-- Detect long-running transactions on Read Committed
SELECT pid, now() - xact_start AS duration, state, query
FROM pg_stat_activity
WHERE state != 'idle'
  AND xact_start < now() - interval '30 seconds'
ORDER BY xact_start;
```
```sql
-- Track transaction duration distribution
SELECT
  percentile_cont(0.5)  WITHIN GROUP (ORDER BY duration) AS p50,
  percentile_cont(0.95) WITHIN GROUP (ORDER BY duration) AS p95,
  percentile_cont(0.99) WITHIN GROUP (ORDER BY duration) AS p99
FROM (
  SELECT extract(epoch FROM (now() - xact_start)) AS duration
  FROM pg_stat_activity
  WHERE state != 'idle'
) sub;
```
When Non-Repeatable Read Is Acceptable
- Eventual consistency systems: Where slight staleness within a transaction is acceptable
- Status checks: Reading a flag that might change (e.g., `is_active`)
- Caching scenarios: Where you explicitly want the latest data
When It’s NOT Acceptable
- Multi-step financial calculations
- Report generation requiring self-consistency
- Invariant checking before critical operations
- Read-modify-write without proper locking
🎯 Interview Cheat Sheet
Must know:
- Non-Repeatable Read — re-SELECT of the same rows returns different values (UPDATE from another transaction)
- Occurs on Read Committed because each SELECT gets a new snapshot
- Prevented on Repeatable Read (one snapshot per transaction) and Serializable
- Difference from Phantom Read: NRR = existing rows changed, Phantom = new rows appeared
- In PostgreSQL: per-statement snapshot on RC vs per-transaction on RR
- Solutions: RR, SELECT FOR UPDATE, application-level consistency (read everything into local variables)
Common follow-up questions:
- How to prevent NRR without raising isolation level? — SELECT … FOR UPDATE for specific rows
- Why does PostgreSQL create a new snapshot on RC? — Maximum concurrency, each statement sees the latest data
- What is EvalPlanQual? — PG mechanism: on concurrent UPDATE, re-reads the row and checks the WHERE clause
Red flags (DO NOT say):
- “NRR and Phantom Read are the same thing” — NRR = UPDATE of existing, Phantom = INSERT of new
- “On RC, two SELECTs return the same result” — on RC, each SELECT sees its own snapshot
- “FOR UPDATE is only for writing” — FOR UPDATE also prevents NRR
Related topics:
- [[4. What is Read Committed]]
- [[5. What is Repeatable Read]]
- [[9. What is Phantom Read]]
- [[10. What is Lost Update]]