Why use spin_lock_irqsave() in process context when IRQs also take the lock?
A kernel-focused, precise answer with the exact condition, worked examples, and debugging tips.
- Overview
- The reentrancy problem
- Deadlock sequence
- Why spin_lock() isn’t enough
- The solution
- Exact condition — formal rule
- Full correct example
- Lock variants and when to use them
- Debugging tips
- Detailed recap
Overview
In Linux kernel synchronization, spin_lock_irqsave() is required when the same spinlock is shared between process context and interrupt context. It ensures the CPU’s interrupts are disabled while the lock is held, preventing the interrupt handler from re-entering the same critical section.
The reentrancy problem
Process context (system calls, workqueues, kernel threads) runs with interrupts enabled and can be interrupted at any time. If an interrupt handler tries to take the same spinlock that the interrupted code already holds, the CPU deadlocks: the handler spins forever waiting for a lock that can only be released by the code it interrupted, which can never run again.
Example: Deadlock due to interrupt reentry
```c
spinlock_t my_lock;
int shared_counter;

/* IRQ context: runs whenever the interrupt fires */
irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	spin_lock(&my_lock);
	shared_counter++;
	spin_unlock(&my_lock);
	return IRQ_HANDLED;
}

/* Process context: interrupts are still enabled here */
void work_handler(struct work_struct *work)
{
	spin_lock(&my_lock);
	shared_counter++;
	spin_unlock(&my_lock);
}
```
Deadlock sequence: work_handler() takes my_lock, then gets interrupted by the IRQ on the same CPU. my_irq_handler() tries to lock the same spinlock and spins forever — the system hangs.
Why spin_lock() isn’t enough
spin_lock() just busy-waits until the lock becomes free. It does not disable interrupts, so a hardware IRQ can still interrupt process context on the same CPU and try to acquire the lock the interrupted code already holds — a self-deadlock the CPU can never escape.
The solution — spin_lock_irqsave()
spin_lock_irqsave() does two things atomically:
- Saves the current interrupt enable/disable state.
- Disables local interrupts, then takes the lock.
This ensures no IRQ can interrupt this CPU while the lock is held. After finishing the critical section, spin_unlock_irqrestore() releases the lock and restores the original interrupt state.
Use spin_lock_irqsave() in process context if the same lock is also used inside a hardware IRQ handler.
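The pattern described above, using the my_lock and shared_counter names from the earlier example, looks like this in process context:

```c
unsigned long flags;

spin_lock_irqsave(&my_lock, flags);      /* saves IRQ state, disables local IRQs, takes lock */
shared_counter++;                        /* safe: the IRQ handler cannot run on this CPU now */
spin_unlock_irqrestore(&my_lock, flags); /* releases the lock, restores the saved IRQ state */
```

Note that flags must be a local unsigned long passed by value; the save/restore pair must run on the same CPU, so the critical section must not sleep or migrate.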
Exact condition — formal rule
Rule: Use spin_lock_irqsave() in process context if and only if the same spinlock can be acquired in a hardware interrupt handler on the same CPU.
```c
/* Decision rule (pseudocode) */
if (lock_acquired_in_irq_handler)
	spin_lock_irqsave(&lock, flags);
else
	spin_lock(&lock);
```
- For locks shared with tasklets or softirqs, use spin_lock_bh().
- If interrupts are already disabled by design, spin_lock() suffices, but document it clearly.
Full correct example
```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/spinlock.h>

static spinlock_t my_lock;
static struct work_struct my_work;
static int shared_counter;

/* Would be registered with request_irq() for a real device. */
static irqreturn_t my_irq(int irq, void *dev_id)
{
	/* IRQ context — local interrupts already disabled */
	spin_lock(&my_lock);
	shared_counter++;
	spin_unlock(&my_lock);
	return IRQ_HANDLED;
}

static void my_work_func(struct work_struct *work)
{
	unsigned long flags;

	/* Process context — disable local IRQs while holding the lock */
	spin_lock_irqsave(&my_lock, flags);
	shared_counter++;
	spin_unlock_irqrestore(&my_lock, flags);
}

static int __init demo_init(void)
{
	spin_lock_init(&my_lock);
	INIT_WORK(&my_work, my_work_func);
	schedule_work(&my_work);
	pr_info("spin_lock_irqsave demo loaded\n");
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&my_work);
	pr_info("spin_lock_irqsave demo unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```
Lock variants and when to use them
| Variant | Disables | Use when... |
|---|---|---|
| spin_lock() | Nothing | Only process context (no IRQ or softirq) |
| spin_lock_bh() | Bottom halves / softirqs | Shared with tasklets or softirqs |
| spin_lock_irqsave() | Local IRQs (and saves flags) | Shared with hard IRQ handlers |
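For the spin_lock_bh() row, here is a sketch of a lock shared between process context and a tasklet (the names are hypothetical and the modern tasklet_setup()-style callback signature is assumed):

```c
static DEFINE_SPINLOCK(bh_lock);
static int bh_counter;

/* Softirq context: bottom halves are already disabled on this CPU */
static void my_tasklet_func(struct tasklet_struct *t)
{
	spin_lock(&bh_lock);
	bh_counter++;
	spin_unlock(&bh_lock);
}

/* Process context: spin_lock_bh() disables local bottom halves, not hard IRQs */
static void process_side(void)
{
	spin_lock_bh(&bh_lock);
	bh_counter++;
	spin_unlock_bh(&bh_lock);
}
```

Since the tasklet cannot run on this CPU while bottom halves are disabled, the cheaper spin_lock_bh() is sufficient here; hard IRQs remain enabled.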
Debugging tips
- If you forget irqsave where needed, you’ll see "soft lockup" or "BUG: spinlock recursion" messages in dmesg.
- Enable CONFIG_DEBUG_SPINLOCK and CONFIG_LOCKDEP to detect misuse during development.
- Use KASAN or kmemleak to detect spinlock corruption or use-after-free errors.
- Document which contexts use each lock to avoid confusion and prevent future regressions.
Detailed Recap
The necessity of spin_lock_irqsave() in process context depends on the CPU interrupt model and spinlock ownership overlap. Here’s the detailed reasoning summarized:
- Process context runs with interrupts enabled and can be interrupted at any time.
- Interrupt handlers run with interrupts disabled on their CPU and often share data with process context (e.g., workqueues, kthreads, I/O paths).
- If both contexts access the same protected data using the same spinlock, reentrancy from an IRQ while process context holds the lock causes deadlock.
- spin_lock_irqsave() prevents this by disabling interrupts locally while the lock is held, guaranteeing atomicity across both contexts.
- The use of irqsave doesn’t make locking "faster" — it makes it safe by ensuring the CPU can’t enter the interrupt handler that would attempt to re-acquire the same lock.
- When interrupts are disabled, only this CPU is prevented from handling its IRQs — other CPUs remain unaffected, preserving SMP responsiveness.
- Always use spin_lock_irqsave() in process context if:
  - the same spinlock is acquired in a hard IRQ handler, or
  - the same lock is shared between process context and an interrupt source registered via request_irq().
- Never sleep, allocate memory with GFP_KERNEL, or hold the lock for a long time while IRQs are disabled.
- Prefer spin_lock_bh() if the lock is shared only with softirqs or tasklets.
- Use spin_lock() only for pure process-context synchronization where no asynchronous interrupts will compete for the same lock.
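As an illustration of the no-sleep rule above: any allocation inside the locked region must use GFP_ATOMIC, which never sleeps. In this sketch, my_item and my_list are hypothetical names:

```c
unsigned long flags;
struct my_item *item;

spin_lock_irqsave(&my_lock, flags);
/* GFP_KERNEL may sleep and is forbidden here; GFP_ATOMIC never sleeps */
item = kmalloc(sizeof(*item), GFP_ATOMIC);
if (item)
	list_add(&item->node, &my_list);
spin_unlock_irqrestore(&my_lock, flags);
```

GFP_ATOMIC draws from emergency reserves and can fail under pressure, so the failure path must be handled; an often better design is to allocate with GFP_KERNEL before taking the lock.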
In short: spin_lock_irqsave() ensures atomicity between process and interrupt contexts on the same CPU. It prevents a deadlock caused by an interrupt handler trying to acquire a lock already held by the interrupted code.