This patch modifies all relevant places where we statically know
that both interrupts and preemption should be enabled, to
unconditionally pre-fault the stack.
These include places in code before:
- WITH_LOCK(preempt_lock) block
- WITH_LOCK(rcu_read_lock) block
- sched::preempt_disable() call
- WITH_LOCK(irq_lock) block in one case
In general, these are the places that rely on the assumption that,
most of the time, preemption and interrupts are enabled. The
functions/methods below are called directly or indirectly by an
application, and no other kernel code above them in the application's
call stack disables interrupts or preemption.
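For illustration, here is a minimal sketch of the intended pattern
(the pre_fault_stack() helper, its inline-asm form, and the 4K page
size are assumptions for this example, not necessarily the literal
code in the patch; WITH_LOCK and preempt_lock are OSv kernel
internals):

    // Touch memory one page below the current frame so that any stack
    // page fault happens now, while interrupts and preemption are still
    // enabled, rather than inside the WITH_LOCK(preempt_lock) block.
    static inline void pre_fault_stack()
    {
        char tmp;
        // Inline asm keeps the compiler from optimizing the access away
        // or treating the out-of-frame write as undefined behavior.
        asm volatile("movb $0, -4096(%0)" : : "r"(&tmp) : "memory");
    }

    void fast_path_example()
    {
        pre_fault_stack();       // preemption and interrupts still enabled
        WITH_LOCK(preempt_lock) {
            // From here on, a lazily-populated stack page must not fault,
            // because handling the fault may require sleeping.
        }
    }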
One good example is the memory allocation code in mempool.cc,
which disables preemption in quite a few places. Many of those
use the WITH_LOCK/DROP_LOCK(preempt_lock) combination, which implies
they are intended to be called with preemption enabled; otherwise,
the code in DROP_LOCK would not work. Also, all of those end up
calling some form of the thread::wait() method, which asserts that
both interrupts and preemption are enabled.
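Schematically, that pattern looks like the sketch below
(try_fast_alloc() and slow_alloc() are made-up names standing in for
the per-CPU fast path and the blocking slow path):

    static void* alloc_sketch()
    {
        pre_fault_stack();                // safe: preemption enabled here
        WITH_LOCK(preempt_lock) {
            void* obj = try_fast_alloc(); // per-CPU pool, never sleeps
            if (!obj) {
                DROP_LOCK(preempt_lock) { // re-enable preemption so that
                    obj = slow_alloc();   // this path may sleep, ending in
                }                         // thread::wait()
            }
            return obj;
        }
    }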
To validate this reasoning, we assert the relevant invariants before
we pre-fault the stack.
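A hedged sketch of such a check, assuming OSv's sched::preemptable()
and arch::irq_enabled() predicates (the same ones thread::wait()
asserts):

    static inline void pre_fault_stack_with_invariants()
    {
        // Validate the static reasoning at runtime: pre-faulting is only
        // legal while both preemption and interrupts are enabled.
        assert(sched::preemptable());
        assert(arch::irq_enabled());
        char tmp;
        asm volatile("movb $0, -4096(%0)" : : "r"(&tmp) : "memory");
    }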
Signed-off-by: Waldemar Kozaczuk <[email protected]>