mm/mmu_notifiers: ensure range_end() is paired with range_start()
[ Upstream commit c2655835fd8cabdfe7dab737253de3ffb88da126 ]
If one or more notifiers fails .invalidate_range_start(), invoke
.invalidate_range_end() for "all" notifiers. If there are multiple
notifiers, those that did not fail are expecting _start() and _end() to
be paired, e.g. KVM's mmu_notifier_count would become imbalanced.
Disallow notifiers that can fail _start() from implementing _end() so
that it's unnecessary to track either which notifiers rejected _start(),
or which had already succeeded prior to a failed _start().
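As a rough sketch of the resulting contract (illustrative only, not code
from this patch; everything except the mmu_notifier types and
mmu_notifier_range_blockable() is a made-up name), a notifier whose
_start() can fail simply leaves _end() unimplemented:

    #include <linux/mmu_notifier.h>

    /* Illustrative notifier honoring the new rule: example_start() can
     * fail with -EAGAIN, so the ops deliberately provide no
     * invalidate_range_end() callback.
     */
    static int example_start(struct mmu_notifier *subscription,
                             const struct mmu_notifier_range *range)
    {
            /* Can't sleep in a non-blockable invalidation; tell the
             * caller to bail out instead.
             */
            if (!mmu_notifier_range_blockable(range))
                    return -EAGAIN;

            /* ... invalidation work that may sleep ... */
            return 0;
    }

    static const struct mmu_notifier_ops example_ops = {
            .invalidate_range_start = example_start,
            /* .invalidate_range_end left NULL: a failing _start() may
             * not pair with an _end().
             */
    };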
Note, the existing behavior of calling _start() on all notifiers even
after a previous notifier failed _start() was an unintended "feature".
Make it canon now that the behavior is depended on for correctness.
As of today, the bug is likely benign:
1. The only caller of the non-blocking notifier is OOM kill.
2. The only notifiers that can fail _start() are the i915 and Nouveau
drivers.
3. The only notifiers that utilize _end() are the SGI UV GRU driver
and KVM.
4. The GRU driver will never coincide with the i915/Nouveau drivers.
5. An imbalanced kvm->mmu_notifier_count only causes a soft lockup in the
_guest_, and the guest is already doomed due to being an OOM victim (see
the pairing sketch after this list).
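For concreteness, here is a minimal sketch of the counting scheme item 5
alludes to, loosely modeled on kvm->mmu_notifier_count but not KVM's
actual code; the sketch_vm type and all names are invented:

    #include <linux/mmu_notifier.h>

    /* Sketch only: a notifier that counts in-progress invalidations. */
    struct sketch_vm {
            struct mmu_notifier mn;
            long mmu_notifier_count;        /* >0 blocks page-fault handling */
    };

    static int sketch_start(struct mmu_notifier *mn,
                            const struct mmu_notifier_range *range)
    {
            struct sketch_vm *vm = container_of(mn, struct sketch_vm, mn);

            vm->mmu_notifier_count++;       /* faults stall until _end() */
            return 0;
    }

    static void sketch_end(struct mmu_notifier *mn,
                           const struct mmu_notifier_range *range)
    {
            struct sketch_vm *vm = container_of(mn, struct sketch_vm, mn);

            /* If this is never called after a successful _start(), the
             * count stays elevated and faults stall forever.
             */
            vm->mmu_notifier_count--;
    }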
Fix the bug now to play nice with future usage, e.g. KVM has a
potential use case for blocking memslot updates while an invalidation
is in progress, and failure to unblock would leave those updates
blocked indefinitely and hang.
Found by inspection. Verified by adding a second notifier in KVM that
periodically returns -EAGAIN on non-blockable ranges, triggering OOM,
and observing that KVM exits with an elevated notifier count.
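A debug notifier along those lines might look roughly like the
following; this is a hypothetical reconstruction, not the actual test
patch:

    #include <linux/mmu_notifier.h>

    /* Hypothetical test notifier: periodically fail _start() on
     * non-blockable (i.e. OOM-reaper) ranges to exercise the error path.
     */
    static int debug_fail_start(struct mmu_notifier *subscription,
                                const struct mmu_notifier_range *range)
    {
            static unsigned long calls;    /* racy, fine for a debug hack */

            if (!mmu_notifier_range_blockable(range) && (++calls % 8) == 0)
                    return -EAGAIN;
            return 0;
    }

    static const struct mmu_notifier_ops debug_fail_ops = {
            .invalidate_range_start = debug_fail_start,
            /* No .invalidate_range_end, per the rule this patch enforces. */
    };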
Link: https://lkml.kernel.org/r/20210311180057.1582638-1-seanjc@google.com
Fixes: 93065ac753 ("mm, oom: distinguish blockable mode for mmu notifiers")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ben Gardon <bgardon@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dimitri Sivanich <dimitri.sivanich@hpe.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 42aa210795
commit de2e6b4e32
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -169,11 +169,11 @@ struct mmu_notifier_ops {
	 * the last refcount is dropped.
	 *
	 * If blockable argument is set to false then the callback cannot
-	 * sleep and has to return with -EAGAIN. 0 should be returned
-	 * otherwise. Please note that if invalidate_range_start approves
-	 * a non-blocking behavior then the same applies to
-	 * invalidate_range_end.
-	 *
+	 * sleep and has to return with -EAGAIN if sleeping would be required.
+	 * 0 should be returned otherwise. Please note that notifiers that can
+	 * fail invalidate_range_start are not allowed to implement
+	 * invalidate_range_end, as there is no mechanism for informing the
+	 * notifier that its start failed.
	 */
	int (*invalidate_range_start)(struct mmu_notifier *subscription,
				      const struct mmu_notifier_range *range);
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -501,10 +501,33 @@ static int mn_hlist_invalidate_range_start(
						"");
				WARN_ON(mmu_notifier_range_blockable(range) ||
					_ret != -EAGAIN);
+				/*
+				 * We call all the notifiers on any EAGAIN,
+				 * there is no way for a notifier to know if
+				 * its start method failed, thus a start that
+				 * does EAGAIN can't also do end.
+				 */
+				WARN_ON(ops->invalidate_range_end);
				ret = _ret;
			}
		}
	}
+
+	if (ret) {
+		/*
+		 * Must be non-blocking to get here.  If there are multiple
+		 * notifiers and one or more failed start, any that succeeded
+		 * start are expecting their end to be called.  Do so now.
+		 */
+		hlist_for_each_entry_rcu(subscription, &subscriptions->list,
+					 hlist, srcu_read_lock_held(&srcu)) {
+			if (!subscription->ops->invalidate_range_end)
+				continue;
+
+			subscription->ops->invalidate_range_end(subscription,
+								range);
+		}
+	}
	srcu_read_unlock(&srcu, id);

	return ret;
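The two hunks enforce one invariant from opposite ends: the new
WARN_ON in the failure path checks that a notifier whose _start()
failed has no _end() callback, and the new cleanup loop relies on
exactly that. Every notifier that implements _end() is, by
construction, one that cannot fail _start(), and since _start() is
invoked on all notifiers even after an earlier failure, each of them
saw a successful _start() and is safe to pair with an _end() call.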