Mirror of https://github.com/AuxXxilium/linux_dsm_epyc7002.git (synced 2024-11-24 03:00:54 +07:00)
epyc7002
13 Commits

Maarten Lankhorst | 0c500d6a5c
dma-buf/dma-resv: Respect num_fences when initializing the shared fence list.
commit bf8975837dac156c33a4d15d46602700998cb6dd upstream.
We hardcode the maximum number of shared fences to 4 instead of respecting num_fences. Use a minimum of 4, but allocate more if num_fences is higher. This seems to have been an oversight when the API was first implemented.
Fixes:
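A hedged sketch of the sizing rule the fix describes (identifiers follow drivers/dma-buf/dma-resv.c, but the branch structure is simplified, not the verbatim upstream diff):

```c
/* Sizing the shared-fence list: honor num_fences, keep 4 as the floor. */
if (old && old->shared_max) {
	/* growing an existing list */
	max = max(old->shared_count + num_fences, old->shared_max * 2);
} else {
	/* fresh list: at least 4 slots, more if the caller asked for more */
	max = max(num_fences, 4u);
}
```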

Daniel Vetter | 7dd1b884f7
dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
GPU drivers need this in their shrinkers, to be able to throw out mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers, but that loop is resolved by trylocking in shrinkers. So the full hierarchy is now (ignoring some of the other branches we already have primed):

mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write

I hope that's not inconsistent with anything mm or fs does; adding relevant people.

Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: Dave Chinner <david@fromorbit.com>
Cc: Qian Cai <cai@lca.pw>
Cc: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: linux-mm@kvack.org
Cc: linux-rdma@vger.kernel.org
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200728135839.1035515-1-daniel.vetter@ffwll.ch
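A hedged sketch of where the new lock lands in the priming sequence (the calls mirror dma_resv_lockdep() in drivers/dma-buf/dma-resv.c; the surrounding context is abbreviated):

```c
struct address_space mapping;

address_space_init_once(&mapping);
/* ... mmap_read_lock(mm) and dma_resv_lock() already taken ... */
fs_reclaim_acquire(GFP_KERNEL);		/* models shrinker/reclaim context */
/* for unmap_mapping_range() on trylocked buffer objects in shrinkers */
i_mmap_lock_write(&mapping);
i_mmap_unlock_write(&mapping);
fs_reclaim_release(GFP_KERNEL);
```

This records the documented hierarchy mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write once at init time, so any driver that later inverts it gets a lockdep splat.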

Linus Torvalds | 97d052ea3f
Merge tag 'locking-urgent-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Thomas Gleixner: "A set of locking fixes and updates:

- Untangle the header spaghetti which causes build failures in various situations, caused by the lockdep additions to seqcount to validate that the write side critical sections are non-preemptible.

- The seqcount associated lock debug addons which were blocked by the above fallout.

  seqcount writers, contrary to seqlock writers, must be externally serialized, which usually happens via locking - except for strict per-CPU seqcounts. As the lock is not part of the seqcount, lockdep cannot validate that the lock is held.

  This new debug mechanism adds the concept of associated locks: the sequence count now has lock-type variants and corresponding initializers which take a pointer to the associated lock used for writer serialization. If lockdep is enabled, the pointer is stored and write_seqcount_begin() has a lockdep assertion to validate that the lock is held. Aside from the type and the initializer, no other code changes are required at the seqcount usage sites. The rest of the seqcount API is unchanged and determines the type at compile time with the help of _Generic, which is possible now that the minimal GCC version has been moved up.

  Adding this lockdep coverage unearthed a handful of seqcount bugs which have already been addressed, independent of this.

  While generally useful, this comes with a Trojan Horse twist: on RT kernels the write side critical section can become preemptible if the writers are serialized by an associated lock, which leads to the well-known reader-preempts-writer livelock. RT prevents this by storing the associated lock pointer in the seqcount, independent of lockdep, and changing the reader side to block on the lock when a reader detects that a writer is in the write side critical section.

- Conversion of seqcount usage sites to associated types and initializers"

* tag 'locking-urgent-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits)
  locking/seqlock, headers: Untangle the spaghetti monster
  locking, arch/ia64: Reduce <asm/smp.h> header dependencies by moving XTP bits into the new <asm/xtp.h> header
  x86/headers: Remove APIC headers from <asm/smp.h>
  seqcount: More consistent seqprop names
  seqcount: Compress SEQCNT_LOCKNAME_ZERO()
  seqlock: Fold seqcount_LOCKNAME_init() definition
  seqlock: Fold seqcount_LOCKNAME_t definition
  seqlock: s/__SEQ_LOCKDEP/__SEQ_LOCK/g
  hrtimer: Use sequence counter with associated raw spinlock
  kvm/eventfd: Use sequence counter with associated spinlock
  userfaultfd: Use sequence counter with associated spinlock
  NFSv4: Use sequence counter with associated spinlock
  iocost: Use sequence counter with associated spinlock
  raid5: Use sequence counter with associated spinlock
  vfs: Use sequence counter with associated spinlock
  timekeeping: Use sequence counter with associated raw spinlock
  xfrm: policy: Use sequence counters with associated lock
  netfilter: nft_set_rbtree: Use sequence counter with associated rwlock
  netfilter: conntrack: Use sequence counter with associated spinlock
  sched: tasks: Use sequence counter with associated spinlock
  ...
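A hedged sketch of the associated-lock API this series introduces (the seqcount types and initializers are from <linux/seqlock.h>; the demo_* names are hypothetical):

```c
#include <linux/seqlock.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);
static seqcount_spinlock_t demo_seq =
	SEQCNT_SPINLOCK_ZERO(demo_seq, &demo_lock);
static u64 demo_value;

static void demo_write(u64 v)
{
	spin_lock(&demo_lock);			/* writer serialization */
	write_seqcount_begin(&demo_seq);	/* lockdep asserts demo_lock is held */
	demo_value = v;
	write_seqcount_end(&demo_seq);
	spin_unlock(&demo_lock);
}

static u64 demo_read(void)
{
	unsigned int seq;
	u64 v;

	do {
		seq = read_seqcount_begin(&demo_seq);
		v = demo_value;
	} while (read_seqcount_retry(&demo_seq, seq));

	return v;
}
```

Only the type and the initializer differ from a plain seqcount_t; read_seqcount_begin() and friends pick the right implementation at compile time via _Generic.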

Ahmed S. Darwish | cd29f22019
dma-buf: Use sequence counter with associated wound/wait mutex
A sequence counter write side critical section must be protected by some form of locking to serialize writers. If the serialization primitive does not disable preemption implicitly, preemption has to be explicitly disabled before entering the sequence counter write side critical section.

The dma-buf reservation subsystem uses plain sequence counters to manage updates to reservations. Writer serialization is accomplished through a wound/wait mutex. Acquiring a wound/wait mutex does not disable preemption, so this needs to be done manually before and after the write side critical section.

Use the newly-added seqcount_ww_mutex_t instead:

- It associates the ww_mutex with the sequence count, which enables lockdep to validate that the write side critical section is properly serialized.

- It removes the need to explicitly add preempt_disable/enable() around the write side critical section because the write_begin/end() functions for this new data type automatically do this.

If lockdep is disabled, this ww_mutex lock association is compiled out and has neither storage size nor runtime overhead.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://lkml.kernel.org/r/20200720155530.1173732-13-a.darwish@linutronix.de
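A hedged before/after sketch of the write side this conversion targets (simplified from the dma-resv update paths; obj->seq is the reservation's sequence counter, obj->lock its ww_mutex):

```c
/* Before: plain seqcount_t; the ww_mutex serializes writers but does
 * not disable preemption, so that had to be done by hand. */
preempt_disable();
write_seqcount_begin(&obj->seq);
/* ... publish updated fence pointers ... */
write_seqcount_end(&obj->seq);
preempt_enable();

/* After: seqcount_ww_mutex_t, initialized with
 * seqcount_ww_mutex_init(&obj->seq, &obj->lock). The begin/end helpers
 * handle preemption, and lockdep checks that obj->lock is held. */
write_seqcount_begin(&obj->seq);
/* ... publish updated fence pointers ... */
write_seqcount_end(&obj->seq);
```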

Ahmed S. Darwish | 318ce71f3e
dma-buf: Remove custom seqcount lockdep class key
Commit

Daniel Vetter | d0b9a9aef0
dma-fence: prime lockdep annotations
Two in one go:
- it is allowed to call dma_fence_wait() while holding a
dma_resv_lock(). This is fundamental to how eviction works with ttm,
so required.
- it is allowed to call dma_fence_wait() from memory reclaim contexts,
specifically from shrinker callbacks (which i915 does), and from mmu
notifier callbacks (which amdgpu does, and which i915 sometimes also
does, and probably always should, but that's kinda a debate). Also
for stuff like HMM we really need to be able to do this, or things
get real dicey.
The consequence is that any critical path necessary to get to a
dma_fence_signal for a fence must never a) call dma_resv_lock nor b)
allocate memory with GFP_KERNEL. Also, by implication of
dma_resv_lock(), no userspace faulting is allowed. These are some
supremely obnoxious limitations, which is why we need to sprinkle the
right annotations to all relevant paths.
The one big locking context we're leaving out here is mmu notifiers,
added in
commit
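A hedged sketch of the signalling-section annotation this priming builds on (dma_fence_begin_signalling()/dma_fence_end_signalling() come from the companion dma-fence annotation work; fence and the body are illustrative):

```c
/* Everything between these markers is on the path to dma_fence_signal(),
 * so it must not take dma_resv_lock(), fault in userspace memory, or
 * allocate with GFP_KERNEL. */
bool cookie = dma_fence_begin_signalling();

/* ... completion processing, no blocking allocations ... */
dma_fence_signal(fence);

dma_fence_end_signalling(cookie);
```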

Michel Lespinasse | 0adf65f53a
DMA reservations: use the new mmap locking API
This use is converted manually ahead of the next patch in the series, as it requires including a new header which the automated conversion would miss.

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-4-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
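A hedged before/after sketch of the conversion (mmap_read_lock()/mmap_read_unlock() and the <linux/mmap_lock.h> header are the real new API; the context is abbreviated):

```c
#include <linux/mmap_lock.h>	/* the new header the automation would miss */

/* before */
down_read(&mm->mmap_sem);
/* ... */
up_read(&mm->mmap_sem);

/* after */
mmap_read_lock(mm);
/* ... */
mmap_read_unlock(mm);
```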

Daniel Vetter | fedf7a441f
dma-resv: Also prime acquire ctx for lockdep
Semantically it really doesn't matter where we grab the ticket. But since the ticket is a fake lockdep lock, it matters for lockdep validation purposes.

This means stuff like grabbing a ticket and then doing copy_from/to_user isn't allowed anymore. This is a change compared to the current ttm fault handler, which doesn't bother with having a full reservation. Since I'm looking into fixing the TODO entry in ttm_mem_evict_wait_busy() I think that'll have to change sooner or later anyway, better get started.

A bit more context on why I'm looking into this: for backwards compat with existing i915 gem code I think we'll have to do full slowpath locking in the i915 equivalent of the eviction code. And with dynamic dma-buf that will leak across drivers, so another thing we need to standardize and make sure it's done the same way everywhere.

Unfortunately this means another full audit of all drivers:

- gem helpers: acquire_init is done right before taking locks, so no problem. Same for acquire_fini and unlocking, which means nothing that's not already covered by the dma_resv_lock rules will be caught with this extension here to the acquire_ctx.

- etnaviv: An absolutely massive amount of code is run between the acquire_init and the first lock acquisition in submit_lock_objects. But nothing that would touch user memory and could cause a fault. Furthermore nothing that uses the ticket, so even if I missed something, it would be easy to fix by pushing the acquire_init right before the first use. Similar on the unlock/acquire_fini side.

- i915: Right now (and this will likely change a lot rsn) the acquire ctx and actual locks are right next to each another. No problem.

- msm has a problem: submit_create calls acquire_init, but then submit_lookup_objects() has a bunch of copy_from_user to do the object lookups. That's the only thing before submit_lock_objects calls dma_resv_lock(). Despite all the copypasta to etnaviv, etnaviv does not have this issue since it copies all the userspace structs earlier. submit_cleanup does not have any such issues. With the prep patch to pull out the acquire_ctx and reorder it, msm is going to be safe too.

- nouveau: acquire_init is right next to ttm_bo_reserve, so all good. Similar on the acquire_fini/ttm_bo_unreserve side.

- ttm execbuf utils: acquire context and locking are even in the same functions here (one function to reserve everything, the other to unreserve), so all good.

- vc4: Another case where acquire context and locking are handled in the same functions (one function to lock everything, the other to unlock).

Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Christian König <christian.koenig@amd.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: Huang Rui <ray.huang@amd.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191119210844.16947-3-daniel.vetter@ffwll.ch
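A hedged sketch of what priming the acquire ctx means in the lockdep initcall (the ww_mutex/dma_resv calls are the real interfaces; the exact placement is approximate):

```c
struct ww_acquire_ctx ctx;
int ret;

ww_acquire_init(&ctx, &reservation_ww_class);	/* ticket primed here */
ret = dma_resv_lock(&obj, &ctx);
if (ret == -EDEADLK)
	dma_resv_lock_slow(&obj, &ctx);
/* ... the already-primed hierarchy (reclaim, i_mmap, ...) ... */
dma_resv_unlock(&obj);
ww_acquire_fini(&ctx);
```

Because the ticket is a fake lockdep lock, anything that may fault (copy_from_user() and friends) between ww_acquire_init() and ww_acquire_fini() now draws a lockdep complaint.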

Steven Price | ffbbaa7420
dma_resv: prime lockdep annotations
From d07ea81611ed6e4fb8cc290f42d23dbcca2da2f8 Mon Sep 17 00:00:00 2001
From: Steven Price <steven.price@arm.com>
Date: Mon, 11 Nov 2019 13:07:19 +0000
Subject: [PATCH] dma_resv: Correct return type of dma_resv_lockdep()
subsys_initcall() expects a function which returns 'int'. Fix
dma_resv_lockdep() so it returns an 'int' error code.
Fixes:
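A hedged diff-style sketch of the described fix, reconstructed from the message (the function body is abbreviated):

```c
-static void __init dma_resv_lockdep(void)
+static int __init dma_resv_lockdep(void)
 {
 	struct mm_struct *mm = mm_alloc();

 	if (!mm)
-		return;
+		return -ENOMEM;

 	/* ... lockdep priming ... */

 	mmput(mm);
+	return 0;
 }
 subsys_initcall(dma_resv_lockdep);
```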

Daniel Vetter | b2a8116e25
dma_resv: prime lockdep annotations
Full audit of everyone:

- i915, radeon, amdgpu should be clean per their maintainers.

- vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.

- panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.

- v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.

- vmwgfx has a bunch of ioctls that do their own copy_*_user:
  - vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
  - vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
  - a pile of smaller ioctls in vmwgfx_ioctl.c, no reservations to be found there.
  Summary: vmwgfx seems to be fine too.

- virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctls, also both safe.

- qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.

- A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.

v2: Thomas pointed out that vmwgfx calls dma_resv_init while it already holds a dma_resv lock of a different object. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues. Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this. Solution is to spawn a worker (but only once). It's horrible, but it works.

v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.

v4: Annotate with __init (Rob Herring)

Cc: Rob Herring <robh@kernel.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: "VMware Graphics" <linux-graphics-maintainer@vmware.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191104173801.2972-1-daniel.vetter@ffwll.ch
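A hedged sketch of the v3/v4 shape described above - a clean __init initcall that allocates an mm and takes the locks once in the intended order (close to the dma_resv_lockdep() of that era, but simplified):

```c
static int __init dma_resv_lockdep(void)
{
	struct mm_struct *mm = mm_alloc();	/* v3: we can allocate an mm */
	struct dma_resv obj;

	if (!mm)
		return -ENOMEM;

	dma_resv_init(&obj);

	down_read(&mm->mmap_sem);		/* pre-mmap_lock-API locking */
	ww_mutex_lock(&obj.lock, NULL);		/* i.e. dma_resv_lock(&obj, NULL) */
	fs_reclaim_acquire(GFP_KERNEL);		/* primes reclaim vs dma_resv */
	fs_reclaim_release(GFP_KERNEL);
	ww_mutex_unlock(&obj.lock);
	up_read(&mm->mmap_sem);

	mmput(mm);
	return 0;
}
subsys_initcall(dma_resv_lockdep);		/* v4: __init annotated */
```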

Qiang Yu | 7fbd0782bf
dma-buf/resv: fix exclusive fence get
This fixes a kernel crash seen when testing the lima driver.
Cc: Christian König <christian.koenig@amd.com>
Fixes:

Chris Wilson | b016cd6ed4
dma-buf: Restore seqlock around dma_resv updates
This reverts

Christian König | 52791eeec1
dma-buf: rename reservation_object to dma_resv
Be more consistent with the naming of the other DMA-buf objects.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/323401/
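A hedged sketch of what the rename looks like at a call site (the old and new identifiers are the real API names; the surrounding code is illustrative):

```c
/* before: <linux/reservation.h> */
struct reservation_object *resv = obj->resv;
reservation_object_lock(resv, NULL);
reservation_object_add_excl_fence(resv, fence);
reservation_object_unlock(resv);

/* after: <linux/dma-resv.h> */
struct dma_resv *resv = obj->resv;
dma_resv_lock(resv, NULL);
dma_resv_add_excl_fence(resv, fence);
dma_resv_unlock(resv);
```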