Set shared_max to the number of shared fences right before we release
the lock.
This way every attempt to add a shared fence without previously
reserving a slot will cause an error.
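A minimal sketch of how the unlock path could enforce this, assuming
the clamp is compiled in only for debug builds (the exact field and
config names here are illustrative):

    static inline void
    reservation_object_unlock(struct reservation_object *obj)
    {
    #ifdef CONFIG_DEBUG_MUTEXES
            /*
             * Clamp shared_max to the number of fences actually added,
             * so that a later add without a matching reserve runs out
             * of slots and trips the overflow check in the add path.
             */
            if (obj->fence)
                    obj->fence->shared_max = obj->fence->shared_count;
    #endif
            ww_mutex_unlock(&obj->lock);
    }

With shared_max clamped like this, the add path can simply warn or BUG
when shared_count reaches shared_max, catching callers that skipped
reservation_object_reserve_shared().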
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Acked-by: Junwei Zhang <Jerry.Zhang@amd.com>
Link: https://patchwork.kernel.org/patch/10626147/
Let's support simultaneous submissions to multiple engines.
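A sketch of what this could mean at the interface level: let callers
say up front how many shared fence slots they need, for example one
per engine (the num_fences parameter name is an assumption here):

    /* Reserve room for num_fences additional shared fences. */
    int reservation_object_reserve_shared(struct reservation_object *obj,
                                          unsigned int num_fences);

    /* Example caller: one shared fence slot per engine submitted to. */
    int r;

    r = reservation_object_reserve_shared(resv, num_engines);
    if (r)
            return r;

Callers that previously reserved one slot at a time in a loop can then
make a single call and have the list grow only once.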
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.kernel.org/patch/10626149/
There is no need for the staged copy of the shared fence list any
more. Just replace the whole list when there isn't enough room for the
additional fence.
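Roughly, the reserve path can then allocate a larger list under the
held reservation lock, copy the active fences over and publish the new
list via RCU, instead of keeping a separate staged copy around. A
sketch (signaled-fence filtering and the __rcu annotations are elided):

    int reservation_object_reserve_shared(struct reservation_object *obj)
    {
            struct reservation_object_list *old, *new;
            unsigned int i, max;

            old = reservation_object_get_list(obj);
            if (old && old->shared_count < old->shared_max)
                    return 0;       /* still room in the current list */

            max = old ? old->shared_max * 2 : 4;
            new = kmalloc(struct_size(new, shared, max), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;

            /* Copy the existing fences over and publish the new list. */
            for (i = 0; old && i < old->shared_count; ++i)
                    new->shared[i] = old->shared[i];
            new->shared_count = old ? old->shared_count : 0;
            new->shared_max = max;

            rcu_assign_pointer(obj->fence, new);
            if (old)
                    kfree_rcu(old, rcu);    /* list embeds an rcu_head */
            return 0;
    }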
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.kernel.org/patch/10626143/
In a similar fashion to reservation_object_lock() and
reservation_object_unlock(), ww_mutex_trylock is also useful, and so
it is worth wrapping for consistency.
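The wrapper can be as thin as the following sketch (at this point
ww_mutex_trylock takes no acquire context):

    static inline bool __must_check
    reservation_object_trylock(struct reservation_object *obj)
    {
            return ww_mutex_trylock(&obj->lock);
    }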
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Add __must_check Joonas wants.]
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170221093000.22802-1-chris@chris-wilson.co.uk
Joonas complained that writing ww_mutex_lock(&resv->lock, ctx) was too
intrusive compared to reservation_object_lock(resv, ctx).
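A sketch of the wrappers in question, which do nothing more than hide
the ww_mutex plumbing behind the reservation_object name:

    static inline int
    reservation_object_lock(struct reservation_object *obj,
                            struct ww_acquire_ctx *ctx)
    {
            return ww_mutex_lock(&obj->lock, ctx);
    }

    static inline void
    reservation_object_unlock(struct reservation_object *obj)
    {
            ww_mutex_unlock(&obj->lock);
    }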
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161115154642.31850-1-chris@chris-wilson.co.uk
In the atomic modesetting path, each driver simply wants to grab a ref
to the exclusive fence from a reservation object to store in the incoming
drm_plane_state, without doing the whole RCU dance. Since each driver
will need to do this, let's make a helper.
v2: rename to _rcu instead of _unlocked to be more consistent
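A sketch of such a helper, using the pre-dma_fence-rename naming: take
the RCU read lock, try to grab a reference to the exclusive fence if
one is set, and hand it back to the caller:

    static inline struct fence *
    reservation_object_get_excl_rcu(struct reservation_object *obj)
    {
            struct fence *fence;

            rcu_read_lock();
            fence = rcu_dereference(obj->fence_excl);
            /* The refcount may already be zero; treat as no fence. */
            if (fence && !fence_get_rcu(fence))
                    fence = NULL;
            rcu_read_unlock();

            return fence;
    }

A driver's atomic update can then just do
plane_state->fence = reservation_object_get_excl_rcu(obj)
without open-coding the dance.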
Signed-off-by: Rob Clark <robdclark@gmail.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
This adds some extra functions to deal with RCU.
reservation_object_get_fences_rcu() will obtain the list of shared
and exclusive fences without obtaining the ww_mutex.
reservation_object_wait_timeout_rcu() will wait on all fences of the
reservation_object, without obtaining the ww_mutex.
reservation_object_test_signaled_rcu() will test if all fences of the
reservation_object are signaled without using the ww_mutex.
reservation_object_get_excl and reservation_object_get_list require
the reservation object to be held; updating requires
write_seqcount_begin/end. If only the exclusive fence is needed,
rcu_dereference followed by fence_get_rcu can be used; if the shared
fences are needed, it's recommended to use the supplied functions.
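For example, waiting on everything attached to a reservation object
without taking the lock could look like this usage sketch (the two
bool parameters are wait_all and intr):

    long ret;

    /* Wait interruptibly for all fences, shared and exclusive. */
    ret = reservation_object_wait_timeout_rcu(obj, true, true,
                                              msecs_to_jiffies(5000));
    if (ret < 0)
            return ret;             /* e.g. -ERESTARTSYS */
    if (ret == 0)
            return -ETIMEDOUT;      /* fences not signaled in time */
    /* ret > 0: all fences signaled, remaining jiffies returned. */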
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the list of shared fences to a struct, and return it in
reservation_object_get_list().
Add reservation_object_get_excl to get the exclusive fence.
Add reservation_object_reserve_shared(), which reserves space
in the reservation_object for one more shared fence.
reservation_object_add_shared_fence() and
reservation_object_add_excl_fence() are used to assign a new
fence to a reservation_object pointer, to complete a reservation.
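Putting the pieces together, the structures and entry points could
look roughly like this (field layout simplified; the __rcu annotations
and the seqcount are omitted):

    struct reservation_object_list {
            u32 shared_count, shared_max;
            struct fence *shared[];
    };

    struct reservation_object {
            struct ww_mutex lock;
            struct fence *fence_excl;
            struct reservation_object_list *fence;
    };

    /* Make room for one more shared fence... */
    int reservation_object_reserve_shared(struct reservation_object *obj);

    /* ...then complete the reservation with the new fence. */
    void reservation_object_add_shared_fence(struct reservation_object *obj,
                                             struct fence *fence);
    void reservation_object_add_excl_fence(struct reservation_object *obj,
                                           struct fence *fence);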
Changes since v1:
- Add reservation_object_get_excl, reorder code a bit.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the definitions for wound/wait mutexes out to a separate
header, ww_mutex.h. This reduces clutter in mutex.h, and
increases readability.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Dave Airlie <airlied@gmail.com>
Link: http://lkml.kernel.org/r/51D675DC.3000907@canonical.com
[ Tidied up the code a bit. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This adds support for a generic reservations framework that can be
hooked up to ttm and dma-buf and allows easy sharing of reservations
across devices.
The idea is that dma-buf and ttm objects will both get a pointer
to a struct reservation_object, which has to be reserved before
anything is done with the contents of the dma-buf.
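With ww_mutex underneath, reserving a couple of objects before
touching their contents could look like this usage sketch
(reservation_ww_class is the common lock class the framework provides;
full -EDEADLK backoff is elided):

    struct ww_acquire_ctx ctx;
    int ret;

    ww_acquire_init(&ctx, &reservation_ww_class);

    ret = ww_mutex_lock(&obj_a->lock, &ctx);
    if (ret == -EDEADLK) {
            /* Drop held locks and retry with ww_mutex_lock_slow(). */
    }
    ret = ww_mutex_lock(&obj_b->lock, &ctx);
    /* ... handle -EDEADLK the same way ... */
    ww_acquire_done(&ctx);

    /* Safe to operate on the contents of both buffers here. */

    ww_mutex_unlock(&obj_b->lock);
    ww_mutex_unlock(&obj_a->lock);
    ww_acquire_fini(&ctx);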
Changes since v1:
- Fix locking issue in ticket_reserve, which could cause mutex_unlock
to be called too many times.
Changes since v2:
- All fence-related calls and members have been taken out for now;
what's left is the bare minimum to be useful for the ttm locking
conversion.
Changes since v3:
- Removed the helper functions too. The documentation has an example
implementation for locking. With the move to ww_mutex there is no
need for much logic any more.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>