Our low-level wait routine has evolved from our generic wait interface
that handled unlocked waits, RPS boosting, and waits with time tracking. If we
push our GEM fence tracking to use reservation_objects (required for
handling multiple timelines), we lose the ability to pass the required
information down to i915_wait_request(). However, if we push the extra
functionality from i915_wait_request() to the individual callsites
(i915_gem_object_wait_rendering and i915_gem_wait_ioctl) that make use
of those extras, we can both simplify our low level wait and prepare for
extending the GEM interface for use of reservation_objects.
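Roughly, the split looks like this (a sketch only; the exact signatures
and flag names in the driver may differ):

  long i915_wait_request(struct drm_i915_gem_request *req,
                         unsigned int flags, long timeout_jiffies);

  /* Caller-side extra: the wait ioctl keeps the user timeout accounting. */
  static int wait_one_request(struct drm_i915_gem_request *req, s64 *timeout_ns)
  {
          long remaining = nsecs_to_jiffies(*timeout_ns);

          remaining = i915_wait_request(req, I915_WAIT_INTERRUPTIBLE, remaining);
          if (remaining < 0)
                  return remaining;

          *timeout_ns = jiffies_to_nsecs(remaining); /* report time left */
          return 0;
  }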
v2: Rewrite i915_wait_request() kerneldocs
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-4-chris@chris-wilson.co.uk
We cannot use a blocking method such as mutex_lock() inside a wait
loop. Here we invoke pick_next_workload(), which needs to acquire a
mutex, from our "condition" expression, and then enter another round
of the going-to-sleep sequence, changing the task state again. This is
dangerous. Let's rewrite the wait sequence to avoid nested sleeping.
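One way to structure the loop without nesting (a sketch; the
pick_next_workload() call and the waitqueue name stand in for the real
ones) is to evaluate the mutex-taking condition while still in
TASK_RUNNING and let wait_woken() handle the sleep and the wakeup race:

  struct intel_vgpu_workload *workload = NULL;
  DEFINE_WAIT_FUNC(wait, woken_wake_function);

  add_wait_queue(&scheduler->waitq, &wait);
  do {
          workload = pick_next_workload(gvt);     /* may take a mutex */
          if (workload)
                  break;
          wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
  } while (!kthread_should_stop());
  remove_wait_queue(&scheduler->waitq, &wait);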
v2: fix do...while loop exit condition (zhenyu)
v3: rebase to gvt-staging branch
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Mark all local functions & variables as static.
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Switch to the new for_each_engine() helper to properly access the
enabled intel_engine_cs structures, as the i915 core now manages them
dynamically. GVT-g init still depends on the ring mask to determine
the engine list, since it runs earlier.
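For illustration (the macro's argument list has changed across kernel
versions, and the per-engine setup call here is hypothetical):

  struct intel_engine_cs *engine;
  enum intel_engine_id id;

  /* Iterate only the engines i915 actually instantiated. */
  for_each_engine(engine, dev_priv, id)
          setup_vgpu_execlist(vgpu, engine);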
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
For whatever reason, the gvt scheduler runs synchronously. At the very
least, let's run synchronously without holding the struct_mutex.
v2: cut'n'paste mutex_lock instead of unlock.
Replace the long hold of struct_mutex with a dedicated mutex that
serialises the worker threads, as sketched below.
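The intended locking shape, roughly (scheduler_mutex and
wait_for_workload_completion() are illustrative names, not necessarily
what the patch adds):

  static DEFINE_MUTEX(scheduler_mutex); /* serialises the worker threads */

          mutex_lock(&scheduler_mutex);

          mutex_lock(&dev_priv->drm.struct_mutex);
          ret = dispatch_workload(workload);            /* needs struct_mutex */
          mutex_unlock(&dev_priv->drm.struct_mutex);

          if (ret == 0)
                  wait_for_workload_completion(workload); /* sleeps with no i915 lock held */

          mutex_unlock(&scheduler_mutex);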
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The kthread will not be interrupted, don't even bother checking.
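Illustrative only (the wait call and flag name follow the i915
interface of the time): a kthread never sees signals unless it calls
allow_signal(), so the worker's wait can drop the interruptible flag
and the -ERESTARTSYS handling that goes with it.

  i915_wait_request(workload->req, 0 /* not I915_WAIT_INTERRUPTIBLE */,
                    MAX_SCHEDULE_TIMEOUT);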
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The workload took a pointer to the request, and even waited upon it,
without holding a reference on the request. Take that reference
explicitly, and fix up the error path following request allocation
that missed flushing the request.
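A sketch of the idea (the get/put helpers follow the i915 request API
naming of the time and may differ in detail):

  /* Pin the request for as long as the workload points at it. */
  workload->req = i915_gem_request_get(rq);

  /* dispatch, then wait on workload->req */

  i915_gem_request_put(workload->req);
  workload->req = NULL;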
v2: [zhenyuw]
- drop the request put in the error path for dispatch, as the main
thread caller will handle it identically to a real request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
i915 core should only use functions and structures exposed through
intel_gvt.h. Remove the internal gvt.h and i915_pvinfo.h includes from
i915 core. Turn the internal intel_gvt structure into a private handle
so that the gvt internals need not be exposed to i915 core.
v2: Fix per Chris's comment
- carefully handle dev_priv->gvt assignment
- add necessary bracket for macro helper
- forward declaration of struct intel_gvt
- keep free operation within same file handling alloc
v3: fix use after free and remove intel_gvt.initialized
v4: change to_gvt() to an inline
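Roughly, the private-handle shape (close to what the patch ends up
with, though the surrounding declarations are paraphrased here):

  /* Visible to i915 core: a forward declaration and an opaque pointer. */
  struct intel_gvt;
  /* struct drm_i915_private carries:  struct intel_gvt *gvt; */

  /* Inside gvt code, where struct intel_gvt is fully defined: */
  static inline struct intel_gvt *to_gvt(struct drm_i915_private *i915)
  {
          return i915->gvt;
  }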
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a command scanner to scan guest command buffers.
Signed-off-by: Yulei Zhang <yulei.zhang@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
As different VMs may configure different render MMIOs when executing
workloads, the render MMIOs have to be switched when scheduling
workloads between different VMs.
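In outline (the register list, the save area and the helper below are
illustrative, not the actual gvt names):

  static void switch_render_mmio(struct drm_i915_private *dev_priv,
                                 struct intel_vgpu *prev,
                                 struct intel_vgpu *next)
  {
          int i;

          for (i = 0; i < ARRAY_SIZE(render_mmio_list); i++) {
                  u32 reg = render_mmio_list[i];

                  /* save the outgoing vGPU's value, load the incoming one's */
                  prev->saved_render_mmio[i] = I915_READ(_MMIO(reg));
                  I915_WRITE(_MMIO(reg), next->saved_render_mmio[i]);
          }
  }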
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces the vGPU workload scheduler routines.
The GVT workload scheduler is responsible for picking and executing
GVT workloads from the currently scheduled vGPU. Before a workload is
submitted to host i915, the guest execlist context is shadowed in the
host GVT shadow context and the instructions in the guest ring buffer
are copied into the GVT shadow ring buffer. The GVT-g workload
scheduler then scans the instructions in the guest ring buffer and
submits the workload to host i915.
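The per-workload flow described above, in outline (function names are
illustrative, not the exact gvt entry points):

  workload = pick_next_workload(gvt);              /* from the scheduled vGPU */

  ret = shadow_guest_context(workload);            /* guest execlist ctx -> shadow ctx */
  if (ret == 0)
          ret = copy_guest_ring_buffer(workload);  /* guest RB -> shadow RB */
  if (ret == 0)
          ret = scan_guest_instructions(workload); /* command scanner pass */
  if (ret == 0)
          ret = submit_to_i915(workload);          /* hand off to host i915 */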
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>