The functions were internally already using only the structure, so we
just need to flip the interface.
v2: rebase
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190613232156.34940-7-daniele.ceraolospurio@intel.com
Mixed use of C99 and kernel types is getting ugly. Prefer kernel types.
sed -i 's/\buint\(8\|16\|32\|64\)_t\b/u\1/g'
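For illustration, the substitution turns declarations such as the
following (the variable names here are made up):

  /* before */
  uint32_t fence_num;
  uint64_t gpa;

  /* after running the sed above */
  u32 fence_num;
  u64 gpa;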
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The majority of runtime-pm operations are bounded and scoped within a
function; for these it is easy to verify that the wakerefs are handled
correctly. We can employ the compiler to help us, and reduce the number
of wakerefs tracked when debugging, by passing around cookies provided
by the various rpm_get functions to their rpm_put counterparts. This
makes the pairing explicit, and given the required wakeref cookie the
compiler can verify that we pass an initialised value to the rpm_put
(quite handy for double-checking error paths).
For regular builds, the compiler should be able to eliminate the unused
local variables and the program growth should be minimal. Fwiw, it came
out as a net improvement, as gcc was able to refactor rpm_get and
rpm_get_if_in_use together.
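As a rough sketch of the resulting calling pattern (the exact i915
signatures may differ from this simplified form):

  intel_wakeref_t wakeref;

  wakeref = intel_runtime_pm_get(i915);   /* get returns a cookie */

  /* ... work that requires the device to stay awake ... */

  intel_runtime_pm_put(i915, wakeref);    /* put consumes the cookie,
                                           * making the pairing explicit */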
v2: Just s/rpm_put/rpm_put_unchecked/ everywhere, leaving the manual
markup for smaller, more targeted patches.
v3: Mention the cookie in Returns
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-2-chris@chris-wilson.co.uk
This tries to give a newborn vGPU a higher scheduling chance, not only
by adding it to the sched list head but also by giving it higher
priority for workload scheduling for 2 seconds after it starts to be
scheduled. This allows fast GPU execution during VM boot and ensures
the guest driver sets up the required state in time.
This fixes a recent failure seen on one VM, with multiple Linux VMs
running on a kernel with commit 2621cefaa42b3 ("drm/i915: Provide a
timeout to i915_gem_wait_for_idle() on setup"), which has a shorter
setup timeout that caused context state init to fail.
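A minimal sketch of the idea, assuming an illustrative pri_time stamp
taken when the vGPU first starts to be scheduled (not the exact patch):

  /* Treat a vGPU as "newborn" for 2s after it first gets scheduled,
   * so it is picked ahead of others while the guest driver boots. */
  static bool vgpu_is_new_born(struct vgpu_sched_data *vgpu_data)
  {
          ktime_t boost_end = ktime_add_ms(vgpu_data->pri_time,
                                           2 * MSEC_PER_SEC);

          return ktime_before(ktime_get(), boost_end);
  }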
v2: change to 2s for higher scheduling period
Cc: Yuan Hang <hang.yuan@intel.com>
Reviewed-by: Hang Yuan <hang.yuan@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
pm_runtime_get_sync in intel_runtime_pm_get might sleep if the i915
device is not active. When stopping the vgpu schedule, the device may
be inactive, so runtime_pm_get needs to be moved out of the
spin_lock/unlock section.
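A hedged sketch of the intended ordering (the lock and function names
here are illustrative):

  /* pm_runtime_get_sync() may sleep, so take the wakeref before
   * entering the atomic section and drop it after leaving it. */
  intel_runtime_pm_get(dev_priv);

  spin_lock_bh(&scheduler->mmio_context_lock);
  /* ... stop the vgpu schedule / switch mmio to host ... */
  spin_unlock_bh(&scheduler->mmio_context_lock);

  intel_runtime_pm_put(dev_priv);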
Fixes: b24881e0b0b6 ("drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The scheduler lock (gvt->sched_lock) is used to protect gvt scheduler
logic, including the gvt scheduler structure (gvt->scheduler) and
per-vgpu schedule data (vgpu->sched_data, vgpu->sched_ctl).
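A minimal sketch of the intended usage, assuming sched_lock is a mutex
(illustrative only):

  mutex_lock(&gvt->sched_lock);
  /* safe to touch gvt->scheduler, vgpu->sched_data, vgpu->sched_ctl */
  mutex_unlock(&gvt->sched_lock);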
v9:
- Change commit author since the patches are improved a lot compared
with the original version.
Original author: Pei Zhang <pei.zhang@intel.com>
- Rebase to latest gvt-staging.
v8:
- Correct coding style.
- Rebase to latest gvt-staging.
v7:
- Remove gtt_lock since it is already protected by gvt_lock and vgpu_lock.
v6:
- Rebase to latest gvt-staging.
v5:
- Rebase to latest gvt-staging.
v4:
- Rebase to latest gvt-staging.
v3: update to latest code base
Signed-off-by: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
When there is only one vGPU in GVT-g and it submits workloads
continuously, it will not be scheduled out, vgpu_update_timeslice is
not called, and its sched_in_time is not updated for a long time,
which can be several seconds or longer.
Once GVT-g pauses submitting workloads for this vGPU due to heavy
host CPU workload, this vGPU gets scheduled out and
vgpu_update_timeslice is called; its left_ts is then reduced by a
large value, sched_out_time - sched_in_time.
When GVT-g is going to submit workloads for this vGPU again,
it will not be scheduled in until gvt_balance_timeslice reaches
stage 0 and resets its left_ts, which introduces several
hundred milliseconds of latency.
This patch updates the time slice every ms so that sched_in_time is
updated in a timely fashion.
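A hedged sketch of the per-tick accounting (field names are
illustrative), called every ms rather than only on a vGPU switch:

  static void vgpu_update_timeslice(struct intel_vgpu *vgpu, ktime_t cur_time)
  {
          struct vgpu_sched_data *vgpu_data = vgpu->sched_data;
          ktime_t delta_ts = ktime_sub(cur_time, vgpu_data->sched_in_time);

          vgpu_data->sched_time = ktime_add(vgpu_data->sched_time, delta_ts);
          vgpu_data->left_ts = ktime_sub(vgpu_data->left_ts, delta_ts);

          /* refresh the stamp so it never goes stale between switches */
          vgpu_data->sched_in_time = cur_time;
  }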
v2: revise commit message
v3: use more concise expr. (Zhenyu)
Signed-off-by: Zhipeng Gong <zhipeng.gong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Min He <min.he@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
intel_gvt_schedule checks the timer through a counter and is supposed
to wake up and increase the counter every ms.
In a system with a heavy workload, gvt_service_thread cannot get a
chance to run right after waking up and will be delayed by several
milliseconds. As a result, an interval of one hundred counter ticks
can mean several hundred milliseconds in real time.
This patch uses real time instead of the counter for the timer check.
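A hedged sketch of the check, comparing against wall-clock time instead
of a wakeup counter (names are illustrative):

  ktime_t cur_time = ktime_get();

  if (!ktime_before(cur_time, sched_data->expire_time)) {
          gvt_balance_timeslice(sched_data);
          sched_data->expire_time = ktime_add_ms(cur_time,
                                                 GVT_TS_BALANCE_PERIOD_MS);
  }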
v2: remove static variable. (Zhenyu)
v3: correct expire_time update. (Zhenyu)
Signed-off-by: Zhipeng Gong <zhipeng.gong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Min He <min.he@intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Only reset the vgpu execlist state of the exact engine that gets a
reset request from the VM. After reading context status from the HWSP
is enabled, the KMD will use the saved CSB read pointer rather than
always reading from MMIO. When one engine reset happens, only the read
pointer of this engine will be reset; the GVT-g host side also needs
to align with this policy, otherwise the VM may get wrong CSB status
after one engine reset has completed.
v2: Split refine and fix patch, code refine (Zhenyu)
v3: Move active flag of vgpu scheduler into sched_data (Zhenyu)
Cc: Fred Gao <fred.gao@intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Stop the gvt scheduler timer if no vGPU exists; otherwise it keeps the
gvt service thread busy handling request-schedule events even though
no actual scheduling activity is required.
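A minimal sketch of the idea (names illustrative): keep the tick armed
only while at least one vGPU exists.

  /* when the last vGPU goes away, stop the 1ms scheduling tick;
   * it is re-armed when the next vGPU starts to be scheduled */
  if (idr_is_empty(&gvt->vgpu_idr))
          hrtimer_cancel(&sched_data->timer);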
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
The current schedule policy relies on a 1ms timer to execute workloads.
This can introduce up to 1ms of unnecessary latency, which is
especially bad for small media workloads.
I don't think we need this timer for QoS, but the change is not simply
removing the code. So I made a new API intel_gvt_kick_schedule() for
future change.
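A hedged sketch of what such an entry point can look like, assuming
the existing gvt service-request machinery (not the exact patch):

  /* Let callers request a schedule pass immediately instead of
   * waiting for the next 1ms timer tick. */
  void intel_gvt_kick_schedule(struct intel_gvt *gvt)
  {
          intel_gvt_request_service(gvt, INTEL_GVT_REQUEST_EVENT_SCHED);
  }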
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We have implemented a delayed ring mmio switch mechanism to reduce
unnecessary mmio switches. While the vGPU is being destroyed or
detached from the VM, we need to force the ring switch to the host
context. That latter deadline is missed, so there is a chance that a
workload from VM2 might execute under the ring context of VM1, which
was attached to the same vGPU instance. Finally, the GPU hangs.
This patch guarantees that both deadlines are met.
v2: Remove unused variable 'scheduler'
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
gvt-next-2017-06-08
First gvt-next pull for 4.13:
- optimization for per-VM mmio save/restore (Changbin)
- optimization for mmio hash table (Changbin)
- scheduler optimization with event (Ping)
- vGPU reset refinement (Fred)
- other misc refactor and cleanups, etc.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170608093547.bjgs436e3iokrzdm@zhen-hp.sh.intel.com
This patch decouples the time slice calculation from the scheduler,
letting other events trigger scheduling without impacting the
calculation for QoS.
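Sketch of the split (the event name is illustrative): the timer tick
keeps driving the QoS accounting, while a separate request type lets
other events kick the scheduler.

  enum {
          INTEL_GVT_REQUEST_SCHED = 0,    /* timer tick: timeslice accounting */
          INTEL_GVT_REQUEST_EVENT_SCHED,  /* new: event-triggered scheduling */
  };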
v2: add only one new enum definition.
v3: fix typo.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Commit ab9da627906a ("drm/i915: make context status notifier head be
per engine") gives us a chance to inspect every single request. We can
then eliminate unnecessary mmio switching for the same vGPU; we only
need mmio switching between different VMs (including the host).
This patch introduces a new general API intel_gvt_switch_mmio() to
replace the old intel_gvt_load/restore_render_mmio(). This function
can be further optimized for vGPU-to-vGPU switching.
To support individual ring switching, we track the owner occupying
each ring. When another VM or the host requests a ring, we do the mmio
context switch; otherwise there is no need to switch the ring.
This optimization is very useful if only one guest has plenty of
workloads and the host is mostly idle. In the best case no mmio
switching happens at all.
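A hedged sketch of the per-ring owner check (field names are
illustrative):

  /* only pay for an mmio context switch when ring ownership changes */
  if (scheduler->engine_owner[ring_id] != vgpu) {
          intel_gvt_switch_mmio(scheduler->engine_owner[ring_id],
                                vgpu, ring_id);
          scheduler->engine_owner[ring_id] = vgpu;
  }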
v2:
o fix missing ring switch issue. (chuanxiao)
o support individual ring switch.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There is no need to switch vgpu if the next vgpu is the same as the
current one; switching anyway causes a performance drop in some cases.
v2: correct the comments.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
As those debug messages might appear on every scheduler timer call,
they are too noisy, eat too much log space, and aren't meaningful.
So remove them.
Cc: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
It's too chatty to have three places telling us which vgpu is next to
be scheduled. My log file bloated until it ate all the disk space.
Cc: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The timeslice usage determines whether a vGPU gets a chance to be
scheduled at every vGPU switch checkpoint.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This method tries to guarantee precision at the second level, with the
adjustment conducted every 100ms. At the end of each vGPU switch,
calculate the sched time and subtract it from the allocated time
slice; the time slice allocated for every 100ms, together with the
remaining timeslice, is used to decide how much timeslice is allocated
to this vGPU in the next 100ms slice, with the end goal of
guaranteeing the weight ratio at the second level.
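A simplified sketch of the per-100ms balancing, assuming illustrative
field names (not the exact patch):

  s64 stage_ns = 100 * NSEC_PER_MSEC;     /* one 100ms balancing stage */
  int total_weight = 0;
  struct vgpu_sched_data *vgpu_data;

  /* sum the weights of all runnable vGPUs */
  list_for_each_entry(vgpu_data, &sched_data->lru_runq_head, lru_list)
          total_weight += vgpu_data->sched_ctl.weight;

  /* hand out each vGPU's proportional share; unused or overused time
   * from the previous stage carries over through left_ts */
  list_for_each_entry(vgpu_data, &sched_data->lru_runq_head, lru_list) {
          s64 fair_ns = stage_ns * vgpu_data->sched_ctl.weight /
                        total_weight;

          vgpu_data->allocated_ts = fair_ns;
          vgpu_data->left_ts += fair_ns;
  }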
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The weight defines proportional control of the physical GPU resource
shared between vGPUs. So far the weight is tied to a specific vGPU
type, i.e. when creating multiple vGPUs with different types, they
will inherit different weights.
E.g. the weight of type GVTg_V5_2 is 8 and the weight of type
GVTg_V5_4 is 4, so a vGPU of type GVTg_V5_2 gets double the GPU
resource of a vGPU of type GVTg_V5_4.
TODO: allow the user to control the weight setting in the future.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Factor out the scheduler into a clearer structure; the basic logic is
to find the next vGPU first and then schedule it.
vGPUs are ordered in an LRU list; the scheduler scans from the LRU
list head and chooses the first vGPU that has a pending workload.
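A minimal sketch of the selection step (names illustrative):

  /* take the least-recently-scheduled vGPU that has work pending */
  static struct intel_vgpu *find_busy_vgpu(struct gvt_sched_data *sched_data)
  {
          struct vgpu_sched_data *vgpu_data;

          list_for_each_entry(vgpu_data, &sched_data->lru_runq_head,
                              lru_list) {
                  if (vgpu_has_pending_workload(vgpu_data->vgpu))
                          return vgpu_data->vgpu;
          }

          return NULL;
  }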
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add statistics routines to collect the time when a vGPU is scheduled
in/out and the time of the last ctx submission.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Currently the scheduler is triggered by delayed_work, which doesn't
provide precision at microsecond level. Move to hrtimer instead for
more accurate control.
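A hedged sketch of the hrtimer-based tick, replacing the delayed_work
(names are illustrative):

  static enum hrtimer_restart tbs_timer_fn(struct hrtimer *timer)
  {
          struct gvt_sched_data *data =
                  container_of(timer, struct gvt_sched_data, timer);

          intel_gvt_request_service(data->gvt, INTEL_GVT_REQUEST_SCHED);

          /* re-arm one period (1ms) later for finer-grained control */
          hrtimer_add_expires_ns(timer, data->period);
          return HRTIMER_RESTART;
  }

  static void tbs_sched_start(struct gvt_sched_data *data)
  {
          hrtimer_init(&data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
          data->timer.function = tbs_timer_fn;
          hrtimer_start(&data->timer,
                        ktime_add_ns(ktime_get(), data->period),
                        HRTIMER_MODE_ABS);
  }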
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fix to correctly assign 1ms for the gvt scheduler interval time, as
the previous code using HZ was pretty broken. Also use no delay when
starting the gvt scheduler function.
Fixes: 4b63960ebd ("drm/i915/gvt: vGPU schedule policy framework")
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: stable@vger.kernel.org # v4.10+
Acked-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Remove the below unimportant log message, which is too noisy:
'no current vgpu search from q head'
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Remove the variable 'execlist' as it's unused in function
vgpu_has_pending_workload.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Mark all local functions & variables as static.
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Switch to the new for_each_engine() helper to properly access enabled
intel_engine_cs, as the i915 core has changed them to be dynamically
managed. At GVT-g init time we still depend on the ring mask to
determine the engine list, as init runs earlier.
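A minimal sketch of the new iteration style (the helper's argument
list has varied across kernel versions):

  struct intel_engine_cs *engine;
  enum intel_engine_id id;

  /* visits only the engines that are actually enabled on this device */
  for_each_engine(engine, dev_priv, id) {
          /* per-engine GVT setup */
  }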
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
i915 core should only call functions and use structures exposed
through intel_gvt.h. Remove internal gvt.h and i915_pvinfo.h.
Change the internal intel_gvt structure to a private handle so that
the gvt internal structure does not need to be exposed to i915 core.
v2: Fix per Chris's comment
- carefully handle dev_priv->gvt assignment
- add necessary bracket for macro helper
- forward declaration of struct intel_gvt
- keep the free operation within the same file that handles the alloc
v3: fix use after free and remove intel_gvt.initialized
v4: change to_gvt() to an inline
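A hedged sketch of the private-handle approach and the to_gvt() inline
mentioned in v4 (the struct layout shown is illustrative):

  struct intel_gvt;                       /* opaque to i915 core */

  struct drm_i915_private {
          /* ... */
          struct intel_gvt *gvt;          /* NULL when GVT-g is disabled */
  };

  /* inside gvt code, where the full definition is visible */
  static inline struct intel_gvt *to_gvt(struct drm_i915_private *i915)
  {
          return i915->gvt;
  }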
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a vGPU schedule policy framework, with a
timer-based schedule policy module for now.
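A hedged sketch of what a policy-ops interface for such a framework
can look like (member names are illustrative):

  struct intel_gvt_sched_policy_ops {
          int  (*init)(struct intel_gvt *gvt);
          void (*clean)(struct intel_gvt *gvt);
          int  (*init_vgpu)(struct intel_vgpu *vgpu);
          void (*clean_vgpu)(struct intel_vgpu *vgpu);
          void (*start_schedule)(struct intel_vgpu *vgpu);
          void (*stop_schedule)(struct intel_vgpu *vgpu);
  };

  /* the timer-based policy is simply one implementation of these ops */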
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>