Add three IPS operation functions to test/set/clear IPS in PDE.
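As a rough illustration (not the actual GVT helpers), the three operations boil down to testing, setting and clearing one bit of the PDE value; the bit position below is an assumption:

  /* Minimal sketch; assumes the IPS flag sits in bit 11 of a 64-bit PDE
   * value (the bit position is illustrative, not taken from gvt/gtt.h). */
  #include <linux/bits.h>
  #include <linux/types.h>

  #define SKETCH_PDE_IPS_64K BIT_ULL(11)  /* hypothetical position */

  static bool sketch_pde_test_ips(u64 pde)  { return pde & SKETCH_PDE_IPS_64K; }
  static u64 sketch_pde_set_ips(u64 pde)    { return pde | SKETCH_PDE_IPS_64K; }
  static u64 sketch_pde_clear_ips(u64 pde)  { return pde & ~SKETCH_PDE_IPS_64K; }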
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a new entry type GTT_TYPE_PPGTT_PTE_64K_ENTRY. A 64K entry is very
different from a 2M/1G entry: a 64K entry is controlled by the IPS bit in
the upper PDE. To leverage the current logic, treat the IPS bit as 'PSE'
for the PTE level, which means 64K entries can also be processed by
get_pse_type().
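In sketch form, the type selection then just consults the parent PDE's IPS bit, exactly as a PSE-style check would (the enum values mirror the names in the commit text; the logic is illustrative):

  #include <linux/types.h>

  /* Illustrative enum; the real values live in gvt/gtt.h. */
  enum sketch_gtt_type {
          GTT_TYPE_PPGTT_PTE_4K_ENTRY,
          GTT_TYPE_PPGTT_PTE_64K_ENTRY,
  };

  static enum sketch_gtt_type sketch_pte_entry_type(bool parent_pde_ips)
  {
          /* IPS in the parent PDE plays the role of PSE for the PTE level. */
          return parent_pde_ips ? GTT_TYPE_PPGTT_PTE_64K_ENTRY
                                : GTT_TYPE_PPGTT_PTE_4K_ENTRY;
  }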
v2: Make it bisectable.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Different OSes might handle GVT PPGTT creation/destroy notifications
differently during a vGPU reset. A better approach is to invalidate all
vGPU PPGTT mm objects during vGPU reset.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We don't know how many page tables will be shadowed; it varies
considerably with guest load. A radix tree is a better choice for us,
since the Page Frame Number is used as the key and most of the key bits
are common.
Here is some performance data (duration in us) of looking up an
element:
Before: (aka. ppgtt_find_shadow_page)
0.308 0.292 0.246 0.432 0.143 ... 0.311 0.225 0.382 0.199 0.325
After: (aka. intel_vgpu_find_spt_by_mfn)
0.106 0.106 0.107 0.106 0.105 0.107 ... 0.107 0.109 0.105 0.108
This time I didn't capture early-stage data for the hash table; the data
is measured after the desktop is shown.
As with the last change, the overall benchmark is almost unchanged, but
we get better scalability.
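A minimal sketch of the mfn-keyed lookup using the kernel radix tree API (the surrounding structure and function names are illustrative):

  #include <linux/radix-tree.h>

  /* Illustrative container; in GVT the tree lives in the vGPU gtt state. */
  struct sketch_gtt {
          struct radix_tree_root spt_tree;   /* keyed by shadow page mfn */
  };

  static void *sketch_find_spt_by_mfn(struct sketch_gtt *gtt, unsigned long mfn)
  {
          /* mfn keys share most of their high bits, so the tree stays
           * shallow and lookups take near-constant time. */
          return radix_tree_lookup(&gtt->spt_tree, mfn);
  }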
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch provides a generic page_track infrastructure for
write-protected guest pages. The old page_track logic is rewritten and
now lives in a new standalone page_track.c. This page track
infrastructure can be used by both vGUC and GTT shadowing.
The important change is that it uses a radix tree instead of a hash
table, since we don't have a predictable number of pages that will be
tracked.
Here is some performance data (duration in us) of looking up an element:
Before: (aka. intel_vgpu_find_tracked_page)
0.091 0.089 0.090 ... 0.093 0.091 0.087 ... 0.292 0.285 0.292 0.291
After: (aka. intel_vgpu_find_page_track)
0.104 0.105 0.100 0.102 0.102 0.100 ... 0.101 0.101 0.105 0.105
The hash table has good performance at the beginning, but degrades as
more pages are tracked, even when no 3D applications are running. As
expected, the radix tree shows a stable and very short lookup duration.
The overall benchmark (tested with Heaven Benchmark) is only marginally
improved, since this is not the bottleneck; what we gain from this change
is scalability.
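As a sketch of the new infrastructure, inserting and finding a tracked page keyed by gfn might look like this (the node layout and names are assumptions; only the radix tree calls are real kernel API):

  #include <linux/errno.h>
  #include <linux/radix-tree.h>
  #include <linux/slab.h>

  /* Illustrative page-track node. */
  struct sketch_page_track {
          unsigned long gfn;
          void *priv;
  };

  static int sketch_register_page_track(struct radix_tree_root *tree,
                                        unsigned long gfn, void *priv)
  {
          struct sketch_page_track *t = kzalloc(sizeof(*t), GFP_KERNEL);
          int ret;

          if (!t)
                  return -ENOMEM;
          t->gfn = gfn;
          t->priv = priv;
          ret = radix_tree_insert(tree, gfn, t);  /* -EEXIST if already tracked */
          if (ret)
                  kfree(t);
          return ret;
  }

  static struct sketch_page_track *sketch_find_page_track(struct radix_tree_root *tree,
                                                          unsigned long gfn)
  {
          return radix_tree_lookup(tree, gfn);
  }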
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The target structure of some functions is struct intel_vgpu_ppgtt_spt,
yet their names are xxx_shadow_page; they should be
xxx_shadow_page_table. Let's use the short name 'spt' instead to reduce
the length, and rename the hash table likewise.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This is another big one: the GVT shadow page management code is heavily
refined.
The new code uses only struct intel_vgpu_ppgtt_spt to represent a vGPU
shadow page table, with or without a guest page associated. A pure shadow
page (no guest page associated) will be used to shadow a split 2M huge
GTT entry; in this case, spt.guest_page.gfn should be zero.
To search for an existing shadow page table, we have two new interfaces:
- intel_vgpu_find_spt_by_gfn(), find an spt by guest gfn; it must not
be a pure spt (see the sketch below).
- intel_vgpu_find_spt_by_mfn(), find the spt using the shadow page mfn in
a shadowed PTE.
The oos_page management remains as it was.
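A sketch of the gfn lookup and its pure-spt check (the helper walking the tracked pages is hypothetical, as are the type names):

  /* Illustrative types; the real ones are in gvt/gtt.h. */
  struct sketch_vgpu;
  struct sketch_spt {
          struct {
                  unsigned long gfn;   /* zero for a pure spt */
          } guest_page;
  };

  /* Hypothetical helper that resolves a tracked gfn to its spt. */
  struct sketch_spt *sketch_lookup_tracked_spt(struct sketch_vgpu *vgpu,
                                               unsigned long gfn);

  static struct sketch_spt *sketch_find_spt_by_gfn(struct sketch_vgpu *vgpu,
                                                   unsigned long gfn)
  {
          struct sketch_spt *spt = sketch_lookup_tracked_spt(vgpu, gfn);

          /* A pure spt has no guest page and must not be found by gfn. */
          if (spt && !spt->guest_page.gfn)
                  return NULL;
          return spt;
  }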
v2: Split some changes into small standalone patches.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Factor out these two interfaces so we can kill some duplicated code in
scheduler.c.
v2:
- rename to intel_vgpu_{get,put}_ppgtt_mm
- refine handle_g2v_notification
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Accurate names help avoid confusion and improve readability.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Separate ggtt and ppgtt handling since they are different. This adds a
little more code but is straightforward.
Also move these helpers to gtt.c since that is the only client.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
If we manage an object with a reference count, then its life cycle must
follow the reference count operations. Meanwhile, rename the operation
functions to the generic names *get* and *put*.
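A minimal kref-based sketch of what following the reference count means here (the object and release hook are illustrative):

  #include <linux/kref.h>
  #include <linux/slab.h>

  struct sketch_obj {
          struct kref ref;
          /* ... shadow page table state ... */
  };

  static void sketch_obj_release(struct kref *ref)
  {
          struct sketch_obj *obj = container_of(ref, struct sketch_obj, ref);

          /* tear down whatever the object owns, then free it */
          kfree(obj);
  }

  static void sketch_obj_get(struct sketch_obj *obj)
  {
          kref_get(&obj->ref);
  }

  static void sketch_obj_put(struct sketch_obj *obj)
  {
          /* destruction happens only when the last reference is dropped */
          kref_put(&obj->ref, sketch_obj_release);
  }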
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This is a big one: the GVT shadow graphics memory management code is
heavily refined. The new code is more straightforward, with less code.
The struct intel_vgpu_mm is restructured to be clearly defined, with
accurate names, and some of the original fields that were really
redundant are removed.
Now we only manage ppgtt mm objects on mm->ppgtt_mm.lru_list. There is no
need to mix ppgtt and ggtt together, since one vGPU has only one ggtt
object.
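Roughly, the restructured mm object keeps ggtt and ppgtt state in a union, with the LRU linkage only on the ppgtt side (the field names and the number of PDPs below are illustrative, not the exact layout):

  #include <linux/kref.h>
  #include <linux/list.h>
  #include <linux/types.h>

  /* Illustrative layout only; the real struct is intel_vgpu_mm. */
  struct sketch_vgpu_mm {
          int type;                 /* GGTT or PPGTT */
          struct kref ref;
          union {
                  struct {
                          u64 guest_pdps[4];
                          u64 shadow_pdps[4];
                          struct list_head lru_list; /* only ppgtt mm is on an LRU */
                  } ppgtt_mm;
                  struct {
                          void *virtual_ggtt;
                  } ggtt_mm;
          };
  };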
v4: Don't invoke ppgtt_free_all_shadow_page before intel_vgpu_destroy_all_ppgtt_mm.
v3: Add GVT_RING_CTX_NR_PDPS to avoid confusion about the PDPs.
v2: Split some changes into small standalone patches.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
It's a bit confusing that the page write-protect handler lives in the
MMIO emulation handler. This moves it to a standalone gvt ops hook.
Also remove the unnecessary check of write-protected page access in the
MMIO read handler and clean up the handling of the failsafe case.
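For illustration, the standalone hook would sit alongside the MMIO emulation callbacks in the gvt ops table (the layout below is a sketch, not the actual intel_gvt_ops definition):

  #include <linux/types.h>

  struct sketch_vgpu;

  /* Sketch of an ops table with a separate write-protect handler. */
  struct sketch_gvt_ops {
          int (*emulate_mmio_read)(struct sketch_vgpu *vgpu, u64 pa,
                                   void *p_data, unsigned int bytes);
          int (*emulate_mmio_write)(struct sketch_vgpu *vgpu, u64 pa,
                                    void *p_data, unsigned int bytes);
          int (*write_protect_handler)(struct sketch_vgpu *vgpu, u64 pa,
                                       void *p_data, unsigned int bytes);
  };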
v2: rebase
Reviewed-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This reverts commit b20d09886fd1b74cd2255d846029a049e524db14.
This caused Windows driver boot errors due to invalid page addresses.
Revert for now.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Refine the previously broken PPGTT scratch. The scratch PTE was not
correctly handled, and the handling of scratch entries in the page table
walk was not well organized, which leaves gaps for introducing lazy
shadowing.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
As we want to re-use intel_vgpu_shadow_page in building the scratch page
table and we don't want to put the scratch page table page into the hash
table, a new param is introduced to let the caller decide whether a
shadow page should be put into the hash table.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
As there is already an I915_GTT_PAGE_SIZE macro in i915, let GVT-g use it
as well. This patch also renames some GTT macros with an additional
prefix.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
As the data structure of "intel_vgpu_guest_page" will become much heavier
in the future, it's better to factor out the guest memory page track
mechanism as early as possible.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
When doing a vGPU reset, we don't need to reset gtt/ppgtt. Doing so
makes GVT re-shadow the ppgtt for every workload and causes really bad
performance after a vGPU reset.
This patch makes sure the ppgtt cleanup only happens at device module
level reset to fix this.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
While debugging the gtt code, I found that intel_vgpu_gma_to_gpa() can
translate any given GMA even though the GMA is not valid. This is because
the GTT ops suppress possible errors, which may result in an invalid PT
entry being retrieved by the upper caller.
This patch changes the prototypes of the pte ops to propagate status to
callers. Then we make sure the GTT walker stops as soon as an error is
detected, to prevent undefined behavior.
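In sketch form, the prototype change turns the pte ops into status-returning functions with an out-parameter, so the walker can bail out at the first failure (signatures are illustrative):

  #include <linux/types.h>

  /* Before (sketch): the entry was always returned, errors swallowed.
   *   u64 (*get_entry)(void *pt, unsigned long index);
   * After (sketch): status is propagated, the entry goes to an out-param. */
  struct sketch_gtt_pte_ops {
          int (*get_entry)(void *pt, unsigned long index, u64 *entry);
          int (*set_entry)(void *pt, unsigned long index, u64 entry);
  };

  static int sketch_walk_one_level(const struct sketch_gtt_pte_ops *ops,
                                   void *pt, unsigned long index, u64 *entry)
  {
          int ret = ops->get_entry(pt, index, entry);

          if (ret)
                  return ret;   /* stop the walk as early as possible */
          return 0;
  }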
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a new function intel_vgpu_reset_gtt() to reset all
GTT related state, including GGTT, PPGTT and the scratch page. This
function frees all shadowed PPGTTs, clears all GGTT entries, and clears
the scratch page to all zero. After this, we can ensure no gtt related
information is leaked from one VM to another when a vgpu instance is
assigned across different VMs (not simultaneously).
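The flow it describes amounts to something like the sketch below (the helpers and the scratch-page field are hypothetical stand-ins for the real teardown routines):

  #include <linux/mm.h>     /* PAGE_SIZE */
  #include <linux/string.h> /* memset */

  struct sketch_vgpu {
          void *scratch_page_va;   /* kernel mapping of the scratch page */
  };

  /* Hypothetical helpers standing in for the real routines. */
  void sketch_free_all_shadow_ppgtt(struct sketch_vgpu *vgpu);
  void sketch_clear_all_ggtt_entries(struct sketch_vgpu *vgpu);

  static void sketch_vgpu_reset_gtt(struct sketch_vgpu *vgpu)
  {
          sketch_free_all_shadow_ppgtt(vgpu);    /* drop every shadowed PPGTT    */
          sketch_clear_all_ggtt_entries(vgpu);   /* GGTT entries back to scratch */
          memset(vgpu->scratch_page_va, 0, PAGE_SIZE); /* scrub the scratch page */
  }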
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The GGTT space is partitioned between vGPUs; it can be reused by the
next vGPU after the previous one is released, so the stale entries need
to point to the scratch page when a vGPU is created.
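In sketch form, the gtt init path walks this vGPU's GGTT partition and rewrites every slot with the scratch PTE (names and types are illustrative):

  #include <linux/types.h>

  /* Sketch: overwrite every GGTT entry in this vGPU's partition with the
   * scratch PTE so nothing stale from a previous vGPU remains visible. */
  static void sketch_reset_ggtt_range(u64 *ggtt_va, unsigned long first_index,
                                      unsigned long nr_entries, u64 scratch_pte)
  {
          unsigned long i;

          for (i = 0; i < nr_entries; i++)
                  ggtt_va[first_index + i] = scratch_pte;
  }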
v2: Reset logic moved to vGPU create.
v3: Correct the commit msg.
v4: Move the reset function to the vGPU gtt init function; as a result
it is no longer needed explicitly in the vGPU reset logic, since vGPU
gtt init is called during reset.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
All the unused entries in the page table tree (PML4E->PDPE->PDE->PTE)
should point to a scratch page table/scratch page to avoid page walk
errors caused by page prefetching.
When an entry in a shadow PPGTT is removed, it also needs to map to a
scratch page. The older implementation used a single scratch page for
entries at all levels, which does not match the page walk behavior when
the removed entry is in the PML4, PDP or PD. To avoid potential page walk
errors, this patch implements a scratch page tree to replace the single
scratch page.
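The idea, in sketch form: allocate one scratch page/table per level and fill each level's scratch table with entries pointing at the level below, so a removed PDPE maps to the scratch PD, a removed PDE to the scratch PT, and so on (the helpers and level names are hypothetical):

  #include <linux/errno.h>

  enum sketch_level { SK_PAGE = 0, SK_PT, SK_PD, SK_PDP, SK_PML4, SK_NR_LEVELS };

  struct sketch_scratch {
          unsigned long page_mfn[SK_NR_LEVELS]; /* scratch page/table per level */
  };

  /* Hypothetical helpers. */
  unsigned long sketch_alloc_scratch_page(void);
  void sketch_fill_table_with(unsigned long table_mfn, unsigned long child_mfn);

  static int sketch_build_scratch_tree(struct sketch_scratch *s)
  {
          enum sketch_level l;

          for (l = SK_PAGE; l < SK_NR_LEVELS; l++) {
                  s->page_mfn[l] = sketch_alloc_scratch_page();
                  if (!s->page_mfn[l])
                          return -ENOMEM;
                  if (l > SK_PAGE)
                          /* every entry of the level-l table points at level l - 1 */
                          sketch_fill_table_with(s->page_mfn[l], s->page_mfn[l - 1]);
          }
          return 0;
  }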
v2: more details in the commit message to address Kevin's comments.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The vGPU graphics memory emulation framework is responsible for graphics
memory table virtualization. In a virtualization environment, a VM
populates its page table entries with guest page frame numbers
(GPFN/GFN), while the HW needs a page table filled with MFNs (machine
frame numbers). The relationship between GFN and MFN is managed by the
hypervisor, and GEN HW has no such knowledge to translate a GFN. To
bridge this gap, shadow GGTT/PPGTT page tables are introduced.
For GGTT, the GFN inside the guest GGTT page table entry is translated
into an MFN and written into the physical GTT MMIO registers when the
guest writes the virtual GTT MMIO registers (a sketch of this path
follows the list below).
For PPGTT, a write-protected shadow PPGTT page table is created,
translated from the guest PPGTT page table, and the shadow page table
root pointers are written into the shadow context after a guest workload
is shadowed.
The vGPU graphics memory emulation framework consists of:
- Per-GEN-HW-platform page table entry bit extract/de-extract routines.
- GTT MMIO register emulation handlers, which issue a hypercall to do the
GFN->MFN translation when the guest writes a GTT MMIO register.
- PPGTT shadow page table routines, e.g. shadow create/destroy/out-of-sync.
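For the GGTT path, the write emulation in sketch form: pull the GFN out of the guest PTE, translate it via the hypervisor, rebuild the PTE with the MFN and write it into the real GTT (the translation call and the PTE layout below are assumptions):

  #include <linux/errno.h>
  #include <linux/types.h>

  #define SKETCH_INVALID_MFN (~0UL)

  /* Hypothetical hypercall-backed translation. */
  unsigned long sketch_gfn_to_mfn(unsigned long gfn);

  static int sketch_emulate_ggtt_write(u64 *real_gtt, unsigned long index,
                                       u64 guest_pte)
  {
          unsigned long gfn = guest_pte >> 12;   /* assume a 4K-aligned PFN field */
          unsigned long mfn = sketch_gfn_to_mfn(gfn);

          if (mfn == SKETCH_INVALID_MFN)
                  return -EINVAL;

          /* keep the low flag bits from the guest entry, swap in the MFN */
          real_gtt[index] = ((u64)mfn << 12) | (guest_pte & 0xfffULL);
          return 0;
  }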
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>