Commit Graph

5 Commits

Author SHA1 Message Date
Zhenyu Wang
a752b070a6 drm/i915/gvt: Fix function comment doc errors
Caught by a W=1 build; fix the leftover wrong function comment docs.
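
As a hypothetical illustration of the class of error a W=1 build reports
(the function and its parameters below are made up, not taken from the
patch): the names after '@' in a kernel-doc comment must match the real
parameters, and a stale or misspelled one triggers a warning.

#include <linux/types.h>

struct sample_vgpu;

/**
 * sample_emulate_cfg_read - emulate a vGPU configuration space read
 * @vgpu: the vGPU issuing the access
 * @offset: offset into the configuration space
 * @p_data: buffer that receives the read result
 * @bytes: access width in bytes
 *
 * Returns: 0 on success, a negative error code on failure.
 */
int sample_emulate_cfg_read(struct sample_vgpu *vgpu, unsigned int offset,
                            void *p_data, unsigned int bytes);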

Reviewed-by: Hang Yuan <hang.yuan@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
2018-08-07 10:39:53 +08:00
Colin Xu
f25a49ab8a drm/i915/gvt: Use vgpu_lock to protect per vgpu access
The patch set splits out 2 small locks from the original big gvt lock:
  - vgpu_lock protects per-vGPU data and logic, especially the vGPU
    trap emulation path.
  - sched_lock protects the gvt scheduler structure, context schedule logic
    and each vGPU's schedule data.

Use vgpu_lock to replace the gvt big lock. With this change, the
mmio read/write trap path, vgpu virtual event emulation and other
vgpu-related processing are protected by the per-vGPU vgpu_lock.
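
As a rough illustration of this locking pattern, here is a minimal sketch
with hypothetical sample_ names (not the actual gvt code), assuming a
mutex embedded in each vGPU structure:

/* Minimal sketch of the per-vGPU locking pattern described above; the
 * sample_ structures and function are hypothetical stand-ins.
 */
#include <linux/mutex.h>
#include <linux/errno.h>
#include <linux/types.h>

struct sample_vgpu {
        struct mutex vgpu_lock; /* protects per-vGPU data and the trap path;
                                 * initialized with mutex_init() at vGPU creation
                                 */
        bool active;
        /* ... per-vGPU emulation state ... */
};

static int sample_mmio_write_trap(struct sample_vgpu *vgpu, u64 pa,
                                  void *p_data, unsigned int bytes)
{
        int ret = 0;

        /* The single global gvt lock used to serialize all vGPUs here;
         * with the split, only the owning vGPU's lock is taken, so traps
         * from different vGPUs no longer contend with each other.
         */
        mutex_lock(&vgpu->vgpu_lock);
        if (!vgpu->active) {
                mutex_unlock(&vgpu->vgpu_lock);
                return -EINVAL;
        }
        /* ... perform the emulated mmio write under vgpu_lock ... */
        mutex_unlock(&vgpu->vgpu_lock);
        return ret;
}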

v9:
  - Change commit author since the patches are improved a lot compared
    with the original version.
    Original author: Pei Zhang <pei.zhang@intel.com>
  - Rebase to latest gvt-staging.
v8:
  - Correct coding and comment style.
  - Rebase to latest gvt-staging.
v7:
  - Remove gtt_lock since it is already protected by gvt_lock and vgpu_lock.
  - Fix a typo in intel_gvt_deactivate_vgpu that unlocked the wrong lock.
v6:
  - Rebase to latest gvt-staging.
v5:
  - Rebase to latest gvt-staging.
  - intel_vgpu_page_track_handler should use vgpu_lock.
v4:
  - Rebase to latest gvt-staging.
  - Protect vgpu->active access with vgpu_lock.
  - Do not wait for GPU idle while holding vgpu_lock.
v3: update to latest code base
v2: add gvt->lock in function gvt_check_vblank_emulation

Performance comparison on Kabylake platform.
  - Configuration:
    Host: Ubuntu 16.04.
    Guest 1 & 2: Ubuntu 16.04.

glmark2 score comparison:
  - Configuration:
    Host: glxgears.
    Guests: glmark2.
+--------------------------------+-----------------+
| Setup                          | glmark2 score   |
+--------------------------------+-----------------+
| unified lock, iommu=on         | 58~62 (avg. 60) |
+--------------------------------+-----------------+
| unified lock, iommu=igfx_off   | 57~61 (avg. 59) |
+--------------------------------+-----------------+
| per-logic lock, iommu=on       | 60~68 (avg. 64) |
+--------------------------------+-----------------+
| per-logic lock, iommu=igfx_off | 61~67 (avg. 64) |
+--------------------------------+-----------------+

lock_stat comparison:
  - Configuration:
    Stop lock stat immediately after boot up.
    Boot 2 VM Guests.
    Run glmark2 in guests.
    Start perf lock_stat for 20 seconds and stop again.
  - Legend: c - contentions; w - waittime-avg
+------------+-----------------+-----------+---------------+------------+
|            | gvt_lock        |sched_lock | vgpu_lock     | gtt_lock   |
+ lock type; +-----------------+-----------+---------------+------------+
| iommu set  | c     | w       | c  | w    | c    | w      | c   | w    |
+------------+-------+---------+----+------+------+--------+-----+------+
| unified;   | 20697 | 839     |N/A | N/A  | N/A  | N/A    | N/A | N/A  |
| on         |       |         |    |      |      |        |     |      |
+------------+-------+---------+----+------+------+--------+-----+------+
| unified;   | 21838 | 658.15  |N/A | N/A  | N/A  | N/A    | N/A | N/A  |
| igfx_off   |       |         |    |      |      |        |     |      |
+------------+-------+---------+----+------+------+--------+-----+------+
| per-logic; | 1553  | 1599.96 |9458|429.97| 5846 | 274.33 | 0   | 0.00 |
| on         |       |         |    |      |      |        |     |      |
+------------+-------+---------+----+------+------+--------+-----+------+
| per-logic; | 1911  | 1678.32 |8335|445.16| 5451 | 244.80 | 0   | 0.00 |
| igfx_off   |       |         |    |      |      |        |     |      |
+------------+-------+---------+----+------+------+--------+-----+------+

Signed-off-by: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
2018-05-18 12:39:02 +08:00
Xiong Zhang
991ecefbdd drm/i915/gvt: Return error at the failure of finding page_track
In XenGT, ioreq copy is used to trap both mmio writes and ppgtt writes.
Both are memory writes, so the ioreq handler cannot distinguish them by
itself. Instead, the ioreq handler probes the ppgtt write handler: if it
succeeds, the ioreq is a ppgtt write; otherwise it is an mmio write.

Therefore the ppgtt write handler should return an error when it fails to
find a page track; this is essential for implementing the ioreq handler
in XenGT.
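
Roughly, the probe-then-fall-back dispatch described above looks like the
following sketch; every sample_ name is a hypothetical stand-in, not the
real XenGT/gvt code:

#include <linux/types.h>

struct sample_vgpu;

int sample_page_track_handler(struct sample_vgpu *vgpu, u64 pa,
                              void *data, unsigned int bytes);
int sample_emulate_mmio_write(struct sample_vgpu *vgpu, u64 pa,
                              void *data, unsigned int bytes);

static int sample_handle_ioreq_write(struct sample_vgpu *vgpu, u64 pa,
                                     void *data, unsigned int bytes)
{
        /* Probe the page-track (ppgtt) write handler first.  With this
         * fix it returns an error when no page track covers the address...
         */
        if (!sample_page_track_handler(vgpu, pa, data, bytes))
                return 0;       /* it was a write to a tracked ppgtt page */

        /* ...so the ioreq handler can fall back to plain mmio emulation. */
        return sample_emulate_mmio_write(vgpu, pa, data, bytes);
}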

Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
2018-03-06 14:49:38 +08:00
Xiong Zhang
7e60946feb drm/i915/gvt: Release gvt->lock at the failure of finding page track
page_track_handler takes the lock at the beginning, so the lock must also
be released when finding the page track fails. Otherwise a deadlock will
happen.
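
The shape of the fix, as a hedged sketch with hypothetical sample_ names
standing in for the gvt code:

#include <linux/mutex.h>
#include <linux/errno.h>
#include <linux/types.h>

struct sample_page_track;

struct sample_vgpu {
        struct mutex lock;      /* stands in for gvt->lock in the real code */
};

struct sample_page_track *sample_find_page_track(struct sample_vgpu *vgpu,
                                                 unsigned long gfn);
int sample_page_track_write(struct sample_page_track *pt, u64 gpa,
                            void *data, unsigned int bytes);

static int sample_page_track_handler(struct sample_vgpu *vgpu, u64 gpa,
                                     void *data, unsigned int bytes)
{
        struct sample_page_track *pt;
        int ret;

        mutex_lock(&vgpu->lock);

        pt = sample_find_page_track(vgpu, gpa >> 12);
        if (!pt) {
                /* Before the fix, this path returned with the lock held. */
                ret = -ENXIO;
                goto out_unlock;
        }

        ret = sample_page_track_write(pt, gpa, data, bytes);
out_unlock:
        mutex_unlock(&vgpu->lock);
        return ret;
}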

Fixes: e502a2af4c ("drm/i915/gvt: Provide generic page_track infrastructure for write-protected page")
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
2018-03-06 14:49:24 +08:00
Changbin Du
e502a2af4c drm/i915/gvt: Provide generic page_track infrastructure for write-protected page
This patch provides a generic page_track infrastructure for write-protected
guest pages. The old page_track logic is rewritten and now lives in a new
standalone page_track.c. This page track infrastructure can be used by both
vGuC and GTT shadowing.

The important change is that it uses a radix tree instead of a hash table,
since we don't have a predictable number of pages that will be tracked.
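
As a minimal sketch of this idea (illustrative sample_ names, not the
actual page_track.c API), tracked pages can be kept in a radix tree keyed
by guest frame number:

#include <linux/radix-tree.h>
#include <linux/types.h>

struct sample_page_track {
        int (*handler)(struct sample_page_track *pt, u64 gpa,
                       void *data, unsigned int bytes);
        void *priv;
};

struct sample_vgpu {
        struct radix_tree_root page_track_tree; /* gfn -> sample_page_track */
};

static struct sample_page_track *
sample_find_page_track(struct sample_vgpu *vgpu, unsigned long gfn)
{
        /* Lookup cost stays flat as the number of tracked pages grows,
         * unlike a fixed-size hash table whose chains get longer.
         */
        return radix_tree_lookup(&vgpu->page_track_tree, gfn);
}

static int sample_register_page_track(struct sample_vgpu *vgpu,
                                      unsigned long gfn,
                                      struct sample_page_track *track)
{
        /* The tree needs INIT_RADIX_TREE(&vgpu->page_track_tree, GFP_KERNEL)
         * at vGPU creation before the first insert.
         */
        return radix_tree_insert(&vgpu->page_track_tree, gfn, track);
}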

Here is some performance data (duration in us) of looking up an element:
Before: (aka. intel_vgpu_find_tracked_page)
 0.091 0.089 0.090 ... 0.093 0.091 0.087 ... 0.292 0.285 0.292 0.291
After: (aka. intel_vgpu_find_page_track)
 0.104 0.105 0.100 0.102 0.102 0.100 ... 0.101 0.101 0.105 0.105

The hash table performs well at the beginning, but degrades as more pages
are tracked, even when no 3D applications are running. As expected, the
radix tree lookup has a stable duration and is very quick.

The overall benchmark (tested with the Heaven Benchmark) improves only
marginally, since this path is not the bottleneck. What we gain more from
this change is scalability.

Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
2018-03-06 13:19:20 +08:00