Merge tag 'drm-intel-next-2018-06-06' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

- Ice Lake's display enabling patches (Jose, Mahesh, Dhinakaran, Paulo, Manasi, Anusha, Arkadiusz)
- Ice Lake's workarounds (Oscar and Yunwei)
- Ice Lake interrupt registers fixes (Oscar)
- Context switch timeline fixes and improvements (Chris)
- Spelling fixes (Colin)
- GPU reset fixes and improvements (Chris)
  - Including fixes on execlist and preemption for a proper GPU reset (Chris)
- Clean up the port pipe select bits (Ville)
- Other execlist improvements (Chris)
- Remove unused enable_cmd_parser parameter (Chris)
- Fix order of enabling pipe/transcoder/planes on HSW+ to avoid hang on ICL (Paulo)
- Simplifications and changes to intel_context (Chris)
- Disable LVDS on Radiant P845 (Ondrej)
- Improve HSW/BDW voltage swing handling (Ville)
- Cleanups and renames in a few parts of the intel_dp code to make it clearer and less confusing (Ville)
- Move ACPI lid notification code to fix LVDS (Chris)
- Speed up GPU idle detection (Chris)
- Make intel_engine_dump irqsafe (Chris)
- Fix GVT crash (Zhenyu)
- Move GEM BO inside drm_framebuffer and use intel_fb_obj everywhere (Chris)
- Revert eDP's alternate fixed mode (Jani)
- Protect tainted function pointer lookup (Chris)
  - And subsequent unsigned long size fix (Chris)
- Allow page directory allocation to fail (Chris)
- VBT eDP and LVDS fixes and clean-up (Ville)
- Many other reorganizations and cleanups on DDI and DP code, as well as on scaler and planes (Ville)
- Selftests: pin the mock kernel context (Chris)
- Many PSR fixes, clean-ups and improvements (Dhinakaran)
- PSR VBT fix (Vathsala)
- Fix i915_scheduler and intel_context declaration (Tvrtko)
- Improve PCH underrun detection on ILK-IVB (Ville)
- A few s/dev_priv/i915 renames (Chris, Michal)
- Notify opregion of the sanitized encoder state (Maarten)
- GuC event handling improvements and fixes for initialization failures (Michal)
- Many GTT fixes and improvements (Chris)
- Fixes and improvements for suspending and freezing safely (Chris)
- i915_gem init and fini cleanup and fixes (Michal)
- Remove obsolete switch_mm for gen8+ (Chris)
- HW and context ID fixes for GuC (Lionel)
- Add new vGPU cap info bit VGT_CAPS_HUGE_GTT (Changbin)
- Make context pin/unpin symmetric (Chris); a short usage sketch follows this list
- vma: Move the bind_count vs pin_count assertion to a helper (Chris)
- Use available SZ_1M instead of 1 << 20 (Chris)
- Trace and PMU fixes and improvements (Tvrtko)
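
A minimal sketch of the symmetric pin/unpin usage, distilled from the GVT
scheduler hunks below (error handling and locking trimmed, so this is an
illustration rather than a complete implementation): intel_context_pin() now
returns the pinned struct intel_context, and intel_context_unpin() takes that
same object back instead of the (ctx, engine) pair.

	struct intel_context *ce;

	ce = intel_context_pin(shadow_ctx, engine);
	if (IS_ERR(ce))
		return PTR_ERR(ce);

	/* use the pinned state, e.g. ce->state, ce->lrc_reg_state, ce->ring */

	intel_context_unpin(ce);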

Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180611162737.GA2378@intel.com
commit 3069290d9d
Dave Airlie <airlied@redhat.com>, 2018-06-22 11:34:41 +10:00
84 changed files with 3176 additions and 2239 deletions


@ -61,7 +61,7 @@ static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm)
}
mutex_lock(&dev_priv->drm.struct_mutex);
ret = i915_gem_gtt_insert(&dev_priv->ggtt.base, node,
ret = i915_gem_gtt_insert(&dev_priv->ggtt.vm, node,
size, I915_GTT_PAGE_SIZE,
I915_COLOR_UNEVICTABLE,
start, end, flags);


@ -273,8 +273,8 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
for_each_pipe(dev_priv, pipe) {
vgpu_vreg_t(vgpu, DSPCNTR(pipe)) &= ~DISPLAY_PLANE_ENABLE;
vgpu_vreg_t(vgpu, SPRCTL(pipe)) &= ~SPRITE_ENABLE;
vgpu_vreg_t(vgpu, CURCNTR(pipe)) &= ~CURSOR_MODE;
vgpu_vreg_t(vgpu, CURCNTR(pipe)) |= CURSOR_MODE_DISABLE;
vgpu_vreg_t(vgpu, CURCNTR(pipe)) &= ~MCURSOR_MODE;
vgpu_vreg_t(vgpu, CURCNTR(pipe)) |= MCURSOR_MODE_DISABLE;
}
vgpu_vreg_t(vgpu, PIPECONF(PIPE_A)) |= PIPECONF_ENABLE;


@ -300,16 +300,16 @@ static int cursor_mode_to_drm(int mode)
int cursor_pixel_formats_index = 4;
switch (mode) {
case CURSOR_MODE_128_ARGB_AX:
case MCURSOR_MODE_128_ARGB_AX:
cursor_pixel_formats_index = 0;
break;
case CURSOR_MODE_256_ARGB_AX:
case MCURSOR_MODE_256_ARGB_AX:
cursor_pixel_formats_index = 1;
break;
case CURSOR_MODE_64_ARGB_AX:
case MCURSOR_MODE_64_ARGB_AX:
cursor_pixel_formats_index = 2;
break;
case CURSOR_MODE_64_32B_AX:
case MCURSOR_MODE_64_32B_AX:
cursor_pixel_formats_index = 3;
break;
@ -342,8 +342,8 @@ int intel_vgpu_decode_cursor_plane(struct intel_vgpu *vgpu,
return -ENODEV;
val = vgpu_vreg_t(vgpu, CURCNTR(pipe));
mode = val & CURSOR_MODE;
plane->enabled = (mode != CURSOR_MODE_DISABLE);
mode = val & MCURSOR_MODE;
plane->enabled = (mode != MCURSOR_MODE_DISABLE);
if (!plane->enabled)
return -ENODEV;


@ -361,9 +361,9 @@ int intel_gvt_load_firmware(struct intel_gvt *gvt);
#define gvt_aperture_sz(gvt) (gvt->dev_priv->ggtt.mappable_end)
#define gvt_aperture_pa_base(gvt) (gvt->dev_priv->ggtt.gmadr.start)
#define gvt_ggtt_gm_sz(gvt) (gvt->dev_priv->ggtt.base.total)
#define gvt_ggtt_gm_sz(gvt) (gvt->dev_priv->ggtt.vm.total)
#define gvt_ggtt_sz(gvt) \
((gvt->dev_priv->ggtt.base.total >> PAGE_SHIFT) << 3)
((gvt->dev_priv->ggtt.vm.total >> PAGE_SHIFT) << 3)
#define gvt_hidden_sz(gvt) (gvt_ggtt_gm_sz(gvt) - gvt_aperture_sz(gvt))
#define gvt_aperture_gmadr_base(gvt) (0)


@ -446,9 +446,9 @@ static void switch_mocs(struct intel_vgpu *pre, struct intel_vgpu *next,
#define CTX_CONTEXT_CONTROL_VAL 0x03
bool is_inhibit_context(struct i915_gem_context *ctx, int ring_id)
bool is_inhibit_context(struct intel_context *ce)
{
u32 *reg_state = ctx->__engine[ring_id].lrc_reg_state;
const u32 *reg_state = ce->lrc_reg_state;
u32 inhibit_mask =
_MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT);
@ -501,7 +501,7 @@ static void switch_mmio(struct intel_vgpu *pre,
* itself.
*/
if (mmio->in_context &&
!is_inhibit_context(s->shadow_ctx, ring_id))
!is_inhibit_context(&s->shadow_ctx->__engine[ring_id]))
continue;
if (mmio->mask)


@ -49,7 +49,7 @@ void intel_gvt_switch_mmio(struct intel_vgpu *pre,
void intel_gvt_init_engine_mmio_context(struct intel_gvt *gvt);
bool is_inhibit_context(struct i915_gem_context *ctx, int ring_id);
bool is_inhibit_context(struct intel_context *ce);
int intel_vgpu_restore_inhibit_context(struct intel_vgpu *vgpu,
struct i915_request *req);


@ -54,11 +54,8 @@ static void set_context_pdp_root_pointer(
static void update_shadow_pdps(struct intel_vgpu_workload *workload)
{
struct intel_vgpu *vgpu = workload->vgpu;
int ring_id = workload->ring_id;
struct i915_gem_context *shadow_ctx = vgpu->submission.shadow_ctx;
struct drm_i915_gem_object *ctx_obj =
shadow_ctx->__engine[ring_id].state->obj;
workload->req->hw_context->state->obj;
struct execlist_ring_context *shadow_ring_context;
struct page *page;
@ -128,9 +125,8 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
struct intel_vgpu *vgpu = workload->vgpu;
struct intel_gvt *gvt = vgpu->gvt;
int ring_id = workload->ring_id;
struct i915_gem_context *shadow_ctx = vgpu->submission.shadow_ctx;
struct drm_i915_gem_object *ctx_obj =
shadow_ctx->__engine[ring_id].state->obj;
workload->req->hw_context->state->obj;
struct execlist_ring_context *shadow_ring_context;
struct page *page;
void *dst;
@ -205,7 +201,7 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
static inline bool is_gvt_request(struct i915_request *req)
{
return i915_gem_context_force_single_submission(req->ctx);
return i915_gem_context_force_single_submission(req->gem_context);
}
static void save_ring_hw_state(struct intel_vgpu *vgpu, int ring_id)
@ -280,10 +276,8 @@ static int shadow_context_status_change(struct notifier_block *nb,
return NOTIFY_OK;
}
static void shadow_context_descriptor_update(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
static void shadow_context_descriptor_update(struct intel_context *ce)
{
struct intel_context *ce = to_intel_context(ctx, engine);
u64 desc = 0;
desc = ce->lrc_desc;
@ -292,7 +286,7 @@ static void shadow_context_descriptor_update(struct i915_gem_context *ctx,
* like GEN8_CTX_* cached in desc_template
*/
desc &= U64_MAX << 12;
desc |= ctx->desc_template & ((1ULL << 12) - 1);
desc |= ce->gem_context->desc_template & ((1ULL << 12) - 1);
ce->lrc_desc = desc;
}
@ -300,12 +294,11 @@ static void shadow_context_descriptor_update(struct i915_gem_context *ctx,
static int copy_workload_to_ring_buffer(struct intel_vgpu_workload *workload)
{
struct intel_vgpu *vgpu = workload->vgpu;
struct i915_request *req = workload->req;
void *shadow_ring_buffer_va;
u32 *cs;
struct i915_request *req = workload->req;
if (IS_KABYLAKE(req->i915) &&
is_inhibit_context(req->ctx, req->engine->id))
if (IS_KABYLAKE(req->i915) && is_inhibit_context(req->hw_context))
intel_vgpu_restore_inhibit_context(vgpu, req);
/* allocate shadow ring buffer */
@ -353,35 +346,16 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
int ring_id = workload->ring_id;
struct intel_engine_cs *engine = dev_priv->engine[ring_id];
struct intel_ring *ring;
struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];
struct intel_context *ce;
struct i915_request *rq;
int ret;
lockdep_assert_held(&dev_priv->drm.struct_mutex);
if (workload->shadowed)
if (workload->req)
return 0;
shadow_ctx->desc_template &= ~(0x3 << GEN8_CTX_ADDRESSING_MODE_SHIFT);
shadow_ctx->desc_template |= workload->ctx_desc.addressing_mode <<
GEN8_CTX_ADDRESSING_MODE_SHIFT;
if (!test_and_set_bit(ring_id, s->shadow_ctx_desc_updated))
shadow_context_descriptor_update(shadow_ctx,
dev_priv->engine[ring_id]);
ret = intel_gvt_scan_and_shadow_ringbuffer(workload);
if (ret)
goto err_scan;
if ((workload->ring_id == RCS) &&
(workload->wa_ctx.indirect_ctx.size != 0)) {
ret = intel_gvt_scan_and_shadow_wa_ctx(&workload->wa_ctx);
if (ret)
goto err_scan;
}
/* pin shadow context by gvt even the shadow context will be pinned
* when i915 alloc request. That is because gvt will update the guest
* context from shadow context when workload is completed, and at that
@ -389,56 +363,50 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
* shadow_ctx pages invalid. So gvt need to pin itself. After update
* the guest context, gvt can unpin the shadow_ctx safely.
*/
ring = intel_context_pin(shadow_ctx, engine);
if (IS_ERR(ring)) {
ret = PTR_ERR(ring);
ce = intel_context_pin(shadow_ctx, engine);
if (IS_ERR(ce)) {
gvt_vgpu_err("fail to pin shadow context\n");
goto err_shadow;
return PTR_ERR(ce);
}
ret = populate_shadow_context(workload);
shadow_ctx->desc_template &= ~(0x3 << GEN8_CTX_ADDRESSING_MODE_SHIFT);
shadow_ctx->desc_template |= workload->ctx_desc.addressing_mode <<
GEN8_CTX_ADDRESSING_MODE_SHIFT;
if (!test_and_set_bit(workload->ring_id, s->shadow_ctx_desc_updated))
shadow_context_descriptor_update(ce);
ret = intel_gvt_scan_and_shadow_ringbuffer(workload);
if (ret)
goto err_unpin;
workload->shadowed = true;
return 0;
err_unpin:
intel_context_unpin(shadow_ctx, engine);
err_shadow:
release_shadow_wa_ctx(&workload->wa_ctx);
err_scan:
return ret;
}
if ((workload->ring_id == RCS) &&
(workload->wa_ctx.indirect_ctx.size != 0)) {
ret = intel_gvt_scan_and_shadow_wa_ctx(&workload->wa_ctx);
if (ret)
goto err_shadow;
}
static int intel_gvt_generate_request(struct intel_vgpu_workload *workload)
{
int ring_id = workload->ring_id;
struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
struct intel_engine_cs *engine = dev_priv->engine[ring_id];
struct i915_request *rq;
struct intel_vgpu *vgpu = workload->vgpu;
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
int ret;
rq = i915_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
rq = i915_request_alloc(engine, shadow_ctx);
if (IS_ERR(rq)) {
gvt_vgpu_err("fail to allocate gem request\n");
ret = PTR_ERR(rq);
goto err_unpin;
goto err_shadow;
}
gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
workload->req = i915_request_get(rq);
ret = copy_workload_to_ring_buffer(workload);
if (ret)
goto err_unpin;
return 0;
err_unpin:
intel_context_unpin(shadow_ctx, engine);
ret = populate_shadow_context(workload);
if (ret)
goto err_req;
return 0;
err_req:
rq = fetch_and_zero(&workload->req);
i915_request_put(rq);
err_shadow:
release_shadow_wa_ctx(&workload->wa_ctx);
err_unpin:
intel_context_unpin(ce);
return ret;
}
@ -517,21 +485,13 @@ static int prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload)
return ret;
}
static int update_wa_ctx_2_shadow_ctx(struct intel_shadow_wa_ctx *wa_ctx)
static void update_wa_ctx_2_shadow_ctx(struct intel_shadow_wa_ctx *wa_ctx)
{
struct intel_vgpu_workload *workload = container_of(wa_ctx,
struct intel_vgpu_workload,
wa_ctx);
int ring_id = workload->ring_id;
struct intel_vgpu_submission *s = &workload->vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
struct drm_i915_gem_object *ctx_obj =
shadow_ctx->__engine[ring_id].state->obj;
struct execlist_ring_context *shadow_ring_context;
struct page *page;
page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
shadow_ring_context = kmap_atomic(page);
struct intel_vgpu_workload *workload =
container_of(wa_ctx, struct intel_vgpu_workload, wa_ctx);
struct i915_request *rq = workload->req;
struct execlist_ring_context *shadow_ring_context =
(struct execlist_ring_context *)rq->hw_context->lrc_reg_state;
shadow_ring_context->bb_per_ctx_ptr.val =
(shadow_ring_context->bb_per_ctx_ptr.val &
@ -539,9 +499,6 @@ static int update_wa_ctx_2_shadow_ctx(struct intel_shadow_wa_ctx *wa_ctx)
shadow_ring_context->rcs_indirect_ctx.val =
(shadow_ring_context->rcs_indirect_ctx.val &
(~INDIRECT_CTX_ADDR_MASK)) | wa_ctx->indirect_ctx.shadow_gma;
kunmap_atomic(shadow_ring_context);
return 0;
}
static int prepare_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
@ -633,7 +590,7 @@ static int prepare_workload(struct intel_vgpu_workload *workload)
goto err_unpin_mm;
}
ret = intel_gvt_generate_request(workload);
ret = copy_workload_to_ring_buffer(workload);
if (ret) {
gvt_vgpu_err("fail to generate request\n");
goto err_unpin_mm;
@ -670,12 +627,9 @@ static int prepare_workload(struct intel_vgpu_workload *workload)
static int dispatch_workload(struct intel_vgpu_workload *workload)
{
struct intel_vgpu *vgpu = workload->vgpu;
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
int ring_id = workload->ring_id;
struct intel_engine_cs *engine = dev_priv->engine[ring_id];
int ret = 0;
int ret;
gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
ring_id, workload);
@ -687,10 +641,6 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
goto out;
ret = prepare_workload(workload);
if (ret) {
intel_context_unpin(shadow_ctx, engine);
goto out;
}
out:
if (ret)
@ -765,27 +715,23 @@ static struct intel_vgpu_workload *pick_next_workload(
static void update_guest_context(struct intel_vgpu_workload *workload)
{
struct i915_request *rq = workload->req;
struct intel_vgpu *vgpu = workload->vgpu;
struct intel_gvt *gvt = vgpu->gvt;
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
int ring_id = workload->ring_id;
struct drm_i915_gem_object *ctx_obj =
shadow_ctx->__engine[ring_id].state->obj;
struct drm_i915_gem_object *ctx_obj = rq->hw_context->state->obj;
struct execlist_ring_context *shadow_ring_context;
struct page *page;
void *src;
unsigned long context_gpa, context_page_num;
int i;
gvt_dbg_sched("ring id %d workload lrca %x\n", ring_id,
workload->ctx_desc.lrca);
context_page_num = gvt->dev_priv->engine[ring_id]->context_size;
gvt_dbg_sched("ring id %d workload lrca %x\n", rq->engine->id,
workload->ctx_desc.lrca);
context_page_num = rq->engine->context_size;
context_page_num = context_page_num >> PAGE_SHIFT;
if (IS_BROADWELL(gvt->dev_priv) && ring_id == RCS)
if (IS_BROADWELL(gvt->dev_priv) && rq->engine->id == RCS)
context_page_num = 19;
i = 2;
@ -858,6 +804,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
scheduler->current_workload[ring_id];
struct intel_vgpu *vgpu = workload->vgpu;
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_request *rq = workload->req;
int event;
mutex_lock(&gvt->lock);
@ -866,11 +813,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
* switch to make sure request is completed.
* For the workload w/o request, directly complete the workload.
*/
if (workload->req) {
struct drm_i915_private *dev_priv =
workload->vgpu->gvt->dev_priv;
struct intel_engine_cs *engine =
dev_priv->engine[workload->ring_id];
if (rq) {
wait_event(workload->shadow_ctx_status_wq,
!atomic_read(&workload->shadow_ctx_active));
@ -886,8 +829,6 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
workload->status = 0;
}
i915_request_put(fetch_and_zero(&workload->req));
if (!workload->status && !(vgpu->resetting_eng &
ENGINE_MASK(ring_id))) {
update_guest_context(workload);
@ -896,10 +837,13 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
INTEL_GVT_EVENT_MAX)
intel_vgpu_trigger_virtual_event(vgpu, event);
}
mutex_lock(&dev_priv->drm.struct_mutex);
/* unpin shadow ctx as the shadow_ctx update is done */
intel_context_unpin(s->shadow_ctx, engine);
mutex_unlock(&dev_priv->drm.struct_mutex);
mutex_lock(&rq->i915->drm.struct_mutex);
intel_context_unpin(rq->hw_context);
mutex_unlock(&rq->i915->drm.struct_mutex);
i915_request_put(fetch_and_zero(&workload->req));
}
gvt_dbg_sched("ring id %d complete workload %p status %d\n",
@ -1270,7 +1214,6 @@ alloc_workload(struct intel_vgpu *vgpu)
atomic_set(&workload->shadow_ctx_active, 0);
workload->status = -EINPROGRESS;
workload->shadowed = false;
workload->vgpu = vgpu;
return workload;


@ -83,7 +83,6 @@ struct intel_vgpu_workload {
struct i915_request *req;
/* if this workload has been dispatched to i915? */
bool dispatched;
bool shadowed;
int status;
struct intel_vgpu_mm *shadow_mm;


@ -328,7 +328,7 @@ static int per_file_stats(int id, void *ptr, void *data)
} else {
struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vma->vm);
if (ppgtt->base.file != stats->file_priv)
if (ppgtt->vm.file != stats->file_priv)
continue;
}
@ -508,7 +508,7 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
dpy_count, dpy_size);
seq_printf(m, "%llu [%pa] gtt total\n",
ggtt->base.total, &ggtt->mappable_end);
ggtt->vm.total, &ggtt->mappable_end);
seq_printf(m, "Supported page sizes: %s\n",
stringify_page_sizes(INTEL_INFO(dev_priv)->page_sizes,
buf, sizeof(buf)));
@ -542,8 +542,8 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
struct i915_request,
client_link);
rcu_read_lock();
task = pid_task(request && request->ctx->pid ?
request->ctx->pid : file->pid,
task = pid_task(request && request->gem_context->pid ?
request->gem_context->pid : file->pid,
PIDTYPE_PID);
print_file_stats(m, task ? task->comm : "<unknown>", stats);
rcu_read_unlock();
@ -1162,19 +1162,28 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
if (IS_GEN6(dev_priv) || IS_GEN7(dev_priv)) {
pm_ier = I915_READ(GEN6_PMIER);
pm_imr = I915_READ(GEN6_PMIMR);
pm_isr = I915_READ(GEN6_PMISR);
pm_iir = I915_READ(GEN6_PMIIR);
pm_mask = I915_READ(GEN6_PMINTRMSK);
} else {
if (INTEL_GEN(dev_priv) >= 11) {
pm_ier = I915_READ(GEN11_GPM_WGBOXPERF_INTR_ENABLE);
pm_imr = I915_READ(GEN11_GPM_WGBOXPERF_INTR_MASK);
/*
* The equivalent to the PM ISR & IIR cannot be read
* without affecting the current state of the system
*/
pm_isr = 0;
pm_iir = 0;
} else if (INTEL_GEN(dev_priv) >= 8) {
pm_ier = I915_READ(GEN8_GT_IER(2));
pm_imr = I915_READ(GEN8_GT_IMR(2));
pm_isr = I915_READ(GEN8_GT_ISR(2));
pm_iir = I915_READ(GEN8_GT_IIR(2));
pm_mask = I915_READ(GEN6_PMINTRMSK);
} else {
pm_ier = I915_READ(GEN6_PMIER);
pm_imr = I915_READ(GEN6_PMIMR);
pm_isr = I915_READ(GEN6_PMISR);
pm_iir = I915_READ(GEN6_PMIIR);
}
pm_mask = I915_READ(GEN6_PMINTRMSK);
seq_printf(m, "Video Turbo Mode: %s\n",
yesno(rpmodectl & GEN6_RP_MEDIA_TURBO));
seq_printf(m, "HW control enabled: %s\n",
@ -1182,8 +1191,12 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
seq_printf(m, "SW control enabled: %s\n",
yesno((rpmodectl & GEN6_RP_MEDIA_MODE_MASK) ==
GEN6_RP_MEDIA_SW_MODE));
seq_printf(m, "PM IER=0x%08x IMR=0x%08x ISR=0x%08x IIR=0x%08x, MASK=0x%08x\n",
pm_ier, pm_imr, pm_isr, pm_iir, pm_mask);
seq_printf(m, "PM IER=0x%08x IMR=0x%08x, MASK=0x%08x\n",
pm_ier, pm_imr, pm_mask);
if (INTEL_GEN(dev_priv) <= 10)
seq_printf(m, "PM ISR=0x%08x IIR=0x%08x\n",
pm_isr, pm_iir);
seq_printf(m, "pm_intrmsk_mbz: 0x%08x\n",
rps->pm_intrmsk_mbz);
seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status);
@ -1895,7 +1908,7 @@ static int i915_gem_framebuffer_info(struct seq_file *m, void *data)
fbdev_fb->base.format->cpp[0] * 8,
fbdev_fb->base.modifier,
drm_framebuffer_read_refcount(&fbdev_fb->base));
describe_obj(m, fbdev_fb->obj);
describe_obj(m, intel_fb_obj(&fbdev_fb->base));
seq_putc(m, '\n');
}
#endif
@ -1913,7 +1926,7 @@ static int i915_gem_framebuffer_info(struct seq_file *m, void *data)
fb->base.format->cpp[0] * 8,
fb->base.modifier,
drm_framebuffer_read_refcount(&fb->base));
describe_obj(m, fb->obj);
describe_obj(m, intel_fb_obj(&fb->base));
seq_putc(m, '\n');
}
mutex_unlock(&dev->mode_config.fb_lock);
@ -2630,8 +2643,6 @@ static int i915_edp_psr_status(struct seq_file *m, void *data)
{
struct drm_i915_private *dev_priv = node_to_i915(m->private);
u32 psrperf = 0;
u32 stat[3];
enum pipe pipe;
bool enabled = false;
bool sink_support;
@ -2652,47 +2663,17 @@ static int i915_edp_psr_status(struct seq_file *m, void *data)
seq_printf(m, "Re-enable work scheduled: %s\n",
yesno(work_busy(&dev_priv->psr.work.work)));
if (HAS_DDI(dev_priv)) {
if (dev_priv->psr.psr2_enabled)
enabled = I915_READ(EDP_PSR2_CTL) & EDP_PSR2_ENABLE;
else
enabled = I915_READ(EDP_PSR_CTL) & EDP_PSR_ENABLE;
} else {
for_each_pipe(dev_priv, pipe) {
enum transcoder cpu_transcoder =
intel_pipe_to_cpu_transcoder(dev_priv, pipe);
enum intel_display_power_domain power_domain;
power_domain = POWER_DOMAIN_TRANSCODER(cpu_transcoder);
if (!intel_display_power_get_if_enabled(dev_priv,
power_domain))
continue;
stat[pipe] = I915_READ(VLV_PSRSTAT(pipe)) &
VLV_EDP_PSR_CURR_STATE_MASK;
if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) ||
(stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE))
enabled = true;
intel_display_power_put(dev_priv, power_domain);
}
}
if (dev_priv->psr.psr2_enabled)
enabled = I915_READ(EDP_PSR2_CTL) & EDP_PSR2_ENABLE;
else
enabled = I915_READ(EDP_PSR_CTL) & EDP_PSR_ENABLE;
seq_printf(m, "Main link in standby mode: %s\n",
yesno(dev_priv->psr.link_standby));
seq_printf(m, "HW Enabled & Active bit: %s", yesno(enabled));
if (!HAS_DDI(dev_priv))
for_each_pipe(dev_priv, pipe) {
if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) ||
(stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE))
seq_printf(m, " pipe %c", pipe_name(pipe));
}
seq_puts(m, "\n");
seq_printf(m, "HW Enabled & Active bit: %s\n", yesno(enabled));
/*
* VLV/CHV PSR has no kind of performance counter
* SKL+ Perf counter is reset to 0 everytime DC state is entered
*/
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
@ -4245,8 +4226,13 @@ i915_drop_caches_set(void *data, u64 val)
i915_gem_shrink_all(dev_priv);
fs_reclaim_release(GFP_KERNEL);
if (val & DROP_IDLE)
drain_delayed_work(&dev_priv->gt.idle_work);
if (val & DROP_IDLE) {
do {
if (READ_ONCE(dev_priv->gt.active_requests))
flush_delayed_work(&dev_priv->gt.retire_work);
drain_delayed_work(&dev_priv->gt.idle_work);
} while (READ_ONCE(dev_priv->gt.awake));
}
if (val & DROP_FREED)
i915_gem_drain_freed_objects(dev_priv);


@ -67,6 +67,7 @@ bool __i915_inject_load_failure(const char *func, int line)
if (++i915_load_fail_count == i915_modparams.inject_load_failure) {
DRM_INFO("Injecting failure at checkpoint %u [%s:%d]\n",
i915_modparams.inject_load_failure, func, line);
i915_modparams.inject_load_failure = 0;
return true;
}
@ -117,16 +118,15 @@ __i915_printk(struct drm_i915_private *dev_priv, const char *level,
static bool i915_error_injected(struct drm_i915_private *dev_priv)
{
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
return i915_modparams.inject_load_failure &&
i915_load_fail_count == i915_modparams.inject_load_failure;
return i915_load_fail_count && !i915_modparams.inject_load_failure;
#else
return false;
#endif
}
#define i915_load_error(dev_priv, fmt, ...) \
__i915_printk(dev_priv, \
i915_error_injected(dev_priv) ? KERN_DEBUG : KERN_ERR, \
#define i915_load_error(i915, fmt, ...) \
__i915_printk(i915, \
i915_error_injected(i915) ? KERN_DEBUG : KERN_ERR, \
fmt, ##__VA_ARGS__)
/* Map PCH device id to PCH type, or PCH_NONE if unknown. */
@ -233,6 +233,8 @@ intel_virt_detect_pch(const struct drm_i915_private *dev_priv)
id = INTEL_PCH_SPT_DEVICE_ID_TYPE;
else if (IS_COFFEELAKE(dev_priv) || IS_CANNONLAKE(dev_priv))
id = INTEL_PCH_CNP_DEVICE_ID_TYPE;
else if (IS_ICELAKE(dev_priv))
id = INTEL_PCH_ICP_DEVICE_ID_TYPE;
if (id)
DRM_DEBUG_KMS("Assuming PCH ID %04x\n", id);
@ -634,26 +636,6 @@ static const struct vga_switcheroo_client_ops i915_switcheroo_ops = {
.can_switch = i915_switcheroo_can_switch,
};
static void i915_gem_fini(struct drm_i915_private *dev_priv)
{
/* Flush any outstanding unpin_work. */
i915_gem_drain_workqueue(dev_priv);
mutex_lock(&dev_priv->drm.struct_mutex);
intel_uc_fini_hw(dev_priv);
intel_uc_fini(dev_priv);
i915_gem_cleanup_engines(dev_priv);
i915_gem_contexts_fini(dev_priv);
mutex_unlock(&dev_priv->drm.struct_mutex);
intel_uc_fini_misc(dev_priv);
i915_gem_cleanup_userptr(dev_priv);
i915_gem_drain_freed_objects(dev_priv);
WARN_ON(!list_empty(&dev_priv->contexts.list));
}
static int i915_load_modeset_init(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = to_i915(dev);
@ -1553,12 +1535,30 @@ static bool suspend_to_idle(struct drm_i915_private *dev_priv)
return false;
}
static int i915_drm_prepare(struct drm_device *dev)
{
struct drm_i915_private *i915 = to_i915(dev);
int err;
/*
* NB intel_display_suspend() may issue new requests after we've
* ostensibly marked the GPU as ready-to-sleep here. We need to
* split out that work and pull it forward so that after point,
* the GPU is not woken again.
*/
err = i915_gem_suspend(i915);
if (err)
dev_err(&i915->drm.pdev->dev,
"GEM idle failed, suspend/resume might fail\n");
return err;
}
static int i915_drm_suspend(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct pci_dev *pdev = dev_priv->drm.pdev;
pci_power_t opregion_target_state;
int error;
/* ignore lid events during suspend */
mutex_lock(&dev_priv->modeset_restore_lock);
@ -1575,13 +1575,6 @@ static int i915_drm_suspend(struct drm_device *dev)
pci_save_state(pdev);
error = i915_gem_suspend(dev_priv);
if (error) {
dev_err(&pdev->dev,
"GEM idle failed, resume might fail\n");
goto out;
}
intel_display_suspend(dev);
intel_dp_mst_suspend(dev);
@ -1600,7 +1593,6 @@ static int i915_drm_suspend(struct drm_device *dev)
opregion_target_state = suspend_to_idle(dev_priv) ? PCI_D1 : PCI_D3cold;
intel_opregion_notify_adapter(dev_priv, opregion_target_state);
intel_uncore_suspend(dev_priv);
intel_opregion_unregister(dev_priv);
intel_fbdev_set_suspend(dev, FBINFO_STATE_SUSPENDED, true);
@ -1609,10 +1601,9 @@ static int i915_drm_suspend(struct drm_device *dev)
intel_csr_ucode_suspend(dev_priv);
out:
enable_rpm_wakeref_asserts(dev_priv);
return error;
return 0;
}
static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
@ -1623,7 +1614,10 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
disable_rpm_wakeref_asserts(dev_priv);
i915_gem_suspend_late(dev_priv);
intel_display_set_init_power(dev_priv, false);
intel_uncore_suspend(dev_priv);
/*
* In case of firmware assisted context save/restore don't manually
@ -2081,6 +2075,22 @@ int i915_reset_engine(struct intel_engine_cs *engine, const char *msg)
return ret;
}
static int i915_pm_prepare(struct device *kdev)
{
struct pci_dev *pdev = to_pci_dev(kdev);
struct drm_device *dev = pci_get_drvdata(pdev);
if (!dev) {
dev_err(kdev, "DRM not initialized, aborting suspend.\n");
return -ENODEV;
}
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
return i915_drm_prepare(dev);
}
static int i915_pm_suspend(struct device *kdev)
{
struct pci_dev *pdev = to_pci_dev(kdev);
@ -2731,6 +2741,7 @@ const struct dev_pm_ops i915_pm_ops = {
* S0ix (via system suspend) and S3 event handlers [PMSG_SUSPEND,
* PMSG_RESUME]
*/
.prepare = i915_pm_prepare,
.suspend = i915_pm_suspend,
.suspend_late = i915_pm_suspend_late,
.resume_early = i915_pm_resume_early,


@ -85,8 +85,8 @@
#define DRIVER_NAME "i915"
#define DRIVER_DESC "Intel Graphics"
#define DRIVER_DATE "20180514"
#define DRIVER_TIMESTAMP 1526300884
#define DRIVER_DATE "20180606"
#define DRIVER_TIMESTAMP 1528323047
/* Use I915_STATE_WARN(x) and I915_STATE_WARN_ON() (rather than WARN() and
* WARN_ON()) for hw state sanity checks to check for unexpected conditions
@ -607,7 +607,6 @@ struct i915_psr {
bool link_standby;
bool colorimetry_support;
bool alpm;
bool has_hw_tracking;
bool psr2_enabled;
u8 sink_sync_latency;
bool debug;
@ -1049,9 +1048,9 @@ struct intel_vbt_data {
/* Feature bits */
unsigned int int_tv_support:1;
unsigned int lvds_dither:1;
unsigned int lvds_vbt:1;
unsigned int int_crt_support:1;
unsigned int lvds_use_ssc:1;
unsigned int int_lvds_support:1;
unsigned int display_clock_mode:1;
unsigned int fdi_rx_polarity_inverted:1;
unsigned int panel_type:4;
@ -1067,7 +1066,6 @@ struct intel_vbt_data {
int vswing;
bool low_vswing;
bool initialized;
bool support;
int bpp;
struct edp_power_seq pps;
} edp;
@ -1078,8 +1076,8 @@ struct intel_vbt_data {
bool require_aux_wakeup;
int idle_frames;
enum psr_lines_to_wait lines_to_wait;
int tp1_wakeup_time;
int tp2_tp3_wakeup_time;
int tp1_wakeup_time_us;
int tp2_tp3_wakeup_time_us;
} psr;
struct {
@ -1843,6 +1841,7 @@ struct drm_i915_private {
*/
struct ida hw_ida;
#define MAX_CONTEXT_HW_ID (1<<21) /* exclusive */
#define MAX_GUC_CONTEXT_HW_ID (1 << 20) /* exclusive */
#define GEN11_MAX_CONTEXT_HW_ID (1<<11) /* exclusive */
} contexts;
@ -1950,7 +1949,9 @@ struct drm_i915_private {
*/
struct i915_perf_stream *exclusive_stream;
struct intel_context *pinned_ctx;
u32 specific_ctx_id;
u32 specific_ctx_id_mask;
struct hrtimer poll_check_timer;
wait_queue_head_t poll_wq;
@ -2743,6 +2744,8 @@ int vlv_force_gfx_clock(struct drm_i915_private *dev_priv, bool on);
int intel_engines_init_mmio(struct drm_i915_private *dev_priv);
int intel_engines_init(struct drm_i915_private *dev_priv);
u32 intel_calculate_mcr_s_ss_select(struct drm_i915_private *dev_priv);
/* intel_hotplug.c */
void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
u32 pin_mask, u32 long_mask);
@ -3164,10 +3167,12 @@ void i915_gem_init_mmio(struct drm_i915_private *i915);
int __must_check i915_gem_init(struct drm_i915_private *dev_priv);
int __must_check i915_gem_init_hw(struct drm_i915_private *dev_priv);
void i915_gem_init_swizzling(struct drm_i915_private *dev_priv);
void i915_gem_fini(struct drm_i915_private *dev_priv);
void i915_gem_cleanup_engines(struct drm_i915_private *dev_priv);
int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
unsigned int flags);
int __must_check i915_gem_suspend(struct drm_i915_private *dev_priv);
void i915_gem_suspend_late(struct drm_i915_private *dev_priv);
void i915_gem_resume(struct drm_i915_private *dev_priv);
int i915_gem_fault(struct vm_fault *vmf);
int i915_gem_object_wait(struct drm_i915_gem_object *obj,
@ -3208,7 +3213,7 @@ struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
static inline struct i915_hw_ppgtt *
i915_vm_to_ppgtt(struct i915_address_space *vm)
{
return container_of(vm, struct i915_hw_ppgtt, base);
return container_of(vm, struct i915_hw_ppgtt, vm);
}
/* i915_gem_fence_reg.c */


@ -65,7 +65,7 @@ insert_mappable_node(struct i915_ggtt *ggtt,
struct drm_mm_node *node, u32 size)
{
memset(node, 0, sizeof(*node));
return drm_mm_insert_node_in_range(&ggtt->base.mm, node,
return drm_mm_insert_node_in_range(&ggtt->vm.mm, node,
size, 0, I915_COLOR_UNEVICTABLE,
0, ggtt->mappable_end,
DRM_MM_INSERT_LOW);
@ -139,6 +139,8 @@ int i915_mutex_lock_interruptible(struct drm_device *dev)
static u32 __i915_gem_park(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(i915->gt.active_requests);
GEM_BUG_ON(!list_empty(&i915->gt.active_rings));
@ -181,6 +183,8 @@ static u32 __i915_gem_park(struct drm_i915_private *i915)
void i915_gem_park(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(i915->gt.active_requests);
@ -193,6 +197,8 @@ void i915_gem_park(struct drm_i915_private *i915)
void i915_gem_unpark(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(!i915->gt.active_requests);
@ -243,17 +249,17 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
struct i915_vma *vma;
u64 pinned;
pinned = ggtt->base.reserved;
pinned = ggtt->vm.reserved;
mutex_lock(&dev->struct_mutex);
list_for_each_entry(vma, &ggtt->base.active_list, vm_link)
list_for_each_entry(vma, &ggtt->vm.active_list, vm_link)
if (i915_vma_is_pinned(vma))
pinned += vma->node.size;
list_for_each_entry(vma, &ggtt->base.inactive_list, vm_link)
list_for_each_entry(vma, &ggtt->vm.inactive_list, vm_link)
if (i915_vma_is_pinned(vma))
pinned += vma->node.size;
mutex_unlock(&dev->struct_mutex);
args->aper_size = ggtt->base.total;
args->aper_size = ggtt->vm.total;
args->aper_available_size = args->aper_size - pinned;
return 0;
@ -1217,9 +1223,9 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
page_length = remain < page_length ? remain : page_length;
if (node.allocated) {
wmb();
ggtt->base.insert_page(&ggtt->base,
i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
node.start, I915_CACHE_NONE, 0);
ggtt->vm.insert_page(&ggtt->vm,
i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
node.start, I915_CACHE_NONE, 0);
wmb();
} else {
page_base += offset & PAGE_MASK;
@ -1240,8 +1246,7 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
out_unpin:
if (node.allocated) {
wmb();
ggtt->base.clear_range(&ggtt->base,
node.start, node.size);
ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
remove_mappable_node(&node);
} else {
i915_vma_unpin(vma);
@ -1420,9 +1425,9 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
page_length = remain < page_length ? remain : page_length;
if (node.allocated) {
wmb(); /* flush the write before we modify the GGTT */
ggtt->base.insert_page(&ggtt->base,
i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
node.start, I915_CACHE_NONE, 0);
ggtt->vm.insert_page(&ggtt->vm,
i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
node.start, I915_CACHE_NONE, 0);
wmb(); /* flush modifications to the GGTT (insert_page) */
} else {
page_base += offset & PAGE_MASK;
@ -1449,8 +1454,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
out_unpin:
if (node.allocated) {
wmb();
ggtt->base.clear_range(&ggtt->base,
node.start, node.size);
ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
remove_mappable_node(&node);
} else {
i915_vma_unpin(vma);
@ -1993,7 +1997,7 @@ compute_partial_view(struct drm_i915_gem_object *obj,
*/
int i915_gem_fault(struct vm_fault *vmf)
{
#define MIN_CHUNK_PAGES ((1 << 20) >> PAGE_SHIFT) /* 1 MiB */
#define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
struct vm_area_struct *area = vmf->vma;
struct drm_i915_gem_object *obj = to_intel_bo(area->vm_private_data);
struct drm_device *dev = obj->base.dev;
@ -3003,7 +3007,7 @@ i915_gem_find_active_request(struct intel_engine_cs *engine)
struct i915_request *
i915_gem_reset_prepare_engine(struct intel_engine_cs *engine)
{
struct i915_request *request = NULL;
struct i915_request *request;
/*
* During the reset sequence, we must prevent the engine from
@ -3014,52 +3018,7 @@ i915_gem_reset_prepare_engine(struct intel_engine_cs *engine)
*/
intel_uncore_forcewake_get(engine->i915, FORCEWAKE_ALL);
/*
* Prevent the signaler thread from updating the request
* state (by calling dma_fence_signal) as we are processing
* the reset. The write from the GPU of the seqno is
* asynchronous and the signaler thread may see a different
* value to us and declare the request complete, even though
* the reset routine have picked that request as the active
* (incomplete) request. This conflict is not handled
* gracefully!
*/
kthread_park(engine->breadcrumbs.signaler);
/*
* Prevent request submission to the hardware until we have
* completed the reset in i915_gem_reset_finish(). If a request
* is completed by one engine, it may then queue a request
* to a second via its execlists->tasklet *just* as we are
* calling engine->init_hw() and also writing the ELSP.
* Turning off the execlists->tasklet until the reset is over
* prevents the race.
*
* Note that this needs to be a single atomic operation on the
* tasklet (flush existing tasks, prevent new tasks) to prevent
* a race between reset and set-wedged. It is not, so we do the best
* we can atm and make sure we don't lock the machine up in the more
* common case of recursively being called from set-wedged from inside
* i915_reset.
*/
if (!atomic_read(&engine->execlists.tasklet.count))
tasklet_kill(&engine->execlists.tasklet);
tasklet_disable(&engine->execlists.tasklet);
/*
* We're using worker to queue preemption requests from the tasklet in
* GuC submission mode.
* Even though tasklet was disabled, we may still have a worker queued.
* Let's make sure that all workers scheduled before disabling the
* tasklet are completed before continuing with the reset.
*/
if (engine->i915->guc.preempt_wq)
flush_workqueue(engine->i915->guc.preempt_wq);
if (engine->irq_seqno_barrier)
engine->irq_seqno_barrier(engine);
request = i915_gem_find_active_request(engine);
request = engine->reset.prepare(engine);
if (request && request->fence.error == -EIO)
request = ERR_PTR(-EIO); /* Previous reset failed! */
@ -3111,7 +3070,7 @@ static void skip_request(struct i915_request *request)
static void engine_skip_context(struct i915_request *request)
{
struct intel_engine_cs *engine = request->engine;
struct i915_gem_context *hung_ctx = request->ctx;
struct i915_gem_context *hung_ctx = request->gem_context;
struct i915_timeline *timeline = request->timeline;
unsigned long flags;
@ -3121,7 +3080,7 @@ static void engine_skip_context(struct i915_request *request)
spin_lock_nested(&timeline->lock, SINGLE_DEPTH_NESTING);
list_for_each_entry_continue(request, &engine->timeline.requests, link)
if (request->ctx == hung_ctx)
if (request->gem_context == hung_ctx)
skip_request(request);
list_for_each_entry(request, &timeline->requests, link)
@ -3167,11 +3126,11 @@ i915_gem_reset_request(struct intel_engine_cs *engine,
}
if (stalled) {
i915_gem_context_mark_guilty(request->ctx);
i915_gem_context_mark_guilty(request->gem_context);
skip_request(request);
/* If this context is now banned, skip all pending requests. */
if (i915_gem_context_is_banned(request->ctx))
if (i915_gem_context_is_banned(request->gem_context))
engine_skip_context(request);
} else {
/*
@ -3181,7 +3140,7 @@ i915_gem_reset_request(struct intel_engine_cs *engine,
*/
request = i915_gem_find_active_request(engine);
if (request) {
i915_gem_context_mark_innocent(request->ctx);
i915_gem_context_mark_innocent(request->gem_context);
dma_fence_set_error(&request->fence, -EAGAIN);
/* Rewind the engine to replay the incomplete rq */
@ -3210,13 +3169,8 @@ void i915_gem_reset_engine(struct intel_engine_cs *engine,
if (request)
request = i915_gem_reset_request(engine, request, stalled);
if (request) {
DRM_DEBUG_DRIVER("resetting %s to restart from tail of request 0x%x\n",
engine->name, request->global_seqno);
}
/* Setup the CS to resume from the breadcrumb of the hung request */
engine->reset_hw(engine, request);
engine->reset.reset(engine, request);
}
void i915_gem_reset(struct drm_i915_private *dev_priv,
@ -3230,14 +3184,14 @@ void i915_gem_reset(struct drm_i915_private *dev_priv,
i915_retire_requests(dev_priv);
for_each_engine(engine, dev_priv, id) {
struct i915_gem_context *ctx;
struct intel_context *ce;
i915_gem_reset_engine(engine,
engine->hangcheck.active_request,
stalled_mask & ENGINE_MASK(id));
ctx = fetch_and_zero(&engine->last_retired_context);
if (ctx)
intel_context_unpin(ctx, engine);
ce = fetch_and_zero(&engine->last_retired_context);
if (ce)
intel_context_unpin(ce);
/*
* Ostensibily, we always want a context loaded for powersaving,
@ -3264,8 +3218,7 @@ void i915_gem_reset(struct drm_i915_private *dev_priv,
void i915_gem_reset_finish_engine(struct intel_engine_cs *engine)
{
tasklet_enable(&engine->execlists.tasklet);
kthread_unpark(engine->breadcrumbs.signaler);
engine->reset.finish(engine);
intel_uncore_forcewake_put(engine->i915, FORCEWAKE_ALL);
}
@ -3543,6 +3496,22 @@ new_requests_since_last_retire(const struct drm_i915_private *i915)
work_pending(&i915->gt.idle_work.work));
}
static void assert_kernel_context_is_current(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
if (i915_terminally_wedged(&i915->gpu_error))
return;
GEM_BUG_ON(i915->gt.active_requests);
for_each_engine(engine, i915, id) {
GEM_BUG_ON(__i915_gem_active_peek(&engine->timeline.last_request));
GEM_BUG_ON(engine->last_retired_context !=
to_intel_context(i915->kernel_context, engine));
}
}
static void
i915_gem_idle_work_handler(struct work_struct *work)
{
@ -3554,6 +3523,24 @@ i915_gem_idle_work_handler(struct work_struct *work)
if (!READ_ONCE(dev_priv->gt.awake))
return;
if (READ_ONCE(dev_priv->gt.active_requests))
return;
/*
* Flush out the last user context, leaving only the pinned
* kernel context resident. When we are idling on the kernel_context,
* no more new requests (with a context switch) are emitted and we
* can finally rest. A consequence is that the idle work handler is
* always called at least twice before idling (and if the system is
* idle that implies a round trip through the retire worker).
*/
mutex_lock(&dev_priv->drm.struct_mutex);
i915_gem_switch_to_kernel_context(dev_priv);
mutex_unlock(&dev_priv->drm.struct_mutex);
GEM_TRACE("active_requests=%d (after switch-to-kernel-context)\n",
READ_ONCE(dev_priv->gt.active_requests));
/*
* Wait for last execlists context complete, but bail out in case a
* new request is submitted. As we don't trust the hardware, we
@ -3587,6 +3574,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
epoch = __i915_gem_park(dev_priv);
assert_kernel_context_is_current(dev_priv);
rearm_hangcheck = false;
out_unlock:
mutex_unlock(&dev_priv->drm.struct_mutex);
@ -3735,7 +3724,29 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
static int wait_for_timeline(struct i915_timeline *tl, unsigned int flags)
{
return i915_gem_active_wait(&tl->last_request, flags);
struct i915_request *rq;
long ret;
rq = i915_gem_active_get_unlocked(&tl->last_request);
if (!rq)
return 0;
/*
* "Race-to-idle".
*
* Switching to the kernel context is often used a synchronous
* step prior to idling, e.g. in suspend for flushing all
* current operations to memory before sleeping. These we
* want to complete as quickly as possible to avoid prolonged
* stalls, so allow the gpu to boost to maximum clocks.
*/
if (flags & I915_WAIT_FOR_IDLE_BOOST)
gen6_rps_boost(rq, NULL);
ret = i915_request_wait(rq, flags, MAX_SCHEDULE_TIMEOUT);
i915_request_put(rq);
return ret < 0 ? ret : 0;
}
static int wait_for_engines(struct drm_i915_private *i915)
@ -3753,6 +3764,9 @@ static int wait_for_engines(struct drm_i915_private *i915)
int i915_gem_wait_for_idle(struct drm_i915_private *i915, unsigned int flags)
{
GEM_TRACE("flags=%x (%s)\n",
flags, flags & I915_WAIT_LOCKED ? "locked" : "unlocked");
/* If the device is asleep, we have no requests outstanding */
if (!READ_ONCE(i915->gt.awake))
return 0;
@ -3769,6 +3783,7 @@ int i915_gem_wait_for_idle(struct drm_i915_private *i915, unsigned int flags)
return err;
}
i915_retire_requests(i915);
GEM_BUG_ON(i915->gt.active_requests);
return wait_for_engines(i915);
} else {
@ -4357,7 +4372,7 @@ i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
u64 flags)
{
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
struct i915_address_space *vm = &dev_priv->ggtt.base;
struct i915_address_space *vm = &dev_priv->ggtt.vm;
struct i915_vma *vma;
int ret;
@ -4945,25 +4960,26 @@ void __i915_gem_object_release_unless_active(struct drm_i915_gem_object *obj)
i915_gem_object_put(obj);
}
static void assert_kernel_context_is_current(struct drm_i915_private *i915)
void i915_gem_sanitize(struct drm_i915_private *i915)
{
struct i915_gem_context *kernel_context = i915->kernel_context;
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, i915, id) {
GEM_BUG_ON(__i915_gem_active_peek(&engine->timeline.last_request));
GEM_BUG_ON(engine->last_retired_context != kernel_context);
}
}
GEM_TRACE("\n");
void i915_gem_sanitize(struct drm_i915_private *i915)
{
if (i915_terminally_wedged(&i915->gpu_error)) {
mutex_lock(&i915->drm.struct_mutex);
mutex_lock(&i915->drm.struct_mutex);
intel_runtime_pm_get(i915);
intel_uncore_forcewake_get(i915, FORCEWAKE_ALL);
/*
* As we have just resumed the machine and woken the device up from
* deep PCI sleep (presumably D3_cold), assume the HW has been reset
* back to defaults, recovering from whatever wedged state we left it
* in and so worth trying to use the device once more.
*/
if (i915_terminally_wedged(&i915->gpu_error))
i915_gem_unset_wedged(i915);
mutex_unlock(&i915->drm.struct_mutex);
}
/*
* If we inherit context state from the BIOS or earlier occupants
@ -4975,6 +4991,18 @@ void i915_gem_sanitize(struct drm_i915_private *i915)
*/
if (INTEL_GEN(i915) >= 5 && intel_has_gpu_reset(i915))
WARN_ON(intel_gpu_reset(i915, ALL_ENGINES));
/* Reset the submission backend after resume as well as the GPU reset */
for_each_engine(engine, i915, id) {
if (engine->reset.reset)
engine->reset.reset(engine, NULL);
}
intel_uncore_forcewake_put(i915, FORCEWAKE_ALL);
intel_runtime_pm_put(i915);
i915_gem_contexts_lost(i915);
mutex_unlock(&i915->drm.struct_mutex);
}
int i915_gem_suspend(struct drm_i915_private *dev_priv)
@ -4982,6 +5010,8 @@ int i915_gem_suspend(struct drm_i915_private *dev_priv)
struct drm_device *dev = &dev_priv->drm;
int ret;
GEM_TRACE("\n");
intel_runtime_pm_get(dev_priv);
intel_suspend_gt_powersave(dev_priv);
@ -5002,13 +5032,13 @@ int i915_gem_suspend(struct drm_i915_private *dev_priv)
ret = i915_gem_wait_for_idle(dev_priv,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_LOCKED);
I915_WAIT_LOCKED |
I915_WAIT_FOR_IDLE_BOOST);
if (ret && ret != -EIO)
goto err_unlock;
assert_kernel_context_is_current(dev_priv);
}
i915_gem_contexts_lost(dev_priv);
mutex_unlock(&dev->struct_mutex);
intel_uc_suspend(dev_priv);
@ -5028,6 +5058,24 @@ int i915_gem_suspend(struct drm_i915_private *dev_priv)
if (WARN_ON(!intel_engines_are_idle(dev_priv)))
i915_gem_set_wedged(dev_priv); /* no hope, discard everything */
intel_runtime_pm_put(dev_priv);
return 0;
err_unlock:
mutex_unlock(&dev->struct_mutex);
intel_runtime_pm_put(dev_priv);
return ret;
}
void i915_gem_suspend_late(struct drm_i915_private *i915)
{
struct drm_i915_gem_object *obj;
struct list_head *phases[] = {
&i915->mm.unbound_list,
&i915->mm.bound_list,
NULL
}, **phase;
/*
* Neither the BIOS, ourselves or any other kernel
* expects the system to be in execlists mode on startup,
@ -5047,20 +5095,22 @@ int i915_gem_suspend(struct drm_i915_private *dev_priv)
* machines is a good idea, we don't - just in case it leaves the
* machine in an unusable condition.
*/
intel_uc_sanitize(dev_priv);
i915_gem_sanitize(dev_priv);
intel_runtime_pm_put(dev_priv);
return 0;
mutex_lock(&i915->drm.struct_mutex);
for (phase = phases; *phase; phase++) {
list_for_each_entry(obj, *phase, mm.link)
WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
}
mutex_unlock(&i915->drm.struct_mutex);
err_unlock:
mutex_unlock(&dev->struct_mutex);
intel_runtime_pm_put(dev_priv);
return ret;
intel_uc_sanitize(i915);
i915_gem_sanitize(i915);
}
void i915_gem_resume(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
WARN_ON(i915->gt.awake);
mutex_lock(&i915->drm.struct_mutex);
@ -5234,9 +5284,15 @@ int i915_gem_init_hw(struct drm_i915_private *dev_priv)
/* Only when the HW is re-initialised, can we replay the requests */
ret = __i915_gem_restart_engines(dev_priv);
if (ret)
goto cleanup_uc;
out:
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
return ret;
cleanup_uc:
intel_uc_fini_hw(dev_priv);
goto out;
}
static int __intel_engines_record_defaults(struct drm_i915_private *i915)
@ -5357,12 +5413,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
{
int ret;
/*
* We need to fallback to 4K pages since gvt gtt handling doesn't
* support huge page entries - we will need to check either hypervisor
* mm can support huge guest page or just do emulation in gvt.
*/
if (intel_vgpu_active(dev_priv))
/* We need to fallback to 4K pages if host doesn't support huge gtt. */
if (intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
mkwrite_device_info(dev_priv)->page_sizes =
I915_GTT_PAGE_SIZE_4K;
@ -5502,6 +5554,28 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
return ret;
}
void i915_gem_fini(struct drm_i915_private *dev_priv)
{
i915_gem_suspend_late(dev_priv);
/* Flush any outstanding unpin_work. */
i915_gem_drain_workqueue(dev_priv);
mutex_lock(&dev_priv->drm.struct_mutex);
intel_uc_fini_hw(dev_priv);
intel_uc_fini(dev_priv);
i915_gem_cleanup_engines(dev_priv);
i915_gem_contexts_fini(dev_priv);
mutex_unlock(&dev_priv->drm.struct_mutex);
intel_uc_fini_misc(dev_priv);
i915_gem_cleanup_userptr(dev_priv);
i915_gem_drain_freed_objects(dev_priv);
WARN_ON(!list_empty(&dev_priv->contexts.list));
}
void i915_gem_init_mmio(struct drm_i915_private *i915)
{
i915_gem_sanitize(i915);
@ -5666,16 +5740,17 @@ int i915_gem_freeze(struct drm_i915_private *dev_priv)
return 0;
}
int i915_gem_freeze_late(struct drm_i915_private *dev_priv)
int i915_gem_freeze_late(struct drm_i915_private *i915)
{
struct drm_i915_gem_object *obj;
struct list_head *phases[] = {
&dev_priv->mm.unbound_list,
&dev_priv->mm.bound_list,
&i915->mm.unbound_list,
&i915->mm.bound_list,
NULL
}, **p;
}, **phase;
/* Called just before we write the hibernation image.
/*
* Called just before we write the hibernation image.
*
* We need to update the domain tracking to reflect that the CPU
* will be accessing all the pages to create and restore from the
@ -5689,15 +5764,15 @@ int i915_gem_freeze_late(struct drm_i915_private *dev_priv)
* the objects as well, see i915_gem_freeze()
*/
i915_gem_shrink(dev_priv, -1UL, NULL, I915_SHRINK_UNBOUND);
i915_gem_drain_freed_objects(dev_priv);
i915_gem_shrink(i915, -1UL, NULL, I915_SHRINK_UNBOUND);
i915_gem_drain_freed_objects(i915);
spin_lock(&dev_priv->mm.obj_lock);
for (p = phases; *p; p++) {
list_for_each_entry(obj, *p, mm.link)
__start_cpu_write(obj);
mutex_lock(&i915->drm.struct_mutex);
for (phase = phases; *phase; phase++) {
list_for_each_entry(obj, *phase, mm.link)
WARN_ON(i915_gem_object_set_to_cpu_domain(obj, true));
}
spin_unlock(&dev_priv->mm.obj_lock);
mutex_unlock(&i915->drm.struct_mutex);
return 0;
}


@ -26,6 +26,7 @@
#define __I915_GEM_H__
#include <linux/bug.h>
#include <linux/interrupt.h>
struct drm_i915_private;
@ -62,9 +63,12 @@ struct drm_i915_private;
#if IS_ENABLED(CONFIG_DRM_I915_TRACE_GEM)
#define GEM_TRACE(...) trace_printk(__VA_ARGS__)
#define GEM_TRACE_DUMP() ftrace_dump(DUMP_ALL)
#define GEM_TRACE_DUMP_ON(expr) \
do { if (expr) ftrace_dump(DUMP_ALL); } while (0)
#else
#define GEM_TRACE(...) do { } while (0)
#define GEM_TRACE_DUMP() do { } while (0)
#define GEM_TRACE_DUMP_ON(expr) BUILD_BUG_ON_INVALID(expr)
#endif
#define I915_NUM_ENGINES 8
@ -72,4 +76,16 @@ struct drm_i915_private;
void i915_gem_park(struct drm_i915_private *i915);
void i915_gem_unpark(struct drm_i915_private *i915);
static inline void __tasklet_disable_sync_once(struct tasklet_struct *t)
{
if (atomic_inc_return(&t->count) == 1)
tasklet_unlock_wait(t);
}
static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
{
if (atomic_dec_return(&t->count) == 0)
tasklet_kill(t);
}
#endif /* __I915_GEM_H__ */


@ -127,14 +127,8 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
for (n = 0; n < ARRAY_SIZE(ctx->__engine); n++) {
struct intel_context *ce = &ctx->__engine[n];
if (!ce->state)
continue;
WARN_ON(ce->pin_count);
if (ce->ring)
intel_ring_free(ce->ring);
__i915_gem_object_release_unless_active(ce->state->obj);
if (ce->ops)
ce->ops->destroy(ce);
}
kfree(ctx->name);
@ -203,7 +197,7 @@ static void context_close(struct i915_gem_context *ctx)
*/
lut_close(ctx);
if (ctx->ppgtt)
i915_ppgtt_close(&ctx->ppgtt->base);
i915_ppgtt_close(&ctx->ppgtt->vm);
ctx->file_priv = ERR_PTR(-EBADF);
i915_gem_context_put(ctx);
@ -214,10 +208,19 @@ static int assign_hw_id(struct drm_i915_private *dev_priv, unsigned *out)
int ret;
unsigned int max;
if (INTEL_GEN(dev_priv) >= 11)
if (INTEL_GEN(dev_priv) >= 11) {
max = GEN11_MAX_CONTEXT_HW_ID;
else
max = MAX_CONTEXT_HW_ID;
} else {
/*
* When using GuC in proxy submission, GuC consumes the
* highest bit in the context id to indicate proxy submission.
*/
if (USES_GUC_SUBMISSION(dev_priv))
max = MAX_GUC_CONTEXT_HW_ID;
else
max = MAX_CONTEXT_HW_ID;
}
ret = ida_simple_get(&dev_priv->contexts.hw_ida,
0, max, GFP_KERNEL);
@ -246,7 +249,7 @@ static u32 default_desc_template(const struct drm_i915_private *i915,
desc = GEN8_CTX_VALID | GEN8_CTX_PRIVILEGE;
address_mode = INTEL_LEGACY_32B_CONTEXT;
if (ppgtt && i915_vm_is_48bit(&ppgtt->base))
if (ppgtt && i915_vm_is_48bit(&ppgtt->vm))
address_mode = INTEL_LEGACY_64B_CONTEXT;
desc |= address_mode << GEN8_CTX_ADDRESSING_MODE_SHIFT;
@ -266,6 +269,7 @@ __create_hw_context(struct drm_i915_private *dev_priv,
struct drm_i915_file_private *file_priv)
{
struct i915_gem_context *ctx;
unsigned int n;
int ret;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@ -283,6 +287,12 @@ __create_hw_context(struct drm_i915_private *dev_priv,
ctx->i915 = dev_priv;
ctx->sched.priority = I915_PRIORITY_NORMAL;
for (n = 0; n < ARRAY_SIZE(ctx->__engine); n++) {
struct intel_context *ce = &ctx->__engine[n];
ce->gem_context = ctx;
}
INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL);
INIT_LIST_HEAD(&ctx->handles_list);
@ -514,16 +524,8 @@ void i915_gem_contexts_lost(struct drm_i915_private *dev_priv)
lockdep_assert_held(&dev_priv->drm.struct_mutex);
for_each_engine(engine, dev_priv, id) {
engine->legacy_active_context = NULL;
engine->legacy_active_ppgtt = NULL;
if (!engine->last_retired_context)
continue;
intel_context_unpin(engine->last_retired_context, engine);
engine->last_retired_context = NULL;
}
for_each_engine(engine, dev_priv, id)
intel_engine_lost_context(engine);
}
void i915_gem_contexts_fini(struct drm_i915_private *i915)
@ -583,58 +585,119 @@ last_request_on_engine(struct i915_timeline *timeline,
{
struct i915_request *rq;
if (timeline == &engine->timeline)
return NULL;
GEM_BUG_ON(timeline == &engine->timeline);
rq = i915_gem_active_raw(&timeline->last_request,
&engine->i915->drm.struct_mutex);
if (rq && rq->engine == engine)
if (rq && rq->engine == engine) {
GEM_TRACE("last request for %s on engine %s: %llx:%d\n",
timeline->name, engine->name,
rq->fence.context, rq->fence.seqno);
GEM_BUG_ON(rq->timeline != timeline);
return rq;
}
return NULL;
}
static bool engine_has_idle_kernel_context(struct intel_engine_cs *engine)
static bool engine_has_kernel_context_barrier(struct intel_engine_cs *engine)
{
struct i915_timeline *timeline;
struct drm_i915_private *i915 = engine->i915;
const struct intel_context * const ce =
to_intel_context(i915->kernel_context, engine);
struct i915_timeline *barrier = ce->ring->timeline;
struct intel_ring *ring;
bool any_active = false;
list_for_each_entry(timeline, &engine->i915->gt.timelines, link) {
if (last_request_on_engine(timeline, engine))
return false;
}
return intel_engine_has_kernel_context(engine);
}
int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv)
{
struct intel_engine_cs *engine;
struct i915_timeline *timeline;
enum intel_engine_id id;
lockdep_assert_held(&dev_priv->drm.struct_mutex);
i915_retire_requests(dev_priv);
for_each_engine(engine, dev_priv, id) {
lockdep_assert_held(&i915->drm.struct_mutex);
list_for_each_entry(ring, &i915->gt.active_rings, active_link) {
struct i915_request *rq;
if (engine_has_idle_kernel_context(engine))
rq = last_request_on_engine(ring->timeline, engine);
if (!rq)
continue;
rq = i915_request_alloc(engine, dev_priv->kernel_context);
any_active = true;
if (rq->hw_context == ce)
continue;
/*
* Was this request submitted after the previous
* switch-to-kernel-context?
*/
if (!i915_timeline_sync_is_later(barrier, &rq->fence)) {
GEM_TRACE("%s needs barrier for %llx:%d\n",
ring->timeline->name,
rq->fence.context,
rq->fence.seqno);
return false;
}
GEM_TRACE("%s has barrier after %llx:%d\n",
ring->timeline->name,
rq->fence.context,
rq->fence.seqno);
}
/*
* If any other timeline was still active and behind the last barrier,
* then our last switch-to-kernel-context must still be queued and
* will run last (leaving the engine in the kernel context when it
* eventually idles).
*/
if (any_active)
return true;
/* The engine is idle; check that it is idling in the kernel context. */
return engine->last_retired_context == ce;
}
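
The barrier check above reduces to one rule: a fresh switch-to-kernel-context request is only needed if some still-active ring has submitted work that the kernel context's timeline has not yet synchronised against. A minimal standalone C model of that rule follows; all names and numbers are hypothetical, it is an illustration only and not drm/i915 code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model: per-ring last submitted seqno vs. the seqno the
 * last kernel-context barrier synchronised against on that ring. */
struct ring_state {
	unsigned int last_request;
	bool active;
};

static bool needs_new_barrier(const struct ring_state *rings,
			      const unsigned int *barrier_synced,
			      int count)
{
	for (int i = 0; i < count; i++) {
		if (!rings[i].active)
			continue;
		/* work submitted after the previous switch-to-kernel-context */
		if (rings[i].last_request > barrier_synced[i])
			return true;
	}
	/* every active ring is already behind the barrier */
	return false;
}

int main(void)
{
	struct ring_state rings[2] = { { 10, true }, { 4, false } };
	unsigned int synced[2] = { 10, 0 };

	printf("barrier needed? %s\n",
	       needs_new_barrier(rings, synced, 2) ? "yes" : "no"); /* no */

	rings[0].last_request = 12; /* new work arrives on the active ring */
	printf("barrier needed? %s\n",
	       needs_new_barrier(rings, synced, 2) ? "yes" : "no"); /* yes */
	return 0;
}
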
int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
GEM_TRACE("awake?=%s\n", yesno(i915->gt.awake));
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(!i915->kernel_context);
i915_retire_requests(i915);
for_each_engine(engine, i915, id) {
struct intel_ring *ring;
struct i915_request *rq;
GEM_BUG_ON(!to_intel_context(i915->kernel_context, engine));
if (engine_has_kernel_context_barrier(engine))
continue;
GEM_TRACE("emit barrier on %s\n", engine->name);
rq = i915_request_alloc(engine, i915->kernel_context);
if (IS_ERR(rq))
return PTR_ERR(rq);
/* Queue this switch after all other activity */
list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {
list_for_each_entry(ring, &i915->gt.active_rings, active_link) {
struct i915_request *prev;
prev = last_request_on_engine(timeline, engine);
if (prev)
i915_sw_fence_await_sw_fence_gfp(&rq->submit,
&prev->submit,
I915_FENCE_GFP);
prev = last_request_on_engine(ring->timeline, engine);
if (!prev)
continue;
if (prev->gem_context == i915->kernel_context)
continue;
GEM_TRACE("add barrier on %s for %llx:%d\n",
engine->name,
prev->fence.context,
prev->fence.seqno);
i915_sw_fence_await_sw_fence_gfp(&rq->submit,
&prev->submit,
I915_FENCE_GFP);
i915_timeline_sync_set(rq->timeline, &prev->fence);
}
/*
@ -747,11 +810,11 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
break;
case I915_CONTEXT_PARAM_GTT_SIZE:
if (ctx->ppgtt)
args->value = ctx->ppgtt->base.total;
args->value = ctx->ppgtt->vm.total;
else if (to_i915(dev)->mm.aliasing_ppgtt)
args->value = to_i915(dev)->mm.aliasing_ppgtt->base.total;
args->value = to_i915(dev)->mm.aliasing_ppgtt->vm.total;
else
args->value = to_i915(dev)->ggtt.base.total;
args->value = to_i915(dev)->ggtt.vm.total;
break;
case I915_CONTEXT_PARAM_NO_ERROR_CAPTURE:
args->value = i915_gem_context_no_error_capture(ctx);


@ -30,6 +30,7 @@
#include <linux/radix-tree.h>
#include "i915_gem.h"
#include "i915_scheduler.h"
struct pid;
@ -45,6 +46,13 @@ struct intel_ring;
#define DEFAULT_CONTEXT_HANDLE 0
struct intel_context;
struct intel_context_ops {
void (*unpin)(struct intel_context *ce);
void (*destroy)(struct intel_context *ce);
};
/**
* struct i915_gem_context - client state
*
@ -144,11 +152,14 @@ struct i915_gem_context {
/** engine: per-engine logical HW state */
struct intel_context {
struct i915_gem_context *gem_context;
struct i915_vma *state;
struct intel_ring *ring;
u32 *lrc_reg_state;
u64 lrc_desc;
int pin_count;
const struct intel_context_ops *ops;
} __engine[I915_NUM_ENGINES];
/** ring_size: size for allocating the per-engine ring buffer */
@ -263,25 +274,26 @@ to_intel_context(struct i915_gem_context *ctx,
return &ctx->__engine[engine->id];
}
static inline struct intel_ring *
static inline struct intel_context *
intel_context_pin(struct i915_gem_context *ctx, struct intel_engine_cs *engine)
{
return engine->context_pin(engine, ctx);
}
static inline void __intel_context_pin(struct i915_gem_context *ctx,
const struct intel_engine_cs *engine)
static inline void __intel_context_pin(struct intel_context *ce)
{
struct intel_context *ce = to_intel_context(ctx, engine);
GEM_BUG_ON(!ce->pin_count);
ce->pin_count++;
}
static inline void intel_context_unpin(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
static inline void intel_context_unpin(struct intel_context *ce)
{
engine->context_unpin(engine, ctx);
GEM_BUG_ON(!ce->pin_count);
if (--ce->pin_count)
return;
GEM_BUG_ON(!ce->ops);
ce->ops->unpin(ce);
}
/* i915_gem_context.c */


@ -703,7 +703,7 @@ static int eb_select_context(struct i915_execbuffer *eb)
return -ENOENT;
eb->ctx = ctx;
eb->vm = ctx->ppgtt ? &ctx->ppgtt->base : &eb->i915->ggtt.base;
eb->vm = ctx->ppgtt ? &ctx->ppgtt->vm : &eb->i915->ggtt.vm;
eb->context_flags = 0;
if (ctx->flags & CONTEXT_NO_ZEROMAP)
@ -943,9 +943,9 @@ static void reloc_cache_reset(struct reloc_cache *cache)
if (cache->node.allocated) {
struct i915_ggtt *ggtt = cache_to_ggtt(cache);
ggtt->base.clear_range(&ggtt->base,
cache->node.start,
cache->node.size);
ggtt->vm.clear_range(&ggtt->vm,
cache->node.start,
cache->node.size);
drm_mm_remove_node(&cache->node);
} else {
i915_vma_unpin((struct i915_vma *)cache->node.mm);
@ -1016,7 +1016,7 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
if (IS_ERR(vma)) {
memset(&cache->node, 0, sizeof(cache->node));
err = drm_mm_insert_node_in_range
(&ggtt->base.mm, &cache->node,
(&ggtt->vm.mm, &cache->node,
PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE,
0, ggtt->mappable_end,
DRM_MM_INSERT_LOW);
@ -1037,9 +1037,9 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
offset = cache->node.start;
if (cache->node.allocated) {
wmb();
ggtt->base.insert_page(&ggtt->base,
i915_gem_object_get_dma_address(obj, page),
offset, I915_CACHE_NONE, 0);
ggtt->vm.insert_page(&ggtt->vm,
i915_gem_object_get_dma_address(obj, page),
offset, I915_CACHE_NONE, 0);
} else {
offset += page << PAGE_SHIFT;
}


@ -42,7 +42,7 @@
#include "intel_drv.h"
#include "intel_frontbuffer.h"
#define I915_GFP_DMA (GFP_KERNEL | __GFP_HIGHMEM)
#define I915_GFP_ALLOW_FAIL (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN)
/**
* DOC: Global GTT views
@ -190,19 +190,11 @@ int intel_sanitize_enable_ppgtt(struct drm_i915_private *dev_priv,
return 1;
}
static int ppgtt_bind_vma(struct i915_vma *vma,
enum i915_cache_level cache_level,
u32 unused)
static int gen6_ppgtt_bind_vma(struct i915_vma *vma,
enum i915_cache_level cache_level,
u32 unused)
{
u32 pte_flags;
int ret;
if (!(vma->flags & I915_VMA_LOCAL_BIND)) {
ret = vma->vm->allocate_va_range(vma->vm, vma->node.start,
vma->size);
if (ret)
return ret;
}
/* Currently applicable only to VLV */
pte_flags = 0;
@ -214,6 +206,22 @@ static int ppgtt_bind_vma(struct i915_vma *vma,
return 0;
}
static int gen8_ppgtt_bind_vma(struct i915_vma *vma,
enum i915_cache_level cache_level,
u32 unused)
{
int ret;
if (!(vma->flags & I915_VMA_LOCAL_BIND)) {
ret = vma->vm->allocate_va_range(vma->vm,
vma->node.start, vma->size);
if (ret)
return ret;
}
return gen6_ppgtt_bind_vma(vma, cache_level, unused);
}
static void ppgtt_unbind_vma(struct i915_vma *vma)
{
vma->vm->clear_range(vma->vm, vma->node.start, vma->size);
@ -489,7 +497,7 @@ static int __setup_page_dma(struct i915_address_space *vm,
struct i915_page_dma *p,
gfp_t gfp)
{
p->page = vm_alloc_page(vm, gfp | __GFP_NOWARN | __GFP_NORETRY);
p->page = vm_alloc_page(vm, gfp | I915_GFP_ALLOW_FAIL);
if (unlikely(!p->page))
return -ENOMEM;
@ -506,7 +514,7 @@ static int __setup_page_dma(struct i915_address_space *vm,
static int setup_page_dma(struct i915_address_space *vm,
struct i915_page_dma *p)
{
return __setup_page_dma(vm, p, I915_GFP_DMA);
return __setup_page_dma(vm, p, __GFP_HIGHMEM);
}
static void cleanup_page_dma(struct i915_address_space *vm,
@ -520,8 +528,8 @@ static void cleanup_page_dma(struct i915_address_space *vm,
#define setup_px(vm, px) setup_page_dma((vm), px_base(px))
#define cleanup_px(vm, px) cleanup_page_dma((vm), px_base(px))
#define fill_px(ppgtt, px, v) fill_page_dma((vm), px_base(px), (v))
#define fill32_px(ppgtt, px, v) fill_page_dma_32((vm), px_base(px), (v))
#define fill_px(vm, px, v) fill_page_dma((vm), px_base(px), (v))
#define fill32_px(vm, px, v) fill_page_dma_32((vm), px_base(px), (v))
static void fill_page_dma(struct i915_address_space *vm,
struct i915_page_dma *p,
@ -614,7 +622,7 @@ static struct i915_page_table *alloc_pt(struct i915_address_space *vm)
{
struct i915_page_table *pt;
pt = kmalloc(sizeof(*pt), GFP_KERNEL | __GFP_NOWARN);
pt = kmalloc(sizeof(*pt), I915_GFP_ALLOW_FAIL);
if (unlikely(!pt))
return ERR_PTR(-ENOMEM);
@ -651,7 +659,7 @@ static struct i915_page_directory *alloc_pd(struct i915_address_space *vm)
{
struct i915_page_directory *pd;
pd = kzalloc(sizeof(*pd), GFP_KERNEL | __GFP_NOWARN);
pd = kzalloc(sizeof(*pd), I915_GFP_ALLOW_FAIL);
if (unlikely(!pd))
return ERR_PTR(-ENOMEM);
@ -685,7 +693,7 @@ static int __pdp_init(struct i915_address_space *vm,
const unsigned int pdpes = i915_pdpes_per_pdp(vm);
pdp->page_directory = kmalloc_array(pdpes, sizeof(*pdp->page_directory),
GFP_KERNEL | __GFP_NOWARN);
I915_GFP_ALLOW_FAIL);
if (unlikely(!pdp->page_directory))
return -ENOMEM;
@ -765,53 +773,6 @@ static void gen8_initialize_pml4(struct i915_address_space *vm,
memset_p((void **)pml4->pdps, vm->scratch_pdp, GEN8_PML4ES_PER_PML4);
}
/* Broadwell Page Directory Pointer Descriptors */
static int gen8_write_pdp(struct i915_request *rq,
unsigned entry,
dma_addr_t addr)
{
struct intel_engine_cs *engine = rq->engine;
u32 *cs;
BUG_ON(entry >= 4);
cs = intel_ring_begin(rq, 6);
if (IS_ERR(cs))
return PTR_ERR(cs);
*cs++ = MI_LOAD_REGISTER_IMM(1);
*cs++ = i915_mmio_reg_offset(GEN8_RING_PDP_UDW(engine, entry));
*cs++ = upper_32_bits(addr);
*cs++ = MI_LOAD_REGISTER_IMM(1);
*cs++ = i915_mmio_reg_offset(GEN8_RING_PDP_LDW(engine, entry));
*cs++ = lower_32_bits(addr);
intel_ring_advance(rq, cs);
return 0;
}
static int gen8_mm_switch_3lvl(struct i915_hw_ppgtt *ppgtt,
struct i915_request *rq)
{
int i, ret;
for (i = GEN8_3LVL_PDPES - 1; i >= 0; i--) {
const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);
ret = gen8_write_pdp(rq, i, pd_daddr);
if (ret)
return ret;
}
return 0;
}
static int gen8_mm_switch_4lvl(struct i915_hw_ppgtt *ppgtt,
struct i915_request *rq)
{
return gen8_write_pdp(rq, 0, px_dma(&ppgtt->pml4));
}
/* PDE TLBs are a pain to invalidate on GEN8+. When we modify
* the page table structures, we mark them dirty so that
* context switching/execlist queuing code takes extra steps
@ -819,7 +780,7 @@ static int gen8_mm_switch_4lvl(struct i915_hw_ppgtt *ppgtt,
*/
static void mark_tlbs_dirty(struct i915_hw_ppgtt *ppgtt)
{
ppgtt->pd_dirty_rings = INTEL_INFO(ppgtt->base.i915)->ring_mask;
ppgtt->pd_dirty_rings = INTEL_INFO(ppgtt->vm.i915)->ring_mask;
}
/* Removes entries from a single page table, releasing it if it's empty.
@ -1012,7 +973,7 @@ gen8_ppgtt_insert_pte_entries(struct i915_hw_ppgtt *ppgtt,
gen8_pte_t *vaddr;
bool ret;
GEM_BUG_ON(idx->pdpe >= i915_pdpes_per_pdp(&ppgtt->base));
GEM_BUG_ON(idx->pdpe >= i915_pdpes_per_pdp(&ppgtt->vm));
pd = pdp->page_directory[idx->pdpe];
vaddr = kmap_atomic_px(pd->page_table[idx->pde]);
do {
@ -1043,7 +1004,7 @@ gen8_ppgtt_insert_pte_entries(struct i915_hw_ppgtt *ppgtt,
break;
}
GEM_BUG_ON(idx->pdpe >= i915_pdpes_per_pdp(&ppgtt->base));
GEM_BUG_ON(idx->pdpe >= i915_pdpes_per_pdp(&ppgtt->vm));
pd = pdp->page_directory[idx->pdpe];
}
@ -1229,7 +1190,7 @@ static int gen8_init_scratch(struct i915_address_space *vm)
{
int ret;
ret = setup_scratch_page(vm, I915_GFP_DMA);
ret = setup_scratch_page(vm, __GFP_HIGHMEM);
if (ret)
return ret;
@ -1272,7 +1233,7 @@ static int gen8_init_scratch(struct i915_address_space *vm)
static int gen8_ppgtt_notify_vgt(struct i915_hw_ppgtt *ppgtt, bool create)
{
struct i915_address_space *vm = &ppgtt->base;
struct i915_address_space *vm = &ppgtt->vm;
struct drm_i915_private *dev_priv = vm->i915;
enum vgt_g2v_type msg;
int i;
@ -1333,13 +1294,13 @@ static void gen8_ppgtt_cleanup_4lvl(struct i915_hw_ppgtt *ppgtt)
int i;
for (i = 0; i < GEN8_PML4ES_PER_PML4; i++) {
if (ppgtt->pml4.pdps[i] == ppgtt->base.scratch_pdp)
if (ppgtt->pml4.pdps[i] == ppgtt->vm.scratch_pdp)
continue;
gen8_ppgtt_cleanup_3lvl(&ppgtt->base, ppgtt->pml4.pdps[i]);
gen8_ppgtt_cleanup_3lvl(&ppgtt->vm, ppgtt->pml4.pdps[i]);
}
cleanup_px(&ppgtt->base, &ppgtt->pml4);
cleanup_px(&ppgtt->vm, &ppgtt->pml4);
}
static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
@ -1353,7 +1314,7 @@ static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
if (use_4lvl(vm))
gen8_ppgtt_cleanup_4lvl(ppgtt);
else
gen8_ppgtt_cleanup_3lvl(&ppgtt->base, &ppgtt->pdp);
gen8_ppgtt_cleanup_3lvl(&ppgtt->vm, &ppgtt->pdp);
gen8_free_scratch(vm);
}
@ -1489,7 +1450,7 @@ static void gen8_dump_pdp(struct i915_hw_ppgtt *ppgtt,
gen8_pte_t scratch_pte,
struct seq_file *m)
{
struct i915_address_space *vm = &ppgtt->base;
struct i915_address_space *vm = &ppgtt->vm;
struct i915_page_directory *pd;
u32 pdpe;
@ -1499,7 +1460,7 @@ static void gen8_dump_pdp(struct i915_hw_ppgtt *ppgtt,
u64 pd_start = start;
u32 pde;
if (pdp->page_directory[pdpe] == ppgtt->base.scratch_pd)
if (pdp->page_directory[pdpe] == ppgtt->vm.scratch_pd)
continue;
seq_printf(m, "\tPDPE #%d\n", pdpe);
@ -1507,7 +1468,7 @@ static void gen8_dump_pdp(struct i915_hw_ppgtt *ppgtt,
u32 pte;
gen8_pte_t *pt_vaddr;
if (pd->page_table[pde] == ppgtt->base.scratch_pt)
if (pd->page_table[pde] == ppgtt->vm.scratch_pt)
continue;
pt_vaddr = kmap_atomic_px(pt);
@ -1540,10 +1501,10 @@ static void gen8_dump_pdp(struct i915_hw_ppgtt *ppgtt,
static void gen8_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
{
struct i915_address_space *vm = &ppgtt->base;
struct i915_address_space *vm = &ppgtt->vm;
const gen8_pte_t scratch_pte =
gen8_pte_encode(vm->scratch_page.daddr, I915_CACHE_LLC);
u64 start = 0, length = ppgtt->base.total;
u64 start = 0, length = ppgtt->vm.total;
if (use_4lvl(vm)) {
u64 pml4e;
@ -1551,7 +1512,7 @@ static void gen8_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
struct i915_page_directory_pointer *pdp;
gen8_for_each_pml4e(pdp, pml4, start, length, pml4e) {
if (pml4->pdps[pml4e] == ppgtt->base.scratch_pdp)
if (pml4->pdps[pml4e] == ppgtt->vm.scratch_pdp)
continue;
seq_printf(m, " PML4E #%llu\n", pml4e);
@ -1564,10 +1525,10 @@ static void gen8_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
static int gen8_preallocate_top_level_pdp(struct i915_hw_ppgtt *ppgtt)
{
struct i915_address_space *vm = &ppgtt->base;
struct i915_address_space *vm = &ppgtt->vm;
struct i915_page_directory_pointer *pdp = &ppgtt->pdp;
struct i915_page_directory *pd;
u64 start = 0, length = ppgtt->base.total;
u64 start = 0, length = ppgtt->vm.total;
u64 from = start;
unsigned int pdpe;
@ -1603,11 +1564,11 @@ static int gen8_preallocate_top_level_pdp(struct i915_hw_ppgtt *ppgtt)
*/
static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
{
struct i915_address_space *vm = &ppgtt->base;
struct i915_address_space *vm = &ppgtt->vm;
struct drm_i915_private *dev_priv = vm->i915;
int ret;
ppgtt->base.total = USES_FULL_48BIT_PPGTT(dev_priv) ?
ppgtt->vm.total = USES_FULL_48BIT_PPGTT(dev_priv) ?
1ULL << 48 :
1ULL << 32;
@ -1615,27 +1576,26 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
* And we are not sure about the latter so play safe for now.
*/
if (IS_CHERRYVIEW(dev_priv) || IS_BROXTON(dev_priv))
ppgtt->base.pt_kmap_wc = true;
ppgtt->vm.pt_kmap_wc = true;
ret = gen8_init_scratch(&ppgtt->base);
ret = gen8_init_scratch(&ppgtt->vm);
if (ret) {
ppgtt->base.total = 0;
ppgtt->vm.total = 0;
return ret;
}
if (use_4lvl(vm)) {
ret = setup_px(&ppgtt->base, &ppgtt->pml4);
ret = setup_px(&ppgtt->vm, &ppgtt->pml4);
if (ret)
goto free_scratch;
gen8_initialize_pml4(&ppgtt->base, &ppgtt->pml4);
gen8_initialize_pml4(&ppgtt->vm, &ppgtt->pml4);
ppgtt->switch_mm = gen8_mm_switch_4lvl;
ppgtt->base.allocate_va_range = gen8_ppgtt_alloc_4lvl;
ppgtt->base.insert_entries = gen8_ppgtt_insert_4lvl;
ppgtt->base.clear_range = gen8_ppgtt_clear_4lvl;
ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl;
ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl;
ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl;
} else {
ret = __pdp_init(&ppgtt->base, &ppgtt->pdp);
ret = __pdp_init(&ppgtt->vm, &ppgtt->pdp);
if (ret)
goto free_scratch;
@ -1647,36 +1607,35 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
}
}
ppgtt->switch_mm = gen8_mm_switch_3lvl;
ppgtt->base.allocate_va_range = gen8_ppgtt_alloc_3lvl;
ppgtt->base.insert_entries = gen8_ppgtt_insert_3lvl;
ppgtt->base.clear_range = gen8_ppgtt_clear_3lvl;
ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_3lvl;
ppgtt->vm.insert_entries = gen8_ppgtt_insert_3lvl;
ppgtt->vm.clear_range = gen8_ppgtt_clear_3lvl;
}
if (intel_vgpu_active(dev_priv))
gen8_ppgtt_notify_vgt(ppgtt, true);
ppgtt->base.cleanup = gen8_ppgtt_cleanup;
ppgtt->base.unbind_vma = ppgtt_unbind_vma;
ppgtt->base.bind_vma = ppgtt_bind_vma;
ppgtt->base.set_pages = ppgtt_set_pages;
ppgtt->base.clear_pages = clear_pages;
ppgtt->vm.cleanup = gen8_ppgtt_cleanup;
ppgtt->vm.bind_vma = gen8_ppgtt_bind_vma;
ppgtt->vm.unbind_vma = ppgtt_unbind_vma;
ppgtt->vm.set_pages = ppgtt_set_pages;
ppgtt->vm.clear_pages = clear_pages;
ppgtt->debug_dump = gen8_dump_ppgtt;
return 0;
free_scratch:
gen8_free_scratch(&ppgtt->base);
gen8_free_scratch(&ppgtt->vm);
return ret;
}
static void gen6_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
{
struct i915_address_space *vm = &ppgtt->base;
struct i915_address_space *vm = &ppgtt->vm;
struct i915_page_table *unused;
gen6_pte_t scratch_pte;
u32 pd_entry, pte, pde;
u32 start = 0, length = ppgtt->base.total;
u32 start = 0, length = ppgtt->vm.total;
scratch_pte = vm->pte_encode(vm->scratch_page.daddr,
I915_CACHE_LLC, 0);
@ -1974,7 +1933,7 @@ static int gen6_init_scratch(struct i915_address_space *vm)
{
int ret;
ret = setup_scratch_page(vm, I915_GFP_DMA);
ret = setup_scratch_page(vm, __GFP_HIGHMEM);
if (ret)
return ret;
@ -2013,8 +1972,8 @@ static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
static int gen6_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt)
{
struct i915_address_space *vm = &ppgtt->base;
struct drm_i915_private *dev_priv = ppgtt->base.i915;
struct i915_address_space *vm = &ppgtt->vm;
struct drm_i915_private *dev_priv = ppgtt->vm.i915;
struct i915_ggtt *ggtt = &dev_priv->ggtt;
int ret;
@ -2022,16 +1981,16 @@ static int gen6_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt)
* allocator works in address space sizes, so it's multiplied by page
* size. We allocate at the top of the GTT to avoid fragmentation.
*/
BUG_ON(!drm_mm_initialized(&ggtt->base.mm));
BUG_ON(!drm_mm_initialized(&ggtt->vm.mm));
ret = gen6_init_scratch(vm);
if (ret)
return ret;
ret = i915_gem_gtt_insert(&ggtt->base, &ppgtt->node,
ret = i915_gem_gtt_insert(&ggtt->vm, &ppgtt->node,
GEN6_PD_SIZE, GEN6_PD_ALIGN,
I915_COLOR_UNEVICTABLE,
0, ggtt->base.total,
0, ggtt->vm.total,
PIN_HIGH);
if (ret)
goto err_out;
@ -2064,16 +2023,16 @@ static void gen6_scratch_va_range(struct i915_hw_ppgtt *ppgtt,
u32 pde;
gen6_for_each_pde(unused, &ppgtt->pd, start, length, pde)
ppgtt->pd.page_table[pde] = ppgtt->base.scratch_pt;
ppgtt->pd.page_table[pde] = ppgtt->vm.scratch_pt;
}
static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
{
struct drm_i915_private *dev_priv = ppgtt->base.i915;
struct drm_i915_private *dev_priv = ppgtt->vm.i915;
struct i915_ggtt *ggtt = &dev_priv->ggtt;
int ret;
ppgtt->base.pte_encode = ggtt->base.pte_encode;
ppgtt->vm.pte_encode = ggtt->vm.pte_encode;
if (intel_vgpu_active(dev_priv) || IS_GEN6(dev_priv))
ppgtt->switch_mm = gen6_mm_switch;
else if (IS_HASWELL(dev_priv))
@ -2087,24 +2046,24 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
if (ret)
return ret;
ppgtt->base.total = I915_PDES * GEN6_PTES * PAGE_SIZE;
ppgtt->vm.total = I915_PDES * GEN6_PTES * PAGE_SIZE;
gen6_scratch_va_range(ppgtt, 0, ppgtt->base.total);
gen6_write_page_range(ppgtt, 0, ppgtt->base.total);
gen6_scratch_va_range(ppgtt, 0, ppgtt->vm.total);
gen6_write_page_range(ppgtt, 0, ppgtt->vm.total);
ret = gen6_alloc_va_range(&ppgtt->base, 0, ppgtt->base.total);
ret = gen6_alloc_va_range(&ppgtt->vm, 0, ppgtt->vm.total);
if (ret) {
gen6_ppgtt_cleanup(&ppgtt->base);
gen6_ppgtt_cleanup(&ppgtt->vm);
return ret;
}
ppgtt->base.clear_range = gen6_ppgtt_clear_range;
ppgtt->base.insert_entries = gen6_ppgtt_insert_entries;
ppgtt->base.unbind_vma = ppgtt_unbind_vma;
ppgtt->base.bind_vma = ppgtt_bind_vma;
ppgtt->base.set_pages = ppgtt_set_pages;
ppgtt->base.clear_pages = clear_pages;
ppgtt->base.cleanup = gen6_ppgtt_cleanup;
ppgtt->vm.clear_range = gen6_ppgtt_clear_range;
ppgtt->vm.insert_entries = gen6_ppgtt_insert_entries;
ppgtt->vm.bind_vma = gen6_ppgtt_bind_vma;
ppgtt->vm.unbind_vma = ppgtt_unbind_vma;
ppgtt->vm.set_pages = ppgtt_set_pages;
ppgtt->vm.clear_pages = clear_pages;
ppgtt->vm.cleanup = gen6_ppgtt_cleanup;
ppgtt->debug_dump = gen6_dump_ppgtt;
DRM_DEBUG_DRIVER("Allocated pde space (%lldM) at GTT entry: %llx\n",
@ -2120,8 +2079,8 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
static int __hw_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_private *dev_priv)
{
ppgtt->base.i915 = dev_priv;
ppgtt->base.dma = &dev_priv->drm.pdev->dev;
ppgtt->vm.i915 = dev_priv;
ppgtt->vm.dma = &dev_priv->drm.pdev->dev;
if (INTEL_GEN(dev_priv) < 8)
return gen6_ppgtt_init(ppgtt);
@ -2231,10 +2190,10 @@ i915_ppgtt_create(struct drm_i915_private *dev_priv,
}
kref_init(&ppgtt->ref);
i915_address_space_init(&ppgtt->base, dev_priv, name);
ppgtt->base.file = fpriv;
i915_address_space_init(&ppgtt->vm, dev_priv, name);
ppgtt->vm.file = fpriv;
trace_i915_ppgtt_create(&ppgtt->base);
trace_i915_ppgtt_create(&ppgtt->vm);
return ppgtt;
}
@ -2268,16 +2227,16 @@ void i915_ppgtt_release(struct kref *kref)
struct i915_hw_ppgtt *ppgtt =
container_of(kref, struct i915_hw_ppgtt, ref);
trace_i915_ppgtt_release(&ppgtt->base);
trace_i915_ppgtt_release(&ppgtt->vm);
ppgtt_destroy_vma(&ppgtt->base);
ppgtt_destroy_vma(&ppgtt->vm);
GEM_BUG_ON(!list_empty(&ppgtt->base.active_list));
GEM_BUG_ON(!list_empty(&ppgtt->base.inactive_list));
GEM_BUG_ON(!list_empty(&ppgtt->base.unbound_list));
GEM_BUG_ON(!list_empty(&ppgtt->vm.active_list));
GEM_BUG_ON(!list_empty(&ppgtt->vm.inactive_list));
GEM_BUG_ON(!list_empty(&ppgtt->vm.unbound_list));
ppgtt->base.cleanup(&ppgtt->base);
i915_address_space_fini(&ppgtt->base);
ppgtt->vm.cleanup(&ppgtt->vm);
i915_address_space_fini(&ppgtt->vm);
kfree(ppgtt);
}
@ -2373,7 +2332,7 @@ void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv)
i915_check_and_clear_faults(dev_priv);
ggtt->base.clear_range(&ggtt->base, 0, ggtt->base.total);
ggtt->vm.clear_range(&ggtt->vm, 0, ggtt->vm.total);
i915_ggtt_invalidate(dev_priv);
}
@ -2716,16 +2675,16 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
struct i915_hw_ppgtt *appgtt = i915->mm.aliasing_ppgtt;
if (!(vma->flags & I915_VMA_LOCAL_BIND) &&
appgtt->base.allocate_va_range) {
ret = appgtt->base.allocate_va_range(&appgtt->base,
vma->node.start,
vma->size);
appgtt->vm.allocate_va_range) {
ret = appgtt->vm.allocate_va_range(&appgtt->vm,
vma->node.start,
vma->size);
if (ret)
return ret;
}
appgtt->base.insert_entries(&appgtt->base, vma, cache_level,
pte_flags);
appgtt->vm.insert_entries(&appgtt->vm, vma, cache_level,
pte_flags);
}
if (flags & I915_VMA_GLOBAL_BIND) {
@ -2748,7 +2707,7 @@ static void aliasing_gtt_unbind_vma(struct i915_vma *vma)
}
if (vma->flags & I915_VMA_LOCAL_BIND) {
struct i915_address_space *vm = &i915->mm.aliasing_ppgtt->base;
struct i915_address_space *vm = &i915->mm.aliasing_ppgtt->vm;
vm->clear_range(vm, vma->node.start, vma->size);
}
@ -2815,30 +2774,30 @@ int i915_gem_init_aliasing_ppgtt(struct drm_i915_private *i915)
if (IS_ERR(ppgtt))
return PTR_ERR(ppgtt);
if (WARN_ON(ppgtt->base.total < ggtt->base.total)) {
if (WARN_ON(ppgtt->vm.total < ggtt->vm.total)) {
err = -ENODEV;
goto err_ppgtt;
}
if (ppgtt->base.allocate_va_range) {
if (ppgtt->vm.allocate_va_range) {
/* Note we only pre-allocate as far as the end of the global
* GTT. On 48b / 4-level page-tables, the difference is very,
* very significant! We have to preallocate as GVT/vgpu does
* not like the page directory disappearing.
*/
err = ppgtt->base.allocate_va_range(&ppgtt->base,
0, ggtt->base.total);
err = ppgtt->vm.allocate_va_range(&ppgtt->vm,
0, ggtt->vm.total);
if (err)
goto err_ppgtt;
}
i915->mm.aliasing_ppgtt = ppgtt;
GEM_BUG_ON(ggtt->base.bind_vma != ggtt_bind_vma);
ggtt->base.bind_vma = aliasing_gtt_bind_vma;
GEM_BUG_ON(ggtt->vm.bind_vma != ggtt_bind_vma);
ggtt->vm.bind_vma = aliasing_gtt_bind_vma;
GEM_BUG_ON(ggtt->base.unbind_vma != ggtt_unbind_vma);
ggtt->base.unbind_vma = aliasing_gtt_unbind_vma;
GEM_BUG_ON(ggtt->vm.unbind_vma != ggtt_unbind_vma);
ggtt->vm.unbind_vma = aliasing_gtt_unbind_vma;
return 0;
@ -2858,8 +2817,8 @@ void i915_gem_fini_aliasing_ppgtt(struct drm_i915_private *i915)
i915_ppgtt_put(ppgtt);
ggtt->base.bind_vma = ggtt_bind_vma;
ggtt->base.unbind_vma = ggtt_unbind_vma;
ggtt->vm.bind_vma = ggtt_bind_vma;
ggtt->vm.unbind_vma = ggtt_unbind_vma;
}
int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
@ -2883,7 +2842,7 @@ int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
return ret;
/* Reserve a mappable slot for our lockless error capture */
ret = drm_mm_insert_node_in_range(&ggtt->base.mm, &ggtt->error_capture,
ret = drm_mm_insert_node_in_range(&ggtt->vm.mm, &ggtt->error_capture,
PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE,
0, ggtt->mappable_end,
DRM_MM_INSERT_LOW);
@ -2891,16 +2850,15 @@ int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
return ret;
/* Clear any non-preallocated blocks */
drm_mm_for_each_hole(entry, &ggtt->base.mm, hole_start, hole_end) {
drm_mm_for_each_hole(entry, &ggtt->vm.mm, hole_start, hole_end) {
DRM_DEBUG_KMS("clearing unused GTT space: [%lx, %lx]\n",
hole_start, hole_end);
ggtt->base.clear_range(&ggtt->base, hole_start,
hole_end - hole_start);
ggtt->vm.clear_range(&ggtt->vm, hole_start,
hole_end - hole_start);
}
/* And finally clear the reserved guard page */
ggtt->base.clear_range(&ggtt->base,
ggtt->base.total - PAGE_SIZE, PAGE_SIZE);
ggtt->vm.clear_range(&ggtt->vm, ggtt->vm.total - PAGE_SIZE, PAGE_SIZE);
if (USES_PPGTT(dev_priv) && !USES_FULL_PPGTT(dev_priv)) {
ret = i915_gem_init_aliasing_ppgtt(dev_priv);
@ -2925,11 +2883,11 @@ void i915_ggtt_cleanup_hw(struct drm_i915_private *dev_priv)
struct i915_vma *vma, *vn;
struct pagevec *pvec;
ggtt->base.closed = true;
ggtt->vm.closed = true;
mutex_lock(&dev_priv->drm.struct_mutex);
GEM_BUG_ON(!list_empty(&ggtt->base.active_list));
list_for_each_entry_safe(vma, vn, &ggtt->base.inactive_list, vm_link)
GEM_BUG_ON(!list_empty(&ggtt->vm.active_list));
list_for_each_entry_safe(vma, vn, &ggtt->vm.inactive_list, vm_link)
WARN_ON(i915_vma_unbind(vma));
mutex_unlock(&dev_priv->drm.struct_mutex);
@ -2941,12 +2899,12 @@ void i915_ggtt_cleanup_hw(struct drm_i915_private *dev_priv)
if (drm_mm_node_allocated(&ggtt->error_capture))
drm_mm_remove_node(&ggtt->error_capture);
if (drm_mm_initialized(&ggtt->base.mm)) {
if (drm_mm_initialized(&ggtt->vm.mm)) {
intel_vgt_deballoon(dev_priv);
i915_address_space_fini(&ggtt->base);
i915_address_space_fini(&ggtt->vm);
}
ggtt->base.cleanup(&ggtt->base);
ggtt->vm.cleanup(&ggtt->vm);
pvec = &dev_priv->mm.wc_stash;
if (pvec->nr) {
@ -2996,7 +2954,7 @@ static unsigned int chv_get_total_gtt_size(u16 gmch_ctrl)
static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size)
{
struct drm_i915_private *dev_priv = ggtt->base.i915;
struct drm_i915_private *dev_priv = ggtt->vm.i915;
struct pci_dev *pdev = dev_priv->drm.pdev;
phys_addr_t phys_addr;
int ret;
@ -3020,7 +2978,7 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size)
return -ENOMEM;
}
ret = setup_scratch_page(&ggtt->base, GFP_DMA32);
ret = setup_scratch_page(&ggtt->vm, GFP_DMA32);
if (ret) {
DRM_ERROR("Scratch setup failed\n");
/* iounmap will also get called at remove, but meh */
@ -3326,7 +3284,7 @@ static void setup_private_pat(struct drm_i915_private *dev_priv)
static int gen8_gmch_probe(struct i915_ggtt *ggtt)
{
struct drm_i915_private *dev_priv = ggtt->base.i915;
struct drm_i915_private *dev_priv = ggtt->vm.i915;
struct pci_dev *pdev = dev_priv->drm.pdev;
unsigned int size;
u16 snb_gmch_ctl;
@ -3350,25 +3308,25 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
else
size = gen8_get_total_gtt_size(snb_gmch_ctl);
ggtt->base.total = (size / sizeof(gen8_pte_t)) << PAGE_SHIFT;
ggtt->base.cleanup = gen6_gmch_remove;
ggtt->base.bind_vma = ggtt_bind_vma;
ggtt->base.unbind_vma = ggtt_unbind_vma;
ggtt->base.set_pages = ggtt_set_pages;
ggtt->base.clear_pages = clear_pages;
ggtt->base.insert_page = gen8_ggtt_insert_page;
ggtt->base.clear_range = nop_clear_range;
ggtt->vm.total = (size / sizeof(gen8_pte_t)) << PAGE_SHIFT;
ggtt->vm.cleanup = gen6_gmch_remove;
ggtt->vm.bind_vma = ggtt_bind_vma;
ggtt->vm.unbind_vma = ggtt_unbind_vma;
ggtt->vm.set_pages = ggtt_set_pages;
ggtt->vm.clear_pages = clear_pages;
ggtt->vm.insert_page = gen8_ggtt_insert_page;
ggtt->vm.clear_range = nop_clear_range;
if (!USES_FULL_PPGTT(dev_priv) || intel_scanout_needs_vtd_wa(dev_priv))
ggtt->base.clear_range = gen8_ggtt_clear_range;
ggtt->vm.clear_range = gen8_ggtt_clear_range;
ggtt->base.insert_entries = gen8_ggtt_insert_entries;
ggtt->vm.insert_entries = gen8_ggtt_insert_entries;
/* Serialize GTT updates with aperture access on BXT if VT-d is on. */
if (intel_ggtt_update_needs_vtd_wa(dev_priv)) {
ggtt->base.insert_entries = bxt_vtd_ggtt_insert_entries__BKL;
ggtt->base.insert_page = bxt_vtd_ggtt_insert_page__BKL;
if (ggtt->base.clear_range != nop_clear_range)
ggtt->base.clear_range = bxt_vtd_ggtt_clear_range__BKL;
ggtt->vm.insert_entries = bxt_vtd_ggtt_insert_entries__BKL;
ggtt->vm.insert_page = bxt_vtd_ggtt_insert_page__BKL;
if (ggtt->vm.clear_range != nop_clear_range)
ggtt->vm.clear_range = bxt_vtd_ggtt_clear_range__BKL;
}
ggtt->invalidate = gen6_ggtt_invalidate;
@ -3380,7 +3338,7 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
static int gen6_gmch_probe(struct i915_ggtt *ggtt)
{
struct drm_i915_private *dev_priv = ggtt->base.i915;
struct drm_i915_private *dev_priv = ggtt->vm.i915;
struct pci_dev *pdev = dev_priv->drm.pdev;
unsigned int size;
u16 snb_gmch_ctl;
@ -3407,29 +3365,29 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
pci_read_config_word(pdev, SNB_GMCH_CTRL, &snb_gmch_ctl);
size = gen6_get_total_gtt_size(snb_gmch_ctl);
ggtt->base.total = (size / sizeof(gen6_pte_t)) << PAGE_SHIFT;
ggtt->vm.total = (size / sizeof(gen6_pte_t)) << PAGE_SHIFT;
ggtt->base.clear_range = gen6_ggtt_clear_range;
ggtt->base.insert_page = gen6_ggtt_insert_page;
ggtt->base.insert_entries = gen6_ggtt_insert_entries;
ggtt->base.bind_vma = ggtt_bind_vma;
ggtt->base.unbind_vma = ggtt_unbind_vma;
ggtt->base.set_pages = ggtt_set_pages;
ggtt->base.clear_pages = clear_pages;
ggtt->base.cleanup = gen6_gmch_remove;
ggtt->vm.clear_range = gen6_ggtt_clear_range;
ggtt->vm.insert_page = gen6_ggtt_insert_page;
ggtt->vm.insert_entries = gen6_ggtt_insert_entries;
ggtt->vm.bind_vma = ggtt_bind_vma;
ggtt->vm.unbind_vma = ggtt_unbind_vma;
ggtt->vm.set_pages = ggtt_set_pages;
ggtt->vm.clear_pages = clear_pages;
ggtt->vm.cleanup = gen6_gmch_remove;
ggtt->invalidate = gen6_ggtt_invalidate;
if (HAS_EDRAM(dev_priv))
ggtt->base.pte_encode = iris_pte_encode;
ggtt->vm.pte_encode = iris_pte_encode;
else if (IS_HASWELL(dev_priv))
ggtt->base.pte_encode = hsw_pte_encode;
ggtt->vm.pte_encode = hsw_pte_encode;
else if (IS_VALLEYVIEW(dev_priv))
ggtt->base.pte_encode = byt_pte_encode;
ggtt->vm.pte_encode = byt_pte_encode;
else if (INTEL_GEN(dev_priv) >= 7)
ggtt->base.pte_encode = ivb_pte_encode;
ggtt->vm.pte_encode = ivb_pte_encode;
else
ggtt->base.pte_encode = snb_pte_encode;
ggtt->vm.pte_encode = snb_pte_encode;
return ggtt_probe_common(ggtt, size);
}
@ -3441,7 +3399,7 @@ static void i915_gmch_remove(struct i915_address_space *vm)
static int i915_gmch_probe(struct i915_ggtt *ggtt)
{
struct drm_i915_private *dev_priv = ggtt->base.i915;
struct drm_i915_private *dev_priv = ggtt->vm.i915;
phys_addr_t gmadr_base;
int ret;
@ -3451,23 +3409,21 @@ static int i915_gmch_probe(struct i915_ggtt *ggtt)
return -EIO;
}
intel_gtt_get(&ggtt->base.total,
&gmadr_base,
&ggtt->mappable_end);
intel_gtt_get(&ggtt->vm.total, &gmadr_base, &ggtt->mappable_end);
ggtt->gmadr =
(struct resource) DEFINE_RES_MEM(gmadr_base,
ggtt->mappable_end);
ggtt->do_idle_maps = needs_idle_maps(dev_priv);
ggtt->base.insert_page = i915_ggtt_insert_page;
ggtt->base.insert_entries = i915_ggtt_insert_entries;
ggtt->base.clear_range = i915_ggtt_clear_range;
ggtt->base.bind_vma = ggtt_bind_vma;
ggtt->base.unbind_vma = ggtt_unbind_vma;
ggtt->base.set_pages = ggtt_set_pages;
ggtt->base.clear_pages = clear_pages;
ggtt->base.cleanup = i915_gmch_remove;
ggtt->vm.insert_page = i915_ggtt_insert_page;
ggtt->vm.insert_entries = i915_ggtt_insert_entries;
ggtt->vm.clear_range = i915_ggtt_clear_range;
ggtt->vm.bind_vma = ggtt_bind_vma;
ggtt->vm.unbind_vma = ggtt_unbind_vma;
ggtt->vm.set_pages = ggtt_set_pages;
ggtt->vm.clear_pages = clear_pages;
ggtt->vm.cleanup = i915_gmch_remove;
ggtt->invalidate = gmch_ggtt_invalidate;
@ -3486,8 +3442,8 @@ int i915_ggtt_probe_hw(struct drm_i915_private *dev_priv)
struct i915_ggtt *ggtt = &dev_priv->ggtt;
int ret;
ggtt->base.i915 = dev_priv;
ggtt->base.dma = &dev_priv->drm.pdev->dev;
ggtt->vm.i915 = dev_priv;
ggtt->vm.dma = &dev_priv->drm.pdev->dev;
if (INTEL_GEN(dev_priv) <= 5)
ret = i915_gmch_probe(ggtt);
@ -3504,27 +3460,29 @@ int i915_ggtt_probe_hw(struct drm_i915_private *dev_priv)
* restriction!
*/
if (USES_GUC(dev_priv)) {
ggtt->base.total = min_t(u64, ggtt->base.total, GUC_GGTT_TOP);
ggtt->mappable_end = min_t(u64, ggtt->mappable_end, ggtt->base.total);
ggtt->vm.total = min_t(u64, ggtt->vm.total, GUC_GGTT_TOP);
ggtt->mappable_end =
min_t(u64, ggtt->mappable_end, ggtt->vm.total);
}
if ((ggtt->base.total - 1) >> 32) {
if ((ggtt->vm.total - 1) >> 32) {
DRM_ERROR("We never expected a Global GTT with more than 32bits"
" of address space! Found %lldM!\n",
ggtt->base.total >> 20);
ggtt->base.total = 1ULL << 32;
ggtt->mappable_end = min_t(u64, ggtt->mappable_end, ggtt->base.total);
ggtt->vm.total >> 20);
ggtt->vm.total = 1ULL << 32;
ggtt->mappable_end =
min_t(u64, ggtt->mappable_end, ggtt->vm.total);
}
if (ggtt->mappable_end > ggtt->base.total) {
if (ggtt->mappable_end > ggtt->vm.total) {
DRM_ERROR("mappable aperture extends past end of GGTT,"
" aperture=%pa, total=%llx\n",
&ggtt->mappable_end, ggtt->base.total);
ggtt->mappable_end = ggtt->base.total;
&ggtt->mappable_end, ggtt->vm.total);
ggtt->mappable_end = ggtt->vm.total;
}
/* GMADR is the PCI mmio aperture into the global GTT. */
DRM_DEBUG_DRIVER("GGTT size = %lluM\n", ggtt->base.total >> 20);
DRM_DEBUG_DRIVER("GGTT size = %lluM\n", ggtt->vm.total >> 20);
DRM_DEBUG_DRIVER("GMADR size = %lluM\n", (u64)ggtt->mappable_end >> 20);
DRM_DEBUG_DRIVER("DSM size = %lluM\n",
(u64)resource_size(&intel_graphics_stolen_res) >> 20);
@ -3551,9 +3509,9 @@ int i915_ggtt_init_hw(struct drm_i915_private *dev_priv)
* and beyond the end of the GTT if we do not provide a guard.
*/
mutex_lock(&dev_priv->drm.struct_mutex);
i915_address_space_init(&ggtt->base, dev_priv, "[global]");
i915_address_space_init(&ggtt->vm, dev_priv, "[global]");
if (!HAS_LLC(dev_priv) && !USES_PPGTT(dev_priv))
ggtt->base.mm.color_adjust = i915_gtt_color_adjust;
ggtt->vm.mm.color_adjust = i915_gtt_color_adjust;
mutex_unlock(&dev_priv->drm.struct_mutex);
if (!io_mapping_init_wc(&dev_priv->ggtt.iomap,
@ -3576,7 +3534,7 @@ int i915_ggtt_init_hw(struct drm_i915_private *dev_priv)
return 0;
out_gtt_cleanup:
ggtt->base.cleanup(&ggtt->base);
ggtt->vm.cleanup(&ggtt->vm);
return ret;
}
@ -3610,34 +3568,31 @@ void i915_ggtt_disable_guc(struct drm_i915_private *i915)
void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv)
{
struct i915_ggtt *ggtt = &dev_priv->ggtt;
struct drm_i915_gem_object *obj, *on;
struct i915_vma *vma, *vn;
i915_check_and_clear_faults(dev_priv);
/* First fill our portion of the GTT with scratch pages */
ggtt->base.clear_range(&ggtt->base, 0, ggtt->base.total);
ggtt->vm.clear_range(&ggtt->vm, 0, ggtt->vm.total);
ggtt->base.closed = true; /* skip rewriting PTE on VMA unbind */
ggtt->vm.closed = true; /* skip rewriting PTE on VMA unbind */
/* clflush objects bound into the GGTT and rebind them. */
list_for_each_entry_safe(obj, on, &dev_priv->mm.bound_list, mm.link) {
bool ggtt_bound = false;
struct i915_vma *vma;
GEM_BUG_ON(!list_empty(&ggtt->vm.active_list));
list_for_each_entry_safe(vma, vn, &ggtt->vm.inactive_list, vm_link) {
struct drm_i915_gem_object *obj = vma->obj;
for_each_ggtt_vma(vma, obj) {
if (!i915_vma_unbind(vma))
continue;
if (!(vma->flags & I915_VMA_GLOBAL_BIND))
continue;
WARN_ON(i915_vma_bind(vma, obj->cache_level,
PIN_UPDATE));
ggtt_bound = true;
}
if (!i915_vma_unbind(vma))
continue;
if (ggtt_bound)
WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
WARN_ON(i915_vma_bind(vma, obj->cache_level, PIN_UPDATE));
WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
}
ggtt->base.closed = false;
ggtt->vm.closed = false;
if (INTEL_GEN(dev_priv) >= 8) {
struct intel_ppat *ppat = &dev_priv->ppat;
@ -3657,8 +3612,10 @@ void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv)
ppgtt = dev_priv->mm.aliasing_ppgtt;
else
ppgtt = i915_vm_to_ppgtt(vm);
if (!ppgtt)
continue;
gen6_write_page_range(ppgtt, 0, ppgtt->base.total);
gen6_write_page_range(ppgtt, 0, ppgtt->vm.total);
}
}
@ -3880,7 +3837,7 @@ int i915_gem_gtt_reserve(struct i915_address_space *vm,
GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE));
GEM_BUG_ON(!IS_ALIGNED(offset, I915_GTT_MIN_ALIGNMENT));
GEM_BUG_ON(range_overflows(offset, size, vm->total));
GEM_BUG_ON(vm == &vm->i915->mm.aliasing_ppgtt->base);
GEM_BUG_ON(vm == &vm->i915->mm.aliasing_ppgtt->vm);
GEM_BUG_ON(drm_mm_node_allocated(node));
node->size = size;
@ -3977,7 +3934,7 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
GEM_BUG_ON(start >= end);
GEM_BUG_ON(start > 0 && !IS_ALIGNED(start, I915_GTT_PAGE_SIZE));
GEM_BUG_ON(end < U64_MAX && !IS_ALIGNED(end, I915_GTT_PAGE_SIZE));
GEM_BUG_ON(vm == &vm->i915->mm.aliasing_ppgtt->base);
GEM_BUG_ON(vm == &vm->i915->mm.aliasing_ppgtt->vm);
GEM_BUG_ON(drm_mm_node_allocated(node));
if (unlikely(range_overflows(start, size, end)))


@ -65,7 +65,7 @@ typedef u64 gen8_pde_t;
typedef u64 gen8_ppgtt_pdpe_t;
typedef u64 gen8_ppgtt_pml4e_t;
#define ggtt_total_entries(ggtt) ((ggtt)->base.total >> PAGE_SHIFT)
#define ggtt_total_entries(ggtt) ((ggtt)->vm.total >> PAGE_SHIFT)
/* gen6-hsw has bit 11-4 for physical addr bit 39-32 */
#define GEN6_GTT_ADDR_ENCODE(addr) ((addr) | (((addr) >> 28) & 0xff0))
@ -367,7 +367,7 @@ i915_vm_has_scratch_64K(struct i915_address_space *vm)
* the spec.
*/
struct i915_ggtt {
struct i915_address_space base;
struct i915_address_space vm;
struct io_mapping iomap; /* Mapping to our CPU mappable region */
struct resource gmadr; /* GMADR resource */
@ -385,7 +385,7 @@ struct i915_ggtt {
};
struct i915_hw_ppgtt {
struct i915_address_space base;
struct i915_address_space vm;
struct kref ref;
struct drm_mm_node node;
unsigned long pd_dirty_rings;
@ -543,7 +543,7 @@ static inline struct i915_ggtt *
i915_vm_to_ggtt(struct i915_address_space *vm)
{
GEM_BUG_ON(!i915_is_ggtt(vm));
return container_of(vm, struct i915_ggtt, base);
return container_of(vm, struct i915_ggtt, vm);
}
#define INTEL_MAX_PPAT_ENTRIES 8


@ -194,7 +194,7 @@ int i915_gem_render_state_emit(struct i915_request *rq)
if (IS_ERR(so.obj))
return PTR_ERR(so.obj);
so.vma = i915_vma_instance(so.obj, &engine->i915->ggtt.base, NULL);
so.vma = i915_vma_instance(so.obj, &engine->i915->ggtt.vm, NULL);
if (IS_ERR(so.vma)) {
err = PTR_ERR(so.vma);
goto err_obj;


@ -480,7 +480,7 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
/* We also want to clear any cached iomaps as they wrap vmap */
list_for_each_entry_safe(vma, next,
&i915->ggtt.base.inactive_list, vm_link) {
&i915->ggtt.vm.inactive_list, vm_link) {
unsigned long count = vma->node.size >> PAGE_SHIFT;
if (vma->iomap && i915_vma_unbind(vma) == 0)
freed_pages += count;


@ -642,7 +642,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
if (ret)
goto err;
vma = i915_vma_instance(obj, &ggtt->base, NULL);
vma = i915_vma_instance(obj, &ggtt->vm, NULL);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err_pages;
@ -653,7 +653,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
* setting up the GTT space. The actual reservation will occur
* later.
*/
ret = i915_gem_gtt_reserve(&ggtt->base, &vma->node,
ret = i915_gem_gtt_reserve(&ggtt->vm, &vma->node,
size, gtt_offset, obj->cache_level,
0);
if (ret) {
@ -666,7 +666,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
vma->pages = obj->mm.pages;
vma->flags |= I915_VMA_GLOBAL_BIND;
__i915_vma_set_map_and_fenceable(vma);
list_move_tail(&vma->vm_link, &ggtt->base.inactive_list);
list_move_tail(&vma->vm_link, &ggtt->vm.inactive_list);
spin_lock(&dev_priv->mm.obj_lock);
list_move_tail(&obj->mm.link, &dev_priv->mm.bound_list);


@ -973,8 +973,7 @@ i915_error_object_create(struct drm_i915_private *i915,
void __iomem *s;
int ret;
ggtt->base.insert_page(&ggtt->base, dma, slot,
I915_CACHE_NONE, 0);
ggtt->vm.insert_page(&ggtt->vm, dma, slot, I915_CACHE_NONE, 0);
s = io_mapping_map_atomic_wc(&ggtt->iomap, slot);
ret = compress_page(&compress, (void __force *)s, dst);
@ -993,7 +992,7 @@ i915_error_object_create(struct drm_i915_private *i915,
out:
compress_fini(&compress, dst);
ggtt->base.clear_range(&ggtt->base, slot, PAGE_SIZE);
ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE);
return dst;
}
@ -1287,9 +1286,11 @@ static void error_record_engine_registers(struct i915_gpu_state *error,
static void record_request(struct i915_request *request,
struct drm_i915_error_request *erq)
{
erq->context = request->ctx->hw_id;
struct i915_gem_context *ctx = request->gem_context;
erq->context = ctx->hw_id;
erq->sched_attr = request->sched.attr;
erq->ban_score = atomic_read(&request->ctx->ban_score);
erq->ban_score = atomic_read(&ctx->ban_score);
erq->seqno = request->global_seqno;
erq->jiffies = request->emitted_jiffies;
erq->start = i915_ggtt_offset(request->ring->vma);
@ -1297,7 +1298,7 @@ static void record_request(struct i915_request *request,
erq->tail = request->tail;
rcu_read_lock();
erq->pid = request->ctx->pid ? pid_nr(request->ctx->pid) : 0;
erq->pid = ctx->pid ? pid_nr(ctx->pid) : 0;
rcu_read_unlock();
}
@ -1461,12 +1462,12 @@ static void gem_record_rings(struct i915_gpu_state *error)
request = i915_gem_find_active_request(engine);
if (request) {
struct i915_gem_context *ctx = request->gem_context;
struct intel_ring *ring;
ee->vm = request->ctx->ppgtt ?
&request->ctx->ppgtt->base : &ggtt->base;
ee->vm = ctx->ppgtt ? &ctx->ppgtt->vm : &ggtt->vm;
record_context(&ee->context, request->ctx);
record_context(&ee->context, ctx);
/* We need to copy these to an anonymous buffer
* as the simplest method to avoid being overwritten
@ -1483,11 +1484,10 @@ static void gem_record_rings(struct i915_gpu_state *error)
ee->ctx =
i915_error_object_create(i915,
to_intel_context(request->ctx,
engine)->state);
request->hw_context->state);
error->simulated |=
i915_gem_context_no_error_capture(request->ctx);
i915_gem_context_no_error_capture(ctx);
ee->rq_head = request->head;
ee->rq_post = request->postfix;
@ -1563,17 +1563,17 @@ static void capture_active_buffers(struct i915_gpu_state *error)
static void capture_pinned_buffers(struct i915_gpu_state *error)
{
struct i915_address_space *vm = &error->i915->ggtt.base;
struct i915_address_space *vm = &error->i915->ggtt.vm;
struct drm_i915_error_buffer *bo;
struct i915_vma *vma;
int count_inactive, count_active;
count_inactive = 0;
list_for_each_entry(vma, &vm->active_list, vm_link)
list_for_each_entry(vma, &vm->inactive_list, vm_link)
count_inactive++;
count_active = 0;
list_for_each_entry(vma, &vm->inactive_list, vm_link)
list_for_each_entry(vma, &vm->active_list, vm_link)
count_active++;
bo = NULL;
@ -1667,7 +1667,16 @@ static void capture_reg_state(struct i915_gpu_state *error)
}
/* 4: Everything else */
if (INTEL_GEN(dev_priv) >= 8) {
if (INTEL_GEN(dev_priv) >= 11) {
error->ier = I915_READ(GEN8_DE_MISC_IER);
error->gtier[0] = I915_READ(GEN11_RENDER_COPY_INTR_ENABLE);
error->gtier[1] = I915_READ(GEN11_VCS_VECS_INTR_ENABLE);
error->gtier[2] = I915_READ(GEN11_GUC_SG_INTR_ENABLE);
error->gtier[3] = I915_READ(GEN11_GPM_WGBOXPERF_INTR_ENABLE);
error->gtier[4] = I915_READ(GEN11_CRYPTO_RSVD_INTR_ENABLE);
error->gtier[5] = I915_READ(GEN11_GUNIT_CSME_INTR_ENABLE);
error->ngtier = 6;
} else if (INTEL_GEN(dev_priv) >= 8) {
error->ier = I915_READ(GEN8_DE_MISC_IER);
for (i = 0; i < 4; i++)
error->gtier[i] = I915_READ(GEN8_GT_IER(i));


@ -58,7 +58,7 @@ struct i915_gpu_state {
u32 eir;
u32 pgtbl_er;
u32 ier;
u32 gtier[4], ngtier;
u32 gtier[6], ngtier;
u32 ccid;
u32 derrmr;
u32 forcewake;


@ -2640,7 +2640,8 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
GEN9_AUX_CHANNEL_C |
GEN9_AUX_CHANNEL_D;
if (IS_CNL_WITH_PORT_F(dev_priv))
if (IS_CNL_WITH_PORT_F(dev_priv) ||
INTEL_GEN(dev_priv) >= 11)
tmp_mask |= CNL_AUX_CHANNEL_F;
if (iir & tmp_mask) {
@ -3920,7 +3921,7 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
de_pipe_masked |= GEN8_DE_PIPE_IRQ_FAULT_ERRORS;
}
if (IS_CNL_WITH_PORT_F(dev_priv))
if (IS_CNL_WITH_PORT_F(dev_priv) || INTEL_GEN(dev_priv) >= 11)
de_port_masked |= CNL_AUX_CHANNEL_F;
de_pipe_enables = de_pipe_masked | GEN8_PIPE_VBLANK |


@ -130,9 +130,6 @@ i915_param_named_unsafe(invert_brightness, int, 0600,
i915_param_named(disable_display, bool, 0400,
"Disable display (default: false)");
i915_param_named_unsafe(enable_cmd_parser, bool, 0400,
"Enable command parsing (true=enabled [default], false=disabled)");
i915_param_named(mmio_debug, int, 0600,
"Enable the MMIO debug code for the first N failures (default: off). "
"This may negatively affect performance.");


@ -58,7 +58,6 @@ struct drm_printer;
param(unsigned int, inject_load_failure, 0) \
/* leave bools at the end to not create holes */ \
param(bool, alpha_support, IS_ENABLED(CONFIG_DRM_I915_ALPHA_SUPPORT)) \
param(bool, enable_cmd_parser, true) \
param(bool, enable_hangcheck, true) \
param(bool, fastboot, false) \
param(bool, prefault_disable, false) \


@ -340,7 +340,6 @@ static const struct intel_device_info intel_valleyview_info = {
GEN(7),
.is_lp = 1,
.num_pipes = 2,
.has_psr = 1,
.has_runtime_pm = 1,
.has_rc6 = 1,
.has_gmch_display = 1,
@ -433,7 +432,6 @@ static const struct intel_device_info intel_cherryview_info = {
.is_lp = 1,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING,
.has_64bit_reloc = 1,
.has_psr = 1,
.has_runtime_pm = 1,
.has_resource_streamer = 1,
.has_rc6 = 1,


@ -737,12 +737,7 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
continue;
}
/*
* XXX: Just keep the lower 21 bits for now since I'm not
* entirely sure if the HW touches any of the higher bits in
* this field
*/
ctx_id = report32[2] & 0x1fffff;
ctx_id = report32[2] & dev_priv->perf.oa.specific_ctx_id_mask;
/*
* Squash whatever is in the CTX_ID field if it's marked as
@ -1203,6 +1198,33 @@ static int i915_oa_read(struct i915_perf_stream *stream,
return dev_priv->perf.oa.ops.read(stream, buf, count, offset);
}
static struct intel_context *oa_pin_context(struct drm_i915_private *i915,
struct i915_gem_context *ctx)
{
struct intel_engine_cs *engine = i915->engine[RCS];
struct intel_context *ce;
int ret;
ret = i915_mutex_lock_interruptible(&i915->drm);
if (ret)
return ERR_PTR(ret);
/*
* As the ID is the gtt offset of the context's vma we
* pin the vma to ensure the ID remains fixed.
*
* NB: implied RCS engine...
*/
ce = intel_context_pin(ctx, engine);
mutex_unlock(&i915->drm.struct_mutex);
if (IS_ERR(ce))
return ce;
i915->perf.oa.pinned_ctx = ce;
return ce;
}
/**
* oa_get_render_ctx_id - determine and hold ctx hw id
* @stream: An i915-perf stream opened for OA metrics
@ -1215,40 +1237,76 @@ static int i915_oa_read(struct i915_perf_stream *stream,
*/
static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
{
struct drm_i915_private *dev_priv = stream->dev_priv;
struct drm_i915_private *i915 = stream->dev_priv;
struct intel_context *ce;
if (HAS_LOGICAL_RING_CONTEXTS(dev_priv)) {
dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
} else {
struct intel_engine_cs *engine = dev_priv->engine[RCS];
struct intel_ring *ring;
int ret;
ret = i915_mutex_lock_interruptible(&dev_priv->drm);
if (ret)
return ret;
ce = oa_pin_context(i915, stream->ctx);
if (IS_ERR(ce))
return PTR_ERR(ce);
switch (INTEL_GEN(i915)) {
case 7: {
/*
* As the ID is the gtt offset of the context's vma we
* pin the vma to ensure the ID remains fixed.
*
* NB: implied RCS engine...
* On Haswell we don't do any post processing of the reports
* and don't need to use the mask.
*/
ring = intel_context_pin(stream->ctx, engine);
mutex_unlock(&dev_priv->drm.struct_mutex);
if (IS_ERR(ring))
return PTR_ERR(ring);
/*
* Explicitly track the ID (instead of calling
* i915_ggtt_offset() on the fly) considering the difference
* with gen8+ and execlists
*/
dev_priv->perf.oa.specific_ctx_id =
i915_ggtt_offset(to_intel_context(stream->ctx, engine)->state);
i915->perf.oa.specific_ctx_id = i915_ggtt_offset(ce->state);
i915->perf.oa.specific_ctx_id_mask = 0;
break;
}
case 8:
case 9:
case 10:
if (USES_GUC_SUBMISSION(i915)) {
/*
* When using GuC, the context descriptor we write in
* i915 is read by GuC and rewritten before it's
* actually written into the hardware. The LRCA is
* what is put into the context id field of the
* context descriptor by GuC. Because it's aligned to
* a page, the lower 12bits are always at 0 and
* dropped by GuC. They won't be part of the context
* ID in the OA reports, so squash those lower bits.
*/
i915->perf.oa.specific_ctx_id =
lower_32_bits(ce->lrc_desc) >> 12;
/*
* GuC uses the top bit to signal proxy submission, so
* ignore that bit.
*/
i915->perf.oa.specific_ctx_id_mask =
(1U << (GEN8_CTX_ID_WIDTH - 1)) - 1;
} else {
i915->perf.oa.specific_ctx_id = stream->ctx->hw_id;
i915->perf.oa.specific_ctx_id_mask =
(1U << GEN8_CTX_ID_WIDTH) - 1;
}
break;
case 11: {
struct intel_engine_cs *engine = i915->engine[RCS];
i915->perf.oa.specific_ctx_id =
stream->ctx->hw_id << (GEN11_SW_CTX_ID_SHIFT - 32) |
engine->instance << (GEN11_ENGINE_INSTANCE_SHIFT - 32) |
engine->class << (GEN11_ENGINE_CLASS_SHIFT - 32);
i915->perf.oa.specific_ctx_id_mask =
((1U << GEN11_SW_CTX_ID_WIDTH) - 1) << (GEN11_SW_CTX_ID_SHIFT - 32) |
((1U << GEN11_ENGINE_INSTANCE_WIDTH) - 1) << (GEN11_ENGINE_INSTANCE_SHIFT - 32) |
((1 << GEN11_ENGINE_CLASS_WIDTH) - 1) << (GEN11_ENGINE_CLASS_SHIFT - 32);
break;
}
default:
MISSING_CASE(INTEL_GEN(i915));
}
DRM_DEBUG_DRIVER("filtering on ctx_id=0x%x ctx_id_mask=0x%x\n",
i915->perf.oa.specific_ctx_id,
i915->perf.oa.specific_ctx_id_mask);
return 0;
}
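
The comments above describe why, under GuC submission, the context id seen in OA reports is the page-aligned LRCA shifted down by 12 bits, with the top bit of the field reserved for proxy submission. A small standalone sketch of that masking arithmetic follows; it assumes a 21-bit context-id field (matching the 0x1fffff literal removed earlier in this file) and uses a made-up descriptor value, so treat it as an illustration rather than driver code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumption for this sketch: the context-id field is 21 bits wide. */
#define CTX_ID_WIDTH 21

int main(void)
{
	/* Hypothetical LRC descriptor: page-aligned LRCA in the low 32 bits. */
	uint64_t lrc_desc = 0x12345000ull;

	/* GuC drops the 12 page-offset bits when forming the context id ... */
	uint32_t specific_ctx_id = (uint32_t)lrc_desc >> 12;
	/* ... and the top bit of the field is reserved for proxy submission. */
	uint32_t specific_ctx_id_mask = (1u << (CTX_ID_WIDTH - 1)) - 1;

	/* A report's ctx field matches the stream if equal under the mask,
	 * regardless of the reserved top bit. */
	uint32_t report_ctx = specific_ctx_id | (1u << (CTX_ID_WIDTH - 1));
	bool match = (report_ctx & specific_ctx_id_mask) ==
		     (specific_ctx_id & specific_ctx_id_mask);

	printf("ctx_id=%#x mask=%#x match=%d\n",
	       specific_ctx_id, specific_ctx_id_mask, match);
	return 0;
}
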
@ -1262,17 +1320,15 @@ static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
{
struct drm_i915_private *dev_priv = stream->dev_priv;
struct intel_context *ce;
if (HAS_LOGICAL_RING_CONTEXTS(dev_priv)) {
dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
} else {
struct intel_engine_cs *engine = dev_priv->engine[RCS];
dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
dev_priv->perf.oa.specific_ctx_id_mask = 0;
ce = fetch_and_zero(&dev_priv->perf.oa.pinned_ctx);
if (ce) {
mutex_lock(&dev_priv->drm.struct_mutex);
dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
intel_context_unpin(stream->ctx, engine);
intel_context_unpin(ce);
mutex_unlock(&dev_priv->drm.struct_mutex);
}
}


@ -127,6 +127,7 @@ static void __i915_pmu_maybe_start_timer(struct drm_i915_private *i915)
{
if (!i915->pmu.timer_enabled && pmu_needs_timer(i915, true)) {
i915->pmu.timer_enabled = true;
i915->pmu.timer_last = ktime_get();
hrtimer_start_range_ns(&i915->pmu.timer,
ns_to_ktime(PERIOD), 0,
HRTIMER_MODE_REL_PINNED);
@ -155,12 +156,13 @@ static bool grab_forcewake(struct drm_i915_private *i915, bool fw)
}
static void
update_sample(struct i915_pmu_sample *sample, u32 unit, u32 val)
add_sample(struct i915_pmu_sample *sample, u32 val)
{
sample->cur += mul_u32_u32(val, unit);
sample->cur += val;
}
static void engines_sample(struct drm_i915_private *dev_priv)
static void
engines_sample(struct drm_i915_private *dev_priv, unsigned int period_ns)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
@ -182,8 +184,9 @@ static void engines_sample(struct drm_i915_private *dev_priv)
val = !i915_seqno_passed(current_seqno, last_seqno);
update_sample(&engine->pmu.sample[I915_SAMPLE_BUSY],
PERIOD, val);
if (val)
add_sample(&engine->pmu.sample[I915_SAMPLE_BUSY],
period_ns);
if (val && (engine->pmu.enable &
(BIT(I915_SAMPLE_WAIT) | BIT(I915_SAMPLE_SEMA)))) {
@ -194,11 +197,13 @@ static void engines_sample(struct drm_i915_private *dev_priv)
val = 0;
}
update_sample(&engine->pmu.sample[I915_SAMPLE_WAIT],
PERIOD, !!(val & RING_WAIT));
if (val & RING_WAIT)
add_sample(&engine->pmu.sample[I915_SAMPLE_WAIT],
period_ns);
update_sample(&engine->pmu.sample[I915_SAMPLE_SEMA],
PERIOD, !!(val & RING_WAIT_SEMAPHORE));
if (val & RING_WAIT_SEMAPHORE)
add_sample(&engine->pmu.sample[I915_SAMPLE_SEMA],
period_ns);
}
if (fw)
@ -207,7 +212,14 @@ static void engines_sample(struct drm_i915_private *dev_priv)
intel_runtime_pm_put(dev_priv);
}
static void frequency_sample(struct drm_i915_private *dev_priv)
static void
add_sample_mult(struct i915_pmu_sample *sample, u32 val, u32 mul)
{
sample->cur += mul_u32_u32(val, mul);
}
static void
frequency_sample(struct drm_i915_private *dev_priv, unsigned int period_ns)
{
if (dev_priv->pmu.enable &
config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY)) {
@ -221,15 +233,17 @@ static void frequency_sample(struct drm_i915_private *dev_priv)
intel_runtime_pm_put(dev_priv);
}
update_sample(&dev_priv->pmu.sample[__I915_SAMPLE_FREQ_ACT],
1, intel_gpu_freq(dev_priv, val));
add_sample_mult(&dev_priv->pmu.sample[__I915_SAMPLE_FREQ_ACT],
intel_gpu_freq(dev_priv, val),
period_ns / 1000);
}
if (dev_priv->pmu.enable &
config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY)) {
update_sample(&dev_priv->pmu.sample[__I915_SAMPLE_FREQ_REQ], 1,
intel_gpu_freq(dev_priv,
dev_priv->gt_pm.rps.cur_freq));
add_sample_mult(&dev_priv->pmu.sample[__I915_SAMPLE_FREQ_REQ],
intel_gpu_freq(dev_priv,
dev_priv->gt_pm.rps.cur_freq),
period_ns / 1000);
}
}
@ -237,14 +251,27 @@ static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
{
struct drm_i915_private *i915 =
container_of(hrtimer, struct drm_i915_private, pmu.timer);
unsigned int period_ns;
ktime_t now;
if (!READ_ONCE(i915->pmu.timer_enabled))
return HRTIMER_NORESTART;
engines_sample(i915);
frequency_sample(i915);
now = ktime_get();
period_ns = ktime_to_ns(ktime_sub(now, i915->pmu.timer_last));
i915->pmu.timer_last = now;
/*
* Strictly speaking the passed in period may not be 100% accurate for
* all internal calculations, since some amount of time can be spent on
* grabbing the forcewake. However the potential error from timer call-
* back delay greatly dominates this so we keep it simple.
*/
engines_sample(i915, period_ns);
frequency_sample(i915, period_ns);
hrtimer_forward(hrtimer, now, ns_to_ktime(PERIOD));
hrtimer_forward_now(hrtimer, ns_to_ktime(PERIOD));
return HRTIMER_RESTART;
}
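The comment above describes the sampling model: each timer callback measures the real elapsed time since the previous invocation and weights the accumulated counters by it, so callback jitter does not skew the results. Below is a minimal userspace sketch of that idea, not i915 code; the names (sample_ctx, sample_tick, read_busy_ratio) and the use of clock_gettime() are assumptions made purely for illustration.

/* Sketch only: time-weighted busyness sampling driven by a periodic tick. */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

struct sample_ctx {
	uint64_t last_ns;   /* timestamp of the previous tick */
	uint64_t busy_ns;   /* accumulated busy time */
	uint64_t total_ns;  /* accumulated wall-clock time */
};

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Call from a periodic timer; 'busy' is the state sampled at this tick. */
static void sample_tick(struct sample_ctx *ctx, bool busy)
{
	uint64_t now = now_ns();
	uint64_t period_ns = now - ctx->last_ns;

	ctx->last_ns = now;
	ctx->total_ns += period_ns;
	if (busy)
		ctx->busy_ns += period_ns; /* weight by the real elapsed period */
}

static double read_busy_ratio(const struct sample_ctx *ctx)
{
	return ctx->total_ns ? (double)ctx->busy_ns / ctx->total_ns : 0.0;
}

The frequency counters in the hunks above use the same weighting, accumulating frequency in MHz multiplied by the elapsed period in microseconds before the later division by USEC_PER_SEC.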
@ -519,12 +546,12 @@ static u64 __i915_pmu_event_read(struct perf_event *event)
case I915_PMU_ACTUAL_FREQUENCY:
val =
div_u64(i915->pmu.sample[__I915_SAMPLE_FREQ_ACT].cur,
FREQUENCY);
USEC_PER_SEC /* to MHz */);
break;
case I915_PMU_REQUESTED_FREQUENCY:
val =
div_u64(i915->pmu.sample[__I915_SAMPLE_FREQ_REQ].cur,
FREQUENCY);
USEC_PER_SEC /* to MHz */);
break;
case I915_PMU_INTERRUPTS:
val = count_interrupts(i915);


@ -65,6 +65,14 @@ struct i915_pmu {
* event types.
*/
u64 enable;
/**
* @timer_last:
*
* Timestamp of the previous timer invocation.
*/
ktime_t timer_last;
/**
* @enable_count: Reference counts for the enabled events.
*


@ -54,6 +54,7 @@ enum vgt_g2v_type {
*/
#define VGT_CAPS_FULL_48BIT_PPGTT BIT(2)
#define VGT_CAPS_HWSP_EMULATION BIT(3)
#define VGT_CAPS_HUGE_GTT BIT(4)
struct vgt_if {
u64 magic; /* VGT_MAGIC */


@ -1990,6 +1990,11 @@ enum i915_power_well_id {
_ICL_PORT_COMP_DW10_A, \
_ICL_PORT_COMP_DW10_B)
/* ICL PHY DFLEX registers */
#define PORT_TX_DFLEXDPMLE1 _MMIO(0x1638C0)
#define DFLEXDPMLE1_DPMLETC_MASK(n) (0xf << (4 * (n)))
#define DFLEXDPMLE1_DPMLETC(n, x) ((x) << (4 * (n)))
/* BXT PHY Ref registers */
#define _PORT_REF_DW3_A 0x16218C
#define _PORT_REF_DW3_BC 0x6C18C
@ -2306,8 +2311,9 @@ enum i915_power_well_id {
#define GAMW_ECO_ENABLE_64K_IPS_FIELD 0xF
#define GAMT_CHKN_BIT_REG _MMIO(0x4ab8)
#define GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING (1<<28)
#define GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT (1<<24)
#define GAMT_CHKN_DISABLE_L3_COH_PIPE (1 << 31)
#define GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING (1 << 28)
#define GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT (1 << 24)
#if 0
#define PRB0_TAIL _MMIO(0x2030)
@ -2663,6 +2669,9 @@ enum i915_power_well_id {
#define GEN8_4x4_STC_OPTIMIZATION_DISABLE (1<<6)
#define GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE (1<<1)
#define GEN10_CACHE_MODE_SS _MMIO(0xe420)
#define FLOAT_BLEND_OPTIMIZATION_ENABLE (1 << 4)
#define GEN6_BLITTER_ECOSKPD _MMIO(0x221d0)
#define GEN6_BLITTER_LOCK_SHIFT 16
#define GEN6_BLITTER_FBC_NOTIFY (1<<3)
@ -2709,6 +2718,10 @@ enum i915_power_well_id {
#define GEN10_F2_SS_DIS_SHIFT 18
#define GEN10_F2_SS_DIS_MASK (0xf << GEN10_F2_SS_DIS_SHIFT)
#define GEN10_MIRROR_FUSE3 _MMIO(0x9118)
#define GEN10_L3BANK_PAIR_COUNT 4
#define GEN10_L3BANK_MASK 0x0F
#define GEN8_EU_DISABLE0 _MMIO(0x9134)
#define GEN8_EU_DIS0_S0_MASK 0xffffff
#define GEN8_EU_DIS0_S1_SHIFT 24
@ -4088,10 +4101,10 @@ enum {
#define EDP_Y_COORDINATE_ENABLE (1<<25) /* GLK and CNL+ */
#define EDP_MAX_SU_DISABLE_TIME(t) ((t)<<20)
#define EDP_MAX_SU_DISABLE_TIME_MASK (0x1f<<20)
#define EDP_PSR2_TP2_TIME_500 (0<<8)
#define EDP_PSR2_TP2_TIME_100 (1<<8)
#define EDP_PSR2_TP2_TIME_2500 (2<<8)
#define EDP_PSR2_TP2_TIME_50 (3<<8)
#define EDP_PSR2_TP2_TIME_500us (0<<8)
#define EDP_PSR2_TP2_TIME_100us (1<<8)
#define EDP_PSR2_TP2_TIME_2500us (2<<8)
#define EDP_PSR2_TP2_TIME_50us (3<<8)
#define EDP_PSR2_TP2_TIME_MASK (3<<8)
#define EDP_PSR2_FRAME_BEFORE_SU_SHIFT 4
#define EDP_PSR2_FRAME_BEFORE_SU_MASK (0xf<<4)
@ -4133,11 +4146,12 @@ enum {
#define ADPA_DAC_ENABLE (1<<31)
#define ADPA_DAC_DISABLE 0
#define ADPA_PIPE_SELECT_MASK (1<<30)
#define ADPA_PIPE_A_SELECT 0
#define ADPA_PIPE_B_SELECT (1<<30)
#define ADPA_PIPE_SELECT(pipe) ((pipe) << 30)
/* CPT uses bits 29:30 for pch transcoder select */
#define ADPA_PIPE_SEL_SHIFT 30
#define ADPA_PIPE_SEL_MASK (1<<30)
#define ADPA_PIPE_SEL(pipe) ((pipe) << 30)
#define ADPA_PIPE_SEL_SHIFT_CPT 29
#define ADPA_PIPE_SEL_MASK_CPT (3<<29)
#define ADPA_PIPE_SEL_CPT(pipe) ((pipe) << 29)
#define ADPA_CRT_HOTPLUG_MASK 0x03ff0000 /* bit 25-16 */
#define ADPA_CRT_HOTPLUG_MONITOR_NONE (0<<24)
#define ADPA_CRT_HOTPLUG_MONITOR_MASK (3<<24)
@ -4296,9 +4310,9 @@ enum {
/* Gen 3 SDVO bits: */
#define SDVO_ENABLE (1 << 31)
#define SDVO_PIPE_SEL(pipe) ((pipe) << 30)
#define SDVO_PIPE_SEL_SHIFT 30
#define SDVO_PIPE_SEL_MASK (1 << 30)
#define SDVO_PIPE_B_SELECT (1 << 30)
#define SDVO_PIPE_SEL(pipe) ((pipe) << 30)
#define SDVO_STALL_SELECT (1 << 29)
#define SDVO_INTERRUPT_ENABLE (1 << 26)
/*
@ -4338,12 +4352,14 @@ enum {
#define SDVOB_HOTPLUG_ENABLE (1 << 23) /* SDVO only */
/* Gen 6 (CPT) SDVO/HDMI bits: */
#define SDVO_PIPE_SEL_CPT(pipe) ((pipe) << 29)
#define SDVO_PIPE_SEL_SHIFT_CPT 29
#define SDVO_PIPE_SEL_MASK_CPT (3 << 29)
#define SDVO_PIPE_SEL_CPT(pipe) ((pipe) << 29)
/* CHV SDVO/HDMI bits: */
#define SDVO_PIPE_SEL_CHV(pipe) ((pipe) << 24)
#define SDVO_PIPE_SEL_SHIFT_CHV 24
#define SDVO_PIPE_SEL_MASK_CHV (3 << 24)
#define SDVO_PIPE_SEL_CHV(pipe) ((pipe) << 24)
/* DVO port control */
@ -4354,7 +4370,9 @@ enum {
#define _DVOC 0x61160
#define DVOC _MMIO(_DVOC)
#define DVO_ENABLE (1 << 31)
#define DVO_PIPE_B_SELECT (1 << 30)
#define DVO_PIPE_SEL_SHIFT 30
#define DVO_PIPE_SEL_MASK (1 << 30)
#define DVO_PIPE_SEL(pipe) ((pipe) << 30)
#define DVO_PIPE_STALL_UNUSED (0 << 28)
#define DVO_PIPE_STALL (1 << 28)
#define DVO_PIPE_STALL_TV (2 << 28)
@ -4391,9 +4409,12 @@ enum {
*/
#define LVDS_PORT_EN (1 << 31)
/* Selects pipe B for LVDS data. Must be set on pre-965. */
#define LVDS_PIPEB_SELECT (1 << 30)
#define LVDS_PIPE_MASK (1 << 30)
#define LVDS_PIPE(pipe) ((pipe) << 30)
#define LVDS_PIPE_SEL_SHIFT 30
#define LVDS_PIPE_SEL_MASK (1 << 30)
#define LVDS_PIPE_SEL(pipe) ((pipe) << 30)
#define LVDS_PIPE_SEL_SHIFT_CPT 29
#define LVDS_PIPE_SEL_MASK_CPT (3 << 29)
#define LVDS_PIPE_SEL_CPT(pipe) ((pipe) << 29)
/* LVDS dithering flag on 965/g4x platform */
#define LVDS_ENABLE_DITHER (1 << 25)
/* LVDS sync polarity flags. Set to invert (i.e. negative) */
@ -4690,7 +4711,9 @@ enum {
/* Enables the TV encoder */
# define TV_ENC_ENABLE (1 << 31)
/* Sources the TV encoder input from pipe B instead of A. */
# define TV_ENC_PIPEB_SELECT (1 << 30)
# define TV_ENC_PIPE_SEL_SHIFT 30
# define TV_ENC_PIPE_SEL_MASK (1 << 30)
# define TV_ENC_PIPE_SEL(pipe) ((pipe) << 30)
/* Outputs composite video (DAC A only) */
# define TV_ENC_OUTPUT_COMPOSITE (0 << 28)
/* Outputs SVideo video (DAC B/C) */
@ -5172,10 +5195,15 @@ enum {
#define CHV_DP_D _MMIO(VLV_DISPLAY_BASE + 0x64300)
#define DP_PORT_EN (1 << 31)
#define DP_PIPEB_SELECT (1 << 30)
#define DP_PIPE_MASK (1 << 30)
#define DP_PIPE_SELECT_CHV(pipe) ((pipe) << 16)
#define DP_PIPE_MASK_CHV (3 << 16)
#define DP_PIPE_SEL_SHIFT 30
#define DP_PIPE_SEL_MASK (1 << 30)
#define DP_PIPE_SEL(pipe) ((pipe) << 30)
#define DP_PIPE_SEL_SHIFT_IVB 29
#define DP_PIPE_SEL_MASK_IVB (3 << 29)
#define DP_PIPE_SEL_IVB(pipe) ((pipe) << 29)
#define DP_PIPE_SEL_SHIFT_CHV 16
#define DP_PIPE_SEL_MASK_CHV (3 << 16)
#define DP_PIPE_SEL_CHV(pipe) ((pipe) << 16)
/* Link training mode - select a suitable mode for each stage */
#define DP_LINK_TRAIN_PAT_1 (0 << 28)
@ -5896,7 +5924,6 @@ enum {
#define CURSOR_GAMMA_ENABLE 0x40000000
#define CURSOR_STRIDE_SHIFT 28
#define CURSOR_STRIDE(x) ((ffs(x)-9) << CURSOR_STRIDE_SHIFT) /* 256,512,1k,2k */
#define CURSOR_PIPE_CSC_ENABLE (1<<24)
#define CURSOR_FORMAT_SHIFT 24
#define CURSOR_FORMAT_MASK (0x07 << CURSOR_FORMAT_SHIFT)
#define CURSOR_FORMAT_2C (0x00 << CURSOR_FORMAT_SHIFT)
@ -5905,18 +5932,21 @@ enum {
#define CURSOR_FORMAT_ARGB (0x04 << CURSOR_FORMAT_SHIFT)
#define CURSOR_FORMAT_XRGB (0x05 << CURSOR_FORMAT_SHIFT)
/* New style CUR*CNTR flags */
#define CURSOR_MODE 0x27
#define CURSOR_MODE_DISABLE 0x00
#define CURSOR_MODE_128_32B_AX 0x02
#define CURSOR_MODE_256_32B_AX 0x03
#define CURSOR_MODE_64_32B_AX 0x07
#define CURSOR_MODE_128_ARGB_AX ((1 << 5) | CURSOR_MODE_128_32B_AX)
#define CURSOR_MODE_256_ARGB_AX ((1 << 5) | CURSOR_MODE_256_32B_AX)
#define CURSOR_MODE_64_ARGB_AX ((1 << 5) | CURSOR_MODE_64_32B_AX)
#define MCURSOR_MODE 0x27
#define MCURSOR_MODE_DISABLE 0x00
#define MCURSOR_MODE_128_32B_AX 0x02
#define MCURSOR_MODE_256_32B_AX 0x03
#define MCURSOR_MODE_64_32B_AX 0x07
#define MCURSOR_MODE_128_ARGB_AX ((1 << 5) | MCURSOR_MODE_128_32B_AX)
#define MCURSOR_MODE_256_ARGB_AX ((1 << 5) | MCURSOR_MODE_256_32B_AX)
#define MCURSOR_MODE_64_ARGB_AX ((1 << 5) | MCURSOR_MODE_64_32B_AX)
#define MCURSOR_PIPE_SELECT_MASK (0x3 << 28)
#define MCURSOR_PIPE_SELECT_SHIFT 28
#define MCURSOR_PIPE_SELECT(pipe) ((pipe) << 28)
#define MCURSOR_GAMMA_ENABLE (1 << 26)
#define CURSOR_ROTATE_180 (1<<15)
#define CURSOR_TRICKLE_FEED_DISABLE (1 << 14)
#define MCURSOR_PIPE_CSC_ENABLE (1<<24)
#define MCURSOR_ROTATE_180 (1<<15)
#define MCURSOR_TRICKLE_FEED_DISABLE (1 << 14)
#define _CURABASE 0x70084
#define _CURAPOS 0x70088
#define CURSOR_POS_MASK 0x007FF
@ -6764,6 +6794,10 @@ enum {
#define _PS_VPHASE_1B 0x68988
#define _PS_VPHASE_2B 0x68A88
#define _PS_VPHASE_1C 0x69188
#define PS_Y_PHASE(x) ((x) << 16)
#define PS_UV_RGB_PHASE(x) ((x) << 0)
#define PS_PHASE_MASK (0x7fff << 1) /* u2.13 */
#define PS_PHASE_TRIP (1 << 0)
#define _PS_HPHASE_1A 0x68194
#define _PS_HPHASE_2A 0x68294
@ -7192,13 +7226,17 @@ enum {
/* GEN7 chicken */
#define GEN7_COMMON_SLICE_CHICKEN1 _MMIO(0x7010)
# define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC ((1<<10) | (1<<26))
# define GEN9_RHWO_OPTIMIZATION_DISABLE (1<<14)
#define COMMON_SLICE_CHICKEN2 _MMIO(0x7014)
# define GEN9_PBE_COMPRESSED_HASH_SELECTION (1<<13)
# define GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE (1<<12)
# define GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION (1<<8)
# define GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE (1<<0)
#define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC ((1 << 10) | (1 << 26))
#define GEN9_RHWO_OPTIMIZATION_DISABLE (1 << 14)
#define COMMON_SLICE_CHICKEN2 _MMIO(0x7014)
#define GEN9_PBE_COMPRESSED_HASH_SELECTION (1 << 13)
#define GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE (1 << 12)
#define GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION (1 << 8)
#define GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE (1 << 0)
#define GEN11_COMMON_SLICE_CHICKEN3 _MMIO(0x7304)
#define GEN11_BLEND_EMB_FIX_DISABLE_IN_RCC (1 << 11)
#define HIZ_CHICKEN _MMIO(0x7018)
# define CHV_HZ_8X8_MODE_IN_1X (1<<15)
@ -7208,6 +7246,7 @@ enum {
#define DISABLE_PIXEL_MASK_CAMMING (1<<14)
#define GEN9_SLICE_COMMON_ECO_CHICKEN1 _MMIO(0x731c)
#define GEN11_STATE_CACHE_REDIRECT_TO_CS (1 << 11)
#define GEN7_L3SQCREG1 _MMIO(0xB010)
#define VLV_B0_WA_L3SQCREG1_VALUE 0x00D30000
@ -7862,27 +7901,14 @@ enum {
#define PCH_DP_AUX_CH_DATA(aux_ch, i) _MMIO(_PORT((aux_ch) - AUX_CH_B, _PCH_DPB_AUX_CH_DATA1, _PCH_DPC_AUX_CH_DATA1) + (i) * 4) /* 5 registers */
/* CPT */
#define PORT_TRANS_A_SEL_CPT 0
#define PORT_TRANS_B_SEL_CPT (1<<29)
#define PORT_TRANS_C_SEL_CPT (2<<29)
#define PORT_TRANS_SEL_MASK (3<<29)
#define PORT_TRANS_SEL_CPT(pipe) ((pipe) << 29)
#define PORT_TO_PIPE(val) (((val) & (1<<30)) >> 30)
#define PORT_TO_PIPE_CPT(val) (((val) & PORT_TRANS_SEL_MASK) >> 29)
#define SDVO_PORT_TO_PIPE_CHV(val) (((val) & (3<<24)) >> 24)
#define DP_PORT_TO_PIPE_CHV(val) (((val) & (3<<16)) >> 16)
#define _TRANS_DP_CTL_A 0xe0300
#define _TRANS_DP_CTL_B 0xe1300
#define _TRANS_DP_CTL_C 0xe2300
#define TRANS_DP_CTL(pipe) _MMIO_PIPE(pipe, _TRANS_DP_CTL_A, _TRANS_DP_CTL_B)
#define TRANS_DP_OUTPUT_ENABLE (1<<31)
#define TRANS_DP_PORT_SEL_B (0<<29)
#define TRANS_DP_PORT_SEL_C (1<<29)
#define TRANS_DP_PORT_SEL_D (2<<29)
#define TRANS_DP_PORT_SEL_NONE (3<<29)
#define TRANS_DP_PORT_SEL_MASK (3<<29)
#define TRANS_DP_PIPE_TO_PORT(val) ((((val) & TRANS_DP_PORT_SEL_MASK) >> 29) + PORT_B)
#define TRANS_DP_PORT_SEL_MASK (3 << 29)
#define TRANS_DP_PORT_SEL_NONE (3 << 29)
#define TRANS_DP_PORT_SEL(port) (((port) - PORT_B) << 29)
#define TRANS_DP_AUDIO_ONLY (1<<26)
#define TRANS_DP_ENH_FRAMING (1<<18)
#define TRANS_DP_8BPC (0<<9)
@ -8322,8 +8348,9 @@ enum {
#define GEN7_ROW_CHICKEN2 _MMIO(0xe4f4)
#define GEN7_ROW_CHICKEN2_GT2 _MMIO(0xf4f4)
#define DOP_CLOCK_GATING_DISABLE (1<<0)
#define PUSH_CONSTANT_DEREF_DISABLE (1<<8)
#define DOP_CLOCK_GATING_DISABLE (1 << 0)
#define PUSH_CONSTANT_DEREF_DISABLE (1 << 8)
#define GEN11_TDL_CLOCK_GATING_FIX_DISABLE (1 << 1)
#define HSW_ROW_CHICKEN3 _MMIO(0xe49c)
#define HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE (1 << 6)
@ -9100,13 +9127,16 @@ enum skl_power_gate {
#define DPLL_CFGCR1_QDIV_RATIO_MASK (0xff << 10)
#define DPLL_CFGCR1_QDIV_RATIO_SHIFT (10)
#define DPLL_CFGCR1_QDIV_RATIO(x) ((x) << 10)
#define DPLL_CFGCR1_QDIV_MODE_SHIFT (9)
#define DPLL_CFGCR1_QDIV_MODE(x) ((x) << 9)
#define DPLL_CFGCR1_KDIV_MASK (7 << 6)
#define DPLL_CFGCR1_KDIV_SHIFT (6)
#define DPLL_CFGCR1_KDIV(x) ((x) << 6)
#define DPLL_CFGCR1_KDIV_1 (1 << 6)
#define DPLL_CFGCR1_KDIV_2 (2 << 6)
#define DPLL_CFGCR1_KDIV_4 (4 << 6)
#define DPLL_CFGCR1_PDIV_MASK (0xf << 2)
#define DPLL_CFGCR1_PDIV_SHIFT (2)
#define DPLL_CFGCR1_PDIV(x) ((x) << 2)
#define DPLL_CFGCR1_PDIV_2 (1 << 2)
#define DPLL_CFGCR1_PDIV_3 (2 << 2)


@ -320,6 +320,7 @@ static void advance_ring(struct i915_request *request)
* is just about to be. Either works, if we miss the last two
* noops - they are safe to be replayed on a reset.
*/
GEM_TRACE("marking %s as inactive\n", ring->timeline->name);
tail = READ_ONCE(request->tail);
list_del(&ring->active_link);
} else {
@ -383,8 +384,8 @@ static void __retire_engine_request(struct intel_engine_cs *engine,
* the subsequent request.
*/
if (engine->last_retired_context)
intel_context_unpin(engine->last_retired_context, engine);
engine->last_retired_context = rq->ctx;
intel_context_unpin(engine->last_retired_context);
engine->last_retired_context = rq->hw_context;
}
static void __retire_engine_upto(struct intel_engine_cs *engine,
@ -455,8 +456,8 @@ static void i915_request_retire(struct i915_request *request)
i915_request_remove_from_client(request);
/* Retirement decays the ban score as it is a sign of ctx progress */
atomic_dec_if_positive(&request->ctx->ban_score);
intel_context_unpin(request->ctx, request->engine);
atomic_dec_if_positive(&request->gem_context->ban_score);
intel_context_unpin(request->hw_context);
__retire_engine_upto(request->engine, request);
@ -657,7 +658,7 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_request *rq;
struct intel_ring *ring;
struct intel_context *ce;
int ret;
lockdep_assert_held(&i915->drm.struct_mutex);
@ -681,22 +682,21 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
* GGTT space, so do this first before we reserve a seqno for
* ourselves.
*/
ring = intel_context_pin(ctx, engine);
if (IS_ERR(ring))
return ERR_CAST(ring);
GEM_BUG_ON(!ring);
ce = intel_context_pin(ctx, engine);
if (IS_ERR(ce))
return ERR_CAST(ce);
ret = reserve_gt(i915);
if (ret)
goto err_unpin;
ret = intel_ring_wait_for_space(ring, MIN_SPACE_FOR_ADD_REQUEST);
ret = intel_ring_wait_for_space(ce->ring, MIN_SPACE_FOR_ADD_REQUEST);
if (ret)
goto err_unreserve;
/* Move our oldest request to the slab-cache (if not in use!) */
rq = list_first_entry(&ring->request_list, typeof(*rq), ring_link);
if (!list_is_last(&rq->ring_link, &ring->request_list) &&
rq = list_first_entry(&ce->ring->request_list, typeof(*rq), ring_link);
if (!list_is_last(&rq->ring_link, &ce->ring->request_list) &&
i915_request_completed(rq))
i915_request_retire(rq);
@ -760,9 +760,10 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
INIT_LIST_HEAD(&rq->active_list);
rq->i915 = i915;
rq->engine = engine;
rq->ctx = ctx;
rq->ring = ring;
rq->timeline = ring->timeline;
rq->gem_context = ctx;
rq->hw_context = ce;
rq->ring = ce->ring;
rq->timeline = ce->ring->timeline;
GEM_BUG_ON(rq->timeline == &engine->timeline);
spin_lock_init(&rq->lock);
@ -814,14 +815,14 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
goto err_unwind;
/* Keep a second pin for the dual retirement along engine and ring */
__intel_context_pin(rq->ctx, engine);
__intel_context_pin(ce);
/* Check that we didn't interrupt ourselves with a new request */
GEM_BUG_ON(rq->timeline->seqno != rq->fence.seqno);
return rq;
err_unwind:
rq->ring->emit = rq->head;
ce->ring->emit = rq->head;
/* Make sure we didn't add ourselves to external state before freeing */
GEM_BUG_ON(!list_empty(&rq->active_list));
@ -832,7 +833,7 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
err_unreserve:
unreserve_gt(i915);
err_unpin:
intel_context_unpin(ctx, engine);
intel_context_unpin(ce);
return ERR_PTR(ret);
}
@ -1018,8 +1019,8 @@ i915_request_await_object(struct i915_request *to,
void __i915_request_add(struct i915_request *request, bool flush_caches)
{
struct intel_engine_cs *engine = request->engine;
struct intel_ring *ring = request->ring;
struct i915_timeline *timeline = request->timeline;
struct intel_ring *ring = request->ring;
struct i915_request *prev;
u32 *cs;
int err;
@ -1095,8 +1096,10 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
i915_gem_active_set(&timeline->last_request, request);
list_add_tail(&request->ring_link, &ring->request_list);
if (list_is_first(&request->ring_link, &ring->request_list))
if (list_is_first(&request->ring_link, &ring->request_list)) {
GEM_TRACE("marking %s as active\n", ring->timeline->name);
list_add(&ring->active_link, &request->i915->gt.active_rings);
}
request->emitted_jiffies = jiffies;
/*
@ -1113,7 +1116,7 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
local_bh_disable();
rcu_read_lock(); /* RCU serialisation for set-wedged protection */
if (engine->schedule)
engine->schedule(request, &request->ctx->sched);
engine->schedule(request, &request->gem_context->sched);
rcu_read_unlock();
i915_sw_fence_commit(&request->submit);
local_bh_enable(); /* Kick the execlists tasklet if just scheduled */


@ -93,8 +93,9 @@ struct i915_request {
* i915_request_free() will then decrement the refcount on the
* context.
*/
struct i915_gem_context *ctx;
struct i915_gem_context *gem_context;
struct intel_engine_cs *engine;
struct intel_context *hw_context;
struct intel_ring *ring;
struct i915_timeline *timeline;
struct intel_signal_node signaling;
@ -266,6 +267,7 @@ long i915_request_wait(struct i915_request *rq,
#define I915_WAIT_INTERRUPTIBLE BIT(0)
#define I915_WAIT_LOCKED BIT(1) /* struct_mutex held, handle GPU reset */
#define I915_WAIT_ALL BIT(2) /* used by i915_gem_object_wait() */
#define I915_WAIT_FOR_IDLE_BOOST BIT(3)
static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine);


@ -591,21 +591,26 @@ TRACE_EVENT(i915_gem_ring_sync_to,
TP_STRUCT__entry(
__field(u32, dev)
__field(u32, sync_from)
__field(u32, sync_to)
__field(u32, from_class)
__field(u32, from_instance)
__field(u32, to_class)
__field(u32, to_instance)
__field(u32, seqno)
),
TP_fast_assign(
__entry->dev = from->i915->drm.primary->index;
__entry->sync_from = from->engine->id;
__entry->sync_to = to->engine->id;
__entry->from_class = from->engine->uabi_class;
__entry->from_instance = from->engine->instance;
__entry->to_class = to->engine->uabi_class;
__entry->to_instance = to->engine->instance;
__entry->seqno = from->global_seqno;
),
TP_printk("dev=%u, sync-from=%u, sync-to=%u, seqno=%u",
TP_printk("dev=%u, sync-from=%u:%u, sync-to=%u:%u, seqno=%u",
__entry->dev,
__entry->sync_from, __entry->sync_to,
__entry->from_class, __entry->from_instance,
__entry->to_class, __entry->to_instance,
__entry->seqno)
);
@ -616,24 +621,27 @@ TRACE_EVENT(i915_request_queue,
TP_STRUCT__entry(
__field(u32, dev)
__field(u32, hw_id)
__field(u32, ring)
__field(u32, ctx)
__field(u64, ctx)
__field(u16, class)
__field(u16, instance)
__field(u32, seqno)
__field(u32, flags)
),
TP_fast_assign(
__entry->dev = rq->i915->drm.primary->index;
__entry->hw_id = rq->ctx->hw_id;
__entry->ring = rq->engine->id;
__entry->hw_id = rq->gem_context->hw_id;
__entry->class = rq->engine->uabi_class;
__entry->instance = rq->engine->instance;
__entry->ctx = rq->fence.context;
__entry->seqno = rq->fence.seqno;
__entry->flags = flags;
),
TP_printk("dev=%u, hw_id=%u, ring=%u, ctx=%u, seqno=%u, flags=0x%x",
__entry->dev, __entry->hw_id, __entry->ring, __entry->ctx,
__entry->seqno, __entry->flags)
TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, flags=0x%x",
__entry->dev, __entry->class, __entry->instance,
__entry->hw_id, __entry->ctx, __entry->seqno,
__entry->flags)
);
DECLARE_EVENT_CLASS(i915_request,
@ -643,24 +651,27 @@ DECLARE_EVENT_CLASS(i915_request,
TP_STRUCT__entry(
__field(u32, dev)
__field(u32, hw_id)
__field(u32, ring)
__field(u32, ctx)
__field(u64, ctx)
__field(u16, class)
__field(u16, instance)
__field(u32, seqno)
__field(u32, global)
),
TP_fast_assign(
__entry->dev = rq->i915->drm.primary->index;
__entry->hw_id = rq->ctx->hw_id;
__entry->ring = rq->engine->id;
__entry->hw_id = rq->gem_context->hw_id;
__entry->class = rq->engine->uabi_class;
__entry->instance = rq->engine->instance;
__entry->ctx = rq->fence.context;
__entry->seqno = rq->fence.seqno;
__entry->global = rq->global_seqno;
),
TP_printk("dev=%u, hw_id=%u, ring=%u, ctx=%u, seqno=%u, global=%u",
__entry->dev, __entry->hw_id, __entry->ring, __entry->ctx,
__entry->seqno, __entry->global)
TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, global=%u",
__entry->dev, __entry->class, __entry->instance,
__entry->hw_id, __entry->ctx, __entry->seqno,
__entry->global)
);
DEFINE_EVENT(i915_request, i915_request_add,
@ -686,8 +697,9 @@ TRACE_EVENT(i915_request_in,
TP_STRUCT__entry(
__field(u32, dev)
__field(u32, hw_id)
__field(u32, ring)
__field(u32, ctx)
__field(u64, ctx)
__field(u16, class)
__field(u16, instance)
__field(u32, seqno)
__field(u32, global_seqno)
__field(u32, port)
@ -696,8 +708,9 @@ TRACE_EVENT(i915_request_in,
TP_fast_assign(
__entry->dev = rq->i915->drm.primary->index;
__entry->hw_id = rq->ctx->hw_id;
__entry->ring = rq->engine->id;
__entry->hw_id = rq->gem_context->hw_id;
__entry->class = rq->engine->uabi_class;
__entry->instance = rq->engine->instance;
__entry->ctx = rq->fence.context;
__entry->seqno = rq->fence.seqno;
__entry->global_seqno = rq->global_seqno;
@ -705,10 +718,10 @@ TRACE_EVENT(i915_request_in,
__entry->port = port;
),
TP_printk("dev=%u, hw_id=%u, ring=%u, ctx=%u, seqno=%u, prio=%u, global=%u, port=%u",
__entry->dev, __entry->hw_id, __entry->ring, __entry->ctx,
__entry->seqno, __entry->prio, __entry->global_seqno,
__entry->port)
TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, prio=%u, global=%u, port=%u",
__entry->dev, __entry->class, __entry->instance,
__entry->hw_id, __entry->ctx, __entry->seqno,
__entry->prio, __entry->global_seqno, __entry->port)
);
TRACE_EVENT(i915_request_out,
@ -718,8 +731,9 @@ TRACE_EVENT(i915_request_out,
TP_STRUCT__entry(
__field(u32, dev)
__field(u32, hw_id)
__field(u32, ring)
__field(u32, ctx)
__field(u64, ctx)
__field(u16, class)
__field(u16, instance)
__field(u32, seqno)
__field(u32, global_seqno)
__field(u32, completed)
@ -727,17 +741,18 @@ TRACE_EVENT(i915_request_out,
TP_fast_assign(
__entry->dev = rq->i915->drm.primary->index;
__entry->hw_id = rq->ctx->hw_id;
__entry->ring = rq->engine->id;
__entry->hw_id = rq->gem_context->hw_id;
__entry->class = rq->engine->uabi_class;
__entry->instance = rq->engine->instance;
__entry->ctx = rq->fence.context;
__entry->seqno = rq->fence.seqno;
__entry->global_seqno = rq->global_seqno;
__entry->completed = i915_request_completed(rq);
),
TP_printk("dev=%u, hw_id=%u, ring=%u, ctx=%u, seqno=%u, global=%u, completed?=%u",
__entry->dev, __entry->hw_id, __entry->ring,
__entry->ctx, __entry->seqno,
TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, global=%u, completed?=%u",
__entry->dev, __entry->class, __entry->instance,
__entry->hw_id, __entry->ctx, __entry->seqno,
__entry->global_seqno, __entry->completed)
);
@ -771,21 +786,23 @@ TRACE_EVENT(intel_engine_notify,
TP_STRUCT__entry(
__field(u32, dev)
__field(u32, ring)
__field(u16, class)
__field(u16, instance)
__field(u32, seqno)
__field(bool, waiters)
),
TP_fast_assign(
__entry->dev = engine->i915->drm.primary->index;
__entry->ring = engine->id;
__entry->class = engine->uabi_class;
__entry->instance = engine->instance;
__entry->seqno = intel_engine_get_seqno(engine);
__entry->waiters = waiters;
),
TP_printk("dev=%u, ring=%u, seqno=%u, waiters=%u",
__entry->dev, __entry->ring, __entry->seqno,
__entry->waiters)
TP_printk("dev=%u, engine=%u:%u, seqno=%u, waiters=%u",
__entry->dev, __entry->class, __entry->instance,
__entry->seqno, __entry->waiters)
);
DEFINE_EVENT(i915_request, i915_request_retire,
@ -800,8 +817,9 @@ TRACE_EVENT(i915_request_wait_begin,
TP_STRUCT__entry(
__field(u32, dev)
__field(u32, hw_id)
__field(u32, ring)
__field(u32, ctx)
__field(u64, ctx)
__field(u16, class)
__field(u16, instance)
__field(u32, seqno)
__field(u32, global)
__field(unsigned int, flags)
@ -815,18 +833,20 @@ TRACE_EVENT(i915_request_wait_begin,
*/
TP_fast_assign(
__entry->dev = rq->i915->drm.primary->index;
__entry->hw_id = rq->ctx->hw_id;
__entry->ring = rq->engine->id;
__entry->hw_id = rq->gem_context->hw_id;
__entry->class = rq->engine->uabi_class;
__entry->instance = rq->engine->instance;
__entry->ctx = rq->fence.context;
__entry->seqno = rq->fence.seqno;
__entry->global = rq->global_seqno;
__entry->flags = flags;
),
TP_printk("dev=%u, hw_id=%u, ring=%u, ctx=%u, seqno=%u, global=%u, blocking=%u, flags=0x%x",
__entry->dev, __entry->hw_id, __entry->ring, __entry->ctx,
__entry->seqno, __entry->global,
!!(__entry->flags & I915_WAIT_LOCKED), __entry->flags)
TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, global=%u, blocking=%u, flags=0x%x",
__entry->dev, __entry->class, __entry->instance,
__entry->hw_id, __entry->ctx, __entry->seqno,
__entry->global, !!(__entry->flags & I915_WAIT_LOCKED),
__entry->flags)
);
DEFINE_EVENT(i915_request, i915_request_wait_end,
@ -936,7 +956,7 @@ DECLARE_EVENT_CLASS(i915_context,
__entry->dev = ctx->i915->drm.primary->index;
__entry->ctx = ctx;
__entry->hw_id = ctx->hw_id;
__entry->vm = ctx->ppgtt ? &ctx->ppgtt->base : NULL;
__entry->vm = ctx->ppgtt ? &ctx->ppgtt->vm : NULL;
),
TP_printk("dev=%u, ctx=%p, ctx_vm=%p, hw_id=%u",
@ -966,21 +986,24 @@ TRACE_EVENT(switch_mm,
TP_ARGS(engine, to),
TP_STRUCT__entry(
__field(u32, ring)
__field(u16, class)
__field(u16, instance)
__field(struct i915_gem_context *, to)
__field(struct i915_address_space *, vm)
__field(u32, dev)
),
TP_fast_assign(
__entry->ring = engine->id;
__entry->class = engine->uabi_class;
__entry->instance = engine->instance;
__entry->to = to;
__entry->vm = to->ppgtt? &to->ppgtt->base : NULL;
__entry->vm = to->ppgtt ? &to->ppgtt->vm : NULL;
__entry->dev = engine->i915->drm.primary->index;
),
TP_printk("dev=%u, ring=%u, ctx=%p, ctx_vm=%p",
__entry->dev, __entry->ring, __entry->to, __entry->vm)
TP_printk("dev=%u, engine=%u:%u, ctx=%p, ctx_vm=%p",
__entry->dev, __entry->class, __entry->instance, __entry->to,
__entry->vm)
);
#endif /* _I915_TRACE_H_ */


@ -105,7 +105,7 @@ static void vgt_deballoon_space(struct i915_ggtt *ggtt,
node->start + node->size,
node->size / 1024);
ggtt->base.reserved -= node->size;
ggtt->vm.reserved -= node->size;
drm_mm_remove_node(node);
}
@ -141,11 +141,11 @@ static int vgt_balloon_space(struct i915_ggtt *ggtt,
DRM_INFO("balloon space: range [ 0x%lx - 0x%lx ] %lu KiB.\n",
start, end, size / 1024);
ret = i915_gem_gtt_reserve(&ggtt->base, node,
ret = i915_gem_gtt_reserve(&ggtt->vm, node,
size, start, I915_COLOR_UNEVICTABLE,
0);
if (!ret)
ggtt->base.reserved += size;
ggtt->vm.reserved += size;
return ret;
}
@ -197,7 +197,7 @@ static int vgt_balloon_space(struct i915_ggtt *ggtt,
int intel_vgt_balloon(struct drm_i915_private *dev_priv)
{
struct i915_ggtt *ggtt = &dev_priv->ggtt;
unsigned long ggtt_end = ggtt->base.total;
unsigned long ggtt_end = ggtt->vm.total;
unsigned long mappable_base, mappable_size, mappable_end;
unsigned long unmappable_base, unmappable_size, unmappable_end;


@ -36,6 +36,12 @@ intel_vgpu_has_hwsp_emulation(struct drm_i915_private *dev_priv)
return dev_priv->vgpu.caps & VGT_CAPS_HWSP_EMULATION;
}
static inline bool
intel_vgpu_has_huge_gtt(struct drm_i915_private *dev_priv)
{
return dev_priv->vgpu.caps & VGT_CAPS_HUGE_GTT;
}
int intel_vgt_balloon(struct drm_i915_private *dev_priv);
void intel_vgt_deballoon(struct drm_i915_private *dev_priv);


@ -85,7 +85,7 @@ vma_create(struct drm_i915_gem_object *obj,
int i;
/* The aliasing_ppgtt should never be used directly! */
GEM_BUG_ON(vm == &vm->i915->mm.aliasing_ppgtt->base);
GEM_BUG_ON(vm == &vm->i915->mm.aliasing_ppgtt->vm);
vma = kmem_cache_zalloc(vm->i915->vmas, GFP_KERNEL);
if (vma == NULL)
@ -459,6 +459,18 @@ bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long cache_level)
return true;
}
static void assert_bind_count(const struct drm_i915_gem_object *obj)
{
/*
* Combine the assertion that the object is bound and that we have
* pinned its pages. But we should never have bound the object
* more than we have pinned its pages. (For complete accuracy, we
* assume that no one else is pinning the pages, but as a rough assertion
* that we will not run into problems later, this will do!)
*/
GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < obj->bind_count);
}
/**
* i915_vma_insert - finds a slot for the vma in its address space
* @vma: the vma
@ -595,7 +607,7 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
obj->bind_count++;
spin_unlock(&dev_priv->mm.obj_lock);
GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < obj->bind_count);
assert_bind_count(obj);
return 0;
@ -633,7 +645,7 @@ i915_vma_remove(struct i915_vma *vma)
* reaped by the shrinker.
*/
i915_gem_object_unpin_pages(obj);
GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < obj->bind_count);
assert_bind_count(obj);
}
int __i915_vma_do_pin(struct i915_vma *vma,


@ -267,8 +267,6 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
if (!lvds_lfp_data_ptrs)
return;
dev_priv->vbt.lvds_vbt = 1;
panel_dvo_timing = get_lvds_dvo_timing(lvds_lfp_data,
lvds_lfp_data_ptrs,
panel_type);
@ -518,8 +516,31 @@ parse_driver_features(struct drm_i915_private *dev_priv,
if (!driver)
return;
if (driver->lvds_config == BDB_DRIVER_FEATURE_EDP)
dev_priv->vbt.edp.support = 1;
if (INTEL_GEN(dev_priv) >= 5) {
/*
* Note that we consider BDB_DRIVER_FEATURE_INT_SDVO_LVDS
* to mean "eDP". The VBT spec doesn't agree with that
* interpretation, but real world VBTs seem to.
*/
if (driver->lvds_config != BDB_DRIVER_FEATURE_INT_LVDS)
dev_priv->vbt.int_lvds_support = 0;
} else {
/*
* FIXME it's not clear which BDB version has the LVDS config
* bits defined. Revision history in the VBT spec says:
* "0.92 | Add two definitions for VBT value of LVDS Active
* Config (00b and 11b values defined) | 06/13/2005"
* but does not specify the BDB version.
*
* So far version 134 (on i945gm) is the oldest VBT observed
* in the wild with the bits correctly populated. Version
* 108 (on i85x) does not have the bits correctly populated.
*/
if (bdb->version >= 134 &&
driver->lvds_config != BDB_DRIVER_FEATURE_INT_LVDS &&
driver->lvds_config != BDB_DRIVER_FEATURE_INT_SDVO_LVDS)
dev_priv->vbt.int_lvds_support = 0;
}
DRM_DEBUG_KMS("DRRS State Enabled:%d\n", driver->drrs_enabled);
/*
@ -542,11 +563,8 @@ parse_edp(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
int panel_type = dev_priv->vbt.panel_type;
edp = find_section(bdb, BDB_EDP);
if (!edp) {
if (dev_priv->vbt.edp.support)
DRM_DEBUG_KMS("No eDP BDB found but eDP panel supported.\n");
if (!edp)
return;
}
switch ((edp->color_depth >> (panel_type * 2)) & 3) {
case EDP_18BPP:
@ -688,8 +706,52 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
break;
}
dev_priv->vbt.psr.tp1_wakeup_time = psr_table->tp1_wakeup_time;
dev_priv->vbt.psr.tp2_tp3_wakeup_time = psr_table->tp2_tp3_wakeup_time;
/*
* New psr options 0=500us, 1=100us, 2=2500us, 3=0us
* Old decimal value is wake up time in multiples of 100 us.
*/
if (bdb->version >= 209 && IS_GEN9_BC(dev_priv)) {
switch (psr_table->tp1_wakeup_time) {
case 0:
dev_priv->vbt.psr.tp1_wakeup_time_us = 500;
break;
case 1:
dev_priv->vbt.psr.tp1_wakeup_time_us = 100;
break;
case 3:
dev_priv->vbt.psr.tp1_wakeup_time_us = 0;
break;
default:
DRM_DEBUG_KMS("VBT tp1 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
psr_table->tp1_wakeup_time);
/* fallthrough */
case 2:
dev_priv->vbt.psr.tp1_wakeup_time_us = 2500;
break;
}
switch (psr_table->tp2_tp3_wakeup_time) {
case 0:
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 500;
break;
case 1:
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 100;
break;
case 3:
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 0;
break;
default:
DRM_DEBUG_KMS("VBT tp2_tp3 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
psr_table->tp2_tp3_wakeup_time);
/* fallthrough */
case 2:
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 2500;
break;
}
} else {
dev_priv->vbt.psr.tp1_wakeup_time_us = psr_table->tp1_wakeup_time * 100;
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = psr_table->tp2_tp3_wakeup_time * 100;
}
}
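The mapping spelled out in the comment above (VBT value 0=500us, 1=100us, 2=2500us, 3=0us) is implemented twice, once per wakeup-time field. As a hedged alternative sketch, not the driver's code, the same mapping can be kept in a single lookup table; the function and array names below are invented for illustration, and out-of-range values fall back to the longest wakeup time just like the default cases of the switch statements.

/* Illustrative only: table-driven form of the VBT value -> microseconds map. */
static const unsigned int psr_wakeup_time_us_map[] = { 500, 100, 2500, 0 };

static unsigned int psr_wakeup_time_us(unsigned int vbt_value)
{
	if (vbt_value >= sizeof(psr_wakeup_time_us_map) /
			 sizeof(psr_wakeup_time_us_map[0]))
		return 2500; /* same fallback as the switch statements' default */

	return psr_wakeup_time_us_map[vbt_value];
}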
static void parse_dsi_backlight_ports(struct drm_i915_private *dev_priv,
@ -1197,18 +1259,37 @@ static const u8 cnp_ddc_pin_map[] = {
[DDC_BUS_DDI_F] = GMBUS_PIN_3_BXT, /* sic */
};
static const u8 icp_ddc_pin_map[] = {
[ICL_DDC_BUS_DDI_A] = GMBUS_PIN_1_BXT,
[ICL_DDC_BUS_DDI_B] = GMBUS_PIN_2_BXT,
[ICL_DDC_BUS_PORT_1] = GMBUS_PIN_9_TC1_ICP,
[ICL_DDC_BUS_PORT_2] = GMBUS_PIN_10_TC2_ICP,
[ICL_DDC_BUS_PORT_3] = GMBUS_PIN_11_TC3_ICP,
[ICL_DDC_BUS_PORT_4] = GMBUS_PIN_12_TC4_ICP,
};
static u8 map_ddc_pin(struct drm_i915_private *dev_priv, u8 vbt_pin)
{
if (HAS_PCH_CNP(dev_priv)) {
if (vbt_pin < ARRAY_SIZE(cnp_ddc_pin_map)) {
return cnp_ddc_pin_map[vbt_pin];
} else {
DRM_DEBUG_KMS("Ignoring alternate pin: VBT claims DDC pin %d, which is not valid for this platform\n", vbt_pin);
return 0;
}
const u8 *ddc_pin_map;
int n_entries;
if (HAS_PCH_ICP(dev_priv)) {
ddc_pin_map = icp_ddc_pin_map;
n_entries = ARRAY_SIZE(icp_ddc_pin_map);
} else if (HAS_PCH_CNP(dev_priv)) {
ddc_pin_map = cnp_ddc_pin_map;
n_entries = ARRAY_SIZE(cnp_ddc_pin_map);
} else {
/* Assuming direct map */
return vbt_pin;
}
return vbt_pin;
if (vbt_pin < n_entries && ddc_pin_map[vbt_pin] != 0)
return ddc_pin_map[vbt_pin];
DRM_DEBUG_KMS("Ignoring alternate pin: VBT claims DDC pin %d, which is not valid for this platform\n",
vbt_pin);
return 0;
}
static void parse_ddi_port(struct drm_i915_private *dev_priv, enum port port,
@ -1504,7 +1585,6 @@ init_vbt_defaults(struct drm_i915_private *dev_priv)
/* LFP panel data */
dev_priv->vbt.lvds_dither = 1;
dev_priv->vbt.lvds_vbt = 0;
/* SDVO panel data */
dev_priv->vbt.sdvo_lvds_vbt_mode = NULL;
@ -1513,6 +1593,9 @@ init_vbt_defaults(struct drm_i915_private *dev_priv)
dev_priv->vbt.int_tv_support = 1;
dev_priv->vbt.int_crt_support = 1;
/* driver features */
dev_priv->vbt.int_lvds_support = 1;
/* Default to using SSC */
dev_priv->vbt.lvds_use_ssc = 1;
/*


@ -846,8 +846,9 @@ static void cancel_fake_irq(struct intel_engine_cs *engine)
void intel_engine_reset_breadcrumbs(struct intel_engine_cs *engine)
{
struct intel_breadcrumbs *b = &engine->breadcrumbs;
unsigned long flags;
spin_lock_irq(&b->irq_lock);
spin_lock_irqsave(&b->irq_lock, flags);
/*
* Leave the fake_irq timer enabled (if it is running), but clear the
@ -871,7 +872,7 @@ void intel_engine_reset_breadcrumbs(struct intel_engine_cs *engine)
*/
clear_bit(ENGINE_IRQ_BREADCRUMB, &engine->irq_posted);
spin_unlock_irq(&b->irq_lock);
spin_unlock_irqrestore(&b->irq_lock, flags);
}
void intel_engine_fini_breadcrumbs(struct intel_engine_cs *engine)


@ -63,33 +63,35 @@ static struct intel_crt *intel_attached_crt(struct drm_connector *connector)
return intel_encoder_to_crt(intel_attached_encoder(connector));
}
bool intel_crt_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t adpa_reg, enum pipe *pipe)
{
u32 val;
val = I915_READ(adpa_reg);
/* asserts want to know the pipe even if the port is disabled */
if (HAS_PCH_CPT(dev_priv))
*pipe = (val & ADPA_PIPE_SEL_MASK_CPT) >> ADPA_PIPE_SEL_SHIFT_CPT;
else
*pipe = (val & ADPA_PIPE_SEL_MASK) >> ADPA_PIPE_SEL_SHIFT;
return val & ADPA_DAC_ENABLE;
}
static bool intel_crt_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crt *crt = intel_encoder_to_crt(encoder);
u32 tmp;
bool ret;
if (!intel_display_power_get_if_enabled(dev_priv,
encoder->power_domain))
return false;
ret = false;
ret = intel_crt_port_enabled(dev_priv, crt->adpa_reg, pipe);
tmp = I915_READ(crt->adpa_reg);
if (!(tmp & ADPA_DAC_ENABLE))
goto out;
if (HAS_PCH_CPT(dev_priv))
*pipe = PORT_TO_PIPE_CPT(tmp);
else
*pipe = PORT_TO_PIPE(tmp);
ret = true;
out:
intel_display_power_put(dev_priv, encoder->power_domain);
return ret;
@ -168,11 +170,9 @@ static void intel_crt_set_dpms(struct intel_encoder *encoder,
if (HAS_PCH_LPT(dev_priv))
; /* Those bits don't exist here */
else if (HAS_PCH_CPT(dev_priv))
adpa |= PORT_TRANS_SEL_CPT(crtc->pipe);
else if (crtc->pipe == 0)
adpa |= ADPA_PIPE_A_SELECT;
adpa |= ADPA_PIPE_SEL_CPT(crtc->pipe);
else
adpa |= ADPA_PIPE_B_SELECT;
adpa |= ADPA_PIPE_SEL(crtc->pipe);
if (!HAS_PCH_SPLIT(dev_priv))
I915_WRITE(BCLRPAT(crtc->pipe), 0);


@ -1243,35 +1243,6 @@ intel_ddi_get_crtc_encoder(struct intel_crtc *crtc)
return ret;
}
/* Finds the only possible encoder associated with the given CRTC. */
struct intel_encoder *
intel_ddi_get_crtc_new_encoder(struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_encoder *ret = NULL;
struct drm_atomic_state *state;
struct drm_connector *connector;
struct drm_connector_state *connector_state;
int num_encoders = 0;
int i;
state = crtc_state->base.state;
for_each_new_connector_in_state(state, connector, connector_state, i) {
if (connector_state->crtc != crtc_state->base.crtc)
continue;
ret = to_intel_encoder(connector_state->best_encoder);
num_encoders++;
}
WARN(num_encoders != 1, "%d encoders on crtc for pipe %c\n", num_encoders,
pipe_name(crtc->pipe));
BUG_ON(ret == NULL);
return ret;
}
#define LC_FREQ 2700
static int hsw_ddi_calc_wrpll_link(struct drm_i915_private *dev_priv,
@ -1374,8 +1345,13 @@ static int cnl_calc_wrpll_link(struct drm_i915_private *dev_priv,
uint32_t cfgcr0, cfgcr1;
uint32_t p0, p1, p2, dco_freq, ref_clock;
cfgcr0 = I915_READ(CNL_DPLL_CFGCR0(pll_id));
cfgcr1 = I915_READ(CNL_DPLL_CFGCR1(pll_id));
if (INTEL_GEN(dev_priv) >= 11) {
cfgcr0 = I915_READ(ICL_DPLL_CFGCR0(pll_id));
cfgcr1 = I915_READ(ICL_DPLL_CFGCR1(pll_id));
} else {
cfgcr0 = I915_READ(CNL_DPLL_CFGCR0(pll_id));
cfgcr1 = I915_READ(CNL_DPLL_CFGCR1(pll_id));
}
p0 = cfgcr1 & DPLL_CFGCR1_PDIV_MASK;
p2 = cfgcr1 & DPLL_CFGCR1_KDIV_MASK;
@ -1451,6 +1427,30 @@ static void ddi_dotclock_get(struct intel_crtc_state *pipe_config)
pipe_config->base.adjusted_mode.crtc_clock = dotclock;
}
static void icl_ddi_clock_get(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum port port = encoder->port;
int link_clock = 0;
uint32_t pll_id;
pll_id = intel_get_shared_dpll_id(dev_priv, pipe_config->shared_dpll);
if (port == PORT_A || port == PORT_B) {
if (intel_crtc_has_type(pipe_config, INTEL_OUTPUT_HDMI))
link_clock = cnl_calc_wrpll_link(dev_priv, pll_id);
else
link_clock = icl_calc_dp_combo_pll_link(dev_priv,
pll_id);
} else {
/* FIXME - Add for MG PLL */
WARN(1, "MG PLL clock_get code not implemented yet\n");
}
pipe_config->port_clock = link_clock;
ddi_dotclock_get(pipe_config);
}
static void cnl_ddi_clock_get(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
@ -1644,6 +1644,8 @@ static void intel_ddi_clock_get(struct intel_encoder *encoder,
bxt_ddi_clock_get(encoder, pipe_config);
else if (IS_CANNONLAKE(dev_priv))
cnl_ddi_clock_get(encoder, pipe_config);
else if (IS_ICELAKE(dev_priv))
icl_ddi_clock_get(encoder, pipe_config);
}
void intel_ddi_set_pipe_settings(const struct intel_crtc_state *crtc_state)
@ -2115,6 +2117,26 @@ u8 intel_ddi_dp_voltage_max(struct intel_encoder *encoder)
DP_TRAIN_VOLTAGE_SWING_MASK;
}
/*
* We assume that the full set of pre-emphasis values can be
* used on all DDI platforms. Should that change we need to
* rethink this code.
*/
u8 intel_ddi_dp_pre_emphasis_max(struct intel_encoder *encoder, u8 voltage_swing)
{
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
return DP_TRAIN_PRE_EMPH_LEVEL_3;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
return DP_TRAIN_PRE_EMPH_LEVEL_2;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
return DP_TRAIN_PRE_EMPH_LEVEL_1;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
default:
return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
}
static void cnl_ddi_vswing_program(struct intel_encoder *encoder,
int level, enum intel_output_type type)
{


@ -1202,7 +1202,7 @@ void assert_panel_unlocked(struct drm_i915_private *dev_priv, enum pipe pipe)
{
i915_reg_t pp_reg;
u32 val;
enum pipe panel_pipe = PIPE_A;
enum pipe panel_pipe = INVALID_PIPE;
bool locked = true;
if (WARN_ON(HAS_DDI(dev_priv)))
@ -1214,18 +1214,35 @@ void assert_panel_unlocked(struct drm_i915_private *dev_priv, enum pipe pipe)
pp_reg = PP_CONTROL(0);
port_sel = I915_READ(PP_ON_DELAYS(0)) & PANEL_PORT_SELECT_MASK;
if (port_sel == PANEL_PORT_SELECT_LVDS &&
I915_READ(PCH_LVDS) & LVDS_PIPEB_SELECT)
panel_pipe = PIPE_B;
/* XXX: else fix for eDP */
switch (port_sel) {
case PANEL_PORT_SELECT_LVDS:
intel_lvds_port_enabled(dev_priv, PCH_LVDS, &panel_pipe);
break;
case PANEL_PORT_SELECT_DPA:
intel_dp_port_enabled(dev_priv, DP_A, PORT_A, &panel_pipe);
break;
case PANEL_PORT_SELECT_DPC:
intel_dp_port_enabled(dev_priv, PCH_DP_C, PORT_C, &panel_pipe);
break;
case PANEL_PORT_SELECT_DPD:
intel_dp_port_enabled(dev_priv, PCH_DP_D, PORT_D, &panel_pipe);
break;
default:
MISSING_CASE(port_sel);
break;
}
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
/* presumably write lock depends on pipe, not port select */
pp_reg = PP_CONTROL(pipe);
panel_pipe = pipe;
} else {
u32 port_sel;
pp_reg = PP_CONTROL(0);
if (I915_READ(LVDS) & LVDS_PIPEB_SELECT)
panel_pipe = PIPE_B;
port_sel = I915_READ(PP_ON_DELAYS(0)) & PANEL_PORT_SELECT_MASK;
WARN_ON(port_sel != PANEL_PORT_SELECT_LVDS);
intel_lvds_port_enabled(dev_priv, LVDS, &panel_pipe);
}
val = I915_READ(pp_reg);
@ -1267,7 +1284,10 @@ void assert_pipe(struct drm_i915_private *dev_priv,
static void assert_plane(struct intel_plane *plane, bool state)
{
bool cur_state = plane->get_hw_state(plane);
enum pipe pipe;
bool cur_state;
cur_state = plane->get_hw_state(plane, &pipe);
I915_STATE_WARN(cur_state != state,
"%s assertion failure (expected %s, current %s)\n",
@ -1305,125 +1325,64 @@ void assert_pch_transcoder_disabled(struct drm_i915_private *dev_priv,
pipe_name(pipe));
}
static bool dp_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 port_sel, u32 val)
{
if ((val & DP_PORT_EN) == 0)
return false;
if (HAS_PCH_CPT(dev_priv)) {
u32 trans_dp_ctl = I915_READ(TRANS_DP_CTL(pipe));
if ((trans_dp_ctl & TRANS_DP_PORT_SEL_MASK) != port_sel)
return false;
} else if (IS_CHERRYVIEW(dev_priv)) {
if ((val & DP_PIPE_MASK_CHV) != DP_PIPE_SELECT_CHV(pipe))
return false;
} else {
if ((val & DP_PIPE_MASK) != (pipe << 30))
return false;
}
return true;
}
static bool hdmi_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 val)
{
if ((val & SDVO_ENABLE) == 0)
return false;
if (HAS_PCH_CPT(dev_priv)) {
if ((val & SDVO_PIPE_SEL_MASK_CPT) != SDVO_PIPE_SEL_CPT(pipe))
return false;
} else if (IS_CHERRYVIEW(dev_priv)) {
if ((val & SDVO_PIPE_SEL_MASK_CHV) != SDVO_PIPE_SEL_CHV(pipe))
return false;
} else {
if ((val & SDVO_PIPE_SEL_MASK) != SDVO_PIPE_SEL(pipe))
return false;
}
return true;
}
static bool lvds_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 val)
{
if ((val & LVDS_PORT_EN) == 0)
return false;
if (HAS_PCH_CPT(dev_priv)) {
if ((val & PORT_TRANS_SEL_MASK) != PORT_TRANS_SEL_CPT(pipe))
return false;
} else {
if ((val & LVDS_PIPE_MASK) != LVDS_PIPE(pipe))
return false;
}
return true;
}
static bool adpa_pipe_enabled(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 val)
{
if ((val & ADPA_DAC_ENABLE) == 0)
return false;
if (HAS_PCH_CPT(dev_priv)) {
if ((val & PORT_TRANS_SEL_MASK) != PORT_TRANS_SEL_CPT(pipe))
return false;
} else {
if ((val & ADPA_PIPE_SELECT_MASK) != ADPA_PIPE_SELECT(pipe))
return false;
}
return true;
}
static void assert_pch_dp_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe, i915_reg_t reg,
u32 port_sel)
enum pipe pipe, enum port port,
i915_reg_t dp_reg)
{
u32 val = I915_READ(reg);
I915_STATE_WARN(dp_pipe_enabled(dev_priv, pipe, port_sel, val),
"PCH DP (0x%08x) enabled on transcoder %c, should be disabled\n",
i915_mmio_reg_offset(reg), pipe_name(pipe));
enum pipe port_pipe;
bool state;
I915_STATE_WARN(HAS_PCH_IBX(dev_priv) && (val & DP_PORT_EN) == 0
&& (val & DP_PIPEB_SELECT),
"IBX PCH dp port still using transcoder B\n");
state = intel_dp_port_enabled(dev_priv, dp_reg, port, &port_pipe);
I915_STATE_WARN(state && port_pipe == pipe,
"PCH DP %c enabled on transcoder %c, should be disabled\n",
port_name(port), pipe_name(pipe));
I915_STATE_WARN(HAS_PCH_IBX(dev_priv) && !state && port_pipe == PIPE_B,
"IBX PCH DP %c still using transcoder B\n",
port_name(port));
}
static void assert_pch_hdmi_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe, i915_reg_t reg)
enum pipe pipe, enum port port,
i915_reg_t hdmi_reg)
{
u32 val = I915_READ(reg);
I915_STATE_WARN(hdmi_pipe_enabled(dev_priv, pipe, val),
"PCH HDMI (0x%08x) enabled on transcoder %c, should be disabled\n",
i915_mmio_reg_offset(reg), pipe_name(pipe));
enum pipe port_pipe;
bool state;
I915_STATE_WARN(HAS_PCH_IBX(dev_priv) && (val & SDVO_ENABLE) == 0
&& (val & SDVO_PIPE_B_SELECT),
"IBX PCH hdmi port still using transcoder B\n");
state = intel_sdvo_port_enabled(dev_priv, hdmi_reg, &port_pipe);
I915_STATE_WARN(state && port_pipe == pipe,
"PCH HDMI %c enabled on transcoder %c, should be disabled\n",
port_name(port), pipe_name(pipe));
I915_STATE_WARN(HAS_PCH_IBX(dev_priv) && !state && port_pipe == PIPE_B,
"IBX PCH HDMI %c still using transcoder B\n",
port_name(port));
}
static void assert_pch_ports_disabled(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
u32 val;
enum pipe port_pipe;
assert_pch_dp_disabled(dev_priv, pipe, PCH_DP_B, TRANS_DP_PORT_SEL_B);
assert_pch_dp_disabled(dev_priv, pipe, PCH_DP_C, TRANS_DP_PORT_SEL_C);
assert_pch_dp_disabled(dev_priv, pipe, PCH_DP_D, TRANS_DP_PORT_SEL_D);
assert_pch_dp_disabled(dev_priv, pipe, PORT_B, PCH_DP_B);
assert_pch_dp_disabled(dev_priv, pipe, PORT_C, PCH_DP_C);
assert_pch_dp_disabled(dev_priv, pipe, PORT_D, PCH_DP_D);
val = I915_READ(PCH_ADPA);
I915_STATE_WARN(adpa_pipe_enabled(dev_priv, pipe, val),
"PCH VGA enabled on transcoder %c, should be disabled\n",
pipe_name(pipe));
I915_STATE_WARN(intel_crt_port_enabled(dev_priv, PCH_ADPA, &port_pipe) &&
port_pipe == pipe,
"PCH VGA enabled on transcoder %c, should be disabled\n",
pipe_name(pipe));
val = I915_READ(PCH_LVDS);
I915_STATE_WARN(lvds_pipe_enabled(dev_priv, pipe, val),
"PCH LVDS enabled on transcoder %c, should be disabled\n",
pipe_name(pipe));
I915_STATE_WARN(intel_lvds_port_enabled(dev_priv, PCH_LVDS, &port_pipe) &&
port_pipe == pipe,
"PCH LVDS enabled on transcoder %c, should be disabled\n",
pipe_name(pipe));
assert_pch_hdmi_disabled(dev_priv, pipe, PCH_HDMIB);
assert_pch_hdmi_disabled(dev_priv, pipe, PCH_HDMIC);
assert_pch_hdmi_disabled(dev_priv, pipe, PCH_HDMID);
assert_pch_hdmi_disabled(dev_priv, pipe, PORT_B, PCH_HDMIB);
assert_pch_hdmi_disabled(dev_priv, pipe, PORT_C, PCH_HDMIC);
assert_pch_hdmi_disabled(dev_priv, pipe, PORT_D, PCH_HDMID);
}
static void _vlv_enable_pll(struct intel_crtc *crtc,
@ -2521,6 +2480,7 @@ intel_fill_fb_info(struct drm_i915_private *dev_priv,
{
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
struct intel_rotation_info *rot_info = &intel_fb->rot_info;
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
u32 gtt_offset_rotated = 0;
unsigned int max_size = 0;
int i, num_planes = fb->format->num_planes;
@ -2585,7 +2545,7 @@ intel_fill_fb_info(struct drm_i915_private *dev_priv,
* fb layout agrees with the fence layout. We already check that the
* fb stride matches the fence stride elsewhere.
*/
if (i == 0 && i915_gem_object_is_tiled(intel_fb->obj) &&
if (i == 0 && i915_gem_object_is_tiled(obj) &&
(x + width) * cpp > fb->pitches[i]) {
DRM_DEBUG_KMS("bad fb plane %d offset: 0x%x\n",
i, fb->offsets[i]);
@ -2670,9 +2630,9 @@ intel_fill_fb_info(struct drm_i915_private *dev_priv,
max_size = max(max_size, offset + size);
}
if (max_size * tile_size > intel_fb->obj->base.size) {
if (max_size * tile_size > obj->base.size) {
DRM_DEBUG_KMS("fb too big for bo (need %u bytes, have %zu bytes)\n",
max_size * tile_size, intel_fb->obj->base.size);
max_size * tile_size, obj->base.size);
return -EINVAL;
}
@ -3430,24 +3390,33 @@ static void i9xx_disable_plane(struct intel_plane *plane,
spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
}
static bool i9xx_plane_get_hw_state(struct intel_plane *plane)
static bool i9xx_plane_get_hw_state(struct intel_plane *plane,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum intel_display_power_domain power_domain;
enum i9xx_plane_id i9xx_plane = plane->i9xx_plane;
enum pipe pipe = plane->pipe;
bool ret;
u32 val;
/*
* Not 100% correct for planes that can move between pipes,
* but that's only the case for gen2-4 which don't have any
* display power wells.
*/
power_domain = POWER_DOMAIN_PIPE(pipe);
power_domain = POWER_DOMAIN_PIPE(plane->pipe);
if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
ret = I915_READ(DSPCNTR(i9xx_plane)) & DISPLAY_PLANE_ENABLE;
val = I915_READ(DSPCNTR(i9xx_plane));
ret = val & DISPLAY_PLANE_ENABLE;
if (INTEL_GEN(dev_priv) >= 5)
*pipe = plane->pipe;
else
*pipe = (val & DISPPLANE_SEL_PIPE_MASK) >>
DISPPLANE_SEL_PIPE_SHIFT;
intel_display_power_put(dev_priv, power_domain);
@ -4631,20 +4600,33 @@ static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
}
}
/* Return which DP Port should be selected for Transcoder DP control */
static enum port
intel_trans_dp_port_sel(struct intel_crtc *crtc)
/*
* Finds the encoder associated with the given CRTC. This can only be
* used when we know that the CRTC isn't feeding multiple encoders!
*/
static struct intel_encoder *
intel_get_crtc_new_encoder(const struct intel_atomic_state *state,
const struct intel_crtc_state *crtc_state)
{
struct drm_device *dev = crtc->base.dev;
struct intel_encoder *encoder;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_connector_state *connector_state;
const struct drm_connector *connector;
struct intel_encoder *encoder = NULL;
int num_encoders = 0;
int i;
for_each_encoder_on_crtc(dev, &crtc->base, encoder) {
if (encoder->type == INTEL_OUTPUT_DP ||
encoder->type == INTEL_OUTPUT_EDP)
return encoder->port;
for_each_new_connector_in_state(&state->base, connector, connector_state, i) {
if (connector_state->crtc != &crtc->base)
continue;
encoder = to_intel_encoder(connector_state->best_encoder);
num_encoders++;
}
return -1;
WARN(num_encoders != 1, "%d encoders for pipe %c\n",
num_encoders, pipe_name(crtc->pipe));
return encoder;
}
/*
@ -4655,7 +4637,8 @@ intel_trans_dp_port_sel(struct intel_crtc *crtc)
* - DP transcoding bits
* - transcoder
*/
static void ironlake_pch_enable(const struct intel_crtc_state *crtc_state)
static void ironlake_pch_enable(const struct intel_atomic_state *state,
const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
@ -4714,6 +4697,8 @@ static void ironlake_pch_enable(const struct intel_crtc_state *crtc_state)
&crtc_state->base.adjusted_mode;
u32 bpc = (I915_READ(PIPECONF(pipe)) & PIPECONF_BPC_MASK) >> 5;
i915_reg_t reg = TRANS_DP_CTL(pipe);
enum port port;
temp = I915_READ(reg);
temp &= ~(TRANS_DP_PORT_SEL_MASK |
TRANS_DP_SYNC_MASK |
@ -4726,19 +4711,9 @@ static void ironlake_pch_enable(const struct intel_crtc_state *crtc_state)
if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
temp |= TRANS_DP_VSYNC_ACTIVE_HIGH;
switch (intel_trans_dp_port_sel(crtc)) {
case PORT_B:
temp |= TRANS_DP_PORT_SEL_B;
break;
case PORT_C:
temp |= TRANS_DP_PORT_SEL_C;
break;
case PORT_D:
temp |= TRANS_DP_PORT_SEL_D;
break;
default:
BUG();
}
port = intel_get_crtc_new_encoder(state, crtc_state)->port;
WARN_ON(port < PORT_B || port > PORT_D);
temp |= TRANS_DP_PORT_SEL(port);
I915_WRITE(reg, temp);
}
@ -4746,7 +4721,8 @@ static void ironlake_pch_enable(const struct intel_crtc_state *crtc_state)
ironlake_enable_pch_transcoder(dev_priv, pipe);
}
static void lpt_pch_enable(const struct intel_crtc_state *crtc_state)
static void lpt_pch_enable(const struct intel_atomic_state *state,
const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
@ -4776,6 +4752,39 @@ static void cpt_verify_modeset(struct drm_device *dev, int pipe)
}
}
/*
* The hardware phase 0.0 refers to the center of the pixel.
* We want to start from the top/left edge which is phase
* -0.5. That matches how the hardware calculates the scaling
* factors (from top-left of the first pixel to bottom-right
* of the last pixel, as opposed to the pixel centers).
*
* For 4:2:0 subsampled chroma planes we obviously have to
* adjust that so that the chroma sample position lands in
* the right spot.
*
* Note that for packed YCbCr 4:2:2 formats there is no way to
* control chroma siting. The hardware simply replicates the
* chroma samples for both of the luma samples, and thus we don't
* actually get the expected MPEG2 chroma siting convention :(
* The same behaviour is observed on pre-SKL platforms as well.
*/
u16 skl_scaler_calc_phase(int sub, bool chroma_cosited)
{
int phase = -0x8000;
u16 trip = 0;
if (chroma_cosited)
phase += (sub - 1) * 0x8000 / sub;
if (phase < 0)
phase = 0x10000 + phase;
else
trip = PS_PHASE_TRIP;
return ((phase >> 2) & PS_PHASE_MASK) | trip;
}
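Two worked examples of the helper above, computed by hand from the code as written; they assume PS_PHASE_MASK = 0x7fff << 1 and PS_PHASE_TRIP = 1 << 0 as defined earlier in the register header.

/*
 * skl_scaler_calc_phase(1, false):  luma/RGB plane, initial phase -0.5.
 *     phase = -0x8000 -> 0x10000 - 0x8000 = 0x8000, trip = 0,
 *     result = (0x8000 >> 2) & 0xfffe = 0x2000.
 *
 * skl_scaler_calc_phase(2, true):   4:2:0 chroma plane, cosited siting.
 *     phase = -0x8000 + 0x8000 / 2 = -0x4000 -> 0xc000, trip = 0,
 *     result = (0xc000 >> 2) & 0xfffe = 0x3000.
 */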
static int
skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach,
unsigned int scaler_user, int *scaler_id,
@ -4975,14 +4984,22 @@ static void skylake_pfit_enable(struct intel_crtc *crtc)
&crtc->config->scaler_state;
if (crtc->config->pch_pfit.enabled) {
u16 uv_rgb_hphase, uv_rgb_vphase;
int id;
if (WARN_ON(crtc->config->scaler_state.scaler_id < 0))
return;
uv_rgb_hphase = skl_scaler_calc_phase(1, false);
uv_rgb_vphase = skl_scaler_calc_phase(1, false);
id = scaler_state->scaler_id;
I915_WRITE(SKL_PS_CTRL(pipe, id), PS_SCALER_EN |
PS_FILTER_MEDIUM | scaler_state->scalers[id].mode);
I915_WRITE_FW(SKL_PS_VPHASE(pipe, id),
PS_Y_PHASE(0) | PS_UV_RGB_PHASE(uv_rgb_vphase));
I915_WRITE_FW(SKL_PS_HPHASE(pipe, id),
PS_Y_PHASE(0) | PS_UV_RGB_PHASE(uv_rgb_hphase));
I915_WRITE(SKL_PS_WIN_POS(pipe, id), crtc->config->pch_pfit.pos);
I915_WRITE(SKL_PS_WIN_SZ(pipe, id), crtc->config->pch_pfit.size);
}
@ -5501,10 +5518,8 @@ static void ironlake_crtc_enable(struct intel_crtc_state *pipe_config,
*
* Spurious PCH underruns also occur during PCH enabling.
*/
if (intel_crtc->config->has_pch_encoder || IS_GEN5(dev_priv))
intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
if (intel_crtc->config->has_pch_encoder)
intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, false);
intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, false);
if (intel_crtc->config->has_pch_encoder)
intel_prepare_shared_dpll(intel_crtc);
@ -5549,7 +5564,7 @@ static void ironlake_crtc_enable(struct intel_crtc_state *pipe_config,
intel_enable_pipe(pipe_config);
if (intel_crtc->config->has_pch_encoder)
ironlake_pch_enable(pipe_config);
ironlake_pch_enable(old_intel_state, pipe_config);
assert_vblank_disabled(crtc);
drm_crtc_vblank_on(crtc);
@ -5559,9 +5574,16 @@ static void ironlake_crtc_enable(struct intel_crtc_state *pipe_config,
if (HAS_PCH_CPT(dev_priv))
cpt_verify_modeset(dev, intel_crtc->pipe);
/* Must wait for vblank to avoid spurious PCH FIFO underruns */
if (intel_crtc->config->has_pch_encoder)
/*
* Must wait for vblank to avoid spurious PCH FIFO underruns.
* And a second vblank wait is needed at least on ILK with
* some interlaced HDMI modes. Let's do the double wait always
* in case there are more corner cases we don't know about.
*/
if (intel_crtc->config->has_pch_encoder) {
intel_wait_for_vblank(dev_priv, pipe);
intel_wait_for_vblank(dev_priv, pipe);
}
intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, true);
intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, true);
}
@ -5623,6 +5645,11 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
if (INTEL_GEN(dev_priv) >= 11)
icl_map_plls_to_ports(crtc, pipe_config, old_state);
intel_encoders_pre_enable(crtc, pipe_config, old_state);
if (!transcoder_is_dsi(cpu_transcoder))
intel_ddi_enable_pipe_clock(pipe_config);
if (intel_crtc_has_dp_encoder(intel_crtc->config))
intel_dp_set_m_n(intel_crtc, M1_N1);
@ -5651,11 +5678,6 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
intel_crtc->active = true;
intel_encoders_pre_enable(crtc, pipe_config, old_state);
if (!transcoder_is_dsi(cpu_transcoder))
intel_ddi_enable_pipe_clock(pipe_config);
/* Display WA #1180: WaDisableScalarClockGating: glk, cnl */
psl_clkgate_wa = (IS_GEMINILAKE(dev_priv) || IS_CANNONLAKE(dev_priv)) &&
intel_crtc->config->pch_pfit.enabled;
@ -5688,7 +5710,7 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
intel_enable_pipe(pipe_config);
if (intel_crtc->config->has_pch_encoder)
lpt_pch_enable(pipe_config);
lpt_pch_enable(old_intel_state, pipe_config);
if (intel_crtc_has_type(intel_crtc->config, INTEL_OUTPUT_DP_MST))
intel_ddi_set_vc_payload_alloc(pipe_config, true);
@ -5741,10 +5763,8 @@ static void ironlake_crtc_disable(struct intel_crtc_state *old_crtc_state,
* pipe is already disabled, but FDI RX/TX is still enabled.
* Happens at least with VGA+HDMI cloning. Suppress them.
*/
if (intel_crtc->config->has_pch_encoder) {
intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, false);
}
intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, false);
intel_encoders_disable(crtc, old_crtc_state, old_state);
@ -5849,6 +5869,22 @@ static void i9xx_pfit_enable(struct intel_crtc *crtc)
I915_WRITE(BCLRPAT(crtc->pipe), 0);
}
bool intel_port_is_tc(struct drm_i915_private *dev_priv, enum port port)
{
if (IS_ICELAKE(dev_priv))
return port >= PORT_C && port <= PORT_F;
return false;
}
enum tc_port intel_port_to_tc(struct drm_i915_private *dev_priv, enum port port)
{
if (!intel_port_is_tc(dev_priv, port))
return PORT_TC_NONE;
return port - PORT_C;
}
enum intel_display_power_domain intel_port_to_power_domain(enum port port)
{
switch (port) {
@ -7675,16 +7711,18 @@ i9xx_get_initial_plane_config(struct intel_crtc *crtc,
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_plane *plane = to_intel_plane(crtc->base.primary);
enum i9xx_plane_id i9xx_plane = plane->i9xx_plane;
enum pipe pipe = crtc->pipe;
enum pipe pipe;
u32 val, base, offset;
int fourcc, pixel_format;
unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
if (!plane->get_hw_state(plane))
if (!plane->get_hw_state(plane, &pipe))
return;
WARN_ON(pipe != crtc->pipe);
intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);
if (!intel_fb) {
DRM_DEBUG_KMS("failed to alloc fb\n");
@ -8705,16 +8743,18 @@ skylake_get_initial_plane_config(struct intel_crtc *crtc,
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_plane *plane = to_intel_plane(crtc->base.primary);
enum plane_id plane_id = plane->id;
enum pipe pipe = crtc->pipe;
enum pipe pipe;
u32 val, base, offset, stride_mult, tiling, alpha;
int fourcc, pixel_format;
unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
if (!plane->get_hw_state(plane))
if (!plane->get_hw_state(plane, &pipe))
return;
WARN_ON(pipe != crtc->pipe);
intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);
if (!intel_fb) {
DRM_DEBUG_KMS("failed to alloc fb\n");
@ -9142,9 +9182,12 @@ void hsw_disable_pc8(struct drm_i915_private *dev_priv)
static int haswell_crtc_compute_clock(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state)
{
struct intel_atomic_state *state =
to_intel_atomic_state(crtc_state->base.state);
if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DSI)) {
struct intel_encoder *encoder =
intel_ddi_get_crtc_new_encoder(crtc_state);
intel_get_crtc_new_encoder(state, crtc_state);
if (!intel_get_shared_dpll(crtc, crtc_state, encoder)) {
DRM_DEBUG_DRIVER("failed to find PLL for pipe %c\n",
@ -9692,7 +9735,8 @@ static void i845_disable_cursor(struct intel_plane *plane,
i845_update_cursor(plane, NULL, NULL);
}
static bool i845_cursor_get_hw_state(struct intel_plane *plane)
static bool i845_cursor_get_hw_state(struct intel_plane *plane,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum intel_display_power_domain power_domain;
@ -9704,6 +9748,8 @@ static bool i845_cursor_get_hw_state(struct intel_plane *plane)
ret = I915_READ(CURCNTR(PIPE_A)) & CURSOR_ENABLE;
*pipe = PIPE_A;
intel_display_power_put(dev_priv, power_domain);
return ret;
@ -9715,25 +9761,30 @@ static u32 i9xx_cursor_ctl(const struct intel_crtc_state *crtc_state,
struct drm_i915_private *dev_priv =
to_i915(plane_state->base.plane->dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
u32 cntl;
u32 cntl = 0;
cntl = MCURSOR_GAMMA_ENABLE;
if (IS_GEN6(dev_priv) || IS_IVYBRIDGE(dev_priv))
cntl |= MCURSOR_TRICKLE_FEED_DISABLE;
if (HAS_DDI(dev_priv))
cntl |= CURSOR_PIPE_CSC_ENABLE;
if (INTEL_GEN(dev_priv) <= 10) {
cntl |= MCURSOR_GAMMA_ENABLE;
if (HAS_DDI(dev_priv))
cntl |= MCURSOR_PIPE_CSC_ENABLE;
}
if (INTEL_GEN(dev_priv) < 5 && !IS_G4X(dev_priv))
cntl |= MCURSOR_PIPE_SELECT(crtc->pipe);
switch (plane_state->base.crtc_w) {
case 64:
cntl |= CURSOR_MODE_64_ARGB_AX;
cntl |= MCURSOR_MODE_64_ARGB_AX;
break;
case 128:
cntl |= CURSOR_MODE_128_ARGB_AX;
cntl |= MCURSOR_MODE_128_ARGB_AX;
break;
case 256:
cntl |= CURSOR_MODE_256_ARGB_AX;
cntl |= MCURSOR_MODE_256_ARGB_AX;
break;
default:
MISSING_CASE(plane_state->base.crtc_w);
@ -9741,7 +9792,7 @@ static u32 i9xx_cursor_ctl(const struct intel_crtc_state *crtc_state,
}
if (plane_state->base.rotation & DRM_MODE_ROTATE_180)
cntl |= CURSOR_ROTATE_180;
cntl |= MCURSOR_ROTATE_180;
return cntl;
}
@ -9903,23 +9954,32 @@ static void i9xx_disable_cursor(struct intel_plane *plane,
i9xx_update_cursor(plane, NULL, NULL);
}
static bool i9xx_cursor_get_hw_state(struct intel_plane *plane)
static bool i9xx_cursor_get_hw_state(struct intel_plane *plane,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum intel_display_power_domain power_domain;
enum pipe pipe = plane->pipe;
bool ret;
u32 val;
/*
* Not 100% correct for planes that can move between pipes,
* but that's only the case for gen2-3 which don't have any
* display power wells.
*/
power_domain = POWER_DOMAIN_PIPE(pipe);
power_domain = POWER_DOMAIN_PIPE(plane->pipe);
if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
ret = I915_READ(CURCNTR(pipe)) & CURSOR_MODE;
val = I915_READ(CURCNTR(plane->pipe));
ret = val & MCURSOR_MODE;
if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv))
*pipe = plane->pipe;
else
*pipe = (val & MCURSOR_PIPE_SELECT_MASK) >>
MCURSOR_PIPE_SELECT_SHIFT;
intel_display_power_put(dev_priv, power_domain);
@ -14124,14 +14184,15 @@ static void intel_setup_outputs(struct drm_i915_private *dev_priv)
static void intel_user_framebuffer_destroy(struct drm_framebuffer *fb)
{
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
drm_framebuffer_cleanup(fb);
i915_gem_object_lock(intel_fb->obj);
WARN_ON(!intel_fb->obj->framebuffer_references--);
i915_gem_object_unlock(intel_fb->obj);
i915_gem_object_lock(obj);
WARN_ON(!obj->framebuffer_references--);
i915_gem_object_unlock(obj);
i915_gem_object_put(intel_fb->obj);
i915_gem_object_put(obj);
kfree(intel_fb);
}
@ -14140,8 +14201,7 @@ static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb,
struct drm_file *file,
unsigned int *handle)
{
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
struct drm_i915_gem_object *obj = intel_fb->obj;
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
if (obj->userptr.mm) {
DRM_DEBUG("attempting to use a userptr for a framebuffer, denied\n");
@ -14411,9 +14471,9 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
i, fb->pitches[i], stride_alignment);
goto err;
}
}
intel_fb->obj = obj;
fb->obj[i] = &obj->base;
}
ret = intel_fill_fb_info(dev_priv, fb);
if (ret)
@ -15095,8 +15155,8 @@ void i830_disable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe)
WARN_ON(I915_READ(DSPCNTR(PLANE_A)) & DISPLAY_PLANE_ENABLE);
WARN_ON(I915_READ(DSPCNTR(PLANE_B)) & DISPLAY_PLANE_ENABLE);
WARN_ON(I915_READ(DSPCNTR(PLANE_C)) & DISPLAY_PLANE_ENABLE);
WARN_ON(I915_READ(CURCNTR(PIPE_A)) & CURSOR_MODE);
WARN_ON(I915_READ(CURCNTR(PIPE_B)) & CURSOR_MODE);
WARN_ON(I915_READ(CURCNTR(PIPE_A)) & MCURSOR_MODE);
WARN_ON(I915_READ(CURCNTR(PIPE_B)) & MCURSOR_MODE);
I915_WRITE(PIPECONF(pipe), 0);
POSTING_READ(PIPECONF(pipe));
@ -15110,12 +15170,12 @@ void i830_disable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe)
static bool intel_plane_mapping_ok(struct intel_crtc *crtc,
struct intel_plane *plane)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum i9xx_plane_id i9xx_plane = plane->i9xx_plane;
u32 val = I915_READ(DSPCNTR(i9xx_plane));
enum pipe pipe;
return (val & DISPLAY_PLANE_ENABLE) == 0 ||
(val & DISPPLANE_SEL_PIPE_MASK) == DISPPLANE_SEL_PIPE(crtc->pipe);
if (!plane->get_hw_state(plane, &pipe))
return true;
return pipe == crtc->pipe;
}
static void
@ -15274,6 +15334,9 @@ static void intel_sanitize_encoder(struct intel_encoder *encoder)
connector->base.dpms = DRM_MODE_DPMS_OFF;
connector->base.encoder = NULL;
}
/* notify opregion of the sanitized encoder state */
intel_opregion_notify_encoder(encoder, connector && has_active_crtc);
}
void i915_redisable_vga_power_on(struct drm_i915_private *dev_priv)
@ -15314,7 +15377,10 @@ static void readout_plane_state(struct intel_crtc *crtc)
for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
struct intel_plane_state *plane_state =
to_intel_plane_state(plane->base.state);
bool visible = plane->get_hw_state(plane);
enum pipe pipe;
bool visible;
visible = plane->get_hw_state(plane, &pipe);
intel_set_plane_visible(crtc_state, plane_state, visible);
}


@ -126,6 +126,17 @@ enum port {
#define port_name(p) ((p) + 'A')
enum tc_port {
PORT_TC_NONE = -1,
PORT_TC1 = 0,
PORT_TC2,
PORT_TC3,
PORT_TC4,
I915_MAX_TC_PORTS
};
enum dpio_channel {
DPIO_CH0,
DPIO_CH1


@ -56,7 +56,7 @@ struct dp_link_dpll {
struct dpll dpll;
};
static const struct dp_link_dpll gen4_dpll[] = {
static const struct dp_link_dpll g4x_dpll[] = {
{ 162000,
{ .p1 = 2, .p2 = 10, .n = 2, .m1 = 23, .m2 = 8 } },
{ 270000,
@ -513,7 +513,7 @@ vlv_power_sequencer_kick(struct intel_dp *intel_dp)
uint32_t DP;
if (WARN(I915_READ(intel_dp->output_reg) & DP_PORT_EN,
"skipping pipe %c power seqeuncer kick due to port %c being active\n",
"skipping pipe %c power sequencer kick due to port %c being active\n",
pipe_name(pipe), port_name(intel_dig_port->base.port)))
return;
@ -529,9 +529,9 @@ vlv_power_sequencer_kick(struct intel_dp *intel_dp)
DP |= DP_LINK_TRAIN_PAT_1;
if (IS_CHERRYVIEW(dev_priv))
DP |= DP_PIPE_SELECT_CHV(pipe);
else if (pipe == PIPE_B)
DP |= DP_PIPEB_SELECT;
DP |= DP_PIPE_SEL_CHV(pipe);
else
DP |= DP_PIPE_SEL(pipe);
pll_enabled = I915_READ(DPLL(pipe)) & DPLL_VCO_ENABLE;
@ -554,7 +554,7 @@ vlv_power_sequencer_kick(struct intel_dp *intel_dp)
/*
* Similar magic as in intel_dp_enable_port().
* We _must_ do this port enable + disable trick
* to make this power seqeuencer lock onto the port.
* to make this power sequencer lock onto the port.
* Otherwise even VDD force bit won't work.
*/
I915_WRITE(intel_dp->output_reg, DP);
@ -1550,8 +1550,8 @@ intel_dp_set_clock(struct intel_encoder *encoder,
int i, count = 0;
if (IS_G4X(dev_priv)) {
divisor = gen4_dpll;
count = ARRAY_SIZE(gen4_dpll);
divisor = g4x_dpll;
count = ARRAY_SIZE(g4x_dpll);
} else if (HAS_PCH_SPLIT(dev_priv)) {
divisor = pch_dpll;
count = ARRAY_SIZE(pch_dpll);
@ -1964,7 +1964,7 @@ static void intel_dp_prepare(struct intel_encoder *encoder,
/* Split out the IBX/CPU vs CPT settings */
if (IS_GEN7(dev_priv) && port == PORT_A) {
if (IS_IVYBRIDGE(dev_priv) && port == PORT_A) {
if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)
intel_dp->DP |= DP_SYNC_HS_HIGH;
if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
@ -1974,7 +1974,7 @@ static void intel_dp_prepare(struct intel_encoder *encoder,
if (drm_dp_enhanced_frame_cap(intel_dp->dpcd))
intel_dp->DP |= DP_ENHANCED_FRAMING;
intel_dp->DP |= crtc->pipe << 29;
intel_dp->DP |= DP_PIPE_SEL_IVB(crtc->pipe);
} else if (HAS_PCH_CPT(dev_priv) && port != PORT_A) {
u32 trans_dp;
@ -2000,9 +2000,9 @@ static void intel_dp_prepare(struct intel_encoder *encoder,
intel_dp->DP |= DP_ENHANCED_FRAMING;
if (IS_CHERRYVIEW(dev_priv))
intel_dp->DP |= DP_PIPE_SELECT_CHV(crtc->pipe);
else if (crtc->pipe == PIPE_B)
intel_dp->DP |= DP_PIPEB_SELECT;
intel_dp->DP |= DP_PIPE_SEL_CHV(crtc->pipe);
else
intel_dp->DP |= DP_PIPE_SEL(crtc->pipe);
}
}
@ -2624,52 +2624,66 @@ void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
mode == DRM_MODE_DPMS_ON ? "enable" : "disable");
}
static bool cpt_dp_port_selected(struct drm_i915_private *dev_priv,
enum port port, enum pipe *pipe)
{
enum pipe p;
for_each_pipe(dev_priv, p) {
u32 val = I915_READ(TRANS_DP_CTL(p));
if ((val & TRANS_DP_PORT_SEL_MASK) == TRANS_DP_PORT_SEL(port)) {
*pipe = p;
return true;
}
}
DRM_DEBUG_KMS("No pipe for DP port %c found\n", port_name(port));
/* must initialize pipe to something for the asserts */
*pipe = PIPE_A;
return false;
}
bool intel_dp_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t dp_reg, enum port port,
enum pipe *pipe)
{
bool ret;
u32 val;
val = I915_READ(dp_reg);
ret = val & DP_PORT_EN;
/* asserts want to know the pipe even if the port is disabled */
if (IS_IVYBRIDGE(dev_priv) && port == PORT_A)
*pipe = (val & DP_PIPE_SEL_MASK_IVB) >> DP_PIPE_SEL_SHIFT_IVB;
else if (HAS_PCH_CPT(dev_priv) && port != PORT_A)
ret &= cpt_dp_port_selected(dev_priv, port, pipe);
else if (IS_CHERRYVIEW(dev_priv))
*pipe = (val & DP_PIPE_SEL_MASK_CHV) >> DP_PIPE_SEL_SHIFT_CHV;
else
*pipe = (val & DP_PIPE_SEL_MASK) >> DP_PIPE_SEL_SHIFT;
return ret;
}
static bool intel_dp_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
enum port port = encoder->port;
u32 tmp;
bool ret;
if (!intel_display_power_get_if_enabled(dev_priv,
encoder->power_domain))
return false;
ret = false;
ret = intel_dp_port_enabled(dev_priv, intel_dp->output_reg,
encoder->port, pipe);
tmp = I915_READ(intel_dp->output_reg);
if (!(tmp & DP_PORT_EN))
goto out;
if (IS_GEN7(dev_priv) && port == PORT_A) {
*pipe = PORT_TO_PIPE_CPT(tmp);
} else if (HAS_PCH_CPT(dev_priv) && port != PORT_A) {
enum pipe p;
for_each_pipe(dev_priv, p) {
u32 trans_dp = I915_READ(TRANS_DP_CTL(p));
if (TRANS_DP_PIPE_TO_PORT(trans_dp) == port) {
*pipe = p;
ret = true;
goto out;
}
}
DRM_DEBUG_KMS("No pipe for dp port 0x%x found\n",
i915_mmio_reg_offset(intel_dp->output_reg));
} else if (IS_CHERRYVIEW(dev_priv)) {
*pipe = DP_PORT_TO_PIPE_CHV(tmp);
} else {
*pipe = PORT_TO_PIPE(tmp);
}
ret = true;
out:
intel_display_power_put(dev_priv, encoder->power_domain);
return ret;
@ -2883,7 +2897,7 @@ _intel_dp_set_link_train(struct intel_dp *intel_dp,
}
I915_WRITE(DP_TP_CTL(port), temp);
} else if ((IS_GEN7(dev_priv) && port == PORT_A) ||
} else if ((IS_IVYBRIDGE(dev_priv) && port == PORT_A) ||
(HAS_PCH_CPT(dev_priv) && port != PORT_A)) {
*DP &= ~DP_LINK_TRAIN_MASK_CPT;
@ -3041,11 +3055,11 @@ static void vlv_detach_power_sequencer(struct intel_dp *intel_dp)
edp_panel_vdd_off_sync(intel_dp);
/*
* VLV seems to get confused when multiple power seqeuencers
* VLV seems to get confused when multiple power sequencers
* have the same port selected (even if only one has power/vdd
* enabled). The failure manifests as vlv_wait_port_ready() failing
* CHV on the other hand doesn't seem to mind having the same port
* selected in multiple power seqeuencers, but let's clear the
* selected in multiple power sequencers, but let's clear the
* port select always when logically disconnecting a power sequencer
* from a port.
*/
@ -3195,14 +3209,14 @@ uint8_t
intel_dp_voltage_max(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp));
enum port port = dp_to_dig_port(intel_dp)->base.port;
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
enum port port = encoder->port;
if (INTEL_GEN(dev_priv) >= 9) {
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
if (HAS_DDI(dev_priv))
return intel_ddi_dp_voltage_max(encoder);
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
return DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
else if (IS_GEN7(dev_priv) && port == PORT_A)
else if (IS_IVYBRIDGE(dev_priv) && port == PORT_A)
return DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
else if (HAS_PCH_CPT(dev_priv) && port != PORT_A)
return DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
@ -3214,33 +3228,11 @@ uint8_t
intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing)
{
struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp));
enum port port = dp_to_dig_port(intel_dp)->base.port;
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
enum port port = encoder->port;
if (INTEL_GEN(dev_priv) >= 9) {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
return DP_TRAIN_PRE_EMPH_LEVEL_3;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
return DP_TRAIN_PRE_EMPH_LEVEL_2;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
return DP_TRAIN_PRE_EMPH_LEVEL_1;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
return DP_TRAIN_PRE_EMPH_LEVEL_0;
default:
return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
} else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
return DP_TRAIN_PRE_EMPH_LEVEL_3;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
return DP_TRAIN_PRE_EMPH_LEVEL_2;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
return DP_TRAIN_PRE_EMPH_LEVEL_1;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
default:
return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
if (HAS_DDI(dev_priv)) {
return intel_ddi_dp_pre_emphasis_max(encoder, voltage_swing);
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
@ -3253,7 +3245,7 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing)
default:
return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
} else if (IS_GEN7(dev_priv) && port == PORT_A) {
} else if (IS_IVYBRIDGE(dev_priv) && port == PORT_A) {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
return DP_TRAIN_PRE_EMPH_LEVEL_2;
@ -3448,7 +3440,7 @@ static uint32_t chv_signal_levels(struct intel_dp *intel_dp)
}
static uint32_t
gen4_signal_levels(uint8_t train_set)
g4x_signal_levels(uint8_t train_set)
{
uint32_t signal_levels = 0;
@ -3485,9 +3477,9 @@ gen4_signal_levels(uint8_t train_set)
return signal_levels;
}
/* Gen6's DP voltage swing and pre-emphasis control */
/* SNB CPU eDP voltage swing and pre-emphasis control */
static uint32_t
gen6_edp_signal_levels(uint8_t train_set)
snb_cpu_edp_signal_levels(uint8_t train_set)
{
int signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
DP_TRAIN_PRE_EMPHASIS_MASK);
@ -3513,9 +3505,9 @@ gen6_edp_signal_levels(uint8_t train_set)
}
}
/* Gen7's DP voltage swing and pre-emphasis control */
/* IVB CPU eDP voltage swing and pre-emphasis control */
static uint32_t
gen7_edp_signal_levels(uint8_t train_set)
ivb_cpu_edp_signal_levels(uint8_t train_set)
{
int signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
DP_TRAIN_PRE_EMPHASIS_MASK);
@ -3562,14 +3554,14 @@ intel_dp_set_signal_levels(struct intel_dp *intel_dp)
signal_levels = chv_signal_levels(intel_dp);
} else if (IS_VALLEYVIEW(dev_priv)) {
signal_levels = vlv_signal_levels(intel_dp);
} else if (IS_GEN7(dev_priv) && port == PORT_A) {
signal_levels = gen7_edp_signal_levels(train_set);
} else if (IS_IVYBRIDGE(dev_priv) && port == PORT_A) {
signal_levels = ivb_cpu_edp_signal_levels(train_set);
mask = EDP_LINK_TRAIN_VOL_EMP_MASK_IVB;
} else if (IS_GEN6(dev_priv) && port == PORT_A) {
signal_levels = gen6_edp_signal_levels(train_set);
signal_levels = snb_cpu_edp_signal_levels(train_set);
mask = EDP_LINK_TRAIN_VOL_EMP_MASK_SNB;
} else {
signal_levels = gen4_signal_levels(train_set);
signal_levels = g4x_signal_levels(train_set);
mask = DP_VOLTAGE_MASK | DP_PRE_EMPHASIS_MASK;
}
@ -3652,7 +3644,7 @@ intel_dp_link_down(struct intel_encoder *encoder,
DRM_DEBUG_KMS("\n");
if ((IS_GEN7(dev_priv) && port == PORT_A) ||
if ((IS_IVYBRIDGE(dev_priv) && port == PORT_A) ||
(HAS_PCH_CPT(dev_priv) && port != PORT_A)) {
DP &= ~DP_LINK_TRAIN_MASK_CPT;
DP |= DP_LINK_TRAIN_PAT_IDLE_CPT;
@ -3681,8 +3673,9 @@ intel_dp_link_down(struct intel_encoder *encoder,
intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false);
/* always enable with pattern 1 (as per spec) */
DP &= ~(DP_PIPEB_SELECT | DP_LINK_TRAIN_MASK);
DP |= DP_PORT_EN | DP_LINK_TRAIN_PAT_1;
DP &= ~(DP_PIPE_SEL_MASK | DP_LINK_TRAIN_MASK);
DP |= DP_PORT_EN | DP_PIPE_SEL(PIPE_A) |
DP_LINK_TRAIN_PAT_1;
I915_WRITE(intel_dp->output_reg, DP);
POSTING_READ(intel_dp->output_reg);
@ -3737,8 +3730,6 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
dev_priv->no_aux_handshake = intel_dp->dpcd[DP_MAX_DOWNSPREAD] &
DP_NO_AUX_HANDSHAKE_LINK_TRAINING;
intel_psr_init_dpcd(intel_dp);
/*
* Read the eDP display control registers.
*
@ -3754,6 +3745,12 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
DRM_DEBUG_KMS("eDP DPCD: %*ph\n", (int) sizeof(intel_dp->edp_dpcd),
intel_dp->edp_dpcd);
/*
* This has to be called after intel_dp->edp_dpcd is filled, PSR checks
* for SET_POWER_CAPABLE bit in intel_dp->edp_dpcd[1]
*/
intel_psr_init_dpcd(intel_dp);
/* Read the eDP 1.4+ supported link rates. */
if (intel_dp->edp_dpcd[0] >= DP_EDP_14) {
__le16 sink_rates[DP_MAX_SUPPORTED_RATES];
@ -5317,14 +5314,14 @@ static void intel_edp_panel_vdd_sanitize(struct intel_dp *intel_dp)
static enum pipe vlv_active_pipe(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp));
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
enum pipe pipe;
if ((intel_dp->DP & DP_PORT_EN) == 0)
return INVALID_PIPE;
if (intel_dp_port_enabled(dev_priv, intel_dp->output_reg,
encoder->port, &pipe))
return pipe;
if (IS_CHERRYVIEW(dev_priv))
return DP_PORT_TO_PIPE_CHV(intel_dp->DP);
else
return PORT_TO_PIPE(intel_dp->DP);
return INVALID_PIPE;
}
void intel_dp_encoder_reset(struct drm_encoder *encoder)
@ -5673,7 +5670,7 @@ intel_dp_init_panel_power_sequencer_registers(struct intel_dp *intel_dp,
/*
* On some VLV machines the BIOS can leave the VDD
* enabled even on power seqeuencers which aren't
* enabled even on power sequencers which aren't
* hooked up to any port. This would mess up the
* power domain tracking the first time we pick
* one of these power sequencers for use since
@ -5681,7 +5678,7 @@ intel_dp_init_panel_power_sequencer_registers(struct intel_dp *intel_dp,
* already on and therefore wouldn't grab the power
* domain reference. Disable VDD first to avoid this.
* This also avoids spuriously turning the VDD on as
* soon as the new power seqeuencer gets initialized.
* soon as the new power sequencer gets initialized.
*/
if (force_disable_vdd) {
u32 pp = ironlake_get_pp_control(intel_dp);
@ -5719,10 +5716,20 @@ intel_dp_init_panel_power_sequencer_registers(struct intel_dp *intel_dp,
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
port_sel = PANEL_PORT_SELECT_VLV(port);
} else if (HAS_PCH_IBX(dev_priv) || HAS_PCH_CPT(dev_priv)) {
if (port == PORT_A)
switch (port) {
case PORT_A:
port_sel = PANEL_PORT_SELECT_DPA;
else
break;
case PORT_C:
port_sel = PANEL_PORT_SELECT_DPC;
break;
case PORT_D:
port_sel = PANEL_PORT_SELECT_DPD;
break;
default:
MISSING_CASE(port);
break;
}
}
pp_on |= port_sel;


@ -2525,6 +2525,76 @@ static bool icl_calc_dpll_state(struct intel_crtc_state *crtc_state,
return true;
}
int icl_calc_dp_combo_pll_link(struct drm_i915_private *dev_priv,
uint32_t pll_id)
{
uint32_t cfgcr0, cfgcr1;
uint32_t pdiv, kdiv, qdiv_mode, qdiv_ratio, dco_integer, dco_fraction;
const struct skl_wrpll_params *params;
int index, n_entries, link_clock;
/* Read back values from DPLL CFGCR registers */
cfgcr0 = I915_READ(ICL_DPLL_CFGCR0(pll_id));
cfgcr1 = I915_READ(ICL_DPLL_CFGCR1(pll_id));
dco_integer = cfgcr0 & DPLL_CFGCR0_DCO_INTEGER_MASK;
dco_fraction = (cfgcr0 & DPLL_CFGCR0_DCO_FRACTION_MASK) >>
DPLL_CFGCR0_DCO_FRACTION_SHIFT;
pdiv = (cfgcr1 & DPLL_CFGCR1_PDIV_MASK) >> DPLL_CFGCR1_PDIV_SHIFT;
kdiv = (cfgcr1 & DPLL_CFGCR1_KDIV_MASK) >> DPLL_CFGCR1_KDIV_SHIFT;
qdiv_mode = (cfgcr1 & DPLL_CFGCR1_QDIV_MODE(1)) >>
DPLL_CFGCR1_QDIV_MODE_SHIFT;
qdiv_ratio = (cfgcr1 & DPLL_CFGCR1_QDIV_RATIO_MASK) >>
DPLL_CFGCR1_QDIV_RATIO_SHIFT;
params = dev_priv->cdclk.hw.ref == 24000 ?
icl_dp_combo_pll_24MHz_values :
icl_dp_combo_pll_19_2MHz_values;
n_entries = ARRAY_SIZE(icl_dp_combo_pll_24MHz_values);
for (index = 0; index < n_entries; index++) {
if (dco_integer == params[index].dco_integer &&
dco_fraction == params[index].dco_fraction &&
pdiv == params[index].pdiv &&
kdiv == params[index].kdiv &&
qdiv_mode == params[index].qdiv_mode &&
qdiv_ratio == params[index].qdiv_ratio)
break;
}
/* Map PLL Index to Link Clock */
switch (index) {
default:
MISSING_CASE(index);
case 0:
link_clock = 540000;
break;
case 1:
link_clock = 270000;
break;
case 2:
link_clock = 162000;
break;
case 3:
link_clock = 324000;
break;
case 4:
link_clock = 216000;
break;
case 5:
link_clock = 432000;
break;
case 6:
link_clock = 648000;
break;
case 7:
link_clock = 810000;
break;
}
return link_clock;
}
static enum port icl_mg_pll_id_to_port(enum intel_dpll_id id)
{
return id - DPLL_ID_ICL_MGPLL1 + PORT_C;


@ -336,5 +336,7 @@ void intel_shared_dpll_init(struct drm_device *dev);
void intel_dpll_dump_hw_state(struct drm_i915_private *dev_priv,
struct intel_dpll_hw_state *hw_state);
int icl_calc_dp_combo_pll_link(struct drm_i915_private *dev_priv,
uint32_t pll_id);
#endif /* _INTEL_DPLL_MGR_H_ */


@ -194,7 +194,6 @@ enum intel_output_type {
struct intel_framebuffer {
struct drm_framebuffer base;
struct drm_i915_gem_object *obj;
struct intel_rotation_info rot_info;
/* for each plane in the normal GTT view */
@ -971,7 +970,7 @@ struct intel_plane {
const struct intel_plane_state *plane_state);
void (*disable_plane)(struct intel_plane *plane,
struct intel_crtc *crtc);
bool (*get_hw_state)(struct intel_plane *plane);
bool (*get_hw_state)(struct intel_plane *plane, enum pipe *pipe);
int (*check_plane)(struct intel_plane *plane,
struct intel_crtc_state *crtc_state,
struct intel_plane_state *state);
@ -1004,7 +1003,7 @@ struct cxsr_latency {
#define to_intel_framebuffer(x) container_of(x, struct intel_framebuffer, base)
#define to_intel_plane(x) container_of(x, struct intel_plane, base)
#define to_intel_plane_state(x) container_of(x, struct intel_plane_state, base)
#define intel_fb_obj(x) (x ? to_intel_framebuffer(x)->obj : NULL)
#define intel_fb_obj(x) ((x) ? to_intel_bo((x)->obj[0]) : NULL)
struct intel_hdmi {
i915_reg_t hdmi_reg;
@ -1376,6 +1375,8 @@ void gen9_enable_guc_interrupts(struct drm_i915_private *dev_priv);
void gen9_disable_guc_interrupts(struct drm_i915_private *dev_priv);
/* intel_crt.c */
bool intel_crt_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t adpa_reg, enum pipe *pipe);
void intel_crt_init(struct drm_i915_private *dev_priv);
void intel_crt_reset(struct drm_encoder *encoder);
@ -1392,8 +1393,6 @@ void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv,
enum transcoder cpu_transcoder);
void intel_ddi_enable_pipe_clock(const struct intel_crtc_state *crtc_state);
void intel_ddi_disable_pipe_clock(const struct intel_crtc_state *crtc_state);
struct intel_encoder *
intel_ddi_get_crtc_new_encoder(struct intel_crtc_state *crtc_state);
void intel_ddi_set_pipe_settings(const struct intel_crtc_state *crtc_state);
void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp);
bool intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector);
@ -1407,6 +1406,8 @@ void intel_ddi_compute_min_voltage_level(struct drm_i915_private *dev_priv,
u32 bxt_signal_levels(struct intel_dp *intel_dp);
uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
u8 intel_ddi_dp_voltage_max(struct intel_encoder *encoder);
u8 intel_ddi_dp_pre_emphasis_max(struct intel_encoder *encoder,
u8 voltage_swing);
int intel_ddi_toggle_hdcp_signalling(struct intel_encoder *intel_encoder,
bool enable);
void icl_map_plls_to_ports(struct drm_crtc *crtc,
@ -1488,6 +1489,9 @@ void intel_connector_attach_encoder(struct intel_connector *connector,
struct intel_encoder *encoder);
struct drm_display_mode *
intel_encoder_current_mode(struct intel_encoder *encoder);
bool intel_port_is_tc(struct drm_i915_private *dev_priv, enum port port);
enum tc_port intel_port_to_tc(struct drm_i915_private *dev_priv,
enum port port);
enum pipe intel_get_pipe_from_connector(struct intel_connector *connector);
int intel_get_pipe_from_crtc_id_ioctl(struct drm_device *dev, void *data,
@ -1615,6 +1619,7 @@ void intel_mode_from_pipe_config(struct drm_display_mode *mode,
void intel_crtc_arm_fifo_underrun(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state);
u16 skl_scaler_calc_phase(int sub, bool chroma_center);
int skl_update_scaler_crtc(struct intel_crtc_state *crtc_state);
int skl_max_scale(struct intel_crtc *crtc, struct intel_crtc_state *crtc_state,
uint32_t pixel_format);
@ -1644,6 +1649,9 @@ void intel_csr_ucode_suspend(struct drm_i915_private *);
void intel_csr_ucode_resume(struct drm_i915_private *);
/* intel_dp.c */
bool intel_dp_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t dp_reg, enum port port,
enum pipe *pipe);
bool intel_dp_init(struct drm_i915_private *dev_priv, i915_reg_t output_reg,
enum port port);
bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
@ -1821,6 +1829,8 @@ void intel_infoframe_init(struct intel_digital_port *intel_dig_port);
/* intel_lvds.c */
bool intel_lvds_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t lvds_reg, enum pipe *pipe);
void intel_lvds_init(struct drm_i915_private *dev_priv);
struct intel_encoder *intel_get_lvds_encoder(struct drm_device *dev);
bool intel_is_dual_link_lvds(struct drm_device *dev);
@ -1911,8 +1921,6 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
unsigned frontbuffer_bits,
enum fb_op_origin origin);
void intel_psr_init(struct drm_i915_private *dev_priv);
void intel_psr_single_frame_update(struct drm_i915_private *dev_priv,
unsigned frontbuffer_bits);
void intel_psr_compute_config(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state);
void intel_psr_irq_control(struct drm_i915_private *dev_priv, bool debug);
@ -2058,6 +2066,8 @@ void intel_init_ipc(struct drm_i915_private *dev_priv);
void intel_enable_ipc(struct drm_i915_private *dev_priv);
/* intel_sdvo.c */
bool intel_sdvo_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t sdvo_reg, enum pipe *pipe);
bool intel_sdvo_init(struct drm_i915_private *dev_priv,
i915_reg_t reg, enum port port);
@ -2076,7 +2086,7 @@ void skl_update_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void skl_disable_plane(struct intel_plane *plane, struct intel_crtc *crtc);
bool skl_plane_get_hw_state(struct intel_plane *plane);
bool skl_plane_get_hw_state(struct intel_plane *plane, enum pipe *pipe);
bool skl_plane_has_ccs(struct drm_i915_private *dev_priv,
enum pipe pipe, enum plane_id plane_id);
bool intel_format_is_yuv(uint32_t format);


@ -1665,16 +1665,16 @@ static int intel_dsi_get_panel_orientation(struct intel_connector *connector)
{
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
int orientation = DRM_MODE_PANEL_ORIENTATION_NORMAL;
enum i9xx_plane_id plane;
enum i9xx_plane_id i9xx_plane;
u32 val;
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
if (connector->encoder->crtc_mask == BIT(PIPE_B))
plane = PLANE_B;
i9xx_plane = PLANE_B;
else
plane = PLANE_A;
i9xx_plane = PLANE_A;
val = I915_READ(DSPCNTR(plane));
val = I915_READ(DSPCNTR(i9xx_plane));
if (val & DISPPLANE_ROTATE_180)
orientation = DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP;
}


@ -137,19 +137,15 @@ static bool intel_dvo_connector_get_hw_state(struct intel_connector *connector)
static bool intel_dvo_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dvo *intel_dvo = enc_to_dvo(encoder);
u32 tmp;
tmp = I915_READ(intel_dvo->dev.dvo_reg);
if (!(tmp & DVO_ENABLE))
return false;
*pipe = (tmp & DVO_PIPE_SEL_MASK) >> DVO_PIPE_SEL_SHIFT;
*pipe = PORT_TO_PIPE(tmp);
return true;
return tmp & DVO_ENABLE;
}
static void intel_dvo_get_config(struct intel_encoder *encoder,
@ -276,8 +272,7 @@ static void intel_dvo_pre_enable(struct intel_encoder *encoder,
dvo_val |= DVO_DATA_ORDER_FP | DVO_BORDER_ENABLE |
DVO_BLANK_ACTIVE_HIGH;
if (pipe == 1)
dvo_val |= DVO_PIPE_B_SELECT;
dvo_val |= DVO_PIPE_SEL(pipe);
dvo_val |= DVO_PIPE_STALL;
if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)
dvo_val |= DVO_HSYNC_ACTIVE_HIGH;


@ -515,7 +515,7 @@ int intel_engine_create_scratch(struct intel_engine_cs *engine, int size)
return PTR_ERR(obj);
}
vma = i915_vma_instance(obj, &engine->i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &engine->i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err_unref;
@ -585,7 +585,7 @@ static int init_status_page(struct intel_engine_cs *engine)
if (ret)
goto err;
vma = i915_vma_instance(obj, &engine->i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &engine->i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err;
@ -645,6 +645,12 @@ static int init_phys_status_page(struct intel_engine_cs *engine)
return 0;
}
static void __intel_context_unpin(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
{
intel_context_unpin(to_intel_context(ctx, engine));
}
/**
 * intel_engine_init_common - initialize engine state which might require hw access
* @engine: Engine to initialize.
@ -658,7 +664,8 @@ static int init_phys_status_page(struct intel_engine_cs *engine)
*/
int intel_engine_init_common(struct intel_engine_cs *engine)
{
struct intel_ring *ring;
struct drm_i915_private *i915 = engine->i915;
struct intel_context *ce;
int ret;
engine->set_default_submission(engine);
@ -670,18 +677,18 @@ int intel_engine_init_common(struct intel_engine_cs *engine)
* be available. To avoid this we always pin the default
* context.
*/
ring = intel_context_pin(engine->i915->kernel_context, engine);
if (IS_ERR(ring))
return PTR_ERR(ring);
ce = intel_context_pin(i915->kernel_context, engine);
if (IS_ERR(ce))
return PTR_ERR(ce);
/*
* Similarly the preempt context must always be available so that
* we can interrupt the engine at any time.
*/
if (engine->i915->preempt_context) {
ring = intel_context_pin(engine->i915->preempt_context, engine);
if (IS_ERR(ring)) {
ret = PTR_ERR(ring);
if (i915->preempt_context) {
ce = intel_context_pin(i915->preempt_context, engine);
if (IS_ERR(ce)) {
ret = PTR_ERR(ce);
goto err_unpin_kernel;
}
}
@ -690,7 +697,7 @@ int intel_engine_init_common(struct intel_engine_cs *engine)
if (ret)
goto err_unpin_preempt;
if (HWS_NEEDS_PHYSICAL(engine->i915))
if (HWS_NEEDS_PHYSICAL(i915))
ret = init_phys_status_page(engine);
else
ret = init_status_page(engine);
@ -702,10 +709,11 @@ int intel_engine_init_common(struct intel_engine_cs *engine)
err_breadcrumbs:
intel_engine_fini_breadcrumbs(engine);
err_unpin_preempt:
if (engine->i915->preempt_context)
intel_context_unpin(engine->i915->preempt_context, engine);
if (i915->preempt_context)
__intel_context_unpin(i915->preempt_context, engine);
err_unpin_kernel:
intel_context_unpin(engine->i915->kernel_context, engine);
__intel_context_unpin(i915->kernel_context, engine);
return ret;
}
@ -718,6 +726,8 @@ int intel_engine_init_common(struct intel_engine_cs *engine)
*/
void intel_engine_cleanup_common(struct intel_engine_cs *engine)
{
struct drm_i915_private *i915 = engine->i915;
intel_engine_cleanup_scratch(engine);
if (HWS_NEEDS_PHYSICAL(engine->i915))
@ -732,9 +742,9 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
if (engine->default_state)
i915_gem_object_put(engine->default_state);
if (engine->i915->preempt_context)
intel_context_unpin(engine->i915->preempt_context, engine);
intel_context_unpin(engine->i915->kernel_context, engine);
if (i915->preempt_context)
__intel_context_unpin(i915->preempt_context, engine);
__intel_context_unpin(i915->kernel_context, engine);
i915_timeline_fini(&engine->timeline);
}
@ -769,6 +779,35 @@ u64 intel_engine_get_last_batch_head(const struct intel_engine_cs *engine)
return bbaddr;
}
int intel_engine_stop_cs(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
const u32 base = engine->mmio_base;
const i915_reg_t mode = RING_MI_MODE(base);
int err;
if (INTEL_GEN(dev_priv) < 3)
return -ENODEV;
GEM_TRACE("%s\n", engine->name);
I915_WRITE_FW(mode, _MASKED_BIT_ENABLE(STOP_RING));
err = 0;
if (__intel_wait_for_register_fw(dev_priv,
mode, MODE_IDLE, MODE_IDLE,
1000, 0,
NULL)) {
GEM_TRACE("%s: timed out on STOP_RING -> IDLE\n", engine->name);
err = -ETIMEDOUT;
}
/* A final mmio read to let GPU writes be hopefully flushed to memory */
POSTING_READ_FW(mode);
return err;
}
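
For readers unfamiliar with the masked-register convention used by _MASKED_BIT_ENABLE() above, here is a minimal standalone sketch: the upper 16 bits of the written value select which of the lower 16 bits the hardware actually updates, so one write can set or clear a bit without a read-modify-write. The EX_STOP_RING bit position is an assumption for illustration, not the real register definition.

#include <stdio.h>
#include <stdint.h>

#define EX_STOP_RING (1u << 8) /* assumed bit position, illustration only */

static uint32_t masked_bit_enable(uint32_t bit)
{
	return (bit << 16) | bit; /* mask bit set, value bit set */
}

static uint32_t masked_bit_disable(uint32_t bit)
{
	return bit << 16;         /* mask bit set, value bit clear */
}

int main(void)
{
	printf("enable  STOP_RING -> 0x%08x\n", masked_bit_enable(EX_STOP_RING));
	printf("disable STOP_RING -> 0x%08x\n", masked_bit_disable(EX_STOP_RING));
	return 0;
}
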
const char *i915_cache_level_str(struct drm_i915_private *i915, int type)
{
switch (type) {
@ -780,12 +819,32 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type)
}
}
u32 intel_calculate_mcr_s_ss_select(struct drm_i915_private *dev_priv)
{
const struct sseu_dev_info *sseu = &(INTEL_INFO(dev_priv)->sseu);
u32 mcr_s_ss_select;
u32 slice = fls(sseu->slice_mask);
u32 subslice = fls(sseu->subslice_mask[slice]);
if (INTEL_GEN(dev_priv) == 10)
mcr_s_ss_select = GEN8_MCR_SLICE(slice) |
GEN8_MCR_SUBSLICE(subslice);
else if (INTEL_GEN(dev_priv) >= 11)
mcr_s_ss_select = GEN11_MCR_SLICE(slice) |
GEN11_MCR_SUBSLICE(subslice);
else
mcr_s_ss_select = 0;
return mcr_s_ss_select;
}
static inline uint32_t
read_subslice_reg(struct drm_i915_private *dev_priv, int slice,
int subslice, i915_reg_t reg)
{
uint32_t mcr_slice_subslice_mask;
uint32_t mcr_slice_subslice_select;
uint32_t default_mcr_s_ss_select;
uint32_t mcr;
uint32_t ret;
enum forcewake_domains fw_domains;
@ -802,6 +861,8 @@ read_subslice_reg(struct drm_i915_private *dev_priv, int slice,
GEN8_MCR_SUBSLICE(subslice);
}
default_mcr_s_ss_select = intel_calculate_mcr_s_ss_select(dev_priv);
fw_domains = intel_uncore_forcewake_for_reg(dev_priv, reg,
FW_REG_READ);
fw_domains |= intel_uncore_forcewake_for_reg(dev_priv,
@ -812,11 +873,10 @@ read_subslice_reg(struct drm_i915_private *dev_priv, int slice,
intel_uncore_forcewake_get__locked(dev_priv, fw_domains);
mcr = I915_READ_FW(GEN8_MCR_SELECTOR);
/*
* The HW expects the slice and sublice selectors to be reset to 0
* after reading out the registers.
*/
WARN_ON_ONCE(mcr & mcr_slice_subslice_mask);
WARN_ON_ONCE((mcr & mcr_slice_subslice_mask) !=
default_mcr_s_ss_select);
mcr &= ~mcr_slice_subslice_mask;
mcr |= mcr_slice_subslice_select;
I915_WRITE_FW(GEN8_MCR_SELECTOR, mcr);
@ -824,6 +884,8 @@ read_subslice_reg(struct drm_i915_private *dev_priv, int slice,
ret = I915_READ_FW(reg);
mcr &= ~mcr_slice_subslice_mask;
mcr |= default_mcr_s_ss_select;
I915_WRITE_FW(GEN8_MCR_SELECTOR, mcr);
intel_uncore_forcewake_put__locked(dev_priv, fw_domains);
@ -934,10 +996,19 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
return true;
/* Waiting to drain ELSP? */
if (READ_ONCE(engine->execlists.active))
return false;
if (READ_ONCE(engine->execlists.active)) {
struct intel_engine_execlists *execlists = &engine->execlists;
/* ELSP is empty, but there are ready requests? */
if (tasklet_trylock(&execlists->tasklet)) {
execlists->tasklet.func(execlists->tasklet.data);
tasklet_unlock(&execlists->tasklet);
}
if (READ_ONCE(execlists->active))
return false;
}
/* ELSP is empty, but there are ready requests? E.g. after reset */
if (READ_ONCE(engine->execlists.first))
return false;
@ -978,8 +1049,8 @@ bool intel_engines_are_idle(struct drm_i915_private *dev_priv)
*/
bool intel_engine_has_kernel_context(const struct intel_engine_cs *engine)
{
const struct i915_gem_context * const kernel_context =
engine->i915->kernel_context;
const struct intel_context *kernel_context =
to_intel_context(engine->i915->kernel_context, engine);
struct i915_request *rq;
lockdep_assert_held(&engine->i915->drm.struct_mutex);
@ -991,7 +1062,7 @@ bool intel_engine_has_kernel_context(const struct intel_engine_cs *engine)
*/
rq = __i915_gem_active_peek(&engine->timeline.last_request);
if (rq)
return rq->ctx == kernel_context;
return rq->hw_context == kernel_context;
else
return engine->last_retired_context == kernel_context;
}
@ -1043,6 +1114,11 @@ void intel_engines_park(struct drm_i915_private *i915)
if (engine->park)
engine->park(engine);
if (engine->pinned_default_state) {
i915_gem_object_unpin_map(engine->default_state);
engine->pinned_default_state = NULL;
}
i915_gem_batch_pool_fini(&engine->batch_pool);
engine->execlists.no_priolist = false;
}
@ -1060,6 +1136,16 @@ void intel_engines_unpark(struct drm_i915_private *i915)
enum intel_engine_id id;
for_each_engine(engine, i915, id) {
void *map;
/* Pin the default state for fast resets from atomic context. */
map = NULL;
if (engine->default_state)
map = i915_gem_object_pin_map(engine->default_state,
I915_MAP_WB);
if (!IS_ERR_OR_NULL(map))
engine->pinned_default_state = map;
if (engine->unpark)
engine->unpark(engine);
@ -1067,6 +1153,29 @@ void intel_engines_unpark(struct drm_i915_private *i915)
}
}
/**
 * intel_engine_lost_context: called when the GPU is reset into an unknown state
 * @engine: the engine
 *
 * We have either reset the GPU or are otherwise about to lose state tracking of
* the current GPU logical state (e.g. suspend). On next use, it is therefore
* imperative that we make no presumptions about the current state and load
* from scratch.
*/
void intel_engine_lost_context(struct intel_engine_cs *engine)
{
struct intel_context *ce;
lockdep_assert_held(&engine->i915->drm.struct_mutex);
engine->legacy_active_context = NULL;
engine->legacy_active_ppgtt = NULL;
ce = fetch_and_zero(&engine->last_retired_context);
if (ce)
intel_context_unpin(ce);
}
bool intel_engine_can_store_dword(struct intel_engine_cs *engine)
{
switch (INTEL_GEN(engine->i915)) {
@ -1296,6 +1405,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
const struct intel_engine_execlists * const execlists = &engine->execlists;
struct i915_gpu_error * const error = &engine->i915->gpu_error;
struct i915_request *rq, *last;
unsigned long flags;
struct rb_node *rb;
int count;
@ -1362,7 +1472,8 @@ void intel_engine_dump(struct intel_engine_cs *engine,
drm_printf(m, "\tDevice is asleep; skipping register dump\n");
}
spin_lock_irq(&engine->timeline.lock);
local_irq_save(flags);
spin_lock(&engine->timeline.lock);
last = NULL;
count = 0;
@ -1404,16 +1515,17 @@ void intel_engine_dump(struct intel_engine_cs *engine,
print_request(m, last, "\t\tQ ");
}
spin_unlock_irq(&engine->timeline.lock);
spin_unlock(&engine->timeline.lock);
spin_lock_irq(&b->rb_lock);
spin_lock(&b->rb_lock);
for (rb = rb_first(&b->waiters); rb; rb = rb_next(rb)) {
struct intel_wait *w = rb_entry(rb, typeof(*w), node);
drm_printf(m, "\t%s [%d] waiting for %x\n",
w->tsk->comm, w->tsk->pid, w->seqno);
}
spin_unlock_irq(&b->rb_lock);
spin_unlock(&b->rb_lock);
local_irq_restore(flags);
drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s) (execlists? %s)\n",
engine->irq_posted,


@ -47,7 +47,7 @@
static void intel_fbdev_invalidate(struct intel_fbdev *ifbdev)
{
struct drm_i915_gem_object *obj = ifbdev->fb->obj;
struct drm_i915_gem_object *obj = intel_fb_obj(&ifbdev->fb->base);
unsigned int origin =
ifbdev->vma_flags & PLANE_HAS_FENCE ? ORIGIN_GTT : ORIGIN_CPU;
@ -193,7 +193,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
drm_framebuffer_put(&intel_fb->base);
intel_fb = ifbdev->fb = NULL;
}
if (!intel_fb || WARN_ON(!intel_fb->obj)) {
if (!intel_fb || WARN_ON(!intel_fb_obj(&intel_fb->base))) {
DRM_DEBUG_KMS("no BIOS fb, allocating a new one\n");
ret = intelfb_alloc(helper, sizes);
if (ret)
@ -265,7 +265,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
* If the object is stolen however, it will be full of whatever
* garbage was left in there.
*/
if (intel_fb->obj->stolen && !prealloc)
if (intel_fb_obj(fb)->stolen && !prealloc)
memset_io(info->screen_base, 0, info->screen_size);
/* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */
@ -792,7 +792,8 @@ void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous
* been restored from swap. If the object is stolen however, it will be
* full of whatever garbage was left in there.
*/
if (state == FBINFO_STATE_RUNNING && ifbdev->fb->obj->stolen)
if (state == FBINFO_STATE_RUNNING &&
intel_fb_obj(&ifbdev->fb->base)->stolen)
memset_io(info->screen_base, 0, info->screen_size);
drm_fb_helper_set_suspend(&ifbdev->helper, state);


@ -153,8 +153,6 @@ void intel_frontbuffer_flip_prepare(struct drm_i915_private *dev_priv,
/* Remove stale busy bits due to the old buffer. */
dev_priv->fb_tracking.busy_bits &= ~frontbuffer_bits;
spin_unlock(&dev_priv->fb_tracking.lock);
intel_psr_single_frame_update(dev_priv, frontbuffer_bits);
}
/**


@ -346,10 +346,8 @@ int intel_guc_send_mmio(struct intel_guc *guc, const u32 *action, u32 len,
ret = -EIO;
if (ret) {
DRM_DEBUG_DRIVER("INTEL_GUC_SEND: Action 0x%X failed;"
" ret=%d status=0x%08X response=0x%08X\n",
action[0], ret, status,
I915_READ(SOFT_SCRATCH(15)));
DRM_ERROR("MMIO: GuC action %#x failed with error %d %#x\n",
action[0], ret, status);
goto out;
}
@ -572,7 +570,7 @@ struct i915_vma *intel_guc_allocate_vma(struct intel_guc *guc, u32 size)
if (IS_ERR(obj))
return ERR_CAST(obj);
vma = i915_vma_instance(obj, &dev_priv->ggtt.base, NULL);
vma = i915_vma_instance(obj, &dev_priv->ggtt.vm, NULL);
if (IS_ERR(vma))
goto err;


@ -513,8 +513,7 @@ static void guc_add_request(struct intel_guc *guc, struct i915_request *rq)
{
struct intel_guc_client *client = guc->execbuf_client;
struct intel_engine_cs *engine = rq->engine;
u32 ctx_desc = lower_32_bits(intel_lr_context_descriptor(rq->ctx,
engine));
u32 ctx_desc = lower_32_bits(rq->hw_context->lrc_desc);
u32 ring_tail = intel_ring_set_tail(rq->ring, rq->tail) / sizeof(u64);
spin_lock(&client->wq_lock);
@ -537,7 +536,7 @@ static void guc_add_request(struct intel_guc *guc, struct i915_request *rq)
*/
static void flush_ggtt_writes(struct i915_vma *vma)
{
struct drm_i915_private *dev_priv = to_i915(vma->obj->base.dev);
struct drm_i915_private *dev_priv = vma->vm->i915;
if (i915_vma_is_map_and_fenceable(vma))
POSTING_READ_FW(GUC_STATUS);
@ -552,8 +551,8 @@ static void inject_preempt_context(struct work_struct *work)
preempt_work[engine->id]);
struct intel_guc_client *client = guc->preempt_client;
struct guc_stage_desc *stage_desc = __get_stage_desc(client);
u32 ctx_desc = lower_32_bits(intel_lr_context_descriptor(client->owner,
engine));
u32 ctx_desc = lower_32_bits(to_intel_context(client->owner,
engine)->lrc_desc);
u32 data[7];
/*
@ -623,6 +622,21 @@ static void wait_for_guc_preempt_report(struct intel_engine_cs *engine)
report->report_return_status = INTEL_GUC_REPORT_STATUS_UNKNOWN;
}
static void complete_preempt_context(struct intel_engine_cs *engine)
{
struct intel_engine_execlists *execlists = &engine->execlists;
GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
execlists_cancel_port_requests(execlists);
execlists_unwind_incomplete_requests(execlists);
wait_for_guc_preempt_report(engine);
intel_write_status_page(engine, I915_GEM_HWS_PREEMPT_INDEX, 0);
execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
}
/**
* guc_submit() - Submit commands through GuC
* @engine: engine associated with the commands
@ -710,7 +724,7 @@ static bool __guc_dequeue(struct intel_engine_cs *engine)
struct i915_request *rq, *rn;
list_for_each_entry_safe(rq, rn, &p->requests, sched.link) {
if (last && rq->ctx != last->ctx) {
if (last && rq->hw_context != last->hw_context) {
if (port == last_port) {
__list_del_many(&p->requests,
&rq->sched.link);
@ -793,20 +807,44 @@ static void guc_submission_tasklet(unsigned long data)
if (execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT) &&
intel_read_status_page(engine, I915_GEM_HWS_PREEMPT_INDEX) ==
GUC_PREEMPT_FINISHED) {
execlists_cancel_port_requests(&engine->execlists);
execlists_unwind_incomplete_requests(execlists);
wait_for_guc_preempt_report(engine);
execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
intel_write_status_page(engine, I915_GEM_HWS_PREEMPT_INDEX, 0);
}
GUC_PREEMPT_FINISHED)
complete_preempt_context(engine);
if (!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT))
guc_dequeue(engine);
}
static struct i915_request *
guc_reset_prepare(struct intel_engine_cs *engine)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
GEM_TRACE("%s\n", engine->name);
/*
* Prevent request submission to the hardware until we have
* completed the reset in i915_gem_reset_finish(). If a request
* is completed by one engine, it may then queue a request
* to a second via its execlists->tasklet *just* as we are
* calling engine->init_hw() and also writing the ELSP.
* Turning off the execlists->tasklet until the reset is over
* prevents the race.
*/
__tasklet_disable_sync_once(&execlists->tasklet);
/*
 * We're using a worker to queue preemption requests from the tasklet in
* GuC submission mode.
* Even though tasklet was disabled, we may still have a worker queued.
* Let's make sure that all workers scheduled before disabling the
* tasklet are completed before continuing with the reset.
*/
if (engine->i915->guc.preempt_wq)
flush_workqueue(engine->i915->guc.preempt_wq);
return i915_gem_find_active_request(engine);
}
/*
* Everything below here is concerned with setup & teardown, and is
* therefore not part of the somewhat time-critical batch-submission
@ -1119,7 +1157,7 @@ int intel_guc_submission_init(struct intel_guc *guc)
WARN_ON(!guc_verify_doorbells(guc));
ret = guc_clients_create(guc);
if (ret)
return ret;
goto err_pool;
for_each_engine(engine, dev_priv, id) {
guc->preempt_work[id].engine = engine;
@ -1128,6 +1166,9 @@ int intel_guc_submission_init(struct intel_guc *guc)
return 0;
err_pool:
guc_stage_desc_pool_destroy(guc);
return ret;
}
void intel_guc_submission_fini(struct intel_guc *guc)
@ -1267,6 +1308,9 @@ int intel_guc_submission_enable(struct intel_guc *guc)
&engine->execlists;
execlists->tasklet.func = guc_submission_tasklet;
engine->reset.prepare = guc_reset_prepare;
engine->park = guc_submission_park;
engine->unpark = guc_submission_unpark;


@ -1161,33 +1161,16 @@ static void intel_hdmi_prepare(struct intel_encoder *encoder,
static bool intel_hdmi_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
u32 tmp;
bool ret;
if (!intel_display_power_get_if_enabled(dev_priv,
encoder->power_domain))
return false;
ret = false;
ret = intel_sdvo_port_enabled(dev_priv, intel_hdmi->hdmi_reg, pipe);
tmp = I915_READ(intel_hdmi->hdmi_reg);
if (!(tmp & SDVO_ENABLE))
goto out;
if (HAS_PCH_CPT(dev_priv))
*pipe = PORT_TO_PIPE_CPT(tmp);
else if (IS_CHERRYVIEW(dev_priv))
*pipe = SDVO_PORT_TO_PIPE_CHV(tmp);
else
*pipe = PORT_TO_PIPE(tmp);
ret = true;
out:
intel_display_power_put(dev_priv, encoder->power_domain);
return ret;
@ -1421,8 +1404,8 @@ static void intel_disable_hdmi(struct intel_encoder *encoder,
intel_set_cpu_fifo_underrun_reporting(dev_priv, PIPE_A, false);
intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false);
temp &= ~SDVO_PIPE_B_SELECT;
temp |= SDVO_ENABLE;
temp &= ~SDVO_PIPE_SEL_MASK;
temp |= SDVO_ENABLE | SDVO_PIPE_SEL(PIPE_A);
/*
* HW workaround, need to write this twice for issue
* that may result in first write getting masked.


@ -164,7 +164,8 @@
#define WA_TAIL_BYTES (sizeof(u32) * WA_TAIL_DWORDS)
static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
struct intel_engine_cs *engine);
struct intel_engine_cs *engine,
struct intel_context *ce);
static void execlists_init_reg_state(u32 *reg_state,
struct i915_gem_context *ctx,
struct intel_engine_cs *engine,
@ -189,12 +190,7 @@ static inline bool need_preempt(const struct intel_engine_cs *engine,
!i915_request_completed(last));
}
/**
* intel_lr_context_descriptor_update() - calculate & cache the descriptor
* descriptor for a pinned context
* @ctx: Context to work on
* @engine: Engine the descriptor will be used with
*
/*
* The context descriptor encodes various attributes of a context,
* including its GTT address and some flags. Because it's fairly
* expensive to calculate, we'll just do it once and cache the result,
@ -204,7 +200,7 @@ static inline bool need_preempt(const struct intel_engine_cs *engine,
*
* bits 0-11: flags, GEN8_CTX_* (cached in ctx->desc_template)
* bits 12-31: LRCA, GTT address of (the HWSP of) this context
* bits 32-52: ctx ID, a globally unique tag
* bits 32-52: ctx ID, a globally unique tag (highest bit used by GuC)
* bits 53-54: mbz, reserved for use by hardware
* bits 55-63: group ID, currently unused and set to 0
*
@ -222,9 +218,9 @@ static inline bool need_preempt(const struct intel_engine_cs *engine,
*/
static void
intel_lr_context_descriptor_update(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
struct intel_engine_cs *engine,
struct intel_context *ce)
{
struct intel_context *ce = to_intel_context(ctx, engine);
u64 desc;
BUILD_BUG_ON(MAX_CONTEXT_HW_ID > (BIT(GEN8_CTX_ID_WIDTH)));
@ -237,6 +233,11 @@ intel_lr_context_descriptor_update(struct i915_gem_context *ctx,
/* bits 12-31 */
GEM_BUG_ON(desc & GENMASK_ULL(63, 32));
/*
* The following 32bits are copied into the OA reports (dword 2).
* Consider updating oa_get_render_ctx_id in i915_perf.c when changing
* anything below.
*/
if (INTEL_GEN(ctx->i915) >= 11) {
GEM_BUG_ON(ctx->hw_id >= BIT(GEN11_SW_CTX_ID_WIDTH));
desc |= (u64)ctx->hw_id << GEN11_SW_CTX_ID_SHIFT;
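
A minimal standalone sketch of the descriptor layout documented in the comment above (field positions only; the flag, LRCA and ctx ID values are made up for illustration, and the gen11-specific ctx ID placement shown in the hunk above is not modelled):

#include <stdio.h>
#include <stdint.h>

static uint64_t example_lrc_descriptor(uint64_t flags, uint64_t lrca, uint64_t ctx_id)
{
	uint64_t desc = 0;

	desc |= flags & 0xfffull;             /* bits 0-11: GEN8_CTX_* flags */
	desc |= lrca & 0xfffff000ull;         /* bits 12-31: LRCA, 4 KiB aligned */
	desc |= (ctx_id & 0x1fffffull) << 32; /* bits 32-52: ctx ID */
	/* bits 53-54: mbz; bits 55-63: group ID, left at 0 */

	return desc;
}

int main(void)
{
	/* made-up flags, LRCA and ctx ID, purely to show the packing */
	printf("desc = 0x%016llx\n",
	       (unsigned long long)example_lrc_descriptor(0x021, 0x12345000ull, 42));
	return 0;
}
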
@ -418,9 +419,9 @@ execlists_update_context_pdps(struct i915_hw_ppgtt *ppgtt, u32 *reg_state)
static u64 execlists_update_context(struct i915_request *rq)
{
struct intel_context *ce = to_intel_context(rq->ctx, rq->engine);
struct intel_context *ce = rq->hw_context;
struct i915_hw_ppgtt *ppgtt =
rq->ctx->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
rq->gem_context->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
u32 *reg_state = ce->lrc_reg_state;
reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);
@ -430,7 +431,7 @@ static u64 execlists_update_context(struct i915_request *rq)
* PML4 is allocated during ppgtt init, so this is not needed
* in 48-bit mode.
*/
if (ppgtt && !i915_vm_is_48bit(&ppgtt->base))
if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm))
execlists_update_context_pdps(ppgtt, reg_state);
return ce->lrc_desc;
@ -495,14 +496,14 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
execlists_clear_active(execlists, EXECLISTS_ACTIVE_HWACK);
}
static bool ctx_single_port_submission(const struct i915_gem_context *ctx)
static bool ctx_single_port_submission(const struct intel_context *ce)
{
return (IS_ENABLED(CONFIG_DRM_I915_GVT) &&
i915_gem_context_force_single_submission(ctx));
i915_gem_context_force_single_submission(ce->gem_context));
}
static bool can_merge_ctx(const struct i915_gem_context *prev,
const struct i915_gem_context *next)
static bool can_merge_ctx(const struct intel_context *prev,
const struct intel_context *next)
{
if (prev != next)
return false;
@ -552,8 +553,18 @@ static void inject_preempt_context(struct intel_engine_cs *engine)
if (execlists->ctrl_reg)
writel(EL_CTRL_LOAD, execlists->ctrl_reg);
execlists_clear_active(&engine->execlists, EXECLISTS_ACTIVE_HWACK);
execlists_set_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT);
execlists_clear_active(execlists, EXECLISTS_ACTIVE_HWACK);
execlists_set_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
}
static void complete_preempt_context(struct intel_engine_execlists *execlists)
{
GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
execlists_cancel_port_requests(execlists);
execlists_unwind_incomplete_requests(execlists);
execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
}
static bool __execlists_dequeue(struct intel_engine_cs *engine)
@ -602,8 +613,6 @@ static bool __execlists_dequeue(struct intel_engine_cs *engine)
GEM_BUG_ON(!execlists_is_active(execlists,
EXECLISTS_ACTIVE_USER));
GEM_BUG_ON(!port_count(&port[0]));
if (port_count(&port[0]) > 1)
return false;
/*
* If we write to ELSP a second time before the HW has had
@ -671,7 +680,8 @@ static bool __execlists_dequeue(struct intel_engine_cs *engine)
* second request, and so we never need to tell the
* hardware about the first.
*/
if (last && !can_merge_ctx(rq->ctx, last->ctx)) {
if (last &&
!can_merge_ctx(rq->hw_context, last->hw_context)) {
/*
* If we are on the second port and cannot
* combine this request with the last, then we
@ -690,14 +700,14 @@ static bool __execlists_dequeue(struct intel_engine_cs *engine)
* the same context (even though a different
* request) to the second port.
*/
if (ctx_single_port_submission(last->ctx) ||
ctx_single_port_submission(rq->ctx)) {
if (ctx_single_port_submission(last->hw_context) ||
ctx_single_port_submission(rq->hw_context)) {
__list_del_many(&p->requests,
&rq->sched.link);
goto done;
}
GEM_BUG_ON(last->ctx == rq->ctx);
GEM_BUG_ON(last->hw_context == rq->hw_context);
if (submit)
port_assign(port, last);
@ -947,34 +957,14 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
local_irq_restore(flags);
}
/*
* Check the unread Context Status Buffers and manage the submission of new
* contexts to the ELSP accordingly.
*/
static void execlists_submission_tasklet(unsigned long data)
static void process_csb(struct intel_engine_cs *engine)
{
struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
struct intel_engine_execlists * const execlists = &engine->execlists;
struct execlist_port *port = execlists->port;
struct drm_i915_private *dev_priv = engine->i915;
struct drm_i915_private *i915 = engine->i915;
bool fw = false;
/*
* We can skip acquiring intel_runtime_pm_get() here as it was taken
* on our behalf by the request (see i915_gem_mark_busy()) and it will
* not be relinquished until the device is idle (see
* i915_gem_idle_work_handler()). As a precaution, we make sure
* that all ELSP are drained i.e. we have processed the CSB,
* before allowing ourselves to idle and calling intel_runtime_pm_put().
*/
GEM_BUG_ON(!dev_priv->gt.awake);
/*
* Prefer doing test_and_clear_bit() as a two stage operation to avoid
* imposing the cost of a locked atomic transaction when submitting a
* new request (outside of the context-switch interrupt).
*/
while (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted)) {
do {
/* The HWSP contains a (cacheable) mirror of the CSB */
const u32 *buf =
&engine->status_page.page_addr[I915_HWS_CSB_BUF0_INDEX];
@ -982,28 +972,27 @@ static void execlists_submission_tasklet(unsigned long data)
if (unlikely(execlists->csb_use_mmio)) {
buf = (u32 * __force)
(dev_priv->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_BUF_LO(engine, 0)));
execlists->csb_head = -1; /* force mmio read of CSB ptrs */
(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_BUF_LO(engine, 0)));
execlists->csb_head = -1; /* force mmio read of CSB */
}
/* Clear before reading to catch new interrupts */
clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
smp_mb__after_atomic();
if (unlikely(execlists->csb_head == -1)) { /* following a reset */
if (unlikely(execlists->csb_head == -1)) { /* after a reset */
if (!fw) {
intel_uncore_forcewake_get(dev_priv,
execlists->fw_domains);
intel_uncore_forcewake_get(i915, execlists->fw_domains);
fw = true;
}
head = readl(dev_priv->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)));
head = readl(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)));
tail = GEN8_CSB_WRITE_PTR(head);
head = GEN8_CSB_READ_PTR(head);
execlists->csb_head = head;
} else {
const int write_idx =
intel_hws_csb_write_index(dev_priv) -
intel_hws_csb_write_index(i915) -
I915_HWS_CSB_BUF0_INDEX;
head = execlists->csb_head;
@ -1012,8 +1001,8 @@ static void execlists_submission_tasklet(unsigned long data)
}
GEM_TRACE("%s cs-irq head=%d [%d%s], tail=%d [%d%s]\n",
engine->name,
head, GEN8_CSB_READ_PTR(readl(dev_priv->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)))), fw ? "" : "?",
tail, GEN8_CSB_WRITE_PTR(readl(dev_priv->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)))), fw ? "" : "?");
head, GEN8_CSB_READ_PTR(readl(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)))), fw ? "" : "?",
tail, GEN8_CSB_WRITE_PTR(readl(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)))), fw ? "" : "?");
while (head != tail) {
struct i915_request *rq;
@ -1023,7 +1012,8 @@ static void execlists_submission_tasklet(unsigned long data)
if (++head == GEN8_CSB_ENTRIES)
head = 0;
/* We are flying near dragons again.
/*
* We are flying near dragons again.
*
* We hold a reference to the request in execlist_port[]
* but no more than that. We are operating in softirq
@ -1063,14 +1053,7 @@ static void execlists_submission_tasklet(unsigned long data)
if (status & GEN8_CTX_STATUS_COMPLETE &&
buf[2*head + 1] == execlists->preempt_complete_status) {
GEM_TRACE("%s preempt-idle\n", engine->name);
execlists_cancel_port_requests(execlists);
execlists_unwind_incomplete_requests(execlists);
GEM_BUG_ON(!execlists_is_active(execlists,
EXECLISTS_ACTIVE_PREEMPT));
execlists_clear_active(execlists,
EXECLISTS_ACTIVE_PREEMPT);
complete_preempt_context(execlists);
continue;
}
@ -1139,16 +1122,49 @@ static void execlists_submission_tasklet(unsigned long data)
if (head != execlists->csb_head) {
execlists->csb_head = head;
writel(_MASKED_FIELD(GEN8_CSB_READ_PTR_MASK, head << 8),
dev_priv->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)));
i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)));
}
}
} while (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted));
if (!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT))
if (unlikely(fw))
intel_uncore_forcewake_put(i915, execlists->fw_domains);
}
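The loop above treats the CSB as a small ring: the hardware's read and write pointers are packed into one status register and the head index wraps at GEN8_CSB_ENTRIES. Stripped of the mmio fallback and forcewake handling, the consumption pattern is roughly the following sketch (the handle() callback is illustrative only):

static void example_consume_csb(const u32 *buf, u32 ptr_reg,
				void (*handle)(u32 status, u32 ctx_id))
{
	unsigned int head = GEN8_CSB_READ_PTR(ptr_reg);
	unsigned int tail = GEN8_CSB_WRITE_PTR(ptr_reg);

	while (head != tail) {
		if (++head == GEN8_CSB_ENTRIES)
			head = 0;

		/* Each CSB entry is two dwords: status flags, then context ID. */
		handle(buf[2 * head], buf[2 * head + 1]);
	}
}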
/*
* Check the unread Context Status Buffers and manage the submission of new
* contexts to the ELSP accordingly.
*/
static void execlists_submission_tasklet(unsigned long data)
{
struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
GEM_TRACE("%s awake?=%d, active=%x, irq-posted?=%d\n",
engine->name,
engine->i915->gt.awake,
engine->execlists.active,
test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted));
/*
* We can skip acquiring intel_runtime_pm_get() here as it was taken
* on our behalf by the request (see i915_gem_mark_busy()) and it will
* not be relinquished until the device is idle (see
* i915_gem_idle_work_handler()). As a precaution, we make sure
* that all ELSP are drained i.e. we have processed the CSB,
* before allowing ourselves to idle and calling intel_runtime_pm_put().
*/
GEM_BUG_ON(!engine->i915->gt.awake);
/*
* Prefer doing test_and_clear_bit() as a two stage operation to avoid
* imposing the cost of a locked atomic transaction when submitting a
* new request (outside of the context-switch interrupt).
*/
if (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted))
process_csb(engine);
if (!execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
execlists_dequeue(engine);
if (fw)
intel_uncore_forcewake_put(dev_priv, execlists->fw_domains);
/* If the engine is now idle, so should be the flag; and vice versa. */
GEM_BUG_ON(execlists_is_active(&engine->execlists,
EXECLISTS_ACTIVE_USER) ==
@ -1322,6 +1338,26 @@ static void execlists_schedule(struct i915_request *request,
spin_unlock_irq(&engine->timeline.lock);
}
static void execlists_context_destroy(struct intel_context *ce)
{
GEM_BUG_ON(!ce->state);
GEM_BUG_ON(ce->pin_count);
intel_ring_free(ce->ring);
__i915_gem_object_release_unless_active(ce->state->obj);
}
static void execlists_context_unpin(struct intel_context *ce)
{
intel_ring_unpin(ce->ring);
ce->state->obj->pin_global--;
i915_gem_object_unpin_map(ce->state->obj);
i915_vma_unpin(ce->state);
i915_gem_context_put(ce->gem_context);
}
static int __context_pin(struct i915_gem_context *ctx, struct i915_vma *vma)
{
unsigned int flags;
@ -1345,21 +1381,15 @@ static int __context_pin(struct i915_gem_context *ctx, struct i915_vma *vma)
return i915_vma_pin(vma, 0, GEN8_LR_CONTEXT_ALIGN, flags);
}
static struct intel_ring *
execlists_context_pin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
static struct intel_context *
__execlists_context_pin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx,
struct intel_context *ce)
{
struct intel_context *ce = to_intel_context(ctx, engine);
void *vaddr;
int ret;
lockdep_assert_held(&ctx->i915->drm.struct_mutex);
if (likely(ce->pin_count++))
goto out;
GEM_BUG_ON(!ce->pin_count); /* no overflow please! */
ret = execlists_context_deferred_alloc(ctx, engine);
ret = execlists_context_deferred_alloc(ctx, engine, ce);
if (ret)
goto err;
GEM_BUG_ON(!ce->state);
@ -1378,7 +1408,7 @@ execlists_context_pin(struct intel_engine_cs *engine,
if (ret)
goto unpin_map;
intel_lr_context_descriptor_update(ctx, engine);
intel_lr_context_descriptor_update(ctx, engine, ce);
ce->lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
ce->lrc_reg_state[CTX_RING_BUFFER_START+1] =
@ -1387,8 +1417,7 @@ execlists_context_pin(struct intel_engine_cs *engine,
ce->state->obj->pin_global++;
i915_gem_context_get(ctx);
out:
return ce->ring;
return ce;
unpin_map:
i915_gem_object_unpin_map(ce->state->obj);
@ -1399,33 +1428,33 @@ execlists_context_pin(struct intel_engine_cs *engine,
return ERR_PTR(ret);
}
static void execlists_context_unpin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
static const struct intel_context_ops execlists_context_ops = {
.unpin = execlists_context_unpin,
.destroy = execlists_context_destroy,
};
static struct intel_context *
execlists_context_pin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
{
struct intel_context *ce = to_intel_context(ctx, engine);
lockdep_assert_held(&ctx->i915->drm.struct_mutex);
GEM_BUG_ON(ce->pin_count == 0);
if (--ce->pin_count)
return;
if (likely(ce->pin_count++))
return ce;
GEM_BUG_ON(!ce->pin_count); /* no overflow please! */
intel_ring_unpin(ce->ring);
ce->ops = &execlists_context_ops;
ce->state->obj->pin_global--;
i915_gem_object_unpin_map(ce->state->obj);
i915_vma_unpin(ce->state);
i915_gem_context_put(ctx);
return __execlists_context_pin(engine, ctx, ce);
}
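With context_pin() now handing back the intel_context itself and the release path hanging off ce->ops, a caller ends up with a pattern along these lines (a sketch only; the real unpin path also drops ce->pin_count before releasing anything):

static int example_use_context(struct intel_engine_cs *engine,
			       struct i915_gem_context *ctx)
{
	struct intel_context *ce;

	ce = engine->context_pin(engine, ctx);
	if (IS_ERR(ce))
		return PTR_ERR(ce);

	/* ... build and submit requests against ce->ring / ce->lrc_desc ... */

	ce->ops->unpin(ce);
	return 0;
}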
static int execlists_request_alloc(struct i915_request *request)
{
struct intel_context *ce =
to_intel_context(request->ctx, request->engine);
int ret;
GEM_BUG_ON(!ce->pin_count);
GEM_BUG_ON(!request->hw_context->pin_count);
/* Flush enough space to reduce the likelihood of waiting after
* we start building the request - in which case we will just
@ -1642,7 +1671,7 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
if (IS_ERR(obj))
return PTR_ERR(obj);
vma = i915_vma_instance(obj, &engine->i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &engine->i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err;
@ -1757,12 +1786,25 @@ static void enable_execlists(struct intel_engine_cs *engine)
I915_WRITE(RING_MODE_GEN7(engine),
_MASKED_BIT_ENABLE(GFX_RUN_LIST_ENABLE));
I915_WRITE(RING_MI_MODE(engine->mmio_base),
_MASKED_BIT_DISABLE(STOP_RING));
I915_WRITE(RING_HWS_PGA(engine->mmio_base),
engine->status_page.ggtt_offset);
POSTING_READ(RING_HWS_PGA(engine->mmio_base));
}
/* Following the reset, we need to reload the CSB read/write pointers */
engine->execlists.csb_head = -1;
static bool unexpected_starting_state(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
bool unexpected = false;
if (I915_READ(RING_MI_MODE(engine->mmio_base)) & STOP_RING) {
DRM_DEBUG_DRIVER("STOP_RING still set in RING_MI_MODE\n");
unexpected = true;
}
return unexpected;
}
static int gen8_init_common_ring(struct intel_engine_cs *engine)
@ -1777,6 +1819,12 @@ static int gen8_init_common_ring(struct intel_engine_cs *engine)
intel_engine_reset_breadcrumbs(engine);
intel_engine_init_hangcheck(engine);
if (GEM_SHOW_DEBUG() && unexpected_starting_state(engine)) {
struct drm_printer p = drm_debug_printer(__func__);
intel_engine_dump(engine, &p, NULL);
}
enable_execlists(engine);
/* After a GPU reset, we may have requests to replay */
@ -1823,8 +1871,69 @@ static int gen9_init_render_ring(struct intel_engine_cs *engine)
return 0;
}
static void reset_common_ring(struct intel_engine_cs *engine,
struct i915_request *request)
static struct i915_request *
execlists_reset_prepare(struct intel_engine_cs *engine)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
struct i915_request *request, *active;
GEM_TRACE("%s\n", engine->name);
/*
* Prevent request submission to the hardware until we have
* completed the reset in i915_gem_reset_finish(). If a request
* is completed by one engine, it may then queue a request
* to a second via its execlists->tasklet *just* as we are
* calling engine->init_hw() and also writing the ELSP.
* Turning off the execlists->tasklet until the reset is over
* prevents the race.
*/
__tasklet_disable_sync_once(&execlists->tasklet);
/*
* We want to flush the pending context switches; having disabled
* the tasklet above, we can assume exclusive access to the execlists.
* This allows us to catch up with an in-flight preemption event,
* and avoid blaming an innocent request if the stall was due to the
* preemption itself.
*/
if (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted))
process_csb(engine);
/*
* The last active request can then be no later than the last request
* now in ELSP[0]. So search backwards from there, so that if the GPU
* has advanced beyond the last CSB update, it will be pardoned.
*/
active = NULL;
request = port_request(execlists->port);
if (request) {
unsigned long flags;
/*
* Prevent the breadcrumb from advancing before we decide
* which request is currently active.
*/
intel_engine_stop_cs(engine);
spin_lock_irqsave(&engine->timeline.lock, flags);
list_for_each_entry_from_reverse(request,
&engine->timeline.requests,
link) {
if (__i915_request_completed(request,
request->global_seqno))
break;
active = request;
}
spin_unlock_irqrestore(&engine->timeline.lock, flags);
}
return active;
}
static void execlists_reset(struct intel_engine_cs *engine,
struct i915_request *request)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
unsigned long flags;
@ -1854,6 +1963,9 @@ static void reset_common_ring(struct intel_engine_cs *engine,
__unwind_incomplete_requests(engine);
spin_unlock(&engine->timeline.lock);
/* Following the reset, we need to reload the CSB read/write pointers */
engine->execlists.csb_head = -1;
local_irq_restore(flags);
/*
@ -1878,20 +1990,14 @@ static void reset_common_ring(struct intel_engine_cs *engine,
* future request will be after userspace has had the opportunity
* to recreate its own state.
*/
regs = to_intel_context(request->ctx, engine)->lrc_reg_state;
if (engine->default_state) {
void *defaults;
defaults = i915_gem_object_pin_map(engine->default_state,
I915_MAP_WB);
if (!IS_ERR(defaults)) {
memcpy(regs, /* skip restoring the vanilla PPHWSP */
defaults + LRC_STATE_PN * PAGE_SIZE,
engine->context_size - PAGE_SIZE);
i915_gem_object_unpin_map(engine->default_state);
}
regs = request->hw_context->lrc_reg_state;
if (engine->pinned_default_state) {
memcpy(regs, /* skip restoring the vanilla PPHWSP */
engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
engine->context_size - PAGE_SIZE);
}
execlists_init_reg_state(regs, request->ctx, engine, request->ring);
execlists_init_reg_state(regs,
request->gem_context, engine, request->ring);
/* Move the RING_HEAD onto the breadcrumb, past the hanging batch */
regs[CTX_RING_BUFFER_START + 1] = i915_ggtt_offset(request->ring->vma);
@ -1904,9 +2010,25 @@ static void reset_common_ring(struct intel_engine_cs *engine,
unwind_wa_tail(request);
}
static void execlists_reset_finish(struct intel_engine_cs *engine)
{
/*
* Flush the tasklet while we still have the forcewake to be sure
* that it is not allowed to sleep before we restart and reload a
* context.
*
* As before (with execlists_reset_prepare) we rely on the caller
* serialising multiple attempts to reset so that we know that we
* are the only one manipulating tasklet state.
*/
__tasklet_enable_sync_once(&engine->execlists.tasklet);
GEM_TRACE("%s\n", engine->name);
}
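Taken together, the prepare/reset/finish hooks split a per-engine reset into three phases: quiesce the tasklet and pick the request to blame, rewrite the context state, then let submission resume. A sketch of how a caller might drive them (the wrapper name is illustrative and the actual hardware reset step is elided):

static void example_reset_engine(struct intel_engine_cs *engine)
{
	struct i915_request *rq;

	/* Stop the tasklet, flush the CSB and find the active request. */
	rq = engine->reset.prepare(engine);

	/* ... reset the engine hardware here ... */

	/* Rewrite the context image and skip the guilty batch, if any. */
	engine->reset.reset(engine, rq);

	/* Re-enable the submission tasklet now that the engine is sane. */
	engine->reset.finish(engine);
}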
static int intel_logical_ring_emit_pdps(struct i915_request *rq)
{
struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
struct i915_hw_ppgtt *ppgtt = rq->gem_context->ppgtt;
struct intel_engine_cs *engine = rq->engine;
const int num_lri_cmds = GEN8_3LVL_PDPES * 2;
u32 *cs;
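The rest of this helper is not shown in the hunk. Following the MI_LOAD_REGISTER_IMM plus (reg, value) pairs pattern noted further down for the context image, a reload of the four 3-level PDP registers might look roughly like the sketch below (i915_page_dir_dma_addr() and the exact dword budget are assumptions here, not taken from the patch):

	int i;

	cs = intel_ring_begin(rq, num_lri_cmds * 2 + 2);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	*cs++ = MI_LOAD_REGISTER_IMM(num_lri_cmds);
	for (i = GEN8_3LVL_PDPES - 1; i >= 0; i--) {
		const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);

		*cs++ = i915_mmio_reg_offset(GEN8_RING_PDP_UDW(engine, i));
		*cs++ = upper_32_bits(pd_daddr);
		*cs++ = i915_mmio_reg_offset(GEN8_RING_PDP_LDW(engine, i));
		*cs++ = lower_32_bits(pd_daddr);
	}
	*cs++ = MI_NOOP;

	intel_ring_advance(rq, cs);
	return 0;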
@ -1945,15 +2067,15 @@ static int gen8_emit_bb_start(struct i915_request *rq,
* it is unsafe in case of lite-restore (because the ctx is
* not idle). PML4 is allocated during ppgtt init so this is
* not needed in 48-bit.*/
if (rq->ctx->ppgtt &&
(intel_engine_flag(rq->engine) & rq->ctx->ppgtt->pd_dirty_rings) &&
!i915_vm_is_48bit(&rq->ctx->ppgtt->base) &&
if (rq->gem_context->ppgtt &&
(intel_engine_flag(rq->engine) & rq->gem_context->ppgtt->pd_dirty_rings) &&
!i915_vm_is_48bit(&rq->gem_context->ppgtt->vm) &&
!intel_vgpu_active(rq->i915)) {
ret = intel_logical_ring_emit_pdps(rq);
if (ret)
return ret;
rq->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(rq->engine);
rq->gem_context->ppgtt->pd_dirty_rings &= ~intel_engine_flag(rq->engine);
}
cs = intel_ring_begin(rq, 6);
@ -2214,6 +2336,8 @@ static void execlists_set_default_submission(struct intel_engine_cs *engine)
engine->schedule = execlists_schedule;
engine->execlists.tasklet.func = execlists_submission_tasklet;
engine->reset.prepare = execlists_reset_prepare;
engine->park = NULL;
engine->unpark = NULL;
@ -2233,11 +2357,12 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine)
{
/* Default vfuncs which can be overriden by each engine. */
engine->init_hw = gen8_init_common_ring;
engine->reset_hw = reset_common_ring;
engine->reset.prepare = execlists_reset_prepare;
engine->reset.reset = execlists_reset;
engine->reset.finish = execlists_reset_finish;
engine->context_pin = execlists_context_pin;
engine->context_unpin = execlists_context_unpin;
engine->request_alloc = execlists_request_alloc;
engine->emit_flush = gen8_emit_flush;
@ -2340,6 +2465,8 @@ static int logical_ring_init(struct intel_engine_cs *engine)
upper_32_bits(ce->lrc_desc);
}
engine->execlists.csb_head = -1;
return 0;
error:
@ -2472,7 +2599,7 @@ static void execlists_init_reg_state(u32 *regs,
struct drm_i915_private *dev_priv = engine->i915;
struct i915_hw_ppgtt *ppgtt = ctx->ppgtt ?: dev_priv->mm.aliasing_ppgtt;
u32 base = engine->mmio_base;
bool rcs = engine->id == RCS;
bool rcs = engine->class == RENDER_CLASS;
/* A context is actually a big batch buffer with several
* MI_LOAD_REGISTER_IMM commands followed by (reg, value) pairs. The
@ -2540,7 +2667,7 @@ static void execlists_init_reg_state(u32 *regs,
CTX_REG(regs, CTX_PDP0_UDW, GEN8_RING_PDP_UDW(engine, 0), 0);
CTX_REG(regs, CTX_PDP0_LDW, GEN8_RING_PDP_LDW(engine, 0), 0);
if (ppgtt && i915_vm_is_48bit(&ppgtt->base)) {
if (ppgtt && i915_vm_is_48bit(&ppgtt->vm)) {
/* 64b PPGTT (48bit canonical)
* PDP0_DESCRIPTOR contains the base address to PML4 and
* other PDP Descriptors are ignored.
@ -2619,10 +2746,10 @@ populate_lr_context(struct i915_gem_context *ctx,
}
static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
struct intel_engine_cs *engine,
struct intel_context *ce)
{
struct drm_i915_gem_object *ctx_obj;
struct intel_context *ce = to_intel_context(ctx, engine);
struct i915_vma *vma;
uint32_t context_size;
struct intel_ring *ring;
@ -2646,7 +2773,7 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
goto error_deref_obj;
}
vma = i915_vma_instance(ctx_obj, &ctx->i915->ggtt.base, NULL);
vma = i915_vma_instance(ctx_obj, &ctx->i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto error_deref_obj;

drivers/gpu/drm/i915/intel_lrc.h

@ -104,11 +104,4 @@ struct i915_gem_context;
void intel_lr_context_resume(struct drm_i915_private *dev_priv);
static inline uint64_t
intel_lr_context_descriptor(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
{
return to_intel_context(ctx, engine)->lrc_desc;
}
#endif /* _INTEL_LRC_H_ */

drivers/gpu/drm/i915/intel_lvds.c

@ -85,34 +85,35 @@ static struct intel_lvds_connector *to_lvds_connector(struct drm_connector *conn
return container_of(connector, struct intel_lvds_connector, base.base);
}
bool intel_lvds_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t lvds_reg, enum pipe *pipe)
{
u32 val;
val = I915_READ(lvds_reg);
/* asserts want to know the pipe even if the port is disabled */
if (HAS_PCH_CPT(dev_priv))
*pipe = (val & LVDS_PIPE_SEL_MASK_CPT) >> LVDS_PIPE_SEL_SHIFT_CPT;
else
*pipe = (val & LVDS_PIPE_SEL_MASK) >> LVDS_PIPE_SEL_SHIFT;
return val & LVDS_PORT_EN;
}
static bool intel_lvds_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_lvds_encoder *lvds_encoder = to_lvds_encoder(&encoder->base);
u32 tmp;
bool ret;
if (!intel_display_power_get_if_enabled(dev_priv,
encoder->power_domain))
return false;
ret = false;
ret = intel_lvds_port_enabled(dev_priv, lvds_encoder->reg, pipe);
tmp = I915_READ(lvds_encoder->reg);
if (!(tmp & LVDS_PORT_EN))
goto out;
if (HAS_PCH_CPT(dev_priv))
*pipe = PORT_TO_PIPE_CPT(tmp);
else
*pipe = PORT_TO_PIPE(tmp);
ret = true;
out:
intel_display_power_put(dev_priv, encoder->power_domain);
return ret;
@ -255,14 +256,11 @@ static void intel_pre_enable_lvds(struct intel_encoder *encoder,
temp |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP;
if (HAS_PCH_CPT(dev_priv)) {
temp &= ~PORT_TRANS_SEL_MASK;
temp |= PORT_TRANS_SEL_CPT(pipe);
temp &= ~LVDS_PIPE_SEL_MASK_CPT;
temp |= LVDS_PIPE_SEL_CPT(pipe);
} else {
if (pipe == 1) {
temp |= LVDS_PIPEB_SELECT;
} else {
temp &= ~LVDS_PIPEB_SELECT;
}
temp &= ~LVDS_PIPE_SEL_MASK;
temp |= LVDS_PIPE_SEL(pipe);
}
/* set the corresponding LVDS_BORDER bit */
@ -943,7 +941,11 @@ static bool compute_is_dual_link_lvds(struct intel_lvds_encoder *lvds_encoder)
* register is uninitialized.
*/
val = I915_READ(lvds_encoder->reg);
if (!(val & ~(LVDS_PIPE_MASK | LVDS_DETECTED)))
if (HAS_PCH_CPT(dev_priv))
val &= ~(LVDS_DETECTED | LVDS_PIPE_SEL_MASK_CPT);
else
val &= ~(LVDS_DETECTED | LVDS_PIPE_SEL_MASK);
if (val == 0)
val = dev_priv->vbt.bios_lvds_val;
return (val & LVDS_CLKB_POWER_MASK) == LVDS_CLKB_POWER_UP;
@ -998,8 +1000,16 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
return;
/* Skip init on machines we know falsely report LVDS */
if (dmi_check_system(intel_no_lvds))
if (dmi_check_system(intel_no_lvds)) {
WARN(!dev_priv->vbt.int_lvds_support,
"Useless DMI match. Internal LVDS support disabled by VBT\n");
return;
}
if (!dev_priv->vbt.int_lvds_support) {
DRM_DEBUG_KMS("Internal LVDS support disabled by VBT\n");
return;
}
if (HAS_PCH_SPLIT(dev_priv))
lvds_reg = PCH_LVDS;
@ -1011,10 +1021,6 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
if (HAS_PCH_SPLIT(dev_priv)) {
if ((lvds & LVDS_DETECTED) == 0)
return;
if (dev_priv->vbt.edp.support) {
DRM_DEBUG_KMS("disable LVDS for eDP support\n");
return;
}
}
pin = GMBUS_PIN_PANEL;

drivers/gpu/drm/i915/intel_psr.c

@ -97,10 +97,6 @@ void intel_psr_irq_control(struct drm_i915_private *dev_priv, bool debug)
{
u32 debug_mask, mask;
/* No PSR interrupts on VLV/CHV */
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
return;
mask = EDP_PSR_ERROR(TRANSCODER_EDP);
debug_mask = EDP_PSR_POST_EXIT(TRANSCODER_EDP) |
EDP_PSR_PRE_ENTRY(TRANSCODER_EDP);
@ -201,15 +197,6 @@ void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir)
}
}
static bool intel_dp_get_y_coord_required(struct intel_dp *intel_dp)
{
uint8_t psr_caps = 0;
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_CAPS, &psr_caps) != 1)
return false;
return psr_caps & DP_PSR2_SU_Y_COORDINATE_REQUIRED;
}
static bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp)
{
uint8_t dprx = 0;
@ -232,13 +219,13 @@ static bool intel_dp_get_alpm_status(struct intel_dp *intel_dp)
static u8 intel_dp_get_sink_sync_latency(struct intel_dp *intel_dp)
{
u8 val = 0;
u8 val = 8; /* assume the worst if we can't read the value */
if (drm_dp_dpcd_readb(&intel_dp->aux,
DP_SYNCHRONIZATION_LATENCY_IN_SINK, &val) == 1)
val &= DP_MAX_RESYNC_FRAME_COUNT_MASK;
else
DRM_ERROR("Unable to get sink synchronization latency\n");
DRM_DEBUG_KMS("Unable to get sink synchronization latency, assuming 8 frames\n");
return val;
}
@ -250,13 +237,25 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp)
drm_dp_dpcd_read(&intel_dp->aux, DP_PSR_SUPPORT, intel_dp->psr_dpcd,
sizeof(intel_dp->psr_dpcd));
if (intel_dp->psr_dpcd[0]) {
dev_priv->psr.sink_support = true;
DRM_DEBUG_KMS("Detected EDP PSR Panel.\n");
if (!intel_dp->psr_dpcd[0])
return;
DRM_DEBUG_KMS("eDP panel supports PSR version %x\n",
intel_dp->psr_dpcd[0]);
if (!(intel_dp->edp_dpcd[1] & DP_EDP_SET_POWER_CAP)) {
DRM_DEBUG_KMS("Panel lacks power state control, PSR cannot be enabled\n");
return;
}
dev_priv->psr.sink_support = true;
dev_priv->psr.sink_sync_latency =
intel_dp_get_sink_sync_latency(intel_dp);
if (INTEL_GEN(dev_priv) >= 9 &&
(intel_dp->psr_dpcd[0] == DP_PSR2_WITH_Y_COORD_IS_SUPPORTED)) {
bool y_req = intel_dp->psr_dpcd[1] &
DP_PSR2_SU_Y_COORDINATE_REQUIRED;
bool alpm = intel_dp_get_alpm_status(intel_dp);
/*
* All panels that supports PSR version 03h (PSR2 +
* Y-coordinate) can handle Y-coordinates in VSC but we are
@ -268,47 +267,17 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp)
* Y-coordinate requirement panels we would need to enable
* GTC first.
*/
dev_priv->psr.sink_psr2_support =
intel_dp_get_y_coord_required(intel_dp);
DRM_DEBUG_KMS("PSR2 %s on sink", dev_priv->psr.sink_psr2_support
? "supported" : "not supported");
dev_priv->psr.sink_psr2_support = y_req && alpm;
DRM_DEBUG_KMS("PSR2 %ssupported\n",
dev_priv->psr.sink_psr2_support ? "" : "not ");
if (dev_priv->psr.sink_psr2_support) {
dev_priv->psr.colorimetry_support =
intel_dp_get_colorimetry_status(intel_dp);
dev_priv->psr.alpm =
intel_dp_get_alpm_status(intel_dp);
dev_priv->psr.sink_sync_latency =
intel_dp_get_sink_sync_latency(intel_dp);
}
}
}
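The capability checks above all reduce to a handful of DPCD byte tests: the PSR version in psr_dpcd[0], the Y-coordinate bit in psr_dpcd[1], and the separately read ALPM capability. As a compact sketch of the same PSR2 decision, with the raw values passed in directly (the helper name is illustrative only):

static bool example_sink_supports_psr2(u8 psr_version, u8 psr_caps,
				       bool alpm, int gen)
{
	if (gen < 9 || psr_version != DP_PSR2_WITH_Y_COORD_IS_SUPPORTED)
		return false;

	/* PSR2 is used only if the sink wants Y-coordinates and has ALPM. */
	return (psr_caps & DP_PSR2_SU_Y_COORDINATE_REQUIRED) && alpm;
}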
static bool vlv_is_psr_active_on_pipe(struct drm_device *dev, int pipe)
{
struct drm_i915_private *dev_priv = to_i915(dev);
uint32_t val;
val = I915_READ(VLV_PSRSTAT(pipe)) &
VLV_EDP_PSR_CURR_STATE_MASK;
return (val == VLV_EDP_PSR_ACTIVE_NORFB_UP) ||
(val == VLV_EDP_PSR_ACTIVE_SF_UPDATE);
}
static void vlv_psr_setup_vsc(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
uint32_t val;
/* VLV auto-generate VSC package as per EDP 1.3 spec, Table 3.10 */
val = I915_READ(VLV_VSCSDP(crtc->pipe));
val &= ~VLV_EDP_PSR_SDP_FREQ_MASK;
val |= VLV_EDP_PSR_SDP_FREQ_EVFRAME;
I915_WRITE(VLV_VSCSDP(crtc->pipe), val);
}
static void hsw_psr_setup_vsc(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
@ -341,12 +310,6 @@ static void hsw_psr_setup_vsc(struct intel_dp *intel_dp,
DP_SDP_VSC, &psr_vsc, sizeof(psr_vsc));
}
static void vlv_psr_enable_sink(struct intel_dp *intel_dp)
{
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG,
DP_PSR_ENABLE | DP_PSR_MAIN_LINK_ACTIVE);
}
static void hsw_psr_setup_aux(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
@ -389,13 +352,12 @@ static void hsw_psr_enable_sink(struct intel_dp *intel_dp)
u8 dpcd_val = DP_PSR_ENABLE;
/* Enable ALPM at sink for psr2 */
if (dev_priv->psr.psr2_enabled && dev_priv->psr.alpm)
drm_dp_dpcd_writeb(&intel_dp->aux,
DP_RECEIVER_ALPM_CONFIG,
DP_ALPM_ENABLE);
if (dev_priv->psr.psr2_enabled)
if (dev_priv->psr.psr2_enabled) {
drm_dp_dpcd_writeb(&intel_dp->aux, DP_RECEIVER_ALPM_CONFIG,
DP_ALPM_ENABLE);
dpcd_val |= DP_PSR_ENABLE_PSR2;
}
if (dev_priv->psr.link_standby)
dpcd_val |= DP_PSR_MAIN_LINK_ACTIVE;
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, dpcd_val);
@ -403,81 +365,49 @@ static void hsw_psr_enable_sink(struct intel_dp *intel_dp)
drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
}
static void vlv_psr_enable_source(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
/* Transition from PSR_state 0 (disabled) to PSR_state 1 (inactive) */
I915_WRITE(VLV_PSRCTL(crtc->pipe),
VLV_EDP_PSR_MODE_SW_TIMER |
VLV_EDP_PSR_SRC_TRANSMITTER_STATE |
VLV_EDP_PSR_ENABLE);
}
static void vlv_psr_activate(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = dig_port->base.base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_crtc *crtc = dig_port->base.base.crtc;
enum pipe pipe = to_intel_crtc(crtc)->pipe;
/*
* Let's do the transition from PSR_state 1 (inactive) to
* PSR_state 2 (transition to active - static frame transmission).
* Then Hardware is responsible for the transition to
* PSR_state 3 (active - no Remote Frame Buffer (RFB) update).
*/
I915_WRITE(VLV_PSRCTL(pipe), I915_READ(VLV_PSRCTL(pipe)) |
VLV_EDP_PSR_ACTIVE_ENTRY);
}
static void hsw_activate_psr1(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = dig_port->base.base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
u32 max_sleep_time = 0x1f;
u32 val = EDP_PSR_ENABLE;
uint32_t max_sleep_time = 0x1f;
/*
* Let's respect VBT in case VBT asks a higher idle_frame value.
* Let's use 6 as the minimum to cover all known cases including
* the off-by-one issue that HW has in some cases. Also there are
* cases where sink should be able to train
* with the 5 or 6 idle patterns.
/* Let's use 6 as the minimum to cover all known cases including the
* off-by-one issue that HW has in some cases.
*/
uint32_t idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
uint32_t val = EDP_PSR_ENABLE;
int idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
val |= max_sleep_time << EDP_PSR_MAX_SLEEP_TIME_SHIFT;
/* sink_sync_latency of 8 means source has to wait for more than 8
* frames, we'll go with 9 frames for now
*/
idle_frames = max(idle_frames, dev_priv->psr.sink_sync_latency + 1);
val |= idle_frames << EDP_PSR_IDLE_FRAME_SHIFT;
val |= max_sleep_time << EDP_PSR_MAX_SLEEP_TIME_SHIFT;
if (IS_HASWELL(dev_priv))
val |= EDP_PSR_MIN_LINK_ENTRY_TIME_8_LINES;
if (dev_priv->psr.link_standby)
val |= EDP_PSR_LINK_STANDBY;
if (dev_priv->vbt.psr.tp1_wakeup_time > 5)
val |= EDP_PSR_TP1_TIME_2500us;
else if (dev_priv->vbt.psr.tp1_wakeup_time > 1)
val |= EDP_PSR_TP1_TIME_500us;
else if (dev_priv->vbt.psr.tp1_wakeup_time > 0)
if (dev_priv->vbt.psr.tp1_wakeup_time_us == 0)
val |= EDP_PSR_TP1_TIME_0us;
else if (dev_priv->vbt.psr.tp1_wakeup_time_us <= 100)
val |= EDP_PSR_TP1_TIME_100us;
else if (dev_priv->vbt.psr.tp1_wakeup_time_us <= 500)
val |= EDP_PSR_TP1_TIME_500us;
else
val |= EDP_PSR_TP1_TIME_0us;
val |= EDP_PSR_TP1_TIME_2500us;
if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 5)
val |= EDP_PSR_TP2_TP3_TIME_2500us;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 1)
val |= EDP_PSR_TP2_TP3_TIME_500us;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 0)
if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us == 0)
val |= EDP_PSR_TP2_TP3_TIME_0us;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 100)
val |= EDP_PSR_TP2_TP3_TIME_100us;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 500)
val |= EDP_PSR_TP2_TP3_TIME_500us;
else
val |= EDP_PSR_TP2_TP3_TIME_0us;
val |= EDP_PSR_TP2_TP3_TIME_2500us;
if (intel_dp_source_supports_hbr2(intel_dp) &&
drm_dp_tps3_supported(intel_dp->dpcd))
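Putting the idle-frame clamp and the TP1 bucket selection above together as a worked sketch: with, say, a VBT idle-frame count of 2, a sink sync latency of 8 frames and a VBT TP1 wakeup time of 500 us (all values illustrative), this picks max(max(6, 2), 8 + 1) = 9 idle frames and the EDP_PSR_TP1_TIME_500us bucket:

static u32 example_psr1_timing(int vbt_idle_frames, u8 sink_sync_latency,
			       int tp1_wakeup_time_us)
{
	int idle_frames = max(6, vbt_idle_frames);
	u32 val = 0;

	/* Wait at least one frame longer than the sink needs to resync. */
	idle_frames = max(idle_frames, sink_sync_latency + 1);
	val |= idle_frames << EDP_PSR_IDLE_FRAME_SHIFT;

	if (tp1_wakeup_time_us == 0)
		val |= EDP_PSR_TP1_TIME_0us;
	else if (tp1_wakeup_time_us <= 100)
		val |= EDP_PSR_TP1_TIME_100us;
	else if (tp1_wakeup_time_us <= 500)
		val |= EDP_PSR_TP1_TIME_500us;
	else
		val |= EDP_PSR_TP1_TIME_2500us;

	return val;
}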
@ -494,15 +424,15 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = dig_port->base.base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
/*
* Let's respect VBT in case VBT asks a higher idle_frame value.
* Let's use 6 as the minimum to cover all known cases including
* the off-by-one issue that HW has in some cases. Also there are
* cases where sink should be able to train
* with the 5 or 6 idle patterns.
u32 val;
/* Let's use 6 as the minimum to cover all known cases including the
* off-by-one issue that HW has in some cases.
*/
uint32_t idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
u32 val = idle_frames << EDP_PSR2_IDLE_FRAME_SHIFT;
int idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
idle_frames = max(idle_frames, dev_priv->psr.sink_sync_latency + 1);
val = idle_frames << EDP_PSR2_IDLE_FRAME_SHIFT;
/* FIXME: selective update is probably totally broken because it doesn't
* mesh at all with our frontbuffer tracking. And the hw alone isn't
@ -513,14 +443,15 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
val |= EDP_PSR2_FRAME_BEFORE_SU(dev_priv->psr.sink_sync_latency + 1);
if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 5)
val |= EDP_PSR2_TP2_TIME_2500;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 1)
val |= EDP_PSR2_TP2_TIME_500;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 0)
val |= EDP_PSR2_TP2_TIME_100;
if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us >= 0 &&
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 50)
val |= EDP_PSR2_TP2_TIME_50us;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 100)
val |= EDP_PSR2_TP2_TIME_100us;
else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time_us <= 500)
val |= EDP_PSR2_TP2_TIME_500us;
else
val |= EDP_PSR2_TP2_TIME_50;
val |= EDP_PSR2_TP2_TIME_2500us;
I915_WRITE(EDP_PSR2_CTL, val);
}
@ -602,17 +533,11 @@ void intel_psr_compute_config(struct intel_dp *intel_dp,
* ones. Since by Display design transcoder EDP is tied to port A
* we can safely escape based on the port A.
*/
if (HAS_DDI(dev_priv) && dig_port->base.port != PORT_A) {
if (dig_port->base.port != PORT_A) {
DRM_DEBUG_KMS("PSR condition failed: Port not supported\n");
return;
}
if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) &&
!dev_priv->psr.link_standby) {
DRM_ERROR("PSR condition failed: Link off requested but not supported on this platform\n");
return;
}
if (IS_HASWELL(dev_priv) &&
I915_READ(HSW_STEREO_3D_CTL(crtc_state->cpu_transcoder)) &
S3D_ENABLE) {
@ -640,11 +565,6 @@ void intel_psr_compute_config(struct intel_dp *intel_dp,
return;
}
if (!(intel_dp->edp_dpcd[1] & DP_EDP_SET_POWER_CAP)) {
DRM_DEBUG_KMS("PSR condition failed: panel lacks power state control\n");
return;
}
crtc_state->has_psr = true;
crtc_state->has_psr2 = intel_psr2_config_valid(intel_dp, crtc_state);
DRM_DEBUG_KMS("Enabling PSR%s\n", crtc_state->has_psr2 ? "2" : "");
@ -760,7 +680,6 @@ void intel_psr_enable(struct intel_dp *intel_dp,
* enabled.
* However on some platforms we face issues when first
* activation follows a modeset so quickly.
* - On VLV/CHV we get a blank screen on first activation
* - On HSW/BDW we get a recoverable frozen screen until
* next exit-activate sequence.
*/
@ -772,36 +691,6 @@ void intel_psr_enable(struct intel_dp *intel_dp,
mutex_unlock(&dev_priv->psr.lock);
}
static void vlv_psr_disable(struct intel_dp *intel_dp,
const struct intel_crtc_state *old_crtc_state)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = intel_dig_port->base.base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
uint32_t val;
if (dev_priv->psr.active) {
/* Put VLV PSR back to PSR_state 0 (disabled). */
if (intel_wait_for_register(dev_priv,
VLV_PSRSTAT(crtc->pipe),
VLV_EDP_PSR_IN_TRANS,
0,
1))
WARN(1, "PSR transition took longer than expected\n");
val = I915_READ(VLV_PSRCTL(crtc->pipe));
val &= ~VLV_EDP_PSR_ACTIVE_ENTRY;
val &= ~VLV_EDP_PSR_ENABLE;
val &= ~VLV_EDP_PSR_MODE_MASK;
I915_WRITE(VLV_PSRCTL(crtc->pipe), val);
dev_priv->psr.active = false;
} else {
WARN_ON(vlv_is_psr_active_on_pipe(dev, crtc->pipe));
}
}
static void hsw_psr_disable(struct intel_dp *intel_dp,
const struct intel_crtc_state *old_crtc_state)
{
@ -894,21 +783,12 @@ static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
if (!intel_dp)
return false;
if (HAS_DDI(dev_priv)) {
if (dev_priv->psr.psr2_enabled) {
reg = EDP_PSR2_STATUS;
mask = EDP_PSR2_STATUS_STATE_MASK;
} else {
reg = EDP_PSR_STATUS;
mask = EDP_PSR_STATUS_STATE_MASK;
}
if (dev_priv->psr.psr2_enabled) {
reg = EDP_PSR2_STATUS;
mask = EDP_PSR2_STATUS_STATE_MASK;
} else {
struct drm_crtc *crtc =
dp_to_dig_port(intel_dp)->base.base.crtc;
enum pipe pipe = to_intel_crtc(crtc)->pipe;
reg = VLV_PSRSTAT(pipe);
mask = VLV_EDP_PSR_IN_TRANS;
reg = EDP_PSR_STATUS;
mask = EDP_PSR_STATUS_STATE_MASK;
}
mutex_unlock(&dev_priv->psr.lock);
@ -953,102 +833,23 @@ static void intel_psr_work(struct work_struct *work)
static void intel_psr_exit(struct drm_i915_private *dev_priv)
{
struct intel_dp *intel_dp = dev_priv->psr.enabled;
struct drm_crtc *crtc = dp_to_dig_port(intel_dp)->base.base.crtc;
enum pipe pipe = to_intel_crtc(crtc)->pipe;
u32 val;
if (!dev_priv->psr.active)
return;
if (HAS_DDI(dev_priv)) {
if (dev_priv->psr.psr2_enabled) {
val = I915_READ(EDP_PSR2_CTL);
WARN_ON(!(val & EDP_PSR2_ENABLE));
I915_WRITE(EDP_PSR2_CTL, val & ~EDP_PSR2_ENABLE);
} else {
val = I915_READ(EDP_PSR_CTL);
WARN_ON(!(val & EDP_PSR_ENABLE));
I915_WRITE(EDP_PSR_CTL, val & ~EDP_PSR_ENABLE);
}
if (dev_priv->psr.psr2_enabled) {
val = I915_READ(EDP_PSR2_CTL);
WARN_ON(!(val & EDP_PSR2_ENABLE));
I915_WRITE(EDP_PSR2_CTL, val & ~EDP_PSR2_ENABLE);
} else {
val = I915_READ(VLV_PSRCTL(pipe));
/*
* Here we do the transition directly from
* PSR_state 3 (active - no Remote Frame Buffer (RFB) update) to
* PSR_state 5 (exit).
* PSR State 4 (active with single frame update) can be skipped.
* On PSR_state 5 (exit) the hardware is responsible for transitioning
* back to PSR_state 1 (inactive).
* We are then in the same state as after vlv_psr_enable_source.
*/
val &= ~VLV_EDP_PSR_ACTIVE_ENTRY;
I915_WRITE(VLV_PSRCTL(pipe), val);
/*
* Send AUX wake up - Spec says after transitioning to PSR
* active we have to send AUX wake up by writing 01h in DPCD
* 600h of sink device.
* XXX: This might slow down the transition, but without this
* HW doesn't complete the transition to PSR_state 1 and we
* never get the screen updated.
*/
drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER,
DP_SET_POWER_D0);
val = I915_READ(EDP_PSR_CTL);
WARN_ON(!(val & EDP_PSR_ENABLE));
I915_WRITE(EDP_PSR_CTL, val & ~EDP_PSR_ENABLE);
}
dev_priv->psr.active = false;
}
/**
* intel_psr_single_frame_update - Single Frame Update
* @dev_priv: i915 device
* @frontbuffer_bits: frontbuffer plane tracking bits
*
* Some platforms support a single frame update feature that is used to
* send and update only one frame on Remote Frame Buffer.
* So far it is only implemented for Valleyview and Cherryview because
* hardware requires this to be done before a page flip.
*/
void intel_psr_single_frame_update(struct drm_i915_private *dev_priv,
unsigned frontbuffer_bits)
{
struct drm_crtc *crtc;
enum pipe pipe;
u32 val;
if (!CAN_PSR(dev_priv))
return;
/*
* Single frame update is already supported on BDW+ but it requires
* many W/A and it isn't really needed.
*/
if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv))
return;
mutex_lock(&dev_priv->psr.lock);
if (!dev_priv->psr.enabled) {
mutex_unlock(&dev_priv->psr.lock);
return;
}
crtc = dp_to_dig_port(dev_priv->psr.enabled)->base.base.crtc;
pipe = to_intel_crtc(crtc)->pipe;
if (frontbuffer_bits & INTEL_FRONTBUFFER_ALL_MASK(pipe)) {
val = I915_READ(VLV_PSRCTL(pipe));
/*
* We need to set this bit before writing registers for a flip.
* This bit will be self-clear when it gets to the PSR active state.
*/
I915_WRITE(VLV_PSRCTL(pipe), val | VLV_EDP_PSR_SINGLE_FRAME_UPDATE);
}
mutex_unlock(&dev_priv->psr.lock);
}
/**
* intel_psr_invalidate - Invalidate PSR
* @dev_priv: i915 device
@ -1071,7 +872,7 @@ void intel_psr_invalidate(struct drm_i915_private *dev_priv,
if (!CAN_PSR(dev_priv))
return;
if (dev_priv->psr.has_hw_tracking && origin == ORIGIN_FLIP)
if (origin == ORIGIN_FLIP)
return;
mutex_lock(&dev_priv->psr.lock);
@ -1114,7 +915,7 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
if (!CAN_PSR(dev_priv))
return;
if (dev_priv->psr.has_hw_tracking && origin == ORIGIN_FLIP)
if (origin == ORIGIN_FLIP)
return;
mutex_lock(&dev_priv->psr.lock);
@ -1131,8 +932,7 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
/* By definition flush = invalidate + flush */
if (frontbuffer_bits) {
if (dev_priv->psr.psr2_enabled ||
IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
if (dev_priv->psr.psr2_enabled) {
intel_psr_exit(dev_priv);
} else {
/*
@ -1184,9 +984,6 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
/* HSW and BDW require workarounds that we don't implement. */
dev_priv->psr.link_standby = false;
else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
/* On VLV and CHV only standby mode is supported. */
dev_priv->psr.link_standby = true;
else
/* For new platforms let's respect VBT back again */
dev_priv->psr.link_standby = dev_priv->vbt.psr.full_link;
@ -1204,18 +1001,10 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
INIT_DELAYED_WORK(&dev_priv->psr.work, intel_psr_work);
mutex_init(&dev_priv->psr.lock);
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
dev_priv->psr.enable_source = vlv_psr_enable_source;
dev_priv->psr.disable_source = vlv_psr_disable;
dev_priv->psr.enable_sink = vlv_psr_enable_sink;
dev_priv->psr.activate = vlv_psr_activate;
dev_priv->psr.setup_vsc = vlv_psr_setup_vsc;
} else {
dev_priv->psr.has_hw_tracking = true;
dev_priv->psr.enable_source = hsw_psr_enable_source;
dev_priv->psr.disable_source = hsw_psr_disable;
dev_priv->psr.enable_sink = hsw_psr_enable_sink;
dev_priv->psr.activate = hsw_psr_activate;
dev_priv->psr.setup_vsc = hsw_psr_setup_vsc;
}
dev_priv->psr.enable_source = hsw_psr_enable_source;
dev_priv->psr.disable_source = hsw_psr_disable;
dev_priv->psr.enable_sink = hsw_psr_enable_sink;
dev_priv->psr.activate = hsw_psr_activate;
dev_priv->psr.setup_vsc = hsw_psr_setup_vsc;
}

drivers/gpu/drm/i915/intel_ringbuffer.c

@ -531,9 +531,22 @@ static int init_ring_common(struct intel_engine_cs *engine)
return ret;
}
static void reset_ring_common(struct intel_engine_cs *engine,
struct i915_request *request)
static struct i915_request *reset_prepare(struct intel_engine_cs *engine)
{
intel_engine_stop_cs(engine);
if (engine->irq_seqno_barrier)
engine->irq_seqno_barrier(engine);
return i915_gem_find_active_request(engine);
}
static void reset_ring(struct intel_engine_cs *engine,
struct i915_request *request)
{
GEM_TRACE("%s seqno=%x\n",
engine->name, request ? request->global_seqno : 0);
/*
* RC6 must be prevented until the reset is complete and the engine
* reinitialised. If it occurs in the middle of this sequence, the
@ -558,8 +571,7 @@ static void reset_ring_common(struct intel_engine_cs *engine,
*/
if (request) {
struct drm_i915_private *dev_priv = request->i915;
struct intel_context *ce = to_intel_context(request->ctx,
engine);
struct intel_context *ce = request->hw_context;
struct i915_hw_ppgtt *ppgtt;
if (ce->state) {
@ -571,7 +583,7 @@ static void reset_ring_common(struct intel_engine_cs *engine,
CCID_EN);
}
ppgtt = request->ctx->ppgtt ?: engine->i915->mm.aliasing_ppgtt;
ppgtt = request->gem_context->ppgtt ?: engine->i915->mm.aliasing_ppgtt;
if (ppgtt) {
u32 pd_offset = ppgtt->pd.base.ggtt_offset << 10;
@ -597,6 +609,10 @@ static void reset_ring_common(struct intel_engine_cs *engine,
}
}
static void reset_finish(struct intel_engine_cs *engine)
{
}
static int intel_rcs_ctx_init(struct i915_request *rq)
{
int ret;
@ -1105,7 +1121,7 @@ intel_ring_create_vma(struct drm_i915_private *dev_priv, int size)
/* mark ring buffers as read-only from GPU side by default */
obj->gt_ro = 1;
vma = i915_vma_instance(obj, &dev_priv->ggtt.base, NULL);
vma = i915_vma_instance(obj, &dev_priv->ggtt.vm, NULL);
if (IS_ERR(vma))
goto err;
@ -1169,10 +1185,22 @@ intel_ring_free(struct intel_ring *ring)
kfree(ring);
}
static int context_pin(struct intel_context *ce)
static void intel_ring_context_destroy(struct intel_context *ce)
{
struct i915_vma *vma = ce->state;
int ret;
GEM_BUG_ON(ce->pin_count);
if (ce->state)
__i915_gem_object_release_unless_active(ce->state->obj);
}
static int __context_pin(struct intel_context *ce)
{
struct i915_vma *vma;
int err;
vma = ce->state;
if (!vma)
return 0;
/*
* Clear this page out of any CPU caches for coherent swap-in/out.
@ -1180,13 +1208,42 @@ static int context_pin(struct intel_context *ce)
* on an active context (which by nature is already on the GPU).
*/
if (!(vma->flags & I915_VMA_GLOBAL_BIND)) {
ret = i915_gem_object_set_to_gtt_domain(vma->obj, true);
if (ret)
return ret;
err = i915_gem_object_set_to_gtt_domain(vma->obj, true);
if (err)
return err;
}
return i915_vma_pin(vma, 0, I915_GTT_MIN_ALIGNMENT,
PIN_GLOBAL | PIN_HIGH);
err = i915_vma_pin(vma, 0, I915_GTT_MIN_ALIGNMENT,
PIN_GLOBAL | PIN_HIGH);
if (err)
return err;
/*
* And mark it as a globally pinned object to let the shrinker know
* it cannot reclaim the object until we release it.
*/
vma->obj->pin_global++;
return 0;
}
static void __context_unpin(struct intel_context *ce)
{
struct i915_vma *vma;
vma = ce->state;
if (!vma)
return;
vma->obj->pin_global--;
i915_vma_unpin(vma);
}
static void intel_ring_context_unpin(struct intel_context *ce)
{
__context_unpin(ce);
i915_gem_context_put(ce->gem_context);
}
static struct i915_vma *
@ -1243,7 +1300,7 @@ alloc_context_vma(struct intel_engine_cs *engine)
i915_gem_object_set_cache_level(obj, I915_CACHE_L3_LLC);
}
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_obj;
@ -1258,67 +1315,62 @@ alloc_context_vma(struct intel_engine_cs *engine)
return ERR_PTR(err);
}
static struct intel_ring *
intel_ring_context_pin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
static struct intel_context *
__ring_context_pin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx,
struct intel_context *ce)
{
struct intel_context *ce = to_intel_context(ctx, engine);
int ret;
lockdep_assert_held(&ctx->i915->drm.struct_mutex);
if (likely(ce->pin_count++))
goto out;
GEM_BUG_ON(!ce->pin_count); /* no overflow please! */
int err;
if (!ce->state && engine->context_size) {
struct i915_vma *vma;
vma = alloc_context_vma(engine);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
err = PTR_ERR(vma);
goto err;
}
ce->state = vma;
}
if (ce->state) {
ret = context_pin(ce);
if (ret)
goto err;
ce->state->obj->pin_global++;
}
err = __context_pin(ce);
if (err)
goto err;
i915_gem_context_get(ctx);
out:
/* One ringbuffer to rule them all */
return engine->buffer;
GEM_BUG_ON(!engine->buffer);
ce->ring = engine->buffer;
return ce;
err:
ce->pin_count = 0;
return ERR_PTR(ret);
return ERR_PTR(err);
}
static void intel_ring_context_unpin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
static const struct intel_context_ops ring_context_ops = {
.unpin = intel_ring_context_unpin,
.destroy = intel_ring_context_destroy,
};
static struct intel_context *
intel_ring_context_pin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
{
struct intel_context *ce = to_intel_context(ctx, engine);
lockdep_assert_held(&ctx->i915->drm.struct_mutex);
GEM_BUG_ON(ce->pin_count == 0);
if (--ce->pin_count)
return;
if (likely(ce->pin_count++))
return ce;
GEM_BUG_ON(!ce->pin_count); /* no overflow please! */
if (ce->state) {
ce->state->obj->pin_global--;
i915_vma_unpin(ce->state);
}
ce->ops = &ring_context_ops;
i915_gem_context_put(ctx);
return __ring_context_pin(engine, ctx, ce);
}
static int intel_init_ring_buffer(struct intel_engine_cs *engine)
@ -1329,10 +1381,6 @@ static int intel_init_ring_buffer(struct intel_engine_cs *engine)
intel_engine_setup_common(engine);
err = intel_engine_init_common(engine);
if (err)
goto err;
timeline = i915_timeline_create(engine->i915, engine->name);
if (IS_ERR(timeline)) {
err = PTR_ERR(timeline);
@ -1354,8 +1402,14 @@ static int intel_init_ring_buffer(struct intel_engine_cs *engine)
GEM_BUG_ON(engine->buffer);
engine->buffer = ring;
err = intel_engine_init_common(engine);
if (err)
goto err_unpin;
return 0;
err_unpin:
intel_ring_unpin(ring);
err_ring:
intel_ring_free(ring);
err:
@ -1441,7 +1495,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
*cs++ = MI_NOOP;
*cs++ = MI_SET_CONTEXT;
*cs++ = i915_ggtt_offset(to_intel_context(rq->ctx, engine)->state) | flags;
*cs++ = i915_ggtt_offset(rq->hw_context->state) | flags;
/*
* w/a: MI_SET_CONTEXT must always be followed by MI_NOOP
* WaMiSetContext_Hang:snb,ivb,vlv
@ -1509,7 +1563,7 @@ static int remap_l3(struct i915_request *rq, int slice)
static int switch_context(struct i915_request *rq)
{
struct intel_engine_cs *engine = rq->engine;
struct i915_gem_context *to_ctx = rq->ctx;
struct i915_gem_context *to_ctx = rq->gem_context;
struct i915_hw_ppgtt *to_mm =
to_ctx->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
struct i915_gem_context *from_ctx = engine->legacy_active_context;
@ -1532,7 +1586,7 @@ static int switch_context(struct i915_request *rq)
hw_flags = MI_FORCE_RESTORE;
}
if (to_intel_context(to_ctx, engine)->state &&
if (rq->hw_context->state &&
(to_ctx != from_ctx || hw_flags & MI_FORCE_RESTORE)) {
GEM_BUG_ON(engine->id != RCS);
@ -1580,7 +1634,7 @@ static int ring_request_alloc(struct i915_request *request)
{
int ret;
GEM_BUG_ON(!to_intel_context(request->ctx, request->engine)->pin_count);
GEM_BUG_ON(!request->hw_context->pin_count);
/* Flush enough space to reduce the likelihood of waiting after
* we start building the request - in which case we will just
@ -2006,11 +2060,11 @@ static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv,
intel_ring_init_semaphores(dev_priv, engine);
engine->init_hw = init_ring_common;
engine->reset_hw = reset_ring_common;
engine->reset.prepare = reset_prepare;
engine->reset.reset = reset_ring;
engine->reset.finish = reset_finish;
engine->context_pin = intel_ring_context_pin;
engine->context_unpin = intel_ring_context_unpin;
engine->request_alloc = ring_request_alloc;
engine->emit_breadcrumb = i9xx_emit_breadcrumb;

drivers/gpu/drm/i915/intel_ringbuffer.h

@ -342,6 +342,7 @@ struct intel_engine_cs {
struct i915_timeline timeline;
struct drm_i915_gem_object *default_state;
void *pinned_default_state;
atomic_t irq_count;
unsigned long irq_posted;
@ -423,18 +424,22 @@ struct intel_engine_cs {
void (*irq_disable)(struct intel_engine_cs *engine);
int (*init_hw)(struct intel_engine_cs *engine);
void (*reset_hw)(struct intel_engine_cs *engine,
struct i915_request *rq);
struct {
struct i915_request *(*prepare)(struct intel_engine_cs *engine);
void (*reset)(struct intel_engine_cs *engine,
struct i915_request *rq);
void (*finish)(struct intel_engine_cs *engine);
} reset;
void (*park)(struct intel_engine_cs *engine);
void (*unpark)(struct intel_engine_cs *engine);
void (*set_default_submission)(struct intel_engine_cs *engine);
struct intel_ring *(*context_pin)(struct intel_engine_cs *engine,
struct i915_gem_context *ctx);
void (*context_unpin)(struct intel_engine_cs *engine,
struct i915_gem_context *ctx);
struct intel_context *(*context_pin)(struct intel_engine_cs *engine,
struct i915_gem_context *ctx);
int (*request_alloc)(struct i915_request *rq);
int (*init_context)(struct i915_request *rq);
@ -550,7 +555,7 @@ struct intel_engine_cs {
* to the kernel context and trash it as the save may not happen
* before the hardware is powered down.
*/
struct i915_gem_context *last_retired_context;
struct intel_context *last_retired_context;
/* We track the current MI_SET_CONTEXT in order to eliminate
* redundant context switches. This presumes that requests are not
@ -873,6 +878,8 @@ int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine);
int intel_init_blt_ring_buffer(struct intel_engine_cs *engine);
int intel_init_vebox_ring_buffer(struct intel_engine_cs *engine);
int intel_engine_stop_cs(struct intel_engine_cs *engine);
u64 intel_engine_get_active_head(const struct intel_engine_cs *engine);
u64 intel_engine_get_last_batch_head(const struct intel_engine_cs *engine);
@ -1046,6 +1053,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine);
bool intel_engines_are_idle(struct drm_i915_private *dev_priv);
bool intel_engine_has_kernel_context(const struct intel_engine_cs *engine);
void intel_engine_lost_context(struct intel_engine_cs *engine);
void intel_engines_park(struct drm_i915_private *i915);
void intel_engines_unpark(struct drm_i915_private *i915);

drivers/gpu/drm/i915/intel_sdvo.c

@ -1403,27 +1403,37 @@ static bool intel_sdvo_connector_get_hw_state(struct intel_connector *connector)
return false;
}
bool intel_sdvo_port_enabled(struct drm_i915_private *dev_priv,
i915_reg_t sdvo_reg, enum pipe *pipe)
{
u32 val;
val = I915_READ(sdvo_reg);
/* asserts want to know the pipe even if the port is disabled */
if (HAS_PCH_CPT(dev_priv))
*pipe = (val & SDVO_PIPE_SEL_MASK_CPT) >> SDVO_PIPE_SEL_SHIFT_CPT;
else if (IS_CHERRYVIEW(dev_priv))
*pipe = (val & SDVO_PIPE_SEL_MASK_CHV) >> SDVO_PIPE_SEL_SHIFT_CHV;
else
*pipe = (val & SDVO_PIPE_SEL_MASK) >> SDVO_PIPE_SEL_SHIFT;
return val & SDVO_ENABLE;
}
static bool intel_sdvo_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_sdvo *intel_sdvo = to_sdvo(encoder);
u16 active_outputs = 0;
u32 tmp;
bool ret;
tmp = I915_READ(intel_sdvo->sdvo_reg);
intel_sdvo_get_active_outputs(intel_sdvo, &active_outputs);
if (!(tmp & SDVO_ENABLE) && (active_outputs == 0))
return false;
ret = intel_sdvo_port_enabled(dev_priv, intel_sdvo->sdvo_reg, pipe);
if (HAS_PCH_CPT(dev_priv))
*pipe = PORT_TO_PIPE_CPT(tmp);
else
*pipe = PORT_TO_PIPE(tmp);
return true;
return ret || active_outputs;
}
static void intel_sdvo_get_config(struct intel_encoder *encoder,
@ -1550,8 +1560,8 @@ static void intel_disable_sdvo(struct intel_encoder *encoder,
intel_set_cpu_fifo_underrun_reporting(dev_priv, PIPE_A, false);
intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false);
temp &= ~SDVO_PIPE_B_SELECT;
temp |= SDVO_ENABLE;
temp &= ~SDVO_PIPE_SEL_MASK;
temp |= SDVO_ENABLE | SDVO_PIPE_SEL(PIPE_A);
intel_sdvo_write_sdvox(intel_sdvo, temp);
temp &= ~SDVO_ENABLE;

drivers/gpu/drm/i915/intel_sprite.c

@ -284,13 +284,35 @@ skl_update_plane(struct intel_plane *plane,
/* program plane scaler */
if (plane_state->scaler_id >= 0) {
int scaler_id = plane_state->scaler_id;
const struct intel_scaler *scaler;
const struct intel_scaler *scaler =
&crtc_state->scaler_state.scalers[scaler_id];
u16 y_hphase, uv_rgb_hphase;
u16 y_vphase, uv_rgb_vphase;
scaler = &crtc_state->scaler_state.scalers[scaler_id];
/* TODO: handle sub-pixel coordinates */
if (fb->format->format == DRM_FORMAT_NV12) {
y_hphase = skl_scaler_calc_phase(1, false);
y_vphase = skl_scaler_calc_phase(1, false);
/* MPEG2 chroma siting convention */
uv_rgb_hphase = skl_scaler_calc_phase(2, true);
uv_rgb_vphase = skl_scaler_calc_phase(2, false);
} else {
/* not used */
y_hphase = 0;
y_vphase = 0;
uv_rgb_hphase = skl_scaler_calc_phase(1, false);
uv_rgb_vphase = skl_scaler_calc_phase(1, false);
}
I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id),
PS_SCALER_EN | PS_PLANE_SEL(plane_id) | scaler->mode);
I915_WRITE_FW(SKL_PS_PWR_GATE(pipe, scaler_id), 0);
I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id),
PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase));
I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id),
PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase));
I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y);
I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id),
((crtc_w + 1) << 16)|(crtc_h + 1));
@ -327,19 +349,21 @@ skl_disable_plane(struct intel_plane *plane, struct intel_crtc *crtc)
}
bool
skl_plane_get_hw_state(struct intel_plane *plane)
skl_plane_get_hw_state(struct intel_plane *plane,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum intel_display_power_domain power_domain;
enum plane_id plane_id = plane->id;
enum pipe pipe = plane->pipe;
bool ret;
power_domain = POWER_DOMAIN_PIPE(pipe);
power_domain = POWER_DOMAIN_PIPE(plane->pipe);
if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
ret = I915_READ(PLANE_CTL(pipe, plane_id)) & PLANE_CTL_ENABLE;
ret = I915_READ(PLANE_CTL(plane->pipe, plane_id)) & PLANE_CTL_ENABLE;
*pipe = plane->pipe;
intel_display_power_put(dev_priv, power_domain);
@ -588,19 +612,21 @@ vlv_disable_plane(struct intel_plane *plane, struct intel_crtc *crtc)
}
static bool
vlv_plane_get_hw_state(struct intel_plane *plane)
vlv_plane_get_hw_state(struct intel_plane *plane,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum intel_display_power_domain power_domain;
enum plane_id plane_id = plane->id;
enum pipe pipe = plane->pipe;
bool ret;
power_domain = POWER_DOMAIN_PIPE(pipe);
power_domain = POWER_DOMAIN_PIPE(plane->pipe);
if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
ret = I915_READ(SPCNTR(pipe, plane_id)) & SP_ENABLE;
ret = I915_READ(SPCNTR(plane->pipe, plane_id)) & SP_ENABLE;
*pipe = plane->pipe;
intel_display_power_put(dev_priv, power_domain);
@ -754,18 +780,20 @@ ivb_disable_plane(struct intel_plane *plane, struct intel_crtc *crtc)
}
static bool
ivb_plane_get_hw_state(struct intel_plane *plane)
ivb_plane_get_hw_state(struct intel_plane *plane,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum intel_display_power_domain power_domain;
enum pipe pipe = plane->pipe;
bool ret;
power_domain = POWER_DOMAIN_PIPE(pipe);
power_domain = POWER_DOMAIN_PIPE(plane->pipe);
if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
ret = I915_READ(SPRCTL(pipe)) & SPRITE_ENABLE;
ret = I915_READ(SPRCTL(plane->pipe)) & SPRITE_ENABLE;
*pipe = plane->pipe;
intel_display_power_put(dev_priv, power_domain);
@ -910,18 +938,20 @@ g4x_disable_plane(struct intel_plane *plane, struct intel_crtc *crtc)
}
static bool
g4x_plane_get_hw_state(struct intel_plane *plane)
g4x_plane_get_hw_state(struct intel_plane *plane,
enum pipe *pipe)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum intel_display_power_domain power_domain;
enum pipe pipe = plane->pipe;
bool ret;
power_domain = POWER_DOMAIN_PIPE(pipe);
power_domain = POWER_DOMAIN_PIPE(plane->pipe);
if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
ret = I915_READ(DVSCNTR(pipe)) & DVS_ENABLE;
ret = I915_READ(DVSCNTR(plane->pipe)) & DVS_ENABLE;
*pipe = plane->pipe;
intel_display_power_put(dev_priv, power_domain);
@ -1303,8 +1333,8 @@ static bool skl_mod_supported(uint32_t format, uint64_t modifier)
}
static bool intel_sprite_plane_format_mod_supported(struct drm_plane *plane,
uint32_t format,
uint64_t modifier)
uint32_t format,
uint64_t modifier)
{
struct drm_i915_private *dev_priv = to_i915(plane->dev);
@ -1326,14 +1356,14 @@ static bool intel_sprite_plane_format_mod_supported(struct drm_plane *plane,
}
static const struct drm_plane_funcs intel_sprite_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.destroy = intel_plane_destroy,
.atomic_get_property = intel_plane_atomic_get_property,
.atomic_set_property = intel_plane_atomic_set_property,
.atomic_duplicate_state = intel_plane_duplicate_state,
.atomic_destroy_state = intel_plane_destroy_state,
.format_mod_supported = intel_sprite_plane_format_mod_supported,
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.destroy = intel_plane_destroy,
.atomic_get_property = intel_plane_atomic_get_property,
.atomic_set_property = intel_plane_atomic_set_property,
.atomic_duplicate_state = intel_plane_duplicate_state,
.atomic_destroy_state = intel_plane_destroy_state,
.format_mod_supported = intel_sprite_plane_format_mod_supported,
};
bool skl_plane_has_ccs(struct drm_i915_private *dev_priv,


@ -798,16 +798,12 @@ static struct intel_tv *intel_attached_tv(struct drm_connector *connector)
static bool
intel_tv_get_hw_state(struct intel_encoder *encoder, enum pipe *pipe)
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
u32 tmp = I915_READ(TV_CTL);
if (!(tmp & TV_ENC_ENABLE))
return false;
*pipe = (tmp & TV_ENC_PIPE_SEL_MASK) >> TV_ENC_PIPE_SEL_SHIFT;
*pipe = PORT_TO_PIPE(tmp);
return true;
return tmp & TV_ENC_ENABLE;
}
static void
@ -1024,8 +1020,7 @@ static void intel_tv_pre_enable(struct intel_encoder *encoder,
break;
}
if (intel_crtc->pipe == 1)
tv_ctl |= TV_ENC_PIPEB_SELECT;
tv_ctl |= TV_ENC_PIPE_SEL(intel_crtc->pipe);
tv_ctl |= tv_mode->oversample;
if (tv_mode->progressive)
@ -1149,12 +1144,9 @@ intel_tv_detect_type(struct intel_tv *intel_tv,
save_tv_ctl = tv_ctl = I915_READ(TV_CTL);
/* Poll for TV detection */
tv_ctl &= ~(TV_ENC_ENABLE | TV_TEST_MODE_MASK);
tv_ctl &= ~(TV_ENC_ENABLE | TV_ENC_PIPE_SEL_MASK | TV_TEST_MODE_MASK);
tv_ctl |= TV_TEST_MODE_MONITOR_DETECT;
if (intel_crtc->pipe == 1)
tv_ctl |= TV_ENC_PIPEB_SELECT;
else
tv_ctl &= ~TV_ENC_PIPEB_SELECT;
tv_ctl |= TV_ENC_PIPE_SEL(intel_crtc->pipe);
tv_dac &= ~(TVDAC_SENSE_MASK | DAC_A_MASK | DAC_B_MASK | DAC_C_MASK);
tv_dac |= (TVDAC_STATE_CHG_EN |


@ -50,10 +50,10 @@ static int __intel_uc_reset_hw(struct drm_i915_private *dev_priv)
return ret;
}
static int __get_platform_enable_guc(struct drm_i915_private *dev_priv)
static int __get_platform_enable_guc(struct drm_i915_private *i915)
{
struct intel_uc_fw *guc_fw = &dev_priv->guc.fw;
struct intel_uc_fw *huc_fw = &dev_priv->huc.fw;
struct intel_uc_fw *guc_fw = &i915->guc.fw;
struct intel_uc_fw *huc_fw = &i915->huc.fw;
int enable_guc = 0;
/* Default is to enable GuC/HuC if we know their firmwares */
@ -67,11 +67,11 @@ static int __get_platform_enable_guc(struct drm_i915_private *dev_priv)
return enable_guc;
}
static int __get_default_guc_log_level(struct drm_i915_private *dev_priv)
static int __get_default_guc_log_level(struct drm_i915_private *i915)
{
int guc_log_level;
if (!HAS_GUC(dev_priv) || !intel_uc_is_using_guc())
if (!HAS_GUC(i915) || !intel_uc_is_using_guc())
guc_log_level = GUC_LOG_LEVEL_DISABLED;
else if (IS_ENABLED(CONFIG_DRM_I915_DEBUG) ||
IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
@ -86,7 +86,7 @@ static int __get_default_guc_log_level(struct drm_i915_private *dev_priv)
/**
* sanitize_options_early - sanitize uC related modparam options
* @dev_priv: device private
* @i915: device private
*
* In case of "enable_guc" option this function will attempt to modify
* it only if it was initially set to "auto(-1)". Default value for this
@ -101,14 +101,14 @@ static int __get_default_guc_log_level(struct drm_i915_private *dev_priv)
* unless GuC is enabled on given platform and the driver is compiled with
* debug config when this modparam will default to "enable(1..4)".
*/
static void sanitize_options_early(struct drm_i915_private *dev_priv)
static void sanitize_options_early(struct drm_i915_private *i915)
{
struct intel_uc_fw *guc_fw = &dev_priv->guc.fw;
struct intel_uc_fw *huc_fw = &dev_priv->huc.fw;
struct intel_uc_fw *guc_fw = &i915->guc.fw;
struct intel_uc_fw *huc_fw = &i915->huc.fw;
/* A negative value means "use platform default" */
if (i915_modparams.enable_guc < 0)
i915_modparams.enable_guc = __get_platform_enable_guc(dev_priv);
i915_modparams.enable_guc = __get_platform_enable_guc(i915);
DRM_DEBUG_DRIVER("enable_guc=%d (submission:%s huc:%s)\n",
i915_modparams.enable_guc,
@ -119,28 +119,28 @@ static void sanitize_options_early(struct drm_i915_private *dev_priv)
if (intel_uc_is_using_guc() && !intel_uc_fw_is_selected(guc_fw)) {
DRM_WARN("Incompatible option detected: %s=%d, %s!\n",
"enable_guc", i915_modparams.enable_guc,
!HAS_GUC(dev_priv) ? "no GuC hardware" :
"no GuC firmware");
!HAS_GUC(i915) ? "no GuC hardware" :
"no GuC firmware");
}
/* Verify HuC firmware availability */
if (intel_uc_is_using_huc() && !intel_uc_fw_is_selected(huc_fw)) {
DRM_WARN("Incompatible option detected: %s=%d, %s!\n",
"enable_guc", i915_modparams.enable_guc,
!HAS_HUC(dev_priv) ? "no HuC hardware" :
"no HuC firmware");
!HAS_HUC(i915) ? "no HuC hardware" :
"no HuC firmware");
}
/* A negative value means "use platform/config default" */
if (i915_modparams.guc_log_level < 0)
i915_modparams.guc_log_level =
__get_default_guc_log_level(dev_priv);
__get_default_guc_log_level(i915);
if (i915_modparams.guc_log_level > 0 && !intel_uc_is_using_guc()) {
DRM_WARN("Incompatible option detected: %s=%d, %s!\n",
"guc_log_level", i915_modparams.guc_log_level,
!HAS_GUC(dev_priv) ? "no GuC hardware" :
"GuC not enabled");
!HAS_GUC(i915) ? "no GuC hardware" :
"GuC not enabled");
i915_modparams.guc_log_level = 0;
}
@ -195,15 +195,14 @@ void intel_uc_cleanup_early(struct drm_i915_private *i915)
/**
* intel_uc_init_mmio - setup uC MMIO access
*
* @dev_priv: device private
* @i915: device private
*
* Setup minimal state necessary for MMIO accesses later in the
* initialization sequence.
*/
void intel_uc_init_mmio(struct drm_i915_private *dev_priv)
void intel_uc_init_mmio(struct drm_i915_private *i915)
{
intel_guc_init_send_regs(&dev_priv->guc);
intel_guc_init_send_regs(&i915->guc);
}
static void guc_capture_load_err_log(struct intel_guc *guc)
@ -225,11 +224,11 @@ static void guc_free_load_err_log(struct intel_guc *guc)
static int guc_enable_communication(struct intel_guc *guc)
{
struct drm_i915_private *dev_priv = guc_to_i915(guc);
struct drm_i915_private *i915 = guc_to_i915(guc);
gen9_enable_guc_interrupts(dev_priv);
gen9_enable_guc_interrupts(i915);
if (HAS_GUC_CT(dev_priv))
if (HAS_GUC_CT(i915))
return intel_guc_ct_enable(&guc->ct);
guc->send = intel_guc_send_mmio;
@ -239,23 +238,23 @@ static int guc_enable_communication(struct intel_guc *guc)
static void guc_disable_communication(struct intel_guc *guc)
{
struct drm_i915_private *dev_priv = guc_to_i915(guc);
struct drm_i915_private *i915 = guc_to_i915(guc);
if (HAS_GUC_CT(dev_priv))
if (HAS_GUC_CT(i915))
intel_guc_ct_disable(&guc->ct);
gen9_disable_guc_interrupts(dev_priv);
gen9_disable_guc_interrupts(i915);
guc->send = intel_guc_send_nop;
guc->handler = intel_guc_to_host_event_handler_nop;
}
int intel_uc_init_misc(struct drm_i915_private *dev_priv)
int intel_uc_init_misc(struct drm_i915_private *i915)
{
struct intel_guc *guc = &dev_priv->guc;
struct intel_guc *guc = &i915->guc;
int ret;
if (!USES_GUC(dev_priv))
if (!USES_GUC(i915))
return 0;
intel_guc_init_ggtt_pin_bias(guc);
@ -267,32 +266,32 @@ int intel_uc_init_misc(struct drm_i915_private *dev_priv)
return 0;
}
void intel_uc_fini_misc(struct drm_i915_private *dev_priv)
void intel_uc_fini_misc(struct drm_i915_private *i915)
{
struct intel_guc *guc = &dev_priv->guc;
struct intel_guc *guc = &i915->guc;
if (!USES_GUC(dev_priv))
if (!USES_GUC(i915))
return;
intel_guc_fini_wq(guc);
}
int intel_uc_init(struct drm_i915_private *dev_priv)
int intel_uc_init(struct drm_i915_private *i915)
{
struct intel_guc *guc = &dev_priv->guc;
struct intel_guc *guc = &i915->guc;
int ret;
if (!USES_GUC(dev_priv))
if (!USES_GUC(i915))
return 0;
if (!HAS_GUC(dev_priv))
if (!HAS_GUC(i915))
return -ENODEV;
ret = intel_guc_init(guc);
if (ret)
return ret;
if (USES_GUC_SUBMISSION(dev_priv)) {
if (USES_GUC_SUBMISSION(i915)) {
/*
* This is stuff we need to have available at fw load time
* if we are planning to enable submission later
@ -307,16 +306,16 @@ int intel_uc_init(struct drm_i915_private *dev_priv)
return 0;
}
void intel_uc_fini(struct drm_i915_private *dev_priv)
void intel_uc_fini(struct drm_i915_private *i915)
{
struct intel_guc *guc = &dev_priv->guc;
struct intel_guc *guc = &i915->guc;
if (!USES_GUC(dev_priv))
if (!USES_GUC(i915))
return;
GEM_BUG_ON(!HAS_GUC(dev_priv));
GEM_BUG_ON(!HAS_GUC(i915));
if (USES_GUC_SUBMISSION(dev_priv))
if (USES_GUC_SUBMISSION(i915))
intel_guc_submission_fini(guc);
intel_guc_fini(guc);
@ -340,22 +339,22 @@ void intel_uc_sanitize(struct drm_i915_private *i915)
__intel_uc_reset_hw(i915);
}
int intel_uc_init_hw(struct drm_i915_private *dev_priv)
int intel_uc_init_hw(struct drm_i915_private *i915)
{
struct intel_guc *guc = &dev_priv->guc;
struct intel_huc *huc = &dev_priv->huc;
struct intel_guc *guc = &i915->guc;
struct intel_huc *huc = &i915->huc;
int ret, attempts;
if (!USES_GUC(dev_priv))
if (!USES_GUC(i915))
return 0;
GEM_BUG_ON(!HAS_GUC(dev_priv));
GEM_BUG_ON(!HAS_GUC(i915));
gen9_reset_guc_interrupts(dev_priv);
gen9_reset_guc_interrupts(i915);
/* WaEnableuKernelHeaderValidFix:skl */
/* WaEnableGuCBootHashCheckNotSet:skl,bxt,kbl */
if (IS_GEN9(dev_priv))
if (IS_GEN9(i915))
attempts = 3;
else
attempts = 1;
@ -365,11 +364,11 @@ int intel_uc_init_hw(struct drm_i915_private *dev_priv)
* Always reset the GuC just before (re)loading, so
* that the state and timing are fairly predictable
*/
ret = __intel_uc_reset_hw(dev_priv);
ret = __intel_uc_reset_hw(i915);
if (ret)
goto err_out;
if (USES_HUC(dev_priv)) {
if (USES_HUC(i915)) {
ret = intel_huc_fw_upload(huc);
if (ret)
goto err_out;
@ -392,24 +391,24 @@ int intel_uc_init_hw(struct drm_i915_private *dev_priv)
if (ret)
goto err_log_capture;
if (USES_HUC(dev_priv)) {
if (USES_HUC(i915)) {
ret = intel_huc_auth(huc);
if (ret)
goto err_communication;
}
if (USES_GUC_SUBMISSION(dev_priv)) {
if (USES_GUC_SUBMISSION(i915)) {
ret = intel_guc_submission_enable(guc);
if (ret)
goto err_communication;
}
dev_info(dev_priv->drm.dev, "GuC firmware version %u.%u\n",
dev_info(i915->drm.dev, "GuC firmware version %u.%u\n",
guc->fw.major_ver_found, guc->fw.minor_ver_found);
dev_info(dev_priv->drm.dev, "GuC submission %s\n",
enableddisabled(USES_GUC_SUBMISSION(dev_priv)));
dev_info(dev_priv->drm.dev, "HuC %s\n",
enableddisabled(USES_HUC(dev_priv)));
dev_info(i915->drm.dev, "GuC submission %s\n",
enableddisabled(USES_GUC_SUBMISSION(i915)));
dev_info(i915->drm.dev, "HuC %s\n",
enableddisabled(USES_HUC(i915)));
return 0;
@ -428,20 +427,20 @@ int intel_uc_init_hw(struct drm_i915_private *dev_priv)
if (GEM_WARN_ON(ret == -EIO))
ret = -EINVAL;
dev_err(dev_priv->drm.dev, "GuC initialization failed %d\n", ret);
dev_err(i915->drm.dev, "GuC initialization failed %d\n", ret);
return ret;
}
void intel_uc_fini_hw(struct drm_i915_private *dev_priv)
void intel_uc_fini_hw(struct drm_i915_private *i915)
{
struct intel_guc *guc = &dev_priv->guc;
struct intel_guc *guc = &i915->guc;
if (!USES_GUC(dev_priv))
if (!USES_GUC(i915))
return;
GEM_BUG_ON(!HAS_GUC(dev_priv));
GEM_BUG_ON(!HAS_GUC(i915));
if (USES_GUC_SUBMISSION(dev_priv))
if (USES_GUC_SUBMISSION(i915))
intel_guc_submission_disable(guc);
guc_disable_communication(guc);


@ -1702,15 +1702,9 @@ static void gen3_stop_engine(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
const u32 base = engine->mmio_base;
const i915_reg_t mode = RING_MI_MODE(base);
I915_WRITE_FW(mode, _MASKED_BIT_ENABLE(STOP_RING));
if (__intel_wait_for_register_fw(dev_priv,
mode, MODE_IDLE, MODE_IDLE,
500, 0,
NULL))
DRM_DEBUG_DRIVER("%s: timed out on STOP_RING\n",
engine->name);
if (intel_engine_stop_cs(engine))
DRM_DEBUG_DRIVER("%s: timed out on STOP_RING\n", engine->name);
I915_WRITE_FW(RING_HEAD(base), I915_READ_FW(RING_TAIL(base)));
POSTING_READ_FW(RING_HEAD(base)); /* paranoia */


@ -318,6 +318,12 @@ enum vbt_gmbus_ddi {
DDC_BUS_DDI_C,
DDC_BUS_DDI_D,
DDC_BUS_DDI_F,
ICL_DDC_BUS_DDI_A = 0x1,
ICL_DDC_BUS_DDI_B,
ICL_DDC_BUS_PORT_1 = 0x4,
ICL_DDC_BUS_PORT_2,
ICL_DDC_BUS_PORT_3,
ICL_DDC_BUS_PORT_4,
};
#define VBT_DP_MAX_LINK_RATE_HBR3 0
@ -635,7 +641,7 @@ struct bdb_sdvo_lvds_options {
#define BDB_DRIVER_FEATURE_NO_LVDS 0
#define BDB_DRIVER_FEATURE_INT_LVDS 1
#define BDB_DRIVER_FEATURE_SDVO_LVDS 2
#define BDB_DRIVER_FEATURE_EDP 3
#define BDB_DRIVER_FEATURE_INT_SDVO_LVDS 3
struct bdb_driver_features {
u8 boot_dev_algorithm:1;


@ -463,6 +463,25 @@ static int icl_ctx_workarounds_init(struct drm_i915_private *dev_priv)
*/
WA_SET_BIT_MASKED(ICL_HDC_MODE, HDC_FORCE_NON_COHERENT);
/* Wa_2006611047:icl (pre-prod)
* Formerly known as WaDisableImprovedTdlClkGating
*/
if (IS_ICL_REVID(dev_priv, ICL_REVID_A0, ICL_REVID_A0))
WA_SET_BIT_MASKED(GEN7_ROW_CHICKEN2,
GEN11_TDL_CLOCK_GATING_FIX_DISABLE);
/* WaEnableStateCacheRedirectToCS:icl */
WA_SET_BIT_MASKED(GEN9_SLICE_COMMON_ECO_CHICKEN1,
GEN11_STATE_CACHE_REDIRECT_TO_CS);
/* Wa_2006665173:icl (pre-prod) */
if (IS_ICL_REVID(dev_priv, ICL_REVID_A0, ICL_REVID_A0))
WA_SET_BIT_MASKED(GEN11_COMMON_SLICE_CHICKEN3,
GEN11_BLEND_EMB_FIX_DISABLE_IN_RCC);
/* WaEnableFloatBlendOptimization:icl */
WA_SET_BIT_MASKED(GEN10_CACHE_MODE_SS, FLOAT_BLEND_OPTIMIZATION_ENABLE);
return 0;
}
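/*
 * Editor's illustration (not part of the patch): the WA_SET_BIT_MASKED()
 * entries above target "masked" registers, where the upper 16 bits of the
 * written value select which of the lower 16 bits actually take effect. A
 * minimal sketch of that encoding, assuming the usual _MASKED_BIT_ENABLE
 * semantics:
 */
static inline u32 example_masked_bit_enable(u32 bits)
{
	/* write-enable the bits in the high half, set them in the low half */
	return (bits << 16) | bits;
}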
@ -672,8 +691,74 @@ static void cfl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS);
}
static void wa_init_mcr(struct drm_i915_private *dev_priv)
{
const struct sseu_dev_info *sseu = &(INTEL_INFO(dev_priv)->sseu);
u32 mcr;
u32 mcr_slice_subslice_mask;
/*
* WaProgramMgsrForL3BankSpecificMmioReads: cnl,icl
* L3Banks could be fused off in single slice scenario. If that is
* the case, we might need to program MCR select to a valid L3Bank
* by default, to make sure we correctly read certain registers
* later on (in the range 0xB100 - 0xB3FF).
* This might be incompatible with
* WaProgramMgsrForCorrectSliceSpecificMmioReads.
* Fortunately, this should not happen in production hardware, so
* we only assert that this is the case (instead of implementing
* something more complex that requires checking the range of every
* MMIO read).
*/
if (INTEL_GEN(dev_priv) >= 10 &&
is_power_of_2(sseu->slice_mask)) {
/*
* read FUSE3 for enabled L3 Bank IDs, if L3 Bank matches
* enabled subslice, no need to redirect MCR packet
*/
u32 slice = fls(sseu->slice_mask);
u32 fuse3 = I915_READ(GEN10_MIRROR_FUSE3);
u8 ss_mask = sseu->subslice_mask[slice];
u8 enabled_mask = (ss_mask | ss_mask >>
GEN10_L3BANK_PAIR_COUNT) & GEN10_L3BANK_MASK;
u8 disabled_mask = fuse3 & GEN10_L3BANK_MASK;
/*
* Production silicon should have matched L3Bank and
* subslice enabled
*/
WARN_ON((enabled_mask & disabled_mask) != enabled_mask);
}
mcr = I915_READ(GEN8_MCR_SELECTOR);
if (INTEL_GEN(dev_priv) >= 11)
mcr_slice_subslice_mask = GEN11_MCR_SLICE_MASK |
GEN11_MCR_SUBSLICE_MASK;
else
mcr_slice_subslice_mask = GEN8_MCR_SLICE_MASK |
GEN8_MCR_SUBSLICE_MASK;
/*
* WaProgramMgsrForCorrectSliceSpecificMmioReads:cnl,icl
* Before any MMIO read into slice/subslice specific registers, MCR
* packet control register needs to be programmed to point to any
* enabled s/ss pair. Otherwise, incorrect values will be returned.
* This means each subsequent MMIO read will be forwarded to a
* specific s/ss combination, but this is OK since these registers
* are consistent across s/ss in almost all cases. On the rare
* occasions, such as INSTDONE, where this value is dependent
* on s/ss combo, the read should be done with read_subslice_reg.
*/
mcr &= ~mcr_slice_subslice_mask;
mcr |= intel_calculate_mcr_s_ss_select(dev_priv);
I915_WRITE(GEN8_MCR_SELECTOR, mcr);
}
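/*
 * Editor's illustration (not part of the patch): the MCR update above is a
 * plain read-modify-write of the slice/subslice select field. A hypothetical
 * helper with the same shape, where "mask" and "select" stand in for the
 * GEN8/GEN11 definitions the driver uses:
 */
static inline u32 example_update_mcr_select(u32 mcr, u32 mask, u32 select)
{
	mcr &= ~mask;		/* clear the old slice/subslice selection */
	mcr |= select & mask;	/* steer reads to an enabled slice/subslice */
	return mcr;
}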
static void cnl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
{
wa_init_mcr(dev_priv);
/* WaDisableI2mCycleOnWRPort:cnl (pre-prod) */
if (IS_CNL_REVID(dev_priv, CNL_REVID_B0, CNL_REVID_B0))
I915_WRITE(GAMT_CHKN_BIT_REG,
@ -692,6 +777,8 @@ static void cnl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
static void icl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
{
wa_init_mcr(dev_priv);
/* This is not a Wa. Enable for better image quality */
I915_WRITE(_3D_CHICKEN3,
_MASKED_BIT_ENABLE(_3D_CHICKEN3_AA_LINE_QUALITY_FIX_ENABLE));
@ -772,6 +859,13 @@ static void icl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
PMFLUSHDONE_LNICRSDROP |
PMFLUSH_GAPL3UNBLOCK |
PMFLUSHDONE_LNEBLK);
/* Wa_1406463099:icl
* Formerly known as WaGamTlbPendError
*/
I915_WRITE(GAMT_CHKN_BIT_REG,
I915_READ(GAMT_CHKN_BIT_REG) |
GAMT_CHKN_DISABLE_L3_COH_PIPE);
}
void intel_gt_workarounds_apply(struct drm_i915_private *dev_priv)


@ -338,7 +338,7 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
static int igt_check_page_sizes(struct i915_vma *vma)
{
struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
struct drm_i915_private *i915 = vma->vm->i915;
unsigned int supported = INTEL_INFO(i915)->page_sizes;
struct drm_i915_gem_object *obj = vma->obj;
int err = 0;
@ -379,7 +379,7 @@ static int igt_check_page_sizes(struct i915_vma *vma)
static int igt_mock_exhaust_device_supported_pages(void *arg)
{
struct i915_hw_ppgtt *ppgtt = arg;
struct drm_i915_private *i915 = ppgtt->base.i915;
struct drm_i915_private *i915 = ppgtt->vm.i915;
unsigned int saved_mask = INTEL_INFO(i915)->page_sizes;
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
@ -415,7 +415,7 @@ static int igt_mock_exhaust_device_supported_pages(void *arg)
goto out_put;
}
vma = i915_vma_instance(obj, &ppgtt->base, NULL);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_put;
@ -458,7 +458,7 @@ static int igt_mock_exhaust_device_supported_pages(void *arg)
static int igt_mock_ppgtt_misaligned_dma(void *arg)
{
struct i915_hw_ppgtt *ppgtt = arg;
struct drm_i915_private *i915 = ppgtt->base.i915;
struct drm_i915_private *i915 = ppgtt->vm.i915;
unsigned long supported = INTEL_INFO(i915)->page_sizes;
struct drm_i915_gem_object *obj;
int bit;
@ -500,7 +500,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
/* Force the page size for this object */
obj->mm.page_sizes.sg = page_size;
vma = i915_vma_instance(obj, &ppgtt->base, NULL);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_unpin;
@ -591,7 +591,7 @@ static void close_object_list(struct list_head *objects,
list_for_each_entry_safe(obj, on, objects, st_link) {
struct i915_vma *vma;
vma = i915_vma_instance(obj, &ppgtt->base, NULL);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
if (!IS_ERR(vma))
i915_vma_close(vma);
@ -604,8 +604,8 @@ static void close_object_list(struct list_head *objects,
static int igt_mock_ppgtt_huge_fill(void *arg)
{
struct i915_hw_ppgtt *ppgtt = arg;
struct drm_i915_private *i915 = ppgtt->base.i915;
unsigned long max_pages = ppgtt->base.total >> PAGE_SHIFT;
struct drm_i915_private *i915 = ppgtt->vm.i915;
unsigned long max_pages = ppgtt->vm.total >> PAGE_SHIFT;
unsigned long page_num;
bool single = false;
LIST_HEAD(objects);
@ -641,7 +641,7 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
list_add(&obj->st_link, &objects);
vma = i915_vma_instance(obj, &ppgtt->base, NULL);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
break;
@ -725,7 +725,7 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
static int igt_mock_ppgtt_64K(void *arg)
{
struct i915_hw_ppgtt *ppgtt = arg;
struct drm_i915_private *i915 = ppgtt->base.i915;
struct drm_i915_private *i915 = ppgtt->vm.i915;
struct drm_i915_gem_object *obj;
const struct object_info {
unsigned int size;
@ -819,7 +819,7 @@ static int igt_mock_ppgtt_64K(void *arg)
*/
obj->mm.page_sizes.sg &= ~I915_GTT_PAGE_SIZE_2M;
vma = i915_vma_instance(obj, &ppgtt->base, NULL);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_object_unpin;
@ -887,8 +887,8 @@ static int igt_mock_ppgtt_64K(void *arg)
static struct i915_vma *
gpu_write_dw(struct i915_vma *vma, u64 offset, u32 val)
{
struct drm_i915_private *i915 = to_i915(vma->obj->base.dev);
const int gen = INTEL_GEN(vma->vm->i915);
struct drm_i915_private *i915 = vma->vm->i915;
const int gen = INTEL_GEN(i915);
unsigned int count = vma->size >> PAGE_SHIFT;
struct drm_i915_gem_object *obj;
struct i915_vma *batch;
@ -1047,7 +1047,8 @@ static int __igt_write_huge(struct i915_gem_context *ctx,
u32 dword, u32 val)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
struct i915_address_space *vm =
ctx->ppgtt ? &ctx->ppgtt->vm : &i915->ggtt.vm;
unsigned int flags = PIN_USER | PIN_OFFSET_FIXED;
struct i915_vma *vma;
int err;
@ -1100,7 +1101,8 @@ static int igt_write_huge(struct i915_gem_context *ctx,
struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
struct i915_address_space *vm =
ctx->ppgtt ? &ctx->ppgtt->vm : &i915->ggtt.vm;
static struct intel_engine_cs *engines[I915_NUM_ENGINES];
struct intel_engine_cs *engine;
I915_RND_STATE(prng);
@ -1439,7 +1441,7 @@ static int igt_ppgtt_pin_update(void *arg)
if (IS_ERR(obj))
return PTR_ERR(obj);
vma = i915_vma_instance(obj, &ppgtt->base, NULL);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_put;
@ -1493,7 +1495,7 @@ static int igt_ppgtt_pin_update(void *arg)
if (IS_ERR(obj))
return PTR_ERR(obj);
vma = i915_vma_instance(obj, &ppgtt->base, NULL);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_put;
@ -1531,7 +1533,8 @@ static int igt_tmpfs_fallback(void *arg)
struct i915_gem_context *ctx = arg;
struct drm_i915_private *i915 = ctx->i915;
struct vfsmount *gemfs = i915->mm.gemfs;
struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
struct i915_address_space *vm =
ctx->ppgtt ? &ctx->ppgtt->vm : &i915->ggtt.vm;
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
u32 *vaddr;
@ -1587,7 +1590,8 @@ static int igt_shrink_thp(void *arg)
{
struct i915_gem_context *ctx = arg;
struct drm_i915_private *i915 = ctx->i915;
struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
struct i915_address_space *vm =
ctx->ppgtt ? &ctx->ppgtt->vm : &i915->ggtt.vm;
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
unsigned int flags = PIN_USER;
@ -1696,14 +1700,14 @@ int i915_gem_huge_page_mock_selftests(void)
goto out_unlock;
}
if (!i915_vm_is_48bit(&ppgtt->base)) {
if (!i915_vm_is_48bit(&ppgtt->vm)) {
pr_err("failed to create 48b PPGTT\n");
err = -EINVAL;
goto out_close;
}
/* If we ever hit this then it's time to mock the 64K scratch */
if (!i915_vm_has_scratch_64K(&ppgtt->base)) {
if (!i915_vm_has_scratch_64K(&ppgtt->vm)) {
pr_err("PPGTT missing 64K scratch page\n");
err = -EINVAL;
goto out_close;
@ -1712,7 +1716,7 @@ int i915_gem_huge_page_mock_selftests(void)
err = i915_subtests(tests, ppgtt);
out_close:
i915_ppgtt_close(&ppgtt->base);
i915_ppgtt_close(&ppgtt->vm);
i915_ppgtt_put(ppgtt);
out_unlock:
@ -1758,7 +1762,7 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *dev_priv)
}
if (ctx->ppgtt)
ctx->ppgtt->base.scrub_64K = true;
ctx->ppgtt->vm.scrub_64K = true;
err = i915_subtests(tests, ctx);


@ -26,6 +26,7 @@
#include "igt_flush_test.h"
#include "mock_drm.h"
#include "mock_gem_device.h"
#include "huge_gem_object.h"
#define DW_PER_PAGE (PAGE_SIZE / sizeof(u32))
@ -114,7 +115,7 @@ static int gpu_fill(struct drm_i915_gem_object *obj,
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_address_space *vm =
ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
ctx->ppgtt ? &ctx->ppgtt->vm : &i915->ggtt.vm;
struct i915_request *rq;
struct i915_vma *vma;
struct i915_vma *batch;
@ -289,7 +290,7 @@ create_test_object(struct i915_gem_context *ctx,
{
struct drm_i915_gem_object *obj;
struct i915_address_space *vm =
ctx->ppgtt ? &ctx->ppgtt->base : &ctx->i915->ggtt.base;
ctx->ppgtt ? &ctx->ppgtt->vm : &ctx->i915->ggtt.vm;
u64 size;
int err;
@ -420,6 +421,130 @@ static int igt_ctx_exec(void *arg)
return err;
}
static __maybe_unused const char *
__engine_name(struct drm_i915_private *i915, unsigned int engines)
{
struct intel_engine_cs *engine;
unsigned int tmp;
if (engines == ALL_ENGINES)
return "all";
for_each_engine_masked(engine, i915, engines, tmp)
return engine->name;
return "none";
}
static int __igt_switch_to_kernel_context(struct drm_i915_private *i915,
struct i915_gem_context *ctx,
unsigned int engines)
{
struct intel_engine_cs *engine;
unsigned int tmp;
int err;
GEM_TRACE("Testing %s\n", __engine_name(i915, engines));
for_each_engine_masked(engine, i915, engines, tmp) {
struct i915_request *rq;
rq = i915_request_alloc(engine, ctx);
if (IS_ERR(rq))
return PTR_ERR(rq);
i915_request_add(rq);
}
err = i915_gem_switch_to_kernel_context(i915);
if (err)
return err;
for_each_engine_masked(engine, i915, engines, tmp) {
if (!engine_has_kernel_context_barrier(engine)) {
pr_err("kernel context not last on engine %s!\n",
engine->name);
return -EINVAL;
}
}
err = i915_gem_wait_for_idle(i915, I915_WAIT_LOCKED);
if (err)
return err;
GEM_BUG_ON(i915->gt.active_requests);
for_each_engine_masked(engine, i915, engines, tmp) {
if (engine->last_retired_context->gem_context != i915->kernel_context) {
pr_err("engine %s not idling in kernel context!\n",
engine->name);
return -EINVAL;
}
}
err = i915_gem_switch_to_kernel_context(i915);
if (err)
return err;
if (i915->gt.active_requests) {
pr_err("switch-to-kernel-context emitted %d requests even though it should already be idling in the kernel context\n",
i915->gt.active_requests);
return -EINVAL;
}
for_each_engine_masked(engine, i915, engines, tmp) {
if (!intel_engine_has_kernel_context(engine)) {
pr_err("kernel context not last on engine %s!\n",
engine->name);
return -EINVAL;
}
}
return 0;
}
static int igt_switch_to_kernel_context(void *arg)
{
struct drm_i915_private *i915 = arg;
struct intel_engine_cs *engine;
struct i915_gem_context *ctx;
enum intel_engine_id id;
int err;
/*
* A core premise of switching to the kernel context is that
* if an engine is already idling in the kernel context, we
* do not emit another request and wake it up. The other being
* that we do indeed end up idling in the kernel context.
*/
mutex_lock(&i915->drm.struct_mutex);
ctx = kernel_context(i915);
if (IS_ERR(ctx)) {
err = PTR_ERR(ctx);
goto out_unlock;
}
/* First check idling each individual engine */
for_each_engine(engine, i915, id) {
err = __igt_switch_to_kernel_context(i915, ctx, BIT(id));
if (err)
goto out_unlock;
}
/* Now en masse */
err = __igt_switch_to_kernel_context(i915, ctx, ALL_ENGINES);
if (err)
goto out_unlock;
out_unlock:
GEM_TRACE_DUMP_ON(err);
if (igt_flush_test(i915, I915_WAIT_LOCKED))
err = -EIO;
mutex_unlock(&i915->drm.struct_mutex);
kernel_context_close(ctx);
return err;
}
static int fake_aliasing_ppgtt_enable(struct drm_i915_private *i915)
{
struct drm_i915_gem_object *obj;
@ -432,7 +557,7 @@ static int fake_aliasing_ppgtt_enable(struct drm_i915_private *i915)
list_for_each_entry(obj, &i915->mm.bound_list, mm.link) {
struct i915_vma *vma;
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma))
continue;
@ -447,9 +572,28 @@ static void fake_aliasing_ppgtt_disable(struct drm_i915_private *i915)
i915_gem_fini_aliasing_ppgtt(i915);
}
int i915_gem_context_mock_selftests(void)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_switch_to_kernel_context),
};
struct drm_i915_private *i915;
int err;
i915 = mock_gem_device();
if (!i915)
return -ENOMEM;
err = i915_subtests(tests, i915);
drm_dev_unref(&i915->drm);
return err;
}
int i915_gem_context_live_selftests(struct drm_i915_private *dev_priv)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_switch_to_kernel_context),
SUBTEST(igt_ctx_exec),
};
bool fake_alias = false;


@ -35,7 +35,7 @@ static int populate_ggtt(struct drm_i915_private *i915)
u64 size;
for (size = 0;
size + I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
size + I915_GTT_PAGE_SIZE <= i915->ggtt.vm.total;
size += I915_GTT_PAGE_SIZE) {
struct i915_vma *vma;
@ -57,7 +57,7 @@ static int populate_ggtt(struct drm_i915_private *i915)
return -EINVAL;
}
if (list_empty(&i915->ggtt.base.inactive_list)) {
if (list_empty(&i915->ggtt.vm.inactive_list)) {
pr_err("No objects on the GGTT inactive list!\n");
return -EINVAL;
}
@ -69,7 +69,7 @@ static void unpin_ggtt(struct drm_i915_private *i915)
{
struct i915_vma *vma;
list_for_each_entry(vma, &i915->ggtt.base.inactive_list, vm_link)
list_for_each_entry(vma, &i915->ggtt.vm.inactive_list, vm_link)
i915_vma_unpin(vma);
}
@ -103,7 +103,7 @@ static int igt_evict_something(void *arg)
goto cleanup;
/* Everything is pinned, nothing should happen */
err = i915_gem_evict_something(&ggtt->base,
err = i915_gem_evict_something(&ggtt->vm,
I915_GTT_PAGE_SIZE, 0, 0,
0, U64_MAX,
0);
@ -116,7 +116,7 @@ static int igt_evict_something(void *arg)
unpin_ggtt(i915);
/* Everything is unpinned, we should be able to evict something */
err = i915_gem_evict_something(&ggtt->base,
err = i915_gem_evict_something(&ggtt->vm,
I915_GTT_PAGE_SIZE, 0, 0,
0, U64_MAX,
0);
@ -181,7 +181,7 @@ static int igt_evict_for_vma(void *arg)
goto cleanup;
/* Everything is pinned, nothing should happen */
err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
err = i915_gem_evict_for_node(&ggtt->vm, &target, 0);
if (err != -ENOSPC) {
pr_err("i915_gem_evict_for_node on a full GGTT returned err=%d\n",
err);
@ -191,7 +191,7 @@ static int igt_evict_for_vma(void *arg)
unpin_ggtt(i915);
/* Everything is unpinned, we should be able to evict the node */
err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
err = i915_gem_evict_for_node(&ggtt->vm, &target, 0);
if (err) {
pr_err("i915_gem_evict_for_node returned err=%d\n",
err);
@ -229,7 +229,7 @@ static int igt_evict_for_cache_color(void *arg)
* i915_gtt_color_adjust throughout our driver, so using a mock color
* adjust will work just fine for our purposes.
*/
ggtt->base.mm.color_adjust = mock_color_adjust;
ggtt->vm.mm.color_adjust = mock_color_adjust;
obj = i915_gem_object_create_internal(i915, I915_GTT_PAGE_SIZE);
if (IS_ERR(obj)) {
@ -265,7 +265,7 @@ static int igt_evict_for_cache_color(void *arg)
i915_vma_unpin(vma);
/* Remove just the second vma */
err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
err = i915_gem_evict_for_node(&ggtt->vm, &target, 0);
if (err) {
pr_err("[0]i915_gem_evict_for_node returned err=%d\n", err);
goto cleanup;
@ -276,7 +276,7 @@ static int igt_evict_for_cache_color(void *arg)
*/
target.color = I915_CACHE_L3_LLC;
err = i915_gem_evict_for_node(&ggtt->base, &target, 0);
err = i915_gem_evict_for_node(&ggtt->vm, &target, 0);
if (!err) {
pr_err("[1]i915_gem_evict_for_node returned err=%d\n", err);
err = -EINVAL;
@ -288,7 +288,7 @@ static int igt_evict_for_cache_color(void *arg)
cleanup:
unpin_ggtt(i915);
cleanup_objects(i915);
ggtt->base.mm.color_adjust = NULL;
ggtt->vm.mm.color_adjust = NULL;
return err;
}
@ -305,7 +305,7 @@ static int igt_evict_vm(void *arg)
goto cleanup;
/* Everything is pinned, nothing should happen */
err = i915_gem_evict_vm(&ggtt->base);
err = i915_gem_evict_vm(&ggtt->vm);
if (err) {
pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
err);
@ -314,7 +314,7 @@ static int igt_evict_vm(void *arg)
unpin_ggtt(i915);
err = i915_gem_evict_vm(&ggtt->base);
err = i915_gem_evict_vm(&ggtt->vm);
if (err) {
pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
err);
@ -359,9 +359,9 @@ static int igt_evict_contexts(void *arg)
/* Reserve a block so that we know we have enough to fit a few rq */
memset(&hole, 0, sizeof(hole));
err = i915_gem_gtt_insert(&i915->ggtt.base, &hole,
err = i915_gem_gtt_insert(&i915->ggtt.vm, &hole,
PRETEND_GGTT_SIZE, 0, I915_COLOR_UNEVICTABLE,
0, i915->ggtt.base.total,
0, i915->ggtt.vm.total,
PIN_NOEVICT);
if (err)
goto out_locked;
@ -377,9 +377,9 @@ static int igt_evict_contexts(void *arg)
goto out_locked;
}
if (i915_gem_gtt_insert(&i915->ggtt.base, &r->node,
if (i915_gem_gtt_insert(&i915->ggtt.vm, &r->node,
1ul << 20, 0, I915_COLOR_UNEVICTABLE,
0, i915->ggtt.base.total,
0, i915->ggtt.vm.total,
PIN_NOEVICT)) {
kfree(r);
break;


@ -151,14 +151,14 @@ static int igt_ppgtt_alloc(void *arg)
if (err)
goto err_ppgtt;
if (!ppgtt->base.allocate_va_range)
if (!ppgtt->vm.allocate_va_range)
goto err_ppgtt_cleanup;
/* Check we can allocate the entire range */
for (size = 4096;
size <= ppgtt->base.total;
size <= ppgtt->vm.total;
size <<= 2) {
err = ppgtt->base.allocate_va_range(&ppgtt->base, 0, size);
err = ppgtt->vm.allocate_va_range(&ppgtt->vm, 0, size);
if (err) {
if (err == -ENOMEM) {
pr_info("[1] Ran out of memory for va_range [0 + %llx] [bit %d]\n",
@ -168,15 +168,15 @@ static int igt_ppgtt_alloc(void *arg)
goto err_ppgtt_cleanup;
}
ppgtt->base.clear_range(&ppgtt->base, 0, size);
ppgtt->vm.clear_range(&ppgtt->vm, 0, size);
}
/* Check we can incrementally allocate the entire range */
for (last = 0, size = 4096;
size <= ppgtt->base.total;
size <= ppgtt->vm.total;
last = size, size <<= 2) {
err = ppgtt->base.allocate_va_range(&ppgtt->base,
last, size - last);
err = ppgtt->vm.allocate_va_range(&ppgtt->vm,
last, size - last);
if (err) {
if (err == -ENOMEM) {
pr_info("[2] Ran out of memory for va_range [%llx + %llx] [bit %d]\n",
@ -188,7 +188,7 @@ static int igt_ppgtt_alloc(void *arg)
}
err_ppgtt_cleanup:
ppgtt->base.cleanup(&ppgtt->base);
ppgtt->vm.cleanup(&ppgtt->vm);
err_ppgtt:
mutex_unlock(&dev_priv->drm.struct_mutex);
kfree(ppgtt);
@ -987,12 +987,12 @@ static int exercise_ppgtt(struct drm_i915_private *dev_priv,
err = PTR_ERR(ppgtt);
goto out_unlock;
}
GEM_BUG_ON(offset_in_page(ppgtt->base.total));
GEM_BUG_ON(ppgtt->base.closed);
GEM_BUG_ON(offset_in_page(ppgtt->vm.total));
GEM_BUG_ON(ppgtt->vm.closed);
err = func(dev_priv, &ppgtt->base, 0, ppgtt->base.total, end_time);
err = func(dev_priv, &ppgtt->vm, 0, ppgtt->vm.total, end_time);
i915_ppgtt_close(&ppgtt->base);
i915_ppgtt_close(&ppgtt->vm);
i915_ppgtt_put(ppgtt);
out_unlock:
mutex_unlock(&dev_priv->drm.struct_mutex);
@ -1061,18 +1061,18 @@ static int exercise_ggtt(struct drm_i915_private *i915,
mutex_lock(&i915->drm.struct_mutex);
restart:
list_sort(NULL, &ggtt->base.mm.hole_stack, sort_holes);
drm_mm_for_each_hole(node, &ggtt->base.mm, hole_start, hole_end) {
list_sort(NULL, &ggtt->vm.mm.hole_stack, sort_holes);
drm_mm_for_each_hole(node, &ggtt->vm.mm, hole_start, hole_end) {
if (hole_start < last)
continue;
if (ggtt->base.mm.color_adjust)
ggtt->base.mm.color_adjust(node, 0,
&hole_start, &hole_end);
if (ggtt->vm.mm.color_adjust)
ggtt->vm.mm.color_adjust(node, 0,
&hole_start, &hole_end);
if (hole_start >= hole_end)
continue;
err = func(i915, &ggtt->base, hole_start, hole_end, end_time);
err = func(i915, &ggtt->vm, hole_start, hole_end, end_time);
if (err)
break;
@ -1134,7 +1134,7 @@ static int igt_ggtt_page(void *arg)
goto out_free;
memset(&tmp, 0, sizeof(tmp));
err = drm_mm_insert_node_in_range(&ggtt->base.mm, &tmp,
err = drm_mm_insert_node_in_range(&ggtt->vm.mm, &tmp,
count * PAGE_SIZE, 0,
I915_COLOR_UNEVICTABLE,
0, ggtt->mappable_end,
@ -1147,9 +1147,9 @@ static int igt_ggtt_page(void *arg)
for (n = 0; n < count; n++) {
u64 offset = tmp.start + n * PAGE_SIZE;
ggtt->base.insert_page(&ggtt->base,
i915_gem_object_get_dma_address(obj, 0),
offset, I915_CACHE_NONE, 0);
ggtt->vm.insert_page(&ggtt->vm,
i915_gem_object_get_dma_address(obj, 0),
offset, I915_CACHE_NONE, 0);
}
order = i915_random_order(count, &prng);
@ -1188,7 +1188,7 @@ static int igt_ggtt_page(void *arg)
kfree(order);
out_remove:
ggtt->base.clear_range(&ggtt->base, tmp.start, tmp.size);
ggtt->vm.clear_range(&ggtt->vm, tmp.start, tmp.size);
intel_runtime_pm_put(i915);
drm_mm_remove_node(&tmp);
out_unpin:
@ -1229,7 +1229,7 @@ static int exercise_mock(struct drm_i915_private *i915,
ppgtt = ctx->ppgtt;
GEM_BUG_ON(!ppgtt);
err = func(i915, &ppgtt->base, 0, ppgtt->base.total, end_time);
err = func(i915, &ppgtt->vm, 0, ppgtt->vm.total, end_time);
mock_context_close(ctx);
return err;
@ -1270,7 +1270,7 @@ static int igt_gtt_reserve(void *arg)
/* Start by filling the GGTT */
for (total = 0;
total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.vm.total;
total += 2*I915_GTT_PAGE_SIZE) {
struct i915_vma *vma;
@ -1288,20 +1288,20 @@ static int igt_gtt_reserve(void *arg)
list_add(&obj->st_link, &objects);
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out;
}
err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
err = i915_gem_gtt_reserve(&i915->ggtt.vm, &vma->node,
obj->base.size,
total,
obj->cache_level,
0);
if (err) {
pr_err("i915_gem_gtt_reserve (pass 1) failed at %llu/%llu with err=%d\n",
total, i915->ggtt.base.total, err);
total, i915->ggtt.vm.total, err);
goto out;
}
track_vma_bind(vma);
@ -1319,7 +1319,7 @@ static int igt_gtt_reserve(void *arg)
/* Now we start forcing evictions */
for (total = I915_GTT_PAGE_SIZE;
total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.vm.total;
total += 2*I915_GTT_PAGE_SIZE) {
struct i915_vma *vma;
@ -1337,20 +1337,20 @@ static int igt_gtt_reserve(void *arg)
list_add(&obj->st_link, &objects);
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out;
}
err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
err = i915_gem_gtt_reserve(&i915->ggtt.vm, &vma->node,
obj->base.size,
total,
obj->cache_level,
0);
if (err) {
pr_err("i915_gem_gtt_reserve (pass 2) failed at %llu/%llu with err=%d\n",
total, i915->ggtt.base.total, err);
total, i915->ggtt.vm.total, err);
goto out;
}
track_vma_bind(vma);
@ -1371,7 +1371,7 @@ static int igt_gtt_reserve(void *arg)
struct i915_vma *vma;
u64 offset;
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out;
@ -1383,18 +1383,18 @@ static int igt_gtt_reserve(void *arg)
goto out;
}
offset = random_offset(0, i915->ggtt.base.total,
offset = random_offset(0, i915->ggtt.vm.total,
2*I915_GTT_PAGE_SIZE,
I915_GTT_MIN_ALIGNMENT);
err = i915_gem_gtt_reserve(&i915->ggtt.base, &vma->node,
err = i915_gem_gtt_reserve(&i915->ggtt.vm, &vma->node,
obj->base.size,
offset,
obj->cache_level,
0);
if (err) {
pr_err("i915_gem_gtt_reserve (pass 3) failed at %llu/%llu with err=%d\n",
total, i915->ggtt.base.total, err);
total, i915->ggtt.vm.total, err);
goto out;
}
track_vma_bind(vma);
@ -1429,8 +1429,8 @@ static int igt_gtt_insert(void *arg)
u64 start, end;
} invalid_insert[] = {
{
i915->ggtt.base.total + I915_GTT_PAGE_SIZE, 0,
0, i915->ggtt.base.total,
i915->ggtt.vm.total + I915_GTT_PAGE_SIZE, 0,
0, i915->ggtt.vm.total,
},
{
2*I915_GTT_PAGE_SIZE, 0,
@ -1460,7 +1460,7 @@ static int igt_gtt_insert(void *arg)
/* Check a couple of obviously invalid requests */
for (ii = invalid_insert; ii->size; ii++) {
err = i915_gem_gtt_insert(&i915->ggtt.base, &tmp,
err = i915_gem_gtt_insert(&i915->ggtt.vm, &tmp,
ii->size, ii->alignment,
I915_COLOR_UNEVICTABLE,
ii->start, ii->end,
@ -1475,7 +1475,7 @@ static int igt_gtt_insert(void *arg)
/* Start by filling the GGTT */
for (total = 0;
total + I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
total + I915_GTT_PAGE_SIZE <= i915->ggtt.vm.total;
total += I915_GTT_PAGE_SIZE) {
struct i915_vma *vma;
@ -1493,15 +1493,15 @@ static int igt_gtt_insert(void *arg)
list_add(&obj->st_link, &objects);
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out;
}
err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
err = i915_gem_gtt_insert(&i915->ggtt.vm, &vma->node,
obj->base.size, 0, obj->cache_level,
0, i915->ggtt.base.total,
0, i915->ggtt.vm.total,
0);
if (err == -ENOSPC) {
/* maxed out the GGTT space */
@ -1510,7 +1510,7 @@ static int igt_gtt_insert(void *arg)
}
if (err) {
pr_err("i915_gem_gtt_insert (pass 1) failed at %llu/%llu with err=%d\n",
total, i915->ggtt.base.total, err);
total, i915->ggtt.vm.total, err);
goto out;
}
track_vma_bind(vma);
@ -1522,7 +1522,7 @@ static int igt_gtt_insert(void *arg)
list_for_each_entry(obj, &objects, st_link) {
struct i915_vma *vma;
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out;
@ -1542,7 +1542,7 @@ static int igt_gtt_insert(void *arg)
struct i915_vma *vma;
u64 offset;
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out;
@ -1557,13 +1557,13 @@ static int igt_gtt_insert(void *arg)
goto out;
}
err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
err = i915_gem_gtt_insert(&i915->ggtt.vm, &vma->node,
obj->base.size, 0, obj->cache_level,
0, i915->ggtt.base.total,
0, i915->ggtt.vm.total,
0);
if (err) {
pr_err("i915_gem_gtt_insert (pass 2) failed at %llu/%llu with err=%d\n",
total, i915->ggtt.base.total, err);
total, i915->ggtt.vm.total, err);
goto out;
}
track_vma_bind(vma);
@ -1579,7 +1579,7 @@ static int igt_gtt_insert(void *arg)
/* And then force evictions */
for (total = 0;
total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.base.total;
total + 2*I915_GTT_PAGE_SIZE <= i915->ggtt.vm.total;
total += 2*I915_GTT_PAGE_SIZE) {
struct i915_vma *vma;
@ -1597,19 +1597,19 @@ static int igt_gtt_insert(void *arg)
list_add(&obj->st_link, &objects);
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out;
}
err = i915_gem_gtt_insert(&i915->ggtt.base, &vma->node,
err = i915_gem_gtt_insert(&i915->ggtt.vm, &vma->node,
obj->base.size, 0, obj->cache_level,
0, i915->ggtt.base.total,
0, i915->ggtt.vm.total,
0);
if (err) {
pr_err("i915_gem_gtt_insert (pass 3) failed at %llu/%llu with err=%d\n",
total, i915->ggtt.base.total, err);
total, i915->ggtt.vm.total, err);
goto out;
}
track_vma_bind(vma);
@ -1669,7 +1669,7 @@ int i915_gem_gtt_live_selftests(struct drm_i915_private *i915)
SUBTEST(igt_ggtt_page),
};
GEM_BUG_ON(offset_in_page(i915->ggtt.base.total));
GEM_BUG_ON(offset_in_page(i915->ggtt.vm.total));
return i915_subtests(tests, i915);
}


@ -113,7 +113,7 @@ static int igt_gem_huge(void *arg)
obj = huge_gem_object(i915,
nreal * PAGE_SIZE,
i915->ggtt.base.total + PAGE_SIZE);
i915->ggtt.vm.total + PAGE_SIZE);
if (IS_ERR(obj))
return PTR_ERR(obj);
@ -311,7 +311,7 @@ static int igt_partial_tiling(void *arg)
obj = huge_gem_object(i915,
nreal << PAGE_SHIFT,
(1 + next_prime_number(i915->ggtt.base.total >> PAGE_SHIFT)) << PAGE_SHIFT);
(1 + next_prime_number(i915->ggtt.vm.total >> PAGE_SHIFT)) << PAGE_SHIFT);
if (IS_ERR(obj))
return PTR_ERR(obj);
@ -440,7 +440,7 @@ static int make_obj_busy(struct drm_i915_gem_object *obj)
struct i915_vma *vma;
int err;
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma))
return PTR_ERR(vma);


@ -24,3 +24,4 @@ selftest(vma, i915_vma_mock_selftests)
selftest(evict, i915_gem_evict_mock_selftests)
selftest(gtt, i915_gem_gtt_mock_selftests)
selftest(hugepages, i915_gem_huge_page_mock_selftests)
selftest(contexts, i915_gem_context_mock_selftests)
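/*
 * Editor's note (illustration, not part of the patch): this header is consumed
 * X-macro style, so adding the "contexts" line above is all that is needed to
 * register the new mock selftest. A minimal sketch of how such a list is
 * typically expanded (the expansion shown here is illustrative, not the
 * driver's exact code):
 */
#define selftest(name, func) int func(void);
#include "i915_mock_selftests.h"	/* declares one entry point per line above */
#undef selftest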


@ -430,7 +430,7 @@ static struct i915_vma *empty_batch(struct drm_i915_private *i915)
if (err)
goto err;
vma = i915_vma_instance(obj, &i915->ggtt.base, NULL);
vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err;
@ -555,7 +555,8 @@ static int live_empty_request(void *arg)
static struct i915_vma *recursive_batch(struct drm_i915_private *i915)
{
struct i915_gem_context *ctx = i915->kernel_context;
struct i915_address_space *vm = ctx->ppgtt ? &ctx->ppgtt->base : &i915->ggtt.base;
struct i915_address_space *vm =
ctx->ppgtt ? &ctx->ppgtt->vm : &i915->ggtt.vm;
struct drm_i915_gem_object *obj;
const int gen = INTEL_GEN(i915);
struct i915_vma *vma;


@ -35,7 +35,7 @@ static bool assert_vma(struct i915_vma *vma,
{
bool ok = true;
if (vma->vm != &ctx->ppgtt->base) {
if (vma->vm != &ctx->ppgtt->vm) {
pr_err("VMA created with wrong VM\n");
ok = false;
}
@ -110,8 +110,7 @@ static int create_vmas(struct drm_i915_private *i915,
list_for_each_entry(obj, objects, st_link) {
for (pinned = 0; pinned <= 1; pinned++) {
list_for_each_entry(ctx, contexts, link) {
struct i915_address_space *vm =
&ctx->ppgtt->base;
struct i915_address_space *vm = &ctx->ppgtt->vm;
struct i915_vma *vma;
int err;
@ -259,12 +258,12 @@ static int igt_vma_pin1(void *arg)
VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | 8192),
VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
VALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.base.total - 4096)),
VALID(0, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.vm.total - 4096)),
VALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | (i915->ggtt.mappable_end - 4096)),
INVALID(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | i915->ggtt.mappable_end),
VALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.base.total - 4096)),
INVALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | i915->ggtt.base.total),
VALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.vm.total - 4096)),
INVALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | i915->ggtt.vm.total),
INVALID(0, PIN_GLOBAL | PIN_OFFSET_FIXED | round_down(U64_MAX, PAGE_SIZE)),
VALID(4096, PIN_GLOBAL),
@ -272,12 +271,12 @@ static int igt_vma_pin1(void *arg)
VALID(i915->ggtt.mappable_end - 4096, PIN_GLOBAL | PIN_MAPPABLE),
VALID(i915->ggtt.mappable_end, PIN_GLOBAL | PIN_MAPPABLE),
NOSPACE(i915->ggtt.mappable_end + 4096, PIN_GLOBAL | PIN_MAPPABLE),
VALID(i915->ggtt.base.total - 4096, PIN_GLOBAL),
VALID(i915->ggtt.base.total, PIN_GLOBAL),
NOSPACE(i915->ggtt.base.total + 4096, PIN_GLOBAL),
VALID(i915->ggtt.vm.total - 4096, PIN_GLOBAL),
VALID(i915->ggtt.vm.total, PIN_GLOBAL),
NOSPACE(i915->ggtt.vm.total + 4096, PIN_GLOBAL),
NOSPACE(round_down(U64_MAX, PAGE_SIZE), PIN_GLOBAL),
INVALID(8192, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_FIXED | (i915->ggtt.mappable_end - 4096)),
INVALID(8192, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.base.total - 4096)),
INVALID(8192, PIN_GLOBAL | PIN_OFFSET_FIXED | (i915->ggtt.vm.total - 4096)),
INVALID(8192, PIN_GLOBAL | PIN_OFFSET_FIXED | (round_down(U64_MAX, PAGE_SIZE) - 4096)),
VALID(8192, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
@ -289,9 +288,9 @@ static int igt_vma_pin1(void *arg)
* variable start, end and size.
*/
NOSPACE(0, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | i915->ggtt.mappable_end),
NOSPACE(0, PIN_GLOBAL | PIN_OFFSET_BIAS | i915->ggtt.base.total),
NOSPACE(0, PIN_GLOBAL | PIN_OFFSET_BIAS | i915->ggtt.vm.total),
NOSPACE(8192, PIN_GLOBAL | PIN_MAPPABLE | PIN_OFFSET_BIAS | (i915->ggtt.mappable_end - 4096)),
NOSPACE(8192, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.base.total - 4096)),
NOSPACE(8192, PIN_GLOBAL | PIN_OFFSET_BIAS | (i915->ggtt.vm.total - 4096)),
#endif
{ },
#undef NOSPACE
@ -307,13 +306,13 @@ static int igt_vma_pin1(void *arg)
* focusing on error handling of boundary conditions.
*/
GEM_BUG_ON(!drm_mm_clean(&i915->ggtt.base.mm));
GEM_BUG_ON(!drm_mm_clean(&i915->ggtt.vm.mm));
obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
if (IS_ERR(obj))
return PTR_ERR(obj);
vma = checked_vma_instance(obj, &i915->ggtt.base, NULL);
vma = checked_vma_instance(obj, &i915->ggtt.vm, NULL);
if (IS_ERR(vma))
goto out;
@ -405,7 +404,7 @@ static unsigned int rotated_size(const struct intel_rotation_plane_info *a,
static int igt_vma_rotate(void *arg)
{
struct drm_i915_private *i915 = arg;
struct i915_address_space *vm = &i915->ggtt.base;
struct i915_address_space *vm = &i915->ggtt.vm;
struct drm_i915_gem_object *obj;
const struct intel_rotation_plane_info planes[] = {
{ .width = 1, .height = 1, .stride = 1 },
@ -604,7 +603,7 @@ static bool assert_pin(struct i915_vma *vma,
static int igt_vma_partial(void *arg)
{
struct drm_i915_private *i915 = arg;
struct i915_address_space *vm = &i915->ggtt.base;
struct i915_address_space *vm = &i915->ggtt.vm;
const unsigned int npages = 1021; /* prime! */
struct drm_i915_gem_object *obj;
const struct phase {


@ -105,7 +105,10 @@ static int emit_recurse_batch(struct hang *h,
struct i915_request *rq)
{
struct drm_i915_private *i915 = h->i915;
struct i915_address_space *vm = rq->ctx->ppgtt ? &rq->ctx->ppgtt->base : &i915->ggtt.base;
struct i915_address_space *vm =
rq->gem_context->ppgtt ?
&rq->gem_context->ppgtt->vm :
&i915->ggtt.vm;
struct i915_vma *hws, *vma;
unsigned int flags;
u32 *batch;
@ -560,6 +563,30 @@ struct active_engine {
#define TEST_SELF BIT(2)
#define TEST_PRIORITY BIT(3)
static int active_request_put(struct i915_request *rq)
{
int err = 0;
if (!rq)
return 0;
if (i915_request_wait(rq, 0, 5 * HZ) < 0) {
GEM_TRACE("%s timed out waiting for completion of fence %llx:%d, seqno %d.\n",
rq->engine->name,
rq->fence.context,
rq->fence.seqno,
i915_request_global_seqno(rq));
GEM_TRACE_DUMP();
i915_gem_set_wedged(rq->i915);
err = -EIO;
}
i915_request_put(rq);
return err;
}
static int active_engine(void *data)
{
I915_RND_STATE(prng);
@ -608,24 +635,20 @@ static int active_engine(void *data)
i915_request_add(new);
mutex_unlock(&engine->i915->drm.struct_mutex);
if (old) {
if (i915_request_wait(old, 0, HZ) < 0) {
GEM_TRACE("%s timed out.\n", engine->name);
GEM_TRACE_DUMP();
i915_gem_set_wedged(engine->i915);
i915_request_put(old);
err = -EIO;
break;
}
i915_request_put(old);
}
err = active_request_put(old);
if (err)
break;
cond_resched();
}
for (count = 0; count < ARRAY_SIZE(rq); count++)
i915_request_put(rq[count]);
for (count = 0; count < ARRAY_SIZE(rq); count++) {
int err__ = active_request_put(rq[count]);
/* Keep the first error */
if (!err)
err = err__;
}
err_file:
mock_file_free(engine->i915, file);


@ -83,7 +83,7 @@ static int emit_recurse_batch(struct spinner *spin,
struct i915_request *rq,
u32 arbitration_command)
{
struct i915_address_space *vm = &rq->ctx->ppgtt->base;
struct i915_address_space *vm = &rq->gem_context->ppgtt->vm;
struct i915_vma *hws, *vma;
u32 *batch;
int err;


@ -33,7 +33,7 @@ read_nonprivs(struct i915_gem_context *ctx, struct intel_engine_cs *engine)
memset(cs, 0xc5, PAGE_SIZE);
i915_gem_object_unpin_map(result);
vma = i915_vma_instance(result, &engine->i915->ggtt.base, NULL);
vma = i915_vma_instance(result, &engine->i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_obj;


@ -30,6 +30,7 @@ mock_context(struct drm_i915_private *i915,
const char *name)
{
struct i915_gem_context *ctx;
unsigned int n;
int ret;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@ -43,6 +44,12 @@ mock_context(struct drm_i915_private *i915,
INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL);
INIT_LIST_HEAD(&ctx->handles_list);
for (n = 0; n < ARRAY_SIZE(ctx->__engine); n++) {
struct intel_context *ce = &ctx->__engine[n];
ce->gem_context = ctx;
}
ret = ida_simple_get(&i915->contexts.hw_ida,
0, MAX_CONTEXT_HW_ID, GFP_KERNEL);
if (ret < 0)


@ -72,25 +72,34 @@ static void hw_delay_complete(struct timer_list *t)
spin_unlock(&engine->hw_lock);
}
static struct intel_ring *
static void mock_context_unpin(struct intel_context *ce)
{
i915_gem_context_put(ce->gem_context);
}
static void mock_context_destroy(struct intel_context *ce)
{
GEM_BUG_ON(ce->pin_count);
}
static const struct intel_context_ops mock_context_ops = {
.unpin = mock_context_unpin,
.destroy = mock_context_destroy,
};
static struct intel_context *
mock_context_pin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
{
struct intel_context *ce = to_intel_context(ctx, engine);
if (!ce->pin_count++)
if (!ce->pin_count++) {
i915_gem_context_get(ctx);
ce->ring = engine->buffer;
ce->ops = &mock_context_ops;
}
return engine->buffer;
}
static void mock_context_unpin(struct intel_engine_cs *engine,
struct i915_gem_context *ctx)
{
struct intel_context *ce = to_intel_context(ctx, engine);
if (!--ce->pin_count)
i915_gem_context_put(ctx);
return ce;
}
static int mock_request_alloc(struct i915_request *request)
@ -185,7 +194,6 @@ struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
engine->base.status_page.page_addr = (void *)(engine + 1);
engine->base.context_pin = mock_context_pin;
engine->base.context_unpin = mock_context_unpin;
engine->base.request_alloc = mock_request_alloc;
engine->base.emit_flush = mock_emit_flush;
engine->base.emit_breadcrumb = mock_emit_breadcrumb;
@ -204,8 +212,13 @@ struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
if (!engine->base.buffer)
goto err_breadcrumbs;
if (IS_ERR(intel_context_pin(i915->kernel_context, &engine->base)))
goto err_ring;
return &engine->base;
err_ring:
mock_ring_free(engine->base.buffer);
err_breadcrumbs:
intel_engine_fini_breadcrumbs(&engine->base);
i915_timeline_fini(&engine->base.timeline);
@ -238,11 +251,15 @@ void mock_engine_free(struct intel_engine_cs *engine)
{
struct mock_engine *mock =
container_of(engine, typeof(*mock), base);
struct intel_context *ce;
GEM_BUG_ON(timer_pending(&mock->hw_delay));
if (engine->last_retired_context)
intel_context_unpin(engine->last_retired_context, engine);
ce = fetch_and_zero(&engine->last_retired_context);
if (ce)
intel_context_unpin(ce);
__intel_context_unpin(engine->i915->kernel_context, engine);
mock_ring_free(engine->buffer);


@ -136,8 +136,6 @@ static struct dev_pm_domain pm_domain = {
struct drm_i915_private *mock_gem_device(void)
{
struct drm_i915_private *i915;
struct intel_engine_cs *engine;
enum intel_engine_id id;
struct pci_dev *pdev;
int err;
@ -233,13 +231,13 @@ struct drm_i915_private *mock_gem_device(void)
mock_init_ggtt(i915);
mkwrite_device_info(i915)->ring_mask = BIT(0);
i915->engine[RCS] = mock_engine(i915, "mock", RCS);
if (!i915->engine[RCS])
goto err_unlock;
i915->kernel_context = mock_context(i915, NULL);
if (!i915->kernel_context)
goto err_engine;
goto err_unlock;
i915->engine[RCS] = mock_engine(i915, "mock", RCS);
if (!i915->engine[RCS])
goto err_context;
mutex_unlock(&i915->drm.struct_mutex);
@ -247,9 +245,8 @@ struct drm_i915_private *mock_gem_device(void)
return i915;
err_engine:
for_each_engine(engine, i915, id)
mock_engine_free(engine);
err_context:
i915_gem_contexts_fini(i915);
err_unlock:
mutex_unlock(&i915->drm.struct_mutex);
kmem_cache_destroy(i915->priorities);


@ -66,25 +66,25 @@ mock_ppgtt(struct drm_i915_private *i915,
return NULL;
kref_init(&ppgtt->ref);
ppgtt->base.i915 = i915;
ppgtt->base.total = round_down(U64_MAX, PAGE_SIZE);
ppgtt->base.file = ERR_PTR(-ENODEV);
ppgtt->vm.i915 = i915;
ppgtt->vm.total = round_down(U64_MAX, PAGE_SIZE);
ppgtt->vm.file = ERR_PTR(-ENODEV);
INIT_LIST_HEAD(&ppgtt->base.active_list);
INIT_LIST_HEAD(&ppgtt->base.inactive_list);
INIT_LIST_HEAD(&ppgtt->base.unbound_list);
INIT_LIST_HEAD(&ppgtt->vm.active_list);
INIT_LIST_HEAD(&ppgtt->vm.inactive_list);
INIT_LIST_HEAD(&ppgtt->vm.unbound_list);
INIT_LIST_HEAD(&ppgtt->base.global_link);
drm_mm_init(&ppgtt->base.mm, 0, ppgtt->base.total);
INIT_LIST_HEAD(&ppgtt->vm.global_link);
drm_mm_init(&ppgtt->vm.mm, 0, ppgtt->vm.total);
ppgtt->base.clear_range = nop_clear_range;
ppgtt->base.insert_page = mock_insert_page;
ppgtt->base.insert_entries = mock_insert_entries;
ppgtt->base.bind_vma = mock_bind_ppgtt;
ppgtt->base.unbind_vma = mock_unbind_ppgtt;
ppgtt->base.set_pages = ppgtt_set_pages;
ppgtt->base.clear_pages = clear_pages;
ppgtt->base.cleanup = mock_cleanup;
ppgtt->vm.clear_range = nop_clear_range;
ppgtt->vm.insert_page = mock_insert_page;
ppgtt->vm.insert_entries = mock_insert_entries;
ppgtt->vm.bind_vma = mock_bind_ppgtt;
ppgtt->vm.unbind_vma = mock_unbind_ppgtt;
ppgtt->vm.set_pages = ppgtt_set_pages;
ppgtt->vm.clear_pages = clear_pages;
ppgtt->vm.cleanup = mock_cleanup;
return ppgtt;
}
@ -107,27 +107,27 @@ void mock_init_ggtt(struct drm_i915_private *i915)
INIT_LIST_HEAD(&i915->vm_list);
ggtt->base.i915 = i915;
ggtt->vm.i915 = i915;
ggtt->gmadr = (struct resource) DEFINE_RES_MEM(0, 2048 * PAGE_SIZE);
ggtt->mappable_end = resource_size(&ggtt->gmadr);
ggtt->base.total = 4096 * PAGE_SIZE;
ggtt->vm.total = 4096 * PAGE_SIZE;
ggtt->base.clear_range = nop_clear_range;
ggtt->base.insert_page = mock_insert_page;
ggtt->base.insert_entries = mock_insert_entries;
ggtt->base.bind_vma = mock_bind_ggtt;
ggtt->base.unbind_vma = mock_unbind_ggtt;
ggtt->base.set_pages = ggtt_set_pages;
ggtt->base.clear_pages = clear_pages;
ggtt->base.cleanup = mock_cleanup;
ggtt->vm.clear_range = nop_clear_range;
ggtt->vm.insert_page = mock_insert_page;
ggtt->vm.insert_entries = mock_insert_entries;
ggtt->vm.bind_vma = mock_bind_ggtt;
ggtt->vm.unbind_vma = mock_unbind_ggtt;
ggtt->vm.set_pages = ggtt_set_pages;
ggtt->vm.clear_pages = clear_pages;
ggtt->vm.cleanup = mock_cleanup;
i915_address_space_init(&ggtt->base, i915, "global");
i915_address_space_init(&ggtt->vm, i915, "global");
}
void mock_fini_ggtt(struct drm_i915_private *i915)
{
struct i915_ggtt *ggtt = &i915->ggtt;
i915_address_space_fini(&ggtt->base);
i915_address_space_fini(&ggtt->vm);
}