linux_dsm_epyc7002/drivers/gpu/drm/i915/display/intel_bw.c
Stanislav Lisovskiy 20f505f225 drm/i915: Restrict qgv points which don't have enough bandwidth.
According to BSpec 53998, we should try to
restrict qgv points which can't provide
enough bandwidth for the desired display configuration.

Currently we just compare against all of
those and take the minimum (worst case).

v2: Fixed wrong PCode reply mask, removed hardcoded
    values.

v3: Forbid simultaneous legacy SAGV PCode requests and
    restricting qgv points. Put the actual restriction
    into the commit function; added serialization (thanks to Ville)
    to prevent commits being applied out of order in case of
    nonblocking and/or nomodeset commits.

v4:
    - Minor code refactoring, fixed a few typos (thanks to James Ausmus)
    - Changed the naming of the qgv point
      masking/unmasking functions (James Ausmus).
    - Simplified the masking/unmasking operation itself,
      as we don't need to mask only a single point per request (James Ausmus)
    - Reject and stick to the highest bandwidth point if SAGV
      can't be enabled (BSpec)

v5:
    - Added new mailbox reply codes, which seem to occur during boot
      time for TGL and indicate that the QGV setting is not yet available.

v6:
    - Increase the number of supported QGV points to be in sync with BSpec.

v7: - Rebased and resolved a conflict to fix a build failure.
    - Fixed NUM_QGV_POINTS to 8 and moved it to a header file (James Ausmus)

v8: - Don't report an error if we can't restrict qgv points, as SAGV
      can be disabled by BIOS, which is completely legal. So don't
      make CI panic. Instead, if we detect that only 1 QGV point is
      accessible, just check whether the required bandwidth fits,
      with no need for restricting.

v9: - Fix wrong QGV transition if we have 0 planes and no SAGV
      simultaneously.

v10: - Fix CDCLK corruption caused by the global state getting serialized
       without a modeset, which led to a non-calculated cdclk being
       copied to dev_priv (thanks to Ville for the hint).

v11: - Remove unneeded headers and spaces (Matthew Roper)
     - Remove the unneeded intel_qgv_info qi struct from the bw check and
       zero out the needed one (Matthew Roper)
     - Changed the QGV error message to have a clearer meaning (Matthew Roper)
     - Use state->modeset_set instead of any_ms (Matthew Roper)
     - Moved NUM_SAGV_POINTS from i915_reg.h to i915_drv.h where it's used
     - Keep using crtc_state->hw.active instead of .enable (Matthew Roper)
     - Moved unrelated changes to another patch (using latency as a parameter
       for plane wm calculation; moved to the SAGV refactoring patch)

v12: - Fix rebase conflict with own temporary SAGV/QGV fix.
     - Remove the unnecessary check for the mask being zero when unmasking
       qgv points, as that is completely legal (Matt Roper)
     - Check if we are setting the same mask as the one already set
       in hardware to prevent an error from PCode.
     - Fix the error message when restricting/unrestricting qgv points
       to say "mask/unmask", which sounds more accurate (Matt Roper)
     - Move sagv status setting from the atomic check to icl_get_bw_info,
       as this should be calculated only once (Matt Roper)
     - Edited the comments for the case when we can't enable SAGV and
       use only the 1 QGV point with the highest bandwidth to be more
       understandable (Matt Roper)

v13: - Moved max_data_rate in the bw check to a closer scope (Ville Syrjälä)
     - Changed the comment for a zero new_mask in the qgv points masking
       function to better reflect reality (Ville Syrjälä)
     - Simplified the bit mask operation in the qgv points masking function
       (Ville Syrjälä)
     - Moved intel_qgv_points_mask closer to the gen11 SAGV disabling;
       however, this still can't be under the modeset condition (Ville Syrjälä)
     - Packed qgv_points_mask as u8 and moved it closer to pipe_sagv_mask
       (Ville Syrjälä)
     - Extracted the PCode changes into a separate patch (Ville Syrjälä)
     - Now treat num_planes 0 the same as 1 to avoid confusion and
       returning max_bw as 0, which would prevent choosing the QGV
       point with max bandwidth in case SAGV is not allowed,
       as per BSpec (Ville Syrjälä)
     - Do the actual qgv_points_mask swap in the same place where
       all other global state parts like cdclk are swapped.
       In the next patch, this will all be moved to bw state as
       global state, once the new global state patch series from Ville
       lands

v14: - Now using global state to serialize access to qgv points
     - Added global state locking back; otherwise we seem to read
       bw state the wrong way.

v15: - Added a TODO comment near the atomic global state locking in
       the bw code.

v16: - Renamed the intel_atomic_bw_* functions to intel_bw_*, as discussed
       with Jani Nikula.
     - Took the bw_state_changed flag into use.

v17: - Moved the qgv point related manipulations next to the SAGV code, as
       those are semantically related (Ville Syrjälä)
     - Renamed them to intel_sagv_(pre)|(post)_plane_update
       (Ville Syrjälä)

v18: - Moved the sagv related calls from the commit tail into
       intel_sagv_(pre)|(post)_plane_update (Ville Syrjälä)

v19: - Use intel_atomic_get_bw_(old)|(new)_state, which is intended
       for the commit tail stage.

v20: - Return max bandwidth for 0 planes (Ville)
     - Constify old_bw_state in bw_atomic_check (Ville)
     - Removed some debugs (Ville)
     - Added the data rate to the debug print when there are no QGV
       points (Ville)
     - Removed some comments (Ville)

v21, v22, v23: - Fixed rebase conflicts

v24: - Changed the PCode mask to use the ICL_ prefix
v25: - Resolved a rebase conflict

v26: - Removed redundant NULL checks (Ville)
     - Removed redundant error prints (Ville)

v27: - Use device specific drm_err (Ville)
     - Fixed parenthesis indent reported by checkpatch;
       "line over 100" warnings to be fixed together with
       the existing code style.

Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Cc: Ville Syrjälä <ville.syrjala@intel.com>
Cc: James Ausmus <james.ausmus@intel.com>
[vsyrjala: Drop duplicate intel_sagv_{pre,post}_plane_update() prototypes
           and drop unused NUM_SAGV_POINTS define]
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200514074853.9508-3-stanislav.lisovskiy@intel.com
2020-05-14 19:08:30 +03:00

// SPDX-License-Identifier: MIT
/*
 * Copyright © 2019 Intel Corporation
 */

#include <drm/drm_atomic_state_helper.h>

#include "intel_atomic.h"
#include "intel_bw.h"
#include "intel_display_types.h"
#include "intel_pm.h"
#include "intel_sideband.h"

/* Parameters for Qclk Geyserville (QGV) */
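/*
 * dclk is in multiples of 16.666 MHz; the t_* fields are the usual DRAM
 * timings (tRP, tRDPRE, tRC, tRAS, tRCD), consumed in dclk cycle units
 * by icl_get_bw_info().
 */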
struct intel_qgv_point {
	u16 dclk, t_rp, t_rdpre, t_rc, t_ras, t_rcd;
};

struct intel_qgv_info {
	struct intel_qgv_point points[I915_NUM_QGV_POINTS];
	u8 num_points;
	u8 num_channels;
	u8 t_bl;
	enum intel_dram_type dram_type;
};

static int icl_pcode_read_mem_global_info(struct drm_i915_private *dev_priv,
					  struct intel_qgv_info *qi)
{
	u32 val = 0;
	int ret;

	ret = sandybridge_pcode_read(dev_priv,
				     ICL_PCODE_MEM_SUBSYSYSTEM_INFO |
				     ICL_PCODE_MEM_SS_READ_GLOBAL_INFO,
				     &val, NULL);
	if (ret)
		return ret;

	if (IS_GEN(dev_priv, 12)) {
		switch (val & 0xf) {
		case 0:
			qi->dram_type = INTEL_DRAM_DDR4;
			break;
		case 3:
			qi->dram_type = INTEL_DRAM_LPDDR4;
			break;
		case 4:
			qi->dram_type = INTEL_DRAM_DDR3;
			break;
		case 5:
			qi->dram_type = INTEL_DRAM_LPDDR3;
			break;
		default:
			MISSING_CASE(val & 0xf);
			break;
		}
	} else if (IS_GEN(dev_priv, 11)) {
		switch (val & 0xf) {
		case 0:
			qi->dram_type = INTEL_DRAM_DDR4;
			break;
		case 1:
			qi->dram_type = INTEL_DRAM_DDR3;
			break;
		case 2:
			qi->dram_type = INTEL_DRAM_LPDDR3;
			break;
		case 3:
			qi->dram_type = INTEL_DRAM_LPDDR4;
			break;
		default:
			MISSING_CASE(val & 0xf);
			break;
		}
	} else {
		MISSING_CASE(INTEL_GEN(dev_priv));
		qi->dram_type = INTEL_DRAM_LPDDR3; /* Conservative default */
	}
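
	/*
	 * Remaining global info fields as decoded below: bits 7:4 hold
	 * the channel count, bits 11:8 the number of QGV points.
	 */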
	qi->num_channels = (val & 0xf0) >> 4;
	qi->num_points = (val & 0xf00) >> 8;

	if (IS_GEN(dev_priv, 12))
		qi->t_bl = qi->dram_type == INTEL_DRAM_DDR4 ? 4 : 16;
	else if (IS_GEN(dev_priv, 11))
		qi->t_bl = qi->dram_type == INTEL_DRAM_DDR4 ? 4 : 8;

	return 0;
}

static int icl_pcode_read_qgv_point_info(struct drm_i915_private *dev_priv,
					 struct intel_qgv_point *sp,
					 int point)
{
	u32 val = 0, val2 = 0;
	int ret;

	ret = sandybridge_pcode_read(dev_priv,
				     ICL_PCODE_MEM_SUBSYSYSTEM_INFO |
				     ICL_PCODE_MEM_SS_READ_QGV_POINT_INFO(point),
				     &val, &val2);
	if (ret)
		return ret;

	sp->dclk = val & 0xffff;
	sp->t_rp = (val & 0xff0000) >> 16;
	sp->t_rcd = (val & 0xff000000) >> 24;

	sp->t_rdpre = val2 & 0xff;
	sp->t_ras = (val2 & 0xff00) >> 8;

	sp->t_rc = sp->t_rp + sp->t_ras;

	return 0;
}

int icl_pcode_restrict_qgv_points(struct drm_i915_private *dev_priv,
				  u32 points_mask)
{
	int ret;

	/* bspec says to keep retrying for at least 1 ms */
	ret = skl_pcode_request(dev_priv, ICL_PCODE_SAGV_DE_MEM_SS_CONFIG,
				points_mask,
				ICL_PCODE_POINTS_RESTRICTED_MASK,
				ICL_PCODE_POINTS_RESTRICTED,
				1);

	if (ret < 0) {
		drm_err(&dev_priv->drm, "Failed to disable qgv points (%d)\n", ret);
		return ret;
	}

	return 0;
}

static int icl_get_qgv_points(struct drm_i915_private *dev_priv,
			      struct intel_qgv_info *qi)
{
	int i, ret;

	ret = icl_pcode_read_mem_global_info(dev_priv, qi);
	if (ret)
		return ret;

	if (drm_WARN_ON(&dev_priv->drm,
			qi->num_points > ARRAY_SIZE(qi->points)))
		qi->num_points = ARRAY_SIZE(qi->points);

	for (i = 0; i < qi->num_points; i++) {
		struct intel_qgv_point *sp = &qi->points[i];

		ret = icl_pcode_read_qgv_point_info(dev_priv, sp, i);
		if (ret)
			return ret;

		drm_dbg_kms(&dev_priv->drm,
			    "QGV %d: DCLK=%d tRP=%d tRDPRE=%d tRAS=%d tRCD=%d tRC=%d\n",
			    i, sp->dclk, sp->t_rp, sp->t_rdpre, sp->t_ras,
			    sp->t_rcd, sp->t_rc);
	}

	return 0;
}

static int icl_calc_bw(int dclk, int num, int den)
{
	/* multiples of 16.666MHz (100/6) */
	return DIV_ROUND_CLOSEST(num * dclk * 100, den * 6);
}
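
/*
 * Illustrative example (values not from BSpec): a QGV point reporting
 * dclk == 72 runs at 72 * 16.666 = 1200 MHz, so icl_calc_bw(72, 16, 1)
 * == 19200, i.e. 16 bytes per clock at 1200 MHz = 19.2 GB/s.
 */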

static int icl_sagv_max_dclk(const struct intel_qgv_info *qi)
{
	u16 dclk = 0;
	int i;

	for (i = 0; i < qi->num_points; i++)
		dclk = max(dclk, qi->points[i].dclk);

	return dclk;
}

struct intel_sa_info {
	u16 displayrtids;
	u8 deburst, deprogbwlimit;
};
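
/*
 * deprogbwlimit caps the derated bandwidth (GB/s), displayrtids bounds the
 * per-channel request queue depth (ipqdepth), and deburst feeds the
 * clpchgroup calculation in icl_get_bw_info().
 */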

static const struct intel_sa_info icl_sa_info = {
	.deburst = 8,
	.deprogbwlimit = 25, /* GB/s */
	.displayrtids = 128,
};

static const struct intel_sa_info tgl_sa_info = {
	.deburst = 16,
	.deprogbwlimit = 34, /* GB/s */
	.displayrtids = 256,
};

static int icl_get_bw_info(struct drm_i915_private *dev_priv,
			   const struct intel_sa_info *sa)
{
	struct intel_qgv_info qi = {};
	bool is_y_tile = true; /* assume y tile may be used */
	int num_channels;
	int deinterleave;
	int ipqdepth, ipqdepthpch;
	int dclk_max;
	int maxdebw;
	int i, ret;

	ret = icl_get_qgv_points(dev_priv, &qi);
	if (ret) {
		drm_dbg_kms(&dev_priv->drm,
			    "Failed to get memory subsystem information, ignoring bandwidth limits\n");
		return ret;
	}
	num_channels = qi.num_channels;

	deinterleave = DIV_ROUND_UP(num_channels, is_y_tile ? 4 : 2);
	dclk_max = icl_sagv_max_dclk(&qi);

	ipqdepthpch = 16;

	maxdebw = min(sa->deprogbwlimit * 1000,
		      icl_calc_bw(dclk_max, 16, 1) * 6 / 10); /* 60% */
	ipqdepth = min(ipqdepthpch, sa->displayrtids / num_channels);
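
	/*
	 * Worked example (illustrative, not from BSpec): on ICL
	 * (deburst == 8) with 2 channels and y-tile, deinterleave == 1
	 * and ipqdepth == 16, so the loop below produces
	 * clpchgroup == 4, 8, 16 and num_planes == 4, 2, 1 for the
	 * three bw levels before breaking out.
	 */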
	for (i = 0; i < ARRAY_SIZE(dev_priv->max_bw); i++) {
		struct intel_bw_info *bi = &dev_priv->max_bw[i];
		int clpchgroup;
		int j;

		clpchgroup = (sa->deburst * deinterleave / num_channels) << i;
		bi->num_planes = (ipqdepth - clpchgroup) / clpchgroup + 1;

		bi->num_qgv_points = qi.num_points;

		for (j = 0; j < qi.num_points; j++) {
			const struct intel_qgv_point *sp = &qi.points[j];
			int ct, bw;

			/*
			 * Max row cycle time
			 *
			 * FIXME what is the logic behind the
			 * assumed burst length?
			 */
			ct = max_t(int, sp->t_rc, sp->t_rp + sp->t_rcd +
				   (clpchgroup - 1) * qi.t_bl + sp->t_rdpre);
			bw = icl_calc_bw(sp->dclk, clpchgroup * 32 * num_channels, ct);

			bi->deratedbw[j] = min(maxdebw,
					       bw * 9 / 10); /* 90% */

			drm_dbg_kms(&dev_priv->drm,
				    "BW%d / QGV %d: num_planes=%d deratedbw=%u\n",
				    i, j, bi->num_planes, bi->deratedbw[j]);
		}

		if (bi->num_planes == 1)
			break;
	}

	/*
	 * If SAGV is disabled in BIOS, we always get just 1 QGV point,
	 * and we can't send PCode commands to restrict it, as that
	 * would fail and be pointless anyway.
	 */
	if (qi.num_points == 1)
		dev_priv->sagv_status = I915_SAGV_NOT_CONTROLLED;
	else
		dev_priv->sagv_status = I915_SAGV_ENABLED;

	return 0;
}

static unsigned int icl_max_bw(struct drm_i915_private *dev_priv,
			       int num_planes, int qgv_point)
{
	int i;

	/*
	 * Let's return max bw for 0 planes
	 */
	num_planes = max(1, num_planes);

	for (i = 0; i < ARRAY_SIZE(dev_priv->max_bw); i++) {
		const struct intel_bw_info *bi =
			&dev_priv->max_bw[i];

		/*
		 * Pcode will not expose all QGV points when
		 * SAGV is forced to off/min/med/max.
		 */
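		/*
		 * Returning UINT_MAX for such hidden points means they can
		 * never fail the bandwidth comparison in
		 * intel_bw_atomic_check().
		 */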
		if (qgv_point >= bi->num_qgv_points)
			return UINT_MAX;

		if (num_planes >= bi->num_planes)
			return bi->deratedbw[qgv_point];
	}

	return 0;
}

void intel_bw_init_hw(struct drm_i915_private *dev_priv)
{
	if (!HAS_DISPLAY(dev_priv))
		return;

	if (IS_GEN(dev_priv, 12))
		icl_get_bw_info(dev_priv, &tgl_sa_info);
	else if (IS_GEN(dev_priv, 11))
		icl_get_bw_info(dev_priv, &icl_sa_info);
}

static unsigned int intel_bw_crtc_num_active_planes(const struct intel_crtc_state *crtc_state)
{
	/*
	 * We assume cursors are small enough
	 * to not cause bandwidth problems.
	 */
	return hweight8(crtc_state->active_planes & ~BIT(PLANE_CURSOR));
}

static unsigned int intel_bw_crtc_data_rate(const struct intel_crtc_state *crtc_state)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	unsigned int data_rate = 0;
	enum plane_id plane_id;

	for_each_plane_id_on_crtc(crtc, plane_id) {
		/*
		 * We assume cursors are small enough
		 * to not cause bandwidth problems.
		 */
		if (plane_id == PLANE_CURSOR)
			continue;

		data_rate += crtc_state->data_rate[plane_id];
	}

	return data_rate;
}

void intel_bw_crtc_update(struct intel_bw_state *bw_state,
			  const struct intel_crtc_state *crtc_state)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	struct drm_i915_private *i915 = to_i915(crtc->base.dev);

	bw_state->data_rate[crtc->pipe] =
		intel_bw_crtc_data_rate(crtc_state);
	bw_state->num_active_planes[crtc->pipe] =
		intel_bw_crtc_num_active_planes(crtc_state);

	drm_dbg_kms(&i915->drm, "pipe %c data rate %u num active planes %u\n",
		    pipe_name(crtc->pipe),
		    bw_state->data_rate[crtc->pipe],
		    bw_state->num_active_planes[crtc->pipe]);
}

static unsigned int intel_bw_num_active_planes(struct drm_i915_private *dev_priv,
					       const struct intel_bw_state *bw_state)
{
	unsigned int num_active_planes = 0;
	enum pipe pipe;

	for_each_pipe(dev_priv, pipe)
		num_active_planes += bw_state->num_active_planes[pipe];

	return num_active_planes;
}

static unsigned int intel_bw_data_rate(struct drm_i915_private *dev_priv,
				       const struct intel_bw_state *bw_state)
{
	unsigned int data_rate = 0;
	enum pipe pipe;

	for_each_pipe(dev_priv, pipe)
		data_rate += bw_state->data_rate[pipe];

	return data_rate;
}

struct intel_bw_state *
intel_atomic_get_old_bw_state(struct intel_atomic_state *state)
{
	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
	struct intel_global_state *bw_state;

	bw_state = intel_atomic_get_old_global_obj_state(state, &dev_priv->bw_obj);

	return to_intel_bw_state(bw_state);
}

struct intel_bw_state *
intel_atomic_get_new_bw_state(struct intel_atomic_state *state)
{
	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
	struct intel_global_state *bw_state;

	bw_state = intel_atomic_get_new_global_obj_state(state, &dev_priv->bw_obj);

	return to_intel_bw_state(bw_state);
}

struct intel_bw_state *
intel_atomic_get_bw_state(struct intel_atomic_state *state)
{
	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
	struct intel_global_state *bw_state;

	bw_state = intel_atomic_get_global_obj_state(state, &dev_priv->bw_obj);
	if (IS_ERR(bw_state))
		return ERR_CAST(bw_state);

	return to_intel_bw_state(bw_state);
}

int intel_bw_atomic_check(struct intel_atomic_state *state)
{
	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
	struct intel_crtc_state *new_crtc_state, *old_crtc_state;
	struct intel_bw_state *new_bw_state = NULL;
	const struct intel_bw_state *old_bw_state = NULL;
	unsigned int data_rate;
	unsigned int num_active_planes;
	struct intel_crtc *crtc;
	int i, ret;
	u32 allowed_points = 0;
	unsigned int max_bw_point = 0, max_bw = 0;
	unsigned int num_qgv_points = dev_priv->max_bw[0].num_qgv_points;
	u32 mask = (1 << num_qgv_points) - 1;

	/* FIXME earlier gens need some checks too */
	if (INTEL_GEN(dev_priv) < 11)
		return 0;

	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
					    new_crtc_state, i) {
		unsigned int old_data_rate =
			intel_bw_crtc_data_rate(old_crtc_state);
		unsigned int new_data_rate =
			intel_bw_crtc_data_rate(new_crtc_state);
		unsigned int old_active_planes =
			intel_bw_crtc_num_active_planes(old_crtc_state);
		unsigned int new_active_planes =
			intel_bw_crtc_num_active_planes(new_crtc_state);

		/*
		 * Avoid locking the bw state when
		 * nothing significant has changed.
		 */
		if (old_data_rate == new_data_rate &&
		    old_active_planes == new_active_planes)
			continue;

		new_bw_state = intel_atomic_get_bw_state(state);
		if (IS_ERR(new_bw_state))
			return PTR_ERR(new_bw_state);

		new_bw_state->data_rate[crtc->pipe] = new_data_rate;
		new_bw_state->num_active_planes[crtc->pipe] = new_active_planes;

		drm_dbg_kms(&dev_priv->drm,
			    "pipe %c data rate %u num active planes %u\n",
			    pipe_name(crtc->pipe),
			    new_bw_state->data_rate[crtc->pipe],
			    new_bw_state->num_active_planes[crtc->pipe]);
	}

	if (!new_bw_state)
		return 0;

	ret = intel_atomic_lock_global_state(&new_bw_state->base);
	if (ret)
		return ret;

	data_rate = intel_bw_data_rate(dev_priv, new_bw_state);
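	/*
	 * deratedbw is in MB/s (deprogbwlimit is specified in GB/s and
	 * multiplied by 1000 in icl_get_bw_info()), hence the division
	 * by 1000 to bring the accumulated data rate into the same units.
	 */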
	data_rate = DIV_ROUND_UP(data_rate, 1000);

	num_active_planes = intel_bw_num_active_planes(dev_priv, new_bw_state);

	for (i = 0; i < num_qgv_points; i++) {
		unsigned int max_data_rate;

		max_data_rate = icl_max_bw(dev_priv, num_active_planes, i);
		/*
		 * We need to know which qgv point gives us
		 * maximum bandwidth in order to disable SAGV
		 * if we find that we exceed SAGV block time
		 * with watermarks. By that moment we already
		 * have those, as they are calculated earlier in
		 * intel_atomic_check.
		 */
		if (max_data_rate > max_bw) {
			max_bw_point = i;
			max_bw = max_data_rate;
		}
		if (max_data_rate >= data_rate)
			allowed_points |= BIT(i);

		drm_dbg_kms(&dev_priv->drm, "QGV point %d: max bw %d required %d\n",
			    i, max_data_rate, data_rate);
	}

	/*
	 * BSpec states that we always should have at least one allowed point
	 * left, so if we couldn't find any, simply reject the configuration
	 * for obvious reasons.
	 */
	if (allowed_points == 0) {
		drm_dbg_kms(&dev_priv->drm, "No QGV points provide sufficient memory"
			    " bandwidth %d for display configuration (%d active planes).\n",
			    data_rate, num_active_planes);
		return -EINVAL;
	}

	/*
	 * Leave only a single point with the highest bandwidth if
	 * we can't enable SAGV due to the increased memory latency it may
	 * cause.
	 */
	if (!intel_can_enable_sagv(dev_priv, new_bw_state)) {
		allowed_points = BIT(max_bw_point);
		drm_dbg_kms(&dev_priv->drm, "No SAGV, using single QGV point %d\n",
			    max_bw_point);
	}

	/*
	 * We store the ones which need to be masked as that is what PCode
	 * actually accepts as a parameter.
	 */
	new_bw_state->qgv_points_mask = ~allowed_points & mask;
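	/*
	 * E.g. with three QGV points (mask == 0b111), if only points 0 and 1
	 * provide enough bandwidth, allowed_points == 0b011 and the stored
	 * qgv_points_mask == 0b100.
	 */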

	old_bw_state = intel_atomic_get_old_bw_state(state);
	/*
	 * If the actual mask has changed we need to make sure that
	 * the commits are serialized (in case this is a nomodeset,
	 * nonblocking commit).
	 */
	if (new_bw_state->qgv_points_mask != old_bw_state->qgv_points_mask) {
		ret = intel_atomic_serialize_global_state(&new_bw_state->base);
		if (ret)
			return ret;
	}

	return 0;
}

static struct intel_global_state *
intel_bw_duplicate_state(struct intel_global_obj *obj)
{
	struct intel_bw_state *state;

	state = kmemdup(obj->state, sizeof(*state), GFP_KERNEL);
	if (!state)
		return NULL;

	return &state->base;
}

static void intel_bw_destroy_state(struct intel_global_obj *obj,
				   struct intel_global_state *state)
{
	kfree(state);
}

static const struct intel_global_state_funcs intel_bw_funcs = {
	.atomic_duplicate_state = intel_bw_duplicate_state,
	.atomic_destroy_state = intel_bw_destroy_state,
};

int intel_bw_init(struct drm_i915_private *dev_priv)
{
	struct intel_bw_state *state;

	state = kzalloc(sizeof(*state), GFP_KERNEL);
	if (!state)
		return -ENOMEM;

	intel_atomic_global_obj_init(dev_priv, &dev_priv->bw_obj,
				     &state->base, &intel_bw_funcs);

	return 0;
}