Merge tag 'drm-next-2020-10-15' of git://anongit.freedesktop.org/drm/drm

Pull drm updates from Dave Airlie:
 "Not a major amount of change, the i915 trees got split into display
  and gt trees to better facilitate higher level review, and there's a
  major refactoring of i915 GEM locking to use more core kernel concepts
  (like ww-mutexes). msm gets per-process pagetables, older AMD SI cards
  get DC support, nouveau got a bump in displayport support with common
  code extraction from i915.

   Outside of drm, this contains a couple of patches for hexint
   moduleparams which you've acked, and a virtio common-code tree that
   you should also get via its regular path.

  New driver:
   - Cadence MHDP8546 DisplayPort bridge driver

  core:
   - cross-driver scatterlist cleanups
   - devm_drm conversions
   - remove drm_dev_init
   - devm_drm_dev_alloc conversion

  ttm:
   - lots of refactoring and cleanups

  bridges:
   - chained bridge support in more drivers

  panel:
   - misc new panels

  scheduler:
   - cleanup priority levels

  displayport:
   - refactor i915 code into helpers for nouveau

  i915:
   - split into display and GT trees
   - WW locking refactoring in GEM
   - execbuf2 extension mechanism
   - syncobj timeline support
   - GEN 12 HOBL display powersaving
   - Rocket Lake display additions
   - Disable FBC on Tigerlake
   - Tigerlake Type-C + DP improvements
   - Hotplug interrupt refactoring

  amdgpu:
   - Sienna Cichlid updates
   - Navy Flounder updates
   - DCE6 (SI) support for DC
   - Plane rotation enabled
   - TMZ state info ioctl
   - PCIe DPC recovery support
   - DC interrupt handling refactor
   - OLED panel fixes

  amdkfd:
   - add SMI events for thermal throttling
   - SMI interface events ioctl update
   - process eviction counters

  radeon:
   - move to dma_ for allocations
   - expose sclk via sysfs

  msm:
   - DSI support for sm8150/sm8250
   - per-process GPU pagetable support
   - Displayport support

  mediatek:
   - move HDMI phy driver to PHY
   - convert mtk-dpi to bridge API
   - disable mt2701 tmds

  tegra:
   - bridge support

  exynos:
   - misc cleanups

  vc4:
   - dual display cleanups

  ast:
   - cleanups

  gma500:
   - conversion to GPIOd API

  hisilicon:
   - misc reworks

  ingenic:
   - clock handling and format improvements

  mcde:
   - DSI support

  mgag200:
   - desktop g200 support

  mxsfb:
   - i.MX7 + i.MX8M
   - alpha plane support

  panfrost:
   - devfreq support
   - amlogic SoC support

  ps8640:
   - EDID from eDP retrieval

  tidss:
   - AM65xx YUV workaround

  virtio:
   - virtio-gpu exported resources

  rcar-du:
   - R8A7742, R8A774E1 and R8A77961 support
   - YUV planar format fixes
   - non-visible plane handling
   - VSP device reference count fix
   - Kconfig fix to avoid displaying disabled options in .config"
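
As context for the i915 GEM locking item above, a minimal illustrative
sketch of the ww-mutex acquire/backoff pattern that work moves towards
(my_obj, my_ww_class and lock_objs are hypothetical names, not i915
code):

    static DEFINE_WW_CLASS(my_ww_class);

    struct my_obj {
        struct ww_mutex lock;
        /* ... */
    };

    /*
     * Lock a set of objects in arbitrary order. On -EDEADLK, drop
     * everything already held, sleep on the contended lock, and
     * retry the whole pass.
     */
    static int lock_objs(struct my_obj **objs, int n,
                         struct ww_acquire_ctx *ctx)
    {
        struct my_obj *contended = NULL;
        int i, ret;

    retry:
        for (i = 0; i < n; i++) {
            if (objs[i] == contended) {
                contended = NULL;   /* already held via lock_slow */
                continue;
            }
            ret = ww_mutex_lock(&objs[i]->lock, ctx);
            if (ret) {
                struct my_obj *busy = objs[i];

                while (i--)
                    ww_mutex_unlock(&objs[i]->lock);
                if (contended)      /* slow-locked, not yet reached */
                    ww_mutex_unlock(&contended->lock);
                if (ret != -EDEADLK)
                    return ret;
                contended = busy;
                ww_mutex_lock_slow(&contended->lock, ctx);
                goto retry;
            }
        }
        ww_acquire_done(ctx);       /* no further locks will be taken */
        return 0;
    }

The caller brackets this with ww_acquire_init(&ctx, &my_ww_class) and,
once every lock has been dropped again, ww_acquire_fini(&ctx).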

* tag 'drm-next-2020-10-15' of git://anongit.freedesktop.org/drm/drm: (1494 commits)
  drm/ingenic: Fix bad revert
  drm/amdgpu: Fix invalid number of character '{' in amdgpu_acpi_init
  drm/amdgpu: Remove warning for virtual_display
  drm/amdgpu: kfd_initialized can be static
  drm/amd/pm: setup APU dpm clock table in SMU HW initialization
  drm/amdgpu: prevent spurious warning
  drm/amdgpu/swsmu: fix ARC build errors
  drm/amd/display: Fix OPTC_DATA_FORMAT programming
  drm/amd/display: Don't allow pstate if no support in blank
  drm/panfrost: increase readl_relaxed_poll_timeout values
  MAINTAINERS: Update entry for st7703 driver after the rename
  Revert "gpu/drm: ingenic: Add option to mmap GEM buffers cached"
  drm/amd/display: HDMI remote sink need mode validation for Linux
  drm/amd/display: Change to correct unit on audio rate
  drm/amd/display: Avoid set zero in the requested clk
  drm/amdgpu: align frag_end to covered address space
  drm/amdgpu: fix NULL pointer dereference for Renoir
  drm/vmwgfx: fix regression in thp code due to ttm init refactor.
  drm/amdgpu/swsmu: add interrupt work handler for smu11 parts
  drm/amdgpu/swsmu: add interrupt work function
  ...
Linus Torvalds 2020-10-15 10:46:16 -07:00
commit 93b694d096
1295 changed files with 65076 additions and 20247 deletions


@@ -0,0 +1,117 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/brcm,bcm2711-hdmi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Broadcom BCM2711 HDMI Controller Device Tree Bindings
maintainers:
- Eric Anholt <eric@anholt.net>
properties:
compatible:
enum:
- brcm,bcm2711-hdmi0
- brcm,bcm2711-hdmi1
reg:
items:
- description: HDMI controller register range
- description: DVP register range
- description: HDMI PHY register range
- description: Rate Manager register range
- description: Packet RAM register range
- description: Metadata RAM register range
- description: CSC register range
- description: CEC register range
- description: HD register range
reg-names:
items:
- const: hdmi
- const: dvp
- const: phy
- const: rm
- const: packet
- const: metadata
- const: csc
- const: cec
- const: hd
clocks:
items:
- description: The HDMI state machine clock
- description: The Pixel BVB clock
- description: The HDMI Audio parent clock
- description: The HDMI CEC parent clock
clock-names:
items:
- const: hdmi
- const: bvb
- const: audio
- const: cec
ddc:
allOf:
- $ref: /schemas/types.yaml#/definitions/phandle
description: >
Phandle of the I2C controller used for DDC EDID probing
hpd-gpios:
description: >
The GPIO pin for the HDMI hotplug detect (if it doesn't appear
as an interrupt/status bit in the HDMI controller itself)
dmas:
maxItems: 1
description: >
Should contain one entry pointing to the DMA channel used to
transfer audio data.
dma-names:
const: audio-rx
resets:
maxItems: 1
required:
- compatible
- reg
- reg-names
- clocks
- resets
- ddc
additionalProperties: false
examples:
- |
hdmi0: hdmi@7ef00700 {
compatible = "brcm,bcm2711-hdmi0";
reg = <0x7ef00700 0x300>,
<0x7ef00300 0x200>,
<0x7ef00f00 0x80>,
<0x7ef00f80 0x80>,
<0x7ef01b00 0x200>,
<0x7ef01f00 0x400>,
<0x7ef00200 0x80>,
<0x7ef04300 0x100>,
<0x7ef20000 0x100>;
reg-names = "hdmi",
"dvp",
"phy",
"rm",
"packet",
"metadata",
"csc",
"cec",
"hd";
clocks = <&firmware_clocks 13>, <&firmware_clocks 14>, <&dvp 1>, <&clk_27MHz>;
clock-names = "hdmi", "bvb", "audio", "cec";
resets = <&dvp 0>;
ddc = <&ddc0>;
};
...


@@ -11,7 +11,9 @@ maintainers:
properties:
compatible:
const: brcm,bcm2835-hvs
enum:
- brcm,bcm2711-hvs
- brcm,bcm2835-hvs
reg:
maxItems: 1
@@ -19,6 +21,10 @@ properties:
interrupts:
maxItems: 1
clocks:
maxItems: 1
description: Core Clock
required:
- compatible
- reg
@@ -26,6 +32,16 @@ required:
additionalProperties: false
if:
properties:
compatible:
contains:
const: brcm,bcm2711-hvs
then:
required:
- clocks
examples:
- |
hvs@7e400000 {


@@ -15,6 +15,11 @@ properties:
- brcm,bcm2835-pixelvalve0
- brcm,bcm2835-pixelvalve1
- brcm,bcm2835-pixelvalve2
- brcm,bcm2711-pixelvalve0
- brcm,bcm2711-pixelvalve1
- brcm,bcm2711-pixelvalve2
- brcm,bcm2711-pixelvalve3
- brcm,bcm2711-pixelvalve4
reg:
maxItems: 1


@@ -17,6 +17,7 @@ description: >
properties:
compatible:
enum:
- brcm,bcm2711-vc5
- brcm,bcm2835-vc4
- brcm,cygnus-vc4


@@ -0,0 +1,169 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/display/bridge/cdns,mhdp8546.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Cadence MHDP8546 bridge
maintainers:
- Swapnil Jakhade <sjakhade@cadence.com>
- Yuti Amonkar <yamonkar@cadence.com>
properties:
compatible:
enum:
- cdns,mhdp8546
- ti,j721e-mhdp8546
reg:
minItems: 1
maxItems: 2
items:
- description:
Register block of mhdptx apb registers up to PHY mapped area (AUX_CONFIG_P).
The AUX and PMA registers are not part of this range, they are instead
included in the associated PHY.
- description:
Register block for DSS_EDP0_INTG_CFG_VP registers in case of TI J7 SoCs.
reg-names:
minItems: 1
maxItems: 2
items:
- const: mhdptx
- const: j721e-intg
clocks:
maxItems: 1
description:
DP bridge clock, used by the IP to know how to translate a number of
clock cycles into a time (which is used to comply with DP standard timings
and delays).
phys:
maxItems: 1
description:
phandle to the DisplayPort PHY.
phy-names:
items:
- const: dpphy
power-domains:
maxItems: 1
interrupts:
maxItems: 1
ports:
type: object
description:
Ports as described in Documentation/devicetree/bindings/graph.txt.
properties:
'#address-cells':
const: 1
'#size-cells':
const: 0
port@0:
type: object
description:
First input port representing the DP bridge input.
port@1:
type: object
description:
Second input port representing the DP bridge input.
port@2:
type: object
description:
Third input port representing the DP bridge input.
port@3:
type: object
description:
Fourth input port representing the DP bridge input.
port@4:
type: object
description:
Output port representing the DP bridge output.
required:
- port@0
- port@4
- '#address-cells'
- '#size-cells'
allOf:
- if:
properties:
compatible:
contains:
const: ti,j721e-mhdp8546
then:
properties:
reg:
minItems: 2
reg-names:
minItems: 2
else:
properties:
reg:
maxItems: 1
reg-names:
maxItems: 1
required:
- compatible
- clocks
- reg
- reg-names
- phys
- phy-names
- interrupts
- ports
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
bus {
#address-cells = <2>;
#size-cells = <2>;
mhdp: dp-bridge@f0fb000000 {
compatible = "cdns,mhdp8546";
reg = <0xf0 0xfb000000 0x0 0x1000000>;
reg-names = "mhdptx";
clocks = <&mhdp_clock>;
phys = <&dp_phy>;
phy-names = "dpphy";
interrupts = <GIC_SPI 614 IRQ_TYPE_LEVEL_HIGH>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
dp_bridge_input: endpoint {
remote-endpoint = <&xxx_dpi_output>;
};
};
port@4 {
reg = <4>;
dp_bridge_output: endpoint {
remote-endpoint = <&xxx_dp_connector_input>;
};
};
};
};
};
...


@@ -0,0 +1,176 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/bridge/lontium,lt9611.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Lontium LT9611 2 Port MIPI to HDMI Bridge
maintainers:
- Vinod Koul <vkoul@kernel.org>
description: |
The LT9611 is a bridge device which converts DSI to HDMI
properties:
compatible:
enum:
- lontium,lt9611
reg:
maxItems: 1
"#sound-dai-cells":
const: 1
interrupts:
maxItems: 1
reset-gpios:
maxItems: 1
description: GPIO connected to active high RESET pin.
vdd-supply:
description: Regulator for 1.8V MIPI phy power.
vcc-supply:
description: Regulator for 3.3V IO power.
ports:
type: object
properties:
"#address-cells":
const: 1
"#size-cells":
const: 0
port@0:
type: object
description: |
Primary MIPI port-1 for MIPI input
properties:
reg:
const: 0
patternProperties:
"^endpoint(@[0-9])$":
type: object
additionalProperties: false
properties:
remote-endpoint:
$ref: /schemas/types.yaml#/definitions/phandle
required:
- reg
port@1:
type: object
description: |
Additional MIPI port-2 for MIPI input, used in combination
with primary MIPI port-1 to drive higher resolution displays
properties:
reg:
const: 1
patternProperties:
"^endpoint(@[0-9])$":
type: object
additionalProperties: false
properties:
remote-endpoint:
$ref: /schemas/types.yaml#/definitions/phandle
required:
- reg
port@2:
type: object
description: |
HDMI port for HDMI output
properties:
reg:
const: 2
patternProperties:
"^endpoint(@[0-9])$":
type: object
additionalProperties: false
properties:
remote-endpoint:
$ref: /schemas/types.yaml#/definitions/phandle
required:
- reg
required:
- "#address-cells"
- "#size-cells"
- port@0
- port@2
required:
- compatible
- reg
- interrupts
- vdd-supply
- vcc-supply
- ports
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
i2c10 {
#address-cells = <1>;
#size-cells = <0>;
hdmi-bridge@3b {
compatible = "lontium,lt9611";
reg = <0x3b>;
reset-gpios = <&tlmm 128 GPIO_ACTIVE_HIGH>;
interrupts-extended = <&tlmm 84 IRQ_TYPE_EDGE_FALLING>;
vdd-supply = <&lt9611_1v8>;
vcc-supply = <&lt9611_3v3>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
lt9611_a: endpoint {
remote-endpoint = <&dsi0_out>;
};
};
port@1 {
reg = <1>;
lt9611_b: endpoint {
remote-endpoint = <&dsi1_out>;
};
};
port@2 {
reg = <2>;
lt9611_out: endpoint {
remote-endpoint = <&hdmi_con>;
};
};
};
};
};
...


@@ -79,6 +79,9 @@ properties:
The GPIO used to control the power down line of this device.
maxItems: 1
power-supply:
maxItems: 1
required:
- compatible
- ports


@@ -14,8 +14,10 @@ Required properties:
- compatible : Shall contain one or more of
- "renesas,r8a774a1-hdmi" for R8A774A1 (RZ/G2M) compatible HDMI TX
- "renesas,r8a774b1-hdmi" for R8A774B1 (RZ/G2N) compatible HDMI TX
- "renesas,r8a774e1-hdmi" for R8A774E1 (RZ/G2H) compatible HDMI TX
- "renesas,r8a7795-hdmi" for R8A7795 (R-Car H3) compatible HDMI TX
- "renesas,r8a7796-hdmi" for R8A7796 (R-Car M3-W) compatible HDMI TX
- "renesas,r8a77961-hdmi" for R8A77961 (R-Car M3-W+) compatible HDMI TX
- "renesas,r8a77965-hdmi" for R8A77965 (R-Car M3-N) compatible HDMI TX
- "renesas,rcar-gen3-hdmi" for the generic R-Car Gen3 and RZ/G2 compatible
HDMI TX
@@ -42,7 +44,7 @@ Optional properties:
Example:
hdmi0: hdmi@fead0000 {
compatible = "renesas,r8a7795-dw-hdmi";
compatible = "renesas,r8a7795-hdmi", "renesas,rcar-gen3-hdmi";
reg = <0 0xfead0000 0 0x10000>;
interrupts = <0 389 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_CORE R8A7795_CLK_S0D4>, <&cpg CPG_MOD 729>;


@@ -16,11 +16,13 @@ description: |
properties:
compatible:
enum:
- renesas,r8a7742-lvds # for RZ/G1H compatible LVDS encoders
- renesas,r8a7743-lvds # for RZ/G1M compatible LVDS encoders
- renesas,r8a7744-lvds # for RZ/G1N compatible LVDS encoders
- renesas,r8a774a1-lvds # for RZ/G2M compatible LVDS encoders
- renesas,r8a774b1-lvds # for RZ/G2N compatible LVDS encoders
- renesas,r8a774c0-lvds # for RZ/G2E compatible LVDS encoders
- renesas,r8a774e1-lvds # for RZ/G2H compatible LVDS encoders
- renesas,r8a7790-lvds # for R-Car H2 compatible LVDS encoders
- renesas,r8a7791-lvds # for R-Car M2-W compatible LVDS encoders
- renesas,r8a7793-lvds # for R-Car M2-N compatible LVDS encoders


@@ -0,0 +1,127 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/bridge/toshiba,tc358762.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Toshiba TC358762 MIPI DSI to MIPI DPI bridge
maintainers:
- Marek Vasut <marex@denx.de>
description: |
The TC358762 is bridge device which converts MIPI DSI to MIPI DPI.
properties:
compatible:
enum:
- toshiba,tc358762
reg:
maxItems: 1
description: virtual channel number of a DSI peripheral
vddc-supply:
description: Regulator for 1.2V internal core power.
ports:
type: object
properties:
"#address-cells":
const: 1
"#size-cells":
const: 0
port@0:
type: object
additionalProperties: false
description: |
Video port for MIPI DSI input
properties:
reg:
const: 0
patternProperties:
endpoint:
type: object
additionalProperties: false
properties:
remote-endpoint: true
required:
- reg
port@1:
type: object
additionalProperties: false
description: |
Video port for MIPI DPI output (panel or connector).
properties:
reg:
const: 1
patternProperties:
endpoint:
type: object
additionalProperties: false
properties:
remote-endpoint: true
required:
- reg
required:
- "#address-cells"
- "#size-cells"
- port@0
- port@1
required:
- compatible
- reg
- vddc-supply
- ports
additionalProperties: false
examples:
- |
i2c1 {
#address-cells = <1>;
#size-cells = <0>;
bridge@0 {
reg = <0>;
compatible = "toshiba,tc358762";
vddc-supply = <&vcc_1v2_reg>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
bridge_in: endpoint {
remote-endpoint = <&dsi_out>;
};
};
port@1 {
reg = <1>;
bridge_out: endpoint {
remote-endpoint = <&panel_in>;
};
};
};
};
};
...


@@ -0,0 +1,215 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/bridge/toshiba,tc358775.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Toshiba TC358775 DSI to LVDS bridge bindings
maintainers:
- Vinay Simha BN <simhavcs@gmail.com>
description: |
This binding supports DSI to LVDS bridge TC358775
MIPI DSI-RX Data 4-lane, CLK 1-lane with data rates up to 800 Mbps/lane.
Video frame size:
Up to 1600x1200 24-bit/pixel resolution for single-link LVDS display panel
limited by 135 MHz LVDS speed
Up to WUXGA (1920x1200 24-bit pixels) resolution for dual-link LVDS display
panel, limited by 270 MHz LVDS speed.
properties:
compatible:
const: toshiba,tc358775
reg:
maxItems: 1
description: i2c address of the bridge, 0x0f
vdd-supply:
maxItems: 1
description: 1.2V LVDS Power Supply
vddio-supply:
maxItems: 1
description: 1.8V IO Power Supply
stby-gpios:
maxItems: 1
description: Standby pin, Low active
reset-gpios:
maxItems: 1
description: Hardware reset, Low active
ports:
type: object
description:
A node containing input and output port nodes with endpoint definitions
as documented in
Documentation/devicetree/bindings/media/video-interfaces.txt
properties:
"#address-cells":
const: 1
"#size-cells":
const: 0
port@0:
type: object
description: |
DSI Input. The remote endpoint phandle should be a
reference to a valid mipi_dsi_host device node.
port@1:
type: object
description: |
Video port for LVDS output (panel or connector).
port@2:
type: object
description: |
Video port for Dual link LVDS output (panel or connector).
required:
- port@0
- port@1
required:
- compatible
- reg
- vdd-supply
- vddio-supply
- stby-gpios
- reset-gpios
- ports
examples:
- |
#include <dt-bindings/gpio/gpio.h>
/* For single-link LVDS display panel */
i2c@78b8000 {
/* On High speed expansion */
label = "HS-I2C2";
reg = <0x078b8000 0x500>;
clock-frequency = <400000>; /* fastmode operation */
#address-cells = <1>;
#size-cells = <0>;
tc_bridge: bridge@f {
compatible = "toshiba,tc358775";
reg = <0x0f>;
vdd-supply = <&pm8916_l2>;
vddio-supply = <&pm8916_l6>;
stby-gpios = <&msmgpio 99 GPIO_ACTIVE_LOW>;
reset-gpios = <&msmgpio 72 GPIO_ACTIVE_LOW>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
d2l_in_test: endpoint {
remote-endpoint = <&dsi0_out>;
};
};
port@1 {
reg = <1>;
lvds_out: endpoint {
remote-endpoint = <&panel_in>;
};
};
};
};
};
dsi@1a98000 {
reg = <0x1a98000 0x25c>;
reg-names = "dsi_ctrl";
ports {
#address-cells = <1>;
#size-cells = <0>;
port@1 {
reg = <1>;
dsi0_out: endpoint {
remote-endpoint = <&d2l_in_test>;
data-lanes = <0 1 2 3>;
};
};
};
};
- |
/* For dual-link LVDS display panel */
i2c@78b8000 {
/* On High speed expansion */
label = "HS-I2C2";
reg = <0x078b8000 0x500>;
clock-frequency = <400000>; /* fastmode operation */
#address-cells = <1>;
#size-cells = <0>;
tc_bridge_dual: bridge@f {
compatible = "toshiba,tc358775";
reg = <0x0f>;
vdd-supply = <&pm8916_l2>;
vddio-supply = <&pm8916_l6>;
stby-gpios = <&msmgpio 99 GPIO_ACTIVE_LOW>;
reset-gpios = <&msmgpio 72 GPIO_ACTIVE_LOW>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
d2l_in_dual: endpoint {
remote-endpoint = <&dsi0_out_dual>;
};
};
port@1 {
reg = <1>;
lvds0_out: endpoint {
remote-endpoint = <&panel_in0>;
};
};
port@2 {
reg = <2>;
lvds1_out: endpoint {
remote-endpoint = <&panel_in1>;
};
};
};
};
};
dsi@1a98000 {
reg = <0x1a98000 0x25c>;
reg-names = "dsi_ctrl";
ports {
#address-cells = <1>;
#size-cells = <0>;
port@1 {
reg = <1>;
dsi0_out_dual: endpoint {
remote-endpoint = <&d2l_in_dual>;
data-lanes = <0 1 2 3>;
};
};
};
};
...


@@ -0,0 +1,108 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
# Copyright 2019 NXP
%YAML 1.2
---
$id: "http://devicetree.org/schemas/display/imx/nxp,imx8mq-dcss.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: iMX8MQ Display Controller Subsystem (DCSS)
maintainers:
- Laurentiu Palcu <laurentiu.palcu@nxp.com>
description:
The DCSS (display controller sub system) is used to source up to three
display buffers, compose them, and drive a display using HDMI 2.0a(with HDCP
2.2) or MIPI-DSI. The DCSS is intended to support up to 4kp60 displays. HDR10
image processing capabilities are included to provide a solution capable of
driving next generation high dynamic range displays.
properties:
compatible:
const: nxp,imx8mq-dcss
reg:
items:
- description: DCSS base address and size, up to IRQ steer start
- description: DCSS BLKCTL base address and size
interrupts:
items:
- description: Context loader completion and error interrupt
- description: DTG interrupt used to signal context loader trigger time
- description: DTG interrupt for Vblank
interrupt-names:
items:
- const: ctxld
- const: ctxld_kick
- const: vblank
clocks:
items:
- description: Display APB clock for all peripheral PIO access interfaces
- description: Display AXI clock needed by DPR, Scaler, RTRAM_CTRL
- description: RTRAM clock
- description: Pixel clock, can be driven either by HDMI phy clock or MIPI
- description: DTRC clock, needed by video decompressor
clock-names:
items:
- const: apb
- const: axi
- const: rtrm
- const: pix
- const: dtrc
assigned-clocks:
items:
- description: Phandle and clock specifier of IMX8MQ_CLK_DISP_AXI_ROOT
- description: Phandle and clock specifier of IMX8MQ_CLK_DISP_RTRM
- description: Phandle and clock specifier of either IMX8MQ_VIDEO2_PLL1_REF_SEL or
IMX8MQ_VIDEO_PLL1_REF_SEL
assigned-clock-parents:
items:
- description: Phandle and clock specifier of IMX8MQ_SYS1_PLL_800M
- description: Phandle and clock specifier of IMX8MQ_SYS1_PLL_800M
- description: Phandle and clock specifier of IMX8MQ_CLK_27M
assigned-clock-rates:
items:
- description: Must be 800 MHz
- description: Must be 400 MHz
port:
type: object
description:
A port node pointing to the input port of a HDMI/DP or MIPI display bridge.
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/imx8mq-clock.h>
dcss: display-controller@32e00000 {
compatible = "nxp,imx8mq-dcss";
reg = <0x32e00000 0x2d000>, <0x32e2f000 0x1000>;
interrupts = <6>, <8>, <9>;
interrupt-names = "ctxld", "ctxld_kick", "vblank";
interrupt-parent = <&irqsteer>;
clocks = <&clk IMX8MQ_CLK_DISP_APB_ROOT>, <&clk IMX8MQ_CLK_DISP_AXI_ROOT>,
<&clk IMX8MQ_CLK_DISP_RTRM_ROOT>, <&clk IMX8MQ_VIDEO2_PLL_OUT>,
<&clk IMX8MQ_CLK_DISP_DTRC>;
clock-names = "apb", "axi", "rtrm", "pix", "dtrc";
assigned-clocks = <&clk IMX8MQ_CLK_DISP_AXI>, <&clk IMX8MQ_CLK_DISP_RTRM>,
<&clk IMX8MQ_VIDEO2_PLL1_REF_SEL>;
assigned-clock-parents = <&clk IMX8MQ_SYS1_PLL_800M>, <&clk IMX8MQ_SYS1_PLL_800M>,
<&clk IMX8MQ_CLK_27M>;
assigned-clock-rates = <800000000>,
<400000000>;
port {
dcss_out: endpoint {
remote-endpoint = <&hdmi_in>;
};
};
};


@@ -43,7 +43,7 @@ Required properties (all function blocks):
"mediatek,<chip>-dpi" - DPI controller, see mediatek,dpi.txt
"mediatek,<chip>-disp-mutex" - display mutex
"mediatek,<chip>-disp-od" - overdrive
the supported chips are mt2701, mt2712 and mt8173.
the supported chips are mt2701, mt7623, mt2712 and mt8173.
- reg: Physical base address and length of the function block register space
- interrupts: The interrupt signal from the function block (required, except for
merge and split function blocks).


@@ -7,7 +7,7 @@ output bus.
Required properties:
- compatible: "mediatek,<chip>-dpi"
the supported chips are mt2701 , mt8173 and mt8183.
the supported chips are mt2701, mt7623, mt8173 and mt8183.
- reg: Physical base address and length of the controller's registers
- interrupts: The interrupt signal from the function block.
- clocks: device clocks


@@ -7,7 +7,7 @@ channel output.
Required properties:
- compatible: "mediatek,<chip>-dsi"
the supported chips are mt2701, mt8173 and mt8183.
- the supported chips are mt2701, mt7623, mt8173 and mt8183.
- reg: Physical base address and length of the controller's registers
- interrupts: The interrupt signal from the function block.
- clocks: device clocks
@@ -26,7 +26,7 @@ The MIPI TX configuration module controls the MIPI D-PHY.
Required properties:
- compatible: "mediatek,<chip>-mipi-tx"
the supported chips are mt2701, mt8173 and mt8183.
- the supported chips are mt2701, mt7623, mt8173 and mt8183.
- reg: Physical base address and length of the controller's registers
- clocks: PLL reference clock
- clock-output-names: name of the output clock line to the DSI encoder


@@ -6,6 +6,7 @@ its parallel input.
Required properties:
- compatible: Should be "mediatek,<chip>-hdmi".
- the supported chips are mt2701, mt7623 and mt8173
- reg: Physical base address and length of the controller's registers
- interrupts: The interrupt signal from the function block.
- clocks: device clocks
@@ -32,6 +33,7 @@ The HDMI CEC controller handles hotplug detection and CEC communication.
Required properties:
- compatible: Should be "mediatek,<chip>-cec"
- the supported chips are mt7623 and mt8173
- reg: Physical base address and length of the controller's registers
- interrupts: The interrupt signal from the function block.
- clocks: device clock
@@ -44,6 +46,7 @@ The Mediatek's I2C controller is used to interface with I2C devices.
Required properties:
- compatible: Should be "mediatek,<chip>-hdmi-ddc"
- the supported chips are mt7623 and mt8173
- reg: Physical base address and length of the controller's registers
- clocks: device clock
- clock-names: Should be "ddc-i2c".
@@ -56,6 +59,7 @@ output and drives the HDMI pads.
Required properties:
- compatible: "mediatek,<chip>-hdmi-phy"
- the supported chips are mt2701, mt7623 and mt8173
- reg: Physical base address and length of the module's registers
- clocks: PLL reference clock
- clock-names: must contain "pll_ref"


@@ -90,6 +90,8 @@ Required properties:
* "qcom,dsi-phy-14nm-660"
* "qcom,dsi-phy-10nm"
* "qcom,dsi-phy-10nm-8998"
* "qcom,dsi-phy-7nm"
* "qcom,dsi-phy-7nm-8150"
- reg: Physical base address and length of the registers of PLL, PHY. Some
revisions require the PHY regulator base address, whereas others require the
PHY lane base address. See below for each PHY revision.
@@ -98,7 +100,7 @@ Required properties:
* "dsi_pll"
* "dsi_phy"
* "dsi_phy_regulator"
For DSI 14nm and 10nm PHYs:
For DSI 14nm, 10nm and 7nm PHYs:
* "dsi_pll"
* "dsi_phy"
* "dsi_phy_lane"
@@ -116,7 +118,7 @@ Required properties:
- vcca-supply: phandle to vcca regulator device node
For 14nm PHY:
- vcca-supply: phandle to vcca regulator device node
For 10nm PHY:
For 10nm and 7nm PHY:
- vdds-supply: phandle to vdds regulator device node
Optional properties:


@@ -13,7 +13,9 @@ properties:
compatible:
items:
- enum:
- bananapi,lhr050h41
- bananapi,lhr050h41
- feixin,k101-im2byl02
- const: ilitek,ili9881c
backlight: true


@@ -0,0 +1,70 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/mantix,mlaf057we51-x.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Mantix MLAF057WE51-X 5.7" 720x1440 TFT LCD panel
maintainers:
- Guido Günther <agx@sigxcpu.org>
description:
Mantix MLAF057WE51 X is a 720x1440 TFT LCD panel connected using
a MIPI-DSI video interface.
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
enum:
- mantix,mlaf057we51-x
port: true
reg:
maxItems: 1
description: DSI virtual channel
avdd-supply:
description: Positive analog power supply
avee-supply:
description: Negative analog power supply
vddi-supply:
description: 1.8V I/O voltage supply
reset-gpios: true
backlight: true
required:
- compatible
- reg
- avdd-supply
- avee-supply
- vddi-supply
- reset-gpios
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "mantix,mlaf057we51-x";
reg = <0>;
avdd-supply = <&reg_avdd>;
avee-supply = <&reg_avee>;
vddi-supply = <&reg_1v8_p>;
reset-gpios = <&gpio1 29 GPIO_ACTIVE_LOW>;
backlight = <&backlight>;
};
};
...


@@ -29,6 +29,8 @@ properties:
# compatible must be listed in alphabetical order, ordered by compatible.
# The description in the comment is mandatory for each compatible.
# Ampire AM-1280800N3TZQW-T00H 10.1" WQVGA TFT LCD panel
- ampire,am-1280800n3tzqw-t00h
# Ampire AM-480272H3TMQW-T01H 4.3" WQVGA TFT LCD panel
- ampire,am-480272h3tmqw-t01h
# Ampire AM-800480R3TMQW-A1H 7.0" WVGA TFT LCD panel
@@ -87,6 +89,8 @@ properties:
- cdtech,s070swv29hg-dc44
# CDTech(H.K.) Electronics Limited 7" 800x480 color TFT-LCD panel
- cdtech,s070wv95-ct16
# Chefree CH101OLHLWH-002 10.1" (1280x800) color TFT LCD panel
- chefree,ch101olhlwh-002
# Chunghwa Picture Tubes Ltd. 7" WXGA TFT LCD panel
- chunghwa,claa070wp03xg
# Chunghwa Picture Tubes Ltd. 10.1" WXGA TFT LCD panel
@@ -159,6 +163,8 @@ properties:
- innolux,n156bge-l21
# Innolux Corporation 7.0" WSVGA (1024x600) TFT LCD panel
- innolux,zj070na-01p
# King & Display KD116N21-30NV-A010 eDP TFT LCD panel
- kingdisplay,kd116n21-30nv-a010
# Kaohsiung Opto-Electronics Inc. 5.7" QVGA (320 x 240) TFT LCD panel
- koe,tx14d24vm1bpa
# Kaohsiung Opto-Electronics Inc. 10.1" WUXGA (1920 x 1200) LVDS TFT LCD panel
@@ -219,6 +225,8 @@ properties:
- osddisplays,osd070t1718-19ts
# One Stop Displays OSD101T2045-53TS 10.1" 1920x1200 panel
- osddisplays,osd101t2045-53ts
# POWERTIP PH800480T013-IDF2 7.0" WVGA TFT LCD panel
- powertip,ph800480t013-idf02
# QiaoDian XianShi Corporation 4"3 TFT LCD panel
- qiaodian,qd43003c0-40
# Rocktech Displays Ltd. RK101II01D-CT 10.1" TFT 1280x800


@@ -8,10 +8,11 @@ title: Rocktech JH057N00900 5.5" 720x1440 TFT LCD panel
maintainers:
- Ondrej Jirman <megi@xff.cz>
- Guido Günther <agx@sigxcpu.org>
description: |
Rocktech JH057N00900 is a 720x1440 TFT LCD panel
connected using a MIPI-DSI video interface.
description:
Rocktech JH057N00900 is a 720x1440 TFT LCD panel
connected using a MIPI-DSI video interface.
allOf:
- $ref: panel-common.yaml#
@@ -19,9 +20,9 @@ allOf:
properties:
compatible:
enum:
# Rocktech JH057N00900 5.5" 720x1440 TFT LCD panel
# Rocktech JH057N00900 5.5" 720x1440 TFT LCD panel
- rocktech,jh057n00900
# Xingbangda XBD599 5.99" 720x1440 TFT LCD panel
# Xingbangda XBD599 5.99" 720x1440 TFT LCD panel
- xingbangda,xbd599
port: true
@@ -35,13 +36,9 @@ properties:
iovcc-supply:
description: I/O voltage supply
reset-gpios:
description: GPIO used for the reset pin
maxItems: 1
reset-gpios: true
backlight:
description: Backlight used by the panel
$ref: "/schemas/types.yaml#/definitions/phandle"
backlight: true
required:
- compatible
@@ -57,15 +54,16 @@ examples:
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "rocktech,jh057n00900";
reg = <0>;
vcc-supply = <&reg_2v8_p>;
iovcc-supply = <&reg_1v8_p>;
reset-gpios = <&gpio3 13 GPIO_ACTIVE_LOW>;
backlight = <&backlight>;
};
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "rocktech,jh057n00900";
reg = <0>;
vcc-supply = <&reg_2v8_p>;
iovcc-supply = <&reg_1v8_p>;
reset-gpios = <&gpio3 13 GPIO_ACTIVE_LOW>;
backlight = <&backlight>;
};
};
...


@@ -3,6 +3,7 @@
Required Properties:
- compatible: must be one of the following.
- "renesas,du-r8a7742" for R8A7742 (RZ/G1H) compatible DU
- "renesas,du-r8a7743" for R8A7743 (RZ/G1M) compatible DU
- "renesas,du-r8a7744" for R8A7744 (RZ/G1N) compatible DU
- "renesas,du-r8a7745" for R8A7745 (RZ/G1E) compatible DU
@@ -10,6 +11,7 @@ Required Properties:
- "renesas,du-r8a774a1" for R8A774A1 (RZ/G2M) compatible DU
- "renesas,du-r8a774b1" for R8A774B1 (RZ/G2N) compatible DU
- "renesas,du-r8a774c0" for R8A774C0 (RZ/G2E) compatible DU
- "renesas,du-r8a774e1" for R8A774E1 (RZ/G2H) compatible DU
- "renesas,du-r8a7779" for R8A7779 (R-Car H1) compatible DU
- "renesas,du-r8a7790" for R8A7790 (R-Car H2) compatible DU
- "renesas,du-r8a7791" for R8A7791 (R-Car M2-W) compatible DU
@@ -18,6 +20,7 @@ Required Properties:
- "renesas,du-r8a7794" for R8A7794 (R-Car E2) compatible DU
- "renesas,du-r8a7795" for R8A7795 (R-Car H3) compatible DU
- "renesas,du-r8a7796" for R8A7796 (R-Car M3-W) compatible DU
- "renesas,du-r8a77961" for R8A77961 (R-Car M3-W+) compatible DU
- "renesas,du-r8a77965" for R8A77965 (R-Car M3-N) compatible DU
- "renesas,du-r8a77970" for R8A77970 (R-Car V3M) compatible DU
- "renesas,du-r8a77980" for R8A77980 (R-Car V3H) compatible DU
@@ -68,6 +71,7 @@ corresponding to each DU output.
Port0 Port1 Port2 Port3
-----------------------------------------------------------------------------
R8A7742 (RZ/G1H) DPAD 0 LVDS 0 LVDS 1 -
R8A7743 (RZ/G1M) DPAD 0 LVDS 0 - -
R8A7744 (RZ/G1N) DPAD 0 LVDS 0 - -
R8A7745 (RZ/G1E) DPAD 0 DPAD 1 - -
@@ -75,6 +79,7 @@ corresponding to each DU output.
R8A774A1 (RZ/G2M) DPAD 0 HDMI 0 LVDS 0 -
R8A774B1 (RZ/G2N) DPAD 0 HDMI 0 LVDS 0 -
R8A774C0 (RZ/G2E) DPAD 0 LVDS 0 LVDS 1 -
R8A774E1 (RZ/G2H) DPAD 0 HDMI 0 LVDS 0 -
R8A7779 (R-Car H1) DPAD 0 DPAD 1 - -
R8A7790 (R-Car H2) DPAD 0 LVDS 0 LVDS 1 -
R8A7791 (R-Car M2-W) DPAD 0 LVDS 0 - -
@@ -83,6 +88,7 @@ corresponding to each DU output.
R8A7794 (R-Car E2) DPAD 0 DPAD 1 - -
R8A7795 (R-Car H3) DPAD 0 HDMI 0 HDMI 1 LVDS 0
R8A7796 (R-Car M3-W) DPAD 0 HDMI 0 LVDS 0 -
R8A77961 (R-Car M3-W+) DPAD 0 HDMI 0 LVDS 0 -
R8A77965 (R-Car M3-N) DPAD 0 HDMI 0 LVDS 0 -
R8A77970 (R-Car V3M) DPAD 0 LVDS 0 - -
R8A77980 (R-Car V3H) DPAD 0 LVDS 0 - -


@@ -19,6 +19,7 @@ Optional properties:
- vbat-supply: The supply for VBAT
- solomon,segment-no-remap: Display needs normal (non-inverted) data column
to segment mapping
- solomon,col-offset: Offset of columns (COL/SEG) that the screen is mapped to.
- solomon,com-seq: Display uses sequential COM pin configuration
- solomon,com-lrremap: Display uses left-right COM pin remap
- solomon,com-invdir: Display uses inverted COM pin scan direction


@@ -197,6 +197,8 @@ patternProperties:
description: Ceva, Inc.
"^checkpoint,.*":
description: Check Point Software Technologies Ltd.
"^chefree,.*":
description: Chefree Technology Corp.
"^chipidea,.*":
description: Chipidea, Inc
"^chipone,.*":
@@ -605,6 +607,8 @@ patternProperties:
description: Logic Technologies Limited
"^longcheer,.*":
description: Longcheer Technology (Shanghai) Co., Ltd.
"^lontium,.*":
description: Lontium Semiconductor Corporation
"^loongson,.*":
description: Loongson Technology Corporation Limited
"^lsi,.*":
@@ -615,6 +619,8 @@ patternProperties:
description: Linux Automation GmbH
"^macnica,.*":
description: Macnica Americas
"^mantix,.*":
description: Mantix Display Technology Co.,Ltd.
"^mapleboard,.*":
description: Mapleboard.org
"^marvell,.*":
@@ -836,6 +842,8 @@ patternProperties:
description: Poslab Technology Co., Ltd.
"^pov,.*":
description: Point of View International B.V.
"^powertip,.*":
description: Powertip Tech. Corp.
"^powervr,.*":
description: PowerVR (deprecated, use img)
"^primux,.*":


@@ -263,7 +263,7 @@ DMA
dmam_pool_destroy()
DRM
devm_drm_dev_init()
devm_drm_dev_alloc()
GPIO
devm_gpiod_get()
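
Aside: the replacement API embeds the drm_device in a driver-private
structure and ties its lifetime to the parent device via devres. A
minimal sketch of the new pattern (my_gpu and my_drm_driver are
illustrative names, not code from this tree):

    struct my_gpu {
        struct drm_device drm;      /* embedded drm_device */
        void __iomem *mmio;
    };

    static int my_probe(struct platform_device *pdev)
    {
        struct my_gpu *gpu;

        /* Allocates my_gpu, initializes gpu->drm, and arranges the
         * final drm_dev_put() automatically; no manual cleanup on
         * probe error paths. */
        gpu = devm_drm_dev_alloc(&pdev->dev, &my_drm_driver,
                                 struct my_gpu, drm);
        if (IS_ERR(gpu))
            return PTR_ERR(gpu);

        return drm_dev_register(&gpu->drm, 0);
    }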


@@ -20,8 +20,8 @@ A. Configuration
================
The framebuffer console can be enabled by using your favorite kernel
configuration tool. It is under Device Drivers->Graphics Support->Frame
buffer Devices->Console display driver support->Framebuffer Console Support.
configuration tool. It is under Device Drivers->Graphics Support->
Console display driver support->Framebuffer Console Support.
Select 'y' to compile support statically or 'm' for module support. The
module will be fbcon.


@@ -70,6 +70,15 @@ Interrupt Handling
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
:internal:
IP Blocks
------------------
.. kernel-doc:: drivers/gpu/drm/amd/include/amd_shared.h
:doc: IP Blocks
.. kernel-doc:: drivers/gpu/drm/amd/include/amd_shared.h
:identifiers: amd_ip_block_type amd_ip_funcs
AMDGPU XGMI Support
===================
@@ -153,7 +162,7 @@ This section covers hwmon and power/thermal controls.
HWMON Interfaces
----------------
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: hwmon
GPU sysfs Power State Interfaces
@@ -164,48 +173,54 @@ GPU power controls are exposed via sysfs files.
power_dpm_state
~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: power_dpm_state
power_dpm_force_performance_level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: power_dpm_force_performance_level
pp_table
~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: pp_table
pp_od_clk_voltage
~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: pp_od_clk_voltage
pp_dpm_*
~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: pp_dpm_sclk pp_dpm_mclk pp_dpm_socclk pp_dpm_fclk pp_dpm_dcefclk pp_dpm_pcie
pp_power_profile_mode
~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: pp_power_profile_mode
*_busy_percent
~~~~~~~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: gpu_busy_percent
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: mem_busy_percent
gpu_metrics
~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: gpu_metrics
GPU Product Information
=======================
@@ -233,7 +248,7 @@ serial_number
unique_id
---------
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: unique_id
GPU Memory Usage Information
@@ -283,7 +298,7 @@ PCIe Accounting Information
pcie_bw
-------
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: pcie_bw
pcie_replay_count


@@ -1,3 +1,5 @@
.. Copyright 2020 DisplayLink (UK) Ltd.
===================
Userland interfaces
===================
@@ -162,6 +164,116 @@ other hand, a driver requires shared state between clients which is
visible to user-space and accessible beyond open-file boundaries, they
cannot support render nodes.
Device Hot-Unplug
=================
.. note::
The following is the plan. Implementation is not there yet
(2020 May).
Graphics devices (display and/or render) may be connected via USB (e.g.
display adapters or docking stations) or Thunderbolt (e.g. eGPU). An end
user is able to hot-unplug such devices while they are in use, and
expects that, at the very least, the machine does not crash. Any
damage from hot-unplugging a DRM device needs to be limited as much as
possible and userspace must be given the chance to handle it if it wants
to. Ideally, unplugging a DRM device still lets a desktop continue to
run, but that is going to need explicit support throughout the whole
graphics stack: from kernel and userspace drivers, through display
servers, via window system protocols, and in applications and libraries.
Other scenarios that should lead to the same are: unrecoverable GPU
crash, PCI device disappearing off the bus, or forced unbind of a driver
from the physical device.
In other words, from the userspace perspective everything needs to keep on
working more or less, until userspace stops using the disappeared DRM
device and closes it completely. Userspace will learn of the device
disappearance from the device removed uevent, ioctls returning ENODEV
(or driver-specific ioctls returning driver-specific things), or open()
returning ENXIO.
Only after userspace has closed all relevant DRM device and dmabuf file
descriptors and removed all mmaps can the DRM driver tear down its
instance for the device that no longer exists. If the same physical
device somehow comes back in the mean time, it shall be a new DRM
device.
Similar to PIDs, chardev minor numbers are not recycled immediately. A
new DRM device always picks the next free minor number compared to the
previous one allocated, and wraps around when minor numbers are
exhausted.
The goal raises at least the following requirements for the kernel and
drivers.
Requirements for KMS UAPI
-------------------------
- KMS connectors must change their status to disconnected.
- Legacy modesets and pageflips, and atomic commits, both real and
TEST_ONLY, and any other ioctls either fail with ENODEV or fake
success.
- Pending non-blocking KMS operations deliver the DRM events userspace
is expecting. This applies also to ioctls that faked success.
- open() on a device node whose underlying device has disappeared will
fail with ENXIO.
- Attempting to create a DRM lease on a disappeared DRM device will
fail with ENODEV. Existing DRM leases remain and work as listed
above.
Requirements for Render and Cross-Device UAPI
---------------------------------------------
- All GPU jobs that can no longer run must have their fences
force-signalled to avoid inflicting hangs on userspace.
The associated error code is ENODEV.
- Some userspace APIs already define what should happen when the device
disappears (OpenGL, GL ES: `GL_KHR_robustness`_; `Vulkan`_:
VK_ERROR_DEVICE_LOST; etc.). DRM drivers are free to implement this
behaviour the way they see best, e.g. returning failures in
driver-specific ioctls and handling those in userspace drivers, or
rely on uevents, and so on.
- dmabufs which point to memory that has disappeared will either fail to
import with ENODEV or continue to be successfully imported if they would
have succeeded before the disappearance. See also about memory maps
below for already imported dmabufs.
- Attempting to import a dmabuf to a disappeared device will either fail
with ENODEV or succeed if it would have succeeded without the
disappearance.
- open() on a device node whose underlying device has disappeared will
fail with ENXIO.
.. _GL_KHR_robustness: https://www.khronos.org/registry/OpenGL/extensions/KHR/KHR_robustness.txt
.. _Vulkan: https://www.khronos.org/vulkan/
Requirements for Memory Maps
----------------------------
Memory maps have further requirements that apply to both existing maps
and maps created after the device has disappeared. If the underlying
memory disappears, the map is created or modified such that reads and
writes will still complete successfully but the result is undefined.
This applies to both userspace mmap()'d memory and memory pointed to by
dmabuf which might be mapped to other devices (cross-device dmabuf
imports).
Raising SIGBUS is not an option, because userspace cannot realistically
handle it. Signal handlers are global, which makes them extremely
difficult to use correctly from libraries like those that Mesa produces.
Signal handlers are not composable: you can't have different handlers
for GPU1 and GPU2 from different vendors, and a third handler for
mmapped regular files. Threads cause additional pain with signal
handling as well.
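
To make that contract concrete, here is a hedged, illustrative sketch
(not part of this patch) of how userspace might classify ENODEV from a
modeset ioctl as a hot-unplug event:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <xf86drm.h>      /* drmIoctl() from libdrm */
    #include <drm/drm.h>      /* DRM_IOCTL_MODE_ATOMIC */

    /* Returns 0 on success, -ENODEV once the device is gone. */
    static int commit_or_teardown(int fd, struct drm_mode_atomic *req)
    {
        if (drmIoctl(fd, DRM_IOCTL_MODE_ATOMIC, req) == 0)
            return 0;

        if (errno == ENODEV) {
            /* Device disappeared: stop submitting, drop mmaps and
             * dmabuf fds, then close(fd) so the kernel can tear the
             * DRM instance down. */
            fprintf(stderr, "DRM device gone: %s\n", strerror(errno));
            return -ENODEV;
        }
        return -errno;        /* unrelated error, handle as usual */
    }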
.. _drm_driver_ioctl:
IOCTL Support on Device Nodes
@@ -199,7 +311,7 @@ EPERM/EACCES:
difference between EACCES and EPERM.
ENODEV:
The device is not (yet) present or fully initialized.
The device is not present anymore or is not yet fully initialized.
EOPNOTSUPP:
Feature (like PRIME, modesetting, GEM) is not supported by the driver.


@@ -1,6 +1,6 @@
==========================================
drm/pl111 ARM PrimeCell PL111 CLCD Driver
==========================================
====================================================
drm/pl111 ARM PrimeCell PL110 and PL111 CLCD Driver
====================================================
.. kernel-doc:: drivers/gpu/drm/pl111/pl111_drv.c
:doc: ARM PrimeCell PL111 CLCD Driver
:doc: ARM PrimeCell PL110 and PL111 CLCD Driver


@@ -403,6 +403,52 @@ Contact: Emil Velikov, respective driver maintainers
Level: Intermediate
Plumb drm_atomic_state all over
-------------------------------
Currently various atomic functions take just a single or a handful of
object states (eg. plane state). While that single object state can
suffice for some simple cases, we often have to dig out additional
object states for dealing with various dependencies between the individual
objects or the hardware they represent. The process of digging out the
additional states is rather non-intuitive and error prone.
To fix that most functions should rather take the overall
drm_atomic_state as one of their parameters. The other parameters
would generally be the object(s) we mainly want to interact with.
For example, instead of
.. code-block:: c
int (*atomic_check)(struct drm_plane *plane, struct drm_plane_state *state);
we would have something like
.. code-block:: c
int (*atomic_check)(struct drm_plane *plane, struct drm_atomic_state *state);
The implementation can then trivially gain access to any required object
state(s) via drm_atomic_get_plane_state(), drm_atomic_get_new_plane_state(),
drm_atomic_get_old_plane_state(), and their equivalents for
other object types.
Additionally many drivers currently access the object->state pointer
directly in their commit functions. That is not going to work if we
eg. want to allow deeper commit pipelines as those pointers could
then point to the states corresponding to a future commit instead of
the current commit we're trying to process. Also non-blocking commits
execute locklessly so there are serious concerns with dereferencing
the object->state pointers without holding the locks that protect them.
Use of drm_atomic_get_new_plane_state(), drm_atomic_get_old_plane_state(),
etc. avoids these problems as well since they relate to a specific
commit via the passed in drm_atomic_state.
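For instance, a plane hook converted to the new signature could look
roughly like this (an illustrative sketch, not code from this tree):

.. code-block:: c

    static int foo_plane_atomic_check(struct drm_plane *plane,
                                      struct drm_atomic_state *state)
    {
        struct drm_plane_state *new_state =
            drm_atomic_get_new_plane_state(state, plane);
        struct drm_plane_state *old_state =
            drm_atomic_get_old_plane_state(state, plane);

        /* ... validate the transition from old_state to new_state ... */
        return 0;
    }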
Contact: Ville Syrjälä, Daniel Vetter
Level: Intermediate
Core refactorings
=================


@@ -359,8 +359,6 @@ Code Seq# Include File Comments
0xEC 00-01 drivers/platform/chrome/cros_ec_dev.h ChromeOS EC driver
0xF3 00-3F drivers/usb/misc/sisusbvga/sisusb.h sisfb (in development)
<mailto:thomas@winischhofer.net>
0xF4 00-1F video/mbxfb.h mbxfb
<mailto:raph@8d.com>
0xF6 all LTTng Linux Trace Toolkit Next Generation
<mailto:mathieu.desnoyers@efficios.com>
0xFD all linux/dm-ioctl.h


@@ -5417,7 +5417,7 @@ F: drivers/gpu/drm/panel/panel-arm-versatile.c
DRM DRIVER FOR ASPEED BMC GFX
M: Joel Stanley <joel@jms.id.au>
L: linux-aspeed@lists.ozlabs.org
L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers)
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/gpu/aspeed-gfx.txt
@@ -5425,7 +5425,10 @@ F: drivers/gpu/drm/aspeed/
DRM DRIVER FOR AST SERVER GRAPHICS CHIPS
M: Dave Airlie <airlied@redhat.com>
S: Odd Fixes
R: Thomas Zimmermann <tzimmermann@suse.de>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/ast/
DRM DRIVER FOR BOCHS VIRTUAL GPU
@@ -5499,14 +5502,24 @@ S: Maintained
F: drivers/gpu/drm/panel/panel-lvds.c
F: Documentation/devicetree/bindings/display/panel/lvds.yaml
DRM DRIVER FOR MANTIX MLAF057WE51 PANELS
M: Guido Günther <agx@sigxcpu.org>
R: Purism Kernel Team <kernel@puri.sm>
S: Maintained
F: Documentation/devicetree/bindings/display/panel/mantix,mlaf057we51-x.yaml
F: drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c
DRM DRIVER FOR MATROX G200/G400 GRAPHICS CARDS
S: Orphan / Obsolete
F: drivers/gpu/drm/mga/
F: include/uapi/drm/mga_drm.h
DRM DRIVER FOR MGA G200 SERVER GRAPHICS CHIPS
DRM DRIVER FOR MGA G200 GRAPHICS CHIPS
M: Dave Airlie <airlied@redhat.com>
S: Odd Fixes
R: Thomas Zimmermann <tzimmermann@suse.de>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/mgag200/
DRM DRIVER FOR MI0283QT
@@ -5587,12 +5600,13 @@ S: Maintained
F: Documentation/devicetree/bindings/display/panel/raydium,rm67191.yaml
F: drivers/gpu/drm/panel/panel-raydium-rm67191.c
DRM DRIVER FOR ROCKTECH JH057N00900 PANELS
DRM DRIVER FOR SITRONIX ST7703 PANELS
M: Guido Günther <agx@sigxcpu.org>
R: Purism Kernel Team <kernel@puri.sm>
R: Ondrej Jirman <megous@megous.com>
S: Maintained
F: Documentation/devicetree/bindings/display/panel/rocktech,jh057n00900.txt
F: drivers/gpu/drm/panel/panel-rocktech-jh057n00900.c
F: Documentation/devicetree/bindings/display/panel/rocktech,jh057n00900.yaml
F: drivers/gpu/drm/panel/panel-sitronix-st7703.c
DRM DRIVER FOR SAVAGE VIDEO CARDS
S: Orphan / Obsolete
@@ -5651,13 +5665,15 @@ F: drivers/gpu/drm/panel/panel-tpo-tpg110.c
DRM DRIVER FOR USB DISPLAYLINK VIDEO ADAPTERS
M: Dave Airlie <airlied@redhat.com>
R: Sean Paul <sean@poorly.run>
R: Thomas Zimmermann <tzimmermann@suse.de>
L: dri-devel@lists.freedesktop.org
S: Odd Fixes
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/udl/
DRM DRIVER FOR VIRTUAL KERNEL MODESETTING (VKMS)
M: Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
M: Melissa Wen <melissa.srw@gmail.com>
R: Haneen Mohammed <hamohammed.sa@gmail.com>
R: Daniel Vetter <daniel@ffwll.ch>
L: dri-devel@lists.freedesktop.org
@@ -5792,7 +5808,7 @@ F: drivers/gpu/drm/gma500/
DRM DRIVERS FOR HISILICON
M: Xinliang Liu <xinliang.liu@linaro.org>
M: Rongrong Zou <zourongrong@gmail.com>
M: Tian Tao <tiantao6@hisilicon.com>
R: John Stultz <john.stultz@linaro.org>
R: Xinwei Kong <kong.kongxinwei@hisilicon.com>
R: Chen Feng <puck.chen@hisilicon.com>
@@ -5818,6 +5834,7 @@ L: dri-devel@lists.freedesktop.org
S: Supported
F: Documentation/devicetree/bindings/display/mediatek/
F: drivers/gpu/drm/mediatek/
F: drivers/phy/mediatek/phy-mtk-hdmi*
DRM DRIVERS FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@gmail.com>
@@ -12499,6 +12516,14 @@ F: drivers/iio/gyro/fxas21002c_core.c
F: drivers/iio/gyro/fxas21002c_i2c.c
F: drivers/iio/gyro/fxas21002c_spi.c
NXP i.MX 8MQ DCSS DRIVER
M: Laurentiu Palcu <laurentiu.palcu@oss.nxp.com>
R: Lucas Stach <l.stach@pengutronix.de>
L: dri-devel@lists.freedesktop.org
S: Maintained
F: Documentation/devicetree/bindings/display/imx/nxp,imx8mq-dcss.yaml
F: drivers/gpu/drm/imx/dcss/
NXP PTN5150A CC LOGIC AND EXTCON DRIVER
M: Krzysztof Kozlowski <krzk@kernel.org>
L: linux-kernel@vger.kernel.org


@@ -65,7 +65,15 @@
#define LPSS_CLK_DIVIDER BIT(2)
#define LPSS_LTR BIT(3)
#define LPSS_SAVE_CTX BIT(4)
#define LPSS_NO_D3_DELAY BIT(5)
/*
* For some devices the DSDT AML code for another device turns off the device
* before our suspend handler runs, causing us to read/save all 1-s (0xffffffff)
* as ctx register values.
* Luckily these devices always use the same ctx register values, so we can
* work around this by saving the ctx registers once on activation.
*/
#define LPSS_SAVE_CTX_ONCE BIT(5)
#define LPSS_NO_D3_DELAY BIT(6)
struct lpss_private_data;
@@ -252,9 +260,10 @@ static const struct lpss_device_desc byt_pwm_dev_desc = {
};
static const struct lpss_device_desc bsw_pwm_dev_desc = {
.flags = LPSS_SAVE_CTX | LPSS_NO_D3_DELAY,
.flags = LPSS_SAVE_CTX_ONCE | LPSS_NO_D3_DELAY,
.prv_offset = 0x800,
.setup = bsw_pwm_setup,
.resume_from_noirq = true,
};
static const struct lpss_device_desc byt_uart_dev_desc = {
@@ -882,9 +891,14 @@ static int acpi_lpss_activate(struct device *dev)
* we have to deassert reset line to be sure that ->probe() will
* recognize the device.
*/
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
if (pdata->dev_desc->flags & (LPSS_SAVE_CTX | LPSS_SAVE_CTX_ONCE))
lpss_deassert_reset(pdata);
#ifdef CONFIG_PM
if (pdata->dev_desc->flags & LPSS_SAVE_CTX_ONCE)
acpi_lpss_save_ctx(dev, pdata);
#endif
return 0;
}
@@ -1028,7 +1042,7 @@ static int acpi_lpss_resume(struct device *dev)
acpi_lpss_d3_to_d0_delay(pdata);
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
if (pdata->dev_desc->flags & (LPSS_SAVE_CTX | LPSS_SAVE_CTX_ONCE))
acpi_lpss_restore_ctx(dev, pdata);
return 0;


@@ -425,7 +425,7 @@ static int agp_amdk7_probe(struct pci_dev *pdev,
return -ENOMEM;
bridge->driver = &amd_irongate_driver;
bridge->dev_private_data = &amd_irongate_private,
bridge->dev_private_data = &amd_irongate_private;
bridge->dev = pdev;
bridge->capndx = cap_ptr;


@@ -382,7 +382,7 @@ static int agp_nvidia_probe(struct pci_dev *pdev,
return -ENOMEM;
bridge->driver = &nvidia_driver;
bridge->dev_private_data = &nvidia_private,
bridge->dev_private_data = &nvidia_private;
bridge->dev = pdev;
bridge->capndx = cap_ptr;


@@ -513,7 +513,7 @@ static int agp_serverworks_probe(struct pci_dev *pdev,
return -ENOMEM;
bridge->driver = &sworks_driver;
bridge->dev_private_data = &serverworks_private,
bridge->dev_private_data = &serverworks_private;
bridge->dev = pci_dev_get(pdev);
pci_set_drvdata(pdev, bridge);


@@ -283,6 +283,7 @@ EXPORT_SYMBOL(dma_fence_begin_signalling);
/**
* dma_fence_end_signalling - end a critical DMA fence signalling section
* @cookie: opaque cookie from dma_fence_begin_signalling()
*
* Closes a critical section annotation opened by dma_fence_begin_signalling().
*/


@@ -98,12 +98,14 @@ static int __init dma_resv_lockdep(void)
struct mm_struct *mm = mm_alloc();
struct ww_acquire_ctx ctx;
struct dma_resv obj;
struct address_space mapping;
int ret;
if (!mm)
return -ENOMEM;
dma_resv_init(&obj);
address_space_init_once(&mapping);
mmap_read_lock(mm);
ww_acquire_init(&ctx, &reservation_ww_class);
@@ -111,6 +113,9 @@
if (ret == -EDEADLK)
dma_resv_lock_slow(&obj, &ctx);
fs_reclaim_acquire(GFP_KERNEL);
/* for unmap_mapping_range on trylocked buffer objects in shrinkers */
i_mmap_lock_write(&mapping);
i_mmap_unlock_write(&mapping);
#ifdef CONFIG_MMU_NOTIFIER
lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
__dma_fence_might_wait();


@@ -140,13 +140,12 @@ struct sg_table *dma_heap_map_dma_buf(struct dma_buf_attachment *attachment,
enum dma_data_direction direction)
{
struct dma_heaps_attachment *a = attachment->priv;
struct sg_table *table;
struct sg_table *table = &a->table;
int ret;
table = &a->table;
if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
direction))
table = ERR_PTR(-ENOMEM);
ret = dma_map_sgtable(attachment->dev, table, direction, 0);
if (ret)
table = ERR_PTR(ret);
return table;
}
@ -154,7 +153,7 @@ static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *table,
enum dma_data_direction direction)
{
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
dma_unmap_sgtable(attachment->dev, table, direction, 0);
}
static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
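
This hunk is one instance of the cross-driver scatterlist cleanup from the summary: dma_map_sgtable() returns 0 or a negative errno instead of a zero entry count, and it records the mapped entry count inside the sg_table, so callers no longer mix up nents and orig_nents. The conversion pattern:

/* before: zero return signalled failure, caller picked an errno */
if (!dma_map_sg(dev, table->sgl, table->nents, dir))
	return ERR_PTR(-ENOMEM);

/* after: errno-style return, nents bookkeeping kept in the sg_table */
ret = dma_map_sgtable(dev, table, dir, 0);
if (ret)
	return ERR_PTR(ret);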


@ -63,10 +63,9 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
GFP_KERNEL);
if (ret < 0)
goto err;
if (!dma_map_sg(dev, sg->sgl, sg->nents, direction)) {
ret = -EINVAL;
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
goto err;
}
return sg;
err:
@ -78,7 +77,7 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
static void put_sg_table(struct device *dev, struct sg_table *sg,
enum dma_data_direction direction)
{
dma_unmap_sg(dev, sg->sgl, sg->nents, direction);
dma_unmap_sgtable(dev, sg, direction, 0);
sg_free_table(sg);
kfree(sg);
}
@ -308,6 +307,9 @@ static long udmabuf_ioctl(struct file *filp, unsigned int ioctl,
static const struct file_operations udmabuf_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = udmabuf_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = udmabuf_ioctl,
#endif
};
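
Reusing the native handler as compat_ioctl works here because the udmabuf uapi structs contain only fixed-width members, so 32- and 64-bit userspace see the same layout and no translation is needed. For illustration, the create argument (as I read include/uapi/linux/udmabuf.h):

struct udmabuf_create {
	__u32 memfd;	/* fixed-width fields only: identical size and   */
	__u32 flags;	/* alignment under 32- and 64-bit ABIs, so the   */
	__u64 offset;	/* native ioctl handler is directly compat-safe  */
	__u64 size;
};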
static struct miscdevice udmabuf_misc = {


@ -100,7 +100,7 @@ obj-$(CONFIG_DRM_MSM) += msm/
obj-$(CONFIG_DRM_TEGRA) += tegra/
obj-$(CONFIG_DRM_STM) += stm/
obj-$(CONFIG_DRM_STI) += sti/
obj-$(CONFIG_DRM_IMX) += imx/
obj-y += imx/
obj-$(CONFIG_DRM_INGENIC) += ingenic/
obj-$(CONFIG_DRM_MEDIATEK) += mediatek/
obj-$(CONFIG_DRM_MESON) += meson/


@ -30,7 +30,7 @@ FULL_AMD_DISPLAY_PATH = $(FULL_AMD_PATH)/$(DISPLAY_FOLDER_NAME)
ccflags-y := -I$(FULL_AMD_PATH)/include/asic_reg \
-I$(FULL_AMD_PATH)/include \
-I$(FULL_AMD_PATH)/amdgpu \
-I$(FULL_AMD_PATH)/powerplay/inc \
-I$(FULL_AMD_PATH)/pm/inc \
-I$(FULL_AMD_PATH)/acp/include \
-I$(FULL_AMD_DISPLAY_PATH) \
-I$(FULL_AMD_DISPLAY_PATH)/include \
@ -47,7 +47,7 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_encoders.o amdgpu_display.o amdgpu_i2c.o \
amdgpu_fb.o amdgpu_gem.o amdgpu_ring.o \
amdgpu_cs.o amdgpu_bios.o amdgpu_benchmark.o amdgpu_test.o \
amdgpu_pm.o atombios_dp.o amdgpu_afmt.o amdgpu_trace_points.o \
atombios_dp.o amdgpu_afmt.o amdgpu_trace_points.o \
atombios_encoders.o amdgpu_sa.o atombios_i2c.o \
amdgpu_dma_buf.o amdgpu_vm.o amdgpu_ib.o amdgpu_pll.o \
amdgpu_ucode.o amdgpu_bo_list.o amdgpu_ctx.o amdgpu_sync.o \
@ -55,15 +55,15 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \
amdgpu_gmc.o amdgpu_mmhub.o amdgpu_xgmi.o amdgpu_csa.o amdgpu_ras.o amdgpu_vm_cpu.o \
amdgpu_vm_sdma.o amdgpu_discovery.o amdgpu_ras_eeprom.o amdgpu_nbio.o \
amdgpu_umc.o smu_v11_0_i2c.o amdgpu_fru_eeprom.o
amdgpu_umc.o smu_v11_0_i2c.o amdgpu_fru_eeprom.o amdgpu_rap.o
amdgpu-$(CONFIG_PERF_EVENTS) += amdgpu_pmu.o
# add asic specific block
amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o \
dce_v8_0.o gfx_v7_0.o cik_sdma.o uvd_v4_2.o vce_v2_0.o
amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce_v6_0.o si_dpm.o si_smc.o \
amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce_v6_0.o \
uvd_v3_1.o
amdgpu-y += \
@ -85,7 +85,7 @@ amdgpu-y += \
# add UMC block
amdgpu-y += \
umc_v6_1.o umc_v6_0.o
umc_v6_1.o umc_v6_0.o umc_v8_7.o
# add IH block
amdgpu-y += \
@ -105,10 +105,6 @@ amdgpu-y += \
psp_v11_0.o \
psp_v12_0.o
# add SMC block
amdgpu-y += \
amdgpu_dpm.o
# add DCE block
amdgpu-y += \
dce_v10_0.o \
@ -212,7 +208,7 @@ amdgpu-$(CONFIG_VGA_SWITCHEROO) += amdgpu_atpx_handler.o
amdgpu-$(CONFIG_ACPI) += amdgpu_acpi.o
amdgpu-$(CONFIG_HMM_MIRROR) += amdgpu_mn.o
include $(FULL_AMD_PATH)/powerplay/Makefile
include $(FULL_AMD_PATH)/pm/Makefile
amdgpu-y += $(AMD_POWERPLAY_FILES)


@ -49,6 +49,8 @@
#include <linux/rbtree.h>
#include <linux/hashtable.h>
#include <linux/dma-fence.h>
#include <linux/pci.h>
#include <linux/aer.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
@ -102,6 +104,7 @@
#include "amdgpu_mes.h"
#include "amdgpu_umc.h"
#include "amdgpu_mmhub.h"
#include "amdgpu_gfxhub.h"
#include "amdgpu_df.h"
#define MAX_GPU_INSTANCE 16
@ -178,6 +181,7 @@ extern uint amdgpu_dm_abm_level;
extern struct amdgpu_mgpu_info mgpu_info;
extern int amdgpu_ras_enable;
extern uint amdgpu_ras_mask;
extern int amdgpu_bad_page_threshold;
extern int amdgpu_async_gfx_ring;
extern int amdgpu_mcbp;
extern int amdgpu_discovery;
@ -187,9 +191,11 @@ extern int amdgpu_force_asic_type;
#ifdef CONFIG_HSA_AMD
extern int sched_policy;
extern bool debug_evictions;
extern bool no_system_mem_limit;
#else
static const int sched_policy = KFD_SCHED_POLICY_HWS;
static const bool debug_evictions; /* = false */
static const bool no_system_mem_limit;
#endif
extern int amdgpu_tmz;
@ -201,6 +207,7 @@ extern int amdgpu_si_support;
#ifdef CONFIG_DRM_AMDGPU_CIK
extern int amdgpu_cik_support;
#endif
extern int amdgpu_num_kcq;
#define AMDGPU_VM_MAX_NUM_CTX 4096
#define AMDGPU_SG_THRESHOLD (256*1024*1024)
@ -212,6 +219,8 @@ extern int amdgpu_cik_support;
#define AMDGPUFB_CONN_LIMIT 4
#define AMDGPU_BIOS_NUM_SCRATCH 16
#define AMDGPU_VBIOS_VGA_ALLOCATION (9 * 1024 * 1024) /* reserve 8MB for vga emulator and 1 MB for FB */
/* hard reset data */
#define AMDGPU_ASIC_RESET_DATA 0x39d5e86b
@ -245,6 +254,7 @@ struct amdgpu_fpriv;
struct amdgpu_bo_va_mapping;
struct amdgpu_atif;
struct kfd_vm_fault_info;
struct amdgpu_hive_info;
enum amdgpu_cp_irq {
AMDGPU_CP_IRQ_GFX_ME0_PIPE0_EOP = 0,
@ -611,6 +621,8 @@ struct amdgpu_asic_funcs {
uint64_t (*get_pcie_replay_count)(struct amdgpu_device *adev);
/* device supports BACO */
bool (*supports_baco)(struct amdgpu_device *adev);
/* pre asic_init quirks */
void (*pre_asic_init)(struct amdgpu_device *adev);
};
/*
@ -647,16 +659,6 @@ struct amdgpu_atcs {
struct amdgpu_atcs_functions functions;
};
/*
* Firmware VRAM reservation
*/
struct amdgpu_fw_vram_usage {
u64 start_offset;
u64 size;
struct amdgpu_bo *reserved_bo;
void *va;
};
/*
* CGS
*/
@ -725,13 +727,13 @@ struct amd_powerplay {
#define AMDGPU_MAX_DF_PERFMONS 4
struct amdgpu_device {
struct device *dev;
struct drm_device *ddev;
struct pci_dev *pdev;
struct drm_device ddev;
#ifdef CONFIG_DRM_AMD_ACP
struct amdgpu_acp acp;
#endif
struct amdgpu_hive_info *hive;
/* ASIC */
enum amd_asic_type asic_type;
uint32_t family;
@ -765,7 +767,6 @@ struct amdgpu_device {
bool is_atom_fw;
uint8_t *bios;
uint32_t bios_size;
struct amdgpu_bo *stolen_vga_memory;
uint32_t bios_scratch_reg_offset;
uint32_t bios_scratch[AMDGPU_BIOS_NUM_SCRATCH];
@ -881,6 +882,9 @@ struct amdgpu_device {
/* mmhub */
struct amdgpu_mmhub mmhub;
/* gfxhub */
struct amdgpu_gfxhub gfxhub;
/* gfx */
struct amdgpu_gfx gfx;
@ -917,11 +921,6 @@ struct amdgpu_device {
/* display related functionality */
struct amdgpu_display_manager dm;
/* discovery */
uint8_t *discovery_bin;
uint32_t discovery_tmr_size;
struct amdgpu_bo *discovery_memory;
/* mes */
bool enable_mes;
struct amdgpu_mes mes;
@ -946,8 +945,6 @@ struct amdgpu_device {
struct delayed_work delayed_init_work;
struct amdgpu_virt virt;
/* firmware VRAM reservation */
struct amdgpu_fw_vram_usage fw_vram_usage;
/* link all shadow bo */
struct list_head shadow_list;
@ -961,9 +958,9 @@ struct amdgpu_device {
bool in_suspend;
bool in_hibernate;
bool in_gpu_reset;
atomic_t in_gpu_reset;
enum pp_mp1_state mp1_state;
struct mutex lock_reset;
struct rw_semaphore reset_sem;
struct amdgpu_doorbell_index doorbell_index;
struct mutex notifier_lock;
@ -995,34 +992,60 @@ struct amdgpu_device {
atomic_t throttling_logging_enabled;
struct ratelimit_state throttling_logging_rs;
uint32_t ras_features;
bool in_pci_err_recovery;
struct pci_saved_state *pci_state;
};
static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)
{
return container_of(ddev, struct amdgpu_device, ddev);
}
static inline struct drm_device *adev_to_drm(struct amdgpu_device *adev)
{
return &adev->ddev;
}
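
These helpers are the payoff of embedding struct drm_device in struct amdgpu_device instead of allocating it separately (the devm_drm_dev_alloc conversion mentioned in the summary): container_of() replaces the old dev_private pointer in both directions. The allocation side presumably follows the standard devm pattern, roughly:

/* sketch of the embedding allocation; driver struct name illustrative */
adev = devm_drm_dev_alloc(&pdev->dev, &kms_driver,
			  struct amdgpu_device, ddev);
if (IS_ERR(adev))
	return PTR_ERR(adev);
/* adev->ddev now lives and dies with adev; drm_to_adev() is just
 * container_of() on the embedded member */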
static inline struct amdgpu_device *amdgpu_ttm_adev(struct ttm_bo_device *bdev)
{
return container_of(bdev, struct amdgpu_device, mman.bdev);
}
int amdgpu_device_init(struct amdgpu_device *adev,
struct drm_device *ddev,
struct pci_dev *pdev,
uint32_t flags);
void amdgpu_device_fini(struct amdgpu_device *adev);
int amdgpu_gpu_wait_for_idle(struct amdgpu_device *adev);
void amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,
uint32_t *buf, size_t size, bool write);
uint32_t amdgpu_mm_rreg(struct amdgpu_device *adev, uint32_t reg,
uint32_t amdgpu_device_rreg(struct amdgpu_device *adev,
uint32_t reg, uint32_t acc_flags);
void amdgpu_device_wreg(struct amdgpu_device *adev,
uint32_t reg, uint32_t v,
uint32_t acc_flags);
void amdgpu_mm_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v,
uint32_t acc_flags);
void amdgpu_mm_wreg_mmio_rlc(struct amdgpu_device *adev, uint32_t reg, uint32_t v,
uint32_t acc_flags);
void amdgpu_mm_wreg_mmio_rlc(struct amdgpu_device *adev,
uint32_t reg, uint32_t v);
void amdgpu_mm_wreg8(struct amdgpu_device *adev, uint32_t offset, uint8_t value);
uint8_t amdgpu_mm_rreg8(struct amdgpu_device *adev, uint32_t offset);
u32 amdgpu_io_rreg(struct amdgpu_device *adev, u32 reg);
void amdgpu_io_wreg(struct amdgpu_device *adev, u32 reg, u32 v);
u32 amdgpu_device_indirect_rreg(struct amdgpu_device *adev,
u32 pcie_index, u32 pcie_data,
u32 reg_addr);
u64 amdgpu_device_indirect_rreg64(struct amdgpu_device *adev,
u32 pcie_index, u32 pcie_data,
u32 reg_addr);
void amdgpu_device_indirect_wreg(struct amdgpu_device *adev,
u32 pcie_index, u32 pcie_data,
u32 reg_addr, u32 reg_data);
void amdgpu_device_indirect_wreg64(struct amdgpu_device *adev,
u32 pcie_index, u32 pcie_data,
u32 reg_addr, u64 reg_data);
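
The new indirect accessors reach registers that have no direct MMIO mapping through a PCIe index/data register pair. The mechanism, sketched (locking elided; the 64-bit variants do two 32-bit accesses):

void __iomem *pcie_index_mmio = adev->rmmio + pcie_index * 4;
void __iomem *pcie_data_mmio  = adev->rmmio + pcie_data * 4;
u32 val;

writel(reg_addr, pcie_index_mmio);	/* select the target register */
(void)readl(pcie_index_mmio);		/* read back to post the write */
val = readl(pcie_data_mmio);		/* fetch the selected register */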
bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type);
bool amdgpu_device_has_dc_support(struct amdgpu_device *adev);
@ -1033,8 +1056,8 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
*/
#define AMDGPU_REGS_NO_KIQ (1<<1)
#define RREG32_NO_KIQ(reg) amdgpu_mm_rreg(adev, (reg), AMDGPU_REGS_NO_KIQ)
#define WREG32_NO_KIQ(reg, v) amdgpu_mm_wreg(adev, (reg), (v), AMDGPU_REGS_NO_KIQ)
#define RREG32_NO_KIQ(reg) amdgpu_device_rreg(adev, (reg), AMDGPU_REGS_NO_KIQ)
#define WREG32_NO_KIQ(reg, v) amdgpu_device_wreg(adev, (reg), (v), AMDGPU_REGS_NO_KIQ)
#define RREG32_KIQ(reg) amdgpu_kiq_rreg(adev, (reg))
#define WREG32_KIQ(reg, v) amdgpu_kiq_wreg(adev, (reg), (v))
@ -1042,9 +1065,9 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
#define RREG8(reg) amdgpu_mm_rreg8(adev, (reg))
#define WREG8(reg, v) amdgpu_mm_wreg8(adev, (reg), (v))
#define RREG32(reg) amdgpu_mm_rreg(adev, (reg), 0)
#define DREG32(reg) printk(KERN_INFO "REGISTER: " #reg " : 0x%08X\n", amdgpu_mm_rreg(adev, (reg), 0))
#define WREG32(reg, v) amdgpu_mm_wreg(adev, (reg), (v), 0)
#define RREG32(reg) amdgpu_device_rreg(adev, (reg), 0)
#define DREG32(reg) printk(KERN_INFO "REGISTER: " #reg " : 0x%08X\n", amdgpu_device_rreg(adev, (reg), 0))
#define WREG32(reg, v) amdgpu_device_wreg(adev, (reg), (v), 0)
#define REG_SET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK)
#define REG_GET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK)
#define RREG32_PCIE(reg) adev->pcie_rreg(adev, (reg))
@ -1090,7 +1113,7 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
WREG32_SMC(_Reg, tmp); \
} while (0)
#define DREG32_SYS(sqf, adev, reg) seq_printf((sqf), #reg " : 0x%08X\n", amdgpu_mm_rreg((adev), (reg), false))
#define DREG32_SYS(sqf, adev, reg) seq_printf((sqf), #reg " : 0x%08X\n", amdgpu_device_rreg((adev), (reg), false))
#define RREG32_IO(reg) amdgpu_io_rreg(adev, (reg))
#define WREG32_IO(reg, v) amdgpu_io_wreg(adev, (reg), (v))
@ -1141,10 +1164,12 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
#define amdgpu_asic_need_reset_on_init(adev) (adev)->asic_funcs->need_reset_on_init((adev))
#define amdgpu_asic_get_pcie_replay_count(adev) ((adev)->asic_funcs->get_pcie_replay_count((adev)))
#define amdgpu_asic_supports_baco(adev) (adev)->asic_funcs->supports_baco((adev))
#define amdgpu_asic_pre_asic_init(adev) (adev)->asic_funcs->pre_asic_init((adev))
#define amdgpu_inc_vram_lost(adev) atomic_inc(&((adev)->vram_lost_counter));
/* Common functions */
bool amdgpu_device_has_job_running(struct amdgpu_device *adev);
bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev);
int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
struct amdgpu_job* job);
@ -1194,7 +1219,7 @@ static inline void *amdgpu_atpx_get_dhandle(void) { return NULL; }
extern const struct drm_ioctl_desc amdgpu_ioctls_kms[];
extern const int amdgpu_max_kms_ioctl;
int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags);
int amdgpu_driver_load_kms(struct amdgpu_device *adev, unsigned long flags);
void amdgpu_driver_unload_kms(struct drm_device *dev);
void amdgpu_driver_lastclose_kms(struct drm_device *dev);
int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv);
@ -1258,6 +1283,15 @@ static inline int amdgpu_dm_display_resume(struct amdgpu_device *adev) { return
void amdgpu_register_gpu_instance(struct amdgpu_device *adev);
void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev);
pci_ers_result_t amdgpu_pci_error_detected(struct pci_dev *pdev,
pci_channel_state_t state);
pci_ers_result_t amdgpu_pci_mmio_enabled(struct pci_dev *pdev);
pci_ers_result_t amdgpu_pci_slot_reset(struct pci_dev *pdev);
void amdgpu_pci_resume(struct pci_dev *pdev);
bool amdgpu_device_cache_pci_state(struct pci_dev *pdev);
bool amdgpu_device_load_pci_state(struct pci_dev *pdev);
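
These four callbacks are the PCIe DPC recovery support from the summary; presumably they are wired into the pci_driver through the standard AER callback table, along these lines:

static const struct pci_error_handlers amdgpu_pci_err_handler = {
	.error_detected = amdgpu_pci_error_detected,	/* stop I/O, choose reset path */
	.mmio_enabled	= amdgpu_pci_mmio_enabled,	/* MMIO is back, sanity check */
	.slot_reset	= amdgpu_pci_slot_reset,	/* restore state, re-init ASIC */
	.resume		= amdgpu_pci_resume,		/* resume normal operation */
};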
#include "amdgpu_object.h"
/* used by df_v3_6.c and amdgpu_pmu.c */
@ -1278,4 +1312,8 @@ static inline bool amdgpu_is_tmz(struct amdgpu_device *adev)
return adev->gmc.tmz_enabled;
}
static inline int amdgpu_in_reset(struct amdgpu_device *adev)
{
return atomic_read(&adev->in_gpu_reset);
}
#endif
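
The bool-to-atomic_t switch for in_gpu_reset (paired with lock_reset becoming reset_sem) lets hot paths poll the reset state lock-free while the recovery path still serializes through the rwsem. Typical shape after the change, sketched:

/* reader, hot path: cheap lock-free check */
if (amdgpu_in_reset(adev))
	return -EIO;

/* writer, recovery path: claim the flag, then take the heavy lock */
if (atomic_cmpxchg(&adev->in_gpu_reset, 0, 1) == 0) {
	down_write(&adev->reset_sem);
	/* ... perform the reset ... */
	up_write(&adev->reset_sem);
	atomic_set(&adev->in_gpu_reset, 0);
}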


@ -136,9 +136,7 @@ static int acp_poweroff(struct generic_pm_domain *genpd)
* 2. power off the acp tiles
* 3. check and enter ulv state
*/
if (adev->powerplay.pp_funcs &&
adev->powerplay.pp_funcs->set_powergating_by_smu)
amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_ACP, true);
amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_ACP, true);
}
return 0;
}
@ -157,8 +155,7 @@ static int acp_poweron(struct generic_pm_domain *genpd)
* 2. turn on acp clock
* 3. power on acp tiles
*/
if (adev->powerplay.pp_funcs->set_powergating_by_smu)
amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_ACP, false);
amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_ACP, false);
}
return 0;
}
@ -529,9 +526,7 @@ static int acp_set_powergating_state(void *handle,
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
bool enable = (state == AMD_PG_STATE_GATE);
if (adev->powerplay.pp_funcs &&
adev->powerplay.pp_funcs->set_powergating_by_smu)
amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_ACP, enable);
amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_ACP, enable);
return 0;
}


@ -463,11 +463,11 @@ static int amdgpu_atif_handler(struct amdgpu_device *adev,
if (req.pending & ATIF_DGPU_DISPLAY_EVENT) {
if (adev->flags & AMD_IS_PX) {
pm_runtime_get_sync(adev->ddev->dev);
pm_runtime_get_sync(adev_to_drm(adev)->dev);
/* Just fire off a uevent and let userspace tell us what to do */
drm_helper_hpd_irq_event(adev->ddev);
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
drm_helper_hpd_irq_event(adev_to_drm(adev));
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
}
}
/* TODO: check other events */
@ -806,8 +806,8 @@ int amdgpu_acpi_init(struct amdgpu_device *adev)
}
adev->atif = atif;
if (atif->notifications.brightness_change) {
#if defined(CONFIG_BACKLIGHT_CLASS_DEVICE) || defined(CONFIG_BACKLIGHT_CLASS_DEVICE_MODULE)
if (atif->notifications.brightness_change) {
if (amdgpu_device_has_dc_support(adev)) {
#if defined(CONFIG_DRM_AMD_DC)
struct amdgpu_display_manager *dm = &adev->dm;
@ -817,7 +817,7 @@ int amdgpu_acpi_init(struct amdgpu_device *adev)
struct drm_encoder *tmp;
/* Find the encoder controlling the brightness */
list_for_each_entry(tmp, &adev->ddev->mode_config.encoder_list,
list_for_each_entry(tmp, &adev_to_drm(adev)->mode_config.encoder_list,
head) {
struct amdgpu_encoder *enc = to_amdgpu_encoder(tmp);


@ -36,6 +36,8 @@
*/
uint64_t amdgpu_amdkfd_total_mem_size;
static bool kfd_initialized;
int amdgpu_amdkfd_init(void)
{
struct sysinfo si;
@ -51,19 +53,26 @@ int amdgpu_amdkfd_init(void)
#else
ret = -ENOENT;
#endif
kfd_initialized = !ret;
return ret;
}
void amdgpu_amdkfd_fini(void)
{
kgd2kfd_exit();
if (kfd_initialized) {
kgd2kfd_exit();
kfd_initialized = false;
}
}
void amdgpu_amdkfd_device_probe(struct amdgpu_device *adev)
{
bool vf = amdgpu_sriov_vf(adev);
if (!kfd_initialized)
return;
adev->kfd.dev = kgd2kfd_probe((struct kgd_dev *)adev,
adev->pdev, adev->asic_type, vf);
@ -119,7 +128,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
.gpuvm_size = min(adev->vm_manager.max_pfn
<< AMDGPU_GPU_PAGE_SHIFT,
AMDGPU_GMC_HOLE_START),
.drm_render_minor = adev->ddev->render->index,
.drm_render_minor = adev_to_drm(adev)->render->index,
.sdma_doorbell_idx = adev->doorbell_index.sdma_engine,
};
@ -160,7 +169,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
adev->doorbell_index.last_non_cp;
}
kgd2kfd_device_init(adev->kfd.dev, adev->ddev, &gpu_resources);
kgd2kfd_device_init(adev->kfd.dev, adev_to_drm(adev), &gpu_resources);
}
}
@ -479,11 +488,11 @@ int amdgpu_amdkfd_get_dmabuf_info(struct kgd_dev *kgd, int dma_buf_fd,
goto out_put;
obj = dma_buf->priv;
if (obj->dev->driver != adev->ddev->driver)
if (obj->dev->driver != adev_to_drm(adev)->driver)
/* Can't handle buffers from different drivers */
goto out_put;
adev = obj->dev->dev_private;
adev = drm_to_adev(obj->dev);
bo = gem_to_amdgpu_bo(obj);
if (!(bo->preferred_domains & (AMDGPU_GEM_DOMAIN_VRAM |
AMDGPU_GEM_DOMAIN_GTT)))
@ -517,8 +526,9 @@ int amdgpu_amdkfd_get_dmabuf_info(struct kgd_dev *kgd, int dma_buf_fd,
uint64_t amdgpu_amdkfd_get_vram_usage(struct kgd_dev *kgd)
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
struct ttm_resource_manager *vram_man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
return amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
return amdgpu_vram_mgr_usage(vram_man);
}
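
Part of the TTM refactoring from the summary: the fixed bdev.man[] array is gone and ttm_manager_type() is now the accessor for a placement domain's resource manager. The conversion is mechanical:

struct ttm_resource_manager *man =
	ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);

uint64_t used = amdgpu_vram_mgr_usage(man);	/* was &bdev.man[TTM_PL_VRAM] */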
uint64_t amdgpu_amdkfd_get_hive_id(struct kgd_dev *kgd)
@ -571,6 +581,13 @@ uint32_t amdgpu_amdkfd_get_asic_rev_id(struct kgd_dev *kgd)
return adev->rev_id;
}
int amdgpu_amdkfd_get_noretry(struct kgd_dev *kgd)
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
return adev->gmc.noretry;
}
int amdgpu_amdkfd_submit_ib(struct kgd_dev *kgd, enum kgd_engine_type engine,
uint32_t vmid, uint64_t gpu_addr,
uint32_t *ib_cmd, uint32_t ib_len)
@ -612,6 +629,7 @@ int amdgpu_amdkfd_submit_ib(struct kgd_dev *kgd, enum kgd_engine_type engine,
job->vmid = vmid;
ret = amdgpu_ib_schedule(ring, 1, ib, job, &f);
if (ret) {
DRM_ERROR("amdgpu: failed to schedule IB.\n");
goto err_ib_sched;
@ -755,4 +773,8 @@ void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd)
{
}
void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask)
{
}
#endif


@ -181,6 +181,7 @@ uint64_t amdgpu_amdkfd_get_unique_id(struct kgd_dev *kgd);
uint64_t amdgpu_amdkfd_get_mmio_remap_phys_addr(struct kgd_dev *kgd);
uint32_t amdgpu_amdkfd_get_num_gws(struct kgd_dev *kgd);
uint32_t amdgpu_amdkfd_get_asic_rev_id(struct kgd_dev *kgd);
int amdgpu_amdkfd_get_noretry(struct kgd_dev *kgd);
uint8_t amdgpu_amdkfd_get_xgmi_hops_count(struct kgd_dev *dst, struct kgd_dev *src);
/* Read user wptr from a specified user address space with page fault
@ -270,5 +271,6 @@ int kgd2kfd_resume_mm(struct mm_struct *mm);
int kgd2kfd_schedule_evict_and_restore_process(struct mm_struct *mm,
struct dma_fence *fence);
void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd);
void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask);
#endif /* AMDGPU_AMDKFD_H_INCLUDED */


@ -283,22 +283,6 @@ static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
return 0;
}
static void kgd_set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
uint64_t page_table_base)
{
struct amdgpu_device *adev = get_amdgpu_device(kgd);
if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid)) {
pr_err("trying to set page table base for wrong VMID %u\n",
vmid);
return;
}
mmhub_v9_4_setup_vm_pt_regs(adev, vmid, page_table_base);
gfxhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
}
const struct kfd2kgd_calls arcturus_kfd2kgd = {
.program_sh_mem_settings = kgd_gfx_v9_program_sh_mem_settings,
.set_pasid_vmid_mapping = kgd_gfx_v9_set_pasid_vmid_mapping,
@ -317,7 +301,7 @@ const struct kfd2kgd_calls arcturus_kfd2kgd = {
.wave_control_execute = kgd_gfx_v9_wave_control_execute,
.address_watch_get_offset = kgd_gfx_v9_address_watch_get_offset,
.get_atc_vmid_pasid_mapping_info =
kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
.set_vm_context_page_table_base = kgd_set_vm_context_page_table_base,
.get_hive_id = amdgpu_amdkfd_get_hive_id,
kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
.set_vm_context_page_table_base =
kgd_gfx_v9_set_vm_context_page_table_base,
};


@ -32,7 +32,6 @@
#include "v10_structs.h"
#include "nv.h"
#include "nvd.h"
#include "gfxhub_v2_0.h"
enum hqd_dequeue_request_type {
NO_ACTION = 0,
@ -542,7 +541,7 @@ static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
uint32_t temp;
struct v10_compute_mqd *m = get_mqd(mqd);
if (adev->in_gpu_reset)
if (amdgpu_in_reset(adev))
return -EIO;
#if 0
@ -753,7 +752,7 @@ static void set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
}
/* SDMA is on gfxhub as well for Navi1* series */
gfxhub_v2_0_setup_vm_pt_regs(adev, vmid, page_table_base);
adev->gfxhub.funcs->setup_vm_pt_regs(adev, vmid, page_table_base);
}
const struct kfd2kgd_calls gfx_v10_kfd2kgd = {
@ -776,6 +775,4 @@ const struct kfd2kgd_calls gfx_v10_kfd2kgd = {
.get_atc_vmid_pasid_mapping_info =
get_atc_vmid_pasid_mapping_info,
.set_vm_context_page_table_base = set_vm_context_page_table_base,
.get_hive_id = amdgpu_amdkfd_get_hive_id,
.get_unique_id = amdgpu_amdkfd_get_unique_id,
};


@ -31,7 +31,6 @@
#include "v10_structs.h"
#include "nv.h"
#include "nvd.h"
#include "gfxhub_v2_1.h"
enum hqd_dequeue_request_type {
NO_ACTION = 0,
@ -657,7 +656,7 @@ static void set_vm_context_page_table_base_v10_3(struct kgd_dev *kgd, uint32_t v
struct amdgpu_device *adev = get_amdgpu_device(kgd);
/* SDMA is on gfxhub as well for Navi1* series */
gfxhub_v2_1_setup_vm_pt_regs(adev, vmid, page_table_base);
adev->gfxhub.funcs->setup_vm_pt_regs(adev, vmid, page_table_base);
}
#if 0
@ -822,7 +821,6 @@ const struct kfd2kgd_calls gfx_v10_3_kfd2kgd = {
.address_watch_get_offset = address_watch_get_offset_v10_3,
.get_atc_vmid_pasid_mapping_info = NULL,
.set_vm_context_page_table_base = set_vm_context_page_table_base_v10_3,
.get_hive_id = amdgpu_amdkfd_get_hive_id,
#if 0
.enable_debug_trap = enable_debug_trap_v10_3,
.disable_debug_trap = disable_debug_trap_v10_3,


@ -423,7 +423,7 @@ static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
unsigned long flags, end_jiffies;
int retry;
if (adev->in_gpu_reset)
if (amdgpu_in_reset(adev))
return -EIO;
acquire_queue(kgd, pipe_id, queue_id);


@ -419,7 +419,7 @@ static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
int retry;
struct vi_mqd *m = get_mqd(mqd);
if (adev->in_gpu_reset)
if (amdgpu_in_reset(adev))
return -EIO;
acquire_queue(kgd, pipe_id, queue_id);


@ -36,9 +36,7 @@
#include "v9_structs.h"
#include "soc15.h"
#include "soc15d.h"
#include "mmhub_v1_0.h"
#include "gfxhub_v1_0.h"
#include "gfx_v9_0.h"
enum hqd_dequeue_request_type {
NO_ACTION = 0,
@ -552,7 +550,7 @@ int kgd_gfx_v9_hqd_destroy(struct kgd_dev *kgd, void *mqd,
uint32_t temp;
struct v9_mqd *m = get_mqd(mqd);
if (adev->in_gpu_reset)
if (amdgpu_in_reset(adev))
return -EIO;
acquire_queue(kgd, pipe_id, queue_id);
@ -690,7 +688,7 @@ uint32_t kgd_gfx_v9_address_watch_get_offset(struct kgd_dev *kgd,
return 0;
}
static void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd,
void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd,
uint32_t vmid, uint64_t page_table_base)
{
struct amdgpu_device *adev = get_amdgpu_device(kgd);
@ -701,9 +699,182 @@ static void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd,
return;
}
mmhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
adev->mmhub.funcs->setup_vm_pt_regs(adev, vmid, page_table_base);
gfxhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
adev->gfxhub.funcs->setup_vm_pt_regs(adev, vmid, page_table_base);
}
static void lock_spi_csq_mutexes(struct amdgpu_device *adev)
{
mutex_lock(&adev->srbm_mutex);
mutex_lock(&adev->grbm_idx_mutex);
}
static void unlock_spi_csq_mutexes(struct amdgpu_device *adev)
{
mutex_unlock(&adev->grbm_idx_mutex);
mutex_unlock(&adev->srbm_mutex);
}
/**
* @get_wave_count: Read device registers to get number of waves in flight for
* a particular queue. The method also returns the VMID associated with the
* queue.
*
* @adev: Handle of device whose registers are to be read
* @queue_idx: Index of queue in the queue-map bit-field
* @wave_cnt: Output parameter updated with number of waves in flight
* @vmid: Output parameter updated with VMID of queue whose wave count
* is being collected
*/
static void get_wave_count(struct amdgpu_device *adev, int queue_idx,
int *wave_cnt, int *vmid)
{
int pipe_idx;
int queue_slot;
unsigned int reg_val;
/*
* Program GRBM with appropriate MEID, PIPEID, QUEUEID and VMID
* parameters to read out waves in flight. Get VMID if there are
* non-zero waves in flight.
*/
*vmid = 0xFF;
*wave_cnt = 0;
pipe_idx = queue_idx / adev->gfx.mec.num_queue_per_pipe;
queue_slot = queue_idx % adev->gfx.mec.num_queue_per_pipe;
soc15_grbm_select(adev, 1, pipe_idx, queue_slot, 0);
reg_val = RREG32(SOC15_REG_OFFSET(GC, 0, mmSPI_CSQ_WF_ACTIVE_COUNT_0) +
queue_slot);
*wave_cnt = reg_val & SPI_CSQ_WF_ACTIVE_COUNT_0__COUNT_MASK;
if (*wave_cnt != 0)
*vmid = (RREG32_SOC15(GC, 0, mmCP_HQD_VMID) &
CP_HQD_VMID__VMID_MASK) >> CP_HQD_VMID__VMID__SHIFT;
}
/**
* @kgd_gfx_v9_get_cu_occupancy: Reads relevant registers associated with each
* shader engine and aggregates the number of waves that are in flight for the
* process whose pasid is provided as a parameter. The process could have ZERO
* or more queues running and submitting waves to compute units.
*
* @kgd: Handle of device from which to get number of waves in flight
* @pasid: Identifies the process for which this query call is invoked
* @wave_cnt: Output parameter updated with number of waves in flight that
* belong to process with given pasid
* @max_waves_per_cu: Output parameter updated with maximum number of waves
* possible per Compute Unit
*
* @note: It's possible that the device has too many queues (oversubscription)
* in which case a VMID could be remapped to a different PASID. This could lead
* to an inaccurate wave count. Following is a high-level sequence:
* Time T1: vmid = getVmid(); vmid is associated with Pasid P1
* Time T2: pasid = getPasid(vmid); vmid is associated with Pasid P2
* In the sequence above, the wave count obtained at time T1 will be
* incorrectly lost from, or added to, the total wave count.
*
* The registers that provide the waves in flight are:
*
* SPI_CSQ_WF_ACTIVE_STATUS - bit-map of queues per pipe. The bit is ON if a
* queue is slotted, OFF if there is no queue. A process could have ZERO or
* more queues slotted and submitting waves to be run on compute units. Even
* when there is a queue it is possible there could be zero wave fronts, this
* can happen when queue is waiting on top-of-pipe events - e.g. waitRegMem
* command
*
* For each bit that is ON from above:
*
* Read (SPI_CSQ_WF_ACTIVE_COUNT_0 + queue_idx) register. It provides the
* number of waves that are in flight for the queue at specified index. The
* index ranges from 0 to 7.
*
* If non-zero waves are in flight, read CP_HQD_VMID register to obtain VMID
* of the wave(s).
*
* Determine if VMID from above step maps to pasid provided as parameter. If
* it matches, aggregate the wave count. A VMID that does not match the pasid
* is a normal condition, i.e. a device is expected to support multiple queues
* from multiple processes.
*
* Reading registers referenced above involves programming GRBM appropriately
*/
static void kgd_gfx_v9_get_cu_occupancy(struct kgd_dev *kgd, int pasid,
int *pasid_wave_cnt, int *max_waves_per_cu)
{
int qidx;
int vmid;
int se_idx;
int sh_idx;
int se_cnt;
int sh_cnt;
int wave_cnt;
int queue_map;
int pasid_tmp;
int max_queue_cnt;
int vmid_wave_cnt = 0;
struct amdgpu_device *adev;
DECLARE_BITMAP(cp_queue_bitmap, KGD_MAX_QUEUES);
adev = get_amdgpu_device(kgd);
lock_spi_csq_mutexes(adev);
soc15_grbm_select(adev, 1, 0, 0, 0);
/*
* Iterate through the shader engines and arrays of the device
* to get number of waves in flight
*/
bitmap_complement(cp_queue_bitmap, adev->gfx.mec.queue_bitmap,
KGD_MAX_QUEUES);
max_queue_cnt = adev->gfx.mec.num_pipe_per_mec *
adev->gfx.mec.num_queue_per_pipe;
sh_cnt = adev->gfx.config.max_sh_per_se;
se_cnt = adev->gfx.config.max_shader_engines;
for (se_idx = 0; se_idx < se_cnt; se_idx++) {
for (sh_idx = 0; sh_idx < sh_cnt; sh_idx++) {
gfx_v9_0_select_se_sh(adev, se_idx, sh_idx, 0xffffffff);
queue_map = RREG32(SOC15_REG_OFFSET(GC, 0,
mmSPI_CSQ_WF_ACTIVE_STATUS));
/*
* Assumption: the queue map encodes the following schema:
* four pipes per micro-engine, with each pipe mapping
* eight queues. This schema holds for GFX9 devices and
* must be verified for newer device families.
*/
for (qidx = 0; qidx < max_queue_cnt; qidx++) {
/* Skip queues that are not associated with
* compute functions
*/
if (!test_bit(qidx, cp_queue_bitmap))
continue;
if (!(queue_map & (1 << qidx)))
continue;
/* Get number of waves in flight and aggregate them */
get_wave_count(adev, qidx, &wave_cnt, &vmid);
if (wave_cnt != 0) {
pasid_tmp =
RREG32(SOC15_REG_OFFSET(OSSSYS, 0,
mmIH_VMID_0_LUT) + vmid);
if (pasid_tmp == pasid)
vmid_wave_cnt += wave_cnt;
}
}
}
}
gfx_v9_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
soc15_grbm_select(adev, 0, 0, 0, 0);
unlock_spi_csq_mutexes(adev);
/* Update the output parameters and return */
*pasid_wave_cnt = vmid_wave_cnt;
*max_waves_per_cu = adev->gfx.cu_info.simd_per_cu *
adev->gfx.cu_info.max_waves_per_simd;
}
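
From the two outputs a consumer can derive a utilization percentage; the KFD side presumably does something along these lines (sketch, variable names illustrative):

int wave_cnt, max_waves_per_cu, occupancy_pct;
uint32_t cu_cnt = adev->gfx.cu_info.number;	/* active compute units */

kgd_gfx_v9_get_cu_occupancy(kgd, pasid, &wave_cnt, &max_waves_per_cu);

/* share of the device's total wave slots held by this process */
occupancy_pct = wave_cnt * 100 / (cu_cnt * max_waves_per_cu);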
const struct kfd2kgd_calls gfx_v9_kfd2kgd = {
@ -726,6 +897,5 @@ const struct kfd2kgd_calls gfx_v9_kfd2kgd = {
.get_atc_vmid_pasid_mapping_info =
kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
.set_vm_context_page_table_base = kgd_gfx_v9_set_vm_context_page_table_base,
.get_hive_id = amdgpu_amdkfd_get_hive_id,
.get_unique_id = amdgpu_amdkfd_get_unique_id,
.get_cu_occupancy = kgd_gfx_v9_get_cu_occupancy,
};


@ -60,3 +60,6 @@ uint32_t kgd_gfx_v9_address_watch_get_offset(struct kgd_dev *kgd,
bool kgd_gfx_v9_get_atc_vmid_pasid_mapping_info(struct kgd_dev *kgd,
uint8_t vmid, uint16_t *p_pasid);
void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd,
uint32_t vmid, uint64_t page_table_base);


@ -148,8 +148,12 @@ static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
spin_lock(&kfd_mem_limit.mem_limit_lock);
if (kfd_mem_limit.system_mem_used + system_mem_needed >
kfd_mem_limit.max_system_mem_limit)
pr_debug("Set no_system_mem_limit=1 if using shared memory\n");
if ((kfd_mem_limit.system_mem_used + system_mem_needed >
kfd_mem_limit.max_system_mem_limit) ||
kfd_mem_limit.max_system_mem_limit && !no_system_mem_limit) ||
(kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
kfd_mem_limit.max_ttm_mem_limit) ||
(adev->kfd.vram_used + vram_needed >
@ -562,7 +566,7 @@ static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr)
mutex_lock(&process_info->lock);
ret = amdgpu_ttm_tt_set_userptr(bo->tbo.ttm, user_addr, 0);
ret = amdgpu_ttm_tt_set_userptr(&bo->tbo, user_addr, 0);
if (ret) {
pr_err("%s: Failed to set userptr: %d\n", __func__, ret);
goto out;
@ -1668,7 +1672,7 @@ int amdgpu_amdkfd_gpuvm_import_dmabuf(struct kgd_dev *kgd,
return -EINVAL;
obj = dma_buf->priv;
if (obj->dev->dev_private != adev)
if (drm_to_adev(obj->dev) != adev)
/* Can't handle buffers from other devices */
return -EINVAL;


@ -148,7 +148,7 @@ void amdgpu_atombios_i2c_init(struct amdgpu_device *adev)
if (i2c.valid) {
sprintf(stmp, "0x%x", i2c.i2c_id);
adev->i2c_bus[i] = amdgpu_i2c_create(adev->ddev, &i2c, stmp);
adev->i2c_bus[i] = amdgpu_i2c_create(adev_to_drm(adev), &i2c, stmp);
}
gpio = (ATOM_GPIO_I2C_ASSIGMENT *)
((u8 *)gpio + sizeof(ATOM_GPIO_I2C_ASSIGMENT));
@ -541,7 +541,7 @@ bool amdgpu_atombios_get_connector_info_from_object_table(struct amdgpu_device *
}
}
amdgpu_link_encoder_connector(adev->ddev);
amdgpu_link_encoder_connector(adev_to_drm(adev));
return true;
}
@ -1786,9 +1786,9 @@ static int amdgpu_atombios_allocate_fb_scratch(struct amdgpu_device *adev)
(uint32_t)(ATOM_VRAM_BLOCK_SRIOV_MSG_SHARE_RESERVATION <<
ATOM_VRAM_OPERATION_FLAGS_SHIFT)) {
/* Firmware request VRAM reservation for SR-IOV */
adev->fw_vram_usage.start_offset = (start_addr &
adev->mman.fw_vram_usage_start_offset = (start_addr &
(~ATOM_VRAM_OPERATION_FLAGS_MASK)) << 10;
adev->fw_vram_usage.size = size << 10;
adev->mman.fw_vram_usage_size = size << 10;
/* Use the default scratch size */
usage_bytes = 0;
} else {
@ -1882,7 +1882,7 @@ static void cail_mc_write(struct card_info *info, uint32_t reg, uint32_t val)
*/
static void cail_reg_write(struct card_info *info, uint32_t reg, uint32_t val)
{
struct amdgpu_device *adev = info->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(info->dev);
WREG32(reg, val);
}
@ -1898,7 +1898,7 @@ static void cail_reg_write(struct card_info *info, uint32_t reg, uint32_t val)
*/
static uint32_t cail_reg_read(struct card_info *info, uint32_t reg)
{
struct amdgpu_device *adev = info->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(info->dev);
uint32_t r;
r = RREG32(reg);
@ -1916,7 +1916,7 @@ static uint32_t cail_reg_read(struct card_info *info, uint32_t reg)
*/
static void cail_ioreg_write(struct card_info *info, uint32_t reg, uint32_t val)
{
struct amdgpu_device *adev = info->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(info->dev);
WREG32_IO(reg, val);
}
@ -1932,7 +1932,7 @@ static void cail_ioreg_write(struct card_info *info, uint32_t reg, uint32_t val)
*/
static uint32_t cail_ioreg_read(struct card_info *info, uint32_t reg)
{
struct amdgpu_device *adev = info->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(info->dev);
uint32_t r;
r = RREG32_IO(reg);
@ -1944,7 +1944,7 @@ static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
struct amdgpu_device *adev = drm_to_adev(ddev);
struct atom_context *ctx = adev->mode_info.atom_context;
return snprintf(buf, PAGE_SIZE, "%s\n", ctx->vbios_version);
@ -1995,7 +1995,7 @@ int amdgpu_atombios_init(struct amdgpu_device *adev)
return -ENOMEM;
adev->mode_info.atom_card_info = atom_card_info;
atom_card_info->dev = adev->ddev;
atom_card_info->dev = adev_to_drm(adev);
atom_card_info->reg_read = cail_reg_read;
atom_card_info->reg_write = cail_reg_write;
/* needed for iio ops */


@ -89,9 +89,9 @@ int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev)
(uint32_t)(ATOM_VRAM_BLOCK_SRIOV_MSG_SHARE_RESERVATION <<
ATOM_VRAM_OPERATION_FLAGS_SHIFT)) {
/* Firmware request VRAM reservation for SR-IOV */
adev->fw_vram_usage.start_offset = (start_addr &
adev->mman.fw_vram_usage_start_offset = (start_addr &
(~ATOM_VRAM_OPERATION_FLAGS_MASK)) << 10;
adev->fw_vram_usage.size = size << 10;
adev->mman.fw_vram_usage_size = size << 10;
/* Use the default scratch size */
usage_bytes = 0;
} else {
@ -543,6 +543,7 @@ int amdgpu_mem_train_support(struct amdgpu_device *adev)
case HW_REV(11, 0, 0):
case HW_REV(11, 0, 5):
case HW_REV(11, 0, 7):
case HW_REV(11, 0, 11):
ret = 1;
break;
default:


@ -616,7 +616,7 @@ static bool amdgpu_atpx_detect(void)
while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
vga_count++;
has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true);
has_atpx |= amdgpu_atpx_pci_probe_handle(pdev);
parent_pdev = pci_upstream_bridge(pdev);
d3_supported |= parent_pdev && parent_pdev->bridge_d3;
@ -626,7 +626,7 @@ static bool amdgpu_atpx_detect(void)
while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
vga_count++;
has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true);
has_atpx |= amdgpu_atpx_pci_probe_handle(pdev);
parent_pdev = pci_upstream_bridge(pdev);
d3_supported |= parent_pdev && parent_pdev->bridge_d3;


@ -417,26 +417,40 @@ static inline bool amdgpu_acpi_vfct_bios(struct amdgpu_device *adev)
bool amdgpu_get_bios(struct amdgpu_device *adev)
{
if (amdgpu_atrm_get_bios(adev))
if (amdgpu_atrm_get_bios(adev)) {
dev_info(adev->dev, "Fetched VBIOS from ATRM\n");
goto success;
}
if (amdgpu_acpi_vfct_bios(adev))
if (amdgpu_acpi_vfct_bios(adev)) {
dev_info(adev->dev, "Fetched VBIOS from VFCT\n");
goto success;
}
if (igp_read_bios_from_vram(adev))
if (igp_read_bios_from_vram(adev)) {
dev_info(adev->dev, "Fetched VBIOS from VRAM BAR\n");
goto success;
}
if (amdgpu_read_bios(adev))
if (amdgpu_read_bios(adev)) {
dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
goto success;
}
if (amdgpu_read_bios_from_rom(adev))
if (amdgpu_read_bios_from_rom(adev)) {
dev_info(adev->dev, "Fetched VBIOS from ROM\n");
goto success;
}
if (amdgpu_read_disabled_bios(adev))
if (amdgpu_read_disabled_bios(adev)) {
dev_info(adev->dev, "Fetched VBIOS from disabled ROM BAR\n");
goto success;
}
if (amdgpu_read_platform_bios(adev))
if (amdgpu_read_platform_bios(adev)) {
dev_info(adev->dev, "Fetched VBIOS from platform\n");
goto success;
}
DRM_ERROR("Unable to locate a BIOS ROM\n");
return false;
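
Since every rung of amdgpu_get_bios() now carries an identical dev_info/goto pair, the chain could equally be table-driven; a hedged alternative sketch, not the kernel's actual code:

static const struct {
	bool (*try_fetch)(struct amdgpu_device *adev);
	const char *src;
} vbios_sources[] = {
	{ amdgpu_atrm_get_bios,    "ATRM" },
	{ amdgpu_acpi_vfct_bios,   "VFCT" },
	{ igp_read_bios_from_vram, "VRAM BAR" },
	{ amdgpu_read_bios,        "ROM BAR" },
	/* ... remaining sources, in priority order ... */
};

for (i = 0; i < ARRAY_SIZE(vbios_sources); i++) {
	if (vbios_sources[i].try_fetch(adev)) {
		dev_info(adev->dev, "Fetched VBIOS from %s\n",
			 vbios_sources[i].src);
		return true;
	}
}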


@ -265,7 +265,7 @@ int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
int amdgpu_bo_list_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_fpriv *fpriv = filp->driver_priv;
union drm_amdgpu_bo_list *args = data;
uint32_t handle = args->in.list_handle;


@ -26,6 +26,7 @@
#include <drm/drm_edid.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/amdgpu_drm.h>
#include "amdgpu.h"
@ -41,7 +42,7 @@
void amdgpu_connector_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
/* bail if the connector does not have hpd pin, e.g.,
@ -279,7 +280,7 @@ amdgpu_connector_get_hardcoded_edid(struct amdgpu_device *adev)
static void amdgpu_connector_get_edid(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
if (amdgpu_connector->edid)
@ -463,7 +464,7 @@ static int amdgpu_connector_set_property(struct drm_connector *connector,
uint64_t val)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_encoder *encoder;
struct amdgpu_encoder *amdgpu_encoder;
@ -834,7 +835,7 @@ static enum drm_mode_status amdgpu_connector_vga_mode_valid(struct drm_connector
struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
/* XXX check mode bandwidth */
@ -941,7 +942,7 @@ static bool
amdgpu_connector_check_hpd_status_unchanged(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
enum drm_connector_status status;
@ -972,7 +973,7 @@ static enum drm_connector_status
amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
const struct drm_encoder_helper_funcs *encoder_funcs;
int r;
@ -1159,7 +1160,7 @@ static enum drm_mode_status amdgpu_connector_dvi_mode_valid(struct drm_connector
struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
/* XXX check mode bandwidth */
@ -1311,7 +1312,7 @@ static bool amdgpu_connector_encoder_is_hbr2(struct drm_connector *connector)
bool amdgpu_connector_is_dp12_capable(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
if ((adev->clock.default_dispclk >= 53900) &&
amdgpu_connector_encoder_is_hbr2(connector)) {
@ -1325,7 +1326,7 @@ static enum drm_connector_status
amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
enum drm_connector_status ret = connector_status_disconnected;
struct amdgpu_connector_atom_dig *amdgpu_dig_connector = amdgpu_connector->con_priv;
@ -1413,6 +1414,10 @@ amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)
pm_runtime_put_autosuspend(connector->dev->dev);
}
drm_dp_set_subconnector_property(&amdgpu_connector->base,
ret,
amdgpu_dig_connector->dpcd,
amdgpu_dig_connector->downstream_ports);
return ret;
}
@ -1521,7 +1526,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
struct amdgpu_hpd *hpd,
struct amdgpu_router *router)
{
struct drm_device *dev = adev->ddev;
struct drm_device *dev = adev_to_drm(adev);
struct drm_connector *connector;
struct drm_connector_list_iter iter;
struct amdgpu_connector *amdgpu_connector;
@ -1959,6 +1964,11 @@ amdgpu_connector_add(struct amdgpu_device *adev,
if (has_aux)
amdgpu_atombios_dp_aux_init(amdgpu_connector);
if (connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
connector_type == DRM_MODE_CONNECTOR_eDP) {
drm_connector_attach_dp_subconnector_property(&amdgpu_connector->base);
}
return;
failed:


@ -299,7 +299,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
{
s64 time_us, increment_us;
u64 free_vram, total_vram, used_vram;
struct ttm_resource_manager *vram_man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
/* Allow a maximum of 200 accumulated ms. This is basically per-IB
* throttling.
*
@ -316,7 +316,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
}
total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size);
used_vram = amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
used_vram = amdgpu_vram_mgr_usage(vram_man);
free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
spin_lock(&adev->mm_stats.lock);
@ -363,7 +363,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
if (!amdgpu_gmc_vram_full_visible(&adev->gmc)) {
u64 total_vis_vram = adev->gmc.visible_vram_size;
u64 used_vis_vram =
amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
amdgpu_vram_mgr_vis_usage(vram_man);
if (used_vis_vram < total_vis_vram) {
u64 free_vis_vram = total_vis_vram - used_vis_vram;
@ -1275,13 +1275,24 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
return r;
}
static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser)
{
int i;
if (!trace_amdgpu_cs_enabled())
return;
for (i = 0; i < parser->job->num_ibs; i++)
trace_amdgpu_cs(parser, i);
}
int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
union drm_amdgpu_cs *cs = data;
struct amdgpu_cs_parser parser = {};
bool reserved_buffers = false;
int i, r;
int r;
if (amdgpu_ras_intr_triggered())
return -EHWPOISON;
@ -1294,7 +1305,8 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
r = amdgpu_cs_parser_init(&parser, data);
if (r) {
DRM_ERROR("Failed to initialize parser %d!\n", r);
if (printk_ratelimit())
DRM_ERROR("Failed to initialize parser %d!\n", r);
goto out;
}
@ -1319,8 +1331,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
reserved_buffers = true;
for (i = 0; i < parser.job->num_ibs; i++)
trace_amdgpu_cs(&parser, i);
trace_amdgpu_cs_ibs(&parser);
r = amdgpu_cs_vm_handling(&parser);
if (r)
@ -1421,7 +1432,7 @@ static struct dma_fence *amdgpu_cs_get_fence(struct amdgpu_device *adev,
int amdgpu_cs_fence_to_handle_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
union drm_amdgpu_fence_to_handle *info = data;
struct dma_fence *fence;
struct drm_syncobj *syncobj;
@ -1597,7 +1608,7 @@ static int amdgpu_cs_wait_any_fence(struct amdgpu_device *adev,
int amdgpu_cs_wait_fences_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
union drm_amdgpu_wait_fences *wait = data;
uint32_t fence_count = wait->in.fence_count;
struct drm_amdgpu_fence *fences_user;


@ -46,7 +46,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = {
static int amdgpu_ctx_priority_permit(struct drm_file *filp,
enum drm_sched_priority priority)
{
if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
if (priority < 0 || priority >= DRM_SCHED_PRIORITY_COUNT)
return -EINVAL;
/* NORMAL and below are accessible by everyone */
@ -65,7 +65,7 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
{
switch (prio) {
case DRM_SCHED_PRIORITY_HIGH_HW:
case DRM_SCHED_PRIORITY_HIGH:
case DRM_SCHED_PRIORITY_KERNEL:
return AMDGPU_GFX_PIPE_PRIO_HIGH;
default:
@ -114,7 +114,11 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, u32 hw_ip,
scheds = adev->gpu_sched[hw_ip][hw_prio].sched;
num_scheds = adev->gpu_sched[hw_ip][hw_prio].num_scheds;
if (hw_ip == AMDGPU_HW_IP_VCN_ENC || hw_ip == AMDGPU_HW_IP_VCN_DEC) {
/* disable load balance if the hw engine retains context among dependent jobs */
if (hw_ip == AMDGPU_HW_IP_VCN_ENC ||
hw_ip == AMDGPU_HW_IP_VCN_DEC ||
hw_ip == AMDGPU_HW_IP_UVD_ENC ||
hw_ip == AMDGPU_HW_IP_UVD) {
sched = drm_sched_pick_best(scheds, num_scheds);
scheds = &sched;
num_scheds = 1;
@ -385,16 +389,15 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
enum drm_sched_priority priority;
union drm_amdgpu_ctx *args = data;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_fpriv *fpriv = filp->driver_priv;
r = 0;
id = args->in.ctx_id;
priority = amdgpu_to_sched_priority(args->in.priority);
r = amdgpu_to_sched_priority(args->in.priority, &priority);
/* For backwards compatibility reasons, we need to accept
* ioctls with garbage in the priority field */
if (priority == DRM_SCHED_PRIORITY_INVALID)
if (r == -EINVAL)
priority = DRM_SCHED_PRIORITY_NORMAL;
switch (args->in.op) {


@ -34,6 +34,7 @@
#include "amdgpu_pm.h"
#include "amdgpu_dm_debugfs.h"
#include "amdgpu_ras.h"
#include "amdgpu_rap.h"
/**
* amdgpu_debugfs_add_files - Add simple debugfs entries
@ -68,8 +69,8 @@ int amdgpu_debugfs_add_files(struct amdgpu_device *adev,
adev->debugfs_count = i;
#if defined(CONFIG_DEBUG_FS)
drm_debugfs_create_files(files, nfiles,
adev->ddev->primary->debugfs_root,
adev->ddev->primary);
adev_to_drm(adev)->primary->debugfs_root,
adev_to_drm(adev)->primary);
#endif
return 0;
}
@ -100,14 +101,18 @@ static int amdgpu_debugfs_autodump_open(struct inode *inode, struct file *file)
file->private_data = adev;
mutex_lock(&adev->lock_reset);
ret = down_read_killable(&adev->reset_sem);
if (ret)
return ret;
if (adev->autodump.dumping.done) {
reinit_completion(&adev->autodump.dumping);
ret = 0;
} else {
ret = -EBUSY;
}
mutex_unlock(&adev->lock_reset);
up_read(&adev->reset_sem);
return ret;
}
@ -126,7 +131,7 @@ static unsigned int amdgpu_debugfs_autodump_poll(struct file *file, struct poll_
poll_wait(file, &adev->autodump.gpu_hang, poll_table);
if (adev->in_gpu_reset)
if (amdgpu_in_reset(adev))
return POLLIN | POLLRDNORM | POLLWRNORM;
return 0;
@ -146,7 +151,7 @@ static void amdgpu_debugfs_autodump_init(struct amdgpu_device *adev)
init_waitqueue_head(&adev->autodump.gpu_hang);
debugfs_create_file("amdgpu_autodump", 0600,
adev->ddev->primary->debugfs_root,
adev_to_drm(adev)->primary->debugfs_root,
adev, &autodump_debug_fops);
}
@ -222,23 +227,23 @@ static int amdgpu_debugfs_process_reg_op(bool read, struct file *f,
*pos &= (1UL << 22) - 1;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
if (use_bank) {
if ((sh_bank != 0xFFFFFFFF && sh_bank >= adev->gfx.config.max_sh_per_se) ||
(se_bank != 0xFFFFFFFF && se_bank >= adev->gfx.config.max_shader_engines)) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return -EINVAL;
}
@ -262,7 +267,7 @@ static int amdgpu_debugfs_process_reg_op(bool read, struct file *f,
} else {
r = get_user(value, (uint32_t *)buf);
if (!r)
amdgpu_mm_wreg_mmio_rlc(adev, *pos >> 2, value, 0);
amdgpu_mm_wreg_mmio_rlc(adev, *pos >> 2, value);
}
if (r) {
result = r;
@ -287,8 +292,8 @@ static int amdgpu_debugfs_process_reg_op(bool read, struct file *f,
if (pm_pg_lock)
mutex_unlock(&adev->pm.mutex);
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return result;
@ -335,15 +340,15 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -353,8 +358,8 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
value = RREG32_PCIE(*pos >> 2);
r = put_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return r;
}
@ -365,8 +370,8 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return result;
@ -394,15 +399,15 @@ static ssize_t amdgpu_debugfs_regs_pcie_write(struct file *f, const char __user
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -411,8 +416,8 @@ static ssize_t amdgpu_debugfs_regs_pcie_write(struct file *f, const char __user
r = get_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return r;
}
@ -425,8 +430,8 @@ static ssize_t amdgpu_debugfs_regs_pcie_write(struct file *f, const char __user
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return result;
@ -454,15 +459,15 @@ static ssize_t amdgpu_debugfs_regs_didt_read(struct file *f, char __user *buf,
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -472,8 +477,8 @@ static ssize_t amdgpu_debugfs_regs_didt_read(struct file *f, char __user *buf,
value = RREG32_DIDT(*pos >> 2);
r = put_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return r;
}
@ -484,8 +489,8 @@ static ssize_t amdgpu_debugfs_regs_didt_read(struct file *f, char __user *buf,
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return result;
@ -513,15 +518,15 @@ static ssize_t amdgpu_debugfs_regs_didt_write(struct file *f, const char __user
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -530,8 +535,8 @@ static ssize_t amdgpu_debugfs_regs_didt_write(struct file *f, const char __user
r = get_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return r;
}
@ -544,8 +549,8 @@ static ssize_t amdgpu_debugfs_regs_didt_write(struct file *f, const char __user
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return result;
@ -573,15 +578,15 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -591,8 +596,8 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
value = RREG32_SMC(*pos);
r = put_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return r;
}
@ -603,8 +608,8 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return result;
@ -632,15 +637,15 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -649,8 +654,8 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
r = get_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return r;
}
@ -663,8 +668,8 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
amdgpu_virt_disable_access_debugfs(adev);
return result;
@ -791,22 +796,22 @@ static ssize_t amdgpu_debugfs_sensor_read(struct file *f, char __user *buf,
valuesize = sizeof(values);
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_dpm_read_sensor(adev, idx, &values[0], &valuesize);
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
if (r) {
amdgpu_virt_disable_access_debugfs(adev);
@ -873,15 +878,15 @@ static ssize_t amdgpu_debugfs_wave_read(struct file *f, char __user *buf,
wave = (*pos & GENMASK_ULL(36, 31)) >> 31;
simd = (*pos & GENMASK_ULL(44, 37)) >> 37;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = amdgpu_virt_enable_access_debugfs(adev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -896,8 +901,8 @@ static ssize_t amdgpu_debugfs_wave_read(struct file *f, char __user *buf,
amdgpu_gfx_select_se_sh(adev, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF);
mutex_unlock(&adev->grbm_idx_mutex);
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
if (!x) {
amdgpu_virt_disable_access_debugfs(adev);
@ -971,7 +976,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
if (!data)
return -ENOMEM;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0)
goto err;
@ -994,8 +999,8 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
amdgpu_gfx_select_se_sh(adev, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF);
mutex_unlock(&adev->grbm_idx_mutex);
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
while (size) {
uint32_t value;
@ -1017,7 +1022,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
return result;
err:
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
kfree(data);
return r;
}
@ -1042,9 +1047,9 @@ static ssize_t amdgpu_debugfs_gfxoff_write(struct file *f, const char __user *bu
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -1053,8 +1058,8 @@ static ssize_t amdgpu_debugfs_gfxoff_write(struct file *f, const char __user *bu
r = get_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -1066,8 +1071,8 @@ static ssize_t amdgpu_debugfs_gfxoff_write(struct file *f, const char __user *bu
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return result;
}
@ -1091,7 +1096,7 @@ static ssize_t amdgpu_debugfs_gfxoff_read(struct file *f, char __user *buf,
if (size & 0x3 || *pos & 0x3)
return -EINVAL;
r = pm_runtime_get_sync(adev->ddev->dev);
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0)
return r;
@ -1100,15 +1105,15 @@ static ssize_t amdgpu_debugfs_gfxoff_read(struct file *f, char __user *buf,
r = amdgpu_get_gfx_off_status(adev, &value);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
r = put_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -1118,8 +1123,8 @@ static ssize_t amdgpu_debugfs_gfxoff_read(struct file *f, char __user *buf,
size -= 4;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return result;
}
@ -1211,7 +1216,7 @@ static const char *debugfs_regs_names[] = {
*/
int amdgpu_debugfs_regs_init(struct amdgpu_device *adev)
{
struct drm_minor *minor = adev->ddev->primary;
struct drm_minor *minor = adev_to_drm(adev)->primary;
struct dentry *ent, *root = minor->debugfs_root;
unsigned int i;
@ -1231,17 +1236,19 @@ static int amdgpu_debugfs_test_ib(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int r = 0, i;
r = pm_runtime_get_sync(dev->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
/* Avoid accidentally unparking the sched thread during GPU reset */
mutex_lock(&adev->lock_reset);
r = down_read_killable(&adev->reset_sem);
if (r)
return r;
/* hold on the scheduler */
for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
@ -1268,7 +1275,7 @@ static int amdgpu_debugfs_test_ib(struct seq_file *m, void *data)
kthread_unpark(ring->sched.thread);
}
mutex_unlock(&adev->lock_reset);
up_read(&adev->reset_sem);
pm_runtime_mark_last_busy(dev->dev);
pm_runtime_put_autosuspend(dev->dev);
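
These debugfs entry points also switch from the adev->lock_reset mutex to a read lock on the new adev->reset_sem, taken with down_read_killable() so a reader stuck behind a GPU reset can be killed; the -EINTR result must then be propagated rather than ignored. A minimal sketch of the killable read-lock pattern (the work in the middle is a placeholder):

#include <linux/rwsem.h>

static int do_work_outside_reset(struct rw_semaphore *reset_sem)
{
	int r;

	r = down_read_killable(reset_sem);	/* -EINTR on a fatal signal */
	if (r)
		return r;			/* caller must see the error */

	/* ... work that must not run concurrently with a GPU reset,
	 * i.e. with the writer holding the semaphore ... */

	up_read(reset_sem);
	return 0;
}
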
@ -1280,7 +1287,7 @@ static int amdgpu_debugfs_get_vbios_dump(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
seq_write(m, adev->bios, adev->bios_size);
return 0;
@ -1290,12 +1297,12 @@ static int amdgpu_debugfs_evict_vram(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *)m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int r;
r = pm_runtime_get_sync(dev->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -1311,12 +1318,12 @@ static int amdgpu_debugfs_evict_gtt(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *)m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int r;
r = pm_runtime_get_sync(dev->dev);
if (r < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return r;
}
@ -1458,7 +1465,9 @@ static int amdgpu_debugfs_ib_preempt(void *data, u64 val)
return -ENOMEM;
/* Avoid accidentally unparking the sched thread during GPU reset */
mutex_lock(&adev->lock_reset);
r = down_read_killable(&adev->reset_sem);
if (r)
goto pro_end;
/* stop the scheduler */
kthread_park(ring->sched.thread);
@ -1499,13 +1508,14 @@ static int amdgpu_debugfs_ib_preempt(void *data, u64 val)
/* restart the scheduler */
kthread_unpark(ring->sched.thread);
mutex_unlock(&adev->lock_reset);
up_read(&adev->reset_sem);
ttm_bo_unlock_delayed_workqueue(&adev->mman.bdev, resched);
pro_end:
kfree(fences);
return 0;
return r;
}
static int amdgpu_debugfs_sclk_set(void *data, u64 val)
@ -1517,9 +1527,9 @@ static int amdgpu_debugfs_sclk_set(void *data, u64 val)
if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
return -EINVAL;
ret = pm_runtime_get_sync(adev->ddev->dev);
ret = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (ret < 0) {
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
return ret;
}
@ -1532,8 +1542,8 @@ static int amdgpu_debugfs_sclk_set(void *data, u64 val)
return 0;
}
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
if (ret)
return -EINVAL;
@ -1553,7 +1563,7 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
adev->debugfs_preempt =
debugfs_create_file("amdgpu_preempt_ib", 0600,
adev->ddev->primary->debugfs_root, adev,
adev_to_drm(adev)->primary->debugfs_root, adev,
&fops_ib_preempt);
if (!(adev->debugfs_preempt)) {
DRM_ERROR("unable to create amdgpu_preempt_ib debugsfs file\n");
@ -1562,7 +1572,7 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
adev->smu.debugfs_sclk =
debugfs_create_file("amdgpu_force_sclk", 0200,
adev->ddev->primary->debugfs_root, adev,
adev_to_drm(adev)->primary->debugfs_root, adev,
&fops_sclk_set);
if (!(adev->smu.debugfs_sclk)) {
DRM_ERROR("unable to create amdgpu_set_sclk debugsfs file\n");
@ -1623,6 +1633,8 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
amdgpu_debugfs_autodump_init(adev);
amdgpu_rap_debugfs_init(adev);
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
}


@ -44,9 +44,9 @@ struct amdgpu_df_funcs {
void (*enable_ecc_force_par_wr_rmw)(struct amdgpu_device *adev,
bool enable);
int (*pmc_start)(struct amdgpu_device *adev, uint64_t config,
int is_enable);
int is_add);
int (*pmc_stop)(struct amdgpu_device *adev, uint64_t config,
int is_disable);
int is_remove);
void (*pmc_get_count)(struct amdgpu_device *adev, uint64_t config,
uint64_t *count);
uint64_t (*get_fica)(struct amdgpu_device *adev, uint32_t ficaa_val);


@ -136,7 +136,7 @@ static int amdgpu_discovery_read_binary(struct amdgpu_device *adev, uint8_t *bin
uint64_t pos = vram_size - DISCOVERY_TMR_OFFSET;
amdgpu_device_vram_access(adev, pos, (uint32_t *)binary,
adev->discovery_tmr_size, false);
adev->mman.discovery_tmr_size, false);
return 0;
}
@ -168,18 +168,18 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
uint16_t checksum;
int r;
adev->discovery_tmr_size = DISCOVERY_TMR_SIZE;
adev->discovery_bin = kzalloc(adev->discovery_tmr_size, GFP_KERNEL);
if (!adev->discovery_bin)
adev->mman.discovery_tmr_size = DISCOVERY_TMR_SIZE;
adev->mman.discovery_bin = kzalloc(adev->mman.discovery_tmr_size, GFP_KERNEL);
if (!adev->mman.discovery_bin)
return -ENOMEM;
r = amdgpu_discovery_read_binary(adev, adev->discovery_bin);
r = amdgpu_discovery_read_binary(adev, adev->mman.discovery_bin);
if (r) {
DRM_ERROR("failed to read ip discovery binary\n");
goto out;
}
bhdr = (struct binary_header *)adev->discovery_bin;
bhdr = (struct binary_header *)adev->mman.discovery_bin;
if (le32_to_cpu(bhdr->binary_signature) != BINARY_SIGNATURE) {
DRM_ERROR("invalid ip discovery binary signature\n");
@ -192,7 +192,7 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
size = bhdr->binary_size - offset;
checksum = bhdr->binary_checksum;
if (!amdgpu_discovery_verify_checksum(adev->discovery_bin + offset,
if (!amdgpu_discovery_verify_checksum(adev->mman.discovery_bin + offset,
size, checksum)) {
DRM_ERROR("invalid ip discovery binary checksum\n");
r = -EINVAL;
@ -202,7 +202,7 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
info = &bhdr->table_list[IP_DISCOVERY];
offset = le16_to_cpu(info->offset);
checksum = le16_to_cpu(info->checksum);
ihdr = (struct ip_discovery_header *)(adev->discovery_bin + offset);
ihdr = (struct ip_discovery_header *)(adev->mman.discovery_bin + offset);
if (le32_to_cpu(ihdr->signature) != DISCOVERY_TABLE_SIGNATURE) {
DRM_ERROR("invalid ip discovery data table signature\n");
@ -210,7 +210,7 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
goto out;
}
if (!amdgpu_discovery_verify_checksum(adev->discovery_bin + offset,
if (!amdgpu_discovery_verify_checksum(adev->mman.discovery_bin + offset,
ihdr->size, checksum)) {
DRM_ERROR("invalid ip discovery data table checksum\n");
r = -EINVAL;
@ -220,9 +220,9 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
info = &bhdr->table_list[GC];
offset = le16_to_cpu(info->offset);
checksum = le16_to_cpu(info->checksum);
ghdr = (struct gpu_info_header *)(adev->discovery_bin + offset);
ghdr = (struct gpu_info_header *)(adev->mman.discovery_bin + offset);
if (!amdgpu_discovery_verify_checksum(adev->discovery_bin + offset,
if (!amdgpu_discovery_verify_checksum(adev->mman.discovery_bin + offset,
ghdr->size, checksum)) {
DRM_ERROR("invalid gc data table checksum\n");
r = -EINVAL;
@ -232,16 +232,16 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
return 0;
out:
kfree(adev->discovery_bin);
adev->discovery_bin = NULL;
kfree(adev->mman.discovery_bin);
adev->mman.discovery_bin = NULL;
return r;
}
void amdgpu_discovery_fini(struct amdgpu_device *adev)
{
kfree(adev->discovery_bin);
adev->discovery_bin = NULL;
kfree(adev->mman.discovery_bin);
adev->mman.discovery_bin = NULL;
}
int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
@ -265,8 +265,8 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
return r;
}
bhdr = (struct binary_header *)adev->discovery_bin;
ihdr = (struct ip_discovery_header *)(adev->discovery_bin +
bhdr = (struct binary_header *)adev->mman.discovery_bin;
ihdr = (struct ip_discovery_header *)(adev->mman.discovery_bin +
le16_to_cpu(bhdr->table_list[IP_DISCOVERY].offset));
num_dies = le16_to_cpu(ihdr->num_dies);
@ -274,7 +274,7 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
for (i = 0; i < num_dies; i++) {
die_offset = le16_to_cpu(ihdr->die_info[i].die_offset);
dhdr = (struct die_header *)(adev->discovery_bin + die_offset);
dhdr = (struct die_header *)(adev->mman.discovery_bin + die_offset);
num_ips = le16_to_cpu(dhdr->num_ips);
ip_offset = die_offset + sizeof(*dhdr);
@ -288,7 +288,7 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
le16_to_cpu(dhdr->die_id), num_ips);
for (j = 0; j < num_ips; j++) {
ip = (struct ip *)(adev->discovery_bin + ip_offset);
ip = (struct ip *)(adev->mman.discovery_bin + ip_offset);
num_base_address = ip->num_base_address;
DRM_DEBUG("%s(%d) #%d v%d.%d.%d:\n",
@ -337,24 +337,24 @@ int amdgpu_discovery_get_ip_version(struct amdgpu_device *adev, int hw_id,
uint16_t num_ips;
int i, j;
if (!adev->discovery_bin) {
if (!adev->mman.discovery_bin) {
DRM_ERROR("ip discovery uninitialized\n");
return -EINVAL;
}
bhdr = (struct binary_header *)adev->discovery_bin;
ihdr = (struct ip_discovery_header *)(adev->discovery_bin +
bhdr = (struct binary_header *)adev->mman.discovery_bin;
ihdr = (struct ip_discovery_header *)(adev->mman.discovery_bin +
le16_to_cpu(bhdr->table_list[IP_DISCOVERY].offset));
num_dies = le16_to_cpu(ihdr->num_dies);
for (i = 0; i < num_dies; i++) {
die_offset = le16_to_cpu(ihdr->die_info[i].die_offset);
dhdr = (struct die_header *)(adev->discovery_bin + die_offset);
dhdr = (struct die_header *)(adev->mman.discovery_bin + die_offset);
num_ips = le16_to_cpu(dhdr->num_ips);
ip_offset = die_offset + sizeof(*dhdr);
for (j = 0; j < num_ips; j++) {
ip = (struct ip *)(adev->discovery_bin + ip_offset);
ip = (struct ip *)(adev->mman.discovery_bin + ip_offset);
if (le16_to_cpu(ip->hw_id) == hw_id) {
if (major)
@ -377,13 +377,13 @@ int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
struct binary_header *bhdr;
struct gc_info_v1_0 *gc_info;
if (!adev->discovery_bin) {
if (!adev->mman.discovery_bin) {
DRM_ERROR("ip discovery uninitialized\n");
return -EINVAL;
}
bhdr = (struct binary_header *)adev->discovery_bin;
gc_info = (struct gc_info_v1_0 *)(adev->discovery_bin +
bhdr = (struct binary_header *)adev->mman.discovery_bin;
gc_info = (struct gc_info_v1_0 *)(adev->mman.discovery_bin +
le16_to_cpu(bhdr->table_list[GC].offset));
adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->gc_num_se);


@ -93,7 +93,7 @@ static void amdgpu_display_flip_work_func(struct work_struct *__work)
* targeted by the flip
*/
if (amdgpu_crtc->enabled &&
(amdgpu_display_get_crtc_scanoutpos(adev->ddev, work->crtc_id, 0,
(amdgpu_display_get_crtc_scanoutpos(adev_to_drm(adev), work->crtc_id, 0,
&vpos, &hpos, NULL, NULL,
&crtc->hwmode)
& (DRM_SCANOUTPOS_VALID | DRM_SCANOUTPOS_IN_VBLANK)) ==
@ -152,7 +152,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_device *dev = crtc->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct drm_gem_object *obj;
struct amdgpu_flip_work *work;
@ -292,7 +292,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
pm_runtime_mark_last_busy(dev->dev);
adev = dev->dev_private;
adev = drm_to_adev(dev);
/* if we have active crtcs and we don't have a power ref,
take the current one */
if (active && !adev->have_disp_power_ref) {
@ -619,51 +619,51 @@ int amdgpu_display_modeset_create_props(struct amdgpu_device *adev)
int sz;
adev->mode_info.coherent_mode_property =
drm_property_create_range(adev->ddev, 0 , "coherent", 0, 1);
drm_property_create_range(adev_to_drm(adev), 0, "coherent", 0, 1);
if (!adev->mode_info.coherent_mode_property)
return -ENOMEM;
adev->mode_info.load_detect_property =
drm_property_create_range(adev->ddev, 0, "load detection", 0, 1);
drm_property_create_range(adev_to_drm(adev), 0, "load detection", 0, 1);
if (!adev->mode_info.load_detect_property)
return -ENOMEM;
drm_mode_create_scaling_mode_property(adev->ddev);
drm_mode_create_scaling_mode_property(adev_to_drm(adev));
sz = ARRAY_SIZE(amdgpu_underscan_enum_list);
adev->mode_info.underscan_property =
drm_property_create_enum(adev->ddev, 0,
"underscan",
amdgpu_underscan_enum_list, sz);
drm_property_create_enum(adev_to_drm(adev), 0,
"underscan",
amdgpu_underscan_enum_list, sz);
adev->mode_info.underscan_hborder_property =
drm_property_create_range(adev->ddev, 0,
"underscan hborder", 0, 128);
drm_property_create_range(adev_to_drm(adev), 0,
"underscan hborder", 0, 128);
if (!adev->mode_info.underscan_hborder_property)
return -ENOMEM;
adev->mode_info.underscan_vborder_property =
drm_property_create_range(adev->ddev, 0,
"underscan vborder", 0, 128);
drm_property_create_range(adev_to_drm(adev), 0,
"underscan vborder", 0, 128);
if (!adev->mode_info.underscan_vborder_property)
return -ENOMEM;
sz = ARRAY_SIZE(amdgpu_audio_enum_list);
adev->mode_info.audio_property =
drm_property_create_enum(adev->ddev, 0,
drm_property_create_enum(adev_to_drm(adev), 0,
"audio",
amdgpu_audio_enum_list, sz);
sz = ARRAY_SIZE(amdgpu_dither_enum_list);
adev->mode_info.dither_property =
drm_property_create_enum(adev->ddev, 0,
drm_property_create_enum(adev_to_drm(adev), 0,
"dither",
amdgpu_dither_enum_list, sz);
if (amdgpu_device_has_dc_support(adev)) {
adev->mode_info.abm_level_property =
drm_property_create_range(adev->ddev, 0,
"abm level", 0, 4);
drm_property_create_range(adev_to_drm(adev), 0,
"abm level", 0, 4);
if (!adev->mode_info.abm_level_property)
return -ENOMEM;
}
@ -813,7 +813,7 @@ int amdgpu_display_get_crtc_scanoutpos(struct drm_device *dev,
int vbl_start, vbl_end, vtotal, ret = 0;
bool in_vbl = true;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */


@ -35,6 +35,7 @@
#include "amdgpu_display.h"
#include "amdgpu_gem.h"
#include "amdgpu_dma_buf.h"
#include "amdgpu_xgmi.h"
#include <drm/amdgpu_drm.h>
#include <linux/dma-buf.h>
#include <linux/dma-fence-array.h>
@ -302,7 +303,8 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
switch (bo->tbo.mem.mem_type) {
case TTM_PL_TT:
sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
sgt = drm_prime_pages_to_sg(obj->dev,
bo->tbo.ttm->pages,
bo->tbo.num_pages);
if (IS_ERR(sgt))
return sgt;
@ -454,7 +456,7 @@ static struct drm_gem_object *
amdgpu_dma_buf_create_obj(struct drm_device *dev, struct dma_buf *dma_buf)
{
struct dma_resv *resv = dma_buf->resv;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_bo *bo;
struct amdgpu_bo_param bp;
int ret;
@ -595,3 +597,36 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
obj->import_attach = attach;
return obj;
}
/**
* amdgpu_dmabuf_is_xgmi_accessible - Check if xgmi available for P2P transfer
*
* @adev: amdgpu_device pointer of the importer
* @bo: amdgpu buffer object
*
* Returns:
* True if dmabuf accessible over xgmi, false otherwise.
*/
bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
struct amdgpu_bo *bo)
{
struct drm_gem_object *obj = &bo->tbo.base;
struct drm_gem_object *gobj;
if (obj->import_attach) {
struct dma_buf *dma_buf = obj->import_attach->dmabuf;
if (dma_buf->ops != &amdgpu_dmabuf_ops)
/* No XGMI with non AMD GPUs */
return false;
gobj = dma_buf->priv;
bo = gem_to_amdgpu_bo(gobj);
}
if (amdgpu_xgmi_same_hive(adev, amdgpu_ttm_adev(bo->tbo.bdev)) &&
(bo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM))
return true;
return false;
}


@ -29,6 +29,8 @@ struct dma_buf *amdgpu_gem_prime_export(struct drm_gem_object *gobj,
int flags);
struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
struct amdgpu_bo *bo);
void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,


@ -26,12 +26,12 @@
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>
#include <drm/drm_vblank.h>
#include <drm/drm_managed.h>
#include "amdgpu_drv.h"
#include <drm/drm_pciids.h>
#include <linux/console.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/vga_switcheroo.h>
#include <drm/drm_probe_helper.h>
@ -88,9 +88,10 @@
* - 3.37.0 - L2 is invalidated before SDMA IBs, needed for correctness
* - 3.38.0 - Add AMDGPU_IB_FLAG_EMIT_MEM_SYNC
* - 3.39.0 - DMABUF implicit sync does a full pipeline sync
* - 3.40.0 - Add AMDGPU_IDS_FLAGS_TMZ
*/
#define KMS_DRIVER_MAJOR 3
#define KMS_DRIVER_MINOR 39
#define KMS_DRIVER_MINOR 40
#define KMS_DRIVER_PATCHLEVEL 0
int amdgpu_vram_limit = 0;
@ -146,16 +147,18 @@ int amdgpu_async_gfx_ring = 1;
int amdgpu_mcbp = 0;
int amdgpu_discovery = -1;
int amdgpu_mes = 0;
int amdgpu_noretry;
int amdgpu_noretry = -1;
int amdgpu_force_asic_type = -1;
int amdgpu_tmz = 0;
int amdgpu_reset_method = -1; /* auto */
int amdgpu_num_kcq = -1;
struct amdgpu_mgpu_info mgpu_info = {
.mutex = __MUTEX_INITIALIZER(mgpu_info.mutex),
};
int amdgpu_ras_enable = -1;
uint amdgpu_ras_mask = 0xffffffff;
int amdgpu_bad_page_threshold = -1;
/**
* DOC: vramlimit (int)
@ -393,12 +396,12 @@ MODULE_PARM_DESC(sched_hw_submission, "the max number of HW submissions (default
module_param_named(sched_hw_submission, amdgpu_sched_hw_submission, int, 0444);
/**
* DOC: ppfeaturemask (uint)
* DOC: ppfeaturemask (hexint)
* Override which power features are enabled. See enum PP_FEATURE_MASK in drivers/gpu/drm/amd/include/amd_shared.h.
* The default is the current set of stable power features.
*/
MODULE_PARM_DESC(ppfeaturemask, "all power features enabled (default)");
module_param_named(ppfeaturemask, amdgpu_pp_feature_mask, uint, 0444);
module_param_named(ppfeaturemask, amdgpu_pp_feature_mask, hexint, 0444);
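
ppfeaturemask moves from the plain uint parameter ops to the new hexint ops, so the bitmask is parsed from and displayed back as hex, which suits a feature mask better than decimal. A minimal sketch of declaring such a parameter in a standalone module; the names my_mask/mymask are made up for the example:

#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned int my_mask = 0xffffffff;	/* hypothetical feature bitmask */

/* "hexint" accepts 0x-prefixed input and shows the value as hex in sysfs */
module_param_named(mymask, my_mask, hexint, 0444);
MODULE_PARM_DESC(mymask, "example feature mask (hex, default 0xffffffff)");
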
/**
* DOC: forcelongtraining (uint)
@ -593,8 +596,13 @@ MODULE_PARM_DESC(mes,
"Enable Micro Engine Scheduler (0 = disabled (default), 1 = enabled)");
module_param_named(mes, amdgpu_mes, int, 0444);
/**
* DOC: noretry (int)
* Disable retry faults in the GPU memory controller.
* (0 = retry enabled, 1 = retry disabled, -1 = auto (default))
*/
MODULE_PARM_DESC(noretry,
"Disable retry faults (0 = retry enabled (default), 1 = retry disabled)");
"Disable retry faults (0 = retry enabled, 1 = retry disabled, -1 auto (default))");
module_param_named(noretry, amdgpu_noretry, int, 0644);
/**
@ -676,11 +684,14 @@ MODULE_PARM_DESC(debug_largebar,
* Ignore CRAT table during KFD initialization. By default, KFD uses the ACPI CRAT
* table to get information about AMD APUs. This option can serve as a workaround on
* systems with a broken CRAT table.
*
* Default is auto (the driver decides whether to use CRAT based on the ASIC
* type, iommu_v2, and the CRAT table itself).
*/
int ignore_crat;
module_param(ignore_crat, int, 0444);
MODULE_PARM_DESC(ignore_crat,
"Ignore CRAT table during KFD initialization (0 = use CRAT (default), 1 = ignore CRAT)");
"Ignore CRAT table during KFD initialization (0 = auto (default), 1 = ignore CRAT)");
/**
* DOC: halt_if_hws_hang (int)
@ -715,6 +726,15 @@ MODULE_PARM_DESC(queue_preemption_timeout_ms, "queue preemption timeout in ms (1
bool debug_evictions;
module_param(debug_evictions, bool, 0644);
MODULE_PARM_DESC(debug_evictions, "enable eviction debug messages (false = default)");
/**
* DOC: no_system_mem_limit (bool)
* Disable the system memory limit, to support shared memory between multiple processes.
*/
bool no_system_mem_limit;
module_param(no_system_mem_limit, bool, 0644);
MODULE_PARM_DESC(no_system_mem_limit, "disable system memory limit (false = default)");
#endif
/**
@ -765,6 +785,19 @@ module_param_named(tmz, amdgpu_tmz, int, 0444);
MODULE_PARM_DESC(reset_method, "GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco)");
module_param_named(reset_method, amdgpu_reset_method, int, 0444);
/**
* DOC: bad_page_threshold (int)
* Bad page threshold specifies the maximum number of faulty pages that RAS ECC
* may detect before the GPU is considered to be in a bad state; once the total
* exceeds the threshold, the GPU is left for the user's further inspection.
*/
MODULE_PARM_DESC(bad_page_threshold, "Bad page threshold (-1 = auto (default), 0 = disable bad page retirement)");
module_param_named(bad_page_threshold, amdgpu_bad_page_threshold, int, 0444);
MODULE_PARM_DESC(num_kcq, "number of kernel compute queues to set up (clamped to 8 if set to a value greater than 8 or less than 0; only affects gfx8+)");
module_param_named(num_kcq, amdgpu_num_kcq, int, 0444);
static const struct pci_device_id pciidlist[] = {
#ifdef CONFIG_DRM_AMDGPU_SI
{0x1002, 0x6780, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
@ -1065,7 +1098,7 @@ static struct drm_driver kms_driver;
static int amdgpu_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
struct drm_device *dev;
struct drm_device *ddev;
struct amdgpu_device *adev;
unsigned long flags = ent->driver_data;
int ret, retry = 0;
@ -1081,6 +1114,16 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
return -ENODEV;
}
/* Due to hardware bugs, S/G Display on raven requires a 1:1 IOMMU mapping,
* however, SME requires an indirect IOMMU mapping because the encryption
* bit is beyond the DMA mask of the chip.
*/
if (mem_encrypt_active() && ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) {
dev_info(&pdev->dev,
"SME is not compatible with RAVEN\n");
return -ENOTSUPP;
}
#ifdef CONFIG_DRM_AMDGPU_SI
if (!amdgpu_si_support) {
switch (flags & AMD_ASIC_MASK) {
@ -1121,36 +1164,39 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
if (ret)
return ret;
dev = drm_dev_alloc(&kms_driver, &pdev->dev);
if (IS_ERR(dev))
return PTR_ERR(dev);
adev = devm_drm_dev_alloc(&pdev->dev, &kms_driver, typeof(*adev), ddev);
if (IS_ERR(adev))
return PTR_ERR(adev);
adev->dev = &pdev->dev;
adev->pdev = pdev;
ddev = adev_to_drm(adev);
if (!supports_atomic)
dev->driver_features &= ~DRIVER_ATOMIC;
ddev->driver_features &= ~DRIVER_ATOMIC;
ret = pci_enable_device(pdev);
if (ret)
goto err_free;
return ret;
dev->pdev = pdev;
ddev->pdev = pdev;
pci_set_drvdata(pdev, ddev);
pci_set_drvdata(pdev, dev);
ret = amdgpu_driver_load_kms(dev, ent->driver_data);
ret = amdgpu_driver_load_kms(adev, ent->driver_data);
if (ret)
goto err_pci;
retry_init:
ret = drm_dev_register(dev, ent->driver_data);
ret = drm_dev_register(ddev, ent->driver_data);
if (ret == -EAGAIN && ++retry <= 3) {
DRM_INFO("retry init %d\n", retry);
/* Don't request EX mode too frequently, which would look like an attack */
msleep(5000);
goto retry_init;
} else if (ret)
} else if (ret) {
goto err_pci;
}
adev = dev->dev_private;
ret = amdgpu_debugfs_init(adev);
if (ret)
DRM_ERROR("Creating debugfs files failed (%d).\n", ret);
@ -1159,8 +1205,6 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
err_pci:
pci_disable_device(pdev);
err_free:
drm_dev_put(dev);
return ret;
}
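
The probe path above is the devm_drm_dev_alloc() conversion: the drm_device is embedded in the driver structure and its allocation is managed against the PCI device's lifetime, which is why the err_free label and the drm_dev_put() calls disappear. A minimal sketch of the pattern with a made-up my_gpu/my_driver (not the amdgpu code itself):

#include <drm/drm_drv.h>
#include <linux/err.h>
#include <linux/pci.h>

struct my_gpu {
	struct drm_device ddev;		/* must be embedded by value */
	/* ... driver-private state ... */
};

static struct drm_driver my_driver;	/* assumed to be filled in elsewhere */

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	struct my_gpu *gpu;
	int ret;

	/* allocates my_gpu, initializes gpu->ddev, and ties both to
	 * pdev->dev's lifetime; no drm_dev_put() on the error paths */
	gpu = devm_drm_dev_alloc(&pdev->dev, &my_driver,
				 struct my_gpu, ddev);
	if (IS_ERR(gpu))
		return PTR_ERR(gpu);

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	return drm_dev_register(&gpu->ddev, 0);
}
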
@ -1177,14 +1221,13 @@ amdgpu_pci_remove(struct pci_dev *pdev)
amdgpu_driver_unload_kms(dev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
drm_dev_put(dev);
}
static void
amdgpu_pci_shutdown(struct pci_dev *pdev)
{
struct drm_device *dev = pci_get_drvdata(pdev);
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
if (amdgpu_ras_intr_triggered())
return;
@ -1217,7 +1260,7 @@ static int amdgpu_pmops_resume(struct device *dev)
static int amdgpu_pmops_freeze(struct device *dev)
{
struct drm_device *drm_dev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(drm_dev);
int r;
adev->in_hibernate = true;
@ -1253,7 +1296,7 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct drm_device *drm_dev = pci_get_drvdata(pdev);
struct amdgpu_device *adev = drm_dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(drm_dev);
int ret, i;
if (!adev->runpm) {
@ -1287,7 +1330,7 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
if (amdgpu_is_atpx_hybrid()) {
pci_ignore_hotplug(pdev);
} else {
pci_save_state(pdev);
amdgpu_device_cache_pci_state(pdev);
pci_disable_device(pdev);
pci_ignore_hotplug(pdev);
pci_set_power_state(pdev, PCI_D3cold);
@ -1304,7 +1347,7 @@ static int amdgpu_pmops_runtime_resume(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct drm_device *drm_dev = pci_get_drvdata(pdev);
struct amdgpu_device *adev = drm_dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(drm_dev);
int ret;
if (!adev->runpm)
@ -1320,7 +1363,7 @@ static int amdgpu_pmops_runtime_resume(struct device *dev)
pci_set_master(pdev);
} else {
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
amdgpu_device_load_pci_state(pdev);
ret = pci_enable_device(pdev);
if (ret)
return ret;
@ -1340,7 +1383,7 @@ static int amdgpu_pmops_runtime_resume(struct device *dev)
static int amdgpu_pmops_runtime_idle(struct device *dev)
{
struct drm_device *drm_dev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(drm_dev);
/* we don't want the main rpm_idle to call suspend - we want to autosuspend */
int ret = 1;
@ -1499,6 +1542,13 @@ static struct drm_driver kms_driver = {
.patchlevel = KMS_DRIVER_PATCHLEVEL,
};
static struct pci_error_handlers amdgpu_pci_err_handler = {
.error_detected = amdgpu_pci_error_detected,
.mmio_enabled = amdgpu_pci_mmio_enabled,
.slot_reset = amdgpu_pci_slot_reset,
.resume = amdgpu_pci_resume,
};
static struct pci_driver amdgpu_kms_pci_driver = {
.name = DRIVER_NAME,
.id_table = pciidlist,
@ -1506,10 +1556,9 @@ static struct pci_driver amdgpu_kms_pci_driver = {
.remove = amdgpu_pci_remove,
.shutdown = amdgpu_pci_shutdown,
.driver.pm = &amdgpu_pm_ops,
.err_handler = &amdgpu_pci_err_handler,
};
static int __init amdgpu_init(void)
{
int r;


@ -35,7 +35,7 @@
void
amdgpu_link_encoder_connector(struct drm_device *dev)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_connector *connector;
struct drm_connector_list_iter iter;
struct amdgpu_connector *amdgpu_connector;


@ -135,7 +135,7 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
AMDGPU_GEM_CREATE_VRAM_CLEARED;
info = drm_get_format_info(adev->ddev, mode_cmd);
info = drm_get_format_info(adev_to_drm(adev), mode_cmd);
cpp = info->cpp[0];
/* need to align pitch with crtc limits */
@ -231,7 +231,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
goto out;
}
ret = amdgpu_display_framebuffer_init(adev->ddev, &rfbdev->rfb,
ret = amdgpu_display_framebuffer_init(adev_to_drm(adev), &rfbdev->rfb,
&mode_cmd, gobj);
if (ret) {
DRM_ERROR("failed to initialize framebuffer %d\n", ret);
@ -254,7 +254,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
drm_fb_helper_fill_info(info, &rfbdev->helper, sizes);
/* setup aperture base/size for vesafb takeover */
info->apertures->ranges[0].base = adev->ddev->mode_config.fb_base;
info->apertures->ranges[0].base = adev_to_drm(adev)->mode_config.fb_base;
info->apertures->ranges[0].size = adev->gmc.aper_size;
/* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */
@ -270,7 +270,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
DRM_INFO("fb depth is %d\n", fb->format->depth);
DRM_INFO(" pitch is %d\n", fb->pitches[0]);
vga_switcheroo_client_fb_set(adev->ddev->pdev, info);
vga_switcheroo_client_fb_set(adev_to_drm(adev)->pdev, info);
return 0;
out:
@ -318,7 +318,7 @@ int amdgpu_fbdev_init(struct amdgpu_device *adev)
return 0;
/* don't init fbdev if there are no connectors */
if (list_empty(&adev->ddev->mode_config.connector_list))
if (list_empty(&adev_to_drm(adev)->mode_config.connector_list))
return 0;
/* select 8 bpp console on low vram cards */
@ -332,10 +332,10 @@ int amdgpu_fbdev_init(struct amdgpu_device *adev)
rfbdev->adev = adev;
adev->mode_info.rfbdev = rfbdev;
drm_fb_helper_prepare(adev->ddev, &rfbdev->helper,
&amdgpu_fb_helper_funcs);
drm_fb_helper_prepare(adev_to_drm(adev), &rfbdev->helper,
&amdgpu_fb_helper_funcs);
ret = drm_fb_helper_init(adev->ddev, &rfbdev->helper);
ret = drm_fb_helper_init(adev_to_drm(adev), &rfbdev->helper);
if (ret) {
kfree(rfbdev);
return ret;
@ -343,7 +343,7 @@ int amdgpu_fbdev_init(struct amdgpu_device *adev)
/* disable all the possible outputs/crtcs before entering KMS mode */
if (!amdgpu_device_has_dc_support(adev))
drm_helper_disable_unused_functions(adev->ddev);
drm_helper_disable_unused_functions(adev_to_drm(adev));
drm_fb_helper_initial_config(&rfbdev->helper, bpp_sel);
return 0;
@ -354,7 +354,7 @@ void amdgpu_fbdev_fini(struct amdgpu_device *adev)
if (!adev->mode_info.rfbdev)
return;
amdgpu_fbdev_destroy(adev->ddev, adev->mode_info.rfbdev);
amdgpu_fbdev_destroy(adev_to_drm(adev), adev->mode_info.rfbdev);
kfree(adev->mode_info.rfbdev);
adev->mode_info.rfbdev = NULL;
}


@ -155,7 +155,7 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
seq);
amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
seq, flags | AMDGPU_FENCE_FLAG_INT);
pm_runtime_get_noresume(adev->ddev->dev);
pm_runtime_get_noresume(adev_to_drm(adev)->dev);
ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask];
if (unlikely(rcu_dereference_protected(*ptr, 1))) {
struct dma_fence *old;
@ -284,8 +284,8 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
BUG();
dma_fence_put(fence);
pm_runtime_mark_last_busy(adev->ddev->dev);
pm_runtime_put_autosuspend(adev->ddev->dev);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
} while (last_seq != seq);
return true;
@ -700,7 +700,7 @@ static int amdgpu_debugfs_fence_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *)m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int i;
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
@ -749,7 +749,7 @@ static int amdgpu_debugfs_gpu_recover(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int r;
r = pm_runtime_get_sync(dev->dev);


@ -34,18 +34,31 @@
static bool is_fru_eeprom_supported(struct amdgpu_device *adev)
{
/* TODO: Gaming SKUs don't have the FRU EEPROM.
* Use this hack to address hangs on modprobe on gaming SKUs
* until a proper solution can be implemented by only supporting
* the explicit chip IDs for VG20 Server cards
*
* TODO: Add list of supported Arcturus DIDs once confirmed
/* Only server cards have the FRU EEPROM
* TODO: See if we can figure this out dynamically instead of
* having to parse VBIOS versions.
*/
if ((adev->asic_type == CHIP_VEGA20 && adev->pdev->device == 0x66a0) ||
(adev->asic_type == CHIP_VEGA20 && adev->pdev->device == 0x66a1) ||
(adev->asic_type == CHIP_VEGA20 && adev->pdev->device == 0x66a4))
return true;
return false;
struct atom_context *atom_ctx = adev->mode_info.atom_context;
/* VBIOS is of the format ###-DXXXYY-##. For SKU identification,
* we can use just the "DXXX" portion. If there were more models, we
* could convert the 3 characters to a hex integer and use a switch
* for ease/speed/readability. For now, 2 string comparisons are
* reasonable and not too expensive
*/
switch (adev->asic_type) {
case CHIP_VEGA20:
/* D161 and D163 are the VG20 server SKUs */
if (strnstr(atom_ctx->vbios_version, "D161",
sizeof(atom_ctx->vbios_version)) ||
strnstr(atom_ctx->vbios_version, "D163",
sizeof(atom_ctx->vbios_version)))
return true;
else
return false;
default:
return false;
}
}
static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,


@ -26,4 +26,4 @@
int amdgpu_fru_get_product_info(struct amdgpu_device *adev);
#endif // __AMDGPU_PRODINFO_H__
#endif // __AMDGPU_FRU_EEPROM_H__


@ -93,7 +93,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
void amdgpu_gem_force_release(struct amdgpu_device *adev)
{
struct drm_device *ddev = adev->ddev;
struct drm_device *ddev = adev_to_drm(adev);
struct drm_file *file;
mutex_lock(&ddev->filelist_mutex);
@ -217,7 +217,7 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_fpriv *fpriv = filp->driver_priv;
struct amdgpu_vm *vm = &fpriv->vm;
union drm_amdgpu_gem_create *args = data;
@ -298,7 +298,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
struct ttm_operation_ctx ctx = { true, false };
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_amdgpu_gem_userptr *args = data;
struct drm_gem_object *gobj;
struct amdgpu_bo *bo;
@ -332,7 +332,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void *data,
bo = gem_to_amdgpu_bo(gobj);
bo->preferred_domains = AMDGPU_GEM_DOMAIN_GTT;
bo->allowed_domains = AMDGPU_GEM_DOMAIN_GTT;
r = amdgpu_ttm_tt_set_userptr(bo->tbo.ttm, args->addr, args->flags);
r = amdgpu_ttm_tt_set_userptr(&bo->tbo, args->addr, args->flags);
if (r)
goto release_object;
@ -587,7 +587,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
struct drm_amdgpu_gem_va *args = data;
struct drm_gem_object *gobj;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_fpriv *fpriv = filp->driver_priv;
struct amdgpu_bo *abo;
struct amdgpu_bo_va *bo_va;
@ -711,7 +711,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_amdgpu_gem_op *args = data;
struct drm_gem_object *gobj;
struct amdgpu_vm_bo_base *base;
@ -788,7 +788,7 @@ int amdgpu_mode_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_gem_object *gobj;
uint32_t handle;
u64 flags = AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |


@ -202,40 +202,29 @@ bool amdgpu_gfx_is_high_priority_compute_queue(struct amdgpu_device *adev,
void amdgpu_gfx_compute_queue_acquire(struct amdgpu_device *adev)
{
int i, queue, pipe, mec;
int i, queue, pipe;
bool multipipe_policy = amdgpu_gfx_is_multipipe_capable(adev);
int max_queues_per_mec = min(adev->gfx.mec.num_pipe_per_mec *
adev->gfx.mec.num_queue_per_pipe,
adev->gfx.num_compute_rings);
/* policy for amdgpu compute queue ownership */
for (i = 0; i < AMDGPU_MAX_COMPUTE_QUEUES; ++i) {
queue = i % adev->gfx.mec.num_queue_per_pipe;
pipe = (i / adev->gfx.mec.num_queue_per_pipe)
% adev->gfx.mec.num_pipe_per_mec;
mec = (i / adev->gfx.mec.num_queue_per_pipe)
/ adev->gfx.mec.num_pipe_per_mec;
if (multipipe_policy) {
/* policy: make queues evenly cross all pipes on MEC1 only */
for (i = 0; i < max_queues_per_mec; i++) {
pipe = i % adev->gfx.mec.num_pipe_per_mec;
queue = (i / adev->gfx.mec.num_pipe_per_mec) %
adev->gfx.mec.num_queue_per_pipe;
/* we've run out of HW */
if (mec >= adev->gfx.mec.num_mec)
break;
if (multipipe_policy) {
/* policy: amdgpu owns the first two queues of the first MEC */
if (mec == 0 && queue < 2)
set_bit(i, adev->gfx.mec.queue_bitmap);
} else {
/* policy: amdgpu owns all queues in the first pipe */
if (mec == 0 && pipe == 0)
set_bit(i, adev->gfx.mec.queue_bitmap);
set_bit(pipe * adev->gfx.mec.num_queue_per_pipe + queue,
adev->gfx.mec.queue_bitmap);
}
} else {
/* policy: amdgpu owns all queues in the given pipe */
for (i = 0; i < max_queues_per_mec; ++i)
set_bit(i, adev->gfx.mec.queue_bitmap);
}
/* update the number of active compute rings */
adev->gfx.num_compute_rings =
bitmap_weight(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_COMPUTE_QUEUES);
/* If you hit this case and edited the policy, you probably just
* need to increase AMDGPU_MAX_COMPUTE_RINGS */
if (WARN_ON(adev->gfx.num_compute_rings > AMDGPU_MAX_COMPUTE_RINGS))
adev->gfx.num_compute_rings = AMDGPU_MAX_COMPUTE_RINGS;
dev_dbg(adev->dev, "mec queue bitmap weight=%d\n", bitmap_weight(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_COMPUTE_QUEUES));
}
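
A short worked example of the new multipipe policy, assuming num_pipe_per_mec = 4, num_queue_per_pipe = 8 and num_compute_rings = 8 (so max_queues_per_mec = 8): i = 0..3 map to (pipe 0..3, queue 0), i.e. bits 0, 8, 16 and 24 of the MEC1 bitmap, and i = 4..7 map to (pipe 0..3, queue 1), i.e. bits 1, 9, 17 and 25. The eight rings are thus spread evenly across all four pipes, instead of amdgpu claiming only the first pipe (or only the first two queues of MEC 0) as the old policy did.
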
void amdgpu_gfx_graphics_queue_acquire(struct amdgpu_device *adev)
@ -571,8 +560,14 @@ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
if (enable && !adev->gfx.gfx_off_state && !adev->gfx.gfx_off_req_count) {
schedule_delayed_work(&adev->gfx.gfx_off_delay_work, GFX_OFF_DELAY_ENABLE);
} else if (!enable && adev->gfx.gfx_off_state) {
if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, false))
if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, false)) {
adev->gfx.gfx_off_state = false;
if (adev->gfx.funcs->init_spm_golden) {
dev_dbg(adev->dev, "GFXOFF is disabled, re-init SPM golden settings\n");
amdgpu_gfx_init_spm_golden(adev);
}
}
}
mutex_unlock(&adev->gfx.gfx_off_mutex);
@ -698,6 +693,9 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg)
struct amdgpu_kiq *kiq = &adev->gfx.kiq;
struct amdgpu_ring *ring = &kiq->ring;
if (adev->in_pci_err_recovery)
return 0;
BUG_ON(!ring->funcs->emit_rreg);
spin_lock_irqsave(&kiq->ring_lock, flags);
@ -724,7 +722,7 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg)
*
* also don't wait anymore for IRQ context
* */
if (r < 1 && (adev->in_gpu_reset || in_interrupt()))
if (r < 1 && (amdgpu_in_reset(adev) || in_interrupt()))
goto failed_kiq_read;
might_sleep();
@ -748,7 +746,7 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg)
failed_kiq_read:
if (reg_val_offs)
amdgpu_device_wb_free(adev, reg_val_offs);
pr_err("failed to read reg:%x\n", reg);
dev_err(adev->dev, "failed to read reg:%x\n", reg);
return ~0;
}
@ -762,6 +760,9 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v)
BUG_ON(!ring->funcs->emit_wreg);
if (adev->in_pci_err_recovery)
return;
spin_lock_irqsave(&kiq->ring_lock, flags);
amdgpu_ring_alloc(ring, 32);
amdgpu_ring_emit_wreg(ring, reg, v);
@ -782,7 +783,7 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v)
*
* also don't wait anymore for IRQ context
* */
if (r < 1 && (adev->in_gpu_reset || in_interrupt()))
if (r < 1 && (amdgpu_in_reset(adev) || in_interrupt()))
goto failed_kiq_write;
might_sleep();
@ -801,5 +802,5 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v)
amdgpu_ring_undo(ring);
spin_unlock_irqrestore(&kiq->ring_lock, flags);
failed_kiq_write:
pr_err("failed to write reg:%x\n", reg);
dev_err(adev->dev, "failed to write reg:%x\n", reg);
}


@ -216,6 +216,8 @@ struct amdgpu_gfx_funcs {
int (*ras_error_inject)(struct amdgpu_device *adev, void *inject_if);
int (*query_ras_error_count) (struct amdgpu_device *adev, void *ras_error_status);
void (*reset_ras_error_count) (struct amdgpu_device *adev);
void (*init_spm_golden)(struct amdgpu_device *adev);
void (*query_ras_error_status) (struct amdgpu_device *adev);
};
struct sq_work {
@ -324,6 +326,7 @@ struct amdgpu_gfx {
#define amdgpu_gfx_get_gpu_clock_counter(adev) (adev)->gfx.funcs->get_gpu_clock_counter((adev))
#define amdgpu_gfx_select_se_sh(adev, se, sh, instance) (adev)->gfx.funcs->select_se_sh((adev), (se), (sh), (instance))
#define amdgpu_gfx_select_me_pipe_q(adev, me, pipe, q, vmid) (adev)->gfx.funcs->select_me_pipe_q((adev), (me), (pipe), (q), (vmid))
#define amdgpu_gfx_init_spm_golden(adev) (adev)->gfx.funcs->init_spm_golden((adev))
/**
* amdgpu_gfx_create_bitmask - create a bitmask


@ -0,0 +1,43 @@
/*
* Copyright 2020 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef __AMDGPU_GFXHUB_H__
#define __AMDGPU_GFXHUB_H__
struct amdgpu_gfxhub_funcs {
u64 (*get_fb_location)(struct amdgpu_device *adev);
u64 (*get_mc_fb_offset)(struct amdgpu_device *adev);
void (*setup_vm_pt_regs)(struct amdgpu_device *adev, uint32_t vmid,
uint64_t page_table_base);
int (*gart_enable)(struct amdgpu_device *adev);
void (*gart_disable)(struct amdgpu_device *adev);
void (*set_fault_enable_default)(struct amdgpu_device *adev, bool value);
void (*init)(struct amdgpu_device *adev);
int (*get_xgmi_info)(struct amdgpu_device *adev);
};
struct amdgpu_gfxhub {
const struct amdgpu_gfxhub_funcs *funcs;
};
#endif


@ -27,6 +27,7 @@
#include <linux/io-64-nonatomic-lo-hi.h>
#include "amdgpu.h"
#include "amdgpu_gmc.h"
#include "amdgpu_ras.h"
#include "amdgpu_xgmi.h"
@ -411,3 +412,102 @@ void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
break;
}
}
/**
* amdgpu_noretry_set -- set per asic noretry defaults
* @adev: amdgpu_device pointer
*
* Set a per asic default for the no-retry parameter.
*
*/
void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
{
struct amdgpu_gmc *gmc = &adev->gmc;
switch (adev->asic_type) {
case CHIP_RAVEN:
/* Raven currently has issues with noretry
* regardless of what we decide for other
* asics, we should leave raven with
* noretry = 0 until we root cause the
* issues.
*/
if (amdgpu_noretry == -1)
gmc->noretry = 0;
else
gmc->noretry = amdgpu_noretry;
break;
default:
/* default this to 0 for now, but we may want
* to change this in the future for certain
* GPUs as it can increase performance in
* certain cases.
*/
if (amdgpu_noretry == -1)
gmc->noretry = 0;
else
gmc->noretry = amdgpu_noretry;
break;
}
}
void amdgpu_gmc_set_vm_fault_masks(struct amdgpu_device *adev, int hub_type,
bool enable)
{
struct amdgpu_vmhub *hub;
u32 tmp, reg, i;
hub = &adev->vmhub[hub_type];
for (i = 0; i < 16; i++) {
reg = hub->vm_context0_cntl + hub->ctx_distance * i;
tmp = RREG32(reg);
if (enable)
tmp |= hub->vm_cntx_cntl_vm_fault;
else
tmp &= ~hub->vm_cntx_cntl_vm_fault;
WREG32(reg, tmp);
}
}
void amdgpu_gmc_get_vbios_allocations(struct amdgpu_device *adev)
{
unsigned size;
/*
* TODO:
* Currently there is a bug where some memory client outside
* of the driver writes to the first 8M of VRAM on S3 resume.
* This overrides GART, which by default gets placed in the first 8M, and
* causes VM_FAULTS once GTT is accessed.
* Keep the stolen memory reservation while this is not solved.
*/
switch (adev->asic_type) {
case CHIP_VEGA10:
case CHIP_RAVEN:
case CHIP_RENOIR:
adev->mman.keep_stolen_vga_memory = true;
break;
default:
adev->mman.keep_stolen_vga_memory = false;
break;
}
if (!amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_DCE))
size = 0;
else
size = amdgpu_gmc_get_vbios_fb_size(adev);
/* set to 0 if the pre-OS buffer uses up most of vram */
if ((adev->gmc.real_vram_size - size) < (8 * 1024 * 1024))
size = 0;
if (size > AMDGPU_VBIOS_VGA_ALLOCATION) {
adev->mman.stolen_vga_size = AMDGPU_VBIOS_VGA_ALLOCATION;
adev->mman.stolen_extended_size = size - adev->mman.stolen_vga_size;
} else {
adev->mman.stolen_vga_size = size;
adev->mman.stolen_extended_size = 0;
}
}
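
To illustrate the split at the end (the 16 MiB cap below is an assumed value for the example, not necessarily the real AMDGPU_VBIOS_VGA_ALLOCATION): a 24 MiB pre-OS framebuffer yields stolen_vga_size = 16 MiB and stolen_extended_size = 8 MiB; an 8 MiB framebuffer yields stolen_vga_size = 8 MiB with no extended reservation; and if real_vram_size minus the pre-OS size would leave less than 8 MiB of VRAM, size is forced to 0 so nothing is reserved at all.
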


@ -74,6 +74,12 @@ struct amdgpu_gmc_fault {
/*
* VMHUB structures, functions & helpers
*/
struct amdgpu_vmhub_funcs {
void (*print_l2_protection_fault_status)(struct amdgpu_device *adev,
uint32_t status);
uint32_t (*get_invalidate_req)(unsigned int vmid, uint32_t flush_type);
};
struct amdgpu_vmhub {
uint32_t ctx0_ptb_addr_lo32;
uint32_t ctx0_ptb_addr_hi32;
@ -92,6 +98,10 @@ struct amdgpu_vmhub {
uint32_t ctx_addr_distance; /* include LO32/HI32 */
uint32_t eng_distance;
uint32_t eng_addr_distance; /* include LO32/HI32 */
uint32_t vm_cntx_cntl_vm_fault;
const struct amdgpu_vmhub_funcs *vmhub_funcs;
};
/*
@ -121,6 +131,8 @@ struct amdgpu_gmc_funcs {
void (*get_vm_pte)(struct amdgpu_device *adev,
struct amdgpu_bo_va_mapping *mapping,
uint64_t *flags);
/* get the amount of memory used by the vbios for pre-OS console */
unsigned int (*get_vbios_fb_size)(struct amdgpu_device *adev);
};
struct amdgpu_xgmi {
@ -203,7 +215,6 @@ struct amdgpu_gmc {
uint8_t vram_vendor;
uint32_t srbm_soft_reset;
bool prt_warning;
uint64_t stolen_size;
uint32_t sdpif_register;
/* apertures */
u64 shared_aperture_start;
@ -228,6 +239,7 @@ struct amdgpu_gmc {
struct amdgpu_xgmi xgmi;
struct amdgpu_irq_src ecc_irq;
int noretry;
};
#define amdgpu_gmc_flush_gpu_tlb(adev, vmid, vmhub, type) ((adev)->gmc.gmc_funcs->flush_gpu_tlb((adev), (vmid), (vmhub), (type)))
@ -239,6 +251,7 @@ struct amdgpu_gmc {
#define amdgpu_gmc_map_mtype(adev, flags) (adev)->gmc.gmc_funcs->map_mtype((adev),(flags))
#define amdgpu_gmc_get_vm_pde(adev, level, dst, flags) (adev)->gmc.gmc_funcs->get_vm_pde((adev), (level), (dst), (flags))
#define amdgpu_gmc_get_vm_pte(adev, mapping, flags) (adev)->gmc.gmc_funcs->get_vm_pte((adev), (mapping), (flags))
#define amdgpu_gmc_get_vbios_fb_size(adev) (adev)->gmc.gmc_funcs->get_vbios_fb_size((adev))
/**
* amdgpu_gmc_vram_full_visible - Check if full VRAM is visible through the BAR
@ -288,5 +301,12 @@ void amdgpu_gmc_ras_fini(struct amdgpu_device *adev);
int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev);
extern void amdgpu_gmc_tmz_set(struct amdgpu_device *adev);
extern void amdgpu_gmc_noretry_set(struct amdgpu_device *adev);
extern void
amdgpu_gmc_set_vm_fault_masks(struct amdgpu_device *adev, int hub_type,
bool enable);
void amdgpu_gmc_get_vbios_allocations(struct amdgpu_device *adev);
#endif


@ -24,11 +24,10 @@
#include "amdgpu.h"
struct amdgpu_gtt_mgr {
struct drm_mm mm;
spinlock_t lock;
atomic64_t available;
};
static inline struct amdgpu_gtt_mgr *to_gtt_mgr(struct ttm_resource_manager *man)
{
return container_of(man, struct amdgpu_gtt_mgr, manager);
}
struct amdgpu_gtt_node {
struct drm_mm_node node;
@ -47,10 +46,11 @@ static ssize_t amdgpu_mem_info_gtt_total_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
struct amdgpu_device *adev = drm_to_adev(ddev);
struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT);
return snprintf(buf, PAGE_SIZE, "%llu\n",
(adev->mman.bdev.man[TTM_PL_TT].size) * PAGE_SIZE);
man->size * PAGE_SIZE);
}
/**
@ -65,10 +65,11 @@ static ssize_t amdgpu_mem_info_gtt_used_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
struct amdgpu_device *adev = drm_to_adev(ddev);
struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT);
return snprintf(buf, PAGE_SIZE, "%llu\n",
amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]));
amdgpu_gtt_mgr_usage(man));
}
static DEVICE_ATTR(mem_info_gtt_total, S_IRUGO,
@ -76,6 +77,7 @@ static DEVICE_ATTR(mem_info_gtt_total, S_IRUGO,
static DEVICE_ATTR(mem_info_gtt_used, S_IRUGO,
amdgpu_mem_info_gtt_used_show, NULL);
static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func;
/**
* amdgpu_gtt_mgr_init - init GTT manager and DRM MM
*
@ -84,24 +86,23 @@ static DEVICE_ATTR(mem_info_gtt_used, S_IRUGO,
*
* Allocate and initialize the GTT manager.
*/
static int amdgpu_gtt_mgr_init(struct ttm_mem_type_manager *man,
unsigned long p_size)
int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
struct amdgpu_gtt_mgr *mgr;
struct amdgpu_gtt_mgr *mgr = &adev->mman.gtt_mgr;
struct ttm_resource_manager *man = &mgr->manager;
uint64_t start, size;
int ret;
mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
if (!mgr)
return -ENOMEM;
man->use_tt = true;
man->func = &amdgpu_gtt_mgr_func;
ttm_resource_manager_init(man, gtt_size >> PAGE_SHIFT);
start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
drm_mm_init(&mgr->mm, start, size);
spin_lock_init(&mgr->lock);
atomic64_set(&mgr->available, p_size);
man->priv = mgr;
atomic64_set(&mgr->available, gtt_size >> PAGE_SHIFT);
ret = device_create_file(adev->dev, &dev_attr_mem_info_gtt_total);
if (ret) {
@ -114,6 +115,8 @@ static int amdgpu_gtt_mgr_init(struct ttm_mem_type_manager *man,
return ret;
}
ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_TT, &mgr->manager);
ttm_resource_manager_set_used(man, true);
return 0;
}
@ -125,20 +128,27 @@ static int amdgpu_gtt_mgr_init(struct ttm_mem_type_manager *man,
* Destroy and free the GTT manager, returns -EBUSY if ranges are still
* allocated inside it.
*/
static int amdgpu_gtt_mgr_fini(struct ttm_mem_type_manager *man)
void amdgpu_gtt_mgr_fini(struct amdgpu_device *adev)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
struct amdgpu_gtt_mgr *mgr = man->priv;
struct amdgpu_gtt_mgr *mgr = &adev->mman.gtt_mgr;
struct ttm_resource_manager *man = &mgr->manager;
int ret;
ttm_resource_manager_set_used(man, false);
ret = ttm_resource_manager_force_list_clean(&adev->mman.bdev, man);
if (ret)
return;
spin_lock(&mgr->lock);
drm_mm_takedown(&mgr->mm);
spin_unlock(&mgr->lock);
kfree(mgr);
man->priv = NULL;
device_remove_file(adev->dev, &dev_attr_mem_info_gtt_total);
device_remove_file(adev->dev, &dev_attr_mem_info_gtt_used);
return 0;
ttm_resource_manager_cleanup(man);
ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_TT, NULL);
}
/**
@ -148,7 +158,7 @@ static int amdgpu_gtt_mgr_fini(struct ttm_mem_type_manager *man)
*
* Check if a mem object has already address space allocated.
*/
bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_mem_reg *mem)
bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_resource *mem)
{
return mem->mm_node != NULL;
}
@ -163,12 +173,12 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_mem_reg *mem)
*
* Dummy, allocate the node but no space for it yet.
*/
static int amdgpu_gtt_mgr_new(struct ttm_mem_type_manager *man,
static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
struct ttm_buffer_object *tbo,
const struct ttm_place *place,
struct ttm_mem_reg *mem)
struct ttm_resource *mem)
{
struct amdgpu_gtt_mgr *mgr = man->priv;
struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
struct amdgpu_gtt_node *node;
int r;
@ -226,10 +236,10 @@ static int amdgpu_gtt_mgr_new(struct ttm_mem_type_manager *man,
*
* Free the allocated GTT again.
*/
static void amdgpu_gtt_mgr_del(struct ttm_mem_type_manager *man,
struct ttm_mem_reg *mem)
static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man,
struct ttm_resource *mem)
{
struct amdgpu_gtt_mgr *mgr = man->priv;
struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
struct amdgpu_gtt_node *node = mem->mm_node;
if (node) {
@ -249,17 +259,17 @@ static void amdgpu_gtt_mgr_del(struct ttm_mem_type_manager *man,
*
* Return how many bytes are used in the GTT domain
*/
uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man)
uint64_t amdgpu_gtt_mgr_usage(struct ttm_resource_manager *man)
{
struct amdgpu_gtt_mgr *mgr = man->priv;
struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
s64 result = man->size - atomic64_read(&mgr->available);
return (result > 0 ? result : 0) * PAGE_SIZE;
}
int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man)
int amdgpu_gtt_mgr_recover(struct ttm_resource_manager *man)
{
struct amdgpu_gtt_mgr *mgr = man->priv;
struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
struct amdgpu_gtt_node *node;
struct drm_mm_node *mm_node;
int r = 0;
@ -284,10 +294,10 @@ int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man)
*
* Dump the table content using printk.
*/
static void amdgpu_gtt_mgr_debug(struct ttm_mem_type_manager *man,
static void amdgpu_gtt_mgr_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct amdgpu_gtt_mgr *mgr = man->priv;
struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
spin_lock(&mgr->lock);
drm_mm_print(&mgr->mm, printer);
@ -298,10 +308,8 @@ static void amdgpu_gtt_mgr_debug(struct ttm_mem_type_manager *man,
amdgpu_gtt_mgr_usage(man) >> 20);
}
const struct ttm_mem_type_manager_func amdgpu_gtt_mgr_func = {
.init = amdgpu_gtt_mgr_init,
.takedown = amdgpu_gtt_mgr_fini,
.get_node = amdgpu_gtt_mgr_new,
.put_node = amdgpu_gtt_mgr_del,
static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func = {
.alloc = amdgpu_gtt_mgr_new,
.free = amdgpu_gtt_mgr_del,
.debug = amdgpu_gtt_mgr_debug
};
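
Taken together, the hunks above move this file to the new ttm_resource_manager lifecycle. A condensed sketch of the init/fini pairing, using only calls that appear in the hunks (sysfs setup and error handling omitted):

	/* init: describe the manager, register it, then enable it */
	man->use_tt = true;
	man->func = &amdgpu_gtt_mgr_func;
	ttm_resource_manager_init(man, gtt_size >> PAGE_SHIFT);
	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_TT, man);
	ttm_resource_manager_set_used(man, true);

	/* fini: disable, drain pending allocations, then unregister */
	ttm_resource_manager_set_used(man, false);
	ttm_resource_manager_force_list_clean(&adev->mman.bdev, man);
	ttm_resource_manager_cleanup(man);
	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_TT, NULL);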

View File

@ -40,7 +40,7 @@
static int amdgpu_i2c_pre_xfer(struct i2c_adapter *i2c_adap)
{
struct amdgpu_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
struct amdgpu_device *adev = i2c->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(i2c->dev);
struct amdgpu_i2c_bus_rec *rec = &i2c->rec;
uint32_t temp;
@ -82,7 +82,7 @@ static int amdgpu_i2c_pre_xfer(struct i2c_adapter *i2c_adap)
static void amdgpu_i2c_post_xfer(struct i2c_adapter *i2c_adap)
{
struct amdgpu_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
struct amdgpu_device *adev = i2c->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(i2c->dev);
struct amdgpu_i2c_bus_rec *rec = &i2c->rec;
uint32_t temp;
@ -101,7 +101,7 @@ static void amdgpu_i2c_post_xfer(struct i2c_adapter *i2c_adap)
static int amdgpu_i2c_get_clock(void *i2c_priv)
{
struct amdgpu_i2c_chan *i2c = i2c_priv;
struct amdgpu_device *adev = i2c->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(i2c->dev);
struct amdgpu_i2c_bus_rec *rec = &i2c->rec;
uint32_t val;
@ -116,7 +116,7 @@ static int amdgpu_i2c_get_clock(void *i2c_priv)
static int amdgpu_i2c_get_data(void *i2c_priv)
{
struct amdgpu_i2c_chan *i2c = i2c_priv;
struct amdgpu_device *adev = i2c->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(i2c->dev);
struct amdgpu_i2c_bus_rec *rec = &i2c->rec;
uint32_t val;
@ -130,7 +130,7 @@ static int amdgpu_i2c_get_data(void *i2c_priv)
static void amdgpu_i2c_set_clock(void *i2c_priv, int clock)
{
struct amdgpu_i2c_chan *i2c = i2c_priv;
struct amdgpu_device *adev = i2c->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(i2c->dev);
struct amdgpu_i2c_bus_rec *rec = &i2c->rec;
uint32_t val;
@ -143,7 +143,7 @@ static void amdgpu_i2c_set_clock(void *i2c_priv, int clock)
static void amdgpu_i2c_set_data(void *i2c_priv, int data)
{
struct amdgpu_i2c_chan *i2c = i2c_priv;
struct amdgpu_device *adev = i2c->dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(i2c->dev);
struct amdgpu_i2c_bus_rec *rec = &i2c->rec;
uint32_t val;
@ -253,7 +253,7 @@ void amdgpu_i2c_add(struct amdgpu_device *adev,
const struct amdgpu_i2c_bus_rec *rec,
const char *name)
{
struct drm_device *dev = adev->ddev;
struct drm_device *dev = adev_to_drm(adev);
int i;
for (i = 0; i < AMDGPU_MAX_I2C_BUS; i++) {

View File

@ -445,7 +445,7 @@ static int amdgpu_debugfs_sa_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
seq_printf(m, "--------------------- DELAYED --------------------- \n");
amdgpu_sa_bo_dump_debug_info(&adev->ib_pools[AMDGPU_IB_POOL_DELAYED],

View File

@ -85,7 +85,7 @@ static void amdgpu_hotplug_work_func(struct work_struct *work)
{
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
hotplug_work);
struct drm_device *dev = adev->ddev;
struct drm_device *dev = adev_to_drm(adev);
struct drm_mode_config *mode_config = &dev->mode_config;
struct drm_connector *connector;
struct drm_connector_list_iter iter;
@ -151,7 +151,7 @@ void amdgpu_irq_disable_all(struct amdgpu_device *adev)
irqreturn_t amdgpu_irq_handler(int irq, void *arg)
{
struct drm_device *dev = (struct drm_device *) arg;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
irqreturn_t ret;
ret = amdgpu_ih_process(adev, &adev->irq.ih);
@ -268,9 +268,9 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
if (!adev->enable_virtual_display)
/* Disable vblank IRQs aggressively for power-saving */
/* XXX: can this be enabled for DC? */
adev->ddev->vblank_disable_immediate = true;
adev_to_drm(adev)->vblank_disable_immediate = true;
r = drm_vblank_init(adev->ddev, adev->mode_info.num_crtc);
r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
if (r)
return r;
@ -284,14 +284,14 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
adev->irq.installed = true;
/* Use vector 0 for MSI-X */
r = drm_irq_install(adev->ddev, pci_irq_vector(adev->pdev, 0));
r = drm_irq_install(adev_to_drm(adev), pci_irq_vector(adev->pdev, 0));
if (r) {
adev->irq.installed = false;
if (!amdgpu_device_has_dc_support(adev))
flush_work(&adev->hotplug_work);
return r;
}
adev->ddev->max_vblank_count = 0x00ffffff;
adev_to_drm(adev)->max_vblank_count = 0x00ffffff;
DRM_DEBUG("amdgpu: irq initialized.\n");
return 0;
@ -311,7 +311,7 @@ void amdgpu_irq_fini(struct amdgpu_device *adev)
unsigned i, j;
if (adev->irq.installed) {
drm_irq_uninstall(adev->ddev);
drm_irq_uninstall(adev_to_drm(adev));
adev->irq.installed = false;
if (adev->irq.msi_enabled)
pci_free_irq_vectors(adev->pdev);
@ -522,7 +522,7 @@ void amdgpu_irq_gpu_reset_resume_helper(struct amdgpu_device *adev)
int amdgpu_irq_get(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type)
{
if (!adev->ddev->irq_enabled)
if (!adev_to_drm(adev)->irq_enabled)
return -ENOENT;
if (type >= src->num_types)
@ -552,7 +552,7 @@ int amdgpu_irq_get(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
int amdgpu_irq_put(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type)
{
if (!adev->ddev->irq_enabled)
if (!adev_to_drm(adev)->irq_enabled)
return -ENOENT;
if (type >= src->num_types)
@ -583,7 +583,7 @@ int amdgpu_irq_put(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
bool amdgpu_irq_enabled(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type)
{
if (!adev->ddev->irq_enabled)
if (!adev_to_drm(adev)->irq_enabled)
return false;
if (type >= src->num_types)

View File

@ -251,7 +251,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
int i;
/* Signal all jobs not yet scheduled */
for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
struct drm_sched_rq *rq = &sched->sched_rq[i];
if (!rq)

View File

@ -78,7 +78,7 @@ void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev)
*/
void amdgpu_driver_unload_kms(struct drm_device *dev)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
if (adev == NULL)
return;
@ -86,7 +86,7 @@ void amdgpu_driver_unload_kms(struct drm_device *dev)
amdgpu_unregister_gpu_instance(adev);
if (adev->rmmio == NULL)
goto done_free;
return;
if (adev->runpm) {
pm_runtime_get_sync(dev->dev);
@ -94,12 +94,7 @@ void amdgpu_driver_unload_kms(struct drm_device *dev)
}
amdgpu_acpi_fini(adev);
amdgpu_device_fini(adev);
done_free:
kfree(adev);
dev->dev_private = NULL;
}
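
The kfree()/dev_private teardown above disappears because the drm_device is now embedded in amdgpu_device and allocated as a managed resource. A hedged sketch of the probe-side counterpart (the actual call lives in the PCI probe path outside these hunks; the driver struct name kms_driver and the member name ddev are assumptions):

	adev = devm_drm_dev_alloc(&pdev->dev, &kms_driver,
				  typeof(*adev), ddev);
	if (IS_ERR(adev))
		return PTR_ERR(adev);
	/* freed automatically on final drm_dev_put(), no kfree() needed */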
void amdgpu_register_gpu_instance(struct amdgpu_device *adev)
@ -130,22 +125,18 @@ void amdgpu_register_gpu_instance(struct amdgpu_device *adev)
/**
* amdgpu_driver_load_kms - Main load function for KMS.
*
* @dev: drm dev pointer
* @adev: pointer to struct amdgpu_device
* @flags: device flags
*
* This is the main load function for KMS (all asics).
* Returns 0 on success, error on failure.
*/
int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
int amdgpu_driver_load_kms(struct amdgpu_device *adev, unsigned long flags)
{
struct amdgpu_device *adev;
struct drm_device *dev;
int r, acpi_status;
adev = kzalloc(sizeof(struct amdgpu_device), GFP_KERNEL);
if (adev == NULL) {
return -ENOMEM;
}
dev->dev_private = (void *)adev;
dev = adev_to_drm(adev);
if (amdgpu_has_atpx() &&
(amdgpu_is_atpx_hybrid() ||
@ -160,7 +151,7 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
* properly initialize the GPU MC controller and permit
* VRAM allocation
*/
r = amdgpu_device_init(adev, dev, dev->pdev, flags);
r = amdgpu_device_init(adev, flags);
if (r) {
dev_err(&dev->pdev->dev, "Fatal error during GPU init\n");
goto out;
@ -186,7 +177,7 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
break;
case CHIP_VEGA10:
/* turn runpm on if noretry=0 */
if (!amdgpu_noretry)
if (!adev->gmc.noretry)
adev->runpm = true;
break;
default:
@ -291,14 +282,25 @@ static int amdgpu_firmware_info(struct drm_amdgpu_info_firmware *fw_info,
fw_info->feature = 0;
break;
case AMDGPU_INFO_FW_TA:
if (query_fw->index > 1)
return -EINVAL;
if (query_fw->index == 0) {
switch (query_fw->index) {
case 0:
fw_info->ver = adev->psp.ta_fw_version;
fw_info->feature = adev->psp.ta_xgmi_ucode_version;
} else {
break;
case 1:
fw_info->ver = adev->psp.ta_fw_version;
fw_info->feature = adev->psp.ta_ras_ucode_version;
break;
case 2:
fw_info->ver = adev->psp.ta_fw_version;
fw_info->feature = adev->psp.ta_hdcp_ucode_version;
break;
case 3:
fw_info->ver = adev->psp.ta_fw_version;
fw_info->feature = adev->psp.ta_dtm_ucode_version;
break;
default:
return -EINVAL;
}
break;
case AMDGPU_INFO_FW_SDMA:
@ -480,7 +482,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
*/
static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_amdgpu_info *info = data;
struct amdgpu_mode_info *minfo = &adev->mode_info;
void __user *out = (void __user *)(uintptr_t)info->return_pointer;
@ -595,13 +597,13 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
ui64 = atomic64_read(&adev->num_vram_cpu_page_faults);
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_VRAM_USAGE:
ui64 = amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
ui64 = amdgpu_vram_mgr_usage(ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM));
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_VIS_VRAM_USAGE:
ui64 = amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
ui64 = amdgpu_vram_mgr_vis_usage(ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM));
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_GTT_USAGE:
ui64 = amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]);
ui64 = amdgpu_gtt_mgr_usage(ttm_manager_type(&adev->mman.bdev, TTM_PL_TT));
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_GDS_CONFIG: {
struct drm_amdgpu_info_gds gds_info;
@ -624,7 +626,7 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
min(adev->gmc.visible_vram_size -
atomic64_read(&adev->visible_pin_size),
vram_gtt.vram_size);
vram_gtt.gtt_size = adev->mman.bdev.man[TTM_PL_TT].size;
vram_gtt.gtt_size = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT)->size;
vram_gtt.gtt_size *= PAGE_SIZE;
vram_gtt.gtt_size -= atomic64_read(&adev->gart_pin_size);
return copy_to_user(out, &vram_gtt,
@ -632,14 +634,17 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
}
case AMDGPU_INFO_MEMORY: {
struct drm_amdgpu_memory_info mem;
struct ttm_resource_manager *vram_man =
ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
struct ttm_resource_manager *gtt_man =
ttm_manager_type(&adev->mman.bdev, TTM_PL_TT);
memset(&mem, 0, sizeof(mem));
mem.vram.total_heap_size = adev->gmc.real_vram_size;
mem.vram.usable_heap_size = adev->gmc.real_vram_size -
atomic64_read(&adev->vram_pin_size) -
AMDGPU_VM_RESERVED_VRAM;
mem.vram.heap_usage =
amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
amdgpu_vram_mgr_usage(vram_man);
mem.vram.max_allocation = mem.vram.usable_heap_size * 3 / 4;
mem.cpu_accessible_vram.total_heap_size =
@ -649,16 +654,16 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
atomic64_read(&adev->visible_pin_size),
mem.vram.usable_heap_size);
mem.cpu_accessible_vram.heap_usage =
amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
amdgpu_vram_mgr_vis_usage(vram_man);
mem.cpu_accessible_vram.max_allocation =
mem.cpu_accessible_vram.usable_heap_size * 3 / 4;
mem.gtt.total_heap_size = adev->mman.bdev.man[TTM_PL_TT].size;
mem.gtt.total_heap_size = gtt_man->size;
mem.gtt.total_heap_size *= PAGE_SIZE;
mem.gtt.usable_heap_size = mem.gtt.total_heap_size -
atomic64_read(&adev->gart_pin_size);
mem.gtt.heap_usage =
amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]);
amdgpu_gtt_mgr_usage(gtt_man);
mem.gtt.max_allocation = mem.gtt.usable_heap_size * 3 / 4;
return copy_to_user(out, &mem,
@ -742,6 +747,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
dev_info.ids_flags |= AMDGPU_IDS_FLAGS_FUSION;
if (amdgpu_mcbp || amdgpu_sriov_vf(adev))
dev_info.ids_flags |= AMDGPU_IDS_FLAGS_PREEMPTION;
if (amdgpu_is_tmz(adev))
dev_info.ids_flags |= AMDGPU_IDS_FLAGS_TMZ;
vm_size = adev->vm_manager.max_pfn * AMDGPU_GPU_PAGE_SIZE;
vm_size -= AMDGPU_VA_RESERVED_SIZE;
@ -995,7 +1002,7 @@ void amdgpu_driver_lastclose_kms(struct drm_device *dev)
*/
int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_fpriv *fpriv;
int r, pasid;
@ -1080,7 +1087,7 @@ int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
void amdgpu_driver_postclose_kms(struct drm_device *dev,
struct drm_file *file_priv)
{
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct amdgpu_fpriv *fpriv = file_priv->driver_priv;
struct amdgpu_bo_list *list;
struct amdgpu_bo *pd;
@ -1145,7 +1152,7 @@ u32 amdgpu_get_vblank_counter_kms(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
unsigned int pipe = crtc->index;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int vpos, hpos, stat;
u32 count;
@ -1213,7 +1220,7 @@ int amdgpu_enable_vblank_kms(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
unsigned int pipe = crtc->index;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int idx = amdgpu_display_crtc_idx_to_irq_type(adev, pipe);
return amdgpu_irq_get(adev, &adev->crtc_irq, idx);
@ -1230,7 +1237,7 @@ void amdgpu_disable_vblank_kms(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
unsigned int pipe = crtc->index;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
int idx = amdgpu_display_crtc_idx_to_irq_type(adev, pipe);
amdgpu_irq_put(adev, &adev->crtc_irq, idx);
@ -1266,7 +1273,7 @@ static int amdgpu_debugfs_firmware_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_amdgpu_info_firmware fw_info;
struct drm_amdgpu_query_fw query_fw;
struct atom_context *ctx = adev->mode_info.atom_context;
@ -1389,13 +1396,31 @@ static int amdgpu_debugfs_firmware_info(struct seq_file *m, void *data)
fw_info.feature, fw_info.ver);
query_fw.fw_type = AMDGPU_INFO_FW_TA;
for (i = 0; i < 2; i++) {
for (i = 0; i < 4; i++) {
query_fw.index = i;
ret = amdgpu_firmware_info(&fw_info, &query_fw, adev);
if (ret)
continue;
seq_printf(m, "TA %s feature version: %u, firmware version: 0x%08x\n",
i ? "RAS" : "XGMI", fw_info.feature, fw_info.ver);
switch (query_fw.index) {
case 0:
seq_printf(m, "TA %s feature version: 0x%08x, firmware version: 0x%08x\n",
"RAS", fw_info.feature, fw_info.ver);
break;
case 1:
seq_printf(m, "TA %s feature version: 0x%08x, firmware version: 0x%08x\n",
"XGMI", fw_info.feature, fw_info.ver);
break;
case 2:
seq_printf(m, "TA %s feature version: 0x%08x, firmware version: 0x%08x\n",
"HDCP", fw_info.feature, fw_info.ver);
break;
case 3:
seq_printf(m, "TA %s feature version: 0x%08x, firmware version: 0x%08x\n",
"DTM", fw_info.feature, fw_info.ver);
break;
default:
return -EINVAL;
}
}
/* SMC */

View File

@ -27,6 +27,20 @@ struct amdgpu_mmhub_funcs {
void (*query_ras_error_count)(struct amdgpu_device *adev,
void *ras_error_status);
void (*reset_ras_error_count)(struct amdgpu_device *adev);
u64 (*get_fb_location)(struct amdgpu_device *adev);
void (*init)(struct amdgpu_device *adev);
int (*gart_enable)(struct amdgpu_device *adev);
void (*set_fault_enable_default)(struct amdgpu_device *adev,
bool value);
void (*gart_disable)(struct amdgpu_device *adev);
int (*set_clockgating)(struct amdgpu_device *adev,
enum amd_clockgating_state state);
void (*get_clockgating)(struct amdgpu_device *adev, u32 *flags);
void (*setup_vm_pt_regs)(struct amdgpu_device *adev, uint32_t vmid,
uint64_t page_table_base);
void (*update_power_gating)(struct amdgpu_device *adev,
bool enable);
void (*query_ras_error_status)(struct amdgpu_device *adev);
};
struct amdgpu_mmhub {

View File

@ -46,6 +46,7 @@
#include <drm/drm_dp_mst_helper.h>
#include "modules/inc/mod_freesync.h"
#include "amdgpu_dm_irq_params.h"
struct amdgpu_bo;
struct amdgpu_device;
@ -404,7 +405,8 @@ struct amdgpu_crtc {
struct amdgpu_flip_work *pflip_works;
enum amdgpu_flip_status pflip_status;
int deferred_flip_completion;
u32 last_flip_vblank;
/* parameters access from DM IRQ handler */
struct dm_irq_params dm_irq_params;
/* pll sharing */
struct amdgpu_atom_ss ss;
bool ss_enabled;
@ -469,6 +471,7 @@ struct amdgpu_encoder {
struct amdgpu_connector_atom_dig {
/* displayport */
u8 dpcd[DP_RECEIVER_CAP_SIZE];
u8 downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
u8 dp_sink_type;
int dp_clock;
int dp_lane_count;

View File

@ -136,8 +136,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_VRAM;
places[c].mem_type = TTM_PL_VRAM;
places[c].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED;
if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)
places[c].lpfn = visible_pfn;
@ -152,7 +152,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
if (domain & AMDGPU_GEM_DOMAIN_GTT) {
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].flags = TTM_PL_FLAG_TT;
places[c].mem_type = TTM_PL_TT;
places[c].flags = 0;
if (flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC)
places[c].flags |= TTM_PL_FLAG_WC |
TTM_PL_FLAG_UNCACHED;
@ -164,7 +165,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
if (domain & AMDGPU_GEM_DOMAIN_CPU) {
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].flags = TTM_PL_FLAG_SYSTEM;
places[c].mem_type = TTM_PL_SYSTEM;
places[c].flags = 0;
if (flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC)
places[c].flags |= TTM_PL_FLAG_WC |
TTM_PL_FLAG_UNCACHED;
@ -176,28 +178,32 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
if (domain & AMDGPU_GEM_DOMAIN_GDS) {
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].flags = TTM_PL_FLAG_UNCACHED | AMDGPU_PL_FLAG_GDS;
places[c].mem_type = AMDGPU_PL_GDS;
places[c].flags = TTM_PL_FLAG_UNCACHED;
c++;
}
if (domain & AMDGPU_GEM_DOMAIN_GWS) {
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].flags = TTM_PL_FLAG_UNCACHED | AMDGPU_PL_FLAG_GWS;
places[c].mem_type = AMDGPU_PL_GWS;
places[c].flags = TTM_PL_FLAG_UNCACHED;
c++;
}
if (domain & AMDGPU_GEM_DOMAIN_OA) {
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].flags = TTM_PL_FLAG_UNCACHED | AMDGPU_PL_FLAG_OA;
places[c].mem_type = AMDGPU_PL_OA;
places[c].flags = TTM_PL_FLAG_UNCACHED;
c++;
}
if (!c) {
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
places[c].mem_type = TTM_PL_SYSTEM;
places[c].flags = TTM_PL_MASK_CACHING;
c++;
}
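
Each conversion above follows the same shape: the old TTM_PL_FLAG_* memory-type bit moves into the new mem_type field, and flags keeps only caching hints. As a compact illustration, equivalent to the VRAM branch above:

	places[c] = (struct ttm_place) {
		.fpfn = 0,
		.lpfn = 0,
		.mem_type = TTM_PL_VRAM,	/* selects the resource manager */
		.flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED,	/* caching only */
	};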
@ -374,6 +380,9 @@ int amdgpu_bo_create_kernel_at(struct amdgpu_device *adev,
if (r)
return r;
if ((*bo_ptr) == NULL)
return 0;
/*
* Remove the original mem node and create a new one at the requested
* position.
@ -381,7 +390,7 @@ int amdgpu_bo_create_kernel_at(struct amdgpu_device *adev,
if (cpu_addr)
amdgpu_bo_kunmap(*bo_ptr);
ttm_bo_mem_put(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
ttm_resource_free(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
for (i = 0; i < (*bo_ptr)->placement.num_placement; ++i) {
(*bo_ptr)->placements[i].fpfn = offset >> PAGE_SHIFT;
@ -442,14 +451,14 @@ void amdgpu_bo_free_kernel(struct amdgpu_bo **bo, u64 *gpu_addr,
static bool amdgpu_bo_validate_size(struct amdgpu_device *adev,
unsigned long size, u32 domain)
{
struct ttm_mem_type_manager *man = NULL;
struct ttm_resource_manager *man = NULL;
/*
* If GTT is part of requested domains the check must succeed to
* allow fall back to GTT
*/
if (domain & AMDGPU_GEM_DOMAIN_GTT) {
man = &adev->mman.bdev.man[TTM_PL_TT];
man = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT);
if (size < (man->size << PAGE_SHIFT))
return true;
@ -458,7 +467,7 @@ static bool amdgpu_bo_validate_size(struct amdgpu_device *adev,
}
if (domain & AMDGPU_GEM_DOMAIN_VRAM) {
man = &adev->mman.bdev.man[TTM_PL_VRAM];
man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
if (size < (man->size << PAGE_SHIFT))
return true;
@ -552,7 +561,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
bo = kzalloc(sizeof(struct amdgpu_bo), GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
drm_gem_private_object_init(adev->ddev, &bo->tbo.base, size);
drm_gem_private_object_init(adev_to_drm(adev), &bo->tbo.base, size);
INIT_LIST_HEAD(&bo->shadow_list);
bo->vm_bo = NULL;
bo->preferred_domains = bp->preferred_domain ? bp->preferred_domain :
@ -591,7 +600,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
amdgpu_cs_report_moved_bytes(adev, ctx.bytes_moved, 0);
if (bp->flags & AMDGPU_GEM_CREATE_VRAM_CLEARED &&
bo->tbo.mem.placement & TTM_PL_FLAG_VRAM) {
bo->tbo.mem.mem_type == TTM_PL_VRAM) {
struct dma_fence *fence;
r = amdgpu_fill_buffer(bo, 0, bo->tbo.base.resv, &fence);
@ -1268,11 +1277,11 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
*/
void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
bool evict,
struct ttm_mem_reg *new_mem)
struct ttm_resource *new_mem)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
struct amdgpu_bo *abo;
struct ttm_mem_reg *old_mem = &bo->mem;
struct ttm_resource *old_mem = &bo->mem;
if (!amdgpu_bo_is_amdgpu_bo(bo))
return;
@ -1299,7 +1308,7 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
}
/**
* amdgpu_bo_move_notify - notification about a BO being released
* amdgpu_bo_release_notify - notification about a BO being released
* @bo: pointer to a buffer object
*
* Wipes VRAM buffers whose contents should not be leaked before the

View File

@ -160,7 +160,7 @@ static inline int amdgpu_bo_reserve(struct amdgpu_bo *bo, bool no_intr)
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
int r;
r = __ttm_bo_reserve(&bo->tbo, !no_intr, false, NULL);
r = ttm_bo_reserve(&bo->tbo, !no_intr, false, NULL);
if (unlikely(r != 0)) {
if (r != -ERESTARTSYS)
dev_err(adev->dev, "%p reserve failed\n", bo);
@ -283,7 +283,7 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
uint64_t *flags);
void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
bool evict,
struct ttm_mem_reg *new_mem);
struct ttm_resource *new_mem);
void amdgpu_bo_release_notify(struct ttm_buffer_object *bo);
int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,

View File

@ -226,7 +226,7 @@ static int init_pmu_by_type(struct amdgpu_device *adev,
pmu_entry->pmu.attr_groups = attr_groups;
pmu_entry->pmu_perf_type = pmu_perf_type;
snprintf(pmu_name, PMU_NAME_SIZE, "%s_%d",
pmu_file_prefix, adev->ddev->primary->index);
pmu_file_prefix, adev_to_drm(adev)->primary->index);
ret = perf_pmu_register(&pmu_entry->pmu, pmu_name, -1);

View File

@ -161,10 +161,12 @@ static int psp_sw_init(void *handle)
struct psp_context *psp = &adev->psp;
int ret;
ret = psp_init_microcode(psp);
if (ret) {
DRM_ERROR("Failed to load psp firmware!\n");
return ret;
if (!amdgpu_sriov_vf(adev)) {
ret = psp_init_microcode(psp);
if (ret) {
DRM_ERROR("Failed to load psp firmware!\n");
return ret;
}
}
ret = psp_memory_training_init(psp);
@ -219,6 +221,9 @@ int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
int i;
struct amdgpu_device *adev = psp->adev;
if (psp->adev->in_pci_err_recovery)
return 0;
for (i = 0; i < adev->usec_timeout; i++) {
val = RREG32(reg_index);
if (check_changed) {
@ -245,6 +250,9 @@ psp_cmd_submit_buf(struct psp_context *psp,
bool ras_intr = false;
bool skip_unsupport = false;
if (psp->adev->in_pci_err_recovery)
return 0;
mutex_lock(&psp->mutex);
memset(psp->cmd_buf_mem, 0, PSP_CMD_BUFFER_SIZE);
@ -929,6 +937,7 @@ static int psp_ras_load(struct psp_context *psp)
{
int ret;
struct psp_gfx_cmd_resp *cmd;
struct ta_ras_shared_memory *ras_cmd;
/*
* TODO: bypass the loading in sriov for now
@ -952,11 +961,20 @@ static int psp_ras_load(struct psp_context *psp)
ret = psp_cmd_submit_buf(psp, NULL, cmd,
psp->fence_buf_mc_addr);
ras_cmd = (struct ta_ras_shared_memory*)psp->ras.ras_shared_buf;
if (!ret) {
psp->ras.ras_initialized = true;
psp->ras.session_id = cmd->resp.session_id;
if (!ras_cmd->ras_status)
psp->ras.ras_initialized = true;
else
dev_warn(psp->adev->dev, "RAS Init Status: 0x%X\n", ras_cmd->ras_status);
}
if (ret || ras_cmd->ras_status)
amdgpu_ras_fini(psp->adev);
kfree(cmd);
return ret;
@ -1429,6 +1447,168 @@ static int psp_dtm_terminate(struct psp_context *psp)
}
// DTM end
// RAP start
static int psp_rap_init_shared_buf(struct psp_context *psp)
{
int ret;
/*
* Allocate 16k of memory, aligned to 4k, from the frame buffer
* (local physical memory) for the RAP TA <-> driver interface
*/
ret = amdgpu_bo_create_kernel(psp->adev, PSP_RAP_SHARED_MEM_SIZE,
PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM,
&psp->rap_context.rap_shared_bo,
&psp->rap_context.rap_shared_mc_addr,
&psp->rap_context.rap_shared_buf);
return ret;
}
static int psp_rap_load(struct psp_context *psp)
{
int ret;
struct psp_gfx_cmd_resp *cmd;
cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
memset(psp->fw_pri_buf, 0, PSP_1_MEG);
memcpy(psp->fw_pri_buf, psp->ta_rap_start_addr, psp->ta_rap_ucode_size);
psp_prep_ta_load_cmd_buf(cmd,
psp->fw_pri_mc_addr,
psp->ta_rap_ucode_size,
psp->rap_context.rap_shared_mc_addr,
PSP_RAP_SHARED_MEM_SIZE);
ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr);
if (!ret) {
psp->rap_context.rap_initialized = true;
psp->rap_context.session_id = cmd->resp.session_id;
mutex_init(&psp->rap_context.mutex);
}
kfree(cmd);
return ret;
}
static int psp_rap_unload(struct psp_context *psp)
{
int ret;
struct psp_gfx_cmd_resp *cmd;
cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
psp_prep_ta_unload_cmd_buf(cmd, psp->rap_context.session_id);
ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr);
kfree(cmd);
return ret;
}
static int psp_rap_initialize(struct psp_context *psp)
{
int ret;
/*
* TODO: bypass the initialize in sriov for now
*/
if (amdgpu_sriov_vf(psp->adev))
return 0;
if (!psp->adev->psp.ta_rap_ucode_size ||
!psp->adev->psp.ta_rap_start_addr) {
dev_info(psp->adev->dev, "RAP: optional rap ta ucode is not available\n");
return 0;
}
if (!psp->rap_context.rap_initialized) {
ret = psp_rap_init_shared_buf(psp);
if (ret)
return ret;
}
ret = psp_rap_load(psp);
if (ret)
return ret;
ret = psp_rap_invoke(psp, TA_CMD_RAP__INITIALIZE);
if (ret != TA_RAP_STATUS__SUCCESS) {
psp_rap_unload(psp);
amdgpu_bo_free_kernel(&psp->rap_context.rap_shared_bo,
&psp->rap_context.rap_shared_mc_addr,
&psp->rap_context.rap_shared_buf);
psp->rap_context.rap_initialized = false;
dev_warn(psp->adev->dev, "RAP TA initialize fail.\n");
return -EINVAL;
}
return 0;
}
static int psp_rap_terminate(struct psp_context *psp)
{
int ret;
if (!psp->rap_context.rap_initialized)
return 0;
ret = psp_rap_unload(psp);
psp->rap_context.rap_initialized = false;
/* free rap shared memory */
amdgpu_bo_free_kernel(&psp->rap_context.rap_shared_bo,
&psp->rap_context.rap_shared_mc_addr,
&psp->rap_context.rap_shared_buf);
return ret;
}
int psp_rap_invoke(struct psp_context *psp, uint32_t ta_cmd_id)
{
struct ta_rap_shared_memory *rap_cmd;
int ret;
if (!psp->rap_context.rap_initialized)
return -EINVAL;
if (ta_cmd_id != TA_CMD_RAP__INITIALIZE &&
ta_cmd_id != TA_CMD_RAP__VALIDATE_L0)
return -EINVAL;
mutex_lock(&psp->rap_context.mutex);
rap_cmd = (struct ta_rap_shared_memory *)
psp->rap_context.rap_shared_buf;
memset(rap_cmd, 0, sizeof(struct ta_rap_shared_memory));
rap_cmd->cmd_id = ta_cmd_id;
rap_cmd->validation_method_id = METHOD_A;
ret = psp_ta_invoke(psp, rap_cmd->cmd_id, psp->rap_context.session_id);
if (ret) {
mutex_unlock(&psp->rap_context.mutex);
return ret;
}
mutex_unlock(&psp->rap_context.mutex);
return rap_cmd->rap_status;
}
// RAP end
static int psp_hw_start(struct psp_context *psp)
{
struct amdgpu_device *adev = psp->adev;
@ -1706,7 +1886,7 @@ static int psp_load_smu_fw(struct psp_context *psp)
return 0;
if (adev->in_gpu_reset && ras && ras->supported) {
if (amdgpu_in_reset(adev) && ras && ras->supported) {
ret = amdgpu_dpm_set_mp1_state(adev, PP_MP1_STATE_UNLOAD);
if (ret) {
DRM_WARN("Failed to set MP1 state prepare for reload\n");
@ -1821,7 +2001,7 @@ static int psp_load_fw(struct amdgpu_device *adev)
int ret;
struct psp_context *psp = &adev->psp;
if (amdgpu_sriov_vf(adev) && adev->in_gpu_reset) {
if (amdgpu_sriov_vf(adev) && amdgpu_in_reset(adev)) {
psp_ring_stop(psp, PSP_RING_TYPE__KM); /* should not destroy ring, only stop */
goto skip_memalloc;
}
@ -1891,6 +2071,11 @@ static int psp_load_fw(struct amdgpu_device *adev)
if (ret)
dev_err(psp->adev->dev,
"DTM: Failed to initialize DTM\n");
ret = psp_rap_initialize(psp);
if (ret)
dev_err(psp->adev->dev,
"RAP: Failed to initialize RAP\n");
}
return 0;
@ -1941,6 +2126,7 @@ static int psp_hw_fini(void *handle)
if (psp->adev->psp.ta_fw) {
psp_ras_terminate(psp);
psp_rap_terminate(psp);
psp_dtm_terminate(psp);
psp_hdcp_terminate(psp);
}
@ -1999,6 +2185,11 @@ static int psp_suspend(void *handle)
DRM_ERROR("Failed to terminate dtm ta\n");
return ret;
}
ret = psp_rap_terminate(psp);
if (ret) {
DRM_ERROR("Failed to terminate rap ta\n");
return ret;
}
}
ret = psp_asd_unload(psp);
@ -2077,6 +2268,11 @@ static int psp_resume(void *handle)
if (ret)
dev_err(psp->adev->dev,
"DTM: Failed to initialize DTM\n");
ret = psp_rap_initialize(psp);
if (ret)
dev_err(psp->adev->dev,
"RAP: Failed to initialize RAP\n");
}
mutex_unlock(&adev->firmware.mutex);
@ -2342,6 +2538,11 @@ int parse_ta_bin_descriptor(struct psp_context *psp,
psp->ta_dtm_ucode_size = le32_to_cpu(desc->size_bytes);
psp->ta_dtm_start_addr = ucode_start_addr;
break;
case TA_FW_TYPE_PSP_RAP:
psp->ta_rap_ucode_version = le32_to_cpu(desc->fw_version);
psp->ta_rap_ucode_size = le32_to_cpu(desc->size_bytes);
psp->ta_rap_start_addr = ucode_start_addr;
break;
default:
dev_warn(psp->adev->dev, "Unsupported TA type: %d\n", desc->fw_type);
break;
@ -2420,7 +2621,7 @@ static ssize_t psp_usbc_pd_fw_sysfs_read(struct device *dev,
char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
struct amdgpu_device *adev = drm_to_adev(ddev);
uint32_t fw_ver;
int ret;
@ -2447,7 +2648,7 @@ static ssize_t psp_usbc_pd_fw_sysfs_write(struct device *dev,
size_t count)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
struct amdgpu_device *adev = drm_to_adev(ddev);
void *cpu_addr;
dma_addr_t dma_addr;
int ret;

View File

@ -29,6 +29,7 @@
#include "psp_gfx_if.h"
#include "ta_xgmi_if.h"
#include "ta_ras_if.h"
#include "ta_rap_if.h"
#define PSP_FENCE_BUFFER_SIZE 0x1000
#define PSP_CMD_BUFFER_SIZE 0x1000
@ -38,6 +39,7 @@
#define PSP_TMR_SIZE 0x400000
#define PSP_HDCP_SHARED_MEM_SIZE 0x4000
#define PSP_DTM_SHARED_MEM_SIZE 0x4000
#define PSP_RAP_SHARED_MEM_SIZE 0x4000
#define PSP_SHARED_MEM_SIZE 0x4000
struct psp_context;
@ -159,6 +161,15 @@ struct psp_dtm_context {
struct mutex mutex;
};
struct psp_rap_context {
bool rap_initialized;
uint32_t session_id;
struct amdgpu_bo *rap_shared_bo;
uint64_t rap_shared_mc_addr;
void *rap_shared_buf;
struct mutex mutex;
};
#define MEM_TRAIN_SYSTEM_SIGNATURE 0x54534942
#define GDDR6_MEM_TRAINING_DATA_SIZE_IN_BYTES 0x1000
#define GDDR6_MEM_TRAINING_OFFSET 0x8000
@ -277,11 +288,16 @@ struct psp_context
uint32_t ta_dtm_ucode_size;
uint8_t *ta_dtm_start_addr;
uint32_t ta_rap_ucode_version;
uint32_t ta_rap_ucode_size;
uint8_t *ta_rap_start_addr;
struct psp_asd_context asd_context;
struct psp_xgmi_context xgmi_context;
struct psp_ras_context ras;
struct psp_hdcp_context hdcp_context;
struct psp_dtm_context dtm_context;
struct psp_rap_context rap_context;
struct mutex mutex;
struct psp_memory_training_context mem_train_ctx;
};
@ -357,6 +373,7 @@ int psp_ras_trigger_error(struct psp_context *psp,
int psp_hdcp_invoke(struct psp_context *psp, uint32_t ta_cmd_id);
int psp_dtm_invoke(struct psp_context *psp, uint32_t ta_cmd_id);
int psp_rap_invoke(struct psp_context *psp, uint32_t ta_cmd_id);
int psp_rlc_autoload_start(struct psp_context *psp);

View File

@ -0,0 +1,127 @@
/*
* Copyright 2020 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*
*/
#include <linux/debugfs.h>
#include <linux/pm_runtime.h>
#include "amdgpu.h"
#include "amdgpu_rap.h"
/**
* DOC: AMDGPU RAP debugfs test interface
*
* Usage:
* echo opcode > <debugfs_dir>/dri/xxx/rap_test
*
* opcode:
* currently, only 2 is supported by the Linux host driver.
* Opcode 2 stands for TA_CMD_RAP__VALIDATE_L0 and triggers
* L0 policy validation; see the header file ta_rap_if.h for
* more detail.
*
*/
static ssize_t amdgpu_rap_debugfs_write(struct file *f, const char __user *buf,
size_t size, loff_t *pos)
{
struct amdgpu_device *adev = (struct amdgpu_device *)file_inode(f)->i_private;
struct ta_rap_shared_memory *rap_shared_mem;
struct ta_rap_cmd_output_data *rap_cmd_output;
struct drm_device *dev = adev_to_drm(adev);
uint32_t op;
int ret;
if (*pos || size != 2)
return -EINVAL;
ret = kstrtouint_from_user(buf, size, *pos, &op);
if (ret)
return ret;
ret = pm_runtime_get_sync(dev->dev);
if (ret < 0) {
pm_runtime_put_autosuspend(dev->dev);
return ret;
}
/* make sure gfx core is on, the RAP TA can't handle
* the GFX OFF case currently.
*/
amdgpu_gfx_off_ctrl(adev, false);
switch (op) {
case 2:
ret = psp_rap_invoke(&adev->psp, op);
if (ret == TA_RAP_STATUS__SUCCESS) {
dev_info(adev->dev, "RAP L0 validate test success.\n");
} else {
rap_shared_mem = (struct ta_rap_shared_memory *)
adev->psp.rap_context.rap_shared_buf;
rap_cmd_output = &(rap_shared_mem->rap_out_message.output);
dev_info(adev->dev, "RAP test failed, the output is:\n");
dev_info(adev->dev, "\tlast_subsection: 0x%08x.\n",
rap_cmd_output->last_subsection);
dev_info(adev->dev, "\tnum_total_validate: 0x%08x.\n",
rap_cmd_output->num_total_validate);
dev_info(adev->dev, "\tnum_valid: 0x%08x.\n",
rap_cmd_output->num_valid);
dev_info(adev->dev, "\tlast_validate_addr: 0x%08x.\n",
rap_cmd_output->last_validate_addr);
dev_info(adev->dev, "\tlast_validate_val: 0x%08x.\n",
rap_cmd_output->last_validate_val);
dev_info(adev->dev, "\tlast_validate_val_exptd: 0x%08x.\n",
rap_cmd_output->last_validate_val_exptd);
}
break;
default:
dev_info(adev->dev, "Unsupported op id: %d, ", op);
dev_info(adev->dev, "Only support op 2(L0 validate test).\n");
}
amdgpu_gfx_off_ctrl(adev, true);
pm_runtime_mark_last_busy(dev->dev);
pm_runtime_put_autosuspend(dev->dev);
return size;
}
static const struct file_operations amdgpu_rap_debugfs_ops = {
.owner = THIS_MODULE,
.read = NULL,
.write = amdgpu_rap_debugfs_write,
.llseek = default_llseek
};
void amdgpu_rap_debugfs_init(struct amdgpu_device *adev)
{
#if defined(CONFIG_DEBUG_FS)
struct drm_minor *minor = adev_to_drm(adev)->primary;
if (!adev->psp.rap_context.rap_initialized)
return;
debugfs_create_file("rap_test", S_IWUSR, minor->debugfs_root,
adev, &amdgpu_rap_debugfs_ops);
#endif
}
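
A hypothetical user-space trigger equivalent to the echo in the DOC comment above (DRI minor number 0 and the standard debugfs mount point are assumptions):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* opcode 2 == TA_CMD_RAP__VALIDATE_L0, per the DOC comment */
	int fd = open("/sys/kernel/debug/dri/0/rap_test", O_WRONLY);

	if (fd < 0)
		return 1;
	/* the handler rejects anything but a 2-byte write ("2\n") */
	write(fd, "2\n", 2);
	close(fd);
	return 0;
}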

View File

@ -0,0 +1,30 @@
/*
* Copyright 2020 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*
*/
#ifndef _AMDGPU_RAP_H
#define _AMDGPU_RAP_H
#include "amdgpu.h"
void amdgpu_rap_debugfs_init(struct amdgpu_device *adev);
#endif

View File

@ -34,6 +34,8 @@
#include "amdgpu_xgmi.h"
#include "ivsrcid/nbio/irqsrcs_nbif_7_4.h"
static const char *RAS_FS_NAME = "ras";
const char *ras_error_string[] = {
"none",
"parity",
@ -62,13 +64,14 @@ const char *ras_block_string[] = {
#define ras_err_str(i) (ras_error_string[ffs(i)])
#define ras_block_str(i) (ras_block_string[i])
#define AMDGPU_RAS_FLAG_INIT_BY_VBIOS 1
#define AMDGPU_RAS_FLAG_INIT_NEED_RESET 2
#define RAS_DEFAULT_FLAGS (AMDGPU_RAS_FLAG_INIT_BY_VBIOS)
/* inject address is 52 bits */
#define RAS_UMC_INJECT_ADDR_LIMIT (0x1ULL << 52)
/* typical ECC bad page rate(1 bad page per 100MB VRAM) */
#define RAS_BAD_PAGE_RATE (100 * 1024 * 1024ULL)
enum amdgpu_ras_retire_page_reservation {
AMDGPU_RAS_RETIRE_PAGE_RESERVED,
AMDGPU_RAS_RETIRE_PAGE_PENDING,
@ -367,12 +370,19 @@ static ssize_t amdgpu_ras_debugfs_ctrl_write(struct file *f, const char __user *
static ssize_t amdgpu_ras_debugfs_eeprom_write(struct file *f, const char __user *buf,
size_t size, loff_t *pos)
{
struct amdgpu_device *adev = (struct amdgpu_device *)file_inode(f)->i_private;
struct amdgpu_device *adev =
(struct amdgpu_device *)file_inode(f)->i_private;
int ret;
ret = amdgpu_ras_eeprom_reset_table(&adev->psp.ras.ras->eeprom_control);
ret = amdgpu_ras_eeprom_reset_table(
&(amdgpu_ras_get_context(adev)->eeprom_control));
return ret == 1 ? size : -EIO;
if (ret == 1) {
amdgpu_ras_get_context(adev)->flags = RAS_DEFAULT_FLAGS;
return size;
} else {
return -EIO;
}
}
static const struct file_operations amdgpu_ras_debugfs_ctrl_ops = {
@ -1017,45 +1027,13 @@ static ssize_t amdgpu_ras_sysfs_features_read(struct device *dev,
return scnprintf(buf, PAGE_SIZE, "feature mask: 0x%x\n", con->features);
}
static int amdgpu_ras_sysfs_create_feature_node(struct amdgpu_device *adev)
static void amdgpu_ras_sysfs_remove_bad_page_node(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct attribute *attrs[] = {
&con->features_attr.attr,
NULL
};
struct bin_attribute *bin_attrs[] = {
&con->badpages_attr,
NULL
};
struct attribute_group group = {
.name = "ras",
.attrs = attrs,
.bin_attrs = bin_attrs,
};
con->features_attr = (struct device_attribute) {
.attr = {
.name = "features",
.mode = S_IRUGO,
},
.show = amdgpu_ras_sysfs_features_read,
};
con->badpages_attr = (struct bin_attribute) {
.attr = {
.name = "gpu_vram_bad_pages",
.mode = S_IRUGO,
},
.size = 0,
.private = NULL,
.read = amdgpu_ras_sysfs_badpages_read,
};
sysfs_attr_init(attrs[0]);
sysfs_bin_attr_init(bin_attrs[0]);
return sysfs_create_group(&adev->dev->kobj, &group);
sysfs_remove_file_from_group(&adev->dev->kobj,
&con->badpages_attr.attr,
RAS_FS_NAME);
}
static int amdgpu_ras_sysfs_remove_feature_node(struct amdgpu_device *adev)
@ -1065,14 +1043,9 @@ static int amdgpu_ras_sysfs_remove_feature_node(struct amdgpu_device *adev)
&con->features_attr.attr,
NULL
};
struct bin_attribute *bin_attrs[] = {
&con->badpages_attr,
NULL
};
struct attribute_group group = {
.name = "ras",
.name = RAS_FS_NAME,
.attrs = attrs,
.bin_attrs = bin_attrs,
};
sysfs_remove_group(&adev->dev->kobj, &group);
@ -1105,7 +1078,7 @@ int amdgpu_ras_sysfs_create(struct amdgpu_device *adev,
if (sysfs_add_file_to_group(&adev->dev->kobj,
&obj->sysfs_attr.attr,
"ras")) {
RAS_FS_NAME)) {
put_obj(obj);
return -EINVAL;
}
@ -1125,7 +1098,7 @@ int amdgpu_ras_sysfs_remove(struct amdgpu_device *adev,
sysfs_remove_file_from_group(&adev->dev->kobj,
&obj->sysfs_attr.attr,
"ras");
RAS_FS_NAME);
obj->attr_inuse = 0;
put_obj(obj);
@ -1141,6 +1114,9 @@ static int amdgpu_ras_sysfs_remove_all(struct amdgpu_device *adev)
amdgpu_ras_sysfs_remove(adev, &obj->head);
}
if (amdgpu_bad_page_threshold != 0)
amdgpu_ras_sysfs_remove_bad_page_node(adev);
amdgpu_ras_sysfs_remove_feature_node(adev);
return 0;
@ -1169,9 +1145,9 @@ static int amdgpu_ras_sysfs_remove_all(struct amdgpu_device *adev)
static void amdgpu_ras_debugfs_create_ctrl_node(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct drm_minor *minor = adev->ddev->primary;
struct drm_minor *minor = adev_to_drm(adev)->primary;
con->dir = debugfs_create_dir("ras", minor->debugfs_root);
con->dir = debugfs_create_dir(RAS_FS_NAME, minor->debugfs_root);
debugfs_create_file("ras_ctrl", S_IWUGO | S_IRUGO, con->dir,
adev, &amdgpu_ras_debugfs_ctrl_ops);
debugfs_create_file("ras_eeprom_reset", S_IWUGO | S_IRUGO, con->dir,
@ -1187,6 +1163,13 @@ static void amdgpu_ras_debugfs_create_ctrl_node(struct amdgpu_device *adev)
*/
debugfs_create_bool("auto_reboot", S_IWUGO | S_IRUGO, con->dir,
&con->reboot);
/*
* The user can set this to skip cleaning up the hardware error
* count registers of RAS IPs during RAS recovery.
*/
debugfs_create_bool("disable_ras_err_cnt_harvest", 0644,
con->dir, &con->disable_ras_err_cnt_harvest);
}
void amdgpu_ras_debugfs_create(struct amdgpu_device *adev,
@ -1211,6 +1194,7 @@ void amdgpu_ras_debugfs_create(struct amdgpu_device *adev,
void amdgpu_ras_debugfs_create_all(struct amdgpu_device *adev)
{
#if defined(CONFIG_DEBUG_FS)
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct ras_manager *obj;
struct ras_fs_if fs_info;
@ -1233,6 +1217,7 @@ void amdgpu_ras_debugfs_create_all(struct amdgpu_device *adev)
amdgpu_ras_debugfs_create(adev, &fs_info);
}
}
#endif
}
void amdgpu_ras_debugfs_remove(struct amdgpu_device *adev,
@ -1249,6 +1234,7 @@ void amdgpu_ras_debugfs_remove(struct amdgpu_device *adev,
static void amdgpu_ras_debugfs_remove_all(struct amdgpu_device *adev)
{
#if defined(CONFIG_DEBUG_FS)
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct ras_manager *obj, *tmp;
@ -1257,14 +1243,48 @@ static void amdgpu_ras_debugfs_remove_all(struct amdgpu_device *adev)
}
con->dir = NULL;
#endif
}
/* debugfs end */
/* ras fs */
static BIN_ATTR(gpu_vram_bad_pages, S_IRUGO,
amdgpu_ras_sysfs_badpages_read, NULL, 0);
static DEVICE_ATTR(features, S_IRUGO,
amdgpu_ras_sysfs_features_read, NULL);
static int amdgpu_ras_fs_init(struct amdgpu_device *adev)
{
amdgpu_ras_sysfs_create_feature_node(adev);
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct attribute_group group = {
.name = RAS_FS_NAME,
};
struct attribute *attrs[] = {
&con->features_attr.attr,
NULL
};
struct bin_attribute *bin_attrs[] = {
NULL,
NULL,
};
int r;
/* add features entry */
con->features_attr = dev_attr_features;
group.attrs = attrs;
sysfs_attr_init(attrs[0]);
if (amdgpu_bad_page_threshold != 0) {
/* add bad_page_features entry */
bin_attr_gpu_vram_bad_pages.private = NULL;
con->badpages_attr = bin_attr_gpu_vram_bad_pages;
bin_attrs[0] = &con->badpages_attr;
group.bin_attrs = bin_attrs;
sysfs_bin_attr_init(bin_attrs[0]);
}
r = sysfs_create_group(&adev->dev->kobj, &group);
if (r)
dev_err(adev->dev, "Failed to create RAS sysfs group!");
return 0;
}
@ -1456,6 +1476,45 @@ static void amdgpu_ras_log_on_err_counter(struct amdgpu_device *adev)
}
}
/* Parse RdRspStatus and WrRspStatus */
void amdgpu_ras_error_status_query(struct amdgpu_device *adev,
struct ras_query_if *info)
{
/*
* Only two blocks need to query the read/write
* RspStatus at the current state
*/
switch (info->head.block) {
case AMDGPU_RAS_BLOCK__GFX:
if (adev->gfx.funcs->query_ras_error_status)
adev->gfx.funcs->query_ras_error_status(adev);
break;
case AMDGPU_RAS_BLOCK__MMHUB:
if (adev->mmhub.funcs->query_ras_error_status)
adev->mmhub.funcs->query_ras_error_status(adev);
break;
default:
break;
}
}
static void amdgpu_ras_query_err_status(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct ras_manager *obj;
if (!con)
return;
list_for_each_entry(obj, &con->head, node) {
struct ras_query_if info = {
.head = obj->head,
};
amdgpu_ras_error_status_query(adev, &info);
}
}
/* recovery begin */
/* return 0 on success.
@ -1512,23 +1571,30 @@ static void amdgpu_ras_do_recovery(struct work_struct *work)
struct amdgpu_device *remote_adev = NULL;
struct amdgpu_device *adev = ras->adev;
struct list_head device_list, *device_list_handle = NULL;
struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev, false);
/* Build list of devices to query RAS related errors */
if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
device_list_handle = &hive->device_list;
else {
INIT_LIST_HEAD(&device_list);
list_add_tail(&adev->gmc.xgmi.head, &device_list);
device_list_handle = &device_list;
}
if (!ras->disable_ras_err_cnt_harvest) {
struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev);
list_for_each_entry(remote_adev, device_list_handle, gmc.xgmi.head) {
amdgpu_ras_log_on_err_counter(remote_adev);
/* Build list of devices to query RAS related errors */
if (hive && adev->gmc.xgmi.num_physical_nodes > 1) {
device_list_handle = &hive->device_list;
} else {
INIT_LIST_HEAD(&device_list);
list_add_tail(&adev->gmc.xgmi.head, &device_list);
device_list_handle = &device_list;
}
list_for_each_entry(remote_adev,
device_list_handle, gmc.xgmi.head) {
amdgpu_ras_query_err_status(remote_adev);
amdgpu_ras_log_on_err_counter(remote_adev);
}
amdgpu_put_xgmi_hive(hive);
}
if (amdgpu_device_should_recover_gpu(ras->adev))
amdgpu_device_gpu_recover(ras->adev, 0);
amdgpu_device_gpu_recover(ras->adev, NULL);
atomic_set(&ras->in_recovery, 0);
}
@ -1643,7 +1709,7 @@ static int amdgpu_ras_load_bad_pages(struct amdgpu_device *adev)
int ret = 0;
/* no bad page record, skip eeprom access */
if (!control->num_recs)
if (!control->num_recs || (amdgpu_bad_page_threshold == 0))
return ret;
bps = kcalloc(control->num_recs, sizeof(*bps), GFP_KERNEL);
@ -1697,6 +1763,47 @@ static bool amdgpu_ras_check_bad_page(struct amdgpu_device *adev,
return ret;
}
static void amdgpu_ras_validate_threshold(struct amdgpu_device *adev,
uint32_t max_length)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
int tmp_threshold = amdgpu_bad_page_threshold;
u64 val;
/*
* Justification of value bad_page_cnt_threshold in ras structure
*
* Generally, -1 <= amdgpu_bad_page_threshold <= max record length
* in eeprom, which introduces two scenarios accordingly.
*
* Bad page retirement enablement:
* - If amdgpu_bad_page_threshold = -1,
* bad_page_cnt_threshold = typical value by formula.
*
* - When the value from user is 0 < amdgpu_bad_page_threshold <
* max record length in eeprom, use it directly.
*
* Bad page retirement disablement:
* - If amdgpu_bad_page_threshold = 0, bad page retirement
* functionality is disabled, and bad_page_cnt_threshold will
* take no effect.
*/
if (tmp_threshold < -1)
tmp_threshold = -1;
else if (tmp_threshold > max_length)
tmp_threshold = max_length;
if (tmp_threshold == -1) {
val = adev->gmc.mc_vram_size;
do_div(val, RAS_BAD_PAGE_RATE);
con->bad_page_cnt_threshold = min(lower_32_bits(val),
max_length);
} else {
con->bad_page_cnt_threshold = tmp_threshold;
}
}
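
A worked instance of the default (-1) path, assuming a hypothetical board with 8 GiB of VRAM (an illustrative value, not taken from the patch):

	u64 val = 8ULL << 30;			/* mc_vram_size: 8 GiB */
	do_div(val, 100 * 1024 * 1024ULL);	/* RAS_BAD_PAGE_RATE */
	/* val == 81, so bad_page_cnt_threshold = min(81, max_length) */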
/* called in gpu recovery/init */
int amdgpu_ras_reserve_bad_pages(struct amdgpu_device *adev)
{
@ -1706,7 +1813,8 @@ int amdgpu_ras_reserve_bad_pages(struct amdgpu_device *adev)
struct amdgpu_bo *bo = NULL;
int i, ret = 0;
if (!con || !con->eh_data)
/* Do not reserve bad pages when amdgpu_bad_page_threshold == 0. */
if (!con || !con->eh_data || (amdgpu_bad_page_threshold == 0))
return 0;
mutex_lock(&con->recovery_lock);
@ -1774,6 +1882,8 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct ras_err_handler_data **data;
uint32_t max_eeprom_records_len = 0;
bool exc_err_limit = false;
int ret;
if (con)
@ -1792,8 +1902,15 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
atomic_set(&con->in_recovery, 0);
con->adev = adev;
ret = amdgpu_ras_eeprom_init(&con->eeprom_control);
if (ret)
max_eeprom_records_len = amdgpu_ras_eeprom_get_record_max_length();
amdgpu_ras_validate_threshold(adev, max_eeprom_records_len);
ret = amdgpu_ras_eeprom_init(&con->eeprom_control, &exc_err_limit);
/*
* This call fails when exc_err_limit is true or
* ret != 0.
*/
if (exc_err_limit || ret)
goto free;
if (con->eeprom_control.num_recs) {
@ -1817,6 +1934,15 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
out:
dev_warn(adev->dev, "Failed to initialize ras recovery!\n");
/*
* Except for the error threshold exceeding case, other failure
* cases in this function would not fail amdgpu driver init.
*/
if (!exc_err_limit)
ret = 0;
else
ret = -EINVAL;
return ret;
}
@ -1856,6 +1982,16 @@ int amdgpu_ras_request_reset_on_boot(struct amdgpu_device *adev,
return 0;
}
static int amdgpu_ras_check_asic_type(struct amdgpu_device *adev)
{
if (adev->asic_type != CHIP_VEGA10 &&
adev->asic_type != CHIP_VEGA20 &&
adev->asic_type != CHIP_ARCTURUS)
return 1;
else
return 0;
}
/*
* check hardware's ras ability which will be saved in hw_supported.
* if hardware does not support ras, we can skip some ras initialization and
@ -1872,8 +2008,7 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev,
*supported = 0;
if (amdgpu_sriov_vf(adev) || !adev->is_atom_fw ||
(adev->asic_type != CHIP_VEGA20 &&
adev->asic_type != CHIP_ARCTURUS))
amdgpu_ras_check_asic_type(adev))
return;
if (amdgpu_atomfirmware_mem_ecc_supported(adev)) {
@ -1895,6 +2030,8 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev,
*supported = amdgpu_ras_enable == 0 ?
0 : *hw_supported & amdgpu_ras_mask;
adev->ras_features = *supported;
}
int amdgpu_ras_init(struct amdgpu_device *adev)
@ -1917,9 +2054,9 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
amdgpu_ras_check_supported(adev, &con->hw_supported,
&con->supported);
if (!con->hw_supported) {
if (!con->hw_supported || (adev->asic_type == CHIP_VEGA10)) {
r = 0;
goto err_out;
goto release_con;
}
con->features = 0;
@ -1930,25 +2067,25 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
if (adev->nbio.funcs->init_ras_controller_interrupt) {
r = adev->nbio.funcs->init_ras_controller_interrupt(adev);
if (r)
goto err_out;
goto release_con;
}
if (adev->nbio.funcs->init_ras_err_event_athub_interrupt) {
r = adev->nbio.funcs->init_ras_err_event_athub_interrupt(adev);
if (r)
goto err_out;
goto release_con;
}
if (amdgpu_ras_fs_init(adev)) {
r = -EINVAL;
goto err_out;
goto release_con;
}
dev_info(adev->dev, "RAS INFO: ras initialized successfully, "
"hardware ability[%x] ras_mask[%x]\n",
con->hw_supported, con->supported);
return 0;
err_out:
release_con:
amdgpu_ras_set_context(adev, NULL);
kfree(con);
@ -1976,7 +2113,7 @@ int amdgpu_ras_late_init(struct amdgpu_device *adev,
amdgpu_ras_request_reset_on_boot(adev,
ras_block->block);
return 0;
} else if (adev->in_suspend || adev->in_gpu_reset) {
} else if (adev->in_suspend || amdgpu_in_reset(adev)) {
/* in resume phase, if fail to enable ras,
* clean up all ras fs nodes, and disable ras */
goto cleanup;
@ -1985,7 +2122,7 @@ int amdgpu_ras_late_init(struct amdgpu_device *adev,
}
/* in resume phase, no need to create ras fs node */
if (adev->in_suspend || adev->in_gpu_reset)
if (adev->in_suspend || amdgpu_in_reset(adev))
return 0;
if (ih_info->cb) {
@ -2143,3 +2280,19 @@ bool amdgpu_ras_need_emergency_restart(struct amdgpu_device *adev)
return false;
}
bool amdgpu_ras_check_err_threshold(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
bool exc_err_limit = false;
if (con && (amdgpu_bad_page_threshold != 0))
amdgpu_ras_eeprom_check_err_threshold(&con->eeprom_control,
&exc_err_limit);
/*
* We are only interested in the variable exc_err_limit,
* which says whether the GPU is in a bad state or not.
*/
return exc_err_limit;
}
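A minimal sketch of how a boot-time caller might consume this helper; the actual call site is outside this diff, and the error code and message below are illustrative:

if (amdgpu_ras_check_err_threshold(adev)) {
        /* illustrative message and error code, not from this diff */
        dev_err(adev->dev, "GPU flagged bad by RAS, aborting init\n");
        return -EHWPOISON;
}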

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h

@@ -31,6 +31,10 @@
#include "ta_ras_if.h"
#include "amdgpu_ras_eeprom.h"
#define AMDGPU_RAS_FLAG_INIT_BY_VBIOS (0x1 << 0)
#define AMDGPU_RAS_FLAG_INIT_NEED_RESET (0x1 << 1)
#define AMDGPU_RAS_FLAG_SKIP_BAD_PAGE_RESV (0x1 << 2)
enum amdgpu_ras_block {
AMDGPU_RAS_BLOCK__UMC = 0,
AMDGPU_RAS_BLOCK__SDMA,
@@ -336,6 +340,12 @@ struct amdgpu_ras {
struct amdgpu_ras_eeprom_control eeprom_control;
bool error_query_ready;
/* bad page count threshold */
uint32_t bad_page_cnt_threshold;
/* disable ras error count harvest in recovery */
bool disable_ras_err_cnt_harvest;
};
struct ras_fs_data {
@@ -490,6 +500,8 @@ void amdgpu_ras_suspend(struct amdgpu_device *adev);
unsigned long amdgpu_ras_query_error_count(struct amdgpu_device *adev,
bool is_ce);
bool amdgpu_ras_check_err_threshold(struct amdgpu_device *adev);
/* error handling functions */
int amdgpu_ras_add_bad_pages(struct amdgpu_device *adev,
struct eeprom_table_record *bps, int pages);
@@ -500,10 +512,14 @@ static inline int amdgpu_ras_reset_gpu(struct amdgpu_device *adev)
{
struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
/* save bad page to eeprom before gpu reset,
* i2c may be unstable in gpu reset
/*
* Save bad page to eeprom before gpu reset, i2c may be unstable
* in gpu reset.
*
* Also, exclude the case when ras recovery issuer is
* eeprom page write itself.
*/
if (in_task())
if (!(ras->flags & AMDGPU_RAS_FLAG_SKIP_BAD_PAGE_RESV) && in_task())
amdgpu_ras_reserve_bad_pages(adev);
if (atomic_cmpxchg(&ras->in_recovery, 0, 1) == 0)
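Read together with the EEPROM writer further down, the new flag forms a simple producer/consumer pair; condensed from the hunks in this diff:

/* Producer (EEPROM write path, amdgpu_ras_eeprom_process_recods):
 * mark that the upcoming reset was triggered by the writer itself. */
ras->flags |= AMDGPU_RAS_FLAG_SKIP_BAD_PAGE_RESV;
amdgpu_ras_reset_gpu(adev);

/* Consumer (amdgpu_ras_reset_gpu above): reserve bad pages only in
 * process context, and only when the writer did not set the flag. */
if (!(ras->flags & AMDGPU_RAS_FLAG_SKIP_BAD_PAGE_RESV) && in_task())
        amdgpu_ras_reserve_bad_pages(adev);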

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c

@@ -46,6 +46,9 @@
#define EEPROM_TABLE_HDR_VAL 0x414d4452
#define EEPROM_TABLE_VER 0x00010000
/* Bad GPU tag BADG */
#define EEPROM_TABLE_HDR_BAD 0x42414447
/* Assume 2 Mbit size */
#define EEPROM_SIZE_BYTES 256000
#define EEPROM_PAGE__SIZE_BYTES 256
@@ -56,6 +59,15 @@
#define to_amdgpu_device(x) (container_of(x, struct amdgpu_ras, eeprom_control))->adev
static bool __is_ras_eeprom_supported(struct amdgpu_device *adev)
{
if ((adev->asic_type == CHIP_VEGA20) ||
(adev->asic_type == CHIP_ARCTURUS))
return true;
return false;
}
static bool __get_eeprom_i2c_addr_arct(struct amdgpu_device *adev,
uint16_t *i2c_addr)
{
@@ -213,6 +225,24 @@ static bool __validate_tbl_checksum(struct amdgpu_ras_eeprom_control *control,
return true;
}
static int amdgpu_ras_eeprom_correct_header_tag(
struct amdgpu_ras_eeprom_control *control,
uint32_t header)
{
unsigned char buff[EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE];
struct amdgpu_ras_eeprom_table_header *hdr = &control->tbl_hdr;
int ret = 0;
memset(buff, 0, EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE);
mutex_lock(&control->tbl_mutex);
hdr->header = header;
ret = __update_table_header(control, buff);
mutex_unlock(&control->tbl_mutex);
return ret;
}
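As used later in amdgpu_ras_eeprom_init(), restoring the good tag once the user has raised the threshold is a single call:

ret = amdgpu_ras_eeprom_correct_header_tag(control, EEPROM_TABLE_HDR_VAL);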
int amdgpu_ras_eeprom_reset_table(struct amdgpu_ras_eeprom_control *control)
{
unsigned char buff[EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE] = { 0 };
@@ -238,12 +268,14 @@ int amdgpu_ras_eeprom_reset_table(struct amdgpu_ras_eeprom_control *control)
}
int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control)
int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control,
bool *exceed_err_limit)
{
int ret = 0;
struct amdgpu_device *adev = to_amdgpu_device(control);
unsigned char buff[EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE] = { 0 };
struct amdgpu_ras_eeprom_table_header *hdr = &control->tbl_hdr;
struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
struct i2c_msg msg = {
.addr = 0,
.flags = I2C_M_RD,
@@ -251,6 +283,11 @@ int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control)
.buf = buff,
};
*exceed_err_limit = false;
if (!__is_ras_eeprom_supported(adev))
return 0;
/* Verify i2c adapter is initialized */
if (!adev->pm.smu_i2c.algo)
return -ENOENT;
@@ -279,6 +316,18 @@ int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control)
DRM_DEBUG_DRIVER("Found existing EEPROM table with %d records",
control->num_recs);
} else if ((hdr->header == EEPROM_TABLE_HDR_BAD) &&
(amdgpu_bad_page_threshold != 0)) {
if (ras->bad_page_cnt_threshold > control->num_recs) {
dev_info(adev->dev, "Using the larger valid bad page "
"threshold and correcting eeprom header tag.\n");
ret = amdgpu_ras_eeprom_correct_header_tag(control,
EEPROM_TABLE_HDR_VAL);
} else {
*exceed_err_limit = true;
dev_err(adev->dev, "Exceeding the bad_page_threshold parameter, "
"disabling the GPU.\n");
}
} else {
DRM_INFO("Creating new EEPROM table");
@@ -375,6 +424,49 @@ static uint32_t __correct_eeprom_dest_address(uint32_t curr_address)
return curr_address;
}
int amdgpu_ras_eeprom_check_err_threshold(
struct amdgpu_ras_eeprom_control *control,
bool *exceed_err_limit)
{
struct amdgpu_device *adev = to_amdgpu_device(control);
unsigned char buff[EEPROM_ADDRESS_SIZE +
EEPROM_TABLE_HEADER_SIZE] = { 0 };
struct amdgpu_ras_eeprom_table_header *hdr = &control->tbl_hdr;
struct i2c_msg msg = {
.addr = control->i2c_address,
.flags = I2C_M_RD,
.len = EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE,
.buf = buff,
};
int ret;
*exceed_err_limit = false;
if (!__is_ras_eeprom_supported(adev))
return 0;
/* read EEPROM table header */
mutex_lock(&control->tbl_mutex);
ret = i2c_transfer(&adev->pm.smu_i2c, &msg, 1);
if (ret < 1) {
dev_err(adev->dev, "Failed to read EEPROM table header.\n");
goto err;
}
__decode_table_header_from_buff(hdr, &buff[2]);
if (hdr->header == EEPROM_TABLE_HDR_BAD) {
dev_warn(adev->dev, "This GPU is in BAD status.");
dev_warn(adev->dev, "Please retire it or setting one bigger "
"threshold value when reloading driver.\n");
*exceed_err_limit = true;
}
err:
mutex_unlock(&control->tbl_mutex);
return 0;
}
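Note that the helper returns 0 even when the header read fails; the verdict reaches the caller only through the out-parameter. A minimal, hypothetical caller (log message illustrative):

bool exceeded = false;

/* The return value is always 0 here; the result comes back via
 * the out-parameter. */
amdgpu_ras_eeprom_check_err_threshold(control, &exceeded);
if (exceeded)
        dev_warn(adev->dev, "GPU was tagged bad on a previous boot\n");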
int amdgpu_ras_eeprom_process_recods(struct amdgpu_ras_eeprom_control *control,
struct eeprom_table_record *records,
bool write,
@@ -383,10 +475,12 @@ int amdgpu_ras_eeprom_process_recods(struct amdgpu_ras_eeprom_control *control,
int i, ret = 0;
struct i2c_msg *msgs, *msg;
unsigned char *buffs, *buff;
bool sched_ras_recovery = false;
struct eeprom_table_record *record;
struct amdgpu_device *adev = to_amdgpu_device(control);
struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
if (adev->asic_type != CHIP_VEGA20 && adev->asic_type != CHIP_ARCTURUS)
if (!__is_ras_eeprom_supported(adev))
return 0;
buffs = kcalloc(num, EEPROM_ADDRESS_SIZE + EEPROM_TABLE_RECORD_SIZE,
@@ -402,11 +496,30 @@ int amdgpu_ras_eeprom_process_recods(struct amdgpu_ras_eeprom_control *control,
goto free_buff;
}
/*
* If the number of saved bad pages exceeds the bad page threshold
* for the whole VRAM, update the table header to carry the BAD GPU
* tag and schedule one ras recovery after the eeprom write is done,
* so that the latest records are not lost.
*
* The new header will be picked up and checked during boot by ras
* recovery, which may interrupt the boot process to notify the user
* that this GPU is in a bad state and should be retired for further
* checking.
*/
if (write && (amdgpu_bad_page_threshold != 0) &&
((control->num_recs + num) >= ras->bad_page_cnt_threshold)) {
dev_warn(adev->dev,
"Saved bad pages(%d) reaches threshold value(%d).\n",
control->num_recs + num, ras->bad_page_cnt_threshold);
control->tbl_hdr.header = EEPROM_TABLE_HDR_BAD;
sched_ras_recovery = true;
}
/* In case of overflow just start from beginning to not lose newest records */
if (write && (control->next_addr + EEPROM_TABLE_RECORD_SIZE * num > EEPROM_SIZE_BYTES))
control->next_addr = EEPROM_RECORD_START;
/*
* TODO: currently an EEPROM write is issued for each record; this
* creates internal fragmentation. Optimize the code to do a full page write of
@@ -482,6 +595,20 @@ int amdgpu_ras_eeprom_process_recods(struct amdgpu_ras_eeprom_control *control,
__update_tbl_checksum(control, records, num, old_hdr_byte_sum);
__update_table_header(control, buffs);
if (sched_ras_recovery) {
/*
* Before scheduling ras recovery, set the related flag
* first so that the common bad page reservation in
* amdgpu_ras_reset_gpu is bypassed.
*/
amdgpu_ras_get_context(adev)->flags |=
AMDGPU_RAS_FLAG_SKIP_BAD_PAGE_RESV;
dev_warn(adev->dev, "Conduct ras recovery due to bad "
"page threshold reached.\n");
amdgpu_ras_reset_gpu(adev);
}
} else if (!__validate_tbl_checksum(control, records, num)) {
DRM_WARN("EEPROM Table checksum mismatch!");
/* TODO Uncomment when EEPROM read/write is reliable */
@@ -499,6 +626,11 @@ int amdgpu_ras_eeprom_process_recods(struct amdgpu_ras_eeprom_control *control,
return ret == num ? 0 : -EIO;
}
inline uint32_t amdgpu_ras_eeprom_get_record_max_length(void)
{
return EEPROM_MAX_RECORD_NUM;
}
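For reference, the trigger condition used in amdgpu_ras_eeprom_process_recods() above reduces to simple arithmetic; a standalone restatement with invented names:

#include <stdbool.h>
#include <stdint.h>

/* true when appending `num` new records reaches the configured
 * threshold; a threshold of 0 disables the check entirely */
static bool reaches_bad_page_threshold(uint32_t num_recs, uint32_t num,
                                       uint32_t threshold)
{
        return threshold != 0 && num_recs + num >= threshold;
}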
/* Used for testing when bugs are encountered */
#if 0
void amdgpu_ras_eeprom_test(struct amdgpu_ras_eeprom_control *control)

Some files were not shown because too many files have changed in this diff.