No functional change.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Instead of hard coding just another name in the ring code.
v2: squash in Tom's rebase fix
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Those are way too large.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Those are way too large.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Instead of specifying the total ring size, calculate it from the maximum
number of dw a submission can have and the number of concurrent submissions.
This fixes UVD with 8 concurrent submissions or more.
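Roughly, the computation looks like this (a sketch; names follow the ring
code but are illustrative here):

    /* dw are 4 bytes each; round up because the ring code requires a
     * power-of-two ring size. */
    ring->ring_size = roundup_pow_of_two(max_dw * 4 *
                                         amdgpu_sched_hw_submission);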
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
TTM BO accounting is out of sync with how memory is really allocated
in ttm[_dma]_tt_alloc_page_directory. This resulted in excessive
estimated overhead with many small allocations.
ttm_dma_tt_alloc_page_directory makes a single allocation for three
arrays: pages, DMA and CPU addresses. It uses drm_calloc_large, which
uses kmalloc internally for allocations smaller than PAGE_SIZE.
ttm_round_pot should be a good approximation of its memory usage both
above and below PAGE_SIZE.
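A sketch of the corresponding estimate (field names illustrative; the real
accounting lives in ttm_bo_dma_acc_size()):

    /* One drm_calloc_large() call backs all three per-page arrays:
     * struct page *, void * (CPU addr) and dma_addr_t.  ttm_round_pot()
     * models kmalloc's power-of-two buckets below PAGE_SIZE and
     * page-granular allocations above it. */
    acc_size += ttm_round_pot(num_pages *
                              (2 * sizeof(void *) + sizeof(dma_addr_t)));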
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Monk Liu <monk.liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This is just a type-safety thing to avoid everyone taking void *;
it doesn't change anything functionally (illustrated below).
v2: agd5f: split out the dal changes into a separate patch.
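The pattern, in miniature (hypothetical names, not the actual interfaces
touched here):

    /* Before: any pointer compiles, mixups go unnoticed. */
    void dal_attach(void *ctx);

    /* After: a forward declaration is enough for the compiler to
     * type-check callers; nothing changes at runtime. */
    struct dal_context;
    void dal_attach(struct dal_context *ctx);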
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Change History
--------------
v2:
- Make the firmware version check correct (sketched below).
Firmware versions >= 1.80 should all support 40 UVD
instances.
- Replace AMDGPU_MAX_UVD_HANDLES with max_handles
variable.
v1:
- The firmware can handle up to 40 UVD sessions.
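As a sketch, the v2 check looks something like this (constant and field
names illustrative):

    /* Firmware 1.80 and newer handles 40 UVD sessions, older firmware
     * only 10; pick the limit at load time instead of using the
     * compile-time AMDGPU_MAX_UVD_HANDLES everywhere. */
    if (version_major > 1 || (version_major == 1 && version_minor >= 80))
        adev->uvd.max_handles = 40;
    else
        adev->uvd.max_handles = 10;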
Signed-off-by: Arindam Nath <arindam.nath@amd.com>
Signed-off-by: Ayyappa Chandolu <ayyappa.chandolu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
These tables were initialized on the stack on each call; avoid that
and save a little bit of text size.
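The change, in miniature (illustrative table):

    /* Before: the initializer is executed on every call. */
    int bar(void)
    {
        const u32 tbl[] = { 0x10, 0x20, 0x30 };
        return tbl[1];
    }

    /* After: one copy in .rodata, nothing rebuilt per call. */
    static const u32 bar_tbl[] = { 0x10, 0x20, 0x30 };
    int bar(void) { return bar_tbl[1]; }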
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nils Wallménius <nils.wallmenius@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Also adjust phm_construct_table to take a const pointer.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nils Wallménius <nils.wallmenius@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
As these arrays were of pointer to pointer type, they were
pointer to pointer to const. Make them pointer to const
pointer to const.
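For reference, the difference between the two forms (struct name
illustrative):

    const struct entry **tables;          /* before: the pointers in the
                                             array may still be changed */
    const struct entry * const *tables;   /* after: both the pointers and
                                             what they point to are const */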
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nils Wallménius <nils.wallmenius@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All these are compile-time constant and the
drm_debugfs_create/remove_files functions take a const
pointer argument.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nils Wallménius <nils.wallmenius@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This marks the struct amdgpu_sched_ops const and
adjusts amd_sched_init to take a const pointer
for the ops param. The ops member of
struct amd_gpu_scheduler is also changed to const.
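A minimal self-contained analog of the change (names simplified from the
scheduler code):

    struct sched_ops {
        void (*run_job)(void *job);
    };

    struct gpu_scheduler {
        const struct sched_ops *ops;    /* member is now const */
    };

    int sched_init(struct gpu_scheduler *sched,
                   const struct sched_ops *ops)    /* param is now const */
    {
        sched->ops = ops;
        return 0;
    }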
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nils Wallménius <nils.wallmenius@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch marks some compile-time constant tables 'const'.
The tables marked in this patch are the low hanging fruit
where few other changes were necessary to avoid casting
away constness etc. Also mark some tables that are private
to a file as static.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nils Wallménius <nils.wallmenius@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This adds support to the command parser for the set append counter
packet3, which is required to support atomic counters on
evergreen/cayman GPUs.
v2: fixup some of the hardcoded numbers with real register names
(Christian)
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Arindam Nath <arindam.nath@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Arindam Nath <arindam.nath@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
If we don't need to flush we can easily use another VMID
already assigned to the process.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This way we can track when the flush is done.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
v2: rebase & cleanup
This way we can store more than one fence as a user of each VMID.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com> (v1)
Reviewed-by: Chunming Zhou <david1.zhou@amd.com> (v1)
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
No need to have two of them any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Also add some pflip debug prints.
This change allows us to wait on pflip status until the new surface address
is actually submitted to the register.
This reverts commit ed3020e923240829dcdfd3343f6e91dc02c63775
("drm/amdgpu: Move MMIO flip out of spinlocked region").
The original change assumed DAL would acquire locks inside the DAL
implementation of the page_flip callback, which eventually didn't happen.
This moves the flip before the status update, which makes sense for the
non-DAL code paths as well.
Signed-off-by: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When this flag is set, we program the hardware to execute the flip
during horizontal blank (i.e. for the next scanline) instead of during
vertical blank (i.e. for the next frame).
Currently this is only supported on ASICs which have a page flip
completion interrupt (>= R600), and only if the use_pflipirq parameter
has value 2 (the default).
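In sketch form (the bit name is illustrative; the exact register differs
per ASIC family):

    /* Arm the surface update for horizontal blank instead of the
     * default vertical blank when userspace requested an async flip. */
    if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
        tmp |= AVIVO_D1GRPH_SURFACE_UPDATE_H_RETRACE_EN;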
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add a proper implementation for setting the deep sleep divider.
Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Keeping the pages array around can use a lot of system memory
when you want a large GART.
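For scale: a 1 GiB GART with 4 KiB pages has 262144 entries, so on a
64-bit kernel the pages array alone pins 2 MiB of system memory
(262144 * 8 bytes).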
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Not needed any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Make it more flexible by passing src and page addresses
directly instead of the structures they contain.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
As far as I can see that isn't necessary any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This comes from the display handling code.
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch enables clockgating for the UVD6 block in Stoney.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch enables clock gating for the UVD5 block with
Tonga.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds support for software clock gating to UVD 5
and UVD 6 blocks with a preliminary commented out hardware
gating routine.
Currently hardware gating does not work so it's not activated.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The values of all but the RADEON_HPD_NONE members of the radeon_hpd_id
enum transform 1:1 into bit positions within the 'enabled' bitset as
assembled by evergreen_hpd_init():
enabled |= 1 << radeon_connector->hpd.hpd;
However, if ->hpd.hpd happens to equal RADEON_HPD_NONE == 0xff, UBSAN
reports
UBSAN: Undefined behaviour in drivers/gpu/drm/radeon/evergreen.c:1867:16
shift exponent 255 is too large for 32-bit type 'int'
[...]
Call Trace:
[<ffffffff818c4d35>] dump_stack+0xbc/0x117
[<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
[<ffffffff819411bb>] ubsan_epilogue+0xd/0x4e
[<ffffffff81941cbc>] __ubsan_handle_shift_out_of_bounds+0x1fb/0x254
[<ffffffffa0ba7f2e>] ? atom_execute_table+0x3e/0x50 [radeon]
[<ffffffff81941ac1>] ? __ubsan_handle_load_invalid_value+0x158/0x158
[<ffffffffa0b87700>] ? radeon_get_pll_use_mask+0x130/0x130 [radeon]
[<ffffffff81219930>] ? wake_up_klogd_work_func+0x60/0x60
[<ffffffff8121a35e>] ? vprintk_default+0x3e/0x60
[<ffffffffa0c603c4>] evergreen_hpd_init+0x274/0x2d0 [radeon]
[<ffffffffa0c603c4>] ? evergreen_hpd_init+0x274/0x2d0 [radeon]
[<ffffffffa0bd196e>] radeon_modeset_init+0x8ce/0x18d0 [radeon]
[<ffffffffa0b71d86>] radeon_driver_load_kms+0x186/0x350 [radeon]
[<ffffffffa03b6b16>] drm_dev_register+0xc6/0x100 [drm]
[<ffffffffa03bc8c4>] drm_get_pci_dev+0xe4/0x490 [drm]
[<ffffffff814b83f0>] ? kfree+0x220/0x370
[<ffffffffa0b687c2>] radeon_pci_probe+0x112/0x140 [radeon]
[...]
=====================================================================
radeon 0000:01:00.0: No connectors reported connected with modes
At least on x86, there should be no user-visible impact since
1 << 0xff == 1 << (0xff & 31) == 1 << 31
holds and 31 > RADEON_MAX_HPD_PINS. Thus, this patch is a cosmetic one.
All of the above applies analogously to evergreen_hpd_fini(),
r100_hpd_init(), r100_hpd_fini(), r600_hpd_init(), r600_hpd_fini(),
rs600_hpd_init() and rs600_hpd_fini().
Silence UBSAN by checking ->hpd.hpd for RADEON_HPD_NONE before ORing it
into the 'enabled' bitset in the *_init() functions or the 'disabled'
bitset in the *_fini() functions respectively.
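In sketch form, the guarded version of the line quoted above:

    if (radeon_connector->hpd.hpd != RADEON_HPD_NONE)
        enabled |= 1 << radeon_connector->hpd.hpd;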
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Whitespace fix.
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This is not a fatal error.
v2: add a comment explaining why the error is ignored here.
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Need to soft reset VCE as part of the clockgating
sequence.
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This fixes a fatal page fault that occurred when a job is
signaled/released after its timeout work has already been put on the
global queue (in this case cancel_delayed_work will return false),
which leads to an NX-protection page fault during job_timeout_func.
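The rule being enforced, sketched generically (not the exact driver code):

    /* cancel_delayed_work() returning false means the timeout work is
     * already queued or running, so the job cannot be freed yet. */
    if (!cancel_delayed_work(&s_job->work_tdr))
        return;    /* defer; the handler still references the job */
    /* only now is it safe to release the job */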
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add two callbacks to the scheduler to maintain jobs, invoked for
job timeout calculation. Now TDR measures the time gap from when the
job is processed by the hardware.
v2:
fix typo
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The original timeout detection routine is incorrect because it measures
the gap from when the job was scheduled, but we should only measure the
gap from when it was processed by the hardware.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The mirror_list will be used for the upcoming timeout detection
feature. This is needed to properly detect a GPU
timeout with the scheduler.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
For jobs submitted through the scheduler, do not free them immediately
after they are scheduled; instead free them from the global workqueue
via the sched fence signaling callback function (sketched below).
v2:
call uf's bo_undef after job_run()
call job's sync free after job_run()
no static inline __amdgpu_job_free() anymore, just use
kfree(job) instead.
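A sketch of the resulting flow (helper and field names illustrative):

    static void job_free_worker(struct work_struct *work)
    {
        struct amdgpu_job *job = container_of(work, struct amdgpu_job,
                                              free_work);
        kfree(job);    /* replaces the old __amdgpu_job_free() */
    }

    static void job_free_cb(struct fence *f, struct fence_cb *cb)
    {
        struct amdgpu_job *job = container_of(cb, struct amdgpu_job, cb);
        /* fence callbacks run in atomic context; defer the actual
         * free to the global workqueue. */
        schedule_work(&job->free_work);
    }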
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Consolidate job initialization in one place rather than
duplicating it in multiple places.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
More groundwork for conditional execution on SDMA,
necessary for preemption.
Signed-off-by: Monk Liu <monk.liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This adds the groundwork for conditional execution on
SDMA which is necessary for preemption.
Signed-off-by: Monk Liu <monk.liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
v2: the signaled items on the LRU maintain their order
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>