New features for 4.21:
amdgpu:
- Support for SDMA paging queue on vega
- Put compute EOP buffers into vram for better performance
- Share more code with amdkfd
- Support for scanout with DCC on gfx9
- Initial kerneldoc for DC
- Updated SMU firmware support for gfx8 chips
- Rework CSA handling for eventual support for preemption
- XGMI PSP support
- Clean up RLC handling
- Enable GPU reset by default on VI, SOC15 dGPUs
- Ring and IB test cleanups
amdkfd:
- Share more code with amdgpu
ttm:
- Move global init out of the drivers
scheduler:
- Track if schedulers are ready for work
- Timeout/fault handling changes to facilitate GPU recovery
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Alex Deucher <alexdeucher@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181114165113.3751-1-alexander.deucher@amd.com
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/gpu/drm/ttm/ttm_execbuf_util.c: In function 'ttm_eu_fence_buffer_objects':
drivers/gpu/drm/ttm/ttm_execbuf_util.c:190:24: warning:
variable 'driver' set but not used [-Wunused-but-set-variable]
It has not been used since
commit f2c24b83ae ("drm/ttm: flip the switch, and convert to dma_fence")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Make sure that the global BO state is always correctly initialized.
This allows us to remove all the per-device code that initializes it.
v2: fix up vbox (Alex)
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
As the name says we only need one global instance of ttm_bo_global.
Just use a single exported instance which is safe to initialize multiple times.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This way it can protect the whole BO global state.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
As the name says we only need one global instance of ttm_mem_global.
Drop all the driver initialization and just use a single exported
instance which is initialized during BO global initialization.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
So far, struct ttm_bo_global_ref was the only way of initializing a struct
ttm_bo_global. Providing separate initializer and release functions for
struct ttm_bo_global gives drivers the option of implementing their own
init and release callbacks for drm_global_references of type
DRM_GLOBAL_TTM_BO.
The original functions for initializing and releasing via struct
ttm_bo_global_ref are wrappers around the new interfaces.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The functions ttm_bo_global_init() and ttm_bo_global_release() do not
receive an argument of type struct ttm_bo_global. Both take a struct
drm_global_reference that points to a struct ttm_bo_global_ref.
Renaming them reflects this.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Let's support simultaneous submissions to multiple engines.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.kernel.org/patch/10626149/
Move all entries between @first and including @last before @head.
This is useful for LRU lists where a whole block of entries should be
moved to the end of the list.
Used as a band aid in TTM, but better placed in the common list headers.
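For reference, the helper's shape is roughly the following (a sketch of
the list_bulk_move_tail() style splice; assumes <linux/list.h>):

    /* Move the closed range [first, last] in front of @head, i.e. to
     * the tail of the list that @head anchors. */
    static inline void list_bulk_move_tail(struct list_head *head,
                                           struct list_head *first,
                                           struct list_head *last)
    {
            /* unlink the block from its current position */
            first->prev->next = last->next;
            last->next->prev = first->prev;

            /* relink the block right before @head */
            head->prev->next = first;
            first->prev = head->prev;

            last->next = head;
            head->prev = last;
    }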
Acked-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Export ttm_bo_get_unless_zero() to be used when looking up buffer
objects that are removed from the lookup structure in the destructor.
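Its shape is essentially a kref_get_unless_zero() wrapper (sketch,
assuming the BO's embedded kref):

    static inline struct ttm_buffer_object *
    ttm_bo_get_unless_zero(struct ttm_buffer_object *bo)
    {
            /* only take a reference if the BO is not already on its
             * way to destruction */
            if (!kref_get_unless_zero(&bo->kref))
                    return NULL;
            return bo;
    }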
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
No other driver is using this functionality so move it out of TTM and
into the vmwgfx driver. Update includes and remove exports.
Also annotate to remove false static analyzer lock balance warnings.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
While cutting the lists we sometimes accidentally added a list_head from
the stack to the LRUs, effectively corrupting the list.
Remove the list cutting and use explicit list manipulation instead.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-and-Tested-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Staring at the function for six hours, just to essentially move one line
of code. The problem was that the first list_cut_position call could result
in list2 pointing to la-la-land.
Signed-off-by: Christian König <christian.koenig@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The first parameter of list_cut_position() must point to an initialized
list.
Noticed thanks to KASAN pointing out something's fishy here.
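Illustrative example (variable names made up):

    LIST_HEAD(removed);     /* prev/next initialized to point at itself */
    list_cut_position(&removed, &lru, last_entry);

    /* A bare "struct list_head removed;" on the stack leaves prev/next
     * as garbage; if list_cut_position() returns early, that garbage is
     * later used by list operations and corrupts the source list. */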
Fixes: "drm/ttm: add bulk move function on LRU"
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This function allows us to bulk move a group of BOs to the tail of
their LRU. The positions of the group of BOs are stored in the
(first, last) bulk_move_pos structure.
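The bookkeeping looks roughly like this (sketch along the lines of the
ttm_lru_bulk_move{,_pos} definitions):

    struct ttm_lru_bulk_move_pos {
            struct ttm_buffer_object *first;  /* first BO of the block */
            struct ttm_buffer_object *last;   /* last BO of the block */
    };

    struct ttm_lru_bulk_move {
            struct ttm_lru_bulk_move_pos tt[TTM_MAX_BO_PRIORITY];
            struct ttm_lru_bulk_move_pos vram[TTM_MAX_BO_PRIORITY];
            struct ttm_lru_bulk_move_pos swap[TTM_MAX_BO_PRIORITY];
    };

    /* move all BOs recorded in @bulk to the tail of their LRUs */
    void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk);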
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When moving a BO to the end of its LRU, remember the BO's position.
Make sure all moved BOs are in between "first" and "last", so they
can be bulk moved together.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This code is not used.
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All non-x86 definitions are moved to the ttm_set_memory header, so
remove them from ttm_tt.c.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch fixes the error that occurs when CONFIG_X86 is not
configured; without it, the error below is encountered.
All errors (new ones prefixed by >>):
drivers/gpu/drm/ttm/ttm_page_alloc_dma.c: In function 'ttm_set_pages_caching':
>> drivers/gpu/drm/ttm/ttm_page_alloc_dma.c:272:7: error: implicit declaration of function 'set_pages_array_uc'; did you mean 'ttm_set_pages_array_uc'? [-Werror=implicit-function-declaration]
r = set_pages_array_uc(pages, cpages);
^~~~~~~~~~~~~~~~~~
ttm_set_pages_array_uc
cc1: some warnings being treated as errors
Reported-by: kbuild test robot <lkp@intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Every set_pages_array_wb call resulted in cross-core
interrupts and TLB flushes. Merge more of them for
less overhead.
This reduces the time needed to free a 1.6 GiB GTT WC
buffer as part of Vulkan CTS from ~2 sec to < 0.25 sec.
(Allocation still takes more than 2 sec though)
(v2): use set_pages_wb instead of set_memory_wb.
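The batching idea, as a sketch (illustrative, not the exact diff):
transition a whole array of pages back to write-back with one call so
the cross-core flush happens once per batch rather than once per page.

    static void ttm_pages_put(struct page *pages[], unsigned npages)
    {
            /* one call covers the whole array -> one TLB shootdown */
            if (set_pages_array_wb(pages, npages))
                    pr_err("Failed to set %u pages to wb!\n", npages);

            while (npages)
                    __free_page(pages[--npages]);
    }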
Signed-off-by: Bas Nieuwenhuizen <basni@chromium.org>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All non-x86 definitions are moved to the ttm_set_memory header, so
remove them from ttm_page_alloc.c.
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Bas Nieuwenhuizen <basni@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All non-x86 definitions are moved to the ttm_set_memory header, so
remove them from ttm_page_alloc_dma.c.
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Bas Nieuwenhuizen <basni@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
A call to ttm_bo_unref() clears the supplied pointer to NULL, while
ttm_bo_put() does not. None of the converted call sites requires the
pointer to become NULL, so the respective assignments have been left
out of the patch.
Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The TTM buffer-object interface provides ttm_bo_reference() and
ttm_bo_unref() for managing reference counts. Replacing them with
ttm_bo_get() and ttm_bo_put() aligns the API with conventions used
throughout the Linux kernel.
The implementation of ttm_bo_unref() clears the supplied pointer
to NULL. This leads to workarounds where the caller saves the
pointer's value before de-referencing the BO. ttm_bo_put() does
not clear the supplied pointer.
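Illustrative only, not taken from the patch:

    /* old interface: takes the pointer's address and clears it */
    static void drop_ref_old(struct ttm_buffer_object *bo)
    {
            ttm_bo_unref(&bo);      /* the (local) bo is now NULL */
    }

    /* new interface: mirrors kernel-wide get/put conventions */
    static void drop_ref_new(struct ttm_buffer_object *bo)
    {
            ttm_bo_put(bo);         /* pointer value left untouched */
    }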
Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Make use of the swap macro and remove the unnecessary variable *tmp_mem*.
This makes the code easier to read and maintain. Also, reduces the
stack usage.
This code was detected with the help of Coccinelle.
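The pattern, sketched with assumed names:

    /* before: manual exchange through a temporary */
    struct ttm_mem_reg tmp_mem = *old_mem;
    *old_mem = *new_mem;
    *new_mem = tmp_mem;

    /* after: the swap() macro from <linux/kernel.h> does the same
     * without the extra variable */
    swap(*old_mem, *new_mem);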
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
First feature request for 4.19. Highlights:
- Add initial amdgpu documentation
- Add initial GPU scheduler documentation
- GPU scheduler fixes for dying processes
- Add support for the JPEG engine on VCN
- Switch CI to use powerplay by default
- EDC support for CZ
- More powerplay cleanups
- Misc DC fixes
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180621161138.3008-1-alexander.deucher@amd.com
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.
Ref-> commit 1c8f422059 ("mm: change return type to vm_fault_t")
Previously vm_insert_{mixed,pfn} returned an errno which the driver
then mapped to a VM_FAULT_* type. The new functions
vmf_insert_{mixed,pfn} remove this inefficiency by returning a
VM_FAULT_* type directly.
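Sketch of the conversion in a fault handler (illustrative, with
assumed variable names):

    int err;
    vm_fault_t ret;

    /* before: translate the errno from vm_insert_mixed() by hand */
    err = vm_insert_mixed(vma, address, pfn);
    if (err == 0 || err == -EBUSY)
            ret = VM_FAULT_NOPAGE;
    else if (err == -ENOMEM)
            ret = VM_FAULT_OOM;
    else
            ret = VM_FAULT_SIGBUS;

    /* after: the helper already returns a vm_fault_t */
    ret = vmf_insert_mixed(vma, address, pfn);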
Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This is dual licensed under GPL-2.0 or MIT.
Signed-off-by: Dirk Hohndel (VMware) <dirk@hohndel.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Then the priority can be set before initialization. By default this
requires the ttm BO to be kzalloc'ed; in fact, we always do so.
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: David Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
GFP_TRANSHUGE tries very hard to allocate huge pages, which can result
in long delays with high memory pressure. I have observed firefox
freezing for up to around a minute due to this while restic was taking
a full system backup.
Since we don't really need huge pages, use GFP_TRANSHUGE_LIGHT |
__GFP_NORETRY instead, in order to fail quickly when there are no huge
pages available.
Set __GFP_KSWAPD_RECLAIM as well, in order for huge pages to be freed
up in the background if necessary.
With these changes, I'm no longer seeing freezes during a restic backup.
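The resulting flag combination is roughly (sketch):

    /* try for huge pages, but fail fast under memory pressure and let
     * kswapd reclaim in the background instead of stalling */
    gfp_t huge_flags = GFP_TRANSHUGE_LIGHT | __GFP_NORETRY |
                       __GFP_KSWAPD_RECLAIM;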
Cc: stable@vger.kernel.org
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Make sure the transferred BO is never destroyed before the transfer BO.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
It will be used by vmwgfx cpu blit.
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Use helpers to perform the kmap_atomic_prot() functionality to
a) Avoid in-function ifdefs that violate the kernel coding policy,
b) Facilitate exporting the functionality.
This commit should not change any functionality.
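The helper's shape is roughly the following (sketch; the function name
and the non-x86 fallback are assumptions):

    static void *ttm_kmap_atomic_prot(struct page *page, pgprot_t prot)
    {
            if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
                    return kmap_atomic(page);
    #ifdef CONFIG_X86
            return kmap_atomic_prot(page, prot);
    #else
            /* honour prot without kmap_atomic_prot(); the matching
             * unmap helper must pair vunmap()/kunmap_atomic() */
            return vmap(&page, 1, 0, prot);
    #endif
    }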
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Better to set this with all other fields as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Instead of calculating the size in bytes just to recalculate the number
of pages from it, pass the BO directly to the function.
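Illustrative prototype change (sketch; the exact signatures may
differ):

    /* before: size in bytes, recomputed into a page count internally */
    int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev,
                    unsigned long size, uint32_t page_flags);

    /* after: pass the BO; the page count comes from the BO itself */
    int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
                    uint32_t page_flags);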
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Allows us to gut a BO of its backing store when the driver says that it
isn't needed any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This allows drivers to only allocate dma addresses, but not a page
array.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Let's stop mangling everything in a single header and create one header
per object instead.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Acked-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Clean up ttm_tt_create a bit.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Rename ttm_bo_add_ttm to ttm_tt_create and move it into ttm_tt.c.
v2: separate the cleanup.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Both the free memory space and the lower limit include two parts:
system memory and swap space.
The OOM triggered by TTM happens as follows: first the swap space
fills up with swapped-out pages, soon after that system memory also
fills up with TTM pages, and then any memory allocation request runs
into OOM.
Cover two cases (see the sketch after the revision notes below):
a. if there is no swap disk at all, or the free swap space is under
   the swap memory limit but the available system memory is bigger
   than the system memory limit, allow the TTM allocation;
b. if the available system memory is less than the system memory
   limit but the free swap space is bigger than the swap memory
   limit, allow the TTM allocation.
v2: merge the two memory limits (swap and system) into one
v3: keep original behavior except ttm_opt_ctx->flags with
TTM_OPT_FLAG_FORCE_ALLOC
v4: always set force_alloc as tx->flags & TTM_OPT_FLAG_FORCE_ALLOC
v5: add an attribute for lower_mem_limit
v6: set lower_mem_limit as 0 to keep original behavior
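Sketch of the resulting check (illustrative; helper and field names
are assumptions): deny a TTM allocation only when the combined free
system memory plus free swap would drop below the lower limit, unless
the caller forces the allocation.

    static bool ttm_check_under_lowerlimit(struct ttm_mem_global *glob,
                                           uint64_t num_pages,
                                           struct ttm_operation_ctx *ctx)
    {
            int64_t available;

            /* callers that must not fail bypass the limit */
            if (ctx->flags & TTM_OPT_FLAG_FORCE_ALLOC)
                    return false;

            /* free space = free system memory + free swap space */
            available = get_nr_swap_pages() + si_mem_available();
            available -= num_pages;

            return available < glob->lower_mem_limit;
    }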
Signed-off-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This is never used as a parameter; the only driver actually using it
is nouveau, and there it is initialized after the BO is initialized.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Only used by the AGP backend and there it can be easily accessed using
ttm->bdev->glob.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The pointer is available as ttm->bdev->glob as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The pointer is available as bo->bdev->glob as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use ttm_pool_populate/ttm_pool_unpopulate if the driver doesn't provide
a function.
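Sketch of the fallback (illustrative):

    int ttm_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
    {
            if (ttm->state != tt_unpopulated)
                    return 0;

            /* use the driver hook when provided... */
            if (ttm->bdev->driver->ttm_tt_populate)
                    return ttm->bdev->driver->ttm_tt_populate(ttm, ctx);

            /* ...otherwise fall back to the generic page pool */
            return ttm_pool_populate(ttm, ctx);
    }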
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>