Staring at the function for six hours, just to essentially move one line
of code. The problem was that the first list_cut_position call could result
in list2 pointing to la-la-land.
Signed-off-by: Christian König <christian.koenig@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The first parameter of list_cut_position() must point to an initialized
list.
Noticed thanks to KASAN pointing out something's fishy here.
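For illustration, a minimal sketch of the required pattern (not the actual TTM helper; the wrapper name is made up):

	#include <linux/list.h>

	/* list_cut_position() may return without touching @dst (e.g. when
	 * @src is empty), so @dst must already be a valid, initialized list
	 * head or later use of it walks into uninitialized memory.
	 */
	static void example_cut(struct list_head *dst, struct list_head *src,
				struct list_head *cut)
	{
		INIT_LIST_HEAD(dst);			/* required before the cut */
		list_cut_position(dst, src, cut);	/* moves src head..cut into dst */
	}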
Fixes: "drm/ttm: add bulk move function on LRU"
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This function allows us to bulk move a group of BOs to the tail of their LRU.
The positions of the BOs in the group are stored in the (first, last)
bulk_move_pos structure.
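As a rough sketch of the idea (types and field names follow the commit text, not necessarily the final TTM declarations), the helper only has to cut the remembered [first, last] range out of the LRU and splice it back at the tail:

	/* Sketch only: track the first and last BO of a contiguous LRU range. */
	struct bulk_move_pos {
		struct ttm_buffer_object *first;
		struct ttm_buffer_object *last;
	};

	/* Move the whole [first, last] range to the LRU tail with a handful
	 * of list operations instead of touching every BO individually.
	 */
	static void example_bulk_move_tail(struct bulk_move_pos *pos,
					   struct list_head *lru)
	{
		LIST_HEAD(entries);	/* both heads initialized, see the fix above */
		LIST_HEAD(before);

		/* entries = lru head .. last */
		list_cut_position(&entries, lru, &pos->last->lru);
		/* before = everything in front of first */
		list_cut_position(&before, &entries, pos->first->lru.prev);
		/* put the untouched front back, then append [first, last] */
		list_splice(&before, lru);
		list_splice_tail(&entries, lru);
	}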
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When moving a BO to the end of its LRU, we need to remember the BO's position.
Make sure all moved BOs are in between "first" and "last", so that they can be
bulk moved together.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This code is not used.
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All non-x86 definitions have been moved to the ttm_set_memory header, so remove
them from ttm_tt.c.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch fixes the build error hit when CONFIG_X86 is not configured;
otherwise the error below is encountered.
All errors (new ones prefixed by >>):
drivers/gpu/drm/ttm/ttm_page_alloc_dma.c: In function 'ttm_set_pages_caching':
>> drivers/gpu/drm/ttm/ttm_page_alloc_dma.c:272:7: error: implicit declaration of function 'set_pages_array_uc'; did you mean
   'ttm_set_pages_array_uc'? [-Werror=implicit-function-declaration]
r = set_pages_array_uc(pages, cpages);
^~~~~~~~~~~~~~~~~~
ttm_set_pages_array_uc
cc1: some warnings being treated as errors
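The fix is to route these calls through TTM wrappers that degrade to no-ops when CONFIG_X86 is not set; a hedged sketch of that pattern (the real ttm_set_memory header may differ):

	#ifdef CONFIG_X86
	#include <asm/set_memory.h>

	static inline int ttm_set_pages_array_uc(struct page **pages, int addrinarray)
	{
		return set_pages_array_uc(pages, addrinarray);
	}
	#else
	/* Non-x86 has no page attribute interface to call; treat it as a no-op. */
	static inline int ttm_set_pages_array_uc(struct page **pages, int addrinarray)
	{
		return 0;
	}
	#endif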
Reported-by: kbuild test robot <lkp@intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Every set_pages_array_wb call resulted in cross-core
interrupts and TLB flushes. Merge more of them for
less overhead.
This reduces the time needed to free a 1.6 GiB GTT WC
buffer as part of Vulkan CTS from ~2 sec to < 0.25 sec.
(Allocation still takes more than 2 sec though)
(v2): use set_pages_wb instead of set_memory_wb.
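A hedged sketch of the batching idea (not the exact pool allocator code):

	/* Instead of flipping pages back to write-back one chunk at a time
	 * (each call can trigger cross-core IPIs and TLB flushes), gather
	 * the pages of a batch into one array and convert them in one call.
	 */
	static void example_pages_to_wb(struct page **pages, unsigned int npages)
	{
	#ifdef CONFIG_X86
		/* slow: one attribute change (and shootdown) per page
		 *	for (i = 0; i < npages; i++)
		 *		set_pages_wb(pages[i], 1);
		 */

		/* fast: one attribute change for the whole batch */
		set_pages_array_wb(pages, npages);
	#endif
	}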
Signed-off-by: Bas Nieuwenhuizen <basni@chromium.org>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All non-x86 definitions are moved to ttm_set_memory header, so remove it from
ttm_page_alloc.c.
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Bas Nieuwenhuizen <basni@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All non-x86 definitions are moved to ttm_set_memory header, so remove it from
ttm_page_alloc_dma.c.
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Bas Nieuwenhuizen <basni@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
A call to ttm_bo_unref() clears the supplied pointer to NULL, while
ttm_bo_put() does not. None of the converted call sites requires the
pointer to become NULL, so the respective assignments have been left
out of the patch.
Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The TTM buffer-object interface provides ttm_bo_reference() and
ttm_bo_unref() for managing reference counts. Replacing them with
ttm_bo_get() and ttm_bo_put() aligns the API with conventions used
throughout the Linux kernel.
The implementation of ttm_bo_unref() clears the supplied pointer
to NULL. This leads to workarounds where the caller saves the
pointer's value before dropping the reference. ttm_bo_put() does
not clear the supplied pointer.
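A hedged sketch of a typical call-site conversion (function names here are illustrative):

	/* Before: ttm_bo_unref() takes a pointer-to-pointer and NULLs it. */
	static void example_release_old(struct ttm_buffer_object **bo)
	{
		ttm_bo_unref(bo);	/* *bo is NULL afterwards */
	}

	/* After: ttm_bo_get()/ttm_bo_put() follow the usual kernel get/put
	 * convention and leave the caller's pointer alone.
	 */
	static void example_release_new(struct ttm_buffer_object *bo)
	{
		ttm_bo_put(bo);		/* caller clears its own pointer if needed */
	}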
Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Make use of the swap macro and remove unnecessary variable *tmp_mem*.
This makes the code easier to read and maintain. It also reduces
stack usage.
This code was detected with the help of Coccinelle.
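The transformation is the classic three-statement swap collapsed into the kernel's swap() macro; a minimal illustration (not the exact driver code):

	#include <linux/kernel.h>	/* swap() */

	static void example_swap(struct ttm_mem_reg *a, struct ttm_mem_reg *b)
	{
		/* Before: needs a temporary on the stack.
		 *	struct ttm_mem_reg tmp_mem = *a;
		 *	*a = *b;
		 *	*b = tmp_mem;
		 */

		/* After: same effect, no named temporary. */
		swap(*a, *b);
	}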
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
First feature request for 4.19. Highlights:
- Add initial amdgpu documentation
- Add initial GPU scheduler documentation
- GPU scheduler fixes for dying processes
- Add support for the JPEG engine on VCN
- Switch CI to use powerplay by default
- EDC support for CZ
- More powerplay cleanups
- Misc DC fixes
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180621161138.3008-1-alexander.deucher@amd.com
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.
Ref-> commit 1c8f422059 ("mm: change return type to vm_fault_t")
Previously vm_insert_{mixed,pfn} returned an errno, which the driver
mapped into a VM_FAULT_* type. The new functions vmf_insert_{mixed,pfn}
remove this inefficiency by returning a VM_FAULT_* type directly.
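A hedged, simplified sketch of the conversion (not the full TTM fault path):

	/* Before, the errno from vm_insert_pfn() had to be translated by hand:
	 *	int err = vm_insert_pfn(vmf->vma, vmf->address, pfn);
	 *	if (err == -ENOMEM)
	 *		return VM_FAULT_OOM;
	 *	...
	 */
	static vm_fault_t example_fault(struct vm_fault *vmf, unsigned long pfn)
	{
		/* After: vmf_insert_pfn() already returns a VM_FAULT_* code. */
		return vmf_insert_pfn(vmf->vma, vmf->address, pfn);
	}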
Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This is dual licensed under GPL-2.0 or MIT.
Signed-off-by: Dirk Hohndel (VMware) <dirk@hohndel.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Then the priority can be set before initialization. For the default to work,
the ttm BO needs to be kzalloc'ed, which in fact we always do.
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: David Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
GFP_TRANSHUGE tries very hard to allocate huge pages, which can result
in long delays with high memory pressure. I have observed firefox
freezing for up to around a minute due to this while restic was taking
a full system backup.
Since we don't really need huge pages, use GFP_TRANSHUGE_LIGHT |
__GFP_NORETRY instead, in order to fail quickly when there are no huge
pages available.
Set __GFP_KSWAPD_RECLAIM as well, in order for huge pages to be freed
up in the background if necessary.
With these changes, I'm no longer seeing freezes during a restic backup.
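The resulting flag combination looks roughly like this (sketch only; the exact mask lives in the TTM page allocator):

	#include <linux/gfp.h>

	/* GFP_TRANSHUGE_LIGHT drops the reclaim bits, so add
	 * __GFP_KSWAPD_RECLAIM back for background reclaim, and use
	 * __GFP_NORETRY so a huge page attempt fails fast under memory
	 * pressure instead of stalling.
	 */
	static struct page *example_alloc_huge(unsigned int order)
	{
		gfp_t gfp = GFP_TRANSHUGE_LIGHT | __GFP_NORETRY | __GFP_KSWAPD_RECLAIM;

		return alloc_pages(gfp, order);
	}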
Cc: stable@vger.kernel.org
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Make sure the transferred BO is never destroyed before the transfer BO.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
It will be used by vmwgfx cpu blit.
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Use helpers to perform the kmap_atomic_prot() functionality to
a) Avoid in-function ifdefs that violate the kernel coding policy,
b) Facilitate exporting the functionality.
This commit should not change any functionality.
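A hedged sketch of the helper shape (the actual TTM helper may differ in name and details):

	/* Keep the architecture special case in one place instead of
	 * open-coding #ifdefs at every call site.
	 */
	static void *example_kmap_atomic_prot(struct page *page, pgprot_t prot)
	{
		if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
			return kmap_atomic(page);

	#ifdef CONFIG_X86
		return kmap_atomic_prot(page, prot);
	#else
		return vmap(&page, 1, 0, prot);	/* unmap side must match, of course */
	#endif
	}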
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Better to set this with all other fields as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Instead of calculating the size in bytes just to recalculate the number
of pages from it pass the BO directly to the function.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Allows us to gut a BO of its backing store when the driver says that it
isn't needed any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This allows drivers to only allocate dma addresses, but not a page
array.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Let's stop mangling everything in a single header and create one header
per object instead.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Acked-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cleanup ttm_tt_create a bit.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Rename ttm_bo_add_ttm to ttm_tt_create and move it into ttm_tt.c.
v2: separate the cleanup.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The free memory space and the lower limit both include two parts:
system memory and swap space.
For an OOM triggered by TTM, the scenario is as follows:
first the swap space fills up with swapped-out pages, then system memory
also fills up with TTM pages, and from that point on any memory
allocation request runs into the OOM killer.
The check covers two cases (see the sketch below):
a. if there is no swap disk at all, or free swap space is under the swap
   mem limit but available system memory is bigger than the sys mem
   limit, allow the TTM allocation;
b. if the available system memory is less than the sys mem limit but
   free swap space is bigger than the swap mem limit, allow the TTM
   allocation.
v2: merge the two memory limits (swap and system) into one
v3: keep original behavior except ttm_opt_ctx->flags with
TTM_OPT_FLAG_FORCE_ALLOC
v4: always set force_alloc as tx->flags & TTM_OPT_FLAG_FORCE_ALLOC
v5: add an attribute for lower_mem_limit
v6: set lower_mem_limit as 0 to keep original behavior
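A hedged sketch of the resulting check with the merged (v2) limit; names are illustrative, not the exact ttm_memory code:

	#include <linux/mm.h>	/* si_mem_available() */
	#include <linux/swap.h>	/* get_nr_swap_pages() */

	/* Refuse a TTM allocation only when free system memory plus free
	 * swap, minus the request, would drop below the configured lower
	 * limit.  A lower_mem_limit of 0 keeps the original behavior (v6).
	 */
	static bool example_under_lower_limit(uint64_t lower_mem_limit,
					      uint64_t num_pages, bool force_alloc)
	{
		int64_t available;

		if (force_alloc)	/* e.g. the page-fault path */
			return false;

		available = get_nr_swap_pages() + si_mem_available();
		available -= num_pages;

		return available < (int64_t)lower_mem_limit;
	}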
Signed-off-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Never used as a parameter; the only driver actually using this is nouveau,
and there it is initialized after the BO is initialized.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Only used by the AGP backend and there it can be easily accessed using
ttm->bdev->glob.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The pointer is available as ttm->bdev->glob as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The pointer is available as bo->bdev->glob as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use ttm_pool_populate/ttm_pool_unpopulate if the driver doesn't provide
a function.
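A hedged sketch of the fallback (close to, but not necessarily identical with, the TTM core dispatch):

	static int example_tt_populate(struct ttm_tt *ttm,
				       struct ttm_operation_ctx *ctx)
	{
		/* Prefer the driver's own populate callback ... */
		if (ttm->bdev->driver->ttm_tt_populate)
			return ttm->bdev->driver->ttm_tt_populate(ttm, ctx);

		/* ... and fall back to the generic page pool otherwise. */
		return ttm_pool_populate(ttm, ctx);
	}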
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Because ttm_bo_force_list_clean() is only called on two occasions:
1. By ttm_bo_evict_mm() during suspend.
2. By ttm_bo_clean_mm() when the driver unloads.
In both cases we absolutely don't want any memory allocation failure.
Signed-off-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Set TTM_OPT_FLAG_FORCE_ALLOC when we are servicing the page fault routine.
In ttm_mem_global_reserve(), always allow the GTT page reservation when we
are in the page fault routine, because the page fault path has already
grabbed the system memory and allowing this exception is harmless.
Otherwise it would trigger the OOM killer.
It will be used later.
v2: set the FORCE_ALLOC always
v3: minor refine
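A hedged sketch of a caller on the page-fault path (simplified, not the exact TTM code):

	/* The faulting process already holds the system memory, so force
	 * the global memory accounting to succeed rather than failing and
	 * pushing the system toward the OOM killer.
	 */
	static int example_fault_path_alloc(struct ttm_mem_global *glob, uint64_t size)
	{
		struct ttm_operation_ctx ctx = {
			.interruptible	= false,
			.no_wait_gpu	= false,
			.flags		= TTM_OPT_FLAG_FORCE_ALLOC,
		};

		return ttm_mem_global_alloc(glob, size, &ctx);
	}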
Signed-off-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
To save memory, and so that more bit flags can be used in the future.
Signed-off-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
To aid debugging set the page mapping during allocation instead of
during VM faults.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Stop calling the driver callback directly.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Remove redundant store of return code.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add missing {} braces.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add missing {} braces.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Hoist the comparison of ret to -EDEADLK above the two code paths to
simplify the function.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The dual ret/retval was more complex than it needed to be. Now
we drop the retval variable and assign the appropriate VM
codes to ret instead.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add missing {} braces.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Flip the logic of the comparison and remove
the redundant variable for the pool address.
(v2): Remove {} bracing.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Correct missing {} style.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Explicitly return errors in ttm_tt_alloc_page_directory() and
ttm_dma_tt_alloc_page_directory() instead of relying on
further logic to detect errors.
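A hedged sketch of the pattern (close to, but not necessarily identical with, the final ttm_tt.c code):

	static int example_alloc_page_directory(struct ttm_tt *ttm)
	{
		ttm->pages = kvmalloc_array(ttm->num_pages, sizeof(void *),
					    GFP_KERNEL | __GFP_ZERO);
		if (!ttm->pages)
			return -ENOMEM;	/* report the failure right here */

		return 0;
	}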
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>