/*
 * Copyright 2009 Jerome Glisse.
 * All Rights Reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sub license, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
 * USE OR OTHER DEALINGS IN THE SOFTWARE.
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial portions
 * of the Software.
 */
/*
 * Authors:
 *    Jerome Glisse <glisse@freedesktop.org>
 *    Thomas Hellstrom <thomas-at-tungstengraphics-dot-com>
 *    Dave Airlie
 */
#include <linux/list.h>
#include <linux/slab.h>
#include <drm/drmP.h>
#include "radeon_drm.h"
#include "radeon.h"
#include "radeon_trace.h"


int radeon_ttm_init(struct radeon_device *rdev);
void radeon_ttm_fini(struct radeon_device *rdev);
static void radeon_bo_clear_surface_reg(struct radeon_bo *bo);

/*
 * To exclude mutual BO access we rely on bo_reserve exclusion, as all
 * functions call it.
 */
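
/*
 * Illustrative sketch (not part of the driver, hence the #if 0 guard):
 * the reserve/unreserve bracket callers of this file are expected to
 * follow. The accessors below assume the bo is reserved;
 * radeon_bo_unreserve() is the inline helper from radeon_object.h.
 */
#if 0
static int radeon_bo_example_access(struct radeon_bo *bo)
{
	int r;

	r = radeon_bo_reserve(bo, false);	/* may sleep, may return -ERESTARTSYS */
	if (unlikely(r != 0))
		return r;
	/* ... exclusive access to the bo: kmap, tiling queries, ... */
	radeon_bo_unreserve(bo);		/* release the exclusion */
	return 0;
}
#endif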

void radeon_bo_clear_va(struct radeon_bo *bo)
{
	struct radeon_bo_va *bo_va, *tmp;

	list_for_each_entry_safe(bo_va, tmp, &bo->va, bo_list) {
		/* remove from all vm address spaces */
		radeon_vm_bo_rmv(bo->rdev, bo_va->vm, bo);
	}
}

static void radeon_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
	struct radeon_bo *bo;

	bo = container_of(tbo, struct radeon_bo, tbo);
	mutex_lock(&bo->rdev->gem.mutex);
	list_del_init(&bo->list);
	mutex_unlock(&bo->rdev->gem.mutex);
	radeon_bo_clear_surface_reg(bo);
	radeon_bo_clear_va(bo);
	drm_gem_object_release(&bo->gem_base);
	kfree(bo);
}

bool radeon_ttm_bo_is_radeon_bo(struct ttm_buffer_object *bo)
{
	if (bo->destroy == &radeon_ttm_bo_destroy)
		return true;
	return false;
}

void radeon_ttm_placement_from_domain(struct radeon_bo *rbo, u32 domain)
{
	u32 c = 0;

	rbo->placement.fpfn = 0;
	rbo->placement.lpfn = 0;
	rbo->placement.placement = rbo->placements;
	rbo->placement.busy_placement = rbo->placements;
	if (domain & RADEON_GEM_DOMAIN_VRAM)
		rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
					TTM_PL_FLAG_VRAM;
	if (domain & RADEON_GEM_DOMAIN_GTT)
		rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
	if (domain & RADEON_GEM_DOMAIN_CPU)
		rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
	if (!c)
		rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
	rbo->placement.num_placement = c;
	rbo->placement.num_busy_placement = c;
}
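
/*
 * Illustrative sketch (never built): how a domain mask expands into TTM
 * placements. RADEON_GEM_DOMAIN_VRAM | RADEON_GEM_DOMAIN_GTT yields two
 * placement entries, tried in order, which is what the VRAM-to-GTT
 * fallback paths below rely on.
 */
#if 0
static void radeon_bo_example_placement(struct radeon_bo *rbo)
{
	radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM |
					      RADEON_GEM_DOMAIN_GTT);
	/* rbo->placement.num_placement == 2: VRAM first, then GTT */
}
#endif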

int radeon_bo_create(struct radeon_device *rdev,
		     unsigned long size, int byte_align, bool kernel, u32 domain,
		     struct sg_table *sg, struct radeon_bo **bo_ptr)
{
	struct radeon_bo *bo;
	enum ttm_bo_type type;
	unsigned long page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT;
	unsigned long max_size = 0;
	size_t acc_size;
	int r;

	size = ALIGN(size, PAGE_SIZE);

	rdev->mman.bdev.dev_mapping = rdev->ddev->dev_mapping;
	if (kernel) {
		type = ttm_bo_type_kernel;
	} else if (sg) {
		type = ttm_bo_type_sg;
	} else {
		type = ttm_bo_type_device;
	}
	*bo_ptr = NULL;

	/* maximum bo size is the minimum of visible vram and gtt size */
	max_size = min(rdev->mc.visible_vram_size, rdev->mc.gtt_size);
	if ((page_align << PAGE_SHIFT) >= max_size) {
		printk(KERN_WARNING "%s:%d alloc size %ldMB bigger than %ldMB limit\n",
		       __func__, __LINE__, page_align >> (20 - PAGE_SHIFT), max_size >> 20);
		return -ENOMEM;
	}

	acc_size = ttm_bo_dma_acc_size(&rdev->mman.bdev, size,
				       sizeof(struct radeon_bo));

retry:
	bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL);
	if (bo == NULL)
		return -ENOMEM;
	r = drm_gem_object_init(rdev->ddev, &bo->gem_base, size);
	if (unlikely(r)) {
		kfree(bo);
		return r;
	}
	bo->rdev = rdev;
	bo->gem_base.driver_private = NULL;
	bo->surface_reg = -1;
	INIT_LIST_HEAD(&bo->list);
	INIT_LIST_HEAD(&bo->va);
	radeon_ttm_placement_from_domain(bo, domain);
	/* Kernel allocations are uninterruptible */
	down_read(&rdev->pm.mclk_lock);
	r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
			&bo->placement, page_align, 0, !kernel, NULL,
			acc_size, sg, &radeon_ttm_bo_destroy);
	up_read(&rdev->pm.mclk_lock);
	if (unlikely(r != 0)) {
		if (r != -ERESTARTSYS) {
			if (domain == RADEON_GEM_DOMAIN_VRAM) {
				domain |= RADEON_GEM_DOMAIN_GTT;
				goto retry;
			}
			dev_err(rdev->dev,
				"object_init failed for (%lu, 0x%08X)\n",
				size, domain);
		}
		return r;
	}
	*bo_ptr = bo;

	trace_radeon_bo_create(bo);

	return 0;
}
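
/*
 * Illustrative sketch (never built): the typical lifecycle a caller of
 * radeon_bo_create() follows. The VRAM-to-GTT fallback on allocation
 * failure is handled inside radeon_bo_create() via the retry label
 * above; the caller only pairs the create with radeon_bo_unref().
 */
#if 0
static int radeon_bo_example_create(struct radeon_device *rdev,
				    struct radeon_bo **bo_ptr)
{
	int r;

	/* one page, kernel bo, VRAM placement, no sg table */
	r = radeon_bo_create(rdev, PAGE_SIZE, PAGE_SIZE, true,
			     RADEON_GEM_DOMAIN_VRAM, NULL, bo_ptr);
	if (r)
		return r;
	/* ... use the bo ... */
	radeon_bo_unref(bo_ptr);	/* drops the ttm reference, NULLs *bo_ptr */
	return 0;
}
#endif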

int radeon_bo_kmap(struct radeon_bo *bo, void **ptr)
{
	bool is_iomem;
	int r;

	if (bo->kptr) {
		if (ptr) {
			*ptr = bo->kptr;
		}
		return 0;
	}
	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
	if (r) {
		return r;
	}
	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
	if (ptr) {
		*ptr = bo->kptr;
	}
	radeon_bo_check_tiling(bo, 0, 0);
	return 0;
}

void radeon_bo_kunmap(struct radeon_bo *bo)
{
	if (bo->kptr == NULL)
		return;
	bo->kptr = NULL;
	radeon_bo_check_tiling(bo, 0, 0);
	ttm_bo_kunmap(&bo->kmap);
}
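
/*
 * Illustrative sketch (never built): radeon_bo_kmap()/radeon_bo_kunmap()
 * cache the kernel mapping in bo->kptr, so repeated kmaps are cheap, but
 * both calls assume the bo is reserved (and usually pinned) by the
 * caller. radeon_bo_size() is the inline helper from radeon_object.h.
 */
#if 0
static int radeon_bo_example_fill(struct radeon_bo *bo)
{
	void *ptr;
	int r;

	r = radeon_bo_kmap(bo, &ptr);	/* bo must already be reserved */
	if (r)
		return r;
	memset(ptr, 0, radeon_bo_size(bo));
	radeon_bo_kunmap(bo);
	return 0;
}
#endif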

void radeon_bo_unref(struct radeon_bo **bo)
{
	struct ttm_buffer_object *tbo;
	struct radeon_device *rdev;

	if ((*bo) == NULL)
		return;
	rdev = (*bo)->rdev;
	tbo = &((*bo)->tbo);
	down_read(&rdev->pm.mclk_lock);
	ttm_bo_unref(&tbo);
	up_read(&rdev->pm.mclk_lock);
	if (tbo == NULL)
		*bo = NULL;
}

int radeon_bo_pin_restricted(struct radeon_bo *bo, u32 domain, u64 max_offset,
			     u64 *gpu_addr)
{
	int r, i;

	if (bo->pin_count) {
		bo->pin_count++;
		if (gpu_addr)
			*gpu_addr = radeon_bo_gpu_offset(bo);

		if (max_offset != 0) {
			u64 domain_start;

			if (domain == RADEON_GEM_DOMAIN_VRAM)
				domain_start = bo->rdev->mc.vram_start;
			else
				domain_start = bo->rdev->mc.gtt_start;
			WARN_ON_ONCE(max_offset <
				     (radeon_bo_gpu_offset(bo) - domain_start));
		}

		return 0;
	}
	radeon_ttm_placement_from_domain(bo, domain);
	if (domain == RADEON_GEM_DOMAIN_VRAM) {
		/* force to pin into visible video ram */
		bo->placement.lpfn = bo->rdev->mc.visible_vram_size >> PAGE_SHIFT;
	}
	if (max_offset) {
		u64 lpfn = max_offset >> PAGE_SHIFT;

		if (!bo->placement.lpfn)
			bo->placement.lpfn = bo->rdev->mc.gtt_size >> PAGE_SHIFT;

		if (lpfn < bo->placement.lpfn)
			bo->placement.lpfn = lpfn;
	}
	for (i = 0; i < bo->placement.num_placement; i++)
		bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false, false);
	if (likely(r == 0)) {
		bo->pin_count = 1;
		if (gpu_addr != NULL)
			*gpu_addr = radeon_bo_gpu_offset(bo);
	}
	if (unlikely(r != 0))
		dev_err(bo->rdev->dev, "%p pin failed\n", bo);
	return r;
}

int radeon_bo_pin(struct radeon_bo *bo, u32 domain, u64 *gpu_addr)
{
	return radeon_bo_pin_restricted(bo, domain, 0, gpu_addr);
}

int radeon_bo_unpin(struct radeon_bo *bo)
{
	int r, i;

	if (!bo->pin_count) {
		dev_warn(bo->rdev->dev, "%p unpin not necessary\n", bo);
		return 0;
	}
	bo->pin_count--;
	if (bo->pin_count)
		return 0;
	for (i = 0; i < bo->placement.num_placement; i++)
		bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false, false);
	if (unlikely(r != 0))
		dev_err(bo->rdev->dev, "%p validate failed for unpin\n", bo);
	return r;
}
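
/*
 * Illustrative sketch (never built): pin/unpin are reference counted and
 * must be balanced under reservation. Pinning flags every placement
 * NO_EVICT and revalidates; the returned gpu_addr stays stable until the
 * last unpin drops the count to zero.
 */
#if 0
static int radeon_bo_example_pin(struct radeon_bo *bo)
{
	u64 gpu_addr;
	int r;

	r = radeon_bo_reserve(bo, false);
	if (unlikely(r != 0))
		return r;
	r = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_VRAM, &gpu_addr);
	radeon_bo_unreserve(bo);
	if (r)
		return r;
	/* ... program gpu_addr into the hardware ... */

	r = radeon_bo_reserve(bo, false);
	if (unlikely(r != 0))
		return r;
	r = radeon_bo_unpin(bo);	/* drops NO_EVICT when count hits zero */
	radeon_bo_unreserve(bo);
	return r;
}
#endif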

int radeon_bo_evict_vram(struct radeon_device *rdev)
{
	/* late 2.6.33 fix IGP hibernate - we need pm ops to do this correctly */
	if (0 && (rdev->flags & RADEON_IS_IGP)) {
		if (rdev->mc.igp_sideport_enabled == false)
			/* Useless to evict on IGP chips */
			return 0;
	}
	return ttm_bo_evict_mm(&rdev->mman.bdev, TTM_PL_VRAM);
}

void radeon_bo_force_delete(struct radeon_device *rdev)
{
	struct radeon_bo *bo, *n;

	if (list_empty(&rdev->gem.objects)) {
		return;
	}
	dev_err(rdev->dev, "Userspace still has active objects!\n");
	list_for_each_entry_safe(bo, n, &rdev->gem.objects, list) {
		mutex_lock(&rdev->ddev->struct_mutex);
		dev_err(rdev->dev, "%p %p %lu %lu force free\n",
			&bo->gem_base, bo, (unsigned long)bo->gem_base.size,
			*((unsigned long *)&bo->gem_base.refcount));
		mutex_lock(&bo->rdev->gem.mutex);
		list_del_init(&bo->list);
		mutex_unlock(&bo->rdev->gem.mutex);
		/* this should unref the ttm bo */
		drm_gem_object_unreference(&bo->gem_base);
		mutex_unlock(&rdev->ddev->struct_mutex);
	}
}

int radeon_bo_init(struct radeon_device *rdev)
{
	/* Add an MTRR for the VRAM */
	rdev->mc.vram_mtrr = mtrr_add(rdev->mc.aper_base, rdev->mc.aper_size,
				      MTRR_TYPE_WRCOMB, 1);
	DRM_INFO("Detected VRAM RAM=%lluM, BAR=%lluM\n",
		 rdev->mc.mc_vram_size >> 20,
		 (unsigned long long)rdev->mc.aper_size >> 20);
	DRM_INFO("RAM width %dbits %cDR\n",
		 rdev->mc.vram_width, rdev->mc.vram_is_ddr ? 'D' : 'S');
	return radeon_ttm_init(rdev);
}

void radeon_bo_fini(struct radeon_device *rdev)
{
	radeon_ttm_fini(rdev);
}

void radeon_bo_list_add_object(struct radeon_bo_list *lobj,
			       struct list_head *head)
{
	if (lobj->wdomain) {
		list_add(&lobj->tv.head, head);
	} else {
		list_add_tail(&lobj->tv.head, head);
	}
}

int radeon_bo_list_validate(struct list_head *head)
{
	struct radeon_bo_list *lobj;
	struct radeon_bo *bo;
	u32 domain;
	int r;

	r = ttm_eu_reserve_buffers(head);
	if (unlikely(r != 0)) {
		return r;
	}
	list_for_each_entry(lobj, head, tv.head) {
		bo = lobj->bo;
		if (!bo->pin_count) {
			domain = lobj->wdomain ? lobj->wdomain : lobj->rdomain;

		retry:
			radeon_ttm_placement_from_domain(bo, domain);
			r = ttm_bo_validate(&bo->tbo, &bo->placement,
					    true, false, false);
			if (unlikely(r)) {
				if (r != -ERESTARTSYS && domain == RADEON_GEM_DOMAIN_VRAM) {
					domain |= RADEON_GEM_DOMAIN_GTT;
					goto retry;
				}
				return r;
			}
		}
		lobj->gpu_offset = radeon_bo_gpu_offset(bo);
		lobj->tiling_flags = bo->tiling_flags;
	}
	return 0;
}

int radeon_bo_fbdev_mmap(struct radeon_bo *bo,
			 struct vm_area_struct *vma)
{
	return ttm_fbdev_mmap(vma, &bo->tbo);
}

int radeon_bo_get_surface_reg(struct radeon_bo *bo)
{
	struct radeon_device *rdev = bo->rdev;
	struct radeon_surface_reg *reg;
	struct radeon_bo *old_object;
	int steal;
	int i;

	BUG_ON(!atomic_read(&bo->tbo.reserved));

	if (!bo->tiling_flags)
		return 0;

	if (bo->surface_reg >= 0) {
		reg = &rdev->surface_regs[bo->surface_reg];
		i = bo->surface_reg;
		goto out;
	}

	steal = -1;
	for (i = 0; i < RADEON_GEM_MAX_SURFACES; i++) {

		reg = &rdev->surface_regs[i];
		if (!reg->bo)
			break;

		old_object = reg->bo;
		if (old_object->pin_count == 0)
			steal = i;
	}

	/* if we are all out */
	if (i == RADEON_GEM_MAX_SURFACES) {
		if (steal == -1)
			return -ENOMEM;
		/* find someone with a surface reg and nuke their BO */
		reg = &rdev->surface_regs[steal];
		old_object = reg->bo;
		/* blow away the mapping */
		DRM_DEBUG("stealing surface reg %d from %p\n", steal, old_object);
		ttm_bo_unmap_virtual(&old_object->tbo);
		old_object->surface_reg = -1;
		i = steal;
	}

	bo->surface_reg = i;
	reg->bo = bo;

out:
	radeon_set_surface_reg(rdev, i, bo->tiling_flags, bo->pitch,
			       bo->tbo.mem.start << PAGE_SHIFT,
			       bo->tbo.num_pages << PAGE_SHIFT);
	return 0;
}

static void radeon_bo_clear_surface_reg(struct radeon_bo *bo)
{
	struct radeon_device *rdev = bo->rdev;
	struct radeon_surface_reg *reg;

	if (bo->surface_reg == -1)
		return;

	reg = &rdev->surface_regs[bo->surface_reg];
	radeon_clear_surface_reg(rdev, bo->surface_reg);

	reg->bo = NULL;
	bo->surface_reg = -1;
}

int radeon_bo_set_tiling_flags(struct radeon_bo *bo,
			       uint32_t tiling_flags, uint32_t pitch)
{
	struct radeon_device *rdev = bo->rdev;
	int r;

	if (rdev->family >= CHIP_CEDAR) {
		unsigned bankw, bankh, mtaspect, tilesplit, stilesplit;

		bankw = (tiling_flags >> RADEON_TILING_EG_BANKW_SHIFT) & RADEON_TILING_EG_BANKW_MASK;
		bankh = (tiling_flags >> RADEON_TILING_EG_BANKH_SHIFT) & RADEON_TILING_EG_BANKH_MASK;
		mtaspect = (tiling_flags >> RADEON_TILING_EG_MACRO_TILE_ASPECT_SHIFT) & RADEON_TILING_EG_MACRO_TILE_ASPECT_MASK;
		tilesplit = (tiling_flags >> RADEON_TILING_EG_TILE_SPLIT_SHIFT) & RADEON_TILING_EG_TILE_SPLIT_MASK;
		stilesplit = (tiling_flags >> RADEON_TILING_EG_STENCIL_TILE_SPLIT_SHIFT) & RADEON_TILING_EG_STENCIL_TILE_SPLIT_MASK;
		switch (bankw) {
		case 0:
		case 1:
		case 2:
		case 4:
		case 8:
			break;
		default:
			return -EINVAL;
		}
		switch (bankh) {
		case 0:
		case 1:
		case 2:
		case 4:
		case 8:
			break;
		default:
			return -EINVAL;
		}
		switch (mtaspect) {
		case 0:
		case 1:
		case 2:
		case 4:
		case 8:
			break;
		default:
			return -EINVAL;
		}
		if (tilesplit > 6) {
			return -EINVAL;
		}
		if (stilesplit > 6) {
			return -EINVAL;
		}
	}
	r = radeon_bo_reserve(bo, false);
	if (unlikely(r != 0))
		return r;
	bo->tiling_flags = tiling_flags;
	bo->pitch = pitch;
	radeon_bo_unreserve(bo);
	return 0;
}

void radeon_bo_get_tiling_flags(struct radeon_bo *bo,
				uint32_t *tiling_flags,
				uint32_t *pitch)
{
	BUG_ON(!atomic_read(&bo->tbo.reserved));
	if (tiling_flags)
		*tiling_flags = bo->tiling_flags;
	if (pitch)
		*pitch = bo->pitch;
}

int radeon_bo_check_tiling(struct radeon_bo *bo, bool has_moved,
			   bool force_drop)
{
	BUG_ON(!atomic_read(&bo->tbo.reserved));

	if (!(bo->tiling_flags & RADEON_TILING_SURFACE))
		return 0;

	if (force_drop) {
		radeon_bo_clear_surface_reg(bo);
		return 0;
	}

	if (bo->tbo.mem.mem_type != TTM_PL_VRAM) {
		if (!has_moved)
			return 0;

		if (bo->surface_reg >= 0)
			radeon_bo_clear_surface_reg(bo);
		return 0;
	}

	if ((bo->surface_reg >= 0) && !has_moved)
		return 0;

	return radeon_bo_get_surface_reg(bo);
}

void radeon_bo_move_notify(struct ttm_buffer_object *bo,
			   struct ttm_mem_reg *mem)
{
	struct radeon_bo *rbo;

	if (!radeon_ttm_bo_is_radeon_bo(bo))
		return;
	rbo = container_of(bo, struct radeon_bo, tbo);
	radeon_bo_check_tiling(rbo, 0, 1);
	radeon_vm_bo_invalidate(rbo->rdev, rbo);
}

int radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
{
	struct radeon_device *rdev;
	struct radeon_bo *rbo;
	unsigned long offset, size;
	int r;

	if (!radeon_ttm_bo_is_radeon_bo(bo))
		return 0;
	rbo = container_of(bo, struct radeon_bo, tbo);
	radeon_bo_check_tiling(rbo, 0, 0);
	rdev = rbo->rdev;
	if (bo->mem.mem_type == TTM_PL_VRAM) {
		size = bo->mem.num_pages << PAGE_SHIFT;
		offset = bo->mem.start << PAGE_SHIFT;
		if ((offset + size) > rdev->mc.visible_vram_size) {
			/* hurrah, the memory is not visible! */
			radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM);
			rbo->placement.lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
			r = ttm_bo_validate(bo, &rbo->placement, false, true, false);
			if (unlikely(r != 0))
				return r;
			offset = bo->mem.start << PAGE_SHIFT;
			/* this should not happen */
			if ((offset + size) > rdev->mc.visible_vram_size)
				return -EINVAL;
		}
	}
	return 0;
}

int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type, bool no_wait)
{
	int r;

	r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, 0);
	if (unlikely(r != 0))
		return r;
	spin_lock(&bo->tbo.bdev->fence_lock);
	if (mem_type)
		*mem_type = bo->tbo.mem.mem_type;
	if (bo->tbo.sync_obj)
		r = ttm_bo_wait(&bo->tbo, true, true, no_wait);
	spin_unlock(&bo->tbo.bdev->fence_lock);
	ttm_bo_unreserve(&bo->tbo);
	return r;
}


/**
 * radeon_bo_reserve - reserve bo
 * @bo: bo structure
 * @no_wait: don't sleep while trying to reserve (return -EBUSY)
 *
 * Returns:
 * -EBUSY: buffer is busy and @no_wait is true
 * -ERESTARTSYS: A wait for the buffer to become unreserved was interrupted by
 * a signal. Release all buffer reservations and return to user-space.
 */
int radeon_bo_reserve(struct radeon_bo *bo, bool no_wait)
{
	int r;

	r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, 0);
	if (unlikely(r != 0)) {
		if (r != -ERESTARTSYS)
			dev_err(bo->rdev->dev, "%p reserve failed\n", bo);
		return r;
	}
	return 0;
}

/* object must be reserved */
struct radeon_bo_va *radeon_bo_va(struct radeon_bo *rbo, struct radeon_vm *vm)
{
	struct radeon_bo_va *bo_va;

	list_for_each_entry(bo_va, &rbo->va, bo_list) {
		if (bo_va->vm == vm) {
			return bo_va;
		}
	}
	return NULL;
}