drm/i915/gem: Apply lmem size restriction to get_pages

When creating a handle, it is just that, an abstract handle. The fact
that we cannot currently support a handle larger than the size of the
backing storage is an artifact of our whole-object-at-a-time handling in
get_pages(); being an implementation limitation, it is best handled at
that point -- similar to shmem, where we only barf when asked to
populate the whole object if it is larger than RAM. (Pinning the whole
object at a time is a major hindrance that we are likely to have to
overcome in
the near future.) In the case of the buddy allocator, the late check is
preferable as the request size may often be smaller than the required
size.
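
To make the "validate late" idea concrete, here is a minimal, self-contained C
sketch of the pattern (hypothetical toy_* names, not the i915 API): creating the
handle never checks the size, and the region's hard limit is only enforced when
the pages are actually populated, mirroring where the -E2BIG now comes from in
__intel_memory_region_get_pages_buddy().

	#include <errno.h>
	#include <stdio.h>

	/* Hypothetical stand-in for a region backed by a buddy allocator. */
	struct toy_region {
		unsigned int max_order;    /* largest allocation order */
		unsigned long chunk_size;  /* size of an order-0 chunk in bytes */
	};

	/* Creating the handle performs no size validation: it is just a handle. */
	static int toy_object_create(unsigned long size)
	{
		(void)size;
		return 0;
	}

	/* The limit is enforced only when the backing store is populated. */
	static int toy_object_get_pages(const struct toy_region *mem,
					unsigned long size)
	{
		unsigned long max = (1ul << mem->max_order) * mem->chunk_size;

		if (size > max)
			return -E2BIG;   /* the region can never satisfy this */

		/* ... carve the chunks out of the buddy allocator here ... */
		return 0;
	}

	int main(void)
	{
		struct toy_region lmem = { .max_order = 12, .chunk_size = 4096 }; /* 16 MiB */

		toy_object_create(1ul << 30);                            /* always succeeds */
		printf("%d\n", toy_object_get_pages(&lmem, 8ul << 20));  /* 0 */
		printf("%d\n", toy_object_get_pages(&lmem, 1ul << 30));  /* -E2BIG */
		return 0;
	}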

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216122603.2598155-1-chris@chris-wilson.co.uk
Chris Wilson, 2019-12-16 12:26:03 +00:00
commit 0a9a5532d2 (parent 8840544033)
3 changed files with 4 additions and 4 deletions

@@ -79,9 +79,6 @@ __i915_gem_lmem_object_create(struct intel_memory_region *mem,
 	struct drm_i915_private *i915 = mem->i915;
 	struct drm_i915_gem_object *obj;
 
-	if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
-		return ERR_PTR(-E2BIG);
-
 	obj = i915_gem_object_alloc();
 	if (!obj)
 		return ERR_PTR(-ENOMEM);

@@ -1420,7 +1420,7 @@ static int igt_ppgtt_smoke_huge(void *arg)
 	err = i915_gem_object_pin_pages(obj);
 	if (err) {
-		if (err == -ENXIO) {
+		if (err == -ENXIO || err == -E2BIG) {
 			i915_gem_object_put(obj);
 			size >>= 1;
 			goto try_again;
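
The hunk above makes the smoke test back off when the region simply cannot hold
the object: on -E2BIG it drops the object, halves the size and retries. A
minimal sketch of that caller-side pattern, reusing the hypothetical
toy_object_get_pages() helper from the earlier example (not the actual i915
selftest code):

	/* Keep halving the request until the region can satisfy it (or we hit zero). */
	static int toy_pin_largest(const struct toy_region *mem, unsigned long size)
	{
		int err;

		while (size) {
			err = toy_object_get_pages(mem, size);
			if (err != -E2BIG)
				return err;   /* success, or an unrelated failure */

			size >>= 1;           /* object can never fit: try half */
		}

		return -E2BIG;
	}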

@@ -73,6 +73,9 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
 		min_order = ilog2(size) - ilog2(mem->mm.chunk_size);
 	}
 
+	if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
+		return -E2BIG;
+
 	n_pages = size >> ilog2(mem->mm.chunk_size);
 
 	mutex_lock(&mem->mm_lock);