This introduces allocation in the batch submission path that wasn't there
previously, but these are compatibility paths so we care about simplicity
more than performance.
kernel.org bug #12419.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Like the GTT pwrite path fix, this uses an optimistic path and a
fallback to get_user_pages. Note that this means we have to stop using
vfs_write and roll it ourselves.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
We've wanted this for a few consumers that touch the pages directly (such as
the following commit), which have been doing the refcounting outside of
get/put pages.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Since the pagefault path determines that the lock order we use has to be
mmap_sem -> struct_mutex, we can't allow page faults to occur while the
struct_mutex is held. To fix this in pwrite, we first optimistically try to
copy from user without faulting. If that fails, we fall back to using
get_user_pages to pin the user's memory, and map those pages atomically when
copying them to the GPU.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
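A minimal sketch of the get_user_pages fallback described above, assuming the
2.6.29-era get_user_pages()/kmap_atomic() signatures; the helper name and the
flat destination buffer are illustrative stand-ins, not the driver's actual
code:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/sched.h>
#include <linux/slab.h>

static int example_pin_and_copy(char *dst, const char __user *user_data,
				unsigned long size)
{
	unsigned long first = (unsigned long)user_data & PAGE_MASK;
	unsigned long offset = (unsigned long)user_data & ~PAGE_MASK;
	int num_pages = DIV_ROUND_UP(offset + size, PAGE_SIZE);
	unsigned long remain = size;
	struct page **pages;
	int pinned, i, ret = 0;

	pages = kcalloc(num_pages, sizeof(*pages), GFP_KERNEL);
	if (pages == NULL)
		return -ENOMEM;

	/* Pin the user pages up front, while no driver locks are held. */
	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(current, current->mm, first, num_pages,
				0 /* not writing to the user pages */, 0,
				pages, NULL);
	up_read(&current->mm->mmap_sem);
	if (pinned < num_pages) {
		ret = -EFAULT;
		goto out;
	}

	/* The pages are resident and pinned, so atomic kmaps cannot fault
	 * even with struct_mutex held. */
	for (i = 0; i < num_pages && remain; i++) {
		unsigned long chunk = min(remain, PAGE_SIZE - offset);
		char *vaddr = kmap_atomic(pages[i], KM_USER0);

		memcpy(dst, vaddr + offset, chunk);
		kunmap_atomic(vaddr, KM_USER0);

		dst += chunk;
		remain -= chunk;
		offset = 0;
	}

out:
	for (i = 0; i < pinned; i++)
		page_cache_release(pages[i]);
	kfree(pages);
	return ret;
}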
This fixes incorrect detection of the second SDVO/HDMI output on G4X, and
extra boot time on pre-G4X.
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
This improves the PLL timings according to the suggestion of the hardware
engineers. This results in some outputs being able to sync that weren't
able to before.
This is part of fixing fd.o bug #17508.
Signed-off-by: Ma Ling <ling.ma@intel.com>
[anholt: cleaned up a couple of redundant comments]
Signed-off-by: Eric Anholt <eric@anholt.net>
The values come from the internal reference spreadsheet on PLL
timing limits for the G4X chipsets.
Part of fixing fd.o bug #17508.
Signed-off-by: Ma Ling <ling.ma@intel.com>
[anholt: Cleaned up some whitespace]
Signed-off-by: Eric Anholt <eric@anholt.net>
Later spec investigation has revealed that every 9xx mobile part has
had this register in this format. Also, no non-mobile parts have been shown
to have this register. So make all mobile parts use the same code, and all
non-mobile parts use the 965 detection hack.
Signed-off-by: Eric Anholt <eric@anholt.net>
The last 8 fence registers sit at a different offset, so when we went to set
fence number 8 in the lower offset, we instead set PGETBL_CTL, and the GPU
got all sorts of angry at us.
fd.o bug #20567. Easily reproducible by running glxgears and killing it about
6 times.
Signed-off-by: Eric Anholt <eric@anholt.net>
The i915 also uses the fence registers for GPU access to tiled buffers, so
we cannot reallocate one whilst it is on the active list. By performing an
LRU scan of the fenced buffers we also avoid the possibility of waiting on
a pinned, or otherwise unusable, buffer.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
We need to check and report if there are no available fences - or else we
spin endlessly waiting for a buffer to magically unpin itself.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
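A self-contained sketch of the selection logic the two commits above describe;
the struct and its fields are simplified stand-ins for the driver's fence
bookkeeping rather than its real data structures:

#include <linux/errno.h>
#include <linux/types.h>

/* Simplified stand-ins for the driver's fence bookkeeping. */
struct example_fence_reg {
	bool in_use;		/* register currently assigned to an object */
	bool pinned;		/* backing object is pinned: never steal it */
	unsigned long last_used;	/* LRU timestamp */
};

/*
 * Pick a free fence register, or failing that the least-recently-used one
 * whose object is not pinned.  Returns -EBUSY when nothing is usable, so
 * that the caller reports an error instead of spinning forever waiting
 * for a pinned buffer to magically become available.
 */
static int example_pick_fence(const struct example_fence_reg *regs, int nregs)
{
	int i, lru = -1;

	for (i = 0; i < nregs; i++) {
		if (!regs[i].in_use)
			return i;
		if (regs[i].pinned)
			continue;
		if (lru < 0 || regs[i].last_used < regs[lru].last_used)
			lru = i;
	}

	return lru >= 0 ? lru : -EBUSY;
}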
As we may steal the fence register of an unpinned buffer for another,
every time we repin the buffer we need to recheck whether it needs to be
allocated a fence.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
If we wait upon a request and successfully unbind a buffer occupying a
fence register, then that slot will be freed and cause a NULL dereference
upon rescanning.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
The VGA registers just hit the pipe registers that we already set through
MMIO. This fixes strange colors on resume.
Signed-off-by: Pierre Willenbrock <pierre@pirsoft.de>
Signed-off-by: Eric Anholt <eric@anholt.net>
If userspace passes an object list with the same object appearing more
than once, we end up hitting the BUG_ON() in
i915_gem_object_set_to_gpu_domain() as it gets called a second time
for the same object.
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
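One way to reject an object list that names the same object more than once is
sketched below; the marker-flag approach and all names here are illustrative,
not necessarily the fix that was actually applied:

#include <linux/errno.h>
#include <linux/types.h>

/* Simplified stand-in for the per-object state used to spot duplicates. */
struct example_gem_object {
	bool in_execbuffer;	/* already named by this batch's object list? */
};

/*
 * Reject an object list that names the same object twice: mark each
 * object as it is gathered and bail out if it is already marked.
 */
static int example_check_duplicates(struct example_gem_object **objects,
				    int count)
{
	int i, j;

	for (i = 0; i < count; i++) {
		if (objects[i]->in_execbuffer)
			goto duplicate;
		objects[i]->in_execbuffer = true;
	}
	return 0;

duplicate:
	/* Unwind the marks already set before reporting the error. */
	for (j = 0; j < i; j++)
		objects[j]->in_execbuffer = false;
	return -EBUSY;
}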
This could be triggered by a client asking to emit an irq when the device
wasn't initialized.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
x86: enable DMAR by default
xen: disable interrupts early, as start_kernel expects
gpu/drm, x86, PAT: io_mapping_create_wc and resource_size_t
gpu/drm, x86, PAT: Handle io_mapping_create_wc() errors in a clean way
x86, Voyager: fix compile by lifting the degeneracy of phys_cpu_present_map
x86, doc: fix references to Documentation/x86/i386/boot.txt
io_mapping_create_wc can return NULL on error, and io_mapping_free() should
be called on one of the error-cleanup paths.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Eric Anholt <eric@anholt.net>
Cc: Keith Packard <keithp@keithp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
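A sketch of the two fixes just described; io_mapping_create_wc() and
io_mapping_free() are the real interfaces, while the containing struct is an
illustrative stand-in for the driver's private state:

#include <linux/errno.h>
#include <linux/io-mapping.h>
#include <linux/types.h>

/* Illustrative container; stands in for the driver's private state. */
struct example_state {
	struct io_mapping *gtt_mapping;
};

static int example_map_gtt(struct example_state *st,
			   resource_size_t base, unsigned long size)
{
	/* io_mapping_create_wc() can return NULL on error: check for it. */
	st->gtt_mapping = io_mapping_create_wc(base, size);
	if (st->gtt_mapping == NULL)
		return -EIO;

	return 0;
}

static void example_unmap_gtt(struct example_state *st)
{
	/* Any error-cleanup path taken after example_map_gtt() succeeds
	 * must call this, so the mapping is not leaked. */
	if (st->gtt_mapping) {
		io_mapping_free(st->gtt_mapping);
		st->gtt_mapping = NULL;
	}
}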
However, we still have another issue with ioremap_wc not falling back
properly, or somehow doing something else stupid; this probably needs to be
tracked down.
Signed-off-by: Dave Airlie <airlied@redhat.com>
We've seen cases in the wild where the VBT sync data is wrong, so add
some code to fix it up in that case, taking care to make sure that the
total is greater than the sync end.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
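A sketch of the kind of fixup described above, using the standard
struct drm_display_mode timing fields; the exact conditions and adjustments
the driver applies are assumptions for illustration:

#include "drm_crtc.h"		/* struct drm_display_mode */

static void example_fixup_vbt_mode(struct drm_display_mode *mode)
{
	/* The sync pulse must start no earlier than the active region
	 * ends, and must end after it starts... */
	if (mode->hsync_start < mode->hdisplay)
		mode->hsync_start = mode->hdisplay;
	if (mode->hsync_end <= mode->hsync_start)
		mode->hsync_end = mode->hsync_start + 1;
	/* ...and the total must be greater than the sync end. */
	if (mode->htotal <= mode->hsync_end)
		mode->htotal = mode->hsync_end + 1;

	/* Same constraints for the vertical timings. */
	if (mode->vsync_start < mode->vdisplay)
		mode->vsync_start = mode->vdisplay;
	if (mode->vsync_end <= mode->vsync_start)
		mode->vsync_end = mode->vsync_start + 1;
	if (mode->vtotal <= mode->vsync_end)
		mode->vtotal = mode->vsync_end + 1;
}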
These are normal; we walk through different values looking for the right
one, so why flood the screen with messages?
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
In the KMS case, enter/leavevt won't fix up the interrupt handler for
us, so we need to do it at suspend/resume time. Make sure we don't fail
the resume if the chip is hung either.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
dev_priv->hw_status_page can be NULL, if i915_gem_retire_requests()
is called from i915_gem_busy_ioctl().
Signed-off-by: Karsten Wiese <fzu@wemgehoertderstaat.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There might be a nicer way to fix this, but this is the simplest for now.
Signed-off-by: Pierre Willenbrock <pierre@pirsoft.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The object is dereferenced before the NULL check. Oops.
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=20235
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This ensures that the user gets the latest information from the hardware
on whether the buffer is busy, potentially reducing the working set of objects
that the user chooses.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
In the KMS case, we need to suspend/resume GEM as well. So on suspend, make
sure we idle GEM and stop any new rendering from coming in, and on resume,
re-init the framebuffer and clear the suspended flag.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The problem was that object_set_to_gpu_domain would set the new write_domains
being established by this batchbuffer, then the accumulated flushes required
for all the objects in preparation for this batchbuffer were posted, and the
brand-new write domain would get cleared by the flush being posted. Instead,
hang on to the new (or old, if we're not changing it) value and set it after
the flush is queued.
Results from this noticeably included conformance test failures from reads
shortly after writes (where the new write domain had been lost and thus not
flushed and waited on), but it is also a suspected cause of hangs in some apps
when a write domain is lost on a buffer that gets reused for instruction or
command state.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
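A simplified sketch of the ordering fix described above; the struct and helper
names are stand-ins for the driver's object and domain tracking:

#include <linux/types.h>

/* Simplified stand-in for the driver's per-object domain tracking. */
struct example_obj {
	uint32_t write_domain;		/* current GPU write domain */
	uint32_t pending_write_domain;	/* what this batch will make it */
};

/*
 * Per object, while preparing a batchbuffer: accumulate the flush needed
 * for the *old* write domain, but do not install the new one yet, or the
 * flush about to be posted would immediately clear it again.
 */
static void example_set_to_gpu_domain(struct example_obj *obj,
				      uint32_t new_write_domain,
				      uint32_t *flush_domains)
{
	*flush_domains |= obj->write_domain;
	obj->pending_write_domain = new_write_domain ?
				    new_write_domain : obj->write_domain;
}

/* Once, after the accumulated flush has been queued. */
static void example_commit_write_domain(struct example_obj *obj)
{
	obj->write_domain = obj->pending_write_domain;
}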
While not strictly required, it helped while thinking about the following
change. This change should be behavior-invariant.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This fixes a potential crash at page-fault time if the object was unreferenced
while the mapping still existed. Now, while the mmap_offset only lives
for the lifetime of the object, the object also stays alive while a vma
exists that needs it.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
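A sketch of keeping the object alive for as long as a mapping needs it, using
the usual vm_operations open/close hooks; drm_gem_object_reference() and
drm_gem_object_unreference() are the GEM refcount helpers of this era, the
fault handler is omitted, and the function names are illustrative:

#include "drmP.h"		/* struct drm_gem_object and its refcounting */
#include <linux/mm.h>

static void example_gem_vm_open(struct vm_area_struct *vma)
{
	struct drm_gem_object *obj = vma->vm_private_data;

	/* Each new vma takes its own reference on the object... */
	drm_gem_object_reference(obj);
}

static void example_gem_vm_close(struct vm_area_struct *vma)
{
	struct drm_gem_object *obj = vma->vm_private_data;
	struct drm_device *dev = obj->dev;

	/* ...which is dropped only when the mapping itself goes away, so
	 * the object behind the mmap_offset outlives any vma using it. */
	mutex_lock(&dev->struct_mutex);
	drm_gem_object_unreference(obj);
	mutex_unlock(&dev->struct_mutex);
}

static struct vm_operations_struct example_gem_vm_ops = {
	.open = example_gem_vm_open,
	.close = example_gem_vm_close,
	/* .fault handler (which actually maps the pages) omitted */
};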
Need to do this in case the unref ends up doing a free.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Lifted from the DDX modesetting.
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
They used to be different. Now they're identical.
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We need to hold the struct_mutex around pinning and the phys object
operations.
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Check the error paths within intel_pipe_set_base() to first clean up and
then report back the error.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
If we fail to create the ringbuffer, then we need to clean up the allocated
hws.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
In the case where no EDID data is read from the device, adding the
panel_fixed_mode pointer to the probed modes list causes data corruption.
If the panel_fixed_mode pointer is added to the probed modes list at
init time, a copy of the mode is added again at drm_get_modes() request
time. Then, the panel_fixed_mode pointer is freed because it is seen as
a duplicate mode. Unfortunately, this pointer is still stored and used
in mode_fixup().
Because the panel_fixed_mode data is copied and returned at
drm_get_modes() time, it is unnecessary to add this information at init
time.
Signed-off-by: Steve Aarnio <steve.j.aarnio@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
If we fail whilst constructing the fb, then we need to unpin it as well.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
A missing unpin on the error path.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
A missing unpin on the error path.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
A missing unreference and unpin after rejecting the relocation for an
invalid memory domain.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
We failed to unlock the mutex after failing to create the mmap offset.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Set the request alignment to 0, and leave it up to i915_gem_object_pin()
to set the appropriate alignment to match the fence covering the object.
Eric Anholt mentioned that the pinning code is meant to choose the
maximum of the request alignment and that of the fence covering the
object... However currently, the pinning code will only apply the fence
constraints if the supplied alignment is 0.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
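An illustrative reduction of the behavior relied on above (not the driver's
code): only a requested alignment of 0 lets the pin path substitute the
constraint of the fence covering the object.

#include <linux/types.h>

/*
 * A non-zero request is honoured as-is, even if the fence needs more;
 * choosing max(requested, fence_alignment) here would match the intended
 * behavior mentioned above.
 */
static uint32_t example_pin_alignment(uint32_t requested,
				      uint32_t fence_alignment)
{
	if (requested == 0)
		return fence_alignment;

	return requested;
}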
Lockdep warns that i915_gem_execbuffer() can trigger a page fault (which
takes mmap_sem) while holding dev->struct_mutex, while drm_vm_open()
(which is called with mmap_sem already held) takes dev->struct_mutex.
So this is a potential AB-BA deadlock.
The way that i915_gem_execbuffer() triggers a page fault is by doing
copy_to_user() when returning new buffer offsets back to userspace;
however there is no reason to hold the struct_mutex when doing this
copy, since what is being copied is the contents of an array private to
i915_gem_execbuffer() anyway. So we can fix the potential deadlock (and
get rid of the lockdep warning) by simply moving the copy_to_user()
outside of where struct_mutex is held.
This fixes <http://bugzilla.kernel.org/show_bug.cgi?id=12491>.
Reported-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
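A sketch of the shape of that fix with simplified names; example_do_execbuffer()
is a placeholder for the work done under the lock, and the point is that
copy_to_user(), which can fault and take mmap_sem, runs only after
struct_mutex has been dropped:

#include "drmP.h"		/* struct drm_device, struct_mutex */
#include "i915_drm.h"		/* struct drm_i915_gem_exec_object */
#include <linux/uaccess.h>

/* Placeholder for the real work done under the lock: pin, relocate and
 * dispatch the batch, filling in exec_list[i].offset as it goes. */
int example_do_execbuffer(struct drm_device *dev,
			  struct drm_i915_gem_exec_object *exec_list,
			  int count);

static int example_execbuffer(struct drm_device *dev,
			      void __user *user_exec_list,
			      struct drm_i915_gem_exec_object *exec_list,
			      int count)
{
	int ret;

	mutex_lock(&dev->struct_mutex);
	ret = example_do_execbuffer(dev, exec_list, count);
	mutex_unlock(&dev->struct_mutex);

	/* Safe to fault here: no driver lock is held, so the
	 * mmap_sem -> struct_mutex ordering cannot be inverted. */
	if (ret == 0 &&
	    copy_to_user(user_exec_list, exec_list,
			 sizeof(*exec_list) * count))
		ret = -EFAULT;

	return ret;
}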