If we don't have semaphores enabled, we allocate 4 dwords for
signalling but end up emitting more regardless.
Fix this by bailing out early if semaphores are not enabled.
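A minimal sketch of the idea, with illustrative names (the actual signal
hook and its arguments may differ):

    static int gen6_signal_sketch(struct intel_ring_buffer *signaller,
                                  unsigned int num_dwords)
    {
            /* Semaphores off: just reserve the caller's dwords and skip
             * the per-ring mailbox updates entirely. */
            if (!i915_semaphore_is_enabled(signaller->dev))
                    return intel_ring_begin(signaller, num_dwords);

            /* ... otherwise size intel_ring_begin() for the mailbox
             * updates as well and emit them ... */
            return 0;
    }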
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78274
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78283
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is missing in:
commit 78325f2d27
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date: Tue Apr 29 14:52:29 2014 -0700
drm/i915: Virtualize the ringbuffer signal func
Looks to me like a rebase side-effect...
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For clients that submit large batch buffers the command parser has
a substantial impact on performance. On my HSW ULT system performance
drops as much as ~20% on some tests. Most of the time is spent in the
command lookup code. Converting that from the current naive search to
a hash table lookup reduces the performance drop to ~10%.
The choice of value for I915_CMD_HASH_ORDER allows all commands
currently used in the parser tables to hash to their own bucket (except
for one collision on the render ring). The tradeoff is that it wastes
memory. Because the opcodes for the commands in the tables are not
particularly well distributed, reducing the order still leaves many
buckets empty. The increased collisions don't seem to have a huge
impact on the performance gain, but for now anyhow, the parser trades
memory for performance.
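As a rough sketch of the approach (descriptor layout and helper names are
illustrative rather than the parser's exact code), the per-ring tables are
loaded into a hashtable keyed on the opcode bits and looked up per command:

    #include <linux/hashtable.h>

    #define I915_CMD_HASH_ORDER 9

    struct cmd_node {                               /* illustrative wrapper */
            const struct drm_i915_cmd_descriptor *desc;
            struct hlist_node node;
    };

    static DEFINE_HASHTABLE(cmd_hash, I915_CMD_HASH_ORDER);

    static u32 cmd_header_key(u32 cmd_header)
    {
            /* Mask chosen so the opcodes currently in the tables land in
             * (almost) distinct buckets; the value here is illustrative. */
            return cmd_header & 0x000fffff;
    }

    static const struct drm_i915_cmd_descriptor *find_cmd_sketch(u32 cmd_header)
    {
            struct cmd_node *n;

            hash_for_each_possible(cmd_hash, n, node,
                                   cmd_header_key(cmd_header)) {
                    if ((cmd_header & n->desc->cmd.mask) == n->desc->cmd.value)
                            return n->desc;
            }

            return NULL;    /* fall back to the default descriptor */
    }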
NB: Ville noticed that the error paths through the ring init code
will leak memory. I've not addressed that here. We can do a follow
up pass to handle all of the leaks.
v2: improved comment describing selection of hash key mask (Damien)
replace a BUG_ON() with an error return (Tvrtko, Ville)
commit message improvements
Signed-off-by: Brad Volkin <bradley.d.volkin@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
During the review of
commit 1f70999f90
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Mon Jan 27 22:43:07 2014 +0000
drm/i915: Prevent recursion by retiring requests when the ring is full
Ville raised the point that our interaction with request->tail was
likely to foul up other uses elsewhere (such as hang check comparing
ACTHD against requests).
However, we also need to restore the implicit retire requests that certain
test cases depend upon (e.g. igt/gem_exec_lut_handle), this raises the
spectre that the ppgtt will randomly call i915_gpu_idle() and recurse
back into intel_ring_begin().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78023
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
[danvet: Remove now unused 'tail' variable as spotted by Brad.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A few improvements to the fallback method for waiting upon ring space:
1. Fix the start/end wait tracepoints to always be paired.
2. Increase responsiveness of checking
3. Mark the process as waiting upon io
4. Check for signal interruptions
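Roughly, the fallback loop then has this shape (a sketch only; point 3,
marking the task as waiting on io, is left out since io_schedule_timeout()
isn't available here):

    static int ring_wait_for_space_sketch(struct intel_ring_buffer *ring, int n)
    {
            unsigned long end = jiffies + msecs_to_jiffies(60 * 1000);
            int ret = -EBUSY;

            trace_i915_ring_wait_begin(ring);       /* 1: always paired... */
            do {
                    ring->head = I915_READ_HEAD(ring);
                    ring->space = ring_space(ring);
                    if (ring->space >= n) {
                            ret = 0;
                            break;
                    }

                    msleep(1);                      /* 2: re-check frequently */

                    if (signal_pending(current)) {  /* 4: allow signals */
                            ret = -ERESTARTSYS;
                            break;
                    }
            } while (time_before(jiffies, end));
            trace_i915_ring_wait_end(ring);         /* 1: ...with the end event */

            return ret;
    }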
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
[danvet: Drop the s/msleep/io_schedule_timeout/ change again since the
latter isn't exported.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Previously, our code only had a 32b offset value for where the
batchbuffer starts. With full PPGTT, and 64b canonical GPU address
space, that is an insufficient value. The code to expand is pretty
straightforward, and only one platform needs to do anything with the
extra bits.
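As a sketch, the one platform that cares (gen8) then emits the upper dword
of the canonical address alongside the lower one (command name and layout
assumed here, not taken from this patch):

    static int gen8_dispatch_sketch(struct intel_ring_buffer *ring,
                                    u64 offset, u32 len, unsigned flags)
    {
            int ret = intel_ring_begin(ring, 4);
            if (ret)
                    return ret;

            intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8);
            intel_ring_emit(ring, lower_32_bits(offset));
            intel_ring_emit(ring, upper_32_bits(offset));   /* the extra bits */
            intel_ring_emit(ring, MI_NOOP);
            intel_ring_advance(ring);

            return 0;
    }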
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rafael Barbalho <rafael.barbalho@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add_request has always contained both the semaphore mailbox updates as
well as the breadcrumb writes. Since the semaphore signal is the one
which actually knows about the number of dwords it needs to emit to the
ring, we move the ring_begin to that function. This allows us to remove
the hideously shared #define.
On a related note, gen8 will use a different number of dwords for
semaphores, but not for add request.
v2: Make number of dwords an explicit part of signalling (via function
argument). (Chris)
v3: very slight comment change
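In sketch form, the signal hook now sizes the ring_begin itself and is
simply told how many extra dwords the caller still needs (the per-mailbox
arithmetic below is illustrative):

    static int gen6_signal_sketch(struct intel_ring_buffer *signaller,
                                  unsigned int num_dwords)
    {
            int num_rings = hweight32(INTEL_INFO(signaller->dev)->ring_mask);
            int ret;

            /* 3 dwords (LRI + mbox register + seqno) per peer ring, plus
             * whatever add_request still wants to emit afterwards. */
            ret = intel_ring_begin(signaller, 3 * (num_rings - 1) + num_dwords);
            if (ret)
                    return ret;

            /* ... emit one MI_LOAD_REGISTER_IMM per peer mailbox ... */
            return 0;
    }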
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This abstraction again is in preparation for gen8. Gen8 will bring new
semantics for doing this operation.
While here, make the writes of MI_NOOPs explicit for non-existent rings.
This should have been implicit before.
NOTE: This is going to be removed in a few patches.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This will be helpful in abstracting some of the code in preparation for
gen8 semaphores.
v2: Move mbox stuff to a separate struct
v3: Rebased over VCS2 work
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Gen7 doesn't have the second BSD ring, but without handling it the
compiler will complain with a switch check warning. So just add the
case to remove the warning.
V1->V2: Follow Daniel's comment to update the comment
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Based on the hardware spec, the BDW GT3 machine has two independent
BSD rings that can be used to dispatch video commands.
So just initialize the second one.
V3->V4: Follow Imre's comment to do some minor updates. For example,
more comments are added to describe the semaphores between rings.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
[danvet: Fix up checkpatch error.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we include the expected values for the failing ring register checks,
it makes it marginally easier to see which is the culprit.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tearing down the ring buffers across resume is overkill, risks
unnecessary failure and increases fragmentation.
After failure, since the device is still active we may end up trying to
write into the dangling iomapping and trigger an oops.
v2: stop_ringbuffers() was meant to call stop(ring) not
cleanup(ring) during resume!
Reported-by: Jae-hyeon Park <jhyeon@gmail.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=72351
References: https://bugs.freedesktop.org/show_bug.cgi?id=76554
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
[danvet: s/ring->obj == NULL/!intel_ring_initialized(ring)/ as
suggested by Oscar.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For readability, and to guess at the meaning behind the constants.
v2: Claim only the meagerest connections with reality.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Piglit runner and QA both look at dmesg for DRM_ERRORs while
running test cases. Add a flag to control those errors when
they are expected from related test cases.
Also add a flag to control whether the context that introduced
the hang should be banned. Hangcheck is timer based, and
preventing bans by adding sleeps to testcases makes
testing slower.
v2: intel_ring_stopped(), readable comment (Chris)
v3: keep compatibility (Daniel)
References: https://bugs.freedesktop.org/show_bug.cgi?id=75876
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In commit a51435a313
Author: Naresh Kumar Kachhi <naresh.kumar.kachhi@intel.com>
Date: Wed Mar 12 16:39:40 2014 +0530
drm/i915: disable rings before HW status page setup
we reordered stopping the rings to do so before we set the HWS register.
However, there is an extra workaround for g45 to reset the rings twice,
and for consistency we should apply that workaround before setting the
HWS to be sure that the rings are truly stopped.
Cc: Naresh Kumar Kachhi <naresh.kumar.kachhi@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have been setting the bit which was originally BIOS dependent since:
commit f05bb0c7b6
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Sun Jan 20 16:33:32 2013 +0000
drm/i915: GFX_MODE Flush TLB Invalidate Mode must be '1' for scanline waits
Therefore, we do not need to try to figure it out dynamically and we can
just always invalidate the TLBs.
It's a partial revert of:
commit 12b0286f49
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Mon Jun 4 14:42:50 2012 -0700
drm/i915: possibly invalidate TLB before context switch
The original commit attempted to only invalidate when necessary
(very much a relic from the old days). Now, we can just always invalidate.
I guess the old TODO still exists. Since we seem to have abandoned ILK
contexts however, there isn't much point in even remembering.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch enables the TLB invalidate bit in the GFX Mode register
for Gen7.
According to bspec, when enabled this bit limits the invalidation
of the TLB to batch buffer boundaries, to pipe_control
commands which have the TLB invalidation bit set, and to sync flushes.
If disabled, the TLB caches are flushed for every full flush of
the pipeline.
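The write itself is a one-liner in the render ring init path, along these
lines (other mode bits set in the same write are omitted):

    if (IS_GEN7(dev))
            I915_WRITE(GFX_MODE_GEN7,
                       _MASKED_BIT_ENABLE(GFX_TLB_INVALIDATE_EXPLICIT));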
Tested only on vlv platform. Chris has tested on ivb and hsw
platforms.
v2: Adding the explicit enabling of this bit for all Gen7 platforms
instead of only vlv (Chris)
Signed-off-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk> #ivb, hsw -Chris
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Add w/a markers as suggested by Ville.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The documentation calls this GFX_MODE bit "Flush TLB invalidate Mode".
However, that is not a good name for an enable bit as it doesn't make it
clear what is enabled. An even worse name is GFX_TLB_INVALIDATE_ALWAYS
as enabling that bit actually prevents the TLB from being invalidated at
every flush. This leads to great confusion when reading code and
proposed patches. To get around this try to bake in what is enabled by
setting the bit and call it GFX_TLB_INVALIDATE_EXPLICIT.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Gupta, Sourab" <sourab.gupta@intel.com>
Acked-by: "Gupta, Sourab" <sourab.gupta@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As Broadwell has an increased virtual address size, it requires more
than 32 bits to store offsets into its address space. This includes the
debug registers to track the current HEAD of the individual rings, which
may be anywhere within the per-process address spaces. In order to find
the full location, we need to read the high bits from a second register.
We then also need to expand our storage to keep track of the larger
address.
v2: Carefully read the two registers to catch wraparound between
the reads.
v3: Use a WARN_ON rather than loop indefinitely on an unstable
register read.
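Sketch of the two-register read described in v2/v3 (the retry/warn
structure is illustrative):

    u32 upper, lower, old_upper;
    u64 acthd;

    upper = I915_READ(RING_ACTHD_UDW(ring->mmio_base));
    lower = I915_READ(RING_ACTHD(ring->mmio_base));
    old_upper = upper;
    upper = I915_READ(RING_ACTHD_UDW(ring->mmio_base));
    if (upper != old_upper) {
            /* The low dword wrapped between the reads: pick up the new
             * pair and WARN rather than loop if it is still unstable. */
            lower = I915_READ(RING_ACTHD(ring->mmio_base));
            WARN_ON(I915_READ(RING_ACTHD_UDW(ring->mmio_base)) != upper);
    }
    acthd = (u64)upper << 32 | lower;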
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Timo Aaltonen <tjaalton@ubuntu.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Drop spurious hunk which conflicted.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch removes the VS_TIMER_DISPATCH bit enable in the MI_MODE reg for
platforms > Gen6.
VS_TIMER_DISPATCH bit enable was earlier required as a part of
WA 'WaTimedSingleVertexDispatch', which is now applicable only to
platforms < Gen7.
v2: Enhancing the scope of the patch to full Gen7 (Chris)
v3: Modifying the WA condition to the cover the applicable platforms,
and adding the WA name in comments. (Ville)
Signed-off-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk> # ivb, hsw -Chris
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
While wandering in the spec, I noticed that BDW removes those 2 bits
from INSTPM. I couldn't find any direct way to invalidate the TLB (i.e.
without the ring working already). Maybe someone will be more lucky.
At least, we now know there may be a problem.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Based on Bspec, the command parser must be stopped prior to
issuing a sync flush. This should be done by the caller of
intel_ring_setup_status_page. This patch adds a warning if it is
not done.
v2: rebased based on new patch (wait for ring to become idle)
Signed-off-by: Naresh Kumar Kachhi <naresh.kumar.kachhi@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make sure we wait for rings to become idle once they are
disabled. In case of a timeout, print an error message.
Signed-off-by: Naresh Kumar Kachhi <naresh.kumar.kachhi@intel.com>
[danvet: Frob patch as suggested by Chris.]
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rings should be idle before issuing sync_flush
(in intel_ring_setup_status_page). This patch moves the ring
disabling before doing the HW status page setup.
Signed-off-by: Naresh Kumar Kachhi <naresh.kumar.kachhi@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Merge tag 'v3.14-rc6' into drm-intel-next-queued
Linux 3.14-rc6
I need the hdmi/dvi-dual link fixes in 3.14 to avoid ugly conflicts
when merging Ville's new hdmi cloning support into my -next tree
Conflicts:
drivers/gpu/drm/i915/Makefile
drivers/gpu/drm/i915/intel_dp.c
Makefile cleanup conflicts with an acpi build fix, intel_dp.c is
trivial.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The command parser scans batch buffers submitted via execbuffer ioctls before
the driver submits them to hardware. At a high level, it looks for several
things:
1) Commands which are explicitly defined as privileged or which should only be
used by the kernel driver. The parser generally rejects such commands, with
the provision that it may allow some from the drm master process.
2) Commands which access registers. To support correct/enhanced userspace
functionality, particularly certain OpenGL extensions, the parser provides a
whitelist of registers which userspace may safely access (for both normal and
drm master processes).
3) Commands which access privileged memory (i.e. GGTT, HWS page, etc). The
parser always rejects such commands.
See the overview comment in the source for more details.
This patch only implements the logic. Subsequent patches will build the tables
that drive the parser.
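A condensed sketch of the per-command check, collapsing the three
categories above (flag and helper names are illustrative; reg_in_whitelist()
in particular is hypothetical):

    static bool check_cmd_sketch(const struct drm_i915_cmd_descriptor *desc,
                                 const u32 *cmd, bool is_master)
    {
            if (desc->flags & CMD_DESC_REJECT)
                    return false;                   /* 1) always rejected */

            if ((desc->flags & CMD_DESC_MASTER) && !is_master)
                    return false;                   /* 1) drm master only */

            if ((desc->flags & CMD_DESC_REGISTER) &&
                !reg_in_whitelist(cmd[desc->reg.offset] & desc->reg.mask))
                    return false;                   /* 2) register access */

            /* 3) privileged-memory checks on the remaining operands ... */
            return true;
    }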
v2: Don't set the secure bit if the parser succeeds
Fail harder during init
Makefile cleanup
Kerneldoc cleanup
Clarify module param description
Convert ints to bools in a few places
Move client/subclient defs to i915_reg.h
Remove the bits_count field
OTC-Tracker: AXIA-4631
Change-Id: I50b98c71c6655893291c78a2d1b8954577b37a30
Signed-off-by: Brad Volkin <bradley.d.volkin@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
[danvet: Appease checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Split out from Chris vma-bind rework.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We access it through the cpu window. No functional difference expected
atm since we default to a bottom-up allocation scheme. But that might
eventually change so that we prefer the unmappable range for buffers
that don't need cpu gtt access.
Split out from Chris vma-bind rework.
Note that this is only possible due to the split-up of the mappable
pin flag into PIN_GLOBAL and PIN_MAPPABLE.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tighter code since legacy gem has only mappable anyway.
Split out from Chris vma-bind rework.
Note that this is only possible due to the split-up of the mappable
pin flag into PIN_GLOBAL and PIN_MAPPABLE.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Split out from Chris vma-bind rework.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Anything more than just one bool parameter is just a pain to read,
symbolic constants are much better.
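For instance, the bool parameters become a flags word along these lines
(treat the exact values and prototype as a sketch):

    #define PIN_MAPPABLE    0x1
    #define PIN_NONBLOCK    0x2
    #define PIN_GLOBAL      0x4

    int __must_check i915_gem_object_pin(struct drm_i915_gem_object *obj,
                                         struct i915_address_space *vm,
                                         uint32_t alignment,
                                         unsigned flags);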
Split out from Chris' vma-binding rework patch.
v2: Undo the behaviour change in object_pin that Chris spotted.
v3: Split out misplaced hunk to handle set_cache_level errors,
spotted by Jani.
v4: Keep the current over-zealous binding logic in the execbuffer code
working with a quick hack while the overall binding code gets shuffled
around.
v5: Reorder the PIN_ flags for more natural patch splitup.
v6: Pull out the PIN_GLOBAL split-up again.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
intel_ring_cacheline_align() emits MI_NOOPs until the ring tail is
aligned to a cacheline boundary.
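A sketch of the helper, assuming CACHELINE_BYTES is 64:

    int intel_ring_cacheline_align(struct intel_ring_buffer *ring)
    {
            int num_dwords = (ring->tail & (CACHELINE_BYTES - 1)) / sizeof(u32);
            int ret;

            if (num_dwords == 0)
                    return 0;

            /* Pad out to the next cacheline boundary with MI_NOOPs. */
            num_dwords = CACHELINE_BYTES / sizeof(u32) - num_dwords;
            ret = intel_ring_begin(ring, num_dwords);
            if (ret)
                    return ret;

            while (num_dwords--)
                    intel_ring_emit(ring, MI_NOOP);

            intel_ring_advance(ring);
            return 0;
    }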
Cc: Bjoern C <lkml@call-home.ch>
Cc: Alexandru DAMIAN <alexandru.damian@intel.com>
Cc: Enrico Tagliavini <enrico.tagliavini@gmail.com>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org (prereq for the next patch)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As the VM do not track activity of objects and instead use a large
hammer to forcibly idle and evict all of their associated objects when
one is released, it is possible for that to cause a recursion when we
need to wait for free space on a ring and call retire requests.
(intel_ring_begin -> intel_ring_wait_request ->
i915_gem_retire_requests_ring -> i915_gem_context_free ->
i915_gem_evict_vm -> i915_gpu_idle -> intel_ring_begin etc)
In order to remove the requirement for calling retire-requests from
intel_ring_wait_request, we have to inline a couple of steps from
retiring requests, notably we have to record the position of the request
we wait for and use that to update the available ring space.
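In sketch form, after waiting for the chosen request the free space is
updated directly instead of going through the full retire path (member
and helper names illustrative):

    ret = i915_wait_seqno(ring, seqno);
    if (ret)
            return ret;

    /* No i915_gem_retire_requests_ring() here: just note where the
     * request we waited for ended and recompute the free space. */
    ring->head = request->tail;
    ring->space = ring_space(ring);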
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Because whatever.*
* This should contain a fairly long list of issues and still
unresolved regressions, but I didn't really get a vote.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The ring will emit too many dwords if semaphores are disabled, since we
no longer add the correct number to num_dwords.
This was introduced:
commit 52ed23253b
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date: Mon Dec 16 20:50:38 2013 -0800
drm/i915: Don't emit mbox updates without semaphores
FWIW, the bug was fixed later in the series.
/me hangs head in shame.
Daniel: Also note that we should have merged the read-only semaphore
modparam before this patch.
Reported-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In very rare cases (such as a memory failure stress test) it is possible
to fill the entire ring without emitting a request. Under this
circumstance, the outstanding request is flushed and waited upon. After
space on the ring is cleared, we return to emitting the new command -
except that we just cleared the seqno allocated for this operation and
trigger the sanity check that a request is only ever emitted with a
valid seqno. The fix is to rearrange the code to make sure the
allocation of the seqno for this operation is after any required flushes
of outstanding operations.
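So intel_ring_begin() ends up ordered roughly like this (a sketch; helper
names as in intel_ringbuffer.c of that era):

    int intel_ring_begin(struct intel_ring_buffer *ring, int num_dwords)
    {
            int ret;

            /* May flush and wait upon outstanding requests for space. */
            ret = __intel_ring_prepare(ring, num_dwords * sizeof(u32));
            if (ret)
                    return ret;

            /* Only now reserve the seqno the new commands will use. */
            ret = intel_ring_alloc_seqno(ring);
            if (ret)
                    return ret;

            ring->space -= num_dwords * sizeof(u32);
            return 0;
    }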
The bug exists since the preallocation was introduced in
commit 9d7730914f
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Tue Nov 27 16:22:52 2012 +0000
drm/i915: Preallocate next seqno before touching the ring
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Aside from the fact that it leaves confusing dumps on error capture, it
is entirely unnecessary, and potentially harmful in cases like BDW,
where the instruction has changed.
In reality (seemingly), this will have no behavioral impact.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Added power well arguments to all the force wake routines
to help us control each power well individually based on the
scenario.
Signed-off-by: Deepak S <deepak.s@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[danvet: Resolve conflict with the removed forcewake hack and drop one
spurious hunk Jesse noticed.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I believe, after an evening of i-g-t, that our original workaround for the
missed interrupts on Sandybridge, that of holding forcewake whilst we
wait for an interrupt, is no longer required. This leaves us dependent
on the second workaround of forcing an UC read of the ACTHD before
reading back the seqno from the snooped HWS. Dropping the forcewake
should allow us to conserve a little power, not much as the GPU is meant
to be busy whilst we wait for it!
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The spec tells us that we need to emit an SRM after the LRI
to MSG_FBC_REND_STATE.
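Sketch of the emitted sequence (the scratch offset used for the store is
illustrative):

    ret = intel_ring_begin(ring, 6);
    if (ret)
            return ret;

    intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
    intel_ring_emit(ring, MSG_FBC_REND_STATE);
    intel_ring_emit(ring, value);

    /* The SRM the spec asks for after the LRI. */
    intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1) | MI_SRM_LRM_GLOBAL_GTT);
    intel_ring_emit(ring, MSG_FBC_REND_STATE);
    intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
    intel_ring_advance(ring);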
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Don't issue the FBC nuke/cache clean command when invalidate_domains!=0.
That would indicate that we're not being called for the post-batch
flush.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This confused me so many times that I think it is appropriate to add a
small comment to instruct the reader of the code that it is indeed doing
what it is supposed to do.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The handling of the error interrupts isn't wired up at all. And it
hasn't been ever since ilk happened, so don't bother.
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This was an oversight and should have been in a previous series
somewhere.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
PIPE_CONTROL added the high address dword. I'm not sure how the
simulator let me get away with this. I've explicitly left out all the
workarounds from Gen7 because, in the minimal digging that I did, most
don't seem necessary, and the simulator doesn't complain without them.
Note that BLT and BSD ring commands had already been updated previously.
Just render/pipe_control should have been broken.
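Sketch of a gen8 PIPE_CONTROL QW write, now six dwords with the extra
address dword (flag selection omitted):

    ret = intel_ring_begin(ring, 6);
    if (ret)
            return ret;

    intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
    intel_ring_emit(ring, flags);
    intel_ring_emit(ring, scratch_addr);
    intel_ring_emit(ring, 0);       /* high address dword */
    intel_ring_emit(ring, 0);       /* immediate qword, low */
    intel_ring_emit(ring, 0);       /* immediate qword, high */
    intel_ring_advance(ring);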
v2: Squash in a fixup from Ville to follow the recent IVB PIPE_CONTROL
updates: "BDW uses the IVB PIPE_CONTROL style for specifying GTT vs.
PPGTT for the PIPE_CONTROL QW/DW write."
v3: Rebase on top of Chris' cleanup to have an explicit ring->scratch
buffer object instead of an opaque ring->private where everyone stores
the same stuff inside.
Reported-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net> (for the fixup)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>