Once we factored out the radeon_cs_packet_parse function, the
evergreen_cs_next_is_pkt3_nop and r600_cs_next_is_pkt3_nop
functions became identical, so they can be merged
into a common function.
Signed-off-by: Ilija Hadzic <ihadzic@research.bell-labs.com>
Reviewed-by: Marek Olšák <maraeo@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We now have a common radeon_cs_packet_parse function
that is good for all ASICs. Hook it up and eliminate
ASIC-specific versions.
Signed-off-by: Ilija Hadzic <ihadzic@research.bell-labs.com>
Reviewed-by: Marek Olšák <maraeo@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The idea here is to move to a finer-grained reset.
In some cases we may not need to reset every block, and
in other cases we may not need to re-init the entire
ASIC.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
radeon_fence_wait_empty_locked should not trigger a GPU reset, as no
place it is called from would benefit from such a thing, and it
actually leads to a kernel deadlock in case the reset is triggered
from the pm codepath. Instead, force ring completion in places where it
makes sense, or return early in the others.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Force all fences to signal if the GPU reset failed, so no process gets
stuck waiting on a fence.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Allows us to use the DMA ring from userspace.
DMA doesn't have a good NOP packet in which to embed the
reloc idx, so userspace has to add a reloc for each
buffer used and order them to match the command stream.
v2: fix address bounds checking, reloc indexing
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
With the new per-crtc locking multiple set-cursor calls could happen
in parallel. Out of sheer paranoia I've opted for an irqsave spinlock.
But if there's indeed an access from interrupt contexts to these regs
it's already broken with the old code, so this can likely just be
reduced to a normal spinlock. On the other hand, the pageflip completion happens
from the vblank irq handler ...
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Just refactoring to make the next patch simpler. Now all indirect register
access in the new modesetting driver should go through the r100_mm_(w|r)reg
functions.
RADEON_READ_MM from the old driver seems to be totally unused, so just kill
it.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The bo creation placement is where the bo will be. Instead of trying
to move the bo at each command stream, leave this work to another worker
thread that will use a more advanced heuristic.
agd5f: remove leftover unused variable
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
There are 2 async DMA engines on cayman, one at 0xd000 and
one at 0xd800. The programming interface is the same as
evergreen; however, there are some changes to the commands
for using vmids.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Make it possible to allocate a persistent page table.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We want to use VMs without the IB pool in the future.
v2: also remove it from radeon_vm_finish.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Based on Dmitry's work, but splitting the code into page
directory and page table handling makes it far more
readable and (hopefully) more reliable.
Allocations of page tables are made from the SA on demand;
that should still work fine since all page tables are of
the same size.
Also use the fact that allocations from the SA are mostly
contiguous (except for end-of-buffer wraps and under very
high memory pressure) to group the updates sent to the
chipset specific code into larger chunks.
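A minimal standalone sketch of that grouping idea, assuming contiguous SA
addresses; flush_update, update_tables and pt_addr are illustrative
placeholders, not the driver's actual chipset callback or data structures:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the chipset specific set_page call. */
    static void flush_update(uint64_t start, unsigned count)
    {
            printf("update %u page tables starting at 0x%llx\n",
                   count, (unsigned long long)start);
    }

    /* Walk the page tables that need updating (n >= 1) and merge runs of
     * contiguous SA addresses into one larger update. */
    static void update_tables(const uint64_t *pt_addr, unsigned n,
                              unsigned pt_size)
    {
            uint64_t start = pt_addr[0];
            unsigned count = 1, i;

            for (i = 1; i < n; i++) {
                    if (pt_addr[i] == start + (uint64_t)count * pt_size) {
                            count++;        /* still contiguous, grow the chunk */
                            continue;
                    }
                    flush_update(start, count);
                    start = pt_addr[i];
                    count = 1;
            }
            flush_update(start, count);
    }

    int main(void)
    {
            /* Three contiguous tables, then one placed after a buffer wrap. */
            uint64_t pt_addr[] = { 0x1000, 0x2000, 0x3000, 0x9000 };

            update_tables(pt_addr, 4, 0x1000);      /* two flushes instead of four */
            return 0;
    }

Because SA allocations only wrap or fragment rarely, most mappings collapse
into a handful of chipset calls.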
v3: mostly a rewrite of Dmitry's previous patch.
v4: fix some typos and coding style
Signed-off-by: Dmitry Cherkasov <Dmitrii.Cherkasov@amd.com>
Signed-off-by: Christian König <deathsimple@vodafone.de>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Pass the vm and ring index rather than an IB. This allows
us to use the vm_flush interface for non-IB cases in the
future.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
PDE/PTE update code uses CP ring for memory writes.
All page table entries are preallocated for now in alloc_pt().
It is done as a whole because it's hard to divide it into several patches
that compile and don't break anything when applied separately.
Tested on cayman card.
v2: rebased on top of "refactor set_page chipset interface v3",
code cleanups
v3: switched offsets calc macros to inline funcs where possible,
removed pd_addr from radeon_vm, switched the RADEON_BLOCK_SIZE define
to 9 (and PTE_COUNT to 1 << BLOCK_SIZE)
v4 (ck): move "incr" documentation to previous patch, cleanup and
document RADEON_VM_* constants, change commit message to
our usual format, simplify the patch a lot by removing
everything currently not necessary, disable SI workaround.
v5: (agd5f): Fix typo in tables_size calculation in
radeon_vm_alloc_pt(). Second line should have been
'+=' rather than '='.
v6: fix npdes calculation. In the scenario where the pfns to be mapped
overlap two PDE spans:
    +-----------+-------------+
    | PDE span  | PDE span    |
    +-----------+----+--------+
           |         |
           +---------+
           |  pfns   |
           +---------+
the following npdes calculation gives an incorrect result:
npdes = (nptes >> RADEON_VM_BLOCK_SIZE) + 1;
For the case in the picture above it should give npdes = 2, but it gives 1.
This patch corrects it by rounding the last pfn up to a 512 boundary and the
first pfn down to a 512 boundary, then subtracting and dividing by 512
(see the sketch after the changelog).
v7: Make npde calculation clearer, fix ndw calculation.
v8: (agd5f): reserve enough for 2 full VM PTs, add some
additional comments.
v9: fix typo in npde calculation
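A compilable sketch of the corrected count, assuming the 9-bit block size
mentioned above (512 PTEs per PDE); the function and variable names are
illustrative, not the exact ones used in radeon_vm_alloc_pt():

    #include <stdint.h>
    #include <stdio.h>

    #define RADEON_VM_BLOCK_SIZE 9                  /* 512 PTEs per PDE */
    #define RADEON_VM_PTE_COUNT  (1 << RADEON_VM_BLOCK_SIZE)

    /* Round the (exclusive) last pfn up and the first pfn down to a
     * 512 pfn boundary, then take the difference in units of 512. */
    static uint64_t npdes(uint64_t first_pfn, uint64_t nptes)
    {
            uint64_t last_pfn = first_pfn + nptes;

            return ((last_pfn + RADEON_VM_PTE_COUNT - 1) >> RADEON_VM_BLOCK_SIZE) -
                   (first_pfn >> RADEON_VM_BLOCK_SIZE);
    }

    int main(void)
    {
            /* A range straddling two PDE spans, as in the picture above;
             * the old formula (nptes >> RADEON_VM_BLOCK_SIZE) + 1 gives 1. */
            printf("%llu\n", (unsigned long long)npdes(300, 400));  /* prints 2 */
            return 0;
    }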
Signed-off-by: Dmitry Cherkasov <Dmitrii.Cherkasov@amd.com>
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cleanup the interface in preparation for hierarchical page tables.
v2: add incr parameter to set_page for simple scattered PT updates
(see the sketch after the changelog)
added PDE-specific flags to r600_flags and radeon_drm.h
removed superfluous value masking with 0xffffffff
v3: removed superfluous bo_va->valid checking
changed R600_PTE_VALID to R600_ENTRY_VALID to handle PDE too
v4 (ck): fix indentation style, rework and fix typos in commit message,
add documentation for the incr parameter, also use the incr
parameter for system pages
v5 (agd5f): use upper_32_bits() and minor white space fixes
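A tiny hypothetical sketch of what the incr stride means for a set_page
style callback; set_page_sketch, pe, addr and flags are made-up names, and
writing through a plain pointer merely stands in for whatever the chipset
code actually does:

    #include <stdint.h>

    /* Write "count" consecutive 64-bit entries starting at "pe"; "incr" is
     * how much the mapped address advances per entry (e.g. the page size
     * for a linear mapping), and "flags" carries bits such as
     * R600_ENTRY_VALID. */
    void set_page_sketch(uint64_t *pe, uint64_t addr,
                         unsigned count, uint32_t incr, uint32_t flags)
    {
            unsigned i;

            for (i = 0; i < count; i++) {
                    pe[i] = addr | flags;
                    addr += incr;
            }
    }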
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dmitry Cherkassov <Dmitrii.Cherkasov@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Roughly based on how nouveau is handling it. Instead of
adding the bo_va when the address is set, add the bo_va
when the handle is opened, but set the address to zero
until userspace tells us where to place it.
This fixes another bunch of problems with glamor.
v2: agd5f: fix build after dropping patch 7/8.
Signed-off-by: Christian König <deathsimple@vodafone.de>
It doesn't really belong in the object functions;
also rename it to avoid collisions with struct radeon_bo_va.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Even GPUs can have a null pointer dereference, so move
the IB pool to another offset to catch those.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Currently doing the update with the CP.
v2: Rebased on Jerome's bugfix. Make validity comparison
more human readable.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Removing the need to wait for anything.
Still not ideal, since we need to free pt on va remove.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Move binding onto the ring, simplifying handling a bit.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Move flushing the VMs as function into the rings.
First step to make VM operations async.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Store a reference to the VM in the IB structure; that
makes calculating the IB's address a bit less complicated.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Move it out of radeon_pm.c and into radeon_acpi.c since
we use it for more than just pm now.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Set up a handler for ACPI events and respond to brightness change
requests from the system BIOS.
v2: fix notification when using device-specific command codes
(tested by Pali Rohár <pali.rohar@gmail.com>); cache the encoder
controlling the backlight during the initialization to avoid searching
it every time (suggested by Alex Deucher).
v3: whitespace fixes (Alex Deucher).
Signed-off-by: Luca Tettamanti <kronos.it@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Wrap the call to VERIFY_INTERFACE and add the parsing of the support
vectors.
v2: use a packed struct for handling the output of ACPI calls; this hides
the ugly pointer arithmetic (Lee, Chun-Yi <jlee@suse.com>).
v3: fix radeon_atif_parse_functions handling (Alex Deucher)
Signed-off-by: Luca Tettamanti <kronos.it@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
It was only used for dynpm, but has been replaced with
a better implementation using fences. Remove it.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Returns a snapshot of the GPU clock counter. Needed
for certain OpenGL extensions.
v2: agd5f
- address Jerome's comments
- add function documentation
Signed-off-by: Marek Olšák <maraeo@gmail.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Virtual addresses need to be fenced to know when we can safely remove them.
This patch also properly clears the pagetable. Previously it was
seriously broken.
Kernels 3.5/3.4 need a similar patch, but adapted for the difference in mutex locking.
v2: Force the pagetable update when unbinding a bo (don't bail out if
bo_va->valid is true).
v3: Add kernel 3.5/3.4 comment.
v4: Fix compilation warnings.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Add support for using memory buffers rather than
scratch registers. Some rings may not be able to
write to scratch registers.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Just store the index in the ring structure.
Idea taken from one of Jerome's wip rptr patches.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Const IBs are executed on the CE not the CP, so we can't
fence them in the normal way.
So submit them directly before the IB instead, just as
the documentation says.
v2: keep the extra documentation
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Otherwise we can encounter out of memory situations under extreme load.
v2: add documentation for the new function
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Try to save whatever is on the rings when
we encounter a lockup.
v2: Fix spelling error. Free saved ring data if reset fails.
Add documentation for the new functions.
v3: Some more spelling fixes
v4: It doesn't make sense to save anything if all fences
are signaled
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Before emitting any indirect buffer, emit the offset of the next
valid ring content, if any. This allows code that wants to resume
the ring to do so right after the IB that caused the GPU lockup.
v2: use scratch registers instead of storing it into memory
v3: skip over the surface sync for ni and si as well
v4: use SET_CONFIG_REG instead of PACKET0
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Just restore the page table instead. This change addresses three
problems:
1. Calling vm_manager_suspend in the suspend path is
problematic because it wants to wait for the VM use
to end, which in case of a lockup never happens.
2. In case of a locked up memory controller
unbinding the VM seems to make it even more
unstable, creating an unrecoverable lockup
in the end.
3. If we want to backup/restore the leftover ring
content we must not unbind VMs in between.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Just reinitialize the shader content on resume instead.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
The IB pool is in gart memory, so it is completely
superfluous to unpin / repin it on suspend / resume.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
GPU resets need to be exclusive, one happening at a time. For this,
add an rw semaphore so that any path that triggers GPU activity
has to take the semaphore as a reader, thus allowing concurrency.
The GPU reset path takes the semaphore as a writer, ensuring that
no concurrent resets take place.
v2: init rw semaphore
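For illustration, a compilable userspace analogue of that pattern; the patch
itself uses a kernel struct rw_semaphore (init_rwsem, down_read/up_read in
the submission paths, down_write/up_write around the reset), modelled here
with a POSIX rwlock and placeholder work:

    #include <pthread.h>

    static pthread_rwlock_t exclusive_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Any path that triggers GPU activity takes the lock as a reader,
     * so many of them can run concurrently. */
    void submit_work(void)
    {
            pthread_rwlock_rdlock(&exclusive_lock);
            /* ... touch the hardware ... */
            pthread_rwlock_unlock(&exclusive_lock);
    }

    /* The reset path takes it as a writer: it waits for all readers to
     * drain and excludes any other reset running at the same time. */
    void reset_gpu(void)
    {
            pthread_rwlock_wrlock(&exclusive_lock);
            /* ... reset and reinit the ASIC ... */
            pthread_rwlock_unlock(&exclusive_lock);
    }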
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Instead of returning the error, handle it directly,
and while at it fix the comments about the ring lock.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Try to remove or replace the cs_mutex with a
vm_mutex where it is still needed.
v2: fix locking order
v3: rebased on drm-next
Signed-off-by: Christian König <deathsimple@vodafone.de>
So we can skip the locking. Also rename sw_int to
ring_int, because that better matches its purpose.
Signed-off-by: Christian Koenig <christian.koenig@amd.com>
1. It is really dangerous to have more than one
spinlock protecting the same information.
2. radeon_irq_set sometimes wasn't called with lock
protection, so it could happen that more than one
CPU would tamper with the irq regs at the same
time.
3. The pm.gui_idle variable was assuming that the 3D
engine wasn't becoming idle between testing the
register and setting the variable. So just remove
it and test the register directly.
v2: Also handle the hpd irq code the same way.
v3: Rename hpd parameter for clarification.
Signed-off-by: Christian Koenig <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
The spinlock was actually there to protect the
rptr, but rptr was read outside of the locked area.
Also we don't really need a spinlock here; an
atomic should do quite fine since we only need to
prevent it from being reentrant.
v2: Keep the spinlock....
v3: Back to an atomic again after finding & fixing the real bug.
Signed-off-by: Christian Koenig <christian.koenig@amd.com>
It is a rw_semaphore now and only write locked
while changing the clock. Also the lock is renamed
to better reflect what it is protecting.
v2: Keep the ttm_vm_ops on IGPs
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Move inter ring syncing with semaphores into the
existing ring allocations, with that we need to
lock the ring mutex only once.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
It is completely unnecessary to create fences
before they are emitted, so remove that and a bunch
of checks for whether fences are emitted or not.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
- Properly set up the RBs
- Properly set up the SPI
- Properly set up gb_addr_config
This should fix rendering issues on certain cards.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Pull drm fixes from Dave Airlie:
"A bunch of fixes:
- vmware memory corruption
- ttm spinlock balance
- cirrus/mgag200 work in the presence of efifb
and finally Alex and Jerome managed to track down a magic set of bits
that on certain rv740 and evergreen cards allow the correct use of the
complete set of render backends; this makes the cards operate
correctly in a number of scenarios we had issues in before, and it also
manages to boost speed on benchmarks by large amounts on these
specific gpus."
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/edid: Make the header fixup threshold tunable
drm/radeon: fix regression in UMS CS ioctl
drm/vmwgfx: Fix nasty write past alloced memory area
drm/ttm: Fix spinlock imbalance
drm/radeon: fixup tiling group size and backendmap on r6xx-r9xx (v4)
drm/radeon: fix HD6790, HD6570 backend programming
drm/radeon: properly program gart on rv740, juniper, cypress, barts, hemlock
drm/radeon: fix bank information in tiling config
drm/mgag200: kick off conflicting framebuffers earlier.
drm/cirrus: kick out conflicting framebuffers earlier
cirrus: avoid crash if driver fails to load
Tiling group size is always 256bits on r6xx/r7xx/r8xx/9xx. Also fix and
simplify render backend map. This now properly sets up the backend map
on r6xx-9xx which should improve 3D performance.
Vadim benchmarked also:
Some benchmarks on juniper (5750), fullscreen 1920x1080,
first result - kernel 3.4.0+ (fb21affa), second - with these patches:
Lightsmark: 91 fps => 123 fps +35%
Doom3: 74 fps => 101 fps +36%
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Just move its only caller into the same file as it and make it static.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
No need to malloc it any more.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
If we don't store local data into global variables
it isn't necessary to lock anything.
v2: rebased on new SA interface
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It never really belonged there in the first place.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It isn't necessary any more and the suballocator seems to perform
even better.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Directly use the suballocator to get small chunks of memory.
It's equally fast and doesn't crash when we encounter a GPU reset.
v2: rebased on new SA interface.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
A fresh start with a new idea for a multiple ring allocator.
It should perform as well as a normal ring allocator as long
as only one ring does something, but falls back to a more
complex algorithm if more complex things start to happen.
We store the last allocated bo in last, and we always try to allocate
after the last allocated bo. The principle is that in a linear GPU ring
progression what is after last is the oldest bo we allocated and thus
the first one that should no longer be in use by the GPU.
If that's not the case we skip over the bo after last to the closest
done bo, if such a one exists. If none exists and we are not asked to
block we report failure to allocate.
If we are asked to block we wait on the oldest fences of all
rings. We just wait for any of those fences to complete.
v2: We need to be able to let hole point to the list_head, otherwise
try free will never free the first allocation of the list. Also
stop calling radeon_fence_signalled more than necessary.
v3: Don't free allocations without considering them as a hole,
otherwise we might lose holes. Also return ENOMEM instead of ENOENT
when running out of fences to wait for. Limit the number of holes
we try for each ring to 3.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Use one wait queue for all rings. When one ring progresses, the others
likely do too, and we are not expecting to have a lot of waiters
anyway.
Also add a fence_wait_any that will wait until the first fence
in the fence array (one fence per ring) is signaled. This allows
waiting on all rings.
v2: some minor cleanups and improvements.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Define the interface without modifying the allocation
algorithm in any way.
v2: rebase on top of fence new uint64 patch
v3: add ring to debugfs output
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Instead of offset + size keep start and end offset directly.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Make the suballocator self-contained with respect to locking.
v2: split the bugfix into a separate patch.
v3: remove some unrelated changes.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Some callers illegally called fence_wait_next/empty
while holding the ring emission mutex. So don't
relock the mutex in those cases, and move the actual
locking into the fence code.
v2: Don't try to unlock the mutex if it isn't locked.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Using 64-bit fence sequence numbers we can directly compare sequence
numbers to know if a fence is signaled or not. Thus the fence
list becomes useless, and so does the fence lock that mainly
protected the fence list.
Things like ring.ready are no longer behind a lock; this should
be ok as ring.ready is initialized once and will only change
when facing a lockup. The worst case is that we return -EBUSY just
after a successful GPU reset, or we go into a wait state instead
of returning -EBUSY (thus delaying reporting -EBUSY to the fence
wait caller).
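The signaled check then reduces to a plain comparison, roughly along these
lines (a sketch with illustrative names, not the driver's actual helpers):

    #include <stdbool.h>
    #include <stdint.h>

    /* With a monotonically increasing 64-bit sequence there is no wraparound
     * to worry about: a fence is signaled once the last sequence number seen
     * processed for its ring has reached the fence's own number. */
    bool fence_seq_signaled(uint64_t fence_seq, uint64_t last_processed_seq)
    {
            return fence_seq <= last_processed_seq;
    }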
v2: Remove leftover comment, force using writeback on cayman and
newer, thus not having to suffer from possible scratch reg
exhaustion
v3: Rebase on top of change to uint64 fence patch
v4: Change DCE5 test to force write back on cayman and newer but
also any APU such as PALM or SUMO family
v5: Rebase on top of new uint64 fence patch
v6: Just break if seq doesn't change any more. Use the radeon_fence
prefix for all function names. Even if it's now highly optimized,
try to avoid polling too often.
v7: We should never poll the last_seq from the hardware without
waking the sleeping threads, otherwise we might lose events.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This converts fences to use a uint64_t sequence number; the intention is
to use the fact that uint64_t is big enough that we don't need to
care about wraparound.
Tested with and without writeback using 0xFFFFF000 as the initial
fence sequence, thus allowing the wraparound from
32 bits to 64 bits to be tested.
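Since each ring only emits the lower 32 bits of the sequence (see v3 below),
the CPU side has to fold the value read back from the ring into the 64-bit
sequence. A compilable sketch of that folding, with illustrative names rather
than the exact fence_process code:

    #include <stdint.h>
    #include <stdio.h>

    /* Keep the upper 32 bits of the last known 64-bit sequence and splice in
     * the 32-bit value read back from the ring; if the result went backwards,
     * the lower half wrapped, so advance the upper half by one. */
    static uint64_t extend_seq(uint64_t last_seq, uint32_t hw_seq)
    {
            uint64_t seq = (last_seq & 0xffffffff00000000ULL) | hw_seq;

            if (seq < last_seq)
                    seq += 0x100000000ULL;
            return seq;
    }

    int main(void)
    {
            /* Crossing the 32-bit limit, as in the 0xFFFFF000 test above. */
            printf("0x%llx\n", (unsigned long long)extend_seq(0xfffffff0ULL, 0x10));
            /* prints 0x100000010 */
            return 0;
    }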
v2: Add a comment about the possible race btw CPU & GPU, add a comment
stressing that we need 2-dword alignment for R600_WB_EVENT_OFFSET.
Read fence sequences in the reverse order of the GPU writing them so we
mitigate the race btw CPU and GPU.
v3: Drop the need for the ring to emit the 64-bit fence, and just have
each ring emit the lower 32 bits of the fence sequence. We
handle the wrap over 32 bits in fence_process.
v4: Just a small optimization: Don't reread the last_seq value
if the loop restarts, since we already know its value anyway.
Also start at zero not one for the seq value and use pre instead
of post increment in emit, otherwise wait_empty will deadlock.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
A single global mutex for ring submissions seems sufficient.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Nothing chipset or ring specific with it,
so also move it to radeon_ring.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Don't hard code the 10 seconds timeout. Compute jobs
can run much longer.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It isn't chipset specific, so it makes no sense
to have that inside r100.c.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Rings need to lock in order, otherwise
the ring subsystem can deadlock.
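A small runnable illustration of that ordering rule, with POSIX mutexes
standing in for the ring mutexes (NUM_RINGS and the helper names are made up
for the example): as long as every path takes the rings it needs in ascending
index order, two paths can never each hold a ring the other one is waiting
for.

    #include <pthread.h>
    #include <stdbool.h>

    #define NUM_RINGS 5     /* illustrative, not the driver's ring count */

    static pthread_mutex_t ring_lock[NUM_RINGS] = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
            PTHREAD_MUTEX_INITIALIZER,
    };

    /* Lock the rings marked in "needed", always in ascending index order. */
    void lock_rings(const bool needed[NUM_RINGS])
    {
            int i;

            for (i = 0; i < NUM_RINGS; i++)
                    if (needed[i])
                            pthread_mutex_lock(&ring_lock[i]);
    }

    /* Unlock in reverse order (not required for correctness, just tidy). */
    void unlock_rings(const bool needed[NUM_RINGS])
    {
            int i;

            for (i = NUM_RINGS - 1; i >= 0; i--)
                    if (needed[i])
                            pthread_mutex_unlock(&ring_lock[i]);
    }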
v2: fix error handling and number of locked doublewords.
v3: stop creating unneeded semaphores.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's never used and so practically superfluous.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
As discussed with Michel, that name better
describes the behavior of this function.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Previously multiple rings could trigger multiple GPU
resets at the same time.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Removing all the different error messages and
having just one standard behaviour over all
chipset generations.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It makes no sense at all to have more than one flag.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Different rings have different criteria to test
if they are stuck.
v2: rebased on current drm-next
Signed-off-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Tested-by: Christian König <deathsimple@vodafone.de>
Reviewed-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
- add support for rs6xx
- add support for DCE4/5
- fixup 6xx/7xx
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
RLC handles the interrupt controller and other tasks
on the GPU.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Currently the driver requires 5 sets of ucode:
1. pfp - pre-fetch parser, part of the CP
2. me - micro engine, part of the CP
3. ce - constant engine, part of the CP
4. rlc - interrupt controller
5. mc - memory controller
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>