When we are using memcpy to move objects around and the memcpy fails,
either due to a lack of memory to populate or a failure to finish the copy,
we don't want to destroy the mm_node that has been copied into old_copy.
While working on a new kms driver that uses memcpy, if I overallocated bos
up to the memory limits and eviction failed, the machine would oops soon
after due to having an active bo with an already freed drm_mm embedded in
it; freeing it a second time didn't end well.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All legitimate users of this function outside ttm_bo.c are gone; it is now
only an implementation detail.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
This requires re-use of the seqno. Instead of spinning with a new seqno
every time we keep the current one, but still drop all other reservations
we hold. Only when we succeed do we try to get our other reservations back.
This should increase fairness slightly.
Changes since v1:
- Increase val_seq before calling ttm_bo_reserve_slowpath_nolru and
retrying to take all entries to prevent a race.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Instead of dropping everything, waiting for the bo to be unreserved
and trying again, a better strategy is to do a blocking wait.
This can be mapped a lot better to a mutex_lock-like call.
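For illustration, a blocking reserve along those lines could look roughly
like this (sketch only; the 'reserved' flag and 'event_queue' names are
assumptions, not necessarily the exact TTM fields):

  static int ttm_bo_reserve_blocking_sketch(struct ttm_buffer_object *bo,
                                            bool interruptible)
  {
          while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) {
                  int ret;

                  if (interruptible) {
                          ret = wait_event_interruptible(bo->event_queue,
                                          atomic_read(&bo->reserved) == 0);
                  } else {
                          wait_event(bo->event_queue,
                                     atomic_read(&bo->reserved) == 0);
                          ret = 0;
                  }
                  if (unlikely(ret != 0))
                          return ret;     /* -ERESTARTSYS from the wait */
          }
          return 0;
  }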
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
With the lru lock no longer required for protecting reservations we
can just do a ttm_bo_reserve_nolru on -EBUSY, and handle all errors
in a single path.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
There should no longer be assumptions that reserve will always succeed
with the lru lock held, so we can safely break the whole atomic
reserve/lru thing. As a bonus this fixes most lockdep annotations for
reservations.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Noticed while reviewing the fence locking in the radeon pageflip
handler.
v2: Instead of grabbing the bdev->fence_lock in object_transfer just
move the single callsite of that function a few lines, so that it is
protected by the fence_lock. Suggested by Jerome Glisse.
v3: Fix typo in commit message.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All items on the lru list are always reservable, so this is a stupid
thing to keep. Not only that, it is used in a way which would
guarantee deadlocks if it were ever to be set to block on reserve.
This is a lot of churn, but mostly because of the removal of the
argument which can be nested arbitrarily deeply in many places.
No change of code in this patch except the removal of the no_wait_reserve
argument; the previous patch removed its last use.
v2:
- Warn if -EBUSY is returned on reservation; all objects on the list
should be reservable. Adjusted patch slightly due to conflicts.
v3:
- Focus on no_wait_reserve removal only.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Replace the goto loop with a simple for each loop, and only run the
delayed destroy cleanup if we can reserve the buffer first.
No race occurs, since lru lock is never dropped any more. An empty list
and a list full of unreservable buffers both cause -EBUSY to be returned,
which is identical to the previous situation, because previously buffers
on the lru list were always guaranteed to be reservable.
This should work since ttm currently guarantees that items on the lru are
always reservable, and reserving items blockingly while holding some bo
is enough to run into a deadlock.
Currently this is not a concern, since removal from the lru list and
reservation are always done atomically, but when this guarantee no longer
holds we have to handle this situation or end up with possible deadlocks.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Replace the while loop with a simple for each loop, and only run the
delayed destroy cleanup if we can reserve the buffer first.
No race occurs, since lru lock is never dropped any more. An empty list
and a list full of unreservable buffers both cause -EBUSY to be returned,
which is identical to the previous situation.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
By removing the unlocking of lru and retaking it immediately, a race is
removed where the bo is taken off the swap list or the lru list between
the unlock and relock. As such the cleanup_refs code can be simplified,
it will attempt to call ttm_bo_wait non-blockingly, and if it fails
it will drop the locks and perform a blocking wait, or return an error
if no_wait_gpu was set.
The need for looping is also eliminated: since swapout and evict_mem_first
will always follow the destruction path, no new fence is allowed
to be attached. As far as I can see this may already have been the case,
but the unlocking / relocking required a complicated loop to deal with
re-reservation.
Changes since v1:
- Simplify no_wait_gpu case by folding it in with empty ddestroy.
- Hold a reservation while calling ttm_bo_cleanup_memtype_use again.
Changes since v2:
- Do not remove bo from lru list while waiting
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This requires changing the order in ttm_bo_cleanup_refs_or_queue to
take the reservation first, as there is otherwise no race free way to
take lru lock before fence_lock.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Alex writes:
Pretty minor -next pull request. We have some additional new bits waiting
internally for release. Hopefully Monday we can get at least some of
them out. The others will probably take a few more weeks.
Highlights of the current request:
- ELD registers for passing audio information to the sound hardware
- Handle GPUVM page faults more gracefully
- Misc fixes
Merge radeon test
* 'drm-next-3.8' of git://people.freedesktop.org/~agd5f/linux: (483 commits)
drm/radeon: bump driver version for new info ioctl requests
drm/radeon: fix eDP clk and lane setup for scaled modes
drm/radeon: add new INFO ioctl requests
drm/radeon/dce32+: use fractional fb dividers for high clocks
drm/radeon: use cached memory when evicting for vram on non agp
drm/radeon: add a CS flag END_OF_FRAME
drm/radeon: stop page faults from hanging the system (v2)
drm/radeon/dce4/5: add registers for ELD handling
drm/radeon/dce3.2: add registers for ELD handling
radeon: fix pll/ctrc mapping on dce2 and dce3 hardware
Linux 3.7-rc7
powerpc/eeh: Do not invalidate PE properly
Revert "drm/i915: enable rc6 on ilk again"
ALSA: hda - Fix build without CONFIG_PM
of/address: sparc: Declare of_iomap as an extern function for sparc again
PM / QoS: fix wrong error-checking condition
bnx2x: remove redundant warning log
vxlan: fix command usage in its doc
8139cp: revert "set ring address before enabling receiver"
MPI: Fix compilation on MIPS with GCC 4.4 and newer
...
Conflicts:
drivers/gpu/drm/exynos/exynos_drm_encoder.c
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
Removes the need for a write lock each time we call ttm_bo_unref().
v2: Remove an unused variable.
v3: Really remove the unused variable.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Also move a kref_init() out of spinlocked region
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is similar to other platforms that don't allow command submission
to buffers locked on the cpu.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reservation locking currently always takes place under the LRU spinlock.
Hence, strictly there is no need for an atomic_cmpxchg call; we can use
atomic_read followed by atomic_set, since nobody else will ever reserve
without the lru spinlock held.
At least on Intel this should remove a locked bus cycle on successful
reserve.
Note that this commit may be obsoleted by the cross-device reservation work.
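As an illustration only (not the actual patch; the 'reserved' field and
surrounding code are assumed), the uncontended reserve path changes roughly
like this:

  spin_lock(&glob->lru_lock);

  /* before: a locked bus cycle even when uncontended */
  ret = atomic_cmpxchg(&bo->reserved, 0, 1) ? -EBUSY : 0;

  /* after: a plain read + set is safe because every reserver holds lru_lock */
  if (atomic_read(&bo->reserved) != 0) {
          ret = -EBUSY;
  } else {
          atomic_set(&bo->reserved, 1);
          ret = 0;
  }

  spin_unlock(&glob->lru_lock);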
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The mostly used lookup+get put+potential_destroy path of TTM objects
is converted to use RCU locks. This will substantially decrease the amount
of locked bus cycles during normal operation.
Since we use kfree_rcu to free the objects, no rcu synchronization is needed
at module unload time.
v2: Don't touch include/linux/kref.h
v3: Adapt to kref_get_unless_zero return value change
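A minimal sketch of the lookup+get pattern this enables (the object layout
and the lookup helper are assumptions for illustration, not the actual ttm
object code):

  #include <linux/kernel.h>
  #include <linux/kref.h>
  #include <linux/rcupdate.h>
  #include <linux/slab.h>

  struct obj_sketch {
          struct kref refcount;
          struct rcu_head rhead;
  };

  static struct obj_sketch *obj_lookup_rcu(unsigned long key); /* hypothetical */

  static struct obj_sketch *obj_lookup_get(unsigned long key)
  {
          struct obj_sketch *obj;

          rcu_read_lock();
          obj = obj_lookup_rcu(key);
          if (obj && !kref_get_unless_zero(&obj->refcount))
                  obj = NULL;     /* lost the race with the final put */
          rcu_read_unlock();
          return obj;
  }

  static void obj_release(struct kref *kref)
  {
          struct obj_sketch *obj = container_of(kref, struct obj_sketch, refcount);

          /* kfree_rcu() defers the free past all RCU readers, so no
           * synchronize_rcu() is needed at module unload time. */
          kfree_rcu(obj, rhead);
  }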
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
vmwgfx was its only user and always sets it to the same..
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's unused.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's unused.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All drivers set it to 0 and nothing uses it.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It is unnecessary to disable preemption explicitly while calling
copy_highpage(), because copy_highpage() will do it again through
kmap_atomic/kunmap_atomic.
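A sketch of the simplification (the copy itself is the only relevant part):

  /* before: explicit preemption toggling around the highmem copy */
  preempt_disable();
  copy_highpage(to_page, from_page);
  preempt_enable();

  /* after: copy_highpage() already maps via kmap_atomic()/kunmap_atomic(),
   * which disable and re-enable preemption themselves */
  copy_highpage(to_page, from_page);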
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The TTM page can be allocated from high memory. In such a case it is
wrong to use page_address(page) as the virtual address for the high memory
page.
bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=50241
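A sketch of the difference (the memset is just an example use of the
mapping, not the exact TTM code):

  #include <linux/highmem.h>
  #include <linux/string.h>

  static void touch_page_wrong(struct page *page)
  {
          /* page_address() can return NULL for a highmem page */
          memset(page_address(page), 0, PAGE_SIZE);
  }

  static void touch_page_right(struct page *page)
  {
          void *va = kmap(page);  /* temporary mapping, valid for highmem too */

          memset(va, 0, PAGE_SIZE);
          kunmap(page);
  }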
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
In theory, that function could release the lru lock between
checking for bo on ddestroy list and a successful reserve if the bo
was already reserved, and the function was called with waiting reserves
allowed.
However, all current reservers of a bo on the ddestroy list would
atomically take the bo off the list after a successful reserve so this
race should not have been hit, so no need to backport for stable.
This patch also fixes a case found by Maarten Lankhorst where
ttm_mem_evict_first called with no_wait_gpu would incorrectly
spin waiting for bo idle if trying to evict a busy buffer that
also sits on the ddestroy list.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The ttm_mem_evict_first function could theoretically drop the
lru lock without retrying if a reservation from off the LRU list
ended up waiting.
However, since currently there are no users that could cause a wait
in that situation so this is not suitable for stable
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
A long time ago, in v2.4, VM_RESERVED kept the swapout process off the VMA;
it has since lost its original meaning but still has some effects:
| effect | alternative flags
-+------------------------+---------------------------------------------
1| account as reserved_vm | VM_IO
2| skip in core dump | VM_IO, VM_DONTDUMP
3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
4| do not mlock | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
This patch removes the reserved_vm counter from mm_struct. Seems like nobody
cares about it; it is not exported to userspace directly, it only
reduces the total_vm shown in proc.
Thus VM_RESERVED can be replaced with VM_IO or pair VM_DONTEXPAND | VM_DONTDUMP.
remap_pfn_range() and io_remap_pfn_range() set VM_IO|VM_DONTEXPAND|VM_DONTDUMP.
remap_vmalloc_range() set VM_DONTEXPAND | VM_DONTDUMP.
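For a typical driver mmap handler the conversion looks roughly like this
(handler name and body are illustrative):

  static int foo_mmap(struct file *filp, struct vm_area_struct *vma)
  {
          /* before: vma->vm_flags |= VM_RESERVED; */
          vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;

          /* ... set up vma->vm_ops etc. ... */
          return 0;
  }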
[akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Paris <eparis@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Venkatesh Pallipadi <venki@google.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull drm merge (part 1) from Dave Airlie:
"So first of all my tree and uapi stuff has a conflict mess, its my
fault as the nouveau stuff didn't hit -next as were trying to rebase
regressions out of it before we merged.
Highlights:
- SH mobile modesetting driver and associated helpers
- some DRM core documentation
- i915 modesetting rework, haswell hdmi, haswell and vlv fixes, write
combined pte writing, ilk rc6 support,
- nouveau: major driver rework into a hw core driver, makes features
like SLI a lot saner to implement,
- psb: add eDP/DP support for Cedarview
- radeon: 2 layer page tables, async VM pte updates, better PLL
selection for > 2 screens, better ACPI interactions
The rest is general grab bag of fixes.
So why part 1? well I have the exynos pull req which came in a bit
late but was waiting for me to do something they shouldn't have and it
looks fairly safe, and David Howells has some more header cleanups
he'd like me to pull, that seem like a good idea, but I'd like to get
this merge out of the way so -next doesn't get blocked."
Tons of conflicts mostly due to silly include line changes, but mostly
mindless. A few other small semantic conflicts too, noted from Dave's
pre-merged branch.
* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (447 commits)
drm/nv98/crypt: fix fuc build with latest envyas
drm/nouveau/devinit: fixup various issues with subdev ctor/init ordering
drm/nv41/vm: fix and enable use of "real" pciegart
drm/nv44/vm: fix and enable use of "real" pciegart
drm/nv04/dmaobj: fixup vm target handling in preparation for nv4x pcie
drm/nouveau: store supported dma mask in vmmgr
drm/nvc0/ibus: initial implementation of subdev
drm/nouveau/therm: add support for fan-control modes
drm/nouveau/hwmon: rename pwm0* to pmw1* to follow hwmon's rules
drm/nouveau/therm: calculate the pwm divisor on nv50+
drm/nouveau/fan: rewrite the fan tachometer driver to get more precision, faster
drm/nouveau/therm: move thermal-related functions to the therm subdev
drm/nouveau/bios: parse the pwm divisor from the perf table
drm/nouveau/therm: use the EXTDEV table to detect i2c monitoring devices
drm/nouveau/therm: rework thermal table parsing
drm/nouveau/gpio: expose the PWM/TOGGLE parameter found in the gpio vbios table
drm/nouveau: fix pm initialization order
drm/nouveau/bios: check that fixed tvdac gpio data is valid before using it
drm/nouveau: log channel debug/error messages from client object rather than drm client
drm/nouveau: have drm debugging macros build on top of core macros
...
Convert #include "..." to #include <path/...> in drivers/gpu/.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
Remove useless kfree() and clean up code related to the removal.
The semantic patch that finds this problem is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@r exists@
position p1,p2;
expression x;
@@
if (x@p1 == NULL) { ... kfree@p2(x); ... return ...; }
@unchanged exists@
position r.p1,r.p2;
expression e <= r.x,x,e1;
iterator I;
statement S;
@@
if (x@p1 == NULL) { ... when != I(x,...) S
when != e = e1
when != e += e1
when != e -= e1
when != ++e
when != --e
when != e++
when != e--
when != &e
kfree@p2(x); ... return ...; }
@ok depends on unchanged exists@
position any r.p1;
position r.p2;
expression x;
@@
... when != true x@p1 == NULL
kfree@p2(x);
@depends on !ok && unchanged@
position r.p2;
expression x;
@@
*kfree@p2(x);
// </smpl>
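An illustrative instance of the pattern being removed (inside some
allocation error path):

  /* before: kfree(NULL) is a no-op, so this call is dead code */
  if (ptr == NULL) {
          kfree(ptr);
          return -ENOMEM;
  }

  /* after */
  if (ptr == NULL)
          return -ENOMEM;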
Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Use copy_highpage() to copy from one page to another.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
A regression was introduced in the 3.3 rc series, commit
"drm/ttm: simplify memory accounting for ttm user v2",
causing the metadata of buffer objects created using the ttm_bo_create()
function to be accounted twice.
That causes massive leaks with the vmwgfx driver running for example
SpecViewperf Catia-03 test 2, eventually killing the app.
Furthermore, the same commit introduces a regression where
metadata accounting is leaked if a buffer object is
initialized with an illegal size. This is also fixed with this commit.
v2: Fixed an error path and removed an unused variable.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
This imbalance may cause hangs when TTM is trying to swap out a buffer
that is already on the delayed delete list.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
This adds the ability for ttm common code to take an SG table
and use it as the backing for a slave TTM object.
The drivers can then populate their GTT tables using the SG object.
v2: make sure to setup VM for sg bos as well.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Pull drm main changes from Dave Airlie:
"This is the main drm pull request, I'm probably going to send two more
smaller ones, will explain below.
This contains a patch that is also in the fbdev tree, but it should be
the same patch, it added an API for hot unplugging framebuffer
devices, and I need that API for a new driver.
It also contains some changes to the i2c tree which Jean has acked,
and one change to moorestown platform stuff in x86.
Highlights:
- new drivers: UDL driver for USB displaylink devices, kms only,
should support correct hotplug operations.
- core: i2c speedups + better hotplug support, EDID overriding via
firmware interface - allows user to load a firmware for a broken
monitor/kvm from userspace, it even has documentation for it.
- exynos: new HDMI audio + hdmi 1.4 + virtual output driver
- gma500: code cleanup
- radeon: cleanups, CS optimisations, streamout support and pageflip
fix
- nouveau: NVD9 displayport support + more reclocking work
- i915: re-enabling GMBUS, finish gpu patch (might help hibernation
who knows), missed irq fixes, stencil tiling fixes, interlaced
support, aliased PPGTT support for SNB/IVB, swizzling for SNB/IVB,
semaphore fixes
As well as the usual bunch of cleanups and fixes all over the place.
I've got two things I'd like to merge a bit later:
a) AMD support for all their new radeonhd 7000 series GPU and APUs.
AMD dropped this a bit late due to insane internal review
processes, (please AMD just follow Intel and let open source guys
ship stuff early) however I don't want to penalise people who own
this hardware (since its been on sale for 3-4 months and GPU hw
doesn't exactly have a lifetime in years) and consign them to
using closed drivers for longer than necessary. The changes are
well contained and just plug into the driver new gpu functionality
so they should be fairly regression proof. I just want to give
them a bit of a run on the hw AMD kindly sent me.
b) drm prime/dma-buf interface code. This is just infrastructure
code to expose the dma-buf stuff to drm drivers and to userspace.
I'm not planning on pushing any driver support in this cycle
(except maybe exynos), but I'd like to get the infrastructure code
in so for the next cycle I can start getting the driver support
into the individual drivers. We have started driver support for
i915, nouveau and udl along with I think exynos and omap in
staging. However this code relies on the dma-buf tree being
pulled into your tree first since it needs the latest interfaces
from that tree. I'll push to get that tree sent asap.
(oh and any warnings you see in i915 are gcc's fault from what anyone
can see)."
Fix up trivial conflicts in arch/x86/platform/mrst/mrst.c due to the new
msic_thermal_platform_data() thermal function being added next to the
tc35876x_platform_data() i2c device function..
* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (326 commits)
drm/i915: use DDC_ADDR instead of hard-coding it
drm/radeon: use DDC_ADDR instead of hard-coding it
drm: remove unneeded redefinition of DDC_ADDR
drm/exynos: added virtual display driver.
drm: allow loading an EDID as firmware to override broken monitor
drm/exynos: enable hdmi audio feature
drm/exynos: add default pixel format for plane
drm/exynos: cleanup exynos_hdmi.h
drm/exynos: add is_local member in exynos_drm_subdrv struct
drm/exynos: add subdrv open/close functions
drm/exynos: remove module of exynos drm subdrv
drm/exynos: release pending pageflip events when closed
drm/exynos: added new funtion to get/put dma address.
drm/exynos: update gem and buffer framework.
drm/exynos: added mode_fixup feature and code clean.
drm/exynos: add HDMI version 1.4 support
drm/exynos: remove exynos_mixer.h
gma500: Fix mmap frambuffer
drm/radeon: Drop radeon_gem_object_(un)pin.
drm/radeon: Restrict offset for legacy display engine.
...
Use the more current logging style.
Add pr_fmt and remove the TTM_PFX uses.
Coalesce formats and align arguments.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Both changes in dc97b3409a cause serious
regressions in the nouveau driver.
move_notify() was originally able to presume that bo->mem is the old node,
and new_mem is the new node. The above commit moves the call to
move_notify() to after move() has been done, which means that now, sometimes,
new_mem isn't the new node at all, bo->mem is, and new_mem points at a
stale, possibly-just-been-killed-by-move node.
This is clearly not a good situation. This patch reverts this change, and
replaces it with a cleanup in the move() failure path instead.
The second issue is that the call to move_notify() from cleanup_memtype_use()
causes the TTM ghost objects to get passed into the driver. This is clearly
bad as the driver knows nothing about these "fake" TTM BOs, and ends up
accessing uninitialised memory.
I worked around this in nouveau's move_notify() hook by ensuring the BO
destructor was nouveau's. I don't particularly like this solution, and
would rather TTM never pass the driver these objects. However, I don't
clearly understand the reason why we're calling move_notify() here anyway
and am happy to work around the problem in nouveau instead of breaking the
behaviour expected by other drivers.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
. It was useful during development, but now on a production system
we can get this (if the user forgot to upload the firmware):
[drm] radeon: irq initialized.
[drm] GART: num cpu pages 131072, num gpu pages 131072
[drm] radeon: ib pool ready.
[drm] Loading SUMO Microcode
r600_cp: Failed to load firmware "radeon/SUMO_pfp.bin"
atl1c 0000:03:00.0: version 1.0.1.0-NAPI.213057] [drm:evergreen_startup] *ERROR* Failed to load firmware!
radeon 0000:00:01.0: disabling GPU acceleration
88] radeon 0000:00:01.0: ffff8801bb782400 unpin not necessary
------------[ cut here ]------------
WARNING: at /home/konrad/linux-linus/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c:956 ttm_dma_unpopulate+0x79/0x300 [ttm]()
Hardware name: System Product Name
Modules linked in: e1000e atl1c radeon(+) ahci libahci libata scsi_mod fbcon tileblit font ttm bitblit softcursor drm_kms_helper wmi xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd
Pid: 1600, comm: modprobe Not tainted 3.2.0-06100-ge343a89 #1
Call Trace:
[<ffffffff8108973a>] warn_slowpath_common+0x7a/0xb0
[<ffffffff81089785>] warn_slowpath_null+0x15/0x20
[<ffffffffa0060309>] ttm_dma_unpopulate+0x79/0x300 [ttm]
[<ffffffffa01341c0>] radeon_ttm_tt_unpopulate+0x120/0x130 [radeon]
[<ffffffffa0056e0c>] ttm_tt_destroy+0x2c/0x70 [ttm]
[<ffffffffa0057a4e>] ttm_bo_cleanup_memtype_use+0x3e/0x80 [ttm]
[<ffffffffa00595a1>] ttm_bo_release+0x251/0x280 [ttm]
[<ffffffffa0059610>] ttm_bo_unref+0x40/0x60 [ttm]
[<ffffffffa0134d02>] radeon_bo_unref+0x42/0x80 [radeon]
[<ffffffffa0186dfb>] radeon_sa_bo_manager_fini+0x6b/0x80 [radeon]
[<ffffffffa0146b8f>] radeon_ib_pool_fini+0x6f/0x90 [radeon]
[<ffffffffa014be49>] r100_ib_fini+0x19/0x20 [radeon]
[<ffffffffa017b47e>] evergreen_init+0x1ee/0x2d0 [radeon]
The big WARN() has nothing to do with the culprit - which is that
the firmware was not loaded. So let's remove the WARN() from the TTM DMA code.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The "if (!p && !p->dev)" condition isn't right because || was intended
instead of &&. But actually, "p" is the list cursor and so it's always
non-NULL and we can just remove that bit. We can remove another
similar check as well.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The ttm tt rework modified the way we allocate and populate the
ttm_tt structure; the AGP side was missing some bits to work properly.
Fix those, and fix radeon and nouveau AGP support.
Tested on radeon only so far.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The code to figure out how many pages to shrink the pool
ends up capping the 'count' at _manager->options.max_size - which is OK.
Except that the 'count' is also used when accounting for how many pages
are recycled - so we end up with invalid values. This fixes
it by using a different value for the amount of pages to shrink.
On top of that we would free the cached page pool - which is nonsense
as they are deleted from the pool - so there are no free pages in that
pool.
We also missed the opportunity to batch the amount of pages
to free (similar to how ttm_page_alloc.c does it). This reintroduces
the code that was lost during rebasing.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Merge in the upstream tree to bring in the mainline fixes.
Conflicts:
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
drivers/gpu/drm/nouveau/nouveau_sgdma.c
Previously we were calling back move_notify in the error path when the
bo is returned to its original position or when destroying the bo.
When destroying the bo, set the new mem placement to NULL when calling
back into the driver.
Update nouveau to deal with NULL placement properly.
v2: reserve the object before calling move_notify in the bo destroy path;
at that point ttm should be the only piece of code interacting
with the object so atomic_set is safe here.
v3: callback move_notify only once the bo is in its new position;
call move_notify when swapping out the buffer
v4:- don't call move_notify when swapping out a bo, assume the driver should
do what is appropriate in swap notify
- move move_notify call back to ttm_bo_cleanup_memtype_use for
destroy path
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Provide a helper function to compute the kernel memory size needed
for each buffer object. Move all the accounting inside ttm, simplifying
drivers and avoiding code duplication across them.
v2: fix accounting of the ghost object; one would have thought that I
would have run into the issue long ago, but it seems
ghost objects are rare when you have plenty of vram ;)
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Move the dma data to a superset ttm_dma_tt structure which inherits
from ttm_tt. This allows drivers that don't use dma functionality
to not waste memory for it.
V2 Rebase on top of no memory account changes (where/when is my
delorean when i need it ?)
V3 Make sure page list is initialized empty
V4 typo/syntax fixes
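Conceptually the split looks like this (both layouts are simplified and the
member names are assumptions, not copied from the patch):

  #include <linux/types.h>

  struct ttm_tt_sketch {
          struct page **pages;            /* what every driver needs */
          unsigned long num_pages;
  };

  struct ttm_dma_tt_sketch {
          struct ttm_tt_sketch ttm;       /* base kept first so containers cast */
          dma_addr_t *dma_address;        /* per-page bus addresses, DMA users only */
  };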
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
In TTM world the pages for the graphic drivers are kept in three different
pools: write combined, uncached, and cached (write-back). When the pages
are used by the graphic driver the graphic adapter via its built in MMU
(or AGP) programs these pages in. The programming requires the virtual address
(from the graphic adapter perspective) and the physical address (either System RAM
or the memory on the card) which is obtained using the pci_map_* calls (which does the
virtual to physical - or bus address translation). During the graphic application's
"life" those pages can be shuffled around, swapped out to disk, moved from the
VRAM to System RAM or vice-versa. This all works with the existing TTM pool code
- except when we want to use the software IOTLB (SWIOTLB) code to "map" the physical
addresses to the graphic adapter MMU. We end up programming the bounce buffer's
physical address instead of the TTM pool memory's and get a non-worky driver.
There are two solutions:
1) using the DMA API to allocate pages that are screened by the DMA API, or
2) using the pci_sync_* calls to copy the pages from the bounce-buffer and back.
This patch fixes the issue by allocating pages using the DMA API. The second
is a viable option - but it has performance drawbacks and potential correctness
issues - think of the write cache page being bounced (SWIOTLB->TTM), the
WC is set on the TTM page and the copy from SWIOTLB not making it to the TTM
page until the page has been recycled in the pool (and used by another application).
The bounce buffer does not get activated often - only in cases where we have
a 32-bit capable card and we want to use a page that is allocated above the
4GB limit. The bounce buffer offers the solution of copying the contents
of that 4GB page to a location below 4GB and then back when the operation has been
completed (or vice-versa). This is done by using the 'pci_sync_*' calls.
Note: If you look carefully enough in the existing TTM page pool code you will
notice the GFP_DMA32 flag is used - which should guarantee that the provided page
is under 4GB. It certainly is the case, except this gets ignored in two cases:
- If user specifies 'swiotlb=force' which bounces _every_ page.
- If the user is using a Xen PV Linux guest (which uses the SWIOTLB and the
underlying PFNs aren't necessarily under 4GB).
To avoid this extra copying, the other option is to allocate the pages
using the DMA API so that there is no need to map the page and perform the
expensive 'pci_sync_*' calls.
This DMA API capable TTM pool requires the 'struct device' to
properly call the DMA API. It also has to track the virtual and bus address of
the page being handed out in case it ends up being swapped out or de-allocated -
to make sure it is de-allocated using the proper 'struct device'.
Implementation wise the code keeps two lists: one that is attached to the
'struct device' (via the dev->dma_pools list) and a global one to be used when
the 'struct device' is unavailable (think shrinker code). The global list can
iterate over all of the 'struct device' and its associated dma_pool. The list
in dev->dma_pools can only iterate the device's dma_pool.
  [struct device]                                  [global device_pool list]
   |  dma_pools  |--+--> [struct dma_pool (WC)]        <---- { dev, dma_pool }
   |     ...     |  +--> [struct dma_pool (uncached)]  <---- { dev, dma_pool }
[Two pools associated with the device (WC and UC), and the parallel list
containing the 'struct dev' and 'struct dma_pool' entries]
The maximum amount of dma pools a device can have is six: write-combined,
uncached, and cached; then there are the DMA32 variants which are:
write-combined dma32, uncached dma32, and cached dma32.
Currently this code only gets activated when any variant of the SWIOTLB IOMMU
code is running (Intel without VT-d, AMD without GART, IBM Calgary and Xen PV
with PCI devices).
Tested-by: Michel Dänzer <michel@daenzer.net>
[v1: Using swiotlb_nr_tbl instead of swiotlb_enabled]
[v2: Major overhaul - added 'inuse_list' to separate used from inuse and reorder
the order of lists to get better performance.]
[v3: Added comments/and some logic based on review, Added Jerome tag]
[v4: rebase on top of ttm_tt & ttm_backend merge]
[v5: rebase on top of ttm memory accounting overhaul]
[v6: New rebase on top of more memory accounting changes]
[v7: well rebase on top of no memory accounting changes]
[v8: make sure pages list is initialized empty]
[v9: call ttm_mem_global_free_page in unpopulate for accurate accounting]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Move the page allocation and freeing to driver callbacks and
provide ttm code helper functions for those.
The most intrusive change is the fact that we now only fully
populate an object; this simplifies some of the code designed around
the page fault design.
V2 Rebase on top of memory accounting overhaul
V3 New rebase on top of more memory accouting changes
V4 Rebase on top of no memory account changes (where/when is my
delorean when i need it ?)
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
ttm_backend will only exist with a ttm_tt, and ttm_tt
will only be of interest when bound to a backend. Merge them
to avoid code and data duplication.
V2 Rebase on top of memory accounting overhaul
V3 Rebase on top of more memory accounting changes
V4 Rebase on top of no memory account changes (where/when is my
delorean when i need it ?)
V5 make sure ttm is unbound before destroying, change commit
message on suggestion from Tormod Volden
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Use the ttm_tt pages array for page allocations, and move the list
unwinding into the page allocation functions.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
On failure we need to make sure the page we free has the wb cache
attribute. Do this by calling the proper ttm page helper function.
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
The split between highmem and lowmem pages was rendered useless by the
pool code. Remove it. Note that further cleanup could change the
ttm page allocation helpers to actually take an array instead
of relying on a list; this could drastically reduce the number of
function calls in the common case of allocating a whole buffer.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
This was never used by any of the drivers; properly using userspace
pages for a bo would need more code (vma interaction mostly). Remove
this dead code in preparation for the ttm_tt & backend merge.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
An unlikely race could cause a bo to be returned reserved on an error path.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
* 'drm-core-next' of git://people.freedesktop.org/~airlied/linux: (290 commits)
Revert "drm/ttm: add a way to bo_wait for either the last read or last write"
Revert "drm/radeon/kms: add a new gem_wait ioctl with read/write flags"
vmwgfx: Don't pass unused arguments to do_dirty functions
vmwgfx: Emulate depth 32 framebuffers
drm/radeon: Lower the severity of the radeon lockup messages.
drm/i915/dp: Fix eDP on PCH DP on CPT/PPT
drm/i915/dp: Introduce is_cpu_edp()
drm/i915: use correct SPD type value
drm/i915: fix ILK+ infoframe support
drm/i915: add DP test request handling
drm/i915: read full receiver capability field during DP hot plug
drm/i915/dp: Remove eDP special cases from bandwidth checks
drm/i915/dp: Fix the math in intel_dp_link_required
drm/i915/panel: Always record the backlight level again (but cleverly)
i915: Move i915_read/write out of line
drm/i915: remove transcoder PLL mashing from mode_set per specs
drm/i915: if transcoder disable fails, say which
drm/i915: set watermarks for third pipe on IVB
drm/i915: export a CPT mode set verification function
drm/i915: fix transcoder PLL select masking
...
This reverts commit dfadbbdb57.
Further upstream discussion between Marek and Thomas decided this wasn't
fully baked and needed further work, so revert it before it hits mainline.
Signed-off-by: Dave Airlie <airlied@redhat.com>
There are a number of fixes in mainline required for code in -next, and
there were a few conflicts I'd rather resolve myself.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Conflicts:
drivers/gpu/drm/radeon/evergreen.c
drivers/gpu/drm/radeon/r600.c
drivers/gpu/drm/radeon/radeon_asic.h
Fixes an information leak to userspace, we were handing out un-zeroed pages
for any newly created TTM_PL_TT buffer.
Reported-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Tested-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
Sometimes we want to know whether a buffer is busy and wait for it (bo_wait).
However, sometimes it would be more useful to be able to query whether
a buffer is busy and being either read or written, and wait until it's stopped
being either read or written. The point of this is to be able to avoid
unnecessary waiting, e.g. if a GPU has written something to a buffer and is now
reading that buffer, and a CPU wants to map that buffer for read, it needs to
only wait for the last write. If there were no write, there wouldn't be any
waiting needed.
This, of course, requires user space drivers to send read/write flags
with each relocation (like we have read/write domains in radeon, so we can
actually use those for something useful now).
Now how this patch works:
The read/write flags should be passed to ttm_validate_buffer. TTM maintains
separate sync objects for the last read and the last write of each buffer, in
addition to the sync object of the last use of a buffer. ttm_bo_wait then
operates with one of the sync objects.
Signed-off-by: Marek Olšák <maraeo@gmail.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This was true for new TTM_PL_SYSTEM and new TTM_PL_TT cases, but wasn't
the case on TTM_PL_SYSTEM<->TTM_PL_TT moves, which causes trouble on some
paths as nouveau's move_notify() hook requires that the dma addresses be
valid at this point.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Nouveau makes the assumption that if a TTM is bound there will be a mm_node
around for it and the backwards ordering here resulted in a use-after-free
on some eviction paths.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
ttm_tt_destroy kfrees the passed object, so we need to nullify
the reference to it.
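A sketch of the kind of fix (the exact reference being nullified is not
spelled out here, so the field is illustrative):

  ttm_tt_destroy(bo->ttm);        /* kfrees the ttm_tt */
  bo->ttm = NULL;                 /* don't leave a dangling pointer behind */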
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: stable@kernel.org
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>
Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Soon tmpfs will stop supporting ->readpage and read_mapping_page(): once
"tmpfs: add shmem_read_mapping_page_gfp" has been applied, this patch can
be applied to ease the transition.
ttm_tt_swapin() and ttm_tt_swapout() use shmem_read_mapping_page() in
place of read_mapping_page(), since their swap_space has been created with
shmem_file_setup().
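The conversion is essentially a one-line change of the lookup call, roughly
(variable names assumed):

  /* before: page = read_mapping_page(swap_space, i, NULL); */
  page = shmem_read_mapping_page(swap_space, i);
  if (IS_ERR(page))
          return PTR_ERR(page);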
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
. and some comments to make it easier to understand.
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
[v2: Added some more updates from Randy Dunlap]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Change each shrinker's API by consolidating the existing parameters into
a shrink_control struct. This will simplify any further features added
without touching every shrinker.
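For a shrinker such as TTM's page pool the callback shape changes roughly
as follows (the helper it calls is hypothetical):

  /* before: parameters passed individually
   * static int pool_shrink(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask);
   */

  /* after: everything consolidated into struct shrink_control */
  static int pool_shrink(struct shrinker *shrink, struct shrink_control *sc)
  {
          unsigned long nr_to_scan = sc->nr_to_scan;
          gfp_t gfp_mask = sc->gfp_mask;

          return do_pool_shrink(nr_to_scan, gfp_mask);    /* hypothetical helper */
  }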
[akpm@linux-foundation.org: fix build]
[akpm@linux-foundation.org: fix warning]
[kosaki.motohiro@jp.fujitsu.com: fix up new shrinker API]
[akpm@linux-foundation.org: fix xfs warning]
[akpm@linux-foundation.org: update gfs2]
Signed-off-by: Ying Han <yinghan@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit 69a07f0b11.
We've tracked a number of problems back to this, and Thomas
thinks we should redesign this for .40/41 anyways so I'm
happy to revert it.
Signed-off-by: Dave Airlie <airlied@redhat.com>
There's no need to pass kref_put() the address of a function (just the
function will do just fine) nor to cast its unused return to void.
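That is, calls of the form below (names illustrative):

  /* before */
  (void) kref_put(&tt->kref, &release_fn);

  /* after */
  kref_put(&tt->kref, release_fn);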
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
This reverts commit 5a893fc28f.
This causes a use after free in the ttm free alloc pages path,
when it tries to get the be after the be has been destroyed.
Signed-off-by: Dave Airlie <airlied@redhat.com>
* 'stable/ttm.pci-api.v5' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen:
ttm: Include the 'struct dev' when using the DMA API.
nouveau/ttm/PCIe: Use dma_addr if TTM has set it.
radeon/ttm/PCIe: Use dma_addr if TTM has set it.
ttm: Expand (*populate) to support an array of DMA addresses.
ttm: Utilize the DMA API for pages that have TTM_PAGE_FLAG_DMA32 set.
ttm: Introduce a placeholder for DMA (bus) addresses.
Nouveau doesn't have enough information at ttm_backend_func.bind() time
to implement things like tiled GART, or to keep a buffer at a constant
address in the GPU virtual address space no matter where in physical
memory it's placed.
To resolve this, nouveau will handle binding of all buffers to the GPU
itself from the move_notify() hook. This commit ensures it's called
for all buffer moves.
Talked to Dave about the impact on radeon, which uses move_notify; it
doesn't look like anything should break there.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Thomas Hellstrom <thomas@shipmail.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This makes the accounting when using 'debug_dma_dump_mappings()'
and CONFIG_DMA_API_DEBUG=y be assigned to the correct device
instead of 'fallback'.
No functional change - just cosmetic.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
We pass in the array of ttm pages to be populated in the GART/MM
of the card (or AGP). The patch titled "ttm: Utilize the DMA API for
pages that have TTM_PAGE_FLAG_DMA32 set." uses the DMA API to give
those pages proper DMA addresses (in the situation where
page_to_phys or virt_to_phys do not give us the DMA (bus) address).
Since we are using the DMA API on those pages, we should pass in the
DMA address to this function so it can save it in its proper fields
(later patches use it).
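Roughly, the backend populate hook grows an extra array argument (signature
sketched from the description above, not copied verbatim):

  /* before */
  int (*populate)(struct ttm_backend *backend,
                  unsigned long num_pages, struct page **pages,
                  struct page *dummy_read_page);

  /* after: the DMA (bus) address of each page is handed in as well */
  int (*populate)(struct ttm_backend *backend,
                  unsigned long num_pages, struct page **pages,
                  struct page *dummy_read_page,
                  dma_addr_t *dma_addrs);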
[v2: Added reviewed-by tag]
Reviewed-by: Thomas Hellstrom <thellstrom@shipmail.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
For pages that have the TTM_PAGE_FLAG_DMA32 flag set we
use the DMA API. We save the bus address in our array which we
use to program the GART (see "radeon/ttm/PCIe: Use dma_addr if TTM
has set it." and "nouveau/ttm/PCIe: Use dma_addr if TTM has set it.").
The reason behind using the DMA API is that under Xen we would
end up programming the GART with the bounce buffer (SWIOTLB)
DMA address instead of the physical DMA address of the TTM page.
The reason being that alloc_page with GFP_DMA32 does not allocate
pages under the 4GB mark when running under the Xen hypervisor.
Under baremetal this means we do the DMA API call earlier instead
of when we program the GART.
For details please refer to:
https://lkml.org/lkml/2011/1/7/251
[v2: Fixed indentation, revised desc, added Reviewed-by]
Reviewed-by: Thomas Hellstrom <thomas@shipmail.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
This is right now limited to only non-pool constructs.
[v2: Fixed indentation issues, add review-by tag]
Reviewed-by: Thomas Hellstrom <thomas@shipmail.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
* 'drm-core-next' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (390 commits)
drm/radeon/kms: disable underscan by default
drm/radeon/kms: only enable hdmi features if the monitor supports audio
drm: Restore the old_fb upon modeset failure
drm/nouveau: fix hwmon device binding
radeon: consolidate asic-specific function decls for pre-r600
vga_switcheroo: comparing too few characters in strncmp()
drm/radeon/kms: add NI pci ids
drm/radeon/kms: don't enable pcie gen2 on NI yet
drm/radeon/kms: add radeon_asic struct for NI asics
drm/radeon/kms/ni: load default sclk/mclk/vddc at pm init
drm/radeon/kms: add ucode loader for NI
drm/radeon/kms: add support for DCE5 display LUTs
drm/radeon/kms: add ni_reg.h
drm/radeon/kms: add bo blit support for NI
drm/radeon/kms: always use writeback/events for fences on NI
drm/radeon/kms: adjust default clock/vddc tracking for pm on DCE5
drm/radeon/kms: add backend map workaround for barts
drm/radeon/kms: fill gpu init for NI asics
drm/radeon/kms: add disabled vbios accessor for NI asics
drm/radeon/kms: handle NI thermal controller
...
Make ttm_bo::ttm_bo_device_release call cancel_delayed_work_sync()
instead of calling cancel_delayed_work() followed by
flush_scheduled_work().
This is to prepare for the deprecation and removal of
flush_scheduled_work().
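The release-path change, sketched (assuming the delayed work lives in the
bo device as bdev->wq):

  /* before: cancel, then flush the whole global workqueue */
  cancel_delayed_work(&bdev->wq);
  flush_scheduled_work();

  /* after: wait for just this work item to finish */
  cancel_delayed_work_sync(&bdev->wq);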
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc:: Thomas Hellstrom <thellstrom@vmware.com>
Cc:: Dave Airlie <airlied@redhat.com>
Drivers using their own implementation of io_mem_reserve/io_mem_free are
likely to store the tracking information for the map in mem.mm_node, so
it can't be freed while still mapped.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This patch attempts to fix up shortcomings with the current calling
sequences.
1) There's a fastpath where no locking occurs and only io_mem_reserve is
called to obtain needed info for mapping. The fastpath is set per
memory type manager.
2) If the fastpath is disabled, io_mem_reserve and io_mem_free will be exactly
balanced and not called recursively for the same struct ttm_mem_reg.
3) Optionally the driver can choose to enable a per memory type manager LRU
eviction mechanism that, when io_mem_reserve returns -EAGAIN will attempt
to kill user-space mappings of memory in that manager to free up needed
resources
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Rather than having the driver supply the validation sequence, leave that
responsibility to TTM. This saves some confusion and a function argument.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Drastically reduce the number of spin lock / unlock operations by performing
unreserving and fencing under global locks.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The bo lock was used only to protect the bo sync object members, and since it
is a per-bo lock, fencing a buffer list will see a lot of locks and unlocks.
Replace it with a per-device lock that protects the sync object members on
*all* bos. Reading and setting these members will always be very quick, so
the risk of heavy lock contention is microscopic. Note that waiting for
sync objects will always take place outside of this lock.
The bo device fence lock will eventually be replaced with a seqlock /
rcu mechanism so we can determine that a bo is idle under a
rcu / read seqlock.
However this change will allow us to batch fencing and unreserving of
buffers with a minimal amount of locking.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Add an aid for the driver to detect deadlocks on multi-bo reservations
Update documentation.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Avoid the ttm_bo_unreserve() spinlocks by calling
ttm_eu_backoff_reservation_locked under the lru spinlock.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Makes it possible to reserve a list of buffer objects with a single
spin lock / unlock if there is no contention.
Should improve cpu usage on SMP kernels.
v2: Initialize private list members on reserve and don't call
ttm_bo_list_ref_sub() with zero put_count.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
A process suspended waiting for a higher sequence, or no sequence, to
unreserve a bo may be beaten to the reservation by a process with a lower
sequence. In that case the first process should give up trying to reserve and
return -EAGAIN. In order for that to happen, we must wake waiting processes
when we change sequence, so that they have a chance to detect the new
sequence.
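A sketch of the wakeup (field names assumed): after publishing the new
sequence, wake all waiters so each can re-check whether it should back off:

  bo->val_seq = sequence;
  bo->seq_valid = true;
  wake_up_all(&bo->event_queue);  /* let sleepers re-check and return -EAGAIN */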
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Call destroy() on _all_ ttm_bo_init() failures, and make sure that
behavior is documented in the function description.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This breaks vmwgfx non-root EGL clients and is a remnant from the
TTM user-space interface. This test should be done in the driver.
Replace the remaining placement test with a BUG_ON, since triggering
it is a driver bug.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The sync object may disappear as soon as we release the bo::lock, so
take a reference on it while we use it.
One option would be to call sync_object_flush() before releasing the bo::lock,
but that would put an atomic requirement on that function.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The driver (for example vmwgfx) may want to silently deal with the
error itself.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Since we're doing this outside of a spinlock that would provide the
necessary barriers, add an explicit barrier.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Replace with BUG_ON(). These error messages remained from the time
when TTM was initialized from user-space. Nowadays hitting one of those
is really a kernel bug.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Searching for a free block in the range manager may in some situations be a
lengthy operation, and we want to avoid holding the global lru lock
during that time. Instead use a per-manager spinlock.
This leaves the global lru lock for quick lru list and swap list manipulation
only, including list manipulation associated with reserving buffer objects.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Remove an obsolete comment about mm nodes.
Document the new bo range manager interface.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
* 'drm-core-next' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (476 commits)
vmwgfx: Implement a proper GMR eviction mechanism
drm/radeon/kms: fix r6xx/7xx 1D tiling CS checker v2
drm/radeon/kms: properly compute group_size on 6xx/7xx
drm/radeon/kms: fix 2D tile height alignment in the r600 CS checker
drm/radeon/kms/evergreen: set the clear state to the blit state
drm/radeon/kms: don't poll dac load detect.
gpu: Add Intel GMA500(Poulsbo) Stub Driver
drm/radeon/kms: MC vram map needs to be >= pci aperture size
drm/radeon/kms: implement display watermark support for evergreen
drm/radeon/kms/evergreen: add some additional safe regs v2
drm/radeon/r600: fix tiling issues in CS checker.
drm/i915: Move gpu_write_list to per-ring
drm/i915: Invalidate the to-ring, flush the old-ring when updating domains
drm/i915/ringbuffer: Write the value passed in to the tail register
agp/intel: Restore valid PTE bit for Sandybridge after bdd3072
drm/i915: Fix flushing regression from 9af90d19f
drm/i915/sdvo: Remove unused encoding member
i915: enable AVI infoframe for intel_hdmi.c [v4]
drm/i915: Fix current fb blocking for page flip
drm/i915: IS_IRONLAKE is synonymous with gen == 5
...
Fix up conflicts in
- drivers/gpu/drm/i915/{i915_gem.c, i915/intel_overlay.c}: due to the
new simplified stack-based kmap_atomic() interface
- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c: added .llseek entry due to BKL
removal cleanups.
Keep the current interface but ignore the KM_type and use a stack-based
approach.
The advantage is that we get rid of crappy code like:
#define __KM_PTE \
(in_nmi() ? KM_NMI_PTE : \
in_irq() ? KM_IRQ_PTE : \
KM_PTE0)
and in general can stop worrying about what context we're in and what kmap
slots might be appropriate for that.
The downside is that FRV kmap_atomic() gets more expensive.
For now we use a CPP trick suggested by Andrew:
#define kmap_atomic(page, args...) __kmap_atomic(page)
to avoid having to touch all kmap_atomic() users in a single patch.
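A small sketch of what a caller looks like with the stack-based
interface; the helper below is illustrative only, not part of the patch:
#include <linux/highmem.h>
#include <linux/string.h>

static void sketch_copy_highpage(struct page *dst, struct page *src)
{
        void *d = kmap_atomic(dst);     /* no KM_type argument needed */
        void *s = kmap_atomic(src);

        memcpy(d, s, PAGE_SIZE);

        kunmap_atomic(s);               /* pop in reverse (LIFO) order */
        kunmap_atomic(d);
}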
[ not compiled on:
- mn10300: the arch doesn't actually build with highmem to begin with ]
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit replaces the ttm_bo_cleanup_ref function with two new functions.
One for the case where the bo is not yet on the delayed destroy list, and
one for the case where the bo was on the delayed destroy list, at least at
the time of call. This makes it possible to optimize the two cases somewhat.
It also enables the possibility to directly destroy buffers on the
delayed delete list when they are about to be evicted or swapped out.
Previously they were only evicted / swapped, and destruction was left to
the delayed buffer destruction thread.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
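A hedged sketch of the two paths; "idle" stands in for the fence/sync
check and the sketch_* names are assumptions, not the new functions:
#include <linux/list.h>
#include <linux/types.h>

struct sketch_bo {
        struct list_head ddestroy;
        void (*destroy)(struct sketch_bo *bo);
};

/* Path 1: the bo is not yet on the delayed-destroy list. */
static void sketch_cleanup_or_queue(struct sketch_bo *bo,
                                    struct list_head *ddestroy_list,
                                    bool idle)
{
        if (idle) {
                bo->destroy(bo);                        /* destroy now */
                return;
        }
        list_add_tail(&bo->ddestroy, ddestroy_list);    /* defer */
}

/* Path 2: the bo is already on the list (delayed worker, eviction,
 * or swapout can call this to destroy it directly). */
static void sketch_cleanup_queued(struct sketch_bo *bo, bool idle)
{
        if (!idle)
                return;                 /* try again later */
        list_del_init(&bo->ddestroy);
        bo->destroy(bo);
}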
Function ttm_bo_wait_unreserved can be slightly simplified.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We need the unlocked variant for the new codepath introduced to fix the
race condition in master recently.
Signed-off-by: Dave Airlie <airlied@redhat.com>
[airlied - add fix for vmwgfx build]
* 'nouveau/for-airlied' of ../drm-nouveau-next: (93 commits)
drm/ttm: restructure to allow driver to plug in alternate memory manager
drm/ttm: introduce utility function to free an allocated memory node
drm/nouveau: fix thinkos in mem timing table recordlen check
drm/nouveau: parse voltage from perf 0x40 entries
drm/nouveau: don't use the default pll limits in table v2.1 on nv50+ cards
drm/nv50: Fix large 3D performance regression caused by the interchannel sync patches.
drm/nouveau: Synchronize buffer object moves in hardware.
drm/nouveau: Use semaphores to handle inter-channel sync in hardware.
drm/nouveau: Provide a means to have arbitrary work run on fence completion.
drm/nouveau: Minor refactoring/cleanup of the fence code.
drm/nouveau: Add a module option to force card POST.
drm/nv50: prevent (IB_PUT == IB_GET) from occurring unless idle
drm/nv0x-nv4x: Leave the 0x40 bit untouched when changing CRE_LCD.
drm/nv30-nv40: Fix postdivider mask when writing engine/memory PLLs.
drm/nouveau: Fix perf table parsing on BMP v5.25.
drm/nouveau: fix required mode bandwidth calculation for DP
drm/nouveau: fix typo in c2aa91afea5f7e7ae4530fabd37414a79c03328c
drm/nva3: split pm backend out from nv50
drm/nouveau: run perflvl and M table scripts on mem clock change
drm/nouveau: pass perflvl struct to clock_pre()
...
This fixes a race pointed out by Dave Airlie where we don't take a buffer
object about to be destroyed off the LRU lists properly. It also fixes a rare
case where a buffer object could be destroyed in the middle of an
accelerated eviction.
The patch also adds a utility function that can be used to prematurely
release GPU memory space usage of an object waiting to be destroyed,
for example during eviction or swapout.
The above mentioned commit didn't queue the buffer on the delayed destroy
list under some rare circumstances. It also didn't completely honor the
remove_all parameter.
Fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=615505
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=591061
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Nouveau will need this on GeForce 8 and up to account for the GPU
reordering physical VRAM for some memory types.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellström <thellstrom@vmware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Existing core code/drivers call drm_mm_put_block on ttm_mem_reg.mm_node
directly. Future patches will modify TTM behaviour in such a way that
ttm_mem_reg.mm_node doesn't necessarily belong to drm_mm.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellström <thellstrom@vmware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
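A minimal sketch of such a utility; the structures, the put_node hook
and the function names are assumptions, not the actual TTM code:
#include <linux/stddef.h>

struct sketch_mem_reg {
        void *mm_node;                  /* today: a drm_mm node */
};

struct sketch_mem_manager {
        void (*put_node)(struct sketch_mem_manager *man, void *node);
};

static void sketch_bo_mem_put(struct sketch_mem_manager *man,
                              struct sketch_mem_reg *mem)
{
        if (mem->mm_node) {
                /* One helper owns freeing, so callers never touch
                 * drm_mm directly and mm_node can later become
                 * something other than a drm_mm node. */
                man->put_node(man, mem->mm_node);
                mem->mm_node = NULL;
        }
}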
Correct allocation flags type and function prototype for ANSI C compliance.
[airlied: whitespace fixed]
Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It makes sense for a BO to move after a process has requested
exclusive RW access on it (e.g. because the BO used to be located in
unmappable VRAM and we intercepted the CPU access from the fault
handler).
If we let the ghost object inherit cpu_writers from the original
object, ttm_bo_release_list() will raise a kernel BUG when the ghost
object is destroyed. This can be reproduced with the nouveau driver on
nv5x.
Reported-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Tested-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
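A hedged sketch of the fix's shape; sketch_bo and its fields are
assumptions standing in for the real TTM structures:
#include <linux/bug.h>

struct sketch_bo {
        int cpu_writers;        /* outstanding exclusive CPU RW holds */
};

static void sketch_transfer_to_ghost(struct sketch_bo *ghost,
                                     const struct sketch_bo *orig)
{
        *ghost = *orig;         /* carry bookkeeping for the old placement */
        ghost->cpu_writers = 0; /* but never inherit CPU writers */
}

static void sketch_release(struct sketch_bo *bo)
{
        /* Destroying an object that still has CPU writers is a bug,
         * which is what the ghost object used to trip over. */
        BUG_ON(bo->cpu_writers != 0);
}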
I wrote this for the prime sharing work, but I also noticed other external
non-upstream drivers from a large company carrying a similar patch, so I
may as well ship it in master.
Signed-off-by: Dave Airlie <airlied@redhat.com>
* 'drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6:
drm/radeon/kms: add quirk to make HP DV5000 laptop resume
drm/radeon/kms: fix RADEON_INFO_CRTC_FROM_ID info ioctl
Fix ttm_page_alloc.c build breakage
drm/radeon/kms: fix legacy LVDS dpms sequence
drm/radeon/kms: drop taking lock around crtc lookup.
Commit 1e8655f873 ("drm/ttm: Fix build on architectures without AGP")
looks at TTM_HAS_AGP before it has been set in ttm_bo_driver.h.
Move the conditional inclusion of <asm/agp.h> *after* we have included
ttm_bo_driver.h.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Add the shrinkers missed in the first conversion of the API in
commit 7f8275d0d6 ("mm: add context argument to
shrinker callback").
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Make inclusion of <asm/agp.h> conditional on TTM_HAS_AGP. The use
of the functions declared in it is already conditional.
Reported-by: Geert Stappers <stappers@stappers.nl>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Tested-by: Geert Stappers <stappers@stappers.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
* drm-intel-lru:
drm: implement helper functions for scanning lru list
drm_mm: extract check_free_mm_node
drm: sane naming for drm_mm.c
drm: kill dead code in drm_mm.c
drm: kill drm_mm_node->private
drm: use list_for_each_entry in drm_mm.c
Only ever assigned, never used.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[glisse: I will re-add if needed for range-restricted allocations]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Repeated ttm_page_alloc_init/fini fails noisily because the pool
manager kobj isn't zeroed out between uses (we could do just that but
statically allocated kobjects are generally considered a bad thing).
Move it to kzalloc'ed memory.
Note that this patch drops the refcounting behavior of the pool
allocator init/fini functions: it would have led to a race condition
in its current form, and anyway it was never exploited.
This fixes a regression with reloading kms modules at runtime, present
since the page allocator was introduced.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
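A hedged sketch of the idea; the structure, ktype and function names
are assumptions, not the actual pool allocator code:
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/slab.h>

struct sketch_pool_manager {
        struct kobject kobj;
        /* ... pool state ... */
};

static struct sketch_pool_manager *pool_mgr;

static void sketch_pool_release(struct kobject *kobj)
{
        /* The kobject release frees the containing struct. */
        kfree(container_of(kobj, struct sketch_pool_manager, kobj));
}

static struct kobj_type sketch_ktype = {
        .release = sketch_pool_release,
};

static int sketch_pool_init(struct kobject *parent)
{
        int ret;

        /* kzalloc'ed each time, so repeated init/fini always starts
         * from a zeroed kobject. */
        pool_mgr = kzalloc(sizeof(*pool_mgr), GFP_KERNEL);
        if (!pool_mgr)
                return -ENOMEM;

        ret = kobject_init_and_add(&pool_mgr->kobj, &sketch_ktype,
                                   parent, "pool");
        if (ret)
                kobject_put(&pool_mgr->kobj);   /* release() frees it */
        return ret;
}

static void sketch_pool_fini(void)
{
        kobject_put(&pool_mgr->kobj);
        pool_mgr = NULL;
}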
Non-pooled page allocations should have GFP_USER set so the allocation
can wait and reclaim pages from other processes (i.e. non-atomic).
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
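A one-line sketch of the point; the helper name is an assumption:
#include <linux/gfp.h>

static struct page *sketch_alloc_one_page(void)
{
        /* GFP_USER may sleep and trigger reclaim, unlike an atomic
         * mask, so a one-off allocation can still succeed under
         * memory pressure. */
        return alloc_page(GFP_USER);
}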
* 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (41 commits)
drm/radeon/kms: make sure display hw is disabled when suspending
drm/vmwgfx: Allow userspace to change default layout. Bump minor.
drm/vmwgfx: Fix framebuffer modesetting
drm/vmwgfx: Fix vga save / restore with display topology.
vgaarb: use MIT license
vgaarb: convert pr_devel() to pr_debug()
drm: fix typos in Linux DRM Developer's Guide
drm/radeon/kms/pm: voltage fixes
drm/radeon/kms/pm: radeon_set_power_state fixes
drm/radeon/kms/pm: patch default power state with default clocks/voltages on r6xx+
drm/radeon/kms/pm: enable SetVoltage on r7xx/evergreen
drm/radeon/kms/pm: add support for SetVoltage cmd table (V2)
drm/radeon/kms/evergreen: add initial CS parser
drm/kms: disable/enable poll around switcheroo on/off
drm/nouveau: fixup confusion over which handle the DSM is hanging off.
drm/nouveau: attempt to get bios from ACPI v3
drm/nv50: cast IGP memory location to u64 before shifting
drm/ttm: Fix ttm_page_alloc.c
drm/ttm: Fix cached TTM page allocation.
drm/vmwgfx: Remove some leftover debug messages.
...
Fix a number of typos, misspellings and checkpatch.pl warnings.
Replace "[ttm] " with TTM_PFX
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This patch fixes a regression introduced with the pool page allocator
in the event that there are no highmem pages (for example x86_64),
in which case cached page allocation would fail.
Tested with the vmwgfx driver on a 64-bit vm.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
* 'drm-for-2.6.35' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (207 commits)
drm/radeon/kms/pm/r600: select the mid clock mode for single head low profile
drm/radeon: fix power supply kconfig interaction.
drm/radeon/kms: record object that have been list reserved
drm/radeon: AGP memory is only I/O if the aperture can be mapped by the CPU.
drm/radeon/kms: don't default display priority to high on rs4xx
drm/edid: fix typo in 1600x1200@75 mode
drm/nouveau: fix i2c-related init table handlers
drm/nouveau: support init table i2c device identifier 0x81
drm/nouveau: ensure we've parsed i2c table entry for INIT_*I2C* handlers
drm/nouveau: display error message for any failed init table opcode
drm/nouveau: fix init table handlers to return proper error codes
drm/nv50: support fractional feedback divider on newer chips
drm/nv50: fix monitor detection on certain chipsets
drm/nv50: store full dcb i2c entry from vbios
drm/nv50: fix suspend/resume with DP outputs
drm/nv50: output calculated crtc pll when debugging on
drm/nouveau: dump pll limits entries when debugging is on
drm/nouveau: bios parser fixes for eDP boards
drm/nouveau: fix a nouveau_bo dereference after it's been destroyed
drm/nv40: remove some completed ctxprog TODOs
...
We want to be able to prevent the delayed workqueue from changing state
while we're reclocking, so add an API to block and unblock it.
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's unused and buggy in its current form, since it can place a bo
in the reserved state without removing it from lru lists.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
If the memory is not iomem, we should not try to
ioremap it. Should fix:
https://bugs.freedesktop.org/show_bug.cgi?id=27822
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Tested-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
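A hedged sketch of the decision; the structure and field names are
assumptions, not the actual TTM bus placement code:
#include <linux/highmem.h>
#include <linux/io.h>
#include <linux/types.h>

struct sketch_bus_placement {
        bool is_iomem;                  /* true only for real I/O memory */
        resource_size_t base;
        unsigned long offset;
        unsigned long size;
        struct page *page;              /* backing page when !is_iomem */
};

static void *sketch_map_for_copy(struct sketch_bus_placement *bus)
{
        if (bus->is_iomem)
                return (void __force *)ioremap(bus->base + bus->offset,
                                               bus->size);
        /* Plain system RAM: do not ioremap, just map the page. */
        return kmap(bus->page);
}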
* drm-ttm-unmappable:
drm/radeon/kms: enable use of unmappable VRAM V2
drm/ttm: remove io_ field from TTM V6
drm/vmwgfx: add support for new TTM fault callback V5
drm/nouveau/kms: add support for new TTM fault callback V5
drm/radeon/kms: add support for new fault callback V7
drm/ttm: ttm_fault callback to allow driver to handle bo placement V6
drm/ttm: split no_wait argument in 2 GPU or reserve wait
Conflicts:
drivers/gpu/drm/nouveau/nouveau_bo.c
All TTM drivers have been converted to the new io_mem_reserve/free
interface, which allows the driver to choose and return the proper io
base and offset to core TTM for ioremapping if necessary. This
patch removes what is now dead code.
V2 adapt to match the changes in the first patch of the patchset
V3 update after io_mem_reserve/io_mem_free callback balancing
V4 adjust to minor cleanup
V5 remove the needs ioremap flag
V6 keep the ioremapping facility in TTM
[airlied- squashed driver removals in here also]
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
On fault the driver is given the opportunity to perform any operation
it sees fit in order to place the buffer into a CPU-visible area of
memory. This patch doesn't break TTM users; nouveau, vmwgfx and radeon
should keep working properly. A future patch will take advantage of this
infrastructure and remove the old path from TTM once drivers are
converted.
V2 return VM_FAULT_NOPAGE if the callback returns -EBUSY or -ERESTARTSYS
V3 balance io_mem_reserve and io_mem_free calls; fault_reserve_notify
is responsible for performing any tasks necessary for the mapping to succeed
V4 minor cleanup, atomic_t -> bool as the member is protected from
concurrent access by the reserve mechanism
V5 the callback is now responsible for iomapping the bo and providing
a virtual address; this simplifies TTM and will allow getting rid of
TTM_MEMTYPE_FLAG_NEEDS_IOREMAP
V6 use the bus addr data to decide whether an ioremap is needed; we
don't necessarily need to ioremap in the callback, but still allow the
driver to use a static mapping
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
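A hedged sketch of the fault-path decision only; fault_reserve_notify
here is an assumed callback standing in for the driver hook, and the
surrounding function is illustrative:
#include <linux/errno.h>
#include <linux/mm.h>

static unsigned int sketch_fault(int (*fault_reserve_notify)(void *bo),
                                 void *bo)
{
        int ret = fault_reserve_notify(bo);     /* driver may move the bo */

        if (ret == -EBUSY || ret == -ERESTARTSYS)
                return VM_FAULT_NOPAGE;         /* retry the fault later */
        if (ret)
                return VM_FAULT_SIGBUS;

        /* ... map the now CPU-visible placement and insert the PFN ... */
        return VM_FAULT_NOPAGE;
}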
* drm-ttm-pool:
drm/ttm: using kmalloc/kfree requires including slab.h
drm/ttm: include linux/seq_file.h for seq_printf
drm/ttm: Add sysfs interface to control pool allocator.
drm/ttm: Use set_pages_array_wc instead of set_memory_wc.
arch/x86: Add array variants for setting memory to wc caching.
drm/nouveau: Add ttm page pool debugfs file.
drm/radeon/kms: Add ttm page pool debugfs file.
drm/ttm: Add debugfs output entry to pool allocator.
drm/ttm: add pool wc/uc page allocator V3
There are cases where we want to be able to wait only for the
GPU while not waiting for other buffers to be unreserved. This
patch splits the no_wait argument all the way down the whole
ttm path so that the upper levels can decide what to wait on or
not.
[airlied: squashed these 4 for bisectability reasons.]
drm/radeon/kms: update to TTM split no_wait argument
drm/nouveau: update to TTM split no_wait argument
drm/vmwgfx: update to TTM split no_wait argument
[vmwgfx patch: Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>]
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
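A hedged sketch of what the split flags look like to a caller; the
sketch_* helpers are stand-ins for the real reserve and fence-wait
paths, not the actual functions:
#include <linux/errno.h>
#include <linux/types.h>

struct sketch_bo { int dummy; };

static int sketch_reserve(struct sketch_bo *bo, bool no_wait_reserve)
{
        return 0;       /* would return -EBUSY if reserved and no_wait */
}

static int sketch_wait_gpu(struct sketch_bo *bo, bool no_wait_gpu)
{
        return 0;       /* would return -EBUSY if busy and no_wait */
}

static void sketch_unreserve(struct sketch_bo *bo) { }

static int sketch_bo_evict(struct sketch_bo *bo,
                           bool no_wait_reserve, bool no_wait_gpu)
{
        int ret = sketch_reserve(bo, no_wait_reserve);

        if (ret)
                return ret;
        /* A caller can now wait for the GPU without also being forced
         * to wait for other buffers to be unreserved, or vice versa. */
        ret = sketch_wait_gpu(bo, no_wait_gpu);
        if (!ret) {
                /* ... perform the eviction / move ... */
        }
        sketch_unreserve(bo);
        return ret;
}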
Fixes
drivers/gpu/drm/ttm/ttm_page_alloc.c: In function 'ttm_page_alloc_debugfs':
drivers/gpu/drm/ttm/ttm_page_alloc.c:829: error: implicit declaration of
function 'seq_printf'
Signed-off-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The sysfs interface allows the user to configure the pool allocator
functionality and change limits on the size of the pool.
Signed-off-by: Pauli Nieminen <suokkos@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
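A hedged sketch of one such tunable exposed through sysfs; the
attribute name and the limit variable are assumptions, not the actual
pool allocator knobs:
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>

static unsigned long sketch_pool_max_pages = 4096;

static ssize_t max_pages_show(struct kobject *kobj,
                              struct kobj_attribute *attr, char *buf)
{
        return sprintf(buf, "%lu\n", sketch_pool_max_pages);
}

static ssize_t max_pages_store(struct kobject *kobj,
                               struct kobj_attribute *attr,
                               const char *buf, size_t count)
{
        unsigned long val;

        if (kstrtoul(buf, 10, &val))
                return -EINVAL;
        sketch_pool_max_pages = val;
        return count;
}

/* Still needs sysfs_create_file() against the pool's kobject to show up. */
static struct kobj_attribute sketch_max_pages_attr =
        __ATTR(max_pages, 0644, max_pages_show, max_pages_store);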