Merge branch 'akpm' (patches from Andrew)

Merge third patch-bomb from Andrew Morton:
 "I'm pretty much done for -rc1 now:

   - the rest of MM, basically

   - lib/ updates

   - checkpatch, epoll, hfs, fatfs, ptrace, coredump, exit

   - cpu_mask simplifications

   - kexec, rapidio, MAINTAINERS etc, etc.

   - more dma-mapping cleanups/simplifications from hch"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (109 commits)
  MAINTAINERS: add/fix git URLs for various subsystems
  mm: memcontrol: add "sock" to cgroup2 memory.stat
  mm: memcontrol: basic memory statistics in cgroup2 memory controller
  mm: memcontrol: do not uncharge old page in page cache replacement
  Documentation: cgroup: add memory.swap.{current,max} description
  mm: free swap cache aggressively if memcg swap is full
  mm: vmscan: do not scan anon pages if memcg swap limit is hit
  swap.h: move memcg related stuff to the end of the file
  mm: memcontrol: replace mem_cgroup_lruvec_online with mem_cgroup_online
  mm: vmscan: pass memcg to get_scan_count()
  mm: memcontrol: charge swap to cgroup2
  mm: memcontrol: clean up alloc, online, offline, free functions
  mm: memcontrol: flatten struct cg_proto
  mm: memcontrol: rein in the CONFIG space madness
  net: drop tcp_memcontrol.c
  mm: memcontrol: introduce CONFIG_MEMCG_LEGACY_KMEM
  mm: memcontrol: allow to disable kmem accounting for cgroup2
  mm: memcontrol: account "kmem" consumers in cgroup2 memory controller
  mm: memcontrol: move kmem accounting code to CONFIG_MEMCG
  mm: memcontrol: separate kmem code from legacy tcp accounting code
  ...
Linus Torvalds 2016-01-21 12:32:08 -08:00
commit eae21770b4
203 changed files with 3665 additions and 4014 deletions

CREDITS

@ -1856,6 +1856,16 @@ S: Korte Heul 95
S: 1403 ND BUSSUM
S: The Netherlands
N: Martin Kepplinger
E: martink@posteo.de
E: martin.kepplinger@theobroma-systems.com
W: http://www.martinkepplinger.com
D: mma8452 accelerators iio driver
D: Kernel cleanups
S: Garnisonstraße 26
S: 4020 Linz
S: Austria
N: Karl Keyte
E: karl@koft.com
D: Disk usage statistics and modifications to line printer driver


@ -951,16 +951,6 @@ to "Closing".
alignment constraints (e.g. the alignment constraints about 64-bit
objects).
3) Supporting multiple types of IOMMUs
If your architecture needs to support multiple types of IOMMUs, you
can use include/linux/asm-generic/dma-mapping-common.h. It's a
library to support the DMA API with multiple types of IOMMUs. Lots
of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
sparc) use it. Choose one to see how it can be used. If you need to
support multiple types of IOMMUs in a single system, the example of
x86 or powerpc helps.
Closing

This document, and the API itself, would not be in its current


@ -819,6 +819,78 @@ PAGE_SIZE multiple when read back.
the cgroup. This may not exactly match the number of
processes killed but should generally be close.
memory.stat
A read-only flat-keyed file which exists on non-root cgroups.
This breaks down the cgroup's memory footprint into different
types of memory, type-specific details, and other information
on the state and past events of the memory management system.
All memory amounts are in bytes.
The entries are ordered to be human readable, and new entries
can show up in the middle. Don't rely on items remaining in a
fixed position; use the keys to look up specific values!
anon
Amount of memory used in anonymous mappings such as
brk(), sbrk(), and mmap(MAP_ANONYMOUS)
file
Amount of memory used to cache filesystem data,
including tmpfs and shared memory.
file_mapped
Amount of cached filesystem data mapped with mmap()
file_dirty
Amount of cached filesystem data that was modified but
not yet written back to disk
file_writeback
Amount of cached filesystem data that was modified and
is currently being written back to disk
inactive_anon
active_anon
inactive_file
active_file
unevictable
Amount of memory, swap-backed and filesystem-backed,
on the internal memory management lists used by the
page reclaim algorithm
pgfault
Total number of page faults incurred
pgmajfault
Number of major page faults incurred
memory.swap.current
A read-only single value file which exists on non-root
cgroups.
The total amount of swap currently being used by the cgroup
and its descendants.
memory.swap.max
A read-write single value file which exists on non-root
cgroups. The default is "max".
Swap usage hard limit. If a cgroup's swap usage reaches this
limit, anonymous memory of the cgroup will not be swapped out.
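The entries above are flat "key value" pairs, so a minimal, purely
illustrative C sketch (not part of this patch; the cgroup path is an
assumed example) can look a value up by key, as the text recommends:

	/*
	 * Hypothetical sketch: read a cgroup2 memory interface file and
	 * look up one memory.stat key by name rather than by position.
	 * The cgroup path below is an assumption for illustration only.
	 */
	#include <stdio.h>
	#include <string.h>

	static long long memstat_lookup(const char *cgroup, const char *key)
	{
		char path[256], name[64];
		long long val;
		FILE *f;

		snprintf(path, sizeof(path), "%s/memory.stat", cgroup);
		f = fopen(path, "r");
		if (!f)
			return -1;

		/* Entries are flat "key value" pairs; match by key. */
		while (fscanf(f, "%63s %lld", name, &val) == 2) {
			if (!strcmp(name, key)) {
				fclose(f);
				return val;
			}
		}
		fclose(f);
		return -1;
	}

	int main(void)
	{
		const char *cg = "/sys/fs/cgroup/mygroup";	/* assumed path */

		printf("anon bytes: %lld\n", memstat_lookup(cg, "anon"));
		return 0;
	}

A real consumer would also handle missing keys and partial reads; the
point is only that lookups go by key, never by line position.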
5-2-2. General Usage
@ -1291,3 +1363,20 @@ allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.
The combined memory+swap accounting and limiting is replaced by real
control over swap space.
The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing their
anonymous memory in a tight loop - and an admin cannot assume full
swappability when overcommitting untrusted jobs.
For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.


@ -1,40 +0,0 @@
#
# Feature name: dma_map_attrs
# Kconfig: HAVE_DMA_ATTRS
# description: arch provides dma_*map*_attrs() APIs
#
-----------------------
| arch |status|
-----------------------
| alpha: | ok |
| arc: | TODO |
| arm: | ok |
| arm64: | ok |
| avr32: | TODO |
| blackfin: | TODO |
| c6x: | TODO |
| cris: | TODO |
| frv: | TODO |
| h8300: | ok |
| hexagon: | ok |
| ia64: | ok |
| m32r: | TODO |
| m68k: | TODO |
| metag: | TODO |
| microblaze: | ok |
| mips: | ok |
| mn10300: | TODO |
| nios2: | TODO |
| openrisc: | ok |
| parisc: | TODO |
| powerpc: | ok |
| s390: | ok |
| score: | TODO |
| sh: | ok |
| sparc: | ok |
| tile: | ok |
| um: | TODO |
| unicore32: | ok |
| x86: | ok |
| xtensa: | TODO |
-----------------------


@ -180,6 +180,16 @@ dos1xfloppy -- If set, use a fallback default BIOS Parameter Block
<bool>: 0,1,yes,no,true,false
LIMITATION
---------------------------------------------------------------------
* The fallocated region of a file is discarded at umount/evict time
  when fallocate is used with FALLOC_FL_KEEP_SIZE.
  So, the user should assume that the fallocated region can be
  discarded at the last close if there is memory pressure resulting in
  eviction of the inode from memory. As a result, for any dependency on
  the fallocated region, the user should recheck it by calling
  fallocate again after reopening the file.
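Purely as an illustration of the pattern this limitation is about (not
part of the patch; the file name is an assumption), a small C sketch
using FALLOC_FL_KEEP_SIZE might look like this:

	/*
	 * Hypothetical sketch: preallocate past EOF with
	 * FALLOC_FL_KEEP_SIZE and expect that the reservation may be gone
	 * after the inode is evicted and the file is reopened.
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/mnt/vfat/data.bin", O_RDWR | O_CREAT, 0644);

		if (fd < 0)
			return 1;

		/* Reserve 1 MiB beyond EOF without changing the file size. */
		if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1 << 20))
			perror("fallocate");

		close(fd);

		/*
		 * After a later reopen, the reservation may have been
		 * discarded, so a caller that depends on it would need to
		 * call fallocate() again.
		 */
		return 0;
	}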
TODO
----------------------------------------------------------------------
* Need to get rid of the raw scanning stuff. Instead, always use


@ -611,6 +611,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
cgroup.memory=	[KNL] Pass options to the cgroup memory controller.
		Format: <string>
		nosocket -- Disable socket memory accounting.
		nokmem -- Disable kernel memory accounting.
checkreqprot	[SELINUX] Set initial checkreqprot flag value.
		Format: { "0" | "1" }


@ -825,14 +825,13 @@ via the /proc/sys interface:
Each write syscall must fully contain the sysctl value to be
written, and multiple writes on the same sysctl file descriptor
will rewrite the sysctl value, regardless of file position.

-0 - (default) Same behavior as above, but warn about processes that
-    perform writes to a sysctl file descriptor when the file position
-    is not 0.
-1 - Respect file position when writing sysctl strings. Multiple writes
-    will append to the sysctl value buffer. Anything past the max length
-    of the sysctl value buffer will be ignored. Writes to numeric sysctl
-    entries must always be at file position 0 and the value must be
-    fully contained in the buffer sent in the write syscall.
+0 - Same behavior as above, but warn about processes that perform writes
+    to a sysctl file descriptor when the file position is not 0.
+1 - (default) Respect file position when writing sysctl strings. Multiple
+    writes will append to the sysctl value buffer. Anything past the max
+    length of the sysctl value buffer will be ignored. Writes to numeric
+    sysctl entries must always be at file position 0 and the value must
+    be fully contained in the buffer sent in the write syscall.

==============================================================
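To illustrate the behavior change (not part of the patch; the sysctl
file chosen is only an example), a small C sketch under the new default
of 1 might look like this:

	/*
	 * Hypothetical sketch of the strict-write behavior described
	 * above (sysctl_writes_strict == 1): writes honor the file
	 * position, so a second write() on the same open descriptor
	 * appends to the value rather than rewriting it.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/proc/sys/kernel/hostname", O_WRONLY);

		if (fd < 0)
			return 1;

		/* First write starts at file position 0 and sets the value. */
		if (write(fd, "build", 5) < 0)
			perror("write");

		/*
		 * The file position has advanced, so under the new default
		 * (1) this write appends, yielding "buildbox"; under the old
		 * default (0) it would rewrite the value and emit a warning.
		 */
		if (write(fd, "box", 3) < 0)
			perror("write");

		close(fd);
		return 0;
	}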

Documentation/ubsan.txt (new file)

@ -0,0 +1,84 @@
Undefined Behavior Sanitizer - UBSAN
Overview
--------
UBSAN is a runtime undefined behaviour checker.

UBSAN uses compile-time instrumentation to catch undefined behavior (UB).
The compiler inserts code that performs certain kinds of checks before
operations that may cause UB. If a check fails (i.e. UB is detected), a
__ubsan_handle_* function is called to print an error message.

GCC has had this feature since 4.9.x [1] (see the -fsanitize=undefined
option and its suboptions). GCC 5.x implements more checkers [2].
Report example
---------------
================================================================================
UBSAN: Undefined behaviour in ../include/linux/bitops.h:110:33
shift exponent 32 is to large for 32-bit type 'unsigned int'
CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.0-rc1+ #26
0000000000000000 ffffffff82403cc8 ffffffff815e6cd6 0000000000000001
ffffffff82403cf8 ffffffff82403ce0 ffffffff8163a5ed 0000000000000020
ffffffff82403d78 ffffffff8163ac2b ffffffff815f0001 0000000000000002
Call Trace:
[<ffffffff815e6cd6>] dump_stack+0x45/0x5f
[<ffffffff8163a5ed>] ubsan_epilogue+0xd/0x40
[<ffffffff8163ac2b>] __ubsan_handle_shift_out_of_bounds+0xeb/0x130
[<ffffffff815f0001>] ? radix_tree_gang_lookup_slot+0x51/0x150
[<ffffffff8173c586>] _mix_pool_bytes+0x1e6/0x480
[<ffffffff83105653>] ? dmi_walk_early+0x48/0x5c
[<ffffffff8173c881>] add_device_randomness+0x61/0x130
[<ffffffff83105b35>] ? dmi_save_one_device+0xaa/0xaa
[<ffffffff83105653>] dmi_walk_early+0x48/0x5c
[<ffffffff831066ae>] dmi_scan_machine+0x278/0x4b4
[<ffffffff8111d58a>] ? vprintk_default+0x1a/0x20
[<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
[<ffffffff830b2240>] setup_arch+0x405/0xc2c
[<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
[<ffffffff830ae053>] start_kernel+0x83/0x49a
[<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
[<ffffffff830ad386>] x86_64_start_reservations+0x2a/0x2c
[<ffffffff830ad4f3>] x86_64_start_kernel+0x16b/0x17a
================================================================================
Usage
-----
To enable UBSAN configure kernel with:
CONFIG_UBSAN=y
and to check the entire kernel:
CONFIG_UBSAN_SANITIZE_ALL=y
To enable instrumentation for specific files or directories, add a line
similar to the following to the respective kernel Makefile:
For a single file (e.g. main.o):
UBSAN_SANITIZE_main.o := y
For all files in one directory:
UBSAN_SANITIZE := y
To exclude files from being instrumented even if
CONFIG_UBSAN_SANITIZE_ALL=y, use:
UBSAN_SANITIZE_main.o := n
and:
UBSAN_SANITIZE := n
Detection of unaligned accesses is controlled through a separate option,
CONFIG_UBSAN_ALIGNMENT. It is off by default on architectures that
support unaligned accesses (CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y).
It can still be enabled in the config, but note that it will produce a
lot of UBSAN reports.
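As a hedged illustration (not from the kernel tree), the kind of code
the shift checker reports on looks like this; an instrumented kernel
hitting the same pattern would produce a report like the one shown
above:

	/*
	 * Illustrative user-space example of undefined behavior UBSAN
	 * flags: shifting by the full width of the type is undefined.
	 */
	#include <stdio.h>

	unsigned int set_bit_in_word(unsigned int word, unsigned int bit)
	{
		/*
		 * Undefined when bit == 32 for a 32-bit unsigned int; the
		 * shift sanitizer reports "shift exponent ... is too large"
		 * for this pattern at runtime.
		 */
		return word | (1U << bit);
	}

	int main(void)
	{
		printf("%u\n", set_bit_in_word(0, 32));
		return 0;
	}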
References
----------
[1] - https://gcc.gnu.org/onlinedocs/gcc-4.9.0/gcc/Debugging-Options.html
[2] - https://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html


@ -781,6 +781,7 @@ F: sound/aoa/
APM DRIVER APM DRIVER
M: Jiri Kosina <jikos@kernel.org> M: Jiri Kosina <jikos@kernel.org>
S: Odd fixes S: Odd fixes
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/apm.git
F: arch/x86/kernel/apm_32.c F: arch/x86/kernel/apm_32.c
F: include/linux/apm_bios.h F: include/linux/apm_bios.h
F: include/uapi/linux/apm_bios.h F: include/uapi/linux/apm_bios.h
@ -946,6 +947,7 @@ M: Alexandre Belloni <alexandre.belloni@free-electrons.com>
M: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com> M: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.linux4sam.org W: http://www.linux4sam.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/nferre/linux-at91.git
S: Supported S: Supported
F: arch/arm/mach-at91/ F: arch/arm/mach-at91/
F: include/soc/at91/ F: include/soc/at91/
@ -1464,6 +1466,7 @@ ARM/Rockchip SoC support
M: Heiko Stuebner <heiko@sntech.de> M: Heiko Stuebner <heiko@sntech.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-rockchip@lists.infradead.org L: linux-rockchip@lists.infradead.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip.git
S: Maintained S: Maintained
F: arch/arm/boot/dts/rk3* F: arch/arm/boot/dts/rk3*
F: arch/arm/mach-rockchip/ F: arch/arm/mach-rockchip/
@ -1796,6 +1799,7 @@ ARM64 PORT (AARCH64 ARCHITECTURE)
M: Catalin Marinas <catalin.marinas@arm.com> M: Catalin Marinas <catalin.marinas@arm.com>
M: Will Deacon <will.deacon@arm.com> M: Will Deacon <will.deacon@arm.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
S: Maintained S: Maintained
F: arch/arm64/ F: arch/arm64/
F: Documentation/arm64/ F: Documentation/arm64/
@ -1881,7 +1885,7 @@ ATHEROS ATH6KL WIRELESS DRIVER
M: Kalle Valo <kvalo@qca.qualcomm.com> M: Kalle Valo <kvalo@qca.qualcomm.com>
L: linux-wireless@vger.kernel.org L: linux-wireless@vger.kernel.org
W: http://wireless.kernel.org/en/users/Drivers/ath6kl W: http://wireless.kernel.org/en/users/Drivers/ath6kl
T: git git://github.com/kvalo/ath.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
S: Supported S: Supported
F: drivers/net/wireless/ath/ath6kl/ F: drivers/net/wireless/ath/ath6kl/
@ -2133,6 +2137,7 @@ F: drivers/net/wireless/broadcom/b43legacy/
BACKLIGHT CLASS/SUBSYSTEM BACKLIGHT CLASS/SUBSYSTEM
M: Jingoo Han <jingoohan1@gmail.com> M: Jingoo Han <jingoohan1@gmail.com>
M: Lee Jones <lee.jones@linaro.org> M: Lee Jones <lee.jones@linaro.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/backlight.git
S: Maintained S: Maintained
F: drivers/video/backlight/ F: drivers/video/backlight/
F: include/linux/backlight.h F: include/linux/backlight.h
@ -2815,6 +2820,7 @@ F: drivers/input/touchscreen/chipone_icn8318.c
CHROME HARDWARE PLATFORM SUPPORT CHROME HARDWARE PLATFORM SUPPORT
M: Olof Johansson <olof@lixom.net> M: Olof Johansson <olof@lixom.net>
S: Maintained S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/olof/chrome-platform.git
F: drivers/platform/chrome/ F: drivers/platform/chrome/
CISCO VIC ETHERNET NIC DRIVER CISCO VIC ETHERNET NIC DRIVER
@ -3113,6 +3119,7 @@ M: Mikael Starvik <starvik@axis.com>
M: Jesper Nilsson <jesper.nilsson@axis.com> M: Jesper Nilsson <jesper.nilsson@axis.com>
L: linux-cris-kernel@axis.com L: linux-cris-kernel@axis.com
W: http://developer.axis.com W: http://developer.axis.com
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jesper/cris.git
S: Maintained S: Maintained
F: arch/cris/ F: arch/cris/
F: drivers/tty/serial/crisv10.* F: drivers/tty/serial/crisv10.*
@ -3121,6 +3128,7 @@ CRYPTO API
M: Herbert Xu <herbert@gondor.apana.org.au> M: Herbert Xu <herbert@gondor.apana.org.au>
M: "David S. Miller" <davem@davemloft.net> M: "David S. Miller" <davem@davemloft.net>
L: linux-crypto@vger.kernel.org L: linux-crypto@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git
S: Maintained S: Maintained
F: Documentation/crypto/ F: Documentation/crypto/
@ -3583,7 +3591,7 @@ M: Christine Caulfield <ccaulfie@redhat.com>
M: David Teigland <teigland@redhat.com> M: David Teigland <teigland@redhat.com>
L: cluster-devel@redhat.com L: cluster-devel@redhat.com
W: http://sources.redhat.com/cluster/ W: http://sources.redhat.com/cluster/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/teigland/dlm.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/teigland/linux-dlm.git
S: Supported S: Supported
F: fs/dlm/ F: fs/dlm/
@ -3997,6 +4005,7 @@ M: Tyler Hicks <tyhicks@canonical.com>
L: ecryptfs@vger.kernel.org L: ecryptfs@vger.kernel.org
W: http://ecryptfs.org W: http://ecryptfs.org
W: https://launchpad.net/ecryptfs W: https://launchpad.net/ecryptfs
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tyhicks/ecryptfs.git
S: Supported S: Supported
F: Documentation/filesystems/ecryptfs.txt F: Documentation/filesystems/ecryptfs.txt
F: fs/ecryptfs/ F: fs/ecryptfs/
@ -4275,6 +4284,7 @@ M: Andreas Dilger <adilger.kernel@dilger.ca>
L: linux-ext4@vger.kernel.org L: linux-ext4@vger.kernel.org
W: http://ext4.wiki.kernel.org W: http://ext4.wiki.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-ext4/list/ Q: http://patchwork.ozlabs.org/project/linux-ext4/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.git
S: Maintained S: Maintained
F: Documentation/filesystems/ext4.txt F: Documentation/filesystems/ext4.txt
F: fs/ext4/ F: fs/ext4/
@ -4957,6 +4967,7 @@ F: include/linux/hw_random.h
HARDWARE SPINLOCK CORE HARDWARE SPINLOCK CORE
M: Ohad Ben-Cohen <ohad@wizery.com> M: Ohad Ben-Cohen <ohad@wizery.com>
S: Maintained S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock.git
F: Documentation/hwspinlock.txt F: Documentation/hwspinlock.txt
F: drivers/hwspinlock/hwspinlock_* F: drivers/hwspinlock/hwspinlock_*
F: include/linux/hwspinlock.h F: include/linux/hwspinlock.h
@ -5495,6 +5506,7 @@ M: Dmitry Kasatkin <dmitry.kasatkin@gmail.com>
L: linux-ima-devel@lists.sourceforge.net L: linux-ima-devel@lists.sourceforge.net
L: linux-ima-user@lists.sourceforge.net L: linux-ima-user@lists.sourceforge.net
L: linux-security-module@vger.kernel.org L: linux-security-module@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity.git
S: Supported S: Supported
F: security/integrity/ima/ F: security/integrity/ima/
@ -5750,11 +5762,11 @@ F: include/linux/mic_bus.h
F: include/linux/scif.h F: include/linux/scif.h
F: include/uapi/linux/mic_common.h F: include/uapi/linux/mic_common.h
F: include/uapi/linux/mic_ioctl.h F: include/uapi/linux/mic_ioctl.h
F include/uapi/linux/scif_ioctl.h F: include/uapi/linux/scif_ioctl.h
F: drivers/misc/mic/ F: drivers/misc/mic/
F: drivers/dma/mic_x100_dma.c F: drivers/dma/mic_x100_dma.c
F: drivers/dma/mic_x100_dma.h F: drivers/dma/mic_x100_dma.h
F Documentation/mic/ F: Documentation/mic/
INTEL PMC/P-Unit IPC DRIVER INTEL PMC/P-Unit IPC DRIVER
M: Zha Qipeng<qipeng.zha@intel.com> M: Zha Qipeng<qipeng.zha@intel.com>
@ -5835,6 +5847,8 @@ M: Julian Anastasov <ja@ssi.bg>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
L: lvs-devel@vger.kernel.org L: lvs-devel@vger.kernel.org
S: Maintained S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs-next.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs.git
F: Documentation/networking/ipvs-sysctl.txt F: Documentation/networking/ipvs-sysctl.txt
F: include/net/ip_vs.h F: include/net/ip_vs.h
F: include/uapi/linux/ip_vs.h F: include/uapi/linux/ip_vs.h
@ -6118,6 +6132,7 @@ M: "J. Bruce Fields" <bfields@fieldses.org>
M: Jeff Layton <jlayton@poochiereds.net> M: Jeff Layton <jlayton@poochiereds.net>
L: linux-nfs@vger.kernel.org L: linux-nfs@vger.kernel.org
W: http://nfs.sourceforge.net/ W: http://nfs.sourceforge.net/
T: git git://linux-nfs.org/~bfields/linux.git
S: Supported S: Supported
F: fs/nfsd/ F: fs/nfsd/
F: include/uapi/linux/nfsd/ F: include/uapi/linux/nfsd/
@ -6174,6 +6189,7 @@ M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Cornelia Huck <cornelia.huck@de.ibm.com> M: Cornelia Huck <cornelia.huck@de.ibm.com>
L: linux-s390@vger.kernel.org L: linux-s390@vger.kernel.org
W: http://www.ibm.com/developerworks/linux/linux390/ W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git
S: Supported S: Supported
F: Documentation/s390/kvm.txt F: Documentation/s390/kvm.txt
F: arch/s390/include/asm/kvm* F: arch/s390/include/asm/kvm*
@ -6247,6 +6263,7 @@ KGDB / KDB /debug_core
M: Jason Wessel <jason.wessel@windriver.com> M: Jason Wessel <jason.wessel@windriver.com>
W: http://kgdb.wiki.kernel.org/ W: http://kgdb.wiki.kernel.org/
L: kgdb-bugreport@lists.sourceforge.net L: kgdb-bugreport@lists.sourceforge.net
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb.git
S: Maintained S: Maintained
F: Documentation/DocBook/kgdb.tmpl F: Documentation/DocBook/kgdb.tmpl
F: drivers/misc/kgdbts.c F: drivers/misc/kgdbts.c
@ -6418,6 +6435,7 @@ LIBNVDIMM: NON-VOLATILE MEMORY DEVICE SUBSYSTEM
M: Dan Williams <dan.j.williams@intel.com> M: Dan Williams <dan.j.williams@intel.com>
L: linux-nvdimm@lists.01.org L: linux-nvdimm@lists.01.org
Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ Q: https://patchwork.kernel.org/project/linux-nvdimm/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm.git
S: Supported S: Supported
F: drivers/nvdimm/* F: drivers/nvdimm/*
F: include/linux/nd.h F: include/linux/nd.h
@ -7087,6 +7105,7 @@ F: Documentation/hwmon/menf21bmc
METAG ARCHITECTURE METAG ARCHITECTURE
M: James Hogan <james.hogan@imgtec.com> M: James Hogan <james.hogan@imgtec.com>
L: linux-metag@vger.kernel.org L: linux-metag@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag.git
S: Odd Fixes S: Odd Fixes
F: arch/metag/ F: arch/metag/
F: Documentation/metag/ F: Documentation/metag/
@ -7568,7 +7587,8 @@ NETWORKING DRIVERS (WIRELESS)
M: Kalle Valo <kvalo@codeaurora.org> M: Kalle Valo <kvalo@codeaurora.org>
L: linux-wireless@vger.kernel.org L: linux-wireless@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-wireless/list/ Q: http://patchwork.kernel.org/project/linux-wireless/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers.git/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next.git
S: Maintained S: Maintained
F: drivers/net/wireless/ F: drivers/net/wireless/
@ -7974,6 +7994,7 @@ M: Mark Rutland <mark.rutland@arm.com>
M: Ian Campbell <ijc+devicetree@hellion.org.uk> M: Ian Campbell <ijc+devicetree@hellion.org.uk>
M: Kumar Gala <galak@codeaurora.org> M: Kumar Gala <galak@codeaurora.org>
L: devicetree@vger.kernel.org L: devicetree@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git
S: Maintained S: Maintained
F: Documentation/devicetree/ F: Documentation/devicetree/
F: arch/*/boot/dts/ F: arch/*/boot/dts/
@ -8364,7 +8385,7 @@ PCMCIA SUBSYSTEM
P: Linux PCMCIA Team P: Linux PCMCIA Team
L: linux-pcmcia@lists.infradead.org L: linux-pcmcia@lists.infradead.org
W: http://lists.infradead.org/mailman/listinfo/linux-pcmcia W: http://lists.infradead.org/mailman/listinfo/linux-pcmcia
T: git git://git.kernel.org/pub/scm/linux/kernel/git/brodo/pcmcia-2.6.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/brodo/pcmcia.git
S: Maintained S: Maintained
F: Documentation/pcmcia/ F: Documentation/pcmcia/
F: drivers/pcmcia/ F: drivers/pcmcia/
@ -8686,7 +8707,7 @@ M: Colin Cross <ccross@android.com>
M: Kees Cook <keescook@chromium.org> M: Kees Cook <keescook@chromium.org>
M: Tony Luck <tony.luck@intel.com> M: Tony Luck <tony.luck@intel.com>
S: Maintained S: Maintained
T: git git://git.infradead.org/users/cbou/linux-pstore.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux.git
F: fs/pstore/ F: fs/pstore/
F: include/linux/pstore* F: include/linux/pstore*
F: drivers/firmware/efi/efi-pstore.c F: drivers/firmware/efi/efi-pstore.c
@ -8895,13 +8916,14 @@ QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
M: Kalle Valo <kvalo@qca.qualcomm.com> M: Kalle Valo <kvalo@qca.qualcomm.com>
L: ath10k@lists.infradead.org L: ath10k@lists.infradead.org
W: http://wireless.kernel.org/en/users/Drivers/ath10k W: http://wireless.kernel.org/en/users/Drivers/ath10k
T: git git://github.com/kvalo/ath.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
S: Supported S: Supported
F: drivers/net/wireless/ath/ath10k/ F: drivers/net/wireless/ath/ath10k/
QUALCOMM HEXAGON ARCHITECTURE QUALCOMM HEXAGON ARCHITECTURE
M: Richard Kuo <rkuo@codeaurora.org> M: Richard Kuo <rkuo@codeaurora.org>
L: linux-hexagon@vger.kernel.org L: linux-hexagon@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rkuo/linux-hexagon-kernel.git
S: Supported S: Supported
F: arch/hexagon/ F: arch/hexagon/
@ -9100,6 +9122,7 @@ F: drivers/phy/phy-rcar-gen3-usb2.c
RESET CONTROLLER FRAMEWORK RESET CONTROLLER FRAMEWORK
M: Philipp Zabel <p.zabel@pengutronix.de> M: Philipp Zabel <p.zabel@pengutronix.de>
T: git git://git.pengutronix.de/git/pza/linux
S: Maintained S: Maintained
F: drivers/reset/ F: drivers/reset/
F: Documentation/devicetree/bindings/reset/ F: Documentation/devicetree/bindings/reset/
@ -9247,6 +9270,7 @@ M: Martin Schwidefsky <schwidefsky@de.ibm.com>
M: Heiko Carstens <heiko.carstens@de.ibm.com> M: Heiko Carstens <heiko.carstens@de.ibm.com>
L: linux-s390@vger.kernel.org L: linux-s390@vger.kernel.org
W: http://www.ibm.com/developerworks/linux/linux390/ W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git
S: Supported S: Supported
F: arch/s390/ F: arch/s390/
F: drivers/s390/ F: drivers/s390/
@ -9439,7 +9463,7 @@ M: Lukasz Majewski <l.majewski@samsung.com>
L: linux-pm@vger.kernel.org L: linux-pm@vger.kernel.org
L: linux-samsung-soc@vger.kernel.org L: linux-samsung-soc@vger.kernel.org
S: Supported S: Supported
T: https://github.com/lmajewski/linux-samsung-thermal.git T: git https://github.com/lmajewski/linux-samsung-thermal.git
F: drivers/thermal/samsung/ F: drivers/thermal/samsung/
SAMSUNG USB2 PHY DRIVER SAMSUNG USB2 PHY DRIVER
@ -10092,6 +10116,7 @@ F: drivers/media/pci/solo6x10/
SOFTWARE RAID (Multiple Disks) SUPPORT SOFTWARE RAID (Multiple Disks) SUPPORT
L: linux-raid@vger.kernel.org L: linux-raid@vger.kernel.org
T: git git://neil.brown.name/md
S: Supported S: Supported
F: drivers/md/ F: drivers/md/
F: include/linux/raid/ F: include/linux/raid/
@ -10263,6 +10288,7 @@ SQUASHFS FILE SYSTEM
M: Phillip Lougher <phillip@squashfs.org.uk> M: Phillip Lougher <phillip@squashfs.org.uk>
L: squashfs-devel@lists.sourceforge.net (subscribers-only) L: squashfs-devel@lists.sourceforge.net (subscribers-only)
W: http://squashfs.org.uk W: http://squashfs.org.uk
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pkl/squashfs-next.git
S: Maintained S: Maintained
F: Documentation/filesystems/squashfs.txt F: Documentation/filesystems/squashfs.txt
F: fs/squashfs/ F: fs/squashfs/
@ -10459,6 +10485,7 @@ F: arch/x86/boot/video*
SWIOTLB SUBSYSTEM SWIOTLB SUBSYSTEM
M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git
S: Supported S: Supported
F: lib/swiotlb.c F: lib/swiotlb.c
F: arch/*/kernel/pci-swiotlb.c F: arch/*/kernel/pci-swiotlb.c
@ -10722,6 +10749,7 @@ TENSILICA XTENSA PORT (xtensa)
M: Chris Zankel <chris@zankel.net> M: Chris Zankel <chris@zankel.net>
M: Max Filippov <jcmvbkbc@gmail.com> M: Max Filippov <jcmvbkbc@gmail.com>
L: linux-xtensa@linux-xtensa.org L: linux-xtensa@linux-xtensa.org
T: git git://github.com/czankel/xtensa-linux.git
S: Maintained S: Maintained
F: arch/xtensa/ F: arch/xtensa/
F: drivers/irqchip/irq-xtensa-* F: drivers/irqchip/irq-xtensa-*
@ -11004,7 +11032,7 @@ R: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
W: http://tpmdd.sourceforge.net W: http://tpmdd.sourceforge.net
L: tpmdd-devel@lists.sourceforge.net (moderated for non-subscribers) L: tpmdd-devel@lists.sourceforge.net (moderated for non-subscribers)
Q: git git://github.com/PeterHuewe/linux-tpmdd.git Q: git git://github.com/PeterHuewe/linux-tpmdd.git
T: https://github.com/PeterHuewe/linux-tpmdd T: git https://github.com/PeterHuewe/linux-tpmdd
S: Maintained S: Maintained
F: drivers/char/tpm/ F: drivers/char/tpm/
@ -11461,6 +11489,7 @@ M: Richard Weinberger <richard@nod.at>
L: user-mode-linux-devel@lists.sourceforge.net L: user-mode-linux-devel@lists.sourceforge.net
L: user-mode-linux-user@lists.sourceforge.net L: user-mode-linux-user@lists.sourceforge.net
W: http://user-mode-linux.sourceforge.net W: http://user-mode-linux.sourceforge.net
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml.git
S: Maintained S: Maintained
F: Documentation/virtual/uml/ F: Documentation/virtual/uml/
F: arch/um/ F: arch/um/
@ -11507,6 +11536,7 @@ F: fs/fat/
VFIO DRIVER VFIO DRIVER
M: Alex Williamson <alex.williamson@redhat.com> M: Alex Williamson <alex.williamson@redhat.com>
L: kvm@vger.kernel.org L: kvm@vger.kernel.org
T: git git://github.com/awilliam/linux-vfio.git
S: Maintained S: Maintained
F: Documentation/vfio.txt F: Documentation/vfio.txt
F: drivers/vfio/ F: drivers/vfio/
@ -11576,6 +11606,7 @@ M: "Michael S. Tsirkin" <mst@redhat.com>
L: kvm@vger.kernel.org L: kvm@vger.kernel.org
L: virtualization@lists.linux-foundation.org L: virtualization@lists.linux-foundation.org
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
S: Maintained S: Maintained
F: drivers/vhost/ F: drivers/vhost/
F: include/uapi/linux/vhost.h F: include/uapi/linux/vhost.h
@ -11992,7 +12023,7 @@ M: Dave Chinner <david@fromorbit.com>
M: xfs@oss.sgi.com M: xfs@oss.sgi.com
L: xfs@oss.sgi.com L: xfs@oss.sgi.com
W: http://oss.sgi.com/projects/xfs W: http://oss.sgi.com/projects/xfs
T: git git://oss.sgi.com/xfs/xfs.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs.git
S: Supported S: Supported
F: Documentation/filesystems/xfs.txt F: Documentation/filesystems/xfs.txt
F: fs/xfs/ F: fs/xfs/


@ -411,7 +411,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN CFLAGS_UBSAN
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@ -784,6 +784,7 @@ endif
include scripts/Makefile.kasan include scripts/Makefile.kasan
include scripts/Makefile.extrawarn include scripts/Makefile.extrawarn
include scripts/Makefile.ubsan
# Add any arch overrides and user supplied CPPFLAGS, AFLAGS and CFLAGS as the # Add any arch overrides and user supplied CPPFLAGS, AFLAGS and CFLAGS as the
# last assignments # last assignments


@ -205,9 +205,6 @@ config HAVE_NMI_WATCHDOG
config HAVE_ARCH_TRACEHOOK config HAVE_ARCH_TRACEHOOK
bool bool
config HAVE_DMA_ATTRS
bool
config HAVE_DMA_CONTIGUOUS config HAVE_DMA_CONTIGUOUS
bool bool
@ -632,4 +629,7 @@ config OLD_SIGACTION
config COMPAT_OLD_SIGACTION config COMPAT_OLD_SIGACTION
bool bool
config ARCH_NO_COHERENT_DMA_MMAP
bool
source "kernel/gcov/Kconfig" source "kernel/gcov/Kconfig"


@ -9,7 +9,6 @@ config ALPHA
select HAVE_OPROFILE select HAVE_OPROFILE
select HAVE_PCSPKR_PLATFORM select HAVE_PCSPKR_PLATFORM
select HAVE_PERF_EVENTS select HAVE_PERF_EVENTS
select HAVE_DMA_ATTRS
select VIRT_TO_BUS select VIRT_TO_BUS
select GENERIC_IRQ_PROBE select GENERIC_IRQ_PROBE
select AUTO_IRQ_AFFINITY if SMP select AUTO_IRQ_AFFINITY if SMP


@ -10,8 +10,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
return dma_ops; return dma_ops;
} }
#include <asm-generic/dma-mapping-common.h>
#define dma_cache_sync(dev, va, size, dir) ((void)0) #define dma_cache_sync(dev, va, size, dir) ((void)0)
#endif /* _ALPHA_DMA_MAPPING_H */ #endif /* _ALPHA_DMA_MAPPING_H */


@ -47,7 +47,6 @@
#define MADV_WILLNEED 3 /* will need these pages */ #define MADV_WILLNEED 3 /* will need these pages */
#define MADV_SPACEAVAIL 5 /* ensure resources are available */ #define MADV_SPACEAVAIL 5 /* ensure resources are available */
#define MADV_DONTNEED 6 /* don't need these pages */ #define MADV_DONTNEED 6 /* don't need these pages */
#define MADV_FREE 7 /* free pages only if memory pressure */
/* common/generic parameters */ /* common/generic parameters */
#define MADV_FREE 8 /* free pages only if memory pressure */ #define MADV_FREE 8 /* free pages only if memory pressure */


@ -11,192 +11,11 @@
#ifndef ASM_ARC_DMA_MAPPING_H #ifndef ASM_ARC_DMA_MAPPING_H
#define ASM_ARC_DMA_MAPPING_H #define ASM_ARC_DMA_MAPPING_H
#include <asm-generic/dma-coherent.h> extern struct dma_map_ops arc_dma_ops;
#include <asm/cacheflush.h>
void *dma_alloc_noncoherent(struct device *dev, size_t size, static inline struct dma_map_ops *get_dma_ops(struct device *dev)
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_coherent(struct device *dev, size_t size, void *kvaddr,
dma_addr_t dma_handle);
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
/*
* streaming DMA Mapping API...
* CPU accesses page via normal paddr, thus needs to explicitly made
* consistent before each use
*/
static inline void __inline_dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir)
{ {
switch (dir) { return &arc_dma_ops;
case DMA_FROM_DEVICE:
dma_cache_inv(paddr, size);
break;
case DMA_TO_DEVICE:
dma_cache_wback(paddr, size);
break;
case DMA_BIDIRECTIONAL:
dma_cache_wback_inv(paddr, size);
break;
default:
pr_err("Invalid DMA dir [%d] for OP @ %lx\n", dir, paddr);
}
}
void __arc_dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir);
#define _dma_cache_sync(addr, sz, dir) \
do { \
if (__builtin_constant_p(dir)) \
__inline_dma_cache_sync(addr, sz, dir); \
else \
__arc_dma_cache_sync(addr, sz, dir); \
} \
while (0);
static inline dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction dir)
{
_dma_cache_sync((unsigned long)cpu_addr, size, dir);
return (dma_addr_t)cpu_addr;
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir)
{
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
unsigned long paddr = page_to_phys(page) + offset;
return dma_map_single(dev, (void *)paddr, size, dir);
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
_dma_cache_sync(dma_handle + offset, size, DMA_FROM_DEVICE);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
_dma_cache_sync(dma_handle + offset, size, DMA_TO_DEVICE);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
int nelems, enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static inline int dma_supported(struct device *dev, u64 dma_mask)
{
/* Support 32 bit DMA mask exclusively */
return dma_mask == DMA_BIT_MASK(32);
}
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
} }
#endif #endif


@ -17,18 +17,14 @@
*/ */
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/dma-debug.h>
#include <linux/export.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
/*
* Helpers for Coherent DMA API. static void *arc_dma_alloc(struct device *dev, size_t size,
*/ dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
{ {
void *paddr; void *paddr, *kvaddr;
/* This is linear addr (0x8000_0000 based) */ /* This is linear addr (0x8000_0000 based) */
paddr = alloc_pages_exact(size, gfp); paddr = alloc_pages_exact(size, gfp);
@ -38,22 +34,6 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
/* This is bus address, platform dependent */ /* This is bus address, platform dependent */
*dma_handle = (dma_addr_t)paddr; *dma_handle = (dma_addr_t)paddr;
return paddr;
}
EXPORT_SYMBOL(dma_alloc_noncoherent);
void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle)
{
free_pages_exact((void *)dma_handle, size);
}
EXPORT_SYMBOL(dma_free_noncoherent);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
{
void *paddr, *kvaddr;
/* /*
* IOC relies on all data (even coherent DMA data) being in cache * IOC relies on all data (even coherent DMA data) being in cache
* Thus allocate normal cached memory * Thus allocate normal cached memory
@ -65,22 +45,15 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
* -For coherent data, Read/Write to buffers terminate early in cache * -For coherent data, Read/Write to buffers terminate early in cache
* (vs. always going to memory - thus are faster) * (vs. always going to memory - thus are faster)
*/ */
if (is_isa_arcv2() && ioc_exists) if ((is_isa_arcv2() && ioc_exists) ||
return dma_alloc_noncoherent(dev, size, dma_handle, gfp); dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs))
return paddr;
/* This is linear addr (0x8000_0000 based) */
paddr = alloc_pages_exact(size, gfp);
if (!paddr)
return NULL;
/* This is kernel Virtual address (0x7000_0000 based) */ /* This is kernel Virtual address (0x7000_0000 based) */
kvaddr = ioremap_nocache((unsigned long)paddr, size); kvaddr = ioremap_nocache((unsigned long)paddr, size);
if (kvaddr == NULL) if (kvaddr == NULL)
return NULL; return NULL;
/* This is bus address, platform dependent */
*dma_handle = (dma_addr_t)paddr;
/* /*
* Evict any existing L1 and/or L2 lines for the backing page * Evict any existing L1 and/or L2 lines for the backing page
* in case it was used earlier as a normal "cached" page. * in case it was used earlier as a normal "cached" page.
@ -95,26 +68,111 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return kvaddr; return kvaddr;
} }
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size, void *kvaddr, static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle) dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
if (is_isa_arcv2() && ioc_exists) if (!dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs) &&
return dma_free_noncoherent(dev, size, kvaddr, dma_handle); !(is_isa_arcv2() && ioc_exists))
iounmap((void __force __iomem *)vaddr);
iounmap((void __force __iomem *)kvaddr);
free_pages_exact((void *)dma_handle, size); free_pages_exact((void *)dma_handle, size);
} }
EXPORT_SYMBOL(dma_free_coherent);
/* /*
* Helper for streaming DMA... * streaming DMA Mapping API...
* CPU accesses page via normal paddr, thus needs to explicitly made
* consistent before each use
*/ */
void __arc_dma_cache_sync(unsigned long paddr, size_t size, static void _dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir) enum dma_data_direction dir)
{ {
__inline_dma_cache_sync(paddr, size, dir); switch (dir) {
case DMA_FROM_DEVICE:
dma_cache_inv(paddr, size);
break;
case DMA_TO_DEVICE:
dma_cache_wback(paddr, size);
break;
case DMA_BIDIRECTIONAL:
dma_cache_wback_inv(paddr, size);
break;
default:
pr_err("Invalid DMA dir [%d] for OP @ %lx\n", dir, paddr);
}
} }
EXPORT_SYMBOL(__arc_dma_cache_sync);
static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
unsigned long paddr = page_to_phys(page) + offset;
_dma_cache_sync(paddr, size, dir);
return (dma_addr_t)paddr;
}
static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
}
static void arc_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
}
static void arc_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
}
static void arc_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static void arc_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static int arc_dma_supported(struct device *dev, u64 dma_mask)
{
/* Support 32 bit DMA mask exclusively */
return dma_mask == DMA_BIT_MASK(32);
}
struct dma_map_ops arc_dma_ops = {
.alloc = arc_dma_alloc,
.free = arc_dma_free,
.map_page = arc_dma_map_page,
.map_sg = arc_dma_map_sg,
.sync_single_for_device = arc_dma_sync_single_for_device,
.sync_single_for_cpu = arc_dma_sync_single_for_cpu,
.sync_sg_for_cpu = arc_dma_sync_sg_for_cpu,
.sync_sg_for_device = arc_dma_sync_sg_for_device,
.dma_supported = arc_dma_supported,
};
EXPORT_SYMBOL(arc_dma_ops);
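For context (not part of the patch), a sketch of how a driver consumes
these ops through the generic DMA API after the conversion; the device
and buffer here are placeholders:

	/*
	 * Hypothetical driver-side view: with arc_dma_ops registered,
	 * drivers keep calling the generic DMA API and the core
	 * dispatches into the arch's dma_map_ops callbacks.
	 */
	#include <linux/dma-mapping.h>
	#include <linux/device.h>
	#include <linux/errno.h>
	#include <linux/types.h>

	static int example_start_rx(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t handle;

		/* Ends up in arc_dma_map_page() via the ops table. */
		handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, handle))
			return -ENOMEM;

		/* ... program the device with 'handle', wait for completion ... */

		/* Make the buffer visible to the CPU again. */
		dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
		dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
		return 0;
	}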


@ -47,7 +47,6 @@ config ARM
select HAVE_C_RECORDMCOUNT select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select HAVE_DMA_ATTRS
select HAVE_DMA_CONTIGUOUS if MMU select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU


@ -41,13 +41,6 @@ static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
#define HAVE_ARCH_DMA_SUPPORTED 1 #define HAVE_ARCH_DMA_SUPPORTED 1
extern int dma_supported(struct device *dev, u64 mask); extern int dma_supported(struct device *dev, u64 mask);
/*
* Note that while the generic code provides dummy dma_{alloc,free}_noncoherent
* implementations, we don't provide a dma_cache_sync function so drivers using
* this API are highlighted with build warnings.
*/
#include <asm-generic/dma-mapping-common.h>
#ifdef __arch_page_to_dma #ifdef __arch_page_to_dma
#error Please update to __arch_pfn_to_dma #error Please update to __arch_pfn_to_dma
#endif #endif


@ -64,7 +64,6 @@ config ARM64
select HAVE_DEBUG_BUGVERBOSE select HAVE_DEBUG_BUGVERBOSE
select HAVE_DEBUG_KMEMLEAK select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select HAVE_DMA_ATTRS
select HAVE_DMA_CONTIGUOUS select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE
select HAVE_EFFICIENT_UNALIGNED_ACCESS select HAVE_EFFICIENT_UNALIGNED_ACCESS


@ -64,8 +64,6 @@ static inline bool is_device_dma_coherent(struct device *dev)
return dev->archdata.dma_coherent; return dev->archdata.dma_coherent;
} }
#include <asm-generic/dma-mapping-common.h>
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr) static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{ {
return (dma_addr_t)paddr; return (dma_addr_t)paddr;


@ -1,350 +1,14 @@
#ifndef __ASM_AVR32_DMA_MAPPING_H #ifndef __ASM_AVR32_DMA_MAPPING_H
#define __ASM_AVR32_DMA_MAPPING_H #define __ASM_AVR32_DMA_MAPPING_H
#include <linux/mm.h>
#include <linux/device.h>
#include <linux/scatterlist.h>
#include <asm/processor.h>
#include <asm/cacheflush.h>
#include <asm/io.h>
extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size, extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
int direction); int direction);
/* extern struct dma_map_ops avr32_dma_ops;
* Return whether the given device DMA address mask can be supported
* properly. For example, if your device can only drive the low 24-bits static inline struct dma_map_ops *get_dma_ops(struct device *dev)
* during bus mastering, then you would pass 0x00ffffff as the mask
* to this function.
*/
static inline int dma_supported(struct device *dev, u64 mask)
{ {
/* Fix when needed. I really don't know of any limitations */ return &avr32_dma_ops;
return 1;
} }
static inline int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
/*
* dma_map_single can't fail as it is implemented now.
*/
static inline int dma_mapping_error(struct device *dev, dma_addr_t addr)
{
return 0;
}
/**
* dma_alloc_coherent - allocate consistent memory for DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @size: required memory size
* @handle: bus-specific DMA address
*
* Allocate some uncached, unbuffered memory for a device for
* performing DMA. This function allocates pages, and will
* return the CPU-viewed address, and sets @handle to be the
* device-viewed address.
*/
extern void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp);
/**
* dma_free_coherent - free memory allocated by dma_alloc_coherent
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @size: size of memory originally requested in dma_alloc_coherent
* @cpu_addr: CPU-view address returned from dma_alloc_coherent
* @handle: device-view address returned from dma_alloc_coherent
*
* Free (and unmap) a DMA buffer previously allocated by
* dma_alloc_coherent().
*
* References to memory and mappings associated with cpu_addr/handle
* during and after this call executing are illegal.
*/
extern void dma_free_coherent(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t handle);
/**
* dma_alloc_writecombine - allocate write-combining memory for DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @size: required memory size
* @handle: bus-specific DMA address
*
* Allocate some uncached, buffered memory for a device for
* performing DMA. This function allocates pages, and will
* return the CPU-viewed address, and sets @handle to be the
* device-viewed address.
*/
extern void *dma_alloc_writecombine(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp);
/**
* dma_free_coherent - free memory allocated by dma_alloc_writecombine
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @size: size of memory originally requested in dma_alloc_writecombine
* @cpu_addr: CPU-view address returned from dma_alloc_writecombine
* @handle: device-view address returned from dma_alloc_writecombine
*
* Free (and unmap) a DMA buffer previously allocated by
* dma_alloc_writecombine().
*
* References to memory and mappings associated with cpu_addr/handle
* during and after this call executing are illegal.
*/
extern void dma_free_writecombine(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t handle);
/**
* dma_map_single - map a single buffer for streaming DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @cpu_addr: CPU direct mapped address of buffer
* @size: size of buffer to map
* @dir: DMA transfer direction
*
* Ensure that any data held in the cache is appropriately discarded
* or written back.
*
* The device owns this memory once this call has completed. The CPU
* can regain ownership by calling dma_unmap_single() or dma_sync_single().
*/
static inline dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction direction)
{
dma_cache_sync(dev, cpu_addr, size, direction);
return virt_to_bus(cpu_addr);
}
/**
* dma_unmap_single - unmap a single buffer previously mapped
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @handle: DMA address of buffer
* @size: size of buffer to map
* @dir: DMA transfer direction
*
* Unmap a single streaming mode DMA translation. The handle and size
* must match what was provided in the previous dma_map_single() call.
* All other usages are undefined.
*
* After this call, reads by the CPU to the buffer are guaranteed to see
* whatever the device wrote there.
*/
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
}
/**
* dma_map_page - map a portion of a page for streaming DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @page: page that buffer resides in
* @offset: offset into page for start of buffer
* @size: size of buffer to map
* @dir: DMA transfer direction
*
* Ensure that any data held in the cache is appropriately discarded
* or written back.
*
* The device owns this memory once this call has completed. The CPU
* can regain ownership by calling dma_unmap_page() or dma_sync_single().
*/
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
return dma_map_single(dev, page_address(page) + offset,
size, direction);
}
/**
* dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @handle: DMA address of buffer
* @size: size of buffer to map
* @dir: DMA transfer direction
*
* Unmap a single streaming mode DMA translation. The handle and size
* must match what was provided in the previous dma_map_single() call.
* All other usages are undefined.
*
* After this call, reads by the CPU to the buffer are guaranteed to see
* whatever the device wrote there.
*/
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
dma_unmap_single(dev, dma_address, size, direction);
}
/**
* dma_map_sg - map a set of SG buffers for streaming mode DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @sg: list of buffers
* @nents: number of buffers to map
* @dir: DMA transfer direction
*
* Map a set of buffers described by scatterlist in streaming
* mode for DMA. This is the scatter-gather version of the
* above pci_map_single interface. Here the scatter gather list
* elements are each tagged with the appropriate dma address
* and length. They are obtained via sg_dma_{address,length}(SG).
*
* NOTE: An implementation may be able to use a smaller number of
* DMA address/length pairs than there are SG table elements.
* (for example via virtual mapping capabilities)
* The routine returns the number of addr/length pairs actually
* used, at most nents.
*
* Device ownership issues as mentioned above for pci_map_single are
* the same here.
*/
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nents, i) {
char *virt;
sg->dma_address = page_to_bus(sg_page(sg)) + sg->offset;
virt = sg_virt(sg);
dma_cache_sync(dev, virt, sg->length, direction);
}
return nents;
}
/**
* dma_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @sg: list of buffers
* @nents: number of buffers to map
* @dir: DMA transfer direction
*
* Unmap a set of streaming mode DMA translations.
* Again, CPU read rules concerning calls here are the same as for
* pci_unmap_single() above.
*/
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
}
/**
* dma_sync_single_for_cpu
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @handle: DMA address of buffer
* @size: size of buffer to map
* @dir: DMA transfer direction
*
* Make physical memory consistent for a single streaming mode DMA
* translation after a transfer.
*
 * If you perform a dma_map_single() but wish to examine the buffer
 * from the CPU without tearing down the DMA mapping, you must call
 * this function first. Before handing the DMA address back to the
 * device, you must then call dma_sync_single_for_device(), after
 * which the device again owns the buffer.
*/
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
/*
* No need to do anything since the CPU isn't supposed to
* touch this memory after we flushed it at mapping- or
* sync-for-device time.
*/
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
dma_cache_sync(dev, bus_to_virt(dma_handle), size, direction);
}
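
/*
 * Illustrative sketch (not part of this header): the ownership pattern
 * described above for a long-lived streaming mapping -- sync to the CPU
 * before inspecting the buffer, sync back to the device before reuse.
 * All names other than the DMA API calls are hypothetical.
 */
#if 0	/* example only, never compiled */
static void mydev_check_rx(struct device *dev, void *buf, dma_addr_t handle,
			   size_t len)
{
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

	/* The CPU may now read 'buf' and see what the device wrote. */
	mydev_parse_status(buf);

	/* Hand the same mapping back to the device for the next transfer. */
	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
}
#endif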
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
/* just sync everything, that's all the pci API can do */
dma_sync_single_for_cpu(dev, dma_handle, offset+size, direction);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
/* just sync everything, that's all the pci API can do */
dma_sync_single_for_device(dev, dma_handle, offset+size, direction);
}
/**
* dma_sync_sg_for_cpu
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @sg: list of buffers
 * @nents: number of buffers to sync
* @dir: DMA transfer direction
*
* Make physical memory consistent for a set of streaming
* mode DMA translations after a transfer.
*
* The same as dma_sync_single_for_* but for a scatter-gather list,
* same rules and usage.
*/
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction direction)
{
/*
* No need to do anything since the CPU isn't supposed to
* touch this memory after we flushed it at mapping- or
* sync-for-device time.
*/
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nents, i)
dma_cache_sync(dev, sg_virt(sg), sg->length, direction);
}
/* Now for the API extensions over the pci_ one */
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif /* __ASM_AVR32_DMA_MAPPING_H */
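
/*
 * Illustrative sketch (not part of the header above): exporting a coherent
 * buffer to userspace with the dma_mmap_coherent() helper declared above.
 * The file_operations wiring and the mydev_priv fields are hypothetical;
 * cpu_addr/dma_handle are assumed to come from dma_alloc_coherent().
 */
#if 0	/* example only, never compiled */
static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydev_priv *priv = file->private_data;

	return dma_mmap_coherent(priv->dev, vma, priv->cpu_addr,
				 priv->dma_handle, priv->size);
}
#endif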
View File
@ -9,9 +9,14 @@
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/gfp.h> #include <linux/gfp.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/mm.h>
#include <linux/device.h>
#include <linux/scatterlist.h>
#include <asm/addrspace.h> #include <asm/processor.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/io.h>
#include <asm/addrspace.h>
void dma_cache_sync(struct device *dev, void *vaddr, size_t size, int direction) void dma_cache_sync(struct device *dev, void *vaddr, size_t size, int direction)
{ {
@ -93,36 +98,8 @@ static void __dma_free(struct device *dev, size_t size,
__free_page(page++); __free_page(page++);
} }
void *dma_alloc_coherent(struct device *dev, size_t size, static void *avr32_dma_alloc(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp) dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
{
struct page *page;
void *ret = NULL;
page = __dma_alloc(dev, size, handle, gfp);
if (page)
ret = phys_to_uncached(page_to_phys(page));
return ret;
}
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t handle)
{
void *addr = phys_to_cached(uncached_to_phys(cpu_addr));
struct page *page;
pr_debug("dma_free_coherent addr %p (phys %08lx) size %u\n",
cpu_addr, (unsigned long)handle, (unsigned)size);
BUG_ON(!virt_addr_valid(addr));
page = virt_to_page(addr);
__dma_free(dev, size, page, handle);
}
EXPORT_SYMBOL(dma_free_coherent);
void *dma_alloc_writecombine(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp)
{ {
struct page *page; struct page *page;
dma_addr_t phys; dma_addr_t phys;
@ -130,23 +107,91 @@ void *dma_alloc_writecombine(struct device *dev, size_t size,
page = __dma_alloc(dev, size, handle, gfp); page = __dma_alloc(dev, size, handle, gfp);
if (!page) if (!page)
return NULL; return NULL;
phys = page_to_phys(page); phys = page_to_phys(page);
*handle = phys;
/* Now, map the page into P3 with write-combining turned on */ if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs)) {
return __ioremap(phys, size, _PAGE_BUFFER); /* Now, map the page into P3 with write-combining turned on */
*handle = phys;
return __ioremap(phys, size, _PAGE_BUFFER);
} else {
return phys_to_uncached(phys);
}
} }
EXPORT_SYMBOL(dma_alloc_writecombine);
void dma_free_writecombine(struct device *dev, size_t size, static void avr32_dma_free(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t handle) void *cpu_addr, dma_addr_t handle, struct dma_attrs *attrs)
{ {
struct page *page; struct page *page;
iounmap(cpu_addr); if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs)) {
iounmap(cpu_addr);
page = phys_to_page(handle);
} else {
void *addr = phys_to_cached(uncached_to_phys(cpu_addr));
pr_debug("avr32_dma_free addr %p (phys %08lx) size %u\n",
cpu_addr, (unsigned long)handle, (unsigned)size);
BUG_ON(!virt_addr_valid(addr));
page = virt_to_page(addr);
}
page = phys_to_page(handle);
__dma_free(dev, size, page, handle); __dma_free(dev, size, page, handle);
} }
EXPORT_SYMBOL(dma_free_writecombine);
static dma_addr_t avr32_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction, struct dma_attrs *attrs)
{
void *cpu_addr = page_address(page) + offset;
dma_cache_sync(dev, cpu_addr, size, direction);
return virt_to_bus(cpu_addr);
}
static int avr32_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nents, i) {
char *virt;
sg->dma_address = page_to_bus(sg_page(sg)) + sg->offset;
virt = sg_virt(sg);
dma_cache_sync(dev, virt, sg->length, direction);
}
return nents;
}
static void avr32_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
dma_cache_sync(dev, bus_to_virt(dma_handle), size, direction);
}
static void avr32_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nents, i)
dma_cache_sync(dev, sg_virt(sg), sg->length, direction);
}
struct dma_map_ops avr32_dma_ops = {
.alloc = avr32_dma_alloc,
.free = avr32_dma_free,
.map_page = avr32_dma_map_page,
.map_sg = avr32_dma_map_sg,
.sync_single_for_device = avr32_dma_sync_single_for_device,
.sync_sg_for_device = avr32_dma_sync_sg_for_device,
};
EXPORT_SYMBOL(avr32_dma_ops);
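
/*
 * Editor's sketch (not part of the patch): after this conversion the
 * architecture only supplies a struct dma_map_ops plus a get_dma_ops()
 * hook; the generic dma_map_single()/dma_map_page() wrappers then dispatch
 * through those ops, roughly like the simplified form below (the real
 * common code also handles attrs and DMA-debug bookkeeping).
 */
#if 0	/* example only, never compiled */
static inline dma_addr_t example_dma_map_single(struct device *dev, void *ptr,
						size_t size,
						enum dma_data_direction dir)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	return ops->map_page(dev, virt_to_page(ptr), offset_in_page(ptr),
			     size, dir, NULL);
}
#endif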
View File
@ -8,36 +8,6 @@
#define _BLACKFIN_DMA_MAPPING_H #define _BLACKFIN_DMA_MAPPING_H
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
struct scatterlist;
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle);
/*
* Now for the API extensions over the pci_ one
*/
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
#define dma_supported(d, m) (1)
static inline int
dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
static inline int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
extern void extern void
__dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir); __dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir);
@ -66,102 +36,11 @@ _dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir)
__dma_sync(addr, size, dir); __dma_sync(addr, size, dir);
} }
static inline dma_addr_t extern struct dma_map_ops bfin_dma_ops;
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction dir) static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{ {
_dma_sync((dma_addr_t)ptr, size, dir); return &bfin_dma_ops;
return (dma_addr_t) ptr;
} }
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
return dma_map_single(dev, page_address(page) + offset, size, dir);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction dir)
{
dma_unmap_single(dev, dma_addr, size, dir);
}
extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction dir);
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t handle,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t handle,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
_dma_sync(handle + offset, size, dir);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle, size_t size,
enum dma_data_direction dir)
{
dma_sync_single_range_for_cpu(dev, handle, 0, size, dir);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t handle, size_t size,
enum dma_data_direction dir)
{
dma_sync_single_range_for_device(dev, handle, 0, size, dir);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
extern void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir);
static inline void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir)
{
_dma_sync((dma_addr_t)vaddr, size, dir);
}
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif /* _BLACKFIN_DMA_MAPPING_H */ #endif /* _BLACKFIN_DMA_MAPPING_H */
View File
@ -78,8 +78,8 @@ static void __free_dma_pages(unsigned long addr, unsigned int pages)
spin_unlock_irqrestore(&dma_page_lock, flags); spin_unlock_irqrestore(&dma_page_lock, flags);
} }
void *dma_alloc_coherent(struct device *dev, size_t size, static void *bfin_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp) dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{ {
void *ret; void *ret;
@ -92,15 +92,12 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return ret; return ret;
} }
EXPORT_SYMBOL(dma_alloc_coherent);
void static void bfin_dma_free(struct device *dev, size_t size, void *vaddr,
dma_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, struct dma_attrs *attrs)
dma_addr_t dma_handle)
{ {
__free_dma_pages((unsigned long)vaddr, get_pages(size)); __free_dma_pages((unsigned long)vaddr, get_pages(size));
} }
EXPORT_SYMBOL(dma_free_coherent);
/* /*
* Streaming DMA mappings * Streaming DMA mappings
@ -112,9 +109,9 @@ void __dma_sync(dma_addr_t addr, size_t size,
} }
EXPORT_SYMBOL(__dma_sync); EXPORT_SYMBOL(__dma_sync);
int static int bfin_dma_map_sg(struct device *dev, struct scatterlist *sg_list,
dma_map_sg(struct device *dev, struct scatterlist *sg_list, int nents, int nents, enum dma_data_direction direction,
enum dma_data_direction direction) struct dma_attrs *attrs)
{ {
struct scatterlist *sg; struct scatterlist *sg;
int i; int i;
@ -126,10 +123,10 @@ dma_map_sg(struct device *dev, struct scatterlist *sg_list, int nents,
return nents; return nents;
} }
EXPORT_SYMBOL(dma_map_sg);
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg_list, static void bfin_dma_sync_sg_for_device(struct device *dev,
int nelems, enum dma_data_direction direction) struct scatterlist *sg_list, int nelems,
enum dma_data_direction direction)
{ {
struct scatterlist *sg; struct scatterlist *sg;
int i; int i;
@ -139,4 +136,31 @@ void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg_list,
__dma_sync(sg_dma_address(sg), sg_dma_len(sg), direction); __dma_sync(sg_dma_address(sg), sg_dma_len(sg), direction);
} }
} }
EXPORT_SYMBOL(dma_sync_sg_for_device);
static dma_addr_t bfin_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
dma_addr_t handle = (dma_addr_t)(page_address(page) + offset);
_dma_sync(handle, size, dir);
return handle;
}
static inline void bfin_dma_sync_single_for_device(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
_dma_sync(handle, size, dir);
}
struct dma_map_ops bfin_dma_ops = {
.alloc = bfin_dma_alloc,
.free = bfin_dma_free,
.map_page = bfin_dma_map_page,
.map_sg = bfin_dma_map_sg,
.sync_single_for_device = bfin_dma_sync_single_for_device,
.sync_sg_for_device = bfin_dma_sync_sg_for_device,
};
EXPORT_SYMBOL(bfin_dma_ops);
View File
@ -17,6 +17,7 @@ config C6X
select OF_EARLY_FLATTREE select OF_EARLY_FLATTREE
select GENERIC_CLOCKEVENTS select GENERIC_CLOCKEVENTS
select MODULES_USE_ELF_RELA select MODULES_USE_ELF_RELA
select ARCH_NO_COHERENT_DMA_MMAP
config MMU config MMU
def_bool n def_bool n
View File
@ -12,104 +12,22 @@
#ifndef _ASM_C6X_DMA_MAPPING_H #ifndef _ASM_C6X_DMA_MAPPING_H
#define _ASM_C6X_DMA_MAPPING_H #define _ASM_C6X_DMA_MAPPING_H
#include <linux/dma-debug.h>
#include <asm-generic/dma-coherent.h>
#define dma_supported(d, m) 1
static inline void dma_sync_single_range_for_device(struct device *dev,
dma_addr_t addr,
unsigned long offset,
size_t size,
enum dma_data_direction dir)
{
}
static inline int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
/* /*
* DMA errors are defined by all-bits-set in the DMA address. * DMA errors are defined by all-bits-set in the DMA address.
*/ */
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) #define DMA_ERROR_CODE ~0
extern struct dma_map_ops c6x_dma_ops;
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{ {
debug_dma_mapping_error(dev, dma_addr); return &c6x_dma_ops;
return dma_addr == ~0;
} }
extern dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
size_t size, enum dma_data_direction dir);
extern void dma_unmap_single(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir);
extern int dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction);
extern void dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction);
static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
dma_addr_t handle;
handle = dma_map_single(dev, page_address(page) + offset, size, dir);
debug_dma_map_page(dev, page, offset, size, dir, handle, false);
return handle;
}
static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
dma_unmap_single(dev, handle, size, dir);
debug_dma_unmap_page(dev, handle, size, dir, false);
}
extern void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir);
extern void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
size_t size,
enum dma_data_direction dir);
extern void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir);
extern void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir);
extern void coherent_mem_init(u32 start, u32 size); extern void coherent_mem_init(u32 start, u32 size);
extern void *dma_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t); void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
extern void dma_free_coherent(struct device *, size_t, void *, dma_addr_t); gfp_t gfp, struct dma_attrs *attrs);
void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f)) dma_addr_t dma_handle, struct dma_attrs *attrs);
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))
/* Not supported for now */
static inline int dma_mmap_coherent(struct device *dev,
struct vm_area_struct *vma, void *cpu_addr,
dma_addr_t dma_addr, size_t size)
{
return -EINVAL;
}
static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size)
{
return -EINVAL;
}
#endif /* _ASM_C6X_DMA_MAPPING_H */ #endif /* _ASM_C6X_DMA_MAPPING_H */
View File
@ -36,110 +36,101 @@ static void c6x_dma_sync(dma_addr_t handle, size_t size,
} }
} }
dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size, static dma_addr_t c6x_dma_map_page(struct device *dev, struct page *page,
enum dma_data_direction dir) unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{ {
dma_addr_t addr = virt_to_phys(ptr); dma_addr_t handle = virt_to_phys(page_address(page) + offset);
c6x_dma_sync(addr, size, dir); c6x_dma_sync(handle, size, dir);
return handle;
debug_dma_map_page(dev, virt_to_page(ptr),
(unsigned long)ptr & ~PAGE_MASK, size,
dir, addr, true);
return addr;
} }
EXPORT_SYMBOL(dma_map_single);
static void c6x_dma_unmap_page(struct device *dev, dma_addr_t handle,
void dma_unmap_single(struct device *dev, dma_addr_t handle, size_t size, enum dma_data_direction dir, struct dma_attrs *attrs)
size_t size, enum dma_data_direction dir)
{ {
c6x_dma_sync(handle, size, dir); c6x_dma_sync(handle, size, dir);
debug_dma_unmap_page(dev, handle, size, dir, true);
} }
EXPORT_SYMBOL(dma_unmap_single);
static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
int nents, enum dma_data_direction dir)
{ {
struct scatterlist *sg; struct scatterlist *sg;
int i; int i;
for_each_sg(sglist, sg, nents, i) for_each_sg(sglist, sg, nents, i) {
sg->dma_address = dma_map_single(dev, sg_virt(sg), sg->length, sg->dma_address = sg_phys(sg);
dir); c6x_dma_sync(sg->dma_address, sg->length, dir);
}
debug_dma_map_sg(dev, sglist, nents, nents, dir);
return nents; return nents;
} }
EXPORT_SYMBOL(dma_map_sg);
static void c6x_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
void dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction dir,
int nents, enum dma_data_direction dir) struct dma_attrs *attrs)
{ {
struct scatterlist *sg; struct scatterlist *sg;
int i; int i;
for_each_sg(sglist, sg, nents, i) for_each_sg(sglist, sg, nents, i)
dma_unmap_single(dev, sg_dma_address(sg), sg->length, dir); c6x_dma_sync(sg_dma_address(sg), sg->length, dir);
debug_dma_unmap_sg(dev, sglist, nents, dir);
} }
EXPORT_SYMBOL(dma_unmap_sg);
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle, static void c6x_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir) size_t size, enum dma_data_direction dir)
{ {
c6x_dma_sync(handle, size, dir); c6x_dma_sync(handle, size, dir);
debug_dma_sync_single_for_cpu(dev, handle, size, dir);
} }
EXPORT_SYMBOL(dma_sync_single_for_cpu);
static void c6x_dma_sync_single_for_device(struct device *dev,
void dma_sync_single_for_device(struct device *dev, dma_addr_t handle, dma_addr_t handle, size_t size, enum dma_data_direction dir)
size_t size, enum dma_data_direction dir)
{ {
c6x_dma_sync(handle, size, dir); c6x_dma_sync(handle, size, dir);
debug_dma_sync_single_for_device(dev, handle, size, dir);
} }
EXPORT_SYMBOL(dma_sync_single_for_device);
static void c6x_dma_sync_sg_for_cpu(struct device *dev,
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, struct scatterlist *sglist, int nents,
int nents, enum dma_data_direction dir) enum dma_data_direction dir)
{ {
struct scatterlist *sg; struct scatterlist *sg;
int i; int i;
for_each_sg(sglist, sg, nents, i) for_each_sg(sglist, sg, nents, i)
dma_sync_single_for_cpu(dev, sg_dma_address(sg), c6x_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
sg->length, dir); sg->length, dir);
debug_dma_sync_sg_for_cpu(dev, sglist, nents, dir);
} }
EXPORT_SYMBOL(dma_sync_sg_for_cpu);
static void c6x_dma_sync_sg_for_device(struct device *dev,
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist, struct scatterlist *sglist, int nents,
int nents, enum dma_data_direction dir) enum dma_data_direction dir)
{ {
struct scatterlist *sg; struct scatterlist *sg;
int i; int i;
for_each_sg(sglist, sg, nents, i) for_each_sg(sglist, sg, nents, i)
dma_sync_single_for_device(dev, sg_dma_address(sg), c6x_dma_sync_single_for_device(dev, sg_dma_address(sg),
sg->length, dir); sg->length, dir);
debug_dma_sync_sg_for_device(dev, sglist, nents, dir);
} }
EXPORT_SYMBOL(dma_sync_sg_for_device);
struct dma_map_ops c6x_dma_ops = {
.alloc = c6x_dma_alloc,
.free = c6x_dma_free,
.map_page = c6x_dma_map_page,
.unmap_page = c6x_dma_unmap_page,
.map_sg = c6x_dma_map_sg,
.unmap_sg = c6x_dma_unmap_sg,
.sync_single_for_device = c6x_dma_sync_single_for_device,
.sync_single_for_cpu = c6x_dma_sync_single_for_cpu,
.sync_sg_for_device = c6x_dma_sync_sg_for_device,
.sync_sg_for_cpu = c6x_dma_sync_sg_for_cpu,
};
EXPORT_SYMBOL(c6x_dma_ops);
/* Number of entries preallocated for DMA-API debugging */ /* Number of entries preallocated for DMA-API debugging */
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16) #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
View File
@ -73,8 +73,8 @@ static void __free_dma_pages(u32 addr, int order)
* Allocate DMA coherent memory space and return both the kernel * Allocate DMA coherent memory space and return both the kernel
* virtual and DMA address for that space. * virtual and DMA address for that space.
*/ */
void *dma_alloc_coherent(struct device *dev, size_t size, void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
dma_addr_t *handle, gfp_t gfp) gfp_t gfp, struct dma_attrs *attrs)
{ {
u32 paddr; u32 paddr;
int order; int order;
@ -94,13 +94,12 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return phys_to_virt(paddr); return phys_to_virt(paddr);
} }
EXPORT_SYMBOL(dma_alloc_coherent);
/* /*
* Free DMA coherent memory as defined by the above mapping. * Free DMA coherent memory as defined by the above mapping.
*/ */
void dma_free_coherent(struct device *dev, size_t size, void *vaddr, void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle) dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
int order; int order;
@ -111,7 +110,6 @@ void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
__free_dma_pages(virt_to_phys(vaddr), order); __free_dma_pages(virt_to_phys(vaddr), order);
} }
EXPORT_SYMBOL(dma_free_coherent);
/* /*
* Initialise the coherent DMA memory allocator using the given uncached region. * Initialise the coherent DMA memory allocator using the given uncached region.
View File
@ -16,21 +16,18 @@
#include <linux/gfp.h> #include <linux/gfp.h>
#include <asm/io.h> #include <asm/io.h>
void *dma_alloc_coherent(struct device *dev, size_t size, static void *v32_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp) dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{ {
void *ret; void *ret;
int order = get_order(size);
/* ignore region specifiers */ /* ignore region specifiers */
gfp &= ~(__GFP_DMA | __GFP_HIGHMEM); gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
if (dma_alloc_from_coherent(dev, size, dma_handle, &ret))
return ret;
if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff)) if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
gfp |= GFP_DMA; gfp |= GFP_DMA;
ret = (void *)__get_free_pages(gfp, order); ret = (void *)__get_free_pages(gfp, get_order(size));
if (ret != NULL) { if (ret != NULL) {
memset(ret, 0, size); memset(ret, 0, size);
@ -39,12 +36,45 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return ret; return ret;
} }
void dma_free_coherent(struct device *dev, size_t size, static void v32_dma_free(struct device *dev, size_t size, void *vaddr,
void *vaddr, dma_addr_t dma_handle) dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
int order = get_order(size); free_pages((unsigned long)vaddr, get_order(size));
if (!dma_release_from_coherent(dev, order, vaddr))
free_pages((unsigned long)vaddr, order);
} }
static inline dma_addr_t v32_dma_map_page(struct device *dev,
struct page *page, unsigned long offset, size_t size,
enum dma_data_direction direction,
struct dma_attrs *attrs)
{
return page_to_phys(page) + offset;
}
static inline int v32_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
printk("Map sg\n");
return nents;
}
static inline int v32_dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s,
* so we can't guarantee allocations that must be
* within a tighter range than GFP_DMA..
*/
if (mask < 0x00ffffff)
return 0;
return 1;
}
struct dma_map_ops v32_dma_ops = {
.alloc = v32_dma_alloc,
.free = v32_dma_free,
.map_page = v32_dma_map_page,
.map_sg = v32_dma_map_sg,
.dma_supported = v32_dma_supported,
};
EXPORT_SYMBOL(v32_dma_ops);
View File
@ -1,156 +1,20 @@
/* DMA mapping. Nothing tricky here, just virt_to_phys */
#ifndef _ASM_CRIS_DMA_MAPPING_H #ifndef _ASM_CRIS_DMA_MAPPING_H
#define _ASM_CRIS_DMA_MAPPING_H #define _ASM_CRIS_DMA_MAPPING_H
#include <linux/mm.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <asm/cache.h>
#include <asm/io.h>
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
#ifdef CONFIG_PCI #ifdef CONFIG_PCI
#include <asm-generic/dma-coherent.h> extern struct dma_map_ops v32_dma_ops;
void *dma_alloc_coherent(struct device *dev, size_t size, static inline struct dma_map_ops *get_dma_ops(struct device *dev)
dma_addr_t *dma_handle, gfp_t flag);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
#else
static inline void *
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t flag)
{ {
BUG(); return &v32_dma_ops;
return NULL;
} }
#else
static inline void static inline struct dma_map_ops *get_dma_ops(struct device *dev)
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_handle)
{ {
BUG(); BUG();
return NULL;
} }
#endif #endif
static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
return virt_to_phys(ptr);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction)
{
printk("Map sg\n");
return nents;
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
return page_to_phys(page) + offset;
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
}
static inline int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline int
dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s,
* so we can't guarantee allocations that must be
* within a tighter range than GFP_DMA..
*/
if(mask < 0x00ffffff)
return 0;
return 1;
}
static inline int
dma_set_mask(struct device *dev, u64 mask)
{
if(!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
}
static inline void static inline void
dma_cache_sync(struct device *dev, void *vaddr, size_t size, dma_cache_sync(struct device *dev, void *vaddr, size_t size,
@ -158,15 +22,4 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size,
{ {
} }
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif #endif
View File
@ -15,6 +15,7 @@ config FRV
select OLD_SIGSUSPEND3 select OLD_SIGSUSPEND3
select OLD_SIGACTION select OLD_SIGACTION
select HAVE_DEBUG_STACKOVERFLOW select HAVE_DEBUG_STACKOVERFLOW
select ARCH_NO_COHERENT_DMA_MMAP
config ZONE_DMA config ZONE_DMA
bool bool
View File
@ -1,128 +1,17 @@
#ifndef _ASM_DMA_MAPPING_H #ifndef _ASM_DMA_MAPPING_H
#define _ASM_DMA_MAPPING_H #define _ASM_DMA_MAPPING_H
#include <linux/device.h>
#include <linux/scatterlist.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/io.h>
/*
* See Documentation/DMA-API.txt for the description of how the
* following DMA API should work.
*/
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
extern unsigned long __nongprelbss dma_coherent_mem_start; extern unsigned long __nongprelbss dma_coherent_mem_start;
extern unsigned long __nongprelbss dma_coherent_mem_end; extern unsigned long __nongprelbss dma_coherent_mem_end;
void *dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp); extern struct dma_map_ops frv_dma_ops;
void dma_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle);
extern dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size, static inline struct dma_map_ops *get_dma_ops(struct device *dev)
enum dma_data_direction direction);
static inline
void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{ {
BUG_ON(direction == DMA_NONE); return &frv_dma_ops;
}
extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction);
static inline
void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
extern
dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction);
static inline
void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
}
static inline
void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static inline
void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
}
static inline
void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static inline
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
}
static inline
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static inline
int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline
int dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s,
* so we can't guarantee allocations that must be
* within a tighter range than GFP_DMA..
*/
if (mask < 0x00ffffff)
return 0;
return 1;
}
static inline
int dma_set_mask(struct device *dev, u64 mask)
{
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
} }
static inline static inline
@ -132,19 +21,4 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
flush_write_buffers(); flush_write_buffers();
} }
/* Not supported for now */
static inline int dma_mmap_coherent(struct device *dev,
struct vm_area_struct *vma, void *cpu_addr,
dma_addr_t dma_addr, size_t size)
{
return -EINVAL;
}
static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size)
{
return -EINVAL;
}
#endif /* _ASM_DMA_MAPPING_H */ #endif /* _ASM_DMA_MAPPING_H */
View File
@ -43,9 +43,20 @@ static inline unsigned long _swapl(unsigned long v)
//#define __iormb() asm volatile("membar") //#define __iormb() asm volatile("membar")
//#define __iowmb() asm volatile("membar") //#define __iowmb() asm volatile("membar")
#define __raw_readb __builtin_read8 static inline u8 __raw_readb(const volatile void __iomem *addr)
#define __raw_readw __builtin_read16 {
#define __raw_readl __builtin_read32 return __builtin_read8((volatile void __iomem *)addr);
}
static inline u16 __raw_readw(const volatile void __iomem *addr)
{
return __builtin_read16((volatile void __iomem *)addr);
}
static inline u32 __raw_readl(const volatile void __iomem *addr)
{
return __builtin_read32((volatile void __iomem *)addr);
}
#define __raw_writeb(datum, addr) __builtin_write8(addr, datum) #define __raw_writeb(datum, addr) __builtin_write8(addr, datum)
#define __raw_writew(datum, addr) __builtin_write16(addr, datum) #define __raw_writew(datum, addr) __builtin_write16(addr, datum)
View File
@ -34,7 +34,8 @@ struct dma_alloc_record {
static DEFINE_SPINLOCK(dma_alloc_lock); static DEFINE_SPINLOCK(dma_alloc_lock);
static LIST_HEAD(dma_alloc_list); static LIST_HEAD(dma_alloc_list);
void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp) static void *frv_dma_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle,
gfp_t gfp, struct dma_attrs *attrs)
{ {
struct dma_alloc_record *new; struct dma_alloc_record *new;
struct list_head *this = &dma_alloc_list; struct list_head *this = &dma_alloc_list;
@ -84,9 +85,8 @@ void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_hand
return NULL; return NULL;
} }
EXPORT_SYMBOL(dma_alloc_coherent); static void frv_dma_free(struct device *hwdev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
{ {
struct dma_alloc_record *rec; struct dma_alloc_record *rec;
unsigned long flags; unsigned long flags;
@ -105,22 +105,9 @@ void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_
BUG(); BUG();
} }
EXPORT_SYMBOL(dma_free_coherent); static int frv_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size, struct dma_attrs *attrs)
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
return virt_to_bus(ptr);
}
EXPORT_SYMBOL(dma_map_single);
int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{ {
int i; int i;
struct scatterlist *sg; struct scatterlist *sg;
@ -135,14 +122,49 @@ int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
return nents; return nents;
} }
EXPORT_SYMBOL(dma_map_sg); static dma_addr_t frv_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset, enum dma_data_direction direction, struct dma_attrs *attrs)
size_t size, enum dma_data_direction direction)
{ {
BUG_ON(direction == DMA_NONE); BUG_ON(direction == DMA_NONE);
flush_dcache_page(page); flush_dcache_page(page);
return (dma_addr_t) page_to_phys(page) + offset; return (dma_addr_t) page_to_phys(page) + offset;
} }
EXPORT_SYMBOL(dma_map_page); static void frv_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static void frv_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static int frv_dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s,
* so we can't guarantee allocations that must be
* within a tighter range than GFP_DMA..
*/
if (mask < 0x00ffffff)
return 0;
return 1;
}
struct dma_map_ops frv_dma_ops = {
.alloc = frv_dma_alloc,
.free = frv_dma_free,
.map_page = frv_dma_map_page,
.map_sg = frv_dma_map_sg,
.sync_single_for_device = frv_dma_sync_single_for_device,
.sync_sg_for_device = frv_dma_sync_sg_for_device,
.dma_supported = frv_dma_supported,
};
View File

@ -18,7 +18,9 @@
#include <linux/scatterlist.h> #include <linux/scatterlist.h>
#include <asm/io.h> #include <asm/io.h>
void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp) static void *frv_dma_alloc(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp,
struct dma_attrs *attrs)
{ {
void *ret; void *ret;
@ -29,29 +31,15 @@ void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_hand
return ret; return ret;
} }
EXPORT_SYMBOL(dma_alloc_coherent); static void frv_dma_free(struct device *hwdev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
{ {
consistent_free(vaddr); consistent_free(vaddr);
} }
EXPORT_SYMBOL(dma_free_coherent); static int frv_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size, struct dma_attrs *attrs)
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
return virt_to_bus(ptr);
}
EXPORT_SYMBOL(dma_map_single);
int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{ {
unsigned long dampr2; unsigned long dampr2;
void *vaddr; void *vaddr;
@ -79,14 +67,48 @@ int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
return nents; return nents;
} }
EXPORT_SYMBOL(dma_map_sg); static dma_addr_t frv_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset, enum dma_data_direction direction, struct dma_attrs *attrs)
size_t size, enum dma_data_direction direction)
{ {
BUG_ON(direction == DMA_NONE);
flush_dcache_page(page); flush_dcache_page(page);
return (dma_addr_t) page_to_phys(page) + offset; return (dma_addr_t) page_to_phys(page) + offset;
} }
EXPORT_SYMBOL(dma_map_page); static void frv_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static void frv_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static int frv_dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s,
* so we can't guarantee allocations that must be
* within a tighter range than GFP_DMA..
*/
if (mask < 0x00ffffff)
return 0;
return 1;
}
struct dma_map_ops frv_dma_ops = {
.alloc = frv_dma_alloc,
.free = frv_dma_free,
.map_page = frv_dma_map_page,
.map_sg = frv_dma_map_sg,
.sync_single_for_device = frv_dma_sync_single_for_device,
.sync_sg_for_device = frv_dma_sync_sg_for_device,
.dma_supported = frv_dma_supported,
};
EXPORT_SYMBOL(frv_dma_ops);
View File
@ -15,7 +15,6 @@ config H8300
select OF_IRQ select OF_IRQ
select OF_EARLY_FLATTREE select OF_EARLY_FLATTREE
select HAVE_MEMBLOCK select HAVE_MEMBLOCK
select HAVE_DMA_ATTRS
select CLKSRC_OF select CLKSRC_OF
select H8300_TMR8 select H8300_TMR8
View File

@ -8,6 +8,4 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
return &h8300_dma_map_ops; return &h8300_dma_map_ops;
} }
#include <asm-generic/dma-mapping-common.h>
View File

@ -27,7 +27,6 @@ config HEXAGON
select GENERIC_CLOCKEVENTS_BROADCAST select GENERIC_CLOCKEVENTS_BROADCAST
select MODULES_USE_ELF_RELA select MODULES_USE_ELF_RELA
select GENERIC_CPU_DEVICES select GENERIC_CPU_DEVICES
select HAVE_DMA_ATTRS
---help--- ---help---
Qualcomm Hexagon is a processor architecture designed for high Qualcomm Hexagon is a processor architecture designed for high
View File

@ -49,8 +49,6 @@ extern int dma_is_consistent(struct device *dev, dma_addr_t dma_handle);
extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size, extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction); enum dma_data_direction direction);
#include <asm-generic/dma-mapping-common.h>
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{ {
if (!dev->dma_mask) if (!dev->dma_mask)
View File
@ -25,7 +25,6 @@ config IA64
select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE if (!ITANIUM) select HAVE_DYNAMIC_FTRACE if (!ITANIUM)
select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_TRACER
select HAVE_DMA_ATTRS
select TTY select TTY
select HAVE_ARCH_TRACEHOOK select HAVE_ARCH_TRACEHOOK
View File

@ -25,8 +25,6 @@ extern void machvec_dma_sync_sg(struct device *, struct scatterlist *, int,
#define get_dma_ops(dev) platform_dma_get_ops(dev) #define get_dma_ops(dev) platform_dma_get_ops(dev)
#include <asm-generic/dma-mapping-common.h>
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{ {
View File

@ -1,123 +1,17 @@
#ifndef _M68K_DMA_MAPPING_H #ifndef _M68K_DMA_MAPPING_H
#define _M68K_DMA_MAPPING_H #define _M68K_DMA_MAPPING_H
#include <asm/cache.h> extern struct dma_map_ops m68k_dma_ops;
struct scatterlist; static inline struct dma_map_ops *get_dma_ops(struct device *dev)
static inline int dma_supported(struct device *dev, u64 mask)
{ {
return 1; return &m68k_dma_ops;
} }
static inline int dma_set_mask(struct device *dev, u64 mask)
{
return 0;
}
extern void *dma_alloc_coherent(struct device *, size_t,
dma_addr_t *, gfp_t);
extern void dma_free_coherent(struct device *, size_t,
void *, dma_addr_t);
static inline void *dma_alloc_attrs(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag,
struct dma_attrs *attrs)
{
/* attrs is not supported and ignored */
return dma_alloc_coherent(dev, size, dma_handle, flag);
}
static inline void dma_free_attrs(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t dma_handle,
struct dma_attrs *attrs)
{
/* attrs is not supported and ignored */
dma_free_coherent(dev, size, cpu_addr, dma_handle);
}
static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t flag)
{
return dma_alloc_coherent(dev, size, handle, flag);
}
static inline void dma_free_noncoherent(struct device *dev, size_t size,
void *addr, dma_addr_t handle)
{
dma_free_coherent(dev, size, addr, handle);
}
static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size, static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir) enum dma_data_direction dir)
{ {
/* we use coherent allocation, so not much to do here. */ /* we use coherent allocation, so not much to do here. */
} }
extern dma_addr_t dma_map_single(struct device *, void *, size_t,
enum dma_data_direction);
static inline void dma_unmap_single(struct device *dev, dma_addr_t addr,
size_t size, enum dma_data_direction dir)
{
}
extern dma_addr_t dma_map_page(struct device *, struct page *,
unsigned long, size_t size,
enum dma_data_direction);
static inline void dma_unmap_page(struct device *dev, dma_addr_t address,
size_t size, enum dma_data_direction dir)
{
}
extern int dma_map_sg(struct device *, struct scatterlist *, int,
enum dma_data_direction);
static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction dir)
{
}
extern void dma_sync_single_for_device(struct device *, dma_addr_t, size_t,
enum dma_data_direction);
extern void dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
enum dma_data_direction);
static inline void dma_sync_single_range_for_device(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction)
{
/* just sync everything for now */
dma_sync_single_for_device(dev, dma_handle, offset + size, direction);
}
static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
}
static inline void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
}
static inline void dma_sync_single_range_for_cpu(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction)
{
/* just sync everything for now */
dma_sync_single_for_cpu(dev, dma_handle, offset + size, direction);
}
static inline int dma_mapping_error(struct device *dev, dma_addr_t handle)
{
return 0;
}
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif /* _M68K_DMA_MAPPING_H */ #endif /* _M68K_DMA_MAPPING_H */
View File
@ -18,8 +18,8 @@
#if defined(CONFIG_MMU) && !defined(CONFIG_COLDFIRE) #if defined(CONFIG_MMU) && !defined(CONFIG_COLDFIRE)
void *dma_alloc_coherent(struct device *dev, size_t size, static void *m68k_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
dma_addr_t *handle, gfp_t flag) gfp_t flag, struct dma_attrs *attrs)
{ {
struct page *page, **map; struct page *page, **map;
pgprot_t pgprot; pgprot_t pgprot;
@ -61,8 +61,8 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return addr; return addr;
} }
void dma_free_coherent(struct device *dev, size_t size, static void m68k_dma_free(struct device *dev, size_t size, void *addr,
void *addr, dma_addr_t handle) dma_addr_t handle, struct dma_attrs *attrs)
{ {
pr_debug("dma_free_coherent: %p, %x\n", addr, handle); pr_debug("dma_free_coherent: %p, %x\n", addr, handle);
vfree(addr); vfree(addr);
@ -72,8 +72,8 @@ void dma_free_coherent(struct device *dev, size_t size,
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
void *dma_alloc_coherent(struct device *dev, size_t size, static void *m68k_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp) dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{ {
void *ret; void *ret;
/* ignore region specifiers */ /* ignore region specifiers */
@ -90,19 +90,16 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return ret; return ret;
} }
void dma_free_coherent(struct device *dev, size_t size, static void m68k_dma_free(struct device *dev, size_t size, void *vaddr,
void *vaddr, dma_addr_t dma_handle) dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
free_pages((unsigned long)vaddr, get_order(size)); free_pages((unsigned long)vaddr, get_order(size));
} }
#endif /* CONFIG_MMU && !CONFIG_COLDFIRE */ #endif /* CONFIG_MMU && !CONFIG_COLDFIRE */
EXPORT_SYMBOL(dma_alloc_coherent); static void m68k_dma_sync_single_for_device(struct device *dev,
EXPORT_SYMBOL(dma_free_coherent); dma_addr_t handle, size_t size, enum dma_data_direction dir)
void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{ {
switch (dir) { switch (dir) {
case DMA_BIDIRECTIONAL: case DMA_BIDIRECTIONAL:
@ -118,10 +115,9 @@ void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
break; break;
} }
} }
EXPORT_SYMBOL(dma_sync_single_for_device);
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist, static void m68k_dma_sync_sg_for_device(struct device *dev,
int nents, enum dma_data_direction dir) struct scatterlist *sglist, int nents, enum dma_data_direction dir)
{ {
int i; int i;
struct scatterlist *sg; struct scatterlist *sg;
@ -131,31 +127,19 @@ void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
dir); dir);
} }
} }
EXPORT_SYMBOL(dma_sync_sg_for_device);
dma_addr_t dma_map_single(struct device *dev, void *addr, size_t size, static dma_addr_t m68k_dma_map_page(struct device *dev, struct page *page,
enum dma_data_direction dir) unsigned long offset, size_t size, enum dma_data_direction dir,
{ struct dma_attrs *attrs)
dma_addr_t handle = virt_to_bus(addr);
dma_sync_single_for_device(dev, handle, size, dir);
return handle;
}
EXPORT_SYMBOL(dma_map_single);
dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{ {
dma_addr_t handle = page_to_phys(page) + offset; dma_addr_t handle = page_to_phys(page) + offset;
dma_sync_single_for_device(dev, handle, size, dir); dma_sync_single_for_device(dev, handle, size, dir);
return handle; return handle;
} }
EXPORT_SYMBOL(dma_map_page);
int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, static int m68k_dma_map_sg(struct device *dev, struct scatterlist *sglist,
enum dma_data_direction dir) int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
{ {
int i; int i;
struct scatterlist *sg; struct scatterlist *sg;
@ -167,4 +151,13 @@ int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
} }
return nents; return nents;
} }
EXPORT_SYMBOL(dma_map_sg);
struct dma_map_ops m68k_dma_ops = {
.alloc = m68k_dma_alloc,
.free = m68k_dma_free,
.map_page = m68k_dma_map_page,
.map_sg = m68k_dma_map_sg,
.sync_single_for_device = m68k_dma_sync_single_for_device,
.sync_sg_for_device = m68k_dma_sync_sg_for_device,
};
EXPORT_SYMBOL(m68k_dma_ops);
View File
@ -1,177 +1,11 @@
#ifndef _ASM_METAG_DMA_MAPPING_H #ifndef _ASM_METAG_DMA_MAPPING_H
#define _ASM_METAG_DMA_MAPPING_H #define _ASM_METAG_DMA_MAPPING_H
#include <linux/mm.h> extern struct dma_map_ops metag_dma_ops;
#include <asm/cache.h> static inline struct dma_map_ops *get_dma_ops(struct device *dev)
#include <asm/io.h>
#include <linux/scatterlist.h>
#include <asm/bug.h>
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
void dma_sync_for_device(void *vaddr, size_t size, int dma_direction);
void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction);
int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{ {
BUG_ON(!valid_dma_direction(direction)); return &metag_dma_ops;
WARN_ON(size == 0);
dma_sync_for_device(ptr, size, direction);
return virt_to_phys(ptr);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_cpu(phys_to_virt(dma_addr), size, direction);
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{
struct scatterlist *sg;
int i;
BUG_ON(!valid_dma_direction(direction));
WARN_ON(nents == 0 || sglist[0].length == 0);
for_each_sg(sglist, sg, nents, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
return nents;
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_device((void *)(page_to_phys(page) + offset), size,
direction);
return page_to_phys(page) + offset;
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nhwentries,
enum dma_data_direction direction)
{
struct scatterlist *sg;
int i;
BUG_ON(!valid_dma_direction(direction));
WARN_ON(nhwentries == 0 || sglist[0].length == 0);
for_each_sg(sglist, sg, nhwentries, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_cpu(phys_to_virt(dma_handle)+offset, size,
direction);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_device(phys_to_virt(dma_handle)+offset, size,
direction);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nelems,
enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
int nelems, enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
static inline int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
#define dma_supported(dev, mask) (1)
static inline int
dma_set_mask(struct device *dev, u64 mask)
{
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
} }
/* /*
@ -184,11 +18,4 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size,
{ {
} }
/* drivers/base/dma-mapping.c */
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif #endif


@ -171,8 +171,8 @@ static struct metag_vm_region *metag_vm_region_find(struct metag_vm_region
* Allocate DMA-coherent memory space and return both the kernel remapped * Allocate DMA-coherent memory space and return both the kernel remapped
* virtual and bus address for that space. * virtual and bus address for that space.
*/ */
void *dma_alloc_coherent(struct device *dev, size_t size, static void *metag_dma_alloc(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp) dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
{ {
struct page *page; struct page *page;
struct metag_vm_region *c; struct metag_vm_region *c;
@ -263,13 +263,12 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
no_page: no_page:
return NULL; return NULL;
} }
EXPORT_SYMBOL(dma_alloc_coherent);
/* /*
* free a page as defined by the above mapping. * free a page as defined by the above mapping.
*/ */
void dma_free_coherent(struct device *dev, size_t size, static void metag_dma_free(struct device *dev, size_t size, void *vaddr,
void *vaddr, dma_addr_t dma_handle) dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
struct metag_vm_region *c; struct metag_vm_region *c;
unsigned long flags, addr; unsigned long flags, addr;
@ -329,16 +328,19 @@ void dma_free_coherent(struct device *dev, size_t size,
__func__, vaddr); __func__, vaddr);
dump_stack(); dump_stack();
} }
EXPORT_SYMBOL(dma_free_coherent);
static int metag_dma_mmap(struct device *dev, struct vm_area_struct *vma,
static int dma_mmap(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, dma_addr_t dma_addr, size_t size,
void *cpu_addr, dma_addr_t dma_addr, size_t size) struct dma_attrs *attrs)
{ {
int ret = -ENXIO;
unsigned long flags, user_size, kern_size; unsigned long flags, user_size, kern_size;
struct metag_vm_region *c; struct metag_vm_region *c;
int ret = -ENXIO;
if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
else
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
user_size = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; user_size = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
@ -364,25 +366,6 @@ static int dma_mmap(struct device *dev, struct vm_area_struct *vma,
return ret; return ret;
} }
int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
}
EXPORT_SYMBOL(dma_mmap_coherent);
int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
}
EXPORT_SYMBOL(dma_mmap_writecombine);
/* /*
* Initialise the consistent memory allocation. * Initialise the consistent memory allocation.
*/ */
@ -423,7 +406,7 @@ early_initcall(dma_alloc_init);
/* /*
* make an area consistent to devices. * make an area consistent to devices.
*/ */
void dma_sync_for_device(void *vaddr, size_t size, int dma_direction) static void dma_sync_for_device(void *vaddr, size_t size, int dma_direction)
{ {
/* /*
* Ensure any writes get through the write combiner. This is necessary * Ensure any writes get through the write combiner. This is necessary
@ -465,12 +448,11 @@ void dma_sync_for_device(void *vaddr, size_t size, int dma_direction)
wmb(); wmb();
} }
EXPORT_SYMBOL(dma_sync_for_device);
/* /*
* make an area consistent to the core. * make an area consistent to the core.
*/ */
void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction) static void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction)
{ {
/* /*
* Hardware L2 cache prefetch doesn't occur across 4K physical * Hardware L2 cache prefetch doesn't occur across 4K physical
@ -497,4 +479,100 @@ void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction)
rmb(); rmb();
} }
EXPORT_SYMBOL(dma_sync_for_cpu);
static dma_addr_t metag_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction, struct dma_attrs *attrs)
{
dma_sync_for_device((void *)(page_to_phys(page) + offset), size,
direction);
return page_to_phys(page) + offset;
}
static void metag_dma_unmap_page(struct device *dev, dma_addr_t dma_address,
size_t size, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
}
static int metag_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
return nents;
}
static void metag_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nhwentries, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nhwentries, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
}
static void metag_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
}
static void metag_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
}
static void metag_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
static void metag_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
struct dma_map_ops metag_dma_ops = {
.alloc = metag_dma_alloc,
.free = metag_dma_free,
.map_page = metag_dma_map_page,
.map_sg = metag_dma_map_sg,
.sync_single_for_device = metag_dma_sync_single_for_device,
.sync_single_for_cpu = metag_dma_sync_single_for_cpu,
.sync_sg_for_cpu = metag_dma_sync_sg_for_cpu,
.mmap = metag_dma_mmap,
};
EXPORT_SYMBOL(metag_dma_ops);
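
The dma_mmap_coherent()/dma_mmap_writecombine() wrappers deleted above are
replaced by generic ones layered on the new .mmap callback; the write-combine
choice now travels as a DMA attribute into metag_dma_mmap(). A hedged sketch of
the generic wrapper (close to, but not guaranteed identical to, the real
<linux/dma-mapping.h> code of this series):

static inline int dma_mmap_writecombine(struct device *dev,
					struct vm_area_struct *vma,
					void *cpu_addr, dma_addr_t dma_addr,
					size_t size)
{
	DEFINE_DMA_ATTRS(attrs);

	/* metag_dma_mmap() above selects pgprot_writecombine() on this attr */
	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
	return dma_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, &attrs);
}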


@ -19,7 +19,6 @@ config MICROBLAZE
select HAVE_ARCH_KGDB select HAVE_ARCH_KGDB
select HAVE_DEBUG_KMEMLEAK select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select HAVE_DMA_ATTRS
select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER


@ -44,8 +44,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
return &dma_direct_ops; return &dma_direct_ops;
} }
#include <asm-generic/dma-mapping-common.h>
static inline void __dma_sync(unsigned long paddr, static inline void __dma_sync(unsigned long paddr,
size_t size, enum dma_data_direction direction) size_t size, enum dma_data_direction direction)
{ {


@ -31,7 +31,6 @@ config MIPS
select RTC_LIB if !MACH_LOONGSON64 select RTC_LIB if !MACH_LOONGSON64
select GENERIC_ATOMIC64 if !64BIT select GENERIC_ATOMIC64 if !64BIT
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select HAVE_DMA_ATTRS
select HAVE_DMA_CONTIGUOUS select HAVE_DMA_CONTIGUOUS
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select GENERIC_IRQ_PROBE select GENERIC_IRQ_PROBE


@ -29,8 +29,6 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
static inline void dma_mark_clean(void *addr, size_t size) {} static inline void dma_mark_clean(void *addr, size_t size) {}
#include <asm-generic/dma-mapping-common.h>
extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size, extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction); enum dma_data_direction direction);


@ -73,7 +73,6 @@
#define MADV_SEQUENTIAL 2 /* expect sequential page references */ #define MADV_SEQUENTIAL 2 /* expect sequential page references */
#define MADV_WILLNEED 3 /* will need these pages */ #define MADV_WILLNEED 3 /* will need these pages */
#define MADV_DONTNEED 4 /* don't need these pages */ #define MADV_DONTNEED 4 /* don't need these pages */
#define MADV_FREE 5 /* free pages only if memory pressure */
/* common parameters: try to keep these consistent across architectures */ /* common parameters: try to keep these consistent across architectures */
#define MADV_FREE 8 /* free pages only if memory pressure */ #define MADV_FREE 8 /* free pages only if memory pressure */


@ -14,6 +14,7 @@ config MN10300
select OLD_SIGSUSPEND3 select OLD_SIGSUSPEND3
select OLD_SIGACTION select OLD_SIGACTION
select HAVE_DEBUG_STACKOVERFLOW select HAVE_DEBUG_STACKOVERFLOW
select ARCH_NO_COHERENT_DMA_MMAP
config AM33_2 config AM33_2
def_bool n def_bool n


@ -11,154 +11,14 @@
#ifndef _ASM_DMA_MAPPING_H #ifndef _ASM_DMA_MAPPING_H
#define _ASM_DMA_MAPPING_H #define _ASM_DMA_MAPPING_H
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/io.h> #include <asm/io.h>
/* extern struct dma_map_ops mn10300_dma_ops;
* See Documentation/DMA-API.txt for the description of how the
* following DMA API should work.
*/
extern void *dma_alloc_coherent(struct device *dev, size_t size, static inline struct dma_map_ops *get_dma_ops(struct device *dev)
dma_addr_t *dma_handle, int flag);
extern void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f))
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))
static inline
dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{ {
BUG_ON(direction == DMA_NONE); return &mn10300_dma_ops;
mn10300_dcache_flush_inv();
return virt_to_bus(ptr);
}
static inline
void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline
int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{
struct scatterlist *sg;
int i;
BUG_ON(!valid_dma_direction(direction));
WARN_ON(nents == 0 || sglist[0].length == 0);
for_each_sg(sglist, sg, nents, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
}
mn10300_dcache_flush_inv();
return nents;
}
static inline
void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
}
static inline
dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
return page_to_bus(page) + offset;
}
static inline
void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
}
static inline
void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
mn10300_dcache_flush_inv();
}
static inline
void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
mn10300_dcache_flush_inv();
}
static inline
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction)
{
}
static inline
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction)
{
mn10300_dcache_flush_inv();
}
static inline
int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline
int dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s, so we can't
* guarantee allocations that must be within a tighter range than
* GFP_DMA
*/
if (mask < 0x00ffffff)
return 0;
return 1;
}
static inline
int dma_set_mask(struct device *dev, u64 mask)
{
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
} }
static inline static inline
@ -168,19 +28,4 @@ void dma_cache_sync(void *vaddr, size_t size,
mn10300_dcache_flush_inv(); mn10300_dcache_flush_inv();
} }
/* Not supported for now */
static inline int dma_mmap_coherent(struct device *dev,
struct vm_area_struct *vma, void *cpu_addr,
dma_addr_t dma_addr, size_t size)
{
return -EINVAL;
}
static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size)
{
return -EINVAL;
}
#endif #endif


@ -20,8 +20,8 @@
static unsigned long pci_sram_allocated = 0xbc000000; static unsigned long pci_sram_allocated = 0xbc000000;
void *dma_alloc_coherent(struct device *dev, size_t size, static void *mn10300_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, int gfp) dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{ {
unsigned long addr; unsigned long addr;
void *ret; void *ret;
@ -61,10 +61,9 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
printk("dma_alloc_coherent() = %p [%x]\n", ret, *dma_handle); printk("dma_alloc_coherent() = %p [%x]\n", ret, *dma_handle);
return ret; return ret;
} }
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size, void *vaddr, static void mn10300_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle) dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
unsigned long addr = (unsigned long) vaddr & ~0x20000000; unsigned long addr = (unsigned long) vaddr & ~0x20000000;
@ -73,4 +72,60 @@ void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
free_pages(addr, get_order(size)); free_pages(addr, get_order(size));
} }
EXPORT_SYMBOL(dma_free_coherent);
static int mn10300_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
}
mn10300_dcache_flush_inv();
return nents;
}
static dma_addr_t mn10300_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction, struct dma_attrs *attrs)
{
return page_to_bus(page) + offset;
}
static void mn10300_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
mn10300_dcache_flush_inv();
}
static void mn10300_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction)
{
mn10300_dcache_flush_inv();
}
static int mn10300_dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s, so we can't
* guarantee allocations that must be within a tighter range than
* GFP_DMA
*/
if (mask < 0x00ffffff)
return 0;
return 1;
}
struct dma_map_ops mn10300_dma_ops = {
.alloc = mn10300_dma_alloc,
.free = mn10300_dma_free,
.map_page = mn10300_dma_map_page,
.map_sg = mn10300_dma_map_sg,
.sync_single_for_device = mn10300_dma_sync_single_for_device,
.sync_sg_for_device = mn10300_dma_sync_sg_for_device,
};
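
The 0x00ffffff floor documented in mn10300_dma_supported() above is meant to be
enforced through the generic dma_set_mask()/dma_supported() path once the
helper is hooked up as the .dma_supported callback (the wiring is not visible
in this excerpt), rather than by an arch-specific inline. A hypothetical
driver-side snippet (example_probe and its platform device are made up for
illustration):

static int example_probe(struct platform_device *pdev)
{
	/*
	 * The generic helper ends up consulting the ops table; masks
	 * narrower than 0x00ffffff would be rejected with -EIO here.
	 */
	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
		return -EIO;

	/* ... allocate and map DMA buffers as usual ... */
	return 0;
}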


@ -10,131 +10,20 @@
#ifndef _ASM_NIOS2_DMA_MAPPING_H #ifndef _ASM_NIOS2_DMA_MAPPING_H
#define _ASM_NIOS2_DMA_MAPPING_H #define _ASM_NIOS2_DMA_MAPPING_H
#include <linux/scatterlist.h> extern struct dma_map_ops nios2_dma_ops;
#include <linux/cache.h>
#include <asm/cacheflush.h>
static inline void __dma_sync_for_device(void *vaddr, size_t size, static inline struct dma_map_ops *get_dma_ops(struct device *dev)
enum dma_data_direction direction)
{ {
switch (direction) { return &nios2_dma_ops;
case DMA_FROM_DEVICE:
invalidate_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
case DMA_TO_DEVICE:
/*
* We just need to flush the caches here , but Nios2 flush
* instruction will do both writeback and invalidate.
*/
case DMA_BIDIRECTIONAL: /* flush and invalidate */
flush_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
default:
BUG();
}
}
static inline void __dma_sync_for_cpu(void *vaddr, size_t size,
enum dma_data_direction direction)
{
switch (direction) {
case DMA_BIDIRECTIONAL:
case DMA_FROM_DEVICE:
invalidate_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
case DMA_TO_DEVICE:
break;
default:
BUG();
}
}
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_device(ptr, size, direction);
return virt_to_phys(ptr);
}
static inline void dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction direction)
{
}
extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction);
extern dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction direction);
extern void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
size_t size, enum dma_data_direction direction);
extern void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction direction);
extern void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction);
extern void dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction direction);
extern void dma_sync_single_range_for_cpu(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction);
extern void dma_sync_single_range_for_device(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction);
extern void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction);
extern void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction);
static inline int dma_supported(struct device *dev, u64 mask)
{
return 1;
}
static inline int dma_set_mask(struct device *dev, u64 mask)
{
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
}
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
} }
/* /*
* dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to * dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
* do any flushing here. * do any flushing here.
*/ */
static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size, static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction) enum dma_data_direction direction)
{ {
} }
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif /* _ASM_NIOS2_DMA_MAPPING_H */ #endif /* _ASM_NIOS2_DMA_MAPPING_H */


@ -20,9 +20,46 @@
#include <linux/cache.h> #include <linux/cache.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
static inline void __dma_sync_for_device(void *vaddr, size_t size,
enum dma_data_direction direction)
{
switch (direction) {
case DMA_FROM_DEVICE:
invalidate_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
case DMA_TO_DEVICE:
/*
* We just need to flush the caches here , but Nios2 flush
* instruction will do both writeback and invalidate.
*/
case DMA_BIDIRECTIONAL: /* flush and invalidate */
flush_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
default:
BUG();
}
}
void *dma_alloc_coherent(struct device *dev, size_t size, static inline void __dma_sync_for_cpu(void *vaddr, size_t size,
dma_addr_t *dma_handle, gfp_t gfp) enum dma_data_direction direction)
{
switch (direction) {
case DMA_BIDIRECTIONAL:
case DMA_FROM_DEVICE:
invalidate_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
case DMA_TO_DEVICE:
break;
default:
BUG();
}
}
static void *nios2_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{ {
void *ret; void *ret;
@ -45,24 +82,21 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return ret; return ret;
} }
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size, void *vaddr, static void nios2_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle) dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
unsigned long addr = (unsigned long) CAC_ADDR((unsigned long) vaddr); unsigned long addr = (unsigned long) CAC_ADDR((unsigned long) vaddr);
free_pages(addr, get_order(size)); free_pages(addr, get_order(size));
} }
EXPORT_SYMBOL(dma_free_coherent);
int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, static int nios2_dma_map_sg(struct device *dev, struct scatterlist *sg,
enum dma_data_direction direction) int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{ {
int i; int i;
BUG_ON(!valid_dma_direction(direction));
for_each_sg(sg, sg, nents, i) { for_each_sg(sg, sg, nents, i) {
void *addr; void *addr;
@ -75,40 +109,32 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
return nents; return nents;
} }
EXPORT_SYMBOL(dma_map_sg);
dma_addr_t dma_map_page(struct device *dev, struct page *page, static dma_addr_t nios2_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, unsigned long offset, size_t size,
enum dma_data_direction direction) enum dma_data_direction direction,
struct dma_attrs *attrs)
{ {
void *addr; void *addr = page_address(page) + offset;
BUG_ON(!valid_dma_direction(direction));
addr = page_address(page) + offset;
__dma_sync_for_device(addr, size, direction); __dma_sync_for_device(addr, size, direction);
return page_to_phys(page) + offset; return page_to_phys(page) + offset;
} }
EXPORT_SYMBOL(dma_map_page);
void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size, static void nios2_dma_unmap_page(struct device *dev, dma_addr_t dma_address,
enum dma_data_direction direction) size_t size, enum dma_data_direction direction,
struct dma_attrs *attrs)
{ {
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_cpu(phys_to_virt(dma_address), size, direction); __dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
} }
EXPORT_SYMBOL(dma_unmap_page);
void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries, static void nios2_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
enum dma_data_direction direction) int nhwentries, enum dma_data_direction direction,
struct dma_attrs *attrs)
{ {
void *addr; void *addr;
int i; int i;
BUG_ON(!valid_dma_direction(direction));
if (direction == DMA_TO_DEVICE) if (direction == DMA_TO_DEVICE)
return; return;
@ -118,69 +144,54 @@ void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
__dma_sync_for_cpu(addr, sg->length, direction); __dma_sync_for_cpu(addr, sg->length, direction);
} }
} }
EXPORT_SYMBOL(dma_unmap_sg);
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, static void nios2_dma_sync_single_for_cpu(struct device *dev,
size_t size, enum dma_data_direction direction) dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{ {
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction); __dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
} }
EXPORT_SYMBOL(dma_sync_single_for_cpu);
void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, static void nios2_dma_sync_single_for_device(struct device *dev,
size_t size, enum dma_data_direction direction) dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{ {
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_device(phys_to_virt(dma_handle), size, direction); __dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
} }
EXPORT_SYMBOL(dma_sync_single_for_device);
void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle, static void nios2_dma_sync_sg_for_cpu(struct device *dev,
unsigned long offset, size_t size, struct scatterlist *sg, int nelems,
enum dma_data_direction direction) enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
}
EXPORT_SYMBOL(dma_sync_single_range_for_cpu);
void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
}
EXPORT_SYMBOL(dma_sync_single_range_for_device);
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{ {
int i; int i;
BUG_ON(!valid_dma_direction(direction));
/* Make sure that gcc doesn't leave the empty loop body. */ /* Make sure that gcc doesn't leave the empty loop body. */
for_each_sg(sg, sg, nelems, i) for_each_sg(sg, sg, nelems, i)
__dma_sync_for_cpu(sg_virt(sg), sg->length, direction); __dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
} }
EXPORT_SYMBOL(dma_sync_sg_for_cpu);
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, static void nios2_dma_sync_sg_for_device(struct device *dev,
int nelems, enum dma_data_direction direction) struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{ {
int i; int i;
BUG_ON(!valid_dma_direction(direction));
/* Make sure that gcc doesn't leave the empty loop body. */ /* Make sure that gcc doesn't leave the empty loop body. */
for_each_sg(sg, sg, nelems, i) for_each_sg(sg, sg, nelems, i)
__dma_sync_for_device(sg_virt(sg), sg->length, direction); __dma_sync_for_device(sg_virt(sg), sg->length, direction);
} }
EXPORT_SYMBOL(dma_sync_sg_for_device);
struct dma_map_ops nios2_dma_ops = {
.alloc = nios2_dma_alloc,
.free = nios2_dma_free,
.map_page = nios2_dma_map_page,
.unmap_page = nios2_dma_unmap_page,
.map_sg = nios2_dma_map_sg,
.unmap_sg = nios2_dma_unmap_sg,
.sync_single_for_device = nios2_dma_sync_single_for_device,
.sync_single_for_cpu = nios2_dma_sync_single_for_cpu,
.sync_sg_for_cpu = nios2_dma_sync_sg_for_cpu,
.sync_sg_for_device = nios2_dma_sync_sg_for_device,
};
EXPORT_SYMBOL(nios2_dma_ops);
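
Call sites do not change for scatterlist users: the generic dma_map_sg() and
dma_unmap_sg() wrappers now land in nios2_dma_map_sg()/nios2_dma_unmap_sg(),
which perform the cache maintenance shown above. A hypothetical caller, just to
show the shape of the API (example_map_buffer is made up):

static int example_map_buffer(struct device *dev, struct scatterlist *sgl,
			      int nents)
{
	int mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);

	if (!mapped)
		return -ENOMEM;

	/* ... point the hardware at the mapped entries and wait ... */

	dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
	return 0;
}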


@ -29,9 +29,6 @@ config OPENRISC
config MMU config MMU
def_bool y def_bool y
config HAVE_DMA_ATTRS
def_bool y
config RWSEM_GENERIC_SPINLOCK config RWSEM_GENERIC_SPINLOCK
def_bool y def_bool y


@ -42,6 +42,4 @@ static inline int dma_supported(struct device *dev, u64 dma_mask)
return dma_mask == DMA_BIT_MASK(32); return dma_mask == DMA_BIT_MASK(32);
} }
#include <asm-generic/dma-mapping-common.h>
#endif /* __ASM_OPENRISC_DMA_MAPPING_H */ #endif /* __ASM_OPENRISC_DMA_MAPPING_H */


@ -29,6 +29,7 @@ config PARISC
select TTY # Needed for pdc_cons.c select TTY # Needed for pdc_cons.c
select HAVE_DEBUG_STACKOVERFLOW select HAVE_DEBUG_STACKOVERFLOW
select HAVE_ARCH_AUDITSYSCALL select HAVE_ARCH_AUDITSYSCALL
select ARCH_NO_COHERENT_DMA_MMAP
help help
The PA-RISC microprocessor is designed by Hewlett-Packard and used The PA-RISC microprocessor is designed by Hewlett-Packard and used


@ -1,30 +1,11 @@
#ifndef _PARISC_DMA_MAPPING_H #ifndef _PARISC_DMA_MAPPING_H
#define _PARISC_DMA_MAPPING_H #define _PARISC_DMA_MAPPING_H
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
/* See Documentation/DMA-API-HOWTO.txt */
struct hppa_dma_ops {
int (*dma_supported)(struct device *dev, u64 mask);
void *(*alloc_consistent)(struct device *dev, size_t size, dma_addr_t *iova, gfp_t flag);
void *(*alloc_noncoherent)(struct device *dev, size_t size, dma_addr_t *iova, gfp_t flag);
void (*free_consistent)(struct device *dev, size_t size, void *vaddr, dma_addr_t iova);
dma_addr_t (*map_single)(struct device *dev, void *addr, size_t size, enum dma_data_direction direction);
void (*unmap_single)(struct device *dev, dma_addr_t iova, size_t size, enum dma_data_direction direction);
int (*map_sg)(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction direction);
void (*unmap_sg)(struct device *dev, struct scatterlist *sg, int nhwents, enum dma_data_direction direction);
void (*dma_sync_single_for_cpu)(struct device *dev, dma_addr_t iova, unsigned long offset, size_t size, enum dma_data_direction direction);
void (*dma_sync_single_for_device)(struct device *dev, dma_addr_t iova, unsigned long offset, size_t size, enum dma_data_direction direction);
void (*dma_sync_sg_for_cpu)(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction);
void (*dma_sync_sg_for_device)(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction);
};
/* /*
** We could live without the hppa_dma_ops indirection if we didn't want ** We need to support 4 different coherent dma models with one binary:
** to support 4 different coherent dma models with one binary (they will **
** someday be loadable modules):
** I/O MMU consistent method dma_sync behavior ** I/O MMU consistent method dma_sync behavior
** ============= ====================== ======================= ** ============= ====================== =======================
** a) PA-7x00LC uncachable host memory flush/purge ** a) PA-7x00LC uncachable host memory flush/purge
@ -40,158 +21,22 @@ struct hppa_dma_ops {
*/ */
#ifdef CONFIG_PA11 #ifdef CONFIG_PA11
extern struct hppa_dma_ops pcxl_dma_ops; extern struct dma_map_ops pcxl_dma_ops;
extern struct hppa_dma_ops pcx_dma_ops; extern struct dma_map_ops pcx_dma_ops;
#endif #endif
extern struct hppa_dma_ops *hppa_dma_ops; extern struct dma_map_ops *hppa_dma_ops;
#define dma_alloc_attrs(d, s, h, f, a) dma_alloc_coherent(d, s, h, f) static inline struct dma_map_ops *get_dma_ops(struct device *dev)
#define dma_free_attrs(d, s, h, f, a) dma_free_coherent(d, s, h, f)
static inline void *
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t flag)
{ {
return hppa_dma_ops->alloc_consistent(dev, size, dma_handle, flag); return hppa_dma_ops;
}
static inline void *
dma_alloc_noncoherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t flag)
{
return hppa_dma_ops->alloc_noncoherent(dev, size, dma_handle, flag);
}
static inline void
dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
hppa_dma_ops->free_consistent(dev, size, vaddr, dma_handle);
}
static inline void
dma_free_noncoherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
hppa_dma_ops->free_consistent(dev, size, vaddr, dma_handle);
}
static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{
return hppa_dma_ops->map_single(dev, ptr, size, direction);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
hppa_dma_ops->unmap_single(dev, dma_addr, size, direction);
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction)
{
return hppa_dma_ops->map_sg(dev, sg, nents, direction);
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
hppa_dma_ops->unmap_sg(dev, sg, nhwentries, direction);
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction)
{
return dma_map_single(dev, (page_address(page) + (offset)), size, direction);
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
dma_unmap_single(dev, dma_address, size, direction);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
if(hppa_dma_ops->dma_sync_single_for_cpu)
hppa_dma_ops->dma_sync_single_for_cpu(dev, dma_handle, 0, size, direction);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
if(hppa_dma_ops->dma_sync_single_for_device)
hppa_dma_ops->dma_sync_single_for_device(dev, dma_handle, 0, size, direction);
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
if(hppa_dma_ops->dma_sync_single_for_cpu)
hppa_dma_ops->dma_sync_single_for_cpu(dev, dma_handle, offset, size, direction);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
if(hppa_dma_ops->dma_sync_single_for_device)
hppa_dma_ops->dma_sync_single_for_device(dev, dma_handle, offset, size, direction);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
if(hppa_dma_ops->dma_sync_sg_for_cpu)
hppa_dma_ops->dma_sync_sg_for_cpu(dev, sg, nelems, direction);
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
if(hppa_dma_ops->dma_sync_sg_for_device)
hppa_dma_ops->dma_sync_sg_for_device(dev, sg, nelems, direction);
}
static inline int
dma_supported(struct device *dev, u64 mask)
{
return hppa_dma_ops->dma_supported(dev, mask);
}
static inline int
dma_set_mask(struct device *dev, u64 mask)
{
if(!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
} }
static inline void static inline void
dma_cache_sync(struct device *dev, void *vaddr, size_t size, dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction) enum dma_data_direction direction)
{ {
if(hppa_dma_ops->dma_sync_single_for_cpu) if (hppa_dma_ops->sync_single_for_cpu)
flush_kernel_dcache_range((unsigned long)vaddr, size); flush_kernel_dcache_range((unsigned long)vaddr, size);
} }
@ -238,22 +83,4 @@ struct parisc_device;
void * sba_get_iommu(struct parisc_device *dev); void * sba_get_iommu(struct parisc_device *dev);
#endif #endif
/* At the moment, we panic on error for IOMMU resource exhaustion */
#define dma_mapping_error(dev, x) 0
/* This API cannot be supported on PA-RISC */
static inline int dma_mmap_coherent(struct device *dev,
struct vm_area_struct *vma, void *cpu_addr,
dma_addr_t dma_addr, size_t size)
{
return -EINVAL;
}
static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size)
{
return -EINVAL;
}
#endif #endif


@ -43,7 +43,6 @@
#define MADV_SPACEAVAIL 5 /* insure that resources are reserved */ #define MADV_SPACEAVAIL 5 /* insure that resources are reserved */
#define MADV_VPS_PURGE 6 /* Purge pages from VM page cache */ #define MADV_VPS_PURGE 6 /* Purge pages from VM page cache */
#define MADV_VPS_INHERIT 7 /* Inherit parents page size */ #define MADV_VPS_INHERIT 7 /* Inherit parents page size */
#define MADV_FREE 8 /* free pages only if memory pressure */
/* common/generic parameters */ /* common/generic parameters */
#define MADV_FREE 8 /* free pages only if memory pressure */ #define MADV_FREE 8 /* free pages only if memory pressure */


@ -40,7 +40,7 @@
#include <asm/parisc-device.h> #include <asm/parisc-device.h>
/* See comments in include/asm-parisc/pci.h */ /* See comments in include/asm-parisc/pci.h */
struct hppa_dma_ops *hppa_dma_ops __read_mostly; struct dma_map_ops *hppa_dma_ops __read_mostly;
EXPORT_SYMBOL(hppa_dma_ops); EXPORT_SYMBOL(hppa_dma_ops);
static struct device root = { static struct device root = {


@ -413,7 +413,8 @@ pcxl_dma_init(void)
__initcall(pcxl_dma_init); __initcall(pcxl_dma_init);
static void * pa11_dma_alloc_consistent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag) static void *pa11_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag, struct dma_attrs *attrs)
{ {
unsigned long vaddr; unsigned long vaddr;
unsigned long paddr; unsigned long paddr;
@ -439,7 +440,8 @@ static void * pa11_dma_alloc_consistent (struct device *dev, size_t size, dma_ad
return (void *)vaddr; return (void *)vaddr;
} }
static void pa11_dma_free_consistent (struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle) static void pa11_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
{ {
int order; int order;
@ -450,15 +452,20 @@ static void pa11_dma_free_consistent (struct device *dev, size_t size, void *vad
free_pages((unsigned long)__va(dma_handle), order); free_pages((unsigned long)__va(dma_handle), order);
} }
static dma_addr_t pa11_dma_map_single(struct device *dev, void *addr, size_t size, enum dma_data_direction direction) static dma_addr_t pa11_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction, struct dma_attrs *attrs)
{ {
void *addr = page_address(page) + offset;
BUG_ON(direction == DMA_NONE); BUG_ON(direction == DMA_NONE);
flush_kernel_dcache_range((unsigned long) addr, size); flush_kernel_dcache_range((unsigned long) addr, size);
return virt_to_phys(addr); return virt_to_phys(addr);
} }
static void pa11_dma_unmap_single(struct device *dev, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction) static void pa11_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction,
struct dma_attrs *attrs)
{ {
BUG_ON(direction == DMA_NONE); BUG_ON(direction == DMA_NONE);
@ -475,7 +482,9 @@ static void pa11_dma_unmap_single(struct device *dev, dma_addr_t dma_handle, siz
return; return;
} }
static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{ {
int i; int i;
struct scatterlist *sg; struct scatterlist *sg;
@ -492,7 +501,9 @@ static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, int n
return nents; return nents;
} }
static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{ {
int i; int i;
struct scatterlist *sg; struct scatterlist *sg;
@ -509,18 +520,24 @@ static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, in
return; return;
} }
static void pa11_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, unsigned long offset, size_t size, enum dma_data_direction direction) static void pa11_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{ {
BUG_ON(direction == DMA_NONE); BUG_ON(direction == DMA_NONE);
flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle) + offset, size); flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle),
size);
} }
static void pa11_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, unsigned long offset, size_t size, enum dma_data_direction direction) static void pa11_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{ {
BUG_ON(direction == DMA_NONE); BUG_ON(direction == DMA_NONE);
flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle) + offset, size); flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle),
size);
} }
static void pa11_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) static void pa11_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
@ -545,32 +562,28 @@ static void pa11_dma_sync_sg_for_device(struct device *dev, struct scatterlist *
flush_kernel_vmap_range(sg_virt(sg), sg->length); flush_kernel_vmap_range(sg_virt(sg), sg->length);
} }
struct hppa_dma_ops pcxl_dma_ops = { struct dma_map_ops pcxl_dma_ops = {
.dma_supported = pa11_dma_supported, .dma_supported = pa11_dma_supported,
.alloc_consistent = pa11_dma_alloc_consistent, .alloc = pa11_dma_alloc,
.alloc_noncoherent = pa11_dma_alloc_consistent, .free = pa11_dma_free,
.free_consistent = pa11_dma_free_consistent, .map_page = pa11_dma_map_page,
.map_single = pa11_dma_map_single, .unmap_page = pa11_dma_unmap_page,
.unmap_single = pa11_dma_unmap_single,
.map_sg = pa11_dma_map_sg, .map_sg = pa11_dma_map_sg,
.unmap_sg = pa11_dma_unmap_sg, .unmap_sg = pa11_dma_unmap_sg,
.dma_sync_single_for_cpu = pa11_dma_sync_single_for_cpu, .sync_single_for_cpu = pa11_dma_sync_single_for_cpu,
.dma_sync_single_for_device = pa11_dma_sync_single_for_device, .sync_single_for_device = pa11_dma_sync_single_for_device,
.dma_sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, .sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu,
.dma_sync_sg_for_device = pa11_dma_sync_sg_for_device, .sync_sg_for_device = pa11_dma_sync_sg_for_device,
}; };
static void *fail_alloc_consistent(struct device *dev, size_t size, static void *pcx_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag) dma_addr_t *dma_handle, gfp_t flag, struct dma_attrs *attrs)
{
return NULL;
}
static void *pa11_dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag)
{ {
void *addr; void *addr;
if (!dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs))
return NULL;
addr = (void *)__get_free_pages(flag, get_order(size)); addr = (void *)__get_free_pages(flag, get_order(size));
if (addr) if (addr)
*dma_handle = (dma_addr_t)virt_to_phys(addr); *dma_handle = (dma_addr_t)virt_to_phys(addr);
@ -578,24 +591,23 @@ static void *pa11_dma_alloc_noncoherent(struct device *dev, size_t size,
return addr; return addr;
} }
static void pa11_dma_free_noncoherent(struct device *dev, size_t size, static void pcx_dma_free(struct device *dev, size_t size, void *vaddr,
void *vaddr, dma_addr_t iova) dma_addr_t iova, struct dma_attrs *attrs)
{ {
free_pages((unsigned long)vaddr, get_order(size)); free_pages((unsigned long)vaddr, get_order(size));
return; return;
} }
struct hppa_dma_ops pcx_dma_ops = { struct dma_map_ops pcx_dma_ops = {
.dma_supported = pa11_dma_supported, .dma_supported = pa11_dma_supported,
.alloc_consistent = fail_alloc_consistent, .alloc = pcx_dma_alloc,
.alloc_noncoherent = pa11_dma_alloc_noncoherent, .free = pcx_dma_free,
.free_consistent = pa11_dma_free_noncoherent, .map_page = pa11_dma_map_page,
.map_single = pa11_dma_map_single, .unmap_page = pa11_dma_unmap_page,
.unmap_single = pa11_dma_unmap_single,
.map_sg = pa11_dma_map_sg, .map_sg = pa11_dma_map_sg,
.unmap_sg = pa11_dma_unmap_sg, .unmap_sg = pa11_dma_unmap_sg,
.dma_sync_single_for_cpu = pa11_dma_sync_single_for_cpu, .sync_single_for_cpu = pa11_dma_sync_single_for_cpu,
.dma_sync_single_for_device = pa11_dma_sync_single_for_device, .sync_single_for_device = pa11_dma_sync_single_for_device,
.dma_sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, .sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu,
.dma_sync_sg_for_device = pa11_dma_sync_sg_for_device, .sync_sg_for_device = pa11_dma_sync_sg_for_device,
}; };
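
pcx_dma_alloc() above deliberately fails unless DMA_ATTR_NON_CONSISTENT is set,
preserving the old fail_alloc_consistent() behaviour for coherent allocations
while still serving dma_alloc_noncoherent() users. A sketch of how the generic
dma_alloc_noncoherent() is expected to reach it in this series (treat the exact
form as an assumption rather than the verbatim header code):

static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
					  dma_addr_t *dma_handle, gfp_t gfp)
{
	DEFINE_DMA_ATTRS(attrs);

	/* without this attribute pcx_dma_alloc() would return NULL */
	dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
	return dma_alloc_attrs(dev, size, dma_handle, gfp, &attrs);
}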


@ -108,7 +108,6 @@ config PPC
select HAVE_ARCH_TRACEHOOK select HAVE_ARCH_TRACEHOOK
select HAVE_MEMBLOCK select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP select HAVE_MEMBLOCK_NODE_MAP
select HAVE_DMA_ATTRS
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select HAVE_OPROFILE select HAVE_OPROFILE
select HAVE_DEBUG_KMEMLEAK select HAVE_DEBUG_KMEMLEAK
@ -158,6 +157,7 @@ config PPC
select ARCH_HAS_DMA_SET_COHERENT_MASK select ARCH_HAS_DMA_SET_COHERENT_MASK
select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_DEVMEM_IS_ALLOWED
select HAVE_ARCH_SECCOMP_FILTER select HAVE_ARCH_SECCOMP_FILTER
select ARCH_HAS_UBSAN_SANITIZE_ALL
config GENERIC_CSUM config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN def_bool CPU_LITTLE_ENDIAN


@ -125,8 +125,6 @@ static inline void set_dma_offset(struct device *dev, dma_addr_t off)
#define HAVE_ARCH_DMA_SET_MASK 1 #define HAVE_ARCH_DMA_SET_MASK 1
extern int dma_set_mask(struct device *dev, u64 dma_mask); extern int dma_set_mask(struct device *dev, u64 dma_mask);
#include <asm-generic/dma-mapping-common.h>
extern int __dma_set_mask(struct device *dev, u64 dma_mask); extern int __dma_set_mask(struct device *dev, u64 dma_mask);
extern u64 __dma_get_required_mask(struct device *dev); extern u64 __dma_get_required_mask(struct device *dev);


@ -191,7 +191,7 @@ struct fadump_crash_info_header {
u64 elfcorehdr_addr; u64 elfcorehdr_addr;
u32 crashing_cpu; u32 crashing_cpu;
struct pt_regs regs; struct pt_regs regs;
struct cpumask cpu_online_mask; struct cpumask online_mask;
}; };
/* Crash memory ranges */ /* Crash memory ranges */


@ -136,12 +136,18 @@ endif
obj-$(CONFIG_EPAPR_PARAVIRT) += epapr_paravirt.o epapr_hcalls.o obj-$(CONFIG_EPAPR_PARAVIRT) += epapr_paravirt.o epapr_hcalls.o
obj-$(CONFIG_KVM_GUEST) += kvm.o kvm_emul.o obj-$(CONFIG_KVM_GUEST) += kvm.o kvm_emul.o
# Disable GCOV in odd or sensitive code # Disable GCOV & sanitizers in odd or sensitive code
GCOV_PROFILE_prom_init.o := n GCOV_PROFILE_prom_init.o := n
UBSAN_SANITIZE_prom_init.o := n
GCOV_PROFILE_ftrace.o := n GCOV_PROFILE_ftrace.o := n
UBSAN_SANITIZE_ftrace.o := n
GCOV_PROFILE_machine_kexec_64.o := n GCOV_PROFILE_machine_kexec_64.o := n
UBSAN_SANITIZE_machine_kexec_64.o := n
GCOV_PROFILE_machine_kexec_32.o := n GCOV_PROFILE_machine_kexec_32.o := n
UBSAN_SANITIZE_machine_kexec_32.o := n
GCOV_PROFILE_kprobes.o := n GCOV_PROFILE_kprobes.o := n
UBSAN_SANITIZE_kprobes.o := n
UBSAN_SANITIZE_vdso.o := n
extra-$(CONFIG_PPC_FPU) += fpu.o extra-$(CONFIG_PPC_FPU) += fpu.o
extra-$(CONFIG_ALTIVEC) += vector.o extra-$(CONFIG_ALTIVEC) += vector.o


@ -415,7 +415,7 @@ void crash_fadump(struct pt_regs *regs, const char *str)
else else
ppc_save_regs(&fdh->regs); ppc_save_regs(&fdh->regs);
fdh->cpu_online_mask = *cpu_online_mask; fdh->online_mask = *cpu_online_mask;
/* Call ibm,os-term rtas call to trigger firmware assisted dump */ /* Call ibm,os-term rtas call to trigger firmware assisted dump */
rtas_os_term((char *)str); rtas_os_term((char *)str);
@ -646,7 +646,7 @@ static int __init fadump_build_cpu_notes(const struct fadump_mem_struct *fdm)
} }
/* Lower 4 bytes of reg_value contains logical cpu id */ /* Lower 4 bytes of reg_value contains logical cpu id */
cpu = be64_to_cpu(reg_entry->reg_value) & FADUMP_CPU_ID_MASK; cpu = be64_to_cpu(reg_entry->reg_value) & FADUMP_CPU_ID_MASK;
if (fdh && !cpumask_test_cpu(cpu, &fdh->cpu_online_mask)) { if (fdh && !cpumask_test_cpu(cpu, &fdh->online_mask)) {
SKIP_TO_NEXT_CPU(reg_entry); SKIP_TO_NEXT_CPU(reg_entry);
continue; continue;
} }


@ -15,6 +15,7 @@ targets := $(obj-vdso32) vdso32.so vdso32.so.dbg
obj-vdso32 := $(addprefix $(obj)/, $(obj-vdso32)) obj-vdso32 := $(addprefix $(obj)/, $(obj-vdso32))
GCOV_PROFILE := n GCOV_PROFILE := n
UBSAN_SANITIZE := n
ccflags-y := -shared -fno-common -fno-builtin ccflags-y := -shared -fno-common -fno-builtin
ccflags-y += -nostdlib -Wl,-soname=linux-vdso32.so.1 \ ccflags-y += -nostdlib -Wl,-soname=linux-vdso32.so.1 \


@ -8,6 +8,7 @@ targets := $(obj-vdso64) vdso64.so vdso64.so.dbg
obj-vdso64 := $(addprefix $(obj)/, $(obj-vdso64)) obj-vdso64 := $(addprefix $(obj)/, $(obj-vdso64))
GCOV_PROFILE := n GCOV_PROFILE := n
UBSAN_SANITIZE := n
ccflags-y := -shared -fno-common -fno-builtin ccflags-y := -shared -fno-common -fno-builtin
ccflags-y += -nostdlib -Wl,-soname=linux-vdso64.so.1 \ ccflags-y += -nostdlib -Wl,-soname=linux-vdso64.so.1 \


@ -3,6 +3,7 @@
subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
GCOV_PROFILE := n GCOV_PROFILE := n
UBSAN_SANITIZE := n
ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)


@ -579,7 +579,6 @@ config QDIO
menuconfig PCI menuconfig PCI
bool "PCI support" bool "PCI support"
select HAVE_DMA_ATTRS
select PCI_MSI select PCI_MSI
select IOMMU_SUPPORT select IOMMU_SUPPORT
help help


@ -23,8 +23,6 @@ static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
{ {
} }
#include <asm-generic/dma-mapping-common.h>
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{ {
if (!dev->dma_mask) if (!dev->dma_mask)


@ -11,7 +11,6 @@ config SUPERH
select HAVE_GENERIC_DMA_COHERENT select HAVE_GENERIC_DMA_COHERENT
select HAVE_ARCH_TRACEHOOK select HAVE_ARCH_TRACEHOOK
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select HAVE_DMA_ATTRS
select HAVE_PERF_EVENTS select HAVE_PERF_EVENTS
select HAVE_DEBUG_BUGVERBOSE select HAVE_DEBUG_BUGVERBOSE
select ARCH_HAVE_CUSTOM_GPIO_H select ARCH_HAVE_CUSTOM_GPIO_H


@ -11,8 +11,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
#define DMA_ERROR_CODE 0 #define DMA_ERROR_CODE 0
#include <asm-generic/dma-mapping-common.h>
void dma_cache_sync(struct device *dev, void *vaddr, size_t size, void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir); enum dma_data_direction dir);


@ -26,7 +26,6 @@ config SPARC
select RTC_CLASS select RTC_CLASS
select RTC_DRV_M48T59 select RTC_DRV_M48T59
select RTC_SYSTOHC select RTC_SYSTOHC
select HAVE_DMA_ATTRS
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select HAVE_ARCH_JUMP_LABEL if SPARC64 select HAVE_ARCH_JUMP_LABEL if SPARC64
select GENERIC_IRQ_SHOW select GENERIC_IRQ_SHOW


@ -37,21 +37,4 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
return dma_ops; return dma_ops;
} }
#define HAVE_ARCH_DMA_SET_MASK 1
static inline int dma_set_mask(struct device *dev, u64 mask)
{
#ifdef CONFIG_PCI
if (dev->bus == &pci_bus_type) {
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EINVAL;
*dev->dma_mask = mask;
return 0;
}
#endif
return -EINVAL;
}
#include <asm-generic/dma-mapping-common.h>
#endif #endif


@ -5,7 +5,6 @@ config TILE
def_bool y def_bool y
select HAVE_PERF_EVENTS select HAVE_PERF_EVENTS
select USE_PMC if PERF_EVENTS select USE_PMC if PERF_EVENTS
select HAVE_DMA_ATTRS
select HAVE_DMA_API_DEBUG select HAVE_DMA_API_DEBUG
select HAVE_KVM if !TILEGX select HAVE_KVM if !TILEGX
select GENERIC_FIND_FIRST_BIT select GENERIC_FIND_FIRST_BIT


@ -73,37 +73,7 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
} }
#define HAVE_ARCH_DMA_SET_MASK 1 #define HAVE_ARCH_DMA_SET_MASK 1
int dma_set_mask(struct device *dev, u64 mask);
#include <asm-generic/dma-mapping-common.h>
static inline int
dma_set_mask(struct device *dev, u64 mask)
{
struct dma_map_ops *dma_ops = get_dma_ops(dev);
/*
* For PCI devices with 64-bit DMA addressing capability, promote
* the dma_ops to hybrid, with the consistent memory DMA space limited
* to 32-bit. For 32-bit capable devices, limit the streaming DMA
* address range to max_direct_dma_addr.
*/
if (dma_ops == gx_pci_dma_map_ops ||
dma_ops == gx_hybrid_pci_dma_map_ops ||
dma_ops == gx_legacy_pci_dma_map_ops) {
if (mask == DMA_BIT_MASK(64) &&
dma_ops == gx_legacy_pci_dma_map_ops)
set_dma_ops(dev, gx_hybrid_pci_dma_map_ops);
else if (mask > dev->archdata.max_direct_dma_addr)
mask = dev->archdata.max_direct_dma_addr;
}
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
}
/* /*
* dma_alloc_noncoherent() is #defined to return coherent memory, * dma_alloc_noncoherent() is #defined to return coherent memory,


@ -583,6 +583,35 @@ struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
EXPORT_SYMBOL(gx_legacy_pci_dma_map_ops); EXPORT_SYMBOL(gx_legacy_pci_dma_map_ops);
EXPORT_SYMBOL(gx_hybrid_pci_dma_map_ops); EXPORT_SYMBOL(gx_hybrid_pci_dma_map_ops);
int dma_set_mask(struct device *dev, u64 mask)
{
struct dma_map_ops *dma_ops = get_dma_ops(dev);
/*
* For PCI devices with 64-bit DMA addressing capability, promote
* the dma_ops to hybrid, with the consistent memory DMA space limited
* to 32-bit. For 32-bit capable devices, limit the streaming DMA
* address range to max_direct_dma_addr.
*/
if (dma_ops == gx_pci_dma_map_ops ||
dma_ops == gx_hybrid_pci_dma_map_ops ||
dma_ops == gx_legacy_pci_dma_map_ops) {
if (mask == DMA_BIT_MASK(64) &&
dma_ops == gx_legacy_pci_dma_map_ops)
set_dma_ops(dev, gx_hybrid_pci_dma_map_ops);
else if (mask > dev->archdata.max_direct_dma_addr)
mask = dev->archdata.max_direct_dma_addr;
}
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
}
EXPORT_SYMBOL(dma_set_mask);
#ifdef CONFIG_ARCH_HAS_DMA_SET_COHERENT_MASK #ifdef CONFIG_ARCH_HAS_DMA_SET_COHERENT_MASK
int dma_set_coherent_mask(struct device *dev, u64 mask) int dma_set_coherent_mask(struct device *dev, u64 mask)
{ {


@ -5,7 +5,6 @@ config UNICORE32
select ARCH_MIGHT_HAVE_PC_SERIO select ARCH_MIGHT_HAVE_PC_SERIO
select HAVE_MEMBLOCK select HAVE_MEMBLOCK
select HAVE_GENERIC_DMA_COHERENT select HAVE_GENERIC_DMA_COHERENT
select HAVE_DMA_ATTRS
select HAVE_KERNEL_GZIP select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2 select HAVE_KERNEL_BZIP2
select GENERIC_ATOMIC64 select GENERIC_ATOMIC64


@@ -28,8 +28,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
        return &swiotlb_dma_map_ops;
}
-#include <asm-generic/dma-mapping-common.h>
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{
        if (dev && dev->dma_mask)


@@ -31,6 +31,7 @@ config X86
        select ARCH_HAS_PMEM_API if X86_64
        select ARCH_HAS_MMIO_FLUSH
        select ARCH_HAS_SG_CHAIN
+        select ARCH_HAS_UBSAN_SANITIZE_ALL
        select ARCH_HAVE_NMI_SAFE_CMPXCHG
        select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
        select ARCH_MIGHT_HAVE_PC_PARPORT
@@ -99,7 +100,6 @@ config X86
        select HAVE_DEBUG_KMEMLEAK
        select HAVE_DEBUG_STACKOVERFLOW
        select HAVE_DMA_API_DEBUG
-        select HAVE_DMA_ATTRS
        select HAVE_DMA_CONTIGUOUS
        select HAVE_DYNAMIC_FTRACE
        select HAVE_DYNAMIC_FTRACE_WITH_REGS


@@ -60,6 +60,7 @@ clean-files += cpustr.h
KBUILD_CFLAGS := $(USERINCLUDE) $(REALMODE_CFLAGS) -D_SETUP
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
GCOV_PROFILE := n
+UBSAN_SANITIZE := n
$(obj)/bzImage: asflags-y := $(SVGA_MODE)


@@ -33,6 +33,7 @@ KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
GCOV_PROFILE := n
+UBSAN_SANITIZE :=n
LDFLAGS := -m elf_$(UTS_MACHINE)
LDFLAGS_vmlinux := -T


@@ -4,6 +4,7 @@
KBUILD_CFLAGS += $(DISABLE_LTO)
KASAN_SANITIZE := n
+UBSAN_SANITIZE := n
VDSO64-$(CONFIG_X86_64) := y
VDSOX32-$(CONFIG_X86_X32_ABI) := y


@@ -46,8 +46,6 @@ bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
#define HAVE_ARCH_DMA_SUPPORTED 1
extern int dma_supported(struct device *hwdev, u64 mask);
-#include <asm-generic/dma-mapping-common.h>
extern void *dma_generic_alloc_coherent(struct device *dev, size_t size,
                                        dma_addr_t *dma_addr, gfp_t flag,
                                        struct dma_attrs *attrs);


@@ -385,6 +385,7 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
        return image->fops->cleanup(image->image_loader_data);
}
+#ifdef CONFIG_KEXEC_VERIFY_SIG
int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel,
                                 unsigned long kernel_len)
{
@@ -395,6 +396,7 @@ int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel,
        return image->fops->verify_sig(kernel, kernel_len);
}
+#endif
/*
 * Apply purgatory relocations.


@@ -70,3 +70,4 @@ KBUILD_CFLAGS := $(LINUXINCLUDE) $(REALMODE_CFLAGS) -D_SETUP -D_WAKEUP \
                 -I$(srctree)/arch/x86/boot
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
GCOV_PROFILE := n
+UBSAN_SANITIZE := n


@@ -15,7 +15,6 @@ config XTENSA
        select GENERIC_PCI_IOMAP
        select GENERIC_SCHED_CLOCK
        select HAVE_DMA_API_DEBUG
-        select HAVE_DMA_ATTRS
        select HAVE_FUNCTION_TRACER
        select HAVE_FUTEX_CMPXCHG if !MMU
        select HAVE_IRQ_TIME_ACCOUNTING


@@ -13,8 +13,6 @@
#include <asm/cache.h>
#include <asm/io.h>
-#include <asm-generic/dma-coherent.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
@@ -30,8 +28,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
        return &xtensa_dma_map_ops;
}
-#include <asm-generic/dma-mapping-common.h>
void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                    enum dma_data_direction direction);


@@ -86,7 +86,6 @@
#define MADV_SEQUENTIAL 2       /* expect sequential page references */
#define MADV_WILLNEED   3       /* will need these pages */
#define MADV_DONTNEED   4       /* don't need these pages */
-#define MADV_FREE       5       /* free pages only if memory pressure */
/* common parameters: try to keep these consistent across architectures */
#define MADV_FREE       8       /* free pages only if memory pressure */


@@ -200,7 +200,7 @@ static const struct attribute_group *hotplugable_cpu_attr_groups[] = {
struct cpu_attr {
        struct device_attribute attr;
-        const struct cpumask *const * const map;
+        const struct cpumask *const map;
};
static ssize_t show_cpus_attr(struct device *dev,
@@ -209,7 +209,7 @@ static ssize_t show_cpus_attr(struct device *dev,
{
        struct cpu_attr *ca = container_of(attr, struct cpu_attr, attr);
-        return cpumap_print_to_pagebuf(true, buf, *ca->map);
+        return cpumap_print_to_pagebuf(true, buf, ca->map);
}
#define _CPU_ATTR(name, map) \
@@ -217,9 +217,9 @@ static ssize_t show_cpus_attr(struct device *dev,
/* Keep in sync with cpu_subsys_attrs */
static struct cpu_attr cpu_attrs[] = {
-        _CPU_ATTR(online, &cpu_online_mask),
+        _CPU_ATTR(online, &__cpu_online_mask),
-        _CPU_ATTR(possible, &cpu_possible_mask),
+        _CPU_ATTR(possible, &__cpu_possible_mask),
-        _CPU_ATTR(present, &cpu_present_mask),
+        _CPU_ATTR(present, &__cpu_present_mask),
};
/*
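The attribute now points directly at the exported struct cpumask objects (__cpu_online_mask and friends), so one level of indirection goes away. For reference, a small hypothetical sysfs show() callback (foo_online_cpus_show is an invented name) using the same cpumap_print_to_pagebuf() helper, which is unchanged by this diff:

#include <linux/cpumask.h>
#include <linux/device.h>

/* Hypothetical attribute: print the online CPUs as a range list, e.g. "0-3,6". */
static ssize_t foo_online_cpus_show(struct device *dev,
                                    struct device_attribute *attr, char *buf)
{
        /* 'true' selects list format ("0-3,6"); 'false' would give a hex mask. */
        return cpumap_print_to_pagebuf(true, buf, cpu_online_mask);
}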


@@ -12,7 +12,6 @@
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
-#include <asm-generic/dma-coherent.h>
/*
 * Managed DMA API
@@ -167,7 +166,7 @@ void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
}
EXPORT_SYMBOL(dmam_free_noncoherent);
-#ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
+#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
static void dmam_coherent_decl_release(struct device *dev, void *res)
{
@@ -247,7 +246,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
                    void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
        int ret = -ENXIO;
-#ifdef CONFIG_MMU
+#if defined(CONFIG_MMU) && !defined(CONFIG_ARCH_NO_COHERENT_DMA_MMAP)
        unsigned long user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
        unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
        unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
@@ -264,7 +263,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
                              user_count << PAGE_SHIFT,
                              vma->vm_page_prot);
}
-#endif /* CONFIG_MMU */
+#endif /* CONFIG_MMU && !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */
        return ret;
}


@@ -56,9 +56,7 @@ static u32 find_nvram_size(void __iomem *end)
static int nvram_find_and_copy(void __iomem *iobase, u32 lim)
{
        struct nvram_header __iomem *header;
-        int i;
        u32 off;
-        u32 *src, *dst;
        u32 size;
        if (nvram_len) {
@@ -95,10 +93,7 @@ static int nvram_find_and_copy(void __iomem *iobase, u32 lim)
        return -ENXIO;
found:
-        src = (u32 *)header;
+        __ioread32_copy(nvram_buf, header, sizeof(*header) / 4);
-        dst = (u32 *)nvram_buf;
-        for (i = 0; i < sizeof(struct nvram_header); i += 4)
-                *dst++ = __raw_readl(src++);
        header = (struct nvram_header *)nvram_buf;
        nvram_len = header->len;
        if (nvram_len > size) {
@@ -111,8 +106,8 @@ static int nvram_find_and_copy(void __iomem *iobase, u32 lim)
                nvram_len = NVRAM_SPACE - 1;
        }
        /* proceed reading data after header */
-        for (; i < nvram_len; i += 4)
+        __ioread32_copy(nvram_buf + sizeof(*header), header + 1,
-                *dst++ = readl(src++);
+                        DIV_ROUND_UP(nvram_len, 4));
        nvram_buf[NVRAM_SPACE - 1] = '\0';
        return 0;
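The __ioread32_copy() helper used here copies a count of 32-bit words, not bytes, from MMIO space into normal memory, hence the DIV_ROUND_UP(nvram_len, 4) above. A rough sketch of the behaviour it replaces, assuming the generic word-copy implementation (ioread32_copy_sketch is an invented name, not the kernel's code):

#include <linux/io.h>
#include <linux/types.h>

/* Rough equivalent of __ioread32_copy(to, from, count); count is in 32-bit words. */
static void ioread32_copy_sketch(void *to, const void __iomem *from, size_t count)
{
        u32 *dst = to;
        const u32 __iomem *src = from;
        size_t i;

        for (i = 0; i < count; i++)
                dst[i] = __raw_readl(src + i);
}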


@@ -22,6 +22,7 @@ KBUILD_CFLAGS := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
GCOV_PROFILE := n
KASAN_SANITIZE := n
+UBSAN_SANITIZE := n
lib-y := efi-stub-helper.o


@@ -82,13 +82,13 @@ config DRM_TTM
config DRM_GEM_CMA_HELPER
        bool
-        depends on DRM && HAVE_DMA_ATTRS
+        depends on DRM
        help
          Choose this if you need the GEM CMA helper functions
config DRM_KMS_CMA_HELPER
        bool
-        depends on DRM && HAVE_DMA_ATTRS
+        depends on DRM
        select DRM_GEM_CMA_HELPER
        select DRM_KMS_FB_HELPER
        select FB_SYS_FILLRECT


@@ -5,7 +5,7 @@ config DRM_IMX
        select VIDEOMODE_HELPERS
        select DRM_GEM_CMA_HELPER
        select DRM_KMS_CMA_HELPER
-        depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM) && HAVE_DMA_ATTRS
+        depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM)
        depends on IMX_IPUV3_CORE
        help
          enable i.MX graphics support


@@ -1,6 +1,6 @@
config DRM_RCAR_DU
        tristate "DRM Support for R-Car Display Unit"
-        depends on DRM && ARM && HAVE_DMA_ATTRS && OF
+        depends on DRM && ARM && OF
        depends on ARCH_SHMOBILE || COMPILE_TEST
        select DRM_KMS_HELPER
        select DRM_KMS_CMA_HELPER


@@ -1,6 +1,6 @@
config DRM_SHMOBILE
        tristate "DRM Support for SH Mobile"
-        depends on DRM && ARM && HAVE_DMA_ATTRS
+        depends on DRM && ARM
        depends on ARCH_SHMOBILE || COMPILE_TEST
        depends on FB_SH_MOBILE_MERAM || !FB_SH_MOBILE_MERAM
        select BACKLIGHT_CLASS_DEVICE


@@ -1,6 +1,6 @@
config DRM_STI
        tristate "DRM Support for STMicroelectronics SoC stiH41x Series"
-        depends on DRM && (SOC_STIH415 || SOC_STIH416 || ARCH_MULTIPLATFORM) && HAVE_DMA_ATTRS
+        depends on DRM && (SOC_STIH415 || SOC_STIH416 || ARCH_MULTIPLATFORM)
        select RESET_CONTROLLER
        select DRM_KMS_HELPER
        select DRM_GEM_CMA_HELPER

Some files were not shown because too many files have changed in this diff.