The CCP driver generally uses a round-robin approach when
assigning operations to available CCPs. For the DMA engine,
however, the DMA mappings of the SGs are associated with a
specific CCP. When an IOMMU is enabled, the IOMMU is
programmed based on this specific device.
If the DMA operations are not performed by that specific
CCP, then addressing errors and I/O page faults will occur.
Update the CCP driver to allow a specific CCP device to be
requested for an operation and use this in the DMA engine
support.
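A minimal sketch of the intended flow, with the per-command target field and
helper name treated as illustrative rather than the driver's exact API: the
DMA engine records which CCP its scatterlists were mapped against and hands
that device back when queuing the command, so the command is not
round-robined onto a different CCP than the one the IOMMU was programmed for.

  /* Sketch only: cmd->ccp and the helper name are assumptions. */
  static int ccp_issue_on_mapped_ccp(struct ccp_cmd *cmd,
                                     struct ccp_device *mapped_ccp)
  {
          cmd->ccp = mapped_ccp;        /* CCP whose device did the dma_map_*() */
          return ccp_enqueue_cmd(cmd);  /* existing submission entry point */
  }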
Cc: <stable@vger.kernel.org> # 4.9.x-
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
- self-test failure of crc32c on powerpc
- regressions of ecb(aes) when used with xts/lrw in s5p-sss
- a number of bugs in the omap RNG driver
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: s5p-sss - Fix spinlock recursion on LRW(AES)
hwrng: omap - Do not access INTMASK_REG on EIP76
hwrng: omap - use devm_clk_get() instead of of_clk_get()
hwrng: omap - write registers after enabling the clock
crypto: s5p-sss - Fix completing crypto request in IRQ handler
crypto: powerpc - Fix initialisation of crc32c context
Fix typos and add the following to scripts/spelling.txt:
disble||disable
disbled||disabled
I kept TSL2563_INT_DISBLED in drivers/iio/light/tsl2563.c
untouched. The macro is not referenced at all, but this commit
touches only comment blocks, just in case.
Link: http://lkml.kernel.org/r/1481573103-11329-20-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Constify the buffer passed to crypto_kpp_set_secret() and
kpp_alg.set_secret, since it is never modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
trivial fix to spelling mistake in pr_err message
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Steve Lin <steven.lin1@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add statistics for compression/decompression hardware offload
under debugfs.
Signed-off-by: Mahipal Challa <Mahipal.Challa@cavium.com>
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This contains changes for adding compression/decompression h/w offload
functionality for both DEFLATE and LZS.
Signed-off-by: Mahipal Challa <Mahipal.Challa@cavium.com>
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a driver for the ZIP engine found on Cavium ThunderX SoCs.
The ZIP engine supports hardware-accelerated compression and
decompression. It includes two independent ZIP cores and supports:
- DEFLATE compression and decompression (RFC 1951)
- LZS compression and decompression (RFC 2395 and ANSI X3.241-1994)
- ADLER32 and CRC32 checksums for ZLIB (RFC 1950) and GZIP (RFC 1952)
The ZIP engine is presented as a PCI device. It supports DMA and
scatter-gather.
Signed-off-by: Mahipal Challa <Mahipal.Challa@cavium.com>
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Running TCRYPT with LRW compiled causes spinlock recursion:
testing speed of async lrw(aes) (lrw(ecb-aes-s5p)) encryption
tcrypt: test 0 (256 bit key, 16 byte blocks): 19007 operations in 1 seconds (304112 bytes)
tcrypt: test 1 (256 bit key, 64 byte blocks): 15753 operations in 1 seconds (1008192 bytes)
tcrypt: test 2 (256 bit key, 256 byte blocks): 14293 operations in 1 seconds (3659008 bytes)
tcrypt: test 3 (256 bit key, 1024 byte blocks): 11906 operations in 1 seconds (12191744 bytes)
tcrypt: test 4 (256 bit key, 8192 byte blocks):
BUG: spinlock recursion on CPU#1, irq/84-10830000/89
lock: 0xeea99a68, .magic: dead4ead, .owner: irq/84-10830000/89, .owner_cpu: 1
CPU: 1 PID: 89 Comm: irq/84-10830000 Not tainted 4.11.0-rc1-00001-g897ca6d0800d #559
Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
[<c010e1ec>] (unwind_backtrace) from [<c010ae1c>] (show_stack+0x10/0x14)
[<c010ae1c>] (show_stack) from [<c03449c0>] (dump_stack+0x78/0x8c)
[<c03449c0>] (dump_stack) from [<c015de68>] (do_raw_spin_lock+0x11c/0x120)
[<c015de68>] (do_raw_spin_lock) from [<c0720110>] (_raw_spin_lock_irqsave+0x20/0x28)
[<c0720110>] (_raw_spin_lock_irqsave) from [<c0572ca0>] (s5p_aes_crypt+0x2c/0xb4)
[<c0572ca0>] (s5p_aes_crypt) from [<bf1d8aa4>] (do_encrypt+0x78/0xb0 [lrw])
[<bf1d8aa4>] (do_encrypt [lrw]) from [<bf1d8b00>] (encrypt_done+0x24/0x54 [lrw])
[<bf1d8b00>] (encrypt_done [lrw]) from [<c05732a0>] (s5p_aes_complete+0x60/0xcc)
[<c05732a0>] (s5p_aes_complete) from [<c0573440>] (s5p_aes_interrupt+0x134/0x1a0)
[<c0573440>] (s5p_aes_interrupt) from [<c01667c4>] (irq_thread_fn+0x1c/0x54)
[<c01667c4>] (irq_thread_fn) from [<c0166a98>] (irq_thread+0x12c/0x1e0)
[<c0166a98>] (irq_thread) from [<c0136a28>] (kthread+0x108/0x138)
[<c0136a28>] (kthread) from [<c0107778>] (ret_from_fork+0x14/0x3c)
The interrupt handling routine was calling req->base.complete() under
a spinlock. In most cases this wasn't fatal, but when combined with some
of the cipher modes (like LRW) it caused recursion - starting a new
encryption (s5p_aes_crypt()) while still holding the spinlock from the
previous round (s5p_aes_complete()).
Besides that, the s5p_aes_interrupt() error handling path could execute
two completions in case of error for the RX and TX blocks.
Rewrite the interrupt handling routine and the completion by:
1. Splitting the scatterlist copy operations out of
s5p_aes_complete() into a separate s5p_sg_done(). This still has to be
done under the lock.
s5p_aes_complete() now only calls req->base.complete(), and it has
to be called outside of the lock.
2. Moving s5p_aes_complete() out of the spinlock critical sections.
In the interrupt service routine s5p_aes_interrupt(), it appeared in a few
places, including error paths inside other functions called from the ISR.
This code was not obvious to read, so simplify it by calling
s5p_aes_complete() only at the ISR level.
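A minimal sketch of the resulting ISR shape (locking only; the real handler
does considerably more, and the helper/field names follow the description
above rather than the exact source):

  static irqreturn_t s5p_aes_interrupt_sketch(int irq, void *dev_id)
  {
          struct s5p_aes_dev *dev = dev_id;
          unsigned long flags;
          int err = 0;

          spin_lock_irqsave(&dev->lock, flags);
          s5p_sg_done(dev);               /* scatterlist copies stay under the lock */
          spin_unlock_irqrestore(&dev->lock, flags);

          /* completion runs unlocked, so a re-entrant s5p_aes_crypt() from
           * req->base.complete() (e.g. LRW chaining) cannot recurse on the lock */
          s5p_aes_complete(dev, err);

          return IRQ_HANDLED;
  }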
Reported-by: Nathan Royce <nroycea+kernel@gmail.com>
Cc: <stable@vger.kernel.org> # v4.10.x: 07de4bc88c crypto: s5p-sss - Fix completing
Cc: <stable@vger.kernel.org> # v4.10.x
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the regular interrupt handler, the driver was finishing the encrypt/decrypt
request by calling complete() on the crypto request. This has been disallowed
since the conversion to skcipher in commit b286d8b1a6 ("crypto: skcipher - Add
skcipher walk interface") and causes a warning:
WARNING: CPU: 0 PID: 0 at crypto/skcipher.c:430 skcipher_walk_first+0x13c/0x14c
The interrupt is marked shared but in fact there are no other users
sharing it. Thus the simplest solution seems to be to just use a
threaded interrupt handler, after converting it to oneshot.
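The conversion follows the standard threaded-IRQ pattern; a sketch, assuming
the usual probe-time variables (not copied verbatim from the driver):

  /* No hard-IRQ handler: with IRQF_ONESHOT the core keeps the line masked
   * until s5p_aes_interrupt() finishes in thread context, where calling
   * req->base.complete() is allowed. */
  err = devm_request_threaded_irq(dev, irq, NULL, s5p_aes_interrupt,
                                  IRQF_ONESHOT, pdev->name, pdata);
  if (err < 0)
          return err;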
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
- vmalloc stack regression in CCM
- Build problem in CRC32 on ARM
- Memory leak in cavium
- Missing Kconfig dependencies in atmel and mediatek
- XTS Regression on some platforms (s390 and ppc)
- Memory overrun in CCM test vector
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: vmx - Use skcipher for xts fallback
crypto: vmx - Use skcipher for cbc fallback
crypto: testmgr - Pad aes_ccm_enc_tv_template vector
crypto: arm/crc32 - add build time test for CRC instruction support
crypto: arm/crc32 - fix build error with outdated binutils
crypto: ccm - move cbcmac input off the stack
crypto: xts - Propagate NEED_FALLBACK bit
crypto: api - Add crypto_requires_off helper
crypto: atmel - CRYPTO_DEV_MEDIATEK should depend on HAS_DMA
crypto: atmel - CRYPTO_DEV_ATMEL_TDES and CRYPTO_DEV_ATMEL_SHA should depend on HAS_DMA
crypto: cavium - fix leak on curr if curr->head fails to be allocated
crypto: cavium - Fix couple of static checker errors
Looks like a quiet cycle for vhost/virtio, just a couple of minor
tweaks. Most notable is automatic interrupt affinity for blk and scsi.
Hopefully other devices are not far behind.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
Pull vhost updates from Michael Tsirkin:
"virtio, vhost: optimizations, fixes
Looks like a quiet cycle for vhost/virtio, just a couple of minor
tweaks. Most notable is automatic interrupt affinity for blk and scsi.
Hopefully other devices are not far behind"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
virtio-console: avoid DMA from stack
vhost: introduce O(1) vq metadata cache
virtio_scsi: use virtio IRQ affinity
virtio_blk: use virtio IRQ affinity
blk-mq: provide a default queue mapping for virtio device
virtio: provide a method to get the IRQ affinity mask for a virtqueue
virtio: allow drivers to request IRQ affinity when creating VQs
virtio_pci: simplify MSI-X setup
virtio_pci: don't duplicate the msix_enable flag in struct pci_dev
virtio_pci: use shared interrupts for virtqueues
virtio_pci: remove struct virtio_pci_vq_info
vhost: try avoiding avail index access when getting descriptor
virtio_mmio: expose header to userspace
Pull more s390 updates from Martin Schwidefsky:
"Next to the usual bug fixes (including the TASK_SIZE fix), there is
one larger crypto item. It allows to use protected keys with the
in-kernel crypto API
The protected key support has two parts, the pkey user space API to
convert key formats and the paes crypto module that uses a protected
key instead of a standard AES key"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390: TASK_SIZE for kernel threads
s390/crypt: Add protected key AES module
s390/dasd: fix spelling mistake: "supportet" -> "supported"
s390/pkey: Introduce pkey kernel module
s390/zcrypt: export additional symbols
s390/zcrypt: Rework CONFIG_ZCRYPT Kconfig text.
s390/zcrypt: Cleanup leftover module code.
s390/nmi: purge tlbs after control register validation
s390/nmi: fix order of register validation
s390/crypto: Add PCKMO inline function
s390/zcrypt: Enable request count reset for cards and queues.
s390/mm: use _SEGMENT_ENTRY_EMPTY in the code
s390/chsc: Add exception handler for CHSC instruction
s390: opt into HAVE_COPY_THREAD_TLS
s390: restore address space when returning to user space
s390: rename CIF_ASCE to CIF_ASCE_PRIMARY
Fix typos and add the following to scripts/spelling.txt:
deintializing||deinitializing
deintialize||deinitialize
deintialized||deinitialized
Link: http://lkml.kernel.org/r/1481573103-11329-28-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a struct irq_affinity pointer to the find_vqs methods, which, if set,
is used to tell the PCI layer to create the MSI-X vectors for our I/O
virtqueues with the proper affinity from the start. Compared to
after-the-fact affinity hints this gives us an instantly working setup and
allows allocating the irq descriptors node-local and avoiding interconnect
traffic. Last but not least, this will allow blk-mq queues to be created
based on the interrupt affinity for storage drivers.
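A sketch of the driver side, assuming the virtio_find_vqs() convenience
wrapper that takes the new descriptor (local names are illustrative):

  struct irq_affinity desc = { 0, };      /* pre_vectors = 0: spread all I/O vectors */
  int err;

  /* the descriptor is forwarded down to the transport, which can create
   * the MSI-X vectors with the right CPU affinity from the start */
  err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, &desc);
  if (err)
          return err;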
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
If NO_DMA=y:
ERROR: "bad_dma_ops" [drivers/crypto/mediatek/mtk-crypto.ko] undefined!
Add a dependency on HAS_DMA to fix this.
Fixes: 7dee9f6187 ("crypto: mediatek - remove ARM dependencies")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The exit path when curr->head cannot be allocated fails to kfree the
earlier allocated curr. Fix this by kfree'ing it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the following smatch errors
cptvf_reqmanager.c:333 do_post_process() warn: variable dereferenced
before check 'cptvf'
cptvf_main.c:825 cptvf_remove() error: we previously assumed 'cptvf'
could be null
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: George Cherian <george.cherian@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These are the current source files that should not have
executable attributes set.
[ Normally this would be sent through Andrew Morton's tree
but his quilt tools don't like permission only patches. ]
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch introduces a new in-kernel crypto blockcipher
called 'paes' which implements AES with protected keys.
The paes blockcipher can be used like the aes
blockcipher but uses secure key material to derive the
working protected key, so it offers an encryption
implementation in which a clear key value is never
exposed in memory.
The paes module is only available on the s390 platform
and requires minimal hardware support in the form of
CPACF enabled with at least MSA level 3. Upon module
initialization these requirements are checked.
Includes additional contribution from Harald Freudenberger.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
pci_enable_msix() has long been deprecated, but this driver adds a new
instance. Convert it to pci_alloc_irq_vectors(), greatly simplify
the code, and make sure the probe code properly unwinds.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
pci_enable_msix() has long been deprecated, but this driver adds a new
instance. Convert it to pci_alloc_irq_vectors() and greatly simplify
the code.
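The general shape of such a conversion, as a generic sketch rather than this
driver's exact code (the vector count and request details are illustrative):

  #include <linux/interrupt.h>
  #include <linux/pci.h>

  #define NR_VECS 2       /* illustrative vector count */

  static int setup_irqs_sketch(struct pci_dev *pdev, irq_handler_t handler,
                               void *data)
  {
          int nvec;

          /* replaces pci_enable_msix() and the struct msix_entry array */
          nvec = pci_alloc_irq_vectors(pdev, 1, NR_VECS, PCI_IRQ_MSIX);
          if (nvec < 0)
                  return nvec;

          /* the vector-to-Linux-IRQ mapping is owned by the PCI core now */
          return devm_request_irq(&pdev->dev, pci_irq_vector(pdev, 0),
                                  handler, 0, "dev-sketch", data);
  }

On teardown, pci_free_irq_vectors(pdev) releases whatever was allocated.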
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces a new kernel module, pkey, which provides
protected key handling and management functions. The pkey API is
available within the kernel for other s390-specific code to create
and manage protected keys. Additionally, the functions are exported
to user space via IOCTL calls. The implementation makes extensive
use of functions provided by the zcrypt device driver. Generating
protected keys from secure keys also requires a CEX
coprocessor card.
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The CONFIG_ZCRYPT Kconfig entry in drivers/crypto listed
outdated hardware whereas the latest cards were missing.
Rework the text to reflect the current abilities of the
zcrypt device driver.
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
In the Broadcom SPU driver, in the case where an incremental hash
is done in software in ahash_finup(), tmpbuf was freed
twice.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Rob Rice <rob.rice@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The driver fails to build if MSI support is disabled:
In file included from /git/arm-soc/drivers/crypto/cavium/cpt/cptpf_main.c:18:0:
drivers/crypto/cavium/cpt/cptpf.h:57:20: error: array type has incomplete element type 'struct msix_entry'
struct msix_entry msix_entries[CPT_PF_MSIX_VECTORS];
^~~~~~~~~~~~
drivers/crypto/cavium/cpt/cptpf_main.c: In function 'cpt_enable_msix':
drivers/crypto/cavium/cpt/cptpf_main.c:344:8: error: implicit declaration of function 'pci_enable_msix'; did you mean 'cpt_enable_msix'? [-Werror=implicit-function-declaration]
On the other hand, it doesn't seem to have any build dependency on ARCH_THUNDER,
so let's allow compile-testing to catch this kind of problem more easily.
The 64-bit dependency is needed for the use of readq/writeq.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: David Daney <david.daney@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cpt_bind_vq_to_grp() could return an error code. However, it currently
returns a u8. This produces the static checker warning:
drivers/crypto/cavium/cpt/cptpf_mbox.c:70 cpt_bind_vq_to_grp() warn: signedness bug returning '(-22)'
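The fix amounts to widening the return type so negative error codes survive;
a reduced sketch (the structure name and bound check are illustrative):

  /* before: returning -EINVAL through a u8 silently became 234 */
  static int cpt_bind_vq_to_grp_sketch(struct cpt_device *cpt, u8 q, u8 grp)
  {
          if (grp >= CPT_MAX_VF_NUM)      /* illustrative bound check */
                  return -EINVAL;         /* preserved as a negative value now */

          return grp;                     /* valid group ids stay small and non-negative */
  }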
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: George Cherian <george.cherian@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If we register the DMA API debug notification chain to
receive platform bus events:
dma_debug_add_bus(&platform_bus_type);
we start receiving warnings after a simple test like "modprobe caam_jr &&
modprobe caamhash && modprobe -r caamhash && modprobe -r caam_jr":
platform ffe301000.jr: DMA-API: device driver has pending DMA allocations while released from device [count=1938]
One of leaked entries details: [device address=0x0000000173fda090] [size=63 bytes] [mapped with DMA_TO_DEVICE] [mapped as single]
It turns out there are several issues with the handling of buf_dma (the mapping
of the buffer holding the previous chunk, smaller than the hash block size):
-detection of buf_dma mapping failure occurs too late, after a job descriptor
using that value has been submitted for execution
-DMA mapping leak - unmapping is not performed in all places, e.g.
in ahash_export or in most ahash_fin* callbacks (due to the current back-to-back
implementation of buf_dma unmapping/mapping)
Fix these by:
-calling dma_mapping_error() on buf_dma right after the mapping and providing
an error code if needed
-unmapping buf_dma during the "job done" (ahash_done_*) callbacks
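The first part of the fix is the standard DMA API pattern of checking the
mapping immediately; roughly (a sketch, with caamhash-style names used for
illustration):

  state->buf_dma = dma_map_single(jrdev, buf, buflen, DMA_TO_DEVICE);
  if (dma_mapping_error(jrdev, state->buf_dma)) {
          dev_err(jrdev, "unable to map buf\n");
          state->buf_dma = 0;
          return -ENOMEM;         /* fail before a job descriptor uses the address */
  }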
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
caamhash uses double buffering for holding previous/current
and next chunks (data smaller than block size) to be hashed.
Add (inline) functions to abstract this mechanism.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case the ctx_dma DMA mapping fails, ahash_unmap_ctx() tries to
DMA unmap an invalid address:
map_seq_out_ptr_ctx() / ctx_map_to_sec4_sg() -> goto unmap_ctx ->
-> ahash_unmap_ctx() -> dma unmap ctx_dma
It is also possible to reach ahash_unmap_ctx() with ctx_dma
uninitialized, or to try to unmap the same address twice.
Fix these by setting ctx_dma = 0 where needed:
-initialize ctx_dma in ahash_init()
-clear ctx_dma in case of mapping error (instead of holding
the error code returned by the dma map function)
-clear ctx_dma after each unmapping
Fixes: 32686d34f8 ("crypto: caam - ensure that we clean up after an error")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The setkey() callback may be invoked multiple times for the same tfm.
In this case, DMA API leaks are caused by the shared descriptors
(and the key, for caamalg) being mapped several times and unmapped only once.
Fix this by performing the mapping / unmapping only in the crypto algorithm's
cra_init() / cra_exit() callbacks, and doing a sync_for_device in the setkey()
tfm callback.
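A condensed sketch of the pattern, with caam-style field names and the
descriptor size constant used purely for illustration:

  static int caam_cra_init_sketch(struct caam_ctx *ctx)
  {
          /* map once for the lifetime of the tfm; unmapped in cra_exit() */
          ctx->sh_desc_enc_dma = dma_map_single(ctx->jrdev, ctx->sh_desc_enc,
                                                DESC_BYTES, DMA_TO_DEVICE);
          return dma_mapping_error(ctx->jrdev, ctx->sh_desc_enc_dma) ? -ENOMEM : 0;
  }

  static void caam_setkey_sync_sketch(struct caam_ctx *ctx)
  {
          /* descriptor contents changed: push them to the device, no remap */
          dma_sync_single_for_device(ctx->jrdev, ctx->sh_desc_enc_dma,
                                     DESC_BYTES, DMA_TO_DEVICE);
  }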
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shared descriptors for hash algorithms are small enough
for (split) keys to be inlined in all cases.
Since the driver already does this, all that's left is to remove the
unused ctx->key_dma.
Fixes: 045e36780f ("crypto: caam - ahash hmac support")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
dma_map_sg() might coalesce S/G entries, so use the number of S/G
entries returned by it instead of what sg_nents_for_len() initially
returns.
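In other words, the mapped count rather than the pre-mapping count drives the
HW S/G table; a generic sketch:

  mapped_nents = dma_map_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
  if (!mapped_nents) {
          dev_err(jrdev, "unable to map source S/G\n");
          return -ENOMEM;
  }
  /* dma_map_sg() may have coalesced adjacent entries, so the HW S/G table
   * must be sized and filled from mapped_nents, not from src_nents */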
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace the internal sg_count() function and the convoluted logic
around it with the standard sg_nents_for_len() function.
src_nents and dst_nents now hold the number of SW S/G entries,
instead of the HW S/G table entries.
With this change, null (zero-length) input data for the AEAD case
needs to be handled in a visible way: req->src is no longer
(un)mapped, and the pointer address is set to 0 in the SEQ IN PTR command.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sg_count() internally calls sg_nents_for_len(), which could fail
in case the required number of bytes is larger than the total
bytes in the S/G.
Thus, add checks to validate the input.
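sg_nents_for_len() returns a negative errno when the S/G list holds fewer
bytes than requested, so the callers now check for that; a sketch:

  src_nents = sg_nents_for_len(req->src, req->assoclen + req->cryptlen);
  if (src_nents < 0) {
          dev_err(jrdev, "insufficient bytes (%d) in src S/G\n", src_nents);
          return src_nents;       /* propagate the error instead of using garbage */
  }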
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
HW S/G generation does not work properly when the following conditions
are met:
-src == dst
-src/dst is S/G
-IV is right before (contiguous with) the first src/dst S/G entry
since "iv_contig" is set to true (iv_contig is a misnomer here and
it actually refers to the whole output being contiguous)
Fix this by setting the dst S/G nents equal to the src S/G nents, instead of
leaving it set to its initial value (0).
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If one of the JRs failed at init, the next JR used
the failed JR's IO space. The patch fixes this bug.
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Setting the dma mask could fail, thus make sure it succeeds
before going further.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
intern.h, jr.h are not needed in error.c
error.h is not needed in ctrl.c
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The reverse-get/set functions can be simplified by
eliminating unused code.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the command queue tail pointer when an error is
detected. Always return the error.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CCP initialization messages only need to be sent to
syslog in debug mode.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch clarifies and fixes how errors should be handled by
atmel_sha_start().
For update operations, the previous code wrongly assumed that
(err != -EINPROGRESS) implies (err == 0). This is wrong because it doesn't
take the error cases (err < 0) into account.
This patch also adds many comments to detail all the possible returned
values and what should be done in each case.
In particular, when an error occurs, since atmel_sha_complete() has already
been called (hence releasing the hardware), atmel_sha_start() must not call
atmel_sha_finish_req() later; otherwise atmel_sha_complete() would be
called a second time.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes a previous patch: "crypto: atmel-sha - update request
queue management to make it more generic".
Indeed, the patch above should have replaced the "return -EINVAL;" lines with
"return atmel_sha_complete(dd, -EINVAL);" but instead replaced them with a
simple call to "atmel_sha_complete(dd, -EINVAL);".
Hence all the "return" statements were missing.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ensure that the size field is correctly populated for
all AES modes.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add Broadcom Secure Processing Unit (SPU) crypto driver for SPU
hardware crypto offload. The driver supports ablkcipher, ahash,
and aead symmetric crypto operations.
Signed-off-by: Steve Lin <steven.lin1@broadcom.com>
Signed-off-by: Rob Rice <rob.rice@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the CPT options in the crypto Kconfig and update the
crypto Makefile.
Update the MAINTAINERS file too.
Signed-off-by: George Cherian <george.cherian@cavium.com>
Reviewed-by: David Daney <david.daney@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable the CPT VF driver. CPT is the Cryptographic Acceleration Unit
in the Octeon-tx series of processors.
Signed-off-by: George Cherian <george.cherian@cavium.com>
Reviewed-by: David Daney <david.daney@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable the Physical Function driver for the Cavium Crypto Engine (CPT)
found in the Octeon-tx series of SoCs. CPT is the Cryptographic Acceleration
Unit. CPT includes microcoded GigaCypher symmetric engines (SEs) and
asymmetric engines (AEs).
Signed-off-by: George Cherian <george.cherian@cavium.com>
Reviewed-by: David Daney <david.daney@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we enable COMPILE_TEST building for the Atmel sha and tdes implementations,
we run into a couple of warnings about incorrect format strings, e.g.
In file included from include/linux/platform_device.h:14:0,
from drivers/crypto/atmel-sha.c:24:
drivers/crypto/atmel-sha.c: In function 'atmel_sha_xmit_cpu':
drivers/crypto/atmel-sha.c:571:19: error: format '%d' expects argument of type 'int', but argument 6 has type 'size_t {aka long unsigned int}' [-Werror=format=]
In file included from include/linux/printk.h:6:0,
from include/linux/kernel.h:13,
from drivers/crypto/atmel-tdes.c:17:
drivers/crypto/atmel-tdes.c: In function 'atmel_tdes_crypt_dma_stop':
include/linux/kern_levels.h:4:18: error: format '%u' expects argument of type 'unsigned int', but argument 2 has type 'size_t {aka long unsigned int}' [-Werror=format=]
These are all fixed by using the "%z" modifier for size_t data.
There are also a few uses of min()/max() with incompatible types:
drivers/crypto/atmel-tdes.c: In function 'atmel_tdes_crypt_start':
drivers/crypto/atmel-tdes.c:528:181: error: comparison of distinct pointer types lacks a cast [-Werror]
Where possible, we should use consistent types here; otherwise we can use
min_t()/max_t() to get well-defined behavior without a warning.
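For reference, the two idioms look like this (illustrative lines, not the
driver's exact statements):

  size_t count = min_t(size_t, len, ctx->buflen - ctx->bufcnt);

  /* the 'z' length modifier matches size_t on both 32-bit and 64-bit */
  dev_dbg(dd->dev, "copied %zu of %zu bytes\n", count, len);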
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the new authenc support, we get a harmless Kconfig warning:
warning: (CRYPTO_DEV_ATMEL_AUTHENC) selects CRYPTO_DEV_ATMEL_SHA which has unmet direct dependencies (CRYPTO && CRYPTO_HW && ARCH_AT91)
The problem is that each of the options has slightly different dependencies,
although they all seem to want the same thing: allow building for real AT91
targets that actually have the hardware, and possibly for compile testing.
This makes all four options consistent: instead of depending on a particular
dmaengine implementation, we depend on the ARM platform, with CONFIG_COMPILE_TEST
as an alternative when that is turned off. This makes the 'select' statements
work correctly.
Fixes: 89a82ef87e ("crypto: atmel-authenc - add support to authenc(hmac(shaX), Y(aes)) modes")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
- use-after-free in algif_aead
- modular aesni regression when pcbc is modular but absent
- bug causing IO page faults in ccp
- double list add in ccp
- NULL pointer dereference in qat (two patches)
- panic in chcr
- NULL pointer dereference in chcr
- out-of-bound access in chcr
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: chcr - Fix key length for RFC4106
crypto: algif_aead - Fix kernel panic on list_del
crypto: aesni - Fix failure when pcbc module is absent
crypto: ccp - Fix double add when creating new DMA command
crypto: ccp - Fix DMA operations when IOMMU is enabled
crypto: chcr - Check device is allocated before use
crypto: chcr - Fix panic on dma_unmap_sg
crypto: qat - zero esram only for DH85x devices
crypto: qat - fix bar discovery for c62x
Typecast the pointer with the correct structure.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A single block of encryption can be done with aes-generic; there is no need
for cbc(aes). This patch replaces cbc(aes-generic) with aes-generic.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The first argument to list_for_each_entry cannot be NULL.
Generated by: scripts/coccinelle/iterators/itnull.cocci
Signed-off-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Assign a flowc id to each outgoing request. Firmware uses the flowc id
to schedule each request onto HW. The FW reply may be missed without this change.
Reviewed-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When VERBOSE_DEBUG is defined and SHA_FLAGS_DUMP_REG flag is set in
dd->flags, this patch prints the register names and values when performing
IO accesses.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows combining the AES and SHA hardware accelerators on
some Atmel SoCs. Doing so, AES blocks are only written to/read from the
AES hardware. Those blocks are also transferred from the AES to the SHA
accelerator internally, without additional accesses to the system busses.
Hence, the AES and SHA accelerators work in parallel to process all the
data blocks, instead of serializing the process by (de)crypting those
blocks first and then authenticating them afterwards, as the generic
crypto/authenc.c driver does.
Of course, both the AES and SHA hardware accelerators need to be available
before we can start to process the data blocks. Hence we use their crypto
request queue to synchronize both drivers.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes the value returned by atmel_aes_handle_queue(), which
could have been wrong previously when the crypto request was started
synchronously but became asynchronous during the ctx->start() call.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the hmac(shaX) algorithms.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a simple function to perform data transfer with the DMA
controller.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a simple function to perform data transfer with PIO, hence
handled by the CPU.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch defines an alias macro to SHA_MR_MODE_PDC, which is not suited
for DMA usage.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch simply defines a helper function to test the 'Data Ready' flag
of the Status Register. It also gives the crypto request a chance to
be processed synchronously if this 'Data Ready' flag is already set when
polling the Status Register. Indeed, running synchronously avoids the
latency of the 'Data Ready' interrupt.
When the 'Data Ready' flag has not been set yet, we enable the associated
interrupt and resume processing the crypto request asynchronously from the
'done' task just as before.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch modifies the SHA_FLAGS_SHA* flags: those algo flags are now
organized as values of a single bitfield instead of individual bits.
This reduces the number of bits needed to encode all possible
values. Also, the new values match the SHA_MR_ALGO_SHA* values, hence
the algorithm bitfield of the SHA_MR register can simply be set with:
mr = (mr & ~SHA_FLAGS_ALGO_MASK) | (ctx->flags & SHA_FLAGS_ALGO_MASK)
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is a transitional patch. It updates atmel_sha_done_task() to
make it more generic. Indeed, it adds a new .resume() member in the
atmel_sha_dev structure. This hook is called from atmel_sha_done_task()
to resume processing an asynchronous request.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is a transitional patch. It splits the atmel_sha_handle_queue()
function. Now atmel_sha_handle_queue() only manages the request queue and
calls a new .start() hook from the atmel_sha_ctx structure.
This hook allows implementing different kinds of requests while still handling
them through a single queue.
Also, when the req parameter of atmel_sha_handle_queue() refers to the very
same request as the one returned by crypto_dequeue_request(), the queue
management now gives this crypto request a chance to be handled
synchronously, hence reducing latencies. The .start() hook returns 0 if
the crypto request was handled synchronously and -EINPROGRESS if the
crypto request still needs to be handled asynchronously.
Besides, the new .is_async member of the atmel_sha_dev structure helps
tag this asynchronous state. Indeed, the req->base.complete() callback
should not be called if the crypto request is handled synchronously.
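A simplified sketch of the dispatch step that this contract implies (not the
driver's exact code):

  dd->is_async = false;
  err = ctx->start(dd);           /* 0: finished synchronously here,
                                   * -EINPROGRESS: completion comes later */
  if (err == -EINPROGRESS || dd->is_async)
          return -EINPROGRESS;    /* the IRQ/tasklet path will complete it */

  /* handled synchronously: the caller reports the status directly and
   * req->base.complete() is not invoked for this request */
  return err;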
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a transitional patch: it creates the atmel_sha_find_dev() function,
which will be used in further patches to share the source code responsible
for finding an Atmel SHA device.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check keylen before copying the salt to avoid an integer wraparound.
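That is, reject keys shorter than the trailing 4-byte salt before doing the
keylen - 4 arithmetic; a sketch with illustrative names:

  /* RFC4106-style key layout: AES key followed by a 4-byte salt */
  if (keylen < 4)
          return -EINVAL;         /* keylen - 4 would wrap around otherwise */

  keylen -= 4;
  memcpy(ctx->salt, key + keylen, 4);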
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eliminate a double-add by creating a new list to manage
command descriptors when created; move the descriptor to
the pending list when the command is submitted.
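In list terms the change looks roughly like this (a sketch; the list and lock
names are illustrative):

  /* on allocation: park the descriptor on a 'created' list */
  spin_lock_irqsave(&chan->lock, flags);
  list_add_tail(&desc->entry, &chan->created);
  spin_unlock_irqrestore(&chan->lock, flags);

  /* on tx_submit: move it, exactly once, to the pending list */
  spin_lock_irqsave(&chan->lock, flags);
  list_move_tail(&desc->entry, &chan->pending);
  spin_unlock_irqrestore(&chan->lock, flags);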
Cc: <stable@vger.kernel.org>
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
An I/O page fault occurs when the IOMMU is enabled on a
system that supports the v5 CCP. DMA operations use a
Request ID value that does not match what is expected by
the IOMMU, resulting in the I/O page fault. Setting the
Request ID value to 0 corrects this issue.
Cc: <stable@vger.kernel.org>
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ensure dev is allocated for crypto uld context before using the device
for crypto operations.
Cc: <stable@vger.kernel.org>
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Save DMA mapped sg list addresses to request context buffer.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Zero the embedded RAM in DH85x devices. This is not
needed for newer generations as it is done by HW.
Cc: <stable@vger.kernel.org>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some accelerators of the c62x series have only two bars.
This patch skips BAR0 if the accelerator does not have it.
Cc: <stable@vger.kernel.org>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some preemptible check warnings were reported from enable_kernel_vsx(). This
patch disables preemption in aes_ctr.c before enabling VSX, making it
consistent with the other files in the same directory.
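That is, the same bracketing the sibling files already use; a sketch of one
such section (the exact call sites in aes_ctr.c may differ):

  preempt_disable();
  pagefault_disable();
  enable_kernel_vsx();
  ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
  disable_kernel_vsx();
  pagefault_enable();
  preempt_enable();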
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch only regroups functions by usage.
This will help integrate the GCM support patch later by
adjusting some shared code sections, such as common code which
will be reused by GCM, the AES mode setting, and the DMA transfer.
Signed-off-by: Ryder Lee <ryder.lee@mediatek.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces a new callback 'resume' in the struct mtk_aes_rec.
This callback is run to resume/complete the processing of the crypto
request when woken up by AES interrupts on DMA completion.
This callback will help implement the GCM mode support in further
patches.
Signed-off-by: Ryder Lee <ryder.lee@mediatek.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes mtk_aes_handle_queue() to make it more generic.
The function argument is now a pointer to struct crypto_async_request,
which is the common base of struct ablkcipher_request and
struct aead_request.
Also this patch introduces struct mtk_aes_base_ctx which will be the
common base of all the transformation contexts.
Hence the very same queue will be used to manage both block cipher and
AEAD requests (such as gcm and authenc implemented in further patches).
Signed-off-by: Ryder Lee <ryder.lee@mediatek.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes a data transfer bug in mtk_aes_xmit().
The original function uses the same loop and ring->pos
to handle both command and result descriptors. But this
produces incomplete results when src.sg_len != dst.sg_len.
To solve the problem, we split the descriptors into different
loops and use cmd_pos and res_pos to track them respectively.
Signed-off-by: Ryder Lee <ryder.lee@mediatek.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves hardware control block members from
mtk_*_rec to the transformation context and refines the related
definitions. This lets the operational context manage its
own control information easily for each DMA transfer.
Signed-off-by: Ryder Lee <ryder.lee@mediatek.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The fourth argument of dma_map_sg() and dma_unmap_sg() is an item of
the dma_data_direction enum. The function img_hash_xmit_dma() wrongly used
DMA_MEM_TO_DEV, which is an item of the dma_transfer_direction enum.
Replace DMA_MEM_TO_DEV (whose value is 1) with DMA_TO_DEVICE (whose
value is fortunately also 1) when calling dma_map_sg() and
dma_unmap_sg().
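A sketch of the corrected calls (the device pointer name is illustrative):

  /* DMA_TO_DEVICE is a dma_data_direction value; DMA_MEM_TO_DEV belongs to
   * dma_transfer_direction and only happens to share the value 1 */
  count = dma_map_sg(hdev->dev, sg, 1, DMA_TO_DEVICE);
  if (!count)
          return -EINVAL;
  /* ... issue the transfer ... */
  dma_unmap_sg(hdev->dev, sg, 1, DMA_TO_DEVICE);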
Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some hardware accelerators (like Intel AES-NI or the s390
CPACF functions) have lower priorities than virtio
crypto, and those drivers are faster than doing the same work in
the host via virtio. So let's lower the priority of
virtio-crypto's algorithms, making it higher than the software
implementations but lower than the hardware ones.
Suggested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes the following sparse warning:
drivers/crypto/mediatek/mtk-platform.c:585:27: warning:
symbol 'of_crypto_id' was not declared. Should it be static?
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
After I enabled COMPILE_TEST for non-ARM targets, I ran into these
warnings:
crypto/mediatek/mtk-aes.c: In function 'mtk_aes_info_map':
crypto/mediatek/mtk-aes.c:224:28: error: format '%d' expects argument of type 'int', but argument 3 has type 'long unsigned int' [-Werror=format=]
dev_err(cryp->dev, "dma %d bytes error\n", sizeof(*info));
crypto/mediatek/mtk-sha.c:344:28: error: format '%d' expects argument of type 'int', but argument 3 has type 'long unsigned int' [-Werror=format=]
crypto/mediatek/mtk-sha.c:550:21: error: format '%u' expects argument of type 'unsigned int', but argument 4 has type 'size_t {aka long unsigned int}' [-Werror=format=]
The correct format for size_t is %zu, so use that in all three
cases.
Fixes: 785e5c616c ("crypto: mediatek - Add crypto driver support for some MediaTek chips")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Building the mediatek driver on an older ARM architecture results in a
harmless warning:
warning: (ARCH_OMAP2PLUS_TYPICAL && CRYPTO_DEV_MEDIATEK) selects NEON which has unmet direct dependencies (VFPv3 && CPU_V7)
We could add an explicit dependency on CPU_V7, but it seems nicer to
open up the build to additional configurations. This replaces the ARM
optimized algorithm selection with the normal one that all other drivers
use, and that in turn lets us relax the dependency on ARM and drop
a number of the unrelated 'select' statements.
Obviously a real user would still select those other optimized drivers
as a fallback, but as there is no strict dependency, we can leave that
up to the user.
Fixes: 785e5c616c ("crypto: mediatek - Add crypto driver support for some MediaTek chips")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the case where keylen <= bs, mtk_sha_setkey() returns an uninitialized
value in err. Fix this by returning 0 instead of err.
Issue detected by static analysis with cppcheck.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function is used to check either the platform device ID name or the OF
node's compatible string (depending on how the device was registered) to know which
device type was registered.
But the driver is for a DT-only platform and so there's no need for this
level of indirection since the devices can only be registered via OF.
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>