As it is we only test ciphers when combined with a mode. That means
users that do not invoke a mode of operation may get an untested
cipher.
This patch tests all ciphers using the ECB mode so that simple cipher
users such as ansi-cprng are also protected.
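For illustration, a simple cipher user such as ansi-cprng works on single
blocks through the bare cipher interface and never instantiates a mode;
roughly like this (a sketch, with key/buffer declarations and most error
handling elided):

    struct crypto_cipher *tfm;

    tfm = crypto_alloc_cipher("aes", 0, 0);     /* no mode template wrapped around it */
    if (IS_ERR(tfm))
            return PTR_ERR(tfm);
    crypto_cipher_setkey(tfm, key, keylen);
    crypto_cipher_encrypt_one(tfm, out, in);    /* one block at a time */
    crypto_free_cipher(tfm);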
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new testing infrastructure by requiring
algorithms to pass a run-time test before they're made available to
users.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the newly created alg_test infrastructure into
cryptomgr. This shall allow us to use it for testing at algorithm
registration.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch creates a new interface for algorithm testing. A test can
be requested for a particular implementation of an algorithm. This
is achieved by taking both the name of the algorithm and that of
the implementation.
The all-inclusive test has also been rewritten to no longer require
a duplicate listing of all algorithms with tests. In that process
a number of missing tests have also been discovered and rectified.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The info printed is a complete waste of space when there is no error
since it doesn't tell us anything that we don't already know. If there
is an error, we can also be more verbose.
When there is an error, this patch also aborts the test and
returns the error to the caller. In future this will be used to
test algorithms at registration time.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
From the Nehalem (NHM) processor onward, Intel processors support a hardware
accelerated CRC32c algorithm via the new CRC32 instruction in the SSE 4.2
instruction set.
The patch detects the availability of the feature and chooses the most
appropriate way to calculate the CRC32c checksum.
Byte-code encodings of the instruction are used for compiler compatibility.
No MMX / XMM registers are involved in the implementation.
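A minimal sketch of the detection side, assuming the usual x86 feature-flag
helpers (the function and symbol names here are illustrative, not taken from
the patch):

    static int __init crc32c_intel_mod_init(void)
    {
            if (!boot_cpu_has(X86_FEATURE_XMM4_2))
                    return -ENODEV;                 /* generic crc32c keeps being used */
            return crypto_register_alg(&crc32c_alg);
    }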
Signed-off-by: Austin Zhang <austin.zhang@intel.com>
Signed-off-by: Kent Liu <kent.liu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If tcrypt is to be used as a run-time integrity test, it needs to be
more resilient in a hostile environment. For a start allocating 32K
of physically contiguous memory is definitely out.
This patch teaches it to use separate pages instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than displaying larval objects as real objects, this patch
makes them show up under /proc/crypto as type larval.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the only user of __crypto_alg_lookup is doing exactly what
crypto_alg_lookup does, we can now use the latter in lieu of the former.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Authenc works in two stages for encryption, it first encrypts and
then computes an ICV. The context memory of the request is used
by both operations. The problem is that when an asynchronous
encryption completes, we will compute the ICV and then reread the
context memory of the encryption to get the original request.
It just happens that we have a buffer of 16 bytes in front of the
request pointer, so ICVs of 16 bytes (such as SHA1) do not trigger
the bug. However, any attempt to use a larger ICV instantly kills
the machine when the first asynchronous encryption is completed.
This patch fixes this by saving the request pointer before we start
the ICV computation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The changeset ca786dc738
crypto: hash - Fixed digest size check
missed one spot for the digest type. This patch corrects that
error.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
My changeset 4b22f0ddb6
crypto: tcrpyt - Remove unnecessary kmap/kunmap calls
introduced a typo that broke AEAD chunk testing. In particular,
axbuf should really be xbuf.
There is also an issue with testing the last segment when encrypting.
The additional part produced by AEAD wasn't tested. Similarly, on
decryption the additional part of the AEAD input is mistaken for
corruption.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (24 commits)
I/OAT: I/OAT version 3.0 support
I/OAT: tcp_dma_copybreak default value dependent on I/OAT version
I/OAT: Add watchdog/reset functionality to ioatdma
iop_adma: cleanup iop_chan_xor_slot_count
iop_adma: document how to calculate the minimum descriptor pool size
iop_adma: directly reclaim descriptors on allocation failure
async_tx: make async_tx_test_ack a boolean routine
async_tx: remove depend_tx from async_tx_sync_epilog
async_tx: export async_tx_quiesce
async_tx: fix handling of the "out of descriptor" condition in async_xor
async_tx: ensure the xor destination buffer remains dma-mapped
async_tx: list_for_each_entry_rcu() cleanup
dmaengine: Driver for the Synopsys DesignWare DMA controller
dmaengine: Add slave DMA interface
dmaengine: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap
dmaengine: Add dma_client parameter to device_alloc_chan_resources
dmatest: Simple DMA memcpy test client
dmaengine: DMA engine driver for Marvell XOR engine
iop-adma: fix platform driver hotplug/coldplug
dmaengine: track the number of clients using a channel
...
Fixed up conflict in drivers/dca/dca-sysfs.c manually
All callers of async_tx_sync_epilog have called async_tx_quiesce on the
depend_tx, so async_tx_sync_epilog need only call the callback to
complete the operation.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Ensure forward progress is made when a dmaengine driver is unable to
allocate an xor descriptor by breaking the dependency chain with
async_tx_quiesce() and issuing any pending descriptors.
Tested with iop-adma by setting device->max_xor = 2 to force multiple
calls to device_prep_dma_xor for each call to async_xor and limiting the
descriptor slot pool to 5. Discovered that the minimum descriptor pool
size for iop-adma is 2 * iop_chan_xor_slot_cnt(device->max_xor) + 1.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When the number of source buffers for an xor operation exceeds the hardware
channel maximum async_xor creates a chain of dependent operations. The result
of one operation is reused as an input to the next to continue the xor
calculation. The destination buffer should remain mapped for the duration of
the entire chain. To provide this guarantee the code must no longer be allowed
to fall back to the synchronous path as this will preclude the buffer from being
unmapped, i.e. the dma-driver will potentially miss the descriptor with
!DMA_COMPL_SKIP_DEST_UNMAP.
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
On the rcu update side, don't use list_for_each_entry_rcu().
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
All new crypto interfaces should go into individual files as much
as possible in order to ensure that crypto.h does not collapse under
its own weight.
This patch moves the ahash code into crypto/hash.h and crypto/internal/hash.h
respectively.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reimplements crc32c using the ahash interface. This
allows one tfm to be used by an unlimited number of users provided
that they all use the same key (which all current crc32c users do).
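A rough usage sketch (illustrative only; error handling omitted, ~0 being the
conventional crc32c starting value):

    struct crypto_ahash *tfm = crypto_alloc_ahash("crc32c", 0, 0);
    u32 seed = ~0;

    crypto_ahash_setkey(tfm, (u8 *)&seed, sizeof(seed));    /* every user sets the same key */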
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the walking helpers for hash algorithms akin to
those of block ciphers. This is a necessary step before we can
reimplement existing hash algorithms using the new ahash interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a cryptographic pseudo-random number generator
based on CTR(AES-128). It is meant to be used in cases where a
deterministic CPRNG is required.
One of the first applications will be as an input in the IPsec IV
generation process.
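A hedged sketch of how a consumer might pull bytes out through the kernel RNG
interface; the "stdrng" name, the seeding details and the buffer names are
assumptions for illustration:

    struct crypto_rng *rng = crypto_alloc_rng("stdrng", 0, 0);

    crypto_rng_reset(rng, seed, slen);          /* deterministic: same seed, same stream */
    crypto_rng_get_bytes(rng, ivbuf, ivlen);    /* e.g. feed the IPsec IV generator */
    crypto_free_rng(rng);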
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The base field in ahash_tfm appears to have been cut-n-pasted from
ablkcipher. It isn't needed here at all. Similarly, the info field
in ahash_request also appears to have originated from its cipher
counterpart and is vestigial.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The digest size check on hash algorithms is incorrect. It's
perfectly valid for hash algorithms to have a digest length
longer than their block size. For example crc32c has a block
size of 1 and a digest size of 4. Rather than having it lie
about its block size, this patch fixes the checks to do what
they really should, which is to bound the digest size so that
code placing the digest on the stack continues to work.
HMAC, however, still needs to check this as it's only defined
for such algorithms.
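The kind of check meant here, as a sketch against the old digest_alg fields;
the PAGE_SIZE / 8 bound is an assumption for illustration, not a quote of the
patch:

    /* Bound the digest size so on-stack digest buffers stay safe;
     * the exact limit is assumed here, not taken from the patch. */
    if (alg->cra_digest.dia_digestsize > PAGE_SIZE / 8)
            return -EINVAL;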
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Similar to the rmd128.c annotations, significantly cuts down on the
noise.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the private implementation of 32-bit rotation and unaligned
access with byteswapping.
As a bonus, fixes sparse warnings:
crypto/camellia.c:602:2: warning: cast to restricted __be32
crypto/camellia.c:603:2: warning: cast to restricted __be32
crypto/camellia.c:604:2: warning: cast to restricted __be32
crypto/camellia.c:605:2: warning: cast to restricted __be32
crypto/camellia.c:710:2: warning: cast to restricted __be32
crypto/camellia.c:711:2: warning: cast to restricted __be32
crypto/camellia.c:712:2: warning: cast to restricted __be32
crypto/camellia.c:713:2: warning: cast to restricted __be32
crypto/camellia.c:714:2: warning: cast to restricted __be32
crypto/camellia.c:715:2: warning: cast to restricted __be32
crypto/camellia.c:716:2: warning: cast to restricted __be32
crypto/camellia.c:717:2: warning: cast to restricted __be32
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Noticed by Neil Horman: we are doing unnecessary kmap/kunmap calls
on kmalloced memory. This patch removes them. For the purposes of
testing SG construction, the underlying crypto code already does plenty
of kmap/kunmap calls anyway.
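What the removed pattern amounts to, roughly: kmalloc() memory is lowmem and
already mapped, so the kmap()/kunmap() pair around it is pure overhead.

    /* before: map a page that is already mapped */
    q = kmap(virt_to_page(buf)) + offset_in_page(buf);
    memcpy(q, data, len);
    kunmap(virt_to_page(buf));

    /* after: use the kmalloc()ed pointer directly */
    memcpy(buf, data, len);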
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch to add checking of DES3 test vectors using CBC mode. FIPS-140-2
compliance mandates that any supported mode of operation must include a self
test. This satisfies that requirement for cbc(des3_ede). The included test
vector was generated by me using openssl. Key/IV was generated with the
following command:
openssl enc -des_ede_cbc -P
Input and output values were generated by repeating the string "Too many
secrets" a few times over, truncating it to 128 bytes, and encrypting it with
openssl using the aforementioned key. Tested successfully by myself.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the relevant code in the rmd implementations to
use the pointer form of the endian swapping operations. This allows
certain architectures to generate more optimised code. For example,
on sparc64 this more than halves the CPU cycles on a typical hashing
operation.
Based on a patch by David Miller.
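Illustratively (with p being a const __le32 * into the message block), the
change is of this shape; on sparc64 the pointer form can become a single
byte-swapping load:

    x = le32_to_cpu(p[i]);      /* value form: plain load, then swap */
    x = le32_to_cpup(&p[i]);    /* pointer form: load-and-swap in one instruction */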
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes endian issues making rmd320 work
properly on big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes endian issues making rmd256 work
properly on big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes endian issues making rmd160 work
properly on big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is based on Sebastian Siewior's patch and
fixes endian issues making rmd128 work properly on
big-endian machines.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Acked-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes tcrypt to use the new asynchronous hash interface
for testing hash algorithm correctness. The speed tests will continue
to use the existing interface for now.
Signed-off-by: Loc Ho <lho@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds asynchronous hash support to crypto daemon.
Signed-off-by: Loc Ho <lho@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds Kconfig entries for RIPEMD-256 and RIPEMD-320.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds test vectors for RIPEMD-256 and
RIPEMD-320 hash algorithms.
The test vectors are taken from
<http://homes.esat.kuleuven.be/~bosselae/ripemd160.html>
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for the extended RIPEMD hash
algorithms RIPEMD-256 and RIPEMD-320.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch puts all common RIPEMD values in the
appropriate header file. Initial values and constants
are the same for all variants of RIPEMD.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check whether the destination buffer is written to beyond the last
byte contained in the scatterlist.
Also change IDX1 of the cross-page access offsets to a multiple of 4.
This triggers a corruption in the HIFN driver and doesn't seem to
negatively impact other testcases.
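The idea, sketched with invented names (the real guard length, poison value
and check may differ):

    memset(xbuf + enc_len, 0x5a, GUARD_BYTES);      /* poison past the expected output */
    /* ...run the encryption with the scatterlist ending at enc_len... */
    if (memchr_inv(xbuf + enc_len, 0x5a, GUARD_BYTES))
            printk(KERN_ERR "alg wrote beyond the end of the scatterlist\n");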
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Change logs should be kept in source control systems, not the source.
This patch removes the change log from tcrypt to stop people from
extending it any more.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds Kconfig entries for RIPEMD-128 and RIPEMD-160.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds test vectors for RIPEMD-128 and
RIPEMD-160 hash algorithms and digests (HMAC).
The test vectors are taken from ISO/IEC 10118-3 (2004)
and RFC2286.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for RIPEMD-128 and RIPEMD-160
hash algorithms.
Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The EINPROGRESS notifications should be done just like the final
call-backs, i.e., with BH off. This patch fixes the call in cryptd
since previously it was called with BH on.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When chainiv postpones requests it never calls their completion functions.
This causes symptoms such as memory leaks when IPsec is in use.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
commit 636bdeaa 'dmaengine: ack to flags: make use of the unused bits in
the 'ack' field' missed an ->ack conversion in
crypto/async_tx/async_memset.c
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Coverity CID: 2306 & 2307 RESOURCE_LEAK
In the second for loop in test_cipher(), data is allocated space with
kzalloc() and is only ever freed in an error case.
Looking at this loop, data is written to this memory but nothing seems
to read from it.
So here is a patch removing the allocation; I think this is the right
fix.
Only compile tested.
Signed-off-by: Darren Jenkins <darrenrjenkins@gmailcom>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move rcu-protected lists from list.h into a new header file rculist.h.
This is done because lists are a heavily used primitive structure all over the
kernel and it's currently impossible to include other header files in
list.h without creating some circular dependencies.
For example, list.h implements rcu-protected lists and uses rcu_dereference()
without including rcupdate.h. It actually compiles because users of
rcu_dereference() are macros. Other RCU functions could be used too but
probably aren't because of this.
Therefore this patch creates rculist.h, which includes rcupdate.h without too
many changes/troubles.
Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Josh Triplett <josh@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When HMAC gets a key longer than the block size of the hash, it needs
to feed it as input to the hash to reduce it to a fixed length. As
it is, HMAC converts the key into a scatter-gather list. However,
this doesn't work on certain platforms if the key is not allocated
via kmalloc. For example, the keys from tcrypt are stored in the
rodata section and this causes it to fail with HMAC on x86-64.
This patch fixes this by copying the key to memory obtained via
kmalloc before hashing it.
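The shape of the fix, roughly (illustrative; real code also has to handle
allocation failure paths and key cleanup):

    u8 *tmp = kmalloc(keylen, GFP_KERNEL);
    struct scatterlist sg;

    if (!tmp)
            return -ENOMEM;
    memcpy(tmp, key, keylen);               /* key may live in rodata */
    sg_init_one(&sg, tmp, keylen);          /* now safe to hash via the sg walk */
    /* ...digest &sg to produce the reduced key... */
    kfree(tmp);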
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Normally, kzalloc returns NULL or a valid pointer value, not a value to be
tested using IS_ERR.
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
After attaching the IV to the head during encryption, eseqiv does not
increase the encryption length by that amount. As such the last block
of the actual plain text will be left unencrypted.
Fortunately the only user of this code, hifn, currently crashes so this
shouldn't affect anyone :)
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ciphers, block modes, you name it, are grouped together and sorted.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Thu, Mar 27, 2008 at 03:40:36PM +0100, Bodo Eggert wrote:
> Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> wrote:
>
> > This patch cleanups the crypto code, replaces the init() and fini()
> > with the <algorithm name>_init/_fini
>
> This part ist OK.
>
> > or init/fini_<algorithm name> (if the
> > <algorithm name>_init/_fini exist)
>
> Having init_foo and foo_init won't be a good thing, will it? I'd start
> confusing them.
>
> What about foo_modinit instead?
Thanks for the suggestion. The init() is replaced with
<algorithm name>_mod_init()
and fini() is replaced with <algorithm name>_mod_fini().
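Taking aes as an illustrative example, the renamed functions end up looking
like:

    static int __init aes_mod_init(void)
    {
            return crypto_register_alg(&aes_alg);
    }

    static void __exit aes_mod_fini(void)
    {
            crypto_unregister_alg(&aes_alg);
    }

    module_init(aes_mod_init);
    module_exit(aes_mod_fini);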
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The key expansion routine is made a little more generic, becomes a
kernel-doc entry and is then exported.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Tested-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement CTS wrapper for CBC mode required for support of AES
encryption support for Kerberos (rfc3962).
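Callers such as the Kerberos code would then request the wrapped mode by name;
a minimal sketch using the blkcipher interface of the time:

    struct crypto_blkcipher *tfm;

    tfm = crypto_alloc_blkcipher("cts(cbc(aes))", 0, CRYPTO_ALG_ASYNC);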
Signed-off-by: Kevin Coffman <kwc@citi.umich.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The third test vector of ECB-XTEA-ENC fails for me; all others
are fine. I could not find an RFC or anything else where they
are defined. The test vector has not been modified since git
started recording history. The implementation is very close
(not to say equal) to what is available as Public Domain (they
recommend 64 rounds while the in-kernel version uses 32). Therefore I
believe that there is a typo somewhere and tcrypt always reported
*fail* instead of *okay*.
This patch replaces input + result of the third test vector with
result + input from the third decryption vector. The key is the
same, the other three test vectors are also the reverse.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the tcrypt module is about 2 MiB on x86-32. The
main reason for the huge size is the data segment which contains
all the test vectors for each algorithm. The test vectors are
statically allocated in an array, and the size of the array has been
drastically increased by the merge of the Salsa20 test vectors.
With a hint from Benedigt Spranger I found a way to convert those
fixed-length arrays into strings, which are flexible in size. VIM and
regex were also very helpful :)
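On an invented 4-byte vector the conversion looks like this; the template's
explicit length field still says how many bytes are valid, so the string's
trailing NUL costs one byte instead of a fixed-size array's padding:

    .input  = { 0x01, 0x23, 0x45, 0x67 },   /* before: fixed-length array */
    .input  = "\x01\x23\x45\x67",           /* after: string, only as long as needed */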
So, I am talking about a shrinking of ~97% on x86-32:
   text    data  bss      dec     hex filename
  18309 2039708   20  2058037  1f6735 tcrypt-b4.ko
  45628   23516   80    69224   10e68 tcrypt.ko
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The test routines (test_{cipher,hash,aead}) make a copy
of the test template and process the encryption in place.
This patch changes the creation of the copy so it will
work even if the source address of the input data isn't an array
inside the template but a pointer.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The speed templates always look the same: the key size
is repeated for each block size, and we always test the same
block sizes. Adding one inner loop makes it possible
to get rid of the struct and use a tiny
u8 array instead :)
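Roughly: one zero-terminated array of key sizes, with the block sizes walked
in an inner loop (the helper and block-size list are made up for
illustration):

    static u8 speed_template_16_24_32[] = { 16, 24, 32, 0 };

    for (i = 0; template[i] != 0; i++)                      /* key sizes */
            for (j = 0; j < ARRAY_SIZE(block_sizes); j++)   /* block sizes */
                    do_one_speed_test(template[i], block_sizes[j]);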
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some of the implemented crypto ciphers support similar key sizes
(16, 24 & 32 bytes). They can be grouped together and use a common
template instead of their own copies containing the same data.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename sha512 to sha512_generic and add a MODULE_ALIAS for sha512
so all sha512 implementations can be loaded automatically.
Keep the broken tabs so git recognizes this as a rename.
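So sha512_generic.c carries something along these lines, letting a request
for "sha512" pull in whichever implementation is present:

    MODULE_ALIAS("sha512");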
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
'ack' is currently a simple integer that flags whether or not a client is done
touching fields in the given descriptor. It is effectively just a single bit
of information. Converting this to a flags parameter allows the other bits to
be put to use to control completion actions, like dma-unmap, and capture
results, like xor-zero-sum == 0.
Changes are one of:
1/ convert all open-coded ->ack manipulations to use async_tx_ack
and async_tx_test_ack.
2/ set the ack bit at prep time where possible
3/ make drivers store the flags at prep time
4/ add flags to the device_prep_dma_interrupt prototype
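For the first item in the list above, the conversion is of this shape
(before/after, illustrative):

    tx->ack = 1;                    /* before: open-coded single bit */
    async_tx_ack(tx);               /* after: sets DMA_CTRL_ACK in tx->flags */

    if (tx->ack)                    /* before */
    if (async_tx_test_ack(tx))      /* after: tests DMA_CTRL_ACK */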
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Shrink struct dma_async_tx_descriptor and introduce
async_tx_channel_switch to properly inject a channel switch interrupt in
the descriptor stream. This simplifies the locking model as drivers no
longer need to handle dma_async_tx_descriptor.lock.
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The kernel crashes when ipsec passes a udp packet of about 14XX bytes
of data to aes-xcbc-mac.
It seems the first xxxx bytes of the data are in the first sg entry,
and the remaining xx bytes are in the next sg entry, but we don't
check the next sg entry to see if we need to go look the page up.
I noticed that in hmac.c we do a scatterwalk_sg_next() to do this check
and possible lookup, so xcbc.c needs to use this routine too.
A 15-hour run of an ipsec stress test sending streams of tcp and
udp packets of various sizes, using this patch and
aes-xcbc-mac, completed successfully, so hopefully this fixes the
problem.
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the channel cannot perform the operation in one call to
->device_prep_dma_zero_sum, then fall back to the xor+page_is_zero path.
This only affects users with arrays larger than 16 devices on iop13xx or
32 devices on iop3xx.
Cc: <stable@kernel.org>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The previous patch to move chainiv and eseqiv into blkcipher created
a section mismatch for the chainiv exit function which was also called
from __init. This patch removes the __exit marking on it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When using aes-xcbc-mac for authentication in IPsec,
the kernel crashes. It seems this algorithm doesn't
account for the space IPsec may make in the scatterlist for the authtag.
Thus when crypto_xcbc_digest_update2() gets called,
nbytes may be less than sg[i].length.
Since nbytes is an unsigned number, it wraps
at the end of the loop, allowing us to go back
into the loop and causing a crash in memcpy.
I used the update function in digest.c to model this fix.
Please let me know if it looks ok.
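The failure mode in miniature, with invented numbers:

    unsigned int nbytes = 10;                   /* bytes left for this update */

    nbytes -= 16;                               /* sg[i].length was larger */
    pr_info("nbytes is now %u\n", nbytes);      /* prints 4294967290, not -6 */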
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The XTS blockmode uses a copy of the IV which is saved on the stack
and may or may not be properly aligned. If it is not, it will break
hardware ciphers like the Geode or PadLock.
This patch encrypts the IV in place so we don't have to worry about
alignment.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Tested-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Every file should include the headers containing the externs for its
global code (in this case for crypto_{init,exit}_digest_ops()).
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For compatibility with dm-crypt initramfs setups it is useful to merge
chainiv/seqiv into the crypto_blkcipher module. Since they're required
by most algorithms anyway this is an acceptable trade-off.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes the following build error caused by commit
3631c650c4:
<-- snip -->
...
LD .tmp_vmlinux1
crypto/built-in.o: In function `skcipher_null_crypt':
crypto_null.c:(.text+0x3d14): undefined reference to `blkcipher_walk_virt'
crypto_null.c:(.text+0x3d14): relocation truncated to fit: R_MIPS_26 against `blkcipher_walk_virt'
crypto/built-in.o: In function `$L32':
crypto_null.c:(.text+0x3d54): undefined reference to `blkcipher_walk_done'
crypto_null.c:(.text+0x3d54): relocation truncated to fit: R_MIPS_26 against `blkcipher_walk_done'
crypto/built-in.o:(.data+0x2e8): undefined reference to `crypto_blkcipher_type'
make[1]: *** [.tmp_vmlinux1] Error 1
<-- snip -->
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Building latest git fails with the following error:
ERROR: "crypto_alloc_ablkcipher" [crypto/tcrypt.ko] undefined!
This appears to happen because CONFIG_CRYPTO_TEST is set while
CONFIG_CRYPTO_BLKCIPHER is not.
The following patch fixes the problem for me.
Signed-off-by: Frederik Deweerdt <frederik.deweerdt@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The source and destination addresses are included to allow channel
selection based on address alignment.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Pass a full set of flags to drivers' per-operation 'prep' routines.
Currently the only flag passed is DMA_PREP_INTERRUPT. The expectation is
that arch-specific async_tx_find_channel() implementations can exploit this
capability to find the best channel for an operation.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
The tx_set_src and tx_set_dest methods were originally implemented to allow
an array of addresses to be passed down from async_xor to the dmaengine
driver while minimizing stack overhead. Removing these methods allows
drivers to have all transaction parameters available at 'prep' time, saves
two function pointers in struct dma_async_tx_descriptor, and reduces the
number of indirect branches.
A consequence of moving this data to the 'prep' routine is that
multi-source routines like async_xor need temporary storage to convert an
array of linear addresses into an array of dma addresses. In order to keep
the same stack footprint of the previous implementation the input array is
reused as storage for the dma addresses. This requires that
sizeof(dma_addr_t) be less than or equal to sizeof(void *). As a
consequence CONFIG_DMADEVICES now depends on !CONFIG_HIGHMEM64G. It also
requires that drivers be able to make descriptor resources available when
the 'prep' routine is polled.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Remove the unused ASYNC_TX_ASSUME_COHERENT flag. Async_tx is
meant to hide the difference between asynchronous hardware and synchronous
software operations; this flag requires clients to understand the cache
coherency consequences of the async path.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
A single list_head variable initialized with LIST_HEAD_INIT can almost
always be replaced with a LIST_HEAD declaration; this shrinks the code
and looks better.
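The pattern, with a made-up list name:

    static struct list_head foo_list = LIST_HEAD_INIT(foo_list);   /* before */
    static LIST_HEAD(foo_list);                                    /* after */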
Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
do_async_xor must be compiled away on !HAS_DMA archs.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (125 commits)
[CRYPTO] twofish: Merge common glue code
[CRYPTO] hifn_795x: Fixup container_of() usage
[CRYPTO] cast6: inline bloat--
[CRYPTO] api: Set default CRYPTO_MINALIGN to unsigned long long
[CRYPTO] tcrypt: Make xcbc available as a standalone test
[CRYPTO] xcbc: Remove bogus hash/cipher test
[CRYPTO] xcbc: Fix algorithm leak when block size check fails
[CRYPTO] tcrypt: Zero axbuf in the right function
[CRYPTO] padlock: Only reset the key once for each CBC and ECB operation
[CRYPTO] api: Include sched.h for cond_resched in scatterwalk.h
[CRYPTO] salsa20-asm: Remove unnecessary dependency on CRYPTO_SALSA20
[CRYPTO] tcrypt: Add select of AEAD
[CRYPTO] salsa20: Add x86-64 assembly version
[CRYPTO] salsa20_i586: Salsa20 stream cipher algorithm (i586 version)
[CRYPTO] gcm: Introduce rfc4106
[CRYPTO] api: Show async type
[CRYPTO] chainiv: Avoid lock spinning where possible
[CRYPTO] seqiv: Add select AEAD in Kconfig
[CRYPTO] scatterwalk: Handle zero nbytes in scatterwalk_map_and_copy
[CRYPTO] null: Allow setkey on digest_null
...
Currently the gcm(aes) tests have to be run together with all other
algorithms. This patch makes them available by themselves as test number 106.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>