It is very rare to see versions of GCC prior to 4.8 being used to build
the mainline kernel. These old compilers are also known to have codegen
issues which can lead to silent miscompilation:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
Raise the minimum GCC version for kernel build to 4.8 and remove some
tautological Kconfig dependencies as a consequence.
Cc: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Pull crypto updates from Herbert Xu:
"API:
- Fix out-of-sync IVs in self-test for IPsec AEAD algorithms
Algorithms:
- Use formally verified implementation of x86/curve25519
Drivers:
- Enhance hwrng support in caam
- Use crypto_engine for skcipher/aead/rsa/hash in caam
- Add Xilinx AES driver
- Add uacce driver
- Register zip engine to uacce in hisilicon
- Add support for OCTEON TX CPT engine in marvell"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (162 commits)
crypto: af_alg - bool type cosmetics
crypto: arm[64]/poly1305 - add artifact to .gitignore files
crypto: caam - limit single JD RNG output to maximum of 16 bytes
crypto: caam - enable prediction resistance in HRWNG
bus: fsl-mc: add api to retrieve mc version
crypto: caam - invalidate entropy register during RNG initialization
crypto: caam - check if RNG job failed
crypto: caam - simplify RNG implementation
crypto: caam - drop global context pointer and init_done
crypto: caam - use struct hwrng's .init for initialization
crypto: caam - allocate RNG instantiation descriptor with GFP_DMA
crypto: ccree - remove duplicated include from cc_aead.c
crypto: chelsio - remove set but not used variable 'adap'
crypto: marvell - enable OcteonTX cpt options for build
crypto: marvell - add the Virtual Function driver for CPT
crypto: marvell - add support for OCTEON TX CPT engine
crypto: marvell - create common Kconfig and Makefile for Marvell
crypto: arm/neon - memzero_explicit aes-cbc key
crypto: bcm - Use scnprintf() for avoiding potential buffer overflow
crypto: atmel-i2c - Fix wakeup fail
...
When working with bool values, the true and false definitions should be used
instead of 1 and 0.
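For example, an illustrative fragment (not the exact hunks of this patch; in
kernel code bool, true and false come from <linux/types.h>):

  bool done;

  done = 1;     /* compiles, but obscures that 'done' is a bool */
  done = true;  /* preferred: matches the declared type */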
Hopefully I have fixed my mailer; apologies for that.
Signed-off-by: Lothar Rubusch <l.rubusch@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Do test_aead_vs_generic_impl() before test_aead_inauthentic_inputs() so
that any differences with the generic driver are detected before getting
to the inauthentic input tests, which intentionally use only the driver
being tested (so that they run even if a generic driver is unavailable).
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
rfc4543 was missing from the list of algorithms that may treat the end
of the AAD buffer specially.
Also, with rfc4106, rfc4309, rfc4543, and rfc7539esp, the end of the AAD
buffer is actually supposed to contain a second copy of the IV, and
we've concluded that if the IV copies don't match, the behavior is
implementation-defined, so the fuzz tests can't easily test that case.
Therefore, make the fuzz tests only use inputs where the two IV copies match.
Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
Fixes: 40153b10d9 ("crypto: testmgr - fuzz AEADs against their generic implementation")
Cc: Stephan Mueller <smueller@chronox.de>
Originally-from: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in the XTS template's ->create() function by
taking advantage of crypto_drop_skcipher() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.
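A minimal sketch of the resulting pattern (identifiers illustrative;
free_inst() stands for the template's instance-freeing helper):

  err = crypto_grab_skcipher(&ctx->spawn, skcipher_crypto_instance(inst),
                             crypto_attr_alg_name(tb[1]), 0, mask);
  if (err)
          goto err_free_inst;
  ...
  err_free_inst:
          free_inst(inst);  /* drops the spawn; a no-op if never grabbed */

The single error label can now free the instance unconditionally, because
dropping an un-grabbed spawn is harmless.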
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in pkcs1pad_create() by taking advantage of
crypto_grab_akcipher() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_akcipher() now accepting (as a no-op) a spawn
that hasn't been grabbed yet.
While we're at it, also simplify the way the hash_name optional argument
is handled. We only need to check whether it's present in one place,
and we can just assign directly to ctx->digest_info.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in pcrypt_create_aead() by taking advantage
of crypto_grab_aead() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_aead() now accepting (as a no-op) a spawn that
hasn't been grabbed yet.
This required also making padata_free_shell() accept a NULL argument.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in the LRW template's ->create() function by
taking advantage of crypto_drop_skcipher() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in aead_geniv_alloc() by taking advantage of
crypto_grab_aead() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_aead() now accepting (as a no-op) a spawn that
hasn't been grabbed yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in crypto_rfc4543_create() by taking
advantage of crypto_grab_aead() now handling an ERR_PTR() name and by
taking advantage of crypto_drop_aead() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.
Conveniently, this eliminates the 'ccm_name' variable which was
incorrectly named (it should have been 'gcm_name').
Also fix a weird case where a line was terminated by a comma rather than
a semicolon, causing the statement to be continued on the next line.
Fortunately the code still behaved as intended, though.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in crypto_rfc4106_create() by taking
advantage of crypto_grab_aead() now handling an ERR_PTR() name and by
taking advantage of crypto_drop_aead() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.
Conveniently, this eliminates the 'ccm_name' variable which was
incorrectly named (it should have been 'gcm_name').
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in crypto_cts_create() by taking advantage
of crypto_grab_skcipher() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_skcipher() now accepting (as a no-op) a spawn
that hasn't been grabbed yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in crypto_rfc3686_create() by taking
advantage of crypto_grab_skcipher() now handling an ERR_PTR() name and
by taking advantage of crypto_drop_skcipher() now accepting (as a no-op)
a spawn that hasn't been grabbed yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in the various cryptd_create_*() functions
by taking advantage of crypto_grab_*() now handling an ERR_PTR() name
and by taking advantage of crypto_drop_*() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Simplify the error handling in crypto_rfc4309_create() by taking
advantage of crypto_grab_aead() now handling an ERR_PTR() name and by
taking advantage of crypto_drop_aead() now accepting (as a no-op) a
spawn that hasn't been grabbed yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a weird case where a line was terminated by a comma rather than a
semicolon, causing the statement to be continued on the next line.
Fortunately the code still behaved as intended, though.
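An illustrative reduction of the pattern (not the actual kernel lines): thanks
to the comma operator, both assignments still execute, left to right, as a
single statement:

  inst->alg.ivsize = alg->ivsize,           /* comma: the statement continues */
  inst->alg.maxauthsize = alg->maxauthsize; /* ...and ends here */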
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto/md5.c:26:0: warning: macro "MD5_DIGEST_WORDS" is not used [-Wunused-macros]
crypto/md5.c:27:0: warning: macro "MD5_MESSAGE_BYTES" is not used [-Wunused-macros]
They are never used since commit 3c7eb3cc83 ("md5: remove from
lib and only live in crypto").
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull IMA fixes from Mimi Zohar:
"Two bug fixes and an associated change for each.
The one that adds SM3 to the IMA list of supported hash algorithms is
a simple change, but could be considered a new feature"
* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
ima: add sm3 algorithm to hash algorithm configuration list
crypto: rename sm3-256 to sm3 in hash_algo_name
efi: Only print errors about failing to get certs if EFI vars are found
x86/ima: use correct identifier for SetupMode variable
The name sm3-256 is defined in hash_algo_name in hash_info, but the
algorithm name implemented in sm3_generic.c is sm3. This mismatch causes
lookups of sm3-256 to fail with ENOENT in some usage scenarios of the
hash algorithm. For example, IMA, keys, and other subsystems that
reference hash_algo_name all use the sm3 hash algorithm.
Fixes: 5ca4c20cfd ("keys, trusted: select hash algorithm for TPM2 chips")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Pascal van Leeuwen <pvanleeuwen@rambus.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Merge tag 's390-5.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Vasily Gorbik:
- Enable paes-s390 cipher selftests in testmgr (acked-by Herbert Xu).
- Fix protected key length update in PKEY_SEC2PROTK ioctl and increase
card/queue requests counter to 64-bit in crypto code.
- Fix clang warning in get_tod_clock.
- Fix ultravisor info length extensions handling.
- Fix style of SPDX License Identifier in vfio-ccw.
- Avoid unnecessary GFP_ATOMIC and simplify ACK tracking in qdio.
* tag 's390-5.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
crypto/testmgr: enable selftests for paes-s390 ciphers
s390/time: Fix clk type in get_tod_clock
s390/uv: Fix handling of length extensions
s390/qdio: don't allocate *aob array with GFP_ATOMIC
s390/qdio: simplify ACK tracking
s390/zcrypt: fix card and queue total counter wrap
s390/pkey: fix missing length of protected key on return
vfio-ccw: Use the correct style for SPDX License Identifier
Pull crypto fix from Herbert Xu:
"This fixes a Kconfig anomaly when lib/crypto is enabled without Crypto
API"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: Kconfig - allow tests to be disabled when manager is disabled
This patch enables the selftests for the s390-specific protected key
AES (PAES) cipher implementations:
* cbc-paes-s390
* ctr-paes-s390
* ecb-paes-s390
* xts-paes-s390
PAES is an AES cipher but with encrypted ('protected') key
material. However, the paes ciphers are able to derive a protected
key from clear key material with the help of the pkey kernel module.
So this patch now enables the generic AES tests for the paes
ciphers. Under the hood the setkey() functions rearrange the clear key
values as clear key token and so the pkey kernel module is able to
provide protected key blobs from the given clear key values. The
derived protected key blobs are then used within the paes ciphers and
should produce the very same results as the generic AES implementation
with the clear key values.
The s390-paes cipher testlist entries are surrounded
by #if IS_ENABLED(CONFIG_CRYPTO_PAES_S390) because they don't
make any sense on non-s390 platforms or without the PAES
cipher implementation.
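A sketch of what such a guarded testlist entry looks like (fields
abbreviated; the real entries live in crypto/testmgr.c):

  #if IS_ENABLED(CONFIG_CRYPTO_PAES_S390)
  }, {
          .alg = "cbc(paes)",
          .test = alg_test_skcipher,
          .fips_allowed = 1,
          .suite = {
                  .cipher = __VECS(aes_cbc_tv_template)
          },
  #endif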
Link: http://lkml.kernel.org/r/20200213083946.zicarnnt3wizl5ty@gondor.apana.org.au
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
When running tcrypt skcipher speed tests, logs contain things like:
testing speed of async ecb(des3_ede) (ecb(des3_ede-generic)) encryption
or:
testing speed of async ecb(aes) (ecb(aes-ce)) encryption
The algorithm implementations are sync, not async.
Fix this inaccuracy.
Fixes: 7166e589da ("crypto: tcrypt - Use skcipher")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The path for algorithms with the CRYPTO_ALG_LARVAL flag has already jumped to
the end before this check is reached, so the check is dead code; remove it.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We need to decrement this refcounter on these error paths.
Fixes: f7d76e05d0 ("crypto: user - fix use_after_free of struct xxx_request")
Cc: <stable@vger.kernel.org>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The library code uses CRYPTO_MANAGER_DISABLE_TESTS to conditionalize its
tests, but the library code can also exist without CRYPTO_MANAGER. That
means on minimal configs, the test code winds up being built with no way
to disable it.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto updates from Herbert Xu:
"API:
- Removed CRYPTO_TFM_RES flags
- Extended spawn grabbing to all algorithm types
- Moved hash descsize verification into API code
Algorithms:
- Fixed recursive pcrypt dead-lock
- Added new 32 and 64-bit generic versions of poly1305
- Added cryptogams implementation of x86/poly1305
Drivers:
- Added support for i.MX8M Mini in caam
- Added support for i.MX8M Nano in caam
- Added support for i.MX8M Plus in caam
- Added support for A33 variant of SS in sun4i-ss
- Added TEE support for Raven Ridge in ccp
- Added in-kernel API to submit TEE commands in ccp
- Added AMD-TEE driver
- Added support for BCM2711 in iproc-rng200
- Added support for AES256-GCM based ciphers for chtls
- Added aead support on SEC2 in hisilicon"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (244 commits)
crypto: arm/chacha - fix build failure when kernel mode NEON is disabled
crypto: caam - add support for i.MX8M Plus
crypto: x86/poly1305 - emit does base conversion itself
crypto: hisilicon - fix spelling mistake "disgest" -> "digest"
crypto: chacha20poly1305 - add back missing test vectors and test chunking
crypto: x86/poly1305 - fix .gitignore typo
tee: fix memory allocation failure checks on drv_data and amdtee
crypto: ccree - erase unneeded inline funcs
crypto: ccree - make cc_pm_put_suspend() void
crypto: ccree - split overloaded usage of irq field
crypto: ccree - fix PM race condition
crypto: ccree - fix FDE descriptor sequence
crypto: ccree - cc_do_send_request() is void func
crypto: ccree - fix pm wrongful error reporting
crypto: ccree - turn errors to debug msgs
crypto: ccree - fix AEAD decrypt auth fail
crypto: ccree - fix typo in comment
crypto: ccree - fix typos in error msgs
crypto: atmel-{aes,sha,tdes} - Retire crypto_platform_data
crypto: x86/sha - Eliminate casts on asm implementations
...
"AEAD" is capitalized everywhere else.
Use "an" when followed by a written or spoken vowel.
Fixes: be1eb7f78a ("crypto: essiv - create wrapper template for ESSIV generation")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These two C implementations from Zinc -- a 32x32 one and a 64x64 one,
depending on the platform -- come from Andrew Moon's public domain
poly1305-donna portable code, modified for usage in the kernel. The
precomputation in the 32-bit version and the use of 64x64 multiplies in
the 64-bit version make these perform better than the code it replaces.
Moon's code is also very widespread and has received many eyeballs of
scrutiny.
There's a bit of interference with the x86 implementation, which relies
on internal details of the old scalar implementation. In the next
commit, the x86 implementation will be replaced with a faster one that
doesn't rely on this, so none of this matters much. But for now, to keep
this passing the tests, we inline the bits of the old implementation
that the x86 implementation relied on. Also, since we now support a
slightly larger key space, via the union, some offsets had to be fixed
up.
Nonce calculation was folded in with the emit function, to take
advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no
nonce handling in emit, so this path was conditionalized. We also
introduced a new struct, poly1305_core_key, to represent the precise
amount of space that particular implementation uses.
Testing with kbench9000, depending on the CPU, the update function for
the 32x32 version has been improved by 4%-7%, and for the 64x64 by
19%-30%. The 32x32 gains are small, but I think there's great value in
having a parallel implementation to the 64x64 one so that the two can be
compared side-by-side as nice stand-alone units.
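To illustrate the 64x64 strategy, a schematic fragment in the style of
donna-64 (not the kernel's exact code): with three 44/44/42-bit limbs, each
partial product is a single widening multiply into a 128-bit accumulator:

  typedef unsigned __int128 u128;

  /* s1 = r1 * 20 and s2 = r2 * 20 fold in the mod 2^130 - 5 reduction */
  d0 = (u128)h0 * r0 + (u128)h1 * s2 + (u128)h2 * s1;
  d1 = (u128)h0 * r1 + (u128)h1 * r0 + (u128)h2 * s2;
  d2 = (u128)h0 * r2 + (u128)h1 * r1 + (u128)h2 * r0;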
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All instances need to have a ->free() method, but people could forget to
set it and then not notice if the instance is never unregistered. To
help detect this bug earlier, don't allow an instance without a ->free()
method to be registered, and complain loudly if someone tries to do it.
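A sketch of the added check at registration time (context abridged):

  int crypto_register_instance(struct crypto_template *tmpl,
                               struct crypto_instance *inst)
  {
          ...
          if (WARN_ON(!inst->free))
                  return -EINVAL;
          ...
  }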
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all templates provide a ->create() method which creates an
instance, installs a strongly-typed ->free() method directly to it, and
registers it, the older ->alloc() and ->free() methods in
'struct crypto_template' are no longer used. Remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert shash_free_instance() and its users to the new way of freeing
instances, where a ->free() method is installed to the instance struct
itself. This replaces the weakly-typed method crypto_template::free().
This will allow removing support for the old way of freeing instances.
Also give shash_free_instance() a more descriptive name to reflect that
it's only for instances with a single spawn, not for any instance.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the "cryptd" template to the new way of freeing instances, where
a ->free() method is installed to the instance struct itself. This
replaces the weakly-typed method crypto_template::free().
This will allow removing support for the old way of freeing instances.
Note that the 'default' case in cryptd_free() was already unreachable.
So, we aren't missing anything by keeping only the ahash and aead parts.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the "seqiv" template to the new way of freeing instances where a
->free() method is installed to the instance struct itself. Also remove
the unused implementation of the old way of freeing instances from the
"echainiv" template, since it's already using the new way too.
In doing this, also simplify the code by making the helper function
aead_geniv_alloc() install the ->free() method, instead of making seqiv
and echainiv do this themselves. This is analogous to how
skcipher_alloc_instance_simple() works.
This will allow removing support for the old way of freeing instances.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support to shash and ahash for the new way of freeing instances
(already used for skcipher, aead, and akcipher) where a ->free() method
is installed to the instance struct itself. These methods are more
strongly-typed than crypto_template::free(), which they replace.
This will allow removing support for the old way of freeing instances.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that crypto_init_spawn() is only called by crypto_grab_spawn(),
simplify things by moving its functionality into crypto_grab_spawn().
In the process of doing this, also be more consistent about when the
spawn and instance are updated, and remove the crypto_spawn::dropref
flag since now it's always set.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all the templates that need ahash spawns have been converted to
use crypto_grab_ahash() rather than look up the algorithm directly,
crypto_ahash_type is no longer used outside of ahash.c. Make it static.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove lots of helper functions that were previously used for
instantiating crypto templates, but are now unused:
- crypto_get_attr_alg() and similar functions looked up an inner
algorithm directly from a template parameter. These were replaced
with getting the algorithm's name, then calling crypto_grab_*().
- crypto_init_spawn2() and similar functions initialized a spawn, given
an algorithm. Similarly, these were replaced with crypto_grab_*().
- crypto_alloc_instance() and similar functions allocated an instance
with a single spawn, given the inner algorithm. These aren't useful
anymore since the crypto_grab_*() functions need the instance allocated first.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all users of single-block cipher spawns have been converted to
use 'struct crypto_cipher_spawn' rather than the less specifically typed
'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a
'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the xcbc template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making xcbc_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the vmac64 template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making vmac_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the cmac template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making cmac_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the cbcmac template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making cbcmac_create() allocate the instance directly
rather than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make skcipher_alloc_instance_simple() use the new function
crypto_grab_cipher() to initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the rfc7539 and rfc7539esp templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the ccm and ccm_base templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the gcm and gcm_base templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the authencesn template use the new function crypto_grab_ahash() to
initialize its ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the authenc template use the new function crypto_grab_ahash() to
initialize its ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the hmac template use the new function crypto_grab_shash() to
initialize its shash spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making hmac_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the cryptd template (in the hash case) use the new function
crypto_grab_shash() to initialize its shash spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making cryptd_create_hash() allocate the instance directly
rather than use cryptd_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the adiantum template use the new functions crypto_grab_cipher()
and crypto_grab_shash() to initialize its cipher and shash spawns.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, ahash spawns are initialized by using ahash_attr_alg() or
crypto_find_alg() to look up the ahash algorithm, then calling
crypto_init_ahash_spawn().
This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.
The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.
So, let's introduce crypto_grab_ahash() so that we can convert all
templates to the same way of initializing their spawns.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, shash spawns are initialized by using shash_attr_alg() or
crypto_alg_mod_lookup() to look up the shash algorithm, then calling
crypto_init_shash_spawn().
This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.
The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.
So, let's introduce crypto_grab_shash() so that we can convert all
templates to the same way of initializing their spawns.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, crypto_spawn::inst is first used temporarily to pass the
instance to crypto_grab_spawn(). Then crypto_init_spawn() overwrites it
with crypto_spawn::next, which shares the same union. Finally,
crypto_spawn::inst is set again when the instance is registered.
Make this less convoluted by just passing the instance as an argument to
crypto_grab_spawn() instead.
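Schematically (argument lists abridged):

  /* before: the instance is smuggled in through spawn->inst */
  spawn->inst = inst;
  err = crypto_grab_spawn(spawn, name, type, mask);

  /* after: the instance is an explicit argument */
  err = crypto_grab_spawn(spawn, inst, name, type, mask);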
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Initializing a crypto_akcipher_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_akcipher().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db4341017 ("crypto: adiantum - initialize crypto_spawn::inst")
So just make crypto_grab_akcipher() take the instance as an argument.
To keep the function call from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into pkcs1pad_create().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Initializing a crypto_aead_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_aead().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db4341017 ("crypto: adiantum - initialize crypto_spawn::inst")
So just make crypto_grab_aead() take the instance as an argument.
To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Initializing a crypto_skcipher_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db4341017 ("crypto: adiantum - initialize crypto_spawn::inst")
So just make crypto_grab_skcipher() take the instance as an argument.
To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To allow further simplifying template ->create() functions, make
crypto_grab_spawn() handle an ERR_PTR() name by passing back the error.
For most templates, this will allow the result of crypto_attr_alg_name()
to be passed directly to crypto_grab_*(), rather than first having to
assign it to a variable [where it can then potentially be misused, as it
was in the rfc7539 template prior to commit 5e27f38f1f ("crypto:
chacha20poly1305 - set cra_name correctly")] and check it for error.
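A sketch of the resulting call shape (illustrative):

  /* crypto_attr_alg_name() may return an ERR_PTR(); crypto_grab_spawn()
   * now simply passes that error back, so no separate check is needed */
  err = crypto_grab_spawn(&spawn->base, inst,
                          crypto_attr_alg_name(tb[1]), type, mask);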
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make crypto_drop_spawn() do nothing when the spawn hasn't been
initialized with an algorithm yet. This will allow simplifying error
handling in all the template ->create() functions, since on error they
will be able to just call their usual "free instance" function, rather
than having to handle dropping just the spawns that have been
initialized so far.
This does assume the spawn starts out zero-filled, but that's always the
case since instances are allocated with kzalloc(). And some other code
already assumes this anyway.
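The change itself is small; paraphrased:

  void crypto_drop_spawn(struct crypto_spawn *spawn)
  {
          if (!spawn->alg)  /* spawn was never grabbed: nothing to drop */
                  return;
          ...
  }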
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors. But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway. So they've now been removed.
Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRYPTO_TFM_RES_WEAK_KEY flag was apparently meant as a way to make
the ->setkey() functions provide more information about errors.
However, no one actually checks for this flag, which makes it pointless.
There are also no tests that verify that all algorithms actually set (or
don't set) it correctly.
This is also the last remaining CRYPTO_TFM_RES_* flag, which means that
it's the only thing still needing all the boilerplate code which
propagates these flags around from child => parent tfms.
And if someone ever needs to distinguish this error in the future (which
is somewhat unlikely, as it's been unneeded for a long time), it would
be much better to just define a new return value like -EKEYREJECTED.
That would be much simpler, less error-prone, and easier to test.
So just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
make the ->setkey() functions provide more information about errors.
However, no one actually checks for this flag, which makes it pointless.
Also, many algorithms fail to set this flag when given a bad length key.
Reviewing just the generic implementations, this is the case for
aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
many more in arch/*/crypto/ and drivers/crypto/.
Some algorithms can even set this flag when the key is the correct
length. For example, authenc and authencesn set it when the key payload
is malformed in any way (not just a bad length), the atmel-sha and ccree
drivers can set it if a memory allocation fails, and the chelsio driver
sets it for bad auth tag lengths, not just bad key lengths.
So even if someone actually wanted to start checking this flag (which
seems unlikely, since it's been unused for a long time), there would be
a lot of work needed to get it working correctly. But it would probably
be much better to go back to the drawing board and just define different
return values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
skcipher_walk_aead() is unused and is identical to
skcipher_walk_aead_encrypt(), so remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the skcipher_ialg_simple helper which fetches
the crypto_alg structure from a simple skcipher instance's spawn.
This allows us to remove the third argument from the function
skcipher_alloc_instance_simple.
In doing so the reference count to the algorithm is now maintained
by the Crypto API and the caller no longer needs to drop the alg
refcount.
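A sketch of the helper's shape, assuming the simple instance's context is its
cipher spawn:

  static inline struct crypto_alg *skcipher_ialg_simple(
          struct skcipher_instance *inst)
  {
          struct crypto_cipher_spawn *spawn = skcipher_instance_ctx(inst);

          return crypto_spawn_cipher_alg(spawn);
  }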
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes crypto_grab_spawn to retain the reference count
on the algorithm. This is because the caller needs to access the
algorithm parameters and without the reference count the algorithm
can be freed at any time.
The reference count will be subsequently dropped by the crypto API
once the instance has been registered. The helper crypto_drop_spawn
will also conditionally drop the reference count depending on whether
it has been registered.
Note that the code is actually added to crypto_init_spawn. However,
unless the caller activates this by setting spawn->dropref beforehand
then nothing happens. The only caller that sets dropref is currently
crypto_grab_spawn.
Once all legacy users of crypto_init_spawn disappear, then we can
kill the dropref flag.
Internally each instance will maintain a list of its spawns prior
to registration. The memory used by this list is shared with
other fields that are only used after registration. In order for
this to work a new flag spawn->registered is added to indicate
whether spawn->inst can be used.
Fixes: d6ef2f198d ("crypto: api - Add crypto_grab_spawn primitive")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some of the algorithm unregistration functions return -ENOENT when asked
to unregister a non-registered algorithm, while others always return 0
or always return void. But no users check the return value, except for
two of the bulk unregistration functions which print a message on error
but still always return 0 to their caller, and crypto_del_alg() which
calls crypto_unregister_instance() which always returns 0.
Since unregistering a non-registered algorithm is always a kernel bug
but there isn't anything callers should do to handle this situation at
runtime, let's simplify things by making all the unregistration
functions return void, and moving the error message into
crypto_unregister_alg() and upgrading it to a WARN().
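A sketch of the upgraded error handling (paraphrased, context abridged):

  void crypto_unregister_alg(struct crypto_alg *alg)
  {
          int ret;
          ...
          if (WARN(ret, "Algorithm %s is not registered",
                   alg->cra_driver_name))
                  return;
          ...
  }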
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an
algorithm that needs to be instantiated using a template will always get
the generic implementation, even when an accelerated one is available.
This happens because the extra self-tests for the accelerated
implementation allocate the generic implementation for comparison
purposes, and then crypto_alg_tested() for the generic implementation
"fulfills" the original request (i.e. sets crypto_larval::adult).
This patch fixes this by only fulfilling the original request if
we are currently the best outstanding larval as judged by the
priority. If we're not the best then we will ask all waiters on
that larval request to retry the lookup.
Note that this patch introduces a behaviour change when the module
providing the new algorithm is unregistered during the process.
Previously we would have failed with ENOENT; after the patch we
will instead redo the lookup.
Fixes: 9a8a6b3f09 ("crypto: testmgr - fuzz hashes against...")
Fixes: d435e10e67 ("crypto: testmgr - fuzz skciphers against...")
Fixes: 40153b10d9 ("crypto: testmgr - fuzz AEADs against...")
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'tpmdd-next-20191219' of git://git.infradead.org/users/jjs/linux-tpmdd
Pull tpm fixes from Jarkko Sakkinen:
"Bunch of fixes for rc3"
* tag 'tpmdd-next-20191219' of git://git.infradead.org/users/jjs/linux-tpmdd:
tpm/tpm_ftpm_tee: add shutdown call back
tpm: selftest: cleanup after unseal with wrong auth/policy test
tpm: selftest: add test covering async mode
tpm: fix invalid locking in NONBLOCKING mode
security: keys: trusted: fix lost handle flush
tpm_tis: reserve chip for duration of tpm_tis_core_init
KEYS: asymmetric: return ENOMEM if akcipher_request_alloc() fails
KEYS: remove CONFIG_KEYS_COMPAT
No error code was being set on this error path.
Cc: stable@vger.kernel.org
Fixes: ad4b1eb5fb ("KEYS: asym_tpm: Implement encryption operation [ver #2]")
Fixes: c08fed7371 ("KEYS: Implement encrypt, decrypt and sign for software asymmetric key [ver #2]")
Reviewed-by: James Morris <jamorris@linux.microsoft.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
This patch switches hmac over to the new init_tfm/exit_tfm interface
as opposed to cra_init/cra_exit. This way the shash API can make
sure that descsize does not exceed the maximum.
This patch also adds the API helper shash_alg_instance.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The shash interface supports a dynamic descsize field because of
the presence of fallbacks (it's just padlock-sha actually, perhaps
we can remove it one day). As it is the API does not verify the
setting of descsize at all. It is up to the individual algorithms
to ensure that descsize does not exceed the specified maximum value
of HASH_MAX_DESCSIZE (going above would cause stack corruption).
In order to allow the API to impose this limit directly, this patch
adds init_tfm/exit_tfm hooks to the shash_alg structure. We can
then verify the descsize setting in the API directly.
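With the hooks in place, the API-side verification can look roughly like this
(a sketch, error paths abridged):

  hash->descsize = alg->descsize;

  if (alg->init_tfm) {
          err = alg->init_tfm(hash);
          if (err)
                  return err;
          /* a fallback may have grown descsize; reject anything
           * beyond HASH_MAX_DESCSIZE to prevent stack corruption */
          if (WARN_ON(hash->descsize > HASH_MAX_DESCSIZE))
                  return -EINVAL;
  }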
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch explains the logic behind crypto_remove_spawns and its
underlying crypto_more_spawns.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently when a spawn is removed we will zap its alg field.
This is racy because the spawn could belong to an unregistered
instance which may dereference the spawn->alg field.
This patch fixes this by keeping spawn->alg constant and instead
adding a new spawn->dead field to indicate that a spawn is going
away.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function crypto_spawn_alg is racy because it drops the lock
before shooting the dying algorithm. The algorithm could disappear
altogether before we shoot it.
This patch fixes it by moving the shooting into the locked section.
Fixes: 6bfd48096f ("[CRYPTO] api: Added spawns")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We need to check whether spawn->alg is NULL under lock as otherwise
the algorithm could be removed from under us after we have checked
it and found it to be non-NULL. This could cause us to remove the
spawn from a non-existent list.
Fixes: 7ede5a5ba5 ("crypto: api - Fix crypto_drop_spawn crash...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As af_alg_release_parent may be called from BH context (most notably
due to an async request that only completes after socket closure,
or as reported here because of an RCU-delayed sk_destruct call), we
must use bh_lock_sock instead of lock_sock.
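Schematically, in af_alg_release_parent():

  /* may run in BH context, e.g. from an RCU-delayed sk_destruct */
  bh_lock_sock(sk);
  ...
  bh_unlock_sock(sk);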
Reported-by: syzbot+c2f1558d49e25cc36e5e@syzkaller.appspotmail.com
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Fixes: c840ac6af3 ("crypto: af_alg - Disallow bind/setkey/...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit 63d3578892 ("crypto: pcrypt - remove padata cpumask
notifier") this feature is unused, so get rid of it.
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey()
is pointless because it always points to setkey() in crypto/cipher.c.
->cit_decrypt_one() and ->cit_encrypt_one() are slightly less pointless,
since if the algorithm doesn't have an alignmask, they are set directly
to ->cia_encrypt() and ->cia_decrypt(). However, this "optimization"
isn't worthwhile because:
- The "cipher" algorithm type is the only algorithm still using crt_u,
so it's bloating every struct crypto_tfm for every algorithm type.
- If the algorithm has an alignmask, this "optimization" actually makes
things slower, as it causes 2 indirect calls per block rather than 1.
- It adds extra code complexity.
- Some templates already call ->cia_encrypt()/->cia_decrypt() directly
instead of going through ->cit_encrypt_one()/->cit_decrypt_one().
- The "cipher" algorithm type never gives optimal performance anyway.
For that, a higher-level type such as skcipher needs to be used.
Therefore, just remove the extra indirection, and make
crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and
crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c.
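The resulting interface is just three ordinary functions (declarations shown
for illustration):

  int crypto_cipher_setkey(struct crypto_cipher *tfm,
                           const u8 *key, unsigned int keylen);
  void crypto_cipher_encrypt_one(struct crypto_cipher *tfm,
                                 u8 *dst, const u8 *src);
  void crypto_cipher_decrypt_one(struct crypto_cipher *tfm,
                                 u8 *dst, const u8 *src);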
Also remove the unused function crypto_cipher_cast().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crt_u.compress (struct compress_tfm) is pointless because its two
fields, ->cot_compress() and ->cot_decompress(), always point to
crypto_compress() and crypto_decompress().
Remove this pointless indirection, and just make crypto_comp_compress()
and crypto_comp_decompress() be direct calls to what used to be
crypto_compress() and crypto_decompress().
Also remove the unused function crypto_comp_cast().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The whole point of using an AEAD over length-preserving encryption is
that the data is authenticated. However currently the fuzz tests don't
test any inauthentic inputs to verify that the data is actually being
authenticated. And only two algorithms ("rfc4543(gcm(aes))" and
"ccm(aes)") even have any inauthentic test vectors at all.
Therefore, update the AEAD fuzz tests to sometimes generate inauthentic
test vectors, either by generating a (ciphertext, AAD) pair without
using the key, or by mutating an authentic pair that was generated.
To avoid flakiness, only assume this works reliably if the auth tag is
at least 8 bytes. Also account for the rfc4106, rfc4309, and rfc7539esp
algorithms intentionally ignoring the last 8 AAD bytes, and for some
algorithms doing extra checks that result in EINVAL rather than EBADMSG.
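One way to mutate an authentic pair into an inauthentic one is to flip a
single random bit somewhere in the AAD or ciphertext (a sketch; buffer
handling simplified, names illustrative):

  /* flip one random bit so the authentication tag should no longer verify */
  pos = prandom_u32() % (aad_len + ct_len);
  buf[pos] ^= 1 << (prandom_u32() % 8);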
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for adding inauthentic input fuzz tests, which don't
require that a generic implementation of the algorithm be available,
refactor test_aead_vs_generic_impl() so that instead there's a
higher-level function test_aead_extra() which initializes a struct
aead_extra_tests_ctx and then calls test_aead_vs_generic_impl() with a
pointer to that struct.
As a bonus, this reduces stack usage.
Also switch from crypto_aead_alg(tfm)->maxauthsize to
crypto_aead_maxauthsize(), now that the latter is available in
<crypto/aead.h>.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The alignment bug in ghash_setkey() fixed by commit 5c6bc4dfa5
("crypto: ghash - fix unaligned memory access in ghash_setkey()")
wasn't reliably detected by the crypto self-tests on ARM because the
tests only set the keys directly from the test vectors.
To improve test coverage, update the tests to sometimes pass misaligned
keys to setkey(). This applies to shash, ahash, skcipher, and aead.
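A sketch of how a key can be deliberately misaligned before setkey() (keybuf
is a scratch buffer with at least keylen + 1 bytes; names illustrative):

  /* copy the key to an odd offset so it is no longer naturally aligned */
  memcpy(keybuf + 1, key, keylen);
  err = crypto_shash_setkey(tfm, keybuf + 1, keylen);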
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When checking two implementations of the same skcipher algorithm for
consistency, require that the minimum key size be the same, not just the
maximum key size. There's no good reason to allow different minimum key
sizes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently if the comparison fuzz tests encounter an encryption error
when generating an skcipher or AEAD test vector, they will still test
the decryption side (passing it the uninitialized ciphertext buffer)
and expect it to fail with the same error.
This is sort of broken because it's not well-defined usage of the API to
pass an uninitialized buffer, and furthermore in the AEAD case it's
acceptable for the decryption error to be EBADMSG (meaning "inauthentic
input") even if the encryption error was something else like EINVAL.
Fix this for skcipher by explicitly initializing the ciphertext buffer
on error, and for AEAD by skipping the decryption test on error.
Reported-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>
Fixes: d435e10e67 ("crypto: testmgr - fuzz skciphers against their generic implementation")
Fixes: 40153b10d9 ("crypto: testmgr - fuzz AEADs against their generic implementation")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The essiv and hmac templates refuse to use any hash algorithm that has a
->setkey() function, which includes not just algorithms that always need
a key, but also algorithms that optionally take a key.
Previously the only optionally-keyed hash algorithms in the crypto API
were non-cryptographic algorithms like crc32, so this didn't really
matter. But that's changed with BLAKE2 support being added. BLAKE2
should work with essiv and hmac, just like any other cryptographic hash.
Fix this by allowing the use of both algorithms without a ->setkey()
function and algorithms that have the OPTIONAL_KEY flag set.
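The relaxed condition can be expressed with a helper along these lines (a
sketch matching the described behaviour):

  static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
  {
          return crypto_shash_alg_has_setkey(alg) &&
                 !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
  }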
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher_extsize() now simply calls crypto_alg_extsize(). So
remove it and just use crypto_alg_extsize().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::decrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->decrypt.
Remove it and update crypto_skcipher_decrypt() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::encrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->encrypt.
Remove it and update crypto_skcipher_encrypt() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::setkey now always points to skcipher_setkey().
Simplify by removing this function pointer and instead just making
skcipher_setkey() be crypto_skcipher_setkey() directly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::keysize is now redundant since it always equals
crypto_skcipher_alg(tfm)->max_keysize.
Remove it and update crypto_skcipher_default_keysize() accordingly.
Also rename crypto_skcipher_default_keysize() to
crypto_skcipher_max_keysize() to clarify that it specifically returns
the maximum key size, not some unspecified "default".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::ivsize is now redundant since it always equals
crypto_skcipher_alg(tfm)->ivsize.
Remove it and update crypto_skcipher_ivsize() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update a comment to refer to crypto_alloc_skcipher() rather than
crypto_alloc_blkcipher() (the latter having been removed).
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We should not be modifying the original request's MAY_SLEEP flag
upon completion. It makes no sense to do so anyway.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 5068c7a883 ("crypto: pcrypt - Add pcrypt crypto...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto glue performed function prototype casting via macros to make
indirect calls to assembly routines. Instead of performing casts at the
call sites (which trips Control Flow Integrity prototype checking), switch
each prototype to a common standard set of arguments which allows the
removal of the existing macros. In order to keep pointer math unchanged,
internal casting between u128 pointers and u8 pointers is added.
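Concretely, the per-mode function pointer types collapse into one prototype
shape (simplified):

  /* every asm glue entry point now shares this shape; the glue code
   * casts between u8 * and u128 * internally where needed */
  typedef void (*common_glue_func_t)(const void *ctx, u8 *dst, const u8 *src);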
Co-developed-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the pcrypt template is used multiple times in an algorithm, then a
deadlock occurs because all pcrypt instances share the same
padata_instance, which completes requests in the order submitted. That
is, the inner pcrypt request waits for the outer pcrypt request while
the outer request is already waiting for the inner.
This patch fixes this by allocating a set of queues for each pcrypt
instance instead of using two global queues. In order to maintain
the existing user-space interface, the pinst structure remains global
so any sysfs modifications will apply to every pcrypt instance.
Note that when an update occurs we have to allocate memory for
every pcrypt instance. Should one of the allocations fail we
will abort the update without rolling back changes already made.
The new per-instance data structure is called padata_shell and is
essentially a wrapper around parallel_data.
Reproducer:

  #include <linux/if_alg.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main()
  {
          struct sockaddr_alg addr = {
                  .salg_type = "aead",
                  .salg_name = "pcrypt(pcrypt(rfc4106-gcm-aesni))"
          };
          int algfd, reqfd;
          char buf[32] = { 0 };

          algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
          bind(algfd, (void *)&addr, sizeof(addr));
          setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, 20);
          reqfd = accept(algfd, 0, 0);
          write(reqfd, buf, 32);
          read(reqfd, buf, 16);
  }
Reported-by: syzbot+56c7151cad94eec37c521f0e47d2eee53f9361c4@syzkaller.appspotmail.com
Fixes: 5068c7a883 ("crypto: pcrypt - Add pcrypt crypto parallelization wrapper")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On module unload of pcrypt we must unregister the crypto algorithms
first and then tear down the padata structure. As otherwise the
crypto algorithms are still alive and can be used while the padata
structure is being freed.
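A sketch of the corrected unload order (pcrypt_fini_padata() and the
pencrypt/pdecrypt instances follow the naming in the pcrypt code):

  static void __exit pcrypt_exit(void)
  {
          /* remove the algorithms first so no new requests can arrive */
          crypto_unregister_template(&pcrypt_tmpl);

          /* only then is it safe to tear down the padata machinery */
          pcrypt_fini_padata(&pencrypt);
          pcrypt_fini_padata(&pdecrypt);
  }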
Fixes: 5068c7a883 ("crypto: pcrypt - Add pcrypt crypto...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all the occurrences of FIELD_SIZEOF() with sizeof_field() except
at places where these are defined. Later patches will remove the unused
definition of FIELD_SIZEOF().
This patch is generated using following script:
  EXCLUDE_FILES="include/linux/stddef.h|include/linux/kernel.h"
  git grep -l -e "\bFIELD_SIZEOF\b" | while read file;
  do
          if [[ "$file" =~ $EXCLUDE_FILES ]]; then
                  continue
          fi
          sed -i -e 's/\bFIELD_SIZEOF\b/sizeof_field/g' $file;
  done
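For reference, sizeof_field() is defined in include/linux/stddef.h as:

  #define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))

so it yields the size of a struct member without needing an instance of the
struct, exactly like the FIELD_SIZEOF() macro it replaces.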
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Link: https://lore.kernel.org/r/20190924105839.110713-3-pankaj.laxminarayan.bharadiya@intel.com
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: David Miller <davem@davemloft.net> # for net
Use chacha20_setkey() and chacha12_setkey() from
<crypto/internal/chacha.h> instead of defining them again in
chacha_generic.c.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Another instance of CRYPTO_BLKCIPHER made it in just after it was
renamed to CRYPTO_SKCIPHER. Fix it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
kfree() already checks for a NULL pointer, so the explicit check before
calling it is unnecessary and can be removed.
Signed-off-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The TFM context can be renamed to a more appropriate name, and the local
variables as well, using 'tctx', which seems to be more common than
'mctx'.
The _setkey callback was the last one without the blake2b_ prefix,
rename that too.
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that there's only one call to blake2b_update, we can merge it into the
callback and simplify. The empty input check is split out, and the rest of
the code un-indented.
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The helper is trivial and called only once; inlining makes things simpler.
A comment ties it back to the idea behind the code.
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All the code for the param block has been inlined, and last_node and outlen
in the state are either unused or have become redundant due to other code.
Remove them.
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The keyed init writes the key bytes to the input buffer and does an
update. We can do that in two ways: fill the buffer and update
immediately, which is what the current blake2b_init_key does; any
following _update or _final will then continue from the updated state.
The other way is to write the key and only set the number of bytes to
process at the next _update or _final, i.e. lazy evaluation, which leads
to the simplified code in this patch.
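A sketch of the lazy variant (field and constant names illustrative, after
the blake2b code):

  if (tctx->keylen) {
          /* prefill the buffer with the key block; the next _update
           * or _final consumes it as ordinary input */
          memcpy(state->buf, tctx->key, BLAKE2B_KEYBYTES);
          state->buflen = BLAKE2B_BLOCKBYTES;
  }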
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The call chain from blake2b_init can be simplified because the param
block is effectively zeros, besides the key.
- blake2b_init0 zeroes state and sets IV
- blake2b_init sets up param block with defaults (key and some 1s)
- init with a key additionally writes it to the input buffer and recalculates the state
So the compact way is to zero out the state and initialize index 0 of
the state directly with the non-zero values and the key.
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
blake2b_final is called only once, so merge it into the crypto API callback
and simplify. This avoids the temporary buffer; the byte swap is done on
the internal buffer directly.
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all users of the deprecated ablkcipher interface have been
moved to the skcipher interface, ablkcipher is no longer used and
can be removed.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch constifies the alg list, since the list is never modified.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This implementation is the fastest available x86_64 implementation, and
unlike Sandy2x, it doesn't require use of the floating point registers at
all. Instead it makes use of BMI2 and ADX, available on recent
microarchitectures. The implementation was written by Armando
Faz-Hernández with contributions (upstream) from Samuel Neves and me,
in addition to further changes in the kernel implementation from us.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: - move to arch/x86/crypto
- wire into lib/crypto framework
- implement crypto API KPP hooks ]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Expose the generic Curve25519 library via the crypto API KPP interface.
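A minimal sketch of the underlying library calls that the KPP wrapper
builds on (the wrapper function name is hypothetical):

  #include <crypto/curve25519.h>

  static int example_compute_shared(u8 shared[CURVE25519_KEY_SIZE],
                                    const u8 peer_public[CURVE25519_KEY_SIZE])
  {
          u8 secret[CURVE25519_KEY_SIZE];
          u8 public[CURVE25519_KEY_SIZE];

          curve25519_generate_secret(secret);
          if (!curve25519_generate_public(public, secret))
                  return -EINVAL;
          /* public would be sent to the peer; combine our secret with
           * the peer's public key, rejecting an all-zero result */
          if (!curve25519(shared, secret, peer_public))
                  return -EINVAL;
          return 0;
  }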
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation of introducing KPP implementations of Curve25519, import
the set of test cases proposed by the Zinc patch set, but converted to
the KPP format.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These implementations from Samuel Neves support AVX and AVX-512VL.
Originally this used AVX-512F, but Skylake thermal throttling made
AVX-512VL more attractive and possible to do with a negligible difference.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: move to arch/x86/crypto, wire into lib/crypto framework]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Wire up our newly added Blake2s implementation via the shash API.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to use 128-bit integer arithmetic in C code, the architecture
needs to have declared support for it by setting ARCH_SUPPORTS_INT128,
and it requires a version of the toolchain that supports this at build
time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test
whether __SIZEOF_INT128__ is defined, since this is only the case for
compilers that can support 128-bit integers.
Let's fold this additional test into the Kconfig declaration of
ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles,
e.g., to decide whether a certain object needs to be included in the
first place.
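This makes Makefile constructs like the following possible (the object
name here is purely hypothetical):

  obj-$(CONFIG_ARCH_SUPPORTS_INT128) += foo_int128.o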
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a straight import of the OpenSSL/CRYPTOGAMS Poly1305 implementation for
MIPS authored by Andy Polyakov, a prior 64-bit only version of which had been
contributed by him to the OpenSSL project. The file 'poly1305-mips.pl' is taken
straight from this upstream GitHub repository [0] at commit
d22ade312a7af958ec955620b0d241cf42c37feb, and already contains all the changes
required to build it as part of a Linux kernel module.
[0] https://github.com/dot-asm/cryptogams
Co-developed-by: Andy Polyakov <appro@cryptogams.org>
Signed-off-by: Andy Polyakov <appro@cryptogams.org>
Co-developed-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement the arch init/update/final Poly1305 library routines in the
accelerated SIMD driver for x86 so they are accessible to users of
the Poly1305 library interface as well.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the dependency on the generic Poly1305 driver. Instead, depend
on the generic library so that we only reuse code without pulling in
the generic skcipher implementation as well.
While at it, remove the logic that prefers the non-SIMD path for short
inputs - this is no longer necessary after recent FPU handling changes
on x86.
Since this removes the last remaining user of the routines exported
by the generic shash driver, unexport them and make them static.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Expose the existing generic Poly1305 code via a init/update/final
library interface so that callers are not required to go through
the crypto API's shash abstraction to access it. At the same time,
make some preparations so that the library implementation can be
superseded by an accelerated arch-specific version in the future.
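A minimal usage sketch of the library interface (key, msg and msglen are
assumed to be provided by the caller):

  #include <crypto/poly1305.h>

  struct poly1305_desc_ctx desc;
  u8 tag[POLY1305_DIGEST_SIZE];

  poly1305_init(&desc, key);           /* 32-byte one-time key */
  poly1305_update(&desc, msg, msglen);
  poly1305_final(&desc, tag);          /* 16-byte authenticator */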
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation of exposing a Poly1305 library interface directly from
the accelerated x86 driver, align the state descriptor of the x86 code
with the one used by the generic driver. This is needed to make the
library interface unified between all implementations.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the core Poly1305 routines shared between the generic Poly1305
shash driver and the Adiantum and NHPoly1305 drivers into a separate
library so that using just these pieces does not pull in the crypto
API parts of the generic Poly1305 routine.
In a subsequent patch, we will augment this generic library with
init/update/final routines so that the Poly1305 algorithm can be used
directly without the need for the crypto API's shash abstraction.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all users of the generic ChaCha code have moved to the core
library, there is no longer a need for the generic ChaCha skcipher driver
to export parts of its implementation for reuse by other drivers. So drop
the exports, and make the symbols static.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This integrates the accelerated MIPS 32r2 implementation of ChaCha
into both the API and library interfaces of the kernel crypto stack.
The significance of this is that, in addition to becoming available
as an accelerated library implementation, it can also be used by
existing crypto API code such as Adiantum (for block encryption on
ultra low performance cores) or IPsec using chacha20poly1305. These
are use cases that have already opted into using the abstract crypto
API. In order to support Adiantum, the core assembler routine has
been adapted to take the round count as a function argument rather
than hardcoding it to 20.
Co-developed-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Wire the existing x86 SIMD ChaCha code into the new ChaCha library
interface, so that users of the library interface will get the
accelerated version when available.
Given that calls into the library API will always go through the
routines in this module if it is enabled, switch to static keys
to select the optimal implementation available (which may be none
at all, in which case we defer to the generic implementation for
all invocations).
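A sketch of the dispatch pattern (the SIMD body itself is elided):

  static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_simd);

  void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
                         unsigned int bytes, int nrounds)
  {
          /* defer to the generic library code when SIMD is unusable */
          if (!static_branch_likely(&chacha_use_simd) || !crypto_simd_usable())
                  return chacha_crypt_generic(state, dst, src, bytes, nrounds);

          /* SIMD path, bracketed by kernel_fpu_begin()/kernel_fpu_end() */
  }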
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation of extending the x86 ChaCha driver to also expose the ChaCha
library interface, drop the dependency on the chacha_generic crypto driver
as a non-SIMD fallback, and depend on the generic ChaCha library directly.
This way, we only pull in the code we actually need, without registering
a set of ChaCha skciphers that we will never use.
Since turning the FPU on and off is cheap these days, simplify the SIMD
routine by dropping the per-page yield, which makes for a cleaner switch
to the library API as well. This also allows us to invoke the skcipher
walk routines in non-atomic mode.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, our generic ChaCha implementation consists of a permute
function in lib/chacha.c that operates on the 64-byte ChaCha state
directly [and which is always included into the core kernel since it
is used by the /dev/random driver], and the crypto API plumbing to
expose it as a skcipher.
In order to support in-kernel users that need the ChaCha streamcipher
but have no need [or tolerance] for going through the abstractions of
the crypto API, let's expose the streamcipher bits via a library API
as well, in a way that permits the implementation to be superseded by
an architecture specific one if provided.
So move the streamcipher code into a separate module in lib/crypto,
and expose the init() and crypt() routines to users of the library.
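A minimal usage sketch of the resulting library interface (key, iv, src,
dst and len are assumed to be provided by the caller):

  #include <crypto/chacha.h>

  u32 state[16];

  chacha_init(state, key, iv);             /* key: 8 x u32, iv: 16 bytes */
  chacha_crypt(state, dst, src, len, 20);  /* nrounds = 20 for ChaCha20 */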
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation of introducing a set of crypto library interfaces, tidy
up the Makefile and split off the Kconfig symbols into a separate file.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If aead is built as a module along with cryptomgr, it creates a
dependency loop due to the dependency chain aead => crypto_null =>
cryptomgr => aead.
This is due to the presence of the AEAD geniv code. This code is
not really part of the AEAD API but simply support code for IV
generators such as seqiv. This patch moves the geniv code into
its own module thus breaking the dependency loop.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto API requires cryptomgr to be present for probing to work
so we need a softdep to ensure that cryptomgr is added to the
initramfs.
This was usually not a problem because until very recently it was not
practical to build the crypto API as a module, but with the recent work
to eliminate direct AES users this is now possible.
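The softdep is expressed with the usual module annotation:

  MODULE_SOFTDEP("pre: cryptomgr");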
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move existing code to trusted keys subsystem. Also, rename files with
"tpm" as suffix which provides the underlying implementation.
Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Switch to utilize common heap based tpm_buf code for TPM based trusted
and asymmetric keys rather than using stack based tpm1_buf code. Also,
remove tpm1_buf code.
Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Move tpm_buf code to common include/linux/tpm.h header so that it can
be reused via other subsystems like trusted keys etc.
Also rename trusted keys and asymmetric keys usage of TPM 1.x buffer
implementation to tpm1_buf to avoid any compilation errors.
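A usage sketch of the now-shared helpers (ordinal, payload and chip stand
in for caller-provided values):

  struct tpm_buf buf;
  int rc;

  rc = tpm_buf_init(&buf, TPM_TAG_RQU_COMMAND, ordinal);
  if (!rc) {
          tpm_buf_append_u32(&buf, payload);   /* command body */
          rc = tpm_send(chip, buf.data, tpm_buf_length(&buf));
          tpm_buf_destroy(&buf);
  }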
Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Now that the blkcipher algorithm type has been removed in favor of
skcipher, rename the crypto_blkcipher kernel module to crypto_skcipher,
and rename the config options accordingly:
CONFIG_CRYPTO_BLKCIPHER => CONFIG_CRYPTO_SKCIPHER
CONFIG_CRYPTO_BLKCIPHER2 => CONFIG_CRYPTO_SKCIPHER2
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all "blkcipher" algorithms have been converted to "skcipher",
remove the blkcipher algorithm type.
The skcipher (symmetric key cipher) algorithm type was introduced a few
years ago to replace both blkcipher and ablkcipher (synchronous and
asynchronous block cipher). The advantages of skcipher include:
- A much less confusing name, since none of these algorithm types have
ever actually been for raw block ciphers, but rather for all
length-preserving encryption modes including block cipher modes of
operation, stream ciphers, and other length-preserving modes.
- It unified blkcipher and ablkcipher into a single algorithm type
which supports both synchronous and asynchronous implementations.
Note, blkcipher already operated only on scatterlists, so the fact
that skcipher does too isn't a regression in functionality.
- Better type safety by using struct skcipher_alg, struct
crypto_skcipher, etc. instead of crypto_alg, crypto_tfm, etc.
- It sometimes simplifies the implementations of algorithms.
Also, the blkcipher API was no longer being tested.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the crypto_skcipher_type() function has been removed, there's
no reason to call the crypto_type struct for skciphers
"crypto_skcipher_type2". Rename it to simply "crypto_skcipher_type".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_has_skcipher() and crypto_has_skcipher2() do the same thing: they
check for the availability of an algorithm of type skcipher, blkcipher,
or ablkcipher, which also meets any non-type constraints the caller
specified. And they have exactly the same prototype.
Therefore, eliminate the redundancy by removing crypto_has_skcipher()
and renaming crypto_has_skcipher2() to crypto_has_skcipher().
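The surviving helper keeps the familiar probing signature, e.g.
(use_aes_cbc() is a placeholder):

  /* nonzero if some "cbc(aes)" skcipher implementation is available */
  if (crypto_has_skcipher("cbc(aes)", 0, 0))
          use_aes_cbc();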
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Test vectors for blake2b with various digest sizes. As the algorithm is
the same up to the digest calculation, the key and input data lengths are
distributed in a way that tests all combinations of the two over the
digest sizes.
Based on the suggestion from Eric, the following input sizes are tested:
[0, 1, 7, 15, 64, 247, 256]. The blake2b block size is 128, so both the
padded and the non-padded input buffers are covered.
            blake2b-160   blake2b-256   blake2b-384   blake2b-512
  ----------------------------------------------------------------
  len=0   | klen=0        klen=1        klen=32       klen=64
  len=1   | klen=32       klen=64       klen=0        klen=1
  len=7   | klen=64       klen=0        klen=1        klen=32
  len=15  | klen=1        klen=32       klen=64       klen=0
  len=64  | klen=0        klen=1        klen=32       klen=64
  len=247 | klen=32       klen=64       klen=0        klen=1
  len=256 | klen=64       klen=0        klen=1        klen=32
Where key:
- klen=0: empty key
- klen=1: 1 byte value 0x42, 'B'
- klen=32: first 32 bytes of the default key, sequence 00..1f
- klen=64: default key, sequence 00..3f
The unkeyed vectors are ordered before keyed, as this is required by
testmgr.
CC: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The patch brings support of several BLAKE2 variants (2b with various
digest lengths). The keyed digest is supported, using tfm->setkey call.
The in-tree user will be btrfs (for checksumming), we're going to use
the BLAKE2b-256 variant.
The code is the reference implementation taken from the official sources and
modified in terms of kernel coding style (whitespace, comments, uintXX_t
-> uXX types, removed unused prototypes and #ifdefs, removed testing
code, changed secure_zero_memory -> memzero_explicit, used own helpers
for unaligned reads/writes and rotations).
Further changes removed sanity checks of key length or output size,
these values are verified in the crypto API callbacks or hardcoded in
shash_alg and not exposed to users.
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The elliptic curve arithmetic library used by the EC-DH KPP implementation
assumes big endian byte order, and unconditionally reverses the byte
and word order of multi-limb quantities. On big endian systems, the byte
reordering is not necessary, while the word ordering needs to be retained.
So replace the __swab64() invocation with a call to be64_to_cpu() which
should do the right thing for both little and big endian builds.
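A sketch of the digit-swapping helper along these lines:

  static void ecc_swap_digits(const u64 *in, u64 *out, unsigned int ndigits)
  {
          const __be64 *src = (__force __be64 *)in;
          int i;

          /* be64_to_cpu() swaps bytes only on little endian builds,
           * so big endian builds keep their in-limb byte order while
           * the limb (word) order is still reversed */
          for (i = 0; i < ndigits; i++)
                  out[i] = be64_to_cpu(src[ndigits - 1 - i]);
  }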
Fixes: 3c4b23901a ("crypto: ecdh - Add ECDH software support")
Cc: <stable@vger.kernel.org> # v4.9+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the glue code for the PowerPC SPE implementations of AES-ECB,
AES-CBC, AES-CTR, and AES-XTS from the deprecated "blkcipher" API to the
"skcipher" API. This is needed in order for the blkcipher API to be
removed.
Tested with:
export ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu-
make mpc85xx_defconfig
cat >> .config << EOF
# CONFIG_MODULES is not set
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_AES_PPC_SPE=y
EOF
make olddefconfig
make -j32
qemu-system-ppc -M mpc8544ds -cpu e500 -nographic \
        -kernel arch/powerpc/boot/zImage \
        -append cryptomgr.fuzz_iterations=1000
Note that xts-ppc-spe still fails the comparison tests due to the lack
of ciphertext stealing support. This is not addressed by this patch.
This patch also cleans up the code by making ->encrypt() and ->decrypt()
call a common function for each of ECB, CBC, and XTS, and by using a
clearer way to compute the length to process at each step.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to speed up aegis128 processing even more, duplicate the init()
and final() routines as SIMD versions in their entirety. This results
in a 2x speedup on ARM Cortex-A57 for ~1500 byte packets (using AES
instructions).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of passing around an ops structure with function pointers,
which forces indirect calls to be used, refactor the code slightly
so we can use ordinary function calls. At the same time, switch to
a static key to decide whether or not the SIMD code path may be used.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the glue code for the SPARC64 DES opcodes implementations of
DES-ECB, DES-CBC, 3DES-ECB, and 3DES-CBC from the deprecated "blkcipher"
API to the "skcipher" API. This is needed in order for the blkcipher
API to be removed.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the glue code for the SPARC64 Camellia opcodes implementations
of Camellia-ECB and Camellia-CBC from the deprecated "blkcipher" API to
the "skcipher" API. This is needed in order for the blkcipher API to be
removed.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>