The ccm(aes) test fails when req->assoclen > ~240 bytes.
The problem is that the value assigned to auth_offset is wrong.
Since auth_offset is an unsigned char, its maximum value is 255, so
larger offsets silently wrap around. Fix it by making auth_offset an
unsigned int.
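A minimal sketch of the failure mode, assuming only the auth_offset
name from the text above; the surrounding request layout is not shown:

  /* An offset computed from assoclen can exceed 255 once
   * req->assoclen approaches ~240 (plus fixed header space);
   * an unsigned char then truncates it modulo 256. */
  unsigned char narrow = 260;     /* truncates to 4 */
  unsigned int  fixed  = 260;     /* keeps 260 */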
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This solves the following issues observed during self test when
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS is enabled.
1. Added a fallback for cbc, ctr and rfc3686 if req->nbytes is zero,
and for xts added a fallback case if req->nbytes is not a multiple of 16.
2. In case of cbc-aes, fixed a wrong IV update. When
chcr_cipher_fallback() is called, use the req->info pointer instead of
reqctx->iv.
3. In cbc-aes decryption there was a wrong result. This occurs when
chcr_cipher_fallback() is called from chcr_handle_cipher_resp(): the
IV (req->info) used in the fallback has already been wrongly updated.
So use the initial IV in this case.
4. In case of ctr-aes encryption a wrong result was observed. In
adjust_ctr_overflow() there is a condition which checks if
((bytes / AES_BLOCK_SIZE) > c), where c is the number of blocks which
can be processed without IV overflow. For the bytes above
(req->nbytes < 32, not a multiple of 16) this condition fails and the
2nd block is corrupted, as it requires the rolled-over IV. So add an
'=' to the condition to take care of this; see the sketch after this
list.
5. In rfc3686-ctr a wrong result was observed. This occurs when
chcr_cipher_fallback() is called from chcr_handle_cipher_resp().
Here too, copy the initial IV into the init_iv pointer to handle the
fallback case correctly.
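A hedged sketch of the item-4 change; only the comparison and the
meaning of 'c' come from the text above, the helper shape and the
counter location are assumptions:

  #include <linux/types.h>
  #include <crypto/aes.h>          /* AES_BLOCK_SIZE */
  #include <asm/unaligned.h>       /* get_unaligned_be32 */

  /* Will processing 'bytes' of CTR data wrap the 32-bit big-endian
   * counter held in the last four bytes of the IV? */
  static bool ctr_overflows(const u8 *iv, u32 bytes)
  {
          /* blocks left before the counter wraps to zero */
          u64 c = 0xffffffffULL - get_unaligned_be32(iv + 12) + 1;

          /* A trailing partial block consumes a counter value too:
           * for bytes = 31 and c = 1, bytes / AES_BLOCK_SIZE == 1 == c,
           * so '>' misses the overflow while '>=' catches it. */
          return (bytes / AES_BLOCK_SIZE) >= c;
  }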
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes two issues observed during self tests with
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled.
1. A gcm(aes) hang issue that happens during decryption.
2. rfc4106-gcm-aes-chcr encryption unexpectedly succeeded.
For gcm-aes decryption, the authtag is not mapped because
sg_nents_for_len() counts entries only up to assoclen + cryptlen -
authsize. So fix it by DMA-mapping the authtag as well.
Also replace sg_nents() with sg_nents_for_len() in aead_dma_unmap().
For rfc4106-gcm-aes-chcr, use crypto_ipsec_check_assoclen() to check
the validity of assoclen.
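A hedged sketch of the rfc4106 check; crypto_ipsec_check_assoclen()
is the real helper from <crypto/gcm.h> (it accepts only the
IPsec-valid assoclen values, 16 or 20), while the surrounding function
is an illustrative reconstruction:

  #include <crypto/aead.h>
  #include <crypto/gcm.h>

  static int chcr_aead_rfc4106_encrypt(struct aead_request *req)
  {
          /* Reject assoclen values no IPsec packet can produce, so the
           * "unexpectedly succeeded" self-test vector fails cleanly. */
          int err = crypto_ipsec_check_assoclen(req->assoclen);

          if (err)
                  return err;
          /* ... build and submit the work request as before ... */
          return 0;
  }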
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/crypto/chelsio/chcr_algo.c: In function 'chcr_device_init':
drivers/crypto/chelsio/chcr_algo.c:1440:18: warning:
variable 'adap' set but not used [-Wunused-but-set-variable]
commit 567be3a5d2 ("crypto: chelsio - Use multiple txq/rxq per tfm
to process the requests") introduced this unused variable.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch enables chcr to use multiple txq/rxq per tfm
to process the crypto requests. The txq/rxq are selected based
on cpu core-id.
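A hedged sketch of the selection idea; only the per-core policy comes
from the text, the function and parameter names are assumptions:

  #include <linux/smp.h>

  /* Spread work requests across the device's queues by submitting CPU,
   * so different cores stop serializing on one txq/rxq pair. The raw_
   * variant is fine here: the CPU id only picks a queue, so a migration
   * between this call and submission is harmless. */
  static unsigned int chcr_pick_queue(unsigned int nqueues)
  {
          return raw_smp_processor_id() % nqueues;
  }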
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Recalculate the IV only if it is needed, i.e. if the last request sent
to the hardware was partial, for aes-xts.
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
kctx_len = (ntohl(KEY_CONTEXT_CTX_LEN_V(aeadctx->key_ctx_hdr)) << 4)
- sizeof(chcr_req->key_ctx);
can't possibly be endian-safe. Look: ->key_ctx_hdr is __be32. And
KEY_CONTEXT_CTX_LEN_V is "shift up by 24 bits". On little-endian hosts it
sees
b0 b1 b2 b3
in memory, interprets that into b0 + (b1 << 8) + (b2 << 16) + (b3 << 24),
shifts up by 24, resulting in b0 << 24, does ntohl (byteswap on l-e),
gets b0 and shifts that up by 4. So we get b0 * 16 - sizeof(...).
Sounds reasonable, but on b-e we get
b3 + (b2 << 8) + (b1 << 16) + (b0 << 24), shift up by 24,
yielding b3 << 24, do ntohl (no-op on b-e) and then shift up by 4.
Resulting in b3 << 28 - sizeof(...), i.e. slightly under b3 * 256M.
Then we increase it some more and pass to alloc_skb() as size.
Somehow I doubt that we really want a quarter-gigabyte skb allocation
here...
Note that when you are building those values in
#define FILL_KEY_CTX_HDR(ck_size, mk_size, d_ck, opad, ctx_len) \
htonl(KEY_CONTEXT_VALID_V(1) | \
KEY_CONTEXT_CK_SIZE_V((ck_size)) | \
KEY_CONTEXT_MK_SIZE_V(mk_size) | \
KEY_CONTEXT_DUAL_CK_V((d_ck)) | \
KEY_CONTEXT_OPAD_PRESENT_V((opad)) | \
KEY_CONTEXT_SALT_PRESENT_V(1) | \
KEY_CONTEXT_CTX_LEN_V((ctx_len)))
ctx_len ends up in the first octet (i.e. b0 in the above), which
matches the current behaviour on l-e. If that's the intent, this
thing should've been
kctx_len = (KEY_CONTEXT_CTX_LEN_G(ntohl(aeadctx->key_ctx_hdr)) << 4)
- sizeof(chcr_req->key_ctx);
instead - fetching after ntohl() we get (b0 << 24) + (b1 << 16) + (b2 << 8) + b3,
shift it down by 24 (b0), resulting in b0 * 16 - sizeof(...) both on l-e and
on b-e.
PS: when sparse warns you about endianness problems, it might be worth checking
if there really is something wrong. And I don't mean "slap __force cast on it"...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The libkcapi test which causes a kernel panic is the
aead asynchronous vmsplice multiple test:
./bin/kcapi -v -d 4 -x 10 -c "ccm(aes)"
-q 4edb58e8d5eb6bc711c43a6f3693daebde2e5524f1b55297abb29f003236e43d
-t a7877c99 -n 674742abd0f5ba -k 2861fd0253705d7875c95ba8a53171b4
-a fb7bc304a3909e66e2e0c5ef952712dd884ce3e7324171369f2c5db1adc48c7d
This patch avoids DMA-mapping a zero-length sg, which causes the panic,
by using sg_nents_for_len(), which counts entries only up to a specific
length.
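A hedged sketch of the mapping change; sg_nents_for_len() and
dma_map_sg() are the real helpers, the req/dev names around them are
assumptions:

  #include <linux/scatterlist.h>
  #include <linux/dma-mapping.h>

  /* Count only the sg entries needed to cover the useful length, so a
   * trailing zero-length entry produced by vmsplice never reaches
   * dma_map_sg(). */
  int nents = sg_nents_for_len(req->src, req->assoclen + req->cryptlen);

  if (nents < 0)
          return nents;           /* list shorter than the length */
  if (!dma_map_sg(dev, req->src, nents, DMA_TO_DEVICE))
          return -ENOMEM;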
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The libkcapi "cbc(aes)" failed tests are
symmetric asynchronous cipher one shot multiple test,
symmetric asynchronous cipher stream multiple test,
Symmetric asynchronous cipher vmsplice multiple test
In this patch a wait_for_completion is added in the chcr_aes_encrypt function,
which completes when the response of comes from the hardware.
This adds serialization for encryption in cbc(aes) aio case.
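A hedged sketch of the serialization; only chcr_aes_encrypt() and the
wait come from the text, the completion plumbing and the hypothetical
chcr_submit_wr() hook are assumptions:

  #include <linux/completion.h>
  #include <crypto/skcipher.h>

  static int chcr_aes_encrypt(struct skcipher_request *req)
  {
          DECLARE_COMPLETION_ONSTACK(done);
          int err;

          err = chcr_submit_wr(req, &done);   /* hypothetical submit hook */
          if (err)
                  return err;
          /* cbc chains each request's last ciphertext block into the
           * next IV, so block until the response handler calls
           * complete(&done); overlapping aio submissions are thereby
           * serialized. */
          wait_for_completion(&done);
          return 0;
  }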
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors. But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway. So they've now been removed.
Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
make the ->setkey() functions provide more information about errors.
However, no one actually checks for this flag, which makes it pointless.
Also, many algorithms fail to set this flag when given a bad length key.
Reviewing just the generic implementations, this is the case for
aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
many more in arch/*/crypto/ and drivers/crypto/.
Some algorithms can even set this flag when the key is the correct
length. For example, authenc and authencesn set it when the key payload
is malformed in any way (not just a bad length), the atmel-sha and ccree
drivers can set it if a memory allocation fails, and the chelsio driver
sets it for bad auth tag lengths, not just bad key lengths.
So even if someone actually wanted to start checking this flag (which
seems unlikely, since it's been unused for a long time), there would be
a lot of work needed to get it working correctly. But it would probably
be much better to go back to the drawing board and just define different
return values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The chelsio crypto driver is casting 'struct crypto_aead' directly to
'struct crypto_tfm', which is incorrect because the crypto_tfm isn't the
first field of 'struct crypto_aead'. Consequently, the calls to
crypto_tfm_set_flags() are modifying some other field in the struct.
Also, the driver is setting CRYPTO_TFM_RES_BAD_KEY_LEN in
->setauthsize(), not just in ->setkey(). This is incorrect since this
flag is for bad key lengths, not for bad authentication tag lengths.
Fix these bugs by removing the broken crypto_tfm_set_flags() calls from
->setauthsize() and by fixing them in ->setkey().
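A hedged sketch of the correct pattern; crypto_aead_tfm() is the real
accessor, and the broken cast is reconstructed from the description:

  #include <crypto/aead.h>

  /* Wrong: 'struct crypto_tfm' is not the first field of
   * 'struct crypto_aead', so this cast pointed at unrelated memory:
   *
   *     crypto_tfm_set_flags((struct crypto_tfm *)aead, flags);
   *
   * Right: go through the accessor for the embedded crypto_tfm. */
  struct crypto_tfm *tfm = crypto_aead_tfm(aead);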
Fixes: 324429d741 ("chcr: Support for Chelsio's Crypto Hardware")
Cc: <stable@vger.kernel.org> # v4.9+
Cc: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The chcr driver was not using the number of channels from the LLD and
assumed that there are always two channels available. With this patch
chcr will use the number of channels as passed by cxgb4.
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 7a7ffe65c8 ("crypto: skcipher - Add top-level skcipher interface")
dated 20 August 2015 introduced the new skcipher API which is supposed to
replace both blkcipher and ablkcipher. While all consumers of the API have
been converted long ago, some producers of the ablkcipher remain, forcing
us to keep the ablkcipher support routines alive, along with the matching
code to expose [a]blkciphers via the skcipher API.
So switch this driver to the skcipher API, allowing us to finally drop the
ablkcipher code in the near future.
Cc: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace a couple of occurrences where the "aes-generic" cipher is
instantiated explicitly and only used for encryption of a single block.
Use AES library calls instead.
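A hedged sketch of the replacement pattern; aes_expandkey() and
aes_encrypt() are the real library calls from <crypto/aes.h>, the
key/dst/src names are placeholders:

  #include <crypto/aes.h>
  #include <linux/string.h>

  /* One-off single-block encryption without instantiating a full
   * "aes-generic" crypto_cipher transform. */
  struct crypto_aes_ctx ctx;
  int err = aes_expandkey(&ctx, key, keylen);

  if (err)
          return err;                     /* bad key length */
  aes_encrypt(&ctx, dst, src);            /* one 16-byte block */
  memzero_explicit(&ctx, sizeof(ctx));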
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The partial block should count as one and be reflected appropriately in
the IV: e.g. 499 bytes for AES CTR should count as 32 blocks rather
than 31, and the correct count value is updated in the output IV.
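A hedged sketch of the count; DIV_ROUND_UP is the standard kernel
macro, the variable names are illustrative:

  #include <linux/kernel.h>
  #include <crypto/aes.h>

  /* 499 bytes = 31 full blocks + 3 trailing bytes; the partial block
   * consumes a counter value too, so the IV must advance by 32. */
  u32 nblocks = DIV_ROUND_UP(nbytes, AES_BLOCK_SIZE);    /* 499 -> 32 */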
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Removed the unnecessary lock_chcr_dev protecting the device state
DETACH. The lock is not required to protect the I/O count.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The flags field in 'struct shash_desc' never actually does anything.
The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
However, no shash algorithm ever sleeps, making this flag a no-op.
With this being the case, inevitably some users who can't sleep wrongly
pass MAY_SLEEP. These would all need to be fixed if any shash algorithm
actually started sleeping. For example, the shash_ahash_*() functions,
which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
from the ahash API to the shash API. However, the shash functions are
called under kmap_atomic(), so actually they're assumed to never sleep.
Even if it turns out that some users do need preemption points while
hashing large buffers, we could easily provide a helper function
crypto_shash_update_large() which divides the data into smaller chunks
and calls crypto_shash_update() and cond_resched() for each chunk. It's
not necessary to have a flag in 'struct shash_desc', nor is it necessary
to make individual shash algorithms aware of this at all.
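A hedged sketch of that helper; it is not part of the crypto API, just
the chunk-and-resched idea spelled out, with an assumed 4096-byte
chunk size:

  #include <crypto/hash.h>
  #include <linux/kernel.h>
  #include <linux/sched.h>

  static int crypto_shash_update_large(struct shash_desc *desc,
                                       const u8 *data, unsigned int len)
  {
          while (len) {
                  unsigned int n = min(len, 4096U);
                  int err = crypto_shash_update(desc, data, n);

                  if (err)
                          return err;
                  data += n;
                  len -= n;
                  cond_resched();         /* preemption point per chunk */
          }
          return 0;
  }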
Therefore, remove shash_desc::flags, and document that the
crypto_shash_*() functions can be called from any context.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/crypto/chelsio/chcr_algo.c: In function 'chcr_device_init':
drivers/crypto/chelsio/chcr_algo.c:1371:18: warning:
variable 'adap' set but not used [-Wunused-but-set-variable]
It has not been used since commit a1c6fd4308 ("crypto: chelsio - Update
ntx queue received from cxgb4").
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
set_msg_len() may fail with -EOVERFLOW; the error should be propagated
to the caller.
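A hedged sketch of the propagation; set_msg_len() is the driver's CCM
B0 helper named above, and the call-site names are reconstructed, not
verbatim:

  /* set_msg_len() encodes the message length into the CCM B0 block and
   * returns -EOVERFLOW when cryptlen does not fit in the size-'csize'
   * length field; stop instead of silently ignoring it. */
  error = set_msg_len(b0, cryptlen, csize);
  if (error)
          return error;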
Fixes: 2debd3325e ("crypto: chcr - Add AEAD algos.")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove dead code related to internal IV generators, which are no longer
used since they've been replaced with the "seqiv" and "echainiv"
templates. The removed code includes:
- The "givcipher" (GIVCIPHER) algorithm type. No algorithms are
registered with this type anymore, so it's unneeded.
- The "const char *geniv" member of aead_alg, ablkcipher_alg, and
blkcipher_alg. A few algorithms still set this, but it isn't used
anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
Just hardcode "<default>" or "<none>" in those cases.
- The 'skcipher_givcrypt_request' structure, which is never used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the error counter increment in the AEAD decrypt operation when
tag validation is done in the driver instead of the hardware.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
chcr receives "CXGB4_STATE_DETACH" event on PCI Shutdown.
Wait for processing of inflight request and Mark the device unavailable.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pass the DMA address to functions by value instead of by pointer.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Send input as IV | AAD | Data. This allows sending the IV as immediate
data and creates space in the work request to add more DMA-mapped
entries.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Trivial fix to clean up various indentation issues.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update cxgb4 to send the number of Tx queues created in the lldinfo
struct, and use the same ntxq in the chcr driver.
This patch depends on following commit
commit add92a817e
"Fix memory corruption in DMA Mapped buffers"
v2:
Free txq_info in error case as pointed by Lino Sanfilippo.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
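A hedged sketch of the conversion pattern; the APIs are the real
sync-skcipher interface from <crypto/skcipher.h>, the algorithm name
is a placeholder:

  #include <crypto/skcipher.h>
  #include <linux/err.h>

  struct crypto_sync_skcipher *tfm =
          crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);

  if (IS_ERR(tfm))
          return PTR_ERR(tfm);
  /* The request is now a fixed-size stack object instead of a VLA
   * sized at runtime by crypto_skcipher_reqsize(). */
  SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
  skcipher_request_set_sync_tfm(req, tfm);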
Cc: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the PCI Id in the "cpl_rx_phys_dsgl" header. In case pci_chan_id
and tx_chan_id are not derived from the same queue, the hardware can
send the request completion indication before completing the DMA
transfer.
Herbert, it would be good if this fix could be merged to the stable
tree. For the 4.14 kernel, it requires some updates to avoid merge
conflicts.
Cc: <stable@vger.kernel.org>
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD. But this is
redundant with the C structure type ('struct aead_alg'), and
crypto_register_aead() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the aead algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some ahash algorithms set .cra_type = &crypto_ahash_type. But this is
redundant with the C structure type ('struct ahash_alg'), and
crypto_register_ahash() already sets the .cra_type automatically.
Apparently the useless assignment has just been copy+pasted around.
So, remove the useless assignment from all the ahash algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Many ahash algorithms set .cra_flags = CRYPTO_ALG_TYPE_AHASH. But this
is redundant with the C structure type ('struct ahash_alg'), and
crypto_register_ahash() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the ahash algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Extend the memory allocated for the IV to include the B0 block, and
DMA-map it in a single operation.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Send the IV in the WR as immediate data instead of a DMA-mapped entry
for cipher.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In chcr_authenc_setkey and chcr_aead_digest_null_setkey we save
pointers to the authenc keys in local variables of type
struct crypto_authenc_keys and we don't zeroize them after use.
Fix this and don't leak pointers to the authenc keys.
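A hedged sketch of the fix; crypto_authenc_extractkeys() and
memzero_explicit() are the real helpers, the context programming is
elided:

  #include <crypto/authenc.h>
  #include <linux/string.h>

  struct crypto_authenc_keys keys;
  int err = crypto_authenc_extractkeys(&keys, key, keylen);

  if (err)
          return err;
  /* ... program keys.authkey / keys.enckey into the context ... */
  /* keys holds pointers into the caller's key blob; wipe the stack
   * copy so it does not outlive the setkey call. */
  memzero_explicit(&keys, sizeof(keys));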
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Send multiple WRs to the hardware when the number of entries received
in the scatter list cannot be sent in a single request.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We use ctr(aes) as a fallback for rfc3686(ctr) requests. Send the
updated IV to the fallback path.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CBC decryption requires the last ciphertext block as the next IV. In
case the src/dst buffers are the same, that block will be replaced by
plaintext. This patch copies the last block before sending the request
to the hardware.
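A hedged sketch of the copy; sg_pcopy_to_buffer() and sg_nents() are
the real helpers, the names around them are assumptions:

  #include <linux/scatterlist.h>
  #include <crypto/aes.h>

  u8 iv_next[AES_BLOCK_SIZE];

  /* For in-place decryption the last ciphertext block is the next IV,
   * but the hardware will overwrite it with plaintext; save it before
   * submitting the request. */
  sg_pcopy_to_buffer(req->src, sg_nents(req->src), iv_next,
                     AES_BLOCK_SIZE, req->cryptlen - AES_BLOCK_SIZE);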
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Function aead_ccm_validate_input is local to the source and does not
need to be in global scope, so make it static.
Cleans up sparse warning:
drivers/crypto/chelsio/chcr_algo.c:2627:5: warning: symbol
'aead_ccm_validate_input' was not declared. Should it be static?
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sg_nents_xlen() will take care of a zero-length sg list.
Remove the destination sg list size-zero check.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add ctr and sha algorithm combinations in authenc mode.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Skip decrypt operation on IV received from HW for last request.
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>