linux_dsm_epyc7002/security/integrity/ima/ima_crypto.c

/*
 * Copyright (C) 2005,2006,2007,2008 IBM Corporation
 *
 * Authors:
 * Mimi Zohar <zohar@us.ibm.com>
 * Kylene Hall <kjhall@us.ibm.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, version 2 of the License.
 *
 * File: ima_crypto.c
 *	Calculates md5/sha1 file hash, template hash, boot-aggregate hash
 */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/moduleparam.h>
#include <linux/ratelimit.h>
#include <linux/file.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <crypto/hash.h>
#include "ima.h"

/* minimum file size for ahash use */
static unsigned long ima_ahash_minsize;
module_param_named(ahash_minsize, ima_ahash_minsize, ulong, 0644);
MODULE_PARM_DESC(ahash_minsize, "Minimum file size for ahash use");

/* default is 0 - 1 page. */
static int ima_maxorder;
static unsigned int ima_bufsize = PAGE_SIZE;
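
/*
 * Set the ahash working-buffer size: memparse() accepts plain bytes or
 * k/m/g suffixes, the result is rounded up to a whole page order, and
 * any order at or above MAX_ORDER is rejected.
 */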
static int param_set_bufsize(const char *val, const struct kernel_param *kp)
{
	unsigned long long size;
	int order;

	size = memparse(val, NULL);
	order = get_order(size);
	if (order >= MAX_ORDER)
		return -EINVAL;
	ima_maxorder = order;
	ima_bufsize = PAGE_SIZE << order;
	return 0;
}

static const struct kernel_param_ops param_ops_bufsize = {
	.set = param_set_bufsize,
	.get = param_get_uint,
};
#define param_check_bufsize(name, p) __param_check(name, p, unsigned int)

module_param_named(ahash_bufsize, ima_bufsize, bufsize, 0644);
MODULE_PARM_DESC(ahash_bufsize, "Maximum ahash buffer size");
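
/*
 * Default transforms for ima_hash_algo: the synchronous shash is
 * allocated once at boot in ima_init_crypto(), the asynchronous ahash
 * lazily on first use in ima_alloc_atfm().
 */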
static struct crypto_shash *ima_shash_tfm;
static struct crypto_ahash *ima_ahash_tfm;
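
/*
 * Allocate the shash for the default algorithm, which is selected with
 * the ima_hash= boot parameter.  An allocation failure is logged and
 * the error returned to the IMA initialization code.
 */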
int __init ima_init_crypto(void)
{
	long rc;

	ima_shash_tfm = crypto_alloc_shash(hash_algo_name[ima_hash_algo], 0, 0);
	if (IS_ERR(ima_shash_tfm)) {
		rc = PTR_ERR(ima_shash_tfm);
		pr_err("Can not allocate %s (reason: %ld)\n",
		       hash_algo_name[ima_hash_algo], rc);
		return rc;
	}

	pr_info("Allocated hash algorithm: %s\n",
		hash_algo_name[ima_hash_algo]);
	return 0;
}
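
/*
 * Return a shash tfm for @algo.  The pre-allocated default tfm is
 * reused when @algo is, or falls back to, ima_hash_algo; any other
 * algorithm gets a fresh allocation, released via ima_free_tfm().
 */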
static struct crypto_shash *ima_alloc_tfm(enum hash_algo algo)
{
	struct crypto_shash *tfm = ima_shash_tfm;
	int rc;

	if (algo < 0 || algo >= HASH_ALGO__LAST)
		algo = ima_hash_algo;

	if (algo != ima_hash_algo) {
		tfm = crypto_alloc_shash(hash_algo_name[algo], 0, 0);
		if (IS_ERR(tfm)) {
			rc = PTR_ERR(tfm);
			pr_err("Can not allocate %s (reason: %d)\n",
			       hash_algo_name[algo], rc);
		}
	}
	return tfm;
}

static void ima_free_tfm(struct crypto_shash *tfm)
{
	if (tfm != ima_shash_tfm)
		crypto_free_shash(tfm);
}

/**
 * ima_alloc_pages() - Allocate contiguous pages.
 * @max_size: Maximum amount of memory to allocate.
 * @allocated_size: Returned size of the actual allocation.
 * @last_warn: Whether the final zero-order allocation may warn.
 *
 * Allocates opportunistically: first try the order covering max_size, then
 * step the order down until order zero is reached.  Allocation warnings are
 * suppressed throughout unless last_warn is set, and last_warn only affects
 * the final zero-order attempt.
 *
 * By default ima_maxorder is 0, which makes this equivalent to a single-page
 * kmalloc(GFP_KERNEL) allocation.
 *
 * Return: pointer to the allocated memory, or NULL on failure.
 */
static void *ima_alloc_pages(loff_t max_size, size_t *allocated_size,
			     int last_warn)
{
	void *ptr;
	int order = ima_maxorder;
	gfp_t gfp_mask = __GFP_RECLAIM | __GFP_NOWARN | __GFP_NORETRY;

	if (order)
		order = min(get_order(max_size), order);

	for (; order; order--) {
		ptr = (void *)__get_free_pages(gfp_mask, order);
		if (ptr) {
			*allocated_size = PAGE_SIZE << order;
			return ptr;
		}
	}

	/* order is zero - one page */

	gfp_mask = GFP_KERNEL;

	if (!last_warn)
		gfp_mask |= __GFP_NOWARN;

	ptr = (void *)__get_free_pages(gfp_mask, 0);
	if (ptr) {
		*allocated_size = PAGE_SIZE;
		return ptr;
	}

	*allocated_size = 0;
	return NULL;
}

/**
 * ima_free_pages() - Free pages allocated by ima_alloc_pages().
 * @ptr:  Pointer to allocated pages.
 * @size: Size of allocated buffer.
 */
static void ima_free_pages(void *ptr, size_t size)
{
	if (!ptr)
		return;
	free_pages((unsigned long)ptr, get_order(size));
}
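
/*
 * Asynchronous counterpart of ima_alloc_tfm().  The tfm for the default
 * algorithm is allocated lazily on first use and cached in ima_ahash_tfm;
 * other algorithms are allocated per call and released via ima_free_atfm().
 */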
static struct crypto_ahash *ima_alloc_atfm(enum hash_algo algo)
{
	struct crypto_ahash *tfm = ima_ahash_tfm;
	int rc;

	if (algo < 0 || algo >= HASH_ALGO__LAST)
		algo = ima_hash_algo;

	if (algo != ima_hash_algo || !tfm) {
		tfm = crypto_alloc_ahash(hash_algo_name[algo], 0, 0);
		if (!IS_ERR(tfm)) {
			if (algo == ima_hash_algo)
				ima_ahash_tfm = tfm;
		} else {
			rc = PTR_ERR(tfm);
			pr_err("Can not allocate %s (reason: %d)\n",
			       hash_algo_name[algo], rc);
		}
	}
	return tfm;
}

static void ima_free_atfm(struct crypto_ahash *tfm)
{
	if (tfm != ima_ahash_tfm)
		crypto_free_ahash(tfm);
}
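
/*
 * Wait for an asynchronous crypto request to complete; failures are
 * logged rate-limited, since this helper runs once per data chunk on
 * the file-hash path.
 */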
static inline int ahash_wait(int err, struct crypto_wait *wait)
{
	err = crypto_wait_req(err, wait);

	if (err)
		pr_crit_ratelimited("ahash calculation failed: err: %d\n", err);

	return err;
}
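
/*
 * Compute a file hash with the asynchronous API.  When a second buffer
 * can be allocated, reads and hash updates are pipelined: the next
 * chunk is read into one buffer while the previous chunk is still
 * being hashed from the other.  With a single buffer, each update must
 * complete before the buffer is reused for the next read.
 */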
static int ima_calc_file_hash_atfm(struct file *file,
				   struct ima_digest_data *hash,
				   struct crypto_ahash *tfm)
{
	loff_t i_size, offset;
	char *rbuf[2] = { NULL, };
	int rc, rbuf_len, active = 0, ahash_rc = 0;
	struct ahash_request *req;
	struct scatterlist sg[1];
	struct crypto_wait wait;
	size_t rbuf_size[2];

	hash->length = crypto_ahash_digestsize(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	crypto_init_wait(&wait);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				   CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);

	rc = ahash_wait(crypto_ahash_init(req), &wait);
	if (rc)
		goto out1;

	i_size = i_size_read(file_inode(file));

	if (i_size == 0)
		goto out2;

	/*
	 * Try to allocate maximum size of memory.
	 * Fail if even a single page cannot be allocated.
	 */
	rbuf[0] = ima_alloc_pages(i_size, &rbuf_size[0], 1);
	if (!rbuf[0]) {
		rc = -ENOMEM;
		goto out1;
	}

	/* Only allocate one buffer if that is enough. */
	if (i_size > rbuf_size[0]) {
		/*
		 * Try to allocate secondary buffer.  If that fails, fall back
		 * to single buffering.  Use the previous allocation size as a
		 * baseline for the possible allocation size.
		 */
		rbuf[1] = ima_alloc_pages(i_size - rbuf_size[0],
					  &rbuf_size[1], 0);
	}

	for (offset = 0; offset < i_size; offset += rbuf_len) {
		if (!rbuf[1] && offset) {
			/* Not using two buffers, and it is not the first
			 * read/request, wait for the completion of the
			 * previous ahash_update() request.
			 */
			rc = ahash_wait(ahash_rc, &wait);
			if (rc)
				goto out3;
		}
		/* read buffer */
		rbuf_len = min_t(loff_t, i_size - offset, rbuf_size[active]);
		rc = integrity_kernel_read(file, offset, rbuf[active],
					   rbuf_len);
		if (rc != rbuf_len)
			goto out3;

		if (rbuf[1] && offset) {
			/* Using two buffers, and it is not the first
			 * read/request, wait for the completion of the
			 * previous ahash_update() request.
			 */
			rc = ahash_wait(ahash_rc, &wait);
			if (rc)
				goto out3;
		}

		sg_init_one(&sg[0], rbuf[active], rbuf_len);
		ahash_request_set_crypt(req, sg, NULL, rbuf_len);

		ahash_rc = crypto_ahash_update(req);

		if (rbuf[1])
			active = !active; /* swap buffers, if we use two */
	}
	/* wait for the last update request to complete */
	rc = ahash_wait(ahash_rc, &wait);
out3:
	ima_free_pages(rbuf[0], rbuf_size[0]);
	ima_free_pages(rbuf[1], rbuf_size[1]);
out2:
	if (!rc) {
		ahash_request_set_crypt(req, NULL, hash->digest, 0);
		rc = ahash_wait(crypto_ahash_final(req), &wait);
	}
out1:
	ahash_request_free(req);
	return rc;
}

static int ima_calc_file_ahash(struct file *file, struct ima_digest_data *hash)
{
	struct crypto_ahash *tfm;
	int rc;

	tfm = ima_alloc_atfm(hash->algo);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = ima_calc_file_hash_atfm(file, hash, tfm);

	ima_free_atfm(tfm);

	return rc;
}
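
/*
 * Synchronous file hash: read the file one PAGE_SIZE chunk at a time
 * and feed each chunk to the shash.  A short or zero-length read ends
 * the loop early.
 */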
static int ima_calc_file_hash_tfm(struct file *file,
				  struct ima_digest_data *hash,
				  struct crypto_shash *tfm)
{
	loff_t i_size, offset = 0;
	char *rbuf;
	int rc;
	SHASH_DESC_ON_STACK(shash, tfm);

	shash->tfm = tfm;

	hash->length = crypto_shash_digestsize(tfm);

	rc = crypto_shash_init(shash);
	if (rc != 0)
		return rc;

	i_size = i_size_read(file_inode(file));

	if (i_size == 0)
		goto out;

	rbuf = kzalloc(PAGE_SIZE, GFP_KERNEL);
	if (!rbuf)
		return -ENOMEM;

	while (offset < i_size) {
		int rbuf_len;

		rbuf_len = integrity_kernel_read(file, offset, rbuf, PAGE_SIZE);
		if (rbuf_len < 0) {
			rc = rbuf_len;
			break;
		}
		if (rbuf_len == 0)
			break;
		offset += rbuf_len;

		rc = crypto_shash_update(shash, rbuf, rbuf_len);
		if (rc)
			break;
	}
	kfree(rbuf);
out:
	if (!rc)
		rc = crypto_shash_final(shash, hash->digest);
	return rc;
}

static int ima_calc_file_shash(struct file *file, struct ima_digest_data *hash)
{
	struct crypto_shash *tfm;
	int rc;

	tfm = ima_alloc_tfm(hash->algo);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = ima_calc_file_hash_tfm(file, hash, tfm);

	ima_free_tfm(tfm);

	return rc;
}

/*
 * ima_calc_file_hash - calculate file hash
 *
 * Asynchronous hash (ahash) allows using HW acceleration for calculating
 * a hash.  ahash performance varies for different data sizes on different
 * crypto accelerators.  shash performance might be better for smaller files.
 * The 'ima.ahash_minsize' module parameter allows specifying the best
 * minimum file size for using ahash on the system.
 *
 * If the ima.ahash_minsize parameter is not specified, this function uses
 * shash for the hash calculation.  If ahash fails, it falls back to using
 * shash.
 */
int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
{
	loff_t i_size;
	int rc;
	struct file *f = file;
	bool new_file_instance = false, modified_flags = false;

	/*
	 * For consistency, fail files opened with the O_DIRECT flag on
	 * filesystems mounted with/without the DAX option.
	 */
	if (file->f_flags & O_DIRECT) {
		hash->length = hash_digest_size[ima_hash_algo];
		hash->algo = ima_hash_algo;
		return -EINVAL;
	}

	/* Open a new file instance in O_RDONLY if we cannot read */
	if (!(file->f_mode & FMODE_READ)) {
		int flags = file->f_flags & ~(O_WRONLY | O_APPEND |
				O_TRUNC | O_CREAT | O_NOCTTY | O_EXCL);
		flags |= O_RDONLY;
		f = dentry_open(&file->f_path, flags, file->f_cred);
		if (IS_ERR(f)) {
			/*
			 * Cannot open the file again, let's modify f_flags
			 * of the original and continue.
			 */
			pr_info_ratelimited("Unable to reopen file for reading.\n");
			f = file;
			f->f_flags |= FMODE_READ;
			modified_flags = true;
		} else {
			new_file_instance = true;
		}
	}

	i_size = i_size_read(file_inode(f));

	if (ima_ahash_minsize && i_size >= ima_ahash_minsize) {
		rc = ima_calc_file_ahash(f, hash);
		if (!rc)
			goto out;
	}

	rc = ima_calc_file_shash(f, hash);
out:
	if (new_file_instance)
		fput(f);
	else if (modified_flags)
		f->f_flags &= ~FMODE_READ;
	return rc;
}

/*
 * Calculate the hash of template data
 */
static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data,
					 struct ima_template_desc *td,
					 int num_fields,
					 struct ima_digest_data *hash,
					 struct crypto_shash *tfm)
{
	SHASH_DESC_ON_STACK(shash, tfm);
	int rc, i;

	shash->tfm = tfm;

	hash->length = crypto_shash_digestsize(tfm);

	rc = crypto_shash_init(shash);
	if (rc != 0)
		return rc;

	for (i = 0; i < num_fields; i++) {
		u8 buffer[IMA_EVENT_NAME_LEN_MAX + 1] = { 0 };
		u8 *data_to_hash = field_data[i].data;
		u32 datalen = field_data[i].len;
		u32 datalen_to_hash =
		    !ima_canonical_fmt ? datalen : cpu_to_le32(datalen);

		if (strcmp(td->name, IMA_TEMPLATE_IMA_NAME) != 0) {
			rc = crypto_shash_update(shash,
						(const u8 *) &datalen_to_hash,
						sizeof(datalen_to_hash));
			if (rc)
				break;
		} else if (strcmp(td->fields[i]->field_id, "n") == 0) {
			memcpy(buffer, data_to_hash, datalen);
			data_to_hash = buffer;
			datalen = IMA_EVENT_NAME_LEN_MAX + 1;
		}
		rc = crypto_shash_update(shash, data_to_hash, datalen);
		if (rc)
			break;
	}

	if (!rc)
		rc = crypto_shash_final(shash, hash->digest);

	return rc;
}

int ima_calc_field_array_hash(struct ima_field_data *field_data,
			      struct ima_template_desc *desc, int num_fields,
			      struct ima_digest_data *hash)
{
	struct crypto_shash *tfm;
	int rc;

	tfm = ima_alloc_tfm(hash->algo);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = ima_calc_field_array_hash_tfm(field_data, desc, num_fields,
					   hash, tfm);

	ima_free_tfm(tfm);

	return rc;
}
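
/*
 * Hash a contiguous in-memory buffer with the asynchronous API: one
 * scatterlist entry covers the whole buffer, so a single update
 * followed by a final is sufficient.
 */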
static int calc_buffer_ahash_atfm(const void *buf, loff_t len,
				  struct ima_digest_data *hash,
				  struct crypto_ahash *tfm)
{
	struct ahash_request *req;
	struct scatterlist sg;
	struct crypto_wait wait;
	int rc, ahash_rc = 0;

	hash->length = crypto_ahash_digestsize(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	crypto_init_wait(&wait);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				   CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);

	rc = ahash_wait(crypto_ahash_init(req), &wait);
	if (rc)
		goto out;

	sg_init_one(&sg, buf, len);
	ahash_request_set_crypt(req, &sg, NULL, len);

	ahash_rc = crypto_ahash_update(req);

	/* wait for the update request to complete */
	rc = ahash_wait(ahash_rc, &wait);
	if (!rc) {
		ahash_request_set_crypt(req, NULL, hash->digest, 0);
		rc = ahash_wait(crypto_ahash_final(req), &wait);
	}
out:
	ahash_request_free(req);
	return rc;
}

static int calc_buffer_ahash(const void *buf, loff_t len,
			     struct ima_digest_data *hash)
{
	struct crypto_ahash *tfm;
	int rc;

	tfm = ima_alloc_atfm(hash->algo);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = calc_buffer_ahash_atfm(buf, len, hash, tfm);

	ima_free_atfm(tfm);

	return rc;
}
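
/*
 * Synchronous buffer hash; the buffer is walked in PAGE_SIZE steps,
 * apparently to bound the size of each crypto_shash_update() call.
 */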
static int calc_buffer_shash_tfm(const void *buf, loff_t size,
				 struct ima_digest_data *hash,
				 struct crypto_shash *tfm)
{
	SHASH_DESC_ON_STACK(shash, tfm);
	unsigned int len;
	int rc;

	shash->tfm = tfm;

	hash->length = crypto_shash_digestsize(tfm);

	rc = crypto_shash_init(shash);
	if (rc != 0)
		return rc;

	while (size) {
		len = size < PAGE_SIZE ? size : PAGE_SIZE;
		rc = crypto_shash_update(shash, buf, len);
		if (rc)
			break;
		buf += len;
		size -= len;
	}

	if (!rc)
		rc = crypto_shash_final(shash, hash->digest);
	return rc;
}

static int calc_buffer_shash(const void *buf, loff_t len,
			     struct ima_digest_data *hash)
{
	struct crypto_shash *tfm;
	int rc;

	tfm = ima_alloc_tfm(hash->algo);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = calc_buffer_shash_tfm(buf, len, hash, tfm);

	ima_free_tfm(tfm);
	return rc;
}
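
/*
 * Buffer counterpart of ima_calc_file_hash(): use ahash for buffers of
 * at least ima.ahash_minsize bytes and fall back to shash if ahash is
 * unavailable or fails.
 */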
int ima_calc_buffer_hash(const void *buf, loff_t len,
			 struct ima_digest_data *hash)
{
	int rc;

	if (ima_ahash_minsize && len >= ima_ahash_minsize) {
		rc = calc_buffer_ahash(buf, len, hash);
		if (!rc)
			return 0;
	}

	return calc_buffer_shash(buf, len, hash);
}

static void __init ima_pcrread(u32 idx, struct tpm_digest *d)
{
	if (!ima_tpm_chip)
		return;

	if (tpm_pcr_read(ima_tpm_chip, idx, d) != 0)
		pr_err("Error Communicating to TPM chip\n");
}

/*
 * Calculate the boot aggregate hash
 */
static int __init ima_calc_boot_aggregate_tfm(char *digest,
					      struct crypto_shash *tfm)
{
	struct tpm_digest d = { .alg_id = TPM_ALG_SHA1, .digest = {0} };
	int rc;
	u32 i;
	SHASH_DESC_ON_STACK(shash, tfm);

	shash->tfm = tfm;

	rc = crypto_shash_init(shash);
	if (rc != 0)
		return rc;

	/* cumulative sha1 over tpm registers 0-7 */
	for (i = TPM_PCR0; i < TPM_PCR8; i++) {
		ima_pcrread(i, &d);
		/* now accumulate with current aggregate */
		rc = crypto_shash_update(shash, d.digest, TPM_DIGEST_SIZE);
	}
	if (!rc)
		crypto_shash_final(shash, digest);
	return rc;
}

int __init ima_calc_boot_aggregate(struct ima_digest_data *hash)
{
	struct crypto_shash *tfm;
	int rc;

	tfm = ima_alloc_tfm(hash->algo);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	hash->length = crypto_shash_digestsize(tfm);
	rc = ima_calc_boot_aggregate_tfm(hash->digest, tfm);

	ima_free_tfm(tfm);

	return rc;
}