2024-07-05 23:00:04 +07:00
|
|
|
#ifndef MY_ABC_HERE
|
|
|
|
#define MY_ABC_HERE
|
|
|
|
#endif
|
2018-04-04 00:23:33 +07:00
|
|
|
// SPDX-License-Identifier: GPL-2.0
|
2007-06-12 20:07:21 +07:00
|
|
|
/*
|
|
|
|
* Copyright (C) 2007 Oracle. All rights reserved.
|
|
|
|
*/
|
|
|
|
|
2007-03-22 23:13:20 +07:00
|
|
|
#include <linux/fs.h>
|
2007-03-29 00:57:48 +07:00
|
|
|
#include <linux/blkdev.h>
|
2007-04-09 21:42:37 +07:00
|
|
|
#include <linux/radix-tree.h>
|
2007-05-03 02:53:43 +07:00
|
|
|
#include <linux/writeback.h>
|
2008-04-10 03:28:12 +07:00
|
|
|
#include <linux/workqueue.h>
|
2008-06-26 03:01:31 +07:00
|
|
|
#include <linux/kthread.h>
|
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usage and update includes so that
only the necessary includes are there, i.e. gfp.h if only gfp is
used, slab.h if slab is used.
* When the script inserts a new include, it looks at the include
blocks and tries to place the new include so that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion;
some needed manual addition, while for others adding it to the
implementation .h or embedding .c file was more appropriate. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
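The core decision the sweep script makes — which of gfp.h/slab.h a file actually needs to include directly — can be sketched as follows. This is a hypothetical simplification; the real slabh-sweep.py (and its regexes) is more involved:

```python
import re

# Hypothetical simplification of the include-sweep decision: scan a
# source buffer for gfp and slab API usage and report which header(s)
# the file should include directly.
GFP_RE = re.compile(r'\b(GFP_[A-Z_]+|gfp_t|__get_free_pages)\b')
SLAB_RE = re.compile(r'\b(k[mz]alloc|kfree|kmem_cache_\w+)\b')

def needed_includes(source: str) -> list:
    headers = []
    # slab.h pulls in gfp.h, so slab users need only slab.h
    if SLAB_RE.search(source):
        headers.append('linux/slab.h')
    elif GFP_RE.search(source):
        headers.append('linux/gfp.h')
    return headers
```

For example, a file calling `kmalloc(sz, GFP_KERNEL)` needs only slab.h, while one that merely passes `GFP_ATOMIC` flags around needs gfp.h.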
2010-03-24 15:04:11 +07:00
|
|
|
#include <linux/slab.h>
|
2010-11-22 10:20:49 +07:00
|
|
|
#include <linux/migrate.h>
|
2011-05-06 20:33:15 +07:00
|
|
|
#include <linux/ratelimit.h>
|
2013-04-19 22:08:05 +07:00
|
|
|
#include <linux/uuid.h>
|
2013-08-15 22:11:21 +07:00
|
|
|
#include <linux/semaphore.h>
|
2018-01-13 00:55:03 +07:00
|
|
|
#include <linux/error-injection.h>
|
btrfs: Remove custom crc32c init code
The custom crc32 init code was introduced in
14a958e678cd ("Btrfs: fix btrfs boot when compiled as built-in") to
enable using btrfs as a built-in. However, later as pointed out by
60efa5eb2e88 ("Btrfs: use late_initcall instead of module_init") this
wasn't enough and finally btrfs was switched to late_initcall which
comes after the generic crc32c implementation is initialised. The
latter commit superseded the former. Now that we don't have to
maintain our own code let's just remove it and switch to using the
generic implementation.
Despite touching a lot of files the patch is really simple. Here is the gist of
the changes:
1. Select LIBCRC32C rather than the low-level modules.
2. s/btrfs_crc32c/crc32c/g
3. replace hash.h with linux/crc32c.h
4. Move the btrfs namehash funcs to ctree.h and change the tree accordingly.
I've tested this with btrfs being both a module and a built-in and xfstests
doesn't complain.
This does seem to fix the longstanding problem of not automatically selecting
the crc32c module when btrfs is used. Possibly there is a workaround in
dracut.
The modinfo confirms that now all the module dependencies are there:
before:
depends: zstd_compress,zstd_decompress,raid6_pq,xor,zlib_deflate
after:
depends: libcrc32c,zstd_compress,zstd_decompress,raid6_pq,xor,zlib_deflate
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add more info to changelog from mails ]
Signed-off-by: David Sterba <dsterba@suse.com>
2018-01-08 16:45:05 +07:00
|
|
|
#include <linux/crc32c.h>
|
2018-12-14 04:16:45 +07:00
|
|
|
#include <linux/sched/mm.h>
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
#include <linux/kmod.h>
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
#include <linux/module.h>
|
|
|
|
#endif /* MY_ABC_HERE */
|
2011-03-19 05:56:43 +07:00
|
|
|
#include <asm/unaligned.h>
|
2019-06-03 21:58:56 +07:00
|
|
|
#include <crypto/hash.h>
|
2007-02-02 21:18:22 +07:00
|
|
|
#include "ctree.h"
|
|
|
|
#include "disk-io.h"
|
2007-03-17 03:20:31 +07:00
|
|
|
#include "transaction.h"
|
2007-04-09 21:42:37 +07:00
|
|
|
#include "btrfs_inode.h"
|
2008-03-25 02:01:56 +07:00
|
|
|
#include "volumes.h"
|
2007-10-16 03:15:53 +07:00
|
|
|
#include "print-tree.h"
|
2008-06-26 03:01:30 +07:00
|
|
|
#include "locking.h"
|
2008-09-06 03:13:11 +07:00
|
|
|
#include "tree-log.h"
|
2009-04-03 20:47:43 +07:00
|
|
|
#include "free-space-cache.h"
|
2015-09-30 10:50:38 +07:00
|
|
|
#include "free-space-tree.h"
|
Btrfs: Cache free inode numbers in memory
Currently btrfs stores the highest objectid of the fs tree, and it always
returns (highest+1) inode number when we create a file, so inode numbers
won't be reclaimed when we delete files, so we'll run out of inode numbers
as we keep creating and deleting files on 32-bit machines.
This fixes it, and it works similarly to how we cache free space in block
groups.
We start a kernel thread to read the file tree. By scanning inode items,
we know which chunks of inode numbers are free, and we cache them in
an rb-tree.
Because we are searching the commit root, we have to carefully handle the
cross-transaction case.
The rb-tree is a hybrid extent+bitmap tree, so if we have too many small
chunks of inode numbers, we'll use bitmaps. Initially we allow 16K of
ram for extents, and a bitmap will be used if we exceed this threshold.
The extents threshold is adjusted at runtime.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
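The extent side of the hybrid cache described above can be sketched like this. It's a hypothetical simplification (no bitmap fallback, no commit-root handling, and `FreeInoCache` is an illustrative name, not the kernel's): free inode numbers are held as (start, length) extents and handed out from the front:

```python
# Hypothetical sketch of the extent half of the free inode number
# cache: free ranges are kept as sorted (start, length) extents, and
# allocation hands out the lowest free number, shrinking its extent.
class FreeInoCache:
    def __init__(self):
        self.extents = []              # sorted list of (start, length)

    def add_free_range(self, start, length):
        self.extents.append((start, length))
        self.extents.sort()

    def alloc(self):
        if not self.extents:
            return None                # no free inode numbers cached
        start, length = self.extents.pop(0)
        if length > 1:
            # hand out the first number, keep the remainder cached
            self.extents.insert(0, (start + 1, length - 1))
        return start
```

Deleting a file would add its number back with `add_free_range(ino, 1)`; the real cache additionally switches small scattered chunks to a bitmap once the extents exceed the ram threshold.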
2011-04-20 09:06:11 +07:00
|
|
|
#include "inode-map.h"
|
2011-11-09 19:44:05 +07:00
|
|
|
#include "check-integrity.h"
|
2012-06-05 01:03:51 +07:00
|
|
|
#include "rcu-string.h"
|
2012-11-06 19:15:27 +07:00
|
|
|
#include "dev-replace.h"
|
2013-01-30 06:40:14 +07:00
|
|
|
#include "raid56.h"
|
2013-11-02 00:06:58 +07:00
|
|
|
#include "sysfs.h"
|
2014-05-14 07:30:47 +07:00
|
|
|
#include "qgroup.h"
|
2016-03-10 16:26:59 +07:00
|
|
|
#include "compression.h"
|
2017-10-09 08:51:02 +07:00
|
|
|
#include "tree-checker.h"
|
2017-09-30 02:43:50 +07:00
|
|
|
#include "ref-verify.h"
|
2019-06-21 02:37:44 +07:00
|
|
|
#include "block-group.h"
|
2019-12-14 07:22:14 +07:00
|
|
|
#include "discard.h"
|
2020-01-20 21:09:08 +07:00
|
|
|
#include "space-info.h"
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
#include "syno-feat-tree.h"
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
#include "syno-rbd-meta.h"
|
|
|
|
#endif /* MY_ABC_HERE */
|
2007-02-02 21:18:22 +07:00
|
|
|
|
2015-12-15 08:14:36 +07:00
|
|
|
#define BTRFS_SUPER_FLAG_SUPP (BTRFS_HEADER_FLAG_WRITTEN |\
|
|
|
|
BTRFS_HEADER_FLAG_RELOC |\
|
|
|
|
BTRFS_SUPER_FLAG_ERROR |\
|
|
|
|
BTRFS_SUPER_FLAG_SEEDING |\
|
2018-01-09 08:05:41 +07:00
|
|
|
BTRFS_SUPER_FLAG_METADUMP |\
|
|
|
|
BTRFS_SUPER_FLAG_METADUMP_V2)
|
2015-12-15 08:14:36 +07:00
|
|
|
|
2008-06-12 03:50:36 +07:00
|
|
|
static void end_workqueue_fn(struct btrfs_work *work);
|
2012-03-01 20:56:26 +07:00
|
|
|
static void btrfs_destroy_ordered_extents(struct btrfs_root *root);
|
2011-01-06 18:30:25 +07:00
|
|
|
static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
|
2016-06-23 05:54:24 +07:00
|
|
|
struct btrfs_fs_info *fs_info);
|
2012-03-01 20:56:26 +07:00
|
|
|
static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root);
|
2016-06-23 05:54:24 +07:00
|
|
|
static int btrfs_destroy_marked_extents(struct btrfs_fs_info *fs_info,
|
2011-01-06 18:30:25 +07:00
|
|
|
struct extent_io_tree *dirty_pages,
|
|
|
|
int mark);
|
2016-06-23 05:54:24 +07:00
|
|
|
static int btrfs_destroy_pinned_extent(struct btrfs_fs_info *fs_info,
|
2011-01-06 18:30:25 +07:00
|
|
|
struct extent_io_tree *pinned_extents);
|
2016-06-23 05:54:24 +07:00
|
|
|
static int btrfs_cleanup_transaction(struct btrfs_fs_info *fs_info);
|
|
|
|
static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info);
|
2008-04-10 03:28:12 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
static DEFINE_RATELIMIT_STATE(meta_err_rate_limit, 3 * HZ, DEFAULT_RATELIMIT_BURST);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/*
|
2014-07-30 05:55:42 +07:00
|
|
|
* btrfs_end_io_wq structs are used to do processing in task context when an IO
|
|
|
|
* is complete. This is used during reads to verify checksums, and it is used
|
2008-09-30 02:18:18 +07:00
|
|
|
* by writes to insert metadata for new file extents after IO is complete.
|
|
|
|
*/
|
2014-07-30 05:55:42 +07:00
|
|
|
struct btrfs_end_io_wq {
|
2008-04-10 03:28:12 +07:00
|
|
|
struct bio *bio;
|
|
|
|
bio_end_io_t *end_io;
|
|
|
|
void *private;
|
|
|
|
struct btrfs_fs_info *info;
|
2017-06-03 14:38:06 +07:00
|
|
|
blk_status_t status;
|
2014-07-30 05:25:45 +07:00
|
|
|
enum btrfs_wq_endio_type metadata;
|
2008-06-12 03:50:36 +07:00
|
|
|
struct btrfs_work work;
|
2008-04-10 03:28:12 +07:00
|
|
|
};
|
2007-11-08 09:08:01 +07:00
|
|
|
|
2014-07-30 05:55:42 +07:00
|
|
|
static struct kmem_cache *btrfs_end_io_wq_cache;
|
|
|
|
|
|
|
|
int __init btrfs_end_io_wq_init(void)
|
|
|
|
{
|
|
|
|
btrfs_end_io_wq_cache = kmem_cache_create("btrfs_end_io_wq",
|
|
|
|
sizeof(struct btrfs_end_io_wq),
|
|
|
|
0,
|
2016-06-24 01:17:08 +07:00
|
|
|
SLAB_MEM_SPREAD,
|
2014-07-30 05:55:42 +07:00
|
|
|
NULL);
|
|
|
|
if (!btrfs_end_io_wq_cache)
|
|
|
|
return -ENOMEM;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2018-02-19 23:24:18 +07:00
|
|
|
void __cold btrfs_end_io_wq_exit(void)
|
2014-07-30 05:55:42 +07:00
|
|
|
{
|
2016-01-29 20:36:35 +07:00
|
|
|
kmem_cache_destroy(btrfs_end_io_wq_cache);
|
2014-07-30 05:55:42 +07:00
|
|
|
}
|
|
|
|
|
2020-01-24 21:32:57 +07:00
|
|
|
static void btrfs_free_csum_hash(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
if (fs_info->csum_shash)
|
|
|
|
crypto_free_shash(fs_info->csum_shash);
|
|
|
|
}
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/*
|
|
|
|
* async submit bios are used to offload expensive checksumming
|
|
|
|
* onto the worker threads. They checksum file and metadata bios
|
|
|
|
* just before they are sent down the IO stack.
|
|
|
|
*/
|
2008-04-16 22:14:51 +07:00
|
|
|
struct async_submit_bio {
|
2017-05-05 22:57:13 +07:00
|
|
|
void *private_data;
|
2008-04-16 22:14:51 +07:00
|
|
|
struct bio *bio;
|
2017-06-23 08:05:23 +07:00
|
|
|
extent_submit_bio_start_t *submit_bio_start;
|
2008-04-16 22:14:51 +07:00
|
|
|
int mirror_num;
|
2010-05-25 20:48:28 +07:00
|
|
|
/*
|
|
|
|
* bio_offset is optional, can be used if the pages in the bio
|
|
|
|
* can't tell us where in the file the bio should go
|
|
|
|
*/
|
|
|
|
u64 bio_offset;
|
2008-06-12 03:50:36 +07:00
|
|
|
struct btrfs_work work;
|
2017-06-03 14:38:06 +07:00
|
|
|
blk_status_t status;
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
bool throttle;
|
|
|
|
struct btrfs_fs_info *fs_info;
|
|
|
|
#endif /* MY_ABC_HERE */
|
2008-04-16 22:14:51 +07:00
|
|
|
};
|
|
|
|
|
2011-07-27 03:11:19 +07:00
|
|
|
/*
|
|
|
|
* Lockdep class keys for extent_buffer->lock's in this root. For a given
|
|
|
|
* eb, the lockdep key is determined by the btrfs_root it belongs to and
|
|
|
|
* the level the eb occupies in the tree.
|
|
|
|
*
|
|
|
|
* Different roots are used for different purposes and may nest inside each
|
|
|
|
* other and they require separate keysets. As lockdep keys should be
|
|
|
|
* static, assign keysets according to the purpose of the root as indicated
|
2018-08-06 12:25:24 +07:00
|
|
|
* by btrfs_root->root_key.objectid. This ensures that all special purpose
|
|
|
|
* roots have separate keysets.
|
2009-02-13 02:09:45 +07:00
|
|
|
*
|
2011-07-27 03:11:19 +07:00
|
|
|
* Lock-nesting across peer nodes is always done with the immediate parent
|
|
|
|
* node locked thus preventing deadlock. As lockdep doesn't know this, use
|
|
|
|
* subclass to avoid triggering lockdep warning in such cases.
|
2009-02-13 02:09:45 +07:00
|
|
|
*
|
2011-07-27 03:11:19 +07:00
|
|
|
* The key is set by the readpage_end_io_hook after the buffer has passed
|
|
|
|
* csum validation but before the pages are unlocked. It is also set by
|
|
|
|
* btrfs_init_new_buffer on freshly allocated blocks.
|
2009-02-13 02:09:45 +07:00
|
|
|
*
|
2011-07-27 03:11:19 +07:00
|
|
|
* We also add a check to make sure the highest level of the tree is the
|
|
|
|
* same as our lockdep setup here. If BTRFS_MAX_LEVEL changes, this code
|
|
|
|
* needs update as well.
|
2009-02-13 02:09:45 +07:00
|
|
|
*/
|
|
|
|
#ifdef CONFIG_DEBUG_LOCK_ALLOC
|
|
|
|
# if BTRFS_MAX_LEVEL != 8
|
|
|
|
# error
|
|
|
|
# endif
|
2011-07-27 03:11:19 +07:00
|
|
|
|
|
|
|
static struct btrfs_lockdep_keyset {
|
|
|
|
u64 id; /* root objectid */
|
|
|
|
const char *name_stem; /* lock name stem */
|
|
|
|
char names[BTRFS_MAX_LEVEL + 1][20];
|
|
|
|
struct lock_class_key keys[BTRFS_MAX_LEVEL + 1];
|
|
|
|
} btrfs_lockdep_keysets[] = {
|
|
|
|
{ .id = BTRFS_ROOT_TREE_OBJECTID, .name_stem = "root" },
|
|
|
|
{ .id = BTRFS_EXTENT_TREE_OBJECTID, .name_stem = "extent" },
|
|
|
|
{ .id = BTRFS_CHUNK_TREE_OBJECTID, .name_stem = "chunk" },
|
|
|
|
{ .id = BTRFS_DEV_TREE_OBJECTID, .name_stem = "dev" },
|
|
|
|
{ .id = BTRFS_FS_TREE_OBJECTID, .name_stem = "fs" },
|
|
|
|
{ .id = BTRFS_CSUM_TREE_OBJECTID, .name_stem = "csum" },
|
2013-05-01 00:29:29 +07:00
|
|
|
{ .id = BTRFS_QUOTA_TREE_OBJECTID, .name_stem = "quota" },
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
{ .id = BTRFS_SYNO_QUOTA_V2_TREE_OBJECTID, .name_stem = "syno-v2-quota" },
|
|
|
|
{ .id = BTRFS_SYNO_USRQUOTA_V2_TREE_OBJECTID, .name_stem = "syno-v2-usrquota" },
|
|
|
|
#endif /* MY_ABC_HERE */
|
2011-07-27 03:11:19 +07:00
|
|
|
{ .id = BTRFS_TREE_LOG_OBJECTID, .name_stem = "log" },
|
|
|
|
{ .id = BTRFS_TREE_RELOC_OBJECTID, .name_stem = "treloc" },
|
|
|
|
{ .id = BTRFS_DATA_RELOC_TREE_OBJECTID, .name_stem = "dreloc" },
|
2013-09-03 23:28:57 +07:00
|
|
|
{ .id = BTRFS_UUID_TREE_OBJECTID, .name_stem = "uuid" },
|
2016-01-25 22:30:22 +07:00
|
|
|
{ .id = BTRFS_FREE_SPACE_TREE_OBJECTID, .name_stem = "free-space" },
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
{ .id = BTRFS_BLOCK_GROUP_HINT_TREE_OBJECTID, .name_stem = "block-group-hint" },
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
{ .id = BTRFS_BLOCK_GROUP_CACHE_TREE_OBJECTID, .name_stem = "block-group-cache" },
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
{ .id = BTRFS_SYNO_USAGE_TREE_OBJECTID, .name_stem = "syno-usage" },
|
|
|
|
{ .id = BTRFS_SYNO_EXTENT_USAGE_TREE_OBJECTID, .name_stem = "syno-extent-usage" },
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
{ .id = BTRFS_SYNO_FEATURE_TREE_OBJECTID, .name_stem = "syno-feat-tree" },
|
|
|
|
#endif /* MY_ABC_HERE */
|
2011-07-27 03:11:19 +07:00
|
|
|
{ .id = 0, .name_stem = "tree" },
|
2009-02-13 02:09:45 +07:00
|
|
|
};
|
2011-07-27 03:11:19 +07:00
|
|
|
|
|
|
|
void __init btrfs_init_lockdep(void)
|
|
|
|
{
|
|
|
|
int i, j;
|
|
|
|
|
|
|
|
/* initialize lockdep class names */
|
|
|
|
for (i = 0; i < ARRAY_SIZE(btrfs_lockdep_keysets); i++) {
|
|
|
|
struct btrfs_lockdep_keyset *ks = &btrfs_lockdep_keysets[i];
|
|
|
|
|
|
|
|
for (j = 0; j < ARRAY_SIZE(ks->names); j++)
|
|
|
|
snprintf(ks->names[j], sizeof(ks->names[j]),
|
|
|
|
"btrfs-%s-%02d", ks->name_stem, j);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void btrfs_set_buffer_lockdep_class(u64 objectid, struct extent_buffer *eb,
|
|
|
|
int level)
|
|
|
|
{
|
|
|
|
struct btrfs_lockdep_keyset *ks;
|
|
|
|
|
|
|
|
BUG_ON(level >= ARRAY_SIZE(ks->keys));
|
|
|
|
|
|
|
|
/* find the matching keyset, id 0 is the default entry */
|
|
|
|
for (ks = btrfs_lockdep_keysets; ks->id; ks++)
|
|
|
|
if (ks->id == objectid)
|
|
|
|
break;
|
|
|
|
|
|
|
|
lockdep_set_class_and_name(&eb->lock,
|
|
|
|
&ks->keys[level], ks->names[level]);
|
|
|
|
}
|
|
|
|
|
2009-02-13 02:09:45 +07:00
|
|
|
#endif
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/*
|
2019-02-25 20:24:15 +07:00
|
|
|
* Compute the csum of a btree block and store the result to provided buffer.
|
2008-09-30 02:18:18 +07:00
|
|
|
*/
|
2020-02-28 03:00:49 +07:00
|
|
|
static void csum_tree_block(struct extent_buffer *buf, u8 *result)
|
2007-10-16 03:19:22 +07:00
|
|
|
{
|
2019-06-03 21:58:57 +07:00
|
|
|
struct btrfs_fs_info *fs_info = buf->fs_info;
|
2020-02-28 03:00:47 +07:00
|
|
|
const int num_pages = fs_info->nodesize >> PAGE_SHIFT;
|
2019-06-03 21:58:57 +07:00
|
|
|
SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
|
2007-10-16 03:19:22 +07:00
|
|
|
char *kaddr;
|
2020-02-28 03:00:47 +07:00
|
|
|
int i;
|
2019-06-03 21:58:57 +07:00
|
|
|
|
|
|
|
shash->tfm = fs_info->csum_shash;
|
|
|
|
crypto_shash_init(shash);
|
2020-02-28 03:00:47 +07:00
|
|
|
kaddr = page_address(buf->pages[0]);
|
|
|
|
crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE,
|
|
|
|
PAGE_SIZE - BTRFS_CSUM_SIZE);
|
2007-10-16 03:19:22 +07:00
|
|
|
|
2020-02-28 03:00:47 +07:00
|
|
|
for (i = 1; i < num_pages; i++) {
|
|
|
|
kaddr = page_address(buf->pages[i]);
|
|
|
|
crypto_shash_update(shash, kaddr, PAGE_SIZE);
|
2007-10-16 03:19:22 +07:00
|
|
|
}
|
2017-11-07 01:23:00 +07:00
|
|
|
memset(result, 0, BTRFS_CSUM_SIZE);
|
2019-06-03 21:58:57 +07:00
|
|
|
crypto_shash_final(shash, result);
|
2007-10-16 03:19:22 +07:00
|
|
|
}
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/*
|
|
|
|
* we can't consider a given block up to date unless the transid of the
|
|
|
|
* block matches the transid in the parent node's pointer. This is how we
|
|
|
|
* detect blocks that either didn't get written at all or got written
|
|
|
|
* in the wrong place.
|
|
|
|
*/
|
2008-05-13 00:39:03 +07:00
|
|
|
static int verify_parent_transid(struct extent_io_tree *io_tree,
|
2012-05-06 18:23:47 +07:00
|
|
|
struct extent_buffer *eb, u64 parent_transid,
|
|
|
|
int atomic)
|
2008-05-13 00:39:03 +07:00
|
|
|
{
|
2010-02-04 02:33:23 +07:00
|
|
|
struct extent_state *cached_state = NULL;
|
2008-05-13 00:39:03 +07:00
|
|
|
int ret;
|
2014-07-31 05:43:18 +07:00
|
|
|
bool need_lock = (current->journal_info == BTRFS_SEND_TRANS_STUB);
|
2008-05-13 00:39:03 +07:00
|
|
|
|
|
|
|
if (!parent_transid || btrfs_header_generation(eb) == parent_transid)
|
|
|
|
return 0;
|
|
|
|
|
2012-05-06 18:23:47 +07:00
|
|
|
if (atomic)
|
|
|
|
return -EAGAIN;
|
|
|
|
|
2014-03-29 04:07:27 +07:00
|
|
|
if (need_lock) {
|
|
|
|
btrfs_tree_read_lock(eb);
|
2018-04-04 07:00:17 +07:00
|
|
|
btrfs_set_lock_blocking_read(eb);
|
2014-03-29 04:07:27 +07:00
|
|
|
}
|
|
|
|
|
2010-02-04 02:33:23 +07:00
|
|
|
lock_extent_bits(io_tree, eb->start, eb->start + eb->len - 1,
|
2015-12-03 20:30:40 +07:00
|
|
|
&cached_state);
|
2012-03-13 20:38:00 +07:00
|
|
|
if (extent_buffer_uptodate(eb) &&
|
2008-05-13 00:39:03 +07:00
|
|
|
btrfs_header_generation(eb) == parent_transid) {
|
|
|
|
ret = 0;
|
|
|
|
goto out;
|
|
|
|
}
|
2015-10-08 16:01:36 +07:00
|
|
|
btrfs_err_rl(eb->fs_info,
|
|
|
|
"parent transid verify failed on %llu wanted %llu found %llu",
|
|
|
|
eb->start,
|
2014-07-04 16:59:06 +07:00
|
|
|
parent_transid, btrfs_header_generation(eb));
|
2008-05-13 00:39:03 +07:00
|
|
|
ret = 1;
|
2014-03-29 04:07:27 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Things reading via commit roots that don't have normal protection,
|
|
|
|
* like send, can have a really old block in cache that may point at a
|
2016-05-20 08:18:45 +07:00
|
|
|
* block that has been freed and re-allocated. So don't clear uptodate
|
2014-03-29 04:07:27 +07:00
|
|
|
* if we find an eb that is under IO (dirty/writeback) because we could
|
|
|
|
* end up reading in the stale data and then writing it back out and
|
|
|
|
* making everybody very sad.
|
|
|
|
*/
|
|
|
|
if (!extent_buffer_under_io(eb))
|
|
|
|
clear_extent_buffer_uptodate(eb);
|
2008-07-30 21:29:12 +07:00
|
|
|
out:
|
2010-02-04 02:33:23 +07:00
|
|
|
unlock_extent_cached(io_tree, eb->start, eb->start + eb->len - 1,
|
2017-12-13 03:43:52 +07:00
|
|
|
&cached_state);
|
2014-06-26 03:45:41 +07:00
|
|
|
if (need_lock)
|
|
|
|
btrfs_tree_read_unlock_blocking(eb);
|
2008-05-13 00:39:03 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-06-03 21:58:53 +07:00
|
|
|
static bool btrfs_supported_super_csum(u16 csum_type)
|
|
|
|
{
|
|
|
|
switch (csum_type) {
|
|
|
|
case BTRFS_CSUM_TYPE_CRC32:
|
2019-10-07 16:11:01 +07:00
|
|
|
case BTRFS_CSUM_TYPE_XXHASH:
|
2019-10-07 16:11:02 +07:00
|
|
|
case BTRFS_CSUM_TYPE_SHA256:
|
2019-10-07 16:11:02 +07:00
|
|
|
case BTRFS_CSUM_TYPE_BLAKE2:
|
2019-06-03 21:58:53 +07:00
|
|
|
return true;
|
|
|
|
default:
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-03-06 21:57:46 +07:00
|
|
|
/*
|
|
|
|
* Return 0 if the superblock checksum type matches the checksum value of that
|
|
|
|
* algorithm. Pass the raw disk superblock data.
|
|
|
|
*/
|
2016-09-20 21:05:02 +07:00
|
|
|
static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
|
|
|
|
char *raw_disk_sb)
|
2013-03-06 21:57:46 +07:00
|
|
|
{
|
|
|
|
struct btrfs_super_block *disk_sb =
|
|
|
|
(struct btrfs_super_block *)raw_disk_sb;
|
2019-06-03 21:58:55 +07:00
|
|
|
char result[BTRFS_CSUM_SIZE];
|
2019-06-03 21:58:57 +07:00
|
|
|
SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
|
|
|
|
|
|
|
|
shash->tfm = fs_info->csum_shash;
|
2013-03-06 21:57:46 +07:00
|
|
|
|
2019-06-03 21:58:55 +07:00
|
|
|
/*
|
|
|
|
* The super_block structure does not span the whole
|
|
|
|
* BTRFS_SUPER_INFO_SIZE range, we expect that the unused space is
|
|
|
|
* filled with zeros and is included in the checksum.
|
|
|
|
*/
|
2020-05-01 13:51:59 +07:00
|
|
|
crypto_shash_digest(shash, raw_disk_sb + BTRFS_CSUM_SIZE,
|
|
|
|
BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE, result);
|
2013-03-06 21:57:46 +07:00
|
|
|
|
2019-06-03 21:58:55 +07:00
|
|
|
if (memcmp(disk_sb->csum, result, btrfs_super_csum_size(disk_sb)))
|
|
|
|
return 1;
|
2013-03-06 21:57:46 +07:00
|
|
|
|
2019-06-03 21:58:53 +07:00
|
|
|
return 0;
|
2013-03-06 21:57:46 +07:00
|
|
|
}
|
|
|
|
|
2019-03-20 20:58:13 +07:00
|
|
|
int btrfs_verify_level_key(struct extent_buffer *eb, int level,
|
btrfs: Check the first key and level for cached extent buffer
[BUG]
When reading a file from a fuzzed image, kernel can panic like:
BTRFS warning (device loop0): csum failed root 5 ino 270 off 0 csum 0x98f94189 expected csum 0x00000000 mirror 1
assertion failed: !memcmp_extent_buffer(b, &disk_key, offsetof(struct btrfs_leaf, items[0].key), sizeof(disk_key)), file: fs/btrfs/ctree.c, line: 2544
------------[ cut here ]------------
kernel BUG at fs/btrfs/ctree.h:3500!
invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
RIP: 0010:btrfs_search_slot.cold.24+0x61/0x63 [btrfs]
Call Trace:
btrfs_lookup_csum+0x52/0x150 [btrfs]
__btrfs_lookup_bio_sums+0x209/0x640 [btrfs]
btrfs_submit_bio_hook+0x103/0x170 [btrfs]
submit_one_bio+0x59/0x80 [btrfs]
extent_read_full_page+0x58/0x80 [btrfs]
generic_file_read_iter+0x2f6/0x9d0
__vfs_read+0x14d/0x1a0
vfs_read+0x8d/0x140
ksys_read+0x52/0xc0
do_syscall_64+0x60/0x210
entry_SYSCALL_64_after_hwframe+0x49/0xbe
[CAUSE]
The fuzzed image has a corrupted leaf whose first key doesn't match its
parent:
checksum tree key (CSUM_TREE ROOT_ITEM 0)
node 29741056 level 1 items 14 free 107 generation 19 owner CSUM_TREE
fs uuid 3381d111-94a3-4ac7-8f39-611bbbdab7e6
chunk uuid 9af1c3c7-2af5-488b-8553-530bd515f14c
...
key (EXTENT_CSUM EXTENT_CSUM 79691776) block 29761536 gen 19
leaf 29761536 items 1 free space 1726 generation 19 owner CSUM_TREE
leaf 29761536 flags 0x1(WRITTEN) backref revision 1
fs uuid 3381d111-94a3-4ac7-8f39-611bbbdab7e6
chunk uuid 9af1c3c7-2af5-488b-8553-530bd515f14c
item 0 key (EXTENT_CSUM EXTENT_CSUM 8798638964736) itemoff 1751 itemsize 2244
range start 8798638964736 end 8798641262592 length 2297856
When reading the above tree block, we have extent_buffer->refs = 2 in
the context:
- initial one from __alloc_extent_buffer()
alloc_extent_buffer()
|- __alloc_extent_buffer()
|- atomic_set(&eb->refs, 1)
- one being added to fs_info->buffer_radix
alloc_extent_buffer()
|- check_buffer_tree_ref()
|- atomic_inc(&eb->refs)
So even if we call free_extent_buffer() in read_tree_block or other
similar situations, we only decrease the refs by 1, so it doesn't reach 0
and the buffer won't be freed right away.
The stale eb and its corrupted content will still be kept cached.
Furthermore, we have several extra cases where we either don't do the first
key check or the check is not proper for all callers:
- scrub
We just don't have first key in this context.
- shared tree block
One tree block can be shared by several snapshot/subvolume trees.
In that case, the first key check for one subvolume doesn't apply to
another.
So for the above reasons, a corrupted extent buffer can sneak into the
buffer cache.
[FIX]
Call verify_level_key in read_block_for_search to do another
verification. For that purpose the function is exported.
Due to the above reasons, although we can free a corrupted extent buffer
from the cache, we still need the check in read_block_for_search(), for scrub and
shared tree blocks.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202755
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202757
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202759
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202761
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202767
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202769
Reported-by: Yoon Jungyeon <jungyeon@gatech.edu>
CC: stable@vger.kernel.org # 4.19+
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
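The refcount accounting this changelog walks through — one ref from `__alloc_extent_buffer()` plus one held by the buffer radix, so a single `free_extent_buffer()` leaves the corrupted eb cached — can be sketched as a toy model (hypothetical simplification; names mirror the kernel functions but the logic here is illustrative only):

```python
# Hypothetical sketch of the extent_buffer refcount scenario: the eb
# starts with one ref from allocation and gains one from the buffer
# cache, so a single free does not drop it to zero.
class ExtentBuffer:
    def __init__(self):
        self.refs = 1                  # initial ref from __alloc_extent_buffer()

class BufferCache:
    def __init__(self):
        self.radix = {}                # stands in for fs_info->buffer_radix

    def alloc_extent_buffer(self, bytenr):
        eb = ExtentBuffer()
        self.radix[bytenr] = eb
        eb.refs += 1                   # check_buffer_tree_ref(): cache holds a ref
        return eb

    def free_extent_buffer(self, bytenr):
        eb = self.radix[bytenr]
        eb.refs -= 1
        if eb.refs == 0:
            del self.radix[bytenr]     # only actually freed at refcount zero
        return eb.refs

cache = BufferCache()
cache.alloc_extent_buffer(29761536)
remaining = cache.free_extent_buffer(29761536)
# remaining is 1: the (possibly corrupted) eb stays in the cache
```

This is why the fix re-verifies the first key in read_block_for_search() rather than relying on the buffer being evicted.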
2019-03-12 16:10:40 +07:00
|
|
|
struct btrfs_key *first_key, u64 parent_transid)
|
2018-03-29 08:08:11 +07:00
|
|
|
{
|
2019-03-20 20:58:13 +07:00
|
|
|
struct btrfs_fs_info *fs_info = eb->fs_info;
|
2018-03-29 08:08:11 +07:00
|
|
|
int found_level;
|
|
|
|
struct btrfs_key found_key;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
found_level = btrfs_header_level(eb);
|
|
|
|
if (found_level != level) {
|
2019-03-20 13:27:39 +07:00
|
|
|
WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG),
|
|
|
|
KERN_ERR "BTRFS: tree level check failed\n");
|
2018-03-29 08:08:11 +07:00
|
|
|
btrfs_err(fs_info,
|
|
|
|
"tree level mismatch detected, bytenr=%llu level expected=%u has=%u",
|
|
|
|
eb->start, level, found_level);
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!first_key)
|
|
|
|
return 0;
|
|
|
|
|
2018-04-13 05:32:47 +07:00
|
|
|
/*
|
|
|
|
* For live tree blocks (new tree blocks in the current transaction),
|
|
|
|
* we need proper lock context to avoid race, which is impossible here.
|
|
|
|
* So we only check tree blocks which are read from disk, whose
|
|
|
|
* generation <= fs_info->last_trans_committed.
|
|
|
|
*/
|
|
|
|
if (btrfs_header_generation(eb) > fs_info->last_trans_committed)
|
|
|
|
return 0;
|
2019-08-22 09:14:15 +07:00
|
|
|
|
|
|
|
/* We have @first_key, so this @eb must have at least one item */
|
|
|
|
if (btrfs_header_nritems(eb) == 0) {
|
|
|
|
btrfs_err(fs_info,
|
|
|
|
"invalid tree nritems, bytenr=%llu nritems=0 expect >0",
|
|
|
|
eb->start);
|
|
|
|
WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
|
|
|
|
return -EUCLEAN;
|
|
|
|
}
|
|
|
|
|
2018-03-29 08:08:11 +07:00
|
|
|
if (found_level)
|
|
|
|
btrfs_node_key_to_cpu(eb, &found_key, 0);
|
|
|
|
else
|
|
|
|
btrfs_item_key_to_cpu(eb, &found_key, 0);
|
|
|
|
ret = btrfs_comp_cpu_keys(first_key, &found_key);
|
|
|
|
|
|
|
|
if (ret) {
|
2019-03-20 13:27:39 +07:00
|
|
|
WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG),
|
|
|
|
KERN_ERR "BTRFS: tree first key check failed\n");
|
2018-03-29 08:08:11 +07:00
|
|
|
btrfs_err(fs_info,
|
2018-05-18 09:59:35 +07:00
|
|
|
"tree first key mismatch detected, bytenr=%llu parent_transid=%llu key expected=(%llu,%u,%llu) has=(%llu,%u,%llu)",
|
|
|
|
eb->start, parent_transid, first_key->objectid,
|
|
|
|
first_key->type, first_key->offset,
|
|
|
|
found_key.objectid, found_key.type,
|
|
|
|
found_key.offset);
|
2018-03-29 08:08:11 +07:00
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/*
|
|
|
|
* helper to read a given tree block, doing retries as required when
|
|
|
|
* the checksums don't match and we have alternate mirrors to try.
|
2018-03-29 08:08:11 +07:00
|
|
|
*
|
|
|
|
* @parent_transid: expected transid, skip check if 0
|
|
|
|
* @level: expected level, mandatory check
|
|
|
|
* @first_key: expected key of first slot, skip check if NULL
|
2008-09-30 02:18:18 +07:00
|
|
|
*/
|
2019-03-20 20:56:39 +07:00
|
|
|
static int btree_read_extent_buffer_pages(struct extent_buffer *eb,
|
2018-03-29 08:08:11 +07:00
|
|
|
u64 parent_transid, int level,
|
|
|
|
struct btrfs_key *first_key)
|
2008-04-10 03:28:12 +07:00
|
|
|
{
|
2019-03-20 20:56:39 +07:00
|
|
|
struct btrfs_fs_info *fs_info = eb->fs_info;
|
2008-04-10 03:28:12 +07:00
|
|
|
struct extent_io_tree *io_tree;
|
2012-03-27 08:57:36 +07:00
|
|
|
int failed = 0;
|
2008-04-10 03:28:12 +07:00
|
|
|
int ret;
|
|
|
|
int num_copies = 0;
|
|
|
|
int mirror_num = 0;
|
2012-03-27 08:57:36 +07:00
|
|
|
int failed_mirror = 0;
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
bool can_retry;
|
|
|
|
#endif /* MY_ABC_HERE */

	io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree;
	while (1) {
#ifdef MY_ABC_HERE
		can_retry = true;
#endif /* MY_ABC_HERE */

		clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
		ret = read_extent_buffer_pages(eb, WAIT_COMPLETE, mirror_num
#ifdef MY_ABC_HERE
					       , &can_retry, parent_transid
#endif /* MY_ABC_HERE */
					       );
		if (!ret) {
			if (verify_parent_transid(io_tree, eb,
						  parent_transid, 0)) {
#ifdef MY_ABC_HERE
				/*
				 * Don't do data correction or we may mess up
				 * eb->nr_retry.  Only try the open source dup
				 * version.
				 */
				eb->nr_retry = EXTENT_BUFFER_RETRY_ABORTED;
#endif /* MY_ABC_HERE */
				ret = -EIO;
			} else if (btrfs_verify_level_key(eb, level,
						first_key, parent_transid)) {
#ifdef MY_ABC_HERE
				/*
				 * Don't do data correction or we may mess up
				 * eb->nr_retry.  Only try the open source dup
				 * version.
				 */
				eb->nr_retry = EXTENT_BUFFER_RETRY_ABORTED;
#endif /* MY_ABC_HERE */
				ret = -EUCLEAN;
			} else {
				break;
			}
		}

#ifdef MY_ABC_HERE
		failed = 1;
		if (!failed_mirror)
			failed_mirror = eb->read_mirror;

		/*
		 * Keep retrying this mirror while retries remain.  Once we
		 * (or someone else) have exhausted them, fall through and
		 * try another mirror if one exists.
		 */
		if (eb->nr_retry != EXTENT_BUFFER_RETRY_ABORTED && can_retry)
			continue;
#endif /* MY_ABC_HERE */

		num_copies = btrfs_num_copies(fs_info,
					      eb->start, eb->len);
		if (num_copies == 1)
			break;

		if (!failed_mirror) {
			failed = 1;
			failed_mirror = eb->read_mirror;
		}

		mirror_num++;
		if (mirror_num == failed_mirror)
			mirror_num++;

		if (mirror_num > num_copies)
			break;
	}

	if (failed && !ret && failed_mirror)
		btrfs_repair_eb_io_failure(eb, failed_mirror);
#ifdef MY_ABC_HERE
	else if (failed)
		clear_bit(EXTENT_BUFFER_SHOULD_REPAIR, &eb->bflags);

	if (ret == -EIO && test_bit(BTRFS_FS_OPEN, &fs_info->flags) &&
	    !sb_rdonly(fs_info->sb) &&
	    !test_bit(BTRFS_FS_STATE_TRANS_ABORTED, &fs_info->fs_state) &&
	    __ratelimit(&meta_err_rate_limit)) {
		btrfs_err(fs_info, "cannot fix %llu, record in meta_err",
			  eb->start);
		SynoBtrfsMetaCorruptedReport(fs_info->fs_devices->fsid,
					     eb->start);
	}
#endif /* MY_ABC_HERE */

	return ret;
}

/*
 * Checksum a dirty tree block before IO.  This has extra checks to make sure
 * we only fill in the checksum field in the first page of a multi-page block.
 */
static int csum_dirty_buffer(struct btrfs_fs_info *fs_info, struct page *page)
{
	u64 start = page_offset(page);
	u64 found_start;
	u8 result[BTRFS_CSUM_SIZE];
	u16 csum_size = btrfs_super_csum_size(fs_info->super_copy);
	struct extent_buffer *eb;
	int ret;

	eb = (struct extent_buffer *)page->private;
	if (page != eb->pages[0])
		return 0;

	found_start = btrfs_header_bytenr(eb);
	/*
	 * Please do not consolidate these warnings into a single if.
	 * It is useful to know what went wrong.
	 */
	if (WARN_ON(found_start != start))
		return -EUCLEAN;
	if (WARN_ON(!PageUptodate(page)))
		return -EUCLEAN;

	ASSERT(memcmp_extent_buffer(eb, fs_info->fs_devices->metadata_uuid,
				    offsetof(struct btrfs_header, fsid),
				    BTRFS_FSID_SIZE) == 0);

	csum_tree_block(eb, result);

	if (btrfs_header_level(eb))
		ret = btrfs_check_node(eb);
	else
		ret = btrfs_check_leaf_full(eb);

	if (ret < 0) {
		btrfs_print_tree(eb, 0);
		btrfs_err(fs_info,
			  "block=%llu write time tree block corruption detected",
			  eb->start);
#ifdef MY_DEF_HERE
		/*
		 * FIXME: For now, ignore the tree-checker result before
		 * writes but keep the error messages to collect information.
		 */
#else /* MY_DEF_HERE */
		WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
		return ret;
#endif /* MY_DEF_HERE */
	}
	write_extent_buffer(eb, result, 0, csum_size);

	return 0;
}

static int check_tree_block_fsid(struct extent_buffer *eb)
{
	struct btrfs_fs_info *fs_info = eb->fs_info;
	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices, *seed_devs;
	u8 fsid[BTRFS_FSID_SIZE];
	u8 *metadata_uuid;

	read_extent_buffer(eb, fsid, offsetof(struct btrfs_header, fsid),
			   BTRFS_FSID_SIZE);
	/*
	 * Checking the incompat flag is only valid for the current fs. For
	 * seed devices it's forbidden to have their uuid changed so reading
	 * ->fsid in this case is fine.
	 */
	if (btrfs_fs_incompat(fs_info, METADATA_UUID))
		metadata_uuid = fs_devices->metadata_uuid;
	else
		metadata_uuid = fs_devices->fsid;

	if (!memcmp(fsid, metadata_uuid, BTRFS_FSID_SIZE))
		return 0;

	list_for_each_entry(seed_devs, &fs_devices->seed_list, seed_list)
		if (!memcmp(fsid, seed_devs->fsid, BTRFS_FSID_SIZE))
			return 0;

	return 1;
}

int btrfs_validate_metadata_buffer(struct btrfs_io_bio *io_bio, u64 phy_offset,
				   struct page *page, u64 start, u64 end,
				   int mirror)
{
	u64 found_start;
	int found_level;
	struct extent_buffer *eb;
	struct btrfs_fs_info *fs_info;
	u16 csum_size;
	int ret = 0;
	u8 result[BTRFS_CSUM_SIZE];
	int reads_done;

	if (!page->private)
		goto out;

	eb = (struct extent_buffer *)page->private;
	fs_info = eb->fs_info;
	csum_size = btrfs_super_csum_size(fs_info->super_copy);

	/*
	 * The pending IO might have been the only thing that kept this buffer
	 * in memory.  Make sure we have a ref for all the other checks.
	 */
	atomic_inc(&eb->refs);

#ifdef MY_ABC_HERE
	if (unlikely(bio_flagged(&io_bio->bio, BIO_CORRECTION_ERR)))
		SetPageChecked(page);
#endif /* MY_ABC_HERE */

	reads_done = atomic_dec_and_test(&eb->io_pages);
	if (!reads_done)
		goto err;

#ifdef MY_ABC_HERE
	if (eb->read_mirror < mirror)
#endif /* MY_ABC_HERE */
		eb->read_mirror = mirror;

	if (test_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags)) {
		ret = -EIO;
		goto err;
	}
#ifdef MY_ABC_HERE
	if (unlikely(eb->can_retry &&
		     eb->nr_retry == EXTENT_BUFFER_SHOULD_ABORT_RETRY)) {
		ret = -EIO;
		goto err;
	}

	csum_tree_block(eb, result);

	if (memcmp_extent_buffer(eb, result, 0, csum_size)) {
		u8 val[BTRFS_CSUM_SIZE] = { 0 };

		read_extent_buffer(eb, &val, 0, csum_size);
		btrfs_warn_rl(fs_info,
	"%s checksum verify failed on %llu wanted " CSUM_FMT " found " CSUM_FMT " level %d",
			      fs_info->sb->s_id, eb->start,
			      CSUM_FMT_VALUE(csum_size, val),
			      CSUM_FMT_VALUE(csum_size, result),
			      btrfs_header_level(eb));
		ret = -EUCLEAN;
		if (eb->nr_retry && eb->can_retry) {
			if (eb->nr_retry > 1 &&
			    !memcmp(eb->prev_bad_csum, result, csum_size))
				set_bit(EXTENT_BUFFER_RETRY_ERR, &eb->bflags);
			else
				memcpy(eb->prev_bad_csum, result, csum_size);
		}
		goto err;
	}
#endif /* MY_ABC_HERE */

2008-04-10 03:28:12 +07:00
|
|
|
found_start = btrfs_header_bytenr(eb);
|
2010-08-07 00:21:20 +07:00
|
|
|
if (found_start != eb->start) {
|
2018-06-22 08:52:15 +07:00
|
|
|
btrfs_err_rl(fs_info, "bad tree block start, want %llu have %llu",
|
|
|
|
eb->start, found_start);
|
2008-04-10 03:28:12 +07:00
|
|
|
ret = -EIO;
|
2008-04-10 03:28:12 +07:00
|
|
|
goto err;
|
|
|
|
}
|
2019-03-20 19:12:00 +07:00
|
|
|
if (check_tree_block_fsid(eb)) {
|
2015-12-31 21:46:45 +07:00
|
|
|
btrfs_err_rl(fs_info, "bad fsid on block %llu",
|
|
|
|
eb->start);
|
2008-05-13 00:39:03 +07:00
|
|
|
ret = -EIO;
|
|
|
|
goto err;
|
|
|
|
}
|
2008-04-10 03:28:12 +07:00
|
|
|
found_level = btrfs_header_level(eb);
|
2013-04-23 22:30:14 +07:00
|
|
|
if (found_level >= BTRFS_MAX_LEVEL) {
|
2018-06-22 08:52:15 +07:00
|
|
|
btrfs_err(fs_info, "bad tree block level %d on %llu",
|
|
|
|
(int)btrfs_header_level(eb), eb->start);
|
2013-04-23 22:30:14 +07:00
|
|
|
ret = -EIO;
|
|
|
|
goto err;
|
|
|
|
}

	btrfs_set_buffer_lockdep_class(btrfs_header_owner(eb),
				       eb, found_level);

#ifdef MY_ABC_HERE
	/* The checksum was already verified in the retry path above. */
#else /* MY_ABC_HERE */
	csum_tree_block(eb, result);

	if (memcmp_extent_buffer(eb, result, 0, csum_size)) {
		u8 val[BTRFS_CSUM_SIZE] = { 0 };

		read_extent_buffer(eb, &val, 0, csum_size);
		btrfs_warn_rl(fs_info,
	"%s checksum verify failed on %llu wanted " CSUM_FMT " found " CSUM_FMT " level %d",
			      fs_info->sb->s_id, eb->start,
			      CSUM_FMT_VALUE(csum_size, val),
			      CSUM_FMT_VALUE(csum_size, result),
			      btrfs_header_level(eb));
		ret = -EUCLEAN;
		goto err;
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (unlikely(eb->parent_transid &&
		     btrfs_header_generation(eb) != eb->parent_transid)) {
		btrfs_warn_rl(fs_info,
			      "parent transid verify failed on %llu wanted %llu found %llu",
			      eb->start,
			      eb->parent_transid,
			      btrfs_header_generation(eb));

		if (eb->nr_retry && eb->can_retry) {
			if (eb->nr_retry > 1 &&
			    eb->prev_bad_transid == btrfs_header_generation(eb))
				set_bit(EXTENT_BUFFER_RETRY_ERR, &eb->bflags);
			else
				eb->prev_bad_transid = btrfs_header_generation(eb);
		}
		memset(eb->prev_bad_csum, 0, sizeof(eb->prev_bad_csum));

		ret = -EIO;
		goto err;
	}
#endif /* MY_ABC_HERE */
2011-03-17 00:42:43 +07:00
|
|
|
/*
|
|
|
|
* If this is a leaf block and it is corrupt, set the corrupt bit so
|
|
|
|
* that we don't try and read the other copies of this block, just
|
|
|
|
* return -EIO.
|
|
|
|
*/
|
2019-03-20 22:23:29 +07:00
|
|
|
if (found_level == 0 && btrfs_check_leaf_full(eb)) {
|
2011-03-17 00:42:43 +07:00
|
|
|
set_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
|
|
|
|
ret = -EIO;
|
|
|
|
}
|
2008-04-10 03:28:12 +07:00
|
|
|
|
2019-03-20 22:25:00 +07:00
|
|
|
if (found_level > 0 && btrfs_check_node(eb))
|
2016-08-24 07:37:45 +07:00
|
|
|
ret = -EIO;
|
|
|
|
|
2012-03-13 20:38:00 +07:00
|
|
|
if (!ret)
|
|
|
|
set_extent_buffer_uptodate(eb);
|
2019-03-20 13:27:40 +07:00
|
|
|
else
|
|
|
|
btrfs_err(fs_info,
|
|
|
|
"block=%llu read time tree block corruption detected",
|
|
|
|
eb->start);
|
2024-07-05 23:00:04 +07:00
|
|
|
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
if (!ret && trace_btrfs_syno_meta_statistics_eb_disk_read_enabled()) {
|
|
|
|
struct btrfs_key first_key;
|
|
|
|
memset(&first_key, 0, sizeof(first_key));
|
|
|
|
if (btrfs_header_nritems(eb) > 0) {
|
|
|
|
if (found_level)
|
|
|
|
btrfs_node_key_to_cpu(eb, &first_key, 0);
|
|
|
|
else
|
|
|
|
btrfs_item_key_to_cpu(eb, &first_key, 0);
|
|
|
|
}
|
|
|
|
trace_btrfs_syno_meta_statistics_eb_disk_read(eb->fs_info, found_start, btrfs_header_owner(eb), found_level, &first_key);
|
|
|
|
}
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2008-04-10 03:28:12 +07:00
|
|
|
err:
|
2013-04-20 21:18:27 +07:00
|
|
|
if (reads_done &&
|
|
|
|
test_and_clear_bit(EXTENT_BUFFER_READAHEAD, &eb->bflags))
|
2017-03-03 01:43:30 +07:00
|
|
|
btree_readahead_hook(eb, ret);
|
2011-06-10 18:55:54 +07:00
|
|
|
|
2013-01-30 06:40:14 +07:00
|
|
|
if (ret) {
|
|
|
|
/*
|
|
|
|
* our io error hook is going to dec the io pages
|
|
|
|
* again, we have to make sure it has something
|
|
|
|
* to decrement
|
|
|
|
*/
|
|
|
|
atomic_inc(&eb->io_pages);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
/*
|
|
|
|
* Let io error hook call clear_extent_buffer_uptodate(), since we don't get here
|
|
|
|
* if bio is not uptodate.
|
|
|
|
*/
|
|
|
|
#else /* MY_ABC_HERE */
|
2012-03-13 20:38:00 +07:00
|
|
|
clear_extent_buffer_uptodate(eb);
|
2024-07-05 23:00:04 +07:00
|
|
|
#endif /* MY_ABC_HERE */
|
2013-01-30 06:40:14 +07:00
|
|
|
}
|
2012-03-13 20:38:00 +07:00
|
|
|
free_extent_buffer(eb);
|
2008-04-10 03:28:12 +07:00
|
|
|
out:
|
2008-04-10 03:28:12 +07:00
|
|
|
return ret;
|
2008-04-10 03:28:12 +07:00
|
|
|
}

#ifdef MY_ABC_HERE
void btrfs_metadata_io_failed(struct extent_buffer *eb, struct page *page,
			      int failed_mirror, int correction_err)
{
	int i;
	int tried_out = 1;
	unsigned long num_pages;

	if (eb->read_mirror < failed_mirror)
		eb->read_mirror = failed_mirror;

	if (correction_err)
		SetPageChecked(page);

	if (!atomic_dec_and_test(&eb->io_pages))
		return;

	clear_extent_buffer_uptodate(eb);

	if (!eb->can_retry)
		return;

	if (eb->nr_retry == EXTENT_BUFFER_SHOULD_ABORT_RETRY) {
		/*
		 * Keep ABORTED until we write the good copy or we switch to
		 * another btrfs mirror.
		 */
		eb->nr_retry = EXTENT_BUFFER_RETRY_ABORTED;
		correction_put_locked_record(eb->fs_info, eb->start);
		return;
	}

	num_pages = num_extent_pages(eb);
	for (i = 0; i < num_pages && tried_out; i++) {
		page = eb->pages[i];
		if (!PageChecked(page))
			tried_out = 0;
	}

	if (test_bit(EXTENT_BUFFER_RETRY_ERR, &eb->bflags) ||
	    eb->nr_retry > SYNO_DATA_CORRECTION_MAX_RETRY_TIMES ||
	    tried_out) {
		u8 zero[BTRFS_CSUM_SIZE] = { 0 };

		for (i = 0; i < num_pages; i++) {
			page = eb->pages[i];
			ClearPageChecked(page);
		}
		eb->nr_retry = EXTENT_BUFFER_SHOULD_ABORT_RETRY;

		if (memcmp(eb->prev_bad_csum, zero, sizeof(eb->prev_bad_csum)))
			btrfs_err(eb->fs_info,
				  "%s failed to repair btree csum error on %llu, mirror = %d",
				  eb->fs_info->sb->s_id, eb->start,
				  eb->read_mirror);
		else if (eb->parent_transid &&
			 btrfs_header_generation(eb) != eb->parent_transid)
			btrfs_err(eb->fs_info,
				  "%s failed to repair parent transid verify failure on %llu, mirror = %d",
				  eb->fs_info->sb->s_id, eb->start,
				  eb->read_mirror);
		else
			btrfs_err(eb->fs_info,
				  "%s failed to repair meta data on %llu, mirror = %d",
				  eb->fs_info->sb->s_id, eb->start,
				  eb->read_mirror);
	} else {
		/*
		 * We need an indicator so that only one process does the
		 * repair work.  Checking that eb->nr_retry changed from 0 to
		 * 1 in read_extent_buffer_pages() is not enough, since
		 * concurrent readers could set eb->nr_retry > 1 before we
		 * can test it.
		 */
		if (!eb->nr_retry)
			set_bit(EXTENT_BUFFER_SHOULD_REPAIR, &eb->bflags);
		eb->nr_retry++;
	}
}
#endif /* MY_ABC_HERE */

static void end_workqueue_bio(struct bio *bio)
{
	struct btrfs_end_io_wq *end_io_wq = bio->bi_private;
	struct btrfs_fs_info *fs_info;
	struct btrfs_workqueue *wq;

	fs_info = end_io_wq->info;
	end_io_wq->status = bio->bi_status;

	if (bio_op(bio) == REQ_OP_WRITE) {
		if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA)
			wq = fs_info->endio_meta_write_workers;
		else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE)
Btrfs: fix task hang under heavy compressed write
This has been reported and discussed for a long time, and this hang occurs in
both 3.15 and 3.16.
Btrfs now migrates to use kernel workqueue, but it introduces this hang problem.
Btrfs has a kind of work queued as an ordered way, which means that its
ordered_func() must be processed in the way of FIFO, so it usually looks like --
normal_work_helper(arg)
work = container_of(arg, struct btrfs_work, normal_work);
work->func() <---- (we name it work X)
for ordered_work in wq->ordered_list
ordered_work->ordered_func()
ordered_work->ordered_free()
The hang is a rare case, first when we find free space, we get an uncached block
group, then we go to read its free space cache inode for free space information,
so it will
file a readahead request
btrfs_readpages()
for page that is not in page cache
__do_readpage()
submit_extent_page()
btrfs_submit_bio_hook()
btrfs_bio_wq_end_io()
submit_bio()
end_workqueue_bio() <--(ret by the 1st endio)
queue a work(named work Y) for the 2nd
also the real endio()
So the hang occurs when work Y's work_struct and work X's work_struct happens
to share the same address.
A bit more explanation,
A,B,C -- struct btrfs_work
arg -- struct work_struct
kthread:
worker_thread()
pick up a work_struct from @worklist
process_one_work(arg)
worker->current_work = arg; <-- arg is A->normal_work
worker->current_func(arg)
normal_work_helper(arg)
A = container_of(arg, struct btrfs_work, normal_work);
A->func()
A->ordered_func()
A->ordered_free() <-- A gets freed
B->ordered_func()
submit_compressed_extents()
find_free_extent()
load_free_space_inode()
... <-- (the above readhead stack)
end_workqueue_bio()
btrfs_queue_work(work C)
B->ordered_free()
Since work A sits at the head of wq->ordered_list and more ordered
works are queued after it, such as B->ordered_func(), its memory could have been
freed before normal_work_helper() returns, which means that the kernel workqueue
code in worker_thread() still has worker->current_work pointing at work
A->normal_work, i.e. arg's address.
Meanwhile, work C is allocated after work A is freed, work C->normal_work
and work A->normal_work are likely to share the same address(I confirmed this
with ftrace output, so I'm not just guessing, it's rare though).
When another kthread picks up work C->normal_work to process and finds our
kthread is already processing it (see find_worker_executing_work()), it treats
work C as a collision and skips it, which ends up with nobody processing work C.
So the situation is that our kthread is waiting forever on work C.
Besides, there are other cases that can lead to deadlock, but the real problem
is that all btrfs workqueues share one work->func, normal_work_helper,
so this patch makes each workqueue have its own helper function, which is only a
wrapper of normal_work_helper.
With this patch, I no longer hit the above hang.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-08-15 22:36:53 +07:00
|
|
|
wq = fs_info->endio_freespace_worker;
|
2019-09-17 01:30:57 +07:00
|
|
|
else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 22:36:53 +07:00
|
|
|
wq = fs_info->endio_raid56_workers;
|
2019-09-17 01:30:57 +07:00
|
|
|
else
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 22:36:53 +07:00
|
|
|
wq = fs_info->endio_write_workers;
|
Btrfs: move data checksumming into a dedicated tree
Btrfs stores checksums for each data block. Until now, they have
been stored in the subvolume trees, indexed by the inode that is
referencing the data block. This means that when we read the inode,
we've probably read in at least some checksums as well.
But, this has a few problems:
* The checksums are indexed by logical offset in the file. When
compression is on, this means we have to do the expensive checksumming
on the uncompressed data. It would be faster if we could checksum
the compressed data instead.
* If we implement encryption, we'll be checksumming the plain text and
storing that on disk. This is significantly less secure.
* For either compression or encryption, we have to get the plain text
back before we can verify the checksum as correct. This makes the raid
layer balancing and extent moving much more expensive.
* It makes the front end caching code more complex, as we have to touch
the subvolume and inodes as we cache extents.
* There is potentially one copy of the checksum in each subvolume
referencing an extent.
The solution used here is to store the extent checksums in a dedicated
tree. This allows us to index the checksums by physical extent
start and length. It means:
* The checksum is against the data stored on disk, after any compression
or encryption is done.
* The checksum is stored in a central location, and can be verified without
following back references, or reading inodes.
This makes compression significantly faster by reducing the amount of
data that needs to be checksummed. It will also allow much faster
raid management code in general.
The checksums are indexed by a key with a fixed objectid (a magic value
in ctree.h) and offset set to the starting byte of the extent. This
allows us to copy the checksum items into the fsync log tree directly (or
any other tree), without having to invent a second format for them.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-12-09 04:58:54 +07:00
|
|
|
} else {
|
2020-04-17 04:46:24 +07:00
|
|
|
if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 22:36:53 +07:00
|
|
|
wq = fs_info->endio_raid56_workers;
|
2019-09-17 01:30:57 +07:00
|
|
|
else if (end_io_wq->metadata)
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 22:36:53 +07:00
|
|
|
wq = fs_info->endio_meta_workers;
|
2019-09-17 01:30:57 +07:00
|
|
|
else
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 22:36:53 +07:00
|
|
|
wq = fs_info->endio_workers;
|
Btrfs: move data checksumming into a dedicated tree
2008-12-09 04:58:54 +07:00
|
|
|
}
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 22:36:53 +07:00
|
|
|
|
2019-09-17 01:30:57 +07:00
|
|
|
btrfs_init_work(&end_io_wq->work, end_workqueue_fn, NULL, NULL);
|
Btrfs: fix task hang under heavy compressed write
2014-08-15 22:36:53 +07:00
|
|
|
btrfs_queue_work(wq, &end_io_wq->work);
|
2008-04-10 03:28:12 +07:00
|
|
|
}
|
|
|
|
|
2017-06-03 14:38:06 +07:00
|
|
|
blk_status_t btrfs_bio_wq_end_io(struct btrfs_fs_info *info, struct bio *bio,
|
2014-07-30 05:25:45 +07:00
|
|
|
enum btrfs_wq_endio_type metadata)
|
2008-03-25 02:01:56 +07:00
|
|
|
{
|
2014-07-30 05:55:42 +07:00
|
|
|
struct btrfs_end_io_wq *end_io_wq;
|
2014-09-12 17:44:03 +07:00
|
|
|
|
2014-07-30 05:55:42 +07:00
|
|
|
end_io_wq = kmem_cache_alloc(btrfs_end_io_wq_cache, GFP_NOFS);
|
2008-04-10 03:28:12 +07:00
|
|
|
if (!end_io_wq)
|
2017-06-03 14:38:06 +07:00
|
|
|
return BLK_STS_RESOURCE;
|
2008-04-10 03:28:12 +07:00
|
|
|
|
|
|
|
end_io_wq->private = bio->bi_private;
|
|
|
|
end_io_wq->end_io = bio->bi_end_io;
|
2008-04-10 03:28:12 +07:00
|
|
|
end_io_wq->info = info;
|
2017-06-03 14:38:06 +07:00
|
|
|
end_io_wq->status = 0;
|
2008-04-10 03:28:12 +07:00
|
|
|
end_io_wq->bio = bio;
|
2008-04-10 03:28:12 +07:00
|
|
|
end_io_wq->metadata = metadata;
|
2008-04-10 03:28:12 +07:00
|
|
|
|
|
|
|
bio->bi_private = end_io_wq;
|
|
|
|
bio->bi_end_io = end_workqueue_bio;
|
2008-04-10 03:28:12 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
Btrfs: Add ordered async work queues
Btrfs uses kernel threads to create async work queues for cpu intensive
operations such as checksumming and decompression. These work well,
but they make it difficult to keep IO order intact.
A single writepages call from pdflush or fsync will turn into a number
of bios, and each bio is checksummed in parallel. Once the checksum is
computed, the bio is sent down to the disk, and since we don't control
the order in which the parallel operations happen, they might go down to
the disk in almost any order.
The code deals with this somewhat by having deep work queues for a single
kernel thread, making it very likely that a single thread will process all
the bios for a single inode.
This patch introduces an explicitly ordered work queue. As work structs
are placed into the queue they are put onto the tail of a list. They have
three callbacks:
->func (cpu intensive processing here)
->ordered_func (order sensitive processing here)
->ordered_free (free the work struct, all processing is done)
The work struct has three callbacks. The func callback does the cpu intensive
work, and when it completes the work struct is marked as done.
Every time a work struct completes, the list is checked to see if the head
is marked as done. If so the ordered_func callback is used to do the
order sensitive processing and the ordered_free callback is used to do
any cleanup. Then we loop back and check the head of the list again.
This patch also changes the checksumming code to use the ordered workqueues.
On a 4 drive array, it increases streaming writes from 280MB/s to 350MB/s.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-11-07 10:03:00 +07:00
|
|
|
static void run_one_async_start(struct btrfs_work *work)
|
|
|
|
{
|
|
|
|
struct async_submit_bio *async;
|
2017-06-03 14:38:06 +07:00
|
|
|
blk_status_t ret;
|
Btrfs: Add ordered async work queues
2008-11-07 10:03:00 +07:00
|
|
|
|
|
|
|
async = container_of(work, struct async_submit_bio, work);
|
2017-05-05 22:57:13 +07:00
|
|
|
ret = async->submit_bio_start(async->private_data, async->bio,
|
2012-03-12 22:03:00 +07:00
|
|
|
async->bio_offset);
|
|
|
|
if (ret)
|
2017-06-03 14:38:06 +07:00
|
|
|
async->status = ret;
|
Btrfs: Add ordered async work queues
2008-11-07 10:03:00 +07:00
}

/*
 * In order to insert checksums into the metadata in large chunks, we wait
 * until bio submission time.  All the pages in the bio are checksummed and
 * sums are attached onto the ordered extent record.
 *
 * At IO completion time the csums attached on the ordered extent record are
 * inserted into the tree.
 */
static void run_one_async_done(struct btrfs_work *work)
{
	struct async_submit_bio *async;
	struct inode *inode;
	blk_status_t ret;

	async = container_of(work, struct async_submit_bio, work);
	inode = async->private_data;

	/* If an error occurred we just want to clean up the bio and move on */
	if (async->status) {
		async->bio->bi_status = async->status;
		bio_endio(async->bio);
		return;
	}

	/*
	 * All of the bios that pass through here are from async helpers.
	 * Use REQ_CGROUP_PUNT to issue them from the owning cgroup's context.
	 * This changes nothing when cgroups aren't in use.
	 */
	async->bio->bi_opf |= REQ_CGROUP_PUNT;
	ret = btrfs_map_bio(btrfs_sb(inode->i_sb), async->bio, async->mirror_num);
	if (ret) {
		async->bio->bi_status = ret;
		bio_endio(async->bio);
	}
}

static void run_one_async_free(struct btrfs_work *work)
{
	struct async_submit_bio *async;
#ifdef MY_ABC_HERE
	struct btrfs_fs_info *fs_info;
#endif /* MY_ABC_HERE */
	async = container_of(work, struct async_submit_bio, work);
#ifdef MY_ABC_HERE
	if (async->fs_info && async->throttle) {
		fs_info = async->fs_info;
		if (atomic_dec_return(&fs_info->syno_async_submit_nr) <
		    fs_info->syno_async_submit_throttle &&
		    waitqueue_active(&fs_info->syno_async_submit_queue_wait))
			wake_up(&fs_info->syno_async_submit_queue_wait);
	}
#endif /* MY_ABC_HERE */
	kfree(async);
}

#ifdef MY_ABC_HERE
static blk_status_t __btrfs_wq_submit_bio(
#else /* MY_ABC_HERE */
blk_status_t btrfs_wq_submit_bio(
#endif /* MY_ABC_HERE */
		struct btrfs_fs_info *fs_info, struct bio *bio,
		int mirror_num, unsigned long bio_flags,
		u64 bio_offset, void *private_data,
		extent_submit_bio_start_t *submit_bio_start
#ifdef MY_ABC_HERE
		, bool throttle
#endif /* MY_ABC_HERE */
		)
{
	struct async_submit_bio *async;

	async = kmalloc(sizeof(*async), GFP_NOFS);
	if (!async)
		return BLK_STS_RESOURCE;

	async->private_data = private_data;
	async->bio = bio;
	async->mirror_num = mirror_num;
|
|
|
async->submit_bio_start = submit_bio_start;
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
async->fs_info = fs_info;
|
|
|
|
async->throttle = throttle;
|
|
|
|
if (async->throttle)
|
|
|
|
atomic_inc(&fs_info->syno_async_submit_nr);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2019-09-17 01:30:57 +07:00
|
|
|
btrfs_init_work(&async->work, run_one_async_start, run_one_async_done,
|
|
|
|
run_one_async_free);
|
|
|
|
|
2010-05-25 20:48:28 +07:00
|
|
|
async->bio_offset = bio_offset;
|
2008-09-29 22:19:10 +07:00
|
|
|
|
2017-06-03 14:38:06 +07:00
|
|
|
async->status = 0;
|
2012-03-12 22:03:00 +07:00
|
|
|
|
2016-11-01 20:40:06 +07:00
|
|
|
if (op_is_sync(bio->bi_opf))
|
2014-02-28 09:46:06 +07:00
|
|
|
btrfs_set_work_high_priority(&async->work);
|
2009-04-21 02:50:09 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
if (async->throttle) {
|
|
|
|
btrfs_queue_work(fs_info->syno_cow_async_workers, &async->work);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2014-02-28 09:46:06 +07:00
|
|
|
btrfs_queue_work(fs_info->workers, &async->work);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
out:
|
|
|
|
#endif /* MY_ABC_HERE */
|
2008-04-16 22:14:51 +07:00
|
|
|
return 0;
|
|
|
|
}
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
blk_status_t btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
|
|
|
|
int mirror_num, unsigned long bio_flags,
|
|
|
|
u64 bio_offset, void *private_data,
|
|
|
|
extent_submit_bio_start_t *submit_bio_start)
|
|
|
|
{
|
|
|
|
return __btrfs_wq_submit_bio(fs_info, bio, mirror_num, bio_flags, bio_offset, private_data, submit_bio_start, false);
|
|
|
|
}
|
|
|
|
|
|
|
|
blk_status_t btrfs_wq_submit_bio_throttle(struct btrfs_fs_info *fs_info, struct bio *bio,
|
|
|
|
int mirror_num, unsigned long bio_flags,
|
|
|
|
u64 bio_offset, void *private_data,
|
|
|
|
extent_submit_bio_start_t *submit_bio_start)
|
|
|
|
{
|
|
|
|
DEFINE_WAIT(wait);
|
|
|
|
|
|
|
|
if (fs_info->syno_async_submit_throttle && atomic_read(&fs_info->syno_async_submit_nr) > fs_info->syno_async_submit_throttle) {
|
|
|
|
prepare_to_wait_exclusive(&fs_info->syno_async_submit_queue_wait, &wait, TASK_UNINTERRUPTIBLE);
|
|
|
|
if (atomic_read(&fs_info->syno_async_submit_nr) > fs_info->syno_async_submit_throttle)
|
|
|
|
schedule();
|
|
|
|
finish_wait(&fs_info->syno_async_submit_queue_wait, &wait);
|
|
|
|
}
|
|
|
|
|
|
|
|
return __btrfs_wq_submit_bio(fs_info, bio, mirror_num, bio_flags, bio_offset, private_data, submit_bio_start, true);
|
|
|
|
}
|
|
|
|
#endif /* MY_ABC_HERE */
|
2008-04-16 22:14:51 +07:00
|
|
|
|
2017-06-03 14:38:06 +07:00
|
|
|
static blk_status_t btree_csum_one_bio(struct bio *bio)
|
2008-09-24 00:14:12 +07:00
|
|
|
{
|
2013-11-08 03:20:26 +07:00
|
|
|
struct bio_vec *bvec;
|
2008-09-24 00:14:12 +07:00
|
|
|
struct btrfs_root *root;
|
2019-04-25 14:03:00 +07:00
|
|
|
int ret = 0;
|
2019-02-15 18:13:19 +07:00
|
|
|
struct bvec_iter_all iter_all;
|
2008-09-24 00:14:12 +07:00
|
|
|
|
2017-07-13 23:10:07 +07:00
|
|
|
ASSERT(!bio_flagged(bio, BIO_CLONED));
|
2019-04-25 14:03:00 +07:00
|
|
|
bio_for_each_segment_all(bvec, bio, iter_all) {
|
2008-09-24 00:14:12 +07:00
|
|
|
root = BTRFS_I(bvec->bv_page->mapping->host)->root;
|
2014-11-21 15:15:07 +07:00
|
|
|
ret = csum_dirty_buffer(root->fs_info, bvec->bv_page);
|
2012-03-12 22:03:00 +07:00
|
|
|
if (ret)
|
|
|
|
break;
|
2008-09-24 00:14:12 +07:00
|
|
|
}
|
2013-11-08 03:20:26 +07:00
|
|
|
|
2017-06-03 14:38:06 +07:00
|
|
|
return errno_to_blk_status(ret);
|
2008-09-24 00:14:12 +07:00
|
|
|
}
|
|
|
|
|
2018-03-08 20:35:48 +07:00
|
|
|
static blk_status_t btree_submit_bio_start(void *private_data, struct bio *bio,
|
2017-07-06 06:41:23 +07:00
|
|
|
u64 bio_offset)
|
2008-04-10 03:28:12 +07:00
|
|
|
{
|
2008-06-12 03:50:36 +07:00
|
|
|
/*
|
|
|
|
* when we're called for a write, we're already in the async
|
2008-08-16 02:34:16 +07:00
|
|
|
* submission context. Just jump into btrfs_map_bio
|
2008-06-12 03:50:36 +07:00
|
|
|
*/
|
2012-03-12 22:03:00 +07:00
|
|
|
return btree_csum_one_bio(bio);
|
}

btrfs: detect fast implementation of crc32c on all architectures
Currently, there's only a check for a fast crc32c implementation on X86,
based on the CPU flags. This is used to decide if checksumming should be
offloaded to worker threads or can be calculated by the caller.
As there are more architectures that implement a faster version of
crc32c (ARM, SPARC, s390, MIPS, PowerPC), also there are specialized hw
cards.
The detection is based on driver name, all generic C implementations
contain 'generic', while the specialized versions do not. Alternatively
the priority could be used, but this is not currently provided by the
crypto API.
The flag is set per-filesystem at mount time and used for the offloading
decisions.
Signed-off-by: David Sterba <dsterba@suse.com>
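The driver-name heuristic this commit message describes is easy to state in isolation. A minimal sketch, assuming crypto driver names follow the usual `crc32c-generic` / `crc32c-intel` convention (the function name `csum_impl_is_fast` is illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

/*
 * Generic C implementations carry "generic" in the crypto driver name;
 * accelerated ones (cpu instructions, offload cards) do not.
 */
static int csum_impl_is_fast(const char *driver_name)
{
	return strstr(driver_name, "generic") == NULL;
}
```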
static int check_async_write(struct btrfs_fs_info *fs_info,
			     struct btrfs_inode *bi)
{
	if (atomic_read(&bi->sync_writers))
		return 0;
	if (test_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags))
		return 0;
	return 1;
}

blk_status_t btrfs_submit_metadata_bio(struct inode *inode, struct bio *bio,
				       int mirror_num, unsigned long bio_flags)
{
	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
	int async = check_async_write(fs_info, BTRFS_I(inode));
	blk_status_t ret;

	if (bio_op(bio) != REQ_OP_WRITE) {
		/*
		 * called for a read, do the setup so that checksum validation
		 * can happen in the async kernel threads
		 */
		ret = btrfs_bio_wq_end_io(fs_info, bio,
					  BTRFS_WQ_ENDIO_METADATA);
		if (ret)
			goto out_w_error;
		ret = btrfs_map_bio(fs_info, bio, mirror_num);
	} else if (!async) {
		ret = btree_csum_one_bio(bio);
		if (ret)
			goto out_w_error;
#ifdef MY_ABC_HERE
		if (bio_flags & EXTENT_BIO_TREE_LOG)
			ret = btrfs_map_bio_log_tree(fs_info, bio, mirror_num);
		else
#endif /* MY_ABC_HERE */
		ret = btrfs_map_bio(fs_info, bio, mirror_num);
	} else {
		/*
		 * kthread helpers are used to submit writes so that
		 * checksumming can happen in parallel across all CPUs
		 */
		ret = btrfs_wq_submit_bio(fs_info, bio, mirror_num, 0,
					  0, inode, btree_submit_bio_start);
	}

	if (ret)
		goto out_w_error;
	return 0;

out_w_error:
	bio->bi_status = ret;
	bio_endio(bio);
	return ret;
}

#ifdef CONFIG_MIGRATION
static int btree_migratepage(struct address_space *mapping,
			struct page *newpage, struct page *page,
			enum migrate_mode mode)
{
	/*
	 * we can't safely write a btree page from here,
	 * we haven't done the locking hook
	 */
	if (PageDirty(page))
		return -EAGAIN;
	/*
	 * Buffers may be managed in a filesystem specific way.
	 * We must have no buffers or drop them.
	 */
	if (page_has_private(page) &&
	    !try_to_release_page(page, GFP_KERNEL))
		return -EAGAIN;
	return migrate_page(mapping, newpage, page, mode);
}
#endif

static int btree_writepages(struct address_space *mapping,
			    struct writeback_control *wbc)
{
	struct btrfs_fs_info *fs_info;
	int ret;

	if (wbc->sync_mode == WB_SYNC_NONE) {

		if (wbc->for_kupdate)
			return 0;

		fs_info = BTRFS_I(mapping->host)->root->fs_info;
		/* this is a bit racy, but that's ok */
		ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
					       BTRFS_DIRTY_METADATA_THRESH,
					       fs_info->dirty_metadata_batch);
		if (ret < 0)
			return 0;
	}
	return btree_write_cache_pages(mapping, wbc);
}

static int btree_releasepage(struct page *page, gfp_t gfp_flags)
{
	if (PageWriteback(page) || PageDirty(page))
		return 0;

	return try_release_extent_buffer(page);
}

static void btree_invalidatepage(struct page *page, unsigned int offset,
				 unsigned int length)
{
	struct extent_io_tree *tree;

	tree = &BTRFS_I(page->mapping->host)->io_tree;
	extent_invalidatepage(tree, page, offset);
	btree_releasepage(page, GFP_NOFS);
	if (PagePrivate(page)) {
		btrfs_warn(BTRFS_I(page->mapping->host)->root->fs_info,
			   "page private not zero on page %llu",
			   (unsigned long long)page_offset(page));
		detach_page_private(page);
	}
}

static int btree_set_page_dirty(struct page *page)
{
#ifdef DEBUG
	struct extent_buffer *eb;

	BUG_ON(!PagePrivate(page));
	eb = (struct extent_buffer *)page->private;
	BUG_ON(!eb);
	BUG_ON(!test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
	BUG_ON(!atomic_read(&eb->refs));
	btrfs_assert_tree_locked(eb);
#endif
	return __set_page_dirty_nobuffers(page);
}

static const struct address_space_operations btree_aops = {
	.writepages	= btree_writepages,
	.releasepage	= btree_releasepage,
	.invalidatepage = btree_invalidatepage,
#ifdef CONFIG_MIGRATION
	.migratepage	= btree_migratepage,
#endif
	.set_page_dirty = btree_set_page_dirty,
};

2016-06-23 05:54:24 +07:00
|
|
|
void readahead_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr)
|
2007-05-01 19:53:32 +07:00
|
|
|
{
|
2007-10-16 03:14:19 +07:00
|
|
|
struct extent_buffer *buf = NULL;
|
2019-03-14 14:52:35 +07:00
|
|
|
int ret;
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
bool can_retry = false;
|
|
|
|
#endif /* MY_ABC_HERE */
|
2007-05-01 19:53:32 +07:00
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
buf = btrfs_find_create_tree_block(fs_info, bytenr);
|
2016-06-07 02:01:23 +07:00
|
|
|
if (IS_ERR(buf))
|
2014-06-15 05:49:36 +07:00
|
|
|
return;
|
2019-03-14 14:52:35 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
ret = read_extent_buffer_pages(buf, WAIT_NONE, 0
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
, &can_retry, 0
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
);
|
2019-03-14 14:52:35 +07:00
|
|
|
if (ret < 0)
|
|
|
|
free_extent_buffer_stale(buf);
|
|
|
|
else
|
|
|
|
free_extent_buffer(buf);
|
2007-05-01 19:53:32 +07:00
|
|
|
}

struct extent_buffer *btrfs_find_create_tree_block(
						struct btrfs_fs_info *fs_info,
						u64 bytenr)
{
	if (btrfs_is_testing(fs_info))
		return alloc_test_extent_buffer(fs_info, bytenr);
	return alloc_extent_buffer(fs_info, bytenr);
}

/*
 * Read a tree block at logical address @bytenr and do basic but critical
 * verification.
 *
 * @parent_transid:	expected transid of this tree block, skip check if 0
 * @level:		expected level, mandatory check
 * @first_key:		expected key in slot 0, skip check if NULL
 */
struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
				      u64 parent_transid, int level,
				      struct btrfs_key *first_key)
{
	struct extent_buffer *buf = NULL;
	int ret;

	buf = btrfs_find_create_tree_block(fs_info, bytenr);
	if (IS_ERR(buf))
		return buf;

	ret = btree_read_extent_buffer_pages(buf, parent_transid,
					     level, first_key);
	if (ret) {
		free_extent_buffer_stale(buf);
		return ERR_PTR(ret);
	}
	return buf;
}

void btrfs_clean_tree_block(struct extent_buffer *buf)
{
	struct btrfs_fs_info *fs_info = buf->fs_info;

	if (btrfs_header_generation(buf) ==
	    fs_info->running_transaction->transid) {
		btrfs_assert_tree_locked(buf);

		if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &buf->bflags)) {
			percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
						 -buf->len,
						 fs_info->dirty_metadata_batch);
			/* ugh, clear_extent_buffer_dirty needs to lock the page */
			btrfs_set_lock_blocking_write(buf);
			clear_extent_buffer_dirty(buf);
		}
	}
}

static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
			 u64 objectid)
{
	bool dummy = test_bit(BTRFS_FS_STATE_DUMMY_FS_INFO, &fs_info->fs_state);

	root->fs_info = fs_info;
	root->node = NULL;
	root->commit_root = NULL;
	root->state = 0;
	root->orphan_cleanup_state = 0;

	root->last_trans = 0;
	root->highest_objectid = 0;
	root->nr_delalloc_inodes = 0;
	root->nr_ordered_extents = 0;
	root->inode_tree = RB_ROOT;
	INIT_RADIX_TREE(&root->delayed_nodes_tree, GFP_ATOMIC);
	root->block_rsv = NULL;

	INIT_LIST_HEAD(&root->dirty_list);
	INIT_LIST_HEAD(&root->root_list);
	INIT_LIST_HEAD(&root->delalloc_inodes);
	INIT_LIST_HEAD(&root->delalloc_root);
	INIT_LIST_HEAD(&root->ordered_extents);
	INIT_LIST_HEAD(&root->ordered_root);
	INIT_LIST_HEAD(&root->reloc_dirty_list);
	INIT_LIST_HEAD(&root->logged_list[0]);
	INIT_LIST_HEAD(&root->logged_list[1]);
	spin_lock_init(&root->inode_lock);
	spin_lock_init(&root->delalloc_lock);
	spin_lock_init(&root->ordered_extent_lock);
	spin_lock_init(&root->accounting_lock);
	spin_lock_init(&root->log_extents_lock[0]);
	spin_lock_init(&root->log_extents_lock[1]);
	spin_lock_init(&root->qgroup_meta_rsv_lock);
	mutex_init(&root->objectid_mutex);
	mutex_init(&root->log_mutex);
	mutex_init(&root->ordered_extent_mutex);
	mutex_init(&root->delalloc_mutex);
	init_waitqueue_head(&root->qgroup_flush_wait);
	init_waitqueue_head(&root->log_writer_wait);
	init_waitqueue_head(&root->log_commit_wait[0]);
	init_waitqueue_head(&root->log_commit_wait[1]);
	INIT_LIST_HEAD(&root->log_ctxs[0]);
	INIT_LIST_HEAD(&root->log_ctxs[1]);
	atomic_set(&root->log_commit[0], 0);
	atomic_set(&root->log_commit[1], 0);
	atomic_set(&root->log_writers, 0);
	atomic_set(&root->log_batch, 0);
	refcount_set(&root->refs, 1);
	atomic_set(&root->snapshot_force_cow, 0);
	atomic_set(&root->nr_swapfiles, 0);
	root->log_transid = 0;
	root->log_transid_committed = -1;
	root->last_log_commit = 0;
extent data disk bytenr 13631488 nr 4096
extent data offset 0 nr 131072 ram 131072
(...)
------------[ cut here ]------------
kernel BUG at fs/btrfs/ctree.c:3153!
invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI
CPU: 1 PID: 13473 Comm: fsx Not tainted 5.6.0-rc7-btrfs-next-58 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
RIP: 0010:btrfs_set_item_key_safe+0x1ea/0x270 [btrfs]
Code: 0f b6 ...
RSP: 0018:ffff95e3889179d0 EFLAGS: 00010282
RAX: 0000000000000000 RBX: 0000000000000051 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffffb7763988 RDI: 0000000000000001
RBP: fffffffffffffff6 R08: 0000000000000000 R09: 0000000000000001
R10: 00000000000009ef R11: 0000000000000000 R12: ffff8912a8ba5a08
R13: ffff95e388917a06 R14: ffff89138dcf68c8 R15: ffff95e388917ace
FS: 00007fe587084e80(0000) GS:ffff8913baa00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe587091000 CR3: 0000000126dac005 CR4: 00000000003606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
btrfs_del_csums+0x2f4/0x540 [btrfs]
copy_items+0x4b5/0x560 [btrfs]
btrfs_log_inode+0x910/0xf90 [btrfs]
btrfs_log_inode_parent+0x2a0/0xe40 [btrfs]
? dget_parent+0x5/0x370
btrfs_log_dentry_safe+0x4a/0x70 [btrfs]
btrfs_sync_file+0x42b/0x4d0 [btrfs]
__x64_sys_msync+0x199/0x200
do_syscall_64+0x5c/0x280
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7fe586c65760
Code: 00 f7 ...
RSP: 002b:00007ffe250f98b8 EFLAGS: 00000246 ORIG_RAX: 000000000000001a
RAX: ffffffffffffffda RBX: 00000000000040e1 RCX: 00007fe586c65760
RDX: 0000000000000004 RSI: 0000000000006b51 RDI: 00007fe58708b000
RBP: 0000000000006a70 R08: 0000000000000003 R09: 00007fe58700cb61
R10: 0000000000000100 R11: 0000000000000246 R12: 00000000000000e1
R13: 00007fe58708b000 R14: 0000000000006b51 R15: 0000558de021a420
Modules linked in: dm_log_writes ...
---[ end trace c92a7f447a8515f5 ]---
CC: stable@vger.kernel.org # 4.4+
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-18 18:14:50 +07:00
|
|
|
if (!dummy) {
|
2019-03-01 09:47:59 +07:00
|
|
|
extent_io_tree_init(fs_info, &root->dirty_log_pages,
|
|
|
|
IO_TREE_ROOT_DIRTY_LOG_PAGES, NULL);
|
btrfs: fix corrupt log due to concurrent fsync of inodes with shared extents
When we have extents shared amongst different inodes in the same subvolume,
if we fsync them in parallel we can end up with checksum items in the log
tree that represent ranges which overlap.
For example, consider we have inodes A and B, both sharing an extent that
covers the logical range from X to X + 64KiB:
1) Task A starts an fsync on inode A;
2) Task B starts an fsync on inode B;
3) Task A calls btrfs_csum_file_blocks(), and the first search in the
log tree, through btrfs_lookup_csum(), returns -EFBIG because it
finds an existing checksum item that covers the range from X - 64KiB
to X;
4) Task A checks that the checksum item has not reached the maximum
possible size (MAX_CSUM_ITEMS) and then releases the search path
before it does another path search for insertion (through a direct
call to btrfs_search_slot());
5) As soon as task A releases the path and before it does the search
for insertion, task B calls btrfs_csum_file_blocks() and gets -EFBIG
too, because there is an existing checksum item that has an end
offset that matches the start offset (X) of the checksum range we want
to log;
6) Task B releases the path;
7) Task A does the path search for insertion (through btrfs_search_slot())
and then verifies that the checksum item that ends at offset X still
exists and extends its size to insert the checksums for the range from
X to X + 64KiB;
8) Task A releases the path and returns from btrfs_csum_file_blocks(),
having inserted the checksums into an existing checksum item that got
its size extended. At this point we have one checksum item in the log
tree that covers the logical range from X - 64KiB to X + 64KiB;
9) Task B now does a search for insertion using btrfs_search_slot() too,
but it finds that the previous checksum item no longer ends at the
offset X, it now ends at offset X + 64KiB, so it leaves that item
untouched.
Then it releases the path and calls btrfs_insert_empty_item()
that inserts a checksum item with a key offset corresponding to X and
a size for inserting a single checksum (4 bytes in case of crc32c).
Subsequent iterations end up extending this new checksum item so that
it contains the checksums for the range from X to X + 64KiB.
So after task B returns from btrfs_csum_file_blocks() we end up with
two checksum items in the log tree that have overlapping ranges, one
for the range from X - 64KiB to X + 64KiB, and another for the range
from X to X + 64KiB.
Having checksum items that represent ranges which overlap, regardless of
being in the log tree or in the checksums tree, can lead to problems where
checksums for a file range end up not being found. This type of problem
has happened a few times in the past and the following commits fixed them
and explain in detail why having checksum items with overlapping ranges is
problematic:
27b9a8122ff71a "Btrfs: fix csum tree corruption, duplicate and outdated checksums"
b84b8390d6009c "Btrfs: fix file read corruption after extent cloning and fsync"
40e046acbd2f36 "Btrfs: fix missing data checksums after replaying a log tree"
Since this specific instance of the problem can only happen when logging
inodes, because it is the only case where concurrent attempts to insert
checksums for the same range can happen, fix the issue by using an extent
io tree as a range lock to serialize checksum insertion during inode
logging.
This issue could often be reproduced by the test case generic/457 from
fstests. When it happens it produces the following trace:
BTRFS critical (device dm-0): corrupt leaf: root=18446744073709551610 block=30625792 slot=42, csum end range (15020032) goes beyond the start range (15015936) of the next csum item
BTRFS info (device dm-0): leaf 30625792 gen 7 total ptrs 49 free space 2402 owner 18446744073709551610
BTRFS info (device dm-0): refs 1 lock (w:0 r:0 bw:0 br:0 sw:0 sr:0) lock_owner 0 current 15884
item 0 key (18446744073709551606 128 13979648) itemoff 3991 itemsize 4
item 1 key (18446744073709551606 128 13983744) itemoff 3987 itemsize 4
item 2 key (18446744073709551606 128 13987840) itemoff 3983 itemsize 4
item 3 key (18446744073709551606 128 13991936) itemoff 3979 itemsize 4
item 4 key (18446744073709551606 128 13996032) itemoff 3975 itemsize 4
item 5 key (18446744073709551606 128 14000128) itemoff 3971 itemsize 4
(...)
BTRFS error (device dm-0): block=30625792 write time tree block corruption detected
------------[ cut here ]------------
WARNING: CPU: 1 PID: 15884 at fs/btrfs/disk-io.c:539 btree_csum_one_bio+0x268/0x2d0 [btrfs]
Modules linked in: btrfs dm_thin_pool ...
CPU: 1 PID: 15884 Comm: fsx Tainted: G W 5.6.0-rc7-btrfs-next-58 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
RIP: 0010:btree_csum_one_bio+0x268/0x2d0 [btrfs]
Code: c7 c7 ...
RSP: 0018:ffffbb0109e6f8e0 EFLAGS: 00010296
RAX: 0000000000000000 RBX: ffffe1c0847b6080 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffffaa963988 RDI: 0000000000000001
RBP: ffff956a4f4d2000 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000526 R11: 0000000000000000 R12: ffff956a5cd28bb0
R13: 0000000000000000 R14: ffff956a649c9388 R15: 000000011ed82000
FS: 00007fb419959e80(0000) GS:ffff956a7aa00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000fe6d54 CR3: 0000000138696005 CR4: 00000000003606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
btree_submit_bio_hook+0x67/0xc0 [btrfs]
submit_one_bio+0x31/0x50 [btrfs]
btree_write_cache_pages+0x2db/0x4b0 [btrfs]
? __filemap_fdatawrite_range+0xb1/0x110
do_writepages+0x23/0x80
__filemap_fdatawrite_range+0xd2/0x110
btrfs_write_marked_extents+0x15e/0x180 [btrfs]
btrfs_sync_log+0x206/0x10a0 [btrfs]
? kmem_cache_free+0x315/0x3b0
? btrfs_log_inode+0x1e8/0xf90 [btrfs]
? __mutex_unlock_slowpath+0x45/0x2a0
? lockref_put_or_lock+0x9/0x30
? dput+0x2d/0x580
? dput+0xb5/0x580
? btrfs_sync_file+0x464/0x4d0 [btrfs]
btrfs_sync_file+0x464/0x4d0 [btrfs]
do_fsync+0x38/0x60
__x64_sys_fsync+0x10/0x20
do_syscall_64+0x5c/0x280
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7fb41953a6d0
Code: 48 3d ...
RSP: 002b:00007ffcc86bd218 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
RAX: ffffffffffffffda RBX: 000000000000000d RCX: 00007fb41953a6d0
RDX: 0000000000000009 RSI: 0000000000040000 RDI: 0000000000000003
RBP: 0000000000040000 R08: 0000000000000001 R09: 0000000000000009
R10: 0000000000000064 R11: 0000000000000246 R12: 0000556cf4b2c060
R13: 0000000000000100 R14: 0000000000000000 R15: 0000556cf322b420
irq event stamp: 0
hardirqs last enabled at (0): [<0000000000000000>] 0x0
hardirqs last disabled at (0): [<ffffffffa96bdedf>] copy_process+0x74f/0x2020
softirqs last enabled at (0): [<ffffffffa96bdedf>] copy_process+0x74f/0x2020
softirqs last disabled at (0): [<0000000000000000>] 0x0
---[ end trace d543fc76f5ad7fd8 ]---
In that trace the tree checker detected the overlapping checksum items
when we triggered writeback for the log tree while syncing the log.
Another trace that can happen is due to BUG_ON() when deleting checksum
items while logging an inode:
BTRFS critical (device dm-0): slot 81 key (18446744073709551606 128 13635584) new key (18446744073709551606 128 13635584)
BTRFS info (device dm-0): leaf 30949376 gen 7 total ptrs 98 free space 8527 owner 18446744073709551610
BTRFS info (device dm-0): refs 4 lock (w:1 r:0 bw:0 br:0 sw:1 sr:0) lock_owner 13473 current 13473
item 0 key (257 1 0) itemoff 16123 itemsize 160
inode generation 7 size 262144 mode 100600
item 1 key (257 12 256) itemoff 16103 itemsize 20
item 2 key (257 108 0) itemoff 16050 itemsize 53
extent data disk bytenr 13631488 nr 4096
extent data offset 0 nr 131072 ram 131072
(...)
------------[ cut here ]------------
kernel BUG at fs/btrfs/ctree.c:3153!
invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI
CPU: 1 PID: 13473 Comm: fsx Not tainted 5.6.0-rc7-btrfs-next-58 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
RIP: 0010:btrfs_set_item_key_safe+0x1ea/0x270 [btrfs]
Code: 0f b6 ...
RSP: 0018:ffff95e3889179d0 EFLAGS: 00010282
RAX: 0000000000000000 RBX: 0000000000000051 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffffb7763988 RDI: 0000000000000001
RBP: fffffffffffffff6 R08: 0000000000000000 R09: 0000000000000001
R10: 00000000000009ef R11: 0000000000000000 R12: ffff8912a8ba5a08
R13: ffff95e388917a06 R14: ffff89138dcf68c8 R15: ffff95e388917ace
FS: 00007fe587084e80(0000) GS:ffff8913baa00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe587091000 CR3: 0000000126dac005 CR4: 00000000003606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
btrfs_del_csums+0x2f4/0x540 [btrfs]
copy_items+0x4b5/0x560 [btrfs]
btrfs_log_inode+0x910/0xf90 [btrfs]
btrfs_log_inode_parent+0x2a0/0xe40 [btrfs]
? dget_parent+0x5/0x370
btrfs_log_dentry_safe+0x4a/0x70 [btrfs]
btrfs_sync_file+0x42b/0x4d0 [btrfs]
__x64_sys_msync+0x199/0x200
do_syscall_64+0x5c/0x280
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7fe586c65760
Code: 00 f7 ...
RSP: 002b:00007ffe250f98b8 EFLAGS: 00000246 ORIG_RAX: 000000000000001a
RAX: ffffffffffffffda RBX: 00000000000040e1 RCX: 00007fe586c65760
RDX: 0000000000000004 RSI: 0000000000006b51 RDI: 00007fe58708b000
RBP: 0000000000006a70 R08: 0000000000000003 R09: 00007fe58700cb61
R10: 0000000000000100 R11: 0000000000000246 R12: 00000000000000e1
R13: 00007fe58708b000 R14: 0000000000006b51 R15: 0000558de021a420
Modules linked in: dm_log_writes ...
---[ end trace c92a7f447a8515f5 ]---
CC: stable@vger.kernel.org # 4.4+
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-18 18:14:50 +07:00
|
|
|
extent_io_tree_init(fs_info, &root->log_csum_range,
|
|
|
|
IO_TREE_LOG_CSUM_RANGE, NULL);
|
|
|
|
}
|
2008-07-29 02:32:51 +07:00
|
|
|
|
2007-03-14 03:47:54 +07:00
|
|
|
memset(&root->root_key, 0, sizeof(root->root_key));
|
|
|
|
memset(&root->root_item, 0, sizeof(root->root_item));
|
2007-08-08 03:15:09 +07:00
|
|
|
memset(&root->defrag_progress, 0, sizeof(root->defrag_progress));
|
2007-04-21 07:23:12 +07:00
|
|
|
root->root_key.objectid = objectid;
|
2011-07-08 02:44:25 +07:00
|
|
|
root->anon_dev = 0;
|
2012-07-25 22:35:53 +07:00
|
|
|
|
2012-12-07 16:28:54 +07:00
|
|
|
spin_lock_init(&root->root_item_lock);
|
btrfs: qgroup: Introduce per-root swapped blocks infrastructure
To allow delayed subtree swap rescan, btrfs needs to record per-root
information about which tree blocks get swapped. This patch introduces
the required infrastructure.
The designed workflow will be:
1) Record the subtree root block that gets swapped.
During subtree swap:
O = Old tree blocks
N = New tree blocks
   reloc tree          subvolume tree X
       Root                  Root
      /    \                /    \
    NA      OB            OA      OB
   / |      | \          / |      | \
  NC ND    OE OF        OC OD    OE OF
In this case, NA and OA are going to be swapped, record (NA, OA) into
subvolume tree X.
2) After subtree swap.
   reloc tree          subvolume tree X
       Root                  Root
      /    \                /    \
    OA      OB            NA      OB
   / |      | \          / |      | \
  OC OD    OE OF        NC ND    OE OF
3a) COW happens for OB
If we are going to COW tree block OB, we check OB's bytenr against
tree X's swapped_blocks structure.
If it doesn't match any record, nothing will happen.
3b) COW happens for NA
Check NA's bytenr against tree X's swapped_blocks, and get a hit.
Then we do subtree scan on both subtrees OA and NA.
Resulting in 6 tree blocks to be scanned (OA, OC, OD, NA, NC, ND).
Then no matter what we do to subvolume tree X, qgroup numbers will
still be correct.
Then NA's record gets removed from X's swapped_blocks.
4) Transaction commit
Any record in X's swapped_blocks gets removed, since there is no
modification to swapped subtrees, no need to trigger heavy qgroup
subtree rescan for them.
This will introduce 128 bytes of overhead for each btrfs_root even when
qgroup is not enabled. This is to reduce memory allocations and
potential failures.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2019-01-23 14:15:16 +07:00
|
|
|
btrfs_qgroup_init_swapped_blocks(&root->swapped_blocks);
|
2020-01-24 21:33:00 +07:00
|
|
|
#ifdef CONFIG_BTRFS_DEBUG
|
|
|
|
INIT_LIST_HEAD(&root->leak_list);
|
|
|
|
spin_lock(&fs_info->fs_roots_radix_lock);
|
|
|
|
list_add_tail(&root->leak_list, &fs_info->allocated_roots);
|
|
|
|
spin_unlock(&fs_info->fs_roots_radix_lock);
|
|
|
|
#endif
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
root->locker_enabled = 0;
|
|
|
|
root->locker_mode = LM_NONE;
|
|
|
|
root->locker_default_state = LS_OPEN;
|
|
|
|
root->locker_waittime = LOCKER_DEFAULT_WAITTIME;
|
|
|
|
root->locker_duration = LOCKER_DEFAULT_DURATION;
|
|
|
|
root->locker_clock_adjustment = 0;
|
|
|
|
root->locker_update_time_floor = 0;
|
|
|
|
root->locker_state = LS_OPEN;
|
|
|
|
root->locker_period_begin = LOCKER_DEFAULT_PERIOD_BEGIN;
|
|
|
|
root->locker_period_begin_sys = LOCKER_DEFAULT_PERIOD_BEGIN;
|
|
|
|
root->locker_period_end = LOCKER_DEFAULT_PERIOD_END;
|
|
|
|
root->locker_period_end_sys = LOCKER_DEFAULT_PERIOD_END;
|
|
|
|
spin_lock_init(&root->locker_lock);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
spin_lock_init(&root->syno_usage_lock);
|
|
|
|
rwlock_init(&root->syno_usage_rwlock);
|
|
|
|
INIT_LIST_HEAD(&root->syno_usage_rescan_list);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
root->usrquota_loaded_gen = 0;
|
|
|
|
INIT_LIST_HEAD(&root->usrquota_ro_root);
|
|
|
|
init_rwsem(&root->rescan_lock);
|
|
|
|
root->rescan_inode = (u64)-1;
|
|
|
|
root->rescan_end_inode = (u64)-1;
|
|
|
|
root->invalid_quota = true;
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
root->has_usrquota_limit = false;
|
|
|
|
root->has_quota_limit = false;
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
root->inline_dedupe = false;
|
|
|
|
root->small_extent_size = BTRFS_MAX_EXTENT_SIZE;
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
INIT_LIST_HEAD(&root->syno_orphan_cleanup.root);
|
|
|
|
#endif /* MY_ABC_HERE */
|
2007-03-14 03:47:54 +07:00
|
|
|
}
|
|
|
|
|
2016-02-11 17:01:55 +07:00
|
|
|
static struct btrfs_root *btrfs_alloc_root(struct btrfs_fs_info *fs_info,
|
2020-01-24 21:32:18 +07:00
|
|
|
u64 objectid, gfp_t flags)
|
2011-11-17 12:46:16 +07:00
|
|
|
{
|
2016-02-11 17:01:55 +07:00
|
|
|
struct btrfs_root *root = kzalloc(sizeof(*root), flags);
|
2011-11-17 12:46:16 +07:00
|
|
|
if (root)
|
2020-01-24 21:32:18 +07:00
|
|
|
__setup_root(root, fs_info, objectid);
|
2011-11-17 12:46:16 +07:00
|
|
|
return root;
|
|
|
|
}
|
|
|
|
|
2013-09-20 03:07:01 +07:00
|
|
|
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
|
|
|
|
/* Should only be used by the testing infrastructure */
|
2016-06-15 20:22:56 +07:00
|
|
|
struct btrfs_root *btrfs_alloc_dummy_root(struct btrfs_fs_info *fs_info)
|
2013-09-20 03:07:01 +07:00
|
|
|
{
|
|
|
|
struct btrfs_root *root;
|
|
|
|
|
2016-06-21 01:14:09 +07:00
|
|
|
if (!fs_info)
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
|
2020-01-24 21:32:18 +07:00
|
|
|
root = btrfs_alloc_root(fs_info, BTRFS_ROOT_TREE_OBJECTID, GFP_KERNEL);
|
2013-09-20 03:07:01 +07:00
|
|
|
if (!root)
|
|
|
|
return ERR_PTR(-ENOMEM);
|
2016-06-15 20:22:56 +07:00
|
|
|
|
2016-06-01 18:18:25 +07:00
|
|
|
/* We don't use the stripesize in selftest, set it as sectorsize */
|
2014-05-08 04:06:09 +07:00
|
|
|
root->alloc_bytenr = 0;
|
2013-09-20 03:07:01 +07:00
|
|
|
|
|
|
|
return root;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2011-09-13 17:44:20 +07:00
|
|
|
struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans,
|
|
|
|
u64 objectid)
|
|
|
|
{
|
2019-03-20 19:20:49 +07:00
|
|
|
struct btrfs_fs_info *fs_info = trans->fs_info;
|
2011-09-13 17:44:20 +07:00
|
|
|
struct extent_buffer *leaf;
|
|
|
|
struct btrfs_root *tree_root = fs_info->tree_root;
|
|
|
|
struct btrfs_root *root;
|
|
|
|
struct btrfs_key key;
|
2018-12-14 04:16:45 +07:00
|
|
|
unsigned int nofs_flag;
|
2011-09-13 17:44:20 +07:00
|
|
|
int ret = 0;
|
|
|
|
|
2018-12-14 04:16:45 +07:00
|
|
|
/*
|
|
|
|
* We're holding a transaction handle, so use a NOFS memory allocation
|
|
|
|
* context to avoid deadlock if reclaim happens.
|
|
|
|
*/
|
|
|
|
nofs_flag = memalloc_nofs_save();
|
2020-01-24 21:32:18 +07:00
|
|
|
root = btrfs_alloc_root(fs_info, objectid, GFP_KERNEL);
|
2018-12-14 04:16:45 +07:00
|
|
|
memalloc_nofs_restore(nofs_flag);
|
2011-09-13 17:44:20 +07:00
|
|
|
if (!root)
|
|
|
|
return ERR_PTR(-ENOMEM);
|
|
|
|
|
|
|
|
root->root_key.objectid = objectid;
|
|
|
|
root->root_key.type = BTRFS_ROOT_ITEM_KEY;
|
|
|
|
root->root_key.offset = 0;
|
|
|
|
|
2020-08-20 22:46:03 +07:00
|
|
|
leaf = btrfs_alloc_tree_block(trans, root, 0, objectid, NULL, 0, 0, 0,
|
|
|
|
BTRFS_NESTING_NORMAL);
|
2011-09-13 17:44:20 +07:00
|
|
|
if (IS_ERR(leaf)) {
|
|
|
|
ret = PTR_ERR(leaf);
|
2013-03-21 11:32:32 +07:00
|
|
|
leaf = NULL;
|
2024-07-05 23:00:04 +07:00
|
|
|
goto fail_unlock;
|
2011-09-13 17:44:20 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
root->node = leaf;
|
|
|
|
btrfs_mark_buffer_dirty(leaf);
|
|
|
|
|
|
|
|
root->commit_root = btrfs_root_node(root);
|
2014-04-02 18:51:05 +07:00
|
|
|
set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
|
2011-09-13 17:44:20 +07:00
|
|
|
|
|
|
|
root->root_item.flags = 0;
|
|
|
|
root->root_item.byte_limit = 0;
|
|
|
|
btrfs_set_root_bytenr(&root->root_item, leaf->start);
|
|
|
|
btrfs_set_root_generation(&root->root_item, trans->transid);
|
|
|
|
btrfs_set_root_level(&root->root_item, 0);
|
|
|
|
btrfs_set_root_refs(&root->root_item, 1);
|
|
|
|
btrfs_set_root_used(&root->root_item, leaf->len);
|
|
|
|
btrfs_set_root_last_snapshot(&root->root_item, 0);
|
|
|
|
btrfs_set_root_dirid(&root->root_item, 0);
|
2017-10-31 13:08:16 +07:00
|
|
|
if (is_fstree(objectid))
|
2020-02-24 22:37:51 +07:00
|
|
|
generate_random_guid(root->root_item.uuid);
|
|
|
|
else
|
|
|
|
export_guid(root->root_item.uuid, &guid_null);
|
2011-09-13 17:44:20 +07:00
|
|
|
root->root_item.drop_level = 0;
|
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
btrfs_tree_unlock(leaf);
|
|
|
|
|
2011-09-13 17:44:20 +07:00
|
|
|
key.objectid = objectid;
|
|
|
|
key.type = BTRFS_ROOT_ITEM_KEY;
|
|
|
|
key.offset = 0;
|
|
|
|
ret = btrfs_insert_root(trans, tree_root, &key, &root->root_item);
|
|
|
|
if (ret)
|
|
|
|
goto fail;
|
|
|
|
|
2013-03-21 11:32:32 +07:00
|
|
|
return root;
|
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
fail_unlock:
|
2020-02-15 04:11:42 +07:00
|
|
|
if (leaf)
|
2013-03-21 11:32:32 +07:00
|
|
|
btrfs_tree_unlock(leaf);
|
2024-07-05 23:00:04 +07:00
|
|
|
fail:
|
2020-01-24 21:33:01 +07:00
|
|
|
btrfs_put_root(root);
|
2011-09-13 17:44:20 +07:00
|
|
|
|
2013-03-21 11:32:32 +07:00
|
|
|
return ERR_PTR(ret);
|
2011-09-13 17:44:20 +07:00
|
|
|
}
|
|
|
|
|
2009-01-22 00:54:03 +07:00
|
|
|
static struct btrfs_root *alloc_log_tree(struct btrfs_trans_handle *trans,
|
|
|
|
struct btrfs_fs_info *fs_info)
|
2007-04-09 21:42:37 +07:00
|
|
|
{
|
|
|
|
struct btrfs_root *root;
|
2009-01-22 00:54:03 +07:00
|
|
|
struct extent_buffer *leaf;
|
2008-09-06 03:13:11 +07:00
|
|
|
|
2020-01-24 21:32:18 +07:00
|
|
|
root = btrfs_alloc_root(fs_info, BTRFS_TREE_LOG_OBJECTID, GFP_NOFS);
|
2008-09-06 03:13:11 +07:00
|
|
|
if (!root)
|
2009-01-22 00:54:03 +07:00
|
|
|
return ERR_PTR(-ENOMEM);
|
2008-09-06 03:13:11 +07:00
|
|
|
|
|
|
|
root->root_key.objectid = BTRFS_TREE_LOG_OBJECTID;
|
|
|
|
root->root_key.type = BTRFS_ROOT_ITEM_KEY;
|
|
|
|
root->root_key.offset = BTRFS_TREE_LOG_OBJECTID;
|
2014-04-02 18:51:05 +07:00
|
|
|
|
2009-01-22 00:54:03 +07:00
|
|
|
/*
|
2020-05-15 13:01:40 +07:00
|
|
|
* DON'T set SHAREABLE bit for log trees.
|
2014-04-02 18:51:05 +07:00
|
|
|
*
|
2020-05-15 13:01:40 +07:00
|
|
|
* Log trees are not exposed to user space thus can't be snapshotted,
|
|
|
|
* and they go away before a real commit is actually done.
|
|
|
|
*
|
|
|
|
* They do store pointers to file data extents, and those reference
|
|
|
|
* counts still get updated (along with back refs to the log tree).
|
2009-01-22 00:54:03 +07:00
|
|
|
*/
|
2008-09-06 03:13:11 +07:00
|
|
|
|
2014-06-15 06:54:12 +07:00
|
|
|
leaf = btrfs_alloc_tree_block(trans, root, 0, BTRFS_TREE_LOG_OBJECTID,
|
2020-08-20 22:46:03 +07:00
|
|
|
NULL, 0, 0, 0, BTRFS_NESTING_NORMAL);
|
2009-01-22 00:54:03 +07:00
|
|
|
if (IS_ERR(leaf)) {
|
2020-01-24 21:33:01 +07:00
|
|
|
btrfs_put_root(root);
|
2009-01-22 00:54:03 +07:00
|
|
|
return ERR_CAST(leaf);
|
|
|
|
}
|
2008-09-06 03:13:11 +07:00
|
|
|
|
2009-01-22 00:54:03 +07:00
|
|
|
root->node = leaf;
|
2008-09-06 03:13:11 +07:00
|
|
|
|
|
|
|
btrfs_mark_buffer_dirty(root->node);
|
|
|
|
btrfs_tree_unlock(root->node);
|
2009-01-22 00:54:03 +07:00
|
|
|
return root;
|
|
|
|
}
|
|
|
|
|
|
|
|
int btrfs_init_log_root_tree(struct btrfs_trans_handle *trans,
|
|
|
|
struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
struct btrfs_root *log_root;
|
|
|
|
|
|
|
|
log_root = alloc_log_tree(trans, fs_info);
|
|
|
|
if (IS_ERR(log_root))
|
|
|
|
return PTR_ERR(log_root);
|
|
|
|
WARN_ON(fs_info->log_root_tree);
|
|
|
|
fs_info->log_root_tree = log_root;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
int btrfs_add_log_tree(struct btrfs_trans_handle *trans,
|
|
|
|
struct btrfs_root *root)
|
|
|
|
{
|
2016-06-23 05:54:23 +07:00
|
|
|
struct btrfs_fs_info *fs_info = root->fs_info;
|
2009-01-22 00:54:03 +07:00
|
|
|
struct btrfs_root *log_root;
|
|
|
|
struct btrfs_inode_item *inode_item;
|
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
log_root = alloc_log_tree(trans, fs_info);
|
2009-01-22 00:54:03 +07:00
|
|
|
if (IS_ERR(log_root))
|
|
|
|
return PTR_ERR(log_root);
|
|
|
|
|
|
|
|
log_root->last_trans = trans->transid;
|
|
|
|
log_root->root_key.offset = root->root_key.objectid;
|
|
|
|
|
|
|
|
inode_item = &log_root->root_item.inode;
|
2013-07-16 10:19:18 +07:00
|
|
|
btrfs_set_stack_inode_generation(inode_item, 1);
|
|
|
|
btrfs_set_stack_inode_size(inode_item, 3);
|
|
|
|
btrfs_set_stack_inode_nlink(inode_item, 1);
|
2016-06-15 20:22:56 +07:00
|
|
|
btrfs_set_stack_inode_nbytes(inode_item,
|
2016-06-23 05:54:23 +07:00
|
|
|
fs_info->nodesize);
|
2013-07-16 10:19:18 +07:00
|
|
|
btrfs_set_stack_inode_mode(inode_item, S_IFDIR | 0755);
|
2009-01-22 00:54:03 +07:00
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in a subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allows us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds a per-subvolume red-black tree to keep track of cached
inodes. The red-black tree helps the balancing code find cached inodes
whose inode numbers fall within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduces the overhead of checking back refs.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
btrfs_set_root_node(&log_root->root_item, log_root->node);
|
2009-01-22 00:54:03 +07:00
|
|
|
|
|
|
|
WARN_ON(root->log_root);
|
|
|
|
root->log_root = log_root;
|
|
|
|
	root->log_transid = 0;
	root->log_transid_committed = -1;
	root->last_log_commit = 0;
	return 0;
}

static struct btrfs_root *read_tree_root_path(struct btrfs_root *tree_root,
					      struct btrfs_path *path,
					      struct btrfs_key *key)
{
	struct btrfs_root *root;
	struct btrfs_fs_info *fs_info = tree_root->fs_info;
	u64 generation;
	int ret;
	int level;

	root = btrfs_alloc_root(fs_info, key->objectid, GFP_NOFS);
	if (!root)
		return ERR_PTR(-ENOMEM);

	ret = btrfs_find_root(tree_root, key, path,
			      &root->root_item, &root->root_key);
	if (ret) {
		if (ret > 0)
			ret = -ENOENT;
		goto fail;
	}

	generation = btrfs_root_generation(&root->root_item);
	level = btrfs_root_level(&root->root_item);
	root->node = read_tree_block(fs_info,
				     btrfs_root_bytenr(&root->root_item),
				     generation, level, NULL);
	if (IS_ERR(root->node)) {
		ret = PTR_ERR(root->node);
		root->node = NULL;
		goto fail;
	} else if (!btrfs_buffer_uptodate(root->node, generation, 0)) {
		ret = -EIO;
		goto fail;
	}
	root->commit_root = btrfs_root_node(root);
	return root;
fail:
	btrfs_put_root(root);
	return ERR_PTR(ret);
}

struct btrfs_root *btrfs_read_tree_root(struct btrfs_root *tree_root,
					struct btrfs_key *key)
{
	struct btrfs_root *root;
	struct btrfs_path *path;

	path = btrfs_alloc_path();
	if (!path)
		return ERR_PTR(-ENOMEM);
	root = read_tree_root_path(tree_root, path, key);
	btrfs_free_path(path);

	return root;
}

#if defined(MY_ABC_HERE)
void btrfs_free_new_fs_root_args(struct btrfs_new_fs_root_args *args)
{
	if (!args)
		return;
#ifdef MY_ABC_HERE
	if (args->syno_delalloc_bytes) {
		percpu_counter_destroy(args->syno_delalloc_bytes);
		kfree(args->syno_delalloc_bytes);
	}
#endif /* MY_ABC_HERE */
	kfree(args);
}

struct btrfs_new_fs_root_args *btrfs_alloc_new_fs_root_args(void)
{
	int err;
	struct btrfs_new_fs_root_args *args;

	args = kzalloc(sizeof(*args), GFP_KERNEL);
	if (!args) {
		err = -ENOMEM;
		goto out;
	}

#ifdef MY_ABC_HERE
	args->syno_delalloc_bytes = kzalloc(sizeof(*args->syno_delalloc_bytes), GFP_KERNEL);
	if (!args->syno_delalloc_bytes) {
		err = -ENOMEM;
		goto out;
	}
	err = percpu_counter_init(args->syno_delalloc_bytes, 0, GFP_KERNEL);
	if (err)
		goto out;
#endif /* MY_ABC_HERE */

	return args;

out:
	btrfs_free_new_fs_root_args(args);
	return ERR_PTR(err);
}
#endif /* MY_ABC_HERE */

/*
 * Initialize subvolume root in-memory structure
 *
 * @anon_dev:	anonymous device to attach to the root, if zero, allocate new
 */
static int btrfs_init_fs_root(struct btrfs_root *root, dev_t anon_dev
#if defined(MY_ABC_HERE)
			      , struct btrfs_new_fs_root_args *new_fs_root_args
#endif /* MY_ABC_HERE */
			      )
{
	int ret;
#ifdef MY_ABC_HERE
	struct percpu_counter *delalloc_bytes = NULL;
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (new_fs_root_args && new_fs_root_args->syno_delalloc_bytes) {
		root->syno_delalloc_bytes = new_fs_root_args->syno_delalloc_bytes;
		new_fs_root_args->syno_delalloc_bytes = NULL;
	} else {
		delalloc_bytes = kzalloc(sizeof(*delalloc_bytes), GFP_NOFS);
		if (!delalloc_bytes) {
			ret = -ENOMEM;
			goto fail;
		}
		ret = percpu_counter_init(delalloc_bytes, 0, GFP_NOFS);
		if (ret < 0)
			goto fail;
		root->syno_delalloc_bytes = delalloc_bytes;
		delalloc_bytes = NULL;
	}
#endif /* MY_ABC_HERE */

	root->free_ino_ctl = kzalloc(sizeof(*root->free_ino_ctl), GFP_NOFS);
	root->free_ino_pinned = kzalloc(sizeof(*root->free_ino_pinned),
					GFP_NOFS);
	if (!root->free_ino_pinned || !root->free_ino_ctl) {
		ret = -ENOMEM;
		goto fail;
	}

	btrfs_drew_lock_init(&root->snapshot_lock);

	if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID &&
	    root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID) {
		set_bit(BTRFS_ROOT_SHAREABLE, &root->state);
		btrfs_check_and_init_root_item(&root->root_item);
	}

	btrfs_init_free_ino_ctl(root);
	spin_lock_init(&root->ino_cache_lock);
	init_waitqueue_head(&root->ino_cache_wait);

	/*
	 * Don't assign anonymous block device to roots that are not exposed to
	 * userspace, the id pool is limited to 1M
	 */
	if (is_fstree(root->root_key.objectid) &&
	    btrfs_root_refs(&root->root_item) > 0) {
		if (!anon_dev) {
			ret = get_anon_bdev(&root->anon_dev);
			if (ret)
				goto fail;
		} else {
			root->anon_dev = anon_dev;
		}
	}

	mutex_lock(&root->objectid_mutex);
#ifdef MY_ABC_HERE
	if (btrfs_root_dead(root)) {
		root->highest_objectid = BTRFS_LAST_FREE_OBJECTID;
	} else {
		ret = btrfs_find_highest_objectid(root,
						  &root->highest_objectid);
		if (ret) {
			mutex_unlock(&root->objectid_mutex);
			goto fail;
		}
	}
#else
	ret = btrfs_find_highest_objectid(root,
					  &root->highest_objectid);
	if (ret) {
		mutex_unlock(&root->objectid_mutex);
		goto fail;
	}
#endif /* MY_ABC_HERE */

	ASSERT(root->highest_objectid <= BTRFS_LAST_FREE_OBJECTID);

	mutex_unlock(&root->objectid_mutex);

#ifdef MY_ABC_HERE
	if ((test_bit(BTRFS_FS_SYNO_SPACE_USAGE_ENABLED, &root->fs_info->flags) ||
	     (root->fs_info->syno_usage_status.state == SYNO_USAGE_STATE_DISABLE &&
	      root->fs_info->syno_usage_root)) &&
	    is_fstree(root->root_key.objectid)) {
		ret = btrfs_syno_usage_root_status_lookup(root->fs_info, root->root_key.objectid, &root->syno_usage_root_status);
		if (ret < 0)
			goto fail;
		else if (ret == 0)
			set_bit(BTRFS_ROOT_SYNO_SPACE_USAGE_ENABLED, &root->state);
		else /* not initialized */
			btrfs_syno_usage_root_initialize(root);
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (is_fstree(root->root_key.objectid) && !btrfs_root_dead(root))
		btrfs_read_syno_quota_for_root(root);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (is_fstree(root->root_key.objectid) && !btrfs_root_dead(root))
		btrfs_syno_locker_disk_root_read(root);
#endif /* MY_ABC_HERE */

	return 0;
fail:
	/* The caller is responsible to call btrfs_free_fs_root */
#ifdef MY_ABC_HERE
	kfree(delalloc_bytes);
#endif /* MY_ABC_HERE */
	return ret;
}

#ifdef MY_ABC_HERE
struct btrfs_root *btrfs_lookup_fs_root(struct btrfs_fs_info *fs_info,
					u64 root_id)
#else
static struct btrfs_root *btrfs_lookup_fs_root(struct btrfs_fs_info *fs_info,
					       u64 root_id)
#endif /* MY_ABC_HERE */
{
	struct btrfs_root *root;

	spin_lock(&fs_info->fs_roots_radix_lock);
	root = radix_tree_lookup(&fs_info->fs_roots_radix,
				 (unsigned long)root_id);
	if (root)
		root = btrfs_grab_root(root);
	spin_unlock(&fs_info->fs_roots_radix_lock);
	return root;
}

static struct btrfs_root *btrfs_get_global_root(struct btrfs_fs_info *fs_info,
						u64 objectid)
{
	if (objectid == BTRFS_ROOT_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->tree_root);
	if (objectid == BTRFS_EXTENT_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->extent_root);
	if (objectid == BTRFS_CHUNK_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->chunk_root);
	if (objectid == BTRFS_DEV_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->dev_root);
	if (objectid == BTRFS_CSUM_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->csum_root);
#ifdef MY_ABC_HERE
	if (objectid == BTRFS_SYNO_QUOTA_V2_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->quota_root) ?
			fs_info->quota_root : ERR_PTR(-ENOENT);
#endif /* MY_ABC_HERE */
	if (objectid == BTRFS_QUOTA_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->quota_root) ?
			fs_info->quota_root : ERR_PTR(-ENOENT);
	if (objectid == BTRFS_UUID_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->uuid_root) ?
			fs_info->uuid_root : ERR_PTR(-ENOENT);
	if (objectid == BTRFS_FREE_SPACE_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->free_space_root) ?
			fs_info->free_space_root : ERR_PTR(-ENOENT);
#ifdef MY_ABC_HERE
	if (objectid == BTRFS_SYNO_USRQUOTA_V2_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->usrquota_root) ?
			fs_info->usrquota_root : ERR_PTR(-ENOENT);
	if (objectid == BTRFS_USRQUOTA_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->usrquota_root) ?
			fs_info->usrquota_root : ERR_PTR(-ENOENT);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	if (objectid == BTRFS_BLOCK_GROUP_HINT_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->block_group_hint_root) ?
			fs_info->block_group_hint_root : ERR_PTR(-ENOENT);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	if (objectid == BTRFS_BLOCK_GROUP_CACHE_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->block_group_cache_root) ?
			fs_info->block_group_cache_root : ERR_PTR(-ENOENT);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	if (objectid == BTRFS_SYNO_USAGE_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->syno_usage_root) ?
			fs_info->syno_usage_root : ERR_PTR(-ENOENT);
	if (objectid == BTRFS_SYNO_EXTENT_USAGE_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->syno_extent_usage_root) ?
			fs_info->syno_extent_usage_root : ERR_PTR(-ENOENT);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	if (objectid == BTRFS_SYNO_FEATURE_TREE_OBJECTID)
		return btrfs_grab_root(fs_info->syno_feat_root) ?
			fs_info->syno_feat_root : ERR_PTR(-ENOENT);
#endif /* MY_ABC_HERE */

	return NULL;
}

int btrfs_insert_fs_root(struct btrfs_fs_info *fs_info,
			 struct btrfs_root *root)
{
	int ret;

	ret = radix_tree_preload(GFP_NOFS);
	if (ret)
		return ret;

	spin_lock(&fs_info->fs_roots_radix_lock);
	ret = radix_tree_insert(&fs_info->fs_roots_radix,
				(unsigned long)root->root_key.objectid,
				root);
	if (ret == 0) {
		btrfs_grab_root(root);
		set_bit(BTRFS_ROOT_IN_RADIX, &root->state);
	}
	spin_unlock(&fs_info->fs_roots_radix_lock);
	radix_tree_preload_end();

	return ret;
}

void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
{
#ifdef CONFIG_BTRFS_DEBUG
	struct btrfs_root *root;

	while (!list_empty(&fs_info->allocated_roots)) {
		char buf[BTRFS_ROOT_NAME_BUF_LEN];

		root = list_first_entry(&fs_info->allocated_roots,
					struct btrfs_root, leak_list);
		btrfs_err(fs_info, "leaked root %s refcount %d",
			  btrfs_root_name(&root->root_key, buf),
			  refcount_read(&root->refs));
		while (refcount_read(&root->refs) > 1)
			btrfs_put_root(root);
		btrfs_put_root(root);
	}
#endif
}

void btrfs_free_fs_info(struct btrfs_fs_info *fs_info)
{
	percpu_counter_destroy(&fs_info->dirty_metadata_bytes);
	percpu_counter_destroy(&fs_info->delalloc_bytes);
	percpu_counter_destroy(&fs_info->dio_bytes);
	percpu_counter_destroy(&fs_info->dev_replace.bio_counter);
#ifdef MY_ABC_HERE
	percpu_counter_destroy(&fs_info->eb_hit);
	percpu_counter_destroy(&fs_info->eb_miss);
	percpu_counter_destroy(&fs_info->meta_write_pages);
	percpu_counter_destroy(&fs_info->data_write_pages);
	percpu_counter_destroy(&fs_info->delayed_meta_ref);
	percpu_counter_destroy(&fs_info->delayed_data_ref);
	percpu_counter_destroy(&fs_info->write_flush);
	percpu_counter_destroy(&fs_info->write_fua);
#endif /* MY_ABC_HERE */
	btrfs_free_csum_hash(fs_info);
	btrfs_free_stripe_hash_table(fs_info);
	btrfs_free_ref_cache(fs_info);
	kfree(fs_info->balance_ctl);
	kfree(fs_info->delayed_root);
#if defined(MY_ABC_HERE) || defined(MY_ABC_HERE)
	kfree(fs_info->mount_path);
#endif /* MY_ABC_HERE || MY_ABC_HERE */
	btrfs_put_root(fs_info->extent_root);
	btrfs_put_root(fs_info->tree_root);
	btrfs_put_root(fs_info->chunk_root);
	btrfs_put_root(fs_info->dev_root);
	btrfs_put_root(fs_info->csum_root);
	btrfs_put_root(fs_info->quota_root);
#ifdef MY_ABC_HERE
	btrfs_put_root(fs_info->usrquota_root);
#endif /* MY_ABC_HERE */
	btrfs_put_root(fs_info->uuid_root);
	btrfs_put_root(fs_info->free_space_root);
	btrfs_put_root(fs_info->fs_root);
	btrfs_put_root(fs_info->data_reloc_root);
#ifdef MY_ABC_HERE
	btrfs_put_root(fs_info->block_group_hint_root);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	btrfs_put_root(fs_info->block_group_cache_root);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	btrfs_put_root(fs_info->syno_usage_root);
	btrfs_put_root(fs_info->syno_extent_usage_root);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	btrfs_put_root(fs_info->syno_feat_root);
#endif /* MY_ABC_HERE */
	btrfs_check_leaked_roots(fs_info);
	btrfs_extent_buffer_leak_debug_check(fs_info);
	kfree(fs_info->super_copy);
	kfree(fs_info->super_for_commit);
	kvfree(fs_info);
}

/*
 * Get an in-memory reference of a root structure.
 *
 * For essential trees like root/extent tree, we grab it from fs_info directly.
 * For subvolume trees, we check the cached filesystem roots first. If not
 * found, then read it from disk and add it to cached fs roots.
 *
 * Caller should release the root by calling btrfs_put_root() after the usage.
 *
 * NOTE: Reloc and log trees can't be read by this function as they share the
 *       same root objectid.
 *
 * @objectid:	root id
 * @anon_dev:	preallocated anonymous block device number for new roots,
 *		pass 0 for new allocation.
 * @check_ref:	whether to check root item references, if true, return -ENOENT
 *		for orphan roots
 */
static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
					     u64 objectid, dev_t anon_dev,
					     bool check_ref
#if defined(MY_ABC_HERE)
					     , struct btrfs_new_fs_root_args *new_fs_root_args
#endif /* MY_ABC_HERE */
					     )
{
	struct btrfs_root *root;
	struct btrfs_path *path;
	struct btrfs_key key;
	int ret;

	root = btrfs_get_global_root(fs_info, objectid);
	if (root)
		return root;
again:
	root = btrfs_lookup_fs_root(fs_info, objectid);
	if (root) {
		/* Shouldn't get preallocated anon_dev for cached roots */
		ASSERT(!anon_dev);
#if defined(MY_ABC_HERE)
		/* Shouldn't get new_fs_root_args for cached roots */
		ASSERT(!new_fs_root_args);
#endif /* MY_ABC_HERE */
		if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
			btrfs_put_root(root);
			return ERR_PTR(-ENOENT);
		}
		return root;
	}

	key.objectid = objectid;
	key.type = BTRFS_ROOT_ITEM_KEY;
	key.offset = (u64)-1;
	root = btrfs_read_tree_root(fs_info->tree_root, &key);
	if (IS_ERR(root))
		return root;

	if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
		ret = -ENOENT;
		goto fail;
	}

	ret = btrfs_init_fs_root(root, anon_dev
#if defined(MY_ABC_HERE)
				 , new_fs_root_args
#endif /* MY_ABC_HERE */
				 );
	if (ret)
		goto fail;

	path = btrfs_alloc_path();
	if (!path) {
		ret = -ENOMEM;
		goto fail;
	}
	key.objectid = BTRFS_ORPHAN_OBJECTID;
	key.type = BTRFS_ORPHAN_ITEM_KEY;
	key.offset = objectid;

	ret = btrfs_search_slot(NULL, fs_info->tree_root, &key, path, 0, 0);
	btrfs_free_path(path);
	if (ret < 0)
		goto fail;
	if (ret == 0)
		set_bit(BTRFS_ROOT_ORPHAN_ITEM_INSERTED, &root->state);

	ret = btrfs_insert_fs_root(fs_info, root);
	if (ret) {
		btrfs_put_root(root);
		if (ret == -EEXIST)
			goto again;
		goto fail;
	}
	return root;
fail:
	/*
	 * If our caller provided us an anonymous device, then it's the caller's
	 * responsibility to free it in case we fail. So we have to set our
	 * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root()
	 * and once again by our caller.
	 */
	if (anon_dev)
		root->anon_dev = 0;
	btrfs_put_root(root);
	return ERR_PTR(ret);
}

/*
 * Get in-memory reference of a root structure
 *
 * @objectid:	tree objectid
 * @check_ref:	if set, verify that the tree exists and the item has at least
 *		one reference
 */
struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
				     u64 objectid, bool check_ref)
{
	return btrfs_get_root_ref(fs_info, objectid, 0, check_ref
#if defined(MY_ABC_HERE)
				  , NULL
#endif /* MY_ABC_HERE */
				  );
}

/*
 * Get in-memory reference of a root structure, created as new, optionally pass
 * the anonymous block device id
 *
 * @objectid:	tree objectid
 * @anon_dev:	if zero, allocate a new anonymous block device or use the
 *		parameter value
 */
struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info,
					 u64 objectid, dev_t anon_dev
#if defined(MY_ABC_HERE)
					 , struct btrfs_new_fs_root_args *new_fs_root_args
#endif /* MY_ABC_HERE */
					 )
{
	return btrfs_get_root_ref(fs_info, objectid, anon_dev, true
#if defined(MY_ABC_HERE)
				  , new_fs_root_args
#endif /* MY_ABC_HERE */
				  );
}

/*
 * btrfs_get_fs_root_commit_root - return a root for the given objectid
 * @fs_info:	the fs_info
 * @objectid:	the objectid we need to lookup
 *
 * This is exclusively used for backref walking, and exists specifically because
 * of how qgroups does lookups.  Qgroups will do a backref lookup at delayed ref
 * creation time, which means we may have to read the tree_root in order to look
 * up a fs root that is not in memory.  If the root is not in memory we will
 * read the tree root commit root and look up the fs root from there.  This is a
 * temporary root, it will not be inserted into the radix tree as it doesn't
 * have the most uptodate information, it'll simply be discarded once the
 * backref code is finished using the root.
 */
struct btrfs_root *btrfs_get_fs_root_commit_root(struct btrfs_fs_info *fs_info,
						 struct btrfs_path *path,
						 u64 objectid)
{
	struct btrfs_root *root;
	struct btrfs_key key;

	ASSERT(path->search_commit_root && path->skip_locking);

	/*
	 * This can return -ENOENT if we ask for a root that doesn't exist, but
	 * since this is called via the backref walking code we won't be looking
	 * up a root that doesn't exist, unless there's corruption.  So if root
	 * != NULL just return it.
	 */
	root = btrfs_get_global_root(fs_info, objectid);
	if (root)
		return root;

	root = btrfs_lookup_fs_root(fs_info, objectid);
	if (root)
		return root;

	key.objectid = objectid;
	key.type = BTRFS_ROOT_ITEM_KEY;
	key.offset = (u64)-1;
	root = read_tree_root_path(fs_info->tree_root, path, &key);
	btrfs_release_path(path);

	return root;
}

/*
 * called by the kthread helper functions to finally call the bio end_io
 * functions.  This is where read checksum verification actually happens
 */
static void end_workqueue_fn(struct btrfs_work *work)
{
	struct bio *bio;
	struct btrfs_end_io_wq *end_io_wq;

	end_io_wq = container_of(work, struct btrfs_end_io_wq, work);
	bio = end_io_wq->bio;

	bio->bi_status = end_io_wq->status;
	bio->bi_private = end_io_wq->private;
	bio->bi_end_io = end_io_wq->end_io;
	bio_endio(bio);
	kmem_cache_free(btrfs_end_io_wq_cache, end_io_wq);
}
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
static void btrfs_syno_orphan_cleanup(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
int err;
|
|
|
|
struct btrfs_root *root;
|
|
|
|
|
|
|
|
/* we need to run find orphan roots before snapshot cleanup */
|
|
|
|
	if (!fs_info->syno_orphan_cleanup.root_tree_cleanup) {
		fs_info->syno_orphan_cleanup.root_tree_cleanup = true;
		err = btrfs_find_orphan_roots(fs_info);
		if (err) {
			btrfs_err(fs_info, "Failed to btrfs find orphan roots, err:%d", err);
			goto out;
		}

		down_read(&fs_info->cleanup_work_sem);
		err = btrfs_orphan_cleanup(fs_info->tree_root);
		up_read(&fs_info->cleanup_work_sem);
		if (err) {
			btrfs_err(fs_info, "Failed to btrfs orphan cleanup with tree_root, err:%d", err);
			goto out;
		}
	}

	if (!fs_info->syno_orphan_cleanup.enable ||
	    fs_info->syno_orphan_cleanup.orphan_inode_delayed)
		goto out;

	if (!fs_info->syno_orphan_cleanup.fs_tree_cleanup) {
		fs_info->syno_orphan_cleanup.fs_tree_cleanup = true;
		err = btrfs_cleanup_fs_roots(fs_info);
		if (err) {
			btrfs_err(fs_info, "Failed to orphan cleanup all fs roots, err:%d", err);
			goto out;
		}
	}

	spin_lock(&fs_info->syno_orphan_cleanup.lock);
	while (!list_empty(&fs_info->syno_orphan_cleanup.roots)) {
		root = list_first_entry(&fs_info->syno_orphan_cleanup.roots,
					struct btrfs_root, syno_orphan_cleanup.root);
		list_del_init(&root->syno_orphan_cleanup.root);
		if (btrfs_root_dead(root))
			continue;
		root = btrfs_grab_root(root);
		if (!root)
			continue;
		spin_unlock(&fs_info->syno_orphan_cleanup.lock);

		down_read(&fs_info->cleanup_work_sem);
		err = btrfs_orphan_cleanup(root);
		up_read(&fs_info->cleanup_work_sem);
		if (err)
			btrfs_err(fs_info, "Failed to btrfs orphan cleanup with root:%llu, err:%d",
				  root->root_key.objectid, err);
		btrfs_put_root(root);

		if (!fs_info->syno_orphan_cleanup.enable ||
		    fs_info->syno_orphan_cleanup.orphan_inode_delayed ||
		    btrfs_need_cleaner_sleep(fs_info))
			goto out;
		cond_resched();
		spin_lock(&fs_info->syno_orphan_cleanup.lock);
	}
	spin_unlock(&fs_info->syno_orphan_cleanup.lock);

out:
	return;
}
#endif /* MY_ABC_HERE */

static int cleaner_kthread(void *arg)
{
	struct btrfs_root *root = arg;
	struct btrfs_fs_info *fs_info = root->fs_info;
	int again;

	while (1) {
		again = 0;

		set_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags);

		/* Make the cleaner go to sleep early. */
		if (btrfs_need_cleaner_sleep(fs_info))
			goto sleep;

		/*
		 * Do not do anything if we might cause open_ctree() to block
		 * before we have finished mounting the filesystem.
		 */
		if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
			goto sleep;

#ifdef MY_ABC_HERE
		btrfs_syno_orphan_cleanup(fs_info);
#endif /* MY_ABC_HERE */

		if (!mutex_trylock(&fs_info->cleaner_mutex))
			goto sleep;

		/*
		 * Avoid the problem that we change the status of the fs
		 * during the above check and trylock.
		 */
		if (btrfs_need_cleaner_sleep(fs_info)) {
			mutex_unlock(&fs_info->cleaner_mutex);
			goto sleep;
		}

		btrfs_run_delayed_iputs(fs_info);

#ifdef MY_ABC_HERE
		if (root->fs_info->snapshot_cleaner && !btrfs_test_opt(root->fs_info, SKIP_CLEANER))
			again = btrfs_clean_one_deleted_snapshot(root);
#else /* MY_ABC_HERE */
		again = btrfs_clean_one_deleted_snapshot(root);
#endif /* MY_ABC_HERE */
		mutex_unlock(&fs_info->cleaner_mutex);
2013-05-14 17:20:40 +07:00
|
|
|
|
|
|
|
/*
|
2013-05-14 17:20:41 +07:00
|
|
|
* The defragger has dealt with the R/O remount and umount,
|
|
|
|
* needn't do anything special here.
|
2013-05-14 17:20:40 +07:00
|
|
|
*/
|
2016-06-23 05:54:23 +07:00
|
|
|
btrfs_run_defrag_inodes(fs_info);
|
Btrfs: fix race between balance and unused block group deletion
We have a race between deleting an unused block group and balancing the
same block group that leads to an assertion failure/BUG(), producing the
following trace:
[181631.208236] BTRFS: assertion failed: 0, file: fs/btrfs/volumes.c, line: 2622
[181631.220591] ------------[ cut here ]------------
[181631.222959] kernel BUG at fs/btrfs/ctree.h:4062!
[181631.223932] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[181631.224566] Modules linked in: btrfs dm_flakey dm_mod crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop fuse acpi_cpufreq parpor$
[181631.224566] CPU: 8 PID: 17451 Comm: btrfs Tainted: G W 4.1.0-rc5-btrfs-next-10+ #1
[181631.224566] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20150316_085822-nilsson.home.kraxel.org 04/01/2014
[181631.224566] task: ffff880127e09590 ti: ffff8800b5824000 task.ti: ffff8800b5824000
[181631.224566] RIP: 0010:[<ffffffffa03f19f6>] [<ffffffffa03f19f6>] assfail.constprop.50+0x1e/0x20 [btrfs]
[181631.224566] RSP: 0018:ffff8800b5827ae8 EFLAGS: 00010246
[181631.224566] RAX: 0000000000000040 RBX: ffff8800109fc218 RCX: ffffffff81095dce
[181631.224566] RDX: 0000000000005124 RSI: ffffffff81464819 RDI: 00000000ffffffff
[181631.224566] RBP: ffff8800b5827ae8 R08: 0000000000000001 R09: 0000000000000000
[181631.224566] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8800109fc200
[181631.224566] R13: ffff880020095000 R14: ffff8800b1a13f38 R15: ffff880020095000
[181631.224566] FS: 00007f70ca0b0c80(0000) GS:ffff88013ec00000(0000) knlGS:0000000000000000
[181631.224566] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[181631.224566] CR2: 00007f2872ab6e68 CR3: 00000000a717c000 CR4: 00000000000006e0
[181631.224566] Stack:
[181631.224566] ffff8800b5827ba8 ffffffffa03f3916 ffff8800b5827b38 ffffffffa03d080e
[181631.224566] ffffffffa03d1423 ffff880020095000 ffff88001233c000 0000000000000001
[181631.224566] ffff880020095000 ffff8800b1a13f38 0000000a69c00000 0000000000000000
[181631.224566] Call Trace:
[181631.224566] [<ffffffffa03f3916>] btrfs_remove_chunk+0xa4/0x6bb [btrfs]
[181631.224566] [<ffffffffa03d080e>] ? join_transaction.isra.8+0xb9/0x3ba [btrfs]
[181631.224566] [<ffffffffa03d1423>] ? wait_current_trans.isra.13+0x22/0xfc [btrfs]
[181631.224566] [<ffffffffa03f3fbc>] btrfs_relocate_chunk.isra.29+0x8f/0xa7 [btrfs]
[181631.224566] [<ffffffffa03f54df>] btrfs_balance+0xaa4/0xc52 [btrfs]
[181631.224566] [<ffffffffa03fd388>] btrfs_ioctl_balance+0x23f/0x2b0 [btrfs]
[181631.224566] [<ffffffff810872f9>] ? trace_hardirqs_on+0xd/0xf
[181631.224566] [<ffffffffa04019a3>] btrfs_ioctl+0xfe2/0x2220 [btrfs]
[181631.224566] [<ffffffff812603ed>] ? __this_cpu_preempt_check+0x13/0x15
[181631.224566] [<ffffffff81084669>] ? arch_local_irq_save+0x9/0xc
[181631.224566] [<ffffffff81138def>] ? handle_mm_fault+0x834/0xcd2
[181631.224566] [<ffffffff81138def>] ? handle_mm_fault+0x834/0xcd2
[181631.224566] [<ffffffff8103e48c>] ? __do_page_fault+0x211/0x424
[181631.224566] [<ffffffff811755e6>] do_vfs_ioctl+0x3c6/0x479
(...)
The sequence of steps leading to this are:
CPU 0 CPU 1
btrfs_balance()
btrfs_relocate_chunk()
btrfs_relocate_block_group(bg X)
btrfs_lookup_block_group(bg X)
cleaner_kthread
locks fs_info->cleaner_mutex
btrfs_delete_unused_bgs()
finds bg X, which became
unused in the previous
transaction
checks bg X ->ro == 0,
so it proceeds
sets bg X ->ro to 1
(btrfs_set_block_group_ro(bg X))
blocks on fs_info->cleaner_mutex
btrfs_remove_chunk(bg X)
unlocks fs_info->cleaner_mutex
acquires fs_info->cleaner_mutex
relocate_block_group()
--> does nothing, no extents found in
the extent tree from bg X
unlocks fs_info->cleaner_mutex
btrfs_relocate_block_group(bg X) returns
btrfs_remove_chunk(bg X)
extent map not found
--> ASSERT(0)
Fix this by using a new mutex to make sure these 2 operations, block
group relocation and removal, are serialized.
This issue is reproducible by running fstests generic/038 (which stresses
chunk allocation and automatic removal of unused block groups) together
with the following balance loop:
while true; do btrfs balance start -dusage=0 <mountpoint> ; done
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-06-11 06:58:53 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Acquires fs_info->delete_unused_bgs_mutex to avoid racing
|
|
|
|
* with relocation (btrfs_relocate_chunk) and relocation
|
|
|
|
* acquires fs_info->cleaner_mutex (btrfs_relocate_block_group)
|
|
|
|
* after acquiring fs_info->delete_unused_bgs_mutex. So we
|
|
|
|
* can't hold, nor need to, fs_info->cleaner_mutex when deleting
|
|
|
|
* unused block groups.
|
|
|
|
*/
|
2016-06-23 05:54:23 +07:00
|
|
|
btrfs_delete_unused_bgs(fs_info);
|
2013-05-14 17:20:40 +07:00
|
|
|
sleep:
|
2019-01-11 22:21:02 +07:00
|
|
|
clear_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags);
		if (kthread_should_park())
			kthread_parkme();
		if (kthread_should_stop())
			return 0;
		if (!again) {
			set_current_state(TASK_INTERRUPTIBLE);
			schedule();
			__set_current_state(TASK_RUNNING);
		}
|
Btrfs: fix crash on close_ctree() if cleaner starts new transaction
Often when running fstests btrfs/079 I was running into the following
trace during umount on one of my qemu/kvm test vms:
[ 8245.682441] WARNING: CPU: 8 PID: 25064 at fs/btrfs/extent-tree.c:138 btrfs_put_block_group+0x51/0x69 [btrfs]()
[ 8245.685039] Modules linked in: btrfs dm_flakey dm_mod crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop fuse parport_pc i2c_piix4 acpi_cpufreq processor psmouse i2c_core thermal_sys parport evdev serio_raw button pcspkr microcode ext4 crc16 jbd2 mbcache sg sr_mod cdrom sd_mod ata_generic virtio_scsi ata_piix libata floppy virtio_pci virtio_ring scsi_mod virtio e1000 [last unloaded: btrfs]
[ 8245.693860] CPU: 8 PID: 25064 Comm: umount Tainted: G W 4.1.0-rc5-btrfs-next-10+ #1
[ 8245.695081] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20150316_085822-nilsson.home.kraxel.org 04/01/2014
[ 8245.697583] 0000000000000009 ffff88020d047ce8 ffffffff8145eec7 ffffffff81095dce
[ 8245.699234] 0000000000000000 ffff88020d047d28 ffffffff8104b399 0000000000000028
[ 8245.700995] ffffffffa04db07b ffff8801c6036c00 ffff8801c6036d68 ffff880202eb40b0
[ 8245.702510] Call Trace:
[ 8245.703006] [<ffffffff8145eec7>] dump_stack+0x4f/0x7b
[ 8245.705393] [<ffffffff81095dce>] ? console_unlock+0x356/0x3a2
[ 8245.706569] [<ffffffff8104b399>] warn_slowpath_common+0xa1/0xbb
[ 8245.707747] [<ffffffffa04db07b>] ? btrfs_put_block_group+0x51/0x69 [btrfs]
[ 8245.709101] [<ffffffff8104b456>] warn_slowpath_null+0x1a/0x1c
[ 8245.710274] [<ffffffffa04db07b>] btrfs_put_block_group+0x51/0x69 [btrfs]
[ 8245.711823] [<ffffffffa04e3473>] btrfs_free_block_groups+0x145/0x322 [btrfs]
[ 8245.713251] [<ffffffffa04ef31a>] close_ctree+0x1ef/0x325 [btrfs]
[ 8245.714448] [<ffffffff8117d26e>] ? evict_inodes+0xdc/0xeb
[ 8245.715539] [<ffffffffa04cb3ad>] btrfs_put_super+0x19/0x1b [btrfs]
[ 8245.716835] [<ffffffff81167607>] generic_shutdown_super+0x73/0xef
[ 8245.718015] [<ffffffff81167a3a>] kill_anon_super+0x13/0x1e
[ 8245.719101] [<ffffffffa04cb1b6>] btrfs_kill_super+0x17/0x23 [btrfs]
[ 8245.720316] [<ffffffff81167544>] deactivate_locked_super+0x3b/0x68
[ 8245.721517] [<ffffffff81167dd6>] deactivate_super+0x3f/0x43
[ 8245.722581] [<ffffffff8117fbb9>] cleanup_mnt+0x59/0x78
[ 8245.723538] [<ffffffff8117fc18>] __cleanup_mnt+0x12/0x14
[ 8245.724572] [<ffffffff81065371>] task_work_run+0x8f/0xbc
[ 8245.725598] [<ffffffff810028fb>] do_notify_resume+0x45/0x53
[ 8245.726892] [<ffffffff814651ac>] int_signal+0x12/0x17
[ 8245.737887] ---[ end trace a01d038397e99b92 ]---
[ 8245.769363] general protection fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[ 8245.770737] Modules linked in: btrfs dm_flakey dm_mod crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop fuse parport_pc i2c_piix4 acpi_cpufreq processor psmouse i2c_core thermal_sys parport evdev serio_raw button pcspkr microcode ext4 crc16 jbd2 mbcache sg sr_mod cdrom sd_mod ata_generic virtio_scsi ata_piix libata floppy virtio_pci virtio_ring scsi_mod virtio e1000 [last unloaded: btrfs]
[ 8245.772641] CPU: 2 PID: 25064 Comm: umount Tainted: G W 4.1.0-rc5-btrfs-next-10+ #1
[ 8245.772641] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20150316_085822-nilsson.home.kraxel.org 04/01/2014
[ 8245.772641] task: ffff880013005810 ti: ffff88020d044000 task.ti: ffff88020d044000
[ 8245.772641] RIP: 0010:[<ffffffffa051c8e6>] [<ffffffffa051c8e6>] btrfs_queue_work+0x2c/0x14d [btrfs]
[ 8245.772641] RSP: 0018:ffff88020d0478b8 EFLAGS: 00010202
[ 8245.772641] RAX: 0000000000000004 RBX: 6b6b6b6b6b6b6b6b RCX: ffffffffa0581488
[ 8245.772641] RDX: 0000000000000000 RSI: ffff880194b7bf48 RDI: ffff880144b6a7a0
[ 8245.772641] RBP: ffff88020d0478d8 R08: 0000000000000000 R09: 000000000000ffff
[ 8245.772641] R10: 0000000000000004 R11: 0000000000000005 R12: ffff880194b7bf48
[ 8245.772641] R13: ffff880194b7bf48 R14: 0000000000000410 R15: 0000000000000000
[ 8245.772641] FS: 00007f991e77d840(0000) GS:ffff88023e280000(0000) knlGS:0000000000000000
[ 8245.772641] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 8245.772641] CR2: 00007fbbd325ee68 CR3: 000000021de8e000 CR4: 00000000000006e0
[ 8245.772641] Stack:
[ 8245.772641] ffff880194b7bf00 ffff880202eb4000 ffff880194b7bf48 0000000000000410
[ 8245.772641] ffff88020d047958 ffffffffa04ec6d5 ffff8801629b2ee8 0000000082987570
[ 8245.772641] 0000000000a5813f 0000000000000001 ffff880013006100 0000000000000002
[ 8245.772641] Call Trace:
[ 8245.772641] [<ffffffffa04ec6d5>] btrfs_wq_submit_bio+0xe1/0x17b [btrfs]
[ 8245.772641] [<ffffffff81086bff>] ? check_irq_usage+0x76/0x87
[ 8245.772641] [<ffffffffa04ec825>] btree_submit_bio_hook+0xb6/0xd9 [btrfs]
[ 8245.772641] [<ffffffffa04ebb7c>] ? btree_csum_one_bio+0xad/0xad [btrfs]
[ 8245.772641] [<ffffffffa04eb1a6>] ? btree_io_failed_hook+0x5e/0x5e [btrfs]
[ 8245.772641] [<ffffffffa050a6e7>] submit_one_bio+0x8c/0xc7 [btrfs]
[ 8245.772641] [<ffffffffa050d75b>] submit_extent_page.isra.18+0x9d/0x186 [btrfs]
[ 8245.772641] [<ffffffffa050d95b>] write_one_eb+0x117/0x1ae [btrfs]
[ 8245.772641] [<ffffffffa050a79b>] ? end_extent_buffer_writeback+0x21/0x21 [btrfs]
[ 8245.772641] [<ffffffffa0510510>] btree_write_cache_pages+0x2ab/0x385 [btrfs]
[ 8245.772641] [<ffffffffa04eb2b8>] btree_writepages+0x23/0x5c [btrfs]
[ 8245.772641] [<ffffffff8111c661>] do_writepages+0x23/0x2c
[ 8245.772641] [<ffffffff81189cd4>] __writeback_single_inode+0xda/0x5bd
[ 8245.772641] [<ffffffff8118aa60>] ? writeback_single_inode+0x2b/0x173
[ 8245.772641] [<ffffffff8118aafd>] writeback_single_inode+0xc8/0x173
[ 8245.772641] [<ffffffff8118ac95>] write_inode_now+0x8a/0x95
[ 8245.772641] [<ffffffff81247bf0>] ? _atomic_dec_and_lock+0x30/0x4e
[ 8245.772641] [<ffffffff8117cc5e>] iput+0x17d/0x26a
[ 8245.772641] [<ffffffffa04ef355>] close_ctree+0x22a/0x325 [btrfs]
[ 8245.772641] [<ffffffff8117d26e>] ? evict_inodes+0xdc/0xeb
[ 8245.772641] [<ffffffffa04cb3ad>] btrfs_put_super+0x19/0x1b [btrfs]
[ 8245.772641] [<ffffffff81167607>] generic_shutdown_super+0x73/0xef
[ 8245.772641] [<ffffffff81167a3a>] kill_anon_super+0x13/0x1e
[ 8245.772641] [<ffffffffa04cb1b6>] btrfs_kill_super+0x17/0x23 [btrfs]
[ 8245.772641] [<ffffffff81167544>] deactivate_locked_super+0x3b/0x68
[ 8245.772641] [<ffffffff81167dd6>] deactivate_super+0x3f/0x43
[ 8245.772641] [<ffffffff8117fbb9>] cleanup_mnt+0x59/0x78
[ 8245.772641] [<ffffffff8117fc18>] __cleanup_mnt+0x12/0x14
[ 8245.772641] [<ffffffff81065371>] task_work_run+0x8f/0xbc
[ 8245.772641] [<ffffffff810028fb>] do_notify_resume+0x45/0x53
[ 8245.772641] [<ffffffff814651ac>] int_signal+0x12/0x17
[ 8245.772641] Code: 1f 44 00 00 55 48 89 e5 41 56 41 55 41 54 53 49 89 f4 48 8b 46 70 a8 04 74 09 48 8b 5f 08 48 85 db 75 03 48 8b 1f 49 89 5c 24 68 <83> 7b 5c ff 74 04 f0 ff 43 50 49 83 7c 24 08 00 74 2c 4c 8d 6b
[ 8245.772641] RIP [<ffffffffa051c8e6>] btrfs_queue_work+0x2c/0x14d [btrfs]
[ 8245.772641] RSP <ffff88020d0478b8>
[ 8245.845040] ---[ end trace a01d038397e99b93 ]---
For logical reasons such as the phase of the moon, this happened more
often with "-o inode_cache" than without any mount options.
After some debugging it turned out to be simple to understand what was
happening:
1) close_ctree() is called;
2) It then stops the transaction kthread, which commits the current
transaction;
3) It asks the cleaner kthread to stop, which is currently running
btrfs_delete_unused_bgs();
4) btrfs_delete_unused_bgs() finds an unused block group, starts a new
transaction, deletes the block group, which implies COWing some
    tree nodes and leaves and dirtying their respective pages, and then
finally it ends the transaction it started, without committing it;
5) The cleaner kthread stops;
6) close_ctree() releases (from memory) the block group objects, which
produces the warning in the trace pasted above;
7) Then it invalidates all pages of the btree inode, by calling
invalidate_inode_pages2(), which waits for any pages under writeback,
and releases any non-dirty pages;
8) All work queues are destroyed (waiting first for their current tasks
to finish execution);
9) A final iput() is called against the btree inode;
10) This iput triggers a writeback of the btree inode because it still
has dirty pages;
11) This starts the whole chain of callbacks for the btree inode until
it eventually reaches btrfs_wq_submit_bio() where it leads to a
NULL pointer dereference because the work queues were already
destroyed.
Fix this by making the cleaner commit any transaction that it started
after the transaction kthread was stopped.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-06-13 12:55:31 +07:00
|
|
|
}
|
2008-06-26 03:01:31 +07:00
|
|
|
}
|
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
static void __btrfs_async_metadata_cache_hook(struct work_struct *work)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct btrfs_fs_info *fs_info = container_of(work,
|
|
|
|
struct btrfs_fs_info,
|
|
|
|
async_metadata_cache_work);
|
|
|
|
char mount_path[SYNO_MOUNT_PATH_LEN] = {'\0'};
|
|
|
|
char *argv[] = {"/usr/syno/sbin/synotune",
|
|
|
|
"--btrfs-metadata-rescan",
|
|
|
|
"-b",
|
|
|
|
mount_path, NULL};
|
|
|
|
static char *envp[] = {"HOME=/",
|
|
|
|
"TERM=linux",
|
|
|
|
"PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL};
|
|
|
|
|
|
|
|
spin_lock(&fs_info->mount_path_lock);
|
|
|
|
if (!fs_info->mount_path) {
|
|
|
|
spin_unlock(&fs_info->mount_path_lock);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
snprintf(mount_path, sizeof(mount_path), "%s", fs_info->mount_path);
|
|
|
|
spin_unlock(&fs_info->mount_path_lock);
|
|
|
|
|
|
|
|
if (atomic_read(&fs_info->syno_metadata_block_group_update_count) == 0)
|
|
|
|
return;
|
|
|
|
|
|
|
|
ret = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
|
|
|
|
if (ret && ret != -ENOENT)
|
|
|
|
return;
|
|
|
|
|
|
|
|
atomic_set(&fs_info->syno_metadata_block_group_update_count, 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline
|
|
|
|
void btrfs_init_async_metadata_cache_work(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
INIT_WORK(&fs_info->async_metadata_cache_work,
|
|
|
|
__btrfs_async_metadata_cache_hook);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline
|
|
|
|
void btrfs_syno_check_metadata_cache_sync(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
if (fs_info->metadata_cache_enable &&
|
|
|
|
!btrfs_fs_closing(fs_info) &&
|
|
|
|
!test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state) &&
|
|
|
|
!fs_info->mount_path &&
|
|
|
|
(atomic_read(&fs_info->syno_metadata_block_group_update_count) != 0) &&
|
|
|
|
!work_busy(&fs_info->async_metadata_cache_work))
|
|
|
|
queue_work(system_unbound_wq, &fs_info->async_metadata_cache_work);
|
|
|
|
}
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2008-06-26 03:01:31 +07:00
|
|
|
static int transaction_kthread(void *arg)
|
|
|
|
{
|
|
|
|
struct btrfs_root *root = arg;
|
2016-06-23 05:54:23 +07:00
|
|
|
struct btrfs_fs_info *fs_info = root->fs_info;
|
2008-06-26 03:01:31 +07:00
|
|
|
struct btrfs_trans_handle *trans;
|
|
|
|
struct btrfs_transaction *cur;
|
2010-05-16 21:49:58 +07:00
|
|
|
u64 transid;
|
2024-07-05 23:00:04 +07:00
|
|
|
time64_t delta;
|
2008-06-26 03:01:31 +07:00
|
|
|
unsigned long delay;
|
2012-03-12 22:05:50 +07:00
|
|
|
bool cannot_commit;
|
2008-06-26 03:01:31 +07:00
|
|
|
|
|
|
|
do {
|
2012-03-12 22:05:50 +07:00
|
|
|
cannot_commit = false;
|
2024-07-05 23:00:04 +07:00
|
|
|
delay = msecs_to_jiffies(fs_info->commit_interval * 1000);
|
2016-06-23 05:54:23 +07:00
|
|
|
mutex_lock(&fs_info->transaction_kthread_mutex);
|
2008-06-26 03:01:31 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
|
|
|
cur = fs_info->running_transaction;
|
2008-06-26 03:01:31 +07:00
|
|
|
if (!cur) {
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2008-06-26 03:01:31 +07:00
|
|
|
goto sleep;
|
|
|
|
}
|
2008-07-29 02:32:19 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
delta = ktime_get_seconds() - cur->start_time;
|
2019-08-22 14:25:00 +07:00
|
|
|
if (cur->state < TRANS_STATE_COMMIT_START &&
|
2024-07-05 23:00:04 +07:00
|
|
|
delta < fs_info->commit_interval) {
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2024-07-05 23:00:04 +07:00
|
|
|
delay -= msecs_to_jiffies((delta - 1) * 1000);
|
|
|
|
delay = min(delay,
|
|
|
|
msecs_to_jiffies(fs_info->commit_interval * 1000));
|
2008-06-26 03:01:31 +07:00
|
|
|
goto sleep;
|
|
|
|
}
|
2010-05-16 21:49:58 +07:00
|
|
|
transid = cur->transid;
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2009-03-13 21:10:06 +07:00
|
|
|
|
2012-03-12 22:03:00 +07:00
|
|
|
/* If the file system is aborted, this will always fail. */
|
Btrfs: fix orphan transaction on the frozen filesystem
With the following debug patch:
static int btrfs_freeze(struct super_block *sb)
{
+ struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+ struct btrfs_transaction *trans;
+
+ spin_lock(&fs_info->trans_lock);
+ trans = fs_info->running_transaction;
+ if (trans) {
+ printk("Transid %llu, use_count %d, num_writer %d\n",
+ trans->transid, atomic_read(&trans->use_count),
+ atomic_read(&trans->num_writers));
+ }
+ spin_unlock(&fs_info->trans_lock);
return 0;
}
I found there was an orphan transaction after the freeze operation was done.
It is because the transaction may not be committed when the last transaction
handle of the current transaction ends. This design avoids committing the
transaction too frequently, but it also introduces the above problem.
So I added btrfs_attach_transaction(), which catches the current transaction
and commits it. If there is no transaction, it returns -ENOENT and does
nothing.
This function can also be used instead of btrfs_join_transaction_freeze(),
because it does not increase the writer counter and does not start a new
transaction, so it also fixes the deadlock between sync and freeze.
Besides that, it is used instead of btrfs_join_transaction() in
transaction_kthread(), because if there is no transaction, the transaction
kthread does not need to do anything.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
2012-09-20 14:54:00 +07:00
|
|
|
trans = btrfs_attach_transaction(root);
|
2012-03-12 22:05:50 +07:00
|
|
|
if (IS_ERR(trans)) {
|
2012-09-20 14:54:00 +07:00
|
|
|
if (PTR_ERR(trans) != -ENOENT)
|
|
|
|
cannot_commit = true;
|
2012-03-12 22:03:00 +07:00
|
|
|
goto sleep;
|
2012-03-12 22:05:50 +07:00
|
|
|
}
|
2010-05-16 21:49:58 +07:00
|
|
|
if (transid == trans->transid) {
|
2016-09-10 08:39:03 +07:00
|
|
|
btrfs_commit_transaction(trans);
|
2010-05-16 21:49:58 +07:00
|
|
|
} else {
|
2016-09-10 08:39:03 +07:00
|
|
|
btrfs_end_transaction(trans);
|
2010-05-16 21:49:58 +07:00
|
|
|
}
|
2008-06-26 03:01:31 +07:00
|
|
|
sleep:
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_syno_check_metadata_cache_sync(fs_info);
|
|
|
|
#endif /* MY_ABC_HERE */
|
2016-06-23 05:54:23 +07:00
|
|
|
wake_up_process(fs_info->cleaner_kthread);
|
|
|
|
mutex_unlock(&fs_info->transaction_kthread_mutex);
|
2008-06-26 03:01:31 +07:00
|
|
|
|
2013-09-28 03:32:39 +07:00
|
|
|
if (unlikely(test_bit(BTRFS_FS_STATE_ERROR,
|
2016-06-23 05:54:23 +07:00
|
|
|
&fs_info->fs_state)))
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_cleanup_transaction(fs_info);
|
2016-03-15 17:28:59 +07:00
|
|
|
if (!kthread_should_stop() &&
|
2016-06-23 05:54:23 +07:00
|
|
|
(!btrfs_transaction_blocked(fs_info) ||
|
2016-03-15 17:28:59 +07:00
|
|
|
cannot_commit))
|
2018-01-23 19:46:53 +07:00
|
|
|
schedule_timeout_interruptible(delay);
|
2008-06-26 03:01:31 +07:00
|
|
|
} while (!kthread_should_stop());
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-11-04 02:17:42 +07:00
|
|
|
/*
|
2019-10-15 22:42:17 +07:00
|
|
|
* This will find the highest generation in the array of root backups. The
|
|
|
|
 * index of the newest backup slot is returned, or -EINVAL if we can't find
|
|
|
|
* anything.
|
2011-11-04 02:17:42 +07:00
|
|
|
*
|
|
|
|
* We check to make sure the array is valid by comparing the
|
|
|
|
* generation of the latest root in the array with the generation
|
|
|
|
* in the super block. If they don't match we pitch it.
|
|
|
|
*/
|
2019-10-15 22:42:17 +07:00
|
|
|
static int find_newest_super_backup(struct btrfs_fs_info *info)
|
2011-11-04 02:17:42 +07:00
|
|
|
{
|
2019-10-15 22:42:17 +07:00
|
|
|
const u64 newest_gen = btrfs_super_generation(info->super_copy);
|
2011-11-04 02:17:42 +07:00
|
|
|
u64 cur;
|
|
|
|
struct btrfs_root_backup *root_backup;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < BTRFS_NUM_BACKUP_ROOTS; i++) {
|
|
|
|
root_backup = info->super_copy->super_roots + i;
|
|
|
|
cur = btrfs_backup_tree_root_gen(root_backup);
|
|
|
|
if (cur == newest_gen)
|
2019-10-15 22:42:17 +07:00
|
|
|
return i;
|
2011-11-04 02:17:42 +07:00
|
|
|
}
|
|
|
|
|
2019-10-15 22:42:17 +07:00
|
|
|
return -EINVAL;
|
2011-11-04 02:17:42 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* copy all the root pointers into the super backup array.
|
|
|
|
* this will bump the backup pointer by one when it is
|
|
|
|
* done
|
|
|
|
*/
|
|
|
|
static void backup_super_roots(struct btrfs_fs_info *info)
|
|
|
|
{
|
2019-10-15 22:42:24 +07:00
|
|
|
const int next_backup = info->backup_root_index;
|
2011-11-04 02:17:42 +07:00
|
|
|
struct btrfs_root_backup *root_backup;
|
|
|
|
|
|
|
|
root_backup = info->super_for_commit->super_roots + next_backup;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* make sure all of our padding and empty slots get zero filled
|
|
|
|
* regardless of which ones we use today
|
|
|
|
*/
|
|
|
|
memset(root_backup, 0, sizeof(*root_backup));
|
|
|
|
|
|
|
|
info->backup_root_index = (next_backup + 1) % BTRFS_NUM_BACKUP_ROOTS;
|
|
|
|
|
|
|
|
btrfs_set_backup_tree_root(root_backup, info->tree_root->node->start);
|
|
|
|
btrfs_set_backup_tree_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->tree_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_tree_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->tree_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_chunk_root(root_backup, info->chunk_root->node->start);
|
|
|
|
btrfs_set_backup_chunk_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->chunk_root->node));
|
|
|
|
btrfs_set_backup_chunk_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->chunk_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_extent_root(root_backup, info->extent_root->node->start);
|
|
|
|
btrfs_set_backup_extent_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->extent_root->node));
|
|
|
|
btrfs_set_backup_extent_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->extent_root->node));
|
|
|
|
|
2011-11-07 06:50:56 +07:00
|
|
|
/*
|
|
|
|
* we might commit during log recovery, which happens before we set
|
|
|
|
* the fs_root. Make sure it is valid before we fill it in.
|
|
|
|
*/
|
|
|
|
if (info->fs_root && info->fs_root->node) {
|
|
|
|
btrfs_set_backup_fs_root(root_backup,
|
|
|
|
info->fs_root->node->start);
|
|
|
|
btrfs_set_backup_fs_root_gen(root_backup,
|
2011-11-04 02:17:42 +07:00
|
|
|
btrfs_header_generation(info->fs_root->node));
|
2011-11-07 06:50:56 +07:00
|
|
|
btrfs_set_backup_fs_root_level(root_backup,
|
2011-11-04 02:17:42 +07:00
|
|
|
btrfs_header_level(info->fs_root->node));
|
2011-11-07 06:50:56 +07:00
|
|
|
}
|
2011-11-04 02:17:42 +07:00
|
|
|
|
|
|
|
btrfs_set_backup_dev_root(root_backup, info->dev_root->node->start);
|
|
|
|
btrfs_set_backup_dev_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->dev_root->node));
|
|
|
|
btrfs_set_backup_dev_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->dev_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_csum_root(root_backup, info->csum_root->node->start);
|
|
|
|
btrfs_set_backup_csum_root_gen(root_backup,
|
|
|
|
btrfs_header_generation(info->csum_root->node));
|
|
|
|
btrfs_set_backup_csum_root_level(root_backup,
|
|
|
|
btrfs_header_level(info->csum_root->node));
|
|
|
|
|
|
|
|
btrfs_set_backup_total_bytes(root_backup,
|
|
|
|
btrfs_super_total_bytes(info->super_copy));
|
|
|
|
btrfs_set_backup_bytes_used(root_backup,
|
|
|
|
btrfs_super_bytes_used(info->super_copy));
|
|
|
|
btrfs_set_backup_num_devices(root_backup,
|
|
|
|
btrfs_super_num_devices(info->super_copy));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* if we don't copy this out to the super_copy, it won't get remembered
|
|
|
|
* for the next commit
|
|
|
|
*/
|
|
|
|
memcpy(&info->super_copy->super_roots,
|
|
|
|
&info->super_for_commit->super_roots,
|
|
|
|
sizeof(*root_backup) * BTRFS_NUM_BACKUP_ROOTS);
|
|
|
|
}
|
|
|
|
|
2019-10-15 22:42:19 +07:00
|
|
|
/*
|
|
|
|
* read_backup_root - Reads a backup root based on the passed priority. Prio 0
|
|
|
|
* is the newest, prio 1/2/3 are 2nd newest/3rd newest/4th (oldest) backup roots
|
|
|
|
*
|
|
|
|
* fs_info - filesystem whose backup roots need to be read
|
|
|
|
* priority - priority of backup root required
|
|
|
|
*
|
|
|
|
* Returns backup root index on success and -EINVAL otherwise.
|
|
|
|
*/
|
|
|
|
static int read_backup_root(struct btrfs_fs_info *fs_info, u8 priority)
|
|
|
|
{
|
|
|
|
int backup_index = find_newest_super_backup(fs_info);
|
|
|
|
struct btrfs_super_block *super = fs_info->super_copy;
|
|
|
|
struct btrfs_root_backup *root_backup;
|
|
|
|
|
|
|
|
if (priority < BTRFS_NUM_BACKUP_ROOTS && backup_index >= 0) {
|
|
|
|
if (priority == 0)
|
|
|
|
return backup_index;
|
|
|
|
|
|
|
|
backup_index = backup_index + BTRFS_NUM_BACKUP_ROOTS - priority;
|
|
|
|
backup_index %= BTRFS_NUM_BACKUP_ROOTS;
|
|
|
|
} else {
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
root_backup = super->super_roots + backup_index;
|
|
|
|
|
|
|
|
btrfs_set_super_generation(super,
|
|
|
|
btrfs_backup_tree_root_gen(root_backup));
|
|
|
|
btrfs_set_super_root(super, btrfs_backup_tree_root(root_backup));
|
|
|
|
btrfs_set_super_root_level(super,
|
|
|
|
btrfs_backup_tree_root_level(root_backup));
|
|
|
|
btrfs_set_super_bytes_used(super, btrfs_backup_bytes_used(root_backup));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Fixme: the total bytes and num_devices need to match or we should
|
|
|
|
* need a fsck
|
|
|
|
*/
|
|
|
|
btrfs_set_super_total_bytes(super, btrfs_backup_total_bytes(root_backup));
|
|
|
|
btrfs_set_super_num_devices(super, btrfs_backup_num_devices(root_backup));
|
|
|
|
|
|
|
|
return backup_index;
|
|
|
|
}
|
|
|
|
|
2013-03-17 09:10:31 +07:00
|
|
|
/* helper to cleanup workers */
|
|
|
|
static void btrfs_stop_all_workers(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
2014-02-28 09:46:14 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->fixup_workers);
|
2014-02-28 09:46:07 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->delalloc_workers);
|
2014-02-28 09:46:06 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->workers);
|
2014-02-28 09:46:10 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->endio_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_raid56_workers);
|
2014-02-28 09:46:11 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->rmw_workers);
|
2014-02-28 09:46:10 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->endio_write_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_freespace_worker);
|
2014-02-28 09:46:15 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->delayed_workers);
|
2014-02-28 09:46:12 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->caching_workers);
|
2014-02-28 09:46:13 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->readahead_workers);
|
2014-02-28 09:46:09 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->flush_workers);
|
2014-02-28 09:46:16 +07:00
|
|
|
btrfs_destroy_workqueue(fs_info->qgroup_rescan_workers);
|
2019-12-14 07:22:14 +07:00
|
|
|
if (fs_info->discard_ctl.discard_workers)
|
|
|
|
destroy_workqueue(fs_info->discard_ctl.discard_workers);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_destroy_workqueue(fs_info->syno_multiple_writeback_workers);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_destroy_workqueue(fs_info->syno_cow_async_workers);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_destroy_workqueue(fs_info->syno_cow_endio_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->syno_nocow_endio_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->syno_high_priority_endio_workers);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_destroy_workqueue(fs_info->extent_workers);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_destroy_workqueue(fs_info->syno_allocator.caching_workers);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2017-02-05 00:12:00 +07:00
|
|
|
/*
|
|
|
|
* Now that all other work queues are destroyed, we can safely destroy
|
|
|
|
* the queues used for metadata I/O, since tasks from those other work
|
|
|
|
* queues can do metadata I/O operations.
|
|
|
|
*/
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_meta_workers);
|
|
|
|
btrfs_destroy_workqueue(fs_info->endio_meta_write_workers);
|
2024-07-05 23:00:04 +07:00
|
|
|
|
2013-03-17 09:10:31 +07:00
|
|
|
}
|
|
|
|
|
2013-10-31 04:15:20 +07:00
|
|
|
static void free_root_extent_buffers(struct btrfs_root *root)
|
|
|
|
{
|
|
|
|
if (root) {
|
|
|
|
free_extent_buffer(root->node);
|
|
|
|
free_extent_buffer(root->commit_root);
|
|
|
|
root->node = NULL;
|
|
|
|
root->commit_root = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-11-04 02:17:42 +07:00
|
|
|
/* helper to cleanup tree roots */
|
2019-10-10 09:39:25 +07:00
|
|
|
static void free_root_pointers(struct btrfs_fs_info *info, bool free_chunk_root)
|
2011-11-04 02:17:42 +07:00
|
|
|
{
|
2013-10-31 04:15:20 +07:00
|
|
|
free_root_extent_buffers(info->tree_root);
|
2013-05-18 01:06:51 +07:00
|
|
|
|
2013-10-31 04:15:20 +07:00
|
|
|
free_root_extent_buffers(info->dev_root);
|
|
|
|
free_root_extent_buffers(info->extent_root);
|
|
|
|
free_root_extent_buffers(info->csum_root);
|
|
|
|
free_root_extent_buffers(info->quota_root);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
free_root_extent_buffers(info->usrquota_root);
|
|
|
|
#endif /* MY_ABC_HERE */
|
2013-10-31 04:15:20 +07:00
|
|
|
free_root_extent_buffers(info->uuid_root);
|
2020-02-15 04:11:42 +07:00
|
|
|
free_root_extent_buffers(info->fs_root);
|
2020-05-15 13:01:42 +07:00
|
|
|
free_root_extent_buffers(info->data_reloc_root);
|
2019-10-10 09:39:25 +07:00
|
|
|
if (free_chunk_root)
|
2013-10-31 04:15:20 +07:00
|
|
|
free_root_extent_buffers(info->chunk_root);
|
2015-09-30 10:50:38 +07:00
|
|
|
free_root_extent_buffers(info->free_space_root);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
free_root_extent_buffers(info->block_group_hint_root);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
free_root_extent_buffers(info->block_group_cache_root);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
free_root_extent_buffers(info->syno_usage_root);
|
|
|
|
free_root_extent_buffers(info->syno_extent_usage_root);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
free_root_extent_buffers(info->syno_feat_root);
|
|
|
|
#endif /* MY_ABC_HERE */
|
2011-11-04 02:17:42 +07:00
|
|
|
}
|
|
|
|
|
2020-02-15 04:11:42 +07:00
|
|
|
void btrfs_put_root(struct btrfs_root *root)
|
|
|
|
{
|
|
|
|
if (!root)
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (refcount_dec_and_test(&root->refs)) {
|
|
|
|
WARN_ON(!RB_EMPTY_ROOT(&root->inode_tree));
|
2020-05-20 13:58:51 +07:00
|
|
|
WARN_ON(test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state));
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
if (root->syno_delalloc_bytes) {
|
|
|
|
WARN_ON_ONCE(percpu_counter_sum(root->syno_delalloc_bytes));
|
|
|
|
percpu_counter_destroy(root->syno_delalloc_bytes);
|
|
|
|
kfree(root->syno_delalloc_bytes);
|
|
|
|
root->syno_delalloc_bytes = NULL;
|
|
|
|
}
|
|
|
|
#endif /* MY_ABC_HERE */
|
2020-02-15 04:11:42 +07:00
|
|
|
if (root->anon_dev)
|
|
|
|
free_anon_bdev(root->anon_dev);
|
2020-06-23 15:40:07 +07:00
|
|
|
free_root_extent_buffers(root);
|
2020-02-15 04:11:42 +07:00
|
|
|
kfree(root->free_ino_ctl);
|
|
|
|
kfree(root->free_ino_pinned);
|
|
|
|
#ifdef CONFIG_BTRFS_DEBUG
|
|
|
|
spin_lock(&root->fs_info->fs_roots_radix_lock);
|
|
|
|
list_del_init(&root->leak_list);
|
|
|
|
spin_unlock(&root->fs_info->fs_roots_radix_lock);
|
|
|
|
#endif
|
|
|
|
kfree(root);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-05-08 04:06:09 +07:00
|
|
|
void btrfs_free_fs_roots(struct btrfs_fs_info *fs_info)
|
2013-04-25 03:35:41 +07:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct btrfs_root *gang[8];
|
|
|
|
int i;
|
|
|
|
|
|
|
|
while (!list_empty(&fs_info->dead_roots)) {
|
|
|
|
gang[0] = list_entry(fs_info->dead_roots.next,
|
|
|
|
struct btrfs_root, root_list);
|
|
|
|
list_del(&gang[0]->root_list);
|
|
|
|
|
2020-02-15 04:11:42 +07:00
|
|
|
if (test_bit(BTRFS_ROOT_IN_RADIX, &gang[0]->state))
|
2013-05-15 14:48:19 +07:00
|
|
|
btrfs_drop_and_free_fs_root(fs_info, gang[0]);
|
2020-02-15 04:11:44 +07:00
|
|
|
btrfs_put_root(gang[0]);
|
2013-04-25 03:35:41 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
while (1) {
|
|
|
|
ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
|
|
|
|
(void **)gang, 0,
|
|
|
|
ARRAY_SIZE(gang));
|
|
|
|
if (!ret)
|
|
|
|
break;
|
|
|
|
for (i = 0; i < ret; i++)
|
2013-05-15 14:48:19 +07:00
|
|
|
btrfs_drop_and_free_fs_root(fs_info, gang[i]);
|
2013-04-25 03:35:41 +07:00
|
|
|
}
|
|
|
|
}
|
2011-11-04 02:17:42 +07:00
|
|
|
|
2014-08-02 06:12:38 +07:00
|
|
|
static void btrfs_init_scrub(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
mutex_init(&fs_info->scrub_lock);
|
|
|
|
atomic_set(&fs_info->scrubs_running, 0);
|
|
|
|
atomic_set(&fs_info->scrub_pause_req, 0);
|
|
|
|
atomic_set(&fs_info->scrubs_paused, 0);
|
|
|
|
atomic_set(&fs_info->scrub_cancel_req, 0);
|
|
|
|
init_waitqueue_head(&fs_info->scrub_pause_wait);
|
2019-01-30 13:45:02 +07:00
|
|
|
refcount_set(&fs_info->scrub_workers_refcnt, 0);
|
2014-08-02 06:12:38 +07:00
|
|
|
}
|
|
|
|
|
2014-08-02 06:12:39 +07:00
|
|
|
static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
spin_lock_init(&fs_info->balance_lock);
|
|
|
|
mutex_init(&fs_info->balance_mutex);
|
|
|
|
atomic_set(&fs_info->balance_pause_req, 0);
|
|
|
|
atomic_set(&fs_info->balance_cancel_req, 0);
|
|
|
|
fs_info->balance_ctl = NULL;
|
|
|
|
init_waitqueue_head(&fs_info->balance_wait_q);
|
|
|
|
}
|
|
|
|
|
2016-06-22 08:16:51 +07:00
|
|
|
static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
|
2014-08-02 06:12:40 +07:00
|
|
|
{
|
2016-06-23 05:54:24 +07:00
|
|
|
struct inode *inode = fs_info->btree_inode;
|
|
|
|
|
|
|
|
inode->i_ino = BTRFS_BTREE_INODE_OBJECTID;
|
|
|
|
set_nlink(inode, 1);
|
2014-08-02 06:12:40 +07:00
|
|
|
/*
|
|
|
|
* we set the i_size on the btree inode to the max possible int.
|
|
|
|
* the real end of the address space is determined by all of
|
|
|
|
* the devices in the system
|
|
|
|
*/
|
2016-06-23 05:54:24 +07:00
|
|
|
inode->i_size = OFFSET_MAX;
|
|
|
|
inode->i_mapping->a_ops = &btree_aops;
|
2014-08-02 06:12:40 +07:00
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
RB_CLEAR_NODE(&BTRFS_I(inode)->rb_node);
|
2019-03-01 09:47:59 +07:00
|
|
|
extent_io_tree_init(fs_info, &BTRFS_I(inode)->io_tree,
|
2020-09-15 12:35:27 +07:00
|
|
|
IO_TREE_BTREE_INODE_IO, inode);
|
2019-03-11 21:58:30 +07:00
|
|
|
BTRFS_I(inode)->io_tree.track_uptodate = false;
|
2016-06-23 05:54:24 +07:00
|
|
|
extent_map_tree_init(&BTRFS_I(inode)->extent_tree);
|
2014-08-02 06:12:40 +07:00
|
|
|
|
2020-02-15 04:11:43 +07:00
|
|
|
BTRFS_I(inode)->root = btrfs_grab_root(fs_info->tree_root);
|
2016-06-23 05:54:24 +07:00
|
|
|
memset(&BTRFS_I(inode)->location, 0, sizeof(struct btrfs_key));
|
|
|
|
set_bit(BTRFS_INODE_DUMMY, &BTRFS_I(inode)->runtime_flags);
|
|
|
|
btrfs_insert_inode_hash(inode);
|
2014-08-02 06:12:40 +07:00
|
|
|
}
|
|
|
|
|
2014-08-02 06:12:41 +07:00
|
|
|
static void btrfs_init_dev_replace_locks(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
mutex_init(&fs_info->dev_replace.lock_finishing_cancel_unmount);
|
2018-04-05 06:29:24 +07:00
|
|
|
init_rwsem(&fs_info->dev_replace.rwsem);
|
2018-04-05 06:04:49 +07:00
|
|
|
init_waitqueue_head(&fs_info->dev_replace.replace_wait);
|
2014-08-02 06:12:41 +07:00
|
|
|
}
|
|
|
|
|
2014-08-02 06:12:42 +07:00
|
|
|
static void btrfs_init_qgroup(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
spin_lock_init(&fs_info->qgroup_lock);
|
|
|
|
mutex_init(&fs_info->qgroup_ioctl_lock);
|
|
|
|
fs_info->qgroup_tree = RB_ROOT;
|
|
|
|
INIT_LIST_HEAD(&fs_info->dirty_qgroups);
|
|
|
|
fs_info->qgroup_seq = 1;
|
|
|
|
fs_info->qgroup_ulist = NULL;
|
2016-08-15 23:10:33 +07:00
|
|
|
fs_info->qgroup_rescan_running = false;
|
2014-08-02 06:12:42 +07:00
|
|
|
mutex_init(&fs_info->qgroup_rescan_lock);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
init_rwsem(&fs_info->inflight_reserve_lock);
|
|
|
|
fs_info->need_clear_reserve = false;
|
|
|
|
#endif /* MY_ABC_HERE */
|
2014-08-02 06:12:42 +07:00
|
|
|
}
|
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
static void btrfs_init_usrquota(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
spin_lock_init(&fs_info->usrquota_lock);
|
|
|
|
mutex_init(&fs_info->usrquota_ioctl_lock);
|
|
|
|
fs_info->usrquota_tree = RB_ROOT;
|
|
|
|
INIT_LIST_HEAD(&fs_info->dirty_usrquota);
|
|
|
|
INIT_LIST_HEAD(&fs_info->usrquota_ro_roots);
|
|
|
|
mutex_init(&fs_info->usrquota_ro_roots_lock);
|
|
|
|
fs_info->usrquota_flags = 0;
|
|
|
|
}
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2015-02-16 22:29:26 +07:00
|
|
|
static int btrfs_init_workqueues(struct btrfs_fs_info *fs_info,
|
|
|
|
struct btrfs_fs_devices *fs_devices)
|
|
|
|
{
|
2018-02-13 16:50:42 +07:00
|
|
|
u32 max_active = fs_info->thread_pool_size;
|
2015-02-17 00:34:01 +07:00
|
|
|
unsigned int flags = WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_UNBOUND;
|
2015-02-16 22:29:26 +07:00
|
|
|
|
|
|
|
fs_info->workers =
|
2016-06-10 03:22:11 +07:00
|
|
|
btrfs_alloc_workqueue(fs_info, "worker",
|
|
|
|
flags | WQ_HIGHPRI, max_active, 16);
|
2015-02-16 22:29:26 +07:00
|
|
|
|
|
|
|
fs_info->delalloc_workers =
|
2016-06-10 03:22:11 +07:00
|
|
|
btrfs_alloc_workqueue(fs_info, "delalloc",
|
|
|
|
flags, max_active, 2);
|
2015-02-16 22:29:26 +07:00
|
|
|
|
|
|
|
fs_info->flush_workers =
|
2016-06-10 03:22:11 +07:00
|
|
|
btrfs_alloc_workqueue(fs_info, "flush_delalloc",
|
|
|
|
flags, max_active, 0);
|
2015-02-16 22:29:26 +07:00
|
|
|
|
|
	fs_info->caching_workers =
		btrfs_alloc_workqueue(fs_info, "cache", flags, max_active, 0);

	fs_info->fixup_workers =
		btrfs_alloc_workqueue(fs_info, "fixup", flags, 1, 0);

	/*
	 * endios are largely parallel and should have a very
	 * low idle thresh
	 */
	fs_info->endio_workers =
		btrfs_alloc_workqueue(fs_info, "endio", flags, max_active, 4);
	fs_info->endio_meta_workers =
		btrfs_alloc_workqueue(fs_info, "endio-meta", flags,
				      max_active, 4);
	fs_info->endio_meta_write_workers =
		btrfs_alloc_workqueue(fs_info, "endio-meta-write", flags,
				      max_active, 2);
	fs_info->endio_raid56_workers =
		btrfs_alloc_workqueue(fs_info, "endio-raid56", flags,
				      max_active, 4);
	fs_info->rmw_workers =
		btrfs_alloc_workqueue(fs_info, "rmw", flags, max_active, 2);
	fs_info->endio_write_workers =
		btrfs_alloc_workqueue(fs_info, "endio-write", flags,
				      max_active, 2);
	fs_info->endio_freespace_worker =
		btrfs_alloc_workqueue(fs_info, "freespace-write", flags,
				      max_active, 0);
	fs_info->delayed_workers =
		btrfs_alloc_workqueue(fs_info, "delayed-meta", flags,
				      max_active, 0);
	fs_info->readahead_workers =
		btrfs_alloc_workqueue(fs_info, "readahead", flags,
				      max_active, 2);
	fs_info->qgroup_rescan_workers =
		btrfs_alloc_workqueue(fs_info, "qgroup-rescan", flags, 1, 0);
	fs_info->discard_ctl.discard_workers =
		alloc_workqueue("btrfs_discard", WQ_UNBOUND | WQ_FREEZABLE, 1);
#ifdef MY_ABC_HERE
#ifdef MY_ABC_HERE
	fs_info->syno_multiple_writeback_workers =
		btrfs_alloc_workqueue_with_sysfs(fs_info, "syno-multi-wb",
						 flags, max_active, 2);
#else /* MY_ABC_HERE */
	fs_info->syno_multiple_writeback_workers =
		btrfs_alloc_workqueue(fs_info, "syno-multi-wb", flags,
				      max_active, 2);
#endif /* MY_ABC_HERE */
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	fs_info->syno_cow_async_workers =
		btrfs_alloc_workqueue(fs_info, "syno_cow_async_workers",
				      flags | WQ_HIGHPRI, max_active, 2);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	/* To reduce COW ordered extent contention, cap max active at 4 */
	fs_info->syno_cow_endio_workers =
		btrfs_alloc_workqueue(fs_info, "syno_cow", flags,
				      min_t(unsigned long, 4, max_active), 2);
	fs_info->syno_nocow_endio_workers =
		btrfs_alloc_workqueue(fs_info, "syno_nocow", flags,
				      max_active, 2);
	fs_info->syno_high_priority_endio_workers =
		btrfs_alloc_workqueue(fs_info, "syno_high_priority",
				      flags | WQ_HIGHPRI, WQ_DFL_ACTIVE, 2);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	fs_info->extent_workers =
		btrfs_alloc_workqueue(fs_info, "extent-refs", flags,
				      min_t(unsigned long, 4, max_active), 8);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	fs_info->syno_allocator.caching_workers =
		btrfs_alloc_workqueue(fs_info, "syno-bg-cache", flags,
				      max_active, 2);
#endif /* MY_ABC_HERE */

	if (!(fs_info->workers && fs_info->delalloc_workers &&
	      fs_info->flush_workers &&
	      fs_info->endio_workers && fs_info->endio_meta_workers &&
	      fs_info->endio_meta_write_workers &&
	      fs_info->endio_write_workers && fs_info->endio_raid56_workers &&
	      fs_info->endio_freespace_worker && fs_info->rmw_workers &&
	      fs_info->caching_workers && fs_info->readahead_workers &&
	      fs_info->fixup_workers && fs_info->delayed_workers &&
	      fs_info->qgroup_rescan_workers &&
	      fs_info->discard_ctl.discard_workers
#ifdef MY_ABC_HERE
	      && fs_info->syno_multiple_writeback_workers
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	      && fs_info->syno_cow_async_workers
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	      && fs_info->syno_cow_endio_workers
	      && fs_info->syno_nocow_endio_workers
	      && fs_info->syno_high_priority_endio_workers
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	      && fs_info->extent_workers
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	      && fs_info->syno_allocator.caching_workers
#endif /* MY_ABC_HERE */
	      )) {
		return -ENOMEM;
	}

	return 0;
}
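The `min_t(unsigned long, 4, max_active)` pattern above caps a workqueue's concurrency at 4 regardless of how large the mount-time `max_active` value is, while still shrinking below 4 on small machines. A standalone sketch of that clamp (the helper name `capped_max_active` is illustrative, not part of the kernel source):

```c
#include <assert.h>

/* Minimal stand-in for the kernel's min_t(): compare after casting
 * both operands to one type, avoiding signed/unsigned surprises. */
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* Hypothetical helper: pick the active-worker limit for a queue that
 * should never run more than 4 items concurrently (e.g. to limit
 * ordered-extent contention), but still scale down on small systems. */
static unsigned long capped_max_active(unsigned long max_active)
{
	return min_t(unsigned long, 4, max_active);
}
```

On a machine where `max_active` was computed as 8, the queue gets 4 workers; on a 2-CPU machine it keeps 2.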

static int btrfs_init_csum_hash(struct btrfs_fs_info *fs_info, u16 csum_type)
{
	struct crypto_shash *csum_shash;
	const char *csum_driver = btrfs_super_csum_driver(csum_type);

	csum_shash = crypto_alloc_shash(csum_driver, 0, 0);

	if (IS_ERR(csum_shash)) {
		btrfs_err(fs_info, "error allocating %s hash for checksum",
			  csum_driver);
		return PTR_ERR(csum_shash);
	}

	fs_info->csum_shash = csum_shash;

	return 0;
}

static int btrfs_replay_log(struct btrfs_fs_info *fs_info,
			    struct btrfs_fs_devices *fs_devices)
{
	int ret;
	struct btrfs_root *log_tree_root;
	struct btrfs_super_block *disk_super = fs_info->super_copy;
	u64 bytenr = btrfs_super_log_root(disk_super);
	int level = btrfs_super_log_root_level(disk_super);

	if (fs_devices->rw_devices == 0) {
		btrfs_warn(fs_info, "log replay required on RO media");
		return -EIO;
	}

	log_tree_root = btrfs_alloc_root(fs_info, BTRFS_TREE_LOG_OBJECTID,
					 GFP_KERNEL);
	if (!log_tree_root)
		return -ENOMEM;

	log_tree_root->node = read_tree_block(fs_info, bytenr,
					      fs_info->generation + 1,
					      level, NULL);
	if (IS_ERR(log_tree_root->node)) {
		btrfs_warn(fs_info, "failed to read log tree");
		ret = PTR_ERR(log_tree_root->node);
		log_tree_root->node = NULL;
		btrfs_put_root(log_tree_root);
		return ret;
	} else if (!extent_buffer_uptodate(log_tree_root->node)) {
		btrfs_err(fs_info, "failed to read log tree");
		btrfs_put_root(log_tree_root);
		return -EIO;
	}
	/* returns with log_tree_root freed on success */
	ret = btrfs_recover_log_trees(log_tree_root);
	if (ret) {
		btrfs_handle_fs_error(fs_info, ret,
				      "Failed to recover log tree");
		btrfs_put_root(log_tree_root);
		return ret;
	}

	if (sb_rdonly(fs_info->sb)) {
		ret = btrfs_commit_super(fs_info);
		if (ret)
			return ret;
	}

	return 0;
}

static int btrfs_read_roots(struct btrfs_fs_info *fs_info)
{
	struct btrfs_root *tree_root = fs_info->tree_root;
	struct btrfs_root *root;
	struct btrfs_key location;
	int ret;
#ifdef MY_ABC_HERE
	int err;
#endif /* MY_ABC_HERE */

	BUG_ON(!fs_info->tree_root);

	location.objectid = BTRFS_EXTENT_TREE_OBJECTID;
	location.type = BTRFS_ROOT_ITEM_KEY;
	location.offset = 0;

	root = btrfs_read_tree_root(tree_root, &location);
	if (IS_ERR(root)) {
		ret = PTR_ERR(root);
		goto out;
	}
	set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
	fs_info->extent_root = root;

	location.objectid = BTRFS_DEV_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (IS_ERR(root)) {
		ret = PTR_ERR(root);
		goto out;
	}
	set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
	fs_info->dev_root = root;
	btrfs_init_devices_late(fs_info);

	location.objectid = BTRFS_CSUM_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (IS_ERR(root)) {
		ret = PTR_ERR(root);
		goto out;
	}
	set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
	fs_info->csum_root = root;

	/*
	 * This tree can share blocks with some other fs tree during relocation
	 * and we need a proper setup by btrfs_get_fs_root
	 */
	root = btrfs_get_fs_root(tree_root->fs_info,
				 BTRFS_DATA_RELOC_TREE_OBJECTID, true);
	if (IS_ERR(root)) {
		ret = PTR_ERR(root);
		goto out;
	}
	set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
	fs_info->data_reloc_root = root;

#ifdef MY_ABC_HERE
	if (btrfs_test_opt(fs_info, NO_QUOTA_TREE))
		goto skip_quota;

	/* Try loading quota tree v2 */
	location.objectid = BTRFS_SYNO_QUOTA_V2_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		set_bit(BTRFS_FS_SYNO_QUOTA_V2_ENABLED, &fs_info->flags);
		fs_info->quota_root = root;
	}

	/* Try loading quota tree v1 */
	location.objectid = BTRFS_QUOTA_TREE_OBJECTID;
	if (fs_info->quota_root)
		root = ERR_PTR(-EEXIST);
	else
		root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		set_bit(BTRFS_FS_SYNO_QUOTA_V1_ENABLED, &fs_info->flags);
		fs_info->quota_root = root;
	}

	/* Try loading usrquota tree v2 */
	location.objectid = BTRFS_SYNO_USRQUOTA_V2_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		set_bit(BTRFS_FS_SYNO_USRQUOTA_V2_ENABLED, &fs_info->flags);
		fs_info->usrquota_root = root;
	}

	/* Try loading usrquota tree v1 */
	location.objectid = BTRFS_USRQUOTA_TREE_OBJECTID;
	if (fs_info->usrquota_root)
		root = ERR_PTR(-EEXIST);
	else
		root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		set_bit(BTRFS_FS_SYNO_USRQUOTA_V1_ENABLED, &fs_info->flags);
		fs_info->usrquota_root = root;
	}

skip_quota:
#else
	location.objectid = BTRFS_QUOTA_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
		fs_info->quota_root = root;
	}
#endif /* MY_ABC_HERE */

	location.objectid = BTRFS_UUID_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (IS_ERR(root)) {
		ret = PTR_ERR(root);
		if (ret != -ENOENT)
			goto out;
	} else {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		fs_info->uuid_root = root;
	}

	if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
		location.objectid = BTRFS_FREE_SPACE_TREE_OBJECTID;
		root = btrfs_read_tree_root(tree_root, &location);
		if (IS_ERR(root)) {
			ret = PTR_ERR(root);
			goto out;
		}
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		fs_info->free_space_root = root;
	}

#ifdef MY_ABC_HERE
	if (btrfs_test_opt(fs_info, BLOCK_GROUP_HINT_TREE)) {
		location.objectid = BTRFS_BLOCK_GROUP_HINT_TREE_OBJECTID;
		root = btrfs_read_tree_root(tree_root, &location);
		if (!IS_ERR(root)) {
			set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
			fs_info->block_group_hint_root = root;
		} else if (PTR_ERR(root) != -ENOENT) {
			btrfs_clear_opt(fs_info->mount_opt, BLOCK_GROUP_HINT_TREE);
			btrfs_warn(fs_info, "Failed to read block group hint tree");
		}
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (btrfs_fs_compat(fs_info, BLOCK_GROUP_CACHE_TREE)) {
		location.objectid = BTRFS_BLOCK_GROUP_CACHE_TREE_OBJECTID;
		root = btrfs_read_tree_root(tree_root, &location);
		if (IS_ERR(root)) {
			ret = PTR_ERR(root);
			goto out;
		}
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		fs_info->block_group_cache_root = root;
		/* check that the block group cache tree is consistent */
		err = btrfs_check_syno_block_group_cache_tree(fs_info);
		if (err) {
			set_bit(BTRFS_FS_BLOCK_GROUP_CACHE_TREE_BROKEN, &fs_info->flags);
			btrfs_warn(fs_info, "block group cache tree is inconsistent, err:%d", err);
		}
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	location.objectid = BTRFS_SYNO_USAGE_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		fs_info->syno_usage_root = root;
	}

	location.objectid = BTRFS_SYNO_EXTENT_USAGE_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		fs_info->syno_extent_usage_root = root;
	}
	if (fs_info->syno_usage_root && fs_info->syno_extent_usage_root)
		set_bit(BTRFS_FS_SYNO_SPACE_USAGE_ENABLED, &fs_info->flags);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	location.objectid = BTRFS_SYNO_FEATURE_TREE_OBJECTID;
	root = btrfs_read_tree_root(tree_root, &location);
	if (!IS_ERR(root)) {
		set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
		fs_info->syno_feat_root = root;
	}
#endif /* MY_ABC_HERE */
	return 0;
out:
	btrfs_warn(fs_info, "failed to read root (objectid=%llu): %d",
		   location.objectid, ret);
	return ret;
}

/*
 * Real super block validation
 * NOTE: super csum type and incompat features will not be checked here.
 *
 * @sb:		super block to check
 * @mirror_num:	the super block number to check its bytenr:
 *		0	the primary (1st) sb
 *		1, 2	2nd and 3rd backup copy
 *	       -1	skip bytenr check
 */
static int validate_super(struct btrfs_fs_info *fs_info,
			  struct btrfs_super_block *sb, int mirror_num)
{
	u64 nodesize = btrfs_super_nodesize(sb);
	u64 sectorsize = btrfs_super_sectorsize(sb);
	int ret = 0;

	if (btrfs_super_magic(sb) != BTRFS_MAGIC) {
		btrfs_err(fs_info, "no valid FS found");
		ret = -EINVAL;
	}
	if (btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP) {
		btrfs_err(fs_info, "unrecognized or unsupported super flag: %llu",
			  btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP);
		ret = -EINVAL;
	}
	if (btrfs_super_root_level(sb) >= BTRFS_MAX_LEVEL) {
		btrfs_err(fs_info, "tree_root level too big: %d >= %d",
			  btrfs_super_root_level(sb), BTRFS_MAX_LEVEL);
		ret = -EINVAL;
	}
	if (btrfs_super_chunk_root_level(sb) >= BTRFS_MAX_LEVEL) {
		btrfs_err(fs_info, "chunk_root level too big: %d >= %d",
			  btrfs_super_chunk_root_level(sb), BTRFS_MAX_LEVEL);
		ret = -EINVAL;
	}
	if (btrfs_super_log_root_level(sb) >= BTRFS_MAX_LEVEL) {
		btrfs_err(fs_info, "log_root level too big: %d >= %d",
			  btrfs_super_log_root_level(sb), BTRFS_MAX_LEVEL);
		ret = -EINVAL;
	}

	/*
	 * Check sectorsize and nodesize first, other checks will need them.
	 * Check all possible sectorsizes (4K, 8K, 16K, 32K, 64K) here.
	 */
	if (!is_power_of_2(sectorsize) || sectorsize < 4096 ||
	    sectorsize > BTRFS_MAX_METADATA_BLOCKSIZE) {
		btrfs_err(fs_info, "invalid sectorsize %llu", sectorsize);
		ret = -EINVAL;
	}
	/* Only PAGE SIZE is supported yet */
	if (sectorsize != PAGE_SIZE) {
		btrfs_err(fs_info,
			"sectorsize %llu not supported yet, only support %lu",
			sectorsize, PAGE_SIZE);
		ret = -EINVAL;
	}
	if (!is_power_of_2(nodesize) || nodesize < sectorsize ||
	    nodesize > BTRFS_MAX_METADATA_BLOCKSIZE) {
		btrfs_err(fs_info, "invalid nodesize %llu", nodesize);
		ret = -EINVAL;
	}
	if (nodesize != le32_to_cpu(sb->__unused_leafsize)) {
		btrfs_err(fs_info, "invalid leafsize %u, should be %llu",
			  le32_to_cpu(sb->__unused_leafsize), nodesize);
		ret = -EINVAL;
	}

	/* Root alignment check */
	if (!IS_ALIGNED(btrfs_super_root(sb), sectorsize)) {
		btrfs_warn(fs_info, "tree_root block unaligned: %llu",
			   btrfs_super_root(sb));
		ret = -EINVAL;
	}
	if (!IS_ALIGNED(btrfs_super_chunk_root(sb), sectorsize)) {
		btrfs_warn(fs_info, "chunk_root block unaligned: %llu",
			   btrfs_super_chunk_root(sb));
		ret = -EINVAL;
	}
	if (!IS_ALIGNED(btrfs_super_log_root(sb), sectorsize)) {
		btrfs_warn(fs_info, "log_root block unaligned: %llu",
			   btrfs_super_log_root(sb));
		ret = -EINVAL;
	}

	if (memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
		   BTRFS_FSID_SIZE)) {
		btrfs_err(fs_info,
		"superblock fsid doesn't match fsid of fs_devices: %pU != %pU",
			  fs_info->super_copy->fsid, fs_info->fs_devices->fsid);
		ret = -EINVAL;
	}

	if (btrfs_fs_incompat(fs_info, METADATA_UUID) &&
	    memcmp(fs_info->fs_devices->metadata_uuid,
		   fs_info->super_copy->metadata_uuid, BTRFS_FSID_SIZE)) {
		btrfs_err(fs_info,
"superblock metadata_uuid doesn't match metadata uuid of fs_devices: %pU != %pU",
			  fs_info->super_copy->metadata_uuid,
			  fs_info->fs_devices->metadata_uuid);
		ret = -EINVAL;
	}

	if (memcmp(fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid,
		   BTRFS_FSID_SIZE) != 0) {
		btrfs_err(fs_info,
			"dev_item UUID does not match metadata fsid: %pU != %pU",
			fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid);
		ret = -EINVAL;
	}

	/*
	 * Hint to catch really bogus numbers, bitflips or so, more exact
	 * checks are done later
	 */
	if (btrfs_super_bytes_used(sb) < 6 * btrfs_super_nodesize(sb)) {
		btrfs_err(fs_info, "bytes_used is too small %llu",
			  btrfs_super_bytes_used(sb));
		ret = -EINVAL;
	}
	if (!is_power_of_2(btrfs_super_stripesize(sb))) {
		btrfs_err(fs_info, "invalid stripesize %u",
			  btrfs_super_stripesize(sb));
		ret = -EINVAL;
	}
	if (btrfs_super_num_devices(sb) > (1UL << 31))
		btrfs_warn(fs_info, "suspicious number of devices: %llu",
			   btrfs_super_num_devices(sb));
	if (btrfs_super_num_devices(sb) == 0) {
		btrfs_err(fs_info, "number of devices is 0");
		ret = -EINVAL;
	}

	if (mirror_num >= 0 &&
	    btrfs_super_bytenr(sb) != btrfs_sb_offset(mirror_num)) {
		btrfs_err(fs_info, "super offset mismatch %llu != %u",
			  btrfs_super_bytenr(sb), BTRFS_SUPER_INFO_OFFSET);
		ret = -EINVAL;
	}

	/*
	 * Obvious sys_chunk_array corruptions, it must hold at least one key
	 * and one chunk
	 */
	if (btrfs_super_sys_array_size(sb) > BTRFS_SYSTEM_CHUNK_ARRAY_SIZE) {
		btrfs_err(fs_info, "system chunk array too big %u > %u",
			  btrfs_super_sys_array_size(sb),
			  BTRFS_SYSTEM_CHUNK_ARRAY_SIZE);
		ret = -EINVAL;
	}
	if (btrfs_super_sys_array_size(sb) < sizeof(struct btrfs_disk_key)
			+ sizeof(struct btrfs_chunk)) {
		btrfs_err(fs_info, "system chunk array too small %u < %zu",
			  btrfs_super_sys_array_size(sb),
			  sizeof(struct btrfs_disk_key)
			  + sizeof(struct btrfs_chunk));
		ret = -EINVAL;
	}

	/*
	 * The generation is a global counter, we'll trust it more than the
	 * others but it's still possible that it's the one that's wrong.
	 */
	if (btrfs_super_generation(sb) < btrfs_super_chunk_root_generation(sb))
		btrfs_warn(fs_info,
			"suspicious: generation < chunk_root_generation: %llu < %llu",
			btrfs_super_generation(sb),
			btrfs_super_chunk_root_generation(sb));
	if (btrfs_super_generation(sb) < btrfs_super_cache_generation(sb)
	    && btrfs_super_cache_generation(sb) != (u64)-1)
		btrfs_warn(fs_info,
			"suspicious: generation < cache_generation: %llu < %llu",
			btrfs_super_generation(sb),
			btrfs_super_cache_generation(sb));

	return ret;
}
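The size checks in validate_super() reduce to a few arithmetic predicates: sector and node sizes must be powers of two within [4096, BTRFS_MAX_METADATA_BLOCKSIZE], the node size must not be smaller than the sector size, and tree root block addresses must be sector aligned. A self-contained sketch of the same predicates (constants inlined and helper names invented for illustration; this is not the kernel code itself):

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_METADATA_BLOCKSIZE 65536	/* mirrors BTRFS_MAX_METADATA_BLOCKSIZE */

/* Same trick the kernel's is_power_of_2() uses: a power of two has
 * exactly one bit set, so n & (n - 1) clears it to zero. */
static bool is_pow2(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* sectorsize must be a power of two in [4096, 64K] */
static bool valid_sectorsize(uint64_t sectorsize)
{
	return is_pow2(sectorsize) && sectorsize >= 4096 &&
	       sectorsize <= MAX_METADATA_BLOCKSIZE;
}

/* nodesize must be a power of two, >= sectorsize, <= 64K */
static bool valid_nodesize(uint64_t nodesize, uint64_t sectorsize)
{
	return is_pow2(nodesize) && nodesize >= sectorsize &&
	       nodesize <= MAX_METADATA_BLOCKSIZE;
}

/* tree root byte addresses must be aligned to the sector size */
static bool root_aligned(uint64_t bytenr, uint64_t sectorsize)
{
	return (bytenr % sectorsize) == 0;
}
```

For example, a 16K nodesize on a 4K sectorsize passes, while a 2K nodesize fails both the lower bound and the power-of-two ordering constraint relative to the sector size.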

/*
 * Validation of super block at mount time.
 * Some checks already done early at mount time, like csum type and incompat
 * flags, will be skipped.
 */
static int btrfs_validate_mount_super(struct btrfs_fs_info *fs_info)
{
	return validate_super(fs_info, fs_info->super_copy, 0);
}
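The mirror_num bytenr check in validate_super() relies on btrfs keeping its superblock copies at fixed offsets: the primary at 64KiB, and each mirror at 16KiB shifted left by 12 bits per mirror number (64MiB, then 256GiB). A standalone sketch of that offset function, assuming the upstream constants (64KiB primary offset, mirror shift of 12):

```c
#include <stdint.h>

#define SUPER_INFO_OFFSET (64 * 1024ULL)	/* primary superblock at 64KiB */
#define SUPER_MIRROR_SHIFT 12

/* Sketch of the btrfs_sb_offset() layout: copy 0 at 64KiB,
 * copy 1 at 16KiB << 12 = 64MiB, copy 2 at 16KiB << 24 = 256GiB. */
static uint64_t sb_offset(int mirror)
{
	uint64_t start = 16 * 1024ULL;

	if (mirror)
		return start << (SUPER_MIRROR_SHIFT * mirror);
	return SUPER_INFO_OFFSET;
}
```

validate_super() compares btrfs_super_bytenr(sb) against this offset for the mirror being checked, and skips the comparison entirely when mirror_num is -1 (write-time validation, where the bytenr is about to be overwritten).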

/*
 * Validation of super block at write time.
 * Some checks like bytenr check will be skipped as their values will be
 * overwritten soon.
 * Extra checks like csum type and incompat flags will be done here.
 */
static int btrfs_validate_write_super(struct btrfs_fs_info *fs_info,
				      struct btrfs_super_block *sb)
{
	int ret;

	ret = validate_super(fs_info, sb, -1);
	if (ret < 0)
		goto out;
	if (!btrfs_supported_super_csum(btrfs_super_csum_type(sb))) {
		ret = -EUCLEAN;
		btrfs_err(fs_info, "invalid csum type, has %u want %u",
			  btrfs_super_csum_type(sb), BTRFS_CSUM_TYPE_CRC32);
		goto out;
	}
	if (btrfs_super_incompat_flags(sb) & ~BTRFS_FEATURE_INCOMPAT_SUPP) {
		ret = -EUCLEAN;
		btrfs_err(fs_info,
		"invalid incompat flags, has 0x%llx valid mask 0x%llx",
			  btrfs_super_incompat_flags(sb),
			  (unsigned long long)BTRFS_FEATURE_INCOMPAT_SUPP);
		goto out;
	}
out:
	if (ret < 0)
		btrfs_err(fs_info,
		"super block corruption detected before writing it to disk");
	return ret;
}

static int __cold init_tree_roots(struct btrfs_fs_info *fs_info)
{
	int backup_index = find_newest_super_backup(fs_info);
	struct btrfs_super_block *sb = fs_info->super_copy;
	struct btrfs_root *tree_root = fs_info->tree_root;
	bool handle_error = false;
	int ret = 0;
	int i;

	for (i = 0; i < BTRFS_NUM_BACKUP_ROOTS; i++) {
		u64 generation;
		int level;

		if (handle_error) {
			if (!IS_ERR(tree_root->node))
				free_extent_buffer(tree_root->node);
			tree_root->node = NULL;

			if (!btrfs_test_opt(fs_info, USEBACKUPROOT))
				break;

			free_root_pointers(fs_info, 0);

			/*
			 * Don't use the log in recovery mode, it won't be
			 * valid
			 */
			btrfs_set_super_log_root(sb, 0);

			/* We can't trust the free space cache either */
			btrfs_set_opt(fs_info->mount_opt, CLEAR_CACHE);

			ret = read_backup_root(fs_info, i);
			backup_index = ret;
			if (ret < 0)
				return ret;
		}
		generation = btrfs_super_generation(sb);
		level = btrfs_super_root_level(sb);
		tree_root->node = read_tree_block(fs_info, btrfs_super_root(sb),
						  generation, level, NULL);
		if (IS_ERR(tree_root->node)) {
			handle_error = true;
			ret = PTR_ERR(tree_root->node);
			tree_root->node = NULL;
			btrfs_warn(fs_info, "couldn't read tree root");
			continue;
		} else if (!extent_buffer_uptodate(tree_root->node)) {
			handle_error = true;
			ret = -EIO;
			btrfs_warn(fs_info, "error while reading tree root");
			continue;
		}

		btrfs_set_root_node(&tree_root->root_item, tree_root->node);
		tree_root->commit_root = btrfs_root_node(tree_root);
		btrfs_set_root_refs(&tree_root->root_item, 1);

		/*
		 * No need to hold btrfs_root::objectid_mutex since the fs
		 * hasn't been fully initialised and we are the only user
		 */
		ret = btrfs_find_highest_objectid(tree_root,
						  &tree_root->highest_objectid);
		if (ret < 0) {
			handle_error = true;
			continue;
		}

		ASSERT(tree_root->highest_objectid <= BTRFS_LAST_FREE_OBJECTID);

		ret = btrfs_read_roots(fs_info);
		if (ret < 0) {
			handle_error = true;
			continue;
		}

		/* All successful */
		fs_info->generation = generation;
		fs_info->last_trans_committed = generation;

		/* Always begin writing backup roots after the one being used */
		if (backup_index < 0) {
			fs_info->backup_root_index = 0;
		} else {
			fs_info->backup_root_index = backup_index + 1;
			fs_info->backup_root_index %= BTRFS_NUM_BACKUP_ROOTS;
		}
		break;
	}

	return ret;
}
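init_tree_roots() always starts writing backup roots at the slot after the one it restored from, wrapping modulo BTRFS_NUM_BACKUP_ROOTS (4 in upstream btrfs), and falls back to slot 0 when no backup was used. The rotation in isolation (helper name invented for illustration):

```c
#define NUM_BACKUP_ROOTS 4	/* mirrors BTRFS_NUM_BACKUP_ROOTS */

/* Next backup slot to write: slot 0 if no backup root was used
 * (backup_index < 0), otherwise the slot after the one we restored
 * from, wrapping around the fixed-size backup array. */
static int next_backup_slot(int backup_index)
{
	if (backup_index < 0)
		return 0;
	return (backup_index + 1) % NUM_BACKUP_ROOTS;
}
```

So restoring from the last slot (index 3) rotates back to slot 0, which preserves the most recently used backup root for as long as possible.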

#ifdef MY_ABC_HERE
#define SYNO_BTRFS_COMMIT_DEBUG_TIME	(90 * MSEC_PER_SEC)	/* 90 sec */
#endif /* MY_ABC_HERE */

void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
{
	INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
	INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
	INIT_LIST_HEAD(&fs_info->trans_list);
	INIT_LIST_HEAD(&fs_info->dead_roots);
	INIT_LIST_HEAD(&fs_info->delayed_iputs);
	INIT_LIST_HEAD(&fs_info->delalloc_roots);
	INIT_LIST_HEAD(&fs_info->caching_block_groups);
	spin_lock_init(&fs_info->delalloc_root_lock);
	spin_lock_init(&fs_info->trans_lock);
	spin_lock_init(&fs_info->fs_roots_radix_lock);
	spin_lock_init(&fs_info->delayed_iput_lock);
	spin_lock_init(&fs_info->defrag_inodes_lock);
	spin_lock_init(&fs_info->super_lock);
	spin_lock_init(&fs_info->buffer_lock);
	spin_lock_init(&fs_info->unused_bgs_lock);
	rwlock_init(&fs_info->tree_mod_log_lock);
	mutex_init(&fs_info->unused_bg_unpin_mutex);
|
	mutex_init(&fs_info->delete_unused_bgs_mutex);
	mutex_init(&fs_info->reloc_mutex);
	mutex_init(&fs_info->delalloc_root_mutex);
	seqlock_init(&fs_info->profiles_lock);

	INIT_LIST_HEAD(&fs_info->dirty_cowonly_roots);
	INIT_LIST_HEAD(&fs_info->space_info);
	INIT_LIST_HEAD(&fs_info->tree_mod_seq_list);
	INIT_LIST_HEAD(&fs_info->unused_bgs);
#ifdef CONFIG_BTRFS_DEBUG
	INIT_LIST_HEAD(&fs_info->allocated_roots);
	INIT_LIST_HEAD(&fs_info->allocated_ebs);
	spin_lock_init(&fs_info->eb_leak_lock);
#endif
	extent_map_tree_init(&fs_info->mapping_tree);
#ifdef MY_ABC_HERE
	atomic_set(&fs_info->nr_extent_maps, 0);
	INIT_LIST_HEAD(&fs_info->extent_map_inode_list);
	spin_lock_init(&fs_info->extent_map_inode_list_lock);
#endif /* MY_ABC_HERE */
	btrfs_init_block_rsv(&fs_info->global_block_rsv,
			     BTRFS_BLOCK_RSV_GLOBAL);
	btrfs_init_block_rsv(&fs_info->trans_block_rsv, BTRFS_BLOCK_RSV_TRANS);
	btrfs_init_block_rsv(&fs_info->chunk_block_rsv, BTRFS_BLOCK_RSV_CHUNK);
	btrfs_init_block_rsv(&fs_info->empty_block_rsv, BTRFS_BLOCK_RSV_EMPTY);
	btrfs_init_block_rsv(&fs_info->delayed_block_rsv,
			     BTRFS_BLOCK_RSV_DELOPS);
	btrfs_init_block_rsv(&fs_info->delayed_refs_rsv,
			     BTRFS_BLOCK_RSV_DELREFS);

	atomic_set(&fs_info->async_delalloc_pages, 0);
	atomic_set(&fs_info->defrag_running, 0);
	atomic_set(&fs_info->reada_works_cnt, 0);
	atomic_set(&fs_info->nr_delayed_iputs, 0);
	atomic64_set(&fs_info->tree_mod_seq, 0);
	fs_info->max_inline = BTRFS_DEFAULT_MAX_INLINE;
#ifdef MY_ABC_HERE
	fs_info->metadata_ratio = 50;
#else /* MY_ABC_HERE */
	fs_info->metadata_ratio = 0;
#endif /* MY_ABC_HERE */
	fs_info->defrag_inodes = RB_ROOT;
#ifdef MY_ABC_HERE
	INIT_LIST_HEAD(&fs_info->defrag_inodes_list[0]);
	INIT_LIST_HEAD(&fs_info->defrag_inodes_list[1]);
	fs_info->reclaim_space_entry_count = 0;
#endif /* MY_ABC_HERE */
	atomic64_set(&fs_info->free_chunk_space, 0);
	fs_info->tree_mod_log = RB_ROOT;
	fs_info->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL;
	fs_info->avg_delayed_ref_runtime = NSEC_PER_SEC >> 6; /* div by 64 */
	/* readahead state */
	INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
	spin_lock_init(&fs_info->reada_lock);
	btrfs_init_ref_verify(fs_info);

#ifdef MY_ABC_HERE
	/* syno feature tree */
	btrfs_syno_set_feat_tree_disable(fs_info);
	fs_info->syno_feat_tree_status.version = BTRFS_SYNO_FEAT_TREE_VERSION;
	fs_info->syno_feat_root = NULL;
	mutex_init(&fs_info->syno_feat_tree_ioctl_lock);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	mutex_init(&fs_info->free_space_analyze_ioctl_lock);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	atomic_set(&fs_info->syno_allocator.syno_allocator_refs, 0);
	init_waitqueue_head(&fs_info->syno_allocator.syno_allocator_wait);
	atomic_set(&fs_info->syno_allocator.legacy_allocator_refs, 0);
	init_waitqueue_head(&fs_info->syno_allocator.legacy_allocator_wait);
	btrfs_init_syno_allocator_bg_prefetch_work(&fs_info->syno_allocator.bg_prefetch_work);
	fs_info->syno_allocator.bg_prefetch_running = true;
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	atomic64_set(&fs_info->syno_meta_statistics.eb_disk_read, 0);
	atomic64_set(&fs_info->syno_meta_statistics.search_key, 0);
	atomic64_set(&fs_info->syno_meta_statistics.search_forward, 0);
	atomic64_set(&fs_info->syno_meta_statistics.next_leaf, 0);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	fs_info->dedupe_info.inode = NULL;
	fs_info->dedupe_info.hash_table = NULL;
	fs_info->dedupe_info.cuckoo_idx = NULL;
	fs_info->dedupe_info.table_size = 0;
	fs_info->dedupe_info.cuckoo_size = 0;
	fs_info->dedupe_info.seed = 1;
	fs_info->dedupe_info.sample_rate = SZ_64K;
	atomic_set(&fs_info->dedupe_info.valid, 0);
	atomic_set(&fs_info->dedupe_info.modify, 0);
	atomic_set(&fs_info->dedupe_info.ref, 0);
#endif /* MY_ABC_HERE */

	fs_info->thread_pool_size = min_t(unsigned long,
					  num_online_cpus() + 2, 8);

	INIT_LIST_HEAD(&fs_info->ordered_roots);
	spin_lock_init(&fs_info->ordered_root_lock);

	btrfs_init_scrub(fs_info);
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
	fs_info->check_integrity_print_mask = 0;
#endif /* CONFIG_BTRFS_FS_CHECK_INTEGRITY */
#ifdef MY_ABC_HERE
	fs_info->snapshot_cleaner = 1;
#endif /* MY_ABC_HERE */
	btrfs_init_balance(fs_info);
	btrfs_init_async_reclaim_work(fs_info);

#ifdef MY_ABC_HERE
	atomic_set(&fs_info->syno_writeback_thread_count, 0);
	fs_info->syno_writeback_thread_max = 0;
	spin_lock_init(&fs_info->syno_multiple_writeback_lock);
	INIT_LIST_HEAD(&fs_info->syno_dirty_lru_inodes);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	btrfs_init_async_metadata_cache_work(fs_info);
	atomic_set(&fs_info->syno_metadata_block_group_update_count, 0);
	fs_info->metadata_cache_enable = false;
#endif /* MY_ABC_HERE */

#if defined(MY_ABC_HERE) || defined(MY_ABC_HERE)
	spin_lock_init(&fs_info->mount_path_lock);
#endif /* MY_ABC_HERE || MY_ABC_HERE */

	spin_lock_init(&fs_info->block_group_cache_lock);
	fs_info->block_group_cache_tree = RB_ROOT;
	fs_info->first_logical_byte = (u64)-1;

	extent_io_tree_init(fs_info, &fs_info->excluded_extents,
			    IO_TREE_FS_EXCLUDED_EXTENTS, NULL);
	set_bit(BTRFS_FS_BARRIER, &fs_info->flags);

	mutex_init(&fs_info->ordered_operations_mutex);
	mutex_init(&fs_info->tree_log_mutex);
	mutex_init(&fs_info->chunk_mutex);
	mutex_init(&fs_info->transaction_kthread_mutex);
	mutex_init(&fs_info->cleaner_mutex);
	mutex_init(&fs_info->ro_block_group_mutex);
	init_rwsem(&fs_info->commit_root_sem);
	init_rwsem(&fs_info->cleanup_work_sem);
	init_rwsem(&fs_info->subvol_sem);
	sema_init(&fs_info->uuid_tree_rescan_sem, 1);

	btrfs_init_dev_replace_locks(fs_info);

#ifdef MY_ABC_HERE
	mutex_init(&fs_info->log_tree_rsv_alloc);
#endif /* MY_ABC_HERE */

	btrfs_init_qgroup(fs_info);
#ifdef MY_ABC_HERE
	btrfs_init_usrquota(fs_info);
#endif /* MY_ABC_HERE */
	btrfs_discard_init(fs_info);

	btrfs_init_free_cluster(&fs_info->meta_alloc_cluster);
	btrfs_init_free_cluster(&fs_info->data_alloc_cluster);

	init_waitqueue_head(&fs_info->transaction_throttle);
	init_waitqueue_head(&fs_info->transaction_wait);
	init_waitqueue_head(&fs_info->transaction_blocked_wait);
	init_waitqueue_head(&fs_info->async_submit_wait);
	init_waitqueue_head(&fs_info->delayed_iputs_wait);

	/* Usable values until the real ones are cached from the superblock */
	fs_info->nodesize = 4096;
	fs_info->sectorsize = 4096;
	fs_info->stripesize = 4096;

	spin_lock_init(&fs_info->swapfile_pins_lock);
	fs_info->swapfile_pins = RB_ROOT;

|
|
|
fs_info->send_in_progress = 0;

#ifdef MY_ABC_HERE
	atomic64_set(&fs_info->block_group_cnt, 0);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	mutex_init(&fs_info->block_group_hint_tree_mutex);
	atomic_set(&fs_info->reada_block_group_threads, 0);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	atomic64_set(&fs_info->fsync_cnt, 0);
	atomic64_set(&fs_info->fsync_full_commit_cnt, 0);
	fs_info->commit_time_debug_ms = SYNO_BTRFS_COMMIT_DEBUG_TIME;
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	spin_lock_init(&fs_info->syno_usage_lock);
	btrfs_init_syno_usage_rescan_work(&fs_info->syno_usage_rescan_work);
	btrfs_init_syno_usage_fast_rescan_work(&fs_info->syno_usage_fast_rescan_work);
	btrfs_init_syno_usage_full_rescan_work(&fs_info->syno_usage_full_rescan_work);
	INIT_LIST_HEAD(&fs_info->syno_usage_pending_fast_rescan_roots);
	INIT_LIST_HEAD(&fs_info->syno_usage_pending_full_rescan_roots);
	spin_lock_init(&fs_info->syno_usage_fast_rescan_lock);
	spin_lock_init(&fs_info->syno_usage_full_rescan_lock);
	mutex_init(&fs_info->syno_usage_ioctl_lock);
	fs_info->syno_usage_fast_rescan_pid = 0;
	fs_info->syno_usage_full_rescan_pid = 0;
	atomic_set(&fs_info->syno_usage_pending_fast_rescan_count, 0);
	atomic_set(&fs_info->syno_usage_pending_full_rescan_count, 0);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	atomic_set(&fs_info->syno_async_submit_nr, 0);
	fs_info->syno_async_submit_throttle = 128;
	init_waitqueue_head(&fs_info->syno_async_submit_queue_wait);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	atomic64_set(&fs_info->syno_ordered_extent_nr, 0);
	fs_info->syno_max_ordered_queue_size = 65536;
	init_waitqueue_head(&fs_info->syno_ordered_queue_wait);
	atomic64_set(&fs_info->syno_ordered_extent_processed_nr, 0);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	spin_lock_init(&fs_info->syno_rbd.lock);
	INIT_LIST_HEAD(&fs_info->syno_rbd.pinned_meta_files);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	atomic_set(&fs_info->syno_async_delayed_ref_count, 0);
	spin_lock_init(&fs_info->syno_delayed_ref_throttle_lock);
	INIT_LIST_HEAD(&fs_info->syno_delayed_ref_throttle_tickets);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	btrfs_init_block_rsv(&fs_info->cleaner_block_rsv, BTRFS_BLOCK_RSV_TEMP);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	spin_lock_init(&fs_info->syno_orphan_cleanup.lock);
	INIT_LIST_HEAD(&fs_info->syno_orphan_cleanup.roots);
	fs_info->syno_orphan_cleanup.enable = true;
#endif /* MY_ABC_HERE */
}

static int init_mount_fs_info(struct btrfs_fs_info *fs_info, struct super_block *sb)
{
	int ret;

	fs_info->sb = sb;
	sb->s_blocksize = BTRFS_BDEV_BLOCKSIZE;
	sb->s_blocksize_bits = blksize_bits(BTRFS_BDEV_BLOCKSIZE);

	ret = percpu_counter_init(&fs_info->dio_bytes, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->dirty_metadata_bytes, 0, GFP_KERNEL);
	if (ret)
		return ret;

	fs_info->dirty_metadata_batch = PAGE_SIZE *
					(1 + ilog2(nr_cpu_ids));

	ret = percpu_counter_init(&fs_info->delalloc_bytes, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->dev_replace.bio_counter, 0,
				  GFP_KERNEL);
	if (ret)
		return ret;

#ifdef MY_ABC_HERE
	ret = percpu_counter_init(&fs_info->eb_hit, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->eb_miss, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->meta_write_pages, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->data_write_pages, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->delayed_meta_ref, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->delayed_data_ref, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->write_flush, 0, GFP_KERNEL);
	if (ret)
		return ret;

	ret = percpu_counter_init(&fs_info->write_fua, 0, GFP_KERNEL);
	if (ret)
		return ret;
#endif /* MY_ABC_HERE */

	fs_info->delayed_root = kmalloc(sizeof(struct btrfs_delayed_root),
					GFP_KERNEL);
	if (!fs_info->delayed_root)
		return -ENOMEM;
	btrfs_init_delayed_root(fs_info->delayed_root);

	return btrfs_alloc_stripe_hash_table(fs_info);
}

static int btrfs_uuid_rescan_kthread(void *data)
{
	struct btrfs_fs_info *fs_info = (struct btrfs_fs_info *)data;
	int ret;

	/*
	 * 1st step is to iterate through the existing UUID tree and
	 * to delete all entries that contain outdated data.
	 * 2nd step is to add all missing entries to the UUID tree.
	 */
	ret = btrfs_uuid_tree_iterate(fs_info);
	if (ret < 0) {
		if (ret != -EINTR)
			btrfs_warn(fs_info, "iterating uuid_tree failed %d",
				   ret);
		up(&fs_info->uuid_tree_rescan_sem);
		return ret;
	}
	return btrfs_uuid_scan_kthread(data);
}

static int btrfs_check_uuid_tree(struct btrfs_fs_info *fs_info)
{
	struct task_struct *task;

	down(&fs_info->uuid_tree_rescan_sem);
	task = kthread_run(btrfs_uuid_rescan_kthread, fs_info, "btrfs-uuid");
	if (IS_ERR(task)) {
		/* fs_info->update_uuid_tree_gen remains 0 in all error cases */
		btrfs_warn(fs_info, "failed to start uuid_rescan task");
		up(&fs_info->uuid_tree_rescan_sem);
		return PTR_ERR(task);
	}

	return 0;
}

/*
 * Some options only have meaning at mount time and shouldn't persist across
 * remounts, or be displayed. Clear these at the end of mount and remount
 * code paths.
 */
void btrfs_clear_oneshot_options(struct btrfs_fs_info *fs_info)
{
	btrfs_clear_opt(fs_info->mount_opt, USEBACKUPROOT);
	btrfs_clear_opt(fs_info->mount_opt, CLEAR_CACHE);
}

/*
 * Mounting logic specific to read-write file systems. Shared by open_ctree
 * and btrfs_remount when remounting from read-only to read-write.
 */
#ifdef MY_ABC_HERE
int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info, struct syno_btrfs_mount_stats *stats)
#else
int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info)
#endif /* MY_ABC_HERE */
{
	int ret;
	const bool cache_opt = btrfs_test_opt(fs_info, SPACE_CACHE);
	bool clear_free_space_tree = false;
#ifdef MY_ABC_HERE
	ktime_t temp_t;
#endif /* MY_ABC_HERE */

	if (btrfs_test_opt(fs_info, CLEAR_CACHE) &&
	    btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
		clear_free_space_tree = true;
	} else if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE) &&
		   !btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE_VALID)) {
		btrfs_warn(fs_info, "free space tree is invalid");
		clear_free_space_tree = true;
	}

	if (clear_free_space_tree) {
		btrfs_info(fs_info, "clearing free space tree");
		ret = btrfs_clear_free_space_tree(fs_info);
		if (ret) {
			btrfs_warn(fs_info,
				   "failed to clear free space tree: %d", ret);
			goto out;
		}
	}

#ifdef MY_ABC_HERE
#else /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	temp_t = ktime_get();
#endif /* MY_ABC_HERE */
	ret = btrfs_cleanup_fs_roots(fs_info);
#ifdef MY_ABC_HERE
	if (stats != NULL)
		stats->cleanup_fs_roots_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
	if (ret)
		goto out;

	down_read(&fs_info->cleanup_work_sem);
#ifdef MY_ABC_HERE
	temp_t = ktime_get();
#endif /* MY_ABC_HERE */
	if ((ret = btrfs_orphan_cleanup(fs_info->fs_root)) ||
	    (ret = btrfs_orphan_cleanup(fs_info->tree_root))) {
#ifdef MY_ABC_HERE
		if (stats != NULL)
			stats->orphan_cleanup_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
		up_read(&fs_info->cleanup_work_sem);
		goto out;
	}
#ifdef MY_ABC_HERE
	if (stats != NULL)
		stats->orphan_cleanup_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
	up_read(&fs_info->cleanup_work_sem);
#endif /* MY_ABC_HERE */

	mutex_lock(&fs_info->cleaner_mutex);
	ret = btrfs_recover_relocation(fs_info->tree_root);
	mutex_unlock(&fs_info->cleaner_mutex);
	if (ret < 0) {
		btrfs_warn(fs_info, "failed to recover relocation: %d", ret);
		goto out;
	}

	if (btrfs_test_opt(fs_info, FREE_SPACE_TREE) &&
	    !btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE)) {
		btrfs_info(fs_info, "creating free space tree");
#ifdef MY_ABC_HERE
		temp_t = ktime_get();
#endif /* MY_ABC_HERE */
		ret = btrfs_create_free_space_tree(fs_info);
#ifdef MY_ABC_HERE
		if (stats != NULL)
			stats->create_free_space_tree_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
		if (ret) {
			btrfs_warn(fs_info,
				   "failed to create free space tree: %d", ret);
			goto out;
		}
	}

	if (cache_opt != btrfs_free_space_cache_v1_active(fs_info)) {
		ret = btrfs_set_free_space_cache_v1_active(fs_info, cache_opt);
		if (ret)
			goto out;
	}

	ret = btrfs_resume_balance_async(fs_info);
	if (ret)
		goto out;

	ret = btrfs_resume_dev_replace_async(fs_info);
	if (ret) {
		btrfs_warn(fs_info, "failed to resume dev_replace");
		goto out;
	}

	btrfs_qgroup_rescan_resume(fs_info);
#ifdef MY_ABC_HERE
	btrfs_syno_usage_rescan_resume(fs_info);
#endif /* MY_ABC_HERE */

	if (!fs_info->uuid_root) {
		btrfs_info(fs_info, "creating UUID tree");
#ifdef MY_ABC_HERE
		temp_t = ktime_get();
#endif /* MY_ABC_HERE */
		ret = btrfs_create_uuid_tree(fs_info);
#ifdef MY_ABC_HERE
		if (stats != NULL)
			stats->create_uuid_tree_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
		if (ret) {
			btrfs_warn(fs_info,
				   "failed to create the UUID tree %d", ret);
			goto out;
		}
	}

out:
	return ret;
}

#ifdef MY_ABC_HERE
static void free_all_syno_rbd_meta_file_inodes(struct btrfs_fs_info *fs_info)
{
	struct btrfs_inode *inode;

	spin_lock(&fs_info->syno_rbd.lock);
	while (!list_empty(&fs_info->syno_rbd.pinned_meta_files)) {
		inode = list_first_entry(&fs_info->syno_rbd.pinned_meta_files,
					 struct btrfs_inode, syno_rbd_meta_file);
		spin_unlock(&fs_info->syno_rbd.lock);

		btrfs_unpin_rbd_meta_file(&inode->vfs_inode);

		cond_resched();
		spin_lock(&fs_info->syno_rbd.lock);
	}
	spin_unlock(&fs_info->syno_rbd.lock);
}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
static int print_mount_time_msec = 90000;
module_param(print_mount_time_msec, int, S_IRUGO|S_IWUSR);

static void print_mount_stats(struct btrfs_fs_info *fs_info,
			      struct syno_btrfs_mount_stats *stats)
{
	s64 total = ktime_to_ns(ktime_sub(ktime_get(), stats->start_time));
	s64 others = total -
		     stats->read_chunk_tree_time -
		     stats->read_block_groups_time -
		     stats->read_qgroup_config_time -
		     stats->read_usrquota_config_time -
		     stats->read_syno_usage_config_time -
		     stats->activate_all_rbd_meta_files_time -
		     stats->replay_log_time -
		     stats->cleanup_fs_roots_time -
		     stats->create_block_group_cache_tree_time -
		     stats->create_free_space_tree_time -
		     stats->orphan_cleanup_time -
		     stats->create_uuid_tree_time;

	if (print_mount_time_msec > div_s64(total, NSEC_PER_MSEC))
		return;

	btrfs_warn(fs_info, "btrfs mount open_ctree: "
		   "total time: %lld, "
		   "read chunk tree: %lld, "
		   "read block groups: %lld, "
		   "read qgroup config: %lld, "
		   "read usrquota config: %lld, "
		   "read syno usage config: %lld, "
		   "activate all rbd meta files: %lld, "
		   "replay log: %lld, "
		   "cleanup fs roots: %lld, "
		   "create block group cache tree: %lld, "
		   "create free space tree: %lld, "
		   "orphan cleanup: %lld, "
		   "create uuid tree: %lld, "
		   "others: %lld",
		   div_s64(total, NSEC_PER_USEC),
		   div_s64(stats->read_chunk_tree_time, NSEC_PER_USEC),
		   div_s64(stats->read_block_groups_time, NSEC_PER_USEC),
		   div_s64(stats->read_qgroup_config_time, NSEC_PER_USEC),
		   div_s64(stats->read_usrquota_config_time, NSEC_PER_USEC),
		   div_s64(stats->read_syno_usage_config_time, NSEC_PER_USEC),
		   div_s64(stats->activate_all_rbd_meta_files_time, NSEC_PER_USEC),
		   div_s64(stats->replay_log_time, NSEC_PER_USEC),
		   div_s64(stats->cleanup_fs_roots_time, NSEC_PER_USEC),
		   div_s64(stats->create_block_group_cache_tree_time, NSEC_PER_USEC),
		   div_s64(stats->create_free_space_tree_time, NSEC_PER_USEC),
		   div_s64(stats->orphan_cleanup_time, NSEC_PER_USEC),
		   div_s64(stats->create_uuid_tree_time, NSEC_PER_USEC),
		   div_s64(others, NSEC_PER_USEC));
}
#endif /* MY_ABC_HERE */
|
|
|
|
|
2020-01-24 21:32:58 +07:00
|
|
|
int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices,
|
|
|
|
char *options)
|
|
|
|
{
|
|
|
|
u32 sectorsize;
|
|
|
|
u32 nodesize;
|
|
|
|
u32 stripesize;
|
|
|
|
u64 generation;
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
u64 syno_generation;
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
u64 syno_capability_generation;
|
|
|
|
u64 syno_capability_flags;
|
|
|
|
#endif /* MY_ABC_HERE */
|
2020-01-24 21:32:58 +07:00
|
|
|
u64 features;
|
|
|
|
u16 csum_type;
|
|
|
|
struct btrfs_super_block *disk_super;
|
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(sb);
|
|
|
|
struct btrfs_root *tree_root;
|
|
|
|
struct btrfs_root *chunk_root;
|
|
|
|
int ret;
|
|
|
|
int err = -EINVAL;
|
|
|
|
int level;
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
struct syno_btrfs_mount_stats stats;
|
|
|
|
ktime_t temp_t;
|
|
|
|
memset(&stats, 0, sizeof(stats));
|
|
|
|
stats.start_time = ktime_get();
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
BUILD_BUG_ON(sizeof(struct btrfs_super_block) != 4096);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
ret = kfifo_alloc(&fs_info->cksumfailed_files, PAGE_SIZE, GFP_NOFS);
|
|
|
|
if (ret) {
|
|
|
|
printk(KERN_WARNING "BTRFS: failed to alloc cksumfailed files record\n");
|
|
|
|
err = ret;
|
|
|
|
goto fail_kfifo;
|
|
|
|
}
|
|
|
|
spin_lock_init(&fs_info->cksumfailed_files_write_lock);
|
|
|
|
fs_info->correction_suppress_log = DATA_CORRECTION_RATE_LIMIT;
|
|
|
|
fs_info->correction_disable = false;
|
|
|
|
|
|
|
|
fs_info->correction_record = RB_ROOT;
|
|
|
|
spin_lock_init(&fs_info->correction_record_lock);
|
|
|
|
#endif /* MY_ABC_HERE */
|
2020-01-24 21:32:58 +07:00
|
|
|
|
2020-01-24 21:32:59 +07:00
|
|
|
ret = init_mount_fs_info(fs_info, sb);
|
2013-01-30 06:40:14 +07:00
|
|
|
if (ret) {
|
2013-03-01 22:03:00 +07:00
|
|
|
err = ret;
|
2020-01-24 21:32:58 +07:00
|
|
|
goto fail;
|
2013-01-30 06:40:14 +07:00
|
|
|
}
|
|
|
|
|
2020-01-24 21:32:58 +07:00
|
|
|
/* These need to be init'ed before we start creating inodes and such. */
|
|
|
|
tree_root = btrfs_alloc_root(fs_info, BTRFS_ROOT_TREE_OBJECTID,
|
|
|
|
GFP_KERNEL);
|
|
|
|
fs_info->tree_root = tree_root;
|
|
|
|
chunk_root = btrfs_alloc_root(fs_info, BTRFS_CHUNK_TREE_OBJECTID,
|
|
|
|
GFP_KERNEL);
|
|
|
|
fs_info->chunk_root = chunk_root;
|
|
|
|
if (!tree_root || !chunk_root) {
|
|
|
|
err = -ENOMEM;
|
2020-02-15 04:11:47 +07:00
|
|
|
goto fail;
|
2020-01-24 21:32:58 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
fs_info->btree_inode = new_inode(sb);
|
|
|
|
if (!fs_info->btree_inode) {
|
|
|
|
err = -ENOMEM;
|
2020-02-15 04:11:47 +07:00
|
|
|
goto fail;
|
2020-01-24 21:32:58 +07:00
|
|
|
}
|
|
|
|
mapping_set_gfp_mask(fs_info->btree_inode->i_mapping, GFP_NOFS);
|
|
|
|
btrfs_init_btree_inode(fs_info);
|
|
|
|
|
2012-03-28 05:56:56 +07:00
|
|
|
invalidate_bdev(fs_devices->latest_bdev);
|
2013-03-06 21:57:46 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Read super block and check the signature bytes only
|
|
|
|
*/
|
2020-02-13 22:24:32 +07:00
|
|
|
disk_super = btrfs_read_dev_super(fs_devices->latest_bdev);
|
|
|
|
if (IS_ERR(disk_super)) {
|
|
|
|
err = PTR_ERR(disk_super);
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compare with Ext3/4, the performance of file creation and deletion on btrfs
is very poor. the reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can do some delayed b+ tree insertion or deletion, we can improve the
performance, so we made this patch which implemented delayed directory name
index insertion/deletion and delayed inode update.
Implementation:
- introduce a delayed root object into the filesystem, that use two lists to
manage the delayed nodes which are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. And the
other is used to manage the delayed nodes which is waiting to be dealt with
by the work thread.
- Every delayed node has two rb-tree, one is used to manage the directory name
index which is going to be inserted into b+ tree, and the other is used to
manage the directory name index which is going to be deleted from b+ tree.
- introduce a worker to deal with the delayed operation. This worker is used
to deal with the works of the delayed directory name index items insertion
and deletion and the delayed inode update.
When the delayed items is beyond the lower limit, we create works for some
delayed nodes and insert them into the work queue of the worker, and then
go back.
When the delayed items is beyond the upper bound, we create works for all
the delayed nodes that haven't been dealt with, and insert them into the work
queue of the worker, and then wait for that the untreated items is below some
threshold value.
- When we want to insert a directory name index into b+ tree, we just add the
information into the delayed inserting rb-tree.
And then we check the number of the delayed items and do delayed items
balance. (The balance policy is above.)
- When we want to delete a directory name index from the b+ tree, we search it
in the inserting rb-tree at first. If we look it up, just drop it. If not,
add the key of it into the delayed deleting rb-tree.
Similar to the delayed inserting rb-tree, we also check the number of the
delayed items and do delayed items balance.
(The same to inserting manipulation)
- When we want to update the metadata of some inode, we cached the data of the
inode into the delayed node. the worker will flush it into the b+ tree after
dealing with the delayed insertion and deletion.
- We will move the delayed node to the tail of the list after we access the
delayed node, By this way, we can cache more delayed items and merge more
inode updates.
- If we want to commit transaction, we will deal with all the delayed node.
- the delayed node will be freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test by the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks for Kitayama-san's help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 17:12:22 +07:00
|
|
|
goto fail_alloc;
|
2011-01-08 17:09:13 +07:00
|
|
|
}
|
2007-06-12 17:35:45 +07:00
|
|
|
|
2019-06-03 21:58:54 +07:00
|
|
|
/*
|
2020-08-05 09:48:34 +07:00
|
|
|
* Verify the type first, if that or the checksum value are
|
2019-06-03 21:58:54 +07:00
|
|
|
* corrupted, we'll find out
|
|
|
|
*/
|
2020-02-13 22:24:32 +07:00
|
|
|
csum_type = btrfs_super_csum_type(disk_super);
|
2019-06-03 21:58:55 +07:00
|
|
|
if (!btrfs_supported_super_csum(csum_type)) {
|
2019-06-03 21:58:54 +07:00
|
|
|
btrfs_err(fs_info, "unsupported checksum algorithm: %u",
|
2019-06-03 21:58:55 +07:00
|
|
|
csum_type);
|
2019-06-03 21:58:54 +07:00
|
|
|
err = -EINVAL;
|
2020-02-13 22:24:32 +07:00
|
|
|
btrfs_release_disk_super(disk_super);
|
2019-06-03 21:58:54 +07:00
|
|
|
goto fail_alloc;
|
|
|
|
}
|
|
|
|
|
2019-06-03 21:58:56 +07:00
|
|
|
ret = btrfs_init_csum_hash(fs_info, csum_type);
|
|
|
|
if (ret) {
|
|
|
|
err = ret;
|
2020-02-13 22:24:32 +07:00
|
|
|
btrfs_release_disk_super(disk_super);
|
2019-06-03 21:58:56 +07:00
|
|
|
goto fail_alloc;
|
|
|
|
}
|
|
|
|
|
2013-03-06 21:57:46 +07:00
|
|
|
/*
|
|
|
|
* We want to check superblock checksum, the type is stored inside.
|
|
|
|
* Pass the whole disk block of size BTRFS_SUPER_INFO_SIZE (4k).
|
|
|
|
*/
|
2020-02-13 22:24:32 +07:00
|
|
|
if (btrfs_check_super_csum(fs_info, (u8 *)disk_super)) {
|
2016-05-09 16:32:39 +07:00
|
|
|
btrfs_err(fs_info, "superblock checksum mismatch");
|
2013-03-06 21:57:46 +07:00
|
|
|
err = -EINVAL;
|
2020-02-13 22:24:32 +07:00
|
|
|
btrfs_release_disk_super(disk_super);
|
2020-01-24 21:32:57 +07:00
|
|
|
goto fail_alloc;
|
2013-03-06 21:57:46 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* super_copy is zeroed at allocation time and we never touch the
|
|
|
|
* following bytes up to INFO_SIZE, the checksum is calculated from
|
|
|
|
* the whole block of INFO_SIZE
|
|
|
|
*/
|
2020-02-13 22:24:32 +07:00
|
|
|
memcpy(fs_info->super_copy, disk_super, sizeof(*fs_info->super_copy));
|
|
|
|
btrfs_release_disk_super(disk_super);
|
2007-10-16 03:14:19 +07:00
|
|
|
|
2018-10-30 21:43:25 +07:00
|
|
|
disk_super = fs_info->super_copy;
|
|
|
|
|
2008-03-25 02:01:56 +07:00
|
|
|
|
2018-10-30 21:43:25 +07:00
|
|
|
features = btrfs_super_flags(disk_super);
|
|
|
|
if (features & BTRFS_SUPER_FLAG_CHANGING_FSID_V2) {
|
|
|
|
features &= ~BTRFS_SUPER_FLAG_CHANGING_FSID_V2;
|
|
|
|
btrfs_set_super_flags(disk_super, features);
|
|
|
|
btrfs_info(fs_info,
|
|
|
|
"found metadata UUID change in progress flag, clearing");
|
|
|
|
}
|
|
|
|
|
|
|
|
memcpy(fs_info->super_for_commit, fs_info->super_copy,
|
|
|
|
sizeof(*fs_info->super_for_commit));
|
2018-10-30 21:43:24 +07:00
|
|
|
|
2018-05-11 12:35:26 +07:00
|
|
|
ret = btrfs_validate_mount_super(fs_info);
|
2013-03-06 21:57:46 +07:00
|
|
|
if (ret) {
|
2016-05-09 16:32:39 +07:00
|
|
|
btrfs_err(fs_info, "superblock contains fatal errors");
|
2013-03-06 21:57:46 +07:00
|
|
|
err = -EINVAL;
|
2020-01-24 21:32:57 +07:00
|
|
|
goto fail_alloc;
|
2013-03-06 21:57:46 +07:00
|
|
|
}
|
|
|
|
|
2007-04-09 21:42:37 +07:00
|
|
|
if (!btrfs_super_root(disk_super))
|
2020-01-24 21:32:57 +07:00
|
|
|
goto fail_alloc;
|
2007-04-09 21:42:37 +07:00
|
|
|
|
2011-01-06 18:30:25 +07:00
|
|
|
/* check FS state, whether FS is broken. */
|
2013-01-29 17:14:48 +07:00
|
|
|
if (btrfs_super_flags(disk_super) & BTRFS_SUPER_FLAG_ERROR)
|
|
|
|
set_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
generation = btrfs_super_generation(disk_super);
|
|
|
|
syno_generation = btrfs_super_syno_generation(disk_super);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
syno_capability_generation =
|
|
|
|
btrfs_super_syno_capability_generation(disk_super);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
spin_lock_init(&fs_info->locker_lock);
|
|
|
|
ktime_get_raw_ts64(&fs_info->locker_prev_raw_clock);
|
|
|
|
fs_info->locker_clock.tv_sec = btrfs_super_syno_locker_clock(disk_super);
|
|
|
|
INIT_DELAYED_WORK(&fs_info->locker_update_work, btrfs_syno_locker_update_work_fn);
|
|
|
|
fs_info->locker_update_interval = 24*60*60;
|
|
|
|
btrfs_syno_locker_update_work_kick(fs_info);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
	/*
	 * In the long term, we'll store the compression type in the super
	 * block, and it'll be used for per file compression control.
	 */
#ifdef MY_ABC_HERE
	fs_info->compress_type = BTRFS_COMPRESS_DEFAULT;
#else
	fs_info->compress_type = BTRFS_COMPRESS_ZLIB;
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	/* block group hint tree is enabled by default */
	btrfs_set_opt(fs_info->mount_opt, BLOCK_GROUP_HINT_TREE);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	/* syno allocator is enabled by default */
	btrfs_set_opt(fs_info->mount_opt, SYNO_ALLOCATOR);
#endif /* MY_ABC_HERE */

	ret = btrfs_parse_options(fs_info, options, sb->s_flags);
	if (ret) {
		err = ret;
		goto fail_alloc;
	}

	features = btrfs_super_incompat_flags(disk_super) &
		~BTRFS_FEATURE_INCOMPAT_SUPP;
	if (features) {
		btrfs_err(fs_info,
		    "cannot mount because of unsupported optional features (%llx)",
		    features);
		err = -EINVAL;
		goto fail_alloc;
	}

	features = btrfs_super_incompat_flags(disk_super);
	features |= BTRFS_FEATURE_INCOMPAT_MIXED_BACKREF;
	if (fs_info->compress_type == BTRFS_COMPRESS_LZO)
		features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO;
	else if (fs_info->compress_type == BTRFS_COMPRESS_ZSTD)
		features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_ZSTD;

	if (features & BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA)
		btrfs_info(fs_info, "has skinny extents");

	/*
	 * flag our filesystem as having big metadata blocks if
	 * they are bigger than the page size
	 */
	if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
		if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
			btrfs_info(fs_info,
				   "flagging fs with big metadata feature");
		features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
	}

	nodesize = btrfs_super_nodesize(disk_super);
	sectorsize = btrfs_super_sectorsize(disk_super);
	stripesize = sectorsize;
	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));

	/* Cache block sizes */
	fs_info->nodesize = nodesize;
	fs_info->sectorsize = sectorsize;
	fs_info->stripesize = stripesize;

	/*
	 * mixed block groups end up with duplicate but slightly offset
	 * extent buffers for the same range. It leads to corruptions
	 */
	if ((features & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS) &&
	    (sectorsize != nodesize)) {
		btrfs_err(fs_info,
"unequal nodesize/sectorsize (%u != %u) are not allowed for mixed block groups",
			nodesize, sectorsize);
		goto fail_alloc;
	}

	/*
	 * No need to take the lock here; no other task can update the
	 * flag at this point.
	 */
	btrfs_set_super_incompat_flags(disk_super, features);

	features = btrfs_super_compat_ro_flags(disk_super) &
		~BTRFS_FEATURE_COMPAT_RO_SUPP;
	if (!sb_rdonly(sb) && features) {
		btrfs_err(fs_info,
	"cannot mount read-write because of unsupported optional features (%llx)",
		       features);
		err = -EINVAL;
		goto fail_alloc;
	}

#ifdef MY_ABC_HERE
	if (btrfs_super_compat_ro_flags(disk_super) & BTRFS_FEATURE_COMPAT_RO_LOCKER) {
		if (!sb_rdonly(sb) && !btrfs_syno_locker_feature_is_support()) {
			btrfs_err(fs_info, "cannot mount read-write because of no locker support");
			err = -EINVAL;
			goto fail_alloc;
		}
		if (syno_generation != generation) {
			btrfs_warn(fs_info, "locker was enabled. gen(%llu) != syno_gen(%llu)",
				   generation, syno_generation);
		}
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	features = btrfs_super_compat_flags(disk_super);
	if (features & BTRFS_FEATURE_COMPAT_SYNO_CASELESS) {
		if (syno_generation != generation) {
			btrfs_warn(fs_info,
				"clear syno caseless feature, gen(%llu) != syno_gen(%llu), label:(%s)",
				generation, syno_generation,
				disk_super->label);
			features &= ~BTRFS_FEATURE_COMPAT_SYNO_CASELESS;
			btrfs_set_super_compat_flags(disk_super, features);
		}
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	syno_capability_flags = btrfs_super_syno_capability_flags(disk_super);

	if (syno_capability_generation != generation && syno_capability_flags) {
		btrfs_warn(fs_info,
			"syno_capability_gen(%llu) does not match gen(%llu), "
			"clearing all capability flags (%llu)",
			syno_capability_generation, generation,
			syno_capability_flags);
		syno_capability_flags = 0ULL;
		btrfs_set_super_syno_capability_flags(disk_super,
						      syno_capability_flags);
	}

	if (syno_capability_flags & ~BTRFS_FEATURE_SYNO_CAPABILITY_SUPP) {
		btrfs_warn(fs_info,
			"cannot support these features %llx",
			syno_capability_flags & ~BTRFS_FEATURE_SYNO_CAPABILITY_SUPP);
		syno_capability_flags &= BTRFS_FEATURE_SYNO_CAPABILITY_SUPP;
		btrfs_set_super_syno_capability_flags(disk_super,
						      syno_capability_flags);
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (!fs_devices->rbd_enabled &&
	    (syno_capability_flags & BTRFS_FEATURE_SYNO_CAPABILITY_RBD_META)) {
		syno_capability_flags &= ~BTRFS_FEATURE_SYNO_CAPABILITY_RBD_META;
		btrfs_warn(fs_info,
			"rbd device is disabled, dropping rbd capability");
		btrfs_set_super_syno_capability_flags(disk_super,
						      syno_capability_flags);
	}
#endif /* MY_ABC_HERE */

	ret = btrfs_init_workqueues(fs_info, fs_devices);
	if (ret) {
		err = ret;
		goto fail_sb_buffer;
	}

	sb->s_bdi->ra_pages *= btrfs_super_num_devices(disk_super);
	sb->s_bdi->ra_pages = max(sb->s_bdi->ra_pages, SZ_4M / PAGE_SIZE);

	sb->s_blocksize = sectorsize;
	sb->s_blocksize_bits = blksize_bits(sectorsize);
	memcpy(&sb->s_uuid, fs_info->fs_devices->fsid, BTRFS_FSID_SIZE);

	mutex_lock(&fs_info->chunk_mutex);
	ret = btrfs_read_sys_array(fs_info);
	mutex_unlock(&fs_info->chunk_mutex);
	if (ret) {
		btrfs_err(fs_info, "failed to read the system array: %d", ret);
		goto fail_sb_buffer;
	}

	generation = btrfs_super_chunk_root_generation(disk_super);
	level = btrfs_super_chunk_root_level(disk_super);

	chunk_root->node = read_tree_block(fs_info,
					   btrfs_super_chunk_root(disk_super),
					   generation, level, NULL);
	if (IS_ERR(chunk_root->node) ||
	    !extent_buffer_uptodate(chunk_root->node)) {
		btrfs_err(fs_info, "failed to read chunk root");
		if (!IS_ERR(chunk_root->node))
			free_extent_buffer(chunk_root->node);
		chunk_root->node = NULL;
		goto fail_tree_roots;
	}
	btrfs_set_root_node(&chunk_root->root_item, chunk_root->node);
	chunk_root->commit_root = btrfs_root_node(chunk_root);

	read_extent_buffer(chunk_root->node, fs_info->chunk_tree_uuid,
			   offsetof(struct btrfs_header, chunk_tree_uuid),
			   BTRFS_UUID_SIZE);

#ifdef MY_ABC_HERE
	temp_t = ktime_get();
#endif /* MY_ABC_HERE */
	ret = btrfs_read_chunk_tree(fs_info);
#ifdef MY_ABC_HERE
	stats.read_chunk_tree_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
	if (ret) {
		btrfs_err(fs_info, "failed to read chunk tree: %d", ret);
		goto fail_tree_roots;
	}

#ifdef MY_ABC_HERE
	/* For small volumes, shrink the size of empty_cluster. */
	if (!fs_devices->total_rw_bytes) {
		fs_info->data_alloc_cluster.empty_cluster = 0;
		fs_info->data_alloc_cluster.downgrade_limit = 0;
	} else if (fs_devices->total_rw_bytes < 16ULL * SZ_1G) {
		fs_info->data_alloc_cluster.empty_cluster =
			max(rounddown_pow_of_two(fs_devices->total_rw_bytes) >> 5,
			    (unsigned long) (1 <<
				(fs_info->data_alloc_cluster.downgrade_limit + 1)));
	}
#endif /* MY_ABC_HERE */

	/*
	 * Keep the devid that is marked to be the target device for the
	 * device replace procedure
	 */
	btrfs_free_extra_devids(fs_devices, 0);

	if (!fs_devices->latest_bdev) {
		btrfs_err(fs_info, "failed to read devices");
		goto fail_tree_roots;
	}

	ret = init_tree_roots(fs_info);
	if (ret)
		goto fail_tree_roots;

	/*
	 * If we have a uuid root and we're not being told to rescan we need to
	 * check the generation here so we can set the
	 * BTRFS_FS_UPDATE_UUID_TREE_GEN bit. Otherwise we could commit the
	 * transaction during a balance or the log replay without updating the
	 * uuid generation, and then if we crash we would rescan the uuid tree,
	 * even though it was perfectly fine.
	 */
	if (fs_info->uuid_root && !btrfs_test_opt(fs_info, RESCAN_UUID_TREE) &&
	    fs_info->generation == btrfs_super_uuid_tree_generation(disk_super))
		set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags);
2018-08-01 09:37:19 +07:00
|
|
|
ret = btrfs_verify_dev_extents(fs_info);
|
|
|
|
if (ret) {
|
|
|
|
btrfs_err(fs_info,
|
|
|
|
"failed to verify dev extents against chunks: %d",
|
|
|
|
ret);
|
|
|
|
goto fail_block_groups;
|
|
|
|
}
|
2012-06-23 01:24:12 +07:00
|
|
|
	ret = btrfs_recover_balance(fs_info);
	if (ret) {
		btrfs_err(fs_info, "failed to recover balance: %d", ret);
		goto fail_block_groups;
	}

	ret = btrfs_init_dev_stats(fs_info);
	if (ret) {
		btrfs_err(fs_info, "failed to init dev_stats: %d", ret);
		goto fail_block_groups;
	}

	ret = btrfs_init_dev_replace(fs_info);
	if (ret) {
		btrfs_err(fs_info, "failed to init dev_replace: %d", ret);
		goto fail_block_groups;
	}

	btrfs_free_extra_devids(fs_devices, 1);

#ifdef MY_ABC_HERE
	ret = btrfs_debugfs_add_mounted(fs_info);
	if (ret) {
		pr_err("BTRFS: failed to init debugfs interface: %d\n", ret);
		goto fail_block_groups;
	}
#endif /* MY_ABC_HERE */

	ret = btrfs_sysfs_add_fsid(fs_devices);
	if (ret) {
		btrfs_err(fs_info, "failed to init sysfs fsid interface: %d",
			  ret);
#ifdef MY_ABC_HERE
		goto fail_debugfs;
#else
		goto fail_block_groups;
#endif /* MY_ABC_HERE */
	}

	ret = btrfs_sysfs_add_mounted(fs_info);
	if (ret) {
		btrfs_err(fs_info, "failed to init sysfs interface: %d", ret);
		goto fail_fsdev_sysfs;
	}

	ret = btrfs_init_space_info(fs_info);
	if (ret) {
		btrfs_err(fs_info, "failed to initialize space info: %d", ret);
		goto fail_sysfs;
	}

#ifdef MY_ABC_HERE
	fs_info->log_tree_rsv_start = btrfs_super_syno_log_tree_rsv(disk_super);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	temp_t = ktime_get();
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	if (btrfs_test_opt(fs_info, NO_BLOCK_GROUP))
		ret = 0;
	else
#endif /* MY_ABC_HERE */
	ret = btrfs_read_block_groups(fs_info);
#ifdef MY_ABC_HERE
	stats.read_block_groups_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
	if (ret) {
#ifdef MY_ABC_HERE
		btrfs_err(fs_info,
			  "failed to read block groups: %d, use ro,no_block_group,nologreplay to skip it for rescue",
			  ret);
#else /* MY_ABC_HERE */
		btrfs_err(fs_info, "failed to read block groups: %d", ret);
#endif /* MY_ABC_HERE */
		goto fail_sysfs;
	}

#ifdef MY_ABC_HERE
	if (!fs_info->log_tree_rsv_size)
		fs_info->log_tree_rsv_start = 0;
#endif /* MY_ABC_HERE */

	if (!sb_rdonly(sb) && !btrfs_check_rw_degradable(fs_info, NULL)) {
		btrfs_warn(fs_info,
		"writable mount is not allowed due to too many missing devices");
		goto fail_sysfs;
	}

	fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
					       "btrfs-cleaner");
	if (IS_ERR(fs_info->cleaner_kthread))
		goto fail_sysfs;

	fs_info->transaction_kthread = kthread_run(transaction_kthread,
						   tree_root,
						   "btrfs-transaction");
	if (IS_ERR(fs_info->transaction_kthread))
		goto fail_cleaner;

	if (!btrfs_test_opt(fs_info, NOSSD) &&
	    !fs_info->fs_devices->rotating) {
		btrfs_set_and_info(fs_info, SSD, "enabling ssd optimizations");
	}

	/*
	 * Mount does not set all options immediately, we can do it now and do
	 * not have to wait for transaction commit
	 */
	btrfs_apply_pending_changes(fs_info);

#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
	if (btrfs_test_opt(fs_info, CHECK_INTEGRITY)) {
		ret = btrfsic_mount(fs_info, fs_devices,
				    btrfs_test_opt(fs_info,
					CHECK_INTEGRITY_INCLUDING_EXTENT_DATA) ?
				    1 : 0,
				    fs_info->check_integrity_print_mask);
		if (ret)
			btrfs_warn(fs_info,
				   "failed to initialize integrity check module: %d",
				   ret);
	}
#endif
#ifdef MY_ABC_HERE
	temp_t = ktime_get();
#endif /* MY_ABC_HERE */
	ret = btrfs_read_qgroup_config(fs_info);
#ifdef MY_ABC_HERE
	stats.read_qgroup_config_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
	if (ret) {
#ifdef MY_ABC_HERE
		btrfs_err(fs_info, "failed to read qgroup tree: %d, use no_quota_tree for rescue", ret);
#endif /* MY_ABC_HERE */
		goto fail_trans_kthread;
	}

#ifdef MY_ABC_HERE
	/*
	 * The feature tree should be loaded before quota.
	 *
	 * Some btrfs_root attributes are stored in the feature tree and must
	 * be loaded after the feature tree is ready.  Loading quota loads
	 * btrfs_root indirectly, but the feature tree is not ready at that
	 * time.
	 */
	ret = btrfs_syno_feat_tree_load_status_from_disk(fs_info);
	if (ret) {
		btrfs_err(fs_info, "failed to load syno feature tree, ret: [%d].", ret);
		goto fail_qgroup;
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
#ifdef MY_ABC_HERE
	temp_t = ktime_get();
#endif /* MY_ABC_HERE */
	ret = btrfs_read_usrquota_config(fs_info);
#ifdef MY_ABC_HERE
	stats.read_usrquota_config_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
	if (ret) {
		btrfs_err(fs_info, "failed to read usrquota tree: %d, use no_quota_tree for rescue", ret);
		goto fail_qgroup;
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
#ifdef MY_ABC_HERE
	temp_t = ktime_get();
#endif /* MY_ABC_HERE */
	ret = btrfs_read_syno_usage_config(fs_info);
#ifdef MY_ABC_HERE
	stats.read_syno_usage_config_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
	if (ret)
		goto fail_qgroup;
#endif /* MY_ABC_HERE */

	if (btrfs_build_ref_tree(fs_info))
		btrfs_err(fs_info, "couldn't build ref tree");

#ifdef MY_ABC_HERE
	if (syno_capability_flags & BTRFS_FEATURE_SYNO_CAPABILITY_RBD_META) {
		if (!btrfs_syno_check_feat_tree_enable(fs_info)) {
			btrfs_err(fs_info, "feature tree is not enabled");
			err = -EINVAL;
			goto fail_qgroup;
		}
#ifdef MY_ABC_HERE
		temp_t = ktime_get();
#endif /* MY_ABC_HERE */
		ret = btrfs_activate_all_rbd_meta_files(fs_info);
#ifdef MY_ABC_HERE
		stats.activate_all_rbd_meta_files_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
		if (ret) {
			btrfs_err(fs_info,
				  "failed to activate rbd meta files, ret: %d", ret);
			err = ret;
			goto fail_qgroup;
		}
		fs_info->syno_rbd.first_mapping_table_offset =
			btrfs_super_syno_rbd_first_mapping_table_offset(disk_super);
	} else {
		fs_info->syno_rbd.first_mapping_table_offset = 0;
	}
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (btrfs_test_opt(fs_info, DROP_LOG_TREE) &&
	    btrfs_super_log_root(disk_super) != 0) {
		if (fs_devices->rw_devices == 0) {
			btrfs_warn(fs_info, "drop log required on RO media");
			ret = -EIO;
			goto fail_qgroup;
		}

		btrfs_warn(fs_info, "clear log tree, old log root:%lld(level:%d)",
			   btrfs_super_log_root(disk_super),
			   btrfs_super_log_root_level(disk_super));

		btrfs_set_super_log_root(fs_info->super_for_commit, 0);
		btrfs_set_super_log_root(disk_super, 0);
		btrfs_set_super_log_root_level(fs_info->super_for_commit, 0);
		btrfs_set_super_log_root_level(disk_super, 0);
		ret = write_all_supers(fs_info, 0);
		if (ret)
			goto fail_qgroup;
	}
#endif /* MY_ABC_HERE */

	/* do not make disk changes in broken FS or nologreplay is given */
	if (btrfs_super_log_root(disk_super) != 0 &&
	    !btrfs_test_opt(fs_info, NOLOGREPLAY)) {
		btrfs_info(fs_info, "start tree-log replay");
#ifdef MY_ABC_HERE
		temp_t = ktime_get();
#endif /* MY_ABC_HERE */
		ret = btrfs_replay_log(fs_info, fs_devices);
#ifdef MY_ABC_HERE
		stats.replay_log_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
		if (ret) {
			err = ret;
			goto fail_qgroup;
		}
	}

#ifdef MY_ABC_HERE
#else /* MY_ABC_HERE */
	ret = btrfs_find_orphan_roots(fs_info);
	if (ret)
		goto fail_qgroup;
#endif /* MY_ABC_HERE */

	fs_info->fs_root = btrfs_get_fs_root(fs_info, BTRFS_FS_TREE_OBJECTID, true);
	if (IS_ERR(fs_info->fs_root)) {
		err = PTR_ERR(fs_info->fs_root);
		btrfs_warn(fs_info, "failed to read fs tree: %d", err);
		fs_info->fs_root = NULL;
		goto fail_qgroup;
	}

	if (sb_rdonly(sb))
		goto clear_oneshot;

#ifdef MY_ABC_HERE
	ret = btrfs_start_pre_rw_mount(fs_info, &stats);
#else
	ret = btrfs_start_pre_rw_mount(fs_info);
#endif /* MY_ABC_HERE */
	if (ret) {
		close_ctree(fs_info);
#ifdef MY_ABC_HERE
		print_mount_stats(fs_info, &stats);
#endif /* MY_ABC_HERE */
		return ret;
	}

#ifdef MY_ABC_HERE
	if (btrfs_test_opt(fs_info, SYNO_ALLOCATOR))
		queue_work(system_unbound_wq, &fs_info->syno_allocator.bg_prefetch_work);
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	if (test_bit(BTRFS_FS_BLOCK_GROUP_CACHE_TREE_BROKEN, &fs_info->flags) ||
	    (!btrfs_test_opt(fs_info, BLOCK_GROUP_CACHE_TREE) &&
	     btrfs_fs_compat(fs_info, BLOCK_GROUP_CACHE_TREE))) {
		ret = btrfs_clear_block_group_cache_tree(fs_info);
		if (ret) {
			btrfs_warn(fs_info, "failed to clear block group cache tree: %d", ret);
			close_ctree(fs_info);
#ifdef MY_ABC_HERE
			print_mount_stats(fs_info, &stats);
#endif /* MY_ABC_HERE */
			return ret;
		}
	}
	if (btrfs_test_opt(fs_info, BLOCK_GROUP_CACHE_TREE) &&
	    !btrfs_fs_compat(fs_info, BLOCK_GROUP_CACHE_TREE) &&
	    !test_bit(BTRFS_FS_BLOCK_GROUP_CACHE_TREE_BROKEN, &fs_info->flags)) {
#ifdef MY_ABC_HERE
		temp_t = ktime_get();
#endif /* MY_ABC_HERE */
		ret = btrfs_create_block_group_cache_tree(fs_info);
#ifdef MY_ABC_HERE
		stats.create_block_group_cache_tree_time = ktime_to_ns(ktime_sub(ktime_get(), temp_t));
#endif /* MY_ABC_HERE */
		if (ret) {
			btrfs_warn(fs_info, "failed to create block group cache tree: %d", ret);
			close_ctree(fs_info);
#ifdef MY_ABC_HERE
			print_mount_stats(fs_info, &stats);
#endif /* MY_ABC_HERE */
			return ret;
		}
	}
#endif /* MY_ABC_HERE */

btrfs_discard_resume(fs_info);
|
	btrfs_qgroup_rescan_resume(fs_info);

	if (fs_info->uuid_root &&
	    (btrfs_test_opt(fs_info, RESCAN_UUID_TREE) ||
#ifdef MY_ABC_HERE
	     !test_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags)
#else /* MY_ABC_HERE */
	     fs_info->generation != btrfs_super_uuid_tree_generation(disk_super)
#endif /* MY_ABC_HERE */
	    )) {
		btrfs_info(fs_info, "checking UUID tree");
		ret = btrfs_check_uuid_tree(fs_info);
		if (ret) {
			btrfs_warn(fs_info,
				   "failed to check the UUID tree: %d", ret);
			close_ctree(fs_info);
#ifdef MY_ABC_HERE
			print_mount_stats(fs_info, &stats);
#endif /* MY_ABC_HERE */
			return ret;
		}
	}

	set_bit(BTRFS_FS_OPEN, &fs_info->flags);

clear_oneshot:
	btrfs_clear_oneshot_options(fs_info);

#ifdef MY_ABC_HERE
	print_mount_stats(fs_info, &stats);
#endif /* MY_ABC_HERE */
	return 0;

fail_qgroup:
#ifdef MY_ABC_HERE
	free_all_syno_rbd_meta_file_inodes(fs_info);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	btrfs_free_usrquota_config(fs_info);
#endif /* MY_ABC_HERE */
	btrfs_free_qgroup_config(fs_info);
fail_trans_kthread:
	kthread_stop(fs_info->transaction_kthread);
	btrfs_cleanup_transaction(fs_info);
	btrfs_free_fs_roots(fs_info);
fail_cleaner:
	kthread_stop(fs_info->cleaner_kthread);

	/*
	 * make sure we're done with the btree inode before we stop our
	 * kthreads
	 */
	filemap_write_and_wait(fs_info->btree_inode->i_mapping);

fail_sysfs:
	btrfs_sysfs_remove_mounted(fs_info);

fail_fsdev_sysfs:
	btrfs_sysfs_remove_fsid(fs_info->fs_devices);

#ifdef MY_ABC_HERE
fail_debugfs:
	btrfs_debugfs_remove_mounted(fs_info);
#endif /* MY_ABC_HERE */

fail_block_groups:
	btrfs_put_block_group_cache(fs_info);

fail_tree_roots:
	if (fs_info->data_reloc_root)
		btrfs_drop_and_free_fs_root(fs_info, fs_info->data_reloc_root);
	free_root_pointers(fs_info, true);
	invalidate_inode_pages2(fs_info->btree_inode->i_mapping);

fail_sb_buffer:
	btrfs_stop_all_workers(fs_info);
	btrfs_free_block_groups(fs_info);
fail_alloc:
	btrfs_mapping_tree_free(&fs_info->mapping_tree);

	iput(fs_info->btree_inode);
#ifdef MY_ABC_HERE
	if (fs_info->locker_update_interval)
		cancel_delayed_work_sync(&fs_info->locker_update_work);
#endif /* MY_ABC_HERE */

fail:
	btrfs_close_devices(fs_info->fs_devices);
#ifdef MY_ABC_HERE
	kfifo_free(&fs_info->cksumfailed_files);
fail_kfifo:
#endif /* MY_ABC_HERE */

#ifdef MY_ABC_HERE
	print_mount_stats(fs_info, &stats);
#endif /* MY_ABC_HERE */
	return err;
}
ALLOW_ERROR_INJECTION(open_ctree, ERRNO);

static void btrfs_end_super_write(struct bio *bio)
{
	struct btrfs_device *device = bio->bi_private;
	struct bio_vec *bvec;
	struct bvec_iter_all iter_all;
	struct page *page;

	bio_for_each_segment_all(bvec, bio, iter_all) {
		page = bvec->bv_page;

		if (bio->bi_status) {
			btrfs_warn_rl_in_rcu(device->fs_info,
				"lost page write due to IO error on %s (%d)",
				rcu_str_deref(device->name),
				blk_status_to_errno(bio->bi_status));
			ClearPageUptodate(page);
			SetPageError(page);
			btrfs_dev_stat_inc_and_print(device,
						     BTRFS_DEV_STAT_WRITE_ERRS);
		} else {
			SetPageUptodate(page);
		}

		put_page(page);
		unlock_page(page);
	}

	bio_put(bio);
}

struct btrfs_super_block *btrfs_read_dev_one_super(struct block_device *bdev,
						   int copy_num)
{
	struct btrfs_super_block *super;
	struct page *page;
	u64 bytenr;
	struct address_space *mapping = bdev->bd_inode->i_mapping;

	bytenr = btrfs_sb_offset(copy_num);
	if (bytenr + BTRFS_SUPER_INFO_SIZE >= i_size_read(bdev->bd_inode))
		return ERR_PTR(-EINVAL);

	page = read_cache_page_gfp(mapping, bytenr >> PAGE_SHIFT, GFP_NOFS);
	if (IS_ERR(page))
		return ERR_CAST(page);

	super = page_address(page);
	if (btrfs_super_magic(super) != BTRFS_MAGIC) {
		btrfs_release_disk_super(super);
		return ERR_PTR(-ENODATA);
	}

	if (btrfs_super_bytenr(super) != bytenr) {
		btrfs_release_disk_super(super);
		return ERR_PTR(-EINVAL);
	}

	return super;
}

struct btrfs_super_block *btrfs_read_dev_super(struct block_device *bdev)
{
	struct btrfs_super_block *super, *latest = NULL;
	int i;
	u64 transid = 0;

	/* we would like to check all the supers, but that would make
	 * a btrfs mount succeed after a mkfs from a different FS.
	 * So, we need to add a special mount option to scan for
	 * later supers, using BTRFS_SUPER_MIRROR_MAX instead
	 */
	for (i = 0; i < 1; i++) {
		super = btrfs_read_dev_one_super(bdev, i);
		if (IS_ERR(super))
			continue;

		if (!latest || btrfs_super_generation(super) > transid) {
			if (latest)
				btrfs_release_disk_super(super);

			latest = super;
			transid = btrfs_super_generation(super);
		}
	}

	return super;
}

/*
 * Write superblock @sb to the @device. Do not wait for completion, all the
 * pages we use for writing are locked.
 *
 * Write @max_mirrors copies of the superblock, where 0 means default that fit
 * the expected device size at commit time. Note that max_mirrors must be
 * same for write and wait phases.
 *
 * Return number of errors when page is not found or submission fails.
 */
static int write_dev_supers(struct btrfs_device *device,
			    struct btrfs_super_block *sb, int max_mirrors)
{
	struct btrfs_fs_info *fs_info = device->fs_info;
	struct address_space *mapping = device->bdev->bd_inode->i_mapping;
	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
	int i;
	int errors = 0;
	u64 bytenr;

	if (max_mirrors == 0)
		max_mirrors = BTRFS_SUPER_MIRROR_MAX;

	shash->tfm = fs_info->csum_shash;

	for (i = 0; i < max_mirrors; i++) {
		struct page *page;
		struct bio *bio;
		struct btrfs_super_block *disk_super;

		bytenr = btrfs_sb_offset(i);
		if (bytenr + BTRFS_SUPER_INFO_SIZE >=
		    device->commit_total_bytes)
			break;

		btrfs_set_super_bytenr(sb, bytenr);

		crypto_shash_digest(shash, (const char *)sb + BTRFS_CSUM_SIZE,
				    BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE,
				    sb->csum);

		page = find_or_create_page(mapping, bytenr >> PAGE_SHIFT,
					   GFP_NOFS);
		if (!page) {
			btrfs_err(device->fs_info,
			    "couldn't get super block page for bytenr %llu",
			    bytenr);
			errors++;
			continue;
		}

		/* Bump the refcount for wait_dev_supers() */
		get_page(page);

		disk_super = page_address(page);
		memcpy(disk_super, sb, BTRFS_SUPER_INFO_SIZE);

		/*
		 * Directly use bios here instead of relying on the page cache
		 * to do I/O, so we don't lose the ability to do integrity
		 * checking.
		 */
		bio = bio_alloc(GFP_NOFS, 1);
		bio_set_dev(bio, device->bdev);
		bio->bi_iter.bi_sector = bytenr >> SECTOR_SHIFT;
		bio->bi_private = device;
		bio->bi_end_io = btrfs_end_super_write;
		__bio_add_page(bio, page, BTRFS_SUPER_INFO_SIZE,
			       offset_in_page(bytenr));

		/*
		 * We FUA only the first super block. The others we allow to
		 * go down lazy and there's a short window where the on-disk
		 * copies might still contain the older version.
		 */
		bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_META | REQ_PRIO;
		if (i == 0 && !btrfs_test_opt(device->fs_info, NOBARRIER))
			bio->bi_opf |= REQ_FUA;

#ifdef MY_ABC_HERE
		if (bio->bi_opf & REQ_FUA)
			percpu_counter_add_batch(&device->fs_info->write_fua, 1, SZ_128M);
#endif /* MY_ABC_HERE */
		btrfsic_submit_bio(bio);
	}

	return errors < i ? 0 : -1;
}

/*
 * Wait for write completion of superblocks done by write_dev_supers,
 * @max_mirrors same for write and wait phases.
 *
 * Return number of errors when page is not found or not marked up to
 * date.
 */
static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
{
	int i;
	int errors = 0;
	bool primary_failed = false;
	u64 bytenr;

	if (max_mirrors == 0)
		max_mirrors = BTRFS_SUPER_MIRROR_MAX;

	for (i = 0; i < max_mirrors; i++) {
		struct page *page;

		bytenr = btrfs_sb_offset(i);
		if (bytenr + BTRFS_SUPER_INFO_SIZE >=
		    device->commit_total_bytes)
			break;

		page = find_get_page(device->bdev->bd_inode->i_mapping,
				     bytenr >> PAGE_SHIFT);
		if (!page) {
			errors++;
			if (i == 0)
				primary_failed = true;
			continue;
		}
		/* Page is submitted locked and unlocked once the IO completes */
		wait_on_page_locked(page);
		if (PageError(page)) {
			errors++;
			if (i == 0)
				primary_failed = true;
		}

		/* Drop our reference */
		put_page(page);

		/* Drop the reference from the writing run */
		put_page(page);
	}

	/* log error, force error return */
	if (primary_failed) {
		btrfs_err(device->fs_info, "error writing primary super block to device %llu",
			  device->devid);
		return -1;
	}

	return errors < i ? 0 : -1;
}

/*
 * endio for the write_dev_flush, this will wake anyone waiting
 * for the barrier when it is done
 */
static void btrfs_end_empty_barrier(struct bio *bio)
{
	complete(bio->bi_private);
}

/*
 * Submit a flush request to the device if it supports it. Error handling is
 * done in the waiting counterpart.
 */
static void write_dev_flush(struct btrfs_device *device)
{
	struct request_queue *q = bdev_get_queue(device->bdev);
	struct bio *bio = device->flush_bio;

	if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags))
		return;

	bio_reset(bio);
	bio->bi_end_io = btrfs_end_empty_barrier;
	bio_set_dev(bio, device->bdev);
	bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_PREFLUSH;
	init_completion(&device->flush_wait);
	bio->bi_private = &device->flush_wait;

	btrfsic_submit_bio(bio);
	set_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state);
#ifdef MY_ABC_HERE
	percpu_counter_add_batch(&device->fs_info->write_flush, 1, SZ_128M);
#endif /* MY_ABC_HERE */
}

/*
 * If the flush bio has been submitted by write_dev_flush, wait for it.
 */
static blk_status_t wait_dev_flush(struct btrfs_device *device)
{
	struct bio *bio = device->flush_bio;

	if (!test_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state))
		return BLK_STS_OK;

	clear_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state);
	wait_for_completion_io(&device->flush_wait);

	return bio->bi_status;
}

static int check_barrier_error(struct btrfs_fs_info *fs_info)
{
	if (!btrfs_check_rw_degradable(fs_info, NULL))
		return -EIO;
	return 0;
}

/*
 * send an empty flush down to each device in parallel,
 * then wait for them
 */
static int barrier_all_devices(struct btrfs_fs_info *info)
{
	struct list_head *head;
	struct btrfs_device *dev;
	int errors_wait = 0;
	blk_status_t ret;

	lockdep_assert_held(&info->fs_devices->device_list_mutex);
	/* send down all the barriers */
	head = &info->fs_devices->devices;
	list_for_each_entry(dev, head, dev_list) {
		if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
			continue;
		if (!dev->bdev)
			continue;
		if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
			continue;

		write_dev_flush(dev);
		dev->last_flush_error = BLK_STS_OK;
	}

	/* wait for all the barriers */
	list_for_each_entry(dev, head, dev_list) {
		if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
			continue;
		if (!dev->bdev) {
			errors_wait++;
			continue;
		}
		if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
			continue;

		ret = wait_dev_flush(dev);
		if (ret) {
			dev->last_flush_error = ret;
			btrfs_dev_stat_inc_and_print(dev,
						     BTRFS_DEV_STAT_FLUSH_ERRS);
			errors_wait++;
		}
	}

	if (errors_wait) {
		/*
		 * At some point we need the status of all disks
		 * to arrive at the volume status. So error checking
		 * is being pushed to a separate loop.
		 */
		return check_barrier_error(info);
	}
	return 0;
}

int btrfs_get_num_tolerated_disk_barrier_failures(u64 flags)
{
	int raid_type;
	int min_tolerated = INT_MAX;

	if ((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 ||
	    (flags & BTRFS_AVAIL_ALLOC_BIT_SINGLE))
		min_tolerated = min_t(int, min_tolerated,
				      btrfs_raid_array[BTRFS_RAID_SINGLE].
				      tolerated_failures);

	for (raid_type = 0; raid_type < BTRFS_NR_RAID_TYPES; raid_type++) {
		if (raid_type == BTRFS_RAID_SINGLE)
			continue;
		if (!(flags & btrfs_raid_array[raid_type].bg_flag))
			continue;
		min_tolerated = min_t(int, min_tolerated,
				      btrfs_raid_array[raid_type].
				      tolerated_failures);
	}

	if (min_tolerated == INT_MAX) {
		pr_warn("BTRFS: unknown raid flag: %llu", flags);
		min_tolerated = 0;
	}

	return min_tolerated;
}
|
|
|
|
|
2017-02-11 01:04:32 +07:00
|
|
|
int write_all_supers(struct btrfs_fs_info *fs_info, int max_mirrors)
|
2008-04-11 03:19:33 +07:00
|
|
|
{
|
2009-06-11 02:17:02 +07:00
|
|
|
struct list_head *head;
|
2008-04-11 03:19:33 +07:00
|
|
|
struct btrfs_device *dev;
|
2008-05-07 22:43:44 +07:00
|
|
|
struct btrfs_super_block *sb;
|
2008-04-11 03:19:33 +07:00
|
|
|
struct btrfs_dev_item *dev_item;
|
|
|
|
int ret;
|
|
|
|
int do_barriers;
|
2008-04-29 20:38:00 +07:00
|
|
|
int max_errors;
|
|
|
|
int total_errors = 0;
|
2008-05-07 22:43:44 +07:00
|
|
|
u64 flags;
|
2008-04-11 03:19:33 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
do_barriers = !btrfs_test_opt(fs_info, NOBARRIER);
|
2017-09-14 01:25:21 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* max_mirrors == 0 indicates we're from commit_transaction,
|
|
|
|
* not from fsync where the tree roots in fs_info have not
|
|
|
|
* been consistent on disk.
|
|
|
|
*/
|
|
|
|
if (max_mirrors == 0)
|
|
|
|
backup_super_roots(fs_info);
|
2008-04-11 03:19:33 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
sb = fs_info->super_for_commit;
|
2008-05-07 22:43:44 +07:00
|
|
|
dev_item = &sb->dev_item;
|
2009-06-11 02:17:02 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
mutex_lock(&fs_info->fs_devices->device_list_mutex);
|
|
|
|
head = &fs_info->fs_devices->devices;
|
|
|
|
max_errors = btrfs_super_num_devices(fs_info->super_copy) - 1;
|
2011-11-19 03:07:51 +07:00
|
|
|
|
2012-08-01 23:56:49 +07:00
|
|
|
if (do_barriers) {
|
2016-06-23 05:54:23 +07:00
|
|
|
ret = barrier_all_devices(fs_info);
|
2012-08-01 23:56:49 +07:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(
|
2016-06-23 05:54:23 +07:00
|
|
|
&fs_info->fs_devices->device_list_mutex);
|
|
|
|
btrfs_handle_fs_error(fs_info, ret,
|
|
|
|
"errors while submitting device barriers.");
|
2012-08-01 23:56:49 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
}
|
2011-11-19 03:07:51 +07:00
|
|
|
|
2017-06-16 05:28:47 +07:00
|
|
|
list_for_each_entry(dev, head, dev_list) {
|
2008-05-14 00:46:40 +07:00
|
|
|
if (!dev->bdev) {
|
|
|
|
total_errors++;
|
|
|
|
continue;
|
|
|
|
}
|
2017-12-04 11:54:53 +07:00
|
|
|
if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
|
2017-12-04 11:54:52 +07:00
|
|
|
!test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
|
2008-05-14 00:46:40 +07:00
|
|
|
continue;
|
|
|
|
|
2008-11-18 09:11:30 +07:00
|
|
|
btrfs_set_stack_device_generation(dev_item, 0);
|
2008-05-07 22:43:44 +07:00
|
|
|
btrfs_set_stack_device_type(dev_item, dev->type);
|
|
|
|
btrfs_set_stack_device_id(dev_item, dev->devid);
|
2014-07-24 10:37:13 +07:00
|
|
|
btrfs_set_stack_device_total_bytes(dev_item,
|
2014-09-03 20:35:33 +07:00
|
|
|
dev->commit_total_bytes);
|
2014-09-03 20:35:34 +07:00
|
|
|
btrfs_set_stack_device_bytes_used(dev_item,
|
|
|
|
dev->commit_bytes_used);
|
2008-05-07 22:43:44 +07:00
|
|
|
btrfs_set_stack_device_io_align(dev_item, dev->io_align);
|
|
|
|
btrfs_set_stack_device_io_width(dev_item, dev->io_width);
|
|
|
|
btrfs_set_stack_device_sector_size(dev_item, dev->sector_size);
|
|
|
|
memcpy(dev_item->uuid, dev->uuid, BTRFS_UUID_SIZE);
|
2018-10-30 21:43:23 +07:00
|
|
|
memcpy(dev_item->fsid, dev->fs_devices->metadata_uuid,
|
|
|
|
BTRFS_FSID_SIZE);
|
2008-12-09 04:46:26 +07:00
|
|
|
|
2008-05-07 22:43:44 +07:00
|
|
|
flags = btrfs_super_flags(sb);
|
|
|
|
btrfs_set_super_flags(sb, flags | BTRFS_HEADER_FLAG_WRITTEN);
|
|
|
|
|
		/* Validate the superblock in memory before writing it out. */
		ret = btrfs_validate_write_super(fs_info, sb);
		if (ret < 0) {
			mutex_unlock(&fs_info->fs_devices->device_list_mutex);
			btrfs_handle_fs_error(fs_info, -EUCLEAN,
				"unexpected superblock corruption detected");
			return -EUCLEAN;
		}

		ret = write_dev_supers(dev, sb, max_mirrors);
		if (ret)
			total_errors++;
	}
	if (total_errors > max_errors) {
		btrfs_err(fs_info, "%d errors while writing supers",
			  total_errors);
		mutex_unlock(&fs_info->fs_devices->device_list_mutex);

		/* FUA is masked off if unsupported and can't be the reason */
		btrfs_handle_fs_error(fs_info, -EIO,
				      "%d errors while writing supers",
				      total_errors);
		return -EIO;
	}

	total_errors = 0;
	list_for_each_entry(dev, head, dev_list) {
		if (!dev->bdev)
			continue;
		if (!test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &dev->dev_state) ||
		    !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))
			continue;

		ret = wait_dev_supers(dev, max_mirrors);
		if (ret)
			total_errors++;
	}
	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
	if (total_errors > max_errors) {
		btrfs_handle_fs_error(fs_info, -EIO,
				      "%d errors while writing supers",
				      total_errors);
		return -EIO;
	}
	return 0;
}

/* Drop a fs root from the radix tree and free it. */
void btrfs_drop_and_free_fs_root(struct btrfs_fs_info *fs_info,
				 struct btrfs_root *root)
{
	bool drop_ref = false;

	spin_lock(&fs_info->fs_roots_radix_lock);
	radix_tree_delete(&fs_info->fs_roots_radix,
			  (unsigned long)root->root_key.objectid);
	if (test_and_clear_bit(BTRFS_ROOT_IN_RADIX, &root->state))
		drop_ref = true;
	spin_unlock(&fs_info->fs_roots_radix_lock);
#ifdef MY_ABC_HERE
	if (!list_empty(&root->syno_orphan_cleanup.root)) {
		spin_lock(&fs_info->syno_orphan_cleanup.lock);
		list_del_init(&root->syno_orphan_cleanup.root);
		spin_unlock(&fs_info->syno_orphan_cleanup.lock);
	}
#endif /* MY_ABC_HERE */

	if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
		ASSERT(root->log_root == NULL);
		if (root->reloc_root) {
			btrfs_put_root(root->reloc_root);
			root->reloc_root = NULL;
		}
	}

	if (root->free_ino_pinned)
		__btrfs_remove_free_space_cache(root->free_ino_pinned);
	if (root->free_ino_ctl)
		__btrfs_remove_free_space_cache(root->free_ino_ctl);
	if (root->ino_cache_inode) {
		iput(root->ino_cache_inode);
		root->ino_cache_inode = NULL;
	}
	if (drop_ref)
		btrfs_put_root(root);
}

int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info)
{
	u64 root_objectid = 0;
	struct btrfs_root *gang[8];
	int i = 0;
	int err = 0;
	unsigned int ret = 0;

	while (1) {
		spin_lock(&fs_info->fs_roots_radix_lock);
		ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
					     (void **)gang, root_objectid,
					     ARRAY_SIZE(gang));
		if (!ret) {
			spin_unlock(&fs_info->fs_roots_radix_lock);
			break;
		}
		root_objectid = gang[ret - 1]->root_key.objectid + 1;

		for (i = 0; i < ret; i++) {
			/* Avoid grabbing roots in dead_roots */
			if (btrfs_root_refs(&gang[i]->root_item) == 0) {
				gang[i] = NULL;
				continue;
			}
			/* Grab all the search results for later use */
			gang[i] = btrfs_grab_root(gang[i]);
		}
		spin_unlock(&fs_info->fs_roots_radix_lock);

		for (i = 0; i < ret; i++) {
			if (!gang[i])
				continue;
			root_objectid = gang[i]->root_key.objectid;
#ifdef MY_ABC_HERE
			down_read(&fs_info->cleanup_work_sem);
#endif /* MY_ABC_HERE */
			err = btrfs_orphan_cleanup(gang[i]);
#ifdef MY_ABC_HERE
			up_read(&fs_info->cleanup_work_sem);
#endif /* MY_ABC_HERE */
			if (err)
				break;
			btrfs_put_root(gang[i]);
		}
		root_objectid++;
	}

	/* Release the uncleaned roots due to error */
	for (; i < ret; i++) {
		if (gang[i])
			btrfs_put_root(gang[i]);
	}
	return err;
}

int btrfs_commit_super(struct btrfs_fs_info *fs_info)
{
	struct btrfs_root *root = fs_info->tree_root;
	struct btrfs_trans_handle *trans;

	mutex_lock(&fs_info->cleaner_mutex);
	btrfs_run_delayed_iputs(fs_info);
	mutex_unlock(&fs_info->cleaner_mutex);
	wake_up_process(fs_info->cleaner_kthread);

	/* Wait until any ongoing cleanup work is done */
	down_write(&fs_info->cleanup_work_sem);
	up_write(&fs_info->cleanup_work_sem);

	trans = btrfs_join_transaction(root);
	if (IS_ERR(trans))
		return PTR_ERR(trans);
	return btrfs_commit_transaction(trans);
}

void __cold close_ctree(struct btrfs_fs_info *fs_info)
{
	int ret;

	set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);
	/*
	 * We don't want the cleaner to start new transactions, add more delayed
	 * iputs, etc. while we're closing. We can't use kthread_stop() yet
	 * because that frees the task_struct, and the transaction kthread might
	 * still try to wake up the cleaner.
	 */
	kthread_park(fs_info->cleaner_kthread);

	/* Wait for the qgroup rescan worker to stop */
	btrfs_qgroup_wait_for_completion(fs_info, false);

#ifdef MY_ABC_HERE
	cancel_delayed_work_sync(&fs_info->locker_update_work);
#endif /* MY_ABC_HERE */

	/* Wait for the uuid_scan task to finish */
	down(&fs_info->uuid_tree_rescan_sem);
	/* Avoid complaints from lockdep et al., set sem back to initial state */
	up(&fs_info->uuid_tree_rescan_sem);

	/* Pause restriper - we want to resume it on mount */
	btrfs_pause_balance(fs_info);

	btrfs_dev_replace_suspend_for_unmount(fs_info);

	btrfs_scrub_cancel(fs_info);

	/* Wait for any defraggers to finish */
	wait_event(fs_info->transaction_wait,
		   (atomic_read(&fs_info->defrag_running) == 0));

	/* Clear out the rbtree of defraggable inodes */
	btrfs_cleanup_defrag_inodes(fs_info);

	cancel_work_sync(&fs_info->async_reclaim_work);
	cancel_work_sync(&fs_info->async_data_reclaim_work);
#ifdef MY_ABC_HERE
	cancel_work_sync(&fs_info->syno_async_metadata_reclaim_work);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	cancel_work_sync(&fs_info->syno_async_data_flush_work);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	cancel_work_sync(&fs_info->syno_async_metadata_flush_work);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	cancel_work_sync(&fs_info->async_metadata_cache_work);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	cancel_work_sync(&fs_info->syno_usage_rescan_work);
	cancel_work_sync(&fs_info->syno_usage_fast_rescan_work);
	cancel_work_sync(&fs_info->syno_usage_full_rescan_work);
#endif /* MY_ABC_HERE */
#ifdef MY_ABC_HERE
	cancel_work_sync(&fs_info->syno_allocator.bg_prefetch_work);
#endif /* MY_ABC_HERE */

	/* Cancel or finish ongoing discard work */
	btrfs_discard_cleanup(fs_info);

	if (!sb_rdonly(fs_info->sb)) {
		/*
		 * The cleaner kthread is stopped, so do one final pass over
		 * unused block groups.
		 */
		btrfs_delete_unused_bgs(fs_info);

		/*
		 * There might be existing delayed inode workers still running
		 * and holding an empty delayed inode item. We must wait for
		 * them to complete first because they can create a transaction.
		 * This happens when someone calls btrfs_balance_delayed_items()
		 * and then a transaction commit runs the same delayed nodes
		 * before any delayed worker has done something with the nodes.
		 * We must wait for any worker here and not at transaction
		 * commit time since that could cause a deadlock.
		 * This is a very rare case.
		 */
		btrfs_flush_workqueue(fs_info->delayed_workers);

		ret = btrfs_commit_super(fs_info);
		if (ret)
			btrfs_err(fs_info, "commit super ret %d", ret);
	}

|
|
|
if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state) ||
|
|
|
|
test_bit(BTRFS_FS_STATE_TRANS_ABORTED, &fs_info->fs_state))
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_error_commit_super(fs_info);
|
2007-04-09 21:42:37 +07:00
|
|
|
|
2011-11-17 12:56:18 +07:00
|
|
|
kthread_stop(fs_info->transaction_kthread);
|
|
|
|
kthread_stop(fs_info->cleaner_kthread);
|
2010-05-16 21:49:58 +07:00
|
|
|
|
2018-09-28 18:18:03 +07:00
|
|
|
ASSERT(list_empty(&fs_info->delayed_iputs));
|
2016-09-03 02:40:02 +07:00
|
|
|
set_bit(BTRFS_FS_CLOSING_DONE, &fs_info->flags);
|
2009-07-28 19:41:57 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
if (btrfs_check_usrquota_leak(fs_info)) {
|
|
|
|
WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
|
|
|
|
btrfs_err(fs_info, "user quota reserved space leaked");
|
|
|
|
}
|
|
|
|
btrfs_free_usrquota_config(fs_info);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2020-06-10 08:04:44 +07:00
|
|
|
if (btrfs_check_quota_leak(fs_info)) {
|
|
|
|
WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
|
|
|
|
btrfs_err(fs_info, "qgroup reserved space leaked");
|
|
|
|
}
|
|
|
|
|
2014-08-02 06:12:36 +07:00
|
|
|
btrfs_free_qgroup_config(fs_info);
|
2018-04-27 16:21:53 +07:00
|
|
|
ASSERT(list_empty(&fs_info->delalloc_roots));
|
2011-09-13 20:23:30 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
free_all_syno_rbd_meta_file_inodes(fs_info);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2013-01-29 17:10:51 +07:00
|
|
|
if (percpu_counter_sum(&fs_info->delalloc_bytes)) {
|
2014-08-02 06:12:36 +07:00
|
|
|
btrfs_info(fs_info, "at unmount delalloc count %lld",
|
2013-01-29 17:10:51 +07:00
|
|
|
percpu_counter_sum(&fs_info->delalloc_bytes));
|
2008-01-31 23:05:37 +07:00
|
|
|
}
|
2008-07-31 03:29:20 +07:00
|
|
|
|
2019-04-11 02:56:09 +07:00
|
|
|
if (percpu_counter_sum(&fs_info->dio_bytes))
|
|
|
|
btrfs_info(fs_info, "at unmount dio bytes count %lld",
|
|
|
|
percpu_counter_sum(&fs_info->dio_bytes));
|
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_debugfs_remove_mounted(fs_info);
|
|
|
|
#endif /* MY_ABC_HERE */
|
|
|
|
|
2015-08-14 17:32:47 +07:00
|
|
|
btrfs_sysfs_remove_mounted(fs_info);
|
2015-03-10 05:38:38 +07:00
|
|
|
btrfs_sysfs_remove_fsid(fs_info->fs_devices);
|
2013-11-02 00:06:58 +07:00
|
|
|
|
2014-01-13 18:53:53 +07:00
|
|
|
btrfs_put_block_group_cache(fs_info);
|
|
|
|
|
2014-04-09 18:23:22 +07:00
|
|
|
/*
|
|
|
|
* We must make sure there are no read requests
|
|
|
|
* submitted after we stop all workers.
|
|
|
|
*/
|
|
|
|
invalidate_inode_pages2(fs_info->btree_inode->i_mapping);
|
2013-10-17 00:53:28 +07:00
|
|
|
btrfs_stop_all_workers(fs_info);
|
|
|
|
|
2016-09-03 02:40:02 +07:00
|
|
|
clear_bit(BTRFS_FS_OPEN, &fs_info->flags);
|
2019-10-10 09:39:25 +07:00
|
|
|
free_root_pointers(fs_info, true);
|
2020-02-15 04:11:42 +07:00
|
|
|
btrfs_free_fs_roots(fs_info);
|
2008-04-19 03:11:30 +07:00
|
|
|
|
2020-01-21 21:17:06 +07:00
|
|
|
/*
|
|
|
|
* We must free the block groups after dropping the fs_roots as we could
|
|
|
|
* have had an IO error and have left over tree log blocks that aren't
|
|
|
|
* cleaned up until the fs roots are freed. This makes the block group
|
|
|
|
* accounting appear to be wrong because there's pending reserved bytes,
|
|
|
|
* so make sure we do the block group cleanup afterwards.
|
|
|
|
*/
|
|
|
|
btrfs_free_block_groups(fs_info);
|
|
|
|
|
2013-05-31 03:55:44 +07:00
|
|
|
iput(fs_info->btree_inode);
|
2008-05-01 00:59:35 +07:00
|
|
|
|
2011-11-09 19:44:05 +07:00
|
|
|
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
|
2016-06-23 05:54:23 +07:00
|
|
|
if (btrfs_test_opt(fs_info, CHECK_INTEGRITY))
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfsic_unmount(fs_info->fs_devices);
|
2011-11-09 19:44:05 +07:00
|
|
|
#endif
|
|
|
|
|
2008-03-25 02:01:56 +07:00
|
|
|
btrfs_mapping_tree_free(&fs_info->mapping_tree);
|
2019-02-12 21:13:14 +07:00
|
|
|
btrfs_close_devices(fs_info->fs_devices);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
kfifo_free(&fs_info->cksumfailed_files);
|
|
|
|
correction_destroy_locked_record(fs_info);
|
|
|
|
#endif /* MY_ABC_HERE */
|
2007-02-02 21:18:22 +07:00
|
|
|
}
|
|
|
|
|
2012-05-06 18:23:47 +07:00
|
|
|
int btrfs_buffer_uptodate(struct extent_buffer *buf, u64 parent_transid,
|
|
|
|
int atomic)
|
2007-10-16 03:14:19 +07:00
|
|
|
{
|
2008-05-13 00:39:03 +07:00
|
|
|
int ret;
|
2010-08-07 00:21:20 +07:00
|
|
|
struct inode *btree_inode = buf->pages[0]->mapping->host;
|
2008-05-13 00:39:03 +07:00
|
|
|
|
2012-03-13 20:38:00 +07:00
|
|
|
ret = extent_buffer_uptodate(buf);
|
2008-05-13 00:39:03 +07:00
|
|
|
if (!ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
ret = verify_parent_transid(&BTRFS_I(btree_inode)->io_tree, buf,
|
2012-05-06 18:23:47 +07:00
|
|
|
parent_transid, atomic);
|
|
|
|
if (ret == -EAGAIN)
|
|
|
|
return ret;
|
2008-05-13 00:39:03 +07:00
|
|
|
return !ret;
|
2007-10-16 03:14:19 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
void btrfs_mark_buffer_dirty(struct extent_buffer *buf)
|
|
|
|
{
|
2016-06-23 05:54:23 +07:00
|
|
|
struct btrfs_fs_info *fs_info;
|
2013-09-20 03:07:01 +07:00
|
|
|
struct btrfs_root *root;
|
2007-10-16 03:14:19 +07:00
|
|
|
u64 transid = btrfs_header_generation(buf);
|
2009-03-13 22:00:37 +07:00
|
|
|
int was_dirty;
|
Btrfs: Change btree locking to use explicit blocking points
Most of the btrfs metadata operations can be protected by a spinlock,
but some operations still need to schedule.
So far, btrfs has been using a mutex along with a trylock loop,
most of the time it is able to avoid going for the full mutex, so
the trylock loop is a big performance gain.
This commit is step one for getting rid of the blocking locks entirely.
btrfs_tree_lock takes a spinlock, and the code explicitly switches
to a blocking lock when it starts an operation that can schedule.
We'll be able to get rid of the blocking locks in smaller pieces over time.
Tracing allows us to find the most common cause of blocking, so we
can start with the hot spots first.
The basic idea is:
btrfs_tree_lock() returns with the spin lock held
btrfs_set_lock_blocking() sets the EXTENT_BUFFER_BLOCKING bit in
the extent buffer flags, and then drops the spin lock. The buffer is
still considered locked by all of the btrfs code.
If btrfs_tree_lock gets the spinlock but finds the blocking bit set, it drops
the spin lock and waits on a wait queue for the blocking bit to go away.
Much of the code that needs to set the blocking bit finishes without actually
blocking a good percentage of the time. So, an adaptive spin is still
used against the blocking bit to avoid very high context switch rates.
btrfs_clear_lock_blocking() clears the blocking bit and returns
with the spinlock held again.
btrfs_tree_unlock() can be called on either blocking or spinning locks,
it does the right thing based on the blocking bit.
ctree.c has a helper function to set/clear all the locked buffers in a
path as blocking.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-04 21:25:08 +07:00
|
|
|
|
2013-09-20 03:07:01 +07:00
|
|
|
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
|
|
|
|
/*
|
|
|
|
* This is a fast path so only do this check if we have sanity tests
|
2018-11-28 18:05:13 +07:00
|
|
|
* enabled. Normal people shouldn't be using unmapped buffers as dirty
|
2013-09-20 03:07:01 +07:00
|
|
|
* outside of the sanity tests.
|
|
|
|
*/
|
2018-06-27 20:38:24 +07:00
|
|
|
if (unlikely(test_bit(EXTENT_BUFFER_UNMAPPED, &buf->bflags)))
|
2013-09-20 03:07:01 +07:00
|
|
|
return;
|
|
|
|
#endif
|
|
|
|
root = BTRFS_I(buf->pages[0]->mapping->host)->root;
|
2016-06-23 05:54:23 +07:00
|
|
|
fs_info = root->fs_info;
|
2009-03-09 22:45:38 +07:00
|
|
|
btrfs_assert_tree_locked(buf);
|
2016-06-23 05:54:23 +07:00
|
|
|
if (transid != fs_info->generation)
|
2016-09-20 21:05:00 +07:00
|
|
|
WARN(1, KERN_CRIT "btrfs transid mismatch buffer %llu, found %llu running %llu\n",
|
2016-06-23 05:54:23 +07:00
|
|
|
buf->start, transid, fs_info->generation);
|
2012-03-13 20:38:00 +07:00
|
|
|
was_dirty = set_extent_buffer_dirty(buf);
|
2013-01-29 17:09:20 +07:00
|
|
|
if (!was_dirty)
|
2017-06-21 01:01:20 +07:00
|
|
|
percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
|
|
|
|
buf->len,
|
|
|
|
fs_info->dirty_metadata_batch);
|
2014-04-09 21:37:06 +07:00
|
|
|
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
|
2017-11-08 07:54:24 +07:00
|
|
|
/*
|
|
|
|
* btrfs_mark_buffer_dirty() can be called with the item pointer set
|
|
|
|
* but the item data not yet updated, so only
|
|
|
|
* check item pointers here, not item data.
|
|
|
|
*/
|
|
|
|
if (btrfs_header_level(buf) == 0 &&
|
2019-03-20 22:24:18 +07:00
|
|
|
btrfs_check_leaf_relaxed(buf)) {
|
2017-06-29 23:37:49 +07:00
|
|
|
btrfs_print_leaf(buf);
|
2014-04-09 21:37:06 +07:00
|
|
|
ASSERT(0);
|
|
|
|
}
|
|
|
|
#endif
|
2007-02-02 21:18:22 +07:00
|
|
|
}
|
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
static void __btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info,
|
2012-11-14 21:34:34 +07:00
|
|
|
int flush_delayed)
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compared with Ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can do some delayed b+ tree insertion or deletion, we can improve the
performance, so we made this patch which implemented delayed directory name
index insertion/deletion and delayed inode update.
Implementation:
- introduce a delayed root object into the filesystem, which uses two lists to
manage the delayed nodes which are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. And the
other is used to manage the delayed nodes which is waiting to be dealt with
by the work thread.
- Every delayed node has two rb-tree, one is used to manage the directory name
index which is going to be inserted into b+ tree, and the other is used to
manage the directory name index which is going to be deleted from b+ tree.
- introduce a worker to deal with the delayed operation. This worker is used
to deal with the works of the delayed directory name index items insertion
and deletion and the delayed inode update.
When the delayed items are beyond the lower limit, we create works for some
delayed nodes and insert them into the work queue of the worker, and then
go back.
When the delayed items are beyond the upper bound, we create works for all
the delayed nodes that haven't been dealt with, insert them into the work
queue of the worker, and then wait until the number of untreated items is
below some threshold value.
- When we want to insert a directory name index into b+ tree, we just add the
information into the delayed inserting rb-tree.
And then we check the number of the delayed items and do delayed items
balance. (The balance policy is above.)
- When we want to delete a directory name index from the b+ tree, we search it
in the inserting rb-tree at first. If we look it up, just drop it. If not,
add the key of it into the delayed deleting rb-tree.
Similar to the delayed inserting rb-tree, we also check the number of the
delayed items and do delayed items balance.
(The same to inserting manipulation)
- When we want to update the metadata of some inode, we cached the data of the
inode into the delayed node. the worker will flush it into the b+ tree after
dealing with the delayed insertion and deletion.
- We will move the delayed node to the tail of the list after we access the
delayed node, By this way, we can cache more delayed items and merge more
inode updates.
- If we want to commit transaction, we will deal with all the delayed node.
- the delayed node will be freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test by the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks for Kitayama-san's help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 17:12:22 +07:00
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Older kernels can get into trouble with
|
|
|
|
* this code; they end up stuck in balance_dirty_pages() forever.
|
|
|
|
*/
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
#else /* MY_ABC_HERE */
|
2013-01-29 17:09:20 +07:00
|
|
|
int ret;
|
2024-07-05 23:00:04 +07:00
|
|
|
#endif /* MY_ABC_HERE */
|
2011-04-22 17:12:22 +07:00
|
|
|
|
|
|
|
if (current->flags & PF_MEMALLOC)
|
|
|
|
return;
|
|
|
|
|
2012-11-14 21:34:34 +07:00
|
|
|
if (flush_delayed)
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_balance_delayed_items(fs_info);
|
2011-04-22 17:12:22 +07:00
|
|
|
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
btrfs_syno_btree_balance_dirty(fs_info, true);
|
|
|
|
#else /* MY_ABC_HERE */
|
2018-07-02 14:44:58 +07:00
|
|
|
ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
|
|
|
|
BTRFS_DIRTY_METADATA_THRESH,
|
|
|
|
fs_info->dirty_metadata_batch);
|
2013-01-29 17:09:20 +07:00
|
|
|
if (ret > 0) {
|
2016-06-23 05:54:23 +07:00
|
|
|
balance_dirty_pages_ratelimited(fs_info->btree_inode->i_mapping);
|
2011-04-22 17:12:22 +07:00
|
|
|
}
|
2024-07-05 23:00:04 +07:00
|
|
|
#endif /* MY_ABC_HERE */
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compare with Ext3/4, the performance of file creation and deletion on btrfs
is very poor. the reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can do some delayed b+ tree insertion or deletion, we can improve the
performance, so we made this patch which implemented delayed directory name
index insertion/deletion and delayed inode update.
Implementation:
- introduce a delayed root object into the filesystem, which uses two lists to
manage the delayed nodes that are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. The
other is used to manage the delayed nodes which are waiting to be dealt with
by the work thread.
- Every delayed node has two rb-trees: one is used to manage the directory
name indexes which are going to be inserted into the b+ tree, and the other
is used to manage the directory name indexes which are going to be deleted
from the b+ tree.
- introduce a worker to deal with the delayed operations. This worker handles
the delayed directory name index item insertions and deletions and the
delayed inode updates.
When the number of delayed items exceeds the lower limit, we create work
items for some delayed nodes, insert them into the worker's work queue, and
then return.
When the number of delayed items exceeds the upper bound, we create work
items for all the delayed nodes that haven't been dealt with, insert them
into the worker's work queue, and then wait until the number of untreated
items falls below some threshold value.
- When we want to insert a directory name index into the b+ tree, we just add
the information to the delayed insertion rb-tree.
Then we check the number of delayed items and do delayed item balancing.
(The balance policy is described above.)
- When we want to delete a directory name index from the b+ tree, we first
search for it in the insertion rb-tree. If we find it, we just drop it. If
not, we add its key to the delayed deletion rb-tree.
As with the delayed insertion rb-tree, we also check the number of delayed
items and do delayed item balancing.
(The same as for insertion.)
- When we want to update the metadata of some inode, we cache the data of the
inode in the delayed node. The worker will flush it into the b+ tree after
dealing with the delayed insertions and deletions.
- We move the delayed node to the tail of the list after we access the
delayed node. In this way, we can cache more delayed items and merge more
inode updates.
- If we want to commit a transaction, we deal with all the delayed nodes.
- The delayed node is freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test with the benchmark tool [1] and found that we can improve
the performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks to Kitayama-san for his help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 17:12:22 +07:00
|
|
|
}
|
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
void btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info)
|
2007-05-03 02:53:43 +07:00
|
|
|
{
|
2016-06-23 05:54:24 +07:00
|
|
|
__btrfs_btree_balance_dirty(fs_info, 1);
|
2012-11-14 21:34:34 +07:00
|
|
|
}
|
2009-05-18 21:41:58 +07:00
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
void btrfs_btree_balance_dirty_nodelay(struct btrfs_fs_info *fs_info)
|
2012-11-14 21:34:34 +07:00
|
|
|
{
|
2016-06-23 05:54:24 +07:00
|
|
|
__btrfs_btree_balance_dirty(fs_info, 0);
|
2007-05-03 02:53:43 +07:00
|
|
|
}
|
2007-10-16 03:17:34 +07:00
|
|
|
|
2018-03-29 08:08:11 +07:00
|
|
|
int btrfs_read_buffer(struct extent_buffer *buf, u64 parent_transid, int level,
|
|
|
|
struct btrfs_key *first_key)
|
2007-10-16 03:17:34 +07:00
|
|
|
{
|
2019-03-20 20:56:39 +07:00
|
|
|
return btree_read_extent_buffer_pages(buf, parent_transid,
|
2018-03-29 08:08:11 +07:00
|
|
|
level, first_key);
|
2007-10-16 03:17:34 +07:00
|
|
|
}
|
2007-11-08 09:08:01 +07:00
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info)
|
2011-01-06 18:30:25 +07:00
|
|
|
{
|
2018-04-27 16:21:53 +07:00
|
|
|
/* cleanup FS via transaction */
|
|
|
|
btrfs_cleanup_transaction(fs_info);
|
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
mutex_lock(&fs_info->cleaner_mutex);
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_run_delayed_iputs(fs_info);
|
2016-06-23 05:54:23 +07:00
|
|
|
mutex_unlock(&fs_info->cleaner_mutex);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
down_write(&fs_info->cleanup_work_sem);
|
|
|
|
up_write(&fs_info->cleanup_work_sem);
|
2011-01-06 18:30:25 +07:00
|
|
|
}
|
|
|
|
|
2020-03-24 21:47:52 +07:00
|
|
|
static void btrfs_drop_all_logs(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
struct btrfs_root *gang[8];
|
|
|
|
u64 root_objectid = 0;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
spin_lock(&fs_info->fs_roots_radix_lock);
|
|
|
|
while ((ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
|
|
|
|
(void **)gang, root_objectid,
|
|
|
|
ARRAY_SIZE(gang))) != 0) {
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < ret; i++)
|
|
|
|
gang[i] = btrfs_grab_root(gang[i]);
|
|
|
|
spin_unlock(&fs_info->fs_roots_radix_lock);
|
|
|
|
|
|
|
|
for (i = 0; i < ret; i++) {
|
|
|
|
if (!gang[i])
|
|
|
|
continue;
|
|
|
|
root_objectid = gang[i]->root_key.objectid;
|
|
|
|
btrfs_free_log(NULL, gang[i]);
|
|
|
|
btrfs_put_root(gang[i]);
|
|
|
|
}
|
|
|
|
root_objectid++;
|
|
|
|
spin_lock(&fs_info->fs_roots_radix_lock);
|
|
|
|
}
|
|
|
|
spin_unlock(&fs_info->fs_roots_radix_lock);
|
|
|
|
btrfs_free_log_root_tree(NULL, fs_info);
|
|
|
|
}
|
|
|
|
|
2012-03-01 20:56:26 +07:00
|
|
|
static void btrfs_destroy_ordered_extents(struct btrfs_root *root)
|
2011-01-06 18:30:25 +07:00
|
|
|
{
|
|
|
|
struct btrfs_ordered_extent *ordered;
|
|
|
|
|
2013-05-15 14:48:23 +07:00
|
|
|
spin_lock(&root->ordered_extent_lock);
|
2013-02-01 02:30:08 +07:00
|
|
|
/*
|
|
|
|
* This will just short circuit the ordered completion stuff which will
|
|
|
|
* make sure the ordered extent gets properly cleaned up.
|
|
|
|
*/
|
2013-05-15 14:48:23 +07:00
|
|
|
list_for_each_entry(ordered, &root->ordered_extents,
|
2013-02-01 02:30:08 +07:00
|
|
|
root_extent_list)
|
|
|
|
set_bit(BTRFS_ORDERED_IOERR, &ordered->flags);
|
2013-05-15 14:48:23 +07:00
|
|
|
spin_unlock(&root->ordered_extent_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void btrfs_destroy_all_ordered_extents(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
struct btrfs_root *root;
|
|
|
|
struct list_head splice;
|
|
|
|
|
|
|
|
INIT_LIST_HEAD(&splice);
|
|
|
|
|
|
|
|
spin_lock(&fs_info->ordered_root_lock);
|
|
|
|
list_splice_init(&fs_info->ordered_roots, &splice);
|
|
|
|
while (!list_empty(&splice)) {
|
|
|
|
root = list_first_entry(&splice, struct btrfs_root,
|
|
|
|
ordered_root);
|
2013-09-28 03:36:02 +07:00
|
|
|
list_move_tail(&root->ordered_root,
|
|
|
|
&fs_info->ordered_roots);
|
2013-05-15 14:48:23 +07:00
|
|
|
|
2014-02-10 16:07:16 +07:00
|
|
|
spin_unlock(&fs_info->ordered_root_lock);
|
2013-05-15 14:48:23 +07:00
|
|
|
btrfs_destroy_ordered_extents(root);
|
|
|
|
|
2014-02-10 16:07:16 +07:00
|
|
|
cond_resched();
|
|
|
|
spin_lock(&fs_info->ordered_root_lock);
|
2013-05-15 14:48:23 +07:00
|
|
|
}
|
|
|
|
spin_unlock(&fs_info->ordered_root_lock);
|
2018-11-22 02:05:45 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We need this here because if we've been flipped read-only we won't
|
|
|
|
* get sync() from the umount, so we need to make sure any ordered
|
|
|
|
* extents that haven't had their dirty pages IO start writeout yet
|
|
|
|
* actually get run and error out properly.
|
|
|
|
*/
|
|
|
|
btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
|
2011-01-06 18:30:25 +07:00
|
|
|
}
|
|
|
|
|
2013-08-14 23:12:25 +07:00
|
|
|
static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
|
2016-06-23 05:54:24 +07:00
|
|
|
struct btrfs_fs_info *fs_info)
|
2011-01-06 18:30:25 +07:00
|
|
|
{
|
|
|
|
struct rb_node *node;
|
|
|
|
struct btrfs_delayed_ref_root *delayed_refs;
|
|
|
|
struct btrfs_delayed_ref_node *ref;
|
|
|
|
int ret = 0;
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
struct btrfs_delayed_data_ref *data_ref = NULL;
|
|
|
|
#endif /* MY_ABC_HERE */
|
2011-01-06 18:30:25 +07:00
|
|
|
|
|
|
|
delayed_refs = &trans->delayed_refs;
|
|
|
|
|
|
|
|
spin_lock(&delayed_refs->lock);
|
2014-01-23 21:21:38 +07:00
|
|
|
if (atomic_read(&delayed_refs->num_entries) == 0) {
|
2011-04-26 06:43:52 +07:00
|
|
|
spin_unlock(&delayed_refs->lock);
|
2019-11-28 21:34:28 +07:00
|
|
|
btrfs_debug(fs_info, "delayed_refs has NO entry");
|
2011-01-06 18:30:25 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-08-23 02:51:49 +07:00
|
|
|
while ((node = rb_first_cached(&delayed_refs->href_root)) != NULL) {
|
2014-01-23 21:21:38 +07:00
|
|
|
struct btrfs_delayed_ref_head *head;
|
2017-10-20 01:16:00 +07:00
|
|
|
struct rb_node *n;
|
2013-06-04 03:42:36 +07:00
|
|
|
bool pin_bytes = false;
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2014-01-23 21:21:38 +07:00
|
|
|
head = rb_entry(node, struct btrfs_delayed_ref_head,
|
|
|
|
href_node);
|
2018-11-22 02:05:39 +07:00
|
|
|
if (btrfs_delayed_ref_lock(delayed_refs, head))
|
2014-01-23 21:21:38 +07:00
|
|
|
continue;
|
2018-11-22 02:05:39 +07:00
|
|
|
|
2014-01-23 21:21:38 +07:00
|
|
|
spin_lock(&head->lock);
|
2018-08-23 02:51:50 +07:00
|
|
|
while ((n = rb_first_cached(&head->ref_tree)) != NULL) {
|
2017-10-20 01:16:00 +07:00
|
|
|
ref = rb_entry(n, struct btrfs_delayed_ref_node,
|
|
|
|
ref_node);
|
2014-01-23 21:21:38 +07:00
|
|
|
ref->in_tree = 0;
|
2018-08-23 02:51:50 +07:00
|
|
|
rb_erase_cached(&ref->ref_node, &head->ref_tree);
|
2017-10-20 01:16:00 +07:00
|
|
|
RB_CLEAR_NODE(&ref->ref_node);
|
btrfs: improve delayed refs iterations
This issue was found when I tried to delete a heavily reflinked file.
When deleting such files, other transaction operations do not get a
chance to make progress; for example, start_transaction() will block
in wait_current_trans(root) for a long time, sometimes it even triggers
soft lockups, and the time taken to delete such a heavily reflinked file
is also very large, often hundreds of seconds. Using perf top, it reports
that:
PerfTop: 7416 irqs/sec kernel:99.8% exact: 0.0% [4000Hz cpu-clock], (all, 4 CPUs)
---------------------------------------------------------------------------------------
84.37% [btrfs] [k] __btrfs_run_delayed_refs.constprop.80
11.02% [kernel] [k] delay_tsc
0.79% [kernel] [k] _raw_spin_unlock_irq
0.78% [kernel] [k] _raw_spin_unlock_irqrestore
0.45% [kernel] [k] do_raw_spin_lock
0.18% [kernel] [k] __slab_alloc
It seems __btrfs_run_delayed_refs() took most of the cpu time. After some
debugging work, I found that select_delayed_ref() causes this issue: for a
delayed head, in our case, it will be full of BTRFS_DROP_DELAYED_REF nodes,
but select_delayed_ref() will first try to iterate the node list to find
BTRFS_ADD_DELAYED_REF nodes, which is obviously a disaster in this case and
wastes much time.
To fix this issue, we introduce a new ref_add_list in struct
btrfs_delayed_ref_head; then in select_delayed_ref(), if this list is not
empty, we can directly use the nodes in this list. With this patch, it took
only about 10~15 seconds to delete the same file. Now using perf top, it
reports that:
PerfTop: 2734 irqs/sec kernel:99.5% exact: 0.0% [4000Hz cpu-clock], (all, 4 CPUs)
----------------------------------------------------------------------------------------
20.74% [kernel] [k] _raw_spin_unlock_irqrestore
16.33% [kernel] [k] __slab_alloc
5.41% [kernel] [k] lock_acquired
4.42% [kernel] [k] lock_acquire
4.05% [kernel] [k] lock_release
3.37% [kernel] [k] _raw_spin_unlock_irq
For normal files, this patch also helps; at least we do not need to
iterate the whole list to find BTRFS_ADD_DELAYED_REF nodes.
Signed-off-by: Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-10-26 17:07:33 +07:00
|
|
|
if (!list_empty(&ref->add_list))
|
|
|
|
list_del(&ref->add_list);
|
2024-07-05 23:00:04 +07:00
|
|
|
#ifdef MY_ABC_HERE
|
|
|
|
if (ref->type == BTRFS_EXTENT_DATA_REF_KEY ||
|
|
|
|
ref->type == BTRFS_SHARED_DATA_REF_KEY) {
|
|
|
|
data_ref = btrfs_delayed_node_to_data_ref(ref);
|
|
|
|
if (data_ref->syno_usage)
|
|
|
|
atomic_dec(&delayed_refs->num_syno_usage_entries);
|
|
|
|
}
|
|
|
|
#endif /* MY_ABC_HERE */
|
2014-01-23 21:21:38 +07:00
|
|
|
atomic_dec(&delayed_refs->num_entries);
|
|
|
|
btrfs_put_delayed_ref(ref);
|
2013-06-04 03:42:36 +07:00
|
|
|
}
|
2014-01-23 21:21:38 +07:00
|
|
|
if (head->must_insert_reserved)
|
|
|
|
pin_bytes = true;
|
|
|
|
btrfs_free_delayed_extent_op(head->extent_op);
|
2018-11-22 02:05:40 +07:00
|
|
|
btrfs_delete_ref_head(delayed_refs, head);
|
2014-01-23 21:21:38 +07:00
|
|
|
spin_unlock(&head->lock);
|
|
|
|
spin_unlock(&delayed_refs->lock);
|
|
|
|
mutex_unlock(&head->mutex);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2020-01-20 21:09:08 +07:00
|
|
|
if (pin_bytes) {
|
|
|
|
struct btrfs_block_group *cache;
|
|
|
|
|
|
|
|
cache = btrfs_lookup_block_group(fs_info, head->bytenr);
|
|
|
|
BUG_ON(!cache);
|
|
|
|
|
|
|
|
spin_lock(&cache->space_info->lock);
|
|
|
|
spin_lock(&cache->lock);
|
|
|
|
cache->pinned += head->num_bytes;
|
|
|
|
btrfs_space_info_update_bytes_pinned(fs_info,
|
|
|
|
cache->space_info, head->num_bytes);
|
|
|
|
cache->reserved -= head->num_bytes;
|
|
|
|
cache->space_info->bytes_reserved -= head->num_bytes;
|
|
|
|
spin_unlock(&cache->lock);
|
|
|
|
spin_unlock(&cache->space_info->lock);
|
|
|
|
percpu_counter_add_batch(
|
|
|
|
&cache->space_info->total_bytes_pinned,
|
|
|
|
head->num_bytes, BTRFS_TOTAL_BYTES_PINNED_BATCH);
|
|
|
|
|
|
|
|
btrfs_put_block_group(cache);
|
|
|
|
|
|
|
|
btrfs_error_unpin_extent_range(fs_info, head->bytenr,
|
|
|
|
head->bytenr + head->num_bytes - 1);
|
|
|
|
}
|
2018-11-22 02:05:41 +07:00
|
|
|
btrfs_cleanup_ref_head_accounting(fs_info, delayed_refs, head);
|
2017-09-30 02:43:57 +07:00
|
|
|
btrfs_put_delayed_ref_head(head);
|
2011-01-06 18:30:25 +07:00
|
|
|
cond_resched();
|
|
|
|
spin_lock(&delayed_refs->lock);
|
|
|
|
}
|
2020-02-11 14:25:37 +07:00
|
|
|
btrfs_qgroup_destroy_extent_records(trans);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
|
|
|
spin_unlock(&delayed_refs->lock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2012-03-01 20:56:26 +07:00
|
|
|
static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root)
|
2011-01-06 18:30:25 +07:00
|
|
|
{
|
|
|
|
struct btrfs_inode *btrfs_inode;
|
|
|
|
struct list_head splice;
|
|
|
|
|
|
|
|
INIT_LIST_HEAD(&splice);
|
|
|
|
|
2013-05-15 14:48:22 +07:00
|
|
|
spin_lock(&root->delalloc_lock);
|
|
|
|
list_splice_init(&root->delalloc_inodes, &splice);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
|
|
|
while (!list_empty(&splice)) {
|
2018-04-27 16:21:53 +07:00
|
|
|
struct inode *inode = NULL;
|
2013-05-15 14:48:22 +07:00
|
|
|
btrfs_inode = list_first_entry(&splice, struct btrfs_inode,
|
|
|
|
delalloc_inodes);
|
2018-04-27 16:21:53 +07:00
|
|
|
__btrfs_del_delalloc_inode(root, btrfs_inode);
|
2013-05-15 14:48:22 +07:00
|
|
|
spin_unlock(&root->delalloc_lock);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2018-04-27 16:21:53 +07:00
|
|
|
/*
|
|
|
|
* Make sure we get a live inode and that it'll not disappear
|
|
|
|
* meanwhile.
|
|
|
|
*/
|
|
|
|
inode = igrab(&btrfs_inode->vfs_inode);
|
|
|
|
if (inode) {
|
|
|
|
invalidate_inode_pages2(inode->i_mapping);
|
|
|
|
iput(inode);
|
|
|
|
}
|
2013-05-15 14:48:22 +07:00
|
|
|
spin_lock(&root->delalloc_lock);
|
2011-01-06 18:30:25 +07:00
|
|
|
}
|
2013-05-15 14:48:22 +07:00
|
|
|
spin_unlock(&root->delalloc_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void btrfs_destroy_all_delalloc_inodes(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
struct btrfs_root *root;
|
|
|
|
struct list_head splice;
|
|
|
|
|
|
|
|
INIT_LIST_HEAD(&splice);
|
|
|
|
|
|
|
|
spin_lock(&fs_info->delalloc_root_lock);
|
|
|
|
list_splice_init(&fs_info->delalloc_roots, &splice);
|
|
|
|
while (!list_empty(&splice)) {
|
|
|
|
root = list_first_entry(&splice, struct btrfs_root,
|
|
|
|
delalloc_root);
|
2020-01-24 21:33:01 +07:00
|
|
|
root = btrfs_grab_root(root);
|
2013-05-15 14:48:22 +07:00
|
|
|
BUG_ON(!root);
|
|
|
|
spin_unlock(&fs_info->delalloc_root_lock);
|
|
|
|
|
|
|
|
btrfs_destroy_delalloc_inodes(root);
|
2020-01-24 21:33:01 +07:00
|
|
|
btrfs_put_root(root);
|
2013-05-15 14:48:22 +07:00
|
|
|
|
|
|
|
spin_lock(&fs_info->delalloc_root_lock);
|
|
|
|
}
|
|
|
|
spin_unlock(&fs_info->delalloc_root_lock);
|
2011-01-06 18:30:25 +07:00
|
|
|
}
|
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
static int btrfs_destroy_marked_extents(struct btrfs_fs_info *fs_info,
|
2011-01-06 18:30:25 +07:00
|
|
|
struct extent_io_tree *dirty_pages,
|
|
|
|
int mark)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct extent_buffer *eb;
|
|
|
|
u64 start = 0;
|
|
|
|
u64 end;
|
|
|
|
|
|
|
|
while (1) {
|
|
|
|
ret = find_first_extent_bit(dirty_pages, start, &start, &end,
|
2012-09-28 04:07:30 +07:00
|
|
|
mark, NULL);
|
2011-01-06 18:30:25 +07:00
|
|
|
if (ret)
|
|
|
|
break;
|
|
|
|
|
2016-04-27 04:54:39 +07:00
|
|
|
clear_extent_bits(dirty_pages, start, end, mark);
|
2011-01-06 18:30:25 +07:00
|
|
|
while (start <= end) {
|
2016-06-23 05:54:23 +07:00
|
|
|
eb = find_extent_buffer(fs_info, start);
|
|
|
|
start += fs_info->nodesize;
|
2013-04-25 03:41:19 +07:00
|
|
|
if (!eb)
|
2011-01-06 18:30:25 +07:00
|
|
|
continue;
|
2013-04-25 03:41:19 +07:00
|
|
|
wait_on_extent_buffer_writeback(eb);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2013-04-25 03:41:19 +07:00
|
|
|
if (test_and_clear_bit(EXTENT_BUFFER_DIRTY,
|
|
|
|
&eb->bflags))
|
|
|
|
clear_extent_buffer_dirty(eb);
|
|
|
|
free_extent_buffer_stale(eb);
|
2011-01-06 18:30:25 +07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
static int btrfs_destroy_pinned_extent(struct btrfs_fs_info *fs_info,
|
2020-01-20 21:09:18 +07:00
|
|
|
struct extent_io_tree *unpin)
|
2011-01-06 18:30:25 +07:00
|
|
|
{
|
|
|
|
u64 start;
|
|
|
|
u64 end;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
while (1) {
|
2018-11-16 20:04:44 +07:00
|
|
|
struct extent_state *cached_state = NULL;
|
|
|
|
|
btrfs: fix pinned underflow after transaction aborted
When running generic/475, we may get the following warning in dmesg:
[ 6902.102154] WARNING: CPU: 3 PID: 18013 at fs/btrfs/extent-tree.c:9776 btrfs_free_block_groups+0x2af/0x3b0 [btrfs]
[ 6902.109160] CPU: 3 PID: 18013 Comm: umount Tainted: G W O 4.19.0-rc8+ #8
[ 6902.110971] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[ 6902.112857] RIP: 0010:btrfs_free_block_groups+0x2af/0x3b0 [btrfs]
[ 6902.118921] RSP: 0018:ffffc9000459bdb0 EFLAGS: 00010286
[ 6902.120315] RAX: ffff880175050bb0 RBX: ffff8801124a8000 RCX: 0000000000170007
[ 6902.121969] RDX: 0000000000000002 RSI: 0000000000170007 RDI: ffffffff8125fb74
[ 6902.123716] RBP: ffff880175055d10 R08: 0000000000000000 R09: 0000000000000000
[ 6902.125417] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880175055d88
[ 6902.127129] R13: ffff880175050bb0 R14: 0000000000000000 R15: dead000000000100
[ 6902.129060] FS: 00007f4507223780(0000) GS:ffff88017ba00000(0000) knlGS:0000000000000000
[ 6902.130996] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6902.132558] CR2: 00005623599cac78 CR3: 000000014b700001 CR4: 00000000003606e0
[ 6902.134270] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6902.135981] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 6902.137836] Call Trace:
[ 6902.138939] close_ctree+0x171/0x330 [btrfs]
[ 6902.140181] ? kthread_stop+0x146/0x1f0
[ 6902.141277] generic_shutdown_super+0x6c/0x100
[ 6902.142517] kill_anon_super+0x14/0x30
[ 6902.143554] btrfs_kill_super+0x13/0x100 [btrfs]
[ 6902.144790] deactivate_locked_super+0x2f/0x70
[ 6902.146014] cleanup_mnt+0x3b/0x70
[ 6902.147020] task_work_run+0x9e/0xd0
[ 6902.148036] do_syscall_64+0x470/0x600
[ 6902.149142] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 6902.150375] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 6902.151640] RIP: 0033:0x7f45077a6a7b
[ 6902.157324] RSP: 002b:00007ffd589f3e68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[ 6902.159187] RAX: 0000000000000000 RBX: 000055e8eec732b0 RCX: 00007f45077a6a7b
[ 6902.160834] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 000055e8eec73490
[ 6902.162526] RBP: 0000000000000000 R08: 000055e8eec734b0 R09: 00007ffd589f26c0
[ 6902.164141] R10: 0000000000000000 R11: 0000000000000246 R12: 000055e8eec73490
[ 6902.165815] R13: 00007f4507ac61a4 R14: 0000000000000000 R15: 00007ffd589f40d8
[ 6902.167553] irq event stamp: 0
[ 6902.168998] hardirqs last enabled at (0): [<0000000000000000>] (null)
[ 6902.170731] hardirqs last disabled at (0): [<ffffffff810cd810>] copy_process.part.55+0x3b0/0x1f00
[ 6902.172773] softirqs last enabled at (0): [<ffffffff810cd810>] copy_process.part.55+0x3b0/0x1f00
[ 6902.174671] softirqs last disabled at (0): [<0000000000000000>] (null)
[ 6902.176407] ---[ end trace 463138c2986b275c ]---
[ 6902.177636] BTRFS info (device dm-3): space_info 4 has 273465344 free, is not full
[ 6902.179453] BTRFS info (device dm-3): space_info total=276824064, used=4685824, pinned=18446744073708158976, reserved=0, may_use=0, readonly=65536
In the above line there's "pinned=18446744073708158976", which is the
unsigned u64 representation of -1392640, an obvious underflow.
When transaction_kthread is running cleanup_transaction(), another
fsstress is running btrfs_commit_transaction(). The
btrfs_finish_extent_commit() may get the same range as
btrfs_destroy_pinned_extent() got, which causes the pinned underflow.
Fixes: d4b450cd4b33 ("Btrfs: fix race between transaction commit and empty block group removal")
CC: stable@vger.kernel.org # 4.4+
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2018-10-24 19:24:03 +07:00
|
|
|
/*
|
|
|
|
* The btrfs_finish_extent_commit() may get the same range as
|
|
|
|
* ours between find_first_extent_bit and clear_extent_dirty.
|
|
|
|
* Hence, hold the unused_bg_unpin_mutex to avoid double unpin
|
|
|
|
* the same extent range.
|
|
|
|
*/
|
|
|
|
mutex_lock(&fs_info->unused_bg_unpin_mutex);
|
2011-01-06 18:30:25 +07:00
|
|
|
ret = find_first_extent_bit(unpin, 0, &start, &end,
|
2018-11-16 20:04:44 +07:00
|
|
|
EXTENT_DIRTY, &cached_state);
|
2018-10-24 19:24:03 +07:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&fs_info->unused_bg_unpin_mutex);
|
2011-01-06 18:30:25 +07:00
|
|
|
break;
|
2018-10-24 19:24:03 +07:00
|
|
|
}
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2018-11-16 20:04:44 +07:00
|
|
|
clear_extent_dirty(unpin, start, end, &cached_state);
|
|
|
|
free_extent_state(cached_state);
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_error_unpin_extent_range(fs_info, start, end);
|
btrfs: fix pinned underflow after transaction aborted
When running generic/475, we may get the following warning in dmesg:
[ 6902.102154] WARNING: CPU: 3 PID: 18013 at fs/btrfs/extent-tree.c:9776 btrfs_free_block_groups+0x2af/0x3b0 [btrfs]
[ 6902.109160] CPU: 3 PID: 18013 Comm: umount Tainted: G W O 4.19.0-rc8+ #8
[ 6902.110971] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[ 6902.112857] RIP: 0010:btrfs_free_block_groups+0x2af/0x3b0 [btrfs]
[ 6902.118921] RSP: 0018:ffffc9000459bdb0 EFLAGS: 00010286
[ 6902.120315] RAX: ffff880175050bb0 RBX: ffff8801124a8000 RCX: 0000000000170007
[ 6902.121969] RDX: 0000000000000002 RSI: 0000000000170007 RDI: ffffffff8125fb74
[ 6902.123716] RBP: ffff880175055d10 R08: 0000000000000000 R09: 0000000000000000
[ 6902.125417] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880175055d88
[ 6902.127129] R13: ffff880175050bb0 R14: 0000000000000000 R15: dead000000000100
[ 6902.129060] FS: 00007f4507223780(0000) GS:ffff88017ba00000(0000) knlGS:0000000000000000
[ 6902.130996] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6902.132558] CR2: 00005623599cac78 CR3: 000000014b700001 CR4: 00000000003606e0
[ 6902.134270] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6902.135981] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 6902.137836] Call Trace:
[ 6902.138939] close_ctree+0x171/0x330 [btrfs]
[ 6902.140181] ? kthread_stop+0x146/0x1f0
[ 6902.141277] generic_shutdown_super+0x6c/0x100
[ 6902.142517] kill_anon_super+0x14/0x30
[ 6902.143554] btrfs_kill_super+0x13/0x100 [btrfs]
[ 6902.144790] deactivate_locked_super+0x2f/0x70
[ 6902.146014] cleanup_mnt+0x3b/0x70
[ 6902.147020] task_work_run+0x9e/0xd0
[ 6902.148036] do_syscall_64+0x470/0x600
[ 6902.149142] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 6902.150375] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 6902.151640] RIP: 0033:0x7f45077a6a7b
[ 6902.157324] RSP: 002b:00007ffd589f3e68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[ 6902.159187] RAX: 0000000000000000 RBX: 000055e8eec732b0 RCX: 00007f45077a6a7b
[ 6902.160834] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 000055e8eec73490
[ 6902.162526] RBP: 0000000000000000 R08: 000055e8eec734b0 R09: 00007ffd589f26c0
[ 6902.164141] R10: 0000000000000000 R11: 0000000000000246 R12: 000055e8eec73490
[ 6902.165815] R13: 00007f4507ac61a4 R14: 0000000000000000 R15: 00007ffd589f40d8
[ 6902.167553] irq event stamp: 0
[ 6902.168998] hardirqs last enabled at (0): [<0000000000000000>] (null)
[ 6902.170731] hardirqs last disabled at (0): [<ffffffff810cd810>] copy_process.part.55+0x3b0/0x1f00
[ 6902.172773] softirqs last enabled at (0): [<ffffffff810cd810>] copy_process.part.55+0x3b0/0x1f00
[ 6902.174671] softirqs last disabled at (0): [<0000000000000000>] (null)
[ 6902.176407] ---[ end trace 463138c2986b275c ]---
[ 6902.177636] BTRFS info (device dm-3): space_info 4 has 273465344 free, is not full
[ 6902.179453] BTRFS info (device dm-3): space_info total=276824064, used=4685824, pinned=18446744073708158976, reserved=0, may_use=0, readonly=65536
In the above line, "pinned=18446744073708158976" is the unsigned u64
representation of -1392640, an obvious underflow.
While transaction_kthread is running cleanup_transaction(), another
fsstress task is running btrfs_commit_transaction(). There,
btrfs_finish_extent_commit() may get the same range that
btrfs_destroy_pinned_extent() already got, which causes the pinned
underflow.
Fixes: d4b450cd4b33 ("Btrfs: fix race between transaction commit and empty block group removal")
CC: stable@vger.kernel.org # 4.4+
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2018-10-24 19:24:03 +07:00
|
|
|
mutex_unlock(&fs_info->unused_bg_unpin_mutex);
|
2011-01-06 18:30:25 +07:00
|
|
|
cond_resched();
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-10-30 01:20:18 +07:00
|
|
|
static void btrfs_cleanup_bg_io(struct btrfs_block_group *cache)
|
2016-07-21 07:44:12 +07:00
|
|
|
{
|
|
|
|
struct inode *inode;
|
|
|
|
|
|
|
|
inode = cache->io_ctl.inode;
|
|
|
|
if (inode) {
|
|
|
|
invalidate_inode_pages2(inode->i_mapping);
|
|
|
|
BTRFS_I(inode)->generation = 0;
|
|
|
|
cache->io_ctl.inode = NULL;
|
|
|
|
iput(inode);
|
|
|
|
}
|
btrfs: fix space cache memory leak after transaction abort
If a transaction aborts it can cause a memory leak of the pages array of
a block group's io_ctl structure. The following steps explain how that can
happen:
1) Transaction N is committing, currently in state TRANS_STATE_UNBLOCKED
and it's about to start writing out dirty extent buffers;
2) Transaction N + 1 already started and another task, task A, just called
btrfs_commit_transaction() on it;
3) Block group B was dirtied (extents allocated from it) by transaction
N + 1, so when task A calls btrfs_start_dirty_block_groups(), at the
very beginning of the transaction commit, it starts writeback for the
block group's space cache by calling btrfs_write_out_cache(), which
allocates the pages array for the block group's io_ctl with a call to
io_ctl_init(). Block group B is added to the io_list of transaction
N + 1 by btrfs_start_dirty_block_groups();
4) While transaction N's commit is writing out the extent buffers, it gets
an IO error and aborts transaction N, also setting the file system to
RO mode;
5) Task A has already returned from btrfs_start_dirty_block_groups(), is at
btrfs_commit_transaction() and has set transaction N + 1 state to
TRANS_STATE_COMMIT_START. Immediately after that it checks that the
filesystem was turned to RO mode, due to transaction N's abort, and
jumps to the "cleanup_transaction" label. After that we end up at
btrfs_cleanup_one_transaction() which calls btrfs_cleanup_dirty_bgs().
That helper finds block group B in the transaction's io_list but it
never releases the pages array of the block group's io_ctl, resulting in
a memory leak.
In fact at the point when we are at btrfs_cleanup_dirty_bgs(), the pages
array points to pages that were already released by us at
__btrfs_write_out_cache() through the call to io_ctl_drop_pages(). We end
up freeing the pages array only after waiting for the ordered extent to
complete through btrfs_wait_cache_io(), which calls io_ctl_free() to do
that. But in the transaction abort case we don't wait for the space cache's
ordered extent to complete through a call to btrfs_wait_cache_io(), so
that's why we end up with a memory leak - we wait for the ordered extent
to complete indirectly by shutting down the work queues and waiting for
any jobs in them to complete before returning from close_ctree().
We can solve the leak simply by freeing the pages array right after
releasing the pages (with the call to io_ctl_drop_pages()) at
__btrfs_write_out_cache(), since we will never use it again after that.
At that point the pages array points to already released pages; that is
currently harmless since no one uses it afterwards, but it is bad
practice anyway, as it can easily lead to use-after-free issues. So fix
this by freeing the pages array right there.
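The shape of the fix can be sketched as follows (the struct and helpers are simplified stand-ins, not the real io_ctl API):

```c
#include <stdlib.h>

/* Simplified stand-in for the block group's io_ctl. */
struct io_ctl {
	void **pages;
	int num_pages;
};

/* Release the pages themselves (put_page() in the real code). */
static void io_ctl_drop_pages(struct io_ctl *io_ctl)
{
	for (int i = 0; i < io_ctl->num_pages; i++)
		io_ctl->pages[i] = NULL;
}

/* Free the pages array itself. */
static void io_ctl_free(struct io_ctl *io_ctl)
{
	free(io_ctl->pages);
	io_ctl->pages = NULL;
}

static void write_out_cache(struct io_ctl *io_ctl)
{
	/* ... write the pages, start the ordered extent ... */
	io_ctl_drop_pages(io_ctl);
	/* The fix: free the array here rather than waiting for
	 * btrfs_wait_cache_io(), which an aborted transaction skips. */
	io_ctl_free(io_ctl);
}
```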
This issue can often be reproduced with test case generic/475 from fstests
and kmemleak can detect it and reports it with the following trace:
unreferenced object 0xffff9bbf009fa600 (size 512):
comm "fsstress", pid 38807, jiffies 4298504428 (age 22.028s)
hex dump (first 32 bytes):
00 a0 7c 4d 3d ed ff ff 40 a0 7c 4d 3d ed ff ff ..|M=...@.|M=...
80 a0 7c 4d 3d ed ff ff c0 a0 7c 4d 3d ed ff ff ..|M=.....|M=...
backtrace:
[<00000000f4b5cfe2>] __kmalloc+0x1a8/0x3e0
[<0000000028665e7f>] io_ctl_init+0xa7/0x120 [btrfs]
[<00000000a1f95b2d>] __btrfs_write_out_cache+0x86/0x4a0 [btrfs]
[<00000000207ea1b0>] btrfs_write_out_cache+0x7f/0xf0 [btrfs]
[<00000000af21f534>] btrfs_start_dirty_block_groups+0x27b/0x580 [btrfs]
[<00000000c3c23d44>] btrfs_commit_transaction+0xa6f/0xe70 [btrfs]
[<000000009588930c>] create_subvol+0x581/0x9a0 [btrfs]
[<000000009ef2fd7f>] btrfs_mksubvol+0x3fb/0x4a0 [btrfs]
[<00000000474e5187>] __btrfs_ioctl_snap_create+0x119/0x1a0 [btrfs]
[<00000000708ee349>] btrfs_ioctl_snap_create_v2+0xb0/0xf0 [btrfs]
[<00000000ea60106f>] btrfs_ioctl+0x12c/0x3130 [btrfs]
[<000000005c923d6d>] __x64_sys_ioctl+0x83/0xb0
[<0000000043ace2c9>] do_syscall_64+0x33/0x80
[<00000000904efbce>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
CC: stable@vger.kernel.org # 4.9+
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-08-14 17:04:09 +07:00
|
|
|
ASSERT(cache->io_ctl.pages == NULL);
|
2016-07-21 07:44:12 +07:00
|
|
|
btrfs_put_block_group(cache);
|
|
|
|
}
|
|
|
|
|
|
|
|
void btrfs_cleanup_dirty_bgs(struct btrfs_transaction *cur_trans,
|
2016-06-23 05:54:24 +07:00
|
|
|
struct btrfs_fs_info *fs_info)
|
2016-07-21 07:44:12 +07:00
|
|
|
{
|
2019-10-30 01:20:18 +07:00
|
|
|
struct btrfs_block_group *cache;
|
2016-07-21 07:44:12 +07:00
|
|
|
|
|
|
|
spin_lock(&cur_trans->dirty_bgs_lock);
|
|
|
|
while (!list_empty(&cur_trans->dirty_bgs)) {
|
|
|
|
cache = list_first_entry(&cur_trans->dirty_bgs,
|
2019-10-30 01:20:18 +07:00
|
|
|
struct btrfs_block_group,
|
2016-07-21 07:44:12 +07:00
|
|
|
dirty_list);
|
|
|
|
|
|
|
|
if (!list_empty(&cache->io_list)) {
|
|
|
|
spin_unlock(&cur_trans->dirty_bgs_lock);
|
|
|
|
list_del_init(&cache->io_list);
|
|
|
|
btrfs_cleanup_bg_io(cache);
|
|
|
|
spin_lock(&cur_trans->dirty_bgs_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
list_del_init(&cache->dirty_list);
|
|
|
|
spin_lock(&cache->lock);
|
|
|
|
cache->disk_cache_state = BTRFS_DC_ERROR;
|
|
|
|
spin_unlock(&cache->lock);
|
|
|
|
|
|
|
|
spin_unlock(&cur_trans->dirty_bgs_lock);
|
|
|
|
btrfs_put_block_group(cache);
|
btrfs: introduce delayed_refs_rsv
Traditionally we've had voodoo in btrfs to account for the space that
delayed refs may take up by having a global_block_rsv. This works most
of the time, except when it doesn't. We've had issues reported and seen
in production where sometimes the global reserve is exhausted during
transaction commit before we can run all of our delayed refs, resulting
in an aborted transaction. Because of this voodoo we have equally
dubious flushing semantics around throttling delayed refs which we often
get wrong.
So instead give them their own block_rsv. This way we can always know
exactly how much outstanding space we need for delayed refs. This
allows us to make sure we are constantly filling that reservation up
with space, and allows us to put more precise pressure on the enospc
system. Instead of doing math to see if it's a good time to throttle,
the normal enospc code will be invoked if we have a lot of delayed refs
pending, and they will be run via the normal flushing mechanism.
For now the delayed_refs_rsv will hold the reservations for the delayed
refs, the block group updates, and deleting csums. We could have a
separate rsv for the block group updates, but the csum deletion stuff is
still handled via the delayed_refs so that will stay there.
Historical background:
The global reserve has grown to cover everything we don't reserve space
explicitly for, and we've grown a lot of weird ad-hoc heuristics to know
if we're running short on space and when it's time to force a commit. A
failure rate of 20-40 file systems when we run hundreds of thousands of
them isn't super high, but cleaning up this code will make things less
ugly and more predictable.
Thus the delayed refs rsv. We always know how many delayed refs we have
outstanding, and although running them generates more we can use the
global reserve for that spill-over, which fits its desired use better
than a full-blown reservation. This first approach simply takes the
number of reservations we are making and multiplies it by 2, in order
to save enough space for the delayed refs that could be generated.
This is a naive approach and will probably evolve, but for now it works.
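The "multiply by 2" sizing reduces to one line of arithmetic; a rough sketch (the function name and the use of nodesize as the per-ref unit are assumptions, not the exact btrfs math):

```c
#include <stdint.h>

/* Rough sketch: reserve twice the metadata a pending delayed ref head
 * might need, so refs generated while running the existing ones still
 * fit.  Charging one tree block (nodesize) per ref is an assumption. */
static uint64_t delayed_refs_rsv_size(uint64_t num_ref_heads,
				      uint32_t nodesize)
{
	return 2 * num_ref_heads * (uint64_t)nodesize;
}
```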
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com> # high-level review
[ added background notes from the cover letter ]
Signed-off-by: David Sterba <dsterba@suse.com>
2018-12-03 22:20:33 +07:00
|
|
|
btrfs_delayed_refs_rsv_release(fs_info, 1);
|
2016-07-21 07:44:12 +07:00
|
|
|
spin_lock(&cur_trans->dirty_bgs_lock);
|
|
|
|
}
|
|
|
|
spin_unlock(&cur_trans->dirty_bgs_lock);
|
|
|
|
|
2018-02-08 23:25:18 +07:00
|
|
|
/*
|
|
|
|
* Refer to the definition of io_bgs member for details why it's safe
|
|
|
|
* to use it without any locking
|
|
|
|
*/
|
2016-07-21 07:44:12 +07:00
|
|
|
while (!list_empty(&cur_trans->io_bgs)) {
|
|
|
|
cache = list_first_entry(&cur_trans->io_bgs,
|
2019-10-30 01:20:18 +07:00
|
|
|
struct btrfs_block_group,
|
2016-07-21 07:44:12 +07:00
|
|
|
io_list);
|
|
|
|
|
|
|
|
list_del_init(&cache->io_list);
|
|
|
|
spin_lock(&cache->lock);
|
|
|
|
cache->disk_cache_state = BTRFS_DC_ERROR;
|
|
|
|
spin_unlock(&cache->lock);
|
|
|
|
btrfs_cleanup_bg_io(cache);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-03-01 23:24:58 +07:00
|
|
|
void btrfs_cleanup_one_transaction(struct btrfs_transaction *cur_trans,
|
2016-06-23 05:54:24 +07:00
|
|
|
struct btrfs_fs_info *fs_info)
|
2012-03-01 23:24:58 +07:00
|
|
|
{
|
2019-03-25 19:31:22 +07:00
|
|
|
struct btrfs_device *dev, *tmp;
|
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_cleanup_dirty_bgs(cur_trans, fs_info);
|
2016-07-21 07:44:12 +07:00
|
|
|
ASSERT(list_empty(&cur_trans->dirty_bgs));
|
|
|
|
ASSERT(list_empty(&cur_trans->io_bgs));
|
|
|
|
|
2019-03-25 19:31:22 +07:00
|
|
|
list_for_each_entry_safe(dev, tmp, &cur_trans->dev_update_list,
|
|
|
|
post_commit_list) {
|
|
|
|
list_del_init(&dev->post_commit_list);
|
|
|
|
}
|
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_destroy_delayed_refs(cur_trans, fs_info);
|
2012-03-01 23:24:58 +07:00
|
|
|
|
Btrfs: make the state of the transaction more readable
We used 3 variables to track the state of the transaction, which was
complex, wasted memory, and made it hard to understand which types of
transaction handles should be blocked in each transaction state, so
developers often made mistakes.
This patch improves on that. We define 6 states for the transaction,
enum btrfs_trans_state {
TRANS_STATE_RUNNING = 0,
TRANS_STATE_BLOCKED = 1,
TRANS_STATE_COMMIT_START = 2,
TRANS_STATE_COMMIT_DOING = 3,
TRANS_STATE_UNBLOCKED = 4,
TRANS_STATE_COMPLETED = 5,
TRANS_STATE_MAX = 6,
}
and use just one variable to track the state.
In order to make the blocked handle types for each state clearer,
we introduce an array:
unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
[TRANS_STATE_RUNNING] = 0U,
[TRANS_STATE_BLOCKED] = (__TRANS_USERSPACE |
__TRANS_START),
[TRANS_STATE_COMMIT_START] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH),
[TRANS_STATE_COMMIT_DOING] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN),
[TRANS_STATE_UNBLOCKED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
[TRANS_STATE_COMPLETED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
}
which is very intuitive.
Besides that, because we removed ->in_commit from the transaction
structure, the ->commit_lock that was used to protect it is no longer
necessary, so remove ->commit_lock.
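The lookup this array enables reduces to a single bitmask test; a condensed sketch (a subset of the states, with bit values chosen for illustration, not the full kernel definitions):

```c
/* Handle-type bits, mirroring the __TRANS_* flags in the table above. */
#define __TRANS_USERSPACE   (1U << 0)
#define __TRANS_START       (1U << 1)
#define __TRANS_ATTACH      (1U << 2)
#define __TRANS_JOIN        (1U << 3)
#define __TRANS_JOIN_NOLOCK (1U << 4)

enum trans_state {
	S_RUNNING,
	S_COMMIT_START,
	S_COMMIT_DOING,
	S_MAX,
};

/* Which handle types each state blocks (condensed from the table). */
static const unsigned int blocked_trans_types[S_MAX] = {
	[S_RUNNING]      = 0U,
	[S_COMMIT_START] = __TRANS_USERSPACE | __TRANS_START | __TRANS_ATTACH,
	[S_COMMIT_DOING] = __TRANS_USERSPACE | __TRANS_START |
			   __TRANS_ATTACH | __TRANS_JOIN,
};

/* A handle of the given type must wait if its bit is set for the state. */
static int trans_type_blocked(enum trans_state state, unsigned int type)
{
	return (blocked_trans_types[state] & type) != 0;
}
```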
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-17 10:53:43 +07:00
|
|
|
cur_trans->state = TRANS_STATE_COMMIT_START;
|
2016-06-23 05:54:23 +07:00
|
|
|
wake_up(&fs_info->transaction_blocked_wait);
|
2012-03-01 23:24:58 +07:00
|
|
|
|
2013-05-17 10:53:43 +07:00
|
|
|
cur_trans->state = TRANS_STATE_UNBLOCKED;
|
2016-06-23 05:54:23 +07:00
|
|
|
wake_up(&fs_info->transaction_wait);
|
2012-03-01 23:24:58 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
btrfs_destroy_delayed_inodes(fs_info);
|
2012-03-01 23:24:58 +07:00
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_destroy_marked_extents(fs_info, &cur_trans->dirty_pages,
|
2012-03-01 23:24:58 +07:00
|
|
|
EXTENT_DIRTY);
|
2020-01-20 21:09:18 +07:00
|
|
|
btrfs_destroy_pinned_extent(fs_info, &cur_trans->pinned_extents);
|
2012-03-01 23:24:58 +07:00
|
|
|
|
2013-05-17 10:53:43 +07:00
|
|
|
cur_trans->state = TRANS_STATE_COMPLETED;
|
|
|
|
wake_up(&cur_trans->commit_wait);
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
|
|
|
|
2016-06-23 05:54:24 +07:00
|
|
|
static int btrfs_cleanup_transaction(struct btrfs_fs_info *fs_info)
|
2011-01-06 18:30:25 +07:00
|
|
|
{
|
|
|
|
struct btrfs_transaction *t;
|
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
mutex_lock(&fs_info->transaction_kthread_mutex);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
|
|
|
while (!list_empty(&fs_info->trans_list)) {
|
|
|
|
t = list_first_entry(&fs_info->trans_list,
|
2013-09-30 22:36:38 +07:00
|
|
|
struct btrfs_transaction, list);
|
|
|
|
if (t->state >= TRANS_STATE_COMMIT_START) {
|
2017-03-03 15:55:11 +07:00
|
|
|
refcount_inc(&t->use_count);
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_wait_for_commit(fs_info, t->transid);
|
2013-09-30 22:36:38 +07:00
|
|
|
btrfs_put_transaction(t);
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
2013-09-30 22:36:38 +07:00
|
|
|
continue;
|
|
|
|
}
|
2016-06-23 05:54:23 +07:00
|
|
|
if (t == fs_info->running_transaction) {
|
2013-09-30 22:36:38 +07:00
|
|
|
t->state = TRANS_STATE_COMMIT_DOING;
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2013-09-30 22:36:38 +07:00
|
|
|
/*
|
|
|
|
* We wait for 0 num_writers since we don't hold a trans
|
|
|
|
* handle open currently for this transaction.
|
|
|
|
*/
|
|
|
|
wait_event(t->writer_wait,
|
|
|
|
atomic_read(&t->num_writers) == 0);
|
|
|
|
} else {
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2013-09-30 22:36:38 +07:00
|
|
|
}
|
2016-06-23 05:54:24 +07:00
|
|
|
btrfs_cleanup_one_transaction(t, fs_info);
|
2013-05-17 10:53:43 +07:00
|
|
|
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
|
|
|
if (t == fs_info->running_transaction)
|
|
|
|
fs_info->running_transaction = NULL;
|
2011-01-06 18:30:25 +07:00
|
|
|
list_del_init(&t->list);
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2013-09-30 22:36:38 +07:00
|
|
|
btrfs_put_transaction(t);
|
2016-06-23 05:54:24 +07:00
|
|
|
trace_btrfs_transaction_commit(fs_info->tree_root);
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
2013-09-30 22:36:38 +07:00
|
|
|
}
|
2016-06-23 05:54:23 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
|
|
|
btrfs_destroy_all_ordered_extents(fs_info);
|
2016-06-23 05:54:23 +07:00
|
|
|
btrfs_destroy_delayed_inodes(fs_info);
|
|
|
|
btrfs_assert_delayed_root_empty(fs_info);
|
2016-06-23 05:54:23 +07:00
|
|
|
btrfs_destroy_all_delalloc_inodes(fs_info);
|
2020-03-24 21:47:52 +07:00
|
|
|
btrfs_drop_all_logs(fs_info);
|
2016-06-23 05:54:23 +07:00
|
|
|
mutex_unlock(&fs_info->transaction_kthread_mutex);
|
2011-01-06 18:30:25 +07:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|