/*
 * Copyright (C) 2007 Oracle.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/writeback.h>
#include <linux/pagemap.h>
#include <linux/blkdev.h>
#include <linux/uuid.h>
#include "ctree.h"
#include "disk-io.h"
#include "transaction.h"
#include "locking.h"
#include "tree-log.h"
#include "inode-map.h"
#include "volumes.h"
#include "dev-replace.h"
#include "qgroup.h"

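/*
 * Radix tree tag for roots that are part of the transaction and still
 * need their root items updated at commit time; set in
 * record_root_in_trans() and cleared in btrfs_add_dropped_root() below.
 */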
#define BTRFS_ROOT_TRANS_TAG 0
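
/*
 * For each transaction state, the mask of handle types (__TRANS_*) that
 * must wait rather than join the running transaction while it is in that
 * state.  Consulted by join_transaction() below.
 */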
static const unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
        [TRANS_STATE_RUNNING]           = 0U,
        [TRANS_STATE_BLOCKED]           = (__TRANS_USERSPACE |
                                           __TRANS_START),
        [TRANS_STATE_COMMIT_START]      = (__TRANS_USERSPACE |
                                           __TRANS_START |
                                           __TRANS_ATTACH),
        [TRANS_STATE_COMMIT_DOING]      = (__TRANS_USERSPACE |
                                           __TRANS_START |
                                           __TRANS_ATTACH |
                                           __TRANS_JOIN),
        [TRANS_STATE_UNBLOCKED]         = (__TRANS_USERSPACE |
                                           __TRANS_START |
                                           __TRANS_ATTACH |
                                           __TRANS_JOIN |
                                           __TRANS_JOIN_NOLOCK),
        [TRANS_STATE_COMPLETED]         = (__TRANS_USERSPACE |
                                           __TRANS_START |
                                           __TRANS_ATTACH |
                                           __TRANS_JOIN |
                                           __TRANS_JOIN_NOLOCK),
};

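/*
 * Drop a reference on @transaction; the final reference frees it.  A
 * transaction starts with two references (see join_transaction() below):
 * one for the trans handle and one held until the commit finishes.
 */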
void btrfs_put_transaction(struct btrfs_transaction *transaction)
{
        WARN_ON(atomic_read(&transaction->use_count) == 0);
        if (atomic_dec_and_test(&transaction->use_count)) {
                BUG_ON(!list_empty(&transaction->list));
                WARN_ON(!RB_EMPTY_ROOT(&transaction->delayed_refs.href_root));
                if (transaction->delayed_refs.pending_csums)
                        btrfs_err(transaction->fs_info,
                                  "pending csums is %llu",
                                  transaction->delayed_refs.pending_csums);
                while (!list_empty(&transaction->pending_chunks)) {
                        struct extent_map *em;

                        em = list_first_entry(&transaction->pending_chunks,
                                              struct extent_map, list);
                        list_del_init(&em->list);
                        free_extent_map(em);
                }
                /*
                 * If any block groups are found in ->deleted_bgs then it's
                 * because the transaction was aborted and a commit did not
                 * happen (things failed before writing the new superblock
                 * and calling btrfs_finish_extent_commit()), so we can not
                 * discard the physical locations of the block groups.
                 */
                while (!list_empty(&transaction->deleted_bgs)) {
                        struct btrfs_block_group_cache *cache;

                        cache = list_first_entry(&transaction->deleted_bgs,
                                                 struct btrfs_block_group_cache,
                                                 bg_list);
                        list_del_init(&cache->bg_list);
                        btrfs_put_block_group_trimming(cache);
                        btrfs_put_block_group(cache);
                }
                kmem_cache_free(btrfs_transaction_cachep, transaction);
        }
}

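/*
 * Drop every extent_state tracked by a btree io tree (e.g. a root's
 * dirty_log_pages) once the transaction no longer needs them.
 */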
static void clear_btree_io_tree(struct extent_io_tree *tree)
{
        spin_lock(&tree->lock);
        /*
         * Do a single barrier for the waitqueue_active check here, the state
         * of the waitqueue should not change once clear_btree_io_tree is
         * called.
         */
        smp_mb();
        while (!RB_EMPTY_ROOT(&tree->state)) {
                struct rb_node *node;
                struct extent_state *state;

                node = rb_first(&tree->state);
                state = rb_entry(node, struct extent_state, rb_node);
                rb_erase(&state->rb_node, &tree->state);
                RB_CLEAR_NODE(&state->rb_node);
                /*
                 * btree io trees aren't supposed to have tasks waiting for
                 * changes in the flags of extent states ever.
                 */
                ASSERT(!waitqueue_active(&state->wq));
                free_extent_state(state);

                cond_resched_lock(&tree->lock);
        }
        spin_unlock(&tree->lock);
}

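/*
 * Swap each dirty root's commit_root with its current node under
 * commit_root_sem, then free any roots that were fully dropped during
 * this transaction.
 */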
static noinline void switch_commit_roots(struct btrfs_transaction *trans,
                                         struct btrfs_fs_info *fs_info)
{
        struct btrfs_root *root, *tmp;

        down_write(&fs_info->commit_root_sem);
        list_for_each_entry_safe(root, tmp, &trans->switch_commits,
                                 dirty_list) {
                list_del_init(&root->dirty_list);
                free_extent_buffer(root->commit_root);
                root->commit_root = btrfs_root_node(root);
                if (is_fstree(root->objectid))
                        btrfs_unpin_free_ino(root);
                clear_btree_io_tree(&root->dirty_log_pages);
        }

        /* We can free old roots now. */
        spin_lock(&trans->dropped_roots_lock);
        while (!list_empty(&trans->dropped_roots)) {
                root = list_first_entry(&trans->dropped_roots,
                                        struct btrfs_root, root_list);
                list_del_init(&root->root_list);
                spin_unlock(&trans->dropped_roots_lock);
                btrfs_drop_and_free_fs_root(fs_info, root);
                spin_lock(&trans->dropped_roots_lock);
        }
        spin_unlock(&trans->dropped_roots_lock);
        up_write(&fs_info->commit_root_sem);
}

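/*
 * The "extwriter" counter tracks handles that may write data from outside
 * the transaction (the handle types in TRANS_EXTWRITERS); the commit path
 * waits for this counter to drain before it starts flushing.
 */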
static inline void extwriter_counter_inc(struct btrfs_transaction *trans,
                                         unsigned int type)
{
        if (type & TRANS_EXTWRITERS)
                atomic_inc(&trans->num_extwriters);
}

static inline void extwriter_counter_dec(struct btrfs_transaction *trans,
                                         unsigned int type)
{
        if (type & TRANS_EXTWRITERS)
                atomic_dec(&trans->num_extwriters);
}

static inline void extwriter_counter_init(struct btrfs_transaction *trans,
                                          unsigned int type)
{
        atomic_set(&trans->num_extwriters, ((type & TRANS_EXTWRITERS) ? 1 : 0));
}

static inline int extwriter_counter_read(struct btrfs_transaction *trans)
{
        return atomic_read(&trans->num_extwriters);
}

/*
 * either allocate a new transaction or hop into the existing one.
 *
 * Returns 0 on success, -EROFS if the fs has gone read-only due to errors,
 * -EBUSY if the running transaction blocks this handle type, -ENOENT for
 * TRANS_ATTACH when no transaction is running, or -ENOMEM.
 */
static noinline int join_transaction(struct btrfs_root *root, unsigned int type)
{
        struct btrfs_transaction *cur_trans;
        struct btrfs_fs_info *fs_info = root->fs_info;

        spin_lock(&fs_info->trans_lock);
loop:
        /* The file system has been taken offline. No new transactions. */
        if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
                spin_unlock(&fs_info->trans_lock);
                return -EROFS;
        }

        cur_trans = fs_info->running_transaction;
        if (cur_trans) {
                if (cur_trans->aborted) {
                        spin_unlock(&fs_info->trans_lock);
                        return cur_trans->aborted;
                }
                if (btrfs_blocked_trans_types[cur_trans->state] & type) {
                        spin_unlock(&fs_info->trans_lock);
                        return -EBUSY;
                }
                atomic_inc(&cur_trans->use_count);
                atomic_inc(&cur_trans->num_writers);
                extwriter_counter_inc(cur_trans, type);
                spin_unlock(&fs_info->trans_lock);
                return 0;
        }
        spin_unlock(&fs_info->trans_lock);

        /*
         * If we are ATTACH, we just want to catch the current transaction,
         * and commit it. If there is no transaction, just return ENOENT.
         */
        if (type == TRANS_ATTACH)
                return -ENOENT;

        /*
         * JOIN_NOLOCK only happens during the transaction commit, so
         * it is impossible that ->running_transaction is NULL
         */
        BUG_ON(type == TRANS_JOIN_NOLOCK);

        cur_trans = kmem_cache_alloc(btrfs_transaction_cachep, GFP_NOFS);
        if (!cur_trans)
                return -ENOMEM;

        spin_lock(&fs_info->trans_lock);
        if (fs_info->running_transaction) {
                /*
                 * someone started a transaction after we unlocked. Make sure
                 * to redo the checks above
                 */
                kmem_cache_free(btrfs_transaction_cachep, cur_trans);
                goto loop;
        } else if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
                spin_unlock(&fs_info->trans_lock);
                kmem_cache_free(btrfs_transaction_cachep, cur_trans);
                return -EROFS;
        }

        cur_trans->fs_info = fs_info;
        atomic_set(&cur_trans->num_writers, 1);
        extwriter_counter_init(cur_trans, type);
        init_waitqueue_head(&cur_trans->writer_wait);
        init_waitqueue_head(&cur_trans->commit_wait);
        init_waitqueue_head(&cur_trans->pending_wait);
        cur_trans->state = TRANS_STATE_RUNNING;
        /*
         * One for this trans handle, one so it will live on until we
         * commit the transaction.
         */
        atomic_set(&cur_trans->use_count, 2);
        atomic_set(&cur_trans->pending_ordered, 0);
        cur_trans->flags = 0;
        cur_trans->start_time = get_seconds();

        memset(&cur_trans->delayed_refs, 0, sizeof(cur_trans->delayed_refs));

        cur_trans->delayed_refs.href_root = RB_ROOT;
        cur_trans->delayed_refs.dirty_extent_root = RB_ROOT;
        atomic_set(&cur_trans->delayed_refs.num_entries, 0);

        /*
         * although the tree mod log is per file system and not per transaction,
         * the log must never go across transaction boundaries.
         */
        smp_mb();
        if (!list_empty(&fs_info->tree_mod_seq_list))
                WARN(1, KERN_ERR "BTRFS: tree_mod_seq_list not empty when creating a fresh transaction\n");
        if (!RB_EMPTY_ROOT(&fs_info->tree_mod_log))
                WARN(1, KERN_ERR "BTRFS: tree_mod_log rb tree not empty when creating a fresh transaction\n");
        atomic64_set(&fs_info->tree_mod_seq, 0);

        spin_lock_init(&cur_trans->delayed_refs.lock);

        INIT_LIST_HEAD(&cur_trans->pending_snapshots);
        INIT_LIST_HEAD(&cur_trans->pending_chunks);
        INIT_LIST_HEAD(&cur_trans->switch_commits);
        INIT_LIST_HEAD(&cur_trans->dirty_bgs);
        INIT_LIST_HEAD(&cur_trans->io_bgs);
        INIT_LIST_HEAD(&cur_trans->dropped_roots);
        mutex_init(&cur_trans->cache_write_mutex);
        cur_trans->num_dirty_bgs = 0;
        spin_lock_init(&cur_trans->dirty_bgs_lock);
        INIT_LIST_HEAD(&cur_trans->deleted_bgs);
        spin_lock_init(&cur_trans->dropped_roots_lock);
        list_add_tail(&cur_trans->list, &fs_info->trans_list);
        extent_io_tree_init(&cur_trans->dirty_pages,
                            fs_info->btree_inode->i_mapping);
        fs_info->generation++;
        cur_trans->transid = fs_info->generation;
        fs_info->running_transaction = cur_trans;
        cur_trans->aborted = 0;
        spin_unlock(&fs_info->trans_lock);

        return 0;
}

/*
 * this does all the record keeping required to make sure that a reference
 * counted root is properly recorded in a given transaction.  This is required
 * to make sure the old root from before we joined the transaction is deleted
 * when the transaction commits.
 *
 * The force flag re-records the root even if root->last_trans already
 * matches this transaction; snapshot creation uses it so qgroup accounting
 * sees the commit root switch.
 */
static int record_root_in_trans(struct btrfs_trans_handle *trans,
                                struct btrfs_root *root,
                                int force)
{
        if ((test_bit(BTRFS_ROOT_REF_COWS, &root->state) &&
            root->last_trans < trans->transid) || force) {
                WARN_ON(root == root->fs_info->extent_root);
                WARN_ON(root->commit_root != root->node);

                /*
                 * see below for IN_TRANS_SETUP usage rules
                 * we have the reloc mutex held now, so there
                 * is only one writer in this function
                 */
                set_bit(BTRFS_ROOT_IN_TRANS_SETUP, &root->state);

                /* make sure readers find IN_TRANS_SETUP before
                 * they find our root->last_trans update
                 */
                smp_wmb();

                spin_lock(&root->fs_info->fs_roots_radix_lock);
                if (root->last_trans == trans->transid && !force) {
                        spin_unlock(&root->fs_info->fs_roots_radix_lock);
                        return 0;
                }
                radix_tree_tag_set(&root->fs_info->fs_roots_radix,
                                   (unsigned long)root->root_key.objectid,
                                   BTRFS_ROOT_TRANS_TAG);
                spin_unlock(&root->fs_info->fs_roots_radix_lock);
                root->last_trans = trans->transid;

                /* this is pretty tricky.  We don't want to
                 * take the relocation lock in btrfs_record_root_in_trans
                 * unless we're really doing the first setup for this root in
                 * this transaction.
                 *
                 * Normally we'd use root->last_trans as a flag to decide
                 * if we want to take the expensive mutex.
                 *
                 * But, we have to set root->last_trans before we
                 * init the relocation root, otherwise, we trip over warnings
                 * in ctree.c.  The solution used here is to flag ourselves
                 * with root IN_TRANS_SETUP.  When this is 1, we're still
                 * fixing up the reloc trees and everyone must wait.
                 *
                 * When this is zero, they can trust root->last_trans and fly
                 * through btrfs_record_root_in_trans without having to take the
                 * lock.  smp_wmb() makes sure that all the writes above are
                 * done before we pop in the zero below
                 */
                btrfs_init_reloc_root(trans, root);
                smp_mb__before_atomic();
                clear_bit(BTRFS_ROOT_IN_TRANS_SETUP, &root->state);
        }
        return 0;
}

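/*
 * Queue a fully dropped root on the transaction's dropped_roots list so
 * switch_commit_roots() can free it, and clear its TRANS_TAG so the commit
 * won't try to update its root item.
 */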
void btrfs_add_dropped_root(struct btrfs_trans_handle *trans,
                            struct btrfs_root *root)
{
        struct btrfs_transaction *cur_trans = trans->transaction;

        /* Add ourselves to the transaction dropped list */
        spin_lock(&cur_trans->dropped_roots_lock);
        list_add_tail(&root->root_list, &cur_trans->dropped_roots);
        spin_unlock(&cur_trans->dropped_roots_lock);

        /* Make sure we don't try to update the root at commit time */
        spin_lock(&root->fs_info->fs_roots_radix_lock);
        radix_tree_tag_clear(&root->fs_info->fs_roots_radix,
                             (unsigned long)root->root_key.objectid,
                             BTRFS_ROOT_TRANS_TAG);
        spin_unlock(&root->fs_info->fs_roots_radix_lock);
}

|
|
|
|
|
2011-06-14 07:00:16 +07:00
|
|
|
int btrfs_record_root_in_trans(struct btrfs_trans_handle *trans,
|
|
|
|
struct btrfs_root *root)
|
|
|
|
{
|
2014-04-02 18:51:05 +07:00
|
|
|
if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
|
2011-06-14 07:00:16 +07:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
/*
|
2014-04-02 18:51:05 +07:00
|
|
|
* see record_root_in_trans for comments about IN_TRANS_SETUP usage
|
2011-06-14 07:00:16 +07:00
|
|
|
* and barriers
|
|
|
|
*/
|
|
|
|
smp_rmb();
|
|
|
|
if (root->last_trans == trans->transid &&
|
2014-04-02 18:51:05 +07:00
|
|
|
!test_bit(BTRFS_ROOT_IN_TRANS_SETUP, &root->state))
|
2011-06-14 07:00:16 +07:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
mutex_lock(&root->fs_info->reloc_mutex);
|
btrfs: qgroup: Fix qgroup accounting when creating snapshot
The current btrfs qgroup design implies a requirement that after calling
btrfs_qgroup_account_extents() there must be a commit root switch.
Normally this is OK, as btrfs_qgroup_account_extents() is only called
inside btrfs_commit_transaction(), just before commit_cowonly_roots().
However, there is an exception at create_pending_snapshot(), which will
call btrfs_qgroup_account_extents() without any commit root switch.
In the case of creating a snapshot whose parent root is itself (creating a
snapshot of the fs tree), it will corrupt the qgroups, as shown by the
following trace:
(skipped unrelated data)
======
btrfs_qgroup_account_extent: bytenr = 29786112, num_bytes = 16384, nr_old_roots = 0, nr_new_roots = 1
qgroup_update_counters: qgid = 5, cur_old_count = 0, cur_new_count = 1, rfer = 0, excl = 0
qgroup_update_counters: qgid = 5, cur_old_count = 0, cur_new_count = 1, rfer = 16384, excl = 16384
btrfs_qgroup_account_extent: bytenr = 29786112, num_bytes = 16384, nr_old_roots = 0, nr_new_roots = 0
======
The problem here is that in the first qgroup_account_extent(), the
nr_new_roots of the extent is 1, which means its reference got
increased, and the qgroup increased its rfer and excl.
But at the second qgroup_account_extent(), its reference got decreased,
yet between these two qgroup_account_extent() calls there was no root
switch. This leads to the same nr_old_roots, and the extent is simply
ignored by qgroup, which means the extent is wrongly accounted.
Fix it by calling commit_cowonly_roots() after qgroup_account_extent() in
create_pending_snapshot(), with the needed preparation.
Mark: I added a check at the top of qgroup_account_snapshot() to skip this
code if qgroups are turned off. xfstest btrfs/122 exposes this problem.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-05-12 02:53:52 +07:00
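The shape of the fix is easy to see in a condensed sketch; the helpers
below are stand-ins for the kernel functions named in the message, with
deliberately simplified signatures:

#include <stdbool.h>

static bool qgroups_enabled(void)     { return true; }	/* stand-in */
static void account_extents(void)     { }		/* stand-in */
static void switch_commit_roots(void) { }		/* stand-in */

static int qgroup_account_snapshot_sketch(void)
{
	/* Mark's check: skip all of this when qgroups are turned off */
	if (!qgroups_enabled())
		return 0;
	/*
	 * Account the extents touched so far, then switch the commit
	 * roots so the second accounting pass sees fresh old roots.
	 * Without the switch, nr_old_roots is stale and the extent is
	 * wrongly ignored.
	 */
	account_extents();
	switch_commit_roots();
	return 0;
}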
|
|
|
record_root_in_trans(trans, root, 0);
|
2011-06-14 07:00:16 +07:00
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
Btrfs: make the state of the transaction more readable
We used three variables to track the state of the transaction, which was
complex and wasted memory. Besides that, it was hard to understand which
types of transaction handles should be blocked in each transaction
state, so the developers often made mistakes.
This patch improves the above. In this patch, we define 6 states
for the transaction,
enum btrfs_trans_state {
	TRANS_STATE_RUNNING		= 0,
	TRANS_STATE_BLOCKED		= 1,
	TRANS_STATE_COMMIT_START	= 2,
	TRANS_STATE_COMMIT_DOING	= 3,
	TRANS_STATE_UNBLOCKED		= 4,
	TRANS_STATE_COMPLETED		= 5,
	TRANS_STATE_MAX			= 6,
}
and use just one variable to track the state.
In order to make the blocked handle types for each state clearer,
we introduce an array:
unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
	[TRANS_STATE_RUNNING]		= 0U,
	[TRANS_STATE_BLOCKED]		= (__TRANS_USERSPACE |
					   __TRANS_START),
	[TRANS_STATE_COMMIT_START]	= (__TRANS_USERSPACE |
					   __TRANS_START |
					   __TRANS_ATTACH),
	[TRANS_STATE_COMMIT_DOING]	= (__TRANS_USERSPACE |
					   __TRANS_START |
					   __TRANS_ATTACH |
					   __TRANS_JOIN),
	[TRANS_STATE_UNBLOCKED]		= (__TRANS_USERSPACE |
					   __TRANS_START |
					   __TRANS_ATTACH |
					   __TRANS_JOIN |
					   __TRANS_JOIN_NOLOCK),
	[TRANS_STATE_COMPLETED]		= (__TRANS_USERSPACE |
					   __TRANS_START |
					   __TRANS_ATTACH |
					   __TRANS_JOIN |
					   __TRANS_JOIN_NOLOCK),
}
which is much more intuitive.
Besides that, because ->in_commit is removed from the transaction
structure, the ->commit_lock that was used to protect it is no longer
necessary, so it is removed as well.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-17 10:53:43 +07:00
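Using the enum and array above, the check a joining handle performs
reduces to a one-line mask test; this is a sketch of the idea rather than
the literal kernel code:

/* A handle of the given type must wait (join returns -EBUSY) whenever
 * its bit is set in the mask for the current transaction state. */
static inline bool trans_type_blocked(unsigned int type,
				      enum btrfs_trans_state state)
{
	return (btrfs_blocked_trans_types[state] & type) != 0;
}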
|
|
|
static inline int is_transaction_blocked(struct btrfs_transaction *trans)
|
|
|
|
{
|
|
|
|
return (trans->state >= TRANS_STATE_BLOCKED &&
|
2013-06-11 03:47:23 +07:00
|
|
|
trans->state < TRANS_STATE_UNBLOCKED &&
|
|
|
|
!trans->aborted);
|
Btrfs: make the state of the transaction more readable
2013-05-17 10:53:43 +07:00
|
|
|
}
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/* wait for commit against the current transaction to become unblocked
|
|
|
|
* when this is done, it is safe to start a new transaction, but the current
|
|
|
|
* transaction might not be fully on disk.
|
|
|
|
*/
|
2008-07-31 21:48:37 +07:00
|
|
|
static void wait_current_trans(struct btrfs_root *root)
|
2007-03-23 02:59:16 +07:00
|
|
|
{
|
2008-07-17 23:54:14 +07:00
|
|
|
struct btrfs_transaction *cur_trans;
|
2007-03-23 02:59:16 +07:00
|
|
|
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_lock(&root->fs_info->trans_lock);
|
2008-07-17 23:54:14 +07:00
|
|
|
cur_trans = root->fs_info->running_transaction;
|
Btrfs: make the state of the transaction more readable
2013-05-17 10:53:43 +07:00
|
|
|
if (cur_trans && is_transaction_blocked(cur_trans)) {
|
2011-04-12 02:45:29 +07:00
|
|
|
atomic_inc(&cur_trans->use_count);
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2011-07-14 10:17:00 +07:00
|
|
|
|
|
|
|
wait_event(root->fs_info->transaction_wait,
|
2013-06-11 03:47:23 +07:00
|
|
|
cur_trans->state >= TRANS_STATE_UNBLOCKED ||
|
|
|
|
cur_trans->aborted);
|
2013-09-30 22:36:38 +07:00
|
|
|
btrfs_put_transaction(cur_trans);
|
2011-04-12 04:25:13 +07:00
|
|
|
} else {
|
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2008-07-17 23:54:14 +07:00
|
|
|
}
|
2008-07-31 21:48:37 +07:00
|
|
|
}
|
|
|
|
|
2010-05-16 21:48:46 +07:00
|
|
|
static int may_wait_transaction(struct btrfs_root *root, int type)
|
|
|
|
{
|
2016-09-03 02:40:02 +07:00
|
|
|
if (test_bit(BTRFS_FS_LOG_RECOVERING, &root->fs_info->flags))
|
2011-04-12 04:25:13 +07:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (type == TRANS_USERSPACE)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
if (type == TRANS_START &&
|
|
|
|
!atomic_read(&root->fs_info->open_ioctl_trans))
|
2010-05-16 21:48:46 +07:00
|
|
|
return 1;
|
2011-04-12 04:25:13 +07:00
|
|
|
|
2010-05-16 21:48:46 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-09-25 20:47:45 +07:00
|
|
|
static inline bool need_reserve_reloc_root(struct btrfs_root *root)
|
|
|
|
{
|
|
|
|
if (!root->fs_info->reloc_ctl ||
|
2014-04-02 18:51:05 +07:00
|
|
|
!test_bit(BTRFS_ROOT_REF_COWS, &root->state) ||
|
2013-09-25 20:47:45 +07:00
|
|
|
root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID ||
|
|
|
|
root->reloc_root)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
Btrfs: improve the noflush reservation
In some places (such as when evicting an inode), we just cannot flush the
reserved delalloc space; flushing the delayed directory index and the
delayed inode is OK, but we don't try to flush those things and just bail
out when there is not enough space to reserve. This patch fixes this
problem.
We define 3 types of flush operations: NO_FLUSH, FLUSH_LIMIT and FLUSH_ALL.
If we are inside a transaction, we should not flush anything, or a deadlock
would happen, so use NO_FLUSH. If flushing the reserved delalloc space
would cause a deadlock, use FLUSH_LIMIT. In the other cases, FLUSH_ALL is
used, and we flush everything.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
2012-10-16 18:33:38 +07:00
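The three modes map onto a simple decision; the following standalone
sketch (enum and function names illustrative, not the kernel's) captures
the rules stated above:

#include <stdbool.h>

enum flush_mode_sketch { NO_FLUSH, FLUSH_LIMIT, FLUSH_ALL };

static enum flush_mode_sketch choose_flush(bool in_transaction,
					   bool delalloc_flush_deadlocks)
{
	if (in_transaction)
		return NO_FLUSH;	/* any flushing here can deadlock */
	if (delalloc_flush_deadlocks)
		return FLUSH_LIMIT;	/* flush delayed items only */
	return FLUSH_ALL;		/* safe to flush everything */
}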
|
|
|
static struct btrfs_trans_handle *
|
2015-09-23 03:59:15 +07:00
|
|
|
start_transaction(struct btrfs_root *root, unsigned int num_items,
|
|
|
|
unsigned int type, enum btrfs_reserve_flush_enum flush)
|
2008-07-31 21:48:37 +07:00
|
|
|
{
|
2010-05-16 21:48:46 +07:00
|
|
|
struct btrfs_trans_handle *h;
|
|
|
|
struct btrfs_transaction *cur_trans;
|
2011-06-08 02:07:51 +07:00
|
|
|
u64 num_bytes = 0;
|
2011-09-14 20:44:05 +07:00
|
|
|
u64 qgroup_reserved = 0;
|
2013-09-25 20:47:45 +07:00
|
|
|
bool reloc_reserved = false;
|
|
|
|
int ret;
|
2011-01-06 18:30:25 +07:00
|
|
|
|
2014-06-24 23:48:28 +07:00
|
|
|
/* Send isn't supposed to start transactions. */
|
2014-07-31 05:43:18 +07:00
|
|
|
ASSERT(current->journal_info != BTRFS_SEND_TRANS_STUB);
|
2014-06-24 23:48:28 +07:00
|
|
|
|
2013-01-29 17:14:48 +07:00
|
|
|
if (test_bit(BTRFS_FS_STATE_ERROR, &root->fs_info->fs_state))
|
2011-01-06 18:30:25 +07:00
|
|
|
return ERR_PTR(-EROFS);
|
2011-04-14 02:15:59 +07:00
|
|
|
|
2014-06-24 23:48:28 +07:00
|
|
|
if (current->journal_info) {
|
2013-05-15 14:48:27 +07:00
|
|
|
WARN_ON(type & TRANS_EXTWRITERS);
|
2011-04-14 02:15:59 +07:00
|
|
|
h = current->journal_info;
|
|
|
|
h->use_count++;
|
2012-11-01 14:32:18 +07:00
|
|
|
WARN_ON(h->use_count > 2);
|
2011-04-14 02:15:59 +07:00
|
|
|
h->orig_rsv = h->block_rsv;
|
|
|
|
h->block_rsv = NULL;
|
|
|
|
goto got_it;
|
|
|
|
}
|
2011-06-08 02:07:51 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Do the reservation before we join the transaction so we can do all
|
|
|
|
* the appropriate flushing if need be.
|
|
|
|
*/
|
|
|
|
if (num_items > 0 && root != root->fs_info->chunk_root) {
|
2015-09-08 16:22:41 +07:00
|
|
|
qgroup_reserved = num_items * root->nodesize;
|
|
|
|
ret = btrfs_qgroup_reserve_meta(root, qgroup_reserved);
|
|
|
|
if (ret)
|
|
|
|
return ERR_PTR(ret);
|
2011-09-14 20:44:05 +07:00
|
|
|
|
2011-06-08 02:07:51 +07:00
|
|
|
num_bytes = btrfs_calc_trans_metadata_size(root, num_items);
|
2013-09-25 20:47:45 +07:00
|
|
|
/*
|
|
|
|
* Do the reservation for the relocation root creation
|
|
|
|
*/
|
2014-09-30 06:33:33 +07:00
|
|
|
if (need_reserve_reloc_root(root)) {
|
2013-09-25 20:47:45 +07:00
|
|
|
num_bytes += root->nodesize;
|
|
|
|
reloc_reserved = true;
|
|
|
|
}
|
|
|
|
|
Btrfs: improve the noflush reservation
2012-10-16 18:33:38 +07:00
|
|
|
ret = btrfs_block_rsv_add(root,
|
|
|
|
&root->fs_info->trans_block_rsv,
|
|
|
|
num_bytes, flush);
|
2011-06-08 02:07:51 +07:00
|
|
|
if (ret)
|
2013-01-28 19:36:22 +07:00
|
|
|
goto reserve_fail;
|
2011-06-08 02:07:51 +07:00
|
|
|
}
|
2010-05-16 21:48:46 +07:00
|
|
|
again:
|
2015-08-28 06:53:45 +07:00
|
|
|
h = kmem_cache_zalloc(btrfs_trans_handle_cachep, GFP_NOFS);
|
2013-01-28 19:36:22 +07:00
|
|
|
if (!h) {
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto alloc_fail;
|
|
|
|
}
|
2008-07-31 21:48:37 +07:00
|
|
|
|
2012-09-14 22:22:38 +07:00
|
|
|
/*
|
|
|
|
* If we are JOIN_NOLOCK we're already committing a transaction and
|
|
|
|
* waiting on this guy, so we don't need to do the sb_start_intwrite
|
|
|
|
* because we're already holding a ref. We need this because we could
|
|
|
|
* have raced in and done an fsync() on a file which can kick a commit
|
|
|
|
* and then we deadlock with somebody doing a freeze.
|
Btrfs: fix orphan transaction on the freezed filesystem
With the following debug patch:
static int btrfs_freeze(struct super_block *sb)
{
+ struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+ struct btrfs_transaction *trans;
+
+ spin_lock(&fs_info->trans_lock);
+ trans = fs_info->running_transaction;
+ if (trans) {
+ printk("Transid %llu, use_count %d, num_writer %d\n",
+ trans->transid, atomic_read(&trans->use_count),
+ atomic_read(&trans->num_writers));
+ }
+ spin_unlock(&fs_info->trans_lock);
return 0;
}
I found there was an orphan transaction after the freeze operation was done.
This is because the transaction may not be committed when the transaction
handle ends, even though it is the last handle of the current transaction.
This design avoids committing the transaction too frequently, but also
introduces the above problem.
So I add btrfs_attach_transaction(), which can catch the current transaction
and commit it. If there is no transaction, it returns ENOENT and does
nothing else.
This function can also be used instead of btrfs_join_transaction_freeze(),
because it doesn't increase the writer counter and doesn't start a new
transaction, so it also fixes the deadlock between sync and freeze.
Besides that, it is used instead of btrfs_join_transaction() in
transaction_kthread(), because if there is no transaction, the transaction
kthread needn't do anything.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
2012-09-20 14:54:00 +07:00
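A minimal sketch of how a freeze-style caller is expected to use the new
helper, assuming the btrfs_attach_transaction() and two-argument
btrfs_commit_transaction() signatures of this era; error handling is
condensed:

static int freeze_commit_sketch(struct btrfs_root *root)
{
	struct btrfs_trans_handle *trans;

	trans = btrfs_attach_transaction(root);
	if (IS_ERR(trans)) {
		/* -ENOENT: no running transaction, nothing to commit */
		if (PTR_ERR(trans) == -ENOENT)
			return 0;
		return PTR_ERR(trans);
	}
	return btrfs_commit_transaction(trans, root);
}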
|
|
|
*
|
|
|
|
* If we are ATTACH, it means we just want to catch the current
|
|
|
|
* transaction and commit it, so we needn't do sb_start_intwrite().
|
2012-09-14 22:22:38 +07:00
|
|
|
*/
|
2013-05-15 14:48:27 +07:00
|
|
|
if (type & __TRANS_FREEZABLE)
|
2012-09-14 21:34:40 +07:00
|
|
|
sb_start_intwrite(root->fs_info->sb);
|
2012-06-12 21:20:45 +07:00
|
|
|
|
2010-05-16 21:48:46 +07:00
|
|
|
if (may_wait_transaction(root, type))
|
2008-07-31 21:48:37 +07:00
|
|
|
wait_current_trans(root);
|
2010-05-16 21:48:46 +07:00
|
|
|
|
2011-04-12 04:25:13 +07:00
|
|
|
do {
|
Btrfs: fix orphan transaction on the freezed filesystem
2012-09-20 14:54:00 +07:00
|
|
|
ret = join_transaction(root, type);
|
Btrfs: fix the deadlock between the transaction start/attach and commit
Now btrfs_commit_transaction() does this
ret = btrfs_run_ordered_operations(root, 0)
which asynchronously flushes all inodes on the ordered operations list.
This introduced a deadlock in which the transaction-start task, the
transaction-commit task and the flush workers all waited for each other.
(See the following URL for the details:
http://marc.info/?l=linux-btrfs&m=136070705732646&w=2)
As we know, if ->in_commit is set, it means someone is committing the
current transaction; we should not try to join it unless we are JOIN
or JOIN_NOLOCK, and waiting is the best choice in that case. In this way
we can avoid the above problem. There is another benefit: once we set
->in_commit, no new transaction handle can block the transaction which is
on its way to commit.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-02-20 16:16:24 +07:00
|
|
|
if (ret == -EBUSY) {
|
2011-04-12 04:25:13 +07:00
|
|
|
wait_current_trans(root);
|
Btrfs: fix the deadlock between the transaction start/attach and commit
2013-02-20 16:16:24 +07:00
|
|
|
if (unlikely(type == TRANS_ATTACH))
|
|
|
|
ret = -ENOENT;
|
|
|
|
}
|
2011-04-12 04:25:13 +07:00
|
|
|
} while (ret == -EBUSY);
|
|
|
|
|
2016-09-14 09:15:48 +07:00
|
|
|
if (ret < 0)
|
2013-01-28 19:36:22 +07:00
|
|
|
goto join_fail;
|
2007-04-09 21:42:37 +07:00
|
|
|
|
2010-05-16 21:48:46 +07:00
|
|
|
cur_trans = root->fs_info->running_transaction;
|
|
|
|
|
|
|
|
h->transid = cur_trans->transid;
|
|
|
|
h->transaction = cur_trans;
|
2011-09-13 16:40:09 +07:00
|
|
|
h->root = root;
|
2011-04-14 02:15:59 +07:00
|
|
|
h->use_count = 1;
|
2016-06-21 04:23:41 +07:00
|
|
|
h->fs_info = root->fs_info;
|
2015-09-08 16:22:41 +07:00
|
|
|
|
2012-09-20 14:51:59 +07:00
|
|
|
h->type = type;
|
2015-10-03 19:13:13 +07:00
|
|
|
h->can_flush_pending_bgs = true;
|
2012-06-28 23:03:02 +07:00
|
|
|
INIT_LIST_HEAD(&h->qgroup_ref_list);
|
2012-09-12 03:57:25 +07:00
|
|
|
INIT_LIST_HEAD(&h->new_bgs);
|
2009-03-13 07:12:45 +07:00
|
|
|
|
2010-05-16 21:48:46 +07:00
|
|
|
smp_mb();
|
Btrfs: make the state of the transaction more readable
2013-05-17 10:53:43 +07:00
|
|
|
if (cur_trans->state >= TRANS_STATE_BLOCKED &&
|
|
|
|
may_wait_transaction(root, type)) {
|
2014-06-24 23:46:58 +07:00
|
|
|
current->journal_info = h;
|
2010-05-16 21:48:46 +07:00
|
|
|
btrfs_commit_transaction(h, root);
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
|
2011-06-08 02:07:51 +07:00
|
|
|
if (num_bytes) {
|
2012-01-10 22:31:31 +07:00
|
|
|
trace_btrfs_space_reservation(root->fs_info, "transaction",
|
2012-03-29 20:57:44 +07:00
|
|
|
h->transid, num_bytes, 1);
|
2011-06-08 02:07:51 +07:00
|
|
|
h->block_rsv = &root->fs_info->trans_block_rsv;
|
|
|
|
h->bytes_reserved = num_bytes;
|
2013-09-25 20:47:45 +07:00
|
|
|
h->reloc_reserved = reloc_reserved;
|
2010-05-16 21:48:46 +07:00
|
|
|
}
|
2009-09-12 03:12:44 +07:00
|
|
|
|
2011-04-14 02:15:59 +07:00
|
|
|
got_it:
|
2011-04-12 04:25:13 +07:00
|
|
|
btrfs_record_root_in_trans(h, root);
|
2010-05-16 21:48:46 +07:00
|
|
|
|
|
|
|
if (!current->journal_info && type != TRANS_USERSPACE)
|
|
|
|
current->journal_info = h;
|
2007-03-23 02:59:16 +07:00
|
|
|
return h;
|
2013-01-28 19:36:22 +07:00
|
|
|
|
|
|
|
join_fail:
|
2013-05-15 14:48:27 +07:00
|
|
|
if (type & __TRANS_FREEZABLE)
|
2013-01-28 19:36:22 +07:00
|
|
|
sb_end_intwrite(root->fs_info->sb);
|
|
|
|
kmem_cache_free(btrfs_trans_handle_cachep, h);
|
|
|
|
alloc_fail:
|
|
|
|
if (num_bytes)
|
|
|
|
btrfs_block_rsv_release(root, &root->fs_info->trans_block_rsv,
|
|
|
|
num_bytes);
|
|
|
|
reserve_fail:
|
2015-09-08 16:22:41 +07:00
|
|
|
btrfs_qgroup_free_meta(root, qgroup_reserved);
|
2013-01-28 19:36:22 +07:00
|
|
|
return ERR_PTR(ret);
|
2007-03-23 02:59:16 +07:00
|
|
|
}
|
|
|
|
|
2008-07-17 23:54:14 +07:00
|
|
|
struct btrfs_trans_handle *btrfs_start_transaction(struct btrfs_root *root,
|
2015-09-23 03:59:15 +07:00
|
|
|
unsigned int num_items)
|
2008-07-17 23:54:14 +07:00
|
|
|
{
|
Btrfs: improve the noflush reservation
2012-10-16 18:33:38 +07:00
|
|
|
return start_transaction(root, num_items, TRANS_START,
|
|
|
|
BTRFS_RESERVE_FLUSH_ALL);
|
2008-07-17 23:54:14 +07:00
|
|
|
}
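For reference, the typical caller pattern for the wrappers above looks
roughly like this (a sketch; the actual metadata work in the middle is
elided):

static int update_one_item_sketch(struct btrfs_root *root)
{
	struct btrfs_trans_handle *trans;
	int ret = 0;

	/* reserve space for the one item we expect to modify */
	trans = btrfs_start_transaction(root, 1);
	if (IS_ERR(trans))
		return PTR_ERR(trans);

	/* ... modify metadata under the handle ... */

	btrfs_end_transaction(trans, root);
	return ret;
}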
|
2015-11-14 06:57:16 +07:00
|
|
|
struct btrfs_trans_handle *btrfs_start_transaction_fallback_global_rsv(
|
|
|
|
struct btrfs_root *root,
|
|
|
|
unsigned int num_items,
|
|
|
|
int min_factor)
|
|
|
|
{
|
|
|
|
struct btrfs_trans_handle *trans;
|
|
|
|
u64 num_bytes;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
trans = btrfs_start_transaction(root, num_items);
|
|
|
|
if (!IS_ERR(trans) || PTR_ERR(trans) != -ENOSPC)
|
|
|
|
return trans;
|
|
|
|
|
|
|
|
trans = btrfs_start_transaction(root, 0);
|
|
|
|
if (IS_ERR(trans))
|
|
|
|
return trans;
|
|
|
|
|
|
|
|
num_bytes = btrfs_calc_trans_metadata_size(root, num_items);
|
|
|
|
ret = btrfs_cond_migrate_bytes(root->fs_info,
|
|
|
|
&root->fs_info->trans_block_rsv,
|
|
|
|
num_bytes,
|
|
|
|
min_factor);
|
|
|
|
if (ret) {
|
|
|
|
btrfs_end_transaction(trans, root);
|
|
|
|
return ERR_PTR(ret);
|
|
|
|
}
|
|
|
|
|
|
|
|
trans->block_rsv = &root->fs_info->trans_block_rsv;
|
|
|
|
trans->bytes_reserved = num_bytes;
|
2016-01-14 01:21:20 +07:00
|
|
|
trace_btrfs_space_reservation(root->fs_info, "transaction",
|
|
|
|
trans->transid, num_bytes, 1);
|
2015-11-14 06:57:16 +07:00
|
|
|
|
|
|
|
return trans;
|
|
|
|
}
|
Btrfs: fix corrupted metadata in the snapshot
When we delete an inode, we remove all the delayed items, including the
delayed inode update, and then truncate all the related metadata. If there
is a lot of metadata, we end the current transaction and start a new one
to truncate the remaining metadata. In this way, we leave an inode item
whose link count is > 0, and may also leave some directory index items in
the fs/file tree after the current transaction ends. In other words, the
metadata in this fs/file tree is inconsistent. If we create a snapshot of
this tree now, we will find an inode with corrupted metadata in the new
snapshot, and we won't continue dropping the remaining metadata, because
its link count is not 0.
We fix this problem by updating the inode item before the current
transaction ends.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
2012-09-07 14:43:32 +07:00
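The fix boils down to one extra step in the truncate loop; the following
sketch shows the shape, assuming the three-argument btrfs_update_inode()
of this era (not the verbatim kernel code):

static int truncate_step_sketch(struct btrfs_trans_handle *trans,
				struct btrfs_root *root,
				struct inode *inode)
{
	int ret;

	/* write the in-memory inode item back before the handle ends,
	 * so a snapshot taken now never sees a half-deleted inode */
	ret = btrfs_update_inode(trans, root, inode);
	if (ret)
		return ret;
	btrfs_end_transaction(trans, root);
	/* the caller starts a fresh transaction and keeps truncating */
	return 0;
}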
|
|
|
|
Btrfs: improve the noflush reservation
2012-10-16 18:33:38 +07:00
|
|
|
struct btrfs_trans_handle *btrfs_start_transaction_lflush(
|
2015-09-23 03:59:15 +07:00
|
|
|
struct btrfs_root *root,
|
|
|
|
unsigned int num_items)
|
Btrfs: fix corrupted metadata in the snapshot
2012-09-07 14:43:32 +07:00
|
|
|
{
|
Btrfs: improve the noflush reservation
2012-10-16 18:33:38 +07:00
|
|
|
return start_transaction(root, num_items, TRANS_START,
|
|
|
|
BTRFS_RESERVE_FLUSH_LIMIT);
|
Btrfs: fix corrupted metadata in the snapshot
2012-09-07 14:43:32 +07:00
|
|
|
}
|
|
|
|
|
2011-04-13 23:54:33 +07:00
|
|
|
struct btrfs_trans_handle *btrfs_join_transaction(struct btrfs_root *root)
|
2008-07-17 23:54:14 +07:00
|
|
|
{
|
2015-10-26 02:35:44 +07:00
|
|
|
return start_transaction(root, 0, TRANS_JOIN,
|
|
|
|
BTRFS_RESERVE_NO_FLUSH);
|
2008-07-17 23:54:14 +07:00
|
|
|
}
|
|
|
|
|
2011-04-13 23:54:33 +07:00
|
|
|
struct btrfs_trans_handle *btrfs_join_transaction_nolock(struct btrfs_root *root)
|
2010-06-22 01:48:16 +07:00
|
|
|
{
|
2015-10-26 02:35:44 +07:00
|
|
|
return start_transaction(root, 0, TRANS_JOIN_NOLOCK,
|
|
|
|
BTRFS_RESERVE_NO_FLUSH);
|
2010-06-22 01:48:16 +07:00
|
|
|
}
|
|
|
|
|
2011-04-13 23:54:33 +07:00
|
|
|
struct btrfs_trans_handle *btrfs_start_ioctl_transaction(struct btrfs_root *root)
|
2008-08-04 21:41:27 +07:00
|
|
|
{
|
2015-10-26 02:35:44 +07:00
|
|
|
return start_transaction(root, 0, TRANS_USERSPACE,
|
|
|
|
BTRFS_RESERVE_NO_FLUSH);
|
2008-08-04 21:41:27 +07:00
|
|
|
}
|
|
|
|
|
Btrfs: fix uncompleted transaction
In some cases, we need to commit the current transaction, but don't want
to start a new one if there is no running transaction, so we introduced
the function btrfs_attach_transaction(), which can catch the current
transaction and return -ENOENT if there is no running transaction.
But "no running transaction" doesn't mean the current transaction has
completed, because we remove the running transaction from the list before
it completes. In some cases that doesn't matter. But in some special
cases, such as freezing the fs, we expect the transaction to be fully on
disk, and this can introduce bugs: for example, we may freeze the fs and
dump the data on the disk, and if the transaction hasn't completed, we
would dump inconsistent data. So we need to fix the above problem for
those cases.
We fix this problem by introducing a function:
btrfs_attach_transaction_barrier()
If we want every transaction to be fully on disk, even those that are no
longer running, we can use this function.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-02-20 16:17:06 +07:00
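A sketch of when the barrier variant matters, again assuming this era's
two-argument btrfs_commit_transaction(): a freeze path that must
guarantee everything is on disk attaches with the barrier helper, so even
a transaction that has already left the running list gets waited on:

static int freeze_barrier_sketch(struct btrfs_root *root)
{
	struct btrfs_trans_handle *trans;

	trans = btrfs_attach_transaction_barrier(root);
	if (IS_ERR(trans)) {
		/* -ENOENT here means everything is already on disk */
		if (PTR_ERR(trans) == -ENOENT)
			return 0;
		return PTR_ERR(trans);
	}
	return btrfs_commit_transaction(trans, root);
}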
|
|
|
/*
|
|
|
|
* btrfs_attach_transaction() - catch the running transaction
|
|
|
|
*
|
|
|
|
* It is used when we want to commit the current transaction, but
|
|
|
|
* don't want to start a new one.
|
|
|
|
*
|
|
|
|
* Note: If this function returns -ENOENT, it just means there is no
|
|
|
|
* running transaction. But it is possible that the inactive transaction
|
|
|
|
* is still in memory, not fully on disk. If you want there to be no
|
|
|
|
* inactive transaction in the fs when -ENOENT is returned, you should
|
|
|
|
* invoke
|
|
|
|
* btrfs_attach_transaction_barrier()
|
|
|
|
*/
|
Btrfs: fix orphan transaction on the freezed filesystem
2012-09-20 14:54:00 +07:00
|
|
|
struct btrfs_trans_handle *btrfs_attach_transaction(struct btrfs_root *root)
|
2012-09-14 21:34:40 +07:00
|
|
|
{
|
2015-10-26 02:35:44 +07:00
|
|
|
return start_transaction(root, 0, TRANS_ATTACH,
|
|
|
|
BTRFS_RESERVE_NO_FLUSH);
|
2012-09-14 21:34:40 +07:00
|
|
|
}
|
|
|
|
|
Btrfs: fix uncompleted transaction
2013-02-20 16:17:06 +07:00
|
|
|
/*
|
2013-06-14 15:21:24 +07:00
|
|
|
* btrfs_attach_transaction_barrier() - catch the running transaction
|
Btrfs: fix uncompleted transaction
2013-02-20 16:17:06 +07:00
|
|
|
*
|
|
|
|
* It is similar to the above function; the difference is that this one
|
|
|
|
* will wait for all the inactive transactions until they fully
|
|
|
|
* complete.
|
|
|
|
*/
|
|
|
|
struct btrfs_trans_handle *
|
|
|
|
btrfs_attach_transaction_barrier(struct btrfs_root *root)
|
|
|
|
{
|
|
|
|
struct btrfs_trans_handle *trans;
|
|
|
|
|
2015-10-26 02:35:44 +07:00
|
|
|
trans = start_transaction(root, 0, TRANS_ATTACH,
|
|
|
|
BTRFS_RESERVE_NO_FLUSH);
|
Btrfs: fix uncompleted transaction
2013-02-20 16:17:06 +07:00
|
|
|
if (IS_ERR(trans) && PTR_ERR(trans) == -ENOENT)
|
|
|
|
btrfs_wait_for_commit(root, 0);
|
|
|
|
|
|
|
|
return trans;
|
|
|
|
}
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/* wait for a transaction commit to be fully complete */
|
2011-07-14 10:17:14 +07:00
|
|
|
static noinline void wait_for_commit(struct btrfs_root *root,
|
2008-06-26 03:01:31 +07:00
|
|
|
struct btrfs_transaction *commit)
|
|
|
|
{
|
Btrfs: make the state of the transaction more readable
2013-05-17 10:53:43 +07:00
|
|
|
wait_event(commit->commit_wait, commit->state == TRANS_STATE_COMPLETED);
|
2008-06-26 03:01:31 +07:00
|
|
|
}
|
|
|
|
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
START_SYNC will start a sync/commit, but not wait for it to
complete. Any modification started after the ioctl returns is
guaranteed not to be included in the commit. If a non-NULL
pointer is passed, the transaction id will be returned to
userspace.
WAIT_SYNC will wait for any in-progress commit to complete. If a
transaction id is specified, the ioctl will block and then
return (success) when the specified transaction has committed.
If it has already committed when we call the ioctl, it returns
immediately. If the specified transaction doesn't exist, it
returns EINVAL.
If no transaction id is specified, WAIT_SYNC will wait for the
currently committing transaction to finish its commit to disk.
If there is no currently committing transaction, it returns
success.
These ioctls are useful for applications which want to impose an
ordering on when fs modifications reach disk, but do not want to
wait for the full (slow) commit process to do so.
Picky callers can take the transid returned by START_SYNC and
feed it to WAIT_SYNC, and be certain to wait only as long as
necessary for the transaction _they_ started to reach disk.
Sloppy callers can START_SYNC and WAIT_SYNC without a transid,
and provided they didn't wait too long between the calls, they
will get the same result. However, if a second commit starts
before they call WAIT_SYNC, they may end up waiting longer for
it to commit as well. Even so, a START_SYNC+WAIT_SYNC still
guarantees that any operation completed before the START_SYNC
reaches disk.
Signed-off-by: Sage Weil <sage@newdream.net>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2010-10-30 02:41:32 +07:00
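From userspace, the "picky caller" pattern from the message looks roughly
like this sketch: kick off a commit, remember its transid, and wait only
for that commit to reach disk. The mount point path is hypothetical, and
the header providing the BTRFS_IOC_* definitions may vary by system;
error handling is condensed:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(void)
{
	__u64 transid;
	int fd = open("/mnt/btrfs", O_RDONLY);	/* hypothetical mount */

	if (fd < 0)
		return 1;
	/* start a commit without waiting; transid is returned to us */
	if (ioctl(fd, BTRFS_IOC_START_SYNC, &transid) < 0)
		return 1;
	/* wait only for the transaction we started */
	if (ioctl(fd, BTRFS_IOC_WAIT_SYNC, &transid) < 0)
		return 1;
	printf("transaction %llu committed\n",
	       (unsigned long long)transid);
	close(fd);
	return 0;
}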
|
|
|
int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid)
|
|
|
|
{
|
|
|
|
struct btrfs_transaction *cur_trans = NULL, *t;
|
2012-11-26 15:42:07 +07:00
|
|
|
int ret = 0;
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
2010-10-30 02:41:32 +07:00
|
|
|
|
|
|
|
if (transid) {
|
|
|
|
if (transid <= root->fs_info->last_trans_committed)
|
2011-04-12 04:25:13 +07:00
|
|
|
goto out;
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
2010-10-30 02:41:32 +07:00
|
|
|
|
|
|
|
/* find specified transaction */
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_lock(&root->fs_info->trans_lock);
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
2010-10-30 02:41:32 +07:00
|
|
|
list_for_each_entry(t, &root->fs_info->trans_list, list) {
|
|
|
|
if (t->transid == transid) {
|
|
|
|
cur_trans = t;
|
2011-04-12 04:25:13 +07:00
|
|
|
atomic_inc(&cur_trans->use_count);
|
2012-11-26 15:42:07 +07:00
|
|
|
ret = 0;
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
2010-10-30 02:41:32 +07:00
|
|
|
break;
|
|
|
|
}
|
2012-11-26 15:42:07 +07:00
|
|
|
if (t->transid > transid) {
|
|
|
|
ret = 0;
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
2010-10-30 02:41:32 +07:00
|
|
|
break;
|
2012-11-26 15:42:07 +07:00
|
|
|
}
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
2010-10-30 02:41:32 +07:00
|
|
|
}
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2014-09-26 22:30:06 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The specified transaction doesn't exist, or we
|
|
|
|
* raced with btrfs_commit_transaction
|
|
|
|
*/
|
|
|
|
if (!cur_trans) {
|
|
|
|
if (transid > root->fs_info->last_trans_committed)
|
|
|
|
ret = -EINVAL;
|
2012-11-26 15:42:07 +07:00
|
|
|
goto out;
|
2014-09-26 22:30:06 +07:00
|
|
|
}
|
Btrfs: add START_SYNC, WAIT_SYNC ioctls
2010-10-30 02:41:32 +07:00
	} else {
		/* find newest transaction that is committing | committed */
		spin_lock(&root->fs_info->trans_lock);
		list_for_each_entry_reverse(t, &root->fs_info->trans_list,
					    list) {
Btrfs: make the state of the transaction more readable
We used 3 variables to track the state of the transaction; it was complex
and wasted memory. Besides that, it was hard to understand which types of
transaction handle should be blocked in each transaction state, so developers
often made mistakes.
This patch improves on that. We define 6 states for the transaction:
    enum btrfs_trans_state {
            TRANS_STATE_RUNNING       = 0,
            TRANS_STATE_BLOCKED       = 1,
            TRANS_STATE_COMMIT_START  = 2,
            TRANS_STATE_COMMIT_DOING  = 3,
            TRANS_STATE_UNBLOCKED     = 4,
            TRANS_STATE_COMPLETED     = 5,
            TRANS_STATE_MAX           = 6,
    }
and use just 1 variable to track the state.
In order to make the blocked handle types for each state clearer,
we introduce an array:
    unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
            [TRANS_STATE_RUNNING]      = 0U,
            [TRANS_STATE_BLOCKED]      = (__TRANS_USERSPACE |
                                          __TRANS_START),
            [TRANS_STATE_COMMIT_START] = (__TRANS_USERSPACE |
                                          __TRANS_START |
                                          __TRANS_ATTACH),
            [TRANS_STATE_COMMIT_DOING] = (__TRANS_USERSPACE |
                                          __TRANS_START |
                                          __TRANS_ATTACH |
                                          __TRANS_JOIN),
            [TRANS_STATE_UNBLOCKED]    = (__TRANS_USERSPACE |
                                          __TRANS_START |
                                          __TRANS_ATTACH |
                                          __TRANS_JOIN |
                                          __TRANS_JOIN_NOLOCK),
            [TRANS_STATE_COMPLETED]    = (__TRANS_USERSPACE |
                                          __TRANS_START |
                                          __TRANS_ATTACH |
                                          __TRANS_JOIN |
                                          __TRANS_JOIN_NOLOCK),
    }
which is much more intuitive.
Besides that, since we removed ->in_commit from the transaction structure,
the ->commit_lock which was used to protect it is unnecessary; remove
->commit_lock as well.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-17 10:53:43 +07:00
			if (t->state >= TRANS_STATE_COMMIT_START) {
				if (t->state == TRANS_STATE_COMPLETED)
					break;
				cur_trans = t;
				atomic_inc(&cur_trans->use_count);
				break;
			}
		}
		spin_unlock(&root->fs_info->trans_lock);
		if (!cur_trans)
			goto out;  /* nothing committing|committed */
	}

	wait_for_commit(root, cur_trans);
	btrfs_put_transaction(cur_trans);
out:
	return ret;
}
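
The ioctl pair described in the "add START_SYNC, WAIT_SYNC ioctls" annotation above is easiest to see from userspace. The following is a minimal hypothetical sketch, not part of transaction.c; it assumes the BTRFS_IOC_START_SYNC and BTRFS_IOC_WAIT_SYNC definitions exported by <linux/btrfs.h>:

#include <sys/ioctl.h>
#include <linux/types.h>
#include <linux/btrfs.h>

/* "picky caller": wait only for the commit we ourselves kicked off */
static int ordered_flush(int fs_fd)
{
	__u64 transid = 0;

	/* start a commit; the started transaction's id is returned */
	if (ioctl(fs_fd, BTRFS_IOC_START_SYNC, &transid) < 0)
		return -1;
	/* modifications made from here on are NOT part of transid */
	/* block until exactly that transaction is on disk */
	if (ioctl(fs_fd, BTRFS_IOC_WAIT_SYNC, &transid) < 0)
		return -1;
	return 0;
}

Passing a NULL pointer to WAIT_SYNC instead waits for whatever commit is currently in progress, which is the "sloppy caller" path handled by the transid == 0 branch of the function above.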

void btrfs_throttle(struct btrfs_root *root)
{
	if (!atomic_read(&root->fs_info->open_ioctl_trans))
		wait_current_trans(root);
}
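
A side note on the state table from the "make the state of the transaction more readable" annotation earlier: btrfs_blocked_trans_types[] reduces the per-state blocking decision to a single mask test. A minimal illustrative helper (an assumption for illustration, not a function in this file):

/* illustrative only: would a handle of @type be blocked in @state? */
static inline bool trans_type_blocked(enum btrfs_trans_state state,
				      unsigned int type)
{
	return (btrfs_blocked_trans_types[state] & type) != 0;
}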

static int should_end_transaction(struct btrfs_trans_handle *trans,
				  struct btrfs_root *root)
{
	if (root->fs_info->global_block_rsv.space_info->full &&
	    btrfs_check_space_for_delayed_refs(trans, root))
		return 1;

	return !!btrfs_block_rsv_check(root, &root->fs_info->global_block_rsv, 5);
}

int btrfs_should_end_transaction(struct btrfs_trans_handle *trans,
				 struct btrfs_root *root)
{
	struct btrfs_transaction *cur_trans = trans->transaction;
	int updates;
	int err;

	smp_mb();
	if (cur_trans->state >= TRANS_STATE_BLOCKED ||
	    cur_trans->delayed_refs.flushing)
		return 1;

	updates = trans->delayed_ref_updates;
	trans->delayed_ref_updates = 0;
	if (updates) {
		err = btrfs_run_delayed_refs(trans, root, updates * 2);
		if (err) /* Error code will also eval true */
			return err;
	}

	return should_end_transaction(trans, root);
}
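
btrfs_should_end_transaction() exists for long-running callers. A hypothetical caller sketch, where more_work_to_do() and do_one_unit_of_work() are made-up placeholders, showing the usual poll-and-restart pattern so a pending commit is not held off indefinitely:

static int long_running_work_sketch(struct btrfs_root *root)
{
	struct btrfs_trans_handle *trans = btrfs_start_transaction(root, 1);

	if (IS_ERR(trans))
		return PTR_ERR(trans);
	while (more_work_to_do()) {		/* placeholder */
		do_one_unit_of_work(trans);	/* placeholder */
		if (btrfs_should_end_transaction(trans, root)) {
			/* let the commit proceed, then start a new handle */
			btrfs_end_transaction(trans, root);
			trans = btrfs_start_transaction(root, 1);
			if (IS_ERR(trans))
				return PTR_ERR(trans);
		}
	}
	return btrfs_end_transaction(trans, root);
}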

static int __btrfs_end_transaction(struct btrfs_trans_handle *trans,
				   struct btrfs_root *root, int throttle)
{
	struct btrfs_transaction *cur_trans = trans->transaction;
	struct btrfs_fs_info *info = root->fs_info;
	u64 transid = trans->transid;
	unsigned long cur = trans->delayed_ref_updates;
	int lock = (trans->type != TRANS_JOIN_NOLOCK);
	int err = 0;
	int must_run_delayed_refs = 0;

	if (trans->use_count > 1) {
		trans->use_count--;
		trans->block_rsv = trans->orig_rsv;
		return 0;
	}

	btrfs_trans_release_metadata(trans, root);
	trans->block_rsv = NULL;

	if (!list_empty(&trans->new_bgs))
		btrfs_create_pending_block_groups(trans, root);

	trans->delayed_ref_updates = 0;
	if (!trans->sync) {
		must_run_delayed_refs =
			btrfs_should_throttle_delayed_refs(trans, root);
		cur = max_t(unsigned long, cur, 32);

		/*
		 * don't make the caller wait if they are from a NOLOCK
		 * or ATTACH transaction, it will deadlock with commit
		 */
		if (must_run_delayed_refs == 1 &&
		    (trans->type & (__TRANS_JOIN_NOLOCK | __TRANS_ATTACH)))
			must_run_delayed_refs = 2;
	}

	btrfs_trans_release_metadata(trans, root);
	trans->block_rsv = NULL;

	if (!list_empty(&trans->new_bgs))
		btrfs_create_pending_block_groups(trans, root);

Btrfs: fix -ENOSPC when finishing block group creation
While creating a block group, we often end up getting ENOSPC while updating
the chunk tree, which leads to a transaction abortion that produces a trace
like the following:
[30670.116368] WARNING: CPU: 4 PID: 20735 at fs/btrfs/super.c:260 __btrfs_abort_transaction+0x52/0x106 [btrfs]()
[30670.117777] BTRFS: Transaction aborted (error -28)
(...)
[30670.163567] Call Trace:
[30670.163906] [<ffffffff8142fa46>] dump_stack+0x4f/0x7b
[30670.164522] [<ffffffff8108b6a2>] ? console_unlock+0x361/0x3ad
[30670.165171] [<ffffffff81045ea5>] warn_slowpath_common+0xa1/0xbb
[30670.166323] [<ffffffffa035daa7>] ? __btrfs_abort_transaction+0x52/0x106 [btrfs]
[30670.167213] [<ffffffff81045f05>] warn_slowpath_fmt+0x46/0x48
[30670.167862] [<ffffffffa035daa7>] __btrfs_abort_transaction+0x52/0x106 [btrfs]
[30670.169116] [<ffffffffa03743d7>] btrfs_create_pending_block_groups+0x101/0x130 [btrfs]
[30670.170593] [<ffffffffa038426a>] __btrfs_end_transaction+0x84/0x366 [btrfs]
[30670.171960] [<ffffffffa038455c>] btrfs_end_transaction+0x10/0x12 [btrfs]
[30670.174649] [<ffffffffa036eb6b>] btrfs_check_data_free_space+0x11f/0x27c [btrfs]
[30670.176092] [<ffffffffa039450d>] btrfs_fallocate+0x7c8/0xb96 [btrfs]
[30670.177218] [<ffffffff812459f2>] ? __this_cpu_preempt_check+0x13/0x15
[30670.178622] [<ffffffff81152447>] vfs_fallocate+0x14c/0x1de
[30670.179642] [<ffffffff8116b915>] ? __fget_light+0x2d/0x4f
[30670.180692] [<ffffffff81152863>] SyS_fallocate+0x47/0x62
[30670.186737] [<ffffffff81435b32>] system_call_fastpath+0x12/0x17
[30670.187792] ---[ end trace 0373e6b491c4a8cc ]---
This is because we don't do proper space reservation for the chunk block
reserve when we have multiple tasks allocating chunks in parallel.
So block group creation has 2 phases, and the first phase essentially
checks if there is enough space in the system space_info, allocating a
new system chunk if there isn't, while the second phase updates the
device, extent and chunk trees. However, because the updates to the
chunk tree happen in the second phase, if we have N tasks, each with
its own transaction handle, allocating new chunks in parallel and if
there is only enough space in the system space_info to allocate M chunks,
where M < N, none of the tasks ends up allocating a new system chunk in
the first phase and N - M tasks will get -ENOSPC when attempting to
update the chunk tree in phase 2 if they need to COW any nodes/leafs
from the chunk tree.
Fix this by doing proper reservation in the chunk block reserve.
The issue could be reproduced by running fstests generic/038 in a loop,
which eventually triggered the problem.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-05-20 20:01:54 +07:00
	btrfs_trans_release_chunk_metadata(trans);

	if (lock && !atomic_read(&root->fs_info->open_ioctl_trans) &&
	    should_end_transaction(trans, root) &&
	    ACCESS_ONCE(cur_trans->state) == TRANS_STATE_RUNNING) {
		spin_lock(&info->trans_lock);
		if (cur_trans->state == TRANS_STATE_RUNNING)
			cur_trans->state = TRANS_STATE_BLOCKED;
		spin_unlock(&info->trans_lock);
	}
	if (lock && ACCESS_ONCE(cur_trans->state) == TRANS_STATE_BLOCKED) {
		if (throttle)
			return btrfs_commit_transaction(trans, root);
		else
			wake_up_process(info->transaction_kthread);
	}

	if (trans->type & __TRANS_FREEZABLE)
		sb_end_intwrite(root->fs_info->sb);

	WARN_ON(cur_trans != info->running_transaction);
	WARN_ON(atomic_read(&cur_trans->num_writers) < 1);
	atomic_dec(&cur_trans->num_writers);
	extwriter_counter_dec(cur_trans, trans->type);

	/*
	 * Make sure counter is updated before we wake up waiters.
	 */
	smp_mb();
	if (waitqueue_active(&cur_trans->writer_wait))
		wake_up(&cur_trans->writer_wait);
	btrfs_put_transaction(cur_trans);

	if (current->journal_info == trans)
		current->journal_info = NULL;

	if (throttle)
		btrfs_run_delayed_iputs(root);

	if (trans->aborted ||
	    test_bit(BTRFS_FS_STATE_ERROR, &root->fs_info->fs_state)) {
		wake_up_process(info->transaction_kthread);
		err = -EIO;
	}
	assert_qgroups_uptodate(trans);

	kmem_cache_free(btrfs_trans_handle_cachep, trans);
	if (must_run_delayed_refs) {
		btrfs_async_run_delayed_refs(root, cur, transid,
					     must_run_delayed_refs == 1);
	}
	return err;
}

int btrfs_end_transaction(struct btrfs_trans_handle *trans,
			  struct btrfs_root *root)
{
	return __btrfs_end_transaction(trans, root, 0);
}

int btrfs_end_transaction_throttle(struct btrfs_trans_handle *trans,
				   struct btrfs_root *root)
{
	return __btrfs_end_transaction(trans, root, 1);
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix OOM when the memory load is high, by storing the delayed nodes in the
  root's radix tree and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which was spotted by
  Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix a deadlock between readdir() and memory fault, which was reported by
  Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested locking, which was reported by Itaru Kitayama, by updating the
  space cache inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed
  items balance, which was reported by Tsutomu Itoh.
- Modify the patch to address David Sterba's comments.
- Fix the CPU-recursive spinlock bug, reported by Chris Mason.
Changelog V1 -> V2:
- Break up the global rb-tree; use a list to manage the delayed nodes,
  which are created for every directory and file, and used to manage the
  delayed directory name index items and the delayed inode item.
- Introduce a worker to deal with the delayed nodes.
Compared with ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can delay some b+ tree insertions or deletions, we can improve the
performance, so we made this patch, which implements delayed directory name
index insertion/deletion and delayed inode updates.
Implementation:
- Introduce a delayed root object into the filesystem, which uses two lists
  to manage the delayed nodes created for every file/directory.
  One is used to manage all the delayed nodes that have delayed items. The
  other is used to manage the delayed nodes which are waiting to be dealt
  with by the work thread.
- Every delayed node has two rb-trees: one manages the directory name
  indexes which are going to be inserted into the b+ tree, and the other
  manages the directory name indexes which are going to be deleted from the
  b+ tree.
- Introduce a worker to deal with the delayed operations. This worker
  handles the delayed directory name index insertions and deletions and the
  delayed inode updates.
  When the number of delayed items exceeds the lower limit, we create works
  for some delayed nodes, insert them into the worker's work queue, and then
  go back.
  When the number of delayed items exceeds the upper bound, we create works
  for all the delayed nodes that haven't been dealt with, insert them into
  the worker's work queue, and then wait until the number of untreated items
  drops below some threshold value.
- When we want to insert a directory name index into the b+ tree, we just
  add the information to the delayed inserting rb-tree.
  Then we check the number of delayed items and do delayed items balance.
  (The balance policy is described above.)
- When we want to delete a directory name index from the b+ tree, we search
  for it in the inserting rb-tree first. If we find it, we just drop it. If
  not, we add its key to the delayed deleting rb-tree.
  Similar to the delayed inserting rb-tree, we also check the number of
  delayed items and do delayed items balance.
  (The same as the inserting manipulation.)
- When we want to update the metadata of some inode, we cache the inode's
  data in the delayed node. The worker will flush it into the b+ tree after
  dealing with the delayed insertions and deletions.
- We move a delayed node to the tail of the list after we access it. This
  way we can cache more delayed items and merge more inode updates.
- When we want to commit the transaction, we deal with all the delayed
  nodes.
- A delayed node is freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index
  items and the delayed inode update.
I did a quick test with the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
        Total files: 50000
        Total time: 1.096108
        Average time: 0.000022
Delete files:
        Total files: 50000
        Total time: 1.510403
        Average time: 0.000030
After applying this patch:
Create files:
        Total files: 50000
        Total time: 0.932899
        Average time: 0.000019
Delete files:
        Total files: 50000
        Total time: 1.215732
        Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks for Kitayama-san's help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 17:12:22 +07:00
}
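
To make the "implement delayed inode items operation" annotation above more concrete, here is a rough sketch of the per-inode delayed node it describes. The field names are illustrative assumptions, not the exact definitions from fs/btrfs/delayed-inode.h:

struct delayed_node_sketch {
	struct rb_root ins_tree;	/* dir-index items awaiting b+ tree insertion */
	struct rb_root del_tree;	/* dir-index keys awaiting b+ tree deletion */
	struct btrfs_inode_item inode_item;	/* cached inode update, flushed by the worker */
	struct list_head node_list;	/* position on the delayed root's two lists */
};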

/*
 * when btree blocks are allocated, they have some corresponding bits set for
 * them in one of two extent_io trees. This is used to make sure all of
 * those extents are sent to disk but does not wait on them
 */
int btrfs_write_marked_extents(struct btrfs_root *root,
			       struct extent_io_tree *dirty_pages, int mark)
{
	int err = 0;
	int werr = 0;
	struct address_space *mapping = root->fs_info->btree_inode->i_mapping;
	struct extent_state *cached_state = NULL;
	u64 start = 0;
	u64 end;

	while (!find_first_extent_bit(dirty_pages, start, &start, &end,
				      mark, &cached_state)) {
		bool wait_writeback = false;

		err = convert_extent_bit(dirty_pages, start, end,
					 EXTENT_NEED_WAIT,
					 mark, &cached_state);
		/*
		 * convert_extent_bit can return -ENOMEM, which is most of the
		 * time a temporary error. So when it happens, ignore the error
		 * and wait for writeback of this range to finish - because we
		 * failed to set the bit EXTENT_NEED_WAIT for the range, a call
		 * to btrfs_wait_marked_extents() would not know that writeback
		 * for this range started and therefore wouldn't wait for it to
		 * finish - we don't want to commit a superblock that points to
		 * btree nodes/leafs for which writeback hasn't finished yet
		 * (and without errors).
		 * We cleanup any entries left in the io tree when committing
		 * the transaction (through clear_btree_io_tree()).
		 */
		if (err == -ENOMEM) {
			err = 0;
			wait_writeback = true;
		}
		if (!err)
			err = filemap_fdatawrite_range(mapping, start, end);
		if (err)
			werr = err;
		else if (wait_writeback)
			werr = filemap_fdatawait_range(mapping, start, end);
		free_extent_state(cached_state);
		cached_state = NULL;
		cond_resched();
		start = end + 1;
	}
	return werr;
}

/*
 * when btree blocks are allocated, they have some corresponding bits set for
 * them in one of two extent_io trees. This is used to make sure all of
 * those extents are on disk for transaction or log commit. We wait
 * on all the pages and clear them from the dirty pages state tree
 */
int btrfs_wait_marked_extents(struct btrfs_root *root,
			      struct extent_io_tree *dirty_pages, int mark)
{
	int err = 0;
	int werr = 0;
	struct address_space *mapping = root->fs_info->btree_inode->i_mapping;
	struct extent_state *cached_state = NULL;
	u64 start = 0;
	u64 end;
Btrfs: be aware of btree inode write errors to avoid fs corruption
While we have a transaction ongoing, the VM might decide at any time
to call btree_inode->i_mapping->a_ops->writepages(), which will start
writeback of dirty pages belonging to btree nodes/leafs. This call
might return an error or the writeback might finish with an error
before we attempt to commit the running transaction. If this happens,
we might have no way of knowing that such error happened when we are
committing the transaction - because the pages might no longer be
marked dirty nor tagged for writeback (if a subsequent modification
to the extent buffer didn't happen before the transaction commit) which
makes filemap_fdata[write|wait]_range unable to find such pages (even
if they're marked with SetPageError).
So if this happens we must abort the transaction, otherwise we commit
a super block with btree roots that point to btree nodes/leafs whose
content on disk is invalid - either garbage or the content of some
node/leaf from a past generation that got cowed or deleted and is no
longer valid (for this latter case we end up getting error messages like
"parent transid verify failed on 10826481664 wanted 25748 found 29562"
when reading btree nodes/leafs from disk).
Note that setting and checking AS_EIO/AS_ENOSPC in the btree inode's
i_mapping would not be enough because we need to distinguish between
log tree extents (not fatal) vs non-log tree extents (fatal) and
because the next call to filemap_fdatawait_range() will catch and clear
such errors in the mapping - and that call might be from a log sync and
not from a transaction commit, which means we would not know about the
error at transaction commit time. Also, checking for the eb flag
EXTENT_BUFFER_IOERR at transaction commit time isn't done and would
not be completely reliable, as the eb might be removed from memory and
read back when trying to get it, which clears that flag right before
reading the eb's pages from disk, making us not know about the previous
write error.
The new 3 flags for the btree inode also cover the case where
writepages() returns success, writeback was started for all dirty
pages, and before filemap_fdatawait_range() is called that writeback
had already finished with errors - because we were not using
AS_EIO/AS_ENOSPC, filemap_fdatawait_range() would return success, as it
could not know that writeback errors happened (the pages were no longer
tagged for writeback).
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-09-26 18:25:56 +07:00
	bool errors = false;

	while (!find_first_extent_bit(dirty_pages, start, &start, &end,
				      EXTENT_NEED_WAIT, &cached_state)) {
		/*
		 * Ignore -ENOMEM errors returned by clear_extent_bit().
		 * When committing the transaction, we'll remove any entries
		 * left in the io tree. For a log commit, we don't remove them
		 * after committing the log because the tree can be accessed
		 * concurrently - we do it only at transaction commit time when
		 * it's safe to do it (through clear_btree_io_tree()).
		 */
		err = clear_extent_bit(dirty_pages, start, end,
				       EXTENT_NEED_WAIT,
				       0, 0, &cached_state, GFP_NOFS);
		if (err == -ENOMEM)
			err = 0;
		if (!err)
			err = filemap_fdatawait_range(mapping, start, end);
		if (err)
			werr = err;
		free_extent_state(cached_state);
		cached_state = NULL;
		cond_resched();
		start = end + 1;
	}
	if (err)
		werr = err;

	if (root->root_key.objectid == BTRFS_TREE_LOG_OBJECTID) {
		if ((mark & EXTENT_DIRTY) &&
		    test_and_clear_bit(BTRFS_FS_LOG1_ERR,
				       &root->fs_info->flags))
			errors = true;

		if ((mark & EXTENT_NEW) &&
		    test_and_clear_bit(BTRFS_FS_LOG2_ERR,
				       &root->fs_info->flags))
			errors = true;
	} else {
		if (test_and_clear_bit(BTRFS_FS_BTREE_ERR,
				       &root->fs_info->flags))
			errors = true;
	}

	if (errors && !werr)
		werr = -EIO;

	return werr;
}
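
The three fs-wide error bits consumed by btrfs_wait_marked_extents() above are set on the producer side by the btree write endio path in fs/btrfs/extent_io.c (set_btree_ioerr in kernels of this era). A simplified sketch of that producer, under the assumption that log trees alternate between the two LOG bits per commit:

static void record_btree_write_error(struct btrfs_fs_info *fs_info,
				     u64 owner, int log_index)
{
	if (owner != BTRFS_TREE_LOG_OBJECTID)
		set_bit(BTRFS_FS_BTREE_ERR, &fs_info->flags);	/* fatal */
	else if (log_index == 0)
		set_bit(BTRFS_FS_LOG1_ERR, &fs_info->flags);	/* log commit 1 */
	else
		set_bit(BTRFS_FS_LOG2_ERR, &fs_info->flags);	/* log commit 2 */
}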

/*
 * when btree blocks are allocated, they have some corresponding bits set for
 * them in one of two extent_io trees. This is used to make sure all of
 * those extents are on disk for transaction or log commit
 */
static int btrfs_write_and_wait_marked_extents(struct btrfs_root *root,
			       struct extent_io_tree *dirty_pages, int mark)
{
	int ret;
	int ret2;
	struct blk_plug plug;

	blk_start_plug(&plug);
	ret = btrfs_write_marked_extents(root, dirty_pages, mark);
	blk_finish_plug(&plug);
	ret2 = btrfs_wait_marked_extents(root, dirty_pages, mark);

	if (ret)
		return ret;
	if (ret2)
		return ret2;
	return 0;
}

static int btrfs_write_and_wait_transaction(struct btrfs_trans_handle *trans,
					    struct btrfs_root *root)
{
	int ret;

	ret = btrfs_write_and_wait_marked_extents(root,
					&trans->transaction->dirty_pages,
					EXTENT_DIRTY);
	clear_btree_io_tree(&trans->transaction->dirty_pages);

	return ret;
}

/*
 * this is used to update the root pointer in the tree of tree roots.
 *
 * But, in the case of the extent allocation tree, updating the root
 * pointer may allocate blocks which may change the root of the extent
 * allocation tree.
 *
 * So, this loops and repeats and makes sure the cowonly root didn't
 * change while the root pointer was being updated in the metadata.
 */
static int update_cowonly_root(struct btrfs_trans_handle *trans,
			       struct btrfs_root *root)
{
	int ret;
	u64 old_root_bytenr;
	u64 old_root_used;
	struct btrfs_root *tree_root = root->fs_info->tree_root;

	old_root_used = btrfs_root_used(&root->root_item);

	while (1) {
		old_root_bytenr = btrfs_root_bytenr(&root->root_item);
		if (old_root_bytenr == root->node->start &&
		    old_root_used == btrfs_root_used(&root->root_item))
			break;

Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in a subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and
the new block inherits the old block's references. When a tree block with a
reference count > 1 is cow'd, we increase the reference counts of all
extents the new block points to by one, and decrease the old block's
reference count by one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back refs for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about the pointer's key, level and in which
tree the pointer lives. This information allows us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds a per-subvolume red-black tree to keep track of cached
inodes. The red-black tree helps the balancing code find cached
inodes whose inode numbers fall within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduces the overhead of checking back refs.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
btrfs_set_root_node(&root->root_item, root->node);
|
2007-03-23 02:59:16 +07:00
|
|
|
ret = btrfs_update_root(trans, tree_root,
|
2008-03-25 02:01:56 +07:00
|
|
|
&root->root_key,
|
|
|
|
&root->root_item);
|
2012-03-01 23:24:58 +07:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
2009-03-13 21:10:06 +07:00
|
|
|
|
2009-11-12 16:36:50 +07:00
|
|
|
old_root_used = btrfs_root_used(&root->root_item);
|
2008-03-25 02:01:56 +07:00
|
|
|
}
|
2009-07-30 20:40:40 +07:00
|
|
|
|
2008-03-25 02:01:56 +07:00
|
|
|
return 0;
|
|
|
|
}
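
/*
 * Note on the loop above: writing the root item with btrfs_update_root()
 * allocates new blocks, which can change the extent allocation tree and
 * hence move the root or change its bytes-used counter again.  The loop
 * re-checks both the bytenr and the used counter and only exits once
 * neither moved during the update, i.e. the on-disk root item finally
 * matches the in-memory root.
 */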

/*
 * update all the cowonly tree roots on disk
 *
 * The error handling in this function may not be obvious.  Any of the
 * failures will cause the file system to go offline.  We still need
 * to clean up the delayed refs.
 */
static noinline int commit_cowonly_roots(struct btrfs_trans_handle *trans,
					 struct btrfs_root *root)
{
	struct btrfs_fs_info *fs_info = root->fs_info;
	struct list_head *dirty_bgs = &trans->transaction->dirty_bgs;
	struct list_head *io_bgs = &trans->transaction->io_bgs;
	struct list_head *next;
	struct extent_buffer *eb;
	int ret;

	eb = btrfs_lock_root_node(fs_info->tree_root);
	ret = btrfs_cow_block(trans, fs_info->tree_root, eb, NULL,
			      0, &eb);
	btrfs_tree_unlock(eb);
	free_extent_buffer(eb);

	if (ret)
		return ret;

	ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
	if (ret)
		return ret;

	ret = btrfs_run_dev_stats(trans, root->fs_info);
	if (ret)
		return ret;
	ret = btrfs_run_dev_replace(trans, root->fs_info);
	if (ret)
		return ret;
	ret = btrfs_run_qgroups(trans, root->fs_info);
	if (ret)
		return ret;

	ret = btrfs_setup_space_cache(trans, root);
	if (ret)
		return ret;

	/* run_qgroups might have added some more refs */
	ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
	if (ret)
		return ret;
again:
	while (!list_empty(&fs_info->dirty_cowonly_roots)) {
		next = fs_info->dirty_cowonly_roots.next;
		list_del_init(next);
		root = list_entry(next, struct btrfs_root, dirty_list);
		clear_bit(BTRFS_ROOT_DIRTY, &root->state);

		if (root != fs_info->extent_root)
			list_add_tail(&root->dirty_list,
				      &trans->transaction->switch_commits);
		ret = update_cowonly_root(trans, root);
		if (ret)
			return ret;
		ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
		if (ret)
			return ret;
	}

	while (!list_empty(dirty_bgs) || !list_empty(io_bgs)) {
		ret = btrfs_write_dirty_block_groups(trans, root);
		if (ret)
			return ret;
		ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
		if (ret)
			return ret;
	}

	if (!list_empty(&fs_info->dirty_cowonly_roots))
		goto again;

	list_add_tail(&fs_info->extent_root->dirty_list,
		      &trans->transaction->switch_commits);
	btrfs_after_dev_replace_commit(fs_info);

	return 0;
}
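
/*
 * Note on the again: loop above: flushing the dirty block groups runs more
 * delayed refs, and processing those refs can dirty cowonly roots again,
 * which is why dirty_cowonly_roots is re-checked after the block group
 * lists drain.  The extent root is deliberately added to switch_commits
 * last, since it keeps changing while the other roots are being updated.
 */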

/*
 * dead roots are old snapshots that need to be deleted.  This adds the
 * given root to the list of dead roots that need to be cleaned up.
 */
void btrfs_add_dead_root(struct btrfs_root *root)
{
	spin_lock(&root->fs_info->trans_lock);
	if (list_empty(&root->root_list))
		list_add_tail(&root->root_list, &root->fs_info->dead_roots);
	spin_unlock(&root->fs_info->trans_lock);
}
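
/*
 * Note: the list_empty() check makes this idempotent, so a root already
 * on the dead_roots list is not added twice.  The list is drained later
 * by the cleaner thread via btrfs_clean_one_deleted_snapshot().
 */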

/*
 * Update the root items of all the fs tree (subvolume) roots that were
 * changed in this transaction, in the tree of tree roots.
 */
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
static noinline int commit_fs_roots(struct btrfs_trans_handle *trans,
|
|
|
|
struct btrfs_root *root)
|
2007-04-09 21:42:37 +07:00
|
|
|
{
|
|
|
|
struct btrfs_root *gang[8];
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
struct btrfs_fs_info *fs_info = root->fs_info;
|
2007-04-09 21:42:37 +07:00
|
|
|
int i;
|
|
|
|
int ret;
|
2007-06-23 01:16:25 +07:00
|
|
|
int err = 0;
|
|
|
|
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_lock(&fs_info->fs_roots_radix_lock);
|
2009-01-06 09:25:51 +07:00
|
|
|
while (1) {
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
ret = radix_tree_gang_lookup_tag(&fs_info->fs_roots_radix,
|
|
|
|
(void **)gang, 0,
|
2007-04-09 21:42:37 +07:00
|
|
|
ARRAY_SIZE(gang),
|
|
|
|
BTRFS_ROOT_TRANS_TAG);
|
|
|
|
if (ret == 0)
|
|
|
|
break;
|
|
|
|
for (i = 0; i < ret; i++) {
|
|
|
|
root = gang[i];
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
radix_tree_tag_clear(&fs_info->fs_roots_radix,
|
|
|
|
(unsigned long)root->root_key.objectid,
|
|
|
|
BTRFS_ROOT_TRANS_TAG);
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_unlock(&fs_info->fs_roots_radix_lock);
|
2008-07-29 02:32:19 +07:00
|
|
|
|
2008-09-06 03:13:11 +07:00
|
|
|
btrfs_free_log(trans, root);
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
btrfs_update_reloc_root(trans, root);
|
2010-05-16 21:49:58 +07:00
|
|
|
btrfs_orphan_commit_root(trans, root);
|
2008-07-31 03:29:20 +07:00
|
|
|
|
2011-04-20 09:33:24 +07:00
|
|
|
btrfs_save_ino_cache(root, trans);
|
|
|
|
|
2011-11-15 08:48:06 +07:00
|
|
|
/* see comments in should_cow_block() */
|
2014-04-02 18:51:05 +07:00
|
|
|
clear_bit(BTRFS_ROOT_FORCE_COW, &root->state);
|
2014-06-11 03:06:56 +07:00
|
|
|
smp_mb__after_atomic();
|
2011-11-15 08:48:06 +07:00
|
|
|
|
2009-06-16 07:01:02 +07:00
|
|
|
if (root->commit_root != root->node) {
|
2014-03-14 02:42:13 +07:00
|
|
|
list_add_tail(&root->dirty_list,
|
|
|
|
&trans->transaction->switch_commits);
|
2009-06-16 07:01:02 +07:00
|
|
|
btrfs_set_root_node(&root->root_item,
|
|
|
|
root->node);
|
|
|
|
}
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
|
|
|
|
err = btrfs_update_root(trans, fs_info->tree_root,
|
2007-04-09 21:42:37 +07:00
|
|
|
&root->root_key,
|
|
|
|
&root->root_item);
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_lock(&fs_info->fs_roots_radix_lock);
|
2007-06-23 01:16:25 +07:00
|
|
|
if (err)
|
|
|
|
break;
|
2015-09-08 16:22:41 +07:00
|
|
|
btrfs_qgroup_free_meta_all(root);
|
2007-04-09 21:42:37 +07:00
|
|
|
}
|
|
|
|
}
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_unlock(&fs_info->fs_roots_radix_lock);
|
2007-06-23 01:16:25 +07:00
|
|
|
return err;
|
2007-04-09 21:42:37 +07:00
|
|
|
}
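
/*
 * Note: fs_roots_radix_lock is dropped around the per-root work above
 * because freeing the log tree, committing orphans and updating the root
 * item can all block.  BTRFS_ROOT_TRANS_TAG is cleared while the lock is
 * still held, so the same root is not returned twice by
 * radix_tree_gang_lookup_tag().
 */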

/*
 * defrag a given btree.
 * Every leaf in the btree is read and defragged.
 */
int btrfs_defrag_root(struct btrfs_root *root)
{
	struct btrfs_fs_info *info = root->fs_info;
	struct btrfs_trans_handle *trans;
	int ret;

	if (test_and_set_bit(BTRFS_ROOT_DEFRAG_RUNNING, &root->state))
		return 0;

	while (1) {
		trans = btrfs_start_transaction(root, 0);
		if (IS_ERR(trans)) {
			/* break instead of returning so the
			 * DEFRAG_RUNNING bit is cleared below */
			ret = PTR_ERR(trans);
			break;
		}

		ret = btrfs_defrag_leaves(trans, root);

		btrfs_end_transaction(trans, root);
		btrfs_btree_balance_dirty(info->tree_root);
		cond_resched();

		if (btrfs_fs_closing(info) || ret != -EAGAIN)
			break;

		if (btrfs_defrag_cancelled(info)) {
			btrfs_debug(info, "defrag_root cancelled");
			ret = -EAGAIN;
			break;
		}
	}
	clear_bit(BTRFS_ROOT_DEFRAG_RUNNING, &root->state);
	return ret;
}
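
/*
 * Note: btrfs_defrag_leaves() returns -EAGAIN when it stopped early and
 * wants to continue in a fresh transaction, so the loop ends the handle,
 * balances dirty pages and starts over; any other return value (including
 * 0 for "done") terminates the loop.  Cancellation reuses -EAGAIN as the
 * return code of this function.
 */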

/*
 * Do all the special snapshot-related qgroup work in one place.
 *
 * Qgroup accounting requires a commit-root switch after every call to
 * btrfs_qgroup_account_extents(); create_pending_snapshot() does not get
 * one of its own, so this helper does the needed qgroup inherit plus a
 * simplified commit (switch the commit roots and write the btree blocks
 * to disk inside one transaction) to keep the qgroup numbers correct.
 */
static int qgroup_account_snapshot(struct btrfs_trans_handle *trans,
				   struct btrfs_root *src,
				   struct btrfs_root *parent,
				   struct btrfs_qgroup_inherit *inherit,
				   u64 dst_objectid)
{
	struct btrfs_fs_info *fs_info = src->fs_info;
	int ret;

	/*
	 * Save some performance in the case that qgroups are not
	 * enabled. If this check races with the ioctl, rescan will
	 * kick in anyway.
	 */
	mutex_lock(&fs_info->qgroup_ioctl_lock);
	if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
		mutex_unlock(&fs_info->qgroup_ioctl_lock);
		return 0;
	}
	mutex_unlock(&fs_info->qgroup_ioctl_lock);

	/*
	 * We are going to commit the transaction, see the comment in
	 * btrfs_commit_transaction() for why tree_log_mutex is taken.
	 */
	mutex_lock(&fs_info->tree_log_mutex);

	ret = commit_fs_roots(trans, src);
	if (ret)
		goto out;
	ret = btrfs_qgroup_prepare_account_extents(trans, fs_info);
	if (ret < 0)
		goto out;
	ret = btrfs_qgroup_account_extents(trans, fs_info);
	if (ret < 0)
		goto out;

	/* Now the qgroups are all updated, we can inherit them to the new one */
	ret = btrfs_qgroup_inherit(trans, fs_info,
				   src->root_key.objectid, dst_objectid,
				   inherit);
	if (ret < 0)
		goto out;

	/*
	 * Now we do a simplified commit transaction, which will:
	 * 1) commit all the subvolume and extent trees,
	 *    to ensure they have valid commit_roots for the later
	 *    insert_dir_item() to account against
	 * 2) write all btree blocks onto disk,
	 *    to make sure later btree modifications are cowed again;
	 *    otherwise commit_root could be repopulated and cause wrong
	 *    qgroup numbers
	 * In this simplified commit we don't care about other trees like
	 * the chunk and root trees, as they won't affect qgroups, and we
	 * don't write the super block, to avoid a half-committed state.
	 */
	ret = commit_cowonly_roots(trans, src);
	if (ret)
		goto out;
	switch_commit_roots(trans->transaction, fs_info);
	ret = btrfs_write_and_wait_transaction(trans, src);
	if (ret)
		btrfs_handle_fs_error(fs_info, ret,
			"Error while writing out transaction for qgroup");

out:
	mutex_unlock(&fs_info->tree_log_mutex);

	/*
	 * Force the parent root to be updated, as we recorded it before so
	 * its last_trans == cur_transid; otherwise it won't be committed
	 * to disk again after the later insert_dir_item().
	 */
	if (!ret)
		record_root_in_trans(trans, parent, 1);
	return ret;
}
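
/*
 * Note: the only caller is create_pending_snapshot(), which invokes this
 * after the new root item has been inserted and before insert_dir_item()
 * links the snapshot into its parent directory.
 */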

/*
 * new snapshots need to be created at a very specific time in the
 * transaction commit.  This does the actual creation.
 *
 * Note:
 * If an error occurs that may affect the commit of the current
 * transaction, we return that error number.  If an error only affects
 * the creation of the pending snapshot itself, we just return 0.
 */
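
/*
 * Rough flow of create_pending_snapshot() below: pick a free objectid,
 * reserve space, copy the source root's node with btrfs_copy_root(),
 * insert the new root item, add the root back reference, read the new
 * root and run the relocation post-snapshot hook; failures after the
 * duplicate-name check abort the transaction.
 */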
static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
				   struct btrfs_fs_info *fs_info,
				   struct btrfs_pending_snapshot *pending)
{
	struct btrfs_key key;
	struct btrfs_root_item *new_root_item;
	struct btrfs_root *tree_root = fs_info->tree_root;
	struct btrfs_root *root = pending->root;
	struct btrfs_root *parent_root;
	struct btrfs_block_rsv *rsv;
	struct inode *parent_inode;
	struct btrfs_path *path;
	struct btrfs_dir_item *dir_item;
	struct dentry *dentry;
	struct extent_buffer *tmp;
	struct extent_buffer *old;
	struct timespec cur_time;
	int ret = 0;
	u64 to_reserve = 0;
	u64 index = 0;
	u64 objectid;
	u64 root_flags;
	uuid_le new_uuid;

	ASSERT(pending->path);
	path = pending->path;

	ASSERT(pending->root_item);
	new_root_item = pending->root_item;

	pending->error = btrfs_find_free_objectid(tree_root, &objectid);
	if (pending->error)
		goto no_free_objectid;

	/*
	 * Make qgroup skip the new snapshot's qgroupid, as it is
	 * accounted by the later btrfs_qgroup_inherit().
	 */
	btrfs_set_skip_qgroup(trans, objectid);

	btrfs_reloc_pre_snapshot(pending, &to_reserve);

	if (to_reserve > 0) {
		pending->error = btrfs_block_rsv_add(root,
						     &pending->block_rsv,
						     to_reserve,
						     BTRFS_RESERVE_NO_FLUSH);
		if (pending->error)
			goto clear_skip_qgroup;
	}

	key.objectid = objectid;
	key.offset = (u64)-1;
	key.type = BTRFS_ROOT_ITEM_KEY;

	rsv = trans->block_rsv;
	trans->block_rsv = &pending->block_rsv;
	trans->bytes_reserved = trans->block_rsv->reserved;
	trace_btrfs_space_reservation(root->fs_info, "transaction",
				      trans->transid,
				      trans->bytes_reserved, 1);
	dentry = pending->dentry;
	parent_inode = pending->dir;
	parent_root = BTRFS_I(parent_inode)->root;
	record_root_in_trans(trans, parent_root, 0);

	cur_time = current_time(parent_inode);

	/*
	 * insert the directory item
	 */
	ret = btrfs_set_inode_index(parent_inode, &index);
	BUG_ON(ret); /* -ENOMEM */

	/* check if there is a file/dir which has the same name. */
	dir_item = btrfs_lookup_dir_item(NULL, parent_root, path,
					 btrfs_ino(parent_inode),
					 dentry->d_name.name,
					 dentry->d_name.len, 0);
	if (dir_item != NULL && !IS_ERR(dir_item)) {
		pending->error = -EEXIST;
		goto dir_item_existed;
	} else if (IS_ERR(dir_item)) {
		ret = PTR_ERR(dir_item);
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}
	btrfs_release_path(path);

	/*
	 * pull in the delayed directory update and the delayed inode
	 * item, otherwise we corrupt the FS during snapshot
	 */
	ret = btrfs_run_delayed_items(trans, root);
	if (ret) {	/* Transaction aborted */
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}

	record_root_in_trans(trans, root, 0);
	btrfs_set_root_last_snapshot(&root->root_item, trans->transid);
	memcpy(new_root_item, &root->root_item, sizeof(*new_root_item));
	btrfs_check_and_init_root_item(new_root_item);

	root_flags = btrfs_root_flags(new_root_item);
	if (pending->readonly)
		root_flags |= BTRFS_ROOT_SUBVOL_RDONLY;
	else
		root_flags &= ~BTRFS_ROOT_SUBVOL_RDONLY;
	btrfs_set_root_flags(new_root_item, root_flags);

	btrfs_set_root_generation_v2(new_root_item,
			trans->transid);
	uuid_le_gen(&new_uuid);
	memcpy(new_root_item->uuid, new_uuid.b, BTRFS_UUID_SIZE);
	memcpy(new_root_item->parent_uuid, root->root_item.uuid,
			BTRFS_UUID_SIZE);
	if (!(root_flags & BTRFS_ROOT_SUBVOL_RDONLY)) {
		memset(new_root_item->received_uuid, 0,
		       sizeof(new_root_item->received_uuid));
		memset(&new_root_item->stime, 0, sizeof(new_root_item->stime));
		memset(&new_root_item->rtime, 0, sizeof(new_root_item->rtime));
		btrfs_set_root_stransid(new_root_item, 0);
		btrfs_set_root_rtransid(new_root_item, 0);
	}
	btrfs_set_stack_timespec_sec(&new_root_item->otime, cur_time.tv_sec);
	btrfs_set_stack_timespec_nsec(&new_root_item->otime, cur_time.tv_nsec);
	btrfs_set_root_otransid(new_root_item, trans->transid);

	old = btrfs_lock_root_node(root);
	ret = btrfs_cow_block(trans, root, old, NULL, 0, &old);
	if (ret) {
		btrfs_tree_unlock(old);
		free_extent_buffer(old);
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}

	btrfs_set_lock_blocking(old);

	ret = btrfs_copy_root(trans, root, old, &tmp, objectid);
	/* clean up in any case */
	btrfs_tree_unlock(old);
	free_extent_buffer(old);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}
	/* see comments in should_cow_block() */
	set_bit(BTRFS_ROOT_FORCE_COW, &root->state);
	smp_wmb();
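
	/*
	 * Note: FORCE_COW makes should_cow_block() COW blocks of the
	 * source root even if they were created in this transaction;
	 * after btrfs_copy_root() above they are shared with the new
	 * snapshot and must not be modified in place.  smp_wmb()
	 * publishes the flag before the new root node is made visible.
	 */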

	btrfs_set_root_node(new_root_item, tmp);
	/* record when the snapshot was created in key.offset */
	key.offset = trans->transid;
	ret = btrfs_insert_root(trans, tree_root, &key, new_root_item);
	btrfs_tree_unlock(tmp);
	free_extent_buffer(tmp);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}

	/*
	 * insert root back/forward references
	 */
	ret = btrfs_add_root_ref(trans, tree_root, objectid,
				 parent_root->root_key.objectid,
				 btrfs_ino(parent_inode), index,
				 dentry->d_name.name, dentry->d_name.len);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}

	key.offset = (u64)-1;
	pending->snap = btrfs_read_fs_root_no_name(root->fs_info, &key);
	if (IS_ERR(pending->snap)) {
		ret = PTR_ERR(pending->snap);
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}

	ret = btrfs_reloc_post_snapshot(trans, pending);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}
	/*
	 * Run all delayed refs accumulated so far before the source tree is
	 * modified again. Creating several snapshots back to back without
	 * flushing delayed refs lets implicit and full backrefs for the same
	 * newly allocated tree blocks queue up in the wrong order, which
	 * used to trip a BUG_ON() in the extent tree code.
	 */
	ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}
	/*
	 * Do the special qgroup accounting for the snapshot here. Qgroup
	 * accounting normally relies on a commit root switch right after
	 * btrfs_qgroup_account_extents(); snapshot creation takes a
	 * shortcut instead, so qgroup_account_snapshot() has to switch the
	 * commit roots itself. Without this the snapshot would need to be
	 * followed by a full subtree qgroup rescan, which is very slow.
	 */
	ret = qgroup_account_snapshot(trans, root, parent_root,
				      pending->inherit, objectid);
	if (ret < 0)
		goto fail;

	ret = btrfs_insert_dir_item(trans, parent_root,
				    dentry->d_name.name, dentry->d_name.len,
				    parent_inode, &key,
				    BTRFS_FT_DIR, index);
	/* We have checked the name at the beginning, so it cannot exist. */
	BUG_ON(ret == -EEXIST || ret == -EOVERFLOW);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}

	btrfs_i_size_write(parent_inode, parent_inode->i_size +
			   dentry->d_name.len * 2);
	parent_inode->i_mtime = parent_inode->i_ctime =
		current_time(parent_inode);
	ret = btrfs_update_inode_fallback(trans, parent_root, parent_inode);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}
	ret = btrfs_uuid_tree_add(trans, fs_info->uuid_root, new_uuid.b,
				  BTRFS_UUID_KEY_SUBVOL, objectid);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}
	if (!btrfs_is_empty_uuid(new_root_item->received_uuid)) {
		ret = btrfs_uuid_tree_add(trans, fs_info->uuid_root,
					  new_root_item->received_uuid,
					  BTRFS_UUID_KEY_RECEIVED_SUBVOL,
					  objectid);
		if (ret && ret != -EEXIST) {
			btrfs_abort_transaction(trans, ret);
			goto fail;
		}
	}

	ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
	if (ret) {
		btrfs_abort_transaction(trans, ret);
		goto fail;
	}
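
	/*
	 * Exit paths: the labels below run from the deepest failure point
	 * outward. Record the error, restore the caller's block reservation,
	 * clear the per-transaction qgroup skip flag, then free the scratch
	 * root item and path.
	 */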
fail:
	pending->error = ret;
dir_item_existed:
	trans->block_rsv = rsv;
	trans->bytes_reserved = 0;
clear_skip_qgroup:
	btrfs_clear_skip_qgroup(trans);
no_free_objectid:
	kfree(new_root_item);
	pending->root_item = NULL;
	btrfs_free_path(path);
	pending->path = NULL;

	return ret;
}
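
/*
 * Note: callers outside this file (e.g. the snapshot ioctl path) only
 * queue a struct btrfs_pending_snapshot on the transaction; the snapshot
 * itself is materialized here, at commit time, so it captures a
 * consistent point-in-time view of the source tree.
 */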
/*
 * create all the snapshots we've scheduled for creation
 */
static noinline int create_pending_snapshots(struct btrfs_trans_handle *trans,
					     struct btrfs_fs_info *fs_info)
{
	struct btrfs_pending_snapshot *pending, *next;
	struct list_head *head = &trans->transaction->pending_snapshots;
	int ret = 0;

	list_for_each_entry_safe(pending, next, head, list) {
		list_del(&pending->list);
		ret = create_pending_snapshot(trans, fs_info, pending);
		if (ret)
			break;
	}
	return ret;
}
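
/*
 * Copy the current chunk root and tree root pointers (bytenr, generation
 * and level) into the in-memory superblock copy; the superblock written
 * at the end of the commit is what makes the new roots visible.
 */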
static void update_super_roots(struct btrfs_root *root)
{
	struct btrfs_root_item *root_item;
	struct btrfs_super_block *super;

	super = root->fs_info->super_copy;

	root_item = &root->fs_info->chunk_root->root_item;
	super->chunk_root = root_item->bytenr;
	super->chunk_root_generation = root_item->generation;
	super->chunk_root_level = root_item->level;

	root_item = &root->fs_info->tree_root->root_item;
	super->root = root_item->bytenr;
	super->generation = root_item->generation;
	super->root_level = root_item->level;
	if (btrfs_test_opt(root->fs_info, SPACE_CACHE))
		super->cache_generation = root_item->generation;
	if (test_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &root->fs_info->flags))
		super->uuid_tree_generation = root_item->generation;
}
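
/*
 * Reminder on transaction states (enum btrfs_trans_state): a transaction
 * moves through RUNNING -> BLOCKED -> COMMIT_START -> COMMIT_DOING ->
 * UNBLOCKED -> COMPLETED. The btrfs_blocked_trans_types[] table defines
 * which handle types may no longer join at each state: nothing while
 * RUNNING; USERSPACE and START handles from BLOCKED on; ATTACH from
 * COMMIT_START; JOIN from COMMIT_DOING; and even JOIN_NOLOCK once the
 * transaction is UNBLOCKED or COMPLETED.
 */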
int btrfs_transaction_in_commit(struct btrfs_fs_info *info)
{
	struct btrfs_transaction *trans;
	int ret = 0;

	spin_lock(&info->trans_lock);
	trans = info->running_transaction;
	if (trans)
		ret = (trans->state >= TRANS_STATE_COMMIT_START);
	spin_unlock(&info->trans_lock);
	return ret;
}

int btrfs_transaction_blocked(struct btrfs_fs_info *info)
{
	struct btrfs_transaction *trans;
	int ret = 0;

	spin_lock(&info->trans_lock);
	trans = info->running_transaction;
	if (trans)
		ret = is_transaction_blocked(trans);
	spin_unlock(&info->trans_lock);
	return ret;
}

/*
 * wait for the current transaction commit to start and block subsequent
 * transaction joins
 */
static void wait_current_trans_commit_start(struct btrfs_root *root,
					    struct btrfs_transaction *trans)
{
	wait_event(root->fs_info->transaction_blocked_wait,
		   trans->state >= TRANS_STATE_COMMIT_START ||
		   trans->aborted);
}

/*
 * wait for the current transaction to start and then become unblocked.
 * caller holds ref.
 */
static void wait_current_trans_commit_start_and_unblock(struct btrfs_root *root,
							struct btrfs_transaction *trans)
{
	wait_event(root->fs_info->transaction_wait,
		   trans->state >= TRANS_STATE_UNBLOCKED ||
		   trans->aborted);
}

/*
 * commit transactions asynchronously. once btrfs_commit_transaction_async
 * returns, any subsequent transaction will not be allowed to join.
 */
struct btrfs_async_commit {
	struct btrfs_trans_handle *newtrans;
	struct btrfs_root *root;
	struct work_struct work;
};

static void do_async_commit(struct work_struct *work)
{
	struct btrfs_async_commit *ac =
		container_of(work, struct btrfs_async_commit, work);

	/*
	 * We've got freeze protection passed with the transaction.
	 * Tell lockdep about it.
	 */
	if (ac->newtrans->type & __TRANS_FREEZABLE)
		__sb_writers_acquired(ac->root->fs_info->sb, SB_FREEZE_FS);

	current->journal_info = ac->newtrans;

	btrfs_commit_transaction(ac->newtrans, ac->root);
	kfree(ac);
}
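
/*
 * Kick off a commit of the current transaction on a worker and return
 * early. The caller's handle is ended here; freeze protection and
 * journal_info are handed over to the worker, which performs the real
 * btrfs_commit_transaction(). Depending on wait_for_unblock we return
 * once the commit has started, or once it has unblocked new joiners.
 */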
int btrfs_commit_transaction_async(struct btrfs_trans_handle *trans,
				   struct btrfs_root *root,
				   int wait_for_unblock)
{
	struct btrfs_async_commit *ac;
	struct btrfs_transaction *cur_trans;

	ac = kmalloc(sizeof(*ac), GFP_NOFS);
	if (!ac)
		return -ENOMEM;

	INIT_WORK(&ac->work, do_async_commit);
	ac->root = root;
	ac->newtrans = btrfs_join_transaction(root);
	if (IS_ERR(ac->newtrans)) {
		int err = PTR_ERR(ac->newtrans);
		kfree(ac);
		return err;
	}

	/* take transaction reference */
	cur_trans = trans->transaction;
	atomic_inc(&cur_trans->use_count);

	btrfs_end_transaction(trans, root);

	/*
	 * Tell lockdep we've released the freeze rwsem, since the
	 * async commit thread will be the one to unlock it.
	 */
	if (ac->newtrans->type & __TRANS_FREEZABLE)
		__sb_writers_release(root->fs_info->sb, SB_FREEZE_FS);

	schedule_work(&ac->work);

	/* wait for transaction to start and unblock */
	if (wait_for_unblock)
		wait_current_trans_commit_start_and_unblock(root, cur_trans);
	else
		wait_current_trans_commit_start(root, cur_trans);

	if (current->journal_info == trans)
		current->journal_info = NULL;

	btrfs_put_transaction(cur_trans);
	return 0;
}
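
/*
 * Tear down a transaction that failed partway through commit: mark the
 * handle aborted, pull the transaction off fs_info's list, wait for the
 * remaining writers to drain, then release its resources. Any running
 * scrub is cancelled as well, since the filesystem is about to go
 * read-only after the abort.
 */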
static void cleanup_transaction(struct btrfs_trans_handle *trans,
				struct btrfs_root *root, int err)
{
	struct btrfs_transaction *cur_trans = trans->transaction;
	DEFINE_WAIT(wait);

	WARN_ON(trans->use_count > 1);

	btrfs_abort_transaction(trans, err);

	spin_lock(&root->fs_info->trans_lock);

	/*
	 * If the transaction is removed from the list, it means this
	 * transaction has been committed successfully, so it is impossible
	 * to call the cleanup function.
	 */
	BUG_ON(list_empty(&cur_trans->list));

	list_del_init(&cur_trans->list);
	if (cur_trans == root->fs_info->running_transaction) {
		cur_trans->state = TRANS_STATE_COMMIT_DOING;
		spin_unlock(&root->fs_info->trans_lock);
		wait_event(cur_trans->writer_wait,
			   atomic_read(&cur_trans->num_writers) == 1);

		spin_lock(&root->fs_info->trans_lock);
	}
	spin_unlock(&root->fs_info->trans_lock);

	btrfs_cleanup_one_transaction(trans->transaction, root);

	spin_lock(&root->fs_info->trans_lock);
	if (cur_trans == root->fs_info->running_transaction)
		root->fs_info->running_transaction = NULL;
	spin_unlock(&root->fs_info->trans_lock);
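
	/*
	 * Drop intwrite freeze protection if this handle type took it, then
	 * drop both remaining references on the transaction: the one that
	 * was held by the (now removed) entry on the transaction list and
	 * the one held for this handle.
	 */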
	if (trans->type & __TRANS_FREEZABLE)
		sb_end_intwrite(root->fs_info->sb);
	btrfs_put_transaction(cur_trans);
	btrfs_put_transaction(cur_trans);

	trace_btrfs_transaction_commit(root);

	if (current->journal_info == trans)
		current->journal_info = NULL;
	btrfs_scrub_cancel(root->fs_info);

	kmem_cache_free(btrfs_trans_handle_cachep, trans);
}

static inline int btrfs_start_delalloc_flush(struct btrfs_fs_info *fs_info)
{
	if (btrfs_test_opt(fs_info, FLUSHONCOMMIT))
		return btrfs_start_delalloc_roots(fs_info, 1, -1);
	return 0;
}

static inline void btrfs_wait_delalloc_flush(struct btrfs_fs_info *fs_info)
{
	if (btrfs_test_opt(fs_info, FLUSHONCOMMIT))
		btrfs_wait_ordered_roots(fs_info, -1, 0, (u64)-1);
}

static inline void
btrfs_wait_pending_ordered(struct btrfs_transaction *cur_trans)
{
	wait_event(cur_trans->pending_wait,
		   atomic_read(&cur_trans->pending_ordered) == 0);
}
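
/*
 * Commit the running transaction: flush delayed refs and delayed items,
 * stop new writers from joining, create any pending snapshots, write out
 * the tree roots and finally the superblock. The handle is consumed in
 * all cases; on error the transaction is aborted and cleaned up.
 */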
int btrfs_commit_transaction(struct btrfs_trans_handle *trans,
			     struct btrfs_root *root)
{
	struct btrfs_transaction *cur_trans = trans->transaction;
	struct btrfs_transaction *prev_trans = NULL;
	int ret;

	/* Stop the commit early if ->aborted is set */
	if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
		ret = cur_trans->aborted;
		btrfs_end_transaction(trans, root);
		return ret;
	}

	/*
	 * make a pass through all the delayed refs we have so far;
	 * any running procs may add more while we are here
	 */
	ret = btrfs_run_delayed_refs(trans, root, 0);
	if (ret) {
		btrfs_end_transaction(trans, root);
		return ret;
	}

	btrfs_trans_release_metadata(trans, root);
	trans->block_rsv = NULL;

	cur_trans = trans->transaction;

	/*
	 * set the flushing flag so procs in this transaction have to
	 * start sending their work down.
	 */
	cur_trans->delayed_refs.flushing = 1;
	smp_wmb();

	if (!list_empty(&trans->new_bgs))
		btrfs_create_pending_block_groups(trans, root);

	ret = btrfs_run_delayed_refs(trans, root, 0);
	if (ret) {
		btrfs_end_transaction(trans, root);
		return ret;
	}

	if (!test_bit(BTRFS_TRANS_DIRTY_BG_RUN, &cur_trans->flags)) {
		int run_it = 0;

		/*
		 * this mutex is also taken before trying to set
		 * block groups readonly.  We need to make sure
		 * that nobody has set a block group readonly
		 * after extents from that block group have been
		 * allocated for cache files.  btrfs_set_block_group_ro
		 * will wait for the transaction to commit if it
		 * finds BTRFS_TRANS_DIRTY_BG_RUN set.
		 *
		 * The BTRFS_TRANS_DIRTY_BG_RUN flag is also used to make sure
		 * only one process starts all the block group IO.  It wouldn't
		 * hurt to have more than one go through, but there's no
		 * real advantage to it either.
		 */
		mutex_lock(&root->fs_info->ro_block_group_mutex);
		if (!test_and_set_bit(BTRFS_TRANS_DIRTY_BG_RUN,
				      &cur_trans->flags))
			run_it = 1;
		mutex_unlock(&root->fs_info->ro_block_group_mutex);

		if (run_it)
			ret = btrfs_start_dirty_block_groups(trans, root);
	}
	if (ret) {
		btrfs_end_transaction(trans, root);
		return ret;
	}
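
	/*
	 * If another task has already driven this transaction past
	 * COMMIT_START, piggyback on that commit: end our handle, wait for
	 * the commit to finish and return its status rather than committing
	 * a second time.
	 */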
	spin_lock(&root->fs_info->trans_lock);
	if (cur_trans->state >= TRANS_STATE_COMMIT_START) {
		spin_unlock(&root->fs_info->trans_lock);
		atomic_inc(&cur_trans->use_count);
		ret = btrfs_end_transaction(trans, root);

		wait_for_commit(root, cur_trans);

		if (unlikely(cur_trans->aborted))
			ret = cur_trans->aborted;

		btrfs_put_transaction(cur_trans);

		return ret;
	}

	cur_trans->state = TRANS_STATE_COMMIT_START;
	wake_up(&root->fs_info->transaction_blocked_wait);

	if (cur_trans->list.prev != &root->fs_info->trans_list) {
		prev_trans = list_entry(cur_trans->list.prev,
					struct btrfs_transaction, list);
|
Btrfs: make the state of the transaction more readable
We used 3 variants to track the state of the transaction, it was complex
and wasted the memory space. Besides that, it was hard to understand that
which types of the transaction handles should be blocked in each transaction
state, so the developers often made mistakes.
This patch improved the above problem. In this patch, we define 6 states
for the transaction,
enum btrfs_trans_state {
TRANS_STATE_RUNNING = 0,
TRANS_STATE_BLOCKED = 1,
TRANS_STATE_COMMIT_START = 2,
TRANS_STATE_COMMIT_DOING = 3,
TRANS_STATE_UNBLOCKED = 4,
TRANS_STATE_COMPLETED = 5,
TRANS_STATE_MAX = 6,
}
and just use 1 variant to track those state.
In order to make the blocked handle types for each state more clear,
we introduce a array:
unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
[TRANS_STATE_RUNNING] = 0U,
[TRANS_STATE_BLOCKED] = (__TRANS_USERSPACE |
__TRANS_START),
[TRANS_STATE_COMMIT_START] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH),
[TRANS_STATE_COMMIT_DOING] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN),
[TRANS_STATE_UNBLOCKED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
[TRANS_STATE_COMPLETED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
}
it is very intuitionistic.
Besides that, because we remove ->in_commit in transaction structure, so
the lock ->commit_lock which was used to protect it is unnecessary, remove
->commit_lock.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-17 10:53:43 +07:00
|
|
|
if (prev_trans->state != TRANS_STATE_COMPLETED) {
|
2011-04-12 02:45:29 +07:00
|
|
|
atomic_inc(&prev_trans->use_count);
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2007-06-29 02:57:36 +07:00
|
|
|
|
|
|
|
wait_for_commit(root, prev_trans);
|
Btrfs: check if previous transaction aborted to avoid fs corruption
While we are committing a transaction, it's possible the previous one is
still finishing its commit and therefore we wait for it to finish first.
However we were not checking if that previous transaction ended up getting
aborted after we waited for it to commit, so we ended up committing the
current transaction which can lead to fs corruption because the new
superblock can point to trees that have had one or more nodes/leafs that
were never durably persisted.
The following sequence diagram exemplifies how this is possible:
CPU 0 CPU 1
transaction N starts
(...)
btrfs_commit_transaction(N)
cur_trans->state = TRANS_STATE_COMMIT_START;
(...)
cur_trans->state = TRANS_STATE_COMMIT_DOING;
(...)
cur_trans->state = TRANS_STATE_UNBLOCKED;
root->fs_info->running_transaction = NULL;
btrfs_start_transaction()
--> starts transaction N + 1
btrfs_write_and_wait_transaction(trans, root);
--> starts writing all new or COWed ebs created
at transaction N
creates some new ebs, COWs some
existing ebs but doesn't COW or
deletes eb X
btrfs_commit_transaction(N + 1)
(...)
cur_trans->state = TRANS_STATE_COMMIT_START;
(...)
wait_for_commit(root, prev_trans);
--> prev_trans == transaction N
btrfs_write_and_wait_transaction() continues
writing ebs
--> fails writing eb X, we abort transaction N
and set bit BTRFS_FS_STATE_ERROR on
fs_info->fs_state, so no new transactions
can start after setting that bit
cleanup_transaction()
btrfs_cleanup_one_transaction()
wakes up task at CPU 1
continues, doesn't abort because
cur_trans->aborted (transaction N + 1)
is zero, and no checks for bit
BTRFS_FS_STATE_ERROR in fs_info->fs_state
are made
btrfs_write_and_wait_transaction(trans, root);
--> succeeds, no errors during writeback
write_ctree_super(trans, root, 0);
--> succeeds
--> we have now a superblock that points us
to some root that uses eb X, which was
never written to disk
In this scenario future attempts to read eb X from disk results in an
error message like "parent transid verify failed on X wanted Y found Z".
So fix this by aborting the current transaction if after waiting for the
previous transaction we verify that it was aborted.
Cc: stable@vger.kernel.org
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-08-12 17:54:35 +07:00
|
|
|
ret = prev_trans->aborted;
|
2007-06-29 02:57:36 +07:00
|
|
|
|
2013-09-30 22:36:38 +07:00
|
|
|
btrfs_put_transaction(prev_trans);
|
Btrfs: check if previous transaction aborted to avoid fs corruption
2015-08-12 17:54:35 +07:00
|
|
|
if (ret)
|
|
|
|
goto cleanup_transaction;
|
2011-04-12 04:25:13 +07:00
|
|
|
} else {
|
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2007-06-29 02:57:36 +07:00
|
|
|
}
|
2011-04-12 04:25:13 +07:00
|
|
|
} else {
|
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2007-06-29 02:57:36 +07:00
|
|
|
}
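The hunk above implements the fix from the commit message: a condensed,
hedged sketch of the same pattern as one hypothetical helper (names and
locking taken from the surrounding code):

/*
 * Sketch only: wait for the previous transaction's commit, then re-check
 * its ->aborted status, since wait_for_commit() also returns when the
 * commit fails. The caller is expected to abort on a non-zero return.
 */
static int wait_prev_commit_checked(struct btrfs_root *root,
				    struct btrfs_transaction *prev_trans)
{
	int ret;

	atomic_inc(&prev_trans->use_count);	/* keep it alive across the wait */
	spin_unlock(&root->fs_info->trans_lock);

	wait_for_commit(root, prev_trans);
	ret = prev_trans->aborted;		/* 0 or a negative errno */

	btrfs_put_transaction(prev_trans);
	return ret;
}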
|
2007-08-11 03:22:09 +07:00
|
|
|
|
2013-05-15 14:48:27 +07:00
|
|
|
extwriter_counter_dec(cur_trans, trans->type);
|
|
|
|
|
2013-05-15 14:48:28 +07:00
|
|
|
ret = btrfs_start_delalloc_flush(root->fs_info);
|
|
|
|
if (ret)
|
|
|
|
goto cleanup_transaction;
|
|
|
|
|
2014-08-13 00:47:42 +07:00
|
|
|
ret = btrfs_run_delayed_items(trans, root);
|
2013-05-15 14:48:30 +07:00
|
|
|
if (ret)
|
|
|
|
goto cleanup_transaction;
|
2007-08-11 03:22:09 +07:00
|
|
|
|
2013-05-15 14:48:30 +07:00
|
|
|
wait_event(cur_trans->writer_wait,
|
|
|
|
extwriter_counter_read(cur_trans) == 0);
|
2007-08-11 03:22:09 +07:00
|
|
|
|
2013-05-15 14:48:30 +07:00
|
|
|
/* some pending stuff might be added after the previous flush. */
|
2014-08-13 00:47:42 +07:00
|
|
|
ret = btrfs_run_delayed_items(trans, root);
|
2012-11-01 14:33:14 +07:00
|
|
|
if (ret)
|
|
|
|
goto cleanup_transaction;
|
|
|
|
|
2013-05-15 14:48:28 +07:00
|
|
|
btrfs_wait_delalloc_flush(root->fs_info);
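Taken together, the steps above follow a start-early/wait-late pattern; a
hedged sketch condensing them into one hypothetical helper (error paths
collapsed, signatures as in the surrounding code):

/*
 * Sketch only: kick off delalloc writeback, drain external writers and
 * the delayed items they may add, then wait for the writeback that was
 * started at the beginning.
 */
static int flush_before_commit_doing(struct btrfs_trans_handle *trans,
				     struct btrfs_root *root,
				     struct btrfs_transaction *cur_trans)
{
	int ret;

	ret = btrfs_start_delalloc_flush(root->fs_info);	/* async start */
	if (ret)
		return ret;

	ret = btrfs_run_delayed_items(trans, root);
	if (ret)
		return ret;

	/* no new external writers can join once COMMIT_START is set */
	wait_event(cur_trans->writer_wait,
		   extwriter_counter_read(cur_trans) == 0);

	/* pick up anything queued after the first pass */
	ret = btrfs_run_delayed_items(trans, root);
	if (ret)
		return ret;

	btrfs_wait_delalloc_flush(root->fs_info);	/* now wait */
	return 0;
}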
|
2013-12-04 20:16:53 +07:00
|
|
|
|
2015-09-25 03:17:39 +07:00
|
|
|
btrfs_wait_pending_ordered(cur_trans);
|
2014-11-22 02:52:38 +07:00
|
|
|
|
2013-12-04 20:16:53 +07:00
|
|
|
btrfs_scrub_pause(root);
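/*
 * Note (added): this pause is paired with btrfs_scrub_continue() at the
 * end of the commit; every error path from here on jumps to the
 * scrub_continue label so the pause is always balanced.
 */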
|
2011-06-15 03:22:15 +07:00
|
|
|
/*
|
|
|
|
* Ok now we need to make sure to block out any other joins while we
|
|
|
|
* commit the transaction. We could have started a join before setting
|
Btrfs: make the state of the transaction more readable
We used 3 variables to track the state of the transaction, which was
complex and wasted memory. Besides that, it was hard to understand
which types of transaction handles should be blocked in each transaction
state, so developers often made mistakes.
This patch improves on that. In this patch, we define 6 states
for the transaction,
enum btrfs_trans_state {
TRANS_STATE_RUNNING = 0,
TRANS_STATE_BLOCKED = 1,
TRANS_STATE_COMMIT_START = 2,
TRANS_STATE_COMMIT_DOING = 3,
TRANS_STATE_UNBLOCKED = 4,
TRANS_STATE_COMPLETED = 5,
TRANS_STATE_MAX = 6,
}
and just use 1 variable to track those states.
In order to make the blocked handle types for each state clearer,
we introduce an array:
unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
[TRANS_STATE_RUNNING] = 0U,
[TRANS_STATE_BLOCKED] = (__TRANS_USERSPACE |
__TRANS_START),
[TRANS_STATE_COMMIT_START] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH),
[TRANS_STATE_COMMIT_DOING] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN),
[TRANS_STATE_UNBLOCKED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
[TRANS_STATE_COMPLETED] = (__TRANS_USERSPACE |
__TRANS_START |
__TRANS_ATTACH |
__TRANS_JOIN |
__TRANS_JOIN_NOLOCK),
}
which is very intuitive.
Besides that, because we removed ->in_commit from the transaction
structure, the ->commit_lock which was used to protect it is
unnecessary; remove ->commit_lock.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2013-05-17 10:53:43 +07:00
|
|
|
* COMMIT_DOING so make sure to wait for num_writers to drop to 1 again.
|
2011-06-15 03:22:15 +07:00
|
|
|
*/
|
|
|
|
spin_lock(&root->fs_info->trans_lock);
|
Btrfs: make the state of the transaction more readable
2013-05-17 10:53:43 +07:00
|
|
|
cur_trans->state = TRANS_STATE_COMMIT_DOING;
|
2011-06-15 03:22:15 +07:00
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
|
|
|
wait_event(cur_trans->writer_wait,
|
|
|
|
atomic_read(&cur_trans->num_writers) == 1);
|
|
|
|
|
2013-01-15 13:29:12 +07:00
|
|
|
/* ->aborted might be set after the previous check, so check it */
|
|
|
|
if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
|
|
|
|
ret = cur_trans->aborted;
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2013-01-15 13:29:12 +07:00
|
|
|
}
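The state machine from the commit message above gates which handle types
may join; a minimal, hypothetical sketch of how its
btrfs_blocked_trans_types[] mask could be consulted (the helper name is
an assumption):

/*
 * Sketch only: a handle type is blocked when its bit is set in the
 * mask for the transaction's current state.
 */
static inline bool trans_type_blocked(enum btrfs_trans_state state,
				      unsigned int type)
{
	return (btrfs_blocked_trans_types[state] & type) != 0;
}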
|
2011-06-14 07:00:16 +07:00
|
|
|
/*
|
|
|
|
* the reloc mutex makes sure that we stop
|
|
|
|
* the balancing code from coming in and moving
|
|
|
|
* extents around in the middle of the commit
|
|
|
|
*/
|
|
|
|
mutex_lock(&root->fs_info->reloc_mutex);
|
|
|
|
|
2012-09-06 17:03:32 +07:00
|
|
|
/*
|
|
|
|
* We needn't worry about the delayed items because we will
|
|
|
|
* deal with them in create_pending_snapshot(), which is the
|
|
|
|
* core function of the snapshot creation.
|
|
|
|
*/
|
|
|
|
ret = create_pending_snapshots(trans, root->fs_info);
|
2012-03-01 23:24:58 +07:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
2008-01-09 03:46:30 +07:00
|
|
|
|
2012-09-06 17:03:32 +07:00
|
|
|
/*
|
|
|
|
* We insert the dir indexes of the snapshots and update the inode
|
|
|
|
* of the snapshots' parents after the snapshot creation, so there
|
|
|
|
* are some delayed items which are not dealt with. Now deal with
|
|
|
|
* them.
|
|
|
|
*
|
|
|
|
* We needn't worry that this operation will corrupt the snapshots,
|
|
|
|
* because all the trees which are snapshotted will be forced to COW
|
|
|
|
* the nodes and leaves.
|
|
|
|
*/
|
|
|
|
ret = btrfs_run_delayed_items(trans, root);
|
2012-03-01 23:24:58 +07:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch to address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compared with Ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as inode items, directory name items, directory name indexes and so on.
If we can do some delayed b+ tree insertions or deletions, we can improve the
performance, so we made this patch which implements delayed directory name
index insertion/deletion and delayed inode updates.
Implementation:
- introduce a delayed root object into the filesystem, which uses two lists to
manage the delayed nodes that are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. The
other is used to manage the delayed nodes which are waiting to be dealt with
by the work thread.
- Every delayed node has two rb-trees: one is used to manage the directory name
indexes which are going to be inserted into the b+ tree, and the other is used
to manage the directory name indexes which are going to be deleted from the
b+ tree.
- introduce a worker to deal with the delayed operations. This worker is used
to handle the delayed directory name index item insertions and deletions and
the delayed inode updates.
When the delayed items go beyond the lower limit, we create works for some
delayed nodes and insert them into the work queue of the worker, and then
go back.
When the delayed items go beyond the upper bound, we create works for all
the delayed nodes that haven't been dealt with, and insert them into the work
queue of the worker, and then wait until the untreated items are below some
threshold value.
- When we want to insert a directory name index into b+ tree, we just add the
information into the delayed inserting rb-tree.
And then we check the number of the delayed items and do delayed items
balance. (The balance policy is above.)
- When we want to delete a directory name index from the b+ tree, we search it
in the inserting rb-tree first. If we find it, just drop it. If not,
add its key into the delayed deleting rb-tree.
Similar to the delayed inserting rb-tree, we also check the number of the
delayed items and do delayed items balance.
(The same to inserting manipulation)
- When we want to update the metadata of some inode, we cache the data of the
inode in the delayed node. The worker will flush it into the b+ tree after
dealing with the delayed insertions and deletions.
- We will move the delayed node to the tail of the list after we access it.
This way, we can cache more delayed items and merge more inode updates.
- If we want to commit the transaction, we will deal with all the delayed
nodes.
- The delayed node will be freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test with the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks to Kitayama-san for his help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 17:12:22 +07:00
|
|
|
|
2009-03-13 21:10:06 +07:00
|
|
|
ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
|
2012-03-01 23:24:58 +07:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
2009-03-13 21:10:06 +07:00
|
|
|
|
2015-04-16 15:55:08 +07:00
|
|
|
/* Record old roots for later qgroup accounting */
|
|
|
|
ret = btrfs_qgroup_prepare_account_extents(trans, root->fs_info);
|
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
|
|
|
goto scrub_continue;
|
|
|
|
}
|
|
|
|
|
2011-06-18 03:14:09 +07:00
|
|
|
/*
|
|
|
|
* make sure none of the code above managed to slip in a
|
|
|
|
* delayed item
|
|
|
|
*/
|
|
|
|
btrfs_assert_delayed_root_empty(root);
|
|
|
|
|
2007-04-02 21:50:19 +07:00
|
|
|
WARN_ON(cur_trans != trans->transaction);
|
2008-01-09 03:46:30 +07:00
|
|
|
|
2008-09-06 03:13:11 +07:00
|
|
|
/* btrfs_commit_tree_roots is responsible for getting the
|
|
|
|
* various roots consistent with each other. Every pointer
|
|
|
|
* in the tree of tree roots has to point to the most up to date
|
|
|
|
* root for every subvolume and other tree. So, we have to keep
|
|
|
|
* the tree logging code from jumping in and changing any
|
|
|
|
* of the trees.
|
|
|
|
*
|
|
|
|
* At this point in the commit, there can't be any tree-log
|
|
|
|
* writers, but a little lower down we drop the trans mutex
|
|
|
|
* and let new people in. By holding the tree_log_mutex
|
|
|
|
* from now until after the super is written, we avoid races
|
|
|
|
* with the tree-log code.
|
|
|
|
*/
|
|
|
|
mutex_lock(&root->fs_info->tree_log_mutex);
|
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in a subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about the pointer's key, level and in which
tree the pointer lives. This information allows us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds a per-subvolume red-black tree to keep track of cached
inodes. The red-black tree helps the balancing code find cached
inodes whose inode numbers are within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduces the overhead of checking back refs.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
ret = commit_fs_roots(trans, root);
|
2012-03-01 23:24:58 +07:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&root->fs_info->tree_log_mutex);
|
2012-04-02 23:31:37 +07:00
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
2007-06-23 01:16:25 +07:00
|
|
|
|
2014-01-13 12:36:06 +07:00
|
|
|
/*
|
2014-02-05 21:26:17 +07:00
|
|
|
* Since the transaction is done, we can apply the pending changes
|
|
|
|
* before the next transaction.
|
2014-01-13 12:36:06 +07:00
|
|
|
*/
|
2014-02-05 21:26:17 +07:00
|
|
|
btrfs_apply_pending_changes(root->fs_info);
|
2014-01-13 12:36:06 +07:00
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 21:45:14 +07:00
|
|
|
/* commit_fs_roots gets rid of all the tree log roots, it is now
|
2008-09-06 03:13:11 +07:00
|
|
|
* safe to free the log root tree
|
|
|
|
*/
|
|
|
|
btrfs_free_log_root_tree(trans, root->fs_info);
|
|
|
|
|
2015-04-16 15:55:08 +07:00
|
|
|
/*
|
|
|
|
* Since fs roots are all committed, we can get a quite accurate
|
|
|
|
* new_roots. So let's do quota accounting.
|
|
|
|
*/
|
|
|
|
ret = btrfs_qgroup_account_extents(trans, root->fs_info);
|
|
|
|
if (ret < 0) {
|
|
|
|
mutex_unlock(&root->fs_info->tree_log_mutex);
|
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
|
|
|
goto scrub_continue;
|
|
|
|
}
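The two qgroup calls above form a before/after pair around the fs root
commit; a hedged sketch of the pattern (helper hypothetical, call
signatures as used above):

/*
 * Sketch only: old roots are recorded before the fs roots are
 * committed, new roots are resolved afterwards, and the per-extent
 * delta is accounted in the second phase.
 */
static int qgroup_account_two_phase(struct btrfs_trans_handle *trans,
				    struct btrfs_fs_info *fs_info)
{
	int ret;

	ret = btrfs_qgroup_prepare_account_extents(trans, fs_info);
	if (ret)
		return ret;

	/* ... commit_fs_roots() runs here, switching the roots ... */

	return btrfs_qgroup_account_extents(trans, fs_info);
}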
|
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 21:45:14 +07:00
|
|
|
ret = commit_cowonly_roots(trans, root);
|
2012-03-01 23:24:58 +07:00
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&root->fs_info->tree_log_mutex);
|
2012-04-02 23:31:37 +07:00
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
2007-06-23 01:16:25 +07:00
|
|
|
|
2013-01-15 13:29:12 +07:00
|
|
|
/*
|
|
|
|
* The tasks which save the space cache and inode cache may also
|
|
|
|
* update ->aborted, check it.
|
|
|
|
*/
|
|
|
|
if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
|
|
|
|
ret = cur_trans->aborted;
|
|
|
|
mutex_unlock(&root->fs_info->tree_log_mutex);
|
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2013-01-15 13:29:12 +07:00
|
|
|
}
|
|
|
|
|
2009-09-12 03:11:19 +07:00
|
|
|
btrfs_prepare_extent_commit(trans, root);
|
|
|
|
|
2007-03-25 22:35:08 +07:00
|
|
|
cur_trans = root->fs_info->running_transaction;
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 21:45:14 +07:00
|
|
|
|
|
|
|
btrfs_set_root_node(&root->fs_info->tree_root->root_item,
|
|
|
|
root->fs_info->tree_root->node);
|
2014-03-14 02:42:13 +07:00
|
|
|
list_add_tail(&root->fs_info->tree_root->dirty_list,
|
|
|
|
&cur_trans->switch_commits);
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 21:45:14 +07:00
|
|
|
|
|
|
|
btrfs_set_root_node(&root->fs_info->chunk_root->root_item,
|
|
|
|
root->fs_info->chunk_root->node);
|
2014-03-14 02:42:13 +07:00
|
|
|
list_add_tail(&root->fs_info->chunk_root->dirty_list,
|
|
|
|
&cur_trans->switch_commits);
|
|
|
|
|
|
|
|
switch_commit_roots(cur_trans, root->fs_info);
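What switch_commit_roots() does per queued root is not shown in this
hunk; a hedged sketch of the core swap, assuming the usual btrfs helpers
btrfs_root_node() and free_extent_buffer():

/*
 * Sketch only: publish the freshly committed node as the stable
 * commit_root for one root on the switch_commits list.
 */
static void switch_one_commit_root(struct btrfs_root *root)
{
	free_extent_buffer(root->commit_root);		/* drop the old ref */
	root->commit_root = btrfs_root_node(root);	/* ref'd copy of ->node */
}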
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 21:45:14 +07:00
|
|
|
|
2012-06-28 23:04:55 +07:00
|
|
|
assert_qgroups_uptodate(trans);
|
2014-11-18 03:45:48 +07:00
|
|
|
ASSERT(list_empty(&cur_trans->dirty_bgs));
|
2015-04-07 02:46:08 +07:00
|
|
|
ASSERT(list_empty(&cur_trans->io_bgs));
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
2009-06-10 21:45:14 +07:00
|
|
|
update_super_roots(root);
|
2008-09-06 03:13:11 +07:00
|
|
|
|
2013-10-01 01:10:43 +07:00
|
|
|
btrfs_set_super_log_root(root->fs_info->super_copy, 0);
|
|
|
|
btrfs_set_super_log_root_level(root->fs_info->super_copy, 0);
|
2011-04-13 20:41:04 +07:00
|
|
|
memcpy(root->fs_info->super_for_commit, root->fs_info->super_copy,
|
|
|
|
sizeof(*root->fs_info->super_copy));
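The three statements above stage the superblock for the disk write; a
compact restatement as a hypothetical helper (no new behavior):

/*
 * Sketch only: the committed super must not point at a stale log root,
 * so the log fields are zeroed before the in-memory copy is staged.
 */
static void stage_super_for_commit(struct btrfs_fs_info *fs_info)
{
	btrfs_set_super_log_root(fs_info->super_copy, 0);
	btrfs_set_super_log_root_level(fs_info->super_copy, 0);
	memcpy(fs_info->super_for_commit, fs_info->super_copy,
	       sizeof(*fs_info->super_copy));
}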
|
2007-06-29 02:57:36 +07:00
|
|
|
|
2014-09-03 20:35:33 +07:00
|
|
|
btrfs_update_commit_device_size(root->fs_info);
|
2014-09-03 20:35:34 +07:00
|
|
|
btrfs_update_commit_device_bytes_used(root, cur_trans);
|
2014-09-03 20:35:33 +07:00
|
|
|
|
2016-09-03 02:40:02 +07:00
|
|
|
clear_bit(BTRFS_FS_LOG1_ERR, &root->fs_info->flags);
|
|
|
|
clear_bit(BTRFS_FS_LOG2_ERR, &root->fs_info->flags);
|
Btrfs: be aware of btree inode write errors to avoid fs corruption
While we have a transaction ongoing, the VM might decide at any time
to call btree_inode->i_mapping->a_ops->writepages(), which will start
writeback of dirty pages belonging to btree nodes/leafs. This call
might return an error or the writeback might finish with an error
before we attempt to commit the running transaction. If this happens,
we might have no way of knowing that such an error happened when we are
committing the transaction - because the pages might no longer be
marked dirty nor tagged for writeback (if a subsequent modification
to the extent buffer didn't happen before the transaction commit) which
makes filemap_fdata[write|wait]_range unable to find such pages (even
if they're marked with SetPageError).
So if this happens we must abort the transaction, otherwise we commit
a super block with btree roots that point to btree nodes/leafs whose
content on disk is invalid - either garbage or the content of some
node/leaf from a past generation that got cowed or deleted and is no
longer valid (for this latter case we end up getting error messages like
"parent transid verify failed on 10826481664 wanted 25748 found 29562"
when reading btree nodes/leafs from disk).
Note that setting and checking AS_EIO/AS_ENOSPC in the btree inode's
i_mapping would not be enough because we need to distinguish between
log tree extents (not fatal) vs non-log tree extents (fatal) and
because the next call to filemap_fdatawait_range() will catch and clear
such errors in the mapping - and that call might be from a log sync and
not from a transaction commit, which means we would not know about the
error at transaction commit time. Also, checking for the eb flag
EXTENT_BUFFER_IOERR at transaction commit time isn't done and would
not be completely reliable, as the eb might be removed from memory and
read back when trying to get it, which clears that flag right before
reading the eb's pages from disk, making us not know about the previous
write error.
Using the new 3 flags for the btree inode also covers the case where
writepages() returns success, writeback was started for all dirty pages,
and, before filemap_fdatawait_range() is called, that writeback has
already finished with errors - because we were not using
AS_EIO/AS_ENOSPC, filemap_fdatawait_range() would return success, as it
could not know that writeback errors happened (the pages were no longer
tagged for writeback).
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-09-26 18:25:56 +07:00
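Per the commit message above, btree write errors latch flags that the
commit path must honor; a hedged sketch of a check site (the non-log bit
name BTRFS_FS_BTREE_ERR is an assumption, by analogy with the LOG1/LOG2
bits cleared above):

/*
 * Sketch only: a latched non-log btree writeback error must abort the
 * commit; the log-tree bits are cleared above once the commit makes the
 * old log obsolete.
 */
static bool btree_write_errors_fatal(struct btrfs_fs_info *fs_info)
{
	return test_bit(BTRFS_FS_BTREE_ERR, &fs_info->flags);
}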
|
|
|
|
Btrfs: fix -ENOSPC when finishing block group creation
While creating a block group, we often end up getting ENOSPC while updating
the chunk tree, which leads to a transaction abortion that produces a trace
like the following:
[30670.116368] WARNING: CPU: 4 PID: 20735 at fs/btrfs/super.c:260 __btrfs_abort_transaction+0x52/0x106 [btrfs]()
[30670.117777] BTRFS: Transaction aborted (error -28)
(...)
[30670.163567] Call Trace:
[30670.163906] [<ffffffff8142fa46>] dump_stack+0x4f/0x7b
[30670.164522] [<ffffffff8108b6a2>] ? console_unlock+0x361/0x3ad
[30670.165171] [<ffffffff81045ea5>] warn_slowpath_common+0xa1/0xbb
[30670.166323] [<ffffffffa035daa7>] ? __btrfs_abort_transaction+0x52/0x106 [btrfs]
[30670.167213] [<ffffffff81045f05>] warn_slowpath_fmt+0x46/0x48
[30670.167862] [<ffffffffa035daa7>] __btrfs_abort_transaction+0x52/0x106 [btrfs]
[30670.169116] [<ffffffffa03743d7>] btrfs_create_pending_block_groups+0x101/0x130 [btrfs]
[30670.170593] [<ffffffffa038426a>] __btrfs_end_transaction+0x84/0x366 [btrfs]
[30670.171960] [<ffffffffa038455c>] btrfs_end_transaction+0x10/0x12 [btrfs]
[30670.174649] [<ffffffffa036eb6b>] btrfs_check_data_free_space+0x11f/0x27c [btrfs]
[30670.176092] [<ffffffffa039450d>] btrfs_fallocate+0x7c8/0xb96 [btrfs]
[30670.177218] [<ffffffff812459f2>] ? __this_cpu_preempt_check+0x13/0x15
[30670.178622] [<ffffffff81152447>] vfs_fallocate+0x14c/0x1de
[30670.179642] [<ffffffff8116b915>] ? __fget_light+0x2d/0x4f
[30670.180692] [<ffffffff81152863>] SyS_fallocate+0x47/0x62
[30670.186737] [<ffffffff81435b32>] system_call_fastpath+0x12/0x17
[30670.187792] ---[ end trace 0373e6b491c4a8cc ]---
This is because we don't do proper space reservation for the chunk block
reserve when we have multiple tasks allocating chunks in parallel.
So block group creation has 2 phases, and the first phase essentially
checks if there is enough space in the system space_info, allocating a
new system chunk if there isn't, while the second phase updates the
device, extent and chunk trees. However, because the updates to the
chunk tree happen in the second phase, if we have N tasks, each with
its own transaction handle, allocating new chunks in parallel and if
there is only enough space in the system space_info to allocate M chunks,
where M < N, none of the tasks ends up allocating a new system chunk in
the first phase and N - M tasks will get -ENOSPC when attempting to
update the chunk tree in phase 2 if they need to COW any nodes/leafs
from the chunk tree.
Fix this by doing proper reservation in the chunk block reserve.
The issue could be reproduced by running fstests generic/038 in a loop,
which eventually triggered the problem.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-05-20 20:01:54 +07:00
|
|
|
btrfs_trans_release_chunk_metadata(trans);
|
|
|
|
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_lock(&root->fs_info->trans_lock);
|
Btrfs: make the state of the transaction more readable
2013-05-17 10:53:43 +07:00
|
|
|
cur_trans->state = TRANS_STATE_UNBLOCKED;
|
2011-04-12 04:25:13 +07:00
|
|
|
root->fs_info->running_transaction = NULL;
|
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2011-06-14 07:00:16 +07:00
|
|
|
mutex_unlock(&root->fs_info->reloc_mutex);
|
2009-03-13 07:12:45 +07:00
|
|
|
|
2008-07-17 23:54:14 +07:00
|
|
|
wake_up(&root->fs_info->transaction_wait);
|
2008-07-17 23:53:50 +07:00
|
|
|
|
2007-03-23 02:59:16 +07:00
|
|
|
ret = btrfs_write_and_wait_transaction(trans, root);
|
2012-03-01 23:24:58 +07:00
|
|
|
if (ret) {
|
2016-03-16 15:43:06 +07:00
|
|
|
btrfs_handle_fs_error(root->fs_info, ret,
|
2013-03-12 21:46:08 +07:00
|
|
|
"Error while writing out transaction");
|
2012-03-01 23:24:58 +07:00
|
|
|
mutex_unlock(&root->fs_info->tree_log_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
ret = write_ctree_super(trans, root, 0);
|
|
|
|
if (ret) {
|
|
|
|
mutex_unlock(&root->fs_info->tree_log_mutex);
|
2014-02-19 18:24:16 +07:00
|
|
|
goto scrub_continue;
|
2012-03-01 23:24:58 +07:00
|
|
|
}
|
2008-01-03 21:08:48 +07:00
|
|
|
|
2008-09-06 03:13:11 +07:00
|
|
|
/*
|
|
|
|
* the super is written, we can safely allow the tree-loggers
|
|
|
|
* to go about their business
|
|
|
|
*/
|
|
|
|
mutex_unlock(&root->fs_info->tree_log_mutex);
|
|
|
|
|
2009-09-12 03:11:19 +07:00
|
|
|
btrfs_finish_extent_commit(trans, root);
|
2008-01-03 21:08:48 +07:00
|
|
|
|
2015-09-24 21:46:10 +07:00
|
|
|
if (test_bit(BTRFS_TRANS_HAVE_FREE_BGS, &cur_trans->flags))
|
btrfs: Fix out-of-space bug
Btrfs will report NO_SPACE when we create and remove files several times,
and we can't write to the filesystem until we mount it again.
Steps to reproduce:
1: Create a single-dev btrfs fs with default option
2: Write a file into it to take up most fs space
3: Delete above file
4: Wait about 100s to let chunk removed
5: goto 2
Script is like following:
#!/bin/bash
# Recommend 1.2G space, too large disk will make test slow
DEV="/dev/sda16"
MNT="/mnt/tmp"
dev_size="$(lsblk -bn -o SIZE "$DEV")" || exit 2
file_size_m=$((dev_size * 75 / 100 / 1024 / 1024))
echo "Loop write ${file_size_m}M file on $((dev_size / 1024 / 1024))M dev"
for ((i = 0; i < 10; i++)); do umount "$MNT" 2>/dev/null; done
echo "mkfs $DEV"
mkfs.btrfs -f "$DEV" >/dev/null || exit 2
echo "mount $DEV $MNT"
mount "$DEV" "$MNT" || exit 2
for ((loop_i = 0; loop_i < 20; loop_i++)); do
echo
echo "loop $loop_i"
echo "dd file..."
cmd=(dd if=/dev/zero of="$MNT"/file0 bs=1M count="$file_size_m")
"${cmd[@]}" 2>/dev/null || {
# NO_SPACE error triggered
echo "dd failed: ${cmd[*]}"
exit 1
}
echo "rm file..."
rm -f "$MNT"/file0 || exit 2
for ((i = 0; i < 10; i++)); do
df "$MNT" | tail -1
sleep 10
done
done
Reason:
It is triggered by commit: 47ab2a6c689913db23ccae38349714edf8365e0a
which is used to remove empty block groups automatically, but the
reason is not in that patch. The code before worked well because btrfs
didn't need to create and delete chunks so many times with such high
complexity.
The above bug has several causes; any of them can trigger it.
Reason1:
When we remove some contiguous chunks but leave other chunks after them,
this disk space should be reusable when recreating chunks, but in the
current code, only the first create will succeed.
Fixed by Forrest Liu <forrestl@synology.com> in:
Btrfs: fix find_free_dev_extent() malfunction in case device tree has hole
Reason2:
contains_pending_extent() returns a wrong value in its calculation.
Fixed by Forrest Liu <forrestl@synology.com> in:
Btrfs: fix find_free_dev_extent() malfunction in case device tree has hole
Reason3:
btrfs_check_data_free_space() tries to commit the transaction and retry
the chunk allocation when the first allocation fails, but space_info->full
is set by the first allocation and prevents the second allocation in the
retry.
Fixed in this patch by clearing space_info->full on transaction commit.
Tested several times with the above script.
Changelog v3->v4:
use light weight int instead of atomic_t to record have_remove_bgs in
transaction, suggested by:
Josef Bacik <jbacik@fb.com>
Changelog v2->v3:
v2 fixed the bug by adding more commit-transaction calls, but we
only need to reclaim space when we really have no space for a
new chunk, noticed by:
Filipe David Manana <fdmanana@gmail.com>
Actually, our code already has this type of commit-and-retry;
we only need to make it work with removed bgs.
v3 fixed the bug with above way.
Changelog v1->v2:
v1 would introduce a new bug when deleting and creating a chunk in the
same disk space in the same transaction, noticed by:
Filipe David Manana <fdmanana@gmail.com>
V2 fixes this bug by committing the transaction after removing block groups.
Reported-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Suggested-by: Filipe David Manana <fdmanana@gmail.com>
Suggested-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2015-02-12 13:18:17 +07:00
|
|
|
btrfs_clear_space_info_full(root->fs_info);
|
|
|
|
|
2007-08-11 03:22:09 +07:00
|
|
|
root->fs_info->last_trans_committed = cur_trans->transid;
|
Btrfs: make the state of the transaction more readable
2013-05-17 10:53:43 +07:00
|
|
|
/*
|
|
|
|
* We needn't acquire the lock here because there is no other task
|
|
|
|
* which can change it.
|
|
|
|
*/
|
|
|
|
cur_trans->state = TRANS_STATE_COMPLETED;
|
2007-04-02 21:50:19 +07:00
|
|
|
wake_up(&cur_trans->commit_wait);
|
2008-11-18 09:02:50 +07:00
|
|
|
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_lock(&root->fs_info->trans_lock);
|
2011-04-12 02:45:29 +07:00
|
|
|
list_del_init(&cur_trans->list);
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
|
|
|
|
2013-09-30 22:36:38 +07:00
|
|
|
btrfs_put_transaction(cur_trans);
|
|
|
|
btrfs_put_transaction(cur_trans);
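/*
 * Note (added): the double put above is intentional - one drops the
 * reference held by fs_info's transaction list (the transaction was
 * just unlinked from it), the other drops the reference the commit
 * path has held since the transaction was joined.
 */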
|
2007-08-30 02:47:34 +07:00
|
|
|
|
2013-05-15 14:48:27 +07:00
|
|
|
if (trans->type & __TRANS_FREEZABLE)
|
Btrfs: fix orphan transaction on the freezed filesystem
With the following debug patch:
static int btrfs_freeze(struct super_block *sb)
{
+ struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+ struct btrfs_transaction *trans;
+
+ spin_lock(&fs_info->trans_lock);
+ trans = fs_info->running_transaction;
+ if (trans) {
+ printk("Transid %llu, use_count %d, num_writer %d\n",
+ trans->transid, atomic_read(&trans->use_count),
+ atomic_read(&trans->num_writers));
+ }
+ spin_unlock(&fs_info->trans_lock);
return 0;
}
I found there was an orphan transaction after the freeze operation was done.
It is because the transaction may not be committed when the transaction handle
ends even though it is the last handle of the current transaction. This design
avoids committing the transaction frequently, but also introduces the above
problem.
So I add btrfs_attach_transaction() which can catch the current transaction
and commit it. If there is no transaction, it will return ENOENT and do
nothing.
This function can also be used instead of btrfs_join_transaction_freeze()
because it doesn't increase the writer counter and doesn't start a new
transaction, so it can also fix the deadlock between sync and freeze.
Besides that, it is used instead of btrfs_join_transaction() in
transaction_kthread(), because if there is no transaction, the transaction
kthread needn't do anything.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
2012-09-20 14:54:00 +07:00
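A hedged usage sketch of the attach semantics described above (error
handling simplified; signatures as in this era of the code):

/*
 * Sketch only: commit the currently running transaction if there is
 * one; -ENOENT from btrfs_attach_transaction() means there is nothing
 * to commit.
 */
static int commit_running_transaction(struct btrfs_root *root)
{
	struct btrfs_trans_handle *trans;

	trans = btrfs_attach_transaction(root);
	if (IS_ERR(trans))
		return PTR_ERR(trans) == -ENOENT ? 0 : PTR_ERR(trans);

	return btrfs_commit_transaction(trans, root);
}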
|
|
|
sb_end_intwrite(root->fs_info->sb);
|
2012-06-12 21:20:45 +07:00
|
|
|
|
Btrfs: add initial tracepoint support for btrfs
Tracepoints can provide insight into why btrfs hits bugs and are greatly
helpful for debugging, e.g.
dd-7822 [000] 2121.641088: btrfs_inode_request: root = 5(FS_TREE), gen = 4, ino = 256, blocks = 8, disk_i_size = 0, last_trans = 8, logged_trans = 0
dd-7822 [000] 2121.641100: btrfs_inode_new: root = 5(FS_TREE), gen = 8, ino = 257, blocks = 0, disk_i_size = 0, last_trans = 0, logged_trans = 0
btrfs-transacti-7804 [001] 2146.935420: btrfs_cow_block: root = 2(EXTENT_TREE), refs = 2, orig_buf = 29368320 (orig_level = 0), cow_buf = 29388800 (cow_level = 0)
btrfs-transacti-7804 [001] 2146.935473: btrfs_cow_block: root = 1(ROOT_TREE), refs = 2, orig_buf = 29364224 (orig_level = 0), cow_buf = 29392896 (cow_level = 0)
btrfs-transacti-7804 [001] 2146.972221: btrfs_transaction_commit: root = 1(ROOT_TREE), gen = 8
flush-btrfs-2-7821 [001] 2155.824210: btrfs_chunk_alloc: root = 3(CHUNK_TREE), offset = 1103101952, size = 1073741824, num_stripes = 1, sub_stripes = 0, type = DATA
flush-btrfs-2-7821 [001] 2155.824241: btrfs_cow_block: root = 2(EXTENT_TREE), refs = 2, orig_buf = 29388800 (orig_level = 0), cow_buf = 29396992 (cow_level = 0)
flush-btrfs-2-7821 [001] 2155.824255: btrfs_cow_block: root = 4(DEV_TREE), refs = 2, orig_buf = 29372416 (orig_level = 0), cow_buf = 29401088 (cow_level = 0)
flush-btrfs-2-7821 [000] 2155.824329: btrfs_cow_block: root = 3(CHUNK_TREE), refs = 2, orig_buf = 20971520 (orig_level = 0), cow_buf = 20975616 (cow_level = 0)
btrfs-endio-wri-7800 [001] 2155.898019: btrfs_cow_block: root = 5(FS_TREE), refs = 2, orig_buf = 29384704 (orig_level = 0), cow_buf = 29405184 (cow_level = 0)
btrfs-endio-wri-7800 [001] 2155.898043: btrfs_cow_block: root = 7(CSUM_TREE), refs = 2, orig_buf = 29376512 (orig_level = 0), cow_buf = 29409280 (cow_level = 0)
Here is what I have added:
1) ordered_extent:
btrfs_ordered_extent_add
btrfs_ordered_extent_remove
btrfs_ordered_extent_start
btrfs_ordered_extent_put
These provide critical information to understand how ordered_extents are
updated.
2) extent_map:
btrfs_get_extent
extent_map is used in both read and write cases, and it is useful for tracking
how btrfs specific IO is running.
3) writepage:
__extent_writepage
btrfs_writepage_end_io_hook
Pages are critical resources and produce a lot of corner cases during
writeback, so it is valuable to know how a page is written to disk.
4) inode:
btrfs_inode_new
btrfs_inode_request
btrfs_inode_evict
These can show where and when an inode is created and when it is evicted.
5) sync:
btrfs_sync_file
btrfs_sync_fs
These show sync arguments.
6) transaction:
btrfs_transaction_commit
In a transaction-based filesystem, it is useful to know the generation and
who performs the commit.
7) back reference and cow:
btrfs_delayed_tree_ref
btrfs_delayed_data_ref
btrfs_delayed_ref_head
btrfs_cow_block
Btrfs natively supports back references; these tracepoints are helpful for
understanding btrfs's COW mechanism.
8) chunk:
btrfs_chunk_alloc
btrfs_chunk_free
A chunk is a link between a physical offset and a logical offset, and stands
for space information in btrfs, so these are helpful for tracing space usage.
9) reserved_extent:
btrfs_reserved_extent_alloc
btrfs_reserved_extent_free
These can show how btrfs uses its space.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
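For reference, a hedged sketch of how one of these tracepoints is typically
declared with TRACE_EVENT() (the field choices here are illustrative and may
not match the upstream definition exactly):

TRACE_EVENT(btrfs_transaction_commit,
	TP_PROTO(struct btrfs_root *root),
	TP_ARGS(root),
	TP_STRUCT__entry(
		__field(u64, generation)
		__field(u64, root_objectid)
	),
	TP_fast_assign(
		__entry->generation	= root->fs_info->generation;
		__entry->root_objectid	= root->root_key.objectid;
	),
	TP_printk("root = %llu, gen = %llu",
		  __entry->root_objectid, __entry->generation)
);

The call site trace_btrfs_transaction_commit(root), visible further down in
this file, then fires with those fields filled in.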
2011-03-24 18:18:59 +07:00
|
|
|
trace_btrfs_transaction_commit(root);
|
|
|
|
|
2011-03-08 20:14:00 +07:00
|
|
|
btrfs_scrub_continue(root);
|
|
|
|
|
2009-09-12 03:12:44 +07:00
|
|
|
if (current->journal_info == trans)
|
|
|
|
current->journal_info = NULL;
|
|
|
|
|
2007-04-02 21:50:19 +07:00
|
|
|
kmem_cache_free(btrfs_trans_handle_cachep, trans);
|
2009-11-12 16:36:34 +07:00
|
|
|
|
btrfs: fix fsfreeze hang caused by delayed iputs deal
When running fstests generic/068, sometimes we got below deadlock:
xfs_io D ffff8800331dbb20 0 6697 6693 0x00000080
ffff8800331dbb20 ffff88007acfc140 ffff880034d895c0 ffff8800331dc000
ffff880032d243e8 fffffffeffffffff ffff880032d24400 0000000000000001
ffff8800331dbb38 ffffffff816a9045 ffff880034d895c0 ffff8800331dbba8
Call Trace:
[<ffffffff816a9045>] schedule+0x35/0x80
[<ffffffff816abab2>] rwsem_down_read_failed+0xf2/0x140
[<ffffffff8118f5e1>] ? __filemap_fdatawrite_range+0xd1/0x100
[<ffffffff8134f978>] call_rwsem_down_read_failed+0x18/0x30
[<ffffffffa06631fc>] ? btrfs_alloc_block_rsv+0x2c/0xb0 [btrfs]
[<ffffffff810d32b5>] percpu_down_read+0x35/0x50
[<ffffffff81217dfc>] __sb_start_write+0x2c/0x40
[<ffffffffa067f5d5>] start_transaction+0x2a5/0x4d0 [btrfs]
[<ffffffffa067f857>] btrfs_join_transaction+0x17/0x20 [btrfs]
[<ffffffffa068ba34>] btrfs_evict_inode+0x3c4/0x5d0 [btrfs]
[<ffffffff81230a1a>] evict+0xba/0x1a0
[<ffffffff812316b6>] iput+0x196/0x200
[<ffffffffa06851d0>] btrfs_run_delayed_iputs+0x70/0xc0 [btrfs]
[<ffffffffa067f1d8>] btrfs_commit_transaction+0x928/0xa80 [btrfs]
[<ffffffffa0646df0>] btrfs_freeze+0x30/0x40 [btrfs]
[<ffffffff81218040>] freeze_super+0xf0/0x190
[<ffffffff81229275>] do_vfs_ioctl+0x4a5/0x5c0
[<ffffffff81003176>] ? do_audit_syscall_entry+0x66/0x70
[<ffffffff810038cf>] ? syscall_trace_enter_phase1+0x11f/0x140
[<ffffffff81229409>] SyS_ioctl+0x79/0x90
[<ffffffff81003c12>] do_syscall_64+0x62/0x110
[<ffffffff816acbe1>] entry_SYSCALL64_slow_path+0x25/0x25
From this trace, freeze_super() already holds SB_FREEZE_FS, but
btrfs_freeze() will call btrfs_commit_transaction() again. If
btrfs_commit_transaction() finds that it has delayed iputs to handle,
it'll call start_transaction(), which will try to take the SB_FREEZE_FS
lock again, and a deadlock occurs.
The root cause is that in btrfs, sync_filesystem(sb) does not make
sure all metadata is updated. There may still be some code adding
delayed iputs; see the sample race window below:
CPU1 | CPU2
|-> freeze_super() |
|-> sync_filesystem(sb); |
| |-> cleaner_kthread()
| | |-> btrfs_delete_unused_bgs()
| | |-> btrfs_remove_chunk()
| | |-> btrfs_remove_block_group()
| | |-> btrfs_add_delayed_iput()
| |
|-> sb->s_writers.frozen = SB_FREEZE_FS; |
|-> sb_wait_write(sb, SB_FREEZE_FS); |
| acquire SB_FREEZE_FS lock. |
| |
|-> btrfs_freeze() |
|-> btrfs_commit_transaction() |
|-> btrfs_run_delayed_iputs() |
| will handle delayed iputs, |
| that means start_transaction() |
| will be called, which will try |
| to get SB_FREEZE_FS lock. |
To fix this issue, introduce an "int fs_frozen" field to record internally
whether the fs has been frozen. If the fs has been frozen, we cannot handle
delayed iputs.
Signed-off-by: Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add comment to btrfs_freeze ]
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
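A hedged sketch of how the flag could be maintained around freeze/thaw
(simplified; the exact placement and the btrfs_unfreeze() counterpart are
assumptions of this sketch, not the verbatim kernel code):

static int btrfs_freeze(struct super_block *sb)
{
	struct btrfs_root *root = btrfs_sb(sb)->tree_root;

	/* From here on, commits must not process delayed iputs. */
	root->fs_info->fs_frozen = 1;
	/* ... attach to and commit any running transaction ... */
	return 0;
}

static int btrfs_unfreeze(struct super_block *sb)
{
	struct btrfs_root *root = btrfs_sb(sb)->tree_root;

	root->fs_info->fs_frozen = 0;
	return 0;
}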
2016-08-01 12:28:08 +07:00
|
|
|
/*
|
|
|
|
* If fs has been frozen, we can not handle delayed iputs, otherwise
|
|
|
|
* it'll result in deadlock about SB_FREEZE_FS.
|
|
|
|
*/
|
btrfs: Fix lockdep warning of btrfs_run_delayed_iputs()
Liu Bo <bo.li.liu@oracle.com> reported a lockdep warning of
delayed_iput_sem in xfstests generic/241:
[ 2061.345955] =============================================
[ 2061.346027] [ INFO: possible recursive locking detected ]
[ 2061.346027] 4.1.0+ #268 Tainted: G W
[ 2061.346027] ---------------------------------------------
[ 2061.346027] btrfs-cleaner/3045 is trying to acquire lock:
[ 2061.346027] (&fs_info->delayed_iput_sem){++++..}, at:
[<ffffffff814063ab>] btrfs_run_delayed_iputs+0x6b/0x100
[ 2061.346027] but task is already holding lock:
[ 2061.346027] (&fs_info->delayed_iput_sem){++++..}, at: [<ffffffff814063ab>] btrfs_run_delayed_iputs+0x6b/0x100
[ 2061.346027] other info that might help us debug this:
[ 2061.346027] Possible unsafe locking scenario:
[ 2061.346027] CPU0
[ 2061.346027] ----
[ 2061.346027] lock(&fs_info->delayed_iput_sem);
[ 2061.346027] lock(&fs_info->delayed_iput_sem);
[ 2061.346027]
*** DEADLOCK ***
It happens rarely, about 1 in 400 runs in my test env.
The reason is recursion of btrfs_run_delayed_iputs():
cleaner_kthread
-> btrfs_run_delayed_iputs() *1
-> get delayed_iput_sem lock *2
-> iput()
-> ...
-> btrfs_commit_transaction()
-> btrfs_run_delayed_iputs() *1
-> get delayed_iput_sem lock (dead lock) *2
*1: recursion of btrfs_run_delayed_iputs()
*2: warning of lockdep about delayed_iput_sem
When the fs is under high stress, new iputs may be added to the
fs_info->delayed_iputs list while btrfs_run_delayed_iputs() is running,
which causes a second btrfs_run_delayed_iputs() to run into
down_read(&fs_info->delayed_iput_sem) again and triggers the above
lockdep warning.
Actually, it will not cause a real problem because both locks are read
locks, but we can fix it to avoid the lockdep warning.
Fix:
Don't run btrfs_run_delayed_iputs() in btrfs_commit_transaction() for the
cleaner_kthread thread, to break the recursion path above.
cleaner_kthread already calls btrfs_run_delayed_iputs() explicitly, so it
doesn't need to call it again in btrfs_commit_transaction(); as a bonus,
this also avoids stack overflow.
Test:
No lockdep warning after 1200 generic/241 runs with the patch applied.
Reported-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
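The fix itself boils down to a caller check in btrfs_commit_transaction(),
sketched here in consolidated form (the condition as eventually merged,
visible in the fragmented blame lines below, also covers the frozen-fs flag):

if (current != root->fs_info->transaction_kthread &&
    current != root->fs_info->cleaner_kthread)
	btrfs_run_delayed_iputs(root);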
2015-07-15 10:48:14 +07:00
|
|
|
if (current != root->fs_info->transaction_kthread &&
|
btrfs: fix fsfreeze hang caused by delayed iputs deal
2016-08-01 12:28:08 +07:00
|
|
|
current != root->fs_info->cleaner_kthread &&
|
|
|
|
!root->fs_info->fs_frozen)
|
2009-11-12 16:36:34 +07:00
|
|
|
btrfs_run_delayed_iputs(root);
|
|
|
|
|
2007-03-23 02:59:16 +07:00
|
|
|
return ret;
|
2012-03-01 23:24:58 +07:00
|
|
|
|
2014-02-19 18:24:16 +07:00
|
|
|
scrub_continue:
|
|
|
|
btrfs_scrub_continue(root);
|
2012-03-01 23:24:58 +07:00
|
|
|
cleanup_transaction:
|
2012-06-27 03:13:18 +07:00
|
|
|
btrfs_trans_release_metadata(trans, root);
|
Btrfs: fix -ENOSPC when finishing block group creation
While creating a block group, we often end up getting ENOSPC while updating
the chunk tree, which leads to a transaction abortion that produces a trace
like the following:
[30670.116368] WARNING: CPU: 4 PID: 20735 at fs/btrfs/super.c:260 __btrfs_abort_transaction+0x52/0x106 [btrfs]()
[30670.117777] BTRFS: Transaction aborted (error -28)
(...)
[30670.163567] Call Trace:
[30670.163906] [<ffffffff8142fa46>] dump_stack+0x4f/0x7b
[30670.164522] [<ffffffff8108b6a2>] ? console_unlock+0x361/0x3ad
[30670.165171] [<ffffffff81045ea5>] warn_slowpath_common+0xa1/0xbb
[30670.166323] [<ffffffffa035daa7>] ? __btrfs_abort_transaction+0x52/0x106 [btrfs]
[30670.167213] [<ffffffff81045f05>] warn_slowpath_fmt+0x46/0x48
[30670.167862] [<ffffffffa035daa7>] __btrfs_abort_transaction+0x52/0x106 [btrfs]
[30670.169116] [<ffffffffa03743d7>] btrfs_create_pending_block_groups+0x101/0x130 [btrfs]
[30670.170593] [<ffffffffa038426a>] __btrfs_end_transaction+0x84/0x366 [btrfs]
[30670.171960] [<ffffffffa038455c>] btrfs_end_transaction+0x10/0x12 [btrfs]
[30670.174649] [<ffffffffa036eb6b>] btrfs_check_data_free_space+0x11f/0x27c [btrfs]
[30670.176092] [<ffffffffa039450d>] btrfs_fallocate+0x7c8/0xb96 [btrfs]
[30670.177218] [<ffffffff812459f2>] ? __this_cpu_preempt_check+0x13/0x15
[30670.178622] [<ffffffff81152447>] vfs_fallocate+0x14c/0x1de
[30670.179642] [<ffffffff8116b915>] ? __fget_light+0x2d/0x4f
[30670.180692] [<ffffffff81152863>] SyS_fallocate+0x47/0x62
[30670.186737] [<ffffffff81435b32>] system_call_fastpath+0x12/0x17
[30670.187792] ---[ end trace 0373e6b491c4a8cc ]---
This is because we don't do proper space reservation for the chunk block
reserve when we have multiple tasks allocating chunks in parallel.
Block group creation has two phases: the first phase essentially
checks if there is enough space in the system space_info, allocating a
new system chunk if there isn't, while the second phase updates the
device, extent and chunk trees. However, because the updates to the
chunk tree happen in the second phase, if we have N tasks, each with
its own transaction handle, allocating new chunks in parallel and if
there is only enough space in the system space_info to allocate M chunks,
where M < N, none of the tasks ends up allocating a new system chunk in
the first phase and N - M tasks will get -ENOSPC when attempting to
update the chunk tree in phase 2 if they need to COW any nodes/leaves
from the chunk tree.
Fix this by doing proper reservation in the chunk block reserve.
The issue could be reproduced by running fstests generic/038 in a loop,
which eventually triggered the problem.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
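A hedged sketch of the release side of that reservation (close in spirit to
the btrfs_trans_release_chunk_metadata() call that appears below; the
block_rsv_release_bytes() helper and its signature are assumptions of this
sketch):

void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
{
	struct btrfs_fs_info *fs_info = trans->root->fs_info;

	if (!trans->chunk_bytes_reserved)
		return;

	/* Return the phase-1 reservation to the chunk block reserve. */
	block_rsv_release_bytes(fs_info, &fs_info->chunk_block_rsv, NULL,
				trans->chunk_bytes_reserved);
	trans->chunk_bytes_reserved = 0;
}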
2015-05-20 20:01:54 +07:00
|
|
|
btrfs_trans_release_chunk_metadata(trans);
|
2012-06-27 03:13:18 +07:00
|
|
|
trans->block_rsv = NULL;
|
2013-03-20 05:41:23 +07:00
|
|
|
btrfs_warn(root->fs_info, "Skipping commit of aborted transaction.");
|
2012-03-01 23:24:58 +07:00
|
|
|
if (current->journal_info == trans)
|
|
|
|
current->journal_info = NULL;
|
2012-06-01 02:52:43 +07:00
|
|
|
cleanup_transaction(trans, root, ret);
|
2012-03-01 23:24:58 +07:00
|
|
|
|
|
|
|
return ret;
|
2007-03-23 02:59:16 +07:00
|
|
|
}
|
|
|
|
|
2008-09-30 02:18:18 +07:00
|
|
|
/*
|
2013-03-12 22:13:28 +07:00
|
|
|
* return < 0 if error
|
|
|
|
* 0 if there are no more dead_roots at the time of call
|
|
|
|
* 1 if there are more to be processed, call me again
|
|
|
|
*
|
|
|
|
* The return value indicates there are certainly more snapshots to delete, but
|
|
|
|
* if a new one comes during processing, it may still return 0. We don't mind,
|
|
|
|
* because btrfs_commit_super will poke the cleaner thread and it will process it a
|
|
|
|
* few seconds later.
|
2008-09-30 02:18:18 +07:00
|
|
|
*/
|
2013-03-12 22:13:28 +07:00
|
|
|
int btrfs_clean_one_deleted_snapshot(struct btrfs_root *root)
|
2007-08-11 01:06:19 +07:00
|
|
|
{
|
2013-03-12 22:13:28 +07:00
|
|
|
int ret;
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in a subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds a per-subvolume red-black tree to keep track of cached
inodes. The red-black tree helps the balancing code find cached
inodes whose inode numbers are within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduces the overhead of checking back refs.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
struct btrfs_fs_info *fs_info = root->fs_info;
|
|
|
|
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_lock(&fs_info->trans_lock);
|
2013-03-12 22:13:28 +07:00
|
|
|
if (list_empty(&fs_info->dead_roots)) {
|
|
|
|
spin_unlock(&fs_info->trans_lock);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
root = list_first_entry(&fs_info->dead_roots,
|
|
|
|
struct btrfs_root, root_list);
|
2013-07-26 02:11:47 +07:00
|
|
|
list_del_init(&root->root_list);
|
2011-04-12 04:25:13 +07:00
|
|
|
spin_unlock(&fs_info->trans_lock);
|
2007-08-11 01:06:19 +07:00
|
|
|
|
2016-09-20 21:05:02 +07:00
|
|
|
btrfs_debug(fs_info, "cleaner removing %llu", root->objectid);
|
2009-09-22 03:00:26 +07:00
|
|
|
|
2013-03-12 22:13:28 +07:00
|
|
|
btrfs_kill_all_delayed_nodes(root);
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compared with ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can delay some b+ tree insertions or deletions, we can improve the
performance, so we made this patch, which implements delayed directory name
index insertion/deletion and delayed inode updates.
Implementation:
- introduce a delayed root object into the filesystem, which uses two lists
to manage the delayed nodes that are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. The
other is used to manage the delayed nodes that are waiting to be dealt
with by the work thread.
- Every delayed node has two rb-trees (see the sketch after this list): one
manages the directory name indexes to be inserted into the b+ tree, and
the other manages the directory name indexes to be deleted from the b+ tree.
- introduce a worker to deal with the delayed operations. This worker handles
the insertion and deletion of delayed directory name index items and the
delayed inode updates.
When the number of delayed items exceeds the lower limit, we create works
for some delayed nodes, insert them into the work queue of the worker, and
then go back.
When the number of delayed items exceeds the upper bound, we create works
for all the delayed nodes that haven't been dealt with, insert them into
the work queue of the worker, and then wait until the number of untreated
items drops below some threshold value.
- When we want to insert a directory name index into the b+ tree, we just add
the information to the delayed inserting rb-tree.
Then we check the number of delayed items and do delayed item balancing.
(The balance policy is described above.)
- When we want to delete a directory name index from the b+ tree, we first
search for it in the inserting rb-tree. If we find it, just drop it. If
not, add its key to the delayed deleting rb-tree.
As with the delayed inserting rb-tree, we also check the number of delayed
items and do delayed item balancing.
(The same as the inserting manipulation.)
- When we want to update the metadata of some inode, we cache the data of
the inode in the delayed node. The worker will flush it into the b+ tree
after dealing with the delayed insertions and deletions.
- We move the delayed node to the tail of the list after we access it. This
way, we can cache more delayed items and merge more inode updates.
- If we want to commit the transaction, we deal with all the delayed nodes.
- The delayed node will be freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
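A hedged sketch of the per-inode structure described in the list above
(field names are illustrative, not the exact upstream struct
btrfs_delayed_node layout):

struct delayed_node {
	u64 inode_id;			/* which inode this node belongs to */
	struct rb_root ins_root;	/* dir name-index items to insert  */
	struct rb_root del_root;	/* dir name-index keys to delete   */
	struct list_head n_list;	/* on the "has delayed items" list */
	struct list_head p_list;	/* on the "being processed" list   */
	int count;			/* number of pending delayed items */
};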
I did a quick test by the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks for Kitayama-san's help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 17:12:22 +07:00
|
|
|
|
2013-03-12 22:13:28 +07:00
|
|
|
if (btrfs_header_backref_rev(root->node) <
|
|
|
|
BTRFS_MIXED_BACKREF_REV)
|
|
|
|
ret = btrfs_drop_snapshot(root, NULL, 0, 0);
|
|
|
|
else
|
|
|
|
ret = btrfs_drop_snapshot(root, NULL, 1, 0);
|
2014-02-05 08:03:47 +07:00
|
|
|
|
2013-07-31 21:28:05 +07:00
|
|
|
return (ret < 0) ? 0 : 1;
|
2007-08-11 01:06:19 +07:00
|
|
|
}
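Given the return convention documented above, a hedged sketch of the expected
caller (the loop shape in the cleaner thread is an assumption of this sketch):

/* Keep cleaning until there are no more dead roots to process. */
static void clean_deleted_snapshots(struct btrfs_root *root)
{
	int ret;

	do {
		ret = btrfs_clean_one_deleted_snapshot(root);
	} while (ret > 0);	/* 1: more to do, call again */
}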
|
2014-02-05 21:26:17 +07:00
|
|
|
|
|
|
|
void btrfs_apply_pending_changes(struct btrfs_fs_info *fs_info)
|
|
|
|
{
|
|
|
|
unsigned long prev;
|
|
|
|
unsigned long bit;
|
|
|
|
|
2015-01-20 16:05:33 +07:00
|
|
|
prev = xchg(&fs_info->pending_changes, 0);
|
2014-02-05 21:26:17 +07:00
|
|
|
if (!prev)
|
|
|
|
return;
|
|
|
|
|
2014-02-05 21:26:17 +07:00
|
|
|
bit = 1 << BTRFS_PENDING_SET_INODE_MAP_CACHE;
|
|
|
|
if (prev & bit)
|
|
|
|
btrfs_set_opt(fs_info->mount_opt, INODE_MAP_CACHE);
|
|
|
|
prev &= ~bit;
|
|
|
|
|
|
|
|
bit = 1 << BTRFS_PENDING_CLEAR_INODE_MAP_CACHE;
|
|
|
|
if (prev & bit)
|
|
|
|
btrfs_clear_opt(fs_info->mount_opt, INODE_MAP_CACHE);
|
|
|
|
prev &= ~bit;
|
|
|
|
|
2014-11-12 20:24:35 +07:00
|
|
|
bit = 1 << BTRFS_PENDING_COMMIT;
|
|
|
|
if (prev & bit)
|
|
|
|
btrfs_debug(fs_info, "pending commit done");
|
|
|
|
prev &= ~bit;
|
|
|
|
|
2014-02-05 21:26:17 +07:00
|
|
|
if (prev)
|
|
|
|
btrfs_warn(fs_info,
|
|
|
|
"unknown pending changes left 0x%lx, ignoring", prev);
|
|
|
|
}
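A hedged sketch of the producer side implied by this consumer (the helper
name here is an assumption; upstream uses its own pending-change setters):

/* Queue a pending change; btrfs_apply_pending_changes() consumes the
 * whole word with xchg() at transaction commit time. */
static void queue_pending_change(struct btrfs_fs_info *fs_info,
				 unsigned long bit)
{
	set_bit(bit, &fs_info->pending_changes);
}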
|