/*
 * Copyright (C) 2007 Oracle.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */

#include <linux/kernel.h>
#include <linux/bio.h>
#include <linux/buffer_head.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/fsnotify.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/time.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/backing-dev.h>
#include <linux/mount.h>
#include <linux/mpage.h>
#include <linux/namei.h>
#include <linux/swap.h>
#include <linux/writeback.h>
#include <linux/statfs.h>
#include <linux/compat.h>
#include <linux/bit_spinlock.h>
#include <linux/security.h>
#include <linux/xattr.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/blkdev.h>
#include "compat.h"
#include "ctree.h"
#include "disk-io.h"
#include "transaction.h"
#include "btrfs_inode.h"
#include "ioctl.h"
#include "print-tree.h"
#include "volumes.h"
#include "locking.h"
#include "inode-map.h"
#include "backref.h"

/* Mask out flags that are inappropriate for the given type of inode. */
static inline __u32 btrfs_mask_flags(umode_t mode, __u32 flags)
{
	if (S_ISDIR(mode))
		return flags;
	else if (S_ISREG(mode))
		return flags & ~FS_DIRSYNC_FL;
	else
		return flags & (FS_NODUMP_FL | FS_NOATIME_FL);
}

/*
 * Export inode flags to the format expected by the FS_IOC_GETFLAGS ioctl.
 */
static unsigned int btrfs_flags_to_ioctl(unsigned int flags)
{
	unsigned int iflags = 0;

	if (flags & BTRFS_INODE_SYNC)
		iflags |= FS_SYNC_FL;
	if (flags & BTRFS_INODE_IMMUTABLE)
		iflags |= FS_IMMUTABLE_FL;
	if (flags & BTRFS_INODE_APPEND)
		iflags |= FS_APPEND_FL;
	if (flags & BTRFS_INODE_NODUMP)
		iflags |= FS_NODUMP_FL;
	if (flags & BTRFS_INODE_NOATIME)
		iflags |= FS_NOATIME_FL;
	if (flags & BTRFS_INODE_DIRSYNC)
		iflags |= FS_DIRSYNC_FL;
	if (flags & BTRFS_INODE_NODATACOW)
		iflags |= FS_NOCOW_FL;

	if ((flags & BTRFS_INODE_COMPRESS) && !(flags & BTRFS_INODE_NOCOMPRESS))
		iflags |= FS_COMPR_FL;
	else if (flags & BTRFS_INODE_NOCOMPRESS)
		iflags |= FS_NOCOMP_FL;

	return iflags;
}

/*
 * Update inode->i_flags based on the btrfs internal flags.
 */
void btrfs_update_iflags(struct inode *inode)
{
	struct btrfs_inode *ip = BTRFS_I(inode);

	inode->i_flags &= ~(S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC);

	if (ip->flags & BTRFS_INODE_SYNC)
		inode->i_flags |= S_SYNC;
	if (ip->flags & BTRFS_INODE_IMMUTABLE)
		inode->i_flags |= S_IMMUTABLE;
	if (ip->flags & BTRFS_INODE_APPEND)
		inode->i_flags |= S_APPEND;
	if (ip->flags & BTRFS_INODE_NOATIME)
		inode->i_flags |= S_NOATIME;
	if (ip->flags & BTRFS_INODE_DIRSYNC)
		inode->i_flags |= S_DIRSYNC;
}

/*
 * Inherit flags from the parent inode.
 *
 * Currently only the compression flags and the cow flags are inherited.
 */
void btrfs_inherit_iflags(struct inode *inode, struct inode *dir)
{
	unsigned int flags;

	if (!dir)
		return;

	flags = BTRFS_I(dir)->flags;

	if (flags & BTRFS_INODE_NOCOMPRESS) {
		BTRFS_I(inode)->flags &= ~BTRFS_INODE_COMPRESS;
		BTRFS_I(inode)->flags |= BTRFS_INODE_NOCOMPRESS;
	} else if (flags & BTRFS_INODE_COMPRESS) {
		BTRFS_I(inode)->flags &= ~BTRFS_INODE_NOCOMPRESS;
		BTRFS_I(inode)->flags |= BTRFS_INODE_COMPRESS;
	}

	if (flags & BTRFS_INODE_NODATACOW)
		BTRFS_I(inode)->flags |= BTRFS_INODE_NODATACOW;

	btrfs_update_iflags(inode);
}

static int btrfs_ioctl_getflags(struct file *file, void __user *arg)
{
	struct btrfs_inode *ip = BTRFS_I(file->f_path.dentry->d_inode);
	unsigned int flags = btrfs_flags_to_ioctl(ip->flags);

	if (copy_to_user(arg, &flags, sizeof(flags)))
		return -EFAULT;
	return 0;
}

static int check_flags(unsigned int flags)
{
	if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL |
		      FS_NOATIME_FL | FS_NODUMP_FL |
		      FS_SYNC_FL | FS_DIRSYNC_FL |
		      FS_NOCOMP_FL | FS_COMPR_FL |
		      FS_NOCOW_FL))
		return -EOPNOTSUPP;

	if ((flags & FS_NOCOMP_FL) && (flags & FS_COMPR_FL))
		return -EINVAL;

	return 0;
}

static int btrfs_ioctl_setflags(struct file *file, void __user *arg)
{
	struct inode *inode = file->f_path.dentry->d_inode;
	struct btrfs_inode *ip = BTRFS_I(inode);
	struct btrfs_root *root = ip->root;
	struct btrfs_trans_handle *trans;
	unsigned int flags, oldflags;
	int ret;

	if (btrfs_root_readonly(root))
		return -EROFS;

	if (copy_from_user(&flags, arg, sizeof(flags)))
		return -EFAULT;

	ret = check_flags(flags);
	if (ret)
		return ret;

	if (!inode_owner_or_capable(inode))
		return -EACCES;

	mutex_lock(&inode->i_mutex);

	flags = btrfs_mask_flags(inode->i_mode, flags);
	oldflags = btrfs_flags_to_ioctl(ip->flags);
	if ((flags ^ oldflags) & (FS_APPEND_FL | FS_IMMUTABLE_FL)) {
		if (!capable(CAP_LINUX_IMMUTABLE)) {
			ret = -EPERM;
			goto out_unlock;
		}
	}

	ret = mnt_want_write(file->f_path.mnt);
	if (ret)
		goto out_unlock;

	if (flags & FS_SYNC_FL)
		ip->flags |= BTRFS_INODE_SYNC;
	else
		ip->flags &= ~BTRFS_INODE_SYNC;
	if (flags & FS_IMMUTABLE_FL)
		ip->flags |= BTRFS_INODE_IMMUTABLE;
	else
		ip->flags &= ~BTRFS_INODE_IMMUTABLE;
	if (flags & FS_APPEND_FL)
		ip->flags |= BTRFS_INODE_APPEND;
	else
		ip->flags &= ~BTRFS_INODE_APPEND;
	if (flags & FS_NODUMP_FL)
		ip->flags |= BTRFS_INODE_NODUMP;
	else
		ip->flags &= ~BTRFS_INODE_NODUMP;
	if (flags & FS_NOATIME_FL)
		ip->flags |= BTRFS_INODE_NOATIME;
	else
		ip->flags &= ~BTRFS_INODE_NOATIME;
	if (flags & FS_DIRSYNC_FL)
		ip->flags |= BTRFS_INODE_DIRSYNC;
	else
		ip->flags &= ~BTRFS_INODE_DIRSYNC;
	if (flags & FS_NOCOW_FL)
		ip->flags |= BTRFS_INODE_NODATACOW;
	else
		ip->flags &= ~BTRFS_INODE_NODATACOW;

	/*
	 * The COMPRESS flag can only be changed by users, while the NOCOMPRESS
	 * flag may be changed automatically if compression code won't make
	 * things smaller.
	 */
	if (flags & FS_NOCOMP_FL) {
		ip->flags &= ~BTRFS_INODE_COMPRESS;
		ip->flags |= BTRFS_INODE_NOCOMPRESS;
	} else if (flags & FS_COMPR_FL) {
		ip->flags |= BTRFS_INODE_COMPRESS;
		ip->flags &= ~BTRFS_INODE_NOCOMPRESS;
	} else {
		ip->flags &= ~(BTRFS_INODE_COMPRESS | BTRFS_INODE_NOCOMPRESS);
	}

	trans = btrfs_join_transaction(root);
	BUG_ON(IS_ERR(trans));

	btrfs_update_iflags(inode);
	inode->i_ctime = CURRENT_TIME;
	ret = btrfs_update_inode(trans, root, inode);
	BUG_ON(ret);

	btrfs_end_transaction(trans, root);

	mnt_drop_write(file->f_path.mnt);

	ret = 0;
out_unlock:
	mutex_unlock(&inode->i_mutex);
	return ret;
}

static int btrfs_ioctl_getversion(struct file *file, int __user *arg)
{
	struct inode *inode = file->f_path.dentry->d_inode;

	return put_user(inode->i_generation, arg);
}

static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
{
	struct btrfs_root *root = fdentry(file)->d_sb->s_fs_info;
	struct btrfs_fs_info *fs_info = root->fs_info;
	struct btrfs_device *device;
	struct request_queue *q;
	struct fstrim_range range;
	u64 minlen = ULLONG_MAX;
	u64 num_devices = 0;
	u64 total_bytes = btrfs_super_total_bytes(root->fs_info->super_copy);
	int ret;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	rcu_read_lock();
	list_for_each_entry_rcu(device, &fs_info->fs_devices->devices,
				dev_list) {
		if (!device->bdev)
			continue;
		q = bdev_get_queue(device->bdev);
		if (blk_queue_discard(q)) {
			num_devices++;
			minlen = min((u64)q->limits.discard_granularity,
				     minlen);
		}
	}
	rcu_read_unlock();

	if (!num_devices)
		return -EOPNOTSUPP;
	if (copy_from_user(&range, arg, sizeof(range)))
		return -EFAULT;
	if (range.start > total_bytes)
		return -EINVAL;

	range.len = min(range.len, total_bytes - range.start);
	range.minlen = max(range.minlen, minlen);
	ret = btrfs_trim_fs(root, &range);
	if (ret < 0)
		return ret;

	if (copy_to_user(arg, &range, sizeof(range)))
		return -EFAULT;

	return 0;
}

static noinline int create_subvol(struct btrfs_root *root,
|
|
|
|
struct dentry *dentry,
|
2010-10-30 02:41:32 +07:00
|
|
|
char *name, int namelen,
|
|
|
|
u64 *async_transid)
|
2008-06-12 08:53:53 +07:00
|
|
|
{
|
|
|
|
struct btrfs_trans_handle *trans;
|
|
|
|
struct btrfs_key key;
|
|
|
|
struct btrfs_root_item root_item;
|
|
|
|
struct btrfs_inode_item *inode_item;
|
|
|
|
struct extent_buffer *leaf;
|
2009-09-22 03:00:26 +07:00
|
|
|
struct btrfs_root *new_root;
|
2011-07-17 08:38:06 +07:00
|
|
|
struct dentry *parent = dentry->d_parent;
|
2010-11-20 16:48:00 +07:00
|
|
|
struct inode *dir;
|
2008-06-12 08:53:53 +07:00
|
|
|
int ret;
|
|
|
|
int err;
|
|
|
|
u64 objectid;
|
|
|
|
u64 new_dirid = BTRFS_FIRST_FREE_OBJECTID;
|
2008-11-18 09:02:50 +07:00
|
|
|
u64 index = 0;
|
2008-06-12 08:53:53 +07:00
|
|
|
|
Btrfs: Cache free inode numbers in memory
Currently btrfs stores the highest objectid of the fs tree, and it always
returns (highest+1) inode number when we create a file, so inode numbers
won't be reclaimed when we delete files, so we'll run out of inode numbers
as we keep create/delete files in 32bits machines.
This fixes it, and it works similarly to how we cache free space in block
cgroups.
We start a kernel thread to read the file tree. By scanning inode items,
we know which chunks of inode numbers are free, and we cache them in
an rb-tree.
Because we are searching the commit root, we have to carefully handle the
cross-transaction case.
The rb-tree is a hybrid extent+bitmap tree, so if we have too many small
chunks of inode numbers, we'll use bitmaps. Initially we allow 16K ram
of extents, and a bitmap will be used if we exceed this threshold. The
extents threshold is adjusted in runtime.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
2011-04-20 09:06:11 +07:00
|
|
|
ret = btrfs_find_free_objectid(root->fs_info->tree_root, &objectid);
|
2011-07-17 08:38:06 +07:00
|
|
|
if (ret)
|
2010-05-16 21:48:46 +07:00
|
|
|
return ret;
|
2010-11-20 16:48:00 +07:00
|
|
|
|
|
|
|
dir = parent->d_inode;
|
|
|
|
|
2009-09-12 03:12:44 +07:00
|
|
|
/*
|
|
|
|
* 1 - inode item
|
|
|
|
* 2 - refs
|
|
|
|
* 1 - root item
|
|
|
|
* 2 - dir items
|
|
|
|
*/
|
2010-05-16 21:48:46 +07:00
|
|
|
trans = btrfs_start_transaction(root, 6);
|
2011-07-17 08:38:06 +07:00
|
|
|
if (IS_ERR(trans))
|
2010-05-16 21:48:46 +07:00
|
|
|
return PTR_ERR(trans);
|
2008-06-12 08:53:53 +07:00
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
leaf = btrfs_alloc_free_block(trans, root, root->leafsize,
|
|
|
|
0, objectid, NULL, 0, 0, 0);
|
2008-07-24 23:17:14 +07:00
|
|
|
if (IS_ERR(leaf)) {
|
|
|
|
ret = PTR_ERR(leaf);
|
|
|
|
goto fail;
|
|
|
|
}
|
2008-06-12 08:53:53 +07:00
|
|
|
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
2009-06-10 21:45:14 +07:00
|
|
|
memset_extent_buffer(leaf, 0, 0, sizeof(struct btrfs_header));
|
2008-06-12 08:53:53 +07:00
|
|
|
btrfs_set_header_bytenr(leaf, leaf->start);
|
|
|
|
btrfs_set_header_generation(leaf, trans->transid);
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in a subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update the back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about the pointer's key, level and in which
tree the pointer lives. This information allows us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds a per-subvolume red-black tree to keep track of cached
inodes. The red-black tree helps the balancing code find cached
inodes whose inode numbers are within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduces the overhead of checking back refs.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
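The policy described in the commit message (fuzzy back refs while a block is referenced only by its owner tree, full back refs once it is shared) can be sketched as a tiny standalone model. The names below are illustrative only and are not the kernel's:

```c
#include <assert.h>

/*
 * Illustrative model only (not the kernel implementation): pick the
 * back-ref flavour for a tree block based on how many roots reference
 * it, per the policy described in the commit message above.
 */
enum backref_kind {
	FUZZY_BACKREF,	/* key/level/owner tree; resolved by tree search */
	FULL_BACKREF	/* records the exact referencing parent */
};

static enum backref_kind choose_backref(unsigned int nr_referencing_roots)
{
	/* common case: only the owner tree references the block */
	if (nr_referencing_roots <= 1)
		return FUZZY_BACKREF;
	/* shared by snapshots: fuzzy resolution would be O(#snapshots) */
	return FULL_BACKREF;
}
```

Usage-wise, the point is that the cheap representation is the default and the expensive one is only paid once a second root actually takes a reference.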
|
|
|
btrfs_set_header_backref_rev(leaf, BTRFS_MIXED_BACKREF_REV);
|
2008-06-12 08:53:53 +07:00
|
|
|
btrfs_set_header_owner(leaf, objectid);
|
|
|
|
|
|
|
|
write_extent_buffer(leaf, root->fs_info->fsid,
|
|
|
|
(unsigned long)btrfs_header_fsid(leaf),
|
|
|
|
BTRFS_FSID_SIZE);
|
2009-06-10 21:45:14 +07:00
|
|
|
write_extent_buffer(leaf, root->fs_info->chunk_tree_uuid,
|
|
|
|
(unsigned long)btrfs_header_chunk_tree_uuid(leaf),
|
|
|
|
BTRFS_UUID_SIZE);
|
2008-06-12 08:53:53 +07:00
|
|
|
btrfs_mark_buffer_dirty(leaf);
|
|
|
|
|
|
|
|
inode_item = &root_item.inode;
|
|
|
|
memset(inode_item, 0, sizeof(*inode_item));
|
|
|
|
inode_item->generation = cpu_to_le64(1);
|
|
|
|
inode_item->size = cpu_to_le64(3);
|
|
|
|
inode_item->nlink = cpu_to_le32(1);
|
2008-10-09 22:46:29 +07:00
|
|
|
inode_item->nbytes = cpu_to_le64(root->leafsize);
|
2008-06-12 08:53:53 +07:00
|
|
|
inode_item->mode = cpu_to_le32(S_IFDIR | 0755);
|
|
|
|
|
2011-03-28 09:01:25 +07:00
|
|
|
root_item.flags = 0;
|
|
|
|
root_item.byte_limit = 0;
|
|
|
|
inode_item->flags = cpu_to_le64(BTRFS_INODE_ROOT_ITEM_INIT);
|
|
|
|
|
2008-06-12 08:53:53 +07:00
|
|
|
btrfs_set_root_bytenr(&root_item, leaf->start);
|
2008-10-30 01:49:05 +07:00
|
|
|
btrfs_set_root_generation(&root_item, trans->transid);
|
2008-06-12 08:53:53 +07:00
|
|
|
btrfs_set_root_level(&root_item, 0);
|
|
|
|
btrfs_set_root_refs(&root_item, 1);
|
2009-11-12 16:36:50 +07:00
|
|
|
btrfs_set_root_used(&root_item, leaf->len);
|
2008-10-31 01:20:02 +07:00
|
|
|
btrfs_set_root_last_snapshot(&root_item, 0);
|
2008-06-12 08:53:53 +07:00
|
|
|
|
|
|
|
memset(&root_item.drop_progress, 0, sizeof(root_item.drop_progress));
|
|
|
|
root_item.drop_level = 0;
|
|
|
|
|
2008-06-26 03:01:30 +07:00
|
|
|
btrfs_tree_unlock(leaf);
|
2008-06-12 08:53:53 +07:00
|
|
|
free_extent_buffer(leaf);
|
|
|
|
leaf = NULL;
|
|
|
|
|
|
|
|
btrfs_set_root_dirid(&root_item, new_dirid);
|
|
|
|
|
|
|
|
key.objectid = objectid;
|
2009-06-10 21:45:14 +07:00
|
|
|
key.offset = 0;
|
2008-06-12 08:53:53 +07:00
|
|
|
btrfs_set_key_type(&key, BTRFS_ROOT_ITEM_KEY);
|
|
|
|
ret = btrfs_insert_root(trans, root->fs_info->tree_root, &key,
|
|
|
|
&root_item);
|
|
|
|
if (ret)
|
|
|
|
goto fail;
|
|
|
|
|
2009-09-22 03:00:26 +07:00
|
|
|
key.offset = (u64)-1;
|
|
|
|
new_root = btrfs_read_fs_root_no_name(root->fs_info, &key);
|
|
|
|
BUG_ON(IS_ERR(new_root));
|
|
|
|
|
|
|
|
btrfs_record_root_in_trans(trans, new_root);
|
|
|
|
|
2011-05-12 02:26:06 +07:00
|
|
|
ret = btrfs_create_subvol_root(trans, new_root, new_dirid);
|
2008-06-12 08:53:53 +07:00
|
|
|
/*
|
|
|
|
* insert the directory item
|
|
|
|
*/
|
2008-11-18 09:02:50 +07:00
|
|
|
ret = btrfs_set_inode_index(dir, &index);
|
|
|
|
BUG_ON(ret);
|
|
|
|
|
|
|
|
ret = btrfs_insert_dir_item(trans, root,
|
btrfs: implement delayed inode items operation
Changelog V5 -> V6:
- Fix oom when the memory load is high, by storing the delayed nodes into the
root's radix tree, and letting btrfs inodes go.
Changelog V4 -> V5:
- Fix the race on adding the delayed node to the inode, which is spotted by
Chris Mason.
- Merge Chris Mason's incremental patch into this patch.
- Fix deadlock between readdir() and memory fault, which is reported by
Itaru Kitayama.
Changelog V3 -> V4:
- Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
inode in time.
Changelog V2 -> V3:
- Fix the race between the delayed worker and the task which does delayed items
balance, which is reported by Tsutomu Itoh.
- Modify the patch to address David Sterba's comment.
- Fix the bug of the cpu recursion spinlock, reported by Chris Mason
Changelog V1 -> V2:
- break up the global rb-tree, use a list to manage the delayed nodes,
which is created for every directory and file, and used to manage the
delayed directory name index items and the delayed inode item.
- introduce a worker to deal with the delayed nodes.
Compared with Ext3/4, the performance of file creation and deletion on btrfs
is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
such as inode item, directory name item, directory name index and so on.
If we can do some delayed b+ tree insertion or deletion, we can improve the
performance, so we made this patch which implemented delayed directory name
index insertion/deletion and delayed inode update.
Implementation:
- introduce a delayed root object into the filesystem, which uses two lists to
manage the delayed nodes that are created for every file/directory.
One is used to manage all the delayed nodes that have delayed items. And the
other is used to manage the delayed nodes which are waiting to be dealt with
by the work thread.
- Every delayed node has two rb-trees: one is used to manage the directory name
index which is going to be inserted into the b+ tree, and the other is used to
manage the directory name index which is going to be deleted from the b+ tree.
- introduce a worker to deal with the delayed operation. This worker is used
to deal with the works of the delayed directory name index items insertion
and deletion and the delayed inode update.
When the delayed items are beyond the lower limit, we create works for some
delayed nodes and insert them into the work queue of the worker, and then
go back.
When the delayed items are beyond the upper bound, we create works for all
the delayed nodes that haven't been dealt with, and insert them into the work
queue of the worker, and then wait until the number of untreated items is
below some threshold value.
- When we want to insert a directory name index into b+ tree, we just add the
information into the delayed inserting rb-tree.
And then we check the number of the delayed items and do delayed items
balance. (The balance policy is above.)
- When we want to delete a directory name index from the b+ tree, we search it
in the inserting rb-tree at first. If we find it, just drop it. If not,
add the key of it into the delayed deleting rb-tree.
Similar to the delayed inserting rb-tree, we also check the number of the
delayed items and do delayed items balance.
(The same to inserting manipulation)
- When we want to update the metadata of some inode, we cache the data of the
inode in the delayed node. The worker will flush it into the b+ tree after
dealing with the delayed insertion and deletion.
- We will move the delayed node to the tail of the list after we access the
delayed node. This way, we can cache more delayed items and merge more
inode updates.
- If we want to commit the transaction, we will deal with all the delayed nodes.
- the delayed node will be freed when we free the btrfs inode.
- Before we log the inode items, we commit all the directory name index items
and the delayed inode update.
I did a quick test by the benchmark tool[1] and found we can improve the
performance of file creation by ~15%, and file deletion by ~20%.
Before applying this patch:
Create files:
Total files: 50000
Total time: 1.096108
Average time: 0.000022
Delete files:
Total files: 50000
Total time: 1.510403
Average time: 0.000030
After applying this patch:
Create files:
Total files: 50000
Total time: 0.932899
Average time: 0.000019
Delete files:
Total files: 50000
Total time: 1.215732
Average time: 0.000024
[1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
Many thanks to Kitayama-san for his help!
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: David Sterba <dave@jikos.cz>
Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2011-04-22 17:12:22 +07:00
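The two-threshold balance policy in the changelog (kick background work past a lower limit, throttle the caller past an upper bound) can be sketched in isolation. The constants and names here are made up for the example and are not the kernel's:

```c
#include <assert.h>

/*
 * Illustrative sketch, not the kernel code: the changelog's balance
 * policy for delayed items. Limit values are arbitrary for the example.
 */
#define DELAYED_LOWER_LIMIT	64UL
#define DELAYED_UPPER_BOUND	256UL

enum balance_action {
	BALANCE_NONE,	/* backlog small: just return */
	BALANCE_ASYNC,	/* queue work for some nodes, then return */
	BALANCE_WAIT	/* queue work for all nodes and wait to drain */
};

static enum balance_action delayed_balance_action(unsigned long nr_items)
{
	if (nr_items < DELAYED_LOWER_LIMIT)
		return BALANCE_NONE;
	if (nr_items < DELAYED_UPPER_BOUND)
		return BALANCE_ASYNC;
	return BALANCE_WAIT;
}
```

The design point is that only a severely backlogged caller is ever made to wait; everyone else either does nothing or hands work to the background thread.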
|
|
|
name, namelen, dir, &key,
|
2008-11-18 09:02:50 +07:00
|
|
|
BTRFS_FT_DIR, index);
|
2008-06-12 08:53:53 +07:00
|
|
|
if (ret)
|
|
|
|
goto fail;
|
2008-11-18 08:37:39 +07:00
|
|
|
|
2009-01-06 03:43:43 +07:00
|
|
|
btrfs_i_size_write(dir, dir->i_size + namelen * 2);
|
|
|
|
ret = btrfs_update_inode(trans, root, dir);
|
|
|
|
BUG_ON(ret);
|
|
|
|
|
2008-11-18 08:37:39 +07:00
|
|
|
ret = btrfs_add_root_ref(trans, root->fs_info->tree_root,
|
2009-09-22 02:56:00 +07:00
|
|
|
objectid, root->root_key.objectid,
|
2011-04-20 09:31:50 +07:00
|
|
|
btrfs_ino(dir), index, name, namelen);
|
2008-11-18 08:37:39 +07:00
|
|
|
|
2009-09-22 03:00:26 +07:00
|
|
|
BUG_ON(ret);
|
2008-06-12 08:53:53 +07:00
|
|
|
|
2009-09-22 03:00:26 +07:00
|
|
|
d_instantiate(dentry, btrfs_lookup_dentry(dir, dentry));
|
2008-06-12 08:53:53 +07:00
|
|
|
fail:
|
2010-10-30 02:41:32 +07:00
|
|
|
if (async_transid) {
|
|
|
|
*async_transid = trans->transid;
|
|
|
|
err = btrfs_commit_transaction_async(trans, root, 1);
|
|
|
|
} else {
|
|
|
|
err = btrfs_commit_transaction(trans, root);
|
|
|
|
}
|
2008-06-12 08:53:53 +07:00
|
|
|
if (err && !ret)
|
|
|
|
ret = err;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2010-10-30 02:41:32 +07:00
|
|
|
static int create_snapshot(struct btrfs_root *root, struct dentry *dentry,
|
2010-12-20 15:04:08 +07:00
|
|
|
char *name, int namelen, u64 *async_transid,
|
|
|
|
bool readonly)
|
2008-06-12 08:53:53 +07:00
|
|
|
{
|
2009-11-12 16:37:02 +07:00
|
|
|
struct inode *inode;
|
2008-06-12 08:53:53 +07:00
|
|
|
struct btrfs_pending_snapshot *pending_snapshot;
|
|
|
|
struct btrfs_trans_handle *trans;
|
2009-11-12 16:37:02 +07:00
|
|
|
int ret;
|
2008-06-12 08:53:53 +07:00
|
|
|
|
|
|
|
if (!root->ref_cows)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2008-11-18 09:02:50 +07:00
|
|
|
pending_snapshot = kzalloc(sizeof(*pending_snapshot), GFP_NOFS);
|
2010-05-16 21:48:46 +07:00
|
|
|
if (!pending_snapshot)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
btrfs_init_block_rsv(&pending_snapshot->block_rsv);
|
2008-11-18 09:02:50 +07:00
|
|
|
pending_snapshot->dentry = dentry;
|
2008-06-12 08:53:53 +07:00
|
|
|
pending_snapshot->root = root;
|
2010-12-20 15:04:08 +07:00
|
|
|
pending_snapshot->readonly = readonly;
|
2010-05-16 21:48:46 +07:00
|
|
|
|
|
|
|
trans = btrfs_start_transaction(root->fs_info->extent_root, 5);
|
|
|
|
if (IS_ERR(trans)) {
|
|
|
|
ret = PTR_ERR(trans);
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = btrfs_snap_reserve_metadata(trans, pending_snapshot);
|
|
|
|
BUG_ON(ret);
|
|
|
|
|
2011-06-15 02:16:14 +07:00
|
|
|
spin_lock(&root->fs_info->trans_lock);
|
2008-06-12 08:53:53 +07:00
|
|
|
list_add(&pending_snapshot->list,
|
|
|
|
&trans->transaction->pending_snapshots);
|
2011-06-15 02:16:14 +07:00
|
|
|
spin_unlock(&root->fs_info->trans_lock);
|
2010-10-30 02:41:32 +07:00
|
|
|
if (async_transid) {
|
|
|
|
*async_transid = trans->transid;
|
|
|
|
ret = btrfs_commit_transaction_async(trans,
|
|
|
|
root->fs_info->extent_root, 1);
|
|
|
|
} else {
|
|
|
|
ret = btrfs_commit_transaction(trans,
|
|
|
|
root->fs_info->extent_root);
|
|
|
|
}
|
2009-11-12 16:37:02 +07:00
|
|
|
BUG_ON(ret);
|
2010-05-16 21:48:46 +07:00
|
|
|
|
|
|
|
ret = pending_snapshot->error;
|
|
|
|
if (ret)
|
|
|
|
goto fail;
|
|
|
|
|
2011-02-01 04:22:42 +07:00
|
|
|
ret = btrfs_orphan_cleanup(pending_snapshot->snap);
|
|
|
|
if (ret)
|
|
|
|
goto fail;
|
2008-06-12 08:53:53 +07:00
|
|
|
|
2011-07-17 08:38:06 +07:00
|
|
|
inode = btrfs_lookup_dentry(dentry->d_parent->d_inode, dentry);
|
2009-11-12 16:37:02 +07:00
|
|
|
if (IS_ERR(inode)) {
|
|
|
|
ret = PTR_ERR(inode);
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
BUG_ON(!inode);
|
|
|
|
d_instantiate(dentry, inode);
|
|
|
|
ret = 0;
|
|
|
|
fail:
|
2010-05-16 21:48:46 +07:00
|
|
|
kfree(pending_snapshot);
|
2008-06-12 08:53:53 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2010-10-30 02:46:43 +07:00
|
|
|
/* copy of check_sticky in fs/namei.c
|
|
|
|
* It's inline, so the penalty for filesystems that don't use the sticky bit is
|
|
|
|
* minimal.
|
|
|
|
*/
|
|
|
|
static inline int btrfs_check_sticky(struct inode *dir, struct inode *inode)
|
|
|
|
{
|
|
|
|
uid_t fsuid = current_fsuid();
|
|
|
|
|
|
|
|
if (!(dir->i_mode & S_ISVTX))
|
|
|
|
return 0;
|
|
|
|
if (inode->i_uid == fsuid)
|
|
|
|
return 0;
|
|
|
|
if (dir->i_uid == fsuid)
|
|
|
|
return 0;
|
|
|
|
return !capable(CAP_FOWNER);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* copy of may_delete in fs/namei.c
|
|
|
|
* Check whether we can remove a link victim from directory dir, check
|
|
|
|
* whether the type of victim is right.
|
|
|
|
* 1. We can't do it if dir is read-only (done in permission())
|
|
|
|
* 2. We should have write and exec permissions on dir
|
|
|
|
* 3. We can't remove anything from append-only dir
|
|
|
|
* 4. We can't do anything with immutable dir (done in permission())
|
|
|
|
* 5. If the sticky bit on dir is set we should either
|
|
|
|
* a. be owner of dir, or
|
|
|
|
* b. be owner of victim, or
|
|
|
|
* c. have CAP_FOWNER capability
|
|
|
|
* 6. If the victim is append-only or immutable we can't do anything with
|
|
|
|
* links pointing to it.
|
|
|
|
* 7. If we were asked to remove a directory and victim isn't one - ENOTDIR.
|
|
|
|
* 8. If we were asked to remove a non-directory and victim isn't one - EISDIR.
|
|
|
|
* 9. We can't remove a root or mountpoint.
|
|
|
|
* 10. We don't allow removal of NFS sillyrenamed files; it's handled by
|
|
|
|
* nfs_async_unlink().
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int btrfs_may_delete(struct inode *dir, struct dentry *victim, int isdir)
|
|
|
|
{
|
|
|
|
int error;
|
|
|
|
|
|
|
|
if (!victim->d_inode)
|
|
|
|
return -ENOENT;
|
|
|
|
|
|
|
|
BUG_ON(victim->d_parent->d_inode != dir);
|
|
|
|
audit_inode_child(victim, dir);
|
|
|
|
|
|
|
|
error = inode_permission(dir, MAY_WRITE | MAY_EXEC);
|
|
|
|
if (error)
|
|
|
|
return error;
|
|
|
|
if (IS_APPEND(dir))
|
|
|
|
return -EPERM;
|
|
|
|
if (btrfs_check_sticky(dir, victim->d_inode) ||
|
|
|
|
IS_APPEND(victim->d_inode) ||
|
|
|
|
IS_IMMUTABLE(victim->d_inode) || IS_SWAPFILE(victim->d_inode))
|
|
|
|
return -EPERM;
|
|
|
|
if (isdir) {
|
|
|
|
if (!S_ISDIR(victim->d_inode->i_mode))
|
|
|
|
return -ENOTDIR;
|
|
|
|
if (IS_ROOT(victim))
|
|
|
|
return -EBUSY;
|
|
|
|
} else if (S_ISDIR(victim->d_inode->i_mode))
|
|
|
|
return -EISDIR;
|
|
|
|
if (IS_DEADDIR(dir))
|
|
|
|
return -ENOENT;
|
|
|
|
if (victim->d_flags & DCACHE_NFSFS_RENAMED)
|
|
|
|
return -EBUSY;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-10-10 00:39:39 +07:00
|
|
|
/* copy of may_create in fs/namei.c */
|
|
|
|
static inline int btrfs_may_create(struct inode *dir, struct dentry *child)
|
|
|
|
{
|
|
|
|
if (child->d_inode)
|
|
|
|
return -EEXIST;
|
|
|
|
if (IS_DEADDIR(dir))
|
|
|
|
return -ENOENT;
|
|
|
|
return inode_permission(dir, MAY_WRITE | MAY_EXEC);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Create a new subvolume below @parent. This is largely modeled after
|
|
|
|
* sys_mkdirat and vfs_mkdir, but we only do a single component lookup
|
|
|
|
* inside this filesystem so it's quite a bit simpler.
|
|
|
|
*/
|
2009-09-22 03:00:26 +07:00
|
|
|
static noinline int btrfs_mksubvol(struct path *parent,
|
|
|
|
char *name, int namelen,
|
2010-10-30 02:41:32 +07:00
|
|
|
struct btrfs_root *snap_src,
|
2010-12-20 15:04:08 +07:00
|
|
|
u64 *async_transid, bool readonly)
|
2008-10-10 00:39:39 +07:00
|
|
|
{
|
2009-09-22 03:00:26 +07:00
|
|
|
struct inode *dir = parent->dentry->d_inode;
|
2008-10-10 00:39:39 +07:00
|
|
|
struct dentry *dentry;
|
|
|
|
int error;
|
|
|
|
|
2009-09-22 03:00:26 +07:00
|
|
|
mutex_lock_nested(&dir->i_mutex, I_MUTEX_PARENT);
|
2008-10-10 00:39:39 +07:00
|
|
|
|
|
|
|
dentry = lookup_one_len(name, parent->dentry, namelen);
|
|
|
|
error = PTR_ERR(dentry);
|
|
|
|
if (IS_ERR(dentry))
|
|
|
|
goto out_unlock;
|
|
|
|
|
|
|
|
error = -EEXIST;
|
|
|
|
if (dentry->d_inode)
|
|
|
|
goto out_dput;
|
|
|
|
|
|
|
|
error = mnt_want_write(parent->mnt);
|
|
|
|
if (error)
|
|
|
|
goto out_dput;
|
|
|
|
|
2009-09-22 03:00:26 +07:00
|
|
|
error = btrfs_may_create(dir, dentry);
|
2008-10-10 00:39:39 +07:00
|
|
|
if (error)
|
|
|
|
goto out_drop_write;
|
|
|
|
|
2009-09-22 03:00:26 +07:00
|
|
|
down_read(&BTRFS_I(dir)->root->fs_info->subvol_sem);
|
|
|
|
|
|
|
|
if (btrfs_root_refs(&BTRFS_I(dir)->root->root_item) == 0)
|
|
|
|
goto out_up_read;
|
|
|
|
|
2008-11-18 09:02:50 +07:00
|
|
|
if (snap_src) {
|
2010-10-30 02:41:32 +07:00
|
|
|
error = create_snapshot(snap_src, dentry,
|
2010-12-20 15:04:08 +07:00
|
|
|
name, namelen, async_transid, readonly);
|
2008-11-18 09:02:50 +07:00
|
|
|
} else {
|
2009-09-22 03:00:26 +07:00
|
|
|
error = create_subvol(BTRFS_I(dir)->root, dentry,
|
2010-10-30 02:41:32 +07:00
|
|
|
name, namelen, async_transid);
|
2008-11-18 09:02:50 +07:00
|
|
|
}
|
2009-09-22 03:00:26 +07:00
|
|
|
if (!error)
|
|
|
|
fsnotify_mkdir(dir, dentry);
|
|
|
|
out_up_read:
|
|
|
|
up_read(&BTRFS_I(dir)->root->fs_info->subvol_sem);
|
2008-10-10 00:39:39 +07:00
|
|
|
out_drop_write:
|
|
|
|
mnt_drop_write(parent->mnt);
|
|
|
|
out_dput:
|
|
|
|
dput(dentry);
|
|
|
|
out_unlock:
|
2009-09-22 03:00:26 +07:00
|
|
|
mutex_unlock(&dir->i_mutex);
|
2008-10-10 00:39:39 +07:00
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
2011-05-25 02:35:30 +07:00
|
|
|
/*
|
|
|
|
* When we're defragging a range, we don't want to kick it off again
|
|
|
|
* if it is really just waiting for delalloc to send it down.
|
|
|
|
* If we find a nice big extent or delalloc range for the bytes in the
|
|
|
|
* file you want to defrag, we return 0 to let you know to skip this
|
|
|
|
* part of the file
|
|
|
|
*/
|
|
|
|
static int check_defrag_in_cache(struct inode *inode, u64 offset, int thresh)
|
|
|
|
{
|
|
|
|
struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
|
|
|
|
struct extent_map *em = NULL;
|
|
|
|
struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
|
|
|
|
u64 end;
|
|
|
|
|
|
|
|
read_lock(&em_tree->lock);
|
|
|
|
em = lookup_extent_mapping(em_tree, offset, PAGE_CACHE_SIZE);
|
|
|
|
read_unlock(&em_tree->lock);
|
|
|
|
|
|
|
|
if (em) {
|
|
|
|
end = extent_map_end(em);
|
|
|
|
free_extent_map(em);
|
|
|
|
if (end - offset > thresh)
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
/* if we already have a nice delalloc here, just stop */
|
|
|
|
thresh /= 2;
|
|
|
|
end = count_range_bits(io_tree, &offset, offset + thresh,
|
|
|
|
thresh, EXTENT_DELALLOC, 1);
|
|
|
|
if (end >= thresh)
|
|
|
|
return 0;
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* helper function to walk through a file and find extents
|
|
|
|
* newer than a specific transid, and smaller than thresh.
|
|
|
|
*
|
|
|
|
* This is used by the defragging code to find new and small
|
|
|
|
* extents
|
|
|
|
*/
|
|
|
|
static int find_new_extents(struct btrfs_root *root,
|
|
|
|
struct inode *inode, u64 newer_than,
|
|
|
|
u64 *off, int thresh)
|
|
|
|
{
|
|
|
|
struct btrfs_path *path;
|
|
|
|
struct btrfs_key min_key;
|
|
|
|
struct btrfs_key max_key;
|
|
|
|
struct extent_buffer *leaf;
|
|
|
|
struct btrfs_file_extent_item *extent;
|
|
|
|
int type;
|
|
|
|
int ret;
|
2011-06-01 00:08:14 +07:00
|
|
|
u64 ino = btrfs_ino(inode);
|
2011-05-25 02:35:30 +07:00
|
|
|
|
|
|
|
path = btrfs_alloc_path();
|
|
|
|
if (!path)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2011-06-01 00:08:14 +07:00
|
|
|
min_key.objectid = ino;
|
2011-05-25 02:35:30 +07:00
|
|
|
min_key.type = BTRFS_EXTENT_DATA_KEY;
|
|
|
|
min_key.offset = *off;
|
|
|
|
|
2011-06-01 00:08:14 +07:00
|
|
|
max_key.objectid = ino;
|
2011-05-25 02:35:30 +07:00
|
|
|
max_key.type = (u8)-1;
|
|
|
|
max_key.offset = (u64)-1;
|
|
|
|
|
|
|
|
path->keep_locks = 1;
|
|
|
|
|
|
|
|
while (1) {
|
|
|
|
ret = btrfs_search_forward(root, &min_key, &max_key,
|
|
|
|
path, 0, newer_than);
|
|
|
|
if (ret != 0)
|
|
|
|
goto none;
|
2011-06-01 00:08:14 +07:00
|
|
|
if (min_key.objectid != ino)
|
2011-05-25 02:35:30 +07:00
|
|
|
goto none;
|
|
|
|
if (min_key.type != BTRFS_EXTENT_DATA_KEY)
|
|
|
|
goto none;
|
|
|
|
|
|
|
|
leaf = path->nodes[0];
|
|
|
|
extent = btrfs_item_ptr(leaf, path->slots[0],
|
|
|
|
struct btrfs_file_extent_item);
|
|
|
|
|
|
|
|
type = btrfs_file_extent_type(leaf, extent);
|
|
|
|
if (type == BTRFS_FILE_EXTENT_REG &&
|
|
|
|
btrfs_file_extent_num_bytes(leaf, extent) < thresh &&
|
|
|
|
check_defrag_in_cache(inode, min_key.offset, thresh)) {
|
|
|
|
*off = min_key.offset;
|
|
|
|
btrfs_free_path(path);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (min_key.offset == (u64)-1)
|
|
|
|
goto none;
|
|
|
|
|
|
|
|
min_key.offset++;
|
|
|
|
btrfs_release_path(path);
|
|
|
|
}
|
|
|
|
none:
|
|
|
|
btrfs_free_path(path);
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
|
2010-03-10 22:52:59 +07:00
|
|
|
static int should_defrag_range(struct inode *inode, u64 start, u64 len,
|
2010-03-11 21:42:04 +07:00
|
|
|
int thresh, u64 *last_len, u64 *skip,
|
|
|
|
u64 *defrag_end)
|
2010-03-10 22:52:59 +07:00
|
|
|
{
|
|
|
|
struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
|
|
|
|
struct extent_map *em = NULL;
|
|
|
|
struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
|
|
|
|
int ret = 1;
|
|
|
|
|
|
|
|
/*
|
2011-09-02 14:57:07 +07:00
|
|
|
* make sure that once we start defragging an extent, we keep on
|
2010-03-10 22:52:59 +07:00
|
|
|
* defragging it
|
|
|
|
*/
|
|
|
|
if (start < *defrag_end)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
*skip = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* hopefully we have this extent in the tree already, try without
|
|
|
|
* the full extent lock
|
|
|
|
*/
|
|
|
|
read_lock(&em_tree->lock);
|
|
|
|
em = lookup_extent_mapping(em_tree, start, len);
|
|
|
|
read_unlock(&em_tree->lock);
|
|
|
|
|
|
|
|
if (!em) {
|
|
|
|
/* get the big lock and read metadata off disk */
|
|
|
|
lock_extent(io_tree, start, start + len - 1, GFP_NOFS);
|
|
|
|
em = btrfs_get_extent(inode, NULL, 0, start, len, 0);
|
|
|
|
unlock_extent(io_tree, start, start + len - 1, GFP_NOFS);
|
|
|
|
|
2010-03-20 18:22:10 +07:00
|
|
|
if (IS_ERR(em))
|
2010-03-10 22:52:59 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* this will cover holes, and inline extents */
|
|
|
|
if (em->block_start >= EXTENT_MAP_LAST_BYTE)
|
|
|
|
ret = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* we hit a real extent; if it is big, don't bother defragging it again
|
|
|
|
*/
|
2010-03-11 21:42:04 +07:00
|
|
|
if ((*last_len == 0 || *last_len >= thresh) && em->len >= thresh)
|
2010-03-10 22:52:59 +07:00
|
|
|
ret = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* last_len ends up being a counter of how many bytes we've defragged.
|
|
|
|
* every time we choose not to defrag an extent, we reset *last_len
|
|
|
|
* so that the next tiny extent will force a defrag.
|
|
|
|
*
|
|
|
|
* The end result of this is that tiny extents before a single big
|
|
|
|
* extent will force at least part of that big extent to be defragged.
|
|
|
|
*/
|
|
|
|
if (ret) {
|
|
|
|
*defrag_end = extent_map_end(em);
|
|
|
|
} else {
|
|
|
|
*last_len = 0;
|
|
|
|
*skip = extent_map_end(em);
|
|
|
|
*defrag_end = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
free_extent_map(em);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
/*
 * it doesn't do much good to defrag one or two pages
 * at a time. This pulls in a nice chunk of pages
 * to COW and defrag.
 *
 * It also makes sure the delalloc code has enough
 * dirty data to avoid making new small extents as part
 * of the defrag
 *
 * It's a good idea to start RA on this range
 * before calling this.
 */
static int cluster_pages_for_defrag(struct inode *inode,
				    struct page **pages,
				    unsigned long start_index,
				    int num_pages)
{
	unsigned long file_end;
	u64 isize = i_size_read(inode);
	u64 page_start;
	u64 page_end;
	int ret;
	int i;
	int i_done;
	struct btrfs_ordered_extent *ordered;
	struct extent_state *cached_state = NULL;
	gfp_t mask = btrfs_alloc_write_mask(inode->i_mapping);

	if (isize == 0)
		return 0;
	file_end = (isize - 1) >> PAGE_CACHE_SHIFT;

	mutex_lock(&inode->i_mutex);
	ret = btrfs_delalloc_reserve_space(inode,
					   num_pages << PAGE_CACHE_SHIFT);
	mutex_unlock(&inode->i_mutex);
	if (ret)
		return ret;
again:
	ret = 0;
	i_done = 0;

	/* step one, lock all the pages */
	for (i = 0; i < num_pages; i++) {
		struct page *page;
		page = find_or_create_page(inode->i_mapping,
					   start_index + i, mask);
		if (!page)
			break;

		if (!PageUptodate(page)) {
			btrfs_readpage(NULL, page);
			lock_page(page);
			if (!PageUptodate(page)) {
				unlock_page(page);
				page_cache_release(page);
				ret = -EIO;
				break;
			}
		}
		isize = i_size_read(inode);
		file_end = (isize - 1) >> PAGE_CACHE_SHIFT;
		if (!isize || page->index > file_end ||
		    page->mapping != inode->i_mapping) {
			/* whoops, we blew past eof, skip this page */
			unlock_page(page);
			page_cache_release(page);
			break;
		}
		pages[i] = page;
		i_done++;
	}
	if (!i_done || ret)
		goto out;

	if (!(inode->i_sb->s_flags & MS_ACTIVE))
		goto out;

	/*
	 * so now we have a nice long stream of locked
	 * and up to date pages, lets wait on them
	 */
	for (i = 0; i < i_done; i++)
		wait_on_page_writeback(pages[i]);

	page_start = page_offset(pages[0]);
	page_end = page_offset(pages[i_done - 1]) + PAGE_CACHE_SIZE;

	lock_extent_bits(&BTRFS_I(inode)->io_tree,
			 page_start, page_end - 1, 0, &cached_state,
			 GFP_NOFS);
	ordered = btrfs_lookup_first_ordered_extent(inode, page_end - 1);
	if (ordered &&
	    ordered->file_offset + ordered->len > page_start &&
	    ordered->file_offset < page_end) {
		btrfs_put_ordered_extent(ordered);
		unlock_extent_cached(&BTRFS_I(inode)->io_tree,
				     page_start, page_end - 1,
				     &cached_state, GFP_NOFS);
		for (i = 0; i < i_done; i++) {
			unlock_page(pages[i]);
			page_cache_release(pages[i]);
		}
		btrfs_wait_ordered_range(inode, page_start,
					 page_end - page_start);
		goto again;
	}
	if (ordered)
		btrfs_put_ordered_extent(ordered);

	clear_extent_bit(&BTRFS_I(inode)->io_tree, page_start,
			 page_end - 1, EXTENT_DIRTY | EXTENT_DELALLOC |
			 EXTENT_DO_ACCOUNTING, 0, 0, &cached_state,
			 GFP_NOFS);

	if (i_done != num_pages) {
		spin_lock(&BTRFS_I(inode)->lock);
		BTRFS_I(inode)->outstanding_extents++;
		spin_unlock(&BTRFS_I(inode)->lock);
		btrfs_delalloc_release_space(inode,
				     (num_pages - i_done) << PAGE_CACHE_SHIFT);
	}


	btrfs_set_extent_delalloc(inode, page_start, page_end - 1,
				  &cached_state);

	unlock_extent_cached(&BTRFS_I(inode)->io_tree,
			     page_start, page_end - 1, &cached_state,
			     GFP_NOFS);

	for (i = 0; i < i_done; i++) {
		clear_page_dirty_for_io(pages[i]);
		ClearPageChecked(pages[i]);
		set_page_extent_mapped(pages[i]);
		set_page_dirty(pages[i]);
		unlock_page(pages[i]);
		page_cache_release(pages[i]);
	}
	return i_done;
out:
	for (i = 0; i < i_done; i++) {
		unlock_page(pages[i]);
		page_cache_release(pages[i]);
	}
	btrfs_delalloc_release_space(inode, num_pages << PAGE_CACHE_SHIFT);
	return ret;

}
int btrfs_defrag_file(struct inode *inode, struct file *file,
		      struct btrfs_ioctl_defrag_range_args *range,
		      u64 newer_than, unsigned long max_to_defrag)
{
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct btrfs_super_block *disk_super;
	struct file_ra_state *ra = NULL;
	unsigned long last_index;
	u64 isize = i_size_read(inode);
	u64 features;
	u64 last_len = 0;
	u64 skip = 0;
	u64 defrag_end = 0;
	u64 newer_off = range->start;
	unsigned long i;
	unsigned long ra_index = 0;
	int ret;
	int defrag_count = 0;
	int compress_type = BTRFS_COMPRESS_ZLIB;
	int extent_thresh = range->extent_thresh;
	int max_cluster = (256 * 1024) >> PAGE_CACHE_SHIFT;
	int cluster = max_cluster;
	u64 new_align = ~((u64)128 * 1024 - 1);
	struct page **pages = NULL;

	if (extent_thresh == 0)
		extent_thresh = 256 * 1024;

	if (range->flags & BTRFS_DEFRAG_RANGE_COMPRESS) {
		if (range->compress_type > BTRFS_COMPRESS_TYPES)
			return -EINVAL;
		if (range->compress_type)
			compress_type = range->compress_type;
	}

	if (isize == 0)
		return 0;

	/*
	 * if we were not given a file, allocate a readahead
	 * context
	 */
	if (!file) {
		ra = kzalloc(sizeof(*ra), GFP_NOFS);
		if (!ra)
			return -ENOMEM;
		file_ra_state_init(ra, inode->i_mapping);
	} else {
		ra = &file->f_ra;
	}

	pages = kmalloc(sizeof(struct page *) * max_cluster,
			GFP_NOFS);
	if (!pages) {
		ret = -ENOMEM;
		goto out_ra;
	}

	/* find the last page to defrag */
	if (range->start + range->len > range->start) {
		last_index = min_t(u64, isize - 1,
			 range->start + range->len - 1) >> PAGE_CACHE_SHIFT;
	} else {
		last_index = (isize - 1) >> PAGE_CACHE_SHIFT;
	}

	if (newer_than) {
		ret = find_new_extents(root, inode, newer_than,
				       &newer_off, 64 * 1024);
		if (!ret) {
			range->start = newer_off;
			/*
			 * we always align our defrag to help keep
			 * the extents in the file evenly spaced
			 */
			i = (newer_off & new_align) >> PAGE_CACHE_SHIFT;
		} else
			goto out_ra;
	} else {
		i = range->start >> PAGE_CACHE_SHIFT;
	}
	if (!max_to_defrag)
		max_to_defrag = last_index;

	/*
	 * make writeback starts from i, so the defrag range can be
	 * written sequentially.
	 */
	if (i < inode->i_mapping->writeback_index)
		inode->i_mapping->writeback_index = i;

	while (i <= last_index && defrag_count < max_to_defrag &&
	       (i < (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >>
		PAGE_CACHE_SHIFT)) {
		/*
		 * make sure we stop running if someone unmounts
		 * the FS
		 */
		if (!(inode->i_sb->s_flags & MS_ACTIVE))
			break;

		if (!newer_than &&
		    !should_defrag_range(inode, (u64)i << PAGE_CACHE_SHIFT,
					PAGE_CACHE_SIZE,
					extent_thresh,
					&last_len, &skip,
					&defrag_end)) {
			unsigned long next;
			/*
			 * the should_defrag function tells us how much to skip
			 * bump our counter by the suggested amount
			 */
			next = (skip + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
			i = max(i + 1, next);
			continue;
		}

		if (!newer_than) {
			cluster = (PAGE_CACHE_ALIGN(defrag_end) >>
				   PAGE_CACHE_SHIFT) - i;
			cluster = min(cluster, max_cluster);
		} else {
			cluster = max_cluster;
		}

		if (range->flags & BTRFS_DEFRAG_RANGE_COMPRESS)
			BTRFS_I(inode)->force_compress = compress_type;

		if (i + cluster > ra_index) {
			ra_index = max(i, ra_index);
			btrfs_force_ra(inode->i_mapping, ra, file, ra_index,
				       cluster);
			ra_index += max_cluster;
		}

		ret = cluster_pages_for_defrag(inode, pages, i, cluster);
		if (ret < 0)
			goto out_ra;

		defrag_count += ret;
		balance_dirty_pages_ratelimited_nr(inode->i_mapping, ret);

		if (newer_than) {
			if (newer_off == (u64)-1)
				break;

			newer_off = max(newer_off + 1,
					(u64)i << PAGE_CACHE_SHIFT);

			ret = find_new_extents(root, inode,
					       newer_than, &newer_off,
					       64 * 1024);
			if (!ret) {
				range->start = newer_off;
				i = (newer_off & new_align) >> PAGE_CACHE_SHIFT;
			} else {
				break;
			}
		} else {
			if (ret > 0) {
				i += ret;
				last_len += ret << PAGE_CACHE_SHIFT;
			} else {
				i++;
				last_len = 0;
			}
		}
	}

	if ((range->flags & BTRFS_DEFRAG_RANGE_START_IO))
		filemap_flush(inode->i_mapping);

	if ((range->flags & BTRFS_DEFRAG_RANGE_COMPRESS)) {
		/* the filemap_flush will queue IO into the worker threads, but
		 * we have to make sure the IO is actually started and that
		 * ordered extents get created before we return
		 */
		atomic_inc(&root->fs_info->async_submit_draining);
		while (atomic_read(&root->fs_info->nr_async_submits) ||
		       atomic_read(&root->fs_info->async_delalloc_pages)) {
			wait_event(root->fs_info->async_submit_wait,
			   (atomic_read(&root->fs_info->nr_async_submits) == 0 &&
			    atomic_read(&root->fs_info->async_delalloc_pages) == 0));
		}
		atomic_dec(&root->fs_info->async_submit_draining);

		mutex_lock(&inode->i_mutex);
		BTRFS_I(inode)->force_compress = BTRFS_COMPRESS_NONE;
		mutex_unlock(&inode->i_mutex);
	}

	disk_super = root->fs_info->super_copy;
	features = btrfs_super_incompat_flags(disk_super);
	if (range->compress_type == BTRFS_COMPRESS_LZO) {
		features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO;
		btrfs_set_super_incompat_flags(disk_super, features);
	}

	ret = defrag_count;

out_ra:
	if (!file)
		kfree(ra);
	kfree(pages);
	return ret;
}
static noinline int btrfs_ioctl_resize(struct btrfs_root *root,
					void __user *arg)
{
	u64 new_size;
	u64 old_size;
	u64 devid = 1;
	struct btrfs_ioctl_vol_args *vol_args;
	struct btrfs_trans_handle *trans;
	struct btrfs_device *device = NULL;
	char *sizestr;
	char *devstr = NULL;
	int ret = 0;
	int mod = 0;

	if (root->fs_info->sb->s_flags & MS_RDONLY)
		return -EROFS;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	vol_args = memdup_user(arg, sizeof(*vol_args));
	if (IS_ERR(vol_args))
		return PTR_ERR(vol_args);

	vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';

	mutex_lock(&root->fs_info->volume_mutex);
	sizestr = vol_args->name;
	devstr = strchr(sizestr, ':');
	if (devstr) {
		char *end;
		sizestr = devstr + 1;
		*devstr = '\0';
		devstr = vol_args->name;
		devid = simple_strtoull(devstr, &end, 10);
		printk(KERN_INFO "btrfs: resizing devid %llu\n",
		       (unsigned long long)devid);
	}
	device = btrfs_find_device(root, devid, NULL, NULL);
	if (!device) {
		printk(KERN_INFO "btrfs: resizer unable to find device %llu\n",
		       (unsigned long long)devid);
		ret = -EINVAL;
		goto out_unlock;
	}
	if (!strcmp(sizestr, "max"))
		new_size = device->bdev->bd_inode->i_size;
	else {
		if (sizestr[0] == '-') {
			mod = -1;
			sizestr++;
		} else if (sizestr[0] == '+') {
			mod = 1;
			sizestr++;
		}
		new_size = memparse(sizestr, NULL);
		if (new_size == 0) {
			ret = -EINVAL;
			goto out_unlock;
		}
	}

	old_size = device->total_bytes;

	if (mod < 0) {
		if (new_size > old_size) {
			ret = -EINVAL;
			goto out_unlock;
		}
		new_size = old_size - new_size;
	} else if (mod > 0) {
		new_size = old_size + new_size;
	}

	if (new_size < 256 * 1024 * 1024) {
		ret = -EINVAL;
		goto out_unlock;
	}
	if (new_size > device->bdev->bd_inode->i_size) {
		ret = -EFBIG;
		goto out_unlock;
	}

	do_div(new_size, root->sectorsize);
	new_size *= root->sectorsize;

	printk(KERN_INFO "btrfs: new size for %s is %llu\n",
		device->name, (unsigned long long)new_size);

	if (new_size > old_size) {
		trans = btrfs_start_transaction(root, 0);
		if (IS_ERR(trans)) {
			ret = PTR_ERR(trans);
			goto out_unlock;
		}
		ret = btrfs_grow_device(trans, device, new_size);
		btrfs_commit_transaction(trans, root);
	} else if (new_size < old_size) {
		ret = btrfs_shrink_device(device, new_size);
	}

out_unlock:
	mutex_unlock(&root->fs_info->volume_mutex);
	kfree(vol_args);
	return ret;
}
static noinline int btrfs_ioctl_snap_create_transid(struct file *file,
						    char *name,
						    unsigned long fd,
						    int subvol,
						    u64 *transid,
						    bool readonly)
{
	struct btrfs_root *root = BTRFS_I(fdentry(file)->d_inode)->root;
	struct file *src_file;
	int namelen;
	int ret = 0;

	if (root->fs_info->sb->s_flags & MS_RDONLY)
		return -EROFS;

	namelen = strlen(name);
	if (strchr(name, '/')) {
		ret = -EINVAL;
		goto out;
	}

	if (subvol) {
		ret = btrfs_mksubvol(&file->f_path, name, namelen,
				     NULL, transid, readonly);
	} else {
		struct inode *src_inode;
		src_file = fget(fd);
		if (!src_file) {
			ret = -EINVAL;
			goto out;
		}

		src_inode = src_file->f_path.dentry->d_inode;
		if (src_inode->i_sb != file->f_path.dentry->d_inode->i_sb) {
			printk(KERN_INFO "btrfs: Snapshot src from "
			       "another FS\n");
			ret = -EINVAL;
			fput(src_file);
			goto out;
		}
		ret = btrfs_mksubvol(&file->f_path, name, namelen,
				     BTRFS_I(src_inode)->root,
				     transid, readonly);
		fput(src_file);
	}
out:
	return ret;
}
static noinline int btrfs_ioctl_snap_create(struct file *file,
					    void __user *arg, int subvol)
{
	struct btrfs_ioctl_vol_args *vol_args;
	int ret;

	vol_args = memdup_user(arg, sizeof(*vol_args));
	if (IS_ERR(vol_args))
		return PTR_ERR(vol_args);
	vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';

	ret = btrfs_ioctl_snap_create_transid(file, vol_args->name,
					      vol_args->fd, subvol,
					      NULL, false);

	kfree(vol_args);
	return ret;
}

static noinline int btrfs_ioctl_snap_create_v2(struct file *file,
					       void __user *arg, int subvol)
{
	struct btrfs_ioctl_vol_args_v2 *vol_args;
	int ret;
	u64 transid = 0;
	u64 *ptr = NULL;
	bool readonly = false;

	vol_args = memdup_user(arg, sizeof(*vol_args));
	if (IS_ERR(vol_args))
		return PTR_ERR(vol_args);
	vol_args->name[BTRFS_SUBVOL_NAME_MAX] = '\0';

	if (vol_args->flags &
	    ~(BTRFS_SUBVOL_CREATE_ASYNC | BTRFS_SUBVOL_RDONLY)) {
		ret = -EOPNOTSUPP;
		goto out;
	}

	if (vol_args->flags & BTRFS_SUBVOL_CREATE_ASYNC)
		ptr = &transid;
	if (vol_args->flags & BTRFS_SUBVOL_RDONLY)
		readonly = true;

	ret = btrfs_ioctl_snap_create_transid(file, vol_args->name,
					      vol_args->fd, subvol,
					      ptr, readonly);

	if (ret == 0 && ptr &&
	    copy_to_user(arg +
			 offsetof(struct btrfs_ioctl_vol_args_v2,
				  transid), ptr, sizeof(*ptr)))
		ret = -EFAULT;
out:
	kfree(vol_args);
	return ret;
}
static noinline int btrfs_ioctl_subvol_getflags(struct file *file,
						void __user *arg)
{
	struct inode *inode = fdentry(file)->d_inode;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	int ret = 0;
	u64 flags = 0;

	if (btrfs_ino(inode) != BTRFS_FIRST_FREE_OBJECTID)
		return -EINVAL;

	down_read(&root->fs_info->subvol_sem);
	if (btrfs_root_readonly(root))
		flags |= BTRFS_SUBVOL_RDONLY;
	up_read(&root->fs_info->subvol_sem);

	if (copy_to_user(arg, &flags, sizeof(flags)))
		ret = -EFAULT;

	return ret;
}

static noinline int btrfs_ioctl_subvol_setflags(struct file *file,
						void __user *arg)
{
	struct inode *inode = fdentry(file)->d_inode;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct btrfs_trans_handle *trans;
	u64 root_flags;
	u64 flags;
	int ret = 0;

	if (root->fs_info->sb->s_flags & MS_RDONLY)
		return -EROFS;

	if (btrfs_ino(inode) != BTRFS_FIRST_FREE_OBJECTID)
		return -EINVAL;

	if (copy_from_user(&flags, arg, sizeof(flags)))
		return -EFAULT;

	if (flags & BTRFS_SUBVOL_CREATE_ASYNC)
		return -EINVAL;

	if (flags & ~BTRFS_SUBVOL_RDONLY)
		return -EOPNOTSUPP;

	if (!inode_owner_or_capable(inode))
		return -EACCES;

	down_write(&root->fs_info->subvol_sem);

	/* nothing to do */
	if (!!(flags & BTRFS_SUBVOL_RDONLY) == btrfs_root_readonly(root))
		goto out;

	root_flags = btrfs_root_flags(&root->root_item);
	if (flags & BTRFS_SUBVOL_RDONLY)
		btrfs_set_root_flags(&root->root_item,
				     root_flags | BTRFS_ROOT_SUBVOL_RDONLY);
	else
		btrfs_set_root_flags(&root->root_item,
				     root_flags & ~BTRFS_ROOT_SUBVOL_RDONLY);

	trans = btrfs_start_transaction(root, 1);
	if (IS_ERR(trans)) {
		ret = PTR_ERR(trans);
		goto out_reset;
	}

	ret = btrfs_update_root(trans, root->fs_info->tree_root,
				&root->root_key, &root->root_item);

	btrfs_commit_transaction(trans, root);
out_reset:
	if (ret)
		btrfs_set_root_flags(&root->root_item, root_flags);
out:
	up_write(&root->fs_info->subvol_sem);
	return ret;
}
/*
 * helper to check if the subvolume references other subvolumes
 */
static noinline int may_destroy_subvol(struct btrfs_root *root)
{
	struct btrfs_path *path;
	struct btrfs_key key;
	int ret;

	path = btrfs_alloc_path();
	if (!path)
		return -ENOMEM;

	key.objectid = root->root_key.objectid;
	key.type = BTRFS_ROOT_REF_KEY;
	key.offset = (u64)-1;

	ret = btrfs_search_slot(NULL, root->fs_info->tree_root,
				&key, path, 0, 0);
	if (ret < 0)
		goto out;
	BUG_ON(ret == 0);

	ret = 0;
	if (path->slots[0] > 0) {
		path->slots[0]--;
		btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
		if (key.objectid == root->root_key.objectid &&
		    key.type == BTRFS_ROOT_REF_KEY)
			ret = -ENOTEMPTY;
	}
out:
	btrfs_free_path(path);
	return ret;
}
2010-03-01 03:39:26 +07:00
|
|
|
static noinline int key_in_sk(struct btrfs_key *key,
|
|
|
|
struct btrfs_ioctl_search_key *sk)
|
|
|
|
{
|
2010-03-18 23:10:08 +07:00
|
|
|
struct btrfs_key test;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
test.objectid = sk->min_objectid;
|
|
|
|
test.type = sk->min_type;
|
|
|
|
test.offset = sk->min_offset;
|
|
|
|
|
|
|
|
ret = btrfs_comp_cpu_keys(key, &test);
|
|
|
|
if (ret < 0)
|
2010-03-01 03:39:26 +07:00
|
|
|
return 0;
|
2010-03-18 23:10:08 +07:00
|
|
|
|
|
|
|
test.objectid = sk->max_objectid;
|
|
|
|
test.type = sk->max_type;
|
|
|
|
test.offset = sk->max_offset;
|
|
|
|
|
|
|
|
ret = btrfs_comp_cpu_keys(key, &test);
|
|
|
|
if (ret > 0)
|
2010-03-01 03:39:26 +07:00
|
|
|
return 0;
|
|
|
|
return 1;
|
|
|
|
}

static noinline int copy_to_sk(struct btrfs_root *root,
			       struct btrfs_path *path,
			       struct btrfs_key *key,
			       struct btrfs_ioctl_search_key *sk,
			       char *buf,
			       unsigned long *sk_offset,
			       int *num_found)
{
	u64 found_transid;
	struct extent_buffer *leaf;
	struct btrfs_ioctl_search_header sh;
	unsigned long item_off;
	unsigned long item_len;
	int nritems;
	int i;
	int slot;
	int ret = 0;

	leaf = path->nodes[0];
	slot = path->slots[0];
	nritems = btrfs_header_nritems(leaf);

	if (btrfs_header_generation(leaf) > sk->max_transid) {
		i = nritems;
		goto advance_key;
	}
	found_transid = btrfs_header_generation(leaf);

	for (i = slot; i < nritems; i++) {
		item_off = btrfs_item_ptr_offset(leaf, i);
		item_len = btrfs_item_size_nr(leaf, i);

		if (item_len > BTRFS_SEARCH_ARGS_BUFSIZE)
			item_len = 0;

		if (sizeof(sh) + item_len + *sk_offset >
		    BTRFS_SEARCH_ARGS_BUFSIZE) {
			ret = 1;
			goto overflow;
		}

		btrfs_item_key_to_cpu(leaf, key, i);
		if (!key_in_sk(key, sk))
			continue;

		sh.objectid = key->objectid;
		sh.offset = key->offset;
		sh.type = key->type;
		sh.len = item_len;
		sh.transid = found_transid;

		/* copy search result header */
		memcpy(buf + *sk_offset, &sh, sizeof(sh));
		*sk_offset += sizeof(sh);

		if (item_len) {
			char *p = buf + *sk_offset;
			/* copy the item */
			read_extent_buffer(leaf, p,
					   item_off, item_len);
			*sk_offset += item_len;
		}
		(*num_found)++;

		if (*num_found >= sk->nr_items)
			break;
	}
advance_key:
	ret = 0;
	if (key->offset < (u64)-1 && key->offset < sk->max_offset)
		key->offset++;
	else if (key->type < (u8)-1 && key->type < sk->max_type) {
		key->offset = 0;
		key->type++;
	} else if (key->objectid < (u64)-1 && key->objectid < sk->max_objectid) {
		key->offset = 0;
		key->type = 0;
		key->objectid++;
	} else
		ret = 1;
overflow:
	return ret;
}

static noinline int search_ioctl(struct inode *inode,
				 struct btrfs_ioctl_search_args *args)
{
	struct btrfs_root *root;
	struct btrfs_key key;
	struct btrfs_key max_key;
	struct btrfs_path *path;
	struct btrfs_ioctl_search_key *sk = &args->key;
	struct btrfs_fs_info *info = BTRFS_I(inode)->root->fs_info;
	int ret;
	int num_found = 0;
	unsigned long sk_offset = 0;

	path = btrfs_alloc_path();
	if (!path)
		return -ENOMEM;

	if (sk->tree_id == 0) {
		/* search the root of the inode that was passed */
		root = BTRFS_I(inode)->root;
	} else {
		key.objectid = sk->tree_id;
		key.type = BTRFS_ROOT_ITEM_KEY;
		key.offset = (u64)-1;
		root = btrfs_read_fs_root_no_name(info, &key);
		if (IS_ERR(root)) {
			printk(KERN_ERR "could not find root %llu\n",
			       sk->tree_id);
			btrfs_free_path(path);
			return -ENOENT;
		}
	}

	key.objectid = sk->min_objectid;
	key.type = sk->min_type;
	key.offset = sk->min_offset;

	max_key.objectid = sk->max_objectid;
	max_key.type = sk->max_type;
	max_key.offset = sk->max_offset;

	path->keep_locks = 1;

	while (1) {
		ret = btrfs_search_forward(root, &key, &max_key, path, 0,
					   sk->min_transid);
		if (ret != 0) {
			if (ret > 0)
				ret = 0;
			goto err;
		}
		ret = copy_to_sk(root, path, &key, sk, args->buf,
				 &sk_offset, &num_found);
		btrfs_release_path(path);
		if (ret || num_found >= sk->nr_items)
			break;
	}
	ret = 0;
err:
	sk->nr_items = num_found;
	btrfs_free_path(path);
	return ret;
}

static noinline int btrfs_ioctl_tree_search(struct file *file,
					   void __user *argp)
{
	struct btrfs_ioctl_search_args *args;
	struct inode *inode;
	int ret;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	args = memdup_user(argp, sizeof(*args));
	if (IS_ERR(args))
		return PTR_ERR(args);

	inode = fdentry(file)->d_inode;
	ret = search_ioctl(inode, args);
	if (ret == 0 && copy_to_user(argp, args, sizeof(*args)))
		ret = -EFAULT;
	kfree(args);
	return ret;
}

/*
 * Search INODE_REFs to identify the path name of the 'dirid' directory
 * in a 'tree_id' tree, and set the path name in 'name'.
 */
static noinline int btrfs_search_path_in_tree(struct btrfs_fs_info *info,
				u64 tree_id, u64 dirid, char *name)
{
	struct btrfs_root *root;
	struct btrfs_key key;
	char *ptr;
	int ret = -1;
	int slot;
	int len;
	int total_len = 0;
	struct btrfs_inode_ref *iref;
	struct extent_buffer *l;
	struct btrfs_path *path;

	if (dirid == BTRFS_FIRST_FREE_OBJECTID) {
		name[0] = '\0';
		return 0;
	}

	path = btrfs_alloc_path();
	if (!path)
		return -ENOMEM;

	ptr = &name[BTRFS_INO_LOOKUP_PATH_MAX];

	key.objectid = tree_id;
	key.type = BTRFS_ROOT_ITEM_KEY;
	key.offset = (u64)-1;
	root = btrfs_read_fs_root_no_name(info, &key);
	if (IS_ERR(root)) {
		printk(KERN_ERR "could not find root %llu\n", tree_id);
		ret = -ENOENT;
		goto out;
	}

	key.objectid = dirid;
	key.type = BTRFS_INODE_REF_KEY;
	key.offset = (u64)-1;

	while (1) {
		ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
		if (ret < 0)
			goto out;

		l = path->nodes[0];
		slot = path->slots[0];
		if (ret > 0 && slot > 0)
			slot--;
		btrfs_item_key_to_cpu(l, &key, slot);

		if (ret > 0 && (key.objectid != dirid ||
				key.type != BTRFS_INODE_REF_KEY)) {
			ret = -ENOENT;
			goto out;
		}

		iref = btrfs_item_ptr(l, slot, struct btrfs_inode_ref);
		len = btrfs_inode_ref_name_len(l, iref);
		ptr -= len + 1;
		total_len += len + 1;
		if (ptr < name)
			goto out;

		*(ptr + len) = '/';
		read_extent_buffer(l, ptr, (unsigned long)(iref + 1), len);

		if (key.offset == BTRFS_FIRST_FREE_OBJECTID)
			break;

		btrfs_release_path(path);
		key.objectid = key.offset;
		key.offset = (u64)-1;
		dirid = key.objectid;
	}
	if (ptr < name)
		goto out;
	memmove(name, ptr, total_len);
	name[total_len] = '\0';
	ret = 0;
out:
	btrfs_free_path(path);
	return ret;
}

static noinline int btrfs_ioctl_ino_lookup(struct file *file,
					   void __user *argp)
{
	struct btrfs_ioctl_ino_lookup_args *args;
	struct inode *inode;
	int ret;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	args = memdup_user(argp, sizeof(*args));
	if (IS_ERR(args))
		return PTR_ERR(args);

	inode = fdentry(file)->d_inode;

	if (args->treeid == 0)
		args->treeid = BTRFS_I(inode)->root->root_key.objectid;

	ret = btrfs_search_path_in_tree(BTRFS_I(inode)->root->fs_info,
					args->treeid, args->objectid,
					args->name);

	if (ret == 0 && copy_to_user(argp, args, sizeof(*args)))
		ret = -EFAULT;

	kfree(args);
	return ret;
}

static noinline int btrfs_ioctl_snap_destroy(struct file *file,
					     void __user *arg)
{
	struct dentry *parent = fdentry(file);
	struct dentry *dentry;
	struct inode *dir = parent->d_inode;
	struct inode *inode;
	struct btrfs_root *root = BTRFS_I(dir)->root;
	struct btrfs_root *dest = NULL;
	struct btrfs_ioctl_vol_args *vol_args;
	struct btrfs_trans_handle *trans;
	int namelen;
	int ret;
	int err = 0;

	vol_args = memdup_user(arg, sizeof(*vol_args));
	if (IS_ERR(vol_args))
		return PTR_ERR(vol_args);

	vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';
	namelen = strlen(vol_args->name);
	if (strchr(vol_args->name, '/') ||
	    strncmp(vol_args->name, "..", namelen) == 0) {
		err = -EINVAL;
		goto out;
	}

	err = mnt_want_write(file->f_path.mnt);
	if (err)
		goto out;

	mutex_lock_nested(&dir->i_mutex, I_MUTEX_PARENT);
	dentry = lookup_one_len(vol_args->name, parent, namelen);
	if (IS_ERR(dentry)) {
		err = PTR_ERR(dentry);
		goto out_unlock_dir;
	}

	if (!dentry->d_inode) {
		err = -ENOENT;
		goto out_dput;
	}

	inode = dentry->d_inode;
	dest = BTRFS_I(inode)->root;
	if (!capable(CAP_SYS_ADMIN)) {
		/*
		 * Regular user.  Only allow this with a special mount
		 * option, when the user has write+exec access to the
		 * subvol root, and when rmdir(2) would have been
		 * allowed.
		 *
		 * Note that this is _not_ a check that the subvol is
		 * empty or doesn't contain data that we wouldn't
		 * otherwise be able to delete.
		 *
		 * Users who want to delete empty subvols should try
		 * rmdir(2).
		 */
		err = -EPERM;
		if (!btrfs_test_opt(root, USER_SUBVOL_RM_ALLOWED))
			goto out_dput;

		/*
		 * Do not allow deletion if the parent dir is the same
		 * as the dir to be deleted.  That means the ioctl
		 * must be called on the dentry referencing the root
		 * of the subvol, not a random directory contained
		 * within it.
		 */
		err = -EINVAL;
		if (root == dest)
			goto out_dput;

		err = inode_permission(inode, MAY_WRITE | MAY_EXEC);
		if (err)
			goto out_dput;

		/* check if subvolume may be deleted by a non-root user */
		err = btrfs_may_delete(dir, dentry, 1);
		if (err)
			goto out_dput;
	}

	if (btrfs_ino(inode) != BTRFS_FIRST_FREE_OBJECTID) {
		err = -EINVAL;
		goto out_dput;
	}

	mutex_lock(&inode->i_mutex);
	err = d_invalidate(dentry);
	if (err)
		goto out_unlock;

	down_write(&root->fs_info->subvol_sem);

	err = may_destroy_subvol(dest);
	if (err)
		goto out_up_write;

	trans = btrfs_start_transaction(root, 0);
	if (IS_ERR(trans)) {
		err = PTR_ERR(trans);
		goto out_up_write;
	}
	trans->block_rsv = &root->fs_info->global_block_rsv;

	ret = btrfs_unlink_subvol(trans, root, dir,
				dest->root_key.objectid,
				dentry->d_name.name,
				dentry->d_name.len);
	BUG_ON(ret);

	btrfs_record_root_in_trans(trans, dest);

	memset(&dest->root_item.drop_progress, 0,
		sizeof(dest->root_item.drop_progress));
	dest->root_item.drop_level = 0;
	btrfs_set_root_refs(&dest->root_item, 0);

	if (!xchg(&dest->orphan_item_inserted, 1)) {
		ret = btrfs_insert_orphan_item(trans,
					root->fs_info->tree_root,
					dest->root_key.objectid);
		BUG_ON(ret);
	}

	ret = btrfs_end_transaction(trans, root);
	BUG_ON(ret);
	inode->i_flags |= S_DEAD;
out_up_write:
	up_write(&root->fs_info->subvol_sem);
out_unlock:
	mutex_unlock(&inode->i_mutex);
	if (!err) {
		shrink_dcache_sb(root->fs_info->sb);
		btrfs_invalidate_inodes(dest);
		d_delete(dentry);
	}
out_dput:
	dput(dentry);
out_unlock_dir:
	mutex_unlock(&dir->i_mutex);
	mnt_drop_write(file->f_path.mnt);
out:
	kfree(vol_args);
	return err;
}

static int btrfs_ioctl_defrag(struct file *file, void __user *argp)
{
	struct inode *inode = fdentry(file)->d_inode;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct btrfs_ioctl_defrag_range_args *range;
	int ret;

	if (btrfs_root_readonly(root))
		return -EROFS;

	ret = mnt_want_write(file->f_path.mnt);
	if (ret)
		return ret;

	switch (inode->i_mode & S_IFMT) {
	case S_IFDIR:
		if (!capable(CAP_SYS_ADMIN)) {
			ret = -EPERM;
			goto out;
		}
		ret = btrfs_defrag_root(root, 0);
		if (ret)
			goto out;
		ret = btrfs_defrag_root(root->fs_info->extent_root, 0);
		break;
	case S_IFREG:
		if (!(file->f_mode & FMODE_WRITE)) {
			ret = -EINVAL;
			goto out;
		}

		range = kzalloc(sizeof(*range), GFP_KERNEL);
		if (!range) {
			ret = -ENOMEM;
			goto out;
		}

		if (argp) {
			if (copy_from_user(range, argp,
					   sizeof(*range))) {
				ret = -EFAULT;
				kfree(range);
				goto out;
			}
			/* compression requires us to start the IO */
			if ((range->flags & BTRFS_DEFRAG_RANGE_COMPRESS)) {
				range->flags |= BTRFS_DEFRAG_RANGE_START_IO;
				range->extent_thresh = (u32)-1;
			}
		} else {
			/* the rest are all set to zero by kzalloc */
			range->len = (u64)-1;
		}
		ret = btrfs_defrag_file(fdentry(file)->d_inode, file,
					range, 0, 0);
		if (ret > 0)
			ret = 0;
		kfree(range);
		break;
	default:
		ret = -EINVAL;
	}
out:
	mnt_drop_write(file->f_path.mnt);
	return ret;
}

static long btrfs_ioctl_add_dev(struct btrfs_root *root, void __user *arg)
{
	struct btrfs_ioctl_vol_args *vol_args;
	int ret;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	vol_args = memdup_user(arg, sizeof(*vol_args));
	if (IS_ERR(vol_args))
		return PTR_ERR(vol_args);

	vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';
	ret = btrfs_init_new_device(root, vol_args->name);

	kfree(vol_args);
	return ret;
}

static long btrfs_ioctl_rm_dev(struct btrfs_root *root, void __user *arg)
{
	struct btrfs_ioctl_vol_args *vol_args;
	int ret;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	if (root->fs_info->sb->s_flags & MS_RDONLY)
		return -EROFS;

	vol_args = memdup_user(arg, sizeof(*vol_args));
	if (IS_ERR(vol_args))
		return PTR_ERR(vol_args);

	vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';
	ret = btrfs_rm_device(root, vol_args->name);

	kfree(vol_args);
	return ret;
}

static long btrfs_ioctl_fs_info(struct btrfs_root *root, void __user *arg)
{
	struct btrfs_ioctl_fs_info_args *fi_args;
	struct btrfs_device *device;
	struct btrfs_device *next;
	struct btrfs_fs_devices *fs_devices = root->fs_info->fs_devices;
	int ret = 0;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	fi_args = kzalloc(sizeof(*fi_args), GFP_KERNEL);
	if (!fi_args)
		return -ENOMEM;

	fi_args->num_devices = fs_devices->num_devices;
	memcpy(&fi_args->fsid, root->fs_info->fsid, sizeof(fi_args->fsid));

	mutex_lock(&fs_devices->device_list_mutex);
	list_for_each_entry_safe(device, next, &fs_devices->devices, dev_list) {
		if (device->devid > fi_args->max_id)
			fi_args->max_id = device->devid;
	}
	mutex_unlock(&fs_devices->device_list_mutex);

	if (copy_to_user(arg, fi_args, sizeof(*fi_args)))
		ret = -EFAULT;

	kfree(fi_args);
	return ret;
}

static long btrfs_ioctl_dev_info(struct btrfs_root *root, void __user *arg)
{
	struct btrfs_ioctl_dev_info_args *di_args;
	struct btrfs_device *dev;
	struct btrfs_fs_devices *fs_devices = root->fs_info->fs_devices;
	int ret = 0;
	char *s_uuid = NULL;
	char empty_uuid[BTRFS_UUID_SIZE] = {0};

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	di_args = memdup_user(arg, sizeof(*di_args));
	if (IS_ERR(di_args))
		return PTR_ERR(di_args);

	if (memcmp(empty_uuid, di_args->uuid, BTRFS_UUID_SIZE) != 0)
		s_uuid = di_args->uuid;

	mutex_lock(&fs_devices->device_list_mutex);
	dev = btrfs_find_device(root, di_args->devid, s_uuid, NULL);
	mutex_unlock(&fs_devices->device_list_mutex);

	if (!dev) {
		ret = -ENODEV;
		goto out;
	}

	di_args->devid = dev->devid;
	di_args->bytes_used = dev->bytes_used;
	di_args->total_bytes = dev->total_bytes;
	memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid));
	strncpy(di_args->path, dev->name, sizeof(di_args->path));

out:
	if (ret == 0 && copy_to_user(arg, di_args, sizeof(*di_args)))
		ret = -EFAULT;

	kfree(di_args);
	return ret;
}

static noinline long btrfs_ioctl_clone(struct file *file, unsigned long srcfd,
				       u64 off, u64 olen, u64 destoff)
{
	struct inode *inode = fdentry(file)->d_inode;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct file *src_file;
	struct inode *src;
	struct btrfs_trans_handle *trans;
	struct btrfs_path *path;
	struct extent_buffer *leaf;
	char *buf;
	struct btrfs_key key;
	u32 nritems;
	int slot;
	int ret;
	u64 len = olen;
	u64 bs = root->fs_info->sb->s_blocksize;
	u64 hint_byte;

	/*
	 * TODO:
	 * - split compressed inline extents.  annoying: we need to
	 *   decompress into destination's address_space (the file offset
	 *   may change, so source mapping won't do), then recompress (or
	 *   otherwise reinsert) a subrange.
	 * - allow ranges within the same file to be cloned (provided
	 *   they don't overlap)?
	 */

	/* the destination must be opened for writing */
	if (!(file->f_mode & FMODE_WRITE) || (file->f_flags & O_APPEND))
		return -EINVAL;

	if (btrfs_root_readonly(root))
		return -EROFS;

	ret = mnt_want_write(file->f_path.mnt);
	if (ret)
		return ret;

	src_file = fget(srcfd);
	if (!src_file) {
		ret = -EBADF;
		goto out_drop_write;
	}

	src = src_file->f_dentry->d_inode;

	ret = -EINVAL;
	if (src == inode)
		goto out_fput;

	/* the src must be open for reading */
	if (!(src_file->f_mode & FMODE_READ))
		goto out_fput;

	/* don't make the dst file partly checksummed */
	if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) !=
	    (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM))
		goto out_fput;

	ret = -EISDIR;
	if (S_ISDIR(src->i_mode) || S_ISDIR(inode->i_mode))
		goto out_fput;

	ret = -EXDEV;
	if (src->i_sb != inode->i_sb || BTRFS_I(src)->root != root)
		goto out_fput;

	ret = -ENOMEM;
	buf = vmalloc(btrfs_level_size(root, 0));
	if (!buf)
		goto out_fput;

	path = btrfs_alloc_path();
	if (!path) {
		vfree(buf);
		goto out_fput;
	}
	path->reada = 2;

	if (inode < src) {
		mutex_lock_nested(&inode->i_mutex, I_MUTEX_PARENT);
		mutex_lock_nested(&src->i_mutex, I_MUTEX_CHILD);
	} else {
		mutex_lock_nested(&src->i_mutex, I_MUTEX_PARENT);
		mutex_lock_nested(&inode->i_mutex, I_MUTEX_CHILD);
	}

	/* determine range to clone */
	ret = -EINVAL;
	if (off + len > src->i_size || off + len < off)
		goto out_unlock;
	if (len == 0)
		olen = len = src->i_size - off;
	/* if we extend to eof, continue to block boundary */
	if (off + len == src->i_size)
		len = ALIGN(src->i_size, bs) - off;

	/* verify the end result is block aligned */
	if (!IS_ALIGNED(off, bs) || !IS_ALIGNED(off + len, bs) ||
	    !IS_ALIGNED(destoff, bs))
		goto out_unlock;

	if (destoff > inode->i_size) {
		ret = btrfs_cont_expand(inode, inode->i_size, destoff);
		if (ret)
			goto out_unlock;
	}

	/* truncate page cache pages from target inode range */
	truncate_inode_pages_range(&inode->i_data, destoff,
				   PAGE_CACHE_ALIGN(destoff + len) - 1);

	/* do any pending delalloc/csum calc on src, one way or
	   another, and lock file content */
	while (1) {
		struct btrfs_ordered_extent *ordered;
		lock_extent(&BTRFS_I(src)->io_tree, off, off+len, GFP_NOFS);
		ordered = btrfs_lookup_first_ordered_extent(src, off+len);
		if (!ordered &&
		    !test_range_bit(&BTRFS_I(src)->io_tree, off, off+len,
				    EXTENT_DELALLOC, 0, NULL))
			break;
		unlock_extent(&BTRFS_I(src)->io_tree, off, off+len, GFP_NOFS);
		if (ordered)
			btrfs_put_ordered_extent(ordered);
		btrfs_wait_ordered_range(src, off, len);
	}

	/* clone data */
	key.objectid = btrfs_ino(src);
	key.type = BTRFS_EXTENT_DATA_KEY;
	key.offset = 0;

	while (1) {
		/*
		 * note the key will change type as we walk through the
		 * tree.
		 */
		ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
		if (ret < 0)
			goto out;

		nritems = btrfs_header_nritems(path->nodes[0]);
		if (path->slots[0] >= nritems) {
			ret = btrfs_next_leaf(root, path);
			if (ret < 0)
				goto out;
			if (ret > 0)
				break;
			nritems = btrfs_header_nritems(path->nodes[0]);
		}
		leaf = path->nodes[0];
		slot = path->slots[0];

		btrfs_item_key_to_cpu(leaf, &key, slot);
		if (btrfs_key_type(&key) > BTRFS_EXTENT_DATA_KEY ||
		    key.objectid != btrfs_ino(src))
			break;

		if (btrfs_key_type(&key) == BTRFS_EXTENT_DATA_KEY) {
			struct btrfs_file_extent_item *extent;
			int type;
			u32 size;
			struct btrfs_key new_key;
			u64 disko = 0, diskl = 0;
			u64 datao = 0, datal = 0;
			u8 comp;
			u64 endoff;

			size = btrfs_item_size_nr(leaf, slot);
			read_extent_buffer(leaf, buf,
					   btrfs_item_ptr_offset(leaf, slot),
					   size);

			extent = btrfs_item_ptr(leaf, slot,
						struct btrfs_file_extent_item);
			comp = btrfs_file_extent_compression(leaf, extent);
			type = btrfs_file_extent_type(leaf, extent);
			if (type == BTRFS_FILE_EXTENT_REG ||
			    type == BTRFS_FILE_EXTENT_PREALLOC) {
				disko = btrfs_file_extent_disk_bytenr(leaf,
								      extent);
				diskl = btrfs_file_extent_disk_num_bytes(leaf,
									 extent);
				datao = btrfs_file_extent_offset(leaf, extent);
				datal = btrfs_file_extent_num_bytes(leaf,
								    extent);
			} else if (type == BTRFS_FILE_EXTENT_INLINE) {
				/* take upper bound, may be compressed */
				datal = btrfs_file_extent_ram_bytes(leaf,
								    extent);
			}
			btrfs_release_path(path);

			if (key.offset + datal <= off ||
			    key.offset >= off+len)
				goto next;

			memcpy(&new_key, &key, sizeof(new_key));
			new_key.objectid = btrfs_ino(inode);
			if (off <= key.offset)
				new_key.offset = key.offset + destoff - off;
			else
				new_key.offset = destoff;

			/*
			 * 1 - adjusting old extent (we may have to split it)
			 * 1 - add new extent
			 * 1 - inode update
			 */
			trans = btrfs_start_transaction(root, 3);
			if (IS_ERR(trans)) {
				ret = PTR_ERR(trans);
				goto out;
			}

			if (type == BTRFS_FILE_EXTENT_REG ||
			    type == BTRFS_FILE_EXTENT_PREALLOC) {
				/*
				 *    a  | --- range to clone ---|  b
				 * | ------------- extent ------------- |
				 */

				/* subtract range b */
				if (key.offset + datal > off + len)
					datal = off + len - key.offset;

				/* subtract range a */
				if (off > key.offset) {
					datao += off - key.offset;
					datal -= off - key.offset;
				}

				ret = btrfs_drop_extents(trans, inode,
							 new_key.offset,
							 new_key.offset + datal,
							 &hint_byte, 1);
				BUG_ON(ret);

				ret = btrfs_insert_empty_item(trans, root, path,
							      &new_key, size);
				BUG_ON(ret);

				leaf = path->nodes[0];
				slot = path->slots[0];
				write_extent_buffer(leaf, buf,
					    btrfs_item_ptr_offset(leaf, slot),
					    size);

				extent = btrfs_item_ptr(leaf, slot,
						struct btrfs_file_extent_item);

				/* disko == 0 means it's a hole */
|
|
|
|
if (!disko)
|
|
|
|
datao = 0;
|
|
|
|
|
|
|
|
btrfs_set_file_extent_offset(leaf, extent,
|
|
|
|
datao);
|
|
|
|
btrfs_set_file_extent_num_bytes(leaf, extent,
|
|
|
|
datal);
|
|
|
|
if (disko) {
|
|
|
|
inode_add_bytes(inode, datal);
|
2008-08-05 10:23:47 +07:00
|
|
|
ret = btrfs_inc_extent_ref(trans, root,
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
disko, diskl, 0,
|
|
|
|
root->root_key.objectid,
|
2011-04-20 09:31:50 +07:00
|
|
|
btrfs_ino(inode),
|
Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)
This commit introduces a new kind of back reference for btrfs metadata.
Once a filesystem has been mounted with this commit, IT WILL NO LONGER
BE MOUNTABLE BY OLDER KERNELS.
When a tree block in subvolume tree is cow'd, the reference counts of all
extents it points to are increased by one. At transaction commit time,
the old root of the subvolume is recorded in a "dead root" data structure,
and the btree it points to is later walked, dropping reference counts
and freeing any blocks where the reference count goes to 0.
The increments done during cow and decrements done after commit cancel out,
and the walk is a very expensive way to go about freeing the blocks that
are no longer referenced by the new btree root. This commit reduces the
transaction overhead by avoiding the need for dead root records.
When a non-shared tree block is cow'd, we free the old block at once, and the
new block inherits old block's references. When a tree block with reference
count > 1 is cow'd, we increase the reference counts of all extents
the new block points to by one, and decrease the old block's reference count by
one.
This dead tree avoidance code removes the need to modify the reference
counts of lower level extents when a non-shared tree block is cow'd.
But we still need to update back ref for all pointers in the block.
This is because the location of the block is recorded in the back ref
item.
We can solve this by introducing a new type of back ref. The new
back ref provides information about pointer's key, level and in which
tree the pointer lives. This information allow us to find the pointer
by searching the tree. The shortcoming of the new back ref is that it
only works for pointers in tree blocks referenced by their owner trees.
This is mostly a problem for snapshots, where resolving one of these
fuzzy back references would be O(number_of_snapshots) and quite slow.
The solution used here is to use the fuzzy back references in the common
case where a given tree block is only referenced by one root,
and use the full back references when multiple roots have a reference
on a given block.
This commit adds per subvolume red-black tree to keep trace of cached
inodes. The red-black tree helps the balancing code to find cached
inodes whose inode numbers within a given range.
This commit improves the balancing code by introducing several data
structures to keep the state of balancing. The most important one
is the back ref cache. It caches how the upper level tree blocks are
referenced. This greatly reduce the overhead of checking back ref.
The improved balancing code scales significantly better with a large
number of snapshots.
This is a very large commit and was written in a number of
pieces. But, they depend heavily on the disk format change and were
squashed together to make sure git bisect didn't end up in a
bad state wrt space balancing or the format change.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-06-10 21:45:14 +07:00
|
|
|
new_key.offset - datao);
|
2008-09-24 00:14:14 +07:00
|
|
|
BUG_ON(ret);
|
2008-06-12 08:53:53 +07:00
|
|
|
}
|
2008-11-13 02:32:25 +07:00
|
|
|
} else if (type == BTRFS_FILE_EXTENT_INLINE) {
|
|
|
|
u64 skip = 0;
|
|
|
|
u64 trim = 0;
|
|
|
|
if (off > key.offset) {
|
|
|
|
skip = off - key.offset;
|
|
|
|
new_key.offset += skip;
|
|
|
|
}
|
2009-01-06 09:25:51 +07:00
|
|
|
|
2008-11-13 02:32:25 +07:00
|
|
|
if (key.offset + datal > off+len)
|
|
|
|
trim = key.offset + datal - (off+len);
|
2009-01-06 09:25:51 +07:00
|
|
|
|
2008-11-13 02:32:25 +07:00
|
|
|
if (comp && (skip || trim)) {
|
|
|
|
ret = -EINVAL;
|
2010-05-16 21:48:46 +07:00
|
|
|
btrfs_end_transaction(trans, root);
|
2008-11-13 02:32:25 +07:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
size -= skip + trim;
|
|
|
|
datal -= skip + trim;
|
2010-05-16 21:48:46 +07:00
|
|
|
|
|
|
|
ret = btrfs_drop_extents(trans, inode,
|
|
|
|
new_key.offset,
|
|
|
|
new_key.offset + datal,
|
|
|
|
&hint_byte, 1);
|
|
|
|
BUG_ON(ret);
|
|
|
|
|
2008-11-13 02:32:25 +07:00
|
|
|
ret = btrfs_insert_empty_item(trans, root, path,
|
|
|
|
&new_key, size);
|
2010-05-16 21:48:46 +07:00
|
|
|
BUG_ON(ret);
|
2008-11-13 02:32:25 +07:00
|
|
|
|
|
|
|
if (skip) {
|
2009-01-06 09:25:51 +07:00
|
|
|
u32 start =
|
|
|
|
btrfs_file_extent_calc_inline_size(0);
|
2008-11-13 02:32:25 +07:00
|
|
|
memmove(buf+start, buf+start+skip,
|
|
|
|
datal);
|
|
|
|
}
|
|
|
|
|
|
|
|
leaf = path->nodes[0];
|
|
|
|
slot = path->slots[0];
|
|
|
|
write_extent_buffer(leaf, buf,
|
|
|
|
btrfs_item_ptr_offset(leaf, slot),
|
|
|
|
size);
|
|
|
|
inode_add_bytes(inode, datal);
|
2008-06-12 08:53:53 +07:00
|
|
|
}
|
2008-11-13 02:32:25 +07:00
|
|
|
|
|
|
|
btrfs_mark_buffer_dirty(leaf);
|
2011-04-21 06:20:15 +07:00
|
|
|
btrfs_release_path(path);
|
2008-11-13 02:32:25 +07:00
|
|
|
|
2010-05-16 21:48:46 +07:00
|
|
|
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
|
2010-06-13 05:31:14 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* we round up to the block size at eof when
|
|
|
|
* determining which extents to clone above,
|
|
|
|
* but shouldn't round up the file size
|
|
|
|
*/
|
|
|
|
endoff = new_key.offset + datal;
|
2010-11-19 08:36:34 +07:00
|
|
|
if (endoff > destoff+olen)
|
|
|
|
endoff = destoff+olen;
|
2010-06-13 05:31:14 +07:00
|
|
|
if (endoff > inode->i_size)
|
|
|
|
btrfs_i_size_write(inode, endoff);
|
|
|
|
|
2010-05-16 21:48:46 +07:00
|
|
|
ret = btrfs_update_inode(trans, root, inode);
|
|
|
|
BUG_ON(ret);
|
|
|
|
btrfs_end_transaction(trans, root);
|
|
|
|
}
|
2009-01-06 09:25:51 +07:00
|
|
|
next:
|
2011-04-21 06:20:15 +07:00
|
|
|
btrfs_release_path(path);
|
2008-06-12 08:53:53 +07:00
|
|
|
key.offset++;
|
|
|
|
}
|
|
|
|
ret = 0;
|
|
|
|
out:
|
2011-04-21 06:20:15 +07:00
|
|
|
btrfs_release_path(path);
|
2008-11-13 02:32:25 +07:00
|
|
|
unlock_extent(&BTRFS_I(src)->io_tree, off, off+len, GFP_NOFS);
|
2008-06-12 08:53:53 +07:00
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&src->i_mutex);
|
|
|
|
mutex_unlock(&inode->i_mutex);
|
2008-08-05 10:23:47 +07:00
|
|
|
vfree(buf);
|
|
|
|
btrfs_free_path(path);
|
2008-06-12 08:53:53 +07:00
|
|
|
out_fput:
|
|
|
|
fput(src_file);
|
2008-12-19 22:58:39 +07:00
|
|
|
out_drop_write:
|
|
|
|
mnt_drop_write(file->f_path.mnt);
|
2008-06-12 08:53:53 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
static long btrfs_ioctl_clone_range(struct file *file, void __user *argp)
{
	struct btrfs_ioctl_clone_range_args args;

	if (copy_from_user(&args, argp, sizeof(args)))
		return -EFAULT;
	return btrfs_ioctl_clone(file, args.src_fd, args.src_offset,
				 args.src_length, args.dest_offset);
}

/*
 * there are many ways the trans_start and trans_end ioctls can lead
 * to deadlocks.  They should only be used by applications that
 * basically own the machine, and have a very in depth understanding
 * of all the possible deadlocks and enospc problems.
 */
static long btrfs_ioctl_trans_start(struct file *file)
{
	struct inode *inode = fdentry(file)->d_inode;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct btrfs_trans_handle *trans;
	int ret;

	ret = -EPERM;
	if (!capable(CAP_SYS_ADMIN))
		goto out;

	ret = -EINPROGRESS;
	if (file->private_data)
		goto out;

	ret = -EROFS;
	if (btrfs_root_readonly(root))
		goto out;

	ret = mnt_want_write(file->f_path.mnt);
	if (ret)
		goto out;

	atomic_inc(&root->fs_info->open_ioctl_trans);

	ret = -ENOMEM;
	trans = btrfs_start_ioctl_transaction(root);
	if (IS_ERR(trans))
		goto out_drop;

	file->private_data = trans;
	return 0;

out_drop:
	atomic_dec(&root->fs_info->open_ioctl_trans);
	mnt_drop_write(file->f_path.mnt);
out:
	return ret;
}

static long btrfs_ioctl_default_subvol(struct file *file, void __user *argp)
{
	struct inode *inode = fdentry(file)->d_inode;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct btrfs_root *new_root;
	struct btrfs_dir_item *di;
	struct btrfs_trans_handle *trans;
	struct btrfs_path *path;
	struct btrfs_key location;
	struct btrfs_disk_key disk_key;
	struct btrfs_super_block *disk_super;
	u64 features;
	u64 objectid = 0;
	u64 dir_id;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	if (copy_from_user(&objectid, argp, sizeof(objectid)))
		return -EFAULT;

	if (!objectid)
		objectid = root->root_key.objectid;

	location.objectid = objectid;
	location.type = BTRFS_ROOT_ITEM_KEY;
	location.offset = (u64)-1;

	new_root = btrfs_read_fs_root_no_name(root->fs_info, &location);
	if (IS_ERR(new_root))
		return PTR_ERR(new_root);

	if (btrfs_root_refs(&new_root->root_item) == 0)
		return -ENOENT;

	path = btrfs_alloc_path();
	if (!path)
		return -ENOMEM;
	path->leave_spinning = 1;

	trans = btrfs_start_transaction(root, 1);
	if (IS_ERR(trans)) {
		btrfs_free_path(path);
		return PTR_ERR(trans);
	}

	dir_id = btrfs_super_root_dir(root->fs_info->super_copy);
	di = btrfs_lookup_dir_item(trans, root->fs_info->tree_root, path,
				   dir_id, "default", 7, 1);
	if (IS_ERR_OR_NULL(di)) {
		btrfs_free_path(path);
		btrfs_end_transaction(trans, root);
		printk(KERN_ERR "Umm, you don't have the default dir item, "
		       "this isn't going to work\n");
		return -ENOENT;
	}

	btrfs_cpu_key_to_disk(&disk_key, &new_root->root_key);
	btrfs_set_dir_item_key(path->nodes[0], di, &disk_key);
	btrfs_mark_buffer_dirty(path->nodes[0]);
	btrfs_free_path(path);

	disk_super = root->fs_info->super_copy;
	features = btrfs_super_incompat_flags(disk_super);
	if (!(features & BTRFS_FEATURE_INCOMPAT_DEFAULT_SUBVOL)) {
		features |= BTRFS_FEATURE_INCOMPAT_DEFAULT_SUBVOL;
		btrfs_set_super_incompat_flags(disk_super, features);
	}
	btrfs_end_transaction(trans, root);

	return 0;
}

static void get_block_group_info(struct list_head *groups_list,
				 struct btrfs_ioctl_space_info *space)
{
	struct btrfs_block_group_cache *block_group;

	space->total_bytes = 0;
	space->used_bytes = 0;
	space->flags = 0;
	list_for_each_entry(block_group, groups_list, list) {
		space->flags = block_group->flags;
		space->total_bytes += block_group->key.offset;
		space->used_bytes +=
			btrfs_block_group_used(&block_group->item);
	}
}

long btrfs_ioctl_space_info(struct btrfs_root *root, void __user *arg)
{
	struct btrfs_ioctl_space_args space_args;
	struct btrfs_ioctl_space_info space;
	struct btrfs_ioctl_space_info *dest;
	struct btrfs_ioctl_space_info *dest_orig;
	struct btrfs_ioctl_space_info __user *user_dest;
	struct btrfs_space_info *info;
	u64 types[] = {BTRFS_BLOCK_GROUP_DATA,
		       BTRFS_BLOCK_GROUP_SYSTEM,
		       BTRFS_BLOCK_GROUP_METADATA,
		       BTRFS_BLOCK_GROUP_DATA | BTRFS_BLOCK_GROUP_METADATA};
	int num_types = 4;
	int alloc_size;
	int ret = 0;
	u64 slot_count = 0;
	int i, c;

	if (copy_from_user(&space_args,
			   (struct btrfs_ioctl_space_args __user *)arg,
			   sizeof(space_args)))
		return -EFAULT;

	for (i = 0; i < num_types; i++) {
		struct btrfs_space_info *tmp;

		info = NULL;
		rcu_read_lock();
		list_for_each_entry_rcu(tmp, &root->fs_info->space_info,
					list) {
			if (tmp->flags == types[i]) {
				info = tmp;
				break;
			}
		}
		rcu_read_unlock();

		if (!info)
			continue;

		down_read(&info->groups_sem);
		for (c = 0; c < BTRFS_NR_RAID_TYPES; c++) {
			if (!list_empty(&info->block_groups[c]))
				slot_count++;
		}
		up_read(&info->groups_sem);
	}

	/* space_slots == 0 means they are asking for a count */
	if (space_args.space_slots == 0) {
		space_args.total_spaces = slot_count;
		goto out;
	}

	slot_count = min_t(u64, space_args.space_slots, slot_count);

	alloc_size = sizeof(*dest) * slot_count;

	/* we generally have at most 6 or so space infos, one for each raid
	 * level.  So, a whole page should be more than enough for everyone
	 */
	if (alloc_size > PAGE_CACHE_SIZE)
		return -ENOMEM;

	space_args.total_spaces = 0;
	dest = kmalloc(alloc_size, GFP_NOFS);
	if (!dest)
		return -ENOMEM;
	dest_orig = dest;

	/* now we have a buffer to copy into */
	for (i = 0; i < num_types; i++) {
		struct btrfs_space_info *tmp;

		if (!slot_count)
			break;

		info = NULL;
		rcu_read_lock();
		list_for_each_entry_rcu(tmp, &root->fs_info->space_info,
					list) {
			if (tmp->flags == types[i]) {
				info = tmp;
				break;
			}
		}
		rcu_read_unlock();

		if (!info)
			continue;
		down_read(&info->groups_sem);
		for (c = 0; c < BTRFS_NR_RAID_TYPES; c++) {
			if (!list_empty(&info->block_groups[c])) {
				get_block_group_info(&info->block_groups[c],
						     &space);
				memcpy(dest, &space, sizeof(space));
				dest++;
				space_args.total_spaces++;
				slot_count--;
			}
			if (!slot_count)
				break;
		}
		up_read(&info->groups_sem);
	}

	user_dest = (struct btrfs_ioctl_space_info *)
		(arg + sizeof(struct btrfs_ioctl_space_args));

	if (copy_to_user(user_dest, dest_orig, alloc_size))
		ret = -EFAULT;

	kfree(dest_orig);
out:
	if (ret == 0 && copy_to_user(arg, &space_args, sizeof(space_args)))
		ret = -EFAULT;

	return ret;
}

/*
 * there are many ways the trans_start and trans_end ioctls can lead
 * to deadlocks.  They should only be used by applications that
 * basically own the machine, and have a very in depth understanding
 * of all the possible deadlocks and enospc problems.
 */
long btrfs_ioctl_trans_end(struct file *file)
{
	struct inode *inode = fdentry(file)->d_inode;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct btrfs_trans_handle *trans;

	trans = file->private_data;
	if (!trans)
		return -EINVAL;
	file->private_data = NULL;

	btrfs_end_transaction(trans, root);

	atomic_dec(&root->fs_info->open_ioctl_trans);

	mnt_drop_write(file->f_path.mnt);
	return 0;
}

static noinline long btrfs_ioctl_start_sync(struct file *file, void __user *argp)
{
	struct btrfs_root *root = BTRFS_I(file->f_dentry->d_inode)->root;
	struct btrfs_trans_handle *trans;
	u64 transid;
	int ret;

	trans = btrfs_start_transaction(root, 0);
	if (IS_ERR(trans))
		return PTR_ERR(trans);
	transid = trans->transid;
	ret = btrfs_commit_transaction_async(trans, root, 0);
	if (ret) {
		btrfs_end_transaction(trans, root);
		return ret;
	}

	if (argp)
		if (copy_to_user(argp, &transid, sizeof(transid)))
			return -EFAULT;
	return 0;
}

static noinline long btrfs_ioctl_wait_sync(struct file *file, void __user *argp)
{
	struct btrfs_root *root = BTRFS_I(file->f_dentry->d_inode)->root;
	u64 transid;

	if (argp) {
		if (copy_from_user(&transid, argp, sizeof(transid)))
			return -EFAULT;
	} else {
		transid = 0;  /* current trans */
	}
	return btrfs_wait_for_commit(root, transid);
}

static long btrfs_ioctl_scrub(struct btrfs_root *root, void __user *arg)
{
	int ret;
	struct btrfs_ioctl_scrub_args *sa;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	sa = memdup_user(arg, sizeof(*sa));
	if (IS_ERR(sa))
		return PTR_ERR(sa);

	ret = btrfs_scrub_dev(root, sa->devid, sa->start, sa->end,
			      &sa->progress, sa->flags & BTRFS_SCRUB_READONLY);

	if (copy_to_user(arg, sa, sizeof(*sa)))
		ret = -EFAULT;

	kfree(sa);
	return ret;
}

static long btrfs_ioctl_scrub_cancel(struct btrfs_root *root, void __user *arg)
{
	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	return btrfs_scrub_cancel(root);
}

static long btrfs_ioctl_scrub_progress(struct btrfs_root *root,
				       void __user *arg)
{
	struct btrfs_ioctl_scrub_args *sa;
	int ret;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	sa = memdup_user(arg, sizeof(*sa));
	if (IS_ERR(sa))
		return PTR_ERR(sa);

	ret = btrfs_scrub_progress(root, sa->devid, &sa->progress);

	if (copy_to_user(arg, sa, sizeof(*sa)))
		ret = -EFAULT;

	kfree(sa);
	return ret;
}

2011-07-07 21:48:38 +07:00
|
|
|
static long btrfs_ioctl_ino_to_path(struct btrfs_root *root, void __user *arg)
{
	int ret = 0;
	int i;
	u64 rel_ptr;
	int size;
	struct btrfs_ioctl_ino_path_args *ipa = NULL;
	struct inode_fs_paths *ipath = NULL;
	struct btrfs_path *path;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	path = btrfs_alloc_path();
	if (!path) {
		ret = -ENOMEM;
		goto out;
	}

	ipa = memdup_user(arg, sizeof(*ipa));
	if (IS_ERR(ipa)) {
		ret = PTR_ERR(ipa);
		ipa = NULL;
		goto out;
	}

	size = min_t(u32, ipa->size, 4096);
	ipath = init_ipath(size, root, path);
	if (IS_ERR(ipath)) {
		ret = PTR_ERR(ipath);
		ipath = NULL;
		goto out;
	}

	ret = paths_from_inode(ipa->inum, ipath);
	if (ret < 0)
		goto out;

	for (i = 0; i < ipath->fspath->elem_cnt; ++i) {
		rel_ptr = ipath->fspath->val[i] -
			  (u64)(unsigned long)ipath->fspath->val;
		ipath->fspath->val[i] = rel_ptr;
	}

	ret = copy_to_user((void *)(unsigned long)ipa->fspath,
			   (void *)(unsigned long)ipath->fspath, size);
	if (ret) {
		ret = -EFAULT;
		goto out;
	}

out:
	btrfs_free_path(path);
	free_ipath(ipath);
	kfree(ipa);

	return ret;
}

static int build_ino_list(u64 inum, u64 offset, u64 root, void *ctx)
{
	struct btrfs_data_container *inodes = ctx;
	const size_t c = 3 * sizeof(u64);

	if (inodes->bytes_left >= c) {
		inodes->bytes_left -= c;
		inodes->val[inodes->elem_cnt] = inum;
		inodes->val[inodes->elem_cnt + 1] = offset;
		inodes->val[inodes->elem_cnt + 2] = root;
		inodes->elem_cnt += 3;
	} else {
		inodes->bytes_missing += c - inodes->bytes_left;
		inodes->bytes_left = 0;
		inodes->elem_missed += 3;
	}

	return 0;
}

static long btrfs_ioctl_logical_to_ino(struct btrfs_root *root,
				       void __user *arg)
{
	int ret = 0;
	int size;
	u64 extent_offset;
	struct btrfs_ioctl_logical_ino_args *loi;
	struct btrfs_data_container *inodes = NULL;
	struct btrfs_path *path = NULL;
	struct btrfs_key key;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	loi = memdup_user(arg, sizeof(*loi));
	if (IS_ERR(loi)) {
		ret = PTR_ERR(loi);
		loi = NULL;
		goto out;
	}

	path = btrfs_alloc_path();
	if (!path) {
		ret = -ENOMEM;
		goto out;
	}

	size = min_t(u32, loi->size, 4096);
	inodes = init_data_container(size);
	if (IS_ERR(inodes)) {
		ret = PTR_ERR(inodes);
		inodes = NULL;
		goto out;
	}

	ret = extent_from_logical(root->fs_info, loi->logical, path, &key);

	if (ret & BTRFS_EXTENT_FLAG_TREE_BLOCK)
		ret = -ENOENT;
	if (ret < 0)
		goto out;

	extent_offset = loi->logical - key.objectid;
	ret = iterate_extent_inodes(root->fs_info, path, key.objectid,
				    extent_offset, build_ino_list, inodes);

	if (ret < 0)
		goto out;

	ret = copy_to_user((void *)(unsigned long)loi->inodes,
			   (void *)(unsigned long)inodes, size);
	if (ret)
		ret = -EFAULT;

out:
	btrfs_free_path(path);
	kfree(inodes);
	kfree(loi);

	return ret;
}

long btrfs_ioctl(struct file *file, unsigned int
		cmd, unsigned long arg)
{
	struct btrfs_root *root = BTRFS_I(fdentry(file)->d_inode)->root;
	void __user *argp = (void __user *)arg;

	switch (cmd) {
	case FS_IOC_GETFLAGS:
		return btrfs_ioctl_getflags(file, argp);
	case FS_IOC_SETFLAGS:
		return btrfs_ioctl_setflags(file, argp);
	case FS_IOC_GETVERSION:
		return btrfs_ioctl_getversion(file, argp);
	case FITRIM:
		return btrfs_ioctl_fitrim(file, argp);
	case BTRFS_IOC_SNAP_CREATE:
		return btrfs_ioctl_snap_create(file, argp, 0);
	case BTRFS_IOC_SNAP_CREATE_V2:
		return btrfs_ioctl_snap_create_v2(file, argp, 0);
	case BTRFS_IOC_SUBVOL_CREATE:
		return btrfs_ioctl_snap_create(file, argp, 1);
	case BTRFS_IOC_SNAP_DESTROY:
		return btrfs_ioctl_snap_destroy(file, argp);
	case BTRFS_IOC_SUBVOL_GETFLAGS:
		return btrfs_ioctl_subvol_getflags(file, argp);
	case BTRFS_IOC_SUBVOL_SETFLAGS:
		return btrfs_ioctl_subvol_setflags(file, argp);
	case BTRFS_IOC_DEFAULT_SUBVOL:
		return btrfs_ioctl_default_subvol(file, argp);
	case BTRFS_IOC_DEFRAG:
		return btrfs_ioctl_defrag(file, NULL);
	case BTRFS_IOC_DEFRAG_RANGE:
		return btrfs_ioctl_defrag(file, argp);
	case BTRFS_IOC_RESIZE:
		return btrfs_ioctl_resize(root, argp);
	case BTRFS_IOC_ADD_DEV:
		return btrfs_ioctl_add_dev(root, argp);
	case BTRFS_IOC_RM_DEV:
		return btrfs_ioctl_rm_dev(root, argp);
	case BTRFS_IOC_FS_INFO:
		return btrfs_ioctl_fs_info(root, argp);
	case BTRFS_IOC_DEV_INFO:
		return btrfs_ioctl_dev_info(root, argp);
	case BTRFS_IOC_BALANCE:
		return btrfs_balance(root->fs_info->dev_root);
	case BTRFS_IOC_CLONE:
		return btrfs_ioctl_clone(file, arg, 0, 0, 0);
	case BTRFS_IOC_CLONE_RANGE:
		return btrfs_ioctl_clone_range(file, argp);
	case BTRFS_IOC_TRANS_START:
		return btrfs_ioctl_trans_start(file);
	case BTRFS_IOC_TRANS_END:
		return btrfs_ioctl_trans_end(file);
	case BTRFS_IOC_TREE_SEARCH:
		return btrfs_ioctl_tree_search(file, argp);
	case BTRFS_IOC_INO_LOOKUP:
		return btrfs_ioctl_ino_lookup(file, argp);
	case BTRFS_IOC_INO_PATHS:
		return btrfs_ioctl_ino_to_path(root, argp);
	case BTRFS_IOC_LOGICAL_INO:
		return btrfs_ioctl_logical_to_ino(root, argp);
	case BTRFS_IOC_SPACE_INFO:
		return btrfs_ioctl_space_info(root, argp);
	case BTRFS_IOC_SYNC:
		btrfs_sync_fs(file->f_dentry->d_sb, 1);
		return 0;
	/*
	 * Btrfs: add START_SYNC, WAIT_SYNC ioctls
	 *
	 * START_SYNC starts a sync/commit but does not wait for it to
	 * complete.  Any modification started after the ioctl returns is
	 * guaranteed not to be included in the commit.  If a non-NULL
	 * pointer is passed, the transaction id is returned to userspace.
	 *
	 * WAIT_SYNC waits for any in-progress commit to complete.  If a
	 * transaction id is specified, the ioctl blocks and then returns
	 * (success) when the specified transaction has committed.  If it
	 * has already committed when we call the ioctl, it returns
	 * immediately.  If the specified transaction doesn't exist, it
	 * returns EINVAL.
	 *
	 * If no transaction id is specified, WAIT_SYNC waits for the
	 * currently committing transaction to finish its commit to disk.
	 * If there is no currently committing transaction, it returns
	 * success.
	 *
	 * These ioctls are useful for applications which want to impose
	 * an ordering on when fs modifications reach disk, but do not
	 * want to wait for the full (slow) commit process to do so.
	 *
	 * Picky callers can take the transid returned by START_SYNC and
	 * feed it to WAIT_SYNC, and be certain to wait only as long as
	 * necessary for the transaction _they_ started to reach disk.
	 *
	 * Sloppy callers can START_SYNC and WAIT_SYNC without a transid,
	 * and provided they didn't wait too long between the calls, they
	 * will get the same result.  However, if a second commit starts
	 * before they call WAIT_SYNC, they may end up waiting longer for
	 * it to commit as well.  Even so, START_SYNC+WAIT_SYNC still
	 * guarantees that any operation completed before the START_SYNC
	 * reaches disk.
	 *
	 * Signed-off-by: Sage Weil <sage@newdream.net>
	 * Signed-off-by: Chris Mason <chris.mason@oracle.com>
	 */
	case BTRFS_IOC_START_SYNC:
		return btrfs_ioctl_start_sync(file, argp);
	case BTRFS_IOC_WAIT_SYNC:
		return btrfs_ioctl_wait_sync(file, argp);
	case BTRFS_IOC_SCRUB:
		return btrfs_ioctl_scrub(root, argp);
	case BTRFS_IOC_SCRUB_CANCEL:
		return btrfs_ioctl_scrub_cancel(root, argp);
	case BTRFS_IOC_SCRUB_PROGRESS:
		return btrfs_ioctl_scrub_progress(root, argp);
	}

	return -ENOTTY;
}
|