/*
   md.c : Multiple Devices driver for Linux
	  Copyright (C) 1998, 1999, 2000 Ingo Molnar

     completely rewritten, based on the MD driver code from Marc Zyngier

   Changes:

   - RAID-1/RAID-5 extensions by Miguel de Icaza, Gadi Oxman, Ingo Molnar
   - RAID-6 extensions by H. Peter Anvin <hpa@zytor.com>
   - boot support for linear and striped mode by Harald Hoyer <HarryH@Royal.Net>
   - kerneld support by Boris Tobotras <boris@xtalk.msk.su>
   - kmod support by: Cyrus Durgin
   - RAID0 bugfixes: Mark Anthony Lisher <markal@iname.com>
   - Devfs support by Richard Gooch <rgooch@atnf.csiro.au>

   - lots of fixes and improvements to the RAID1/RAID5 and generic
     RAID code (such as request based resynchronization):

     Neil Brown <neilb@cse.unsw.edu.au>.

   - persistent bitmap code
     Copyright (C) 2003-2004, Paul Clements, SteelEye Technology, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2, or (at your option)
   any later version.

   You should have received a copy of the GNU General Public License
   (for example /usr/src/linux/COPYING); if not, write to the Free
   Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/

#include <linux/kthread.h>
#include <linux/blkdev.h>
#include <linux/badblocks.h>
#include <linux/sysctl.h>
#include <linux/seq_file.h>
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/ctype.h>
#include <linux/string.h>
#include <linux/hdreg.h>
#include <linux/proc_fs.h>
#include <linux/random.h>
#include <linux/module.h>
#include <linux/reboot.h>
#include <linux/file.h>
#include <linux/compat.h>
#include <linux/delay.h>
#include <linux/raid/md_p.h>
#include <linux/raid/md_u.h>
#include <linux/slab.h>
#include "md.h"
#include "bitmap.h"
#include "md-cluster.h"

#ifndef MODULE
static void autostart_arrays(int part);
#endif

/* pers_list is a list of registered personalities protected
 * by pers_lock.
 * pers_lock does extra service to protect accesses to
 * mddev->thread when the mutex cannot be held.
 */
static LIST_HEAD(pers_list);
static DEFINE_SPINLOCK(pers_lock);

struct md_cluster_operations *md_cluster_ops;
EXPORT_SYMBOL(md_cluster_ops);
struct module *md_cluster_mod;
EXPORT_SYMBOL(md_cluster_mod);

static DECLARE_WAIT_QUEUE_HEAD(resync_wait);
static struct workqueue_struct *md_wq;
static struct workqueue_struct *md_misc_wq;

static int remove_and_add_spares(struct mddev *mddev,
				 struct md_rdev *this);
static void mddev_detach(struct mddev *mddev);

/*
 * Default number of read corrections we'll attempt on an rdev
 * before ejecting it from the array. We divide the read error
 * count by 2 for every hour elapsed between read errors.
 */
#define MD_DEFAULT_MAX_CORRECTED_READ_ERRORS 20
/*
 * Current RAID-1,4,5 parallel reconstruction 'guaranteed speed limit'
 * is 1000 KB/sec, so the extra system load does not show up that much.
 * Increase it if you want to have more _guaranteed_ speed. Note that
 * the RAID driver will use the maximum available bandwidth if the IO
 * subsystem is idle. There is also an 'absolute maximum' reconstruction
 * speed limit - in case reconstruction slows down your system despite
 * idle IO detection.
 *
 * you can change it via /proc/sys/dev/raid/speed_limit_min and _max.
 * or /sys/block/mdX/md/sync_speed_{min,max}
 */

static int sysctl_speed_limit_min = 1000;
static int sysctl_speed_limit_max = 200000;
static inline int speed_min(struct mddev *mddev)
{
	return mddev->sync_speed_min ?
		mddev->sync_speed_min : sysctl_speed_limit_min;
}

static inline int speed_max(struct mddev *mddev)
{
	return mddev->sync_speed_max ?
		mddev->sync_speed_max : sysctl_speed_limit_max;
}

static struct ctl_table_header *raid_table_header;

static struct ctl_table raid_table[] = {
	{
		.procname	= "speed_limit_min",
		.data		= &sysctl_speed_limit_min,
		.maxlen		= sizeof(int),
		.mode		= S_IRUGO|S_IWUSR,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "speed_limit_max",
		.data		= &sysctl_speed_limit_max,
		.maxlen		= sizeof(int),
		.mode		= S_IRUGO|S_IWUSR,
		.proc_handler	= proc_dointvec,
	},
	{ }
};

static struct ctl_table raid_dir_table[] = {
	{
		.procname	= "raid",
		.maxlen		= 0,
		.mode		= S_IRUGO|S_IXUGO,
		.child		= raid_table,
	},
	{ }
};

static struct ctl_table raid_root_table[] = {
	{
		.procname	= "dev",
		.maxlen		= 0,
		.mode		= 0555,
		.child		= raid_dir_table,
	},
	{ }
};

static const struct block_device_operations md_fops;

static int start_readonly;

/* bio_alloc_mddev and bio_clone_mddev:
 * like bio_alloc/bio_clone, but with a local bio set
 */

struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,
			    struct mddev *mddev)
{
	struct bio *b;

	if (!mddev || !mddev->bio_set)
		return bio_alloc(gfp_mask, nr_iovecs);

	b = bio_alloc_bioset(gfp_mask, nr_iovecs, mddev->bio_set);
	if (!b)
		return NULL;
	return b;
}
EXPORT_SYMBOL_GPL(bio_alloc_mddev);

struct bio *bio_clone_mddev(struct bio *bio, gfp_t gfp_mask,
			    struct mddev *mddev)
{
	if (!mddev || !mddev->bio_set)
		return bio_clone(bio, gfp_mask);

	return bio_clone_bioset(bio, gfp_mask, mddev->bio_set);
}
EXPORT_SYMBOL_GPL(bio_clone_mddev);

/*
 * We have a system wide 'event count' that is incremented
 * on any 'interesting' event, and readers of /proc/mdstat
 * can use 'poll' or 'select' to find out when the event
 * count increases.
 *
 * Events are:
 *  start array, stop array, error, add device, remove device,
 *  start build, activate spare
 */
static DECLARE_WAIT_QUEUE_HEAD(md_event_waiters);
static atomic_t md_event_count;
void md_new_event(struct mddev *mddev)
{
	atomic_inc(&md_event_count);
	wake_up(&md_event_waiters);
}
EXPORT_SYMBOL_GPL(md_new_event);

/*
 * Allows iterating over all existing md arrays.
 * all_mddevs_lock protects this list.
 */
static LIST_HEAD(all_mddevs);
static DEFINE_SPINLOCK(all_mddevs_lock);

/*
 * iterates through all used mddevs in the system.
 * We take care to grab the all_mddevs_lock whenever navigating
 * the list, and to always hold a refcount when unlocked.
 * Any code which breaks out of this loop while owning
 * a reference to the current mddev must mddev_put it.
 */
#define for_each_mddev(_mddev,_tmp)					\
									\
	for (({ spin_lock(&all_mddevs_lock);				\
		_tmp = all_mddevs.next;					\
		_mddev = NULL;});					\
	     ({ if (_tmp != &all_mddevs)				\
			mddev_get(list_entry(_tmp, struct mddev, all_mddevs));\
		spin_unlock(&all_mddevs_lock);				\
		if (_mddev) mddev_put(_mddev);				\
		_mddev = list_entry(_tmp, struct mddev, all_mddevs);	\
		_tmp != &all_mddevs;});					\
	     ({ spin_lock(&all_mddevs_lock);				\
		_tmp = _tmp->next;})					\
		)

2009-03-31 10:39:39 +07:00
|
|
|
/* Rather than calling directly into the personality make_request function,
|
|
|
|
* IO requests come here first so that we can check if the device is
|
|
|
|
* being suspended pending a reconfiguration.
|
|
|
|
* We hold a refcount over the call to ->make_request. By the time that
|
|
|
|
* call has finished, the bio has been linked into some internal structure
|
|
|
|
* and so is visible to ->quiesce(), so we don't need the refcount any more.
|
|
|
|
*/
|
2015-11-06 00:41:16 +07:00
|
|
|
static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2010-03-25 12:20:56 +07:00
|
|
|
const int rw = bio_data_dir(bio);
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = q->queuedata;
|
2011-02-08 07:21:48 +07:00
|
|
|
unsigned int sectors;
|
2015-04-03 07:44:47 +07:00
|
|
|
int cpu;
|
2010-03-25 12:20:56 +07:00
|
|
|
|
2015-04-24 12:37:18 +07:00
|
|
|
blk_queue_split(q, &bio, q->bio_split);
|
|
|
|
|
2016-01-04 12:16:58 +07:00
|
|
|
if (mddev == NULL || mddev->pers == NULL) {
|
2009-03-31 10:39:39 +07:00
|
|
|
bio_io_error(bio);
|
2015-11-06 00:41:16 +07:00
|
|
|
return BLK_QC_T_NONE;
|
2009-03-31 10:39:39 +07:00
|
|
|
}
|
2013-02-21 09:28:09 +07:00
|
|
|
if (mddev->ro == 1 && unlikely(rw == WRITE)) {
|
2015-07-20 20:29:37 +07:00
|
|
|
if (bio_sectors(bio) != 0)
|
|
|
|
bio->bi_error = -EROFS;
|
|
|
|
bio_endio(bio);
|
2015-11-06 00:41:16 +07:00
|
|
|
return BLK_QC_T_NONE;
|
2013-02-21 09:28:09 +07:00
|
|
|
}
|
2011-01-14 05:14:33 +07:00
|
|
|
smp_rmb(); /* Ensure implications of 'active' are visible */
|
2009-03-31 10:39:39 +07:00
|
|
|
rcu_read_lock();
|
2010-09-03 16:56:18 +07:00
|
|
|
if (mddev->suspended) {
|
2009-03-31 10:39:39 +07:00
|
|
|
DEFINE_WAIT(__wait);
|
|
|
|
for (;;) {
|
|
|
|
prepare_to_wait(&mddev->sb_wait, &__wait,
|
|
|
|
TASK_UNINTERRUPTIBLE);
|
2010-09-03 16:56:18 +07:00
|
|
|
if (!mddev->suspended)
|
2009-03-31 10:39:39 +07:00
|
|
|
break;
|
|
|
|
rcu_read_unlock();
|
|
|
|
schedule();
|
|
|
|
rcu_read_lock();
|
|
|
|
}
|
|
|
|
finish_wait(&mddev->sb_wait, &__wait);
|
|
|
|
}
|
|
|
|
atomic_inc(&mddev->active_io);
|
|
|
|
rcu_read_unlock();
|
2010-03-25 12:20:56 +07:00
|
|
|
|
2011-02-08 07:21:48 +07:00
|
|
|
/*
|
|
|
|
* save the sectors now since our bio can
|
|
|
|
* go away inside make_request
|
|
|
|
*/
|
|
|
|
sectors = bio_sectors(bio);
|
2016-04-26 06:52:38 +07:00
|
|
|
/* bio could be mergeable after passing to underlayer */
|
2016-08-06 04:35:16 +07:00
|
|
|
bio->bi_opf &= ~REQ_NOMERGE;
|
2011-09-12 17:12:01 +07:00
|
|
|
mddev->pers->make_request(mddev, bio);
|
2010-03-25 12:20:56 +07:00
|
|
|
|
2015-04-03 07:44:47 +07:00
|
|
|
cpu = part_stat_lock();
|
|
|
|
part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
|
|
|
|
part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
|
|
|
|
part_stat_unlock();
|
2010-03-25 12:20:56 +07:00
|
|
|
|
2009-03-31 10:39:39 +07:00
|
|
|
if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
|
|
|
|
wake_up(&mddev->sb_wait);
|
2015-11-06 00:41:16 +07:00
|
|
|
|
|
|
|
return BLK_QC_T_NONE;
|
2009-03-31 10:39:39 +07:00
|
|
|
}
|
|
|
|
|
2010-04-06 11:23:02 +07:00
|
|
|
/* mddev_suspend makes sure no new requests are submitted
|
|
|
|
* to the device, and that any requests that have been submitted
|
|
|
|
* are completely handled.
|
2014-12-15 08:56:58 +07:00
|
|
|
* Once mddev_detach() is called and completes, the module will be
|
|
|
|
* completely unused.
|
2010-04-06 11:23:02 +07:00
|
|
|
*/
|
2011-10-11 12:47:53 +07:00
|
|
|
void mddev_suspend(struct mddev *mddev)
|
2009-03-31 10:39:39 +07:00
|
|
|
{
|
2016-05-04 00:43:57 +07:00
|
|
|
WARN_ON_ONCE(mddev->thread && current == mddev->thread->tsk);
|
2015-12-18 11:19:16 +07:00
|
|
|
if (mddev->suspended++)
|
|
|
|
return;
|
2009-03-31 10:39:39 +07:00
|
|
|
synchronize_rcu();
|
|
|
|
wait_event(mddev->sb_wait, atomic_read(&mddev->active_io) == 0);
|
|
|
|
mddev->pers->quiesce(mddev, 1);
|
2012-05-16 16:06:14 +07:00
|
|
|
|
|
|
|
del_timer_sync(&mddev->safemode_timer);
|
2009-03-31 10:39:39 +07:00
|
|
|
}
|
2010-06-01 16:37:27 +07:00
|
|
|
EXPORT_SYMBOL_GPL(mddev_suspend);
|
2009-03-31 10:39:39 +07:00
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
void mddev_resume(struct mddev *mddev)
|
2009-03-31 10:39:39 +07:00
|
|
|
{
|
2015-12-18 11:19:16 +07:00
|
|
|
if (--mddev->suspended)
|
|
|
|
return;
|
2009-03-31 10:39:39 +07:00
|
|
|
wake_up(&mddev->sb_wait);
|
|
|
|
mddev->pers->quiesce(mddev, 0);
|
2011-06-08 05:49:36 +07:00
|
|
|
|
2012-05-22 10:55:29 +07:00
|
|
|
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
|
2011-06-08 05:49:36 +07:00
|
|
|
md_wakeup_thread(mddev->thread);
|
|
|
|
md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2010-06-01 16:37:27 +07:00
|
|
|
EXPORT_SYMBOL_GPL(mddev_resume);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
int mddev_congested(struct mddev *mddev, int bits)
|
2009-09-23 15:10:29 +07:00
|
|
|
{
|
2014-12-15 08:56:56 +07:00
|
|
|
struct md_personality *pers = mddev->pers;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
if (mddev->suspended)
|
|
|
|
ret = 1;
|
|
|
|
else if (pers && pers->congested)
|
|
|
|
ret = pers->congested(mddev, bits);
|
|
|
|
rcu_read_unlock();
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(mddev_congested);
|
|
|
|
static int md_congested(void *data, int bits)
|
|
|
|
{
|
|
|
|
struct mddev *mddev = data;
|
|
|
|
return mddev_congested(mddev, bits);
|
2009-09-23 15:10:29 +07:00
|
|
|
}
|
|
|
|
|
2009-12-14 08:49:49 +07:00
|
|
|
/*
|
2010-09-03 16:56:18 +07:00
|
|
|
* Generic flush handling for md
|
2009-12-14 08:49:49 +07:00
|
|
|
*/
|
|
|
|
|
2015-07-20 20:29:37 +07:00
|
|
|
static void md_end_flush(struct bio *bio)
|
2009-12-14 08:49:49 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev = bio->bi_private;
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = rdev->mddev;
|
2009-12-14 08:49:49 +07:00
|
|
|
|
|
|
|
rdev_dec_pending(rdev, mddev);
|
|
|
|
|
|
|
|
if (atomic_dec_and_test(&mddev->flush_pending)) {
|
2010-09-03 16:56:18 +07:00
|
|
|
/* The pre-request flush has finished */
|
2010-10-15 20:36:08 +07:00
|
|
|
queue_work(md_wq, &mddev->flush_work);
|
2009-12-14 08:49:49 +07:00
|
|
|
}
|
|
|
|
bio_put(bio);
|
|
|
|
}
|
|
|
|
|
2010-12-09 12:04:25 +07:00
|
|
|
static void md_submit_flush_data(struct work_struct *ws);
|
|
|
|
|
2010-12-09 12:17:51 +07:00
|
|
|
static void submit_flushes(struct work_struct *ws)
|
2009-12-14 08:49:49 +07:00
|
|
|
{
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = container_of(ws, struct mddev, flush_work);
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2009-12-14 08:49:49 +07:00
|
|
|
|
2010-12-09 12:04:25 +07:00
|
|
|
INIT_WORK(&mddev->flush_work, md_submit_flush_data);
|
|
|
|
atomic_set(&mddev->flush_pending, 1);
|
2009-12-14 08:49:49 +07:00
|
|
|
rcu_read_lock();
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each_rcu(rdev, mddev)
|
2009-12-14 08:49:49 +07:00
|
|
|
if (rdev->raid_disk >= 0 &&
|
|
|
|
!test_bit(Faulty, &rdev->flags)) {
|
|
|
|
/* Take two references, one is dropped
|
|
|
|
* when request finishes, one after
|
|
|
|
* we reclaim rcu_read_lock
|
|
|
|
*/
|
|
|
|
struct bio *bi;
|
|
|
|
atomic_inc(&rdev->nr_pending);
|
|
|
|
atomic_inc(&rdev->nr_pending);
|
|
|
|
rcu_read_unlock();
|
2012-05-21 06:26:59 +07:00
|
|
|
bi = bio_alloc_mddev(GFP_NOIO, 0, mddev);
|
2010-09-03 16:56:18 +07:00
|
|
|
bi->bi_end_io = md_end_flush;
|
2009-12-14 08:49:49 +07:00
|
|
|
bi->bi_private = rdev;
|
|
|
|
bi->bi_bdev = rdev->bdev;
|
2016-11-01 20:40:10 +07:00
|
|
|
bi->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
|
2009-12-14 08:49:49 +07:00
|
|
|
atomic_inc(&mddev->flush_pending);
|
2016-06-06 02:31:41 +07:00
|
|
|
submit_bio(bi);
|
2009-12-14 08:49:49 +07:00
|
|
|
rcu_read_lock();
|
|
|
|
rdev_dec_pending(rdev, mddev);
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
2010-12-09 12:04:25 +07:00
|
|
|
if (atomic_dec_and_test(&mddev->flush_pending))
|
|
|
|
queue_work(md_wq, &mddev->flush_work);
|
2009-12-14 08:49:49 +07:00
|
|
|
}
|
|
|
|
|
2010-09-03 16:56:18 +07:00
|
|
|
static void md_submit_flush_data(struct work_struct *ws)
|
2009-12-14 08:49:49 +07:00
|
|
|
{
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = container_of(ws, struct mddev, flush_work);
|
2010-09-03 16:56:18 +07:00
|
|
|
struct bio *bio = mddev->flush_bio;
|
2009-12-14 08:49:49 +07:00
|
|
|
|
2013-10-12 05:44:27 +07:00
|
|
|
if (bio->bi_iter.bi_size == 0)
|
2009-12-14 08:49:49 +07:00
|
|
|
/* an empty barrier - all done */
|
2015-07-20 20:29:37 +07:00
|
|
|
bio_endio(bio);
|
2009-12-14 08:49:49 +07:00
|
|
|
else {
|
2016-08-06 04:35:16 +07:00
|
|
|
bio->bi_opf &= ~REQ_PREFLUSH;
|
2011-09-12 17:12:01 +07:00
|
|
|
mddev->pers->make_request(mddev, bio);
|
2009-12-14 08:49:49 +07:00
|
|
|
}
|
2010-12-09 11:59:01 +07:00
|
|
|
|
|
|
|
mddev->flush_bio = NULL;
|
|
|
|
wake_up(&mddev->sb_wait);
|
2009-12-14 08:49:49 +07:00
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
void md_flush_request(struct mddev *mddev, struct bio *bio)
|
2009-12-14 08:49:49 +07:00
|
|
|
{
|
2014-12-15 08:56:56 +07:00
|
|
|
spin_lock_irq(&mddev->lock);
|
2009-12-14 08:49:49 +07:00
|
|
|
wait_event_lock_irq(mddev->sb_wait,
|
2010-09-03 16:56:18 +07:00
|
|
|
!mddev->flush_bio,
|
2014-12-15 08:56:56 +07:00
|
|
|
mddev->lock);
|
2010-09-03 16:56:18 +07:00
|
|
|
mddev->flush_bio = bio;
|
2014-12-15 08:56:56 +07:00
|
|
|
spin_unlock_irq(&mddev->lock);
|
2009-12-14 08:49:49 +07:00
|
|
|
|
2010-12-09 12:17:51 +07:00
|
|
|
INIT_WORK(&mddev->flush_work, submit_flushes);
|
|
|
|
queue_work(md_wq, &mddev->flush_work);
|
2009-12-14 08:49:49 +07:00
|
|
|
}
|
2010-09-03 16:56:18 +07:00
|
|
|
EXPORT_SYMBOL(md_flush_request);

void md_unplug(struct blk_plug_cb *cb, bool from_schedule)
{
	struct mddev *mddev = cb->data;
	md_wakeup_thread(mddev->thread);
	kfree(cb);
}
EXPORT_SYMBOL(md_unplug);

static inline struct mddev *mddev_get(struct mddev *mddev)
{
	atomic_inc(&mddev->active);
	return mddev;
}

static void mddev_delayed_delete(struct work_struct *ws);

static void mddev_put(struct mddev *mddev)
{
	struct bio_set *bs = NULL;

	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
		return;
	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
	    mddev->ctime == 0 && !mddev->hold_active) {
		/* Array is not configured at all, and not held active,
		 * so destroy it */
		list_del_init(&mddev->all_mddevs);
		bs = mddev->bio_set;
		mddev->bio_set = NULL;
		if (mddev->gendisk) {
			/* We did a probe so need to clean up.  Call
			 * queue_work inside the spinlock so that
			 * flush_workqueue() after mddev_find will
			 * succeed in waiting for the work to be done.
			 */
			INIT_WORK(&mddev->del_work, mddev_delayed_delete);
			queue_work(md_misc_wq, &mddev->del_work);
		} else
			kfree(mddev);
	}
	spin_unlock(&all_mddevs_lock);
	if (bs)
		bioset_free(bs);
}

static void md_safemode_timeout(unsigned long data);

void mddev_init(struct mddev *mddev)
{
	mutex_init(&mddev->open_mutex);
	mutex_init(&mddev->reconfig_mutex);
	mutex_init(&mddev->bitmap_info.mutex);
	INIT_LIST_HEAD(&mddev->disks);
	INIT_LIST_HEAD(&mddev->all_mddevs);
	setup_timer(&mddev->safemode_timer, md_safemode_timeout,
		    (unsigned long) mddev);
	atomic_set(&mddev->active, 1);
	atomic_set(&mddev->openers, 0);
	atomic_set(&mddev->active_io, 0);
	spin_lock_init(&mddev->lock);
	atomic_set(&mddev->flush_pending, 0);
	init_waitqueue_head(&mddev->sb_wait);
	init_waitqueue_head(&mddev->recovery_wait);
	mddev->reshape_position = MaxSector;
	mddev->reshape_backwards = 0;
	mddev->last_sync_action = "none";
	mddev->resync_min = 0;
	mddev->resync_max = MaxSector;
	mddev->level = LEVEL_NONE;
}
EXPORT_SYMBOL_GPL(mddev_init);

static struct mddev *mddev_find(dev_t unit)
{
	struct mddev *mddev, *new = NULL;

	if (unit && MAJOR(unit) != MD_MAJOR)
		unit &= ~((1<<MdpMinorShift)-1);

 retry:
	spin_lock(&all_mddevs_lock);

	if (unit) {
		list_for_each_entry(mddev, &all_mddevs, all_mddevs)
			if (mddev->unit == unit) {
				mddev_get(mddev);
				spin_unlock(&all_mddevs_lock);
				kfree(new);
				return mddev;
			}

		if (new) {
			list_add(&new->all_mddevs, &all_mddevs);
			spin_unlock(&all_mddevs_lock);
			new->hold_active = UNTIL_IOCTL;
			return new;
		}
	} else if (new) {
		/* find an unused unit number */
		static int next_minor = 512;
		int start = next_minor;
		int is_free = 0;
		int dev = 0;
		while (!is_free) {
			dev = MKDEV(MD_MAJOR, next_minor);
			next_minor++;
			if (next_minor > MINORMASK)
				next_minor = 0;
			if (next_minor == start) {
				/* Oh dear, all in use. */
				spin_unlock(&all_mddevs_lock);
				kfree(new);
				return NULL;
			}

			is_free = 1;
			list_for_each_entry(mddev, &all_mddevs, all_mddevs)
				if (mddev->unit == dev) {
					is_free = 0;
					break;
				}
		}
		new->unit = dev;
		new->md_minor = MINOR(dev);
		new->hold_active = UNTIL_STOP;
		list_add(&new->all_mddevs, &all_mddevs);
		spin_unlock(&all_mddevs_lock);
		return new;
	}
	spin_unlock(&all_mddevs_lock);

	new = kzalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return NULL;

	new->unit = unit;
	if (MAJOR(unit) == MD_MAJOR)
		new->md_minor = MINOR(unit);
	else
		new->md_minor = MINOR(unit) >> MdpMinorShift;

	mddev_init(new);

	goto retry;
}

static struct attribute_group md_redundancy_group;

void mddev_unlock(struct mddev *mddev)
{
	if (mddev->to_remove) {
		/* These cannot be removed under reconfig_mutex as
		 * an access to the files will try to take reconfig_mutex
		 * while holding the file unremovable, which leads to
		 * a deadlock.
		 * So hold sysfs_active set while the remove is happening,
		 * and anything else which might set ->to_remove or
		 * otherwise change the sysfs namespace will fail with
		 * -EBUSY if sysfs_active is still set.
		 * We set sysfs_active under reconfig_mutex and elsewhere
		 * test it under the same mutex to ensure its correct value
		 * is seen.
		 */
		struct attribute_group *to_remove = mddev->to_remove;
		mddev->to_remove = NULL;
		mddev->sysfs_active = 1;
		mutex_unlock(&mddev->reconfig_mutex);

		if (mddev->kobj.sd) {
			if (to_remove != &md_redundancy_group)
				sysfs_remove_group(&mddev->kobj, to_remove);
			if (mddev->pers == NULL ||
			    mddev->pers->sync_request == NULL) {
				sysfs_remove_group(&mddev->kobj, &md_redundancy_group);
				if (mddev->sysfs_action)
					sysfs_put(mddev->sysfs_action);
				mddev->sysfs_action = NULL;
			}
		}
		mddev->sysfs_active = 0;
	} else
		mutex_unlock(&mddev->reconfig_mutex);

	/* As we've dropped the mutex we need a spinlock to
	 * make sure the thread doesn't disappear
	 */
	spin_lock(&pers_lock);
	md_wakeup_thread(mddev->thread);
	spin_unlock(&pers_lock);
}
EXPORT_SYMBOL_GPL(mddev_unlock);

struct md_rdev *md_find_rdev_nr_rcu(struct mddev *mddev, int nr)
{
	struct md_rdev *rdev;

	rdev_for_each_rcu(rdev, mddev)
		if (rdev->desc_nr == nr)
			return rdev;

	return NULL;
}
EXPORT_SYMBOL_GPL(md_find_rdev_nr_rcu);

static struct md_rdev *find_rdev(struct mddev *mddev, dev_t dev)
{
	struct md_rdev *rdev;

	rdev_for_each(rdev, mddev)
		if (rdev->bdev->bd_dev == dev)
			return rdev;

	return NULL;
}

static struct md_rdev *find_rdev_rcu(struct mddev *mddev, dev_t dev)
{
	struct md_rdev *rdev;

	rdev_for_each_rcu(rdev, mddev)
		if (rdev->bdev->bd_dev == dev)
			return rdev;

	return NULL;
}

static struct md_personality *find_pers(int level, char *clevel)
{
	struct md_personality *pers;
	list_for_each_entry(pers, &pers_list, list) {
		if (level != LEVEL_NONE && pers->level == level)
			return pers;
		if (strcmp(pers->name, clevel)==0)
			return pers;
	}
	return NULL;
}

/* return the offset of the super block in 512byte sectors */
static inline sector_t calc_dev_sboffset(struct md_rdev *rdev)
{
	sector_t num_sectors = i_size_read(rdev->bdev->bd_inode) / 512;
	return MD_NEW_SIZE_SECTORS(num_sectors);
}

static int alloc_disk_sb(struct md_rdev *rdev)
{
	rdev->sb_page = alloc_page(GFP_KERNEL);
	if (!rdev->sb_page) {
		printk(KERN_ALERT "md: out of memory.\n");
		return -ENOMEM;
	}

	return 0;
}

void md_rdev_clear(struct md_rdev *rdev)
{
	if (rdev->sb_page) {
		put_page(rdev->sb_page);
		rdev->sb_loaded = 0;
		rdev->sb_page = NULL;
		rdev->sb_start = 0;
		rdev->sectors = 0;
	}
	if (rdev->bb_page) {
		put_page(rdev->bb_page);
		rdev->bb_page = NULL;
	}
	badblocks_exit(&rdev->badblocks);
}
EXPORT_SYMBOL_GPL(md_rdev_clear);

static void super_written(struct bio *bio)
{
	struct md_rdev *rdev = bio->bi_private;
	struct mddev *mddev = rdev->mddev;

	if (bio->bi_error) {
		printk("md: super_written gets error=%d\n", bio->bi_error);
		md_error(mddev, rdev);
	}

	if (atomic_dec_and_test(&mddev->pending_writes))
		wake_up(&mddev->sb_wait);
	rdev_dec_pending(rdev, mddev);
	bio_put(bio);
}

void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
		   sector_t sector, int size, struct page *page)
{
	/* write first size bytes of page to sector of rdev
	 * Increment mddev->pending_writes before returning
	 * and decrement it on completion, waking up sb_wait
	 * if zero is reached.
	 * If an error occurred, call md_error
	 */
	struct bio *bio = bio_alloc_mddev(GFP_NOIO, 1, mddev);

	atomic_inc(&rdev->nr_pending);

	bio->bi_bdev = rdev->meta_bdev ? rdev->meta_bdev : rdev->bdev;
	bio->bi_iter.bi_sector = sector;
	bio_add_page(bio, page, size, 0);
	bio->bi_private = rdev;
	bio->bi_end_io = super_written;
	bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA;

	atomic_inc(&mddev->pending_writes);
	submit_bio(bio);
}

void md_super_wait(struct mddev *mddev)
{
	/* wait for all superblock writes that were scheduled to complete */
	wait_event(mddev->sb_wait, atomic_read(&mddev->pending_writes)==0);
}

int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
		 struct page *page, int op, int op_flags, bool metadata_op)
{
	struct bio *bio = bio_alloc_mddev(GFP_NOIO, 1, rdev->mddev);
	int ret;

	bio->bi_bdev = (metadata_op && rdev->meta_bdev) ?
		rdev->meta_bdev : rdev->bdev;
	bio_set_op_attrs(bio, op, op_flags);
	if (metadata_op)
		bio->bi_iter.bi_sector = sector + rdev->sb_start;
	else if (rdev->mddev->reshape_position != MaxSector &&
		 (rdev->mddev->reshape_backwards ==
		  (sector >= rdev->mddev->reshape_position)))
		bio->bi_iter.bi_sector = sector + rdev->new_data_offset;
	else
		bio->bi_iter.bi_sector = sector + rdev->data_offset;
	bio_add_page(bio, page, size, 0);

	submit_bio_wait(bio);

	ret = !bio->bi_error;
	bio_put(bio);
	return ret;
}
EXPORT_SYMBOL_GPL(sync_page_io);

static int read_disk_sb(struct md_rdev *rdev, int size)
{
	char b[BDEVNAME_SIZE];

	if (rdev->sb_loaded)
		return 0;

	if (!sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, 0, true))
		goto fail;
	rdev->sb_loaded = 1;
	return 0;

fail:
	printk(KERN_WARNING "md: disabled device %s, could not read superblock.\n",
	       bdevname(rdev->bdev,b));
	return -EINVAL;
}

static int uuid_equal(mdp_super_t *sb1, mdp_super_t *sb2)
{
	return	sb1->set_uuid0 == sb2->set_uuid0 &&
		sb1->set_uuid1 == sb2->set_uuid1 &&
		sb1->set_uuid2 == sb2->set_uuid2 &&
		sb1->set_uuid3 == sb2->set_uuid3;
}

static int sb_equal(mdp_super_t *sb1, mdp_super_t *sb2)
{
	int ret;
	mdp_super_t *tmp1, *tmp2;

	tmp1 = kmalloc(sizeof(*tmp1),GFP_KERNEL);
	tmp2 = kmalloc(sizeof(*tmp2),GFP_KERNEL);

	if (!tmp1 || !tmp2) {
		ret = 0;
		printk(KERN_INFO "md.c sb_equal(): failed to allocate memory!\n");
		goto abort;
	}

	*tmp1 = *sb1;
	*tmp2 = *sb2;

	/*
	 * nr_disks is not constant
	 */
	tmp1->nr_disks = 0;
	tmp2->nr_disks = 0;

	ret = (memcmp(tmp1, tmp2, MD_SB_GENERIC_CONSTANT_WORDS * 4) == 0);
abort:
	kfree(tmp1);
	kfree(tmp2);
	return ret;
}

static u32 md_csum_fold(u32 csum)
{
	csum = (csum & 0xffff) + (csum >> 16);
	return (csum & 0xffff) + (csum >> 16);
}

static unsigned int calc_sb_csum(mdp_super_t *sb)
{
	u64 newcsum = 0;
	u32 *sb32 = (u32*)sb;
	int i;
	unsigned int disk_csum, csum;

	disk_csum = sb->sb_csum;
	sb->sb_csum = 0;

	for (i = 0; i < MD_SB_BYTES/4 ; i++)
		newcsum += sb32[i];
	csum = (newcsum & 0xffffffff) + (newcsum>>32);

#ifdef CONFIG_ALPHA
	/* This used to use csum_partial, which was wrong for several
	 * reasons including that different results are returned on
	 * different architectures.  It isn't critical that we get exactly
	 * the same return value as before (we always csum_fold before
	 * testing, and that removes any differences).  However as we
	 * know that csum_partial always returned a 16bit value on
	 * alphas, do a fold to maximise conformity to previous behaviour.
	 */
	sb->sb_csum = md_csum_fold(disk_csum);
#else
	sb->sb_csum = disk_csum;
#endif
	return csum;
}

/*
 * Handle superblock details.
 * We want to be able to handle multiple superblock formats
 * so we have a common interface to them all, and an array of
 * different handlers.
 * We rely on user-space to write the initial superblock, and support
 * reading and updating of superblocks.
 * Interface methods are:
 *   int load_super(struct md_rdev *dev, struct md_rdev *refdev, int minor_version)
 *      loads and validates a superblock on dev.
 *      if refdev != NULL, compare superblocks on both devices
 *    Return:
 *      0 - dev has a superblock that is compatible with refdev
 *      1 - dev has a superblock that is compatible and newer than refdev
 *          so dev should be used as the refdev in future
 *     -EINVAL superblock incompatible or invalid
 *     -othererror e.g. -EIO
 *
 *   int validate_super(struct mddev *mddev, struct md_rdev *dev)
 *      Verify that dev is acceptable into mddev.
 *       The first time, mddev->raid_disks will be 0, and data from
 *       dev should be merged in.  Subsequent calls check that dev
 *       is new enough.  Return 0 or -EINVAL
 *
 *   void sync_super(struct mddev *mddev, struct md_rdev *dev)
 *     Update the superblock for rdev with data in mddev
 *     This does not write to disc.
 *
 */

struct super_type  {
	char		    *name;
	struct module	    *owner;
	int		    (*load_super)(struct md_rdev *rdev,
					  struct md_rdev *refdev,
					  int minor_version);
	int		    (*validate_super)(struct mddev *mddev,
					      struct md_rdev *rdev);
	void		    (*sync_super)(struct mddev *mddev,
					  struct md_rdev *rdev);
	unsigned long long  (*rdev_size_change)(struct md_rdev *rdev,
						sector_t num_sectors);
	int		    (*allow_new_offset)(struct md_rdev *rdev,
						unsigned long long new_offset);
};

/*
 * Check that the given mddev has no bitmap.
 *
 * This function is called from the run method of all personalities that do not
 * support bitmaps. It prints an error message and returns non-zero if mddev
 * has a bitmap. Otherwise, it returns 0.
 *
 */
int md_check_no_bitmap(struct mddev *mddev)
{
	if (!mddev->bitmap_info.file && !mddev->bitmap_info.offset)
		return 0;
	printk(KERN_ERR "%s: bitmaps are not supported for %s\n",
	       mdname(mddev), mddev->pers->name);
	return 1;
}
EXPORT_SYMBOL(md_check_no_bitmap);

/*
 * load_super for 0.90.0
 */
static int super_90_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor_version)
{
	char b[BDEVNAME_SIZE], b2[BDEVNAME_SIZE];
	mdp_super_t *sb;
	int ret;

	/*
	 * Calculate the position of the superblock (512byte sectors),
	 * it's at the end of the disk.
	 *
	 * It also happens to be a multiple of 4Kb.
	 */
	rdev->sb_start = calc_dev_sboffset(rdev);

	ret = read_disk_sb(rdev, MD_SB_BYTES);
	if (ret) return ret;

	ret = -EINVAL;

	bdevname(rdev->bdev, b);
	sb = page_address(rdev->sb_page);

	if (sb->md_magic != MD_SB_MAGIC) {
		printk(KERN_ERR "md: invalid raid superblock magic on %s\n",
		       b);
		goto abort;
	}

	if (sb->major_version != 0 ||
	    sb->minor_version < 90 ||
	    sb->minor_version > 91) {
		printk(KERN_WARNING "Bad version number %d.%d on %s\n",
			sb->major_version, sb->minor_version,
			b);
		goto abort;
	}

	if (sb->raid_disks <= 0)
		goto abort;

	if (md_csum_fold(calc_sb_csum(sb)) != md_csum_fold(sb->sb_csum)) {
		printk(KERN_WARNING "md: invalid superblock checksum on %s\n",
			b);
		goto abort;
	}

	rdev->preferred_minor = sb->md_minor;
	rdev->data_offset = 0;
	rdev->new_data_offset = 0;
	rdev->sb_size = MD_SB_BYTES;
	rdev->badblocks.shift = -1;

	if (sb->level == LEVEL_MULTIPATH)
		rdev->desc_nr = -1;
	else
		rdev->desc_nr = sb->this_disk.number;

	if (!refdev) {
		ret = 1;
	} else {
		__u64 ev1, ev2;
		mdp_super_t *refsb = page_address(refdev->sb_page);
		if (!uuid_equal(refsb, sb)) {
			printk(KERN_WARNING "md: %s has different UUID to %s\n",
				b, bdevname(refdev->bdev,b2));
			goto abort;
		}
		if (!sb_equal(refsb, sb)) {
			printk(KERN_WARNING "md: %s has same UUID"
			       " but different superblock to %s\n",
			       b, bdevname(refdev->bdev, b2));
			goto abort;
		}
		ev1 = md_event(sb);
		ev2 = md_event(refsb);
		if (ev1 > ev2)
			ret = 1;
		else
			ret = 0;
	}
	rdev->sectors = rdev->sb_start;
	/* Limit to 4TB as metadata cannot record more than that.
	 * (not needed for Linear and RAID0 as metadata doesn't
	 * record this size)
	 */
	if (IS_ENABLED(CONFIG_LBDAF) && (u64)rdev->sectors >= (2ULL << 32) &&
	    sb->level >= 1)
		rdev->sectors = (sector_t)(2ULL << 32) - 2;

	if (rdev->sectors < ((sector_t)sb->size) * 2 && sb->level >= 1)
		/* "this cannot possibly happen" ... */
		ret = -EINVAL;

 abort:
	return ret;
}

/*
 * validate_super for 0.90.0
 */
static int super_90_validate(struct mddev *mddev, struct md_rdev *rdev)
{
	mdp_disk_t *desc;
	mdp_super_t *sb = page_address(rdev->sb_page);
	__u64 ev1 = md_event(sb);

	rdev->raid_disk = -1;
	clear_bit(Faulty, &rdev->flags);
	clear_bit(In_sync, &rdev->flags);
	clear_bit(Bitmap_sync, &rdev->flags);
	clear_bit(WriteMostly, &rdev->flags);

	if (mddev->raid_disks == 0) {
		mddev->major_version = 0;
		mddev->minor_version = sb->minor_version;
		mddev->patch_version = sb->patch_version;
		mddev->external = 0;
		mddev->chunk_sectors = sb->chunk_size >> 9;
		mddev->ctime = sb->ctime;
		mddev->utime = sb->utime;
		mddev->level = sb->level;
		mddev->clevel[0] = 0;
		mddev->layout = sb->layout;
		mddev->raid_disks = sb->raid_disks;
		mddev->dev_sectors = ((sector_t)sb->size) * 2;
		mddev->events = ev1;
		mddev->bitmap_info.offset = 0;
		mddev->bitmap_info.space = 0;
		/* bitmap can use 60 K after the 4K superblocks */
		mddev->bitmap_info.default_offset = MD_SB_BYTES >> 9;
		mddev->bitmap_info.default_space = 64*2 - (MD_SB_BYTES >> 9);
		mddev->reshape_backwards = 0;

		if (mddev->minor_version >= 91) {
			mddev->reshape_position = sb->reshape_position;
			mddev->delta_disks = sb->delta_disks;
			mddev->new_level = sb->new_level;
			mddev->new_layout = sb->new_layout;
			mddev->new_chunk_sectors = sb->new_chunk >> 9;
			if (mddev->delta_disks < 0)
				mddev->reshape_backwards = 1;
		} else {
			mddev->reshape_position = MaxSector;
			mddev->delta_disks = 0;
			mddev->new_level = mddev->level;
			mddev->new_layout = mddev->layout;
			mddev->new_chunk_sectors = mddev->chunk_sectors;
		}

		if (sb->state & (1<<MD_SB_CLEAN))
			mddev->recovery_cp = MaxSector;
		else {
			if (sb->events_hi == sb->cp_events_hi &&
			    sb->events_lo == sb->cp_events_lo) {
				mddev->recovery_cp = sb->recovery_cp;
			} else
				mddev->recovery_cp = 0;
		}

		memcpy(mddev->uuid+0, &sb->set_uuid0, 4);
		memcpy(mddev->uuid+4, &sb->set_uuid1, 4);
		memcpy(mddev->uuid+8, &sb->set_uuid2, 4);
		memcpy(mddev->uuid+12,&sb->set_uuid3, 4);

		mddev->max_disks = MD_SB_DISKS;

		if (sb->state & (1<<MD_SB_BITMAP_PRESENT) &&
		    mddev->bitmap_info.file == NULL) {
			mddev->bitmap_info.offset =
				mddev->bitmap_info.default_offset;
			mddev->bitmap_info.space =
				mddev->bitmap_info.default_space;
		}

	} else if (mddev->pers == NULL) {
		/* Insist on good event counter while assembling, except
		 * for spares (which don't need an event count) */
		++ev1;
		if (sb->disks[rdev->desc_nr].state & (
			    (1<<MD_DISK_SYNC) | (1 << MD_DISK_ACTIVE)))
			if (ev1 < mddev->events)
				return -EINVAL;
	} else if (mddev->bitmap) {
		/* if adding to array with a bitmap, then we can accept an
		 * older device ... but not too old.
		 */
		if (ev1 < mddev->bitmap->events_cleared)
			return 0;
		if (ev1 < mddev->events)
			set_bit(Bitmap_sync, &rdev->flags);
	} else {
		if (ev1 < mddev->events)
			/* just a hot-add of a new device, leave raid_disk at -1 */
			return 0;
	}

	if (mddev->level != LEVEL_MULTIPATH) {
		desc = sb->disks + rdev->desc_nr;

		if (desc->state & (1<<MD_DISK_FAULTY))
			set_bit(Faulty, &rdev->flags);
		else if (desc->state & (1<<MD_DISK_SYNC) /* &&
			    desc->raid_disk < mddev->raid_disks */) {
			set_bit(In_sync, &rdev->flags);
			rdev->raid_disk = desc->raid_disk;
			rdev->saved_raid_disk = desc->raid_disk;
		} else if (desc->state & (1<<MD_DISK_ACTIVE)) {
			/* active but not in sync implies recovery up to
			 * reshape position.  We don't know exactly where
			 * that is, so set to zero for now */
			if (mddev->minor_version >= 91) {
				rdev->recovery_offset = 0;
				rdev->raid_disk = desc->raid_disk;
			}
		}
		if (desc->state & (1<<MD_DISK_WRITEMOSTLY))
			set_bit(WriteMostly, &rdev->flags);
	} else /* MULTIPATH are always insync */
		set_bit(In_sync, &rdev->flags);
	return 0;
}

/*
 * sync_super for 0.90.0
 */
static void super_90_sync(struct mddev *mddev, struct md_rdev *rdev)
{
	mdp_super_t *sb;
	struct md_rdev *rdev2;
	int next_spare = mddev->raid_disks;

	/* make rdev->sb match mddev data..
	 *
	 * 1/ zero out disks
	 * 2/ Add info for each disk, keeping track of highest desc_nr (next_spare);
	 * 3/ any empty disks < next_spare become removed
	 *
	 * disks[0] gets initialised to REMOVED because
	 * we cannot be sure from other fields if it has
	 * been initialised or not.
	 */
	int i;
	int active=0, working=0,failed=0,spare=0,nr_disks=0;

	rdev->sb_size = MD_SB_BYTES;

	sb = page_address(rdev->sb_page);

	memset(sb, 0, sizeof(*sb));

	sb->md_magic = MD_SB_MAGIC;
	sb->major_version = mddev->major_version;
	sb->patch_version = mddev->patch_version;
	sb->gvalid_words  = 0; /* ignored */
	memcpy(&sb->set_uuid0, mddev->uuid+0, 4);
	memcpy(&sb->set_uuid1, mddev->uuid+4, 4);
	memcpy(&sb->set_uuid2, mddev->uuid+8, 4);
	memcpy(&sb->set_uuid3, mddev->uuid+12,4);

	sb->ctime = clamp_t(time64_t, mddev->ctime, 0, U32_MAX);
	sb->level = mddev->level;
	sb->size = mddev->dev_sectors / 2;
	sb->raid_disks = mddev->raid_disks;
	sb->md_minor = mddev->md_minor;
	sb->not_persistent = 0;
	sb->utime = clamp_t(time64_t, mddev->utime, 0, U32_MAX);
	sb->state = 0;
	sb->events_hi = (mddev->events>>32);
	sb->events_lo = (u32)mddev->events;

	if (mddev->reshape_position == MaxSector)
		sb->minor_version = 90;
	else {
		sb->minor_version = 91;
		sb->reshape_position = mddev->reshape_position;
		sb->new_level = mddev->new_level;
		sb->delta_disks = mddev->delta_disks;
		sb->new_layout = mddev->new_layout;
		sb->new_chunk = mddev->new_chunk_sectors << 9;
	}
	mddev->minor_version = sb->minor_version;
	if (mddev->in_sync)
	{
		sb->recovery_cp = mddev->recovery_cp;
		sb->cp_events_hi = (mddev->events>>32);
		sb->cp_events_lo = (u32)mddev->events;
		if (mddev->recovery_cp == MaxSector)
			sb->state = (1<< MD_SB_CLEAN);
	} else
		sb->recovery_cp = 0;

	sb->layout = mddev->layout;
	sb->chunk_size = mddev->chunk_sectors << 9;

	if (mddev->bitmap && mddev->bitmap_info.file == NULL)
		sb->state |= (1<<MD_SB_BITMAP_PRESENT);

	sb->disks[0].state = (1<<MD_DISK_REMOVED);
	rdev_for_each(rdev2, mddev) {
		mdp_disk_t *d;
		int desc_nr;
		int is_active = test_bit(In_sync, &rdev2->flags);

		if (rdev2->raid_disk >= 0 &&
		    sb->minor_version >= 91)
			/* we have nowhere to store the recovery_offset,
			 * but if it is not below the reshape_position,
			 * we can piggy-back on that.
			 */
			is_active = 1;
		if (rdev2->raid_disk < 0 ||
		    test_bit(Faulty, &rdev2->flags))
			is_active = 0;
		if (is_active)
			desc_nr = rdev2->raid_disk;
		else
			desc_nr = next_spare++;
		rdev2->desc_nr = desc_nr;
		d = &sb->disks[rdev2->desc_nr];
		nr_disks++;
		d->number = rdev2->desc_nr;
		d->major = MAJOR(rdev2->bdev->bd_dev);
		d->minor = MINOR(rdev2->bdev->bd_dev);
		if (is_active)
			d->raid_disk = rdev2->raid_disk;
		else
			d->raid_disk = rdev2->desc_nr; /* compatibility */
		if (test_bit(Faulty, &rdev2->flags))
			d->state = (1<<MD_DISK_FAULTY);
		else if (is_active) {
			d->state = (1<<MD_DISK_ACTIVE);
			if (test_bit(In_sync, &rdev2->flags))
				d->state |= (1<<MD_DISK_SYNC);
			active++;
			working++;
		} else {
			d->state = 0;
			spare++;
			working++;
		}
		if (test_bit(WriteMostly, &rdev2->flags))
			d->state |= (1<<MD_DISK_WRITEMOSTLY);
	}
	/* now set the "removed" and "faulty" bits on any missing devices */
	for (i=0 ; i < mddev->raid_disks ; i++) {
		mdp_disk_t *d = &sb->disks[i];
		if (d->state == 0 && d->number == 0) {
			d->number = i;
			d->raid_disk = i;
			d->state = (1<<MD_DISK_REMOVED);
			d->state |= (1<<MD_DISK_FAULTY);
			failed++;
		}
	}
	sb->nr_disks = nr_disks;
	sb->active_disks = active;
	sb->working_disks = working;
	sb->failed_disks = failed;
	sb->spare_disks = spare;

	sb->this_disk = sb->disks[rdev->desc_nr];
	sb->sb_csum = calc_sb_csum(sb);
}

/*
 * rdev_size_change for 0.90.0
 */
static unsigned long long
super_90_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
{
	if (num_sectors && num_sectors < rdev->mddev->dev_sectors)
		return 0; /* component must fit device */
	if (rdev->mddev->bitmap_info.offset)
		return 0; /* can't move bitmap */
	rdev->sb_start = calc_dev_sboffset(rdev);
	if (!num_sectors || num_sectors > rdev->sb_start)
		num_sectors = rdev->sb_start;
	/* Limit to 4TB as metadata cannot record more than that.
	 * 4TB == 2^32 KB, or 2*2^32 sectors.
	 */
	if (IS_ENABLED(CONFIG_LBDAF) && (u64)num_sectors >= (2ULL << 32) &&
	    rdev->mddev->level >= 1)
		num_sectors = (sector_t)(2ULL << 32) - 2;
	md_super_write(rdev->mddev, rdev, rdev->sb_start, rdev->sb_size,
		       rdev->sb_page);
	md_super_wait(rdev->mddev);
	return num_sectors;
}

static int
super_90_allow_new_offset(struct md_rdev *rdev, unsigned long long new_offset)
{
	/* non-zero offset changes not possible with v0.90 */
	return new_offset == 0;
}
/*
 * version 1 superblock
 */

static __le32 calc_sb_1_csum(struct mdp_superblock_1 *sb)
{
	__le32 disk_csum;
	u32 csum;
	unsigned long long newcsum;
	int size = 256 + le32_to_cpu(sb->max_dev)*2;
	__le32 *isuper = (__le32*)sb;

	disk_csum = sb->sb_csum;
	sb->sb_csum = 0;
	newcsum = 0;
	for (; size >= 4; size -= 4)
		newcsum += le32_to_cpu(*isuper++);

	if (size == 2)
		newcsum += le16_to_cpu(*(__le16*) isuper);

	csum = (newcsum & 0xffffffff) + (newcsum >> 32);
	sb->sb_csum = disk_csum;
	return cpu_to_le32(csum);
}

static int super_1_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor_version)
{
	struct mdp_superblock_1 *sb;
	int ret;
	sector_t sb_start;
	sector_t sectors;
	char b[BDEVNAME_SIZE], b2[BDEVNAME_SIZE];
	int bmask;

	/*
	 * Calculate the position of the superblock in 512byte sectors.
	 * It is always aligned to a 4K boundary and
	 * depending on minor_version, it can be:
	 * 0: At least 8K, but less than 12K, from end of device
	 * 1: At start of device
	 * 2: 4K from start of device.
	 */
	switch(minor_version) {
	case 0:
		sb_start = i_size_read(rdev->bdev->bd_inode) >> 9;
		sb_start -= 8*2;
		sb_start &= ~(sector_t)(4*2-1);
		break;
	case 1:
		sb_start = 0;
		break;
	case 2:
		sb_start = 8;
		break;
	default:
		return -EINVAL;
	}
	rdev->sb_start = sb_start;

	/* superblock is rarely larger than 1K, but it can be larger,
	 * and it is safe to read 4k, so we do that
	 */
	ret = read_disk_sb(rdev, 4096);
	if (ret) return ret;

	sb = page_address(rdev->sb_page);

	if (sb->magic != cpu_to_le32(MD_SB_MAGIC) ||
	    sb->major_version != cpu_to_le32(1) ||
	    le32_to_cpu(sb->max_dev) > (4096-256)/2 ||
	    le64_to_cpu(sb->super_offset) != rdev->sb_start ||
	    (le32_to_cpu(sb->feature_map) & ~MD_FEATURE_ALL) != 0)
		return -EINVAL;

	if (calc_sb_1_csum(sb) != sb->sb_csum) {
		printk("md: invalid superblock checksum on %s\n",
		       bdevname(rdev->bdev,b));
		return -EINVAL;
	}
	if (le64_to_cpu(sb->data_size) < 10) {
		printk("md: data_size too small on %s\n",
		       bdevname(rdev->bdev,b));
		return -EINVAL;
	}
	if (sb->pad0 ||
	    sb->pad3[0] ||
	    memcmp(sb->pad3, sb->pad3+1, sizeof(sb->pad3) - sizeof(sb->pad3[1])))
		/* Some padding is non-zero, might be a new feature */
		return -EINVAL;

	rdev->preferred_minor = 0xffff;
	rdev->data_offset = le64_to_cpu(sb->data_offset);
	rdev->new_data_offset = rdev->data_offset;
	if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_RESHAPE_ACTIVE) &&
	    (le32_to_cpu(sb->feature_map) & MD_FEATURE_NEW_OFFSET))
		rdev->new_data_offset += (s32)le32_to_cpu(sb->new_offset);
	atomic_set(&rdev->corrected_errors, le32_to_cpu(sb->cnt_corrected_read));

	rdev->sb_size = le32_to_cpu(sb->max_dev) * 2 + 256;
	bmask = queue_logical_block_size(rdev->bdev->bd_disk->queue)-1;
	if (rdev->sb_size & bmask)
		rdev->sb_size = (rdev->sb_size | bmask) + 1;

	if (minor_version
	    && rdev->data_offset < sb_start + (rdev->sb_size/512))
		return -EINVAL;
	if (minor_version
	    && rdev->new_data_offset < sb_start + (rdev->sb_size/512))
		return -EINVAL;

	if (sb->level == cpu_to_le32(LEVEL_MULTIPATH))
		rdev->desc_nr = -1;
	else
		rdev->desc_nr = le32_to_cpu(sb->dev_number);

	if (!rdev->bb_page) {
		rdev->bb_page = alloc_page(GFP_KERNEL);
		if (!rdev->bb_page)
			return -ENOMEM;
	}
	if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_BAD_BLOCKS) &&
	    rdev->badblocks.count == 0) {
		/* need to load the bad block list.
		 * Currently we limit it to one page.
		 */
		s32 offset;
		sector_t bb_sector;
		u64 *bbp;
		int i;
		int sectors = le16_to_cpu(sb->bblog_size);
		if (sectors > (PAGE_SIZE / 512))
			return -EINVAL;
		offset = le32_to_cpu(sb->bblog_offset);
		if (offset == 0)
			return -EINVAL;
		bb_sector = (long long)offset;
		if (!sync_page_io(rdev, bb_sector, sectors << 9,
				  rdev->bb_page, REQ_OP_READ, 0, true))
			return -EIO;
		bbp = (u64 *)page_address(rdev->bb_page);
		rdev->badblocks.shift = sb->bblog_shift;
		for (i = 0 ; i < (sectors << (9-3)) ; i++, bbp++) {
			u64 bb = le64_to_cpu(*bbp);
			int count = bb & (0x3ff);
			u64 sector = bb >> 10;
			sector <<= sb->bblog_shift;
			count <<= sb->bblog_shift;
			if (bb + 1 == 0)
				break;
			if (badblocks_set(&rdev->badblocks, sector, count, 1))
				return -EINVAL;
		}
	} else if (sb->bblog_offset != 0)
		rdev->badblocks.shift = 0;

	if (!refdev) {
		ret = 1;
	} else {
		__u64 ev1, ev2;
		struct mdp_superblock_1 *refsb = page_address(refdev->sb_page);

		if (memcmp(sb->set_uuid, refsb->set_uuid, 16) != 0 ||
		    sb->level != refsb->level ||
		    sb->layout != refsb->layout ||
		    sb->chunksize != refsb->chunksize) {
			printk(KERN_WARNING "md: %s has strangely different"
				" superblock to %s\n",
				bdevname(rdev->bdev,b),
				bdevname(refdev->bdev,b2));
			return -EINVAL;
		}
		ev1 = le64_to_cpu(sb->events);
		ev2 = le64_to_cpu(refsb->events);

		if (ev1 > ev2)
			ret = 1;
		else
			ret = 0;
	}
	if (minor_version) {
		sectors = (i_size_read(rdev->bdev->bd_inode) >> 9);
		sectors -= rdev->data_offset;
	} else
		sectors = rdev->sb_start;
	if (sectors < le64_to_cpu(sb->data_size))
		return -EINVAL;
	rdev->sectors = le64_to_cpu(sb->data_size);
	return ret;
}

static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
{
	struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
	__u64 ev1 = le64_to_cpu(sb->events);

	rdev->raid_disk = -1;
	clear_bit(Faulty, &rdev->flags);
	clear_bit(In_sync, &rdev->flags);
	clear_bit(Bitmap_sync, &rdev->flags);
	clear_bit(WriteMostly, &rdev->flags);

	if (mddev->raid_disks == 0) {
		mddev->major_version = 1;
		mddev->patch_version = 0;
		mddev->external = 0;
		mddev->chunk_sectors = le32_to_cpu(sb->chunksize);
		mddev->ctime = le64_to_cpu(sb->ctime);
		mddev->utime = le64_to_cpu(sb->utime);
		mddev->level = le32_to_cpu(sb->level);
		mddev->clevel[0] = 0;
		mddev->layout = le32_to_cpu(sb->layout);
		mddev->raid_disks = le32_to_cpu(sb->raid_disks);
		mddev->dev_sectors = le64_to_cpu(sb->size);
		mddev->events = ev1;
		mddev->bitmap_info.offset = 0;
		mddev->bitmap_info.space = 0;
		/* Default location for bitmap is 1K after superblock
		 * using 3K - total of 4K
		 */
		mddev->bitmap_info.default_offset = 1024 >> 9;
		mddev->bitmap_info.default_space = (4096-1024) >> 9;
		mddev->reshape_backwards = 0;

		mddev->recovery_cp = le64_to_cpu(sb->resync_offset);
		memcpy(mddev->uuid, sb->set_uuid, 16);

		mddev->max_disks =  (4096-256)/2;

		if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_BITMAP_OFFSET) &&
		    mddev->bitmap_info.file == NULL) {
			mddev->bitmap_info.offset =
				(__s32)le32_to_cpu(sb->bitmap_offset);
			/* Metadata doesn't record how much space is available.
			 * For 1.0, we assume we can use up to the superblock
			 * if before, else to 4K beyond superblock.
			 * For others, assume no change is possible.
			 */
			if (mddev->minor_version > 0)
				mddev->bitmap_info.space = 0;
			else if (mddev->bitmap_info.offset > 0)
				mddev->bitmap_info.space =
					8 - mddev->bitmap_info.offset;
			else
				mddev->bitmap_info.space =
					-mddev->bitmap_info.offset;
		}

		if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_RESHAPE_ACTIVE)) {
			mddev->reshape_position = le64_to_cpu(sb->reshape_position);
			mddev->delta_disks = le32_to_cpu(sb->delta_disks);
			mddev->new_level = le32_to_cpu(sb->new_level);
			mddev->new_layout = le32_to_cpu(sb->new_layout);
			mddev->new_chunk_sectors = le32_to_cpu(sb->new_chunk);
			if (mddev->delta_disks < 0 ||
			    (mddev->delta_disks == 0 &&
			     (le32_to_cpu(sb->feature_map)
			      & MD_FEATURE_RESHAPE_BACKWARDS)))
				mddev->reshape_backwards = 1;
		} else {
			mddev->reshape_position = MaxSector;
			mddev->delta_disks = 0;
			mddev->new_level = mddev->level;
			mddev->new_layout = mddev->layout;
			mddev->new_chunk_sectors = mddev->chunk_sectors;
		}

		if (le32_to_cpu(sb->feature_map) & MD_FEATURE_JOURNAL)
			set_bit(MD_HAS_JOURNAL, &mddev->flags);
	} else if (mddev->pers == NULL) {
		/* Insist on a good event counter while assembling, except for
		 * spares (which don't need an event count) */
		++ev1;
		if (rdev->desc_nr >= 0 &&
		    rdev->desc_nr < le32_to_cpu(sb->max_dev) &&
		    (le16_to_cpu(sb->dev_roles[rdev->desc_nr]) < MD_DISK_ROLE_MAX ||
		     le16_to_cpu(sb->dev_roles[rdev->desc_nr]) == MD_DISK_ROLE_JOURNAL))
			if (ev1 < mddev->events)
				return -EINVAL;
	} else if (mddev->bitmap) {
		/* If adding to array with a bitmap, then we can accept an
		 * older device, but not too old.
		 */
		if (ev1 < mddev->bitmap->events_cleared)
			return 0;
		if (ev1 < mddev->events)
			set_bit(Bitmap_sync, &rdev->flags);
	} else {
		if (ev1 < mddev->events)
			/* just a hot-add of a new device, leave raid_disk at -1 */
			return 0;
	}
	if (mddev->level != LEVEL_MULTIPATH) {
		int role;
		if (rdev->desc_nr < 0 ||
		    rdev->desc_nr >= le32_to_cpu(sb->max_dev)) {
			role = MD_DISK_ROLE_SPARE;
			rdev->desc_nr = -1;
		} else
			role = le16_to_cpu(sb->dev_roles[rdev->desc_nr]);
		switch(role) {
		case MD_DISK_ROLE_SPARE: /* spare */
			break;
		case MD_DISK_ROLE_FAULTY: /* faulty */
			set_bit(Faulty, &rdev->flags);
			break;
		case MD_DISK_ROLE_JOURNAL: /* journal device */
			if (!(le32_to_cpu(sb->feature_map) & MD_FEATURE_JOURNAL)) {
				/* journal device without journal feature */
				printk(KERN_WARNING
				  "md: journal device provided without journal feature, ignoring the device\n");
				return -EINVAL;
			}
			set_bit(Journal, &rdev->flags);
			rdev->journal_tail = le64_to_cpu(sb->journal_tail);
			rdev->raid_disk = 0;
			break;
		default:
			rdev->saved_raid_disk = role;
			if ((le32_to_cpu(sb->feature_map) &
			     MD_FEATURE_RECOVERY_OFFSET)) {
				rdev->recovery_offset = le64_to_cpu(sb->recovery_offset);
				if (!(le32_to_cpu(sb->feature_map) &
				      MD_FEATURE_RECOVERY_BITMAP))
					rdev->saved_raid_disk = -1;
			} else
				set_bit(In_sync, &rdev->flags);
			rdev->raid_disk = role;
			break;
		}
		if (sb->devflags & WriteMostly1)
			set_bit(WriteMostly, &rdev->flags);
		if (le32_to_cpu(sb->feature_map) & MD_FEATURE_REPLACEMENT)
			set_bit(Replacement, &rdev->flags);
	} else /* MULTIPATH are always insync */
		set_bit(In_sync, &rdev->flags);

	return 0;
}

2011-10-11 12:47:53 +07:00
|
|
|
static void super_1_sync(struct mddev *mddev, struct md_rdev *rdev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct mdp_superblock_1 *sb;
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev2;
|
2005-04-17 05:20:36 +07:00
|
|
|
int max_dev, i;
|
|
|
|
/* make rdev->sb match mddev and rdev data. */
|
|
|
|
|
2011-07-27 08:00:36 +07:00
|
|
|
sb = page_address(rdev->sb_page);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
sb->feature_map = 0;
|
|
|
|
sb->pad0 = 0;
|
2006-06-26 14:27:40 +07:00
|
|
|
sb->recovery_offset = cpu_to_le64(0);
|
2005-04-17 05:20:36 +07:00
|
|
|
memset(sb->pad3, 0, sizeof(sb->pad3));
|
|
|
|
|
|
|
|
sb->utime = cpu_to_le64((__u64)mddev->utime);
|
|
|
|
sb->events = cpu_to_le64(mddev->events);
|
|
|
|
if (mddev->in_sync)
|
|
|
|
sb->resync_offset = cpu_to_le64(mddev->recovery_cp);
|
2015-09-03 03:49:50 +07:00
|
|
|
else if (test_bit(MD_JOURNAL_CLEAN, &mddev->flags))
|
|
|
|
sb->resync_offset = cpu_to_le64(MaxSector);
|
2005-04-17 05:20:36 +07:00
|
|
|
else
|
|
|
|
sb->resync_offset = cpu_to_le64(0);
|
|
|
|
|
2006-10-22 00:24:08 +07:00
|
|
|
sb->cnt_corrected_read = cpu_to_le32(atomic_read(&rdev->corrected_errors));
|
2006-01-06 15:20:52 +07:00
|
|
|
|
2006-02-03 05:28:04 +07:00
|
|
|
sb->raid_disks = cpu_to_le32(mddev->raid_disks);
|
2009-03-31 10:33:13 +07:00
|
|
|
sb->size = cpu_to_le64(mddev->dev_sectors);
|
2009-06-18 05:45:01 +07:00
|
|
|
sb->chunksize = cpu_to_le32(mddev->chunk_sectors);
|
2009-05-26 06:40:59 +07:00
|
|
|
sb->level = cpu_to_le32(mddev->level);
|
|
|
|
sb->layout = cpu_to_le32(mddev->layout);
|
2006-02-03 05:28:04 +07:00
|
|
|
|
2011-08-25 11:43:08 +07:00
|
|
|
if (test_bit(WriteMostly, &rdev->flags))
|
|
|
|
sb->devflags |= WriteMostly1;
|
|
|
|
else
|
|
|
|
sb->devflags &= ~WriteMostly1;
|
2012-05-21 06:27:00 +07:00
|
|
|
sb->data_offset = cpu_to_le64(rdev->data_offset);
|
|
|
|
sb->data_size = cpu_to_le64(rdev->sectors);
|
2011-08-25 11:43:08 +07:00
|
|
|
|
2009-12-14 08:49:52 +07:00
|
|
|
if (mddev->bitmap && mddev->bitmap_info.file == NULL) {
|
|
|
|
sb->bitmap_offset = cpu_to_le32((__u32)mddev->bitmap_info.offset);
|
2005-09-10 06:23:51 +07:00
|
|
|
sb->feature_map = cpu_to_le32(MD_FEATURE_BITMAP_OFFSET);
|
2005-06-22 07:17:27 +07:00
|
|
|
}
|
2006-06-26 14:27:40 +07:00
|
|
|
|
2015-10-09 11:54:12 +07:00
|
|
|
if (rdev->raid_disk >= 0 && !test_bit(Journal, &rdev->flags) &&
|
2009-03-31 10:33:13 +07:00
|
|
|
!test_bit(In_sync, &rdev->flags)) {
|
2009-12-14 08:50:06 +07:00
|
|
|
sb->feature_map |=
|
|
|
|
cpu_to_le32(MD_FEATURE_RECOVERY_OFFSET);
|
|
|
|
sb->recovery_offset =
|
|
|
|
cpu_to_le64(rdev->recovery_offset);
|
md: Change handling of save_raid_disk and metadata update during recovery.
Since commit d70ed2e4fafdbef0800e739
MD: Allow restarting an interrupted incremental recovery.
we don't write out the metadata to devices while they are recovering.
This had a good reason, but has unfortunate consequences. This patch
changes things to make them work better.
At issue is what happens if the array is shut down while a recovery is
happening, particularly a bitmap-guided recovery.
Ideally the recovery should pick up where it left off.
However the metadata cannot represent the state "A recovery is in
process which is guided by the bitmap".
Before the above mentioned commit, we wrote metadata to the device
which said "this is being recovered and it is up to <here>". So after
a restart, a full recovery (not bitmap-guided) would happen from
where-ever it was up to.
After the commit the metadata wasn't updated so it still said "This
device is fully in sync with <this> event count". That leads to a
bitmap-based recovery following the whole bitmap, which should be a
lot less work than a full recovery from some starting point. So this
was an improvement.
However updates some metadata but not all leads to other problems.
In particular, the metadata written to the fully-up-to-date device
record that the array has all devices present (even though some are
recovering). So on restart, mdadm wants to find all devices and
expects them to have current event counts.
Obviously it doesn't (some have old event counts) so (when assembling
with --incremental) it waits indefinitely for the rest of the expected
devices.
It really is wrong to not update all the metadata together. Do that
is bound to cause confusion.
Instead, we should make it possible to record the truth in the
metadata. i.e. we need to be able to record that a device is being
recovered based on the bitmap.
We already have a Feature flag to say that recovery is happening. We
now add another one to say that it is a bitmap-based recovery.
With this we can remove the code that disables the write-out of
metadata on some devices.
So this patch:
- moves the setting of 'saved_raid_disk' from add_new_disk to
the validate_super methods. This makes sure it is always set
properly, both when adding a new device to an array, and when
assembling an array from a collection of devices.
- Adds a metadata flag MD_FEATURE_RECOVERY_BITMAP which is only
used if MD_FEATURE_RECOVERY_OFFSET is set, and record that a
bitmap-based recovery is allowed.
This is only present in v1.x metadata. v0.90 doesn't support
devices which are in the middle of recovery at all.
- Only skips writing metadata to Faulty devices.
- Also allows rdev state to be set to "-insync" via sysfs.
This can be used for external-metadata arrays. When the
'role' is set the device is assumed to be in-sync. If, after
setting the role, we set the state to "-insync", the role is
moved to saved_raid_disk which effectively says the device is
partly in-sync with that slot and needs a bitmap recovery.
Cc: Andrei Warkentin <andreiw@vmware.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2013-12-09 08:04:56 +07:00
|
|
|
if (rdev->saved_raid_disk >= 0 && mddev->bitmap)
|
|
|
|
sb->feature_map |=
|
|
|
|
cpu_to_le32(MD_FEATURE_RECOVERY_BITMAP);
|
2006-06-26 14:27:40 +07:00
|
|
|
}
|
2015-08-14 04:31:56 +07:00
|
|
|
/* Note: recovery_offset and journal_tail share space */
|
|
|
|
if (test_bit(Journal, &rdev->flags))
|
|
|
|
sb->journal_tail = cpu_to_le64(rdev->journal_tail);
|
2011-12-23 06:17:51 +07:00
|
|
|
if (test_bit(Replacement, &rdev->flags))
|
|
|
|
sb->feature_map |=
|
|
|
|
cpu_to_le32(MD_FEATURE_REPLACEMENT);
|
2006-06-26 14:27:40 +07:00
|
|
|
|
2006-03-27 16:18:11 +07:00
|
|
|
if (mddev->reshape_position != MaxSector) {
|
|
|
|
sb->feature_map |= cpu_to_le32(MD_FEATURE_RESHAPE_ACTIVE);
|
|
|
|
sb->reshape_position = cpu_to_le64(mddev->reshape_position);
|
|
|
|
sb->new_layout = cpu_to_le32(mddev->new_layout);
|
|
|
|
sb->delta_disks = cpu_to_le32(mddev->delta_disks);
|
|
|
|
sb->new_level = cpu_to_le32(mddev->new_level);
|
2009-06-18 05:45:27 +07:00
|
|
|
sb->new_chunk = cpu_to_le32(mddev->new_chunk_sectors);
|
2012-05-21 06:27:00 +07:00
|
|
|
if (mddev->delta_disks == 0 &&
|
|
|
|
mddev->reshape_backwards)
|
|
|
|
sb->feature_map
|
|
|
|
|= cpu_to_le32(MD_FEATURE_RESHAPE_BACKWARDS);
|
2012-05-21 06:27:00 +07:00
|
|
|
if (rdev->new_data_offset != rdev->data_offset) {
|
|
|
|
sb->feature_map
|
|
|
|
|= cpu_to_le32(MD_FEATURE_NEW_OFFSET);
|
|
|
|
sb->new_offset = cpu_to_le32((__u32)(rdev->new_data_offset
|
|
|
|
- rdev->data_offset));
|
|
|
|
}
|
2006-03-27 16:18:11 +07:00
|
|
|
}
|
2005-06-22 07:17:27 +07:00
|
|
|
|
2015-08-19 04:35:54 +07:00
|
|
|
if (mddev_is_clustered(mddev))
|
|
|
|
sb->feature_map |= cpu_to_le32(MD_FEATURE_CLUSTERED);
|
|
|
|
|
2011-07-28 08:31:47 +07:00
|
|
|
if (rdev->badblocks.count == 0)
|
|
|
|
/* Nothing to do for bad blocks */ ;
|
|
|
|
else if (sb->bblog_offset == 0)
|
|
|
|
/* Cannot record bad blocks on this device */
|
|
|
|
md_error(mddev, rdev);
|
|
|
|
else {
|
|
|
|
struct badblocks *bb = &rdev->badblocks;
|
|
|
|
u64 *bbp = (u64 *)page_address(rdev->bb_page);
|
|
|
|
u64 *p = bb->page;
|
|
|
|
sb->feature_map |= cpu_to_le32(MD_FEATURE_BAD_BLOCKS);
|
|
|
|
if (bb->changed) {
|
|
|
|
unsigned seq;
|
|
|
|
|
|
|
|
retry:
|
|
|
|
seq = read_seqbegin(&bb->lock);
|
|
|
|
|
|
|
|
memset(bbp, 0xff, PAGE_SIZE);
|
|
|
|
|
|
|
|
for (i = 0 ; i < bb->count ; i++) {
|
2012-11-08 07:56:27 +07:00
|
|
|
u64 internal_bb = p[i];
|
2011-07-28 08:31:47 +07:00
|
|
|
u64 store_bb = ((BB_OFFSET(internal_bb) << 10)
|
|
|
|
| BB_LEN(internal_bb));
|
2012-11-08 07:56:27 +07:00
|
|
|
bbp[i] = cpu_to_le64(store_bb);
|
2011-07-28 08:31:47 +07:00
|
|
|
}
|
2012-03-19 08:46:41 +07:00
|
|
|
bb->changed = 0;
|
2011-07-28 08:31:47 +07:00
|
|
|
if (read_seqretry(&bb->lock, seq))
|
|
|
|
goto retry;
|
|
|
|
|
|
|
|
bb->sector = (rdev->sb_start +
|
|
|
|
(int)le32_to_cpu(sb->bblog_offset));
|
|
|
|
bb->size = le16_to_cpu(sb->bblog_size);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
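The loop above rewrites each bad-block record from the in-memory encoding to the on-disk one (`store_bb = (BB_OFFSET(bb) << 10) | BB_LEN(bb)`). A sketch of the conversion, using the BB_* macro definitions from include/linux/badblocks.h:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

/* Internal badblocks encoding (include/linux/badblocks.h):
 * bit 63: acknowledged, bits 9-62: start sector, bits 0-8: length - 1. */
#define BB_OFFSET(x) (((x) & 0x7FFFFFFFFFFFFE00ULL) >> 9)
#define BB_LEN(x)    (((x) & 0x1FFULL) + 1)
#define BB_MAKE(a, l, ack) (((u64)(a) << 9) | ((l) - 1) | ((u64)(!!(ack)) << 63))

/* On-disk encoding written by super_1_sync: start sector in the top
 * 54 bits, length (1-512 sectors) in the low 10 bits. */
static u64 internal_to_disk(u64 internal_bb)
{
	return (BB_OFFSET(internal_bb) << 10) | BB_LEN(internal_bb);
}
```
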
2005-04-17 05:20:36 +07:00
|
|
|
max_dev = 0;
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev2, mddev)
|
2005-04-17 05:20:36 +07:00
|
|
|
if (rdev2->desc_nr+1 > max_dev)
|
|
|
|
max_dev = rdev2->desc_nr+1;
|
2007-05-24 03:58:10 +07:00
|
|
|
|
2009-08-03 07:59:57 +07:00
|
|
|
if (max_dev > le32_to_cpu(sb->max_dev)) {
|
|
|
|
int bmask;
|
2007-05-24 03:58:10 +07:00
|
|
|
sb->max_dev = cpu_to_le32(max_dev);
|
2009-08-03 07:59:57 +07:00
|
|
|
rdev->sb_size = max_dev * 2 + 256;
|
|
|
|
bmask = queue_logical_block_size(rdev->bdev->bd_disk->queue)-1;
|
|
|
|
if (rdev->sb_size & bmask)
|
|
|
|
rdev->sb_size = (rdev->sb_size | bmask) + 1;
|
2010-09-08 13:48:17 +07:00
|
|
|
} else
|
|
|
|
max_dev = le32_to_cpu(sb->max_dev);
|
|
|
|
|
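The sizing above gives the v1.x superblock a 256-byte fixed header plus a 2-byte role entry per device, then rounds up to the device's logical block size with the `(size | bmask) + 1` trick (valid because block sizes are powers of two). A self-contained sketch of that calculation:

```c
#include <assert.h>

/* Mirror of the sb_size logic above: 256-byte header plus two bytes per
 * device role, rounded up to the logical block size (a power of two) so
 * the superblock write covers whole blocks. */
static int sb_size_for(int max_dev, int logical_block_size)
{
	int bmask = logical_block_size - 1;
	int sb_size = max_dev * 2 + 256;

	if (sb_size & bmask)
		sb_size = (sb_size | bmask) + 1;
	return sb_size;
}
```
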
2005-04-17 05:20:36 +07:00
|
|
|
for (i=0; i<max_dev;i++)
|
2015-08-14 04:31:54 +07:00
|
|
|
sb->dev_roles[i] = cpu_to_le16(MD_DISK_ROLE_FAULTY);
|
2014-09-30 11:23:59 +07:00
|
|
|
|
2015-10-09 11:54:09 +07:00
|
|
|
if (test_bit(MD_HAS_JOURNAL, &mddev->flags))
|
|
|
|
sb->feature_map |= cpu_to_le32(MD_FEATURE_JOURNAL);
|
2014-09-30 11:23:59 +07:00
|
|
|
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev2, mddev) {
|
2005-04-17 05:20:36 +07:00
|
|
|
i = rdev2->desc_nr;
|
2005-11-09 12:39:31 +07:00
|
|
|
if (test_bit(Faulty, &rdev2->flags))
|
2015-08-14 04:31:54 +07:00
|
|
|
sb->dev_roles[i] = cpu_to_le16(MD_DISK_ROLE_FAULTY);
|
2005-11-09 12:39:31 +07:00
|
|
|
else if (test_bit(In_sync, &rdev2->flags))
|
2005-04-17 05:20:36 +07:00
|
|
|
sb->dev_roles[i] = cpu_to_le16(rdev2->raid_disk);
|
2015-10-09 11:54:09 +07:00
|
|
|
else if (test_bit(Journal, &rdev2->flags))
|
2015-08-14 04:31:55 +07:00
|
|
|
sb->dev_roles[i] = cpu_to_le16(MD_DISK_ROLE_JOURNAL);
|
2009-12-14 08:50:06 +07:00
|
|
|
else if (rdev2->raid_disk >= 0)
|
2006-06-26 14:27:40 +07:00
|
|
|
sb->dev_roles[i] = cpu_to_le16(rdev2->raid_disk);
|
2005-04-17 05:20:36 +07:00
|
|
|
else
|
2015-08-14 04:31:54 +07:00
|
|
|
sb->dev_roles[i] = cpu_to_le16(MD_DISK_ROLE_SPARE);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
sb->sb_csum = calc_sb_1_csum(sb);
|
|
|
|
}
|
|
|
|
|
2008-06-28 05:31:46 +07:00
|
|
|
static unsigned long long
|
2011-10-11 12:45:26 +07:00
|
|
|
super_1_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
|
2008-06-28 05:31:46 +07:00
|
|
|
{
|
|
|
|
struct mdp_superblock_1 *sb;
|
2008-07-21 11:42:12 +07:00
|
|
|
sector_t max_sectors;
|
2009-03-31 10:33:13 +07:00
|
|
|
if (num_sectors && num_sectors < rdev->mddev->dev_sectors)
|
2008-06-28 05:31:46 +07:00
|
|
|
return 0; /* component must fit device */
|
2012-05-21 06:27:00 +07:00
|
|
|
if (rdev->data_offset != rdev->new_data_offset)
|
|
|
|
return 0; /* too confusing */
|
2008-07-11 19:02:23 +07:00
|
|
|
if (rdev->sb_start < rdev->data_offset) {
|
2008-06-28 05:31:46 +07:00
|
|
|
/* minor versions 1 and 2; superblock before data */
|
2010-11-08 20:39:12 +07:00
|
|
|
max_sectors = i_size_read(rdev->bdev->bd_inode) >> 9;
|
2008-07-21 11:42:12 +07:00
|
|
|
max_sectors -= rdev->data_offset;
|
|
|
|
if (!num_sectors || num_sectors > max_sectors)
|
|
|
|
num_sectors = max_sectors;
|
2009-12-14 08:49:52 +07:00
|
|
|
} else if (rdev->mddev->bitmap_info.offset) {
|
2008-06-28 05:31:46 +07:00
|
|
|
/* minor version 0 with bitmap we can't move */
|
|
|
|
return 0;
|
|
|
|
} else {
|
|
|
|
/* minor version 0; superblock after data */
|
2008-07-11 19:02:23 +07:00
|
|
|
sector_t sb_start;
|
2010-11-08 20:39:12 +07:00
|
|
|
sb_start = (i_size_read(rdev->bdev->bd_inode) >> 9) - 8*2;
|
2008-07-11 19:02:23 +07:00
|
|
|
sb_start &= ~(sector_t)(4*2 - 1);
|
2009-03-31 10:33:13 +07:00
|
|
|
max_sectors = rdev->sectors + sb_start - rdev->sb_start;
|
2008-07-21 11:42:12 +07:00
|
|
|
if (!num_sectors || num_sectors > max_sectors)
|
|
|
|
num_sectors = max_sectors;
|
2008-07-11 19:02:23 +07:00
|
|
|
rdev->sb_start = sb_start;
|
2008-06-28 05:31:46 +07:00
|
|
|
}
|
2011-07-27 08:00:36 +07:00
|
|
|
sb = page_address(rdev->sb_page);
|
2008-07-21 11:42:12 +07:00
|
|
|
sb->data_size = cpu_to_le64(num_sectors);
|
2008-07-11 19:02:23 +07:00
|
|
|
sb->super_offset = rdev->sb_start;
|
2008-06-28 05:31:46 +07:00
|
|
|
sb->sb_csum = calc_sb_1_csum(sb);
|
2008-07-11 19:02:23 +07:00
|
|
|
md_super_write(rdev->mddev, rdev, rdev->sb_start, rdev->sb_size,
|
2008-06-28 05:31:46 +07:00
|
|
|
rdev->sb_page);
|
|
|
|
md_super_wait(rdev->mddev);
|
2010-11-24 12:36:17 +07:00
|
|
|
return num_sectors;
|
2012-05-21 06:27:00 +07:00
|
|
|
|
|
|
|
}
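For minor version 0 ("1.0") metadata, the function above places the superblock near the end of the device: 16 sectors (8K) back from the end, aligned down to an 8-sector (4K) boundary. A sketch of that placement arithmetic, with the constants copied from the `8*2` / `4*2 - 1` expressions above:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Superblock start for 1.0-style metadata: 16 sectors before the end of
 * the device, aligned down to an 8-sector boundary. Mirrors the
 * calculation in super_1_rdev_size_change. */
static sector_t sb_start_for_1_0(uint64_t dev_bytes)
{
	sector_t sb_start = (dev_bytes >> 9) - 8 * 2;

	sb_start &= ~(sector_t)(4 * 2 - 1);
	return sb_start;
}
```
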
|
|
|
|
|
|
|
|
static int
|
|
|
|
super_1_allow_new_offset(struct md_rdev *rdev,
|
|
|
|
unsigned long long new_offset)
|
|
|
|
{
|
|
|
|
/* All necessary checks on new >= old have been done */
|
|
|
|
struct bitmap *bitmap;
|
|
|
|
if (new_offset >= rdev->data_offset)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
/* with 1.0 metadata, there is no metadata to tread on
|
|
|
|
* so we can always move back */
|
|
|
|
if (rdev->mddev->minor_version == 0)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
/* otherwise we must be sure not to step on
|
|
|
|
* any metadata, so stay:
|
|
|
|
* 36K beyond start of superblock
|
|
|
|
* beyond end of badblocks
|
|
|
|
* beyond write-intent bitmap
|
|
|
|
*/
|
|
|
|
if (rdev->sb_start + (32+4)*2 > new_offset)
|
|
|
|
return 0;
|
|
|
|
bitmap = rdev->mddev->bitmap;
|
|
|
|
if (bitmap && !rdev->mddev->bitmap_info.file &&
|
|
|
|
rdev->sb_start + rdev->mddev->bitmap_info.offset +
|
2012-05-22 10:55:10 +07:00
|
|
|
bitmap->storage.file_pages * (PAGE_SIZE>>9) > new_offset)
|
2012-05-21 06:27:00 +07:00
|
|
|
return 0;
|
|
|
|
if (rdev->badblocks.sector + rdev->badblocks.size > new_offset)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return 1;
|
2008-06-28 05:31:46 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2005-05-06 06:16:09 +07:00
|
|
|
static struct super_type super_types[] = {
|
2005-04-17 05:20:36 +07:00
|
|
|
[0] = {
|
|
|
|
.name = "0.90.0",
|
|
|
|
.owner = THIS_MODULE,
|
2008-06-28 05:31:46 +07:00
|
|
|
.load_super = super_90_load,
|
|
|
|
.validate_super = super_90_validate,
|
|
|
|
.sync_super = super_90_sync,
|
|
|
|
.rdev_size_change = super_90_rdev_size_change,
|
2012-05-21 06:27:00 +07:00
|
|
|
.allow_new_offset = super_90_allow_new_offset,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
[1] = {
|
|
|
|
.name = "md-1",
|
|
|
|
.owner = THIS_MODULE,
|
2008-06-28 05:31:46 +07:00
|
|
|
.load_super = super_1_load,
|
|
|
|
.validate_super = super_1_validate,
|
|
|
|
.sync_super = super_1_sync,
|
|
|
|
.rdev_size_change = super_1_rdev_size_change,
|
2012-05-21 06:27:00 +07:00
|
|
|
.allow_new_offset = super_1_allow_new_offset,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
};
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static void sync_super(struct mddev *mddev, struct md_rdev *rdev)
|
2011-06-08 05:51:30 +07:00
|
|
|
{
|
|
|
|
if (mddev->sync_super) {
|
|
|
|
mddev->sync_super(mddev, rdev);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
BUG_ON(mddev->major_version >= ARRAY_SIZE(super_types));
|
|
|
|
|
|
|
|
super_types[mddev->major_version].sync_super(mddev, rdev);
|
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int match_mddev_units(struct mddev *mddev1, struct mddev *mddev2)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev, *rdev2;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2008-07-21 14:05:25 +07:00
|
|
|
rcu_read_lock();
|
2015-09-04 13:00:35 +07:00
|
|
|
rdev_for_each_rcu(rdev, mddev1) {
|
|
|
|
if (test_bit(Faulty, &rdev->flags) ||
|
|
|
|
test_bit(Journal, &rdev->flags) ||
|
|
|
|
rdev->raid_disk == -1)
|
|
|
|
continue;
|
|
|
|
rdev_for_each_rcu(rdev2, mddev2) {
|
|
|
|
if (test_bit(Faulty, &rdev2->flags) ||
|
|
|
|
test_bit(Journal, &rdev2->flags) ||
|
|
|
|
rdev2->raid_disk == -1)
|
|
|
|
continue;
|
2007-03-01 11:11:35 +07:00
|
|
|
if (rdev->bdev->bd_contains ==
|
2008-07-21 14:05:25 +07:00
|
|
|
rdev2->bdev->bd_contains) {
|
|
|
|
rcu_read_unlock();
|
2007-03-01 11:11:35 +07:00
|
|
|
return 1;
|
2008-07-21 14:05:25 +07:00
|
|
|
}
|
2015-09-04 13:00:35 +07:00
|
|
|
}
|
|
|
|
}
|
2008-07-21 14:05:25 +07:00
|
|
|
rcu_read_unlock();
|
2005-04-17 05:20:36 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static LIST_HEAD(pending_raid_disks);
|
|
|
|
|
2009-08-03 07:59:47 +07:00
|
|
|
/*
|
|
|
|
* Try to register data integrity profile for an mddev
|
|
|
|
*
|
|
|
|
* This is called when an array is started and after a disk has been kicked
|
|
|
|
* from the array. It only succeeds if all working and active component devices
|
|
|
|
* are integrity capable with matching profiles.
|
|
|
|
*/
|
2011-10-11 12:47:53 +07:00
|
|
|
int md_integrity_register(struct mddev *mddev)
|
2009-08-03 07:59:47 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev, *reference = NULL;
|
2009-08-03 07:59:47 +07:00
|
|
|
|
|
|
|
if (list_empty(&mddev->disks))
|
|
|
|
return 0; /* nothing to do */
|
2011-06-08 12:10:08 +07:00
|
|
|
if (!mddev->gendisk || blk_get_integrity(mddev->gendisk))
|
|
|
|
return 0; /* shouldn't register, or already is */
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2009-08-03 07:59:47 +07:00
|
|
|
/* skip spares and non-functional disks */
|
|
|
|
if (test_bit(Faulty, &rdev->flags))
|
|
|
|
continue;
|
|
|
|
if (rdev->raid_disk < 0)
|
|
|
|
continue;
|
|
|
|
if (!reference) {
|
|
|
|
/* Use the first rdev as the reference */
|
|
|
|
reference = rdev;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
/* does this rdev's profile match the reference profile? */
|
|
|
|
if (blk_integrity_compare(reference->bdev->bd_disk,
|
|
|
|
rdev->bdev->bd_disk) < 0)
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2011-03-29 07:09:12 +07:00
|
|
|
if (!reference || !bdev_get_integrity(reference->bdev))
|
|
|
|
return 0;
|
2009-08-03 07:59:47 +07:00
|
|
|
/*
|
|
|
|
* All component devices are integrity capable and have matching
|
|
|
|
* profiles, register the common profile for the md device.
|
|
|
|
*/
|
2015-10-22 00:19:49 +07:00
|
|
|
blk_integrity_register(mddev->gendisk,
|
|
|
|
bdev_get_integrity(reference->bdev));
|
|
|
|
|
2011-03-17 17:11:05 +07:00
|
|
|
printk(KERN_NOTICE "md: data integrity enabled on %s\n", mdname(mddev));
|
|
|
|
if (bioset_integrity_create(mddev->bio_set, BIO_POOL_SIZE)) {
|
|
|
|
printk(KERN_ERR "md: failed to create integrity pool for %s\n",
|
|
|
|
mdname(mddev));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2009-08-03 07:59:47 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(md_integrity_register);
|
|
|
|
|
2016-01-14 07:00:07 +07:00
|
|
|
/*
|
|
|
|
* Attempt to add an rdev, but only if it is consistent with the current
|
|
|
|
* integrity profile
|
|
|
|
*/
|
|
|
|
int md_integrity_add_rdev(struct md_rdev *rdev, struct mddev *mddev)
|
2009-03-31 10:27:02 +07:00
|
|
|
{
|
2012-10-11 09:38:58 +07:00
|
|
|
struct blk_integrity *bi_rdev;
|
|
|
|
struct blk_integrity *bi_mddev;
|
2016-01-14 07:00:07 +07:00
|
|
|
char name[BDEVNAME_SIZE];
|
2012-10-11 09:38:58 +07:00
|
|
|
|
|
|
|
if (!mddev->gendisk)
|
2016-01-14 07:00:07 +07:00
|
|
|
return 0;
|
2012-10-11 09:38:58 +07:00
|
|
|
|
|
|
|
bi_rdev = bdev_get_integrity(rdev->bdev);
|
|
|
|
bi_mddev = blk_get_integrity(mddev->gendisk);
|
2009-03-31 10:27:02 +07:00
|
|
|
|
2009-08-03 07:59:47 +07:00
|
|
|
if (!bi_mddev) /* nothing to do */
|
2016-01-14 07:00:07 +07:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (blk_integrity_compare(mddev->gendisk, rdev->bdev->bd_disk) != 0) {
|
|
|
|
printk(KERN_NOTICE "%s: incompatible integrity profile for %s\n",
|
|
|
|
mdname(mddev), bdevname(rdev->bdev, name));
|
|
|
|
return -ENXIO;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
2009-03-31 10:27:02 +07:00
|
|
|
}
|
2009-08-03 07:59:47 +07:00
|
|
|
EXPORT_SYMBOL(md_integrity_add_rdev);
|
2009-03-31 10:27:02 +07:00
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2007-03-01 11:11:35 +07:00
|
|
|
char b[BDEVNAME_SIZE];
|
2005-11-09 12:39:37 +07:00
|
|
|
struct kobject *ko;
|
2007-03-27 12:32:14 +07:00
|
|
|
int err;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2008-04-30 14:52:32 +07:00
|
|
|
/* prevent duplicates */
|
|
|
|
if (find_rdev(mddev, rdev->bdev->bd_dev))
|
|
|
|
return -EEXIST;
|
|
|
|
|
2009-03-31 10:33:13 +07:00
|
|
|
/* make sure rdev->sectors exceeds mddev->dev_sectors */
|
2015-12-21 06:51:02 +07:00
|
|
|
if (!test_bit(Journal, &rdev->flags) &&
|
|
|
|
rdev->sectors &&
|
|
|
|
(mddev->dev_sectors == 0 || rdev->sectors < mddev->dev_sectors)) {
|
2007-05-24 03:58:10 +07:00
|
|
|
if (mddev->pers) {
|
|
|
|
/* Cannot change size, so fail
|
|
|
|
* If mddev->level <= 0, then we don't care
|
|
|
|
* about aligning sizes (e.g. linear)
|
|
|
|
*/
|
|
|
|
if (mddev->level > 0)
|
|
|
|
return -ENOSPC;
|
|
|
|
} else
|
2009-03-31 10:33:13 +07:00
|
|
|
mddev->dev_sectors = rdev->sectors;
|
2006-01-06 15:20:55 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/* Verify rdev->desc_nr is unique.
|
|
|
|
* If it is -1, assign a free number, else
|
|
|
|
* check number is not in use
|
|
|
|
*/
|
2014-09-25 14:00:11 +07:00
|
|
|
rcu_read_lock();
|
2005-04-17 05:20:36 +07:00
|
|
|
if (rdev->desc_nr < 0) {
|
|
|
|
int choice = 0;
|
2014-09-25 14:00:11 +07:00
|
|
|
if (mddev->pers)
|
|
|
|
choice = mddev->raid_disks;
|
2015-04-14 22:43:55 +07:00
|
|
|
while (md_find_rdev_nr_rcu(mddev, choice))
|
2005-04-17 05:20:36 +07:00
|
|
|
choice++;
|
|
|
|
rdev->desc_nr = choice;
|
|
|
|
} else {
|
2015-04-14 22:43:55 +07:00
|
|
|
if (md_find_rdev_nr_rcu(mddev, rdev->desc_nr)) {
|
2014-09-25 14:00:11 +07:00
|
|
|
rcu_read_unlock();
|
2005-04-17 05:20:36 +07:00
|
|
|
return -EBUSY;
|
2014-09-25 14:00:11 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2014-09-25 14:00:11 +07:00
|
|
|
rcu_read_unlock();
|
2015-12-21 06:51:02 +07:00
|
|
|
if (!test_bit(Journal, &rdev->flags) &&
|
|
|
|
mddev->max_disks && rdev->desc_nr >= mddev->max_disks) {
|
2009-02-06 14:02:46 +07:00
|
|
|
printk(KERN_WARNING "md: %s: array is limited to %d devices\n",
|
|
|
|
mdname(mddev), mddev->max_disks);
|
|
|
|
return -EBUSY;
|
|
|
|
}
|
2005-11-09 12:39:35 +07:00
|
|
|
bdevname(rdev->bdev,b);
|
2015-06-26 05:02:36 +07:00
|
|
|
strreplace(b, '/', '!');
|
2007-12-18 13:05:35 +07:00
|
|
|
|
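The `strreplace(b, '/', '!')` call above exists because some block device names contain a slash (e.g. "cciss/c0d0"), which is not a legal sysfs entry name. A userspace sketch of the kernel's strreplace() helper from lib/string.c:

```c
#include <assert.h>
#include <string.h>

/* Replace every occurrence of one character in a string; returns a
 * pointer to the NUL terminator, as the kernel helper does. */
static char *strreplace(char *s, char old, char new)
{
	for (; *s; ++s)
		if (*s == old)
			*s = new;
	return s;
}
```
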
2005-04-17 05:20:36 +07:00
|
|
|
rdev->mddev = mddev;
|
2005-11-09 12:39:35 +07:00
|
|
|
printk(KERN_INFO "md: bind<%s>\n", b);
|
2005-11-09 12:39:24 +07:00
|
|
|
|
2007-12-18 13:05:35 +07:00
|
|
|
if ((err = kobject_add(&rdev->kobj, &mddev->kobj, "dev-%s", b)))
|
2007-03-27 12:32:14 +07:00
|
|
|
goto fail;
|
2005-11-09 12:39:24 +07:00
|
|
|
|
2008-08-25 17:56:12 +07:00
|
|
|
ko = &part_to_dev(rdev->bdev->bd_part)->kobj;
|
2010-06-01 16:37:23 +07:00
|
|
|
if (sysfs_create_link(&rdev->kobj, ko, "block"))
|
|
|
|
/* failure here is OK */;
|
|
|
|
rdev->sysfs_state = sysfs_get_dirent_safe(rdev->kobj.sd, "state");
|
2008-10-21 09:25:28 +07:00
|
|
|
|
2008-07-21 14:05:25 +07:00
|
|
|
list_add_rcu(&rdev->same_set, &mddev->disks);
|
2010-11-13 17:55:17 +07:00
|
|
|
bd_link_disk_holder(rdev->bdev, mddev->gendisk);
|
2009-01-09 04:31:11 +07:00
|
|
|
|
|
|
|
/* May as well allow recovery to be retried once */
|
2011-07-27 08:00:36 +07:00
|
|
|
mddev->recovery_disabled++;
|
2009-03-31 10:27:02 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
return 0;
|
2007-03-27 12:32:14 +07:00
|
|
|
|
|
|
|
fail:
|
|
|
|
printk(KERN_WARNING "md: failed to register dev-%s for %s\n",
|
|
|
|
b, mdname(mddev));
|
|
|
|
return err;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2008-02-06 16:39:56 +07:00
|
|
|
static void md_delayed_delete(struct work_struct *ws)
|
2007-04-05 09:08:18 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev = container_of(ws, struct md_rdev, del_work);
|
2007-04-05 09:08:18 +07:00
|
|
|
kobject_del(&rdev->kobj);
|
2008-02-06 16:39:56 +07:00
|
|
|
kobject_put(&rdev->kobj);
|
2007-04-05 09:08:18 +07:00
|
|
|
}
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
static void unbind_rdev_from_array(struct md_rdev *rdev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
char b[BDEVNAME_SIZE];
|
2014-09-30 12:52:29 +07:00
|
|
|
|
2011-01-15 00:43:57 +07:00
|
|
|
bd_unlink_disk_holder(rdev->bdev, rdev->mddev->gendisk);
|
2008-07-21 14:05:25 +07:00
|
|
|
list_del_rcu(&rdev->same_set);
|
2005-04-17 05:20:36 +07:00
|
|
|
printk(KERN_INFO "md: unbind<%s>\n", bdevname(rdev->bdev,b));
|
|
|
|
rdev->mddev = NULL;
|
2005-11-09 12:39:24 +07:00
|
|
|
sysfs_remove_link(&rdev->kobj, "block");
|
2008-10-21 09:25:28 +07:00
|
|
|
sysfs_put(rdev->sysfs_state);
|
|
|
|
rdev->sysfs_state = NULL;
|
2011-07-28 08:31:46 +07:00
|
|
|
rdev->badblocks.count = 0;
|
2007-04-05 09:08:18 +07:00
|
|
|
/* We need to delay this, otherwise we can deadlock when
|
2008-07-21 14:05:25 +07:00
|
|
|
* writing to 'remove' to "dev/state". We also need
|
|
|
|
* to delay it due to rcu usage.
|
2007-04-05 09:08:18 +07:00
|
|
|
*/
|
2008-07-21 14:05:25 +07:00
|
|
|
synchronize_rcu();
|
2008-02-06 16:39:56 +07:00
|
|
|
INIT_WORK(&rdev->del_work, md_delayed_delete);
|
|
|
|
kobject_get(&rdev->kobj);
|
2010-10-15 20:36:08 +07:00
|
|
|
queue_work(md_misc_wq, &rdev->del_work);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* prevent the device from being mounted, repartitioned or
|
|
|
|
* otherwise reused by a RAID array (or any other kernel
|
|
|
|
* subsystem), by bd_claiming the device.
|
|
|
|
*/
|
2011-10-11 12:45:26 +07:00
|
|
|
static int lock_rdev(struct md_rdev *rdev, dev_t dev, int shared)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
int err = 0;
|
|
|
|
struct block_device *bdev;
|
|
|
|
char b[BDEVNAME_SIZE];
|
|
|
|
|
2010-11-13 17:55:18 +07:00
|
|
|
bdev = blkdev_get_by_dev(dev, FMODE_READ|FMODE_WRITE|FMODE_EXCL,
|
2011-10-11 12:45:26 +07:00
|
|
|
shared ? (struct md_rdev *)lock_rdev : rdev);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (IS_ERR(bdev)) {
|
|
|
|
printk(KERN_ERR "md: could not open %s.\n",
|
|
|
|
__bdevname(dev, b));
|
|
|
|
return PTR_ERR(bdev);
|
|
|
|
}
|
|
|
|
rdev->bdev = bdev;
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2011-10-11 12:45:26 +07:00
|
|
|
static void unlock_rdev(struct md_rdev *rdev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct block_device *bdev = rdev->bdev;
|
|
|
|
rdev->bdev = NULL;
|
block: make blkdev_get/put() handle exclusive access
Over time, block layer has accumulated a set of APIs dealing with bdev
open, close, claim and release.
* blkdev_get/put() are the primary open and close functions.
* bd_claim/release() deal with exclusive open.
* open/close_bdev_exclusive() are combination of open and claim and
the other way around, respectively.
* bd_link/unlink_disk_holder() to create and remove holder/slave
symlinks.
* open_by_devnum() wraps bdget() + blkdev_get().
The interface is a bit confusing and the decoupling of open and claim
makes it impossible to properly guarantee exclusive access as
in-kernel open + claim sequence can disturb the existing exclusive
open even before the block layer knows the current open is for another
exclusive access. Reorganize the interface such that,
* blkdev_get() is extended to include exclusive access management.
@holder argument is added and, if @FMODE_EXCL is specified, it will
gain exclusive access atomically w.r.t. other exclusive accesses.
* blkdev_put() is similarly extended. It now takes @mode argument and
if @FMODE_EXCL is set, it releases an exclusive access. Also, when
the last exclusive claim is released, the holder/slave symlinks are
removed automatically.
* bd_claim/release() and close_bdev_exclusive() are no longer
necessary and either made static or removed.
* bd_link_disk_holder() remains the same but bd_unlink_disk_holder()
is no longer necessary and removed.
* open_bdev_exclusive() becomes a simple wrapper around lookup_bdev()
and blkdev_get(). It also has an unexpected extra bdev_read_only()
test which probably should be moved into blkdev_get().
* open_by_devnum() is modified to take @holder argument and pass it to
blkdev_get().
Most of bdev open/close operations are unified into blkdev_get/put()
and most exclusive accesses are tested atomically at the open time (as
it should). This cleans up code and removes some, both valid and
invalid, but unnecessary all the same, corner cases.
open_bdev_exclusive() and open_by_devnum() can use further cleanup -
rename to blkdev_get_by_path() and blkdev_get_by_devt() and drop
special features. Well, let's leave them for another day.
Most conversions are straight-forward. drbd conversion is a bit more
involved as there was some reordering, but the logic should stay the
same.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Neil Brown <neilb@suse.de>
Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Philipp Reisner <philipp.reisner@linbit.com>
Cc: Peter Osterlund <petero2@telia.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andreas Dilger <adilger.kernel@dilger.ca>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <joel.becker@oracle.com>
Cc: Alex Elder <aelder@sgi.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: dm-devel@redhat.com
Cc: drbd-dev@lists.linbit.com
Cc: Leo Chen <leochen@broadcom.com>
Cc: Scott Branden <sbranden@broadcom.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Cc: Joern Engel <joern@logfs.org>
Cc: reiserfs-devel@vger.kernel.org
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
2010-11-13 17:55:17 +07:00
|
|
|
blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
void md_autodetect_dev(dev_t dev);
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
static void export_rdev(struct md_rdev *rdev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
char b[BDEVNAME_SIZE];
|
2014-09-30 12:52:29 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
printk(KERN_INFO "md: export_rdev(%s)\n",
|
|
|
|
bdevname(rdev->bdev,b));
|
2012-05-22 10:54:30 +07:00
|
|
|
md_rdev_clear(rdev);
|
2005-04-17 05:20:36 +07:00
|
|
|
#ifndef MODULE
|
2008-03-05 05:29:31 +07:00
|
|
|
if (test_bit(AutoDetected, &rdev->flags))
|
|
|
|
md_autodetect_dev(rdev->bdev->bd_dev);
|
2005-04-17 05:20:36 +07:00
|
|
|
#endif
|
|
|
|
unlock_rdev(rdev);
|
2005-11-09 12:39:24 +07:00
|
|
|
kobject_put(&rdev->kobj);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2015-04-14 22:43:24 +07:00
|
|
|
void md_kick_rdev_from_array(struct md_rdev *rdev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
unbind_rdev_from_array(rdev);
|
|
|
|
export_rdev(rdev);
|
|
|
|
}
|
2015-04-14 22:43:24 +07:00
|
|
|
EXPORT_SYMBOL_GPL(md_kick_rdev_from_array);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static void export_array(struct mddev *mddev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2014-09-25 14:43:47 +07:00
|
|
|
struct md_rdev *rdev;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2014-09-25 14:43:47 +07:00
|
|
|
while (!list_empty(&mddev->disks)) {
|
|
|
|
rdev = list_first_entry(&mddev->disks, struct md_rdev,
|
|
|
|
same_set);
|
2015-04-14 22:43:24 +07:00
|
|
|
md_kick_rdev_from_array(rdev);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
mddev->raid_disks = 0;
|
|
|
|
mddev->major_version = 0;
|
|
|
|
}
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
static void sync_sbs(struct mddev *mddev, int nospares)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2006-06-26 14:27:57 +07:00
|
|
|
/* Update each superblock (in-memory image), but
|
|
|
|
* if we are allowed to, skip spares which already
|
|
|
|
* have the right event counter, or have one earlier
|
|
|
|
* (which would mean they aren't being marked as dirty
|
|
|
|
* with the rest of the array)
|
|
|
|
*/
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2006-06-26 14:27:57 +07:00
|
|
|
if (rdev->sb_events == mddev->events ||
|
|
|
|
(nospares &&
|
|
|
|
rdev->raid_disk < 0 &&
|
|
|
|
rdev->sb_events+1 == mddev->events)) {
|
|
|
|
/* Don't update this superblock */
|
|
|
|
rdev->sb_loaded = 2;
|
|
|
|
} else {
|
2011-06-08 05:51:30 +07:00
|
|
|
sync_super(mddev, rdev);
|
2006-06-26 14:27:57 +07:00
|
|
|
rdev->sb_loaded = 1;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-09-29 07:21:35 +07:00
|
|
|
static bool does_sb_need_changing(struct mddev *mddev)
|
|
|
|
{
|
|
|
|
struct md_rdev *rdev;
|
|
|
|
struct mdp_superblock_1 *sb;
|
|
|
|
int role;
|
|
|
|
|
|
|
|
/* Find a good rdev */
|
|
|
|
rdev_for_each(rdev, mddev)
|
|
|
|
if ((rdev->raid_disk >= 0) && !test_bit(Faulty, &rdev->flags))
|
|
|
|
break;
|
|
|
|
|
|
|
|
/* No good device found. */
|
|
|
|
if (!rdev)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
sb = page_address(rdev->sb_page);
|
|
|
|
/* Check if a device has become faulty or a spare become active */
|
|
|
|
rdev_for_each(rdev, mddev) {
|
|
|
|
role = le16_to_cpu(sb->dev_roles[rdev->desc_nr]);
|
|
|
|
/* Device activated? */
|
|
|
|
if (role == 0xffff && rdev->raid_disk >=0 &&
|
|
|
|
!test_bit(Faulty, &rdev->flags))
|
|
|
|
return true;
|
|
|
|
/* Device turned faulty? */
|
|
|
|
if (test_bit(Faulty, &rdev->flags) && (role < 0xfffd))
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check if any mddev parameters have changed */
|
|
|
|
if ((mddev->dev_sectors != le64_to_cpu(sb->size)) ||
|
|
|
|
(mddev->reshape_position != le64_to_cpu(sb->reshape_position)) ||
|
|
|
|
(mddev->layout != le32_to_cpu(sb->layout)) ||
|
|
|
|
(mddev->raid_disks != le32_to_cpu(sb->raid_disks)) ||
|
|
|
|
(mddev->chunk_sectors != le32_to_cpu(sb->chunksize)))
|
|
|
|
return true;
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2014-10-30 06:51:31 +07:00
|
|
|
void md_update_sb(struct mddev *mddev, int force_change)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2005-06-22 07:17:12 +07:00
|
|
|
int sync_req;
|
2006-06-26 14:27:57 +07:00
|
|
|
int nospares = 0;
|
2011-07-28 08:31:47 +07:00
|
|
|
int any_badblocks_changed = 0;
|
2015-10-12 16:21:30 +07:00
|
|
|
int ret = -1;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2013-04-24 08:42:40 +07:00
|
|
|
if (mddev->ro) {
|
|
|
|
if (force_change)
|
|
|
|
set_bit(MD_CHANGE_DEVS, &mddev->flags);
|
|
|
|
return;
|
|
|
|
}
|
2015-09-29 07:21:35 +07:00
|
|
|
|
2016-05-02 22:33:09 +07:00
|
|
|
repeat:
|
2015-09-29 07:21:35 +07:00
|
|
|
if (mddev_is_clustered(mddev)) {
|
|
|
|
if (test_and_clear_bit(MD_CHANGE_DEVS, &mddev->flags))
|
|
|
|
force_change = 1;
|
2016-05-04 09:22:13 +07:00
|
|
|
if (test_and_clear_bit(MD_CHANGE_CLEAN, &mddev->flags))
|
|
|
|
nospares = 1;
|
2015-10-12 16:21:30 +07:00
|
|
|
ret = md_cluster_ops->metadata_update_start(mddev);
|
2015-09-29 07:21:35 +07:00
|
|
|
/* Has someone else updated the sb? */
|
|
|
|
if (!does_sb_need_changing(mddev)) {
|
2015-10-12 16:21:30 +07:00
|
|
|
if (ret == 0)
|
|
|
|
md_cluster_ops->metadata_update_cancel(mddev);
|
2016-05-04 09:22:13 +07:00
|
|
|
bit_clear_unless(&mddev->flags, BIT(MD_CHANGE_PENDING),
|
|
|
|
BIT(MD_CHANGE_DEVS) |
|
|
|
|
BIT(MD_CHANGE_CLEAN));
|
2015-09-29 07:21:35 +07:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2016-05-02 22:33:09 +07:00
|
|
|
|
2010-08-16 15:09:31 +07:00
|
|
|
/* First make sure individual recovery_offsets are correct */
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2010-08-16 15:09:31 +07:00
|
|
|
if (rdev->raid_disk >= 0 &&
|
|
|
|
mddev->delta_disks >= 0 &&
|
2015-10-09 11:54:12 +07:00
|
|
|
!test_bit(Journal, &rdev->flags) &&
|
2010-08-16 15:09:31 +07:00
|
|
|
!test_bit(In_sync, &rdev->flags) &&
|
|
|
|
mddev->curr_resync_completed > rdev->recovery_offset)
|
|
|
|
rdev->recovery_offset = mddev->curr_resync_completed;
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
}
|
2010-08-30 14:33:33 +07:00
|
|
|
if (!mddev->persistent) {
|
2010-08-30 14:33:34 +07:00
|
|
|
clear_bit(MD_CHANGE_CLEAN, &mddev->flags);
|
2010-08-16 15:09:31 +07:00
|
|
|
clear_bit(MD_CHANGE_DEVS, &mddev->flags);
|
2011-07-28 08:31:48 +07:00
|
|
|
if (!mddev->external) {
|
2010-10-28 13:30:20 +07:00
|
|
|
clear_bit(MD_CHANGE_PENDING, &mddev->flags);
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2011-07-28 08:31:48 +07:00
|
|
|
if (rdev->badblocks.changed) {
|
2012-03-19 08:46:41 +07:00
|
|
|
rdev->badblocks.changed = 0;
|
2015-12-25 09:20:34 +07:00
|
|
|
ack_all_badblocks(&rdev->badblocks);
|
2011-07-28 08:31:48 +07:00
|
|
|
md_error(mddev, rdev);
|
|
|
|
}
|
|
|
|
clear_bit(Blocked, &rdev->flags);
|
|
|
|
clear_bit(BlockedBadBlocks, &rdev->flags);
|
|
|
|
wake_up(&rdev->blocked_wait);
|
|
|
|
}
|
|
|
|
}
|
2010-08-16 15:09:31 +07:00
|
|
|
wake_up(&mddev->sb_wait);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2014-12-15 08:56:56 +07:00
|
|
|
spin_lock(&mddev->lock);
|
2006-08-27 15:23:49 +07:00
|
|
|
|
2015-12-21 06:51:01 +07:00
|
|
|
mddev->utime = ktime_get_real_seconds();
|
2010-08-16 15:09:31 +07:00
|
|
|
|
2006-10-03 15:15:46 +07:00
|
|
|
if (test_and_clear_bit(MD_CHANGE_DEVS, &mddev->flags))
|
|
|
|
force_change = 1;
|
|
|
|
if (test_and_clear_bit(MD_CHANGE_CLEAN, &mddev->flags))
|
|
|
|
/* just a clean <-> dirty transition, possibly leave spares alone,
|
|
|
|
* though if events isn't the right even/odd, we will have to do
|
|
|
|
* spares after all
|
|
|
|
*/
|
|
|
|
nospares = 1;
|
|
|
|
if (force_change)
|
|
|
|
nospares = 0;
|
|
|
|
if (mddev->degraded)
|
2006-08-27 15:23:49 +07:00
|
|
|
/* If the array is degraded, then skipping spares is both
|
|
|
|
* dangerous and fairly pointless.
|
|
|
|
* Dangerous because a device that was removed from the array
|
|
|
|
* might have an event_count that still looks up-to-date,
|
|
|
|
* so it can be re-added without a resync.
|
|
|
|
* Pointless because if there are any spares to skip,
|
|
|
|
* then a recovery will happen and soon that array won't
|
|
|
|
* be degraded any more and the spare can go back to sleep then.
|
|
|
|
*/
|
2006-10-03 15:15:46 +07:00
|
|
|
nospares = 0;
|
2006-08-27 15:23:49 +07:00
|
|
|
|
2005-06-22 07:17:12 +07:00
|
|
|
sync_req = mddev->in_sync;
|
2006-06-26 14:27:57 +07:00
|
|
|
|
|
|
|
/* If this is just a dirty<->clean transition, and the array is clean
|
|
|
|
* and 'events' is odd, we can roll back to the previous clean state */
|
2006-10-03 15:15:46 +07:00
|
|
|
if (nospares
|
2006-06-26 14:27:57 +07:00
|
|
|
&& (mddev->in_sync && mddev->recovery_cp == MaxSector)
|
2010-05-18 06:28:43 +07:00
|
|
|
&& mddev->can_decrease_events
|
|
|
|
&& mddev->events != 1) {
|
2006-06-26 14:27:57 +07:00
|
|
|
mddev->events--;
|
2010-05-18 06:28:43 +07:00
|
|
|
mddev->can_decrease_events = 0;
|
|
|
|
} else {
|
2006-06-26 14:27:57 +07:00
|
|
|
/* otherwise we have to go forward and ... */
|
|
|
|
mddev->events ++;
|
2010-05-18 06:28:43 +07:00
|
|
|
mddev->can_decrease_events = nospares;
|
2006-06-26 14:27:57 +07:00
|
|
|
}

	/*
	 * This 64-bit counter should never wrap.
	 * Either we are in around ~1 trillion A.C., assuming
	 * 1 reboot per second, or we have a bug...
	 */
	WARN_ON(mddev->events == 0);

	rdev_for_each(rdev, mddev) {
		if (rdev->badblocks.changed)
			any_badblocks_changed++;
		if (test_bit(Faulty, &rdev->flags))
			set_bit(FaultRecorded, &rdev->flags);
	}

	sync_sbs(mddev, nospares);
	spin_unlock(&mddev->lock);

	pr_debug("md: updating %s RAID superblock on device (in sync %d)\n",
		 mdname(mddev), mddev->in_sync);

	bitmap_update_sb(mddev->bitmap);
	rdev_for_each(rdev, mddev) {
		char b[BDEVNAME_SIZE];

		if (rdev->sb_loaded != 1)
			continue; /* no noise on spare devices */

		if (!test_bit(Faulty, &rdev->flags)) {
			md_super_write(mddev, rdev,
				       rdev->sb_start, rdev->sb_size,
				       rdev->sb_page);
			pr_debug("md: (write) %s's sb offset: %llu\n",
				 bdevname(rdev->bdev, b),
				 (unsigned long long)rdev->sb_start);
			rdev->sb_events = mddev->events;
			if (rdev->badblocks.size) {
				md_super_write(mddev, rdev,
					       rdev->badblocks.sector,
					       rdev->badblocks.size << 9,
					       rdev->bb_page);
				rdev->badblocks.size = 0;
			}

		} else
			pr_debug("md: %s (skipping faulty)\n",
				 bdevname(rdev->bdev, b));

		if (mddev->level == LEVEL_MULTIPATH)
			/* only need to write one superblock... */
			break;
	}
	md_super_wait(mddev);
	/* if there was a failure, MD_CHANGE_DEVS was set, and we re-write super */

	if (mddev_is_clustered(mddev) && ret == 0)
		md_cluster_ops->metadata_update_finish(mddev);

	if (mddev->in_sync != sync_req ||
	    !bit_clear_unless(&mddev->flags, BIT(MD_CHANGE_PENDING),
			       BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_CLEAN)))
		/* have to write it out again */
		goto repeat;
	wake_up(&mddev->sb_wait);
	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		sysfs_notify(&mddev->kobj, NULL, "sync_completed");

	rdev_for_each(rdev, mddev) {
		if (test_and_clear_bit(FaultRecorded, &rdev->flags))
			clear_bit(Blocked, &rdev->flags);

		if (any_badblocks_changed)
			ack_all_badblocks(&rdev->badblocks);
		clear_bit(BlockedBadBlocks, &rdev->flags);
		wake_up(&rdev->blocked_wait);
	}
}
EXPORT_SYMBOL(md_update_sb);

static int add_bound_rdev(struct md_rdev *rdev)
{
	struct mddev *mddev = rdev->mddev;
	int err = 0;
	bool add_journal = test_bit(Journal, &rdev->flags);

	if (!mddev->pers->hot_remove_disk || add_journal) {
		/* If there is hot_add_disk but no hot_remove_disk
		 * then added disks are for geometry changes,
		 * and should be added immediately.
		 */
		super_types[mddev->major_version].
			validate_super(mddev, rdev);
		if (add_journal)
			mddev_suspend(mddev);
		err = mddev->pers->hot_add_disk(mddev, rdev);
		if (add_journal)
			mddev_resume(mddev);
		if (err) {
			md_kick_rdev_from_array(rdev);
			return err;
		}
	}
	sysfs_notify_dirent_safe(rdev->sysfs_state);

	set_bit(MD_CHANGE_DEVS, &mddev->flags);
	if (mddev->degraded)
		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
	md_new_event(mddev);
	md_wakeup_thread(mddev->thread);
	return 0;
}

/* words written to sysfs files may, or may not, be \n terminated.
 * We want to accept them either way. For this we use cmd_match.
 */
static int cmd_match(const char *cmd, const char *str)
{
	/* See if cmd, written into a sysfs file, matches
	 * str.  They must either be the same, or cmd can
	 * have a trailing newline
	 */
	while (*cmd && *str && *cmd == *str) {
		cmd++;
		str++;
	}
	if (*cmd == '\n')
		cmd++;
	if (*str || *cmd)
		return 0;
	return 1;
}

struct rdev_sysfs_entry {
	struct attribute attr;
	ssize_t (*show)(struct md_rdev *, char *);
	ssize_t (*store)(struct md_rdev *, const char *, size_t);
};

static ssize_t
state_show(struct md_rdev *rdev, char *page)
{
	char *sep = "";
	size_t len = 0;
	unsigned long flags = ACCESS_ONCE(rdev->flags);

	if (test_bit(Faulty, &flags) ||
	    rdev->badblocks.unacked_exist) {
		len += sprintf(page+len, "%sfaulty", sep);
		sep = ",";
	}
	if (test_bit(In_sync, &flags)) {
		len += sprintf(page+len, "%sin_sync", sep);
		sep = ",";
	}
	if (test_bit(Journal, &flags)) {
		len += sprintf(page+len, "%sjournal", sep);
		sep = ",";
	}
	if (test_bit(WriteMostly, &flags)) {
		len += sprintf(page+len, "%swrite_mostly", sep);
		sep = ",";
	}
	if (test_bit(Blocked, &flags) ||
	    (rdev->badblocks.unacked_exist
	     && !test_bit(Faulty, &flags))) {
		len += sprintf(page+len, "%sblocked", sep);
		sep = ",";
	}
	if (!test_bit(Faulty, &flags) &&
	    !test_bit(Journal, &flags) &&
	    !test_bit(In_sync, &flags)) {
		len += sprintf(page+len, "%sspare", sep);
		sep = ",";
	}
	if (test_bit(WriteErrorSeen, &flags)) {
		len += sprintf(page+len, "%swrite_error", sep);
		sep = ",";
	}
	if (test_bit(WantReplacement, &flags)) {
		len += sprintf(page+len, "%swant_replacement", sep);
		sep = ",";
	}
	if (test_bit(Replacement, &flags)) {
		len += sprintf(page+len, "%sreplacement", sep);
		sep = ",";
	}

	return len + sprintf(page+len, "\n");
}

static ssize_t
state_store(struct md_rdev *rdev, const char *buf, size_t len)
{
	/* can write
	 * faulty  - simulates an error
	 * remove  - disconnects the device
	 * writemostly - sets write_mostly
	 * -writemostly - clears write_mostly
	 * blocked - sets the Blocked flags
	 * -blocked - clears the Blocked and possibly simulates an error
	 * insync - sets Insync providing device isn't active
	 * -insync - clear Insync for a device with a slot assigned,
	 * so that it gets rebuilt based on bitmap
	 * write_error - sets WriteErrorSeen
	 * -write_error - clears WriteErrorSeen
	 */
	int err = -EINVAL;
	if (cmd_match(buf, "faulty") && rdev->mddev->pers) {
		md_error(rdev->mddev, rdev);
		if (test_bit(Faulty, &rdev->flags))
			err = 0;
		else
			err = -EBUSY;
	} else if (cmd_match(buf, "remove")) {
		if (rdev->mddev->pers) {
			clear_bit(Blocked, &rdev->flags);
			remove_and_add_spares(rdev->mddev, rdev);
		}
		if (rdev->raid_disk >= 0)
			err = -EBUSY;
		else {
			struct mddev *mddev = rdev->mddev;
			err = 0;
			if (mddev_is_clustered(mddev))
				err = md_cluster_ops->remove_disk(mddev, rdev);

			if (err == 0) {
				md_kick_rdev_from_array(rdev);
				if (mddev->pers)
					md_update_sb(mddev, 1);
				md_new_event(mddev);
			}
		}
	} else if (cmd_match(buf, "writemostly")) {
		set_bit(WriteMostly, &rdev->flags);
		err = 0;
	} else if (cmd_match(buf, "-writemostly")) {
		clear_bit(WriteMostly, &rdev->flags);
		err = 0;
	} else if (cmd_match(buf, "blocked")) {
		set_bit(Blocked, &rdev->flags);
		err = 0;
	} else if (cmd_match(buf, "-blocked")) {
		if (!test_bit(Faulty, &rdev->flags) &&
		    rdev->badblocks.unacked_exist) {
			/* metadata handler doesn't understand badblocks,
			 * so we need to fail the device
			 */
			md_error(rdev->mddev, rdev);
		}
		clear_bit(Blocked, &rdev->flags);
		clear_bit(BlockedBadBlocks, &rdev->flags);
		wake_up(&rdev->blocked_wait);
		set_bit(MD_RECOVERY_NEEDED, &rdev->mddev->recovery);
		md_wakeup_thread(rdev->mddev->thread);

		err = 0;
	} else if (cmd_match(buf, "insync") && rdev->raid_disk == -1) {
		set_bit(In_sync, &rdev->flags);
		err = 0;
	} else if (cmd_match(buf, "-insync") && rdev->raid_disk >= 0 &&
		   !test_bit(Journal, &rdev->flags)) {
		if (rdev->mddev->pers == NULL) {
			clear_bit(In_sync, &rdev->flags);
			rdev->saved_raid_disk = rdev->raid_disk;
			rdev->raid_disk = -1;
			err = 0;
		}
	} else if (cmd_match(buf, "write_error")) {
		set_bit(WriteErrorSeen, &rdev->flags);
		err = 0;
	} else if (cmd_match(buf, "-write_error")) {
		clear_bit(WriteErrorSeen, &rdev->flags);
		err = 0;
	} else if (cmd_match(buf, "want_replacement")) {
		/* Any non-spare device that is not a replacement can
		 * become want_replacement at any time, but we then need to
		 * check if recovery is needed.
		 */
		if (rdev->raid_disk >= 0 &&
		    !test_bit(Journal, &rdev->flags) &&
		    !test_bit(Replacement, &rdev->flags))
			set_bit(WantReplacement, &rdev->flags);
		set_bit(MD_RECOVERY_NEEDED, &rdev->mddev->recovery);
		md_wakeup_thread(rdev->mddev->thread);
		err = 0;
	} else if (cmd_match(buf, "-want_replacement")) {
		/* Clearing 'want_replacement' is always allowed.
		 * Once replacement starts it is too late though.
		 */
		err = 0;
		clear_bit(WantReplacement, &rdev->flags);
	} else if (cmd_match(buf, "replacement")) {
		/* Can only set a device as a replacement when array has not
		 * yet been started.  Once running, replacement is automatic
		 * from spares, or by assigning 'slot'.
		 */
		if (rdev->mddev->pers)
			err = -EBUSY;
		else {
			set_bit(Replacement, &rdev->flags);
			err = 0;
		}
	} else if (cmd_match(buf, "-replacement")) {
		/* Similarly, can only clear Replacement before start */
		if (rdev->mddev->pers)
			err = -EBUSY;
		else {
			clear_bit(Replacement, &rdev->flags);
			err = 0;
		}
	} else if (cmd_match(buf, "re-add")) {
		if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1)) {
			/* clear_bit is performed _after_ all the devices
			 * have their local Faulty bit cleared. If any writes
			 * happen in the meantime in the local node, they
			 * will land in the local bitmap, which will be synced
			 * by this node eventually
			 */
			if (!mddev_is_clustered(rdev->mddev) ||
			    (err = md_cluster_ops->gather_bitmaps(rdev)) == 0) {
				clear_bit(Faulty, &rdev->flags);
				err = add_bound_rdev(rdev);
			}
		} else
			err = -EBUSY;
	}
	if (!err)
		sysfs_notify_dirent_safe(rdev->sysfs_state);
	return err ? err : len;
}
static struct rdev_sysfs_entry rdev_state =
__ATTR_PREALLOC(state, S_IRUGO|S_IWUSR, state_show, state_store);

static ssize_t
errors_show(struct md_rdev *rdev, char *page)
{
	return sprintf(page, "%d\n", atomic_read(&rdev->corrected_errors));
}

static ssize_t
errors_store(struct md_rdev *rdev, const char *buf, size_t len)
{
	unsigned int n;
	int rv;

	rv = kstrtouint(buf, 10, &n);
	if (rv < 0)
		return rv;
	atomic_set(&rdev->corrected_errors, n);
	return len;
}
static struct rdev_sysfs_entry rdev_errors =
__ATTR(errors, S_IRUGO|S_IWUSR, errors_show, errors_store);

static ssize_t
slot_show(struct md_rdev *rdev, char *page)
{
	if (test_bit(Journal, &rdev->flags))
		return sprintf(page, "journal\n");
	else if (rdev->raid_disk < 0)
		return sprintf(page, "none\n");
	else
		return sprintf(page, "%d\n", rdev->raid_disk);
}

static ssize_t
slot_store(struct md_rdev *rdev, const char *buf, size_t len)
{
	int slot;
	int err;

	if (test_bit(Journal, &rdev->flags))
		return -EBUSY;
	if (strncmp(buf, "none", 4)==0)
		slot = -1;
	else {
		err = kstrtouint(buf, 10, (unsigned int *)&slot);
		if (err < 0)
			return err;
	}
	if (rdev->mddev->pers && slot == -1) {
		/* Setting 'slot' on an active array requires also
		 * updating the 'rd%d' link, and communicating
		 * with the personality with ->hot_*_disk.
		 * For now we only support removing
		 * failed/spare devices.  This normally happens automatically,
		 * but not when the metadata is externally managed.
		 */
		if (rdev->raid_disk == -1)
			return -EEXIST;
		/* personality does all needed checks */
		if (rdev->mddev->pers->hot_remove_disk == NULL)
			return -EINVAL;
		clear_bit(Blocked, &rdev->flags);
		remove_and_add_spares(rdev->mddev, rdev);
		if (rdev->raid_disk >= 0)
			return -EBUSY;
		set_bit(MD_RECOVERY_NEEDED, &rdev->mddev->recovery);
		md_wakeup_thread(rdev->mddev->thread);
	} else if (rdev->mddev->pers) {
		/* Activating a spare .. or possibly reactivating
		 * if we ever get bitmaps working here.
		 */
		int err;

		if (rdev->raid_disk != -1)
			return -EBUSY;

		if (test_bit(MD_RECOVERY_RUNNING, &rdev->mddev->recovery))
			return -EBUSY;

		if (rdev->mddev->pers->hot_add_disk == NULL)
			return -EINVAL;

		if (slot >= rdev->mddev->raid_disks &&
		    slot >= rdev->mddev->raid_disks + rdev->mddev->delta_disks)
			return -ENOSPC;

		rdev->raid_disk = slot;
		if (test_bit(In_sync, &rdev->flags))
			rdev->saved_raid_disk = slot;
		else
			rdev->saved_raid_disk = -1;
		clear_bit(In_sync, &rdev->flags);
		clear_bit(Bitmap_sync, &rdev->flags);
		err = rdev->mddev->pers->
			hot_add_disk(rdev->mddev, rdev);
		if (err) {
			rdev->raid_disk = -1;
			return err;
		} else
			sysfs_notify_dirent_safe(rdev->sysfs_state);
		if (sysfs_link_rdev(rdev->mddev, rdev))
			/* failure here is OK */;
		/* don't wakeup anyone, leave that to userspace. */
	} else {
		if (slot >= rdev->mddev->raid_disks &&
		    slot >= rdev->mddev->raid_disks + rdev->mddev->delta_disks)
			return -ENOSPC;
		rdev->raid_disk = slot;
		/* assume it is working */
		clear_bit(Faulty, &rdev->flags);
		clear_bit(WriteMostly, &rdev->flags);
		set_bit(In_sync, &rdev->flags);
		sysfs_notify_dirent_safe(rdev->sysfs_state);
	}
	return len;
}
|
|
|
|
|
|
|
|
static struct rdev_sysfs_entry rdev_slot =
|
2006-07-10 18:44:18 +07:00
|
|
|
__ATTR(slot, S_IRUGO|S_IWUSR, slot_show, slot_store);
|

static ssize_t
offset_show(struct md_rdev *rdev, char *page)
{
	return sprintf(page, "%llu\n", (unsigned long long)rdev->data_offset);
}

static ssize_t
offset_store(struct md_rdev *rdev, const char *buf, size_t len)
{
	unsigned long long offset;
	if (kstrtoull(buf, 10, &offset) < 0)
		return -EINVAL;
	if (rdev->mddev->pers && rdev->raid_disk >= 0)
		return -EBUSY;
	if (rdev->sectors && rdev->mddev->external)
		/* Must set offset before size, so overlap checks
		 * can be sane */
		return -EBUSY;
	rdev->data_offset = offset;
	rdev->new_data_offset = offset;
	return len;
}

static struct rdev_sysfs_entry rdev_offset =
__ATTR(offset, S_IRUGO|S_IWUSR, offset_show, offset_store);

static ssize_t new_offset_show(struct md_rdev *rdev, char *page)
{
	return sprintf(page, "%llu\n",
		       (unsigned long long)rdev->new_data_offset);
}

static ssize_t new_offset_store(struct md_rdev *rdev,
				const char *buf, size_t len)
{
	unsigned long long new_offset;
	struct mddev *mddev = rdev->mddev;

	if (kstrtoull(buf, 10, &new_offset) < 0)
		return -EINVAL;

	if (mddev->sync_thread ||
	    test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		return -EBUSY;
	if (new_offset == rdev->data_offset)
		/* reset is always permitted */
		;
	else if (new_offset > rdev->data_offset) {
		/* must not push array size beyond rdev_sectors */
		if (new_offset - rdev->data_offset
		    + mddev->dev_sectors > rdev->sectors)
			return -E2BIG;
	}
	/* Metadata worries about other space details. */

	/* decreasing the offset is inconsistent with a backwards
	 * reshape.
	 */
	if (new_offset < rdev->data_offset &&
	    mddev->reshape_backwards)
		return -EINVAL;
	/* Increasing offset is inconsistent with forwards
	 * reshape.  reshape_direction should be set to
	 * 'backwards' first.
	 */
	if (new_offset > rdev->data_offset &&
	    !mddev->reshape_backwards)
		return -EINVAL;

	if (mddev->pers && mddev->persistent &&
	    !super_types[mddev->major_version]
	    .allow_new_offset(rdev, new_offset))
		return -E2BIG;
	rdev->new_data_offset = new_offset;
	if (new_offset > rdev->data_offset)
		mddev->reshape_backwards = 1;
	else if (new_offset < rdev->data_offset)
		mddev->reshape_backwards = 0;

	return len;
}
static struct rdev_sysfs_entry rdev_new_offset =
__ATTR(new_offset, S_IRUGO|S_IWUSR, new_offset_show, new_offset_store);

static ssize_t
rdev_size_show(struct md_rdev *rdev, char *page)
{
	return sprintf(page, "%llu\n", (unsigned long long)rdev->sectors / 2);
}

static int overlaps(sector_t s1, sector_t l1, sector_t s2, sector_t l2)
{
	/* check if two start/length pairs overlap */
	if (s1+l1 <= s2)
		return 0;
	if (s2+l2 <= s1)
		return 0;
	return 1;
}
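The `overlaps()` helper above is pure integer arithmetic on half-open `[start, start+len)` ranges, so it can be exercised directly in userspace. This is a minimal sketch, with `sector_t` modelled as `uint64_t` (an assumption; the kernel typedef depends on configuration):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;	/* stand-in for the kernel typedef */

/* Same logic as overlaps() in md.c: two [start, start+len) ranges
 * intersect unless one ends at or before the other begins. */
static int overlaps(sector_t s1, sector_t l1, sector_t s2, sector_t l2)
{
	if (s1 + l1 <= s2)
		return 0;
	if (s2 + l2 <= s1)
		return 0;
	return 1;
}
```

Note that a zero-length range never overlaps anything, which is the conservative answer for the rdev size check that calls this.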

static int strict_blocks_to_sectors(const char *buf, sector_t *sectors)
{
	unsigned long long blocks;
	sector_t new;

	if (kstrtoull(buf, 10, &blocks) < 0)
		return -EINVAL;

	if (blocks & 1ULL << (8 * sizeof(blocks) - 1))
		return -EINVAL; /* sector conversion overflow */

	new = blocks * 2;
	if (new != blocks * 2)
		return -EINVAL; /* unsigned long long to sector_t overflow */

	*sectors = new;
	return 0;
}
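The conversion above encodes the rule that one 1 KiB "block" is two 512-byte sectors, so the value must survive doubling. A userspace sketch of just the overflow-checked conversion (parsing dropped, `sector_t` assumed to be 64-bit, and `blocks_to_sectors` is an illustrative name, not a kernel function):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

typedef uint64_t sector_t;	/* assumption about the kernel config */

/* If the top bit of 'blocks' is set, blocks * 2 would overflow the
 * 64-bit type, which is what the kernel check guards against. */
static int blocks_to_sectors(unsigned long long blocks, sector_t *sectors)
{
	if (blocks & 1ULL << (8 * sizeof(blocks) - 1))
		return -EINVAL;
	*sectors = (sector_t)blocks * 2;
	return 0;
}
```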

static ssize_t
rdev_size_store(struct md_rdev *rdev, const char *buf, size_t len)
{
	struct mddev *my_mddev = rdev->mddev;
	sector_t oldsectors = rdev->sectors;
	sector_t sectors;

	if (test_bit(Journal, &rdev->flags))
		return -EBUSY;
	if (strict_blocks_to_sectors(buf, &sectors) < 0)
		return -EINVAL;
	if (rdev->data_offset != rdev->new_data_offset)
		return -EINVAL; /* too confusing */
	if (my_mddev->pers && rdev->raid_disk >= 0) {
		if (my_mddev->persistent) {
			sectors = super_types[my_mddev->major_version].
				rdev_size_change(rdev, sectors);
			if (!sectors)
				return -EBUSY;
		} else if (!sectors)
			sectors = (i_size_read(rdev->bdev->bd_inode) >> 9) -
				rdev->data_offset;
		if (!my_mddev->pers->resize)
			/* Cannot change size for RAID0 or Linear etc */
			return -EINVAL;
	}
	if (sectors < my_mddev->dev_sectors)
		return -EINVAL; /* component must fit device */

	rdev->sectors = sectors;
	if (sectors > oldsectors && my_mddev->external) {
		/* Need to check that all other rdevs with the same
		 * ->bdev do not overlap.  'rcu' is sufficient to walk
		 * the rdev lists safely.
		 * This check does not provide a hard guarantee, it
		 * just helps avoid dangerous mistakes.
		 */
		struct mddev *mddev;
		int overlap = 0;
		struct list_head *tmp;

		rcu_read_lock();
		for_each_mddev(mddev, tmp) {
			struct md_rdev *rdev2;

			rdev_for_each(rdev2, mddev)
				if (rdev->bdev == rdev2->bdev &&
				    rdev != rdev2 &&
				    overlaps(rdev->data_offset, rdev->sectors,
					     rdev2->data_offset,
					     rdev2->sectors)) {
					overlap = 1;
					break;
				}
			if (overlap) {
				mddev_put(mddev);
				break;
			}
		}
		rcu_read_unlock();
		if (overlap) {
			/* Someone else could have slipped in a size
			 * change here, but doing so is just silly.
			 * We put oldsectors back because we *know* it is
			 * safe, and trust userspace not to race with
			 * itself
			 */
			rdev->sectors = oldsectors;
			return -EBUSY;
		}
	}
	return len;
}

static struct rdev_sysfs_entry rdev_size =
__ATTR(size, S_IRUGO|S_IWUSR, rdev_size_show, rdev_size_store);

static ssize_t recovery_start_show(struct md_rdev *rdev, char *page)
{
	unsigned long long recovery_start = rdev->recovery_offset;

	if (test_bit(In_sync, &rdev->flags) ||
	    recovery_start == MaxSector)
		return sprintf(page, "none\n");

	return sprintf(page, "%llu\n", recovery_start);
}

static ssize_t recovery_start_store(struct md_rdev *rdev, const char *buf, size_t len)
{
	unsigned long long recovery_start;

	if (cmd_match(buf, "none"))
		recovery_start = MaxSector;
	else if (kstrtoull(buf, 10, &recovery_start))
		return -EINVAL;

	if (rdev->mddev->pers &&
	    rdev->raid_disk >= 0)
		return -EBUSY;

	rdev->recovery_offset = recovery_start;
	if (recovery_start == MaxSector)
		set_bit(In_sync, &rdev->flags);
	else
		clear_bit(In_sync, &rdev->flags);
	return len;
}

static struct rdev_sysfs_entry rdev_recovery_start =
__ATTR(recovery_start, S_IRUGO|S_IWUSR, recovery_start_show, recovery_start_store);

/* sysfs access to bad-blocks list.
 * We present two files.
 * 'bad-blocks' lists sector numbers and lengths of ranges that
 *    are recorded as bad.  The list is truncated to fit within
 *    the one-page limit of sysfs.
 *    Writing "sector length" to this file adds an acknowledged
 *    bad block list.
 * 'unacknowledged-bad-blocks' lists bad blocks that have not yet
 *    been acknowledged.  Writing to this file adds bad blocks
 *    without acknowledging them.  This is largely for testing.
 */
static ssize_t bb_show(struct md_rdev *rdev, char *page)
{
	return badblocks_show(&rdev->badblocks, page, 0);
}
static ssize_t bb_store(struct md_rdev *rdev, const char *page, size_t len)
{
	int rv = badblocks_store(&rdev->badblocks, page, len, 0);
	/* Maybe that ack was all we needed */
	if (test_and_clear_bit(BlockedBadBlocks, &rdev->flags))
		wake_up(&rdev->blocked_wait);
	return rv;
}
static struct rdev_sysfs_entry rdev_bad_blocks =
__ATTR(bad_blocks, S_IRUGO|S_IWUSR, bb_show, bb_store);

static ssize_t ubb_show(struct md_rdev *rdev, char *page)
{
	return badblocks_show(&rdev->badblocks, page, 1);
}
static ssize_t ubb_store(struct md_rdev *rdev, const char *page, size_t len)
{
	return badblocks_store(&rdev->badblocks, page, len, 1);
}
static struct rdev_sysfs_entry rdev_unack_bad_blocks =
__ATTR(unacknowledged_bad_blocks, S_IRUGO|S_IWUSR, ubb_show, ubb_store);

static struct attribute *rdev_default_attrs[] = {
	&rdev_state.attr,
	&rdev_errors.attr,
	&rdev_slot.attr,
	&rdev_offset.attr,
	&rdev_new_offset.attr,
	&rdev_size.attr,
	&rdev_recovery_start.attr,
	&rdev_bad_blocks.attr,
	&rdev_unack_bad_blocks.attr,
	NULL,
};
static ssize_t
rdev_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
{
	struct rdev_sysfs_entry *entry = container_of(attr, struct rdev_sysfs_entry, attr);
	struct md_rdev *rdev = container_of(kobj, struct md_rdev, kobj);

	if (!entry->show)
		return -EIO;
	if (!rdev->mddev)
		return -EBUSY;
	return entry->show(rdev, page);
}

static ssize_t
rdev_attr_store(struct kobject *kobj, struct attribute *attr,
	      const char *page, size_t length)
{
	struct rdev_sysfs_entry *entry = container_of(attr, struct rdev_sysfs_entry, attr);
	struct md_rdev *rdev = container_of(kobj, struct md_rdev, kobj);
	ssize_t rv;
	struct mddev *mddev = rdev->mddev;

	if (!entry->store)
		return -EIO;
	if (!capable(CAP_SYS_ADMIN))
		return -EACCES;
	rv = mddev ? mddev_lock(mddev) : -EBUSY;
	if (!rv) {
		if (rdev->mddev == NULL)
			rv = -EBUSY;
		else
			rv = entry->store(rdev, page, length);
		mddev_unlock(mddev);
	}
	return rv;
}

static void rdev_free(struct kobject *ko)
{
	struct md_rdev *rdev = container_of(ko, struct md_rdev, kobj);
	kfree(rdev);
}
static const struct sysfs_ops rdev_sysfs_ops = {
	.show		= rdev_attr_show,
	.store		= rdev_attr_store,
};
static struct kobj_type rdev_ktype = {
	.release	= rdev_free,
	.sysfs_ops	= &rdev_sysfs_ops,
	.default_attrs	= rdev_default_attrs,
};

int md_rdev_init(struct md_rdev *rdev)
{
	rdev->desc_nr = -1;
	rdev->saved_raid_disk = -1;
	rdev->raid_disk = -1;
	rdev->flags = 0;
	rdev->data_offset = 0;
	rdev->new_data_offset = 0;
	rdev->sb_events = 0;
	rdev->last_read_error = 0;
	rdev->sb_loaded = 0;
	rdev->bb_page = NULL;
	atomic_set(&rdev->nr_pending, 0);
	atomic_set(&rdev->read_errors, 0);
	atomic_set(&rdev->corrected_errors, 0);

	INIT_LIST_HEAD(&rdev->same_set);
	init_waitqueue_head(&rdev->blocked_wait);

	/* Add space to store bad block list.
	 * This reserves the space even on arrays where it cannot
	 * be used - I wonder if that matters
	 */
	return badblocks_init(&rdev->badblocks, 0);
}
EXPORT_SYMBOL_GPL(md_rdev_init);

/*
 * Import a device. If 'super_format' >= 0, then sanity check the superblock
 *
 * mark the device faulty if:
 *
 *   - the device is nonexistent (zero size)
 *   - the device has no valid superblock
 *
 * a faulty rdev _never_ has rdev->sb set.
 */
static struct md_rdev *md_import_device(dev_t newdev, int super_format, int super_minor)
{
	char b[BDEVNAME_SIZE];
	int err;
	struct md_rdev *rdev;
	sector_t size;

	rdev = kzalloc(sizeof(*rdev), GFP_KERNEL);
	if (!rdev) {
		printk(KERN_ERR "md: could not alloc mem for new device!\n");
		return ERR_PTR(-ENOMEM);
	}

	err = md_rdev_init(rdev);
	if (err)
		goto abort_free;
	err = alloc_disk_sb(rdev);
	if (err)
		goto abort_free;

	err = lock_rdev(rdev, newdev, super_format == -2);
	if (err)
		goto abort_free;

	kobject_init(&rdev->kobj, &rdev_ktype);

	size = i_size_read(rdev->bdev->bd_inode) >> BLOCK_SIZE_BITS;
	if (!size) {
		printk(KERN_WARNING
			"md: %s has zero or unknown size, marking faulty!\n",
			bdevname(rdev->bdev,b));
		err = -EINVAL;
		goto abort_free;
	}

	if (super_format >= 0) {
		err = super_types[super_format].
			load_super(rdev, NULL, super_minor);
		if (err == -EINVAL) {
			printk(KERN_WARNING
				"md: %s does not have a valid v%d.%d "
				"superblock, not importing!\n",
				bdevname(rdev->bdev,b),
				super_format, super_minor);
			goto abort_free;
		}
		if (err < 0) {
			printk(KERN_WARNING
				"md: could not read %s's sb, not importing!\n",
				bdevname(rdev->bdev,b));
			goto abort_free;
		}
	}

	return rdev;

abort_free:
	if (rdev->bdev)
		unlock_rdev(rdev);
	md_rdev_clear(rdev);
	kfree(rdev);
	return ERR_PTR(err);
}

/*
 * Check a full RAID array for plausibility
 */

static void analyze_sbs(struct mddev *mddev)
{
	int i;
	struct md_rdev *rdev, *freshest, *tmp;
	char b[BDEVNAME_SIZE];

	freshest = NULL;
	rdev_for_each_safe(rdev, tmp, mddev)
		switch (super_types[mddev->major_version].
			load_super(rdev, freshest, mddev->minor_version)) {
		case 1:
			freshest = rdev;
			break;
		case 0:
			break;
		default:
			printk(KERN_ERR
				"md: fatal superblock inconsistency in %s"
				" -- removing from array\n",
				bdevname(rdev->bdev,b));
			md_kick_rdev_from_array(rdev);
		}

	super_types[mddev->major_version].
		validate_super(mddev, freshest);

	i = 0;
	rdev_for_each_safe(rdev, tmp, mddev) {
		if (mddev->max_disks &&
		    (rdev->desc_nr >= mddev->max_disks ||
		     i > mddev->max_disks)) {
			printk(KERN_WARNING
			       "md: %s: %s: only %d devices permitted\n",
			       mdname(mddev), bdevname(rdev->bdev, b),
			       mddev->max_disks);
			md_kick_rdev_from_array(rdev);
			continue;
		}
		if (rdev != freshest) {
			if (super_types[mddev->major_version].
			    validate_super(mddev, rdev)) {
				printk(KERN_WARNING "md: kicking non-fresh %s"
					" from array!\n",
					bdevname(rdev->bdev,b));
				md_kick_rdev_from_array(rdev);
				continue;
			}
		}
		if (mddev->level == LEVEL_MULTIPATH) {
			rdev->desc_nr = i++;
			rdev->raid_disk = rdev->desc_nr;
			set_bit(In_sync, &rdev->flags);
		} else if (rdev->raid_disk >=
			   (mddev->raid_disks - min(0, mddev->delta_disks)) &&
			   !test_bit(Journal, &rdev->flags)) {
			rdev->raid_disk = -1;
			clear_bit(In_sync, &rdev->flags);
		}
	}
}

/* Read a fixed-point number.
 * Numbers in sysfs attributes should be in "standard" units where
 * possible, so time should be in seconds.
 * However we internally use a much smaller unit such as
 * milliseconds or jiffies.
 * This function takes a decimal number with a possible fractional
 * component, and produces an integer which is the result of
 * multiplying that number by 10^'scale'.
 * all without any floating-point arithmetic.
 */
int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale)
{
	unsigned long result = 0;
	long decimals = -1;
	while (isdigit(*cp) || (*cp == '.' && decimals < 0)) {
		if (*cp == '.')
			decimals = 0;
		else if (decimals < scale) {
			unsigned int value;
			value = *cp - '0';
			result = result * 10 + value;
			if (decimals >= 0)
				decimals++;
		}
		cp++;
	}
	if (*cp == '\n')
		cp++;
	if (*cp)
		return -EINVAL;
	if (decimals < 0)
		decimals = 0;
	while (decimals < scale) {
		result *= 10;
		decimals++;
	}
	*res = result;
	return 0;
}
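Because `strict_strtoul_scaled()` uses only integer arithmetic and `isdigit()`, it ports to userspace almost verbatim. A sketch for experimentation (the kernel's `kstrtoul`-style error convention kept; like the original, it does not check for multiplication overflow):

```c
#include <assert.h>
#include <ctype.h>
#include <errno.h>

/* Userspace copy of strict_strtoul_scaled(): parse "12.345" into
 * 12345 when scale == 3, i.e. multiply by 10^scale, without any
 * floating point.  A trailing newline is tolerated. */
static int strtoul_scaled(const char *cp, unsigned long *res, int scale)
{
	unsigned long result = 0;
	long decimals = -1;

	while (isdigit((unsigned char)*cp) || (*cp == '.' && decimals < 0)) {
		if (*cp == '.')
			decimals = 0;
		else if (decimals < scale) {
			result = result * 10 + (*cp - '0');
			if (decimals >= 0)
				decimals++;
		}
		cp++;
	}
	if (*cp == '\n')
		cp++;
	if (*cp)
		return -EINVAL;
	if (decimals < 0)
		decimals = 0;
	while (decimals < scale) {
		result *= 10;
		decimals++;
	}
	*res = result;
	return 0;
}
```

Digits beyond the requested scale are silently dropped rather than rounded, which matches the kernel behaviour.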

static ssize_t
safe_delay_show(struct mddev *mddev, char *page)
{
	int msec = (mddev->safemode_delay*1000)/HZ;
	return sprintf(page, "%d.%03d\n", msec/1000, msec%1000);
}
static ssize_t
safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
{
	unsigned long msec;

	if (mddev_is_clustered(mddev)) {
		pr_info("md: Safemode is disabled for clustered mode\n");
		return -EINVAL;
	}

	if (strict_strtoul_scaled(cbuf, &msec, 3) < 0)
		return -EINVAL;
	if (msec == 0)
		mddev->safemode_delay = 0;
	else {
		unsigned long old_delay = mddev->safemode_delay;
		unsigned long new_delay = (msec*HZ)/1000;

		if (new_delay == 0)
			new_delay = 1;
		mddev->safemode_delay = new_delay;
		if (new_delay < old_delay || old_delay == 0)
			mod_timer(&mddev->safemode_timer, jiffies+1);
	}
	return len;
}
static struct md_sysfs_entry md_safe_delay =
__ATTR(safe_mode_delay, S_IRUGO|S_IWUSR,safe_delay_show, safe_delay_store);
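The subtle step in `safe_delay_store()` is the millisecond-to-jiffies conversion: on low-HZ kernels a small nonzero request could truncate to 0 jiffies, which would read as "safemode disabled", so it is clamped up to 1. A sketch of just that conversion, with HZ fixed at 250 purely for illustration:

```c
#include <assert.h>

#define HZ 250	/* assumed tick rate, for illustration only */

/* Mirror of the conversion in safe_delay_store(): 0 ms means
 * "disabled"; any nonzero request yields at least one jiffy. */
static unsigned long msec_to_safemode_delay(unsigned long msec)
{
	unsigned long new_delay = (msec * HZ) / 1000;

	if (msec != 0 && new_delay == 0)
		new_delay = 1;
	return new_delay;
}
```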

static ssize_t
level_show(struct mddev *mddev, char *page)
{
	struct md_personality *p;
	int ret;
	spin_lock(&mddev->lock);
	p = mddev->pers;
	if (p)
		ret = sprintf(page, "%s\n", p->name);
	else if (mddev->clevel[0])
		ret = sprintf(page, "%s\n", mddev->clevel);
	else if (mddev->level != LEVEL_NONE)
		ret = sprintf(page, "%d\n", mddev->level);
	else
		ret = 0;
	spin_unlock(&mddev->lock);
	return ret;
}

static ssize_t
level_store(struct mddev *mddev, const char *buf, size_t len)
{
	char clevel[16];
	ssize_t rv;
	size_t slen = len;
	struct md_personality *pers, *oldpers;
	long level;
	void *priv, *oldpriv;
	struct md_rdev *rdev;

	if (slen == 0 || slen >= sizeof(clevel))
		return -EINVAL;

	rv = mddev_lock(mddev);
	if (rv)
		return rv;

	if (mddev->pers == NULL) {
		strncpy(mddev->clevel, buf, slen);
		if (mddev->clevel[slen-1] == '\n')
			slen--;
		mddev->clevel[slen] = 0;
		mddev->level = LEVEL_NONE;
		rv = len;
		goto out_unlock;
	}
	rv = -EROFS;
	if (mddev->ro)
		goto out_unlock;

	/* request to change the personality.  Need to ensure:
	 *  - array is not engaged in resync/recovery/reshape
	 *  - old personality can be suspended
	 *  - new personality will access other array.
	 */

	rv = -EBUSY;
	if (mddev->sync_thread ||
	    test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
	    mddev->reshape_position != MaxSector ||
	    mddev->sysfs_active)
		goto out_unlock;

	rv = -EINVAL;
	if (!mddev->pers->quiesce) {
		printk(KERN_WARNING "md: %s: %s does not support online personality change\n",
		       mdname(mddev), mddev->pers->name);
		goto out_unlock;
	}

	/* Now find the new personality */
	strncpy(clevel, buf, slen);
	if (clevel[slen-1] == '\n')
		slen--;
	clevel[slen] = 0;
	if (kstrtol(clevel, 10, &level))
		level = LEVEL_NONE;

	if (request_module("md-%s", clevel) != 0)
		request_module("md-level-%s", clevel);
	spin_lock(&pers_lock);
	pers = find_pers(level, clevel);
	if (!pers || !try_module_get(pers->owner)) {
		spin_unlock(&pers_lock);
		printk(KERN_WARNING "md: personality %s not loaded\n", clevel);
		rv = -EINVAL;
		goto out_unlock;
	}
	spin_unlock(&pers_lock);

	if (pers == mddev->pers) {
		/* Nothing to do! */
		module_put(pers->owner);
		rv = len;
		goto out_unlock;
	}
	if (!pers->takeover) {
		module_put(pers->owner);
		printk(KERN_WARNING "md: %s: %s does not support personality takeover\n",
		       mdname(mddev), clevel);
		rv = -EINVAL;
		goto out_unlock;
	}

	rdev_for_each(rdev, mddev)
		rdev->new_raid_disk = rdev->raid_disk;

	/* ->takeover must set new_* and/or delta_disks
	 * if it succeeds, and may set them when it fails.
	 */
	priv = pers->takeover(mddev);
	if (IS_ERR(priv)) {
		mddev->new_level = mddev->level;
		mddev->new_layout = mddev->layout;
		mddev->new_chunk_sectors = mddev->chunk_sectors;
		mddev->raid_disks -= mddev->delta_disks;
		mddev->delta_disks = 0;
		mddev->reshape_backwards = 0;
		module_put(pers->owner);
		printk(KERN_WARNING "md: %s: %s would not accept array\n",
		       mdname(mddev), clevel);
		rv = PTR_ERR(priv);
		goto out_unlock;
	}

	/* Looks like we have a winner */
	mddev_suspend(mddev);
	mddev_detach(mddev);

	spin_lock(&mddev->lock);
	oldpers = mddev->pers;
	oldpriv = mddev->private;
	mddev->pers = pers;
	mddev->private = priv;
	strlcpy(mddev->clevel, pers->name, sizeof(mddev->clevel));
	mddev->level = mddev->new_level;
	mddev->layout = mddev->new_layout;
	mddev->chunk_sectors = mddev->new_chunk_sectors;
	mddev->delta_disks = 0;
	mddev->reshape_backwards = 0;
	mddev->degraded = 0;
	spin_unlock(&mddev->lock);

	if (oldpers->sync_request == NULL &&
	    mddev->external) {
		/* We are converting from a no-redundancy array
		 * to a redundancy array and metadata is managed
		 * externally so we need to be sure that writes
		 * won't block due to a need to transition
		 *      clean->dirty
		 * until external management is started.
		 */
		mddev->in_sync = 0;
		mddev->safemode_delay = 0;
		mddev->safemode = 0;
	}

	oldpers->free(mddev, oldpriv);

	if (oldpers->sync_request == NULL &&
	    pers->sync_request != NULL) {
		/* need to add the md_redundancy_group */
		if (sysfs_create_group(&mddev->kobj, &md_redundancy_group))
			printk(KERN_WARNING
			       "md: cannot register extra attributes for %s\n",
			       mdname(mddev));
		mddev->sysfs_action = sysfs_get_dirent(mddev->kobj.sd, "sync_action");
	}
	if (oldpers->sync_request != NULL &&
	    pers->sync_request == NULL) {
		/* need to remove the md_redundancy_group */
		if (mddev->to_remove == NULL)
			mddev->to_remove = &md_redundancy_group;
	}

	module_put(oldpers->owner);

	rdev_for_each(rdev, mddev) {
		if (rdev->raid_disk < 0)
			continue;
		if (rdev->new_raid_disk >= mddev->raid_disks)
			rdev->new_raid_disk = -1;
		if (rdev->new_raid_disk == rdev->raid_disk)
			continue;
		sysfs_unlink_rdev(mddev, rdev);
	}
	rdev_for_each(rdev, mddev) {
		if (rdev->raid_disk < 0)
			continue;
		if (rdev->new_raid_disk == rdev->raid_disk)
			continue;
		rdev->raid_disk = rdev->new_raid_disk;
		if (rdev->raid_disk < 0)
			clear_bit(In_sync, &rdev->flags);
		else {
			if (sysfs_link_rdev(mddev, rdev))
				printk(KERN_WARNING "md: cannot register rd%d"
				       " for %s after level change\n",
				       rdev->raid_disk, mdname(mddev));
		}
	}

	if (pers->sync_request == NULL) {
		/* this is now an array without redundancy, so
		 * it must always be in_sync
		 */
		mddev->in_sync = 1;
		del_timer_sync(&mddev->safemode_timer);
	}
	blk_set_stacking_limits(&mddev->queue->limits);
	pers->run(mddev);
	set_bit(MD_CHANGE_DEVS, &mddev->flags);
	mddev_resume(mddev);
if (!mddev->thread)
|
|
|
|
md_update_sb(mddev, 1);
|
2010-04-14 14:17:39 +07:00
|
|
|
sysfs_notify(&mddev->kobj, NULL, "level");
|
2010-05-02 08:14:57 +07:00
|
|
|
md_new_event(mddev);
|
2014-12-15 08:57:01 +07:00
|
|
|
rv = len;
|
|
|
|
out_unlock:
|
|
|
|
mddev_unlock(mddev);
|
2006-01-06 15:20:51 +07:00
|
|
|
return rv;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct md_sysfs_entry md_level =
|
2006-07-10 18:44:18 +07:00
|
|
|
__ATTR(level, S_IRUGO|S_IWUSR, level_show, level_store);
|
2005-11-09 12:39:23 +07:00
|
|
|
|
2006-06-26 14:27:59 +07:00
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
layout_show(struct mddev *mddev, char *page)
|
2006-06-26 14:27:59 +07:00
|
|
|
{
|
|
|
|
/* just a number, not meaningful for all levels */
|
2007-05-09 16:35:38 +07:00
|
|
|
if (mddev->reshape_position != MaxSector &&
|
|
|
|
mddev->layout != mddev->new_layout)
|
|
|
|
return sprintf(page, "%d (%d)\n",
|
|
|
|
mddev->new_layout, mddev->layout);
|
2006-06-26 14:27:59 +07:00
|
|
|
return sprintf(page, "%d\n", mddev->layout);
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
layout_store(struct mddev *mddev, const char *buf, size_t len)
|
2006-06-26 14:27:59 +07:00
|
|
|
{
|
2015-05-16 18:02:38 +07:00
|
|
|
unsigned int n;
|
2014-12-15 08:57:01 +07:00
|
|
|
int err;
|
2006-06-26 14:27:59 +07:00
|
|
|
|
2015-05-16 18:02:38 +07:00
|
|
|
err = kstrtouint(buf, 10, &n);
|
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2014-12-15 08:57:01 +07:00
|
|
|
err = mddev_lock(mddev);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2006-06-26 14:27:59 +07:00
|
|
|
|
2009-03-31 10:56:41 +07:00
|
|
|
if (mddev->pers) {
|
2009-06-18 05:47:55 +07:00
|
|
|
if (mddev->pers->check_reshape == NULL)
|
2014-12-15 08:57:01 +07:00
|
|
|
err = -EBUSY;
|
|
|
|
else if (mddev->ro)
|
|
|
|
err = -EROFS;
|
|
|
|
else {
|
|
|
|
mddev->new_layout = n;
|
|
|
|
err = mddev->pers->check_reshape(mddev);
|
|
|
|
if (err)
|
|
|
|
mddev->new_layout = mddev->layout;
|
2009-06-18 05:47:42 +07:00
|
|
|
}
|
2009-03-31 10:56:41 +07:00
|
|
|
} else {
|
2007-05-09 16:35:38 +07:00
|
|
|
mddev->new_layout = n;
|
2009-03-31 10:56:41 +07:00
|
|
|
if (mddev->reshape_position == MaxSector)
|
|
|
|
mddev->layout = n;
|
|
|
|
}
|
2014-12-15 08:57:01 +07:00
|
|
|
mddev_unlock(mddev);
|
|
|
|
return err ?: len;
|
2006-06-26 14:27:59 +07:00
|
|
|
}
|
|
|
|
static struct md_sysfs_entry md_layout =
|
2006-07-10 18:44:18 +07:00
|
|
|
__ATTR(layout, S_IRUGO|S_IWUSR, layout_show, layout_store);
|
2006-06-26 14:27:59 +07:00
|
|
|
|
2005-11-09 12:39:23 +07:00
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
raid_disks_show(struct mddev *mddev, char *page)
|
2005-11-09 12:39:23 +07:00
|
|
|
{
|
2005-11-09 12:39:45 +07:00
|
|
|
if (mddev->raid_disks == 0)
|
|
|
|
return 0;
|
2007-05-09 16:35:38 +07:00
|
|
|
if (mddev->reshape_position != MaxSector &&
|
|
|
|
mddev->delta_disks != 0)
|
|
|
|
return sprintf(page, "%d (%d)\n", mddev->raid_disks,
|
|
|
|
mddev->raid_disks - mddev->delta_disks);
|
2005-11-09 12:39:23 +07:00
|
|
|
return sprintf(page, "%d\n", mddev->raid_disks);
|
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int update_raid_disks(struct mddev *mddev, int raid_disks);
|
2006-01-06 15:20:54 +07:00
|
|
|
|
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
raid_disks_store(struct mddev *mddev, const char *buf, size_t len)
|
2006-01-06 15:20:54 +07:00
|
|
|
{
|
2015-05-16 18:02:38 +07:00
|
|
|
unsigned int n;
|
2014-12-15 08:57:01 +07:00
|
|
|
int err;
|
2006-01-06 15:20:54 +07:00
|
|
|
|
2015-05-16 18:02:38 +07:00
|
|
|
err = kstrtouint(buf, 10, &n);
|
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2006-01-06 15:20:54 +07:00
|
|
|
|
2014-12-15 08:57:01 +07:00
|
|
|
err = mddev_lock(mddev);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2006-01-06 15:20:54 +07:00
|
|
|
if (mddev->pers)
|
2014-12-15 08:57:01 +07:00
|
|
|
err = update_raid_disks(mddev, n);
|
2007-05-09 16:35:38 +07:00
|
|
|
else if (mddev->reshape_position != MaxSector) {
|
2012-05-21 06:27:00 +07:00
|
|
|
struct md_rdev *rdev;
|
2007-05-09 16:35:38 +07:00
|
|
|
int olddisks = mddev->raid_disks - mddev->delta_disks;
|
2012-05-21 06:27:00 +07:00
|
|
|
|
2014-12-15 08:57:01 +07:00
|
|
|
err = -EINVAL;
|
2012-05-21 06:27:00 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
|
|
|
if (olddisks < n &&
|
|
|
|
rdev->data_offset < rdev->new_data_offset)
|
2014-12-15 08:57:01 +07:00
|
|
|
goto out_unlock;
|
2012-05-21 06:27:00 +07:00
|
|
|
if (olddisks > n &&
|
|
|
|
rdev->data_offset > rdev->new_data_offset)
|
2014-12-15 08:57:01 +07:00
|
|
|
goto out_unlock;
|
2012-05-21 06:27:00 +07:00
|
|
|
}
|
2014-12-15 08:57:01 +07:00
|
|
|
err = 0;
|
2007-05-09 16:35:38 +07:00
|
|
|
mddev->delta_disks = n - olddisks;
|
|
|
|
mddev->raid_disks = n;
|
2012-05-21 06:27:00 +07:00
|
|
|
mddev->reshape_backwards = (mddev->delta_disks < 0);
|
2007-05-09 16:35:38 +07:00
|
|
|
} else
|
2006-01-06 15:20:54 +07:00
|
|
|
mddev->raid_disks = n;
|
2014-12-15 08:57:01 +07:00
|
|
|
out_unlock:
|
|
|
|
mddev_unlock(mddev);
|
|
|
|
return err ? err : len;
|
2006-01-06 15:20:54 +07:00
|
|
|
}
|
|
|
|
static struct md_sysfs_entry md_raid_disks =
|
2006-07-10 18:44:18 +07:00
|
|
|
__ATTR(raid_disks, S_IRUGO|S_IWUSR, raid_disks_show, raid_disks_store);
|
2005-11-09 12:39:23 +07:00
|
|
|
|
2006-01-06 15:20:47 +07:00
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
chunk_size_show(struct mddev *mddev, char *page)
|
2006-01-06 15:20:47 +07:00
|
|
|
{
|
2007-05-09 16:35:38 +07:00
|
|
|
if (mddev->reshape_position != MaxSector &&
|
2009-06-18 05:45:27 +07:00
|
|
|
mddev->chunk_sectors != mddev->new_chunk_sectors)
|
|
|
|
return sprintf(page, "%d (%d)\n",
|
|
|
|
mddev->new_chunk_sectors << 9,
|
2009-06-18 05:45:01 +07:00
|
|
|
mddev->chunk_sectors << 9);
|
|
|
|
return sprintf(page, "%d\n", mddev->chunk_sectors << 9);
|
2006-01-06 15:20:47 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
chunk_size_store(struct mddev *mddev, const char *buf, size_t len)
|
2006-01-06 15:20:47 +07:00
|
|
|
{
|
2015-05-16 18:02:38 +07:00
|
|
|
unsigned long n;
|
2014-12-15 08:57:01 +07:00
|
|
|
int err;
|
2006-01-06 15:20:47 +07:00
|
|
|
|
2015-05-16 18:02:38 +07:00
|
|
|
err = kstrtoul(buf, 10, &n);
|
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2006-01-06 15:20:47 +07:00
|
|
|
|
2014-12-15 08:57:01 +07:00
|
|
|
err = mddev_lock(mddev);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2009-03-31 10:56:41 +07:00
|
|
|
if (mddev->pers) {
|
2009-06-18 05:47:55 +07:00
|
|
|
if (mddev->pers->check_reshape == NULL)
|
2014-12-15 08:57:01 +07:00
|
|
|
err = -EBUSY;
|
|
|
|
else if (mddev->ro)
|
|
|
|
err = -EROFS;
|
|
|
|
else {
|
|
|
|
mddev->new_chunk_sectors = n >> 9;
|
|
|
|
err = mddev->pers->check_reshape(mddev);
|
|
|
|
if (err)
|
|
|
|
mddev->new_chunk_sectors = mddev->chunk_sectors;
|
2009-06-18 05:47:42 +07:00
|
|
|
}
|
2009-03-31 10:56:41 +07:00
|
|
|
} else {
|
2009-06-18 05:45:27 +07:00
|
|
|
mddev->new_chunk_sectors = n >> 9;
|
2009-03-31 10:56:41 +07:00
|
|
|
if (mddev->reshape_position == MaxSector)
|
2009-06-18 05:45:01 +07:00
|
|
|
mddev->chunk_sectors = n >> 9;
|
2009-03-31 10:56:41 +07:00
|
|
|
}
|
2014-12-15 08:57:01 +07:00
|
|
|
mddev_unlock(mddev);
|
|
|
|
return err ?: len;
|
2006-01-06 15:20:47 +07:00
|
|
|
}
|
|
|
|
static struct md_sysfs_entry md_chunk_size =
|
2006-07-10 18:44:18 +07:00
|
|
|
__ATTR(chunk_size, S_IRUGO|S_IWUSR, chunk_size_show, chunk_size_store);
|
2006-01-06 15:20:47 +07:00
|
|
|
|
2006-06-26 14:28:00 +07:00
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
resync_start_show(struct mddev *mddev, char *page)
|
2006-06-26 14:28:00 +07:00
|
|
|
{
|
2009-03-31 11:24:32 +07:00
|
|
|
if (mddev->recovery_cp == MaxSector)
|
|
|
|
return sprintf(page, "none\n");
|
2006-06-26 14:28:00 +07:00
|
|
|
return sprintf(page, "%llu\n", (unsigned long long)mddev->recovery_cp);
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
resync_start_store(struct mddev *mddev, const char *buf, size_t len)
|
2006-06-26 14:28:00 +07:00
|
|
|
{
|
2015-05-16 18:02:38 +07:00
|
|
|
unsigned long long n;
|
2014-12-15 08:57:01 +07:00
|
|
|
int err;
|
2015-05-16 18:02:38 +07:00
|
|
|
|
|
|
|
if (cmd_match(buf, "none"))
|
|
|
|
n = MaxSector;
|
|
|
|
else {
|
|
|
|
err = kstrtoull(buf, 10, &n);
|
|
|
|
if (err < 0)
|
|
|
|
return err;
|
|
|
|
if (n != (sector_t)n)
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2006-06-26 14:28:00 +07:00
|
|
|
|
2014-12-15 08:57:01 +07:00
|
|
|
err = mddev_lock(mddev);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2011-05-11 12:52:21 +07:00
|
|
|
if (mddev->pers && !test_bit(MD_RECOVERY_FROZEN, &mddev->recovery))
|
2014-12-15 08:57:01 +07:00
|
|
|
err = -EBUSY;
|
2006-06-26 14:28:00 +07:00
|
|
|
|
2014-12-15 08:57:01 +07:00
|
|
|
if (!err) {
|
|
|
|
mddev->recovery_cp = n;
|
|
|
|
if (mddev->pers)
|
|
|
|
set_bit(MD_CHANGE_CLEAN, &mddev->flags);
|
|
|
|
}
|
|
|
|
mddev_unlock(mddev);
|
|
|
|
return err ?: len;
|
2006-06-26 14:28:00 +07:00
|
|
|
}
|
|
|
|
static struct md_sysfs_entry md_resync_start =
|
2014-09-30 05:53:05 +07:00
|
|
|
__ATTR_PREALLOC(resync_start, S_IRUGO|S_IWUSR,
|
|
|
|
resync_start_show, resync_start_store);
|
2006-06-26 14:28:00 +07:00
|
|
|
|
[PATCH] md: Set/get state of array via sysfs
This allows the state of an md/array to be directly controlled via sysfs and
adds the ability to stop an array without tearing it down.
Array states/settings:
clear
No devices, no size, no level
Equivalent to STOP_ARRAY ioctl
inactive
May have some settings, but array is not active
all IO results in error
When written, doesn't tear down array, but just stops it
suspended (not supported yet)
All IO requests will block. The array can be reconfigured.
Writing this, if accepted, will block until array is quiescent
readonly
no resync can happen. no superblocks get written.
write requests fail
read-auto
like readonly, but behaves like 'clean' on a write request.
clean - no pending writes, but otherwise active.
When written to inactive array, starts without resync
If a write request arrives then
if metadata is known, mark 'dirty' and switch to 'active'.
if not known, block and switch to write-pending
If written to an active array that has pending writes, then fails.
active
fully active: IO and resync can be happening.
When written to inactive array, starts with resync
write-pending (not supported yet)
clean, but writes are blocked waiting for 'active' to be written.
active-idle
like active, but no writes have been seen for a while (100msec).
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-26 14:27:58 +07:00
|
|
|
/*
|
|
|
|
* The array state can be:
|
|
|
|
*
|
|
|
|
* clear
|
|
|
|
* No devices, no size, no level
|
|
|
|
* Equivalent to STOP_ARRAY ioctl
|
|
|
|
* inactive
|
|
|
|
* May have some settings, but array is not active
|
|
|
|
* all IO results in error
|
|
|
|
* When written, doesn't tear down array, but just stops it
|
|
|
|
* suspended (not supported yet)
|
|
|
|
* All IO requests will block. The array can be reconfigured.
|
2008-03-26 03:00:53 +07:00
|
|
|
* Writing this, if accepted, will block until array is quiescent
|
2006-06-26 14:27:58 +07:00
|
|
|
* readonly
|
|
|
|
* no resync can happen. no superblocks get written.
|
|
|
|
* write requests fail
|
|
|
|
* read-auto
|
|
|
|
* like readonly, but behaves like 'clean' on a write request.
|
|
|
|
*
|
|
|
|
* clean - no pending writes, but otherwise active.
|
|
|
|
* When written to inactive array, starts without resync
|
|
|
|
* If a write request arrives then
|
|
|
|
* if metadata is known, mark 'dirty' and switch to 'active'.
|
|
|
|
* if not known, block and switch to write-pending
|
|
|
|
* If written to an active array that has pending writes, then fails.
|
|
|
|
* active
|
|
|
|
* fully active: IO and resync can be happening.
|
|
|
|
* When written to inactive array, starts with resync
|
|
|
|
*
|
|
|
|
* write-pending
|
|
|
|
* clean, but writes are blocked waiting for 'active' to be written.
|
|
|
|
*
|
|
|
|
* active-idle
|
|
|
|
* like active, but no writes have been seen for a while (100msec).
|
|
|
|
*
|
|
|
|
*/
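The states documented above are driven entirely through sysfs. A typical interaction might look like the following sketch (the device name `md0` and therefore the exact path are assumptions; the attribute name `array_state` is the one registered by this file):

```shell
# Read the current state of an assembled array (path assumes md0)
cat /sys/block/md0/md/array_state

# Mark a quiesced array clean, or stop it without tearing it down
echo clean > /sys/block/md0/md/array_state
echo inactive > /sys/block/md0/md/array_state
```

Writes that the state machine rejects (for example "clean" while writes are pending) fail with EBUSY, per array_state_store() below.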
|
|
|
|
enum array_state { clear, inactive, suspended, readonly, read_auto, clean, active,
|
|
|
|
write_pending, active_idle, bad_word};
|
2006-06-26 14:28:01 +07:00
|
|
|
static char *array_states[] = {
|
2006-06-26 14:27:58 +07:00
|
|
|
"clear", "inactive", "suspended", "readonly", "read-auto", "clean", "active",
|
|
|
|
"write-pending", "active-idle", NULL };
|
|
|
|
|
|
|
|
static int match_word(const char *word, char **list)
|
|
|
|
{
|
|
|
|
int n;
|
|
|
|
for (n=0; list[n]; n++)
|
|
|
|
if (cmd_match(word, list[n]))
|
|
|
|
break;
|
|
|
|
return n;
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
array_state_show(struct mddev *mddev, char *page)
|
2006-06-26 14:27:58 +07:00
|
|
|
{
|
|
|
|
enum array_state st = inactive;
|
|
|
|
|
|
|
|
if (mddev->pers)
|
|
|
|
switch(mddev->ro) {
|
|
|
|
case 1:
|
|
|
|
st = readonly;
|
|
|
|
break;
|
|
|
|
case 2:
|
|
|
|
st = read_auto;
|
|
|
|
break;
|
|
|
|
case 0:
|
|
|
|
if (mddev->in_sync)
|
|
|
|
st = clean;
|
2010-08-30 14:33:34 +07:00
|
|
|
else if (test_bit(MD_CHANGE_PENDING, &mddev->flags))
|
2008-02-06 16:39:51 +07:00
|
|
|
st = write_pending;
|
2006-06-26 14:27:58 +07:00
|
|
|
else if (mddev->safemode)
|
|
|
|
st = active_idle;
|
|
|
|
else
|
|
|
|
st = active;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
if (list_empty(&mddev->disks) &&
|
|
|
|
mddev->raid_disks == 0 &&
|
2009-03-31 10:33:13 +07:00
|
|
|
mddev->dev_sectors == 0)
|
2006-06-26 14:27:58 +07:00
|
|
|
st = clear;
|
|
|
|
else
|
|
|
|
st = inactive;
|
|
|
|
}
|
|
|
|
return sprintf(page, "%s\n", array_states[st]);
|
|
|
|
}
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
static int do_md_stop(struct mddev *mddev, int ro, struct block_device *bdev);
|
|
|
|
static int md_set_readonly(struct mddev *mddev, struct block_device *bdev);
|
|
|
|
static int do_md_run(struct mddev *mddev);
|
2011-10-11 12:47:53 +07:00
|
|
|
static int restart_array(struct mddev *mddev);
|
2006-06-26 14:27:58 +07:00
|
|
|
|
|
|
|
static ssize_t
|
2011-10-11 12:47:53 +07:00
|
|
|
array_state_store(struct mddev *mddev, const char *buf, size_t len)
|
2006-06-26 14:27:58 +07:00
|
|
|
{
|
2014-12-15 08:57:01 +07:00
|
|
|
int err;
|
2006-06-26 14:27:58 +07:00
|
|
|
enum array_state st = match_word(buf, array_states);
|
2014-12-15 08:57:01 +07:00
|
|
|
|
|
|
|
if (mddev->pers && (st == active || st == clean) && mddev->ro != 1) {
|
|
|
|
/* don't take reconfig_mutex when toggling between
|
|
|
|
* clean and active
|
|
|
|
*/
|
|
|
|
spin_lock(&mddev->lock);
|
|
|
|
if (st == active) {
|
|
|
|
restart_array(mddev);
|
|
|
|
clear_bit(MD_CHANGE_PENDING, &mddev->flags);
|
|
|
|
wake_up(&mddev->sb_wait);
|
|
|
|
err = 0;
|
|
|
|
} else /* st == clean */ {
|
|
|
|
restart_array(mddev);
|
|
|
|
if (atomic_read(&mddev->writes_pending) == 0) {
|
|
|
|
if (mddev->in_sync == 0) {
|
|
|
|
mddev->in_sync = 1;
|
|
|
|
if (mddev->safemode == 1)
|
|
|
|
mddev->safemode = 0;
|
|
|
|
set_bit(MD_CHANGE_CLEAN, &mddev->flags);
|
|
|
|
}
|
|
|
|
err = 0;
|
|
|
|
} else
|
|
|
|
err = -EBUSY;
|
|
|
|
}
|
2016-06-30 15:47:09 +07:00
|
|
|
if (!err)
|
|
|
|
sysfs_notify_dirent_safe(mddev->sysfs_state);
|
2014-12-15 08:57:01 +07:00
|
|
|
spin_unlock(&mddev->lock);
|
2015-06-12 16:46:44 +07:00
|
|
|
return err ?: len;
|
2014-12-15 08:57:01 +07:00
|
|
|
}
|
|
|
|
err = mddev_lock(mddev);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
err = -EINVAL;
|
2006-06-26 14:27:58 +07:00
|
|
|
switch(st) {
|
|
|
|
case bad_word:
|
|
|
|
break;
|
|
|
|
case clear:
|
|
|
|
/* stopping an active array */
|
2012-07-19 12:59:18 +07:00
|
|
|
err = do_md_stop(mddev, 0, NULL);
|
2006-06-26 14:27:58 +07:00
		break;
	case inactive:
		/* stopping an active array */
		if (mddev->pers)
			err = do_md_stop(mddev, 2, NULL);
		else
			err = 0; /* already inactive */
		break;
	case suspended:
		break; /* not supported yet */
	case readonly:
		if (mddev->pers)
			err = md_set_readonly(mddev, NULL);
		else {
			mddev->ro = 1;
			set_disk_ro(mddev->gendisk, 1);
			err = do_md_run(mddev);
		}
		break;
	case read_auto:
		if (mddev->pers) {
			if (mddev->ro == 0)
				err = md_set_readonly(mddev, NULL);
			else if (mddev->ro == 1)
				err = restart_array(mddev);
			if (err == 0) {
				mddev->ro = 2;
				set_disk_ro(mddev->gendisk, 0);
			}
		} else {
			mddev->ro = 2;
			err = do_md_run(mddev);
		}
		break;
	case clean:
		if (mddev->pers) {
			err = restart_array(mddev);
			if (err)
				break;
			spin_lock(&mddev->lock);
			if (atomic_read(&mddev->writes_pending) == 0) {
				if (mddev->in_sync == 0) {
					mddev->in_sync = 1;
					if (mddev->safemode == 1)
						mddev->safemode = 0;
					set_bit(MD_CHANGE_CLEAN, &mddev->flags);
				}
				err = 0;
			} else
				err = -EBUSY;
			spin_unlock(&mddev->lock);
		} else
			err = -EINVAL;
		break;
	case active:
		if (mddev->pers) {
			err = restart_array(mddev);
			if (err)
				break;
			clear_bit(MD_CHANGE_PENDING, &mddev->flags);
			wake_up(&mddev->sb_wait);
			err = 0;
		} else {
			mddev->ro = 0;
			set_disk_ro(mddev->gendisk, 0);
			err = do_md_run(mddev);
		}
		break;
	case write_pending:
	case active_idle:
		/* these cannot be set */
		break;
	}

	if (!err) {
		if (mddev->hold_active == UNTIL_IOCTL)
			mddev->hold_active = 0;
		sysfs_notify_dirent_safe(mddev->sysfs_state);
	}
	mddev_unlock(mddev);
	return err ?: len;
}

static struct md_sysfs_entry md_array_state =
__ATTR_PREALLOC(array_state, S_IRUGO|S_IWUSR, array_state_show, array_state_store);
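The decision table implemented by the switch above can be modeled as a small userspace sketch. This is a hypothetical, heavily simplified model (names like `model_state_store` are illustrative, side effects such as `do_md_run()` failures are ignored); only the error-code pattern for writing `clean` to an array with pending writes mirrors the kernel logic directly.

```c
#include <errno.h>

/* Simplified mirror of enum array_state; values are illustrative. */
enum array_state { bad_word, clear_st, inactive, suspended, readonly,
		   read_auto, clean, active, write_pending, active_idle };

/* Model of array_state_store(): given the requested state, whether a
 * personality is attached (mddev->pers) and whether writes are pending,
 * return 0, -EBUSY or -EINVAL as the switch above would. */
static int model_state_store(enum array_state st, int has_pers,
			     int writes_pending)
{
	switch (st) {
	case clear_st:
	case inactive:
	case readonly:
	case read_auto:
	case active:
		return 0;		/* handled whether active or not */
	case clean:
		if (!has_pers)
			return -EINVAL;	/* only meaningful on a running array */
		return writes_pending ? -EBUSY : 0;
	case suspended:
	case write_pending:
	case active_idle:
		return 0;		/* "not supported yet" / "cannot be set" */
	default:
		return -EINVAL;		/* bad_word: unrecognised string */
	}
}
```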

static ssize_t
max_corrected_read_errors_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%d\n",
		       atomic_read(&mddev->max_corr_read_errors));
}

static ssize_t
max_corrected_read_errors_store(struct mddev *mddev, const char *buf, size_t len)
{
	unsigned int n;
	int rv;

	rv = kstrtouint(buf, 10, &n);
	if (rv < 0)
		return rv;
	atomic_set(&mddev->max_corr_read_errors, n);
	return len;
}

static struct md_sysfs_entry max_corr_read_errors =
__ATTR(max_read_errors, S_IRUGO|S_IWUSR, max_corrected_read_errors_show,
	max_corrected_read_errors_store);

static ssize_t
null_show(struct mddev *mddev, char *page)
{
	return -EINVAL;
}

static ssize_t
new_dev_store(struct mddev *mddev, const char *buf, size_t len)
{
	/* buf must be %d:%d\n? giving major and minor numbers */
	/* The new device is added to the array.
	 * If the array has a persistent superblock, we read the
	 * superblock to initialise info and check validity.
	 * Otherwise, only checking done is that in bind_rdev_to_array,
	 * which mainly checks size.
	 */
	char *e;
	int major = simple_strtoul(buf, &e, 10);
	int minor;
	dev_t dev;
	struct md_rdev *rdev;
	int err;

	if (!*buf || *e != ':' || !e[1] || e[1] == '\n')
		return -EINVAL;
	minor = simple_strtoul(e+1, &e, 10);
	if (*e && *e != '\n')
		return -EINVAL;
	dev = MKDEV(major, minor);
	if (major != MAJOR(dev) ||
	    minor != MINOR(dev))
		return -EOVERFLOW;

	flush_workqueue(md_misc_wq);

	err = mddev_lock(mddev);
	if (err)
		return err;
	if (mddev->persistent) {
		rdev = md_import_device(dev, mddev->major_version,
					mddev->minor_version);
		if (!IS_ERR(rdev) && !list_empty(&mddev->disks)) {
			struct md_rdev *rdev0
				= list_entry(mddev->disks.next,
					     struct md_rdev, same_set);
			err = super_types[mddev->major_version]
				.load_super(rdev, rdev0, mddev->minor_version);
			if (err < 0)
				goto out;
		}
	} else if (mddev->external)
		rdev = md_import_device(dev, -2, -1);
	else
		rdev = md_import_device(dev, -1, -1);

	if (IS_ERR(rdev)) {
		mddev_unlock(mddev);
		return PTR_ERR(rdev);
	}
	err = bind_rdev_to_array(rdev, mddev);
 out:
	if (err)
		export_rdev(rdev);
	mddev_unlock(mddev);
	return err ? err : len;
}

static struct md_sysfs_entry md_new_device =
__ATTR(new_dev, S_IWUSR, null_show, new_dev_store);
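The `%d:%d` parsing and the MKDEV round-trip overflow check in `new_dev_store` can be sketched in userspace C. The `MKDEV`/`MAJOR`/`MINOR` macros are re-implemented here with a 12/20-bit major/minor split purely for illustration; the kernel's own macros are the real source of truth, and `parse_dev` is a hypothetical name.

```c
#include <stdlib.h>
#include <errno.h>

typedef unsigned int dev_t32;		/* stand-in for dev_t */
#define MKDEV(ma, mi)	(((dev_t32)(ma) << 20) | (mi))
#define MAJOR(dev)	((unsigned long)((dev) >> 20))
#define MINOR(dev)	((unsigned long)((dev) & 0xfffff))

/* Parse "major:minor" like new_dev_store; reject values that do not
 * survive an encode/decode round trip through MKDEV. */
static int parse_dev(const char *buf, dev_t32 *out)
{
	char *e;
	unsigned long major = strtoul(buf, &e, 10);
	unsigned long minor;
	dev_t32 dev;

	if (!*buf || *e != ':' || !e[1] || e[1] == '\n')
		return -EINVAL;
	minor = strtoul(e + 1, &e, 10);
	if (*e && *e != '\n')
		return -EINVAL;
	dev = MKDEV(major, minor);
	/* round-trip check: values too big to encode are rejected */
	if (major != MAJOR(dev) || minor != MINOR(dev))
		return -EOVERFLOW;
	*out = dev;
	return 0;
}
```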

static ssize_t
bitmap_store(struct mddev *mddev, const char *buf, size_t len)
{
	char *end;
	unsigned long chunk, end_chunk;
	int err;

	err = mddev_lock(mddev);
	if (err)
		return err;
	if (!mddev->bitmap)
		goto out;
	/* buf should be <chunk> <chunk> ... or <chunk>-<chunk> ... (range) */
	while (*buf) {
		chunk = end_chunk = simple_strtoul(buf, &end, 0);
		if (buf == end) break;
		if (*end == '-') { /* range */
			buf = end + 1;
			end_chunk = simple_strtoul(buf, &end, 0);
			if (buf == end) break;
		}
		if (*end && !isspace(*end)) break;
		bitmap_dirty_bits(mddev->bitmap, chunk, end_chunk);
		buf = skip_spaces(end);
	}
	bitmap_unplug(mddev->bitmap); /* flush the bits to disk */
out:
	mddev_unlock(mddev);
	return len;
}

static struct md_sysfs_entry md_bitmap =
__ATTR(bitmap_set_bits, S_IWUSR, null_show, bitmap_store);
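The token scanner in `bitmap_store` accepts a whitespace-separated list where each token is either a single chunk number or a `start-end` range. A userspace sketch of that scanner (hypothetical `parse_chunks` collecting into caller arrays instead of calling `bitmap_dirty_bits()`, with a local stand-in for the kernel's `skip_spaces()` helper):

```c
#include <stdlib.h>
#include <ctype.h>

static const char *skip_spaces(const char *s)
{
	while (isspace((unsigned char)*s))
		s++;
	return s;
}

/* Parse "<chunk> <chunk>-<chunk> ..." into (start, end) pairs; a lone
 * chunk becomes the degenerate range (chunk, chunk).  Returns the
 * number of tokens parsed, stopping at the first malformed token. */
static int parse_chunks(const char *buf, unsigned long *start,
			unsigned long *endc, int max)
{
	char *end;
	unsigned long chunk, end_chunk;
	int n = 0;

	while (*buf && n < max) {
		chunk = end_chunk = strtoul(buf, &end, 0);
		if (buf == end)
			break;
		if (*end == '-') {		/* range */
			buf = end + 1;
			end_chunk = strtoul(buf, &end, 0);
			if (buf == end)
				break;
		}
		if (*end && !isspace((unsigned char)*end))
			break;
		start[n] = chunk;
		endc[n] = end_chunk;
		n++;
		buf = skip_spaces(end);
	}
	return n;
}
```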

static ssize_t
size_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%llu\n",
		(unsigned long long)mddev->dev_sectors / 2);
}

static int update_size(struct mddev *mddev, sector_t num_sectors);

static ssize_t
size_store(struct mddev *mddev, const char *buf, size_t len)
{
	/* If array is inactive, we can reduce the component size, but
	 * not increase it (except from 0).
	 * If array is active, we can try an on-line resize
	 */
	sector_t sectors;
	int err = strict_blocks_to_sectors(buf, &sectors);

	if (err < 0)
		return err;
	err = mddev_lock(mddev);
	if (err)
		return err;
	if (mddev->pers) {
		err = update_size(mddev, sectors);
		if (err == 0)
			md_update_sb(mddev, 1);
	} else {
		if (mddev->dev_sectors == 0 ||
		    mddev->dev_sectors > sectors)
			mddev->dev_sectors = sectors;
		else
			err = -ENOSPC;
	}
	mddev_unlock(mddev);
	return err ? err : len;
}

static struct md_sysfs_entry md_size =
__ATTR(component_size, S_IRUGO|S_IWUSR, size_show, size_store);
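`size_show` divides `dev_sectors` by 2 because `dev_sectors` counts 512-byte sectors while the sysfs `component_size` value is reported in 1 KiB units (and `strict_blocks_to_sectors` performs the reverse conversion on the store path). The two conversions, as a trivial illustration with hypothetical helper names:

```c
#include <stdint.h>

static uint64_t sectors_to_kib(uint64_t sectors)
{
	return sectors / 2;		/* 512-byte sectors -> KiB */
}

static uint64_t kib_to_sectors(uint64_t kib)
{
	return kib * 2;			/* KiB -> 512-byte sectors */
}
```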

/* Metadata version.
 * This is one of
 *   'none' for arrays with no metadata (good luck...)
 *   'external' for arrays with externally managed metadata,
 * or N.M for internally known formats
 */
static ssize_t
metadata_show(struct mddev *mddev, char *page)
{
	if (mddev->persistent)
		return sprintf(page, "%d.%d\n",
			       mddev->major_version, mddev->minor_version);
	else if (mddev->external)
		return sprintf(page, "external:%s\n", mddev->metadata_type);
	else
		return sprintf(page, "none\n");
}

static ssize_t
metadata_store(struct mddev *mddev, const char *buf, size_t len)
{
	int major, minor;
	char *e;
	int err;
	/* Changing the details of 'external' metadata is
	 * always permitted.  Otherwise there must be
	 * no devices attached to the array.
	 */

	err = mddev_lock(mddev);
	if (err)
		return err;
	err = -EBUSY;
	if (mddev->external && strncmp(buf, "external:", 9) == 0)
		;
	else if (!list_empty(&mddev->disks))
		goto out_unlock;

	err = 0;
	if (cmd_match(buf, "none")) {
		mddev->persistent = 0;
		mddev->external = 0;
		mddev->major_version = 0;
		mddev->minor_version = 90;
		goto out_unlock;
	}
	if (strncmp(buf, "external:", 9) == 0) {
		size_t namelen = len-9;
		if (namelen >= sizeof(mddev->metadata_type))
			namelen = sizeof(mddev->metadata_type)-1;
		strncpy(mddev->metadata_type, buf+9, namelen);
		mddev->metadata_type[namelen] = 0;
		if (namelen && mddev->metadata_type[namelen-1] == '\n')
			mddev->metadata_type[--namelen] = 0;
		mddev->persistent = 0;
		mddev->external = 1;
		mddev->major_version = 0;
		mddev->minor_version = 90;
		goto out_unlock;
	}
	major = simple_strtoul(buf, &e, 10);
	err = -EINVAL;
	if (e==buf || *e != '.')
		goto out_unlock;
	buf = e+1;
	minor = simple_strtoul(buf, &e, 10);
	if (e==buf || (*e && *e != '\n') )
		goto out_unlock;
	err = -ENOENT;
	if (major >= ARRAY_SIZE(super_types) || super_types[major].name == NULL)
		goto out_unlock;
	mddev->major_version = major;
	mddev->minor_version = minor;
	mddev->persistent = 1;
	mddev->external = 0;
	err = 0;
out_unlock:
	mddev_unlock(mddev);
	return err ?: len;
}

static struct md_sysfs_entry md_metadata =
__ATTR_PREALLOC(metadata_version, S_IRUGO|S_IWUSR, metadata_show, metadata_store);
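The string handling in `metadata_store` distinguishes three forms: `none`, `external:<type>` (truncated to the metadata_type buffer, trailing newline stripped), and a `major.minor` pair. A userspace sketch of that classification (hypothetical `struct meta`/`parse_metadata` names, fixed buffer size chosen arbitrarily, no `super_types` validation):

```c
#include <stdlib.h>
#include <string.h>
#include <errno.h>

struct meta { int external; int major, minor; char type[17]; };

static int parse_metadata(const char *buf, size_t len, struct meta *m)
{
	char *e;

	memset(m, 0, sizeof(*m));
	if (strncmp(buf, "none", 4) == 0 &&
	    (buf[4] == '\0' || buf[4] == '\n')) {
		m->minor = 90;		/* reset to the 0.90 defaults */
		return 0;
	}
	if (strncmp(buf, "external:", 9) == 0) {
		size_t namelen = len - 9;

		if (namelen >= sizeof(m->type))
			namelen = sizeof(m->type) - 1;
		strncpy(m->type, buf + 9, namelen);
		m->type[namelen] = 0;
		if (namelen && m->type[namelen - 1] == '\n')
			m->type[--namelen] = 0;	/* strip trailing newline */
		m->external = 1;
		m->minor = 90;
		return 0;
	}
	/* "major.minor", optionally newline-terminated */
	m->major = (int)strtoul(buf, &e, 10);
	if (e == buf || *e != '.')
		return -EINVAL;
	buf = e + 1;
	m->minor = (int)strtoul(buf, &e, 10);
	if (e == buf || (*e && *e != '\n'))
		return -EINVAL;
	return 0;
}
```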

static ssize_t
action_show(struct mddev *mddev, char *page)
{
	char *type = "idle";
	unsigned long recovery = mddev->recovery;
	if (test_bit(MD_RECOVERY_FROZEN, &recovery))
		type = "frozen";
	else if (test_bit(MD_RECOVERY_RUNNING, &recovery) ||
	    (!mddev->ro && test_bit(MD_RECOVERY_NEEDED, &recovery))) {
		if (test_bit(MD_RECOVERY_RESHAPE, &recovery))
			type = "reshape";
		else if (test_bit(MD_RECOVERY_SYNC, &recovery)) {
			if (!test_bit(MD_RECOVERY_REQUESTED, &recovery))
				type = "resync";
			else if (test_bit(MD_RECOVERY_CHECK, &recovery))
				type = "check";
			else
				type = "repair";
		} else if (test_bit(MD_RECOVERY_RECOVER, &recovery))
			type = "recover";
		else if (mddev->reshape_position != MaxSector)
			type = "reshape";
	}
	return sprintf(page, "%s\n", type);
}
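The precedence order in `action_show` ("frozen" wins outright; otherwise the running/needed branch distinguishes reshape, resync, check, repair and recover) can be modeled in userspace. Bit positions below are arbitrary stand-ins for the `MD_RECOVERY_*` flags and the `reshape_position` case is omitted; only the if/else ordering mirrors the kernel logic.

```c
#include <string.h>

#define FROZEN    (1UL << 0)
#define RUNNING   (1UL << 1)
#define NEEDED    (1UL << 2)
#define RESHAPE   (1UL << 3)
#define SYNC      (1UL << 4)
#define REQUESTED (1UL << 5)
#define CHECK     (1UL << 6)
#define RECOVER   (1UL << 7)

/* Classify a snapshot of the recovery flags into one action string,
 * following the same branch order as action_show(). */
static const char *model_action(unsigned long recovery, int ro)
{
	if (recovery & FROZEN)
		return "frozen";
	if ((recovery & RUNNING) || (!ro && (recovery & NEEDED))) {
		if (recovery & RESHAPE)
			return "reshape";
		if (recovery & SYNC)
			return (recovery & REQUESTED)
				? ((recovery & CHECK) ? "check" : "repair")
				: "resync";
		if (recovery & RECOVER)
			return "recover";
	}
	return "idle";
}
```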

static ssize_t
action_store(struct mddev *mddev, const char *page, size_t len)
{
	if (!mddev->pers || !mddev->pers->sync_request)
		return -EINVAL;

	if (cmd_match(page, "idle") || cmd_match(page, "frozen")) {
		if (cmd_match(page, "frozen"))
			set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		else
			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
		    mddev_lock(mddev) == 0) {
			flush_workqueue(md_misc_wq);
			if (mddev->sync_thread) {
				set_bit(MD_RECOVERY_INTR, &mddev->recovery);
				md_reap_sync_thread(mddev);
			}
			mddev_unlock(mddev);
		}
	} else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		return -EBUSY;
	else if (cmd_match(page, "resync"))
		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
	else if (cmd_match(page, "recover")) {
		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
	} else if (cmd_match(page, "reshape")) {
		int err;
		if (mddev->pers->start_reshape == NULL)
			return -EINVAL;
		err = mddev_lock(mddev);
		if (!err) {
			if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
				err = -EBUSY;
			else {
				clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
				err = mddev->pers->start_reshape(mddev);
			}
			mddev_unlock(mddev);
		}
		if (err)
			return err;
		sysfs_notify(&mddev->kobj, NULL, "degraded");
	} else {
		if (cmd_match(page, "check"))
			set_bit(MD_RECOVERY_CHECK, &mddev->recovery);
		else if (!cmd_match(page, "repair"))
			return -EINVAL;
		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
	}
	if (mddev->ro == 2) {
		/* A write to sync_action is enough to justify
		 * canceling read-auto mode
		 */
		mddev->ro = 0;
		md_wakeup_thread(mddev->sync_thread);
	}
	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
	md_wakeup_thread(mddev->thread);
	sysfs_notify_dirent_safe(mddev->sysfs_action);
	return len;
}
|
|
|
|
2013-06-25 13:23:59 +07:00
|
|
|
static struct md_sysfs_entry md_scan_mode =
|
2014-09-30 05:53:05 +07:00
|
|
|
__ATTR_PREALLOC(sync_action, S_IRUGO|S_IWUSR, action_show, action_store);
|
2013-06-25 13:23:59 +07:00
|
|
|
|
|
|
|
static ssize_t
|
|
|
|
last_sync_action_show(struct mddev *mddev, char *page)
|
|
|
|
{
|
|
|
|
return sprintf(page, "%s\n", mddev->last_sync_action);
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct md_sysfs_entry md_last_scan_mode = __ATTR_RO(last_sync_action);
|
|
|
|
|
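/*
 * Usage sketch (device name md0 is hypothetical): the sync_action and
 * last_sync_action attributes above are driven from user space, e.g.:
 *
 *	echo check > /sys/block/md0/md/sync_action
 *	cat /sys/block/md0/md/last_sync_action
 */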
static ssize_t
mismatch_cnt_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%llu\n",
		       (unsigned long long)
		       atomic64_read(&mddev->resync_mismatches));
}

static struct md_sysfs_entry md_mismatches = __ATTR_RO(mismatch_cnt);

static ssize_t
sync_min_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%d (%s)\n", speed_min(mddev),
		       mddev->sync_speed_min ? "local" : "system");
}

static ssize_t
sync_min_store(struct mddev *mddev, const char *buf, size_t len)
{
	unsigned int min;
	int rv;

	if (strncmp(buf, "system", 6) == 0) {
		min = 0;
	} else {
		rv = kstrtouint(buf, 10, &min);
		if (rv < 0)
			return rv;
		if (min == 0)
			return -EINVAL;
	}
	mddev->sync_speed_min = min;
	return len;
}

static struct md_sysfs_entry md_sync_min =
__ATTR(sync_speed_min, S_IRUGO|S_IWUSR, sync_min_show, sync_min_store);

static ssize_t
sync_max_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%d (%s)\n", speed_max(mddev),
		       mddev->sync_speed_max ? "local" : "system");
}

static ssize_t
sync_max_store(struct mddev *mddev, const char *buf, size_t len)
{
	unsigned int max;
	int rv;

	if (strncmp(buf, "system", 6) == 0) {
		max = 0;
	} else {
		rv = kstrtouint(buf, 10, &max);
		if (rv < 0)
			return rv;
		if (max == 0)
			return -EINVAL;
	}
	mddev->sync_speed_max = max;
	return len;
}

static struct md_sysfs_entry md_sync_max =
__ATTR(sync_speed_max, S_IRUGO|S_IWUSR, sync_max_show, sync_max_store);
static ssize_t
degraded_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%d\n", mddev->degraded);
}
static struct md_sysfs_entry md_degraded = __ATTR_RO(degraded);

static ssize_t
sync_force_parallel_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%d\n", mddev->parallel_resync);
}

static ssize_t
sync_force_parallel_store(struct mddev *mddev, const char *buf, size_t len)
{
	long n;

	if (kstrtol(buf, 10, &n))
		return -EINVAL;

	if (n != 0 && n != 1)
		return -EINVAL;

	mddev->parallel_resync = n;

	if (mddev->sync_thread)
		wake_up(&resync_wait);

	return len;
}

/* force parallel resync, even with shared block devices */
static struct md_sysfs_entry md_sync_force_parallel =
__ATTR(sync_force_parallel, S_IRUGO|S_IWUSR,
       sync_force_parallel_show, sync_force_parallel_store);
static ssize_t
sync_speed_show(struct mddev *mddev, char *page)
{
	unsigned long resync, dt, db;

	if (mddev->curr_resync == 0)
		return sprintf(page, "none\n");
	resync = mddev->curr_mark_cnt - atomic_read(&mddev->recovery_active);
	dt = (jiffies - mddev->resync_mark) / HZ;
	if (!dt)
		dt++;
	db = resync - mddev->resync_mark_cnt;
	return sprintf(page, "%lu\n", db/dt/2); /* K/sec */
}

static struct md_sysfs_entry md_sync_speed = __ATTR_RO(sync_speed);

static ssize_t
sync_completed_show(struct mddev *mddev, char *page)
{
	unsigned long long max_sectors, resync;

	if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		return sprintf(page, "none\n");

	if (mddev->curr_resync == 1 ||
	    mddev->curr_resync == 2)
		return sprintf(page, "delayed\n");

	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
	    test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
		max_sectors = mddev->resync_max_sectors;
	else
		max_sectors = mddev->dev_sectors;

	resync = mddev->curr_resync_completed;
	return sprintf(page, "%llu / %llu\n", resync, max_sectors);
}

static struct md_sysfs_entry md_sync_completed =
	__ATTR_PREALLOC(sync_completed, S_IRUGO, sync_completed_show, NULL);

static ssize_t
min_sync_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%llu\n",
		       (unsigned long long)mddev->resync_min);
}
static ssize_t
min_sync_store(struct mddev *mddev, const char *buf, size_t len)
{
	unsigned long long min;
	int err;

	if (kstrtoull(buf, 10, &min))
		return -EINVAL;

	spin_lock(&mddev->lock);
	err = -EINVAL;
	if (min > mddev->resync_max)
		goto out_unlock;

	err = -EBUSY;
	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		goto out_unlock;

	/* Round down to multiple of 4K for safety */
	mddev->resync_min = round_down(min, 8);
	err = 0;

out_unlock:
	spin_unlock(&mddev->lock);
	return err ?: len;
}

static struct md_sysfs_entry md_min_sync =
__ATTR(sync_min, S_IRUGO|S_IWUSR, min_sync_show, min_sync_store);
static ssize_t
max_sync_show(struct mddev *mddev, char *page)
{
	if (mddev->resync_max == MaxSector)
		return sprintf(page, "max\n");
	else
		return sprintf(page, "%llu\n",
			       (unsigned long long)mddev->resync_max);
}
static ssize_t
max_sync_store(struct mddev *mddev, const char *buf, size_t len)
{
	int err;

	spin_lock(&mddev->lock);
	if (strncmp(buf, "max", 3) == 0)
		mddev->resync_max = MaxSector;
	else {
		unsigned long long max;
		int chunk;

		err = -EINVAL;
		if (kstrtoull(buf, 10, &max))
			goto out_unlock;
		if (max < mddev->resync_min)
			goto out_unlock;

		err = -EBUSY;
		if (max < mddev->resync_max &&
		    mddev->ro == 0 &&
		    test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
			goto out_unlock;

		/* Must be a multiple of chunk_size */
		chunk = mddev->chunk_sectors;
		if (chunk) {
			sector_t temp = max;

			err = -EINVAL;
			if (sector_div(temp, chunk))
				goto out_unlock;
		}
		mddev->resync_max = max;
	}
	wake_up(&mddev->recovery_wait);
	err = 0;
out_unlock:
	spin_unlock(&mddev->lock);
	return err ?: len;
}

static struct md_sysfs_entry md_max_sync =
__ATTR(sync_max, S_IRUGO|S_IWUSR, max_sync_show, max_sync_store);
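/*
 * Usage sketch (device name md0 and the sector values are hypothetical):
 * sync_min/sync_max above bound a scrub to a sector range, e.g. to check
 * only the first 1 GiB (2097152 512-byte sectors):
 *
 *	echo 0 > /sys/block/md0/md/sync_min
 *	echo 2097152 > /sys/block/md0/md/sync_max
 *	echo check > /sys/block/md0/md/sync_action
 */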
static ssize_t
suspend_lo_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%llu\n", (unsigned long long)mddev->suspend_lo);
}

static ssize_t
suspend_lo_store(struct mddev *mddev, const char *buf, size_t len)
{
	unsigned long long old, new;
	int err;

	err = kstrtoull(buf, 10, &new);
	if (err < 0)
		return err;
	if (new != (sector_t)new)
		return -EINVAL;

	err = mddev_lock(mddev);
	if (err)
		return err;
	err = -EINVAL;
	if (mddev->pers == NULL ||
	    mddev->pers->quiesce == NULL)
		goto unlock;
	old = mddev->suspend_lo;
	mddev->suspend_lo = new;
	if (new >= old)
		/* Shrinking suspended region */
		mddev->pers->quiesce(mddev, 2);
	else {
		/* Expanding suspended region - need to wait */
		mddev->pers->quiesce(mddev, 1);
		mddev->pers->quiesce(mddev, 0);
	}
	err = 0;
unlock:
	mddev_unlock(mddev);
	return err ?: len;
}
static struct md_sysfs_entry md_suspend_lo =
__ATTR(suspend_lo, S_IRUGO|S_IWUSR, suspend_lo_show, suspend_lo_store);

static ssize_t
suspend_hi_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%llu\n", (unsigned long long)mddev->suspend_hi);
}

static ssize_t
suspend_hi_store(struct mddev *mddev, const char *buf, size_t len)
{
	unsigned long long old, new;
	int err;

	err = kstrtoull(buf, 10, &new);
	if (err < 0)
		return err;
	if (new != (sector_t)new)
		return -EINVAL;

	err = mddev_lock(mddev);
	if (err)
		return err;
	err = -EINVAL;
	if (mddev->pers == NULL ||
	    mddev->pers->quiesce == NULL)
		goto unlock;
	old = mddev->suspend_hi;
	mddev->suspend_hi = new;
	if (new <= old)
		/* Shrinking suspended region */
		mddev->pers->quiesce(mddev, 2);
	else {
		/* Expanding suspended region - need to wait */
		mddev->pers->quiesce(mddev, 1);
		mddev->pers->quiesce(mddev, 0);
	}
	err = 0;
unlock:
	mddev_unlock(mddev);
	return err ?: len;
}
static struct md_sysfs_entry md_suspend_hi =
__ATTR(suspend_hi, S_IRUGO|S_IWUSR, suspend_hi_show, suspend_hi_store);
static ssize_t
reshape_position_show(struct mddev *mddev, char *page)
{
	if (mddev->reshape_position != MaxSector)
		return sprintf(page, "%llu\n",
			       (unsigned long long)mddev->reshape_position);
	strcpy(page, "none\n");
	return 5;
}

static ssize_t
reshape_position_store(struct mddev *mddev, const char *buf, size_t len)
{
	struct md_rdev *rdev;
	unsigned long long new;
	int err;

	err = kstrtoull(buf, 10, &new);
	if (err < 0)
		return err;
	if (new != (sector_t)new)
		return -EINVAL;
	err = mddev_lock(mddev);
	if (err)
		return err;
	err = -EBUSY;
	if (mddev->pers)
		goto unlock;
	mddev->reshape_position = new;
	mddev->delta_disks = 0;
	mddev->reshape_backwards = 0;
	mddev->new_level = mddev->level;
	mddev->new_layout = mddev->layout;
	mddev->new_chunk_sectors = mddev->chunk_sectors;
	rdev_for_each(rdev, mddev)
		rdev->new_data_offset = rdev->data_offset;
	err = 0;
unlock:
	mddev_unlock(mddev);
	return err ?: len;
}

static struct md_sysfs_entry md_reshape_position =
__ATTR(reshape_position, S_IRUGO|S_IWUSR, reshape_position_show,
       reshape_position_store);

static ssize_t
reshape_direction_show(struct mddev *mddev, char *page)
{
	return sprintf(page, "%s\n",
		       mddev->reshape_backwards ? "backwards" : "forwards");
}

static ssize_t
reshape_direction_store(struct mddev *mddev, const char *buf, size_t len)
{
	int backwards = 0;
	int err;

	if (cmd_match(buf, "forwards"))
		backwards = 0;
	else if (cmd_match(buf, "backwards"))
		backwards = 1;
	else
		return -EINVAL;
	if (mddev->reshape_backwards == backwards)
		return len;

	err = mddev_lock(mddev);
	if (err)
		return err;
	/* check if we are allowed to change */
	if (mddev->delta_disks)
		err = -EBUSY;
	else if (mddev->persistent &&
		 mddev->major_version == 0)
		err = -EINVAL;
	else
		mddev->reshape_backwards = backwards;
	mddev_unlock(mddev);
	return err ?: len;
}

static struct md_sysfs_entry md_reshape_direction =
__ATTR(reshape_direction, S_IRUGO|S_IWUSR, reshape_direction_show,
       reshape_direction_store);
static ssize_t
array_size_show(struct mddev *mddev, char *page)
{
	if (mddev->external_size)
		return sprintf(page, "%llu\n",
			       (unsigned long long)mddev->array_sectors/2);
	else
		return sprintf(page, "default\n");
}

static ssize_t
array_size_store(struct mddev *mddev, const char *buf, size_t len)
{
	sector_t sectors;
	int err;

	err = mddev_lock(mddev);
	if (err)
		return err;

	/* cluster raid doesn't support change array_sectors */
	if (mddev_is_clustered(mddev)) {
		mddev_unlock(mddev);
		return -EINVAL;
	}

	if (strncmp(buf, "default", 7) == 0) {
		if (mddev->pers)
			sectors = mddev->pers->size(mddev, 0, 0);
		else
			sectors = mddev->array_sectors;

		mddev->external_size = 0;
	} else {
		if (strict_blocks_to_sectors(buf, &sectors) < 0)
			err = -EINVAL;
		else if (mddev->pers && mddev->pers->size(mddev, 0, 0) < sectors)
			err = -E2BIG;
		else
			mddev->external_size = 1;
	}

	if (!err) {
		mddev->array_sectors = sectors;
		if (mddev->pers) {
			set_capacity(mddev->gendisk, mddev->array_sectors);
			revalidate_disk(mddev->gendisk);
		}
	}
	mddev_unlock(mddev);
	return err ?: len;
}

static struct md_sysfs_entry md_array_size =
__ATTR(array_size, S_IRUGO|S_IWUSR, array_size_show,
       array_size_store);
static struct attribute *md_default_attrs[] = {
	&md_level.attr,
	&md_layout.attr,
	&md_raid_disks.attr,
	&md_chunk_size.attr,
	&md_size.attr,
	&md_resync_start.attr,
	&md_metadata.attr,
	&md_new_device.attr,
	&md_safe_delay.attr,
	&md_array_state.attr,
	&md_reshape_position.attr,
	&md_reshape_direction.attr,
	&md_array_size.attr,
	&max_corr_read_errors.attr,
	NULL,
};

static struct attribute *md_redundancy_attrs[] = {
	&md_scan_mode.attr,
	&md_last_scan_mode.attr,
	&md_mismatches.attr,
	&md_sync_min.attr,
	&md_sync_max.attr,
	&md_sync_speed.attr,
	&md_sync_force_parallel.attr,
	&md_sync_completed.attr,
	&md_min_sync.attr,
	&md_max_sync.attr,
	&md_suspend_lo.attr,
	&md_suspend_hi.attr,
	&md_bitmap.attr,
	&md_degraded.attr,
	NULL,
};
static struct attribute_group md_redundancy_group = {
	.name = NULL,
	.attrs = md_redundancy_attrs,
};
static ssize_t
md_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
{
	struct md_sysfs_entry *entry = container_of(attr, struct md_sysfs_entry, attr);
	struct mddev *mddev = container_of(kobj, struct mddev, kobj);
	ssize_t rv;

	if (!entry->show)
		return -EIO;
	spin_lock(&all_mddevs_lock);
	if (list_empty(&mddev->all_mddevs)) {
		spin_unlock(&all_mddevs_lock);
		return -EBUSY;
	}
	mddev_get(mddev);
	spin_unlock(&all_mddevs_lock);

	rv = entry->show(mddev, page);
	mddev_put(mddev);
	return rv;
}

static ssize_t
md_attr_store(struct kobject *kobj, struct attribute *attr,
	      const char *page, size_t length)
{
	struct md_sysfs_entry *entry = container_of(attr, struct md_sysfs_entry, attr);
	struct mddev *mddev = container_of(kobj, struct mddev, kobj);
	ssize_t rv;

	if (!entry->store)
		return -EIO;
	if (!capable(CAP_SYS_ADMIN))
		return -EACCES;
	spin_lock(&all_mddevs_lock);
	if (list_empty(&mddev->all_mddevs)) {
		spin_unlock(&all_mddevs_lock);
		return -EBUSY;
	}
	mddev_get(mddev);
	spin_unlock(&all_mddevs_lock);
	rv = entry->store(mddev, page, length);
	mddev_put(mddev);
	return rv;
}

static void md_free(struct kobject *ko)
{
	struct mddev *mddev = container_of(ko, struct mddev, kobj);

	if (mddev->sysfs_state)
		sysfs_put(mddev->sysfs_state);

	if (mddev->queue)
		blk_cleanup_queue(mddev->queue);
	if (mddev->gendisk) {
		del_gendisk(mddev->gendisk);
		put_disk(mddev->gendisk);
	}

	kfree(mddev);
}

static const struct sysfs_ops md_sysfs_ops = {
	.show	= md_attr_show,
	.store	= md_attr_store,
};
static struct kobj_type md_ktype = {
	.release	= md_free,
	.sysfs_ops	= &md_sysfs_ops,
	.default_attrs	= md_default_attrs,
};
2005-04-17 05:20:36 +07:00
|
|
|
int mdp_major = 0;
|
|
|
|
|
2009-03-04 14:57:25 +07:00
|
|
|
static void mddev_delayed_delete(struct work_struct *ws)
|
|
|
|
{
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = container_of(ws, struct mddev, del_work);
|
2009-03-04 14:57:25 +07:00
|
|
|
|
2009-12-14 08:49:55 +07:00
|
|
|
sysfs_remove_group(&mddev->kobj, &md_bitmap_group);
|
2009-03-04 14:57:25 +07:00
|
|
|
kobject_del(&mddev->kobj);
|
|
|
|
kobject_put(&mddev->kobj);
|
|
|
|
}
|
|
|
|
|
2009-01-09 04:31:10 +07:00
|
|
|
static int md_alloc(dev_t dev, char *name)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2006-03-27 16:18:20 +07:00
|
|
|
static DEFINE_MUTEX(disks_mutex);
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = mddev_find(dev);
|
2005-04-17 05:20:36 +07:00
|
|
|
struct gendisk *disk;
|
2009-01-09 04:31:10 +07:00
|
|
|
int partitioned;
|
|
|
|
int shift;
|
|
|
|
int unit;
|
2007-12-18 02:54:39 +07:00
|
|
|
int error;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (!mddev)
|
2009-01-09 04:31:10 +07:00
|
|
|
return -ENODEV;
|
|
|
|
|
|
|
|
partitioned = (MAJOR(mddev->unit) != MD_MAJOR);
|
|
|
|
shift = partitioned ? MdpMinorShift : 0;
|
|
|
|
unit = MINOR(mddev->unit) >> shift;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2010-10-15 20:36:08 +07:00
|
|
|
/* wait for any previous instance of this device to be
|
|
|
|
* completely removed (mddev_delayed_delete).
|
md: make devices disappear when they are no longer needed.
Currently md devices, once created, never disappear until the module
is unloaded. This is essentially because the gendisk holds a
reference to the mddev, and the mddev holds a reference to the
gendisk, this a circular reference.
If we drop the reference from mddev to gendisk, then we need to ensure
that the mddev is destroyed when the gendisk is destroyed. However it
is not possible to hook into the gendisk destruction process to enable
this.
So we drop the reference from the gendisk to the mddev and destroy the
gendisk when the mddev gets destroyed. However this has a
complication.
Between the call
__blkdev_get->get_gendisk->kobj_lookup->md_probe
and the call
__blkdev_get->md_open
there is no obvious way to hold a reference on the mddev any more, so
unless something is done, it will disappear and gendisk will be
destroyed prematurely.
Also, once we decide to destroy the mddev, there will be an unlockable
moment before the gendisk is unlinked (blk_unregister_region) during
which a new reference to the gendisk can be created. We need to
ensure that this reference can not be used. i.e. the ->open must
fail.
So:
1/ in md_probe we set a flag in the mddev (hold_active) which
indicates that the array should be treated as active, even
though there are no references, and no appearance of activity.
This is cleared by md_release when the device is closed if it
is no longer needed.
This ensures that the gendisk will survive between md_probe and
md_open.
2/ In md_open we check if the mddev we expect to open matches
the gendisk that we did open.
If there is a mismatch we return -ERESTARTSYS and modify
__blkdev_get to retry from the top in that case.
In the -ERESTARTSYS sys case we make sure to wait until
the old gendisk (that we succeeded in opening) is really gone so
we loop at most once.
Some udev configurations will always open an md device when it first
appears. If we allow an md device that was just created by an open
to disappear on an immediate close, then this can race with such udev
configurations and result in an infinite loop the device being opened
and closed, then re-open due to the 'ADD' even from the first open,
and then close and so on.
So we make sure an md device, once created by an open, remains active
at least until some md 'ioctl' has been made on it. This means that
all normal usage of md devices will allow them to disappear promptly
when not needed, but the worst that an incorrect usage will do it
cause an inactive md device to be left in existence (it can easily be
removed).
As an array can be stopped by writing to a sysfs attribute
echo clear > /sys/block/mdXXX/md/array_state
we need to use scheduled work for deleting the gendisk and other
kobjects. This allows us to wait for any pending gendisk deletion to
complete by simply calling flush_scheduled_work().
Signed-off-by: NeilBrown <neilb@suse.de>
2009-01-09 04:31:10 +07:00
|
|
|
*/
|
2010-10-15 20:36:08 +07:00
|
|
|
flush_workqueue(md_misc_wq);
|
2009-01-09 04:31:10 +07:00
|
|
|
|
2006-03-27 16:18:20 +07:00
|
|
|
mutex_lock(&disks_mutex);
|
2009-07-01 09:27:21 +07:00
|
|
|
error = -EEXIST;
|
|
|
|
if (mddev->gendisk)
|
|
|
|
goto abort;
|
2009-01-09 04:31:10 +07:00
|
|
|
|
|
|
|
if (name) {
|
|
|
|
/* Need to ensure that 'name' is not a duplicate.
|
|
|
|
*/
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev2;
|
2009-01-09 04:31:10 +07:00
|
|
|
spin_lock(&all_mddevs_lock);
|
|
|
|
|
|
|
|
list_for_each_entry(mddev2, &all_mddevs, all_mddevs)
|
|
|
|
if (mddev2->gendisk &&
|
|
|
|
strcmp(mddev2->gendisk->disk_name, name) == 0) {
|
|
|
|
spin_unlock(&all_mddevs_lock);
|
2009-07-01 09:27:21 +07:00
|
|
|
goto abort;
|
2009-01-09 04:31:10 +07:00
|
|
|
}
|
|
|
|
spin_unlock(&all_mddevs_lock);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2009-01-09 04:31:08 +07:00
|
|
|
|
2009-07-01 09:27:21 +07:00
|
|
|
error = -ENOMEM;
|
2009-01-09 04:31:08 +07:00
|
|
|
mddev->queue = blk_alloc_queue(GFP_KERNEL);
|
2009-07-01 09:27:21 +07:00
|
|
|
if (!mddev->queue)
|
|
|
|
goto abort;
|
2009-03-31 10:39:39 +07:00
|
|
|
mddev->queue->queuedata = mddev;
|
|
|
|
|
|
|
|
blk_queue_make_request(mddev->queue, md_make_request);
|
2012-01-11 22:27:11 +07:00
|
|
|
blk_set_stacking_limits(&mddev->queue->limits);
|
2009-01-09 04:31:08 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
disk = alloc_disk(1 << shift);
|
|
|
|
if (!disk) {
|
2009-01-09 04:31:08 +07:00
|
|
|
blk_cleanup_queue(mddev->queue);
|
|
|
|
mddev->queue = NULL;
|
2009-07-01 09:27:21 +07:00
|
|
|
goto abort;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2009-01-09 04:31:10 +07:00
|
|
|
disk->major = MAJOR(mddev->unit);
|
2005-04-17 05:20:36 +07:00
|
|
|
disk->first_minor = unit << shift;
|
2009-01-09 04:31:10 +07:00
|
|
|
if (name)
|
|
|
|
strcpy(disk->disk_name, name);
|
|
|
|
else if (partitioned)
|
2005-04-17 05:20:36 +07:00
|
|
|
sprintf(disk->disk_name, "md_d%d", unit);
|
2005-06-21 11:15:16 +07:00
|
|
|
else
|
2005-04-17 05:20:36 +07:00
|
|
|
sprintf(disk->disk_name, "md%d", unit);
|
|
|
|
disk->fops = &md_fops;
|
|
|
|
disk->private_data = mddev;
|
|
|
|
disk->queue = mddev->queue;
|
2016-03-30 23:16:53 +07:00
|
|
|
blk_queue_write_cache(mddev->queue, true, true);
|
2008-10-21 09:25:32 +07:00
|
|
|
/* Allow extended partitions. This makes the
|
2009-01-09 04:31:10 +07:00
|
|
|
* 'mdp' device redundant, but we can't really
|
2008-10-21 09:25:32 +07:00
|
|
|
* remove it now.
|
|
|
|
*/
|
|
|
|
disk->flags |= GENHD_FL_EXT_DEVT;
|
2005-04-17 05:20:36 +07:00
|
|
|
mddev->gendisk = disk;
|
2011-05-10 14:49:01 +07:00
|
|
|
/* As soon as we call add_disk(), another thread could get
|
|
|
|
* through to md_open, so make sure it doesn't get too far
|
|
|
|
*/
|
|
|
|
mutex_lock(&mddev->open_mutex);
|
|
|
|
add_disk(disk);
|
|
|
|
|
2008-08-25 17:56:05 +07:00
|
|
|
error = kobject_init_and_add(&mddev->kobj, &md_ktype,
|
|
|
|
&disk_to_dev(disk)->kobj, "%s", "md");
|
2009-07-01 09:27:21 +07:00
|
|
|
if (error) {
|
|
|
|
/* This isn't possible, but as kobject_init_and_add is marked
|
|
|
|
* __must_check, we must do something with the result
|
|
|
|
*/
|
2007-03-27 12:32:14 +07:00
|
|
|
printk(KERN_WARNING "md: cannot register %s/md - name in use\n",
|
|
|
|
disk->disk_name);
|
2009-07-01 09:27:21 +07:00
|
|
|
error = 0;
|
|
|
|
}
|
2010-06-01 16:37:23 +07:00
|
|
|
if (mddev->kobj.sd &&
|
|
|
|
sysfs_create_group(&mddev->kobj, &md_bitmap_group))
|
2009-12-14 08:49:55 +07:00
|
|
|
printk(KERN_DEBUG "pointless warning\n");
|
2011-05-10 14:49:01 +07:00
|
|
|
mutex_unlock(&mddev->open_mutex);
|
2009-07-01 09:27:21 +07:00
|
|
|
abort:
|
|
|
|
mutex_unlock(&disks_mutex);
|
2010-06-01 16:37:23 +07:00
|
|
|
if (!error && mddev->kobj.sd) {
|
2007-12-18 02:54:39 +07:00
|
|
|
kobject_uevent(&mddev->kobj, KOBJ_ADD);
|
2010-06-01 16:37:23 +07:00
|
|
|
mddev->sysfs_state = sysfs_get_dirent_safe(mddev->kobj.sd, "array_state");
|
2008-10-21 09:25:21 +07:00
|
|
|
}
|
2009-01-09 04:31:10 +07:00
|
|
|
mddev_put(mddev);
|
2009-07-01 09:27:21 +07:00
|
|
|
return error;
|
2009-01-09 04:31:10 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static struct kobject *md_probe(dev_t dev, int *part, void *data)
|
|
|
|
{
|
|
|
|
md_alloc(dev, NULL);
|
2005-04-17 05:20:36 +07:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2009-01-09 04:31:10 +07:00
|
|
|
static int add_named_array(const char *val, struct kernel_param *kp)
|
|
|
|
{
|
|
|
|
/* val must be "md_*" where * is not all digits.
|
|
|
|
* We allocate an array with a large free minor number, and
|
|
|
|
* set the name to val. val must not already be an active name.
|
|
|
|
*/
|
|
|
|
int len = strlen(val);
|
|
|
|
char buf[DISK_NAME_LEN];
|
|
|
|
|
|
|
|
while (len && val[len-1] == '\n')
|
|
|
|
len--;
|
|
|
|
if (len >= DISK_NAME_LEN)
|
|
|
|
return -E2BIG;
|
|
|
|
strlcpy(buf, val, len+1);
|
|
|
|
if (strncmp(buf, "md_", 3) != 0)
|
|
|
|
return -EINVAL;
|
|
|
|
return md_alloc(0, buf);
|
|
|
|
}
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
static void md_safemode_timeout(unsigned long data)
|
|
|
|
{
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = (struct mddev *) data;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2008-06-28 05:31:36 +07:00
|
|
|
if (!atomic_read(&mddev->writes_pending)) {
|
|
|
|
mddev->safemode = 1;
|
|
|
|
if (mddev->external)
|
2010-06-01 16:37:23 +07:00
|
|
|
sysfs_notify_dirent_safe(mddev->sysfs_state);
|
2008-06-28 05:31:36 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
md_wakeup_thread(mddev->thread);
|
|
|
|
}
|
|
|
|
|
2006-01-06 15:20:15 +07:00
|
|
|
static int start_dirty_degraded;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
int md_run(struct mddev *mddev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2006-01-06 15:20:36 +07:00
|
|
|
int err;
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2011-10-11 12:49:58 +07:00
|
|
|
struct md_personality *pers;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2005-04-17 05:26:42 +07:00
|
|
|
if (list_empty(&mddev->disks))
|
|
|
|
/* cannot run an array with no devices.. */
|
2005-04-17 05:20:36 +07:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (mddev->pers)
|
|
|
|
return -EBUSY;
|
2010-08-08 18:18:03 +07:00
|
|
|
/* Cannot run until previous stop completes properly */
|
|
|
|
if (mddev->sysfs_active)
|
|
|
|
return -EBUSY;
|
2010-04-15 07:13:47 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* Analyze all RAID superblock(s)
|
|
|
|
*/
|
2008-02-06 16:39:53 +07:00
|
|
|
if (!mddev->raid_disks) {
|
|
|
|
if (!mddev->persistent)
|
|
|
|
return -EINVAL;
|
2005-04-17 05:26:42 +07:00
|
|
|
analyze_sbs(mddev);
|
2008-02-06 16:39:53 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-01-06 15:20:51 +07:00
|
|
|
if (mddev->level != LEVEL_NONE)
|
|
|
|
request_module("md-level-%d", mddev->level);
|
|
|
|
else if (mddev->clevel[0])
|
|
|
|
request_module("md-%s", mddev->clevel);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Drop all container device buffers, from now on
|
|
|
|
* the only valid external interface is through the md
|
|
|
|
* device.
|
|
|
|
*/
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2005-11-09 12:39:31 +07:00
|
|
|
if (test_bit(Faulty, &rdev->flags))
|
2005-04-17 05:20:36 +07:00
|
|
|
continue;
|
|
|
|
sync_blockdev(rdev->bdev);
|
2007-05-07 04:49:54 +07:00
|
|
|
invalidate_bdev(rdev->bdev);
|
2007-07-17 18:06:12 +07:00
|
|
|
|
|
|
|
/* perform some consistency tests on the device.
|
|
|
|
* We don't want the data to overlap the metadata,
|
2009-03-31 10:33:13 +07:00
|
|
|
* Internal Bitmap issues have been handled elsewhere.
|
2007-07-17 18:06:12 +07:00
|
|
|
*/
|
2011-01-14 05:14:34 +07:00
|
|
|
if (rdev->meta_bdev) {
|
|
|
|
/* Nothing to check */;
|
|
|
|
} else if (rdev->data_offset < rdev->sb_start) {
|
2009-03-31 10:33:13 +07:00
|
|
|
if (mddev->dev_sectors &&
|
|
|
|
rdev->data_offset + mddev->dev_sectors
|
2008-07-11 19:02:23 +07:00
|
|
|
> rdev->sb_start) {
|
2007-07-17 18:06:12 +07:00
|
|
|
printk("md: %s: data overlaps metadata\n",
|
|
|
|
mdname(mddev));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
} else {
|
2008-07-11 19:02:23 +07:00
|
|
|
if (rdev->sb_start + rdev->sb_size/512
|
2007-07-17 18:06:12 +07:00
|
|
|
> rdev->data_offset) {
|
|
|
|
printk("md: %s: metadata overlaps data\n",
|
|
|
|
mdname(mddev));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
2010-06-01 16:37:23 +07:00
|
|
|
sysfs_notify_dirent_safe(rdev->sysfs_state);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2010-10-26 14:31:13 +07:00
|
|
|
if (mddev->bio_set == NULL)
|
2012-09-07 05:34:55 +07:00
|
|
|
mddev->bio_set = bioset_create(BIO_POOL_SIZE, 0);
|
2010-10-26 14:31:13 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
spin_lock(&pers_lock);
|
2006-01-06 15:20:51 +07:00
|
|
|
pers = find_pers(mddev->level, mddev->clevel);
|
2006-01-06 15:20:36 +07:00
|
|
|
if (!pers || !try_module_get(pers->owner)) {
|
2005-04-17 05:20:36 +07:00
|
|
|
spin_unlock(&pers_lock);
|
2006-01-06 15:20:51 +07:00
|
|
|
if (mddev->level != LEVEL_NONE)
|
|
|
|
printk(KERN_WARNING "md: personality for level %d is not loaded!\n",
|
|
|
|
mddev->level);
|
|
|
|
else
|
|
|
|
printk(KERN_WARNING "md: personality for level %s is not loaded!\n",
|
|
|
|
mddev->clevel);
|
2005-04-17 05:20:36 +07:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
spin_unlock(&pers_lock);
|
2009-03-31 10:39:38 +07:00
|
|
|
if (mddev->level != pers->level) {
|
|
|
|
mddev->level = pers->level;
|
|
|
|
mddev->new_level = pers->level;
|
|
|
|
}
|
2006-01-06 15:20:51 +07:00
|
|
|
strlcpy(mddev->clevel, pers->name, sizeof(mddev->clevel));
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-03-27 16:18:11 +07:00
|
|
|
if (mddev->reshape_position != MaxSector &&
|
2006-03-27 16:18:13 +07:00
|
|
|
pers->start_reshape == NULL) {
|
2006-03-27 16:18:11 +07:00
|
|
|
/* This personality cannot handle reshaping... */
|
|
|
|
module_put(pers->owner);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2007-03-01 11:11:35 +07:00
|
|
|
if (pers->sync_request) {
|
|
|
|
/* Warn if this is a potentially silly
|
|
|
|
* configuration.
|
|
|
|
*/
|
|
|
|
char b[BDEVNAME_SIZE], b2[BDEVNAME_SIZE];
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev2;
|
2007-03-01 11:11:35 +07:00
|
|
|
int warned = 0;
|
2009-01-09 04:31:08 +07:00
|
|
|
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev)
|
|
|
|
rdev_for_each(rdev2, mddev) {
|
2007-03-01 11:11:35 +07:00
|
|
|
if (rdev < rdev2 &&
|
|
|
|
rdev->bdev->bd_contains ==
|
|
|
|
rdev2->bdev->bd_contains) {
|
|
|
|
printk(KERN_WARNING
|
|
|
|
"%s: WARNING: %s appears to be"
|
|
|
|
" on the same physical disk as"
|
|
|
|
" %s.\n",
|
|
|
|
mdname(mddev),
|
|
|
|
bdevname(rdev->bdev,b),
|
|
|
|
bdevname(rdev2->bdev,b2));
|
|
|
|
warned = 1;
|
|
|
|
}
|
|
|
|
}
|
2009-01-09 04:31:08 +07:00
|
|
|
|
2007-03-01 11:11:35 +07:00
|
|
|
if (warned)
|
|
|
|
printk(KERN_WARNING
|
|
|
|
"True protection against single-disk"
|
|
|
|
" failure might be compromised.\n");
|
|
|
|
}
|
|
|
|
|
2005-08-27 08:34:16 +07:00
|
|
|
mddev->recovery = 0;
|
2009-03-31 10:33:13 +07:00
|
|
|
/* may be over-ridden by personality */
|
|
|
|
mddev->resync_max_sectors = mddev->dev_sectors;
|
|
|
|
|
2006-01-06 15:20:15 +07:00
|
|
|
mddev->ok_start_degraded = start_dirty_degraded;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-12-30 08:08:50 +07:00
|
|
|
if (start_readonly && mddev->ro == 0)
|
[PATCH] md: allow md arrays to be started read-only (module parameter).
When an md array is started, the superblock will be written, and resync may
commence. This is not good if you want to be completely read-only as, for
example, when preparing to resume from a suspend-to-disk image.
So introduce a module parameter "start_ro" which can be set
to '1' at boot, at module load, or via
/sys/module/md_mod/parameters/start_ro
When this is set, new arrays get an 'auto-ro' mode, which disables all
internal io (superblock updates, resync, recovery) and is automatically
switched to 'rw' when the first write request arrives.
The array can be set to true 'ro' mode using 'mdadm -r' before the first
write request, or resync can be started without a write using 'mdadm -w'.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-09 12:39:36 +07:00
|
|
|
mddev->ro = 2; /* read-only, but switch on first write */
|
|
|
|
|
2014-12-15 08:56:58 +07:00
|
|
|
err = pers->run(mddev);
|
2008-03-26 06:07:03 +07:00
|
|
|
if (err)
|
|
|
|
printk(KERN_ERR "md: pers->run() failed ...\n");
|
2014-12-15 08:56:58 +07:00
|
|
|
else if (pers->size(mddev, 0, 0) < mddev->array_sectors) {
|
2009-03-31 11:00:31 +07:00
|
|
|
WARN_ONCE(!mddev->external_size, "%s: default size too small,"
|
|
|
|
" but 'external_size' not in effect?\n", __func__);
|
|
|
|
printk(KERN_ERR
|
|
|
|
"md: invalid array_size %llu > default size %llu\n",
|
|
|
|
(unsigned long long)mddev->array_sectors / 2,
|
2014-12-15 08:56:58 +07:00
|
|
|
(unsigned long long)pers->size(mddev, 0, 0) / 2);
|
2009-03-31 11:00:31 +07:00
|
|
|
err = -EINVAL;
|
|
|
|
}
|
2014-12-15 08:56:58 +07:00
|
|
|
if (err == 0 && pers->sync_request &&
|
2012-05-22 10:55:08 +07:00
|
|
|
(mddev->bitmap_info.file || mddev->bitmap_info.offset)) {
|
2014-06-07 00:43:49 +07:00
|
|
|
struct bitmap *bitmap;
|
|
|
|
|
|
|
|
bitmap = bitmap_create(mddev, -1);
|
|
|
|
if (IS_ERR(bitmap)) {
|
|
|
|
err = PTR_ERR(bitmap);
|
2006-01-06 15:20:16 +07:00
|
|
|
printk(KERN_ERR "%s: failed to create bitmap (%d)\n",
|
|
|
|
mdname(mddev), err);
|
2014-06-07 00:43:49 +07:00
|
|
|
} else
|
|
|
|
mddev->bitmap = bitmap;
|
|
|
|
|
2006-01-06 15:20:16 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
if (err) {
|
2014-12-15 08:56:57 +07:00
|
|
|
mddev_detach(mddev);
|
2015-03-13 07:51:18 +07:00
|
|
|
if (mddev->private)
|
|
|
|
pers->free(mddev, mddev->private);
|
2015-06-25 14:01:40 +07:00
|
|
|
mddev->private = NULL;
|
2014-12-15 08:56:58 +07:00
|
|
|
module_put(pers->owner);
|
2005-06-22 07:17:14 +07:00
|
|
|
bitmap_destroy(mddev);
|
|
|
|
return err;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2014-12-15 08:56:56 +07:00
|
|
|
if (mddev->queue) {
|
2016-09-30 23:45:40 +07:00
|
|
|
bool nonrot = true;
|
|
|
|
|
|
|
|
rdev_for_each(rdev, mddev) {
|
|
|
|
if (rdev->raid_disk >= 0 &&
|
|
|
|
!blk_queue_nonrot(bdev_get_queue(rdev->bdev))) {
|
|
|
|
nonrot = false;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (mddev->degraded)
|
|
|
|
nonrot = false;
|
|
|
|
if (nonrot)
|
|
|
|
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mddev->queue);
|
|
|
|
else
|
|
|
|
queue_flag_clear_unlocked(QUEUE_FLAG_NONROT, mddev->queue);
|
2014-12-15 08:56:56 +07:00
|
|
|
mddev->queue->backing_dev_info.congested_data = mddev;
|
|
|
|
mddev->queue->backing_dev_info.congested_fn = md_congested;
|
|
|
|
}
|
2014-12-15 08:56:58 +07:00
|
|
|
if (pers->sync_request) {
|
2010-06-01 16:37:23 +07:00
|
|
|
if (mddev->kobj.sd &&
|
|
|
|
sysfs_create_group(&mddev->kobj, &md_redundancy_group))
|
2007-03-27 12:32:14 +07:00
|
|
|
printk(KERN_WARNING
|
|
|
|
"md: cannot register extra attributes for %s\n",
|
|
|
|
mdname(mddev));
|
2010-06-01 16:37:23 +07:00
|
|
|
mddev->sysfs_action = sysfs_get_dirent_safe(mddev->kobj.sd, "sync_action");
|
2007-03-27 12:32:14 +07:00
|
|
|
} else if (mddev->ro == 2) /* auto-readonly not meaningful */
|
2005-11-09 12:39:42 +07:00
|
|
|
mddev->ro = 0;
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
atomic_set(&mddev->writes_pending,0);
|
2009-12-14 08:49:58 +07:00
|
|
|
atomic_set(&mddev->max_corr_read_errors,
|
|
|
|
MD_DEFAULT_MAX_CORRECTED_READ_ERRORS);
|
2005-04-17 05:20:36 +07:00
|
|
|
mddev->safemode = 0;
|
2015-10-22 12:01:25 +07:00
|
|
|
if (mddev_is_clustered(mddev))
|
|
|
|
mddev->safemode_delay = 0;
|
|
|
|
else
|
|
|
|
mddev->safemode_delay = (200 * HZ)/1000 +1; /* 200 msec delay */
|
2005-04-17 05:20:36 +07:00
|
|
|
mddev->in_sync = 1;
|
2011-01-14 05:14:33 +07:00
|
|
|
smp_wmb();
|
2014-12-15 08:56:58 +07:00
|
|
|
spin_lock(&mddev->lock);
|
|
|
|
mddev->pers = pers;
|
|
|
|
spin_unlock(&mddev->lock);
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev)
|
2011-07-27 08:00:36 +07:00
|
|
|
if (rdev->raid_disk >= 0)
|
|
|
|
if (sysfs_link_rdev(mddev, rdev))
|
2010-06-01 16:37:23 +07:00
|
|
|
/* failure here is OK */;
|
2014-09-30 11:23:59 +07:00
|
|
|
|
2015-07-17 08:57:30 +07:00
|
|
|
if (mddev->degraded && !mddev->ro)
|
|
|
|
/* This ensures that recovering status is reported immediately
|
|
|
|
* via sysfs - until a lack of spares is confirmed.
|
|
|
|
*/
|
|
|
|
set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
|
2005-04-17 05:20:36 +07:00
|
|
|
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
|
2014-09-30 11:23:59 +07:00
|
|
|
|
2013-08-27 13:28:23 +07:00
|
|
|
if (mddev->flags & MD_UPDATE_SB_FLAGS)
|
2006-10-03 15:15:46 +07:00
|
|
|
md_update_sb(mddev, 0);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
[PATCH] md: make /proc/mdstat pollable
With this patch it is possible to poll /proc/mdstat to detect arrays appearing
or disappearing, to detect failures, recovery starting, recovery completing,
and devices being added and removed.
It is similar to the poll-ability of /proc/mounts, though different in that:
We always report that the file is readable (because face it, it is, even if
only for EOF).
We report POLLPRI when there is a change so that select() can detect
it as an exceptional event. Not only are these exceptional events, but
that is the mechanism that the current 'mdadm' uses to watch for events
(It also polls after a timeout).
(We also report POLLERR like /proc/mounts).
Finally, we only reset the per-file event counter when the start of the file
is read, rather than when poll() returns an event. This is more robust as it
means that an fd will continue to report activity to poll/select until the
program clearly responds to that activity.
md_new_event takes an 'mddev' which isn't currently used, but it will be soon.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 15:20:30 +07:00
|
|
|
md_new_event(mddev);
|
2010-06-01 16:37:23 +07:00
|
|
|
sysfs_notify_dirent_safe(mddev->sysfs_state);
|
|
|
|
sysfs_notify_dirent_safe(mddev->sysfs_action);
|
2008-06-28 05:31:43 +07:00
|
|
|
sysfs_notify(&mddev->kobj, NULL, "degraded");
|
2005-04-17 05:20:36 +07:00
|
|
|
return 0;
|
|
|
|
}
|
2010-06-01 16:37:27 +07:00
|
|
|
EXPORT_SYMBOL_GPL(md_run);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int do_md_run(struct mddev *mddev)
|
2010-03-29 07:10:42 +07:00
|
|
|
{
|
|
|
|
int err;
|
|
|
|
|
|
|
|
err = md_run(mddev);
|
|
|
|
if (err)
|
|
|
|
goto out;
|
2010-06-01 16:37:35 +07:00
|
|
|
err = bitmap_load(mddev);
|
|
|
|
if (err) {
|
|
|
|
bitmap_destroy(mddev);
|
|
|
|
goto out;
|
|
|
|
}
|
2011-06-08 05:49:36 +07:00
|
|
|
|
2015-10-22 12:01:25 +07:00
|
|
|
if (mddev_is_clustered(mddev))
|
|
|
|
md_allow_write(mddev);
|
|
|
|
|
2011-06-08 05:49:36 +07:00
|
|
|
md_wakeup_thread(mddev->thread);
|
|
|
|
md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
|
|
|
|
|
2010-03-29 07:10:42 +07:00
|
|
|
set_capacity(mddev->gendisk, mddev->array_sectors);
|
|
|
|
revalidate_disk(mddev->gendisk);
|
2011-02-24 13:26:41 +07:00
|
|
|
mddev->changed = 1;
|
2010-03-29 07:10:42 +07:00
|
|
|
kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
|
|
|
|
out:
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int restart_array(struct mddev *mddev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct gendisk *disk = mddev->gendisk;
|
|
|
|
|
2008-07-11 19:02:21 +07:00
|
|
|
/* Complain if it has no devices */
|
2005-04-17 05:20:36 +07:00
|
|
|
if (list_empty(&mddev->disks))
|
2008-07-11 19:02:21 +07:00
|
|
|
return -ENXIO;
|
|
|
|
if (!mddev->pers)
|
|
|
|
return -EINVAL;
|
|
|
|
if (!mddev->ro)
|
|
|
|
return -EBUSY;
|
2015-10-09 11:54:13 +07:00
|
|
|
if (test_bit(MD_HAS_JOURNAL, &mddev->flags)) {
|
|
|
|
struct md_rdev *rdev;
|
|
|
|
bool has_journal = false;
|
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
rdev_for_each_rcu(rdev, mddev) {
|
|
|
|
if (test_bit(Journal, &rdev->flags) &&
|
|
|
|
!test_bit(Faulty, &rdev->flags)) {
|
|
|
|
has_journal = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
|
|
|
|
|
|
|
/* Don't restart rw with journal missing/faulty */
|
|
|
|
if (!has_journal)
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2008-07-11 19:02:21 +07:00
|
|
|
mddev->safemode = 0;
|
|
|
|
mddev->ro = 0;
|
|
|
|
set_disk_ro(disk, 0);
|
|
|
|
printk(KERN_INFO "md: %s switched to read-write mode.\n",
|
|
|
|
mdname(mddev));
|
|
|
|
/* Kick recovery or resync if necessary */
|
|
|
|
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
|
|
|
|
md_wakeup_thread(mddev->thread);
|
|
|
|
md_wakeup_thread(mddev->sync_thread);
|
2010-06-01 16:37:23 +07:00
|
|
|
sysfs_notify_dirent_safe(mddev->sysfs_state);
|
2008-07-11 19:02:21 +07:00
|
|
|
return 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static void md_clean(struct mddev *mddev)
|
2010-03-29 07:37:13 +07:00
|
|
|
{
|
|
|
|
mddev->array_sectors = 0;
|
|
|
|
mddev->external_size = 0;
|
|
|
|
mddev->dev_sectors = 0;
|
|
|
|
mddev->raid_disks = 0;
|
|
|
|
mddev->recovery_cp = 0;
|
|
|
|
mddev->resync_min = 0;
|
|
|
|
mddev->resync_max = MaxSector;
|
|
|
|
mddev->reshape_position = MaxSector;
|
|
|
|
mddev->external = 0;
|
|
|
|
mddev->persistent = 0;
|
|
|
|
mddev->level = LEVEL_NONE;
|
|
|
|
mddev->clevel[0] = 0;
|
|
|
|
mddev->flags = 0;
|
|
|
|
mddev->ro = 0;
|
|
|
|
mddev->metadata_type[0] = 0;
|
|
|
|
mddev->chunk_sectors = 0;
|
|
|
|
mddev->ctime = mddev->utime = 0;
|
|
|
|
mddev->layout = 0;
|
|
|
|
mddev->max_disks = 0;
|
|
|
|
mddev->events = 0;
|
2010-05-18 06:28:43 +07:00
|
|
|
mddev->can_decrease_events = 0;
|
2010-03-29 07:37:13 +07:00
|
|
|
mddev->delta_disks = 0;
|
2012-05-21 06:27:00 +07:00
|
|
|
mddev->reshape_backwards = 0;
|
2010-03-29 07:37:13 +07:00
|
|
|
mddev->new_level = LEVEL_NONE;
|
|
|
|
mddev->new_layout = 0;
|
|
|
|
mddev->new_chunk_sectors = 0;
|
|
|
|
mddev->curr_resync = 0;
|
2012-10-11 10:17:59 +07:00
|
|
|
atomic64_set(&mddev->resync_mismatches, 0);
|
2010-03-29 07:37:13 +07:00
|
|
|
mddev->suspend_lo = mddev->suspend_hi = 0;
|
|
|
|
mddev->sync_speed_min = mddev->sync_speed_max = 0;
|
|
|
|
mddev->recovery = 0;
|
|
|
|
mddev->in_sync = 0;
|
2011-02-24 13:26:41 +07:00
|
|
|
mddev->changed = 0;
|
2010-03-29 07:37:13 +07:00
|
|
|
mddev->degraded = 0;
|
|
|
|
mddev->safemode = 0;
|
2015-06-25 14:01:40 +07:00
|
|
|
mddev->private = NULL;
|
2016-08-12 12:42:38 +07:00
|
|
|
mddev->cluster_info = NULL;
|
2010-03-29 07:37:13 +07:00
|
|
|
mddev->bitmap_info.offset = 0;
|
|
|
|
mddev->bitmap_info.default_offset = 0;
|
2012-05-22 10:55:07 +07:00
|
|
|
mddev->bitmap_info.default_space = 0;
|
2010-03-29 07:37:13 +07:00
|
|
|
mddev->bitmap_info.chunksize = 0;
|
|
|
|
mddev->bitmap_info.daemon_sleep = 0;
|
|
|
|
mddev->bitmap_info.max_write_behind = 0;
|
2016-08-12 12:42:38 +07:00
|
|
|
mddev->bitmap_info.nodes = 0;
|
2010-03-29 07:37:13 +07:00
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static void __md_stop_writes(struct mddev *mddev)
|
2010-03-29 08:07:53 +07:00
|
|
|
{
|
2013-05-09 06:48:30 +07:00
|
|
|
set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
|
2014-12-11 06:02:10 +07:00
|
|
|
flush_workqueue(md_misc_wq);
|
2010-03-29 08:07:53 +07:00
|
|
|
if (mddev->sync_thread) {
|
|
|
|
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
|
2013-04-24 08:42:43 +07:00
|
|
|
md_reap_sync_thread(mddev);
|
2010-03-29 08:07:53 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
del_timer_sync(&mddev->safemode_timer);
|
|
|
|
|
|
|
|
bitmap_flush(mddev);
|
|
|
|
md_super_wait(mddev);
|
|
|
|
|
2013-04-24 08:42:42 +07:00
|
|
|
if (mddev->ro == 0 &&
|
2015-10-22 12:01:25 +07:00
|
|
|
((!mddev->in_sync && !mddev_is_clustered(mddev)) ||
|
|
|
|
(mddev->flags & MD_UPDATE_SB_FLAGS))) {
|
2010-03-29 08:07:53 +07:00
|
|
|
/* mark array as shutdown cleanly */
|
2015-10-22 12:01:25 +07:00
|
|
|
if (!mddev_is_clustered(mddev))
|
|
|
|
mddev->in_sync = 1;
|
2010-03-29 08:07:53 +07:00
|
|
|
md_update_sb(mddev, 1);
|
|
|
|
}
|
|
|
|
}
|
2011-01-14 05:14:33 +07:00
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
void md_stop_writes(struct mddev *mddev)
{
	mddev_lock_nointr(mddev);
	__md_stop_writes(mddev);
	mddev_unlock(mddev);
}
EXPORT_SYMBOL_GPL(md_stop_writes);

static void mddev_detach(struct mddev *mddev)
{
	struct bitmap *bitmap = mddev->bitmap;
	/* wait for behind writes to complete */
	if (bitmap && atomic_read(&bitmap->behind_writes) > 0) {
		printk(KERN_INFO "md:%s: behind writes in progress - waiting to stop.\n",
		       mdname(mddev));
		/* need to kick something here to make sure I/O goes? */
		wait_event(bitmap->behind_wait,
			   atomic_read(&bitmap->behind_writes) == 0);
	}
	if (mddev->pers && mddev->pers->quiesce) {
		mddev->pers->quiesce(mddev, 1);
		mddev->pers->quiesce(mddev, 0);
	}
	md_unregister_thread(&mddev->thread);
	if (mddev->queue)
		blk_sync_queue(mddev->queue); /* the unplug fn references 'conf' */
}

static void __md_stop(struct mddev *mddev)
{
	struct md_personality *pers = mddev->pers;
	mddev_detach(mddev);
	/* Ensure ->event_work is done */
	flush_workqueue(md_misc_wq);
	spin_lock(&mddev->lock);
	mddev->pers = NULL;
	spin_unlock(&mddev->lock);
	pers->free(mddev, mddev->private);
	mddev->private = NULL;
	if (pers->sync_request && mddev->to_remove == NULL)
		mddev->to_remove = &md_redundancy_group;
	module_put(pers->owner);
	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
}

void md_stop(struct mddev *mddev)
{
	/* stop the array and free any attached data structures.
	 * This is called from dm-raid
	 */
	__md_stop(mddev);
	bitmap_destroy(mddev);
	if (mddev->bio_set)
		bioset_free(mddev->bio_set);
}

EXPORT_SYMBOL_GPL(md_stop);

static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)
{
	int err = 0;
	int did_freeze = 0;

	if (!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
		did_freeze = 1;
		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		md_wakeup_thread(mddev->thread);
	}
	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
	if (mddev->sync_thread)
		/* Thread might be blocked waiting for metadata update
		 * which will now never happen */
		wake_up_process(mddev->sync_thread->tsk);

	if (mddev->external && test_bit(MD_CHANGE_PENDING, &mddev->flags))
		return -EBUSY;
	mddev_unlock(mddev);
	wait_event(resync_wait, !test_bit(MD_RECOVERY_RUNNING,
					  &mddev->recovery));
	wait_event(mddev->sb_wait,
		   !test_bit(MD_CHANGE_PENDING, &mddev->flags));
	mddev_lock_nointr(mddev);

	mutex_lock(&mddev->open_mutex);
	if ((mddev->pers && atomic_read(&mddev->openers) > !!bdev) ||
	    mddev->sync_thread ||
	    test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
		printk("md: %s still in use.\n", mdname(mddev));
		if (did_freeze) {
			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
			set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
			md_wakeup_thread(mddev->thread);
		}
		err = -EBUSY;
		goto out;
	}
	if (mddev->pers) {
		__md_stop_writes(mddev);

		err = -ENXIO;
		if (mddev->ro == 1)
			goto out;
		mddev->ro = 1;
		set_disk_ro(mddev->gendisk, 1);
		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
		md_wakeup_thread(mddev->thread);
		sysfs_notify_dirent_safe(mddev->sysfs_state);
		err = 0;
	}
out:
	mutex_unlock(&mddev->open_mutex);
	return err;
}

/* mode:
 *   0 - completely stop and disassemble array
 *   2 - stop but do not disassemble array
 */
static int do_md_stop(struct mddev *mddev, int mode,
		      struct block_device *bdev)
{
	struct gendisk *disk = mddev->gendisk;
	struct md_rdev *rdev;
	int did_freeze = 0;

	if (!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
		did_freeze = 1;
		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		md_wakeup_thread(mddev->thread);
	}
	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
	if (mddev->sync_thread)
		/* Thread might be blocked waiting for metadata update
		 * which will now never happen */
		wake_up_process(mddev->sync_thread->tsk);

	mddev_unlock(mddev);
	wait_event(resync_wait, (mddev->sync_thread == NULL &&
				 !test_bit(MD_RECOVERY_RUNNING,
					   &mddev->recovery)));
	mddev_lock_nointr(mddev);

	mutex_lock(&mddev->open_mutex);
	if ((mddev->pers && atomic_read(&mddev->openers) > !!bdev) ||
	    mddev->sysfs_active ||
	    mddev->sync_thread ||
	    test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
		printk("md: %s still in use.\n", mdname(mddev));
		mutex_unlock(&mddev->open_mutex);
		if (did_freeze) {
			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
			set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
			md_wakeup_thread(mddev->thread);
		}
		return -EBUSY;
	}
	if (mddev->pers) {
		if (mddev->ro)
			set_disk_ro(disk, 0);

		__md_stop_writes(mddev);
		__md_stop(mddev);
		mddev->queue->backing_dev_info.congested_fn = NULL;

		/* tell userspace to handle 'inactive' */
		sysfs_notify_dirent_safe(mddev->sysfs_state);

		rdev_for_each(rdev, mddev)
			if (rdev->raid_disk >= 0)
				sysfs_unlink_rdev(mddev, rdev);

		set_capacity(disk, 0);
		mutex_unlock(&mddev->open_mutex);
		mddev->changed = 1;
		revalidate_disk(disk);

		if (mddev->ro)
			mddev->ro = 0;
	} else
		mutex_unlock(&mddev->open_mutex);
	/*
	 * Free resources if final stop
	 */
	if (mode == 0) {
		printk(KERN_INFO "md: %s stopped.\n", mdname(mddev));

		bitmap_destroy(mddev);
		if (mddev->bitmap_info.file) {
			struct file *f = mddev->bitmap_info.file;
			spin_lock(&mddev->lock);
			mddev->bitmap_info.file = NULL;
			spin_unlock(&mddev->lock);
			fput(f);
		}
		mddev->bitmap_info.offset = 0;

		export_array(mddev);

		md_clean(mddev);
		if (mddev->hold_active == UNTIL_STOP)
			mddev->hold_active = 0;
	}
	md_new_event(mddev);
	sysfs_notify_dirent_safe(mddev->sysfs_state);
	return 0;
}

#ifndef MODULE
static void autorun_array(struct mddev *mddev)
{
	struct md_rdev *rdev;
	int err;

	if (list_empty(&mddev->disks))
		return;

	printk(KERN_INFO "md: running: ");

	rdev_for_each(rdev, mddev) {
		char b[BDEVNAME_SIZE];
		printk("<%s>", bdevname(rdev->bdev, b));
	}
	printk("\n");

	err = do_md_run(mddev);
	if (err) {
		printk(KERN_WARNING "md: do_md_run() returned %d\n", err);
		do_md_stop(mddev, 0, NULL);
	}
}

/*
 * let's try to run arrays based on all disks that have arrived
 * until now. (those are in pending_raid_disks)
 *
 * the method: pick the first pending disk, collect all disks with
 * the same UUID, remove all from the pending list and put them into
 * the 'same_array' list. Then order this list based on superblock
 * update time (freshest comes first), kick out 'old' disks and
 * compare superblocks. If everything's fine then run it.
 *
 * If "unit" is allocated, then bump its reference count
 */
static void autorun_devices(int part)
{
	struct md_rdev *rdev0, *rdev, *tmp;
	struct mddev *mddev;
	char b[BDEVNAME_SIZE];

	printk(KERN_INFO "md: autorun ...\n");
	while (!list_empty(&pending_raid_disks)) {
		int unit;
		dev_t dev;
		LIST_HEAD(candidates);
		rdev0 = list_entry(pending_raid_disks.next,
				   struct md_rdev, same_set);

		printk(KERN_INFO "md: considering %s ...\n",
		       bdevname(rdev0->bdev, b));
		INIT_LIST_HEAD(&candidates);
		rdev_for_each_list(rdev, tmp, &pending_raid_disks)
			if (super_90_load(rdev, rdev0, 0) >= 0) {
				printk(KERN_INFO "md: adding %s ...\n",
				       bdevname(rdev->bdev, b));
				list_move(&rdev->same_set, &candidates);
			}
		/*
		 * now we have a set of devices, with all of them having
		 * mostly sane superblocks. It's time to allocate the
		 * mddev.
		 */
		if (part) {
			dev = MKDEV(mdp_major,
				    rdev0->preferred_minor << MdpMinorShift);
			unit = MINOR(dev) >> MdpMinorShift;
		} else {
			dev = MKDEV(MD_MAJOR, rdev0->preferred_minor);
			unit = MINOR(dev);
		}
		if (rdev0->preferred_minor != unit) {
			printk(KERN_INFO "md: unit number in %s is bad: %d\n",
			       bdevname(rdev0->bdev, b), rdev0->preferred_minor);
			break;
		}

		md_probe(dev, NULL, NULL);
		mddev = mddev_find(dev);
		if (!mddev || !mddev->gendisk) {
			if (mddev)
				mddev_put(mddev);
			printk(KERN_ERR
			       "md: cannot allocate memory for md drive.\n");
			break;
		}
		if (mddev_lock(mddev))
			printk(KERN_WARNING "md: %s locked, cannot run\n",
			       mdname(mddev));
		else if (mddev->raid_disks || mddev->major_version
			 || !list_empty(&mddev->disks)) {
			printk(KERN_WARNING
			       "md: %s already running, cannot run %s\n",
			       mdname(mddev), bdevname(rdev0->bdev, b));
			mddev_unlock(mddev);
		} else {
			printk(KERN_INFO "md: created %s\n", mdname(mddev));
			mddev->persistent = 1;
			rdev_for_each_list(rdev, tmp, &candidates) {
				list_del_init(&rdev->same_set);
				if (bind_rdev_to_array(rdev, mddev))
					export_rdev(rdev);
			}
			autorun_array(mddev);
			mddev_unlock(mddev);
		}
		/* on success, candidates will be empty, on error
		 * it won't...
		 */
		rdev_for_each_list(rdev, tmp, &candidates) {
			list_del_init(&rdev->same_set);
			export_rdev(rdev);
		}
		mddev_put(mddev);
	}
	printk(KERN_INFO "md: ... autorun DONE.\n");
}
#endif /* !MODULE */

static int get_version(void __user *arg)
{
	mdu_version_t ver;

	ver.major = MD_MAJOR_VERSION;
	ver.minor = MD_MINOR_VERSION;
	ver.patchlevel = MD_PATCHLEVEL_VERSION;

	if (copy_to_user(arg, &ver, sizeof(ver)))
		return -EFAULT;

	return 0;
}

static int get_array_info(struct mddev *mddev, void __user *arg)
{
	mdu_array_info_t info;
	int nr, working, insync, failed, spare;
	struct md_rdev *rdev;

	nr = working = insync = failed = spare = 0;
	rcu_read_lock();
	rdev_for_each_rcu(rdev, mddev) {
		nr++;
		if (test_bit(Faulty, &rdev->flags))
			failed++;
		else {
			working++;
			if (test_bit(In_sync, &rdev->flags))
				insync++;
			else if (test_bit(Journal, &rdev->flags))
				/* TODO: add journal count to md_u.h */
				;
			else
				spare++;
		}
	}
	rcu_read_unlock();

	info.major_version = mddev->major_version;
	info.minor_version = mddev->minor_version;
	info.patch_version = MD_PATCHLEVEL_VERSION;
	info.ctime = clamp_t(time64_t, mddev->ctime, 0, U32_MAX);
	info.level = mddev->level;
	info.size = mddev->dev_sectors / 2;
	if (info.size != mddev->dev_sectors / 2) /* overflow */
		info.size = -1;
	info.nr_disks = nr;
	info.raid_disks = mddev->raid_disks;
	info.md_minor = mddev->md_minor;
	info.not_persistent = !mddev->persistent;

	info.utime = clamp_t(time64_t, mddev->utime, 0, U32_MAX);
	info.state = 0;
	if (mddev->in_sync)
		info.state = (1<<MD_SB_CLEAN);
	if (mddev->bitmap && mddev->bitmap_info.offset)
		info.state |= (1<<MD_SB_BITMAP_PRESENT);
	if (mddev_is_clustered(mddev))
		info.state |= (1<<MD_SB_CLUSTERED);
	info.active_disks = insync;
	info.working_disks = working;
	info.failed_disks = failed;
	info.spare_disks = spare;

	info.layout = mddev->layout;
	info.chunk_size = mddev->chunk_sectors << 9;

	if (copy_to_user(arg, &info, sizeof(info)))
		return -EFAULT;

	return 0;
}

static int get_bitmap_file(struct mddev *mddev, void __user *arg)
{
	mdu_bitmap_file_t *file = NULL; /* too big for stack allocation */
	char *ptr;
	int err;

	file = kzalloc(sizeof(*file), GFP_NOIO);
	if (!file)
		return -ENOMEM;

	err = 0;
	spin_lock(&mddev->lock);
	/* bitmap enabled */
	if (mddev->bitmap_info.file) {
		ptr = file_path(mddev->bitmap_info.file, file->pathname,
				sizeof(file->pathname));
		if (IS_ERR(ptr))
			err = PTR_ERR(ptr);
		else
			memmove(file->pathname, ptr,
				sizeof(file->pathname) - (ptr - file->pathname));
	}
	spin_unlock(&mddev->lock);

	if (err == 0 &&
	    copy_to_user(arg, file, sizeof(*file)))
		err = -EFAULT;

	kfree(file);
	return err;
}

static int get_disk_info(struct mddev *mddev, void __user *arg)
{
	mdu_disk_info_t info;
	struct md_rdev *rdev;

	if (copy_from_user(&info, arg, sizeof(info)))
		return -EFAULT;

	rcu_read_lock();
	rdev = md_find_rdev_nr_rcu(mddev, info.number);
	if (rdev) {
		info.major = MAJOR(rdev->bdev->bd_dev);
		info.minor = MINOR(rdev->bdev->bd_dev);
		info.raid_disk = rdev->raid_disk;
		info.state = 0;
		if (test_bit(Faulty, &rdev->flags))
			info.state |= (1<<MD_DISK_FAULTY);
		else if (test_bit(In_sync, &rdev->flags)) {
			info.state |= (1<<MD_DISK_ACTIVE);
			info.state |= (1<<MD_DISK_SYNC);
		}
		if (test_bit(Journal, &rdev->flags))
			info.state |= (1<<MD_DISK_JOURNAL);
		if (test_bit(WriteMostly, &rdev->flags))
			info.state |= (1<<MD_DISK_WRITEMOSTLY);
	} else {
		info.major = info.minor = 0;
		info.raid_disk = -1;
		info.state = (1<<MD_DISK_REMOVED);
	}
	rcu_read_unlock();

	if (copy_to_user(arg, &info, sizeof(info)))
		return -EFAULT;

	return 0;
}

static int add_new_disk(struct mddev *mddev, mdu_disk_info_t *info)
{
	char b[BDEVNAME_SIZE], b2[BDEVNAME_SIZE];
	struct md_rdev *rdev;
	dev_t dev = MKDEV(info->major, info->minor);

	if (mddev_is_clustered(mddev) &&
	    !(info->state & ((1 << MD_DISK_CLUSTER_ADD) | (1 << MD_DISK_CANDIDATE)))) {
		pr_err("%s: Cannot add to clustered mddev.\n",
		       mdname(mddev));
		return -EINVAL;
	}

	if (info->major != MAJOR(dev) || info->minor != MINOR(dev))
		return -EOVERFLOW;

	if (!mddev->raid_disks) {
		int err;
		/* expecting a device which has a superblock */
		rdev = md_import_device(dev, mddev->major_version, mddev->minor_version);
		if (IS_ERR(rdev)) {
			printk(KERN_WARNING
				"md: md_import_device returned %ld\n",
				PTR_ERR(rdev));
			return PTR_ERR(rdev);
		}
		if (!list_empty(&mddev->disks)) {
			struct md_rdev *rdev0
				= list_entry(mddev->disks.next,
					     struct md_rdev, same_set);
			err = super_types[mddev->major_version]
				.load_super(rdev, rdev0, mddev->minor_version);
			if (err < 0) {
				printk(KERN_WARNING
					"md: %s has different UUID to %s\n",
					bdevname(rdev->bdev, b),
					bdevname(rdev0->bdev, b2));
				export_rdev(rdev);
				return -EINVAL;
			}
		}
		err = bind_rdev_to_array(rdev, mddev);
		if (err)
			export_rdev(rdev);
		return err;
	}

	/*
	 * add_new_disk can be used once the array is assembled
	 * to add "hot spares". They must already have a superblock
	 * written
	 */
	if (mddev->pers) {
		int err;
		if (!mddev->pers->hot_add_disk) {
			printk(KERN_WARNING
				"%s: personality does not support diskops!\n",
				mdname(mddev));
			return -EINVAL;
		}
		if (mddev->persistent)
			rdev = md_import_device(dev, mddev->major_version,
						mddev->minor_version);
		else
			rdev = md_import_device(dev, -1, -1);
		if (IS_ERR(rdev)) {
			printk(KERN_WARNING
				"md: md_import_device returned %ld\n",
				PTR_ERR(rdev));
			return PTR_ERR(rdev);
		}
		/* set saved_raid_disk if appropriate */
		if (!mddev->persistent) {
			if (info->state & (1<<MD_DISK_SYNC) &&
			    info->raid_disk < mddev->raid_disks) {
				rdev->raid_disk = info->raid_disk;
				set_bit(In_sync, &rdev->flags);
				clear_bit(Bitmap_sync, &rdev->flags);
			} else
				rdev->raid_disk = -1;
			rdev->saved_raid_disk = rdev->raid_disk;
		} else
			super_types[mddev->major_version].
				validate_super(mddev, rdev);
		if ((info->state & (1<<MD_DISK_SYNC)) &&
		     rdev->raid_disk != info->raid_disk) {
			/* This was a hot-add request, but events don't
			 * match, so reject it.
			 */
			export_rdev(rdev);
			return -EINVAL;
		}

		clear_bit(In_sync, &rdev->flags); /* just to be sure */
		if (info->state & (1<<MD_DISK_WRITEMOSTLY))
			set_bit(WriteMostly, &rdev->flags);
		else
			clear_bit(WriteMostly, &rdev->flags);

		if (info->state & (1<<MD_DISK_JOURNAL)) {
			struct md_rdev *rdev2;
			bool has_journal = false;

			/* make sure no existing journal disk */
			rdev_for_each(rdev2, mddev) {
				if (test_bit(Journal, &rdev2->flags)) {
					has_journal = true;
					break;
				}
			}
			if (has_journal) {
				export_rdev(rdev);
				return -EBUSY;
			}
			set_bit(Journal, &rdev->flags);
		}
		/*
		 * check whether the device shows up in other nodes
		 */
		if (mddev_is_clustered(mddev)) {
			if (info->state & (1 << MD_DISK_CANDIDATE))
				set_bit(Candidate, &rdev->flags);
			else if (info->state & (1 << MD_DISK_CLUSTER_ADD)) {
				/* --add initiated by this node */
				err = md_cluster_ops->add_new_disk(mddev, rdev);
				if (err) {
					export_rdev(rdev);
					return err;
				}
			}
		}

		rdev->raid_disk = -1;
		err = bind_rdev_to_array(rdev, mddev);

		if (err)
			export_rdev(rdev);

		if (mddev_is_clustered(mddev)) {
			if (info->state & (1 << MD_DISK_CANDIDATE)) {
|
|
|
|
if (!err) {
|
|
|
|
err = md_cluster_ops->new_disk_ack(mddev,
|
|
|
|
err == 0);
|
|
|
|
if (err)
|
|
|
|
md_kick_rdev_from_array(rdev);
|
|
|
|
}
|
|
|
|
} else {
|
2015-10-02 01:20:27 +07:00
|
|
|
if (err)
|
|
|
|
md_cluster_ops->add_new_disk_cancel(mddev);
|
|
|
|
else
|
|
|
|
err = add_bound_rdev(rdev);
|
|
|
|
}
|
|
|
|
|
|
|
|
} else if (!err)
|
2015-04-14 22:45:22 +07:00
|
|
|
err = add_bound_rdev(rdev);
|
2015-10-02 01:20:27 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* otherwise, add_new_disk is only allowed
|
|
|
|
* for major_version==0 superblocks
|
|
|
|
*/
|
|
|
|
if (mddev->major_version != 0) {
|
|
|
|
printk(KERN_WARNING "%s: ADD_NEW_DISK not supported\n",
|
|
|
|
mdname(mddev));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!(info->state & (1<<MD_DISK_FAULTY))) {
|
|
|
|
int err;
|
2008-10-13 07:55:12 +07:00
|
|
|
rdev = md_import_device(dev, -1, 0);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (IS_ERR(rdev)) {
|
2014-09-30 11:23:59 +07:00
|
|
|
printk(KERN_WARNING
|
2005-04-17 05:20:36 +07:00
|
|
|
"md: error, md_import_device() returned %ld\n",
|
|
|
|
PTR_ERR(rdev));
|
|
|
|
return PTR_ERR(rdev);
|
|
|
|
}
|
|
|
|
rdev->desc_nr = info->number;
|
|
|
|
if (info->raid_disk < mddev->raid_disks)
|
|
|
|
rdev->raid_disk = info->raid_disk;
|
|
|
|
else
|
|
|
|
rdev->raid_disk = -1;
|
|
|
|
|
|
|
|
if (rdev->raid_disk < mddev->raid_disks)
|
2005-11-09 12:39:31 +07:00
|
|
|
if (info->state & (1<<MD_DISK_SYNC))
|
|
|
|
set_bit(In_sync, &rdev->flags);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2005-09-10 06:23:45 +07:00
|
|
|
if (info->state & (1<<MD_DISK_WRITEMOSTLY))
|
|
|
|
set_bit(WriteMostly, &rdev->flags);
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
if (!mddev->persistent) {
|
|
|
|
printk(KERN_INFO "md: nonpersistent superblock ...\n");
|
2010-11-08 20:39:12 +07:00
|
|
|
rdev->sb_start = i_size_read(rdev->bdev->bd_inode) / 512;
|
|
|
|
} else
|
2011-01-14 05:14:33 +07:00
|
|
|
rdev->sb_start = calc_dev_sboffset(rdev);
|
2009-06-18 05:48:58 +07:00
|
|
|
rdev->sectors = rdev->sb_start;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-01-06 15:20:55 +07:00
|
|
|
err = bind_rdev_to_array(rdev, mddev);
|
|
|
|
if (err) {
|
|
|
|
export_rdev(rdev);
|
|
|
|
return err;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
static int hot_remove_disk(struct mddev *mddev, dev_t dev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
char b[BDEVNAME_SIZE];
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
rdev = find_rdev(mddev, dev);
|
|
|
|
if (!rdev)
|
|
|
|
return -ENXIO;
|
|
|
|
|
2015-09-28 22:27:26 +07:00
|
|
|
if (rdev->raid_disk < 0)
|
|
|
|
goto kick_rdev;
|
2014-06-07 13:44:51 +07:00
|
|
|
|
2013-04-24 08:42:41 +07:00
|
|
|
clear_bit(Blocked, &rdev->flags);
|
|
|
|
remove_and_add_spares(mddev, rdev);
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
if (rdev->raid_disk >= 0)
|
|
|
|
goto busy;
|
|
|
|
|
2015-09-28 22:27:26 +07:00
|
|
|
kick_rdev:
|
2015-12-21 06:51:00 +07:00
|
|
|
if (mddev_is_clustered(mddev))
|
2015-04-14 22:44:44 +07:00
|
|
|
md_cluster_ops->remove_disk(mddev, rdev);
|
|
|
|
|
2015-04-14 22:43:24 +07:00
|
|
|
md_kick_rdev_from_array(rdev);
|
2006-10-03 15:15:46 +07:00
|
|
|
md_update_sb(mddev, 1);
|
[PATCH] md: make /proc/mdstat pollable
With this patch it is possible to poll /proc/mdstat to detect arrays appearing
or disappearing, to detect failures, recovery starting, recovery completing,
and devices being added and removed.
It is similar to the poll-ability of /proc/mounts, though different in that:
We always report that the file is readable (because face it, it is, even if
only for EOF).
We report POLLPRI when there is a change so that select() can detect
it as an exceptional event. Not only are these exceptional events, but
that is the mechanism that the current 'mdadm' uses to watch for events
(It also polls after a timeout).
(We also report POLLERR like /proc/mounts).
Finally, we only reset the per-file event counter when the start of the file
is read, rather than when poll() returns an event. This is more robust as it
means that an fd will continue to report activity to poll/select until the
program clearly responds to that activity.
md_new_event takes an 'mddev' which isn't currently used, but it will be soon.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
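A minimal userspace sketch of the watch loop this commit enables, assuming only the poll semantics described above (POLLPRI on change, re-read from the start of the file to reset the per-file event counter); `wait_mdstat_event` is a hypothetical helper name, not a kernel or mdadm API:

```c
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

/* Wait for a change event on an mdstat-style proc file.
 * Returns the revents mask, 0 on timeout, or -1 if the file can't be opened. */
int wait_mdstat_event(const char *path, int timeout_ms)
{
	char buf[4096];
	struct pollfd pfd;
	int rc, rev;

	pfd.fd = open(path, O_RDONLY);
	if (pfd.fd < 0)
		return -1;
	pfd.events = POLLPRI;	/* changes are reported as exceptional events */

	rc = poll(&pfd, 1, timeout_ms);
	if (rc > 0) {
		/* reading from offset 0 resets the per-file event counter */
		ssize_t n;
		lseek(pfd.fd, 0, SEEK_SET);
		n = read(pfd.fd, buf, sizeof(buf));
		(void)n;
	}
	rev = rc > 0 ? pfd.revents : rc;
	close(pfd.fd);
	return rev;
}
```

In a real watcher this would sit in a loop, parsing the buffer after each wakeup, much as mdadm's monitor mode does.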
2006-01-06 15:20:30 +07:00
|
|
|
md_new_event(mddev);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
busy:
|
2008-04-22 05:42:58 +07:00
|
|
|
printk(KERN_WARNING "md: cannot remove active disk %s from %s ...\n",
|
2005-04-17 05:20:36 +07:00
|
|
|
bdevname(rdev->bdev,b), mdname(mddev));
|
|
|
|
return -EBUSY;
|
|
|
|
}
|
|
|
|
|
2014-09-30 11:23:59 +07:00
|
|
|
static int hot_add_disk(struct mddev *mddev, dev_t dev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
char b[BDEVNAME_SIZE];
|
|
|
|
int err;
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (!mddev->pers)
|
|
|
|
return -ENODEV;
|
|
|
|
|
|
|
|
if (mddev->major_version != 0) {
|
|
|
|
printk(KERN_WARNING "%s: HOT_ADD may only be used with"
|
|
|
|
" version-0 superblocks.\n",
|
|
|
|
mdname(mddev));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
if (!mddev->pers->hot_add_disk) {
|
2014-09-30 11:23:59 +07:00
|
|
|
printk(KERN_WARNING
|
2005-04-17 05:20:36 +07:00
|
|
|
"%s: personality does not support diskops!\n",
|
|
|
|
mdname(mddev));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2008-10-13 07:55:12 +07:00
|
|
|
rdev = md_import_device(dev, -1, 0);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (IS_ERR(rdev)) {
|
2014-09-30 11:23:59 +07:00
|
|
|
printk(KERN_WARNING
|
2005-04-17 05:20:36 +07:00
|
|
|
"md: error, md_import_device() returned %ld\n",
|
|
|
|
PTR_ERR(rdev));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mddev->persistent)
|
2011-01-14 05:14:33 +07:00
|
|
|
rdev->sb_start = calc_dev_sboffset(rdev);
|
2005-04-17 05:20:36 +07:00
|
|
|
else
|
2010-11-08 20:39:12 +07:00
|
|
|
rdev->sb_start = i_size_read(rdev->bdev->bd_inode) / 512;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-06-18 05:48:58 +07:00
|
|
|
rdev->sectors = rdev->sb_start;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2005-11-09 12:39:31 +07:00
|
|
|
if (test_bit(Faulty, &rdev->flags)) {
|
2014-09-30 11:23:59 +07:00
|
|
|
printk(KERN_WARNING
|
2005-04-17 05:20:36 +07:00
|
|
|
"md: can not hot-add faulty %s disk to %s!\n",
|
|
|
|
bdevname(rdev->bdev,b), mdname(mddev));
|
|
|
|
err = -EINVAL;
|
|
|
|
goto abort_export;
|
|
|
|
}
|
2014-06-07 13:44:51 +07:00
|
|
|
|
2005-11-09 12:39:31 +07:00
|
|
|
clear_bit(In_sync, &rdev->flags);
|
2005-04-17 05:20:36 +07:00
|
|
|
rdev->desc_nr = -1;
|
2006-10-06 14:44:04 +07:00
|
|
|
rdev->saved_raid_disk = -1;
|
2006-01-06 15:20:55 +07:00
|
|
|
err = bind_rdev_to_array(rdev, mddev);
|
|
|
|
if (err)
|
2015-09-29 07:21:35 +07:00
|
|
|
goto abort_export;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The rest should better be atomic, we can have disk failures
|
|
|
|
* noticed in interrupt contexts ...
|
|
|
|
*/
|
|
|
|
|
|
|
|
rdev->raid_disk = -1;
|
|
|
|
|
2006-10-03 15:15:46 +07:00
|
|
|
md_update_sb(mddev, 1);
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* Kick recovery, maybe this spare has to be added to the
|
|
|
|
* array immediately.
|
|
|
|
*/
|
|
|
|
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
|
|
|
|
md_wakeup_thread(mddev->thread);
|
2006-01-06 15:20:30 +07:00
|
|
|
md_new_event(mddev);
|
2005-04-17 05:20:36 +07:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
abort_export:
|
|
|
|
export_rdev(rdev);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int set_bitmap_file(struct mddev *mddev, int fd)
|
2005-06-22 07:17:14 +07:00
|
|
|
{
|
2014-04-09 09:25:40 +07:00
|
|
|
int err = 0;
|
2005-06-22 07:17:14 +07:00
|
|
|
|
2005-09-10 06:23:45 +07:00
|
|
|
if (mddev->pers) {
|
2014-08-08 12:40:24 +07:00
|
|
|
if (!mddev->pers->quiesce || !mddev->thread)
|
2005-09-10 06:23:45 +07:00
|
|
|
return -EBUSY;
|
|
|
|
if (mddev->recovery || mddev->sync_thread)
|
|
|
|
return -EBUSY;
|
|
|
|
/* we should be able to change the bitmap.. */
|
|
|
|
}
|
2005-06-22 07:17:14 +07:00
|
|
|
|
2005-09-10 06:23:45 +07:00
|
|
|
if (fd >= 0) {
|
2014-04-09 09:25:40 +07:00
|
|
|
struct inode *inode;
|
2014-12-15 08:57:00 +07:00
|
|
|
struct file *f;
|
|
|
|
|
|
|
|
if (mddev->bitmap || mddev->bitmap_info.file)
|
2005-09-10 06:23:45 +07:00
|
|
|
return -EEXIST; /* cannot add when bitmap is present */
|
2014-12-15 08:57:00 +07:00
|
|
|
f = fget(fd);
|
2005-06-22 07:17:14 +07:00
|
|
|
|
2014-12-15 08:57:00 +07:00
|
|
|
if (f == NULL) {
|
2005-09-10 06:23:45 +07:00
|
|
|
printk(KERN_ERR "%s: error: failed to get bitmap file\n",
|
|
|
|
mdname(mddev));
|
|
|
|
return -EBADF;
|
|
|
|
}
|
|
|
|
|
2014-12-15 08:57:00 +07:00
|
|
|
inode = f->f_mapping->host;
|
2014-04-09 09:25:40 +07:00
|
|
|
if (!S_ISREG(inode->i_mode)) {
|
|
|
|
printk(KERN_ERR "%s: error: bitmap file must be a regular file\n",
|
|
|
|
mdname(mddev));
|
|
|
|
err = -EBADF;
|
2014-12-15 08:57:00 +07:00
|
|
|
} else if (!(f->f_mode & FMODE_WRITE)) {
|
2014-04-09 09:25:40 +07:00
|
|
|
printk(KERN_ERR "%s: error: bitmap file must open for write\n",
|
|
|
|
mdname(mddev));
|
|
|
|
err = -EBADF;
|
|
|
|
} else if (atomic_read(&inode->i_writecount) != 1) {
|
2005-09-10 06:23:45 +07:00
|
|
|
printk(KERN_ERR "%s: error: bitmap file is already in use\n",
|
|
|
|
mdname(mddev));
|
2014-04-09 09:25:40 +07:00
|
|
|
err = -EBUSY;
|
|
|
|
}
|
|
|
|
if (err) {
|
2014-12-15 08:57:00 +07:00
|
|
|
fput(f);
|
2005-09-10 06:23:45 +07:00
|
|
|
return err;
|
|
|
|
}
|
2014-12-15 08:57:00 +07:00
|
|
|
mddev->bitmap_info.file = f;
|
2009-12-14 08:49:52 +07:00
|
|
|
mddev->bitmap_info.offset = 0; /* file overrides offset */
|
2005-09-10 06:23:45 +07:00
|
|
|
} else if (mddev->bitmap == NULL)
|
|
|
|
return -ENOENT; /* cannot remove what isn't there */
|
|
|
|
err = 0;
|
|
|
|
if (mddev->pers) {
|
|
|
|
mddev->pers->quiesce(mddev, 1);
|
2010-06-01 16:37:35 +07:00
|
|
|
if (fd >= 0) {
|
2014-06-07 00:43:49 +07:00
|
|
|
struct bitmap *bitmap;
|
|
|
|
|
|
|
|
bitmap = bitmap_create(mddev, -1);
|
|
|
|
if (!IS_ERR(bitmap)) {
|
|
|
|
mddev->bitmap = bitmap;
|
2010-06-01 16:37:35 +07:00
|
|
|
err = bitmap_load(mddev);
|
2015-02-25 07:44:11 +07:00
|
|
|
} else
|
|
|
|
err = PTR_ERR(bitmap);
|
2010-06-01 16:37:35 +07:00
|
|
|
}
|
2006-06-26 14:27:43 +07:00
|
|
|
if (fd < 0 || err) {
|
2005-09-10 06:23:45 +07:00
|
|
|
bitmap_destroy(mddev);
|
2006-06-26 14:27:43 +07:00
|
|
|
fd = -1; /* make sure to put the file */
|
|
|
|
}
|
2005-09-10 06:23:45 +07:00
|
|
|
mddev->pers->quiesce(mddev, 0);
|
2006-06-26 14:27:43 +07:00
|
|
|
}
|
|
|
|
if (fd < 0) {
|
2014-12-15 08:57:00 +07:00
|
|
|
struct file *f = mddev->bitmap_info.file;
|
|
|
|
if (f) {
|
|
|
|
spin_lock(&mddev->lock);
|
|
|
|
mddev->bitmap_info.file = NULL;
|
|
|
|
spin_unlock(&mddev->lock);
|
|
|
|
fput(f);
|
|
|
|
}
|
2005-09-10 06:23:45 +07:00
|
|
|
}
|
|
|
|
|
2005-06-22 07:17:14 +07:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* set_array_info is used in two different ways
|
|
|
|
* The original usage is when creating a new array.
|
|
|
|
* In this usage, raid_disks is > 0 and it together with
|
|
|
|
* level, size, not_persistent, layout, chunksize determine the
|
|
|
|
* shape of the array.
|
|
|
|
* This will always create an array with a type-0.90.0 superblock.
|
|
|
|
* The newer usage is when assembling an array.
|
|
|
|
* In this case raid_disks will be 0, and the major_version field is
|
|
|
|
* used to determine which style super-blocks are to be found on the devices.
|
|
|
|
* The minor and patch _version numbers are also kept in case the
|
|
|
|
* super_block handler wishes to interpret them.
|
|
|
|
*/
|
2014-09-30 11:23:59 +07:00
|
|
|
static int set_array_info(struct mddev *mddev, mdu_array_info_t *info)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
|
|
|
|
if (info->raid_disks == 0) {
|
|
|
|
/* just setting version number for superblock loading */
|
|
|
|
if (info->major_version < 0 ||
|
2007-05-09 16:35:34 +07:00
|
|
|
info->major_version >= ARRAY_SIZE(super_types) ||
|
2005-04-17 05:20:36 +07:00
|
|
|
super_types[info->major_version].name == NULL) {
|
|
|
|
/* maybe try to auto-load a module? */
|
2014-09-30 11:23:59 +07:00
|
|
|
printk(KERN_INFO
|
2005-04-17 05:20:36 +07:00
|
|
|
"md: superblock version %d not known\n",
|
|
|
|
info->major_version);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
mddev->major_version = info->major_version;
|
|
|
|
mddev->minor_version = info->minor_version;
|
|
|
|
mddev->patch_version = info->patch_version;
|
2006-12-22 16:11:41 +07:00
|
|
|
mddev->persistent = !info->not_persistent;
|
2009-12-30 08:08:49 +07:00
|
|
|
/* ensure mddev_put doesn't delete this now that there
|
|
|
|
* is some minimal configuration.
|
|
|
|
*/
|
2015-12-21 06:51:01 +07:00
|
|
|
mddev->ctime = ktime_get_real_seconds();
|
2005-04-17 05:20:36 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
mddev->major_version = MD_MAJOR_VERSION;
|
|
|
|
mddev->minor_version = MD_MINOR_VERSION;
|
|
|
|
mddev->patch_version = MD_PATCHLEVEL_VERSION;
|
2015-12-21 06:51:01 +07:00
|
|
|
mddev->ctime = ktime_get_real_seconds();
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
mddev->level = info->level;
|
2006-01-17 13:14:57 +07:00
|
|
|
mddev->clevel[0] = 0;
|
2009-03-31 10:33:13 +07:00
|
|
|
mddev->dev_sectors = 2 * (sector_t)info->size;
|
2005-04-17 05:20:36 +07:00
|
|
|
mddev->raid_disks = info->raid_disks;
|
|
|
|
/* don't set md_minor, it is determined by which /dev/md* was
|
|
|
|
* opened
|
|
|
|
*/
|
|
|
|
if (info->state & (1<<MD_SB_CLEAN))
|
|
|
|
mddev->recovery_cp = MaxSector;
|
|
|
|
else
|
|
|
|
mddev->recovery_cp = 0;
|
|
|
|
mddev->persistent = ! info->not_persistent;
|
2008-02-06 16:39:51 +07:00
|
|
|
mddev->external = 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
mddev->layout = info->layout;
|
2009-06-18 05:45:01 +07:00
|
|
|
mddev->chunk_sectors = info->chunk_size >> 9;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
mddev->max_disks = MD_SB_DISKS;
|
|
|
|
|
2008-02-06 16:39:51 +07:00
|
|
|
if (mddev->persistent)
|
|
|
|
mddev->flags = 0;
|
2006-10-03 15:15:46 +07:00
|
|
|
set_bit(MD_CHANGE_DEVS, &mddev->flags);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-12-14 08:49:52 +07:00
|
|
|
mddev->bitmap_info.default_offset = MD_SB_BYTES >> 9;
|
2012-05-22 10:55:07 +07:00
|
|
|
mddev->bitmap_info.default_space = 64*2 - (MD_SB_BYTES >> 9);
|
2009-12-14 08:49:52 +07:00
|
|
|
mddev->bitmap_info.offset = 0;
|
2005-11-29 04:44:12 +07:00
|
|
|
|
2006-03-27 16:18:11 +07:00
|
|
|
mddev->reshape_position = MaxSector;
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* Generate a 128 bit UUID
|
|
|
|
*/
|
|
|
|
get_random_bytes(mddev->uuid, 16);
|
|
|
|
|
2006-03-27 16:18:11 +07:00
|
|
|
mddev->new_level = mddev->level;
|
2009-06-18 05:45:27 +07:00
|
|
|
mddev->new_chunk_sectors = mddev->chunk_sectors;
|
2006-03-27 16:18:11 +07:00
|
|
|
mddev->new_layout = mddev->layout;
|
|
|
|
mddev->delta_disks = 0;
|
2012-05-21 06:27:00 +07:00
|
|
|
mddev->reshape_backwards = 0;
|
2006-03-27 16:18:11 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
void md_set_array_sectors(struct mddev *mddev, sector_t array_sectors)
|
2009-03-31 10:59:03 +07:00
|
|
|
{
|
2009-03-31 11:00:31 +07:00
|
|
|
WARN(!mddev_is_locked(mddev), "%s: unlocked mddev!\n", __func__);
|
|
|
|
|
|
|
|
if (mddev->external_size)
|
|
|
|
return;
|
|
|
|
|
2009-03-31 10:59:03 +07:00
|
|
|
mddev->array_sectors = array_sectors;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(md_set_array_sectors);
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int update_size(struct mddev *mddev, sector_t num_sectors)
|
2006-01-06 15:20:49 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2006-01-06 15:20:49 +07:00
|
|
|
int rv;
|
2008-07-11 19:02:22 +07:00
|
|
|
int fit = (num_sectors == 0);
|
2006-01-06 15:20:49 +07:00
|
|
|
|
2016-05-02 22:33:13 +07:00
|
|
|
/* cluster raid doesn't support update size */
|
|
|
|
if (mddev_is_clustered(mddev))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2006-01-06 15:20:49 +07:00
|
|
|
if (mddev->pers->resize == NULL)
|
|
|
|
return -EINVAL;
|
2008-07-11 19:02:22 +07:00
|
|
|
/* The "num_sectors" is the number of sectors of each device that
|
|
|
|
* is used. This can only make sense for arrays with redundancy.
|
|
|
|
* linear and raid0 always use whatever space is available. We can only
|
|
|
|
* consider changing this number if no resync or reconstruction is
|
|
|
|
* happening, and if the new size is acceptable. It must fit before the
|
2008-07-11 19:02:23 +07:00
|
|
|
* sb_start or, if that is <data_offset, it must fit before the size
|
2008-07-11 19:02:22 +07:00
|
|
|
* of each device. If num_sectors is zero, we find the largest size
|
|
|
|
* that fits.
|
2006-01-06 15:20:49 +07:00
|
|
|
*/
|
2014-12-11 06:02:10 +07:00
|
|
|
if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
|
|
|
|
mddev->sync_thread)
|
2006-01-06 15:20:49 +07:00
|
|
|
return -EBUSY;
|
2014-05-28 10:39:21 +07:00
|
|
|
if (mddev->ro)
|
|
|
|
return -EROFS;
|
2012-05-22 10:55:27 +07:00
|
|
|
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2009-03-31 10:33:13 +07:00
|
|
|
sector_t avail = rdev->sectors;
|
2006-10-29 00:38:30 +07:00
|
|
|
|
2008-07-11 19:02:22 +07:00
|
|
|
if (fit && (num_sectors == 0 || num_sectors > avail))
|
|
|
|
num_sectors = avail;
|
|
|
|
if (avail < num_sectors)
|
2006-01-06 15:20:49 +07:00
|
|
|
return -ENOSPC;
|
|
|
|
}
|
2008-07-11 19:02:22 +07:00
|
|
|
rv = mddev->pers->resize(mddev, num_sectors);
|
2009-08-03 07:59:58 +07:00
|
|
|
if (!rv)
|
|
|
|
revalidate_disk(mddev->gendisk);
|
2006-01-06 15:20:49 +07:00
|
|
|
return rv;
|
|
|
|
}
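The size-fitting rule spelled out in update_size() above (num_sectors == 0 means "find the largest size that fits on every device", otherwise every device must have at least that much space) can be isolated as a small sketch; `fit_size` and `sector_t_` are illustrative names, not kernel symbols:

```c
#include <assert.h>

typedef unsigned long long sector_t_;

/* Mirror of the per-rdev loop in update_size(): with num_sectors == 0,
 * shrink to the smallest available size; otherwise fail (0 here, -ENOSPC
 * in the kernel) if any device has less than num_sectors available. */
sector_t_ fit_size(const sector_t_ *avail, int n, sector_t_ num_sectors)
{
	int fit = (num_sectors == 0);
	int i;

	for (i = 0; i < n; i++) {
		if (fit && (num_sectors == 0 || num_sectors > avail[i]))
			num_sectors = avail[i];
		if (avail[i] < num_sectors)
			return 0;	/* -ENOSPC in the kernel */
	}
	return num_sectors;
}
```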
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int update_raid_disks(struct mddev *mddev, int raid_disks)
|
2006-01-06 15:20:54 +07:00
|
|
|
{
|
|
|
|
int rv;
|
2012-05-21 06:27:00 +07:00
|
|
|
struct md_rdev *rdev;
|
2006-01-06 15:20:54 +07:00
|
|
|
/* change the number of raid disks */
|
2006-03-27 16:18:13 +07:00
|
|
|
if (mddev->pers->check_reshape == NULL)
|
2006-01-06 15:20:54 +07:00
|
|
|
return -EINVAL;
|
2014-05-28 10:39:21 +07:00
|
|
|
if (mddev->ro)
|
|
|
|
return -EROFS;
|
2006-01-06 15:20:54 +07:00
|
|
|
if (raid_disks <= 0 ||
|
2010-04-14 14:02:09 +07:00
|
|
|
(mddev->max_disks && raid_disks >= mddev->max_disks))
|
2006-01-06 15:20:54 +07:00
|
|
|
return -EINVAL;
|
2014-12-11 06:02:10 +07:00
|
|
|
if (mddev->sync_thread ||
|
|
|
|
test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
|
|
|
|
mddev->reshape_position != MaxSector)
|
2006-01-06 15:20:54 +07:00
|
|
|
return -EBUSY;
|
2012-05-21 06:27:00 +07:00
|
|
|
|
|
|
|
rdev_for_each(rdev, mddev) {
|
|
|
|
if (mddev->raid_disks < raid_disks &&
|
|
|
|
rdev->data_offset < rdev->new_data_offset)
|
|
|
|
return -EINVAL;
|
|
|
|
if (mddev->raid_disks > raid_disks &&
|
|
|
|
rdev->data_offset > rdev->new_data_offset)
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2006-03-27 16:18:13 +07:00
|
|
|
mddev->delta_disks = raid_disks - mddev->raid_disks;
|
2012-05-21 06:27:00 +07:00
|
|
|
if (mddev->delta_disks < 0)
|
|
|
|
mddev->reshape_backwards = 1;
|
|
|
|
else if (mddev->delta_disks > 0)
|
|
|
|
mddev->reshape_backwards = 0;
|
2006-03-27 16:18:13 +07:00
|
|
|
|
|
|
|
rv = mddev->pers->check_reshape(mddev);
|
2012-05-21 06:27:00 +07:00
|
|
|
if (rv < 0) {
|
2011-01-31 07:57:42 +07:00
|
|
|
mddev->delta_disks = 0;
|
2012-05-21 06:27:00 +07:00
|
|
|
mddev->reshape_backwards = 0;
|
|
|
|
}
|
2006-01-06 15:20:54 +07:00
|
|
|
return rv;
|
|
|
|
}
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* update_array_info is used to change the configuration of an
|
|
|
|
* on-line array.
|
|
|
|
* The version, ctime, level, size, raid_disks, not_persistent, layout, chunk_size
|
|
|
|
* fields in the info are checked against the array.
|
|
|
|
* Any differences that cannot be handled will cause an error.
|
|
|
|
* Normally, only one change can be managed at a time.
|
|
|
|
*/
|
2011-10-11 12:47:53 +07:00
|
|
|
static int update_array_info(struct mddev *mddev, mdu_array_info_t *info)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
int rv = 0;
|
|
|
|
int cnt = 0;
|
2005-09-10 06:23:45 +07:00
|
|
|
int state = 0;
|
|
|
|
|
|
|
|
/* calculate expected state, ignoring low bits */
|
2009-12-14 08:49:52 +07:00
|
|
|
if (mddev->bitmap && mddev->bitmap_info.offset)
|
2005-09-10 06:23:45 +07:00
|
|
|
state |= (1 << MD_SB_BITMAP_PRESENT);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (mddev->major_version != info->major_version ||
|
|
|
|
mddev->minor_version != info->minor_version ||
|
|
|
|
/* mddev->patch_version != info->patch_version || */
|
|
|
|
mddev->ctime != info->ctime ||
|
|
|
|
mddev->level != info->level ||
|
|
|
|
/* mddev->layout != info->layout || */
|
2015-06-11 08:41:10 +07:00
|
|
|
mddev->persistent != !info->not_persistent ||
|
2009-06-18 05:45:01 +07:00
|
|
|
mddev->chunk_sectors != info->chunk_size >> 9 ||
|
2005-09-10 06:23:45 +07:00
|
|
|
/* ignore bottom 8 bits of state, and allow SB_BITMAP_PRESENT to change */
|
|
|
|
((state^info->state) & 0xfffffe00)
|
|
|
|
)
|
2005-04-17 05:20:36 +07:00
|
|
|
return -EINVAL;
|
|
|
|
/* Check there is only one change */
|
2009-03-31 10:33:13 +07:00
|
|
|
if (info->size >= 0 && mddev->dev_sectors / 2 != info->size)
|
|
|
|
cnt++;
|
|
|
|
if (mddev->raid_disks != info->raid_disks)
|
|
|
|
cnt++;
|
|
|
|
if (mddev->layout != info->layout)
|
|
|
|
cnt++;
|
|
|
|
if ((state ^ info->state) & (1<<MD_SB_BITMAP_PRESENT))
|
|
|
|
cnt++;
|
|
|
|
if (cnt == 0)
|
|
|
|
return 0;
|
|
|
|
if (cnt > 1)
|
|
|
|
return -EINVAL;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (mddev->layout != info->layout) {
|
|
|
|
/* Change layout
|
|
|
|
* we don't need to do anything at the md level, the
|
|
|
|
* personality will take care of it all.
|
|
|
|
*/
|
2009-06-18 05:47:55 +07:00
|
|
|
if (mddev->pers->check_reshape == NULL)
|
2005-04-17 05:20:36 +07:00
|
|
|
return -EINVAL;
|
2009-06-18 05:47:42 +07:00
|
|
|
else {
|
|
|
|
mddev->new_layout = info->layout;
|
2009-06-18 05:47:55 +07:00
|
|
|
rv = mddev->pers->check_reshape(mddev);
|
2009-06-18 05:47:42 +07:00
|
|
|
if (rv)
|
|
|
|
mddev->new_layout = mddev->layout;
|
|
|
|
return rv;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2009-03-31 10:33:13 +07:00
|
|
|
if (info->size >= 0 && mddev->dev_sectors / 2 != info->size)
|
2008-07-11 19:02:22 +07:00
|
|
|
rv = update_size(mddev, (sector_t)info->size * 2);
|
2006-01-06 15:20:49 +07:00
|
|
|
|
2006-01-06 15:20:54 +07:00
|
|
|
if (mddev->raid_disks != info->raid_disks)
|
|
|
|
rv = update_raid_disks(mddev, info->raid_disks);
|
|
|
|
|
2005-09-10 06:23:45 +07:00
|
|
|
if ((state ^ info->state) & (1<<MD_SB_BITMAP_PRESENT)) {
|
2014-06-07 13:44:51 +07:00
|
|
|
if (mddev->pers->quiesce == NULL || mddev->thread == NULL) {
|
|
|
|
rv = -EINVAL;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
if (mddev->recovery || mddev->sync_thread) {
|
|
|
|
rv = -EBUSY;
|
|
|
|
goto err;
|
|
|
|
}
|
2005-09-10 06:23:45 +07:00
|
|
|
if (info->state & (1<<MD_SB_BITMAP_PRESENT)) {
|
2014-06-07 00:43:49 +07:00
|
|
|
struct bitmap *bitmap;
|
2005-09-10 06:23:45 +07:00
|
|
|
/* add the bitmap */
|
2014-06-07 13:44:51 +07:00
|
|
|
if (mddev->bitmap) {
|
|
|
|
rv = -EEXIST;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
if (mddev->bitmap_info.default_offset == 0) {
|
|
|
|
rv = -EINVAL;
|
|
|
|
goto err;
|
|
|
|
}
|
2009-12-14 08:49:52 +07:00
|
|
|
mddev->bitmap_info.offset =
|
|
|
|
mddev->bitmap_info.default_offset;
|
2012-05-22 10:55:07 +07:00
|
|
|
mddev->bitmap_info.space =
|
|
|
|
mddev->bitmap_info.default_space;
|
2005-09-10 06:23:45 +07:00
|
|
|
mddev->pers->quiesce(mddev, 1);
|
2014-06-07 00:43:49 +07:00
|
|
|
bitmap = bitmap_create(mddev, -1);
|
|
|
|
if (!IS_ERR(bitmap)) {
|
|
|
|
mddev->bitmap = bitmap;
|
2010-06-01 16:37:35 +07:00
|
|
|
rv = bitmap_load(mddev);
|
2015-02-25 07:44:11 +07:00
|
|
|
} else
|
|
|
|
rv = PTR_ERR(bitmap);
|
2005-09-10 06:23:45 +07:00
|
|
|
if (rv)
|
|
|
|
bitmap_destroy(mddev);
|
|
|
|
mddev->pers->quiesce(mddev, 0);
|
|
|
|
} else {
|
|
|
|
/* remove the bitmap */
|
2014-06-07 13:44:51 +07:00
|
|
|
if (!mddev->bitmap) {
|
|
|
|
rv = -ENOENT;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
if (mddev->bitmap->storage.file) {
|
|
|
|
rv = -EINVAL;
|
|
|
|
goto err;
|
|
|
|
}
|
2015-12-21 06:51:00 +07:00
|
|
|
if (mddev->bitmap_info.nodes) {
|
|
|
|
/* hold PW on all the bitmap locks */
|
|
|
|
if (md_cluster_ops->lock_all_bitmaps(mddev) <= 0) {
|
|
|
|
printk("md: can't change bitmap to none since the"
|
|
|
|
" array is in use by more than one node\n");
|
|
|
|
rv = -EPERM;
|
|
|
|
md_cluster_ops->unlock_all_bitmaps(mddev);
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
|
|
|
|
mddev->bitmap_info.nodes = 0;
|
|
|
|
md_cluster_ops->leave(mddev);
|
|
|
|
}
|
2005-09-10 06:23:45 +07:00
|
|
|
mddev->pers->quiesce(mddev, 1);
|
|
|
|
bitmap_destroy(mddev);
|
|
|
|
mddev->pers->quiesce(mddev, 0);
|
2009-12-14 08:49:52 +07:00
|
|
|
mddev->bitmap_info.offset = 0;
|
2005-09-10 06:23:45 +07:00
|
|
|
}
|
|
|
|
}
|
2006-10-03 15:15:46 +07:00
|
|
|
md_update_sb(mddev, 1);
|
2014-06-07 13:44:51 +07:00
|
|
|
return rv;
|
|
|
|
err:
|
2005-04-17 05:20:36 +07:00
|
|
|
return rv;
|
|
|
|
}
|
|
|
|
|
2011-10-11 12:47:53 +07:00
|
|
|
static int set_disk_faulty(struct mddev *mddev, dev_t dev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2012-10-11 09:37:33 +07:00
|
|
|
int err = 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (mddev->pers == NULL)
|
|
|
|
return -ENODEV;
|
|
|
|
|
2012-10-11 09:37:33 +07:00
|
|
|
rcu_read_lock();
|
|
|
|
rdev = find_rdev_rcu(mddev, dev);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (!rdev)
|
2012-10-11 09:37:33 +07:00
|
|
|
err = -ENODEV;
|
|
|
|
else {
|
|
|
|
md_error(mddev, rdev);
|
|
|
|
if (!test_bit(Faulty, &rdev->flags))
|
|
|
|
err = -EBUSY;
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
|
|
|
return err;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2008-04-25 23:57:58 +07:00
|
|
|
/*
|
|
|
|
* We have a problem here : there is no easy way to give a CHS
|
|
|
|
* virtual geometry. We currently pretend that we have a 2 heads
|
|
|
|
* 4 sectors (with a BIG number of cylinders...). This drives
|
|
|
|
* dosfs just mad... ;-)
|
|
|
|
*/
|
2006-01-08 16:02:50 +07:00
|
|
|
static int md_getgeo(struct block_device *bdev, struct hd_geometry *geo)
|
|
|
|
{
|
2011-10-11 12:47:53 +07:00
|
|
|
struct mddev *mddev = bdev->bd_disk->private_data;
|
2006-01-08 16:02:50 +07:00
|
|
|
|
|
|
|
geo->heads = 2;
|
|
|
|
geo->sectors = 4;
|
2010-03-29 06:51:42 +07:00
|
|
|
geo->cylinders = mddev->array_sectors / 8;
|
2006-01-08 16:02:50 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-01-15 22:58:52 +07:00
|
|
|
static inline bool md_ioctl_valid(unsigned int cmd)
|
|
|
|
{
|
|
|
|
switch (cmd) {
|
|
|
|
case ADD_NEW_DISK:
|
|
|
|
case BLKROSET:
|
|
|
|
case GET_ARRAY_INFO:
|
|
|
|
case GET_BITMAP_FILE:
|
|
|
|
case GET_DISK_INFO:
|
|
|
|
case HOT_ADD_DISK:
|
|
|
|
case HOT_REMOVE_DISK:
|
|
|
|
case RAID_AUTORUN:
|
|
|
|
case RAID_VERSION:
|
|
|
|
case RESTART_ARRAY_RW:
|
|
|
|
case RUN_ARRAY:
|
|
|
|
case SET_ARRAY_INFO:
|
|
|
|
case SET_BITMAP_FILE:
|
|
|
|
case SET_DISK_FAULTY:
|
|
|
|
case STOP_ARRAY:
|
|
|
|
case STOP_ARRAY_RO:
|
2014-10-30 06:51:31 +07:00
|
|
|
case CLUSTERED_DISK_NACK:
|
2014-01-15 22:58:52 +07:00
|
|
|
return true;
|
|
|
|
default:
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
static int md_ioctl(struct block_device *bdev, fmode_t mode,
			unsigned int cmd, unsigned long arg)
{
	int err = 0;
	void __user *argp = (void __user *)arg;
	struct mddev *mddev = NULL;
	int ro;

	if (!md_ioctl_valid(cmd))
		return -ENOTTY;

	switch (cmd) {
	case RAID_VERSION:
	case GET_ARRAY_INFO:
	case GET_DISK_INFO:
		break;
	default:
		if (!capable(CAP_SYS_ADMIN))
			return -EACCES;
	}

	/*
	 * Commands dealing with the RAID driver but not any
	 * particular array:
	 */
	switch (cmd) {
	case RAID_VERSION:
		err = get_version(argp);
		goto out;

#ifndef MODULE
	case RAID_AUTORUN:
		err = 0;
		autostart_arrays(arg);
		goto out;
#endif
	default:;
	}

	/*
	 * Commands creating/starting a new array:
	 */

	mddev = bdev->bd_disk->private_data;

	if (!mddev) {
		BUG();
		goto out;
	}

	/* Some actions do not require the mutex */
	switch (cmd) {
	case GET_ARRAY_INFO:
		if (!mddev->raid_disks && !mddev->external)
			err = -ENODEV;
		else
			err = get_array_info(mddev, argp);
		goto out;

	case GET_DISK_INFO:
		if (!mddev->raid_disks && !mddev->external)
			err = -ENODEV;
		else
			err = get_disk_info(mddev, argp);
		goto out;

	case SET_DISK_FAULTY:
		err = set_disk_faulty(mddev, new_decode_dev(arg));
		goto out;

	case GET_BITMAP_FILE:
		err = get_bitmap_file(mddev, argp);
		goto out;

	}

	if (cmd == ADD_NEW_DISK)
		/* need to ensure md_delayed_delete() has completed */
		flush_workqueue(md_misc_wq);

	if (cmd == HOT_REMOVE_DISK)
		/* need to ensure recovery thread has run */
		wait_event_interruptible_timeout(mddev->sb_wait,
						 !test_bit(MD_RECOVERY_NEEDED,
							   &mddev->flags),
						 msecs_to_jiffies(5000));
	if (cmd == STOP_ARRAY || cmd == STOP_ARRAY_RO) {
		/* Need to flush page cache, and ensure no-one else opens
		 * and writes
		 */
		mutex_lock(&mddev->open_mutex);
		if (mddev->pers && atomic_read(&mddev->openers) > 1) {
			mutex_unlock(&mddev->open_mutex);
			err = -EBUSY;
			goto out;
		}
		set_bit(MD_CLOSING, &mddev->flags);
		mutex_unlock(&mddev->open_mutex);
		sync_blockdev(bdev);
	}
	err = mddev_lock(mddev);
	if (err) {
		printk(KERN_INFO
			"md: ioctl lock interrupted, reason %d, cmd %d\n",
			err, cmd);
		goto out;
	}

	if (cmd == SET_ARRAY_INFO) {
		mdu_array_info_t info;
		if (!arg)
			memset(&info, 0, sizeof(info));
		else if (copy_from_user(&info, argp, sizeof(info))) {
			err = -EFAULT;
			goto unlock;
		}
		if (mddev->pers) {
			err = update_array_info(mddev, &info);
			if (err) {
				printk(KERN_WARNING "md: couldn't update"
				       " array info. %d\n", err);
				goto unlock;
			}
			goto unlock;
		}
		if (!list_empty(&mddev->disks)) {
			printk(KERN_WARNING
			       "md: array %s already has disks!\n",
			       mdname(mddev));
			err = -EBUSY;
			goto unlock;
		}
		if (mddev->raid_disks) {
			printk(KERN_WARNING
			       "md: array %s already initialised!\n",
			       mdname(mddev));
			err = -EBUSY;
			goto unlock;
		}
		err = set_array_info(mddev, &info);
		if (err) {
			printk(KERN_WARNING "md: couldn't set"
			       " array info. %d\n", err);
			goto unlock;
		}
		goto unlock;
	}

	/*
	 * Commands querying/configuring an existing array:
	 */
	/* if we are not initialised yet, only ADD_NEW_DISK, STOP_ARRAY,
	 * RUN_ARRAY, and GET_ and SET_BITMAP_FILE are allowed */
	if ((!mddev->raid_disks && !mddev->external)
	    && cmd != ADD_NEW_DISK && cmd != STOP_ARRAY
	    && cmd != RUN_ARRAY && cmd != SET_BITMAP_FILE
	    && cmd != GET_BITMAP_FILE) {
		err = -ENODEV;
		goto unlock;
	}

	/*
	 * Commands even a read-only array can execute:
	 */
	switch (cmd) {
	case RESTART_ARRAY_RW:
		err = restart_array(mddev);
		goto unlock;

	case STOP_ARRAY:
		err = do_md_stop(mddev, 0, bdev);
		goto unlock;

	case STOP_ARRAY_RO:
		err = md_set_readonly(mddev, bdev);
		goto unlock;

	case HOT_REMOVE_DISK:
		err = hot_remove_disk(mddev, new_decode_dev(arg));
		goto unlock;

	case ADD_NEW_DISK:
		/* We can support ADD_NEW_DISK on read-only arrays
		 * only if we are re-adding a preexisting device.
		 * So require mddev->pers and MD_DISK_SYNC.
		 */
		if (mddev->pers) {
			mdu_disk_info_t info;
			if (copy_from_user(&info, argp, sizeof(info)))
				err = -EFAULT;
			else if (!(info.state & (1<<MD_DISK_SYNC)))
				/* Need to clear read-only for this */
				break;
			else
				err = add_new_disk(mddev, &info);
			goto unlock;
		}
		break;

	case BLKROSET:
		if (get_user(ro, (int __user *)(arg))) {
			err = -EFAULT;
			goto unlock;
		}
		err = -EINVAL;

		/* if the bdev is going readonly the value of mddev->ro
		 * does not matter, no writes are coming
		 */
		if (ro)
			goto unlock;

		/* are we already prepared for writes? */
		if (mddev->ro != 1)
			goto unlock;

		/* transitioning to readauto need only happen for
		 * arrays that call md_write_start
		 */
		if (mddev->pers) {
			err = restart_array(mddev);
			if (err == 0) {
				mddev->ro = 2;
				set_disk_ro(mddev->gendisk, 0);
			}
		}
		goto unlock;
	}

	/*
	 * The remaining ioctls are changing the state of the
	 * superblock, so we do not allow them on read-only arrays.
	 */
	if (mddev->ro && mddev->pers) {
		if (mddev->ro == 2) {
			mddev->ro = 0;
			sysfs_notify_dirent_safe(mddev->sysfs_state);
			set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
			/* mddev_unlock will wake thread */
			/* If a device failed while we were read-only, we
			 * need to make sure the metadata is updated now.
			 */
			if (test_bit(MD_CHANGE_DEVS, &mddev->flags)) {
				mddev_unlock(mddev);
				wait_event(mddev->sb_wait,
					   !test_bit(MD_CHANGE_DEVS, &mddev->flags) &&
					   !test_bit(MD_CHANGE_PENDING, &mddev->flags));
				mddev_lock_nointr(mddev);
			}
		} else {
			err = -EROFS;
			goto unlock;
		}
	}

	switch (cmd) {
	case ADD_NEW_DISK:
	{
		mdu_disk_info_t info;
		if (copy_from_user(&info, argp, sizeof(info)))
			err = -EFAULT;
		else
			err = add_new_disk(mddev, &info);
		goto unlock;
	}

	case CLUSTERED_DISK_NACK:
		if (mddev_is_clustered(mddev))
			md_cluster_ops->new_disk_ack(mddev, false);
		else
			err = -EINVAL;
		goto unlock;

	case HOT_ADD_DISK:
		err = hot_add_disk(mddev, new_decode_dev(arg));
		goto unlock;

	case RUN_ARRAY:
		err = do_md_run(mddev);
		goto unlock;

	case SET_BITMAP_FILE:
		err = set_bitmap_file(mddev, (int)arg);
		goto unlock;

	default:
		err = -EINVAL;
		goto unlock;
	}

unlock:
	if (mddev->hold_active == UNTIL_IOCTL &&
	    err != -EINVAL)
		mddev->hold_active = 0;
	mddev_unlock(mddev);
out:
	return err;
}
#ifdef CONFIG_COMPAT
static int md_compat_ioctl(struct block_device *bdev, fmode_t mode,
		    unsigned int cmd, unsigned long arg)
{
	switch (cmd) {
	case HOT_REMOVE_DISK:
	case HOT_ADD_DISK:
	case SET_DISK_FAULTY:
	case SET_BITMAP_FILE:
		/* These take in integer arg, do not convert */
		break;
	default:
		arg = (unsigned long)compat_ptr(arg);
		break;
	}

	return md_ioctl(bdev, mode, cmd, arg);
}
#endif /* CONFIG_COMPAT */
static int md_open(struct block_device *bdev, fmode_t mode)
{
	/*
	 * Succeed if we can lock the mddev, which confirms that
	 * it isn't being stopped right now.
	 */
	struct mddev *mddev = mddev_find(bdev->bd_dev);
	int err;

	if (!mddev)
		return -ENODEV;

	if (mddev->gendisk != bdev->bd_disk) {
		/* we are racing with mddev_put which is discarding this
		 * bd_disk.
		 */
		mddev_put(mddev);
		/* Wait until bdev->bd_disk is definitely gone */
		flush_workqueue(md_misc_wq);
		/* Then retry the open from the top */
		return -ERESTARTSYS;
	}
	BUG_ON(mddev != bdev->bd_disk->private_data);

	if ((err = mutex_lock_interruptible(&mddev->open_mutex)))
		goto out;

	if (test_bit(MD_CLOSING, &mddev->flags)) {
		mutex_unlock(&mddev->open_mutex);
		return -ENODEV;
	}

	err = 0;
	atomic_inc(&mddev->openers);
	mutex_unlock(&mddev->open_mutex);

	check_disk_change(bdev);
out:
	return err;
}
static void md_release(struct gendisk *disk, fmode_t mode)
{
	struct mddev *mddev = disk->private_data;

	BUG_ON(!mddev);
	atomic_dec(&mddev->openers);
	mddev_put(mddev);
}

static int md_media_changed(struct gendisk *disk)
{
	struct mddev *mddev = disk->private_data;

	return mddev->changed;
}

static int md_revalidate(struct gendisk *disk)
{
	struct mddev *mddev = disk->private_data;

	mddev->changed = 0;
	return 0;
}
static const struct block_device_operations md_fops =
{
	.owner		= THIS_MODULE,
	.open		= md_open,
	.release	= md_release,
	.ioctl		= md_ioctl,
#ifdef CONFIG_COMPAT
	.compat_ioctl	= md_compat_ioctl,
#endif
	.getgeo		= md_getgeo,
	.media_changed	= md_media_changed,
	.revalidate_disk= md_revalidate,
};
static int md_thread(void *arg)
{
	struct md_thread *thread = arg;

	/*
	 * md_thread is a 'system-thread', its priority should be very
	 * high. We avoid resource deadlocks individually in each
	 * raid personality. (RAID5 does preallocation) We also use RR and
	 * the very same RT priority as kswapd, thus we will never get
	 * into a priority inversion deadlock.
	 *
	 * we definitely have to have equal or higher priority than
	 * bdflush, otherwise bdflush will deadlock if there are too
	 * many dirty RAID5 blocks.
	 */

	allow_signal(SIGKILL);
	while (!kthread_should_stop()) {

		/* We need to wait INTERRUPTIBLE so that
		 * we don't add to the load-average.
		 * That means we need to be sure no signals are
		 * pending
		 */
		if (signal_pending(current))
			flush_signals(current);

		wait_event_interruptible_timeout
			(thread->wqueue,
			 test_bit(THREAD_WAKEUP, &thread->flags)
			 || kthread_should_stop(),
			 thread->timeout);

		clear_bit(THREAD_WAKEUP, &thread->flags);
		if (!kthread_should_stop())
			thread->run(thread);
	}

	return 0;
}
2011-10-11 12:48:23 +07:00
|
|
|
void md_wakeup_thread(struct md_thread *thread)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
if (thread) {
|
2011-10-07 10:23:17 +07:00
|
|
|
pr_debug("md: waking up MD thread %s.\n", thread->tsk->comm);
|
2005-04-17 05:20:36 +07:00
|
|
|
set_bit(THREAD_WAKEUP, &thread->flags);
|
|
|
|
wake_up(&thread->wqueue);
|
|
|
|
}
|
|
|
|
}
|
2014-09-30 13:15:38 +07:00
|
|
|
EXPORT_SYMBOL(md_wakeup_thread);

struct md_thread *md_register_thread(void (*run) (struct md_thread *),
				     struct mddev *mddev, const char *name)
{
	struct md_thread *thread;

	thread = kzalloc(sizeof(struct md_thread), GFP_KERNEL);
	if (!thread)
		return NULL;

	init_waitqueue_head(&thread->wqueue);

	thread->run = run;
	thread->mddev = mddev;
	thread->timeout = MAX_SCHEDULE_TIMEOUT;
	thread->tsk = kthread_run(md_thread, thread,
				  "%s_%s",
				  mdname(thread->mddev),
				  name);
	if (IS_ERR(thread->tsk)) {
		kfree(thread);
		return NULL;
	}
	return thread;
}
EXPORT_SYMBOL(md_register_thread);

void md_unregister_thread(struct md_thread **threadp)
{
	struct md_thread *thread = *threadp;

	if (!thread)
		return;
	pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
	/* Locking ensures that mddev_unlock does not wake_up a
	 * non-existent thread
	 */
	spin_lock(&pers_lock);
	*threadp = NULL;
	spin_unlock(&pers_lock);

	kthread_stop(thread->tsk);
	kfree(thread);
}
EXPORT_SYMBOL(md_unregister_thread);

void md_error(struct mddev *mddev, struct md_rdev *rdev)
{
	if (!rdev || test_bit(Faulty, &rdev->flags))
		return;

	if (!mddev->pers || !mddev->pers->error_handler)
		return;
	mddev->pers->error_handler(mddev, rdev);
	if (mddev->degraded)
		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
	sysfs_notify_dirent_safe(rdev->sysfs_state);
	set_bit(MD_RECOVERY_INTR, &mddev->recovery);
	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
	md_wakeup_thread(mddev->thread);
	if (mddev->event_work.func)
		queue_work(md_misc_wq, &mddev->event_work);
	md_new_event(mddev);
}
EXPORT_SYMBOL(md_error);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/* seq_file implementation /proc/mdstat */
|
|
|
|
|
|
|
|
static void status_unused(struct seq_file *seq)
|
|
|
|
{
|
|
|
|
int i = 0;
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
seq_printf(seq, "unused devices: ");
|
|
|
|
|
2009-01-09 04:31:08 +07:00
|
|
|
list_for_each_entry(rdev, &pending_raid_disks, same_set) {
|
2005-04-17 05:20:36 +07:00
|
|
|
char b[BDEVNAME_SIZE];
|
|
|
|
i++;
|
|
|
|
seq_printf(seq, "%s ",
|
|
|
|
bdevname(rdev->bdev,b));
|
|
|
|
}
|
|
|
|
if (!i)
|
|
|
|
seq_printf(seq, "<none>");
|
|
|
|
|
|
|
|
seq_printf(seq, "\n");
|
|
|
|
}
|
|
|
|
|
2015-07-02 14:12:58 +07:00
|
|
|
static int status_resync(struct seq_file *seq, struct mddev *mddev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2009-05-07 09:49:35 +07:00
|
|
|
sector_t max_sectors, resync, res;
|
|
|
|
unsigned long dt, db;
|
|
|
|
sector_t rt;
|
2006-03-27 16:18:04 +07:00
|
|
|
int scale;
|
|
|
|
unsigned int per_milli;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-05-21 06:28:33 +07:00
|
|
|
if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
|
|
|
|
test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
|
2009-05-07 09:49:35 +07:00
|
|
|
max_sectors = mddev->resync_max_sectors;
|
2005-04-17 05:20:36 +07:00
|
|
|
else
|
2009-05-07 09:49:35 +07:00
|
|
|
max_sectors = mddev->dev_sectors;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2015-07-02 14:12:58 +07:00
|
|
|
resync = mddev->curr_resync;
|
|
|
|
if (resync <= 3) {
|
|
|
|
if (test_bit(MD_RECOVERY_DONE, &mddev->recovery))
|
|
|
|
/* Still cleaning up */
|
|
|
|
resync = max_sectors;
|
|
|
|
} else
|
|
|
|
resync -= atomic_read(&mddev->recovery_active);
|
|
|
|
|
|
|
|
if (resync == 0) {
|
|
|
|
if (mddev->recovery_cp < MaxSector) {
|
|
|
|
seq_printf(seq, "\tresync=PENDING");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (resync < 3) {
|
|
|
|
seq_printf(seq, "\tresync=DELAYED");
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2014-09-30 12:52:29 +07:00
|
|
|
WARN_ON(max_sectors == 0);
|
2006-03-27 16:18:04 +07:00
|
|
|
/* Pick 'scale' such that (resync>>scale)*1000 will fit
|
2009-05-07 09:49:35 +07:00
|
|
|
* in a sector_t, and (max_sectors>>scale) will fit in a
|
2006-03-27 16:18:04 +07:00
|
|
|
* u32, as those are the requirements for sector_div.
|
|
|
|
* Thus 'scale' must be at least 10
|
|
|
|
*/
|
|
|
|
scale = 10;
|
|
|
|
if (sizeof(sector_t) > sizeof(unsigned long)) {
|
2009-05-07 09:49:35 +07:00
|
|
|
while ( max_sectors/2 > (1ULL<<(scale+32)))
|
2006-03-27 16:18:04 +07:00
|
|
|
scale++;
|
|
|
|
}
|
|
|
|
res = (resync>>scale)*1000;
|
2009-05-07 09:49:35 +07:00
|
|
|
sector_div(res, (u32)((max_sectors>>scale)+1));
|
2006-03-27 16:18:04 +07:00
|
|
|
|
|
|
|
per_milli = res;
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2006-03-27 16:18:04 +07:00
|
|
|
int i, x = per_milli/50, y = 20-x;
|
2005-04-17 05:20:36 +07:00
|
|
|
seq_printf(seq, "[");
|
|
|
|
for (i = 0; i < x; i++)
|
|
|
|
seq_printf(seq, "=");
|
|
|
|
seq_printf(seq, ">");
|
|
|
|
for (i = 0; i < y; i++)
|
|
|
|
seq_printf(seq, ".");
|
|
|
|
seq_printf(seq, "] ");
|
|
|
|
}
|
2006-03-27 16:18:04 +07:00
|
|
|
seq_printf(seq, " %s =%3u.%u%% (%llu/%llu)",
|
2006-03-27 16:18:09 +07:00
|
|
|
(test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery)?
|
|
|
|
"reshape" :
|
2006-10-03 15:15:57 +07:00
|
|
|
(test_bit(MD_RECOVERY_CHECK, &mddev->recovery)?
|
|
|
|
"check" :
|
|
|
|
(test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ?
|
|
|
|
"resync" : "recovery"))),
|
|
|
|
per_milli/10, per_milli % 10,
|
2009-05-07 09:49:35 +07:00
|
|
|
(unsigned long long) resync/2,
|
|
|
|
(unsigned long long) max_sectors/2);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* dt: time from mark until now
|
|
|
|
* db: blocks written from mark until now
|
|
|
|
* rt: remaining time
|
2009-05-07 09:49:35 +07:00
|
|
|
*
|
|
|
|
* rt is a sector_t, so could be 32bit or 64bit.
|
|
|
|
* So we divide before multiply in case it is 32bit and close
|
|
|
|
* to the limit.
|
2011-03-31 08:57:33 +07:00
|
|
|
* We scale the divisor (db) by 32 to avoid losing precision
|
2009-05-07 09:49:35 +07:00
|
|
|
* near the end of resync when the number of remaining sectors
|
|
|
|
* is close to 'db'.
|
|
|
|
* We then divide rt by 32 after multiplying by db to compensate.
|
|
|
|
* The '+1' avoids division by zero if db is very small.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
dt = ((jiffies - mddev->resync_mark) / HZ);
|
|
|
|
if (!dt) dt++;
|
2006-07-10 18:44:16 +07:00
|
|
|
db = (mddev->curr_mark_cnt - atomic_read(&mddev->recovery_active))
|
|
|
|
- mddev->resync_mark_cnt;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-05-07 09:49:35 +07:00
|
|
|
rt = max_sectors - resync; /* number of remaining sectors */
|
|
|
|
sector_div(rt, db/32+1);
|
|
|
|
rt *= dt;
|
|
|
|
rt >>= 5;
|
|
|
|
|
|
|
|
seq_printf(seq, " finish=%lu.%lumin", (unsigned long)rt / 60,
|
|
|
|
((unsigned long)rt % 60)/6);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-07-10 18:44:16 +07:00
|
|
|
seq_printf(seq, " speed=%ldK/sec", db/2/dt);
|
2015-07-02 14:12:58 +07:00
|
|
|
return 1;
|
2005-04-17 05:20:36 +07:00
|
|
|
}

static void *md_seq_start(struct seq_file *seq, loff_t *pos)
{
	struct list_head *tmp;
	loff_t l = *pos;
	struct mddev *mddev;

	if (l >= 0x10000)
		return NULL;
	if (!l--)
		/* header */
		return (void*)1;

	spin_lock(&all_mddevs_lock);
	list_for_each(tmp,&all_mddevs)
		if (!l--) {
			mddev = list_entry(tmp, struct mddev, all_mddevs);
			mddev_get(mddev);
			spin_unlock(&all_mddevs_lock);
			return mddev;
		}
	spin_unlock(&all_mddevs_lock);
	if (!l--)
		return (void*)2;/* tail */
	return NULL;
}

static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	struct list_head *tmp;
	struct mddev *next_mddev, *mddev = v;

	++*pos;
	if (v == (void*)2)
		return NULL;

	spin_lock(&all_mddevs_lock);
	if (v == (void*)1)
		tmp = all_mddevs.next;
	else
		tmp = mddev->all_mddevs.next;
	if (tmp != &all_mddevs)
		next_mddev = mddev_get(list_entry(tmp,struct mddev,all_mddevs));
	else {
		next_mddev = (void*)2;
		*pos = 0x10000;
	}
	spin_unlock(&all_mddevs_lock);

	if (v != (void*)1)
		mddev_put(mddev);
	return next_mddev;

}

static void md_seq_stop(struct seq_file *seq, void *v)
{
	struct mddev *mddev = v;

	if (mddev && v != (void*)1 && v != (void*)2)
		mddev_put(mddev);
}

static int md_seq_show(struct seq_file *seq, void *v)
{
	struct mddev *mddev = v;
	sector_t sectors;
	struct md_rdev *rdev;

	if (v == (void*)1) {
		struct md_personality *pers;
		seq_printf(seq, "Personalities : ");
		spin_lock(&pers_lock);
		list_for_each_entry(pers, &pers_list, list)
			seq_printf(seq, "[%s] ", pers->name);

		spin_unlock(&pers_lock);
		seq_printf(seq, "\n");
		seq->poll_event = atomic_read(&md_event_count);
		return 0;
	}
	if (v == (void*)2) {
		status_unused(seq);
		return 0;
	}

	spin_lock(&mddev->lock);
	if (mddev->pers || mddev->raid_disks || !list_empty(&mddev->disks)) {
		seq_printf(seq, "%s : %sactive", mdname(mddev),
						mddev->pers ? "" : "in");
		if (mddev->pers) {
			if (mddev->ro==1)
				seq_printf(seq, " (read-only)");
			if (mddev->ro==2)
				seq_printf(seq, " (auto-read-only)");
			seq_printf(seq, " %s", mddev->pers->name);
		}

		sectors = 0;
		rcu_read_lock();
		rdev_for_each_rcu(rdev, mddev) {
			char b[BDEVNAME_SIZE];
			seq_printf(seq, " %s[%d]",
				bdevname(rdev->bdev,b), rdev->desc_nr);
			if (test_bit(WriteMostly, &rdev->flags))
				seq_printf(seq, "(W)");
			if (test_bit(Journal, &rdev->flags))
				seq_printf(seq, "(J)");
			if (test_bit(Faulty, &rdev->flags)) {
				seq_printf(seq, "(F)");
				continue;
			}
			if (rdev->raid_disk < 0)
				seq_printf(seq, "(S)"); /* spare */
			if (test_bit(Replacement, &rdev->flags))
				seq_printf(seq, "(R)");
			sectors += rdev->sectors;
		}
		rcu_read_unlock();

		if (!list_empty(&mddev->disks)) {
			if (mddev->pers)
				seq_printf(seq, "\n      %llu blocks",
					   (unsigned long long)
					   mddev->array_sectors / 2);
			else
				seq_printf(seq, "\n      %llu blocks",
					   (unsigned long long)sectors / 2);
		}
		if (mddev->persistent) {
			if (mddev->major_version != 0 ||
			    mddev->minor_version != 90) {
				seq_printf(seq," super %d.%d",
					   mddev->major_version,
					   mddev->minor_version);
			}
		} else if (mddev->external)
			seq_printf(seq, " super external:%s",
				   mddev->metadata_type);
		else
			seq_printf(seq, " super non-persistent");

		if (mddev->pers) {
			mddev->pers->status(seq, mddev);
			seq_printf(seq, "\n      ");
			if (mddev->pers->sync_request) {
				if (status_resync(seq, mddev))
					seq_printf(seq, "\n      ");
			}
		} else
			seq_printf(seq, "\n       ");

		bitmap_status(seq, mddev->bitmap);

		seq_printf(seq, "\n");
	}
	spin_unlock(&mddev->lock);

	return 0;
}

static const struct seq_operations md_seq_ops = {
	.start  = md_seq_start,
	.next   = md_seq_next,
	.stop   = md_seq_stop,
	.show   = md_seq_show,
};

static int md_seq_open(struct inode *inode, struct file *file)
{
	struct seq_file *seq;
	int error;

	error = seq_open(file, &md_seq_ops);
	if (error)
		return error;

	seq = file->private_data;
	seq->poll_event = atomic_read(&md_event_count);
	return error;
}

static int md_unloading;
static unsigned int mdstat_poll(struct file *filp, poll_table *wait)
{
	struct seq_file *seq = filp->private_data;
	int mask;

	if (md_unloading)
		return POLLIN|POLLRDNORM|POLLERR|POLLPRI;
	poll_wait(filp, &md_event_waiters, wait);

	/* always allow read */
	mask = POLLIN | POLLRDNORM;

	if (seq->poll_event != atomic_read(&md_event_count))
		mask |= POLLERR | POLLPRI;
	return mask;
}

static const struct file_operations md_seq_fops = {
	.owner		= THIS_MODULE,
	.open           = md_seq_open,
	.read           = seq_read,
	.llseek         = seq_lseek,
	.release	= seq_release_private,
	.poll		= mdstat_poll,
};

int register_md_personality(struct md_personality *p)
{
	printk(KERN_INFO "md: %s personality registered for level %d\n",
						p->name, p->level);
	spin_lock(&pers_lock);
	list_add_tail(&p->list, &pers_list);
	spin_unlock(&pers_lock);
	return 0;
}
EXPORT_SYMBOL(register_md_personality);

int unregister_md_personality(struct md_personality *p)
{
	printk(KERN_INFO "md: %s personality unregistered\n", p->name);
	spin_lock(&pers_lock);
	list_del_init(&p->list);
	spin_unlock(&pers_lock);
	return 0;
}
EXPORT_SYMBOL(unregister_md_personality);

int register_md_cluster_operations(struct md_cluster_operations *ops,
				   struct module *module)
{
	int ret = 0;
	spin_lock(&pers_lock);
	if (md_cluster_ops != NULL)
		ret = -EALREADY;
	else {
		md_cluster_ops = ops;
		md_cluster_mod = module;
	}
	spin_unlock(&pers_lock);
	return ret;
}
EXPORT_SYMBOL(register_md_cluster_operations);

int unregister_md_cluster_operations(void)
{
	spin_lock(&pers_lock);
	md_cluster_ops = NULL;
	spin_unlock(&pers_lock);
	return 0;
}
EXPORT_SYMBOL(unregister_md_cluster_operations);

int md_setup_cluster(struct mddev *mddev, int nodes)
{
	if (!md_cluster_ops)
		request_module("md-cluster");
	spin_lock(&pers_lock);
	/* ensure module won't be unloaded */
	if (!md_cluster_ops || !try_module_get(md_cluster_mod)) {
		pr_err("can't find md-cluster module or get it's reference.\n");
		spin_unlock(&pers_lock);
		return -ENOENT;
	}
	spin_unlock(&pers_lock);

	return md_cluster_ops->join(mddev, nodes);
}

void md_cluster_stop(struct mddev *mddev)
{
	if (!md_cluster_ops)
		return;
	md_cluster_ops->leave(mddev);
	module_put(md_cluster_mod);
}

static int is_mddev_idle(struct mddev *mddev, int init)
{
	struct md_rdev *rdev;
	int idle;
	int curr_events;

	idle = 1;
	rcu_read_lock();
	rdev_for_each_rcu(rdev, mddev) {
		struct gendisk *disk = rdev->bdev->bd_contains->bd_disk;
		curr_events = (int)part_stat_read(&disk->part0, sectors[0]) +
			      (int)part_stat_read(&disk->part0, sectors[1]) -
			      atomic_read(&disk->sync_io);
		/* sync IO will cause sync_io to increase before the disk_stats
		 * as sync_io is counted when a request starts, and
		 * disk_stats is counted when it completes.
		 * So resync activity will cause curr_events to be smaller than
		 * when there was no such activity.
		 * non-sync IO will cause disk_stat to increase without
		 * increasing sync_io so curr_events will (eventually)
		 * be larger than it was before.  Once it becomes
		 * substantially larger, the test below will cause
		 * the array to appear non-idle, and resync will slow
		 * down.
		 * If there is a lot of outstanding resync activity when
		 * we set last_events to curr_events, then all that activity
		 * completing might cause the array to appear non-idle
		 * and resync will be slowed down even though there might
		 * not have been non-resync activity.  This will only
		 * happen once though.  'last_events' will soon reflect
		 * the state where there is little or no outstanding
		 * resync requests, and further resync activity will
		 * always make curr_events less than last_events.
		 */
		if (init || curr_events - rdev->last_events > 64) {
			rdev->last_events = curr_events;
			idle = 0;
		}
	}
	rcu_read_unlock();
	return idle;
}

void md_done_sync(struct mddev *mddev, int blocks, int ok)
{
	/* another "blocks" (512 byte) blocks have been synced */
	atomic_sub(blocks, &mddev->recovery_active);
	wake_up(&mddev->recovery_wait);
	if (!ok) {
		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
		set_bit(MD_RECOVERY_ERROR, &mddev->recovery);
		md_wakeup_thread(mddev->thread);
		/* stop recovery, signal do_sync */
	}
}
EXPORT_SYMBOL(md_done_sync);

/* md_write_start(mddev, bi)
 * If we need to update some array metadata (e.g. 'active' flag
 * in superblock) before writing, schedule a superblock update
 * and wait for it to complete.
 */
void md_write_start(struct mddev *mddev, struct bio *bi)
{
	int did_change = 0;

	if (bio_data_dir(bi) != WRITE)
		return;

	BUG_ON(mddev->ro == 1);
	if (mddev->ro == 2) {
		/* need to switch to read/write */
		mddev->ro = 0;
		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
		md_wakeup_thread(mddev->thread);
		md_wakeup_thread(mddev->sync_thread);
		did_change = 1;
	}
	atomic_inc(&mddev->writes_pending);
	if (mddev->safemode == 1)
		mddev->safemode = 0;
	if (mddev->in_sync) {
		spin_lock(&mddev->lock);
		if (mddev->in_sync) {
			mddev->in_sync = 0;
			set_bit(MD_CHANGE_CLEAN, &mddev->flags);
			set_bit(MD_CHANGE_PENDING, &mddev->flags);
			md_wakeup_thread(mddev->thread);
			did_change = 1;
		}
		spin_unlock(&mddev->lock);
	}
	if (did_change)
		sysfs_notify_dirent_safe(mddev->sysfs_state);
	wait_event(mddev->sb_wait,
		   !test_bit(MD_CHANGE_PENDING, &mddev->flags));
}
EXPORT_SYMBOL(md_write_start);

void md_write_end(struct mddev *mddev)
{
	if (atomic_dec_and_test(&mddev->writes_pending)) {
		if (mddev->safemode == 2)
			md_wakeup_thread(mddev->thread);
		else if (mddev->safemode_delay)
			mod_timer(&mddev->safemode_timer,
				  jiffies + mddev->safemode_delay);
	}
}
EXPORT_SYMBOL(md_write_end);

/* md_allow_write(mddev)
 * Calling this ensures that the array is marked 'active' so that writes
 * may proceed without blocking.  It is important to call this before
 * attempting a GFP_KERNEL allocation while holding the mddev lock.
 * Must be called with mddev_lock held.
 *
 * In the ->external case MD_CHANGE_PENDING can not be cleared until mddev->lock
 * is dropped, so return -EAGAIN after notifying userspace.
 */
int md_allow_write(struct mddev *mddev)
{
	if (!mddev->pers)
		return 0;
	if (mddev->ro)
		return 0;
	if (!mddev->pers->sync_request)
		return 0;

	spin_lock(&mddev->lock);
	if (mddev->in_sync) {
		mddev->in_sync = 0;
		set_bit(MD_CHANGE_CLEAN, &mddev->flags);
		set_bit(MD_CHANGE_PENDING, &mddev->flags);
		if (mddev->safemode_delay &&
		    mddev->safemode == 0)
			mddev->safemode = 1;
		spin_unlock(&mddev->lock);
		md_update_sb(mddev, 0);
		sysfs_notify_dirent_safe(mddev->sysfs_state);
	} else
		spin_unlock(&mddev->lock);

	if (test_bit(MD_CHANGE_PENDING, &mddev->flags))
		return -EAGAIN;
	else
		return 0;
}
EXPORT_SYMBOL_GPL(md_allow_write);

#define SYNC_MARKS	10
#define	SYNC_MARK_STEP	(3*HZ)
#define UPDATE_FREQUENCY (5*60*HZ)

void md_do_sync(struct md_thread *thread)
{
	struct mddev *mddev = thread->mddev;
	struct mddev *mddev2;
	unsigned int currspeed = 0,
		 window;
	sector_t max_sectors, j, io_sectors, recovery_done;
	unsigned long mark[SYNC_MARKS];
	unsigned long update_time;
	sector_t mark_cnt[SYNC_MARKS];
	int last_mark, m;
	struct list_head *tmp;
	sector_t last_check;
	int skipped = 0;
	struct md_rdev *rdev;
	char *desc, *action = NULL;
	struct blk_plug plug;
	int ret;

	/* just in case thread restarts... */
	if (test_bit(MD_RECOVERY_DONE, &mddev->recovery))
		return;
	if (mddev->ro) { /* never try to sync a read-only array */
		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
		return;
	}

	if (mddev_is_clustered(mddev)) {
		ret = md_cluster_ops->resync_start(mddev);
		if (ret)
			goto skip;

		set_bit(MD_CLUSTER_RESYNC_LOCKED, &mddev->flags);
		if (!(test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
		      test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) ||
		      test_bit(MD_RECOVERY_RECOVER, &mddev->recovery))
		    && ((unsigned long long)mddev->curr_resync_completed
			< (unsigned long long)mddev->resync_max_sectors))
			goto skip;
	}

	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
		if (test_bit(MD_RECOVERY_CHECK, &mddev->recovery)) {
			desc = "data-check";
			action = "check";
		} else if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
			desc = "requested-resync";
			action = "repair";
		} else
			desc = "resync";
	} else if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
		desc = "reshape";
	else
		desc = "recovery";

	mddev->last_sync_action = action ?: desc;

	/* we overload curr_resync somewhat here.
	 * 0 == not engaged in resync at all
	 * 2 == checking that there is no conflict with another sync
	 * 1 == like 2, but have yielded to allow conflicting resync to
	 *		commence
	 * other == active in resync - this many blocks
	 *
	 * Before starting a resync we must have set curr_resync to
	 * 2, and then checked that every "conflicting" array has curr_resync
	 * less than ours.  When we find one that is the same or higher
	 * we wait on resync_wait.  To avoid deadlock, we reduce curr_resync
	 * to 1 if we choose to yield (based arbitrarily on address of mddev structure).
	 * This will mean we have to start checking from the beginning again.
	 */

	do {
		int mddev2_minor = -1;

		mddev->curr_resync = 2;
	try_again:
		if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
			goto skip;
		for_each_mddev(mddev2, tmp) {
			if (mddev2 == mddev)
				continue;
			if (!mddev->parallel_resync
			&&  mddev2->curr_resync
			&&  match_mddev_units(mddev, mddev2)) {
				DEFINE_WAIT(wq);

				if (mddev < mddev2 && mddev->curr_resync == 2) {
					/* arbitrarily yield */
					mddev->curr_resync = 1;
					wake_up(&resync_wait);
				}
				if (mddev > mddev2 && mddev->curr_resync == 1)
					/* no need to wait here, we can wait the next
					 * time 'round when curr_resync == 2
					 */
					continue;
				/* We need to wait 'interruptible' so as not to
				 * contribute to the load average, and not to
				 * be caught by 'softlockup'
				 */
				prepare_to_wait(&resync_wait, &wq, TASK_INTERRUPTIBLE);
				if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
				    mddev2->curr_resync >= mddev->curr_resync) {
					if (mddev2_minor != mddev2->md_minor) {
						mddev2_minor = mddev2->md_minor;
						printk(KERN_INFO "md: delaying %s of %s until %s has finished (they share one or more physical units)\n",
						       desc, mdname(mddev),
						       mdname(mddev2));
					}
					mddev_put(mddev2);
					if (signal_pending(current))
						flush_signals(current);
					schedule();
					finish_wait(&resync_wait, &wq);
					goto try_again;
				}
				finish_wait(&resync_wait, &wq);
			}
		}
	} while (mddev->curr_resync < 2);

	j = 0;
	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
		/* resync follows the size requested by the personality,
		 * which defaults to physical size, but can be virtual size
		 */
		max_sectors = mddev->resync_max_sectors;
		atomic64_set(&mddev->resync_mismatches, 0);
		/* we don't use the checkpoint if there's a bitmap */
		if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
			j = mddev->resync_min;
		else if (!mddev->bitmap)
			j = mddev->recovery_cp;

	} else if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
		max_sectors = mddev->resync_max_sectors;
	else {
		/* recovery follows the physical size of devices */
		max_sectors = mddev->dev_sectors;
		j = MaxSector;
		rcu_read_lock();
		rdev_for_each_rcu(rdev, mddev)
			if (rdev->raid_disk >= 0 &&
			    !test_bit(Journal, &rdev->flags) &&
			    !test_bit(Faulty, &rdev->flags) &&
			    !test_bit(In_sync, &rdev->flags) &&
			    rdev->recovery_offset < j)
				j = rdev->recovery_offset;
		rcu_read_unlock();

		/* If there is a bitmap, we need to make sure all
		 * writes that started before we added a spare
		 * complete before we start doing a recovery.
		 * Otherwise the write might complete and (via
		 * bitmap_endwrite) set a bit in the bitmap after the
		 * recovery has checked that bit and skipped that
		 * region.
		 */
		if (mddev->bitmap) {
			mddev->pers->quiesce(mddev, 1);
			mddev->pers->quiesce(mddev, 0);
		}
	}

	printk(KERN_INFO "md: %s of RAID array %s\n", desc, mdname(mddev));
	printk(KERN_INFO "md: minimum _guaranteed_  speed: %d KB/sec/disk.\n",
	       speed_min(mddev));
	printk(KERN_INFO "md: using maximum available idle IO bandwidth (but not more than %d KB/sec) for %s.\n",
	       speed_max(mddev), desc);

	is_mddev_idle(mddev, 1); /* this initializes IO event counters */

	io_sectors = 0;
	for (m = 0; m < SYNC_MARKS; m++) {
		mark[m] = jiffies;
		mark_cnt[m] = io_sectors;
	}
	last_mark = 0;
	mddev->resync_mark = mark[last_mark];
	mddev->resync_mark_cnt = mark_cnt[last_mark];

	/*
	 * Tune reconstruction:
	 */
	window = 32*(PAGE_SIZE/512);
	printk(KERN_INFO "md: using %dk window, over a total of %lluk.\n",
	       window/2, (unsigned long long)max_sectors/2);

	atomic_set(&mddev->recovery_active, 0);
	last_check = 0;

	if (j > 2) {
		printk(KERN_INFO
		       "md: resuming %s of %s from checkpoint.\n",
		       desc, mdname(mddev));
		mddev->curr_resync = j;
	} else
		mddev->curr_resync = 3; /* no longer delayed */
	mddev->curr_resync_completed = j;
	sysfs_notify(&mddev->kobj, NULL, "sync_completed");
	md_new_event(mddev);
	update_time = jiffies;

	blk_start_plug(&plug);
	while (j < max_sectors) {
		sector_t sectors;

		skipped = 0;

		if (!test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
		    ((mddev->curr_resync > mddev->curr_resync_completed &&
		      (mddev->curr_resync - mddev->curr_resync_completed)
		      > (max_sectors >> 4)) ||
		     time_after_eq(jiffies, update_time + UPDATE_FREQUENCY) ||
		     (j - mddev->curr_resync_completed)*2
		     >= mddev->resync_max - mddev->curr_resync_completed ||
		     mddev->curr_resync_completed > mddev->resync_max
			    )) {
			/* time to update curr_resync_completed */
			wait_event(mddev->recovery_wait,
				   atomic_read(&mddev->recovery_active) == 0);
			mddev->curr_resync_completed = j;
			if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) &&
			    j > mddev->recovery_cp)
				mddev->recovery_cp = j;
			update_time = jiffies;
			set_bit(MD_CHANGE_CLEAN, &mddev->flags);
			sysfs_notify(&mddev->kobj, NULL, "sync_completed");
		}

		while (j >= mddev->resync_max &&
		       !test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
			/* As this condition is controlled by user-space,
			 * we can block indefinitely, so use '_interruptible'
			 * to avoid triggering warnings.
			 */
			flush_signals(current); /* just in case */
			wait_event_interruptible(mddev->recovery_wait,
						 mddev->resync_max > j
						 || test_bit(MD_RECOVERY_INTR,
							     &mddev->recovery));
		}

		if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
			break;

		sectors = mddev->pers->sync_request(mddev, j, &skipped);
		if (sectors == 0) {
			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
			break;
		}

		if (!skipped) { /* actual IO requested */
			io_sectors += sectors;
			atomic_add(sectors, &mddev->recovery_active);
		}

		if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
			break;

		j += sectors;
		if (j > max_sectors)
			/* when skipping, extra large numbers can be returned. */
			j = max_sectors;
		if (j > 2)
			mddev->curr_resync = j;
		mddev->curr_mark_cnt = io_sectors;
		if (last_check == 0)
			/* this is the earliest that rebuild will be
			 * visible in /proc/mdstat
			 */
			md_new_event(mddev);

		if (last_check + window > io_sectors || j == max_sectors)
			continue;

		last_check = io_sectors;
	repeat:
		if (time_after_eq(jiffies, mark[last_mark] + SYNC_MARK_STEP)) {
			/* step marks */
			int next = (last_mark+1) % SYNC_MARKS;

			mddev->resync_mark = mark[next];
			mddev->resync_mark_cnt = mark_cnt[next];
			mark[next] = jiffies;
			mark_cnt[next] = io_sectors - atomic_read(&mddev->recovery_active);
			last_mark = next;
		}

		if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
			break;

		/*
		 * this loop exits only if we are slower than the 'hard'
		 * speed limit, or if the system was IO-idle for a jiffy.
		 * the system might be non-idle CPU-wise, but we only care
		 * about not overloading the IO subsystem. (things like an
		 * e2fsck being done on the RAID array should execute fast)
		 */
		cond_resched();

		recovery_done = io_sectors - atomic_read(&mddev->recovery_active);
		currspeed = ((unsigned long)(recovery_done - mddev->resync_mark_cnt))/2
			/((jiffies-mddev->resync_mark)/HZ + 1) + 1;

		if (currspeed > speed_min(mddev)) {
			if (currspeed > speed_max(mddev)) {
				msleep(500);
				goto repeat;
			}
			if (!is_mddev_idle(mddev, 0)) {
				/*
				 * Give other IO more of a chance.
				 * The faster the devices, the less we wait.
				 */
				wait_event(mddev->recovery_wait,
					   !atomic_read(&mddev->recovery_active));
			}
		}
	}
|
2013-11-19 08:02:01 +07:00
|
|
|
printk(KERN_INFO "md: %s: %s %s.\n",mdname(mddev), desc,
|
|
|
|
test_bit(MD_RECOVERY_INTR, &mddev->recovery)
|
|
|
|
? "interrupted" : "done");
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* this also signals 'finished resyncing' to md_stop
|
|
|
|
*/
|
md: Add blk_plug in sync_thread.
Adding blk_plug in sync_thread increases the performance of sync.
Because sync_thread did not use blk_plug, bios were not merged
well during RAID sync.
Testing environment:
SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI
Controller.
OS:Linux xxx 3.5.0-rc2+ #340 SMP Tue Jun 12 09:00:25 CST 2012
x86_64 x86_64 x86_64 GNU/Linux.
RAID5: four ST31000524NS disk.
Without blk_plug: recovery speed about 63M/sec;
with blk_plug: recovery speed about 120M/sec.
Using blktrace:
blktrace -d /dev/sdb -w 60 -o -|blkparse -i -
without blk_plug:
Total (8,16):
Reads Queued: 309811, 1239MiB Writes Queued: 0, 0KiB
Read Dispatches: 283583, 1189MiB Write Dispatches: 0, 0KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 273351, 1149MiB Writes Completed: 0, 0KiB
Read Merges: 23533, 94132KiB Write Merges: 0, 0KiB
IO unplugs: 0 Timer unplugs: 0
add blk_plug:
Total (8,16):
Reads Queued: 428697, 1714MiB Writes Queued: 0, 0KiB
Read Dispatches: 3954, 1714MiB Write Dispatches: 0, 0KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 3956, 1715MiB Writes Completed: 0, 0KiB
Read Merges: 424743, 1698MiB Write Merges: 0, 0KiB
IO unplugs: 0 Timer unplugs: 3384
The merge ratio is markedly increased.
Signed-off-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2012-07-03 09:12:26 +07:00
|
|
|
blk_finish_plug(&plug);
|
2005-04-17 05:20:36 +07:00
|
|
|
wait_event(mddev->recovery_wait, !atomic_read(&mddev->recovery_active));
|
|
|
|
|
2015-07-24 10:27:08 +07:00
|
|
|
if (!test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
|
|
|
|
!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
|
|
|
|
mddev->curr_resync > 2) {
|
|
|
|
mddev->curr_resync_completed = mddev->curr_resync;
|
|
|
|
sysfs_notify(&mddev->kobj, NULL, "sync_completed");
|
|
|
|
}
|
2015-02-19 12:04:40 +07:00
|
|
|
mddev->pers->sync_request(mddev, max_sectors, &skipped);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
md: restart recovery cleanly after device failure.
When we get any IO error during a recovery (rebuilding a spare), we abort
the recovery and restart it.
For RAID6 (and multi-drive RAID1) it may not be best to restart at the
beginning: when multiple failures can be tolerated, the recovery may be
able to continue and re-doing all that has already been done doesn't make
sense.
We already have the infrastructure to record where a recovery is up to
and restart from there, but it is not being used properly.
This is because:
- We sometimes abort with MD_RECOVERY_ERR rather than just MD_RECOVERY_INTR,
which causes the recovery not to be checkpointed.
- We remove spares and then re-add them, which loses important state
information.
The distinction between MD_RECOVERY_ERR and MD_RECOVERY_INTR really isn't
needed. If there is an error, the relevant drive will be marked as
Faulty, and that is enough to ensure correct handling of the error. So we
first remove MD_RECOVERY_ERR, changing some of the uses of it to
MD_RECOVERY_INTR.
Then we cause the attempt to remove a non-faulty device from an array to
fail (unless recovery is impossible as the array is too degraded). Then
when remove_and_add_spares attempts to remove the devices on which
recovery can continue, it will fail, they will remain in place, and
recovery will continue on them as desired.
Issue: If we are halfway through rebuilding a spare and another drive
fails, and a new spare is immediately available, do we want to:
1/ complete the current rebuild, then go back and rebuild the new spare or
2/ restart the rebuild from the start and rebuild both devices in
parallel.
Both options can be argued for. The code currently takes option 2 as
a/ this requires least code change
b/ this results in a minimally-degraded array in minimal time.
Cc: "Eivind Sarto" <ivan@kasenna.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-05-24 03:04:39 +07:00
|
|
|
if (!test_bit(MD_RECOVERY_CHECK, &mddev->recovery) &&
|
2006-06-26 14:27:40 +07:00
|
|
|
mddev->curr_resync > 2) {
|
|
|
|
if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
|
|
|
|
if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
|
|
|
|
if (mddev->curr_resync >= mddev->recovery_cp) {
|
|
|
|
printk(KERN_INFO
|
2006-10-03 15:15:57 +07:00
|
|
|
"md: checkpointing %s of %s.\n",
|
|
|
|
desc, mdname(mddev));
|
2012-11-19 18:57:34 +07:00
|
|
|
if (test_bit(MD_RECOVERY_ERROR,
|
|
|
|
&mddev->recovery))
|
|
|
|
mddev->recovery_cp =
|
|
|
|
mddev->curr_resync_completed;
|
|
|
|
else
|
|
|
|
mddev->recovery_cp =
|
|
|
|
mddev->curr_resync;
|
2006-06-26 14:27:40 +07:00
|
|
|
}
|
|
|
|
} else
|
|
|
|
mddev->recovery_cp = MaxSector;
|
|
|
|
} else {
|
|
|
|
if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery))
|
|
|
|
mddev->curr_resync = MaxSector;
|
2009-12-13 11:17:06 +07:00
|
|
|
rcu_read_lock();
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each_rcu(rdev, mddev)
|
2006-06-26 14:27:40 +07:00
|
|
|
if (rdev->raid_disk >= 0 &&
|
2010-06-16 14:01:25 +07:00
|
|
|
mddev->delta_disks >= 0 &&
|
2015-10-09 11:54:12 +07:00
|
|
|
!test_bit(Journal, &rdev->flags) &&
|
2006-06-26 14:27:40 +07:00
|
|
|
!test_bit(Faulty, &rdev->flags) &&
|
|
|
|
!test_bit(In_sync, &rdev->flags) &&
|
|
|
|
rdev->recovery_offset < mddev->curr_resync)
|
|
|
|
rdev->recovery_offset = mddev->curr_resync;
|
2009-12-13 11:17:06 +07:00
|
|
|
rcu_read_unlock();
|
2006-06-26 14:27:40 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2012-02-07 08:01:51 +07:00
|
|
|
skip:
|
2016-06-03 10:32:04 +07:00
|
|
|
/* set CHANGE_PENDING here since another update may be needed,
|
|
|
|
* so other nodes are informed. It should be harmless for normal
|
|
|
|
* raid */
|
|
|
|
set_mask_bits(&mddev->flags, 0,
|
|
|
|
BIT(MD_CHANGE_PENDING) | BIT(MD_CHANGE_DEVS));
|
2015-10-01 01:20:35 +07:00
|
|
|
|
2014-12-15 08:57:01 +07:00
|
|
|
spin_lock(&mddev->lock);
|
2009-12-14 08:49:48 +07:00
|
|
|
if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
|
|
|
|
/* We completed so min/max setting can be forgotten if used. */
|
|
|
|
if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
|
|
|
|
mddev->resync_min = 0;
|
|
|
|
mddev->resync_max = MaxSector;
|
|
|
|
} else if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
|
|
|
|
mddev->resync_min = mddev->curr_resync_completed;
|
2015-07-02 14:12:58 +07:00
|
|
|
set_bit(MD_RECOVERY_DONE, &mddev->recovery);
|
2005-04-17 05:20:36 +07:00
|
|
|
mddev->curr_resync = 0;
|
2014-12-15 08:57:01 +07:00
|
|
|
spin_unlock(&mddev->lock);
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
wake_up(&resync_wait);
|
|
|
|
md_wakeup_thread(mddev->thread);
|
2008-02-06 16:39:52 +07:00
|
|
|
return;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2006-03-27 16:18:10 +07:00
|
|
|
EXPORT_SYMBOL_GPL(md_do_sync);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2013-04-24 08:42:41 +07:00
|
|
|
static int remove_and_add_spares(struct mddev *mddev,
|
|
|
|
struct md_rdev *this)
|
2007-03-01 11:11:48 +07:00
|
|
|
{
|
2011-10-11 12:45:26 +07:00
|
|
|
struct md_rdev *rdev;
|
2007-03-01 11:11:48 +07:00
|
|
|
int spares = 0;
|
2012-01-08 20:46:41 +07:00
|
|
|
int removed = 0;
|
2016-06-02 13:19:53 +07:00
|
|
|
bool remove_some = false;
|
2007-03-01 11:11:48 +07:00
|
|
|
|
2016-06-02 13:19:53 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
|
|
|
if ((this == NULL || rdev == this) &&
|
|
|
|
rdev->raid_disk >= 0 &&
|
|
|
|
!test_bit(Blocked, &rdev->flags) &&
|
|
|
|
test_bit(Faulty, &rdev->flags) &&
|
|
|
|
atomic_read(&rdev->nr_pending)==0) {
|
|
|
|
/* Faulty non-Blocked devices with nr_pending == 0
|
|
|
|
* never get nr_pending incremented,
|
|
|
|
* never get Faulty cleared, and never get Blocked set.
|
|
|
|
* So we can synchronize_rcu now rather than once per device
|
|
|
|
*/
|
|
|
|
remove_some = true;
|
|
|
|
set_bit(RemoveSynchronized, &rdev->flags);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (remove_some)
|
|
|
|
synchronize_rcu();
|
|
|
|
rdev_for_each(rdev, mddev) {
|
2013-04-24 08:42:41 +07:00
|
|
|
if ((this == NULL || rdev == this) &&
|
|
|
|
rdev->raid_disk >= 0 &&
|
2008-04-30 14:52:32 +07:00
|
|
|
!test_bit(Blocked, &rdev->flags) &&
|
2016-06-02 13:19:53 +07:00
|
|
|
((test_bit(RemoveSynchronized, &rdev->flags) ||
|
2015-10-09 11:54:12 +07:00
|
|
|
(!test_bit(In_sync, &rdev->flags) &&
|
|
|
|
!test_bit(Journal, &rdev->flags))) &&
|
2016-06-02 13:19:53 +07:00
|
|
|
atomic_read(&rdev->nr_pending)==0)) {
|
2007-03-01 11:11:48 +07:00
|
|
|
if (mddev->pers->hot_remove_disk(
|
2011-12-23 06:17:51 +07:00
|
|
|
mddev, rdev) == 0) {
|
2011-07-27 08:00:36 +07:00
|
|
|
sysfs_unlink_rdev(mddev, rdev);
|
2007-03-01 11:11:48 +07:00
|
|
|
rdev->raid_disk = -1;
|
2012-01-08 20:46:41 +07:00
|
|
|
removed++;
|
2007-03-01 11:11:48 +07:00
|
|
|
}
|
|
|
|
}
|
2016-06-02 13:19:53 +07:00
|
|
|
if (remove_some && test_bit(RemoveSynchronized, &rdev->flags))
|
|
|
|
clear_bit(RemoveSynchronized, &rdev->flags);
|
|
|
|
}
|
|
|
|
|
2013-03-08 05:24:26 +07:00
|
|
|
if (removed && mddev->kobj.sd)
|
|
|
|
sysfs_notify(&mddev->kobj, NULL, "degraded");
|
2007-03-01 11:11:48 +07:00
|
|
|
|
2015-09-28 22:27:26 +07:00
|
|
|
if (this && removed)
|
2013-04-24 08:42:41 +07:00
|
|
|
goto no_add;
|
|
|
|
|
2012-03-19 08:46:39 +07:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2015-09-28 22:27:26 +07:00
|
|
|
if (this && this != rdev)
|
|
|
|
continue;
|
2015-10-02 01:20:27 +07:00
|
|
|
if (test_bit(Candidate, &rdev->flags))
|
|
|
|
continue;
|
2011-12-23 06:17:53 +07:00
|
|
|
if (rdev->raid_disk >= 0 &&
|
|
|
|
!test_bit(In_sync, &rdev->flags) &&
|
2015-10-09 11:54:12 +07:00
|
|
|
!test_bit(Journal, &rdev->flags) &&
|
2011-12-23 06:17:53 +07:00
|
|
|
!test_bit(Faulty, &rdev->flags))
|
|
|
|
spares++;
|
md: Allow devices to be re-added to a read-only array.
When assembling an array incrementally we might want to make
the device available when "enough" devices are present, but maybe
not "all" devices are present.
If the remaining devices appear before the array is actually used,
they should be added transparently.
We do this by using the "read-auto" mode where the array acts like
it is read-only until a write request arrives.
Currently an add-device request switches a read-auto array to active.
This means that only one device can be added after the array is first
made read-auto. This isn't a problem for RAID5, but is not ideal for
RAID6 or RAID10.
Also we don't really want to switch the array to read-auto at all
when re-adding a device as this doesn't really imply any change.
So:
- remove the "md_update_sb()" call from add_new_disk(). This isn't
really needed as just adding a disk doesn't require a metadata
update. Instead, just set MD_CHANGE_DEVS. This will effect a
metadata update soon enough, once the array is not read-only.
- Allow the ADD_NEW_DISK ioctl to succeed without activating a
read-auto array, providing the MD_DISK_SYNC flag is set.
In this case, the device will be rejected if it cannot be added
with the correct device number, or has an incorrect event count.
- Teach remove_and_add_spares() to be careful about adding spares
when the array is read-only (or read-mostly) - only add devices
that are thought to be in-sync, and only do it if the array is
in-sync itself.
- In md_check_recovery, use remove_and_add_spares in the read-only
case, rather than open coding just the 'remove' part of it.
Reported-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
2013-04-24 08:42:42 +07:00
|
|
|
if (rdev->raid_disk >= 0)
|
|
|
|
continue;
|
|
|
|
if (test_bit(Faulty, &rdev->flags))
|
|
|
|
continue;
|
2015-12-21 06:51:02 +07:00
|
|
|
if (!test_bit(Journal, &rdev->flags)) {
|
|
|
|
if (mddev->ro &&
|
|
|
|
! (rdev->saved_raid_disk >= 0 &&
|
|
|
|
!test_bit(Bitmap_sync, &rdev->flags)))
|
|
|
|
continue;
|
2013-04-24 08:42:42 +07:00
|
|
|
|
2015-12-21 06:51:02 +07:00
|
|
|
rdev->recovery_offset = 0;
|
|
|
|
}
|
2013-04-24 08:42:42 +07:00
|
|
|
if (mddev->pers->
|
|
|
|
hot_add_disk(mddev, rdev) == 0) {
|
|
|
|
if (sysfs_link_rdev(mddev, rdev))
|
|
|
|
/* failure here is OK */;
|
2015-12-21 06:51:02 +07:00
|
|
|
if (!test_bit(Journal, &rdev->flags))
|
|
|
|
spares++;
|
2013-04-24 08:42:42 +07:00
|
|
|
md_new_event(mddev);
|
|
|
|
set_bit(MD_CHANGE_DEVS, &mddev->flags);
|
2008-05-24 03:04:39 +07:00
|
|
|
}
|
2007-03-01 11:11:48 +07:00
|
|
|
}
|
2013-04-24 08:42:41 +07:00
|
|
|
no_add:
|
2012-09-19 09:54:22 +07:00
|
|
|
if (removed)
|
|
|
|
set_bit(MD_CHANGE_DEVS, &mddev->flags);
|
2007-03-01 11:11:48 +07:00
|
|
|
return spares;
|
|
|
|
}
|
2011-01-14 05:14:33 +07:00
|
|
|
|
2014-09-30 05:10:42 +07:00
|
|
|
static void md_start_sync(struct work_struct *ws)
|
|
|
|
{
|
|
|
|
struct mddev *mddev = container_of(ws, struct mddev, del_work);
|
2015-10-01 01:20:35 +07:00
|
|
|
|
2014-09-30 05:10:42 +07:00
|
|
|
mddev->sync_thread = md_register_thread(md_do_sync,
|
|
|
|
mddev,
|
|
|
|
"resync");
|
|
|
|
if (!mddev->sync_thread) {
|
2016-08-12 12:42:40 +07:00
|
|
|
printk(KERN_ERR "%s: could not start resync thread...\n",
|
|
|
|
mdname(mddev));
|
2014-09-30 05:10:42 +07:00
|
|
|
/* leave the spares where they are, it shouldn't hurt */
|
|
|
|
clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
|
|
|
|
clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
|
|
|
|
clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
|
|
|
|
clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
|
|
|
|
clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
|
2014-12-11 06:02:10 +07:00
|
|
|
wake_up(&resync_wait);
|
2014-09-30 05:10:42 +07:00
|
|
|
if (test_and_clear_bit(MD_RECOVERY_RECOVER,
|
|
|
|
&mddev->recovery))
|
|
|
|
if (mddev->sysfs_action)
|
|
|
|
sysfs_notify_dirent_safe(mddev->sysfs_action);
|
|
|
|
} else
|
|
|
|
md_wakeup_thread(mddev->sync_thread);
|
|
|
|
sysfs_notify_dirent_safe(mddev->sysfs_action);
|
|
|
|
md_new_event(mddev);
|
|
|
|
}
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* This routine is regularly called by all per-raid-array threads to
|
|
|
|
* deal with generic issues like resync and super-block update.
|
|
|
|
* Raid personalities that don't have a thread (linear/raid0) do not
|
|
|
|
* need this as they never do any recovery or update the superblock.
|
|
|
|
*
|
|
|
|
* It does not do any resync itself, but rather "forks" off other threads
|
|
|
|
* to do that as needed.
|
|
|
|
* When it is determined that resync is needed, we set MD_RECOVERY_RUNNING in
|
|
|
|
* "->recovery" and create a thread at ->sync_thread.
|
2008-05-24 03:04:39 +07:00
|
|
|
* When the thread finishes it sets MD_RECOVERY_DONE
|
2005-04-17 05:20:36 +07:00
|
|
|
* and wakes up this thread, which will reap the thread and finish up.
|
|
|
|
* This thread also removes any faulty devices (with nr_pending == 0).
|
|
|
|
*
|
|
|
|
* The overall approach is:
|
|
|
|
* 1/ if the superblock needs updating, update it.
|
|
|
|
* 2/ If a recovery thread is running, don't do anything else.
|
|
|
|
* 3/ If recovery has finished, clean up, possibly marking spares active.
|
|
|
|
* 4/ If there are any faulty devices, remove them.
|
|
|
|
* 5/ If the array is degraded, try to add spare devices
|
|
|
|
* 6/ If array has spares or is not in-sync, start a resync thread.
|
|
|
|
*/
|
2011-10-11 12:47:53 +07:00
|
|
|
void md_check_recovery(struct mddev *mddev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2011-06-08 12:10:08 +07:00
|
|
|
if (mddev->suspended)
|
|
|
|
return;
|
|
|
|
|
2005-06-22 07:17:16 +07:00
|
|
|
if (mddev->bitmap)
|
2009-12-14 08:49:46 +07:00
|
|
|
bitmap_daemon_work(mddev);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2005-06-22 07:17:11 +07:00
|
|
|
if (signal_pending(current)) {
|
2008-04-30 14:52:30 +07:00
|
|
|
if (mddev->pers->sync_request && !mddev->external) {
|
2005-06-22 07:17:11 +07:00
|
|
|
printk(KERN_INFO "md: %s in immediate safe mode\n",
|
|
|
|
mdname(mddev));
|
|
|
|
mddev->safemode = 2;
|
|
|
|
}
|
|
|
|
flush_signals(current);
|
|
|
|
}
|
|
|
|
|
2008-08-05 12:54:13 +07:00
|
|
|
if (mddev->ro && !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
|
|
|
|
return;
|
2005-04-17 05:20:36 +07:00
|
|
|
if ( ! (
|
2013-11-28 06:34:18 +07:00
|
|
|
(mddev->flags & MD_UPDATE_SB_FLAGS & ~ (1<<MD_CHANGE_PENDING)) ||
|
2005-04-17 05:20:36 +07:00
|
|
|
test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) ||
|
2005-06-22 07:17:11 +07:00
|
|
|
test_bit(MD_RECOVERY_DONE, &mddev->recovery) ||
|
2015-12-21 06:51:00 +07:00
|
|
|
test_bit(MD_RELOAD_SB, &mddev->flags) ||
|
2008-04-30 14:52:30 +07:00
|
|
|
(mddev->external == 0 && mddev->safemode == 1) ||
|
2005-06-22 07:17:11 +07:00
|
|
|
(mddev->safemode == 2 && ! atomic_read(&mddev->writes_pending)
|
|
|
|
&& !mddev->in_sync && mddev->recovery_cp == MaxSector)
|
2005-04-17 05:20:36 +07:00
|
|
|
))
|
|
|
|
return;
|
2005-06-22 07:17:11 +07:00
|
|
|
|
2006-03-27 16:18:20 +07:00
|
|
|
if (mddev_trylock(mddev)) {
|
2007-03-01 11:11:48 +07:00
|
|
|
int spares = 0;
|
2005-06-22 07:17:11 +07:00
|
|
|
|
2008-08-05 12:54:13 +07:00
|
|
|
if (mddev->ro) {
|
2015-06-17 09:31:46 +07:00
|
|
|
struct md_rdev *rdev;
|
|
|
|
if (!mddev->external && mddev->in_sync)
|
|
|
|
/* 'Blocked' flag not needed as failed devices
|
|
|
|
* will be recorded if array switched to read/write.
|
|
|
|
* Leaving it set will prevent the device
|
|
|
|
* from being removed.
|
|
|
|
*/
|
|
|
|
rdev_for_each(rdev, mddev)
|
|
|
|
clear_bit(Blocked, &rdev->flags);
|
2013-04-24 08:42:42 +07:00
|
|
|
/* On a read-only array we can:
|
|
|
|
* - remove failed devices
|
|
|
|
* - add already-in_sync devices if the array itself
|
|
|
|
* is in-sync.
|
|
|
|
* As we only add devices that are already in-sync,
|
|
|
|
* we can activate the spares immediately.
|
2008-08-05 12:54:13 +07:00
|
|
|
*/
|
2013-04-24 08:42:42 +07:00
|
|
|
remove_and_add_spares(mddev, NULL);
|
2013-12-12 06:13:33 +07:00
|
|
|
/* There is no thread, but we need to call
|
|
|
|
* ->spare_active and clear saved_raid_disk
|
|
|
|
*/
|
2014-05-29 08:40:03 +07:00
|
|
|
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
|
2013-12-12 06:13:33 +07:00
|
|
|
md_reap_sync_thread(mddev);
|
2015-07-17 08:57:30 +07:00
|
|
|
clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
|
2013-12-12 06:13:33 +07:00
|
|
|
clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
|
2015-09-19 00:20:12 +07:00
|
|
|
clear_bit(MD_CHANGE_PENDING, &mddev->flags);
|
2008-08-05 12:54:13 +07:00
|
|
|
goto unlock;
|
|
|
|
}
|
|
|
|
|
2015-12-21 06:50:59 +07:00
|
|
|
		if (mddev_is_clustered(mddev)) {
			struct md_rdev *rdev;
			/* kick the device if another node issued a
			 * remove disk.
			 */
			rdev_for_each(rdev, mddev) {
				if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
						rdev->raid_disk < 0)
					md_kick_rdev_from_array(rdev);
			}

			if (test_and_clear_bit(MD_RELOAD_SB, &mddev->flags))
				md_reload_sb(mddev, mddev->good_device_nr);
		}

		if (!mddev->external) {
			int did_change = 0;
			spin_lock(&mddev->lock);
			if (mddev->safemode &&
			    !atomic_read(&mddev->writes_pending) &&
			    !mddev->in_sync &&
			    mddev->recovery_cp == MaxSector) {
				mddev->in_sync = 1;
				did_change = 1;
				set_bit(MD_CHANGE_CLEAN, &mddev->flags);
			}
			if (mddev->safemode == 1)
				mddev->safemode = 0;
			spin_unlock(&mddev->lock);
			if (did_change)
				sysfs_notify_dirent_safe(mddev->sysfs_state);
		}

		if (mddev->flags & MD_UPDATE_SB_FLAGS)
			md_update_sb(mddev, 0);

		if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
		    !test_bit(MD_RECOVERY_DONE, &mddev->recovery)) {
			/* resync/recovery still happening */
			clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
			goto unlock;
		}
		if (mddev->sync_thread) {
			md_reap_sync_thread(mddev);
			goto unlock;
		}
		/* Set RUNNING before clearing NEEDED to avoid
		 * any transients in the value of "sync_action".
		 */
		mddev->curr_resync_completed = 0;
		spin_lock(&mddev->lock);
		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
		spin_unlock(&mddev->lock);
		/* Clear some bits that don't mean anything, but
		 * might be left set
		 */
		clear_bit(MD_RECOVERY_INTR, &mddev->recovery);
		clear_bit(MD_RECOVERY_DONE, &mddev->recovery);

		if (!test_and_clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery) ||
		    test_bit(MD_RECOVERY_FROZEN, &mddev->recovery))
			goto not_running;
		/* no recovery is running.
		 * remove any failed drives, then
		 * add spares if possible.
		 * Spares are also removed and re-added, to allow
		 * the personality to fail the re-add.
		 */

		if (mddev->reshape_position != MaxSector) {
			if (mddev->pers->check_reshape == NULL ||
			    mddev->pers->check_reshape(mddev) != 0)
				/* Cannot proceed */
				goto not_running;
			set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
			clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
		} else if ((spares = remove_and_add_spares(mddev, NULL))) {
			clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
			clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
			clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
			set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
		} else if (mddev->recovery_cp < MaxSector) {
			set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
			clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
		} else if (!test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
			/* nothing to be done ... */
			goto not_running;

		if (mddev->pers->sync_request) {
			if (spares) {
				/* We are adding a device or devices to an array
				 * which has the bitmap stored on all devices.
				 * So make sure all bitmap pages get written
				 */
				bitmap_write_all(mddev->bitmap);
			}
			INIT_WORK(&mddev->del_work, md_start_sync);
			queue_work(md_misc_wq, &mddev->del_work);
			goto unlock;
		}
	not_running:
		if (!mddev->sync_thread) {
			clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
			wake_up(&resync_wait);
			if (test_and_clear_bit(MD_RECOVERY_RECOVER,
					       &mddev->recovery))
				if (mddev->sysfs_action)
					sysfs_notify_dirent_safe(mddev->sysfs_action);
		}
	unlock:
		wake_up(&mddev->sb_wait);
		mddev_unlock(mddev);
	}
}
EXPORT_SYMBOL(md_check_recovery);

void md_reap_sync_thread(struct mddev *mddev)
{
	struct md_rdev *rdev;

	/* resync has finished, collect result */
	md_unregister_thread(&mddev->sync_thread);
	if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
	    !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
		/* success...*/
		/* activate any spares */
		if (mddev->pers->spare_active(mddev)) {
			sysfs_notify(&mddev->kobj, NULL,
				     "degraded");
			set_bit(MD_CHANGE_DEVS, &mddev->flags);
		}
	}
	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
	    mddev->pers->finish_reshape)
		mddev->pers->finish_reshape(mddev);

	/* If array is no-longer degraded, then any saved_raid_disk
	 * information must be scrapped.
	 */
	if (!mddev->degraded)
		rdev_for_each(rdev, mddev)
			rdev->saved_raid_disk = -1;

	md_update_sb(mddev, 1);
	/* MD_CHANGE_PENDING should be cleared by md_update_sb, so we can
	 * call resync_finish here if MD_CLUSTER_RESYNC_LOCKED is set by
	 * clustered raid */
	if (test_and_clear_bit(MD_CLUSTER_RESYNC_LOCKED, &mddev->flags))
		md_cluster_ops->resync_finish(mddev);
	clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
	clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
	clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
	clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
	clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
	wake_up(&resync_wait);
	/* flag recovery needed just to double check */
	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
	sysfs_notify_dirent_safe(mddev->sysfs_action);
	md_new_event(mddev);
	if (mddev->event_work.func)
		queue_work(md_misc_wq, &mddev->event_work);
}
EXPORT_SYMBOL(md_reap_sync_thread);

void md_wait_for_blocked_rdev(struct md_rdev *rdev, struct mddev *mddev)
{
	sysfs_notify_dirent_safe(rdev->sysfs_state);
	wait_event_timeout(rdev->blocked_wait,
			   !test_bit(Blocked, &rdev->flags) &&
			   !test_bit(BlockedBadBlocks, &rdev->flags),
			   msecs_to_jiffies(5000));
	rdev_dec_pending(rdev, mddev);
}
EXPORT_SYMBOL(md_wait_for_blocked_rdev);

void md_finish_reshape(struct mddev *mddev)
{
	/* called by personality module when reshape completes. */
	struct md_rdev *rdev;

	rdev_for_each(rdev, mddev) {
		if (rdev->data_offset > rdev->new_data_offset)
			rdev->sectors += rdev->data_offset - rdev->new_data_offset;
		else
			rdev->sectors -= rdev->new_data_offset - rdev->data_offset;
		rdev->data_offset = rdev->new_data_offset;
	}
}
EXPORT_SYMBOL(md_finish_reshape);

/* Bad block management */

/* Returns 1 on success, 0 on failure */
int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
		       int is_new)
{
	struct mddev *mddev = rdev->mddev;
	int rv;
	if (is_new)
		s += rdev->new_data_offset;
	else
		s += rdev->data_offset;
	rv = badblocks_set(&rdev->badblocks, s, sectors, 0);
	if (rv == 0) {
		/* Make sure they get written out promptly */
		sysfs_notify_dirent_safe(rdev->sysfs_state);
		set_mask_bits(&mddev->flags, 0,
			      BIT(MD_CHANGE_CLEAN) | BIT(MD_CHANGE_PENDING));
		md_wakeup_thread(rdev->mddev->thread);
		return 1;
	} else
		return 0;
}
EXPORT_SYMBOL_GPL(rdev_set_badblocks);

int rdev_clear_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
			 int is_new)
{
	if (is_new)
		s += rdev->new_data_offset;
	else
		s += rdev->data_offset;
	return badblocks_clear(&rdev->badblocks,
			       s, sectors);
}
EXPORT_SYMBOL_GPL(rdev_clear_badblocks);

static int md_notify_reboot(struct notifier_block *this,
			    unsigned long code, void *x)
{
	struct list_head *tmp;
	struct mddev *mddev;
	int need_delay = 0;

	for_each_mddev(mddev, tmp) {
		if (mddev_trylock(mddev)) {
			if (mddev->pers)
				__md_stop_writes(mddev);
			if (mddev->persistent)
				mddev->safemode = 2;
			mddev_unlock(mddev);
		}
		need_delay = 1;
	}
	/*
	 * certain more exotic SCSI devices are known to be
	 * volatile wrt too early system reboots. While the
	 * right place to handle this issue is the given
	 * driver, we do want to have a safe RAID driver ...
	 */
	if (need_delay)
		mdelay(1000*1);

	return NOTIFY_DONE;
}

static struct notifier_block md_notifier = {
	.notifier_call	= md_notify_reboot,
	.next		= NULL,
	.priority	= INT_MAX, /* before any real devices */
};

static void md_geninit(void)
{
	pr_debug("md: sizeof(mdp_super_t) = %d\n", (int)sizeof(mdp_super_t));

	proc_create("mdstat", S_IRUGO, NULL, &md_seq_fops);
}

static int __init md_init(void)
{
	int ret = -ENOMEM;

	md_wq = alloc_workqueue("md", WQ_MEM_RECLAIM, 0);
	if (!md_wq)
		goto err_wq;

	md_misc_wq = alloc_workqueue("md_misc", 0, 0);
	if (!md_misc_wq)
		goto err_misc_wq;

	if ((ret = register_blkdev(MD_MAJOR, "md")) < 0)
		goto err_md;

	if ((ret = register_blkdev(0, "mdp")) < 0)
		goto err_mdp;
	mdp_major = ret;

	blk_register_region(MKDEV(MD_MAJOR, 0), 512, THIS_MODULE,
			    md_probe, NULL, NULL);
	blk_register_region(MKDEV(mdp_major, 0), 1UL<<MINORBITS, THIS_MODULE,
			    md_probe, NULL, NULL);

	register_reboot_notifier(&md_notifier);
	raid_table_header = register_sysctl_table(raid_root_table);

	md_geninit();
	return 0;

err_mdp:
	unregister_blkdev(MD_MAJOR, "md");
err_md:
	destroy_workqueue(md_misc_wq);
err_misc_wq:
	destroy_workqueue(md_wq);
err_wq:
	return ret;
}

static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
{
	struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
	struct md_rdev *rdev2;
	int role, ret;
	char b[BDEVNAME_SIZE];

	/* Check for change of roles in the active devices */
	rdev_for_each(rdev2, mddev) {
		if (test_bit(Faulty, &rdev2->flags))
			continue;

		/* Check if the roles changed */
		role = le16_to_cpu(sb->dev_roles[rdev2->desc_nr]);

		if (test_bit(Candidate, &rdev2->flags)) {
			if (role == 0xfffe) {
				pr_info("md: Removing Candidate device %s because add failed\n", bdevname(rdev2->bdev,b));
				md_kick_rdev_from_array(rdev2);
				continue;
			}
			else
				clear_bit(Candidate, &rdev2->flags);
		}

		if (role != rdev2->raid_disk) {
			/* got activated */
			if (rdev2->raid_disk == -1 && role != 0xffff) {
				rdev2->saved_raid_disk = role;
				ret = remove_and_add_spares(mddev, rdev2);
				pr_info("Activated spare: %s\n",
					bdevname(rdev2->bdev,b));
				/* wakeup mddev->thread here, so array could
				 * perform resync with the new activated disk */
				set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
				md_wakeup_thread(mddev->thread);

			}
			/* device faulty
			 * We just want to do the minimum to mark the disk
			 * as faulty. The recovery is performed by the
			 * one who initiated the error.
			 */
			if ((role == 0xfffe) || (role == 0xfffd)) {
				md_error(mddev, rdev2);
				clear_bit(Blocked, &rdev2->flags);
			}
		}
	}

	if (mddev->raid_disks != le32_to_cpu(sb->raid_disks))
		update_raid_disks(mddev, le32_to_cpu(sb->raid_disks));

	/* Finally set the event to be up to date */
	mddev->events = le64_to_cpu(sb->events);
}

static int read_rdev(struct mddev *mddev, struct md_rdev *rdev)
{
	int err;
	struct page *swapout = rdev->sb_page;
	struct mdp_superblock_1 *sb;

	/* Store the sb page of the rdev in the swapout temporary
	 * variable in case we err in the future
	 */
	rdev->sb_page = NULL;
	alloc_disk_sb(rdev);
	ClearPageUptodate(rdev->sb_page);
	rdev->sb_loaded = 0;
	err = super_types[mddev->major_version].load_super(rdev, NULL, mddev->minor_version);

	if (err < 0) {
		pr_warn("%s: %d Could not reload rdev(%d) err: %d. Restoring old values\n",
			__func__, __LINE__, rdev->desc_nr, err);
		put_page(rdev->sb_page);
		rdev->sb_page = swapout;
		rdev->sb_loaded = 1;
		return err;
	}

	sb = page_address(rdev->sb_page);
	/* Read the offset unconditionally, even if MD_FEATURE_RECOVERY_OFFSET
	 * is not set
	 */

	if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_RECOVERY_OFFSET))
		rdev->recovery_offset = le64_to_cpu(sb->recovery_offset);

	/* The other node finished recovery, call spare_active to set
	 * device In_sync and mddev->degraded
	 */
	if (rdev->recovery_offset == MaxSector &&
	    !test_bit(In_sync, &rdev->flags) &&
	    mddev->pers->spare_active(mddev))
		sysfs_notify(&mddev->kobj, NULL, "degraded");

	put_page(swapout);
	return 0;
}

void md_reload_sb(struct mddev *mddev, int nr)
{
	struct md_rdev *rdev;
	int err;

	/* Find the rdev */
	rdev_for_each_rcu(rdev, mddev) {
		if (rdev->desc_nr == nr)
			break;
	}

	if (!rdev || rdev->desc_nr != nr) {
		pr_warn("%s: %d Could not find rdev with nr %d\n", __func__, __LINE__, nr);
		return;
	}

	err = read_rdev(mddev, rdev);
	if (err < 0)
		return;

	check_sb_changes(mddev, rdev);

	/* Read all rdev's to update recovery_offset */
	rdev_for_each_rcu(rdev, mddev)
		read_rdev(mddev, rdev);
}
EXPORT_SYMBOL(md_reload_sb);

#ifndef MODULE

/*
 * Searches all registered partitions for autorun RAID arrays
 * at boot time.
 */

static DEFINE_MUTEX(detected_devices_mutex);
static LIST_HEAD(all_detected_devices);
struct detected_devices_node {
	struct list_head list;
	dev_t dev;
};

void md_autodetect_dev(dev_t dev)
{
	struct detected_devices_node *node_detected_dev;

	node_detected_dev = kzalloc(sizeof(*node_detected_dev), GFP_KERNEL);
	if (node_detected_dev) {
		node_detected_dev->dev = dev;
		mutex_lock(&detected_devices_mutex);
		list_add_tail(&node_detected_dev->list, &all_detected_devices);
		mutex_unlock(&detected_devices_mutex);
	} else {
		printk(KERN_CRIT "md: md_autodetect_dev: kzalloc failed"
			", skipping dev(%d,%d)\n", MAJOR(dev), MINOR(dev));
	}
}

static void autostart_arrays(int part)
{
	struct md_rdev *rdev;
	struct detected_devices_node *node_detected_dev;
	dev_t dev;
	int i_scanned, i_passed;

	i_scanned = 0;
	i_passed = 0;

	printk(KERN_INFO "md: Autodetecting RAID arrays.\n");

	mutex_lock(&detected_devices_mutex);
	while (!list_empty(&all_detected_devices) && i_scanned < INT_MAX) {
		i_scanned++;
		node_detected_dev = list_entry(all_detected_devices.next,
					struct detected_devices_node, list);
		list_del(&node_detected_dev->list);
		dev = node_detected_dev->dev;
		kfree(node_detected_dev);
		mutex_unlock(&detected_devices_mutex);
		rdev = md_import_device(dev,0, 90);
		mutex_lock(&detected_devices_mutex);
		if (IS_ERR(rdev))
			continue;

		if (test_bit(Faulty, &rdev->flags))
			continue;

		set_bit(AutoDetected, &rdev->flags);
		list_add(&rdev->same_set, &pending_raid_disks);
		i_passed++;
	}
	mutex_unlock(&detected_devices_mutex);

	printk(KERN_INFO "md: Scanned %d and added %d devices.\n",
					i_scanned, i_passed);

	autorun_devices(part);
}

#endif /* !MODULE */

static __exit void md_exit(void)
{
	struct mddev *mddev;
	struct list_head *tmp;
	int delay = 1;

	blk_unregister_region(MKDEV(MD_MAJOR,0), 512);
	blk_unregister_region(MKDEV(mdp_major,0), 1U << MINORBITS);

	unregister_blkdev(MD_MAJOR,"md");
	unregister_blkdev(mdp_major, "mdp");
	unregister_reboot_notifier(&md_notifier);
	unregister_sysctl_table(raid_table_header);

	/* We cannot unload the modules while some process is
	 * waiting for us in select() or poll() - wake them up
	 */
	md_unloading = 1;
	while (waitqueue_active(&md_event_waiters)) {
		/* not safe to leave yet */
		wake_up(&md_event_waiters);
		msleep(delay);
		delay += delay;
	}
	remove_proc_entry("mdstat", NULL);

	for_each_mddev(mddev, tmp) {
		export_array(mddev);
		mddev->hold_active = 0;
	}
	destroy_workqueue(md_misc_wq);
	destroy_workqueue(md_wq);
}

subsys_initcall(md_init);
|
2005-04-17 05:20:36 +07:00
|
|
|
module_exit(md_exit)
|
|
|
|
|
[PATCH] md: allow md arrays to be started read-only (module parameter).
When an md array is started, the superblock will be written, and resync may
commence. This is not good if you want to be completely read-only as, for
example, when preparing to resume from a suspend-to-disk image.
So introduce a module parameter "start_ro" which can be set
to '1' at boot, at module load, or via
/sys/module/md_mod/parameters/start_ro
When this is set, new arrays get an 'auto-ro' mode, which disables all
internal io (superblock updates, resync, recovery) and is automatically
switched to 'rw' when the first write request arrives.
The array can be set to true 'ro' mode using 'mdadm -r' before the first
write request, or resync can be started without a write using 'mdadm -w'.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-09 12:39:36 +07:00
|
|
|
static int get_ro(char *buffer, struct kernel_param *kp)
|
|
|
|
{
|
|
|
|
return sprintf(buffer, "%d", start_readonly);
|
|
|
|
}
|
|
|
|
static int set_ro(const char *val, struct kernel_param *kp)
|
|
|
|
{
|
2015-05-16 18:02:38 +07:00
|
|
|
return kstrtouint(val, 10, (unsigned int *)&start_readonly);
|
2005-11-09 12:39:36 +07:00
|
|
|
}
|
|
|
|
|
2006-07-10 18:44:18 +07:00
|
|
|
module_param_call(start_ro, set_ro, get_ro, NULL, S_IRUSR|S_IWUSR);
|
|
|
|
module_param(start_dirty_degraded, int, S_IRUGO|S_IWUSR);
|
2009-01-09 04:31:10 +07:00
|
|
|
module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR);
|
2005-11-09 12:39:36 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
MODULE_LICENSE("GPL");
|
2009-12-14 08:49:58 +07:00
|
|
|
MODULE_DESCRIPTION("MD RAID framework");
|
2005-08-05 02:53:32 +07:00
|
|
|
MODULE_ALIAS("md");
|
2005-08-27 08:34:15 +07:00
|
|
|
MODULE_ALIAS_BLOCKDEV_MAJOR(MD_MAJOR);
|