/*
 * Copyright (c) 2000-2006 Silicon Graphics, Inc.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it would be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 */
#include "xfs.h"
#include <linux/stddef.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/init.h>
#include <linux/vmalloc.h>
#include <linux/bio.h>
#include <linux/sysctl.h>
#include <linux/proc_fs.h>
#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/blkdev.h>
#include <linux/hash.h>
#include <linux/kthread.h>
#include <linux/migrate.h>
#include <linux/backing-dev.h>
#include <linux/freezer.h>

#include "xfs_format.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_sb.h"
#include "xfs_mount.h"
#include "xfs_trace.h"
#include "xfs_log.h"

static kmem_zone_t *xfs_buf_zone;

#ifdef XFS_BUF_LOCK_TRACKING
# define XB_SET_OWNER(bp)       ((bp)->b_last_holder = current->pid)
# define XB_CLEAR_OWNER(bp)     ((bp)->b_last_holder = -1)
# define XB_GET_OWNER(bp)       ((bp)->b_last_holder)
#else
# define XB_SET_OWNER(bp)       do { } while (0)
# define XB_CLEAR_OWNER(bp)     do { } while (0)
# define XB_GET_OWNER(bp)       do { } while (0)
#endif

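/*
 * Allocation mode for buffer pages: readahead allocations may fail quickly
 * (__GFP_NORETRY) because speculative reads are optional, while every other
 * buffer allocation uses GFP_NOFS so memory reclaim cannot recurse back into
 * the filesystem; __GFP_NOWARN suppresses allocation-failure warnings.
 */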
#define xb_to_gfp(flags) \
        ((((flags) & XBF_READ_AHEAD) ? __GFP_NORETRY : GFP_NOFS) | __GFP_NOWARN)


static inline int
xfs_buf_is_vmapped(
        struct xfs_buf          *bp)
{
        /*
         * Return true if the buffer is vmapped.
         *
         * b_addr is null if the buffer is not mapped, but the code is clever
         * enough to know it doesn't have to map a single page, so the check
         * has to be both for b_addr and bp->b_page_count > 1.
         */
        return bp->b_addr && bp->b_page_count > 1;
}

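/*
 * Length of the mapped region: all pages of the buffer less the offset of
 * the data within the first page.
 */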
static inline int
xfs_buf_vmap_len(
        struct xfs_buf          *bp)
{
        return (bp->b_page_count * PAGE_SIZE) - bp->b_offset;
}

/*
 * When we mark a buffer stale, we remove the buffer from the LRU and clear the
 * b_lru_ref count so that the buffer is freed immediately when the buffer
 * reference count falls to zero. If the buffer is already on the LRU, we need
 * to remove the reference that LRU holds on the buffer.
 *
 * This prevents build-up of stale buffers on the LRU.
 */
void
xfs_buf_stale(
        struct xfs_buf          *bp)
{
        ASSERT(xfs_buf_islocked(bp));

        bp->b_flags |= XBF_STALE;

        /*
         * Clear the delwri status so that a delwri queue walker will not
         * flush this buffer to disk now that it is stale. The delwri queue has
         * a reference to the buffer, so this is safe to do.
         */
        bp->b_flags &= ~_XBF_DELWRI_Q;

        spin_lock(&bp->b_lock);
        atomic_set(&bp->b_lru_ref, 0);
        if (!(bp->b_state & XFS_BSTATE_DISPOSE) &&
            (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru)))
                atomic_dec(&bp->b_hold);

        ASSERT(atomic_read(&bp->b_hold) >= 1);
        spin_unlock(&bp->b_lock);
}

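/*
 * Set up the b_maps array for the buffer.  The common single-map case uses
 * the map embedded in the buffer itself; compound (multi-extent) buffers
 * allocate a separate array.
 */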
static int
xfs_buf_get_maps(
        struct xfs_buf          *bp,
        int                     map_count)
{
        ASSERT(bp->b_maps == NULL);
        bp->b_map_count = map_count;

        if (map_count == 1) {
                bp->b_maps = &bp->__b_map;
                return 0;
        }

        bp->b_maps = kmem_zalloc(map_count * sizeof(struct xfs_buf_map),
                                KM_NOFS);
        if (!bp->b_maps)
                return -ENOMEM;
        return 0;
}

/*
 * Frees b_maps if it was allocated.
 */
static void
xfs_buf_free_maps(
        struct xfs_buf          *bp)
{
        if (bp->b_maps != &bp->__b_map) {
                kmem_free(bp->b_maps);
                bp->b_maps = NULL;
        }
}

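/*
 * Allocate and initialise a buffer structure for the given maps.  No page
 * memory is attached here; callers allocate or attach that separately
 * (e.g. via xfs_buf_allocate_memory() or xfs_buf_associate_memory()).
 */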
struct xfs_buf *
_xfs_buf_alloc(
        struct xfs_buftarg      *target,
        struct xfs_buf_map      *map,
        int                     nmaps,
        xfs_buf_flags_t         flags)
{
        struct xfs_buf          *bp;
        int                     error;
        int                     i;

        bp = kmem_zone_zalloc(xfs_buf_zone, KM_NOFS);
        if (unlikely(!bp))
                return NULL;

        /*
         * We don't want certain flags to appear in b_flags unless they are
         * specifically set by later operations on the buffer.
         */
        flags &= ~(XBF_UNMAPPED | XBF_TRYLOCK | XBF_ASYNC | XBF_READ_AHEAD);

        atomic_set(&bp->b_hold, 1);
        atomic_set(&bp->b_lru_ref, 1);
        init_completion(&bp->b_iowait);
        INIT_LIST_HEAD(&bp->b_lru);
        INIT_LIST_HEAD(&bp->b_list);
        RB_CLEAR_NODE(&bp->b_rbnode);
        sema_init(&bp->b_sema, 0); /* held, no waiters */
        spin_lock_init(&bp->b_lock);
        XB_SET_OWNER(bp);
        bp->b_target = target;
        bp->b_flags = flags;

        /*
         * Set length and io_length to the same value initially.
         * I/O routines should use io_length, which will be the same in
         * most cases but may be reset (e.g. XFS recovery).
         */
        error = xfs_buf_get_maps(bp, nmaps);
        if (error) {
                kmem_zone_free(xfs_buf_zone, bp);
                return NULL;
        }

        bp->b_bn = map[0].bm_bn;
        bp->b_length = 0;
        for (i = 0; i < nmaps; i++) {
                bp->b_maps[i].bm_bn = map[i].bm_bn;
                bp->b_maps[i].bm_len = map[i].bm_len;
                bp->b_length += map[i].bm_len;
        }
        bp->b_io_length = bp->b_length;

        atomic_set(&bp->b_pin_count, 0);
        init_waitqueue_head(&bp->b_waiters);

        XFS_STATS_INC(target->bt_mount, xb_create);
        trace_xfs_buf_init(bp, _RET_IP_);

        return bp;
}

/*
 * Allocate a page array capable of holding a specified number
 * of pages, and point the page buf at it.
 */
STATIC int
_xfs_buf_get_pages(
        xfs_buf_t               *bp,
        int                     page_count)
{
        /* Make sure that we have a page list */
        if (bp->b_pages == NULL) {
                bp->b_page_count = page_count;
                if (page_count <= XB_PAGES) {
                        bp->b_pages = bp->b_page_array;
                } else {
                        bp->b_pages = kmem_alloc(sizeof(struct page *) *
                                                 page_count, KM_NOFS);
                        if (bp->b_pages == NULL)
                                return -ENOMEM;
                }
                memset(bp->b_pages, 0, sizeof(struct page *) * page_count);
        }
        return 0;
}

/*
 * Frees b_pages if it was allocated.
 */
STATIC void
_xfs_buf_free_pages(
        xfs_buf_t               *bp)
{
        if (bp->b_pages != bp->b_page_array) {
                kmem_free(bp->b_pages);
                bp->b_pages = NULL;
        }
}

/*
 * Releases the specified buffer.
 *
 * The modification state of any associated pages is left unchanged.
 * The buffer must not be on any hash - use xfs_buf_rele instead for
 * hashed and refcounted buffers
 */
void
xfs_buf_free(
        xfs_buf_t               *bp)
{
        trace_xfs_buf_free(bp, _RET_IP_);

        ASSERT(list_empty(&bp->b_lru));

        if (bp->b_flags & _XBF_PAGES) {
                uint            i;

                if (xfs_buf_is_vmapped(bp))
                        vm_unmap_ram(bp->b_addr - bp->b_offset,
                                        bp->b_page_count);

                for (i = 0; i < bp->b_page_count; i++) {
                        struct page     *page = bp->b_pages[i];

                        __free_page(page);
                }
        } else if (bp->b_flags & _XBF_KMEM)
                kmem_free(bp->b_addr);
        _xfs_buf_free_pages(bp);
        xfs_buf_free_maps(bp);
        kmem_zone_free(xfs_buf_zone, bp);
}

/*
 * Allocates all the pages for buffer in question and builds its page list.
 */
STATIC int
xfs_buf_allocate_memory(
        xfs_buf_t               *bp,
        uint                    flags)
{
        size_t                  size;
        size_t                  nbytes, offset;
        gfp_t                   gfp_mask = xb_to_gfp(flags);
        unsigned short          page_count, i;
        xfs_off_t               start, end;
        int                     error;

        /*
         * for buffers that are contained within a single page, just allocate
         * the memory from the heap - there's no need for the complexity of
         * page arrays to keep allocation down to order 0.
         */
        size = BBTOB(bp->b_length);
        if (size < PAGE_SIZE) {
                bp->b_addr = kmem_alloc(size, KM_NOFS);
                if (!bp->b_addr) {
                        /* low memory - use alloc_page loop instead */
                        goto use_alloc_page;
                }

                if (((unsigned long)(bp->b_addr + size - 1) & PAGE_MASK) !=
                    ((unsigned long)bp->b_addr & PAGE_MASK)) {
                        /* b_addr spans two pages - use alloc_page instead */
                        kmem_free(bp->b_addr);
                        bp->b_addr = NULL;
                        goto use_alloc_page;
                }
                bp->b_offset = offset_in_page(bp->b_addr);
                bp->b_pages = bp->b_page_array;
                bp->b_pages[0] = virt_to_page(bp->b_addr);
                bp->b_page_count = 1;
                bp->b_flags |= _XBF_KMEM;
                return 0;
        }

use_alloc_page:
        start = BBTOB(bp->b_maps[0].bm_bn) >> PAGE_SHIFT;
        end = (BBTOB(bp->b_maps[0].bm_bn + bp->b_length) + PAGE_SIZE - 1)
                                                                >> PAGE_SHIFT;
        page_count = end - start;
        error = _xfs_buf_get_pages(bp, page_count);
        if (unlikely(error))
                return error;

        offset = bp->b_offset;
        bp->b_flags |= _XBF_PAGES;

        for (i = 0; i < bp->b_page_count; i++) {
                struct page     *page;
                uint            retries = 0;
retry:
                page = alloc_page(gfp_mask);
                if (unlikely(page == NULL)) {
                        if (flags & XBF_READ_AHEAD) {
                                bp->b_page_count = i;
                                error = -ENOMEM;
                                goto out_free_pages;
                        }

                        /*
                         * This could deadlock.
                         *
                         * But until all the XFS lowlevel code is revamped to
                         * handle buffer allocation failures we can't do much.
                         */
                        if (!(++retries % 100))
                                xfs_err(NULL,
                "%s(%u) possible memory allocation deadlock in %s (mode:0x%x)",
                                        current->comm, current->pid,
                                        __func__, gfp_mask);

                        XFS_STATS_INC(bp->b_target->bt_mount, xb_page_retries);
                        congestion_wait(BLK_RW_ASYNC, HZ/50);
                        goto retry;
                }

                XFS_STATS_INC(bp->b_target->bt_mount, xb_page_found);

                nbytes = min_t(size_t, size, PAGE_SIZE - offset);
                size -= nbytes;
                bp->b_pages[i] = page;
                offset = 0;
        }
        return 0;

out_free_pages:
        for (i = 0; i < bp->b_page_count; i++)
                __free_page(bp->b_pages[i]);
        return error;
}

/*
 * Map buffer into kernel address-space if necessary.
 */
STATIC int
_xfs_buf_map_pages(
        xfs_buf_t               *bp,
        uint                    flags)
{
        ASSERT(bp->b_flags & _XBF_PAGES);
        if (bp->b_page_count == 1) {
                /* A single page buffer is always mappable */
                bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset;
        } else if (flags & XBF_UNMAPPED) {
                bp->b_addr = NULL;
        } else {
                int retried = 0;
                unsigned noio_flag;

                /*
                 * vm_map_ram() will allocate auxiliary structures (e.g.
                 * pagetables) with GFP_KERNEL, yet we are likely to be under
                 * GFP_NOFS context here. Hence we need to tell memory reclaim
                 * that we are in such a context via PF_MEMALLOC_NOIO to
                 * prevent memory reclaim re-entering the filesystem here and
                 * potentially deadlocking.
                 */
                noio_flag = memalloc_noio_save();
                do {
                        bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
                                                -1, PAGE_KERNEL);
                        if (bp->b_addr)
                                break;
                        vm_unmap_aliases();
                } while (retried++ <= 1);
                memalloc_noio_restore(noio_flag);

                if (!bp->b_addr)
                        return -ENOMEM;
                bp->b_addr += bp->b_offset;
        }

        return 0;
}

/*
 * Finding and Reading Buffers
 */

/*
 * Looks up, and creates if absent, a lockable buffer for a given range of an
 * inode. The buffer is returned locked. No I/O is implied by this call.
 */
xfs_buf_t *
_xfs_buf_find(
        struct xfs_buftarg      *btp,
        struct xfs_buf_map      *map,
        int                     nmaps,
        xfs_buf_flags_t         flags,
        xfs_buf_t               *new_bp)
{
        struct xfs_perag        *pag;
        struct rb_node          **rbp;
        struct rb_node          *parent;
        xfs_buf_t               *bp;
        xfs_daddr_t             blkno = map[0].bm_bn;
        xfs_daddr_t             eofs;
        int                     numblks = 0;
        int                     i;

        for (i = 0; i < nmaps; i++)
                numblks += map[i].bm_len;

        /* Check for IOs smaller than the sector size / not sector aligned */
        ASSERT(!(BBTOB(numblks) < btp->bt_meta_sectorsize));
        ASSERT(!(BBTOB(blkno) & (xfs_off_t)btp->bt_meta_sectormask));

        /*
         * Corrupted block numbers can get through to here, unfortunately, so
         * we have to check that the buffer falls within the filesystem bounds.
         */
        eofs = XFS_FSB_TO_BB(btp->bt_mount, btp->bt_mount->m_sb.sb_dblocks);
        if (blkno < 0 || blkno >= eofs) {
                /*
                 * XXX (dgc): we should really be returning -EFSCORRUPTED here,
                 * but none of the higher level infrastructure supports
                 * returning a specific error on buffer lookup failures.
                 */
                xfs_alert(btp->bt_mount,
                          "%s: Block out of range: block 0x%llx, EOFS 0x%llx ",
                          __func__, blkno, eofs);
                WARN_ON(1);
                return NULL;
        }

        /* get tree root */
        pag = xfs_perag_get(btp->bt_mount,
                            xfs_daddr_to_agno(btp->bt_mount, blkno));

        /* walk tree */
        spin_lock(&pag->pag_buf_lock);
        rbp = &pag->pag_buf_tree.rb_node;
        parent = NULL;
        bp = NULL;
        while (*rbp) {
                parent = *rbp;
                bp = rb_entry(parent, struct xfs_buf, b_rbnode);

                if (blkno < bp->b_bn)
                        rbp = &(*rbp)->rb_left;
                else if (blkno > bp->b_bn)
                        rbp = &(*rbp)->rb_right;
                else {
                        /*
                         * found a block number match. If the range doesn't
                         * match, the only way this is allowed is if the buffer
                         * in the cache is stale and the transaction that made
                         * it stale has not yet committed. i.e. we are
                         * reallocating a busy extent. Skip this buffer and
                         * continue searching to the right for an exact match.
                         */
                        if (bp->b_length != numblks) {
                                ASSERT(bp->b_flags & XBF_STALE);
                                rbp = &(*rbp)->rb_right;
                                continue;
                        }
                        atomic_inc(&bp->b_hold);
                        goto found;
                }
        }

        /* No match found */
        if (new_bp) {
                rb_link_node(&new_bp->b_rbnode, parent, rbp);
                rb_insert_color(&new_bp->b_rbnode, &pag->pag_buf_tree);
                /* the buffer keeps the perag reference until it is freed */
                new_bp->b_pag = pag;
                spin_unlock(&pag->pag_buf_lock);
        } else {
                XFS_STATS_INC(btp->bt_mount, xb_miss_locked);
                spin_unlock(&pag->pag_buf_lock);
                xfs_perag_put(pag);
        }
        return new_bp;

found:
        spin_unlock(&pag->pag_buf_lock);
        xfs_perag_put(pag);

        if (!xfs_buf_trylock(bp)) {
                if (flags & XBF_TRYLOCK) {
                        xfs_buf_rele(bp);
                        XFS_STATS_INC(btp->bt_mount, xb_busy_locked);
                        return NULL;
                }
                xfs_buf_lock(bp);
                XFS_STATS_INC(btp->bt_mount, xb_get_locked_waited);
        }

        /*
         * if the buffer is stale, clear all the external state associated with
         * it. We need to keep flags such as how we allocated the buffer memory
         * intact here.
         */
        if (bp->b_flags & XBF_STALE) {
                ASSERT((bp->b_flags & _XBF_DELWRI_Q) == 0);
                ASSERT(bp->b_iodone == NULL);
                bp->b_flags &= _XBF_KMEM | _XBF_PAGES;
                bp->b_ops = NULL;
        }

        trace_xfs_buf_find(bp, flags, _RET_IP_);
        XFS_STATS_INC(btp->bt_mount, xb_get_locked);
        return bp;
}

/*
 * Assembles a buffer covering the specified range. The code is optimised for
 * cache hits, as metadata intensive workloads will see 3 orders of magnitude
 * more hits than misses.
 */
struct xfs_buf *
xfs_buf_get_map(
        struct xfs_buftarg      *target,
        struct xfs_buf_map      *map,
        int                     nmaps,
        xfs_buf_flags_t         flags)
{
        struct xfs_buf          *bp;
        struct xfs_buf          *new_bp;
        int                     error = 0;

        bp = _xfs_buf_find(target, map, nmaps, flags, NULL);
        if (likely(bp))
                goto found;

        new_bp = _xfs_buf_alloc(target, map, nmaps, flags);
        if (unlikely(!new_bp))
                return NULL;

        error = xfs_buf_allocate_memory(new_bp, flags);
        if (error) {
                xfs_buf_free(new_bp);
                return NULL;
        }

        bp = _xfs_buf_find(target, map, nmaps, flags, new_bp);
        if (!bp) {
                xfs_buf_free(new_bp);
                return NULL;
        }

        if (bp != new_bp)
                xfs_buf_free(new_bp);

found:
        if (!bp->b_addr) {
                error = _xfs_buf_map_pages(bp, flags);
                if (unlikely(error)) {
                        xfs_warn(target->bt_mount,
                                "%s: failed to map pages\n", __func__);
                        xfs_buf_relse(bp);
                        return NULL;
                }
        }

        /*
         * Clear b_error if this is a lookup from a caller that doesn't expect
         * valid data to be found in the buffer.
         */
        if (!(flags & XBF_READ))
                xfs_buf_ioerror(bp, 0);

        XFS_STATS_INC(target->bt_mount, xb_get);
        trace_xfs_buf_get(bp, flags, _RET_IP_);
        return bp;
}

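/*
 * Set up the read flags on an already-allocated buffer and submit the I/O,
 * waiting for completion unless the caller asked for an async read.
 */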
STATIC int
_xfs_buf_read(
        xfs_buf_t               *bp,
        xfs_buf_flags_t         flags)
{
        ASSERT(!(flags & XBF_WRITE));
        ASSERT(bp->b_maps[0].bm_bn != XFS_BUF_DADDR_NULL);

        bp->b_flags &= ~(XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD);
        bp->b_flags |= flags & (XBF_READ | XBF_ASYNC | XBF_READ_AHEAD);

        if (flags & XBF_ASYNC) {
                xfs_buf_submit(bp);
                return 0;
        }
        return xfs_buf_submit_wait(bp);
}

xfs_buf_t *
xfs_buf_read_map(
        struct xfs_buftarg      *target,
        struct xfs_buf_map      *map,
        int                     nmaps,
        xfs_buf_flags_t         flags,
        const struct xfs_buf_ops *ops)
{
        struct xfs_buf          *bp;

        flags |= XBF_READ;

        bp = xfs_buf_get_map(target, map, nmaps, flags);
        if (bp) {
                trace_xfs_buf_read(bp, flags, _RET_IP_);

                if (!(bp->b_flags & XBF_DONE)) {
                        XFS_STATS_INC(target->bt_mount, xb_get_read);
                        bp->b_ops = ops;
                        _xfs_buf_read(bp, flags);
                } else if (flags & XBF_ASYNC) {
                        /*
                         * Read ahead call which is already satisfied,
                         * drop the buffer
                         */
                        xfs_buf_relse(bp);
                        return NULL;
                } else {
                        /* We do not want read in the flags */
                        bp->b_flags &= ~XBF_READ;
                }
        }

        return bp;
}

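/*
 * Illustrative sketch: a caller reading a single contiguous metadata extent
 * typically builds one map with DEFINE_SINGLE_BUF_MAP() and passes it in,
 * along with the verifier ops for that buffer type, e.g.:
 *
 *      DEFINE_SINGLE_BUF_MAP(map, blkno, numblks);
 *      bp = xfs_buf_read_map(target, &map, 1, 0, ops);
 */
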
/*
 * If we are not low on memory then do the readahead in a deadlock
 * safe manner.
 */
void
xfs_buf_readahead_map(
        struct xfs_buftarg      *target,
        struct xfs_buf_map      *map,
        int                     nmaps,
        const struct xfs_buf_ops *ops)
{
        if (bdi_read_congested(target->bt_bdi))
                return;

        xfs_buf_read_map(target, map, nmaps,
                     XBF_TRYLOCK|XBF_ASYNC|XBF_READ_AHEAD, ops);
}

/*
 * Read an uncached buffer from disk. Allocates and returns a locked
 * buffer containing the disk contents or nothing.
 */
int
xfs_buf_read_uncached(
        struct xfs_buftarg      *target,
        xfs_daddr_t             daddr,
        size_t                  numblks,
        int                     flags,
        struct xfs_buf          **bpp,
        const struct xfs_buf_ops *ops)
{
        struct xfs_buf          *bp;

        *bpp = NULL;

        bp = xfs_buf_get_uncached(target, numblks, flags);
        if (!bp)
                return -ENOMEM;

        /* set up the buffer for a read IO */
        ASSERT(bp->b_map_count == 1);
        bp->b_bn = XFS_BUF_DADDR_NULL;  /* always null for uncached buffers */
        bp->b_maps[0].bm_bn = daddr;
        bp->b_flags |= XBF_READ;
        bp->b_ops = ops;

        xfs_buf_submit_wait(bp);
        if (bp->b_error) {
                int     error = bp->b_error;
                xfs_buf_relse(bp);
                return error;
        }

        *bpp = bp;
        return 0;
}

/*
 * Return a buffer allocated as an empty buffer and associated to external
 * memory via xfs_buf_associate_memory() back to its empty state.
 */
void
xfs_buf_set_empty(
        struct xfs_buf          *bp,
        size_t                  numblks)
{
        if (bp->b_pages)
                _xfs_buf_free_pages(bp);

        bp->b_pages = NULL;
        bp->b_page_count = 0;
        bp->b_addr = NULL;
        bp->b_length = numblks;
        bp->b_io_length = numblks;

        ASSERT(bp->b_map_count == 1);
        bp->b_bn = XFS_BUF_DADDR_NULL;
        bp->b_maps[0].bm_bn = XFS_BUF_DADDR_NULL;
        bp->b_maps[0].bm_len = bp->b_length;
}

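/*
 * Translate a kernel virtual address into its page, handling both directly
 * mapped (kmalloc) and vmalloc addresses.
 */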
static inline struct page *
mem_to_page(
        void                    *addr)
{
        if ((!is_vmalloc_addr(addr))) {
                return virt_to_page(addr);
        } else {
                return vmalloc_to_page(addr);
        }
}

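/*
 * Attach externally allocated memory to a buffer: build the page list that
 * backs the supplied region and set the buffer lengths accordingly.
 */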
int
xfs_buf_associate_memory(
        xfs_buf_t               *bp,
        void                    *mem,
        size_t                  len)
{
        int                     rval;
        int                     i = 0;
        unsigned long           pageaddr;
        unsigned long           offset;
        size_t                  buflen;
        int                     page_count;

        pageaddr = (unsigned long)mem & PAGE_MASK;
        offset = (unsigned long)mem - pageaddr;
        buflen = PAGE_ALIGN(len + offset);
        page_count = buflen >> PAGE_SHIFT;

        /* Free any previous set of page pointers */
        if (bp->b_pages)
                _xfs_buf_free_pages(bp);

        bp->b_pages = NULL;
        bp->b_addr = mem;

        rval = _xfs_buf_get_pages(bp, page_count);
        if (rval)
                return rval;

        bp->b_offset = offset;

        for (i = 0; i < bp->b_page_count; i++) {
                bp->b_pages[i] = mem_to_page((void *)pageaddr);
                pageaddr += PAGE_SIZE;
        }

        bp->b_io_length = BTOBB(len);
        bp->b_length = BTOBB(buflen);

        return 0;
}

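/*
 * Allocate a buffer that is not hashed into the per-AG buffer cache.  Pages
 * are allocated and mapped here; the caller holds the only reference.
 */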
xfs_buf_t *
xfs_buf_get_uncached(
        struct xfs_buftarg      *target,
        size_t                  numblks,
        int                     flags)
{
        unsigned long           page_count;
        int                     error, i;
        struct xfs_buf          *bp;
        DEFINE_SINGLE_BUF_MAP(map, XFS_BUF_DADDR_NULL, numblks);

        bp = _xfs_buf_alloc(target, &map, 1, 0);
        if (unlikely(bp == NULL))
                goto fail;

        page_count = PAGE_ALIGN(numblks << BBSHIFT) >> PAGE_SHIFT;
        error = _xfs_buf_get_pages(bp, page_count);
        if (error)
                goto fail_free_buf;

        for (i = 0; i < page_count; i++) {
                bp->b_pages[i] = alloc_page(xb_to_gfp(flags));
                if (!bp->b_pages[i])
                        goto fail_free_mem;
        }
        bp->b_flags |= _XBF_PAGES;

        error = _xfs_buf_map_pages(bp, 0);
        if (unlikely(error)) {
                xfs_warn(target->bt_mount,
                        "%s: failed to map pages", __func__);
                goto fail_free_mem;
        }

        trace_xfs_buf_get_uncached(bp, _RET_IP_);
        return bp;

 fail_free_mem:
        while (--i >= 0)
                __free_page(bp->b_pages[i]);
        _xfs_buf_free_pages(bp);
 fail_free_buf:
        xfs_buf_free_maps(bp);
        kmem_zone_free(xfs_buf_zone, bp);
 fail:
        return NULL;
}

/*
 * Increment reference count on buffer, to hold the buffer concurrently
 * with another thread which may release (free) the buffer asynchronously.
 * Must hold the buffer already to call this function.
 */
void
xfs_buf_hold(
        xfs_buf_t               *bp)
{
        trace_xfs_buf_hold(bp, _RET_IP_);
        atomic_inc(&bp->b_hold);
}

/*
 * Releases a hold on the specified buffer. If the hold count is 1,
 * calls xfs_buf_free.
 */
|
|
|
|
void
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_rele(
|
|
|
|
xfs_buf_t *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2010-09-24 16:59:04 +07:00
|
|
|
struct xfs_perag *pag = bp->b_pag;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-12-15 06:14:59 +07:00
|
|
|
trace_xfs_buf_rele(bp, _RET_IP_);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2010-09-24 16:59:04 +07:00
|
|
|
if (!pag) {
|
2010-12-02 12:30:55 +07:00
|
|
|
ASSERT(list_empty(&bp->b_lru));
|
2010-09-24 16:59:04 +07:00
|
|
|
ASSERT(RB_EMPTY_NODE(&bp->b_rbnode));
|
2006-02-01 08:14:52 +07:00
|
|
|
if (atomic_dec_and_test(&bp->b_hold))
|
|
|
|
xfs_buf_free(bp);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2010-09-24 16:59:04 +07:00
|
|
|
ASSERT(!RB_EMPTY_NODE(&bp->b_rbnode));
|
2010-12-02 12:30:55 +07:00
|
|
|
|
2008-08-13 12:42:10 +07:00
|
|
|
ASSERT(atomic_read(&bp->b_hold) > 0);
|
2010-09-24 16:59:04 +07:00
|
|
|
if (atomic_dec_and_lock(&bp->b_hold, &pag->pag_buf_lock)) {
|
2013-08-28 07:18:06 +07:00
|
|
|
spin_lock(&bp->b_lock);
|
|
|
|
if (!(bp->b_flags & XBF_STALE) && atomic_read(&bp->b_lru_ref)) {
|
|
|
|
/*
|
|
|
|
* If the buffer is added to the LRU take a new
|
|
|
|
* reference to the buffer for the LRU and clear the
|
|
|
|
* (now stale) dispose list state flag
|
|
|
|
*/
|
|
|
|
if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) {
|
|
|
|
bp->b_state &= ~XFS_BSTATE_DISPOSE;
|
|
|
|
atomic_inc(&bp->b_hold);
|
|
|
|
}
|
|
|
|
spin_unlock(&bp->b_lock);
|
2010-12-02 12:30:55 +07:00
|
|
|
spin_unlock(&pag->pag_buf_lock);
|
2005-04-17 05:20:36 +07:00
|
|
|
} else {
|
2013-08-28 07:18:06 +07:00
|
|
|
/*
|
|
|
|
* most of the time buffers will already be removed from
|
|
|
|
* the LRU, so optimise that case by checking for the
|
|
|
|
 * XFS_BSTATE_DISPOSE flag, which indicates that the last list the
|
|
|
|
* buffer was on was the disposal list
|
|
|
|
*/
|
|
|
|
if (!(bp->b_state & XFS_BSTATE_DISPOSE)) {
|
|
|
|
list_lru_del(&bp->b_target->bt_lru, &bp->b_lru);
|
|
|
|
} else {
|
|
|
|
ASSERT(list_empty(&bp->b_lru));
|
|
|
|
}
|
|
|
|
spin_unlock(&bp->b_lock);
|
|
|
|
|
xfs: on-stack delayed write buffer lists
Queue delwri buffers on a local on-stack list instead of a per-buftarg one,
and write back the buffers per-process instead of by waking up xfsbufd.
This is now easily doable given that we have very few places left that write
delwri buffers:
- log recovery:
Only done at mount time, and already forcing out the buffers
synchronously using xfs_flush_buftarg
- quotacheck:
Same story.
- dquot reclaim:
Writes out dirty dquots on the LRU under memory pressure. We might
want to look into doing more of this via xfsaild, but it's already
more optimal than the synchronous inode reclaim that writes each
buffer synchronously.
- xfsaild:
This is the main beneficiary of the change. By keeping a local list
of buffers to write we reduce latency of writing out buffers, and
more importantly we can remove all the delwri list promotions which
were hitting the buffer cache hard under sustained metadata loads.
The implementation is very straightforward - xfs_buf_delwri_queue now gets
a new list_head pointer that it adds the delwri buffers to, and all callers
need to eventually submit the list using xfs_buf_delwri_submit or
xfs_buf_delwri_submit_nowait. Buffers that already are on a delwri list are
skipped in xfs_buf_delwri_queue, assuming they already are on another delwri
list. The biggest change to pass down the buffer list was done to the AIL
pushing. Now that we operate on buffers the trylock, push and pushbuf log
item methods are merged into a single push routine, which tries to lock the
item, and if possible add the buffer that needs writeback to the buffer list.
This leads to much simpler code than the previous split but requires the
individual IOP_PUSH instances to unlock and reacquire the AIL around calls
to blocking routines.
Given that xfsailds now also handle writing out buffers, the conditions for
log forcing and the sleep times needed some small changes. The most
important one is that we consider an AIL busy as long as we still have buffers
to push, and the other one is that we do increment the pushed LSN for
buffers that are under flushing at this moment, but still count them towards
the stuck items for restart purposes. Without this we could hammer on stuck
items without ever forcing the log and not make progress under heavy random
delete workloads on fast flash storage devices.
[ Dave Chinner:
- rebase on previous patches.
- improved comments for XBF_DELWRI_Q handling
- fix XBF_ASYNC handling in queue submission (test 106 failure)
- rename delwri submit function buffer list parameters for clarity
- xfs_efd_item_push() should return XFS_ITEM_PINNED ]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
2012-04-23 12:58:39 +07:00
|
|
|
ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
|
2010-09-24 16:59:04 +07:00
|
|
|
rb_erase(&bp->b_rbnode, &pag->pag_buf_tree);
|
|
|
|
spin_unlock(&pag->pag_buf_lock);
|
|
|
|
xfs_perag_put(pag);
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_free(bp);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
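/*
 * Illustrative sketch, not part of the original file: the hold/release
 * discipline implemented by xfs_buf_hold() and xfs_buf_rele() above.
 * The helper name and the callback type are hypothetical; the point is
 * that an extra hold keeps the buffer alive across a call which may
 * consume the caller's original reference.
 */
static void
xfs_example_pin_across_call(
	struct xfs_buf	*bp,
	void		(*use)(struct xfs_buf *))
{
	xfs_buf_hold(bp);	/* caller must already own a reference */
	use(bp);		/* callee may drop that original reference */
	xfs_buf_rele(bp);	/* drop the extra hold taken above */
}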
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2011-03-26 05:16:45 +07:00
|
|
|
* Lock a buffer object, if it is not already locked.
|
2010-11-30 11:16:16 +07:00
|
|
|
*
|
|
|
|
* If we come across a stale, pinned, locked buffer, we know that we are
|
|
|
|
* being asked to lock a buffer that has been reallocated. Because it is
|
|
|
|
* pinned, we know that the log has not been pushed to disk and hence it
|
|
|
|
* will still be locked. Rather than continuing to have trylock attempts
|
|
|
|
* fail until someone else pushes the log, push it ourselves before
|
|
|
|
* returning. This means that the xfsaild will not get stuck trying
|
|
|
|
* to push on stale inode buffers.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
int
|
2011-07-08 19:36:19 +07:00
|
|
|
xfs_buf_trylock(
|
|
|
|
struct xfs_buf *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
int locked;
|
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
locked = down_trylock(&bp->b_sema) == 0;
|
2009-12-15 06:14:59 +07:00
|
|
|
if (locked)
|
2006-01-11 11:39:08 +07:00
|
|
|
XB_SET_OWNER(bp);
|
2009-12-15 06:14:59 +07:00
|
|
|
|
2011-07-08 19:36:19 +07:00
|
|
|
trace_xfs_buf_trylock(bp, _RET_IP_);
|
|
|
|
return locked;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2011-03-26 05:16:45 +07:00
|
|
|
* Lock a buffer object.
|
xfs: Improve scalability of busy extent tracking
When we free a metadata extent, we record it in the per-AG busy
extent array so that it is not re-used before the freeing
transaction hits the disk. This array is fixed size, so when it
overflows we make further allocation transactions synchronous
because we cannot track more freed extents until those transactions
hit the disk and are completed. Under heavy mixed allocation and
freeing workloads with large log buffers, we can overflow this array
quite easily.
Further, the array is sparsely populated, which means that inserts
need to search for a free slot, and array searches often have to
search many more slots than are actually used to check all the
busy extents. Quite inefficient, really.
To enable this aspect of extent freeing to scale better, we need
a structure that can grow dynamically. While in other areas of
XFS we have used radix trees, the extents being freed are at random
locations on disk so are better suited to being indexed by an rbtree.
So, use a per-AG rbtree indexed by block number to track busy
extents. This incurs a memory allocation when marking an extent
busy, but should not occur too often in low memory situations. This
should scale to an arbitrary number of extents so should not be a
limitation for features such as in-memory aggregation of
transactions.
However, there are still situations where we can't avoid allocating
busy extents (such as allocation from the AGFL). To minimise the
overhead of such occurrences, we need to avoid doing a synchronous
log force while holding the AGF locked to ensure that the previous
transactions are safely on disk before we use the extent. We can do
this by marking the transaction doing the allocation as synchronous
rather than issuing a log force.
Because of the locking involved and the ordering of transactions,
the synchronous transaction provides the same guarantees as a
synchronous log force because it ensures that all the prior
transactions are already on disk when the synchronous transaction
hits the disk. i.e. it preserves the free->allocate order of the
extent correctly in recovery.
By doing this, we avoid holding the AGF locked while log writes are
in progress, hence reducing the length of time the lock is held and
therefore we increase the rate at which we can allocate and free
from the allocation group, thereby increasing overall throughput.
The only problem with this approach is that when a metadata buffer is
marked stale (e.g. a directory block is removed), then the buffer remains
pinned and locked until the log goes to disk. The issue here is that
if that stale buffer is reallocated in a subsequent transaction, the
attempt to lock that buffer in the transaction will hang waiting for
the log to go to disk to unlock and unpin the buffer. Hence if
someone tries to lock a pinned, stale, locked buffer we need to
push on the log to get it unlocked ASAP. Effectively we are trading
off a guaranteed log force for a much less common trigger for log
force to occur.
Ideally we should not reallocate busy extents. That is a much more
complex fix to the problem as it involves direct intervention in the
allocation btree searches in many places. This is left to a future
set of modifications.
Finally, now that we track busy extents in allocated memory, we
don't need the descriptors in the transaction structure to point to
them. We can replace the complex busy chunk infrastructure with a
simple linked list of busy extents. This allows us to remove a large
chunk of code, making the overall change a net reduction in code
size.
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
2010-05-21 09:07:08 +07:00
|
|
|
*
|
|
|
|
* If we come across a stale, pinned, locked buffer, we know that we
|
|
|
|
* are being asked to lock a buffer that has been reallocated. Because
|
|
|
|
* it is pinned, we know that the log has not been pushed to disk and
|
|
|
|
* hence it will still be locked. Rather than sleeping until someone
|
|
|
|
* else pushes the log, push it ourselves before trying to get the lock.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2006-01-11 11:39:08 +07:00
|
|
|
void
|
|
|
|
xfs_buf_lock(
|
2011-07-08 19:36:19 +07:00
|
|
|
struct xfs_buf *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2009-12-15 06:14:59 +07:00
|
|
|
trace_xfs_buf_lock(bp, _RET_IP_);
|
|
|
|
|
2010-05-21 09:07:08 +07:00
|
|
|
if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
|
2010-09-22 07:47:20 +07:00
|
|
|
xfs_log_force(bp->b_target->bt_mount, 0);
|
2006-01-11 11:39:08 +07:00
|
|
|
down(&bp->b_sema);
|
|
|
|
XB_SET_OWNER(bp);
|
2009-12-15 06:14:59 +07:00
|
|
|
|
|
|
|
trace_xfs_buf_lock_done(bp, _RET_IP_);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_unlock(
|
2011-07-08 19:36:19 +07:00
|
|
|
struct xfs_buf *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2006-01-11 11:39:08 +07:00
|
|
|
XB_CLEAR_OWNER(bp);
|
|
|
|
up(&bp->b_sema);
|
2009-12-15 06:14:59 +07:00
|
|
|
|
|
|
|
trace_xfs_buf_unlock(bp, _RET_IP_);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
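/*
 * Illustrative sketch, not part of the original file: the usual way the
 * locking primitives above are combined.  A non-blocking path tries
 * xfs_buf_trylock() and optionally falls back to the blocking
 * xfs_buf_lock(), which may push the log first as described in the
 * comments above.  A successful return is paired with xfs_buf_unlock()
 * once the caller is done.  The helper name is hypothetical.
 */
static bool
xfs_example_lock_buffer(
	struct xfs_buf	*bp,
	bool		can_block)
{
	if (xfs_buf_trylock(bp))
		return true;
	if (!can_block)
		return false;		/* caller retries later */

	xfs_buf_lock(bp);		/* sleeps; may force the log */
	return true;
}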
|
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
STATIC void
|
|
|
|
xfs_buf_wait_unpin(
|
|
|
|
xfs_buf_t *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
DECLARE_WAITQUEUE (wait, current);
|
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
if (atomic_read(&bp->b_pin_count) == 0)
|
2005-04-17 05:20:36 +07:00
|
|
|
return;
|
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
add_wait_queue(&bp->b_waiters, &wait);
|
2005-04-17 05:20:36 +07:00
|
|
|
for (;;) {
|
|
|
|
set_current_state(TASK_UNINTERRUPTIBLE);
|
2006-01-11 11:39:08 +07:00
|
|
|
if (atomic_read(&bp->b_pin_count) == 0)
|
2005-04-17 05:20:36 +07:00
|
|
|
break;
|
2011-03-10 14:52:07 +07:00
|
|
|
io_schedule();
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2006-01-11 11:39:08 +07:00
|
|
|
remove_wait_queue(&bp->b_waiters, &wait);
|
2005-04-17 05:20:36 +07:00
|
|
|
set_current_state(TASK_RUNNING);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Buffer Utility Routines
|
|
|
|
*/
|
|
|
|
|
2014-10-02 06:04:22 +07:00
|
|
|
void
|
|
|
|
xfs_buf_ioend(
|
|
|
|
struct xfs_buf *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2014-10-02 06:04:22 +07:00
|
|
|
bool read = bp->b_flags & XBF_READ;
|
|
|
|
|
|
|
|
trace_xfs_buf_iodone(bp, _RET_IP_);
|
2012-11-14 13:54:40 +07:00
|
|
|
|
|
|
|
bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
|
2013-02-27 09:25:54 +07:00
|
|
|
|
2014-10-02 06:04:31 +07:00
|
|
|
/*
|
|
|
|
* Pull in IO completion errors now. We are guaranteed to be running
|
|
|
|
* single threaded, so we don't need the lock to read b_io_error.
|
|
|
|
*/
|
|
|
|
if (!bp->b_error && bp->b_io_error)
|
|
|
|
xfs_buf_ioerror(bp, bp->b_io_error);
|
|
|
|
|
2014-10-02 06:04:22 +07:00
|
|
|
/* Only validate buffers that were read without errors */
|
|
|
|
if (read && !bp->b_error && bp->b_ops) {
|
|
|
|
ASSERT(!bp->b_iodone);
|
2012-11-14 13:54:40 +07:00
|
|
|
bp->b_ops->verify_read(bp);
|
2014-10-02 06:04:22 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
if (!bp->b_error)
|
|
|
|
bp->b_flags |= XBF_DONE;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2010-08-18 16:29:11 +07:00
|
|
|
if (bp->b_iodone)
|
2006-01-11 11:39:08 +07:00
|
|
|
(*(bp->b_iodone))(bp);
|
|
|
|
else if (bp->b_flags & XBF_ASYNC)
|
2005-04-17 05:20:36 +07:00
|
|
|
xfs_buf_relse(bp);
|
2014-10-02 06:05:14 +07:00
|
|
|
else
|
2012-11-14 13:54:40 +07:00
|
|
|
complete(&bp->b_iowait);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2014-10-02 06:04:22 +07:00
|
|
|
static void
|
|
|
|
xfs_buf_ioend_work(
|
|
|
|
struct work_struct *work)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2014-10-02 06:04:22 +07:00
|
|
|
struct xfs_buf *bp =
|
2014-12-04 05:43:17 +07:00
|
|
|
container_of(work, xfs_buf_t, b_ioend_work);
|
2009-12-15 06:14:59 +07:00
|
|
|
|
2014-10-02 06:04:22 +07:00
|
|
|
xfs_buf_ioend(bp);
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2016-01-04 12:10:42 +07:00
|
|
|
static void
|
2014-10-02 06:04:22 +07:00
|
|
|
xfs_buf_ioend_async(
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
2014-12-04 05:43:17 +07:00
|
|
|
INIT_WORK(&bp->b_ioend_work, xfs_buf_ioend_work);
|
|
|
|
queue_work(bp->b_ioend_wq, &bp->b_ioend_work);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_ioerror(
|
|
|
|
xfs_buf_t *bp,
|
|
|
|
int error)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2014-06-25 11:58:08 +07:00
|
|
|
ASSERT(error <= 0 && error >= -1000);
|
|
|
|
bp->b_error = error;
|
2009-12-15 06:14:59 +07:00
|
|
|
trace_xfs_buf_ioerror(bp, error, _RET_IP_);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2011-10-10 23:52:49 +07:00
|
|
|
void
|
|
|
|
xfs_buf_ioerror_alert(
|
|
|
|
struct xfs_buf *bp,
|
|
|
|
const char *func)
|
|
|
|
{
|
|
|
|
xfs_alert(bp->b_target->bt_mount,
|
2012-04-23 12:58:52 +07:00
|
|
|
"metadata I/O error: block 0x%llx (\"%s\") error %d numblks %d",
|
2014-06-25 11:58:08 +07:00
|
|
|
(__uint64_t)XFS_BUF_ADDR(bp), func, -bp->b_error, bp->b_length);
|
2011-10-10 23:52:49 +07:00
|
|
|
}
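/*
 * Illustrative sketch, not part of the original file: how the two error
 * helpers above are typically used together when a caller detects a
 * failure itself.  Error values are negative, as asserted in
 * xfs_buf_ioerror(); the helper name is hypothetical.
 */
static void
xfs_example_fail_buffer(
	struct xfs_buf	*bp,
	int		error)
{
	xfs_buf_ioerror(bp, error);		/* e.g. -EIO */
	xfs_buf_ioerror_alert(bp, __func__);	/* log block number and error */
}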
|
|
|
|
|
2012-07-13 13:24:10 +07:00
|
|
|
int
|
|
|
|
xfs_bwrite(
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
|
|
|
int error;
|
|
|
|
|
|
|
|
ASSERT(xfs_buf_islocked(bp));
|
|
|
|
|
|
|
|
bp->b_flags |= XBF_WRITE;
|
2014-10-02 06:04:56 +07:00
|
|
|
bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q |
|
|
|
|
XBF_WRITE_FAIL | XBF_DONE);
|
2012-07-13 13:24:10 +07:00
|
|
|
|
2014-10-02 06:05:14 +07:00
|
|
|
error = xfs_buf_submit_wait(bp);
|
2012-07-13 13:24:10 +07:00
|
|
|
if (error) {
|
|
|
|
xfs_force_shutdown(bp->b_target->bt_mount,
|
|
|
|
SHUTDOWN_META_IO_ERROR);
|
|
|
|
}
|
|
|
|
return error;
|
|
|
|
}
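/*
 * Illustrative sketch, not part of the original file: a caller-side view
 * of xfs_bwrite().  The buffer must be locked on entry; the write is
 * synchronous and leaves the lock and the caller's reference in place,
 * so both are released here afterwards.  The helper name is
 * hypothetical.
 */
static int
xfs_example_write_buffer(
	struct xfs_buf	*bp)
{
	int		error;

	ASSERT(xfs_buf_islocked(bp));

	error = xfs_bwrite(bp);		/* 0 or a negative errno */
	xfs_buf_relse(bp);		/* unlock and drop our reference */
	return error;
}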
|
|
|
|
|
2016-05-18 07:56:41 +07:00
|
|
|
static void
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_bio_end_io(
|
2015-07-20 20:29:37 +07:00
|
|
|
struct bio *bio)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2016-05-18 07:56:41 +07:00
|
|
|
struct xfs_buf *bp = (struct xfs_buf *)bio->bi_private;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-11-12 18:09:46 +07:00
|
|
|
/*
|
|
|
|
* don't overwrite existing errors - otherwise we can lose errors on
|
|
|
|
* buffers that require multiple bios to complete.
|
|
|
|
*/
|
2016-05-18 07:56:41 +07:00
|
|
|
if (bio->bi_error)
|
|
|
|
cmpxchg(&bp->b_io_error, 0, bio->bi_error);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-11-12 18:09:46 +07:00
|
|
|
if (!bp->b_error && xfs_buf_is_vmapped(bp) && (bp->b_flags & XBF_READ))
|
2010-01-26 00:42:24 +07:00
|
|
|
invalidate_kernel_vmap_range(bp->b_addr, xfs_buf_vmap_len(bp));
|
|
|
|
|
2014-10-02 06:04:22 +07:00
|
|
|
if (atomic_dec_and_test(&bp->b_io_remaining) == 1)
|
|
|
|
xfs_buf_ioend_async(bp);
|
2005-04-17 05:20:36 +07:00
|
|
|
bio_put(bio);
|
|
|
|
}
|
|
|
|
|
2012-06-22 15:50:09 +07:00
|
|
|
static void
|
|
|
|
xfs_buf_ioapply_map(
|
|
|
|
struct xfs_buf *bp,
|
|
|
|
int map,
|
|
|
|
int *buf_offset,
|
|
|
|
int *count,
|
|
|
|
int rw)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2012-06-22 15:50:09 +07:00
|
|
|
int page_index;
|
|
|
|
int total_nr_pages = bp->b_page_count;
|
|
|
|
int nr_pages;
|
|
|
|
struct bio *bio;
|
|
|
|
sector_t sector = bp->b_maps[map].bm_bn;
|
|
|
|
int size;
|
|
|
|
int offset;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
total_nr_pages = bp->b_page_count;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-06-22 15:50:09 +07:00
|
|
|
/* skip the pages in the buffer before the start offset */
|
|
|
|
page_index = 0;
|
|
|
|
offset = *buf_offset;
|
|
|
|
while (offset >= PAGE_SIZE) {
|
|
|
|
page_index++;
|
|
|
|
offset -= PAGE_SIZE;
|
2005-11-02 06:26:59 +07:00
|
|
|
}
|
|
|
|
|
2012-06-22 15:50:09 +07:00
|
|
|
/*
|
|
|
|
* Limit the IO size to the length of the current vector, and update the
|
|
|
|
* remaining IO count for the next time around.
|
|
|
|
*/
|
|
|
|
size = min_t(int, BBTOB(bp->b_maps[map].bm_len), *count);
|
|
|
|
*count -= size;
|
|
|
|
*buf_offset += size;
|
2011-07-26 22:06:44 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
next_chunk:
|
2006-01-11 11:39:08 +07:00
|
|
|
atomic_inc(&bp->b_io_remaining);
|
2005-04-17 05:20:36 +07:00
|
|
|
nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
|
|
|
|
if (nr_pages > total_nr_pages)
|
|
|
|
nr_pages = total_nr_pages;
|
|
|
|
|
|
|
|
bio = bio_alloc(GFP_NOIO, nr_pages);
|
2006-01-11 11:39:08 +07:00
|
|
|
bio->bi_bdev = bp->b_target->bt_bdev;
|
2013-10-12 05:44:27 +07:00
|
|
|
bio->bi_iter.bi_sector = sector;
|
2006-01-11 11:39:08 +07:00
|
|
|
bio->bi_end_io = xfs_buf_bio_end_io;
|
|
|
|
bio->bi_private = bp;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2011-03-26 05:16:45 +07:00
|
|
|
|
2012-06-22 15:50:09 +07:00
|
|
|
for (; size && nr_pages; nr_pages--, page_index++) {
|
2011-03-26 05:16:45 +07:00
|
|
|
int rbytes, nbytes = PAGE_SIZE - offset;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (nbytes > size)
|
|
|
|
nbytes = size;
|
|
|
|
|
2012-06-22 15:50:09 +07:00
|
|
|
rbytes = bio_add_page(bio, bp->b_pages[page_index], nbytes,
|
|
|
|
offset);
|
2006-01-11 11:39:08 +07:00
|
|
|
if (rbytes < nbytes)
|
2005-04-17 05:20:36 +07:00
|
|
|
break;
|
|
|
|
|
|
|
|
offset = 0;
|
2012-04-23 12:58:52 +07:00
|
|
|
sector += BTOBB(nbytes);
|
2005-04-17 05:20:36 +07:00
|
|
|
size -= nbytes;
|
|
|
|
total_nr_pages--;
|
|
|
|
}
|
|
|
|
|
2013-10-12 05:44:27 +07:00
|
|
|
if (likely(bio->bi_iter.bi_size)) {
|
2010-01-26 00:42:24 +07:00
|
|
|
if (xfs_buf_is_vmapped(bp)) {
|
|
|
|
flush_kernel_vmap_range(bp->b_addr,
|
|
|
|
xfs_buf_vmap_len(bp));
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
submit_bio(rw, bio);
|
|
|
|
if (size)
|
|
|
|
goto next_chunk;
|
|
|
|
} else {
|
2012-11-12 18:09:46 +07:00
|
|
|
/*
|
|
|
|
* This is guaranteed not to be the last io reference count
|
2014-10-02 06:05:14 +07:00
|
|
|
* because the caller (xfs_buf_submit) holds a count itself.
|
2012-11-12 18:09:46 +07:00
|
|
|
*/
|
|
|
|
atomic_dec(&bp->b_io_remaining);
|
2014-06-25 11:58:08 +07:00
|
|
|
xfs_buf_ioerror(bp, -EIO);
|
2010-07-20 14:52:59 +07:00
|
|
|
bio_put(bio);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2012-06-22 15:50:09 +07:00
|
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
STATIC void
|
|
|
|
_xfs_buf_ioapply(
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
|
|
|
struct blk_plug plug;
|
|
|
|
int rw;
|
|
|
|
int offset;
|
|
|
|
int size;
|
|
|
|
int i;
|
|
|
|
|
2013-03-12 19:30:34 +07:00
|
|
|
/*
|
|
|
|
* Make sure we capture only current IO errors rather than stale errors
|
|
|
|
* left over from previous use of the buffer (e.g. failed readahead).
|
|
|
|
*/
|
|
|
|
bp->b_error = 0;
|
|
|
|
|
2014-12-04 05:43:17 +07:00
|
|
|
/*
|
|
|
|
* Initialize the I/O completion workqueue if we haven't yet or the
|
|
|
|
* submitter has not opted to specify a custom one.
|
|
|
|
*/
|
|
|
|
if (!bp->b_ioend_wq)
|
|
|
|
bp->b_ioend_wq = bp->b_target->bt_mount->m_buf_workqueue;
|
|
|
|
|
2012-06-22 15:50:09 +07:00
|
|
|
if (bp->b_flags & XBF_WRITE) {
|
|
|
|
if (bp->b_flags & XBF_SYNCIO)
|
|
|
|
rw = WRITE_SYNC;
|
|
|
|
else
|
|
|
|
rw = WRITE;
|
|
|
|
if (bp->b_flags & XBF_FUA)
|
|
|
|
rw |= REQ_FUA;
|
|
|
|
if (bp->b_flags & XBF_FLUSH)
|
|
|
|
rw |= REQ_FLUSH;
|
2012-11-14 13:54:40 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Run the write verifier callback function if it exists. If
|
|
|
|
* this function fails it will mark the buffer with an error and
|
|
|
|
* the IO should not be dispatched.
|
|
|
|
*/
|
|
|
|
if (bp->b_ops) {
|
|
|
|
bp->b_ops->verify_write(bp);
|
|
|
|
if (bp->b_error) {
|
|
|
|
xfs_force_shutdown(bp->b_target->bt_mount,
|
|
|
|
SHUTDOWN_CORRUPT_INCORE);
|
|
|
|
return;
|
|
|
|
}
|
2014-08-04 09:42:40 +07:00
|
|
|
} else if (bp->b_bn != XFS_BUF_DADDR_NULL) {
|
|
|
|
struct xfs_mount *mp = bp->b_target->bt_mount;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* non-crc filesystems don't attach verifiers during
|
|
|
|
* log recovery, so don't warn for such filesystems.
|
|
|
|
*/
|
|
|
|
if (xfs_sb_version_hascrc(&mp->m_sb)) {
|
|
|
|
xfs_warn(mp,
|
|
|
|
"%s: no ops on block 0x%llx/0x%x",
|
|
|
|
__func__, bp->b_bn, bp->b_length);
|
|
|
|
xfs_hex_dump(bp->b_addr, 64);
|
|
|
|
dump_stack();
|
|
|
|
}
|
2012-11-14 13:54:40 +07:00
|
|
|
}
|
2012-06-22 15:50:09 +07:00
|
|
|
} else if (bp->b_flags & XBF_READ_AHEAD) {
|
|
|
|
rw = READA;
|
|
|
|
} else {
|
|
|
|
rw = READ;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* we only use the buffer cache for meta-data */
|
|
|
|
rw |= REQ_META;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Walk all the vectors issuing IO on them. Set up the initial offset
|
|
|
|
* into the buffer and the desired IO size before we start -
|
|
|
|
 * xfs_buf_ioapply_map() will modify them appropriately for each
|
|
|
|
* subsequent call.
|
|
|
|
*/
|
|
|
|
offset = bp->b_offset;
|
|
|
|
size = BBTOB(bp->b_io_length);
|
|
|
|
blk_start_plug(&plug);
|
|
|
|
for (i = 0; i < bp->b_map_count; i++) {
|
|
|
|
xfs_buf_ioapply_map(bp, i, &offset, &size, rw);
|
|
|
|
if (bp->b_error)
|
|
|
|
break;
|
|
|
|
if (size <= 0)
|
|
|
|
break; /* all done */
|
|
|
|
}
|
|
|
|
blk_finish_plug(&plug);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2014-10-02 06:05:14 +07:00
|
|
|
/*
|
|
|
|
* Asynchronous IO submission path. This transfers the buffer lock ownership and
|
|
|
|
* the current reference to the IO. It is not safe to reference the buffer after
|
|
|
|
* a call to this function unless the caller holds an additional reference
|
|
|
|
* itself.
|
|
|
|
*/
|
2012-04-23 12:58:46 +07:00
|
|
|
void
|
2014-10-02 06:05:14 +07:00
|
|
|
xfs_buf_submit(
|
|
|
|
struct xfs_buf *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2014-10-02 06:05:14 +07:00
|
|
|
trace_xfs_buf_submit(bp, _RET_IP_);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
|
2014-10-02 06:05:14 +07:00
|
|
|
ASSERT(bp->b_flags & XBF_ASYNC);
|
|
|
|
|
|
|
|
/* on shutdown we stale and complete the buffer immediately */
|
|
|
|
if (XFS_FORCED_SHUTDOWN(bp->b_target->bt_mount)) {
|
|
|
|
xfs_buf_ioerror(bp, -EIO);
|
|
|
|
bp->b_flags &= ~XBF_DONE;
|
|
|
|
xfs_buf_stale(bp);
|
|
|
|
xfs_buf_ioend(bp);
|
|
|
|
return;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2011-08-23 15:28:03 +07:00
|
|
|
if (bp->b_flags & XBF_WRITE)
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_wait_unpin(bp);
|
2014-10-02 06:04:11 +07:00
|
|
|
|
2014-10-02 06:04:31 +07:00
|
|
|
/* clear the internal error state to avoid spurious errors */
|
|
|
|
bp->b_io_error = 0;
|
|
|
|
|
2014-10-02 06:04:11 +07:00
|
|
|
/*
|
2014-10-02 06:05:14 +07:00
|
|
|
* The caller's reference is released during I/O completion.
|
|
|
|
* This occurs some time after the last b_io_remaining reference is
|
|
|
|
 * released, so after we drop our IO reference we have to have some
|
|
|
|
* other reference to ensure the buffer doesn't go away from underneath
|
|
|
|
* us. Take a direct reference to ensure we have safe access to the
|
|
|
|
* buffer until we are finished with it.
|
2014-10-02 06:04:11 +07:00
|
|
|
*/
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_hold(bp);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2014-04-17 05:15:28 +07:00
|
|
|
/*
|
2014-10-02 06:04:11 +07:00
|
|
|
 * Set the count to 1 initially; this will stop an I/O completion
|
|
|
|
* callout which happens before we have started all the I/O from calling
|
|
|
|
* xfs_buf_ioend too early.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2006-01-11 11:39:08 +07:00
|
|
|
atomic_set(&bp->b_io_remaining, 1);
|
|
|
|
_xfs_buf_ioapply(bp);
|
2014-10-02 06:04:11 +07:00
|
|
|
|
2014-04-17 05:15:28 +07:00
|
|
|
/*
|
2014-10-02 06:05:14 +07:00
|
|
|
* If _xfs_buf_ioapply failed, we can get back here with only the IO
|
|
|
|
* reference we took above. If we drop it to zero, run completion so
|
|
|
|
* that we don't return to the caller with completion still pending.
|
2014-04-17 05:15:28 +07:00
|
|
|
*/
|
2014-10-02 06:04:22 +07:00
|
|
|
if (atomic_dec_and_test(&bp->b_io_remaining) == 1) {
|
2014-10-02 06:05:14 +07:00
|
|
|
if (bp->b_error)
|
2014-10-02 06:04:22 +07:00
|
|
|
xfs_buf_ioend(bp);
|
|
|
|
else
|
|
|
|
xfs_buf_ioend_async(bp);
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_rele(bp);
|
2014-10-02 06:05:14 +07:00
|
|
|
/* Note: it is not safe to reference bp now we've dropped our ref */
|
2005-04-17 05:20:36 +07:00
|
|
|
}
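/*
 * Illustrative sketch, not part of the original file: an asynchronous
 * write through xfs_buf_submit().  As the comment above explains,
 * submission hands both the buffer lock and the current reference to
 * the IO, so the buffer must not be touched afterwards unless the
 * caller holds an extra reference of its own.  The helper name is
 * hypothetical.
 */
static void
xfs_example_write_async(
	struct xfs_buf	*bp)
{
	ASSERT(xfs_buf_islocked(bp));

	bp->b_flags &= ~XBF_READ;
	bp->b_flags |= XBF_WRITE | XBF_ASYNC;
	xfs_buf_submit(bp);
	/* bp may already have completed and been freed at this point */
}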
|
|
|
|
|
|
|
|
/*
|
2014-10-02 06:05:14 +07:00
|
|
|
* Synchronous buffer IO submission path, read or write.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
int
|
2014-10-02 06:05:14 +07:00
|
|
|
xfs_buf_submit_wait(
|
|
|
|
struct xfs_buf *bp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2014-10-02 06:05:14 +07:00
|
|
|
int error;
|
2009-12-15 06:14:59 +07:00
|
|
|
|
2014-10-02 06:05:14 +07:00
|
|
|
trace_xfs_buf_submit_wait(bp, _RET_IP_);
|
|
|
|
|
|
|
|
ASSERT(!(bp->b_flags & (_XBF_DELWRI_Q | XBF_ASYNC)));
|
2009-12-15 06:14:59 +07:00
|
|
|
|
2014-10-02 06:05:14 +07:00
|
|
|
if (XFS_FORCED_SHUTDOWN(bp->b_target->bt_mount)) {
|
|
|
|
xfs_buf_ioerror(bp, -EIO);
|
|
|
|
xfs_buf_stale(bp);
|
|
|
|
bp->b_flags &= ~XBF_DONE;
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (bp->b_flags & XBF_WRITE)
|
|
|
|
xfs_buf_wait_unpin(bp);
|
|
|
|
|
|
|
|
/* clear the internal error state to avoid spurious errors */
|
|
|
|
bp->b_io_error = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
 * For synchronous IO, the IO does not inherit the submitter's reference
|
|
|
|
* count, nor the buffer lock. Hence we cannot release the reference we
|
|
|
|
* are about to take until we've waited for all IO completion to occur,
|
|
|
|
* including any xfs_buf_ioend_async() work that may be pending.
|
|
|
|
*/
|
|
|
|
xfs_buf_hold(bp);
|
|
|
|
|
|
|
|
/*
|
|
|
|
 * Set the count to 1 initially; this will stop an I/O completion
|
|
|
|
* callout which happens before we have started all the I/O from calling
|
|
|
|
* xfs_buf_ioend too early.
|
|
|
|
*/
|
|
|
|
atomic_set(&bp->b_io_remaining, 1);
|
|
|
|
_xfs_buf_ioapply(bp);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* make sure we run completion synchronously if it raced with us and is
|
|
|
|
* already complete.
|
|
|
|
*/
|
|
|
|
if (atomic_dec_and_test(&bp->b_io_remaining) == 1)
|
|
|
|
xfs_buf_ioend(bp);
|
2009-12-15 06:14:59 +07:00
|
|
|
|
2014-10-02 06:05:14 +07:00
|
|
|
/* wait for completion before gathering the error from the buffer */
|
|
|
|
trace_xfs_buf_iowait(bp, _RET_IP_);
|
|
|
|
wait_for_completion(&bp->b_iowait);
|
2009-12-15 06:14:59 +07:00
|
|
|
trace_xfs_buf_iowait_done(bp, _RET_IP_);
|
2014-10-02 06:05:14 +07:00
|
|
|
error = bp->b_error;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* all done now, we can release the hold that keeps the buffer
|
|
|
|
* referenced for the entire IO.
|
|
|
|
*/
|
|
|
|
xfs_buf_rele(bp);
|
|
|
|
return error;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
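/*
 * Illustrative sketch, not part of the original file: the on-stack
 * delayed write pattern built on top of the submission paths above (see
 * the "on-stack delayed write buffer lists" change description quoted
 * earlier in this file).  It assumes the xfs_buf_delwri_queue() and
 * xfs_buf_delwri_submit() prototypes used by this kernel, and that each
 * buffer is locked and held by the caller when it is queued.
 */
static int
xfs_example_delwri_flush(
	struct xfs_buf	*bp1,
	struct xfs_buf	*bp2)
{
	LIST_HEAD(buffer_list);

	xfs_buf_delwri_queue(bp1, &buffer_list);
	xfs_buf_delwri_queue(bp2, &buffer_list);

	/* the queue holds its own references, so drop ours */
	xfs_buf_relse(bp1);
	xfs_buf_relse(bp2);

	/* write out everything queued and wait for the IO to complete */
	return xfs_buf_delwri_submit(&buffer_list);
}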
|
|
|
|
|
2015-06-22 06:44:29 +07:00
|
|
|
void *
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_offset(
|
2015-06-22 06:44:29 +07:00
|
|
|
struct xfs_buf *bp,
|
2005-04-17 05:20:36 +07:00
|
|
|
size_t offset)
|
|
|
|
{
|
|
|
|
struct page *page;
|
|
|
|
|
2012-04-23 12:59:07 +07:00
|
|
|
if (bp->b_addr)
|
2011-07-23 06:40:15 +07:00
|
|
|
return bp->b_addr + offset;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
offset += bp->b_offset;
|
2011-03-26 05:16:45 +07:00
|
|
|
page = bp->b_pages[offset >> PAGE_SHIFT];
|
2015-06-22 06:44:29 +07:00
|
|
|
return page_address(page) + (offset & (PAGE_SIZE-1));
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Move data into or out of a buffer.
|
|
|
|
*/
|
|
|
|
void
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_iomove(
|
|
|
|
xfs_buf_t *bp, /* buffer to process */
|
2005-04-17 05:20:36 +07:00
|
|
|
size_t boff, /* starting buffer offset */
|
|
|
|
size_t bsize, /* length to copy */
|
2010-01-20 06:47:39 +07:00
|
|
|
void *data, /* data address */
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_rw_t mode) /* read/write/zero flag */
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2012-04-23 12:58:53 +07:00
|
|
|
size_t bend;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
bend = boff + bsize;
|
|
|
|
while (boff < bend) {
|
2012-04-23 12:58:53 +07:00
|
|
|
struct page *page;
|
|
|
|
int page_index, page_offset, csize;
|
|
|
|
|
|
|
|
page_index = (boff + bp->b_offset) >> PAGE_SHIFT;
|
|
|
|
page_offset = (boff + bp->b_offset) & ~PAGE_MASK;
|
|
|
|
page = bp->b_pages[page_index];
|
|
|
|
csize = min_t(size_t, PAGE_SIZE - page_offset,
|
|
|
|
BBTOB(bp->b_io_length) - boff);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:53 +07:00
|
|
|
ASSERT((csize + page_offset) <= PAGE_SIZE);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
switch (mode) {
|
2006-01-11 11:39:08 +07:00
|
|
|
case XBRW_ZERO:
|
2012-04-23 12:58:53 +07:00
|
|
|
memset(page_address(page) + page_offset, 0, csize);
|
2005-04-17 05:20:36 +07:00
|
|
|
break;
|
2006-01-11 11:39:08 +07:00
|
|
|
case XBRW_READ:
|
2012-04-23 12:58:53 +07:00
|
|
|
memcpy(data, page_address(page) + page_offset, csize);
|
2005-04-17 05:20:36 +07:00
|
|
|
break;
|
2006-01-11 11:39:08 +07:00
|
|
|
case XBRW_WRITE:
|
2012-04-23 12:58:53 +07:00
|
|
|
memcpy(page_address(page) + page_offset, data, csize);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
boff += csize;
|
|
|
|
data += csize;
|
|
|
|
}
|
|
|
|
}
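/*
 * Illustrative sketch, not part of the original file: using
 * xfs_buf_iomove() to stamp a caller supplied header into a buffer and
 * zero the remainder of the IO range.  Passing a NULL data pointer for
 * XBRW_ZERO follows the existing zeroing callers; the helper name is
 * hypothetical.
 */
static void
xfs_example_stamp_buffer(
	struct xfs_buf	*bp,
	void		*hdr,
	size_t		hdrlen)
{
	xfs_buf_iomove(bp, 0, hdrlen, hdr, XBRW_WRITE);
	xfs_buf_iomove(bp, hdrlen, BBTOB(bp->b_io_length) - hdrlen,
		       NULL, XBRW_ZERO);
}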
|
|
|
|
|
|
|
|
/*
|
2006-01-11 11:39:08 +07:00
|
|
|
* Handling of buffer targets (buftargs).
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
2010-12-02 12:30:55 +07:00
|
|
|
* Wait for any bufs with callbacks that have been submitted but have not yet
|
|
|
|
* returned. These buffers will have an elevated hold count, so wait on those
|
|
|
|
* while freeing all the buffers only held by the LRU.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2013-08-28 07:18:05 +07:00
|
|
|
static enum lru_status
|
|
|
|
xfs_buftarg_wait_rele(
|
|
|
|
struct list_head *item,
|
2015-02-13 05:59:35 +07:00
|
|
|
struct list_lru_one *lru,
|
2013-08-28 07:18:05 +07:00
|
|
|
spinlock_t *lru_lock,
|
|
|
|
void *arg)
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2013-08-28 07:18:05 +07:00
|
|
|
struct xfs_buf *bp = container_of(item, struct xfs_buf, b_lru);
|
2013-08-28 07:18:06 +07:00
|
|
|
struct list_head *dispose = arg;
|
2010-12-02 12:30:55 +07:00
|
|
|
|
2013-08-28 07:18:05 +07:00
|
|
|
if (atomic_read(&bp->b_hold) > 1) {
|
2013-08-28 07:18:06 +07:00
|
|
|
/* need to wait, so skip it this pass */
|
2013-08-28 07:18:05 +07:00
|
|
|
trace_xfs_buf_wait_buftarg(bp, _RET_IP_);
|
2013-08-28 07:18:06 +07:00
|
|
|
return LRU_SKIP;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2013-08-28 07:18:06 +07:00
|
|
|
if (!spin_trylock(&bp->b_lock))
|
|
|
|
return LRU_SKIP;
|
2013-08-28 07:18:05 +07:00
|
|
|
|
2013-08-28 07:18:06 +07:00
|
|
|
/*
|
|
|
|
* clear the LRU reference count so the buffer doesn't get
|
|
|
|
* ignored in xfs_buf_rele().
|
|
|
|
*/
|
|
|
|
atomic_set(&bp->b_lru_ref, 0);
|
|
|
|
bp->b_state |= XFS_BSTATE_DISPOSE;
|
2015-02-13 05:59:35 +07:00
|
|
|
list_lru_isolate_move(lru, item, dispose);
|
2013-08-28 07:18:06 +07:00
|
|
|
spin_unlock(&bp->b_lock);
|
|
|
|
return LRU_REMOVED;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2013-08-28 07:18:05 +07:00
|
|
|
void
|
|
|
|
xfs_wait_buftarg(
|
|
|
|
struct xfs_buftarg *btp)
|
|
|
|
{
|
2013-08-28 07:18:06 +07:00
|
|
|
LIST_HEAD(dispose);
|
|
|
|
int loop = 0;
|
|
|
|
|
xfs: log mount failures don't wait for buffers to be released
Recently I've been seeing xfs/051 fail on 1k block size filesystems.
Trying to trace the events during the test led to the problem going
away, indicating that it was a race condition that led to this
ASSERT failure:
XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 156
.....
[<ffffffff814e1257>] xfs_free_perag+0x87/0xb0
[<ffffffff814e21b9>] xfs_mountfs+0x4d9/0x900
[<ffffffff814e5dff>] xfs_fs_fill_super+0x3bf/0x4d0
[<ffffffff811d8800>] mount_bdev+0x180/0x1b0
[<ffffffff814e3ff5>] xfs_fs_mount+0x15/0x20
[<ffffffff811d90a8>] mount_fs+0x38/0x170
[<ffffffff811f4347>] vfs_kern_mount+0x67/0x120
[<ffffffff811f7018>] do_mount+0x218/0xd60
[<ffffffff811f7e5b>] SyS_mount+0x8b/0xd0
When I finally caught it with tracing enabled, I saw that AG 2 had
an elevated reference count and a buffer was responsible for it. I
tracked down the specific buffer, and found that it was missing the
final reference count release that would put it back on the LRU and
hence be found by xfs_wait_buftarg() calls in the log mount failure
handling.
The last four traces for the buffer before the assert were (trimmed
for relevance)
kworker/0:1-5259 xfs_buf_iodone: hold 2 lock 0 flags ASYNC
kworker/0:1-5259 xfs_buf_ioerror: hold 2 lock 0 error -5
mount-7163 xfs_buf_lock_done: hold 2 lock 0 flags ASYNC
mount-7163 xfs_buf_unlock: hold 2 lock 1 flags ASYNC
This is an async write that is completing, so there's nobody waiting
for it directly. Hence we call xfs_buf_relse() once all the
processing is complete. That does:
static inline void xfs_buf_relse(xfs_buf_t *bp)
{
xfs_buf_unlock(bp);
xfs_buf_rele(bp);
}
Now, it's clear that mount is waiting on the buffer lock, and that
it has been released by xfs_buf_relse() and gained by mount. This is
expected, because at this point the mount process is in
xfs_buf_delwri_submit() waiting for all the IO it submitted to
complete.
The mount process, however, is waiting on the lock for the buffer
because it is in xfs_buf_delwri_submit(). This waits for IO
completion, but it doesn't wait for the buffer reference owned by
the IO to go away. The mount process collects all the completions,
fails the log recovery, and the higher level code then calls
xfs_wait_buftarg() to free all the remaining buffers in the
filesystem.
The issue is that on unlocking the buffer, the scheduler has decided
that the mount process has higher priority than the kworker
thread that is running the IO completion, and so immediately
switched contexts to the mount process from the semaphore unlock
code, hence preventing the kworker thread from finishing the IO
completion and releasing the IO reference to the buffer.
Hence by the time that xfs_wait_buftarg() is run, the buffer still
has an active reference and so isn't on the LRU list that the
function walks to free the remaining buffers. Hence we miss that
buffer and continue onwards to tear down the mount structures,
at which time we find a stray reference count on the perag
structure. On a non-debug kernel, this will be ignored and the
structure torn down and freed. Hence when the kworker thread is then
rescheduled and the buffer released and freed, it will access a
freed perag structure.
The problem here is that when the log mount fails, we still need to
quiesce the log to ensure that the IO workqueues have returned to
idle before we run xfs_wait_buftarg(). By synchronising the
workqueues, we ensure that all IO completions are fully processed,
not just to the point where buffers have been unlocked. This ensures
we don't end up in the situation above.
cc: <stable@vger.kernel.org> # 3.18
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2016-01-19 04:28:10 +07:00
|
|
|
/*
|
|
|
|
* We need to flush the buffer workqueue to ensure that all IO
|
|
|
|
* completion processing is 100% done. Just waiting on buffer locks is
|
|
|
|
* not sufficient for async IO as the reference count held over IO is
|
|
|
|
* not released until after the buffer lock is dropped. Hence we need to
|
|
|
|
* ensure here that all reference counts have been dropped before we
|
|
|
|
* start walking the LRU list.
|
|
|
|
*/
|
|
|
|
drain_workqueue(btp->bt_mount->m_buf_workqueue);
|
|
|
|
|
2013-08-28 07:18:06 +07:00
|
|
|
/* loop until there is nothing left on the lru list. */
|
|
|
|
while (list_lru_count(&btp->bt_lru)) {
|
2013-08-28 07:18:05 +07:00
|
|
|
list_lru_walk(&btp->bt_lru, xfs_buftarg_wait_rele,
|
2013-08-28 07:18:06 +07:00
|
|
|
&dispose, LONG_MAX);
|
|
|
|
|
|
|
|
while (!list_empty(&dispose)) {
|
|
|
|
struct xfs_buf *bp;
|
|
|
|
bp = list_first_entry(&dispose, struct xfs_buf, b_lru);
|
|
|
|
list_del_init(&bp->b_lru);
|
xfs: abort metadata writeback on permanent errors
If we are doing async writeback of metadata, we can get write errors
but have nobody to report them to. At the moment, we simply attempt
to reissue the write from io completion in the hope that it's a
transient error.
When it's not a transient error, the buffer is stuck forever in
this loop, and we cannot break out of it. Eventually, unmount will
hang because the AIL cannot be emptied and everything goes downhill
from there.
To solve this problem, only retry the write IO once before aborting
it. We don't throw the buffer away because some transient errors can
last minutes (e.g. FC path failover) or even hours (thin
provisioned devices that have run out of backing space) before they
go away. Hence we really want to keep trying until we can't try any
more.
Because the buffer was not cleaned, however, it does not get removed
from the AIL and hence the next pass across the AIL will start IO on
it again. As such, we still get the "retry forever" semantics that
we currently have, but we allow other access to the buffer in the
mean time. Meanwhile the filesystem can continue to modify the
buffer and relog it, so the IO errors won't hang the log or the
filesystem.
Now when we are pushing the AIL, we can see all these "permanent IO
error" buffers and we can issue a warning about failures before we
retry the IO. We can also catch these buffers when unmounting and
issue a corruption warning, too.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
2013-12-12 12:34:38 +07:00
|
|
|
if (bp->b_flags & XBF_WRITE_FAIL) {
|
|
|
|
xfs_alert(btp->bt_mount,
|
2015-07-29 08:52:04 +07:00
|
|
|
"Corruption Alert: Buffer at block 0x%llx had permanent write failures!",
|
2013-12-12 12:34:38 +07:00
|
|
|
(long long)bp->b_bn);
|
2015-07-29 08:52:04 +07:00
|
|
|
xfs_alert(btp->bt_mount,
|
|
|
|
"Please run xfs_repair to determine the extent of the problem.");
|
2013-12-12 12:34:38 +07:00
|
|
|
}
|
2013-08-28 07:18:06 +07:00
|
|
|
xfs_buf_rele(bp);
|
|
|
|
}
|
|
|
|
if (loop++ != 0)
|
|
|
|
delay(100);
|
|
|
|
}
|
2013-08-28 07:18:05 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static enum lru_status
|
|
|
|
xfs_buftarg_isolate(
|
|
|
|
struct list_head *item,
|
2015-02-13 05:59:35 +07:00
|
|
|
struct list_lru_one *lru,
|
2013-08-28 07:18:05 +07:00
|
|
|
spinlock_t *lru_lock,
|
|
|
|
void *arg)
|
|
|
|
{
|
|
|
|
struct xfs_buf *bp = container_of(item, struct xfs_buf, b_lru);
|
|
|
|
struct list_head *dispose = arg;
|
|
|
|
|
2013-08-28 07:18:06 +07:00
|
|
|
/*
|
|
|
|
* we are inverting the lru lock/bp->b_lock here, so use a trylock.
|
|
|
|
* If we fail to get the lock, just skip it.
|
|
|
|
*/
|
|
|
|
if (!spin_trylock(&bp->b_lock))
|
|
|
|
return LRU_SKIP;
|
2013-08-28 07:18:05 +07:00
|
|
|
/*
|
|
|
|
* Decrement the b_lru_ref count unless the value is already
|
|
|
|
* zero. If the value is already zero, we need to reclaim the
|
|
|
|
* buffer, otherwise it gets another trip through the LRU.
|
|
|
|
*/
|
2013-08-28 07:18:06 +07:00
|
|
|
if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
|
|
|
|
spin_unlock(&bp->b_lock);
|
2013-08-28 07:18:05 +07:00
|
|
|
return LRU_ROTATE;
|
2013-08-28 07:18:06 +07:00
|
|
|
}
|
2013-08-28 07:18:05 +07:00
|
|
|
|
2013-08-28 07:18:06 +07:00
|
|
|
bp->b_state |= XFS_BSTATE_DISPOSE;
|
2015-02-13 05:59:35 +07:00
|
|
|
list_lru_isolate_move(lru, item, dispose);
|
2013-08-28 07:18:06 +07:00
|
|
|
spin_unlock(&bp->b_lock);
|
2013-08-28 07:18:05 +07:00
|
|
|
return LRU_REMOVED;
|
|
|
|
}
|
|
|
|
|
2013-08-28 07:18:06 +07:00
|
|
|
static unsigned long
|
2013-08-28 07:18:05 +07:00
|
|
|
xfs_buftarg_shrink_scan(
|
2010-11-30 13:27:57 +07:00
|
|
|
struct shrinker *shrink,
|
2011-05-25 07:12:27 +07:00
|
|
|
struct shrink_control *sc)
|
2006-01-11 11:37:58 +07:00
|
|
|
{
|
2010-11-30 13:27:57 +07:00
|
|
|
struct xfs_buftarg *btp = container_of(shrink,
|
|
|
|
struct xfs_buftarg, bt_shrinker);
|
2010-12-02 12:30:55 +07:00
|
|
|
LIST_HEAD(dispose);
|
2013-08-28 07:18:06 +07:00
|
|
|
unsigned long freed;
|
2010-12-02 12:30:55 +07:00
|
|
|
|
list_lru: introduce list_lru_shrink_{count,walk}
Kmem accounting of memcg is unusable now, because it lacks slab shrinker
support. That means when we hit the limit we will get ENOMEM w/o any
chance to recover. What we should do then is to call shrink_slab, which
would reclaim old inode/dentry caches from this cgroup. This is what
this patch set is intended to do.
Basically, it does two things. First, it introduces the notion of
per-memcg slab shrinker. A shrinker that wants to reclaim objects per
cgroup should mark itself as SHRINKER_MEMCG_AWARE. Then it will be
passed the memory cgroup to scan from in shrink_control->memcg. For
such shrinkers shrink_slab iterates over the whole cgroup subtree under
the target cgroup and calls the shrinker for each kmem-active memory
cgroup.
Secondly, this patch set makes the list_lru structure per-memcg. It's
done transparently to list_lru users - everything they have to do is to
tell list_lru_init that they want memcg-aware list_lru. Then the
list_lru will automatically distribute objects among per-memcg lists
based on which cgroup the object is accounted to. This way, to make FS
shrinkers (icache, dcache) memcg-aware we only need to make them use
memcg-aware list_lru, and this is what this patch set does.
As before, this patch set only enables per-memcg kmem reclaim when the
pressure goes from memory.limit, not from memory.kmem.limit. Handling
memory.kmem.limit is going to be tricky due to GFP_NOFS allocations, and
it is still unclear whether we will have this knob in the unified
hierarchy.
This patch (of 9):
NUMA aware slab shrinkers use the list_lru structure to distribute
objects coming from different NUMA nodes to different lists. Whenever
such a shrinker needs to count or scan objects from a particular node,
it issues commands like this:
count = list_lru_count_node(lru, sc->nid);
freed = list_lru_walk_node(lru, sc->nid, isolate_func,
isolate_arg, &sc->nr_to_scan);
where sc is an instance of the shrink_control structure passed to it
from vmscan.
To simplify this, let's add special list_lru functions to be used by
shrinkers, list_lru_shrink_count() and list_lru_shrink_walk(), which
consolidate the nid and nr_to_scan arguments in the shrink_control
structure.
This will also allow us to avoid patching shrinkers that use list_lru
when we make shrink_slab() per-memcg - all we will have to do is extend
the shrink_control structure to include the target memcg and make
list_lru_shrink_{count,walk} handle this appropriately.
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Suggested-by: Dave Chinner <david@fromorbit.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Greg Thelen <gthelen@google.com>
Cc: Glauber Costa <glommer@gmail.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-02-13 05:58:47 +07:00
|
|
|
freed = list_lru_shrink_walk(&btp->bt_lru, sc,
|
|
|
|
xfs_buftarg_isolate, &dispose);
|
2010-12-02 12:30:55 +07:00
|
|
|
|
|
|
|
while (!list_empty(&dispose)) {
|
2013-08-28 07:18:05 +07:00
|
|
|
struct xfs_buf *bp;
|
2010-12-02 12:30:55 +07:00
|
|
|
bp = list_first_entry(&dispose, struct xfs_buf, b_lru);
|
|
|
|
list_del_init(&bp->b_lru);
|
|
|
|
xfs_buf_rele(bp);
|
|
|
|
}
|
|
|
|
|
2013-08-28 07:18:05 +07:00
|
|
|
return freed;
|
|
|
|
}
|
|
|
|
|
2013-08-28 07:18:06 +07:00
|
|
|
static unsigned long
|
2013-08-28 07:18:05 +07:00
|
|
|
xfs_buftarg_shrink_count(
|
|
|
|
struct shrinker *shrink,
|
|
|
|
struct shrink_control *sc)
|
|
|
|
{
|
|
|
|
struct xfs_buftarg *btp = container_of(shrink,
|
|
|
|
struct xfs_buftarg, bt_shrinker);
|
2015-02-13 05:58:47 +07:00
|
|
|
return list_lru_shrink_count(&btp->bt_lru, sc);
|
2006-01-11 11:37:58 +07:00
|
|
|
}
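The two shrinker callbacks above lean on the new list_lru_shrink_{count,walk} helpers described in the commit message earlier. A minimal sketch of how such wrappers could be written, assuming the pre-existing list_lru_count_node()/list_lru_walk_node() signatures and the nid/nr_to_scan fields of struct shrink_control mentioned there (a sketch, not necessarily the exact upstream definitions):

#include <linux/list_lru.h>
#include <linux/shrinker.h>

/*
 * Sketch: consolidate the NUMA node id and the scan budget carried in
 * struct shrink_control so that individual shrinkers no longer have to
 * pass them explicitly.
 */
static inline unsigned long
list_lru_shrink_count(struct list_lru *lru, struct shrink_control *sc)
{
	return list_lru_count_node(lru, sc->nid);
}

static inline unsigned long
list_lru_shrink_walk(struct list_lru *lru, struct shrink_control *sc,
		     list_lru_walk_cb isolate, void *cb_arg)
{
	return list_lru_walk_node(lru, sc->nid, isolate, cb_arg,
				  &sc->nr_to_scan);
}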
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
void
|
|
|
|
xfs_free_buftarg(
|
2009-03-04 02:48:37 +07:00
|
|
|
struct xfs_mount *mp,
|
|
|
|
struct xfs_buftarg *btp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2010-11-30 13:27:57 +07:00
|
|
|
unregister_shrinker(&btp->bt_shrinker);
|
2013-08-28 07:18:18 +07:00
|
|
|
list_lru_destroy(&btp->bt_lru);
|
2010-11-30 13:27:57 +07:00
|
|
|
|
2009-03-04 02:48:37 +07:00
|
|
|
if (mp->m_flags & XFS_MOUNT_BARRIER)
|
|
|
|
xfs_blkdev_issue_flush(btp);
|
2006-01-11 11:37:58 +07:00
|
|
|
|
2008-05-19 13:31:57 +07:00
|
|
|
kmem_free(btp);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2013-11-14 03:53:45 +07:00
|
|
|
int
|
|
|
|
xfs_setsize_buftarg(
|
2005-04-17 05:20:36 +07:00
|
|
|
xfs_buftarg_t *btp,
|
2013-11-14 03:53:45 +07:00
|
|
|
unsigned int sectorsize)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
xfs: allow logical-sector sized O_DIRECT
Some time ago, mkfs.xfs started picking the storage physical
sector size as the default filesystem "sector size" in order
to avoid RMW costs incurred by doing IOs at logical sector
size alignments.
However, this means that for a filesystem made with e.g.
a 4k sector size on an "advanced format" 4k/512 disk,
512-byte direct IOs are no longer allowed. This means
that XFS has essentially turned this AF drive into a hard
4K device, from the filesystem on up.
XFS's mkfs-specified "sector size" is really just controlling
the minimum size & alignment of filesystem metadata.
There is no real need to tightly couple XFS's minimal
metadata size to the minimum allowed direct IO size;
XFS can continue doing metadata in optimal sizes, but
still allow smaller DIOs for apps which issue them,
for whatever reason.
This patch adds a new field to the xfs_buftarg, so that
we now track 2 sizes:
1) The metadata sector size, which is the minimum unit and
alignment of IO which will be performed by metadata operations.
2) The device logical sector size
The first is used internally by the file system for metadata
alignment and IOs.
The second is used for the minimum allowed direct IO alignment.
This has passed xfstests on filesystems made with 4k sectors,
including when run under the patch I sent to ignore
XFS_IOC_DIOINFO, and issue 512 DIOs anyway. I also directly
tested end of block behavior on preallocated, sparse, and
existing files when we do a 512 IO into a 4k file on a
4k-sector filesystem, to be sure there were no unexpected
behaviors.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
2014-01-22 05:46:23 +07:00
|
|
|
/* Set up metadata sector size info */
|
2014-01-22 05:45:52 +07:00
|
|
|
btp->bt_meta_sectorsize = sectorsize;
|
|
|
|
btp->bt_meta_sectormask = sectorsize - 1;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-01-11 11:39:08 +07:00
|
|
|
if (set_blocksize(btp->bt_bdev, sectorsize)) {
|
2011-03-07 06:00:35 +07:00
|
|
|
xfs_warn(btp->bt_mount,
|
2015-04-13 19:31:37 +07:00
|
|
|
"Cannot set_blocksize to %u on device %pg",
|
|
|
|
sectorsize, btp->bt_bdev);
|
2014-06-25 11:58:08 +07:00
|
|
|
return -EINVAL;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2014-01-22 05:46:23 +07:00
|
|
|
/* Set up device logical sector size mask */
|
|
|
|
btp->bt_logical_sectorsize = bdev_logical_block_size(btp->bt_bdev);
|
|
|
|
btp->bt_logical_sectormask = bdev_logical_block_size(btp->bt_bdev) - 1;
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
return 0;
|
|
|
|
}
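As a hedged illustration of why the metadata and logical sector sizes are tracked separately (per the "xfs: allow logical-sector sized O_DIRECT" message above), the logical-sector mask is what a direct IO alignment check needs; the helper below is hypothetical and only shows the intended use of bt_logical_sectormask:

/*
 * Illustrative sketch (hypothetical helper): direct IO only has to be
 * aligned to the device's logical sector size, not to the possibly
 * larger metadata sector size chosen at mkfs time.
 */
static inline bool
xfs_buftarg_dio_aligned(struct xfs_buftarg *btp, loff_t offset, size_t count)
{
	return ((offset | count) & btp->bt_logical_sectormask) == 0;
}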
|
|
|
|
|
|
|
|
/*
|
2013-11-14 03:53:45 +07:00
|
|
|
* When allocating the initial buffer target we have not yet
|
|
|
|
* read in the superblock, so we don't know what size sectors
|
|
|
|
* are being used at this early stage. Play safe.
|
2006-01-11 11:39:08 +07:00
|
|
|
*/
|
2005-04-17 05:20:36 +07:00
|
|
|
STATIC int
|
|
|
|
xfs_setsize_buftarg_early(
|
|
|
|
xfs_buftarg_t *btp,
|
|
|
|
struct block_device *bdev)
|
|
|
|
{
|
2014-04-14 16:00:29 +07:00
|
|
|
return xfs_setsize_buftarg(btp, bdev_logical_block_size(bdev));
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
xfs_buftarg_t *
|
|
|
|
xfs_alloc_buftarg(
|
2010-09-22 07:47:20 +07:00
|
|
|
struct xfs_mount *mp,
|
2014-04-14 16:01:00 +07:00
|
|
|
struct block_device *bdev)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
xfs_buftarg_t *btp;
|
|
|
|
|
2013-05-20 06:51:12 +07:00
|
|
|
btp = kmem_zalloc(sizeof(*btp), KM_SLEEP | KM_NOFS);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2010-09-22 07:47:20 +07:00
|
|
|
btp->bt_mount = mp;
|
2006-01-11 11:39:08 +07:00
|
|
|
btp->bt_dev = bdev->bd_dev;
|
|
|
|
btp->bt_bdev = bdev;
|
2011-03-26 05:16:45 +07:00
|
|
|
btp->bt_bdi = blk_get_backing_dev_info(bdev);
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
if (xfs_setsize_buftarg_early(btp, bdev))
|
|
|
|
goto error;
|
2013-08-28 07:18:18 +07:00
|
|
|
|
|
|
|
if (list_lru_init(&btp->bt_lru))
|
|
|
|
goto error;
|
|
|
|
|
2013-08-28 07:18:05 +07:00
|
|
|
btp->bt_shrinker.count_objects = xfs_buftarg_shrink_count;
|
|
|
|
btp->bt_shrinker.scan_objects = xfs_buftarg_shrink_scan;
|
2010-11-30 13:27:57 +07:00
|
|
|
btp->bt_shrinker.seeks = DEFAULT_SEEKS;
|
2013-08-28 07:18:05 +07:00
|
|
|
btp->bt_shrinker.flags = SHRINKER_NUMA_AWARE;
|
2010-11-30 13:27:57 +07:00
|
|
|
register_shrinker(&btp->bt_shrinker);
|
2005-04-17 05:20:36 +07:00
|
|
|
return btp;
|
|
|
|
|
|
|
|
error:
|
2008-05-19 13:31:57 +07:00
|
|
|
kmem_free(btp);
|
2005-04-17 05:20:36 +07:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
xfs: on-stack delayed write buffer lists
Queue delwri buffers on a local on-stack list instead of a per-buftarg one,
and write back the buffers per-process instead of by waking up xfsbufd.
This is now easily doable given that we have very few places left that write
delwri buffers:
- log recovery:
Only done at mount time, and already forcing out the buffers
synchronously using xfs_flush_buftarg
- quotacheck:
Same story.
- dquot reclaim:
Writes out dirty dquots on the LRU under memory pressure. We might
want to look into doing more of this via xfsaild, but it's already
more optimal than the synchronous inode reclaim that writes each
buffer synchronously.
- xfsaild:
This is the main beneficiary of the change. By keeping a local list
of buffers to write we reduce latency of writing out buffers, and
more importantly we can remove all the delwri list promotions which
were hitting the buffer cache hard under sustained metadata loads.
The implementation is very straightforward - xfs_buf_delwri_queue now gets
a new list_head pointer that it adds the delwri buffers to, and all callers
need to eventually submit the list using xfs_buf_delwri_submit or
xfs_buf_delwri_submit_nowait. Buffers that already are on a delwri list are
skipped in xfs_buf_delwri_queue, assuming they already are on another delwri
list. The biggest change to pass down the buffer list was done to the AIL
pushing. Now that we operate on buffers the trylock, push and pushbuf log
item methods are merged into a single push routine, which tries to lock the
item, and if possible add the buffer that needs writeback to the buffer list.
This leads to much simpler code than the previous split but requires the
individual IOP_PUSH instances to unlock and reacquire the AIL around calls
to blocking routines.
Given that xfsailds now also handle writing out buffers, the conditions for
log forcing and the sleep times needed some small changes. The most
important one is that we consider an AIL busy as long as we still have buffers
to push, and the other one is that we do increment the pushed LSN for
buffers that are under flushing at this moment, but still count them towards
the stuck items for restart purposes. Without this we could hammer on stuck
items without ever forcing the log and not make progress under heavy random
delete workloads on fast flash storage devices.
[ Dave Chinner:
- rebase on previous patches.
- improved comments for XBF_DELWRI_Q handling
- fix XBF_ASYNC handling in queue submission (test 106 failure)
- rename delwri submit function buffer list parameters for clarity
- xfs_efd_item_push() should return XFS_ITEM_PINNED ]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
2012-04-23 12:58:39 +07:00
|
|
|
* Add a buffer to the delayed write list.
|
|
|
|
*
|
|
|
|
* This queues a buffer for writeout if it hasn't already been. Note that
|
|
|
|
* neither this routine nor the buffer list submission functions perform
|
|
|
|
* any internal synchronization. It is expected that the lists are thread-local
|
|
|
|
* to the callers.
|
|
|
|
*
|
|
|
|
* Returns true if we queued up the buffer, or false if it had already
|
|
|
|
* been on the buffer list.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2012-04-23 12:58:39 +07:00
|
|
|
bool
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_delwri_queue(
|
2012-04-23 12:58:39 +07:00
|
|
|
struct xfs_buf *bp,
|
|
|
|
struct list_head *list)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2012-04-23 12:58:39 +07:00
|
|
|
ASSERT(xfs_buf_islocked(bp));
|
2011-08-23 15:28:05 +07:00
|
|
|
ASSERT(!(bp->b_flags & XBF_READ));
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
/*
|
|
|
|
* If the buffer is already marked delwri it already is queued up
|
|
|
|
* by someone else for immediate writeout. Just ignore it in that
|
|
|
|
* case.
|
|
|
|
*/
|
|
|
|
if (bp->b_flags & _XBF_DELWRI_Q) {
|
|
|
|
trace_xfs_buf_delwri_queued(bp, _RET_IP_);
|
|
|
|
return false;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
trace_xfs_buf_delwri_queue(bp, _RET_IP_);
|
2010-02-02 06:13:42 +07:00
|
|
|
|
|
|
|
/*
|
2012-04-23 12:58:39 +07:00
|
|
|
* If a buffer gets written out synchronously or marked stale while it
|
|
|
|
* is on a delwri list we lazily remove it. To do this, the other party
|
|
|
|
* clears the _XBF_DELWRI_Q flag but otherwise leaves the buffer alone.
|
|
|
|
* It remains referenced and on the list. In a rare corner case it
|
|
|
|
* might get readded to a delwri list after the synchronous writeout, in
|
|
|
|
* which case we just need to re-add the flag here.
|
2010-02-02 06:13:42 +07:00
|
|
|
*/
|
2012-04-23 12:58:39 +07:00
|
|
|
bp->b_flags |= _XBF_DELWRI_Q;
|
|
|
|
if (list_empty(&bp->b_list)) {
|
|
|
|
atomic_inc(&bp->b_hold);
|
|
|
|
list_add_tail(&bp->b_list, list);
|
2007-02-10 14:32:29 +07:00
|
|
|
}
|
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
return true;
|
2007-02-10 14:32:29 +07:00
|
|
|
}
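To show the caller-side pattern described in the delwri commit message (queue onto a local on-stack list, then submit the whole list), here is a minimal hypothetical sketch; the wrapper function and the source of the locked buffers are assumptions, while xfs_buf_delwri_queue() and xfs_buf_delwri_submit() are the interfaces named above:

/*
 * Hypothetical caller: queue a set of already-locked dirty buffers on a
 * thread-local delwri list and write them back in a single submission.
 */
static int
example_writeback_buffers(struct xfs_buf **bufs, int nbuf)
{
	LIST_HEAD(buffer_list);		/* lives on this thread's stack */
	int		i;

	for (i = 0; i < nbuf; i++) {
		/*
		 * xfs_buf_delwri_queue() expects the buffer locked; it takes
		 * its own hold and returns false if it was already queued.
		 */
		xfs_buf_delwri_queue(bufs[i], &buffer_list);
		xfs_buf_relse(bufs[i]);	/* drop our lock and reference */
	}

	/* Sort, submit and wait for completion of everything queued. */
	return xfs_buf_delwri_submit(&buffer_list);
}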
|
|
|
|
|
2010-01-26 11:13:25 +07:00
|
|
|
/*
|
|
|
|
* Compare function is more complex than it needs to be because
|
|
|
|
* the return value is only 32 bits and we are doing comparisons
|
|
|
|
* on 64 bit values
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
xfs_buf_cmp(
|
|
|
|
void *priv,
|
|
|
|
struct list_head *a,
|
|
|
|
struct list_head *b)
|
|
|
|
{
|
|
|
|
struct xfs_buf *ap = container_of(a, struct xfs_buf, b_list);
|
|
|
|
struct xfs_buf *bp = container_of(b, struct xfs_buf, b_list);
|
|
|
|
xfs_daddr_t diff;
|
|
|
|
|
2012-12-05 06:18:02 +07:00
|
|
|
diff = ap->b_maps[0].bm_bn - bp->b_maps[0].bm_bn;
|
2010-01-26 11:13:25 +07:00
|
|
|
if (diff < 0)
|
|
|
|
return -1;
|
|
|
|
if (diff > 0)
|
|
|
|
return 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
static int
|
|
|
|
__xfs_buf_delwri_submit(
|
|
|
|
struct list_head *buffer_list,
|
|
|
|
struct list_head *io_list,
|
|
|
|
bool wait)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2012-04-23 12:58:39 +07:00
|
|
|
struct blk_plug plug;
|
|
|
|
struct xfs_buf *bp, *n;
|
|
|
|
int pinned = 0;
|
|
|
|
|
|
|
|
list_for_each_entry_safe(bp, n, buffer_list, b_list) {
|
|
|
|
if (!wait) {
|
|
|
|
if (xfs_buf_ispinned(bp)) {
|
|
|
|
pinned++;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (!xfs_buf_trylock(bp))
|
|
|
|
continue;
|
|
|
|
} else {
|
|
|
|
xfs_buf_lock(bp);
|
|
|
|
}
|
2007-12-07 10:09:02 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
/*
|
|
|
|
* Someone else might have written the buffer synchronously or
|
|
|
|
* marked it stale in the meantime. In that case only the
|
|
|
|
* _XBF_DELWRI_Q flag got cleared, and we have to drop the
|
|
|
|
* reference and remove it from the list here.
|
|
|
|
*/
|
|
|
|
if (!(bp->b_flags & _XBF_DELWRI_Q)) {
|
|
|
|
list_del_init(&bp->b_list);
|
|
|
|
xfs_buf_relse(bp);
|
|
|
|
continue;
|
|
|
|
}
|
2010-01-11 18:49:59 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
list_move_tail(&bp->b_list, io_list);
|
|
|
|
trace_xfs_buf_delwri_split(bp, _RET_IP_);
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
list_sort(NULL, io_list, xfs_buf_cmp);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
blk_start_plug(&plug);
|
|
|
|
list_for_each_entry_safe(bp, n, io_list, b_list) {
|
2016-07-20 07:53:22 +07:00
|
|
|
bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_WRITE_FAIL);
|
2014-10-02 06:04:01 +07:00
|
|
|
bp->b_flags |= XBF_WRITE | XBF_ASYNC;
|
2011-03-30 18:05:09 +07:00
|
|
|
|
2014-10-02 06:04:01 +07:00
|
|
|
/*
|
|
|
|
* we do all IO submission async. This means if we need to wait
|
|
|
|
* for IO completion we need to take an extra reference so the
|
|
|
|
* buffer is still valid on the other side.
|
|
|
|
*/
|
|
|
|
if (wait)
|
|
|
|
xfs_buf_hold(bp);
|
|
|
|
else
|
2006-01-11 11:39:08 +07:00
|
|
|
list_del_init(&bp->b_list);
|
2014-10-02 06:04:40 +07:00
|
|
|
|
2014-10-02 06:05:14 +07:00
|
|
|
xfs_buf_submit(bp);
|
2012-04-23 12:58:39 +07:00
|
|
|
}
|
|
|
|
blk_finish_plug(&plug);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
return pinned;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
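The comment above explains why an extra reference is taken when the caller intends to wait: submission is always asynchronous, so the waiter needs the buffer to stay alive until it can lock it again. Condensed into a single hedged sketch for one buffer; submit_and_wait_one() is illustrative only, the real code splits this between __xfs_buf_delwri_submit() and xfs_buf_delwri_submit() below.

/*
 * Illustrative sketch of the reference/lock pairing for one buffer.
 * Not a separate interface; it only shows how the async submit side
 * and the wait side line up.
 */
static int submit_and_wait_one(struct xfs_buf *bp)
{
	int error;

	xfs_buf_lock(bp);		/* buffers are submitted locked */
	bp->b_flags |= XBF_WRITE | XBF_ASYNC;

	xfs_buf_hold(bp);		/* extra reference keeps bp valid after IO */
	xfs_buf_submit(bp);		/* async: IO completion drops the buffer lock */

	xfs_buf_lock(bp);		/* blocks until the async IO has completed */
	error = bp->b_error;
	xfs_buf_relse(bp);		/* drops the lock and the extra reference */
	return error;
}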
|
|
|
|
|
|
|
|
/*
|
2012-04-23 12:58:39 +07:00
|
|
|
* Write out a buffer list asynchronously.
|
|
|
|
*
|
|
|
|
* This will take the @buffer_list, write all non-locked and non-pinned buffers
|
|
|
|
* out and not wait for I/O completion on any of the buffers. This interface
|
|
|
|
* is only safely usable for callers that can track I/O completion by higher
|
|
|
|
* level means, e.g. AIL pushing as the @buffer_list is consumed in this
|
|
|
|
* function.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
int
|
2012-04-23 12:58:39 +07:00
|
|
|
xfs_buf_delwri_submit_nowait(
|
|
|
|
struct list_head *buffer_list)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2012-04-23 12:58:39 +07:00
|
|
|
LIST_HEAD(io_list);
|
|
|
|
return __xfs_buf_delwri_submit(buffer_list, &io_list, false);
|
|
|
|
}
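The nowait variant is what an AIL-push style caller would use: it queues whatever it can, fires the writes off asynchronously, and relies on higher-level completion tracking (log item callbacks) rather than this list to know when the IO is done. A rough sketch, where example_push_loop(), more_work() and push_one_item() are hypothetical placeholders and only xfs_buf_delwri_queue() and xfs_buf_delwri_submit_nowait() are the real interfaces:

/*
 * Rough sketch of an AIL-push style caller of the nowait variant.
 * The buffer list is consumed by the submission; IO completion is
 * tracked by higher level means, not by this list.
 */
static void example_push_loop(void)
{
	LIST_HEAD(buffer_list);

	while (more_work()) {
		struct xfs_buf *bp = push_one_item();

		if (bp)
			xfs_buf_delwri_queue(bp, &buffer_list);
	}

	/* Fire and forget; do not wait for any of the buffers. */
	xfs_buf_delwri_submit_nowait(&buffer_list);
}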
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
/*
|
|
|
|
* Write out a buffer list synchronously.
|
|
|
|
*
|
|
|
|
* This will take the @buffer_list, write all buffers out and wait for I/O
|
|
|
|
* completion on all of the buffers. @buffer_list is consumed by the function,
|
|
|
|
* so callers must have some other way of tracking buffers if they require such
|
|
|
|
* functionality.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
xfs_buf_delwri_submit(
|
|
|
|
struct list_head *buffer_list)
|
|
|
|
{
|
|
|
|
LIST_HEAD(io_list);
|
|
|
|
int error = 0, error2;
|
|
|
|
struct xfs_buf *bp;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
__xfs_buf_delwri_submit(buffer_list, &io_list, true);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
/* Wait for IO to complete. */
|
|
|
|
while (!list_empty(&io_list)) {
|
|
|
|
bp = list_first_entry(&io_list, struct xfs_buf, b_list);
|
2011-03-30 18:05:09 +07:00
|
|
|
|
2010-01-26 11:13:25 +07:00
|
|
|
list_del_init(&bp->b_list);
|
2014-10-02 06:04:01 +07:00
|
|
|
|
|
|
|
/* locking the buffer will wait for async IO completion. */
|
|
|
|
xfs_buf_lock(bp);
|
|
|
|
error2 = bp->b_error;
|
2012-04-23 12:58:39 +07:00
|
|
|
xfs_buf_relse(bp);
|
|
|
|
if (!error)
|
|
|
|
error = error2;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2012-04-23 12:58:39 +07:00
|
|
|
return error;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2005-11-02 06:15:05 +07:00
|
|
|
int __init
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_init(void)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2006-03-14 09:18:19 +07:00
|
|
|
xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
|
|
|
|
KM_ZONE_HWALIGN, NULL);
|
2006-01-11 11:39:08 +07:00
|
|
|
if (!xfs_buf_zone)
|
2009-12-15 06:14:59 +07:00
|
|
|
goto out;
|
2005-11-02 06:15:05 +07:00
|
|
|
|
2005-06-21 12:14:01 +07:00
|
|
|
return 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-12-15 06:14:59 +07:00
|
|
|
out:
|
2006-03-14 09:18:19 +07:00
|
|
|
return -ENOMEM;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2006-01-11 11:39:08 +07:00
|
|
|
xfs_buf_terminate(void)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2006-01-11 11:39:08 +07:00
|
|
|
kmem_zone_destroy(xfs_buf_zone);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
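These two calls are paired at module load and unload. A hedged sketch of the expected ordering; example_init() and example_exit() stand in for the real module entry points and their fuller error unwinding, which live elsewhere.

/*
 * Illustrative pairing of xfs_buf_init()/xfs_buf_terminate() at module
 * init/exit time. Names other than the two xfs_buf_* calls are
 * placeholders.
 */
static int __init example_init(void)
{
	int error;

	error = xfs_buf_init();		/* creates the xfs_buf zone */
	if (error)
		return error;		/* -ENOMEM if the zone allocation failed */

	/* ... register the filesystem, etc. ... */
	return 0;
}

static void __exit example_exit(void)
{
	/* ... unregister the filesystem first ... */
	xfs_buf_terminate();		/* destroys the xfs_buf zone */
}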
|