/*
 *	linux/mm/filemap.c
 *
 * Copyright (C) 1994-1999  Linus Torvalds
 */

/*
 * This file handles the generic file mmap semantics used by
 * most "normal" filesystems (but you don't /have/ to use this:
 * the NFS filesystem used to do this differently, for example)
 */
#include <linux/export.h>
#include <linux/compiler.h>
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/aio.h>
#include <linux/capability.h>
#include <linux/kernel_stat.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/mman.h>
#include <linux/pagemap.h>
#include <linux/file.h>
#include <linux/uio.h>
#include <linux/hash.h>
#include <linux/writeback.h>
#include <linux/backing-dev.h>
#include <linux/pagevec.h>
#include <linux/blkdev.h>
#include <linux/security.h>
#include <linux/cpuset.h>
#include <linux/hardirq.h> /* for BUG_ON(!in_atomic()) only */
#include <linux/memcontrol.h>
#include <linux/cleancache.h>
#include "internal.h"

/*
 * FIXME: remove all knowledge of the buffer layer from the core VM
 */
#include <linux/buffer_head.h> /* for try_to_free_buffers */

#include <asm/mman.h>

/*
 * Shared mappings implemented 30.11.1994. It's not fully working yet,
 * though.
 *
 * Shared mappings now work. 15.8.1995  Bruno.
 *
 * finished 'unifying' the page and buffer cache and SMP-threaded the
 * page-cache, 21.05.1999, Ingo Molnar <mingo@redhat.com>
 *
 * SMP-threaded pagemap-LRU 1999, Andrea Arcangeli <andrea@suse.de>
 */

/*
 * Lock ordering:
 *
 *  ->i_mmap_mutex			(truncate_pagecache)
 *    ->private_lock			(__free_pte->__set_page_dirty_buffers)
 *      ->swap_lock			(exclusive_swap_page, others)
 *        ->mapping->tree_lock
 *
 *  ->i_mutex
 *    ->i_mmap_mutex			(truncate->unmap_mapping_range)
 *
 *  ->mmap_sem
 *    ->i_mmap_mutex
 *      ->page_table_lock or pte_lock	(various, mainly in memory.c)
 *        ->mapping->tree_lock		(arch-dependent flush_dcache_mmap_lock)
 *
 *  ->mmap_sem
 *    ->lock_page			(access_process_vm)
 *
 *  ->i_mutex				(generic_file_buffered_write)
 *    ->mmap_sem			(fault_in_pages_readable->do_page_fault)
 *
 *  bdi->wb.list_lock
 *    sb_lock				(fs/fs-writeback.c)
 *    ->mapping->tree_lock		(__sync_single_inode)
 *
 *  ->i_mmap_mutex
 *    ->anon_vma.lock			(vma_adjust)
 *
 *  ->anon_vma.lock
 *    ->page_table_lock or pte_lock	(anon_vma_prepare and various)
 *
 *  ->page_table_lock or pte_lock
 *    ->swap_lock			(try_to_unmap_one)
 *    ->private_lock			(try_to_unmap_one)
 *    ->tree_lock			(try_to_unmap_one)
 *    ->zone.lru_lock			(follow_page->mark_page_accessed)
 *    ->zone.lru_lock			(check_pte_range->isolate_lru_page)
 *    ->private_lock			(page_remove_rmap->set_page_dirty)
 *    ->tree_lock			(page_remove_rmap->set_page_dirty)
 *    bdi.wb->list_lock			(page_remove_rmap->set_page_dirty)
 *    ->inode->i_lock			(page_remove_rmap->set_page_dirty)
 *    bdi.wb->list_lock			(zap_pte_range->set_page_dirty)
 *    ->inode->i_lock			(zap_pte_range->set_page_dirty)
 *    ->private_lock			(zap_pte_range->__set_page_dirty_buffers)
 *
 *  ->i_mmap_mutex
 *    ->tasklist_lock			(memory_failure, collect_procs_ao)
 */

/*
 * Delete a page from the page cache and free it. Caller has to make
 * sure the page is locked and that nobody else uses it - or that usage
 * is safe.  The caller must hold the mapping's tree_lock.
 */
void __delete_from_page_cache(struct page *page)
{
	struct address_space *mapping = page->mapping;

	/*
	 * if we're uptodate, flush out into the cleancache, otherwise
	 * invalidate any existing cleancache entries.  We can't leave
	 * stale data around in the cleancache once our page is gone
	 */
	if (PageUptodate(page) && PageMappedToDisk(page))
		cleancache_put_page(page);
	else
		cleancache_invalidate_page(mapping, page);

	radix_tree_delete(&mapping->page_tree, page->index);
	page->mapping = NULL;
	/* Leave page->index set: truncation lookup relies upon it */
	mapping->nrpages--;
	__dec_zone_page_state(page, NR_FILE_PAGES);
	if (PageSwapBacked(page))
		__dec_zone_page_state(page, NR_SHMEM);
	BUG_ON(page_mapped(page));

	/*
	 * Some filesystems seem to re-dirty the page even after
	 * the VM has canceled the dirty bit (eg ext3 journaling).
	 *
	 * Fix it up by doing a final dirty accounting check after
	 * having removed the page entirely.
	 */
	if (PageDirty(page) && mapping_cap_account_dirty(mapping)) {
		dec_zone_page_state(page, NR_FILE_DIRTY);
		dec_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
	}
}

/**
 * delete_from_page_cache - delete page from page cache
 * @page: the page which the kernel is trying to remove from page cache
 *
 * This must be called only on pages that have been verified to be in the page
 * cache and locked.  It will never put the page into the free list, the caller
 * has a reference on the page.
 */
void delete_from_page_cache(struct page *page)
{
	struct address_space *mapping = page->mapping;
	void (*freepage)(struct page *);

	BUG_ON(!PageLocked(page));

	freepage = mapping->a_ops->freepage;
	spin_lock_irq(&mapping->tree_lock);
	__delete_from_page_cache(page);
	spin_unlock_irq(&mapping->tree_lock);
	mem_cgroup_uncharge_cache_page(page);

	if (freepage)
		freepage(page);
	page_cache_release(page);
}
EXPORT_SYMBOL(delete_from_page_cache);

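/*
 * Note the split above: __delete_from_page_cache() expects the caller to
 * already hold mapping->tree_lock (replace_page_cache_page() below relies on
 * exactly that), while delete_from_page_cache() takes and drops the lock
 * itself, uncharges the memcg accounting and releases the pagecache
 * reference with page_cache_release().
 */
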
static int sleep_on_page(void *word)
{
	io_schedule();
	return 0;
}

static int sleep_on_page_killable(void *word)
{
	sleep_on_page(word);
	return fatal_signal_pending(current) ? -EINTR : 0;
}

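/*
 * sleep_on_page() and sleep_on_page_killable() are wait-bit action
 * callbacks: the page wait/lock helpers later in this file hand them to the
 * wait_on_bit()/wait_on_bit_lock() machinery, so a waiter simply parks in
 * io_schedule() until the page bit it is waiting on clears, the killable
 * variant additionally bailing out with -EINTR on a fatal signal.
 */
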
/**
 * __filemap_fdatawrite_range - start writeback on mapping dirty pages in range
 * @mapping:	address space structure to write
 * @start:	offset in bytes where the range starts
 * @end:	offset in bytes where the range ends (inclusive)
 * @sync_mode:	enable synchronous operation
 *
 * Start writeback against all of a mapping's dirty pages that lie
 * within the byte offsets <start, end> inclusive.
 *
 * If sync_mode is WB_SYNC_ALL then this is a "data integrity" operation, as
 * opposed to a regular memory cleansing writeback.  The difference between
 * these two operations is that if a dirty page/buffer is encountered, it must
 * be waited upon, and not just skipped over.
 */
int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
				loff_t end, int sync_mode)
{
	int ret;
	struct writeback_control wbc = {
		.sync_mode = sync_mode,
		.nr_to_write = LONG_MAX,
		.range_start = start,
		.range_end = end,
	};

	if (!mapping_cap_writeback_dirty(mapping))
		return 0;

	ret = do_writepages(mapping, &wbc);
	return ret;
}

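/*
 * The writeback_control above is the whole interface to the flusher code:
 * range_start/range_end bound the work, nr_to_write is left effectively
 * unlimited (LONG_MAX), and sync_mode selects between WB_SYNC_NONE
 * (best-effort cleaning) and WB_SYNC_ALL (data integrity: wait on busy
 * pages rather than skipping them) before handing off to do_writepages().
 */
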
static inline int __filemap_fdatawrite(struct address_space *mapping,
	int sync_mode)
{
	return __filemap_fdatawrite_range(mapping, 0, LLONG_MAX, sync_mode);
}

int filemap_fdatawrite(struct address_space *mapping)
{
	return __filemap_fdatawrite(mapping, WB_SYNC_ALL);
}
EXPORT_SYMBOL(filemap_fdatawrite);

int filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
				loff_t end)
{
	return __filemap_fdatawrite_range(mapping, start, end, WB_SYNC_ALL);
}
EXPORT_SYMBOL(filemap_fdatawrite_range);

/**
 * filemap_flush - mostly a non-blocking flush
 * @mapping:	target address_space
 *
 * This is a mostly non-blocking flush.  Not suitable for data-integrity
 * purposes - I/O may not be started against all dirty pages.
 */
int filemap_flush(struct address_space *mapping)
{
	return __filemap_fdatawrite(mapping, WB_SYNC_NONE);
}
EXPORT_SYMBOL(filemap_flush);

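/*
 * Summary of the exported writeout variants built on
 * __filemap_fdatawrite_range():
 *
 *	filemap_fdatawrite()		whole file, WB_SYNC_ALL
 *	filemap_fdatawrite_range()	byte range, WB_SYNC_ALL
 *	filemap_flush()			whole file, WB_SYNC_NONE (best effort)
 *
 * All of them only *start* writeback; pair them with the fdatawait helpers
 * below when completion must be guaranteed.
 */
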
/**
 * filemap_fdatawait_range - wait for writeback to complete
 * @mapping:		address space structure to wait for
 * @start_byte:		offset in bytes where the range starts
 * @end_byte:		offset in bytes where the range ends (inclusive)
 *
 * Walk the list of under-writeback pages of the given address space
 * in the given range and wait for all of them.
 */
int filemap_fdatawait_range(struct address_space *mapping, loff_t start_byte,
			    loff_t end_byte)
{
	pgoff_t index = start_byte >> PAGE_CACHE_SHIFT;
	pgoff_t end = end_byte >> PAGE_CACHE_SHIFT;
	struct pagevec pvec;
	int nr_pages;
	int ret = 0;

	if (end_byte < start_byte)
		return 0;

	pagevec_init(&pvec, 0);
	while ((index <= end) &&
			(nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
			PAGECACHE_TAG_WRITEBACK,
			min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1)) != 0) {
		unsigned i;

		for (i = 0; i < nr_pages; i++) {
			struct page *page = pvec.pages[i];

			/* until radix tree lookup accepts end_index */
			if (page->index > end)
				continue;

			wait_on_page_writeback(page);
			if (TestClearPageError(page))
				ret = -EIO;
		}
		pagevec_release(&pvec);
		cond_resched();
	}

	/* Check for outstanding write errors */
	if (test_and_clear_bit(AS_ENOSPC, &mapping->flags))
		ret = -ENOSPC;
	if (test_and_clear_bit(AS_EIO, &mapping->flags))
		ret = -EIO;

	return ret;
}
EXPORT_SYMBOL(filemap_fdatawait_range);

/**
 * filemap_fdatawait - wait for all under-writeback pages to complete
 * @mapping: address space structure to wait for
 *
 * Walk the list of under-writeback pages of the given address space
 * and wait for all of them.
 */
int filemap_fdatawait(struct address_space *mapping)
{
	loff_t i_size = i_size_read(mapping->host);

	if (i_size == 0)
		return 0;

	return filemap_fdatawait_range(mapping, 0, i_size - 1);
}
EXPORT_SYMBOL(filemap_fdatawait);

int filemap_write_and_wait(struct address_space *mapping)
{
	int err = 0;

	if (mapping->nrpages) {
		err = filemap_fdatawrite(mapping);
		/*
		 * Even if the above returned an error, the pages may have
		 * been written out partially (e.g. -ENOSPC), so we still
		 * wait for them.  -EIO is the special case: it may indicate
		 * that something far worse (e.g. a bug) has happened, so we
		 * avoid waiting in that case.
		 */
		if (err != -EIO) {
			int err2 = filemap_fdatawait(mapping);
			if (!err)
				err = err2;
		}
	}
	return err;
}
EXPORT_SYMBOL(filemap_write_and_wait);

/**
 * filemap_write_and_wait_range - write out & wait on a file range
 * @mapping:	the address_space for the pages
 * @lstart:	offset in bytes where the range starts
 * @lend:	offset in bytes where the range ends (inclusive)
 *
 * Write out and wait upon file offsets lstart->lend, inclusive.
 *
 * Note that `lend' is inclusive (describes the last byte to be written) so
 * that this function can be used to write to the very end-of-file (end = -1).
 */
int filemap_write_and_wait_range(struct address_space *mapping,
				 loff_t lstart, loff_t lend)
{
	int err = 0;

	if (mapping->nrpages) {
		err = __filemap_fdatawrite_range(mapping, lstart, lend,
						 WB_SYNC_ALL);
		/* See comment of filemap_write_and_wait() */
		if (err != -EIO) {
			int err2 = filemap_fdatawait_range(mapping,
						lstart, lend);
			if (!err)
				err = err2;
		}
	}
	return err;
}
EXPORT_SYMBOL(filemap_write_and_wait_range);

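/*
 * Typical caller (illustrative sketch only, not part of this file): a
 * filesystem's ->fsync() usually flushes the data range through the helper
 * above before syncing its own metadata, roughly:
 *
 *	static int myfs_fsync(struct file *file, loff_t start, loff_t end,
 *			      int datasync)
 *	{
 *		struct inode *inode = file->f_mapping->host;
 *		int err;
 *
 *		err = filemap_write_and_wait_range(file->f_mapping,
 *						   start, end);
 *		if (err)
 *			return err;
 *		return myfs_sync_metadata(inode, datasync);
 *	}
 *
 * "myfs" and myfs_sync_metadata() are hypothetical names used only for the
 * example.
 */
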
/**
 * replace_page_cache_page - replace a pagecache page with a new one
 * @old:	page to be replaced
 * @new:	page to replace with
 * @gfp_mask:	allocation mode
 *
 * This function replaces a page in the pagecache with a new one.  On
 * success it acquires the pagecache reference for the new page and
 * drops it for the old page.  Both the old and new pages must be
 * locked.  This function does not add the new page to the LRU, the
 * caller must do that.
 *
 * The remove + add is atomic.  The only way this function can fail is
 * memory allocation failure.
 */
int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
{
	int error;

	VM_BUG_ON(!PageLocked(old));
	VM_BUG_ON(!PageLocked(new));
	VM_BUG_ON(new->mapping);

	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
	if (!error) {
		struct address_space *mapping = old->mapping;
		void (*freepage)(struct page *);
		pgoff_t offset = old->index;

		freepage = mapping->a_ops->freepage;

		page_cache_get(new);
		new->mapping = mapping;
		new->index = offset;

		spin_lock_irq(&mapping->tree_lock);
		__delete_from_page_cache(old);
		error = radix_tree_insert(&mapping->page_tree, offset, new);
		BUG_ON(error);
		mapping->nrpages++;
		__inc_zone_page_state(new, NR_FILE_PAGES);
		if (PageSwapBacked(new))
			__inc_zone_page_state(new, NR_SHMEM);
		spin_unlock_irq(&mapping->tree_lock);
		/* mem_cgroup codes must not be called under tree_lock */
		mem_cgroup_replace_page_cache(old, new);
		radix_tree_preload_end();
		if (freepage)
			freepage(old);
		page_cache_release(old);
	}

	return error;
}
EXPORT_SYMBOL_GPL(replace_page_cache_page);

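/*
 * replace_page_cache_page() was introduced for FUSE's splice() handling,
 * which swaps a pipe buffer page in for an existing pagecache page.  The
 * mem_cgroup_replace_page_cache() call above keeps memcg accounting
 * attached to the replacement page and, notably, must run outside
 * tree_lock.
 */
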
|
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
2008-07-26 09:45:30 +07:00
|
|
|
* add_to_page_cache_locked - add a locked page to the pagecache
|
2006-06-23 16:03:49 +07:00
|
|
|
* @page: page to add
|
|
|
|
* @mapping: the page's address_space
|
|
|
|
* @offset: page index
|
|
|
|
* @gfp_mask: page allocation mode
|
|
|
|
*
|
2008-07-26 09:45:30 +07:00
|
|
|
* This function is used to add a page to the pagecache. It must be locked.
|
2005-04-17 05:20:36 +07:00
|
|
|
* This function does not add the page to the LRU. The caller must do that.
|
|
|
|
*/
|
2008-07-26 09:45:30 +07:00
|
|
|
int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
|
2005-10-21 14:18:50 +07:00
|
|
|
pgoff_t offset, gfp_t gfp_mask)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2008-07-26 09:45:30 +07:00
|
|
|
int error;
|
|
|
|
|
|
|
|
VM_BUG_ON(!PageLocked(page));
|
2011-08-04 06:21:27 +07:00
|
|
|
VM_BUG_ON(PageSwapBacked(page));
|
2008-07-26 09:45:30 +07:00
|
|
|
|
|
|
|
error = mem_cgroup_cache_charge(page, current->mm,
|
2009-01-08 09:08:10 +07:00
|
|
|
gfp_mask & GFP_RECLAIM_MASK);
|
2008-02-07 15:14:05 +07:00
|
|
|
if (error)
|
|
|
|
goto out;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2008-02-07 15:14:05 +07:00
|
|
|
error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (error == 0) {
|
2008-07-26 09:45:30 +07:00
|
|
|
page_cache_get(page);
|
|
|
|
page->mapping = mapping;
|
|
|
|
page->index = offset;
|
|
|
|
|
2008-07-26 09:45:32 +07:00
|
|
|
spin_lock_irq(&mapping->tree_lock);
|
2005-04-17 05:20:36 +07:00
|
|
|
error = radix_tree_insert(&mapping->page_tree, offset, page);
|
2008-07-26 09:45:30 +07:00
|
|
|
if (likely(!error)) {
|
2005-04-17 05:20:36 +07:00
|
|
|
mapping->nrpages++;
|
2006-06-30 15:55:35 +07:00
|
|
|
__inc_zone_page_state(page, NR_FILE_PAGES);
|
2009-05-29 04:34:28 +07:00
|
|
|
spin_unlock_irq(&mapping->tree_lock);
|
2008-07-26 09:45:30 +07:00
|
|
|
} else {
|
|
|
|
page->mapping = NULL;
|
2011-07-26 07:12:25 +07:00
|
|
|
/* Leave page->index set: truncation relies upon it */
|
2009-05-29 04:34:28 +07:00
|
|
|
spin_unlock_irq(&mapping->tree_lock);
|
memcg: remove refcnt from page_cgroup
memcg: performance improvements
Patch Description
1/5 ... remove refcnt fron page_cgroup patch (shmem handling is fixed)
2/5 ... swapcache handling patch
3/5 ... add helper function for shmem's memory reclaim patch
4/5 ... optimize by likely/unlikely ppatch
5/5 ... remove redundunt check patch (shmem handling is fixed.)
Unix bench result.
== 2.6.26-rc2-mm1 + memory resource controller
Execl Throughput 2915.4 lps (29.6 secs, 3 samples)
C Compiler Throughput 1019.3 lpm (60.0 secs, 3 samples)
Shell Scripts (1 concurrent) 5796.0 lpm (60.0 secs, 3 samples)
Shell Scripts (8 concurrent) 1097.7 lpm (60.0 secs, 3 samples)
Shell Scripts (16 concurrent) 565.3 lpm (60.0 secs, 3 samples)
File Read 1024 bufsize 2000 maxblocks 1022128.0 KBps (30.0 secs, 3 samples)
File Write 1024 bufsize 2000 maxblocks 544057.0 KBps (30.0 secs, 3 samples)
File Copy 1024 bufsize 2000 maxblocks 346481.0 KBps (30.0 secs, 3 samples)
File Read 256 bufsize 500 maxblocks 319325.0 KBps (30.0 secs, 3 samples)
File Write 256 bufsize 500 maxblocks 148788.0 KBps (30.0 secs, 3 samples)
File Copy 256 bufsize 500 maxblocks 99051.0 KBps (30.0 secs, 3 samples)
File Read 4096 bufsize 8000 maxblocks 2058917.0 KBps (30.0 secs, 3 samples)
File Write 4096 bufsize 8000 maxblocks 1606109.0 KBps (30.0 secs, 3 samples)
File Copy 4096 bufsize 8000 maxblocks 854789.0 KBps (30.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places 126145.2 lpm (30.0 secs, 3 samples)
INDEX VALUES
TEST BASELINE RESULT INDEX
Execl Throughput 43.0 2915.4 678.0
File Copy 1024 bufsize 2000 maxblocks 3960.0 346481.0 875.0
File Copy 256 bufsize 500 maxblocks 1655.0 99051.0 598.5
File Copy 4096 bufsize 8000 maxblocks 5800.0 854789.0 1473.8
Shell Scripts (8 concurrent) 6.0 1097.7 1829.5
=========
FINAL SCORE 991.3
== 2.6.26-rc2-mm1 + this set ==
Execl Throughput 3012.9 lps (29.9 secs, 3 samples)
C Compiler Throughput 981.0 lpm (60.0 secs, 3 samples)
Shell Scripts (1 concurrent) 5872.0 lpm (60.0 secs, 3 samples)
Shell Scripts (8 concurrent) 1120.3 lpm (60.0 secs, 3 samples)
Shell Scripts (16 concurrent) 578.0 lpm (60.0 secs, 3 samples)
File Read 1024 bufsize 2000 maxblocks 1003993.0 KBps (30.0 secs, 3 samples)
File Write 1024 bufsize 2000 maxblocks 550452.0 KBps (30.0 secs, 3 samples)
File Copy 1024 bufsize 2000 maxblocks 347159.0 KBps (30.0 secs, 3 samples)
File Read 256 bufsize 500 maxblocks 314644.0 KBps (30.0 secs, 3 samples)
File Write 256 bufsize 500 maxblocks 151852.0 KBps (30.0 secs, 3 samples)
File Copy 256 bufsize 500 maxblocks 101000.0 KBps (30.0 secs, 3 samples)
File Read 4096 bufsize 8000 maxblocks 2033256.0 KBps (30.0 secs, 3 samples)
File Write 4096 bufsize 8000 maxblocks 1611814.0 KBps (30.0 secs, 3 samples)
File Copy 4096 bufsize 8000 maxblocks 847979.0 KBps (30.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places 128148.7 lpm (30.0 secs, 3 samples)
INDEX VALUES
TEST BASELINE RESULT INDEX
Execl Throughput 43.0 3012.9 700.7
File Copy 1024 bufsize 2000 maxblocks 3960.0 347159.0 876.7
File Copy 256 bufsize 500 maxblocks 1655.0 101000.0 610.3
File Copy 4096 bufsize 8000 maxblocks 5800.0 847979.0 1462.0
Shell Scripts (8 concurrent) 6.0 1120.3 1867.2
=========
FINAL SCORE 1004.6
This patch:
Remove refcnt from page_cgroup().
After this,
* A page is charged only when !page_mapped() && no page_cgroup is assigned.
* Anon page is newly mapped.
* File page is added to mapping->tree.
* A page is uncharged only when
* Anon page is fully unmapped.
* File page is removed from LRU.
There is no change in behavior from user's view.
This patch also removes unnecessary calls in rmap.c which was used only for
refcnt mangement.
[akpm@linux-foundation.org: fix warning]
[hugh@veritas.com: fix shmem_unuse_inode charging]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: Paul Menage <menage@google.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-25 15:47:14 +07:00
|
|
|
mem_cgroup_uncharge_cache_page(page);
|
2008-07-26 09:45:30 +07:00
|
|
|
page_cache_release(page);
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
radix_tree_preload_end();
|
2008-02-07 15:14:05 +07:00
|
|
|
} else
|
memcg: remove refcnt from page_cgroup
memcg: performance improvements
Patch Description
1/5 ... remove refcnt fron page_cgroup patch (shmem handling is fixed)
2/5 ... swapcache handling patch
3/5 ... add helper function for shmem's memory reclaim patch
4/5 ... optimize by likely/unlikely ppatch
5/5 ... remove redundunt check patch (shmem handling is fixed.)
Unix bench result.
== 2.6.26-rc2-mm1 + memory resource controller
Execl Throughput 2915.4 lps (29.6 secs, 3 samples)
C Compiler Throughput 1019.3 lpm (60.0 secs, 3 samples)
Shell Scripts (1 concurrent) 5796.0 lpm (60.0 secs, 3 samples)
Shell Scripts (8 concurrent) 1097.7 lpm (60.0 secs, 3 samples)
Shell Scripts (16 concurrent) 565.3 lpm (60.0 secs, 3 samples)
File Read 1024 bufsize 2000 maxblocks 1022128.0 KBps (30.0 secs, 3 samples)
File Write 1024 bufsize 2000 maxblocks 544057.0 KBps (30.0 secs, 3 samples)
File Copy 1024 bufsize 2000 maxblocks 346481.0 KBps (30.0 secs, 3 samples)
File Read 256 bufsize 500 maxblocks 319325.0 KBps (30.0 secs, 3 samples)
File Write 256 bufsize 500 maxblocks 148788.0 KBps (30.0 secs, 3 samples)
File Copy 256 bufsize 500 maxblocks 99051.0 KBps (30.0 secs, 3 samples)
File Read 4096 bufsize 8000 maxblocks 2058917.0 KBps (30.0 secs, 3 samples)
File Write 4096 bufsize 8000 maxblocks 1606109.0 KBps (30.0 secs, 3 samples)
File Copy 4096 bufsize 8000 maxblocks 854789.0 KBps (30.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places 126145.2 lpm (30.0 secs, 3 samples)
INDEX VALUES
TEST BASELINE RESULT INDEX
Execl Throughput 43.0 2915.4 678.0
File Copy 1024 bufsize 2000 maxblocks 3960.0 346481.0 875.0
File Copy 256 bufsize 500 maxblocks 1655.0 99051.0 598.5
File Copy 4096 bufsize 8000 maxblocks 5800.0 854789.0 1473.8
Shell Scripts (8 concurrent) 6.0 1097.7 1829.5
=========
FINAL SCORE 991.3
== 2.6.26-rc2-mm1 + this set ==
Execl Throughput 3012.9 lps (29.9 secs, 3 samples)
C Compiler Throughput 981.0 lpm (60.0 secs, 3 samples)
Shell Scripts (1 concurrent) 5872.0 lpm (60.0 secs, 3 samples)
Shell Scripts (8 concurrent) 1120.3 lpm (60.0 secs, 3 samples)
Shell Scripts (16 concurrent) 578.0 lpm (60.0 secs, 3 samples)
File Read 1024 bufsize 2000 maxblocks 1003993.0 KBps (30.0 secs, 3 samples)
File Write 1024 bufsize 2000 maxblocks 550452.0 KBps (30.0 secs, 3 samples)
File Copy 1024 bufsize 2000 maxblocks 347159.0 KBps (30.0 secs, 3 samples)
File Read 256 bufsize 500 maxblocks 314644.0 KBps (30.0 secs, 3 samples)
File Write 256 bufsize 500 maxblocks 151852.0 KBps (30.0 secs, 3 samples)
File Copy 256 bufsize 500 maxblocks 101000.0 KBps (30.0 secs, 3 samples)
File Read 4096 bufsize 8000 maxblocks 2033256.0 KBps (30.0 secs, 3 samples)
File Write 4096 bufsize 8000 maxblocks 1611814.0 KBps (30.0 secs, 3 samples)
File Copy 4096 bufsize 8000 maxblocks 847979.0 KBps (30.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places 128148.7 lpm (30.0 secs, 3 samples)
INDEX VALUES
TEST BASELINE RESULT INDEX
Execl Throughput 43.0 3012.9 700.7
File Copy 1024 bufsize 2000 maxblocks 3960.0 347159.0 876.7
File Copy 256 bufsize 500 maxblocks 1655.0 101000.0 610.3
File Copy 4096 bufsize 8000 maxblocks 5800.0 847979.0 1462.0
Shell Scripts (8 concurrent) 6.0 1120.3 1867.2
=========
FINAL SCORE 1004.6
This patch:
Remove refcnt from page_cgroup().
After this,
* A page is charged only when !page_mapped() && no page_cgroup is assigned.
* Anon page is newly mapped.
* File page is added to mapping->tree.
* A page is uncharged only when
* Anon page is fully unmapped.
* File page is removed from LRU.
There is no change in behavior from the user's view.
This patch also removes unnecessary calls in rmap.c which were used only for
refcnt management.
[akpm@linux-foundation.org: fix warning]
[hugh@veritas.com: fix shmem_unuse_inode charging]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: Paul Menage <menage@google.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-25 15:47:14 +07:00
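Read as an invariant, a page now carries at most one charge, taken when it first becomes visible (first anon mapping, or insertion into the page cache) and dropped at the symmetric point. A tiny C sketch of that rule with hypothetical helpers - this is not the real mem_cgroup charge API, only the bookkeeping that replaces the removed refcount:

struct page_stub {
	int charged;	/* page_cgroup assigned? */
	int mapcount;	/* anon mappings currently pointing at the page */
	int in_cache;	/* present on mapping->tree / LRU */
};

static void charge_on_first_use(struct page_stub *p)
{
	/* charge only if nothing has made the page visible yet */
	if (!p->charged && p->mapcount == 0 && !p->in_cache)
		p->charged = 1;
}

static void uncharge_on_last_use(struct page_stub *p)
{
	/* fully unmapped and gone from the cache: drop the single charge */
	if (p->charged && p->mapcount == 0 && !p->in_cache)
		p->charged = 0;
}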
|
|
|
mem_cgroup_uncharge_cache_page(page);
|
2008-02-07 15:13:53 +07:00
|
|
|
out:
|
2005-04-17 05:20:36 +07:00
|
|
|
return error;
|
|
|
|
}
|
2008-07-26 09:45:30 +07:00
|
|
|
EXPORT_SYMBOL(add_to_page_cache_locked);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
|
2005-10-21 14:18:50 +07:00
|
|
|
pgoff_t offset, gfp_t gfp_mask)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2008-10-19 10:26:32 +07:00
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = add_to_page_cache(page, mapping, offset, gfp_mask);
|
2011-08-04 06:21:27 +07:00
|
|
|
if (ret == 0)
|
|
|
|
lru_cache_add_file(page);
|
2005-04-17 05:20:36 +07:00
|
|
|
return ret;
|
|
|
|
}
|
2009-02-09 21:02:42 +07:00
|
|
|
EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-03-24 18:16:04 +07:00
|
|
|
#ifdef CONFIG_NUMA
|
2006-10-29 00:38:23 +07:00
|
|
|
struct page *__page_cache_alloc(gfp_t gfp)
|
2006-03-24 18:16:04 +07:00
|
|
|
{
|
2010-05-25 04:32:08 +07:00
|
|
|
int n;
|
|
|
|
struct page *page;
|
|
|
|
|
2006-03-24 18:16:04 +07:00
|
|
|
if (cpuset_do_page_mem_spread()) {
|
cpuset: mm: reduce large amounts of memory barrier related damage v3
Commit c0ff7453bb5c ("cpuset,mm: fix no node to alloc memory when
changing cpuset's mems") wins a super prize for the largest number of
memory barriers entered into fast paths for one commit.
[get|put]_mems_allowed is incredibly heavy with pairs of full memory
barriers inserted into a number of hot paths. This was detected while
investigating a large page allocator slowdown introduced some time
after 2.6.32. The largest portion of this overhead was shown by
oprofile to be at an mfence introduced by this commit into the page
allocator hot path.
For extra style points, the commit introduced the use of yield() in an
implementation of what looks like a spinning mutex.
This patch replaces the full memory barriers on both read and write
sides with a sequence counter with just read barriers on the fast path
side. This is much cheaper on some architectures, including x86. The
main bulk of the patch is the retry logic if the nodemask changes in a
manner that can cause a false failure.
While updating the nodemask, a check is made to see if a false failure
is a risk. If it is, the sequence number gets bumped and parallel
allocators will briefly stall while the nodemask update takes place.
In a page fault test microbenchmark, oprofile samples from
__alloc_pages_nodemask went from 4.53% of all samples to 1.15%. The
actual results were
3.3.0-rc3 3.3.0-rc3
rc3-vanilla nobarrier-v2r1
Clients 1 UserTime 0.07 ( 0.00%) 0.08 (-14.19%)
Clients 2 UserTime 0.07 ( 0.00%) 0.07 ( 2.72%)
Clients 4 UserTime 0.08 ( 0.00%) 0.07 ( 3.29%)
Clients 1 SysTime 0.70 ( 0.00%) 0.65 ( 6.65%)
Clients 2 SysTime 0.85 ( 0.00%) 0.82 ( 3.65%)
Clients 4 SysTime 1.41 ( 0.00%) 1.41 ( 0.32%)
Clients 1 WallTime 0.77 ( 0.00%) 0.74 ( 4.19%)
Clients 2 WallTime 0.47 ( 0.00%) 0.45 ( 3.73%)
Clients 4 WallTime 0.38 ( 0.00%) 0.37 ( 1.58%)
Clients 1 Flt/sec/cpu 497620.28 ( 0.00%) 520294.53 ( 4.56%)
Clients 2 Flt/sec/cpu 414639.05 ( 0.00%) 429882.01 ( 3.68%)
Clients 4 Flt/sec/cpu 257959.16 ( 0.00%) 258761.48 ( 0.31%)
Clients 1 Flt/sec 495161.39 ( 0.00%) 517292.87 ( 4.47%)
Clients 2 Flt/sec 820325.95 ( 0.00%) 850289.77 ( 3.65%)
Clients 4 Flt/sec 1020068.93 ( 0.00%) 1022674.06 ( 0.26%)
MMTests Statistics: duration
Sys Time Running Test (seconds) 135.68 132.17
User+Sys Time Running Test (seconds) 164.2 160.13
Total Elapsed Time (seconds) 123.46 120.87
The overall improvement is small but the System CPU time is much
improved and roughly in correlation to what oprofile reported (these
performance figures are without profiling so skew is expected). The
actual number of page faults is noticeably improved.
For benchmarks like kernel builds, the overall benefit is marginal but
the system CPU time is slightly reduced.
To test the actual bug the commit fixed I opened two terminals. The
first ran within a cpuset and continually ran a small program that
faulted 100M of anonymous data. In a second window, the nodemask of the
cpuset was continually randomised in a loop.
Without the commit, the program would fail every so often (usually
within 10 seconds) and obviously with the commit everything worked fine.
With this patch applied, it also worked fine so the fix should be
functionally equivalent.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Miao Xie <miaox@cn.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-03-22 06:34:11 +07:00
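The pattern described above boils down to a plain sequence counter: readers snapshot the counter, read the data, and retry only if a writer bumped the counter in between. Below is a minimal userspace C sketch of that read-retry idea, assuming illustrative names (mask_seq, mems_mask, read_mems, update_mems) rather than the kernel's get_mems_allowed()/put_mems_allowed() interface; the real code additionally limits retries to the case where a false allocation failure is actually possible.

#include <stdatomic.h>

static atomic_uint mask_seq;                  /* bumped once per nodemask update */
static _Atomic unsigned long mems_mask = 1;   /* stand-in for the allowed-nodes mask */

static void update_mems(unsigned long new_mask)
{
	atomic_store_explicit(&mems_mask, new_mask, memory_order_relaxed);
	/* publish the change; readers holding an older cookie will retry */
	atomic_fetch_add_explicit(&mask_seq, 1, memory_order_release);
}

static unsigned long read_mems(void)
{
	unsigned int cookie;
	unsigned long mask;

	do {
		cookie = atomic_load_explicit(&mask_seq, memory_order_acquire);
		mask = atomic_load_explicit(&mems_mask, memory_order_relaxed);
		/* one pass in the common case; loop only if an update raced with us */
	} while (atomic_load_explicit(&mask_seq, memory_order_acquire) != cookie);

	return mask;
}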
|
|
|
unsigned int cpuset_mems_cookie;
|
|
|
|
do {
|
|
|
|
cpuset_mems_cookie = get_mems_allowed();
|
|
|
|
n = cpuset_mem_spread_node();
|
|
|
|
page = alloc_pages_exact_node(n, gfp, 0);
|
|
|
|
} while (!put_mems_allowed(cpuset_mems_cookie) && !page);
|
|
|
|
|
2010-05-25 04:32:08 +07:00
|
|
|
return page;
|
2006-03-24 18:16:04 +07:00
|
|
|
}
|
2006-10-29 00:38:23 +07:00
|
|
|
return alloc_pages(gfp, 0);
|
2006-03-24 18:16:04 +07:00
|
|
|
}
|
2006-10-29 00:38:23 +07:00
|
|
|
EXPORT_SYMBOL(__page_cache_alloc);
|
2006-03-24 18:16:04 +07:00
|
|
|
#endif
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* In order to wait for pages to become available there must be
|
|
|
|
* waitqueues associated with pages. By using a hash table of
|
|
|
|
* waitqueues where the bucket discipline is to maintain all
|
|
|
|
* waiters on the same queue and wake all when any of the pages
|
|
|
|
* become available, and for the woken contexts to check to be
|
|
|
|
* sure the appropriate page became available, this saves space
|
|
|
|
* at a cost of "thundering herd" phenomena during rare hash
|
|
|
|
* collisions.
|
|
|
|
*/
|
|
|
|
static wait_queue_head_t *page_waitqueue(struct page *page)
|
|
|
|
{
|
|
|
|
const struct zone *zone = page_zone(page);
|
|
|
|
|
|
|
|
return &zone->wait_table[hash_ptr(page, zone->wait_table_bits)];
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void wake_up_page(struct page *page, int bit)
|
|
|
|
{
|
|
|
|
__wake_up_bit(page_waitqueue(page), &page->flags, bit);
|
|
|
|
}
|
|
|
|
|
2008-02-05 13:29:26 +07:00
|
|
|
void wait_on_page_bit(struct page *page, int bit_nr)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
DEFINE_WAIT_BIT(wait, &page->flags, bit_nr);
|
|
|
|
|
|
|
|
if (test_bit(bit_nr, &page->flags))
|
2011-03-10 14:52:07 +07:00
|
|
|
__wait_on_bit(page_waitqueue(page), &wait, sleep_on_page,
|
2005-04-17 05:20:36 +07:00
|
|
|
TASK_UNINTERRUPTIBLE);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(wait_on_page_bit);
|
|
|
|
|
2011-05-25 07:11:29 +07:00
|
|
|
int wait_on_page_bit_killable(struct page *page, int bit_nr)
|
|
|
|
{
|
|
|
|
DEFINE_WAIT_BIT(wait, &page->flags, bit_nr);
|
|
|
|
|
|
|
|
if (!test_bit(bit_nr, &page->flags))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return __wait_on_bit(page_waitqueue(page), &wait,
|
|
|
|
sleep_on_page_killable, TASK_KILLABLE);
|
|
|
|
}
|
|
|
|
|
2009-04-03 22:42:39 +07:00
|
|
|
/**
|
|
|
|
* add_page_wait_queue - Add an arbitrary waiter to a page's wait queue
|
2009-04-14 04:39:54 +07:00
|
|
|
* @page: Page defining the wait queue of interest
|
|
|
|
* @waiter: Waiter to add to the queue
|
2009-04-03 22:42:39 +07:00
|
|
|
*
|
|
|
|
* Add an arbitrary @waiter to the wait queue for the nominated @page.
|
|
|
|
*/
|
|
|
|
void add_page_wait_queue(struct page *page, wait_queue_t *waiter)
|
|
|
|
{
|
|
|
|
wait_queue_head_t *q = page_waitqueue(page);
|
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
spin_lock_irqsave(&q->lock, flags);
|
|
|
|
__add_wait_queue(q, waiter);
|
|
|
|
spin_unlock_irqrestore(&q->lock, flags);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(add_page_wait_queue);
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/**
|
2006-06-23 16:03:49 +07:00
|
|
|
* unlock_page - unlock a locked page
|
2005-04-17 05:20:36 +07:00
|
|
|
* @page: the page
|
|
|
|
*
|
|
|
|
* Unlocks the page and wakes up sleepers in ___wait_on_page_locked().
|
|
|
|
* Also wakes sleepers in wait_on_page_writeback() because the wakeup
|
|
|
|
* mechanism between PageLocked pages and PageWriteback pages is shared.
|
|
|
|
* But that's OK - sleepers in wait_on_page_writeback() just go back to sleep.
|
|
|
|
*
|
2008-10-19 10:26:59 +07:00
|
|
|
* The mb is necessary to enforce ordering between the clear_bit and the read
|
|
|
|
* of the waitqueue (to avoid SMP races with a parallel wait_on_page_locked()).
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2008-02-05 13:29:26 +07:00
|
|
|
void unlock_page(struct page *page)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2008-10-19 10:26:59 +07:00
|
|
|
VM_BUG_ON(!PageLocked(page));
|
|
|
|
clear_bit_unlock(PG_locked, &page->flags);
|
|
|
|
smp_mb__after_clear_bit();
|
2005-04-17 05:20:36 +07:00
|
|
|
wake_up_page(page, PG_locked);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(unlock_page);
|
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
|
|
|
* end_page_writeback - end writeback against a page
|
|
|
|
* @page: the page
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
void end_page_writeback(struct page *page)
|
|
|
|
{
|
2008-04-28 16:12:38 +07:00
|
|
|
if (TestClearPageReclaim(page))
|
|
|
|
rotate_reclaimable_page(page);
|
|
|
|
|
|
|
|
if (!test_clear_page_writeback(page))
|
|
|
|
BUG();
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
smp_mb__after_clear_bit();
|
|
|
|
wake_up_page(page, PG_writeback);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(end_page_writeback);
|
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
|
|
|
* __lock_page - get a lock on the page, assuming we need to sleep to get it
|
|
|
|
* @page: the page to lock
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2008-02-05 13:29:26 +07:00
|
|
|
void __lock_page(struct page *page)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
DEFINE_WAIT_BIT(wait, &page->flags, PG_locked);
|
|
|
|
|
2011-03-10 14:52:07 +07:00
|
|
|
__wait_on_bit_lock(page_waitqueue(page), &wait, sleep_on_page,
|
2005-04-17 05:20:36 +07:00
|
|
|
TASK_UNINTERRUPTIBLE);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(__lock_page);
|
|
|
|
|
2008-02-14 06:03:16 +07:00
|
|
|
int __lock_page_killable(struct page *page)
|
2007-12-06 23:18:49 +07:00
|
|
|
{
|
|
|
|
DEFINE_WAIT_BIT(wait, &page->flags, PG_locked);
|
|
|
|
|
|
|
|
return __wait_on_bit_lock(page_waitqueue(page), &wait,
|
2011-03-10 14:52:07 +07:00
|
|
|
sleep_on_page_killable, TASK_KILLABLE);
|
2007-12-06 23:18:49 +07:00
|
|
|
}
|
2009-02-09 21:02:42 +07:00
|
|
|
EXPORT_SYMBOL_GPL(__lock_page_killable);
|
2007-12-06 23:18:49 +07:00
|
|
|
|
2010-10-27 04:21:57 +07:00
|
|
|
int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
|
|
|
|
unsigned int flags)
|
|
|
|
{
|
2011-05-25 07:11:30 +07:00
|
|
|
if (flags & FAULT_FLAG_ALLOW_RETRY) {
|
|
|
|
/*
|
|
|
|
* CAUTION! In this case, mmap_sem is not released
|
|
|
|
* even though we return 0.
|
|
|
|
*/
|
|
|
|
if (flags & FAULT_FLAG_RETRY_NOWAIT)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
if (flags & FAULT_FLAG_KILLABLE)
|
|
|
|
wait_on_page_locked_killable(page);
|
|
|
|
else
|
2011-03-23 06:30:51 +07:00
|
|
|
wait_on_page_locked(page);
|
2010-10-27 04:21:57 +07:00
|
|
|
return 0;
|
2011-05-25 07:11:30 +07:00
|
|
|
} else {
|
|
|
|
if (flags & FAULT_FLAG_KILLABLE) {
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = __lock_page_killable(page);
|
|
|
|
if (ret) {
|
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
__lock_page(page);
|
|
|
|
return 1;
|
2010-10-27 04:21:57 +07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
|
|
|
* find_get_page - find and get a page reference
|
|
|
|
* @mapping: the address_space to search
|
|
|
|
* @offset: the page index
|
|
|
|
*
|
2006-09-26 13:31:35 +07:00
|
|
|
* Is there a pagecache struct page at the given (mapping, offset) tuple?
|
|
|
|
* If yes, increment its refcount and return it; if no, return NULL.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2008-07-26 09:45:31 +07:00
|
|
|
struct page *find_get_page(struct address_space *mapping, pgoff_t offset)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2008-07-26 09:45:31 +07:00
|
|
|
void **pagep;
|
2005-04-17 05:20:36 +07:00
|
|
|
struct page *page;
|
|
|
|
|
2008-07-26 09:45:31 +07:00
|
|
|
rcu_read_lock();
|
|
|
|
repeat:
|
|
|
|
page = NULL;
|
|
|
|
pagep = radix_tree_lookup_slot(&mapping->page_tree, offset);
|
|
|
|
if (pagep) {
|
|
|
|
page = radix_tree_deref_slot(pagep);
|
2010-11-12 05:05:19 +07:00
|
|
|
if (unlikely(!page))
|
|
|
|
goto out;
|
mm: let swap use exceptional entries
If swap entries are to be stored along with struct page pointers in a
radix tree, they need to be distinguished as exceptional entries.
Most of the handling of swap entries in radix tree will be contained in
shmem.c, but a few functions in filemap.c's common code need to check
for their appearance: find_get_page(), find_lock_page(),
find_get_pages() and find_get_pages_contig().
So as not to slow their fast paths, tuck those checks inside the
existing checks for unlikely radix_tree_deref_slot(); except for
find_lock_page(), where it is an added test. And make it a BUG in
find_get_pages_tag(), which is not applied to tmpfs files.
A part of the reason for eliminating shmem_readpage() earlier, was to
minimize the places where common code would need to allow for swap
entries.
The swp_entry_t known to swapfile.c must be massaged into a slightly
different form when stored in the radix tree, just as it gets massaged
into a pte_t when stored in page tables.
In an i386 kernel this limits its information (type and page offset) to
30 bits: given 32 "types" of swapfile and 4kB pagesize, that's a maximum
swapfile size of 128GB. Which is less than the 512GB we previously
allowed with X86_PAE (where the swap entry can occupy the entire upper
32 bits of a pte_t), but not a new limitation on 32-bit without PAE; and
there's not a new limitation on 64-bit (where swap filesize is already
limited to 16TB by a 32-bit page offset). Thirty areas of 128GB is
probably still enough swap for a 64GB 32-bit machine.
Provide swp_to_radix_entry() and radix_to_swp_entry() conversions, and
enforce filesize limit in read_swap_header(), just as for ptes.
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-08-04 06:21:19 +07:00
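The encoding trick behind those exceptional entries can be sketched in plain C: an aligned struct pointer never has its low bit set, so a slot value with the low bit set can be recognised as carrying a packed swap value rather than a page pointer. This is only an illustration under that assumption; the real swp_to_radix_entry()/radix_to_swp_entry() helpers and the exceptional-entry bit are defined in the kernel headers and differ in detail.

#include <assert.h>

#define EXCEPTIONAL_BIT 0x1UL	/* illustrative tag bit, clear in any aligned pointer */

static inline void *swap_to_slot_entry(unsigned long swp_val)
{
	/* shift the value up and tag it, so it can never look like a pointer */
	return (void *)((swp_val << 1) | EXCEPTIONAL_BIT);
}

static inline int slot_entry_is_exceptional(const void *entry)
{
	return ((unsigned long)entry & EXCEPTIONAL_BIT) != 0;
}

static inline unsigned long slot_entry_to_swap(const void *entry)
{
	assert(slot_entry_is_exceptional(entry));
	return (unsigned long)entry >> 1;
}

A lookup that sees such an entry simply returns it to the caller without trying to take a page reference, which is what the radix_tree_exception() branch below does.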
|
|
|
if (radix_tree_exception(page)) {
|
2011-08-04 06:21:28 +07:00
|
|
|
if (radix_tree_deref_retry(page))
|
|
|
|
goto repeat;
|
|
|
|
/*
|
|
|
|
* Otherwise, shmem/tmpfs must be storing a swap entry
|
|
|
|
* here as an exceptional entry: so return it without
|
|
|
|
* attempting to raise page count.
|
|
|
|
*/
|
|
|
|
goto out;
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
}
|
2008-07-26 09:45:31 +07:00
|
|
|
if (!page_cache_get_speculative(page))
|
|
|
|
goto repeat;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Has the page moved?
|
|
|
|
* This is part of the lockless pagecache protocol. See
|
|
|
|
* include/linux/pagemap.h for details.
|
|
|
|
*/
|
|
|
|
if (unlikely(page != *pagep)) {
|
|
|
|
page_cache_release(page);
|
|
|
|
goto repeat;
|
|
|
|
}
|
|
|
|
}
|
2010-11-12 05:05:19 +07:00
|
|
|
out:
|
2008-07-26 09:45:31 +07:00
|
|
|
rcu_read_unlock();
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
return page;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(find_get_page);
|
|
|
|
|
|
|
|
/**
|
|
|
|
* find_lock_page - locate, pin and lock a pagecache page
|
2005-05-01 22:59:26 +07:00
|
|
|
* @mapping: the address_space to search
|
|
|
|
* @offset: the page index
|
2005-04-17 05:20:36 +07:00
|
|
|
*
|
|
|
|
* Locates the desired pagecache page, locks it, increments its reference
|
|
|
|
* count and returns its address.
|
|
|
|
*
|
|
|
|
* Returns NULL if the page was not present. find_lock_page() may sleep.
|
|
|
|
*/
|
2008-07-26 09:45:31 +07:00
|
|
|
struct page *find_lock_page(struct address_space *mapping, pgoff_t offset)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct page *page;
|
|
|
|
|
|
|
|
repeat:
|
2008-07-26 09:45:31 +07:00
|
|
|
page = find_get_page(mapping, offset);
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
if (page && !radix_tree_exception(page)) {
|
2008-07-26 09:45:31 +07:00
|
|
|
lock_page(page);
|
|
|
|
/* Has the page been truncated? */
|
|
|
|
if (unlikely(page->mapping != mapping)) {
|
|
|
|
unlock_page(page);
|
|
|
|
page_cache_release(page);
|
|
|
|
goto repeat;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2008-07-26 09:45:31 +07:00
|
|
|
VM_BUG_ON(page->index != offset);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
return page;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(find_lock_page);
|
|
|
|
|
|
|
|
/**
|
|
|
|
* find_or_create_page - locate or add a pagecache page
|
2005-05-01 22:59:26 +07:00
|
|
|
* @mapping: the page's address_space
|
|
|
|
* @index: the page's index into the mapping
|
|
|
|
* @gfp_mask: page allocation mode
|
2005-04-17 05:20:36 +07:00
|
|
|
*
|
|
|
|
* Locates a page in the pagecache. If the page is not present, a new page
|
|
|
|
* is allocated using @gfp_mask and is added to the pagecache and to the VM's
|
|
|
|
* LRU list. The returned page is locked and has its reference count
|
|
|
|
* incremented.
|
|
|
|
*
|
|
|
|
* find_or_create_page() may sleep, even if @gfp_mask specifies an atomic
|
|
|
|
* allocation!
|
|
|
|
*
|
|
|
|
* find_or_create_page() returns the desired page's address, or NULL on
|
|
|
|
* memory exhaustion.
|
|
|
|
*/
|
|
|
|
struct page *find_or_create_page(struct address_space *mapping,
|
2007-10-16 15:24:37 +07:00
|
|
|
pgoff_t index, gfp_t gfp_mask)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2007-10-16 15:24:57 +07:00
|
|
|
struct page *page;
|
2005-04-17 05:20:36 +07:00
|
|
|
int err;
|
|
|
|
repeat:
|
|
|
|
page = find_lock_page(mapping, index);
|
|
|
|
if (!page) {
|
2007-10-16 15:24:57 +07:00
|
|
|
page = __page_cache_alloc(gfp_mask);
|
|
|
|
if (!page)
|
|
|
|
return NULL;
|
mm: pagecache gfp flags fix
Frustratingly, gfp_t is really divided into two classes of flags. One are
the context dependent ones (can we sleep? can we enter filesystem? block
subsystem? should we use some extra reserves, etc.). The other ones are
the type of memory required and depend on how the algorithm is implemented
rather than the point at which the memory is allocated (highmem? dma
memory? etc).
Some of the functions which allocate a page and add it to page cache take
a gfp_t, but sometimes those functions or their callers aren't really
doing the right thing: when allocating pagecache page, the memory type
should be mapping_gfp_mask(mapping). When allocating radix tree nodes,
the memory type should be kernel mapped (not highmem) memory. The gfp_t
argument should only really be needed for context dependent options.
This patch doesn't really solve that tangle in a nice way, but it does
attempt to fix a couple of bugs.
- find_or_create_page changes its radix-tree allocation to only include
the main context dependent flags in order so the pagecache page may be
allocated from arbitrary types of memory without affecting the
radix-tree. In practice, slab allocations don't come from highmem
anyway, and radix-tree only uses slab allocations. So there isn't a
practical change (unless some fs uses GFP_DMA for pages).
- grab_cache_page_nowait() is changed to allocate radix-tree nodes with
GFP_NOFS, because it is not supposed to reenter the filesystem. This
bug could cause lock recursion if a filesystem is not expecting the
function to reenter the fs (as-per documentation).
Filesystems should be careful about exactly what semantics they want and
what they get when fiddling with gfp_t masks to allocate pagecache. One
should be as liberal as possible with the type of memory that can be used,
and the same for the context-specific flags.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-07 05:40:28 +07:00
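The split described above comes down to masking: the page allocation may honour the caller's full mask, placement bits included, while the tree-node allocation keeps only the context-dependent bits (what GFP_RECLAIM_MASK collects in the kernel). A rough C sketch with made-up flag values standing in for the real gfp bits:

#define CTX_WAIT	0x01u	/* may sleep */
#define CTX_IO		0x02u	/* may start I/O */
#define CTX_FS		0x04u	/* may re-enter the filesystem */
#define PLACE_HIGHMEM	0x10u	/* placement: highmem acceptable */
#define PLACE_DMA	0x20u	/* placement: DMA zone required */

/* analogue of GFP_RECLAIM_MASK: just the context-dependent bits */
#define CTX_MASK	(CTX_WAIT | CTX_IO | CTX_FS)

static unsigned int page_alloc_mask(unsigned int caller_gfp)
{
	return caller_gfp;		/* the page may come from anywhere the caller allows */
}

static unsigned int tree_node_alloc_mask(unsigned int caller_gfp)
{
	return caller_gfp & CTX_MASK;	/* tree nodes: keep context, drop placement */
}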
|
|
|
/*
|
|
|
|
* We want a regular kernel memory (not highmem or DMA etc)
|
|
|
|
* allocation for the radix tree nodes, but we need to honour
|
|
|
|
* the context-specific requirements the caller has asked for.
|
|
|
|
* GFP_RECLAIM_MASK collects those requirements.
|
|
|
|
*/
|
|
|
|
err = add_to_page_cache_lru(page, mapping, index,
|
|
|
|
(gfp_mask & GFP_RECLAIM_MASK));
|
2007-10-16 15:24:57 +07:00
|
|
|
if (unlikely(err)) {
|
|
|
|
page_cache_release(page);
|
|
|
|
page = NULL;
|
|
|
|
if (err == -EEXIST)
|
|
|
|
goto repeat;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
return page;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(find_or_create_page);
|
|
|
|
|
|
|
|
/**
|
|
|
|
* find_get_pages - gang pagecache lookup
|
|
|
|
* @mapping: The address_space to search
|
|
|
|
* @start: The starting page index
|
|
|
|
* @nr_pages: The maximum number of pages
|
|
|
|
* @pages: Where the resulting pages are placed
|
|
|
|
*
|
|
|
|
* find_get_pages() will search for and return a group of up to
|
|
|
|
* @nr_pages pages in the mapping. The pages are placed at @pages.
|
|
|
|
* find_get_pages() takes a reference against the returned pages.
|
|
|
|
*
|
|
|
|
* The search returns a group of mapping-contiguous pages with ascending
|
|
|
|
* indexes. There may be holes in the indices due to not-present pages.
|
|
|
|
*
|
|
|
|
* find_get_pages() returns the number of pages which were found.
|
|
|
|
*/
|
|
|
|
unsigned find_get_pages(struct address_space *mapping, pgoff_t start,
|
|
|
|
unsigned int nr_pages, struct page **pages)
|
|
|
|
{
|
2012-03-29 04:42:54 +07:00
|
|
|
struct radix_tree_iter iter;
|
|
|
|
void **slot;
|
|
|
|
unsigned ret = 0;
|
|
|
|
|
|
|
|
if (unlikely(!nr_pages))
|
|
|
|
return 0;
|
2008-07-26 09:45:31 +07:00
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
restart:
|
2012-03-29 04:42:54 +07:00
|
|
|
radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
|
2008-07-26 09:45:31 +07:00
|
|
|
struct page *page;
|
|
|
|
repeat:
|
2012-03-29 04:42:54 +07:00
|
|
|
page = radix_tree_deref_slot(slot);
|
2008-07-26 09:45:31 +07:00
|
|
|
if (unlikely(!page))
|
|
|
|
continue;
|
2011-03-23 06:33:06 +07:00
|
|
|
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
if (radix_tree_exception(page)) {
|
2011-08-04 06:21:28 +07:00
|
|
|
if (radix_tree_deref_retry(page)) {
|
|
|
|
/*
|
|
|
|
* Transient condition which can only trigger
|
|
|
|
* when entry at index 0 moves out of or back
|
|
|
|
* to root: none yet gotten, safe to restart.
|
|
|
|
*/
|
2012-03-29 04:42:54 +07:00
|
|
|
WARN_ON(iter.index);
|
2011-08-04 06:21:28 +07:00
|
|
|
goto restart;
|
|
|
|
}
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
/*
|
2011-08-04 06:21:28 +07:00
|
|
|
* Otherwise, shmem/tmpfs must be storing a swap entry
|
|
|
|
* here as an exceptional entry: so skip over it -
|
|
|
|
* we only reach this from invalidate_mapping_pages().
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
*/
|
2011-08-04 06:21:28 +07:00
|
|
|
continue;
|
2010-11-12 05:05:19 +07:00
|
|
|
}
|
2008-07-26 09:45:31 +07:00
|
|
|
|
|
|
|
if (!page_cache_get_speculative(page))
|
|
|
|
goto repeat;
|
|
|
|
|
|
|
|
/* Has the page moved? */
|
2012-03-29 04:42:54 +07:00
|
|
|
if (unlikely(page != *slot)) {
|
2008-07-26 09:45:31 +07:00
|
|
|
page_cache_release(page);
|
|
|
|
goto repeat;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2008-07-26 09:45:31 +07:00
|
|
|
pages[ret] = page;
|
2012-03-29 04:42:54 +07:00
|
|
|
if (++ret == nr_pages)
|
|
|
|
break;
|
2008-07-26 09:45:31 +07:00
|
|
|
}
|
2011-03-23 06:33:07 +07:00
|
|
|
|
2008-07-26 09:45:31 +07:00
|
|
|
rcu_read_unlock();
|
2005-04-17 05:20:36 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2006-04-27 13:46:01 +07:00
|
|
|
/**
|
|
|
|
* find_get_pages_contig - gang contiguous pagecache lookup
|
|
|
|
* @mapping: The address_space to search
|
|
|
|
* @index: The starting page index
|
|
|
|
* @nr_pages: The maximum number of pages
|
|
|
|
* @pages: Where the resulting pages are placed
|
|
|
|
*
|
|
|
|
* find_get_pages_contig() works exactly like find_get_pages(), except
|
|
|
|
* that the returned number of pages are guaranteed to be contiguous.
|
|
|
|
*
|
|
|
|
* find_get_pages_contig() returns the number of pages which were found.
|
|
|
|
*/
|
|
|
|
unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
|
|
|
|
unsigned int nr_pages, struct page **pages)
|
|
|
|
{
|
2012-03-29 04:42:54 +07:00
|
|
|
struct radix_tree_iter iter;
|
|
|
|
void **slot;
|
|
|
|
unsigned int ret = 0;
|
|
|
|
|
|
|
|
if (unlikely(!nr_pages))
|
|
|
|
return 0;
|
2008-07-26 09:45:31 +07:00
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
restart:
|
2012-03-29 04:42:54 +07:00
|
|
|
radix_tree_for_each_contig(slot, &mapping->page_tree, &iter, index) {
|
2008-07-26 09:45:31 +07:00
|
|
|
struct page *page;
|
|
|
|
repeat:
|
2012-03-29 04:42:54 +07:00
|
|
|
page = radix_tree_deref_slot(slot);
|
|
|
|
/* The hole, there is no reason to continue */
|
2008-07-26 09:45:31 +07:00
|
|
|
if (unlikely(!page))
|
2012-03-29 04:42:54 +07:00
|
|
|
break;
|
2011-03-23 06:33:06 +07:00
|
|
|
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
if (radix_tree_exception(page)) {
|
2011-08-04 06:21:28 +07:00
|
|
|
if (radix_tree_deref_retry(page)) {
|
|
|
|
/*
|
|
|
|
* Transient condition which can only trigger
|
|
|
|
* when entry at index 0 moves out of or back
|
|
|
|
* to root: none yet gotten, safe to restart.
|
|
|
|
*/
|
|
|
|
goto restart;
|
|
|
|
}
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
/*
|
2011-08-04 06:21:28 +07:00
|
|
|
* Otherwise, shmem/tmpfs must be storing a swap entry
|
|
|
|
* here as an exceptional entry: so stop looking for
|
|
|
|
* contiguous pages.
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
*/
|
2011-08-04 06:21:28 +07:00
|
|
|
break;
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
}
|
2006-04-27 13:46:01 +07:00
|
|
|
|
2008-07-26 09:45:31 +07:00
|
|
|
if (!page_cache_get_speculative(page))
|
|
|
|
goto repeat;
|
|
|
|
|
|
|
|
/* Has the page moved? */
|
2012-03-29 04:42:54 +07:00
|
|
|
if (unlikely(page != *slot)) {
|
2008-07-26 09:45:31 +07:00
|
|
|
page_cache_release(page);
|
|
|
|
goto repeat;
|
|
|
|
}
|
|
|
|
|
2011-01-14 06:45:51 +07:00
|
|
|
/*
|
|
|
|
* must check mapping and index after taking the ref.
|
|
|
|
* otherwise we can get both false positives and false
|
|
|
|
* negatives, which is just confusing to the caller.
|
|
|
|
*/
|
2012-03-29 04:42:54 +07:00
|
|
|
if (page->mapping == NULL || page->index != iter.index) {
|
2011-01-14 06:45:51 +07:00
|
|
|
page_cache_release(page);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2008-07-26 09:45:31 +07:00
|
|
|
pages[ret] = page;
|
2012-03-29 04:42:54 +07:00
|
|
|
if (++ret == nr_pages)
|
|
|
|
break;
|
2006-04-27 13:46:01 +07:00
|
|
|
}
|
2008-07-26 09:45:31 +07:00
|
|
|
rcu_read_unlock();
|
|
|
|
return ret;
|
2006-04-27 13:46:01 +07:00
|
|
|
}
|
2007-05-09 16:33:44 +07:00
|
|
|
EXPORT_SYMBOL(find_get_pages_contig);
|
2006-04-27 13:46:01 +07:00
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
|
|
|
* find_get_pages_tag - find and return pages that match @tag
|
|
|
|
* @mapping: the address_space to search
|
|
|
|
* @index: the starting page index
|
|
|
|
* @tag: the tag index
|
|
|
|
* @nr_pages: the maximum number of pages
|
|
|
|
* @pages: where the resulting pages are placed
|
|
|
|
*
|
2005-04-17 05:20:36 +07:00
|
|
|
* Like find_get_pages, except we only return pages which are tagged with
|
2006-06-23 16:03:49 +07:00
|
|
|
* @tag. We update @index to index the next page for the traversal.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
unsigned find_get_pages_tag(struct address_space *mapping, pgoff_t *index,
|
|
|
|
int tag, unsigned int nr_pages, struct page **pages)
|
|
|
|
{
|
2012-03-29 04:42:54 +07:00
|
|
|
struct radix_tree_iter iter;
|
|
|
|
void **slot;
|
|
|
|
unsigned ret = 0;
|
|
|
|
|
|
|
|
if (unlikely(!nr_pages))
|
|
|
|
return 0;
|
2008-07-26 09:45:31 +07:00
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
restart:
|
2012-03-29 04:42:54 +07:00
|
|
|
radix_tree_for_each_tagged(slot, &mapping->page_tree,
|
|
|
|
&iter, *index, tag) {
|
2008-07-26 09:45:31 +07:00
|
|
|
struct page *page;
|
|
|
|
repeat:
|
2012-03-29 04:42:54 +07:00
|
|
|
page = radix_tree_deref_slot(slot);
|
2008-07-26 09:45:31 +07:00
|
|
|
if (unlikely(!page))
|
|
|
|
continue;
|
2011-03-23 06:33:06 +07:00
|
|
|
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
if (radix_tree_exception(page)) {
|
2011-08-04 06:21:28 +07:00
|
|
|
if (radix_tree_deref_retry(page)) {
|
|
|
|
/*
|
|
|
|
* Transient condition which can only trigger
|
|
|
|
* when entry at index 0 moves out of or back
|
|
|
|
* to root: none yet gotten, safe to restart.
|
|
|
|
*/
|
|
|
|
goto restart;
|
|
|
|
}
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
/*
|
2011-08-04 06:21:28 +07:00
|
|
|
* This function is never used on a shmem/tmpfs
|
|
|
|
* mapping, so a swap entry won't be found here.
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
*/
|
2011-08-04 06:21:28 +07:00
|
|
|
BUG();
|
mm: let swap use exceptional entries
2011-08-04 06:21:19 +07:00
|
|
|
}
|
2008-07-26 09:45:31 +07:00
|
|
|
|
|
|
|
if (!page_cache_get_speculative(page))
|
|
|
|
goto repeat;
|
|
|
|
|
|
|
|
/* Has the page moved? */
|
2012-03-29 04:42:54 +07:00
|
|
|
if (unlikely(page != *slot)) {
|
2008-07-26 09:45:31 +07:00
|
|
|
page_cache_release(page);
|
|
|
|
goto repeat;
|
|
|
|
}
|
|
|
|
|
|
|
|
pages[ret] = page;
|
2012-03-29 04:42:54 +07:00
|
|
|
if (++ret == nr_pages)
|
|
|
|
break;
|
2008-07-26 09:45:31 +07:00
|
|
|
}
|
2011-03-23 06:33:07 +07:00
|
|
|
|
2008-07-26 09:45:31 +07:00
|
|
|
rcu_read_unlock();
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (ret)
|
|
|
|
*index = pages[ret - 1]->index + 1;
|
2008-07-26 09:45:31 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
return ret;
|
|
|
|
}
|
2007-05-09 16:33:44 +07:00
|
|
|
EXPORT_SYMBOL(find_get_pages_tag);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
|
|
|
* grab_cache_page_nowait - returns locked page at given index in given cache
|
|
|
|
* @mapping: target address_space
|
|
|
|
* @index: the page index
|
|
|
|
*
|
2007-02-10 16:45:59 +07:00
|
|
|
* Same as grab_cache_page(), but do not wait if the page is unavailable.
|
2005-04-17 05:20:36 +07:00
|
|
|
* This is intended for speculative data generators, where the data can
|
|
|
|
* be regenerated if the page couldn't be grabbed. This routine should
|
|
|
|
* be safe to call while holding the lock for another page.
|
|
|
|
*
|
|
|
|
* Clear __GFP_FS when allocating the page to avoid recursion into the fs
|
|
|
|
* and deadlock against the caller's locked page.
|
|
|
|
*/
|
|
|
|
struct page *
|
2007-10-16 15:24:37 +07:00
|
|
|
grab_cache_page_nowait(struct address_space *mapping, pgoff_t index)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct page *page = find_get_page(mapping, index);
|
|
|
|
|
|
|
|
if (page) {
|
2008-08-02 17:01:03 +07:00
|
|
|
if (trylock_page(page))
|
2005-04-17 05:20:36 +07:00
|
|
|
return page;
|
|
|
|
page_cache_release(page);
|
|
|
|
return NULL;
|
|
|
|
}
|
2006-10-29 00:38:23 +07:00
|
|
|
page = __page_cache_alloc(mapping_gfp_mask(mapping) & ~__GFP_FS);
|
mm: pagecache gfp flags fix
Frustratingly, gfp_t is really divided into two classes of flags. One are
the context dependent ones (can we sleep? can we enter filesystem? block
subsystem? should we use some extra reserves, etc.). The other ones are
the type of memory required and depend on how the algorithm is implemented
rather than the point at which the memory is allocated (highmem? dma
memory? etc).
Some of the functions which allocate a page and add it to page cache take
a gfp_t, but sometimes those functions or their callers aren't really
doing the right thing: when allocating pagecache page, the memory type
should be mapping_gfp_mask(mapping). When allocating radix tree nodes,
the memory type should be kernel mapped (not highmem) memory. The gfp_t
argument should only really be needed for context dependent options.
This patch doesn't really solve that tangle in a nice way, but it does
attempt to fix a couple of bugs.
- find_or_create_page changes its radix-tree allocation to only include
the main context dependent flags, so that the pagecache page may be
allocated from arbitrary types of memory without affecting the
radix-tree. In practice, slab allocations don't come from highmem
anyway, and radix-tree only uses slab allocations. So there isn't a
practical change (unless some fs uses GFP_DMA for pages).
- grab_cache_page_nowait() is changed to allocate radix-tree nodes with
GFP_NOFS, because it is not supposed to reenter the filesystem. This
bug could cause lock recursion if a filesystem is not expecting the
function to reenter the fs (as-per documentation).
Filesystems should be careful about exactly what semantics they want and
what they get when fiddling with gfp_t masks to allocate pagecache. One
should be as liberal as possible with the type of memory that can be used,
and the same for the context specific flags.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-07 05:40:28 +07:00
|
|
|
if (page && add_to_page_cache_lru(page, mapping, index, GFP_NOFS)) {
|
2005-04-17 05:20:36 +07:00
|
|
|
page_cache_release(page);
|
|
|
|
page = NULL;
|
|
|
|
}
|
|
|
|
return page;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(grab_cache_page_nowait);
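The split between "memory type" bits and "context" bits that the pagecache
gfp fix above describes can be illustrated with plain bit masks. The sketch
below is a minimal userspace model, not kernel code: the flag names and
values are made-up stand-ins, chosen only to mirror the shape of
mapping_gfp_mask(mapping) & ~__GFP_FS for the page and GFP_NOFS for the
radix-tree node.

#include <stdio.h>

/* Illustrative stand-ins for gfp bits; the values are arbitrary. */
#define GFP_WAIT    0x01  /* context: may sleep */
#define GFP_IO      0x02  /* context: may start I/O */
#define GFP_FS      0x04  /* context: may re-enter the filesystem */
#define GFP_HIGHMEM 0x10  /* type: highmem is acceptable */

#define GFP_CONTEXT_MASK (GFP_WAIT | GFP_IO | GFP_FS)

int main(void)
{
	/* A mapping that allows highmem pages and the full context. */
	unsigned int mapping_mask = GFP_WAIT | GFP_IO | GFP_FS | GFP_HIGHMEM;

	/* Pagecache page: keep the memory type, drop FS re-entry,
	 * mirroring mapping_gfp_mask(mapping) & ~__GFP_FS above. */
	unsigned int page_mask = mapping_mask & ~GFP_FS;

	/* Radix-tree node: context flags only, never highmem,
	 * mirroring the GFP_NOFS used for the tree insertion. */
	unsigned int tree_mask = (mapping_mask & GFP_CONTEXT_MASK) & ~GFP_FS;

	printf("page allocation mask: 0x%x\n", page_mask);
	printf("tree allocation mask: 0x%x\n", tree_mask);
	return 0;
}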
|
|
|
|
|
[PATCH] readahead: backoff on I/O error
Backoff readahead size exponentially on I/O error.
Michael Tokarev <mjt@tls.msk.ru> described the problem as:
[QUOTE]
Suppose there's a CD-rom with a scratch/etc, one sector is unreadable.
In order to "fix" it, one have to read it and write to another CD-rom,
or something.. or just ignore the error (if it's just a skip in a video
stream). Let's assume the unreadable block is number U.
But current behavior is just insane. An application requests block
number N, which is before U. Kernel tries to read-ahead blocks N..U.
Cdrom drive tries to read it, re-read it.. for some time. Finally,
when all the N..U-1 blocks are read, kernel returns block number N
(as requested) to an application, successfully.
Now an app requests block number N+1, and kernel tries to read
blocks N+1..U+1. Retrying again as in previous step.
And so on, up to when an app requests block number U-1. And when,
finally, it requests block U, it receives read error.
So, the kernel currently tries to re-read the same failing block as
many times as the current readahead value (256 (times?) by default).
This whole process already killed my cdrom drive (I posted about it
to LKML several months ago) - literally, the drive has fried, and
does not work anymore. Of course that problem was a bug in firmware
(or whatever) of the drive *too*, but.. main problem with that is
current readahead logic as described above.
[/QUOTE]
Which was confirmed by Jens Axboe <axboe@suse.de>:
[QUOTE]
For ide-cd, it tends to only end the first part of the request on a
medium error. So you may see a lot of repeats :/
[/QUOTE]
With this patch, retries are expected to be reduced from, say, 256, to 5.
[akpm@osdl.org: cleanups]
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-25 19:48:43 +07:00
|
|
|
/*
|
|
|
|
* CD/DVDs are error prone. When a medium error occurs, the driver may fail
|
|
|
|
* a _large_ part of the i/o request. Imagine the worst scenario:
|
|
|
|
*
|
|
|
|
* ---R__________________________________________B__________
|
|
|
|
* ^ reading here ^ bad block(assume 4k)
|
|
|
|
*
|
|
|
|
* read(R) => miss => readahead(R...B) => media error => frustrating retries
|
|
|
|
* => failing the whole request => read(R) => read(R+1) =>
|
|
|
|
* readahead(R+1...B+1) => bang => read(R+2) => read(R+3) =>
|
|
|
|
* readahead(R+3...B+2) => bang => read(R+3) => read(R+4) =>
|
|
|
|
* readahead(R+4...B+3) => bang => read(R+4) => read(R+5) => ......
|
|
|
|
*
|
|
|
|
* It is going insane. Fix it by quickly scaling down the readahead size.
|
|
|
|
*/
|
|
|
|
static void shrink_readahead_size_eio(struct file *filp,
|
|
|
|
struct file_ra_state *ra)
|
|
|
|
{
|
|
|
|
ra->ra_pages /= 4;
|
|
|
|
}
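To see why the commit message above expects retries to drop from roughly 256
to about 5, the sketch below (plain userspace C, with an assumed starting
window of 256 pages) just repeats the ra_pages /= 4 step that
shrink_readahead_size_eio() applies on each media error.

#include <stdio.h>

int main(void)
{
	unsigned long ra_pages = 256;	/* assumed initial readahead window */
	int errors = 0;

	/* Each I/O error quarters the window, as in shrink_readahead_size_eio(). */
	while (ra_pages > 0) {
		errors++;
		ra_pages /= 4;
		printf("after error %d: readahead window = %lu pages\n",
		       errors, ra_pages);
	}
	/* 256 -> 64 -> 16 -> 4 -> 1 -> 0: about five shrink steps. */
	return 0;
}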
|
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
2008-02-08 19:21:24 +07:00
|
|
|
* do_generic_file_read - generic file read routine
|
2006-06-23 16:03:49 +07:00
|
|
|
* @filp: the file to read
|
|
|
|
* @ppos: current file position
|
|
|
|
* @desc: read_descriptor
|
|
|
|
* @actor: read method
|
|
|
|
*
|
2005-04-17 05:20:36 +07:00
|
|
|
* This is a generic file read routine, and uses the
|
2006-06-23 16:03:49 +07:00
|
|
|
* mapping->a_ops->readpage() function for the actual low-level stuff.
|
2005-04-17 05:20:36 +07:00
|
|
|
*
|
|
|
|
* This is really ugly. But the goto's actually try to clarify some
|
|
|
|
* of the logic when it comes to error handling etc.
|
|
|
|
*/
|
2008-02-08 19:21:24 +07:00
|
|
|
static void do_generic_file_read(struct file *filp, loff_t *ppos,
|
|
|
|
read_descriptor_t *desc, read_actor_t actor)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2008-02-08 19:21:24 +07:00
|
|
|
struct address_space *mapping = filp->f_mapping;
|
2005-04-17 05:20:36 +07:00
|
|
|
struct inode *inode = mapping->host;
|
2008-02-08 19:21:24 +07:00
|
|
|
struct file_ra_state *ra = &filp->f_ra;
|
2007-10-16 15:24:37 +07:00
|
|
|
pgoff_t index;
|
|
|
|
pgoff_t last_index;
|
|
|
|
pgoff_t prev_index;
|
|
|
|
unsigned long offset; /* offset into pagecache page */
|
2007-05-07 04:49:25 +07:00
|
|
|
unsigned int prev_offset;
|
2005-04-17 05:20:36 +07:00
|
|
|
int error;
|
|
|
|
|
|
|
|
index = *ppos >> PAGE_CACHE_SHIFT;
|
2007-10-16 15:24:35 +07:00
|
|
|
prev_index = ra->prev_pos >> PAGE_CACHE_SHIFT;
|
|
|
|
prev_offset = ra->prev_pos & (PAGE_CACHE_SIZE-1);
|
2005-04-17 05:20:36 +07:00
|
|
|
last_index = (*ppos + desc->count + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT;
|
|
|
|
offset = *ppos & ~PAGE_CACHE_MASK;
|
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
struct page *page;
|
2007-10-16 15:24:37 +07:00
|
|
|
pgoff_t end_index;
|
2007-07-17 18:03:04 +07:00
|
|
|
loff_t isize;
|
2005-04-17 05:20:36 +07:00
|
|
|
unsigned long nr, ret;
|
|
|
|
|
|
|
|
cond_resched();
|
|
|
|
find_page:
|
|
|
|
page = find_get_page(mapping, index);
|
2007-07-19 15:48:02 +07:00
|
|
|
if (!page) {
|
2007-07-19 15:48:08 +07:00
|
|
|
page_cache_sync_readahead(mapping,
|
2007-10-16 15:24:35 +07:00
|
|
|
ra, filp,
|
2007-07-19 15:48:02 +07:00
|
|
|
index, last_index - index);
|
|
|
|
page = find_get_page(mapping, index);
|
|
|
|
if (unlikely(page == NULL))
|
|
|
|
goto no_cached_page;
|
|
|
|
}
|
|
|
|
if (PageReadahead(page)) {
|
2007-07-19 15:48:08 +07:00
|
|
|
page_cache_async_readahead(mapping,
|
2007-10-16 15:24:35 +07:00
|
|
|
ra, filp, page,
|
2007-07-19 15:48:02 +07:00
|
|
|
index, last_index - index);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
vfs: pagecache usage optimization for pagesize!=blocksize
When we read part of a file through the pagecache, and there is a
pagecache page at the corresponding index but that page is not uptodate,
read IO is issued and the page becomes uptodate.
This is fine in a pagesize == blocksize environment, but there is room
for improvement when pagesize != blocksize. In that case a page can have
multiple buffers, and even if the page is not uptodate, some of its
buffers can be uptodate.
So I suggest that when all buffers which correspond to the part of the
file that we want to read are uptodate, we use this pagecache page and
copy data from it to the user buffer even though the page is not
uptodate. This can reduce read IO and improve system throughput.
I wrote a benchmark program and measured the following numbers with it.
This benchmark does:
1: mount and open a test file.
2: create a 512MB file.
3: close a file and umount.
4: mount and again open a test file.
5: pwrite randomly 300000 times on a test file. offset is aligned
by IO size(1024bytes).
6: measure time of preading randomly 100000 times on a test file.
The result was:
2.6.26
330 sec
2.6.26-patched
226 sec
Arch:i386
Filesystem:ext3
Blocksize:1024 bytes
Memory: 1GB
On ext3/4, a file is written through the buffer/block layer, so random
read/write mixed workloads, or random reads after random writes, are
optimized by this patch in a pagesize != blocksize environment. This
test result shows that.
The benchmark program is as follows:
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mount.h>
#define LEN 1024
#define LOOP 1024*512 /* 512MB */
int main(void)
{
unsigned long i, offset, filesize;
int fd;
char buf[LEN];
time_t t1, t2;
if (mount("/dev/sda1", "/root/test1/", "ext3", 0, 0) < 0) {
perror("cannot mount\n");
exit(1);
}
memset(buf, 0, LEN);
fd = open("/root/test1/testfile", O_CREAT|O_RDWR|O_TRUNC);
if (fd < 0) {
perror("cannot open file\n");
exit(1);
}
for (i = 0; i < LOOP; i++)
write(fd, buf, LEN);
close(fd);
if (umount("/root/test1/") < 0) {
perror("cannot umount\n");
exit(1);
}
if (mount("/dev/sda1", "/root/test1/", "ext3", 0, 0) < 0) {
perror("cannot mount\n");
exit(1);
}
fd = open("/root/test1/testfile", O_RDWR);
if (fd < 0) {
perror("cannot open file\n");
exit(1);
}
filesize = LEN * LOOP;
for (i = 0; i < 300000; i++){
offset = (random() % filesize) & (~(LEN - 1));
pwrite(fd, buf, LEN, offset);
}
printf("start test\n");
time(&t1);
for (i = 0; i < 100000; i++){
offset = (random() % filesize) & (~(LEN - 1));
pread(fd, buf, LEN, offset);
}
time(&t2);
printf("%ld sec\n", t2-t1);
close(fd);
if (umount("/root/test1/") < 0) {
perror("cannot umount\n");
exit(1);
}
}
Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Jan Kara <jack@ucw.cz>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-29 05:46:36 +07:00
|
|
|
if (!PageUptodate(page)) {
|
|
|
|
if (inode->i_blkbits == PAGE_CACHE_SHIFT ||
|
|
|
|
!mapping->a_ops->is_partially_uptodate)
|
|
|
|
goto page_not_up_to_date;
|
2008-08-02 17:01:03 +07:00
|
|
|
if (!trylock_page(page))
|
vfs: pagecache usage optimization for pagesize!=blocksize
2008-07-29 05:46:36 +07:00
|
|
|
goto page_not_up_to_date;
|
mm/vfs: revalidate page->mapping in do_generic_file_read()
70 hours into some stress tests of a 2.6.32-based enterprise kernel, we
ran into a NULL dereference in here:
int block_is_partially_uptodate(struct page *page, read_descriptor_t *desc,
unsigned long from)
{
----> struct inode *inode = page->mapping->host;
It looks like page->mapping was the culprit. (xmon trace is below).
After closer examination, I realized that do_generic_file_read() does a
find_get_page(), and eventually locks the page before calling
block_is_partially_uptodate(). However, it doesn't revalidate the
page->mapping after the page is locked. So, there's a small window
between the find_get_page() and ->is_partially_uptodate() where the page
could get truncated and page->mapping cleared.
We _have_ a reference, so it can't get reclaimed, but it certainly
can be truncated.
I think the correct thing is to check page->mapping after the
trylock_page(), and jump out if it got truncated. This patch has been
running in the test environment for a month or so now, and we have not
seen this bug pop up again.
xmon info:
1f:mon> e
cpu 0x1f: Vector: 300 (Data Access) at [c0000002ae36f770]
pc: c0000000001e7a6c: .block_is_partially_uptodate+0xc/0x100
lr: c000000000142944: .generic_file_aio_read+0x1e4/0x770
sp: c0000002ae36f9f0
msr: 8000000000009032
dar: 0
dsisr: 40000000
current = 0xc000000378f99e30
paca = 0xc000000000f66300
pid = 21946, comm = bash
1f:mon> r
R00 = 0025c0500000006d R16 = 0000000000000000
R01 = c0000002ae36f9f0 R17 = c000000362cd3af0
R02 = c000000000e8cd80 R18 = ffffffffffffffff
R03 = c0000000031d0f88 R19 = 0000000000000001
R04 = c0000002ae36fa68 R20 = c0000003bb97b8a0
R05 = 0000000000000000 R21 = c0000002ae36fa68
R06 = 0000000000000000 R22 = 0000000000000000
R07 = 0000000000000001 R23 = c0000002ae36fbb0
R08 = 0000000000000002 R24 = 0000000000000000
R09 = 0000000000000000 R25 = c000000362cd3a80
R10 = 0000000000000000 R26 = 0000000000000002
R11 = c0000000001e7b60 R27 = 0000000000000000
R12 = 0000000042000484 R28 = 0000000000000001
R13 = c000000000f66300 R29 = c0000003bb97b9b8
R14 = 0000000000000001 R30 = c000000000e28a08
R15 = 000000000000ffff R31 = c0000000031d0f88
pc = c0000000001e7a6c .block_is_partially_uptodate+0xc/0x100
lr = c000000000142944 .generic_file_aio_read+0x1e4/0x770
msr = 8000000000009032 cr = 22000488
ctr = c0000000001e7a60 xer = 0000000020000000 trap = 300
dar = 0000000000000000 dsisr = 40000000
1f:mon> t
[link register ] c000000000142944 .generic_file_aio_read+0x1e4/0x770
[c0000002ae36f9f0] c000000000142a14 .generic_file_aio_read+0x2b4/0x770 (unreliable)
[c0000002ae36fb40] c0000000001b03e4 .do_sync_read+0xd4/0x160
[c0000002ae36fce0] c0000000001b153c .vfs_read+0xec/0x1f0
[c0000002ae36fd80] c0000000001b1768 .SyS_read+0x58/0xb0
[c0000002ae36fe30] c00000000000852c syscall_exit+0x0/0x40
--- Exception: c00 (System Call) at 00000080a840bc54
SP (fffca15df30) is in userspace
1f:mon> di c0000000001e7a6c
c0000000001e7a6c e9290000 ld r9,0(r9)
c0000000001e7a70 418200c0 beq c0000000001e7b30 # .block_is_partially_uptodate+0xd0/0x100
c0000000001e7a74 e9440008 ld r10,8(r4)
c0000000001e7a78 78a80020 clrldi r8,r5,32
c0000000001e7a7c 3c000001 lis r0,1
c0000000001e7a80 812900a8 lwz r9,168(r9)
c0000000001e7a84 39600001 li r11,1
c0000000001e7a88 7c080050 subf r0,r8,r0
c0000000001e7a8c 7f805040 cmplw cr7,r0,r10
c0000000001e7a90 7d6b4830 slw r11,r11,r9
c0000000001e7a94 796b0020 clrldi r11,r11,32
c0000000001e7a98 419d00a8 bgt cr7,c0000000001e7b40 # .block_is_partially_uptodate+0xe0/0x100
c0000000001e7a9c 7fa55840 cmpld cr7,r5,r11
c0000000001e7aa0 7d004214 add r8,r0,r8
c0000000001e7aa4 79080020 clrldi r8,r8,32
c0000000001e7aa8 419c0078 blt cr7,c0000000001e7b20 # .block_is_partially_uptodate+0xc0/0x100
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: <arunabal@in.ibm.com>
Cc: <sbest@us.ibm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-11-12 05:05:15 +07:00
|
|
|
/* Did it get truncated before we got the lock? */
|
|
|
|
if (!page->mapping)
|
|
|
|
goto page_not_up_to_date_locked;
|
vfs: pagecache usage optimization for pagesize!=blocksize
2008-07-29 05:46:36 +07:00
|
|
|
if (!mapping->a_ops->is_partially_uptodate(page,
|
|
|
|
desc, offset))
|
|
|
|
goto page_not_up_to_date_locked;
|
|
|
|
unlock_page(page);
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
page_ok:
|
2007-07-17 18:03:04 +07:00
|
|
|
/*
|
|
|
|
* i_size must be checked after we know the page is Uptodate.
|
|
|
|
*
|
|
|
|
* Checking i_size after the check allows us to calculate
|
|
|
|
* the correct value for "nr", which means the zero-filled
|
|
|
|
* part of the page is not copied back to userspace (unless
|
|
|
|
* another truncate extends the file - this is desired though).
|
|
|
|
*/
|
|
|
|
|
|
|
|
isize = i_size_read(inode);
|
|
|
|
end_index = (isize - 1) >> PAGE_CACHE_SHIFT;
|
|
|
|
if (unlikely(!isize || index > end_index)) {
|
|
|
|
page_cache_release(page);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* nr is the maximum number of bytes to copy from this page */
|
|
|
|
nr = PAGE_CACHE_SIZE;
|
|
|
|
if (index == end_index) {
|
|
|
|
nr = ((isize - 1) & ~PAGE_CACHE_MASK) + 1;
|
|
|
|
if (nr <= offset) {
|
|
|
|
page_cache_release(page);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
nr = nr - offset;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/* If users can be writing to this page using arbitrary
|
|
|
|
* virtual addresses, take care about potential aliasing
|
|
|
|
* before reading the page on the kernel side.
|
|
|
|
*/
|
|
|
|
if (mapping_writably_mapped(mapping))
|
|
|
|
flush_dcache_page(page);
|
|
|
|
|
|
|
|
/*
|
2007-05-07 04:49:25 +07:00
|
|
|
* When a sequential read accesses a page several times,
|
|
|
|
* only mark it as accessed the first time.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2007-05-07 04:49:25 +07:00
|
|
|
if (prev_index != index || offset != prev_offset)
|
2005-04-17 05:20:36 +07:00
|
|
|
mark_page_accessed(page);
|
|
|
|
prev_index = index;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Ok, we have the page, and it's up-to-date, so
|
|
|
|
* now we can copy it to user space...
|
|
|
|
*
|
|
|
|
* The actor routine returns how many bytes were actually used..
|
|
|
|
* NOTE! This may not be the same as how much of a user buffer
|
|
|
|
* we filled up (we may be padding etc), so we can only update
|
|
|
|
* "pos" here (the actor routine has to update the user buffer
|
|
|
|
* pointers and the remaining count).
|
|
|
|
*/
|
|
|
|
ret = actor(desc, page, offset, nr);
|
|
|
|
offset += ret;
|
|
|
|
index += offset >> PAGE_CACHE_SHIFT;
|
|
|
|
offset &= ~PAGE_CACHE_MASK;
|
2007-05-07 04:49:26 +07:00
|
|
|
prev_offset = offset;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
page_cache_release(page);
|
|
|
|
if (ret == nr && desc->count)
|
|
|
|
continue;
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
page_not_up_to_date:
|
|
|
|
/* Get exclusive access to the page ... */
|
do_generic_file_read: s/EINTR/EIO/ if lock_page_killable() fails
If lock_page_killable() fails because the task was killed by SIGKILL or
any other fatal signal, do_generic_file_read() returns -EIO.
This seems to be OK, because in fact the userspace won't see this error,
the task will dequeue SIGKILL and exit.
However, /sbin/init is different, it will dequeue SIGKILL, ignore it, and
return to the user-space with the bogus -EIO.
Change the code to return the error code from lock_page_killable(), -EINTR.
This doesn't fix the bug, but perhaps makes sense anyway. Imho, with this
change the code looks a bit more logical, and the "good" init should handle
the spurious EINTR or short read.
Afaics we can also change lock_page_killable() to return -ERESTARTNOINTR,
but this can't prevent the short reads.
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-09 00:20:43 +07:00
|
|
|
error = lock_page_killable(page);
|
|
|
|
if (unlikely(error))
|
|
|
|
goto readpage_error;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
vfs: pagecache usage optimization for pagesize!=blocksize
2008-07-29 05:46:36 +07:00
|
|
|
page_not_up_to_date_locked:
|
2006-09-26 13:31:35 +07:00
|
|
|
/* Did it get truncated before we got the lock? */
|
2005-04-17 05:20:36 +07:00
|
|
|
if (!page->mapping) {
|
|
|
|
unlock_page(page);
|
|
|
|
page_cache_release(page);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Did somebody else fill it already? */
|
|
|
|
if (PageUptodate(page)) {
|
|
|
|
unlock_page(page);
|
|
|
|
goto page_ok;
|
|
|
|
}
|
|
|
|
|
|
|
|
readpage:
|
2010-05-26 22:49:40 +07:00
|
|
|
/*
|
|
|
|
* A previous I/O error may have been due to temporary
|
|
|
|
* failures, eg. multipath errors.
|
|
|
|
* PG_error will be set again if readpage fails.
|
|
|
|
*/
|
|
|
|
ClearPageError(page);
|
2005-04-17 05:20:36 +07:00
|
|
|
/* Start the actual read. The read will unlock the page. */
|
|
|
|
error = mapping->a_ops->readpage(filp, page);
|
|
|
|
|
2005-12-16 05:28:17 +07:00
|
|
|
if (unlikely(error)) {
|
|
|
|
if (error == AOP_TRUNCATED_PAGE) {
|
|
|
|
page_cache_release(page);
|
|
|
|
goto find_page;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
goto readpage_error;
|
2005-12-16 05:28:17 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (!PageUptodate(page)) {
|
do_generic_file_read: s/EINTR/EIO/ if lock_page_killable() fails
2008-06-09 00:20:43 +07:00
|
|
|
error = lock_page_killable(page);
|
|
|
|
if (unlikely(error))
|
|
|
|
goto readpage_error;
|
2005-04-17 05:20:36 +07:00
|
|
|
if (!PageUptodate(page)) {
|
|
|
|
if (page->mapping == NULL) {
|
|
|
|
/*
|
2010-01-26 23:27:20 +07:00
|
|
|
* invalidate_mapping_pages got it
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
|
|
|
unlock_page(page);
|
|
|
|
page_cache_release(page);
|
|
|
|
goto find_page;
|
|
|
|
}
|
|
|
|
unlock_page(page);
|
2007-10-16 15:24:35 +07:00
|
|
|
shrink_readahead_size_eio(filp, ra);
|
do_generic_file_read: s/EINTR/EIO/ if lock_page_killable() fails
2008-06-09 00:20:43 +07:00
|
|
|
error = -EIO;
|
|
|
|
goto readpage_error;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
unlock_page(page);
|
|
|
|
}
|
|
|
|
|
|
|
|
goto page_ok;
|
|
|
|
|
|
|
|
readpage_error:
|
|
|
|
/* UHHUH! A synchronous read error occurred. Report it */
|
|
|
|
desc->error = error;
|
|
|
|
page_cache_release(page);
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
no_cached_page:
|
|
|
|
/*
|
|
|
|
* Ok, it wasn't cached, so we need to create a new
|
|
|
|
* page..
|
|
|
|
*/
|
2007-10-16 15:24:57 +07:00
|
|
|
page = page_cache_alloc_cold(mapping);
|
|
|
|
if (!page) {
|
|
|
|
desc->error = -ENOMEM;
|
|
|
|
goto out;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2007-10-16 15:24:57 +07:00
|
|
|
error = add_to_page_cache_lru(page, mapping,
|
2005-04-17 05:20:36 +07:00
|
|
|
index, GFP_KERNEL);
|
|
|
|
if (error) {
|
2007-10-16 15:24:57 +07:00
|
|
|
page_cache_release(page);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (error == -EEXIST)
|
|
|
|
goto find_page;
|
|
|
|
desc->error = error;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
goto readpage;
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
2007-10-16 15:24:35 +07:00
|
|
|
ra->prev_pos = prev_index;
|
|
|
|
ra->prev_pos <<= PAGE_CACHE_SHIFT;
|
|
|
|
ra->prev_pos |= prev_offset;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2007-10-16 15:24:33 +07:00
|
|
|
*ppos = ((loff_t)index << PAGE_CACHE_SHIFT) + offset;
|
2008-10-16 12:01:13 +07:00
|
|
|
file_accessed(filp);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
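do_generic_file_read() above walks the file by splitting the byte position
*ppos into a page index and an offset within that page, and by rounding the
request end up to a last_index. A minimal userspace sketch of that
arithmetic, assuming a 4096-byte page: the PAGE_SHIFT/PAGE_SIZE/PAGE_MASK
constants below are local stand-ins for the kernel's PAGE_CACHE_* macros,
and the position and length are just example values.

#include <stdio.h>

#define PAGE_SHIFT 12			/* assumed 4096-byte pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long long ppos = 10000;	/* example file position */
	unsigned long count = 6000;		/* example request length */

	unsigned long index = ppos >> PAGE_SHIFT;		/* first page */
	unsigned long offset = ppos & ~PAGE_MASK;		/* offset in it */
	unsigned long last_index = (ppos + count + PAGE_SIZE - 1) >> PAGE_SHIFT;

	printf("index=%lu offset=%lu last_index=%lu\n",
	       index, offset, last_index);
	return 0;
}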
|
|
|
|
|
|
|
|
int file_read_actor(read_descriptor_t *desc, struct page *page,
|
|
|
|
unsigned long offset, unsigned long size)
|
|
|
|
{
|
|
|
|
char *kaddr;
|
|
|
|
unsigned long left, count = desc->count;
|
|
|
|
|
|
|
|
if (size > count)
|
|
|
|
size = count;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Faults on the destination of a read are common, so do it before
|
|
|
|
* taking the kmap.
|
|
|
|
*/
|
|
|
|
if (!fault_in_pages_writeable(desc->arg.buf, size)) {
|
2011-11-25 22:14:39 +07:00
|
|
|
kaddr = kmap_atomic(page);
|
2005-04-17 05:20:36 +07:00
|
|
|
left = __copy_to_user_inatomic(desc->arg.buf,
|
|
|
|
kaddr + offset, size);
|
2011-11-25 22:14:39 +07:00
|
|
|
kunmap_atomic(kaddr);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (left == 0)
|
|
|
|
goto success;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Do it the slow way */
|
|
|
|
kaddr = kmap(page);
|
|
|
|
left = __copy_to_user(desc->arg.buf, kaddr + offset, size);
|
|
|
|
kunmap(page);
|
|
|
|
|
|
|
|
if (left) {
|
|
|
|
size -= left;
|
|
|
|
desc->error = -EFAULT;
|
|
|
|
}
|
|
|
|
success:
|
|
|
|
desc->count = count - size;
|
|
|
|
desc->written += size;
|
|
|
|
desc->arg.buf += size;
|
|
|
|
return size;
|
|
|
|
}
|
|
|
|
|
2007-05-08 14:23:02 +07:00
|
|
|
/*
|
|
|
|
* Performs necessary checks before doing a write
|
|
|
|
* @iov: io vector request
|
|
|
|
* @nr_segs: number of segments in the iovec
|
|
|
|
* @count: number of bytes to write
|
|
|
|
* @access_flags: type of access: %VERIFY_READ or %VERIFY_WRITE
|
|
|
|
*
|
|
|
|
* Adjust number of segments and amount of bytes to write (nr_segs should be
|
|
|
|
* properly initialized first). Returns appropriate error code that caller
|
|
|
|
* should return or zero in case that write should be allowed.
|
|
|
|
*/
|
|
|
|
int generic_segment_checks(const struct iovec *iov,
|
|
|
|
unsigned long *nr_segs, size_t *count, int access_flags)
|
|
|
|
{
|
|
|
|
unsigned long seg;
|
|
|
|
size_t cnt = 0;
|
|
|
|
for (seg = 0; seg < *nr_segs; seg++) {
|
|
|
|
const struct iovec *iv = &iov[seg];
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If any segment has a negative length, or the cumulative
|
|
|
|
* length ever wraps negative then return -EINVAL.
|
|
|
|
*/
|
|
|
|
cnt += iv->iov_len;
|
|
|
|
if (unlikely((ssize_t)(cnt|iv->iov_len) < 0))
|
|
|
|
return -EINVAL;
|
|
|
|
if (access_ok(access_flags, iv->iov_base, iv->iov_len))
|
|
|
|
continue;
|
|
|
|
if (seg == 0)
|
|
|
|
return -EFAULT;
|
|
|
|
*nr_segs = seg;
|
|
|
|
cnt -= iv->iov_len; /* This segment is no good */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
*count = cnt;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(generic_segment_checks);
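generic_segment_checks() relies on the trick that OR-ing the running total
with the new segment length and casting to a signed type catches both a
"negative" iov_len and a sum that has wrapped past SSIZE_MAX in a single
comparison. A small userspace sketch of that check; check_lengths() and the
segment lengths are made up for illustration.

#include <stdio.h>
#include <stddef.h>	/* size_t */
#include <sys/types.h>	/* ssize_t */

/* Returns 0 if all lengths are valid and their sum does not wrap, -1 otherwise. */
static int check_lengths(const size_t *len, int n)
{
	size_t cnt = 0;
	int i;

	for (i = 0; i < n; i++) {
		cnt += len[i];
		/* Negative when either len[i] or the running sum has the top bit set. */
		if ((ssize_t)(cnt | len[i]) < 0)
			return -1;
	}
	return 0;
}

int main(void)
{
	size_t ok[]  = { 4096, 512, 128 };
	size_t bad[] = { 4096, (size_t)-1 };	/* "negative" length */

	printf("ok:  %d\n", check_lengths(ok, 3));
	printf("bad: %d\n", check_lengths(bad, 2));
	return 0;
}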
|
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
2006-10-04 16:15:22 +07:00
|
|
|
* generic_file_aio_read - generic filesystem read routine
|
2006-06-23 16:03:49 +07:00
|
|
|
* @iocb: kernel I/O control block
|
|
|
|
* @iov: io vector request
|
|
|
|
* @nr_segs: number of segments in the iovec
|
2006-10-04 16:15:22 +07:00
|
|
|
* @pos: current file position
|
2006-06-23 16:03:49 +07:00
|
|
|
*
|
2005-04-17 05:20:36 +07:00
|
|
|
* This is the "read()" routine for all filesystems
|
|
|
|
* that can use the page cache directly.
|
|
|
|
*/
|
|
|
|
ssize_t
|
2006-10-01 13:28:48 +07:00
|
|
|
generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
|
|
|
|
unsigned long nr_segs, loff_t pos)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct file *filp = iocb->ki_filp;
|
|
|
|
ssize_t retval;
|
2010-05-23 22:00:54 +07:00
|
|
|
unsigned long seg = 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
size_t count;
|
2006-10-01 13:28:48 +07:00
|
|
|
loff_t *ppos = &iocb->ki_pos;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
count = 0;
|
2007-05-08 14:23:02 +07:00
|
|
|
retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
|
|
|
|
if (retval)
|
|
|
|
return retval;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
|
|
|
|
if (filp->f_flags & O_DIRECT) {
|
2006-10-01 13:28:48 +07:00
|
|
|
loff_t size;
|
2005-04-17 05:20:36 +07:00
|
|
|
struct address_space *mapping;
|
|
|
|
struct inode *inode;
|
|
|
|
|
|
|
|
mapping = filp->f_mapping;
|
|
|
|
inode = mapping->host;
|
|
|
|
if (!count)
|
|
|
|
goto out; /* skip atime */
|
|
|
|
size = i_size_read(inode);
|
|
|
|
if (pos < size) {
|
2009-01-07 05:40:22 +07:00
|
|
|
retval = filemap_write_and_wait_range(mapping, pos,
|
|
|
|
pos + iov_length(iov, nr_segs) - 1);
|
2008-07-24 11:27:04 +07:00
|
|
|
if (!retval) {
|
|
|
|
retval = mapping->a_ops->direct_IO(READ, iocb,
|
|
|
|
iov, pos, nr_segs);
|
|
|
|
}
|
2010-05-23 22:00:54 +07:00
|
|
|
if (retval > 0) {
|
2005-04-17 05:20:36 +07:00
|
|
|
*ppos = pos + retval;
|
2010-05-23 22:00:54 +07:00
|
|
|
count -= retval;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Btrfs can have a short DIO read if we encounter
|
|
|
|
* compressed extents, so if there was an error, or if
|
|
|
|
* we've already read everything we wanted to, or if
|
|
|
|
* there was a short read because we hit EOF, go ahead
|
|
|
|
* and return. Otherwise fallthrough to buffered io for
|
|
|
|
* the rest of the read.
|
|
|
|
*/
|
|
|
|
if (retval < 0 || !count || *ppos >= size) {
|
2008-07-24 11:27:34 +07:00
|
|
|
file_accessed(filp);
|
|
|
|
goto out;
|
|
|
|
}
|
2006-09-28 01:45:07 +07:00
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
2010-05-23 22:00:54 +07:00
|
|
|
count = retval;
|
2008-07-24 11:27:34 +07:00
|
|
|
for (seg = 0; seg < nr_segs; seg++) {
|
|
|
|
read_descriptor_t desc;
|
2010-05-23 22:00:54 +07:00
|
|
|
loff_t offset = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If we did a short DIO read we need to skip the section of the
|
|
|
|
* iov that we've already read data into.
|
|
|
|
*/
|
|
|
|
if (count) {
|
|
|
|
if (count > iov[seg].iov_len) {
|
|
|
|
count -= iov[seg].iov_len;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
offset = count;
|
|
|
|
count = 0;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2008-07-24 11:27:34 +07:00
|
|
|
desc.written = 0;
|
2010-05-23 22:00:54 +07:00
|
|
|
desc.arg.buf = iov[seg].iov_base + offset;
|
|
|
|
desc.count = iov[seg].iov_len - offset;
|
2008-07-24 11:27:34 +07:00
|
|
|
if (desc.count == 0)
|
|
|
|
continue;
|
|
|
|
desc.error = 0;
|
|
|
|
do_generic_file_read(filp, ppos, &desc, file_read_actor);
|
|
|
|
retval += desc.written;
|
|
|
|
if (desc.error) {
|
|
|
|
retval = retval ?: desc.error;
|
|
|
|
break;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2008-07-24 11:27:34 +07:00
|
|
|
if (desc.count > 0)
|
|
|
|
break;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
out:
|
|
|
|
return retval;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(generic_file_aio_read);
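In the buffered fallback loop of generic_file_aio_read() above, a short
O_DIRECT read means the first `count` bytes of the iovec have already been
filled, so the loop skips fully consumed segments and then resumes part-way
into the first partially consumed one. A minimal userspace sketch of that
bookkeeping, with example segment sizes and an example short-read count:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	size_t iov_len[] = { 1024, 2048, 4096 };	/* example iovec lengths */
	int nr_segs = 3;
	size_t count = 2500;	/* bytes already read by the short direct I/O */
	int seg;

	for (seg = 0; seg < nr_segs; seg++) {
		size_t offset = 0;

		if (count) {
			if (count > iov_len[seg]) {
				/* This segment was fully consumed by the DIO read. */
				count -= iov_len[seg];
				continue;
			}
			/* Partially consumed: resume inside it. */
			offset = count;
			count = 0;
		}
		printf("seg %d: start at offset %zu, copy %zu bytes\n",
		       seg, offset, iov_len[seg] - offset);
	}
	return 0;
}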
|
|
|
|
|
|
|
|
#ifdef CONFIG_MMU
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
|
|
|
* page_cache_read - adds requested page to the page cache if not already there
|
|
|
|
* @file: file to read
|
|
|
|
* @offset: page index
|
|
|
|
*
|
2005-04-17 05:20:36 +07:00
|
|
|
* This adds the requested page to the page cache if it isn't already there,
|
|
|
|
* and schedules an I/O to read in its contents from disk.
|
|
|
|
*/
|
2008-02-05 13:29:26 +07:00
|
|
|
static int page_cache_read(struct file *file, pgoff_t offset)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct address_space *mapping = file->f_mapping;
|
|
|
|
struct page *page;
|
2005-12-16 05:28:17 +07:00
|
|
|
int ret;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2005-12-16 05:28:17 +07:00
|
|
|
do {
|
|
|
|
page = page_cache_alloc_cold(mapping);
|
|
|
|
if (!page)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
ret = add_to_page_cache_lru(page, mapping, offset, GFP_KERNEL);
|
|
|
|
if (ret == 0)
|
|
|
|
ret = mapping->a_ops->readpage(file, page);
|
|
|
|
else if (ret == -EEXIST)
|
|
|
|
ret = 0; /* losing race to add is OK */
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
page_cache_release(page);
|
|
|
|
|
2005-12-16 05:28:17 +07:00
|
|
|
} while (ret == AOP_TRUNCATED_PAGE);
|
|
|
|
|
|
|
|
return ret;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
#define MMAP_LOTSAMISS (100)
|
|
|
|
|
2009-06-17 05:31:25 +07:00
|
|
|
/*
|
|
|
|
* Synchronous readahead happens when we don't even find
|
|
|
|
* a page in the page cache at all.
|
|
|
|
*/
|
|
|
|
static void do_sync_mmap_readahead(struct vm_area_struct *vma,
|
|
|
|
struct file_ra_state *ra,
|
|
|
|
struct file *file,
|
|
|
|
pgoff_t offset)
|
|
|
|
{
|
|
|
|
unsigned long ra_pages;
|
|
|
|
struct address_space *mapping = file->f_mapping;
|
|
|
|
|
|
|
|
/* If we don't want any read-ahead, don't bother */
|
|
|
|
if (VM_RandomReadHint(vma))
|
|
|
|
return;
|
2011-05-25 07:12:28 +07:00
|
|
|
if (!ra->ra_pages)
|
|
|
|
return;
|
2009-06-17 05:31:25 +07:00
|
|
|
|
2011-05-25 07:12:30 +07:00
|
|
|
if (VM_SequentialReadHint(vma)) {
|
readahead: enforce full sync mmap readahead size
Now that we do readahead for sequential mmap reads, here is a simple
evaluation of the impacts, and one further optimization.
It's an NFS-root debian desktop system, readahead size = 60 pages.
The numbers are grabbed after a fresh boot into console.
approach    pgmajfault    RA miss ratio    mmap IO count    avg IO size (pages)
A           383           31.6%            383              11
B           225           32.4%            390              11
C           224           32.6%            307              13
case A: mmap sync/async readahead disabled
case B: mmap sync/async readahead enabled, with enforced full async readahead size
case C: mmap sync/async readahead enabled, with enforced full sync/async readahead size
or:
A = vanilla 2.6.30-rc1
B = A plus mmap readahead
C = B plus this patch
The numbers show that
- there are good possibilities for random mmap reads to trigger readahead
- 'pgmajfault' is reduced by 1/3, due to the _async_ nature of readahead
- case C can further reduce IO count by 1/4
- readahead miss ratios are not quite affected
The theory is
- readahead is _good_ for clustered random reads, and can perform
_better_ than readaround because they could be _async_.
- async readahead size is guaranteed to be larger than readaround
size, and they are _async_, hence will mostly behave better
However for B
- sync readahead size could be smaller than readaround size, hence may
make things worse by producing more, smaller IOs
which will be fixed by this patch.
Final conclusion:
- mmap readahead reduced major faults by 1/3 and no obvious overheads;
- mmap io can be further reduced by 1/4 with this patch.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-06-17 05:31:38 +07:00
|
|
|
page_cache_sync_readahead(mapping, ra, file, offset,
|
|
|
|
ra->ra_pages);
|
2009-06-17 05:31:25 +07:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2011-05-25 07:12:29 +07:00
|
|
|
/* Avoid banging the cache line if not needed */
|
|
|
|
if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
|
2009-06-17 05:31:25 +07:00
|
|
|
ra->mmap_miss++;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Do we miss much more than hit in this file? If so,
|
|
|
|
* stop bothering with read-ahead. It will only hurt.
|
|
|
|
*/
|
|
|
|
if (ra->mmap_miss > MMAP_LOTSAMISS)
|
|
|
|
return;
|
|
|
|
|
2009-06-17 05:31:30 +07:00
|
|
|
/*
|
|
|
|
* mmap read-around
|
|
|
|
*/
|
2009-06-17 05:31:25 +07:00
|
|
|
ra_pages = max_sane_readahead(ra->ra_pages);
|
2011-05-25 07:12:28 +07:00
|
|
|
ra->start = max_t(long, 0, offset - ra_pages / 2);
|
|
|
|
ra->size = ra_pages;
|
2011-05-25 07:12:30 +07:00
|
|
|
ra->async_size = ra_pages / 4;
|
2011-05-25 07:12:28 +07:00
|
|
|
ra_submit(ra, mapping, file);
|
2009-06-17 05:31:25 +07:00
|
|
|
}
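do_sync_mmap_readahead() above centres a read-around window on the faulting
page: start half a window before the fault, read a full window, and mark the
last quarter for async readahead. A small userspace sketch of that window
arithmetic; the 32-page window and the fault offset are assumed example
values, not kernel defaults.

#include <stdio.h>

int main(void)
{
	long ra_pages = 32;		/* assumed readahead window, in pages */
	long offset = 10;		/* faulting page index */
	long start, size, async_size;

	/* Mirror ra->start / ra->size / ra->async_size in do_sync_mmap_readahead(). */
	start = offset - ra_pages / 2;
	if (start < 0)
		start = 0;		/* max_t(long, 0, ...) */
	size = ra_pages;
	async_size = ra_pages / 4;

	printf("fault at page %ld: read pages %ld..%ld, async tail %ld pages\n",
	       offset, start, start + size - 1, async_size);
	return 0;
}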
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Asynchronous readahead happens when we find the page and PG_readahead,
|
|
|
|
* so we want to possibly extend the readahead further..
|
|
|
|
*/
|
|
|
|
static void do_async_mmap_readahead(struct vm_area_struct *vma,
|
|
|
|
struct file_ra_state *ra,
|
|
|
|
struct file *file,
|
|
|
|
struct page *page,
|
|
|
|
pgoff_t offset)
|
|
|
|
{
|
|
|
|
struct address_space *mapping = file->f_mapping;
|
|
|
|
|
|
|
|
/* If we don't want any read-ahead, don't bother */
|
|
|
|
if (VM_RandomReadHint(vma))
|
|
|
|
return;
|
|
|
|
if (ra->mmap_miss > 0)
|
|
|
|
ra->mmap_miss--;
|
|
|
|
if (PageReadahead(page))
|
readahead: enforce full readahead size on async mmap readahead
We need this in one particular case and two more general ones.
Now we do async readahead for sequential mmap reads, and do it with the
help of PG_readahead. For normal reads, PG_readahead is the sufficient
condition to do a sequential readahead. But unfortunately, for mmap
reads, there is a tiny nuisance:
[11736.998347] readahead-init0(process: sh/23926, file: sda1/w3m, offset=0:4503599627370495, ra=0+4-3) = 4
[11737.014985] readahead-around(process: w3m/23926, file: sda1/w3m, offset=0:0, ra=290+32-0) = 17
[11737.019488] readahead-around(process: w3m/23926, file: sda1/w3m, offset=0:0, ra=118+32-0) = 32
[11737.024921] readahead-interleaved(process: w3m/23926, file: sda1/w3m, offset=0:2, ra=4+6-6) = 6
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~
An unfavorably small readahead. The original dumb read-around size could
be more efficient.
That happened because ld-linux.so does a read(832) in L1 before mmap(),
which triggers a 4-page readahead, with the second page tagged
PG_readahead.
L0: open("/lib/libc.so.6", O_RDONLY) = 3
L1: read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\342"..., 832) = 832
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
L2: fstat(3, {st_mode=S_IFREG|0755, st_size=1420624, ...}) = 0
L3: mmap(NULL, 3527256, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fac6e51d000
L4: mprotect(0x7fac6e671000, 2097152, PROT_NONE) = 0
L5: mmap(0x7fac6e871000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x154000) = 0x7fac6e871000
L6: mmap(0x7fac6e876000, 16984, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fac6e876000
L7: close(3) = 0
In general, the PG_readahead flag will also be hit in cases
- sequential reads
- clustered random reads
A full readahead size is desirable in both cases.
Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-06-17 05:31:29 +07:00
|
|
|
page_cache_async_readahead(mapping, ra, file,
|
|
|
|
page, offset, ra->ra_pages);
|
2009-06-17 05:31:25 +07:00
|
|
|
}
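The two mmap readahead helpers above keep a per-file mmap_miss counter: a
pagecache miss bumps it (capped so the cache line is not banged needlessly),
a hit on a cached page decays it, and once it exceeds MMAP_LOTSAMISS the
sync path stops issuing read-around. A small userspace simulation of that
policy; the threshold is the #define above, the fault sequence itself is
made up.

#include <stdio.h>

#define MMAP_LOTSAMISS 100

int main(void)
{
	unsigned int mmap_miss = 0;
	int i;

	/* Simulate a run of faults that all miss the page cache. */
	for (i = 0; i < 150; i++) {
		if (mmap_miss < MMAP_LOTSAMISS * 10)	/* avoid banging the cache line */
			mmap_miss++;
		if (mmap_miss > MMAP_LOTSAMISS) {
			printf("miss %d: read-around disabled (mmap_miss=%u)\n",
			       i + 1, mmap_miss);
			break;
		}
	}

	/* A later fault that finds the page cached decays the counter. */
	if (mmap_miss > 0)
		mmap_miss--;
	printf("after one cached fault: mmap_miss=%u\n", mmap_miss);
	return 0;
}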
|
|
|
|
|
2006-06-23 16:03:49 +07:00
|
|
|
/**
|
2007-07-19 15:46:59 +07:00
|
|
|
* filemap_fault - read in file data for page fault handling
|
2007-07-19 15:47:03 +07:00
|
|
|
* @vma: vma in which the fault was taken
|
|
|
|
* @vmf: struct vm_fault containing details of the fault
|
2006-06-23 16:03:49 +07:00
|
|
|
*
|
2007-07-19 15:46:59 +07:00
|
|
|
* filemap_fault() is invoked via the vma operations vector for a
|
2005-04-17 05:20:36 +07:00
|
|
|
* mapped memory region to read in file data during a page fault.
|
|
|
|
*
|
|
|
|
* The goto's are kind of ugly, but this streamlines the normal case of having
|
|
|
|
* it in the page cache, and handles the special cases reasonably without
|
|
|
|
* having a lot of duplicated code.
|
|
|
|
*/
|
2007-07-19 15:47:03 +07:00
|
|
|
int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
int error;
|
2007-07-19 15:46:59 +07:00
|
|
|
struct file *file = vma->vm_file;
|
2005-04-17 05:20:36 +07:00
|
|
|
struct address_space *mapping = file->f_mapping;
|
|
|
|
struct file_ra_state *ra = &file->f_ra;
|
|
|
|
struct inode *inode = mapping->host;
|
2009-06-17 05:31:25 +07:00
|
|
|
pgoff_t offset = vmf->pgoff;
|
2005-04-17 05:20:36 +07:00
|
|
|
struct page *page;
|
2008-02-08 19:20:11 +07:00
|
|
|
pgoff_t size;
|
2007-07-19 15:47:05 +07:00
|
|
|
int ret = 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
|
2009-06-17 05:31:25 +07:00
|
|
|
if (offset >= size)
|
2007-10-31 23:19:46 +07:00
|
|
|
return VM_FAULT_SIGBUS;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Do we have something in the page cache already?
|
|
|
|
*/
|
2009-06-17 05:31:25 +07:00
|
|
|
page = find_get_page(mapping, offset);
|
2012-10-09 06:32:19 +07:00
|
|
|
if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
2009-06-17 05:31:25 +07:00
|
|
|
* We found the page, so try async readahead before
|
|
|
|
* waiting for the lock.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
2009-06-17 05:31:25 +07:00
|
|
|
do_async_mmap_readahead(vma, ra, file, page, offset);
|
2012-10-09 06:32:19 +07:00
|
|
|
} else if (!page) {
|
2009-06-17 05:31:25 +07:00
|
|
|
/* No page in the page cache at all */
|
|
|
|
do_sync_mmap_readahead(vma, ra, file, offset);
|
|
|
|
count_vm_event(PGMAJFAULT);
|
2011-05-27 06:25:38 +07:00
|
|
|
mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
|
2009-06-17 05:31:25 +07:00
|
|
|
ret = VM_FAULT_MAJOR;
|
|
|
|
retry_find:
|
2010-10-27 04:21:56 +07:00
|
|
|
page = find_get_page(mapping, offset);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (!page)
|
|
|
|
goto no_cached_page;
|
|
|
|
}
|
|
|
|
|
2010-11-03 03:05:18 +07:00
|
|
|
if (!lock_page_or_retry(page, vma->vm_mm, vmf->flags)) {
|
|
|
|
page_cache_release(page);
|
2010-10-27 04:21:57 +07:00
|
|
|
return ret | VM_FAULT_RETRY;
|
2010-11-03 03:05:18 +07:00
|
|
|
}
|
2010-10-27 04:21:56 +07:00
|
|
|
|
|
|
|
/* Did it get truncated? */
|
|
|
|
if (unlikely(page->mapping != mapping)) {
|
|
|
|
unlock_page(page);
|
|
|
|
put_page(page);
|
|
|
|
goto retry_find;
|
|
|
|
}
|
|
|
|
VM_BUG_ON(page->index != offset);
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
mm: fix fault vs invalidate race for linear mappings
Fix the race between invalidate_inode_pages and do_no_page.
Andrea Arcangeli identified a subtle race between invalidation of pages from
pagecache with userspace mappings, and do_no_page.
The issue is that invalidation has to shoot down all mappings to the page,
before it can be discarded from the pagecache. Between shooting down ptes to
a particular page, and actually dropping the struct page from the pagecache,
do_no_page from any process might fault on that page and establish a new
mapping to the page just before it gets discarded from the pagecache.
The most common case where such invalidation is used is in file truncation.
This case was catered for by doing a sort of open-coded seqlock between the
file's i_size, and its truncate_count.
Truncation will decrease i_size, then increment truncate_count before
unmapping userspace pages; do_no_page will read truncate_count, then find the
page if it is within i_size, and then check truncate_count under the page
table lock and back out and retry if it had subsequently been changed (ptl
will serialise against unmapping, and ensure a potentially updated
truncate_count is actually visible).
Complexity and documentation issues aside, the locking protocol fails in the
case where we would like to invalidate pagecache inside i_size. do_no_page
can come in anytime and filemap_nopage is not aware of the invalidation in
progress (as it is when it is outside i_size). The end result is that
dangling (->mapping == NULL) pages that appear to be from a particular file
may be mapped into userspace with nonsense data. Valid mappings to the same
place will see a different page.
Andrea implemented two working fixes, one using a real seqlock, another using
a page->flags bit. He also proposed using the page lock in do_no_page, but
that was initially considered too heavyweight. However, it is not a global or
per-file lock, and the page cacheline is modified in do_no_page to increment
_count and _mapcount anyway, so a further modification should not be a large
performance hit. Scalability is not an issue.
This patch implements this latter approach. ->nopage implementations return
with the page locked if it is possible for their underlying file to be
invalidated (in that case, they must set a special vm_flags bit to indicate
so). do_no_page only unlocks the page after setting up the mapping
completely. invalidation is excluded because it holds the page lock during
invalidation of each page (and ensures that the page is not mapped while
holding the lock).
This also allows significant simplifications in do_no_page, because we have
the page locked in the right place in the pagecache from the start.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-19 15:46:57 +07:00
|
|
|
* We have a locked page in the page cache, now we need to check
|
|
|
|
* that it's up-to-date. If not, it is going to be due to an error.
|
2005-04-17 05:20:36 +07:00
|
|
|
*/
|
mm: fix fault vs invalidate race for linear mappings
2007-07-19 15:46:57 +07:00
|
|
|
if (unlikely(!PageUptodate(page)))
|
2005-04-17 05:20:36 +07:00
|
|
|
goto page_not_uptodate;
|
|
|
|
|
2009-06-17 05:31:25 +07:00
|
|
|
/*
|
|
|
|
* Found the page and have a reference on it.
|
|
|
|
* We must recheck i_size under page lock.
|
|
|
|
*/
|
mm: fix fault vs invalidate race for linear mappings
2007-07-19 15:46:57 +07:00
|
|
|
size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
|
2009-06-17 05:31:25 +07:00
|
|
|
if (unlikely(offset >= size)) {
|
2007-07-19 15:46:57 +07:00
|
|
|
unlock_page(page);
|
2007-10-09 00:08:37 +07:00
|
|
|
page_cache_release(page);
|
2007-10-31 23:19:46 +07:00
|
|
|
return VM_FAULT_SIGBUS;
|
2007-07-19 15:46:57 +07:00
|
|
|
}
|
|
|
|
|
2007-07-19 15:47:03 +07:00
|
|
|
vmf->page = page;
|
2007-07-19 15:47:05 +07:00
|
|
|
return ret | VM_FAULT_LOCKED;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
no_cached_page:
|
|
|
|
/*
|
|
|
|
* We're only likely to ever get here if MADV_RANDOM is in
|
|
|
|
* effect.
|
|
|
|
*/
|
2009-06-17 05:31:25 +07:00
|
|
|
error = page_cache_read(file, offset);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The page we want has now been added to the page cache.
|
|
|
|
* In the unlikely event that someone removed it in the
|
|
|
|
* meantime, we'll just come back here and read it again.
|
|
|
|
*/
|
|
|
|
if (error >= 0)
|
|
|
|
goto retry_find;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* An error return from page_cache_read can result if the
|
|
|
|
* system is low on memory, or a problem occurs while trying
|
|
|
|
* to schedule I/O.
|
|
|
|
*/
|
|
|
|
if (error == -ENOMEM)
|
2007-07-19 15:47:03 +07:00
|
|
|
return VM_FAULT_OOM;
|
|
|
|
return VM_FAULT_SIGBUS;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
page_not_uptodate:
|
|
|
|
/*
|
|
|
|
* Umm, take care of errors if the page isn't up-to-date.
|
|
|
|
* Try to re-read it _once_. We do this synchronously,
|
|
|
|
* because there really aren't any performance issues here
|
|
|
|
* and we need to check for errors.
|
|
|
|
*/
|
|
|
|
ClearPageError(page);
|
2005-12-16 05:28:17 +07:00
|
|
|
error = mapping->a_ops->readpage(file, page);
|
2008-05-15 06:05:37 +07:00
|
|
|
if (!error) {
|
|
|
|
wait_on_page_locked(page);
|
|
|
|
if (!PageUptodate(page))
|
|
|
|
error = -EIO;
|
|
|
|
}
|
2007-07-19 15:46:57 +07:00
|
|
|
page_cache_release(page);
|
|
|
|
|
|
|
|
if (!error || error == AOP_TRUNCATED_PAGE)
|
2005-12-16 05:28:17 +07:00
|
|
|
goto retry_find;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2007-07-19 15:46:57 +07:00
|
|
|
/* Things didn't work out. Return an error to tell the mm layer so. */
|
[PATCH] readahead: backoff on I/O error
Backoff readahead size exponentially on I/O error.
Michael Tokarev <mjt@tls.msk.ru> described the problem as:
[QUOTE]
Suppose there's a CD-rom with a scratch/etc, one sector is unreadable.
In order to "fix" it, one have to read it and write to another CD-rom,
or something.. or just ignore the error (if it's just a skip in a video
stream). Let's assume the unreadable block is number U.
But current behavior is just insane. An application requests block
number N, which is before U. Kernel tries to read-ahead blocks N..U.
Cdrom drive tries to read it, re-read it.. for some time. Finally,
when all the N..U-1 blocks are read, kernel returns block number N
(as requested) to an application, successfully.
Now an app requests block number N+1, and kernel tries to read
blocks N+1..U+1. Retrying again as in previous step.
And so on, up to when an app requests block number U-1. And when,
finally, it requests block U, it receives read error.
So, the kernel currently tries to re-read the same failing block as
many times as the current readahead value (256 (times?) by default).
This whole process already killed my cdrom drive (I posted about it
to LKML several months ago) - literally, the drive has fried, and
does not work anymore. Of course that problem was a bug in firmware
(or whatever) of the drive *too*, but.. main problem with that is
current readahead logic as described above.
[/QUOTE]
Which was confirmed by Jens Axboe <axboe@suse.de>:
[QUOTE]
For ide-cd, it tends do only end the first part of the request on a
medium error. So you may see a lot of repeats :/
[/QUOTE]
With this patch, retries are expected to be reduced from, say, 256, to 5.
[akpm@osdl.org: cleanups]
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-25 19:48:43 +07:00
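/*
 * Illustrative sketch (not part of filemap.c): the exponential backoff the
 * commit message above describes.  After a read error the readahead window
 * is shrunk sharply instead of being retried at full size, so a bad sector
 * is re-read a handful of times rather than hundreds.  The divisor and the
 * floor of one page are assumptions for illustration; the real policy lives
 * in shrink_readahead_size_eio().
 */
static void example_shrink_readahead_on_eio(struct file_ra_state *ra)
{
	/* e.g. 256 -> 64 -> 16 -> 4 -> 1 pages across repeated errors */
	ra->ra_pages /= 4;
	if (!ra->ra_pages)
		ra->ra_pages = 1;	/* keep at least one page of readahead */
}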
|
|
|
shrink_readahead_size_eio(file, ra);
|
2007-07-19 15:47:03 +07:00
|
|
|
return VM_FAULT_SIGBUS;
|
2007-07-19 15:46:59 +07:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(filemap_fault);
|
|
|
|
|
2012-06-12 21:20:29 +07:00
|
|
|
int filemap_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
|
|
|
|
{
|
|
|
|
struct page *page = vmf->page;
|
|
|
|
struct inode *inode = vma->vm_file->f_path.dentry->d_inode;
|
|
|
|
int ret = VM_FAULT_LOCKED;
|
|
|
|
|
2012-06-12 21:20:37 +07:00
|
|
|
sb_start_pagefault(inode->i_sb);
|
2012-06-12 21:20:29 +07:00
|
|
|
file_update_time(vma->vm_file);
|
|
|
|
lock_page(page);
|
|
|
|
if (page->mapping != inode->i_mapping) {
|
|
|
|
unlock_page(page);
|
|
|
|
ret = VM_FAULT_NOPAGE;
|
|
|
|
goto out;
|
|
|
|
}
|
2012-06-12 21:20:37 +07:00
|
|
|
/*
|
|
|
|
* We mark the page dirty already here so that when freeze is in
|
|
|
|
* progress, we are guaranteed that writeback during freezing will
|
|
|
|
* see the dirty page and writeprotect it again.
|
|
|
|
*/
|
|
|
|
set_page_dirty(page);
|
2012-06-12 21:20:29 +07:00
|
|
|
out:
|
2012-06-12 21:20:37 +07:00
|
|
|
sb_end_pagefault(inode->i_sb);
|
2012-06-12 21:20:29 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(filemap_page_mkwrite);
|
|
|
|
|
2009-09-28 01:29:37 +07:00
|
|
|
const struct vm_operations_struct generic_file_vm_ops = {
|
2007-07-19 15:46:59 +07:00
|
|
|
.fault = filemap_fault,
|
2012-06-12 21:20:29 +07:00
|
|
|
.page_mkwrite = filemap_page_mkwrite,
|
2012-10-09 06:28:46 +07:00
|
|
|
.remap_pages = generic_file_remap_pages,
|
2005-04-17 05:20:36 +07:00
|
|
|
};
|
|
|
|
|
|
|
|
/* This is used for a general mmap of a disk file */
|
|
|
|
|
|
|
|
int generic_file_mmap(struct file * file, struct vm_area_struct * vma)
|
|
|
|
{
|
|
|
|
struct address_space *mapping = file->f_mapping;
|
|
|
|
|
|
|
|
if (!mapping->a_ops->readpage)
|
|
|
|
return -ENOEXEC;
|
|
|
|
file_accessed(file);
|
|
|
|
vma->vm_ops = &generic_file_vm_ops;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This is for filesystems which do not implement ->writepage.
|
|
|
|
*/
|
|
|
|
int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
|
|
|
|
{
|
|
|
|
if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
|
|
|
|
return -EINVAL;
|
|
|
|
return generic_file_mmap(file, vma);
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
int generic_file_mmap(struct file * file, struct vm_area_struct * vma)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
int generic_file_readonly_mmap(struct file * file, struct vm_area_struct * vma)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
#endif /* CONFIG_MMU */
|
|
|
|
|
|
|
|
EXPORT_SYMBOL(generic_file_mmap);
|
|
|
|
EXPORT_SYMBOL(generic_file_readonly_mmap);
|
|
|
|
|
2007-05-07 04:49:04 +07:00
|
|
|
static struct page *__read_cache_page(struct address_space *mapping,
|
2007-10-16 15:24:37 +07:00
|
|
|
pgoff_t index,
|
2011-07-26 07:12:23 +07:00
|
|
|
int (*filler)(void *, struct page *),
|
2010-01-28 00:20:03 +07:00
|
|
|
void *data,
|
|
|
|
gfp_t gfp)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
2007-10-16 15:24:57 +07:00
|
|
|
struct page *page;
|
2005-04-17 05:20:36 +07:00
|
|
|
int err;
|
|
|
|
repeat:
|
|
|
|
page = find_get_page(mapping, index);
|
|
|
|
if (!page) {
|
2010-01-28 00:20:03 +07:00
|
|
|
page = __page_cache_alloc(gfp | __GFP_COLD);
|
2007-10-16 15:24:57 +07:00
|
|
|
if (!page)
|
|
|
|
return ERR_PTR(-ENOMEM);
|
2011-12-22 00:05:48 +07:00
|
|
|
err = add_to_page_cache_lru(page, mapping, index, gfp);
|
2007-10-16 15:24:57 +07:00
|
|
|
if (unlikely(err)) {
|
|
|
|
page_cache_release(page);
|
|
|
|
if (err == -EEXIST)
|
|
|
|
goto repeat;
|
2005-04-17 05:20:36 +07:00
|
|
|
/* Presumably ENOMEM for radix tree node */
|
|
|
|
return ERR_PTR(err);
|
|
|
|
}
|
|
|
|
err = filler(data, page);
|
|
|
|
if (err < 0) {
|
|
|
|
page_cache_release(page);
|
|
|
|
page = ERR_PTR(err);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return page;
|
|
|
|
}
|
|
|
|
|
2010-01-28 00:20:03 +07:00
|
|
|
static struct page *do_read_cache_page(struct address_space *mapping,
|
2007-10-16 15:24:37 +07:00
|
|
|
pgoff_t index,
|
2011-07-26 07:12:23 +07:00
|
|
|
int (*filler)(void *, struct page *),
|
2010-01-28 00:20:03 +07:00
|
|
|
void *data,
|
|
|
|
gfp_t gfp)
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct page *page;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
retry:
|
2010-01-28 00:20:03 +07:00
|
|
|
page = __read_cache_page(mapping, index, filler, data, gfp);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (IS_ERR(page))
|
2007-05-09 19:42:20 +07:00
|
|
|
return page;
|
2005-04-17 05:20:36 +07:00
|
|
|
if (PageUptodate(page))
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
lock_page(page);
|
|
|
|
if (!page->mapping) {
|
|
|
|
unlock_page(page);
|
|
|
|
page_cache_release(page);
|
|
|
|
goto retry;
|
|
|
|
}
|
|
|
|
if (PageUptodate(page)) {
|
|
|
|
unlock_page(page);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
err = filler(data, page);
|
|
|
|
if (err < 0) {
|
|
|
|
page_cache_release(page);
|
2007-05-09 19:42:20 +07:00
|
|
|
return ERR_PTR(err);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2007-05-09 19:42:20 +07:00
|
|
|
out:
|
2007-05-07 04:49:04 +07:00
|
|
|
mark_page_accessed(page);
|
|
|
|
return page;
|
|
|
|
}
|
2010-01-28 00:20:03 +07:00
|
|
|
|
|
|
|
/**
|
|
|
|
* read_cache_page_async - read into page cache, fill it if needed
|
|
|
|
* @mapping: the page's address_space
|
|
|
|
* @index: the page index
|
|
|
|
* @filler: function to perform the read
|
2011-07-26 07:12:23 +07:00
|
|
|
* @data: first arg to filler(data, page) function, often left as NULL
|
2010-01-28 00:20:03 +07:00
|
|
|
*
|
|
|
|
* Same as read_cache_page, but don't wait for page to become unlocked
|
|
|
|
* after submitting it to the filler.
|
|
|
|
*
|
|
|
|
* Read into the page cache. If a page already exists, and PageUptodate() is
|
|
|
|
* not set, try to fill the page but don't wait for it to become unlocked.
|
|
|
|
*
|
|
|
|
* If the page does not get brought uptodate, return -EIO.
|
|
|
|
*/
|
|
|
|
struct page *read_cache_page_async(struct address_space *mapping,
|
|
|
|
pgoff_t index,
|
2011-07-26 07:12:23 +07:00
|
|
|
int (*filler)(void *, struct page *),
|
2010-01-28 00:20:03 +07:00
|
|
|
void *data)
|
|
|
|
{
|
|
|
|
return do_read_cache_page(mapping, index, filler, data, mapping_gfp_mask(mapping));
|
|
|
|
}
|
2007-05-07 04:49:04 +07:00
|
|
|
EXPORT_SYMBOL(read_cache_page_async);
|
|
|
|
|
2010-01-28 00:20:03 +07:00
|
|
|
static struct page *wait_on_page_read(struct page *page)
|
|
|
|
{
|
|
|
|
if (!IS_ERR(page)) {
|
|
|
|
wait_on_page_locked(page);
|
|
|
|
if (!PageUptodate(page)) {
|
|
|
|
page_cache_release(page);
|
|
|
|
page = ERR_PTR(-EIO);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return page;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* read_cache_page_gfp - read into page cache, using specified page allocation flags.
|
|
|
|
* @mapping: the page's address_space
|
|
|
|
* @index: the page index
|
|
|
|
* @gfp: the page allocator flags to use if allocating
|
|
|
|
*
|
|
|
|
* This is the same as "read_mapping_page(mapping, index, NULL)", but with
|
2011-12-22 00:05:48 +07:00
|
|
|
* any new page allocations done using the specified allocation flags.
|
2010-01-28 00:20:03 +07:00
|
|
|
*
|
|
|
|
* If the page does not get brought uptodate, return -EIO.
|
|
|
|
*/
|
|
|
|
struct page *read_cache_page_gfp(struct address_space *mapping,
|
|
|
|
pgoff_t index,
|
|
|
|
gfp_t gfp)
|
|
|
|
{
|
|
|
|
filler_t *filler = (filler_t *)mapping->a_ops->readpage;
|
|
|
|
|
|
|
|
return wait_on_page_read(do_read_cache_page(mapping, index, filler, NULL, gfp));
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(read_cache_page_gfp);
|
|
|
|
|
2007-05-07 04:49:04 +07:00
|
|
|
/**
|
|
|
|
* read_cache_page - read into page cache, fill it if needed
|
|
|
|
* @mapping: the page's address_space
|
|
|
|
* @index: the page index
|
|
|
|
* @filler: function to perform the read
|
2011-07-26 07:12:23 +07:00
|
|
|
* @data: first arg to filler(data, page) function, often left as NULL
|
2007-05-07 04:49:04 +07:00
|
|
|
*
|
|
|
|
* Read into the page cache. If a page already exists, and PageUptodate() is
|
|
|
|
* not set, try to fill the page then wait for it to become unlocked.
|
|
|
|
*
|
|
|
|
* If the page does not get brought uptodate, return -EIO.
|
|
|
|
*/
|
|
|
|
struct page *read_cache_page(struct address_space *mapping,
|
2007-10-16 15:24:37 +07:00
|
|
|
pgoff_t index,
|
2011-07-26 07:12:23 +07:00
|
|
|
int (*filler)(void *, struct page *),
|
2007-05-07 04:49:04 +07:00
|
|
|
void *data)
|
|
|
|
{
|
2010-01-28 00:20:03 +07:00
|
|
|
return wait_on_page_read(read_cache_page_async(mapping, index, filler, data));
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(read_cache_page);
|
|
|
|
|
2007-10-16 15:24:59 +07:00
|
|
|
static size_t __iovec_copy_from_user_inatomic(char *vaddr,
|
2005-04-17 05:20:36 +07:00
|
|
|
const struct iovec *iov, size_t base, size_t bytes)
|
|
|
|
{
|
2009-03-02 17:00:57 +07:00
|
|
|
size_t copied = 0, left = 0;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
while (bytes) {
|
|
|
|
char __user *buf = iov->iov_base + base;
|
|
|
|
int copy = min(bytes, iov->iov_len - base);
|
|
|
|
|
|
|
|
base = 0;
|
2009-03-02 17:00:57 +07:00
|
|
|
left = __copy_from_user_inatomic(vaddr, buf, copy);
|
2005-04-17 05:20:36 +07:00
|
|
|
copied += copy;
|
|
|
|
bytes -= copy;
|
|
|
|
vaddr += copy;
|
|
|
|
iov++;
|
|
|
|
|
[PATCH] Prepare for __copy_from_user_inatomic to not zero missed bytes
The problem is that when we write to a file, the copy from userspace to
pagecache is first done with preemption disabled, so if the source address is
not immediately available the copy fails *and* *zeros* *the* *destination*.
This is a problem because a concurrent read (which admittedly is an odd thing
to do) might see zeros rather than what was there before the write, or what was
there after, or some mixture of the two (any of these being a reasonable thing
to see).
If the copy did fail, it will immediately be retried with preemption
re-enabled so any transient problem with accessing the source won't cause an
error.
The first copying does not need to zero any uncopied bytes, and doing so
causes the problem. It uses copy_from_user_atomic rather than copy_from_user
so the simple expedient is to change copy_from_user_atomic to *not* zero out
bytes on failure.
The first of these two patches prepares for the change by fixing two places
which assume copy_from_user_atomic does zero the tail. The two usages are
very similar pieces of code which copy from a userspace iovec into one or more
page-cache pages. These are changed to remove the assumption.
The second patch changes __copy_from_user_inatomic* to not zero the tail.
Once these are accepted, I will look at similar patches of other architectures
where this is important (ppc, mips and sparc being the ones I can find).
This patch:
There is a problem with __copy_from_user_inatomic zeroing the tail of the
buffer in the case of an error. As it is called in atomic context, the error
may be transient, so it results in zeros being written where maybe they
shouldn't be.
In the usage in filemap, this opens a window for a well timed read to see data
(zeros) which is not consistent with any ordering of reads and writes.
Most cases where __copy_from_user_inatomic is called, a failure results in
__copy_from_user being called immediately. As long as the latter zeros the
tail, the former doesn't need to. However in *copy_from_user_iovec
implementations (in both filemap and ntfs/file), it is assumed that
copy_from_user_inatomic will zero the tail.
This patch removes that assumption, so that after this patch it will
be safe for copy_from_user_inatomic to not zero the tail.
This patch also adds some commentary to filemap.h and asm-i386/uaccess.h.
After this patch, all architectures that might disable preempt when
kmap_atomic is called need to have their __copy_from_user_inatomic* "fixed".
This includes
- powerpc
- i386
- mips
- sparc
Signed-off-by: Neil Brown <neilb@suse.de>
Cc: David Howells <dhowells@redhat.com>
Cc: Anton Altaparmakov <aia21@cantab.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-25 19:47:58 +07:00
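/*
 * Illustrative sketch (not part of filemap.c): the calling pattern the
 * change above makes safe.  The atomic copy may fall short and does NOT
 * zero the uncopied tail, so the caller pre-faults the source and simply
 * retries on a zero-length copy instead of assuming zero-filled data.
 * example_copy_into_page() is a hypothetical helper.
 */
static size_t example_copy_into_page(struct page *page, struct iov_iter *i,
				     unsigned long offset, size_t bytes)
{
	size_t copied;

	do {
		/* Pre-fault the source so the atomic copy rarely falls short. */
		if (iov_iter_fault_in_readable(i, bytes))
			return 0;		/* genuinely bad user address */

		pagefault_disable();
		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
		pagefault_enable();
		/*
		 * copied == 0 means the source was paged out again between
		 * the fault-in and the copy; go around and try once more.
		 */
	} while (unlikely(copied == 0));

	return copied;
}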
|
|
|
if (unlikely(left))
|
2005-04-17 05:20:36 +07:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
return copied - left;
|
|
|
|
}
|
|
|
|
|
2007-10-16 15:24:59 +07:00
|
|
|
/*
|
|
|
|
* Copy as much as we can into the page and return the number of bytes which
|
tree-wide: fix assorted typos all over the place
That is "success", "unknown", "through", "performance", "[re|un]mapping"
, "access", "default", "reasonable", "[con]currently", "temperature"
, "channel", "[un]used", "application", "example","hierarchy", "therefore"
, "[over|under]flow", "contiguous", "threshold", "enough" and others.
Signed-off-by: André Goddard Rosa <andre.goddard@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2009-11-14 22:09:05 +07:00
|
|
|
* were successfully copied. If a fault is encountered then return the number of
|
2007-10-16 15:24:59 +07:00
|
|
|
* bytes which were copied.
|
|
|
|
*/
|
|
|
|
size_t iov_iter_copy_from_user_atomic(struct page *page,
|
|
|
|
struct iov_iter *i, unsigned long offset, size_t bytes)
|
|
|
|
{
|
|
|
|
char *kaddr;
|
|
|
|
size_t copied;
|
|
|
|
|
|
|
|
BUG_ON(!in_atomic());
|
2011-11-25 22:14:39 +07:00
|
|
|
kaddr = kmap_atomic(page);
|
2007-10-16 15:24:59 +07:00
|
|
|
if (likely(i->nr_segs == 1)) {
|
|
|
|
int left;
|
|
|
|
char __user *buf = i->iov->iov_base + i->iov_offset;
|
2009-03-02 17:00:57 +07:00
|
|
|
left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
|
2007-10-16 15:24:59 +07:00
|
|
|
copied = bytes - left;
|
|
|
|
} else {
|
|
|
|
copied = __iovec_copy_from_user_inatomic(kaddr + offset,
|
|
|
|
i->iov, i->iov_offset, bytes);
|
|
|
|
}
|
2011-11-25 22:14:39 +07:00
|
|
|
kunmap_atomic(kaddr);
|
2007-10-16 15:24:59 +07:00
|
|
|
|
|
|
|
return copied;
|
|
|
|
}
|
2007-10-16 15:25:07 +07:00
|
|
|
EXPORT_SYMBOL(iov_iter_copy_from_user_atomic);
|
2007-10-16 15:24:59 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* This has the same side effects and return value as
|
|
|
|
* iov_iter_copy_from_user_atomic().
|
|
|
|
* The difference is that it attempts to resolve faults.
|
|
|
|
* Page must not be locked.
|
|
|
|
*/
|
|
|
|
size_t iov_iter_copy_from_user(struct page *page,
|
|
|
|
struct iov_iter *i, unsigned long offset, size_t bytes)
|
|
|
|
{
|
|
|
|
char *kaddr;
|
|
|
|
size_t copied;
|
|
|
|
|
|
|
|
kaddr = kmap(page);
|
|
|
|
if (likely(i->nr_segs == 1)) {
|
|
|
|
int left;
|
|
|
|
char __user *buf = i->iov->iov_base + i->iov_offset;
|
2009-03-02 17:00:57 +07:00
|
|
|
left = __copy_from_user(kaddr + offset, buf, bytes);
|
2007-10-16 15:24:59 +07:00
|
|
|
copied = bytes - left;
|
|
|
|
} else {
|
|
|
|
copied = __iovec_copy_from_user_inatomic(kaddr + offset,
|
|
|
|
i->iov, i->iov_offset, bytes);
|
|
|
|
}
|
|
|
|
kunmap(page);
|
|
|
|
return copied;
|
|
|
|
}
|
2007-10-16 15:25:07 +07:00
|
|
|
EXPORT_SYMBOL(iov_iter_copy_from_user);
|
2007-10-16 15:24:59 +07:00
|
|
|
|
2008-03-11 01:43:59 +07:00
|
|
|
void iov_iter_advance(struct iov_iter *i, size_t bytes)
|
2007-10-16 15:24:59 +07:00
|
|
|
{
|
2008-03-11 01:43:59 +07:00
|
|
|
BUG_ON(i->count < bytes);
|
|
|
|
|
2007-10-16 15:24:59 +07:00
|
|
|
if (likely(i->nr_segs == 1)) {
|
|
|
|
i->iov_offset += bytes;
|
2008-03-11 01:43:59 +07:00
|
|
|
i->count -= bytes;
|
2007-10-16 15:24:59 +07:00
|
|
|
} else {
|
|
|
|
const struct iovec *iov = i->iov;
|
|
|
|
size_t base = i->iov_offset;
|
2011-10-28 04:53:08 +07:00
|
|
|
unsigned long nr_segs = i->nr_segs;
|
2007-10-16 15:24:59 +07:00
|
|
|
|
2008-02-02 21:01:17 +07:00
|
|
|
/*
|
|
|
|
* The !iov->iov_len check ensures we skip over unlikely
|
2008-03-11 01:43:59 +07:00
|
|
|
* zero-length segments (without overrunning the iovec).
|
2008-02-02 21:01:17 +07:00
|
|
|
*/
|
2008-07-31 04:45:12 +07:00
|
|
|
while (bytes || unlikely(i->count && !iov->iov_len)) {
|
2008-03-11 01:43:59 +07:00
|
|
|
int copy;
|
2007-10-16 15:24:59 +07:00
|
|
|
|
2008-03-11 01:43:59 +07:00
|
|
|
copy = min(bytes, iov->iov_len - base);
|
|
|
|
BUG_ON(!i->count || i->count < copy);
|
|
|
|
i->count -= copy;
|
2007-10-16 15:24:59 +07:00
|
|
|
bytes -= copy;
|
|
|
|
base += copy;
|
|
|
|
if (iov->iov_len == base) {
|
|
|
|
iov++;
|
2011-10-28 04:53:08 +07:00
|
|
|
nr_segs--;
|
2007-10-16 15:24:59 +07:00
|
|
|
base = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
i->iov = iov;
|
|
|
|
i->iov_offset = base;
|
2011-10-28 04:53:08 +07:00
|
|
|
i->nr_segs = nr_segs;
|
2007-10-16 15:24:59 +07:00
|
|
|
}
|
|
|
|
}
|
2007-10-16 15:25:07 +07:00
|
|
|
EXPORT_SYMBOL(iov_iter_advance);
|
2007-10-16 15:24:59 +07:00
|
|
|
|
2007-10-16 15:25:01 +07:00
|
|
|
/*
|
|
|
|
* Fault in the first iovec of the given iov_iter, to a maximum length
|
|
|
|
* of bytes. Returns 0 on success, or non-zero if the memory could not be
|
|
|
|
* accessed (ie. because it is an invalid address).
|
|
|
|
*
|
|
|
|
* writev-intensive code may want this to prefault several iovecs -- that
|
|
|
|
* would be possible (callers must not rely on the fact that _only_ the
|
|
|
|
* first iovec will be faulted with the current implementation).
|
|
|
|
*/
|
|
|
|
int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
|
2007-10-16 15:24:59 +07:00
|
|
|
{
|
|
|
|
char __user *buf = i->iov->iov_base + i->iov_offset;
|
2007-10-16 15:25:01 +07:00
|
|
|
bytes = min(bytes, i->iov->iov_len - i->iov_offset);
|
|
|
|
return fault_in_pages_readable(buf, bytes);
|
2007-10-16 15:24:59 +07:00
|
|
|
}
|
2007-10-16 15:25:07 +07:00
|
|
|
EXPORT_SYMBOL(iov_iter_fault_in_readable);
|
2007-10-16 15:24:59 +07:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Return the count of just the current iov_iter segment.
|
|
|
|
*/
|
|
|
|
size_t iov_iter_single_seg_count(struct iov_iter *i)
|
|
|
|
{
|
|
|
|
const struct iovec *iov = i->iov;
|
|
|
|
if (i->nr_segs == 1)
|
|
|
|
return i->count;
|
|
|
|
else
|
|
|
|
return min(i->count, iov->iov_len - i->iov_offset);
|
|
|
|
}
|
2007-10-16 15:25:07 +07:00
|
|
|
EXPORT_SYMBOL(iov_iter_single_seg_count);
|
2007-10-16 15:24:59 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* Performs necessary checks before doing a write
|
|
|
|
*
|
2006-06-23 16:03:49 +07:00
|
|
|
* Can adjust the writing position or the number of bytes to write.
|
2005-04-17 05:20:36 +07:00
|
|
|
* Returns appropriate error code that caller should return or
|
|
|
|
* zero in case that write should be allowed.
|
|
|
|
*/
|
|
|
|
inline int generic_write_checks(struct file *file, loff_t *pos, size_t *count, int isblk)
|
|
|
|
{
|
|
|
|
struct inode *inode = file->f_mapping->host;
|
2010-03-06 04:41:44 +07:00
|
|
|
unsigned long limit = rlimit(RLIMIT_FSIZE);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (unlikely(*pos < 0))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (!isblk) {
|
|
|
|
/* FIXME: this is for backwards compatibility with 2.4 */
|
|
|
|
if (file->f_flags & O_APPEND)
|
|
|
|
*pos = i_size_read(inode);
|
|
|
|
|
|
|
|
if (limit != RLIM_INFINITY) {
|
|
|
|
if (*pos >= limit) {
|
|
|
|
send_sig(SIGXFSZ, current, 0);
|
|
|
|
return -EFBIG;
|
|
|
|
}
|
|
|
|
if (*count > limit - (typeof(limit))*pos) {
|
|
|
|
*count = limit - (typeof(limit))*pos;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* LFS rule
|
|
|
|
*/
|
|
|
|
if (unlikely(*pos + *count > MAX_NON_LFS &&
|
|
|
|
!(file->f_flags & O_LARGEFILE))) {
|
|
|
|
if (*pos >= MAX_NON_LFS) {
|
|
|
|
return -EFBIG;
|
|
|
|
}
|
|
|
|
if (*count > MAX_NON_LFS - (unsigned long)*pos) {
|
|
|
|
*count = MAX_NON_LFS - (unsigned long)*pos;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Are we about to exceed the fs block limit ?
|
|
|
|
*
|
|
|
|
* If we have written data it becomes a short write. If we have
|
|
|
|
* exceeded without writing data we send a signal and return EFBIG.
|
|
|
|
* Linus frestrict idea will clean these up nicely..
|
|
|
|
*/
|
|
|
|
if (likely(!isblk)) {
|
|
|
|
if (unlikely(*pos >= inode->i_sb->s_maxbytes)) {
|
|
|
|
if (*count || *pos > inode->i_sb->s_maxbytes) {
|
|
|
|
return -EFBIG;
|
|
|
|
}
|
|
|
|
/* zero-length writes at ->s_maxbytes are OK */
|
|
|
|
}
|
|
|
|
|
|
|
|
if (unlikely(*pos + *count > inode->i_sb->s_maxbytes))
|
|
|
|
*count = inode->i_sb->s_maxbytes - *pos;
|
|
|
|
} else {
|
[PATCH] BLOCK: Make it possible to disable the block layer [try #6]
Make it possible to disable the block layer. Not all embedded devices require
it, some can make do with just JFFS2, NFS, ramfs, etc - none of which require
the block layer to be present.
This patch does the following:
(*) Introduces CONFIG_BLOCK to disable the block layer, buffering and blockdev
support.
(*) Adds dependencies on CONFIG_BLOCK to any configuration item that controls
an item that uses the block layer. This includes:
(*) Block I/O tracing.
(*) Disk partition code.
(*) All filesystems that are block based, eg: Ext3, ReiserFS, ISOFS.
(*) The SCSI layer. As far as I can tell, even SCSI chardevs use the
block layer to do scheduling. Some drivers that use SCSI facilities -
such as USB storage - end up disabled indirectly from this.
(*) Various block-based device drivers, such as IDE and the old CDROM
drivers.
(*) MTD blockdev handling and FTL.
(*) JFFS - which uses set_bdev_super(), something it could avoid doing by
taking a leaf out of JFFS2's book.
(*) Makes most of the contents of linux/blkdev.h, linux/buffer_head.h and
linux/elevator.h contingent on CONFIG_BLOCK being set. sector_div() is,
however, still used in places, and so is still available.
(*) Also made contingent are the contents of linux/mpage.h, linux/genhd.h and
parts of linux/fs.h.
(*) Makes a number of files in fs/ contingent on CONFIG_BLOCK.
(*) Makes mm/bounce.c (bounce buffering) contingent on CONFIG_BLOCK.
(*) set_page_dirty() doesn't call __set_page_dirty_buffers() if CONFIG_BLOCK
is not enabled.
(*) fs/no-block.c is created to hold out-of-line stubs and things that are
required when CONFIG_BLOCK is not set:
(*) Default blockdev file operations (to give error ENODEV on opening).
(*) Makes some /proc changes:
(*) /proc/devices does not list any blockdevs.
(*) /proc/diskstats and /proc/partitions are contingent on CONFIG_BLOCK.
(*) Makes some compat ioctl handling contingent on CONFIG_BLOCK.
(*) If CONFIG_BLOCK is not defined, makes sys_quotactl() return -ENODEV if
given command other than Q_SYNC or if a special device is specified.
(*) In init/do_mounts.c, no reference is made to the blockdev routines if
CONFIG_BLOCK is not defined. This does not prohibit NFS roots or JFFS2.
(*) The bdflush, ioprio_set and ioprio_get syscalls can now be absent (return
error ENOSYS by way of cond_syscall if so).
(*) The seclvl_bd_claim() and seclvl_bd_release() security calls do nothing if
CONFIG_BLOCK is not set, since they can't then happen.
Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2006-10-01 01:45:40 +07:00
|
|
|
#ifdef CONFIG_BLOCK
|
2005-04-17 05:20:36 +07:00
|
|
|
loff_t isize;
|
|
|
|
if (bdev_read_only(I_BDEV(inode)))
|
|
|
|
return -EPERM;
|
|
|
|
isize = i_size_read(inode);
|
|
|
|
if (*pos >= isize) {
|
|
|
|
if (*count || *pos > isize)
|
|
|
|
return -ENOSPC;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (*pos + *count > isize)
|
|
|
|
*count = isize - *pos;
|
2006-10-01 01:45:40 +07:00
|
|
|
#else
|
|
|
|
return -EPERM;
|
|
|
|
#endif
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(generic_write_checks);
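/*
 * Illustrative sketch (not part of filemap.c): how a write path typically
 * uses generic_write_checks() before copying any data.  The helper may move
 * *pos (O_APPEND) or trim *count (limits such as RLIMIT_FSIZE, MAX_NON_LFS
 * and s_maxbytes), and returns a negative errno when the write must be
 * refused outright.  example_prepare_write() is a hypothetical wrapper.
 */
static ssize_t example_prepare_write(struct file *file, loff_t *pos,
				     size_t *count)
{
	struct inode *inode = file->f_mapping->host;
	int err;

	err = generic_write_checks(file, pos, count, S_ISBLK(inode->i_mode));
	if (err)
		return err;
	if (*count == 0)
		return 0;		/* nothing left to write */

	return *count;			/* possibly trimmed byte count */
}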
|
|
|
|
|
2007-10-16 15:25:01 +07:00
|
|
|
int pagecache_write_begin(struct file *file, struct address_space *mapping,
|
|
|
|
loff_t pos, unsigned len, unsigned flags,
|
|
|
|
struct page **pagep, void **fsdata)
|
|
|
|
{
|
|
|
|
const struct address_space_operations *aops = mapping->a_ops;
|
|
|
|
|
2008-10-30 04:00:55 +07:00
|
|
|
return aops->write_begin(file, mapping, pos, len, flags,
|
2007-10-16 15:25:01 +07:00
|
|
|
pagep, fsdata);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(pagecache_write_begin);
|
|
|
|
|
|
|
|
int pagecache_write_end(struct file *file, struct address_space *mapping,
|
|
|
|
loff_t pos, unsigned len, unsigned copied,
|
|
|
|
struct page *page, void *fsdata)
|
|
|
|
{
|
|
|
|
const struct address_space_operations *aops = mapping->a_ops;
|
|
|
|
|
2008-10-30 04:00:55 +07:00
|
|
|
mark_page_accessed(page);
|
|
|
|
return aops->write_end(file, mapping, pos, len, copied, page, fsdata);
|
2007-10-16 15:25:01 +07:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(pagecache_write_end);
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
ssize_t
|
|
|
|
generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
|
|
|
|
unsigned long *nr_segs, loff_t pos, loff_t *ppos,
|
|
|
|
size_t count, size_t ocount)
|
|
|
|
{
|
|
|
|
struct file *file = iocb->ki_filp;
|
|
|
|
struct address_space *mapping = file->f_mapping;
|
|
|
|
struct inode *inode = mapping->host;
|
|
|
|
ssize_t written;
|
2008-07-24 11:27:04 +07:00
|
|
|
size_t write_len;
|
|
|
|
pgoff_t end;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (count != ocount)
|
|
|
|
*nr_segs = iov_shorten((struct iovec *)iov, *nr_segs, count);
|
|
|
|
|
2008-07-24 11:27:04 +07:00
|
|
|
write_len = iov_length(iov, *nr_segs);
|
|
|
|
end = (pos + write_len - 1) >> PAGE_CACHE_SHIFT;
|
|
|
|
|
2009-01-07 05:40:22 +07:00
|
|
|
written = filemap_write_and_wait_range(mapping, pos, pos + write_len - 1);
|
2008-07-24 11:27:04 +07:00
|
|
|
if (written)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* After a write we want buffered reads to be sure to go to disk to get
|
|
|
|
* the new data. We invalidate clean cached pages from the region we're
|
|
|
|
* about to write. We do this *before* the write so that we can return
|
2008-09-03 04:35:40 +07:00
|
|
|
* without clobbering -EIOCBQUEUED from ->direct_IO().
|
2008-07-24 11:27:04 +07:00
|
|
|
*/
|
|
|
|
if (mapping->nrpages) {
|
|
|
|
written = invalidate_inode_pages2_range(mapping,
|
|
|
|
pos >> PAGE_CACHE_SHIFT, end);
|
2008-09-03 04:35:40 +07:00
|
|
|
/*
|
|
|
|
* If a page can not be invalidated, return 0 to fall back
|
|
|
|
* to buffered write.
|
|
|
|
*/
|
|
|
|
if (written) {
|
|
|
|
if (written == -EBUSY)
|
|
|
|
return 0;
|
2008-07-24 11:27:04 +07:00
|
|
|
goto out;
|
2008-09-03 04:35:40 +07:00
|
|
|
}
|
2008-07-24 11:27:04 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Finally, try again to invalidate clean pages which might have been
|
|
|
|
* cached by non-direct readahead, or faulted in by get_user_pages()
|
|
|
|
* if the source of the write was an mmap'ed region of the file
|
|
|
|
* we're writing. Either one is a pretty crazy thing to do,
|
|
|
|
* so we don't support it 100%. If this invalidation
|
|
|
|
* fails, tough, the write still worked...
|
|
|
|
*/
|
|
|
|
if (mapping->nrpages) {
|
|
|
|
invalidate_inode_pages2_range(mapping,
|
|
|
|
pos >> PAGE_CACHE_SHIFT, end);
|
|
|
|
}
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
if (written > 0) {
|
2010-10-27 04:21:58 +07:00
|
|
|
pos += written;
|
|
|
|
if (pos > i_size_read(inode) && !S_ISBLK(inode->i_mode)) {
|
|
|
|
i_size_write(inode, pos);
|
2005-04-17 05:20:36 +07:00
|
|
|
mark_inode_dirty(inode);
|
|
|
|
}
|
2010-10-27 04:21:58 +07:00
|
|
|
*ppos = pos;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
2008-07-24 11:27:04 +07:00
|
|
|
out:
|
2005-04-17 05:20:36 +07:00
|
|
|
return written;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(generic_file_direct_write);
|
|
|
|
|
2007-10-16 15:24:57 +07:00
|
|
|
/*
|
|
|
|
* Find or create a page at the given pagecache position. Return the locked
|
|
|
|
* page. This function is specifically for buffered writes.
|
|
|
|
*/
|
fs: symlink write_begin allocation context fix
With the write_begin/write_end aops, page_symlink was broken because it
could no longer pass a GFP_NOFS type mask into the point where the
allocations happened. They are done in write_begin, which would always
assume that the filesystem can be entered from reclaim. This bug could
cause filesystem deadlocks.
The funny thing with having a gfp_t mask there is that it doesn't really
allow the caller to arbitrarily tinker with the context in which it can be
called. It couldn't ever be GFP_ATOMIC, for example, because it needs to
take the page lock. The only thing any callers care about is __GFP_FS
anyway, so turn that into a single flag.
Add a new flag for write_begin, AOP_FLAG_NOFS. Filesystems can now act on
this flag in their write_begin function. Change __grab_cache_page to
accept a nofs argument as well, to honour that flag (while we're there,
change the name to grab_cache_page_write_begin which is more instructive
and does away with random leading underscores).
This is really a more flexible way to go in the end anyway -- if a
filesystem happens to want any extra allocations aside from the pagecache
ones in its write_begin function, it may now use GFP_KERNEL (rather than
GFP_NOFS) for common case allocations (eg. ocfs2_alloc_write_ctxt, for a
random example).
[kosaki.motohiro@jp.fujitsu.com: fix ubifs]
[kosaki.motohiro@jp.fujitsu.com: fix fuse]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: <stable@kernel.org> [2.6.28.x]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Cleaned up the calling convention: just pass in the AOP flags
untouched to the grab_cache_page_write_begin() function. That
just simplifies everybody, and may even allow future expansion of the
logic. - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-05 03:00:53 +07:00
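/*
 * Illustrative sketch (not part of filemap.c): how a filesystem's
 * ->write_begin can rely on the allocation-context fix described above.
 * When the caller passes AOP_FLAG_NOFS, grab_cache_page_write_begin()
 * masks __GFP_FS off the pagecache allocation, so callers like
 * page_symlink() cannot recurse into the filesystem from reclaim.
 * example_write_begin() is a hypothetical, minimal implementation.
 */
static int example_write_begin(struct file *file, struct address_space *mapping,
			       loff_t pos, unsigned len, unsigned flags,
			       struct page **pagep, void **fsdata)
{
	pgoff_t index = pos >> PAGE_CACHE_SHIFT;
	struct page *page;

	/* AOP_FLAG_NOFS in @flags is honoured inside the helper. */
	page = grab_cache_page_write_begin(mapping, index, flags);
	if (!page)
		return -ENOMEM;

	*pagep = page;			/* returned to the caller locked */
	return 0;
}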
|
|
|
struct page *grab_cache_page_write_begin(struct address_space *mapping,
|
|
|
|
pgoff_t index, unsigned flags)
|
2007-10-16 15:24:57 +07:00
|
|
|
{
|
|
|
|
int status;
|
2012-01-11 06:07:53 +07:00
|
|
|
gfp_t gfp_mask;
|
2007-10-16 15:24:57 +07:00
|
|
|
struct page *page;
|
2009-01-05 03:00:53 +07:00
|
|
|
gfp_t gfp_notmask = 0;
|
2012-01-11 06:07:53 +07:00
|
|
|
|
2012-03-22 06:34:08 +07:00
|
|
|
gfp_mask = mapping_gfp_mask(mapping);
|
|
|
|
if (mapping_cap_account_dirty(mapping))
|
|
|
|
gfp_mask |= __GFP_WRITE;
|
2009-01-05 03:00:53 +07:00
|
|
|
if (flags & AOP_FLAG_NOFS)
|
|
|
|
gfp_notmask = __GFP_FS;
|
2007-10-16 15:24:57 +07:00
|
|
|
repeat:
|
|
|
|
page = find_lock_page(mapping, index);
|
2011-01-14 06:46:18 +07:00
|
|
|
if (page)
|
2011-05-28 02:23:34 +07:00
|
|
|
goto found;
|
2007-10-16 15:24:57 +07:00
|
|
|
|
2012-01-11 06:07:53 +07:00
|
|
|
page = __page_cache_alloc(gfp_mask & ~gfp_notmask);
|
2007-10-16 15:24:57 +07:00
|
|
|
if (!page)
|
|
|
|
return NULL;
|
2009-01-05 03:00:53 +07:00
|
|
|
status = add_to_page_cache_lru(page, mapping, index,
|
|
|
|
GFP_KERNEL & ~gfp_notmask);
|
2007-10-16 15:24:57 +07:00
|
|
|
if (unlikely(status)) {
|
|
|
|
page_cache_release(page);
|
|
|
|
if (status == -EEXIST)
|
|
|
|
goto repeat;
|
|
|
|
return NULL;
|
|
|
|
}
|
2011-05-28 02:23:34 +07:00
|
|
|
found:
|
|
|
|
wait_on_page_writeback(page);
|
2007-10-16 15:24:57 +07:00
|
|
|
return page;
|
|
|
|
}
|
fs: symlink write_begin allocation context fix
With the write_begin/write_end aops, page_symlink was broken because it
could no longer pass a GFP_NOFS type mask into the point where the
allocations happened. They are done in write_begin, which would always
assume that the filesystem can be entered from reclaim. This bug could
cause filesystem deadlocks.
The funny thing with having a gfp_t mask there is that it doesn't really
allow the caller to arbitrarily tinker with the context in which it can be
called. It couldn't ever be GFP_ATOMIC, for example, because it needs to
take the page lock. The only thing any callers care about is __GFP_FS
anyway, so turn that into a single flag.
Add a new flag for write_begin, AOP_FLAG_NOFS. Filesystems can now act on
this flag in their write_begin function. Change __grab_cache_page to
accept a nofs argument as well, to honour that flag (while we're there,
change the name to grab_cache_page_write_begin which is more instructive
and does away with random leading underscores).
This is really a more flexible way to go in the end anyway -- if a
filesystem happens to want any extra allocations aside from the pagecache
ones in its write_begin function, it may now use GFP_KERNEL (rather than
GFP_NOFS) for common case allocations (eg. ocfs2_alloc_write_ctxt, for a
random example).
[kosaki.motohiro@jp.fujitsu.com: fix ubifs]
[kosaki.motohiro@jp.fujitsu.com: fix fuse]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: <stable@kernel.org> [2.6.28.x]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Cleaned up the calling convention: just pass in the AOP flags
untouched to the grab_cache_page_write_begin() function. That
just simplifies everybody, and may even allow future expansion of the
logic. - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-05 03:00:53 +07:00
|
|
|
EXPORT_SYMBOL(grab_cache_page_write_begin);
|
2007-10-16 15:24:57 +07:00
|
|
|
|
2007-10-16 15:25:01 +07:00
|
|
|
static ssize_t generic_perform_write(struct file *file,
|
|
|
|
struct iov_iter *i, loff_t pos)
|
|
|
|
{
|
|
|
|
struct address_space *mapping = file->f_mapping;
|
|
|
|
const struct address_space_operations *a_ops = mapping->a_ops;
|
|
|
|
long status = 0;
|
|
|
|
ssize_t written = 0;
|
2007-10-16 15:25:03 +07:00
|
|
|
unsigned int flags = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Copies from kernel address space cannot fail (NFSD is a big user).
|
|
|
|
*/
|
|
|
|
if (segment_eq(get_fs(), KERNEL_DS))
|
|
|
|
flags |= AOP_FLAG_UNINTERRUPTIBLE;
|
2007-10-16 15:25:01 +07:00
|
|
|
|
|
|
|
do {
|
|
|
|
struct page *page;
|
|
|
|
unsigned long offset; /* Offset into pagecache page */
|
|
|
|
unsigned long bytes; /* Bytes to write to page */
|
|
|
|
size_t copied; /* Bytes copied from user */
|
|
|
|
void *fsdata;
|
|
|
|
|
|
|
|
offset = (pos & (PAGE_CACHE_SIZE - 1));
|
|
|
|
bytes = min_t(unsigned long, PAGE_CACHE_SIZE - offset,
|
|
|
|
iov_iter_count(i));
|
|
|
|
|
|
|
|
again:
|
|
|
|
/*
|
|
|
|
* Bring in the user page that we will copy from _first_.
|
|
|
|
* Otherwise there's a nasty deadlock on copying from the
|
|
|
|
* same page as we're writing to, without it being marked
|
|
|
|
* up-to-date.
|
|
|
|
*
|
|
|
|
* Not only is this an optimisation, but it is also required
|
|
|
|
* to check that the address is actually valid, when atomic
|
|
|
|
* usercopies are used, below.
|
|
|
|
*/
|
|
|
|
if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
|
|
|
|
status = -EFAULT;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2007-10-16 15:25:03 +07:00
|
|
|
status = a_ops->write_begin(file, mapping, pos, bytes, flags,
|
2007-10-16 15:25:01 +07:00
|
|
|
&page, &fsdata);
|
|
|
|
if (unlikely(status))
|
|
|
|
break;
|
|
|
|
|
mm: flush dcache before writing into page to avoid alias
The cache alias problem occurs if changes made through a user shared mapping
are not flushed before copying: the user and kernel mappings may land in two
different cache lines, and it is then impossible to guarantee coherence
after iov_iter_copy_from_user_atomic(). So the right steps should be:
flush_dcache_page(page);
kmap_atomic(page);
write to page;
kunmap_atomic(page);
flush_dcache_page(page);
More precisely, we might introduce two new APIs, flush_dcache_user_page and
flush_dcache_kern_page, to replace the two flush_dcache_page calls accordingly.
Here is a snippet tested on an omap2430 with a VIPT cache, and I think the
problem is not ARM-specific:
int val = 0x11111111;
fd = open("abc", O_RDWR);
addr = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
*(addr+0) = 0x44444444;
tmp = *(addr+0);
*(addr+1) = 0x77777777;
write(fd, &val, sizeof(int));
close(fd);
The first two words of the file are not always 0x11111111 0x77777777 as expected; sometimes we see 0x44444444 0x77777777.
Signed-off-by: Anfei <anfei.zhou@gmail.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: <linux-arch@vger.kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-02-03 04:44:02 +07:00
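For reference, a compilable completion of the snippet quoted above (a sketch only: the O_CREAT mode, the ftruncate() call and the volatile qualifier are additions needed to make it self-contained, not part of the original reproducer):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int val = 0x11111111;
        int fd = open("abc", O_RDWR | O_CREAT, 0644);
        volatile int *addr;
        int tmp;

        ftruncate(fd, 4096);            /* make sure the mapping is backed */
        addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        *(addr + 0) = 0x44444444;       /* dirty word 0 through the mapping */
        tmp = *(addr + 0);
        *(addr + 1) = 0x77777777;       /* dirty word 1 through the mapping */
        write(fd, &val, sizeof(int));   /* overwrite word 0 through write() */
        close(fd);

        /*
         * Inspect the file afterwards (e.g. hexdump -C abc): the expected
         * contents are 11111111 77777777, but on an affected VIPT machine
         * the first word sometimes remains 44444444.
         */
        (void)tmp;
        return 0;
}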
|
|
|
if (mapping_writably_mapped(mapping))
|
|
|
|
flush_dcache_page(page);
|
|
|
|
|
2007-10-16 15:25:01 +07:00
|
|
|
pagefault_disable();
|
|
|
|
copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
|
|
|
|
pagefault_enable();
|
|
|
|
flush_dcache_page(page);
|
|
|
|
|
2009-07-06 02:08:18 +07:00
|
|
|
mark_page_accessed(page);
|
2007-10-16 15:25:01 +07:00
|
|
|
status = a_ops->write_end(file, mapping, pos, bytes, copied,
|
|
|
|
page, fsdata);
|
|
|
|
if (unlikely(status < 0))
|
|
|
|
break;
|
|
|
|
copied = status;
|
|
|
|
|
|
|
|
cond_resched();
|
|
|
|
|
2008-02-02 21:01:17 +07:00
|
|
|
iov_iter_advance(i, copied);
|
2007-10-16 15:25:01 +07:00
|
|
|
if (unlikely(copied == 0)) {
|
|
|
|
/*
|
|
|
|
* If we were unable to copy any data at all, we must
|
|
|
|
* fall back to a single segment length write.
|
|
|
|
*
|
|
|
|
* If we didn't fallback here, we could livelock
|
|
|
|
* because not all segments in the iov can be copied at
|
|
|
|
* once without a pagefault.
|
|
|
|
*/
|
|
|
|
bytes = min_t(unsigned long, PAGE_CACHE_SIZE - offset,
|
|
|
|
iov_iter_single_seg_count(i));
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
pos += copied;
|
|
|
|
written += copied;
|
|
|
|
|
|
|
|
balance_dirty_pages_ratelimited(mapping);
|
2011-12-02 08:17:02 +07:00
|
|
|
if (fatal_signal_pending(current)) {
|
|
|
|
status = -EINTR;
|
|
|
|
break;
|
|
|
|
}
|
2007-10-16 15:25:01 +07:00
|
|
|
} while (iov_iter_count(i));
|
|
|
|
|
|
|
|
return written ? written : status;
|
|
|
|
}
|
|
|
|
|
|
|
|
ssize_t
|
|
|
|
generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
|
|
|
|
unsigned long nr_segs, loff_t pos, loff_t *ppos,
|
|
|
|
size_t count, ssize_t written)
|
|
|
|
{
|
|
|
|
struct file *file = iocb->ki_filp;
|
|
|
|
ssize_t status;
|
|
|
|
struct iov_iter i;
|
|
|
|
|
|
|
|
iov_iter_init(&i, iov, nr_segs, count, written);
|
2008-10-30 04:00:55 +07:00
|
|
|
status = generic_perform_write(file, &i, pos);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
if (likely(status >= 0)) {
|
2007-10-16 15:25:01 +07:00
|
|
|
written += status;
|
|
|
|
*ppos = pos + status;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
return written ? written : status;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(generic_file_buffered_write);
|
|
|
|
|
2009-08-17 23:10:06 +07:00
|
|
|
/**
|
|
|
|
* __generic_file_aio_write - write data to a file
|
|
|
|
* @iocb: IO state structure (file, offset, etc.)
|
|
|
|
* @iov: vector with data to write
|
|
|
|
* @nr_segs: number of segments in the vector
|
|
|
|
* @ppos: position where to write
|
|
|
|
*
|
|
|
|
* This function does all the work needed for actually writing data to a
|
|
|
|
* file. It does all basic checks, removes SUID from the file, updates
|
|
|
|
* modification times and calls proper subroutines depending on whether we
|
|
|
|
* do direct IO or a standard buffered write.
|
|
|
|
*
|
|
|
|
* It expects i_mutex to be grabbed unless we work on a block device or similar
|
|
|
|
* object which does not need locking at all.
|
|
|
|
*
|
|
|
|
* This function does *not* take care of syncing data in case of O_SYNC write.
|
|
|
|
* A caller has to handle it. This is mainly due to the fact that we want to
|
|
|
|
* avoid syncing under i_mutex.
|
|
|
|
*/
|
|
|
|
ssize_t __generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
|
|
|
|
unsigned long nr_segs, loff_t *ppos)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct file *file = iocb->ki_filp;
|
2006-10-20 13:28:13 +07:00
|
|
|
struct address_space * mapping = file->f_mapping;
|
2005-04-17 05:20:36 +07:00
|
|
|
size_t ocount; /* original count */
|
|
|
|
size_t count; /* after file limit checks */
|
|
|
|
struct inode *inode = mapping->host;
|
|
|
|
loff_t pos;
|
|
|
|
ssize_t written;
|
|
|
|
ssize_t err;
|
|
|
|
|
|
|
|
ocount = 0;
|
2007-05-08 14:23:02 +07:00
|
|
|
err = generic_segment_checks(iov, &nr_segs, &ocount, VERIFY_READ);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
count = ocount;
|
|
|
|
pos = *ppos;
|
|
|
|
|
|
|
|
/* We can write back this queue in page reclaim */
|
|
|
|
current->backing_dev_info = mapping->backing_dev_info;
|
|
|
|
written = 0;
|
|
|
|
|
|
|
|
err = generic_write_checks(file, &pos, &count, S_ISBLK(inode->i_mode));
|
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (count == 0)
|
|
|
|
goto out;
|
|
|
|
|
2008-06-24 21:50:14 +07:00
|
|
|
err = file_remove_suid(file);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2012-03-26 20:59:21 +07:00
|
|
|
err = file_update_time(file);
|
|
|
|
if (err)
|
|
|
|
goto out;
|
2005-04-17 05:20:36 +07:00
|
|
|
|
|
|
|
/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
|
|
|
|
if (unlikely(file->f_flags & O_DIRECT)) {
|
2006-10-20 13:28:13 +07:00
|
|
|
loff_t endbyte;
|
|
|
|
ssize_t written_buffered;
|
|
|
|
|
|
|
|
written = generic_file_direct_write(iocb, iov, &nr_segs, pos,
|
|
|
|
ppos, count, ocount);
|
2005-04-17 05:20:36 +07:00
|
|
|
if (written < 0 || written == count)
|
|
|
|
goto out;
|
|
|
|
/*
|
|
|
|
* direct-io write to a hole: fall through to buffered I/O
|
|
|
|
* for completing the rest of the request.
|
|
|
|
*/
|
|
|
|
pos += written;
|
|
|
|
count -= written;
|
2006-10-20 13:28:13 +07:00
|
|
|
written_buffered = generic_file_buffered_write(iocb, iov,
|
|
|
|
nr_segs, pos, ppos, count,
|
|
|
|
written);
|
|
|
|
/*
|
|
|
|
* If generic_file_buffered_write() returned a synchronous error
|
|
|
|
* then we want to return the number of bytes which were
|
|
|
|
* direct-written, or the error code if that was zero. Note
|
|
|
|
* that this differs from normal direct-io semantics, which
|
|
|
|
* will return -EFOO even if some bytes were written.
|
|
|
|
*/
|
|
|
|
if (written_buffered < 0) {
|
|
|
|
err = written_buffered;
|
|
|
|
goto out;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2006-10-20 13:28:13 +07:00
|
|
|
/*
|
|
|
|
* We need to ensure that the page cache pages are written to
|
|
|
|
* disk and invalidated to preserve the expected O_DIRECT
|
|
|
|
* semantics.
|
|
|
|
*/
|
|
|
|
endbyte = pos + written_buffered - written - 1;
|
2009-09-23 20:07:30 +07:00
|
|
|
err = filemap_write_and_wait_range(file->f_mapping, pos, endbyte);
|
2006-10-20 13:28:13 +07:00
|
|
|
if (err == 0) {
|
|
|
|
written = written_buffered;
|
|
|
|
invalidate_mapping_pages(mapping,
|
|
|
|
pos >> PAGE_CACHE_SHIFT,
|
|
|
|
endbyte >> PAGE_CACHE_SHIFT);
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* We don't know how much we wrote, so just return
|
|
|
|
* the number of bytes which were direct-written
|
|
|
|
*/
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
written = generic_file_buffered_write(iocb, iov, nr_segs,
|
|
|
|
pos, ppos, count, written);
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
out:
|
|
|
|
current->backing_dev_info = NULL;
|
|
|
|
return written ? written : err;
|
|
|
|
}
|
2009-08-17 23:10:06 +07:00
|
|
|
EXPORT_SYMBOL(__generic_file_aio_write);
|
|
|
|
|
|
|
|
/**
|
|
|
|
* generic_file_aio_write - write data to a file
|
|
|
|
* @iocb: IO state structure
|
|
|
|
* @iov: vector with data to write
|
|
|
|
* @nr_segs: number of segments in the vector
|
|
|
|
* @pos: position in file where to write
|
|
|
|
*
|
|
|
|
* This is a wrapper around __generic_file_aio_write() to be used by most
|
|
|
|
* filesystems. It takes care of syncing the file in case of O_SYNC file
|
|
|
|
* and acquires i_mutex as needed.
|
|
|
|
*/
|
2006-10-01 13:28:46 +07:00
|
|
|
ssize_t generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
|
|
|
|
unsigned long nr_segs, loff_t pos)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
struct file *file = iocb->ki_filp;
|
2009-08-18 00:52:36 +07:00
|
|
|
struct inode *inode = file->f_mapping->host;
|
2005-04-17 05:20:36 +07:00
|
|
|
ssize_t ret;
|
|
|
|
|
|
|
|
BUG_ON(iocb->ki_pos != pos);
|
|
|
|
|
2012-06-12 21:20:37 +07:00
|
|
|
sb_start_write(inode->i_sb);
|
2006-01-10 06:59:24 +07:00
|
|
|
mutex_lock(&inode->i_mutex);
|
2009-08-17 23:10:06 +07:00
|
|
|
ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
|
2006-01-10 06:59:24 +07:00
|
|
|
mutex_unlock(&inode->i_mutex);
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-08-18 00:52:36 +07:00
|
|
|
if (ret > 0 || ret == -EIOCBQUEUED) {
|
2005-04-17 05:20:36 +07:00
|
|
|
ssize_t err;
|
|
|
|
|
2009-08-18 00:52:36 +07:00
|
|
|
err = generic_write_sync(file, pos, ret);
|
2009-08-18 21:18:20 +07:00
|
|
|
if (err < 0 && ret > 0)
|
2005-04-17 05:20:36 +07:00
|
|
|
ret = err;
|
|
|
|
}
|
2012-06-12 21:20:37 +07:00
|
|
|
sb_end_write(inode->i_sb);
|
2005-04-17 05:20:36 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(generic_file_aio_write);
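As a hedged usage sketch (not part of filemap.c; example_fs_file_operations is a hypothetical name), a filesystem with no special write path of its own would typically wire these generic helpers straight into its file_operations:

static const struct file_operations example_fs_file_operations = {
        .llseek         = generic_file_llseek,
        .read           = do_sync_read,
        .aio_read       = generic_file_aio_read,
        .write          = do_sync_write,
        .aio_write      = generic_file_aio_write,
        .mmap           = generic_file_mmap,
};

generic_file_aio_write() then takes i_mutex and handles O_SYNC syncing itself, so the filesystem only has to provide its address_space_operations for the buffered path.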
|
|
|
|
|
2006-08-30 01:05:54 +07:00
|
|
|
/**
|
|
|
|
* try_to_release_page() - release old fs-specific metadata on a page
|
|
|
|
*
|
|
|
|
* @page: the page which the kernel is trying to free
|
|
|
|
* @gfp_mask: memory allocation flags (and I/O mode)
|
|
|
|
*
|
|
|
|
* The address_space is asked to try to release any data held against the page
|
|
|
|
* (presumably at page->private). If the release was successful, return `1'.
|
|
|
|
* Otherwise return zero.
|
|
|
|
*
|
2009-04-03 22:42:36 +07:00
|
|
|
* This may also be called if PG_fscache is set on a page, indicating that the
|
|
|
|
* page is known to the local caching routines.
|
|
|
|
*
|
2006-08-30 01:05:54 +07:00
|
|
|
* The @gfp_mask argument specifies whether I/O may be performed to release
|
2008-07-25 15:46:22 +07:00
|
|
|
* this page (__GFP_IO), and whether the call may block (__GFP_WAIT & __GFP_FS).
|
2006-08-30 01:05:54 +07:00
|
|
|
*
|
|
|
|
*/
|
|
|
|
int try_to_release_page(struct page *page, gfp_t gfp_mask)
|
|
|
|
{
|
|
|
|
struct address_space * const mapping = page->mapping;
|
|
|
|
|
|
|
|
BUG_ON(!PageLocked(page));
|
|
|
|
if (PageWriteback(page))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (mapping && mapping->a_ops->releasepage)
|
|
|
|
return mapping->a_ops->releasepage(page, gfp_mask);
|
|
|
|
return try_to_free_buffers(page);
|
|
|
|
}
|
|
|
|
|
|
|
|
EXPORT_SYMBOL(try_to_release_page);
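A hedged caller-side sketch (hypothetical, modelled loosely on what reclaim-style code does, not taken from the kernel): the page must be locked before calling, and the gfp_mask tells the filesystem whether it may block or start I/O while releasing:

static int example_try_drop_private(struct page *page)
{
        int released = 1;

        if (!trylock_page(page))
                return 0;
        /* GFP_KERNEL: releasing may block and may perform I/O */
        if (page_has_private(page))
                released = try_to_release_page(page, GFP_KERNEL);
        unlock_page(page);
        return released;
}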
|