commit 321fe13c93
xfstest generic/451 intermittently fails. The test does O_DIRECT writes to a file, and then reads back the result using buffered I/O, while running a separate set of tasks that are also doing buffered reads.

The client will invalidate the cache prior to a direct write, but it's easy for one of the other readers' replies to race in and reinstantiate the invalidated range with stale data. To fix this, we must serialize direct I/O writes and buffered reads.

We could just sprinkle in some shared locks on the i_rwsem for reads, and increase the exclusive footprint on the write side, but that would cause O_DIRECT writes to end up serialized vs. other direct requests.

Instead, borrow the scheme used by nfs.ko. Buffered writes take the i_rwsem exclusively, but buffered reads take a shared lock, allowing them to run in parallel. O_DIRECT requests also take a shared lock, but we need them not to run in parallel with buffered reads. A flag on the ceph_inode_info is used to indicate whether it's in direct or buffered I/O mode. When a conflicting request is submitted, it will block until the inode can be flipped to the necessary mode.

Link: https://tracker.ceph.com/issues/40985
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
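The flip-then-downgrade pattern the message describes can be sketched in a few lines. The code below is an illustration modeled on the nfs.ko scheme the commit cites, not the literal patch body; it assumes a CEPH_I_ODIRECT flag in ceph_inode_info's i_ceph_flags (protected by i_ceph_lock) recording whether the inode is in direct-I/O mode. A buffered reader takes i_rwsem shared on the fast path; if the inode is in O_DIRECT mode, it retakes the lock exclusively, flips the flag, waits for in-flight direct I/O to drain, then downgrades back to shared so other buffered readers can proceed in parallel.

/*
 * Sketch only: modeled on the nfs.ko approach; the flag and helper
 * names are assumptions, not necessarily the exact patch contents.
 */
static void ceph_block_o_direct(struct ceph_inode_info *ci,
                                struct inode *inode)
{
        /* Caller must hold i_rwsem exclusively. */
        if (READ_ONCE(ci->i_ceph_flags) & CEPH_I_ODIRECT) {
                spin_lock(&ci->i_ceph_lock);
                ci->i_ceph_flags &= ~CEPH_I_ODIRECT;
                spin_unlock(&ci->i_ceph_lock);
                inode_dio_wait(inode);  /* drain outstanding direct I/O */
        }
}

void ceph_start_io_read(struct inode *inode)
{
        struct ceph_inode_info *ci = ceph_inode(inode);

        /* Fast path: shared lock, inode already in buffered mode. */
        down_read(&inode->i_rwsem);
        if (!(READ_ONCE(ci->i_ceph_flags) & CEPH_I_ODIRECT))
                return;
        up_read(&inode->i_rwsem);

        /*
         * Slow path: take the lock exclusively so the mode can be
         * flipped, then downgrade so other buffered readers can run
         * in parallel with us.
         */
        down_write(&inode->i_rwsem);
        ceph_block_o_direct(ci, inode);
        downgrade_write(&inode->i_rwsem);
}

ceph_start_io_direct would be the mirror image: under the exclusive lock it sets CEPH_I_ODIRECT (after writing back and invalidating the page cache) before downgrading, so a conflicting buffered reader blocks until the inode flips back to buffered mode.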
fs/ceph/io.h
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _FS_CEPH_IO_H
#define _FS_CEPH_IO_H

struct inode;

/* Take/release a shared lock for a buffered read. */
void ceph_start_io_read(struct inode *inode);
void ceph_end_io_read(struct inode *inode);

/* Take/release an exclusive lock for a buffered write. */
void ceph_start_io_write(struct inode *inode);
void ceph_end_io_write(struct inode *inode);

/* Take/release a shared lock for O_DIRECT I/O. */
void ceph_start_io_direct(struct inode *inode);
void ceph_end_io_direct(struct inode *inode);

#endif /* _FS_CEPH_IO_H */
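For context, a caller brackets its I/O with the matching start/end pair. The snippet below is a hypothetical caller in the buffered read path, not code from this commit; example_read_iter is an invented name, and generic_file_read_iter stands in for whatever the real read path invokes:

/* Hypothetical caller in the buffered read path (sketch only). */
static ssize_t example_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
        struct inode *inode = file_inode(iocb->ki_filp);
        ssize_t ret;

        ceph_start_io_read(inode);      /* shared; excludes O_DIRECT mode */
        ret = generic_file_read_iter(iocb, to);
        ceph_end_io_read(inode);

        return ret;
}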