- Feb 03, 2025
-
-
[ Upstream commit c13094b894de289514d84b8db56d1f2931a0bade ] On 32-bit kernels, iomap_write_delalloc_scan() was inadvertently using a 32-bit position due to folio_next_index() returning an unsigned long. This could lead to an infinite loop when writing to an xfs filesystem. Signed-off-by:
Marco Nelissen <marco.nelissen@gmail.com> Link: https://lore.kernel.org/r/20250109041253.2494374-1-marco.nelissen@gmail.com Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
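A minimal sketch of the pitfall this describes (not the upstream patch; names are illustrative only). On a 32-bit kernel, unsigned long is 32 bits while file offsets (loff_t) are 64 bits, so doing the page-index-to-byte shift in unsigned long arithmetic wraps once the position passes 4 GiB and the scan position stops advancing:

  #include <linux/pagemap.h>

  static loff_t next_scan_pos(struct folio *folio)
  {
          /* buggy on 32-bit: folio_next_index() returns unsigned long,
           * so the shift is evaluated in 32-bit arithmetic and wraps */
          /* return folio_next_index(folio) << PAGE_SHIFT; */

          /* safe: widen to a 64-bit type before shifting */
          return (loff_t)folio_next_index(folio) << PAGE_SHIFT;
  }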
-
- Jan 14, 2025
-
-
[ Upstream commit 6db38858 ] iomap_want_unshare_iter currently sits in fs/iomap/buffered-io.c, which depends on CONFIG_BLOCK. It is also used in fs/dax.c, which has no such dependency. Given that it is a trivial check, turn it into an inline in include/linux/iomap.h to fix the DAX && !BLOCK build. Fixes: 6ef6a0e8 ("iomap: share iomap_unshare_iter predicate code with fsdax") Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20241015041350.118403-1-hch@lst.de Reviewed-by:
Brian Foster <bfoster@redhat.com> Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
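A rough sketch of what such a header-only predicate can look like, using the struct iomap_iter fields named in these commits; treat the exact set of checks as belonging to the upstream commit rather than to this sketch:

  /* trivial enough to live as a static inline in include/linux/iomap.h */
  static inline bool iomap_want_unshare_iter(const struct iomap_iter *iter)
  {
          /* only unshare blocks that are marked shared and actually have
           * a source mapping worth copying from */
          return (iter->iomap.flags & IOMAP_F_SHARED) &&
                 iter->srcmap.type != IOMAP_HOLE &&
                 iter->srcmap.type != IOMAP_DELALLOC;
  }

Because it only inspects the iomap_iter itself, it has no dependency on the CONFIG_BLOCK-only buffered-io code, which is what fixes the DAX && !BLOCK build.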
-
[ Upstream commit 6ef6a0e8 ] The predicate code that iomap_unshare_iter uses to decide if it really needs to unshare a file range mapping should be shared with the fsdax version, because right now they're opencoded and inconsistent. Note that we simplify the predicate logic a bit -- we no longer allow unsharing of inline data mappings, but there aren't any filesystems that allow shared inline data currently. This is a fix in the sense that it should have been ported to fsdax. Fixes: b53fdb21 ("iomap: improve shared block detection in iomap_unshare_iter") Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Link: https://lore.kernel.org/r/172796813294.1131942.15762084021076932620.stgit@frogsfrogsfrogs Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Christian Brauner <brauner@kernel.org> Stable-dep-of: 50793801 ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
[ Upstream commit f7a4874d ] If unshare encounters a delalloc reservation in the srcmap, that means that the file range isn't shared because delalloc reservations cannot be reflinked. Therefore, don't try to unshare them. Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Link: https://lore.kernel.org/r/20241002150040.GB21853@frogsfrogsfrogs Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Brian Foster <bfoster@redhat.com> Signed-off-by:
Christian Brauner <brauner@kernel.org> Stable-dep-of: 50793801 ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
[ Upstream commit b53fdb21 ] Currently iomap_unshare_iter relies on the IOMAP_F_SHARED flag to detect blocks to unshare. This is reasonable, but IOMAP_F_SHARED is also useful for the file system to do internal book keeping for out of place writes. XFS used to do that, until it got removed in commit 72a048c1 ("xfs: only set IOMAP_F_SHARED when providing a srcmap to a write") because unshare would otherwise incorrectly unshare such blocks. Add an extra safeguard by checking the explicitly provided srcmap instead of falling back to the iomap for valid data, as that easily catches the case where we'd just copy from the same place we'd write to, allowing us to reinstate setting IOMAP_F_SHARED for all XFS writes that go to the COW fork. Signed-off-by:
Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20240910043949.3481298-3-hch@lst.de Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Christian Brauner <brauner@kernel.org> Stable-dep-of: 50793801 ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
[ Upstream commit a5f31a50 ] Convert iomap_unshare_iter to create large folios if possible, since the write and zeroing paths already do that. I think this got missed in the conversion of the write paths that landed in 6.6-rc1. Cc: ritesh.list@gmail.com, willy@infradead.org Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Ritesh Harjani (IBM) <ritesh.list@gmail.com> Stable-dep-of: 50793801 ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
[ Upstream commit 7a9d43ea ] When a direct I/O completion invalidates the page cache, it holds neither the i_rwsem nor the invalidate_lock, so it can race with iomap_write_delalloc_release. If the search for the end of the region that contains data returns the start offset, we hit such a race and just need to look for the end of the newly created hole instead. Signed-off-by:
Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20240910043949.3481298-2-hch@lst.de Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
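A hedged sketch of the race handling described above, built on mapping_seek_hole_data(); the helper name and exact placement are illustrative:

  /* Find the end of the data region starting at start_byte.  If the
   * answer is start_byte itself, a racing invalidation (e.g. direct I/O
   * completion) has already punched this range out of the page cache,
   * so return the end of that newly created hole instead of spinning
   * on the same offset forever. */
  static loff_t data_scan_limit(struct address_space *mapping,
                                loff_t start_byte, loff_t end_byte)
  {
          loff_t data_end = mapping_seek_hole_data(mapping, start_byte,
                                                   end_byte, SEEK_HOLE);

          if (data_end == start_byte)
                  return mapping_seek_hole_data(mapping, start_byte,
                                                end_byte, SEEK_DATA);
          return data_end;
  }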
-
[ Upstream commit a311a08a ] File contents can only be shared (i.e. reflinked) below EOF, so it makes no sense to try to unshare ranges beyond EOF. Constrain the file range parameters here so that we don't have to do that in the callers. Fixes: 5f4e5752 ("fs: add iomap_file_dirty") Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Link: https://lore.kernel.org/r/20241002150213.GC21853@frogsfrogsfrogs Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Brian Foster <bfoster@redhat.com> Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
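A minimal sketch of the clamping this describes, assuming pos/len are the byte range handed to iomap_file_unshare(); the exact placement is per the upstream commit:

  loff_t size = i_size_read(inode);

  /* nothing past EOF can be shared, so don't even try to unshare it */
  if (pos >= size)
          return 0;
  if (len > size - pos)
          len = size - pos;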
-
- Aug 12, 2024
-
-
[ Upstream commit f5ceb1bb ] If the extent spans the block that contains i_size, we need to handle both halves separately so that we properly zero data in the page cache for blocks that are entirely outside of i_size. But this is needed only when i_size falls within the folio currently being processed. "orig_pos + length > isize" can be true for every folio if the mapped extent length is greater than the folio size, which causes plen to be trimmed for every folio instead of only the last one. So use orig_plen to check "orig_pos + orig_plen > isize" instead. Signed-off-by:
Ritesh Harjani (IBM) <ritesh.list@gmail.com> Link: https://lore.kernel.org/r/a32e5f9a4fcfdb99077300c4020ed7ae61d6e0f9.1715067055.git.ritesh.list@gmail.com Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Jan Kara <jack@suse.cz> cc: Ojaswin Mujoo <ojaswin@linux.ibm.com> Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
- Jun 12, 2024
-
-
[ Upstream commit d7b64041 ] A recent multithreaded write data corruption has been uncovered in the iomap write code. The core of the problem is partial folio writes can be flushed to disk while a new racing write can map it and fill the rest of the page:

  writeback                          new write

  allocate blocks
    blocks are unwritten
  submit IO
  .....
                                     map blocks
                                     iomap indicates UNWRITTEN range
                                     loop {
                                       lock folio
                                       copyin data
  .....
  IO completes
    runs unwritten extent conv
      blocks are marked written
                                       <iomap now stale>
                                       get next folio
                                     }

Now add memory pressure such that memory reclaim evicts the partially written folio that has already been written to disk. When the new write finally gets to the last partial page of the new write, it does not find it in cache, so it instantiates a new page, sees the iomap is unwritten, and zeros the part of the page that it does not have data from. This overwrites the data on disk that was originally written. The full description of the corruption mechanism can be found here: https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/ To solve this problem, we need to check whether the iomap is still valid after we lock each folio during the write. We have to do it after we lock the page so that we don't end up with state changes occurring while we wait for the folio to be locked. Hence we need a mechanism to be able to check that the cached iomap is still valid (similar to what we already do in buffered writeback), and we need a way for ->begin_write to back out and tell the high level iomap iterator that we need to remap the remaining write range. The iomap needs to grow some storage for the validity cookie that the filesystem provides to travel with the iomap. XFS, in particular, also needs to know some more information about what the iomap maps (attribute extents rather than file data extents) to for the validity cookie to cover all the types of iomaps we might need to validate. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Leah Rumancik <leah.rumancik@gmail.com> Acked-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
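A heavily hedged sketch of the validity-cookie idea (field and callback names are approximate; the real plumbing is in the upstream series): the filesystem stamps the iomap with a generation value when it hands it out, and the write path re-checks that value after each folio is locked, remapping the remaining range if it went stale:

  static u64 sketch_fs_generation;        /* bumped on every extent change */

  static bool sketch_iomap_valid(struct inode *inode,
                                 const struct iomap *iomap)
  {
          /* stale if the fs changed its extent map since the iomap was
           * built; the iterator then remaps the remaining write range */
          return iomap->validity_cookie == READ_ONCE(sketch_fs_generation);
  }

The check runs with the folio locked, so a racing unwritten-extent conversion (as in the diagram above) is caught before any zeroing of the folio happens.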
-
[ Upstream commit f43dc4dc ] iomap_file_buffered_write_punch_delalloc() currently invalidates the page cache over the unused range of the delalloc extent that was allocated. While the write allocated the delalloc extent, it does not own it exclusively as the write does not hold any locks that prevent either writeback or mmap page faults from changing the state of either the page cache or the extent state backing this range. Whilst xfs_bmap_punch_delalloc_range() already handles races in extent conversion - it will only punch out delalloc extents and it ignores any other type of extent - the page cache truncate does not discriminate between data written by this write or some other task. As a result, truncating the page cache can result in data corruption if the write races with mmap modifications to the file over the same range. generic/346 exercises this workload, and if we randomly fail writes (as will happen when iomap gets stale iomap detection later in the patchset), it will randomly corrupt the file data because it removes data written by mmap() in the same page as the write() that failed. Hence we do not want to punch out the page cache over the range of the extent we failed to write to - what we actually need to do is detect the ranges that have dirty data in cache over them and *not punch them out*. To do this, we have to walk the page cache over the range of the delalloc extent we want to remove. This is made complex by the fact we have to handle partially up-to-date folios correctly and this can happen even when the FSB size == PAGE_SIZE because we now support multi-page folios in the page cache. Because we are only interested in discovering the edges of data ranges in the page cache (i.e. hole-data boundaries) we can make use of mapping_seek_hole_data() to find those transitions in the page cache. As we hold the invalidate_lock, we know that the boundaries are not going to change while we walk the range. This interface is also byte-based and is sub-page block aware, so we can find the data ranges in the cache based on byte offsets rather than page, folio or fs block sized chunks. This greatly simplifies the logic of finding dirty cached ranges in the page cache. Once we've identified a range that contains cached data, we can then iterate the range folio by folio. This allows us to determine if the data is dirty and hence perform the correct delalloc extent punching operations. The seek interface we use to iterate data ranges will give us sub-folio start/end granularity, so we may end up looking up the same folio multiple times as the seek interface iterates across each discontiguous data region in the folio. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Leah Rumancik <leah.rumancik@gmail.com> Acked-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
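A simplified sketch of the walk described above (names illustrative; the real code additionally inspects each folio's dirty state before deciding what to keep): use mapping_seek_hole_data() to find the cached data ranges and only punch the delalloc blocks in the gaps between them:

  static int punch_around_cached_data(struct inode *inode,
                  loff_t start_byte, loff_t end_byte,
                  int (*punch)(struct inode *inode, loff_t offset,
                               loff_t length))
  {
          struct address_space *mapping = inode->i_mapping;
          loff_t punch_start = start_byte;
          int error;

          while (punch_start < end_byte) {
                  /* byte-granular, sub-page-block aware search for the
                   * next range that has data in the page cache */
                  loff_t data_start = mapping_seek_hole_data(mapping,
                                  punch_start, end_byte, SEEK_DATA);
                  loff_t data_end;

                  if (data_start < 0 || data_start >= end_byte)
                          break;
                  data_end = mapping_seek_hole_data(mapping, data_start,
                                  end_byte, SEEK_HOLE);

                  /* no cached data before data_start: safe to punch */
                  if (data_start > punch_start) {
                          error = punch(inode, punch_start,
                                        data_start - punch_start);
                          if (error)
                                  return error;
                  }
                  punch_start = data_end;
          }
          if (punch_start >= end_byte)
                  return 0;
          /* punch whatever trails the last cached data range */
          return punch(inode, punch_start, end_byte - punch_start);
  }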
-
[ Upstream commit 9c7babf9 ] Because that's what Christoph wants for this error handling path only XFS uses. It requires a new iomap export for handling errors over delalloc ranges. This is basically the XFS code as it stands, but even though Christoph wants this as iomap functionality, we still have to call it from the filesystem specific ->iomap_end callback, and call into the iomap code with yet another filesystem specific callback to punch the delalloc extent within the defined ranges. Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Leah Rumancik <leah.rumancik@gmail.com> Acked-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- Dec 12, 2023
-
-
commit 936e114a upstream. Move the ki_pos update down a bit to prepare for a better common helper that invalidates pages based on an iocb. Link: https://lkml.kernel.org/r/20230601145904.1385409-3-hch@lst.de Signed-off-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Damien Le Moal <dlemoal@kernel.org> Reviewed-by:
Hannes Reinecke <hare@suse.de> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andreas Gruenbacher <agruenba@redhat.com> Cc: Anna Schumaker <anna@kernel.org> Cc: Chao Yu <chao@kernel.org> Cc: Christian Brauner <brauner@kernel.org> Cc: Ilya Dryomov <idryomov@gmail.com> Cc: Jaegeuk Kim <jaegeuk@kernel.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Miklos Szeredi <miklos@szeredi.hu> Cc: Miklos Szeredi <mszeredi@redhat.com> Cc: Theodore Ts'o <tytso@mit.edu> Cc: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: Xiubo Li <xiubli@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Cc: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20231205122122.dfhhoaswsfscuhc3@quack3 Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- Oct 12, 2023
-
-
[ Upstream commit a221ab71 ] We do not need to release the iomap_page in iomap_invalidate_folio() to allow the folio to be split. The splitting code will call ->release_folio() if there is still per-fs private data attached to the folio. At that point, we will check if the folio is still dirty and decline to release the iomap_page. It is possible to trigger the warning in perfectly legitimate circumstances (eg if a disk read fails, we do a partial write to the folio, then we truncate the folio), which will cause those writes to be lost. Fixes: 60d82310 ("iomap: Support large folios in invalidatepage") Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
- Oct 02, 2022
-
-
Darrick J. Wong authored
Add a new tracepoint so we can see what mapping the filesystem returns to writeback a dirty page. Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Dave Chinner <dchinner@redhat.com>
-
Darrick J. Wong authored
Every now and then I see this crash on arm64: Unable to handle kernel NULL pointer dereference at virtual address 00000000000000f8 Buffer I/O error on dev dm-0, logical block 8733687, async page read Mem abort info: ESR = 0x0000000096000006 EC = 0x25: DABT (current EL), IL = 32 bits SET = 0, FnV = 0 EA = 0, S1PTW = 0 FSC = 0x06: level 2 translation fault Data abort info: ISV = 0, ISS = 0x00000006 CM = 0, WnR = 0 user pgtable: 64k pages, 42-bit VAs, pgdp=0000000139750000 [00000000000000f8] pgd=0000000000000000, p4d=0000000000000000, pud=0000000000000000, pmd=0000000000000000 Internal error: Oops: 96000006 [#1] PREEMPT SMP Buffer I/O error on dev dm-0, logical block 8733688, async page read Dumping ftrace buffer: Buffer I/O error on dev dm-0, logical block 8733689, async page read (ftrace buffer empty) XFS (dm-0): log I/O error -5 Modules linked in: dm_thin_pool dm_persistent_data XFS (dm-0): Metadata I/O Error (0x1) detected at xfs_trans_read_buf_map+0x1ec/0x590 [xfs] (fs/xfs/xfs_trans_buf.c:296). dm_bio_prison XFS (dm-0): Please unmount the filesystem and rectify the problem(s) XFS (dm-0): xfs_imap_lookup: xfs_ialloc_read_agi() returned error -5, agno 0 dm_bufio dm_log_writes xfs nft_chain_nat xt_REDIRECT nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip6t_REJECT potentially unexpected fatal signal 6. nf_reject_ipv6 potentially unexpected fatal signal 6. ipt_REJECT nf_reject_ipv4 CPU: 1 PID: 122166 Comm: fsstress Tainted: G W 6.0.0-rc5-djwa #rc5 3004c9f1de887ebae86015f2677638ce51ee7 rpcsec_gss_krb5 auth_rpcgss xt_tcpudp ip_set_hash_ip ip_set_hash_net xt_set nft_compat ip_set_hash_mac ip_set nf_tables Hardware name: QEMU KVM Virtual Machine, BIOS 1.5.1 06/16/2021 pstate: 60001000 (nZCv daif -PAN -UAO -TCO -DIT +SSBS BTYPE=--) ip_tables pc : 000003fd6d7df200 x_tables lr : 000003fd6d7df1ec overlay nfsv4 CPU: 0 PID: 54031 Comm: u4:3 Tainted: G W 6.0.0-rc5-djwa #rc5 3004c9f1de887ebae86015f2677638ce51ee7405 Hardware name: QEMU KVM Virtual Machine, BIOS 1.5.1 06/16/2021 Workqueue: writeback wb_workfn sp : 000003ffd9522fd0 (flush-253:0) pstate: 60401005 (nZCv daif +PAN -UAO -TCO -DIT +SSBS BTYPE=--) pc : errseq_set+0x1c/0x100 x29: 000003ffd9522fd0 x28: 0000000000000023 x27: 000002acefeb6780 x26: 0000000000000005 x25: 0000000000000001 x24: 0000000000000000 x23: 00000000ffffffff x22: 0000000000000005 lr : __filemap_set_wb_err+0x24/0xe0 x21: 0000000000000006 sp : fffffe000f80f760 x29: fffffe000f80f760 x28: 0000000000000003 x27: fffffe000f80f9f8 x26: 0000000002523000 x25: 00000000fffffffb x24: fffffe000f80f868 x23: fffffe000f80fbb0 x22: fffffc0180c26a78 x21: 0000000002530000 x20: 0000000000000000 x19: 0000000000000000 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14: 0000000000000001 x13: 0000000000470af3 x12: fffffc0058f70000 x11: 0000000000000040 x10: 0000000000001b20 x9 : fffffe000836b288 x8 : fffffc00eb9fd480 x7 : 0000000000f83659 x6 : 0000000000000000 x5 : 0000000000000869 x4 : 0000000000000005 x3 : 00000000000000f8 x20: 000003fd6d740020 x19: 000000000001dd36 x18: 0000000000000001 x17: 000003fd6d78704c x16: 0000000000000001 x15: 000002acfac87668 x2 : 0000000000000ffa x1 : 00000000fffffffb x0 : 00000000000000f8 Call trace: errseq_set+0x1c/0x100 __filemap_set_wb_err+0x24/0xe0 iomap_do_writepage+0x5e4/0xd5c write_cache_pages+0x208/0x674 iomap_writepages+0x34/0x60 xfs_vm_writepages+0x8c/0xcc [xfs 7a861f39c43631f15d3a5884246ba5035d4ca78b] x14: 0000000000000000 x13: 2064656e72757465 x12: 0000000000002180 x11: 000003fd6d8a82d0 x10: 0000000000000000 x9 : 
000003fd6d8ae288 x8 : 0000000000000083 x7 : 00000000ffffffff x6 : 00000000ffffffee x5 : 00000000fbad2887 x4 : 000003fd6d9abb58 x3 : 000003fd6d740020 x2 : 0000000000000006 x1 : 000000000001dd36 x0 : 0000000000000000 CPU: 1 PID: 122167 Comm: fsstress Tainted: G W 6.0.0-rc5-djwa #rc5 3004c9f1de887ebae86015f2677638ce51ee7 do_writepages+0x90/0x1c4 __writeback_single_inode+0x4c/0x4ac Hardware name: QEMU KVM Virtual Machine, BIOS 1.5.1 06/16/2021 writeback_sb_inodes+0x214/0x4ac wb_writeback+0xf4/0x3b0 pstate: 60001000 (nZCv daif -PAN -UAO -TCO -DIT +SSBS BTYPE=--) wb_workfn+0xfc/0x580 process_one_work+0x1e8/0x480 pc : 000003fd6d7df200 worker_thread+0x78/0x430 This crash is a result of iomap_writepage_map encountering some sort of error during writeback and wanting to set that error code in the file mapping so that fsync will report it. Unfortunately, the code dereferences folio->mapping after unlocking the folio, which means that another thread could have removed the page from the page cache (writeback doesn't hold the invalidation lock) and give it to somebody else. At best we crash the system like above; at worst, we corrupt memory or set an error on some other unsuspecting file while failing to record the problems with *this* file. Regardless, fix the problem by reporting the error to the inode mapping. NOTE: Commit 598ecfba lifted the XFS writeback code to iomap, so this fix should be backported to XFS in the 4.6-5.4 kernels in addition to iomap in the 5.5-5.19 kernels. Fixes: e735c007 ("iomap: Convert iomap_add_to_ioend() to take a folio") # 5.17 onward Fixes: 598ecfba ("iomap: lift the xfs writeback code to iomap") # 5.5-5.16, needs backporting Fixes: 150d5be0 ("xfs: remove xfs_cancel_ioend") # 4.6-5.4, needs backporting Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Matthew Wilcox (Oracle) <willy@infradead.org>
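A hedged sketch of the ordering this fix enforces (not the exact upstream diff): record the writeback error through the inode's own mapping, which is known to be stable, rather than via folio->mapping after the folio has been unlocked and possibly reclaimed or handed to someone else:

  if (error)
          mapping_set_error(inode->i_mapping, error);   /* before unlock */
  folio_unlock(folio);
  /* do NOT dereference folio->mapping past this point */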
-
- Aug 09, 2022
-
-
Al Viro authored
Equivalent of single-segment iovec. Initialized by iov_iter_ubuf(), checked for by iter_is_ubuf(), otherwise behaves like ITER_IOVEC ones. We are going to expose the things like ->write_iter() et.al. to those in subsequent commits. New predicate (user_backed_iter()) that is true for ITER_IOVEC and ITER_UBUF; places like direct-IO handling should use that for checking that pages we modify after getting them from iov_iter_get_pages() would need to be dirtied. DO NOT assume that replacing iter_is_iovec() with user_backed_iter() will solve all problems - there's code that uses iter_is_iovec() to decide how to poke around in iov_iter guts and for that the predicate replacement obviously won't suffice. Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
- Aug 02, 2022
-
-
Matthew Wilcox (Oracle) authored
There is nothing iomap-specific about iomap_migratepage(), and it fits a pattern used by several other filesystems, so move it to mm/migrate.c, convert it to be filemap_migrate_folio() and convert the iomap filesystems to use it. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <djwong@kernel.org>
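For a filesystem, opting in is a one-line change in its address_space_operations; a minimal sketch (struct name hypothetical, only the relevant field shown):

  #include <linux/fs.h>
  #include <linux/pagemap.h>

  static const struct address_space_operations sketch_aops = {
          .migrate_folio  = filemap_migrate_folio,
  };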
-
- Jul 25, 2022
-
-
Stefan Roesch authored
If iomap_write_iter() encounters -EAGAIN, return -EAGAIN to the caller. Signed-off-by:
Stefan Roesch <shr@fb.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Link: https://lore.kernel.org/r/20220623175157.1715274-7-shr@fb.com Reviewed-by:
Christoph Hellwig <hch@lst.de> [axboe: make the suggested ternary edit] Signed-off-by:
Jens Axboe <axboe@kernel.dk>
-
Stefan Roesch authored
This adds async buffered write support to iomap. This replaces the call to balance_dirty_pages_ratelimited() with a call to balance_dirty_pages_ratelimited_flags(), which allows specifying whether the write request is async or not. In addition, this also moves the above function call to the beginning of the function. If the function call is at the end of the function and the decision is made to throttle writes, then there is no request that io-uring can wait on. By moving it to the beginning of the function, the write request is not issued, but returns -EAGAIN instead. io-uring will punt the request and process it in the io-worker. By moving the function call to the beginning of the function, the write throttling will happen one page later. Signed-off-by:
Stefan Roesch <shr@fb.com> Reviewed-by:
Jan Kara <jack@suse.cz> Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Link: https://lore.kernel.org/r/20220623175157.1715274-6-shr@fb.com Signed-off-by:
Jens Axboe <axboe@kernel.dk>
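A hedged sketch of the call described above, assuming the usual iocb/mapping variables in the buffered write loop; BDP_ASYNC is the flag that turns dirty throttling into an early -EAGAIN for nowait writers:

  unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? BDP_ASYNC : 0;
  int status;

  /* called at the top of the per-page loop, before any work is done,
   * so an async writer that would be throttled is punted to the
   * io-worker instead of blocking here */
  status = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags);
  if (unlikely(status))
          return status;          /* -EAGAIN for async (IOCB_NOWAIT) writes */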
-
Stefan Roesch authored
Add the kiocb flags parameter to the function iomap_page_create(). Depending on the value of the flags parameter it enables different gfp flags. No intended functional changes in this patch. Signed-off-by:
Stefan Roesch <shr@fb.com> Reviewed-by:
Jan Kara <jack@suse.cz> Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Link: https://lore.kernel.org/r/20220623175157.1715274-5-shr@fb.com Signed-off-by:
Jens Axboe <axboe@kernel.dk>
-
- Jul 22, 2022
-
-
Christoph Hellwig authored
Unused now. Signed-off-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by:
Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Darrick J. Wong <djwong@kernel.org>
-
- Jul 14, 2022
-
-
Bart Van Assche authored
Improve static type checking by using the enum req_op type for variables that represent a request operation and the new blk_opf_t type for the combination of a request operation and request flags. Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by:
Bart Van Assche <bvanassche@acm.org> Link: https://lore.kernel.org/r/20220714180729.1065367-56-bvanassche@acm.org Signed-off-by:
Jens Axboe <axboe@kernel.dk>
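A small illustration of the convention (variables illustrative): enum req_op carries just the operation, blk_opf_t carries the operation OR'ed with request flags, and interfaces such as bio_alloc() take the combined type:

  #include <linux/blk_types.h>
  #include <linux/bio.h>

  enum req_op op  = REQ_OP_WRITE;                        /* operation only */
  blk_opf_t   opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;  /* op + flags */

  /* bdev: the target block device, assumed to be in scope */
  struct bio *bio = bio_alloc(bdev, 1, opf, GFP_KERNEL);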
-
- Jun 30, 2022
-
-
Kaixu Xia authored
It is unnecessary to check and set the did_zero value inside the while() loop in iomap_zero_iter(); we can set did_zero to true once, after the zeroing has completed successfully. Signed-off-by:
Kaixu Xia <kaixuxia@tencent.com> Reviewed-by:
Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Christoph Hellwig <hch@lst.de>
-
Chris Mason authored
iomap_do_writepage() sends pages past i_size through folio_redirty_for_writepage(), which normally isn't a problem because truncate and friends clean them very quickly. When the system has cgroups configured, we can end up in situations where one cgroup has almost no dirty pages at all, and other cgroups consume the entire background dirty limit. This is especially common in our XFS workloads in production because they have cgroups using O_DIRECT for almost all of the IO mixed in with cgroups that do more traditional buffered IO work. We've hit storms where the redirty path hits millions of times in a few seconds, all on a single file that's only ~40 pages long. This leads to long tail latencies for file writes because the pdflush workers are hogging the CPU from some kworkers bound to the same CPU. Reproducing this on 5.18 was tricky because 869ae85d ("xfs: flush new eof page on truncate...") ends up writing/waiting most of these dirty pages before truncate gets a chance to wait on them. The actual repro looks like this:

  /*
   * run me in a cgroup all alone. Start a second cgroup with dd
   * streaming IO into the block device.
   */
  int main(int ac, char **av)
  {
          int fd;
          int ret;
          char buf[BUFFER_SIZE];
          char *filename = av[1];

          memset(buf, 0, BUFFER_SIZE);

          if (ac != 2) {
                  fprintf(stderr, "usage: looper filename\n");
                  exit(1);
          }
          fd = open(filename, O_WRONLY | O_CREAT, 0600);
          if (fd < 0) {
                  err(errno, "failed to open");
          }
          fprintf(stderr, "looping on %s\n", filename);
          while(1) {
                  /*
                   * skip past page 0 so truncate doesn't write and wait
                   * on our extent before changing i_size
                   */
                  ret = lseek(fd, 8192, SEEK_SET);
                  if (ret < 0)
                          err(errno, "lseek");
                  ret = write(fd, buf, BUFFER_SIZE);
                  if (ret != BUFFER_SIZE)
                          err(errno, "write failed");
                  /* start IO so truncate has to wait after i_size is 0 */
                  ret = sync_file_range(fd, 16384, 4095, SYNC_FILE_RANGE_WRITE);
                  if (ret < 0)
                          err(errno, "sync_file_range");
                  ret = ftruncate(fd, 0);
                  if (ret < 0)
                          err(errno, "truncate");
                  usleep(1000);
          }
  }

And this bpftrace script will show when you've hit a redirty storm:

  kretprobe:xfs_vm_writepages {
          delete(@dirty[pid]);
  }

  kprobe:xfs_vm_writepages {
          @dirty[pid] = 1;
  }

  kprobe:folio_redirty_for_writepage /@dirty[pid] > 0/ {
          $inode = ((struct folio *)arg1)->mapping->host->i_ino;
          @inodes[$inode] = count();
          @redirty++;
          if (@redirty > 90000) {
                  printf("inode %d redirty was %d", $inode, @redirty);
                  exit();
          }
  }

This patch has the same number of failures on xfstests as unpatched 5.18:
Failures: generic/648 xfs/019 xfs/050 xfs/168 xfs/299 xfs/348 xfs/506 xfs/543
I also ran it through a long stress of multiple fsx processes hammering. (Johannes Weiner did significant tracing and debugging on this as well) Signed-off-by:
Chris Mason <clm@fb.com> Co-authored-by:
Johannes Weiner <hannes@cmpxchg.org> Reviewed-by:
Johannes Weiner <hannes@cmpxchg.org> Reported-by:
Domas Mituzas <domas@fb.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Darrick J. Wong <djwong@kernel.org>
-
- Jun 29, 2022
-
-
Matthew Wilcox (Oracle) authored
Just because there has been a read error doesn't mean we should avoid marking this part of the folio as uptodate. Indeed, it may overwrite the error part of the folio and let us mark the entire folio uptodate. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org>
-
- Jun 27, 2022
-
-
Keith Busch authored
Use the address alignment requirements from the block_device for direct io instead of requiring addresses be aligned to the block size. Signed-off-by:
Keith Busch <kbusch@kernel.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220610195830.3574005-12-kbusch@fb.com Signed-off-by:
Jens Axboe <axboe@kernel.dk>
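A hedged sketch of the relaxed check this enables (helper names from the same series; treat this as a sketch rather than the exact diff): the file position and length keep the logical-block-size requirement, while the user memory only has to satisfy the device's DMA alignment:

  unsigned int lbs = bdev_logical_block_size(bdev);

  if ((pos | count) & (lbs - 1))
          return -EINVAL;         /* offset/length still block aligned */
  if (!bdev_iter_is_aligned(bdev, iter))
          return -EINVAL;         /* memory only needs bdev DMA alignment */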
-
- Jun 10, 2022
-
-
Al Viro authored
New helper to be used instead of direct checks for IOCB_DSYNC: iocb_is_dsync(iocb). Checks converted, which allows us to avoid the IS_SYNC(iocb->ki_filp->f_mapping->host) part (4 cache lines) from iocb_flags() - it's checked in iocb_is_dsync() instead. Reviewed-by:
Christian Brauner (Microsoft) <brauner@kernel.org> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
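Roughly what the new helper looks like (see include/linux/fs.h for the authoritative version); the point is that the IS_SYNC() inode dereference only happens inside the helper:

  static inline bool iocb_is_dsync(const struct kiocb *iocb)
  {
          return (iocb->ki_flags & IOCB_DSYNC) ||
                 IS_SYNC(iocb->ki_filp->f_mapping->host);
  }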
-
Al Viro authored
New flag, equivalent to removal of IOCB_DSYNC from iocb flags. This mimics what btrfs is doing (and that's what btrfs will switch to). However, I'm not at all sure that we want to suppress REQ_FUA for those - all btrfs hack really cares about is suppression of generic_write_sync(). For now let's keep the existing behaviour, but I really want to hear more detailed arguments pro or contra. [folded brain fix from willy] Suggested-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Christian Brauner (Microsoft) <brauner@kernel.org> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
- May 16, 2022
-
-
Darrick J. Wong authored
XFS has the unique behavior (as compared to the other Linux filesystems) that on writeback errors it will completely invalidate the affected folio and force the page cache to reread the contents from disk. All other filesystems leave the page mapped and up to date. This is a rude awakening for user programs, since (in the case where write fails but reread doesn't) file contents will appear to revert to old disk contents with no notification other than an EIO on fsync. This might have been annoying back in the days when iomap dealt with one page at a time, but with multipage folios, we can now throw away *megabytes* worth of data for a single write error. On *most* Linux filesystems, a program can respond to an EIO on write by redirtying the entire file and scheduling it for writeback. This isn't foolproof, since the page that failed writeback is no longer dirty and could be evicted, but programs that want to recover properly *also* have to detect XFS and regenerate every write they've made to the file. When running xfs/314 on arm64, I noticed a UAF when xfs_discard_folio invalidates multipage folios that could be undergoing writeback. If, say, we have a 256K folio caching a mix of written and unwritten extents, it's possible that we could start writeback of the first (say) 64K of the folio and then hit a writeback error on the next 64K. We then free the iop attached to the folio, which is really bad because writeback completion on the first 64k will trip over the "blocks per folio > 1 && !iop" assertion. This can't be fixed by only invalidating the folio if writeback fails at the start of the folio, since the folio is marked !uptodate, which trips other assertions elsewhere. Get rid of the whole behavior entirely. Signed-off-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Jeff Layton <jlayton@kernel.org> Reviewed-by:
Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Allow the file system to keep state for all iterations. For now only wire it up for direct I/O as there is an immediate need for it there. Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Nikolay Borisov <nborisov@suse.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
David Sterba <dsterba@suse.com> Signed-off-by:
David Sterba <dsterba@suse.com>
-
Christoph Hellwig authored
Allow the file system to provide a specific bio_set for allocating direct I/O bios. This will allow file systems that use the ->submit_io hook to stash away additional information for file system use. To make use of this additional space for information in the completion path, the file system needs to override the ->bi_end_io callback and then call back into iomap, so export iomap_dio_bio_end_io for that. Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Reviewed-by:
Nikolay Borisov <nborisov@suse.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
David Sterba <dsterba@suse.com> Signed-off-by:
David Sterba <dsterba@suse.com>
-
- May 10, 2022
-
-
Matthew Wilcox (Oracle) authored
Change all the filesystems which used iomap_releasepage to use the new function. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Jeff Layton <jlayton@kernel.org>
-
- May 09, 2022
-
-
Matthew Wilcox (Oracle) authored
mpage_readpage still works in terms of pages, and has not been audited for correctness with large folios, so include an assertion that the filesystem is not passing it large folios. Convert all the filesystems to call mpage_read_folio() instead of mpage_readpage(). Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org>
-
Matthew Wilcox (Oracle) authored
This function is NOT converted to handle large folios, so include an assert that the filesystem isn't passing one in. Otherwise, use the folio functions instead of the page functions, where they exist. Convert all filesystems which use block_read_full_page(). Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org>
-
Matthew Wilcox (Oracle) authored
A straightforward conversion as iomap_readpage already worked in folios. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org>
-
- May 08, 2022
-
-
Andreas Gruenbacher authored
In iomap_write_end(), only call iomap_write_failed() on the byte range that has failed. This should improve code readability, but doesn't fix an actual bug because iomap_write_failed() is called after updating the file size here and it only affects the memory beyond the end of the file. Signed-off-by:
Andreas Gruenbacher <agruenba@redhat.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Darrick J. Wong <djwong@kernel.org>
-
Andreas Gruenbacher authored
The @lend parameter of truncate_pagecache_range() should be the offset of the last byte of the hole, not the first byte beyond it. Fixes: ae259a9c ("fs: introduce iomap infrastructure") Signed-off-by:
Andreas Gruenbacher <agruenba@redhat.com> Reviewed-by:
Darrick J. Wong <djwong@kernel.org> Signed-off-by:
Darrick J. Wong <djwong@kernel.org>
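A minimal illustration of the calling convention the fix enforces (pos/len are illustrative local variables):

  /* lend is inclusive: pass the last byte of the hole, not one past it */
  truncate_pagecache_range(inode, pos, pos + len - 1);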
-
- May 02, 2022
-
-
Ming Lei authored
So far bio is marked as REQ_POLLED if RWF_HIPRI/IOCB_HIPRI is passed from the userspace sync io interface, then the block layer tries to poll until the bio is completed. But the current implementation calls blk_io_schedule() if bio_poll() returns 0, and this way causes io hang or timeout easily. But it looks like no one reports this kind of issue, which should have been triggered in normal io poll sanity tests or blktests block/007 as observed by Changhui; that means it is very likely that no one uses it or no one cares about it. Also, after io_uring was invented, io poll for sync dio became a legacy interface. So ignore the RWF_HIPRI hint for sync dio. CC: linux-mm@kvack.org Cc: linux-xfs@vger.kernel.org Reported-by:
Changhui Zhong <czhong@redhat.com> Suggested-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Ming Lei <ming.lei@redhat.com> Tested-by:
Changhui Zhong <czhong@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220420143110.2679002-1-ming.lei@redhat.com Signed-off-by:
Jens Axboe <axboe@kernel.dk>
-
- Apr 18, 2022
-
-
Christoph Hellwig authored
Add a helper to check the FUA flag based on the block_device instead of having to poke into the block layer internal request_queue. Signed-off-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by:
Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20220415045258.199825-14-hch@lst.de Signed-off-by:
Jens Axboe <axboe@kernel.dk>
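A minimal sketch of how a caller uses the helper instead of poking at the request_queue (variables illustrative):

  if (bdev_fua(bdev))
          bio->bi_opf |= REQ_FUA;   /* device honours forced unit access */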
-