- Jul 24, 2024
-
-
Yang Yang authored
Our test reported the following hung task:

[ 2538.459400] INFO: task "kworker/0:0":7 blocked for more than 188 seconds.
[ 2538.459427] Call trace:
[ 2538.459430]  __switch_to+0x174/0x338
[ 2538.459436]  __schedule+0x628/0x9c4
[ 2538.459442]  schedule+0x7c/0xe8
[ 2538.459447]  schedule_preempt_disabled+0x24/0x40
[ 2538.459453]  __mutex_lock+0x3ec/0xf04
[ 2538.459456]  __mutex_lock_slowpath+0x14/0x24
[ 2538.459459]  mutex_lock+0x30/0xd8
[ 2538.459462]  del_gendisk+0xdc/0x350
[ 2538.459466]  sd_remove+0x30/0x60
[ 2538.459470]  device_release_driver_internal+0x1c4/0x2c4
[ 2538.459474]  device_release_driver+0x18/0x28
[ 2538.459478]  bus_remove_device+0x15c/0x174
[ 2538.459483]  device_del+0x1d0/0x358
[ 2538.459488]  __scsi_remove_device+0xa8/0x198
[ 2538.459493]  scsi_forget_host+0x50/0x70
[ 2538.459497]  scsi_remove_host+0x80/0x180
[ 2538.459502]  usb_stor_disconnect+0x68/0xf4
[ 2538.459506]  usb_unbind_interface+0xd4/0x280
[ 2538.459510]  device_release_driver_internal+0x1c4/0x2c4
[ 2538.459514]  device_release_driver+0x18/0x28
[ 2538.459518]  bus_remove_device+0x15c/0x174
[ 2538.459523]  device_del+0x1d0/0x358
[ 2538.459528]  usb_disable_device+0x84/0x194
[ 2538.459532]  usb_disconnect+0xec/0x300
[ 2538.459537]  hub_event+0xb80/0x1870
[ 2538.459541]  process_scheduled_works+0x248/0x4dc
[ 2538.459545]  worker_thread+0x244/0x334
[ 2538.459549]  kthread+0x114/0x1bc
[ 2538.461001] INFO: task "fsck.":15415 blocked for more than 188 seconds.
[ 2538.461014] Call trace:
[ 2538.461016]  __switch_to+0x174/0x338
[ 2538.461021]  __schedule+0x628/0x9c4
[ 2538.461025]  schedule+0x7c/0xe8
[ 2538.461030]  blk_queue_enter+0xc4/0x160
[ 2538.461034]  blk_mq_alloc_request+0x120/0x1d4
[ 2538.461037]  scsi_execute_cmd+0x7c/0x23c
[ 2538.461040]  ioctl_internal_command+0x5c/0x164
[ 2538.461046]  scsi_set_medium_removal+0x5c/0xb0
[ 2538.461051]  sd_release+0x50/0x94
[ 2538.461054]  blkdev_put+0x190/0x28c
[ 2538.461058]  blkdev_release+0x28/0x40
[ 2538.461063]  __fput+0xf8/0x2a8
[ 2538.461066]  __fput_sync+0x28/0x5c
[ 2538.461070]  __arm64_sys_close+0x84/0xe8
[ 2538.461073]  invoke_syscall+0x58/0x114
[ 2538.461078]  el0_svc_common+0xac/0xe0
[ 2538.461082]  do_el0_svc+0x1c/0x28
[ 2538.461087]  el0_svc+0x38/0x68
[ 2538.461090]  el0t_64_sync_handler+0x68/0xbc
[ 2538.461093]  el0t_64_sync+0x1a8/0x1ac

T1:                                 T2:
sd_remove
 del_gendisk
  __blk_mark_disk_dead
   blk_freeze_queue_start
    ++q->mq_freeze_depth
                                    bdev_release
                                     mutex_lock(&disk->open_mutex)
                                      sd_release
                                       scsi_execute_cmd
                                        blk_queue_enter
                                         wait_event(!q->mq_freeze_depth)
  mutex_lock(&disk->open_mutex)

SCSI does not set GD_OWNS_QUEUE, so QUEUE_FLAG_DYING is not set in this scenario. This is a classic ABBA deadlock. To fix the deadlock, make sure we don't try to acquire disk->open_mutex after freezing the queue.

Cc: stable@vger.kernel.org
Fixes: eec1be4c ("block: delete partitions later in del_gendisk")
Signed-off-by: Yang Yang <yang.yang@vivo.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240724070412.22521-1-yang.yang@vivo.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
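
The fix boils down to a lock-ordering rule: never wait on disk->open_mutex after q->mq_freeze_depth has been raised. A minimal C sketch of that rule (illustrative structure only, not the actual del_gendisk() hunk):

    /*
     * Illustrative: the real fix reorders work inside del_gendisk() so
     * that open_mutex is never acquired after blk_freeze_queue_start().
     */
    static void remove_disk_ordered(struct gendisk *disk)
    {
            struct request_queue *q = disk->queue;

            /* do all open_mutex work while the queue is still unfrozen */
            mutex_lock(&disk->open_mutex);
            /* ... mark the disk dead for openers, drop partitions ... */
            mutex_unlock(&disk->open_mutex);

            /* only now bump q->mq_freeze_depth */
            blk_freeze_queue_start(q);

            /*
             * T2's sd_release() path may still be parked in
             * blk_queue_enter(), but it can no longer be holding
             * open_mutex while it waits, so the ABBA cycle is broken.
             */
    }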
-
- Jul 19, 2024
-
-
Xiu Jianfeng authored
The congestion_count was introduced into the struct cgroup by commit d09d8df3 ("blkcg: add generic throttling mechanism"), but since it is closely related to the blkio subsys, it is not appropriate to put it in the struct cgroup, so let's move it to struct blkcg. There should be no functional changes because blkcg is per cgroup.

Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20240716133058.3491350-1-xiujianfeng@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
hexue authored
This patch adds a poll queue check, aiming to help users use polled IO accurately. If users do polled IO but the device doesn't have poll queues, they will get suboptimal performance data and waste CPU resources. Add a poll queue check to catch this. If users don't have the device properly configured, or if it simply doesn't support polled IO, it will error the IO with -EOPNOTSUPP. This is similar to what we used to do for sync polled IO, which is no longer supported.

Signed-off-by: hexue <xue01.he@samsung.com>
Link: https://lore.kernel.org/r/20240718070817.1031494-1-xue01.he@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
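
A sketch of where such a check lands in the bdev I/O path; bdev_can_poll() here is a hypothetical helper name standing in for "the queue exposes poll queues":

    /* hypothetical placement, e.g. in the blkdev read/write path */
    if ((iocb->ki_flags & IOCB_HIPRI) && !bdev_can_poll(bdev))
            return -EOPNOTSUPP;  /* no poll queues: fail fast instead of
                                    silently falling back to IRQ completions */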
-
John Garry authored
Add a BUILD_BUG_ON() call to ensure that we are not missing entries in rqf_name[].

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-16-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
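
The pattern used throughout this series looks roughly like this (sketch; array contents elided, and the sentinel name assumes the enum introduced alongside it):

    static const char *const rqf_name[] = {
            /* one entry per __RQF_x bit ... */
    };

    /* placed inside a function, since BUILD_BUG_ON() expands to statements */
    BUILD_BUG_ON(ARRAY_SIZE(rqf_name) != __RQF_BITS);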
-
John Garry authored
Now that we have a bit index for RQF_x in __RQF_x, use __RQF_x to simplify the definition of RQF_NAME() by dropping the ilog2((__force u32)RQF_x) computation.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-15-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
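
Roughly the before and after of the macro (sketch):

    /* before: recover the bit number from the flag mask at compile time */
    #define RQF_NAME(name) [ilog2((__force u32)RQF_##name)] = #name

    /* after: the bit index __RQF_x already exists, use it directly */
    #define RQF_NAME(name) [__RQF_##name] = #name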
-
John Garry authored
Add a BUILD_BUG_ON() call to ensure that we are not missing entries in cmd_flag_name[].

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-13-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
John Garry authored
Make BLK_TAG_ALLOC_x an enum and add a "max" entry. Add a BUILD_BUG_ON() call to ensure that we are not missing entries in hctx_flag_name[].

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-12-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
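
The "max" sentinel gives the build-time assert something stable to compare against (sketch; the FIFO/RR entries match the existing policies, the comments are paraphrased):

    enum {
            BLK_TAG_ALLOC_FIFO,  /* allocate starting from 0 */
            BLK_TAG_ALLOC_RR,    /* allocate starting from last allocated tag */
            BLK_TAG_ALLOC_MAX,   /* sentinel, never a valid policy */
    };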
-
John Garry authored
Refresh values in BLK_MQ_F_x enum, and then re-arrange members in hctx_flag_name[] to match that enum. Renumber BLK_MQ_F_ALLOC_POLICY_START_BIT to match the value refresh. Add a BUILD_BUG_ON() call to ensure that we are not missing entries in hctx_flag_name[].

Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-11-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
John Garry authored
Add a build-time assert that we are not missing entries from hctx_state_name[]. For this, create a separate enum for state flags and add a "max" entry for BLK_MQ_S_x flags. Those enum values take the default numbering, so don't number them explicitly.

Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-10-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
John Garry authored
Assert that we are not missing flag entries in blk_queue_flag_name[].

Signed-off-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20240719112912.3830443-9-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
John Garry authored
BLK_MQ_CPU_WORK_BATCH is defined in include/linux/blk-mq.h, but only used in blk-mq.c, so relocate it to block/blk-mq.h.

Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-6-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
QUEUE_FLAG_STOPPED is entirely unused.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-5-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
John Garry authored
Add missing entry for NO_SCHED_BY_DEFAULT and reorder to match the enum.

Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-4-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
John Garry authored
Add missing entry.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-3-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
John Garry authored
Add missing entries for req_flag_bits.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20240719112912.3830443-2-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- Jul 09, 2024
-
-
Christoph Hellwig authored
The rebase of commit 09595e0c ("block: pass a phys_addr_t to get_max_segment_size") lost adding the total to the offset in blk_bvec_map_sg. Add it back.

Fixes: 09595e0c ("block: pass a phys_addr_t to get_max_segment_size")
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Reported-by: Chaitanya Kulkarni <chaitanyak@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240709070126.3019940-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
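
The essence of the fix, as a sketch of the blk_bvec_map_sg() loop ('total' tracks how much of the bvec has already been mapped):

    while (nbytes > 0) {
            unsigned len = get_max_segment_size(&q->limits,
                            bvec_phys(bvec) + total,  /* '+ total' is what
                                                         the rebase lost */
                            nbytes);

            /* ... map 'len' bytes into the scatterlist ... */
            total += len;
            nbytes -= len;
    }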
-
Chaitanya Kulkarni authored
Correct the parameter name in the comment of get_max_segment_size() to fix the following warnings:

block/blk-merge.c:220: warning: Function parameter or struct member 'len' not described in 'get_max_segment_size'
block/blk-merge.c:220: warning: Excess function parameter 'max_len' description in 'get_max_segment_size'

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20240709045432.8688-1-kch@nvidia.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
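
The corrected kernel-doc simply names the parameter the function actually takes (sketch, following the signature after the phys_addr_t conversion):

    /**
     * get_max_segment_size() - maximum number of bytes to add as a single segment
     * @lim:   queue limits
     * @paddr: physical address of the segment start
     * @len:   maximum length available to add at @paddr
     */
    static inline unsigned get_max_segment_size(const struct queue_limits *lim,
                    phys_addr_t paddr, unsigned int len)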
-
John Garry authored
Some drivers validate their own logical block size. There is no harm in always doing this, so validate in blk_validate_limits(). This allows us to remove the validation in most of those drivers. Add a comment to blk_validate_block_size() to inform users that self-validation of LBS is usually unnecessary.

Signed-off-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20240708091651.177447-3-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
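
For reference, the helper whose checks now run centrally is tiny (quoted from memory of include/linux/blkdev.h, so treat as a sketch):

    static inline int blk_validate_block_size(unsigned long bsize)
    {
            if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
                    return -EINVAL;
            return 0;
    }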
-
- Jul 08, 2024
-
-
Christoph Hellwig authored
Work on a single address to simplify the logic, and prepare the callers for using better helpers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20240706075228.2350978-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Get callers out of poking into bvec internals a bit more. Not a huge win right now, but with the proposed new DMA mapping API we might end up with a lot more of this otherwise.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20240706075228.2350978-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- Jul 05, 2024
-
-
Christoph Hellwig authored
Zeroout can access a significant capacity and take longer than the user expected. A user may change their mind about wanting to run that command and attempt to kill the process and do something else with their device. But since the task is uninterruptible, they have to wait for it to finish, which could be many hours. Add a new BLKDEV_ZERO_KILLABLE flag for blkdev_issue_zeroout that checks for a fatal signal at each iteration so the user doesn't have to wait for their regretted operation to complete naturally. Heavily based on an earlier patch from Keith Busch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240701165219.1571322-11-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
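
The shape of the per-iteration check (the flag name is from the commit; its exact placement in the loop is illustrative):

    /* once per chunk inside the zeroout loop */
    if ((flags & BLKDEV_ZERO_KILLABLE) &&
        fatal_signal_pending(current))
            break;  /* caller fails the remainder with -EINTR instead of
                       grinding on for hours */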
-
Christoph Hellwig authored
Only fall back from hardware Write Zeroes failures when blkdev_issue_write_zeroes returns -EOPNOTSUPP. Note that blkdev_issue_write_zeroes turns any failure into -EOPNOTSUPP when the write zeroes queue limit has been cleared to 0, so this still catches all I/O errors where the driver detected missing support for the hardware acceleration.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240701165219.1571322-10-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
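
The resulting fallback logic in outline (sketch; helper names follow this series):

    ret = blkdev_issue_write_zeroes(bdev, sector, nr_sects, gfp_mask, flags);
    if (ret != -EOPNOTSUPP)
            return ret;  /* real I/O errors are no longer papered over */

    /* hardware offload missing or disabled: write zero pages manually */
    return blkdev_issue_zero_pages(bdev, sector, nr_sects, gfp_mask, flags);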
-
Christoph Hellwig authored
Split out two well-defined helpers for hardware supported Write Zeroes and manually writing zeroes using the Write command.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240701165219.1571322-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Move these checks out of the lower level helpers and into the higher level ones to prepare for refactoring.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240701165219.1571322-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
__blkdev_issue_zeroout is a purely kernel internal API and thus can rely on the block layer sector alignment checks.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240701165219.1571322-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Contrary to the comment in __blkdev_issue_write_zeroes, nothing here checks for a potential bi_size overflow. Add a helper mirroring the secure erase code for the check.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240701165219.1571322-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Damien Le Moal authored
Remove the helper function blk_alloc_zone_bitmap() and replace its single call site with a call to bitmap_alloc(). To be consistent with this change, use bitmap_free() to free a disk conventional zone bitmap.

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240704052816.623865-6-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
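
The replacement is a direct use of the generic bitmap API (sketch):

    /* was: blk_alloc_zone_bitmap(q->node, disk->nr_zones) */
    unsigned long *bitmap = bitmap_alloc(disk->nr_zones, GFP_NOIO);
    if (!bitmap)
            return -ENOMEM;
    /* ... */
    bitmap_free(bitmap);  /* pairs with bitmap_alloc() */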
-
Damien Le Moal authored
Now that device mapper can handle resetting all zones of a mapped zoned device using REQ_OP_ZONE_RESET_ALL, all zoned block device drivers support this operation. With this, the request queue feature BLK_FEAT_ZONE_RESETALL is not necessary and the emulation code in blk-zone.c can be removed.

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240704052816.623865-5-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- Jul 04, 2024
-
-
Anuj Gupta authored
Modify bio_integrity_clone to reuse the original bvec array instead of allocating and copying it, similar to how the bio data path is cloned.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20240702100753.2168-1-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
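
After the change, the clone shares the source's vector rather than copying it, mirroring how the data path clone shares bi_io_vec (sketch of the relevant assignments):

    /* share the source payload's bvec array instead of copying it */
    bip->bip_vec = bip_src->bip_vec;
    bip->bip_iter = bip_src->bip_iter;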
-
- Jul 03, 2024
-
-
Christoph Hellwig authored
Now that the integrity payload is always freed in bio_uninit, don't bother freeing it a little earlier in bio_integrity_unmap_free_user. With that the separate bio_integrity_unmap_free_user can go away by just passing the bio to bio_integrity_unmap_user.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240702151047.1746127-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Currently __bio_integrity_endio frees the integrity payload unless it is explicitly marked as user-mapped. This means in-kernel callers that allocate their own integrity payload never get to see it on I/O completion. The current two users don't need it as they just pre-mapped PI tuples received over the network, but this limits uses of integrity data a lot. Change bio_integrity_endio to call __bio_integrity_endio for block layer generated integrity data only, and leave freeing of submitter allocated integrity data to bio_uninit, which also gets called from the final bio_put. This requires that unmapping user mapped or copied integrity data is now always done by the caller, and the special BIP_INTEGRITY_USER flag can go away.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240702151047.1746127-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
blk_rq_unmap_user always unmaps user space pass-through requests. If such a request has integrity data attached, it must come from a user mapping as well. Call bio_integrity_unmap_free_user from blk_rq_unmap_user and remove the nvme_unmap_bio wrapper in the nvme driver.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240702151047.1746127-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Commit b222dd2f ("block: call bio_uninit in bio_endio") added a call to bio_uninit in bio_endio to work around callers that use bio_init but fail to call bio_uninit after they are done to release the resources. While this is an abuse of the bio_init API we still have quite a few of those left. But this early uninit causes a problem for integrity data, as at least some users need the bio_integrity_payload. Right now the only one is the NVMe passthrough, which achieves this by adding a special case to skip the freeing if the BIP_INTEGRITY_USER flag is set. Sort this out by only putting bi_blkg in bio_endio as that is the cause of the actual leaks - the few users of the crypto context and integrity data all properly call bio_uninit, usually through bio_put for dynamically allocated bios.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240702151047.1746127-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
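
What bio_endio() still tears down after this change, in sketch form (only the cgroup reference goes early; crypto context and integrity data wait for bio_uninit()):

    #ifdef CONFIG_BLK_CGROUP
            /* release the blkg reference here; everything else is left
             * for bio_uninit(), typically via the final bio_put() */
            if (bio->bi_blkg) {
                    blkg_put(bio->bi_blkg);
                    bio->bi_blkg = NULL;
            }
    #endif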
-
Christoph Hellwig authored
Split struct bio_integrity_payload and the related prototypes out of bio.h into a separate bio-integrity.h header so that it is only pulled in by the few places that need it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20240702151047.1746127-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- Jul 02, 2024
-
-
Bart Van Assche authored
The current tag reservation code is based on a misunderstanding of the meaning of data->shallow_depth. Fix the tag reservation code as follows:

* By default, do not reserve any tags for synchronous requests because for certain use cases reserving tags reduces performance. See also Harshit Mogalapalli, [bug-report] Performance regression with fio sequential-write on a multipath setup, 2024-03-07 (https://lore.kernel.org/linux-block/5ce2ae5d-61e2-4ede-ad55-551112602401@oracle.com/).
* Reduce min_shallow_depth to one because min_shallow_depth must be less than or equal to any shallow_depth value.
* Scale dd->async_depth from the range [1, nr_requests] to [1, bits_per_sbitmap_word], as sketched below.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Zhiguo Niu <zhiguo.niu@unisoc.com>
Fixes: 07757588 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240509170149.7639-3-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
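
A sketch of the scaling step (names follow the mq-deadline patch; treat the arithmetic as illustrative). shallow_depth is interpreted against a single sbitmap word, so a depth expressed in request units has to be rescaled before use:

    /* map qdepth from [1, nr_requests] onto [1, bits per sbitmap word] */
    static u32 dd_to_word_depth(struct blk_mq_hw_ctx *hctx, u32 qdepth)
    {
            struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
            const unsigned int nrr = hctx->queue->nr_requests;

            return ((qdepth << bt->sb.shift) + nrr - 1) / nrr;
    }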
-
Bart Van Assche authored
Call .limit_depth() after data->hctx has been set such that data->hctx can be used in .limit_depth() implementations.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Zhiguo Niu <zhiguo.niu@unisoc.com>
Fixes: 07757588 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Zhiguo Niu <zhiguo.niu@unisoc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240509170149.7639-2-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
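
The ordering in the allocator, in outline (sketch; this is the gist, not the full __blk_mq_alloc_requests()):

    /* resolve the software and hardware contexts first ... */
    data->ctx = blk_mq_get_ctx(q);
    data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);

    /* ... so the scheduler hook can rely on data->hctx */
    if (e && e->type->ops.limit_depth)
            e->type->ops.limit_depth(data->cmd_flags, data);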
-
- Jul 01, 2024
-
-
Christoph Hellwig authored
Don't reduce the max_sectors value below the normal cap when the driver advertises a very low io_opt. This restores the behavior we had before the recent changes to the max_sectors calculation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Nitesh Shetty <nj.shetty@samsung.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20240701051800.1245240-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
If io_min is larger than the cap, it must by definition be non-zero.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Nitesh Shetty <nj.shetty@samsung.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20240701051800.1245240-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Baokun Li authored
Now we avoid throttling swap writes by determining whether the current process is kswapd (aka current_is_kswapd()), but swap writes can come from either kswapd or direct reclaim, so the swap writes from direct reclaim will still be throttled. When a process holds a lock to allocate a free page and enters direct reclaim because there is no free memory, it might trigger a hang due to the wbt throttling that causes other processes to fail to get the lock. Both kswapd and direct reclaim set the REQ_SWAP flag, so use REQ_SWAP instead of current_is_kswapd() to avoid throttling swap writes. Also rename WBT_KSWAPD to WBT_SWAP and WBT_RWQ_KSWAPD to WBT_RWQ_SWAP.

Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240604030522.3686177-1-libaokun@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
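
The classification change in outline (fragment, not the full bio_to_wbt_flags(); WBT_SWAP per the rename above):

    if (bio->bi_opf & REQ_SWAP)     /* set by kswapd AND direct reclaim */
            flags |= WBT_SWAP;      /* was: if (current_is_kswapd()) ... */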
-
- Jun 28, 2024
-
-
Christoph Hellwig authored
The kobject for the queue entries is embedded into a struct gendisk. Pass it to the sysfs methods instead of the request_queue derived from it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240627111407.476276-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
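
With the kobject embedded in the gendisk, the sysfs methods can recover the disk directly (sketch):

    static ssize_t queue_attr_show(struct kobject *kobj, struct attribute *attr,
                                   char *page)
    {
            struct gendisk *disk = container_of(kobj, struct gendisk, queue_kobj);

            /* ... dispatch to the attribute handler with 'disk' instead of
             * a request_queue derived from the kobject ... */
    }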
-