mkfs.xfs: pwrite64 failed: Input/output error including 'blk_cloned_rq_check_limits'
Environment
- Red Hat Enterprise Linux (RHEL) 7.3
- XFS filesystem
Issue
- When trying to create an XFS filesystem, mkfs.xfs fails with the below error:

  # mkfs.xfs /dev/mapper/lvname
  meta-data=/dev/mapper/lvname     isize=512    agcount=32, agsize=8388604 blks
           =                       sectsz=512   attr=2, projid32bit=1
           =                       crc=1        finobt=0, sparse=0
  data     =                       bsize=4096   blocks=268435328, imaxpct=25
           =                       sunit=4      swidth=4096 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
  log      =internal log           bsize=4096   blocks=131072, version=2
           =                       sectsz=512   sunit=4 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0
  mkfs.xfs: pwrite64 failed: Input/output error
- In /var/log/messages we could see the below errors:

  kernel: blk_cloned_rq_check_limits: over max size limit.
  kernel: blk_cloned_rq_check_limits: over max size limit.
  kernel: blk_cloned_rq_check_limits: over max size limit.
  [...]
  kernel: device-mapper: multipath: Failing path 8:176.
  kernel: device-mapper: multipath: Failing path 8:128.
  kernel: device-mapper: multipath: Failing path 8:132.
  kernel: device-mapper: multipath: Failing path 8:112.
  multipathd: sdi: mark as failed
  multipathd: remaining active paths: 3
  multipathd: sdl: mark as failed
  multipathd: remaining active paths: 2
  multipathd: sdh: mark as failed
  multipathd: remaining active paths: 1
  multipathd: sdc: mark as failed
  multipathd: remaining active paths: 0
  kernel: Buffer I/O error on dev dm-7, logical block 268435440, async page read
  multipathd: sdi - directio checker reports path is up
  multipathd: 8:128: reinstated
  multipathd: remaining active paths: 1
  kernel: device-mapper: multipath: Reinstating path 8:128.
  kernel: device-mapper: multipath: Reinstating path 8:176.
  kernel: device-mapper: multipath: Reinstating path 8:112.
  kernel: device-mapper: multipath: Reinstating path 8:32.
  multipathd: sdl - directio checker reports path is up
  multipathd: 8:176: reinstated
  multipathd: remaining active paths: 2
  multipathd: 8:112: reinstated
  multipathd: remaining active paths: 3
  multipathd: sdc - directio checker reports path is up
  multipathd: 8:32: reinstated
  multipathd: remaining active paths: 4
  [...]
- Creating an ext4 filesystem on the same device, however, does not produce any errors.
Resolution
- Create a new udev rule file /etc/udev/rules.d/99-custom.rules with the following content:

  ACTION!="add|change", GOTO="rule_end"
  ENV{ID_VENDOR}=="3PARdata*", ATTR{queue/max_sectors_kb}="4096"
  LABEL="rule_end"

  Note: Replace 3PARdata in the above line with the vendor name of the SAN devices in your environment.
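To find the exact vendor string to match in ENV{ID_VENDOR}, you can query the udev properties of one of the SAN paths. A minimal sketch (the device name /dev/sdi is only an example taken from the logs above):

```shell
# Print the ID_VENDOR value from `udevadm info --query=property` output
# (read from stdin); this is the string the udev rule matches against.
get_id_vendor() {
    sed -n 's/^ID_VENDOR=//p'
}

# Example usage (device name is illustrative):
#   udevadm info --query=property --name=/dev/sdi | get_id_vendor
```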
- Once the above udev rule is added, trigger it with the below command:

  # udevadm trigger
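After triggering, the new limit should be visible in sysfs under queue/max_sectors_kb for the multipath device and its paths. A small helper to read it back (the sysfs root is parameterized only so the sketch is self-contained; on a live system it defaults to /sys, and the device names are examples from the logs above):

```shell
# Read queue/max_sectors_kb for a block device. The optional second argument
# overrides the sysfs root for testing; on a live system the default /sys is used.
show_max_sectors_kb() {
    local dev=$1 sysfs_root=${2:-/sys}
    cat "$sysfs_root/block/$dev/queue/max_sectors_kb"
}

# Example usage (device names illustrative):
#   show_max_sectors_kb dm-7
#   show_max_sectors_kb sdi
```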
- Then try to create the XFS filesystem again using the same mkfs.xfs command.
- If the same error persists while running the mkfs.xfs command, try running mkfs.xfs with the -K option:

  # mkfs.xfs -K <path_to_lvm_volume>
- The -K option instructs mkfs.xfs not to discard blocks at mkfs time.
- Also, the above mkfs.xfs command will create a new XFS filesystem on the LVM volume, and all previous data on the volume will be erased. It is therefore recommended to take a backup of any critical data on the LVM volume before running mkfs.xfs on it.
- If using kickstart for installation and facing this issue, add the following parameter to your kickstart script:

  --mkfsoptions="-K"
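For example, in a kickstart file the option goes on the line that creates the filesystem. A hypothetical logvol line (mount point, volume group, logical volume name, and size are placeholders for your own layout):

```
logvol /data --vgname=vg00 --name=lv_data --size=10240 --fstype=xfs --mkfsoptions="-K"
```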
Root Cause
- The error message "blk_cloned_rq_check_limits: over max size limit" is logged from the following code block in the "blk_cloned_rq_check_limits" routine:

  block/blk-core.c
  2096 static int blk_cloned_rq_check_limits(struct request_queue *q,
  2097                                       struct request *rq)
  2098 {
  2099         if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, rq->cmd_flags)) {
  2100                 printk(KERN_ERR "%s: over max size limit.\n", __func__);
  2101                 return -EIO; <========================
  2102         }
  2103 [...]
- The check in the above snippet calls "blk_rq_sectors" to get the number of sectors in the cloned request and compares it against the "max_sectors" (or, for discard requests, "max_discard_sectors") limit of the underlying device's "request_queue". If the number of sectors in the cloned request exceeds that limit, the kernel prints the error and returns an I/O error (-EIO).
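The arithmetic behind the check can be sketched as follows. This is only an illustration of the non-discard case, not kernel code: it assumes 512-byte sectors, so a max_sectors_kb of 4096 corresponds to a limit of 8192 sectors, and any cloned request larger than that is rejected.

```shell
# Mimic the size check in blk_cloned_rq_check_limits: reject a request whose
# sector count exceeds the queue's max_sectors limit.
check_cloned_rq() {
    local rq_sectors=$1 max_sectors_kb=$2
    # max_sectors_kb is in KiB; convert to 512-byte sectors.
    local max_sectors=$(( max_sectors_kb * 1024 / 512 ))
    if [ "$rq_sectors" -gt "$max_sectors" ]; then
        echo "over max size limit"   # the kernel returns -EIO at this point
        return 1
    fi
    echo "ok"
}
```

Raising max_sectors_kb via the udev rule above raises this limit, which is why the cloned requests then pass the check.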
- Since the cloned request was violating the "request_queue" limits, the kernel reported "blk_cloned_rq_check_limits: over max size limit" errors. This subsequently caused path failures in multipath and made the mkfs commands fail.
Diagnostic Steps
- From the sosreport we could see the below error messages in /var/log/messages:

  Dec 6 14:11:12 test kernel: blk_cloned_rq_check_limits: over max size limit.
  Dec 6 14:11:12 test kernel: blk_cloned_rq_check_limits: over max size limit.
  Dec 6 14:11:12 test kernel: blk_cloned_rq_check_limits: over max size limit.
  [...]
  Dec 7 15:55:53 test kernel: device-mapper: multipath: Failing path 8:176.
  Dec 7 15:55:53 test kernel: device-mapper: multipath: Failing path 8:128.
  Dec 7 15:55:53 test kernel: device-mapper: multipath: Failing path 8:132.
  Dec 7 15:55:53 test kernel: device-mapper: multipath: Failing path 8:112.
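To gauge how often the error occurred, the kernel messages can simply be counted. A small helper (the log path is passed as an argument only so the sketch is self-contained):

```shell
# Count "over max size limit" kernel messages in a log file.
count_limit_errors() {
    grep -c 'blk_cloned_rq_check_limits: over max size limit' "$1"
}

# Example usage:
#   count_limit_errors /var/log/messages
```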
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.