$ zpool status ztest
pool: ztest
state: ONLINE
config:
        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     0
errors: No known data errors
$ dd if=/dev/zero of=dummy bs=4m
dd: dummy: No space left on device
928+0 records in
927+1 records out
3891134464 bytes transferred in 157.300331 secs (24736976 bytes/sec)
$
That drained the entire pool, swimming pool and all, so to speak. Here I pass the -b option explicitly so I can check the block counts.
$ df -b /ztest
Filesystem 512-blocks    Used Avail Capacity  Mounted on
ztest         7601024 7601024     0     100%  /ztest
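As a quick sanity check (a sketch using the numbers from the transcript above), the df block count lines up with what dd managed to write before hitting ENOSPC:

```shell
# df reported 7601024 512-byte blocks; dd transferred 3891134464 bytes.
blocks=7601024
capacity=$((blocks * 512))        # 3891724288 bytes of usable space
written=3891134464                # bytes dd wrote before "No space left"
echo "capacity: $capacity bytes"
echo "slack:    $((capacity - written)) bytes"
```

The leftover 589824 bytes (576 KiB) is presumably space ZFS holds back for metadata and bookkeeping rather than user data.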
$ zpool status ztest
pool: ztest
state: ONLINE
config:
        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     0
errors: No known data errors
$
Let's try running a scrub.
$ zpool scrub ztest
$ zpool status ztest
pool: ztest
state: ONLINE
scan: scrub in progress since Mon Aug 29 18:17:33 2022
3.62G scanned at 412M/s, 550M issued at 61.1M/s, 3.62G total
0B repaired, 14.81% done, 00:00:51 to go
config:
        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     0
errors: No known data errors
$
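The progress line can be checked by hand (a rough sketch; zpool works from exact byte counts, so it prints 14.81% rather than this approximation from the rounded figures):

```shell
# 550M issued out of 3.62G total, as zpool rounds them
awk 'BEGIN { printf "%.1f%%\n", 550 / (3.62 * 1024) * 100 }'
# prints: 14.8%
```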
Wait for the scrub to finish, then confirm with status.
$ zpool status ztest
pool: ztest
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:01:05 with 1 errors on Mon Aug 29 18:18:32 2022
config:
        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     2
errors: 1 data errors, use '-v' for a list
$
The CKSUM column shows that errors occurred. With the -v option we can see exactly which files were damaged.
$ zpool status -v ztest
pool: ztest
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:01:05 with 1 errors on Mon Aug 29 18:18:32 2022
config:
        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     2
errors: Permanent errors have been detected in the following files:

        /ztest/dummy
$
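When scripting around this, the CKSUM column can be scraped with awk. A minimal sketch (the sample text is the device table from the output above; field positions could shift for wider pool layouts such as mirrors with spares):

```shell
# Print every device whose CKSUM counter (field 5) is non-zero.
status='        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     2'
echo "$status" | awk '$5 ~ /^[0-9]+$/ && $5 + 0 > 0 { print $1, $5 }'
# prints: gpt/ttt 2
```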
$ zpool export ztest
$ dd if=/dev/zero of=/dev/gpt/ttt count=1 oseek=4000000
1+0 records in
1+0 records out
512 bytes transferred in 0.382715 secs (1338 bytes/sec)
$
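For reference (simple arithmetic, not part of the original run): oseek counts 512-byte sectors, so the overwritten sector sits at this byte offset into gpt/ttt:

```shell
echo $((4000000 * 512))    # prints: 2048000000
```

That is just under 2 GB into the device, well within the roughly 3.6 GB the pool occupies.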
Until the damaged block is actually read, no new error is raised. zpool status still shows the error message from the earlier scrub, but the CKSUM error count is back to 0.
$ zpool status ztest
pool: ztest
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:01:05 with 1 errors on Mon Aug 29 18:18:32 2022
config:
        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     0
errors: 1 data errors, use '-v' for a list
$
Run the scrub again.
$ zpool scrub ztest
Check the pool state.
$ zpool status ztest
pool: ztest
state: ONLINE
scan: scrub repaired 0B in 00:01:05 with 0 errors on Mon Aug 29 18:34:27 2022
config:
        NAME       STATE     READ WRITE CKSUM
        ztest      ONLINE       0     0     0
          gpt/ttt  ONLINE       0     0     0
errors: No known data errors
$