2016-02-21 05:22:08 |
Richard Laager |
bug |
|
|
added bug |
2016-02-21 05:22:08 |
Richard Laager |
attachment added |
|
zfs-linux.scrub.debdiff.2 https://bugs.launchpad.net/bugs/1548009/+attachment/4577024/+files/zfs-linux.scrub.debdiff.2 |
|
2016-02-21 05:22:19 |
Richard Laager |
bug |
|
|
added subscriber Colin Ian King |
2016-02-21 08:22:46 |
Ubuntu Foundations Team Bug Bot |
tags |
|
patch |
|
2016-02-21 08:22:58 |
Ubuntu Foundations Team Bug Bot |
bug |
|
|
added subscriber Ubuntu Sponsors Team |
2016-02-22 00:37:13 |
Mathew Hodson |
zfs-linux (Ubuntu): importance |
Undecided |
Wishlist |
|
2016-02-22 06:04:50 |
Colin Ian King |
zfs-linux (Ubuntu): assignee |
|
Colin Ian King (colin-king) |
|
2016-03-03 10:25:46 |
Mark Wilkinson |
bug |
|
|
added subscriber Mark Wilkinson |
2016-03-06 15:41:54 |
Launchpad Janitor |
zfs-linux (Ubuntu): status |
New |
Confirmed |
|
2016-03-14 22:30:26 |
Richard Laager |
description |
mdadm automatically checks MD arrays. ZFS should automatically scrub pools.
I've attached a debdiff which accomplishes this.
The meat of it is the scrub script I've been using (and recommending in my HOWTO) for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing will make that worse.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. |
mdadm automatically checks MD arrays. ZFS should automatically scrub pools too, to detect and (when possible) correct on-disk corruption.
I've attached a debdiff which accomplishes this. It builds and installs cleanly.
The meat of it is the scrub script I've been using (and recommending in my HOWTO) for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could theoretically lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing will make that worse.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. |
|
2016-03-14 22:34:44 |
Richard Laager |
description |
mdadm automatically checks MD arrays. ZFS should automatically scrub pools too, to detect and (when possible) correct on-disk corruption.
I've attached a debdiff which accomplishes this. It builds and installs cleanly.
The meat of it is the scrub script I've been using (and recommending in my HOWTO) for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could theoretically lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing will make that worse.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. |
mdadm automatically checks MD arrays. ZFS should automatically scrub pools too. Scrubbing a pool allows ZFS to detect and (when the pool has redundancy) correct on-disk corruption.
I've attached a debdiff which accomplishes this. It builds and installs cleanly.
The meat of it is the scrub script I've been using on production systems, both servers and laptops, and recommending in my Ubuntu root-on-ZFS HOWTO, for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could theoretically lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing can make that worse, even though scrubs are throttled. Arguably, I might be being too conservative here, but the marginal benefit of scrubbing a *degraded* pool is pretty minimal as pools should not be left degraded for very long.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. If you'd rather make it the same week as MD, I see no problem with that. |
|
2016-03-14 22:34:49 |
Richard Laager |
summary |
ZFS pools should be automatically scrubbed |
[FFe] ZFS pools should be automatically scrubbed |
|
2016-03-14 22:36:01 |
Richard Laager |
description |
mdadm automatically checks MD arrays. ZFS should automatically scrub pools too. Scrubbing a pool allows ZFS to detect and (when the pool has redundancy) correct on-disk corruption.
I've attached a debdiff which accomplishes this. It builds and installs cleanly.
The meat of it is the scrub script I've been using on production systems, both servers and laptops, and recommending in my Ubuntu root-on-ZFS HOWTO, for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could theoretically lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing can make that worse, even though scrubs are throttled. Arguably, I might be being too conservative here, but the marginal benefit of scrubbing a *degraded* pool is pretty minimal as pools should not be left degraded for very long.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. If you'd rather make it the same week as MD, I see no problem with that. |
mdadm automatically checks MD arrays. ZFS should automatically scrub pools too. Scrubbing a pool allows ZFS to detect on-disk corruption and (when the pool has redundancy) correct it. Note that ZFS does not blindly assume the other copy is correct; it will only overwrite bad data with data that is known to be good (i.e. it passes the checksum).
I've attached a debdiff which accomplishes this. It builds and installs cleanly.
The meat of it is the scrub script I've been using on production systems, both servers and laptops, and recommending in my Ubuntu root-on-ZFS HOWTO, for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could theoretically lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing can make that worse, even though scrubs are throttled. Arguably, I might be being too conservative here, but the marginal benefit of scrubbing a *degraded* pool is pretty minimal as pools should not be left degraded for very long.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. If you'd rather make it the same week as MD, I see no problem with that. |
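The "scrub all *healthy* pools" behavior described in the report above can be sketched roughly as follows. This is a hedged reconstruction for illustration, not the actual packaged script; the sample pool names and health states are invented, and the only assumption about the real CLI is that `zpool list -H -o name,health` prints one tab-separated "name health" pair per pool.

```shell
#!/bin/sh
# Hypothetical sketch of a "scrub only healthy pools" helper.
# The real zfsutils-linux script may differ.

scrub_healthy_pools() {
    # Reads "name health" pairs on stdin; scrubs pools whose health
    # is ONLINE and skips DEGRADED/FAULTED/etc. pools, per the
    # rationale in the bug description (save disk load for resilvering,
    # avoid making degraded-pool performance worse).
    while read -r pool health; do
        if [ "$health" = "ONLINE" ]; then
            echo "scrubbing $pool"   # a real script would run: zpool scrub "$pool"
        else
            echo "skipping $pool (health: $health)"
        fi
    done
}

# In production this would be fed by: zpool list -H -o name,health
printf 'rpool\tONLINE\ntank\tDEGRADED\n' | scrub_healthy_pools
```

With the invented sample input, the ONLINE pool is selected for scrubbing and the DEGRADED pool is skipped.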
|
2016-03-14 22:36:23 |
Richard Laager |
bug |
|
|
added subscriber Ubuntu Release Team |
2016-03-17 02:36:00 |
Arto Bendiken |
bug |
|
|
added subscriber Arto Bendiken |
2016-03-29 17:56:12 |
Jared Fernandez |
bug |
|
|
added subscriber Jared Fernandez |
2016-04-08 00:27:46 |
Launchpad Janitor |
zfs-linux (Ubuntu): status |
Confirmed |
Fix Released |
|
2016-07-07 03:20:39 |
Richard Laager |
zfs-linux (Ubuntu): status |
Fix Released |
Confirmed |
|
2016-07-07 03:44:33 |
Richard Laager |
attachment added |
|
fix-scrubbing.debdiff https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1548009/+attachment/4696723/+files/fix-scrubbing.debdiff |
|
2016-07-07 17:59:33 |
Mathew Hodson |
summary |
[FFe] ZFS pools should be automatically scrubbed |
ZFS pools should be automatically scrubbed |
|
2016-07-26 15:45:26 |
Colin Ian King |
zfs-linux (Ubuntu): importance |
Wishlist |
Medium |
|
2016-07-26 15:45:28 |
Colin Ian King |
zfs-linux (Ubuntu): status |
Confirmed |
In Progress |
|
2016-07-27 21:14:17 |
Michael Terry |
bug |
|
|
added subscriber Michael Terry |
2016-07-28 03:11:14 |
Launchpad Janitor |
zfs-linux (Ubuntu): status |
In Progress |
Fix Released |
|
2016-07-28 04:57:13 |
Richard Laager |
description |
mdadm automatically checks MD arrays. ZFS should automatically scrub pools too. Scrubbing a pool allows ZFS to detect on-disk corruption and (when the pool has redundancy) correct it. Note that ZFS does not blindly assume the other copy is correct; it will only overwrite bad data with data that is known to be good (i.e. it passes the checksum).
I've attached a debdiff which accomplishes this. It builds and installs cleanly.
The meat of it is the scrub script I've been using on production systems, both servers and laptops, and recommending in my Ubuntu root-on-ZFS HOWTO, for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could theoretically lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing can make that worse, even though scrubs are throttled. Arguably, I might be being too conservative here, but the marginal benefit of scrubbing a *degraded* pool is pretty minimal as pools should not be left degraded for very long.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. If you'd rather make it the same week as MD, I see no problem with that. |
[Impact]
Xenial shipped with a cron job to automatically scrub ZFS pools, as
desired by many users and as implemented by mdadm for traditional Linux
software RAID. Unfortunately, this cron job does not work, because it needs a PATH line for /sbin, where the zpool utility lives.
Given the existence of the cron job and various discussions on IRC, etc.,
users expect that scrubs are happening when they are not. This means ZFS
is not pre-emptively checking for (and correcting) corruption. The odds of
disk corruption are admittedly very low, but violating users' expectations
of data safety, especially when they've gone out of their way to use a
filesystem which touts data safety, is bad.
[Test Case]
$ truncate -s 1G test.img
$ sudo zpool create test `pwd`/test.img
$ sudo zpool status test
$ sudo vi /etc/cron.d/zfsutils-linux
Modify /etc/cron.d/zfsutils-linux to run the cron job in a few minutes
(modifying the date range if it's not currently the 8th through the 14th
and the "-eq 0" check if it's not currently a Sunday).
$ grep zfs /var/log/cron.log
Verify in /var/log/cron.log that the job ran.
$ sudo zpool status test
Expected results:
scan: scrub repaired 0 in ... on <shortly after the cron job ran>
Actual results:
scan: none requested
Then, add the PATH line, update the time rules in the cron job, and repeat
the test. Now it will work.
- OR -
The best test case is to leave the cron job file untouched, install the
patched package, wait for the second Sunday of the month, and verify with
zpool status that a scrub ran. I did this, on Xenial, with the package I
built. The debdiff is in comment #11 and was accepted to Yakkety.
If someone can get this in -proposed before the 14th, I'll gladly install
the actual package from -proposed and make sure it runs correctly on the
14th.
[Regression Potential]
The patch only touches the cron.d file, which has only one cron job in it.
This cron job is completely broken (inoperative) at the moment, so the
regression potential is very low.
ORIGINAL, PRE-SRU, DESCRIPTION:
mdadm automatically checks MD arrays. ZFS should automatically scrub pools too. Scrubbing a pool allows ZFS to detect on-disk corruption and (when the pool has redundancy) correct it. Note that ZFS does not blindly assume the other copy is correct; it will only overwrite bad data with data that is known to be good (i.e. it passes the checksum).
I've attached a debdiff which accomplishes this. It builds and installs cleanly.
The meat of it is the scrub script I've been using on production systems, both servers and laptops, and recommending in my Ubuntu root-on-ZFS HOWTO, for years, which scrubs all *healthy* pools. If a pool is not healthy, scrubbing it is bad for two reasons: 1) It adds a lot of disk load which could theoretically lead to another failure. We should save that disk load for resilvering. 2) Performance is already less on a degraded pool and scrubbing can make that worse, even though scrubs are throttled. Arguably, I might be being too conservative here, but the marginal benefit of scrubbing a *degraded* pool is pretty minimal as pools should not be left degraded for very long.
The cron.d in this patch scrubs on the second Sunday of the month. mdadm scrubs on the first Sunday of the month. This way, if a system has both MD and ZFS pools, the load doesn't all happen at the same time. If the system doesn't have both types, it shouldn't really matter which week. If you'd rather make it the same week as MD, I see no problem with that. |
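The failure mode and fix described under [Impact] can be illustrated with a cron.d fragment along these lines. This is an illustrative sketch, not necessarily the verbatim shipped file; the schedule, script path, and exact PATH value are assumptions. The key point is that cron runs jobs with a minimal default PATH (typically /usr/bin:/bin), so a utility in /sbin is not found unless the file sets PATH explicitly.

```
# /etc/cron.d/zfsutils-linux (illustrative sketch)
# The fix: without this PATH line, cron's default PATH omits /sbin,
# so the job silently fails to find the zpool utility.
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Run on days 8-14 of the month, but only when that day is a Sunday
# (date +\%w prints 0 on Sunday), i.e. the second Sunday of the month.
# Note: % must be escaped as \% in crontab command fields.
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
```

The 8-14 day-of-month range combined with the weekday test is how "second Sunday" is expressed, matching the scheduling rationale in the description above.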
|
2016-07-28 10:20:48 |
C de-Avillez |
nominated for series |
|
Ubuntu Xenial |
|
2016-07-28 10:20:48 |
C de-Avillez |
bug task added |
|
zfs-linux (Ubuntu Xenial) |
|
2016-07-28 14:43:22 |
Andy Whitcroft |
zfs-linux (Ubuntu Xenial): status |
New |
Incomplete |
|
2016-07-28 15:24:41 |
Andy Whitcroft |
zfs-linux (Ubuntu Xenial): status |
Incomplete |
In Progress |
|
2016-07-28 15:30:53 |
Andy Whitcroft |
zfs-linux (Ubuntu Xenial): status |
In Progress |
Fix Committed |
|
2016-07-28 15:30:55 |
Andy Whitcroft |
bug |
|
|
added subscriber Ubuntu Stable Release Updates Team |
2016-07-28 15:30:57 |
Andy Whitcroft |
bug |
|
|
added subscriber SRU Verification |
2016-07-28 15:38:58 |
Daniel Holbach |
removed subscriber Ubuntu Sponsors Team |
|
|
|
2016-07-28 20:05:50 |
Mathew Hodson |
zfs-linux (Ubuntu Xenial): importance |
Undecided |
Medium |
|
2016-08-14 08:46:41 |
Richard Laager |
tags |
patch |
patch verification-done |
|
2016-08-17 12:33:03 |
Launchpad Janitor |
zfs-linux (Ubuntu Xenial): status |
Fix Committed |
Fix Released |
|
2016-08-17 12:33:14 |
Chris J Arges |
removed subscriber Ubuntu Stable Release Updates Team |
|
|
|