On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote:
It seems zfs scrub is taking a big bite out of I/O when running. During a scrub,
sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some
L2ARC helps this, but still, the problem remains in that the scrub is given
too large a share of the available I/O.
Scrub always runs at the lowest priority. However, priority scheduling only
works before the I/Os enter the disk queue. If you are running Solaris 10 or
older releases with HDD JBODs, then the default zfs_vdev_max_pending
is 35. This means that your slow disk can have 35 I/Os queued to it before
priority scheduling makes any difference. Since it is a slow disk, that could
mean 250 to 1500 ms before the high-priority I/O reaches the disk.
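As a rough sanity check on those numbers (assuming roughly 7 to 43 ms of service time per random I/O on a busy HDD, figures which are illustrative and not from the original mail):

```shell
# With 35 I/Os already queued ahead of the high-priority one, the wait is
# roughly: queue depth x per-I/O service time.
echo "best case:  $((35 * 7)) ms"    # fast service time, ~7 ms/IO
echo "worst case: $((35 * 43)) ms"   # slow/contended service time, ~43 ms/IO
```

This lines up with the 250 to 1500 ms range quoted above.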
Is this problem known to the developers? Will it be addressed?
In later OpenSolaris releases, zfs_vdev_max_pending defaults to 10,
which helps. You can tune it lower, as described in the Evil Tuning Guide.
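For reference, the two usual ways to set this tunable (this is the method documented in the Evil Tuning Guide; verify the exact syntax against your release before using it):

```shell
# Persistent: add to /etc/system, takes effect at next boot:
#   set zfs:zfs_vdev_max_pending = 10

# Live system: patch the kernel variable with mdb (0t marks decimal):
echo zfs_vdev_max_pending/W0t10 | mdb -kw
```

Lowering the per-vdev queue depth trades a little streaming throughput for much better latency on high-priority I/O during a scrub.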
Also, as Robert pointed out, CR 6494473 offers a more resource-management-friendly
way to limit scrub traffic (b143). Everyone can buy George a beer for
implementing this change :-)