Bill Sommerfeld wrote:
On Wed, 2006-06-21 at 14:15, Neil Perrin wrote:
Of course we would need to stress the dangers of setting 'deferred'.
What do you guys think?
I can think of a use case for "deferred": improving the efficiency of a
large mega-"transaction"/batch job such as a nightly build.
You create a dedicated filesystem for the build, initially empty or cloned,
start the build off, and don't look inside until it completes. If the
build machine crashes in the middle of the build, you're going to nuke it
all and start over, because that's lower risk than assuming you can pick
up where it left off.
Now, it happens that a bunch of tools used during a build invoke fsync().
But in the context of a full nightly build that effort is wasted. All
you need is one big "sync everything" at the very end, either by using a
command like sync or lockfs -f, or as a side effect of reverting from
sync=deferred to sync=standard.
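The workflow Bill describes might look something like this. Note this is
only a sketch: the sync=deferred property is the proposal being discussed
in this thread, not a shipping ZFS feature, and the dataset name
tank/nightly and build script nightly.sh are made up for illustration.

```shell
# Hypothetical sketch of the proposed deferred-sync build workflow.
# 'sync=deferred' is the property under discussion, not a real ZFS
# property at the time of this thread; names below are illustrative.

# Dedicated, initially empty filesystem for the build:
zfs create tank/nightly

# Defer synchronous semantics for the duration of the batch job,
# so the fsync() calls scattered through the toolchain cost nothing:
zfs set sync=deferred tank/nightly

# Run the build; on a crash you would destroy and recreate the
# filesystem rather than trust its contents:
(cd /tank/nightly && ./nightly.sh) || { zfs destroy -r tank/nightly; exit 1; }

# One big "sync everything" at the very end, as a side effect of
# reverting the property (lockfs -f or sync would also do):
zfs set sync=standard tank/nightly
```

The point of the last step is that a single flush at the end gives the
same durability for the finished build as thousands of per-file fsync()
calls would have, at a fraction of the cost.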
Can I give support for this use case? Or does it take someone like
Casper Dik with 'fastfs' to come along later and provide a utility that
lets people make the filesystem do what they want it to? [Still annoyed
that it took me so long to find out about fastfs - hell, the Solaris 8 or
9 OS installation process was using the same IOCTL as fastfs uses, but
for some reason end users still have to find fastfs out on the Net
somewhere instead of getting it with the OS.]
If the ZFS docs state why it's not for general use, then what separates
this from the zillion other ways that a cavalier sysadmin can bork their
data (or indeed their whole machine)? Otherwise, why even let people
create a striped zpool vdev without redundancy - it's just an accident
waiting to happen, right? We must save people from themselves! Think of
the children! ;-)
Jason.Ozolins@xxxxxxxxxx    ANU Supercomputer Facility
APAC Grid Program           Leonard Huxley Bldg 56, Mills Road
Ph:  +61 2 6125 5449        Australian National University
Fax: +61 2 6125 8199        Canberra, ACT, 0200, Australia
zfs-discuss mailing list