Is it possible to use the zfs copies property and put the disks individually
into a pool? That would give you 3TB (1.5 + 1 + .5) usable.
The documentation states that copies will be spread across disks. But what I
don't know (and don't have a test box to figure out) is whether losing a disk
still kills the whole pool in spite of having multiple copies of the data. I
have a sneaking suspicion that the pool would be toast.
> Erik Trimble wrote:
> While the "figure it out for me and make it as big as you can while still
> being safe" magic of Drobo is nice for home users, it's less than ideal for
> enterprise users that require performance guarantees.
> It would be nice if somebody created a simple tool that, fed a set of
> disks, computed the configuration required for maximum usable redundant
> space.
> zfs-discuss mailing list
> [email protected]
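Something along these lines could be a starting point for the tool described
above. This is just a rough sketch, not any real tool: it only considers a
single vdev spanning all the disks, uses the standard ZFS capacity rules
(members truncated to the smallest disk; raidz1 loses one disk's worth,
raidz2 two), and the disk sizes are the 1.5/1/.5 TB set from this thread.

```python
# Hypothetical sketch of the suggested tool: given a set of disk sizes (TB),
# enumerate a few standard single-vdev ZFS layouts and report usable space
# for each, so you can pick the biggest one that is still redundant.

def usable_space(disks):
    """Return {layout: usable TB} for single-vdev layouts over all disks."""
    n = len(disks)
    smallest = min(disks)  # ZFS truncates every vdev member to the smallest
    layouts = {}
    if n >= 2:
        # raidz1: survives one failure, capacity of n-1 truncated disks
        layouts["raidz1"] = (n - 1) * smallest
        # n-way mirror: survives n-1 failures, capacity of one truncated disk
        layouts["mirror"] = smallest
    if n >= 3:
        # raidz2: survives two failures, capacity of n-2 truncated disks
        layouts["raidz2"] = (n - 2) * smallest
    return layouts

disks = [1.5, 1.0, 0.5]  # the drives discussed in this thread
for layout, tb in sorted(usable_space(disks).items(), key=lambda kv: -kv[1]):
    print(f"{layout}: {tb} TB usable")
```

For these three disks it would report raidz1 at 1.0 TB as the best redundant
layout, which shows how far the mixed sizes drag things down compared with
the 3 TB you'd get from a plain stripe with no redundancy. A fuller tool
would also try partitioning the disks into multiple vdevs.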
This message posted from opensolaris.org