zfs-discuss@opensolaris.org

[zfs-discuss] Drive Failure w/o Redundancy

From: Jef Pearlman
Date: Tue, 26 Jun 2007 11:55:31 PDT
Hi. I'm looking for the best way to build an expandable, heterogeneous pool of 
drives. In an ideal world there would be a RAID variant that could handle both 
mixed drive sizes and the addition of new drives to an existing group (so one 
could drop in a new drive of arbitrary size, keep some redundancy, and gain 
most of that drive's capacity), but my impression is that we're far from there.

Absent that, I was considering using ZFS with a single non-redundant pool. My 
main question is this: what is the failure mode of ZFS if one of those drives 
either fails completely or starts returning errors? Do I permanently lose 
access to the entire pool? Can I still read the data on the surviving drives? 
Can I "zpool replace" the bad drive and recover some level of data? Otherwise, 
by pooling drives without redundancy, am I simply multiplying the probability 
of a catastrophic data loss? I apologize if this is addressed elsewhere -- I've 
read a fair amount about ZFS, but haven't come across this particular answer.
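(For anyone wanting to observe the behavior firsthand without risking real 
disks, a throwaway pool built on files makes a cheap sandbox. A sketch only -- 
it assumes root on a ZFS-capable host; the pool name and /tmp paths are made 
up for illustration:)

```shell
# Create three 128 MB backing files to stand in for disks (paths arbitrary).
mkfile 128m /tmp/vdev1 /tmp/vdev2 /tmp/vdev3

# Build a non-redundant pool: data is striped across the two "disks".
zpool create testpool /tmp/vdev1 /tmp/vdev2

# Simulate media errors by corrupting the middle of one backing file
# (avoiding the vdev labels at the start and end of the device),
# then scrub and see what ZFS reports.
dd if=/dev/urandom of=/tmp/vdev1 bs=1024k seek=32 count=32 conv=notrunc
zpool scrub testpool
zpool status -v testpool   # with no redundancy, damaged files are listed
                           # rather than repaired

# Note it is "zpool replace", not "zfs replace". Resilvering can only copy
# data that still has a good replica, so in a non-redundant pool a replace
# cannot reconstruct what lived on the failed device.
zpool replace testpool /tmp/vdev1 /tmp/vdev3

# Clean up the sandbox.
zpool destroy testpool
rm /tmp/vdev1 /tmp/vdev2 /tmp/vdev3
```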

As a side question, does anyone have a suggestion for a sensible way to 
approach this goal? This is not mission-critical data, but I'd prefer not to 
make data loss _more_ probable. Perhaps some volume manager (like LVM on 
Linux) has the appropriate features?

Thanks for any help.

-puk
This message posted from opensolaris.org

zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
