zfs-discuss@opensolaris.org


Subject: Re: [zfs-discuss] Size discrepancy beyond expected amount?
From: Harry Putnam
Date: Fri, 20 Mar 2009 19:22:05 -0500
Blake <blake.irvin@xxxxxxxxx> writes:

> Replies inline (I really would recommend reading the whole ZFS Best
> Practices guide a few times - many of your questions are answered in
> that document):

First, I hope you don't think I am being hard headed and not reading
the documentation you suggest.  I have some trouble reading it, but
more important... it really is hard to make any of the info stick in
my pea brain without ANY actual experience to hang it on.

That is what makes responses like yours, which are based on actual
experience, so valuable to me.

[...] snipped out the whole prior discussion to take a different turn
      now

> If the hardware is old/partially supported/flaky, all the more reason
> to use mirrors...

OK, I'm convinced... Mirrors it is.  I will ALWAYS be on old, shaky,
half-supported hardware.

And I think I'm seeing a way to do this.  But first, the Best
Practices guide says in several places that keeping any storage data
off rpool is a good thing... a best practice.

I didn't fully understand the reasoning given under
"ZFS Root Pool Considerations".  One of the reasons:

  Data pools can be architecture-neutral. It might make sense to move
  a data pool between SPARC and Intel. Root pools are pretty much tied
  to a particular architecture.

How hard a problem is that?

For example: can Windows XP data be kept there without problems?
             Ditto Linux data?
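From what I gather, moving a data pool between architectures isn't
hard at all: ZFS writes an endian-adaptive on-disk format, so a plain
export/import works between SPARC and x86.  A sketch (the pool name
`tank` and the exact steps are just an example, not anything from the
guide):

```shell
# On the old host (say SPARC): cleanly detach the data pool.
zpool export tank

# Physically move the disks to the new host (say x86), then:
zpool import tank     # scans attached devices for exported pools
zpool status tank     # confirm the pool imported healthy
```

Root pools are different because the boot blocks and boot archive are
platform-specific, which seems to be why the guide says they're tied
to an architecture.  Windows or Linux file data stored in a ZFS
filesystem is just bytes to ZFS, so it moves with the pool; the catch
is that those OSes can't read ZFS disks directly, so they'd typically
get at it over NFS or CIFS.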

I'm guessing `best practices' are mainly aimed at people who have
sound, new, supported hardware... so I'm guessing there may be some
corners a user like me can cut.

I'm thinking I could manage 3 mirror-style pools using mostly the
hardware I have, plus maybe 1 to 3 more purchases: another 500 GB IDE
disk and possibly two 750 GB SATA drives.

I would expect to use the rpool for backup storage too, since the OS
would take only a tiny portion.  Being mirrored should remove some of
the problems of having data on rpool.

Another of the reasons given in `best practices' for NOT keeping data
on rpool was the fact that it cannot be in a raidz, only a mirror,
but of course that's just what I'll be wanting anyway.

I could move my OS to a 500 GB disk and mirror rpool (2 IDE ports
down), then create 2 more zpools, each a 500 GB disk plus its mirror,
on the remaining IDE controller ports, which would use all of them.
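If it helps, both operations are one-liners.  A sketch with made-up
controller/disk names (the real ones come from `format` or
`zpool status`):

```shell
# Mirror the existing root pool by attaching a second disk.
# Root pool devices must be slices (s0) with an SMI label.
zpool attach rpool c0d0s0 c1d0s0

# Make the new half bootable as well (x86/GRUB):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0

# A separate mirrored data pool can use whole disks:
zpool create tank mirror c2d0 c3d0
```

The `zpool attach` kicks off a resilver onto the new disk; the mirror
isn't redundant until `zpool status` shows it finished.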

It appears, after looking around when buying the 3 500 GB IDE drives,
that 500 GB is about as big as it gets for IDE.  Nobody cares much
about IDE anymore.

Then one more zpool on two SATA ports.  I only have 200 GB SATA disks
on hand, but could use those well in one or another of my Windows
machines, and I might splurge on a pair of 750s for the SATA ports.

So (keeping Casper happy) 500 + 500 + 750 would give me
                          ===============
                             1750 GB

Minus 50 GB for the OS and its mirror, that would give me about
1700 GB of mirrored storage room.
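Spelling out that arithmetic (usable space in a mirror is one side of
each pair, so you count one disk per pool):

```shell
# Each mirrored pool contributes the size of ONE disk of the pair.
total=$((500 + 500 + 750))   # GB: two IDE pools plus the SATA pool
usable=$((total - 50))       # GB: reserve ~50 GB for the OS on rpool
echo "raw: ${total} GB, data: ~${usable} GB"
```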

I doubt I'd actually use that up.  But if I did, I could always add a
better SATA controller... one with 6 ports or something.  Or get
really big SATA drives to replace the 750s.

Or would I be likely to run into some kind of PSU underpower problem
with 10 or so disks?

The one I have is 420 watt, on an Athlon64 3400+ at 2.2 GHz with an
AOpen mobo and a limit of 3 GB RAM.

I kind of like to keep the disk size down, personally, because in my
experience on Linux, and some on Windows, the bigger the disk the
longer everything takes... I mean like formatting, scanning,
defragmenting on Windows... etc.

Maybe that isn't true with ZFS, but it seems something like
resilvering would have to take much longer on a really huge disk.  So
750 GB may be about as big as I'd care to go.
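For what it's worth, resilver time in ZFS scales with the amount of
live data rather than the raw disk size: ZFS walks the block tree and
copies only allocated blocks, so a half-empty 750 GB disk resilvers
much faster than a full one.  You can watch it while it runs (pool
name hypothetical):

```shell
# While a resilver (or scrub) is running:
zpool status tank    # shows "resilver in progress", percent done, ETA

# A periodic scrub verifies every checksum without rebuilding:
zpool scrub tank
```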


_______________________________________________
zfs-discuss mailing list
zfs-discuss@xxxxxxxxxxxxxxx
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
