zfs-discuss@opensolaris.org

Re: [zfs-discuss] Any limit on pool hierarchy?

Subject: Re: [zfs-discuss] Any limit on pool hierarchy?
From: Haudy Kazemi
Date: Mon, 08 Nov 2010 18:02:23 -0600
Bryan Horstmann-Allen wrote:
+------------------------------------------------------------------------------
| On 2010-11-08 13:27:09, Peter Taps wrote:
| | From zfs documentation, it appears that a "vdev" can be built from more vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs, and a mirror can be built across a few raidz vdevs.
| |
| | Is my understanding correct? Also, is there a limit on the depth of a vdev?
Some of the confusion here comes from how the documentation uses the terms "virtual device" and "vdev"; it can be unclear in this regard.

There are two types of vdevs: root and leaf. A pool's root vdevs are usually of the 'mirror' or 'raidz' type, but can also directly use the underlying devices if you don't want any redundancy from ZFS at all. A pool dynamically stripes data across all the root vdevs present (and not yet full) in that pool at the time the data is written.

Leaf vdevs directly use the underlying devices. Underlying devices may be hard drives, solid state drives, iSCSI volumes, or even files on filesystems. Root vdevs cannot directly be used as underlying devices.
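As a sketch of that layout (the pool name and disk device names below are illustrative, not from the original post):

```shell
# Create a pool whose two top-level (root) vdevs are mirrors.
# ZFS dynamically stripes writes across both mirrors; the four
# disks are the leaf vdevs.
zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c2t0d0 c2t1d0

# Nesting vdev types is rejected, e.g. a mirror of raidz vdevs:
#   zpool create bad mirror raidz c1t0d0 c1t1d0 raidz c2t0d0 c2t1d0
# fails, because mirror and raidz vdevs may only contain disks or files.
```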


You are incorrect.

The man page states:

     Virtual devices cannot be nested, so a mirror or raidz virtual
     device can only contain files or disks. Mirrors of mirrors
     (or other combinations) are not allowed.

     A pool can have any number of virtual devices at the top of
     the configuration (known as "root vdevs"). Data is dynamically
     distributed across all top-level devices to balance data among
     devices. As new virtual devices are added, ZFS automatically
     places data on the newly available devices.

This has been discussed in some previous threads. There is a way to perform nesting, but it is *not* recommended. The trick is to insert another abstraction layer that hides ZFS from itself, or in other words, to convert a root vdev back into an underlying device.

One example is creating iSCSI targets out of a ZFS pool, and then creating a second ZFS pool out of those iSCSI targets. Another example is creating a ZFS pool out of files stored on another ZFS pool.

The main reasons given for not doing this are unknown edge and corner cases that may lead to deadlocks, and that it creates a complex structure with potentially undesirable and unintended performance and reliability implications. Deadlocks may occur in low-resource conditions; if resources (disk space and RAM) never run low, the deadlock scenarios may not arise.
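The file-backed variant can be sketched as follows (pool names, the disk device, and file paths are illustrative; mkfile is the Solaris utility for preallocating files):

```shell
# NOT recommended: nesting ZFS inside ZFS via file-backed vdevs.
# 'outer' is an ordinary pool; files stored on it become the
# "underlying devices" of a second pool.
zpool create outer c1t0d0
mkfile 1g /outer/vdev0 /outer/vdev1
zpool create inner mirror /outer/vdev0 /outer/vdev1

# Under low-memory or low-space conditions this layering can
# deadlock: flushing writes from 'inner' requires allocations
# in 'outer', which may itself be waiting on the same resources.
```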

_______________________________________________
zfs-discuss mailing list
zfs-discuss@xxxxxxxxxxxxxxx
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
