[email protected]
[Top] [All Lists]


Subject: Re: [zfs-discuss] zones/zonerootA,B,C
From: dick hoogendijk
Date: Sun, 2 Nov 2008 20:11:52 +0100
On Mon, 03 Nov 2008 07:16:29 +1300
Ian Collins <[email protected]> wrote:

> dick hoogendijk wrote:
> > Sun advises creating a separate ZFS filesystem for every zone:
> > # zfs create -o canmount=noauto rpool/ROOT/s10BE/zones
> > # zfs mount rpool/ROOT/s10BE/zones
> > # zfs create -o canmount=noauto rpool/ROOT/s10BE/zones/zrootA
> > And so on for every zone.
> > This takes memory. Can't I just use a single rpool/ROOT/s10BE/zones
> > filesystem and create a directory in it for every zone, instead of
> > separate ZFS filesystems? What are the (dis)advantages?
> >   
> All the usual advantages of finer-grained control.  Unless you have a
> marginal system, the overhead, if any, is minimal.
> Live Upgrade will do this (create an FS per zone) for you when you
> migrate from UFS to ZFS boot.
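For reference, the per-zone layout Sun suggests is what makes that finer-grained control possible: you can set properties and take snapshots for one zone without touching the others. A minimal sketch, using the dataset names from the example above (the quota value and snapshot name are made up for illustration):

```shell
# Parent dataset for all zone roots in this boot environment
zfs create -o canmount=noauto rpool/ROOT/s10BE/zones
zfs mount rpool/ROOT/s10BE/zones

# One filesystem per zone -- this is what enables per-zone control
zfs create -o canmount=noauto rpool/ROOT/s10BE/zones/zrootA

# Examples of the finer-grained control mentioned above:
zfs set quota=4g rpool/ROOT/s10BE/zones/zrootA        # cap one zone's disk use
zfs snapshot rpool/ROOT/s10BE/zones/zrootA@pre-patch  # snapshot just this zone
```

With plain directories under a single filesystem, quotas, snapshots, and rollback would only apply to all zones at once.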

Live Upgrade does -NOT- do this on my system. Updating s10u4 to s10u5
was already hell, and s10u6 is not able to copy my zones either. I
have -NO- idea why: just three simple sparse zones on a slice mounted
on /zones. I got the same errors and had to use the same trick.

First, ufsdump/ufsrestore the zone slice to a new drive.
Then remove the zone files from /etc/zones and the /zones entry from vfstab.
Do a lucreate; -UNDO- the zone changes and upgrade the new BE.
The result is a new s10u6 with upgraded zones ;-)
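Spelled out as commands, the workaround might look like this. This is only a sketch of what is described above: the device names, mount point, and BE name are hypothetical, and the vfstab edit is left as a manual step:

```shell
# 1. Copy the zone slice to a new drive (devices are hypothetical)
mount /dev/dsk/c0t1d0s7 /mnt/newzones
ufsdump 0f - /dev/rdsk/c0t0d0s7 | (cd /mnt/newzones && ufsrestore rf -)

# 2. Hide the zone configuration so lucreate ignores the zones
mv /etc/zones /etc/zones.save
# ...and comment out the /zones entry in /etc/vfstab by hand

# 3. Create and upgrade the new boot environment (BE name is made up)
lucreate -n s10u6BE -m /:/dev/dsk/c0t1d0s0:ufs
luupgrade -u -n s10u6BE -s /cdrom/cdrom0

# 4. -UNDO- the zone changes: restore /etc/zones and the vfstab entry
mv /etc/zones.save /etc/zones
```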

So, I'll do the same trick to go from this s10u6 (UFS) to a new BE on
ZFS: first without zones, then copy the zones over onto newly created
ZFS filesystems. Put the files back in /etc/zones and all will be well.
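Under the same assumptions, the UFS-to-ZFS step might look like the following sketch (pool, BE, and zone names are illustrative; `lucreate -p` names the ZFS root pool the new BE is created in):

```shell
# Create the new ZFS boot environment without the zones, then boot it
lucreate -n zfsBE -p rpool
luactivate zfsBE
init 6

# After booting the ZFS BE: one filesystem per zone, as Sun advises
zfs create -o canmount=noauto rpool/ROOT/zfsBE/zones
zfs mount rpool/ROOT/zfsBE/zones
zfs create rpool/ROOT/zfsBE/zones/zrootA

# Copy each zone root over, then put the /etc/zones files back
(cd /zones/zrootA && find . | cpio -pdm /rpool/ROOT/zfsBE/zones/zrootA)
mv /etc/zones.save /etc/zones
```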

Once on ZFS I hope to never have to use this trick again for LU.

Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv101 ++
+ All that's really worth doing is what we do for others (Lewis Carroll)
