On Sep 15, 2009, at 5:21 PM, Richard Elling wrote:
On Sep 15, 2009, at 1:03 PM, Dale Ghent wrote:
On Sep 10, 2009, at 3:12 PM, Rich Morris wrote:
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched
state at High priority.
CR 6859997 has recently been fixed in Nevada. This fix will also
be in Solaris 10 Update 9.
This fix speeds up the sequential prefetch pattern described in
this CR without slowing down other prefetch patterns. Some kstats
have also been added to help improve the observability of ZFS file
prefetching.
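For reference, prefetch activity can be inspected with kstat(1M); a sketch, assuming the new counters land under the usual zfetchstats name (exact kstat names may vary by build):

```shell
# Dump the ZFS file-level prefetch ("zfetch") statistics, e.g. hits
# and misses, to gauge how effective prefetching is on this host.
kstat -n zfetchstats
```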
Awesome that the fix exists. I've been having a hell of a time with
device-level prefetch on my iscsi clients causing tons of
ultimately useless IO, and have resorted to setting zfs_vdev_cache_max=1.
This only affects metadata. Wouldn't it be better to disable
prefetching for data?
Well, that's a surprise to me, but setting zfs_vdev_cache_max=1 did help.
Just a general description of my environment:
My setup consists of several s10uX iscsi clients which get LUNs from
pairs of thumpers. Each thumper pair exports identical LUNs to each
iscsi client, and the client in turn mirrors each LUN pair inside a
local zpool. As more space is needed on a client, a new LUN is created
on the pair of thumpers, exported to the iscsi client, which then
picks it up, and we add a new mirrored vdev to the client's existing
zpool.
This is so we have data redundancy across chassis, so if one thumper
were to fail or need patching, etc., the iscsi clients just see one
side of their mirrors drop out.
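The per-client layout described above might be sketched like this (discovery addresses and device names are purely hypothetical placeholders, not the real ones):

```shell
# Hypothetical sketch: discover targets from both thumpers in a pair.
iscsiadm add discovery-address 10.0.0.1:3260 10.0.0.2:3260
iscsiadm modify discovery --sendtargets enable

# Mirror one LUN from each thumper inside the client's pool...
zpool create tank mirror c2t600...d0 c3t600...d0
# ...and to grow, export a new LUN from each thumper and attach
# another mirrored vdev to the same pool.
zpool add tank mirror c2t601...d0 c3t601...d0
```

This keeps each mirror half on a different chassis, which is what lets a thumper drop out for patching without taking the client's pool down.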
The problem that we observed on the iscsi clients was that, when
viewing things through 'zpool iostat -v', far more IO was being
requested from the LUs than was being registered for the vdev those
LUs were a member of.
Since this is an iscsi setup with stock thumpers (no SSD ZIL or
L2ARC) serving the LUs, this apparent overhead caused far more
unnecessary disk IO on the thumpers, thus starving out IO for data
that was actually needed.
The working set is lots of small-ish files, entirely random IO.
If zfs_vdev_cache_max only affects metadata prefetches, which
parameter affects data prefetches ?
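For what it's worth, here is a sketch of the two knobs as I understand them, as /etc/system settings (semantics may vary by release, so treat this as an assumption and check your release's tuning documentation): zfs_vdev_cache_max gates the device-level read-ahead cache, which now only holds metadata, while zfs_prefetch_disable turns off file-level data prefetch.

```shell
# /etc/system fragment (a sketch, not gospel).
# Shrink the vdev read-ahead cache so device-level (metadata)
# prefetch is effectively disabled:
set zfs:zfs_vdev_cache_max = 1
# Disable file-level (data) prefetching entirely:
set zfs:zfs_prefetch_disable = 1
```

Both take effect at boot; they can also be poked at runtime with mdb on a test box before committing them.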
I have to admit that disabling device-level prefetching was a shot in
the dark, but it did result in drastically reduced contention on the
thumpers.
Question though... why is a bug fix that can be a watershed for
performance held back for so long? s10u9 won't be available for
at least 6 months from now, and with a huge environment, I try hard
not to live off of IDRs.
Am I the only one that thinks this is way too conservative? It's
just maddening to know that a highly beneficial fix is out there,
but its release is based on time rather than need. Sustaining
really needs to be more proactive when it comes to this stuff.
zfs-discuss mailing list