It would help to have some configuration detail (e.g. what options are you using? zpool status output; property lists for the specific filesystems and zvols; etc.).
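For example, something like the following would capture most of that (the pool and dataset names below are just placeholders for whatever you are exporting):

  # pool health and layout
  zpool status -v
  zpool get all tank
  # properties for the filesystems and zvols being served over NFS/iSCSI
  zfs get all tank/nfs_share
  zfs get all tank/iscsi_vol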
Some basic Solaris stats can be very helpful too (e.g. 1-second samples taken during peak load: vmstat 1, mpstat 1, iostat -xnz 1, etc.).
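Something along these lines, run while a slow copy is in progress, is usually enough (the 30-second duration is arbitrary):

  # 1-second samples for 30 seconds, captured during the slow transfer
  vmstat 1 30 > /tmp/vmstat.out
  mpstat 1 30 > /tmp/mpstat.out
  iostat -xnz 1 30 > /tmp/iostat.out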
It would also be great to know how you are running your tests.
I'd also like to know which NFS version and mount options you are using. A network trace
down to the NFS RPC or iSCSI operation level, with timings, would be great too.
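If the client is also Solaris, nfsstat -m will show the NFS version and mount options in use, and snoop can capture the trace (the interface name and server address below are placeholders):

  # on the client: NFS version and mount options for each NFS mount
  nfsstat -m
  # capture traffic to/from the filer for later analysis in wireshark
  snoop -d e1000g0 -o /tmp/nfs.snoop host 192.168.1.10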
I'm wondering whether your HBA has a write-through or write-back cache enabled?
The latter might make things very fast, but could put data at risk if the cache is
not battery-backed.
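If the LSI HBA is a MegaRAID-family controller and MegaCli is installed (an assumption on my part), something like this should show the current cache policy:

  # report WriteBack vs WriteThrough policy for all logical drives
  MegaCli -LDGetProp -Cache -LAll -aAll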
On 14 Oct 2010, at 22:02, Ian D <[email protected]> wrote:
>> Our next test is to try with a different kind of HBA,
>> we have a Dell H800 lying around.
> ok... we're making progress. After swapping the LSI HBA for a Dell H800 the
> issue disappeared. Now, I'd rather not use those controllers because they
> don't have a JBOD mode. We have no choice but to make individual RAID0
> volumes for each disk, which means we need to reboot the server every time we
> replace a failed drive. That's not good...
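For reference, assuming MegaCli is available for your OS (the enclosure:slot and adapter numbers below are placeholders), the per-disk RAID0 volumes are typically created like this; whether the new volume appears without a reboot depends on the driver:

  # find the enclosure:slot of the replacement drive
  MegaCli -PDList -aAll
  # create a single-disk RAID0 logical drive from enclosure 32, slot 4
  MegaCli -CfgLdAdd -r0 [32:4] WT Direct -a0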
> What can we do with the LSI HBA? Would you call LSI's support? Is there
> anything we should try besides the obvious (using the latest firmware)?
> To summarize the issue: when we copy files from/to the JBODs connected to that
> HBA using NFS/iSCSI, we get slow transfer rates (<20 MB/s) and a 1-2 second pause
> between each file. When we do the same experiment locally, using the
> external drives as a local volume (no NFS/iSCSI involved), it goes upward
> of 350 MB/s with no delay between files.
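A quick way to put numbers on both paths is a large sequential dd on the server and then the same transfer over the NFS mount (the paths and sizes are just examples):

  # local write straight to the pool backed by the JBOD
  dd if=/dev/zero of=/tank/ddtest bs=1024k count=4096
  # the same write from a client through the NFS mount
  dd if=/dev/zero of=/mnt/tank/ddtest bs=1024k count=4096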