On 10/3/07, Roch - PAE <[email protected]> wrote:
> We do not retain 2 copies of the same data.
> If the DB cache is made large enough to consume most of memory,
> the ZFS copy will quickly be evicted to stage other I/Os on
> their way to the DB cache.
> What problem does that pose ?
1) Memory copy operations are expensive. I think the following is a
good introduction to this problem:
"Copying data in memory can be a serious bottleneck in DBMS software
today. This fact is often a surprise to database students, who assume
that main-memory operations are "free" compared to disk I/O. But in
practice, a well-tuned database installation is typically not
I/O-bound." (section 3.2)
(Ch 2: Anatomy of a Database System, Readings in Database Systems, 4th Ed)
2) If you look at the TPC-C disclosure reports, you will see vendors
using thousands of disks for the top 10 systems. With that many disks
working in parallel, the I/O latencies are not as big a problem as
they are on systems with fewer disks.
3) Also interesting is Concurrent I/O, which was introduced in AIX 5.2:
"Improving Database Performance With AIX Concurrent I/O"
"Improve database performance on file system containers in IBM DB2 UDB
V8.2 using Concurrent I/O on AIX"
> > Thanks,
> > - Ryan
> > --
> > UNIX Administrator
> > http://prefetch.net
> > _______________________________________________
> > zfs-discuss mailing list
> > [email protected]
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss