On 06/04/2010 06:15 PM, Bob Friesenhahn wrote:
> On Fri, 4 Jun 2010, Sandon Van Ness wrote:
>> Interestingly enough, when I went to copy the data back I got even worse
>> download speeds than I did write speeds! It looks like I need some sort
>> of read-ahead, as unlike the writes it doesn't appear to be CPU bound;
>> using mbuffer/tar gives me full gigabit speeds. You can see in my graph
> I am still not sure what you are doing, however, it should not
> surprise that gigabit ethernet is limited to one gigabit of traffic
> (1000 Mb/s) in either direction. Theoretically you should be able to
> get a gigabit of traffic in both directions at once, but this depends
> on the quality of your ethernet switch, ethernet adaptor card, device
> driver, and capabilities of where the data is read and written to.
The problem is that just using rsync I am not getting gigabit. For me,
gigabit maxes out at around 930-940 megabits. With rsync alone I was
only getting around 720 megabits incoming, and only when it's reading
from the block device. When reading from memory (i.e. after cat-ing a
few big files on the server so they are cached) it gets ~935 megabits.
The machine is easily able to sustain that read (and write) speed;
the problem is getting it to actually do it.
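For reference, the cached-vs-uncached comparison described above can be reproduced with something like the following (hostnames and paths are placeholders, not from the original setup):

```shell
# Warm the server's ARC by reading the files once, discarding the output
# (path and hostname are hypothetical examples)
ssh server 'cat /tank/data/bigfile* > /dev/null'

# Then pull the same files; with the data already cached in RAM the
# transfer should approach wire speed (~935 Mb/s on gigabit)
rsync -av server:/tank/data/ /local/dest/
```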
The only way I was able to get a full gigabit (935 megabits) was by
using tar and mbuffer, since mbuffer acts as a read-ahead buffer. Is
there any way to turn the prefetch up? There really is no reason I
should only be getting 720 megabits when copying files off with rsync
(or NFS) like I am seeing.
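For anyone wanting to try the tar/mbuffer approach mentioned above, a minimal sketch looks like this (hostname, port, buffer sizes, and paths are illustrative assumptions, not values from the original post):

```shell
# Receiving side: listen on a TCP port with a large in-memory buffer
# (-m 1G) and a 128k block size, then unpack the incoming tar stream
mbuffer -s 128k -m 1G -I 9090 | tar -xf - -C /local/dest

# Sending side: stream the dataset as a tar archive through mbuffer
# to the receiver; mbuffer's buffer effectively acts as read-ahead,
# keeping the network pipe full while disks seek
tar -cf - -C /tank/data . | mbuffer -s 128k -m 1G -O receiver:9090
```

The buffer smooths out bursty disk reads, which is why this pipeline can sustain wire speed where a single synchronous reader like rsync stalls.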
zfs-discuss mailing list