qemu-devel-bounces+lirans=il.ibm.com@xxxxxxxxxx wrote on 07/09/2009
> On 7 sept. 2009, at 11:24, lirans@xxxxxxxxxx wrote:
> > This series adds support for live migration without shared storage,
> > i.e. copying the storage while migrating. It was tested with KVM.
> > It supports 2 ways to replicate the storage during migration:
> > 1. Complete copy of the storage to the destination.
> > 2. Assuming the storage is COW-based, copy only the allocated data;
> >    migration time will be linear with the amount of allocated data
> >    (it is the user's responsibility to verify that the same backing
> >    file resides on source and destination).
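The "copy only the allocated data" idea can be illustrated at the host file-system level with sparse files, where unallocated ranges show up as holes. This is only a sketch of the concept, not what the patch does (the patch works at QEMU's block-driver level); the function name and 1 MB hole below are illustrative:

```python
import os

def copy_allocated(src_path, dst_path):
    """Copy only the allocated extents of a (possibly sparse) file.

    Unallocated ranges are skipped, so copy time scales with the
    amount of allocated data rather than the virtual disk size.
    """
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        size = os.fstat(src.fileno()).st_size
        dst.truncate(size)  # give the destination the full virtual size
        offset = 0
        while offset < size:
            try:
                # find the start of the next allocated extent
                data_start = os.lseek(src.fileno(), offset, os.SEEK_DATA)
            except OSError:
                break  # no data extents past this offset
            # find where that extent ends (the next hole, or EOF)
            data_end = os.lseek(src.fileno(), data_start, os.SEEK_HOLE)
            src.seek(data_start)
            dst.seek(data_start)
            dst.write(src.read(data_end - data_start))
            offset = data_end
```

On file systems without hole support, the kernel reports the whole file as one data extent, so the sketch degrades gracefully into a full copy.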
> > Live migration will work as follows:
> > (qemu) migrate -d tcp:0:4444          # ordinary live migration
> > (qemu) migrate -d blk tcp:0:4444      # live migration with complete storage copy
> > (qemu) migrate -d blk inc tcp:0:4444  # live migration with incremental storage copy (COW-based storage)
> > The patches are against kvm-87.
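Copying the storage while the guest keeps running is essentially iterative pre-copy at the block level: send everything once, then re-send blocks the guest dirtied in the meantime until the dirty set drains. A minimal sketch of that loop, assuming a hypothetical dirty-tracking callback (this is not the patch's actual code):

```python
def migrate_storage(src_blocks, send, get_and_clear_dirty):
    """Iteratively copy blocks while the guest keeps writing.

    src_blocks: dict mapping block index -> bytes (the source image)
    send: callback transmitting (index, data) to the destination
    get_and_clear_dirty: returns the set of block indices written
        since the previous call (assumed dirty-tracking interface)
    """
    # First pass: copy every block once.
    for idx in sorted(src_blocks):
        send(idx, src_blocks[idx])
    # Converge: re-send blocks dirtied during the previous pass,
    # until a pass completes with nothing left to re-send.
    dirty = get_and_clear_dirty()
    while dirty:
        for idx in sorted(dirty):
            send(idx, src_blocks[idx])
        dirty = get_and_clear_dirty()
```

In a real implementation the loop must also bound the number of passes (or switch to a final stopped-guest copy) so a write-heavy guest cannot keep it from converging.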
> I'm trying blk migration (not incremental) between two machines
> connected over Gigabit ethernet.
> The transfer is quite slow (about 2 MB/s over the wire).
> While the load on the sending end is low (vmstat says ~2000 blocks in/
> sec, and top says ~1% in io wait), on the receiving end I see almost
> 40% CPU in io wait, kjournald takes 20% of the CPU and vmstat reports
> ~14000 blocks out/sec.
> Hosts are running Debian Lenny (2.6.26 32 bits), kvm-87 + your patches.
> The guest is also running Debian Lenny and is idle io wise. I tried
> with both idle and full cpu utilization, it doesn't change anything.
> Pierre Riteau -- http://perso.univ-rennes1.fr/pierre.riteau/
True, I see that there is a problem with performance. I need to change the destination side to write to disk asynchronously, and the source side to submit asynchronous reads from disk at least at the rate of the network bandwidth. I will resend a new patch.
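Matching the rate of submitted disk reads to the network bandwidth can be done with a simple token bucket. A rough sketch of the idea, with an injectable clock for testability (the class and its interface are illustrative assumptions, not the eventual patch):

```python
class TokenBucket:
    """Throttle submitted read bytes to roughly a target bandwidth."""

    def __init__(self, rate_bytes_per_sec, clock):
        self.rate = rate_bytes_per_sec
        self.clock = clock        # injectable time source, for testing
        self.tokens = 0.0
        self.last = clock()

    def try_consume(self, nbytes):
        """Return True if nbytes may be submitted now, else False."""
        now = self.clock()
        # accrue tokens for elapsed time, capped at one second's worth
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

The migration thread would call `try_consume(read_size)` before each asynchronous read submission and back off briefly when it returns False, keeping the disk from racing ahead of (or falling behind) the wire.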