On Fri, Mar 20, 2009 at 08:04:41PM -0500, Anthony Liguori wrote:
>> - If there is no event for the iothread to process, TCG will throttle
>> unnecessarily (I can't see how that could happen, but it is a
>> possibility) until some event breaks it out of select(), thus
>> increasing the generation counter.
> Right, you have to couple this with a signal sent from the TCG thread to
> the io thread. And yeah, signals are not 100% reliable.
>> - It's possible to miss signals (again, I can't see it happening in
>> the scheme you suggest), but...
>> Also note there is no need to be completely fair. It is fine to
>> eventually be unfair.
>> Do you worry about interaction between the two locks? We can document
>> the lock ordering.
> I need to think about this a bit more. I understand the idea behind
> qemu_signal_lock() a little better now.
> Since we know that the IO thread is trying to run in TCG (because we
> sent a signal to it), I wonder if we can use that as an indicator that
> we have to let the IO thread run for a bit.
> BTW, your patches lack commit messages and Signed-off-bys. Are you not
> ready for having them committed? I know that we still have to figure
> out Windows support but do you know of any other show stoppers?
There was a significant (25%, IIRC) reduction in iperf performance. This
is sort of expected, since there are no optimizations at all (we should
collapse the signals sent to the TCG context, for one). But my thinking is
to merge the iothread (along the lines of this patchset), stabilize it,
and then optimize. How about that?
> Have you thought about how this is going to affect kvm-userspace?
Oops, no. But I can be held accountable for the kvm-userspace iothread
until it can be fully replaced by upstream.
> Do you think we can eliminate the io threading code in kvm-userspace
> after this goes in?
Not immediately; we need to generalize some of the changes introduced
by the patchset and merge the remaining logic of kvm-userspace's
iothread.