> Patai, I asked a similar question about a week ago, inquiring about
> efficient buffers for audio. I got a good response:
Well, the general lesson seems to be that stream fusion or something
analogous is the way to go. In this particular case I don't need any
fancy processing, and I have no feedback loops. All my mixing routine
does is compute a weighted sum of samples, where the weights come from
volume and panning, and stretch the samples in time as dictated by
their frequency. The song is flattened into a series of play states
first, so all the information required for mixing is readily available.
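To make the idea concrete, here is a minimal sketch of such a mixing step. All the names (PlayState, mixFrame, readSample, advance) and the field layout are my own invention for illustration, not anything from an actual mixer; a real one would interpolate between samples rather than doing nearest-neighbour lookup.

```haskell
-- One flattened play state: the sample data plus the weights
-- (volume, panning) and a frequency-derived step for time stretching.
-- All names here are hypothetical.
data PlayState = PlayState
  { samples :: [Double]  -- source sample data
  , volume  :: Double    -- gain in 0..1
  , panning :: Double    -- 0 = hard left, 1 = hard right
  , phase   :: Double    -- current (fractional) read position
  , step    :: Double    -- position increment per output frame
  }

-- Read one sample at the current position; nearest-neighbour lookup
-- keeps the sketch simple.
readSample :: PlayState -> Double
readSample ps = case drop (floor (phase ps)) (samples ps) of
  (x:_) -> x
  []    -> 0

-- Mix one stereo frame: the weighted sum over all active play states.
mixFrame :: [PlayState] -> (Double, Double)
mixFrame states = (sum (map left states), sum (map right states))
  where
    left  ps = readSample ps * volume ps * (1 - panning ps)
    right ps = readSample ps * volume ps * panning ps

-- Advance each play state by its frequency-dictated step,
-- which is what stretches the sample in time.
advance :: PlayState -> PlayState
advance ps = ps { phase = phase ps + step ps }
```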
I could try rewriting the mixer to return an iterative generator
function instead, which could be passed to an appropriate sound
interface. The latter would have to be written once and for all.
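One hedged way to picture that generator interface: the mixer hands back a step function that yields the next frame plus a new generator, and the sound interface simply pulls from it in a loop. The names below (Gen, runGen, countGen) are purely illustrative, not a real audio API.

```haskell
-- A generator is a step function: pulling it yields one value and
-- the generator to use for the next pull. (Names are hypothetical.)
newtype Gen = Gen { next :: (Double, Gen) }

-- What a sound interface would do: pull n values from a generator.
runGen :: Int -> Gen -> [Double]
runGen 0 _ = []
runGen n g = let (x, g') = next g in x : runGen (n - 1) g'

-- A stand-in for the mixer's output: a simple counting generator.
countGen :: Double -> Gen
countGen x = Gen (x, countGen (x + 1))
```

The sound interface only ever sees `Gen`, so it can indeed be written once and reused with any mixer that produces one.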
Haskell-Cafe mailing list