
Heinrich Apfelmus <apfelmus <at> quantentunnel.de> writes:
>
> Will Ness wrote:
> > But I get the impression that GHC isn't working through equational
> > reasoning?..
> > I see all this talk about thunks etc.
>
> Sure it does. Concerning the thunks, they're part of the implementation
> of the reduction model (call-by-need, a.k.a. lazy evaluation).
At runtime? I meant to eliminate as much calculation as possible, pre-runtime.
I would expect the best efforts of the best minds to go into searching for ways
to eliminate computations altogether, instead of ways to perform them better.
> >> Concerning the sieves, there is a fundamental difference between the
> >> imperative sieve and the functional sieves, regardless of whether the
> >> latter start at p or p^2 or use a priority queue. [...]
> >
> > We can directly jump to the next multiple too, it is called (+). :) :)
> >
> > But seriously, the real issue is that we have to merge the produced
> > streams of multiples, while the mutable-storage code works on the same one,
> > so its "merging cost" is zero.
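To make the "merge the produced streams" point concrete, here is a minimal
sketch (my own illustration, not code from this thread; `merge`, `multiples`,
and the hard-coded prime list [2,3,5,7] are assumptions):

```haskell
-- Each prime contributes an ordered, infinite stream of its multiples,
-- starting at p^2; the composites are the ordered union of those streams.
merge :: Ord a => [a] -> [a] -> [a]
merge xs@(x:xt) ys@(y:yt) = case compare x y of
    LT -> x : merge xt ys
    EQ -> x : merge xt yt   -- e.g. 36 arrives from both 2 and 3; keep one
    GT -> y : merge xs yt
merge xs [] = xs
merge [] ys = ys

multiples :: Int -> [Int]
multiples p = [p*p, p*p + p ..]

-- Composites contributed by the first few primes only:
composites :: [Int]
composites = foldr (\p r -> merge (multiples p) r) [] [2,3,5,7]
```

Every element of the result pays a comparison at each merge node it passes
through; that per-element cost is the "merging cost" being discussed.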
>
> Not quite, I think there are two things going on:
>
> 1. In contrast to the standard imperative version, we are implementing
> an online algorithm that produces arbitrarily many primes on demand.
> Any imperative online version will need to think about storage for
> pending prime filters as well.
True.
> 2. Even the imperative algorithm can be interpreted as "merging" arrays,
> just that the composites are magically merged at the right place (at
> distance p from each other) because arrays offer O(1) "jumps".
i.e. with a merging cost of zero. :)
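For contrast, the imperative version's O(1) "jumps" can be sketched in Haskell
itself with a mutable unboxed array (my illustration; the name `sieveUpTo` is
made up, not code from the thread):

```haskell
import Control.Monad (forM_, when)
import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
import Data.Array.Unboxed (UArray, assocs)

-- Crossing off each multiple is a single O(1) array write at distance p,
-- so the "merging" of the streams of multiples happens implicitly in the
-- shared array, on the final sweep.
sieveUpTo :: Int -> [Int]
sieveUpTo n = [i | (i, True) <- assocs flags]
  where
    flags :: UArray Int Bool
    flags = runSTUArray $ do
        arr <- newArray (2, n) True
        forM_ [2 .. floorSqrt n] $ \p -> do
            isPrime <- readArray arr p
            when isPrime $
                forM_ [p*p, p*p + p .. n] $ \m ->
                    writeArray arr m False   -- O(1) jump to next multiple
        return arr
    floorSqrt = floor . sqrt . (fromIntegral :: Int -> Double)
```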
> In contrast, the functional version needs O(log something) time to
> calculate where to put the next composite.
When thinking in terms of the finally produced sequence, yes. We have to
produce numbers one by one and take care of their ordering ourselves; _they_
just /throw/ the numbers at the shared canvas and let _it_ take care of
ordering them automagically, _later_, on the final sweep through. ISWYM.
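One way to keep that per-composite cost logarithmic is to fold the merges in a
balanced tree rather than a linear chain. A rough sketch of the idea, for a
finite list of streams only (my illustration; the actual tree-merging code
handles infinitely many streams, which this does not):

```haskell
-- Pair the ordered streams up and merge level by level, so each element
-- passes through O(log n) merge nodes instead of O(n).
merge :: Ord a => [a] -> [a] -> [a]
merge xs@(x:xt) ys@(y:yt)
    | x <= y    = x : merge xt ys
    | otherwise = y : merge xs yt
merge xs [] = xs
merge [] ys = ys

mergeAll :: Ord a => [[a]] -> [a]
mergeAll []   = []
mergeAll [xs] = xs
mergeAll xss  = mergeAll (pairUp xss)
  where
    pairUp (a:b:rest) = merge a b : pairUp rest
    pairUp rest       = rest
```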
>
> > If you could take a look at the tree-merging primes and why it uses
> > too much memory, it would be great.
>
> Fortunately, Daniel already took a detailed look. :)
Yes, he really came through! He finally found, and fixed, the space leak. It
was hiding in
  mergeSP_BAD (a,b) ~(c,d) = let (bc,b') = spMerge b c
                             in (a ++ bc, merge b' d)
     where
      spMerge :: (Ord a) => [a] -> [a] -> ([a],[a])
      spMerge a@(x:xs) b@(y:ys) = case compare x y of
              LT -> (x:c,d) where (c,d) = spMerge xs b
              EQ -> (x:c,d) where (c,d) = spMerge xs ys
              GT -> (y:c,d) where (c,d) = spMerge a  ys
      spMerge a [] = ([] ,a)
      spMerge [] b = ([] ,b)
which really ought to have been
  mergeSP (a,b) ~(c,d) = let ~(bc,bd) = spMerge b c d
                         in (a ++ bc, bd)
     where
      spMerge u [] d = ([], merge u d)
      spMerge u@(x:xs) w@(y:ys) d = case compare x y of
              LT -> spCons x $ spMerge xs w  d
              EQ -> spCons x $ spMerge xs ys d
              GT -> spCons y $ spMerge u  ys d
      spCons x ~(a,b) = (x:a,b)
Can you spot the difference? :) :)
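For completeness, the fixed version runs on its own once an ordinary ordered
`merge` is in scope (the code above assumes one; this stand-in is mine):

```haskell
merge :: Ord a => [a] -> [a] -> [a]
merge xs@(x:xt) ys@(y:yt) = case compare x y of
    LT -> x : merge xt ys
    EQ -> x : merge xt yt
    GT -> y : merge xs yt
merge xs [] = xs
merge [] ys = ys

-- The fixed mergeSP: (known part, pending part) pairs are merged, and
-- spCons makes each element of the known part available before the rest
-- of the merge is forced.
mergeSP :: Ord a => ([a],[a]) -> ([a],[a]) -> ([a],[a])
mergeSP (a,b) ~(c,d) = let ~(bc,bd) = spMerge b c d
                       in (a ++ bc, bd)
  where
    spMerge u [] d = ([], merge u d)
    spMerge u@(x:xs) w@(y:ys) d = case compare x y of
        LT -> spCons x $ spMerge xs w  d
        EQ -> spCons x $ spMerge xs ys d
        GT -> spCons y $ spMerge u  ys d
    spCons x ~(p,q) = (x:p,q)
```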
> Aww, why remove the cutesy name? The VIPs will be angry for being ignored!
It runs faster on plain pairs, and in less memory, equals for equals. For some
reason. :)
_______________________________________________
Haskell-Cafe mailing list
haskell-cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

