On 8/11/07, Hugh Perkins <hughperkins@xxxxxxxxx> wrote:
> > - parallelism must be quite coarse to offset overheads
> > (which I think is the problem with expecting things like map and fold
> > to parallelised automagically; they're just too small grained for it to
> > be worthwhile)
> Someone else said that. I don't understand what you mean.
There are many papers about this in the parallel logic programming
literature; the issue is usually called the "granularity" problem
(note that "embarrassingly parallel", by contrast, describes work
that splits up trivially). Creating a thread, or even just scheduling
a chunk of work for evaluation, has packaging-up costs,
synchronization costs, and so on. It is all too easy for these costs
to outweigh the work to be done, so that by parallelizing your code
you actually make it run slower.
So, if you want to parallelize "map f xs", then unless f is *really*
expensive, you'll only see a benefit if you can break xs into chunks
of, say, 10^3 elements or more, and more like 10^5 or more for
typical 'f's. Some tasks, such as Monte Carlo integration, are very
amenable to this kind of chunking, but most tasks are not.
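The chunking idea above can be sketched in Haskell. In practice one
would reach for `parListChunk` from the `parallel` package's
Control.Parallel.Strategies; the base-only sketch below (function
names are illustrative, not a standard API) shows just the
decomposition step that coarsens the work:

```haskell
-- Split a list into chunks of at most n elements.
chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = take n xs : chunksOf n (drop n xs)

-- A chunked map: in a real parallel version each chunk would be
-- evaluated as one spark (e.g. via `parListChunk`); here the chunks
-- are mapped sequentially, showing only how per-element work is
-- batched into coarser units.
chunkedMap :: Int -> (a -> b) -> [a] -> [b]
chunkedMap n f = concatMap (map f) . chunksOf n
```

With chunks of 10^3 elements or more, the scheduling and
synchronization cost is paid once per chunk rather than once per
element, which is what makes the parallelism worthwhile.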
Dr Thomas Conway
Silence is the perfectest herald of joy:
I were but little happy, if I could say how much.
Haskell-Cafe mailing list