In this short post, I want to show you how you can benefit from Scala parallel collections in your application and under which conditions it makes sense to use them.
Parallel collections were introduced in the Scala 2.9 release. They are built on the same abstractions and provide the same interfaces as the existing collection implementations, and they follow the model proposed by Doug Lea in his Fork/Join framework [1]. Ease of use is one of their main advantages: anyone familiar with parallel programming knows that converting a sequential program into a parallel one is usually quite difficult, because each approach comes with its own paradigms, algorithms and data structures.
Scala lets you invoke a method called par on a collection to get a parallel collection. This parallel collection then employs all the available cores on the target system. You can easily try this out in the REPL:
scala> List(1,2,3,4,5).par.filter { _ % 2 == 0 }
res1: scala.collection.parallel.immutable.ParSeq[Int] = ParVector(2, 4)
This is a simple filter on a list. The only new thing I added is ‘par’ after the list definition.
Now let’s compare the sequential and parallel versions and see how a parallel collection can help us. Consider that we have a Data class with some data as follows:
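A minimal sketch of such a class could look like the snippet below; the two Int fields a and b are the ones used by the function f later on, while the case class form is an assumption:

// A small data holder with two integer fields
case class Data(a: Int, b: Int)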
and a companion object to construct Data instances:
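A sketch of the companion object could be the following; the factory method name createList and the way the fields are populated are assumptions, it only needs to produce a list of a given size:

object Data {
  // Hypothetical factory: builds a list of 'size' Data instances
  def createList(size: Int): List[Data] =
    (1 to size).map(i => Data(i, i * 2)).toList
}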
Now we define a Computation trait to perform some operations on the aforementioned Data class:
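A minimal version of the trait could look like this; the exact signature is an assumption, it simply takes the list and the function f to apply:

trait Computation {
  // Apply f to every element and reduce the results to a single Int
  def compute(list: List[Data], f: Data => Int): Int
}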
This trait has two implementations:
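They could look roughly as follows; the object names are assumptions, and the call to par is the only real difference between the two:

object SequentialComputation extends Computation {
  def compute(list: List[Data], f: Data => Int): Int =
    list.map(f).max
}

object ParallelComputation extends Computation {
  def compute(list: List[Data], f: Data => Int): Int =
    list.par.map(f).max   // par turns the List into a parallel collection
}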
In each implementation, the compute method maps the function f over the list and then takes the maximum element. The only difference between the two implementations is the use of the ‘par’ method. Now let’s run and time each implementation:
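A sketch of the runner could look like the following; the 10 ms sleep in f (to simulate a heavy computation) and the simple wall-clock timing are assumptions, while the object name matches the SBT output below:

object ParallelExperiment {

  // Simulates a heavy computation; the 10 ms sleep is an assumed value
  def f(data: Data): Int = {
    Thread.sleep(10)
    data.a + data.b
  }

  // Very simple wall-clock timing, just for illustration
  def time(label: String, computation: Computation, list: List[Data]): Unit = {
    val start = System.currentTimeMillis()
    computation.compute(list, f)
    println(label + " " + (System.currentTimeMillis() - start) + " (ms)")
  }

  def main(args: Array[String]): Unit = {
    val list = Data.createList(args(0).toInt)
    time("Sequential", SequentialComputation, list)
    time("Parallel", ParallelComputation, list)
  }
}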
Running the code with SBT on a machine with 4 CPU cores:
> run 3
[info] Running com.jayway.parallel.ParallelExperiment 3
Sequential 44 (ms)
Parallel 14 (ms)

> run 100
[info] Running com.jayway.parallel.ParallelExperiment 100
Sequential 1020 (ms)
Parallel 268 (ms)
As expected, we gained a speedup with the parallel version. Results for a couple of runs with different list sizes are depicted in the following figure:
Now let’s change function f to the following:
def f(data: Data): Int = { data.a + data.b }
I removed the Thread.sleep so the method returns immediately. Now let’s have a look at the results for different runs again:
Interestingly, this time the result is not good at all! It is actually the other way around: the sequential implementation has a lower running time for the first 5 runs. This is because using a parallel collection has some overhead for distributing (fork) and gathering (join) the data between cores. Thus one can conclude that with heavy computations (which is the case in most applications), parallel collections can bring a great performance improvement. But there are scenarios (like the second one we discussed) where the sequential collection is faster :)
For a better understanding of the Scala parallel collections internals, please watch Aleksandar Prokopec’s presentation from Scala Days 2010. If you are interested in knowing more, you can read [2].
References:
[1]: Doug Lea, “A Java Fork/Join Framework”, 2000.
[2]: Aleksandar Prokopec, Tiark Rompf, Phil Bagwell, Martin Odersky, “A Generic Parallel Collection Framework”, Euro-Par 2011.