
Status of DPH Benchmarks

This page gives an overview of how well the benchmarks in the examples/ directory of package dph are currently working.

Overview of the benchmark programs

SumSq
Computes the sum of the squares from 1 to N using Int. There are two variants of this program: (1) "primitives" is directly coded against the array primitives from package dph and (2) "vectorised" is a high-level DPH program transformed by GHC's vectoriser. As a reference implementation, we have a sequential C program denoted by "ref C".
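
For illustration, the vectorised variant boils down to a one-line computation over parallel arrays. The following is only a rough sketch, not the code in examples/: the module layout and combinator names (sumP, mapP, enumFromToP) follow the dph package documentation and may differ from the exact version benchmarked here, and it needs a DPH-enabled GHC with -fvectorise.

{{{#!haskell
{-# LANGUAGE ParallelArrays #-}
{-# OPTIONS_GHC -fvectorise #-}
-- Rough sketch of the vectorised SumSq kernel (not the benchmark source).
module SumSqVect (sumSq) where

import Data.Array.Parallel
import Data.Array.Parallel.Prelude.Int
import qualified Prelude

-- Sum of the squares of 1..n, expressed with parallel-array combinators.
sumSq :: Int -> Int
sumSq n = sumP (mapP (\x -> x * x) (enumFromToP 1 n))
}}}
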
DotP
Computes the dot product of two vectors of Doubles. There are two variants of this program: (1) "primitives" is directly coded against the array primitives from package dph and (2) "vectorised" is a high-level DPH program transformed by GHC's vectoriser. In addition to these two DPH variants of the dot product, we also have two non-DPH reference implementations: (a) "ref Haskell" is a Haskell program using imperative, unboxed arrays and (b) "ref C" is a C implementation using pthreads.
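
For illustration, the vectorised variant is essentially the standard DPH dot-product example. Again a rough sketch only, with module names taken from the dph documentation; the code in examples/ may differ in detail:

{{{#!haskell
{-# LANGUAGE ParallelArrays, ParallelListComp #-}
{-# OPTIONS_GHC -fvectorise #-}
-- Rough sketch of the vectorised dot product (not the benchmark source).
module DotPVect (dotp) where

import Data.Array.Parallel
import Data.Array.Parallel.Prelude.Double
import qualified Prelude

dotp :: [:Double:] -> [:Double:] -> Double
dotp xs ys = sumP [: x * y | x <- xs | y <- ys :]
}}}
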
SMVM
Multiplies a dense vector with a sparse matrix represented in the compressed sparse row format (CSR). There are two variants of this program: (1) "primitives" is directly coded against the array primitives from package dph and (2) "vectorised" is a high-level DPH program transformed by GHC's vectoriser. As a reference implementation, we have a sequential C program denoted by "ref C".
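
For illustration, the vectorised variant uses the usual DPH formulation of CSR: the matrix is a nested parallel array with one subarray of (column index, value) pairs per row. A rough sketch only, with the same caveats as the sketches above:

{{{#!haskell
{-# LANGUAGE ParallelArrays #-}
{-# OPTIONS_GHC -fvectorise #-}
-- Rough sketch of the vectorised sparse-matrix/dense-vector product
-- (not the benchmark source).
module SMVMVect (smvm) where

import Data.Array.Parallel
import Data.Array.Parallel.Prelude.Double
import qualified Prelude

-- The matrix is one subarray of (column index, value) pairs per row (CSR);
-- each row is multiplied with the dense vector v.
smvm :: [:[:(Prelude.Int, Double):]:] -> [:Double:] -> [:Double:]
smvm m v = [: sumP [: x * (v !: i) | (i, x) <- row :] | row <- m :]
}}}
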
Primes
The Sieve of Eratosthenes using parallel writes into a sieve structure represented as an array of Bools. We currently don't have a proper parallel implementation of this benchmark, as we are missing a parallel version of default backpermute. The problem is that we need to make the representation of parallel arrays of Bool dependent on whether the hardware supports atomic writes of bytes. We still need to investigate whether any of the architectures relevant for DPH actually have trouble with atomic writes of bytes (i.e., Word8).
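
To illustrate just the sieve structure this benchmark is about (an array of Bools filled by bulk writes), here is a hypothetical, purely sequential sketch in plain Haskell. It is not the DPH benchmark code; the parallel version would have to perform these writes concurrently, which is where atomic byte writes and the parallel default backpermute come in:

{{{#!haskell
-- Hypothetical sequential illustration of the sieve as an array of Bools;
-- not the DPH benchmark code.
module PrimesSketch (primesUpTo) where

import Data.Array (accumArray, assocs)

primesUpTo :: Int -> [Int]
primesUpTo n = [i | (i, False) <- assocs sieve, i >= 2]
  where
    -- True marks composites; the parallel version would perform these
    -- writes concurrently into a shared Bool (byte-sized) array.
    sieve = accumArray (\_ marked -> marked) False (0, n)
              [ (m, True) | i <- [2 .. isqrt n], m <- [i * i, i * i + i .. n] ]
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)
}}}
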
Quickhull
Given a set of points in a plane, computes the convex hull, i.e., the sequence of points that encloses all points in the set. There is only a vectorised version. It currently doesn't work due to bugs in dph-par, and it still needs a wrapper that uses the new benchmark framework to generate test input and time execution.
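
For illustration, the recursion that the vectorised benchmark expresses with nested parallel arrays looks roughly as follows. This is a hypothetical sequential sketch on ordinary lists, not the benchmark code:

{{{#!haskell
-- Hypothetical sequential sketch of QuickHull on ordinary lists; the DPH
-- benchmark expresses the same recursion over nested parallel arrays.
module QuickhullSketch (quickHull) where

import Data.List (maximumBy)
import Data.Ord  (comparing)

type Point = (Double, Double)

-- Signed area: positive iff p lies to the left of the line from p1 to p2.
cross :: Point -> Point -> Point -> Double
cross (x1, y1) (x2, y2) (x, y) = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)

-- Hull points on the left of the line from p1 to p2, starting with p1.
hsplit :: [Point] -> Point -> Point -> [Point]
hsplit points p1 p2
  | null above = [p1]
  | otherwise  = hsplit above p1 pm ++ hsplit above pm p2
  where
    above = [p | p <- points, cross p1 p2 p > 0]
    pm    = maximumBy (comparing (cross p1 p2)) above   -- farthest point

quickHull :: [Point] -> [Point]
quickHull []     = []
quickHull points = hsplit points pmin pmax ++ hsplit points pmax pmin
  where
    pmin = minimum points   -- lexicographically smallest (leftmost) point
    pmax = maximum points   -- lexicographically largest (rightmost) point
}}}
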
Quicksort
FIXME
ConComp
Implementation of the Awerbuch-Shiloach and Hybrid algorithms for finding connected components in undirected graphs. There is only a version directly coded against the array primitives. It still needs to be adapted to the new benchmark framework.
BarnesHut
This benchmark implements the Barnes-Hut algorithm to solve the n-body problem in two dimensions. It currently won't compile with vectorisation due to excessive inlining of dictionaries.

Execution on LimitingFactor (2x Quad-Core Xeon)

Hardware spec: 2x 3.0GHz Quad-Core Intel Xeon 5400; 12MB (2x6MB) on-die L2 cache per processor; independent 1.6GHz frontside bus per processor; 800MHz DDR2 FB-DIMM; 256-bit-wide memory architecture; Mac OS X Server 10.5.6

Software spec: GHC 6.11 (from first week of Mar 09); gcc 4.0.1

||Program||Problem size||sequential||P=1||P=2||P=4||P=8||
||SumSq, primitives||10M||22||40||20||10||5||
||SumSq, vectorised||10M||22||40||20||10||5||
||SumSq, ref C||10M||9|| || || || ||
||DotP, primitives||100M elements||823/823/824||812/813/815||408/408/409||220/223/227||210/214/221||
||DotP, vectorised||100M elements||823/824/824||814/816/818||412/417/421||222/225/227||227/232/238||
||DotP, ref Haskell||100M elements|| ||810||437||221||209||
||DotP, ref C||100M elements|| ||458||235||210||210||
||SMVM, primitives||10kx10k @ density 0.1||119/119||111/111||78/78||36/36||21/21||
||SMVM, vectorised||10kx10k @ density 0.1||196/196||1220/1220||847/847||515/515||424/424||
||SMVM, ref C||10kx10k @ density 0.1||35|| || || || ||
||SMVM, primitives||100kx100k @ density 0.001||132/132||135/135||81/81||91/91||48/48||
||SMVM, vectorised||100kx100k @ density 0.001||214/214||1259/1259||899/899||556/556||429/429||
||SMVM, ref C||100kx100k @ density 0.001||46|| || || || ||

All results are in milliseconds; entries of the form best/average/worst (or best/worst) report the wall-clock execution time over three runs. The column marked "sequential" reports times when linked against dph-seq, and the columns marked "P=n" report times when linked against dph-par and run in parallel using the specified number of parallel OS threads.

Comments regarding SumSq

The versions compiled against dph-par are a factor of two slower than the ones linked against dph-seq.

However, we found a number of general problems while working on this example:

  • We need an extra -funfolding-use-threshold flag. We don't really want users to have to worry about that.
  • mapP (\x -> x * x) xs essentially turns into zipWithU (*) xs xs, which doesn't fuse with enumFromTo anymore. We have a rewrite rule in the library to fix that (see the sketch after this list), but that rule isn't general enough. We really would rather not vectorise the lambda abstraction at all.
  • enumFromTo doesn't fuse due to excessive dictionaries in the unfolding of zipWithUP.
  • Finally, to achieve the current result, we needed an analysis that avoids vectorising subcomputations that don't need to be vectorised and that fusion would otherwise have to turn back into their original form (in this case, the lambda abstraction \x -> x * x). This is currently implemented in a rather limited and ad hoc way; it should be implemented on the basis of a more general analysis.
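
The rewrite rule mentioned above is of roughly the following shape. This is only a sketch: the actual rule in the library works on the flat-array combinators (zipWithU and a corresponding map) and may differ in detail; the version below uses the analogous list functions so that it stands alone.

{{{#!haskell
-- Sketch of the kind of rule involved: a zipWith applied twice to the same
-- array is rewritten back into a map, so the result can take part in fusion
-- again. This is an illustration, not the actual rule from the dph library.
module SameArgRule () where

{-# RULES
"zipWith/same-argument" forall f xs.
  zipWith f xs xs = map (\x -> f x x)
  #-}
}}}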

Comments regarding DotP

Performance is memory bound, and hence the benchmark stops scaling once the memory bus is saturated. As a consequence, the wall-clock execution times of the Haskell programs and of the C reference implementation are the same when all available parallelism is exploited. The parallel DPH library delivers the same single-core performance as the sequential one in this benchmark.

Comments regarding SMVM

There seems to be a fusion problem in DotP with dph-par (even if the version of zipWithSUP that uses splitSD/joinSD is used); hence the much lower runtime for "P=1" than for "sequential". The vectorised version runs out of memory, maybe because we haven't solved the bpermute problem yet.

Execution on greyarea (1x UltraSPARC T2)

Hardware spec: 1x 1.4GHz UltraSPARC T2; 8 cores per processor with 8 hardware threads per core; 4MB on-die L2 cache per processor; FB-DIMM; Solaris 5.10

Software spec: GHC 6.11 (from first week of Mar 09) with gcc 4.1.2 for Haskell code; gccfss 4.0.4 (gcc front-end with Sun compiler backend) for C code, as it generates code for numeric computations that is more than twice as fast as that of vanilla gcc

||Program||Problem size||sequential||P=1||P=2||P=4||P=8||P=16||P=32||P=64||
||SumSq, primitives||10M||212/212||254/254||127/127||64/64||36/36||25/25||17/17||10/10||
||SumSq, vectorised||10M||212/212||254/254||128/128||64/64||32/32||25/25||17/17||10/10||
||SumSq, ref C||10M||120|| || || || || || || ||
||DotP, primitives||100M elements||937/937||934/934||474/474||238/238||120/120||65/65||38/38||28/28||
||DotP, vectorised||100M elements||937/937||942/942||471/471||240/240||118/118||65/65||43/43||29/29||
||DotP, ref Haskell||100M elements|| ||934||467||238||117||61||65||36||
||DotP, ref C||100M elements|| ||554||277||142||72||37||22||20||
||SMVM, primitives||10kx10k @ density 0.1||1102/1102||1112/1112||561/561||285/285||150/150||82/82||63/70||54/100||
||SMVM, vectorised||10kx10k @ density 0.1||2312/2312||15960/15960||8192/8192||4188/4188||2362/2362||1538/1538||1047/1047||950/950||
||SMVM, ref C||10kx10k @ density 0.1||580|| || || || || || || ||
||SMVM, primitives||100kx100k @ density 0.001||1112/1112||1299/1299||684/684||653/653||368/368||294/294||197/197||160/160||
||SMVM, vectorised||100kx100k @ density 0.001||2345/2345||16110/16110||8553/8553||4400/4400||2572/2572||1645/1645||1224/1224||1005/1005||
||SMVM, ref C||100kx100k @ density 0.001||600|| || || || || || || ||

All results are in milliseconds, and the pairs report best/worst execution time (wall clock) of three runs. The column marked "sequential" reports times when linked against dph-seq, and the columns marked "P=n" report times when linked against dph-par and run in parallel using the specified number of parallel OS threads.

Comments regarding SumSq

The primitives scale nicely, but something is deeply wrong (lack of fusion, perhaps) with the vectorised version.

Comments regarding DotP

The benchmark scales nicely up to the maximum number of hardware threads. Memory latency is largely covered by excess parallelism. It is unclear why the Haskell reference implementation "ref Haskell" falls off at 32 and 64 threads. See also a comparison graph between LimitingFactor and greyarea.

Comments regarding SMVM

As on LimitingFactor, but the benchmark scales much more nicely and keeps improving up to four threads per core. This suggests that memory bandwidth is again a critical factor in this benchmark (which fits well with earlier observations on other architectures). Despite the fusion problem with dph-par, the parallel Haskell program, using all 8 cores, still ends up three times faster than the sequential C program.