Opened 2 years ago

Last modified 7 months ago

#5793 new task

make nofib awesome — at Version 24

Reported by: dterei Owned by: dterei
Priority: normal Milestone:
Component: NoFib benchmark suite Version:
Keywords: Cc: ndmitchell@…, rwbarton@…
Operating System: Unknown/Multiple Architecture: Unknown/Multiple
Type of failure: None/Unknown Difficulty: Unknown
Test Case: Blocked By:
Blocking: #5794 Related Tickets:

Description (last modified by dterei)

Nofib is the standard tool GHC developers use to benchmark changes to the compiler. Its overall design is OK, but it has had little love and care for many years and has bitrotted to the point where it isn't useful in a lot of situations.

This task is about making nofib useful again.

The breakdown for this is something like:

  1. Think about, and maybe fix, the nofib framework design. It has 'ways', which I think correspond to compilation method, but more in the sense of 'dynamic' vs 'static'; it seems ways may not suit distinguishing 'fasm' vs 'fllvm'. There is also the concept of 'modes', which corresponds to different benchmark inputs: 'normal' and 'slow' give different run-times. At the moment there is no easy way to select which benchmark groups to run, so we may want to change that. I guess we should just decide what knobs we want to be able to easily tweak, and see how well the current design allows that (see the example invocation after this list).

Note there is a Shake build system attached that does a lot of this (done by Neil Mitchell!). An explanation of it can be found here: http://neilmitchell.blogspot.com/2013/02/a-nofib-build-system-using-shake.html

The design discussion of it is mostly lost, as it was done over private email, sorry.

  2. Fix up the run-times of the benchmarks so that they are significant. This might be best done by changing the way we run benchmarks and collect results, to make sure they are meaningful.

E.g., there is lots of great discussion and links to papers in this thread:

http://www.haskell.org/pipermail/ghc-devs/2013-February/000307.html

  3. The above task is about fixing the 'normal' mode, but we may want to fix up 'slow' as well, and perhaps add a 'fast' mode where benchmarks run in around 1 second.
  4. Maybe add more benchmarks to the suite (text, bytestring, vector, performance regressions from the GHC testsuite, ...).
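
A rough sketch of the kind of invocation the current Makefile-based driver supports, for reference when deciding on the knobs (the variable names here are from memory, so treat them as assumptions rather than a definitive interface):

  make boot
  make mode=slow EXTRA_HC_OPTS=-fllvm NoFibRuns=9 2>&1 | tee nofib-log

The open question is whether 'ways' (e.g. -fasm vs -fllvm), 'modes' (input size) and benchmark-group selection should all be first-class knobs like these, and how they should compose.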

Change History (25)

comment:1 Changed 2 years ago by tibbe

I assume you mean nofib, not fibon.

comment:2 Changed 2 years ago by dterei

  • Blocking 5794 added

comment:3 Changed 2 years ago by dterei

Pushed a whole bunch of fixes so that fibon at least all compiles now. I still need to do some work on the framework and make sure all the tests have significant run-times.

comment:4 Changed 2 years ago by dterei

  • Summary changed from make fibon not suck to make nofib not suck

Yes, I meant nofib :). With fibon, nofib, nobench... it gets a little jumbled in my head at times.

comment:5 Changed 2 years ago by dterei

  • Description modified (diff)

comment:6 Changed 2 years ago by dterei

comment:7 Changed 2 years ago by simonmar

  • Difficulty set to Unknown

Here are a few notes I've been collecting about what we should do with nofib.

  • Build system:
    • get rid of runstdtest
    • Make it independent of a GHC build
    • Maybe use Shake instead of make (but retain the ability to compile and run individual benchmarks, with custom options etc.)
  • beef up nofib-analyse:
    • generate gnuplot graphs directly
    • measure std.dev. better (omit results based on std.dev. rather than the < 0.2s cutoff? see the sketch at the end of this comment)
  • benchmarks themselves:
    • we need a suite of ~10 large real-world programs for publishing results in papers.
    • generally: add new benchmarks, retire old ones

If we modify the programs and inputs to run for longer, then we should do them all at once: we can't do this incrementally, because each change invalidates all the old logs and they have to be regenerated. Note that we also need the benchmarks to build with old compilers, so that we can run comparisons (or at least make sure we can get results even if some of the programs fail to compile).
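
As a concrete illustration of the std.dev. point above, here is a minimal sketch (not nofib-analyse's actual code) of dropping results based on relative standard deviation rather than an absolute 0.2s cutoff; the 5% threshold is only an example value:

  -- Mean and (population) standard deviation of a benchmark's run-times.
  mean :: [Double] -> Double
  mean xs = sum xs / fromIntegral (length xs)

  stdDev :: [Double] -> Double
  stdDev xs = sqrt (sum [(x - m) ^ (2 :: Int) | x <- xs] / fromIntegral (length xs))
    where m = mean xs

  -- Keep a benchmark's timings only if the spread is small relative to the
  -- mean, instead of requiring the mean itself to exceed a fixed 0.2s.
  reliable :: [Double] -> Bool
  reliable xs = not (null xs) && stdDev xs / mean xs < 0.05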

comment:8 follow-up: Changed 2 years ago by dterei

What do you think about using criterion, Simon?

comment:9 in reply to: ↑ 8 Changed 2 years ago by simonmar

Replying to dterei:

What do you think about using criterion, Simon?

I have thought about this from time to time. There might be a place for criterion here, but it won't be a complete replacement for the benchmark infrastructure, and it might be difficult to integrate it.

We measure lots of things that aren't runtime (allocations, residency, GC time, etc.), and we don't really want criterion messing up these figures. So we would have to measure these things from within the program itself. We have some new infrastructure for doing that (GHC.Stats), but I don't think it is quite up to the job yet.
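
To make that concrete, here is a minimal sketch of measuring allocation and GC time from within the benchmark itself, assuming the current GHC.Stats API (getGCStats); the program must be run with +RTS -T, and the exact field names may change as the API matures:

  import Control.Exception (evaluate)
  import GHC.Stats (GCStats(..), getGCStats)

  main :: IO ()
  main = do
    before <- getGCStats
    _ <- evaluate (sum [1 .. 1000000 :: Int])  -- stand-in for the real benchmark body
    after <- getGCStats
    putStrLn $ "bytes allocated: " ++ show (bytesAllocated after - bytesAllocated before)
    putStrLn $ "GCs:             " ++ show (numGcs after - numGcs before)
    putStrLn $ "mutator seconds: " ++ show (mutatorWallSeconds after - mutatorWallSeconds before)
    putStrLn $ "GC seconds:      " ++ show (gcWallSeconds after - gcWallSeconds before)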

Measurements of GC time are only meaningful over long runs, but criterion is good at measuring very quick things. IIRC criterion wants to do a large number of runs in order to get good stats, and that might make our runs take too long (perhaps I'm wrong here, but this was the case last time I tried criterion out).

comment:10 Changed 2 years ago by dterei

Of the list you gave, Simon, where do you feel it is best to start? I'm not really interested in implementing any of the build system items, as for my use cases make and an in-place GHC are fine. What is the issue with runstdtest?

I'd most like to just get the benchmarks into good order. How should we proceed here? I've fixed up the Fibon benchmarks so they all work again, so they might be useful. It also seems useful to pull in the benchmarks from, say, bytestring, text, attoparsec, and vector. If I start reworking the 'imaginary', 'spectral', and 'real' inputs to get more significant times, what should we be aiming for? Around 10 seconds a run seems a 'good' value to me. We also need to be careful that the benchmarks hit the GC.

comment:11 Changed 2 years ago by simonmar

I agree working on the benchmarks themselves should be the highest priority.

Some of the benchmarks aren't very amenable to running for longer - we end up just repeating the same task many times. I think for benchmarks where we can't come up with a suitable input that keeps the program busy for long enough, we should just put these in a separate category and use them for regression testing only. Measuring allocation still works reliably even for programs that run for a tiny amount of time.

I said "retire old benchmarks" but on seconds thoughts a better plan is to not throw anything away, just keep them all around as regression tests. Make an exception only for programs which are broken beyond repair, or are definitely not measuring anything worthwhile (I occasionally come across a program that has been failing immediately with an error, and somebody accepted the output as correct in 1997...).

I would still like to keep the microbenchmarks collected together. They are very useful for spot testing and debugging.

Keep an eye out for good candidates for a real-world benchmark suite. I'm thinking ~10 or so programs that have complex behaviour, preferably with multiple phases or multiple algorithms.

I think ~10s is perhaps slightly on the high side, but I don't feel too strongly about it. Currently it takes ~20min to run the real+spectral+imaginary suites; I think a good target to aim for is less than an hour, with the option of a longer run.

comment:12 Changed 2 years ago by simonpj

Yes, please keep imaginary and spectral! It doesn't matter at all if they have a tiny runtime... but spotting when their allocation jumps is a very useful signal that some optimisation has gone wonky, and it is far easier to narrow down there than in some giant program.

We should have substantial programs too, with significant runtime, but let's not lose the tiny ones!

comment:13 Changed 2 years ago by dterei

For the benchmarks we want to keep for regression testing, would everyone be happy moving them to a new folder? So some of the benchmarks in 'imaginary' and 'spectral' would be moved to a new folder called, say, 'regression'? I would prefer this: I agree that we shouldn't just throw benchmarks away, but I want a clear distinction between the use cases of individual benchmarks.

comment:14 Changed 2 years ago by simonpj

Aren't "imaginary" and "spectral" reasonable folder names already. They are all, without exception, regression tests. The one ones that can possibly be considered real benchmarks are in "real". In short, isn't the distinction you want to make made already?

comment:15 Changed 2 years ago by dterei

Maybe. I was under the impression 'imaginary' and 'spectral' were microbenchmarks. However, I guess the distinction between a microbenchmark and a regression benchmark is pretty minimal to non-existent, so sure, let's keep the folder structure the same and just make sure to document this somewhere.

comment:16 Changed 2 years ago by simonmar

The rationale for the naming is described in Will's paper http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.4124. The distinction between spectral and imaginary is mainly size: the imaginary programs are microbenchmarks, whereas spectral programs are algorithmic kernels - larger than microbenchmarks but not real programs. I think it's a useful distinction.

Perhaps we also want another distinction - programs that run for long enough that we can obtain reliable timing results. However, that's an orthogonal dimension, and we can't sensibly use both dimensions for organising the directory structure. So I suggest we just have a flag per benchmark that is true if it runs for long enough, and a simple way to run just those benchmarks.
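
Something along these lines, for example (entirely hypothetical names, just to illustrate the flag-per-benchmark idea for whatever driver we end up with):

  -- Hypothetical per-benchmark metadata; none of these names exist in nofib today.
  data Benchmark = Benchmark
    { benchName      :: String  -- e.g. "spectral/simple"
    , benchGroup     :: String  -- "imaginary", "spectral", "real", ...
    , reliableTiming :: Bool    -- True if it runs long enough to time reliably
    }

  -- "A simple way to run just those benchmarks": select the timing-worthy
  -- subset; everything else is still run, but only for allocation regressions.
  timingSuite :: [Benchmark] -> [Benchmark]
  timingSuite = filter reliableTiming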

comment:17 Changed 2 years ago by igloo

  • Milestone set to _|_

comment:18 Changed 2 years ago by NeilMitchell

  • Cc ndmitchell@… added

I might have a go at a Shake version of the Makefile soup that is currently in there.

Changed 2 years ago by dterei

Shake build system

comment:19 Changed 15 months ago by dterei

  • Summary changed from make nofib not suck to make nofib awesome

comment:20 Changed 15 months ago by dterei

  • Description modified (diff)

comment:21 Changed 15 months ago by dterei

  • Description modified (diff)

comment:22 Changed 15 months ago by morabbin

Not sure if this is the place to report this, but without

cabal install html

nofib-analyse won't compile, as it uses Text.Html. Where is the correct place to put this dependency?

comment:23 Changed 15 months ago by dterei

I put a note about the dependency in http://hackage.haskell.org/trac/ghc/wiki/Building/RunningNoFib and also in the README file. I don't think we should otherwise include the actual dependency, if that was your implication. Thanks morabbin!

comment:24 Changed 15 months ago by dterei

  • Description modified (diff)