Changes between Version 2 and Version 3 of Debugging/LowLevelProfiling/PAPI

Timestamp: Nov 3, 2009 11:43:53 AM
Author: simonmar

For some notes on installing PAPI on Linux, see [wiki:Debugging/LowLevelProfiling/PAPI/Installing].

= Measuring program performance using CPU events =

The GHC runtime has been extended to support the use of the [http://icl.cs.utk.edu/papi/ PAPI] library to count occurrences of CPU events such as cache misses and branch mispredictions. The PAPI extension counts events in the garbage collector and in the mutator separately, for more accurate pinpointing of performance problems.

This page describes how to compile the RTS with PAPI enabled and explains the RTS options for selecting CPU events. It also contains patches that collect CPU event information in nofib runs and allow the results to be compared using nofib-analyse. This is especially useful for measuring the effects of optimisations systematically across a whole range of programs.

= Status of the implementation =

GHC with PAPI support should compile on any platform where PAPI is installed. It should also be possible to monitor the cache-miss events of a GHC-compiled program.

At present, the monitoring of branch mispredictions and stalled cycles is specific to the AMD Opteron. For branch mispredictions, the portable PAPI API monitors only conditional jumps. We want to monitor all jumps, especially indirect jumps, which is why we use a native AMD PAPI counter. Curiously, the portable PAPI conditional-jump counter maps to the very native counter we are using, but since we cannot rely on that behaviour on other platforms, we use the native counter explicitly.

= Compiling and running programs with PAPI =

First of all, make sure that you have installed the [http://icl.cs.utk.edu/papi/ PAPI library].

Follow the instructions in [wiki:Building/Hacking] and add the following line to {{{build.mk}}} before compiling the RTS:
{{{
GhcRtsWithPapi = YES
}}}

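For context, a complete {{{mk/build.mk}}} for this purpose need not contain much more than that one line. A minimal sketch follows; the {{{BuildFlavour}}} line is an assumption, so pick whatever flavour you normally build with (see {{{mk/build.mk.sample}}}):

{{{
# mk/build.mk (sketch only)
# Any build flavour works; 'quick' is just an example.
BuildFlavour = quick

# Build the runtime system with PAPI event counting enabled.
GhcRtsWithPapi = YES
}}}
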
Now, to monitor and report level-1 cache misses, invoke a GHC-compiled program as follows:
{{{
./program +RTS -sstderr -a1 -RTS
}}}
The help screen lists the options for monitoring other events:
{{{
./program +RTS -h -RTS
}}}

= Using PAPI with the nofib benchmarking suite =

In order to use the nofib suite with PAPI, you have to apply the three patches at the bottom of this page.

 1. The first patch adds a PAPI flag to the perl testing script.
 2. The second patch adds a make argument to the nofib suite that enables the collection of PAPI numbers.
 3. The third patch makes nofib-analyse able to process the output produced with the second patch; the standard nofib-analyse cannot handle it.

These patches have not been submitted to HEAD (yet?) because they are not mature, but they are useful. The patch that most needs further work is probably the third one.

To collect statistics, just run make inside nofib as usual. For example, to collect the usual statistics together with cache misses, run: {{{make papi=1}}}.

= Work in progress =

The PAPI framework has been used to measure the effects of the [wiki:SemiTagging semi-tagging optimisation], in particular its effect on branch mispredictions. We are currently writing a paper and cleaning up the code for this optimisation.

= Resources =

 * [http://icl.cs.utk.edu/papi/ PAPI home page].
 * [http://developer.amd.com/article_print.jsp?id=90 An article introducing the use of CPU counters for performance measurement].
 * [http://developer.amd.com/articles.jsp?id=2&num=1 An article introducing AMD's CodeAnalyst]. It even has pipeline simulation, though I haven't tried it out yet.
 * [http://www.cs.mu.oz.au/~njn/pubs/cache-large-lazy2002.ps.gz The Cache Behaviour of Large Lazy Functional Programs on Stock Hardware].