Opened 8 years ago

Closed 7 months ago

Last modified 7 months ago

#910 closed feature request (fixed)

--make should have a -j flag for parallel building

Reported by: igloo Owned by:
Priority: normal Milestone:
Component: Compiler Version: 6.4.2
Keywords: Cc: bos, hackage.haskell.org@…, dterei, idhameed@…, mail@…, jan.stolarek@…, rrnewton@…, chetant@…, alex@…, danr@…, the.dead.shall.rise@…, tkn.akio@…
Operating System: Unknown/Multiple Architecture: Unknown/Multiple
Type of failure: None/Unknown Difficulty: Unknown
Test Case: N/A Blocked By: #8184, #8235
Blocking: Related Tickets:

Description

It should be possible to give --make a -j flag, similar to make's, to tell it to use multiple processes to build modules. This would allow executables, libraries, and Cabal packages to be built faster for people with multiple CPUs.

Attachments (2)

ghc-parallel-comp.patch (67.2 KB) - added by simonmar 8 years ago.
FastString-MVar.patch (4.6 KB) - added by simonmar 21 months ago.


Change History (63)

comment:1 Changed 8 years ago by igloo

  • Summary changed from --make should have a -j flag for paralel building to --make should have a -j flag for parallel building

Changed 8 years ago by simonmar

comment:2 Changed 8 years ago by simonmar

This seems like a good place to hang my patch to implement ghc --make -jN, which was used for the experiments in the 2005 Haskell Workshop paper on SMP GHC, but almost certainly isn't ready for prime time.

comment:3 Changed 7 years ago by igloo

  • Test Case set to N/A

comment:4 Changed 7 years ago by bos

  • Cc bos added

Would love to have this.

comment:5 Changed 7 years ago by simonmar

  • Milestone changed from 6.8 to 6.10

Not for 6.8, probably.

comment:6 Changed 6 years ago by simonmar

  • Component changed from Driver to Compiler
  • Milestone changed from 6.10 branch to _|_

We're not planning this for 6.10. It's more likely that Cabal will get parallel make support first, in which case there's less need for us to tackle this.

comment:7 Changed 6 years ago by simonmar

  • Architecture changed from Multiple to Unknown/Multiple

comment:8 Changed 6 years ago by simonmar

  • Operating System changed from Multiple to Unknown/Multiple

comment:9 Changed 3 years ago by liyang

  • Cc hackage.haskell.org@… added
  • Type of failure set to None/Unknown

comment:10 Changed 3 years ago by dterei

  • Cc dterei added

comment:11 Changed 21 months ago by orenbenkiki

Is this a dead ticket? Because I'd love to see it implemented. I'm working on a 32-core machine and compiling large Haskell packages (dozens of modules). A -j flag would make a real difference for me. Granted this isn't the most common case, but I'd expect a significant improvement even for smaller packages on a dual-core machine (and these days, which machine isn't at least that?). Faster builds => happier developers, and all that :-)

Changed 21 months ago by simonmar

comment:12 Changed 21 months ago by simonmar

Attached a patch I had lying around to make FastString thread-safe. IIRC it had a small compile-time performance impact.

comment:13 Changed 15 months ago by morabbin

  • Resolution set to wontfix
  • Status changed from new to closed

cabal install now has a -j flag, so closing this as wontfix.

comment:14 Changed 15 months ago by orenbenkiki

  • Resolution wontfix deleted
  • Status changed from closed to new

cabal install's -j flag solves a different problem: it builds and installs different packages in parallel. This ticket is about GHC being able to build different modules in parallel within a single package, regardless of installing the results.

comment:15 Changed 14 months ago by ihameed

  • Cc idhameed@… added

comment:16 Changed 12 months ago by nh2

  • Cc mail@… added

Can somebody give an idea about the difficulty of this?

From an outsider's view, GHC has a very clear idea about module dependencies and the order in which it has to build them, so building independent modules in parallel shouldn't be too hard, should it?
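
As a rough illustration of that intuition (a minimal sketch only, not GHC's actual --make machinery; the module graph and the compileModule action are invented), one could give every module an MVar that is filled when it finishes, and have each module's worker wait on its dependencies' MVars before compiling:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (forM, forM_)
import qualified Data.Map as M

type Module = String

-- Hypothetical stand-in for compiling one module.
compileModule :: Module -> IO ()
compileModule m = putStrLn ("compiling " ++ m)

-- 'deps' maps each module to the modules it imports.
-- Independent modules run concurrently; no job limit, for brevity.
parallelBuild :: M.Map Module [Module] -> IO ()
parallelBuild deps = do
  done <- M.fromList <$> forM (M.keys deps) (\m -> (,) m <$> newEmptyMVar)
  forM_ (M.toList deps) $ \(m, ds) -> forkIO $ do
    mapM_ (\d -> readMVar (done M.! d)) ds  -- wait for this module's imports
    compileModule m
    putMVar (done M.! m) ()                 -- signal completion
  mapM_ readMVar (M.elems done)             -- wait for the whole graph

main :: IO ()
main = parallelBuild $ M.fromList
  [("A", []), ("B", ["A"]), ("C", ["A"]), ("D", ["B", "C"])]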

comment:17 Changed 11 months ago by jstolarek

  • Cc jan.stolarek@… added

comment:18 Changed 11 months ago by rrnewton

  • Cc rrnewton@… added

comment:19 Changed 11 months ago by chetant

  • Cc chetant@… added

comment:20 Changed 9 months ago by a.ulrich

  • Cc alex@… added

comment:21 Changed 9 months ago by danr

  • Cc danr@… added

comment:22 follow-up: Changed 8 months ago by parcs

  • Status changed from new to patch

I've been working on this for a while, including during this year's GSoC. Since the GHC 7.8 feature freeze is imminent, I would like to post a stable subset of my current progress for review and possible inclusion in GHC 7.8. This subset of changes includes the bare minimum required to build multiple modules in parallel, plus the parallel upsweep itself. Each patch is, I hope, for the most part self-explanatory.

https://github.com/parcs/ghc/commits/ghc-parmake-gsoc

The speedups provided by the parallel upsweep are decent: I can realize a 1.8x speedup when compiling the Cabal library with -j3 -O2, and a 2.4x speedup when compiling 7 independent, relatively large modules with -j3 -O0, for example. The performance/thread ratio seems to peak at -j3 (which instructs the runtime to use 3 capabilities).

The performance of the sequential upsweep is not significantly impacted by these patches: compiling the Cabal library with -O2 takes about 1% longer with the patches than without.

I have not tested these patches on any platform other than x86_64/Linux, but I have no reason to believe that behavior would differ on other platforms. Nonetheless, testing on other platforms is necessary and appreciated.

These changes are well-tested and stable, but there is a single bug that escapes me which can be triggered by the testsuite. When running the testsuite with e.g. EXTRA_HC_OPTS=-j2, there is about a 1/1000 chance that a compiler process will exit with

ghc-stage2: GHC.Event.Manager.loop: state is already Finished

On a separate machine, I don't ever trigger the aforementioned bug, but instead the compiler process on rare occasion never exits. The testsuite script eventually kills the process, causing the particular test to fail with 'exit code non-0'.

These are likely bugs in the IO manager triggered by the changing of the number of capabilities at runtime. If I instead explicitly set the number of capabilities at startup with EXTRA_HC_OPTS=-j2 +RTS -N2 -RTS then neither bug manifests. I don't yet understand the IO manager well enough to fix this issue though.

Other than that though, this feature Just Works. One simply has to pass the -jN flag to GHC and the build will just finish faster.

Questions or comments? I have likely not explained everything I ought to explain.

comment:23 Changed 8 months ago by simonpj

I have not reviewed in detail -- I hope Simon Marlow may find time to do so. But there are some tricky corners, and I'd love a bit more by way of comments, especially Notes along the lines of Commentary/CodingStyle. Think: how easy will it be for someone else to understand and maintain this in 5 years' time? (I used github to add comments in a couple of places.)

User manual changes?

Simon

comment:24 Changed 8 months ago by rrnewton

Thanks for doing the work on this! Very exciting. I for one will start testing it.

I started reading the code a bit and have one question. But first, thanks for producing readable, well-documented code. My question has to do with the banal issue of printing stuff out in parallel, which can often be quite ugly.

I like that the normal compilation output is directed to a per-module TQueue. But what about printed exceptions when they occur?

I see that you've got three "[g]bracket" calls in the parallel upsweep, and that you take care to kill worker threads (asynchronously) when an exception occurs. However, I don't see a general catch-all for exceptions at the top of each worker thread (such as in Control.Async). In my experience exceptions on child threads can be a real pain. For example, if a worker thread dies... it looks like other threads may be blocked indefinitely waiting on the result MVar?

Apologies if I've missed something...
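
For reference, the kind of catch-all being described looks roughly like this (a sketch with invented names, not the code in the patch): the worker wraps its entire body in try and always fills its result MVar with an Either, so a thread waiting on the result can never block forever just because the worker died.

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Exception (SomeException, try)

-- Hypothetical stand-in for compiling one module and producing a result.
compileOne :: String -> IO Int
compileOne m = return (length m)

-- The worker's whole body sits under 'try', and the MVar is always filled,
-- with either the result or the exception that killed the worker.
forkWorker :: String -> IO (MVar (Either SomeException Int))
forkWorker m = do
  result <- newEmptyMVar
  _ <- forkIO $ do
    r <- try (compileOne m)
    putMVar result r
  return result

main :: IO ()
main = do
  v <- forkWorker "GhcMake"
  r <- takeMVar v
  case r of
    Left e  -> putStrLn ("worker failed: " ++ show e)
    Right n -> putStrLn ("worker result: " ++ show n)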

Last edited 8 months ago by rrnewton

comment:25 Changed 8 months ago by parcs

Thanks for reviewing!

Thanks Simon. I will attempt to comment the changes more sufficiently, following the mentioned coding style. I will update the user manual as well.

Ryan: I don't see where a stray exception could occur. The call to parUpsweep_one is already guarded by a try, and asynchronous exceptions are masked throughout the rest of the worker body. And the rest of the worker body doesn't seem to do anything that could throw an exception from within.

So it seems to me that the exception handling is already fairly tight: a worker thread should always exit gracefully. I think.

comment:26 Changed 8 months ago by rrnewton

Ah, good, it sounds like you've handled exceptions carefully. I missed the "try" on line 133 in my first cursory look.

https://github.com/parcs/ghc/blob/0aaeb70aa3291cf2ab90af150c3790cf4981db2a/compiler/main/GhcMake.hs#L681

I was expecting something right after the fork. But it looks like the only code that happens outside of that "try" is the newIORef, putMVar, and writeLogQueue... reasonably safe.

Regarding printing, I guess ErrUtils.errorMsg is thread-safe? What I'm thinking about is an error message from the compiler that gets barfed out simultaneously with other normal compiler output from other threads.

Last edited 8 months ago by rrnewton

comment:27 Changed 8 months ago by parcs

errorMsg just calls log_action which will append the stringified exception to the module's LogQueue as with any other compile message, so it should be OK.
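
To make the queue-per-module idea concrete, here is a schematic sketch using the stm package (not the patch's actual LogQueue type): each module writes to its own queue, errors included via log_action, and a single reader drains one queue at a time, so output from different modules never interleaves.

import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forM, forM_)

-- One queue per module; Nothing marks the end of that module's output.
type LogQueue = TQueue (Maybe String)

-- Hypothetical compile action that logs into its own queue only.
compileWithLog :: String -> LogQueue -> IO ()
compileWithLog m q = do
  atomically (writeTQueue q (Just ("compiling " ++ m)))
  atomically (writeTQueue q (Just (m ++ ": done")))
  atomically (writeTQueue q Nothing)

-- Drain the queues one module at a time, in a fixed order,
-- so messages from different modules never interleave.
printLogs :: [(String, LogQueue)] -> IO ()
printLogs = mapM_ $ \(_, q) ->
  let loop = do
        msg <- atomically (readTQueue q)  -- blocks until something arrives
        case msg of
          Nothing -> return ()
          Just s  -> putStrLn s >> loop
  in loop

main :: IO ()
main = do
  qs <- forM ["A", "B", "C"] (\m -> (,) m <$> newTQueueIO)
  forM_ qs (\(m, q) -> forkIO (compileWithLog m q))
  printLogs qs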

comment:28 Changed 8 months ago by thoughtpolice

Most of these patches LGTM, but I haven't reviewed the parallel upsweep patch itself very closely yet. I left a few comments on the others, mostly echoing Simon about some additional documentation, and dead code removal.

This looks like it can easily make the 7.8.1 window, though.

comment:29 Changed 8 months ago by nh2

This is great! It is probably out of scope for the GSoC, but I'd like to mention:

When your project has modules that take a very long time to compile and that many other modules depend on, it is useful to re-use information about how long each module took to compile last time. That way the dependencies leading towards these "blocker modules" can be built as early as possible.

See our small discussion at: https://github.com/ndmitchell/ghc-make/issues/2#issuecomment-19467708

parcs, you probably have a good overview on how GHC builds things now. Do you think the current state would make it possible to save and re-use such timing information?

comment:30 Changed 8 months ago by simonmar

I'm impressed, it looks like you've done a great job. Well done.

The parallel upsweep itself would look much nicer written using ParIO from monad-par, but that's something for the future.

Take a careful look at reTypecheckLoop; I'm not sure it's correct (see my inline comment).

The FastString changes need some more commentary, as pointed out by others. There are good comments in the parallel upsweep patch though.

You've obviously been careful to minimize the impact on sequential compilation performance, which is great.

The parallel IO manager bug needs to be fixed before we can merge the patch though. We can't ship it with a bug that causes random compilation failure.

Aside from the issues above, I'm happy with the patch.

comment:31 follow-up: Changed 8 months ago by parcs

nh2: That's an interesting idea. We would just have to persist the timing information somehow (through the interface file, maybe) and implement a smart semaphore (replacing QSem) that wakes up the module that would result in the shortest overall compile time. It certainly sounds possible, at least.
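
The scheduling policy being talked about could be as simple as the following sketch (purely illustrative; the module names and recorded timings are invented, and persisting them is assumed to be solved separately): among the modules that are ready to build, start the ones that previously took longest first.

import Data.List (sortBy)
import Data.Ord (Down (..), comparing)
import qualified Data.Map as M

type Module = String

-- Compile times (in seconds) recorded during a previous build; in a real
-- implementation these would be loaded from a timings file of some kind.
recordedTimes :: M.Map Module Double
recordedTimes = M.fromList [("Parser", 42.0), ("Utils", 1.5), ("Types", 12.0)]

-- Among modules whose dependencies are already built, prefer the ones with
-- the largest recorded compile time; modules with no record go last.
scheduleOrder :: [Module] -> [Module]
scheduleOrder =
  sortBy (comparing (\m -> Down (M.findWithDefault 0 m recordedTimes)))

main :: IO ()
main = print (scheduleOrder ["Utils", "Types", "Parser"])
-- prints ["Parser","Types","Utils"]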

Simon: Import loops were indeed not handled correctly. I managed to work out a solution that I hope is understandable. It involves augmenting a module's explicit textual dependencies with the implicit dependencies that arise from module loops. Let me know what you think.

Other changes I made:

  1. removed the BinIO constructor, as suggested
  2. more thoroughly commented the FastString implementation, as suggested
  3. revised one of the thread-safety changes: originally, I changed newUnique and newUniqueSupply to atomically update the env_us var. But the only reason this was necessary is that the env_us var was shared among interleaved threads created by forkM. So instead of making sure to update this var atomically, I think it is more sensible not to share the env_us var among interleaved threads at all (see the sketch after this list). This solution should in theory be more efficient as well, as multiple threads no longer potentially contend on the same env_us var.
  4. enabled buffering of stdout/stderr when compiling modules via GHCi
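
Schematically, the change in point 3 is the difference between the two patterns below (plain IORef counters for illustration only; GHC's real unique-supply types are not used here): rather than having every interleaved thread atomically bump one shared variable, each thread is handed its own private block up front.

import Data.IORef

-- Shared supply: every caller contends on the same IORef.
nextShared :: IORef Int -> IO Int
nextShared ref = atomicModifyIORef' ref (\n -> (n + 1, n))

-- Split supply: reserve a private block of uniques for one thread, which can
-- then count locally with no synchronization, since nobody else sees it.
splitSupply :: IORef Int -> Int -> IO (IORef Int)
splitSupply ref blockSize = do
  start <- atomicModifyIORef' ref (\n -> (n + blockSize, n))
  newIORef start

nextLocal :: IORef Int -> IO Int
nextLocal ref = do
  n <- readIORef ref
  writeIORef ref (n + 1)
  return n

main :: IO ()
main = do
  global <- newIORef 0
  mine   <- splitSupply global 1000000  -- this thread's private range
  u1 <- nextLocal mine
  u2 <- nextLocal mine
  print (u1, u2)  -- (0,1)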

Please do a second pass on all the commits, as I did a lot of rebasing and fixing up and I might have missed something stupid.

I have not yet fixed the IO manager bug.

comment:33 in reply to: ↑ 31 Changed 8 months ago by nh2

Replying to parcs:

nh2: That's an interesting idea. We would just have to persist the timing information somehow (through the interface file maybe)

I don't think the interface file is a good place: I have tried to build and improve Haskell build systems in recent months, and interface files not being generated identically for identical inputs was always a problem (e.g. in http://ghc.haskell.org/trac/ghc/ticket/8144). A separate file for that would probably work just as well.

comment:34 Changed 8 months ago by simonmar

The changes to handle loops look OK to me, but I would test it on a GHC build to be sure. You want to build the whole of ghc/compiler with --make -O2; the build system doesn't do this so you have to make a command line by hand.

comment:35 follow-up: Changed 8 months ago by rrnewton

By the way, I've had various hard-to-pin-down problems with changing capabilities at runtime myself.

Is it possible in this case to just disable the use of that feature for the initial release? That is, you would have to use +RTS -N to get real speedup. But, hey, it's kind of an advanced compilation feature anyway. In this scenario we could get a lot of testing experience with parallel builds without setNumCapabilities and then combine them when there is higher confidence.

comment:36 Changed 8 months ago by parcs

  • Blocked By 8184 added

comment:37 Changed 8 months ago by refold

  • Cc the.dead.shall.rise@… added

comment:38 in reply to: ↑ 35 Changed 8 months ago by parcs

Replying to rrnewton:

By the way, I've had various hard-to-pin-down problems with changing capabilities at runtime myself.

Is it possible in this case to just disable the use of that feature for the initial release? That is, you would have to use +RTS -N to get real speedup. But, hey, it's kind of an advanced compilation feature anyway. In this scenario we could get a lot of testing experience with parallel builds without setNumCapabilities and then combine them when there is higher confidence.

I don't mind going that route if the issue doesn't get sorted out soon. A subsequent point release of GHC could then reinstate the feature (automatically setting the number of capabilities for the user). But I think I can fix it in time.


Replying to simonmar:

The changes to handle loops look OK to me, but I would test it on a GHC build to be sure. You want to build the whole of ghc/compiler with --make -O2; the build system doesn't do this so you have to make a command line by hand.

At first I couldn't even get GHC to compile itself via --make -O2 without -j (see #8184). Since that's been fixed I was able to build GHC via --make -O2 -j after a minor tweak in the code, so the loop handling should be solid now.

On to the setNumCapabilities issue...

comment:39 follow-up: Changed 8 months ago by rrnewton

Ok, I'm trying to get a decent set of libraries installed to test this well. The very first thing I cabal installed ('text') did get a small speedup.

However, I'm also seeing some excessive system time. This may have nothing to do with the parallel make approach and just be a function of the new IO manager. In fact, if I understand the parallel make design, worker threads should either be running or blocked on MVars. (That's good for avoiding wasted user time as well, unlike work-stealing which burns cycles looking for work.)

I'm running on a 32-core Intel Westmere machine, using this command to install text version 0.11.3.1:

time cabal install text --ghc-options="-j24" --reinstall

Notice that in this simple test I am relying on the setNumCapabilities behavior, though a quick check confirms that I get the same times with +RTS -N added. Here are the times:

 * 1 thread:   real 1m20.028s user 1m17.921s sys 0m1.768s
 * 2 threads:  real 1m7.417s user 1m22.818s sys 0m14.891s
 * 4 threads:  real 0m59.528s user 1m29.110s sys 0m37.981s
 * 8 threads:  real 0m57.219s user 1m54.461s sys 1m31.703s
 * 16 threads: real 1m6.225s user 4m46.976s sys 3m32.661s
 * 24 threads: real 1m16.501s user 9m53.254s sys 6m3.375s
 * 31 threads: real 1m27.445s user 17m0.314s sys 8m0.175s

Well, it's nice that the final (31-thread) time is not much worse than the one-threaded time!

Finally, here is the fingerprint:

.|4880dfaeafec1fc65568a5445a70ec4286949123
ghc-tarballs|f190b3ce329422e13cbe1b5dad030058ca4bdda7
libffi-tarballs|a0088d1da0e171849ddb47a46c869856037a01d1
libraries/Cabal|9f374ab45e62924506b992db9157c970c7259a03
libraries/Win32|3da00d80f2fd7d1032e3530e1af1b39fba79aac3
libraries/array|b5779026c4d760cc380ef1fc18403534dced55c1
libraries/base|1b725f6ada6c4ddb011172408291a64498d199cb
libraries/binary|2799c25d85b4627200f2e4dcb30d2128488780c3
libraries/bytestring|7d5b516ad0937b7cdc29798db33a37a598123b6c
libraries/containers|154cd539a22e4d82ff56fec2d8ad38855f78513a
libraries/deepseq|420507ea418db8664a79aedaa6588b772e8c97c6
libraries/directory|571f32b2a0af7404a8483af5b1791361c5528ab6
libraries/filepath|8d34f787e06bf3a1802992246785939901dec8aa
libraries/ghc-prim|84fed8933a53cd15e39123a8a0067369c060e69e
libraries/haskeline|40bcd6ac30577d1d240166674d1e328ac52c1fd5
libraries/haskell2010|1c055868f748acb2945cb5652b3fdea6226e8862
libraries/haskell98|40300d61f29aa8d9953079d14fb5b2f1e5e04184
libraries/hoopl|8e0ef3b7bf6d25919209f74d65c4a77c6689934d
libraries/hpc|a7231c6727de54d17ce14b1286cfe88c4db95783
libraries/integer-gmp|cfcd248c0921aafe599c8547022686c5289bf743
libraries/integer-simple|5d9c6565550fb5c9c38f69475f52a2ba1d3edf98
libraries/old-locale|df98c76b078de507ba2f7f23d4473c0ea09d5686
libraries/old-time|7e0df2eb500ce4381725b868440fde04fa139956
libraries/pretty|0b8eada2d4d62dd09ee361d8b6ca9b13e6573202
libraries/primitive|c6b1e204f0f2a1a0d6cb1df35fa60762b2fe3cdc
libraries/process|5d47829c123c10711d14dd089b4d8d65f8289f3b
libraries/random|4b68afd3356674f12a67a4e381fa9becd704fab2
libraries/template-haskell|ec6d5a7c9b0c9e2fb1ce10d776cff74548e17981
libraries/terminfo|116d3ee6840d52bab69c880d775ae290a20d64bc
libraries/time|d4f019b2c6a332be5443b5bf88d0c7fef91523c6
libraries/transformers|a59fb93860f84ccd44178dcbbb82cfea7e02cd07
libraries/unix|ffdb844069497b276a719b0c89be35bd18095f22
libraries/vector|f27156970d9480806a5defcfea5367187c2a6997
libraries/xhtml|fb9e0bbb69e15873682a9f25d39652099a3ccac1
testsuite|5cad49d42e434130671c3d14692d73d56253fab8
utils/haddock|90ad0ea538d2fafed2047de8414c55627b94e879
utils/hsc2hs|46abf34f337dbc5fa638f06912e34966a9d1a147
Last edited 8 months ago by rrnewton

comment:40 Changed 8 months ago by akio

  • Cc tkn.akio@… added

#8209 might be related to the setNumCapabilities issue.

comment:41 follow-ups: Changed 8 months ago by rrnewton

Have you tested cabal -j on this branch? They *should* compose fine, and I'd love to see what kind of speedup one can get installing the Haskell Platform packages.

Unfortunately, right now I get a whole bunch of undefined reference errors when I try something like this on 4880dfaeafec1fc65568a5445a70ec4286949123:

time cabal-1.18.0 install -j30 --disable-library-profiling --disable-documentation HUnit --reinstall

Btw, I see the same problem with cabal 1.16.0.2. But taking away the -j30 makes it work. I do NOT have the same problem on master (8c99e698476291c) presently, nor on earlier versions of HEAD from 2013.08.04. (Yet, this might have nothing to do with parcs' patches, of course. There are another ~46 patches that are on master but not on the ghc-parmake-gsoc branch. Fast-forwarding the branch will be my next step.)

Notice that, weirdly, this problem occurs even when ghc -j is not used (as above).

For reference here is a sample of the errors:

cabal-1.17.0-HEAD-20130802: Error: some packages failed to install:
HUnit-1.2.5.2 failed during the configure step. The exception was:
user error
(/home/beehive/ryan_scratch/ghc-parGSOC2/libraries/Cabal/Cabal/dist-install/build/libHSCabal-1.18.0.a(Simple__64.o):
In function `chbT_info':
(.text+0x857): undefined reference to `rff5_info'
/home/beehive/ryan_scratch/ghc-parGSOC2/libraries/Cabal/Cabal/dist-install/build/libHSCabal-1.18.0.a(Simple__64.o):
In function `sg4L_info':
...
Last edited 8 months ago by rrnewton

comment:42 in reply to: ↑ 41 Changed 8 months ago by refold

Replying to rrnewton:

Have you tested cabal -j on this branch? They *should* compose fine, and I'd love to see what kind of speedup one can get installing the Haskell Platform packages.

Right now this may result in more processes being created than desired. We're working on solving this issue (though the patches won't be ready for 1.18).

comment:43 in reply to: ↑ 39 ; follow-up: Changed 8 months ago by parcs

Ryan:

Thanks for testing. Most of the time each thread will be either blocked on an MVar or compiling (or figuring out what to block on next), so I'm not sure where the excessive system time is coming from but I assume it's either from the RTS or the IO manager.

I haven't tested cabal with my branch but I don't think my changes are causing the errors you're experiencing. I'm willing to bet that it's due to recent changes pushed to master. (Or maybe you forgot to do git submodule update after checking out my branch?)


akio:

#8209 is most likely what's being triggered, although I'm not positive.

comment:44 in reply to: ↑ 22 Changed 8 months ago by refold

Replying to parcs:

Is there some way to detect whether GHC has support for parallel --make? E.g. using ghc --info? I'd prefer to use that instead of conditioning on the version.

comment:45 in reply to: ↑ 43 Changed 7 months ago by rrnewton

Replying to parcs:

I haven't tested cabal with my branch but I don't think my changes are causing the errors you're experiencing. I'm willing to bet that it's due to recent changes pushed to master. (Or maybe you forgot to do git submodule update after checking out my branch?)

I'm pretty sure I updated the submodules, but I'll do a fresh build to double-check. Except for the excessive system time, the parallel make actually works for me, as long as I don't do cabal -j. I can even install packages with parallel GHC / non-parallel cabal.

Could I get a fix on what other people who are testing this branch are seeing? Are you able to install packages with cabal -j?

comment:46 Changed 7 months ago by rrnewton

By the way, I just added a ticket for the excessive system time, #8224.

comment:47 Changed 7 months ago by parcs

  • Blocked By 8235 added

comment:48 Changed 7 months ago by parcs

Replying to refold:

Replying to parcs:

Is there some way to detect whether GHC has support for parallel --make? E.g. using ghc --info? I'd prefer to use that instead of conditioning on the version.

What are the advantages of adding an entry to ghc --info over conditioning on the version, in this case?


GHC no longer deadlocks from its use of setNumCapabilities (see #8209) but there is still another related issue: GHC sometimes prints

ghc-stage2: GHC.Event.Manager.loop: state is already Finished

when exiting. See #8235


Ryan:

That's really odd. I'm going to try cabal install -j with my branch later today and let you know how it goes.

comment:49 follow-up: Changed 7 months ago by refold

Replying to parcs:

What are the advantages of adding an entry to ghc --info over conditioning on the version, in this case?

Right now this is mostly for convenience - to enable supporting both the master version of GHC 7.7 and the parmake branch. In the future this will be less important - unless GHC HQ opts to disable parallel --make on some platforms for some reason.

Implementing this is trivial: just add a ("Supports parallel --make", "YES") tuple to the list returned by compiler/main/DynFlags.compilerInfo. I can write a patch myself if you want.
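
On the tooling side, checking that flag could look roughly like this (a sketch only: it assumes ghc --info keeps printing a Read-able [(String, String)] association list and that the key is spelled exactly as above; it uses readProcess from the process package):

import System.Process (readProcess)

-- Ask a ghc binary whether its --make supports -j by looking for the
-- "Supports parallel --make" key in its --info association list.
supportsParallelMake :: FilePath -> IO Bool
supportsParallelMake ghc = do
  out <- readProcess ghc ["--info"] ""
  let info = read out :: [(String, String)]
  return (lookup "Supports parallel --make" info == Just "YES")

main :: IO ()
main = supportsParallelMake "ghc" >>= print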

comment:50 in reply to: ↑ 49 ; follow-up: Changed 7 months ago by parcs

Replying to refold:

Right now this is mostly for convenience - to enable supporting both the master version of GHC 7.7 and the parmake branch. In the future this will be less important - unless GHC HQ opts to disable parallel --make on some platforms for some reason.

Implementing this is trivial: just add a ("Supports parallel --make", "YES") tuple to the list returned by compiler/main/DynFlags.compilerInfo. I can write a patch myself if you want.

OK, done.

comment:51 in reply to: ↑ 50 Changed 7 months ago by refold

Replying to parcs:

OK, done.

Thanks!

comment:52 in reply to: ↑ 41 Changed 7 months ago by parcs

Replying to rrnewton:

I'm not able to reproduce the cabal-install issues. I did the following:

  1. checked out ghc-parmake-gsoc and built ghc in-place
  2. installed cabal-install 1.18 via GHC 7.6.2
  3. successfully ran the command
    cabal install --with-ghc=/home/patrick/code/ghc/inplace/bin/ghc-stage2 --reinstall HUnit async
    
  4. successfully ran the same command with and without -j and/or --ghc-option="-j"

Am I missing something?

Last edited 7 months ago by parcs

comment:53 Changed 7 months ago by rrnewton

Well, that's good news!

When I can, I'll try a fresh start inside a VM and see what happens. Maybe this is a transient failure of some kind on my RHEL6 machine.

comment:54 Changed 7 months ago by Patrick Palka <patrick@…>

In 8d9edfed74e8fd03933d4e3540f6372c269de538/ghc:

Implement the parallel upsweep (#910)

The parallel upsweep is the parallel counterpart to the default
sequential upsweep. It attempts to compile modules in parallel by
subdividing the work of the upsweep into parts that can be executed
concurrently by multiple Haskell threads.

In order to enable the parallel upsweep, the user has to pass the -jN
flag to GHC, where N is an optional number denoting the number of jobs,
or modules, to compile in parallel, like with GNU make. In GHC this just
sets the number of capabilities to N.
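
For example, with this merged, either of the following asks GHC to compile up to four modules at a time (Main.hs and somepackage are placeholders):

ghc --make -j4 Main.hs
cabal install --ghc-options="-j4" somepackage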

comment:55 Changed 7 months ago by Patrick Palka <patrick@…>

In 9c18ad7475bdc5bd5684a702c2d575ef9dd86fe1/ghc:

Merge branch 'ghc-parmake-gsoc' (#910)

comment:56 Changed 7 months ago by parcs

refold: Are you planning to work on --make -j support in cabal-install? I'd gladly work on it if you aren't.

Last edited 7 months ago by parcs

comment:57 follow-up: Changed 7 months ago by refold

@parcs

Yes, I'm working on this, though I had some delays. One problem is that we've already released Cabal 1.18, so this feature will have to go into Cabal 1.19.

Do I understand correctly that right now one must pass +RTS -Nn to GHC if one wants it to use n OS threads?

comment:58 in reply to: ↑ 57 ; follow-up: Changed 7 months ago by parcs

Replying to refold:

@parcs

Yes, I'm working on this, though I had some delays. One problem is that we've already released Cabal 1.18, so this feature will have to go into Cabal 1.19.

Okay, great.

Do I understand correctly that right now one must pass +RTS -Nn to GHC if one wants it to use n OS threads?

Nope, passing +RTS -Nn is not necessary. The number of capabilities will be set at runtime according to the -jn flag.

comment:59 in reply to: ↑ 58 Changed 7 months ago by refold

Replying to parcs:

Nope, passing +RTS -Nn is not necessary. The number of capabilities will be set at runtime according to the -jn flag.

Great.

comment:60 Changed 7 months ago by parcs

  • Resolution set to fixed
  • Status changed from patch to closed

comment:61 Changed 7 months ago by mcandre

Could -j parallel builds be on by default?
