Opened 8 months ago

Closed 7 months ago

Last modified 6 months ago

#8205 closed bug (fixed)

the 'impossible' happened : expectJust block_order

Reported by: erikd
Owned by: jstolarek
Priority: normal
Milestone:
Component: Compiler
Version: 7.7
Keywords:
Cc: jan.stolarek@…, kazu@…
Operating System: Unknown/Multiple
Architecture: Unknown/Multiple
Type of failure: Building GHC failed
Difficulty: Unknown
Test Case:
Blocked By:
Blocking:
Related Tickets:

Description

Currently at commit 7e91e5bf84c2b3f461934b43911c0defb61dd9c6; this was not failing to build two days ago.

Compiling results in:

"inplace/bin/ghc-stage1" -hisuf dyn_hi -osuf  dyn_o -hcsuf dyn_hc -fPIC -dynamic 
  -H32m -O -Werror -Wall -H64m -O0    -hide-all-packages -i -iutils/hpc/. 
  -iutils/hpc/dist-install/build -iutils/hpc/dist-install/build/autogen
  -Iutils/hpc/dist-install/build -Iutils/hpc/dist-install/build/autogen
  -optP-include -optPutils/hpc/dist-install/build/autogen/cabal_macros.h
  -package array-0.4.0.2 -package base-4.7.0.0 -package containers-0.5.2.1
  -package directory-1.2.0.1 -package hpc-0.6.0.1 -XHaskell98 -XCPP
  -no-user-package-db -rtsopts -fwarn-tabs     -odir utils/hpc/dist-install/build
  -hidir utils/hpc/dist-install/build -stubdir utils/hpc/dist-install/build
  -c utils/hpc/dist-install/build/HpcParser.hs
  -o utils/hpc/dist-install/build/HpcParser.dyn_o 
ghc-stage1: panic! (the 'impossible' happened)
  (GHC version 7.7.20130830 for powerpc64-unknown-linux):
        expectJust block_order

Attachments (2)

build.mk (4.8 KB) - added by erikd 8 months ago.
build.mk file from my powerpc64 build tree as requested by @jstolarek.
dump-out.txt.gz (122.7 KB) - added by erikd 8 months ago.
Gzipped output of failing compile command with -ddump-cmm -dcmm-lint


Change History (41)

comment:1 Changed 8 months ago by lukexi

Also here on x86_64 Mac OS X.

comment:2 Changed 8 months ago by erikd

  • Architecture changed from powerpc64 to Unknown/Multiple
  • Operating System changed from Linux to Unknown/Multiple

comment:3 Changed 8 months ago by erikd

Hmm, compiles fine on x86_64 Linux and powerpc Linux.

Last edited 8 months ago by erikd

comment:4 Changed 8 months ago by simonpj

This looks bad. How can we reproduce it? Is it architecture-specific? What is your build setup, erikd? Thanks

Simon

comment:5 Changed 8 months ago by jstolarek

  • Cc jan.stolarek@… added

Austin suggests that this happened around the time I pushed my loopification patch (this bug was reported on 30 Aug; my patch was pushed a day earlier). The panic happens in the splitAtProcPoints function, and I recall that my previous attempt at loopification as a Cmm pass didn't work with LLVM because it broke the invariant that a block may be reachable only from a single procpoint.

So, can anyone experiencing this problem try

git revert d61c3ac186c94021c851f7a2a6d20631e35fc1ba

and see if this solves the problem?

comment:6 Changed 8 months ago by jstolarek

erikd, can you upload your build.mk file?

Changed 8 months ago by erikd

build.mk file from my powerpc64 build tree as requested by @jstolarek.

comment:7 Changed 8 months ago by erikd

@jstolarek : On powerpc64-linux, if I revert commit d61c3ac186c94021c851f7a2a6d20631e35fc1ba the stage1 build completes and then fails during the stage2 build. This suggests that something in that commit is causing the expectJust failure.

comment:8 follow-up: Changed 8 months ago by jstolarek

Thanks for your build.mk. Unfortunately I can't reproduce this on my Linux machine - it seems that the problem only happens on Macs.

if I revert commit d61c3ac186c94021c851f7a2a6d20631e35fc1ba the stage1 build completes and then fails during the stage2 build.

This is most strange. If stage2 fails after reverting that commit, that would mean you are experiencing some other bug. Did you clean the build tree after reverting the commit? Also, how does stage2 fail? What error do you get?

One thing you could do to help us debug this is to build HEAD and, when you get a build failure, re-run the last command with -ddump-cmm -dcmm-lint added to the command line. That would look something like this:

"inplace/bin/ghc-stage1" -hisuf dyn_hi -osuf  dyn_o -hcsuf dyn_hc -fPIC -dynamic 
  -H32m -O -Werror -Wall -H64m -O0    -hide-all-packages -i -iutils/hpc/. 
  -iutils/hpc/dist-install/build -iutils/hpc/dist-install/build/autogen
  -Iutils/hpc/dist-install/build -Iutils/hpc/dist-install/build/autogen
  -optP-include -optPutils/hpc/dist-install/build/autogen/cabal_macros.h
  -package array-0.4.0.2 -package base-4.7.0.0 -package containers-0.5.2.1
  -package directory-1.2.0.1 -package hpc-0.6.0.1 -XHaskell98 -XCPP
  -no-user-package-db -rtsopts -fwarn-tabs     -odir utils/hpc/dist-install/build
  -hidir utils/hpc/dist-install/build -stubdir utils/hpc/dist-install/build
  -c utils/hpc/dist-install/build/HpcParser.hs
  -o utils/hpc/dist-install/build/HpcParser.dyn_o -ddump-cmm -dcmm-lint

If you could upload the output from this command, it would tell us why this is happening.

comment:9 in reply to: ↑ 8 Changed 8 months ago by erikd

Replying to jstolarek:

if I revert commit d61c3ac186c94021c851f7a2a6d20631e35fc1ba the stage1 build completes and then fails during the stage2 build.

This is most strange. If stage2 fails after reverting that commit this would mean that you are experiencing some other bug.

Yes, this is another probably unrelated bug.

Did you clean the build tree after reverting the commit?

Yes.

Also, how does stage2 fail? What error do you get?

"inplace/bin/ghc-stage2" -optc-Werror -optc-Wall -optc-Ilibraries/old-time/include
  -optc-I'/home/erikd/PPC64/ghc-ppc64/libraries/base/include'
  -optc-I'/home/erikd/PPC64/ghc-ppc64/rts/dist/build'
  -optc-I'/home/erikd/PPC64/ghc-ppc64/includes'
  -optc-I'/home/erikd/PPC64/ghc-ppc64/includes/dist-derivedconstants/header'
  -optc-Werror=unused-but-set-variable -optc-Wno-error=inline -fPIC -dynamic  -H32m
  -O -Werror -Wall -H64m -O0    -package-name old-time-1.1.0.1 -hide-all-packages -i
  -ilibraries/old-time/. -ilibraries/old-time/dist-install/build
  -ilibraries/old-time/dist-install/build/autogen -Ilibraries/old-time/dist-install/build
  -Ilibraries/old-time/dist-install/build/autogen -Ilibraries/old-time/include   
  -optP-include -optPlibraries/old-time/dist-install/build/autogen/cabal_macros.h
  -package base-4.7.0.0 -package old-locale-1.0.0.5 -XHaskell98 -XCPP
  -XForeignFunctionInterface -O2 -O -dcore-lint -fno-warn-deprecated-flags 
  -no-user-package-db -rtsopts      -c libraries/old-time/cbits/timeUtils.c
  -o libraries/old-time/dist-install/build/cbits/timeUtils.dyn_o
Segmentation fault
make[1]: *** [libraries/old-time/dist-install/build/cbits/timeUtils.dyn_o] Error 139

This is actually the first command run using the stage2 compiler. It builds the non-dynamic timeUtils.o object successfully, and when it builds timeUtils.dyn_o I get this segfault.

I'm going to try and build without dyn.


comment:10 Changed 8 months ago by erikd

After disabling dyn it segfaults with:

"inplace/bin/ghc-stage2" -hisuf hi -osuf  o -hcsuf hc -static  -H32m -O -Werror -Wall -H64m -O0  
  -hide-all-packages -i -iutils/haddock/driver -iutils/haddock/src -iutils/haddock/dist/build 
  -iutils/haddock/dist/build/autogen -Iutils/haddock/dist/build -Iutils/haddock/dist/build/autogen    
  -optP-DIN_GHC_TREE -optP-include -optPutils/haddock/dist/build/autogen/cabal_macros.h
  -package Cabal-1.18.0 -package array-0.4.0.2 -package base-4.7.0.0 -package containers-0.5.3.1 
  -package deepseq-1.3.0.2 -package directory-1.2.0.1 -package filepath-1.3.0.2
  -package ghc-7.7.20130906 -package xhtml-3000.2.1 -funbox-strict-fields -Wall -fwarn-tabs
  -O2 -XHaskell2010  -no-user-package-db -rtsopts      -odir utils/haddock/dist/build
  -hidir utils/haddock/dist/build -stubdir utils/haddock/dist/build
  -c utils/haddock/src/Haddock/GhcUtils.hs -o utils/haddock/dist/build/Haddock/GhcUtils.o 

which is the third object file to be built with the stage2 compiler.

comment:11 Changed 8 months ago by jstolarek

OK, I'm really puzzled by this segfault in the stage2 compiler, but perhaps I can help with the panic in expectJust - we need to reproduce it. This means you need to revert the reverting commit :) or, in other words, attempt to build unmodified HEAD and allow the stage1 build to fail with the panic you originally reported. After that happens, re-run the failing command with -ddump-cmm -dcmm-lint added.

comment:12 Changed 8 months ago by ezyang

Jan, have you managed to reproduce it on a Mac OS X machine?

comment:13 Changed 8 months ago by jstolarek

Edward: No, unfortunately I don't have access to one. I asked Richard whether he could reproduce the problem on his Mac, but everything builds fine on his machine.

Changed 8 months ago by erikd

Gzipped output of failing compile command with -ddump-cmm -dcmm-lint

comment:14 Changed 8 months ago by erikd

@jstolarek : I attached the -ddump-cmm -dcmm-lint output you asked for. Let me know if you need anything else.

comment:15 follow-up: Changed 8 months ago by jstolarek

  • Owner set to jstolarek

erikd: Thanks. Are you sure this is the right dump? If the compiler panicked during compilation, the dump should be incomplete, whereas yours appears to be complete.

But that's not so important - Kazu provided a dump that allows me to figure out what's going on. Below is an explanation (no solution yet).

Here is how the Cmm looks before stack layout (cFXJ and cFXS are the important blocks here):

==================== Post control-flow optimisations ====================
{offset
  cFXP:
      _sCxU::I32 = I32[(old + 12)];
      _sCxV::P32 = P32[(old + 8)];
      goto cFXI;
  cFXI:
      if (Sp - <highSp> < SpLim) goto cFXS; else goto cFXT;
  cFXT:
      _sCxW::I32 = _sCxU::I32;
      if (_sCxW::I32 != 0) goto cFXN; else goto cFXO;
  cFXN:
      I32[(young<cFXR> + 4)] = cFXR;
      R1 = _sCxV::P32;
      if (R1 & 3 != 0) goto cFXR; else goto cFXU;
  cFXU:
      call (I32[R1])(R1) returns to cFXR, args: 4, res: 4, upd: 4;
  cFXR:
      _sCxX::P32 = R1;
      _sCxY::P32 = P32[_sCxX::P32 + 3];
      _sCxZ::P32 = P32[_sCxX::P32 + 7];
      _cFXZ::I32 = _sCxW::I32 - 1;
      _sCy0::I32 = _cFXZ::I32;
      _sCxV::P32 = _sCxZ::P32;
      _sCxU::I32 = _sCy0::I32;
      goto cFXJ;
  cFXJ:
      if (Sp - <highSp> < SpLim) goto cFXS; else goto cFXT;
  cFXS:
      R1 = happyDropStk_rjgW_closure;
      I32[(old + 12)] = _sCxU::I32;
      P32[(old + 8)] = _sCxV::P32;
      call (stg_gc_fun)(R1) args: 12, res: 0, upd: 4;
  cFXO:
      R1 = _sCxV::P32 & (-4);
      call (I32[R1])(R1) args: 4, res: 0, upd: 4;
}

Stack layout transforms it to:

==================== Layout Stack ====================
{offset
  cFXP:
      _sCxU::I32 = I32[Sp];
      _sCxV::P32 = P32[Sp + 4];
      goto cFXI;
  cFXI:
      goto cFXT;
  cFXT:
      _sCxU::I32 = I32[Sp];
      _sCxV::P32 = P32[Sp + 4];
      _sCxW::I32 = _sCxU::I32;
      if (_sCxW::I32 != 0) goto cFXN; else goto cFXO;
  cFXN:
      I32[Sp] = cFXR;
      R1 = _sCxV::P32;
      I32[Sp + 4] = _sCxW::I32;
      if (R1 & 3 != 0) goto cFXR; else goto cFXU;
  cFXU:
      call (I32[R1])(R1) returns to cFXR, args: 4, res: 4, upd: 4;
  cFXR:
      _sCxW::I32 = I32[Sp + 4];
      _sCxX::P32 = R1;
      _sCxY::P32 = P32[_sCxX::P32 + 3];
      _sCxZ::P32 = P32[_sCxX::P32 + 7];
      _cFXZ::I32 = _sCxW::I32 - 1;
      _sCy0::I32 = _cFXZ::I32;
      _sCxV::P32 = _sCxZ::P32;
      _sCxU::I32 = _sCy0::I32;
      goto cFXJ;
  cFXJ:
      goto uFY0;
  uFY0:
      I32[Sp] = _sCxU::I32;
      P32[Sp + 4] = _sCxV::P32;
      goto cFXT;
  cFXO:
      R1 = _sCxV::P32 & (-4);
      Sp = Sp + 8;
      call (I32[R1])(R1) args: 4, res: 0, upd: 4;
}

Notice that the cFXS block was eliminated during stack layout and we got a new uFY0 block. Next comes CAF analysis, followed by proc-point analysis:

==================== CAFEnv ====================
[(cFXI, {}), (cFXJ, {}), (cFXN, {}), (cFXO, {}), (cFXP, {}),
 (cFXR, {}), (cFXT, {}), (cFXU, {}), (uFY0, {})]

==================== procpoint map ====================
[(cFXI, reached by cFXP), (cFXJ, reached by cFXR),
 (cFXN, reached by cFXT), (cFXO, reached by cFXT), (cFXP, <procpt>),
 (cFXR, <procpt>), (cFXS, <procpt>), (cFXT, <procpt>),
 (cFXU, reached by cFXT), (uFY0, reached by cFXR)]

Notice that the procpoint map refers to the deleted cFXS block. The problem is that we determine proc-points before stack layout but run proc-point analysis after stack layout. Clearly, stack layout can remove some of the proc-points we previously computed and thus invalidate our analysis. I don't have a good solution yet. We can't compute proc-points after stack layout, because stack layout needs that information. One idea that comes to mind is modifying stack layout so that it returns a new, possibly modified, list of proc-points.
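
To make the failure mode concrete, here is a toy, self-contained illustration (not the actual GHC code; the map contents and labels are made up). GHC's expectJust is essentially fromMaybe with a panic, so once the stale procpoint map mentions a label whose block stack layout has deleted, the later lookup finds nothing and we get exactly the reported message:

import qualified Data.Map as Map

-- Roughly what GHC's expectJust does: return the value or panic with
-- the supplied tag.
expectJust :: String -> Maybe a -> a
expectJust _   (Just x) = x
expectJust msg Nothing  =
    error ("the 'impossible' happened: expectJust " ++ msg)

main :: IO ()
main = do
    -- Block order known to the pass, after cFXS was dropped by stack layout.
    let blockOrder = Map.fromList [("cFXP", 0 :: Int), ("cFXT", 1), ("cFXR", 2)]
    -- The stale procpoint map still lists cFXS, so this lookup panics.
    print (expectJust "block_order" (Map.lookup "cFXS" blockOrder))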

I wonder why this only happens on MacOS, and only on some machines. I think this should be deterministic and happen every time, regardless of operating system.

comment:16 in reply to: ↑ 15 ; follow-ups: Changed 8 months ago by erikd

Replying to jstolarek:

I wonder why does this only happen on MacOS and why only on some machines. I think this should be deterministic and happen always, regardless of operating system.

My machine isn't MacOS, it's Linux, but on powerpc64 (not powerpc). Kazu's machine is x86_64 MacOS.

comment:17 in reply to: ↑ 16 Changed 8 months ago by jstolarek

Replying to erikd:

My machine isn't MacOS its Linux, but on powerpc64 (not powerpc).

Oh, right. I forgot.

Kazu's machine is x86_64 MacOS.

His logs said he's on 32-bit.

Still, I don't understand why this doesn't happen to everyone.

comment:18 in reply to: ↑ 16 Changed 7 months ago by jstolarek

  • Cc kazu@… added

OK, I understand why this happens only on some systems. It is related to splitting proc-points, which is turned on by this piece of code in CmmPipeline:

splitting_proc_points = hscTarget dflags /= HscAsm
                     || not (tablesNextToCode dflags)
                     || -- Note [inconsistent-pic-reg]
                        usingInconsistentPicReg
usingInconsistentPicReg
      = case (platformArch platform, platformOS platform, gopt Opt_PIC dflags)
        of   (ArchX86, OSDarwin, pic) -> pic
             (ArchPPC, OSDarwin, pic) -> pic
             _                        -> False

If I turn on proc-point splitting by simply setting splitting_proc_points = True, I still can't reproduce the bug, because the Cmm is transformed differently on my architecture (namely, the cFXS block is not removed). I have an idea how to fix this, but it will be speculative. Kazu, I will need your help testing the patch once it is ready.

comment:19 Changed 7 months ago by kazu-yamamoto

Sure. I think I can help you.

comment:20 Changed 7 months ago by Jan Stolarek <jan.stolarek@…>

In bec3c0497fa55e84005d175e0fc6b1d72df961e1/ghc:

Drop proc-points that don't exist in the graph (#8205)

On some architectures it might happen that stack layout pass will
invalidate the list of calculated procpoints by dropping some of
them. We fix this by checking whether a proc-point is in a graph
at the beginning of proc-point analysis. This is a speculative
fix for #8205.

comment:21 Changed 7 months ago by jstolarek

Kazu, I pushed a speculative fix into HEAD. Please try building the latest HEAD and tell me whether the problem is gone.

Kazu, when you were dumping this Cmm, did you get anything on stderr? On my machine I get a warning that cFXS and cFXT are non-call proc-points. This warning is not in your dump. I suspect the reason is that you did not redirect stderr to the log.txt file.

comment:22 Changed 7 months ago by erikd

Pulled HEAD and tested this on powerpc64-linux.

No longer getting the originally reported panic, but now I get the following:

"inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -H32m -O -Werror -Wall
  -H64m -O0 -package-name base-4.7.0.0 -hide-all-packages -i -ilibraries/base/.
  -ilibraries/base/dist-install/build -ilibraries/base/dist-install/build/autogen
  -Ilibraries/base/dist-install/build -Ilibraries/base/dist-install/build/autogen
  -Ilibraries/base/include   -optP-DOPTIMISE_INTEGER_GCD_LCM -optP-include
  -optPlibraries/base/dist-install/build/autogen/cabal_macros.h -package ghc-prim-0.3.1.0
  -package integer-gmp-0.5.1.0 -package rts-1.0 -package-name base -XHaskell98 -XCPP
  -O2 -O -dcore-lint -fno-warn-deprecated-flags  -no-user-package-db -rtsopts     
  -odir libraries/base/dist-install/build -hidir libraries/base/dist-install/build
  -stubdir libraries/base/dist-install/build   -c libraries/base/./Text/Printf.hs
  -o libraries/base/dist-install/build/Text/Printf.o

In file included from /home/erikd/PPC64/ghc-ppc64/includes/Stg.h:232:0:
    0,
                     from /tmp/ghc9517_0/ghc9517_2.hc:3:

/home/erikd/PPC64/ghc-ppc64/includes/stg/Regs.h:359:2:
     error: #error BaseReg must be in a register for THREADED_RTS
/tmp/ghc9517_0/ghc9517_2.hc: In function 'ghczm7zi7zi20130912_Platform_zdWPlatform_slow':

/tmp/ghc9517_0/ghc9517_2.hc:16:1:
     error: 'MainCapability' undeclared (first use in this function)

followed by well over 100 complaints about MainCapability.

comment:23 follow-up: Changed 7 months ago by kazu-yamamoto

I verified that the current GHC HEAD can be built on Mac using 32-bit GHC 7.6.3!

According to my command history, I specified "2>&1 > log.txt". So, stderr was also recorded, I think.

comment:24 Changed 7 months ago by erikd

If it built correctly for Kazu, that means I must be seeing some powerpc64-linux-specific issue.

comment:25 in reply to: ↑ 23 Changed 7 months ago by jstolarek

  • Resolution set to fixed
  • Status changed from new to closed

Replying to kazu-yamamoto:

I verified that the current GHC HEAD can be built on Mac using 32-bit GHC 7.6.3!

Good. I'm marking this bug as solved.

According to my command history, I specified "2>&1 > log.txt". So, stderr was also recorded, I think.

I don't know MacOS, but on Linux this would NOT redirect stderr to log.txt. Redirections are evaluated from left to right, so what you are saying here is: redirect current stderr to the *current* stdout (which is the console!), and then redirect the *current* stdout to a file. It's like the assignments:

stderr = stdout;
stdout = log.txt;

This doesn't mean that stderr is now log.txt - Bash is not a functional language :) So you need to say >log.txt 2>&1.

No longer getting the originally reported panic, but now I get the following:

Erik, I think something is very wrong with your build. I'd suggest that you pull a clean tree and build there. If that fails, I'd mail ghc-devs or file a bug report.

comment:26 Changed 7 months ago by jstolarek

Oh, and sadly there is no regression test for this bug :( I'm afraid we don't have a small test case that triggers this panic.

comment:27 Changed 7 months ago by kazu-yamamoto

This bug stopped GHC from building. I guess that is enough, and it is hard to make a regression test. Does anybody have any ideas for a regression test?

And, sorry for my redirection mistake.

comment:28 Changed 7 months ago by Jan Stolarek <jan.stolarek@…>

In 9267561a6f1292e829008b52aeb4aecec98dc057/testsuite:

Test for #8205

This test is a bit speculative, because I can't reproduce problem
on my machine. Still, it should work because it produces the same
Cmm that originally caused the problem.

comment:29 Changed 7 months ago by jstolarek

Well, actually we don't know whether we have a working test, because I couldn't reproduce the problem on my machine and so wasn't able to check whether the test really catches the bug. I just pushed a speculative test into the testsuite. Kazu, can you check whether it does the job? Here's what you need to do:

./sync-all pull
make -j4
cd testsuite/tests
make TEST=T8205

Since the problem is fixed, this test should pass. Now you need to test whether it fails without my fix:

cd ../..
git revert bec3c0497fa55e84005d175e0fc6b1d72df961e1

Now rebuild only the stage2 compiler:

make stage=2 -j4

This will give you a broken stage2 compiler. Run the test again:

cd testsuite/tests
make TEST=T8205

This time it should fail with a panic. If it does, then all is well and we have a test that catches the bug.

Could you try doing this and report the results?

comment:30 Changed 7 months ago by jstolarek

Kazu, have you managed to verify whether my test actually works?

comment:31 Changed 7 months ago by kazu-yamamoto

Which version of Python is necessary?

Traceback (most recent call last):
  File "../driver/runtests.py", line 188, in ?
    from testlib import *
  File "/home/kazu/work/ghc/testsuite/driver/testlib.py", line 1856
    with t.lockFilesWritten:
         ^
SyntaxError: invalid syntax
make: *** [test] Error 1

I'm testing on 32-bit Linux; Python is version 2.4.

comment:32 Changed 7 months ago by thoughtpolice

You need at least Python 2.5 for the with statement.

However, I believe we could get around this for Python 2.4 by saying something like:

from __future__ import with_statement

which will then let you use with successfully (and be ignored on later versions). Can you confirm whether this works if you modify the testsuite driver, Kazu?

comment:33 Changed 7 months ago by monoidal

Python 2.4 doesn't support "with", even with the __future__ import. The __future__ statement is only needed in Python 2.5, and it is already present in the script.

I highly recommend updating Python; version 2.4 is over five years old and possibly has security issues. It should be possible to get rid of "with" and replace it with calls to acquire/release, but there are probably other obstacles to running the script.

comment:34 Changed 7 months ago by kazu-yamamoto

I installed Python 2.7 on 32-bit Linux. I tested this on both 32-bit and 64-bit Linux, with and without reverting.

Unfortunately, "4 expected passes" was displayed in *all* four cases. Should I do "make clean" after reverting?

comment:35 follow-up: Changed 7 months ago by kazu-yamamoto

After reverting, I did "make maintainer-clean" and built and tried this test again.
On both 32bit and 64bit Linux, I got "4 expected passes".

I reverted "bec3c0497fa55e84005d175e0fc6b1d72df961e1". Should I revert other ones?

comment:36 Changed 7 months ago by kazu-yamamoto

I installed GHC HEAD on 32-bit Linux today and tried to install mighttpd2, but I got the following error:

Building vault-0.2.0.4...
Failed to install vault-0.2.0.4
Last 10 lines of the build log ( /home/kazu/work/mighttpd2/.cabal-sandbox/logs/vault-0.2.0.4.log ):
Preprocessing library vault-0.2.0.4...
[1 of 4] Compiling Data.Unique.Really ( src/Data/Unique/Really.hs, dist/dist-sandbox-a79d14af/build/Data/Unique/Really.o )
[2 of 4] Compiling Data.Vault.ST_GHC ( src/Data/Vault/ST_GHC.hs, dist/dist-sandbox-a79d14af/build/Data/Vault/ST_GHC.o )
ghc: panic! (the 'impossible' happened)
  (GHC version 7.7.20130920 for i386-unknown-linux):
	allocateRegsAndSpill: Cannot read from uninitialized register
    %vI_s4Zf

Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug

What should I do?

comment:37 Changed 7 months ago by kazu-yamamoto

I installed GHC HEAD on both 32-bit and 64-bit Linux today (without haddock). On both, mighttpd2 builds without any problems.

comment:38 in reply to: ↑ 35 Changed 6 months ago by jstolarek

Replying to kazu-yamamoto:

After reverting, I did "make maintainer-clean" and built and tried this test again.
On both 32bit and 64bit Linux, I got "4 expected passes".

I reverted "bec3c0497fa55e84005d175e0fc6b1d72df961e1". Should I revert other ones?

No, this is the only commit that was relevant here. So this would imply that the test does not work :-/ Strange. It generates the same Cmm as the one that caused the panic.

Sorry for not responding for a couple of weeks; I didn't notice the notifications.

comment:39 Changed 6 months ago by kazu-yamamoto

No problem. If there is anything I can do for you, please feel free to ask. :-)
