Suppose I have a normally built compiler plugin, and I'd like to use it while building some profiled code. This doesn't work: (non-profiled) GHC searches for the profiled version of the plugin. This is wrong: plugin searches should be done with respect to how GHC itself was compiled, not how the target code is being compiled. (Test case: take the plugins01 test and add -prof to it.)
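To make the failure mode concrete, here is a hypothetical minimal setup (made-up module name and command lines, not the actual plugins01 test):

```haskell
-- MyPlugin.hs: a do-nothing Core plugin, built and installed the normal
-- (vanilla) way. Hypothetical names; the real test case is plugins01.
module MyPlugin (plugin) where

import GhcPlugins

plugin :: Plugin
plugin = defaultPlugin { installCoreToDos = install }
  where
    install _opts todos = do
      putMsgS "MyPlugin: loaded"
      return todos

-- After installing the plugin's package with only the vanilla way built,
-- compiling a profiled client fails, because GHC looks for the plugin's
-- p_hi/p_o files instead of the hi/o files it can actually load and run:
--
--   ghc -prof -fplugin=MyPlugin Client.hs
```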
Now that we have -plugin-package, it should be a simple matter to ensure that plugin lookups follow a different codepath, though I'm not sure how to figure out which way GHC itself was built; presumably the same way TH does it.
Trac metadata:

Version: 7.11
Type: Bug
TypeOfFailure: OtherFailure
Priority: normal
Resolution: Unresolved
Component: Profiling
Test case:
Differential revisions:
BlockedBy:
Related:
Blocking:
CC:
Operating system:
Architecture:
A similar problem applies to TH, although there you can work around it with -fexternal-interpreter, which ships the code to an external interpreter built with profiling.
Let's talk about how to fix this.
The first design decision is whether to maintain a separate package database for profiled libraries. A split database is clearer and more direct: an entry in the traditional package database indicates hi/o files, while an entry in the profiling package database indicates p_hi/p_o files. It also clarifies the implementation, since it is more obvious whether a given lookup is against the host or the target database, and therefore when you need to do something different. On the other hand, splitting would cause a lot of churn on the tooling side. For now, I think we should not split it up; if we did, Packages would need some changes.
Packages knows nothing about profiling versus non-profiling loading; the key player here is LoadIface, and here things get tricky. Right now, both loadPluginInterface and loadSysInterface go through the same code path, loadInterface (via loadInterfaceWithException). We need to split these into two distinct codepaths. The normal, non-plugin codepath should query and update the HPT and EPS as it does today. The plugin codepath, however, needs to query its OWN copy of the EPS (and the HPT too, I suppose, if you want to support loading a plugin from the same package you're building). The hacky way to do this is to add some extra fields to HscEnv (though it's unclear to me whether you also want to play this trick for dynamic/non-dynamic, in which case you may want a sub-datastructure for this state, so you can be polymorphic in the choice). The right distinction is probably a host HPT/EPS (things that can be loaded into this GHC, i.e. plugins) and a target HPT/EPS (the thing you're compiling, and its interfaces).
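To make the shape of that concrete, here is a simplified sketch of the sub-datastructure idea; the types and field names below are placeholders I made up, not GHC's real definitions:

```haskell
-- A simplified model, not GHC's real types: bundle the HPT and EPS into one
-- record and keep two copies in HscEnv, one for code loadable into GHC
-- itself (host, i.e. plugins) and one for the code being compiled (target).
module UnitEnvSketch where

import Data.IORef (IORef)
import qualified Data.Map as Map

type ModuleName = String                       -- stand-in for GHC's type
data HomeModInfo = HomeModInfo                 -- interface + details + linkable
data ExternalPackageState = ExternalPackageState

type HomePackageTable = Map.Map ModuleName HomeModInfo

-- One self-contained unit of loaded-interface state.
data UnitEnv = UnitEnv
  { ue_hpt :: HomePackageTable
  , ue_eps :: IORef ExternalPackageState
  }

-- Hypothetical split: which copy a lookup consults depends on whether we
-- are loading a plugin (host) or an ordinary import (target).
data HscEnv = HscEnv
  { hsc_host_unit   :: UnitEnv   -- plugins, and anything else GHC itself runs
  , hsc_target_unit :: UnitEnv   -- the code being compiled
  }

data WhichUnit = HostUnit | TargetUnit

unitFor :: WhichUnit -> HscEnv -> UnitEnv
unitFor HostUnit   = hsc_host_unit
unitFor TargetUnit = hsc_target_unit
```

The point is only that interface loading becomes parameterised by which copy it consults, rather than implicitly using the single global HPT/EPS.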
Adding a new copy of the HPT and EPS has deep implications: all code that interacts with them needs to be updated to look in the correct HPT/EPS. Of particular interest is compiler/main/DynamicLoading.hs; it interacts with interface loading via the plugin interfaces, but it gets a TyThing using lookupTypeHscEnv. With two copies of the HPT/EPS, it will need a lookupHostTypeHscEnv invocation instead. Additionally, in GhcMake, the HPT should only be updated for the side that is actually being built. There is a bit of trickery needed here, because a common trick is for Cabal to build a package once without profiling and then once again with profiling; the local interfaces from the non-profiling pass somehow need to get shoved into the host/plugin HPT if we are to see them. Perhaps it's best not to attempt that at first. In any case, lots of functions need to be looked at, and a decision made for each.
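For example, a host-side analogue of lookupTypeHscEnv could look roughly like the following; the types are again simplified stand-ins and the function name is hypothetical:

```haskell
-- Sketch of a hypothetical lookupHostTypeHscEnv: like lookupTypeHscEnv it
-- checks the home-package table first and falls back to the package type
-- environment, but it consults the *host* copies of the HPT and EPS.
module LookupSketch where

import Control.Applicative ((<|>))
import Data.IORef (IORef, readIORef)
import qualified Data.Map as Map

type Name = String                              -- stand-ins for GHC's types
data TyThing = SomeTyThing deriving Show

type TypeEnv = Map.Map Name TyThing
newtype HomeModInfo          = HomeModInfo { hm_types :: TypeEnv }
newtype ExternalPackageState = EPS         { eps_PTE  :: TypeEnv }

data HostUnit = HostUnit
  { host_hpt :: [HomeModInfo]                   -- simplified HPT
  , host_eps :: IORef ExternalPackageState
  }

lookupHostTypeHscEnv :: HostUnit -> Name -> IO (Maybe TyThing)
lookupHostTypeHscEnv host name = do
  eps <- readIORef (host_eps host)
  let hptEnv = Map.unions (map hm_types (host_hpt host))
  return (Map.lookup name hptEnv <|> Map.lookup name (eps_PTE eps))
```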
It's interesting to note that this is not restricted to profiling versus non-profiling; it is also crucial for cross compilation. It is becoming more and more obvious to me that profiling and cross compilation share a fair bit of desired infrastructure.
In the cross-compilation setting, the plugin database would have to hold packages compatible with the host, while the regular target database would hold packages built for the target platform.
Therefore I'm not certain this is, or should be, restricted to profiling/non-profiling only. I'm also uncertain whether it would make sense at some point to have more than two package databases. If I want to allow multi-target support down the line, would I need a package database for each target, or should they live in one package database and share the interface files while having different object files? I somewhat doubt that would be a practical generic solution, due to CPP and architecture differences.
I suggest just tackling the profiling problem, as a way of avoiding biting off more than you can chew. While it's reasonable to try to anticipate what the cross-compiling version will need, I also suspect that you'll have to rewrite a lot of code for the cross-compiling case anyway. So if you anticipate too much, you'll find you've tied yourself up in knots for code you didn't actually need. This is open source; as long as you're persistent, we can take the time to rewrite several times ;)
Let's talk about the question of separate versus combined package databases. Common sense seems to dictate two contradictory positions:
Profiling versions of a library "ought" to be in the same package database (or, at least, that's how we do it today)
Different cross-compiler targets "ought" to be in different package databases
The crux of the matter is what having separate package databases buys you: you can have multiple copies of the package with the *same* IPID, but compiled differently. Sometimes this occurs in the single package database today (think about the profiling and non-profiling versions of a library), but these have to be awkwardly squeezed into the same package database entry. "Fortunately", the package database entry looks no different whether or not profiling libraries are present, so distros that want to bundle profiling libraries separately from the real ones just drop in some appropriately named p_hi and _p.a files. But it is also a pain for cabal-install Nix-style local builds, which doesn't really want to be retroactively stuffing profiling libraries into preexisting installations.
So, *why* would you want to ensure that the profiling and non-profiling versions of a library get the same symbol names (via the IPID)? In a world where Template Haskell obeys strict staging, I think the answer is: never. But when we blur the lines between compile time and run time (as our current TH model encourages, since it makes no distinction between a compile-time and a run-time dependency), there are situations where you have a Name from the compile-time world and want to lift it into the run-time world (well, the type class is called Lift, but IMO it goes the wrong way!). At that point, the outrageous coincidence of compile-time and run-time entities (e.g., profiling and non-profiling) being named identically is very useful, because you can just slide a name from one realm to the other and still have it mean something. (Of course, if a preprocessor macro caused a Name to stop existing, then you did something bad and GHC will probably panic. So this kind of punning is not very safe; empirically, it seems to have been OK for profiling because no one ever CPPs for profiling.)
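As a small illustration of that stage-crossing (this is ordinary TH, nothing new): 'length is captured at compile time as a NameG pinned to a package and module, and the splice drops it into run-time code, so the compile-time and run-time worlds have to identify that entity the same way.

```haskell
{-# LANGUAGE TemplateHaskell #-}
-- 'length is a Name captured at compile time (a NameG, carrying a package
-- and module); the splice drops it into run-time code, relying on the
-- compile-time and run-time worlds naming the same entity identically.
module StageCross where

import Language.Haskell.TH

three :: Int
three = $( varE 'length `appE` listE (map (litE . integerL) [1, 2, 3]) )
```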
So, if you are going to insist on a strict staging separation (I think, from GHC's perspective, everything can be implemented correctly this way; it's just a matter of user code not using NameG and Typeable in naughty ways), then I don't see any reason to have multiple package databases. If I build a profiling version of a library, it should get its own entry in the package database, with a different IPID than its vanilla version (even if it's built from the same inputs), and its own set of dependencies (on the profiling versions of its deps). And if the IPIDs are different, they can totally coexist in the same package database. Now you can eagerly error if a profiling library is not present (rather than wait for the attempt to load a p_hi to fail). You can even eliminate the p_ prefix for profiling object/interface files.
Some "soft" advice on this patchset. I think it is wise to reduce churn when making a big patch like this. It makes the patch more likely to succeed. So I think there are two reasonable things you can do with this analysis:
You can start by refactoring profiling so that it lives in a separate database, eliminating p_hi files. This is a lot of churn: Cabal, build systems, distro packaging, etc. will all have to update--hopefully for the better. While you are doing this, you will run into the blocking problem of handling plugins and TH, since they are trying to load code from the profiling database which will no longer exist. So then you'll solve this ticket, by adding a second EPS/HPT, etc, as described above.
OR, you could just solve the blocking problem first, introducing a new EPS/HPT, but assuming everything is in the same database, and then solve that problem later (or not at all), perhaps when you need cross compiling to work.
P.S. You wonder if you will need two databases. Let me flip your question around: rather than databases, does it ever make sense to have more than two EPSes in GHC? The only situation is if GHC is compiling code for two targets *at the same time.* I can't see this ever being relevant for cross-compilation, but rather mysteriously GHC does try to generate dynamic and non-dynamic objects at the same time using -dynamic-too. Still, we get along just fine with one EPS even in this case, so I think two is enough. Obviously you can maintain more databases on disk, but GHC will only ever look at two at a time.
With the ultimate goal of supporting TH in cross-compilation, I've got a WIP branch at https://github.com/shlevy/ghc/tree/lifecycle-staging that separates out compile-time and run-time code more rigorously (the current work is on separating out the HPT). This work is temporarily on hold while I get a quick fix working using the external interpreter running on the target, but I hope to get back to it soon, as it's clearly the better long-term solution and works for plugins too.