Opened 10 years ago
Last modified 9 months ago
#2256 new bug
Incompleteness of type inference: must quantify over implication constraints
Reported by: | simonpj | Owned by: | simonpj
---|---|---|---
Priority: | lowest | Milestone: |
Component: | Compiler (Type checker) | Version: | 6.8.2
Keywords: | | Cc: | Iceland_jack, RyanGlScott
Operating System: | Unknown/Multiple | Architecture: | Unknown/Multiple
Type of failure: | GHC rejects valid program | Test Case: |
Blocked By: | | Blocking: |
Related Tickets: | | Differential Rev(s): |
Wiki Page: | | |
Description
Consider this program (from Iavor):

```haskell
f x = let g y = let h :: Eq c => c -> ()
                    h z = sig x y z
                in ()
      in fst x

sig :: Eq (f b c) => f () () -> b -> c -> ()
sig _ _ _ = ()
```
This example is rejected by both Hugs and GHC but I think that it is a well typed program. The Right Type to infer for g is this:
```haskell
g :: forall b. (forall c. Eq c => Eq (b,c)) => b -> ()
```
but GHC never quantifies over implication constraints at the moment. As a result, type inference is incomplete (although in practice no one notices). I knew about this, but I don't think it's recorded in Trac, hence this ticket.
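For reference, GHC 8.6 later introduced the -XQuantifiedConstraints extension, which allows such a context to be written by hand, even though inference still never produces one. A minimal sketch (with g's body simplified to the trivial one):

```haskell
{-# LANGUAGE QuantifiedConstraints, RankNTypes #-}

-- The quantified context below is the "Right Type" discussed above;
-- GHC will not infer it, but it accepts it when written explicitly.
g :: forall b. (forall c. Eq c => Eq (b, c)) => b -> ()
g _ = ()
```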
Following this example through also led me to notice a lurking bug: see "BUG WARNING" around line 715 of TcSimplify.
Change History (27)
comment:1 Changed 9 years ago by
Architecture: | Unknown → Unknown/Multiple |
comment:2 Changed 9 years ago by
Operating System: | Unknown → Unknown/Multiple |
comment:3 Changed 9 years ago by
Milestone: | 6.10 branch → 6.12 branch |
comment:4 Changed 8 years ago by
Milestone: | 6.12 branch → 6.12.3 |
comment:5 Changed 7 years ago by
Milestone: | 6.12.3 → 6.14.1 |
Priority: | normal → low |
comment:6 Changed 7 years ago by
Type of failure: | → None/Unknown |
comment:7 Changed 7 years ago by
Milestone: | 7.0.1 → 7.0.2 |
comment:8 Changed 7 years ago by
Milestone: | 7.0.2 → 7.2.1 |
comment:9 Changed 6 years ago by
Milestone: | 7.2.1 → 7.4.1 |
comment:10 Changed 6 years ago by
Milestone: | 7.4.1 → 7.6.1 |
Priority: | low → lowest |
comment:11 Changed 5 years ago by
Milestone: | 7.6.1 → 7.6.2 |
comment:13 Changed 3 years ago by
Component: | Compiler → Compiler (Type checker) |
Type of failure: | None/Unknown → GHC rejects valid program |
SPJ: this ticket is not listed on Status/SLPJ-Tickets.
The error message with HEAD (ghc-7.9.20141204):
```
Could not deduce (Eq b) arising from a use of ‘sig’
from the context (Eq c)
  bound by the type signature for h :: Eq c => c -> ()
  at test.hs:5:26-40
Possible fix:
  add (Eq b) to the context of
    the type signature for h :: Eq c => c -> ()
    or the inferred type of g :: b -> ()
In the expression: sig x y z
In an equation for ‘h’: h z = sig x y z
In the expression:
  let
    h :: Eq c => c -> ()
    h z = sig x y z
  in ()
```
comment:14 follow-up: 27 Changed 3 years ago by
I initially put this ticket into my internal notes as "Difficulty: Rocket Science", but I've changed my mind. I think this is actually straightforward to implement.
GHC already has implication constraints, except that they're effectively all Wanteds. So all we would need is the ability to make Given implication constraints. These should be rather straightforward, because we can view top-level instances as Given implication constraints. We can just adapt the current look-in-Givens algorithm to do some matching, just like the instance-selection algorithm does. When we find an implication-constraint Given that matches a Wanted, we use it to solve the Wanted and then emit new Wanteds for the antecedents of the Given implication.
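The scheme described above is essentially what the later -XQuantifiedConstraints extension implements on the Given side. An illustrative sketch (the function names here are made up for the example):

```haskell
{-# LANGUAGE QuantifiedConstraints #-}

-- The Given (forall a. Eq a => Eq (f a)) matches the Wanted Eq (f Int)
-- arising from (==); using it leaves the antecedent Eq Int as a new
-- Wanted, which the global instance solves.
eqFInt :: (forall a. Eq a => Eq (f a)) => f Int -> f Int -> Bool
eqFInt = (==)

-- At a call site the quantified constraint itself becomes a Wanted,
-- discharged here by the parametric instance Eq a => Eq (Maybe a).
demo :: Bool
demo = eqFInt (Just 1) (Just (2 :: Int))
```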
I was initially concerned about overlap here -- what if an implication Given's RHS overlaps with a global instance? -- but this was a red herring. Givens have always potentially overlapped with global instances, and very few people are bothered. For example:
```haskell
min :: Ord a => a -> a -> a
```

If we call min at type Int, the Ord constraint is potentially ambiguous: do we mean the dictionary supplied by the caller or the global Ord Int dictionary? Of course, we mean the local one, and we don't require -XIncoherentInstances to do so. So, worrying about overlap in the context of implication Givens is a red herring. We have a clear rule: local constraints always shadow global ones. That's nice, simple, and unsurprising.
This also doesn't change the "GHC never backtracks" principle. If a local implication-constraint's RHS matches, it is used, just like if GHC finds a global implication constraint (read, global constrained instance).
To implement this, GHC would also need to be able to produce the witness for an implication constraint, but this is just like inferring the lambdas for a higher-rank type. I don't see any challenge here.
Of course, a niggling feeling tells me that this has to be harder than I'm seeing it. But, really, all I see standing in the way is a bit of plumbing.
Thoughts?
comment:15 Changed 3 years ago by
I agree that much of this is not hard. And it's certainly useful. Indeed it is explicitly discussed in our Haskell Workshop 2000 paper Derivable Type Classes.
The issues I see are:
- Inference. If users can write such instances, they will probably expect them to be inferred. But that may wrongly defer a type error from a function to its call sites, by inferring an implication which is actually unsatisfiable.
- Completeness. "Given" constraints interact with each other. How do "given" implications interact with simple "givens"?
- Impredicativity. Can a constraint variable c be instantiated with one of these implications? Perhaps it's just a question of ruling that out.
comment:16 follow-up: 26 Changed 3 years ago by
- Inference: We don't. Just like we don't infer higher-rank types (which, of course, are the same as implication constraints). If a user wants this feature, they have to ask for it.
- Completeness: This is thornier. Here is how I see a problem arising:
```haskell
class C a
class D a
instance D Int

needsD :: D a => a -> ()

implic :: (forall a. C a => D a) => ()
implic = needsD (5 :: Int)
```
When typechecking implic, the local universal dictionary for D a will be used to provide a dictionary to needsD. This will then generate an unsatisfiable wanted C Int. But, of course, we could have just used the global instance D Int instead.
An issue such as this one can't arise today (even with local dictionaries shadowing global ones) because local dictionaries never give rise to new wanteds. If you have a local dictionary, it's good to use.
Given/Given interactions would be the same. If one implication-Given can be used in an interaction, it would be, giving rise to its antecedent Wanted constraints.
This does get me a little worried about nondeterminism. But, I conjecture that if this interaction can give rise to nondeterminism, then it would be possible to exhibit the nondeterminism today, using overlapping instances.
So, it all comes down to specifying completeness against what spec, and I'm saying that we could be complete against a spec where every local constraint shadows every overlapped global one. That is, when (forall a. C a => D a) is in scope, instance D Int is not. Will this be easy enough to use to make it practicable? There will be some surprises around it, but I think it's all quite easy to digest with a little guidance. (Certainly easier to understand than closed type families!)
- Impredicativity: This, to me, is just like the story in kind *. c should be forbidden from unifying with an implication constraint.
comment:18 Changed 3 years ago by
Milestone: | 7.10.1 → 7.12.1 |
Moving to 7.12.1 milestone; if you feel this is an error and should be addressed sooner, please move it back to the 7.10.1 milestone.
comment:21 Changed 23 months ago by
Milestone: | 8.0.1 |
comment:23 Changed 18 months ago by
Cc: | Iceland_jack added |
comment:24 Changed 17 months ago by
Cc: | RyanGlScott added |
comment:25 Changed 17 months ago by
From Proof Relevant Corecursive Resolution, I will copy the relevant text:
Derivable type classes
Hinze and Peyton Jones wanted to use an instance of the form
```haskell
instance (Binary a, Binary (f (GRose f a))) => Binary (GRose f a)
```

but discovered that it causes resolution to diverge. They suggested the following as a replacement:

```haskell
instance (Binary a, ∀b. Binary b => Binary (f b)) => Binary (GRose f a)
```

Unfortunately, Haskell does not support instances with polymorphic higher-order instance contexts. Nevertheless, allowing such implication constraints would greatly increase the expressivity of corecursive resolution. In the terminology of our paper, it amounts to extending Horn formulas to intuitionistic formulas. Working with intuitionistic formulas would require a certain amount of searching, as the non-overlapping condition for Horn formulas is not enough to ensure uniqueness of the evidence. For example, consider the following axioms:
```
κ₁ : (A ⇒ B x) ⇒ D (S x)
κ₂ : A, D x ⇒ B (S x)
κ₃ : ⇒ D Z
```

We have two distinct proof terms for D (S (S (S (S Z)))):

```
κ₁ (λα₁. κ₂ α₁ (κ₁ (λα₂. κ₂ α₁ κ₃)))
κ₁ (λα₁. κ₂ α₁ (κ₁ (λα₂. κ₂ α₂ κ₃)))
```

This is undesirable from the perspective of generating evidence for type classes.
Haskell code:

```haskell
data A
data B a
data D a
data S a
data Z

k1 :: (A -> B x) -> D (S x)
k2 :: A -> D x -> B (S x)
k3 :: D Z

one, two :: D (S (S (S (S Z))))
one = k1 \a1 -> k2 a1 (k1 \a2 -> k2 a1 k3)
two = k1 \a1 -> k2 a1 (k1 \a2 -> k2 a2 k3)
```
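With -XQuantifiedConstraints (GHC 8.6+), the replacement instance suggested by Hinze and Peyton Jones is now expressible. A sketch, using a made-up Binary class with a single serialisation method (not the paper's actual class):

```haskell
{-# LANGUAGE QuantifiedConstraints, UndecidableInstances #-}

-- A made-up stand-in for the paper's Binary class.
class Binary a where
  put :: a -> [Bool]

data GRose f a = GRose a (f (GRose f a))

-- The polymorphic context (forall b. Binary b => Binary (f b)) is the
-- implication constraint from the quoted text, now legal Haskell.
instance (Binary a, forall b. Binary b => Binary (f b)) =>
         Binary (GRose f a) where
  put (GRose x xs) = put x ++ put xs
```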
comment:26 Changed 9 months ago by
Does the following claim hold water and/or illuminate anything?
The implication constraint on implic can either be simplified (e.g. C a immediately refines a and/or has D a as a superclass) or it is only satisfiable by a parametric instance of D. Suppose that overlapping and incoherent instances are not allowed. Then any non-parametric instance of D (e.g. your D Int) would immediately render that implication constraint unsatisfiable.
So the concern is just yet another due solely to overlapping/incoherent instances?
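One way to see that a non-parametric instance cannot discharge such a constraint, using today's -XQuantifiedConstraints and the class names from the quoted example (a minimal sketch, not code from the ticket):

```haskell
{-# LANGUAGE QuantifiedConstraints #-}

class C a
class D a
instance D Int

-- g demands the implication constraint from its callers.
g :: (forall a. C a => D a) => ()
g = ()

-- Any call to g is rejected: under a skolem a with C a given, the
-- Wanted D a has no solution, because the non-parametric instance
-- D Int cannot match the skolem a.
-- bad :: ()
-- bad = g
```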
Replying to goldfire (comment:16 above).
comment:27 Changed 9 months ago by
Your min discussion was helpful to me, but I think it's mostly unrelated, in retrospect; the Ord a dictionary -- again ignoring overlapping/incoherent instances -- could only ever "overlap" with a parametric instance of Ord, since a is skolem. #9701, on the other hand, seems to demonstrate exactly what you were getting at (#12791 too?).
Replying to goldfire (comment:14 above).
OK, with the new typechecker the BUG WARNING problem has gone away. However, when we are attempting to generalise a local binding (remember, this isn't recommended; I'd prefer to have -XMonoLocalBinds on), there is a fixable incompleteness in the algorithm. Specifically, suppose we are generalising a let-binding with constraints (Eq a, (forall b. Ord b => ...a...b...)), and we want to generalise over a. Then simplifyAsMuchAsPossible ... a after all ... (forall a. Eq a => (forall b. Ord b => ...a...b...)). I'm not sure which is better, and it's very obscure anyway, so I don't think it's high priority. I'm just making these notes while I think of them.
Simon