wiki:LinearTypes/Implementation

On this page we describe the principles behind the implementation of the linear types extension as described at LinearTypes.

The current implementation progress can be seen here.

Principles

The main principle behind the implementation is to extend FunTyCon with an extra argument which indicates the multiplicity of the arrow. The data type for multiplicities is defined in compiler/basicTypes/Weight.hs and is called Rig (both the name of the file and the name of the type are temporary). There are two multiplicities: One, which indicates that the function is linear, and Omega, which indicates that it is not.
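
Concretely (a sketch of the representation, not the exact constructor layout in the tree), a function type is now a fully applied type constructor carrying the multiplicity alongside the RuntimeRep arguments and the argument and result types:

 TyConApp funTyCon [m, rep1, rep2, a, b]   -- m :: Multiplicity, i.e. One or Omega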

Core binders also have a multiplicity attached to them.

The rest of the implementation is essentially correctly propagating and calculating linearity information whenever a FunTy is created.

Frontend

The source representation of a weight is HsRig; it is parametrised by the current pass, like all other front-end types.

data HsRig pass = HsZero | HsOne | HsOmega | HsRigTy (LHsType pass)

We also define HsWeighted pass a, which is an HsRig associated with a thing.
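
A plausible shape for this pairing (hedged: the actual field names may differ) is:

data HsWeighted pass a = HsWeighted { hsWeight :: HsRig pass, hsThing :: a }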

Parsing

There is currently a rule in the lexer to parse -->. as ITlolly2; this syntax was chosen because it is easy to parse for now. The linear arrow can be written as ->. or, with UnicodeSyntax, as ⊸.

The syntax for polymorphism is currently:

a -->. (m) b

where m is a multiplicity variable. For example, the linear identity function can be written:

id :: a -->. (One) a
id x = x


DataCon

Rebuilding expressions in the optimiser

There are situations in the optimiser where lets move through cases, for example when the case-of-case transformation is applied.

The simplifier creates let bindings under certain circumstances which are then inserted later; these are returned in SimplFloats. However, we have to be somewhat careful here when it comes to linearity. If we create a floating binding x in the scrutinee position:

case_w (let x[1] = "Foo" in Qux x) of                                              
  Qux _ -> "Bar"                                                                   

then the let will end up outside the case if we perform the KnownBranch or case-of-case optimisation:

let x[1] = "Foo"                                                                   
in "Bar"                                                                           

So we get a linearity failure, as the one usage of x is eliminated. However, because the ambient context is an Omega context, we know that we will use the scrutinee Omega times and hence all bindings inside it Omega times as well. The failure was that we created a [1] binding whilst inside this context and it then escaped without being scaled. We also have to be careful in the other direction: if we have a [1] case

case_1 (let x[1] = "Foo" in Qux x) of                                              
  Qux x -> x                                                                       

then the binding maintains the correct linearity once it is floated from the case and KnownBranch is performed.

let x[1] = "Foo"                                                                   
in x                                                                               

The difficulty mainly comes from the fact that we only discover this context at a later point, once we have rebuilt the continuation. So, whilst rebuilding a continuation, we keep track of how many case-of-case-like opportunities take place and hence how much we have to scale lets floated from the scrutinee. This is achieved by adding a Writer-like effect to the SimplM data type. It seems to work in practice, at least for T12944, which originally highlighted this problem.

Why do we do this scaling afterwards rather than when the binding is created? It is possible the binding comes from a point deep inside the expression, and it wasn't clear to me that we know enough about the context at the point we make the binding, due to the SimplCont type. It might be possible to thread this information through to get it right at the definition site. For now, I leave warnings and this message to my future self.
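
As a rough sketch of the scaling step (scaleFloats is a hypothetical helper, not the actual SimplM code): once the accumulated factor is known, it is multiplied into the multiplicity of every binding floated out of the scrutinee.

-- Hypothetical helper: scale floated binders by the ambient multiplicity
-- discovered while rebuilding the continuation.
scaleFloats :: Rig -> [(Var, Rig)] -> [(Var, Rig)]
scaleFloats ctx floats = [ (x, RigMul ctx w) | (x, w) <- floats ]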

For an in-depth discussion see: https://github.com/tweag/ghc/issues/78 and https://github.com/tweag/ghc/pull/87

FunTyCon

FunTy is a special case of a Type: a fully applied function type constructor, which is now a type constructor with five arguments. This special case is constructed in mkTyConApp. The problems come when a FunTy is deconstructed, for example in repSplitTyConApp_maybe; if this list is not the right length then you get some very confusing errors. The place which was hardest to track down was in Coercion, where decomposeFunCo had some magic numbers corresponding to the positions of the types of the function arguments.

Look for Note [Function coercions] and grep for lists of exactly length 5 if you modify this for whatever reason.

splitFunTy and mkFunTy

The core of the implementation is a change to splitFunTy. As the arrow now has an associated weight, splitFunTy must also return the weight of the arrow. Many changes to the compiler arise from situations where splitFunTy is called and we must then keep track of the additional information it returns. Further, when there is a call to mkFunTy, we must also supply a multiplicity, and it is often not obvious what it should be. These call sites are often marked as TODO in the code base, and it is hoped that as more linearity information is propagated it will become more obvious at more of them how we need to construct these arrows.
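
As a sketch of the changed signatures (the exact names and argument order in the tree may differ):

splitFunTy :: Type -> (Rig, Type, Type)   -- now also returns the weight of the arrow
mkFunTy :: Rig -> Type -> Type -> Type    -- now takes the multiplicity of the arrow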

Core Lint

TODO - this is described somewhat in the minicore document, but it is not finished. In particular, join points are not implemented at all. This hasn't been a problem so far, as the only linearity has come from data constructors, which are never made into join points.

Polymorphism

The principle of the polymorphism implementation is simple. We modified the function type constructor to take an extra type argument, so its kind is now:

 (->) :: forall (m :: Multiplicity) (rep1 :: RuntimeRep) (rep2 :: RuntimeRep). TYPE rep1 -> TYPE rep2 -> *

Then all arrows can be polymorphic in their multiplicity.
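
For example (a hypothetical signature written in the source syntax from the Parsing section above; this is not a function from the patch), composition can be polymorphic in the multiplicity of the arrow it transforms:

compose :: forall (p :: Multiplicity). (b -->. (p) c) -> (a -->. (p) b) -> a -->. (p) c
compose f g x = f (g x)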

Data Constructors are polymorphic

A key part of the original proposal was that the type of data constructors was linear:

(,) :: a ->. b ->. (a, b)

However, this was quickly found not to work properly. Below are two examples, as described by Arnaud.

Example 1

foo :: Identity (a -> b) -> a -> b
foo = unIdentity

foo (Identity Just)

The last line doesn't type check because foo expects an Identity (a -> b) but Identity Just is inferred to have type Identity (a ⊸ b). The limited subtyping doesn't handle this case. In a perfect world, Identity Just would have been elaborated to Identity (\x -> Just x) :: Identity (a -> b), but at the time the application Identity Just was type-checked, it was not known that Just should be cast to an unrestricted arrow type.

Example 2

The second problem is much more obvious in retrospect: there are typeclasses defined on (->), and they can fail to apply when using linear functions.

import Control.Category

Just . Just -- fails

The latter fails because there is no Category instance for (⊸). In the case of Category we can write an instance, but it is not necessarily the case that an instance for (->) will have a corresponding instance for (⊸). And even if we have a Category instance for (⊸), it is not clear which expressions will typecheck: probably one of the following will fail

import Control.Category

(Just :: _ -> _) . Just
Just . (Just :: _ -> _)

Polymorphic Constructors

Simon quickly suggested a solution to these problems: make the type of linear data constructors polymorphic when they are used as terms (their type stays linear when they are used in patterns).

(,) :: forall (p :: Multiplicity) (q :: Multiplicity). a ->@{p} b ->@{q} (a, b)

Currently this is simplified to a single multiplicity variable for the sake of a simpler implementation:

(,) :: forall (p :: Multiplicity). a ->@{p} b ->@{p} (a, b)

We never infer multiplicity-polymorphic arrows (as with levity polymorphism). Any multiplicity variables which get to the top level are defaulted to Omega. Thus, in most cases the multiplicity argument is defaulted to Omega or forced to be Omega by unification.
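
For example (an illustrative snippet, not a test from the tree), using (,) with no signature leaves the multiplicity variable unconstrained, so it is defaulted:

pair = (,)
-- p defaults to Omega, so: pair :: a -> b -> (a, b)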

Implementation

The way this is implemented is that every data constructor is given a wrapper, with no exceptions. Even internally used data types have wrappers for the moment, which makes the treatment quite uniform. The most significant challenge of this endeavour was giving built-in data types wrappers, as previously none of them had wrappers. This led to the refactoring of mkDataConRep and the introduction of mkDataConRepSimple, which is a pure, simpler version of mkDataConRep.

Once everything has a wrapper, you have to be quite careful about the difference between dataConWrapId and dataConWorkId. There were a few places where this used to work by accident, as the two were the same thing for built-in types.

In particular, functions like exprIsConApp_maybe are fragile, and I spent a while looking there.

Other uses can be found by grepping for omegaDataConTy. There are probably places where these can be removed by using dataConWorkId rather than dataConWrapId. The desugaring of ConLikeOut uses dataConWrapId, though, so anything in source syntax should apply the extra argument.

As another note, be warned that the serialisation for built-in tuples is different from that for normal data constructors. I didn't know this until I printed out the uniques - ad397aa99be3aa1f46520953cb195b97e4cfaabf

Otherwise, the implementation followed much the same path as levity polymorphism.

Rules and Wrappers

Giving data constructors wrappers makes RULES mentioning data constructors work less well. Mentioning a data constructor in a RULE currently means the wrapper, which is often inlined without hesitation, and hence the rule will not fire at a later point. How to solve this is currently unresolved.
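
For example (an illustrative rule, not one from the tree):

{-# RULES "fromJust/Just" forall x. fromJust (Just x) = x #-}

Here the Just on the left-hand side now refers to the wrapper, so once the wrapper has been inlined the rule can no longer match.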

magicDict

A specific point of pain was magicDict, which is a special identifier that does not exist at runtime. It relies on a built-in RULE in order to eliminate it; the rule is defined at match_magicDict. There are two uses of it in base.

If you don't eliminate it then you will get a linker error like:

/root/ghc-leap/libraries/base/dist-install/build/libHSbase-4.12.0.0-ghc8.5.so: undefined reference to `ghczmprim_GHCziPrim_magicDict_closure'

I made the matching more robust in the two places in base by using a function as the argument to magicDict rather than a data constructor as the builtin rule only uses that information for the type of the function.

Typechecking

The internal representation of a multiplicity is called Rig.

data Rig = Zero                                                                    
         | One                                                                     
         | Omega                                                                   
         | RigAdd Rig Rig                                                          
         | RigMul Rig Rig                                                          
         | RigTy Type                                                              
  deriving (Data) 

Each constructor represents how many times a variable is allowed to be used. There are precisely two places where we check how often variables are used.

  1. In TcEnv.tc_extend_local_env, which is the function which brings local variables into scope. Upon exiting a binder, we call tcSubWeight (via check_binder) to ensure that the variable usage is compatible with the declared multiplicity (if no multiplicity was declared, a fresh existential multiplicity variable is created instead).
    • tcSubWeight emits constraints of the form π ⩽ ρ. In check_binder there is a call to subWeightMaybe, which checks for obvious cases before we delegate to tcSubWeight, which decomposes multiplication and addition constraints before calling tc_sub_weight_ds in order to check for precise equality of types. This function will also do unification of variables.

  2. In tc_sub_type_ds, in the FunTy case, we unify the arrow multiplicity, which can lead to the unification of multiplicity variables.
    • tc_sub_type_ds emits constraints of the form π = ρ; this is achieved by a call to tc_sub_weight_ds, which just calls rigToType on the Rigs before checking for equality using tc_sub_type_ds recursively.

A better implementation would probably emit a real constraint π ⩽ ρ and then add logic for solving it to the constraint solver. The current ad-hoc approach reduces the task of checking the relation to checking certain equality constraints.

There are two useful functions in this dance. In order to use the normal unification machinery we eventually call tc_sub_type_ds, but before that we check for domain-specific rules we want to implement, such as 1 <= p; this is achieved by calls to subweight or subweightMaybe. The flattenRig function removes redundancy from the representation (by replacing RigTy omegaDataConTy with Omega, and likewise for One).
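
A sketch of flattenRig (assuming eqType, and a oneDataConTy analogous to the omegaDataConTy above):

flattenRig :: Rig -> Rig
flattenRig (RigTy ty)
  | ty `eqType` omegaDataConTy = Omega
  | ty `eqType` oneDataConTy   = One
flattenRig r = r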

It is also important that subweight is checked before rigToType is called, as there is no Type representation for Zero: it is not allowed to be written in user programs.

Solving constraints

Constraint solving is not completely designed yet. The current implementation follows very simple rules, to get the implementation off the ground. Basically, both equality and inequality constraints are treated as syntactic equality unification (as opposed, in particular, to unification up to the laws of multiplicities as in the proposal). There are a few other rules (described below) which are necessary to type even simple linear programs.

The 1 <= p rule

Given the current domain, 1 is the smallest element. As such, we assume 1 is smaller than everything, which allows more functions to type check.

This is implemented by making sure to call subweight on the weights before passing them to the normal unifier, which knows nothing special about multiplicities. This can be seen at both tc_extend_local_env and tc_sub_type_ds. At the moment we also get much better error messages by doing this short-circuiting.
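
A sketch of the short-circuit (a hypothetical simplification of subweightMaybe, not the code in the tree):

subweightMaybe :: Rig -> Rig -> Maybe Bool
subweightMaybe One   _     = Just True   -- 1 <= p holds for every p
subweightMaybe Omega Omega = Just True
subweightMaybe Zero  Zero  = Just True
subweightMaybe _     _     = Nothing     -- no obvious answer: defer to the unifier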

Complex constraints

Multiplication and addition are approximated (see the sketch after this list):

  • A constraint of the form p1 * p2 <= q is solved as p1 <= q and p2 <= q.
  • A constraint of the form p1 + p2 <= q is solved as Omega <= q.
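
The following sketch shows the decomposition (approxLeq is a hypothetical helper, not GHC's actual code; Rig is the type from the Typechecking section above):

-- Decompose a constraint p <= q into simpler constraints.
approxLeq :: Rig -> Rig -> [(Rig, Rig)]
approxLeq (RigMul p1 p2) q = approxLeq p1 q ++ approxLeq p2 q   -- p1*p2 <= q  ~>  p1 <= q, p2 <= q
approxLeq (RigAdd _ _)   q = [(Omega, q)]                       -- p1+p2 <= q  ~>  Omega <= q
approxLeq p              q = [(p, q)]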

FunCo

FunCo is modified to take an extra coercion argument which corresponds to coercions between multiplicities. This was added because there was a point where mkTyConAppCo took a coercion as an argument and needed to produce a Rig. It was defaulted to Omega for a long time, but eventually I changed it to a Coercion. This seemed to work, but there were some problems in CoreLint which mean the check for the role of the coercion has been commented out for now.
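
Schematically (a sketch of the constructor's shape, not its exact definition):

-- FunCo now takes, in order: a Role, a coercion between multiplicities,
-- a coercion between argument types, and a coercion between result types.
FunCo Role Coercion Coercion Coercion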

Specialisation

Unsolved multiplicity variables are specialised to ω by the following functions; calls to isMultiplicityVar mark the places where we do this defaulting:

  1. TcSimplify.defaultTyVarTcS
  2. TcMType.defaultTyVar

Debugging

If you are debugging then it is very useful to turn on the explicit printing of weights in two places.

  1. The Outputable instance for Weighted in Weight.
  2. The weight of variables in Var, line 309.

These are disabled by default as they affect test output.

Module Cycles

You have to be very careful about where you define functions which operate on Rig: a Rig contains a Type, and types contain Vars, which contain a Rig.

This means that Weight, UsageEnv and Type all have things added to their hs-boot files. I managed to play this game well enough but it was quite precarious adding new definitions into this cycle. This is why `

Misc

  • Patterns are type checked in a context multiplicity which scales the constructor fields, extending the case_p rule from the paper (see the example below).
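
A small illustration, in the pseudo-Core notation used in the optimiser section above:

case_w e of (a, b) -> ...   -- fields scaled by ω: a and b may be used unrestrictedly
case_1 e of (a, b) -> ...   -- fields scaled by 1: a and b must be used linearly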