Changes between Version 1 and Version 2 of Commentary/ResourceLimits


Timestamp: Mar 8, 2013 8:44:59 AM
Author: ezyang

= Resource Limits =

This page describes a proposed resource-limits capability for GHC. The idea is to give users the ability to create and use cost-centres inside programs (so that cost-centres do not necessarily have to be tied to source code locations), and then to provide in-program access to the heap census and other information. The end result is the ability to impose resource limits on space usage, with more precise profiling as a side-effect.

== Front-end changes ==

The proposed API:
{{{
type CostCentreStack
type CostCentre
type Listener

ccsDynamic :: CostCentreStack

newCostCentre :: IO CostCentre
pushCostCentre :: CostCentreStack -> CostCentre -> IO CostCentreStack
setCostCentreStack :: CostCentreStack -> a -> IO ()
listenCostCentreStack :: CostCentreStack -> Int -> IO () -> IO Listener
unlistenCostCentreStack :: Listener -> IO ()
}}}

The general usage of this API looks like this:

{{{
import Control.Concurrent
import Control.Exception (evaluate)
-- plus the proposed cost-centre API above

f n =
    let xs = [1..n::Integer]
    in  sum xs * product xs

main = do
  m <- newEmptyMVar
  forkIO $ do
    -- build a fresh cost-centre stack for this computation
    cc <- newCostCentre
    ccs <- pushCostCentre ccsDynamic cc
    tid <- myThreadId
    -- if the heap census attributes more than the threshold to ccs,
    -- report it and kill the worker thread
    l <- listenCostCentreStack ccs 2000
           (putStrLn "Too much memory is being used" >> killThread tid)
    let thunk = f 20000
    setCostCentreStack ccs thunk
    evaluate thunk
    unlistenCostCentreStack l
    putMVar m ()
  takeMVar m
}}}
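
As a quick check on the API's ergonomics, here is a sketch of a small wrapper one might build on top of it. This is only a sketch against the proposed signatures above; the name {{{evaluateWithLimit}}} is made up, and it assumes the {{{Int}}} passed to {{{listenCostCentreStack}}} is the usage threshold the census checks against.

{{{
import Control.Concurrent (myThreadId, throwTo)
import Control.Exception (ErrorCall (..), evaluate)
-- plus the proposed cost-centre API above

-- Force a value on its own freshly created cost-centre stack, and interrupt
-- the calling thread with an exception if the census-reported usage for that
-- stack exceeds the threshold while the value is being evaluated.
evaluateWithLimit :: Int -> a -> IO a
evaluateWithLimit limit x = do
    cc  <- newCostCentre
    ccs <- pushCostCentre ccsDynamic cc
    tid <- myThreadId
    l   <- listenCostCentreStack ccs limit
             (throwTo tid (ErrorCall "resource limit exceeded"))
    setCostCentreStack ccs x
    r <- evaluate x
    unlistenCostCentreStack l   -- a robust version would use bracket here
    return r
}}}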

Another use case is creating more fine-grained SCCs based on runtime properties rather than source-level features.

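For example (a hedged sketch only, reusing the proposed API above; {{{clientCCS}}} and {{{sumFor}}} are invented names, and a per-client map stands in for whatever runtime property you want to break costs down by):

{{{
import Control.Exception (evaluate)
import Data.IORef
import qualified Data.Map as Map
-- plus the proposed cost-centre API above

-- One dynamically created cost-centre stack per runtime key (here, a client
-- name), so census data is broken down by a runtime value rather than by a
-- source location.
clientCCS :: IORef (Map.Map String CostCentreStack) -> String -> IO CostCentreStack
clientCCS ref name = do
    m <- readIORef ref
    case Map.lookup name m of
      Just ccs -> return ccs
      Nothing  -> do
        cc  <- newCostCentre
        ccs <- pushCostCentre ccsDynamic cc
        writeIORef ref (Map.insert name ccs m)
        return ccs

-- Attribute the cost of a per-client computation to that client's stack.
sumFor :: IORef (Map.Map String CostCentreStack) -> String -> Integer -> IO Integer
sumFor ref name n = do
    ccs <- clientCCS ref name
    let thunk = sum [1 .. n]
    setCostCentreStack ccs thunk
    evaluate thunk
}}}
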
I am planning on providing a semantics, based on GHC's current profiling semantics.

Some points to bikeshed:

 * Naming: CostCentreStack/CostCentre or CCS/CC?

 * Provide {{{withCostCentreStack :: CostCentreStack -> a -> a}}}, which is simply an {{{IND_PERM}}} with the CCS set? In my testing, this had to be done very carefully, because if the inner value is a thunk, then this is a no-op.

 * Provide {{{withCostCentre :: CostCentre -> (a -> b) -> (a -> b)}}}, which provides the equivalent of function entry (adds the CC to whatever the CCCS is); see the sketch after this list. I don't have a good use case for complicated cost-centre stacks and listeners and have not implemented it yet.

 * Should listeners automatically deregister themselves when their event triggers? Active listeners are considered roots, so they must be handled with care to avoid leaks.

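Purely as an illustration of {{{withCostCentre}}} (nothing here is implemented; {{{annotateFib}}} is an invented name):

{{{
-- Calls made through the returned function push `cc` onto whatever the
-- current cost-centre stack is, like entering an SCC-annotated function.
annotateFib :: CostCentre -> Integer -> Integer
annotateFib cc = withCostCentre cc fib
  where
    fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
}}}
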
If we implement something like {{{withCostCentre}}}, we also need to adjust Core slightly. There are two choices:

 * Modify Tick so that it can take an optional argument (a cost-centre), and modify the type-checker appropriately. This is not so great, because we would be making an already ad hoc addition to the Core language even more complicated, even if the extra typing rules are not that complicated.

 * Add a new Tickish variant, which has no impact on code generation but still appropriately modifies optimization behavior, and introduce new prim-ops to actually set cost-centres.

Note that ordinary (source-generated) ticks could also be converted into prim-ops. While this sounds appealing, it gets complicated: true source-level SCCs need to be statically initialized (so the runtime system knows about them and can assign them an integer ID), and moving them into hard-wired constants would complicate the late-stage STG passes. (It's possible, but probably loses out as far as overall complexity goes.)

== Runtime changes ==

== Garbage collector changes ==

One of the killer features of this machinery is the ability to say:

{{{
cc <- newCC "<user>"
let x = scc cc someUserCode
limitCC cc 100000
}}}

 * {{{Listener}}} is a new garbage-collected object; we expect it can be implemented as a simple {{{PRIM}}} using techniques similar to {{{StgMVarTSOQueue}}}.
 * Checks for listeners occur during the heap census; you'll need to pass the {{{-hc}}} flag for this machinery to do anything. See also #7751, which will dramatically improve performance.
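
For concreteness, since the checks are driven by the heap census, the program has to be built for profiling and run with census options, along these lines (standard GHC profiling flags; the file name is made up):

{{{
$ ghc -prof -fprof-auto -rtsopts Limits.hs
$ ./Limits +RTS -hc -i0.1 -RTS
}}}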

== Commentary ==

=== Callback triple fault ===

A finalizer could trigger a new finalizer, ad infinitum. Maybe we don't have to do anything about this.