Changes between Version 25 and Version 26 of DataParallel/WorkPlan


Timestamp:
May 21, 2009 12:17:42 AM
Author:
chak
Comment:

--

  • DataParallel/WorkPlan

    v25 → v26 (removed lines are prefixed with `-`, added lines with `+`)
      === Issues that need discussion and planning ===
      
    + * '''Code blow up:''' Even for Quickhull, the generated Core is too large to admit any sensible attempt at optimising it.  ''This is our main roadblock at the moment.''
    +
      * '''Unlifted functions:''' specify the exact behaviour of this optimisation and how the unliftedness of a named function is propagated across module boundaries.
    -
    - * '''Code blow up:''' what do we do about dictionary instances, their inlining, floating, and sharing?
    -
    - * What is the status of using TH for generating library boilerplate?
      
      === Milestones ===
      
    + 0. Get Quickhull to run fast.
      0. The system should be usable for small applications for the GHC 6.12 release.
      
      …
      – status: partly implemented, but still needs serious work
      * All segmented operations have been removed from the backend library.
    + * Roman is currently adapting the vectoriser to make use of the separation of data and shape in the library.  This turned out to be more work than expected, but it should also simplify the vectoriser.
      * We still don't have the code blow up under control.
      * Before any further major changes to the library, Roman first needs to re-arrange things such that the library boilerplate is generated instead of being hardcoded; otherwise, changes require a lot of tiresome code editing.  This is partially done.
      …
      ''Simon''::
      '''New dictionary representation and optimisation'''
    - – status: prototype shipped to Roman, seems to work.  I am doing detailed perf comparisons before committing to HEAD.
    + – status: prototype shipped to Roman, seems to work.  I am doing detailed perf comparisons before committing to HEAD.  '''NB:''' We won't be able to make progress with benchmarks until we can use the new inliner (currently we have no version that applies to the HEAD).
    2929 
      ''Gabi''::
    - '''Regular multi-dimensional arrays''' & '''Hierarchical matrix representation'''
    - – status: partially implemented benchmark; regular multi-dimensional arrays are in the design phase
    + '''Regular multi-dimensional arrays (language design and implementation)''' & '''Hierarchical matrix representation'''
    + – status: partially implemented benchmark; regular multi-dimensional arrays are in an experimental state
      
      ''Manuel''::
      …
      * '''Desugaring comprehensions:''' The current desugaring of array comprehensions produces very inefficient code.  This needs to be improved.  In particular, eliminate the `map/crossMap` base case in `dotP` for `x<-xs` and use `zipWith` instead of `map/zip`.  Moreover, `[:n..m:]` is currently desugared to `GHC.PArr.enumFromToP`; it needs to use the implementation from the current DPH backend instead.
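A minimal sketch of the improvement the desugaring bullet asks for, using ordinary Haskell lists in place of DPH's parallel arrays; `dotNaive` and `dotFused` are hypothetical names for illustration, not the library's API:

```haskell
-- Sketch only: ordinary lists stand in for DPH parallel arrays.

-- What a map/zip-style desugaring of a dot-product comprehension
-- amounts to: zip builds an intermediate list of pairs, which map
-- then consumes and sum reduces.
dotNaive :: [Double] -> [Double] -> Double
dotNaive xs ys = sum (map (\(x, y) -> x * y) (zip xs ys))

-- The improved desugaring: zipWith consumes both arrays directly,
-- so there is no intermediate pair structure left to fuse away.
dotFused :: [Double] -> [Double] -> Double
dotFused xs ys = sum (zipWith (*) xs ys)
```

Both compute the same result; the point is that the second form gives the simplifier far less intermediate structure to eliminate.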
      
    - * '''Regular multi-dimensional arrays:''' Representing regular arrays by nested arrays is generally not efficient, but due to the current lack of fusion for segmented arrays that produce multiple arrays of different length (e.g., `filterSU :: (a -> Bool) -> SUArr a -> SUArr a`), matters are worse than one might think (e.g., when using a hierarchical matrix representation).  Hence, we need direct support for regular, multi-dimensional arrays.
    + * '''Regular multi-dimensional arrays (language design and implementation):''' Representing regular arrays by nested arrays is generally not efficient, but due to the current lack of fusion for segmented arrays that produce multiple arrays of different length (e.g., `filterSU :: (a -> Bool) -> SUArr a -> SUArr a`), matters are worse than one might think (e.g., when using a hierarchical matrix representation).  Hence, we need direct support for regular, multi-dimensional arrays.
      
      * '''Explicit shape information:''' All library-internal functions should get shape information (including the lengths of flat arrays) as extra arguments; in particular, lengths shouldn't be stored at every level of a flattened data structure.  This should simplify the use of rewrite rules that are shape-sensitive.
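A minimal sketch of the "explicit shape" idea, with ordinary lists standing in for unboxed arrays; `Segd` and `sumSegmented` are hypothetical stand-ins, not the actual dph library types:

```haskell
-- Sketch only: the segment lengths (the "shape") live in one separate
-- descriptor that is passed alongside a single flat data array, instead
-- of lengths being stored at every level of a nested structure.
data Segd = Segd { segLens :: [Int] }

-- Segmented sum: one pass over the flat data, guided by the shape.
sumSegmented :: Segd -> [Double] -> [Double]
sumSegmented (Segd lens) = go lens
  where
    go []       _  = []
    go (n : ns) xs = let (seg, rest) = splitAt n xs
                     in sum seg : go ns rest
```

Because the shape is an explicit argument, shape-sensitive rewrite rules can match on the descriptor directly rather than digging lengths out of the data representation.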