Changes between Version 3 and Version 4 of ErlangInHaskell


Timestamp: Feb 5, 2011 12:19:43 PM
Author: jepst
Comment: Pasted in README from remote-haskell repository

= Distributed Haskell Processes (or Erlang in Haskell) =

== Implementation ==

I'm currently working on an implementation of an Erlang-like distributed computing framework for Haskell. The implementation is a work in progress and its interface differs somewhat from the following sketch. The actual interface is described here: http://www.cl.cam.ac.uk/~jee36/remote/

Here's a brief, high-level introduction to my implementation:

= Introduction =

Many programming languages expose concurrent programming as a shared memory model, wherein multiple, concurrently executing programs, or threads, can examine and manipulate variables common to them all. Coordination between threads is achieved with locks, mutexes, and other synchronization mechanisms. In Haskell, these facilities are available as MVars.

In contrast, languages like Erlang eschew shared data and require that concurrent threads communicate only by message-passing. The key insight of Erlang and languages like it is that reasoning about concurrency is much easier without shared memory. Under a message-passing scheme, a thread provides a recipient, given as a thread identifier, and a unit of data; that data will be transferred to the recipient's address space and placed in a queue, where it can be retrieved by the recipient. Because data is never shared implicitly, this is a particularly good model for distributed systems.

This framework presents a combined approach to distributed programming: while it provides an Erlang-style message-passing system, it lets the programmer use existing paradigms from Concurrent Haskell.

= Terminology =

Location is represented by a ''node''. Usually, a node corresponds to an instance of the Haskell runtime system; that is, each independently executed Haskell program exists in its own node. Multiple nodes may run concurrently on a single physical host system, but the intention is that nodes run on separate hosts, to take advantage of more hardware.

The basic unit of concurrency is the ''process'' (distinct from the OS-level term for an instance of an executing program). A process can be considered a thread with a message queue, and is implemented as a lightweight GHC forkIO thread. There is little overhead involved in starting and executing processes, so programmers can start as many as they need. Processes can send messages to other processes and receive messages from them.

The state associated with process management is wrapped up in the Haskell monad ProcessM. All framework functions for managing and communicating with processes run in this monad, and most distributed user code will run in it as well.

= Process management =

Processes are created with the 'spawnRemote' and 'forkProcess' functions. Their type signatures help explain their operation:
{{{
 forkProcess :: ProcessM () -> ProcessM ProcessId
 spawnRemote :: NodeId -> Closure (ProcessM ()) -> ProcessM ProcessId
}}}
'forkProcess' takes a function in the ProcessM monad, starts it concurrently as a process on the same node as the caller, and gives a ProcessId that can be used to send messages to it. 'spawnRemote' works analogously, but also takes a NodeId, indicating where to run the process. This lets the programmer start arbitrary functions on other nodes, which may be running on other hosts. Actual code is not transmitted to the other node; instead, a function identifier is sent. This works on the assumption that all connected nodes are running identical copies of the compiled Haskell binary (unlike Erlang, which allows new code to be sent to remote nodes at runtime).

We encode the function identifier used to start remote processes as a Closure. Closures may identify only top-level functions, without free variables. Since 'spawnRemote' is the only way to run a process on a remote node, functions run remotely cannot capture local mutable variables. This is the other key distinction between 'spawnRemote' and 'forkProcess': processes run locally with 'forkProcess' share memory with each other, but processes started with 'spawnRemote' cannot (even if the target node is in fact the local node).

The following code shows how local variable capture works with 'forkProcess'; there is no analogous code for 'spawnRemote'. A sketch contrasting the two follows the example.
{{{
 do m <- liftIO $ newEmptyMVar
    forkProcess (liftIO $ putMVar m ())
    liftIO $ takeMVar m
}}}
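
As a rough sketch of the contrast (the top-level function 'greet' and the value 'greetClosure' below are illustrative assumptions; how a Closure naming a top-level function is actually obtained is not specified in this sketch):
{{{
 -- A top-level function with no free variables; only such functions
 -- can be named by a Closure and started on another node.
 greet :: ProcessM ()
 greet = liftIO $ putStrLn "hello from a spawned process"

 -- Stand-in for however the framework produces a Closure naming 'greet';
 -- obtaining a real Closure is outside this sketch.
 greetClosure :: Closure (ProcessM ())
 greetClosure = undefined

 -- Start 'greet' once on the local node and once on another node.
 startBoth :: NodeId -> ProcessM ()
 startBoth node =
    do localPid  <- forkProcess greet              -- same node, can share memory with the caller
       remotePid <- spawnRemote node greetClosure  -- runs on 'node', no shared memory
       return ()
}}}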
Whether a process is running locally or remotely, and whether or not it can share memory, sending messages to it works the same way: with the 'send' function, which corresponds to Erlang's ! operator.
{{{
 send :: (Binary a) => ProcessId -> a -> ProcessM ()
}}}
Given a ProcessId (from 'forkProcess' or 'spawnRemote') and a chunk of serializable data (implementing Haskell's 'Data.Binary.Binary' type class), we can send a message to the given process. The message will be transmitted across the network if necessary and placed in the process's message queue. Note that 'send' will accept any type of data, as long as it implements Binary. Out of the box, all basic Haskell types implement Binary, including tuples and arrays, and it's easy to implement Binary for user-defined types. How, then, does the receiving process know the type of message to extract from its queue? A process can receive messages, distinguishing them by type, with the 'receiveWait' function, which corresponds to Erlang's receive clause. The process can provide a distinct handler for each type of message it knows how to deal with; unmatched messages remain on the queue, where they may be retrieved by later invocations of 'receiveWait'.
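
A hedged sketch of sending and receiving (the 'match' combinator used below to build one handler per message type is an assumption of this sketch; the handler-building interface is not shown above):
{{{
 -- Send two differently typed messages to the same process; any value
 -- whose type implements Binary can be sent.
 report :: ProcessId -> ProcessM ()
 report pid = do send pid "status ok"
                 send pid (42 :: Int)

 handleString :: String -> ProcessM ()
 handleString s = liftIO $ putStrLn ("got a String: " ++ s)

 handleInt :: Int -> ProcessM ()
 handleInt n = liftIO $ putStrLn ("got an Int: " ++ show n)

 -- Handle whichever matching message arrives next; messages of other
 -- types stay in the queue for later calls to 'receiveWait'.
 logger :: ProcessM ()
 logger = do receiveWait [ match handleString, match handleInt ]
             logger
}}}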

A ''channel'' provides an alternative to message transmission with 'send' and 'receiveWait'. While 'send' and 'receiveWait' allow sending messages of any type, channels require messages to be of uniform type. Channels must be explicitly created with a call to 'makeChannel':
{{{
 makeChannel :: (Binary a) => ProcessM (SendChannel a, ReceiveChannel a)
}}}
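
A minimal sketch of channel use; only 'makeChannel' is shown above, so the 'sendChannel' and 'receiveChannel' operations below (writing to the send end and reading from the receive end) are assumed names for illustration:
{{{
 -- Create a typed channel, hand its send end to a new local process,
 -- and read one value back from the receive end.
 channelDemo :: ProcessM ()
 channelDemo =
    do (sendEnd, recvEnd) <- makeChannel
       _ <- forkProcess (sendChannel sendEnd (42 :: Int))  -- assumed write operation
       n <- receiveChannel recvEnd                         -- assumed read operation
       liftIO $ print n
}}}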

= Old =

This is an older, more abstract discussion of the implementation of this framework, and does not reflect the current state.

== Distributed Haskell Processes (or Erlang in Haskell) ==

Haskell is great at shared-memory concurrency, but we do not yet

This page summarises a possible design.


== Processes ==