Custom Query (6536 matches)
Results (7 - 9 of 6536)
|#7482||wontfix||GHC.Event overwrites main IO manager's hooks to RTS||AndreasVoellmy||AndreasVoellmy|
The IO manager registers two file descriptors with the RTS, which the RTS uses to send control and wakeup signals to the IO manager. The main IO manager is started by default and registers the descriptors it has allocated with the RTS.
The base package also exposes a GHC.Event module which, when initialized, registers its own file descriptors with the RTS, overwriting the main IO manager's. After that, the RTS can no longer signal the main IO manager.
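A minimal sketch of how this could be triggered, assuming the base-4.6-era GHC.Event interface in which new creates a fresh event manager (the exact entry point is an assumption; the overwrite described above happened during the manager's initialization):

{{{
import GHC.Event (new)

main :: IO ()
main = do
  -- At startup the RTS launched the main (system) IO manager, which
  -- registered its control and wakeup file descriptors with the RTS.
  _privateMgr <- new
  -- In affected versions, initializing this second manager registered its
  -- own control/wakeup descriptors with the RTS, silently replacing the
  -- main manager's hooks, so the RTS could no longer signal it.
  putStrLn "second event manager created; RTS hooks now point at it"
}}}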
|#7773||fixed||Waiting on non-kqueue-supported files on OS X||AndreasVoellmy||AndreasVoellmy|
Neither the old IO manager nor the new "parallel" IO manager properly handles waiting on files on Mac OS X when kqueue does not support the device type. PHO reported this on ghc-devs: http://www.haskell.org/pipermail/ghc-devs/2013-March/000798.html.
Here is the gist of it: the IO manager uses kqueue to wait on files on OS X, but kqueue does not support all file types. For example, on older versions of OS X (10.5.8) it cannot wait on tty devices, and even on 10.8.2 it cannot wait on /dev/random.
Both the old and parallel IO managers suffer from the problem, but with different consequences. The old IO manager treated the situation as the file being ready, which merely caused the waiting thread to run again. The parallel IO manager changed this: it throws an exception and terminates the program, which is not acceptable behavior.
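A small sketch of a program that hits this on an affected OS X system (it assumes the unix-2.7-era openFd signature):

{{{
import Control.Concurrent (threadWaitRead)
import System.Posix.IO (OpenMode (ReadOnly), defaultFileFlags, openFd)

main :: IO ()
main = do
  -- kqueue cannot register /dev/random, so the IO manager's wait fails.
  fd <- openFd "/dev/random" ReadOnly Nothing defaultFileFlags
  -- Old IO manager: returns immediately as if the fd were ready.
  -- Parallel IO manager (at the time of this ticket): throws an exception
  -- and terminates the program.
  threadWaitRead fd
  putStrLn "threadWaitRead returned"
}}}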
|#8158||fixed||Replace IO manager's IntMap with a mutable hash table||AndreasVoellmy||bos|
I've written a patch that replaces the immutable IntMap used by GHC.Event with a mutable hash table, IntTable.
There's a standalone version of the new data structure, complete with QuickCheck tests and benchmarks, available on GitHub. It's about 15x faster than IntMap and substantially simpler.
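For illustration, here is a minimal sketch of such a structure: a mutable Int-keyed hash table with chained buckets. The names, fixed bucket count, and layout are assumptions for the sketch, not taken from the actual IntTable patch (which, among other things, resizes):

{{{
import Data.Array.IO (IOArray, newArray, readArray, writeArray)
import Data.Bits ((.&.))

-- Illustrative mutable Int-keyed hash table with chained buckets.
newtype IntTable a = IntTable (IOArray Int [(Int, a)])

numBuckets :: Int
numBuckets = 64  -- fixed for the sketch; a real table would grow

newTable :: IO (IntTable a)
newTable = fmap IntTable (newArray (0, numBuckets - 1) [])

bucketOf :: Int -> Int
bucketOf k = k .&. (numBuckets - 1)  -- cheap modulo for a power-of-two size

insert :: Int -> a -> IntTable a -> IO ()
insert k v (IntTable arr) = do
  let i = bucketOf k
  kvs <- readArray arr i
  -- Drop any existing binding for the key, then cons the new one.
  writeArray arr i ((k, v) : filter ((/= k) . fst) kvs)

lookupTable :: Int -> IntTable a -> IO (Maybe a)
lookupTable k (IntTable arr) = fmap (lookup k) (readArray arr (bucketOf k))
}}}

Mutating a bucket in place avoids rebuilding the spine of a persistent IntMap on every registration and deregistration, which is the basic reason a mutable table can win here.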
In practice, this translates to a small but measurable improvement in throughput (and presumably latency). I see a 3% to 10% bump in requests handled per second by the tiny acme-http HTTP server when benchmarked with the weighttp load tester.