Opened 7 years ago

Closed 7 years ago

#4514 closed bug (fixed)

IO manager can deadlock if a file descriptor is closed behind its back

Reported by: adept
Owned by: bos
Priority: high
Milestone: 7.0.2
Component: Runtime System
Version: 7.0.1
Keywords:
Cc: bos@…, kazu@…, pho@…, johan.tibell@…, mail@…, leuschner@…
Operating System: Linux
Architecture: Unknown/Multiple
Type of failure: Incorrect result at runtime
Test Case:
Blocked By:
Blocking:
Related Tickets:
Differential Rev(s):
Wiki Page:

Description

Consider the following simple "echo server":

module Main where

import Control.Concurrent
import Network
import System.IO
import System.Timeout

main :: IO ()
main = do
    s <- listenOn (PortNumber 7000)
    loop s
    return ()
    
loop :: Socket -> IO ThreadId
loop s = do
    (hdr,_,_) <- accept s
    hSetBuffering hdr LineBuffering
    forkIO $ echo hdr
    loop s
    
echo :: Handle -> IO ()
echo hdr = do
    mstr <- timeout 5000000 $ hGetLine hdr
    case mstr of
        Just str -> do
            hPutStrLn hdr str
            hFlush hdr
            echo hdr
        Nothing -> do
            putStrLn "Time out"
            hClose hdr
            return ()

When compiled without -threaded (GHC 7.0.1) it behaves as expected: you can "telnet localhost 7000", send a couple of lines and see them echoed back, and if you don't type anything for 5 seconds the connection is closed. If you connect a second (third, ...) time, the behavior is the same.

Now recompile the code with -threaded. On the first connect everything works as expected. However, after the first timeout, when you "telnet" a second time, the behavior is different:

1. Your lines are not echoed back to you.
2. Your connection times out 5 seconds after you connected, whether or not you type and send anything.

See this video (OGV, 1.5 MB) for a demonstration of the behavior with -threaded.

GHC 6.12.3 behaves correctly both with and without -threaded, so I suspect the new IO manager is implicated.

This bug is confirmed to be present on Linux and Mac OS X.

Attachments (5)

base-4514.patch (4.7 KB) - added by bos 7 years ago.
WIP patch to the base library
network-4514.patch (1.2 KB) - added by bos 7 years ago.
WIP patch against the network library
base-4514-1.patch (11.8 KB) - added by bos 7 years ago.
Updated base patch
network-4514-1.patch (1.7 KB) - added by bos 7 years ago.
Updated network patch
base-4514+4533.dpatch (25.1 KB) - added by bos 7 years ago.
Final(?) version of the patch against base for #4514 and #4533


Change History (52)

comment:1 Changed 7 years ago by adept

Here is a variant with Sockets instead of Handles; the same bug is present:

module Main where

import Control.Concurrent
import Network
import Network.Socket as S
import System.IO
import System.Timeout

main :: IO ()
main = do
    s <- listenOn (PortNumber 7000)
    loop s
    return ()
    
loop :: Socket -> IO ThreadId
loop s = do
    (s',_) <- S.accept s
    forkIO $ echo s'
    loop s
    
echo :: Socket -> IO ()
echo s = do
    mstr <- timeout 5000000 $ recv s 1000
    case mstr of
        Just str -> do
            send s str
            echo s
        Nothing -> do
            putStrLn "Time out"
            sClose s
            return ()

comment:2 Changed 7 years ago by tibbe

Owner: set to tibbe

comment:3 Changed 7 years ago by adept

Here is what I've found via a combination of debug output and creative strace use. Allow me to present an annotated and abbreviated "strace" output:

31628 accept(8, {sa_family=AF_INET, sin_port=htons(33312), sin_addr=inet_addr("127.0.0.1")}, [16]) = 11
31628 fcntl64(11, F_GETFL)              = 0x2 (flags O_RDWR)
31628 fcntl64(11, F_SETFL, O_RDWR|O_NONBLOCK) = 0

      So, the connection is accepted and FD 11 is created for communication with the other side.
      I will send "1", then "2", then "FINISH" and allow the process to time out.

31630 recv(11,  <unfinished ...>
31630 epoll_ctl(3, EPOLL_CTL_ADD, 11, {EPOLLIN, {u32=11, u64=11}} <unfinished ...>

      As no data is available initially, epoll callback is set up on FD 11.
 
31629 <... epoll_wait resumed> {{EPOLLIN, {u32=11, u64=11}}}, 64, 4684) = 1
31629 epoll_ctl(3, EPOLL_CTL_DEL, 11, {0, {u32=11, u64=2814754062073867}}) = 0
31630 recv(11, "1\r\n", 1000, 0)        = 3

      As soon as data is available, the callback is removed and the data is read.

31630 send(11, "1\r\n", 3, 0)           = 3
31630 recv(11,  <unfinished ...>
31630 epoll_ctl(3, EPOLL_CTL_ADD, 11, {EPOLLIN, {u32=11, u64=13202238110760435723}}) = 0
31629 <... epoll_wait resumed> {{EPOLLIN, {u32=11, u64=13202238110760435723}}}, 64, 4043) = 1
31629 epoll_ctl(3, EPOLL_CTL_DEL, 11, {0, {u32=11, u64=13201934143039995915}}) = 0
31630 recv(11, "2\r\n", 1000, 0)        = 3
31630 send(11, "2\r\n", 3, 0)           = 3
31630 recv(11,  <unfinished ...>
31630 epoll_ctl(3, EPOLL_CTL_ADD, 11, {EPOLLIN, {u32=11, u64=17179869195}} <unfinished ...>
31629 <... epoll_wait resumed> {{EPOLLIN, {u32=11, u64=17179869195}}}, 64, 2011) = 1
31629 epoll_ctl(3, EPOLL_CTL_DEL, 11, {0, {u32=11, u64=13201950773153366027}}) = 0
31630 recv(11, "FINISH\r\n", 1000, 0)   = 8
31630 send(11, "FINISH\r\n", 8, 0)      = 8
31630 recv(11, 0xb73670f0, 1000, 0)     = -1 EAGAIN (Resource temporarily unavailable)
31630 epoll_ctl(3, EPOLL_CTL_ADD, 11, {EPOLLIN, {u32=11, u64=34359738379}}) = 0

      Here I allow the read to time out.
      Notice that there is no call to epoll_ctl(3, EPOLL_CTL_DEL, 11, ...).

31630 close(11)                         = 0

      The timeout fires, the socket is closed, and my telnet session terminates at this point.

31628 accept(8, {sa_family=AF_INET, sin_port=htons(33313), sin_addr=inet_addr("127.0.0.1")}, [16]) = 11
31628 fcntl64(11, F_GETFL)              = 0x2 (flags O_RDWR)
31628 fcntl64(11, F_SETFL, O_RDWR|O_NONBLOCK) = 0

      I connect again. Socket/FD 11 is allocated.

31630 recv(11, 0xb737c110, 1000, 0)     = -1 EAGAIN (Resource temporarily unavailable)

      Since I am not as fast as "recv", it sees no data. 
      One would expect epoll_ctl(3, EPOLL_CTL_ADD, 11, ...) at this point, but it
      never comes.

      Eventually, the timeout fires and the socket is closed.

31630 close(11)                         = 0

Now, I don't know enough about epoll and System.Event.Thread (yet), but the absence of an EPOLL_CTL_DEL before the first close(11) looks suspicious. What would you say?

comment:4 Changed 7 years ago by adept

OK.

After a chat with tibbe, it seems the root cause lies here:

registerFd_ :: EventManager -> IOCallback -> Fd -> Event
            -> IO (FdKey, Bool)
registerFd_ EventManager{..} cb fd evs = do
  u <- newUnique emUniqueSource
  modifyMVar emFds $ \oldMap -> do
    let fd'  = fromIntegral fd
        reg  = FdKey fd u
        !fdd = FdData reg evs cb
        (!newMap, (oldEvs, newEvs)) =
            case IM.insertWith (++) fd' [fdd] oldMap of
              (Nothing,   n) -> (n, (mempty, evs))
              (Just prev, n) -> (n, pairEvents prev newMap fd')
        modify = oldEvs /= newEvs
    when modify $ I.modifyFd emBackend fd oldEvs newEvs
    return (newMap, (reg, modify))

After the reconnect, when recv is called, it checks for data with the C recv and (since there is nothing there yet) calls threadWaitRead on the appropriate descriptor. This in turn should call registerFd.

Since there is already a handler for the same FD in emFds, pairEvents tries to compute the combined event mask. In this case oldEvs == newEvs, so I.modifyFd is not called, and this must be the source of the problem.

comment:5 Changed 7 years ago by adept

Right now, threadWait is executed in the same thread as Network.Socket.recv, which is why timeout is able to kill it and prevent unregisterFd from executing.

So either threadWait should make sure that unregisterFd is called no matter what, or registerFd should be more robust.

comment:6 Changed 7 years ago by adept

Scratch my last comment. The lambda passed to registerFd should be captured inside the event manager and unaffected by killThread. I have another explanation, though.

According to the strace output, after I type "FINISH", see it echoed back, and sit back, the reader thread calls recv -> recvLen -> threadWaitRead, which registers a callback for FD 11 with epoll.

Then the timeout happens, and close(2) is called on FD 11. At this moment epoll drops all callbacks associated with FD 11.

When I connect again, threadWaitRead calls registerFd, which declines to update the epoll state for the reasons outlined above (oldEvs == newEvs). At the same time, epoll no longer has any registration for FD 11, so the old callback already registered with the event manager has no chance of firing. Strace confirms this: even though epoll_wait is called several times after I reconnect, it never returns events for FD 11.

Therefore we arrive at a deadlock: the old callback will never be executed, a new callback is not added, and on top of that the internal states of epoll and the event manager disagree: the manager thinks it has one callback registered for FD 11, while epoll has none.
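
To summarize the failing sequence in terms of event-manager calls (an illustration using the names from the registerFd_ snippet above, not literal code):

-- first connection, FD 11
registerFd mgr callback1 11 evtRead   -- recv blocks; epoll_ctl(EPOLL_CTL_ADD, 11) is issued
close(11)                             -- timeout fires; epoll silently forgets FD 11, but emFds still holds callback1
-- second connection accepted, FD 11 is handed out again
registerFd mgr callback2 11 evtRead   -- oldEvs == newEvs, so no epoll_ctl(EPOLL_CTL_ADD, 11) is issued
-- epoll knows nothing about FD 11, so neither callback1 nor callback2 can ever fire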

comment:7 Changed 7 years ago by adept

Automatic removal of callbacks in epoll is mentioned here, in Q6.

comment:8 Changed 7 years ago by bos

Cc: bos@… added
Operating System: Unknown/Multiple → Linux

comment:9 Changed 7 years ago by bos

Summary: System.Timeout cannot properly cancel IO actions with new IO manager → IO manager can deadlock if a file descriptor is closed behind its back

I think the kqueue back end might be subject to the same problem, but I've narrowed the bug to Linux for now.

In order to deal with this, we need to explicitly notify the back end when a file descriptor has been closed, e.g. via something like this:

GHC.Conc.threadClosed :: Fd -> IO ()

For some back ends (e.g. select and poll), this can be a no-op, but for epoll and kqueue, it should unregister the Fd if it's still present, and invoke any associated callbacks.

We'll also need to update the I/O library and the network package to actually call this function when a close is performed.

This doesn't address the problem of what to do about uses of the dup or dup2 system calls, but that's really a separate problem.
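
A rough sketch of what such a hook could look like, with hypothetical helpers (getSystemEventManager for the global manager, delFd to unregister the descriptor from the back end and hand back whatever registrations were attached to it); the fix that was eventually committed takes a somewhat different shape, see the attached patches and comment 25:

threadClosed :: Fd -> IO ()
threadClosed fd = do
  mgr  <- getSystemEventManager      -- hypothetical accessor for the global event manager
  regs <- delFd mgr fd               -- no-op for select/poll; EPOLL_CTL_DEL / kqueue delete otherwise
  forM_ regs $ \(reg, evs, cb) ->
    cb reg evs                       -- invoke the associated callbacks so blocked waiters wake up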

comment:10 Changed 7 years ago by kazu-yamamoto

Cc: kazu@… added

comment:11 Changed 7 years ago by adept

By the way, here is a standalone version of the test:

module Main where

import Control.Concurrent
import Network
import Network.Socket as S
import System.IO
import System.Timeout
import Control.Monad

main :: IO ()
main = do
    s <- listenOn (PortNumber 7000)
    forkIO $ loop s
    poll
    poll
    return ()

poll :: IO ()
poll = do
    hdl <- connectTo "localhost" (PortNumber 7000)
    hSetBuffering hdl LineBuffering
    threadDelay 10000
    forM_ ["1","2","FINISH"] $ \str -> do
      hPutStrLn hdl str
      resp <- hGetLine hdl
      putStrLn $ "Sent " ++ str ++ ", got " ++ resp
    threadDelay 6000000
    hClose hdl

loop :: Socket -> IO ()
loop s = do
    (s',_) <- S.accept s
    forkIO $ echo s'
    loop s
    
echo :: Socket -> IO ()
echo s = do
    mstr <- timeout 5000000 $ recv s 1000
    case mstr of
        Just str -> do
            send s str
            echo s
        Nothing -> do
            putStrLn "Time out"
            sClose s
            return ()

Expected output:

Sent 1, got 1
Sent 2, got 2
Sent FINISH, got FINISH
Time out
Sent 1, got 1
Sent 2, got 2
Sent FINISH, got FINISH
Time out

Instead we have:

Sent 1, got 1
Sent 2, got 2
Sent FINISH, got FINISH
Time out
Time out
<socket: 11>: hGetLine: resource vanished (Connection reset by peer)

comment:12 Changed 7 years ago by PHO

Cc: pho@… added

comment:13 Changed 7 years ago by bos

Owner: changed from tibbe to bos

Thanks for the test case, adept. Much appreciated.

comment:14 Changed 7 years ago by bos

I spent a bit of time looking at this yesterday, and my initial naive attempt at a fix didn't work. Will poke some more today.

comment:15 Changed 7 years ago by adept

bos, just out of curiosity - what was your approach?

comment:16 Changed 7 years ago by bos

Never mind, GHC's preprocessor and I were having a friendly disagreement about what version it really was. Ha ha, very jolly, misunderstanding cleared up, backs clapped all around for general good humour.

Right. So the bug was indeed that we were sneaking around behind our very own backs, calling close and then hoping that epoll might do something sensible and notify us that the file descriptor had closed. No such luck, as epoll is a demanding and spiteful deity, and will not help those who cannot help themselves.

My first attempt at a fix was simply to have sClose and friends call fdWasClosed, which conveniently already existed. And sure enough, this did the trick.

There's a wrinkle, though: I'm pretty sure we need the I/O manager thread itself to drop the queued callbacks and then close the file descriptor. Otherwise, we're susceptible to nasty race conditions: if we close then drop callbacks, we could drop callbacks for a new file descriptor. If we drop callbacks then close, another thread could register a new callback after we've dropped them but before we've reached close.

That'll complicate things a bit.
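
For concreteness, here is a minimal sketch of that ordering, written against the field names from the registerFd_ snippet in comment 4 and a hypothetical IM.delete that also returns the removed entries; it is only meant to show where the close has to happen, not to be the patch (invoking the dropped callbacks, which turns out to be needed as well, is discussed in comment 19):

closeFd :: EventManager -> (Fd -> IO ()) -> Fd -> IO ()
closeFd mgr close fd =
  modifyMVar_ (emFds mgr) $ \oldMap -> do
    -- forget every registration for this fd, so a later registerFd on a
    -- reused descriptor starts from a clean slate instead of hitting the
    -- oldEvs == newEvs case
    let (_dropped, newMap) = IM.delete (fromIntegral fd) oldMap
    -- close while the table is still held, so no other thread can register
    -- against the fd, or be handed the same fd by open/accept, in between
    close fd
    return newMap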

Changed 7 years ago by bos

Attachment: base-4514.patch added

WIP patch to the base library

Changed 7 years ago by bos

Attachment: network-4514.patch added

WIP patch against the network library

comment:17 Changed 7 years ago by bos

Please comment on the two attached patches. The first is against unix-2.0 (i.e. against the version in GHC 7 and HEAD), the second against git HEAD of network.

The patches do indeed get rid of the deadlock, and also avoid the possibility of multiple threads racing against each other.

comment:18 Changed 7 years ago by bos

Status: new → patch

comment:19 Changed 7 years ago by tibbe

Cc: johan.tibell@… added

I believe we need to invoke the callbacks after closing the file descriptor, instead of just removing them. Otherwise we might leave (non-dead) threads blocked on MVars forever. Example:

main = do
   s <- listenOn (PortNumber 7000)
   forkIO $ do
     threadDelay 10000
     sClose s
   msg <- recv s 1000  -- blocked here forever

By invoking all the callbacks (with evtRead and evtWrite) after closing the file descriptor, the threads will retry the read call and fail (and thus throw an exception), as the file descriptor has gone away. I think that's the desired behavior when one thread closes a file descriptor behind another thread's back.
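
In terms of the types quoted in comment 4, the waking step would be something like the following fragment (a sketch; dropped stands for whatever list of FdData entries the close path removed from emFds):

forM_ dropped $ \(FdData reg _ cb) ->
  cb reg (evtRead `mappend` evtWrite)   -- blocked threads retry their syscall, which now fails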

Comments on the patch to network:

  • There's a lingering Debug.Trace import.
  • We need to test for GHC 7.0.2 (presuming this patch will be in that version) instead of GHC 7.0.0 to avoid having network fail to compile for people who use GHC 7.0.1.
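
For the version test in the second point, one possible shape for the guard (a sketch; it assumes the fix ships with the base version bundled with GHC 7.0.2, i.e. base >= 4.3.1, that the Cabal MIN_VERSION macros are available to network, and it uses the GHC.Conc.closeFdWith name the hook eventually acquires in comments 32-38):

#if MIN_VERSION_base(4,3,1)
import GHC.Conc (closeFdWith)   -- the new close hook exists
#endif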

comment:20 Changed 7 years ago by bos

Johan, I think you're right about the need to invoke the callbacks. If you look at the closeFd code in base-4514.patch, you'll see that there are two places we could invoke the callbacks:

  1. While we have the MVar named emFds held.
  2. After we've released it.

There's an unavoidable race here at present: another thread could have opened a file or socket, getting the same file descriptor, and we've no way to tell our callee that our file descriptor has been closed, so it could read/write the wrong file descriptor.

To deal with this, I think we need to add the equivalents of POLLHUP and POLLRDHUP to the events that we can send to threads, call 'em evtWriteHup and evtReadHup. If you asked for evtWrite and someone closed your file descriptor, we'll give you evtWriteHup. If you asked for evtRead, we'll give you evtReadHup.

comment:21 Changed 7 years ago by tibbe

Presumably threadWaitRead would throw an exception upon receiving evtReadHup. Alternatively we could have the MVar allocated in threadWait contain something other than unit, to indicate that the file descriptor went away and that threadWait should throw an exception.

Another thing: we need to be careful that the callback IntMap doesn't hold on to MVars after the owning thread is dead (e.g. due to an exception). Perhaps the callbacks should be weak references.

comment:22 Changed 7 years ago by bos

I think we might need to change the type of threadWait (and hence threadWaitRead and threadWaitWrite), so that it either takes a parameter indicating what exception to throw, or returns a value that the caller can inspect to see what kind of exception it should throw. The former seems safer to me.

We could use this for two cases:

  • we close a file descriptor behind a waiter's back
  • a back end's poll method returns the equivalent of POLLHUP (and maybe POLLERR)

comment:23 Changed 7 years ago by bos

Johan, I've filed #4533 to track your observation about possibly leaking MVars.

comment:24 Changed 7 years ago by bos

We looked, and it seems like we can just make threadWait throw an IOException with EBADF, which simplifies everything greatly.
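
In terms of the standard Foreign.C.Error machinery, the error being proposed is roughly this (a sketch of the idea, not the patch itself):

import Foreign.C.Error (eBADF, errnoToIOError)

-- what a waiter blocked in threadWait sees once its descriptor has been
-- closed behind its back
badFdError :: IOError
badFdError = errnoToIOError "threadWait" eBADF Nothing Nothing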

Changed 7 years ago by bos

Attachment: base-4514-1.patch added

Updated base patch

Changed 7 years ago by bos

Attachment: network-4514-1.patch added

Updated network patch

comment:25 Changed 7 years ago by bos

I just uploaded what should be the final patches against base and network.

  • The public APIs for threadWaitRead and threadWaitWrite remain unchanged, and now throw an IOError if a file descriptor is closed behind their backs.
  • The GHC.Conc API is extended to add a closeFd function, the behaviour of which is documented.
  • Behind the scenes, we add a new evtClose event, which is used only when one thread closes a file descriptor that other threads are blocking on.
  • Both base's IO code and network use the new closeFd function.

I've verified that adept's test case remains fixed.
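
To illustrate the intended user-visible behaviour, here is a small end-to-end sketch; it assumes the new hook is ultimately exported as GHC.Conc.closeFdWith (the name discussed further down this ticket) and uses the unix package's createPipe to get a descriptor to wait on:

module Main where

import Control.Concurrent (forkIO, threadDelay, threadWaitRead)
import Control.Exception (IOException, try)
import GHC.Conc (closeFdWith)
import System.Posix.IO (closeFd, createPipe)

main :: IO ()
main = do
  (readEnd, _writeEnd) <- createPipe
  _ <- forkIO $ do
    threadDelay 100000
    closeFdWith closeFd readEnd   -- close behind the waiter's back, race-free
  r <- try (threadWaitRead readEnd) :: IO (Either IOException ())
  print r                         -- expected: Left <EBADF-style IOError>, not a hang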

comment:26 Changed 7 years ago by bos

Damn, defeated by Trac's stupid formatting syntax again :-(

comment:27 Changed 7 years ago by tibbe

Patches look good to me except for the comment I made to you offline about wrapping the call to c_close in network with throwErrnoIfMinus1Retry_.
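
For reference, the suggested wrapping amounts to something like this (a sketch; c_close stands in for network's foreign import of close(2), whatever it is actually named there):

-- retry on EINTR, throw an IOError for any other failure, discard the result
closeFd' :: CInt -> IO ()
closeFd' fd = throwErrnoIfMinus1Retry_ "Network.Socket.close" (c_close fd)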

comment:28 Changed 7 years ago by bos

Status: patch → merge

Patches sent to Ian for applying.

comment:29 Changed 7 years ago by bos

Oh, and pushed to https://github.com/haskell/network too.

comment:30 Changed 7 years ago by igloo

Owner: bos deleted
Status: merge → new

comment:31 Changed 7 years ago by igloo

Milestone: 7.0.2
Priority: normal → high

comment:32 Changed 7 years ago by bos

I'm pasting in some comments from Ian in email:

Skimming through this patch I noticed some interface changes. We try not to change the interfaces of libraries within a GHC branch (e.g. 7.0.1 and 7.0.2 should have the same library APIs). If it's really necessary then we can make changes, although we should keep them as small as possible.

The particular changes also aren't obviously right to me. e.g. I wouldn't expect Control.Concurrent to export something called closeFd, and I wouldn't expect something called closeFd to have that type.

I believe it's not possible to avoid the API changes submitted here. Specifically:

  • The fdWasClosed function that we added during 6.13 development was subject to a race condition: you could mark an fd as closed after another thread had performed an open or similar that could reuse that fd.
  • The new closeFd function is necessary to close that race. It manages this by closing the fd while holding an MVar, so another thread can't race in and register the fd.
  • Why does closeFd accept an action as parameter? Because under Windows, you have to use different functions to close a file descriptor depending on whether it's owned by Winsock or the rest of win32.
  • If the name should be something like closeFdWith to be more descriptive, that's fine (please let me know if you have a preference), as my purpose is to close the race :-)
  • Why should closeFdWith be exported from Control.Concurrent? Because threadWaitRead and threadWaitWrite already are, and closeFdWith must be used with those functions now to guarantee race-free closing.

comment:33 Changed 7 years ago by bos

Goddamn Trac. Augh.

  • The fdWasClosed function that we added during 6.13 development was subject to a race condition: you could mark an fd as closed after another thread had performed an open or similar that could reuse that fd.
  • The new closeFd function is necessary to close that race. It manages this by closing the fd while holding an MVar, so another thread can't race in and register the fd.
  • Why does closeFd accept an action as parameter? Because under Windows, you have to use different functions to close a file descriptor depending on whether it's owned by Winsock or the rest of win32.
  • If the name should be something like closeFdWith to be more descriptive, that's fine (please let me know if you have a preference), as my purpose is to close the race :-)
  • Why should closeFdWith be exported from Control.Concurrent? Because threadWaitRead and threadWaitWrite already are, and closeFdWith must be used with those functions now to guarantee race-free closing.
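
For reference, the API shape being defended here, together with a POSIX-side usage sketch (the rename to closeFdWith is assumed, as suggested above):

closeFdWith :: (Fd -> IO ())  -- how to actually close this kind of descriptor
            -> Fd             -- the descriptor other threads may be blocked on
            -> IO ()

-- POSIX usage sketch: let the event manager drop its registrations and run
-- the ordinary close while its table is held
-- closeFdWith System.Posix.IO.closeFd fd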

comment:34 Changed 7 years ago by bos

Owner: set to bos

comment:35 Changed 7 years ago by bos

Status: new → patch

comment:36 in reply to:  33 Changed 7 years ago by simonmar

Replying to bos:

Goddamn Trac. Augh.

  • The fdWasClosed function that we added during 6.13 development was subject to a race condition: you could mark an fd as closed after another thread had performed an open or similar that could reuse that fd.
  • The new closeFd function is necessary to close that race. It manages this by closing the fd while holding an MVar, so another thread can't race in and register the fd.
  • Why does closeFd accept an action as parameter? Because under Windows, you have to use different functions to close a file descriptor depending on whether it's owned by Winsock or the rest of win32.
  • If the name should be something like closeFdWith to be more descriptive, that's fine (please let me know if you have a preference), as my purpose is to close the race :-)
  • Why should closeFdWith be exported from Control.Concurrent? Because threadWaitRead and threadWaitWrite already are, and closeFdWith must be used with those functions now to guarantee race-free closing.

threadWaitRead and threadWaitWrite should not really be exported by Control.Concurrent; they're only there for historical reasons. They have a non-portable API (dependent on file descriptors, and using Int rather than Fd too). They also aren't very useful for most people, because they have to be used very carefully with non-blocking I/O operations.

I think something in System.Event or GHC.IO would be more appropriate for closeFd. We should move threadWaitRead and threadWaitWrite in due course too; either System.Event or GHC.IO.FD would be the best place for those.

comment:37 Changed 7 years ago by bos

Okay, I'll drop the export from Control.Concurrent then. Will that (and maybe the rename of closeFd) pass muster, then?

comment:38 in reply to:  37 Changed 7 years ago by simonmar

Replying to bos:

Okay, I'll drop the export from Control.Concurrent then. Will that (and maybe the rename of closeFd) pass muster, then?

Fine by me. I don't have any better ideas for the name.

comment:39 Changed 7 years ago by StefanWehr

Cc: mail@… added

comment:40 Changed 7 years ago by dleuschner

Cc: leuschner@… added

comment:41 in reply to:  37 ; Changed 7 years ago by igloo

Replying to bos:

Okay, I'll drop the export from Control.Concurrent then. Will that (and maybe the rename of closeFd) pass muster, then?

I'm a little confused. Is System.Event meant to be a public, portable API, or is it a private, GHC internal module?

If the former then I think changes ought to go via the library submissions process.

If the latter then I wonder why it is not in the GHC.* namespace, and guarded by if impl(ghc) in the Cabal file?

Changed 7 years ago by bos

Attachment: base-4514+4533.dpatch added

Final(?) version of the patch against base for #4514 and #4533

comment:42 in reply to:  41 ; Changed 7 years ago by simonmar

Replying to igloo:

Replying to bos:

Okay, I'll drop the export from Control.Concurrent then. Will that (and maybe the rename of closeFd) pass muster, then?

I'm a little confused. Is System.Event meant to be a public, portable API, or is it a private, GHC internal module?

If the former then I think changes ought to go via the library submissions process.

If the latter then I wonder why it is not in the GHC.* namespace, and guarded by if impl(ghc) in the Cabal file?

For the time being we should consider it to be in the same class as Control.Exception.Base and System.Posix.Internals, i.e. an internal module on which other stuff depends.

Presumably Bryan and Johan intend it to become a public API. In that case we should really have a library proposal to make it public; until then it's an internal API, and the documentation should really make that clear.

comment:43 in reply to:  42 ; Changed 7 years ago by tibbe

Replying to simonmar:

For the time being we should consider it to be in the same class as Control.Exception.Base and System.Posix.Internals, i.e. an internal module on which other stuff depends.

Presumably Bryan and Johan intend it to become a public API. In that case we should really have a library proposal to make it public; until then it's an internal API, and the documentation should really make that clear.

It was indeed intended to be public, but I was forced to rely on a bunch of GHC-specific modules rather than portable ones in order to break some cyclic dependencies. If those dependencies could be broken in a different way, the API would be truly portable again, and perhaps then we could make a library proposal to make it more official. If someone feels like putting a warning about the library being GHC-only in System.Event.Manager, go ahead.

comment:44 Changed 7 years ago by bos

Within the next couple of days I'll send in a patch that updates the docs to make it clear that those APIs aren't yet public, but until then, please do apply the latest patches that fix the bug.

comment:45 in reply to:  43 ; Changed 7 years ago by simonmar

Replying to tibbe:

It was indeed intended to be public, but I was forced to rely on a bunch of GHC-specific modules rather than portable ones in order to break some cyclic dependencies. If those dependencies could be broken in a different way, the API would be truly portable again, and perhaps then we could make a library proposal to make it more official. If someone feels like putting a warning about the library being GHC-only in System.Event.Manager, go ahead.

The fact that the implementation of System.Event is stuck in base due to recursive dependencies is really just an implementation detail; it doesn't affect whether the API is portable.

One way to go would be to put the implementation in System.Event.Internals in base, and have a separate package that re-exports the public API.

comment:46 in reply to:  45 Changed 7 years ago by tibbe

Replying to simonmar:

The fact that the implementation of System.Event is stuck in base due to recursive dependencies is really just an implementation detail; it doesn't affect whether the API is portable.

One way to go would be to put the implementation in System.Event.Internals in base, and have a separate package that re-exports the public API.

Sure. That's something we could look into.

comment:47 Changed 7 years ago by igloo

Resolution: fixed
Status: patch → closed

Applied to HEAD and 7.0 branches, thanks.
