
digitalmars.D.bugs - [Issue 12090] New: Make std.concurrency compatible with fibers as threads

reply d-bugmail@puremagic.com writes:
https://d.puremagic.com/issues/show_bug.cgi?id=12090

           Summary: Make std.concurrency compatible with fibers as threads
           Product: D
           Version: D2
          Platform: All
        OS/Version: All
            Status: NEW
          Severity: enhancement
          Priority: P2
         Component: Phobos
        AssignedTo: nobody@puremagic.com
        ReportedBy: sean@invisibleduck.org


--- Comment #0 from Sean Kelly <sean@invisibleduck.org> 2014-02-06 10:35:13 PST ---
In order to scale std.concurrency past a small number of threads, some support
must be added for spawning fibers.  Also, std.concurrency should be made
compatible with third-party libraries, like vibe.d, that use fibers as a core
facet of their design.

I had been delaying this project because I felt that any changes to what is
considered a "thread" should support the default thread-local storage model
implicitly.  However, this was based on the idea that any change would be to
the implementation only, and so the user would not necessarily have any
indication that they'd spawned a fiber instead of a kernel thread.

Upon further reflection, I think this approach is not the correct one, and as
per the design of Druntime, the user should be given the choice of how
multiprocessing will occur.  Then the choice to use fibers instead of threads,
for example, can be made by the user at design time with the knowledge that
their code doesn't use thread-local statics in an incompatible manner.  This
serves to decouple the thread-local storage issue from multiprocessing in
general and lets us move forward on std.concurrency without being blocked by a
technical obstacle in Druntime.

So std.concurrency should be adapted so that the multiprocessing model can be
configured, at process startup, by plugging in a Scheduler which handles the
details of spawning threads, yielding instead of blocking on a wait when a
message queue is empty, and so on.  By default, if no Scheduler is supplied,
std.concurrency should work just as it always has.  This avoids any memory
allocations or performance penalties in the typical case.  Beyond that, the
design should minimize allocations as much as possible.
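A rough sketch of what such startup configuration might look like. Note that `FiberScheduler` and the module-level `scheduler` variable are hypothetical names assumed here for illustration, not an existing Phobos API:

```d
import std.concurrency;

// Hypothetical usage sketch: `scheduler` (a module-level variable in
// std.concurrency) and `FiberScheduler` do not exist yet; both are
// assumptions about how the proposed API might be configured at startup.
void worker()
{
    receive((string s) { /* handle the message */ });
}

void fiberMain()
{
    auto tid = spawn(&worker);   // would create a fiber, not a kernel thread
    send(tid, "hello");
}

void main()
{
    scheduler = new FiberScheduler;  // plug in the multiprocessing model
    scheduler.start(&fiberMain);     // run fiberMain and begin dispatching
}
```

With no scheduler assigned, spawn() would fall back to kernel threads exactly as today.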

-- 
Configure issuemail: https://d.puremagic.com/issues/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
Feb 06 2014
next sibling parent d-bugmail@puremagic.com writes:
https://d.puremagic.com/issues/show_bug.cgi?id=12090


Sean Kelly <sean@invisibleduck.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED


--- Comment #1 from Sean Kelly <sean@invisibleduck.org> 2014-02-06 10:46:39 PST ---
I think something like the following will work:

interface Scheduler
{
    void start(void delegate() op);
    void spawn(void delegate() op);
    void yield();
    @property ref ThreadInfo thisInfo();
    Condition newCondition(Mutex m);
}


When using a scheduler, main() should do any initial setup that it wants and
then call scheduler.start(), which will effectively spawn the supplied delegate
as a thread and then begin dispatching.

The remaining functions are all used by the std.concurrency implementation. 
spawn() does the work of actually creating new threads, yield() will yield
execution of the current fiber (or optionally, thread) so that multiprocessing
can occur, and newCondition constructs a Condition object used by send() and
receive() to indicate that a message has arrived, block waiting for a new
message, etc.  Finally, ThreadInfo is needed so that the thread-local statics
currently in std.concurrency can be made local to the "thread" currently
executing.  If thisInfo is called by a thread or fiber not spawned by
std.concurrency, it can return a thread-local copy instead.
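To make the division of labor concrete, here is a rough, incomplete sketch of how a fiber-backed implementation of the proposed interface might look. thisInfo and newCondition are elided; a real newCondition would need a Condition whose wait() yields the current fiber rather than blocking the thread:

```d
import core.thread : Fiber;

// Incomplete sketch only: covers the spawning/dispatch half of the proposed
// Scheduler interface on top of core.thread.Fiber.  thisInfo and
// newCondition are omitted, so this would not compile as-is against the
// full interface.
class FiberScheduler /* : Scheduler */
{
    private Fiber[] fibers;

    void start(void delegate() op)
    {
        spawn(op);
        dispatch();  // run fibers cooperatively until all terminate
    }

    void spawn(void delegate() op)
    {
        fibers ~= new Fiber(op);
    }

    void yield()
    {
        // Called by send()/receive() instead of blocking; a no-op when the
        // caller is not running inside a fiber.
        if (Fiber.getThis() !is null)
            Fiber.yield();
    }

    private void dispatch()
    {
        // Simple round-robin: resume each live fiber, drop terminated ones.
        while (fibers.length)
        {
            Fiber[] live;
            foreach (f; fibers)
            {
                f.call();
                if (f.state != Fiber.State.TERM)
                    live ~= f;
            }
            fibers = live;
        }
    }
}
```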

Feb 06 2014
prev sibling next sibling parent d-bugmail@puremagic.com writes:
https://d.puremagic.com/issues/show_bug.cgi?id=12090


Jakob Ovrum <jakobovrum@gmail.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakobovrum@gmail.com


--- Comment #2 from Jakob Ovrum <jakobovrum@gmail.com> 2014-02-09 07:15:04 PST ---
(In reply to comment #1)
> I think something like the following will work:
>
> interface Scheduler
> {
>     void start(void delegate() op);
>     void spawn(void delegate() op);
>     void yield();
>     @property ref ThreadInfo thisInfo();
>     Condition newCondition(Mutex m);
> }
>
>
> When using a scheduler, main() should do any initial setup that it wants and
> then call scheduler.start(), which will effectively spawn the supplied delegate
> as a thread and then begin dispatching.
>
> The remaining functions are all used by the std.concurrency implementation.
> spawn() does the work of actually creating new threads, yield() will yield
> execution of the current fiber (or optionally, thread) so that multiprocessing
> can occur, and newCondition constructs a Condition object used by send() and
> receive() to indicate that a message has arrived, block waiting for a new
> message, etc.  Finally, ThreadInfo is needed so that the thread-local statics
> currently in std.concurrency can be made local to the "thread" currently
> executing.  If thisInfo is called by a thread or fiber not spawned by
> std.concurrency, it can return a thread-local copy instead.

It has been suggested that std.concurrency should also be able to support IPC
(most importantly by Andrei in the module's documentation). The proposed
Scheduler interface seems close to supporting that for fork()-based code (which
of course isn't an option for non-POSIX systems), but do you have any thoughts
on this?
Feb 09 2014
prev sibling next sibling parent d-bugmail@puremagic.com writes:
https://d.puremagic.com/issues/show_bug.cgi?id=12090


yebblies <yebblies@gmail.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |yebblies@gmail.com


--- Comment #3 from yebblies <yebblies@gmail.com> 2014-02-11 22:36:03 EST ---
(In reply to comment #2)
> It has been suggested that std.concurrency should also be able to support IPC
> (most importantly by Andrei in the module's documentation). The proposed
> Scheduler interface seems close to supporting that for fork()-based code (which
> of course isn't an option for non-POSIX systems), but do you have any thoughts
> on this?

IPC depends on serialization, which we don't have yet in phobos.
Feb 11 2014
prev sibling parent d-bugmail@puremagic.com writes:
https://d.puremagic.com/issues/show_bug.cgi?id=12090



--- Comment #4 from Jakob Ovrum <jakobovrum@gmail.com> 2014-02-13 07:45:00 PST ---
(In reply to comment #3)
> (In reply to comment #2)
> > It has been suggested that std.concurrency should also be able to support
> > IPC (most importantly by Andrei in the module's documentation). The proposed
> > Scheduler interface seems close to supporting that for fork()-based code
> > (which of course isn't an option for non-POSIX systems), but do you have any
> > thoughts on this?
>
> IPC depends on serialization, which we don't have yet in phobos.

Well, the fact remains that the Scheduler interface may not be
forward-compatible with IPC. I think it's worth pointing out.
Feb 13 2014