
digitalmars.D - Processes and Channels, cf. goroutines.

reply Russel Winder <russel winder.org.uk> writes:
It seems goroutine-like processes and channels are coming to C++:
https://github.com/ahorn/cpp-channel

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 28 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 28 January 2014 at 17:25:35 UTC, Russel Winder wrote:
 It seems goroutine-like processes and channels are coming to C++:
 https://github.com/ahorn/cpp-channel
That's cool, but it's not going to be part of any standard, is it? O.
Jan 28 2014
prev sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Tuesday, 28 January 2014 at 17:25:35 UTC, Russel Winder wrote:
 It seems goroutine-like processes and channels are coming to C++:
 https://github.com/ahorn/cpp-channel
This is interesting, but you can bring that sort of approach to some other language as well. Here is a simple approach to get it done for Java: http://java.dzone.com/articles/go-style-goroutines-java-and. What is hard to implement without support from the language is the CSP-style channel select, which delivers a lot of the power of channels in Go.

Also, without green threads as in Go there is no way to spawn some hundred thousand threads. Java starts to buckle after a few thousand threads already, depending on how many resources are available to the machine. You can work around this by having a pool of threads process tasks added to queues, where the number of queues can become very large. But when your thread pool has n threads and, for an overlapping time window, n long-runners are being executed, all other tasks are stuck until the first long-runner has finished execution (see the sketch below).

Green threads and CSP-style channels are what keep a lot of people with Go, as the rest of the language is almost simplistic. What I would really like to have is D with Go's green threads along with channels and channel select. Some people may smile mildly now, but for today's load on servers there is a real need. For example, read this article: "How We Went from 30 Servers to 2: Go". Link: http://blog.iron.io/2013/03/how-we-went-from-30-servers-to-2-go.html

Regards, Bienlein
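To make the long-runner problem concrete, here is a contrived D sketch (my own illustration; std.parallelism's TaskPool stands in for a Java executor, and the pool size and sleep durations are made up): with n = 2 workers, two long tasks occupy the whole pool and the short task sits in the queue until one of them finishes.

import core.thread : Thread;
import core.time : seconds;
import std.parallelism : task, TaskPool;
import std.stdio : writeln;

void main()
{
    auto pool = new TaskPool(2);                 // n = 2 worker threads
    scope (exit) pool.finish(true);

    // Two long-runners occupy both workers for an overlapping window.
    foreach (i; 0 .. 2)
        pool.put(task!(() => Thread.sleep(5.seconds))());

    // This short task is stuck until a long-runner finishes.
    pool.put(task!(() => writeln("short task finally ran"))());
}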
Feb 04 2014
next sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Tuesday, 4 February 2014 at 09:37:16 UTC, Bienlein wrote:
 This is interesting, but you can bring that sort of approach to 
 some other language as well. Here is a simple approach to get 
 it done for Java: 
 http://java.dzone.com/articles/go-style-goroutines-java-and. 
 What is hard to implement without support from the language is 
 the CSP-style channel select, which delivers a lot of the power 
 of channels in Go.
To follow up on this, here is what Ian Taylor (a member of the Go dev team) says about it in this thread: https://groups.google.com/forum/?hl=de#!topic/golang-nuts/kF_caFpPNgA

"I have not looked at your code. I just want to say that the select statement is the core of Go's channels. Without select, channels are just a simple communication mechanism. If you want to design a different implementation of Go's channels, I recommend designing select first."
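For contrast, what std.concurrency offers out of the box is not a select over multiple channels but type-based dispatch on a single mailbox. A minimal sketch (my own, purely illustrative):

import std.concurrency : receive, send, spawn;
import std.stdio : writeln;

// receive dispatches on message *type* within one mailbox; Go's select,
// by contrast, waits on several distinct channels at once.
void worker()
{
    bool done;
    while (!done)
        receive(
            (int n)    { writeln("number: ", n); },
            (string s) { writeln("text: ", s); },
            (bool b)   { done = b; }
        );
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(42);
    tid.send("hi");
    tid.send(true);
}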
Feb 04 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
Support for green threads in std.concurrency is almost complete. 
I should really just do the last bit of work. I imagine you could 
try out the idea now though by using the messaging in vibe.d, 
since every connection is a fiber.
Feb 04 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/4/14, 10:05 AM, Sean Kelly wrote:
 Support for green threads in std.concurrency is almost complete. I
 should really just do the last bit of work. I imagine you could try out
 the idea now though by using the messaging in vibe.d, since every
 connection is a fiber.
Did you express that work as one or more bugzilla issues? Andrei
Feb 04 2014
parent Marco Leise <Marco.Leise gmx.de> writes:
On Tue, 04 Feb 2014 10:43:26 -0800,
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:

 On 2/4/14, 10:05 AM, Sean Kelly wrote:
 Support for green threads in std.concurrency is almost complete. I
 should really just do the last bit of work. I imagine you could try out
 the idea now though by using the messaging in vibe.d, since every
 connection is a fiber.
Did you express that work as one or more bugzilla issues? Andrei
As in: "This would make a good change log entry for added features" ? -- Marco
Feb 17 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 4 February 2014 at 18:05:17 UTC, Sean Kelly wrote:
 Support for green threads in std.concurrency is almost 
 complete. I should really just do the last bit of work. I 
 imagine you could try out the idea now though by using the 
 messaging in vibe.d, since every connection is a fiber.
Can you provide any more details? Right now vibe.d uses its own implementation (https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/core/concurrency.d) - trying to replace it with the updated Phobos one could be a good real-world test for such an enhancement.
Feb 04 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Tuesday, 4 February 2014 at 19:19:22 UTC, Dicebot wrote:
 On Tuesday, 4 February 2014 at 18:05:17 UTC, Sean Kelly wrote:
 Support for green threads in std.concurrency is almost 
 complete. I should really just do the last bit of work. I 
 imagine you could try out the idea now though by using the 
 messaging in vibe.d, since every connection is a fiber.
Can you provide any more details? Right now vibe.d uses its own implementation (https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/core/concurrency.d) - trying to replace it with the updated Phobos one could be a good real-world test for such an enhancement.
My motivation was to make std.concurrency work with vibe.d. And more generally, to start testing fiber-based concurrency in general. The basic idea is to make certain low-level parts of std.concurrency be pluggable, so the same API can be used on top of different threading schemes. You basically just need to implement this interface:

interface Scheduler {
    void start(void delegate() op);  // start the scheduler
    void spawn(void delegate() op);  // spawn a new thread
    void yield();                    // for send and receive to yield, to allow green threading
    Condition newCondition(Mutex m); // the condition will notify/wait for new messages
}

I should have a sample implementation working for green threads shortly. Then I need to submit a bugzilla ticket and sort out a pull request.
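To illustrate the shape of such a plug-in (my sketch, not Sean's actual implementation), here is a toy round-robin scheduler over core.thread.Fiber. It covers start/spawn/yield and omits newCondition, which needs a fiber-aware condition variable:

import core.thread : Fiber;

// A toy cooperative scheduler: runs fibers round-robin until all terminate.
// Purely illustrative; a real Scheduler must also supply newCondition.
class ToyFiberScheduler
{
    private Fiber[] queue;

    void start(void delegate() op)
    {
        spawn(op);
        while (queue.length)
        {
            auto f = queue[0];
            queue = queue[1 .. $];
            f.call();                       // run until it yields or returns
            if (f.state != Fiber.State.TERM)
                queue ~= f;                 // not finished: back of the line
        }
    }

    void spawn(void delegate() op)
    {
        queue ~= new Fiber(op);
    }

    void yield()
    {
        Fiber.yield();                      // hand control back to start()
    }
}

A spawn() made from inside a running fiber simply appends to the queue and is picked up on the next pass of the loop in start().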
Feb 04 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
Okay, just for fun, here are some results with the new scheduler. I injected periodic yields into the code to simulate the yielding that would happen automatically if the code was using send and receive. First the code:


import core.atomic : atomicOp;
import std.concurrency : FiberScheduler, scheduler, spawn;
import std.stdio : writefln, writeln;

shared long count = 0;

shared static ~this() {
	writefln("count = %s", count);
}

void childThread() {
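	// Bump the shared counter, yielding every 100 iterations when a scheduler is installed.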
	foreach(i; 0 .. 1_000) {
		atomicOp!"+="(count, 1);
		if (scheduler && 0 == i % 100)
			scheduler.yield();
	}
}

void mainThread() {
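	// Spawn 100,000 workers: kernel threads by default, fibers when a scheduler is installed.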
	foreach(i; 0 .. 100_000) {
		auto tid = spawn(&childThread);
	}
}

void runFibers() {
	scheduler = new FiberScheduler;

	scheduler.start(() {
		mainThread();
	});
}

void main(string[] args) {
	if (args.length > 1 && args[1] == "threads")
		mainThread();
	else if (args.length > 1 && args[1] == "fibers")
		runFibers();
	else writeln("specify threads or fibers");
}


And the results:


$ time concurrency threads
count = 100000000

real	1m11.033s
user	1m23.944s
sys	0m29.272s

$ time concurrency fibers
count = 100000000

real	0m5.998s
user	0m3.536s
sys	0m2.455s


I've got to say that I was surprised how fast 1 million kernel
threads were for this task.  That's orders of magnitude beyond
what I'd consider a sane number.
Feb 04 2014
parent reply "Bienlein" <jeti789 web.de> writes:
On Wednesday, 5 February 2014 at 01:02:37 UTC, Sean Kelly wrote:
 Okay, just for fun, here are some results with the new 
 scheduler.
   I injected periodic yields into the code to simulate the
 yielding that would happen automatically if the code was using
 send and receive.  First the code:
Hi Sean, with "send and receive" you mean adding to a channel and doing a blocking take on it? Just for me to build up an understanding. Regards, Bienlein
Feb 05 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 5 February 2014 at 10:56:23 UTC, Bienlein wrote:
 On Wednesday, 5 February 2014 at 01:02:37 UTC, Sean Kelly wrote:
 Okay, just for fun, here are some results with the new 
 scheduler.
  I injected periodic yields into the code to simulate the
 yielding that would happen automatically if the code was using
 send and receive.  First the code:
Hi Sean, with "send and receive" you mean adding to a channel and doing a blocking take on it? Just for me to build up an understanding.
Sort of. std.concurrency uses the actor model. So it's messaging, but not the CSP model used by Go. We should probably offer both, but for now it's just actors. And because you basically have one channel per thread, the limiting factor to date is how many threads you can sanely run simultaneously. Actor-oriented languages typically use green threads instead of kernel threads so the number of threads can scale. In Erlang, a "process" (ie. a thread) is equivalent to a class in D, so there tends to be a lot of them.
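As a minimal illustration of that one-mailbox-per-thread model (my sketch, not from the thread): since there is no separate channel object, a sender passes its own Tid along so the spawned actor knows where to reply.

import std.concurrency : receiveOnly, send, spawn, thisTid, Tid;
import std.stdio : writeln;

void echoActor()
{
    // Wait for exactly one (string, Tid) message in this thread's mailbox.
    auto req = receiveOnly!(string, Tid)();
    req[1].send("echo: " ~ req[0]);   // reply into the sender's mailbox
}

void main()
{
    auto tid = spawn(&echoActor);
    tid.send("hello", thisTid);
    writeln(receiveOnly!string());    // prints "echo: hello"
}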
Feb 05 2014
parent reply "Bienlein" <jeti789 web.de> writes:
On Wednesday, 5 February 2014 at 14:40:46 UTC, Sean Kelly wrote:
 Sort of. std.concurrency uses the actor model. So it's 
 messaging, but not the CSP model used by Go. We should probably 
 offer both, but for now it's just actors. And because you 
 basically have one channel per thread, the limiting factor to 
 date is how many threads you can sanely run simultaneously. 
 Actor-oriented languages typically use green threads instead of 
 kernel threads so the number of threads can scale. In Erlang, a 
 "process" (ie. a thread) is equivalent to a class in D, so 
 there tends to be a lot of them.
On a very well equipped machine 10.000 threads is about the maximum for the JVM. Now for D 1.000.000 kernel threads are not a problem!? Well, I'm a D newbie and a bit confused now... Have to ask some questions trying not to bug people.

Apparently, a kernel thread in D is not an OS thread. Does D have its own threading model then? Couldn't see that from what I found on dlang.org. The measurement result for fibers is so much better than for threads because fibers have less overhead for context switching? Will actors in D benefit from your FiberScheduler when it has been released? Do you know in which version of D your FiberScheduler is planned to be included?

In Go you can easily spawn 100.000 goroutines (aka green threads), probably several 100.000. Being able to spawn way more than 100.000 threads in D with the low context-switching overhead of fibers, you are basically in the same league as Go. And D is a really rich language, in contrast to Go. This looks cool :-)
Feb 05 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 5 February 2014 at 15:38:43 UTC, Bienlein wrote:
 On a very well equipped machine 10.000 threads is about the
 maximum for the JVM. Now for D 1.000.000 kernel threads are not
 a problem!? Well, I'm a D newbie and a bit confused now... Have
 to ask some questions trying not to bug people. Apparently, a
 kernel thread in D is not an OS thread. Does D have its own
 threading model then? Couldn't see that from what I found on
 dlang.org. The measurement result for fibers is so much better
 than for threads because fibers have less overhead for context
 switching? Will actors in D benefit from your FiberScheduler
 when it has been released? Do you know in which version of D
 your FiberScheduler is planned to be included?
Well, I spawned 1 million threads, but there's no guarantee that 1 million were running concurrently. So I decided to run a test. I forced the code to block until all threads were started, and when using kernel threads this hung with 2047 threads running (this is on OSX). So I think OSX has a hard internal limit of 2047 threads. It's possible this can be extended somehow, but I didn't investigate. And since I don't currently have a great way to block fibers, what I was doing there was a busy wait, which was just slow going waiting for all the threads to spin up.

Next I just figured I'd keep a high water mark for concurrent thread count for the code I posted yesterday. Both fibers and kernel threads topped out at about 10. For fibers, this makes perfect sense given the yield strategy (each client thread yields 10 times while running). And I guess the scheduling for kernel threads made that come out about the same. So the fact that I was able to spawn 1 million kernel threads doesn't actually mean a whole lot. I should have thought about that more yesterday.

Because of the added synchronization counting threads, everything slowed down a bit, so I reduced the number of threads to 100.000. Here are some timings:

$ time concurrency threads
numThreadsToSpawn = 100000, maxConcurrent = 12

real	1m8.573s
user	1m22.516s
sys	0m27.985s

$ time concurrency fibers
numThreadsToSpawn = 100000, maxConcurrent = 10

real	0m5.860s
user	0m3.493s
sys	0m2.361s

So in short, a "kernel thread" in D (which is equivalent to instantiating a core.thread.Thread) is an OS thread. The fibers are user-space threads that context switch when explicitly yielded and use core.thread.Fiber.

One thing to note about the FiberScheduler is that I haven't sorted out a solution for thread-local storage. So if you're using the FiberScheduler and each "thread" is accessing some global static data it expects to be exclusive to itself, you'll end up with an undefined result. Making D's "thread-local by default" actually be fiber-local when using fibers is a pretty hard problem to solve, and can be dealt with later if the need arises. My hope was that by making the choice of scheduler user-defined, it's up to the user to choose the appropriate threading model for their application, and we can hopefully sidestep the need to sort this out. It was the main issue blocking my doing this ages ago, and I didn't think of this pluggable approach until recently.

The obvious gain here is that std.concurrency is no longer strictly limited by the overhead of kernel threads, and so can be used more according to the actor model as was originally intended. I can imagine more complex schedulers multiplexing fibers across a pool of kernel threads, for example. The FiberScheduler is more a proof of concept than anything.

As for when this will be available... I will have a pull request sorted out shortly, so you could start playing with it soon. It being included in an actual release means a review and such, but as this is really just a fairly succinct change to an existing module, I hope it won't be terribly contentious.
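To make the thread-local-storage caveat concrete, here is a small sketch (mine, and assuming the FiberScheduler behaves as described above): a module-level variable is thread-local by default, so two fibers multiplexed onto the same kernel thread see and mutate the same slot.

import std.concurrency : FiberScheduler, scheduler, spawn;
import std.stdio : writefln;

int counter;  // thread-local by default -- but shared by fibers on one thread

void bump()
{
    counter += 1;
    writefln("counter = %s", counter);
}

void main()
{
    scheduler = new FiberScheduler;
    scheduler.start(() {
        spawn(&bump);   // prints counter = 1
        spawn(&bump);   // prints counter = 2, not 1: same TLS slot
    });
}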
 In Go you can easily spawn 100.000 goroutines (aka green
 threads), probably several 100.000. Being able to spawn way
 more than 100.000 threads in D with the low context-switching
 overhead of fibers, you are basically in the same league as
 Go. And D is a really rich language, in contrast to Go. This
 looks cool :-)
Yeah, I think it's exciting. I had originally modeled std.concurrency after Erlang and like the way the syntax worked out, but using kernel threads is limiting. I'm interested to see how this scales once people start playing with it. It's possible that some tuning of when yields occur may be needed as time goes on, but that really needs more eyes than my own and probably multiple real-world tests as well.

As some general background on actors vs. CSP in std.concurrency, I chose actors for two reasons. First, the communication model for actors is unstructured, so it's adaptable to a lot of different application designs. If you want structure you can impose it at the protocol level, but it isn't necessary to do so--simply using std.concurrency requires practically no code at all for the simple case. And second, I wasn't terribly fond of the "sequential" part of CSP. I really want a messaging model that scales horizontally across processes and across hosts, and the CSP algebra doesn't work that way. At the time, I found a few algebras that were attempting to basically merge the two approaches, but nothing really stood out.
Feb 05 2014
next sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Wednesday, 5 February 2014 at 20:37:44 UTC, Sean Kelly wrote:

 As for when this will be available... I will have a pull request
 sorted out shortly, so you could start playing with it soon.  It
 being included in an actual release means a review and such, but
 as this is really just a fairly succinct change to an existing
 module, I hope it won't be terribly contentious.
Sounds good. So I only need to watch the GitHub repo for Phobos and I will get notified? Or do I need to watch some other repo for D on GitHub? Just to be on the safe side, since I'm new to D and not familiar with the way things are split up.
 ... And second, I wasn't terribly fond of
 the "sequential" part of CSP.  I really want a messaging model
 that scales horizontally across processes and across hosts, and
 the CSP algebra doesn't work that way.
What is nice about CSP is that you can prove that your code is free of deadlocks. The Go guys have developed a tool that parses the code and then tells you what it has found.
 As some general background on actors vs. CSP in std.concurrency,
 I chose actors for two reasons.  First, the communication model
 for actors is unstructured, so it's adaptable to a lot of
 different application designs.
Yeah, I understand the reasoning. In its level of granularity, CSP sits somewhere between low-level locks/semaphores/etc. and high-level actors. I guess you can easily build actors on top of CSP. In D, actors are not as blown up as, for example, in Scala or Akka. Creating an actor is mostly like spawning a thread, so actors in D are much less heavyweight than in Scala/Akka. Actors in D must also have a message queue, like channels in CSP, where the message is inserted when some tid.send(...) is done. It is just not accessible from the outside.
 ...  It's possible this can be extended somehow, but I
 didn't investigate.  And since I don't currently have a great 
 way
 to block fibers, what I was doing there was a busy wait, which
 was just slow going waiting for all the threads to spin up.
Goroutines in Go are also co-operative (i.e. not pre-emptive), but I'm not sure. They probably yield when a channel has run empty. Well, then they have to, in order to detach the thread that serves the channel and prevent the system from running out of threads. I guess they may have a strategy for when to yield based on how long other channels had to wait to get a thread attached to them. For that purpose, maybe there is a way to measure the traffic in the message queues of actors in D to get some effective yielding done. Just some thought; I'm not really an expert here.
 Heh, here is a more interesting interpretation of this article:
 http://versusit.org/go-vs-ruby
Thanks for the link. Seems like the whole success story in this article is based on using goroutines and channels. So getting something similar accomplished in D would be important for D to be used for scalable/elastic server-side software. Rust is basically using the same approach as Go with regard to threading. There seems to be something to it. Cheers, Bienlein
Feb 06 2014
next sibling parent "Bienlein" <jeti789 web.de> writes:
Here is a document about the scheduler design in Go:
https://docs.google.com/document/d/1TTj4T2JO42uD5ID9e89oa0sLKhJYD0Y_kqxDv3I3XMw/edit

The C sources for the Go scheduler are here:
http://code.google.com/p/go/source/browse/src/pkg/runtime/proc.c?r=01acf1dbe91f673f6308248b8f45ec0564b1d751

Maybe it could be useful, just in case... ;-).
Feb 06 2014
prev sibling parent reply "logicchains" <jonathan.t.barnard gmail.com> writes:
 What is nice about CSP is that you can prove that your code is
 free of deadlocks. The Go guys have developed a tool that
 parses the code and then tells you what it has found.
Note that the Go race detector isn't a static analysis tool that identifies deadlocks at compile time; it instruments the code and then detects race conditions at runtime. It's based on the C/C++ ThreadSanitizer runtime library, so a similar thing could probably be implemented for D.
 Goroutines in Go are also co-operative (i.e. not pre-emptive),
 but I'm not sure.
The Go scheduler can perform a limited form of pre-emptive scheduling; from the version 1.2 release notes: "In prior releases, a goroutine that was looping forever could starve out other goroutines on the same thread, a serious problem when GOMAXPROCS provided only one user thread. In Go 1.2, this is partially addressed: The scheduler is invoked occasionally upon entry to a function. This means that any loop that includes a (non-inlined) function call can be pre-empted, allowing other goroutines to run on the same thread."
 Rust is basically using the same approach as Go with regard to 
 threading.
Rust is actually moving away from directly tying the language to one kind of threading, so that it's possible to choose between M:N threading (goroutines) or 1:1 threading (system threads). See this discussion: https://mail.mozilla.org/pipermail/rust-dev/2013-November/006550.html for the reasoning behind this.
Feb 06 2014
parent "Bienlein" <jeti789 web.de> writes:
On Thursday, 6 February 2014 at 13:00:51 UTC, logicchains wrote:

 Note that the Go race detector isn't a static analysis tool 
 that identifies deadlocks at compile time; it instruments the 
 code and then detects race conditions at runtime. It's based on 
 the C/C++ ThreadSanitizer runtime library, so a similar thing 
 could probably be implemented for D.
Thanks for pointing that out. I seem to have interpreted the information I had too optimistically.
 Rust is actually moving away from directly tying the language 
 to one kind of threading, so that it's possible to choose 
 between M:N threading (goroutines) or 1:1 threading (system 
 threads). See this discussion: 
 https://mail.mozilla.org/pipermail/rust-dev/2013-November/006550.html 
 for the reasoning behind this.
Yes, I read an interview on infoq.com saying the same thing, which confused me a bit. M:N threading is still there, but is there still some focus on it as with the Go people? Anyway, as long as D continues its own way with fibers... ;-).
Feb 06 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 5 February 2014 at 20:37:44 UTC, Sean Kelly wrote:
 As for when this will be available... I will have a pull request
 sorted out shortly, so you could start playing with it soon.  It
 being included in an actual release means a review and such, but
 as this is really just a fairly succinct change to an existing
 module, I hope it won't be terribly contentious.
https://github.com/D-Programming-Language/phobos/pull/1910
Feb 06 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 6 February 2014 at 19:24:39 UTC, Sean Kelly wrote:
 https://github.com/D-Programming-Language/phobos/pull/1910
x-posted to vibe.d newsgroup. Awesome!
Feb 06 2014
prev sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Thursday, 6 February 2014 at 19:24:39 UTC, Sean Kelly wrote:
 On Wednesday, 5 February 2014 at 20:37:44 UTC, Sean Kelly wrote:

 https://github.com/D-Programming-Language/phobos/pull/1910
Hello, I have a little question about how pre-emption works with the FiberScheduler. Let's say I create 100.000 fibers that all run long-runners (such as calculating fibonacci(100)). Now I start another fiber that just prints "hello world" to the console, so it's a short-runner. When can I expect "hello world" to appear on the console?

a. It will take a long time, in case fibonacci(100) never does any yield.
b. The FiberScheduler will yield periodically, so "hello world" will be displayed in not so long a time.
c. I need to do a yield from within the fibonacci function here and there for "hello world" to be displayed in not so long a time.

Just for my understanding... Thanks, Bienlein
Feb 13 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 13 February 2014 at 15:30:58 UTC, Bienlein wrote:
 On Thursday, 6 February 2014 at 19:24:39 UTC, Sean Kelly wrote:
 On Wednesday, 5 February 2014 at 20:37:44 UTC, Sean Kelly 
 wrote:

 https://github.com/D-Programming-Language/phobos/pull/1910
Hello, I have a little question about how pre-emption works with the FiberScheduler. Let's say I create 100.000 fibers that all run long-runners (such as calculating fibonacci(100)). Now I start another fiber that just prints "hello world" to the console, so it's a short-runner. When can I expect "hello world" to appear on the console?
The API is able to context switch inside send and receive. So if you aren't sending messages with some frequency then the level of parallel execution will be fairly low. For apps like this, it's possible that a more complex scheduler that is backed by a thread pool would be more appropriate. Since D isn't built from the ground up around fibers, choosing the right scheduler for your application is an important decision.
Feb 13 2014
next sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Thursday, 13 February 2014 at 15:40:05 UTC, Sean Kelly wrote:
 The API is able to context switch inside send and receive. So 
 if you aren't sending messages with some frequency then the 
 level of parallel execution will be fairly low.  For apps like 
 this, it's possible that a more complex scheduler that is 
 backed by a thread pool would be more appropriate.  Since D 
 isn't built from the ground up around fibers, choosing the 
 right scheduler for your application is an important decision.
Hi Sean, thanks for the quick reply. Let's say I have most of my actors running with the FiberScheduler. Then I have my emergency actor that is supposed to shut down my nuclear power plant here and now in case it receives a message to do so. Now I let the emergency actor run in a kernel thread. This way it should be immediately responsive. Is that right? Because that would be really good enough for me. Thanks, Bienlein
Feb 13 2014
parent "Sean Kelly" <sean invisibleduck.org> writes:
Yes. The schedulers are required to maintain some data (one being 
a message queue) for each "thread" they spawn. If the data is 
requested from a thread the scheduler doesn't own, it's required 
to return a thread-local copy instead. In short, any manually 
created kernel thread will get its own message queue regardless 
of the scheduler in place.
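A sketch of that mixed setup (hypothetical names and structure on my part, using std.concurrency's register/locate to hand the kernel thread's Tid to the fibers):

import core.thread : Thread;
import std.concurrency : FiberScheduler, locate, receive, register,
    scheduler, send, thisTid, Tid;
import std.stdio : writeln;

void main()
{
    // The emergency handler gets a dedicated kernel thread; it still
    // receives through its own per-thread message queue.
    auto emergency = new Thread(() {
        register("emergency", thisTid);
        receive((string msg) { writeln("EMERGENCY: ", msg); });
    });
    emergency.start();

    while (locate("emergency") == Tid.init)
        Thread.yield();                 // wait until the name is registered

    // Everything else runs as fibers under the FiberScheduler.
    scheduler = new FiberScheduler;
    scheduler.start(() {
        locate("emergency").send("shut down the reactor");
    });

    emergency.join();
}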
Feb 13 2014
prev sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Thursday, 13 February 2014 at 15:40:05 UTC, Sean Kelly wrote:
 On Thursday, 13 February 2014 at 15:30:58 UTC, Bienlein wrote:
 On Thursday, 6 February 2014 at 19:24:39 UTC, Sean Kelly wrote:
 On Wednesday, 5 February 2014 at 20:37:44 UTC, Sean Kelly 
 wrote:

 https://github.com/D-Programming-Language/phobos/pull/1910
Are there any plans for when the FiberScheduler will be included in D? Wait for D 2.066? Not wanting to be "pushy", it's only because of pure impatience ;-)
Mar 03 2014
parent reply Sönke Ludwig <sludwig+dforum outerproduct.org> writes:
On 03.03.2014 11:31, Bienlein wrote:
 On Thursday, 13 February 2014 at 15:40:05 UTC, Sean Kelly wrote:
 On Thursday, 13 February 2014 at 15:30:58 UTC, Bienlein wrote:
 On Thursday, 6 February 2014 at 19:24:39 UTC, Sean Kelly wrote:
 On Wednesday, 5 February 2014 at 20:37:44 UTC, Sean Kelly wrote:

 https://github.com/D-Programming-Language/phobos/pull/1910
Are there any plans for when the FiberScheduler will be included in D? Wait for D 2.066? Not wanting to be "pushy", it's only because of pure impatience ;-)
Just out of curiosity, what did you miss in vibe.d regarding fiber based scheduling? Of course it would be great to have something like this in Phobos, but to really have a consistent system, changes need to be made all over the place to make things work asynchronously under the hood; otherwise you have a system that makes it really easy to shoot yourself in the foot, which may be much worse than not having support at all. Rather than starting with bits of this here and there, the complete picture should IMO be well thought out beforehand, starting by integrating low-level asynchronous operations (and a pluggable event loop implementation). Adding fiber-based concurrency would then be the last step.
Mar 03 2014
next sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:
 Just out of curiosity, what did you miss in vibe.d regarding 
 fiber based scheduling?
Hi Sönke, I'm thinking of developing a little actor library on top of D's spawn/receive model for creating threads, which is already actor-like but at the level of global functions. I want to mold a thin class layer on top of it to have actors at class level. Vibe.d would be a good solution for distributed actors, but as a first step I want to have local actors. Actors that are in the same memory space don't need to communicate through sockets as in the case of vibe.d. Regards, Bienlein
Mar 03 2014
parent Sönke Ludwig <sludwig+dforum outerproduct.org> writes:
On 03.03.2014 16:55, Bienlein wrote:
 On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:
 Just out of curiosity, what did you miss in vibe.d regarding fiber
 based scheduling?
Hi Sönke, I'm thinking of developing a little actor library on top of D's spawn/receive model for creating threads, which is already actor-like but at the level of global functions. I want to mold a thin class layer on top of it to have actors at class level. Vibe.d would be a good solution for distributed actors, but as a first step I want to have local actors. Actors that are in the same memory space don't need to communicate through sockets as in the case of vibe.d. Regards, Bienlein
The vibe.core.concurrency module provides the same interface as std.concurrency (with some differing details). Once Sean's fiber additions to std.concurrency are ready, vibe.core.concurrency will be layered on top of (and finally replaced by) it. There is also vibe.stream.taskpipe, which offers a stream interface for passing data between tasks. This works for tasks in the same or in different threads.
Mar 12 2014
prev sibling next sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:

 Just out of curiosity, what did you miss in vibe.d regarding 
 fiber based scheduling?
There is something else I forgot to mention. One scenario I'm thinking of is having a large number of connections, like more than 100.000, that I want to listen on. This results in a situation with blocking I/O for all those connections. Fibers in D are more like continuations that are distributed over several kernel threads. The way Sean Kelly has implemented the FiberScheduler, a fiber is resumed when it receives an item, like data arriving through the connection it serves, as in my scenario. At least this is the way I understood the implementation. So I can have like 100.000 connections simultaneously as in Go, without having to use Go (the Go language is too simple for my taste).
Mar 03 2014
parent Sönke Ludwig <sludwig+dforum outerproduct.org> writes:
On 03.03.2014 22:58, Bienlein wrote:
 On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:

 Just out of curiosity, what did you miss in vibe.d regarding fiber
 based scheduling?
There is something else I forgot to mention. One scenario I'm thinking of is having a large number of connections, like more than 100.000, that I want to listen on. This results in a situation with blocking I/O for all those connections. Fibers in D are more like continuations that are distributed over several kernel threads. The way Sean Kelly has implemented the FiberScheduler, a fiber is resumed when it receives an item, like data arriving through the connection it serves, as in my scenario. At least this is the way I understood the implementation. So I can have like 100.000 connections simultaneously as in Go, without having to use Go (the Go language is too simple for my taste).
In vibe.d, there are basically two modes of fiber scheduling.

The usual mode is purely driven by the event loop: once a task/fiber triggers a blocking operation, let's say a socket receive operation, it registers its handle for the corresponding event and calls an internal rawYield() function. Once the event fires, the fiber is resumed.

The other mode happens when yield() (in vibe.core.core) is explicitly called. In this case, tasks are inserted into a singly-linked list, which is processed in chunks alternated with a call to processEvents() and in FIFO order, to ensure fair scheduling and to avoid blocking event processing when tasks perform continuous computations with intermittent yield() calls.

So the first mode AFAICS works just like Sean's fiber scheduler. And at least on 64-bit systems, there is nothing that speaks against handling huge numbers of connections simultaneously. 32-bit can also handle a lot of connections with small fiber stack sizes (setTaskStackSize), but using decently sized stacks will quickly eat up the available address space.
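As a small illustration of that event-driven mode, here is vibe.d's canonical TCP echo example (module names as in vibe.d at the time; details may differ between versions). The connection callback runs in its own task (fiber), and the reads behind conn.write(conn) register for socket events and yield to the event loop instead of blocking a kernel thread.

import vibe.core.core : runEventLoop;
import vibe.core.net : listenTCP;

void main()
{
    // One fiber per connection; piping the connection into itself
    // echoes all received data back to the client.
    listenTCP(7777, (conn) { conn.write(conn); });
    runEventLoop();
}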
Mar 12 2014
prev sibling parent reply "Bienlein" <jeti789 web.de> writes:
On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:

 Just out of curiosity, what did you miss in vibe.d regarding 
 fiber based scheduling?
By the way, is there a way to make use of vibe.d in something like a local mode? I mean some in-memory mode without going through TCP. Thanks, Bienlein
Mar 08 2014
parent "Sean Kelly" <sean invisibleduck.org> writes:
On Saturday, 8 March 2014 at 16:01:00 UTC, Bienlein wrote:
 On Monday, 3 March 2014 at 14:27:53 UTC, Sönke Ludwig wrote:

 Just out of curiosity, what did you miss in vibe.d regarding 
 fiber based scheduling?
By the way, is there a way to make use of vibe.d in something like a local mode? I mean some in-memory mode without going through TCP.
Pipes maybe?
Mar 08 2014
prev sibling parent "Suliman" <evermind live.ru> writes:
 "How We Went from 30 Servers to 2: Go". Link: 
 http://blog.iron.io/2013/03/how-we-went-from-30-servers-to-2-go.html
Heh, here is a more interesting interpretation of this article: http://versusit.org/go-vs-ruby
Feb 05 2014