
digitalmars.D - concurrency

reply Denton Cockburn <diboss hotmail.com> writes:
Ok, Walter's said previously (I think) that he's going to wait to see what
C++ does in regards to multicore concurrency.

Ignoring this for now, for fun, what ideas do you guys have regarding
multicore concurrency?
Feb 02 2008
next sibling parent reply "Craig Black" <craigblack2 cox.net> writes:
"Denton Cockburn" <diboss hotmail.com> wrote in message 
news:pan.2008.02.03.02.33.36.603288 hotmail.com...
 Ok, Walter's said previously (I think) that he's going to wait to see what
 C++ does in regards to multicore concurrency.

 Ignoring this for now, for fun, what ideas do you guys have regarding
 multicore concurrency?

Walter also has said recently that he wants to implement automatic parallelization, and is working on features that will support this (const, invariant, pure). I think Andrei is pushing this. I have my doubts that this will be useful for most programs. I think that to leverage this automatic parallelization, you will have to code in a functional style, or build your application using pure functions. Granularity will also probably be an issue. Because of these drawbacks, automatic parallelization may not be so automatic, but may require careful programming, just like manual parallelization. But maybe I'm wrong and it will be the greatest thing ever.

-Craig
Feb 03 2008
parent reply Daniel Lewis <murpsoft hotmail.com> writes:
Craig Black Wrote:
 Walter also has said recently that he wants to implement automatic 
 parallelization, and is working on features that will support this (const, 
 invariant, pure).  I think Andrei is pushing this.  I have my doubts that 
 this will be useful for most programs.  I think that to leverage this 
 automatic parallelization, you will have to code in a functional style, or 
 build your application using pure functions.  Granularity will also probably 
 be an issue.  Because of these drawbacks, automatic parallelization may not 
 be so automatic, but may require careful programming, just like manual 
 parallelization.  But maybe I'm wrong and it will be the greatest thing 
 ever.
 
 -Craig 
 

Craig, I'm not sure if you noticed, but AMD and Intel have had "HT" for a long time and are now pushing multicore to desktop users as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.

D is moving towards supporting assertions that data isn't changed by an algorithm, and/or that it must not be changed. That doesn't require any more work than deciding whether something should be constant, and then making it compile.

I really have no idea what the approach will be for parallelization, but if Walter's waiting for C++ to figure it out, then it'll be better than what they have.

Regards, Dan
Feb 03 2008
parent reply "Craig Black" <craigblack2 cox.net> writes:
"Daniel Lewis" <murpsoft hotmail.com> wrote in message 
news:fo5vdf$2q2e$1 digitalmars.com...
 Craig Black Wrote:
 Walter also has said recently that he wants to implement automatic
 parallelization, and is working on features that will support this (const,
 invariant, pure).  I think Andrei is pushing this.  I have my doubts that
 this will be useful for most programs.  I think that to leverage this
 automatic parallelization, you will have to code in a functional style, 
 or
 build your application using pure functions.  Granularity will also 
 probably
 be an issue.  Because of these drawbacks, automatic parallelization may 
 not
 be so automatic, but may require careful programming, just like manual
 parallelization.  But maybe I'm wrong and it will be the greatest thing
 ever.

 -Craig

Craig, I'm not sure if you noticed, but AMD and Intel have had "HT" for a long time and are now pushing multicore to desktop users as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.

Yes everything is going multi-threaded and multi-core. Any feature that aids programmers in writing multi-threaded software is a plus. However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading, and parallelize it.
 D is moving towards supporting some assertions that data isn't changed by 
 an algorithm, and/or that it must not be changed.  That doesn't require 
 any more work than deciding whether something should be constant, and then 
 making it compile.

Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit. Thus, careful consideration will be required to leverage automatic parallelization.
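Craig's granularity point can be made concrete with a small sketch. This is an illustration in Python rather than D, and the names (pure_sum_of_squares, run_parallel) are mine; ThreadPoolExecutor stands in for the dispatch an auto-parallelizing compiler would emit (in CPython the GIL means real CPU speedup would need processes, but the overhead-vs-granularity trade-off is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def pure_sum_of_squares(n):
    # "Pure": the result depends only on the argument; no shared state is touched.
    return sum(i * i for i in range(n))

def run_parallel(tasks):
    # An auto-parallelizing compiler would, in effect, wrap independent
    # pure calls in dispatch like this.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(pure_sum_of_squares, tasks))

# Four tiny tasks: dispatch overhead dwarfs the actual work.
tiny = run_parallel([10] * 4)
# Four large tasks: enough granularity for parallel dispatch to pay off.
big = run_parallel([100_000] * 4)
```

If the programmer only ever writes the tiny version, "automatic" parallelization buys nothing; the careful part is arranging the program so the pure calls are coarse enough to be worth dispatching.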
 I really have no idea what the approach will be for parallelization, but 
 if Walter's waiting for C++ to figure it out then it'll be better than 
 what they have.

I guess we can wait and see what happens. It just seems that everyone is anticipating a silver bullet that may never arrive. -Craig
Feb 03 2008
parent reply Christopher Wright <dhasenan gmail.com> writes:
Craig Black wrote:
 
 "Daniel Lewis" <murpsoft hotmail.com> wrote in message 
 news:fo5vdf$2q2e$1 digitalmars.com...
 Craig Black Wrote:
 Walter also has said recently that he wants to implement automatic
 parallelization, and is working on features that will support this (const,
 invariant, pure).  I think Andrei is pushing this.  I have my doubts 
 that
 this will be useful for most programs.  I think that to leverage this
 automatic parallelization, you will have to code in a functional 
 style, or
 build your application using pure functions.  Granularity will also 
 probably
 be an issue.  Because of these drawbacks, automatic parallelization 
 may not
 be so automatic, but may require careful programming, just like manual
 parallelization.  But maybe I'm wrong and it will be the greatest thing
 ever.

 -Craig

Craig, I'm not sure if you noticed, but AMD and Intel have had "HT" for a long time and are now pushing multicore to desktop users as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.

Yes everything is going multi-threaded and multi-core. Any feature that aids programmers in writing multi-threaded software is a plus. However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading, and parallelize it.
 D is moving towards supporting some assertions that data isn't changed 
 by an algorithm, and/or that it must not be changed.  That doesn't 
 require any more work than deciding whether something should be 
 constant, and then making it compile.

Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit. Thus, careful consideration will be required to leverage automatic parallelization.

I'm curious how automatic parallelization might work with delegates. It probably won't, unless you put the 'pure' keyword in the signature of the delegates. In that case, I hope that pure delegates are implicitly convertible to non-pure delegates. I was wondering because I work with a highly event-driven application in C# that might benefit from automatic parallelization, though some event subscribers probably modify data that they don't own.
Feb 04 2008
parent reply "Craig Black" <cblack ara.com> writes:
"Christopher Wright" <dhasenan gmail.com> wrote in message 
news:fo74ij$2asd$1 digitalmars.com...
 Craig Black wrote:
 "Daniel Lewis" <murpsoft hotmail.com> wrote in message 
 news:fo5vdf$2q2e$1 digitalmars.com...
 Craig Black Wrote:
 Walter also has said recently that he wants to implement automatic
 parallelization, and is working on features that will support this 
 (const,
 invariant, pure).  I think Andrei is pushing this.  I have my doubts 
 that
 this will be useful for most programs.  I think that to leverage this
 automatic parallelization, you will have to code in a functional style, 
 or
 build your application using pure functions.  Granularity will also 
 probably
 be an issue.  Because of these drawbacks, automatic parallelization may 
 not
 be so automatic, but may require careful programming, just like manual
 parallelization.  But maybe I'm wrong and it will be the greatest thing
 ever.

 -Craig

Craig, I'm not sure if you noticed, but AMD and Intel have had "HT" for a long time and are now pushing multicore to desktop users as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.

Yes everything is going multi-threaded and multi-core. Any feature that aids programmers in writing multi-threaded software is a plus. However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading, and parallelize it.
 D is moving towards supporting some assertions that data isn't changed 
 by an algorithm, and/or that it must not be changed.  That doesn't 
 require any more work than deciding whether something should be 
 constant, and then making it compile.

Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit. Thus, careful consideration will be required to leverage automatic parallelization.

I'm curious how automatic parallelization might work with delegates. It probably won't, unless you put the 'pure' keyword in the signature of the delegates. In that case, I hope that pure delegates are implicitly convertible to non-pure delegates.

Good question. Yes, it would seem necessary that delegates be pure or non-pure. And I agree, pure should convert easily to non-pure, but not vice-versa.
 I was wondering because I work with a highly event-driven application in 
 C# that might benefit from automatic parallelization, though some event 
 subscribers probably modify data that they don't own.

In that case, it may be beneficial to somehow separate parallel and sequential events, perhaps with separate event queues. However, it would require that each event knows whether it is "pure" or not, so that it is placed on the appropriate queue. -Craig
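Craig's two-queue idea can be sketched as follows, in Python for illustration (the EventBroker name and the explicit pure flag are mine; in the D proposal the flag would instead come from the delegate's pure type):

```python
from concurrent.futures import ThreadPoolExecutor

class EventBroker:
    def __init__(self):
        self.pure_subs = []    # touch no shared state: safe to run in parallel
        self.impure_subs = []  # must run sequentially, in subscription order

    def subscribe(self, handler, pure=False):
        # The explicit flag stands in for what the compiler would know
        # from a pure delegate type.
        (self.pure_subs if pure else self.impure_subs).append(handler)

    def publish(self, event):
        results = []
        # Pure handlers can be dispatched concurrently.
        with ThreadPoolExecutor() as pool:
            results.extend(pool.map(lambda h: h(event), self.pure_subs))
        # Impure handlers run one at a time.
        for h in self.impure_subs:
            results.append(h(event))
        return results
```

publish returns the pure results (computed in parallel) followed by the sequential ones, which is exactly the "two queues" separation: the broker only needs to know, per subscriber, which queue it belongs on.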
Feb 04 2008
parent reply Christopher Wright <dhasenan gmail.com> writes:
Craig Black wrote:
 In that case, it may be beneficial to somehow separate parallel and 
 sequential events, perhaps with separate event queues.  However, it would 
 require that each event knows whether it is "pure" or not, so that it is 
 placed on the appropriate queue.

A static if or two in the event broker would solve it. There would be a method:

void subscribe(T)(EventTopic topic, T dg)
{
    static assert(is(T == delegate));
    static if (is(T == pure))
    {
        // add to the pure event subscribers for auto parallelization
    }
    else
    {
        // add to the impure ones
    }
}
 -Craig 

Feb 04 2008
parent "Craig Black" <cblack ara.com> writes:
"Christopher Wright" <dhasenan gmail.com> wrote in message 
news:fo8o62$2m1t$1 digitalmars.com...
 Craig Black wrote:
 In that case, it may be beneficial to somehow separate parallel and 
 sequential events, perhaps with separate event queues.  However, it would 
 require that each event knows whether it is "pure" or not, so that it is 
 placed on the appropriate queue.

A static if or two in the event broker would solve it. There would be a method:

void subscribe(T)(EventTopic topic, T dg)
{
    static assert(is(T == delegate));
    static if (is(T == pure))
    {
        // add to the pure event subscribers for auto parallelization
    }
    else
    {
        // add to the impure ones
    }
}
 -Craig


It might not be as fancy as using static if, but it might be simpler to use overloading (if the syntax will support it).

void subscribe(EventTopic topic, void delegate() del) { ... }
void subscribe(EventTopic topic, pure void delegate() del) { ... }
Feb 05 2008
prev sibling next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Denton Cockburn wrote:
 Ok, Walter's said previously (I think) that he's going to wait to see what
 C++ does in regards to multicore concurrency.
 
 Ignoring this for now, for fun, what ideas do you guys have regarding
 multicore concurrency?

There were two solutions for concurrent programming proposed at the D conference. Walter talked about automatic parallelization made available by functional programming styles, which Craig & Daniel are discussing. The other solution presented, which I have seen comparatively little discussion in the NG about, was software transactional memory.

I don't think that STM necessarily leads to simpler or more readable code than lock-based concurrency, however I think STM has two distinct advantages over these traditional methods:
1. possibly better performance
2. better reliability (i.e. no need to worry about deadlocks, etc.)

I think an ideal solution is to combine the two techniques. If functional-style programming is emphasized, and STM is used where state-based programming makes more sense, it frees the programmer to write code without worrying about the complexities of synchronization. That said, I never found traditional concurrency that hard, especially within frameworks like SEDA, etc.
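For readers unfamiliar with STM, here is a toy single-variable version in Python (TVar and atomically are my names; real STM implementations track read/write sets across many locations, and this only shows the optimistic read-compute-commit-retry cycle):

```python
import threading

class TVar:
    # A transactional variable: a value plus a commit-version counter.
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def atomically(tvar, update):
    # Optimistic transaction: read a snapshot, compute, then commit only
    # if no other transaction committed in between; otherwise retry.
    while True:
        snap_value, snap_version = tvar.value, tvar.version
        new_value = update(snap_value)  # may be re-executed on conflict
        with tvar.lock:
            if tvar.version == snap_version:
                tvar.value = new_value
                tvar.version += 1
                return new_value
        # Conflict: another transaction committed first; loop and retry.

counter = TVar(0)

def worker():
    for _ in range(1000):
        atomically(counter, lambda v: v + 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# No increment is ever lost: counter.value ends up at 4000.
```

Note the retry loop is also where the livelock risk and the performance cost come from: under heavy contention, transactions repeatedly redo work that locks would have serialized once.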
Feb 03 2008
parent reply Sean Kelly <sean f4.ca> writes:
Robert Fraser wrote:
 Denton Cockburn wrote:
 Ok, Walter's said previously (I think) that he's going to wait to see
 what
 C++ does in regards to multicore concurrency.

 Ignoring this for now, for fun, what ideas do you guys have regarding
 multicore concurrency?

There were two solutions for concurrent programming proposed at the D conference. Walter talked about automatic parallelization made available functional programming styles, which Craig & Daniel are discussing. The other solution presented, which I have seen comparatively little discussion in the NG about, was software transactional memory. I don't think that STM necessarily leads to simpler or more readable code than lock-based concurrency, however I think STM has two distinct advantages over these traditional methods: 1. possibly better performance 2. better reliability (i.e. no need to worry about deadlocks, etc.)

STM actually offers worse performance than lock-based programming, but in exchange gains a guarantee that the app won't deadlock (though I believe it could theoretically livelock, at least with some STM strategies). Also, it's simply easier for most people to think in terms of transactions. For the average application, I think it's a preferable option to lock-based programming.

However, I think even STM will only get us so far, and eventually we're going to need to move to more naturally parallelizable methods of programming. The 'pure' functions and such in D are an attempt to get some of this without losing the imperative syntax that is so popular today.
 I think an ideal solution is to combine the two techniques. If
 functional-style programming is emphasized, and STM is used where
 state-based programming makes more sense, it frees the programmer to
 write code without worrying about the complexities of synchronization.

If we're talking about D, then I agree.
 That said, I never found traditional concurrency that hard, especially
 within frameworks like SEDA, etc.

Me either, but from what I've heard, this is not typical.
Feb 03 2008
parent reply Bedros Hanounik <2bedros NOSPAMgmail.com> writes:
I think the best way to tackle concurrency is to have two types of functions

blocking functions (like in the old sequential code execution)

and non-blocking functions (the new parallel code execution)

for non-blocking functions, the function returns an additional value which becomes true
when the function's execution is completed

for example


a = foo();

// para1_foo and para2_foo are completely independent and executed in parallel

b = para1_foo();
c = para2_foo();

// wait here for both functions to finish
// another syntax could be used also

if (b.done && c.done)
     continue;


I'm not sure about supporting non-pure functions (or allowing accessing global
vars); it's just too ugly for no good reason.



Sean Kelly Wrote:

 Robert Fraser wrote:
 Denton Cockburn wrote:
 Ok, Walter's said previously (I think) that he's going to wait to see
 what
 C++ does in regards to multicore concurrency.

 Ignoring this for now, for fun, what ideas do you guys have regarding
 multicore concurrency?

There were two solutions for concurrent programming proposed at the D conference. Walter talked about automatic parallelization made available functional programming styles, which Craig & Daniel are discussing. The other solution presented, which I have seen comparatively little discussion in the NG about, was software transactional memory. I don't think that STM necessarily leads to simpler or more readable code than lock-based concurrency, however I think STM has two distinct advantages over these traditional methods: 1. possibly better performance 2. better reliability (i.e. no need to worry about deadlocks, etc.)

STM actually offers worse performance than lock-based programming, but in exchange gains a guarantee that the app won't deadlock (though I believe it could theoretically livelock, at least with some STM strategies). Also it's simply easier for most people to think in terms of transactions. For the average application, I think it's a preferable option to lock-based programming. However, I think even STM will only get us so far, and eventually we're going to need to move to more naturally parallelizable methods of programming. The 'pure' functions and such in D are an attempt to get some of this without losing the imperative syntax that is so popular today.
 I think an ideal solution is to combine the two techniques. If
 functional-style programming is emphasized, and STM is used where
 state-based programming makes more sense, it frees the programmer to
 write code without worrying about the complexities of synchronization.

If we're talking about D, then I agree.
 That said, I never found traditional concurrency that hard, especially
 within frameworks like SEDA, etc.

Me either, but from what I've heard, this is not typical.

Feb 03 2008
parent reply Sean Kelly <sean f4.ca> writes:
Bedros Hanounik wrote:
 I think the best way to tackle concurrency is to have two types of functions
 
 blocking functions (like in the old sequential code execution)
 
 and non-blocking functions (the new parallel code execution)
 
 for non-blocking functions, the function returns additional type which is true
when function execution is completed

This is basically how futures work. It's a pretty useful approach. Sean
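Bedros's sketch maps almost directly onto futures as found in today's standard libraries. For example, in Python (slow_square is a stand-in for any expensive, independent computation):

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    # Stands in for any expensive, independent computation.
    return n * n

with ThreadPoolExecutor() as pool:
    # Kick off both calls without blocking...
    b = pool.submit(slow_square, 6)
    c = pool.submit(slow_square, 7)
    # ...other work could happen here...
    # b.done() / c.done() are the non-blocking completion checks that the
    # "b.done and c.done" sketch above describes.
    total = b.result() + c.result()  # .result() blocks until each is finished
```

The caller decides where the synchronization point is: the program only blocks at the point where it actually needs the values.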
Feb 03 2008
next sibling parent interessted <interessted interessted.com> writes:
hi,

wouldn't it be okay to do it like in 'Active Oberon'
(http://bluebottle.ethz.ch/languagereport/ActiveReport.html) or 'Zonnon'
(http://www.oberon.ethz.ch/oberon.net/)?



Sean Kelly Wrote:

 Bedros Hanounik wrote:
 I think the best way to tackle concurrency is to have two types of functions
 
 blocking functions (like in the old sequential code execution)
 
 and non-blocking functions (the new parallel code execution)
 
 for non-blocking functions, the function returns additional type which is true
when function execution is completed

This is basically how futures work. It's a pretty useful approach. Sean

Feb 04 2008
prev sibling next sibling parent reply Daniel Lewis <murpsoft hotmail.com> writes:
Sean Kelly Wrote:
 This is basically how futures work.  It's a pretty useful approach.

Agreed. Steve Dekorte has been working with them for a long time and integrated them into his Io language. He found he could regularly get performance comparable to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better.

I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months, but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.

Regards, Dan
Feb 04 2008
parent reply Sean Kelly <sean f4.ca> writes:
Daniel Lewis wrote:
 Sean Kelly Wrote:
 This is basically how futures work.  It's a pretty useful approach.

Agreed. Steve Dekorte has been working with them for a long time and integrated them into his iolanguage. He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better. I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.

Actually, it's entirely possible to do lock-free allocation and deletion. HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap. A GC could do basically the same thing, but collections would be a bit more complex. I've considered writing such a GC, but it's an involved project and I simply don't have the time. Sean
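The per-heap scheme Sean describes can be sketched roughly as follows, in Python for illustration (ThreadHeap is my name; deque.append stands in for the atomic push onto a lock-free slist that a real allocator like HOARD would use, and "allocation" here just hands out bytearrays):

```python
from collections import deque

class ThreadHeap:
    def __init__(self):
        self.free_blocks = deque()    # blocks available for reuse by the owner
        self.pending_frees = deque()  # blocks freed by *other* threads

    def alloc(self, size):
        # Owner thread: drain remotely-freed blocks, then reuse or get fresh.
        while self.pending_frees:
            self.free_blocks.append(self.pending_frees.popleft())
        if self.free_blocks:
            return self.free_blocks.popleft()
        return bytearray(size)        # fresh block "from the system"

    def free_local(self, block):
        # Owner thread frees: straight onto its own free list.
        self.free_blocks.append(block)

    def free_remote(self, block):
        # Non-owning thread frees: append-only, no owner coordination needed.
        # In a real allocator this is the lock-free slist push.
        self.pending_frees.append(block)
```

The point of the design is that the owner thread never contends with anyone on its own free list; cross-thread frees only touch the append-only pending list, which is the single place that needs an atomic primitive.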
Feb 04 2008
parent reply Bedros Hanounik <2bedros NOSPAMgmail.com> writes:
Guys,

take a look at the transactional memory concept; a very interesting type of locking
(or should I say sharing) of memory allocations.

http://en.wikipedia.org/wiki/Software_transactional_memory



-Bedros


Sean Kelly Wrote:

 Daniel Lewis wrote:
 Sean Kelly Wrote:
 This is basically how futures work.  It's a pretty useful approach.

Agreed. Steve Dekorte has been working with them for a long time and integrated them into his iolanguage. He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better. I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.

Actually, it's entirely possible to do lock-free allocation and deletion. HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap. A GC could do basically the same thing, but collections would be a bit more complex. I've considered writing such a GC, but it's an involved project and I simply don't have the time. Sean

Feb 04 2008
parent Sean Kelly <sean f4.ca> writes:
There's also a presentation about how it might apply to D here:

http://s3.amazonaws.com/dconf2007/DSTM.ppt
http://www.relisoft.com/D/STM_pptx_files/v3_document.htm

Bedros Hanounik wrote:
 Guys,
 
 take a look at transactional memory concept;  very interesting type of locking
(or should I say sharing) of memory allocations.
 
 http://en.wikipedia.org/wiki/Software_transactional_memory
 
 
 
 -Bedros
 
 
 Sean Kelly Wrote:
 
 Daniel Lewis wrote:
 Sean Kelly Wrote:
 This is basically how futures work.  It's a pretty useful approach.

I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.

Actually, it's entirely possible to do lock-free allocation and deletion. HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap. A GC could do basically the same thing, but collections would be a bit more complex. I've considered writing such a GC, but it's an involved project and I simply don't have the time. Sean


Feb 05 2008
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
Sean Kelly Wrote:

 Bedros Hanounik wrote:
 I think the best way to tackle concurrency is to have two types of functions
 
 blocking functions (like in the old sequential code execution)
 
 and non-blocking functions (the new parallel code execution)
 
 for non-blocking functions, the function returns additional type which is true
when function execution is completed

This is basically how futures work. It's a pretty useful approach. Sean

I've never heard of that. Does anyone have a good link for extra detail on futures?
Feb 04 2008
next sibling parent reply downs <default_357-line yahoo.de> writes:
Jason House wrote:
 Sean Kelly Wrote:
 
 Bedros Hanounik wrote:
 I think the best way to tackle concurrency is to have two types of functions

 blocking functions (like in the old sequential code execution)

 and non-blocking functions (the new parallel code execution)

 for non-blocking functions, the function returns additional type which is true
when function execution is completed

Sean

I've never heard of that. Does anyone have a good link for extra detail on futures?

The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached. The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)

scrapple.tools' ThreadPool class has a futures implementation. Here's an example:

auto t = new Threadpool(2);
auto f = t.future(&do_complicated_calculation);
auto g = t.future(&do_complicated_calculation2);
return f() + g();

--downs
Feb 04 2008
parent reply "Joel C. Salomon" <joelcsalomon gmail.com> writes:

downs wrote:
 Jason House wrote:
 I've never heard of that.  Does anyone have a good link for extra detail on
futures?

The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached. The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)

… while Sean Kelly wrote:
 Futures are basically Herb Sutter's rehashing of Hoare's CSP model.

More specifically, this sounds like a special case of a CSP-like channel where only one datum is ever transmitted. (Generally, channels are comparable to UNIX pipes and can transmit many data.) Russ Cox has a nice introduction to channel/thread programming at <http://swtch.com/~rsc/talks/threads07> and an overview of the field at <http://swtch.com/~rsc/thread>.

--Joel
Feb 04 2008
parent downs <default_357-line yahoo.de> writes:
Joel C. Salomon wrote:
 downs wrote:
 Jason House wrote:
 I've never heard of that.  Does anyone have a good link for extra detail on
futures?

The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached. The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)

& while Sean Kelly wrote:
 Futures are basically Herb Sutter's rehashing of Hoare's CSP model.

More specifically, this sounds like a special case of a CSP-like channel where only one datum is ever transmitted. (Generally, channels are comparable to UNIX pipes and can transmit many data.)

Heh. Funny coincidence. Let's take a look at the implementation of Future(T):

class Future(T) {
    T res;
    bool done;
    MessageChannel!(T) channel;

    this() { New(channel); }

    T eval() {
        if (!done) {
            res = channel.get();
            done = true;
        }
        return res;
    }
    alias eval opCall;

    bool finished() { return channel.canGet; }
}

:) --downs
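The same one-shot-channel-plus-cache design can be written with Python's standard library, which may help readers who don't have the scrapple.tools MessageChannel at hand (the Future name mirrors downs's class; put() plays the role of the producing thread's send, and queue.Queue is the channel):

```python
import threading
import queue

class Future:
    def __init__(self):
        self._channel = queue.Queue(maxsize=1)  # the one-shot MessageChannel
        self._done = False
        self._result = None

    def put(self, value):
        # Producer side: send the computed value down the channel.
        self._channel.put(value)

    def __call__(self):
        # eval/opCall: block on the channel the first time, cache afterwards.
        if not self._done:
            self._result = self._channel.get()
            self._done = True
        return self._result

    def finished(self):
        # canGet: true once a value is available (or already cached).
        return self._done or not self._channel.empty()

f = Future()
threading.Thread(target=lambda: f.put(42)).start()
value = f()  # blocks until the producer sends; later calls hit the cache
```

This also makes Joel's point concrete: a future really is a channel over which exactly one datum is ever transmitted, plus a cache so repeated evaluation is cheap.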
Feb 04 2008
prev sibling parent Sean Kelly <sean f4.ca> writes:
Jason House wrote:
 Sean Kelly Wrote:
 
 Bedros Hanounik wrote:
 I think the best way to tackle concurrency is to have two types of functions

 blocking functions (like in the old sequential code execution)

 and non-blocking functions (the new parallel code execution)

 for non-blocking functions, the function returns additional type which is true
when function execution is completed


I've never heard of that. Does anyone have a good link for extra detail on futures?

Futures are basically Herb Sutter's rehashing of Hoare's CSP model. Here's a presentation of his where he talks about it: http://irbseminars.intel-research.net/HerbSutter.pdf Sean
Feb 04 2008
prev sibling parent reply Mike Koehmstedt <mykillk gmail.com> writes:
How does garbage collection currently work in a multi-processor environment?

My plan is to only have one thread per processor in addition to the main
thread. When GC runs, does it pause all threads on all processors or does it
only pause threads on a per-processor basis?


Denton Cockburn Wrote:

 Ok, Walter's said previously (I think) that he's going to wait to see what
 C++ does in regards to multicore concurrency.
 
 Ignoring this for now, for fun, what ideas do you guys have regarding
 multicore concurrency?

Feb 09 2008
parent Robert Fraser <fraserofthenight gmail.com> writes:
Mike Koehmstedt wrote:
 How does garbage collection currently work in a multi-processor environment?
 
 My plan is to only have one thread per processor in addition to the main
thread. When GC runs, does it pause all threads on all processors or does it
only pause threads on a per-processor basis?
 
 
 Denton Cockburn Wrote:
 
 Ok, Walter's said previously (I think) that he's going to wait to see what
 C++ does in regards to multicore concurrency.

 Ignoring this for now, for fun, what ideas do you guys have regarding
 multicore concurrency?


It pauses all threads on all processors.
Feb 09 2008