
digitalmars.D - Massive loss for D on Tiobe

reply Georg Wrede <georg.wrede iki.fi> writes:
D made the May headline on Tiobe: "Programming language D suffers sharp 
fall". You can say that again, D went down 5 places, to below languages 
like RPG(OS/400) and ABAP!



D's loss seems unbelievable. D now has a 0.628% share, which is even 
less than what it's lost (-0.82%) in the last 12 months. What could be 
the reasons for it? Is it even possible to figure out any reason??

Can this loss induce people to abandon D, and others to not take it up, 
leading to cumulating losses in the coming months? What do we have to do 
to prevent this?
May 06 2009
next sibling parent grauzone <none example.net> writes:
Georg Wrede wrote:
 
 
 D made the May headline on Tiobe: "Programming language D suffers sharp 
 fall". You can say that again, D went down 5 places, to below languages 
 like RPG(OS/400) and ABAP!
 
 
 
 D's loss seems unbelievable. D now has a 0.628% share, which is even 
 less than what it's lost (-0.82%) in the last 12 months. What could be 
 the reasons for it? Is it even possible to figure out any reason??
 
 Can this loss induce people to abandon D, and others to not take it up, 
 leading to cumulating losses in the coming months? What do we have to do 
 to prevent this?
D2.0. Now flame away.
May 06 2009
prev sibling next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Georg Wrede (georg.wrede iki.fi)'s article
 D made the May headline on Tiobe: "Programming language D suffers sharp
 fall". You can say that again, D went down 5 places, to below languages
 like RPG(OS/400) and ABAP!
 D's loss seems unbelievable. D now has a 0.628% share, which is even
 less than what it's lost (-0.82%) in the last 12 months. What could be
 the reasons for it? Is it even possible to figure out any reason??
 Can this loss induce people to abandon D, and others to not take it up,
 leading to cumulating losses in the coming months? What do we have to do
 to prevent this?
This fully convinces me that the Tiobe index should not be taken at face value. How does a language hit an all time high and a multiyear low within 2 months of each other? At best the Tiobe index is an unbiased but extremely high variance estimator of language popularity, and meaningful results can only be produced by averaging results over a period much longer than a month. At worst, it's so biased that it's just plain garbage.
May 06 2009
prev sibling next sibling parent reply "Carlos Smith" <carlos-smith sympatico.ca> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote:

 Can this loss induce people to abandon D, and others to not take it 
 up, leading to cumulating losses in the coming months? What do we 
 have to do to prevent this?
Yes.

Give D1 a future. Shift focus to D1 (D2 is experimental). Make D1 really usable in the workplace. Being called stable is not enough.

Produce a grammar for the language. This will give it a definition on which everybody will align. Fix any inconsistencies in the language.

Choose one (1) license for all Digital Mars D-related stuff. Go true Open Source, no strings attached. I was absolutely thrilled to read that D had gone Open Source. Then I was quite unthrilled after I read the official license. This episode had a negative impact on D.

Get rid of OMF...

D has a future ...
May 06 2009
parent reply Eldar Insafutdinov <e.insafutdinov gmail.com> writes:
Carlos Smith Wrote:

 "Georg Wrede" <georg.wrede iki.fi> wrote:
 
 Can this loss induce people to abandon D, and others to not take it 
 up, leading to cumulating losses in the coming months? What do we 
 have to do to prevent this?
Yes. Give D1 a future. Shift focus on D1 (D2 is experimental). Make D1 really usable in the workplace. Being called stable is not enough.
D1's sources are fully open now; submit patches, make it more stable. That's what people are actually doing.
 Get rid of OMF...
Both hands raised for that...
 D has a future ...
Oh yeah!
May 06 2009
parent Vincenzo Ampolo <vincenzo.ampolo gmail.com> writes:
Eldar Insafutdinov wrote:

 D1 has complete open sources now, submit patches, make it more 
stable.
 That's what people are actually doing.
From dmd.2.029/dmd/src/dmd/backendlicense.txt:

"The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars."

This is neither Open Source nor Free Software, IMHO. So D2, the "bleeding edge" which could be very interesting for development, is not free software (at least the backend). I recall that an open backend is needed to port a compiler to other platforms (x86_64, armel, sparc, powerpc, etc.).

Let's look at D1: I don't see any "backend" directory (backend not released?), and in dmd.1.030/dmd/license.txt there is again:

"The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars."

Inside dmd/src/dmd there are gpl.txt and artistic.txt. So the dmd frontend is GPL v1 (a quite old version of the GPL, but at least it's GPL!).
May 07 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 D's loss seems unbelievable. D now has a 0.628% share, which is even 
 less than what it's lost (-0.82%) in the last 12 months. What could be 
 the reasons for it? Is it even possible to figure out any reason??
Of course it's unbelievable. This change didn't happen over a year's time, it happened in one month. That means either the methodology Tiobe uses changed, or the search portals changed their hit-count algorithm.

Notice http://www.tiobe.com/index.php/content/paperinfo/tpci/tpci_definition.htm where they automatically discount all D hits by 10%. They don't do that for C.

For example, of the first 100 hits for "D programming" on Google, I found only 6 that were not about D, two of which were already excluded by Tiobe's algorithm. That's 4%, not 10%. I found 3 non-C ones for "C programming" in the first 100. That's 3%, not 0%.
May 06 2009
next sibling parent reply superdan <super dan.org> writes:
Walter Bright Wrote:

 Georg Wrede wrote:
 D's loss seems unbelievable. D now has a 0.628% share, which is even 
 less than what it's lost (-0.82%) in the last 12 months. What could be 
 the reasons for it? Is it even possible to figure out any reason??
Of course it's unbelievable. This change didn't happen over a year's time, it happened in one month. This means that the methodology Tiobe uses changed, or the search portals changed their hit count algorithm. Notice http://www.tiobe.com/index.php/content/paperinfo/tpci/tpci_definition.htm where they automatically discount all D hits by 10%. They don't do that for C. For example, of the first 100 hits of "D programming" on google, I found only 6 that were not about D, two of which was already excluded by Tiobe's algorithm. That's 4%, not 10%. I found 3 non-C ones for "C programming" in the first 100. That's 3%, not 0%.
i sorta prefer grau douche zone's theory. makes no sense 'cept in da framework where he's a retarded dumbass & evil to boot. but u gotta respect the man. he's waited so patiently fer dis opportunity to suck collective cock. gotta give it 2 da man.
May 06 2009
parent reply grauzone <none example.net> writes:
You're offending me. Please stop this immediately.

Thank you.
May 06 2009
parent reply superdan <super dan.org> writes:
grauzone Wrote:

 You're offending me. Please stop this immediately.
 
 Thank you.
wut happened to `flame on', hercules? anyway just killfile me. i dun change handles. better yet. stop being an ass. we'd all be way better off. suit yerself.
May 06 2009
parent reply grauzone <none example.net> writes:
superdan wrote:
 grauzone Wrote:
 
 You're offending me. Please stop this immediately.

 Thank you.
wut happened to `flame on', hercules? <expletives deleted>
When it comes to "everyone being better off", what about stopping talking like a 16 year old rapper on a hormone trip? Grow up. Thank you.
May 07 2009
parent reply superdan <super dan.org> writes:
grauzone Wrote:

 superdan wrote:
 grauzone Wrote:
 
 You're offending me. Please stop this immediately.

 Thank you.
wut happened to `flame on', hercules? <expletives deleted>
When it comes to "everyone being better off", what about stopping talking like a 16 year old rapper on a hormone trip? Grow up.
expletives deleted? then u missed the subject. subject was yer bashing d2 more often than a teen has a boner. that is da problem, not my expletives. ok, yer highness, we fuckin' get it. u dun like d2. u made ur point several times now move on with life. if u r too hung up u say idiotic crap like this with tiobe & d2. better have a foul mouth & a clean mind than vice versa. so u grow up friend. til then, at least dun flamebait. dun do da crime if u can't do da time.
May 07 2009
parent reply grauzone <none example.net> writes:
superdan wrote:
 grauzone Wrote:
 
 superdan wrote:
 grauzone Wrote:

 You're offending me. Please stop this immediately.

 Thank you.
wut happened to `flame on', hercules? <expletives deleted>
When it comes to "everyone being better off", what about stopping talking like a 16 year old rapper on a hormone trip? Grow up.
expletives deleted? then u missed the subject. subject was yer bashing d2 more often than <expletives deleted>
I don't dislike D2. (OK, except for some parts like const & immutable.) I'm just thinking that what D actually needs is a stable implementation, not more features.
 better have a foul mouth & a clean mind than vice versa.
Sorry, the "I have a foul mouth, but what I'm really saying is highly intellectual and deep, and thus everyone criticizing me is actually dumb" turn doesn't work on me. Grow up.
May 07 2009
parent reply Vincenzo Ampolo <vincenzo.ampolo gmail.com> writes:
grauzone wrote:

 I'm just thinking that what D actually needs, is a stable
 implementation, and not more features.
+1

And... I think grauzone is right. Superdan, your language seems offensive to me too. Please stop this needless flame.
May 07 2009
parent reply Don <nospam nospam.com> writes:
Vincenzo Ampolo wrote:
 grauzone wrote:
 
 I'm just thinking that what D actually needs, is a stable
 implementation, and not more features.
+1
I think you'll like the next DMD release <g>.
May 07 2009
parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Thu, May 7, 2009 at 10:30 AM, Don <nospam nospam.com> wrote:
 Vincenzo Ampolo wrote:
 grauzone wrote:

 I'm just thinking that what D actually needs, is a stable
 implementation, and not more features.
+1
I think you'll like the next DMD release <g>.
Oh man, will it be another 0.178? I can only hope...
May 07 2009
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Ignore the Tiobe index. It's trash.

Bye,
bearophile
May 06 2009
prev sibling next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Georg,

 D made the May headline on Tiobe: "Programming language D suffers
 sharp fall". You can say that again, D went down 5 places, to below
 languages like RPG(OS/400) and ABAP!
 
 D's loss seems unbelievable. D now has a 0.628% share, which is even
 less than what it's lost (-0.82%) in the last 12 months. What could be
 the reasons for it? Is it even possible to figure out any reason??
 
 Can this loss induce people to abandon D, and others to not take it
 up, leading to cumulating losses in the coming months? What do we have
 to do to prevent this?
 
Take a look at the graphs for RPG(OS/400) and D:

http://www.tiobe.com/index.php/paperinfo/tpci/RPG_(OS_400).html
http://www.tiobe.com/index.php/paperinfo/tpci/D.html

Something/someone is gaming the system.
May 06 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 something/someone is gaming the system.
Here's some more food for thought. Tiobe says they do a search for "xxx programming".

"C programming"       2,000,000   19.537
"Pascal programming"    136,000     .776
"D programming"         187,000     .628

This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
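As a back-of-the-envelope check (assuming Tiobe ratings scale linearly with hit counts, which is a simplification of their actual formula), the numbers imply:

```python
# Rough sanity check of the figures above, assuming ratings scale
# linearly with hit counts (a simplification of Tiobe's real formula).
hits = {"C": 2_000_000, "Pascal": 136_000, "D": 187_000}
ratings = {"C": 19.537, "Pascal": 0.776, "D": 0.628}

points_per_hit = ratings["C"] / hits["C"]  # calibrate on C's figures
for lang in ("Pascal", "D"):
    implied = hits[lang] * points_per_hit
    print(f"{lang}: implied {implied:.3f}%, reported {ratings[lang]:.3f}%")
# D reports more hits than Pascal yet a *lower* rating, and both
# implied figures sit well above the reported ones.
```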
May 06 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Thu, 07 May 2009 01:52:23 +0400, Walter Bright <newshound1 digitalmars.com>
wrote:

 BCS wrote:
 something/someone is gaming the system.
Here's some more food for thought. Tiobe says they do a search for "xxx programming". "C programming" 2,000,000 19.537 "Pascal programming" 136,000 .776 "D programming" 187,000 .628 This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
I got just 184 000 for "D programming"
May 06 2009
parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Denis Koroskin wrote:
 On Thu, 07 May 2009 01:52:23 +0400, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 BCS wrote:
 something/someone is gaming the system.
Here's some more food for thought. Tiobe says they do a search for "xxx programming". "C programming" 2,000,000 19.537 "Pascal programming" 136,000 .776 "D programming" 187,000 .628 This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
I got just 184 000 for "D programming"
Just did a Google search, got 186k.

I think we should stop basing D's worth as a language on what Tiobe says. Sticks and stones may break D's bones, but Tiobe has to sleep eventually...

-- Daniel
May 06 2009
prev sibling parent Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 BCS wrote:
 something/someone is gaming the system.
Here's some more food for thought. Tiobe says they do a search for "xxx programming". "C programming" 2,000,000 19.537 "Pascal programming" 136,000 .776 "D programming" 187,000 .628 This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
Tried Google:

  184,000 for "D programming"
  166,000 for "3d programming"
1,590,000 for "C++ programming"
1,950,000 for "C programming"
  125,000 for "Pascal programming"
  292,000 for "Delphi programming"
2,920,000 for "Java programming"

But:

  114,000 for "abap programming"
   44,400 for "rpg programming"
May 06 2009
prev sibling next sibling parent Georg Wrede <georg.wrede iki.fi> writes:
BCS wrote:
 Reply to Georg,
 
 D made the May headline on Tiobe: "Programming language D suffers
 sharp fall". You can say that again, D went down 5 places, to below
 languages like RPG(OS/400) and ABAP!

 D's loss seems unbelievable. D now has a 0.628% share, which is even
 less than what it's lost (-0.82%) in the last 12 months. What could be
 the reasons for it? Is it even possible to figure out any reason??

 Can this loss induce people to abandon D, and others to not take it
 up, leading to cumulating losses in the coming months? What do we have
 to do to prevent this?
take a look at the graphs for RPG(OS/400) and D:

http://www.tiobe.com/index.php/paperinfo/tpci/RPG_(OS_400).html
http://www.tiobe.com/index.php/paperinfo/tpci/D.html

something/someone is gaming the system.
http://en.wikipedia.org/wiki/IBM_RPG
http://en.wikipedia.org/wiki/Abap

Tears and grief: RPG(OS/400) is a language for *punch cards*. And ABAP is Gerry's answer to *COBOL*. And Finland just lost to the *USA* in hockey, right after we beat Canada.

Jansen (of Tiobe) should probably not go wild adjusting the knobs; it may erode their credibility. But blaming him doesn't tidy our nest either. The D landscape in front of a programmer in search of a C++ replacement isn't what it should be. The language is ten years old, but you'd never guess. (Except from the outdated stuff on each website.)
May 06 2009
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Wed, 6 May 2009 21:02:58 +0000 (UTC), BCS wrote:

 Reply to Georg,
 
 D made the May headline on Tiobe: "Programming language D suffers
 sharp fall". You can say that again, D went down 5 places, to below
 languages like RPG(OS/400) and ABAP!
 
 D's loss seems unbelievable. D now has a 0.628% share, which is even
 less than what it's lost (-0.82%) in the last 12 months. What could be
 the reasons for it? Is it even possible to figure out any reason??
 
 Can this loss induce people to abandon D, and others to not take it
 up, leading to cumulating losses in the coming months? What do we have
 to do to prevent this?
 
take a look at the graph for RPG(OS/400) and D http://www.tiobe.com/index.php/paperinfo/tpci/RPG_(OS_400).html
I wonder if they have accidentally included "RolePlayingGame" programming in the RPG category?

For "RPG Programming" I get 47,500 hits.
For "RPG Programming" + OS/400 I get 9,840 hits.

On the surface, it seems that the Tiobe figures are very "rubbery", if not outright dishonest.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
May 07 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On the surface, it seems that the Tiobe figures a very "rubbery", if not
 outright dishonest.
I sent an email to Tiobe, and received a nice reply from Paul Jansen, who runs the index. He showed me his numbers, and the biggest factor in the drop in D's ranking was a large drop in hits from Yahoo's engine. Why that would be, neither of us knows. He agreed that the 90% "adjustment" factor needed to be revisited.

Anyhow, I don't believe there is anything dishonest going on. It's just the erratic nature of what search engines report as the "number of hits". Google's varies wildly all over the place. Who knows what is going on at Yahoo.
May 07 2009
prev sibling next sibling parent Vincenzo Ampolo <vincenzo.ampolo gmail.com> writes:
Georg Wrede wrote:

 D's loss seems unbelievable. D now has a 0.628% share, which is even
 less than what it's lost (-0.82%) in the last 12 months. What could 
be
 the reasons for it? Is it even possible to figure out any reason??
IMHO it's just marketing. Do you still want to raise that 0.628% share? If yes:

1) Use D.
2) Help with other projects instead of creating one-man projects.
3) Hope that the "big" projects merge into a few well-supported ones (like Phobos and Tango in D2) (maybe the compilers should do it too?).
May 06 2009
prev sibling parent reply Alix Pexton <alix.DOT.pexton gmail.DOT.com> writes:
Georg Wrede wrote:
 
 
 D made the May headline on Tiobe: "Programming language D suffers sharp 
 fall". You can say that again, D went down 5 places, to below languages 
 like RPG(OS/400) and ABAP!
 
 
 
 D's loss seems unbelievable. D now has a 0.628% share, which is even 
 less than what it's lost (-0.82%) in the last 12 months. What could be 
 the reasons for it? Is it even possible to figure out any reason??
 
 Can this loss induce people to abandon D, and others to not take it up, 
 leading to cumulating losses in the coming months? What do we have to do 
 to prevent this?
There was a small drop last month, and a note saying that hits for DTrace had been eliminated as false positives for the D Programming Language. If it was possible for DTrace to be such a false positive, I am curious about what other false positives could be affecting other languages, but I definitely think it is a blow to the index's credibility.

A...
May 07 2009
parent reply Nick B <nick.barbalich gmail.com> writes:
Hi

It seems that Bartosz's latest post, dated April 26th, is missing from 
his blog.

See :

http://bartoszmilewski.wordpress.com/


Nick B.
May 07 2009
next sibling parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
The post is back, rewritten and with some code teasers.

Nick B Wrote:

 Hi
 
 It seems that Bartosz's  latest post, dated April 26 th is missing from 
 his blog.
 
 See :
 
 http://bartoszmilewski.wordpress.com/
 
 
 Nick B.
May 26 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bartosz Milewski wrote:
 The post is back, rewritten and with some code teasers.
Has anyone reddit'ed it yet? Andrei
May 26 2009
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bartosz Milewski wrote:
 The post is back, rewritten and with some code teasers.
http://www.reddit.com/r/programming/comments/8ngwn/racefree_multithreading_in_a_hypothetical_language/ Vote up! Andrei
May 26 2009
prev sibling next sibling parent reply Jason House <jason.james.house gmail.com> writes:
Bartosz Milewski wrote:

 The post is back, rewritten and with some code teasers.
We've been teased for 6 months or more. I'm hoping the details will come out quickly now!

Here's what I took away from the article:
* The goal is to have minimal code changes for single-threaded code
* unique and lent are two new transitive type constructors
* lockfree is a new storage class (to guarantee sequential consistency)
* The new := operator is used for move semantics (if appropriate for the type)
* Objects can be declared as self-owned

I think a deep understanding of exactly what the final design is requires understanding the ownership scheme, which isn't described yet. unique is to invariant as lent is to const. Function arguments can be declared as lent and accept both unique and non-unique types (just like const can accept immutable and non-immutable types). Lent basically means what I think scope was intended to mean for function arguments.

I'm happy to finally see unique in the type system, since it really felt like a gaping hole to me.
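The unique/lent split described above can be sketched with a toy, runtime-checked stand-in in Python (all names here are hypothetical; the proposed design would enforce these rules at compile time, not at run time). Unique.move plays the role of the := operator, and an ordinary read-only parameter plays the role of lent:

```python
class Unique:
    """Toy stand-in for a 'unique' reference: one live owner at a time."""
    def __init__(self, value):
        self._value = value

    def move(self):
        # Plays the role of ':=': ownership transfers, source is emptied.
        assert self._value is not None, "use of moved-from unique value"
        value, self._value = self._value, None
        return Unique(value)

    def get(self):
        assert self._value is not None, "use of moved-from unique value"
        return self._value

def length(msg):
    # 'lent'-style parameter: the callee may read msg but must not keep it.
    return len(msg.get())

a = Unique("hello")
n = length(a)   # lending: a remains valid afterwards
b = a.move()    # moving: b now owns the value, a is left empty
print(n, b.get())
```

Calling a.get() after the move trips the assertion, mirroring what a compile-time check on moved-from variables would reject.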
May 26 2009
parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
You pretty much nailed it. The ownership scheme will be explained in more
detail in the next two installments, which are almost ready.
May 27 2009
parent reply Tim Matthews <tim.matthews7 gmail.com> writes:
This may seem slightly OT but in your blog "I will use syntax similar to 
that of the D programming language, but C++ and Java programmers 
shouldn’t have problems following it."


class MVar<T> {
private:
     T    _msg;
     bool _full;
public:
     // put: asynchronous (non-blocking)
     // Precondition: MVar must be empty
     void put(T msg) {
         assert (!_full);
         _msg := msg; // move
         _full = true;
         notify();
     }
     // take: If empty, blocks until full.
     // Removes the message and switches state to empty
     T take() {
         while (!_full)
             wait();
         _full = false;
         return := _msg;
     }
}
auto mVar = new MVar<owner::self, int>;

Why not MVar!(owner::self, int)? Why go back to ambiguous templates? 
Apart from the move operator it looks like C++ to me. Sorry if this 
doesn't make sense, but I've missed a few previous posts.
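For comparison, the MVar above can be approximated in an existing language; here is a sketch in Python using threading.Condition (an illustrative rendition, not from the blog). Python lacks move semantics, so the := lines become assignments that also clear the slot:

```python
import threading

class MVar:
    """One-slot channel, mirroring the MVar in the quoted post."""
    def __init__(self):
        self._cond = threading.Condition()
        self._msg = None
        self._full = False

    def put(self, msg):
        # Asynchronous (non-blocking); precondition: MVar must be empty.
        with self._cond:
            assert not self._full
            self._msg = msg
            self._full = True
            self._cond.notify()

    def take(self):
        # If empty, blocks until full; removes the message.
        with self._cond:
            while not self._full:
                self._cond.wait()
            self._full = False
            msg, self._msg = self._msg, None  # emulate the move-out
            return msg

# One thread takes, the main thread puts.
mvar = MVar()
results = []
t = threading.Thread(target=lambda: results.append(mvar.take()))
t.start()
mvar.put(42)
t.join()
print(results)  # -> [42]
```

Note that everything the type system would guarantee in the proposal (unique ownership of _msg, no aliasing after take) is only by convention here.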
May 27 2009
next sibling parent Robert Fraser <fraserofthenight gmail.com> writes:
Tim Matthews wrote:
 
 This may seem slightly OT but in your blog "I will use syntax similar to 
 that of the D programming language, but C++ and Java programmers 
 shouldn’t have problems following it."
 
 
 class MVar<T> {
 private:
     T    _msg;
     bool _full;
 public:
     // put: asynchronous (non-blocking)
     // Precondition: MVar must be empty
     void put(T msg) {
         assert (!_full);
         _msg := msg; // move
         _full = true;
         notify();
     }
     // take: If empty, blocks until full.
     // Removes the message and switches state to empty
     T take() {
         while (!_full)
             wait();
         _full = false;
         return := _msg;
     }
 }
 auto mVar = new MVar<owner::self, int>;
 
 Why not MVar!(owner::self, int)? Why go back to ambiguous templates? 
 Apart from the move operator it looks like c++ to me. Sorry if this 
 doesn't make sense but I've missed a few previous posts.
I think most of Bartosz's readers are C++ users. The "I will use syntax similar to that of the D programming language" sentence was probably put there in a first draft; after revision the example code was changed to be more C++-like, but the sentence wasn't removed.
May 27 2009
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
Tim Matthews Wrote:

 
 This may seem slightly OT but in your blog "I will use syntax similar to 
 that of the D programming language, but C++ and Java programmers 
 shouldn’t have problems following it."
 
 
 class MVar<T> {
 private:
      T    _msg;
      bool _full;
 public:
      // put: asynchronous (non-blocking)
      // Precondition: MVar must be empty
      void put(T msg) {
          assert (!_full);
          _msg := msg; // move
          _full = true;
          notify();
      }
      // take: If empty, blocks until full.
      // Removes the message and switches state to empty
      T take() {
          while (!_full)
              wait();
          _full = false;
          return := _msg;
      }
 }
 auto mVar = new MVar<owner::self, int>;
 
 Why not MVar!(owner::self, int)? Why go back to ambiguous templates? 
 Apart from the move operator it looks like c++ to me. Sorry if this 
 doesn't make sense but I've missed a few previous posts.
Don't read into it. I took it as being more readable for non-D users. Angle brackets are more recognizable, even for those that don't code in any of the languages mentioned. D's syntax is good, just not widespread.

Notice the lack of the template<typename T> that's required for C++; instead, the template argument is after the class name. There's also no constructor or initializers, which would be bugs in C++. It still looks like tweaked D code.
May 27 2009
parent Tim Matthews <tim.matthews7 gmail.com> writes:
Jason House wrote:

 Don't read into it. I took it as being more readable for non-D users. Angle

more recognizable, even for those that don't code in any of the languages
mentioned. D's syntax is good, just not wide spread. 
 
 Notice the lack of a template<typename T> that's required for C++, instead,
the template argument is after the class name. There's also no constrictor or
initializers which would be bugs in C++. It still looks like tweaked D code.
some of his articles a long time ago, I was just pointing out that design decision and that "Similar to D but C++/Java users will be OK" message.
May 27 2009
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
The article implies some level of flow analysis. Has Walter come around on this
topic?

As far as considering a variable moved, I believe the following rules would be
reasonable:
• Any if statement (or else clause) containing a move
• Any switch statement containing a move for any case
• Any fall-through cases where the prior case moved the variable
• Any function call not using a lent argument for the variable
• Moving inside a loop should be illegal

An explicit "is null" check should be able to bypass these rules. There are
probably ways to loosen the looping rule, such as a guarantee that the moved
variable won't be read from again.

Very similar rules can be used for detecting initialization of (unique)
variables. A variable can be considered initialized if:
• Both the if and else must initialize a variable
• All cases in a switch must initialize a variable
• Out parameter in a function call 
• Loops can't initialize variables
  relaxation: can init if guaranteed to run at least once

Those rules should be sufficiently simple to implement and extremely tolerable
for programmers.
Inevitably, I missed a case, but I hope the idea is clear, and that whatever I
overlooked does not add complexity.
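The loop rule can be illustrated with a toy, runtime-checked stand-in for unique (hypothetical names; the proposed compile-time rule would reject this code before it ever ran):

```python
class Unique:
    """Runtime-checked stand-in for a unique reference (hypothetical)."""
    def __init__(self, value):
        self._value = value

    def move(self):
        # Ownership transfers; the source is emptied.
        assert self._value is not None, "use of moved-from unique value"
        value, self._value = self._value, None
        return Unique(value)

sink = []
x = Unique("message")
try:
    for _ in range(2):
        # First iteration moves x; the second reads a moved-from value.
        sink.append(x.move())
except AssertionError as e:
    print("caught:", e)  # exactly the error the static rule prevents
```

The same shape shows why a move in only one if-branch is a problem: after the branch, the analysis cannot assume the variable is still live, so it must conservatively treat it as moved.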


Bartosz Milewski Wrote:

 The post is back, rewritten and with some code teasers.
 
 Nick B Wrote:
 
 Hi
 
 It seems that Bartosz's  latest post, dated April 26 th is missing from 
 his blog.
 
 See :
 
 http://bartoszmilewski.wordpress.com/
 
 
 Nick B.
May 27 2009
parent reply Jason House <jason.james.house gmail.com> writes:
I'm really surprised by the lack of design discussion in this thread. It's
amazing how there can be huge bursts of discussion on which keyword to use
(e.g. manifest constants), but then complete silence about major design
decisions like a thread-safety scheme that defines new transitive states and a
bunch of new keywords. The description even drew parallels to the (previously?)
unpopular const architecture.

Maybe people are waiting for Walter to go through all the hard work of
implementing this stuff before complaining that it's crap and proclaiming what
he should have done in the first place? That seems really unfair to Walter.
Then again, I see no indication of Walter wanting anything else.


Jason House Wrote:

 The article implies some level of flow analysis. Has Walter come around on
this topic?
 
 As far as considering a variable moved, I believe the following should be
reasonable
 • Any if statement (or else clause) containing a move
 • Any switch statement containing a move for any case
 • Any fall-through cases where the prior case moved the variable
 • Any function call not using a lent argument for the variable
 • Moving inside a loop should be illegal
 
 An explicit is null check should be able to bypass these rules. There are
probably ways to loosen the looping rule such as if there is a way to guarantee
the moved variable won't be read from again.
 
 Very similar rules can be used for detecting initialization of (unique)
variables. A variable can be considered initialized if:
 • Both the if and else must initialize a variable
 • All cases in a switch must initialize a variable
 • Out parameter in a function call 
 • Loops can't initialize variables
   relaxation: can init if guaranteed to run at least once
 
 Those rules should be sufficiently simple to implement and extremely tolerable
for programmers.
 Inevitably, I missed a case, but I hope the idea is clear, and that whatever I
overlooked does not add complexity.
 
 
 Bartosz Milewski Wrote:
 
 The post is back, rewritten and with some code teasers.
 
 Nick B Wrote:
 
 Hi
 
 It seems that Bartosz's  latest post, dated April 26 th is missing from 
 his blog.
 
 See :
 
 http://bartoszmilewski.wordpress.com/
 
 
 Nick B.
May 28 2009
next sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Thu, 28 May 2009 16:45:42 +0400, Jason House <jason.james.house gmail.com>
wrote:

 I'm really surprised by the lack of design discussion in this thread.  
 It's amazing how there can be huge bursts of discussion on which keyword  
 to use (e.g. manifest constants), but then complete silence about major  
 design decisions like thread safety that defines new transitive states  
 and a bunch of new keywords. The description even made parallels to the  
 (previously?) unpopular const architecture.

 Maybe people are waiting for Walter to go through all the hard work of  
 implementing this stuff before complaining that it's crap and  
 proclaiming Walter should have done in the first place?
 This seems really unfair to Walter. Then again, I see no indication of  
 Walter wanting anything else.
It's simply easier to discuss the bikeshed's color, because everyone is an expert in that.
May 28 2009
prev sibling next sibling parent reply grauzone <none example.net> writes:
1. Everyone agrees anyway that emulating fork() is the best way to deal 
with multithreading and synchronization.
2. We've yet to see how an implementation of the proposed design will 
work out. That means Walter has to implement it first. Reading blog 
entries about it is almost a bigger waste of time than discussing it in 
this newsgroup.
3. Not that many people are interested in D2.
4. Bikeshed colors.
May 28 2009
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from grauzone (none example.net)'s article
 1. Everyone agrees anyway, that emulating fork() is the best idea to
 deal with multithreading and synchronization.
 2. We'll yet have to see how an implementation of the proposed design
 will work out. This means Walter has to implement it. Reading blog
 entries about it is almost a bigger waste of time than discussing in
 this newsgroup.
 3. Not that many people are interested in D2.
 4. Bikeshed colors
Yeah, unfortunately, for something as complex as what's being proposed, I have a hard time understanding or forming an opinion of it until I've gotten my hands dirty and actually tried to use it a little. Just reading about it in the abstract, it's hard to form much of an opinion.
May 28 2009
prev sibling next sibling parent reply Tim Matthews <tim.matthews7 gmail.com> writes:
Jason House wrote:
 I'm really surprised by the lack of design discussion in this thread. It's
amazing how there can be huge bursts of discussion on which keyword to use
(e.g. manifest constants), but then complete silence about major design
decisions like thread safety that defines new transitive states and a bunch of
new keywords. The description even made parallels to the (previously?)
unpopular const architecture. 
 
 Maybe people are waiting for Walter to go through all the hard work of
implementing this stuff before complaining that it's crap and proclaiming
what Walter should have done in the first place? 
 This seems really unfair to Walter. Then again, I see no indication of Walter
wanting anything else.
 
 
I have a few things I would like to discuss, but I feel you are going to reply again with something like "don't go there, the syntax is too dangerous for you" (you can really offend people with comments like that; you should get to know them first). I also feel you are going to keep top-posting unless someone tells you not to, so quit complaining and get to your points, design recommendations, ideas, etc.
May 28 2009
parent Jason House <jason.james.house gmail.com> writes:
Tim Matthews Wrote:

 Jason House wrote:
 I'm really surprised by the lack of design discussion in this thread. It's
amazing how there can be huge bursts of discussion on which keyword to use
(e.g. manifest constants), but then complete silence about major design
decisions like thread safety that defines new transitive states and a bunch of
new keywords. The description even made parallels to the (previously?)
unpopular const architecture. 
 
 Maybe people are waiting for Walter to go through all the hard work of
implementing this stuff before complaining that it's crap and proclaiming
what Walter should have done in the first place? 
 This seems really unfair to Walter. Then again, I see no indication of Walter
wanting anything else.
 
 
I have a few things I would like to discuss, but I feel you are going to reply again with something like "don't go there, the syntax is too dangerous for you" (you can really offend people with comments like that; you should get to know them first).
I won't bite your head off, or anyone else's. I'm sorry if a prior post on this NG came across that way. That wasn't my intent.
 I also feel you are going to keep top-posting unless someone tells you 
 not to, so quit complaining and get to your points, design 
 recommendations, ideas, etc.
I top-posted because what I had to say had very little to do with the message I replied to. Many of my posts lately have aimed at trying to encourage collaboration. Maybe I'm going about it the wrong way. I know I'm a nobody, but I'm still trying in my own way to have a positive impact on D. So far, I think I'm just pissing people off.
May 28 2009
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Thu, 28 May 2009 08:45:42 -0400, Jason House  
<jason.james.house gmail.com> wrote:
 I'm really surprised by the lack of design discussion in this thread.  
 It's amazing how there can be huge bursts of discussion on which keyword  
 to use (e.g. manifest constants), but then complete silence about major  
 design decisions like thread safety that defines new transitive states  
 and a bunch of new keywords. The description even made parallels to the  
 (previously?) unpopular const architecture.

 Maybe people are waiting for Walter to go through all the hard work of  
 implementing this stuff before complaining that it's crap and  
 proclaiming what Walter should have done in the first place?
 This seems really unfair to Walter. Then again, I see no indication of  
 Walter wanting anything else.
Well, there's been a fair amount of previous related discussion. I've placed a proposal up on Wiki4D (http://www.prowiki.org/wiki4d/wiki.cgi?OwnershipTypesInD), though since it was assembled from a bunch of personal notes on the subject, and uses Walter's old suggestion of 'scope' instead of Bartosz's 'lent', it's a bit confusing. I'm planning on re-working it, but other deadlines come first.

There's also been a lot of talk about message-passing/future/promise/task/actor/agent-based concurrency, data-parallel models such as bulk synchronous programming (BSP) or GPU programming, and auto-parallelization of pure functions. About the only thing needed from the type system to implement any of these models is the ability for uniques/mobiles to do a do-si-do type move (which should be supported by ref unique). And the BSP/GPU stuff is way too bleeding edge to support in the language proper yet.

Honestly, I think people are holding back in part because Bartosz has only started to reveal a threading scheme, and so are waiting for him to complete it before proverbially ripping it apart.
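For the unfamiliar, the "do-si-do" move amounts to handing a unique reference across a thread boundary. A sketch in hypothetical syntax ('unique', 'spawn', 'send' and 'move' here are proposal-speak, not actual D):

```d
// Hypothetical sketch: 'unique' and message 'send' are proposed
// features from the ownership discussion, not part of the language.
unique Node head = new Node(42);  // statically the only live reference
auto worker = spawn(&consumer);
worker.send(move(head));          // ownership does a do-si-do:
// 'head' is now statically dead in this thread, so the consumer
// may mutate the Node freely -- no sharing, no locks needed.
```

The point is that the type system, not the programmer, guarantees no two threads ever see the same mutable data at the same time.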
May 28 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 28 May 2009 08:45:42 -0400, Jason House  
<jason.james.house gmail.com> wrote:

 I'm really surprised by the lack of design discussion in this thread.  
 It's amazing how there can be huge bursts of discussion on which keyword  
 to use (e.g. manifest constants), but then complete silence about major  
 design decisions like thread safety that defines new transitive states  
 and a bunch of new keywords. The description even made parallels to the  
 (previously?) unpopular const architecture.
For the most part, this really academic threading stuff is beyond me. It took me long enough to understand threading with mutex locks...

In any case, it didn't seem from the post that this was coming to D. It seemed like it was for a language Bartosz was working on besides D; the syntax doesn't even look close. Is this planned for D2 or D3? Or not at all? I remember Walter saying he didn't want to add umpteen different type-constructor keywords, even unique, because of the confusion it would cause.

In any case, once I decided it wasn't D related, I ignored it, just like I usually ignore bearophile's "look at what the obscureX language does" posts (no offense, bearophile).

-Steve
May 28 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Jason House, el 28 de mayo a las 08:45 me escribiste:
 I'm really surprised by the lack of design discussion in this thread.
 It's amazing how there can be huge bursts of discussion on which keyword
 to use (e.g. manifest constants), but then complete silence about major
 design decisions like thread safety that defines new transitive states
 and a bunch of new keywords. The description even made parallels to the
 (previously?) unpopular const architecture. 
I just find the new "thread-aware" design of D2 so complex, so twisted, that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all. I think D duplicates functionality: for "safe" concurrency I use processes and IPC (I get even more guarantees than D could ever give me). That's all I need. I don't need huge complexity in the language for that. And I think the D2 concurrency model is still way too low level. I would like D2 better if it were focused on macros, for example.
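To be concrete, by processes and IPC I mean the plain POSIX approach; an untested sketch in D (module and function names from druntime's core.sys.posix bindings):

```d
import core.sys.posix.unistd : fork, pipe, read, write, _exit;

void main()
{
    int[2] fds;
    pipe(fds);                 // one-way channel between processes
    if (fork() == 0)           // child gets its own address space,
    {                          // so data races are impossible by design
        int result = 6 * 7;
        write(fds[1], &result, result.sizeof);
        _exit(0);
    }
    int answer;
    read(fds[0], &answer, answer.sizeof);  // parent waits for the message
}
```

The isolation guarantee comes for free from the OS; no type-system machinery required.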
 Maybe people are waiting for Walter to go through all the hard work of
 implementing this stuff before complaining that it's crap and
 proclaiming what Walter should have done in the first place?
No, I don't see any point in saying what I said above, because I don't think anything will change. If I didn't like some little detail, that could be worth discussing, because it would have a chance of changing Walter's/Bartosz's minds; but saying "I think the whole model is way too complex" doesn't help much IMHO =) -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
May 28 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Jason House, el 28 de mayo a las 08:45 me escribiste:
 I'm really surprised by the lack of design discussion in this thread.
 It's amazing how there can be huge bursts of discussion on which keyword
 to use (e.g. manifest constants), but then complete silence about major
 design decisions like thread safety that defines new transitive states
 and a bunch of new keywords. The description even made parallels to the
 (previously?) unpopular const architecture. 
I just find the new "thread-aware" design of D2 so complex, so twisted that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all. I think D duplicate functionality. For "safe" concurrency I use processes and IPC (I have even more guarantees that D could ever give me). That's all I need. I don't need a huge complexity in the language for that. And I think D2 concurrency model is still way too low level. I would like D2 better if it was focussed on macros for example.
 Maybe people are waiting for Walter to go through all the hard work of
 implementing this stuff before complaining that it's crap and
 proclaiming what Walter should have done in the first place?
No, I don't see any point in saying what I said above, because I don't think anything will change. If I didn't like some little detail, that could be worth discussing, because it would have a chance of changing Walter's/Bartosz's minds; but saying "I think the whole model is way too complex" doesn't help much IMHO =)
On the contrary, we all (Bartosz, Walter, myself, and probably other participants) think this would be valuable feedback. We'll always have some insecurity that we cut the pie the wrong way, and therefore we're continuously on the lookout for well-argued positives or negatives. Those could lead to very useful quantitative discussions a la "X, Y, and Z together are way too complex, but X' and Z seem palatable and get 90% of the territory covered".

I like Bartosz's design; it's sound (as far as I can tell) and puts the defaults in the right place, so there's a nice pay-as-you-need quality to it. There are two details that are wanting.

One is that I'm not sure we want high-level race freedom so badly that we're prepared to pay that kind of price for it. Message passing is more likely to work well (better than lock-based concurrency) on contemporary and future processors. Then there's a design for solving low-level races that is much simpler and solves the nastiest part of the problem, so I wonder whether that would be more suitable. We also have immutable sharing that should help. Given this landscape, do we want to focus on high-level race elimination that badly? I'm not sure.

Second, there is no regard for language integration. Bartosz says syntax doesn't matter and that he's flexible, but what that really means is that no attention has been paid to language integration. There is more to language integration than just syntax (and even then, syntax is an important part of it).

Andrei
May 28 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Leandro,

 Jason House, el 28 de mayo a las 08:45 me escribiste:
 
 I'm really surprised by the lack of design discussion in this thread.
 It's amazing how there can be huge bursts of discussion on which
 keyword to use (e.g. manifest constants), but then complete silence
 about major design decisions like thread safety that defines new
 transitive states and a bunch of new keywords. The description even
 made parallels to the (previously?) unpopular const architecture.
 
I just find the new "thread-aware" design of D2 so complex, so twisted that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all.
I get the impression, from what little I know about threading, that you are likely underestimating the complexity of the threading problem. I get the feeling that *most* non-experts do (either that, or they just assume it's more complex than they want to deal with).
 I think D duplicate functionality. For "safe" concurrency I use
 processes and IPC (I have even more guarantees that D could ever give
 me). That's all I need. I don't need a huge complexity in the language
 for that. And I think D2 concurrency model is still way too low level.
You are crazy! Processes+IPC only work well if either the OS supports very fast IPC (IIRC none do, aside from shared memory, and then we are back where we started) or the processing between interactions is very long. Everything is indicating that shared-memory multi-threading is where it's all going.
May 28 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Everything is indicating that shared memory multi-threading is where 
 it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Andrei,

 BCS wrote:
 
 Everything is indicating that shared memory multi-threading is where
 it's all going.
 
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
I'm talking at the ASM level (not the language-model level), as opposed to each thread running in its own totally isolated address space. Am I wrong in assuming that most languages use user-mode (not kernel-mode) shared memory for inter-thread communication?
May 28 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Reply to Andrei,
 
 BCS wrote:

 Everything is indicating that shared memory multi-threading is where
 it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
I'm talking at the ASM level (not the language model level) and as opposed to each thread running in its own totally isolated address space. Am I wrong in assuming that most languages use user mode (not kernel mode) shared memory for inter thread communication?
What happens is that memory is less shared as cache hierarchies go deeper. It was a great model when there were a couple of processors hitting on the same memory because it was close to reality. Cache hierarchies reveal the hard reality that memory is shared to a decreasing extent and that each processor would rather deal with its own memory. Incidentally, message-passing-style protocols are prevalent in such architectures even at low level. It follows that message passing is not only an attractive model for programming at large, but also a model that's closer to machine than memory sharing. Andrei
May 28 2009
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Andrei,

 It follows that message passing is not only an attractive model
I'm thinking implementation, not model. How is the message passing implemented? OS system calls (probably on top of kernel-level shared memory)? User-space shared memory? Special hardware?
 for programming at large, but also a model that's closer to
 machine than memory sharing.
I think I see what you're getting at... even for shared memory on a deep cache, the cache invalidation system /is/ your message path.
 
 Andrei
 
May 28 2009
parent Sean Kelly <sean invisibleduck.org> writes:
BCS wrote:
 Reply to Andrei,
 
 It follows that message passing is not only an attractive model
I'm thinking implementation not model. How is the message passing implemented? OS system calls (probably on top of kernel level shared memory)? user space shared memory? Special hardware? If you can't get
I think it depends on whether the message is intraprocess or interprocess. In the first case, I expect message passing would probably be done via user space shared memory if possible (things get a bit weird with per-thread heaps). In the latter case, a kernel api would probably be used if possible--perhaps TIPC or something related to MPI. It's the back door bit that's at issue right now. Should the language provide full explicit support for the intraprocess message passing? ie. move semantics, memory protection, etc?
 for programming at large, but also a model that's closer to
 machine than memory sharing.
I think I see what you're getting at... even for shared memory on a deep cache, the cache invalidation system /is/ your message path.
Yeah kinda. Look at NUMA machines, for example (SPARC, etc). I expect that NUMA architectures will become increasingly common in the coming years, and it makes total sense to try and build a language that expects such a model.
May 29 2009
prev sibling next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Andrei Alexandrescu wrote:
 BCS wrote:
 ...

 Am I wrong in assuming that most languages use user mode (not kernel
 mode) shared memory for inter thread communication?
What happens is that memory is less shared as cache hierarchies go deeper. It was a great model when there were a couple of processors hitting on the same memory because it was close to reality. Cache hierarchies reveal the hard reality that memory is shared to a decreasing extent and that each processor would rather deal with its own memory. Incidentally, message-passing-style protocols are prevalent in such architectures even at low level. It follows that message passing is not only an attractive model for programming at large, but also a model that's closer to machine than memory sharing.
This is all very interesting. I've recently been playing with a little toy language I'm designing. It's a postfix language, so I'm fairly certain no one will ever want to even look at it. :P But when I was designing it, I was adamant that it should do safe parallelism. I worked out that I could get everything other than deadlock safety by giving everything value semantics (using copy-on-write for anything larger than an atomic value.) Add in references that remember their "owner" thread and can only be dereferenced by that single thread, and then note that the global dict and stack are just values themselves and hence copied not referenced when you create a new thread. The only method of communication between threads is using message channels. This could be quite slow if you try to pass a very large data structure (since everything always gets copied), so you can create it on the heap via a reference, then "disown" the reference and assign it to another thread. That way you can get the efficiency of pass-by-reference without inter-thread aliasing. I also have a plan for making the language deadlock-free by either re-expressing all locks as blocking messages such that the interpreter knows who is blocking who, or by going all-out and just using transactions. But this is all just me stuffing about with a completely impractical language. What's being done for D is much more interesting. :)
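The "disown" hand-over described above, as a sketch (hypothetical API; 'Channel' and 'disown' are my toy language's ideas transliterated into D-ish code, not real D):

```d
// Hypothetical API transliterated from my toy language, not real D.
auto big  = new int[1_000_000];     // built on the heap by this thread
auto chan = new Channel!(int[]);    // the only inter-thread link
chan.send(disown(big));             // the owner tag moves to the receiver;
// 'big' may no longer be dereferenced from this thread, giving
// pass-by-reference speed without inter-thread aliasing.
```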
May 28 2009
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-05-28 12:52:06 -0400, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 What happens is that memory is less shared as cache hierarchies go 
 deeper. It was a great model when there were a couple of processors 
 hitting on the same memory because it was close to reality. Cache 
 hierarchies reveal the hard reality that memory is shared to a 
 decreasing extent and that each processor would rather deal with its 
 own memory. Incidentally, message-passing-style protocols are prevalent 
 in such architectures even at low level. It follows that message 
 passing is not only an attractive model for programming at large, but 
 also a model that's closer to machine than memory sharing.
While message passing might be useful for some applications, I have a hard time seeing how it could work for others. Try splitting the processing of a 4 GB array over 4 processors, or implementing multi-threaded access to an in-memory database. Message passing by copying all the data might happen at the very low level, but shared memory is the more natural abstraction for these cases.

There's a reason why various operating systems support shared memory between different processes: sometimes it's easier to deal with shared memory than with messaging, even with all the data races you have to deal with.

Shared memory becoming more and more implemented as message passing at the very low level might indicate that some uses of shared memory will migrate to message passing at the application level and get some performance gains, but I don't think message passing will ever completely replace shared memory for dealing with large data sets. It's more likely that shared memory will become a scarce resource for some systems while it continues to grow for others.

That said, I'm no expert in this domain. But I believe D should have good support for both shared memory and message passing.

I also take note that having a good shared-memory model could prove very useful when writing on-disk databases or file systems, not just RAM-based data structures. You could have objects representing disk sectors or file segments, and the language's type system would help you handle the locking part of the equation.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 30 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Michel Fortin wrote:
 On 2009-05-28 12:52:06 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> said:
 
 What happens is that memory is less shared as cache hierarchies go 
 deeper. It was a great model when there were a couple of processors 
 hitting on the same memory because it was close to reality. Cache 
 hierarchies reveal the hard reality that memory is shared to a 
 decreasing extent and that each processor would rather deal with its 
 own memory. Incidentally, message-passing-style protocols are 
 prevalent in such architectures even at low level. It follows that 
 message passing is not only an attractive model for programming at 
 large, but also a model that's closer to machine than memory sharing.
While message-passing might be useful for some applications, I have a hard time seeing how it could work for others. Try split processing of a 4 Gb array over 4 processors, or implement multi-threaded access to an in-memory database. Message passing by copying all the data might happen at the very low-level, but shared memory is more the right abstraction for these cases.
Depends on what you want to do to those arrays. If concurrent writing is limited (e.g. quicksort) then there's no need to copy. Then many times you want to move (hand over) data from one thread to another. Here something like unique would help because you can safely pass pointers without worrying about subsequent contention.
 There's a reason why various operating systems support shared memory 
 between different processes: sometime it's easier to deal with shared 
 memory than messaging, even with all the data races you have to deal with.
Of course sometimes shared memory is a more natural fit. My argument is that that's the rare case.
 Shared memory becoming more and more implemented as message passing at 
 the very low level might indicate that some uses of shared memory will 
 migrate to message passing at the application level and get some 
 performance gains, but I don't think message passing will ever 
 completely replace shared memory for dealing with large data sets. It's 
 more likely that shared memory will become a scarse resource for some 
 systems while it'll continue to grow for others.
 
 That said, I'm no expert in this domain. But I believe D should have 
 good support both shared memory and message passing.
 
 I also take note that having a good shared memory model could prove very 
 useful when writting on-disk databases or file systems, not just 
 RAM-based data structures. You could have objects representing disk 
 sectors or file segments, and the language's type system would help you 
 handle the locking part of the equation.
I think shared files with interlocking can support such cases with ease. Andrei
May 30 2009
parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-05-30 09:36:19 -0400, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 While message-passing might be useful for some applications, I have a 
 hard time seeing how it could work for others. Try split processing of 
 a 4 Gb array over 4 processors, or implement multi-threaded access to 
 an in-memory database. Message passing by copying all the data might 
 happen at the very low-level, but shared memory is more the right 
 abstraction for these cases.
Depends on what you want to do to those arrays. If concurrent writing is limited (e.g. quicksort) then there's no need to copy. Then many times you want to move (hand over) data from one thread to another. Here something like unique would help because you can safely pass pointers without worrying about subsequent contention.
If you include passing unique pointers to shared memory in your definition of "message passing", then yes, you can work with "message passing", and yes, 'unique' would help a lot to ensure safety. But then you still need to have shared memory between threads: it's just that by making the pointer 'unique' we're ensuring that no more than one thread at a time is accessing that particular piece of data in the shared memory.

I'm still convinced that we should offer a good way to access shared mutable data. It'll have to have limits to what the language can enforce, and I'm not sure what those should be. For instance, while I see a need for 'lockfree', its footprint in language complexity seems a little big for a half-unsafe manual performance-enhancement capability, so I'm undecided on that one. Implicit synchronization of shared objects' member functions seems a good idea, however.

I'll wait a little more to see what Bartosz has to say about expressing object ownership before making other comments.
 There's a reason why various operating systems support shared memory 
 between different processes: sometime it's easier to deal with shared 
 memory than messaging, even with all the data races you have to deal 
 with.
Of course sometimes shared memory is a more natural fit. My argument is that that's the rare case.
Shared memory is rare between processes, but not between threads. At least, in my experience. Shared memory is something you want to use whenever you can't afford copying data. Message passing is often implemented using shared memory, especially between threads. That said, sometimes you don't have the choice: you need to copy the data (to the GPU, or to somewhere else on the network).

Also, shared memory is something you want for storage systems meant to be accessible concurrently by many threads, like a database, a cache, a filesystem, etc. Those are shared storage systems, and message passing doesn't really help them much, since we're talking about storage here, not communication. Do you really want to say that multithreaded access to stored data is rare?

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 30 2009
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
Michel Fortin wrote:
 On 2009-05-28 12:52:06 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> said:
 
 What happens is that memory is less shared as cache hierarchies go 
 deeper. It was a great model when there were a couple of processors 
 hitting on the same memory because it was close to reality. Cache 
 hierarchies reveal the hard reality that memory is shared to a 
 decreasing extent and that each processor would rather deal with its 
 own memory. Incidentally, message-passing-style protocols are 
 prevalent in such architectures even at low level. It follows that 
 message passing is not only an attractive model for programming at 
 large, but also a model that's closer to machine than memory sharing.
While message-passing might be useful for some applications, I have a hard time seeing how it could work for others. Try split processing of a 4 Gb array over 4 processors, or implement multi-threaded access to an in-memory database. Message passing by copying all the data might happen at the very low-level, but shared memory is more the right abstraction for these cases.
Perhaps at an implementation level in some instances, yes. But look at Folding home, etc.: the approach is based on message passing. If you were Folding OnOnePCOnly, then you might pass references to array regions around instead of copying the data. I suppose what I'm getting at is that an interface doesn't typically necessitate a particular implementation.
 There's a reason why various operating systems support shared memory 
 between different processes: sometime it's easier to deal with shared 
 memory than messaging, even with all the data races you have to deal with.
 
 Shared memory becoming more and more implemented as message passing at 
 the very low level might indicate that some uses of shared memory will 
 migrate to message passing at the application level and get some 
 performance gains, but I don't think message passing will ever 
 completely replace shared memory for dealing with large data sets. It's 
 more likely that shared memory will become a scarse resource for some 
 systems while it'll continue to grow for others.
Well, sure. At some level, sharing is going to be happening even in message-passing-oriented applications. The issue is more about which approach to solving problems a language "encourages" than about what the language allows. D will always allow all sorts of wickedness because it's a systems language. But that doesn't mean this stuff has to be the central feature of the language.
May 30 2009
prev sibling next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:

 BCS wrote:
 Everything is indicating that shared memory multi-threading is where  
 it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
That's true. For example, we develop for the PS3, and its 7 SPU cores have 256 KiB of local store each (which is as fast as L2 cache) and no direct shared-memory access. Shared memory must be requested via asynchronous memcpy requests, and this scheme doesn't work well with OOP: even after you transfer some object, its vtbl etc. still point to shared memory. We had a hard time re-arranging our data so that an object and everything it owns (and points to) is stored sequentially in a single large block of memory. This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.
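The offset trick, boiled down to a simplified sketch (the real code also has to deal with vtbls and alignment):

```d
// A node that survives being memcpy'd as part of one big block,
// because it stores a self-relative offset instead of a pointer.
struct Node
{
    int value;
    ptrdiff_t nextOffset;   // 0 means "no next node"

    Node* next()
    {
        if (nextOffset == 0)
            return null;
        // Offsets stay valid wherever the block lands in memory.
        return cast(Node*)(cast(ubyte*)&this + nextOffset);
    }
}
```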
May 28 2009
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Thu, 28 May 2009 12:45:41 -0400, Denis Koroskin <2korden gmail.com>  
wrote:

 On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 BCS wrote:
 Everything is indicating that shared memory multi-threading is where
 it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
That's true. For example, we develop for PS3, and its 7 SPU cores have 256KiB of TLS each (which is as fast as L2 cache) and no direct shared memory access. Shared memory needs to be requested via asynchronous memcpy requests, and this scheme doesn't work with OOP well: even after you transfer some object, its vtbl etc still point to shared memory. We had hard time re-arranging our data so that object and everything it owns (and points to) is stored sequencially in a single large block of memory. This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.
I agree that Andrei's right, but your example is wrong. The Cell's SPUs are SIMD vector processors, not general CPUs. I also work with vector processors (NVIDIA's CUDA), but every software/hardware iteration gets further and further away from pure vector processing. Rumor has it that NVIDIA's next chip will be MIMD instead of SIMD.
May 28 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Thu, 28 May 2009 21:07:57 +0400, Robert Jacques <sandford jhu.edu> wrote:

 On Thu, 28 May 2009 12:45:41 -0400, Denis Koroskin <2korden gmail.com>  
 wrote:

 On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 BCS wrote:
 Everything is indicating that shared memory multi-threading is where
 it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
That's true. For example, we develop for the PS3, and its 7 SPU cores have 256KiB of local store each (which is as fast as L2 cache) and no direct shared-memory access. Shared memory needs to be requested via asynchronous memcpy requests, and this scheme doesn't work well with OOP: even after you transfer an object, its vtbl etc. still points to shared memory. We had a hard time rearranging our data so that an object and everything it owns (and points to) is stored sequentially in a single large block of memory. This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.
I agree that Andrei's right, but your example is wrong. The Cell's SPUs are SIMD vector processors, not general CPUs. I also work with vector processors (NVIDIA's CUDA), but every software/hardware iteration gets further and further away from pure vector processing. Rumor has it that NVIDIA's next chip will be MIMD instead of SIMD.
I wanted to stress that multicore PUs tend to have their own local memory (small but fast) and little or no global (shared) memory access (it is inefficient and error-prone - race conditions et al.). I believe the SIMD/MIMD discussion is irrelevant here. It's all about the shared vs. distributed memory model. MIMD devices can be either (http://en.wikipedia.org/wiki/MIMD).
May 28 2009
parent "Robert Jacques" <sandford jhu.edu> writes:
On Thu, 28 May 2009 13:36:28 -0400, Denis Koroskin <2korden gmail.com>  
wrote:

 On Thu, 28 May 2009 21:07:57 +0400, Robert Jacques <sandford jhu.edu>  
 wrote:

 On Thu, 28 May 2009 12:45:41 -0400, Denis Koroskin <2korden gmail.com>
 wrote:

 On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 BCS wrote:
 Everything is indicating that shared memory multi-threading is where
 it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
That's true. For example, we develop for the PS3, and its 7 SPU cores have 256KiB of local store each (which is as fast as L2 cache) and no direct shared-memory access. Shared memory needs to be requested via asynchronous memcpy requests, and this scheme doesn't work well with OOP: even after you transfer an object, its vtbl etc. still points to shared memory. We had a hard time rearranging our data so that an object and everything it owns (and points to) is stored sequentially in a single large block of memory. This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.
I agree that Andrei's right, but your example is wrong. The Cell's SPUs are SIMD vector processors, not general CPUs. I also work with vector processors (NVIDIA's CUDA), but every software/hardware iteration gets further and further away from pure vector processing. Rumor has it that NVIDIA's next chip will be MIMD instead of SIMD.
I wanted to stress that multicore PUs tend to have their own local memory (small but fast) and little or no global (shared) memory access (it is inefficient and error-prone - race conditions et al.). I believe the SIMD/MIMD discussion is irrelevant here. It's all about the shared vs. distributed memory model. MIMD devices can be either (http://en.wikipedia.org/wiki/MIMD).
Well, I thought you were making a different point. Really, the Cell SPU is the only current PU with the design you're talking about. All commercial CPUs and GPUs have very large global memory buses. Every blog and talk I've read/attended has painted the SPU in a very negative light, at least with regard to the programming model. (Which makes sense, since it's sorta like non-cache-coherent NUMA, which pretty much everyone decided is a bad idea.)
May 28 2009
prev sibling parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
Andrei Alexandrescu Wrote:

 BCS wrote:
 Everything is indicating that shared memory multi-threading is where 
 it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
I understand where you stand. You are looking at where state-of-the-art hardware is going, and that makes perfect sense. There is, however, a co-evolution of hardware and software that is not a simple case of software following hardware (remember the era of RISC processors?). I'm looking at programming languages and I don't see that away-from-shared-memory trend--neither in mainstream languages, nor in newer languages like Scala, nor in the operating systems. There are many interesting high-level paradigms like message passing, futures, actors, etc.; and I'm sure there will be more in the future. D has a choice of betting the store on one of these paradigms (like ML did on message passing or Erlang on actors), of trying to build solid foundations for a multi-paradigm language, or of doing nothing. I am trying to build the foundations. The examples that I'm exploring in my posts are data structures that support higher-level concurrency: channels, message queues, lock-free objects, etc. I want to build a system where high-level concurrency paradigms are reasonably easy to implement.

Let's look at the alternatives. Nobody thinks seriously of making message passing a language feature in D. The Erlangization of D wouldn't work because D does not provide guarantees of address-space separation (and providing such guarantees would cripple the language beyond repair). Another option we discussed was to provide specialized, well-tested message queues in the library and count on programmers' discipline not to stray from the message-passing paradigm. Without type-system support, though, such queues would have to be either unsafe (no guarantee that the client doesn't have unprotected aliases to messages) or restricted to very simple message types (pass-by-value and maybe some very clever unique pointer/array template). The latter approach introduces huge complexity into the library, essentially making user-defined extensions impossible (unless the user's name is Andrei ;-) ).
Let's not forget that right now D is _designed_ to support shared-memory programming. Every D object has a lock, it supports synchronized methods, the keyword "shared" is being introduced, etc. It doesn't look like D is moving away from shared memory. It looks more like it's adding some window dressing to the pre-existing mess and biding its time. I haven't seen a comprehensive plan for D to tackle concurrency and I'm afraid that if D doesn't take a solid stance right now, it will miss the train.
May 28 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bartosz Milewski wrote:
 Andrei Alexandrescu Wrote:
 
 BCS wrote:
 Everything is indicating that shared memory multi-threading is
 where it's all going.
That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
I understand where you stand. You are looking at where the state-of-the-art hardware is going and that makes perfect sense. There is, however, a co-evolution of hardware and software that is not a simple software-follow-hardware (remember the era of RISC processors?).
I understand that. However, I don't understand how the comment applies to the situation at hand.
 I'm looking at programming languages and I don't see that
 away-from-shared-memory trend--neither in mainstream languages, nor
 in newer languages like Scala, nor in the operating systems.
Scala doesn't know what to do about threads. The trend I'm seeing is that functional languages are getting increasing attention, and that's exactly because they never share mutable memory. As far as I can see, languages that are based on heavy shared mutation are pretty much dead in the water. We have the chance to not be so.
 There
 are many interesting high-level paradigms like message passing,
 futures, actors, etc.; and I'm sure there will be more in the future.
 D has a choice of betting the store on one of these paradigms (like
 ML did on message passing or Erlang on actors) or to try, build solid
 foundations for a multi-paradigm language or do nothing. I am trying
 to build the foundations.
Building foundations is great. What I'm seeing, however, is one very heavy strong pillar put in a place that might become the doghouse. I'm not at all sure the focus must be put on high-level race avoidance, particularly given that the cost in perceived complexity is this high.
 The examples that I'm exploring in my posts are data structures that
 support higher level concurrency: channels, message queues, lock-free
 objects, etc. I want to build a system where high-level concurrency
 paradigms are reasonably easy to implement.
 
 Let's look at the alternatives.
 
 Nobody thinks seriously of making message passing a language feature
 in D. The Erlangization of D wouldn't work because D does not provide
 guarantees of address space separation (and providing such guarantees
 would cripple the language beyond repair).
Why wouldn't we think of making message passing a language feature in D? Why does D need erlangization to support message passing?
 Another option we discussed was to provide specialized, well tested
 message queues in the library and count on programmers' discipline
 not to stray away from the message passing paradigm.
That's not what I discussed. I think there is an interesting point that you've been missing, so please allow me to restate it. What I discussed was a holistic approach in which language + standard library provides a trusted computing base. Consider Java's new (for arrays) and C's malloc. The new function cannot be defined in Java because it would require unsafe manipulation underneath, so it is defined by its runtime support library. That runtime support is implemented in the likes of C. However, because the runtime support is part of Java, Java does in fact have dynamic memory allocation - and nobody blinks an eye. C has famously had a mantra of self-sufficiency: its own support libraries have been written in C, which is pretty remarkable - and almost unprecedented at the time C was defined. For example, C's malloc is written in C, but (and here's an important detail) at some point it becomes _nonportable_ C. So even C has to cross a barrier of some sort at some point.

How is this related to the discussion at hand? You want to put all concurrency support in the language. That is, you want to put enough power into the language to be able to typecheck a variety of concurrent programming primitives and patterns. This approach is blindsided to the opportunity of defining some of these primitives in the standard library, in unsafe/unportable D, yet offering safe primitives to the user. In the process, the user is not hurt because she still has access to the primitives. What she can't do is define their own primitives in safe D. But I think that's as useless a pursuit as adding keywords to C to allow one to implement malloc() in safe C.
 Without
 type-system support, though, such queues would have to be either
 unsafe (no guarantee that the client doesn't have unprotected aliases
 to messages), or restrict messages to very simple data structures
 (pass-by-value and maybe some very clever unique pointer/array
 template). The latter approach introduces huge complexity into the
 library, essentially making user-defined extensions impossible
 (unless the user's name is Andrei ;-) ).
Complexity will be somewhere. The problem is, you want to put much of it in the language, and that will hit the language user too. Semantic checking will always be harder on everyone than a human being who sits down and implements a provably safe library in ways that the compiler can't prove.
 Let's not forget that right now D is _designed_ to support
 shared-memory programming. Every D object has a lock, it supports
 synchronized methods, the keyword "shared" is being introduced, etc.
 It doesn't look like D is moving away from shared memory. It looks
 more like it's adding some window dressing to the pre-existing mess
 and bidding its time. I haven't seen a comprehensive plan for D to
 tackle concurrency and I'm afraid that if D doesn't take a solid
 stance right now, it will miss the train.
I think we can safely ditch this argument. Walter had no idea what to do about threads when he defined D, so he put whatever he saw and understood in Java. He'd be the first to say that - actually, he wouldn't tell me so, he _told_ me so. To me, adding concurrency capabilities to D is nothing like adding window dressing on top of whatever crap is there. Java and C++ are in trouble, and doing what they do doesn't strike me as a good bet. You're right about missing the train, but I think you and I are talking about different trains. I don't want to embark on the steam-powered train. Andrei
May 28 2009
next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, on 28 May at 19:52 you wrote:
 To me, adding concurrency capabilities to D is nothing like adding window 
 dressing on top of whatever crap is there. Java and C++ are in trouble, and 
 doing what they do doesn't strike me as a good bet. You're right about missing 
 the train, but I think you and I are talking about different trains. I don't 
 want to embark on the steam-powered train.
I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency what C++ was to templates. Great new idea, terribly hard for mortals to use and understand. For some time people used to think templates were complex because they had to be, but I think D could prove that wrong ;) -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
May 29 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Leandro Lucarella:
 I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency 
 what C++ was to templates.
Sometimes you need a lot of time to find what a simple implementation can be. Often someone has to pay the price of being the first to implement something :-] This is bad if you mix it with the desire to keep backwards compatibility. Bye, bearophile
May 29 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
bearophile, on 29 May at 13:39 you wrote:
 Leandro Lucarella:
 I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency
 what C++ was to templates.
Sometimes you need a lot of time to find what a simple implementation can be. Often someone has to pay the price of being the first to implement something :-] This is bad if you mix it with the desire to keep backwards compatibility.
Exactly. I think D had a good model of "steal good proven stuff that other languages got right". With this, I think it's taking a new path of being a pioneer, and chances are it will get it wrong (I don't mean to be offensive with this, I'm just speaking statistically) and suffer the mistake for a long, long time because of backward compatibility. -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
May 29 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 bearophile, on 29 May at 13:39 you wrote:
 Leandro Lucarella:
 I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency
 what C++ was to templates.
Sometimes you need a lot of time to find what a simple implementation can be. Often someone has to pay the price of being the first to implement something :-] This is bad if you mix it with the desire to keep backwards compatibility.
Exactly. I think D had a good model of "steal good proven stuff that other languages got right". With this, I think it's taking a new path of being a pioneer, and chances are it will get it wrong (I don't mean to be offensive with this, I'm just speaking statistically) and suffer the mistake for a long, long time because of backward compatibility.
With its staunch default isolation, I think D is already making a departure from the traditional imperative languages (which extend their imperative approach to concurrent programming). The difference is that it takes what I think is a sound model (interprocess isolation) and augments it with the likes of shared and Bartosz's work. So my perception is that it's less likely to get things regrettably wrong. But then you never know. Andrei
May 29 2009
prev sibling parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
Andrei Alexandrescu Wrote:

 Scala doesn't know what to do about threads. 
That's my impression too, although Scala's support for actors leaves D in the dust.
 The trend I'm seeing is 
 that functional languages are getting increasing attention, and that's 
 exactly because they never share mutable memory. As far as I can see, 
 languages that are based on heavy shared mutation are pretty much dead 
 in the water. We have the chance to not be so.
 
It's a very sweeping statement. I just looked at the TIOBE index and couldn't find _any_ functional languages in the top 20.
 I'm 
 not at all sure the focus must be put on high-level race avoidance, 
 particularly given that the cost in perceived complexity is this high.
The complexity argument is tenuous. It might look like my proposal is complex because I'm dropping the whole system in at once. But it's a solid, well thought-out system. What I see happening in D is the creeping complexity resulting from sloppy design. We've been talking for years about closing some gaping holes in the design of arrays, slices, immutable, qualifier polymorphism--the list goes on--and there's little progress. There is no solid semantics for scope and shared. A solid solution to those issues will look complex too.
 Why wouldn't we think of making message passing a language feature in D? 
Because we don't have even a tiniest proposal describing it, not to mention a design.
 Why does message passing need erlangization to support message passing?
Because the strength of the Erlang model is the isolation of processes. Take away isolation and it's no better than Scala or Java. Granted, having to explicitly mark objects for sharing in D is a big help. Here we agree.
 What I discussed was a holistic approach in which language + standard 
 library provides a trusted computing base. 
Have you thought about how to eliminate data races in your holistic approach? Will "shared" be forbidden in SafeD? Will library-based message-passing channels (and actors?) only accept simple value types and immutables? Andrei, you are the grand wizard of squeezing powerful abstractions out of a kludge of a language that is C++ with its ad-hoc support for generics. It's impressive and very useful, but it's also hermetic. By contrast, generic programming in D is relatively easy because of the right kind of support built into the language (compile-time interpreter). I trust that you could squeeze powerful multithreading abstraction out of D, even if the language/type system doesn't offer much of a support. But it will be hermetic. Prove me wrong by implementing a message queue using the current D2 (plus some things that are still in the pipeline).
 How is this related to the discussion at hand? You want to put all 
 concurrency support in the language. That is, you want to put enough 
 power into the language to be able to typecheck a variety of concurrent 
 programming primitives and patterns. This approach is blindsided to the 
 opportunity of defining some of these primitives in the standard 
 library, in unsafe/unportable D, yet offering safe primitives to the 
 user. In the process, the user is not hurt because she still has access 
 to the primitives. What she can't do is define their own primitives in 
 safe D. But I think that's as useless a pursuit as adding keywords to C 
 to allow one to implement malloc() in safe C.
 
That's a bad analogy. I'm proposing the tightening of the type system, not the implementation of weak atomics. A better analogy would be adding immutable/const to the language. Except that I don't think const-correctness is as important as the safety of shared-memory concurrency.
 Without
 type-system support, though, such queues would have to be either
 unsafe (no guarantee that the client doesn't have unprotected aliases
 to messages), or restrict messages to very simple data structures
 (pass-by-value and maybe some very clever unique pointer/array
 template). The latter approach introduces huge complexity into the
 library, essentially making user-defined extensions impossible
 (unless the user's name is Andrei ;-) ).
Complexity will be somewhere. The problem is, you want to put much of it in the language, and that will hit the language user too.
I'm being very careful not to hit the language user. You might have noticed that my primitive channel, the MVar, is less complex than a D2 implementation (it doesn't require "synchronized"). And the compiler will immediately tell you if you use it in an unsafe way.
 Semantic 
 checking will always be harder on everyone than a human being who sits 
 down and implements a provably safe library in ways that the compiler 
 can't prove.
 
Be careful with such arguments. Somebody might use them to discredit immutability.
 Let's not forget that right now D is _designed_ to support
 shared-memory programming. Every D object has a lock, it supports
 synchronized methods, the keyword "shared" is being introduced, etc.
 It doesn't look like D is moving away from shared memory. It looks
 more like it's adding some window dressing to the pre-existing mess
 and bidding its time. I haven't seen a comprehensive plan for D to
 tackle concurrency and I'm afraid that if D doesn't take a solid
 stance right now, it will miss the train.
I think we can safely ditch this argument. Walter had no idea what to do about threads when he defined D, so he put whatever he saw and understood in Java. He'd be the first to say that - actually, he wouldn't tell me so, he _told_ me so.
I realize that. Except that the "shared" concept was added very recently.
 To me, adding concurrency capabilities to D is nothing like adding 
 window dressing on top of whatever crap is there. Java and C++ are in 
 trouble, and doing what they do doesn't strike me as a good bet. 
So far D has been doing exactly what Java and C++ are doing. My proposal goes way beyond that. But if you mean D should not support shared-memory concurrency or give it only lip service, then you really have to come up with something revolutionary to take its place. This would obviously not make it into D2 or into your book. So essentially D2 would be doomed in the concurrency department.
 You're 
 right about missing the train, but I think you and I are talking about 
 different trains. I don't want to embark on the steam-powered train.
Should we embark on a vapor-powered train then ;-)
May 29 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bartosz Milewski wrote:
 Andrei Alexandrescu Wrote:
 Scala doesn't know what to do about threads.
That's my impression too, although Scala's support for actors leaves D in the dust.
Scala actors are a library.
 The trend I'm seeing is that functional languages are getting
 increasing attention, and that's exactly because they never share
 mutable memory. As far as I can see, languages that are based on
 heavy shared mutation are pretty much dead in the water. We have
 the chance to not be so.
 
It's a very sweeping statement. I just looked at the TIOBE index and couldn't find _any_ functional languages in the top 20.
We can safely ditch Tiobe, but I agree that functional languages aren't mainstream. There are two trends though. One is that for most of its existence Haskell has had about 12 users. It has definitely turned an exponential elbow during the recent years. Similar trends are to be seen for ML, Ocaml, and friends. The other trend is that all of today's languages are scrambling to add support for pure functional programming.
 I'm not at all sure the focus must be put on high-level race
 avoidance, particularly given that the cost in perceived complexity
 is this high.
The complexity argument is tenuous. It might look like my proposal is complex because I'm dropping the whole system in at once. But it's a solid, well thought-out system. What I see happening in D is the creeping complexity resulting from sloppy design. We've been talking for years about closing some gaping holes in the design of arrays, slices, immutable, qualifier polymorphism--the list goes on--and there's little progress. There is no solid semantics for scope and shared. A solid solution to those issues will look complex too.
What do those holes have to do with the problem at hand? I'm seeing implementation bugs, not design holes. I'd love them to be fixed as much as the next guy, but I don't think we're looking at issues that would be complex (except for scope which sucks; I never claimed there was a solution to that). All of other features you mention have no holes I know of in their design.
 Why wouldn't we think of making message passing a language feature
 in D?
Because we don't have even a tiniest proposal describing it, not to mention a design.
That doesn't mean we shouldn't think of it.
 Why does message passing need erlangization to support message
 passing?
Because the strength of the Erlang model is the isolation of processes. Take away isolation and it's no better than Scala or Java. Granted, having to explicitly mark objects for sharing in D is a big help. Here we agree.
So we should be thinking about it, right?
 What I discussed was a holistic approach in which language +
 standard library provides a trusted computing base.
Have you thought about how to eliminate data races in your holistic approach? Will "shared" be forbidden in SafeD? Will library-based message-passing channels (and actors?) only accept simple value types and immutables?
Shared will be allowed in SafeD but it will be the responsibility of user code to ensure high-level race elimination. Shared will eliminate low-level races. I think it's worth contemplating a scenario in which message passing is restricted to certain types.
 Andrei, you are the grand wizard of squeezing powerful abstractions
 out of a kludge of a language that is C++ with its ad-hoc support for
 generics. It's impressive and very useful, but it's also hermetic. By
 contrast, generic programming in D is relatively easy because of the
 right kind of support built into the language (compile-time
 interpreter).
Please no ad hominem, flattering or not.
 I trust that you could squeeze powerful multithreading abstraction
 out of D, even if the language/type system doesn't offer much of a
 support. But it will be hermetic. Prove me wrong by implementing a
 message queue using the current D2 (plus some things that are still
 in the pipeline).
I don't have the time, but I think it's worth looking into what a message queue implementation should look like.
 How is this related to the discussion at hand? You want to put all
  concurrency support in the language. That is, you want to put
 enough power into the language to be able to typecheck a variety of
 concurrent programming primitives and patterns. This approach is
 blindsided to the opportunity of defining some of these primitives
 in the standard library, in unsafe/unportable D, yet offering safe
 primitives to the user. In the process, the user is not hurt
 because she still has access to the primitives. What she can't do
 is define their own primitives in safe D. But I think that's as
 useless a pursuit as adding keywords to C to allow one to implement
 malloc() in safe C.
 
That's a bad analogy. I'm proposing the tightening of the type system, not the implementation of weak atomics. A better analogy would be adding immutable/const to the language. Except that I don't think const-correctness is as important as the safety of shared-memory concurrency.
It's a good analogy because you want to make possible the implementation of threading primitives in portable D. I am debating whether that is a worthy goal. I doubt it is.
 Without type-system support, though, such queues would have to be
 either unsafe (no guarantee that the client doesn't have
 unprotected aliases to messages), or restrict messages to very
 simple data structures (pass-by-value and maybe some very clever
 unique pointer/array template). The latter approach introduces
 huge complexity into the library, essentially making user-defined
 extensions impossible (unless the user's name is Andrei ;-) ).
Complexity will be somewhere. The problem is, you want to put much of it in the language, and that will hit the language user too.
I'm being very careful not to hit the language user. You might have noticed that my primitive channel, the MVar, is less complex than a D2 implementation (it doesn't require "synchronized"). And the compiler will immediately tell you if you use it in an unsafe way.
 Semantic checking will always be harder on everyone than a human
 being who sits down and implements a provably safe library in ways
 that the compiler can't prove.
 
Be careful with such arguments. Somebody might use them to discredit immutability.
Let them discredit it and we'll see how strong their argument is.
 Let's not forget that right now D is _designed_ to support 
 shared-memory programming. Every D object has a lock, it supports
  synchronized methods, the keyword "shared" is being introduced,
 etc. It doesn't look like D is moving away from shared memory. It
 looks more like it's adding some window dressing to the
 pre-existing mess and bidding its time. I haven't seen a
 comprehensive plan for D to tackle concurrency and I'm afraid
 that if D doesn't take a solid stance right now, it will miss the
 train.
I think we can safely ditch this argument. Walter had no idea what to do about threads when he defined D, so he put whatever he saw and understood in Java. He'd be the first to say that - actually, he wouldn't tell me so, he _told_ me so.
I realize that. Except that the "shared" concept was added very recently.
 To me, adding concurrency capabilities to D is nothing like adding
  window dressing on top of whatever crap is there. Java and C++ are
 in trouble, and doing what they do doesn't strike me as a good bet.
 
So far D has been doing exactly what Java and C++ are doing. My proposal goes way beyond that.
My problem is that I think it goes way beyond that straight in the wrong directions. It goes on and on and on about how to make deadlock-oriented programming less susceptible to races. I don't care about deadlock-oriented programming. I want to stay away from deadlock-oriented programming. I don't understand why I need a hecatomb of concepts and notions that help me continue using a programming style that is unrecommended.
 But if you mean D should not support
 shared-memory concurrency or give it only lip service, than you
 really have to come up with something revolutionary to take its
 place. This would obviously not make it into D2 or into your book. So
 essentially D2 would be doomed in the concurrency department.
Message passing and functional style have been around for a while and form an increasingly compelling alternative to mutable sharing. We can support deadlock-oriented programming in addition to these for those who want it, but I don't think it's an area where we need to pay an arm and a leg for eliminating high-level races. I just think it's the wrong problem to work on. Andrei
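For the record, a minimal sketch of this style using the spawn/send/receive message-passing API that shipped in D's std.concurrency:

```d
import std.concurrency;

void worker()
{
    // Receive one int from the spawning thread, square it, send it back.
    auto n = receiveOnly!int();
    ownerTid.send(n * n);
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(7);
    assert(receiveOnly!int() == 49);
}
```

Because spawn only accepts arguments that are safe to transfer (values, immutable data, shared data), threads get process-style isolation without every object carrying a lock.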
May 29 2009
next sibling parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
Can you believe it? I was convinced that my response was lost because the
stupid news reader on Digital Mars web site returned an error (twice, hence two
posts). I diligently rewrote the riposte from scratch and tried to post it. It
flunked again! Now I'm not sure if it won't appear in the newsgroup after an
hour. (By the way, I refined my arguments.)
May 29 2009
parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
This is the missing second reply to Andrei. I'm posting parts of it because it
may help understand my position better.

I wouldn't dismiss Scala out of hand. The main threading model in Scala is
(library-supported) actor model. Isn't that what you're proposing for D? Except
that Scala has much better support for functional programming.

 The languages [C++ and Java] may be dead in the water (although still the
overwhelming majority of programmers use them), but I don't see the idea of
shared-memory concurrency dying any time soon. I bet it will be the major
programming model for the next ten years. What will come after that, nobody
knows.

The complexity argument:  My proposal looks complex because I am dropping the
whole comprehensive solution on the D community all at once. I would be much
warier of the kind of creeping complexity resulting from incremental ad-hoc
solutions. For instance, the whole complexity of immutability hasn't been
exposed yet. If it were, there would be a much larger insurgency among D users.
You know what I'm talking about--invariant constructors. My proposal goes into
nooks and crannies and, of course, it makes it look more complex than it really
is. Not to mention that there could be a lot of ideas that would lead to
simplifications. I sometimes present several options for discussion.

Take my proposal for unique objects. I could have punted the need for "lent".
Maybe nobody would ask for it? Compare "unique" with "scope"--nobody knows the
target semantics of "scope". It's a half-baked idea, but nobody's protesting. 

Try to define the semantics of array slices and you'll see eyes glazing. We
know we have to fix them, but we don't know how (array[new]?). Another
half-baked idea. Are slices simple or complex? 
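To make the slice problem concrete, here is a small sketch of the aliasing that makes the semantics hard to pin down (the behavior of the append is exactly the contentious part):

```d
void main()
{
    int[] a = [1, 2, 3, 4];
    int[] b = a[0 .. 2];   // b shares memory with a

    b[0] = 99;             // mutation is visible through both views
    assert(a[0] == 99);

    b ~= 7;                // append: does it stomp a[2], or relocate b?
    // Under the early semantics this could overwrite a[2] in place;
    // the proposed fixes (array[new], unique slices) are about making
    // the answer predictable.
}
```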

Define the semantics of "shared". Or should we implement it first and hope that
the complexity won't creep in when we discover its shortcomings. 

 Why wouldn't we think of making message passing a language feature in D? 
Well, we could, but why? We don't have to add any new primitives to the language to implement message queues. We would have to *eliminate* some features to make message passing safe. For instance, we'd have to eliminate "shared". Is that an option?
 Why does message passing need erlangization to support message passing?
The power of Erlang lies in guaranteed process isolation. If we don't guarantee that, we are in the same league as Java or C++.
 What I discussed was a holistic approach in which language + standard 
 library provides a trusted computing base. 
I like that very much. But I see the library as enabling certain useful features, and the type system as disabling the dangerous ones. You can't disable features through a library.
May 30 2009
next sibling parent grauzone <none example.net> writes:
 For instance, the whole complexity of immutability hasn't been exposed yet.
What? I thought immutable was already quite complex.
 Compare "unique" with "scope"--nobody knows the target semantics of "scope".
It's a half-baked idea, but nobody's protesting. 
Everyone knows that D is full of half-baked ideas. We're not using D because it's a beautiful or elegant language - we use it because it makes life easier. Slices and arrays are half-baked, but they are much simpler and easier to use than the corresponding C/C++ solutions. We're also using D because it's so C/C++-like. D is to C what C++ should have been to C. Other than that, there are already languages that could have taken D's job: Delphi-Pascal, Ada, Modula... If D stops making life easier, that will be the death of D.
May 30 2009
prev sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-05-30 13:00:14 -0400, Bartosz Milewski 
<bartosz-nospam relisoft.com> said:

 The complexity argument:  My proposal looks complex because I am 
 dropping the whole comprehensive solution on the D community all at 
 once. I would be much warier of the kind of creeping complexity 
 resulting from incremental ad-hoc solutions. For instance, the whole 
 complexity of immutability hasn't been exposed yet. If it were, there 
 would be a much larger insurgency among D users. You know what I'm 
 talking about--invariant constructors. My proposal goes into nooks and 
 crannies and, of course, it makes it look more complex than it really 
 is. Not to mention that there could be a lot of ideas that would lead 
 to simplifications. I sometimes discuss various options for discussion.
 
 Take my proposal for unique objects. I could have punted the need for 
 "lent". Maybe nobody would ask for it? Compare "unique" with 
 "scope"--nobody knows the target semantics of "scope". It's a 
 half-baked idea, but nobody's protesting.
 
 Try to define the semantics of array slices and you'll see eyes 
 glazing. We know we have to fix them, but we don't know how 
 (array[new]?). Another half-baked idea. Are slices simple or complex?
Bartosz, you're arguing that your proposal isn't that complex compared to the strange semantics of other parts of the language, and I agree... I should even say that what you propose offers a solution for fixing these other parts of the language. It's funny how what's needed to make multithreading safe is pretty much the same as what is needed to make safe immutable constructors and safe array slices. Let's take a look:

A constructor for a unique object is all you need to build an immutable one: move the unique pointer to an immutable pointer and you're sure no one has a mutable pointer to it. Of course, to implement unique constructors, you need 'lent' (or 'scope', whatever our preferred keyword) so you can call functions that will alter the unique object and its members without escaping a reference.

As for slices, as long as your slice is 'unique', you can enlarge it without side effects (relocating the slice in memory won't affect any other slice because you're guaranteed there aren't any), making a 'unique T[]' as good as an equivalent container... or should I say even safer, since enlarging a non-unique container might be as bad as enlarging a slice (the container may reallocate and disconnect from all its slices). You could also later transform 'unique T[]' to 'immutable T[]', or to a mutable 'T[]', but then you shouldn't be able to grow it without making a duplicate first, to avoid undesirable side effects.

So instead of fighting over what's too complex and what isn't by looking at each hole of the language in isolation, I think it's time to look at the various problems as a whole. I believe all those half-baked ideas point to the same underlying deficiency: the lack of a safe unique type (which then requires move semantics and 'lent'/'scope' constraints). C++ will get half of that soon (unique_ptr), but it will still be missing 'lent', so it won't be so safe. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
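As a side note, D2 eventually captured a slice of the unique-to-immutable idea without a 'unique' keyword: the result of a strongly pure function is known to have no other references, so the compiler lets it convert implicitly to immutable. A small sketch:

```d
// A strongly pure function: no access to mutable global state,
// and no mutable indirection among its parameters.
int[] makeTable(int n) pure
{
    auto a = new int[n];
    foreach (i, ref x; a)
        x = cast(int) i * 2;
    return a;
}

void main()
{
    // The freshly built array can have no outside aliases,
    // so the implicit conversion to immutable is allowed.
    immutable int[] table = makeTable(3);
    assert(table == [0, 2, 4]);
}
```

This is exactly the "unique constructor" pattern: prove uniqueness once, then freeze.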
May 30 2009
prev sibling next sibling parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
I don't think the item-by-item pingpong works well in the newsgroup. Let's
separate our discussion into separate threads. One philosophical, about the
future of concurrency. Another about the immediate future of concurrency in D2.
And a separate one about my proposed system in the parallel universe where we
all agree that for the next 10 years shared-memory concurrency will be the
dominating paradigm. 
May 29 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bartosz Milewski wrote:
 I don't think the item-by-item pingpong works well in the newsgroup.
 Let's separate our discussion into separate threads. One
 philosophical, about the future of concurrency. Another about the
 immediate future of concurrency in D2. And a separate one about my
 proposed system in the parallel universe where we all agree that for
 the next 10 years shared-memory concurrency will be the dominating
 paradigm.
I'm sure it's a good idea, particularly if others will participate as well. I warn that I'll be at a conference Sat-Thu and I don't have much time even apart from that. Andrei
May 29 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
I just think it's the wrong problem to work on.<
Beside multiprocessing (which I am still too ignorant to comment on), I can see other purposes for having a way to tell the type system that there exists only one reference/pointer to mutable data, and ways to safely change ownership of such a pointer. It can also be used by the compiler to optimize in various situations.

It can be good when the type system gives you a formal way to state a constraint that the programmer wants to put in the program anyway (often just stated in comments, if the language doesn't allow such a higher-level feature). For example, a type system can give a big help in avoiding null object references in a program, saving a lot of the programmer's time (eventually D will need this feature). In such situations a more refined type system reduces the time/complexity needed to write correct programs (and in 20-line-long programs you may just not use such features). Bye, bearophile
May 30 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Andrei Alexandrescu:
 I just think it's the wrong problem to work on.<
Beside multiprocessing (which I am still too ignorant to comment on), I can see other purposes for having a way to tell the type system that there exists only one reference/pointer to mutable data and ways to safely change ownership of such a pointer. It can also be used by the compiler to optimize in various situations.
Correct. We've been trying valiantly to introduce unique in the type system twice, first in 2007. Our conclusion back then was that unique brings more problems than it solves. At the last meeting the same pattern ensued: problems with unique cropped up faster than those solved. Andrei
May 30 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
BCS, el 28 de mayo a las 15:57 me escribiste:
 Hello Leandro,
 
Jason House, el 28 de mayo a las 08:45 me escribiste:
I'm really surprised by the lack of design discussion in this thread.
It's amazing how there can be huge bursts of discussion on which
keyword to use (e.g. manifest constants), but then complete silence
about major design decisions like thread safety that defines new
transitive states and a bunch of new keywords. The description even
made parallels to the (previously?) unpopular const architecture.
I just find the new "thread-aware" design of D2 so complex, so twisted that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all.
I get the impression, from what little I know about threading, that it is likely you are underestimating the complexity of the threading problem. I get the feeling that *most* non-experts do (either that, or they just assume it's more complex than they want to deal with).
I guess it all depends on the kind of granularity you want. I work on a distributed application, so threading is not very tempting for me; I get to use the multi-cores by splitting the work among processes, not threads, because I need "location transparency" (I don't care if the process I'm communicating with runs on the same computer or not, so things like "move semantics" are not interesting for me). Sometimes I need some threading support, for example to be able to receive queries and do some I/O-intensive stuff in the same thread, but the thread communication I need for that is so trivial that using simple mutexes works just fine. And I never needed so much performance as to think about lock-free communication either (mutexes are really fast in Linux).

I guess threading complexity is proportional to the complexity of the design. If your design is simple, concurrency is simple. It's really hard to get a deadlock in a simple design. Races are a little trickier, but they are inherent to some kinds of problems, so there is not a lot to do about that; you have to go problem by problem and try to get a good design to handle them well.
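A sketch of this process-and-pipes style using D's std.process (assumes a Unix-like system with 'cat' on the PATH):

```d
import std.process : pipeProcess, Redirect, wait;

void main()
{
    // Talk to a child process over pipes; whether it runs on this
    // machine or, via ssh, on another one is a deployment detail.
    auto p = pipeProcess(["cat"], Redirect.stdin | Redirect.stdout);
    p.stdin.writeln("hello");
    p.stdin.flush();
    p.stdin.close();

    auto line = p.stdout.readln();
    wait(p.pid);
    assert(line == "hello\n");
}
```

The isolation guarantee here comes from the OS: the two address spaces simply cannot race on memory.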
I think D duplicates functionality. For "safe" concurrency I use
processes and IPC (I have even more guarantees that D could ever give
me). That's all I need. I don't need a huge complexity in the language
for that. And I think D2 concurrency model is still way too low level.
You are crazy! Processes+IPC only works well if either the OS supports very fast IPC (IIRC none do, aside from shared memory, and now we are back where we started) or the processing between interactions is very long.
The later is my case =)
 Everything is indicating that shared memory multi-threading is where
 it's all going.
Maybe, I'm just saying why I don't comment on D2 concurrency model. I find it too complex for my needs (i.e. for what I know, I won't give my opinion about things I don't know/use). -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
May 28 2009
parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
Leandro Lucarella Wrote:

 BCS, el 28 de mayo a las 15:57 me escribiste:
 
 Maybe, I'm just saying why I don't comment on D2 concurrency model. I find
 it too complex for my needs (i.e. for what I know, I won't give my opinion
 about things I don't know/use).
 
Probably the majority of users either don't use multithreading (yet) or use it only for very simple tasks. My stated goal is not to force such users to learn the whole race-free type system. In most cases things "just work" by default, and the compiler catches any accidental race conditions. The complex part is for library writers who have very demanding needs. Unfortunately, I have to describe the whole shebang in my blog, otherwise people won't believe that the system is workable and that it satisfies their high expectations. I should probably write a simple tutorial that would show how to use my system for simple tasks.
May 28 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Bartosz Milewski (bartosz-nospam relisoft.com)'s article
 Leandro Lucarella Wrote:
 BCS, el 28 de mayo a las 15:57 me escribiste:

 Maybe, I'm just saying why I don't comment on D2 concurrency model. I find
 it too complex for my needs (i.e. for what I know, I won't give my opinion
 about things I don't know/use).
Probably the majority of users either don't use multithreading (yet) or use it
only for very simple tasks. My stated goal is not to force such users to learn the whole race-free type system. In most cases things "just work" by default, and the compiler catches any accidental race conditions.
 The complex part is for library writers who have very demanding needs.
Unfortunately, I have to describe the whole shebang in my blog, otherwise people won't believe that the system is workable and that it satisfies their high expectations.
 I should probably write a simple tutorial that would show how to use my system
for simple tasks.
This would be much appreciated. I try to read your blogs, which are geared toward hardcore multithreading people. I know just enough about multithreading to understand why it's a hard problem, so I usually get to about the second paragraph before I feel lost. I would love to see a version that offers simple examples of how the new multithreading might be useful to the kinds of people (like me) who understand the basics of multithreading and write multithreaded code in the very simple cases, but are not experts in concurrency, etc. For my purposes, I'm more interested in the mildly complicated things that are made simple, not the highly complicated things that are made possible.
May 28 2009
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
Bartosz Milewski Wrote:

 Leandro Lucarella Wrote:
 
 BCS, el 28 de mayo a las 15:57 me escribiste:
 
 Maybe, I'm just saying why I don't comment on D2 concurrency model. I find
 it too complex for my needs (i.e. for what I know, I won't give my opinion
 about things I don't know/use).
 
Probably the majority of users either don't use multithreading (yet) or use it only for very simple tasks. My stated goal is not to force such users to learn the whole race-free type system. In most cases things "just work" by default, and the compiler catches any accidental race conditions.
My hobby project is a multi-threaded game-playing AI. My current scheme uses a shared search tree with lockless updates of search results. Besides the general ability to use your scheme for what I've already done, I'm also interested in how to overhaul the garbage collector and in implementing lockless hashtables (see high-scale-lib on sf.net). I was interested in doing some of that infrastructure and contributing, but so far I've had no luck getting something as simple as weak references into druntime :(
 The complex part is for library writers who have very demanding needs.
Unfortunately, I have to describe the whole shebang in my blog, otherwise
people won't believe that the system is workable and that it satisfies their
high expectations. 
Yeah, I'm waiting for more details like which fences are introduced by the lockless SC requirements. The high-scale-lib is virtually fence free.
 I should probably write a simple tutorial that would show how to use my system
for simple tasks. 
 
May 28 2009
next sibling parent reply Bartosz Milewski <bartosz-nospam relisoft.com> writes:
Jason House Wrote:

 Bartosz Milewski Wrote:
 
 My hobby project is a multi-threaded game-playing AI. My current scheme uses a
shared search tree using lockless updates with search results. Besides general
ability to use your scheme for what I've already done, I'm also interested in
how to overhaul the garbage collector and implementing lockless hashtables (see
high-scale-lib on sf.net)
 
I see, you're a hardcore lockfree programmer. All you can expect from D is Sequential Consistency--nothing fancy like C++ weak atomics. But that's for the better.
 The complex part is for library writers who have very demanding needs.
Unfortunately, I have to describe the whole shebang in my blog, otherwise
people won't believe that the system is workable and that it satisfies their
high expectations. 
Yeah, I'm waiting for more details like which fences are introduced by the lockless SC requirements. The high-scale-lib is virtually fence free.
I don't have much to say about that because it's a known problem and it has already been solved in Java. I can tell you what is required on an x86: use xchg for writes, and that's all. I think Walter has already implemented it, because he asked me the same question.
May 28 2009
parent reply Jason House <jason.james.house gmail.com> writes:
Bartosz Milewski Wrote:

 Jason House Wrote:
 
 Bartosz Milewski Wrote:
 
 My hobby project is a multi-threaded game-playing AI. My current scheme uses a
shared search tree using lockless updates with search results. Besides general
ability to use your scheme for what I've already done, I'm also interested in
how to overhaul the garbage collector and implementing lockless hashtables (see
high-scale-lib on sf.net)
 
I see, you're a hardcore lockfree programmer. All you can expect from D is Sequential Consistency--nothing fancy like C++ weak atomics. But that's for the better.
Far from it! I'm stumbling through in an attempt to teach myself the black art. I'm probably in my 3rd coding of the project. The first incarnation had no threads. The 2nd used message passing. The current one is lockless, but still a work in progress.
 The complex part is for library writers who have very demanding needs.
Unfortunately, I have to describe the whole shebang in my blog, otherwise
people won't believe that the system is workable and that it satisfies their
high expectations. 
Yeah, I'm waiting for more details like which fences are introduced by the lockless SC requirements. The high-scale-lib is virtually fence free.
I don't have much to say about that because it's a know problem and it has already been solved in Java. I can tell you what is required on an x86: use xchg for writes, and that's all. I think Walter has already implemented it, because he asked me the same question.
What about cmpxchg (AKA compare-and-swap)? It occurs in a lot of algorithms. Also, "lock inc" is fundamental to my use of lockless variables.
May 28 2009
parent Bartosz Milewski <bartosz-nospam relisoft.com> writes:
Jason House Wrote:


 I see, you're a hardcore lockfree programmer. All you can expect from D is
Sequential Consistency--nothing fancy like C++ weak atomics. But that's for the
better.
Far from it! I'm stumbling through in an attempt to teach myself the black art. I'm probably in my 3rd coding of the project. The first incarnation had no threads. The 2nd used message passing. The current one is lockless, but still a work in progress.
Are you sure it's worth the effort? It's extremely hard to get lock-free right, and it often doesn't offer as much speedup as you'd expect. Well, in D it might, because it still doesn't use thin locks.
 What about cmpxchg (AKA compare-and-swap)? It occurs in a lot of algorithms.
Also, "lock inc" is fundamental to my use of lockless variables.
These will either be implemented in the library (inline assembly) or as compiler intrinsics. It's not hard.
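They did indeed land in D's runtime as core.atomic; a sketch of the primitives discussed here (the x86 notes in the comments mirror the discussion above):

```d
import core.atomic;

shared int counter;

void main()
{
    // Sequentially consistent store (xchg, or mov plus a fence, on x86).
    atomicStore(counter, 1);
    assert(atomicLoad(counter) == 1);

    // Read-modify-write ("lock add"/"lock inc" on x86).
    atomicOp!"+="(counter, 1);
    assert(atomicLoad(counter) == 2);

    // Compare-and-swap ("lock cmpxchg" on x86).
    bool swapped = cas(&counter, 2, 5);
    assert(swapped && atomicLoad(counter) == 5);
}
```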
May 30 2009
prev sibling parent reply BCS <ao pathlink.com> writes:
Reply to Jason,

 My hobby project is a multi-threaded game-playing AI. My current
 scheme uses a shared search tree using lockless updates with search
 results.
 
As in threaded min-max? Have you got anything working? I know from experience that this one's a cast-iron SOB. http://arrayboundserror.blogspot.com/search/label/min%20max
May 28 2009
parent Jason House <jason.james.house gmail.com> writes:
BCS Wrote:

 Reply to Jason,
 
 My hobby project is a multi-threaded game-playing AI. My current
 scheme uses a shared search tree using lockless updates with search
 results.
 
As in threaded min-max? Have you got anything working? I know from experience that this one's a cast-iron SOB. http://arrayboundserror.blogspot.com/search/label/min%20max
No. Min-max is only good for theory. I'm also not doing alpha-beta, which is successful in chess. I'm doing UCT and mostly aim to play the game of "go". UCT uses statistical bounds instead of hard heuristics. It also has less uniform tree exploration.
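The statistical bound in question is UCB1; a sketch of the node-selection score (c is the exploration constant, commonly around sqrt(2)):

```d
import std.math : log, sqrt;

/// UCB1 score for a child node in UCT: mean reward plus an
/// exploration bonus that shrinks as the node is visited more.
double ucb1(double wins, double visits, double parentVisits,
            double c = 1.41421356)
{
    return wins / visits + c * sqrt(log(parentVisits) / visits);
}
```

At each step UCT descends to the child with the highest ucb1 score, which biases the search toward promising moves while still sampling rarely tried ones, and that is exactly what makes the tree exploration non-uniform.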
May 28 2009
prev sibling next sibling parent reply Tim Matthews <tim.matthews7 gmail.com> writes:
Leandro Lucarella wrote:

 I would like D2 better if it was focussed on macros for example.
 
Can you elaborate on this? I think of the word macro as a C preprocessor feature which is no longer needed in D.
May 28 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Thu, 28 May 2009 19:59:00 +0400, Tim Matthews <tim.matthews7 gmail.com>
wrote:

 Leandro Lucarella wrote:

 I would like D2 better if it was focussed on macros for example.
Can you elaborate on this? I think of the word macro as a C preprocessor feature which is no longer needed in D.
I believe he is talking about AST macros that are postponed until D3 because current focus has shifted to concurrency.
May 28 2009
parent Tim Matthews <tim.matthews7 gmail.com> writes:
Denis Koroskin wrote:
 On Thu, 28 May 2009 19:59:00 +0400, Tim Matthews <tim.matthews7 gmail.com>
wrote:
 
 Leandro Lucarella wrote:

 I would like D2 better if it was focussed on macros for example.
Can you elaborate on this? I think of the word macro as a C preprocessor feature which is no longer needed in D.
I believe he is talking about AST macros that are postponed until D3 because current focus has shifted to concurrency.
OK, thanks, I see now: macros have that extra flexibility over templates/mixins. Very useful, and I agree, so is parallelism/concurrency.
May 28 2009
prev sibling parent BCS <none anon.com> writes:
Hello Tim,

 Leandro Lucarella wrote:
 
 I would like D2 better if it was focussed on macros for example.
 
Can you elaborate on this? I think of the word macro as a C preprocessor feature which is no longer needed in D.
AST macros. Look up Walter et al's talk from the D conference
May 28 2009
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
== Quote from Leandro Lucarella (llucax gmail.com)'s article
 Jason House, el 28 de mayo a las 08:45 me escribiste:

 Maybe people are waiting for Walter to go through all the hard work of
 implementing this stuff before complaining that it's crap and
 proclaiming Walter should have done in the first place?
No, I don't see any point in saying what I said above, because I don't think anything will change. If I didn't like some little detail, that could worth discussing because it has any chance to change Walter/Bartoz mind, but saying "I think all the model is way too complex" don't help much IMHO =)
That was basically the complaint about the const design for D2, and it did end up being simplified. I also think it would have been simplified further if anyone knew how to do so without losing any required functionality.

Regarding the shared proposal so far, I think D will always support sharing memory across processes, so the issue is really where to post the sign that says "here be monsters." Bartosz has come up with a model that would provide complete (?) verifiable data integrity, and therefore makes the domain of "safe" shared-memory programming as large as possible (deadlocks aside, of course).

However, the overarching question in my mind is whether we really want to build so much support into the language for something that is intended to be used sparingly at best. I can just see someone saying "so you have all these new keywords and all this stuff, and you're saying that despite all this I'm really not supposed to use any of it?" This is an area where community feedback would be very valuable, I'd think.
May 28 2009
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Denis Koroskin:
 I believe he is talking about AST macros that are postponed until D3 because
current focus has shifted to concurrency.<
I think shifting to concurrent programming is now the right choice; all other modern languages are doing the same, because people have more and more cores sleeping in their computers. But data-parallelism too needs more care/focus in D (I have discussed it at length when I presented the Chapel language, for example, and elsewhere). So far Bartosz has not discussed this very large (and important for future D users) topic enough. Those things are also much easier to understand and use for me (and I think for other people too). Thread/Actor/Agent/etc. parallelism alone is NOT going to be enough for the numeric computing community (and my numeric needs too). Some support for data-parallelism is currently probably more important than macros for D2. Bye, bearophile
May 28 2009
prev sibling next sibling parent Bartosz Milewski <bartosz-nospam relisoft.com> writes:
Andrei Alexandrescu Wrote:

 Second, there is no regard to language integration. Bartosz says syntax 
 doesn't matter and that he's flexible, but what that really means is 
 that no attention has been paid to language integration. There is more 
 to language integration than just syntax (and then even syntax is an 
 important part of it).
It's not that bad. I actually wrote the examples in D and then replaced !() with angle brackets to make it readable to non-D programmers. BTW, Scala doesn't use angle brackets. It uses square brackets [] for template arguments and parens () for array access. Interesting choice.
May 28 2009