
digitalmars.D - Things I Learned from ACCU 2010

reply Walter Bright <newshound1 digitalmars.com> writes:
* Not all functional programming languages do concurrency well. Haskell and 
OCaml in particular have severe fundamental problems with it, such that 
parallelizing your code makes it slower. Erlang and Clojure parallelize well, 
in that performance scales up proportionally as cores are added.

* The future of multi-core hardware is to not have any shared memory; each core 
will have its own address space. Message passing looks like the future.

* Monads have nothing in particular to do with I/O. All a monad is, is a way to 
insert code that pre-processes the arguments going into a function, and code 
that post-processes the result coming out of that function.

* Probably nobody understands how to use C++0x atomics correctly, or ever will.

* People really understand and get testing and how it improves programming.
Apr 23 2010
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 * People really understand and get testing and how it improves programming.

For example, this hilarious video was shown by James Bach, who created it: http://www.youtube.com/watch?v=M37VOKIaDUw
Apr 23 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

* Not all functional programming languages do concurrency well. Haskell and
OCaml in particular have severe fundamental problems with it such that
parallelizing your code makes it slower.<

What kind of problems have you seen in Haskell? I have read several articles about parallel code written in Haskell, and its situation doesn't look so bad.
Erlang and Clojure parallelize well, in that performance scales up
proportionally as cores are added.<

There is no way for this to be true in general. "Scalability" is largely a characteristic of the algorithm (and of the way its subparts exchange data), not of the language. And if you care about performance, Erlang is not the best: http://proliferationofniches.blogspot.com/2008/07/multi-core-problem.html Erlang is good for other things, like reliability.
Message passing looks like the future.<

Message passing is one future. There is no single silver bullet for the concurrency/parallelism problem. Different algorithms will need different solutions: message passing (actors, agents), data parallelism (AVX registers, GPU cores, vector operations, parallel loops, etc.), dataflow programming (http://en.wikipedia.org/wiki/Dataflow_programming ), etc. D will need several solutions: in its core language, in the standard library, and in external libraries.
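[For reference, the message-passing style discussed here can be sketched in D itself; the names below follow the std.concurrency module (spawn/send/receive) that was being added to D2 around this time, so treat the exact API as an assumption of this sketch:]

```d
import std.concurrency;
import std.stdio;

// A worker that shares no state with its owner; communication is purely
// by message passing, as in the actor model.
void worker()
{
    // Receive an int and the sender's Tid, reply with the square.
    receive((int x, Tid owner) { owner.send(x * x); });
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(21, thisTid);
    writeln(receiveOnly!int());  // prints 441
}
```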
* Monads have nothing in particular to with I/O.<

Right. They are used for I/O in Haskell, but they are a much more general concept that can be used for other purposes too.
* People really understand and get testing and how it improves programming.<

And D unit testing is not good enough yet :-) I think dynamic languages have shown why testing is so useful (for statically compiled languages too). Bye, bearophile
Apr 23 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 
 * Not all functional programming languages do concurrency well. Haskell and
 OCaml in particular have severe fundamental problems with it such that
 parallelizing your code makes it slower.<

What kind of problems have you seen in Haskell? I have read several articles about parallel code written in Haskell, and its situation doesn't look so bad.

It wasn't me, it was Russell Wider. He wrote a parallel pi calculating program in several languages, and then tried it with 1, 2, 4, and 8 cores. The more cores, the longer the Haskell program took. Charting the core use showed that only one core would run at a time. Same with OCaml. OCaml has a global interpreter lock which explains its behavior. Russell didn't know why the Haskell behavior was so bad. He allowed that it was possible he was misusing it.
 Erlang and Clojure parallelize well, in that performance scales up
 proportionally as cores are added.<

There is no way for this to be true in general. "Scalability" is a lot a characteristic of the algorithm (and the way its subparts exchange data), not the language. And if you care of performance Erlang is not the best: http://proliferationofniches.blogspot.com/2008/07/multi-core-problem.html Erlang is good for other things, like reliability.

With Erlang and Clojure and the parallel pi programming, doubling the number of cores doubled the speed. Graphing the core use showed they were utilizing the cores simultaneously. Erlang was slow in general, but it *did* scale well with the number of cores.
Apr 23 2010
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 It wasn't me, it was Russell Wider.

That's Russel Winder.
Apr 23 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 It wasn't me, it was Russell Wider. He wrote a parallel pi calculating program 
 in several languages, and then tried it with 1, 2, 4, and 8 cores. The more 
 cores, the longer the Haskell program took. Charting the core use showed that 
 only one core would run at a time.
 
 Same with OCaml.
 
 OCaml has a global interpreter lock which explains its behavior. Russell 
 didn't know why the Haskell behavior was so bad. He allowed that it was 
 possible he was misusing it.

You only have the illusion of having learned something here. Reading too much into a single example like this is a mistake. One benchmark, written by someone who is not expert in the language, means almost nothing. You need at least a suite of good benchmarks, written by people who know the respective languages. And even then you have only a rough idea of the situation. Bye, bearophile
Apr 23 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 OCaml has a global interpreter lock which explains its behavior. Russell
 didn't know why the Haskell behavior was so bad. He allowed that it was
 possible he was misusing it.

You have just the illusion to have learned something about this. Trying to read too much from this single example is very wrong. A single benchmark, written by a person not expert in the language, means nearly nothing. You need at least a suite of good benchmarks, written by people that know the respective languages. And even then, you have just an idea of the situation.

Fair enough, but in order to dismiss the results I'd need to know *why* the Haskell version failed so badly, and why such a straightforward attempt at parallelism is the wrong solution for Haskell. You shouldn't have to be an expert in a language that is supposedly good at parallelism in order to get good results from it. (Russel may or may not be an expert, but he is certainly not a novice at FP or parallelism.) Basically, I'd welcome an explanatory riposte to Russel's results.
Apr 23 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

You shouldn't have to be an expert in a language that is supposedly good at
parallelism in order to get good results from it.<

Being easy to learn is not one of Haskell's qualities. If you want to write efficient programs in Haskell you need a lot of brainpower; you can see this in the large number of discussions here: http://www.haskell.org/haskellwiki/Great_language_shootout So I think you need experience and knowledge to do anything significant in Haskell, not just to write highly parallel programs.
Basically, I'd welcome an explanatory riposte to Russel's results.<

If you want an explanation then I think you have to ask in (for example) a Haskell newsgroup, etc.
Fair enough, but in order to dismiss the results I'd need to know *why* the
Haskell version failed so badly, and why such a straightforward attempt at
parallelism is the wrong solution for Haskell.<

Haskell being far from easy, it's even possible that I won't understand the explanation if some Haskell expert eventually explains to me why that Haskell program was slow :-) Bye, bearophile
Apr 23 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Being Haskell not easy, it's even possible for me to not understand the
 explanation if some Haskell expert eventually explains me why that Haskell
 program was slow :-)

It's statements like this (and I've heard them repeatedly) that make me wonder what the value of Haskell actually is for conventional programming tasks and regular programmers.
Apr 23 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:hqsn2j$1s29$1 digitalmars.com...
 bearophile wrote:
 Being Haskell not easy, it's even possible for me to not understand the
 explanation if some Haskell expert eventually explains me why that 
 Haskell
 program was slow :-)

It's statements like this (and I've heard this repeatedly) that makes me wonder what the value of Haskell actually is to conventional programming tasks and regular programmers.

Not that I have an opinion either way, but FWIW, very similar things could probably be said for C++. ------------------------------- Not sent from an iPhone.
Apr 23 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:hqsn2j$1s29$1 digitalmars.com...
 bearophile wrote:
 Being Haskell not easy, it's even possible for me to not understand the
 explanation if some Haskell expert eventually explains me why that 
 Haskell
 program was slow :-)

wonder what the value of Haskell actually is to conventional programming tasks and regular programmers.

Not that I have an opinion either way, but FWIW, very similar things could probably be said for C++.

I know, and that creates an opportunity for other languages!
Apr 23 2010
parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:hqsuac$29rh$2 digitalmars.com...
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:hqsn2j$1s29$1 digitalmars.com...
 bearophile wrote:
 Being Haskell not easy, it's even possible for me to not understand the
 explanation if some Haskell expert eventually explains me why that 
 Haskell
 program was slow :-)

wonder what the value of Haskell actually is to conventional programming tasks and regular programmers.

Not that I have an opinion either way, but FWIW, very similar things could probably be said for C++.

I know, and that creates an opportunity for other languages!

Definitely :) Although, I guess what I meant was that if someone were inexperienced in imperative systems languages, C++ would probably have a few things to teach them, even though it may present them in a much-less-than-ideal form. ------------------------------- Not sent from an iPhone.
Apr 23 2010
prev sibling parent Justin Johansson <no spam.com> writes:
retard wrote:
 Fri, 23 Apr 2010 10:57:31 -0700, Walter Bright wrote:
 
 bearophile wrote:
 Being Haskell not easy, it's even possible for me to not understand the
 explanation if some Haskell expert eventually explains me why that
 Haskell program was slow :-)

wonder what the value of Haskell actually is to conventional programming tasks and regular programmers.

Regular programmers just die away. At some point we won't need crappy results anymore. Software engineering is often about reimplementing things. If a level 1 novice writes a blog engine, you need level 2..20 programmers to fix all the SQL injection / XSS bugs and caching issues. After that, even better programmers finally write maintainable and readable code. But it doesn't scale. That's why companies like Facebook hire guys like Andrei to fix the bugs caused by the 1st-generation PHP newbies.

Good one, retard; that's really funny, and surprising that Andrei didn't bite :-) Hard to imagine Andrei doing maintenance programming in some infidel programming language that doesn't have decent metaprogramming facilities, though!!!
Apr 29 2010
prev sibling next sibling parent BCS <none anon.com> writes:
Hello Walter,

 You shouldn't have to be an expert in a language that is supposedly
 good at parallelism in order to get good results from it.

Very good point. If a design with no blatant flaws performs like that, with no easy-to-spot cause, I'd say there is a problem in the language, even if the problem is in the program. -- ... <IXOYE><
Apr 24 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 04/29/2010 06:03 AM, Petr Kalny wrote:
 On Fri, 23 Apr 2010 15:23:22 +0200, Walter Bright
 <newshound1 digitalmars.com> wrote:

 bearophile wrote:
 Walter Bright:
 OCaml has a global interpreter lock which explains its behavior.
 Russell
 didn't know why the Haskell behavior was so bad. He allowed that it was
 possible he was misusing it.

Trying to read too much from this single example is very wrong. A single benchmark, written by a person not expert in the language, means nearly nothing. You need at least a suite of good benchmarks, written by people that know the respective languages. And even then, you have just an idea of the situation.

Fair enough, but in order to dismiss the results I'd need to know *why* the Haskell version failed so badly, and why such a straightforward attempt at parallelism is the wrong solution for Haskell. You shouldn't have to be an expert in a language that is supposedly good at parallelism in order to get good results from it. (Russel may or not be an expert, but he is certainly not a novice at FP or parallelism.) Basically, I'd welcome an explanatory riposte to Russel's results.

IIRC Haskell's problems with concurrency have roots in its 100% lazy evaluation. Anyone wanting more details may find this page useful: http://www.haskell.org/haskellwiki/Research_papers/Parallelism_and_concurrency

Which specific papers are you referring to? BTW, I wonder how current the page is. It features no paper from 2009 or 2010, one from 2008, none from 2007, and six from 2006. Of those, three links are broken. Andrei
Apr 29 2010
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Fri, 23 Apr 2010 06:23:22 -0700, Walter Bright wrote:

 bearophile wrote:
 Walter Bright:
 OCaml has a global interpreter lock which explains its behavior.
 Russell didn't know why the Haskell behavior was so bad. He allowed
 that it was possible he was misusing it.

You have just the illusion to have learned something about this. Trying to read too much from this single example is very wrong. A single benchmark, written by a person not expert in the language, means nearly nothing. You need at least a suite of good benchmarks, written by people that know the respective languages. And even then, you have just an idea of the situation.

Fair enough, but in order to dismiss the results I'd need to know *why* the Haskell version failed so badly, and why such a straightforward attempt at parallelism is the wrong solution for Haskell. You shouldn't have to be an expert in a language that is supposedly good at parallelism in order to get good results from it.

Why not? Do you think parallelism is simple to manage (efficiently)? No offence, but a total novice has zero understanding of e.g. threads or 3rd-party libraries. The best he can do is come up with something using the stdlib Thread classes. Usually it fails miserably due to deadlocks or other synchronization issues with locks.
Apr 23 2010
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 23 Apr 2010 19:09:55 -0400, retard <re tard.com.invalid> wrote:

 Fri, 23 Apr 2010 06:23:22 -0700, Walter Bright wrote:

 You shouldn't have to be an expert in a language that is supposedly good
 at parallelism in order to get good results from it.

Why not? Do you think parallelism is simple to manage (efficiently)? No offence but a total novice has zero understanding of e.g. threads or 3rd party libraries. The best he can do is to come up with something using the stdlib Thread classes. Usually it fails miserably due to deadlocks or other synchronization issues with locks.

I think his point was that a person who *does* understand parallelism and threading couldn't get it right. Not being an expert in the *language* does not make you a novice at threading. Of course someone who does not understand threading/parallelism is bound to have troubles no matter what the language until he/she gains more experience. You almost have to experience a deadlock-after-2-weeks problem to really get how important threading issues are (I did). -Steve
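[The deadlock-after-2-weeks problem mentioned here is usually some variant of lock-ordering. A minimal D sketch (hypothetical names; the comment describes the buggy variant, while the code shown uses the standard cure of a fixed global lock order, so it actually terminates):]

```d
import core.sync.mutex;
import core.thread;
import std.stdio;

__gshared Mutex lockA, lockB;
__gshared int counter;

// The classic deadlock: thread 1 takes lockA then lockB while thread 2
// takes lockB then lockA; with unlucky timing each thread holds one mutex
// and waits forever on the other. The cure, shown below, is a fixed
// global lock order: every thread takes lockA before lockB.
void safeWorker()
{
    foreach (i; 0 .. 1000)
        synchronized (lockA)
            synchronized (lockB)
                ++counter;
}

void main()
{
    lockA = new Mutex;
    lockB = new Mutex;
    auto t1 = new Thread(&safeWorker);
    auto t2 = new Thread(&safeWorker);
    t1.start(); t2.start();
    t1.join(); t2.join();
    writeln(counter);  // prints 2000: both workers completed, no deadlock
}
```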
Apr 23 2010
prev sibling next sibling parent "Petr Kalny" <petr.kalny volny.cz> writes:
On Fri, 23 Apr 2010 15:23:22 +0200, Walter Bright  
<newshound1 digitalmars.com> wrote:

 bearophile wrote:
 Walter Bright:
 OCaml has a global interpreter lock which explains its behavior.  
 Russell
 didn't know why the Haskell behavior was so bad. He allowed that it was
 possible he was misusing it.

Trying to read too much from this single example is very wrong. A single benchmark, written by a person not expert in the language, means nearly nothing. You need at least a suite of good benchmarks, written by people that know the respective languages. And even then, you have just an idea of the situation.

Fair enough, but in order to dismiss the results I'd need to know *why* the Haskell version failed so badly, and why such a straightforward attempt at parallelism is the wrong solution for Haskell. You shouldn't have to be an expert in a language that is supposedly good at parallelism in order to get good results from it. (Russel may or not be an expert, but he is certainly not a novice at FP or parallelism.) Basically, I'd welcome an explanatory riposte to Russel's results.

IIRC Haskell's problems with concurrency have roots in its 100% lazy evaluation. Anyone wanting more details may find this page useful: http://www.haskell.org/haskellwiki/Research_papers/Parallelism_and_concurrency Petr
Apr 29 2010
prev sibling parent "Petr Kalny" <petr.kalny volny.cz> writes:
On Thu, 29 Apr 2010 16:02:03 +0200, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 04/29/2010 06:03 AM, Petr Kalny wrote:
 On Fri, 23 Apr 2010 15:23:22 +0200, Walter Bright
 <newshound1 digitalmars.com> wrote:

 bearophile wrote:
 Walter Bright:
 OCaml has a global interpreter lock which explains its behavior.
 Russell
 didn't know why the Haskell behavior was so bad. He allowed that it  
 was
 possible he was misusing it.

Trying to read too much from this single example is very wrong. A single benchmark, written by a person not expert in the language, means nearly nothing. You need at least a suite of good benchmarks, written by people that know the respective languages. And even then, you have just an idea of the situation.

Fair enough, but in order to dismiss the results I'd need to know *why* the Haskell version failed so badly, and why such a straightforward attempt at parallelism is the wrong solution for Haskell. You shouldn't have to be an expert in a language that is supposedly good at parallelism in order to get good results from it. (Russel may or not be an expert, but he is certainly not a novice at FP or parallelism.) Basically, I'd welcome an explanatory riposte to Russel's results.

IIRC Haskell's problems with concurrency have roots in its 100% lazy evaluation. Anyone wanting more details may find this page useful: http://www.haskell.org/haskellwiki/Research_papers/Parallelism_and_concurrency

Which specific papers are you referring to? BTW, I wonder how current the page is. It features no paper from 2009 or 2010, one from 2008, none from 2007, and six from 2006. Of those, three links are broken. Andrei

Right, I couldn't find the paper; it was there that I had read about concurrency in Haskell as well. (But I hoped there might be some other useful information :o). After more searching I located the paper at: http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/index.htm "Runtime Support for Multicore Haskell": http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/multicore-ghc.pdf HTH Petr
Apr 29 2010
prev sibling next sibling parent reply Clemens <eriatarka84 gmail.com> writes:
Walter Bright Wrote:

 * Not all functional programming languages do concurrency well. Haskell and 
 OCaml in particular have severe fundamental problems with it such that 
 parallelizing your code makes it slower.

Do you have a reference on that? I'll produce one to the contrary: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/29#smoking-4core
 * Monads have nothing in particular to with I/O.

Right.
 All monads are are a way to 
 insert code to pre-process arguments going to a function, and insert code to 
 post-process the result coming out of that function.

That's a much too narrow view. While it may roughly apply to some uses of monads, even something as simple as the Maybe monad doesn't fit into this mental model anymore. I'd really recommend spending a few days with Haskell. Even if it may not be the language you'll want to spend the rest of your life with, there's no denying that a lot of interesting ideas and research are going into Haskell. (As an aside, I'm generally a bit put off by the hostility towards programming language research and theory in the D community. "We don't need no stinking theory, we'll just roll our own ad-hoc solution which will work much better because ivory-tower academics are completely out of touch with reality anyway." Bleh.) If you try to put ideas of pure functional programming into D, I think it would be a good idea to at least be somewhat familiar with the way the reigning king of that particular niche does it. -- Clemens
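[The Maybe example mentioned here can be sketched even in D, under hypothetical names: `bind` chains computations and short-circuits on failure, so there is no single function being "pre-/post-processed" -- the monad decides whether the next step runs at all:]

```d
import std.stdio;

// A minimal, hypothetical Maybe type in D.
struct Maybe(T)
{
    T value;
    bool isJust;
}

Maybe!T just(T)(T v) { return Maybe!T(v, true); }
Maybe!T nothing(T)() { return Maybe!T(T.init, false); }

// `bind` runs the next step only on success; on Nothing it short-circuits.
Maybe!U bind(T, U)(Maybe!T m, Maybe!U function(T) f)
{
    return m.isJust ? f(m.value) : nothing!U();
}

Maybe!int safeDiv(int x, int y)
{
    return y == 0 ? nothing!int() : just(x / y);
}

Maybe!int half(int a) { return safeDiv(a, 2); }

void main()
{
    auto r = bind(safeDiv(100, 5), &half);
    writeln(r.isJust ? r.value : -1);  // prints 10
    auto s = bind(safeDiv(100, 0), &half);
    writeln(s.isJust ? s.value : -1);  // prints -1: the second step never ran
}
```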
Apr 23 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Clemens wrote:
 Walter Bright Wrote:
 
 * Not all functional programming languages do concurrency well. Haskell and
  OCaml in particular have severe fundamental problems with it such that 
 parallelizing your code makes it slower.

Do you have a reference on that? I'll produce one to the contrary: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/29#smoking-4core

All I've got is Russel Winder's talk on it, Parallelism: The Functional Imperative, with the code and benchmarks. He ran them in real time. http://www.russel.org.uk/
Apr 23 2010
parent reply Clemens <eriatarka84 gmail.com> writes:
Walter Bright Wrote:

 Clemens wrote:
 Walter Bright Wrote:
 
 * Not all functional programming languages do concurrency well. Haskell and
  OCaml in particular have severe fundamental problems with it such that 
 parallelizing your code makes it slower.

Do you have a reference on that? I'll produce one to the contrary: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/29#smoking-4core

All I've got is Russel Winder's talk on it, Parallelism: The Functional Imperative, with the code and benchmarks. He ran them in real time. http://www.russel.org.uk/

Ah, ok. As bearophile noted, that person seems not to have much experience with Haskell, to put it politely. Obviously I didn't see the presentation and don't want to judge too harshly, but if your summary is an accurate representation of its take-away points, that reeks badly of intellectual dishonesty and FUD. See my link. Or put another way, would you like someone who has never used D before to do a live presentation on it and come to premature conclusions like this?
Apr 23 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Clemens wrote:
 Ah, ok. As bearophile noted, that person seems to have not much experience
 with Haskell, to put it politely. Obviously I didn't see the presentation and
 don't want to judge too harshly, but if your summary is an accurate
 representation of its take-away points, that reeks badly of intellectual
 dishonesty and FUD. See my link.
 
 Or put another way, would you like someone who has never used D before to do
 a live presentation on it and come to premature conclusions like this?

D is meant to give good results even for people who are not experts at it. If someone wrote a straightforward D app and it gave such poor results, I'd take it (and have taken such) as a problem that D needs to improve upon. For example, Andrei has expended a great deal of effort on making the naive use of stdio also the fast way. I will send your link to Russel, I'm sure he'd be interested. I am also interested in *why* Russel's Pi program is a bad example of Haskell programming, it's not enough to dismiss it because Russel is not a Haskell expert.
Apr 23 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 04/23/2010 08:30 AM, Walter Bright wrote:
 Clemens wrote:
 Ah, ok. As bearophile noted, that person seems to have not much
 experience
 with Haskell, to put it politely. Obviously I didn't see the
 presentation and
 don't want to judge too harshly, but if your summary is an accurate
 representation of its take-away points, that reeks badly of intellectual
 dishonesty and FUD. See my link.

 Or put another way, would you like someone who has never used D before
 to do
 a live presentation on it and come to premature conclusions like this?

D is meant to give good results even for people who are not experts at it. If someone wrote a straightforward D app and it gave such poor results, I'd take it (and have taken such) as a problem that D needs to improve upon. For example, Andrei has expended a great deal of effort on making the naive use of stdio also the fast way.

And the correct way. Andrei
Apr 23 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 On 04/23/2010 08:30 AM, Walter Bright wrote:
 For example, Andrei has expended a great deal of effort on making the
 naive use of stdio also the fast way.

And the correct way.

Yes. BTW, if it isn't obvious, the Erlang and Clojure versions of the Pi program were the naive approach, and produced expected multicore results.
Apr 23 2010
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Michael Rynn wrote:
 OK where's the naive version of the D Pi program that scales up with 
 1,2,4 cores? How far off are we? Is the concurrency module working with 
 it yet?

Nobody's written a library function to parallelize a map/reduce yet.
Apr 23 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Jacques wrote:
 On Fri, 23 Apr 2010 11:10:48 -0300, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 Michael Rynn wrote:
 OK where's the naive version of the D Pi program that scales up with 
 1,2,4 cores? How far off are we? Is the concurrency module working 
 with it yet?

Nobody's written a library function to parallelize a map/reduce yet.

Dave Simcha has. Code: http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d Docs: http://cis.jhu.edu/~dsimcha/parallelFuture.html

Cool!
Apr 23 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 Robert Jacques wrote:
 On Fri, 23 Apr 2010 11:10:48 -0300, Walter Bright 
 <newshound1 digitalmars.com> wrote:

 Michael Rynn wrote:
 OK where's the naive version of the D Pi program that scales up with 
 1,2,4 cores? How far off are we? Is the concurrency module working 
 with it yet?

Nobody's written a library function to parallelize a map/reduce yet.

Dave Simcha has. Code: http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d Docs: http://cis.jhu.edu/~dsimcha/parallelFuture.html

Cool!

Unfortunately, it currently fails to compile with D2.
Apr 23 2010
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Walter Bright wrote:
 Robert Jacques wrote:
 On Fri, 23 Apr 2010 11:10:48 -0300, Walter Bright
 <newshound1 digitalmars.com> wrote:

 Michael Rynn wrote:
 OK where's the naive version of the D Pi program that scales up with
 1,2,4 cores? How far off are we? Is the concurrency module working
 with it yet?

Nobody's written a library function to parallelize a map/reduce yet.

 Dave Simcha has. Code:
 http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
 Docs: http://cis.jhu.edu/~dsimcha/parallelFuture.html

Cool!


Can you tell me what errors you're getting? I realize that map and reduce are slightly brittle due to a combination of severe abuse of templates and subtle differences in the way different compiler releases handle IFTI, but for me all the unittests still compile and run successfully on 2.045. Also, I eat my own dogfood regularly and haven't noticed any problems with this lib, though the vast majority of my uses are the parallel foreach loop, not map and reduce.
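[For reference, the naive parallel Pi quadrature under discussion looks like this against a parallel map/reduce interface. The names below follow std.parallelism, the Phobos module that later grew out of parallelFuture, so treat them as an assumption of this sketch:]

```d
import std.algorithm : map;
import std.parallelism;
import std.range : iota;
import std.stdio;

void main()
{
    immutable n = 10_000_000;
    immutable delta = 1.0 / n;
    // Midpoint-rule quadrature of 4/(1+x^2) over [0,1], which equals pi.
    // taskPool.reduce carves the range into per-thread chunks and sums
    // the partial results -- the naive formulation from the benchmark.
    auto terms = iota(1, n + 1).map!((int i)
    {
        immutable x = (i - 0.5) * delta;
        return 4.0 / (1.0 + x * x);
    });
    immutable pi = delta * taskPool.reduce!"a + b"(0.0, terms);
    writefln("pi = %.6f", pi);  // pi = 3.141593
}
```

This is the same shape as the Erlang and Clojure versions Walter describes: no shared mutable state, just independent terms summed in parallel.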
May 08 2010
prev sibling parent sybrandy <sybrandy gmail.com> writes:
On 04/23/2010 10:10 AM, Walter Bright wrote:
 Michael Rynn wrote:
 OK where's the naive version of the D Pi program that scales up with
 1,2,4 cores? How far off are we? Is the concurrency module working
 with it yet?

Nobody's written a library function to parallelize a map/reduce yet.

Funny you mention that. I actually started a map/reduce library that I was planning on having run in parallel on a single machine. I didn't get very far as I had to divert my attention elsewhere. I really need to get back to it because it was an interesting little problem to work on. Casey
Apr 23 2010
prev sibling next sibling parent reply Clemens <eriatarka84 gmail.com> writes:
Walter Bright Wrote:

 Clemens wrote:
 Ah, ok. As bearophile noted, that person seems to have not much experience
 with Haskell, to put it politely. Obviously I didn't see the presentation and
 don't want to judge too harshly, but if your summary is an accurate
 representation of its take-away points, that reeks badly of intellectual
 dishonesty and FUD. See my link.
 
 Or put another way, would you like someone who has never used D before to do
 a live presentation on it and come to premature conclusions like this?

D is meant to give good results even for people who are not experts at it. If someone wrote a straightforward D app and it gave such poor results, I'd take it (and have taken such) as a problem that D needs to improve upon.

Someone coming from C++ might think the following program entirely reasonable (and I did indeed make this mistake when starting with D):

class A
{
    this() { /* initialize me */ }
    void foo() { /* do smth */ }
}

void main()
{
    A a;       // null reference: class variables are references in D
    a.foo();   // blam - segfault right here
}

This is about the level of understanding that seems to have been applied to Haskell in that example.
 I will send your link to Russel, I'm sure he'd be interested.
 
 I am also interested in *why* Russel's Pi program is a bad example of Haskell 
 programming, it's not enough to dismiss it because Russel is not a Haskell
expert.
 

I tried to have a look at it (not that I'm anything near a Haskell expert), but this link just gives me an empty directory: http://www.russel.org.uk/Bazaar/Pi_Quadrature
Apr 23 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Clemens wrote:
 I tried to have a look at it (not that I'm anything near a Haskell expert),
but this link just gives me an empty directory:
 http://www.russel.org.uk/Bazaar/Pi_Quadrature

I'll see if Russel will email me the code.
Apr 23 2010
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 Walter Bright, el 23 de abril a las 06:30 me escribiste:
 Clemens wrote:
 Ah, ok. As bearophile noted, that person seems to have not much experience
 with Haskell, to put it politely. Obviously I didn't see the presentation and
 don't want to judge too harshly, but if your summary is an accurate
 representation of its take-away points, that reeks badly of intellectual
 dishonesty and FUD. See my link.

 Or put another way, would you like someone who has never used D before to do
 a live presentation on it and come to premature conclusions like this?

D is meant to give good results even for people who are not experts at it. If someone wrote a straightforward D app and it gave such poor results, I'd take it (and have taken such) as a problem that D needs to improve upon.

It's very easy to make naive programs that have serious performance problems because of the GC. You have to be almost an expert to tune those programs to make them run fast. Ask bearophile and dsimcha for examples =)

Relatively inexperienced D programmers should be able to apply straightforward solutions to common programming problems and expect correct behavior and reasonably acceptable performance. I doubt we will always be able to achieve that, but we should always be working towards it.
Apr 23 2010
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
If only multicore programming were all about finding Fibonacci numbers or the Nth digit of pi, then maybe some of the claims in this thread would be true. :)
Apr 23 2010
prev sibling next sibling parent Michael Rynn <michaelrynn optusnet.com.au> writes:
On Fri, 23 Apr 2010 06:30:13 -0700, Walter Bright wrote:

 
 D is meant to give good results even for people who are not experts at
 it. If someone wrote a straightforward D app and it gave such poor
 results, I'd take it (and have taken such) as a problem that D needs to
 improve upon.
 
 For example, Andrei has expended a great deal of effort on making the
 naive use of stdio also the fast way.
 
 I will send your link to Russel, I'm sure he'd be interested.
 
 I am also interested in *why* Russel's Pi program is a bad example of
 Haskell programming, it's not enough to dismiss it because Russel is not
 a Haskell expert.

OK where's the naive version of the D Pi program that scales up with 1, 2, 4 cores? How far off are we? Is the concurrency module working with it yet?

-3.-1-4-1-5-9..

Michael Rynn
Apr 23 2010
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 04/23/2010 07:00 AM, Clemens wrote:
 (As an aside, I'm generally a bit put off by the
 hostility towards programming language research and theory in the D
 community. "We don't need no stinking theory, we'll just roll our own
 ad-hoc solution which will work much better because ivory-tower
 academics are completely out of touch with reality anyway." Bleh.)

I hope that trend has been definitively reversed.

Andrei
Apr 23 2010
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Fri, 23 Apr 2010 11:10:48 -0300, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Michael Rynn wrote:
 OK where's the naive version of the D Pi program that scales up with  
 1,2,4 cores? How far off are we? Is the concurrency module working with  
 it yet?

Nobody's written a library function to parallelize a map/reduce yet.

Dave Simcha has.

Code: http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
Docs: http://cis.jhu.edu/~dsimcha/parallelFuture.html
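For what it's worth, here is a sketch of what a naive parallel Pi quadrature could look like on top of such a pool. The taskPool/reduce names below follow the std.parallelism API that parallelFuture eventually evolved into; the 2010 parallelFuture names may well differ, so treat this as an assumption-laden sketch rather than a verbatim parallelFuture example:

```d
import std.algorithm : map;
import std.parallelism : taskPool;  // assumed API; parallelFuture's names differed
import std.range : iota;
import std.stdio : writefln;

// Midpoint-rule quadrature of 4/(1+x^2) on [0,1], which integrates to pi.
double computePi(int n)
{
    immutable delta = 1.0 / n;
    // taskPool.reduce splits the (random-access) mapped range across
    // worker threads and sums the partial results.
    return delta * taskPool.reduce!"a + b"(
        map!((int i) {
            immutable x = (i - 0.5) * delta;
            return 4.0 / (1.0 + x * x);
        })(iota(1, n + 1)));
}

void main()
{
    writefln("pi ~= %.10f", computePi(10_000_000));
}
```

The appeal is that the serial version is identical except for dropping the `taskPool.` prefix, which is about as naive as parallel code gets.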
Apr 23 2010
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Fri, 23 Apr 2010 11:16:29 -0300, Clemens <eriatarka84 gmail.com> wrote:
[snip]
 I tried to have a look at it (not that I'm anything near a Haskell  
 expert), but this link just gives me an empty directory:
 http://www.russel.org.uk/Bazaar/Pi_Quadrature

Try http://www.russel.org.uk:8080/Bazaar/Pi_Quadrature/changes
Apr 23 2010
prev sibling next sibling parent so <so so.do> writes:
On Fri, 23 Apr 2010 16:00:32 +0400, Clemens <eriatarka84 gmail.com> wrote:

 Walter Bright Wrote:

 * Not all functional programming languages do concurrency well. Haskell  
 and
 OCaml in particular have severe fundamental problems with it such that
 parallelizing your code makes it slower.

Do you have a reference on that? I'll produce one to the contrary: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/29#smoking-4core

Haskell is cool, but I am puzzled by that Haskell vs C example. What is he comparing? Parallel Haskell vs what? Also, he is right to use the word "naive", but it is more likely "naive use of the language" for a given algorithm, the very thing you are arguing against? I'd like to see comparisons of "non-naive" implementations too :)

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Apr 24 2010
prev sibling parent reply Pelle <pelle.mansson gmail.com> writes:
On 04/24/2010 01:13 AM, retard wrote:
 Maybe Walter is trying to break the world record for implementing things
 without understanding them first?

Didn't Walter implement templates without grokking them? I think I read that somewhere around here. That's quite a respectable feat, if you ask me.
Apr 29 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Pelle wrote:
 On 04/24/2010 01:13 AM, retard wrote:
 Maybe Walter is trying to break the world record for implementing things
 without understanding them first?

Didn't Walter implement templates without grokking them?

Yes.
 I think I read that somewhere around here.
 
 That's quite a respectable feat, if you ask me.

I also passed the quantum mechanics final in physics without understanding QM. I still understood how to apply the rules, though. On the other hand, I "got" Newtonian mechanics.
Apr 29 2010
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Clemens, el 23 de abril a las 09:06 me escribiste:
 All I've got is Russel Winder's talk on it, Parallelism: The Functional 
 Imperative, with the code and benchmarks. He ran them in real time.
 
 http://www.russel.org.uk/

Ah, ok. As bearophile noted, that person seems to have not much experience with Haskell, to put it politely. Obviously I didn't see the presentation and don't want to judge too harshly, but if your summary is an accurate representation of its take-away points, that reeks badly of intellectual dishonesty and FUD. See my link. Or put another way, would you like someone who has never used D before to do a live presentation on it and come to premature conclusions like this?

Like using one of the corner cases where the GC really sucks. =)

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
For long you live and high you fly
But only if you ride the tide
And balanced on the biggest wave
You race towards an early grave.
Apr 23 2010
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 23 de abril a las 06:30 me escribiste:
 Clemens wrote:
Ah, ok. As bearophile noted, that person seems to have not much experience
with Haskell, to put it politely. Obviously I didn't see the presentation and
don't want to judge too harshly, but if your summary is an accurate
representation of its take-away points, that reeks badly of intellectual
dishonesty and FUD. See my link.

Or put another way, would you like someone who has never used D before to do
a live presentation on it and come to premature conclusions like this?

D is meant to give good results even for people who are not experts at it. If someone wrote a straightforward D app and it gave such poor results, I'd take it (and have taken such) as a problem that D needs to improve upon.

It's very easy to make naive programs that have serious performance problems because of the GC. You have to be almost an expert to tune those programs to make them run fast. Ask bearophile and dsimcha for examples =)

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Hey you, don't help them to bury the light
Don't give in without a fight.
Apr 23 2010
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:hqt94i$2sgv$2 digitalmars.com...
 Fri, 23 Apr 2010 08:57:54 -0500, Andrei Alexandrescu wrote:

 On 04/23/2010 07:00 AM, Clemens wrote:
 (As an aside, I'm generally a bit put off by the hostility towards
 programming language research and theory in the D community. "We don't
 need no stinking theory, we'll just roll our own ad-hoc solution which
 will work much better because ivory-tower academics are completely out
 of touch with reality anyway." Bleh.)

I hope that trend has been definitively reversed. Andrei

Instead of hostility we now have blissful ignorance. Maybe I should post here more often again..

When the academic researchers keep their work squirreled away in academic circles and written in such a convoluted style that only other long-term ivory-tower residents can get far enough past the language to see the actual meaning, it's a wonder that *anyone* finds it surprising that programmers are ignorant of it.

And that's just the researchers that actually *do* know what they're doing. Let's not fool ourselves into thinking that the *majority* of academia actually knows its head from its ass (yea, that's right - I've brought it back to hostility).

-------------------------------
Not sent from an iPhone.
Apr 23 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 04/23/2010 10:40 PM, Nick Sabalausky wrote:
 "retard"<re tard.com.invalid>  wrote in message
 Instead of hostility we now have blissful ignorance. Maybe I should post
 here more often again..

When the academic researchers keep their work squirreled away in academic circles and written in such a convoluted style that only other long-term ivory-tower residents can get far enough past the language to see the actual meaning, it's a wonder that *anyone* finds it surprising that programmers are ignorant of it.

The style of academic papers is not convoluted on purpose. The good papers discuss new solutions to difficult problems and therefore must be very precise so as to convey a trove of information in a short space. Preparing a good academic paper may take six months or more. A magazine article of the same length may take an afternoon.
 And that's just the researchers that actually *do* know what they're doing.
 Let's not fool ourselves into thinking that the *majority* of academia
 actually knows it's head from it's ass (yea, that's right - I've brought it
 back to hostility).

That's a truism. Clearly there will be many foot soldiers and few generals in any field. The majority of developers can also be considered to not quite know what they're doing.

I've had an email diatribe with an acquaintance who had the same "academia sucks" stance. He kept going on about how pretentious and fake it was, and I couldn't figure out where he was coming from, until one day he mentioned he'd been an academic himself, so he had first-hand experience. The problem was that he'd been in the outer academic circles that go through the motions of research (author papers, hold conferences, publish proceedings and journals) but aren't quite doing research. At that point I agreed with him.

Andrei
Apr 24 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:hquqfm$2vse$1 digitalmars.com...
 The majority of developers can also be considered to not quite know what 
 they're doing.

Heh. I've surprised a lot of laymen, after telling them I'm a programmer, by my opinions that most programmers are incompetent and most software and consumer electronics are terrible. Seems hugely ironic to those unfamiliar with the field, but being around it and (at the risk of narcissism) knowing what I'm doing, I think puts me (along with many of the people on this board) in a prime position to notice flaws and steps backwards.

-------------------------------
Not sent from an iPhone.
Apr 24 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 Heh. I've surprised a lot of laymen, after telling them I'm a programmer, by 
 my opinions that most programmers are incompetent and most software and 
 consumer electronics are terrible. Seems hugely ironic to those unfamiliar 
 with the field, but being around it and (at the risk of narcissism) knowing 
 what I'm doing, I think puts me (along with many of the people on this 
 board) in a prime position to notice flaws and steps backwards.

I share your opinion that most software and consumer electronics are terrible. Of course, I've produced my share of terrible software, but I won't make any excuses for doing so.

For example, my TV set crashes every once in a while, and must be power cycled :-(

The old analog sets never did that!
Apr 24 2010
next sibling parent reply Gareth Charnock <gareth.tpc gmail.com> writes:
Walter Bright wrote:
 Nick Sabalausky wrote:
 Heh. I've surprised a lot of laymen, after telling them I'm a 
 programmer, by my opinions that most programmers are incompetent and 
 most software and consumer electronics are terrible. Seems hugely 
 ironic to those unfamiliar with the field, but being around it and (at 
  the risk of narcissism) knowing what I'm doing, I think puts me (along 
 with many of the people on this board) in a prime position to notice 
 flaws and steps backwards.

I share your opinion that most software and consumer electronics is terrible. Of course, I've produced my share of terrible software, but I won't make any excuses for doing so. For example, my TV set crashes every once in a while, and must be power cycled :-( The old analog sets never did that!

My laptop's power supply sometimes crashes, failing to report itself to my laptop. It then has to be power cycled. I am not making this up!
Apr 25 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 04/25/2010 04:55 AM, Gareth Charnock wrote:
 Walter Bright wrote:
 Nick Sabalausky wrote:
 Heh. I've surprised a lot of laymen, after telling them I'm a
 programmer, by my opinions that most programmers are incompetent and
 most software and consumer electronics are terrible. Seems hugely
 ironic to those unfamiliar with the field, but being around it and
  (at the risk of narcissism) knowing what I'm doing, I think puts me
 (along with many of the people on this board) in a prime position to
 notice flaws and steps backwards.

I share your opinion that most software and consumer electronics is terrible. Of course, I've produced my share of terrible software, but I won't make any excuses for doing so. For example, my TV set crashes every once in a while, and must be power cycled :-( The old analog sets never did that!

My laptop's power supply sometimes crashes, failing to report itself to my laptop. It then has to be power cycled. I am not making this up!

Well actually that's a different matter altogether. Power supplies are switching devices that, when old, fail to maintain oscillation. When you power cycle them they usually re-prime themselves because there's some simple electronics that does it. If you listen carefully to the source, you may hear a high-pitched sound when it's working. The louder the noise, the older the source.

Failure to report comes from the third wire that connects the source to the laptop. That wire is quite thin and is the first to break on an older source. The manifestation is that the laptop intermittently fails to figure out that it is connected to a correct power source.

Time to change the power brick. Many go for under $10 on eBay, free shipping. (How the heck do they make money off them?)

Andrei
Apr 25 2010
prev sibling parent reply div0 <div0 users.sourceforge.net> writes:
Walter Bright wrote:
 
 For example, my TV set crashes every once in a while, and must be power
 cycled :-(
 
 The old analog sets never did that!

Yeah, but your old telly didn't play YouTube. How on earth we survived without an infinite number of low-quality videos of cats doing vaguely amusing things will forever be a mystery.

-- 
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
Apr 25 2010
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
div0 wrote:
 Walter Bright wrote:
 For example, my TV set crashes every once in a while, and must be power
 cycled :-(

 The old analog sets never did that!

Yeah but your old telly didn't play youtube.

My current one doesn't either; it has no digital inputs. But it clearly has a computer internally. I bought it right before the collapse of LCD TV prices; it's the last of the flat-screen tube jobs.
Apr 25 2010
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"div0" <div0 users.sourceforge.net> wrote in message 
news:hr14hk$2de0$1 digitalmars.com...
 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: SHA1

 Walter Bright wrote:
 For example, my TV set crashes every once in a while, and must be power
 cycled :-(

 The old analog sets never did that!

Yeah, but your old telly didn't play YouTube. How on earth we survived without an infinite number of low-quality videos of cats doing vaguely amusing things will forever be a mystery.

We had to get by with Bob Saget saying vaguely amusing things overtop an infinite number of slightly-less-low-quality videos of people falling down.
Apr 25 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
  We had to get by with Bob Saget saying vaguely amusing things overtop an 
  infinite number of slightly-less-low-quality videos of people falling down.

Bob Saget was never amusing. Though I felt sorry for him, how many jokes could you make about the same pratfalls, over and over, week after week?
Apr 25 2010
parent Bernard Helyer <b.helyer gmail.com> writes:
On 26/04/10 07:56, Walter Bright wrote:

Bob Saget was never amusing. Though I felt sorry for him, how many jokes could you make about the same pratfalls, over and over, week after week?

http://www.youtube.com/watch?v=0HW4mPZmKPM (NSFW)

If you don't know, Bob Saget is the most blue comic I have ever heard. Which was surprising to me, only knowing him from AFV and Full House.
Apr 26 2010