
digitalmars.D - Super-dee-duper D features

reply Walter Bright <newshound digitalmars.com> writes:
kris wrote:
 Thus; shouting from the rooftops that D is all about meta-code, and DSL
 up-the-wazzoo, may well provoke a backlash from the very people who
 should be embracing the language. I'd imagine Andrei would vehemently
 disagree, but so what? The people who will ultimately be responsible for
 "allowing" D through the door don't care about fads or technical
 superiority; they care about costs. And the overwhelming cost in
 software development today, for the type of companies noted above, is
 maintenance. For them, software dev is already complex enough. In all
 the places I've worked or consulted, in multiple countries, and since the
 time before Zortech C, pedestrian-code := maintainable-code := less
 overall cost.

Some comments:

1) D has no marketing budget. It isn't backed by a major corporation. Therefore, it needs something else to catch people's attention. Mundane features aren't going to do it.

2) I know Java is wildly successful. But Java ain't the language for me - because it takes too much code to do the simplest things. It isn't straightforward clarifying code, either, it looks like a lot of irrelevant bother. I'm getting older and I just don't want to spend the *time* to write all that stuff. My fingertips get sore <g>. I wouldn't use Java if it was twice as fast as any other language for that reason. I wouldn't use Java if it was twice as popular as it is now for that reason.

3) Less code == more productivity, less bugs. I don't mean gratuitously less code, I mean less code in the sense that one can write directly what one means, rather than a lot of tedious bother. For example, if I want to visit each element in an array:

    foreach(v; e)
    {...}

is more direct than:

    for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
    { T v = e[i];
     ... }

4) The more experience I have, the more it seems that the language that got a lot right is ... Lisp. But Lisp did one thing terribly, terribly wrong - the syntax. The Lisp experts who can get past that seem to be amazingly productive with Lisp. The rest of us will remain envious of what Lisp can do, but will never use it.

5) Lisp gets things right, according to what I've read from heavy Lisp users, by being a language that can be modified on the fly to suit the task at hand; in other words, by having a customizable language one can achieve dramatic productivity gains.

6) If I think about it a certain way, it looks like what C++ Boost is doing is a desperate attempt to add Lisp-like features. By desperate I mean that C++'s core feature set is just inadequate to the task. For example, look at all the effort Boost has gone through to do a barely functioning implementation of tuples. Put tuples into the language properly, and all that nasty stuff just falls away like a roofer peeling off the old shingles.

7) A lot of companies have outlawed C++ templates, and for good reason. I believe that is not because templates are inherently bad. I think that C++ templates are deeply flawed because they were ***never designed for the purpose to which they were put***.

8) I've never been able to create usable C++ templates. Notice that the DMD front end (in C++) doesn't use a single template. I know how they work (in intimate detail) but I still can't use them.

9) But I see what C++ templates can do. So to me, the problem is to design templates in such a way that they are as simple to write as ordinary functions. *Then*, what templates can do can be accessible and maintainable. It's like cars - they used to be very difficult to drive, but now anyone can hop in, turn the key, and go.

10) Your points about pedestrian code are well taken. D needs to do pedestrian code very, very well. But that isn't enough, because lots of languages do pedestrian code well enough.

11) Today's way-out feature is tomorrow's pedestrian code. I'm old enough to remember when "structured code", i.e. while, for, switch instead of goto, was the way-out feature (70's). Then, OOP was all the rage (80's), now that's a major yawner. STL was then the way-out fad (90's), now that's pedestrian too. Now it's metaprogramming (00's), and I bet by 2015 that'll be ho-hum too, and it's D that's going to do it.

12) Take a look at what Kirk McDonald is doing with Pyd. He needs all this stuff to make it slicker than oil on ground steel. He's on the bleeding edge of stuff D needs to *make* pedestrian.
Feb 11 2007
next sibling parent janderson <askme me.com> writes:
Walter Bright wrote:
 kris wrote:
 
 7) A lot of companies have outlawed C++ templates, and for good reason. 
 I believe that is not because templates are inherently bad. I think that 
 C++ templates are deeply flawed because they were ***never designed 
 for the purpose to which they were put***.

Very true. At a previous employ the major reason templates were outlawed was because they caused compilation time to slow down dramatically if you didn't know what you were doing. The other reason was, like you say, they get very complex to understand quickly for all but the most simple cases. Unfortunately, without templates you generally end up writing more code or, at the least, less efficient code. Which is a maintenance headache. -Joel
Feb 12 2007
prev sibling next sibling parent reply kris <foo bar.com> writes:
Walter Bright wrote:
 kris wrote:
  > Thus; shouting from the rooftops that D is all about meta-code, and DSL
  > up-the-wazzoo, may well provoke a backlash from the very people who
  > should be embracing the language. I'd imagine Andrei would vehemently
  > disagree, but so what? The people who will ultimately be responsible for
  > "allowing" D through the door don't care about fads or technical
  > superiority; they care about costs. And the overwhelming cost in
  > software development today, for the type of companies noted above, is
  > maintenance. For them, software dev is already complex enough. In all
  > the places I've worked or consulted, in multiple countries, and since the
  > time before Zortech C, pedestrian-code := maintainable-code := less
  > overall cost.
 
 Some comments:
 
 1) D has no marketing budget. It isn't backed by a major corporation. 
 Therefore, it needs something else to catch people's attention. Mundane 
 features aren't going to do it.

It's already /stacked/ with attention grabbing features :)
 
 2) I know Java is wildly successful. But Java ain't the language for me 
 - because it takes too much code to do the simplest things. It isn't 
 straightforward clarifying code, either, it looks like a lot of 
 irrelevant bother. I'm getting older and I just don't want to spend the 
 *time* to write all that stuff. My fingertips get sore <g>. I wouldn't 
 use Java if it was twice as fast as any other language for that reason. 
 I wouldn't use Java if it was twice as popular as it is now for that 
 reason.
 

A lot of people feel that way. D can potentially tap that vast market.
 3) Less code == more productivity, less bugs. I don't mean gratuitously 
 less code, I mean less code in the sense that one can write directly 
 what one means, rather than a lot of tedious bother. For example, if I 
 want to visit each element in an array:
 
     foreach(v; e)
     {...}
 
 is more direct than:
 
     for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
     { T v = e[i];
      ... }
 

Yep, that's great! One of the reasons I like D so much, along with array slicing.
 
 4) The more experience I have, the more it seems that the language that 
 got a lot right is ... Lisp. But Lisp did one thing terribly, terribly 
 wrong - the syntax. The Lisp experts who can get past that seem to be 
 amazingly productive with Lisp. The rest of us will remain envious of 
 what Lisp can do, but will never use it.
 

True.
 5) Lisp gets things right, according to what I've read from heavy Lisp 
 users, by being a language that can be modified on the fly to suit the 
 task at hand, in other words, by having a customizable language one can 
 achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.
 
 6) If I think about it a certain way, it looks like what C++ Boost is 
 doing is a desperate attempt to add Lisp-like features. By desperate I 
 mean that C++'s core feature set is just inadequate to the task. For 
 example, look at all the effort Boost has gone through to do a barely 
 functioning implementation of tuples. Put tuples into the language 
 properly, and all that nasty stuff just falls away like a roofer peeling 
 off the old shingles.

Boost seems more like a language within a language <g>. The very thing that increases long-term costs.
 
 7) A lot of companies have outlawed C++ templates, and for good reason. 
 I believe that is not because templates are inherently bad. I think that 
 C++ templates are deeply flawed because they were ***never designed 
 for the purpose to which they were put***.

Agreed. But the issue is not about how badly they're flawed. Instead, it's the non-standard "language" problem. The MyDSL problem :)
 
 8) I've never been able to create usable C++ templates. Notice that the 
 DMD front end (in C++) doesn't use a single template. I know how they 
 work (in intimate detail) but I still can't use them.

Same here.
 
 9) But I see what C++ templates can do. So to me, the problem is to 
 design templates in such a way that they are as simple to write as 
 ordinary functions. *Then*, what templates can do can be accessible and 
 maintainable. It's like cars - they used to be very difficult to drive, 
 but now anyone can hop in, turn the key, and go.

Templates are certainly a useful tool in D. Tango has some very simple usage of them for handling the char[]/wchar[]/dchar[] implementations. I've also looked at some D templates for ages, and still can't figure out just how they work. Don Clugston is known around here as the Template Ninja -- the very name itself shouts out "Here Dwell Demons!" :-D That kind of thing is a point of diminishing returns anywhere cost is king. This relates to #7.
 
 10) Your points about pedestrian code are well taken. D needs to do 
 pedestrian code very, very well. But that isn't enough because lots of 
 languages do pedestrian code well enough.
 
 11) Today's way-out feature is tomorrow's pedestrian code. I'm old 
 enough to remember when "structured code", i.e. while, for, switch 
 instead of goto, was the way-out feature (70's). Then, OOP was all the 
 rage (80's), now that's a major yawner. STL was then the way-out fad 
 (90's), now that's pedestrian too. Now it's metaprogramming (00's), and 
 I bet by 2015 that'll be ho-hum too, and it's D that's going to do it.

The danger is, of course, the language-within-a-language that leads to #7. In structured and OOP, if you learned the language you could maintain the code (we disregard the fact that anyone can write truly unmaintainable code if they try to). Yet, this could be described as 'contained' within the language. Quite different than what causes #7 :)
 12) Take a look at what Kirk McDonald is doing with Pyd. He needs all 
 this stuff to make it slicker than oil on ground steel. He's on the 
 bleeding edge of stuff D needs to *make* pedestrian.

Certainly :) What Kirk has been doing (much awesometude there) is one of those things that fit into the "narrow focus" or "speciality" field that /can/ benefit in some manner. But it's a black-box. When it works, nobody will fuss with the insides unless they /really/ have to. That's not how most commercial software is done today, regardless of all the efforts to make it more like circuit-design. - Kris
Feb 12 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
kris wrote:
 Walter Bright wrote:
 3) Less code == more productivity, less bugs. I don't mean 
 gratuitously less code, I mean less code in the sense that one can 
 write directly what one means, rather than a lot of tedious bother. 
 For example, if I want to visit each element in an array:

     foreach(v; e)
     {...}

 is more direct than:

     for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
     { T v = e[i];
      ... }

Yep, that's great! One of the reasons I like D so much, along with array slicing.

The C++ version is even *worse* than the C one (for wordiness bother):

    for (std::vector<T>::const_iterator i = e.begin(); i != e.end(); i++)
    { T v = *i;
     ... }

I mean I know the reasons for every bit of the syntax there, and in isolation they make sense, but put it all together and it seems to go backwards.
 5) Lisp gets things right, according to what I've read from heavy Lisp 
 users, by being a language that can be modified on the fly to suit the 
 task at hand, in other words, by having a customizable language one 
 can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.
 7) A lot of companies have outlawed C++ templates, and for good 
 reason. I believe that is not because templates are inherently bad. I 
 think that C++ templates are deeply flawed because they were 
 ***never designed for the purpose to which they were put***.

Agreed. But the issue is not about how badly they're flawed. Instead, it's the non-standard "language" problem. The MyDSL problem :)

I disagree with that. When you write a program using classes and functions, you *are* creating your own language. Classes are your custom types, and functions are your custom operators.
 8) I've never been able to create usable C++ templates. Notice that 
 the DMD front end (in C++) doesn't use a single template. I know how 
 they work (in intimate detail) but I still can't use them.

Same here.

But I have been able to create usable D templates <g>.
 I also look at some D templates for ages, and still can't figure out 
 just how they work. Don Clugston is known around here as the Template 
 Ninja -- the very name itself shouts out "Here Dwell Demons!" :-D

The very fact that Don's called the Template Ninja is a problem - after all, there is no "function ninja", no "typedef ninja", no "+ ninja". It's a sign that templates are still not easy enough to use. It's like Paul Mensonidas being recognized as the "World's Leading Expert on the C Preprocessor." Obviously, something is seriously wrong with the preprocessor if there's an ecological niche for a world's leading expert on it. (By the way, Paul is a very nice fellow and has been kind enough to help me iron out several subtle bugs in the DMC++ preprocessor. As long as we're saddled with that preprocessor spec, I'm glad there is a Paul to help!)
 12) Take a look at what Kirk McDonald is doing with Pyd. He needs all 
 this stuff to make it slicker than oil on ground steel. He's on the 
 bleeding edge of stuff D needs to *make* pedestrian.

Certainly :) What Kirk has been doing (much awesometude there) is one of those things that fit into the "narrow focus" or "speciality" field that /can/ benefit in some manner. But it's a black-box. When it works, nobody will fuss with the insides unless they /really/ have to. That's not how most commercial software is done today, regardless of all the efforts to make it more like circuit-design.

Even if D fails to make metaprogramming easy for average joe coders to use, if it still can be used by experts to create useful black box code like Pyd, then it is worthwhile. After all, even though trig functions are very hard to write, they are easily used by joe coders as black box components without any problems. (Nerd that I am, I've spent many contented hours tinkering around with the internals of math functions to get them juuuust right <g>.)
Feb 12 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 kris wrote:

 5) Lisp gets things right, according to what I've read from heavy 
 Lisp users, by being a language that can be modified on the fly to 
 suit the task at hand, in other words, by having a customizable 
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

And the recursion. People just don't naturally think recursively. And the lack of mutable data structures. OCaml tried to fix that, but OCaml's probably always going to be niche as well (see first point). --bb
Feb 12 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 kris wrote:

 5) Lisp gets things right, according to what I've read from heavy 
 Lisp users, by being a language that can be modified on the fly to 
 suit the task at hand, in other words, by having a customizable 
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

And the recursion. People just don't naturally think recursively. And the lack of mutable data structures. OCaml tried to fix that, but OCaml's probably always going to be niche as well (see first point).

LISP does have mutation. Besides, many people naturally think recursively, and many problems (e.g. parsing) can be easiest thought of that way. Andrei
Feb 12 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 kris wrote:

 5) Lisp gets things right, according to what I've read from heavy 
 Lisp users, by being a language that can be modified on the fly to 
 suit the task at hand, in other words, by having a customizable 
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

And the recursion. People just don't naturally think recursively. And the lack of mutable data structures. OCaml tried to fix that, but OCaml's probably always going to be niche as well (see first point).

LISP does have mutation.

Ok. My bad.
 Besides, many people naturally think 
 recursively, 

The statement was about why LISP is never going to be wildly popular. There may very well be "many" people who naturally think recursively, but if they're not a majority then that's a hurdle to LISP becoming popular.
 and many problems (e.g. parsing) can be easiest thought of 
 that way.

Sure. However, you can write recursive algorithms in most any procedural language to handle those naturally recursive tasks when they come up. With Lisp or <my-favorite-functional-language> you're pretty much /forced/ to look at everything recursively. And I think that makes joe coder nervous, thus presenting a major hurdle to any functional language ever becoming truly popular. My point is just that I don't think syntax is the *only* thing that's prevented lisp from becoming wildly popular. If that were the case then the answer would be to simply create a different syntax for Lisp. (Actually, according to someone's comment here http://discuss.fogcreek.com/newyork/default.asp?cmd=show&ixPost=1998 it's been done and it's called Dylan, another not-wildly popular language). So I think the problem is more fundamental. --bb
Feb 12 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 kris wrote:

 5) Lisp gets things right, according to what I've read from heavy 
 Lisp users, by being a language that can be modified on the fly to 
 suit the task at hand, in other words, by having a customizable 
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

And the recursion. People just don't naturally think recursively. And the lack of mutable data structures. OCaml tried to fix that, but OCaml's probably always going to be niche as well (see first point).

LISP does have mutation.

Ok. My bad.
 Besides, many people naturally think recursively, 

The statement was about why LISP is never going to be wildly popular. There may very well be "many" people who naturally think recursively, but if they're not a majority then that's a hurdle to LISP becoming popular.
 and many problems (e.g. parsing) can be easiest thought of that way.

Sure. However, you can write recursive algorithms in most any procedural language to handle those naturally recursive tasks when they come up. With Lisp or <my-favorite-functional-language> you're pretty much /forced/ to look at everything recursively. And I think that makes joe coder nervous, thus presenting a major hurdle to any functional language ever becoming truly popular. My point is just that I don't think syntax is the *only* thing that's prevented lisp from becoming wildly popular. If that were the case then the answer would be to simply create a different syntax for Lisp. (Actually, according to someone's comment here http://discuss.fogcreek.com/newyork/default.asp?cmd=show&ixPost=1998 it's been done and it's called Dylan, another not-wildly popular language). So I think the problem is more fundamental.

I think the bottom line is, languages succeed and fail for the most mysterious reasons; engaging in speculation is a certain time vortex. When it comes to LISP in particular, the most amazing thing to me is not why it didn't catch on in the industry, but why it's so amazingly fresh today after 46 years. Some concepts pioneered by LISP that people have laughed at are now increasingly considered "obviously good" (GC, lambdas, higher-order functions, closures, continuations), while others, I agree with Paul Graham, are starting to blip on the community-at-large radar only now after so many years. Those are the macros. Andrei
Feb 12 2007
prev sibling parent reply X Bunny <xbunny eidosnet.co.uk> writes:
Bill Baxter wrote:
 Besides, many people naturally think recursively, 

The statement was about why LISP is never going to be wildly popular. There may very well be "many" people who naturally think recursively, but if they're not a majority then that's a hurdle to LISP becoming popular.
 and many problems (e.g. parsing) can be easiest thought of that way.

Sure. However, you can write recursive algorithms in most any procedural language to handle those naturally recursive tasks when they come up. With Lisp or <my-favorite-functional-language> you're pretty much /forced/ to look at everything recursively. And I think that makes joe coder nervous, thus presenting a major hurdle to any functional language ever becoming truly popular.

You haven't qualified what you mean by Lisp; assuming that means Common Lisp (CL), then you are uninformed: CL offers a number of iteration constructs (they are implemented as macros, as a great deal of the language is). dolist is very simple; loop is almost a language within itself. A common package used is called iterate, which is a popular alternative to loop. Also, CL is not just a functional language; like D, CL is multiparadigm. With CL you can easily write procedural code with side effects like C++ or D, or you can use object orientated, aspect orientated, functional, logic, pattern matching and many other programming concepts. What's more, you can add other programming concepts easily should you like.
 
 My point is just that I don't think syntax is the *only* thing that's 
 prevented lisp from becoming wildly popular.  If that were the case then 
 the answer would be to simply create a different syntax for Lisp. 
 (Actually, according to someone's comment here 
 http://discuss.fogcreek.com/newyork/default.asp?cmd=show&ixPost=1998 
 it's been done and it's called Dylan, another not-wildly popular 
 language).  So I think the problem is more fundamental.
 

People often try to modify Lisp to have a syntax more familiar to C language programmers. Dylan aside, it seems it's often people who are new to Lisp who do this; after they learn more of Lisp they can see why it is the way it is, and although initially alien, its design makes the language easier to use. I'm not actually convinced that you can make a language with the features of CL without its syntax. There is an infix package also which allows you to use infix for things like math formulas and the like.

My personal feelings as to why Lisp isn't as popular as it could be are some of these misconceptions:

1) It's all functional code and recursion
2) The syntax is weird and mindbending
3) Lisp is interpreted and therefore slow
4) It's hard to interface Lisp with non-Lisp libraries and operating system services
5) Lisp is old and hasn't changed since the 50's
6) The features are really clever but they wouldn't be useful in a 'real' program
7) It's for AI or 'weird' programs
8) You have to be really clever to program in Lisp
9) Lisp is poorly documented and hard for a beginner to understand
10) It's irrelevant because we have Java/C++/something else now

It's interesting to contrast these with D, which tends to present an image which is the opposite of many of these notions.

Regarding the syntax issue; is this suggestion (from an earlier post):

    AddExpressionTemplate!(
      MultExpressionTemplate!(
        SubExpressionTemplate!(Vector,Vector),
        Vector),
      Vector)

any more readable than this?

    (defmacro mymacro (a b c d)
      (+ (* (- a b) c) d))

Bunny
Feb 12 2007
next sibling parent reply Kevin Bealer <kevinbealer gmail.com> writes:
X Bunny wrote:
 Bill Baxter wrote:

 Also CL is not just a functional language, Like D CL is multiparadigm. 
 With CL you can easily write procedural code with side effects like C++ 
 or D or you can use object orientated, aspect orientated, functional, 
 logic, pattern matching and many other programming concepts. Whats more 
 you can add other programming concepts easily should you like.

First you suggest that there is some ignorance about what LISP can do; let me be the first to confess to having some of that. Having said that... It always seemed to me that LISP syntax for the 'other paradigms' and for iterative programming was designed to steer people away from them. I need to use a "let" and introduce a new scope just to define a variable. I feel like I am using a spoon to cut carrots. It can be made to work, but... I know that that's mostly a syntax thing though.
 My point is just that I don't think syntax is the *only* thing that's 
 prevented lisp from becoming wildly popular.  If that were the case 
 then the answer would be to simply create a different syntax for Lisp. 
 (Actually, according to someone's comment here 
 http://discuss.fogcreek.com/newyork/default.asp?cmd=show&ixPost=1998 
 it's been done and it's called Dylan, another not-wildly popular 
 language).  So I think the problem is more fundamental.


To me Dylan looks more like ML than, say, C. Different is not enough, but 'better' would be interesting to see.
 My personal feelings as to why Lisp isn't as popular as it could be are 
 some of these misconceptions:
 
 1) It's all functional code and recursion
 2) The syntax is weird and mindbending
 3) Lisp is interpreted and therefore slow
 4) It's hard to interface Lisp with non-Lisp libraries and operating 
 system services
 5) Lisp is old and hasn't changed since the 50's
 6) The features are really clever but they wouldn't be useful in a 'real' 
 program
 7) It's for AI or 'weird' programs
 8) You have to be really clever to program in Lisp
 9) Lisp is poorly documented and hard for a beginner to understand
 10) It's irrelevant because we have Java/C++/something else now

I've looked at LISP a number of times but something always pushes me away, including some of the misconceptions here, however:

1. You can avoid this, but the language, tutorials, books, libraries, other LISPers, and so on, all seem to want you to go down the recursion path.

2. The syntax doesn't provide visual hints that let you read a program. Web pages use different colors, road signs use different shapes, other languages use different punctuation, etc. I can accept that they all turn into trees on some level, but it makes it unreadable to represent that in the syntax. It's like writing all strings in hex notation. Yes, yes, I know they turn into numbers. Show me the strings anyway; and use quotes when you do it. Accountants use paper with green bars on every other line, so that your eye can *follow* it. The green bars don't do anything else, but they still help a lot with readability -- I've actually thought that other books, i.e. novels, might benefit from being printed this way. Good syntax needs to consider *ergonomic* concerns, not teach an important lesson about parse trees. Most importantly: if a chair hurts everyone that sits in it, but "only for the first year", it's not a good chair.

3. I'm not sure I buy that this is a myth per se.

The rest I'll grant as probably misconceptions, especially 6. I think the real clincher is that when people describe these kinds of issues the LISP community's response seems to be something like "get used to it and you won't notice it". Other languages may catch up with some of LISP's features, but what I consider to be the problems with LISP can't be fixed because they aren't seen as problems. Kevin
Feb 12 2007
parent reply X Bunny <xbunny eidosnet.co.uk> writes:
Kevin Bealer wrote:
 My personal feelings as to why Lisp isn't as popular as it could be are 
 some of these misconceptions:
 
 1) It's all functional code and recursion
 2) The syntax is weird and mindbending
 3) Lisp is interpreted and therefore slow
 4) It's hard to interface Lisp with non-Lisp libraries and operating 
 system services
 5) Lisp is old and hasn't changed since the 50's
 6) The features are really clever but they wouldn't be useful in a 
 'real' program
 7) It's for AI or 'weird' programs
 8) You have to be really clever to program in Lisp
 9) Lisp is poorly documented and hard for a beginner to understand
 10) It's irrelevant because we have Java/C++/something else now

I've looked at LISP a number of times but something always pushes me away, including some of the misconceptions here, however:

 
 2. The syntax doesn't provide visual hints that let you read a program.
 
 Web pages use different colors, road signs use different shapes, other 
 languages use different punctuation, etc.  I can accept that they all 
 turn into trees on some level, but it makes it unreadable to represent 
 that in the syntax.  It's like writing all strings in hex notation. Yes, 
 yes, I know they turn into numbers.  Show me the strings anyway; and use 
 quotes when you do it.

I kinda follow you in the first part of this, that different punctuation makes it obvious that something important is happening at that point, ie:

    (defun foo (x y) (+ x y))

is not syntactically different from:

    (deffoo frob (x y) (+ x y))

and so you might miss the point that the first expression is probably defining a function. Syntax highlighting in the editor is a simple solution to that, though. Really it's no different than in another language; when you see:

    Mixin!(Frob) Nuts;

has any of the syntax made it more obvious what it actually does? You know it's a Mixin, but so what; it's not the punctuation which matters, it's the actual 'stuff', the symbols. Lisp can be extended with extra punctuation, though, without having to rewrite Lisp, if you think it helps. This is valid syntax which looks nothing like Lisp if you use the infix package:

    #I(if x<y<=z then f(x)=x^^2+y^^2 else f(x)=x^^2-y^^2)

It's also worth noting that in Lisp the deffoo expression may or may not define a function (depending on whatever deffoo does). This is an important part of the language, though; I'm not sure how at all in D you could make a macro which defined and generated a new function based on some code as input.

Regarding the second part, it's worth considering at what point you consider syntactic sugar useful and why it is part of the language. Certainly inputting a string as a list of hex values would be impractical, but running with that, if you did have a Lisp which only internally supported strings as a list of numbers it wouldn't matter, since you could make a " macro character which generated the list for you. It's even arguably more useful than a system which only understands strings because it is hardcoded to generate a list of bytes when it sees text enclosed in quotes.
 
 Accountants use paper with green bars on every other line, so that your 
 eye can *follow* it.  The green bars don't do anything else, but they 
 still help a lot with readability -- I've actually thought that other 
 books, i.e. novels might benefit from being printed this way.  Good 
 syntax needs to consider *ergonomic* concerns, not teach an important 
 lesson about parse trees.

I think there's even an Emacs mode which colours matching parentheses a different colour as you nest statements. As for green bars: just my opinion, but if you have a function with so many statements that it needs green bars to delimit them, then it's probably too big. With Lisp or D I can break the code up, and probably would. (And use comments too!)
 
 Most importantly: If a chair hurts everyone that sits in it, but "only 
 for the first year" its not a good chair.

Is there a reason to sit on the chair that makes the initial discomfort seem worthwhile? Is there a more comfortable alternative chair which does everything this chair does? Is it painful because you have bad posture from sitting too long on a beanbag? :-)
 
 3. I'm not sure I buy that this is a myth per se.

If you mean it's true and therefore not a myth, then you are wrong. Almost all Common Lisp systems are compilers; most generate native code, and some don't include an interpreter at all, compiling interactively if you are programming interactively. Another misconception is that Lisp doesn't support typed variables and is therefore unable to generate fast, specific code like C or something else; again, simply not true.
 
 The rest I'll grant as probably misconceptions, especially 6.
 
 I think the real clincher is that when people describe these kinds of 
 issues the LISP community's response seems to be something like "get 
 used to it and you won't notice it".  Other languages may catch up with 
 some of LISP's features, but what I consider to be the problems with 
 LISP can't be fixed because they aren't seen as problems.

I reckon there probably is a future possibility for a language which encompasses Lisp's features but which is not Lisp. To be honest I don't really see it: since Lisp can already be extended to support additional syntax, those people who need to can implement it anyway, and it would still be Lisp.

It's worth noting that s-expressions (the Lisp syntax everyone but Lisp programmers has a problem with) weren't the end-all of Lisp syntax; there were also m-expressions, which were supposed to be more human-readable and were an original goal of the development. Programmers genuinely preferred sexps.

I think a nifty thing would be a D parser written in Lisp. The D parser would emit Lisp forms which would then be compiled like any other Lisp code; additionally it could then interact with the rest of Lisp, and if people felt the desire to write more off-the-wall stuff they could. D would remain what it is, and extensions could be as D-like as they wished. Probably the parser could be simpler than the complete D spec, since the template stuff would probably be replaced by macros. Who knows? Bunny
Feb 13 2007
parent reply Kevin Bealer <kevinbealer gmail.com> writes:
X Bunny wrote:
 Kevin Bealer wrote:
 2. The syntax doesn't provide visual hints that let you read a program.

 Web pages use different colors, road signs use different shapes, other 
 languages use different punctuation, etc.  I can accept that they all 
 turn into trees on some level, but it makes it unreadable to represent 
 that in the syntax.  It's like writing all strings in hex notation. 
 Yes, yes, I know they turn into numbers.  Show me the strings anyway; 
 and use quotes when you do it.

I kinda follow you in the first part of this, that different punctuation makes it obvious that something important is happening at that point, i.e.:

(defun foo (x y) (+ x y))

is not syntactically different from:

(deffoo frob (x y) (+ x y))

and so you might miss the point that the first expression is probably defining a function. Syntax highlighting in the editor is a simple solution to that, though. Really it's no different than in another language; when you see:

Mixin!(Frob) Nuts;

has any of the syntax made it more obvious what it actually does? You know it's a Mixin, but so what? It's not the punctuation that matters, it's the actual 'stuff', the symbols.

What I mean, is this (from the Computer Language Shootout):

(defun ack (x y) (declare (fixnum x y))
  (the fixnum (if (zerop x) (1+ y)
    (if (zerop y) (ack (1- x) 1)
      (ack (1- x) (ack x (1- y)))))))

as compared to this:

int Ack(int x, int y)
{
    if (x == 0) {
        return y + 1;
    } else if (y == 0) {
        return Ack(x-1, 1);
    } else {
        return Ack(x-1, Ack(x, y-1));
    }
}

These two things do the same thing in the same way, but the structure and syntax make the C example much more readable. If I want to know input and output types, I can find them. When I see the top of the if/else-if/else structure, I can find each part. It's laid out like a table, rather than like a ball of yarn. More importantly, I can understand *parts* of this code without understanding the whole thing.
 Lisp can be extended with extra punctuation though without having to 
 rewrite Lisp if you think it helps. This is valid syntax which looks 
 nothing like Lisp if you use the infix package:
 
 #I(if x<y<=z then f(x)=x^^2+y^^2 else f(x)=x^^2-y^^2)

The fact that you can rewrite the language in this way is powerful, but I think it would be better to start with a really readable syntax. The fact is that most people will use the 'base' syntax regardless of how it looks. Otherwise, you have the problem of coordinating with the author of every piece of code that you use and somehow getting them to adopt the better syntax. They won't, so you end up working in a kind of shanty-town syntax. There is strong pressure to find one 'style' of programming and use that -- for LISP the default syntax is already established.
 Regarding the second part, its worth considering at what point do you 
 consider syntactic sugar useful and why is it part of the language? 
 certainly inputting a string as a list of hex values would be 
 impractical but running with that if you did have a Lisp which only 
 internally supported strings as a list of numbers it wouldnt matter 
 since you could make a " macro character which generated the list for 
 you. Its even argueably more useful than a system which only understands 
 strings because it is hardcoded to generate a list of bytes when it sees 
 text enclosed in quotes.

When you need to interact with large libraries of functionality to do things, having to know ten different syntaxes that accomplish exactly the same thing all in the same code base is going to be really annoying.
 Accountants use paper with green bars on every other line, so that 
 your eye can *follow* it.  The green bars don't do anything else, but 
 they still help a lot with readability -- I've actually thought that 
 other books, i.e. novels might benefit from being printed this way.  
 Good syntax needs to consider *ergonomic* concerns, not teach an 
 important lesson about parse trees.

I think there's even an Emacs mode which colours matching parentheses a different colour as you nest statements. As for green bars: just my opinion, but if you have a function with so many statements that it needs green bars to delimit them, then it's probably too big. With Lisp or D I can break the code up, and probably would. (And use comments too!)

I can syntax highlight D or C++ and benefit from both kind of visual hints.
 Most importantly: If a chair hurts everyone that sits in it, but "only 
 for the first year" its not a good chair.

Is there a reason to sit on the chair that makes the initial discomfort seem worthwhile? Is there a more comfortable alternative chair which does everything this chair does? Is it painful because you have bad posture from sitting too long on a beanbag? :-)

This seems to be a very common view in the LISP community, 'programming in other languages has made you soft'. I don't believe in the power of 'hazing' to make better programmers. My point in this paragraph (I admit it wasn't clearly made), is this: If you ask 100 people about LISP syntax, at least 90 of them will say they don't like it and find it hard to use. The benefits you mention are real, but I don't buy that they come from the syntax itself.
 3. I'm not sure I buy that this is a myth per se.

If you mean it's true and therefore not a myth, then you are wrong. Almost all Common Lisp systems are compilers; most generate native code, and some don't include an interpreter at all, compiling interactively if you are programming interactively. Another misconception is that Lisp doesn't support typed variables and is therefore unable to generate fast, specific code like C or something else; again, simply not true.

I didn't mean the part about being interpreted. LISP code *tends* to be slower than C++, sometimes a little, sometimes a lot. For many types of code this is not too important, etc, but for some, it is crucial. http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=gpp&lang2=sbcl
 The rest I'll grant as probably misconceptions, especially 6.

 I think the real clincher is that when people describe these kinds of 
 issues the LISP community's response seems to be something like "get 
 used to it and you won't notice it".  Other languages may catch up 
 with some of LISP's features, but what I consider to be the problems 
 with LISP can't be fixed because they aren't seen as problems.

I reckon there probably is a future possibility for a language which encompasses Lisp's features but which is not Lisp. To be honest I don't really see it: since Lisp can already be extended to support additional syntax, those people who need to can implement it anyway, and it would still be Lisp.

It's worth noting that s-expressions (the Lisp syntax everyone but Lisp programmers has a problem with) weren't the end-all of Lisp syntax; there were also m-expressions, which were supposed to be more human-readable and were an original goal of the development. Programmers genuinely preferred sexps.

Interesting, I didn't know about m-expressions.
 I think a nifty thing would be a D parser written in Lisp, the D parser 
 would emit Lisp forms which would then be compiled like anyother Lisp 
 code, additionally it could then interact with the rest of Lisp and if 
 people felt the desire to write more off the wall stuff they could, D 
 would remain what it is, extensions could be as D like as they wished. 
 Probably the parser could be simpler than the complete D spec since 
 probably the template stuff would be replaced by macros. Who knows?
 
 Bunny

You can already go the other direction -- there is a LISP environment written in D on dsource. http://www.dsource.org/projects/dlisp Kevin
Feb 13 2007
parent reply X Bunny <xbunny eidosnet.co.uk> writes:
Kevin Bealer wrote:
 X Bunny wrote:
 Kevin Bealer wrote:
 2. The syntax doesn't provide visual hints that let you read a program.


(defun ack (x y) (declare (fixnum x y))
  (the fixnum (if (zerop x) (1+ y)
    (if (zerop y) (ack (1- x) 1)
      (ack (1- x) (ack x (1- y)))))))

as compared to this:

int Ack(int x, int y)
{
    if (x == 0) {
        return y + 1;
    } else if (y == 0) {
        return Ack(x-1, 1);
    } else {
        return Ack(x-1, Ack(x, y-1));
    }
}

These two things do the same thing in the same way, but the structure and syntax make the C example much more readable. If I want to know input and output types, I can find them. When I see the top of the if/else-if/else structure, I can find each part. It's laid out like a table, rather than like a ball of yarn.

If the C was indented like this, would it be as unreadable as the Lisp?

int ack(int x, int y) { if(x == 0) { return y + 1; } else if(y == 0) { return ack(x-1, 1); } else { return ack(x-1, ack(x, y-1)); } }

(I can't even match up all the brackets with that one!) My editor indents the Lisp like this:

(defun ack (x y)
  (declare (fixnum x y))
  (the fixnum
    (if (zerop x)
        (1+ y)
      (if (zerop y)
          (ack (1- x) 1)
        (ack (1- x) (ack x (1- y)))))))

The structure is no less obvious to me than the C. I can see the input and output types are clearly fixnums. The branches of the ifs are obvious.
 
 More importantly, I can understand *parts* of this code without 
 understanding the whole thing.

Mmmm, I don't know what to say about that; for me, with the Lisp, I can do the same. Bunny
Feb 13 2007
next sibling parent X Bunny <xbunny eidosnet.co.uk> writes:
X Bunny wrote:
 Kevin Bealer wrote:
 X Bunny wrote:
 Kevin Bealer wrote:
 2. The syntax doesn't provide visual hints that let you read a program.


(defun ack (x y) (declare (fixnum x y)) (the fixnum (if (zerop x) (1+ y) (if (zerop y) (ack (1- x) 1) (ack (1- x) (ack x (1- y)))))))


 My editor indents the Lisp like this:
 
 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))
 

hmm that looks exactly the same in my news reader, which means (a) my formatting got screwed up by the newsreader or server somewhere, and (b) the original was probably indented correctly also, before it was posted or when I read it. Therefore I guess you wouldn't agree that correct indenting made it as easily readable as the C; oh well then. It should have the true expression of the if offset four characters from the logical expression, and the else expression two characters.
Feb 13 2007
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
X Bunny wrote:
 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))
 
 The structure is no less obvious to me then the C. I can see the input 
 and output types are clearly fixnums. The branches of the ifs are obvious.

I see:
    1- x
in the Lisp code, and have to mentally translate it to:
    x - 1
and not:
    1 - x

This just hurts my brain.
Feb 13 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 X Bunny wrote:
 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))

 The structure is no less obvious to me then the C. I can see the input 
 and output types are clearly fixnums. The branches of the ifs are 
 obvious.

I see:
    1- x
in the Lisp code, and have to mentally translate it to:
    x - 1
and not:
    1 - x

This just hurts my brain.

Probably it's fair to say that the converse (hurting a LISPer brain with D code) is not hard to imagine either. Andrei
Feb 13 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 X Bunny wrote:
 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))

 The structure is no less obvious to me then the C. I can see the input 
 and output types are clearly fixnums. The branches of the ifs are 
 obvious.

I see:
    1- x
in the Lisp code, and have to mentally translate it to:
    x - 1
and not:
    1 - x

This just hurts my brain.

I thought Lisp used prefix notation, but the above syntax looks like element composition. Sean
Feb 13 2007
parent reply "cracki" <christoph.rackwitz gmail.removethispart.com> writes:
Sean Kelly wrote:
Walter Bright wrote:
I see:
    1- x
in the Lisp code, and have to mentally translate it to:
    x - 1
and not:
    1 - x

This just hurts my brain.

I thought Lisp used prefix notation, but the above syntax looks like element composition. Sean

nah. reading s-expressions needs some getting used to. you're reading serialized tree structure there, after all. the "1-" is the name of a function. instead of writing (1- x) you could of course define a function "subtract-one" that does the same and then write (subtract-one x), but that's not much more readable. you can of course write (- x 1) and any lisp compiler optimizes it.
Feb 13 2007
parent Sean Kelly <sean f4.ca> writes:
cracki wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 I see:
    1- x
 in the Lisp code, and have to mentally translate it to:
    x - 1
 and not:
    1 - x

 This just hurts my brain.

 I thought Lisp used prefix notation, but the above syntax looks like element composition. Sean

nah. reading s-expressions needs some getting used to. you're reading serialized tree structure there, after all. the "1-" is the name of a function. instead of writing (1- x) you could of course define a function "subtract-one" that does the same and then write (subtract-one x), but that's not much more readable. you can of course write (- x 1) and any lisp compiler optimizes it.

Oh okay. For some reason I was still thinking in terms of C symbol names. I'm going to have to remember that position is really all that matters in Lisp :-) Sean
Feb 14 2007
prev sibling parent Brad Anderson <brad dsource.org> writes:
Walter Bright wrote:
 X Bunny wrote:
 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))

 The structure is no less obvious to me then the C. I can see the input
 and output types are clearly fixnums. The branches of the ifs are
 obvious.

I see:
    1- x
in the Lisp code, and have to mentally translate it to:
    x - 1
and not:
    1 - x

This just hurts my brain.

The first atom is the function to operate on the rest of the args.

(defun subtract-one-from-it (x) (- x 1))

(subtract-one-from-it 43) -> 42

(defun 1- (x) (- x 1))

(1- 43) -> 42

Same thing; the first one is just more verbose. Maybe this was a bad example?

(+ 2 3 1) -> 6

(defun who-is-the-bomb (boorad kris sean_k) boorad)

BA
Feb 13 2007
prev sibling parent reply Kevin Bealer <kevinbealer gmail.com> writes:
X Bunny wrote:
 Kevin Bealer wrote:
 X Bunny wrote:
 Kevin Bealer wrote:
 2. The syntax doesn't provide visual hints that let you read a program.


(defun ack (x y) (declare (fixnum x y))
  (the fixnum (if (zerop x) (1+ y)
    (if (zerop y) (ack (1- x) 1)
      (ack (1- x) (ack x (1- y)))))))

as compared to this:

int Ack(int x, int y)
{
    if (x == 0) {
        return y + 1;
    } else if (y == 0) {
        return Ack(x-1, 1);
    } else {
        return Ack(x-1, Ack(x, y-1));
    }
}

These two things do the same thing in the same way, but the structure and syntax make the C example much more readable. If I want to know input and output types, I can find them. When I see the top of the if/else-if/else structure, I can find each part. It's laid out like a table, rather than like a ball of yarn.

If the C was indented like this would it be as unreadable as the Lisp? int ack(int x, int y) { if(x == 0) { return y + 1; } else if(y == 0) { return ack(x-1, 1); } else { return ack(x-1, ack(x, y-1)); } }

Yes - indenting badly makes it less readable in either case; I'm not sure if I was indenting the LISP example in a reasonable way. (I took it from someone else's page.)
 (I cant even match up all the brackets with that one!)
        
 My editor indents the Lisp like this:
 
 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))
 
 The structure is no less obvious to me then the C. I can see the input 
 and output types are clearly fixnums. The branches of the ifs are obvious.

Maybe one day it will be for me if I keep trying to do things in LISP, but I can't shake the feeling that I'm learning to shoe horses -- a skill that had its place and time.
 More importantly, I can understand *parts* of this code without 
 understanding the whole thing.

Mmmm, I don't know what to say about that; for me, with the Lisp, I can do the same. Bunny

Actually, I can understand the structure in both languages for this simple example without too much trouble. But the LISP one is a lot 'slower' for me to scan with my eyes. Some of that is a learned reflex, of course, but personally I don't think all of it is.

My point is less about indentation than about the other aspects of syntax. For one thing, the '(){};,' punctuation, which for me is a win over the LISP way (again, assuming both examples are indented). If the LISP syntax is readable for you throughout, then that's okay. For me it definitely isn't -- I think most people would agree.

Someone I know at work told me today that the LISP notation is very much like mathematical formulas. I couldn't help remembering math classes in college, when (at the time) I was always thinking, 'why can't they break this up into simple steps like a computer program would?' (Of course, I learned a lot about programming before I took algebra, so maybe that's part of the problem.)

Kevin
Feb 13 2007
next sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Kevin Bealer wrote:

 Someone I know at work told me today that the LISP notation is very much 
 like mathematical formulas.  I couldn't help remembering math classes in 
 college, when (at the time) I was always thinking, 'why can't they break 
 this up into simple steps like a computer program would?'  (Of course, I 
 learned a lot about programming before I took algebra, so maybe that's 
 part of the problem.)

I'm not sure what you mean. Mathematicians (and lisp coders) are champs of breaking things into simple steps. My problem is usually that there are *too* many simple steps so that the equation that ties it all together is a very uninformative x = f(y)~G, where f, y, G and the operator ~ are all things whose definitions are scattered over the previous 50 pages. With Lisp coders it's because it's so easy to pull out any (...) expression at any level and turn it into a new function when the current function starts to get too nested. --bb
Feb 13 2007
prev sibling next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Kevin Bealer wrote:
 X Bunny wrote:
 Kevin Bealer wrote:
 X Bunny wrote:
 Kevin Bealer wrote:
 2. The syntax doesn't provide visual hints that let you read a 
 program.


(defun ack (x y) (declare (fixnum x y))
  (the fixnum (if (zerop x) (1+ y)
    (if (zerop y) (ack (1- x) 1)
      (ack (1- x) (ack x (1- y)))))))

as compared to this:

int Ack(int x, int y)
{
    if (x == 0) {
        return y + 1;
    } else if (y == 0) {
        return Ack(x-1, 1);
    } else {
        return Ack(x-1, Ack(x, y-1));
    }
}

These two things do the same thing in the same way, but the structure and syntax make the C example much more readable. If I want to know input and output types, I can find them. When I see the top of the if/else-if/else structure, I can find each part. It's laid out like a table, rather than like a ball of yarn.

If the C was indented like this would it be as unreadable as the Lisp? int ack(int x, int y) { if(x == 0) { return y + 1; } else if(y == 0) { return ack(x-1, 1); } else { return ack(x-1, ack(x, y-1)); } }

Yes - indenting badly makes it less readable in either case; I'm not sure if I was indenting the LISP example in a reasonable way. (I took it from someone else's page.)
 (I cant even match up all the brackets with that one!)
        My editor indents the Lisp like this:

 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))

 The structure is no less obvious to me then the C. I can see the input 
 and output types are clearly fixnums. The branches of the ifs are 
 obvious.

Maybe one day it will be for me if I keep trying to do things in LISP, but I can't shake the feeling that I'm learning to shoe horses -- a skill that had its place and time.

Probably it's better comparable to harnessing a teleporting machine :o). Andrei
Feb 13 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Kevin Bealer wrote:
 X Bunny wrote:
 Kevin Bealer wrote:
 X Bunny wrote:
 Kevin Bealer wrote:
 2. The syntax doesn't provide visual hints that let you read a 
 program.


(defun ack (x y) (declare (fixnum x y))
  (the fixnum (if (zerop x) (1+ y)
    (if (zerop y) (ack (1- x) 1)
      (ack (1- x) (ack x (1- y)))))))

as compared to this:

int Ack(int x, int y)
{
    if (x == 0) {
        return y + 1;
    } else if (y == 0) {
        return Ack(x-1, 1);
    } else {
        return Ack(x-1, Ack(x, y-1));
    }
}

These two things do the same thing in the same way, but the structure and syntax make the C example much more readable. If I want to know input and output types, I can find them. When I see the top of the if/else-if/else structure, I can find each part. It's laid out like a table, rather than like a ball of yarn.

If the C was indented like this would it be as unreadable as the Lisp? int ack(int x, int y) { if(x == 0) { return y + 1; } else if(y == 0) { return ack(x-1, 1); } else { return ack(x-1, ack(x, y-1)); } }

Yes - indenting badly makes it less readable in either case; I'm not sure if I was indenting the LISP example in a reasonable way. (I took it from someone else's page.)
 (I cant even match up all the brackets with that one!)
        My editor indents the Lisp like this:

 (defun ack (x y)
   (declare (fixnum x y))
   (the fixnum
     (if (zerop x)
     (1+ y)
       (if (zerop y)
       (ack (1- x) 1)
     (ack (1- x) (ack x (1- y)))))))

 The structure is no less obvious to me then the C. I can see the input 
 and output types are clearly fixnums. The branches of the ifs are 
 obvious.

Maybe one day it will be for me if I keep trying to do things in LISP, but I can't shake the feeling that I'm learning to shoe horses -- a skill that had its place and time.
 More importantly, I can understand *parts* of this code without 
 understanding the whole thing.

Mmmm I dont what to say about that, for me with the Lisp I can do the same. Bunny

Actually, I can understand the structure in both languages for this simple example without too much trouble. But the LISP one is a lot 'slower' for me to scan with my eyes. Some of that is a learned reflex, of course, but personally I don't think all of it is.

My point is less about indentation than about the other aspects of syntax. For one thing, the '(){};,' punctuation, which for me is a win over the LISP way (again, assuming both examples are indented). If the LISP syntax is readable for you throughout, then that's okay. For me it definitely isn't -- I think most people would agree.

Someone I know at work told me today that the LISP notation is very much like mathematical formulas. I couldn't help remembering math classes in college, when (at the time) I was always thinking, 'why can't they break this up into simple steps like a computer program would?' (Of course, I learned a lot about programming before I took algebra, so maybe that's part of the problem.)

Probably this might be helpful. I think I should give it a shot, too:

http://srfi.schemers.org/srfi-49/srfi-49.html

A big-time Scheme champion (he wrote an entire Scheme system and two dozen great papers) admitted to me that syntax is a big hurdle for acceptance and that he is looking into offering alternatives. The advantage is that they come from the "right" place. Unix started as a system for professionals with security, flexibility, etc. in place, and it's much harder to make Windows, which started as a consumer product, get to the same level. Andrei
Feb 13 2007
parent renoX <renosky free.fr> writes:
Andrei Alexandrescu (See Website For Email) a écrit :
 Probably this might be helpful. I think I should give it a shot, too:
 
 http://srfi.schemers.org/srfi-49/srfi-49.html
 
 A big-time Scheme champion (wrote an entire Scheme system and two dozens 
 of great papers) admitted to me that syntax is a big hurdle for 
 acceptance and that he is looking into offering alternatives. The 
 advantage is that they come from the "right" place. Unix started as a 
 system for professionals with security, flexibility, etc. in place, and 
 it's much harder to make Windows, which started as a consumer product, 
 get to the same level.

I'm not sure SRFI-49 helps: I find it worse than the original syntax. Sure, there are no more parentheses, but now functions are spread over too many lines. I remember one article where the author used a very light grey (over a white background) for the parentheses to make them less noticeable. With correct indentation, this presentation trick makes the program far more readable for non-Lispers. renoX
 
 
 Andrei

Feb 14 2007
prev sibling parent X Bunny <xbunny eidosnet.co.uk> writes:
Kevin Bealer wrote:
 
 Actually, I can understand the structure in both languages for this 
 simple example without too much trouble.  But the LISP one is a lot 
 'slower' for me to scan with my eyes.  Some of that is a learned reflex, 
 of course, but personally I don't think all of it is.
 
 My point is less about indentation than the other aspects of syntax. For 
 one thing, the '(){};,' punctuation, which for me, is a win over the 
 LISP way (again, assuming both examples are indented).  If the LISP 
 syntax is readable for you throughout, then that's okay.  For me it 
 definitely isn't -- I think most people would agree.

I certainly see your point and accept it. I think it's worth noting that the Lisp 'if' isn't necessarily the same as the C-like one, though; I suppose it's actually closer to the ternary operator. This Lisp expression (quite possibly the indentation will get screwed again, in which case I'm sorry):

(let ((k (if (if (zerop x)
                 (foo 100)
               (bar 100))
             (zen 30)
           (wibble q)))))

would be like this C (I think!):

k = (((x == 0) ? foo(100) : bar(100)) ? zen(30) : wibble(q));

Using the structured C 'if' you get:

int t;
if (x == 0)
    t = foo(100);
else
    t = bar(100);

int k;
if (t)
    k = zen(30);
else
    k = wibble(q);

I prefer the Lisp: the ternary version is bizarre to read, and the structured-if version fails to properly show how the first if expression controls the second; you have to memorize (and use, for that matter) the temporary variable which represents the nested expression.

Regarding how it's useful to have different bracket types to delimit the layout, I think that's a decent point. I'm pretty sure there is an editor mode which colours each nested level of parens a different colour; I think that would achieve a similar thing.
 
 Someone I know at work told me today that the LISP notation is very much 
 like mathematical formulas.  I couldn't help remembering math classes in 
 college, when (at the time) I was always thinking, 'why can't they break 
 this up into simple steps like a computer program would?'  (Of course, I 
 learned a lot about programming before I took algebra, so maybe that's 
 part of the problem.)

It's an interesting observation. I learnt algebra before computer programming, and also learnt computer programming without a specific language: my introduction to Computer Science was entirely theoretical; we learnt all about computers and programming without ever actually touching a computer (seems kinda bizarre now I think about it - we did have computers!). I didn't start programming seriously until after I left school. My first practical programming experience was Sinclair Basic, then some Lispish language on a programmable calculator, then Zortech C. I always preferred to write math formulas in something like Lisp (i.e. prefix, no precedence rules, no funky math syntax); my teachers hated it. When I first tried to learn C it took me ages to get my head around all the different bits of syntactic sugar, precedence rules and unexpected corner cases. I don't have a problem with that now, but certainly if I do have something mathematical to write or a complicated algorithm to devise, I often sketch it out in Lispish form first, or fire up Lisp and program until I get an understanding of how it's going to work before starting the C++. Bunny
Feb 14 2007
prev sibling parent reply Johan Granberg <lijat.meREM OVE.gmail.com> writes:
X Bunny wrote:

 Bill Baxter wrote:
 Besides, many people naturally think recursively,

The statement was about why LISP is never going to be wildly popular. There may very well be "many" people who naturally think recursively, but if they're not a majority then that's a hurdle to LISP becoming popular.
 and many problems (e.g. parsing) can be easiest thought of that way.

Sure. However, you can write recursive algorithms in most any procedural language to handle those naturally recursive tasks when they come up. With Lisp or <my-favorite-functional-language> you're pretty much /forced/ to look at everything recursively. And I think that makes joe coder nervous, thus presenting a major hurdle to any functional language ever becoming truly popular.

You haven't qualified what you mean by Lisp; assuming that means Common Lisp (CL), then you are uninformed: CL offers a number of iteration constructs (they are implemented as macros, as a great deal of the language is). dolist is very simple; loop is almost a language within itself. A commonly used package is called iterate, which is a popular alternative to loop. Also, CL is not just a functional language: like D, CL is multiparadigm. With CL you can easily write procedural code with side effects, like C++ or D, or you can use object-oriented, aspect-oriented, functional, logic, pattern-matching and many other programming concepts. What's more, you can easily add other programming concepts should you like.
 
 My point is just that I don't think syntax is the *only* thing that's
 prevented lisp from becoming wildly popular.  If that were the case then
 the answer would be to simply create a different syntax for Lisp.
 (Actually, according to someone's comment here
 http://discuss.fogcreek.com/newyork/default.asp?cmd=show&ixPost=1998
 it's been done and it's called Dylan, another not-wildly popular
 language).  So I think the problem is more fundamental.
 

People often try to modify Lisp to have a syntax more familiar to C-language programmers, Dylan aside. It seems it's often people who are new to Lisp who do this; after they learn more of Lisp they can see why it is the way it is, and although initially alien, its design makes the language easier to use. I'm not actually convinced that you can make a language with the features of CL without its syntax. There is also an infix package which allows you to use infix notation for things like math formulas. My personal feelings as to why Lisp isn't as popular as it could be are some of these misconceptions:

1) It's all functional code and recursion
2) The syntax is weird and mindbending
3) Lisp is interpreted and therefore slow
4) It's hard to interface Lisp with non-Lisp libraries and operating system services
5) Lisp is old and hasn't changed since the 50's
6) The features are really clever but they wouldn't be useful in a 'real' program
7) It's for AI or 'weird' programs
8) You have to be really clever to program in Lisp
9) Lisp is poorly documented and hard for a beginner to understand
10) It's irrelevant because we have Java/C++/something else now

It's interesting to contrast these with D, which tends to present an image which is the opposite of many of these notions. Regarding the syntax issue; is this suggestion (from an earlier post):

AddExpressionTemplate!(
    MultExpressionTemplate!(
        SubExpressionTemplate!(Vector, Vector),
        Vector),
    Vector)

any more readable than this?

(defmacro mymacro (a b c d)
  (+ (* (- a b) c) d))

Bunny

From my attempt to learn Lisp some months ago, I think that at least 4 and 9 are true (please correct me if I'm wrong). A big part of that issue is that the different implementations seem to do things slightly differently, which fragments the community and causes libraries to require specific implementations (I'm mainly talking about the C interface here, as that is what I looked at). Regarding 9: if you know a good Lisp tutorial, please post a link (especially covering how to build code, and the quote special syntax).
Feb 13 2007
parent reply X Bunny <xbunny eidosnet.co.uk> writes:
Johan Granberg wrote:
 X Bunny wrote:
 My personal feelings as to why Lisp isnt as popular as it could be are
 some of these misconceptions:

 1) Its all functional code and recursion
 2) The syntax is weird and mindbending
 3) Lisp is interpreted and therefore slow.
 4) Its hard to interface Lisp with non Lisp libraries and operating
 system services.
 5) Lisp is old and hasnt changed since the 50's
 6) The features are really clever but they wouldnt be useful in a 'real'
 program
 7) Its for AI or 'weird' programs
 8) You have to be really clever to program in Lisp
 9) Lisp is poorly documented and hard for a beginner to understand
 10) Its irrelevant because we have Java/C++/something else now

is true (please correct me if I'm wrong). A big part of that issue is that it looks like the different implementations does things slightly differently and that creates a fragmentation of the community and causes libraries to require specific implementations (I'm mainly talking of the C interface here as that is what I looked at). Regarding 9 if you know a good lisp tutorial please post a link. (especially how to build code and the quote special syntax)

I do admit (4) is largely debatable; compared to D and many other successful languages it's no harder, but certainly your point that different implementations have different ways of doing it is true, and it is not mandated by the ANSI standard for the language. However, the CFFI library abstracts these differences and supports many implementations. Regarding (9), a well-regarded book is Practical Common Lisp (the complete text is online at http://www.gigamonkeys.com/book/ ). My reasons why I don't use Lisp as often as I would like are:

1) Deploying Lisp applications can be difficult; huge executables, if you can get your implementation to produce a standalone image at all.
2) Dependence on third-party C++ libraries - DirectShow BaseClasses, sigh :-( C++ libraries aren't even compatible between C++ compilers, never mind with Lisp!
3) Too general - no matter how good Lisp is at doing everything, there are languages written for a specific task which are probably better than Lisp within a limited niche. So long as you don't exceed the niche, there's no need for Lisp. Definitely there could be a Lisp system which blows the niche language away; there probably isn't, and if there is, does using it outweigh the other points?
4) No company support - the boss is scared that if I die, no one will maintain it, as Lisp programmers are fairly rare.
5) Not a team player - by being subject to the above points, Lisp starts to get pretty outcast in your toolbox, because you know whatever you write with it might only be usable in projects which can avoid those points too.

Apart from (1), these are also reasons why I don't get to use D as often as I would like. Bunny
Feb 13 2007
parent Walter Bright <newshound digitalmars.com> writes:
X Bunny wrote:
 My reasons why I dont use Lisp as often as I would like are:
 1) Deploying Lisp applications can be difficult; huge exes if you can 
 get your implementation to produce a standalone image atall.
 2) Dependance on third party C++ libraries - DirectShow BaseClasses sigh 
 :-( C++ libraries arent even compatible between C++ compilers nevermind 
 to Lisp!
 3) Too general - no matter how good Lisp is at doing everything there 
 are languages written for a specific task which are probably better than 
 Lisp for it within a limited niche. So long as you dont exceed the niche 
 theres no need for Lisp. Definately there could have been a Lisp system 
 which blows it away, there probably isnt, if there is does using it 
 outweight the other points.
 4) No company support - the boss is scared if I die no-one will maintain 
 it as Lisp programmers are fairly rare.
 5) Not a team player - by being subject to the above points Lisp starts 
 to get pretty outcast in your toolbox because you know whatever you 
 write with it might also only be useable in projects which can avoid 
 those points too.
 
 Apart from (1) these are also reasons why I dont get to use D as often 
 as I would like.

(4) is not as large a problem as it might seem. Unlike Lisp, D can be picked up very quickly by an experienced C++/Java programmer.
Feb 13 2007
prev sibling parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 kris wrote:

 5) Lisp gets things right, according to what I've read from heavy 
 Lisp users, by being a language that can be modified on the fly to 
 suit the task at hand, in other words, by having a customizable 
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

And the recursion. People just don't naturally think recursively. And the lack of mutable data structures. OCaml tried to fix that, but OCaml's probably always going to be niche as well (see first point).

LISP does have mutation. Besides, many people naturally think recursively, and many problems (e.g. parsing) can be easiest thought of that way. Andrei

One nit: I agree with Walter here. People do *not* "naturally think recursively". Computer Scientists, most definitely. Developers, likely. People who make Russian dolls for a living, perhaps. Normal people, not a chance. I'd argue that most folks can't even spell the word, much less know what it means. Proof? Well, how many people go about defining things in terms of the very things they're trying to define? -- - EricAnderton at yahoo
Feb 12 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Pragma wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 kris wrote:

 5) Lisp gets things right, according to what I've read from heavy 
 Lisp users, by being a language that can be modified on the fly to 
 suit the task at hand, in other words, by having a customizable 
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

And the recursion. People just don't naturally think recursively. And the lack of mutable data structures. OCaml tried to fix that, but OCaml's probably always going to be niche as well (see first point).

LISP does have mutation. Besides, many people naturally think recursively, and many problems (e.g. parsing) can be easiest thought of that way. Andrei

One nit: I agree with Walter here. People do *not* "naturally think recursively". Computer Scientists, most definitely. Developers, likely. People who make Russian dolls for a living, perhaps. Normal people, not a chance. I'd argue that most folks can't even spell the word, much less know what it means. Proof? Well, how many people go about defining things in terms of the very things they're trying to define?

Hard to define a tree otherwise :o). But I do agree with your point. Andrei
Feb 12 2007
prev sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Pragma wrote:
 One nit: I agree with Walter here.  People do *not* "naturally think
 recursively".  Computer Scientists, most
 definitely. Developers, likely.  People who make Russian dolls for a
 living, perhaps.  Normal people, not a chance.  I'd argue that most folks
 can't even spell the word, much less know what it means.
 
 Proof?  Well, how many people go about defining things in terms of the
 very things they're trying to define?

I'm not sure this is true. Many simple tasks are described as 'do X to Y until Z,' which can be expressed easily as a recursive procedure. I doubt non-programmers would think of a task as iterating over a collection of objects with mutable state either. :) Sometimes I get the feeling that recursion is made out to be more difficult than it is, for programming I mean. As an average Joe I do like to learn more, but when you look up material on the use of recursion, the vast bulk of it pretty much assumes you already have pretty good programming knowledge, and it often requires mathematics too. </rant> Recursion just has a nerdy aura around it. It probably doesn't help that most programmers are taught to think iteratively, too.
Feb 12 2007
parent reply renoX <renosky free.fr> writes:
Lutger a écrit :
 Pragma wrote:
 One nit: I agree with Walter here.  People do *not* "naturally think
 recursively".


 I'm not sure this is true. Many simple tasks are described as 'do X to Y
 until Z,' which can be expressed easily as a recursive procedure.

And even more easily as a loop.
 I doubt non-programmers would think of a task as iterating over a collection of
 objects with mutable state either. :)

Uh?? Take a bunch of potatoes, peel them. There you are: people are very accustomed to iterating over a bunch of things, changing their state.
 Sometimes I get the feeling that recursion is made out to be more difficult
 than it is, for programming I mean. As an average Joe I do like to learn
 more, but when you look up material on the use of recursion, the vast bulk
 of it pretty much assumes you at least have a pretty good programming
 knowledge and often require mathematics too. </rant>

Agreed here. Recursion is not very complicated, just not 'natural'.
 Recursion just has a nerdy aura around it. It probably doesn't help that
 most programmers are learned to think iteratively too.

And it doesn't help recursion that when you transform a recursive function into a tail-recursive function so that it's not too slow, the result is ugly! Note that recursion is still a useful tool: when you learn about the way to optimize 'Conway's game of life' (couldn't find the url, sorry), it really shows the power of tree representation/recursion, but it also shows how tricky this is. renoX
Feb 13 2007
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
renoX wrote:

 Lutger a écrit :
 Pragma wrote:
 One nit: I agree with Walter here.  People do *not* "naturally think
 recursively".


 I'm not sure this is true. Many simple tasks are described as 'do X to Y
 until Z,' which can be expressed easily as a recursive procedure.

And even more easily as a loop.
 I doubt non-programmers would think of a task as iterating over a
 collection of objects with mutable state either. :)

Uh?? Take a bunch of potatoes, peel them. Here you are: people are very accustomed to iterate over a bunch of things, changing their state.

Grab a potato, peel it. Repeat the procedure until done, and cook the results... This is an iterative process anyhow, but I still think recursion is a natural way to think about tasks. It's basically just repetition + a stop condition.
 Recursion just has a nerdy aura around it. It probably doesn't help that
 most programmers are learned to think iteratively too.

And it doesn't help recursion that when you transform a recursive function to a tail recursive function so that it's not too slow, the result is ugly!

How is that so? Could you give an example? I'm not very familiar with recursive programming actually...
Feb 13 2007
parent renoX <renosky free.fr> writes:
Lutger a écrit :
 renoX wrote:
 Recursion just has a nerdy aura around it. It probably doesn't help that
 most programmers are learned to think iteratively too.

function to a tail recursive function so that it's not too slow, the result is ugly!

How is that so? Could you give an example? I'm not very familiar with recursive programming actually...

A "natural" factorial:

fact(n):
    if (n <= 1) return 1;
    else return n * fact(n - 1);

Here fact is not tail recursive, because the last operation is the multiplication, not the call to fact. So if you want an efficient implementation you have to rewrite it as:

fact(n) = fact2(1, n);

fact2(acc, n):
    if (n <= 1) return acc;
    else return fact2(acc * n, n - 1);

Bleach. That's fugly. renoX
Feb 13 2007
prev sibling next sibling parent reply Michiel <nomail hotmail.com> writes:
 The C++ version is even *worse* than the C one (for wordiness bother):

  for (std::vector<T>::const_iterator i = e.begin(); i != e.end(); i++)
  {  T v = *i;
  ... }

 I mean I know the reasons for every bit of the syntax there, and in
 isolation they make sense, but put it all together and it seems to go
 backwards.

Hehe, sure. But that's like the worst possible way to do it. :)

* Most people pull in the std namespace (or at least the std::vector part) with a using declaration.
* The variable i CAN be declared inside the loop, but it doesn't have to be; I often declare it at the beginning of the function. Granted, this doesn't make the overall code smaller, but it does make it neater.
* Inside the loop you rarely have to make a copy like you do; you can just use i->member() or *i wherever you need them.

So it's actually:

for (i = e.begin(); i != e.end(); ++i) { ... }

Of course, the D foreach loop is still much neater (and I love it). But only if you really want to visit all elements of an array in a row. However, if you want to walk through two AAs at the same time (comparing keys and values, for example), how do you do that in D? Maybe there is a way I haven't found yet (I've only been working with D for a few weeks), but it looks to me like much more bother than with C++ iterators.
Feb 12 2007
next sibling parent janderson <askme me.com> writes:
Michiel wrote:
 The C++ version is even *worse* than the C one (for wordiness bother):

  for (std::vector<T>::const_iterator i = e.begin(); i != e.end(); i++)
  {  T v = *i;
  ... }

 I mean I know the reasons for every bit of the syntax there, and in
 isolation they make sense, but put it all together and it seems to go
 backwards.

Hehe, sure. But that's like the worst possible way to do it. :) * Most people make the std namespace public. Or at least the std::vector part. * The variable i CAN be declared inside the loop, but it doesn't have to be. I often do this at the beginning of a function. Granted, this doesn't make the overall code smaller, but it does make it neater. * Inside the loop, you rarely have to make a copy like you do. You can just use i->member() or *i wherever you need them. So it's actually: for (i = e.begin(); i != e.end(); ++i) { ... } Of course, the D foreach loop is still much neater (and I love it). But only if you really want to visit all elements of an array in a row. However, if you want to walk through two AA's at the same time (comparing keys and values, for example), how do you do that in D? Maybe there is a way I haven't found yet (I've only been working with D for a few weeks), but it looks to me like much more bother than with C++ iterators.

You're right; all that extra work that a simple for-loop causes in C++ is a PITA. -Joel
Feb 12 2007
prev sibling parent Sean Kelly <sean f4.ca> writes:
Michiel wrote:
 The C++ version is even *worse* than the C one (for wordiness bother):

  for (std::vector<T>::const_iterator i = e.begin(); i != e.end(); i++)
  {  T v = *i;
  ... }

 I mean I know the reasons for every bit of the syntax there, and in
 isolation they make sense, but put it all together and it seems to go
 backwards.

Hehe, sure. But that's like the worst possible way to do it. :) * Most people make the std namespace public. Or at least the std::vector part.

No, that's the worst possible way to do it :-)
 Of course, the D foreach loop is still much neater (and I love it). But only if
 you really want to visit all elements of an array in a row. However, if you
want
 to walk through two AA's at the same time (comparing keys and values, for
 example), how do you do that in D? Maybe there is a way I haven't found yet
(I've
 only been working with D for a few weeks), but it looks to me like much more
 bother than with C++ iterators.

For what it's worth (probably not much), C++ does have a for_each template. The problem is that it doesn't accept arbitrary code like the D foreach does, which gave rise to Boost's lambda library. Sean
Feb 12 2007
prev sibling next sibling parent reply kris <foo bar.com> writes:
Walter Bright wrote:
 kris wrote:
 
 Walter Bright wrote:

 3) Less code == more productivity, less bugs. I don't mean 
 gratuitously less code, I mean less code in the sense that one can 
 write directly what one means, rather than a lot of tedious bother. 
 For example, if I want to visit each element in an array:

     foreach(v; e)
     {...}

 is more direct than:

     for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
     { T v = e[i];
      ... }

Yep, that's great! One of the reasons I like D so much, along with array slicing.

The C++ version is even *worse* than the C one (for wordiness bother): for (std::vector<T>::const_iterator i = e.begin(); i != e.end(); i++) { T v = *i; ... } I mean I know the reasons for every bit of the syntax there, and in isolation they make sense, but put it all together and it seems to go backwards.

Not so hard to argue that example is actually C-- :)
 
 
 5) Lisp gets things right, according to what I've read from heavy 
 Lisp users, by being a language that can be modified on the fly to 
 suit the task at hand, in other words, by having a customizable 
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

Interestingly, I used to have lunch with one of the original designers. My understanding was that's not how it was viewed from the 'inside'.
 
 
 7) A lot of companies have outlawed C++ templates, and for good 
 reason. I believe that is not because templates are inherently bad. I 
 think that C++ templates are a deeply flawed because they were 
 ***never designed for the purpose to which they were put***.

Agreed. But the issue is not about how badly they're flawed. Instead, it's the non-standard "language" problem. The MyDSL problem :)

I disagree with that. When you write a program using classes and functions, you *are* creating your own language. Classes are your custom types, and functions are your custom operators.

So, the heart of the matter, the one that causes C++ templates to be outlawed, is perhaps their /potential/ for abuse? More on this later ...
 
 
 8) I've never been able to create usable C++ templates. Notice that 
 the DMD front end (in C++) doesn't use a single template. I know how 
 they work (in intimate detail) but I still can't use them.

Same here.

But I have been able to create usable D templates <g>.

Me too.
 
 
 I also look at some D templates for ages, and still can't figure out 
 just how they work. Don Clugston is known around here as the Template 
 Ninja -- the very name itself shouts out "Here Dwell Demons!" :-D

The very fact that Don's called the Template Ninja is a problem - after all, there is no "function ninja", no "typedef ninja", no "+ ninja". It's a sign that templates are still not easy enough to use.

Glad that you recognize the latent concern.
 
 It's like Paul Mensonidas being recognized as the "World's Leading 
 Expert on the C Preprocessor." Obviously, something is seriously wrong 
 with the preprocessor if there's an ecological niche for a world's 
 leading expert on it. (By the way, Paul is a very nice fellow and has 
 been kind enough to help me iron out several subtle bugs in the DMC++ 
 preprocessor. As long as we're saddled with that preprocessor spec, I'm 
 glad there is a Paul to help!)
 

Right :)
 
 12) Take a look at what Kirk McDonald is doing with Pyd. He needs all 
 this stuff to make it slicker than oil on ground steel. He's on the 
 bleeding edge of stuff D needs to *make* pedestrian.

Certainly :) What Kirk has been doing (much awesometude there) is one of those things that fit into the "narrow focus" or "speciality" field that /can/ benefit in some manner. But it's a black box. When it works, nobody will fuss with the insides unless they /really/ have to. That's not how most commercial software is done today, regardless of all the efforts to make it more like circuit design.

Even if D fails to make metaprogramming easy for average joe coders to use, if it still can be used by experts to create useful black box code like Pyd, then it is worthwhile. After all, even though trig functions are very hard to write, they are easily used by joe coders as black box components without any problems.

I think this is where you've perhaps been missing my point? Let me try an example -- hrm -- you're into cars, right? Ok ... I have a sleek and powerful car, rebuilt from the ground up. It dynos at around 650 ft/lbs & 625 bhp at the wheels, with full boost at ~half the rpm range. This is something that can get you into serious trouble rather quickly if you fail to treat it with the utmost respect. Because of this, I have a little switch in the glovebox: it changes the fuel map, the timing map, and drops max boost from 30psi to 7psi. I call it the "valet switch" -- it's used for that purpose and for inclement weather.

Going back to the original point about pedestrian code, costs and so on: these shops who outlaw C++ templates have to, in effect, "police" their own codebase. That's no easy task. So here's a suggestion that you may be able to do something with ... If you were to enable a "valet switch" in the compiler, it wouldn't be so hard to present that as an outright /benefit/ for dev shops <g>. In other words, the dev shop would have a means to enforce whatever policy they choose, while the compiler acts as the arbitrator. "Mainstream code has to compile with the valet switch" sort of thing.

Of course, the trick would be to find a reasonable tripping point where the abuse potential starts to require some serious respect, like the vehicle noted above. However, it could be done; perhaps with a 'level'? I understand you're not crazy about switches and so on, but please consider it ... it could potentially be responsible for alleviating adoption fears. That's a pretty darned powerful switch.

I hope this clarifies that I'm not pushing for fewer goodies; I'm pushing for responsible usage of them. Enabling that would surely garner D some respect from those people ultimately responsible for "letting" it through the door.

- Kris
Feb 12 2007
next sibling parent reply Sean Kelly <sean f4.ca> writes:
kris wrote:
 
 Of course, the trick would be to find a reasonable tripping point where 
 the abuse-potential starts to require some serious respect; like the 
 vehicle noted above. However, it could be done; perhaps with a 'level'? 
 I understand you're not crazy about switches and so on, but please 
 consider it ... it could potentially be responsible for alleviating 
 adoption fears. That's a pretty darned powerful switch.

I think this is a very good idea, and if some reasonable means of implementing it could be found, then it might actually aid D's adoption in conservative circles without prohibiting powerful meta features from being added. What fuels my concern is that because the new mixin/import features are so general, it is as easy to envision horrors rivaling the worst of C macro code as it is to envision elegant and practical applications. I suppose this is why I've been kind of hoping an alternate solution would present itself :-) I think the general idea is fantastic, but these are the first features in D that I might actually be inclined to prohibit in certain development environments. The potential for abuse undermines a lot of what appeals to me about D: elegance, clarity, etc. At the same time, I'm excited about the direction in which things are progressing. Sean
Feb 12 2007
parent Pragma <ericanderton yahoo.removeme.com> writes:
Sean Kelly wrote:
 kris wrote:
 Of course, the trick would be to find a reasonable tripping point 
 where the abuse-potential starts to require some serious respect; like 
 the vehicle noted above. However, it could be done; perhaps with a 
 'level'? I understand you're not crazy about switches and so on, but 
 please consider it ... it could potentially be responsible for 
 alleviating adoption fears. That's a pretty darned powerful switch.

I think this is a very good idea, and if some reasonable means of implementing it could be found then it might actually aid D's adopting in conservative circles without prohibiting powerful meta features from being added. What fuels my concern is that because the new mixin/import features are so general, it is as easy to envision horrors rivaling the worst of C macro code as it is to envision elegant and practical applications.

It's funny you mention that. Ever since the start, I've been trying to figure out how one could have an "obfuscated code contest" using D. Until recently, it was only in the far corners of my imagination since you could only abuse operator overloads and identifier names. For better or for worse, this just became a very real possibility - mixin("foo") evaluates recursively, right?
 I suppose this is why I've been kind of hoping an alternate solution would 
 present itself :-)  I think the general idea is fantastic, but these are 
 the first features in D that I might actually be inclined to prohibit in 
 certain development environments.  The potential for abuse undermines a 
 lot of what appeals to be about D: elegance, clarity, etc.  At the same 
 time, I'm excited about the direction in which things are progressing.

I agree with Sean and Kris on this - a goalie of some kind might be a nice attractor for project management. However, I wonder if this is truly something that fits the role of the D compiler itself. Isn't this more of a coding-policy-enforcement tool, akin to something that manages coding style, whitespace, etc? -- - EricAnderton at yahoo
Feb 12 2007
prev sibling parent Walter Bright <newshound digitalmars.com> writes:
kris wrote:
 Of course, the trick would be to find a reasonable tripping point where 
 the abuse-potential starts to require some serious respect; like the 
 vehicle noted above. However, it could be done; perhaps with a 'level'? 
 I understand you're not crazy about switches and so on, but please 
 consider it ... it could potentially be responsible for alleviating 
 adoption fears. That's a pretty darned powerful switch.

I don't believe such a switch belongs in the compiler itself. I think it belongs in a 3rd party tool, where it can be customized to match the coding standards of the organization. That's why D is designed to be easy to parse - it makes it practical to build such custom tools.
Feb 12 2007
prev sibling parent reply James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 kris wrote:
 Walter Bright wrote:
 3) Less code == more productivity, less bugs. I don't mean
 gratuitously less code, I mean less code in the sense that one can
 write directly what one means, rather than a lot of tedious bother.
 For example, if I want to visit each element in an array:

     foreach(v; e)
     {...}

 is more direct than:

     for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
     { T v = e[i];
      ... }

Yep, that's great! One of the reasons I like D so much, along with array slicing.

The C++ version is even *worse* than the C one (for wordiness bother): for (std::vector<T>::const_iterator i = e.begin(); i != e.end(); i++) { T v = *i; ... }

C++ can, of course, also do (with type-safety) for (size_t i = 0; i < size(e); ++i)
 I mean I know the reasons for every bit of the syntax there, and in
 isolation they make sense, but put it all together and it seems to go
 backwards.

C++, of course, has std::for_each(e.begin(), e.end(), do_x); in its library (though that's weaker than it could be because of lack of support for convenient anonymous functions/lambdas). C++0x is very likely to have for(v: e). It's implemented in ConceptGCC already. Java already has essentially that, as does C#. This really doesn't set D apart (but at least D isn't falling behind here).
 5) Lisp gets things right, according to what I've read from heavy
 Lisp users, by being a language that can be modified on the fly to
 suit the task at hand, in other words, by having a customizable
 language one can achieve dramatic productivity gains.

Yet, Lisp will always remain a niche language. You have to wonder why.

I'm pretty sure it's the syntax.

Yup, syntax does matter. [snip]
 8) I've never been able to create usable C++ templates. Notice that
 the DMD front end (in C++) doesn't use a single template. I know how
 they work (in intimate detail) but I still can't use them.

Same here.

But I have been able to create usable D templates <g>.

The problems are mostly syntactical; writing C++ templates isn't much harder, by and large, than writing other robust, reusable, flexible code. (Which is hard, by most measures.) Familiarity with Lisp does help when working with C++ templates, it seems. That might also be true for templates in D. [snip]
 It's like Paul Mensonidas being recognized as the "World's Leading
 Expert on the C Preprocessor." Obviously, something is seriously wrong
 with the preprocessor if there's an ecological niche for a world's
 leading expert on it. (By the way, Paul is a very nice fellow and has
 been kind enough to help me iron out several subtle bugs in the DMC++
 preprocessor. As long as we're saddled with that preprocessor spec, I'm
 glad there is a Paul to help!)

I'll second that, though it's offtopic. My knowledge of the dark corners of the C++ preprocessor was heavily influenced by Paul's writing. (Most of what I learned just reconfirmed the view that this tool is a beast.) -- James
Feb 12 2007
next sibling parent reply James Dennett <jdennett acm.org> writes:
[With apologies for following on from my own post:]

James Dennett wrote:
 Walter Bright wrote:
 kris wrote:
 Walter Bright wrote:
 3) Less code == more productivity, less bugs. I don't mean
 gratuitously less code, I mean less code in the sense that one can
 write directly what one means, rather than a lot of tedious bother.
 For example, if I want to visit each element in an array:

     foreach(v; e)
     {...}

 is more direct than:

     for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
     { T v = e[i];
      ... }

array slicing.

    for (std::vector<T>::const_iterator i = e.begin(); i != e.end(); i++)
    { T v = *i;
      ... }

C++ can, of course, also do (with type-safety)

    for (size_t i = 0; i < size(e); ++i)
 I mean I know the reasons for every bit of the syntax there, and in
 isolation they make sense, but put it all together and it seems to go
 backwards.

C++, of course, has std::for_each(e.begin(), e.end(), do_x); in its library (though that's weaker than it could be because of lack of support for convenient anonymous functions/lambdas). C++0x is very likely to have for(v: e). It's implemented in ConceptGCC already. Java already has essentially that, as does C#. This really doesn't set D apart (but at least D isn't falling behind here).

For completeness (and maybe incidentally illustrating how easy it is to miss something relevant) I should mention that the current working paper for C++ also supports type deduction to allow

    for (auto i = e.begin(); i != e.end(); ++i)

which is a big step forward (though in this simple case, the range form of "for" will still be the better/normal choice.)  Giving "auto" some use in this way will remove a lot of verbose repetition from C++ code -- but it's good that D benefits from some of the points learned during the evolution of C++.

(I hope it can learn from the excessive implicit conversions in C++ too, and eliminate implicit conversions from character and boolean types to integers.)

-- James
Feb 12 2007
parent janderson <askme me.com> writes:
James Dennett wrote:
 [With apologies for following on from my own post:]
 
 
 For completeness (and maybe incidentally illustrating how
 easy it is to miss something relevant) I should mention
 that the current working paper for C++ also supports type
 deduction to allow
   for (auto i = e.begin(); i != e.end(); ++i)
 which is a big step forward (though in this simple case,
 the range form of "for" will still be the better/normal
 choice.)  Giving "auto" some use in this way will remove
 a lot of verbose repetition from C++ code -- but it's
 good that D benefits from some of the points learned
 during the evolution of C++.

This shows how D is ahead of the game when compared to C++. By the time C++ gets this, D will be on the next level. C++ will always be playing catch-up with D's syntax.
 
 (I hope it can learn from the excessive implicit conversions
 in C++ too, and eliminate implicit conversions from character
 and boolean types to integers.)
 
 -- James

Feb 12 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).
 
 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

    foreach (i ; coll1) (j ; coll2)
    {
        ... use i and j ...
    }
    continue foreach (i)
    {
        ... coll2 finished; use i ...
    }
    continue foreach (j)
    {
        ... coll1 finished; use j ...
    }

The best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

Andrei
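To make the proposed semantics concrete, here is a rough model in Python (chosen since Python's izip comes up elsewhere in this thread): advance both collections in lockstep, then drain whichever still has elements, mirroring the continue foreach clauses. The helper name and the None padding for the exhausted side are illustrative choices, not part of the proposal.

```python
# Hypothetical sketch of the proposed foreach semantics: iterate two
# collections in lockstep, then drain the leftover tail of the longer one.
def lockstep_then_drain(coll1, coll2):
    it1, it2 = iter(coll1), iter(coll2)
    while True:
        try:
            i = next(it1)
        except StopIteration:
            # coll1 finished: the "continue foreach (j)" part
            for j in it2:
                yield (None, j)
            return
        try:
            j = next(it2)
        except StopIteration:
            # coll2 finished: the "continue foreach (i)" part
            yield (i, None)
            for i in it1:
                yield (i, None)
            return
        yield (i, j)

pairs = list(lockstep_then_drain([1, 2, 3], ["a", "b"]))
# pairs == [(1, "a"), (2, "b"), (3, None)]
```

With inputs of unequal length, the tail of the longer collection is visited on its own after the lockstep phase ends.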
Feb 12 2007
next sibling parent John Reimer <Terminal.Node gmail.com> writes:
A what??

That's a rather odd and disturbing analogy... :p

-JJR
Feb 12 2007
prev sibling next sibling parent reply janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

    foreach (i ; coll1) (j ; coll2)
    {
        ... use i and j ...
    }
    continue foreach (i)
    {
        ... coll2 finished; use i ...
    }
    continue foreach (j)
    {
        ... coll1 finished; use j ...
    }

Andrei

Maybe it's a bit too late here, but this syntax seems like a very special case. Can you explain why it's necessary and how we would use it? How would we do this currently (without metaprogramming)? -Joel
Feb 12 2007
next sibling parent kris <foo bar.com> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 
 James Dennett wrote:

 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

    foreach (i ; coll1) (j ; coll2)
    {
        ... use i and j ...
    }
    continue foreach (i)
    {
        ... coll2 finished; use i ...
    }
    continue foreach (j)
    {
        ... coll1 finished; use j ...
    }

Andrei

Maybe it's a bit too late here, but this syntax seems like a very special case. Can you explain why it's necessary and how we would use it? How would we do this currently (without metaprogramming)? -Joel

You'd typically use something like an interleaved iterator. Tango has such an animal for traversing collections.
Feb 12 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

    foreach (i ; coll1) (j ; coll2)
    {
        ... use i and j ...
    }
    continue foreach (i)
    {
        ... coll2 finished; use i ...
    }
    continue foreach (j)
    {
        ... coll1 finished; use j ...
    }

Andrei

Maybe it's a bit too late here, but this syntax seems like a very special case. Can you explain why it's necessary and how we would use it? How would we do this currently (without metaprogramming)?

Yeh, I don't get it either. How would that help me implement merge() from merge sort for instance? --bb
Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

    foreach (i ; coll1) (j ; coll2)
    {
        ... use i and j ...
    }
    continue foreach (i)
    {
        ... coll2 finished; use i ...
    }
    continue foreach (j)
    {
        ... coll1 finished; use j ...
    }

Andrei

Maybe it's a bit too late here, but this syntax seems like a very special case. Can you explain why it's necessary and how we would use it? How would we do this currently (without metaprogramming)?

Yeh, I don't get it either. How would that help me implement merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

Say you need to compute the minimum and maximum element for each column in a file. The code (translated from Perl) looks something like this (simplified):

    foreach (row ; rows)
    {
        for (int i = 0; i != cols; ++i)
        {
            if (mins[i] > row[i]) mins[i] = row[i];
            if (maxs[i] < row[i]) maxs[i] = row[i];
        }
    }

What you'd rather do is to simultaneously iterate row, mins, and maxs:

    foreach (row ; rows)
    {
        foreach (e ; row) (inout min ; mins) (inout max ; maxs)
        {
            if (min > e) min = e;
            if (max < e) max = e;
        }
    }

Andrei
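The same column min/max computation can be sketched in Python, where zip stands in for the simultaneous foreach over (row, mins, maxs). Note that Python has no inout binding, so the column index has to be kept around to write back into mins and maxs; the data values below are made up for illustration.

```python
# Column min/max via lockstep iteration: zip pairs each row element with
# the current per-column min and max, and the index writes results back.
rows = [[3, 9], [1, 7], [5, 2]]  # illustrative data, 2 columns
cols = 2
mins = [float("inf")] * cols
maxs = [float("-inf")] * cols
for row in rows:
    for i, (e, mn, mx) in enumerate(zip(row, mins, maxs)):
        if mn > e:
            mins[i] = e
        if mx < e:
            maxs[i] = e
# mins == [1, 2], maxs == [5, 9]
```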
Feb 13 2007
next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement merge() 
 from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.
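For concreteness, a standard two-way merge looks like this (a routine Python sketch, not code from the thread); each step advances exactly one input, chosen by a comparison, which is precisely what the unconditional lockstep form cannot express.

```python
# Minimal merge step from merge sort: each iteration advances only ONE
# of the two sorted lists, depending on a comparison.
def merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])  # at most one of these tails is non-empty
    out.extend(b[j:])
    return out

merge([1, 4, 6], [2, 3, 7])  # [1, 2, 3, 4, 6, 7]
```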
Feb 13 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement merge() 
 from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

A need for loops iterating over multiple collections depending on arbitrary conditions will always be there. The point of extending foreach is to address the often-encountered case when you want to iterate over multiple collections simultaneously (e.g.: copy a collection to another), just like foreach itself is addressing the particular but frequent case of iterating one collection in a linear manner. Andrei
Feb 13 2007
parent reply Sean Kelly <sean f4.ca> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

A need for loops iterating over multiple collections depending on arbitrary conditions will always be there. The point of extending foreach is to address the often-encountered case when you want to iterate over multiple collections simultaneously (e.g.: copy a collection to another), just like foreach itself is addressing the particular but frequent case of iterating one collection in a linear manner.

What about:

    foreach (i ; coll1) (j ; coll2)
    {
        if( true )
            continue i;
    }

ie. allow 'continue' to accept labels to specify which collection is iterated. A 'continue' without labels would iterate both.

Sean
Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Sean Kelly wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

A need for loops iterating over multiple collections depending on arbitrary conditions will always be there. The point of extending foreach is to address the often-encountered case when you want to iterate over multiple collections simultaneously (e.g.: copy a collection to another), just like foreach itself is addressing the particular but frequent case of iterating one collection in a linear manner.

What about:

    foreach (i ; coll1) (j ; coll2)
    {
        if( true )
            continue i;
    }

ie. allow 'continue' to accept labels to specify which collection is iterated. A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement Andrei
Feb 13 2007
next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 What about:

     foreach (i ; coll1) (j ; coll2)
     {
         if( true )
             continue i;
     }

 ie. allow 'continue' to accept labels to specify which collection is 
 iterated.  A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement

Does that really matter? The compiler knows whether 'i' is a label or a loop variable (presumably it can't be both at the same time?) so it knows what to do. Note that the current "continue to label" wouldn't help here, since there's only one statement for a "double" iteration; so the most natural way to specify which loop to continue would be to specify the variable.

By the way, would the new loop syntax allow more than two collections to be simultaneously iterated? That would indicate the need for a "continue i, j" as well, to specify multiple variables. On the other hand, your proposed "continue foreach" clauses after the main loop would also require an exponential number of clauses for the different sets of collections that can run out, if you want to handle all cases...
Feb 13 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 What about:

     foreach (i ; coll1) (j ; coll2)
     {
         if( true )
             continue i;
     }

 ie. allow 'continue' to accept labels to specify which collection is 
 iterated.  A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement

Does that really matter? The compiler knows whether 'i' is a label or a loop variable (presumably it can't be both at the same time?) so it knows what to do. Note that the current "continue to label" wouldn't help here since there's only one statement for a "double" iteration. So the most natural way to specify which loop to continue would be to specify the variable. By the way, would the new loop syntax allow more than two collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb
Feb 13 2007
parent reply kris <foo bar.com> writes:
Bill Baxter wrote:
 Frits van Bommel wrote:
 By the way, would the new loop syntax allow more than two collections 
 to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by chaining multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris
Feb 13 2007
parent reply kris <foo bar.com> writes:
kris wrote:
 Bill Baxter wrote:
 
 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two collections 
 to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by chaining multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities:

    auto two = InterleavedIterator (x, y);
    foreach (x; two)
        ...

More than 2:

    auto three = InterleavedIterator (two, z);
    foreach (x; three)
        ...
Feb 13 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
kris wrote:
 kris wrote:
 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by chaining multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities:

    auto two = InterleavedIterator (x, y);
    foreach (x; two)
        ...

More than 2:

    auto three = InterleavedIterator (two, z);
    foreach (x; three)
        ...

Should have also mentioned where one can find this mythical InterleavedIterator. --bb
Feb 13 2007
next sibling parent reply kris <foo bar.com> writes:
Bill Baxter wrote:
 kris wrote:
 
 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by chaining multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities:

    auto two = InterleavedIterator (x, y);
    foreach (x; two)
        ...

More than 2:

    auto three = InterleavedIterator (two, z);
    foreach (x; three)
        ...

Should have also mentioned where one can find this mythical InterleavedIterator. --bb

There is no 'standard' one at this time that I know of (judging by the discussion on it a while back). However, Tango does have this beastie in the collections package. The point is, coming up with a lightweight core Iterator approach would likely provide a simpler and more dependable solution.

In the above example, x, y, and z are all iterators themselves. If D had a core notion of Iterator, that's what those would be. For instance, D iterators might map to a delegate (which is what the body of a foreach actually is).
Feb 13 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
kris wrote:
 Bill Baxter wrote:
 kris wrote:

 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by chaining multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities:

    auto two = InterleavedIterator (x, y);
    foreach (x; two)
        ...

More than 2:

    auto three = InterleavedIterator (two, z);
    foreach (x; three)
        ...

Should have also mentioned where one can find this mythical InterleavedIterator. --bb

There is no 'standard' one at this time that I know of (judging by the discussion on it a while back). However, Tango does have this beastie in the collections package. The point is, coming up with a lightweight core Iterator approach would likely provide a simpler and more dependable solution.

Ok that wasn't clear to me. It sounded like you were talking about code I could type in today and have it work given suitable (but not specified) imports.
 in the above example, x, y, and z are all iterators themselves. If D had 
 a core notion of Iterator, that's what those would be. For instance, D 
 iterators might map to a delegate (which is what the body of a foreach 
 actually is).

Yeh, basically it's the same as the Python izip that was mentioned. That's Python's name for InterleavedIterator. I think the issue with D right now is that the 'x' returned by a hypothetical InterleavedIterator would ideally be a tuple, and you would access the elements with x[0], x[1], x[2] (in the 'three' case above). Or you could do foreach(x,y,z; three) and have it unpacked for you. I think it would be great if this kind of stuff worked. I'm much less excited about a built-in syntax that _only_ knows how to do that one trick. --bb
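For reference, the izip behaviour described here (itertools.izip in the Python 2 of this thread's era, the built-in zip in later versions) yields one tuple per step and stops at the shortest input; unpacking in the for statement is the analogue of the hypothetical foreach(x,y,z; three). The sample data is illustrative.

```python
# zip yields one tuple per step across N sequences, stopping at the
# shortest; tuple unpacking in the for statement names the elements.
names = ["chuck", "barney", "bart"]
ids = [12983, 32345, 39284]
tags = ["a", "b", "c"]

triples = list(zip(names, ids, tags))
for name, ident, tag in triples:
    print(name, ident, tag)
# triples[0] == ("chuck", 12983, "a")
```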
Feb 13 2007
parent reply kris <foo bar.com> writes:
Bill Baxter wrote:
 kris wrote:
 
 Bill Baxter wrote:

 kris wrote:

 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by chaining multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities:

    auto two = InterleavedIterator (x, y);
    foreach (x; two)
        ...

More than 2:

    auto three = InterleavedIterator (two, z);
    foreach (x; three)
        ...

Should have also mentioned where one can find this mythical InterleavedIterator. --bb

There is no 'standard' one at this time that I know of (judging by the discussion on it a while back). However, Tango does have this beastie in the collections package. The point is, coming up with a lightweight core Iterator approach would likely provide a simpler and more dependable solution.

Ok that wasn't clear to me. It sounded like you were talking about code I could type in today and have it work given suitable (but not specified) imports.
 in the above example, x, y, and z are all iterators themselves. If D 
 had a core notion of Iterator, that's what those would be. For 
 instance, D iterators might map to a delegate (which is what the body 
 of a foreach actually is).

Yeh, basically it's the same as the Python izip that was mentioned. That's Python's name for InterleavedIterator. I think the issue with D right now is that the 'x' returned by a hypothetical InterleavedIterator would ideally be a tuple, and you would access the elements with x[0], x[1], x[2] (in the 'three' case above). Or you could do foreach(x,y,z; three) and have it unpacked for you. I think it would be great if this kind of stuff worked. I'm much less excited about a built-in syntax that _only_ knows how to do that one trick. --bb

Why would it return a tuple? Would the collection content be of differing types? If not, then the InterleavedIterator would likely have an opApply() for use in the foreach? That's how the Tango one operates, fwiw.
Feb 13 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
kris wrote:
 Bill Baxter wrote:
 kris wrote:

 Bill Baxter wrote:

 kris wrote:

 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by chaining multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities:

    auto two = InterleavedIterator (x, y);
    foreach (x; two)
        ...

More than 2:

    auto three = InterleavedIterator (two, z);
    foreach (x; three)
        ...

Should have also mentioned where one can find this mythical InterleavedIterator. --bb

There is no 'standard' one at this time that I know of (judging by the discussion on it a while back). However, Tango does have this beastie in the collections package. The point is, coming up with a lightweight core Iterator approach would likely provide a simpler and more dependable solution.

Ok that wasn't clear to me. It sounded like you were talking about code I could type in today and have it work given suitable (but not specified) imports.
 in the above example, x, y, and z are all iterators themselves. If D 
 had a core notion of Iterator, that's what those would be. For 
 instance, D iterators might map to a delegate (which is what the body 
 of a foreach actually is).

Yeh, basically it's the same as the Python izip that was mentioned. That's Python's name for InterleavedIterator. I think the issue with D right now is that the 'x' returned by a hypothetical InterleavedIterator would ideally be a tuple, and you would access the elements with x[0], x[1], x[2] (in the 'three' case above). Or you could do foreach(x,y,z; three) and have it unpacked for you. I think it would be great if this kind of stuff worked. I'm much less excited about a built-in syntax that _only_ knows how to do that one trick. --bb

Why would it return a tuple? Would the collection content be of differing types? If not, then the InterleavedIterator would likely have an opApply() for use in the foreach? That's how the Tango one operates, fwiw.

I must not understand what your InterleavedIterator does then. I'm thinking of something like:

    char[][] names = ["chuck", "barney", "bart"];
    int[] ids = [12983, 32345, 39284];

    foreach (x; InterleavedIterator(names,ids)) {
        writefln("Name=%s id=%s", x[0], x[1]);
    }

--bb
Feb 13 2007
next sibling parent kris <foo bar.com> writes:
Bill Baxter wrote:
[snip]
 I must not understand what your InterleavedIterator does then.  I'm 
 thinking of something like:
 char[][] names = ["chuck", "barney", "bart"];
 int[] ids = [12983, 32345, 39284];
 
 foreach (x; InterleavedIterator(names,ids)) {
    writefln("Name=%s id=%s", x[0], x[1]);
 }

The one in Tango operates on a common element type only, for cases where perhaps you're combining the content of more than one collection (in Tango, you can populate a collection via an iterator, for example). Combining multiple lists of differing type is a somewhat different animal; if the lists are of differing lengths also, one might imagine all kinds of non-deterministic behaviour :)
Feb 13 2007
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bill Baxter wrote:
 --bb

Why would it return a tuple? Would the collection content be of differing types? If not, then the InterleavedIterator would likely have an opApply() for use in the foreach? That's how the Tango one operates, fwiw.

I must not understand what your InterleavedIterator does then. I'm thinking of something like:

    char[][] names = ["chuck", "barney", "bart"];
    int[] ids = [12983, 32345, 39284];

    foreach (x; InterleavedIterator(names,ids)) {
        writefln("Name=%s id=%s", x[0], x[1]);
    }

And here's a partial hypothetical implementation:

    struct InterleavedIterator(Types...)
    {
        alias GetElementTypes!(Types) ElemTypes;

        static InterleavedIterator opCall(Types arg)
        {
            InterleavedIterator it;
            it.lists = arg;
            return it;
        }

        int opApply(int delegate(inout ElemTypes) body)
        {
            ElemTypes x;
            for (uint i = 0; i < lists[0].length; i++)
            {
                foreach (j, inout L; lists)
                    x[j] = L[i];
                int ret = body(x);
                if (ret)
                    return ret;
            }
            return 0;
        }

        Types lists;
    }

Obviously it doesn't work right now because you can't do some of those things with tuples. But it would be nice if you could. We're really not too far from something like that working. Especially when inout gets sorted out.

Making tuples more powerful and general, IMHO, would have many benefits. In the static, strongly typed functional languages like ML and Haskell, lists and tuples are the core data structures. They're pretty similar to D tuples, except in ML and Haskell you can return them from functions and otherwise manipulate them as first-class entities. --bb
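A rough Python analogue of the struct above (the class name is the thread's, everything else is an illustrative choice): it stores N sequences of possibly differing element types and hands the loop one tuple per step, stopping at the shortest sequence, which is one of the two policies debated elsewhere in the thread.

```python
# Sketch of a heterogeneous InterleavedIterator: N sequences in, one
# tuple per step out, stopping at the shortest input.
class InterleavedIterator:
    def __init__(self, *lists):
        self.lists = lists

    def __iter__(self):
        # Plays the role of opApply: drive the loop body with tuples.
        return zip(*self.lists)

names = ["chuck", "barney", "bart"]
ids = [12983, 32345, 39284]

pairs = []
for x in InterleavedIterator(names, ids):
    pairs.append("Name=%s id=%s" % (x[0], x[1]))
# pairs[0] == "Name=chuck id=12983"
```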
Feb 13 2007
parent kris <foo bar.com> writes:
Bill Baxter wrote:
 Bill Baxter wrote:
 
 --bb

Why would it return a tuple? Would the collection content be of differing types? If not, then the InterleavedIterator would likely have an opApply() for use in the foreach? That's how the Tango one operates, fwiw.

I must not understand what your InterleavedIterator does then. I'm thinking of something like:

    char[][] names = ["chuck", "barney", "bart"];
    int[] ids = [12983, 32345, 39284];

    foreach (x; InterleavedIterator(names,ids)) {
        writefln("Name=%s id=%s", x[0], x[1]);
    }

And here's a partial hypothetical implementation:

    struct InterleavedIterator(Types...)
    {
        alias GetElementTypes!(Types) ElemTypes;

        static InterleavedIterator opCall(Types arg)
        {
            InterleavedIterator it;
            it.lists = arg;
            return it;
        }

        int opApply(int delegate(inout ElemTypes) body)
        {
            ElemTypes x;
            for (uint i = 0; i < lists[0].length; i++)
            {
                foreach (j, inout L; lists)
                    x[j] = L[i];
                int ret = body(x);
                if (ret)
                    return ret;
            }
            return 0;
        }

        Types lists;
    }

Obviously it doesn't work right now because you can't do some of those things with tuples. But it would be nice if you could. We're really not too far from something like that working. Especially when inout gets sorted out.

Making tuples more powerful and general, IMHO, would have many benefits. In the static, strongly typed functional languages like ML and Haskell, lists and tuples are the core data structures. They're pretty similar to D tuples, except in ML and Haskell you can return them from functions and otherwise manipulate them as first-class entities. --bb

Yes, would be wonderful if D could support Tuples in that manner :)
Feb 13 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 kris wrote:
 kris wrote:
 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator handles multiple collections by composing multiple instances of itself. It's simple to use, and only needs to be written once. Wouldn't it be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities:

auto two = InterleavedIterator(x, y);
foreach (x; two) ...

More than 2:

auto three = InterleavedIterator(two, z);
foreach (x; three) ...

Should have also mentioned where one can find this mythical InterleavedIterator.

The issue with such a multi-iterator is that it makes it easier to make errors, and harder to write efficient and correct code that's statically verifiable. I'm not sure when the interleaved iterator stops iterating, but there are two possibilities, neither of which is satisfactory:

1. Stop after the shorter of the two collections is done. Then user code must query the state of the iterator after the loop to figure out what extra work is to be done:

auto two = InterleavedIterator(x, y);
foreach (x; two) { ... }
if (two.MoreData(0))
{
    auto back2one = two.Project(0); // fetch the first iterator
    foreach (x; back2one) { ... }
}
else if (two.MoreData(1))
{
    ... same deal ...
}

This is way more work than there should be.

2. Stop after the longer of the two collections is done. Then user code must ensure _at each step_ that both iterators have meaningful data:

auto two = InterleavedIterator(x, y);
foreach (x; two)
{
    if (two.HasData(0)) { ... }
    else { ... only the second iter has data ... }
}

This is unclear, verbose, and probably suboptimal.

The scoping of foreach links the scope of the variables with their validity range, which rules out a class of possible errors entirely:

foreach (x ; c1) (y ; c2) (z ; c3)
{
    ... x, y, z syntactically accessible _and_ valid ...
}
continue foreach (x, z)
{
    ... x is both invalid _and_ syntactically inaccessible ...
}

As I mentioned in a different post, the fact that there are combinatorial potential sub-foreach statements is a non-issue. Andrei
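For plain arrays, one plausible lowering of foreach (x ; c1) (y ; c2) followed by continue foreach (y) is two consecutive index loops sharing a counter. This is a sketch of what the construct could mean, not the proposal's actual specification:

```d
import std.stdio;

void main()
{
    int[] c1 = [1, 2, 3];
    int[] c2 = [10, 20, 30, 40, 50];

    // Joint part: x and y are both valid; runs to the shorter length.
    size_t i = 0;
    size_t n = c1.length < c2.length ? c1.length : c2.length;
    for (; i < n; i++)
    {
        int x = c1[i];
        int y = c2[i];
        writefln("x=%s y=%s", x, y);
    }

    // "continue foreach (y)" part: x is out of scope here, and y
    // picks up exactly where the joint loop stopped.
    for (; i < c2.length; i++)
    {
        int y = c2[i];
        writefln("y=%s", y);
    }
}
```

The point of the syntactic form is that the compiler enforces the scoping shown here, rather than leaving the shared counter and the validity of x to the programmer.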
Feb 13 2007
next sibling parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 
 kris wrote:

 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator does multiple collections via using multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Would be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities: auto two = InterleavedInterator (x, y); foreach (x; two) ... more than 2: auto three = InterleavedInterator (two, z); foreach (x; three) ...

Should have also mentioned where one can find this mythical InterleavedIterator.

The issue with such a multi-iterator is that it makes it easier to make errors, and harder to write efficient and correct code that's statically verifiable. [snip] As I mentioned in a different post, the fact that there are combinatorial potential sub-foreach statements is a non-issue. Andrei

If x, y, & z are of differing type, then I'd agree.
Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:

 kris wrote:

 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator does multiple collections via using multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Would be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities: auto two = InterleavedInterator (x, y); foreach (x; two) ... more than 2: auto three = InterleavedInterator (two, z); foreach (x; three) ...

Should have also mentioned where one can find this mythical InterleavedIterator.

The issue with such a multi-iterator is that it makes it easier to make errors, and harder to write efficient and correct code that's statically verifiable. [snip] As I mentioned in a different post, the fact that there are combinatorial potential sub-foreach statements is a non-issue. Andrei

If x, y, & z are of differing type, then I'd agree.

If they are of the same type and in arbitrarily large numbers (x1, x2, x3...), we start talking about cutting through some sort of a matrix or manifold, which is an entirely different business.

Anyhow, I think it's clear by now that the language makes some idioms faster, and library iterators make some other idioms faster. Clearly library iterators are useful. The question is whether the language-helped idioms are encountered often enough to justify the cognitive load of implementing them.

I'm biased by my own C++ codebase, which does a _ton_ of looping (linear algebra, neural nets, manifold learning...). I have a nice FOREACH(i, 0, n) macro that takes care very effectively of most loops, with proper type deduction (gotta love gcc's typeof), limit hoisting, you name it. (I've sat down and measured that it has no impact on the efficiency of the generated code, which is paramount.) In contrast, the few places in which I had to use a straight for loop, or maintain extra variables to do parallel iterations, really make the thin facade break down, as all of a sudden I need to fully explain myself to the compiler.

All of the extra cases could be helped, and efficiently, by continue foreach (actually, to tell the truth, I could also use the corresponding foreach_reverse/continue foreach_reverse feature). Andrei
Feb 13 2007
parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 kris wrote:

 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator does multiple collections via using multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Would be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities: auto two = InterleavedInterator (x, y); foreach (x; two) ... more than 2: auto three = InterleavedInterator (two, z); foreach (x; three) ...

Should have also mentioned where one can find this mythical InterleavedIterator.

The issue with such a multi-iterator is that it makes it easier to make errors, and harder to write efficient and correct code that's statically verifiable. [snip] As I mentioned in a different post, the fact that there are combinatorial potential sub-foreach statements is a non-issue. Andrei

If x, y, & z are of differing type, then I'd agree.

If they are of the same type and in arbitrarily large numbers (x1, x2, x3...), we start talking about cutting through some sort of a matrix or manifold, which is an entirely different business.

So if we did have a language-based iterator (which was a hot topic recently), it might take the form of a generator? [snip]
Feb 13 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 kris wrote:

 kris wrote:

 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator does multiple collections via using multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Would be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities: auto two = InterleavedInterator (x, y); foreach (x; two) ... more than 2: auto three = InterleavedInterator (two, z); foreach (x; three) ...

Should have also mentioned where one can find this mythical InterleavedIterator.

The issue with such a multi-iterator is that it makes it easier to make errors, and harder to write efficient and correct code that's statically verifiable. [snip] As I mentioned in a different post, the fact that there are combinatorial potential sub-foreach statements is a non-issue. Andrei

If x, y, & z are of differing type, then I'd agree.

If they are of the same type and in arbitrarily large numbers (x1, x2, x3...), we start talking about cutting through some sort of a matrix or manifold, which is an entirely different business.

So if we did have a language-based iterator (which was a hot topic recently), it might take the form of a generator?

They all have their place in the language/stdlib ecosystem.

a) foreach does great when the iteration policy *and* range are both fixed. I see this limitation as a big advantage. At least one study has shown that most bugs occur in loops, and it's kind of annoying that C turned the clock of progress back when it provided only the most general and least safe way of iteration to replace all others. Ever since then, industrial languages have followed suit, and as often happens, people have done it that way for so long that it took a few geniuses to figure out what makes sense to people on the street. Only recently has limited iteration again started to receive proper support.

b) Iterators and generators both have the ability to stop and resume iteration. Generators are more natural when there's no real container (do you "iterate" random numbers? does it make sense to compare the input iterator to the "end of input" special iterator?) but have the disadvantage of fixing the iteration policy. Iterators are more flexible in that they offer the user the ability to devise their own iteration policies within a well-defined framework.

So there's a place for everyone. Andrei
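The "no real container" case in (b) is already expressible in D via opApply. A minimal sketch of a foreach-able generator (the struct name and LCG constants are purely illustrative):

```d
import std.stdio;

// A generator in the sense above: there is no container behind it,
// just a resumable computation exposed through opApply.
struct TenPseudoRandoms
{
    uint seed = 42;

    int opApply(int delegate(inout uint) dg)
    {
        for (int i = 0; i < 10; i++)
        {
            seed = seed * 1103515245u + 12345u; // toy LCG step
            uint v = seed;
            int r = dg(v);
            if (r)
                return r;
        }
        return 0;
    }
}

void main()
{
    TenPseudoRandoms gen;
    foreach (r; gen)
        writefln(r);
}
```

Note how the iteration policy (ten steps, one LCG advance each) is fixed inside the generator, which is exactly the limitation described above.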
Feb 13 2007
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 kris wrote:
 kris wrote:
 Bill Baxter wrote:

 Frits van Bommel wrote:

 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?

Whoa! I certainly hope so. It hadn't even occurred to me that Andrei might mean this syntax can only be used for just two collections. If that's the case then ... ick. --bb

InterleavedIterator does multiple collections via using multiple instances of InterleavedIterator. It's simple to use, and only needs to be written once. Would be better to implement some basic iterator needs than to introduce some tricky new syntax? - Kris

Should have given an example. Simple case with 2 entities: auto two = InterleavedInterator (x, y); foreach (x; two) ... more than 2: auto three = InterleavedInterator (two, z); foreach (x; three) ...

Should have also mentioned where one can find this mythical InterleavedIterator.

The issue with such a multi-iterator is that it makes it easier to make errors, and harder to write efficient and correct code that's statically verifiable. I'm not sure when the interleaved iterator stops iterating, but there are two possibilities, neither of which is satisfactory: 1. Stop after the shorter of the two collections is done. Then user code must query the state of the iterator after the loop to figure out what extra work is to be done:

auto two = InterleavedIterator(x, y);
foreach (x; two) { ... }
if (two.MoreData(0))
{
    auto back2one = two.Project(0); // fetch the first iterator
    foreach (x; back2one) { ... }
}
else if (two.MoreData(1))
{
    ... same deal ...
}

This is way more work than there should be.

How about just simply:

auto two = InterleavedIterator(x, y);
foreach (x; two) { ... }
foreach (a; two.Project(0)) { ... } // fetch the first iterator
foreach (b; two.Project(1)) { ... } // fetch the second iterator

One of the last two foreach's won't execute any iterations because there is no data.
 2. Stop after the longest of the two collections is done. Then user code 
 must ensure _at each step_ that both iterators have meaningful data:
 
 auto two = InterleavedIterator (x, y);
 foreach (x; two) {
   if (two.HasData(0)) { ... }
   else { ... only the second iter has data ... }
 }
 
 This is unclear, verbose, and probably suboptimal.
 

Maybe, but how does your multi-foreach proposal help at all? (see below)
 The scoping of foreach links the scope of the variables with their 
 validity range, which rules out a class of possible errors entirely:
 
 foreach (x ; c1) (y ; c2) (z ; c3) {
   ... x, y, z syntactically accessible _and_ valid ...
 }
 continue foreach (x, z) {
   ... x is both invalid _and_ syntactically inaccessible ...
 }
 

I didn't understand the above. Did you mean 'y' instead of one of those 'x'? -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 15 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 What about:

     foreach (i ; coll1) (j ; coll2)
     {
         if( true )
             continue i;
     }

 ie. allow 'continue' to accept labels to specify which collection is 
 iterated.  A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement

Does that really matter? The compiler knows whether 'i' is a label or a loop variable (presumably it can't be both at the same time?) so it knows what to do. Note that the current "continue to label" wouldn't help here since there's only one statement for a "double" iteration. So the most natural way to specify which loop to continue would be to specify the variable.

That's a good point, but changing names could complicate maintenance.
 By the way, would the new loop syntax allow more than two collections to 
 be simultaneously iterated?
 That would indicate the need for a "continue i, j" as well, to specify 
 multiple variables.
 On the other hand, with your proposed "continue foreach" clauses after 
 the main loop that would also require an exponential number of those 
 clauses for different sets of collections running out if you want to 
 handle all cases...

That is correct. But it's not a real issue for a simple reason: the user would have to write the code. If they need to handle all cases, well, that's what they need to do one way or another. The foreach statement does not add anything to the equation. Of course, in most cases the user has some prior constraints on the sizes so they know which "continue foreach" sections must be written. Let me clarify that the continue foreach statements are optional, not required. Andrei
Feb 13 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 What about:

     foreach (i ; coll1) (j ; coll2)
     {
         if( true )
             continue i;
     }

 ie. allow 'continue' to accept labels to specify which collection is 
 iterated.  A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement

Does that really matter? The compiler knows whether 'i' is a label or a loop variable (presumably it can't be both at the same time?) so it knows what to do. Note that the current "continue to label" wouldn't help here since there's only one statement for a "double" iteration. So the most natural way to specify which loop to continue would be to specify the variable.

That's a good point, but changing names could complicate maintenance.
 By the way, would the new loop syntax allow more than two collections 
 to be simultaneously iterated?
 That would indicate the need for a "continue i, j" as well, to specify 
 multiple variables.
 On the other hand, with your proposed "continue foreach" clauses after 
 the main loop that would also require an exponential number of those 
 clauses for different sets of collections running out if you want to 
 handle all cases...

That is correct. But it's not a real issue for a simple reason: the user would have to write the code. If they need to handle all cases, well, that's what they need to do one way or another. The foreach statement does not add anything to the equation.

Yes it does make a difference. In the iterator case I can use all of D to determine how to resolve the fact that there are different lengths of data. In the "continue foreach" scenario, I have only one recourse -- write all the continue foreach's I need. Consider I want to add N vectors of potentially different lengths together. With the right iterator solution I could get away with just:

foreach (iters, xs; multi_iter(a, b, c, d, e))
    foreach (it; iters)
        if (!it.end)
            out[j] += xs[i];

With "continue foreach" I'm just going to give up and write a for loop.
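For what it's worth, the ragged N-vector sum is also writable in today's D with plain nested foreach, no multi-iterator or new syntax required. A sketch (sumRagged is an illustrative name; missing entries are simply treated as zero):

```d
import std.stdio;

double[] sumRagged(double[][] vecs)
{
    // Find the length of the longest vector.
    size_t maxLen = 0;
    foreach (v; vecs)
        if (v.length > maxLen)
            maxLen = v.length;

    double[] total = new double[maxLen];
    total[] = 0.0; // D doubles default-initialize to NaN, so zero explicitly

    // Short vectors just contribute to fewer slots.
    foreach (v; vecs)
        foreach (i, x; v)
            total[i] += x;

    return total;
}

void main()
{
    auto s = sumRagged([[1.0, 2.0], [10.0], [100.0, 200.0, 300.0]]);
    foreach (x; s)
        writefln(x);
}
```

This loops vector-by-vector rather than element-position-by-element-position, which is the "loop the other way around" shape mentioned later in the thread.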
 Of course, in most cases the user has some prior constraints on the 
 sizes so they know which "continue foreach" sections must be written. 
 Let me clarify that the continue foreach statements are optional, not 
 required.

--bb
Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 What about:

     foreach (i ; coll1) (j ; coll2)
     {
         if( true )
             continue i;
     }

 ie. allow 'continue' to accept labels to specify which collection 
 is iterated.  A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement

Does that really matter? The compiler knows whether 'i' is a label or a loop variable (presumably it can't be both at the same time?) so it knows what to do. Note that the current "continue to label" wouldn't help here since there's only one statement for a "double" iteration. So the most natural way to specify which loop to continue would be to specify the variable.

That's a good point, but changing names could complicate maintenance.
 By the way, would the new loop syntax allow more than two collections 
 to be simultaneously iterated?
 That would indicate the need for a "continue i, j" as well, to 
 specify multiple variables.
 On the other hand, with your proposed "continue foreach" clauses 
 after the main loop that would also require an exponential number of 
 those clauses for different sets of collections running out if you 
 want to handle all cases...

That is correct. But it's not a real issue for a simple reason: the user would have to write the code. If they need to handle all cases, well, that's what they need to do one way or another. The foreach statement does not add anything to the equation.

Yes it does make a difference. In the iterator case I can use all of D to determine how to resolve the fact that there are different lengths of data. [snip] With "continue foreach" I'm just going to give up and write a for loop.

That is correct. It's a limitation of "continue foreach" exactly because it ties validity with syntactic scoping: the result gives a stronger guarantee, but is inherently more restrictive. I'll add that probably adding multiple vectors of different lengths would loop the other way around :o). Andrei
Feb 13 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 What about:

     foreach (i ; coll1) (j ; coll2)
     {
         if( true )
             continue i;
     }

 ie. allow 'continue' to accept labels to specify which collection 
 is iterated.  A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement

Does that really matter? The compiler knows whether 'i' is a label or a loop variable (presumably it can't be both at the same time?) so it knows what to do. Note that the current "continue to label" wouldn't help here since there's only one statement for a "double" iteration. So the most natural way to specify which loop to continue would be to specify the variable.

That's a good point, but changing names could complicate maintenance.
 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?
 That would indicate the need for a "continue i, j" as well, to 
 specify multiple variables.
 On the other hand, with your proposed "continue foreach" clauses 
 after the main loop that would also require an exponential number of 
 those clauses for different sets of collections running out if you 
 want to handle all cases...

That is correct. But it's not a real issue for a simple reason: the user would have to write the code. If they need to handle all cases, well, that's what they need to do one way or another. The foreach statement does not add anything to the equation.

Yes it does make a difference. In the iterator case I can use all of D to determine how to resolve the fact that there are different lengths of data. [snip] With "continue foreach" I'm just going to give up and write a for loop.

That is correct. It's a limitation of "continue foreach" exactly because it ties validity with syntactic scoping: the result gives a stronger guarantee, but is inherently more restrictive.

Ok so it's like more of the same foreach business. If it works for you, great. If not, sorry. Go find another way to do it. Meh.
 I'll add that probably adding multiple vectors of different lengths 
 would loop the other way around :o).

Yeh, fair enough. :-) Let's say I want to print out the data columnwise instead.

foreach (iterset; multi_iter(a, b, c, d, e))
{
    foreach (i, it; iterset)
        writef("%s ", it.end ? "<no data>" : it.val);
    writefln();
}

I will give you though, that it would be more difficult to make that as efficient as the more restricted solution you're talking about. --bb
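For the record, the columnwise dump is writable today with an index loop over the ragged arrays. A sketch (printColumnwise is an illustrative name, not an existing function):

```d
import std.stdio;

void printColumnwise(int[][] cols)
{
    // Rows run to the longest column's length.
    size_t maxLen = 0;
    foreach (c; cols)
        if (c.length > maxLen)
            maxLen = c.length;

    for (size_t i = 0; i < maxLen; i++)
    {
        // Columns that have run out print a placeholder instead.
        foreach (c; cols)
        {
            if (i < c.length)
                writef("%s ", c[i]);
            else
                writef("<no data> ");
        }
        writefln();
    }
}

void main()
{
    printColumnwise([[1, 2, 3], [10, 20], [100]]);
}
```

The per-element bounds check is what the more restricted "continue foreach" form is designed to avoid paying.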
Feb 13 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 What about:

     foreach (i ; coll1) (j ; coll2)
     {
         if( true )
             continue i;
     }

 ie. allow 'continue' to accept labels to specify which collection 
 is iterated.  A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement

Does that really matter? The compiler knows whether 'i' is a label or a loop variable (presumably it can't be both at the same time?) so it knows what to do. Note that the current "continue to label" wouldn't help here since there's only one statement for a "double" iteration. So the most natural way to specify which loop to continue would be to specify the variable.

That's a good point, but changing names could complicate maintenance.
 By the way, would the new loop syntax allow more than two 
 collections to be simultaneously iterated?
 That would indicate the need for a "continue i, j" as well, to 
 specify multiple variables.
 On the other hand, with your proposed "continue foreach" clauses 
 after the main loop that would also require an exponential number 
 of those clauses for different sets of collections running out if 
 you want to handle all cases...

That is correct. But it's not a real issue for a simple reason: the user would have to write the code. If they need to handle all cases, well, that's what they need to do one way or another. The foreach statement does not add anything to the equation.

Yes it does make a difference. In the iterator case I can use all of D to determine how to resolve the fact that there are different lengths of data. [snip] With "continue foreach" I'm just going to give up and write a for loop.

That is correct. It's a limitation of "continue foreach" exactly because it ties validity with syntactic scoping: the result gives a stronger guarantee, but is inherently more restrictive.

Ok so it's like more of the same foreach business. If it works for you, great. If not, sorry. Go find another way to do it. Meh.

This goes for many core features. Language design, exactly like code generation optimization, is more about finding the right tradeoff than about the 100% solution. If foreach (i ; c) takes care of 70% of cases, and foreach (i ; c) (j ; d) brings that to 90%, that's probably better than a one-size-fits-all solution that boasts uniform clunkiness for 100% of the cases. Andrei
Feb 13 2007
prev sibling parent reply Dave <Dave_member pathlink.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

A need for loops iterating over multiple collections depending on arbitrary conditions will always be there. The point of extending foreach is to address the often-encountered case when you want to iterate over multiple collections simultaneously (e.g.: copy a collection to another), just like foreach itself is addressing the particular but frequent case of iterating one collection in a linear manner.

What about: foreach (i ; coll1) (j ; coll2) { if( true ) continue i; } ie. allow 'continue' to accept labels to specify which collection is iterated. A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement Andrei

How about using 'next' to keep it simple, so the compiler doesn't have to create / check for 'i' and 'j' as labels within the same function scope: i: while(...) { foreach (i ; coll1) (j ; coll2) { if( true ) continue i; if( i < j ) next i; } } ?
Feb 13 2007
next sibling parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Dave wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

A need for loops iterating over multiple collections depending on arbitrary conditions will always be there. The point of extending foreach is to address the often-encountered case when you want to iterate over multiple collections simultaneously (e.g.: copy a collection to another), just like foreach itself is addressing the particular but frequent case of iterating one collection in a linear manner.

What about: foreach (i ; coll1) (j ; coll2) { if( true ) continue i; } ie. allow 'continue' to accept labels to specify which collection is iterated. A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement Andrei

How about using 'next' to keep it simple, so the compiler doesn't have to create / check for 'i' and 'j' as labels within the same function scope: i: while(...) { foreach (i ; coll1) (j ; coll2) { if( true ) continue i; if( i < j ) next i; } } ?

Hahaha! So we're stealing things from BASIC now, are we? -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
Feb 13 2007
prev sibling next sibling parent kris <foo bar.com> writes:
Dave wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 
 Sean Kelly wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

A need for loops iterating over multiple collections depending on arbitrary conditions will always be there. The point of extending foreach is to address the often-encountered case when you want to iterate over multiple collections simultaneously (e.g.: copy a collection to another), just like foreach itself is addressing the particular but frequent case of iterating one collection in a linear manner.

What about: foreach (i ; coll1) (j ; coll2) { if( true ) continue i; } ie. allow 'continue' to accept labels to specify which collection is iterated. A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement Andrei

How about using 'next' to keep it simple, so the compiler doesn't have to create / check for 'i' and 'j' as labels within the same function scope: i: while(...) { foreach (i ; coll1) (j ; coll2) { if( true ) continue i; if( i < j ) next i; } } ?

No offence to anyone, but the above and the others so far are *way* too complex for such an operation. They are not simple to follow at all, and could easily be the source of tricky bugs. Perhaps it would be worthwhile waiting until we get some useful form of Iterator before adding this type of complex syntax? (InterleavingIterator encapsulates the trickiness, such that it is trivial to use, and works every time) - Kris
Feb 13 2007
prev sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Dave wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Sean Kelly wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Frits van Bommel wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

A need for loops iterating over multiple collections depending on arbitrary conditions will always be there. The point of extending foreach is to address the often-encountered case when you want to iterate over multiple collections simultaneously (e.g.: copy a collection to another), just like foreach itself is addressing the particular but frequent case of iterating one collection in a linear manner.

What about: foreach (i ; coll1) (j ; coll2) { if( true ) continue i; } ie. allow 'continue' to accept labels to specify which collection is iterated. A 'continue' without labels would iterate both.

I think that's a great idea, except that "continue to label" has the same syntax: http://digitalmars.com/d/statement.html#ContinueStatement Andrei

How about using 'next' to keep it simple, so the compiler doesn't have to create / check for 'i' and 'j' as labels within the same function scope: i: while(...) { foreach (i ; coll1) (j ; coll2) { if( true ) continue i; if( i < j ) next i; } } ?

I think it's very desirable to add keywords only when there's an absolute must. Andrei
Feb 13 2007
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement merge() 
 from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. } -- Derek Parnell Melbourne, Australia "Justice for David Hicks!" skype: derek.j.parnell
Feb 13 2007
parent reply Sean Kelly <sean f4.ca> writes:
Derek Parnell wrote:
 On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement merge() 
 from merge sort for instance?

above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.


I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. }

This currently works for built-in arrays but not for user-defined types. Also, I think the fact that it works at all is the result of an implementation detail, not spec-defined behavior. Sean
Feb 13 2007
next sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Sean Kelly wrote:
 Derek Parnell wrote:
 
 On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. }

This currently works for built-in arrays but not for user-defined types. Also, I think the fact that it works at all is the result of an implementation detail, not spec-defined behavior. Sean

There's no reason a user-defined type couldn't implement this. -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
Feb 13 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Kirk McDonald wrote:
 Sean Kelly wrote:
 Derek Parnell wrote:

 On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. }

This currently works for built-in arrays but not for user-defined types. Also, I think the fact that it works at all is the result of an implementation detail, not spec-defined behavior. Sean

There's no reason a user-defined type couldn't implement this.

Exactly. Walter and I have bounced a couple of possibilities and concluded that the feature is of "medium difficulty/medium usefulness". Probably Walter will look into implementing this first: foreach (i ; 0 .. n) { ... } Andrei
Feb 13 2007
parent reply Sean Kelly <sean f4.ca> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Kirk McDonald wrote:
 Sean Kelly wrote:
 Derek Parnell wrote:

 On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. }

This currently works for built-in arrays but not for user-defined types. Also, I think the fact that it works at all is the result of an implementation detail, not spec-defined behavior. Sean

There's no reason a user-defined type couldn't implement this.

Exactly. Walter and I have bounced a couple of possibilities and concluded that the feature is of "medium difficulty/medium usefulness". Probably Walter will look into implementing this first: foreach (i ; 0 .. n) { ... }

Out of curiosity, how would these situations be handled: foreach (i ; n .. 0) {} // A foreach_reverse (i ; 0 .. n) {} // B foreach_reverse (i ; n .. 0) {} // C Sean
Feb 13 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Sean Kelly wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Kirk McDonald wrote:
 Sean Kelly wrote:
 Derek Parnell wrote:

 On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. }

This currently works for built-in arrays but not for user-defined types. Also, I think the fact that it works at all is the result of an implementation detail, not spec-defined behavior. Sean

There's no reason a user-defined type couldn't implement this.

Exactly. Walter and I have bounced a couple of possibilities and concluded that the feature is of "medium difficulty/medium usefulness". Probably Walter will look into implementing this first: foreach (i ; 0 .. n) { ... }

Out of curiosity, how would these situations be handled: foreach (i ; n .. 0) {} // A

rewrite to: for (typeof(true ? n : 0) i = n; i < 0; ++i) {} with the amendment that the loop body can't modify i, and that 0 is evaluated only once :o).
   foreach_reverse (i ; 0 .. n) {} // B

rewrite to: for (typeof(true ? 0 : n) i = n; i-- > 0; ) {} with the same amendments.
   foreach_reverse (i ; n .. 0) {} // C

rewrite to: for (typeof(true ? 0 : n) i = 0; i-- > n; ) {} with the amendment that the loop body can't modify i, and that n is evaluated only once. Andrei
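Taken together, the three rewrites give half-open, forward-test-only semantics. A small Python model of the rewrites above (a sketch of this proposal only, not of any shipping compiler) makes the edge cases concrete:

```python
def foreach_range(lo, hi):
    # foreach (i ; lo .. hi)  ==  for (i = lo; i < hi; ++i)
    i = lo
    while i < hi:
        yield i
        i += 1

def foreach_reverse_range(lo, hi):
    # foreach_reverse (i ; lo .. hi)  ==  for (i = hi; i-- > lo; )
    i = hi
    while i > lo:
        i -= 1
        yield i

print(list(foreach_range(5, 0)))          # case A: empty, since 5 < 0 fails at once
print(list(foreach_reverse_range(0, 5)))  # case B: [4, 3, 2, 1, 0]
print(list(foreach_reverse_range(5, 0)))  # case C: empty, since 0 > 5 fails at once
```

So under these rewrites a backwards range like n .. 0 simply yields an empty loop in both directions, rather than counting down.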
Feb 13 2007
prev sibling parent Sean Kelly <sean f4.ca> writes:
Kirk McDonald wrote:
 Sean Kelly wrote:
 Derek Parnell wrote:

 On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:

 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

In other words, it doesn't :(.

I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. }

This currently works for built-in arrays but not for user-defined types. Also, I think the fact that it works at all is the result of an implementation detail, not spec-defined behavior.

There's no reason a user-defined type couldn't implement this.

As Andrei mentioned, the need to decrement the value just to stay in the same location stinks :-) In fact, I can see it being a real issue for forward-only sequences. Sean
Feb 13 2007
prev sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Sean Kelly wrote:
 Derek Parnell wrote:
 On Tue, 13 Feb 2007 17:39:46 +0100, Frits van Bommel wrote:

 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Yeh, I don't get it either.  How would that help me implement 
 merge() from merge sort for instance?

form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.


I imagine that the full syntax will also include this form ... foreach (int x, i ; coll1) (int y, j ; coll2) { ... use i and j ... if (somecondition) x = ... // To set the index back or forward to some // arbitrary point in the array 'coll1'. }

This currently works for built-in arrays but not for user-defined types. Also, I think the fact that it works at all is the result of an implementation detail, not spec-defined behavior.

Besides, it's awkward to have to decrement x if you want to "stay there" just to cancel the next increment. foreach is best for straight loops; in fact I think of it as the functional "map", since foreach (assuming the above gets fixed) has no imperative element to it. Andrei
Feb 13 2007
prev sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages: foreach (i ; coll1) (j ; coll2) { ... use i and j ... } continue foreach (i) { ... coll2 finished; use i ... } continue foreach (j) { ... coll1 finished; use j ... } Andrei

Maybe it's a bit too late here, but this syntax seems very special-case. Can you explain why it's necessary and how we would use it? How would we do this currently (without metaprogramming)?

Yeh, I don't get it either. How would that help me implement merge() from merge sort for instance?

Merge bumps the iteration in both collections conditionally. The form above bumps the iteration in the two collections unconditionally, until one is finished; then it continues with the other until that is finished.

Yes, I get that. My initial impression, though, was that iterating in lock step is just one particular case, and it seems too special-case to warrant new syntax. It still doesn't get rid of the need for general iterators to implement more complicated things like merge sort. But maybe it does cover the majority of cases. I guess it's basically like 'for x, y in izip(list1, list2)' in Python. But in Python that's a library function, not a bit of language syntax. --bb
Feb 13 2007
prev sibling parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages: foreach (i ; coll1) (j ; coll2) { ... use i and j ... } continue foreach (i) { ... coll2 finished; use i ... } continue foreach (j) { ... coll1 finished; use j ... } Andrei

Maybe it's a bit too late here, but this syntax seems very special-case. Can you explain why it's necessary and how we would use it? How would we do this currently (without metaprogramming)? -Joel

Take the following: int[] a = [1, 2, 3, 4]; int[] b = [5, 6, 7, 8]; In current D, you can do: for (int i=0; i<a.length; ++i) { writefln("%s %s", a[i], b[i]); } With this syntax that Andrei mentions: foreach (i; a) (j; b) { writefln("%s %s", i, j); } This has the additional feature of properly handling differently-sized collections automatically. Things get immensely more complicated as soon as you start talking about opApply. My sleep-addled brain isn't coming up with a way to do it in current D, though I suspect it's possible. Personally, I think this syntax would be awesome. -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
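The lock-step-then-leftover behavior Kirk describes can be sketched in Python (a hypothetical model of the proposal; `dual_foreach` and its callback names are invented for illustration, not D or library API):

```python
_DONE = object()  # sentinel marking an exhausted collection

def dual_foreach(c1, c2, both, rest1=lambda i: None, rest2=lambda j: None):
    """Model of 'foreach (i; c1) (j; c2)' plus its two optional
    'continue foreach' clauses: run 'both' in lock step while both
    collections have elements, then drain the survivor."""
    it1, it2 = iter(c1), iter(c2)
    while True:
        i, j = next(it1, _DONE), next(it2, _DONE)
        if i is _DONE and j is _DONE:
            return
        if j is _DONE:                  # c2 finished first
            rest1(i)
            for i in it1:
                rest1(i)
            return
        if i is _DONE:                  # c1 finished first
            rest2(j)
            for j in it2:
                rest2(j)
            return
        both(i, j)

pairs, leftovers = [], []
dual_foreach([1, 2, 3, 4], [5, 6],
             both=lambda i, j: pairs.append((i, j)),
             rest1=leftovers.append)
print(pairs)      # [(1, 5), (2, 6)]
print(leftovers)  # [3, 4]
```

This is the "properly handling differently-sized collections" part: when the shorter collection runs out, the remaining elements of the longer one flow to its leftover clause instead of being silently dropped.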
Feb 13 2007
prev sibling next sibling parent reply renoX <renosky free.fr> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages: foreach (i ; coll1) (j ; coll2) { ... use i and j ... } continue foreach (i) { ... coll2 finished; use i ... } continue foreach (j) { ... coll1 finished; use j ... } The best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

At first I really didn't like the 'continue foreach', but afterwards I got used to it. I wonder if this is really such a requested feature, though: what's wrong with the good old 'for' or 'while' for the complex case? renoX
 Andrei

Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
renoX wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages: foreach (i ; coll1) (j ; coll2) { ... use i and j ... } continue foreach (i) { ... coll2 finished; use i ... } continue foreach (j) { ... coll1 finished; use j ... } The best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

At first I really didn't like the 'continue foreach', but afterwards I got used to it. I wonder if this is really such a requested feature, though: what's wrong with the good old 'for' or 'while' for the complex case?

Absolutely nothing's wrong. The same argument, however, could be formulated to render foreach redundant. We have for, don't we? The thing is, foreach is terse and elegant and has a functional flavor that gives it safety and power that for doesn't have. It's only natural to ask oneself why all of these advantages must go away in a blink just because you want to iterate two things simultaneously. Andrei
Feb 13 2007
next sibling parent reply janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 renoX wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages: foreach (i ; coll1) (j ; coll2) { ... use i and j ... } continue foreach (i) { ... coll2 finished; use i ... } continue foreach (j) { ... coll1 finished; use j ... } The best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

At first I really didn't like the 'continue foreach', but afterwards I got used to it. I wonder if this is really such a requested feature, though: what's wrong with the good old 'for' or 'while' for the complex case?

Absolutely nothing's wrong. The same argument, however, could be formulated to render foreach redundant. We have for, don't we. The thing is foreach is terse and elegant and has a functional flavor that gives it safety and power that for doesn't have. It's only natural to ask oneself why all of these advantages must go away in a blink just because you want to iterate two things simultaneously. Andrei

I think it's about how much this feature will be used. This one seems like it could be useful, but it's pretty close to borderline "feature for feature's sake" for me. There are probably a lot of other features that could be more useful than this one. -Joel
Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 renoX wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages: foreach (i ; coll1) (j ; coll2) { ... use i and j ... } continue foreach (i) { ... coll2 finished; use i ... } continue foreach (j) { ... coll1 finished; use j ... } The best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

At first I really didn't like the 'continue foreach', but afterwards I got used to it. I wonder if this is really such a requested feature, though: what's wrong with the good old 'for' or 'while' for the complex case?

Absolutely nothing's wrong. The same argument, however, could be formulated to render foreach redundant. We have for, don't we. The thing is foreach is terse and elegant and has a functional flavor that gives it safety and power that for doesn't have. It's only natural to ask oneself why all of these advantages must go away in a blink just because you want to iterate two things simultaneously. Andrei

I think it's about how much this feature will be used. This one seems like it could be useful, but it's pretty close to borderline "feature for feature's sake" for me. There are probably a lot of other features that could be more useful than this one.

No doubt, but there are many factors to take into account (among which implementation difficulty). In Perl it's a constant source of friction for me. If I want to iterate through one thing (array, file, hash...), it's all dandy. As soon as I need to iterate over two things I need to import and use an arcane library, or fall back and use while(1) and do it all with the axe (which is what I end up doing most of the time). I'm actually mildly surprised. Lately there was some talk around here about supporting the day-to-day programmers and so on. I find looping a very day-to-day thing, and looping over 2+ things at least a few-days-to-few-days thing. There is a need for parallel iteration, shown, if nothing else, by the existence of a library that addresses exactly that - to the extent possible in a library that's not in the position to control syntax, scoping, and visibility. I was sure people would be on this one like white on rice. But Bjarne Stroustrup was right: nobody knows what most programmers do :o). Andrei
Feb 13 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:


 I'm actually mildly surprised. Lately there was some talk around here 
 about supporting the day-to-day programmers and so on. I find looping a 
 very day-to-day thing, and looping over 2+ things at least a 
 few-days-to-few-days thing. There is a need for parallel iteration, if 
 nothing else shown by the existence of a library that addresses exactly 
 that - to the extent possible in a library that's not in the position to 
 control syntax, scoping, and visibility. I was sure people will be on 
 this one like white on rice. But Bjarne Stroustrup was right: nobody 
 knows what most programmers do :o).

Python and Ruby are hardly considered to be obtuse languages, or unfriendly to Joe coder, but both get by just fine without special-case syntax for iterating over multiple collections, or for iterating in reverse. for x, y in izip(foo, bar): do stuff for x in reversed(foo): do stuff for x, y in izip(reversed(foo), bar): do that with your proposal! That said, I understand that Python and Ruby have a little more freedom to pile up the abstractions because both of them are so friggin slow that a few more layers won't hurt anything. D can't be quite so cavalier about tossing performance for elegance. Still, I remain unconvinced that D can't have both performance and elegance. --bb
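For reference, Bill's one-liners made runnable (shown in Python 3, where the lazy built-in zip replaces Python 2's itertools.izip):

```python
foo = [1, 2, 3]
bar = ['a', 'b', 'c', 'd']

for x, y in zip(foo, bar):            # stops at the shorter sequence
    print(x, y)

for x in reversed(foo):               # reverse iteration, no new syntax
    print(x)

# reversed and zip compose freely: "do that with your proposal!"
print(list(zip(reversed(foo), bar)))  # [(3, 'a'), (2, 'b'), (1, 'c')]
```

The last line is the composition Bill is pointing at: both lock-step and reverse iteration fall out of two ordinary library functions rather than two pieces of dedicated syntax.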
Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 
 
 I'm actually mildly surprised. Lately there was some talk around here 
 about supporting the day-to-day programmers and so on. I find looping 
 a very day-to-day thing, and looping over 2+ things at least a 
 few-days-to-few-days thing. There is a need for parallel iteration, if 
 nothing else shown by the existence of a library that addresses 
 exactly that - to the extent possible in a library that's not in the 
 position to control syntax, scoping, and visibility. I was sure people 
 will be on this one like white on rice. But Bjarne Stroustrup was 
 right: nobody knows what most programmers do :o).

Python and Ruby are hardly considered to be obtuse languages, or unfriendly to Joe coder, but both get by just fine without special-case syntax for iterating over multiple collections, or for iterating in reverse.

for x,y in izip(foo,bar): do stuff
for x in reversed(foo): do stuff
for x,y in izip(reversed(foo),bar): do that with your proposal!

foreach (x ; reverse_view(foo)) (y ; bar)

probably I could!
 That said, I understand that Python and Ruby have a little more freedom 
 to pile up the abstractions because both of them are so friggin slow 
 that a few more layers won't hurt anything.  D can't be quite so 
 cavalier about tossing performance for elegance.

Yah. Costly abstractions are a dime a dozen.
 Still, I remain unconvinced that D can't have both performance and 
 elegance.

Macros will get us closer to that. Andrei
Feb 13 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:


 I'm actually mildly surprised. Lately there was some talk around here 
 about supporting the day-to-day programmers and so on. I find looping 
 a very day-to-day thing, and looping over 2+ things at least a 
 few-days-to-few-days thing. There is a need for parallel iteration, 
 if nothing else shown by the existence of a library that addresses 
 exactly that - to the extent possible in a library that's not in the 
 position to control syntax, scoping, and visibility. I was sure 
 people will be on this one like white on rice. But Bjarne Stroustrup 
 was right: nobody knows what most programmers do :o).

Python and Ruby are hardly considered to be obtuse languages, or unfriendly to Joe coder, but both get by just fine without special-case syntax for iterating over multiple collections, or for iterating in reverse.

for x,y in izip(foo,bar): do stuff
for x in reversed(foo): do stuff
for x,y in izip(reversed(foo),bar): do that with your proposal!

foreach (x ; reverse_view(foo)) (y ; bar)

probably I could!

foreach (x,y ; transpose_view(reverse_view(foo), bar))

then why not this too?!

--bb
Feb 13 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:


 I'm actually mildly surprised. Lately there was some talk around 
 here about supporting the day-to-day programmers and so on. I find 
 looping a very day-to-day thing, and looping over 2+ things at least 
 a few-days-to-few-days thing. There is a need for parallel 
 iteration, if nothing else shown by the existence of a library that 
 addresses exactly that - to the extent possible in a library that's 
 not in the position to control syntax, scoping, and visibility. I 
 was sure people will be on this one like white on rice. But Bjarne 
 Stroustrup was right: nobody knows what most programmers do :o).

Python and Ruby are hardly considered to be obtuse languages, or unfriendly to Joe coder, but both get by just fine without special-case syntax for iterating over multiple collections, or for iterating in reverse.

for x,y in izip(foo,bar): do stuff
for x in reversed(foo): do stuff
for x,y in izip(reversed(foo),bar): do that with your proposal!

foreach (x ; reverse_view(foo)) (y ; bar)

probably I could!

foreach (x,y ; transpose_view(reverse_view(foo), bar))

then why not this too?!

Because it doesn't keep bound variables together with the data. Perl has a way of initializing multiple variables that is unnerving:

my ($a, $b, $c) = (e1, e2, e3);

The long-distance relationships make it so irritating when ek are more than a couple of characters, I often give up and write:

my $a = e1;
my $b = e2;
my $c = e3;

even though I try to use vertical space sparingly.

Andrei
Feb 14 2007
next sibling parent janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:


Because it doesn't keep bound variables together with the data. Perl has a way of initializing multiple variables that is unnerving:

my ($a, $b, $c) = (e1, e2, e3);

The long-distance relationships make it so irritating when ek are more than a couple of characters, I often give up and write:

my $a = e1;
my $b = e2;
my $c = e3;

even though I try to use vertical space sparingly.

Andrei

IMO, the original version that you've suggested looks best. I can't see any way of improving the syntax. Well, maybe the continue part could be improved, like:

foreach (i ; coll1) (j ; coll2)
{
   ... use i and j ...
}
continue(i)
{
   ... coll2 finished; use i ...
}
continue(j)
{
   ... coll1 finished; use j ...
}

Or perhaps a word like "finish" would be better.

-Joel
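For comparison, the main-loop-plus-leftovers semantics being discussed can be emulated today in Python with itertools.zip_longest and a private sentinel. This is a rough sketch; the SENTINEL object and the paired_with_leftovers helper are my own invention, not part of any proposal:

```python
from itertools import zip_longest

SENTINEL = object()  # unique marker for the exhausted side

def paired_with_leftovers(coll1, coll2):
    """Yield ('both', i, j) while both collections have elements,
    then ('left', i, None) or ('right', None, j) for the leftovers,
    mirroring the foreach ... continue foreach idea."""
    for i, j in zip_longest(coll1, coll2, fillvalue=SENTINEL):
        if j is SENTINEL:
            yield ('left', i, None)    # coll2 finished; use i
        elif i is SENTINEL:
            yield ('right', None, j)   # coll1 finished; use j
        else:
            yield ('both', i, j)       # use i and j

events = list(paired_with_leftovers([1, 2, 3], ['a']))
print(events)  # [('both', 1, 'a'), ('left', 2, None), ('left', 3, None)]
```

The sentinel (rather than None) avoids misclassifying collections that legitimately contain None.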
Feb 14 2007
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:


 I'm actually mildly surprised. Lately there was some talk around 
 here about supporting the day-to-day programmers and so on. I find 
 looping a very day-to-day thing, and looping over 2+ things at 
 least a few-days-to-few-days thing. There is a need for parallel 
 iteration, if nothing else shown by the existence of a library that 
 addresses exactly that - to the extent possible in a library that's 
 not in the position to control syntax, scoping, and visibility. I 
 was sure people will be on this one like white on rice. But Bjarne 
 Stroustrup was right: nobody knows what most programmers do :o).

Python and Ruby are hardly considered to be obtuse languages, or unfriendly to Joe coder, but both get by just fine without special-case syntax for iterating over multiple collections, or for iterating in reverse.

for x,y in izip(foo,bar): do stuff
for x in reversed(foo): do stuff
for x,y in izip(reversed(foo),bar): do that with your proposal!

foreach (x ; reverse_view(foo)) (y ; bar)

probably I could!

foreach (x,y ; transpose_view(reverse_view(foo), bar))

then why not this too?!

Because it doesn't keep bound variables together with the data. Perl has a way of initializing multiple variables that is unnerving:

my ($a, $b, $c) = (e1, e2, e3);

The long-distance relationships make it so irritating when ek are more than a couple of characters, I often give up and write:

my $a = e1;
my $b = e2;
my $c = e3;

even though I try to use vertical space sparingly.

You're of course welcome to your opinion, but multiple assignment exists in many languages. So you're saying they're all wrong to have such a feature? --bb
Feb 14 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:


 I'm actually mildly surprised. Lately there was some talk around 
 here about supporting the day-to-day programmers and so on. I find 
 looping a very day-to-day thing, and looping over 2+ things at 
 least a few-days-to-few-days thing. There is a need for parallel 
 iteration, if nothing else shown by the existence of a library 
 that addresses exactly that - to the extent possible in a library 
 that's not in the position to control syntax, scoping, and 
 visibility. I was sure people will be on this one like white on 
 rice. But Bjarne Stroustrup was right: nobody knows what most 
 programmers do :o).

Python and Ruby are hardly considered to be obtuse languages, or unfriendly to Joe coder, but both get by just fine without special-case syntax for iterating over multiple collections, or for iterating in reverse.

for x,y in izip(foo,bar): do stuff
for x in reversed(foo): do stuff
for x,y in izip(reversed(foo),bar): do that with your proposal!

foreach (x ; reverse_view(foo)) (y ; bar)

probably I could!

foreach (x,y ; transpose_view(reverse_view(foo), bar))

then why not this too?!

Because it doesn't keep bound variables together with the data. Perl has a way of initializing multiple variables that is unnerving:

my ($a, $b, $c) = (e1, e2, e3);

The long-distance relationships make it so irritating when ek are more than a couple of characters, I often give up and write:

my $a = e1;
my $b = e2;
my $c = e3;

even though I try to use vertical space sparingly.

You're of course welcome to your opinion, but multiple assignment exists in many languages. So you're saying they're all wrong to have such a feature?

No. It's good to have multiple assignments; it's annoying that Perl prevents the option of grouping initializers with the data if I so wanted:

my $a = e1, $b = e2, $c = e3;

I was just opining that

foreach (a ; e1) (a2 ; e2) {}

is clearer than:

foreach (a ; b) (e1 ; e2) {}

Andrei
Feb 14 2007
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 You're of course welcome to your opinion, but multiple assignment 
 exists in many languages.  So you're saying they're all wrong to have 
 such a feature?

No. It's good to have multiple assignments; it's annoying that Perl prevents the option of grouping initializers with the data if I so wanted:

my $a = e1, $b = e2, $c = e3;

I was just opining that

foreach (a ; e1) (a2 ; e2) {}

is clearer than:

foreach (a ; b) (e1 ; e2) {}

Andrei

Ah, but then would you agree that your simultaneous foreach proposal would only serve the purpose of grouping variables next to their data? I.e., it wouldn't help in terms of simplifying code complexity (as it might happen in other languages, like Perl which you mentioned in another post), since D can currently do this:

foreach (x,y ; transpose_view(reverse_view(foo), bar)) // then why not this too?!

And your proposal would only serve to simplify the above to this:

foreach (x ; reverse_view(foo)) (y ; bar) // probably I could!

which IMO is negligible, and not worthy of adding such new syntax.

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 16 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bruno Medeiros wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 You're of course welcome to your opinion, but multiple assignment 
 exists in many languages.  So you're saying they're all wrong to have 
 such a feature?

No. It's good to have multiple assignments; it's annoying that Perl prevents the option of grouping initializers with the data if I so wanted:

my $a = e1, $b = e2, $c = e3;

I was just opining that

foreach (a ; e1) (a2 ; e2) {}

is clearer than:

foreach (a ; b) (e1 ; e2) {}

Andrei

Ah, but then would you agree that your simultaneous foreach proposal would only serve the purpose of grouping variables next to their data? I.e., it wouldn't help in terms of simplifying code complexity (as it might happen in other languages, like Perl which you mentioned in another post), since D can currently do this:

foreach (x,y ; transpose_view(reverse_view(foo), bar)) // then why not this too?!

And your proposal would only serve to simplify the above to this:

foreach (x ; reverse_view(foo)) (y ; bar) // probably I could!

which IMO is negligible, and not worthy of adding such new syntax.

No, I would not agree. We discussed at length in a couple of posts the advantages and disadvantages of the foreach()() approach vs. library-based iterators. In short, the foreach()() version is safer and potentially faster, but more restricted. Andrei
Feb 16 2007
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bruno Medeiros wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 You're of course welcome to your opinion, but multiple assignment 
 exists in many languages.  So you're saying they're all wrong to have 
 such a feature?

No. It's good to have multiple assignments; it's annoying that Perl prevents the option of grouping initializers with the data if I so wanted:

my $a = e1, $b = e2, $c = e3;

I was just opining that

foreach (a ; e1) (a2 ; e2) {}

is clearer than:

foreach (a ; b) (e1 ; e2) {}

Andrei

Ah, but then would you agree that your simultaneous foreach proposal would only serve the purpose of grouping variables next to their data? I.e., it wouldn't help in terms of simplifying code complexity (as it might happen in other languages, like Perl which you mentioned in another post), since D can currently do this:

foreach (x,y ; transpose_view(reverse_view(foo), bar)) // then why not this too?!

Oh, I wasn't thinking D could actually do this already. Is a generic transpose_view possible using a variadic opApply? I don't think so, but maybe I'm wrong. You'd need:

int opApply( int delegate(Args...) body ) { . . . }

Does that work?

--bb
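The underlying obstacle is that opApply is internal (push-style) iteration: the collection drives the loop and hands elements to a callback, so two traversals cannot be interleaved without inverting control. A small Python sketch of the distinction; the op_apply name is mine, purely illustrative:

```python
def op_apply(coll, body):
    """Internal iteration, opApply-style: the collection calls the
    body once per element; the caller never holds an iterator and
    cannot pause the traversal mid-stream."""
    for item in coll:
        if body(item):  # nonzero return means 'break'
            return 1
    return 0

# With internal iteration alone, each op_apply call runs its whole
# collection to completion, so two collections cannot be advanced in
# lockstep.  Pull-style external iterators, by contrast, zip trivially:
it1, it2 = iter([1, 2, 3]), iter(["a", "b"])
pairs = []
while True:
    try:
        x, y = next(it1), next(it2)  # advance both, stop at shorter
    except StopIteration:
        break
    pairs.append((x, y))

print(pairs)  # [(1, 'a'), (2, 'b')]
```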
Feb 16 2007
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Bill Baxter wrote:
 Bruno Medeiros wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 You're of course welcome to your opinion, but multiple assignment 
 exists in many languages.  So you're saying they're all wrong to 
 have such a feature?

No. It's good to have multiple assignments; it's annoying that Perl prevents the option of grouping initializers with the data if I so wanted:

my $a = e1, $b = e2, $c = e3;

I was just opining that

foreach (a ; e1) (a2 ; e2) {}

is clearer than:

foreach (a ; b) (e1 ; e2) {}

Andrei

Ah, but then would you agree that your simultaneous foreach proposal would only serve the purpose of grouping variables next to their data? I.e., it wouldn't help in terms of simplifying code complexity (as it might happen in other languages, like Perl which you mentioned in another post), since D can currently do this:

foreach (x,y ; transpose_view(reverse_view(foo), bar)) // then why not this too?!

Oh, I wasn't thinking D could actually do this already. Is a generic transpose_view possible using a variadic opApply? I don't think so, but maybe I'm wrong. You'd need:

int opApply( int delegate(Args...) body ) { . . . }

Does that work?

--bb

Oops, my mistake, that "since D can currently do this" is wrong. It's not implementable; you need an actual iterator concept to do that, but so does Andrei's simultaneous foreach (news://news.digitalmars.com:119/45D48352.50802 erdani.org), so those examples are comparable.

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 18 2007
prev sibling parent reply renoX <renosky free.fr> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 renoX wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 BTW, D might soon have simultaneous iteration that will blow away all 
 conventional languages:

 foreach (i ; coll1) (j ; coll2)
 {
   ... use i and j ...
 }
 continue foreach (i)
 {
   ... coll2 finished; use i ...
 }
 continue foreach (j)
 {
   ... coll1 finished; use j ...
 }

 Best languages out there are at best ho-hum when it comes about 
 iterating through simultaneous streams. Most lose their elegant 
 iteration statement entirely and come with something that looks like 
 an old hooker early in the morning.

At first, I really didn't like the 'continue foreach'; then afterwards I got used to it. I wonder if this is really such a requested feature, though: what's wrong with the good old 'for' or 'while' for the complex case?

Absolutely nothing's wrong. The same argument, however, could be formulated to render foreach redundant. We have for, don't we. The thing is foreach is terse and elegant and has a functional flavor that gives it safety and power that for doesn't have.

I wouldn't call 'functional flavored' something with such a 'hidden state' stored in i, but that's just me.

And I have a question about the safety: what is supposed to happen if the programmer modifies coll1 between the foreach(i ; coll1) and continue foreach? Adding or removing values in the collection before the continue foreach? Just being curious; I would imagine that this is just forbidden.

renoX
 It's only natural 
 to ask oneself why all of these advantages must go away in a blink just 
 because you want to iterate two things simultaneously.

 
 
 Andrei

Feb 13 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
renoX wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 renoX wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 BTW, D might soon have simultaneous iteration that will blow away 
 all conventional languages:

 foreach (i ; coll1) (j ; coll2)
 {
   ... use i and j ...
 }
 continue foreach (i)
 {
   ... coll2 finished; use i ...
 }
 continue foreach (j)
 {
   ... coll1 finished; use j ...
 }

 Best languages out there are at best ho-hum when it comes about 
 iterating through simultaneous streams. Most lose their elegant 
 iteration statement entirely and come with something that looks like 
 an old hooker early in the morning.

At first, I really didn't like the 'continue foreach', then afterwards I got used to it, I wonder if this is really such a requested feature though, what's wrong with the good old 'for' or 'while' for the complex case?

Absolutely nothing's wrong. The same argument, however, could be formulated to render foreach redundant. We have for, don't we. The thing is foreach is terse and elegant and has a functional flavor that gives it safety and power that for doesn't have.

I wouldn't call 'functional flavored' something with such a 'hidden state' stored in i, but that's just me.

My definition of "foreach (i ; c) S" is: bind i in turn to each element of c, and evaluate S. That's very functional. The definition is much more reminiscent of map and fold than of "for (s; e1; e2) S". And let's not forget that all functional programs sneak state in through their arguments - or a monad :o).
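That reading - bind each element in turn, evaluate the body - is exactly map when the body produces a value; a toy Python illustration:

```python
# 'foreach (i ; c) S' read functionally: bind i to each element of c
# and evaluate S.  When S yields a value per element, that is map.
c = [1, 2, 3]

collected = []
for i in c:                 # the foreach-as-binding reading
    collected.append(i * 2)

# The imperative loop and the functional form agree:
print(collected == list(map(lambda i: i * 2, c)))  # True
```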
 And I have a question for the safety: what is supposed to happen if the 
 programmer modifies coll1 between the foreach(i ; coll1) and continue 
 foreach?

That's a good question. Currently the behavior is undefined. The behavior that Walter is thinking of implementing is to render it implementation-defined, but never undefined (as in trashing random memory). All that needs to be done is to disallow in-place shrinking of containers. The garbage collector will take care of the rest.
 Adding or removing value in the collection before the continue foreach?

See above.
 Just being curious, I would imagine that this is just forbidden.

That's what it is today - it's formally forbidden. After Walter eliminates in-place shrinking, it will be safe, and interestingly, there will never be a need for bounds checking. Andrei
Feb 13 2007
prev sibling next sibling parent reply Aarti_pl <aarti interia.pl> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

foreach (i ; coll1) (j ; coll2)
{
  ... use i and j ...
}
continue foreach (i)
{
  ... coll2 finished; use i ...
}
continue foreach (j)
{
  ... coll1 finished; use j ...
}

Best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

Andrei

Isn't it possible to extend the proposed syntax to something like what's suggested on the D wish list: http://all-technology.com/eigenpolls/dwishlist/index.php?it=42

In short: there is a proposition to add special cases in 'foreach' for the first iterated element and the last element. It would allow use of 'foreach' also for cases like:

char[][] arr = ["a", "b", "c"];
char[] res;
for (int i=0; i<arr.length-1; i++) {
    res ~= arr[i] ~ ",";
}
if (arr.length>0) res ~= arr[$-1];

turning it into something like this (just a raw proposition - the syntax might be different):

foreach(e; arr) {
    res ~= e ~ ",";
}
last foreach(e) {
    res ~= e;
}

It seems that it would be a logical consequence of the proposed syntax... And it would be very useful, as the above use case is quite popular IMHO.

Best Regards
Marcin Kuszczak
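The use case being special-cased here is the classic 'join with separator' pattern; for comparison, a minimal Python sketch of the manual loop and the one-call library form:

```python
arr = ["a", "b", "c"]

# Manual form, mirroring the D loop above: append a separator after
# every element except the last.
res = ""
for i in range(len(arr) - 1):
    res += arr[i] + ","
if arr:
    res += arr[-1]

print(res)                   # a,b,c
print(",".join(arr) == res)  # True - the library form does the same
```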
Feb 14 2007
parent reply Aarti_pl <aarti interia.pl> writes:
Aarti_pl wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 foreach (i ; coll1) (j ; coll2)
 {
   ... use i and j ...
 }
 continue foreach (i)
 {
   ... coll2 finished; use i ...
 }
 continue foreach (j)
 {
   ... coll1 finished; use j ...
 }

Isn't it possible to extend the proposed syntax to something like what's suggested on the D wish list: http://all-technology.com/eigenpolls/dwishlist/index.php?it=42

In short: there is a proposition to add special cases in 'foreach' for the first iterated element and the last element. It would allow use of 'foreach' also for cases like:

char[][] arr = ["a", "b", "c"];
char[] res;
for (int i=0; i<arr.length-1; i++) {
    res ~= arr[i] ~ ",";
}
if (arr.length>0) res ~= arr[$-1];

turning it into something like this (just a raw proposition - the syntax might be different):

foreach(e; arr) {
    res ~= e ~ ",";
}
last foreach(e) {
    res ~= e;
}

It seems that it would be a logical consequence of the proposed syntax... And it would be very useful, as the above use case is quite popular IMHO.

Best Regards
Marcin Kuszczak

Maybe syntax like this would be more D-ish, flexible, and compact, because in fact we need a kind of switch..case syntax for 'foreach'... :

foreach (i ; coll1) (j ; coll2) {
      ... use i and j ...

    case(continue)(i) {
      ... coll2 finished; use i ...
    }

    case(continue)(j) {
      ... coll1 finished; use j ...
    }

    case(last)(i) {
      ... do something with last col1 element
    }

    case(first)(j) {
      ... do something with first col2 element
    }
}

It does not introduce new keywords and is easy to extend to other situations. I also like that everything is a part of foreach, and is not separated into different statements...

BR
Marcin Kuszczak
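The case(first)/case(last) idea needs one element of lookahead, since 'last' is only known once the next fetch fails; a hypothetical Python helper (the name with_flags is mine) showing one way to supply both flags:

```python
def with_flags(iterable):
    """Yield (is_first, is_last, item) triples.  Detecting 'last'
    requires holding one item back until the next fetch succeeds or
    fails, which is why it is awkward to bolt onto a plain loop."""
    it = iter(iterable)
    try:
        prev = next(it)
    except StopIteration:
        return                      # empty input: nothing to flag
    first = True
    for item in it:
        yield (first, False, prev)  # prev is not last: a successor exists
        prev, first = item, False
    yield (first, True, prev)       # the held-back item is the last one

print(list(with_flags("abc")))
# [(True, False, 'a'), (False, False, 'b'), (False, True, 'c')]
```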
Feb 14 2007
parent renoX <renosky free.fr> writes:
Aarti_pl wrote:
 Isn't it possible to extend proposed syntax for something like suggested on D
wish list:
 http://all-technology.com/eigenpolls/dwishlist/index.php?it=42 

The wishlist itself doesn't strike me as very useful: 'on first' can be done with a boolean variable test (maybe a little less optimal from a performance point of view), and 'on last' can be done with scope(success){} (at least for the current foreach).

Aarti_pl wrote:
 Maybe syntax like this would be more D-ish, flexible and compact, 
 because in fact we need kind of switch..case syntax for 'foreach'... :
 
 foreach (i ; coll1) (j ; coll2) {
       ... use i and j ...
 
     case(continue)(i) {
       ... coll2 finished; use i ...
     }
 
     case(continue)(j) {
       ... coll1 finished; use j ...
     }
 
     case(last)(i) {
       ... do something with last col1 element
     }
 
     case(first)(j) {
       ... do something with first col2 element
     }
 }
 
 It does not introduce new keywords and is easy to extend for other 
 situations. I like also that everything is a part of foreach, and is not 
 separated into different statements...

Yes! IMHO, this is better than the 'continue foreach': it avoids the 'magical' state passing between foreach and continue foreach (OK, it's not magically passed - i|j are used - but as they "look" like simple variables, this still looks weird). Plus coll1 and coll2 cannot be modified anymore before the second iteration.

My only nitpick would be that the name 'continue' in case(continue) is not very good: it could confuse a user into thinking that this is executed in case of usage of the 'normal' continue. But I can't think of a better name: case(alone), case(only), case(iterate_alone), case(continue_alone) are not very good... Maybe case(end)(j) (when j does not iterate anymore) to be more consistent with case(first), case(last)...

Also, what to do when the iteration is made over three collections, say i, j, k: case(continue)(i,j)? Or maybe case(end)(j,k)? This would make the code look like this:

case(continue)(i,j) { .. foo1 .. }
case(continue)(j,k) { .. foo2 .. }
case(continue)(i,k) { .. foo3 .. }
case(continue)(i) { .. foo4 .. }
case(continue)(j) { .. foo5 .. }
case(continue)(k) { .. foo6 .. }

Hopefully one doesn't need to do this too often!

renoX

PS: IMHO there are quite a few basic features missing in D, such as proper associative/static array initialisation, good string format sugar (like in Ruby), etc., so I have a hard time finding 'continue foreach / case(continue)' interesting.
 
 BR
 Marcin Kuszczak

Feb 14 2007
prev sibling next sibling parent janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

foreach (i ; coll1) (j ; coll2)
{
  ... use i and j ...
}
continue foreach (i)
{
  ... coll2 finished; use i ...
}
continue foreach (j)
{
  ... coll1 finished; use j ...
}

Best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

Andrei

Just a random thought: it would be a good idea to post these sorts of things as a new post. It's been very hard to follow with all these mega threads of late.

-Joel
Feb 14 2007
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

foreach (i ; coll1) (j ; coll2)
{
  ... use i and j ...
}
continue foreach (i)
{
  ... coll2 finished; use i ...
}
continue foreach (j)
{
  ... coll1 finished; use j ...
}

Best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

Andrei

Hum. Correct me if I'm wrong, but that proposal would also require adding an iterator concept to iterable types, since the current opApply() mechanism can't support such simultaneous iteration.

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 15 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bruno Medeiros wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 James Dennett wrote:
 C++, of course, has std::for_each(e.begin(), e.end(), do_x);
 in its library (though that's weaker than it could be because
 of lack of support for convenient anonymous functions/lambdas).

 C++0x is very likely to have for(v: e).  It's implemented
 in ConceptGCC already. Java already has essentially that,
 as does C#.  This really doesn't set D apart (but at least
 D isn't falling behind here).

BTW, D might soon have simultaneous iteration that will blow away all conventional languages:

foreach (i ; coll1) (j ; coll2)
{
  ... use i and j ...
}
continue foreach (i)
{
  ... coll2 finished; use i ...
}
continue foreach (j)
{
  ... coll1 finished; use j ...
}

Best languages out there are at best ho-hum when it comes to iterating through simultaneous streams. Most lose their elegant iteration statement entirely and come with something that looks like an old hooker early in the morning.

Andrei

Hum. Correct me if I'm wrong, but that proposal would also require adding an iterator concept to iterable types, since the current opApply() mechanism can't support such simultaneous iteration.

Correct. Andrei
Feb 15 2007
prev sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
James Dennett wrote:
 Familiarity with Lisp does help when working with C++
 templates, it seems.  That might also be true for templates
 in D.

I can confirm that. I was never able to grok even easy metaprogramming with templates in D until I learned a little Lisp. The whole first chapter of SICP (all the examples and exercises) can be translated to metaprograms directly, and they look like Lisp too. I think it is the static if / else and alias (template template) parameters which make this so easy.
Feb 13 2007
prev sibling parent reply janderson <askme me.com> writes:
k
 Agreed. But the issue is not about how badly they're flawed. Instead, 

What do you mean by MyDSL (I must have missed the discussion there)? I'll take a stab: one issue with DSLs is that they can cause code to become un-standardized. You come across a DSL you haven't seen before, and now you've got to scan through all the crazy template code to figure out what it's doing. Am I close?

At least in my line of work, developing new DSLs is part of the job. I probably use 5 of them a day (XML sub-languages, file binaries, shader languages, makefiles, Lua, scripting languages, network subscriptions, gui-loading, data-base communication etc...). I probably create a new one about every month. These are not compiled into the language, but they might as well be. Design, art, and even I cannot let the complexity of C++ get in the way of these repetitive tasks. Even C++ code is written in certain ways which I consider DSLs (albeit with much duplication and increased likelihood of errors). Actually, I'm considering making an intermediate language and a tool so design/art can write some of this themselves, then hand it to us for integration -> one less communication bottleneck.

In fact, if you've ever created your own binary file or loaded an XML, that's a DSL. Whether it's in compiled code or run-time code, the cost doesn't change as long as it's just as easy to write either way.

Note: I'm not arguing that meta-programming should be higher priority than, say, reflection. I'm just arguing that it's an extension to what programmers do on a day-to-day basis. I also think it will be a while before we realize the full potential of DSLs. Like anything else, they should be used with care.

-Joel
Feb 12 2007
next sibling parent janderson <askme me.com> writes:
janderson wrote:
 k
  >
  > Agreed. But the issue is not about how badly they're flawed. Instead, 
 it's the non-standard "language" problem. The MyDSL problem :)
 
 What do u mean by MyDSL (I must have missed the discussion there)? I'll 
 take a stab.  One issue DSL causes is that it can cause code to become 
 un-standardized.  You come across a DSL you haven't seen before, and now 
 you've got to scan though all the crazy template code to figure out what 
 its doing.  Am I close?
 
 At least in my line of work, developing new DSL is part of the job.  I 
 probably use in 5 of them a day (XML sub-languages, file binaries, 
 shader languages, makefiles, lua, scripting languages, network 
 subscriptions, gui-loading, data-base communication ect...).

I forgot to mention that "Shader Languages" are actually compiled down to C code for fast loading. We also have other DSLs which are compiled into C++ code (which I can't say much more on) whenever we build. -Joel
Feb 12 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Note: I'm not arguing that meta-programming should be higher priority 
 then say reflection.  I'm just arguing that its just an extension to 
 what programmers do on a day-to-day basis.

But metaprogramming *gives* reflection (as even you and others discussed recently). The half-assed way to do reflection is to have the language implementer sit down and define the run-time reflection mechanism. The full-assed way is to define compile-time reflection, to then allow people to define and refine run-time reflection mechanisms as libraries, in addition to many other useful libraries! It's like in the fish vs. fishing parable.
 I also think it will be a while before we will realize the full 
 potential of DSL.  Like anything else they should be used with care.

There's one way to find out :o). Andrei
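To make the fish vs. fishing point concrete, here is a rough C++ sketch (all names invented; this is not Andrei's proposed API) of the library-level approach: compile-time knowledge of a type populates an ordinary run-time registry, which is exactly the kind of thing a D `mixin Manifest!(Widget);` could generate automatically:

```cpp
#include <map>
#include <string>
#include <vector>

// Invented sketch of a run-time reflection record that a compile-time
// facility could emit; not an actual D or C++ library API.
struct FieldInfo { std::string name; std::string type; };
struct ClassInfo { std::string name; std::vector<FieldInfo> fields; };

std::map<std::string, ClassInfo>& registry() {
    static std::map<std::string, ClassInfo> r;  // one global table of classes
    return r;
}

struct Widget { int width; int height; std::string label; };

// In the proposal, `mixin Manifest!(Widget);` would generate this body by
// iterating Widget's members at compile time; here it is written by hand.
void manifestWidget() {
    registry()["Widget"] = ClassInfo{"Widget",
        {{"width", "int"}, {"height", "int"}, {"label", "string"}}};
}
```

A serializer or script binding can then look up "Widget" at run time without the language shipping one fixed reflection mechanism.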
Feb 12 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Note: I'm not arguing that meta-programming should be higher priority 
 then say reflection.  I'm just arguing that its just an extension to 
 what programmers do on a day-to-day basis.

But metaprogramming *gives* reflection (as even you and others discussed recently). The half-assed way to do reflection is to have the language implementer sit down and define the run-time reflection mechanism. The full-assed way is to define compile-time reflection, to then allow people to define and refine run-time reflection mechanisms as libraries, in addition to many other useful libraries! It's like in the fish vs. fishing parable.

I agree, but it's like the STL vectors. I'm unsure whether it just is easier to have that kind of thing in the language, because it already has much of that information.

The information being there is all the more reason to make it available during compilation, for reflection and many other purposes, e.g. PyD being one of them.
Also to write a reflection program that 
 doesn't require wrapping each and every call you'd need to write a fully 
 fledged compiler which may become out of sync with the compiler.   I'm 
 undecided on this matter.

Not sure I understand. All that will be needed to make Widget available is: mixin Manifest!(Widget); I don't see where the syncing problem occurs. Andrei
Feb 12 2007
parent reply janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Note: I'm not arguing that meta-programming should be higher 
 priority then say reflection.  I'm just arguing that its just an 
 extension to what programmers do on a day-to-day basis.

But metaprogramming *gives* reflection (as even you and others discussed recently). The half-assed way to do reflection is to have the language implementer sit down and define the run-time reflection mechanism. The full-assed way is to define compile-time reflection, to then allow people to define and refine run-time reflection mechanisms as libraries, in addition to many other useful libraries! It's like in the fish vs. fishing parable.

I agree but its like the stl vectors. I'm unsure weather it just is easier to have that kinda thing in the language because it already has much of that information.

The information being there is all the more reason to make it available during compilation, for reflection and many other purposes, e.g. PyD being one of them.
 Also to write a reflection program that doesn't require wrapping each 
 and every call you'd need to write a fully fledged compiler which may 
 become out of sync with the compiler.   I'm undecided on this matter.

Not sure I understand. All that will be needed to make Widget available is: mixin Manifest!(Widget); I don't see where the syncing problem occurs.

This is wrapping each class. What if the code was hidden in a library or something? How would you get such information? Also wrapping each class is not as neat as a complete code analysis could be (which is possible with mixins, just very difficult and slow). Syncing problems could still occur if Walter decides to make some change in the language syntax. -Joel
 
 Andrei

Feb 12 2007
next sibling parent janderson <askme me.com> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Note: I'm not arguing that meta-programming should be higher 
 priority then say reflection.  I'm just arguing that its just an 
 extension to what programmers do on a day-to-day basis.

But metaprogramming *gives* reflection (as even you and others discussed recently). The half-assed way to do reflection is to have the language implementer sit down and define the run-time reflection mechanism. The full-assed way is to define compile-time reflection, to then allow people to define and refine run-time reflection mechanisms as libraries, in addition to many other useful libraries! It's like in the fish vs. fishing parable.

I agree but its like the stl vectors. I'm unsure weather it just is easier to have that kinda thing in the language because it already has much of that information.

The information being there is all the more reason to make it available during compilation, for reflection and many other purposes, e.g. PyD being one of them.
 Also to write a reflection program that doesn't require wrapping each 
 and every call you'd need to write a fully fledged compiler which may 
 become out of sync with the compiler.   I'm undecided on this matter.

Not sure I understand. All that will be needed to make Widget available is: mixin Manifest!(Widget); I don't see where the syncing problem occurs.

This is wrapping each class. What if the the code was hidden in a library or something. How would you get a such information? Also wrapping each class is not as neat as a complete code analysis could be (which is possible in mixin, just very difficult and slow). Syncing problems could still occur if Walter decides to make some change in the language syntax. -Joel
 Andrei


That's my argument against. My argument for would be that you would be able to build up a much more powerful reflection than Walter would ever have time to create. For instance, you should be able to create a unique ID for every class, so you can version them (so if they change, you can still serialize them correctly). -Joel
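The versioning idea can be sketched with an ordinary layout hash (FNV-1a here; the layout string and function names are invented for illustration): derive the class ID from its field layout, so any change to the class yields a different ID and old serialized data can be recognized:

```cpp
#include <cstdint>
#include <string>

// 64-bit FNV-1a hash; deterministic across builds, so usable as a version tag.
uint64_t fnv1a(const std::string& s) {
    uint64_t h = 1469598103934665603ULL;   // FNV offset basis
    for (unsigned char c : s) {
        h ^= c;
        h *= 1099511628211ULL;             // FNV prime
    }
    return h;
}

// The ID changes whenever the serialized layout of the class changes,
// e.g. classVersionId("Widget{int width;int height;}").
uint64_t classVersionId(const std::string& layout) { return fnv1a(layout); }
```

A deserializer can compare the stored ID against the current one and fall back to a migration path on mismatch.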
Feb 12 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Note: I'm not arguing that meta-programming should be higher 
 priority then say reflection.  I'm just arguing that its just an 
 extension to what programmers do on a day-to-day basis.

But metaprogramming *gives* reflection (as even you and others discussed recently). The half-assed way to do reflection is to have the language implementer sit down and define the run-time reflection mechanism. The full-assed way is to define compile-time reflection, to then allow people to define and refine run-time reflection mechanisms as libraries, in addition to many other useful libraries! It's like in the fish vs. fishing parable.

I agree but its like the stl vectors. I'm unsure weather it just is easier to have that kinda thing in the language because it already has much of that information.

The information being there is all the more reason to make it available during compilation, for reflection and many other purposes, e.g. PyD being one of them.
 Also to write a reflection program that doesn't require wrapping each 
 and every call you'd need to write a fully fledged compiler which may 
 become out of sync with the compiler.   I'm undecided on this matter.

Not sure I understand. All that will be needed to make Widget available is: mixin Manifest!(Widget); I don't see where the syncing problem occurs.

This is wrapping each class. What if the the code was hidden in a library or something. How would you get a such information? Also wrapping each class is not as neat as a complete code analysis could be (which is possible in mixin, just very difficult and slow). Syncing problems could still occur if Walter decides to make some change in the language syntax.

I find it reasonable to require one line per exposed class. This is even easier than writing a manifest file. Andrei
Feb 12 2007
next sibling parent janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Note: I'm not arguing that meta-programming should be higher 
 priority then say reflection.  I'm just arguing that its just an 
 extension to what programmers do on a day-to-day basis.

But metaprogramming *gives* reflection (as even you and others discussed recently). The half-assed way to do reflection is to have the language implementer sit down and define the run-time reflection mechanism. The full-assed way is to define compile-time reflection, to then allow people to define and refine run-time reflection mechanisms as libraries, in addition to many other useful libraries! It's like in the fish vs. fishing parable.

I agree but its like the stl vectors. I'm unsure weather it just is easier to have that kinda thing in the language because it already has much of that information.

The information being there is all the more reason to make it available during compilation, for reflection and many other purposes, e.g. PyD being one of them.
 Also to write a reflection program that doesn't require wrapping 
 each and every call you'd need to write a fully fledged compiler 
 which may become out of sync with the compiler.   I'm undecided on 
 this matter.

Not sure I understand. All that will be needed to make Widget available is: mixin Manifest!(Widget); I don't see where the syncing problem occurs.

This is wrapping each class. What if the the code was hidden in a library or something. How would you get a such information? Also wrapping each class is not as neat as a complete code analysis could be (which is possible in mixin, just very difficult and slow). Syncing problems could still occur if Walter decides to make some change in the language syntax.

I find it reasonable to require one line per exposed class. This is even easier than writing a manifest file. Andrei

Very true, however I think it could be written in a way that is easier. I'm on the fence on this one. -Joel
Feb 12 2007
prev sibling parent Kevin Bealer <kevinbealer gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Note: I'm not arguing that meta-programming should be higher 
 priority then say reflection.  I'm just arguing that its just an 
 extension to what programmers do on a day-to-day basis.

But metaprogramming *gives* reflection (as even you and others discussed recently). The half-assed way to do reflection is to have the language implementer sit down and define the run-time reflection mechanism. The full-assed way is to define compile-time reflection, to then allow people to define and refine run-time reflection mechanisms as libraries, in addition to many other useful libraries! It's like in the fish vs. fishing parable.

I agree but its like the stl vectors. I'm unsure weather it just is easier to have that kinda thing in the language because it already has much of that information.

The information being there is all the more reason to make it available during compilation, for reflection and many other purposes, e.g. PyD being one of them.
 Also to write a reflection program that doesn't require wrapping 
 each and every call you'd need to write a fully fledged compiler 
 which may become out of sync with the compiler.   I'm undecided on 
 this matter.

Not sure I understand. All that will be needed to make Widget available is: mixin Manifest!(Widget); I don't see where the syncing problem occurs.

This is wrapping each class. What if the the code was hidden in a library or something. How would you get a such information? Also wrapping each class is not as neat as a complete code analysis could be (which is possible in mixin, just very difficult and slow). Syncing problems could still occur if Walter decides to make some change in the language syntax.

I find it reasonable to require one line per exposed class. This is even easier than writing a manifest file. Andrei

Ah, but what if I want to deal with non-exposed classes, i.e. serialize a complex graph or similar data structure to disk? I think Python has 'pickling' that does this either in the language or in a fairly standard library.

For this kind of purpose, it would be useful to have some in-language and some library support; the library support can ride on top of the language support, and shouldn't require too much more. Once you can get the useful parts of the parse tree or the equivalent data from the language, most or all of the rest can go in a library.

It would help my argument a lot if the unused bits could be left out of the program on the basis of some kind of analysis; that implies that the language support be triggered by the library instances on some level, something like C++'s automatically generated methods. (For some reason I suspect that idea won't be popular...)

Kevin
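The hard part of pickling a graph is object identity: shared and cyclic references must be written once and back-referenced thereafter. A minimal C++ sketch of that bookkeeping (invented text format, nothing like Python's actual pickle protocol):

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

struct Node { int value; std::vector<Node*> edges; };

// Assign each node an id on first visit; later visits emit a back-reference.
// This is what lets cycles and shared subobjects round-trip correctly.
void dump(Node* n, std::map<Node*, int>& ids, std::ostringstream& out) {
    auto it = ids.find(n);
    if (it != ids.end()) { out << "ref(" << it->second << ")"; return; }
    int id = static_cast<int>(ids.size());
    ids[n] = id;
    out << "node(" << id << "," << n->value << ",[";
    for (Node* e : n->edges) { dump(e, ids, out); out << ' '; }
    out << "])";
}

std::string pickle(Node* root) {
    std::map<Node*, int> ids;
    std::ostringstream out;
    dump(root, ids, out);
    return out.str();
}
```

A language-level reflection facility would supply the field traversal; the identity tracking above is the part that naturally lives in a library.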
Feb 13 2007
prev sibling parent janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 janderson wrote:
 Note: I'm not arguing that meta-programming should be higher 


extension to what programmers do on a day-to-day basis.
 But metaprogramming *gives* reflection (as even you and others 

 discussed recently). The half-assed way to do reflection is to have the 
 language implementer sit down and define the run-time reflection 
 mechanism. The full-assed way is to define compile-time reflection, to 
 then allow people to define and refine run-time reflection mechanisms as 
 libraries, in addition to many other useful libraries! It's like in the 
 fish vs. fishing parable.

I agree, but it's like the STL vectors. I'm unsure whether it just is easier to have that kind of thing in the language, because it already has much of that information. Also, to write a reflection program that doesn't require wrapping each and every call you'd need to write a fully fledged D parser, which may become out of sync with the compiler. I'm undecided on this matter.
 I also think it will be a while before we will realize the full 


 There's one way to find out :o).

 Andrei

Feb 12 2007
prev sibling next sibling parent reply Kevin Bealer <kevinbealer gmail.com> writes:
Great post;  all of this is interesting, and I think I agree with all of 
the individual points.

Walter Bright wrote:
 kris wrote:

 
 Some comments:

 4) The more experience I have, the more it seems that the language that 
 got a lot right is ... Lisp. But Lisp did one thing terribly, terribly 
 wrong - the syntax. The Lisp experts who can get past that seem to be 
 amazingly productive with Lisp. The rest of us will remain envious of 
 what Lisp can do, but will never use it.
 
 5) Lisp gets things right, according to what I've read from heavy Lisp 
 users, by being a language that can be modified on the fly to suit the 
 task at hand, in other words, by having a customizable language one can 
 achieve dramatic productivity gains.

Yes - I've often wished for OCaml or LISP with C syntax, but LISP has another real issue for me, which is that 'mutable' algorithms are hard.
 6) If I think about it a certain way, it looks like what C++ Boost is 
 doing is a desperate attempt to add Lisp-like features. By desperate I 
 mean that C++'s core feature set is just inadequate to the task. For 
 example, look at all the effort Boost has gone through to do a barely 
 functioning implementation of tuples. Put tuples into the language 
 properly, and all that nasty stuff just falls away like a roofer peeling 
 off the old shingles.

 9) But I see what C++ templates can do. So to me, the problem is to 
 design templates in such a way that they are as simple to write as 
 ordinary functions. *Then*, what templates can do can be accessible and 
 maintainable. It's like cars - they used to be very difficult to drive, 
 but now anyone can hop in, turn the key, and go.

 11) Today's way-out feature is tomorrow's pedestrian code. I'm old 
 enough to remember when "structured code", i.e. while, for, switch 
 instead of goto, was the way-out feature (70's). Then, OOP was all the 
 rage (80's), now that's a major yawner. STL was then the way-out fad 
 (90's), now that's pedestrian too. Now it's metaprogramming (00's), and 
 I bet by 2015 that'll be ho-hum too, and it's D that's going to do it.

Code in C++ wants to be iterative/imperative, and code in LISP wants to be recursive and functional. But the 'macro system' in both cases is written as a functional/recursive system. In LISP, this was by design and works with the language; in C++ it's because recursion happened to fit through a keyhole in the template grammar. LISP makes it hard/awkward to work with large mutable structures. Everything is functional, so a functional macro system can do everything. C++ templates were great (by today's standards, adequate) for container classes, but that task could almost be done just with text substitution.

So... In my view, to make metaprogramming work in a way that non-LISP people understand in something like C++ or D, we need to see programs as a series of iterations:

1. A program with all the static code, but main() is replaced by calls to functions in each module that manufacture code (code factories).
2. Same as 1, with the newly manufactured code added, along with new calls to code factories.
3. Same as 2, etc.
N. A program with all static (non-template) code.

Each step is run to produce the next step -- in most cases, this consists of running one code generator after another -- the individual steps are not actually explicit, but it is important for one code generator to be able to use the results of another when it needs to. All steps except N are optional, and usually you would have just 1 and N. It might be a good idea at first to require that.

The difference between this world order and the current (C++) way is that in C++ the factories are defined as recursive templates that expand like fractals.

Rationale: LISP macros have access to essentially all of LISP. If they didn't have access to (for instance) mapcar, the LISP macro system would be measurably harmed. D programs need the same consideration --- if a 'regular D' programmer had to do without std.string, he would feel the absence every time he needed find(). 
Similarly, every time I try to write a meta-program in D today, I feel the same absence of std.string. I think to really write clean code for the 'code factory' step, I need to be able to write nested loops over the user input, accumulate code in strings with "~=", and so on, without writing it recursively.

My first thought on "how" is to build an interpreter in the compiler that can run the parse tree of a D function. Which brings us fairly close to the LISP domain, the primary difference being that this interpreter would not necessarily be included in the compiled program (i.e. it's not 'eval()', yet.)

(Without having written a real working compiler, I don't know how practical this is; I think I've heard of parse-tree based interpreters in relation to Perl. A reasonable restriction might be "no C calls".)

Kevin
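What a 'code factory' step could look like, sketched in plain C++ (an invented helper, not an actual compiler feature): ordinary loops and string concatenation build the source text that a later step would compile, which is exactly the std.string-style convenience asked for above.

```cpp
#include <string>
#include <utility>
#include <vector>

// A code factory written as a normal function: iterate the inputs and
// accumulate source text, instead of encoding the loop as template recursion.
std::string makeStruct(
    const std::string& name,
    const std::vector<std::pair<std::string, std::string>>& fields) {
    std::string code = "struct " + name + " {\n";
    for (const auto& f : fields)
        code += "    " + f.first + " " + f.second + ";\n";  // "type name;"
    code += "};\n";
    return code;
}
```

D's later combination of compile-time function evaluation and string mixins works in essentially this way.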
Feb 12 2007
parent Walter Bright <newshound digitalmars.com> writes:
Kevin Bealer wrote:
 My first thought on "how" is to build an interpreter in the compiler 
 that can run the parse tree of a D function.  Which brings us fairly 
 close to the LISP domain, the primary difference being that this 
 interpreter would not necessarily be included in the compiled program 
 (i.e. its not 'eval()', yet.)

Andrei and I have come up with some scribbling that looks like a start on being able to manipulate parse trees. At least we know what we want it to look like <g>, but it's a looong way from being a solid design and then an implementation. I'm not sure where this will lead, it's like Fulton thinking "I've got this steam engine over here, and a boat over there, ..."
Feb 12 2007
prev sibling next sibling parent Pablo Ripolles <in-call gmx.net> writes:
Walter Bright Wrote:

 kris wrote:
  > Thus; shouting from the rooftops that D is all about meta-code, and DSL
  > up-the-wazzoo, may well provoke a backlash from the very people who
  > should be embracing the language. I'd imagine Andrei would vehemently
  > disagree, but so what? The people who will ultimately be responsible for
  > "allowing" D through the door don't care about fads or technical
  > superiority; they care about costs. And the overwhelming cost in
  > software development today, for the type of companies noted above, is
  > maintenance. For them, software dev is already complex enough. In all
  > the places I've worked or consulted, in mutiple countries, and since the
  > time before Zortech C, pedestrian-code := maintainable-code := less
  > overall cost.
 
 Some comments:
 
 1) D has no marketing budget. It isn't backed by a major corporation. 
 Therefore, it needs something else to catch peoples' attention. Mundane 
 features aren't going to do it.
 
 2) I know Java is wildly successful. But Java ain't the language for me 
 - because it takes too much code to do the simplest things. It isn't 
 straightforward clarifying code, either, it looks like a lot of 
 irrelevant bother. I'm getting older and I just don't want to spend the 
 *time* to write all that stuff. My fingertips get sore <g>. I wouldn't 
 use Java if it was twice as fast as any other language for that reason. 
 I wouldn't use Java if it was twice as popular as it is now for that reason.
 
 3) Less code == more productivity, less bugs. I don't mean gratuitously 
 less code, I mean less code in the sense that one can write directly 
 what one means, rather than a lot of tedious bother. For example, if I 
 want to visit each element in an array:
 
 	foreach(v; e)
 	{...}
 
 is more direct than:
 
 	for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
 	{ T v = e[i];
 	 ... }
 
 
 4) The more experience I have, the more it seems that the language that 
 got a lot right is ... Lisp. But Lisp did one thing terribly, terribly 
 wrong - the syntax. The Lisp experts who can get past that seem to be 
 amazingly productive with Lisp. The rest of us will remain envious of 
 what Lisp can do, but will never use it.
 
 5) Lisp gets things right, according to what I've read from heavy Lisp 
 users, by being a language that can be modified on the fly to suit the 
 task at hand, in other words, by having a customizable language one can 
 achieve dramatic productivity gains.
 
 6) If I think about it a certain way, it looks like what C++ Boost is 
 doing is a desperate attempt to add Lisp-like features. By desperate I 
 mean that C++'s core feature set is just inadequate to the task. For 
 example, look at all the effort Boost has gone through to do a barely 
 functioning implementation of tuples. Put tuples into the language 
 properly, and all that nasty stuff just falls away like a roofer peeling 
 off the old shingles.
 
 7) A lot of companies have outlawed C++ templates, and for good reason. 
 I believe that is not because templates are inherently bad. I think that 
 C++ templates are a deeply flawed because they were ***never designed 
 for the purpose to which they were put***.
 
 8) I've never been able to create usable C++ templates. Notice that the 
 DMD front end (in C++) doesn't use a single template. I know how they 
 work (in intimate detail) but I still can't use them.
 
 9) But I see what C++ templates can do. So to me, the problem is to 
 design templates in such a way that they are as simple to write as 
 ordinary functions. *Then*, what templates can do can be accessible and 
 maintainable. It's like cars - they used to be very difficult to drive, 
 but now anyone can hop in, turn the key, and go.
 
 10) Your points about pedestrian code are well taken. D needs to do 
 pedestrian code very, very well. But that isn't enough because lots of 
 languages do pedestrian code well enough.
 
 11) Today's way-out feature is tomorrow's pedestrian code. I'm old 
 enough to remember when "structured code", i.e. while, for, switch 
 instead of goto, was the way-out feature (70's). Then, OOP was all the 
 rage (80's), now that's a major yawner. STL was then the way-out fad 
 (90's), now that's pedestrian too. Now it's metaprogramming (00's), and 
 I bet by 2015 that'll be ho-hum too, and it's D that's going to do it.
 
 12) Take a look at what Kirk McDonald is doing with Pyd. He needs all 
 this stuff to make it slicker than oil on ground steel. He's on the 
 bleeding edge of stuff D needs to *make* pedestrian.

For me, reading this post is amazingly encouraging and exciting for the future. Thanx!
Feb 12 2007
prev sibling next sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 kris wrote:
  > Thus; shouting from the rooftops that D is all about meta-code, and DSL
  > up-the-wazzoo, may well provoke a backlash from the very people who
  > should be embracing the language. I'd imagine Andrei would vehemently
  > disagree, but so what? The people who will ultimately be responsible for
  > "allowing" D through the door don't care about fads or technical
  > superiority; they care about costs. And the overwhelming cost in
  > software development today, for the type of companies noted above, is
  > maintenance. For them, software dev is already complex enough. In all
  > the places I've worked or consulted, in mutiple countries, and since the
  > time before Zortech C, pedestrian-code := maintainable-code := less
  > overall cost.
 

Overall, I agree with kris. Support for mundane code is just as important, if not more important, than support for super features.
 Some comments:
 
 1) D has no marketing budget. It isn't backed by a major corporation. 
 Therefore, it needs something else to catch peoples' attention. Mundane 
 features aren't going to do it.
 

Flashy super features may catch people's attention, but how good the mundane features are is what *keeps* people in the language. Oh, and killer apps are likely much better attractors than flashy super features. :)
 2) I know Java is wildly successful. But Java ain't the language for me 
 - because it takes too much code to do the simplest things. It isn't 
 straightforward clarifying code, either, it looks like a lot of 
 irrelevant bother. I'm getting older and I just don't want to spend the 
 *time* to write all that stuff. My fingertips get sore <g>. I wouldn't 
 use Java if it was twice as fast as any other language for that reason. 
 I wouldn't use Java if it was twice as popular as it is now for that 
 reason.
 

I disagree. I've seen these arguments over and over: that Java takes a lot of code to write the simplest things, that Java is verbose (consider the Kingdom of Nouns article/rant), etc.. It is true that Java is like that, but I disagree that it is a bad thing. The design of the Java language is optimized for large programs, not for small ones.

Sure, Java's hello world is one of the largest compared to other languages, but should we rate languages based on simple code like hello worlds? That's not quite correct. It's like bemoaning D or similar languages because a shell scripting language allows writing a hello world with much less code. It does, but shell scripting does not scale well for larger projects.

I've been reading and coding a lot of Java recently (as part of a Descent related Eclipse IDE project), and frankly the more I do it, the more I like Java, despite some standing flaws(*). I've been reading and trying to understand a lot of the Eclipse Platform's and JDT's source code, and I'm able to do that fairly well, in good part because the way one writes Java code is both very verbose and very standard. There are no strange or kinky features that obfuscate the code. No MyDSLs for each developer in the team. If Eclipse and JDT were written in C++ (or even in D!) I wonder if my job wouldn't be much more difficult.

I'm not saying an ideal language should be like Java (and indeed I don't think that); what I'm saying is that the verbosity and the "herding of the crowd" mentality of Java are not as bad as most people complain. They have a very prominent good side. 
And Walter, frankly, I think you are very biased in this aspect: correct me if I'm wrong, but you have mostly written code for apps where you are the sole developer, which is very different from being part of a multi-developer team (or even from simply using libraries written by other people), which in turn makes one much more vulnerable to language abuse (or simply different use) from other developers.

(*) I think Java 1.5 made a great difference in Java, making it much more palatable. 'foreach' had to be there, and I find that Java generics cover most of the cases where one would want metaprogramming. What I still consider annoying is checked exceptions, an obtuse function literal syntax (requires using inner classes), and not having free functions, but overall my opinion of Java remains very positive.
 
 4) The more experience I have, the more it seems that the language that 
 got a lot right is ... Lisp. But Lisp did one thing terribly, terribly 
 wrong - the syntax. The Lisp experts who can get past that seem to be 
 amazingly productive with Lisp. The rest of us will remain envious of 
 what Lisp can do, but will never use it.
 
 5) Lisp gets things right, according to what I've read from heavy Lisp 
 users, by being a language that can be modified on the fly to suit the 
 task at hand, in other words, by having a customizable language one can 
 achieve dramatic productivity gains.
 

I have a feeling there are some more things other than the syntax. But even the syntax alone is very hard to get right: much of LISP's simplicity (especially in its macro system) comes from having a completely orthogonal syntax.
 10) Your points about pedestrian code are well taken. D needs to do 
 pedestrian code very, very well. But that isn't enough because lots of 
 languages do pedestrian code well enough.
 

Well, no languages do pedestrian code well enough AND allow one to write speedy apps at the same time. Java, C#, Python, Ruby, etc. do the first thing, C/C++ does the second, but D is the only one attempting both.
 11) Today's way-out feature is tomorrow's pedestrian code. I'm old 
 enough to remember when "structured code", i.e. while, for, switch 
 instead of goto, was the way-out feature (70's). Then, OOP was all the 
 rage (80's), now that's a major yawner. STL was then the way-out fad 
 (90's), now that's pedestrian too. Now it's metaprogramming (00's), and 
 I bet by 2015 that'll be ho-hum too, and it's D that's going to do it.
 

True enough I guess. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 12 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bruno Medeiros wrote:
 I disagree. I've seen these arguments over and over, that Java takes a 
 lot of code to write the simplest things, that Java is verbose (consider 
 the Kingdom of Nouns article/rant), etc.. It is true that Java is like 
 that, but I disagree that it is a bad thing. The design of the Java 
 language is optimized for large programs, not for small ones.
 
 Sure, Java's hello world is one of the largest compared to other languages, 
 but should we rate languages based on simple code like hello world's? 
 That's not quite correct. It's like bemoaning D or similar languages 
 because a shell scripting language allows writing a hello world with 
 much less code. It does, but shell scripting does not scale well for larger 
 projects.
 
 I've been reading and coding a lot of Java recently (as part of a 
 Descent related Eclipse IDE project), and frankly the more I do it, the 
 more I like Java, despite some standing flaws(*). I've been reading and 
 trying to understand a lot of the Eclipse Platform's and JDT's source 
 code, and I'm able to do that fairly well, in good part because the way 
 one writes Java code is both very verbose and very standard. There are 
 no strange or kinky features that obfuscate the code. No MyDSL's for 
 each developer in the team. If Eclipse and JDT were written in C++ (or 
 even in D!) I wonder if my job wouldn't be much more difficult.

I entirely agree with this feeling. I also don't mind reading Java, but IMHO the features that make it nice and readable are not verbosity or an obsession with objects. Let's not forget that Java has a very well thought-out type system with no holes (barring the array covariance that is now assuaged by generics). This means that all Java code is free of a certain category of errors - guess which: the most intractable. This alone has led a ton of researchers to focus on Java and further improve it (e.g. threading, generics, optimizers, analyzers...). The language has gotten threading absolutely right - it's the envy of all other imperative languages, barring probably Ada. Java also has the exact features needed for component development (GC, reflection). All these contribute to Java code being writable and readable without a lot of effort, and make up for its minuses. So I disagree that Java should be seen as a hack pushed by marketing muscle. (That would be Basic, and look at what happened to it :o)). It's a good language developed on a sound basis, and it's that that makes it reasonable to work with. But that shouldn't mean things can't be largely improved; D is slated to have the same sound basis and also things that put it way ahead of other languages. Andrei
Feb 12 2007
prev sibling next sibling parent BLS <Killing_Zoe web.de> writes:
Gentlemen,
is there somebody out there who remembers the Windows vs. OS/2 race? And why the heck that (without doubt awesome) system is pretty dead?
Probably because somebody forgot about average Joe Coder - you know him, the brave guy developing applications for the vertical "billion dollar" market. The meaning is clear, I think: creating an excellent product is not necessarily a win when you ignore the people the product is made for. And D should be a general-purpose language, or do I miss something?
So please, tell us average coders something like
THIS SUPER DUPER NEW FEATURE IS USEFUL BECAUSE IT ENABLES YOU TO ....
instead of being ignorant.
Joe




Feb 12 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 kris wrote:
  > Thus; shouting from the rooftops that D is all about meta-code, and DSL
  > up-the-wazzoo, may well provoke a backlash from the very people who
  > should be embracing the language. I'd imagine Andrei would vehemently
  > disagree, but so what? The people who will ultimately be responsible for
  > "allowing" D through the door don't care about fads or technical
  > superiority; they care about costs. And the overwhelming cost in
  > software development today, for the type of companies noted above, is
  > maintenance. For them, software dev is already complex enough. In all
  > the places I've worked or consulted, in mutiple countries, and since the
  > time before Zortech C, pedestrian-code := maintainable-code := less
  > overall cost.
 
 Some comments:
 
 1) D has no marketing budget. It isn't backed by a major corporation. 
 Therefore, it needs something else to catch peoples' attention. Mundane 
 features aren't going to do it.
 
 2) I know Java is wildly successful. But Java ain't the language for me 
 - because it takes too much code to do the simplest things. It isn't 
 straightforward clarifying code, either, it looks like a lot of 
 irrelevant bother. I'm getting older and I just don't want to spend the 
 *time* to write all that stuff. My fingertips get sore <g>. I wouldn't 
 use Java if it was twice as fast as any other language for that reason. 
 I wouldn't use Java if it was twice as popular as it is now for that 
 reason.
 
 3) Less code == more productivity, less bugs. I don't mean gratuitously 
 less code, I mean less code in the sense that one can write directly 
 what one means, rather than a lot of tedious bother. For example, if I 
 want to visit each element in an array:
 
     foreach(v; e)
     {...}
 
 is more direct than:
 
     for (size_t i = 0; i < sizeof(e)/sizeof(e[0]); i++)
     { T v = e[i];
      ... }
 
 
 4) The more experience I have, the more it seems that the language that 
 got a lot right is ... Lisp. But Lisp did one thing terribly, terribly 
 wrong - the syntax. The Lisp experts who can get past that seem to be 
 amazingly productive with Lisp. The rest of us will remain envious of 
 what Lisp can do, but will never use it.
 
 5) Lisp gets things right, according to what I've read from heavy Lisp 
 users, by being a language that can be modified on the fly to suit the 
 task at hand, in other words, by having a customizable language one can 
 achieve dramatic productivity gains.

Agreed with all of the above.
 6) If I think about it a certain way, it looks like what C++ Boost is 
 doing is a desperate attempt to add Lisp-like features. By desperate I 
 mean that C++'s core feature set is just inadequate to the task. For 
 example, look at all the effort Boost has gone through to do a barely 
 functioning implementation of tuples. Put tuples into the language 
 properly, and all that nasty stuff just falls away like a roofer peeling 
 off the old shingles.

Boost's lambda functions are another good example of a useful idea with a horrific implementation.
 7) A lot of companies have outlawed C++ templates, and for good reason. 
 I believe that is not because templates are inherently bad. I think that 
 C++ templates are a deeply flawed because they were ***never designed 
 for the purpose to which they were put***.

 8) I've never been able to create usable C++ templates. Notice that the 
 DMD front end (in C++) doesn't use a single template. I know how they 
 work (in intimate detail) but I still can't use them.

I have no problem with C++ templates, but the syntax is such that I don't consider them appropriate for a lot of the uses to which they're being put. Also, in many ways, C++ templates feel like a hack. The need to sprinkle such code with "template" and "typename" qualifiers just so the compiler knows what it's parsing sends up a huge red flag that something is wrong.
 9) But I see what C++ templates can do. So to me, the problem is to 
 design templates in such a way that they are as simple to write as 
 ordinary functions. *Then*, what templates can do can be accessible and 
 maintainable. It's like cars - they used to be very difficult to drive, 
 but now anyone can hop in, turn the key, and go.

Yup. It may be that the D community is more technically adept than the C++ community on average simply because of the type of people new languages tend to attract, but I've seen a lot of interesting template code around here.
 10) Your points about pedestrian code are well taken. D needs to do 
 pedestrian code very, very well. But that isn't enough because lots of 
 languages do pedestrian code well enough.
 
 11) Today's way-out feature is tomorrow's pedestrian code. I'm old 
 enough to remember when "structured code", i.e. while, for, switch 
 instead of goto, was the way-out feature (70's). Then, OOP was all the 
 rage (80's), now that's a major yawner. STL was then the way-out fad 
 (90's), now that's pedestrian too. Now it's metaprogramming (00's), and 
 I bet by 2015 that'll be ho-hum too, and it's D that's going to do it.

True enough. And it may end up being only hindsight that shows where the approach tends to break down, much like happened with OO.
 12) Take a look at what Kirk McDonald is doing with Pyd. He needs all 
 this stuff to make it slicker than oil on ground steel. He's on the 
 bleeding edge of stuff D needs to *make* pedestrian.

I'll admit it's nice to see examples that truly require compile-time facilities to be at all feasible. Sean
Feb 12 2007
parent reply Don Clugston <dac nospam.com.au> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 8) I've never been able to create usable C++ templates. Notice that 
 the DMD front end (in C++) doesn't use a single template. I know how 
 they work (in intimate detail) but I still can't use them.

I have no problem with C++ templates, but the syntax is such that I don't consider them appropriate for a lot of the uses to which they're being put. Also, in many ways, C++ templates feel like a hack. The need to sprinkle such code with "template" and "typename" qualifiers just so the compiler knows what it's parsing sends up a huge red flag that something is wrong.
 9) But I see what C++ templates can do. So to me, the problem is to 
 design templates in such a way that they are as simple to write as 
 ordinary functions. *Then*, what templates can do can be accessible 
 and maintainable. It's like cars - they used to be very difficult to 
 drive, but now anyone can hop in, turn the key, and go.

Yup. It may be that the D community is more technically adept than the C++ community on average simply because of the type of people new languages tend to attract, but I've seen a lot of interesting template code around here.

I'm certain that it's just easier in D. I'm a bit amused by the ninja reference to myself, since I've never done anything very sophisticated with C++ templates. Andrei, on the other hand... My feeling is that 'static if' instead of specialisation removes 60% of the mystery from template metaprogramming. Tuples remove another chunk. I think that if basic uses of recursion could be replaced with iteration and compile-time variables (even if they were mutable only inside foreach), it would become readable to the average Joe.
Feb 13 2007
next sibling parent Sean Kelly <sean f4.ca> writes:
Don Clugston wrote:
 I think that if basic uses of recursion could be replaced with iteration 
 and compile-time variables (even if they were mutable only inside 
 foreach), it would become readable to average joe.

I agree.
Feb 13 2007
prev sibling parent Hasan Aljudy <hasan.aljudy gmail.com> writes:
Don Clugston wrote:
 [...]
I'm certain that it's just easier in D. I'm a bit amused by the ninja reference to myself, since I've never done anything very sophisticated with C++ templates. Andrei on the other hand... My feeling is, that 'static if' instead of specialisation removes 60% of the mystery from template metaprogramming. Tuples remove another chunk. I think that if basic uses of recursion could be replaced with iteration and compile-time variables (even if they were mutable only inside foreach), it would become readable to average joe.

Yeah, a static foreach and a static while would pretty much remove a lot of the remaining mystery.
Feb 13 2007
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 4) The more experience I have, the more it seems that the language that 
 got a lot right is ... Lisp. But Lisp did one thing terribly, terribly 
 wrong - the syntax. The Lisp experts who can get past that seem to be 
 amazingly productive with Lisp. The rest of us will remain envious of 
 what Lisp can do, but will never use it.
 
 5) Lisp gets things right, according to what I've read from heavy Lisp 
 users, by being a language that can be modified on the fly to suit the 
 task at hand, in other words, by having a customizable language one can 
 achieve dramatic productivity gains.

I suspect: C was a great language because it doesn't try to keep you away from the machine. Lisp is great because it doesn't try to hide you from the compiler. To quote Stepanov (the link that Bill Baxter just posted), Alexander Stepanov, Notes on Programming, 10/3/2006:

"Since I am strongly convinced that the purpose of the programming language is to present an abstraction of an underlying hardware C++ is my only choice. Sadly enough, most language designers seem to be interested in preventing me from getting to the raw bits and provide a “better” machine than the one inside my computer. Even C++ is in danger of being “managed” into something completely different." (p. 7)

"Sadly enough, C and C++ not only lack facilities for defining type functions but do not provide most useful type functions for extracting different type attributes that are trivially known to the compiler. It is impossible to find out how many members a type has; it is impossible to find the types of the members of a structure type; it is impossible to find out how many arguments a function takes or their types; it is impossible to know if a function is defined for a type; the list goes on and on. The language does its best to hide the things that the compiler discovers while processing a program." (pp. 43-44)
Feb 13 2007
next sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Don Clugston wrote:
 [...]

I suspect: C was a great language because it doesn't try to keep you away from the machine. Lisp is great because it doesn't try to hide you from the compiler. To quote Stepanov (the link that Bill Baxter just posted): Alexander Stepanov Notes on Programming 10/3/2006 Since I am strongly convinced that the purpose of the programming language is to present an abstraction of an underlying hardware C++ is my only choice. Sadly enough, most language designers seem to be interested in preventing me from getting to the raw bits and provide a “better” machine than the one inside my computer. Even C++ is in danger of being “managed” into something completely different. (p7) Sadly enough, C and C++ not only lack facilities for defining type functions but do not provide most useful type functions for extracting different type attributes that are trivially known to the compiler. It is impossible to find out how many members a type has; it is impossible to find the types of the members of a structure type; it is impossible to find out how many arguments a function takes or their types; it is impossible to know if a function is defined for a type; the list goes on and on. The language does its best to hide the things that the compiler discovers while processing a program. (pp. 43-44)

Sounds like someone needs to send Alexander Stepanov an invitation to join the discussions going on here. It sounds like D is working on providing precisely everything he ever wanted. --bb
Feb 13 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
 To quote Stepanov (the link that Bill Baxter just posted):
 
 Alexander Stepanov Notes on Programming 10/3/2006

I still can't find the link! What is it?
Feb 13 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Don Clugston wrote:
 To quote Stepanov (the link that Bill Baxter just posted):

 Alexander Stepanov Notes on Programming 10/3/2006

I still can't find the link! What is it?

http://www.stepanovpapers.com/notes.pdf --bb
Feb 13 2007
prev sibling parent reply Nicolai Waniek <no.spam thank.you> writes:
I waited a long time to reply to this thread, and I didn't read the
latest (let's guess, 60) messages in it, because lots of them were
about Lisp and about how people think (recursively or not).

So why do I reply then? Because all this discussion about mixins and all
this stuff makes me a bit sad. I don't like it, just as I don't like
C++ macros (and everyone on the #d channel knows that I really
_hate_ them...).

The main problem I have with mixins/templates and so on is that they
enable a programmer to change the "look and feel" of the language
completely. What is bad about that? That if you're working in a team,
someone has a "great idea" and invents his own MyDSL. So what? Everyone
else has to "learn" this new DSL. So they spend time on something they
would better have spent coding something else, or fixing bugs... I
worked in a team of about 8 people bugfixing a project with
approximately 1,000,000 LoC. The former programmers just seemed not to
know how to code (and we programmed in Delphi!!! a language most
people like to tell you about: "oh, that's a language that shows you how to
work structured" - that's total bullshit!).
So on one hand, we had to improve the software, as it was used by just
too many companies to completely rewrite it - on the other hand we had
to bugfix it in a syntax that didn't even look like Pascal. Now what?
If we had had someone implementing something with his MyDSL "just for
the sake of it", I would probably have killed him!

The thing I really want to say is: stick to a clear definition of what
the language is, and what it is not. As long as I can remember - for example -
working on a Python project was as easy as possible, because the
language just defined what is part of it and what is not. You lose that
if you let the programmer decide whether and how to extend the language
(and this is definitely possible with templates and mixins and macros
and the whole shitload).

Okay, there are some examples (even provided by Walter) of how to
properly use mixins and templates (for example, letting the compiler
generate the byte array for an icon file), but there's always someone
who thinks how "smart" he is and invents something that's just
time-consuming to learn...

I think that even templates are used too much at the moment (have a
look at Tango, which is full of them). Why? Because it doesn't fit the
OOP paradigm - you would have used some interface declarations and then
written some functions that work on the interfaces instead of writing a
template... But well, there are some cases where templates are just nice.

I would really like to see more of the "details" implemented (runtime
type information, for example, or something like Delphi's "type TClass =
class of TObject" to make it possible to create correct object
factories) instead of the new language features we got with the last few
releases.

I had a look at D because it was C++ just without all the stuff I
didn't like, but with the new language features, C++'s bad stuff is
simply replaced by something else that will definitely be used in the
wrong way (have a look at the Apache headers, for example:
macros&macros&macros&...).

I really hope you will guide the D language in the right direction,
because I do like the language, but unfortunately, I don't like the
newest language features.

Best regards,
Nicolai
Feb 14 2007
next sibling parent reply janderson <askme me.com> writes:
Nicolai Waniek wrote:
 [...]

What about using it for files that you would otherwise load at runtime? XML etc.? -Joel
Feb 14 2007
next sibling parent Sean Kelly <sean f4.ca> writes:
janderson wrote:
 Nicolai Waniek wrote:
 I waited a long time to reply to this thread and I didn't read the
 latest (let's guess, 60) messages replied to this thread because lots of
 them where about LISP and of about how people think (recursively or not).

 So why do I reply then? Because all this discussion about mixins and all
 this stuff makes me a bit sad. I don't like it as well as I don't like
 the C++ macros (and everyone on the #d channel knows that I really
 _hate_ them...).

 The main problem I have with mixins/templates and so on is that it
 enables a programmer to change the "look and feel" of the language
 completely. What is bad about it? That if you're working in a team
 someone has a "great idea" and invents his own MyDSL. So what? Everyone
 else has to "learn" this new DSL. So they spend time on something they
 would've better spent on coding something else, or fixing bugs... I
 worked in a team of about 8 persons bugfixing a project with
 approximately 1.000.000 LoC. The former programmers just seemed to not
 know how to code (and we programmed using Delphi!!! a language most
 people like to tell you: "oh, that's a language that shows you how to
 work structured" - that's totally bullshit!).
 So on one hand, we had to improve the software as it was used by just
 too many companies to completely rewrite it - on the other hand we had
 to bugfix it with a syntax that even didn't look like Pascal. Now what?
 If we had someone "just for the sake of it" implementing something with
 his MyDSL, i would've probably killed him!

 The thing I really want to say is: Stick to a clear definition on what
 the language is, and what not. As long as I can remember - for example -
 working on a python project was as easy as possible, because the
 language just defined what is "and what not". If you let the programmer
 decide and how to extend the language (and this is definitely possible
 with templates and mixins and macros and the whole bunch of shitload).

 Okay, there are some examples (even provided by Walter) on how to
 properly use mixins and templates (for example letting the compiler
 generate the byte-array for an icon file), but there's always someone
 who thinks how "smart" he is and invents something that's just time
 consuming to learn...

 I think that even templates are used too much at the moment (have a
 look at Tango, which is full of them). Why? Because they don't fit into
 the OOP paradigm - you would've used some interface declarations and
 then written some functions that work on the interfaces instead of
 writing a template... But well, there are some cases where templates
 are just nice.
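The OOP alternative Nicolai describes, next to the template version, might look like this (all names are invented for illustration):

```d
import std.stdio;

// Interface-based version: works on any object implementing Printable,
// dispatched at runtime through a vtable.
interface Printable
{
    string text();
}

class Note : Printable
{
    string text() { return "a note"; }
}

void show(Printable p) { writeln(p.text()); }

// Template-based version: works on anything with a text() method,
// checked and resolved at compile time instead.
void showT(T)(T p) { writeln(p.text()); }

void main()
{
    auto n = new Note;
    show(n);   // dynamic dispatch
    showT(n);  // static dispatch
}
```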

 I would really like to see more of the "details" implemented (runtime
 type information, for example, or something like Delphi's "type TClass =
 class of TObject" to make it possible to create correct object
 factories) instead of the new language features we got with the last few
 releases.

 I had a look at D because it was C++ just without all the stuff I
 didn't like, but with the new language features, C++'s bad stuff is
 simply replaced by something else that will definitely be used in the
 wrong way (have a look at the Apache headers, for example:
 macros&macros&macros&...).

 I really hope you will guide the D language into the right direction,
 because I do like the language, but unfortunately, I don't like the
 newest language features.

 Best regards,
 Nicolai

What about using it for files that you are going to load at runtime otherwise? XML etc.?

I don't personally see this as much of a benefit. Most applications have long run times, and spending a fraction of a second loading files on application start simply isn't a big deal. Certainly not enough of one to invent a new language feature for it.

Sean
Feb 14 2007
prev sibling parent reply Nicolai Waniek <no.spam thank.you> writes:
janderson wrote:
 
 What about using it for files that you are going to load at runtime
 otherwise? XML etc.?
 
 -Joel

Loading the file belongs to runtime. Well, if it took 10 minutes to load the file instead of 1 second when "loaded during compilation", that would be an argument for something like that - but I guess this won't be the case in most projects.

With your example: if you define something like that for compile time, it won't be possible to change it dynamically, e.g. with a configuration file. If you write a super-D-duper library that loads file xyz and converts it to zyx, you won't want to sell the code (if it's not open source), so this possibility won't even fit your business...

So, if "a" stands for "easy to learn, easy to use, fixed language" and "b" for "hard to learn because of dynamic language extension", you might get something like this for "not using mixins and that kind of stuff":

a [---|-----------] b

and the following, if you use them:

a [----------|----] b

Well, I have to admit that there are some really good and well thought-out examples of how to use the new language features reasonably, but there will be more examples of how _not_ to use them. And the latter ones will be the ones we will have to struggle with - and I don't want to struggle when programming, it usually is a joy! (And that's why I don't want to code in C++.)
Feb 14 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Nicolai Waniek wrote:
 janderson wrote:
 What about using it for files that you are going to load at runtime
 otherwise? XML etc.?

 -Joel

Loading the file belongs to runtime. Well, if it would take 10 minutes to load the file instead of 1 second if it was "loaded during compilation", that would be an argument for something like that - but I guess this won't be the case in most projects. With your example: If you define something like that for compile time, it won't make it possible to change this dynamically, e.g. with a configuration file.

Maybe that's exactly the point. You want configuration files that are easy to work with for developers, but you do not want end-users mucking with them. Or small scripts that are part of a game engine. You may want them loaded at runtime during development for easy modification, but when you ship your game you may prefer to have that script code embedded in the .exe where users can't muck with it.

Another case is where you want to have an all-in-one .exe with a tidy install. If you have external configuration files then you have to worry about finding them at runtime and handling the case when you can't find them for whatever reason. Easier to just embed and be done with it.

I don't think loading data files is a one-size-fits-all issue. Sometimes it makes sense to embed.

On the other hand, that doesn't mean you need import to do it. You can always write an external tool to 'stringify' a data file. In fact that's exactly what I did:
http://www.dsource.org/projects/luigi/browser/trunk/luigi/wrapres.d

The new import makes much (but not all) of the functionality of that converter unnecessary.

--bb
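The ship-embedded/develop-from-disk pattern Bill describes can be sketched with a version switch. The file name and the version identifier here are assumptions for illustration, and the embedded branch needs the -J switch at build time:

```d
import std.file : readText;

// In release builds the script is baked into the binary at compile
// time; during development it is re-read from disk on every run.
version (EmbedScripts)
    enum string startupScript = import("startup.script");

string loadStartupScript()
{
    version (EmbedScripts)
        return startupScript;   // fixed at compile time, tamper-proof
    else
        return readText("startup.script"); // editable between runs
}
```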
Feb 14 2007
next sibling parent reply Nicolai Waniek <no.spam thank.you> writes:
Bill Baxter wrote:
 
 Maybe that's exactly the point.  You want configuration files that are
 easy to work with for developers, but you do not want end-users mucking
 with them.  Or small scripts that are part of a game engine.  You may
 want them loaded at runtime during development for easy modification,
 but when you ship your game you may prefer to have that script code
 embedded in the .exe where users can't muck with it.
 
 Another case is where you want to have an all-in-one .exe with a tidy
 install.  If you have external configuration files then you have to
 worry about finding them at runtime and handling the case when you can't
 find them for whatever reason.  Easier to just embed and be done with it.
 
 I don't think loading data files is a one-size-fits-all issue. Sometimes
 it makes sense to embed.
 
 On the other hand, that doesn't mean you need import to do it.  You can
 always write an external tool too 'stringify' a data file.  In fact
 that's exactly what I did:
 http://www.dsource.org/projects/luigi/browser/trunk/luigi/wrapres.d
 
 The new import makes much (but not all) of the functionality of that
 converter unnecessary.
 
 --bb

You could use resource files linked to your EXE, so you wouldn't have to search for a file. You may even add your DLLs to your EXE and unwrap them at runtime ;)

I think as long as you use the language features in a sane way, it's ok. Just don't over-use them.
Feb 14 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Nicolai Waniek wrote:
 Bill Baxter wrote:
 Maybe that's exactly the point.  You want configuration files that are
 easy to work with for developers, but you do not want end-users mucking
 with them.  Or small scripts that are part of a game engine.  You may
 want them loaded at runtime during development for easy modification,
 but when you ship your game you may prefer to have that script code
 embedded in the .exe where users can't muck with it.

 Another case is where you want to have an all-in-one .exe with a tidy
 install.  If you have external configuration files then you have to
 worry about finding them at runtime and handling the case when you can't
 find them for whatever reason.  Easier to just embed and be done with it.

 I don't think loading data files is a one-size-fits-all issue. Sometimes
 it makes sense to embed.

 On the other hand, that doesn't mean you need import to do it.  You can
 always write an external tool too 'stringify' a data file.  In fact
 that's exactly what I did:
 http://www.dsource.org/projects/luigi/browser/trunk/luigi/wrapres.d

 The new import makes much (but not all) of the functionality of that
 converter unnecessary.

 --bb

You could use resource files linked to your EXE, so you wouldn't have to search for a file.

Bleh. Not cross-platform.
 You may even add your DLLs to your EXE and unwrap
 them at runtime ;)

Not sure what you mean by that. But how am I going to create the DLL in the first place? Anyway, anything involving dynamic libraries opens up the platform-specific can of worms.
 I think as long as you use the language features in a sane way, it's ok.
 Just don't over-use them.

Ok, I won't. But I may end up in the pits carrying only my steering wheel a few times on the way to figuring out what "over-use" means. ;-) --bb
Feb 14 2007
parent Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 Ok, I won't.  But I may end up in the pits carrying only my steering 
 wheel a few times on the way to figuring out what "over-use" means.  ;-)

Yup <g>. And frankly, right now, we don't know what overuse is for this capability. We're just going to have to crash into the wall a few times before we figure it out.
Feb 14 2007
prev sibling parent janderson <askme me.com> writes:
Bill Baxter wrote:
 Nicolai Waniek wrote:
 janderson wrote:
 What about using it for files that you are going to load at runtime
 otherwise? XML etc.?

 -Joel


always write an external tool too 'stringify' a data file. In fact that's exactly what I did: http://www.dsource.org/projects/luigi/browser/trunk/luigi/wrapres.d The new import makes much (but not all) of the functionality of that converter unnecessary. --bb

Stringify won't work for scripting or network mapping. The number of libraries that I've seen that compile down to C++ to get around its limitations illustrates how useful this feature will be. If it's not provided, you've got more problems when dealing with these libraries. Admittedly, I probably see more examples because of the industry I work in.

-Joel
Feb 14 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Nicolai Waniek wrote:
 janderson wrote:
 What about using it for files that you are going to load at runtime
 otherwise? XML etc.?

 -Joel

Loading the file belongs to runtime. Well, if it would take 10 minutes to load the file instead of 1 second if it was "loaded during compilation", that would be an argument for something like that - but I guess this won't be the case in most projects. With your example: If you define something like that for compile time, it won't make it possible to change this dynamically, e.g. with a configuration file. If you write a super-D-duper library that loads file xyz and converts it to zyx, you won't want to sell the code (if it's not open source) so this possibility won't even fit to your business...

I think it should be clarified that the loading-config-files example is a strawman that we could leave in peace. It's the least interesting example of the bunch, and also the hardest to make interesting. What we're talking about here is things that would improve everyone's life: better database integration, better regular expressions, proper reflection, better integration with other languages (e.g. Python, Javascript), remote procedure calls, persistence, networking...
 So, if "a" stands for "easy to learn, easy to use, fixed language" and
 "b" for "hard to learn because of dynamic language extension" you might
 get something like this for "not using mixins and that kind of stuff":
 
 a [---|-----------] b
 
 and the following, if you use them:
 
 a [----------|----] b
 
 Well I have to admit that there are some real good and well thought-out
 examples on how to reasonably use the new language features, but there
 will be more examples on how to _not_ use them. and the latter ones will
 be the ones with which we will have to struggle - and I don't want to
 struggle when programming, it usually is a joy! (and that's why I don't
 want to code in C++).

Technology can be annoying and have bad side effects; that's been known ever since Ned Ludd whacked those sewing machines or whatever they were.

Andrei
Feb 14 2007
parent Nicolai Waniek <no.spam thank.you> writes:
The longer I think about all this stuff, the more I come up with good
examples for its usage - but I'm not fully converted to a "yeehaa, here
comes the mixin" one =D
Feb 14 2007
prev sibling parent janderson <askme me.com> writes:
Nicolai Waniek wrote:
 janderson wrote:
 What about using it for files that you are going to load at runtime
 otherwise? XML etc.?

 -Joel

Loading the file belongs to runtime. Well, if it would take 10 minutes to load the file instead of 1 second if it was "loaded during compilation", that would be an argument for something like that - but I guess this won't be the case in most projects. With your example: If you define something like that for compile time, it won't make it possible to change this dynamically, e.g. with a configuration file. If you write a super-D-duper library that loads file xyz and converts it to zyx, you won't want to sell the code (if it's not open source) so this possibility won't even fit to your business... So, if "a" stands for "easy to learn, easy to use, fixed language" and "b" for "hard to learn because of dynamic language extension" you might get something like this for "not using mixins and that kind of stuff": a [---|-----------] b and the following, if you use them: a [----------|----] b Well I have to admit that there are some real good and well thought-out examples on how to reasonably use the new language features, but there will be more examples on how to _not_ use them. and the latter ones will be the ones with which we will have to struggle - and I don't want to struggle when programming, it usually is a joy! (and that's why I don't want to code in C++).


My point is that you shouldn't have to fight if its made as simple as run-time. -Joel
Feb 14 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Nicolai Waniek wrote:
 The main problem I have with mixins/templates and so on is that it
 enables a programmer to change the "look and feel" of the language
 completely. What is bad about it? That if you're working in a team
 someone has a "great idea" and invents his own MyDSL. So what? Everyone
 else has to "learn" this new DSL. So they spend time on something they
 would've better spent on coding something else, or fixing bugs... I
 worked in a team of about 8 persons bugfixing a project with
 approximately 1.000.000 LoC. The former programmers just seemed to not
 know how to code (and we programmed using Delphi!!! a language most
 people like to tell you: "oh, that's a language that shows you how to
 work structured" - that's totally bullshit!).
 So on one hand, we had to improve the software as it was used by just
 too many companies to completely rewrite it - on the other hand we had
 to bugfix it with a syntax that didn't even look like Pascal. Now what?
 If we had someone "just for the sake of it" implementing something with
 his MyDSL, I would've probably killed him!

I hear you. I also find most uses C++ templates are put to incomprehensible - either from programmers showing off, or from working around the severe limitations of C++ templates. Another big part of the problem is that templates are just hard to read and understand. If they were as easy to deal with as regular functions, then that difficulty would (I hope) melt away. This is what we're working towards.

I agree with you on another level - programmers showing off their understanding of the language by some need to use every feature of it, MAKING SIMPLE THINGS COMPLICATED. To me, *real* programming skill is expressing complicated things in a simple manner. One of my (un)favorite examples of making simple things complicated is the simple linked list concept:

struct Foo { Foo *next; ... }

voila, a linked list. But nooo, for some reason (just for the sake of it? <g>), this has to be made complicated, with templates, macros, iterators, ack. There's a place for iterators, but they are overused.

My general feeling is that if one feels impelled to generate an incomprehensible tangle of templates:

1) one has found the wrong solution to the problem
2) there's a serious deficiency in the language (the C++ attempts to implement tuples are a canonical example)

Back in the 80's, C programmers discovered they could turn C code into Pascal with:

#define BEGIN {
#define END }

and went to town with that. It was belatedly discovered that this was a terrible idea. People can abuse mixins and DSLs in D; there's not much I can do to prevent it other than hope that a general consensus will soon emerge that overuse of them is also a bad idea.

Java went a different route, by disallowing all that stuff completely, but they inevitably threw the good out with the bad. Now, those features are, one by one, being added to Java anyway.

P.S. I've tried pretty hard to design operator overloading in D to prevent the abuses of it often seen in C++. I'm cautiously optimistic about this being successful.
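The minimal list from the post above, filled out just enough to compile and run; the value field and the hand-built two-node list are added for illustration:

```d
import std.stdio;

// "A linked list is just a next pointer" - no templates, no
// iterators, no macros.
struct Foo
{
    int value;
    Foo* next;
}

void main()
{
    // Build the list 1 -> 2 by hand.
    Foo second = Foo(2, null);
    Foo first  = Foo(1, &second);

    // Walk it with a plain loop.
    for (Foo* p = &first; p !is null; p = p.next)
        writeln(p.value); // prints 1, then 2
}
```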
Feb 14 2007
next sibling parent reply Nicolai Waniek <no.spam thank.you> writes:
I'm really happy that you share my thoughts :o)

Let's hope that nobody will "over use" the new language features. As I
said (often enough) before, I really like the well though-out examples
on how to use the (new) features I don't really like (and I even
would've done some things the same way), but I fear the examples that
will drive me insane ;)
Feb 14 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Nicolai Waniek wrote:
 Let's hope that nobody will "over use" the new language features.

They will over use them. Overusing them is a part of the process of learning how to program. I remember a professional race car driver telling me once that if the driver didn't walk back to the pits now and then carrying just the steering wheel, he wasn't pushing the limits hard enough. But if he did it too often, he'd get fired <g>.
 As I
 said (often enough) before, I really like the well though-out examples
 on how to use the (new) features I don't really like (and I even
 would've done some things the same way), but I fear the examples that
 will drive me insane ;)

I think the best we can do is keep coming up with examples showing the right way to use them - lead by example. And we should keep on frowning on attempts to use overloaded << for I/O purposes <g>.
Feb 14 2007
next sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Nicolai Waniek wrote:
 Let's hope that nobody will "over use" the new language features.

They will over use them. Overusing them is a part of the process of learning how to program. I remember a professional race car driver telling me once that if the driver didn't walk back to the pits now and then carrying just the steering wheel, he wasn't pushing the limits hard enough. But if he did it too often, he'd get fired <g>.
 As I
 said (often enough) before, I really like the well though-out examples
 on how to use the (new) features I don't really like (and I even
 would've done some things the same way), but I fear the examples that
 will drive me insane ;)

I think the best we can do is keep coming up with examples showing the right way to use them - lead by example. And we should keep on frowning on attempts to use overloaded << for I/O purposes <g>.

I think Lisp is a pretty good example here. With all those reader macros etc. you can really transform the language completely. But (from what I gather) Lispers as a community have declared that to be bad style unless there's /really/ no other way to accomplish the desired goal.

(Regular (non-reader) Lisp macros can also be used to transform the language and define DSLs - but there my impression is it's more like Henry Ford's "you can have any color you like, as long as it's black". With Lisp macros it's "you can create any DSL you like, as long as it still looks like fingernail clippings in oatmeal".)

--bb
Feb 14 2007
prev sibling parent reply Michiel <nomail hotmail.com> writes:
 And we should keep on frowning on attempts to use overloaded << for I/O
 purposes <g>.

Why? I think it's intuitive. The arrows point from the source of the message to the destination. Should the operator only be used for shifting because that happens to have been its first purpose? I also like how you can send a message to an abstract object. And that can be a cout, a cerr, an error console in a GUI, or something else. Same thing the other way around.
Feb 14 2007
parent Walter Bright <newshound digitalmars.com> writes:
replied in new thread "overloading operators for I/O"
Feb 14 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Nicolai Waniek wrote:
 The main problem I have with mixins/templates and so on is that it
 enables a programmer to change the "look and feel" of the language
 completely. What is bad about it? That if you're working in a team
 someone has a "great idea" and invents his own MyDSL. So what? Everyone
 else has to "learn" this new DSL. So they spend time on something they
 would've better spent on coding something else, or fixing bugs... I
 worked in a team of about 8 persons bugfixing a project with
 approximately 1.000.000 LoC. The former programmers just seemed to not
 know how to code (and we programmed using Delphi!!! a language most
 people like to tell you: "oh, that's a language that shows you how to
 work structured" - that's totally bullshit!).
 So on one hand, we had to improve the software as it was used by just
 too many companies to completely rewrite it - on the other hand we had
 to bugfix it with a syntax that didn't even look like Pascal. Now what?
 If we had someone "just for the sake of it" implementing something with
 his MyDSL, I would've probably killed him!

I hear you. I also find most uses C++ templates are put to to be incomprehensible - either from programmers showing off, or from working around the severe limitations of C++ templates. Another big part of the problem is that templates are just hard to read and understand. If they were as easy to deal with as regular functions, then that difficulty would (I hope) melt away. This is what we're working towards. I agree with you on another level - programmers showing off their understanding of the language by some need to use every feature of the language, MAKING SIMPLE THINGS COMPLICATED. To me, *real* programming skill is expressing complicated things in a simple manner. One of my (unfavorite) examples of the making simple things complicated is the simple linked list concept: struct Foo { Foo *next; ... } voila, a linked list. But nooo, for some reason (just for the sake of it? <g>), this has to be made complicated, with templates, macros, iterators, akk. There's a place for iterators, but they are overused.

Boy is that not a good example. From http://mitpress.mit.edu/sicp/chapter1/node18.html, a quote that has stayed with me since I first read it (in an illegal copy) back in Romania many years ago:

"We have seen that procedures are, in effect, abstractions that describe compound operations on numbers independent of the particular numbers. For example, when we

(define (cube x) (* x x x))

we are not talking about the cube of a particular number, but rather about a method for obtaining the cube of any number. Of course we could get along without ever defining this procedure, by always writing expressions such as

(* 3 3 3)
(* x x x)
(* y y y)

and never mentioning cube explicitly. This would place us at a serious disadvantage, forcing us to work always at the level of the particular operations that happen to be primitives in the language (multiplication, in this case) rather than in terms of higher-level operations. Our programs would be able to compute cubes, but our language would lack the ability to express the concept of cubing. One of the things we should demand from a powerful programming language is the ability to build abstractions by assigning names to common patterns and then to work in terms of the abstractions directly."

Your example shows that you are able to make a list, but it gives zero indication of whether you can express what a list _is_.

Andrei
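The Scheme cube procedure transliterated into D, as a sketch of the point being made: the abstraction gets a name that works for any numeric type, instead of being respelled as x*x*x at every call site.

```d
// "Cubing" as a named, reusable abstraction, analogous to
// (define (cube x) (* x x x)) in the SICP passage above.
T cube(T)(T x) { return x * x * x; }

void main()
{
    assert(cube(3) == 27);     // works for ints...
    assert(cube(2.0) == 8.0);  // ...and for floating point
}
```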
Feb 14 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Your example shows that you are able to make a list, but it gives zero
 indication on whether you can express what a list _is_.

I just don't feel that everything that can be abstracted away, should be. Sometimes a trivial list is just a ->next, and doesn't need any more than that. Sometimes, I just want to do x*x*x rather than abstracting away a cube(T)(T x) function. An abstraction becomes useful when it's repeated, not when it's used once or twice.
Feb 14 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Your example shows that you are able to make a list, but it gives zero
 indication on whether you can express what a list _is_.

I just don't feel that everything that can be abstracted away, should be. Sometimes a trivial list is just a ->next, and doesn't need any more than that. Sometimes, I just want to do x*x*x rather than abstracting away a cube(T)(T x) function. An abstraction becomes useful when it's repeated, not when it's used once or twice.

Of course I agree with that, but I also think lists are quite a worthy-for-reuse abstraction, so it's not inspired to choose them to bash needless abstraction. Here's a more humorous take on the subject:

http://www.willamette.edu/~fruehr/haskell/evolution.html

Andrei
Feb 14 2007
prev sibling parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Andrei Alexandrescu wrote

<citing>
 "One of the things we should demand from a
 powerful programming language is the ability to build abstractions
 by assigning names to common patterns and then to work in terms of
 the abstractions directly." 

Isn't there a contradiction when one demands the use of new abstractions but forbids reinterpretation of older ones?

-manfred
Feb 14 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Manfred Nowak wrote:
 Andrei Alexandrescu wrote
 
 <citing>
 "One of the things we should demand from a
 powerful programming language is the ability to build abstractions
 by assigning names to common patterns and then to work in terms of
 the abstractions directly." 

Isn't there a contradiction, when one requires to use new abstractions but forbids reinterpretations of older ones?

It's not new vs. old, nor about reinterpretation. It's higher-level vs. lower-level.

Andrei
Feb 14 2007