
D - "Hi" questions about features not included

reply "Scott Pigman" <scottpig1 attbi.com> writes:
Hi there,
I've just found out about D (thanks to Slashdot), and I see a lot of stuff
about it I really like -- several things that, when I've thought to myself
"if I ever made a programming language, I'd do _______", are in there.  It
looks very promising to me.  Anyway, if I may, I'd like to fire off a
couple of questions:

1) Where is D at in regards to specifying the features it includes? Is
it still pretty much a work in progress, or has it moved on to the "okay,
that's enough discussion, now let's get it working" point, which is my
impression from skimming the specs?

2) If it were still in the forming stages (or if I were making a language),
the following are some things I'd be tempted to include.  Now, my
assumption is that these have already been discussed to death, since
they're mainly things already in other languages, and probably some of the
long-time readers here would rather chuck their computer through the
window than read another thread on them, so please accept my apologies
in advance.  I did skim the headers looking for relevant discussions, but
as there are nearly 10,000 legacy messages here, it's a lot easier for me
to just write this and hope someone out there doesn't mind responding.
Basically, what I'm wondering is: if it was already discussed, what was
the consensus, or is it already in the language and I overlooked it, or is
it so obviously a bad idea that it wasn't even worthy of discussion?
Anyway, in no particular order, here they are:

unless/until syntactic sugar, where "unless(false){}" ==
"if(!false){}" and "until(true)" == "while(!true)".  Okay, these add no
functionality, but I think they would make code easier to follow,
seeing how I get confused easily by logical negations >;-)

Functions as first-class objects.  Now, I really have no idea what this
means in terms of implementation or performance, but Lisp has had it since
day one, so I'd suspect it can't be too bad.  (Of course, I don't know of
many other languages that have it, so maybe it isn't so easy after all.)
But personally, being able to store functions in data structures, pass
them (easily) as arguments to functions, or create functions which are
themselves capable of creating and returning functions just sounds very
tantalizing to me.  Having this would go hand in hand with having
something equivalent to Lisp/Python lambda statements - i.e. anonymous,
unnamed functions.
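
As a rough illustration of the "store and pass functions" part, here is a
minimal D-style sketch using plain function pointers; `twice`, `apply`, and
`demo` are made-up names, and creating functions at run time is a separate
question entirely:

    // A free function we want to treat as a value.
    int twice(int x) { return 2 * x; }

    // Higher-order function: takes a function pointer and applies it.
    int apply(int function(int) f, int x)
    {
        return f(x);
    }

    void demo()
    {
        int function(int) op = &twice;  // a function stored in a variable
        int r = apply(op, 21);          // r == 42
    }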

Multiple return values from a function.  Personally, I've never been very
keen on the C/C++ idea of having parameters that are actually return
values.  I like the Python model here: "a,b,c = foo(x,y,z)".  Again, I
think it's a lot easier to follow code that has all the inputs in one
place and the outputs in another.  I guess this wouldn't be compatible
with IDL like the current in/out/inout specifiers, but I'd say you write
a function with multiple return values a lot more often than you write
something that needs to be compatible with IDL.  And no, I wouldn't
suggest getting rid of the in/out/inout specifiers; I'd just rather not
use them myself all that often.
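
For comparison, here is a minimal sketch of the existing out-parameter
style next to a struct-based workaround, written in D-style syntax;
`divmod`, `divmod2`, and `DivMod` are illustrative names, not anything
from the language spec:

    // Today's style: extra outputs come back through `out` parameters.
    void divmod(int a, int b, out int quot, out int rem)
    {
        quot = a / b;
        rem  = a % b;
    }

    // A workaround with a similar effect: bundle the outputs in a struct.
    struct DivMod { int quot; int rem; }

    DivMod divmod2(int a, int b)
    {
        return DivMod(a / b, a % b);
    }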

Java-style online API/user library documentation -- for my money, one of
the best things about working with Java.  I'd hope that D would have a
similar resource (and am puzzled why C and C++ don't have something
similar).

Lisp-like closures.  Now, of all the things here which I've heard about
and like the sound of, but haven't actually used in practice, this is
the one I probably understand the least, but what I think I understand
appeals to me.  Basically, my understanding is that this amounts to
functions with a shared state - or perhaps they're anonymous, unnamed
classes, e.g.

    closure {
        double total;
        add(double x) { total += x; return total; }
        sub(double x) { total -= x; return total; }
    }

So add and sub both act on the same variable total, but total is
protected from having any other function modify it.  Maybe not quite a
global variable, but a national variable.  I dunno, sounds good to me,
though.  And if that isn't a Lisp closure, well, then I like whatever it
is I just described.
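
For what it's worth, a rough D-style sketch of the same shared-state idea
using a struct whose field is reached only through its methods; this is
just an illustration (the names are made up), not a claim that D has
closures:

    struct Accumulator
    {
        private double total = 0;   // shared state, hidden from other modules

        double add(double x) { total += x; return total; }
        double sub(double x) { total -= x; return total; }
    }

    void demo()
    {
        Accumulator acc;
        acc.add(2.5);      // total == 2.5
        acc.sub(1.0);      // total == 1.5
    }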

A foreach or forall statement for iterating over the elements of a collection:
"foreach person in newsgroup {.....}", where newsgroup is a collection/array
of persons, and whatever's between the {}'s is code that is executed on
each person.  To me this just seems a lot more intuitive to work with than
"for(x = 0; x < Y; x++)..."

The obvious name for the following unfortunately clashes with that of the
previous, but having just wrapped up a course in program correctness, I'm
thinking I'd like to see boolean "forall" and "exists" expressions for use
in contracts to verify program correctness.  "forall x in X (cond)"
would be true if all the x's in X satisfy cond, and "exists x in
X (cond)" would be true if any one x in X satisfied cond.


Well, I think that's more than enough for now.  Thanks for reading, to
anybody who actually got this far.  Take care & good night.

-scott
Jan 16 2003
next sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Scott Pigman wrote:
 Hi There,
 I've just found out about D (thanks to slashdot), and i see a lot of stuff
 about it i really like -- several things that when i've thought to myself
 "if i ever made a programming language, i'd do _______" are in there.  It
 looks very promising to me.  anyway, if i may, i'd like to fire off a
 couple questions:
Welcome. But I'm fairly new here too.
 1) where is D at in regards to specifing the features that it includes? is
 it still pretty much a work in progress, or has it moved onto the "okay,
 that's enough discussion, now lets get it working" point, which is my
 impression from skimming the specs?
It is already working, and the job of creating further compilers is underway. However, the language is not 100% solid yet - the string format questions, as well as many other things, are not settled, and some major features were introduced only a couple of months ago. The changes are not much hampered by any major legacy code to take care of.

Currently there is one front-end (well, and one minor deviation), and multiple back-ends are being developed for it. The target compilers are kept in sync with the open-source front-end released by Walter. Besides Walter's original compiler, the only working port is "dli", D for Linux/x86 by Burton Radons.
 2) if it was still in the forming stages (or if i was making a language)
 the following are some things i'd be tempted to include.  now, my
 assumption is that these have already been discussed to death, since
 they're mainly things already in other languages, and probably some of the
 long time readers here would rather chuck their computer throught the
 window instead of read another thread on it, so please accept my apologies
 in advance.  I did skim the headers looking for relevent discussions, but
 as there are nearly 10,000 legacy messages here, it's a hell of a lot
 easier for me to just write this and hope someone out there doesn't mind
 responding.  basically, what i'm wondering is: if it was already
 discussed, what was the consensus regarding it, or is it already in the
 language and i overlooked it, or is it so obviously a bad idea that it
 wasn't even worthy of discussion.  anyway, in no particular order, here
 they are:
 
 unless/until syntatic sugarcoating, where "unless(false){}" ==
 "if(!false){}"  and "until(true)" == "while(!true)".  okay, these add no
 functionality, but i think that it would create easier to follow code,
 seeing how i get confused easily by logical negations >;-)
 
 functions as first class objects.  now, i really have no idea what this
 means in terms of implementation or performance, but lisp has had it since
 day one, so i'd suspect it can't be too bad.  (of course, i don't know of
 many other languages that have it, so maybe it ain't so easy after all).
 but personally, being able to store functions in data structures, pass
 them (easily) as arguments to functions or creating fuctions which are
 themselves capable of creating and returning a functions just sounds very
 tantalizing to me.  having this would go hand in hand with having
 something equivalent to lisp/python lamda statements - i.e. anonymous,
 unnammed functions.
Lisp is an interpreted language. What that means is that you've always got a code generator at hand to generate some more code. In D this cannot be done. Maybe after a VM version of D is made (in years), something similar can be thought of. For now you can toss functions around as you like, but you can't create them at runtime; they all need to be defined at compile time. This possibility could be added as a library later, but it's probable that you'd only be able to use C or a small subset of D for runtime generation.

It is a very complicated topic in non-interpreted environments, but I'm glad to tell you that it's a subject of particular interest to me, so I'll keep investigating how such things can be done. I guess Burton is also interested in something similar. Current research aims to make code optimise itself at run time. For example, there are values which become constant at run time but are not known at compile time; these would get optimised out then. But don't expect this to become real within your life span. :>
 multiple return values from a function.  personally, i've never been very
 keen on the C/C++ idea of having parameters that are actually return
 values.  i like the python model here: "a,b,c = foo(x,y,z)".  again, i
 think it's a lot easier to follow code that has all the inputs in one
 place and the outputs in another.  i guess this wouldn't be compatibly
 with the IDL like the current in/out/inout specifiers, but i'd say that
 it's a lot more often that you'd write a function w/ multiple return
 values than write something that needs to be compatible w/ IDL.  and no, i
 wouldn't suggest getting rid of the in/out/inout specifiers, i'd just
 rather not use them myself all that often.
Sweet. Hm. Not a real problem, just a design decision. It could make the result be placed on the stack rather than returned implicitly by reference, which makes it impossible to call legacy functions this way, but might... be good for something. Opinions? And inout would simply mutate back to what it is - a reference? But what if you use such a function in an expression - you get a real mess. No one forces you to, though.
 java style online API/user libary documentation -- for my money, one of
 the best things about working with Java.  i'd hope that D would have a
 similar resource (and am puzzeled why c and c++ don't have something
 similar).
Well, that's something you could make someday. ;) In my personal opinion, D is, quite unlike C++, suited as an educational language, so a lot of easy-to-read documentation would have to be written. But consider: C comes with only a tiny library, yet of course there is a description of it. As for C++, there is no common style, few common libraries and tools, and a LOT of resources; C++ coders don't understand each other when they talk. That shouldn't happen to D. It gets a decent library, which in turn needs documentation.
 lisp like closures.  now, of all the things here which i've heard about
 and like the sound of, but haven't actually used in practice, this one is
 the one i probably understand the least. but what i think i understand,
 appeals to me.  basically, my understanding is that this amounts to
 functions with a shared state - or perhaps they're anonymous, unnammed
 classes. e.g.  closure{ double total
 			add(double x){total += x; return total} 
 			sub(double x){total += x; return total}
 	}
 so, add and sub both act on the same variable total, but total is
 protected from having any other function modify it.  maybe not quite a
 global variable, but a national variable.  i dunno, sounds good to me
 though.  and if that isn't a lisp closure, well, then i like whatever it
 is i just described.
Any real usage? Consider using templates, or just creating another module in such a case. Such closures would only resolve unit-scope ambiguities, which you would have to stumble over later when making some minor change. It simply takes accurate naming to avoid that. It is better to correct a bug than to hide it, right? Consider also: keeping units (=modules) small enough would do legibility good.
 a foreach or forall statement for iterating over elements of a collection:
 "foreach person in newsgroup{.....}" where newsgroup is a collection/array
 of persons, and whatever's between the {}'s is code that is executed on
 each person.  to me this just seems a lot more intuitive to work with than
 "for(x = 0; x < Y; x++)..."
I'm pretty much sure Walter is considering this right now.
 the obvious name of the following unfortunately clashes with that of the
 previous, but having just wrapped up a course in program correctness i'm
 thinking i'd like to see boolean "forall" and "exists" expressions for use
 in contracts to verify the program correctness.  "forall x in X(cond)"
 would be true if all the x's in X satisfy the cond, and "exists x in
 X(cond)" would be true in any one x in X satisfied the cond.
Hm. {We} are afraid of too many keywords :> as well as of complex parsing, so that features like the runtime-generated functions mentioned above could become possible someday. Well, a keyword or two doesn't matter, but... hm. What I am considering is an alternative plugin-driven parsing facility, which would make it possible to run some experiments without messing with the actual language, as well as add a few functional programming constructs. D had better stay lightweight, so that it is as easy to learn as Delphi and yet combines the powers of Perl and C++ and everything that is needed really often. That said, I don't consider plain English keywords a danger to learnability.
 well, i think that's more than enough for now.  thanks for reading to
 anybody who actually got this far.  take care & good night.
 
 -scott
Jan 17 2003
next sibling parent reply "Mike Wynn" <mike.wynn l8night.co.uk> writes:
 Currently research is led to make code optimise itself at run-time. For
 example, there are values which become constant at run-time, but are not
 known at compile time. These would get optimised out then. But don't
 expect this to become real within your life span. :>
That is exactly what Java dynamic compilers do, and it is why some companies claim that their Java VMs can run code faster than C/C++: the code executed is the true mainline (all ifs rewritten to branch out as the rare condition), calculated by interpreting the code a few times first and only compiling the code that is actually executed - reordered, inlined and generally messed about with, so that within any complete section all the computed values are right. The flow, or the way they are calculated, may not be anything like a statically compiled version of the same code, and a section of code may end up being compiled in several different ways depending on how you get to it.

Something similar would be possible with a static compiler linked to a profiler, assuming that you profile the correct data :) before setting your code "in stone", but I doubt it would match the dynamic adjustment to input that a true dynamic compiler offers.

There are also benefits for the GC: you can determine short-lived objects, and the way the objects' interconnections and allocations are arranged allows ahead-of-time object allocation, pruning known dead objects, optimising the way the GC walks the stack, etc.
Jan 17 2003
parent reply "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
"Mike Wynn" <mike.wynn l8night.co.uk> escreveu na mensagem
news:b0af79$mo1$1 digitaldaemon.com...
 Currently research is led to make code optimise itself at run-time. For
 example, there are values which become constant at run-time, but are not
 known at compile time. These would get optimised out then. But don't
 expect this to become real within your life span. :>
that is exactly what java dynamic compilers do, and is why some companies claim that their Java VM's can run code faster than C/C++ because the code executed it the true mainline (all if rewritten to branch out as the rare condition) calculated by interpeting the code a few times first and only compiling the code that is actually executed, reorder, inlined and generally mess about with so within any complete section all
the
 computed values are right, the flow or way they are calculated may not be
 anything like a statically compiled version of the same code, and a
section
 of code may end up being compiled in several different ways depending on
how
 you get to it. something that it would be possible to do with a static
 compiler linked to a profiler, assuming that your profile the correct data
 :) before setting your code "in stone". but I doubt with the dynamic
 adjustment to input that a true dynamic compiler offers.
 there are also benifits to the GC too, you can determine short lived
 objects,  the way the objects interconections and allocations are arranged
 allowing ahead of time object allocations, pruning known dead objects and
 optimise the way the GC walks the stack etc.
One of the main advantages of a JIT compiler is that it can perform whole-world optimizations, deciding which methods may be inlined, which methods are truly dynamic, etc. Every library loaded goes through this process. A static compiler that generates DLLs must provide the worst-case binary. As a side note, the Portable.NET interpreter (www.dotgnu.org for more info) is sometimes faster than the JIT compilers from Mono or DotNet, because it doesn't try to analyze and JIT things first; it just interprets as fast as it can.
Jan 17 2003
parent "Mike Wynn" <mike.wynn l8night.co.uk> writes:
     One of the main advantages of a JIT compiler is that it can perform
 whole-world optimizations, deciding which methods may be inlined, which
 methods are truly dynamic, etc.. Every libray loaded goes through this
 process. A static compiler that generates dlls must provide the worst case
 binary. As a side note Portable Net interpreter (www.dotgnu.org for more
 info) is sometimes faster than JIT compilers from Mono or DotNet, because
it
 doesn't try to analyze and JIT things first, it just interprets it as fast
 as it can.
It's all about compromise, and about what your code is doing and how you distribute your app. A single code path, no matter how long, that executes once will always be fastest if statically compiled; here a plain interpreter will also outperform a JIT or dynamic compiler, because there are no compilation or profile-data-gathering overheads to consider. However, these are rare conditions in most code.

For realtime apps, JITs are also a pain; a plain interpreter will execute the same code in the same time, every time. Interpreted languages are also a pain for deployment: you either have to distribute your VM/compiler with the app or rely on people already having it, from the OS or a browser, where some have 1.1.x and some have 1.4.x.

Memory, too: plain interpreters need little extra memory, while JITs can be quite memory hungry - both the bytecode and the compiled code are required at times, and the compiler itself requires more memory as it compiles. Dynamic compilers (especially those that support code deletion) can operate with much lower memory requirements than JITs, but usually require more than interpreters to run the same code, although once you start aggressive inlining... well, I'm sure you see a pattern. And if you have very compact bytecode and a very small VM, then an interpreter can use less memory than a statically compiled app.

Mike.
Jan 18 2003
prev sibling next sibling parent "Scott Pigman" <scottpig1 attbi.com> writes:
On Sat, 18 Jan 2003 03:05:36 +0100, Ilya Minkov wrote:

 Scott Pigman wrote:
 Hi There,
 I've just found out about D (thanks to slashdot), and i see a lot of stuff
 welcome. But i'm fairly new here too.
Thank you.
 
 functions as first class objects.  now, i really have no idea what this
 means in terms of implementation or performance, but lisp has had it since
<snip>
 Lisp is an interpreted language. What that means, is that you've always 
 got a code generator at hand to generate some more code. In D this 
 cannot be done. Maybe after a VM version of D is made (in years), 
 something similar can be thought of. For now you can toss functions 
 around as you like, but you can't create them at runtime. They need to 
 be all defined to compile time. This possibility can be added as a 
 library later, but it's probable that you'd only be able to use C or a 
 small subset of D for runtime generation.
To what extent is "tossing around" functions supported now? Is it C-style function pointers (which I have to admit I'm not at all comfortable with), or is it something cleaner? I can't find anything relevant on the web page.
 It is a very complicated topic within non-interpreted environments, but 
 i'm glad to tell you that it's a subject of my particular interest so 
 i'll keep investigating how such things can be done. I guess Burton is 
 also interested in something similar.
I think it's a worthy goal, so put me down as interested too, although I'm not sure I could contribute much to the mix myself.
 Currently research is led to make code optimise itself at run-time. For 
 example, there are values which become constant at run-time, but are not 
 known at compile time. These would get optimised out then. But don't 
 expect this to become real within your life span. :>
That's another feature I would like. Just please don't go putting limits on my lifespan, okay ;-)
 lisp like closures.  now, of all the things here which i've heard about
<snip>
 
 Any real usage? Consider using templates, or just creating another 
 module in such case. Such closures would only resolve unit-scope 
 ambiguities, which you would have to stumble over later when making some 
 minor change. It simply takes accurate naming to avoid it. It is better 
 to correct a bug than to hide it, right? Consider also: keeping units 
 (=modules) small enough would do legibility good.
Good points. I'll get back to you on that ;-)
 a foreach or forall statement for iterating over elements of a collection:
 "foreach person in newsgroup{.....}" where newsgroup is a collection/array
 of persons, and whatever's between the {}'s is code that is executed on
 each person.  to me this just seems a lot more intuitive to work with than
 "for(x = 0; x < Y; x++)..."
I'm pretty much sure Walter is considering this right now.
Awesome.
Jan 17 2003
prev sibling next sibling parent reply "Scott Pigman" <scottpig1 attbi.com> writes:
 multiple return values from a function.  personally, i've never been very
 keen on the C/C++ idea of having parameters that are actually return
 values.  i like the python model here: "a,b,c = foo(x,y,z)".  again, i
 think it's a lot easier to follow code that has all the inputs in one
 place and the outputs in another.  i guess this wouldn't be compatibly
 with the IDL like the current in/out/inout specifiers, but i'd say that
 it's a lot more often that you'd write a function w/ multiple return
 values than write something that needs to be compatible w/ IDL.  and no, i
 wouldn't suggest getting rid of the in/out/inout specifiers, i'd just
 rather not use them myself all that often.
Sweet. Hm. Not a real problem, just a decision question. It could make the result be placed on stack, not returned imlicitly by reference, which makes it impossible to call legacy functions this way, but
Why does it make it impossible to call legacy functions? Actually, what do you mean by legacy functions - embedded C functions, I presume? There can't be much in the way of legacy D functions yet.
 might... be good for something. Opinions? And inout would simply mutate 
 back to what they are - references?
 
 But what if you use such a function in an expression - you get a real 
 mess. Noone forces though.
I don't think that's necessary - the expression would use only the first returned value. I think it'd go something like this:

    int,int,int foo() { return 1,2,3; }

    a,b,c = foo();    // a == 1, b == 2, c == 3
    d = foo();        // d == 1, the second & third return values are lost
    x = foo()*foo();  // x == 1 (1 * 1), the other values are ignored

Hmm, how about this construct?

    foreach x in foo() { print x }   // prints "1 2 3"

scott
Jan 17 2003
next sibling parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
You'd need a tuple construct in the language.  I think it's a good idea.

Sean

"Scott Pigman" <scottpig1 attbi.com> wrote in message
news:b0ahpt$o5l$1 digitaldaemon.com...
 multiple return values from a function.  personally, i've never been
very
 keen on the C/C++ idea of having parameters that are actually return
 values.  i like the python model here: "a,b,c = foo(x,y,z)".  again, i
 think it's a lot easier to follow code that has all the inputs in one
 place and the outputs in another.  i guess this wouldn't be compatibly
 with the IDL like the current in/out/inout specifiers, but i'd say that
 it's a lot more often that you'd write a function w/ multiple return
 values than write something that needs to be compatible w/ IDL.  and
no, i
 wouldn't suggest getting rid of the in/out/inout specifiers, i'd just
 rather not use them myself all that often.
Sweet. Hm. Not a real problem, just a decision question. It could make the result be placed on stack, not returned imlicitly by reference, which makes it impossible to call legacy functions this way, but
why does it make impossible to call legacy functions? actually, what do you mean by legacy functions - embedded C functions i presume? their can't be much of any D legacy functions yet.
 might... be good for something. Opinions? And inout would simply mutate
 back to what they are - references?

 But what if you use such a function in an expression - you get a real
 mess. Noone forces though.
i don't think that's necessary - the expression would use only the first returned value. i think it'd go something like this: int,int,int foo() { return 1,2,3; } a,b,c = foo() // a == 1, b == 2, c == 3 d = foo(); // d = 1, the second & third return values are lost x = foo()*foo() // x = 1 (1 * 1), the other values are ignored. hmmm, how about this construct? foreach x in foo(){ print x} // prints "1 2 3" scott
Jan 18 2003
parent reply Evan McClanahan <evan dontSPAMaltarinteractive.com> writes:
Sean L. Palmer wrote:
 You'd need a tuple construct in the language.  I think it's a good idea.
 
 Sean
Request for Tuples seconded. Evan
Jan 19 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
Evan McClanahan wrote:
 Sean L. Palmer wrote:
 
 You'd need a tuple construct in the language.  I think it's a good idea.

 Sean
I think tuples are anonymous structs, and thus the syntax should lean on structs rather than on tuples in other languages. Anonymous structs with implicit underlying types. :/
 
 Request for Tuples seconded.
 
 Evan
 
Jan 20 2003
parent "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
Yeah, pretty much.  Just needs (initialization, syntax)

Sean

"Ilya Minkov" <midiclub 8ung.at> wrote in message
news:b0gvuf$14u5$1 digitaldaemon.com...
 I think tuples are anonymous structs, and thus the syntax should be
 leaned on the structs, rather then on tuples in other languages.
 Anonymous structs with implicit underlying types. :/
Jan 20 2003
prev sibling parent Antti Sykari <jsykari gamma.hut.fi> writes:
"Scott Pigman" <scottpig1 attbi.com> writes:

 But what if you use such a function in an expression - you get a real 
 mess. Noone forces though.
i don't think that's necessary - the expression would use only the first returned value. i think it'd go something like this: int,int,int foo() { return 1,2,3; } a,b,c = foo() // a == 1, b == 2, c == 3 d = foo(); // d = 1, the second & third return values are lost x = foo()*foo() // x = 1 (1 * 1), the other values are ignored. hmmm, how about this construct? foreach x in foo(){ print x} // prints "1 2 3"
Looks nice. You know, I've always wanted something like:

    foreach x in 2, 3, 5, 7 {
        do_something(x);
    }

The same should work for standard containers:

    void f(IntSet set_of_ints, int[] int_array)
    {
        foreach x in set_of_ints { bah(x); }
        foreach x in int_array   { pfoo(x); }
    }

While we're defining new syntax, this would be neat, too:

    foreach x in 1..100 {
        printLn(x);   // print numbers from 1 to 100
    }
    foreach x in 100..1 {
        // the same, backwards
    }

Of course, it would require some kind of syntax for the ".." operator -- which could create an instance of a struct Range, for example. (It might even fit the standard operator overloading framework.)

For a general foreach construct, I'd think that some kind of iterator concept would fit the job. As in:

    interface TIterator {
        T get();
        void advance();
        bit valid();
    }

and now "foreach x in y { foo(x); }" would translate to:

    {
        T x;
        TIterator i = y.getIterator();
        while (i.valid()) {
            foo(i.get());
            i.advance();
        }
    }

I faintly remember Java having an iterator like this. And y must have a member getIterator() (of the appropriate type, of course).

But let's go back to tuples. What if the elements of the tuple are of different types?

    int, char, float f() {
        return 1, 'a', 3.14159;
    }

    foreach (x in f()) {
        ummagumma(x);
    }

Should this be unrolled to:

    ummagumma(1);
    ummagumma('a');
    ummagumma(3.14159);

??

Just the ideas of tonight,
Antti
Jan 20 2003
prev sibling parent reply "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
Hi,

    Comments embedded.

"Ilya Minkov" <midiclub 8ung.at> escreveu na mensagem
news:b0accn$l2u$1 digitaldaemon.com...

[snip]

 functions as first class objects.  now, i really have no idea what this
 means in terms of implementation or performance, but lisp has had it
since
 day one, so i'd suspect it can't be too bad.  (of course, i don't know
of
 many other languages that have it, so maybe it ain't so easy after all).
 but personally, being able to store functions in data structures, pass
 them (easily) as arguments to functions or creating fuctions which are
 themselves capable of creating and returning a functions just sounds
very
 tantalizing to me.  having this would go hand in hand with having
 something equivalent to lisp/python lamda statements - i.e. anonymous,
 unnammed functions.
Lisp is an interpreted language. What that means, is that you've always got a code generator at hand to generate some more code. In D this cannot be done. Maybe after a VM version of D is made (in years), something similar can be thought of. For now you can toss functions around as you like, but you can't create them at runtime. They need to be all defined to compile time. This possibility can be added as a library later, but it's probable that you'd only be able to use C or a small subset of D for runtime generation. It is a very complicated topic within non-interpreted environments, but i'm glad to tell you that it's a subject of my particular interest so i'll keep investigating how such things can be done. I guess Burton is also interested in something similar. Currently research is led to make code optimise itself at run-time. For example, there are values which become constant at run-time, but are not known at compile time. These would get optimised out then. But don't expect this to become real within your life span. :>
The remark about Lisp is incorrect. AFAIK almost all Lisp compilers also compile to native code (maybe there are a few exceptions). Check out http://www.cons.org/cmucl where there is a free (as in freedom) implementation of Common Lisp. Other languages that have closures and are compiled are OCaml, Haskell, SML, Smalltalk (has blocks and can be compiled), and Java (using inner classes with final variables as closures). It's a pretty easy topic in non-interpreted environments: you just have to pack a data structure together with a function pointer.

I saw one message some months ago (don't remember where) about a trick in C, where you define a function, access it through a function pointer, malloc something with the size of the function in memory, copy the function to the memory area, access it via a pointer, and change the hardcoded values inside the function, later executing it through the function pointer. It's a hack, but it's possible.

Also, closures can be inlined by smart compilers without much code bloat (some bloat is always inevitable if you inline things).
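
As a rough illustration of the "data structure plus function pointer" representation, here is a minimal D-style sketch; `Env`, `Closure`, `makeAdder` and the rest are made-up names, and this is not a description of any particular compiler's scheme:

    // The captured environment: one integer in this example.
    struct Env { int bound; }

    // A closure is just the environment paired with code that expects it.
    struct Closure
    {
        Env env;
        int function(Env*, int) code;

        int call(int arg) { return code(&env, arg); }
    }

    // The code part: adds the captured value to its argument.
    int addImpl(Env* env, int x) { return env.bound + x; }

    // "Creates" a closure by filling in the environment.
    Closure makeAdder(int n)
    {
        Closure c;
        c.env.bound = n;
        c.code = &addImpl;
        return c;
    }

    void demo()
    {
        auto add5 = makeAdder(5);
        int r = add5.call(2);   // r == 7
    }
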
 multiple return values from a function.  personally, i've never been
very
 keen on the C/C++ idea of having parameters that are actually return
 values.  i like the python model here: "a,b,c = foo(x,y,z)".  again, i
 think it's a lot easier to follow code that has all the inputs in one
 place and the outputs in another.  i guess this wouldn't be compatibly
 with the IDL like the current in/out/inout specifiers, but i'd say that
 it's a lot more often that you'd write a function w/ multiple return
 values than write something that needs to be compatible w/ IDL.  and no,
i
 wouldn't suggest getting rid of the in/out/inout specifiers, i'd just
 rather not use them myself all that often.
Sweet. Hm. Not a real problem, just a decision question. It could make the result be placed on stack, not returned imlicitly by reference, which makes it impossible to call legacy functions this way, but might... be good for something. Opinions? And inout would simply mutate back to what they are - references? But what if you use such a function in an expression - you get a real mess. Noone forces though.
This kind of thing is very tricky. If you have:

    (int, int, int) a(int a) {
        return a, a, a;
    }
    int b(int a, int b, int c) {
        return a + b + c;
    }

can you do this or not?

    int six = b(a(2));

Using stack semantics we could allow this, but it opens up some strange possibilities. The other possibility is to use tuple semantics, but that has its complications too (it changes the type system in several ways). Note that a Pair or Triple template can do a similar job here, simplifying our work:

    Triple(int,int,int) a(int a) {
        return new Triple(a, a, a);
    }
 java style online API/user libary documentation -- for my money, one of
 the best things about working with Java.  i'd hope that D would have a
 similar resource (and am puzzeled why c and c++ don't have something
 similar).
Well, that's something you could make someday. ;) In my personal opinion, D is quite unlike c++ suited as an educational language, so a lot of easy-to-read documentation would have to be written. But of course, Consider: C comes with a tiny library only. But of course there is a description of it. As of c++, there is no common style, little common libraries and tools, and a LOT of resources. C++ coders don't understand each other when they talk. It shouldn't happen to D. It gets a decent library, which in turn needs documentation.
 lisp like closures.  now, of all the things here which i've heard about
 and like the sound of, but haven't actually used in practice, this one
is
 the one i probably understand the least. but what i think i understand,
 appeals to me.  basically, my understanding is that this amounts to
 functions with a shared state - or perhaps they're anonymous, unnammed
 classes. e.g.  closure{ double total
 add(double x){total += x; return total}
 sub(double x){total += x; return total}
 }
 so, add and sub both act on the same variable total, but total is
 protected from having any other function modify it.  maybe not quite a
 global variable, but a national variable.  i dunno, sounds good to me
 though.  and if that isn't a lisp closure, well, then i like whatever it
 is i just described.
Any real usage? Consider using templates, or just creating another module in such case. Such closures would only resolve unit-scope ambiguities, which you would have to stumble over later when making some minor change. It simply takes accurate naming to avoid it. It is better to correct a bug than to hide it, right? Consider also: keeping units (=modules) small enough would do legibility good.
If you do higher-order programming, closures are very useful, especially when you want some partial application. A good starting point is http://www.md.chalmers.se/~rjmh/Papers/whyfp.html or http://www.cs.ukc.ac.uk/pubs/1997/224/index.html . A simple example is a map function:

    int[] scale(int[] numbers, int scale) {
        return numbers.map(anon(int i) { return i * scale; });
    }

map applies the function to each item of the array, creating a new array with the results. There are other usages; if you're interested I can dig up some nice examples of higher-order function (HOF) expressiveness.
 a foreach or forall statement for iterating over elements of a
collection:
 "foreach person in newsgroup{.....}" where newsgroup is a
collection/array
 of persons, and whatever's between the {}'s is code that is executed on
 each person.  to me this just seems a lot more intuitive to work with
than
 "for(x = 0; x < Y; x++)..."
I'm pretty much sure Walter is considering this right now.
 the obvious name of the following unfortunately clashes with that of the
 previous, but having just wrapped up a course in program correctness i'm
 thinking i'd like to see boolean "forall" and "exists" expressions for
use
 in contracts to verify the program correctness.  "forall x in X(cond)"
 would be true if all the x's in X satisfy the cond, and "exists x in
 X(cond)" would be true in any one x in X satisfied the cond.
Hm. {We} are afraid of too many keywords :> as well as of complex parsing. So that features like ^^ runtime-generated functions could become possible someday. Well, a keyword or two doesn't matter but... Hm. What I am considering is an alternative plugin-driven parsing possibility, which would make it possible to make some tests without messing around with the actual language, as well as add a few functional programming constructs. D better stay lighweight, so that it is as easy to learn as Delphi, and yet to combine the powers of Perl and C++ and everything needed really often. Whereas i don't consider pure english keywords a danger for learnability.
With HOFs we can do this. The Deimos Template Library, http://www.minddrome.com/d/deimos/deimos-0.0.1.zip , provides all, any and none quantifiers (I think they're in this release) for arrays as template functions, so you just write:

    public boolean isOdd(int n) {
        return n % 2 != 0;
    }

    instance TArrays(int) arrays;
    const int[] numbers = ...;

    if (arrays.all(numbers, &isOdd)) { ... }
    if (arrays.any(numbers, &isOdd)) { ... }
 well, i think that's more than enough for now.  thanks for reading to
 anybody who actually got this far.  take care & good night.

 -scott
Best regards,
Daniel Yokomiso.

"Lord save me from your followers."
Jan 17 2003
next sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Daniel Yokomiso wrote:
 Hi,
 
     Comments embedded.
 
 "Ilya Minkov" <midiclub 8ung.at> escreveu na mensagem
 news:b0accn$l2u$1 digitaldaemon.com...
 
 [snip]
 
 
functions as first class objects.  now, i really have no idea what this
means in terms of implementation or performance, but lisp has had it
since
day one, so i'd suspect it can't be too bad.  (of course, i don't know
of
many other languages that have it, so maybe it ain't so easy after all).
but personally, being able to store functions in data structures, pass
them (easily) as arguments to functions or creating fuctions which are
themselves capable of creating and returning a functions just sounds
very
tantalizing to me.  having this would go hand in hand with having
something equivalent to lisp/python lamda statements - i.e. anonymous,
unnammed functions.
Lisp is an interpreted language. What that means, is that you've always got a code generator at hand to generate some more code. In D this cannot be done. Maybe after a VM version of D is made (in years), something similar can be thought of. For now you can toss functions around as you like, but you can't create them at runtime. They need to be all defined to compile time. This possibility can be added as a library later, but it's probable that you'd only be able to use C or a small subset of D for runtime generation. It is a very complicated topic within non-interpreted environments, but i'm glad to tell you that it's a subject of my particular interest so i'll keep investigating how such things can be done. I guess Burton is also interested in something similar. Currently research is led to make code optimise itself at run-time. For example, there are values which become constant at run-time, but are not known at compile time. These would get optimised out then. But don't expect this to become real within your life span. :>
The remark about Lisp is incorrect. AFAIK almost all Lisp compilers compile also to native code (maybe there are a few exceptions). Check out at http://www.cons.org/cmucl where there is a free (as in freedom) implementation of Common Lisp. Other languages that have closures and are compiled are OCaml, Haskell, SML, Smalltalk (has blocks and can be compiled), Java (using inner classes with final variables as closures). It's pretty much easy topic on non-interpreted environments. You just have to pack a data structure together with a function pointer. I saw one message some months ago (don't remember where) about a trick in C, where you define a function, access it through a function pointer, malloc something with the size of the function in memory, copy the fp to the memory area, access it via a pointer, and change the hardcoded values inside the function, latter executing it through the function pointer. It's a hack but it's possible. Also closures can be inlined by smart compilers without much code bloat (some bloat is always inevitable if you inline things).
Hm. But is the Lisp compiler *always at hand*? I don't know Lisp. I'll have to look up how closures are done in OCaml.

OK, it basically means you need a piece of code which makes sense; then it can be compiled. But you can't eval() some run-time generated string like in Perl. Such work is being done in "Tick C", where you always carry a compiler with you. It would also optimise out many operations depending on the run-time constants. The current Tick C compiler is made out of the infamous LCC 3.6 and uses VCode as its back-back-end. I plan to finish the port of VCode to x86, and to adapt the Tick C backend for D, adding the "tick" possibilities.

I wonder how you keep track of so many languages - I chose D because it's almost nothing new for me to learn, not too many new problems to stumble over... Another language which I *have* to learn is OCaml, but I see a lot of sense in it and I'm doing it with pleasure. I have no more energy to learn another different language just for myself. But well, I also have hobbies like drawing, 3D gfx, and music (hence my nickname).
multiple return values from a function.  personally, i've never been
very
keen on the C/C++ idea of having parameters that are actually return
values.  i like the python model here: "a,b,c = foo(x,y,z)".  again, i
think it's a lot easier to follow code that has all the inputs in one
place and the outputs in another.  i guess this wouldn't be compatibly
with the IDL like the current in/out/inout specifiers, but i'd say that
it's a lot more often that you'd write a function w/ multiple return
values than write something that needs to be compatible w/ IDL.  and no,
i
wouldn't suggest getting rid of the in/out/inout specifiers, i'd just
rather not use them myself all that often.
Sweet. Hm. Not a real problem, just a decision question. It could make the result be placed on stack, not returned imlicitly by reference, which makes it impossible to call legacy functions this way, but might... be good for something. Opinions? And inout would simply mutate back to what they are - references? But what if you use such a function in an expression - you get a real mess. Noone forces though.
This kind of thing is very tricky. if you have: (int, int, int) a(int a) { return a,a,a; } int b(int a, int b, int c) { return a + b + c; } Can you do this or not? int six = b(a(2));
My brain overheats. I'd say yes... but then it is the same as returning a structure, and taking a structure. Someone in the thread suggested ignoring extra parameters, but that's *very evil*; it would inevitably lead to a new class of bugs. Maybe that's the solution - an anonymous structure defined by the types. It would also allow writing functions receiving structures and feeding them with any data of the corresponding types.
 
     Using stack semantics we could do this, but it open some strange
 possibilities. Other possibilitie is to use tuple semantics, but this has
 its complications too (changes the type system in several ways). Note that a
 template Pair or Triple can do a similar job here, simplifying our work.
 
 Triple(int,int,int) a(int a) {
     return new triple(a,a,a);
 }
 
Good job, Daniel. I just have to take a look at the library when I have time. It's like you foresee everything :)
 
 
java style online API/user libary documentation -- for my money, one of
the best things about working with Java.  i'd hope that D would have a
similar resource (and am puzzeled why c and c++ don't have something
similar).
Well, that's something you could make someday. ;) In my personal opinion, D is quite unlike c++ suited as an educational language, so a lot of easy-to-read documentation would have to be written. But of course, Consider: C comes with a tiny library only. But of course there is a description of it. As of c++, there is no common style, little common libraries and tools, and a LOT of resources. C++ coders don't understand each other when they talk. It shouldn't happen to D. It gets a decent library, which in turn needs documentation.
lisp like closures.  now, of all the things here which i've heard about
and like the sound of, but haven't actually used in practice, this one
is
the one i probably understand the least. but what i think i understand,
appeals to me.  basically, my understanding is that this amounts to
functions with a shared state - or perhaps they're anonymous, unnammed
classes. e.g.  closure{ double total
add(double x){total += x; return total}
sub(double x){total += x; return total}
}
so, add and sub both act on the same variable total, but total is
protected from having any other function modify it.  maybe not quite a
global variable, but a national variable.  i dunno, sounds good to me
though.  and if that isn't a lisp closure, well, then i like whatever it
is i just described.
Any real usage? Consider using templates, or just creating another module in such case. Such closures would only resolve unit-scope ambiguities, which you would have to stumble over later when making some minor change. It simply takes accurate naming to avoid it. It is better to correct a bug than to hide it, right? Consider also: keeping units (=modules) small enough would do legibility good.
If you do higher-order programming, closures are very useful, specially when you want some partial application. A good starting point is http://www.md.chalmers.se/~rjmh/Papers/whyfp.html or http://www.cs.ukc.ac.uk/pubs/1997/224/index.html A simple example is a map function: int[] scale(int[] numbers, int scale) { return numbers.map(i anon(int i) {return i * scale}); } Map apply the function to each item of the array, creating a new array with the results. There are other usages, if you're interested I can dig some nice examples of Higher-Order Functions (HOF) expressiveness.
There's something I don't understand yet. I'll take the time to study it.
 
 
a foreach or forall statement for iterating over elements of a
collection:
"foreach person in newsgroup{.....}" where newsgroup is a
collection/array
of persons, and whatever's between the {}'s is code that is executed on
each person.  to me this just seems a lot more intuitive to work with
than
"for(x = 0; x < Y; x++)..."
I'm pretty much sure Walter is considering this right now.
the obvious name of the following unfortunately clashes with that of the
previous, but having just wrapped up a course in program correctness i'm
thinking i'd like to see boolean "forall" and "exists" expressions for
use
in contracts to verify the program correctness.  "forall x in X(cond)"
would be true if all the x's in X satisfy the cond, and "exists x in
X(cond)" would be true in any one x in X satisfied the cond.
Hm. {We} are afraid of too many keywords :> as well as of complex parsing. So that features like ^^ runtime-generated functions could become possible someday. Well, a keyword or two doesn't matter but... Hm. What I am considering is an alternative plugin-driven parsing possibility, which would make it possible to make some tests without messing around with the actual language, as well as add a few functional programming constructs. D better stay lighweight, so that it is as easy to learn as Delphi, and yet to combine the powers of Perl and C++ and everything needed really often. Whereas i don't consider pure english keywords a danger for learnability.
With HOFs we can do this. Deimos Template Library http://www.minddrome.com/d/deimos/deimos-0.0.1.zip provides all, any and none quantifiers (I think they're in this release) for arrays as template functions, so you just write: public boolean isOdd(int n) { return n % 2 != 0; } instance TArrays(int) arrays; const int[] numbers = ...; if (arrays.all(numbers, &isOdd)) { ...} if (arrays.any(numbers, &isOdd)) { ...}
well, i think that's more than enough for now.  thanks for reading to
anybody who actually got this far.  take care & good night.

-scott
Best regards, Daniel Yokomiso. "Lord save me from your followers."
Jan 18 2003
parent reply "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
Hi,

    Comments embedded.

"Ilya Minkov" <midiclub 8ung.at> escreveu na mensagem
news:b0bj31$1a9a$1 digitaldaemon.com...
 Daniel Yokomiso wrote:
[snip]
     The remark about Lisp is incorrect. AFAIK almost all Lisp compilers
 compile also to native code (maybe there are a few exceptions). Check
out at
 http://www.cons.org/cmucl where there is a free (as in freedom)
 implementation of Common Lisp. Other languages that have closures and
are
 compiled are OCaml, Haskell, SML, Smalltalk (has blocks and can be
 compiled), Java (using inner classes with final variables as closures).
It's
 pretty much easy topic on non-interpreted environments. You just have to
 pack a data structure together with a function pointer. I saw one
message
 some months ago (don't remember where) about a trick in C, where you
define
 a function, access it through a function pointer, malloc something with
the
 size of the function in memory, copy the fp to the memory area, access
it
 via a pointer, and change the hardcoded values inside the function,
latter
 executing it through the function pointer. It's a hack but it's
possible.
     Also closures can be inlined by smart compilers without much code
bloat
 (some bloat is always inevitable if you inline things).
HM. But is the lisp compiler *always at hand*? I don't know lisp.
AFAIK it's not necessary but Lisp people like having the compiler available at runtime. I would like that too ;-)
 I'll have to look up how closures are done in OCaml.

 OK, it basically means you need a piece of code which makes sense, then
 it can be compiled. But you can't eval() some run-time generated string
 like in perl. Such work is being done in "Tick C", where you always
 carry a compiler with you. It would also optimise out many operations
 depending on the run-time constants. The current Tick C compiler is made
   out of the infamous LCC 3.6 and uses VCode as its back-back-end. I
 plan to finish the port of VCode to x86, and to adapt TickCC backend for
 D, adding the "tick" possibilities.
At Paul Graham's site, http://www.paulgraham.com/rootsoflisp.html , there's a nice summary about Lisp with two links: one to a paper by John McCarthy, where he explains the Lisp language, and another to the code written in this primordial Lisp. Notice that the eval function is written in Lisp and uses just Lisp types (atom and list). They don't need to eval strings, because their list datatype is very powerful. Of course this eval is slow as hell, but it proves a point.
 I wonder how you can keep track of so many languages - i chose D because
 it's almost nothing new to learn for me, not too many new problems to
 stumble over... Another language which I *have* to learn is OCaml, but i
 see a lot of sense in it and i'm doing it with plaesure. I have no more
 power to learn another different language, just for myself. But well, i
 also have hobbies like drawing, 3d gfx, music (hence my nickname).
I had other hobbies too: poetry, martial arts, bike riding, comic book reading. But I had to keep track of that many languages when I started developing my own. Back in the far past of April 2001 I started growing bored of Java and read a paper about Design by Contract in Java. It led me to Eiffel and comp.lang.eiffel. Skimming old posts I got into some wars about OO vs. functional, Eiffel vs. C++, and had time to read them. It was interesting, because I was designing a language to overcome the flaws in those. In those kinds of posts people always post links to their favourite languages, or to papers about some corner of computer science. As I'm addicted to information I started reading about all of it, and got into Sather, OCaml, Haskell, Smalltalk, and others. Note that I don't develop in those languages; I just read a lot about their concepts and tried to grok them.

IMO there's something good in every language and language construct. I don't say "Perl sucks, it's just noise in the form of code!" or things like that. I usually think: hmmm, these Perl guys have a nice feature in their language; unfortunately the syntax is awful. Is there any way to make it clearer, safer and just as terse (or with a minimum amount of verbosity added)? But there are some languages that every programmer should at least read a tutorial about (all with tutorials available online): Smalltalk, Eiffel, Sather, OCaml, SML, Lisp, Haskell, C++, C++ template expressions, Java, Python, Perl, Ruby, C and, of course, D. There are others, some useful like Prolog, Sisal or Scheme, some surreal like Befunge, Unlambda or Intercal, which are also nice to read about. It opens your horizon to different views of programming.

[snip]
     This kind of thing is very tricky. if you have:

 (int, int, int) a(int a) {
     return a,a,a;
 }
 int b(int a, int b, int c) {
     return a + b + c;
 }

     Can you do this or not?

 int six = b(a(2));
My brain overheats. I'd say yes... but then it is the same as returning a structure. And taking a structure. Someone in the hread suggested ignoring extra parameters, but it's *very evil*. It would inevitably lead to a new class of bugs. Maybe it's the solution - anonymous structure defined by the types. It would also allow writing functions recieving structures, and feed them with any data of the corresponding types.
As I said, very tricky. But think about it:

    (int, int) c(int a, int b) {
        return a / b, a % b;
    }
    int d(int a, int b, int c, int d) {
        return a + b + c + d;
    }

    int x = d(c(1,2), c(3,4));

The possibilities are infinite. It may look nice or awful depending on your point of view. And it should also fit in with the other parts of the language, or we'll become like C++ or Perl.
     Using stack semantics we could do this, but it open some strange
 possibilities. Other possibilitie is to use tuple semantics, but this
has
 its complications too (changes the type system in several ways). Note
that a
 template Pair or Triple can do a similar job here, simplifying our work.

 Triple(int,int,int) a(int a) {
     return new triple(a,a,a);
 }
Good job, Daniel. I just have to take a look at the library when I have time. It's like you foresee everything :)
Hey, I just copy ideas from other people ;-) As I said, I'm designing my own language too, and every time I read the Haskell prelude or the basic ML functions I feel ashamed :-)

[snip]
     If you do higher-order programming, closures are very useful, especially
 when you want some partial application. A good starting point is
 http://www.md.chalmers.se/~rjmh/Papers/whyfp.html or
 http://www.cs.ukc.ac.uk/pubs/1997/224/index.html
 A simple example is a map function:


 int[] scale(int[] numbers, int scale) {
     return numbers.map(anon(int i) { return i * scale; });
 }


     Map applies the function to each item of the array, creating a new array
 with the results. There are other usages; if you're interested I can dig up
 some nice examples of Higher-Order Function (HOF) expressiveness.
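For comparison, here is how the scale example comes out with a map that takes the anonymous function as a compile-time argument, so the literal can capture the scale factor from the enclosing scope. The std.algorithm.map, std.array.array and => literal syntax used here are today's D names, an assumption relative to this thread:

 import std.algorithm : map;
 import std.array : array;

 int[] scale(int[] numbers, int factor) {
     // the literal i => i * factor closes over `factor`
     return numbers.map!(i => i * factor).array;
 }

 void main() {
     assert(scale([1, 2, 3], 10) == [10, 20, 30]);
 }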
There's something I don't understand yet. I'll take some time to study it.
Closures are tricky to grasp right, especially for us people with imperative programming backgrounds. [snip]

Best regards,
Daniel Yokomiso.

"Creative minds have always been known to survive any kind of bad training." - Anna Freud
Jan 18 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
news:b0btch$1ev5$1 digitaldaemon.com...
     AFAIK it's not necessary but Lisp people like having the compiler
 available at runtime. I would like that too ;-)
I've been thinking of making DMDScript available at runtime for D programs.
Feb 17 2003
next sibling parent reply Michael Slater <mail effectivity.com> writes:
Walter wrote:
 I've been thinking of making DMDScript available at runtime for D programs.
This is a very cool idea! To have optional scripting support available at runtime would enable D to be used to more easily develop innovative applications that are outside the usual domain of a pure systems language or a pure scripting language.

A variant on the idea would be to make D runtime scriptable by any scripting language that supports a particular set of interfaces. Most people use Java/ECMA/DMDScript only because they have to, not because they want to. The scripters of the world seem to truly love Perl, Python, and Ruby. And the esoteric scripters have the script children of Smalltalk and Lisp.

It's a big feature, though, which will cost a good chunk of time.

--ms
Feb 17 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Michael Slater" <mail effectivity.com> wrote in message
news:b2rrlm$1sb1$1 digitaldaemon.com...
 Walter wrote:
 I've been thinking of making DMDScript available at runtime for D programs.
 This is a very cool idea! To have optional scripting support available
 at runtime would enable D to be used to more easily develop innovative
 applications that are outside the usual domain of a pure systems
 language or a pure scripting language.
Lots of apps need a built in language, so why not provide one? (For example, the lithp interpreter found in EMACS text editors.)
 A variant on the idea would be to make D runtime scriptable by any
 scripting language that supports a particular set of interfaces. Most
 people use Java/ECMA/DMDScript only because they have to, not because
 they want to. The scripters of the world seem to truly love Perl,
 Python, and Ruby. And the esoteric scripters have the script children of
 Smalltalk and Lisp.
The factors driving DMDScript as the scripting language to use in conjunction with D would be:
1) ECMAscript is popular and well known
2) I know ECMAscript
3) ECMAscript syntax is derived from C, so is not hopelessly dissimilar to D syntax
4) I have a fully implemented and debugged implementation of ECMAScript
 It's a big feature, though, which will cost a good chunk of time.
Not that much. It's mostly translating the code.
Feb 17 2003
parent reply Michael Slater <mail effectivity.com> writes:
Walter wrote:
 Lots of apps need a built in language, so why not provide one? (For example,
 the lithp interpreter found in EMACS text editors.)
Yes. This is absolutely brilliant. This feature alone will make D very interesting to application developers.
 The factors driving DMDScript as the scripting language to use in
 conjunction with D would be:
 1) ECMAscript is popular and well known
 2) I know ECMAscript
 3) ECMAscript syntax is derived from C, so is not hopelessly dissimilar to D syntax
 4) I have a fully implemented and debugged implementation of ECMAScript
Going with DMDScript is excellent. It gets the functionality now in a way that is affordable and low risk. And it provides a model for doing other things in the future. What's your view on the impedance match between DMDScript and D? How do the language features and object models compare?
It's a big feature, though, which will cost a good chunk of time.
Not that much. It's mostly translating the code.
If it doesn't cost too much to get the feature into the codebase, great. What do you think the impact will be on testing (not just the parts individually, but in combination with each other) and documentation?

All in all, I think going forward with D&DScript would be awesome. Hmmm. Not to be confused with D&D Script.

--ms
Feb 17 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Michael Slater" <mail effectivity.com> wrote in message
news:b2s7jk$26kv$1 digitaldaemon.com...
 What's your view on the impedance match between DMDScript and D? How do
 the language features and object models compare?
The syntax is superficially the same, but the huge difference is that DMDScript is typeless. Everything is a variant. Typeless languages are ideal for scripting.
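In D terms, "everything is a variant" means roughly the following, using today's std.variant purely as an illustration of the idea (the library name is an assumption here, not part of DMDScript):

 import std.variant : Variant;

 void main() {
     Variant v = 42;    // currently holds an int
     v = "hello";       // now a string: the type travels with the value
     v = 3.14;          // now a double
     assert(v.type == typeid(double));
     assert(v.get!double == 3.14);
 }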
 What do think the impact will be on testing (not just the parts
 individually, but in combination with each other) and documentation?
Testing is always a problem.
Feb 17 2003
parent reply Garen <garen_nospam_ wsu.edu> writes:
Walter wrote:

 
 The syntax is superficially the same, but the huge difference is that
 DMDScript is typeless. Everything is a variant. Typeless languages are ideal
 for scripting.
 
Ever considered a kind of type inference for D?
Feb 20 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Garen" <garen_nospam_ wsu.edu> wrote in message
news:b33nfp$374$1 digitaldaemon.com...
 Walter wrote:
 The syntax is superficially the same, but the huge difference is that
 DMDScript is typeless. Everything is a variant. Typeless languages are ideal
 for scripting.
Ever considered a kind of type inference for D?
Yes, and I think that strong typing is the way to go for large programs.
Feb 21 2003
parent Garen <garen_nospam_ wsu.edu> writes:
Walter wrote:


 Yes, and I think that strong typing is the way to go for large programs.
 
Yeah, you can have type inference with strong typing, though it's some work to do. Not that I know how to do it with D, but I always liked it when using ML-like langs.
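For what it's worth, the limited, local form of this looks roughly like the sketch below. It is written with the auto keyword from today's D, which is an assumption relative to this thread:

 import std.stdio : writeln;

 void main() {
     auto x = 3.14;            // inferred as double, still statically typed
     auto words = ["a", "b"];  // inferred as string[]
     // x = "oops";            // would be rejected at compile time
     writeln(typeof(x).stringof, " ", typeof(words).stringof);
 }

Full ML-style inference goes much further (across whole function signatures), but even the local form keeps the strong typing.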
Feb 21 2003
prev sibling next sibling parent reply Ilya Minkov <midiclub 8ung.at> writes:
Hmmm... that might mean that other implementors would also have to hack 
some implementation together...

BTW, how come the Netscape implementation is so damn slow? Where does
that difference of an order of magnitude compared to the next-fastest
implementation come from?

Walter wrote:
 I've been thinking of making DMDScript available at runtime for D programs.
Feb 17 2003
parent "Walter" <walter digitalmars.com> writes:
"Ilya Minkov" <midiclub 8ung.at> wrote in message
news:b2rs64$1squ$1 digitaldaemon.com...
 Hmmm... that might mean that other implementors would also have to hack
 some implementation together...
I think something could be worked out. Also, the DMDScript would not be part of the D language, it would just be part of the Digital Mars implementation.
 BTW, how come the Netscape implementation is so damn slow? Where does
 that difference of an order of magnitude compared to the next-fastest
 implementation come from?
I don't know. I haven't studied the source code to Netscape. Perhaps it's because I have a better profiler <g>.
Feb 17 2003
prev sibling parent reply Burton Radons <loth users.sourceforge.net> writes:
Walter wrote:
 "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
 news:b0btch$1ev5$1 digitaldaemon.com...
 
    AFAIK it's not necessary but Lisp people like having the compiler
available at runtime. I would like that too ;-)
I've been thinking of making DMDScript available at runtime for D programs.
I'm working on a D compiler in D right now; the backend is cruddy (same thing as DLI, but it creates very slightly better code) but one could wield it for an interactive mode. It'll be faster than any interpreted language, at least. Its state is that integer and floating-point arithmetic are done, some control structures are in, and functions and classes are fairly well done. What's missing are the majority of types, various expressions, some control structures (switch, goto, break, continue), interfaces, exceptions, templates, operator overloading, array operations, Phobos, and saving/loading object files. It's ABI-compatible with DMD.
Feb 17 2003
parent reply Juarez Rudsatz <juarezNOSPAM correio.com> writes:
Burton Radons <loth users.sourceforge.net> wrote in
news:b2s328$234s$1 digitaldaemon.com: 

 I'm working on a D compiler in D right now; the backend is cruddy
 (same thing as DLI, but it creates very slightly better code) but one
 could wield it for an interactive mode.  It'll be faster than any
 interpreted language, at least.
I don't understand: you will need a D compiler to compile your D compiler? What is the advantage?
Feb 18 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
He intended it as a "proof of concept". He said that before.
A language which cannot compile a compiler of itself is merely a toy.
This would also automatically ensure some degree of compatibility between
different compilers.

BTW, it would be safe to write a D->C compiler (DFront) completely in D,
because as soon as it's bootstrapped with some existing D compiler, it
can be compiled on all targeted systems.

Juarez Rudsatz wrote:
 I don't understand:

 You will need a D compiler to compile your D compiler?
 What is the advantage?
Feb 18 2003
parent Juarez Rudsatz <juarezNO_SPAMTOME correio.com> writes:
Ilya Minkov <midiclub 8ung.at> wrote in news:b2uaa2$10l3$1
 digitaldaemon.com:

 BTW, it would be safe to write a D->C compiler (DFront) completely in D,
 because as soon as it's bootstrapped with some existing D compiler, it
 can be compiled on all targeted systems.
And maybe tools like:
o a syntax style formatter?
o source code documentation tools?
Feb 26 2003
prev sibling parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
"Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
news:b0apv1$s9u$1 digitaldaemon.com...

     With HOFs we can do this. Deimos Template Library
 http://www.minddrome.com/d/deimos/deimos-0.0.1.zip provides all, any and
 none quantifiers (I think they're in this release) for arrays as template
 functions, so you just write:

 public boolean isOdd(int n) {
     return n % 2 != 0;
 }
 instance TArrays(int) arrays;
 const int[] numbers = ...;
 if (arrays.all(numbers, &isOdd)) { ...}
 if (arrays.any(numbers, &isOdd)) { ...}
The function call to isOdd needs to be eliminated. That's why closures and anonymous functions are so important, so stuff like this can be inlined. The basic idea is to pass *the function*, or a reference to it, not a pointer to the function, as a parameter. Taking a pointer guarantees it must live at a memory address which is not what you want.

Also I personally would replace return n % 2 != 0 with return (n & 1) != 0 so that you can be reasonably sure you'll get the best performance possible. Integer modulus can be very slow. Several orders of magnitude slower than binary and.

Sean
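As a point of reference, here is the same all/any check written so the predicate is a literal passed as a compile-time argument, which the compiler can see through and inline. The std.algorithm names and the => literal syntax are assumptions here, not the Deimos API quoted above:

 import std.algorithm : all, any;

 void main() {
     int[] numbers = [1, 3, 5, 7];
     assert(numbers.all!(n => n % 2 != 0));  // forall: every element is odd
     assert(numbers.any!(n => n % 2 != 0));  // exists: at least one is odd
 }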
Jan 18 2003
next sibling parent reply "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
"Sean L. Palmer" <seanpalmer directvinternet.com> escreveu na mensagem
news:b0c76g$1nd8$1 digitaldaemon.com...
 "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
 news:b0apv1$s9u$1 digitaldaemon.com...

     With HOFs we can do this. Deimos Template Library
 http://www.minddrome.com/d/deimos/deimos-0.0.1.zip provides all, any and
 none quantifiers (I think they're in this release) for arrays as template
 functions, so you just write:

 public boolean isOdd(int n) {
     return n % 2 != 0;
 }
 instance TArrays(int) arrays;
 const int[] numbers = ...;
 if (arrays.all(numbers, &isOdd)) { ...}
 if (arrays.any(numbers, &isOdd)) { ...}
 The function call to isOdd needs to be eliminated. That's why closures and
 anonymous functions are so important, so stuff like this can be inlined.
 The basic idea is to pass *the function*, or a reference to it, not a
 pointer to the function, as a parameter. Taking a pointer guarantees it
 must live at a memory address which is not what you want. Also I personally
 would replace return n % 2 != 0 with return (n & 1) != 0 so that you can be
 reasonably sure you'll get the best performance possible. Integer modulus
 can be very slow. Several orders of magnitude slower than binary and.

 Sean
Hi,

    That's why I, in every other post I write, talk about anonymous functions. I personally like the anon(n) syntax, with or without simple type inference support. But I don't agree that a call to &isOdd and anon(n) {return n % 2 != 0;} are so different that only the closure can be inlined. If the compiler can statically determine the function reference then it can inline it too. Also a closure is not different than a variable bound to a closure. But currently D doesn't offer anonymous functions, so I must provide code snippets using function pointers. A code snippet that uses invalid syntax or semantics is almost meaningless to the reader, except as criticism of language constructs.

    The remark about using (n & 1) != 0 instead is flawed in my opinion. Of course in a simple code snippet it can be clear, but these kinds of things are likely premature optimization and may lead to highly obfuscated code. IMO better code would read "return n.isDivisibleBy(2);" using an OOish syntax, or even "return 2.divides(n);". If we have a language with different idioms for writing code, like use this if you want to be clear, but use that when you want performance, we'll end up doing C/C++ like code. In D we already have this dissension on code snippets about array iteration, just compare those I write versus Burton code snippets. I'm not claiming superiority here, just saying that if we need two idioms for performance reasons, there's something wrong with the language spec. It's like recursion vs. iteration: usually recursion is simpler to understand, but people argue that it's inefficient, so use iteration instead. Most functional languages solve this problem by requiring a compliant compiler to optimize tail calls in general, leading to better performance for every kind of tail call, recursive or not.

Best regards,
Daniel Yokomiso.

"All must fall, and those who stand the highest fall hardest." - Phyrexian Scriptures
Jan 18 2003
next sibling parent reply "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
"Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
news:b0cqgg$2302$1 digitaldaemon.com...
 The function call to isOdd needs to be eliminated. That's why closures and
 anonymous functions are so important, so stuff like this can be inlined.
 The basic idea is to pass *the function*, or a reference to it, not a
 pointer to the function, as a parameter. Taking a pointer guarantees it
 must live at a memory address which is not what you want. Also I personally
 would replace return n % 2 != 0 with return (n & 1) != 0 so that you can be
 reasonably sure you'll get the best performance possible. Integer modulus
 can be very slow. Several orders of magnitude slower than binary and.

 Sean
 Hi,
     That's why I, in every other post I write, talk about anonymous
 functions. I personally like the anon(n) syntax, with or without simple type
 inference support. But I don't agree that a call to &isOdd and anon(n)
 {return n % 2 != 0;} are so different that only the closure can be inlined.
 If the compiler can statically determine the function reference then it can
 inline it too. Also a closure is not different than a variable bound to a
 closure. But currently D doesn't offer anonymous functions, so I must
 provide code snippets using function pointers. A code snippet that uses
 invalid syntax or semantics is almost meaningless to the reader, except as
 criticism of language constructs.
Gotcha.
     The remark about using (n & 1) != 0 instead is flawed in my opinion. Of
 course in a simple code snippet it can be clear, but these kinds of things
 are likely premature optimization and may lead to highly obfuscated code.
One man's obfuscation is another man's beauty. It's perfectly clear. I guess some programmers don't understand binary arithmetic... I won't be working with those kinds of people. ;)

I suppose compilers could optimize the integer modulus, but in practice the ones I've checked only do this for powers of two *if both sides are unsigned*. The problem is that the results of the AND will be different from the results of the MOD if the input is negative, so it can't make the optimization. I suppose a *really* smart compiler could look one step further and see that you're only using the result for comparison with zero, and so allow the optimization. I'll believe it when I see it.

And of course we have to use fmod for floats; God evidently forbade use of the same operator for both integer and floating point division or modulus! It's a veritable Tower of Babel. All of programming is like that.
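A small sketch of that signed corner case, in present-day D (std.stdio and writeln are assumed names here):

 import std.stdio : writeln;

 void main() {
     int n = -3;
     writeln(n % 2);   // -1: the remainder takes the sign of the dividend
     writeln(n & 1);   //  1: the low bit ignores the sign
     // the two expressions only agree once both are compared against zero:
     assert((n % 2 != 0) == ((n & 1) != 0));
 }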
 IMO better code would read "return n.isDivisibleBy(2);" using an OOish
 syntax, or even "return 2.divides(n);".
Neither of those seem very readable to me. Just longwinded. It's totally hiding the fact that you're trying to get the computer to test if the low bit of the number is nonzero. ;)
 If we have a language with different idioms for writing code, like use this
 if you want to be clear, but use that when you want performance, we'll end up
 doing C/C++ like code. In D we already have this dissension on code snippets
 about array iteration, just compare those I write versus Burton code
 snippets.
As you can tell, I am used to compilers not doing what I would wish them to, and having to spoon-feed them to get them to do what I want. I program video games for a living. I tend to write what I want the compiler to do fairly explicitly, so as to not waste any more performance than necessary. Waste not, want not.
 I'm not claiming superiority here, just saying that if we need two idioms
 for performance reasons, there's something wrong with the language spec.
 It's like recursion vs. iteration: usually recursion is simpler to
 understand, but people argue that it's inefficient, so use iteration
 instead. Most functional languages solve this problem by requiring a
 compliant compiler to optimize tail calls in general, leading to better
 performance for every kind of tail call, recursive or not.
I think tail recursion optimization is a great idea; every compiler should do it. If anyone ever manages to unify all syntax and style then we'll have reached the point where programmers aren't needed anymore. Until then, I'm going to code for efficiency, and you can code for readability. Unfortunately I'm stuck with C++ for the foreseeable future. Sean
Jan 19 2003
next sibling parent reply Chris <cl nopressedmeat.tinfoilhat.ca> writes:
Sean L. Palmer wrote:
 "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
 news:b0cqgg$2302$1 digitaldaemon.com...
 
    The remark about using (n & 1) != 0 instead is flawed in my opinion. Of
 course in a simple code snippet it can be clear, but these kinds of things
 are likely premature optimization and may lead to highly obfuscated code.
One man's obfuscation is another man's beauty. It's perfectly clear. I guess some programmers don't understand binary arithmetic... I won't be working with those kinds of people. ;)
It is not perfectly clear. "Perfectly clear" is a strong statement to make. With your code, upon initial inspection I have to decide "Hey, is this person checking for a value set in a bit field, or are they checking for division by 2?" It shouldn't take me more than a second, but it's just one more thing that I have to work through to understand what your code is actually *doing*. It's only perfectly clear if I immediately know what you're checking for, .isDivisibleBy(2) (for example) is perfectly clear; there's no two ways it could be meant. Regards, Chris
Jan 19 2003
parent "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
All programmers must learn the common idioms in use near them.  If you know
the idiom, it's perfectly clear.

All you people complaining about having to look 3 lines prior in
the file to the declaration of the variable, or the comment next to it that
says "check if divisible by 2", or seeing an & and not knowing if it's an
address of, overloaded address of, bit field extraction, or odd check.
Browse info is pretty powerful stuff.  You're expected to look up the
declarations for things before you use them, unless you already know your
way around that code.

I truly have worse things to worry about than someone having to take a few
minutes to understand someone else's code.  I don't think it's possible to
eliminate that few minutes, or even reduce it by much.  It doesn't take up
your entire day, probably a few minutes tops.  And it's a few well-spent
minutes because during that time you gain understanding.  Just glancing at
the code isn't very helpful;  you won't usually find bugs that way, or
understand much of anything.  I think this is a fundamental problem with the
state of the art in programming languages.  I don't know that there is a
solution.  But telling me not to use bitwise & when I feel it's appropriate
is certainly not the solution.

What I'm a lot more worried about is code where a lot of cut and paste has
been done, where there are 40 occurrences of *almost* the identical code.
It takes 400 times as long to understand that, or to do anything to it
without breaking it.

Jeez.  Friggin pedants.

Sean

"Chris" <cl nopressedmeat.tinfoilhat.ca> wrote in message
news:3E2A83EB.7040909 nopressedmeat.tinfoilhat.ca...
 Sean L. Palmer wrote:
 "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
 news:b0cqgg$2302$1 digitaldaemon.com...

    The remark about using (n & 1) != 0 instead is flawed in my opinion. Of
 course in a simple code snippet it can be clear, but these kinds of things
 are likely premature optimization and may lead to highly obfuscated code.
 One man's obfuscation is another man's beauty.  It's perfectly clear.  I
 guess some programmers don't understand binary arithmetic... I won't be
 working with those kinds of people.  ;)
It is not perfectly clear. "Perfectly clear" is a strong statement to make. With your code, upon initial inspection I have to decide "Hey, is this person checking for a value set in a bit field, or are they checking for division by 2?" It shouldn't take me more than a second, but it's just one more thing that I have to work through to understand what your code is actually *doing*. It's only perfectly clear if I immediately know what you're checking for, .isDivisibleBy(2) (for example) is perfectly clear; there's no two ways it could be meant. Regards, Chris
Jan 20 2003
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
"Sean L. Palmer" <seanpalmer directvinternet.com> wrote in message
news:b0du7h$2k6i$1 digitaldaemon.com...
 And of
 course we have to use fmod for floats; God evidently forbade use of the
same
 operator for both integer and floating point division or modulus!
There is special dispensation for D, which supports % for floats.
Feb 17 2003
parent "Sean L. Palmer" <seanpalmer directvinternet.com> writes:
Awesome!

"Walter" <walter digitalmars.com> wrote in message
news:b2rgk2$1hdn$2 digitaldaemon.com...
 "Sean L. Palmer" <seanpalmer directvinternet.com> wrote in message
 news:b0du7h$2k6i$1 digitaldaemon.com...
 And of
 course we have to use fmod for floats; God evidently forbade use of the
same
 operator for both integer and floating point division or modulus!
There is special dispensation for D, which supports % for floats.
Feb 17 2003
prev sibling parent Antti Sykari <jsykari gamma.hut.fi> writes:
"Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
 "Sean L. Palmer" <seanpalmer directvinternet.com> escreveu na mensagem
 news:b0c76g$1nd8$1 digitaldaemon.com...
 "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
 news:b0apv1$s9u$1 digitaldaemon.com...
 instance TArrays(int) arrays;
 const int[] numbers = ...;
 if (arrays.all(numbers, &isOdd)) { ...}
 if (arrays.any(numbers, &isOdd)) { ...}
The function call to isOdd needs to be eliminated. That's why closures and anonymous functions are so important, so stuff like this can be inlined. The basic idea is to pass *the function*, or a reference to it, not a pointer to the function, as a parameter. Taking a pointer guarantees it must live at a memory address which is not what you want.
I might be nitpicking here, but I doubt that it makes any difference whether a function's address is passed via pointer or via reference. I hope that I'm not too optimistic when I think that a good compiler will inline the call even if the dreaded address-of operator is used to denote an indirection.

Actually, I don't think we should even be talking about pointers to functions. Why not just call them what they are -- functions? After all, they are not even created or stored dynamically, as ordinary objects are. They cannot be used like other pointers (for example, accessing an array of function pointers via a pointer to a function? Surely you jest.) Still we talk about pointers to functions -- but not pointers to objects, although objects are implemented as pointers.

A comment about coding style:
 public boolean isOdd(int n) {
     return n % 2 != 0;
 }
Also I personally would replace return n % 2 != 0 with return (n & 1) != 0 so that you can be reasonably sure you'll get the best performance possible. Integer modulus can be very slow. Several orders of magnitude slower than binary and.
     The remark about using (n & 1) != 0 instead is flawed in my opinion. Of
 course in a simple code snippet it can be clear, but these kinds of things
 are likely premature optimization and may lead to highly obfuscated code.
 IMO better code would read "return n.isDivisibleBy(2);" using an OOish
 syntax, or even "return 2.divides(n);". If we have a language with different
IMHO, the function's name says it all. While efficiency should not be the utmost concern in programming, neither should readability. Both concerns should show where they matter - namely, interfaces should be as readable as possible and the implementation efficient. Like it was in Sean's example. And everybody's happy.

Anyway, that's why we like abstraction so much, right? :-)

A.
Jan 20 2003
prev sibling parent Ilya Minkov <midiclub 8ung.at> writes:
Sean L. Palmer wrote:
 "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> wrote in message
 news:b0apv1$s9u$1 digitaldaemon.com...
 
 
    With HOFs we can do this. Deimos Template Library
http://www.minddrome.com/d/deimos/deimos-0.0.1.zip provides all, any and
none quantifiers (I think they're in this release) for arrays as template
functions, so you just write:

public boolean isOdd(int n) {
    return n % 2 != 0;
}
instance TArrays(int) arrays;
const int[] numbers = ...;
if (arrays.all(numbers, &isOdd)) { ...}
if (arrays.any(numbers, &isOdd)) { ...}
The function call to isOdd needs to be eliminated. That's why closures and anonymous functions are so important, so stuff like this can be inlined. The basic idea is to pass *the function*, or a reference to it, not a pointer to the function, as a parameter. Taking a pointer guarantees it must live at a memory address which is not what you want. Also I personally would replace return n % 2 != 0 with return (n & 1) != 0 so that you can be
*Any* compiler is smart enough to do that. Even the Tiny C Compiler coming from the obfuscated C contest!!!
 reasonably sure you'll get the best performance possible.  Integer modulus
 can be very slow.  Several orders of magnitude slower than binary and.
 
 Sean
 
 
Jan 20 2003
prev sibling parent "Daniel Yokomiso" <daniel_yokomiso yahoo.com.br> writes:
Hi,

    Comments embedded.

"Scott Pigman" <scottpig1 attbi.com> escreveu na mensagem
news:b088tj$2dr9$1 digitaldaemon.com...
 Hi There,
 I've just found out about D (thanks to slashdot), and i see a lot of stuff
 about it i really like -- several things that when i've thought to myself
 "if i ever made a programming language, i'd do _______" are in there.  It
 looks very promising to me.  anyway, if i may, i'd like to fire off a
 couple questions:

 1) where is D at in regards to specifing the features that it includes? is
 it still pretty much a work in progress, or has it moved onto the "okay,
 that's enough discussion, now lets get it working" point, which is my
 impression from skimming the specs?
The dmd compiler is in alpha stage, version 0.50, so there are lots of things that may change. Others are already settled, and some are trying to survive the trials of time. In particular:

- Templates, unified function types and anonymous functions are things that may get in or change in the next releases. Also, reflection is a no man's land currently.
- Interfaces and operator overloading are being discussed, but are already quite stable in the language. Something may change if intelligent criticism is posted.
- Structs vs. classes on the stack, in/out/inout modifiers, and array semantics are pretty much established. Some people still argue about them, but I think Walter wants things the way they are.
 2) if it was still in the forming stages (or if i was making a language)
 the following are some things i'd be tempted to include.  now, my
 assumption is that these have already been discussed to death, since
 they're mainly things already in other languages, and probably some of the
 long time readers here would rather chuck their computer throught the
 window instead of read another thread on it, so please accept my apologies
 in advance.  I did skim the headers looking for relevent discussions, but
 as there are nearly 10,000 legacy messages here, it's a hell of a lot
 easier for me to just write this and hope someone out there doesn't mind
 responding.  basically, what i'm wondering is: if it was already
 discussed, what was the consensus regarding it, or is it already in the
 language and i overlooked it, or is it so obviously a bad idea that it
 wasn't even worthy of discussion.  anyway, in no particular order, here
 they are:
This is a newsgroup; we always get into recurring discussions ;-)
 unless/until syntatic sugarcoating, where "unless(false){}" ==
 "if(!false){}"  and "until(true)" == "while(!true)".  okay, these add no
 functionality, but i think that it would create easier to follow code,
 seeing how i get confused easily by logical negations >;-)
I agree they're nice, but proliferation of control structures is a bad thing, IMO. Generalized control structures, OTOH, are the way to go, but I think D is more conservative and follows the path of all the other C derivatives.
 functions as first class objects.  now, i really have no idea what this
 means in terms of implementation or performance, but lisp has had it since
 day one, so i'd suspect it can't be too bad.  (of course, i don't know of
 many other languages that have it, so maybe it ain't so easy after all).
 but personally, being able to store functions in data structures, pass
 them (easily) as arguments to functions or creating fuctions which are
 themselves capable of creating and returning a functions just sounds very
 tantalizing to me.  having this would go hand in hand with having
 something equivalent to lisp/python lamda statements - i.e. anonymous,
 unnammed functions.
D has delegates (pointers to object methods) and plain function pointers, but the syntax for each one is different. Also, there are no anonymous functions with closure semantics, so most forms of higher-order programming are difficult and error-prone.
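A minimal sketch of the two flavours, in current D declaration syntax (the exact spellings shown are an assumption; details have shifted since this was written):

 int twice(int x) { return 2 * x; }

 class Counter {
     int n;
     int bump(int by) { n += by; return n; }
 }

 void main() {
     int function(int) fp = &twice;   // plain function pointer
     auto c = new Counter;
     int delegate(int) dg = &c.bump;  // delegate: function pointer plus `this`
     assert(fp(3) == 6);
     assert(dg(5) == 5);
     assert(dg(5) == 10);
 }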
 multiple return values from a function.  personally, i've never been very
 keen on the C/C++ idea of having parameters that are actually return
 values.  i like the python model here: "a,b,c = foo(x,y,z)".  again, i
 think it's a lot easier to follow code that has all the inputs in one
 place and the outputs in another.  i guess this wouldn't be compatibly
 with the IDL like the current in/out/inout specifiers, but i'd say that
 it's a lot more often that you'd write a function w/ multiple return
 values than write something that needs to be compatible w/ IDL.  and no, i
 wouldn't suggest getting rid of the in/out/inout specifiers, i'd just
 rather not use them myself all that often.
The problem with multiple assignments is that usually they're linked with a tuple construction/deconstruction pair of operations. Python uses multiple assignment as a particular form of tuple deconstruction. And once we get tuples in the language we're almost a functional language, and that is a bad thing... Just kidding ;-)
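That construction/deconstruction pair looks roughly like this with a library tuple (std.typecons is today's library name and an assumption here; nothing of the sort existed when this was posted):

 import std.typecons : Tuple, tuple;

 Tuple!(int, int) divmod(int a, int b) {
     return tuple(a / b, a % b);        // construction bundles both results
 }

 void main() {
     auto r = divmod(7, 2);
     assert(r[0] == 3 && r[1] == 1);    // deconstruction is explicit indexing
 }

Python's a, b = divmod(x, y) is the same idea, with the deconstruction step built into the assignment syntax.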
 java style online API/user libary documentation -- for my money, one of
 the best things about working with Java.  i'd hope that D would have a
 similar resource (and am puzzeled why c and c++ don't have something
 similar).
There's some talk about embedding D source code in HTML, a la literate programming (http://www.literateprogramming.com/). AFAIK it would be easy to add javadoc-like documentation to D, because D is easy to parse, but nobody has written a tool for it yet. Any volunteers?
 lisp like closures.  now, of all the things here which i've heard about
 and like the sound of, but haven't actually used in practice, this one is
 the one i probably understand the least. but what i think i understand,
 appeals to me.  basically, my understanding is that this amounts to
 functions with a shared state - or perhaps they're anonymous, unnammed
 classes. e.g.  closure{ double total
 add(double x){total += x; return total}
 sub(double x){total += x; return total}
 }
 so, add and sub both act on the same variable total, but total is
 protected from having any other function modify it.  maybe not quite a
 global variable, but a national variable.  i dunno, sounds good to me
 though.  and if that isn't a lisp closure, well, then i like whatever it
 is i just described.
See above.
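For the add/sub-sharing-total idea specifically, a delegate-based sketch is below. It assumes the escaping-closure semantics of current D, which this thread's D does not guarantee, so treat it as an illustration of the idea rather than working practice:

 struct Acc {
     double delegate(double) add;
     double delegate(double) sub;
 }

 Acc makeAccumulator() {
     double total = 0;  // shared state, invisible outside the two delegates
     Acc a;
     a.add = (double x) { return total += x; };
     a.sub = (double x) { return total -= x; };
     return a;
 }

 void main() {
     auto acc = makeAccumulator();
     assert(acc.add(2.5) == 2.5);
     assert(acc.sub(1.0) == 1.5);
 }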
 a foreach or forall statement for iterating over elements of a collection:
 "foreach person in newsgroup{.....}" where newsgroup is a collection/array
 of persons, and whatever's between the {}'s is code that is executed on
 each person.  to me this just seems a lot more intuitive to work with than
 "for(x = 0; x < Y; x++)..."

 the obvious name of the following unfortunately clashes with that of the
 previous, but having just wrapped up a course in program correctness i'm
 thinking i'd like to see boolean "forall" and "exists" expressions for use
 in contracts to verify the program correctness.  "forall x in X(cond)"
 would be true if all the x's in X satisfy the cond, and "exists x in
 X(cond)" would be true in any one x in X satisfied the cond.
A for-each like statement will be in the language sometime soon, IIRC. Forall and exists quantifications are available (currently just for arrays) in the Deimos Template Library (www.minddrome.com/d/deimos/deimos-0.0.1.zip). As there is no standard way to iterate over collections (neither an iterator template nor iterator constructs), there's no way of adding a foreach statement to every possible collection type. If you have some ideas post them, perhaps you'll solve this problem.
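One possible shape for such a hook: the collection defines a single iteration method and foreach dispatches to it. The opApply name and the foreach lowering sketched below are assumptions, not something in the current spec:

 struct Newsgroup {
     string[] people;

     // foreach over a Newsgroup is routed through this hook
     int opApply(int delegate(ref string) dg) {
         foreach (ref p; people)
             if (auto r = dg(p))
                 return r;
         return 0;
     }
 }

 void main() {
     auto ng = Newsgroup(["alice", "bob"]);
     string names;
     foreach (person; ng)
         names ~= person ~ " ";
     assert(names == "alice bob ");
 }

The nice property is that the collection author writes the traversal once, and every foreach over that type reuses it.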
 well, i think that's more than enough for now.  thanks for reading to
 anybody who actually got this far.  take care & good night.

 -scott
The best way to change something in D is proving that the current solution is inefficient, awkward or unsafe, but yours isn't. If you do something in D (libraries, tools, etc.) you'll get a feeling for the way things are in D, and perhaps change your opinions, or become convinced that you are right indeed :-)

Best regards,
Daniel Yokomiso.

"God is real, unless declared integer."
Jan 17 2003